Foliotek Developer Blog

Simple Usability Testing with trymyui

Any software developer who is concerned with making great software should be testing their interfaces on their users (or people similar to their users). It’s easy to convince yourself you’ve built a great UI if you only see people on the team using it. People on the team have intimate knowledge of how the system is supposed to work, and so they nearly always succeed.

The real test comes when you watch someone who has never seen your system before attempt to perform some vague tasks you have set out for them. These types of usability tests not only point out where your system fails to match what typical users expect – watching users struggle with stuff you wrote can also be a strong motivator to improve things. For that reason, it’s important to do usability tests as often as possible, and to let as many people watch as possible.

At Lanit, we’ve attempted to do usability tests a number of ways:

We’ve done full-scale tests that record mouse tracking and video in addition to audio. These take a ton of time to set up, to edit/summarize, and to share. And it doesn’t seem that the mouse tracking or video added much to what the user was saying (assuming the moderator prompted for their thoughts often). This time investment meant we did them at most once or twice a year.

We’ve tried doing simpler screencast and voice recordings at a local university. This was an improvement – but it still took time to schedule a room, and we still needed to bring the videos back to the office to share, or spend time summarizing/editing them. It also ate up half a day for two people to find participants and set up the tests. And it was a bit awkward to approach people and ask them to let you record them testing your software. Usually, we managed to do these every few months.

We’ve also tried bringing users in to our office – so that we could easily share the screen and voice live, and have an immediate discussion afterwards (similar to what Steve Krug suggests). This was an improvement, but it still required a time investment in finding willing users to come to us. We only tried this once – but the effort to advertise and recruit suggested we would still do at most one test a month.

Finally, we decided to give one of the new services that have recently popped up a try. There are many services that do some form of usability testing for you – but we narrowed it down to two finalists because they seemed to best emulate what we were doing live (the only real difference is that testers are screened/trained by the site to speak what they are thinking, instead of requiring a live moderator to prompt them). I chose trymyui mainly because I liked the results of my first (free) test, and it was slightly cheaper ($25/user/test instead of $39). All we had to do was provide the script of tasks testers were to accomplish (which took maybe 10 or 15 minutes to create) and request testers, and usually within an hour or two we had a great recording of the user’s screen and voice. I had one experience where a video didn’t arrive for about two days, but their support was very helpful and indicated that they pre-screen the results to ensure the testers tried the tasks (a bonus, as you are basically guaranteed to get a good video for your $25).

We were hoping that going the online service route would save us a bunch of time – and it did – but an unexpected benefit was that the short turnaround allowed us to do a kind of A/B testing. Before, when we did live tests, we’d come away with ten or so items to improve from across the 3-5 tests in a session, and we’d put them on a list to take care of some time in the future. Then we’d have to wait until the next session to see whether we had made a significant improvement and what the next set of problems was.

With trymyui, we could often develop a change to improve the experience and test that change the same day. This made building a great UI even more motivating – because you could often see the next user do better because of the last test. In the end, we made several improvements over a week that would have taken us months before. And it was so effortless to set up and so easy to see the benefits that I know we will continue to use this site to test our UIs on a regular basis.

Adapting a Development Process

One of the key reasons to choose a rapid development philosophy over a waterfall development philosophy is the ability to adapt to changing requirements. Once you decide to go with some form of rapid development – which should you choose? It seems like there are as many options as there are companies utilizing them.

There is no one correct answer. My advice is to pick and choose what works best for your development team, your product, your customers, and your company. In addition to adopting a development philosophy that allows rapid change in the requirements of your software – be prepared to rapidly change the process itself. Our process has changed dramatically over my years at Lanit due to new people, new priorities, new technologies, and new tools.

At Lanit, we are currently using a form of Agile software development. We have very few rigid processes that the design/development team must follow – instead we have a toolbox from which we grab the appropriate tool in any situation. Below I will give an overview of a lot of the things we do here. Hopefully some of them will be useful for you and your team.


Some agile shops will say that you should avoid all planning. I don’t think that scales well to complicated problems (and I don’t really believe they don’t plan – they just have shorter-term plans or personal plans). We plan at Lanit – we are just ready for our plans to change at any time. And we are careful to scope our plans appropriately – longer-term plans are more vague, shorter-term plans get more specific.

Vision statements

Vision statements are simple, vague, big-picture plans which are useful for guiding the smaller-picture plans later. Be careful not to get too detailed, so that it’s easy to adapt.

  • Company/Strategic planning – Lanit has a vague mission statement that pertains to everything we do, and helps guide our smaller plans. Basically, we want to write software that improves people’s lives in a meaningful way.
  • Product planning – Each product sets forth its own broad goals. For Foliotek, we want to make the accreditation process easy for schools by providing sensible, clean, and easy-to-use portfolio software. We focus on making the portfolio process as simple as possible for students and faculty, to ease the burden that adding a portfolio culture to a school can create.
  • Release planning – Often, we have a common goal (or a few goals) that helps us decide on feature sets for each release. Recently, we’ve had releases that focused on making the system more maintainable by adopting some newly available frameworks throughout a project. An upcoming release will focus on adding more interactivity to the system through more client scripting and Ajax.

Releases and Feature evaluation

For existing products at Lanit, we develop in 8-week cycles. If anything takes longer than that to get from idea to release, we run the same risks as the waterfall model – either we build something that is too late to market (the market changes between when we plan and when we release), or we waste a bunch of effort because we got the plan wrong to begin with and spent months developing the wrong thing. As with all rapid development philosophies, the point is to find out as soon as possible when you make a mistake and change course immediately. Even inside the 8-week cycle, you’ll see we let customers see and comment on complicated designs sooner.

For existing products, we keep feature wishlists that eventually evolve into planned feature sets for a particular release. We use FogBugz to store the wishlists, and items move to different projects (new ideas -> for development) and releases (undecided release -> Summer 01 2009) as we evaluate the lists.

  1. Keep an ongoing wishlist:
     • from customers (help them succeed with how they want to use the product)
     • from the business team (help sell the product to new customers)
     • from the support team (spot trouble spots, ease the support burden)
     • from the development team (more maintainable code, or newer and better technologies)
  2. Shortly before starting to develop a release (at most a week before, so that you have the best possible information), pull the most valuable ideas into a release wishlist. Usually, stakeholders from support/business/dev each make a ‘top ten’ type list, then combine them to create an initial release list. This is also a good time to completely eliminate ideas from the wishlist that are no longer valid.
  3. The dev team comes up with very rough estimates to develop the ideas.
  4. Dev, support, and marketing rank the wishlist based on a cost/benefit type analysis (usually in a meeting, which is also a good time to describe and document the needs of each feature better). Often, an idea is refined or simplified based on discussions in this meeting. We always try to build the simplest useful version of a feature possible, and only add complexity after people have tried the simple version and more is still needed.
  5. Narrow down the release to a reasonable list based on available time and the estimates.
  6. The dev team works on the list in order of priority. Everyone knows that the bottom items may drop into the next release based on changing circumstances and priorities. This also allows new items to be injected at the cost of items at the bottom, and allows more time to think about the expensive, less well-defined items that should be further down the list.
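The ranking and narrowing steps above amount to a greedy cost/benefit triage. As a rough illustration – the feature names, scores, and time budget below are all invented for the example, not taken from our actual wishlists – the idea can be sketched like this:

```javascript
// Hypothetical release wishlist; "benefit" and "cost" stand in for the
// rough value judgments and dev estimates gathered in the meeting.
const wishlist = [
  { name: "Bulk export", benefit: 8, cost: 5 },
  { name: "Inline help", benefit: 4, cost: 1 },
  { name: "New editor", benefit: 9, cost: 13 },
];

// Rank by benefit per unit of cost, highest first, so cheap wins float up.
const ranked = [...wishlist].sort(
  (a, b) => b.benefit / b.cost - a.benefit / a.cost
);

// Narrow the release to whatever fits the available time, in priority
// order; items that don't fit simply drop toward the next release.
function planRelease(features, budget) {
  const release = [];
  let spent = 0;
  for (const f of features) {
    if (spent + f.cost <= budget) {
      release.push(f.name);
      spent += f.cost;
    }
  }
  return release;
}

console.log(planRelease(ranked, 10)); // ["Inline help", "Bulk export"]
```

In practice the "scores" are a conversation, not numbers – but the shape of the process is the same: rank by value for the effort, then cut from the bottom when the budget runs out.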

Designing/Developing Features

The rest of the work is taking the requested feature to implementation. This process has the most variability – some features are small and easily understood, and a text description is enough to develop them. Some features are more detailed or important and require more elaborate designs. The most expensive features to implement should be discussed with customers at early stages to prevent wasted effort. So, we never mandate that each feature must go through these steps – the dev team is allowed to determine which tasks are appropriate for the item they are working on.

  • Feature descriptions – Pretty much every feature idea has at least a sentence in FogBugz describing it. Typically, links to current screens are included (for “change” type requests) to get everyone on the same page. Often, the descriptions are fleshed out during the release feature set prioritization meeting.
  • Paper sketches – If the feature has a small amount of sophistication, it is often useful for the developer to do a rough paper sketch for their own benefit. This could be a UI sketch, a db model, a flow diagram, etc.
  • Informal discussion – Sometimes, a brief chat about the feature is all that is necessary. Face-to-face conversations can be a double-edged sword – very powerful for the person who needs help, and very distracting for the other party. We use Yammer for these kinds of communications so that each person can decide their level of interruptibility (each user can choose to have an IM-like client open, get email notifications, get daily email digests, etc. – and can customize those options by subject). Many times, we still talk face to face – but we initiate conversations using Yammer instead of physically disrupting the other person.
  • Plain ‘ol Whiteboards (POWs) – Sometimes, features are too hard to describe. Other times, the business team only has a need (“this is too slow/complicated”) but doesn’t have a clue how it should be solved. In these cases, it’s useful to collaboratively sketch ideas at a whiteboard. POWs can even become real, permanent documentation. We use a few handy tools in combination to make this happen:
     • A digital camera
     • An Eye-Fi wireless SD card – gets pictures to us without the hassle of a card reader
     • EverNote – archives whiteboard photos. It allows easy retrieval through tags, and can even OCR/search some handwritten text in a picture. It integrates with Eye-Fi – so you get a new note with each picture without any hassle – and syncs across all popular computers and smartphones.
     • Whiteboard Photo – a software package that takes a photo of a whiteboard and cleans it up a ton; the picture ends up looking like it was sketched in Paint. It allows copy-paste – so you can click the photo in EverNote, Ctrl-C, click Whiteboard Photo, Ctrl-V, clean, and repeat in the opposite direction.
  • Comps – Sometimes, the detail needed is aesthetic. In those cases, someone is commissioned to create a more refined Photoshop or Fireworks comp (often based on a sketch).
  • Paper or digital sketch “prototypes” – Sometimes, the feature/UI itself is complicated. In these cases it’s useful to get feedback from inside the team and from customers before you write a bunch of code. Most of the time, you can get the information you need by walking the customer through sketches – either by flipping through a succession of paper sketches, or by building digital sketches in Fireworks, which can be linked together to allow clicking between screens. This is a good low-cost way to get something that feels a lot like a programmed prototype.
  • Coded prototypes/betas – When a feature is very interactive, or is highly data driven, sometimes you need something real to evaluate the design. In those cases we build out the feature as small as possible and release it to a carefully chosen set of customers (or ourselves) for “real” use – and we tweak it before we release it to everyone.

Testing and Maintenance

After the dev team believes it is done, the release is pushed to a testing area. The main contact for each new feature request is responsible for testing it to make sure that it works properly and fills the intended need. We usually spend about a week going back and forth with the support/sales teams until everyone is satisfied with the release. Then, it goes out to our customers.

We’re not perfect. Sometimes, bugs get out to the live environment. For the highest-priority issues, the support team can interrupt new development and get an immediate resolution.

Doing this for every trivial issue would severely hamper new development, so we limit these cases to a small number per year. For all other issues, we have a weekly patch schedule. Support reports problems (to another area in FogBugz), and we fix them throughout the week. On Mondays, we send all of the fixes out at once. To keep the developers sane, we rotate the developer responsible for writing fixes each week.

This schedule allows the dev team to stay focused on making the product better – but also allows the support team to be responsive about issues in the system. Customers are more accepting of problems when we can tell them when they will be fixed.

“Green Field” development

So far, I’ve focused on how we develop changes and additions for our existing products. Many of these techniques are also useful for developing brand-new products. Planning new projects can often be more complicated, though, and the features aren’t as well understood to begin with. Many more decisions need to be made.

  • Brainstorming sessions – Early on, the idea is very vague. The quickest way to narrow it down into a plan is to get people into a room and come up with ideas. Be sure to involve potential customers. We’ve been very successful at developing “advisory boards” of people who are in your market – and allowing them to help brainstorm and design the product. When they are done, not only does your product fit the market better – but you end up with a group of devoted customers right off the bat, since they feel some ownership of the product.
  • Multi-disciplinary team design sessions – IDEO developed a method where you take a problem and design separate solutions in several small groups of about three or four. Then, you come back, evaluate as a group, and combine the ideas into one solution. This can be very useful for developing a feature set for a new product. For best results, each team should have a tech person, a business person, a support person, etc.
  • User studies – The best way to get all of the little details right is to sit down with a real user and watch them try to use your new product. You don’t need expensive equipment – just sit down and watch, and take notes or record with a webcam. You don’t need a completely functioning system – you can (and should) walk users through paper sketches (“what would you click on here – ok, now you see this”) and later have them use sketch prototypes (“click through – ok, that would be added here when it’s a real system”). If the system is really interactive, build a simple HTML/JS prototype. You also don’t need scientific population samples – any 3-5 people you grab (your spouse, neighbors, friends…) will catch all of your really important usability problems.
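The linked sketch prototypes described above don’t require any special tooling. As a minimal sketch – the screen names and link labels here are invented for illustration – a click-through flow can be modeled as plain data in JavaScript, so you can walk a user through it (or sanity-check the navigation) before building any real UI:

```javascript
// Hypothetical screen map: each screen lists its clickable labels and
// the screen each label leads to, like linked Fireworks sketches.
const screens = {
  login:     { links: { "Sign in": "dashboard" } },
  dashboard: { links: { "New portfolio": "editor", "Sign out": "login" } },
  editor:    { links: { "Save": "dashboard" } },
};

let current = "login";

// Simulate the user clicking a labeled region on the current screen.
function click(label) {
  const next = screens[current].links[label];
  if (!next) throw new Error(`"${label}" is not clickable on ${current}`);
  current = next;
  return current;
}

console.log(click("Sign in"));      // "dashboard"
console.log(click("New portfolio")); // "editor"
```

The same map doubles as a checklist during a paper walkthrough: if a tester tries to click something that isn’t in the map, you’ve just found a gap in the design.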