10 tips for effective usability testing

As part of the evaluation of my project pip-db, I have been conducting usability tests with volunteer biology students and a couple of other stragglers who I managed to coerce into helping. It has been an incredibly educational experience, so I recorded notes on how each test went and have compiled a list of tips based on my findings:

  1. Performed correctly, usability testing is the single most effective way of getting valuable feedback on whether your product is achieving its goals. It is impossible to overestimate the effect that bringing a fresh pair of eyes to your project will have on the way you evaluate it and make decisions in the future. As such, prepare to begin the testing process at the earliest possible stage, as soon as you have something that people can interact with.
  2. Don’t underestimate how hard it is to perform effective usability tests. Before the first round of testing, run through the list of tasks and questions with a friend in order to make sure that your tests make sense and are as effective as possible. Write yourself a script and deliver it consistently in each test.
  3. Choose your participants carefully. If your product is aimed at a very specific demographic, then make sure that the majority of your testers comprises people from that area of specialisation. Don’t reuse participants. Once someone has seen and used your product, the entire benefit of assessing their initial impressions is lost.
  4. Aim for the largest possible sample size. If you’re dealing with a product of non-trivial complexity, then 5 users is not enough.
  5. If intuitiveness is not your primary goal, then don’t be frustrated by users getting stuck on a task. Don’t panic and start pointing users in the right direction at the first sign of confusion. If a user begins to perform the wrong action, let them see how far they get before trying a different tack. You never know, they may find a successful workaround or even discover a new way of using your product which you had not anticipated.
  6. When constructing the scenarios list, use the tasks that your users want to do, not just the kind of tasks you want them to perform. Don’t just play to your product’s strengths. Before testing begins, make sure you have done your research into likely use cases.
  7. Don’t let participants feel like they are under scrutiny. Make sure that the participants understand that it is the product that is being assessed, not them.
  8. Don’t say anything when you don’t need to. This was perhaps the hardest lesson to learn. Normally I am only too eager to explain how a product works and what it does, but in usability testing, you are evaluating how well other people can figure this out on their own. Once you have given the participant the instructions on how to perform the tests, you shouldn’t have cause to speak until they are completed.
  9. Deliver the same test to each participant. The value of testing results drops dramatically if you change the test between each iteration.
  10. Record your tests. If you’re testing software, then aim for a minimum of screen capture (including the mouse pointer) and audio recording. On repeat viewings, you’ll be sure to find things that you missed the first time around.

I’d highly recommend that anyone who is directly involved in making software for users run some usability tests. I guarantee that you will learn a lot about your product from watching other people use it, and it will forever change the way you evaluate your projects. Hopefully you can learn from my mistakes and maximise the effectiveness of your own tests!

10 May 2014