Whether you like it or not, sooner or later you are going to have to talk to your prospective customers. No matter what your idea is, if you don’t validate it somehow you will simply be relying on blind luck for success. Given that 11 out of 12 start-ups fail, relying on luck doesn’t seem like such a smart choice.
Before you start speaking to customers make sure you have a plan and are crystal clear about what you are trying to achieve. Spend some time recruiting appropriate participants. Analyse the results with your team to share information and extract insight faster.
Research proves nothing, but only your customers have the answers you need.
But I don’t want to clean my room!
There may be reasons why you don’t want to talk to customers. Here are some that come up regularly:
- We don’t have time - Really? It’s far more likely that proceeding without validation is going to result in time wasted. Even a couple of weeks of talking to ‘real’ people is better than forging ahead shrouded in an impenetrable mist of uncertainty.
- We can’t afford it - Once again, really? You’re far more likely to waste what little money you have with an unvalidated product/service than you will spend getting that validation.
- We don’t have the skills/tools - Since you’re starting a business presumably you have a laptop you own or can borrow, and access to the internet? If not, you have far bigger problems than not having the skills/tools! If so, you’ve got pretty much everything you need except a plan, which this post will help you formulate. There are plenty of free tools you can use to actually conduct the research, like Google Docs, Hangouts or Zoom. Please note that I am not endorsing any of these tools, just pointing out that free tools exist. Use whichever is best for you.
- We don’t know which methodology is best - Indecision is crippling. As an entrepreneur you can’t afford to be blocked by it. There is no such thing as the ‘right’ methodology, so stop looking for it. What you use depends upon what you want to find out. If you are interested in qualitative issues (e.g. what is the problem and why is it a problem?) use a qualitative method. If you’re more interested in quantitative issues (e.g. how big is the problem?) then use a quantitative method.
- We can work it out in Beta - A Beta will tell you a lot about what you have already built. The question is, what do you do if Beta tells you that you built the wrong thing? It can be very useful to know at the start how well what you are building fits with what customers need. It all depends what you want. Do you want to invest time and money now and try to set out in roughly the right direction? Or do you want to wait until you’ve built something, only to find out that it’s not the right thing? Can you afford to do it all again?
- We don’t want to change direction now - Talking to potential customers may well change the direction and scope of your start-up. That’s the point. If you don’t want to know that you’re heading in the wrong direction because it’s inconvenient then you’re probably in the wrong job!
- It stops us from innovating - It could be argued that the practical difference between innovation and invention is relevance to the real world. What’s the point of an invention that no-one wants to use? Nothing can reasonably be called an innovation if it is of no use to anyone.
- We already know the answer - Maybe, maybe not. How recent was your research? There is a saying that ‘familiarity breeds contempt’. Can you be sure that your familiarity with the problem hasn’t resulted in any blind spots? Who is ‘we’? How can it hurt to confirm what you think you know?
It’s sometimes useful to ask yourself what the cost could be if, 6 months from now, you discover one or more of the following issues:
- You’re solving the wrong problem
- You didn’t have the Unique Selling Point you thought you had
- You failed to spot a competitive advantage before your competition
- You missed a key aspect of your users’ environments
- You didn’t properly understand what was important to your users
- You built something that interested you but which doesn’t really matter to your potential customers
- Your product/service can be misused in ways you didn’t foresee
If the potential damage to your business from any of these issues is high, you probably need to do some validation now.
A plan so cunning you could stick a tail on it and call it a weasel!
Hopefully, you have decided by now that it is worth your while to speak to real people in order to validate your business idea. Be aware, however, that:
Research proves nothing
Think of customer research as the foundation of evidence-based design. After all, if your design isn’t based on evidence, what is it based on?
As with every other aspect of your business, things will probably run much more smoothly if you have a plan (especially a cunning plan, as Blackadder will confirm). The following steps are suggested for approaching customer research.
Be clear about the question(s) you are asking
This refers to the business question that your research is intended to answer, rather than the interview questions you ask your customers (although the same clarity helps there too). As with most things relating to start-ups, the clearer you are about what you are aiming to achieve, the more likely you are to achieve it. Without a clear focus, customer research is a waste of time and money.
Remember that you are undertaking research in order to make evidence-based decisions. A good research question is therefore specific, testable and measurable. Remember also that we’re talking about the research question here (i.e. the reason we are doing this), not the interview question(s). It must be possible to:
- Answer the research question using the methods/techniques you decide upon.
- Answer the research question with at least some degree of confidence that allows you to base subsequent decisions on what you have learned.
You want a result from this research, so try to avoid open-ended descriptions when forming your research questions. For example, verbs such as ‘describe’ or ‘identify’ are better than ‘understand’ or ‘explore’.
Be realistic about what you can achieve
In the same way that you should be clear about the question(s) you are researching, so you should be clear about how you will research them. Understand and specify the questions you will be asking, the methodology you will be using and the aspects of the business that will be affected by the result. Be clear about what you expect from your research in order to avoid disappointment later.
Be clear about exactly what you are researching. If you want to understand the true needs/priorities of your users/customers, the context they operate in, how they behave and why, then you are conducting user research. If you want to understand how users interact with an existing or potential system or process then you’re conducting evaluative research.
You will need to use different techniques for different types of research. For example, user research can be done simply by talking to people, but evaluative research typically requires some means of observing users while they perform a task.
Make like a Scout and ‘Be Prepared’
As with so much in life, better preparation leads to better outcomes. Get your materials ready beforehand. Test them. Make sure you have a fallback in case something goes wrong (it almost certainly will go wrong at some point) so you feel confident.
Sketch out an initial test plan, describing how much money will be spent and who will be involved. Clearly define your problem statement. Decide who your subjects will be and how you will obtain them. Be prepared to change or adapt your plans in the light of what you discover as you go along. Maybe you’ve assigned an hour for a test but you find you can get all the useful information you need in just a few minutes.
Allow contingency for things beyond your control. It may take longer than you think to recruit test subjects, or scheduling tests may become problematic. It’ll be easier to manage if you think about it now, before you get started.
Allow time to analyse your results
Sifting the wheat from the chaff in your responses is best done slowly and carefully. Try to avoid jumping to conclusions, which may be more destructive than doing no research at all. You are looking for meaningful patterns which you can turn into observations. From these, you can make recommendations.
If you have a team, get everyone involved who can contribute and will benefit from the analysis session. Insights tend to come faster with more people involved (although beware of getting too many people involved - this tends to slow things down).
I’ve described some of the tools you can use to facilitate these sessions. Make sure you are clear what the research question was. You are looking for patterns that answer the original question you posed. In addition you are looking for feedback relating to the following:
- Goals - What the participant is trying to achieve.
- Jobs/tasks - How they go about achieving it. The more detail you can provide, the more value you can extract.
- Priorities - What matters most to them in this context. These are useful when coming up with ideas for gains your solution can provide.
- Habits - What the participant does on a regular basis. One of the best ways to predict future behaviour is to examine past behaviour.
- Pains/barriers - What makes life difficult for the participant when completing this task.
- Tools - The objects (real and virtual) the participant interacts with to get the job done.
- Environment - The context the participant operates in. How it affects their perception and decisions.
- Relationships - The individuals the participant interacts with.
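As a rough illustration of what an analysis session produces, the categories above can be treated as tags on individual observations. The notes, tags and threshold below are entirely hypothetical; the point is simply that anything mentioned repeatedly is a candidate pattern worth turning into a recommendation.

```python
from collections import Counter

# Hypothetical interview notes: each observation tagged with one of the
# categories above. In practice these come from your own session notes.
observations = [
    ("pain", "Exporting reports takes too many clicks"),
    ("goal", "Wants a weekly summary for the team"),
    ("pain", "Exporting reports takes too many clicks"),
    ("habit", "Checks the dashboard every morning"),
    ("pain", "Can't share results with non-users"),
    ("tool", "Currently copies data into a spreadsheet"),
]

# Count how often each category appears to see where the feedback clusters.
by_category = Counter(tag for tag, _ in observations)

# Anything raised more than once is a candidate pattern.
repeated = Counter(text for _, text in observations)
patterns = [text for text, count in repeated.items() if count > 1]

print(by_category.most_common())
print(patterns)
```

Even this crude tally makes it obvious where the pains cluster, which is exactly the kind of pattern-spotting the analysis session is for.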
Write it down
I’ve said it before and I’ll say it again: if it’s not written down, it didn’t happen. Not everyone in your team will be able to participate in the research process, so think about how you are going to communicate what you have found. Make sure that when you come back to it in 6 months’ time you understand what you wrote and why you made the decisions you did.
All I need now is someone to ask
You’re all prepared to start conducting interviews. The only thing remaining is to find someone to speak to. By now you should be clear on the question you want to know the answer to. The next job is to find, interest, filter and acquire those participants.
Hopefully it won’t surprise you to hear that poor participants will give you poor results. In order to find appropriate participants you first need to decide what a good participant might look like. The people most likely to be your ideal customers share the following characteristics:
- They have the problem you are trying to solve.
- They know that they have the problem you are trying to solve.
- They are already paying to solve it. That is to say, they are buying books on the subject, paying to attend courses, workshops and conferences and/or purchasing sub-optimal products/services that solve part of the problem.
Once you have decided what your ideal participant looks like you need to find them. I’ve written a post describing one way of doing that using a landing site. In addition to this, go anywhere that you might be able to post a message that could be seen by your target audience, or by friends of theirs who might pass it on. Twitter. Facebook. Reddit. Hacker News. Make it worth their while to talk to you by offering a £10 Amazon voucher, for instance.
I’m often asked whether surveys are a good way of conducting user validation interviews. In general, my answer to that is ‘no’ for reasons to be covered in a future post. Where surveys can be extremely useful is in screening potential participants.
Screening is the process of determining who it will be worth your while talking to and who it won’t. When writing an online survey to screen potential participants, think about your research question and the characteristics of your ideal customer. What behaviours are you looking for in participants?
If your interview and/or solution involves using a tool, what level of knowledge do your participants require of that tool? For example, to use a mobile app you need enough knowledge of how to use a mobile device that the practical aspects of running the test don’t get in the way. You want participants to focus on the app, not on using the mobile device.
How much do your participants need to know about the problem domain? If you’re designing for the general public, the level of domain-specific knowledge should be low. If you’re designing for a niche group, domain-specific knowledge should probably be higher.
Your screening survey should enable you to identify people who meet these criteria. I’ll be writing in the future about how to design good surveys.
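To make the screening idea concrete, here is a minimal sketch of scoring screener answers against the three characteristics of an ideal participant listed earlier (has the problem, knows they have it, is already paying to solve it). The question names, answer format and pass threshold are illustrative assumptions, not a standard.

```python
def screen(answers):
    """Return True if a respondent looks worth interviewing.

    `answers` is a dict of hypothetical screener responses.
    """
    score = 0
    # 1. They have the problem you are trying to solve.
    if answers.get("has_problem") == "yes":
        score += 1
    # 2. They know they have it: a proxy here is whether they can
    #    describe it in their own words at some length.
    if len(answers.get("describe_problem", "")) > 20:
        score += 1
    # 3. They are already paying to solve it (books, courses,
    #    partial solutions).
    if answers.get("spent_money") == "yes":
        score += 1
    # Interview anyone matching at least two of the three criteria.
    return score >= 2

candidate = {
    "has_problem": "yes",
    "describe_problem": "I waste hours every week reconciling invoices",
    "spent_money": "no",
}
print(screen(candidate))
```

The scoring logic matters far less than the discipline of deciding your pass criteria before the responses arrive, so you aren’t tempted to rationalise weak candidates into the interview pool.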
Given that you are (hopefully) offering some kind of reward for participation you should expect to get some potential time-wasters applying. In order to prevent potential participants ‘gaming’ your system just to get a free voucher you should probably be as vague as possible when advertising recruitment. Describe it as ‘design research’, or ‘an interview about our website’.
Now all you have to do is go and ask the questions!
There is no secret to validating your idea with customers. As with so much else, the key is clarity and preparation. Be clear about the question your research is intended to answer. Be clear about who your ideal target audience is. Be prepared for your workshops and embrace the unknown.
Most of all, try to enjoy it.