How to perform post-launch user interviews in three easy steps

How can you assess whether the original goals and intentions of the solution have been achieved? Talk to your users. In this article, we outline how to perform user interviews as a post-launch qualitative analysis.

(Note: This is a follow-up post to Qualitative Analysis: Are you reaching your goals?)

In this example, we are going to perform user interviews one year after the launch of a collaborative solution for an organization with 500 employees.

Step 1: Planning and preparation

When planning qualitative analysis, there are some key activities to perform. First of all, we need to figure out what we want to find out.

This could be:

  • Are we on track to reach our goals?
  • Are the goals still relevant? Do we need new goals?
  • What are the current pain points for the customer? Are they experiencing deviations from the original plan which should be addressed?
  • Which users are most interesting to us for this activity?

Agreeing with the customer on what you want to achieve with the qualitative analysis is very important – if you skip this step, it will be difficult to justify the results afterwards.

We now need to assess the situation and find out what kind of data we already have.

Typical sources are:

  • the original project plan, project documents and specification
  • any documents outlining goals and KPIs
  • current statistics, if any
  • results from earlier research, like user surveys

Plan for conversations, not interviews
While we call the activity ”user interviews”, we tend to plan conversations with users rather than plain interviews. This is basically the same as if you were interviewing someone for an editorial article – you have a list of questions and topics, but you don’t necessarily follow them in a strict order. You start a conversation, and then try to touch on the various topics when they become relevant, or when the conversation dies out.

In order to perform a good qualitative analysis, we need to know what questions to ask so that we get answers about relevant issues – but not so that we basically get users to answer our own questions for us. Leading questions (for example, ”do you think it takes too much time to locate the expense report?”) can render the data useless, because the answer will tend to be ”yes” – and that answer could probably have been obtained through statistics alone.

A more relevant line of inquiry is to discuss what sort of tasks users perform, and then have them explain how they do them today – and how they would like to do them. So a user who needs the expense report mentioned above would walk us through the situation and give an opinionated view on whether it works or not.

This is also why careful user selection is important. You need to talk to relevant users who actually use the solution to solve the tasks you want to address.

Together with the customer, we therefore come up with a list of ”interesting topics and problems” and then generate questions from this.

For example:

Topic/problem:

”Employees don’t use the collaboration areas enough”

Questions:

  • When do you use the collaboration area?
  • Can you walk me through a typical example of how you use the collaboration area?
  • Do your colleagues use the collaboration area in the same way as you?
  • What do you do to make your users collaborate with you?
  • Was the training relevant for you? What would you like to learn more about?

It is often necessary to be extremely clear about what you want the user to talk about. If their answers are very general, have example questions ready: ”Have you uploaded a document and shared it with a colleague?” This can help the user understand what you are trying to discuss. However, such questions can also limit the user to talking only about the cues you provide, so the idea for the entire conversation is to guide rather than lead.

User selection
We always ask the customer to do this – they know their users much better than we do. The analysis can have several aspects which go beyond testing alone: customers may want to use the analysis to figure out what stakeholders think of the solution one year after launch. Are they satisfied? Do they want to see changes? Do they feel that the goals have been achieved?

Typical candidates for interviews depend on what the goals are – here are some examples:

  • managers and directors who are responsible for the various parts of the solution
  • end users who perform relevant tasks in their jobs
  • middle managers who manage teams of employees with relevant job functions
  • trendsetters whose behaviour and opinions impact the organization

In principle, it would be ideal to do a ”random sampling”, which basically means picking the number of people you want to interview from the entire organization through a lottery. But in practice this is not beneficial for what we are trying to achieve: we need to talk to actual users who have actual experience with the solution in question. So in our experience, it is much more interesting to talk to 5 people selected for relevance than to 5 people selected at random.

How many users should we interview? There is no theoretical limit, only a practical one. However, we have found that interviewing 1-2% of the organization suffices. So for an organization of 500 people, it can actually be enough to talk to about 5-10 people in order to get useful data.
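As a back-of-the-envelope check, the rule of thumb translates directly into code. A minimal sketch in Python (the 1-2% range is our heuristic, not a statistical guarantee):

```python
def interview_sample_size(org_size: int, low: float = 0.01, high: float = 0.02) -> range:
    """Rule-of-thumb candidate count: 1-2% of the organization."""
    return range(round(org_size * low), round(org_size * high) + 1)

print(list(interview_sample_size(500)))  # [5, 6, 7, 8, 9, 10] -> 5-10 interviews
```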

For each interview, we usually estimate the workload in this way:

  • Preparation: 15 minutes
  • Conversation time: 45 minutes
  • Documentation: 2 hours
  • Initial analysis: 1 hour
  • Total per interview: 4 hours

In addition comes the overall analysis when all the data has been collated, plus report writing, PowerPoint presentations and other reporting work which is done after all the interviews are performed. So the budget balloons pretty quickly with more interviews.
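To see how quickly, here is a minimal sketch of the budget arithmetic (the per-interview figure comes from the estimate above; the fixed reporting overhead is a hypothetical placeholder you would estimate per project):

```python
PER_INTERVIEW_HOURS = 4.0        # from the per-interview estimate above
REPORTING_OVERHEAD_HOURS = 16.0  # hypothetical fixed cost: collation, report, slides

def total_effort_hours(num_interviews: int) -> float:
    """Per-interview work plus the fixed overhead of collation and reporting."""
    return num_interviews * PER_INTERVIEW_HOURS + REPORTING_OVERHEAD_HOURS

for n in (5, 10, 15):
    print(f"{n} interviews: {total_effort_hours(n):.0f} hours")
# 5 interviews: 36 hours, 10 interviews: 56 hours, 15 interviews: 76 hours
```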

Who should do the interviews? Usually the interviews are performed by business analysts who understand the solution. It can be a plus if the analyst was part of the delivery team that analysed the requirements prior to the implementation project. We do not often see customers doing the interviews themselves: it is time consuming – and it can be beneficial for the analysis if the interviewer is an outsider, as this helps people relax and makes them less afraid of saying the ”wrong” things. We also recommend that the same person do all the interviews; it makes the results more coherent and easier to collate in the end. That said, if the candidate group is larger than 10 people, it may be more practical to split it between several interviewers to save time.

The actual invitations to candidates are sent by e-mail from an internal person whom the candidates trust. It is a good idea to set up fixed time slots and allocate them based on availability. The e-mail should briefly explain what the meeting is about, state that candidates do not need to prepare anything, and ask that they not bring their PC.
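If you go with fixed time slots, generating them is straightforward. A small sketch, assuming 45-minute conversations with a break in between (the dates, times and 30-minute gap are illustrative):

```python
from datetime import datetime, timedelta

def interview_slots(first_start: datetime, count: int,
                    duration_min: int = 45, gap_min: int = 30):
    """Generate fixed interview slots with a break between conversations."""
    slots = []
    start = first_start
    for _ in range(count):
        slots.append((start, start + timedelta(minutes=duration_min)))
        start += timedelta(minutes=duration_min + gap_min)
    return slots

# Three slots on an illustrative morning: 09:00-09:45, 10:15-11:00, 11:30-12:15
for begin, end in interview_slots(datetime(2016, 9, 1, 9, 0), count=3):
    print(begin.strftime("%H:%M"), "-", end.strftime("%H:%M"))
```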

Step 2: Gathering data

Each interview is set up as a meeting in a dedicated meeting room. We usually run 2-3 interviews in succession on one day; more than that can be very taxing on the interviewer. It is more useful to spend half the day doing interviews and the rest of the day transcribing them.

We perform the interviews in a one-on-one setting. The interviewer must be present at least 15-30 minutes before the interview to ensure that the room is ready, empty of distractions, and to prepare mentally.

The interviewer should record all conversations with a basic audio recorder – a Zoom H1, for example. A smartphone can be used, but ensure that it is set to airplane mode so that calls and notifications do not interrupt the interview.

The interviewer welcomes the candidate, then starts the interview with a brief personal introduction and an explanation of how the conversation will be conducted. Explain that the conversation is recorded, and start the recorder before the candidate is asked to give their own introduction – basically their name and what they work with.

We like to keep a printed copy of the conversation guide in front of us, and make little checkmarks for each topic covered. Interesting points can be noted for follow-up questions or to ensure we cover unexpected topics if there is time.

The interview should last no longer than 45 minutes. This is for several reasons: it gives you plenty of time to cover just about everything in your conversation guide, and it is also a practical limit to how long the conversation will stay meaningful.

Transcribing the interviews
45 minutes of recorded conversation also provides more than enough data to transcribe – it typically takes about two hours to transcribe each interview. It is hard and sometimes boring work, but it is through the transcriptions that we usually uncover the most interesting findings. When you listen intently in order to write something down, you become very familiar with the content and the person you are listening to.

It is often necessary to edit the transcription for readability. A conversation usually has questions that are asked several times for clarity, or one-word responses which lead to rephrasing of questions. But apart from this, leave the transcript as-is. It is a part of the documentation, not the analysis.

Step 3: Analysis and presentation

Qualitative analysis is about identifying answers to a given problem through research into things which cannot be measured in a practical way. Thus, the analysis phase of the work needs a focus: what are we looking for? Have we uncovered insight into what the problem is, and do we have ideas for possible solutions?

When you have transcribed all the interviews, you will have upwards of 10 pages per interview. For 5 interviews, that is 50+ pages of more or less random paths through interesting topics. We propose that the best way to find interesting data in this situation is to use the topic guide in reverse: use a note-taking application to create an entry for each topic, then scan through each transcription and copy/paste interesting quotes into the matching topic.

You will inevitably encounter findings which are not directly related to any of the questions. These may be very interesting, so keep them in a ”general” topic and see if any of them can be used in your report. Often there are gold nuggets to be found, like comments on lack of training or technical problems you did not anticipate.
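Any note-taking application works for this; structurally, the collation is just a mapping from topics to quotes. A minimal sketch of that structure in Python (topic names, interviewees and quotes are invented for illustration):

```python
from collections import defaultdict

# Topics come straight from the conversation guide, plus a catch-all bucket
topics = ["Collaboration areas", "Training", "General"]

quotes_by_topic: dict[str, list[str]] = defaultdict(list)

def file_quote(topic: str, interviewee: str, quote: str) -> None:
    """Attach a quote to a topic, keeping track of who said it."""
    quotes_by_topic[topic].append(f'{interviewee}: "{quote}"')

# Illustrative entries gathered while scanning the transcripts
file_quote("Training", "Interviewee 3", "The training never covered sharing documents.")
file_quote("General", "Interviewee 1", "Search is the main reason I avoid the intranet.")

for topic in topics:
    print(topic)
    for quote in quotes_by_topic[topic]:
        print(" -", quote)
```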

We then create a PowerPoint presentation which lists each topic and relevant quotes/extracts along with other interesting findings. An example of a general finding can be that ”nobody thought training was adequate”, even if it was not a part of the original questioning. A typical finding is that an important goal has not been met, like ”all candidates found mobility to be seriously lacking in the solution”. We also often find that what we thought was a problem is not viewed as a problem by any of the users.

It is important to recognize that when we sum up data from a qualitative analysis, we are taking everything out of context – in this case, removing each piece of data from its narrative situation and placing it next to other, similar pieces of data. This creates a false sense of consensus, since each statement may come from very different contexts. It is the job of the analyst to ensure that statements are used in relevant settings and that the extractions are reliable.

Therefore it is important that the interview transcripts are as true to the original conversation as possible, so that the customer can recreate the situation in which each statement was uttered.

We like to wrap up each interview by asking the candidates to evaluate 3-5 goals which were important for the project. They score them from 1-5 based on how well they feel that each goal has been reached. This provides thoughtful feedback to the project owners in a very visual manner, something which qualitative analysis often does not do.
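Summing up those scores is simple arithmetic. A minimal sketch of how the averages could be tallied and given a visual form (goal names and scores are invented):

```python
# Scores per goal, one entry per interviewee, on the 1-5 scale
goal_scores = {
    "Faster document collaboration": [4, 3, 5, 4, 3],
    "Less internal e-mail": [2, 3, 2, 3, 2],
    "Better mobile access": [1, 2, 1, 1, 2],
}

for goal, scores in goal_scores.items():
    avg = sum(scores) / len(scores)
    # Print the average plus a crude text bar for a quick visual comparison
    print(f"{goal}: {avg:.1f}/5 {'#' * round(avg)}")
```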

The handover
At the end of the analysis period, we meet the customer for a formal handover. Here we walk through the PowerPoint presentation and discuss the findings with them. We do this in ”edit” mode so that we can make corrections and fix errors on the fly if necessary.

We then hand over all the transcripts, the audio files (in compressed MP3 format), and the PowerPoint – and the job is done. The customer takes over the process from here. The PowerPoint is ideal for internal presentations and discussions, and makes it easy for internal stakeholders to focus on the same topics.

The practical result of a qualitative analysis is usually a planning process which leads to changes in the solution based on findings. It can also lead to changes in training, content work, design and information architecture – and often it leads to a change in how the solution governance is performed in order to focus more on goal achievement. We also often follow up one analysis activity with another, for example to test findings from interviews through user testing or group interviews.

In conclusion, the proposed procedure is time consuming and thorough – but in our experience it provides relevant insight into how well a solution complies with the original ideas we had for it. It also provides plenty of food for thought, as well as ideas for simplification and suggestions for further development.


About the Author:

Tormod is Chief Strategy Officer at Puzzlepart. He has 20+ years' experience from working with communication, collaboration and analysis.