How to Run a Successful Remote User Study

By now, we all know about the invaluable knowledge that can be gained from user testing. It is one of the most effective ways to improve your site or product: watching real users interact with the digital side of your business yields a wealth of feedback you couldn't otherwise obtain.

But for many organizations, the time and effort involved have long seemed out of reach. Thankfully, remote user testing has become an affordable and crucial tool that can now fit in every company's toolbox. And it's never been easier to get started.

How to Get Started

The first step to executing a successful user study is to clearly define its goals. What is it that you’re hoping to learn? What is the scope of the study, and what specific aspects of the product, app or site are you trying to improve? What qualifies as a success or a failure, as users begin to take on various tasks and provide feedback?

It may seem obvious, but establishing specific, actionable and attainable goals will enable all stakeholders to plan and execute more effectively, aid in measuring and interpreting the results, and help the company realize a larger ROI.

Decide whether to study a specific part of your site or product, such as a user's path to make a purchase, or several screens of a planned redesign. Typically, a 30-minute session contains several establishing questions, 6-8 tasks, and time for users to provide feedback, depending on the complexity of the tasks. It's possible to test almost anything, but begin by clearly defining the elements involved and what the organization wants to learn.


After defining the goals and elements to be studied, teams must decide on which platform to run the test. Two such platforms are UserTesting and Userlytics, among others. Compare each company's offerings and prices and seek referrals, or speak to colleagues who have completed similar user testing projects.

Once prepared, schedule a kickoff meeting with all stakeholders and the user-testing partners. Come armed with a plan, plenty of questions, and all of the needed test assets. It’s okay if some assets are not yet final, but be sure to include the time needed to prepare them in the overall testing plan. Also define all of the devices and/or screen sizes necessary for the study. You will likely need to create separate tests for each type of device or screen size, as the tasks tend to change - sometimes substantially - when the user’s view of the product changes, too.


After choosing the platform and the specific assets for the test, it's time to define the target persona(s). Personas represent an organization's target customers, and are meant to answer the question: "who is this product, app or site for?" Personas reveal a typical user's demographics, including age, location, and archetype. They also highlight a user's motivations (what drives them to do what they do, on and off the site), their goals, and their pain points. They can also include a short bio, which links back to the project by expressing the user's needs as they pertain to the project at hand.

The ideal persona(s) match the customer or user type that makes up the product, site or app's core audience, or a group that ideally would be that core audience (for unreleased products).

Most remote user testing companies allow you to filter based on:

  • Country/Region
  • Age
  • Gender
  • Profession
  • Education level
  • Technology profile
  • Income
  • Interests/hobbies

If the testing needs are very specific, for example a user who has experience with competing products or who works in a specific industry, advanced user testing platforms enable a screener survey to find the right kind of user. Potential participants answer a series of questions, and only those whose answers match the predefined "Approve" or "Accept" criteria are given the test.
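The screener idea above can be modeled as a simple filter: each question has a set of qualifying answers, and a candidate passes only if every answer matches. This is a minimal sketch under assumed question wording and answer options (real platforms configure this in their UI, not in code); the app and industry names are hypothetical placeholders.

```python
# Hypothetical screener: each question maps to the set of answers
# that qualify a participant (the "Approve"/"Accept" choices).
SCREENER = {
    "Which grocery delivery apps have you used in the past 6 months?": {
        "Instacart", "FreshDirect",
    },
    "What industry do you work in?": {"Retail", "E-commerce"},
}

def qualifies(responses: dict[str, str]) -> bool:
    """Return True only if every answer is in that question's accepted set."""
    return all(
        responses.get(question) in accepted
        for question, accepted in SCREENER.items()
    )

candidate = {
    "Which grocery delivery apps have you used in the past 6 months?": "Instacart",
    "What industry do you work in?": "Retail",
}
print(qualifies(candidate))  # True: both answers match the accepted sets
```

A candidate whose answer falls outside any accepted set is screened out, which is exactly how a single "wrong" screener answer disqualifies a participant on most platforms.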

And of course, some remote user testing providers allow you to supply your own participants.

Prepare & Launch Your Study

Once the preliminary work is complete, the next step to launching a user study is writing a test script. The script often includes an introduction to bring participants into the right frame of mind, a series of survey questions to better frame the discussion and understand your audience, as well as written cues aligned to the various tasks the user must perform, and questions they will answer as they explore and try out various functions of the product.

When preparing a script, try to put yourself in the place of the user. This person will likely be seeing the prototype for the first time, and they will be using it under sometimes less-than-ideal conditions. Think about the user's wants, needs, challenges and strengths. Some questions to ask include:

  • Are they web savvy, or less frequent web users?
  • Do they have experience with similar products, or are they new to this platform?
  • Are they in a rush, or using your product, site or app at a leisurely pace?
  • What is the most likely environment in which the user will interact with your product?
  • Are the questions and tasks short, clear and concise, or long, meandering and ambiguous?

As the tasks are planned and written, consider what the user will need to know or should not know in order to complete them. Be sure to provide enough information for the user to understand the task, but do not write instructions or questions that are leading or “give away” the answer. This will undermine the results.

If a task is too complex for a user to accomplish, try breaking it up into a few tasks. Or consider whether the actual interaction is itself too complex, and perhaps needs to be reworked before being tested.

Remember, the idea is to determine whether the product performs as expected and to uncover any pain points - not to test the users’ ability to use a computer or smartphone.

Putting It All Together

One key to a successful study is using the right tools in the testing toolbox. Single-choice, multiple-choice, rating and open-answer questions are good for surveys or for collecting feedback directly after a task. The System Usability Scale (SUS) and Net Promoter Score (NPS) can provide metrics to compare against previous iterations or competitors' products. And the Five Second Test can uncover the clarity of the design, how effectively it communicates the intended message, and users' first impressions.

You may also wish to leverage the branching logic that advanced user testing platforms offer, enabling you to show tasks and questions based on a participant's previous answers or actions.
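Conceptually, branching logic is just a mapping from an answer to the next task a participant sees. The sketch below models that idea with a hypothetical question and task names; real platforms configure branches visually rather than in code.

```python
# Hypothetical branch table: the participant's answer determines
# which task is shown next.
BRANCHES = {
    "Did you find the checkout button?": {
        "Yes": "Task 4: Complete a purchase",
        "No": "Task 4b: Describe where you expected the button to be",
    },
}

def next_task(question: str, answer: str) -> str:
    """Look up the next task for a given question/answer pair."""
    return BRANCHES[question][answer]

print(next_task("Did you find the checkout button?", "No"))
# Task 4b: Describe where you expected the button to be
```

Branching like this keeps participants who failed a step on a useful path (describing what went wrong) instead of forcing them through tasks they cannot complete.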

Consider using a combination of the above to get a well-rounded picture, and consult with UX testing experts to understand which specific methodologies and tools will best suit the study's needs.

Once the results are in, it’s time to analyze the data and collect the feedback. Take time to watch, listen and read the participants’ responses. Watching real users interact with the product will garner new insights, and in fact will likely uncover previously unseen issues.

Video-based user testing platforms enable a wealth of rich insights, but can be time-consuming to digest; consider leveraging automated transcription tools, time-stamped and searchable, to accelerate your identification and sharing of key insights.

Then, consolidate that data to draw testable conclusions, and share the findings with stakeholders to iterate on and improve the product.

Rinse and Repeat

A single study can provide invaluable feedback, but repeat testing is where the true strength of online user testing lies. Once you’ve made changes to the test assets and incorporated feedback into your product, test again. Repeat the same tasks to establish the project’s progress when compared to a baseline, as measured by things like task completion, time on task, user feedback, ratings, SUS or NPS. Also consider adding new tasks, if necessary, to test the success of changes to the design or user flow.

If there are two competing solutions, another option is to conduct A/B testing. This is most effective when the test focuses on a single variable, such as the position of a headline or the label of a button. But the test can also run participants through two different user flows to determine which is more effective. A/B testing can be extremely valuable in helping direct the design down the correct path.
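When the A/B comparison comes down to a single metric like task completion, a quick statistical check helps confirm the difference is real rather than noise. This is a minimal sketch using a standard two-proportion z-test with pure stdlib; the sample counts are hypothetical, and a stats library would serve equally well.

```python
import math

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """z statistic for H0: the two task-completion rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)  # combined rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical results: Variant A, 42 of 50 completed; Variant B, 30 of 50.
z = two_proportion_z(42, 50, 30, 50)
print(round(z, 2))  # 2.67: |z| > 1.96 suggests a real difference at the 5% level
```

With small participant counts (remote studies often run 5-15 users per variant), large observed gaps can still fail this test, which is itself useful information before committing to a redesign.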


Getting started is often the hardest part, from convincing teams or executives to buy in to the process, to securing funding and preparing the assets for testing. But once you see the valuable feedback gained from the first round, the process becomes much easier. Often, that initial feedback is all that's needed to convince stakeholders to push further.

Good UX compels us to test with real users, to better understand their behaviors and their reactions to different situations. Your product will be better for it.

About the Author

John Saginario

John is a UX professional with experience in user research, information architecture and UX strategy. He advocates for user-centric design and testing. He also enjoys having spirited discussions about bad UX on social media. All opinions are his own, and not those of his employer or its affiliates.