Making Surveys Mobile Friendly, Part 1

Resources, Thought Leadership / August 06, 2015
SimpsonScarborough

At a SimpsonScarborough retreat two years ago, we started a serious conversation about the impact of mobile device usage on our online survey research, one we have continued to revisit and debate. At the time, we had a feeling that more and more of our survey respondents were unintentional mobile participants, meaning they took a survey designed to be completed on a computer using some kind of web-enabled mobile device. While we “knew” this was true, we didn't have any way to quantify it, and we didn't really know what it meant for us moving forward. What we did know is that this was a topic we needed to start thinking about yesterday. In part one of a three-part series, we explore the ins and outs of mobile survey design and how it affects your market research efforts, particularly for your most important audience: prospective undergraduate students.

After some (sometimes heated) debate about what was really in our control and whether mobile participants were even an issue, we decided that we needed to be more informed. Even if what we learned was discouraging, we all agreed that it was better to know what we were facing than to pretend this was not going to affect the way we do research. We started by looking at best practices for mobile surveys and found the available information anemic at best. Most of what we found focused on standalone mobile surveys, and the recommendations were not at all practical given the type of research we conduct. There was very little information, if any, about how to address the unintentional mobile survey participant and how to ensure that surveys designed to be taken online translate on a mobile device. We started to see updates from our survey software about programming mobile surveys, but again, these focused on programming a survey with the intent of administering it on a mobile device. It was an either/or situation: we could program a survey to be taken on a mobile device OR program it to be taken online.

We were frustrated at first, but then realized that, with all of the data we collect on a daily basis, we should be able to start gathering information from our own surveys to get a better idea of how the rise of mobile usage impacts our response rates, the quality of data collected, and the success of different lines of questioning.

After reviewing a recent article in Quirk's highlighting key findings from the 2014 Marketing Research Technology Survey conducted by FocusVision, we thought it might be time to reflect on what we have learned over the past two years about making our surveys mobile-friendly and about the impact of the unintentional mobile participant. A lot has changed over the past two years, both in our understanding of how our audiences use mobile devices and in the technology available to make surveys more mobile-friendly.

First of all, I feel pretty good about the steps we have taken over the past two years. We have come a long way, and there are many marketing research companies that are still not making any serious effort to understand or address the unintentional mobile participant. Based on the 2014 MR Technology Survey results, here is how we stack up against other marketing research companies:

  • 71% are able to provide information about how respondents are taking their surveys. At SimpsonScarborough, we have added a question to all online surveys asking respondents how they completed the survey. We have also been capturing metadata on operating system, browser, screen resolution, etc. (a sketch of how such metadata can drive a device classification follows this list).
  • Only about half routinely discuss mobile participation at the instrument design phase. At SimpsonScarborough, we keep mobile users in mind when writing our survey instruments, test all online surveys on both computers and mobile devices, and try to limit the additional burden on mobile participants to the best of our ability.
  • Roughly a third of market research companies say all of their surveys are enabled for mobile participation. At SimpsonScarborough, all of our surveys are enabled for mobile participation. In rare cases we might specify that online participation is preferred, due to a significant increase in respondent burden, but we do not restrict participants from taking our surveys on a mobile device.
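
To make the metadata point concrete, here is a minimal sketch of how a captured user-agent string could be turned into the computer/phone/tablet classification described above. The classify_device helper and the sample respondent records are hypothetical, not our production setup; real user-agent strings are messier, and a dedicated parsing library would be more robust.

```python
import re
from collections import Counter

def classify_device(user_agent: str) -> str:
    """Rough device classification from a captured user-agent string.

    Tablets are checked first: iPads identify themselves explicitly,
    and Android tablet user agents contain 'Android' without 'Mobile'.
    """
    ua = user_agent or ""
    if re.search(r"iPad|Tablet|Android(?!.*Mobile)", ua, re.IGNORECASE):
        return "tablet"
    if re.search(r"Mobi|iPhone", ua, re.IGNORECASE):
        return "mobile phone"
    return "computer"

# Tally device shares across a batch of (hypothetical) respondent records.
respondents = [
    {"id": 1, "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) Mobile/12A365"},
    {"id": 2, "user_agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) Gecko/20100101 Firefox/31.0"},
    {"id": 3, "user_agent": "Mozilla/5.0 (iPad; CPU OS 8_0 like Mac OS X) Safari/600.1.4"},
]

shares = Counter(classify_device(r["user_agent"]) for r in respondents)
total = sum(shares.values())
for device, count in shares.most_common():
    print(f"{device}: {count}/{total} ({count / total:.0%})")
```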

The rise of unintentional mobile participants is clearly something we cannot afford to ignore as marketing research professionals. Two years ago, we couldn't quantify the number of unintentional mobile participants. Now that we have data to back up our hypothesis, we realize that number is even higher than we anticipated, particularly among one of our most important audiences: prospective undergraduate students. The chart below shows the average percentage of respondents taking our survey on a computer, a mobile phone, or a tablet for the online surveys we have conducted with prospective undergraduate students over the past year.

[Figure 1: How prospective undergraduate students completed our surveys, by device]

You can see that over the past year, on average, 56% of prospective undergraduate students (high school juniors and seniors) completed our online surveys using a mobile device. The kicker? Most of those completions were on a cell phone. If you look at the image below, showing the same survey question displayed on a computer screen vs. a mobile phone, you can see that the much smaller screen size requires additional work from the respondent: more scrolling, both up/down and left/right, and no way to view all response options at one time.

[Figure 2: The same survey question displayed on a computer screen vs. a mobile phone]

So what?

Okay, so more and more people are using mobile devices to take surveys we intended to be taken on a computer. But what does that mean? And what are we doing about it?

These are some of the key concerns we have when thinking about unintentional mobile participants:

  • Survey Length: Generally, it takes a bit longer to complete one of our online surveys on a mobile device. The difference varies greatly with overall survey length and question types (longer surveys and more complex questions produce larger gaps), but on average we see completion times increase by 1-2 minutes when a survey is taken on a mobile device (a sketch of this kind of completion-time comparison follows this list). The longer it takes to complete the survey, the higher the burden on the respondent and the greater the likelihood of respondent fatigue and drop-off.
  • Increased Respondent Burden: Beyond the longer completion time, other factors can increase the burden on the respondent. Question types we already know to be a heavier burden on a computer are even more burdensome on a mobile device, and some questions that are relatively low-burden on a computer become difficult to answer on a much smaller screen.
  • Question Functionality: Sometimes questions that work very well on a computer screen don't translate to the smaller mobile screen. Having to scroll to see response options, having to zoom to read text, video playback, image display, and the ability to tap the intended response with a finger are all concerns.
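
As an illustration of the first concern, below is a minimal sketch of the kind of completion-time comparison that sits behind a figure like the 1-2 minute increase above. The field names and sample records are hypothetical; in practice, the timings would come from the survey platform's start and end timestamps, joined to the device classification captured with each response.

```python
from statistics import median

# Hypothetical response records: completion time in minutes plus the
# device classification captured alongside each response.
responses = [
    {"device": "computer", "minutes": 11.5},
    {"device": "computer", "minutes": 12.0},
    {"device": "mobile phone", "minutes": 13.2},
    {"device": "mobile phone", "minutes": 14.4},
]

# Group completion times by device type.
by_device = {}
for r in responses:
    by_device.setdefault(r["device"], []).append(r["minutes"])

# Medians are less sensitive than means to respondents who leave a
# survey open in a background tab and return to it much later.
for device, times in sorted(by_device.items()):
    print(f"{device}: median {median(times):.1f} min (n={len(times)})")
```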

How are we addressing these issues?

  • It starts in the survey instrument development phase: Issues like survey length and respondent burden have always been part of the survey development conversation at SimpsonScarborough. With the rise of mobile device usage, we have to be even more sensitive to these issues: limiting the number of burdensome questions in a survey and making every survey instrument as strategic and lean as possible. We can't rely on technology to turn our online surveys into something mobile-friendly; we have to think about unintentional mobile users before a survey even reaches the programming phase. With prospective undergraduate students in particular, we are even stricter about survey length, because this audience has the highest percentage of unintentional mobile participants.
  • Optimizing survey programming for mobile devices: One of the major steps in confronting these issues is using a survey platform that allows us to optimize our surveys for mobile devices. There have been significant strides in the technology since we first started thinking about this issue, and it is only continuing to advance. At SimpsonScarborough, we use Qualtrics, which is constantly improving its features to ensure functionality on mobile devices and ease respondent burden. All of our surveys are optimized for mobile usage to the best of our ability. We recognize that there are still limitations, and we cannot completely erase the additional burden on unintentional mobile participants, but we make every effort to keep that burden to a minimum. Our survey software also allows respondents to seamlessly switch between mobile and online versions. If a respondent initially attempts to take a survey on their mobile device and gets frustrated, they can always return to complete the survey on their desktop or laptop computer.
  • Extensive testing of surveys, including testing on mobile phones and tablets: All of our surveys go through a rigorous testing phase before we send them out into the world. In addition to testing our surveys in the format in which they are intended to be taken (on a computer), we always test our surveys on both mobile phones and tablets to ensure that all questions are functional and that any additional burden on unintentional mobile participants is kept to a minimum. When our testers notice something that is either impossible or extremely difficult to attempt on a mobile device, we revisit the programming to see if changes can be made to ease that burden. Sometimes this can mean going back to the drawing board and rethinking how we can best collect the data we are looking for.
  • Monitoring changes in mobile usage and looking for differences between mobile and online participants: The mobile completion data is valuable to us for a few different reasons: not only does it allow us to track mobile usage, it also gives us the opportunity to check for differences in survey responses based on how a respondent completed the survey (a sketch of one such check follows this list). Luckily, it appears that our unintentional mobile participants are pretty savvy with their devices. We have observed only very minor differences in mobile vs. online responses. There have been no huge red flags to date, but we continue to monitor this and have internal discussions about what even minor differences mean for us moving forward. Sometimes even slight differences in how mobile and online participants respond could trigger changes in how we approach certain question types in future surveys. And if there are differences, having this data available makes us aware of potential error introduced by unintentional mobile participants and helps provide context for interpreting the data from certain survey questions.
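
For readers who want to see what such a check might look like, here is a minimal sketch that tests whether the distribution of answers to a single question differs between computer and mobile respondents, using a chi-square test from SciPy. The counts are hypothetical, and in practice we would also consider effect sizes rather than relying on p-values alone.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: counts of respondents choosing each
# answer option to one question, split by completion device.
#                option A  option B  option C
observed = [
    [120, 80, 40],   # computer respondents
    [95, 70, 35],    # mobile respondents
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")

# A small p-value suggests the answer distribution depends on device,
# which would prompt a closer look at how the question renders on phones.
if p_value < 0.05:
    print("Responses differ by device; revisit the question design.")
else:
    print("No evidence of a device effect on this question.")
```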

The rise of unintentional mobile participants is a reality we have to face. While it can be a challenge, it is not a problem we have to try to “fix.” It is an opportunity to reach our audiences in a format they are becoming more comfortable with. It allows us to reach respondents we might have missed if we only offered the survey online, and it puts fewer restrictions on when and where a participant can respond. We have to continue to evolve to reach these audiences effectively. We have come a long way, but we still have a long road ahead. Stay tuned next month, when we take a deeper look at the differences between online and unintentional mobile participants in our surveys of prospective undergraduate students.
