Jeff Papa Retiring in October

On October 1, 2015, Jeff Papa will retire from SimpsonScarborough. Jeff started his career in admissions and worked at the University of Maryland, Colby-Sawyer College, and Randolph-Macon College. He became a partner at SimpsonScarborough in 2008 and worked tirelessly to help colleges and universities all over the country with their marketing and branding strategies. His contributions to the marketing of higher education are countless, but Jeff will probably be celebrated most for his warmth, charm, and humor. Elizabeth Scarborough, on behalf of all of us here, shares her thoughts in her letter below. If you would like to share your words of encouragement, we invite you to email Jeff directly at jp@simpsonscarborough.com.

Dear Jeff:

It’s difficult to even begin describing the impact you’ve had on our company, our clients, our employees, our partners, and me. Because I couldn’t find the right words, I asked each person on our team to give me one word that best describes you. The responses included dedicated, thorough, caring, warmhearted, animated, loveable, inspirational, invested, well-coiffed, jovial, invaluable, ebullient, insightful, exuberant, sharp, and genuine. Jason may have put it best when he said, “It sounds so trite to say it, but the one word I keep thinking is ‘nice.’ Jeff is one of the happiest, sincerely nice people I’ve ever been around. He looks for the good in every person and every situation.”

From the very bottom of my heart, I’d like to thank you for the energy you’ve dedicated to SimpsonScarborough. You are an invaluable source of insight, experience, ideas, advice, and innovation. You’ve provided our team with the structure and support they need to thrive and operate efficiently. You’ve provided our clients with the attention, intelligence, and care they need to meet their goals. You’ve provided our leadership team with thoughtful decision-making and contributions that have been instrumental in helping our business grow. You’ve given me years of great ideas, great laughs, balanced opinions, and tireless effort and dedication. My words for you are reliable, hardworking, committed, upbeat, and adored.

Your contributions to our industry are immeasurable, Jeff, and your mark on SimpsonScarborough is indelible. But beyond all your accomplishments, what we’re really celebrating here is YOU. Know that you are simply irreplaceable—and we wouldn’t want it any other way. We love you.

Elizabeth

Making Surveys Mobile Friendly, Part 1

At a SimpsonScarborough retreat two years ago, we started a serious conversation about the impact of mobile device usage on our online survey research, a conversation we have continued to revisit and debate ever since. At the time, we had a feeling that more and more of our survey respondents were unintentional mobile participants, meaning they chose to take a survey designed to be completed on a computer using some kind of web-enabled mobile device. While we “knew” this was true, we didn’t have any way to quantify it, and we didn’t really know what it meant for us moving forward. What we did know is that this was a topic we needed to start thinking about yesterday. In part one of a three-part series, we explore the ins and outs of mobile survey design and how it affects your market research efforts, particularly for your most important audience: prospective undergraduate students.

After some (sometimes heated) debate about what was really in our control and whether mobile participants were even an issue, we decided that we needed to be better informed. Even if what we learned was discouraging, we all agreed that it was better to know what we were facing than to pretend this would not affect the way we do research. We started by looking at best practices for mobile surveys and found the available information to be anemic at best. Most of what we found focused on standalone mobile surveys, and the recommendations were not at all practical given the type of research we conduct. There was very little information, if any, about how to address the unintentional mobile survey participant and how to ensure that surveys designed to be taken online could translate to a mobile device. We started to see updates from our survey software about programming mobile surveys, but again, these focused on programming a survey with the intent of administering it on a mobile device. It was an either/or situation: we could program a survey to be taken on a mobile device OR program it to be taken online. We were frustrated at first, but then realized that, with all of the data we collect on a daily basis, we should be able to start gathering information from our own surveys to get a better idea of how the rise of mobile usage affects our response rates, the quality of the data collected, and the success of different lines of questioning.

After reviewing a recent article in Quirk’s highlighting key findings from the 2014 Marketing Research Technology Survey conducted by FocusVision, we thought it might be time to reflect on what we have learned over the past two years about making our surveys mobile-friendly and about the impact of the unintentional mobile participant. A lot has changed over the past two years, both in our own understanding of how our audiences use mobile devices and in the technology available to make surveys more mobile-friendly.

First of all, I feel pretty good about the steps we have taken over the past two years. We have come a long way, and many marketing research companies are still not making any serious effort to understand or address the unintentional mobile participant. Based on the 2014 MR Technology Survey results, this is how we stack up against other marketing research companies:

  • 71% of firms are able to provide information about how respondents are taking their surveys. At SimpsonScarborough, we have added a question to all online surveys asking respondents how they completed the survey. We have also been capturing metadata such as operating system, browser, and screen resolution (see the sketch following this list).
  • Only about half routinely discuss mobile participation at the instrument design phase. At SimpsonScarborough, we keep mobile users in mind when writing our survey instruments, test all online surveys on mobile devices as well as computers, and try to limit the additional burden on mobile participants to the best of our ability.
  • Roughly a third of market research companies say all of their surveys are enabled for mobile participation. At SimpsonScarborough, all of our surveys are enabled for mobile participation. There are rare cases where we might specify that online participation is preferred due to a significant increase in respondent burden, but we do not restrict participants from taking our surveys on a mobile device.
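
To illustrate how that captured metadata can be put to work, here is a minimal sketch in Python that classifies a response by device type from its User-Agent string. The function name and the keyword heuristic are our own illustrative assumptions, not a description of any survey platform’s built-in reporting.

    # Rough sketch: bucket a survey response by device using the captured User-Agent.
    # The keyword heuristic below is illustrative only and will misclassify some edge cases.
    def classify_device(user_agent: str) -> str:
        ua = user_agent.lower()
        if "ipad" in ua or "tablet" in ua:
            return "tablet"
        if "iphone" in ua or "mobile" in ua or "android" in ua:
            return "mobile phone"
        return "computer"

    # Example with an abbreviated iPhone User-Agent string
    print(classify_device("Mozilla/5.0 (iPhone; CPU iPhone OS 8_4 like Mac OS X) Mobile Safari"))
    # -> mobile phone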

The rise of unintentional mobile participants is clearly something we cannot afford to ignore as marketing research professionals. Two years ago we couldn’t quantify the number of unintentional mobile participants. Now that we have data to back up our hypothesis, we realize that number is even higher than we anticipated, particularly among one of our most important audiences: prospective undergraduate students. The chart below shows the average percentage of respondents completing our surveys on a computer, a mobile phone, or a tablet across the online surveys we have conducted with prospective undergraduate students over the past year.

[Figure 1: How prospective students completed our online surveys, by device]

You can see that over the past year, on average, 56% of prospective undergraduate students (high school juniors and seniors) completed our online surveys using a mobile device. The kicker? Most of those surveys were completed on a cell phone. If you look at the image below showing a survey question displayed on a computer screen vs. a mobile phone, you can see that the much smaller screen size requires some additional work from the respondent. Respondents frequently have to do more scrolling, both up/down and left/right, and it is impossible to view all response options at one time.

[Figure 2: A survey question displayed on a computer screen vs. a mobile phone]

So what?

Okay, so more and more people are using mobile devices to take surveys we intended to be taken on a computer. But what does that mean? And what are we doing about it?

These are some of our key concerns when thinking about unintentional mobile participants:

  • Survey Length: Generally, it takes a bit longer to complete one of our online surveys on a mobile device. The difference varies greatly depending on overall survey length and question types (longer surveys and more complex questions lead to larger increases), but on average we see completion times increase by one to two minutes when a survey is taken on a mobile device. The longer it takes to complete the survey, the higher the burden on the respondent and the greater the likelihood of respondent fatigue and drop-off.
  • Increased Respondent Burden: Beyond the longer completion time, other factors can increase the burden on the respondent. Certain question types that we already know to be a heavier burden when answered on a computer are even more burdensome when attempted on a mobile device, and some questions that are relatively low in respondent burden on a computer become more burdensome on a much smaller screen.
  • Question Functionality: Sometimes questions that work very well on a computer screen don’t translate to the smaller mobile screen. Having to scroll to see response options, having to zoom to read text, video playback, viewing images, and tapping a response with a finger are all concerns.

How are we addressing these issues?

  • It starts in the survey instrument development phase: Issues like survey length and respondent burden have always been part of the survey development conversation at SimpsonScarborough. With the rise of mobile device usage, we have to be even more sensitive to these issues, limiting the number of burdensome questions in a survey and making every survey instrument as strategic and lean as possible. We can’t rely on technology to turn our online surveys into something mobile-friendly; we have to start thinking about unintentional mobile users before a survey even reaches the programming phase. With prospective undergraduate students in particular, we are even stricter about survey length, because this audience has the highest percentage of unintentional mobile participants.
  • Optimizing survey programming for mobile devices: One of the major steps in confronting these issues is using a survey platform that allows us to optimize our surveys for mobile devices. There have been significant strides in the technology since we first started thinking about this issue, and it continues to advance. At SimpsonScarborough, we use Qualtrics, which is constantly improving its features to ensure functionality on mobile devices and ease respondent burden. All of our surveys are optimized for mobile usage to the best of our ability. We recognize that there are still limitations, and we cannot completely erase the additional burden on unintentional mobile participants, but we make every effort to keep that burden to a minimum. Our survey software also allows respondents to switch seamlessly between mobile and online versions: if a respondent starts a survey on a mobile device and gets frustrated, they can always return and complete it on a desktop or laptop computer.
  • Extensive testing of surveys, including testing on mobile phones and tablets: All of our surveys go through a rigorous testing phase before we send them out into the world. In addition to testing our surveys in the format in which they are intended to be taken (on a computer), we always test our surveys on both mobile phones and tablets to ensure that all questions are functional and that any additional burden on unintentional mobile participants is kept to a minimum. When our testers notice something that is either impossible or extremely difficult to attempt on a mobile device, we revisit the programming to see if changes can be made to ease that burden. Sometimes this can mean going back to the drawing board and rethinking how we can best collect the data we are looking for.
  • Monitoring changes in mobile usage and looking for differences between mobile and online participants: The mobile completion data is valuable to us for a few different reasons: not only does it allow us to track mobile usage, it also gives us the opportunity to check for differences in survey responses based on how a respondent completed the survey (see the sketch following this list). Luckily, it appears that our unintentional mobile participants are pretty savvy with their devices. We have observed only very minor differences between mobile and online responses. There have been no huge red flags to date, but we continue to monitor this and to discuss internally what even minor differences mean for us moving forward. Sometimes even slight differences in how mobile and online participants respond could trigger changes in how we approach certain question types in future surveys. And if there are differences, having this data available makes us aware of potential error introduced by unintentional mobile participants and helps provide context for interpreting the results of certain survey questions.
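
To make that kind of check concrete, here is a minimal sketch of one way to compare computer and mobile respondents on a single multiple-choice question, using a chi-square test of independence. The counts are made up for illustration, and this is one possible approach rather than a description of our full analysis.

    from scipy.stats import chi2_contingency

    # Illustrative counts only: how computer vs. mobile respondents answered
    # one multiple-choice question with options A, B, and C.
    counts = [
        [120, 85, 45],  # computer respondents
        [110, 90, 50],  # mobile respondents
    ]
    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
    # A large p-value suggests the response pattern does not differ meaningfully by device.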

The rise of unintentional mobile participants is a reality that we have to face. While it can be a challenge, it is not a problem we have to try to “fix.” It is an opportunity to reach our audiences in a format they are becoming more comfortable with. It allows us to reach respondents we might have missed if the survey could only be taken on a computer, and it puts fewer restrictions on when and where a participant can respond. We have to continue to evolve to reach these audiences effectively. We have come a long way, but we still have a long road ahead. Stay tuned next month, when we take a deeper look at the differences between online and unintentional mobile participants in our surveys of prospective undergraduate students.

Influencing Reputation: Do Your Efforts Reach and Engage Academic Peers?

Is it possible for a college or university to positively shape its image and reputation among academic peers? The majority of respondents to a nationwide survey of presidents, provosts, and admissions deans conducted by Stevens Institute of Technology and SimpsonScarborough said yes — and identified marketing and communications efforts as playing a key role.

Through more than 400 completed surveys and 40 follow-up interviews, we uncovered what has the strongest positive impact on academic peers’ perceptions of an institution and which communication methods are most compelling or desired. For example, the following were cited as having the strongest impact on an institution’s academic reputation:

  • The quality of faculty demonstrated through faculty accomplishments, individual reputation and classroom/student experience
  • Academic focus, program offerings, and institutional metrics, such as rankings and selectivity of incoming students
  • An institution’s visibility, as well as the types of organizations with which it develops strategic partnerships   
  • Quality of current students and alumni success

Both what you communicate about your institution, faculty members, students, and graduates and how you communicate it require constant management. And the relationships that members of your campus community have with their counterparts across higher ed are a tangible asset. If you aren’t investing in these activities, you’re not maximizing your marketing opportunities.

Want to know more? Edward Stukane, Vice President of Communications & Marketing, will share how Stevens Institute of Technology used the findings of this study to understand its competitive strengths, challenges, and opportunities and to strategically inform the communications plan and marketing messages developed for academic peers and other target audiences. He presents at the 2015 Symposium for Higher Education Marketing during Track 5 on Monday, November 16, from 11:15am to 12:00pm.

September Strategically Speaking Webinar

You’ve done your research, developed a positioning strategy, and brought your campus along for the ride. But bringing it to life and expressing it creatively is where the rubber meets the road in a branding initiative. How does an institution effectively move from brand strategy to creative concept and expression? What are the key elements that need to be addressed? Should you touch your school’s logo or logo guidelines, and if so, how? Where does a good idea come from? What does it take to build a case and support for a big idea?

These are among the critical questions in the creative development process. They are answered in the fourth webinar of our series, which will explore the key elements of developing a powerful, moving creative strategy that is authentic to your brand.

Creatively Expressing Your Brand Strategy
Led by: Jason Simon and Matt Checkowski
Date: Thursday, September 10, 1:00-2:30pm ET
Registration fee: $295

Register today and learn more about the Strategically Speaking webinar series.

The Power of the Net Promoter Score

Many of the image and branding studies we conduct for clients include the calculation of a Net Promoter Score, or NPS. The NPS is commonly used by a wide variety of companies because it is a simple metric of brand strength that can be tracked over time and compared across audiences. This article in Quirk’s describes how the NPS is used by AAA.

The NPS is based on one question: “Would you recommend [institution]?” Sometimes it’s narrowed to, “Would you recommend [institution] to an undergraduate/graduate student?” On a scale of 1 to 10, where 1 = not at all likely and 10 = very likely, respondents who answer 1-6 are considered “detractors,” those who answer 7 or 8 are considered “passives,” and those who answer 9 or 10 are considered “promoters.” The NPS is the percentage of promoters minus the percentage of detractors; passives count toward the total number of respondents but toward neither group.
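
For readers who want to see the arithmetic end to end, here is a minimal sketch of the calculation in Python. The ratings and the function name are made up for illustration and do not come from any client study.

    # Minimal sketch of an NPS calculation; the ratings below are hypothetical.
    def net_promoter_score(ratings):
        """Return NPS: percentage of promoters (9-10) minus percentage of detractors (1-6)."""
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return round(100 * (promoters - detractors) / len(ratings))

    # Hypothetical sample: 50 promoters, 30 passives, and 20 detractors out of 100 respondents
    ratings = [10] * 50 + [8] * 30 + [5] * 20
    print(net_promoter_score(ratings))  # 50% - 20% = 30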

In one study we conducted for a university, the NPS among current undergraduate students was 34 while the NPS among graduate students was just 12. In the same study, the NPS among students in one college was 24 while the NPS among students in another college was 51. These findings illustrate how powerful the simple NPS can be in highlighting areas of strength and weakness in a university’s brand.