/dev/journal: I’ve been creating and sending out a few surveys this week, so I thought I might take some time to talk about how I design polls and surveys for our social learning sessions. We all know that polls can add energy and engagement to virtual meetings, but too often I see polling used only because someone’s been told “you should add a poll to drive energy and engagement.” In my experience, bad polls are worse than no polls in terms of engagement, so here are my tips on how to create high-value polls.

We usually use Adobe Connect for our virtual meetings, but most platforms have polling built-in (e.g. Zoom, Teams, Skype for Business (formerly Lync)). And if you’re lucky enough to meet in person, these polls can all be done on whiteboards and/or with Post-its.

Check-in Polls: Priming and Discovery

At the start of a session, I usually do one or two quick polls related to what the session is about. One great poll question is “what are you hoping to learn?”. This primes participants to look out for answers to their own questions, helps them discover what the other people in the room are looking for, and creates the opportunity for them to help each other. We encourage our participants to engage with each other in the (public) chat during sessions, and often see enthusiastic side-bar conversations where participants ask and answer questions and expand on the core content of the session.

This can be distracting, especially for the facilitator, so we usually (at least) double-team our sessions: one person presents while another moderates the chat, mining it for insights and interesting digressions worth exploring in more detail.

Check-out Polls: Testing and Commitment

At the end of the session, my polls are designed first to lock in (and share) any insights or epiphanies the participants have had during the session, and then to set them up for action once they leave. My go-to questions for this are things like “What was most useful in this session?” and “What are you most looking forward to trying out after you leave?”.

If I covered specific learning points in the session that I want to make sure were understood and that participants remember, I might add an extra poll for this too, but I take extra care that it doesn’t come across as patronising. (I might actually prefer to look for it, or something related, in the responses to the first two questions, and push in on it in conversation!)

Designing Check-in and Check-out polls like this also helps with primacy and recency: participants will remember what they said they were looking for at the start, and what they said they got out of it at the end.

Surveys: Evaluation

The third use for polls (after “Priming and Discovery” and “Testing and Commitment”) is evaluation. This is my chance to get the participants’ help in improving the content, my delivery, or anything else related to the session. We typically do not use polls for this, but send out a survey to all attendees after the session.

We use SurveyMonkey (now part of Momentive) for our surveys, but many other platforms exist.

When designing survey questions, I try to use as many open-ended, free-text questions as possible. This gives my participants a chance to freely share their evaluative feedback without being constrained to check-boxes. I also favour anonymous responses, again to encourage participants to be as forthcoming as possible. (I sometimes add a final question along the lines of “If you’re willing to be contacted to help us understand your responses better, please tell us who you are:”, but always leave it an optional answer.)

I also try to remind myself to make sure I have more evaluating questions than validating ones. Validation for my hard work preparing and delivering a session is nice, but ultimately doesn’t help me get better at it. I might throw in a “Rate the session from 0 to 5 stars” (validating), but then follow that up with a “What made you pick this rating?” (evaluating).

One good practice when designing survey questions is to consider the action threshold for each question: if I get rated below 3/5 stars, what will my action be? If I can’t at least finger-in-the-air a threshold and an action, it is probably not a good question. I’m particularly hesitant to put NPS-style (Net Promoter Score) questions on surveys, especially if there is no clear agreement on how to treat non-respondents. (In fact, simply measuring the response rate on your survey is likely to be more useful than trying to calculate your NPS.)
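To make the non-respondent problem concrete, here’s a minimal Python sketch (the invite count and scores are made up for illustration) that computes the response rate and then the NPS two different ways: counting respondents only, and treating everyone who stayed silent as a detractor. Both are defensible readings of the same data, which is exactly the ambiguity I’m wary of.

```python
# Hypothetical survey results for an NPS-style 0-10 question.
# (All figures below are invented for illustration.)
invited = 40                                   # people who received the survey
responses = [10, 9, 9, 8, 7, 6, 9, 3, 10, 8]  # scores from those who answered

# Response rate: unambiguous, and often the more useful number.
response_rate = len(responses) / invited
print(f"Response rate: {response_rate:.0%}")   # 25%

# Standard NPS buckets: promoters score 9-10, detractors 0-6.
promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)

# Reading 1: ignore the 30 people who never answered.
nps_respondents_only = 100 * (promoters - detractors) / len(responses)

# Reading 2: count every non-respondent as a detractor.
non_respondents = invited - len(responses)
nps_silent_detractors = 100 * (promoters - (detractors + non_respondents)) / invited

print(f"NPS (respondents only):            {nps_respondents_only:+.0f}")   # +30
print(f"NPS (non-respondents = detractors): {nps_silent_detractors:+.0f}") # -68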

Finally, my best tip for a combined evaluating and validating survey question is “What advice would you give future participants of this session?” Answers to this can be insightful in many ways, and may serve as useful endorsements for future sessions.
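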

=P

EDIT 2021-08-28: Corrected/updated Momentive’s new name