Deliberate open-ended questions for high response rates and depth

How was your day? What are we doing next weekend? What do you think of this idea? How can we best solve this problem? Open-ended questions. People are tired of the closed questions that surveys present us with: the well-known survey fatigue (see the interesting 2013 report by the CIPD). This is understandable. After all, we want to be able to give our opinion and contribute to the issue at hand, and we don’t want to be forced to choose from a limited number of answers. Open-ended questions also let people know that you take them seriously, that you trust them. Gallup has been researching this for many years: employees want, indeed need, to be taken seriously for their opinions. You also get a higher response rate and a much better response; people take answering your question seriously. As a result, your research becomes more reliable. Don’t forget to ask more questions about the future and the present than about the past; the past doesn’t resonate strongly enough with people.

Closed question with open answer

Even if you ask a closed question (score, multiple choice, statement), it is almost always better to offer an open text field as well. You must then explicitly ask for an explanation of the score or of the answer to the multiple-choice question. By asking for an explanation, the score is given in a more considered way, which makes your results more reliable. Moreover, you learn about the reasons behind the scores and can finally interpret your numerical results reliably. It also lets you spot answers where a low score was given by mistake while the open answer indicates the opposite, or vice versa. Your research results will improve, which in turn leads to better follow-up and targeted, fast action! Managers need to be helped by your insights, not overwhelmed by data and additional homework. Simply adding an open text field to a closed question is not enough, though. If you don’t give the open answer meaning through a clear, guiding open-ended question, why would you expect respondents to give it meaning?

So, instead of asking: “How do you assess ABC?” and allowing a score or throwing in a text field,
ask this: “How do you assess ABC at this moment, and can you clearly substantiate that?”

Don’t limit them, or yourself

The current technology of survey providers is very limited in processing open answers. They therefore often discourage them, and you are left with the big question of what lies behind the scores. Or they limit themselves to counting words, topic modelling and trying to discover combinations of words. For such providers, the ease of processing the data matters more than the quality and depth of your research. Worse still, it matters more than the needs of your participants: there is little or no real room for open answers. Open answers should be driven by a deliberate open-ended question, not by a functional text box to put in ‘anything you want’. Participants can’t say what they actually want to say. The section below is from the website of one of the market leaders in employee surveys, showing that after the survey, you have to find out for yourself what lies behind the numbers.

[Screenshot from the survey provider’s website: “the why behind the numbers”]

Instantly useful information

CircleLytics technology, on the other hand, has been developed to process open answers and convert them into information you can use immediately: useful top 5s, for instance. You can see right away what was said, by which target group, and why. This means you get the most out of the open answers and make sure participants are heard.

Prof Neely (University of Cambridge): “People often reveal their true thoughts and feelings in the open-ended comment boxes. In general, the content of these comments offers a much more reliable predictor of people’s behaviour.”

How do you deal with open answers?

What is the difference between open-answer processing by a survey tool and by CircleLytics? We will explain this using a simple example. After a reorganisation of HR, in which HR started working decentrally as business partners, management and HR wanted to investigate the results. The question was simple: “How do you assess the effects for you of decentralising HR, and can you explain this?” Approximately 3,000 employees gave a 1-10 score, including their explanations. The average score was high. The answers were then grouped based on traditional word count and topic modelling. These are simple ways of processing text, which made it possible to group the answers into 14 themes and to count which words were mentioned most often. The latter resulted in a nice word cloud. That’s it?
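For illustration, that baseline is easy to reproduce. Below is a minimal, hypothetical sketch in Python (the survey tool’s actual pipeline is not public): it assumes the free-text explanations are already collected in a list of strings, counts word frequencies for a word cloud, and fits a simple topic model to group the answers into 14 themes.

```python
# Hypothetical sketch of the baseline described above: plain word
# counting plus topic modelling. All names and parameters are
# illustrative, not any vendor's actual pipeline.
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

answers = [
    "business partners respond faster now",
    "harder to reach HR for day-to-day questions",
    # ... plus the rest of the ~3,000 free-text explanations
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(answers)
words = vectorizer.get_feature_names_out()

# 1) Plain word count: the raw material for a word cloud.
word_counts = Counter(dict(zip(words, doc_term.sum(axis=0).A1)))
print(word_counts.most_common(20))

# 2) Topic modelling: group the answers into 14 themes, as in the example.
lda = LatentDirichletAllocation(n_components=14, random_state=0)
lda.fit(doc_term)
for i, topic in enumerate(lda.components_):
    print(f"theme {i + 1}:", [words[j] for j in topic.argsort()[-5:]])
```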

For surveys, yes, that’s it. For CircleLytics and its participants, this is where it all starts. Our tool allows for a unique second round per question. In this second round, the unsorted answers are sent back to the participants in sets of varying composition, to maximise the diversity of perspectives. Employees (or other participants, such as customers or members) can rate these answers with a score from -3 to +3, choose key words and explain their score. Do you know how much they enjoy doing that, how happy it makes them to learn from each other’s answers and enrich them?
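What that second round collects per participant might look like the sketch below. This is hypothetical (CircleLytics’ internal data model is not public): each participant receives a differently composed set of answers and returns a score from -3 to +3, the key words they selected, and an optional explanation.

```python
# Hypothetical second-round data: a rating record plus the varied
# distribution of answer sets. Field names are illustrative.
import random
from dataclasses import dataclass, field

@dataclass
class Rating:
    rater_id: str
    answer_id: str
    score: int                                         # -3 .. +3
    keywords: list[str] = field(default_factory=list)  # words the rater highlighted
    explanation: str = ""                              # optional reason for the score

def assign_answer_sets(answer_ids, participant_ids, set_size=30, seed=0):
    """Give each participant a differently composed sample of answers,
    so the diversity of perspectives is spread across the whole pool."""
    rng = random.Random(seed)
    return {p: rng.sample(answer_ids, min(set_size, len(answer_ids)))
            for p in participant_ids}
```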

Sentiment & Semantic Analysis

They give a sentiment, a weighting, to the answers of others. You immediately have a real-time sentiment and semantic analysis: natural language processing done by people. No algorithm can beat the human mind in processing something as complex as natural language. To humans, language comes naturally, yet for algorithms it is one of the most complex things to capture in a meaningful, actionable and reliable way.

By selecting key words, the human minds (the participants) give those words extra weight and add semantic, contextual value to the data. That is an enormous enrichment: even before our AI/ML/NLP technology starts to perform analyses, hundreds or even thousands of participants have already done the natural language processing. In practice, this proves extremely attractive: more than 70% of participants rate more than 15 answers from others. Participants love this way of answering questions, learning from others and structuring the output, and rate this two-round method 4.1 on a 5-point scale. Very different from the fatigue that plagues surveys…
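Aggregating those human judgements is then straightforward. The sketch below is again hypothetical, not CircleLytics’ actual scoring, and assumes the Rating records from the previous sketch: it averages the -3 to +3 scores per answer and lets selected key words weigh more when the rating behind them is stronger.

```python
# Hypothetical aggregation of second-round ratings: mean support per
# answer, plus a keyword tally weighted by rating strength.
from collections import Counter, defaultdict

def aggregate(ratings):
    scores = defaultdict(list)
    keyword_weight = Counter()
    for r in ratings:
        scores[r.answer_id].append(r.score)
        for word in r.keywords:
            # A selected key word counts more when the rating is stronger.
            keyword_weight[word] += abs(r.score)
    support = {a: sum(s) / len(s) for a, s in scores.items()}
    # Most supported answers first: these carry the dominant themes.
    ranked = sorted(support.items(), key=lambda kv: kv[1], reverse=True)
    return ranked, keyword_weight.most_common(10)
```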

Now let’s see

Back to our example: 3,000 participants each assessing an average of 30 answers adds up to almost 100,000 thought processes in just a few days! That 2nd round is what we call collective intelligence. It gave the client a whole new insight. Instead of 14 themes, it turned out after the 2nd round that only 3 themes were favoured and supported. On second thought, the participants focused on those 3 themes, not on 14 at all. Moreover, a fourth, new theme emerged, one that only a few people had thought of in round 1 but that was scored positively by a large group of participants in round 2!

That theme was about someone’s observation that the HR business partners themselves were having a hard time… Only a few showed empathy for this in the 1st round. A regular one-round survey, with a word count and topic modelling, simply did not reveal the theme. Only through the 2nd round of CircleLytics could participants see the opinions of others, learn from them and gain a deeper understanding. On closer inspection, they thought differently. An unprecedented enrichment of the research, and a wonderful example of deep democracy, in which a minority opinion is clearly revealed.

Think again

That is why the CircleLytics technology is not called a survey but a dialogue, based on deliberate open-ended questions and human (and only after that, AI/NLP) processing of the open answers. Dialogue requires that people are prepared to think differently about your question. They learn from the answers of others. People think better on second thought: they usually think fast first and slow down afterwards (see Kahneman’s research), which gives your research more depth. You also surface subjects that no technology would have come up with; topics that are sometimes mentioned by only a few, and that cannot emerge without the collective intelligence of all brains together.

So when would it be useful to have a survey?

Forget the word survey for a moment. Based on our experience, we advise determining the design per question in CircleLytics. This means adding a closed scale, presenting a multiple choice, or offering a text field. When you add a text field, you can adjust the question: you can now explicitly ask participants to fill in the text field and tell them what you expect from them. Finally, you can determine whether the question will be included in the 2nd round and set the deadline for it. This is how you combine the old survey with the new possibilities of the open text field and the 2nd round. If a question leaves no room for thinking for yourself but only asks for a score or the ticking of a predetermined answer, you leave out the 2nd round, and possibly even the text field. This can all be done in a single tool; a hypothetical sketch of such a per-question design follows below. Tips for formulating strong questions can be found in our WhitePaper with design principles.
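To make that concrete, a per-question design could be captured in a small configuration object like the hypothetical sketch below; the field names are illustrative and not CircleLytics’ actual settings.

```python
# Hypothetical per-question design, mirroring the options described
# above: closed scale, multiple choice, text field, 2nd round, deadline.
from dataclasses import dataclass
from datetime import date

@dataclass
class QuestionDesign:
    text: str
    scale: tuple[int, int] | None = None   # e.g. (1, 10) closed scale
    choices: list[str] | None = None       # multiple-choice options
    open_text_field: bool = True           # offer an explanation field
    include_in_round2: bool = True         # send answers back for rating
    round2_deadline: date | None = None    # deadline for the 2nd round

# Illustrative example: a scored question with a guided text field.
q = QuestionDesign(
    text="How do you assess ABC at this moment, and can you clearly substantiate that?",
    scale=(1, 10),
    round2_deadline=date(2030, 1, 31),  # placeholder date
)
```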

Analyses and results

In conclusion: the response rate and quality are higher and, more importantly in our opinion, you achieve accurate results and a reliable interpretation of the figures. You can also analyse the results further in CircleLytics using the unique Weighted Word Count, which takes sentiment (the scores and the selected words) into account. This leads to completely different insights than the old word count.
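As an illustration of the difference, the sketch below shows one plausible reading of a sentiment-weighted word count; the actual Weighted Word Count is CircleLytics’ own analysis. Words from highly rated answers weigh more, and words from negatively rated answers are pulled down, so the ranking can differ sharply from a flat frequency count.

```python
# Hypothetical sentiment-weighted word count versus the old flat count.
from collections import Counter

def weighted_word_count(scored_answers):
    """`scored_answers` holds (text, mean_round2_score) pairs, with
    scores on the -3 to +3 scale from the second round."""
    flat, weighted = Counter(), Counter()
    for text, score in scored_answers:
        for word in text.lower().split():
            flat[word] += 1
            weighted[word] += score  # negative scores pull words down
    return flat, weighted

flat, weighted = weighted_word_count([
    ("business partners respond faster", 2.4),
    ("business partners are overloaded", -1.5),
])
print(flat.most_common(3))      # old word count: frequency only
print(weighted.most_common(3))  # weighted: sentiment changes the ranking
```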

Everything has an expiry date. The regular surveys you are used to cause fatigue among participants and deliver fragile quality; they are due for renewal.

We would like to invite you to get to know the power of people, based on (mostly) open-ended questions, with the CircleLytics Dialogue Solution.

Request demo

 
