Recently I had an interesting discussion with Ron Stroeven, one of the founders of Infotools, about open-enders, short for open-ended questions. Infotools was established in 1990, but Ron has worked in market research far longer than that. He has a wealth of experience in survey design and data analysis, so it
Four people and several automated solutions were tested on the task of coding open-ended questions in a Net Promoter Score (NPS) survey. Their task: figure out the five key reasons behind the NPS score and the five areas that could be improved. Here, we compare their performance using an academic metric of
In July 2016, I was fortunate enough to speak at the Sentiment Analysis Symposium in New York. It is one of the most important events for those who invent text analytics solutions and for those who use them. I also attended the co-located sentiment analysis tutorial run by Jason Baldridge.
Customer feedback analysis tools are all the rage, but most of them suck. They collect scores into pretty dashboards, but don’t actually tell you what the feedback says or how to achieve customer loyalty.
If you have ever left a review yourself, you will know that your score is not nearly as valuable
If you missed our presentation at the Sentiment Analysis Symposium in New York last July, read on to see it in full with accompanying slide notes.
In this article, we explain how to evaluate the accuracy of coding survey responses. Whether the coding is manual or automated, we recommend using the same evaluation method, and we explain here how it works in practice.
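The excerpt above doesn't spell out the method itself, but one standard way to score coding accuracy, whether the coder is a person or an algorithm, is to compare each coder's theme assignments against an agreed gold standard. Below is a minimal Python sketch of that idea; the response IDs, theme labels, and data are invented for illustration and are not taken from the article.

```python
# Minimal sketch: scoring one coder's theme assignments against a
# human-agreed gold standard. All IDs, themes, and data are invented.

# Gold standard: themes agreed on by expert human coders, per response.
gold = {
    "resp_1": {"price", "support"},
    "resp_2": {"usability"},
    "resp_3": {"price"},
}

# Candidate coder's output (manual or automated) on the same responses.
predicted = {
    "resp_1": {"price"},
    "resp_2": {"usability", "speed"},
    "resp_3": set(),
}

# Treat each (response, theme) pair as one binary decision.
tp = fp = fn = 0
for resp_id, gold_themes in gold.items():
    pred_themes = predicted.get(resp_id, set())
    tp += len(gold_themes & pred_themes)  # themes both assigned
    fp += len(pred_themes - gold_themes)  # themes only the candidate assigned
    fn += len(gold_themes - pred_themes)  # themes the candidate missed

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f"precision={precision:.2f}  recall={recall:.2f}  F1={f1:.2f}")
```

Because the same comparison works for any coder, human or machine, it gives a single yardstick for both, which is exactly why using one method for manual and automated coding is worth recommending.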
Why measuring the accuracy of coding matters
Responses to open-ended questions in surveys are full of valuable insight, but
Maya Angelou once said “people will forget what you said, people will forget what you did, but people will never forget how you made them feel.” Results from a recent McKinsey study demonstrate what this means for businesses: after a positive customer experience, more than 85 percent of customers