Surveys are essential for finding out what your customers prefer and need. From customer satisfaction and market surveys to product and event surveys, these are crucial tools for gathering customer feedback, gaining insights, and making data-driven decisions. And while there are plenty of tools for creating surveys, such as SurveyMonkey or Typeform, analyzing survey results can be hard and time-consuming.
If you’ve obtained results for closed-ended survey questions, such as responses to scaled and multiple-choice questions, this data will be easy to quantify. If you want to analyze a qualitative survey, the scenario is a bit more complicated.
First, you need to build up a solid tagging structure to categorize your survey texts and make sure the whole team is on board, which becomes even more difficult if you handle considerable volumes of data. Then, you’ll need to make sense of all this tagged data by analyzing it manually. As you can see, this is quite an inefficient process and one that you can enhance using text analysis tools with machine learning. These tools can help you carry out survey analysis by auto-tagging open-ended answers, saving a lot of time and resources.
In this guide, you’ll learn the best practices to obtain and analyze survey data, the importance of building up a solid tagging system, and how to use AI to effectively analyze your survey responses.
- Getting Started with Survey Analysis
- Prepare Survey Data for Analysis
- How to Analyze Quantitative Data in Surveys?
- How to Analyze Qualitative Data in Surveys?
- Automating Qualitative Analysis with AI
- Data Visualization of the Results
Getting Started with Survey Analysis
Conducting surveys to obtain customer feedback is crucial for companies to understand their customers’ positive and negative perceptions about products and services. You can conduct either open or closed-ended surveys, two very different instruments that will give you different types of information: qualitative and quantitative data.
Keep on reading to find out the ways in which surveys can help your business and how to analyze them!
Quantitative Vs. Qualitative Data
Obtaining quantitative data means measuring values expressed in numbers. This method is structured and helps you draw general conclusions from your research. For example, if you would like to know how many customers like your new product, you can conduct quantitative research using closed-ended surveys and obtain numbers as a result (82% liked it, 15% did not, and the rest are unsure).
Qualitative data, on the other hand, describes and explains a topic or phenomenon rather than measuring it. Instead of values expressed in numbers, qualitative data comes in the form of open-ended responses. It’s very useful when collecting and deriving meaning from impressions, opinions, and views. So, if you are more interested in knowing why 15% of your customers did not like your new product, you can conduct qualitative research and ask them an open-ended question such as ‘Can you tell us a little bit more why you didn’t like our new product?’ to get to know their thoughts on this topic and the different reasons why they dislike it.
Qualitative surveys provide you with more insightful information about your business and customers, but they’re a lot harder to analyze than quantitative surveys.
When to Use Qualitative Vs. Quantitative Research?
Quantitative surveys are great for obtaining results that help you see the bigger picture, while qualitative data goes more in-depth to understand why people feel a certain way.
If you need a general overview based on cold, hard facts, then you should go for quantitative research. This approach is also useful to confirm or reject hypotheses you may have about how customers feel about a specific topic or why they behave in a certain way.
If you are more interested in getting to know more about your customers’ views, opinions, and beliefs, then a qualitative approach is more appropriate. This approach is also useful for discovering things that you may not know and issues or opportunities that you are not aware of.
Let’s take a look at some instances in which it’s better to apply qualitative methods over quantitative, and vice-versa.
- New Ideas: Qualitative surveys gather in-depth data about an event, topic, product, and so on. The feedback you obtain will help you make data-based decisions and improve your product or service. Also, survey responses can help you come up with new ideas tailored to your customers’ needs and requirements. You may want to start by asking your customers open-ended questions about a product and use the insights to design a quantitative survey to prove your hypothesis right or wrong, for example, whether a new app feature is a success or a flop.
- More Answers: Responding to a quantitative survey is easier than answering a lot of open-ended questions, so more customers are likely to respond. It’s also simpler for the company to analyze this data. So, if you are looking for a lot of answers to broad questions, and you need them quickly, perhaps for a presentation, then quantitative research is definitely the way to go.
- Personal Insights: Using qualitative research enables you to get a better idea of your customers’ emotions. Cold, hard facts don’t really show you how your customers feel about a specific topic and the reasons behind their sentiment, while open-ended responses let you ‘listen’ to your customers’ voices.
Difference Between Insightful and Non-Insightful Data
When it comes to customer feedback, you’ll find that not all the information you get is useful to your company. This feedback can be categorized into non-insightful and insightful data. The former refers to data you had already spotted as problematic, while insightful information either helps you confirm your hypotheses or notice new issues or opportunities.
Let’s imagine your company carries out a customer satisfaction survey, and 60% of the respondents claim that the pricing of your product or service is too high. You can use that valuable data to make a decision. That’s why this kind of data is also called actionable insights: it leads to action, validation, or the rethinking of a specific strategy you have already implemented.
Now, let’s take a look at some best practices for preparing your survey data for analysis.
Prepare Survey Data for Analysis
So, you have carried out a survey and have fresh information to analyze. Is it time to make data-based decisions? Not yet! Before examining the responses, you should clean your survey data.
Cleaning data allows you to get the most valuable insights possible and increases the quality and reliability of your findings. Some things you will need to do to prepare your data for analysis are:
- Eliminate duplicate responses. It might come as a surprise, but there are some enthusiastic customers that will answer your survey more than once, especially if you are offering an incentive for completing the questionnaire. Luckily, it’s very easy to delete duplicate content to better structure your survey responses. It’s industry standard to keep a customer’s first answer and eliminate the rest.
- Look for problematic respondents. There are two types of respondents who pollute your data: flatliners and speedsters. Flatliners just pick the same option in a series of multiple-choice questions. Some surveys ask scaled questions such as, “How would you rate our customer service on a scale of 1-10?” A flatliner would assign the same score to every item.
Speedsters, on the other hand, read surveys as fast as they can and answer in a random way. Let’s imagine you have designed a questionnaire to be completed in 30 minutes. A person who answers in six minutes is considered to be a speedster, as it’s just not possible for them to answer each question appropriately in such a short time. As a result, their answers are not valid. Experts recommend ignoring surveys that were completed in a third of the median time of completion.
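As a sketch, the cleaning rules above (keep each respondent’s first answer, drop speedsters, drop flatliners) can be applied with pandas. The column names and toy numbers here are hypothetical; adapt them to your own survey export.

```python
import pandas as pd

# Toy survey export: respondent id, completion time (minutes), three 1-10 ratings.
df = pd.DataFrame({
    "respondent": ["a", "a", "b", "c", "d"],
    "minutes":    [28,   27,  30,  6,   25],
    "q1": [8, 7, 5, 9, 3],
    "q2": [6, 7, 5, 9, 4],
    "q3": [9, 7, 5, 9, 5],
})

# 1. Duplicates: keep each respondent's first answer, drop the rest.
df = df.drop_duplicates(subset="respondent", keep="first")

# 2. Speedsters: drop anyone who finished in under a third of the median time.
cutoff = df["minutes"].median() / 3
df = df[df["minutes"] >= cutoff]

# 3. Flatliners: drop anyone who gave the same score on every scaled question.
scaled = ["q1", "q2", "q3"]
df = df[df[scaled].nunique(axis=1) > 1]

print(sorted(df["respondent"]))  # ['a', 'd'] — the respondents that survive cleaning
```

In this toy data, respondent "a" answered twice (second row dropped), "c" finished in six minutes (speedster), and "b" rated everything a 5 (flatliner).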
Here are some tips to obtain clean data from your surveys:
- Try to include open-ended questions your respondents cannot skip. If they provide nonsensical answers, then you should take a look at their other answers to see if it’s worth analyzing those survey results.
- Use ‘Cheater’ Questions. These are queries aimed at eliminating respondents who cheat when filling in your survey. It’s very easy to spot cheaters in open-ended comments, as they are likely to give random answers. Multiple-choice random answers, on the other hand, are much more difficult to spot. One strategy you can implement is to add questions with commands such as “Select two answers for this question” to see if the respondent is truly paying attention to the instructions.
After cleaning all your data, you can start categorizing your survey responses using different methods. Keep on reading to find out more!
How to Analyze Quantitative Data in Surveys?
Analyzing quantitative surveys may sound difficult, but it’s not. All you need to do is organize your survey responses by coding them and transforming them into aggregated numbers. What does this mean? Counting the total number of people who took your survey and seeing how many of them chose option 1, 2, or 3.
Let’s imagine 200 people answer your survey. One of the questions asks: “How would you rate our product?” The responses are split into:
- Excellent: 100 answers.
- Good: 70 answers.
- Bad: 30 answers.
If you have used tools such as Typeform, you’ll automatically receive aggregated results without having to process them yourself.
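If your tool only gives you raw rows, the tally above takes a couple of lines with pandas (the list of ratings here is made up to match the example):

```python
import pandas as pd

# 200 raw answers to "How would you rate our product?"
ratings = ["Excellent"] * 100 + ["Good"] * 70 + ["Bad"] * 30
s = pd.Series(ratings)

counts = s.value_counts()                       # Excellent 100, Good 70, Bad 30
shares = s.value_counts(normalize=True) * 100   # the same split as percentages

print(counts["Excellent"], round(shares["Bad"]))  # 100 15
```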
In Excel, survey results might look like this:
Maybe you are interested in knowing which age group is likely to buy your product again, so you can target that market. To find out, you’ll need to compare responses across age groups among those who answered ‘Excellent’, a step known as cross-tabulation. One very simple way to cross-tabulate information is using Excel’s pivot table feature. It groups raw data and searches for patterns that will give you better insights than analyzing survey data without cross-tabulation.
In the previous example, you could create the following pivot tables to see which age groups rated each of your products as Excellent:
Product = Sentiment analysis model:
Product = Topic detection model:
Product = Intent detection model:
In these examples, you can see the number of customers within each age group that rated each of our products as ‘Excellent’.
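The same cross-tabulation can be done outside Excel with pandas’ `crosstab`; the age groups and ratings below are made up for illustration:

```python
import pandas as pd

# Hypothetical raw survey rows: one respondent per row.
df = pd.DataFrame({
    "age_group": ["18-25", "18-25", "26-35", "26-35", "36-45", "18-25"],
    "rating":    ["Excellent", "Good", "Excellent", "Excellent", "Bad", "Excellent"],
})

# Cross-tabulate: how many respondents in each age group gave each rating.
table = pd.crosstab(df["age_group"], df["rating"])
print(table)

# Equivalent to an Excel pivot table; e.g. the 18-25 group answered 'Excellent' twice:
print(table.loc["18-25", "Excellent"])  # 2
```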
How to Analyze Qualitative Data in Surveys?
Analyzing qualitative survey data is quite different from examining quantitative answers. But even if it’s challenging and time-consuming, the results obtained will allow you to understand your respondents’ true feelings, views, and opinions on an issue and take action.
To obtain this information, you’ll need to make sense of your data by categorizing responses. Let’s take a look at why tagging is important for analyzing surveys, some ways to categorize your open-ended survey responses, and the best practices for doing so.
Why are Tags Important for Analyzing Surveys?
Most companies organize their text responses by using tags. Why? Because text without structure has almost no value to your company. Let’s picture this: you get hundreds of survey responses, emails, and product reviews every day, but you don’t have a system for tagging, organizing, and analyzing all this incoming data. This means that, although there are a lot of valuable insights within this data, it’s impossible to make sense of them.
Companies apply tags to survey responses so they can then get quantitative insights on open-responses. For example, by tagging survey responses into Positive, Negative, and Neutral, you can then count how many people answered positively or negatively.
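Once responses carry tags, the counting itself is trivial; for example, with Python’s `Counter` over a hypothetical list of sentiment tags:

```python
from collections import Counter

# Hypothetical sentiment tags already applied to open-ended responses.
tags = ["Positive", "Negative", "Positive", "Neutral", "Positive"]

counts = Counter(tags)
print(counts["Positive"])  # 3 — quantitative insight from qualitative answers
```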
Some organizations tag and analyze their text data manually, but this is time-consuming and tedious, especially if you have hundreds of survey responses to get through. Instead, auto-tagging survey responses with machine learning enables you to analyze your survey data in seconds.
How to Categorize Open-Ended Responses?
Open-ended responses can be categorized in many ways. Hubspot, for example, suggests sorting customer feedback into three categories: product, customer service, and marketing and sales:
Product Feedback
When you survey customers for product feedback (that is, any piece of text that mentions a new feature, the name of the product, its pricing, etc.), you’ll find yourself with many and varied responses. You can create sub-categories within this main category that sort product feedback into urgent issues, minor issues, and requests:
- Urgent issues that hinder your product. Let’s imagine you launch a new feature within an existing product, such as a new filter for your photo editing app, and want to know whether your customers are happy about it. If customer feedback highlights an error, such as: ‘since I added the new filter, I can’t use other important features’, you’d categorize this as an important product issue and take immediate action.
- Minor and distracting issues. Going back to the same example, a minor issue would be that two of the new filters are black and white when one should have a blue tint.
- Requests. It may happen that your customers come up with an idea about a feature they think your product should have. This is valuable insight you can take into account, but it all comes down to the volume of requests you receive and the feasibility (and impact) of that particular feature.
Customer Service
Customer satisfaction surveys are often sent after support tickets are closed to find out how happy customers are with the service they received. Hubspot suggests that you look for patterns and the questions that customers ask most often. Sixty-six percent of customers switch companies due to poor service, so improving the quality of yours is crucial to prevent frustrated customers from leaving for the competition. By categorizing open-ended responses related to customer service, you can find out what customers like and dislike about your process and discover ways in which to improve your customer service.
Marketing and Sales
Having a tight feedback loop to keep your marketing and sales teams updated will save you a lot of headaches and problems. For example, imagine that your marketing team mistakenly advertises your mobile app as compatible with iOS. A person pays for the service only to realize that the app does not work on their phone, meaning you’ll have an angry customer to deal with!
You send a survey to find out how they rate the new app and ask an open-ended question to find out the reason for their rating. Obviously, the rating is low and the text response is negative about how the product was falsely marketed. By analyzing this text, you can quickly direct this feedback to the marketing and sales team, who can offer the customer a refund and post a tweet to let other potential iOS customers know that the app is not yet compatible with their software.
RUF: Another Way to Categorize Feedback
Of course, there are other paradigms for organizing and analyzing customer feedback. Atlassian, for example, designed its own framework that suits the needs of SaaS companies: RUF. They propose that you organize your feedback into three categories (Reliability, Usability, and Functionality) and use sub-categories within them.
- Reliability: It refers to the way in which your product performs (with or without errors, for example). Some subcategories include Performance and Bugs.
- Usability: This tag is related to how easy or difficult it is for customers to use your products. Within this category, you may use subtags such as Complexity, Content, or Navigation.
- Functionality: The functionality tag is specific to your product or service. If we take MonkeyLearn as an example, some subtags might include Training Models, Integrations, or Batch Analysis.
Why is (Great) Categorization Important?
Before creating and defining your tagging structure for organizing your survey responses, it’s important to identify the questions you want to answer. Some of your objectives may include:
- Understanding trends in your overall customer satisfaction over time.
- Identifying customer service problems that frustrate your customers.
- Discovering product issues that annoy your customers.
Devote some time to think strategically, and define a structure and criteria for your tags. If you don’t, it will be hard for you to get any value out of your surveys. Once you’ve processed them, it’s a lot of work to go back and re-tag those survey responses.
Inconsistent tagging affects your feedback analytics and your team’s workflow. Teammates might feel confused if your tagging infrastructure is unclear. For example, they may end up tagging every text as General because they don’t know which tags to use for texts, or they can’t find an appropriate tag.
Let’s imagine that someone tags a survey response as General when it’s actually about a Bug. Another teammate may read this response hoping to process it, only to realize that it should have been routed immediately to the dev team so they could work on a fix. Time has been wasted, valuable insights might have been missed, and potential customers may even have been lost.
Well-structured tagging is also essential to training a machine learning algorithm to auto-tag your customer feedback. When creating a custom model in MonkeyLearn, you have to first define your tags and then train the machine learning tool to use them correctly. If your tagging criteria are messy, then the model is likely to make mistakes, giving you incorrect results and insights.
Regardless of whether or not you want to use machine learning to analyze your surveys, it is crucial for you to come up with a clear and consistent tagging system. You’ll understand your customer feedback better, and gain deeper and more accurate insights about your company, such as: what are your customers most confused about? Which aspect often results in poor satisfaction scores? Is your interface simple to use or not?
Now, let’s examine the ways in which your team can improve your feedback tagging process so that your texts are ready for machine learning to analyze!
Best Practices for Tagging Open-Ended Responses
Tagging can be a hassle, especially if you are working with high volumes of data. Luckily, there are some practices that will make this process easier. The following best practices apply to both analyzing feedback manually and automatically:
Take a Look at What Your Respondents Say
As you’ll be creating tags that apply only to your business, you need to first understand what most of your respondents say. It is useful to read approximately 30 open-ended answers from different surveys and jot down notes about the features, themes, or problems people commonly mention. This will help you define your tags.
Think about Consistent Tags
You’ll need clearly defined tags that don’t overlap, especially to start with, so that humans and machines don’t get confused and tag responses incorrectly or inconsistently. Imagine receiving a comment that reads “I’m confused because the page is messy and has too many options”. If you created tags such as Design and Usability, this comment could fall into either category. To make it easier for the team (or the machine learning model) to tag this type of response, we recommend including brief summaries of each tag to make sure the difference between each tag is clear.
Do Not Create Tags That are Too Specific
If you come up with tags that are too specific, your machine learning model won’t have enough data to categorize your texts accurately. Likewise, your team might get confused or even forget about niche tags and opt for the ones they use more often. Instead of creating tags like Speed of Mobile Device, choose a broader topic like Function.
You Don’t Need to Tag Everything
It’s not necessary to tag every survey response, review, or comment you receive. Many of your customers leave comments about issues or problems that are one-offs. Focus on tagging common themes, opportunities, or problems that affect a larger proportion of your customer base.
Try Not to Include too Many Tags
When analyzing your survey responses, you should always choose quality over quantity. If you include more than 15 tags, for example, machines and humans will find it hard to categorize survey responses accurately, not only because having so many options is confusing, but also because teams would have to scroll through a long list of tags looking for the most suitable one.
Help your team (or your model!) to analyze your texts by creating a hierarchy of tags. Grouping tags and having a solid structure makes your model more accurate when making predictions. Instead of lumping all your tags into one flat list, create sub-tags within the main ones. Ease of Use and Design can go inside Usability, for example.
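One way to keep such a hierarchy explicit is a small taxonomy structure like the one below; the tags, summaries, and helper function are illustrative, not a prescribed schema:

```python
# Hypothetical two-level tag taxonomy: main tags with sub-tags, plus a short
# summary so humans (and models) apply each tag consistently.
TAXONOMY = {
    "Usability": {
        "summary": "How easy or hard the product is to use",
        "subtags": ["Ease of Use", "Design"],
    },
    "Reliability": {
        "summary": "Whether the product works without errors",
        "subtags": ["Performance", "Bugs"],
    },
}

def main_tag_for(subtag):
    """Map a sub-tag back to its parent tag, or None if it isn't defined."""
    for main, info in TAXONOMY.items():
        if subtag in info["subtags"]:
            return main
    return None

print(main_tag_for("Design"))  # Usability
```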
Use a Single Classification Criterion per Model
When you analyze your survey responses, there are hundreds of ways to categorize them. For example, if you asked your customers to describe your products, you can categorize those responses in terms of the materials of the product (Wood, Steel, Plastic), its category (Healthcare, Electronics, Home), and so on.
So, instead of creating just one model with all these categories, it is much more convenient and precise to create two smaller models for the different groups of tags (one model for materials, one model for categories). It’s much easier for both people and machines to solve smaller problems separately!
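As a sketch of the one-criterion-per-model idea, here are two tiny, independent scikit-learn classifiers trained on the same texts but different label sets. The data is toy-sized and purely illustrative; a real model would need far more samples:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: the same descriptions, two independent labeling criteria.
texts = [
    "solid oak table for the living room",
    "stainless steel blender for the kitchen",
    "plastic storage box for medical supplies",
    "wooden bed frame for the bedroom",
]
materials  = ["Wood", "Steel", "Plastic", "Wood"]
categories = ["Home", "Electronics", "Healthcare", "Home"]

# One model per criterion, rather than one model mixing both tag sets.
material_model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, materials)
category_model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, categories)

sample = ["wooden chair for the living room"]
print(material_model.predict(sample)[0], category_model.predict(sample)[0])
```

Each model only has to solve its own, smaller problem, which is exactly the point of keeping one classification criterion per model.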
Automating Qualitative Analysis with AI
Now, let’s take a look at what text analysis with machine learning is and how to use it to automatically analyze survey responses.
Text analysis uses Natural Language Processing (NLP) to automate the process of classifying and extracting data from texts, such as survey responses, product reviews, tweets, emails, and more. In other words, it automatically structures your data and allows you to get insights about your business. For this to happen, you will have to train your text analysis model to analyze and sort your data, which isn’t as difficult as it sounds!
Let’s imagine you have a bunch of survey responses and want to analyze them. First, you need to ‘show’ your AI model some of these responses and teach it how to tag each one. Once it has been fed enough samples, it will be able to differentiate responses on its own.
If you think about it, machine learning models learn in a similar way to humans. When we are children, we are shown different objects to identify their primary features. For example, a child sees a ball and recognizes it because it’s round and light enough to throw. After understanding this, the child will be able to see another ball and differentiate it from a doll, even if it’s not exactly the same object they had seen before.
So, to teach your text analysis model to automatically tag your survey responses, you need to tag responses yourself to show your model how to do it. It doesn’t take a lot of time, and it is worth the investment!
How many texts should you tag? Well, that depends on your objective and the type of model you are using. We will take a look at those details below, but it’s important to mention that the more texts you tag, the smarter the model becomes.
After you have provided the algorithm with a certain number of samples, it will start making predictions on its own. MonkeyLearn has a number of pre-trained models that can help you analyze your survey results right away. For example, our sentiment analysis model will help you see if your customers’ responses are Negative, Positive, or Neutral, while our aspect classifier identifies the theme or topic those customers mention.
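For reference, here is roughly what calling a pre-trained MonkeyLearn classifier looks like with its Python SDK (`pip install monkeylearn`). The API key is a placeholder, the batch size is an assumption, and the response-parsing keys reflect the API at the time of writing:

```python
def chunk(items, size=500):
    """Split responses into batches; the API classifies a list of texts per request."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def classify_responses(responses, api_key):
    # Imported here so the rest of the sketch runs without the SDK installed.
    from monkeylearn import MonkeyLearn

    ml = MonkeyLearn(api_key)  # api_key is a placeholder for your own key
    tags = []
    for batch in chunk(responses):
        # 'cl_pi3C7JiL' is the public ID of MonkeyLearn's pre-trained sentiment model.
        result = ml.classifiers.classify("cl_pi3C7JiL", batch)
        tags.extend(r["classifications"][0]["tag_name"] for r in result.body)
    return tags

print(chunk(list(range(5)), size=2))  # [[0, 1], [2, 3], [4]]
```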
Why is it Important to Analyze Surveys?
The amount of data companies get every day is massive. For example, 281 billion emails are sent and received each day, and the figure is expected to increase to over 347 billion in the near future. That’s too much for human beings to analyze alone! And it seems to be a common issue: While 74% of companies say they want to be data-driven, only 29% are good at connecting analytics to action.
Automated text analysis can help with the titanic task of transforming unstructured information into actionable insights. For example, it’s very effective when it comes to auto-tagging a survey.
Tagging your survey responses accurately will not only allow you to understand your customers but also enable you to meet their expectations and solve their problems before they turn to your competitors. Of course, after years in the business you will have your own hypotheses about what your clients like and don’t like, but why not prove those theories with hard, precise facts? Carry out unbiased survey data analysis and develop sensible decisions based on your results. As we have already mentioned, analyzing this data by hand is difficult and time-consuming, so text analysis with AI comes in handy.
Let’s take a look at the benefits of machine learning models when analyzing your surveys.
Scalability
Human agents can only handle a certain number of tasks per day, no matter how hard they work. If all of a sudden you get 1000 responses to a survey you sent out, how will they cope? Adding more members to your team is not only expensive but also time-consuming, as you will have to go through a hiring process and then train agents to tag your survey responses accurately.
Instead of hiring new employees to deal with the extra workload, you can train a machine learning model to sort large quantities of data in next to no time. Be it 50 or 5000 surveys, after running one of our pre-trained models you’ll get results in only seconds!
Real-Time Analysis
Businesses send out qualitative surveys on a regular basis to get insightful feedback about a particular product, feature, service, etc. And, if you’re a medium-sized company, you could get anything from 100 to 3000 responses. This is new information that could give you valuable, up-to-date insights about your business, so you probably want to sort it immediately and share this information with the wider team.
Machine learning models, then, are your best allies because they can analyze this data in real-time. If you spot negative comments, suggestions, or requests that could help you improve your product, you can act on them right away to avoid customer churn and even win new customers.
Consistent Criteria
Tagging survey responses is not only time-consuming but also boring. Research shows that it’s hard to focus when a task is tedious, and this leads to mistakes and inconsistencies. Also, people have different views depending on their cultural, political, and religious values, which will shape the way they categorize texts. For example, they may disagree on whether a text is Positive or Negative, about Pricing or Refunds, or Urgent or Not Urgent.
In contrast, AI-equipped text analysis models never get tired, bored, or change the criteria they use to determine the topic, sentiment, or urgency of a text.
Deeper Customer Understanding
Getting deep insights from your survey responses is the ultimate aim of analyzing feedback. As we mentioned above, it’s crucial to create a defined structure for tagging your texts to truly understand what your customers are saying. By creating sub-tags within main tags, you can get a fine-grained analysis of your text data and not just a general overview.
For example, one of your main tags may be Usability, and you want to know what aspect of usability your customers are talking about. Thus, you can create sub-tags such as Mobile Interface or Loading Speeds.
What is Aspect-Based Sentiment Analysis?
Let’s start with the basics: what is sentiment analysis? Well, it’s the process of identifying the attitude or opinion towards a certain topic, be it positive, neutral, or negative. It’s also known as opinion mining and has a lot of practical applications, such as social media monitoring, product analytics, and, of course, analyzing survey data to understand the opinions of your customers about your products or services.
To carry out this process, the first thing a sentiment analysis model needs to do is determine if a text is subjective or objective. Then, it will be able to classify it into Positive, Negative, or Neutral. For example, a customer might claim:
“The pricing of the package is too expensive”
In this case, this person is expressing a negative opinion about a feature (pricing) of an entity (a package), and the opinion is direct. Alternatively, a client might express their opinion about a product or service by comparing it with others:
“The pricing of package A is too expensive when compared to package B”.
Comparative opinions show how similar or different two or more products or services are. In this case, the customer is saying something positive about package B in contrast to package A.
However, before sentiment analysis models can detect these subtle nuances between positive and negative within the same text, you need to break it down into opinion units. These are fragments that contain just one sentiment.
Here’s another example of a customer response with two sentiments: “Easy to use and excellent design but pricing is too high”. Once this data has been preprocessed into opinion units, you’ll receive fragments like “easy to use and excellent design” and “the pricing is too high”. Then, a sentiment analysis model can easily tag the first opinion as Positive and the second as Negative.
To conclude, breaking down your texts into opinion units makes them more manageable for machine learning models. It helps them tag more accurately, which means you’ll gain better insights. Give our opinion unit extractor a go to see how it works!
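A real opinion unit extractor is a trained model, but a crude rule-based stand-in shows the idea; here we simply split on contrastive connectors:

```python
import re

def opinion_units(text):
    """Naively split a response into fragments that each carry one sentiment.
    This is a toy heuristic, not how an ML-based extractor actually works."""
    parts = re.split(r"\s*(?:\bbut\b|;)\s*", text, flags=re.IGNORECASE)
    return [p.strip() for p in parts if p.strip()]

units = opinion_units("Easy to use and excellent design but pricing is too high")
print(units)  # ['Easy to use and excellent design', 'pricing is too high']
```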
If you want to obtain even more insightful information about your customer surveys, you can carry out aspect-based sentiment analysis, a more advanced technique that will make the most out of your customer feedback by breaking down your text into aspects, allocating each one a sentiment. For example, for the opinion “easy to use and excellent design”, aspect-based sentiment analysis will tag it as Usability (aspect) and Positive (sentiment). You’ll be able to read between the lines and take a look at the specific features of your business that make your customers happy (or not!).
Aspect-based sentiment analysis is great for analyzing open-ended responses. Sometimes, customers mention many different aspects in a single response, which makes it difficult for a person to tag these texts, as in the example above. Your machine learning model will solve this automatically in a few steps. First, process your texts into opinion units. Then, run those results through a sentiment analysis model and, finally, through a topic classifier.
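The three steps can be sketched end-to-end with toy keyword rules standing in for the trained sentiment and topic models:

```python
# Toy pipeline: opinion units -> sentiment -> topic. Both "models" here are
# tiny keyword rules standing in for real trained classifiers.

def opinion_units(text):
    return [p.strip() for p in text.split(" but ")]

def sentiment(unit):
    negatives = ("too high", "slow", "bug")
    return "Negative" if any(w in unit.lower() for w in negatives) else "Positive"

def topic(unit):
    u = unit.lower()
    if "pricing" in u or "price" in u:
        return "Pricing"
    if "design" in u or "easy to use" in u:
        return "Usability"
    return "Other"

response = "Easy to use and excellent design but pricing is too high"
results = [(u, topic(u), sentiment(u)) for u in opinion_units(response)]
print(results)
# [('Easy to use and excellent design', 'Usability', 'Positive'),
#  ('pricing is too high', 'Pricing', 'Negative')]
```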
Getting started with aspect-based sentiment analysis is very simple. Let’s take a look at every step you have to take to analyze your surveys with MonkeyLearn’s custom models.
How to Do Aspect-Based Sentiment Analysis?
Aspect-based sentiment analysis will allow you to base your decisions on objective information after examining your customer surveys in-depth.
Below, you’ll find a detailed tutorial with all the necessary steps (separating text into opinion units, creating a sentiment classifier and, finally, an aspect classifier) to carry out this process successfully.
Preprocess Your Data into Opinion Units
As we mentioned before, this is a crucial step that ensures the accuracy of your aspect-based sentiment analysis. To break your survey responses into opinion units, access your dashboard and click on ‘Explore’:
At the top, click on ‘Extractors’. Here, you’ll find our opinion unit extractor:
To get opinion units from a batch of survey responses, click on New batch and add the Excel or CSV file with your responses:
And that’s it! The model will break down your responses into opinion units and send you a new file!
How to Create a Sentiment Classifier
Now it’s time to perform sentiment analysis on the opinion units. Although you could use one of the pre-trained sentiment analysis models, if you want the most accurate predictions you should train your own model using your own data and criteria.
1- Choose your model
To create your own sentiment classifier, go to your dashboard and click ‘Create model’ in the top right-hand corner of the page. Then, select ‘Classifier’:
In the following screen, choose the sentiment analysis model:
2- Import Your Data
It’s time to import the survey responses that will be used to train the model. You can upload an Excel or CSV file, or integrate apps such as Gmail, Twitter, or Zendesk. In this example, we’ll upload an Excel file of survey responses that have already been separated into opinion units:
3- Start tagging!
This is one of the most important steps when creating your custom model: training. Every text you tag makes your model smarter, and after you have tagged a certain number of texts your model will be ready to make predictions on its own.
4- Test it
Just type something in the text box to see how your model works. You can also upload a new batch of survey responses broken into opinion units to test your sentiment analysis model!
To increase your model confidence, keep on tagging. The more samples the model has, the better its confidence and accuracy.
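To see why every tagged sample matters, here is a toy word-count classifier, a deliberately simplified stand-in for MonkeyLearn's actual model. Each sample you tag adds evidence for that label's vocabulary, which is what drives the predictions:

```python
# Toy word-count sentiment classifier: illustrates how tagged training
# samples drive predictions. NOT MonkeyLearn's actual algorithm.
from collections import Counter

class TinySentimentClassifier:
    def __init__(self):
        self.word_counts = {"Positive": Counter(), "Negative": Counter()}

    def tag(self, text, label):
        """Each tagged sample adds evidence for its label's vocabulary."""
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        """Score each label by how often its training words appear in the text."""
        words = text.lower().split()
        scores = {label: sum(counts[w] for w in words)
                  for label, counts in self.word_counts.items()}
        return max(scores, key=scores.get)

clf = TinySentimentClassifier()
clf.tag("great product easy to use", "Positive")
clf.tag("excellent support", "Positive")
clf.tag("too slow and confusing", "Negative")
clf.tag("terrible pricing", "Negative")

print(clf.predict("easy and excellent"))  # -> Positive
```

With only four tagged samples the vocabulary is tiny and many inputs tie at zero; tagging more texts widens the vocabulary and makes the scores, and hence the confidence, more reliable.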
How to Create an Aspect Classifier
Now, let’s take a look at how to create your aspect classifier, a model that will identify the different topics in your texts. First, go to your dashboard and click the ‘Create’ button. Again, we’ll go for a classifier:
Then, choose the topic classification model:
1- Upload Your Data
Now, upload the data you’ve already analyzed for sentiment to your topic classifier. Remember, you can upload either a CSV or an Excel file with your survey responses, or integrate MonkeyLearn with Zapier, Gmail, Google Sheets and more.
2- Define Your Tags
You’ll need to choose topics that are relevant to the problem you’re trying to solve, or the insights you’re hoping to gain from the survey responses. In the example below, we’ve used the tags Pricing, Ease of Use, and Customer Support.
3- Start Tagging
Now it’s time to start training the model and tagging samples. The more texts you tag, the better equipped your model will be to auto-tag your survey responses on its own.
Once you have tagged enough samples, the model will be ready for you to test.
4- Try it Out!
Just type something in the text box and see how the text analysis tool tags your survey data. If you want to increase its confidence, you just have to keep on tagging samples!
After training the model, you can:
- Upload new responses in an Excel or CSV file to conduct batch analysis.
- Use MonkeyLearn’s integrations with Google Sheets, Zapier, Zendesk, and more to analyze your texts.
- Use our API, if you know how to code.
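For the API route, here is a minimal sketch of a batch classification request. The endpoint and payload shape assume MonkeyLearn's v3 REST API (check the official API docs for your account), and the key and model ID are placeholders; the request is built but not sent, so nothing real is needed to run it:

```python
# Sketch of a batch classification request for MonkeyLearn's REST API.
# Endpoint and payload shape assume the v3 API; API key and model ID
# are placeholders. The request is built but not sent.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"      # placeholder: your account's API key
MODEL_ID = "cl_XXXXXXXX"      # placeholder: your classifier's model ID

def build_classify_request(texts):
    payload = json.dumps({"data": texts}).encode("utf-8")
    return urllib.request.Request(
        url=f"https://api.monkeylearn.com/v3/classifiers/{MODEL_ID}/classify/",
        data=payload,
        headers={
            "Authorization": f"Token {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_classify_request(["Great support but the app is slow"])
print(req.full_url)
# To actually send it (with a valid key): urllib.request.urlopen(req)
```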
So, the steps to follow to conduct aspect-based sentiment analysis are:
- Break down your survey responses into opinion units using an extractor.
- Create a sentiment analysis model and upload the Excel file with your opinion units to train and test your model.
- Create an aspect classifier to examine the file you get after carrying out sentiment analysis.
- Run the analysis of survey responses using the sentiment and aspect classifiers.
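Once both classifiers have run, the combined output is simply a list of (aspect, sentiment) pairs, one per opinion unit, which you can then aggregate. A minimal sketch using hypothetical results:

```python
# Aggregate hypothetical (aspect, sentiment) results into per-aspect
# sentiment counts -- the shape you would chart in the next section.
from collections import defaultdict

# Made-up output of the sentiment + aspect classifiers, one pair per opinion unit.
results = [
    ("Usability", "Positive"), ("Usability", "Positive"),
    ("Pricing", "Negative"), ("Customer Support", "Positive"),
    ("Pricing", "Negative"), ("Usability", "Negative"),
]

summary = defaultdict(lambda: {"Positive": 0, "Negative": 0})
for aspect, sentiment in results:
    summary[aspect][sentiment] += 1

for aspect, counts in summary.items():
    print(f"{aspect}: {counts['Positive']} positive / {counts['Negative']} negative")
# -> Usability: 2 positive / 1 negative
# -> Pricing: 0 positive / 2 negative
# -> Customer Support: 1 positive / 0 negative
```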
Check out this guide if you want to know more about text analysis tools and how they work.
Data Visualization of the Results
You have run all your survey responses through the aspect-based sentiment analysis model. Now what? Using visuals is a great way to present your results in a clear and inspiring way. With data visualization, it’s easier to detect insights and make better decisions. If you have analyzed 10,000 survey responses, that means 10,000 cells of information in an Excel file. It’s far easier to spot trends in a graph built with an online tool than in a wall of spreadsheet cells.
For example, in 2018 we performed sentiment analysis on Capterra reviews about Slack. We obtained some interesting results on how people were talking positively or negatively about the different aspects of Slack:
However, these results are better visualized when using beautifully designed graphs, which catch readers’ attention and are much easier to understand:
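Before feeding your results into any of the tools below, you typically reshape raw counts into chart-ready figures such as percentage positive per aspect. A small sketch with made-up numbers:

```python
# Turn per-aspect sentiment counts (made-up numbers) into the
# percentage-positive figures a bar chart would display.
counts = {
    "Ease of Use": {"Positive": 80, "Negative": 20},
    "Pricing": {"Positive": 30, "Negative": 70},
    "Customer Support": {"Positive": 55, "Negative": 45},
}

def pct_positive(c):
    total = c["Positive"] + c["Negative"]
    return round(100 * c["Positive"] / total, 1)

for aspect, c in counts.items():
    print(f"{aspect}: {pct_positive(c)}% positive")
# -> Ease of Use: 80.0% positive
# -> Pricing: 30.0% positive
# -> Customer Support: 55.0% positive
```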
Let’s take a look at some data visualization tools available for you to use:
Google Data Studio
One of the main tools for data visualization is Google Data Studio, which helps to communicate data in a simple way. Its interface is intuitive and very similar to Google Drive, plus, it can extract data from Excel or CSV files. Need to load large amounts of information from multiple files? That’s rarely a problem, since each upload can be up to 100 megabytes. If you want to learn how to use Google Data Studio, there are some very useful official tutorials out there that can help.
Looker
Looker is another fine option for visualizing the results of your survey analysis. This tool allows you to analyze large and small amounts of data in real time and uses data analytics to interpret results. Looker can be connected to different databases and build user-friendly reports that can be shared with other teammates. It’s very simple to use, and you can learn more about how it works by checking out these online resources.
Tableau
Finally, another great tool for data visualization is Tableau. It makes working with analytics and large amounts of data quite intuitive and simple. Using it doesn’t require any technical skills, so practically anyone can analyze information with it. Not only is it user-friendly, but it also works with almost any type of data source, including Excel and CSV files, creating bar charts in no time at all. Learning how to create graphs using Tableau is straightforward, and the company offers a thorough tutorial so you can understand each and every function that the platform has to offer.
Though these are the main players in the data visualization market, there are some other very interesting options, such as Klipfolio and Mode Analytics, which can help you better understand your data.
Conducting surveys is crucial for businesses to check customer satisfaction and to obtain powerful insights that will improve the overall customer experience. Business owners have two options when conducting surveys: qualitative and quantitative methods. Both have their own benefits and drawbacks, which we explored in the previous sections. On the one hand, quantitative data is useful for obtaining cold, hard numbers and proving a hypothesis. Its main benefit is the ease of analyzing survey results, though the insights obtained are not as detailed and significant as the ones you can get from qualitative analysis.
Analyzing open-ended questions in surveys can be a hassle, especially if you haven’t come up with a solid tagging structure. Manually reading and tagging each survey response is boring and inefficient, and even if you have a hard-working team, human agents get tired, distracted, and make mistakes. Yes, you could send out closed-ended surveys that deliver results that are easy to quantify, but you won’t gain granular insights about how your customers feel about specific topics.
Qualitative feedback is where the real insights lie, so you need tools that can help you extract them in the most effective way possible. That’s why analyzing your survey feedback with machine learning is key.
MonkeyLearn offers a plethora of pre-trained machine learning models, or you can create your own extractors or classifiers using our intuitive interface. As you have seen, it’s very easy and you can choose tags that are tailored to your business to gain deeper insights from your survey responses.
If you are interested in getting started using AI for survey analysis, request a demo to get more information. Our team is ready to help you start analyzing your surveys using machine learning models right away.