How to Take The Pain Out of Survey Analysis in 2021

Surveys are one of the most popular ways to gather customer feedback and understand your customers’ needs.

From customer satisfaction and market surveys to product and event surveys, you can choose from a number of surveys to send your customers and learn where your products, services, or business may be lacking (or excelling).

While creating surveys is fairly straightforward using tools like SurveyMonkey or Typeform, analyzing survey results is hard, especially if you have thousands of responses to sort – and a large portion of those responses are open-ended.

Thankfully, there are all sorts of tools that take the pain out of survey data analysis. In this article, we’ll explain what survey analysis is and the quickest way to perform survey analysis on different types of data.

What Is Survey Analysis?

Survey analysis is the process of conducting and analyzing surveys for results that lead to actionable insights. Survey analysis examples include Net Promoter Score (NPS), customer satisfaction (CSAT), marketing, research, employee satisfaction, and more.

Conducting surveys to obtain customer feedback is crucial for companies to understand their customers’ positive and negative perceptions about products and services. You can conduct either open or closed-ended surveys, two very different instruments that will give you different types of information: qualitative and quantitative data.

However you choose to conduct your surveys, you’re guaranteed powerful results when you have the right processes in place. Read on to learn how surveys can help your business, and the best course of action for analyzing and visualizing your surveys.

Quantitative Vs. Qualitative Data

Obtaining quantitative data means measuring values expressed in numbers. This method is structured and helps you draw general conclusions from your surveys. For example, if you would like to know how many customers like your new product, you can conduct quantitative research using closed-ended surveys and obtain numbers as a result (82% liked it, 15% did not, and the rest are unsure).

Quantitative surveys are easier to analyze because they simply deal with statistics and rating scales, but they don’t provide a whole lot of insight.

Qualitative data, on the other hand, describes and explains a topic or phenomenon rather than measuring it. Instead of values expressed in numbers, qualitative data comes in the format of open-ended responses. Qualitative survey data can go beyond what happened and uncover why it happened.

Qualitative research collects and derives meaning from open-ended responses to get at the views, opinions, and feelings of the respondent. So, if you are more interested in knowing why 15% of your customers did not like your new product, you can conduct qualitative research and ask them open-ended questions, like: Can you tell us a little bit more about why you didn’t like our new product?

Qualitative surveys provide more insightful information about your business and customers, but they’re a bit trickier to analyze than quantitative surveys. We’ll show you how to analyze qualitative survey responses a little later.

The differences between qualitative and quantitative data, including a definition, types of data, what it answers, and examples of both data types.

When to Use Quantitative Vs. Qualitative Surveys?

Quantitative surveys are great for obtaining survey analysis results that help you see the bigger picture, while qualitative data goes more in-depth to understand why people feel a certain way.

If you need a general overview based on cold, hard facts, then you should go for quantitative research. This approach is also useful to confirm or reject hypotheses you may have about how customers feel about a specific topic or why they behave in a certain way.

If you are more interested in getting to know more about your customers’ views, opinions, and beliefs, then a qualitative approach is more appropriate. This approach is also useful for discovering issues or opportunities you may not have even considered.

Let’s take a look at some instances in which it’s better to apply qualitative methods over quantitative, and vice-versa.

  • Generate New Ideas: Qualitative surveys gather in-depth data about an event, topic, product, and so on. The feedback you obtain will help you make data-based decisions and improve your product or service. Also, customer surveys can help you come up with new ideas tailored to your customers’ needs and requirements.
  • Get More Answers: Responding to a quantitative survey is easier than answering a lot of open-ended questions, so more customers are likely to respond. It’s also simpler for the company to analyze this data. So, if you are looking for a lot of answers to broad questions, and you need them quickly, perhaps for a presentation, then quantitative research is definitely the way to go.
  • Gather Personal Insights: Using qualitative research enables you to get a better idea of your customers’ emotions. Cold, hard facts don’t really show you how your customers feel about a specific topic and the reasons behind their sentiment, while open-ended responses let you “listen” to the voice of the customer (VoC).

Difference Between Insightful and Non-Insightful Data

When it comes to customer survey analysis, you’ll find that not all the information you get is useful to your company. This feedback can be categorized into non-insightful and insightful data. The former refers to data about problems you had already spotted, while insightful information either helps you confirm your hypotheses or reveals new issues and opportunities.

Let’s imagine your company carries out a customer satisfaction survey, and 60% of the respondents claim that the pricing of your product or service is too high. You can use that valuable data to make a decision. That’s why this kind of data is also called actionable insight: it leads to action, validation, or the rethinking of a specific strategy you have already implemented.

How to Analyze Survey Data

Follow along for some best practices for preparing and analyzing both quantitative and qualitative survey data.

Prepare Survey Data for Analysis

So, you have carried out a survey and have fresh information to analyze. Is it time to make data-based decisions? Not yet! Before examining the responses, you should clean your survey data.

Cleaning data allows you to get the most valuable insights possible and increases the quality and reliability of your findings. Some things you will need to do to prepare your data for analysis are:

  • Eliminate duplicate responses. It might come as a surprise, but there are some enthusiastic customers that will answer your survey more than once, especially if you are offering an incentive for completing the questionnaire. Luckily, it’s very easy to delete duplicate content to better structure your survey responses. It’s industry standard to keep a customer’s first answer and eliminate the rest.
  • Look for problematic respondents. There are two types of respondents who pollute your data: flatliners and speedsters. Flatliners just pick the same option in a series of multiple-choice questions. Some surveys ask scaled questions such as, How would you rate our customer service on a scale of 1-10? A flatliner would assign the same score to every item.

Speedsters, on the other hand, read surveys as fast as they can and answer in a random way. Let’s imagine you have designed a questionnaire to be completed in 30 minutes. A person who answers in six minutes is considered a speedster, as it’s just not possible for them to answer each question appropriately in such a short time. As a result, their answers are not valid. Experts recommend ignoring surveys completed in less than a third of the median completion time.
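These cleaning rules are easy to automate. Below is a minimal sketch in plain Python; the field names `email`, `scores`, and `minutes` are hypothetical, so substitute whatever your survey tool actually exports:

```python
from statistics import median

def clean_responses(responses):
    """Filter duplicate, flatliner, and speedster survey responses.

    `responses` is a list of dicts with hypothetical keys:
    'email', 'scores' (a list of scaled ratings), 'minutes' (completion time).
    """
    # 1. Keep only each respondent's first answer (industry standard).
    seen, deduped = set(), []
    for r in responses:
        if r["email"] not in seen:
            seen.add(r["email"])
            deduped.append(r)

    # 2. Drop flatliners: the same score on every scaled question.
    varied = [r for r in deduped if len(set(r["scores"])) > 1]

    # 3. Drop speedsters: finished in under a third of the median time.
    cutoff = median(r["minutes"] for r in varied) / 3
    return [r for r in varied if r["minutes"] >= cutoff]
```

At larger scale, spreadsheet filters or `pandas.DataFrame.drop_duplicates` can do the deduplication step for you.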

Here are some tips to obtain clean data from your surveys:

  • Try to include open-ended questions your respondents cannot skip. If they provide nonsensical answers, then you should take a look at their other answers to see if it’s worth analyzing those survey results.
  • Use ‘Cheater’ Questions. These are queries aimed at eliminating respondents who cheat when filling in your survey. It’s very easy to spot cheaters in open-ended comments, as they are likely to give random answers. Multiple-choice random answers, on the other hand, are much more difficult to spot. One strategy you can implement is to add questions with commands such as Select two answers for this question to see if the respondent is truly paying attention to the instructions.

After cleaning all your data, you can start categorizing your survey responses using different methods.

Analyze Quantitative Survey Data First

Analyzing quantitative surveys may sound difficult, but it’s not. Actually, all you need to do is organize your survey responses by coding them and transforming them into aggregated numbers. What does this mean? Counting the total number of people who took your survey, then seeing how many of them chose option 1, 2, 3, and so on.

Let’s imagine 200 people answer your survey. One of the questions asks: “How would you rate our product?” The responses are split into:

  • Excellent: 100 answers.
  • Good: 70 answers.
  • Bad: 30 answers.

If you have used tools such as Typeform, you’ll automatically receive aggregated survey analysis results without having to process them yourself.
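If you prefer to aggregate the raw responses yourself, it takes only a few lines of code. A quick sketch of the count above in plain Python:

```python
from collections import Counter

# Hypothetical answers to "How would you rate our product?"
answers = ["Excellent"] * 100 + ["Good"] * 70 + ["Bad"] * 30

counts = Counter(answers)
total = sum(counts.values())
for option, n in counts.most_common():
    print(f"{option}: {n} answers ({n / total:.0%})")
# Excellent: 100 answers (50%)
# Good: 70 answers (35%)
# Bad: 30 answers (15%)
```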

Net Promoter Score (NPS) is a popular survey that simply asks how likely customers are to recommend a business or product. Take a look below at the data analysis of 2,800 responses from an NPS survey of the MonkeyLearn Word Cloud Generator. We asked the question: How likely are you to recommend MonkeyLearn WordCloud to a friend or colleague? with responses scaled from 0 to 10.

Programs like Excel and Google Sheets can also be helpful tools for quantitative analysis. In a spreadsheet, the overall survey results analysis looks like this:

A spreadsheet showing NPS survey analysis results by Country, Device, Browser, and Operating System.

The respondents used the word cloud tool on desktop, tablet, or smartphone. You can filter the data by device to see how well each scored and find out which device was most popular by country:

NPS survey results filtered by Phone.

Filter by country to see how well-liked the app is by region or to find out if it may not be working well with a particular language:

NPS survey results filtered by Country: Mexico.

Filter by browser to see the preferred way to use the tool:

NPS survey results filtered by Browser: Safari.

Or with which browser it may be particularly buggy:

NPS survey results filtered by Browser: Chrome Mobile iOS.

You can get more fine-grained with your results by performing calculations, like mean, maximum, and minimum or using pivot tables.
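If you export the raw responses, grouped summary statistics (the pivot-table equivalent) take only a few lines of code. A sketch with made-up `(device, score)` pairs:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (device, NPS score) pairs from a survey export.
rows = [("Desktop", 9), ("Phone", 6), ("Desktop", 10),
        ("Tablet", 8), ("Phone", 7), ("Desktop", 7)]

# Group scores by device, like a pivot table's row field.
by_device = defaultdict(list)
for device, score in rows:
    by_device[device].append(score)

# Mean, maximum, and minimum score per device.
for device, scores in sorted(by_device.items()):
    print(device, round(mean(scores), 1), max(scores), min(scores))
```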

Break down your analysis to see overall count and percentage of scores:

NPS survey results filtered by score.

Follow the overall NPS score over time:

A graph showing NPS results over time.

Survey data displayed in charts and graphs offers a quick and easy-to-understand view of your quantitative data:

A bar graph showing distribution of NPS results: Detractors, Passives, Promoters.
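The headline NPS figure behind a chart like this is simple arithmetic: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A minimal sketch:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 50 promoters, 30 passives, 20 detractors out of 100 responses:
print(nps([10] * 50 + [8] * 30 + [3] * 20))  # 30
```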

With your quantitative survey data analysis, you’ve uncovered the statistics and details of what is happening. Next, we’ll show you how to dig into why it’s happening with qualitative survey data analysis. Read on to learn how qualitative survey analytics works, then we’ll walk you through a tutorial to analyze the open-ended follow-up questions to the NPS survey above: How can we improve MonkeyLearn WordCloud for you? and Any features we are missing?

Analyze Qualitative Survey Data

Analyzing qualitative information requires more steps, but the survey results will allow you to understand your respondents’ true feelings and opinions on any given issue and help you take action with powerful insights. And with the help of machine learning tools, qualitative survey analysis can be quite simple – and much more accurate than manual analysis.

You’ll first need to make sense of your data by categorizing responses. Let’s take a look at some ways to categorize your open-ended survey responses, and the best practices for doing so.

How to Analyze Open-Ended Responses?

Open-ended responses can be analyzed and categorized in many ways. For example, Hubspot suggests three categories for sorting customer feedback: product, customer service, and marketing and sales:


Product

When sending customers surveys for product feedback (that is, any piece of text that mentions a new feature, the name of the product, its pricing, etc.), you’ll find yourself with many, and varied, responses. You can create sub-categories within this main category that sort product feedback into urgent issues, minor issues, and requests:

  • Urgent issues that hinder your product. Let’s imagine you launch a new feature within an existing product, such as a new filter for your photo editing app, and want to know whether your customers are happy about it. If customer feedback highlights an error, such as: Since I added the new filter, I can’t use other important features, you’d categorize this as an important product issue and take immediate action.
  • Minor and distracting issues. Going back to the same example, a minor issue would be that two of the new filters are black and white when one should have a blue tint.
  • Requests. It may happen that your customers come up with an idea about a feature they think your product should have. This is valuable insight you can take into account, but it all comes down to the volume of requests you receive and the feasibility (and impact) of that particular feature.

Customer Service

Customer satisfaction surveys are often sent after support tickets are closed to find out how happy customers are with the service they received. Hubspot suggests that you look for patterns and the questions customers ask most often. When consumers were asked what impacts their level of trust in a company, excellent customer service ranked number one, so ensuring the quality of yours is crucial to retaining customers and preventing them from switching to the competition. By categorizing open-ended responses related to customer service, you can find out what customers like and dislike about your process and discover ways to improve it.

Marketing and Sales

Having a tight customer feedback loop to keep your marketing and sales team updated will save you a lot of headaches and problems. For example, imagine that your marketing team mistakenly advertises your mobile app as compatible with iOS. A person pays for the service only to realize that the app doesn’t work on their phone!

You send a survey to find out how they rate the new app and ask open-ended questions to find out the reason for their rating. Obviously, the rating would be low and the text response negative about how the product was falsely marketed. By analyzing this text, you can quickly direct this feedback to the marketing and sales team, who can offer the customer a refund and post a tweet to let other potential iOS customers know that the app is not yet compatible with their software.

RUF: Another Way to Categorize Feedback

Of course, there are other paradigms for organizing and analyzing customer feedback. Atlassian, for example, designed its own framework that suits the needs of SaaS companies: RUF. They propose that you organize your feedback into 3 categories (Reliability, Usability, and Functionality), and use sub-categories within them.

  • Reliability: This refers to the way your product performs (with or without errors, for example). Some subcategories include Performance and Bugs.
  • Usability: This tag is related to how easy or difficult it is for customers to use your products. Within this category, you may use subtags such as Complexity, Content, or Navigation.
  • Functionality: The functionality tag is specific to your product or service. If we take MonkeyLearn as an example, some subtags might include Training Models, Integrations, or Batch Analysis.
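Before training a full machine learning model, you can prototype a RUF-style tagger with simple keyword rules. This is only an illustrative sketch: the keyword lists are invented, and a real tagger should be trained on your own data:

```python
# Invented keyword rules for Atlassian-style RUF categories.
RUF_RULES = {
    "Reliability":   ["crash", "bug", "slow", "error", "down"],
    "Usability":     ["confusing", "hard to", "navigate", "cluttered"],
    "Functionality": ["feature", "integration", "export", "api"],
}

def ruf_tags(response):
    """Return every RUF category whose keywords appear in the response."""
    text = response.lower()
    return [cat for cat, words in RUF_RULES.items()
            if any(w in text for w in words)] or ["General"]

print(ruf_tags("The app crashes when I export data"))
# ['Reliability', 'Functionality']
```

Rules like these break down quickly on real language (sarcasm, typos, synonyms), which is exactly the gap a trained classifier fills.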

Why Is (Great) Categorization Important?

Before creating and defining your tagging structure for organizing your survey responses, it’s important to identify the questions you want to answer. Some of your objectives may include:

  • Understanding trends in your overall customer satisfaction over time.
  • Identifying customer service problems that frustrate your customers.
  • Discovering product issues that annoy your customers.

Devote some time to think strategically, and define a structure and criteria for your tags. If you don’t, it will be hard for you to get any value out of your surveys. Once you’ve processed them, it’s a lot of work to go back and re-tag those survey responses.

Inconsistent tagging affects your feedback analytics and your team’s workflow. Teammates might feel confused if your tagging infrastructure is unclear. For example, they may end up tagging every text as General because they don’t know which tags to use for texts, or they can’t find an appropriate tag.

Let’s imagine that someone tags a survey response as General when it’s actually about a Bug. Another teammate may read this response hoping to process the information, only to realize that it should have been routed immediately to the dev team to work on a fix. Time has been wasted, valuable insights have been missed, and customers may even have been lost.

Well-structured tagging is also essential to training a machine learning algorithm to auto-tag your customer feedback. When creating a custom model in MonkeyLearn, you have to first define your tags and then train the machine learning tool to use them correctly. If your tagging criteria is messy, then the model is likely to make mistakes, giving you incorrect results and insights.

Regardless of whether or not you want to use machine learning to analyze your surveys, it’s crucial to come up with a clear and consistent tagging system. You’ll understand your customer feedback better, and gain deeper and more accurate insights about your company, such as: what are your customers most confused about? Which aspect often results in poor satisfaction scores? Is your interface easy to use or not?

Now, let’s examine the ways in which your team can improve your feedback tagging process so that your texts are ready for machine learning to analyze!

Best Practices for Tagging Open-Ended Responses

Tagging can be a hassle, especially if you are working with high volumes of data. Luckily, there are some practices that will make this process easier. The following best practices apply to both analyzing feedback manually and automatically:

Take a look at what your respondents say

As you’ll be creating tags that apply only to your business, you need to first understand what most of your respondents say. It is useful to read approximately 30 open-ended answers from different surveys and jot down notes about the features, themes, or problems people commonly mention. This will help you define your tags.

Think about consistent tags

You’ll need clearly-defined tags that don’t overlap, especially to start with, so that humans and machines don’t get confused and tag responses incorrectly or inconsistently. Imagine receiving a comment that reads I’m confused because the page is messy and has too many options. If you created tags such as Design and Usability, this comment could fall into either category. To make it easier for the team (or the machine learning model) to tag this type of response, we recommend including brief summaries of each tag to make sure the difference between each tag is clear.

Don’t create tags that are too specific

If you come up with tags that are too specific, your machine learning model won’t have enough data to categorize your texts accurately. Likewise, your team might get confused or even forget about niche tags and opt for the ones they use more often. Instead of creating tags like Speed of Mobile Device, choose a broader topic like Function.

You don’t need to tag everything

It’s not necessary to tag every survey response, review, or comment you receive. Many of your customers leave comments about issues that are one-off and unique. Focus on tagging common themes, opportunities, or problems that affect a larger proportion of your customer base.

Try not to include too many tags

When analyzing your survey responses, you should always choose quality over quantity. If you include more than 15 tags, for example, machines and humans will find it hard to categorize survey responses accurately. Not only because it’s confusing having so many options, but also because teams would have to scroll down a long list of tags, looking for the most suitable one.

Embrace hierarchy

Help your team (or your model!) to analyze your texts by creating a hierarchy of tags. Grouping tags and having a solid structure makes your model more accurate when making predictions. Instead of lumping tags into one category, create sub-tags within the main ones. Ease of Use and Design can go inside Usability, for example.
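In code, a tag hierarchy is often just a nested structure that resolves a sub-tag to its parent. A tiny sketch (the tag names are the examples from above):

```python
# A two-level tag hierarchy: parent tag -> list of sub-tags.
TAG_TREE = {
    "Usability": ["Ease of Use", "Design"],
    "Reliability": ["Performance", "Bugs"],
}

def full_path(subtag):
    """Return 'Parent > Subtag' for a given sub-tag."""
    for parent, children in TAG_TREE.items():
        if subtag in children:
            return f"{parent} > {subtag}"
    raise KeyError(subtag)

print(full_path("Design"))  # Usability > Design
```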

Use a single classification criterion per model

When you analyze your survey responses, there are hundreds of ways to categorize them. For example, if you asked your customers to describe your products, you can categorize those responses in terms of the materials of the product (Wood, Steel, Plastic), its category (Healthcare, Electronics, Home), and so on.

So, instead of creating just one model with all these categories, it is much more convenient and precise to create two smaller models for the different groups of tags (one model for materials, one model for categories). It’s much easier for both people and machines to solve smaller problems separately!

Automating Qualitative Survey Analysis with AI

Now, let’s take a look at what text analysis with machine learning is and how to use it to automatically analyze survey responses.

Text analysis uses natural language processing (NLP) to automate the process of classifying and extracting data from texts, such as survey responses, product reviews, tweets, emails, and more. In other words, it automatically structures your data and allows you to get insights about your business. For this to happen, you will have to train your text analysis model to analyze and sort your data, which isn’t as difficult as it sounds!

Let’s imagine you have a bunch of survey responses and want to analyze them. First, you need to ‘show’ your AI model some of these responses and tag each one. Once you have tagged enough samples, it will be able to differentiate responses on its own.

How many texts should you tag? Well, that depends on your objective and the type of model you are using. We’ll take a look at those details below, but it’s important to mention that the more texts you tag, the smarter the model becomes.

After you have provided the algorithm with a certain number of samples, it will start making predictions on its own. MonkeyLearn has a number of pre-trained models that can help you obtain survey analysis results right away. Give them a try to see how they work:

  • Keyword Extractor: extract the most used and most important words from your survey responses.

  • Company Extractor: automatically extracts the names of businesses and organizations from surveys or any text.

The great thing about MonkeyLearn is that you can train custom models of the above tools, and more, to the language and criteria of your business in just a few steps.

Why Is it Important to Analyze Surveys?

The amount of data companies get every day is massive. For example, 281 billion emails are sent and received each day, and the figure is expected to increase to over 347 billion in the near future. That’s too much for human beings to analyze alone!

Automated text analysis can help with the titanic task of transforming unstructured information into actionable insights. For example, it’s very effective when it comes to auto-tagging a survey.

Tagging your survey responses accurately will not only allow you to understand your customers but also enable you to meet their expectations and solve their problems before they turn to your competitors. Best of all, a well-trained analysis model applies the same criteria consistently across every response, so you don’t have to worry about the results drifting.


Scalability

Human agents can only handle a certain number of tasks per day, no matter how hard they work. If you suddenly get 1,000 responses to a survey you sent out, how will they cope? Adding more members to your team is not only expensive but also time-consuming, as you will have to go through a hiring process and then train agents to tag your survey responses accurately.

Instead of hiring new employees to deal with the extra workload, you can train a machine learning model to sort thousands of surveys in just minutes.

Real-Time Analysis

Businesses send out qualitative surveys on a regular basis to get insightful feedback about a particular product, feature, service, etc. And, if you’re a medium-sized company, you could get anything from 100 to 3,000 responses. This is new information that could give you valuable, up-to-date insights about your business, so you probably want to sort it immediately and share this information with the wider team.

Consistent Criteria

Tagging survey responses is not only time-consuming but also a boring task, and that leads to mistakes and inconsistencies. People also have different views depending on their cultural and personal values, which shape the way they categorize texts. For example, they may disagree on whether a text is Positive or Negative, about Pricing or Refunds, or Urgent or Not Urgent.

In contrast, AI-equipped text analysis models will never get tired, bored, or change their criteria.

Deeper Customer Understanding

Getting deep insights from your survey responses is the ultimate aim of analyzing feedback. As we mentioned above, it’s crucial to create a defined structure for tagging your texts to truly understand what your customers are saying. By creating sub-tags within main tags, you can get a fine-grained analysis of your text data and not just a general overview.

For example, one of your main tags may be Usability, and you want to know what aspect of usability your customers are talking about. Thus, you can create sub-tags such as Mobile Interface or Loading Speeds.

Survey Analysis Tools

Find relevant survey analysis examples and automate survey analysis with these tools. These are some of the best tools for creating, executing, and analyzing surveys.

Tools for Creating and Conducting Surveys

  • SurveyMonkey: an inexpensive online survey tool with templates for all kinds of popular surveys and relevant survey question examples.
  • Typeform: an easy-to-use survey tool to create customized online surveys, forms, polls, and questionnaires.
  • Google Forms: a great survey tool to integrate easily with Google Sheets, Google Docs, and Google Slides.
  • Alchemer (formerly SurveyGizmo): focused heavily on VoC, Alchemer is a helpful tool for gathering surveys and other customer data from multiple sources.
  • GetFeedback: create and conduct mobile surveys easily with templates to measure customer satisfaction, customer effort score, product experience, and more.
  • Delighted: focused on Net Promoter Score (NPS), with Delighted you can customize surveys, add follow-up questions, and send them via email or SMS.
  • an end-to-end solution for creating productive NPS surveys with high response rates.

Tools for Survey Data Analysis

When it’s time to analyze your surveys, you no longer have to waste countless hours processing them manually. AI-guided machine learning tools can automate survey analysis to save time and money and perform with near perfect accuracy.

Deciding what text analysis tools you want to use comes down to the Build vs. Buy Debate. If you have a data science and computer engineering background or are prepared to hire whole teams of engineers, building your own text analysis tools can produce great survey analysis results, though it’s still expensive and time consuming.

SaaS tools with easy-to-implement APIs, on the other hand, provide low to no code solutions and can integrate easily with other survey tools to get your analysis up and running, usually in just a few steps. Best of all, they can be customized to the language and criteria of your business.

  • Thematic offers simple integrations with SurveyMonkey, Zendesk, Medallia, and more, or allows you to extract data from NPS surveys or your personal database. Thematic’s text processing finds recurring themes and subjects to present infographics and insights that are easy to understand.
  • Idiomatic is designed to take full advantage of VoC by allowing communication across all internal departments, so marketing, customer support, and product development are always in touch with the most recent survey metrics.
  • ScopeAI aims to centralize product feedback by helping your business perform regular NLP surveys and track down other useful feedback to ensure your product design is up to speed with your customers’ needs.
  • Prodsight combines survey analytics and CX analytics from review sites and internal CRM data to help make sure you’re following the customer journey from start to finish.

What is Aspect-Based Sentiment Analysis?

Let’s start with the basics: sentiment analysis is the process of identifying the attitude or opinion toward a certain topic, be it positive, neutral, or negative. It’s also known as opinion mining and can be used to analyze survey data to understand the opinions of your customers toward your products or services.

To carry out this process, the first thing a sentiment analysis model needs to do is determine if a text is subjective or objective. Then, it will be able to classify it into Positive, Negative, or Neutral. For example, a customer might claim:

“The pricing of the package is too expensive”

In this case, this person is expressing a negative opinion about a feature (pricing) of an entity (a package), and the opinion is direct. A client might also express their opinion about a product or service by comparing it with others:

“The pricing of package A is too expensive when compared to package B”.

Comparative opinions show how similar or different two or more products or services are. In this case, the customer is saying something positive about package B in contrast to package A.

However, before sentiment analysis models can detect these subtle nuances between positive and negative within the same text, you need to break it down into opinion units. These are fragments that contain just one sentiment.

Here’s another example of a customer response with two sentiments: “Great design, but it’s really expensive.” Once this data has been preprocessed into opinion units, you’ll receive fragments like: “Great design” and “but it’s really expensive.” A sentiment analysis model can then easily tag the first opinion as Positive and the second as Negative.
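A production opinion-unit extractor uses a trained model, but you can approximate the idea by splitting on sentence boundaries and contrast conjunctions. A naive sketch, for illustration only:

```python
import re

def opinion_units(text):
    """Naively split a response into one-sentiment fragments on
    sentence boundaries and contrast conjunctions."""
    parts = re.split(r"(?:[.!?]\s+|,?\s+(?:but|however|although)\s+)",
                     text, flags=re.I)
    return [p.strip(" .!?,") for p in parts if p.strip(" .!?,")]

print(opinion_units("Great design, but it's really expensive."))
# ['Great design', "it's really expensive"]
```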

Give our opinion unit extractor a go to see how it works:

Test with your own text


OPINIONThe hotel has a great location
OPINIONbut all in all it was a horrible experience!
OPINIONOnly stayed here because it was the pre-accommodation choice for one of our tours
OPINIONWill never stay here again!

If you want to obtain even more insightful information from your customer survey analysis, you can carry out aspect-based sentiment analysis, a more advanced technique that will make the most out of your customer feedback by breaking down your text into aspects, allocating each one a sentiment. For example, for the opinion “Great design” aspect-based sentiment analysis will tag it as Functionality (aspect) and Positive (sentiment). You’ll be able to read between the lines and take a look at the specific features of your business that make your customers happy (or not!).

Getting started with aspect-based sentiment analysis is easy. Follow along below, and we’ll show you how to create custom models in a snap.

How to Do Aspect-Based Sentiment Analysis

Aspect-based sentiment analysis will allow you to base your decisions on objective information after examining your customer surveys in-depth.

Preprocess Your Data into Opinion Units

As we mentioned before, this is a crucial step that ensures the accuracy of your aspect-based sentiment analysis. First, create a free MonkeyLearn account.

To break your survey responses into opinion units, access your dashboard and click on ‘Explore’:

MonkeyLearn dashboard showing the selection: "Explore."

At the top, click on ‘Extractors’. Here, you’ll find our opinion unit extractor:

Choosing "Opinion Unit Extractor."

To get opinion units from a batch of survey responses, click on ‘new batch’ and add the Excel or CSV file with your responses:

Uploading a CSV file.

And that’s it! The model will break down your responses into opinion units and send you a new file!

How to Create a Sentiment Classifier

Now we really get to see machine learning at work: it’s time to train your own sentiment analysis model. You could use one of the pre-trained sentiment analysis models, but if you want the most accurate predictions you should train your own model on your data and criteria.

1. Choose your model

In the MonkeyLearn dashboard, click ‘create model’ in the top right-hand corner of the page. Now, select ‘classifier’:

The option to choose a text analysis model: "Classifier" or "Extractor."

In the following screen, choose the sentiment analysis model:

The option to choose "Topic Classification," "Sentiment Analysis," or "Intent Detection."

2. Import Your Data

We’ll be using survey data from our NPS survey of the MonkeyLearn Word Cloud Generator. These are open-ended responses to the questions: “How can we improve MonkeyLearn WordCloud for you? Any features we are missing?”

You can upload an Excel or CSV file, or download a dataset from our data library if you don’t have one handy:

The option to upload a CSV or Excel file or download from the Data Library.


3. Start tagging!

This is one of the most important steps when creating your custom model: training. Every text you tag makes your model smarter, and after you’ve tagged a certain number of texts your model will be ready to make predictions on its own.

Training the sentiment analysis model by tagging text "Negative," "Neutral," or "Positive."

After you’ve trained it a bit, the model will begin making predictions on its own. Correct any incorrect predictions to keep improving it.

4. Test it

After prompting you to name your model, the app will give you the option to test it right away; you can also click ‘run’ to test it at any time.

Just type something in the text box to see how your model works.

Testing the sentiment analyzer with new text.

To increase your model’s confidence, keep on tagging. The more samples the model has, the better its confidence and accuracy. Once your model is properly trained, click ‘run’, then ‘batch’ to process a new CSV or Excel file; your sentiment results will be returned automatically:


How to Create an Aspect Classifier

Now, let’s take a look at how to create your aspect classifier, a model that will identify the different topics in your texts. Again, go to your dashboard, click ‘create model’, and select ‘classifier’:

The option to choose a text Classifier or Extractor model.

Then, choose the topic classification model:

The option to choose "Topic Classification," "Sentiment Analysis," or "Intent Classification."

1. Upload Your Data

Now, upload the data you’ve already analyzed for sentiment to your topic classifier.

2. Define Your Tags

You’ll need to choose topics that are relevant to the problem you’re trying to solve, or the insights you’re hoping to gain from the survey responses. For our NPS survey analysis, we’ll be using the tags Features, Functionality, and Ease of Use.

Defining/creating tags: "Functionality," "Features," and "Ease of Use."

3. Start Tagging

Now it’s time to start training the model and tagging samples. The more texts you tag, the better equipped your model will be to auto-tag your survey responses on its own.

Training the topic classifier with tags.

Once you have tagged enough samples, the model will be ready for you to test.

4. Try it Out!

Just type something in the text box and see how the text analysis tool tags your survey data. If you want to increase its confidence, you just have to keep on tagging samples!

Testing the topic classifier with new text.

After training the model, you can:

  1. Upload new responses in an Excel or CSV file to conduct batch analysis. Here you’ll see survey responses tagged by topic, as well as sentiment:


  2. Use MonkeyLearn’s integrations with Google Sheets, Zapier, Zendesk, and more to analyze your texts.

  3. If you know how to code, use our API.
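For the API route, a classify request sends a JSON body with a list of texts and gets back one result per text. The sketch below only builds the request and parses a sample response; the API key and model ID are placeholders, and the endpoint and response shape shown are from our reading of the v3 API docs, so verify them there before relying on this:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"   # placeholder: your MonkeyLearn API key
MODEL_ID = "cl_xxxxxxxx"   # placeholder: your classifier's model ID

# Build (but don't send) a classify request for the v3 REST API
req = urllib.request.Request(
    url=f"https://api.monkeylearn.com/v3/classifiers/{MODEL_ID}/classify/",
    data=json.dumps({"data": ["Great design, but it's really expensive."]}).encode(),
    headers={"Authorization": f"Token {API_KEY}",
             "Content-Type": "application/json"},
    method="POST",
)

# An illustrative response body: one result per input text,
# each with a list of predicted tags and confidences.
sample_response = json.loads("""
[{"text": "Great design, but it's really expensive.",
  "classifications": [{"tag_name": "Negative", "confidence": 0.91}]}]
""")
top_tag = sample_response[0]["classifications"][0]["tag_name"]
print(top_tag)  # Negative
```

Sending the request with `urllib.request.urlopen(req)` (or the `monkeylearn` Python SDK) returns the real predictions for your own texts.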

Once your aspect-based sentiment analysis is complete, you'll know which aspects of your NPS survey are most positive and most negative, so you can focus on the areas you need to change. If you’re scoring high on Functionality but low on Ease of Use, for example, it may indicate that your product is great but you need to work on onboarding or customer education.

Here, we can see that most of the word cloud survey responses mentioned Features, which was also the category that received the most negative responses:
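With the tagged results exported to a spreadsheet, finding the aspect with the most negative responses is a simple count. A sketch over hypothetical (aspect, sentiment) rows:

```python
from collections import Counter

# Hypothetical (aspect, sentiment) pairs exported from a batch analysis
rows = [
    ("Features", "Negative"), ("Features", "Negative"), ("Features", "Positive"),
    ("Functionality", "Positive"), ("Ease of Use", "Negative"),
]

# Count only the negative mentions per aspect
negatives = Counter(aspect for aspect, sentiment in rows if sentiment == "Negative")
most_negative_aspect, count = negatives.most_common(1)[0]
print(most_negative_aspect)  # Features
```

The same counting works directly on a CSV export: read the aspect and sentiment columns, then tally as above.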


Check out this guide if you want to know more about text analysis tools and how they work.

Survey Data Analysis Visualization

You’ve run all your survey responses through the aspect-based sentiment analysis model. Now what? Visuals are a great way to present your results in a clear and compelling way, and with data visualization it’s easier to spot insights and make better decisions. If you’ve analyzed 10,000 survey responses, you’ll have 10,000 rows of information in an Excel file; visualizing those results makes them much more persuasive, richer, and easier to understand.
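Even before you reach for a dashboard tool, you can sanity-check your overall sentiment distribution with a plain-text histogram. A toy sketch over hypothetical counts:

```python
# Hypothetical sentiment totals from an analyzed survey
counts = {"Positive": 48, "Neutral": 12, "Negative": 30}

# One '#' per two responses keeps the bars terminal-friendly
for label, n in counts.items():
    bar = "#" * (n // 2)
    print(f"{label:>8} {bar} {n}")
```

For anything you’d actually share with stakeholders, the dedicated visualization tools below are the better option.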

Let’s take a look at some of the best survey data visualization tools available:

MonkeyLearn Studio

MonkeyLearn Studio allows you to connect all of your analyses and run them simultaneously. It takes you from data gathering to text analysis and data visualization, all in a single, easy-to-use dashboard for striking results. Once your MonkeyLearn Studio analysis process is set up, you can do it all with almost no human interaction.

Take a look at this aspect-based sentiment analysis of online reviews of Zoom.

The MonkeyLearn Studio dashboard showing multiple text analysis results together.

Reviews are broken into aspects (Reliability, Functionality, Pricing, etc.) then sentiment analyzed by category. See results by date or follow categories and sentiments over time.

You can also see intent classification, an analysis that reads text to determine the writer’s objective. In this case the most common intent is Opinion, as these are merely reviews of software. But this kind of analysis is great for things like marketing email responses, grouping emails into categories like Autoresponder, Interested, and Not Interested.

Play around with the MonkeyLearn Studio public dashboard to see how it works. You can change the analysis criteria by date, category, sentiment, and more.

Google Data Studio

Google Data Studio’s user-friendly interface helps communicate data in a simple way. Need to upload large amounts of information at the same time from multiple files? That’s no problem at all, since you can upload up to 100 megabytes per upload. If you want to learn how to use Google Data Studio, there are some very useful official tutorials that can help.


Looker

Looker allows you to analyze large and small amounts of data in real time and uses data analytics to interpret results. It connects to different databases and lets you build user-friendly reports that can be shared with teammates. It’s very simple to use, and you can learn more about how it works by checking out these online resources.


Tableau

Tableau makes working with analytics intuitive, and it doesn’t require any technical skills, so practically anyone can use it to analyze information. It works with almost any type of data source, including Excel and CSV files, and creates bar charts in no time at all. Learning how to create graphs with Tableau is straightforward, and the company offers a thorough tutorial so you can understand each and every function the platform has to offer.

Though these are the main players in the data visualization market, there are some other very interesting options, such as Klipfolio and Mode Analytics, which can help you better understand your data.

Final Words

Conducting surveys is crucial for businesses to check customer satisfaction and obtain powerful insights that will improve overall customer experience.

Quantitative feedback is definitely helpful, but qualitative feedback is where the real insights lie, so you need tools to extract them in the most effective way possible. That’s why analyzing your survey feedback with machine learning is key.

MonkeyLearn offers a plethora of pre-trained machine learning models, or you can create your own extractors or classifiers using our intuitive interface. As you have seen, it’s very easy and you can choose tags that are tailored to your business to gain deeper insights from your survey responses. And once your analyses are set up, you can easily visualize them with MonkeyLearn Studio and run regular, real-time analysis with little human interaction needed.

If you’re interested in getting started with AI for survey analysis, request a demo to get more information. Our team is ready to help you start analyzing your surveys using machine learning models right away.

Federico Pascual

September 20th, 2019