Natural Language Processing

Natural Language Processing (NLP) is a branch of artificial intelligence that helps computers read, interpret, and understand human language. The goal of NLP is for machines to carry out repetitive and high-volume tasks that would otherwise be completed by humans.

Once a fantasy of science fiction movies, the ability of machines to interpret human language is now at the core of many applications that we use every day – from translation software, chatbots, spam filters, and search engines, to grammar checking software, voice assistants, and social media monitoring tools.

This guide covers the basics of natural language processing and explains how easy it is to perform natural language processing techniques using no-code tools like MonkeyLearn.

  1. What Is Natural Language Processing?
  2. Why Is NLP Important?
  3. Techniques of Natural Language Processing & Algorithms
  4. Natural Language Processing Examples
  5. Natural Language Processing in Python
  6. NLP Tutorial With No-Code Tools

Let’s dive right into it!

What Is Natural Language Processing (NLP)?

Natural Language Processing (NLP) is a field of Artificial Intelligence (AI) that focuses on quantifying human language to make it intelligible to machines. It combines the power of linguistics and computer science to study the rules and structure of language, and create intelligent systems capable of understanding, analyzing, and extracting meaning from text and speech.

Linguistics is used to understand the structure and meaning of a text by analyzing different aspects like syntax, semantics, pragmatics, and morphology. Then, computer science transforms this linguistic knowledge into rule-based or machine learning algorithms that can solve specific problems and perform desired tasks.

Take Gmail, for example. Your emails are automatically categorized as Promotions, Social, Primary, or Spam thanks to an NLP task called text classification. By breaking down words and identifying patterns, rules, and relationships between them, machines automatically learn which category to assign emails.

Why Is Natural Language Processing Important?

Natural Language Processing plays a very important role in structuring big data because it prepares text and speech for machines so that they’re able to interpret, process, and organize information. Some of the main advantages of NLP include:

  • Large-scale analysis. Natural Language Processing can help machines perform language-based tasks such as reading text, identifying what’s important, and detecting sentiment at scale. If you receive an influx of customer support tickets, you don’t need to hire more staff. NLP tools can be scaled up or down as needed.

  • Automate processes in real-time. Machine learning tools, equipped with natural language processing, can learn to understand and analyze information without human help – quickly, effectively, and around the clock.

  • Consistent and unbiased criteria. NLP machines are not subjective like humans. They tag data based on one set of rules, so you don’t have to worry about inconsistent and inaccurate results.

Techniques of Natural Language Processing & Algorithms

In this section, we’ll focus on two primary natural language processing techniques and their sub-tasks.

Syntactic Analysis

Syntactic analysis, also known as parsing or syntax analysis, identifies the syntactic structure of a text and the dependency relationships between words, which can be represented on a diagram called a parse tree.

Syntax analysis involves many different sub-tasks, including:

Tokenization

This is the most basic task in natural language processing. It breaks a string of text into semantically useful units called tokens by defining boundaries: criteria for where one token ends and the next begins.

You can use sentence tokenization to split a text into sentences, or word tokenization to split a sentence into words. Generally, word tokens are separated by blank spaces, and sentence tokens by full stops. However, you can perform higher-level tokenization for more complex structures, like words that often go together, otherwise known as collocations (for example, New York).

Here’s an example of how word tokenization simplifies text:

Customer service couldn’t be better! = [“customer service”, “could”, “not”, “be”, “better”, “!”]
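
If you'd like to try this yourself, here is a minimal sketch using NLTK's tokenizers (assuming NLTK is installed; the exact token boundaries depend on the tokenizer):

```python
# A minimal tokenization sketch with NLTK (assumes: pip install nltk).
import nltk
nltk.download("punkt")  # one-time download of the tokenizer models

from nltk.tokenize import sent_tokenize, word_tokenize

text = "Customer service couldn't be better! I got a reply within minutes."
print(sent_tokenize(text))  # splits the text into two sentence tokens
print(word_tokenize(text))  # e.g. ['Customer', 'service', 'could', "n't", 'be', 'better', '!', ...]
```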

Part-of-speech tagging

Part-of-speech tagging (abbreviated as PoS tagging) involves assigning a part-of-speech category to each token in a text. Common PoS tags include verb, adjective, noun, pronoun, conjunction, preposition, and interjection, among others. Applied to the example above, the output would look like this:

“Customer service”: NOUN, “could”: VERB, “not”: ADVERB, “be”: VERB, “better”: ADJECTIVE, “!”: PUNCTUATION

PoS tagging is useful for identifying relationships between words and, therefore, for understanding the meaning of sentences.
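
As a rough sketch, NLTK can produce comparable tags; note it uses the Penn Treebank tag set, so the labels differ from the simplified ones above:

```python
# PoS tagging with NLTK's default tagger (a sketch; tags follow the Penn
# Treebank convention, e.g. MD = modal verb, JJR = comparative adjective).
import nltk
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

from nltk import pos_tag, word_tokenize

tokens = word_tokenize("Customer service couldn't be better!")
print(pos_tag(tokens))
```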

Dependency Parsing

Dependency grammar refers to the way the words in a sentence are connected to each other. A dependency parser, therefore, analyzes how ‘head’ words are related to and modified by other words in order to understand the syntactic structure of a sentence.

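A quick way to inspect these dependency relationships is spaCy (a sketch, assuming the small English model has been installed with `python -m spacy download en_core_web_sm`):

```python
# Print each token's dependency label and its head word (a sketch).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Customer service could not be better")

for token in doc:
    print(f"{token.text:10} --{token.dep_}--> {token.head.text}")
```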

Constituency Parsing

Constituency parsing aims to visualize the entire syntactic structure of a sentence according to a phrase structure grammar. Basically, it represents a sentence as a tree whose non-terminal nodes are abstract phrase types and whose terminal nodes are the words themselves.

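NLTK's Tree class is handy for representing and drawing such a parse; the bracketed structure below is written by hand purely for illustration:

```python
# Rendering a hand-written constituency parse with NLTK's Tree class.
from nltk import Tree

parse = Tree.fromstring(
    "(S (NP (NN Customer) (NN service)) (VP (MD could) (RB not) (VB be) (ADJP (JJR better))))"
)
parse.pretty_print()  # draws the tree with phrase nodes above the words
```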

You can try different parsing algorithms and strategies depending on the nature of the text you intend to analyze, and the level of complexity you’d like to achieve.

Lemmatization & Stemming

When we speak or write, we tend to use inflected forms of a word (words in their different grammatical forms). To make these words easier for computers to understand, NLP uses lemmatization and stemming to transform them back to their root form.

The word as it appears in the dictionary – its root form – is called a lemma. For example, the words ‘are, is, am, were, and been’, are grouped under the lemma ‘be’. So, if we apply this lemmatization to “African elephants have 4 nails on their front feet”, the result will look something like this:

African elephants have 4 nails on their front feet = [“African”, “elephant”, “have”, “4”, “nail”, “on”, “their”, “foot”]

This example shows how lemmatization reduces each word of the sentence to its base form (e.g. the word "feet" was changed to "foot").

When we refer to stemming, the root form of a word is called a stem. Stemming ‘trims’ words, so word stems may not always be semantically correct.

For example, stemming the words “consult”, “consultant”, “consulting”, and “consultants”, would result in the root form “consult”.

While lemmatization is dictionary-based and chooses the appropriate lemma based on context, stemming operates on single words without considering the context. For example, in the sentence:

“This is better”

The word “better” is transformed into “good” by a lemmatizer but is left unchanged by a stemmer. Even though stemmers can lead to less-accurate results, they are easier to build and run faster than lemmatizers. Lemmatizers, however, are the better option when you need linguistically precise results.
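
Here is a small side-by-side sketch with NLTK's PorterStemmer and WordNetLemmatizer; note that the lemmatizer needs a part-of-speech hint to map 'better' to 'good':

```python
# Stemming vs. lemmatization with NLTK (assumes nltk.download('wordnet')).
import nltk
nltk.download("wordnet")

from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

print(stemmer.stem("consulting"))               # 'consult'  (suffix trimmed)
print(stemmer.stem("better"))                   # 'better'   (unchanged)
print(lemmatizer.lemmatize("feet"))             # 'foot'     (dictionary lookup)
print(lemmatizer.lemmatize("better", pos="a"))  # 'good'     (needs the adjective hint)
```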

Stopword Removal

Removing stop words is an important step in NLP text processing. It involves filtering out high-frequency words that add little or no semantic value to a sentence, for example, which, to, at, for, is, etc.

You can even customize lists of stopwords to include words that you want to ignore.

Let’s say you’d like to classify customer service tickets based on their topics. In this example: “Hello, I’m having trouble logging in with my new password”, it may be useful to remove stop words like “hello”, “I”, “am”, “with”, “my”, so you’re left with the words that help you understand the topic of the ticket: “trouble”, “logging in”, “new”, “password”.
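
A sketch of this filtering step, using NLTK's built-in English stopword list extended with a custom word:

```python
# Removing stop words with NLTK, plus a custom addition ('hello').
import nltk
nltk.download("punkt")
nltk.download("stopwords")

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

stop_words = set(stopwords.words("english"))
stop_words.add("hello")  # customize the list for your domain

tokens = word_tokenize("Hello, I'm having trouble logging in with my new password")
print([t for t in tokens if t.lower() not in stop_words and t.isalpha()])
# e.g. ['trouble', 'logging', 'new', 'password']
```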

Semantic Analysis

Semantic analysis focuses on identifying the meaning of language. However, since language is polysemous and ambiguous, semantics is considered one of the most challenging areas in NLP.

Semantic tasks analyze the structure of sentences, word interactions, and related concepts, in an attempt to discover the meaning of words, as well as understand the topic of a text.

Some sub-tasks of semantic analysis include:

Word Sense Disambiguation

Depending on their context, words can have different meanings. Take the word “book”, for example:

  • You should read this book, it’s a great novel!

  • You should book the flights as soon as possible.

  • You should close the books by the end of the year.

  • You should do everything by the book to avoid potential complications.

There are two main techniques for Word Sense Disambiguation (WSD): the knowledge-based (or dictionary) approach and the supervised approach. The first tries to infer meaning by looking at the dictionary definitions of ambiguous terms within their context, while the latter relies on machine learning algorithms that learn from examples (training data).
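
NLTK ships an implementation of the classic Lesk algorithm, a knowledge-based approach that compares dictionary definitions against the surrounding context (a sketch; Lesk is simple and often imperfect):

```python
# Knowledge-based WSD with NLTK's Lesk implementation.
import nltk
nltk.download("punkt")
nltk.download("wordnet")

from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

context = word_tokenize("You should book the flights as soon as possible")
sense = lesk(context, "book")  # picks the WordNet sense that best fits the context
print(sense, "->", sense.definition() if sense else "no sense found")
```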

Relationship Extraction

This task consists of identifying semantic relationships between two or more entities in a text. Entities can be names, places, organizations, and so on, and relationships can be established in a variety of ways. For example, in the phrase “Susan lives in Los Angeles”, a person (Susan) is related to a place (Los Angeles) by the semantic category “lives in”.
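
As a toy illustration (not a production technique), you could combine spaCy's named entity recognizer with a simple verb check to surface that relationship; the rule here is invented for the example, and the entity labels depend on the model:

```python
# A toy 'lives in' relation extractor built on spaCy's entity labels.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Susan lives in Los Angeles")

people = [ent.text for ent in doc.ents if ent.label_ == "PERSON"]
places = [ent.text for ent in doc.ents if ent.label_ == "GPE"]

if people and places and any(tok.lemma_ == "live" for tok in doc):
    print(f"({people[0]}) --lives in--> ({places[0]})")
```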

Rule-Based vs Machine Learning NLP

There are two main technical approaches to Natural Language Processing that create different types of systems: one is based on linguistic rules and the other on machine learning methods. In this section, we’ll examine the advantages and disadvantages of each one, and the possibility of combining both (hybrid approach).

Rule-Based Approach

Rule-based systems are the earliest approach to NLP and involve applying hand-crafted linguistic rules to text. Each rule is formed by an antecedent and a prediction:

IF this happens (antecedent), THEN this will be the outcome (prediction).

For example, imagine you’d like to perform sentiment analysis to classify positive and negative opinions in product reviews. First, you would create a list of positive words (such as good, best, excellent, etc.) and a list of negative words (bad, worst, frustrating, etc.). Then, you would go through each review and count the number of positive and negative words in each text to determine its overall sentiment.
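
A bare-bones version of such a system fits in a few lines of Python; the word lists here are tiny and purely illustrative:

```python
# A minimal rule-based sentiment scorer: count positive vs. negative words.
import re

POSITIVE = {"good", "best", "excellent", "great"}
NEGATIVE = {"bad", "worst", "frustrating", "awful"}

def rule_based_sentiment(review: str) -> str:
    words = re.findall(r"[a-z']+", review.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(rule_based_sentiment("Excellent product, the best I've tried"))  # positive
print(rule_based_sentiment("Worst purchase ever, frustrating setup"))  # negative
```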

Since rules are determined by humans, this type of system is easy to understand and provides fairly accurate results with minimal effort. Another advantage of rule-based systems is that they don’t require training data, which makes them a good option if you don’t have much data and are just starting your analysis.

However, manually crafting and enhancing rules can be a difficult and cumbersome task, and often requires a linguist or a knowledge engineer. Also, adding too many rules can lead to complex systems with contradictory rules.

Machine Learning Models

Machine Learning consists of algorithms that can learn to understand language based on previous observations. The system uses statistical methods to build its own ‘knowledge bank’, and is trained to make associations between a particular input and its corresponding output.

Let’s go back to the sentiment analysis example. With machine learning, you can build a model to automatically classify opinions as positive, negative, or neutral. But first, you need to train your classifier by manually tagging text examples, until it’s ready to make its own predictions for unseen data.

You will also need to transform the text examples into something a machine can understand (vectors), a process known as feature extraction or text vectorization. Once the texts have been transformed into vectors, they are fed to a machine learning algorithm together with their expected output (tags) to create a classification model. The model can then discern which features best represent the texts and make predictions for unseen data.
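
Here is what that vectorization step can look like with scikit-learn's bag-of-words CountVectorizer (a sketch; TF-IDF weighting is a common alternative):

```python
# Turning raw texts into count vectors with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer

texts = ["I love this product", "I hate this product"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(X.toarray())                         # one count vector per text
```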

The biggest advantage of machine learning models is their ability to learn on their own, with no need to define manual rules. All you’ll need is a good set of training data, with several examples for each of the tags you’d like to analyze.

Over time, machine learning models often deliver higher precision than rule-based systems, and the more training data you feed them, the more accurate they are.

However, you’ll need training data that’s relevant to the problem you want to solve in order to build an accurate machine learning model.

Hybrid Approach

A third approach involves combining both rule-based and machine learning systems. That way, you can benefit from the advantages of each of them, and gain higher accuracy in your results.

NLP Algorithms

Natural language processing algorithms are usually based on machine learning algorithms. Below are some of the most popular ones that you can use depending on the task you want to perform:

Text Classification Algorithms

Text classification is the process of organizing unstructured text into predefined categories (tags). Text classification tasks include sentiment analysis, intent detection, topic detection, and language detection.

Some of the most popular algorithms for creating text classification models are:

  • Naive Bayes: a family of probabilistic algorithms that apply Bayes’ Theorem to predict the tag of a text. Bayes’ Theorem states that the probability of an event (A) given that another event (B) has occurred can be calculated from the probability of B given A and the prior probabilities of A and B.

This model is called naive because it assumes that each variable (feature or predictor) is independent of the others and has an equal impact on the outcome. The Naive Bayes algorithm is used for text classification, sentiment analysis, recommendation systems, and spam filters.

  • Support Vector Machines (SVM): an algorithm mostly used to solve classification problems with high accuracy. Supervised classification models aim to predict the category of a piece of text based on a set of manually tagged training examples.

In order to do that, SVM turns training examples into vectors and draws a hyperplane to differentiate two classes of vectors: those that belong to a certain tag and those that don’t. Based on which side of the hyperplane a new text’s vector lands, the model assigns it one tag or the other. SVM algorithms can be especially useful when you have a limited amount of data (a toy comparison of Naive Bayes and SVM follows this list).

  • Deep Learning: this set of machine learning algorithms is based on artificial neural networks. They are well suited to processing large volumes of data but, in turn, require a large training corpus. Deep learning algorithms are used to solve complex NLP problems.
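
To make the first two algorithms concrete, here is a toy scikit-learn comparison of Naive Bayes and a linear SVM; the four training examples are invented, and a real model would need far more data:

```python
# Training Naive Bayes and a linear SVM on the same toy dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

texts = ["great product", "awful support", "excellent value", "worst purchase"]
labels = ["positive", "negative", "positive", "negative"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

for model in (MultinomialNB(), LinearSVC()):
    model.fit(X, labels)
    print(type(model).__name__, model.predict(vectorizer.transform(["great value"])))
```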

Text Extraction Algorithms

Text extraction consists of pulling specific pieces of data out of a text. You can use extraction models to pull out keywords, to extract entities such as company names or locations (a task known as named entity recognition), or to summarize text. Here are the most common algorithms for text extraction:

  • TF-IDF (term frequency-inverse document frequency): this statistical approach determines how relevant a word is to a text within a collection of documents, and is often used to extract relevant keywords. The importance of a word increases with the number of times it appears in a text (term frequency), but decreases with the frequency it appears across the corpus of texts (inverse document frequency). A short TF-IDF sketch follows this list.

  • Regular Expressions (regex): a regular expression is a sequence of characters that defines a search pattern. Regex checks whether a string contains a given pattern (the mechanism behind searches in text editors and search engines) and is often used for extracting keywords and entities from text.

  • CRF (conditional random fields): this machine learning approach learns patterns and extracts data by assigning a weight to a set of features in a sentence. This approach can create patterns that are richer and more complex than those patterns created with regex, enabling machines to determine better outcomes for more ambiguous expressions.

  • Rapid Automatic Keyword Extraction (RAKE): this algorithm for keyword extraction uses a list of stopwords and phrase delimiters to identify relevant words or phrases within a text. Basically, it analyzes the frequency of a word and its co-occurrence with other words.
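
Here is that TF-IDF keyword sketch with scikit-learn; the three documents are invented for illustration:

```python
# Ranking the words of one document by TF-IDF score to surface keywords.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "The hotel room was clean and the staff were friendly",
    "The flight was delayed and the staff were unhelpful",
    "Clean room, friendly staff, great location",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

terms = vectorizer.get_feature_names_out()
scores = X[0].toarray().ravel()  # TF-IDF scores for the first document
print(sorted(zip(terms, scores), key=lambda p: p[1], reverse=True)[:3])
```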

Topic Modeling Algorithms

Topic modeling is a method for clustering groups of words and similar expressions within a set of data. Unlike topic classification, topic modeling is an unsupervised method, which means that it infers patterns from data without needing to define categories or tag data beforehand.

The main algorithms used for topic modeling include:

  • Latent Semantic Analysis (LSA): this method is based on the distributional hypothesis and identifies words and expressions with similar meanings that occur in similar pieces of text. It is the most frequently used method for topic modeling.

  • Latent Dirichlet Allocation (LDA): a generative statistical model that assumes that each document contains a mixture of topics, and that each topic is characterized by words with certain probabilities of occurrence (a toy LDA example follows this list).
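
With only four invented documents the resulting 'topics' are crude, but the mechanics are the same at scale:

```python
# Fitting a two-topic LDA model and printing each topic's top words.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the battery life of this phone is great",
    "battery drains fast on the new phone",
    "the hotel room was clean and quiet",
    "quiet room with a clean bathroom",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    print(f"topic {i}:", [terms[j] for j in topic.argsort()[-3:]])
```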

Natural Language Processing Examples

Thanks to NLP-powered systems, companies are able to automate tasks like ticket tagging, routing, and data entry, and gain fine-grained insights that can be used to make data-driven decisions.

Here are some examples of how NLP is used in business:

Quickly Sorting Customer Feedback

Text classification models are excellent for categorizing qualitative feedback, such as product reviews, social media conversations, and open-ended responses in online surveys. Take the example of Retently, a SaaS platform for online surveys that used MonkeyLearn to classify NPS responses and gain actionable insights.

The team at Retently classified their open-ended responses using custom categories, including Product Features, Product UX, and Customer Support.

Tagging each piece of feedback automatically with NLP tools enabled them to find out the most relevant topics mentioned by customers, along with how much they valued their product. Most of the responses referred to “Product Features”, followed by “Product UX” and “Customer Support” (these last two topics were mentioned mostly by Promoters).

Automating Processes in Customer Service

Other interesting applications of NLP revolve around customer service automation. This concept uses AI-based technology to eliminate or reduce routine manual tasks in customer support, saving agents valuable time, and making processes more efficient.

According to the Zendesk benchmark, a tech company receives more than 2,600 support inquiries per month. Receiving large amounts of support tickets from different channels (email, social media, live chat, etc.) means companies need a strategy in place for categorizing each incoming ticket.

Text classification allows companies to automatically tag incoming customer support tickets according to their topic, language, sentiment, or urgency. Then, based on these tags, they can instantly route tickets to the most appropriate pool of agents.

Uber designed its own ticket routing workflow, which involves tagging tickets by Country, Language, and Type (this category includes the sub-tags Driver-Partner, Questions about Payments, Lost Items, etc), and following some prioritization rules, like sending requests from new customers (New Driver-Partners) to the top of the list.

Chatbots for Customer Success

A chatbot is a computer program that simulates human conversation. Chatbots use NLP to recognize the intent behind a sentence, identify relevant topics and keywords, verbs, and even emotions, and come up with the best response based on their interpretation of data.

As customers crave fast, personalized, and around-the-clock support experiences, chatbots have become the heroes of customer service strategies. Chatbots reduce customer waiting times by providing immediate responses and excel when handling routine queries (which often represent a high volume of customer support requests), allowing agents to focus on solving more complex issues. In fact, chatbots can solve up to 80% of routine customer support tickets.

Besides providing customer support, chatbots can be used to recommend products, offer discounts, and make reservations, among many other tasks. In order to do that, most chatbots follow a simple ‘if/then’ logic (they are programmed to identify intents and associate them with a certain action), or provide a selection of options to choose from.

Automatic Summarization

Automatic summarization consists of reducing a text and creating a concise new version that contains its most relevant information. It can be particularly useful to summarize large pieces of unstructured data, such as academic papers.

There are two different ways of using NLP for summarization: the first approach extracts the most important information within a text and uses it to create a summary (extraction-based summarization); while the second applies deep learning techniques to paraphrase the text and produce sentences that are not present in the original source (abstraction-based summarization).
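
A naive frequency-based version of the first (extraction-based) approach can be sketched in a few lines; real systems are far more sophisticated:

```python
# Extraction-based summarization: keep the sentences whose words are most frequent.
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 1) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freqs = Counter(re.findall(r"[a-z]+", text.lower()))
    ranked = sorted(
        sentences,
        key=lambda s: sum(freqs[w] for w in re.findall(r"[a-z]+", s.lower())),
        reverse=True,
    )
    return " ".join(ranked[:n_sentences])

text = ("NLP helps machines read text. NLP models can classify text by topic. "
        "Some people prefer handcrafted rules.")
print(summarize(text))  # keeps the sentence with the most frequent words
```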

Automatic summarization can be particularly useful for data entry, where relevant information is extracted from, let’s say a product description, and automatically entered into a database.

Machine Translation

The possibility of translating text and speech to different languages has always been one of the main interests in the NLP field. From the first attempts to translate text from Russian to English in the 1950s to the state-of-the-art neural systems, machine translation (MT) has seen significant improvements but still presents challenges.

Google Translate, Microsoft Translator, and Facebook Translation App are a few of the leading platforms for generic machine translation. In August 2019, Facebook AI’s English-to-German machine translation model took first place in the contest held by the Conference on Machine Translation (WMT). The translations obtained by this model were described by the organizers as “superhuman” and considered highly superior to the ones done by human experts.

Another interesting development in machine translation has to do with customizable machine translation systems, which are adapted to a specific domain and trained to understand the terminology associated with a particular field, such as medicine, law, and finance. Lingua Custodia, for example, is a machine translation tool dedicated to translating technical financial documents.

Finally, one of the latest innovations in MT is adaptive machine translation, which consists of systems that can learn from corrections in real time.

Natural Language Generation

Natural Language Generation (NLG) is a subfield of NLP designed to build computer systems or applications that can automatically produce all kinds of texts in natural language by using a semantic representation as input. Some of the applications of NLG are question answering and text summarization.

In 2019, artificial intelligence company OpenAI released GPT-2, a text-generation system that represented a groundbreaking achievement in AI and has taken the NLG field to a whole new level. The system was trained on a massive dataset of 8 million web pages and is able to generate coherent, high-quality pieces of text (like news articles, stories, or poems) from minimal prompts.

The model performs better on popular topics that are well represented in its training data (such as Brexit, for example), while it offers poorer results when prompted with niche or highly technical content. Still, its possibilities are only beginning to be explored.

Natural Language Processing in Python

Now that you’ve gained some insight into the basics of NLP and its current applications in business, you may be wondering how to put NLP into practice.

There are many open-source libraries designed to deal with Natural Language Processing. The good thing about these libraries is that they are free, flexible, allow you to build a complete and customized NLP solution, and are often written in Python (the best language for performing NLP tasks). However, building a whole infrastructure from scratch demands both programming skills and machine learning knowledge.

SaaS tools, on the other hand, are ready-to-use solutions that allow you to incorporate NLP into your apps in a very simple way, with very little setup. Connecting SaaS tools to your favorite apps through their APIs only requires a few lines of code, and it’s an excellent alternative if you don’t want to invest time and resources learning about machine learning or NLP.

Here’s a list of the top NLP tools:

  • MonkeyLearn is a SaaS platform that lets you build customized natural language processing models to perform tasks like sentiment analysis and keyword extraction. Developers can connect text analysis models via the MonkeyLearn API in Python, while those with no programming skills can upload datasets via the smart interface or connect to everyday apps like Google Sheets, Excel, Zapier, Zendesk, and more.

  • Natural Language Toolkit (NLTK) is a suite of libraries for building Python programs that can deal with a wide variety of NLP tasks. It is the most popular Python library for NLP, has a very active community behind it, and is often used for educational purposes. There’s a handbook and tutorial for using NLTK, but learning how to use it might take some time.

  • spaCy is a free, open-source library for advanced NLP in Python. It has been specifically designed for building NLP applications that help you understand large volumes of text. That’s one of the differences from its main competitor, NLTK, which was created mostly for research and teaching purposes. spaCy is fast, easy to use, and very well documented. Instead of presenting you with all the available options for solving an NLP problem, it focuses on the best algorithm for the task. However, for the time being, it only supports the English language.

  • TextBlob is a Python library with a simple interface for performing a variety of NLP tasks. Built on the shoulders of NLTK and another library called Pattern, it is intuitive and user-friendly, which makes it ideal for beginners (see the one-liner sentiment example after this list). Learn more about how to use TextBlob and its features.
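
As a taste of how simple these libraries can be, here is TextBlob's one-liner sentiment API (polarity runs from -1, most negative, to 1, most positive):

```python
# Sentiment in one line with TextBlob (assumes: pip install textblob).
from textblob import TextBlob

blob = TextBlob("Customer service couldn't be better!")
print(blob.sentiment)  # Sentiment(polarity=..., subjectivity=...)
```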

NLP Tutorial With No-Code Tools

SaaS solutions like MonkeyLearn offer ready-to-use NLP models for text analysis. To see how they work, have a go at pasting text into this online sentiment analyzer, and click on “classify text”.

You can also upload a CSV or Excel file to analyze a large batch of data, use one of the available integrations, or connect through the MonkeyLearn API.

Ready-to-use models are great for taking your first steps with sentiment analysis. However, if you need to analyze industry-specific data, you should build your own customized classifier. Custom sentiment models can detect words and expressions within your domain for more accurate predictions.

These are the steps you need to follow to create a customized sentiment analysis model with MonkeyLearn. Before you start, you’ll need to sign up to MonkeyLearn for free:

1. Choose a type of model. Go to the dashboard, click on Create Model and choose “Classifier”.

2. Choose a type of classifier. In this case, “Sentiment Analysis”.

3. Upload training data. You can import data from a CSV or an Excel file, or connect with any of the third-party integrations offered by MonkeyLearn, such as Twitter, Gmail, Zendesk, and Front, among others. This data will be used to train your machine learning model.

4. Tag your data. It’s time to train your sentiment analysis classifier by manually tagging examples of data as positive, negative, or neutral. The model will learn based on your criteria, and the more examples you tag, the smarter your model will become. Notice that after tagging several examples, your classifier will start making its own predictions.

5. Test your sentiment analysis classifier. After training your model, go to the “Run” tab, enter your own text, and see how your model performs. If you are not satisfied with the results, keep training your classifier by tagging more examples.

6. Put your model to work! Use your sentiment classifier to analyze your data. There are three ways to do this:

  1. Upload a batch of data (like a CSV or an Excel file)
  2. Use one of the available integrations
  3. Connect to the MonkeyLearn API (see the sketch below)
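
For the API route, a minimal sketch with the MonkeyLearn Python SDK looks like this; the API key and model ID are placeholders you'd replace with your own, and the exact response format follows the SDK's documentation:

```python
# Classifying a batch of texts through the MonkeyLearn API (pip install monkeylearn).
from monkeylearn import MonkeyLearn

ml = MonkeyLearn("<YOUR_API_KEY>")
data = ["Great support, quick replies!", "Still waiting for a refund..."]

response = ml.classifiers.classify("<YOUR_MODEL_ID>", data)
print(response.body)  # predicted tags with confidence scores for each text
```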

Using a Keyword Extractor

With a keyword extractor, you can easily pull out the most important words and expressions from a text, whether it’s a set of product reviews or a bunch of NPS responses. You can use this pre-trained model for extracting keywords or build your own custom extractor with your data and criteria.

These are the steps for building a custom keyword extractor with MonkeyLearn:

1. Choose a type of model. Go to the dashboard, click on Create Model and choose “Extractor”.

2. Import your text data. You can upload a CSV or an Excel file, or import data from a third-party app like Twitter, Gmail, or Zendesk.

3. Specify the data you’ll use to train your keyword extractor. Select which columns you will use to train your model.

4. Define your tags. Create different categories (tags) for the type of data you’d like to obtain from your text. In this example, we’ll analyze a set of hotel reviews and extract keywords referring to “Aspects” (feature or topic of the review) and “Quality” (keywords that refer to the condition of a certain aspect).

5. Train your keyword extractor. You’ll need to manually tag examples by highlighting the keyword in the text and assigning the correct tag.

6. Test your model. Paste new text into the text box to see how your keyword extractor works.

7. Put your model to work! Upload data in a batch, try one of our integrations, or connect to the MonkeyLearn API.

Final Words

Natural language processing is transforming the way we analyze and interact with language-based data, by creating machines capable of making sense of text and speech and performing human tasks like translation, summarization, classification, and extraction.

Not long ago, the idea of computers capable of understanding human language seemed impossible. However, in a relatively short period of time ― and fueled by research and developments in linguistics, computer science, and machine learning ― NLP has turned into one of the most promising and fastest growing fields within AI.

NLP gives businesses the opportunity of analyzing unstructured data, such as product reviews, social media posts, and customer support interactions, and gaining valuable insight about their customers. Also, it allows them to simplify and automate routine tasks, such as tagging incoming tickets in customer service and routing them to the right agent.

As technology advances, NLP is becoming more accessible. Thanks to platforms like MonkeyLearn, it’s getting easier for companies to create customized solutions that help them automate processes and better understand their customers.

Ready to get started? Request a demo and let us know how we can help you get started with NLP!
