
Gated Recurrent Units (GRUs) for Natural Language Processing

In the previous articles on Recurrent Neural Networks and Long Short-Term Memory networks, we saw how these networks can be used effectively to solve problems related to NLP. In this article…

Long Short-Term Memory (LSTMs) for NLP

In the previous post, we were introduced to Recurrent Neural Networks. In this post, we will build on that knowledge and look at an important variant of the RNN called…

Introduction to Recurrent Neural Networks for NLP

In previous articles, we mainly focused on Artificial Neural Networks and Convolutional Neural Networks for solving problems in NLP. In this article, we will get an introduction to Recurrent Neural Networks…

Convolutional Neural Networks (CNNs) for NLP

In the previous articles, we saw how deep learning, and specifically an ANN, can be used for NLP. Now, we advance towards another deep learning…

Building a Question Classifier using ANN in NLP

In the previous article, we got an overview of neural networks. If you haven’t read it yet, I suggest you read the article on neural networks first so you understand…

Intro to Deep Learning (Neural Networks) for the purpose of NLP

In the previous articles, we saw the basics of Machine Learning and worked with certain algorithms for NLP. Now, we will explore deep learning, which is a…

The Support Vector Machines (SVM) algorithm for NLP

In the previous article, we explored the Naive Bayes algorithm for an NLP task. In this article, we look at another popular ML algorithm for NLP called the Support Vector Machine…

The Naive Bayes algorithm for NLP

In the previous article on Machine Learning, we discussed that the two ML algorithms most commonly used in Natural Language Processing are Naive Bayes and SVM. In this article…

Introduction to Machine Learning for NLP

As we discussed in the introductory article, Machine Learning, Artificial Intelligence, and NLP are interlinked. We need to know Machine Learning if we want to efficiently solve NLP…

Using fastText to build a Spelling Corrector

In the previous article on fastText, we saw how to build a fastText model. In this article, we will use the same concept of the fastText model and build…

FastText in NLP

In previous articles, we discussed and built models for word embeddings and for document representations. The models we trained were Word2Vec and Doc2Vec models. In this article…

Doc2Vec in Natural Language Processing

In the previous articles, we saw how to generate vectors for words in the form of word embeddings. For that task, we used the Word2Vec model. But what…

Finding Similarity using Word Mover’s distance for NLP

In a previous article, we discussed and implemented cosine similarity. With the help of cosine similarity, we were able to tell whether two documents were similar or…
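As a rough sketch of the idea, gensim exposes Word Mover’s Distance on the vectors of a trained Word2Vec model. The toy corpus below is purely illustrative, and depending on your gensim version you may need the pyemd or POT package installed:

```python
# A minimal sketch of Word Mover's Distance (WMD) with gensim.
# The toy corpus is illustrative; a real model needs far more data.
from gensim.models import Word2Vec

corpus = [
    ["obama", "speaks", "to", "the", "media", "in", "illinois"],
    ["the", "president", "greets", "the", "press", "in", "chicago"],
]
model = Word2Vec(corpus, vector_size=50, min_count=1, seed=42)

# Lower distance means the two documents are semantically closer.
distance = model.wv.wmdistance(corpus[0], corpus[1])
print(distance)
```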

Building a basic Word2Vec model

In the previous article, we learned about word embeddings and caught a glimpse of the Word2Vec model. If you recall, we used a pre-trained model from Google, which…
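For a taste of what training such a model yourself looks like, here is a minimal gensim sketch (the toy sentences are illustrative, and gensim 4.x parameter names are assumed):

```python
# A minimal sketch of training a Word2Vec model with gensim.
# A real model would need a much larger corpus than these toy sentences.
from gensim.models import Word2Vec

sentences = [
    ["natural", "language", "processing", "is", "fun"],
    ["word", "embeddings", "capture", "word", "meaning"],
    ["language", "models", "learn", "from", "text"],
]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=1)

print(model.wv["language"][:5])           # first few dimensions of a word vector
print(model.wv.most_similar("language"))  # nearest neighbours in the embedding space
```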

Introduction to Word Embeddings in NLP

Before diving into word embeddings, let us first see the difference between syntax and semantics in NLP…

Building a basic Chatbot in NLP

A chatbot is one of the most important applications of Natural Language Processing. In the introductory article, we discussed chatbots in brief. Chatbots are growing immensely in popularity, so…

Cosine Similarity in Natural Language Processing

In the previous two articles, we discussed two algorithms by which we convert text into mathematical representations. After converting the text into a suitable mathematical form, how can we know…
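The measure in question is cosine similarity, which reduces to a single formula: the dot product of two vectors divided by the product of their norms. A minimal NumPy sketch, using made-up count vectors:

```python
# A minimal sketch of cosine similarity between two document vectors.
# The vectors would normally come from BoW or TF-IDF; these values are illustrative.
import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|); 1.0 means identical direction.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

doc1 = np.array([1, 2, 0, 1])
doc2 = np.array([1, 1, 1, 0])
print(cosine_similarity(doc1, doc2))
```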

TF-IDF vectors in Natural Language Processing

In the previous article, we saw how the BoW approach works. It was a straightforward way to convert our text into numbers by just taking into consideration the frequency…
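As a quick preview of where this leads, here is a minimal TF-IDF sketch using scikit-learn (the corpus is illustrative):

```python
# A minimal sketch of TF-IDF vectorization with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)

# get_feature_names_out() requires scikit-learn >= 1.0
print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(tfidf.toarray().round(2))            # one weighted vector per document
```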

The Bag of Words (BoW) approach in NLP

The Bag-of-Words or BoW approach is a fundamental topic in Natural Language Processing. It is a way to represent our text as numbers. In the introductory section of this…
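A minimal scikit-learn sketch of the idea (the two sentences are illustrative):

```python
# A minimal sketch of the Bag-of-Words representation with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["the cat sat on the mat", "the dog ate my homework"]
vectorizer = CountVectorizer()
bow = vectorizer.fit_transform(corpus)

print(vectorizer.get_feature_names_out())  # the vocabulary
print(bow.toarray())                       # raw word counts per document
```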

Named Entity Recognition (NER) with spaCy in NLP

NER or Named Entity Recognition is an important part of Information Extraction in NLP. How do we define NER? A formal definition of Named Entity Recognition (NER) is that it…
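A minimal spaCy sketch of NER in action (this assumes the small English model has been downloaded with `python -m spacy download en_core_web_sm`):

```python
# A minimal sketch of Named Entity Recognition with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple was founded by Steve Jobs in California in 1976.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Apple ORG", "Steve Jobs PERSON"
```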

POS Tagging in NLP

Let us start by defining what Part-of-Speech tagging means. We saw that part-of-speech or POS tagging is necessary for lemmatization. It is important for other NLP tasks and problems as well…
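A minimal spaCy sketch of POS tagging (again assuming en_core_web_sm is installed):

```python
# A minimal sketch of POS tagging with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The striped bats are hanging on their feet.")

for token in doc:
    # pos_ is the coarse-grained tag, tag_ the fine-grained one
    print(token.text, token.pos_, token.tag_)
```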

Lemmatization in NLP

Lemmatization is the process wherein the context is used to convert a word to its meaningful base or root form. Now, let’s try to simplify the above formal definition to…
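A minimal spaCy sketch of lemmatization (assuming en_core_web_sm is installed):

```python
# A minimal sketch of lemmatization with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The children were running better races")

for token in doc:
    print(token.text, "->", token.lemma_)  # e.g. children -> child, were -> be
```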

Tokenization in NLP

Tokenization is the process of breaking down documents or sentences into chunks called tokens. These tokens are mostly words, characters, or numbers, but they can also be extended to…
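A minimal NLTK sketch of sentence and word tokenization (the punkt tokenizer data needs a one-time download; on newer NLTK versions the resource may be named punkt_tab):

```python
# A minimal sketch of sentence and word tokenization with NLTK.
import nltk
nltk.download("punkt", quiet=True)  # one-time download of tokenizer data

from nltk.tokenize import sent_tokenize, word_tokenize

text = "NLP is fun. Tokenization splits text into tokens."
print(sent_tokenize(text))  # ['NLP is fun.', 'Tokenization splits text into tokens.']
print(word_tokenize(text))  # ['NLP', 'is', 'fun', '.', 'Tokenization', ...]
```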

Stemming in NLP

If you remember, we discussed in the previous articles that the first step towards vectorization (converting text to numbers) is tokenization. So what is the next step? After splitting…
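A minimal sketch of stemming with NLTK’s Porter stemmer (the sample words are illustrative):

```python
# A minimal sketch of stemming with NLTK's PorterStemmer.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["running", "flies", "studies", "easily"]:
    # Note that stems need not be dictionary words, e.g. studies -> studi
    print(word, "->", stemmer.stem(word))
```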

The NLP pipeline

This post gives a brief overview of the complete NLP pipeline and the parts included in it. The image below illustrates the pipeline. The aim of…

NLP with Python

Natural Language Processing research and development is occurring concurrently in many languages. Some popular NLP libraries are written in Python, Java, and C++. Let us see why Python is…

Natural Language Processing (NLP) Introduction

Before defining NLP, let us take a look at how we humans and computers communicate. Humans communicate using natural languages. When you speak to another person, you speak in a…