Grammarly AI: The sweet spot of deep learning and natural language processing


Last week, Grammarly secured $90 million in funding for its artificial intelligence–based grammar and writing tools. I’m usually not interested in AI funding news. But this specific case drew my attention because amidst all the hype and confusion surrounding artificial intelligence, I believe that Grammarly is solving a real problem.

Understanding and processing natural language are among the most challenging areas of AI. Many companies have engaged in ambitious AI-based language projects. But a considerable number of them have failed miserably because they did not account for the limits of current AI technologies.

Meanwhile, Grammarly has found a niche suitable for the narrow capabilities of deep learning, the current bleeding edge of AI. And the right elements have helped it grow from a small app into the leading AI-based grammar checker, now valued at more than $1 billion.

The limits of deep learning

Major advances in artificial neural networks in recent years have renewed interest in deep learning. Deep learning now permeates many fields previously thought to be off-limits for computers.

Unfortunately, the achievements of deep learning have also caused confusion and unwarranted excitement about its capabilities. Neural networks are outperforming humans in many complicated tasks, such as detecting cancer or recognizing objects. Therefore, it’s easy to expect deep learning to be able to solve problems that are easier for humans to tackle.

A stark example is natural language processing (NLP), the science of extracting the meaning of written and spoken language. Most humans learn to deal with language and its different nuances at a very young age. So, why should an AI that can predict cancer five years in advance—a task that takes years of study for humans to master—not be able to handle a task that comes naturally to every child?

As it happens, deep learning and neural networks are very bad at solving problems that require common sense. Neural networks are very sophisticated statistical machines. They can process large amounts of data and find all kinds of intricate correlations and patterns. This approach works well for tasks that require pattern-matching, such as image classification or voice and face recognition.

Unfortunately, in NLP, pattern-matching and statistics have limited applications. Although it comes to us very naturally, language processing is one of the most complicated functions of our brains. When we hear or read a sentence, we tap into our vast amount of background knowledge about the world to reason about its hidden and implicit meanings. Those are the kinds of functions that can’t be represented through statistics.

When dealing with the meaning of sentences and the semantic relations of words, deep learning algorithms fail miserably. In this case, fine-tuning the AI models with more data won’t solve the problem because every new sentence is unique.

The struggles of chatbots and AI assistants


Chatbots and AI assistants are among the most interesting case studies of how well current AI technologies deal with language. In the past few years, many companies have developed AI-powered conversational interfaces to carry out various tasks. But the field has witnessed many disappointments.

In several cases, companies that engaged in AI language projects ended up hiring humans to fix the failures of their AI models. In others, such as Facebook’s M assistant, the companies had to shut down their AI projects altogether.

But the field has also seen some very interesting advances. AI assistants such as Alexa, Siri, and Cortana have become popular among millions of users, and they owe their success to deep learning and neural networks.

Interestingly, however, the successful applications of AI assistants have been those that tackle very limited problem domains. Despite the thousands of applications developed for these assistants, those that remain popular are the simple ones. AI assistants are very good at reporting the weather, setting a timer, or looking up the name of the U.S. president.

But they start to break as soon as they enter conversations and interactions that take multiple steps. They also can’t engage in the general casual conversations that humans are very good at. In fact, one of the main challenges for AI assistants is that as they develop, they tend to drift into broad domains that are beyond the capabilities of deep learning.

Also, the voice interfaces and human names of AI assistants work against them here. Users have a tendency to anthropomorphize AI and expect their assistants and chatbots to manifest human behavior, which leads to all kinds of frustrating and confusing scenarios.

Grammarly’s narrow AI application is successful

This brings me back to Grammarly’s AI. The company’s success is largely due to its focus on a narrow application of AI NLP: grammar assistance. Grammar can be very frustrating and difficult for humans, but it is, in fact, a task where patterns play an important role and deep learning can be very useful. In most cases, finding and fixing grammatical errors does not depend on understanding the meaning of the text, which is where deep learning struggles.

A large enough corpus of well-written text can provide enough training data for a deep learning–based grammar assistant. Deep learning can detect and correct most, if not all, grammatical errors. In my experience using Grammarly, I’ve agreed with most of the corrections it gives me, ignoring only the ones that have more to do with style and personal preference.
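A rough way to see why statistics alone go a long way here: a model trained on well-formed text will assign low scores to word sequences it has rarely or never seen. The toy sketch below (my own illustration, not Grammarly’s actual method, which is far more sophisticated) flags a subject–verb agreement error purely from bigram counts over a tiny hand-made corpus:

```python
from collections import Counter

# Stand-in for a large corpus of well-written text.
corpus = [
    "she walks to work",
    "he walks to school",
    "they walk to work",
    "we walk to school",
    "she talks to him",
    "they talk to her",
]

# Count adjacent word pairs (bigrams) across the corpus.
bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    bigrams.update(zip(words, words[1:]))

def score(sentence):
    """Average bigram frequency; low scores flag unusual word pairs."""
    pairs = list(zip(sentence.split(), sentence.split()[1:]))
    return sum(bigrams[p] for p in pairs) / len(pairs)

# "she walk" never occurs in well-formed text, so the
# ungrammatical sentence scores lower than the correct one.
print(score("she walks to work") > score("she walk to work"))  # True
```

No understanding of meaning is involved: the model simply notices that “she walk” is a pattern that well-written text never produces, which is exactly the kind of signal a neural network scales up from millions of documents.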

Grammarly’s minimalist interface also makes sure that it does not venture out of the domain of narrow AI. Unlike AI assistants, Grammarly doesn’t have a human name, it doesn’t try to speak in casual human prose, and it has no voice interface. It offers bare-bones grammar suggestions. There’s no room for misunderstanding or bloated expectations.

Grammarly is not the only company in the AI-powered grammar assistance space. It’s up against the likes of Microsoft and Google, which have larger data sets and a deeper history in AI. Microsoft has been using machine learning to enhance its grammar checker for several years. Google has also integrated deep learning into its Docs and G Suite apps in recent years.

But Grammarly’s integration strategy has enabled it to compete with giants and carve a niche for itself. Grammarly’s browser extension makes it an ever-present entity in nearly every text field on the web, helping you write emails, blog posts, comments, and more. In contrast, Google and Microsoft have confined themselves within their own apps. Grammarly has also added integrations with apps such as MS Word and Google Docs, giving users the chance to compare its performance with that of its competitors. This expansive integration strategy also provides the company with plenty of opportunities to expand the corpus of text used to train its AI algorithms.

Grammarly might not employ the best AI scientists or the latest deep learning techniques developed at Google and OpenAI. But it has surely found a very good use for deep learning in text processing, one that recognizes the limits of current AI technologies and embraces their strengths.
