
Natural Language Processing 101

Armaan Kokan

December 7th, 2022


There is one field in computer science that is growing rapidly, yet its name is not thrown around as often as other buzzwords like “AI” and “ML,” or machine learning. This technology is called Natural Language Processing (NLP). In essence, it is the field in which computers are given the ability to understand human languages and to communicate with us as effectively as another human would.


Some famous examples of NLP technologies include Siri, Alexa, and Google Assistant.




Although you might not have heard about NLP technology until now, this is not a new field at all. NLP is an interdisciplinary field that combines computer science and linguistics, the scientific study of language. For our first example, let’s consider just the English language.


In the early days, NLP researchers recognized that there are an infinite number of ways to put English letters, words, and punctuation together, but only a few of them actually make sense and are grammatically correct. We, as humans, can distinguish between those sentences, but a computer cannot, so that had to be fixed. Research in the field goes back to the 1950s, when early machine translation projects such as the 1954 Georgetown-IBM experiment used a computer to translate Russian sentences into English.



Early NLP systems were only able to understand simple sentences, using a parse tree structure that divides a sentence into its individual parts of speech. Complex sentences, however, posed a huge challenge, since words and punctuation can carry many nuances.
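
To picture what a parse tree looks like, here is a small sketch in Python using the NLTK library and a made-up toy grammar (early systems used their own hand-built parsers, but the idea is the same): a sentence is split into a noun phrase and a verb phrase, and each of those into individual parts of speech.

```python
# A toy parse-tree sketch (Python + NLTK, with a hypothetical mini-grammar):
# split a simple sentence into its parts of speech, as early NLP systems did.
import nltk

grammar = nltk.CFG.fromstring("""
  S  -> NP VP
  NP -> Det N
  VP -> V NP
  Det -> 'the'
  N  -> 'dog' | 'cat'
  V  -> 'chased'
""")

parser = nltk.ChartParser(grammar)
sentence = "the dog chased the cat".split()

for tree in parser.parse(sentence):
    print(tree)
# (S (NP (Det the) (N dog)) (VP (V chased) (NP (Det the) (N cat))))
```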



NLP has various subfields; here are some of them:


Text Processing - Chatbots are the most common example of NLP. At first they were rule-based: coders had to set up trees covering all the different things a person might type and what the chatbot should respond with. Today, however, most are built with machine learning techniques trained on billions of examples of human-to-human conversation. The more data a chatbot receives, the more human-like it becomes.
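
As a rough illustration of the rule-based approach, here is a tiny chatbot sketch in Python (the keywords and replies are made up for the example): the programmer has to anticipate every pattern a person might type.

```python
# A toy rule-based chatbot: every keyword and reply is written out by hand,
# which is why this approach breaks down once conversations get complicated.
rules = {
    "hello": "Hi there! How can I help you?",
    "hours": "We are open 9 am to 5 pm, Monday through Friday.",
    "bye":   "Goodbye! Have a great day.",
}

def respond(message: str) -> str:
    text = message.lower()
    for keyword, reply in rules.items():
        if keyword in text:
            return reply
    return "Sorry, I don't understand. Could you rephrase that?"

print(respond("Hello, are you there?"))  # -> Hi there! How can I help you?
print(respond("What are your hours?"))   # -> We are open 9 am to 5 pm, ...
```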



Speech Recognition - This field deals with how a computer can take human speech and turn it into words that it understands properly. One early milestone was Harpy, a system developed at Carnegie Mellon University in the 1970s that could recognize roughly 1,000 words. Computers have slowly learned to understand human speech at conversational speeds, but there is still a long way to go.
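
To give a feel for what speech recognition looks like in code, here is a minimal sketch using the third-party SpeechRecognition package for Python (the audio filename is a placeholder, and real voice assistants use their own in-house systems):

```python
# A minimal speech-to-text sketch using the SpeechRecognition package.
import speech_recognition as sr

recognizer = sr.Recognizer()

# "request.wav" is a placeholder name for a short recording of someone speaking.
with sr.AudioFile("request.wav") as source:
    audio = recognizer.record(source)  # load the whole clip into memory

# Send the audio to a free web recognition service and print the transcript.
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Sorry, the speech could not be understood.")
```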



Speech Synthesis - This is the ability of computers to emit sounds, or speak words. The words broken down by computers in text processing and speech recognition can be rearranged to say specific things. Some examples of this technology are the famous voice assistants mentioned earlier in this article. You will notice one thing, though: although these voice assistants understand most of our simple requests, they still aren’t anywhere close to human speech and often sound like robots.
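
For a taste of speech synthesis, here is a short sketch using the third-party pyttsx3 package for Python, which drives the text-to-speech engine already installed on most computers (the spoken sentence is just an example):

```python
# A minimal text-to-speech sketch using the pyttsx3 package.
import pyttsx3

engine = pyttsx3.init()   # connect to the system's built-in speech engine
engine.say("Hello! This sentence is being spoken by a computer.")
engine.runAndWait()       # block until the audio finishes playing
```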



NLP, like other computer science fields, has come a long way, but it still has a long way to go. It could improve the lives of many people, such as those with disabilities, and even act as a therapist for psychological disorders, including phobias and generalized anxiety disorder.




