Global considerations – languages, encodings, and translations

There are thousands of natural languages, both spoken and written, in the world, although the majority of people speak one of the top 10 languages, according to Babbel.com (https://www.babbel.com/en/magazine/the-10-most-spoken-languages-in-the-world). In this book, we will focus on major world languages, but it is important to be aware that different languages can raise different challenges for NLP applications. For example, written Chinese does not include spaces between words, which most NLP tools rely on to identify the words in a text. This means that processing Chinese text requires additional steps, beyond recognizing whitespace, to separate the words. This can be seen in the following example, translated by Google Translate, where there are no spaces between the Chinese words:

Figure 1.1 – Written Chinese does not separate words with spaces, unlike most Western languages

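As a rough sketch of that extra segmentation step, here is what Chinese word segmentation might look like in Python, assuming the third-party jieba package has been installed; the sentence is a made-up example, not the one in the figure:

# A minimal sketch of Chinese word segmentation, assuming the
# third-party jieba package is installed (pip install jieba).
import jieba

text = "我喜欢自然语言处理"   # "I like natural language processing"

# jieba.lcut() returns the segmented words as a list
words = jieba.lcut(text)
print(words)                  # for example: ['我', '喜欢', '自然语言', '处理']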

Another consideration to keep in mind is that some languages have many different forms of the same word, with different endings that provide information about the word's specific properties, such as the role it plays in a sentence. If you primarily speak English, you might be used to words with very few endings, which makes it relatively easy for applications to detect multiple occurrences of the same word. However, this does not apply to all languages.

For example, in English, the word walked can be used in different contexts with the same form but different meanings, such as I walked, they walked, or she has walked, while in Spanish, the same verb (caminar) would have different forms, such as Yo caminé, ellos caminaron, or ella ha caminado. The consequence of this for NLP is that additional preprocessing steps might be required to successfully analyze text in these languages. We will discuss how to add these preprocessing steps for languages that require them in Chapter 5.
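As a hedged sketch of one such preprocessing step, the following example uses spaCy's small Spanish model to lemmatize the inflected forms of caminar back to a common base form; it assumes spaCy and the es_core_news_sm model have been installed, and it is not necessarily the approach taken in Chapter 5:

# A minimal lemmatization sketch, assuming spaCy and its small Spanish
# model are installed:
#   pip install spacy
#   python -m spacy download es_core_news_sm
import spacy

nlp = spacy.load("es_core_news_sm")
doc = nlp("Yo caminé, ellos caminaron, ella ha caminado")

# Each inflected form of "caminar" should map back to the same lemma
for token in doc:
    print(token.text, "->", token.lemma_)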

Another thing to keep in mind is that the availability and quality of processing tools can vary greatly across languages. There are generally reasonably good tools available for major world languages such as Western European and East Asian languages. However, languages with fewer than 10 million speakers or so may not have any tools, or the available tools might not be very good. This is due to factors such as the limited availability of training data, as well as reduced commercial interest in processing these languages.

Languages with relatively few development resources are referred to as low-resourced languages. For these languages, there are not enough examples of the written language available to train large machine learning models in standard ways. There may also be very few speakers who can provide insights into how the language works. Perhaps the languages are endangered, or they are simply spoken by a small population. Techniques for developing natural language technology for these languages are actively being researched, although for some of them it may not be possible, or may be prohibitively expensive, to do so.

Finally, many widely spoken languages, such as Chinese, Russian, Arabic, Thai, Greek, and Hindi, among many others, do not use Roman characters. When dealing with languages that use non-Roman alphabets, it’s important to recognize that tools have to be able to accept different character encodings. Character encodings are used to represent the characters in different writing systems. In many cases, the functions in text processing libraries have parameters that allow developers to specify the appropriate encoding for the texts they intend to process. In selecting tools for use with languages that use non-Roman alphabets, the ability to handle the required encodings must be taken into account.
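As a small illustration of specifying an encoding, Python's built-in open() function accepts an encoding parameter; the filename and sample text below are hypothetical:

# A minimal sketch of writing and reading non-Roman text with an
# explicit character encoding; the filename and text are hypothetical.
sample = "नमस्ते दुनिया"      # "Hello, world" in Hindi (Devanagari script)

# Write the text using an explicit UTF-8 encoding...
with open("hindi_sample.txt", "w", encoding="utf-8") as f:
    f.write(sample)

# ...and read it back, specifying the same encoding
with open("hindi_sample.txt", encoding="utf-8") as f:
    text = f.read()

print(text == sample)         # True: the characters survive the round trip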
