July 2019 - Artificial Intelligence | Digital Ethics

Teaching AI to Learn

Prof. De Kai explains how AI can learn independently, and why we need to be careful to offer good role models for them to learn from.



dotmagazine: Prof. De Kai, you say that machines can learn independently by comparing two languages. How does this work?

Prof. De Kai: Children don't learn language in a vacuum. Imagine if children grew up blindfolded and tied up in a chair. Even if you spoke a million sentences at them, they wouldn't learn the language. Remember the story of Helen Keller? What actually enables us to learn our native languages is that we are constantly comparing two (or more) languages and learning how they relate. As children, we correlate, say, the English representation language we're hearing with, say, the visual representation language of what we're seeing, or the tactile representation language of what we're feeling.

What we did, which was pretty radical at the time, was to build an AI that would learn language by correlating the English representation language it was “hearing” with a Chinese representation language of what it was “seeing”. We fought the traditional approach in both linguistics and computer science of studying language as a monolingual entity. That mistake was a bottleneck that for many decades prevented researchers from thinking deeply enough about the true nature of language, which is as an interpretation or translation mechanism. Interpretation and translation are by nature bilingual phenomena.

My research shifted the study of how to learn the structure of language away from a monolingual pursuit, toward a bilingual pursuit. Rather than trying to discover what is universal among all human languages (a monolingual way of thinking), we realized that what is truly universal is the kinds of relationships between human languages (a bilingual way of thinking). This ends up explaining longstanding puzzles in linguistics (such as the phenomenon that all human languages seem to limit the number of "core semantic arguments" of any linguistic predicate to a maximum of about four) in terms of basic universal mathematical properties of relationships between structures.

By doing so, we were able to develop fast (polynomial time and space) algorithms for learning and transducing (interpreting or translating), where classic approaches failed due to their slow (exponential time and space) algorithms. This led to the theory of inversion transductions and stochastic ITGs (inversion transduction grammars) that underlie much of modern machine translation, today in both probabilistic and neural implementations. These kinds of breakthroughs were what enabled the advent of systems like the Google/Yahoo/Microsoft translators upon which we rely so heavily today.
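To make the idea of inversion transduction a bit more concrete, here is a minimal toy sketch (our own illustration, not De Kai's actual formulation): an ITG parse tree combines bilingual word pairs either in "straight" order, where the target language keeps the source order, or in "inverted" order, where the two subconstituents swap. A real stochastic ITG assigns probabilities to such rules and is induced from bilingual corpora rather than hand-written like this.

```python
# Toy sketch of inversion transduction reordering (illustrative only).
# A real stochastic ITG is learned from bilingual data, not hand-built.

def realize(tree):
    """Return (source, target) word lists from a toy ITG parse tree.

    A node is either a leaf pair (src_word, tgt_word), or a triple
    (op, left, right) where op '[]' keeps source order in the target
    (straight) and '<>' swaps the two subconstituents (inverted).
    """
    if len(tree) == 3 and tree[0] in ('[]', '<>'):
        op, left, right = tree
        ls, lt = realize(left)
        rs, rt = realize(right)
        return ls + rs, (lt + rt) if op == '[]' else (rt + lt)
    src_word, tgt_word = tree          # leaf: a bilingual word pair
    return [src_word], [tgt_word]

# English "eat at home" vs Chinese "zai jia chi" ("at home eat"):
# a single inverted node captures the verb/locative reordering.
tree = ('<>', ('eat', 'chi'), ('[]', ('at', 'zai'), ('home', 'jia')))
src, tgt = realize(tree)
print(' '.join(src))   # eat at home
print(' '.join(tgt))   # zai jia chi
```

Because every node is either straight or inverted, the space of reorderings is sharply constrained, which is what makes polynomial-time bilingual parsing possible where unrestricted permutation would be exponential.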

dot: Language makes complex interrelationships and thoughts also accessible for machines. Is there any likelihood that machines will soon become aware of what they are saying?

Prof. De Kai: The commercial technology today is still far from even attempting to get machines to truly understand at a reasonably human level of depth. Instead of tackling the more fundamental problems, commercial approaches have been grabbing the low-hanging fruit. They throw exponentially more massive amounts of data and computation at the tasks, and yet the translation and dialog assistant AIs still make hilarious mistakes that three-year-olds would laugh at.

Consider that commercial AIs are trained on many trillions of words of training data. In contrast, a human three-year-old pretty much masters their mother tongue by the time they turn four — having heard only about 15 million words spoken to them in their entire lifetime. In other words, current "weak AI" systems are trained on the square of the amount of data (and computation) that they really should need if they were truly intelligent. From a human-level "strong AI" standpoint, that's insane, and solving such problems is the kind of research that my lab is focused on.
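A quick back-of-the-envelope check of the data-scale comparison above (the figures are the interview's; the arithmetic is ours):

```python
# Back-of-the-envelope check of the interview's data-scale argument.
child_words = 15_000_000      # words a child hears by roughly age four
square = child_words ** 2     # what "trained on the square" would imply

print(f"{square:,}")          # 225,000,000,000,000
```

Squaring the child's roughly 15-million-word exposure gives 225 trillion words, which is indeed on the order of the "many trillions" of words of training data attributed to commercial systems.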

That said, I don't think we are that far away from the point where machines become aware of what they are saying. It's just that 99.9% of the huge amounts of capital being invested into "AI" today are just applying existing off-the-shelf weak AI tools, rather than tackling the real questions of strong AI. A tiny fraction of that funding would actually solve the strong AI problems. There's a TEDx talk "Why AI is impossible without mindfulness" that I gave recently at TEDxOakland if you'd like to dive deeper into this topic.

dot: What does this mean today for the way we interact with Alexa, Siri and Co.? 

Prof. De Kai: The hidden danger that we are insufficiently aware of is that machines today are already learning and spreading culture based on how we interact with them. Even though today's AIs are still weak, they have already become integral, active, imitative, influential members of society. More so than most human members of society, if we are honest. And unlike rule-based AI systems of the past, today's AIs are based on machine learning and neural networks, which means there are few places where you could hardcode ethical rules, any more than you could unscrew a human's head and solder ethical rules in. As AIs become more aware, they will simply learn morals, ethics, and values from us, just like human children do. We, all of us, are the training data. Each one of us needs to be raising our AIs far better than we have been doing, if we expect any sort of sustainable planet in the AI era.


De Kai, author of the forthcoming book Artificial Children, is Professor of Computer Science and Engineering at HKUST, and Distinguished Research Scholar at Berkeley's International Computer Science Institute. He is among only 17 scientists worldwide named by the Association for Computational Linguistics as a Founding ACL Fellow, for his pioneering contributions to machine translation and machine learning foundations of systems like the Google/Yahoo/Microsoft translators. Recruited as founding faculty of HKUST directly from UC Berkeley, where his PhD thesis was one of the first to spur the paradigm shift toward machine learning based natural language processing technologies, he founded HKUST's internationally funded Human Language Technology Center which launched the world's first web translator over twenty years ago. De Kai's AI research focuses on natural language processing, language technology, music technology, and machine learning, and his cross-disciplinary work in language, music, and cognition centers on enabling cultures to relate. He holds a Kellogg-HKUST Executive MBA and a BS in Computer Engineering from UCSD. In 2015, Debrett's HK 100 recognized him as one of the 100 most influential figures of Hong Kong. De Kai was one of eight inaugural members selected by Google in 2019 for its AI ethics council ATEAC (Advanced Technology External Advisory Council).


Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.