November 2017 - Artificial Intelligence

Artificial Intelligence - A Quite Fantastic Machine

AI specialist Reinhard Karger takes a look at the history of Artificial Intelligence research, the current state of play, and the next major advances in machine learning.

[Image: Digital Neural Network – © ktsimage | istockphoto.com]

This interview was first published in German in the eco Audiomagazine on Artificial Intelligence.

eco: Mr. Karger, you have been dealing with this topic for several decades now. When we talk about Artificial Intelligence, what exactly does that mean?

REINHARD KARGER: We have been researching Artificial Intelligence since 1988, and we say that Artificial Intelligence is the digitization of the human capacity for knowledge. What we mean is, for example, the ability to transform spoken natural language into text. That is the task of so-called speech recognition. It is a human ability, and if computers perform the same task, then you would say that they, in this case, display Artificial Intelligence. But it goes many steps further. Beyond speech recognition, which is used for dictation systems, knowledge abilities like speech comprehension come into play: the recognition, processing, and analysis of the spoken word in order to extract its meaning. This is a completely different task, which is why different algorithms are responsible for it. Yet it is also a knowledge ability. And there are many more, for example the capability to understand images, but also the capacity to step aside for someone in public – these are all knowledge abilities.
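
As an editorial aside, the distinction Karger draws – recognition turns audio into text, comprehension turns text into meaning – can be sketched in a few lines of Python. The function names and the toy intent rules below are purely illustrative and are not taken from any specific system:

```python
# Illustrative sketch only: the two stages Karger separates.
# `recognize_speech` stands in for any speech-to-text engine (hypothetical name);
# `understand` is a toy intent extractor showing that comprehension is a separate task.

import re
from dataclasses import dataclass


@dataclass
class Interpretation:
    intent: str
    slots: dict


def recognize_speech(audio_bytes: bytes) -> str:
    """Stage 1: speech recognition -- audio in, text out.
    A real engine would do acoustic and language modelling here."""
    # Placeholder: pretend the recognizer produced this transcript.
    return "book a meeting in Aachen on Friday"


def understand(transcript: str) -> Interpretation:
    """Stage 2: speech comprehension -- text in, meaning out.
    A different task with different algorithms (here: toy pattern rules)."""
    match = re.search(r"meeting in (\w+) on (\w+)", transcript)
    if match:
        return Interpretation("schedule_meeting",
                              {"city": match.group(1), "day": match.group(2)})
    return Interpretation("unknown", {})


if __name__ == "__main__":
    text = recognize_speech(b"...")   # a dictation system would stop here
    meaning = understand(text)        # a dialog system needs this second step
    print(text)
    print(meaning.intent, meaning.slots)
```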

eco: It is already the case today that we find Artificial Intelligence around every corner (for example, in smartphones and in many cars). What development lies behind this?

KARGER: Human intelligence and human consciousness have been a focus of human history for more than 5,000 years now, and for the last 2,500 years we in the West have also been dealing with them. The logicians of the 19th century laid a lot of the groundwork, and for 70 years now we have had machines that can be used to test these theories. This means that with this work we are definitely standing on the shoulders of giants. We live in a great time, but we are not the only geniuses who have ever lived.

When you look back at the state of development, let’s say 25 years ago, back then we had just started to address the problem of machine translation. To do this, you have to recognize and understand the spoken language, then produce a translation, and then articulate that translation verbally. In this respect, the first systems that were built could already do a lot, but they needed a long processing time. At that time, speech recognition was rather modest. This means that... after... every... word... you... had... to... make... a... little... pause. A great advance in this field was the distinction between hesitations and words, so that when someone says “I have Ah... a meeting Ah... in Aachen,” the speech recognizer does not try to represent these “Ahs” as words or parts of words. That was a real achievement. Today’s systems can already do this type of work relatively well. In my opinion, more has happened than I personally expected. The progress is actually quite fast, but we are still far from the capacities of a human being.
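
As an aside, the hesitation handling Karger describes can be illustrated with a deliberately naive post-processing filter. Real recognizers model fillers inside the acoustic and language models rather than stripping them from a word list, so the sketch below is illustrative only:

```python
# Illustrative only: a naive filter that drops hesitation tokens ("ah", "uh", ...)
# from a recognizer's word sequence. Production systems handle fillers during
# decoding, not with a post-hoc word list like this.

FILLERS = {"ah", "uh", "um", "er", "hm"}


def drop_hesitations(words: list[str]) -> list[str]:
    """Return the word sequence with standalone filler tokens removed."""
    return [w for w in words if w.strip(".,!?").lower() not in FILLERS]


print(" ".join(drop_hesitations("I have Ah a meeting Ah in Aachen".split())))
# -> "I have a meeting in Aachen"
```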

eco: How well does the interaction with language systems work today? For example, what about smartphones?

KARGER: Of the systems that are around now, let us take “Siri” as an example, because many people know this system. You can use “Siri” as a dialog system. This means that you ask “Siri” a question and “Siri” then responds to it, or perhaps not – these are the so-called digital assistants. When you ask people, you hear that everyone likes to use these systems at first, but after a few attempts they get the impression that something isn’t working properly. As a result, they stop using it out of frustration, because “Siri” does not really work as a dialog system. However, “Siri” as a dictation system is something else. When you dictate short messages, such as WhatsApp messages or notes, you realize that, as a dictation system, “Siri” can produce quite astonishing results and is also amazingly helpful. This means that speech recognition is now really at a rather good level. Speech comprehension has not yet reached this level, and therefore many answers are either not relevant or somehow miss the point, or you do not get the information you actually need.

eco: So speech recognition is already well developed, but speech comprehension still has room for improvement. What is the next step now?

KARGER: Next up, I think a lot of innovation and Artificial Intelligence in the form of visual search will have an impact on our everyday life. This means that you can search for pictures by means of language. Let’s assume you have a smartphone with up to 20,000 pictures on it, and you want to show someone a specific photo but are unable to find it; the photo you are looking for gets lost in the flood of images on your smartphone. The next thing I think will happen is the linking of speech recognition with systems that can recognize and process images, which operate on neural networks – keyword “deep learning” – so that you can verbally describe a picture and the right picture is displayed: a visual result, without having to specify a text or a keyword in advance. In my opinion, this is the next step where you can say: “This is a fantastic new solution.” Of course, this also means that you will get much better results than you do today, especially in areas (e.g. movies, reports, and documentation) that are concerned with pictures.
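
As an aside, the retrieval idea behind such a visual search – a neural network maps both the spoken description and every photo into a shared vector space, and the closest photo wins – can be sketched as follows. The filenames and vectors are toy values; a real system would compute the embeddings with trained text and image encoders (for example a CLIP-style model):

```python
# Minimal sketch of text-to-image retrieval: compare a query vector against
# pre-computed photo vectors and return the best match by cosine similarity.
# The vectors below are hard-coded toy values standing in for model outputs.

import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


# Toy "embeddings" for three photos on the phone.
photo_index = {
    "IMG_0001.jpg": np.array([0.9, 0.1, 0.0]),   # beach at sunset
    "IMG_0002.jpg": np.array([0.1, 0.8, 0.2]),   # birthday cake
    "IMG_0003.jpg": np.array([0.0, 0.2, 0.9]),   # cathedral in Aachen
}


def search(query_vector: np.ndarray) -> str:
    """Return the filename whose embedding is closest to the query."""
    return max(photo_index, key=lambda name: cosine(query_vector, photo_index[name]))


# Pretend the spoken query "the photo of the cathedral" was embedded as:
query = np.array([0.05, 0.15, 0.85])
print(search(query))   # -> IMG_0003.jpg
```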

We can of course look at other fields of application, and these include medicine, i.e. the deep processing, clustering, and analysis of the results of imaging techniques (e.g. MRI, CT, and ultrasound images). In this context, I hear from doctors that these applications can provide completely new insights, because there are a great number of these pictures and no human being can look at them all. If you could reasonably analyze these images with pattern recognition, it could lead to new insights in the medical context. A similar situation can be expected in the context of predictive maintenance. It is possible that the automatic analysis of sensor data will find patterns indicating that a machine component should be replaced sooner than originally intended in a maintenance cycle, or could run much longer, so that the maintenance work can be adapted accordingly.
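
As an aside, the predictive-maintenance idea – watching sensor data for patterns that deviate from the recent baseline – can be illustrated with a simple threshold rule. The window size, threshold, and vibration values below are invented for illustration; real systems typically learn such patterns from historical sensor and failure data:

```python
# Illustrative sketch: flag sensor readings (e.g. vibration) that drift far from
# the recent baseline, suggesting a component may need attention sooner than the
# scheduled maintenance cycle. All numbers here are made up for the example.

from statistics import mean, stdev


def flag_anomalies(readings: list[float], window: int = 20, k: float = 3.0) -> list[int]:
    """Return indices where a reading deviates more than k standard deviations
    from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged


vibration = [1.0 + 0.02 * (i % 5) for i in range(100)]   # regular operating pattern
vibration[70] = 2.5                                      # simulated bearing fault
print(flag_anomalies(vibration))                         # -> [70]
```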

eco: How do you actually make objects intelligent? 

KARGER: Making objects intelligent – transforming products into smart products – usually requires, first of all, that they have sensor information about their own condition. This means that smart products know their own operating conditions. In most cases, however, what matters is that the meaning, or the semantics, of an action within its context brings with it a complexity that no one can foresee. Therefore, you need more than just a specialized digital assistant – rather, you need a surprisingly comprehensive understanding of the world. That is exactly what machines cannot yet do. For this reason, despite the great successes we have seen, development will continue with specialized and specific applications. In manufacturing, these will change our day-to-day work. In teaching, they will change not only the way we learn languages, but also subjects such as mathematics and the natural sciences – we will be able to engage in dialog-based, factual conversations with digital assistants. All of this will be possible. However, when you say that you “breathe intelligence into objects”, then you are drawing on a biblical understanding of intelligence and life. That is not what the machine is. The machine is not magical. The machine has only limited sensory access to the world. It has no instincts, no feelings, and no will – things that you need in order to understand human beings. If you could actually transfer all of these to machines, then let us see what we could achieve and what the world would look like. But that is certainly not something we are doing today or in the near future.

eco: So the general intention is not to replace human beings one day?

KARGER: Well, Artificial Intelligence is a machine, and it should stay that way. It is like a bicycle – a quite fantastic machine. But when you and your partner take the bike out on the weekend to go to a cafe, then it’s your partner who joins you in the cafe, and the bike stays locked up out front.


Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.