October 2019 - Artificial Intelligence | E-Health | IoT

Overcoming Preconceptions about AI in the Healthcare Industry

Rather than robots, it will be monitoring systems using AI that will enable people to live independently for longer, VIVAI’s Bettina Horster argues.


© Jackie Niam | istockphoto.com

Everybody knows that AI is very powerful in the healthcare industry. Frankly, it's a no-brainer. There are a lot of initiatives utilizing AI in healthcare, but there is a need to broaden general awareness of the potential applications, so that funding decision-makers (politicians, health insurance companies, etc.) have a more complete picture.

Currently, the most well-known applications of AI in the medical industry make use of image recognition, for example, to recognize anomalies and make a diagnosis. And when it comes to applications in the fields of healthcare and nursing, people always think first of robots.

I find this really strange, because robots are hugely expensive, and their use would mean the cost of care would skyrocket. To give you an example, when we started our AI-supported Ambient Assisted Living project Smart Service Power, we included a robotics company that produces tactile arms – robots that can work alongside people. At that time, the robot arm cost roughly EUR 90,000 – and this is only the hardware, without any programs or software. From the computer science point of view, creating the software for robotics is highly challenging, because it needs to operate in a 3-D environment, and a lot of things have to be predicted.

So, to me, it has always seemed odd that so much research funding is going into this channel. Rather than focusing on developing robotics for aged care, it is my belief that the focus should be placed firmly on the person who needs care. If you look at elderly people, it's important to understand what their actual day-to-day problems are, and how they feel about their situation. Older people often feel insecure and unsteady on their feet, are afraid of forgetting important things, and don't like being at home alone without access to assistance. That's why some of them – although they could still live at home – move to a nursing home or assisted living facility for senior citizens. What all seniors are most afraid of is that they might fall and then not be detected for hours and hours.

So what we thought is – why don't we increase the safety level at home and enable people to live autonomously as long as they can?

But what does AI have to do with this?

With AI, it's possible to predict if a person is deteriorating. There is a set of measures known as "ADLs" (activities of daily living), and these refer to very simple actions: You get out of bed, go to the bathroom, spend a certain amount of time in the bathroom, and afterwards you return to your bed and lie down again. There are 14 major tasks a person has to be able to perform in order to live autonomously.

We can really monitor these actions – whether this person gets up, goes to the coffee machine and opens the refrigerator door, makes breakfast and later on puts stuff back into the refrigerator. It is necessary to analyze whether these activities are done on a regular basis, or whether there is something wrong. Take, for example, someone who used to go to bed at 10pm: such a system flags if this person starts getting up at night rather often and wandering around the apartment. It is quite possible that this person would not provide this information to their doctor or their relatives, making it something of a hidden fact. But with a system like ours, it is possible to detect that this person might be in an early stage of dementia or Parkinson's. It's also useful should this person fall – the AI needs to be able to detect that this person is not sitting, but lying on the floor, and that something is not right. That is another AI-supported task in our service.
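The kind of deviation flagging described above can be sketched in a few lines. This is a minimal illustration only – the event names, sensor format, and thresholds are hypothetical stand-ins, not the actual Smart Service Power implementation:

```python
from datetime import datetime

def count_night_wakings(events, night_start=23, night_end=5):
    """Count hypothetical 'get_up' sensor events falling in the night window."""
    return sum(
        1
        for ts, activity in events
        if activity == "get_up" and (ts.hour >= night_start or ts.hour < night_end)
    )

def flag_deviation(nightly_counts, baseline_mean, tolerance=2):
    """Return the nights whose waking count exceeds the personal baseline."""
    return [n for n in nightly_counts if n > baseline_mean + tolerance]
```

In a real system, `baseline_mean` would of course be learned per person rather than supplied by hand, since what counts as "normal" differs between individuals.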

In our service, AI functions both as a diagnostic tool and as a safety tool, making use of the standard processes and usual activities that a given person engages in. This is pure AI. A prediction is made about what this person is likely to do, what this person is usually expected to do, and at what point in time it appears this person may no longer be able to do it.

A further AI task in the context of aged care is natural language processing. When a person has fallen, the system needs to ask if everything is ok. And if the person answers, "Ow", or "I feel so bad", or "I have a problem", the computer needs to understand what this person means and react accordingly. This, then, is much more than just speech-to-text – it requires an understanding of the actual meaning of an utterance, its intent. We have also integrated AI into the user interface, because the system is sometimes confronted with people with slight dementia, for example, and the machine needs to be able to understand erratic behavior and to recognize the changes on which a diagnosis can be based.
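To make the difference between speech-to-text and intent recognition concrete, here is a deliberately simplistic keyword-based intent matcher. The intent names and keyword lists are invented for illustration; a production system like the one described would use a trained language model rather than substring matching:

```python
# Hypothetical intent vocabulary – checked in order, emergencies first.
INTENT_KEYWORDS = {
    "emergency": ["hurt", "feel so bad", "problem", "help"],
    "all_clear": ["fine", "okay", "nothing happened"],
}

def classify_intent(utterance):
    """Map a transcribed utterance to a coarse intent label."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "unknown"
```

The point of the sketch is the pipeline shape: the speech recognizer only produces text, and a separate step has to decide whether "I feel so bad" means the system should call for help.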

These are all examples of the AI-supported cognitive services that could be more widely used to provide a really quite profound level of support, so that elderly people can live a better and more independent life, without being worried about what is going to happen in an emergency. This makes it clear that elderly people can benefit dramatically from AI, and this can be achieved at a fraction of the cost of robotics. And most importantly – these functions are already there!

Training AIs for aged care

So, how did we develop our AIs? The first step is to create a user model. User modelling is the key here, because every person is an individual, with their own idiosyncrasies. If you have somebody who is a night owl, it's perfectly okay if they are up until 2 o'clock in the morning, walking around the apartment – that's what they've always done, and it's not a sign of deterioration. So the machine needs to learn how to recognize if somebody is in danger, is in difficulties or unresponsive, or is simply talking to another person. Each of these is a separate algorithmic task.

The system has been validated in a field test. Our estimate that the system needs about four weeks to learn what is normal for the person whose routines and reactions are being modelled turned out to be correct. Once the system knows that, it can be very accurate in predicting what is happening right now and what will happen in the near future. What we see as a central issue is that it's important to talk to the people taking part in the test. You have to explain how things work – that it is not a miracle or anything supernatural. This is a system that keeps getting better every day.
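The idea of a model that "keeps getting better every day" and only becomes trustworthy after a learning period can be sketched with an online baseline learner. The class name, the single-number observation (e.g. minutes of night-time activity per day), and the 28-day threshold are illustrative assumptions, not the project's actual model:

```python
class Baseline:
    """Incrementally learned per-person baseline for one daily observation."""

    LEARNING_DAYS = 28  # roughly the four-week learning period mentioned above

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, value):
        # Incremental mean update: each new day refines the model.
        self.n += 1
        self.mean += (value - self.mean) / self.n

    def is_anomalous(self, value, tolerance=2.0):
        # Only flag deviations once the learning period is over.
        return self.n >= self.LEARNING_DAYS and abs(value - self.mean) > tolerance
```

Before the learning period has elapsed, the model deliberately flags nothing – which matches the field-test observation that the system needs about four weeks before its notion of "normal" is reliable.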

Ensuring data sovereignty

Most people, regardless of age, have a notion of security and privacy – of what they want known and what not. The thing about a system like ours is that it really does intrude on the lives of the people using it, so it is essential to be extremely careful with the handling of data. Data autonomy and data usage control are key.

For example, at the start of the Smart Service Power project, we conducted a large survey asking people what they want their relatives to know. In most cases, elderly people responded that they do not want their relatives to be informed of everything. It is necessary to allow people to decide what data will leave the premises and be visible to others, and what stays in the apartment and remains completely confidential. The result of the survey showed that a traffic light system for permission to access certain kinds of data is likely to achieve broad acceptance.
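A traffic light permission model of this kind is straightforward to express in code. The data categories, default colors, and recipient names below are hypothetical examples for illustration – the actual categories would be chosen by each user:

```python
# Hypothetical per-user permission table: green = share freely,
# yellow = medical professionals only, red = never leaves the apartment.
PERMISSIONS = {
    "fall_alarm": "green",
    "sleep_pattern": "yellow",
    "room_presence": "red",
}

def may_share(data_kind, recipient, permissions=PERMISSIONS):
    """Decide whether a data category may be shared with a given recipient."""
    light = permissions.get(data_kind, "red")  # unknown data stays private
    if light == "green":
        return True
    if light == "yellow":
        return recipient == "medical_professional"
    return False
```

Defaulting unknown categories to red reflects the data-sovereignty principle above: nothing leaves the apartment unless the person has explicitly allowed it.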

In the case of medical professionals, things look different. Such a system allows the professionals to gain important insights about patients who might not be capable of providing the information themselves. For example, if the medical professional has a user profile for a particular individual, the data collected will flag any changes in the person's behavior or energy levels – as a result, for example, of changes in medication – and they are much better able to adapt medication effectively to changing conditions.

But exactly who would have knowledge of this behavior data?

The best thing would be for a nursing care service to be given access to this data. They are the experts. The person does not need to already be a patient of a care service; a nursing service could offer this as a service – something a lot of nursing services are interested in. This is a new business model for them, and also for us. We basically administer the data and bring in the experts when they are needed. But the data could also be provided to relatives, or to the person's doctor, if they have a doctor who is active in telemedicine. There are various options available, depending on what the person wants.

The simple truth is, populations are aging, and we need to find cost-effective solutions for aged care. In Germany, seventy-five percent of all people living in nursing homes receive social security payments. Here, we are talking about social transfer costs of EUR 20,000 on average for every person in this kind of care in Germany. And what I really don't understand is why a digital assistance system for elderly care, which costs only a fraction of this, is not given greater support in the funding and planning of aged care – and it's not only our service that is available and shows potential; there are other systems too.

My basic appeal would be for systems like this to be treated as equivalent to aged care facilities. Instead of going to a nursing home, people could choose to stay at home, be supported by this kind of monitoring, and remain in control of their lives, their data, and their care.

The digital eHealth assistant "Smart Service Power" (SSP) is a new approach to elderly care, integrating sensors, devices, services, and data. Beneficiaries are seniors who can stay in their own homes for as long as possible, with the safety of a nursing home. In addition, all providers of services in housing, health, and care – such as housing associations or nursing services – benefit from SSP. The system is currently in the final realization phase. It is the only federal and EU-wide platform of its kind in the care sector. The project has already received many awards: for example, from the European Commission, the German Federal Government, and many others.


Dr. Bettina Horster is Head of Business Development at VIVAI Software AG, where she is responsible for the IoT area (Internet of Things, Industry 4.0, M2M), and is also Director of IoT at eco – Association of the Internet Industry. She has been involved with the Internet of Things since 2009 and is a pioneer in the field. She is currently also Project Leader for the EU IoT project "Smart Service Power – technikgestütztes, altersgerechtes Wohnen im Quartier" (technology-supported, age-appropriate living in one's own neighborhood), recently won the "Information Society Award" of the Diplomatic Council, a UN-registered NGO, and was a finalist in the UN "World Summit on the Information Society" prize.


Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.