Before we begin any discussion of artificial intelligence (AI) in terms of economic potential, innovation, or even associated ethical considerations, it is first important to delineate exactly what we are talking about when we refer to AI. We are not talking about the form of AI that we have become accustomed to meeting in the realm of science fiction. This, with its broad human-like but super-human intelligence and capabilities – as embodied in characters like HAL, R2-D2, or Mr Data – is what should be considered strong AI. But strong AI is not the reality, and is unlikely to become so in the foreseeable future. When we talk about AI, typically we are talking about weak AI. This includes machine learning, natural language processing, speech recognition, and some expert systems, and extends into robotics. AI comes particularly into play in processing visual information, as in image recognition and machine vision. These are the elements we are alluding to within the spectrum of AI. And – to put it in a nutshell – we are not talking about the replacement of humans at all.
Highlights of the month:
Alongside the articles and industry insights mentioned in this article, dotmagazine also takes a look at developments in digital identities – with an interview on the ID4me single sign-on initiative.
However, AI is a game-changer that will have an enormous impact on the future competitiveness of economies, offering massive advantages to those who are able to develop and support it effectively. But achieving this requires a paradigm shift in the way companies handle data: small and medium-sized companies need to share data through open data platforms in order to combine forces and compete with the giants.
“Education, research, innovative industrial manufacturing, and high-performance digital infrastructures are the four pillars of a strong ecosystem that can bring reliable and secure AI applications onto the market and secure our long-term digital self-determination in this important technology area,” according to eco Association Chair Oliver Süme. In particular, without high-performance digital infrastructure, there can be no AI. This is something that Sanna Räsänen from Herman IT looks into in detail in this issue of dotmagazine.
To ensure digital self-determination for Europe, it is essential that we provide the appropriate infrastructure to manage and process data within our own territory. On this note, the EU Commission presented its proposal for a strategy for artificial intelligence (AI) in Europe in April 2018. The main pillar of this strategy is the strengthening of research. The Commission also hopes to broaden the use of AI in the context of the Digital Single Market. As is emphasized in the eco Association campaign for the 2019 European Parliament election, both approaches are to be welcomed in principle, but need to be fleshed out in more concrete terms. In particular, the EU should place a greater focus on the market-oriented application of AI solutions.
But while it is important to set the right political direction for AI, it is also necessary to recognize the value of ethical behavior within the industry.
As Oliver Süme points out in his dotmagazine editorial “Ethical Standards for Digital Technologies: Evolution Instead of Revolution?”, “digital technologies which find their applications in the areas of artificial intelligence, data processing, the Internet of Things, or social communication platforms are always so-called dual-use technologies – meaning that they can be both a blessing and a curse, can be used for benign purposes or abused to criminal ends.” To this end, the eco Association has produced a set of “Guidelines for the Handling of Artificial Intelligence”. One of its core recommendations is that “initiatives for the promotion of reliable transparency of artificial intelligence, its mode of operation, and the data that it collects, processes, and generates should be supported by politics and industry. They make a central contribution to the trust accorded to developers, providers, and operators.” Prof. Norbert Pohlmann elaborates on this theme of trust in his article “Artificial Intelligence in Support of Humans”, in which he emphasizes how important transparency in algorithms is for ethical AI. And in a related article, Oliver Süme, alongside his fellow Fieldfisher technology law expert Niels Töllner, outlines an important legal framework for AI and Smart Data.
But what about the value of AI? Let’s come back to the aforementioned market-oriented application of AI solutions. Systems are becoming more complex all the time, and the mass of data being generated is a limiting factor for humans. IT security is one example, but so too is basic infrastructure provision like network interconnection. It is becoming more and more difficult to have a complete overview of systems, particularly given that things are happening and changing so fast. Here AI comes into its own: Applications can be trained to recognize anomalies – patterns that may suggest the beginning of an attack, for example – and provide early warning so that specialists can take preventative action. In this context, Dr. Thomas King, CTO of DE-CIX, offers useful insights into how his company has already successfully deployed AI for network infrastructure.
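To make the anomaly-detection idea concrete: the following is a minimal, hypothetical sketch – not DE-CIX’s actual method, and far simpler than a trained model – that flags samples in a traffic series which deviate sharply from the statistical baseline, using a simple z-score rule. The function name, threshold, and data are illustrative assumptions.

```python
# Hypothetical sketch: flag traffic samples whose z-score (distance from
# the mean, in standard deviations) exceeds a threshold. Real deployments
# would use trained models over many features, not a single static rule.
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.5):
    """Return the indices of samples lying more than `threshold`
    standard deviations from the mean of the series."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:          # perfectly flat series: nothing to flag
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Steady traffic with one sudden spike at index 7
traffic = [100, 102, 98, 101, 99, 103, 100, 500, 101, 100]
print(find_anomalies(traffic))  # → [7]
```

A specialist would then inspect the flagged sample and decide whether it marks the start of an attack – which is exactly the division of labor described above: the application provides the early warning, the human takes the action.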
But the final decision is still made by humans. AI cannot and will not replace humans. Rather, AI can support specialists in many areas to deal with this increasing complexity. As Sebastian Kurowski points out, AI is not capable of conceptual reasoning, meaning that it cannot make accountable decisions. As a result, “AI will have a huge impact on decision support, it will provide a lot of resource savings – but it will never be a stand-alone, completely isolated, autonomous entity, as it is so often portrayed.”
This of course doesn’t mean that we won’t have to deal with some transitional human resource challenges. Companies are faced with a growing skills gap: workers who are familiar with existing systems are not yet familiar with the new ones. Added to this, we have very short development and enhancement cycles, which means that a significant amount of a technician's time must be devoted to retraining and upskilling. This is a very challenging requirement for all companies. Just consider the rapid shift from the combustion engine to the electric motor.
This is something that the eco Association is trying to address in the joint project Service-Meister: making use of AI services – especially machine learning, monitoring systems, pre- and post-service analyses, and access to knowledge through data sharing – and bringing all of these factors together to support technicians when they are working with complex systems. Essentially, this is all about closing the knowledge gap that is at the heart of many companies’ concerns. In his article on the Service-Meister project, Henrik Oppermann of USU Software – the company leading the Service-Meister project consortium – explores how AI can add significant value. At the same time, Oppermann also provides reassurance about how AI will support humans without replacing them.
Crucial projects like Service-Meister are bound to allay fears once they bear fruit, but until then we are faced with a discernible fear of the negative consequences of AI in the workplace. This is a topic picked up by Lucia Falkenberg, Chief People Officer of the eco Association. She points out that many of these anxieties are based not on personal experience, but rather on speculation and a lack of knowledge about new technologies. As a result, it will be imperative to ensure comprehensive education programs, in both school and professional development contexts, to allow people to keep up with the pace of change and to enable the success of the digital transformation of industry. On the one hand, staff need to learn how to work effectively with the support of AI applications; on the other, AI can itself provide training for staff, as demonstrated by the machine learning-based IT security awareness service explained by David Kelm of IT-Seal.
AI is certainly a game-changer, but it’s also simply another step in the current industrial revolution – and if we look back at previous industrial revolutions, we can see that anxiety about change is something that needs to be overcome for a society to move forward. It's an ongoing story. I do not have the expectation that we will be facing a singularity in the foreseeable future. We still need the workforce, more than ever, but we need to support them in dealing with the changes and developments taking place in industrial and IT systems.
Andreas Weiss is Head of Digital Business Models at eco - Association of the Internet Industry. He started there in 1998 with the Competence Group E-Commerce and Logistics, moving afterwards to E-Business. Since 2010, he has been leading the eco Cloud Initiative as Director of EuroCloud Deutschland_eco and is engaged in several projects and initiatives for the use of artificial intelligence, Data Privacy, GDPR conformity, and overall security and compliance of digital services.