Although more and more industries are processing data with artificial intelligence (AI), there is no dedicated AI law. “As long as there are no AI regulations, the requirements can only be derived from the applicable laws,” said Dr. Lutz M. Keppeler, specialist lawyer for IT law, recently in a Service-Meister webinar.
Health insurance companies evaluate the individual health risks of their members with AI. Banks detect suspicious account movements using AI. And manufacturing relies on machine data and AI to optimize processes, keep equipment available, and accelerate service processes.
“There is hardly an industry today that does not work with AI,” said Dr. Lutz M. Keppeler in a Service-Meister webinar held in July 2020 (in German). The specialist lawyer for IT law at Heuking Kühn Lüer Wojtek spoke about the legal framework that is relevant to users who want to collect data and process it with AI. “Although AI is widely used today, there is no dedicated AI law,” Keppeler said. “But the EU Commission is working on a White Paper on the Regulation of AI.”
White Paper on AI from the EU Commission: Derive minimum requirements from the potential risk
Which requirements apply to training data? What must users document and store? When is human oversight required? “Whether it concerns autonomous driving or biometric data – the White Paper will regulate all relevant aspects and derive certain minimum requirements from the potential risk,” said Keppeler. At present, however, the EU Commission has not yet presented a final result. “In the absence of AI rules, the requirements can only be derived from existing law.”
Example: Data protection law.
“Lawyers are always asked who actually owns data,” said Keppeler. “But there is no such thing as ownership of information.”
Exceptions to this are:
- Trade secrets and
- Database works.
Whether the condition and operating parameters of machines can qualify as a trade secret at all remains questionable. AI training data and information collections for predictive maintenance are a different matter: these are regarded as database works, because companies have invested in collecting, preparing, and storing the information in such a targeted manner. In practice, this means that anyone who wants to share service or training data with third parties should regulate this contractually.
Pitfalls of personal data protection
When it comes to protecting personal data, it is important to know the pitfalls. In practice, far more data can be assigned to a person than many users realize. Not only individual names and email addresses count as personal data, but also entire data records that can be related to individuals. “This applies to time stamps, log files, and IP addresses if they can be assigned to a person with a certain amount of effort,” said Keppeler.
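To make this concrete: because IP addresses in machine or web logs can count as personal data, a common precaution before feeding logs into an AI pipeline is to pseudonymize them. The following is a minimal sketch (not legal advice); the function names and the hard-coded `SALT` are illustrative assumptions — in a real deployment the salt would be a managed secret, and pseudonymized data can still fall under the GDPR.

```python
import hashlib
import ipaddress
import re

# Hypothetical salt for illustration only; a real system would manage
# this as a secret, since salted hashing is pseudonymization (the data
# may still be personal data under GDPR Art. 4(5)), not anonymization.
SALT = b"example-secret-salt"

IP_PATTERN = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def pseudonymize_ip(ip: str) -> str:
    """Replace an IPv4 address with a salted hash, so log records can
    still be correlated without directly exposing the address."""
    ipaddress.ip_address(ip)  # raises ValueError on malformed input
    return hashlib.sha256(SALT + ip.encode()).hexdigest()[:16]

def scrub_log_line(line: str) -> str:
    """Pseudonymize every IPv4 address found in a log line."""
    return IP_PATTERN.sub(lambda m: pseudonymize_ip(m.group()), line)

# Example: the raw address disappears, but identical addresses still
# map to the same token, preserving utility for AI analysis.
print(scrub_log_line("2020-07-01 12:00:01 GET /status from 192.168.0.17"))
```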
AI users should also ask themselves who is responsible for data. “An AI cannot yet be responsible for itself,” Keppeler said. Ultimately, everyone who operates and controls an AI system is liable. In a project such as Service-Meister, joint responsibility extending across all actors in the ecosystem is also possible. What this looks like in practice is shown by the example of Facebook: “The operators and the US provider are jointly responsible for all data shared by followers on the so-called fan pages on the social media platform,” said Keppeler.
Data processing allowed with three exceptions
“Anyone who wants to process information with AI should clarify whether they are allowed to use the data at all,” said Keppeler. Data protection law and the GDPR generally prohibit all processing – with three exceptions:
- The parties concerned have consented. For consent to be valid, data processors must explain to users in a clear and understandable way what exactly will happen with the information. Complex AI systems sometimes make it difficult to meet this legal requirement.
- The data processing is necessary to fulfil a contract. For example, online shops are allowed to process names and addresses in order to fulfil sales contracts and deliver ordered products.
- There is a legitimate interest. When cameras and AI secure public places, people are often recognizable. Here, the legitimate interest nevertheless justifies data processing.
AI systems need transparency from a legal perspective
Regardless of the background against which companies want to use AI systems, users should weigh up all aspects carefully. “What data does my AI use, on what legal basis is the processing carried out, and for what purpose is the information being analyzed?” said Keppeler. “AI systems also need transparency from a legal standpoint.” However, being able to explain every step of a neural network’s reasoning and decision-making in a comprehensible way is not legally required: “If the consent of the person concerned has been obtained, the question is moot.” Users are nevertheless obliged at all times to assess and document the consequences of their data processing. “The GDPR has introduced regulations in this respect, which also apply to the deletion or correction of personal data,” said Keppeler.
Nils Klute is Project and Communication Manager at EuroCloud Germany. He is responsible for content marketing activities on topics such as GAIA-X and AI, supports initiatives such as Service-Meister, EuroCloud Native or systems integrators on their cloud journey. Prior to his start at eco in 2018, Nils worked as a corporate journalist for IT corporations (like SAP, T-Systems, and QSC at Cologne-based communication agency Palmer Hargreaves) and previously held public relations positions at market and economic research institutions.
Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.