
Smart Data, Artificial Intelligence, and Law

What does the use of AI and the sharing of data along value chains mean for liability, accountability and data protection? The Fieldfisher technology law experts Oliver Süme and Nils Töllner outline the legal framework for robotics, AI and data analytics.

Robot Liability, AI, and Data Analytics - Legal Challenges for New Technologies


More and more businesses are gathering intelligence about products and services by sharing data along their value chain with third parties and analyzing the information with artificial intelligence technologies. At the same time, robots – often themselves controlled by artificial intelligence and making decisions that previously had to be made by human beings – are being used in many sectors. What does this mean for liability, and who is accountable? What are the challenges for contractual agreements that allow the parties to add value to the data they share while also protecting trade secrets and confidential information? And finally, how can these new technologies meet data protection requirements? This article takes a closer look at these questions and outlines the legal framework for robotics, AI, and data analytics.

AI algorithms are not only becoming increasingly important for many business processes; they also make, or at least help to make, decisions on fundamental issues, some of which have far-reaching legal implications.

Combined with related concepts such as data analytics and the refining of raw data into Smart Data, these developments raise numerous legal questions touching a number of different legal fields: criminal and civil liability, contract law and contract drafting, as well as privacy and data protection compliance. Let's take a look at the three core legal fields that play a role in the legal evaluation of AI applications.

Liability

Liability, as always, plays a fundamental role when it comes to use cases based on new technologies such as blockchain and AI. Intermediaries are increasingly becoming superfluous as agents for certain services, so that there is no longer an obvious party to hold liable. The use of robots based on artificial intelligence also raises the question of who is liable for damage caused by faulty decision-making processes.

The answer is still relatively simple if a contract has been concluded between a claimant and a defendant. Here, liability issues can either be settled contractually or decided under general contract law. Such cases have already been the subject of legal proceedings for some time. In Germany, for example, a woman suffered serious nerve damage during surgery conducted by a so-called Robodoc back in the late 1990s and sued her surgeon for compensation. Even though the German Federal Court of Justice dismissed the action, it clearly stated that the use of such new technologies triggers transparency and information obligations, and that patients must be clearly informed that the method may involve unknown risks. This is a fundamental principle that must also be taken into account when drafting contracts for new technologies based on AI.

Liability for robots is also conceivable in many jurisdictions under tort law, since a manufacturer may not sell products which endanger customers or third parties beyond what is inherently unavoidable. Throughout Europe, the Product Liability Directive has even introduced strict liability for defective products placed on the market. But what if damage caused by a robot could not have been foreseen, and the fault could not have been prevented even with the best possible monitoring? For such scenarios, the European Parliament has brought into discussion the idea of creating the legal status of an “E-Person” for robots and thus granting them legal personhood. The robot itself, and no longer its manufacturer, would then be liable for all damage. This electronic person would then have to take out liability insurance. It is no surprise that this proposal has been met with strong criticism from legal researchers, as a robot has no incentive to maintain “its” financial status. In addition, experts argue, the idea is based on a misunderstanding: even in the case of AI, it is not impossible to trace where damage originates, and thus an attribution to human misconduct remains possible. In any case, companies in this area should continue to keep track of this discussion.

Smart Data and data sharing agreements

But robotics is only one of the fields in which AI technologies are used. In the area of Industry 4.0, for example, AI is nowadays also used in the manufacturing industry along value chains to evaluate production and refine the resulting data into Smart Data, thus gaining more information and improving production technologies. In many sectors, the participants in a value chain have a mutual interest in the evaluation of the accumulated data and in the shared use of the additional information gained, in order to improve products or services. We have seen many examples of this recently, particularly in the automotive and mobility sector. In some cases, separate Big Data analytics platforms are built by one participant or in cooperation between several companies, or are provided by a third party.

Apart from general contractual challenges, the core negotiations in these scenarios often revolve around the question of how data should actually be legally classified, i.e. to whom it belongs and how intellectual property assets can be protected without jeopardizing the parties' common goal of adding value. It should be noted, however, that although you will often hear discussions about “data ownership”, there is no ownership of data at the European level in the sense of strict legal property. In addition, neither raw data nor processed Smart Data are regulated separately under Europe-wide harmonized copyright law. Only the software itself, as an intellectual and creative result of the use of data, is protected by copyright or intellectual property law. European copyright law also provides a sui generis right for databases. However, this only protects the investment in the construction and operation of a database, not the data itself.

In order to protect data adequately and in accordance with contract law, data will often have to be treated as trade secrets, which are protected by competition laws or other civil and trade laws in many jurisdictions. Criteria associated with the source and generation of data, the data format and data integrity, as well as the designated holder of certain data or of the storage medium, can also help to define an appropriate demarcation between shared data use and each party's own economic interests. Any contract drafting and negotiation of data sharing agreements should take these criteria into account; a simple sketch of how such criteria might be captured in practice follows below.
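To make these demarcation criteria more tangible, here is a minimal Python sketch of how shared records could be tagged with the metadata a data sharing agreement typically references – source, time of generation, format, integrity, and designated holder. All class and field names are illustrative assumptions, not a standard schema or anything prescribed by law:

```python
from dataclasses import dataclass
from enum import Enum
import hashlib


class Sensitivity(Enum):
    """Illustrative classes that a data sharing agreement might distinguish."""
    SHARED = "shared"               # may be pooled on the joint analytics platform
    TRADE_SECRET = "trade_secret"   # must remain with the originating party


@dataclass(frozen=True)
class DataRecord:
    """Hypothetical metadata mirroring the demarcation criteria above."""
    source: str             # which party generated the record
    generated_at: str       # ISO 8601 timestamp of generation
    data_format: str        # e.g. "csv" or "parquet"
    designated_holder: str  # party contractually responsible for the record
    sensitivity: Sensitivity
    payload: bytes

    @property
    def checksum(self) -> str:
        """Integrity fingerprint, so downstream tampering is detectable."""
        return hashlib.sha256(self.payload).hexdigest()


def may_be_shared(record: DataRecord) -> bool:
    """Gate for the shared platform: trade secrets stay with their holder."""
    return record.sensitivity is Sensitivity.SHARED
```

The details of such a classification layer would of course be defined by the parties' agreement; the point is simply that contractual demarcation criteria can be mirrored in the data architecture itself.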

Data protection and privacy

Finally, the application of AI is of course subject to data protection laws whenever personal information is processed. First of all, the general rules and requirements, e.g. of the European GDPR, apply here as well. The relevant controller must process the data lawfully, fairly, and in a transparent manner, and must be able to demonstrate compliance with data protection law in accordance with the so-called accountability principle under the GDPR. With regard to AI, however, special rules and requirements can additionally apply when it comes to automated individual decision-making. Data subjects have the right not to be subject to this kind of decision, with only a few exemptions. Consequently, where an automated decision may have a legal or similarly significant effect, AI may only be used under certain conditions: the automated decision must either be necessary for the conclusion or performance of a contract with the data subject, be authorized under Union or Member State law (provided that adequate safeguards are in place), or be based on the express consent of the data subject. A simplified sketch of this logic follows below.
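As a rough, hypothetical illustration of this decision logic (loosely modeled on the Article 22 GDPR rules summarized above), the following Python sketch routes a decision with legal or similarly significant effect to human review unless one of the three conditions applies. It is a deliberately simplified teaching example, not legal advice or a compliance implementation, and all names are assumptions:

```python
from dataclasses import dataclass


@dataclass
class DecisionContext:
    """Hypothetical inputs to the simplified check sketched below."""
    legal_or_similar_effect: bool  # does the decision significantly affect the person?
    necessary_for_contract: bool   # needed to conclude/perform a contract with the data subject
    authorized_by_law: bool        # permitted by Union or Member State law, with safeguards
    explicit_consent: bool         # the data subject gave express consent


def automated_decision_permitted(ctx: DecisionContext) -> bool:
    """Decisions without legal or similar effect are not restricted by this
    rule; otherwise one of the three exemptions must apply."""
    if not ctx.legal_or_similar_effect:
        return True
    return (ctx.necessary_for_contract
            or ctx.authorized_by_law
            or ctx.explicit_consent)


def route_decision(ctx: DecisionContext) -> str:
    # Even where an automated decision is permitted, the data subject keeps
    # the right to human intervention, to express their point of view, and
    # to contest the decision.
    if automated_decision_permitted(ctx):
        return "proceed_with_automated_decision"
    return "escalate_to_human_review"
```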

The second option has a particularly challenging impact on multinational technology providers: the legal requirements for automated decision-making (and therefore for the application of AI) can differ among the Member States of the European Union. In any case, people have the right to obtain human intervention, to express their own point of view, and to contest the decision. It should also be noted that European regulators see automated decision-making as a risky technology for which a privacy and data protection impact assessment needs to be carried out. Under the risk-based approach of the GDPR, the data protection impact assessment is an instrument that the controller itself must carry out, and the scope and risks of the data processing operations must be comprehensively documented, including the necessity and proportionality of the processing operations in relation to their purposes.

Lastly, let us take a look into the future: the European Commission has set up a High-Level Expert Group on AI, which has published “Ethics guidelines for trustworthy AI”. These might form the basis for further legislation in the future; they set out seven key requirements for AI applications, resting on three pillars (lawfulness, ethics, and robustness). These key elements are driven by a human-centered approach, emphasize transparency and non-discrimination, and aim to be in line with existing privacy regulations. They may also lead to an increase in the legal requirements that must be met in the future by anyone who applies AI for business.

 

Nils Töllner is a German-trained lawyer and an associate in the technology and privacy group of the international law firm Fieldfisher.

Oliver Süme is Chair of the Board of the eco Association, a certified IT lawyer, and a partner at the international law firm Fieldfisher. He is an IT and technology law specialist with more than two decades of experience in the field. Oliver advises national and international clients from various sectors on their path to digitisation. Data protection, IT security, and IT contracts are among his key areas, as well as the legal impact of new technologies such as AI and blockchain.
 


Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.