Advanced digital technologies and services, including task-specific artificial intelligence (AI), bring with them extraordinary promise. They have already generated very substantial benefits, particularly in the form of enhanced efficiency, accuracy, timeliness, and convenience across a wide range of digital services. Yet the emergence of these technologies has also been accompanied by rising public anxiety concerning their potentially damaging effects: for individuals, for vulnerable groups, and for society more generally.
In principle, AI can be understood as one or more algorithms, combined and fed with data, that learn and may extend themselves or even generate new algorithms. One aspect of AI that worries many people lies in the way this data is collected and used. It is relatively easy to check whether a newly developed algorithm conforms to its specification, but an algorithm trained and fed with wrong data may produce unexpected or unwanted results. Many people also object to their personal data being used for AI algorithms without their consent, or even without their knowledge.
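The point that a correctly built algorithm can still go wrong when fed bad data can be illustrated with a minimal sketch (the data, labels, and threshold here are hypothetical, chosen purely for illustration):

```python
def nearest_label(train, point):
    """1-nearest-neighbour: return the label of the closest training value."""
    return min(train, key=lambda t: abs(t[0] - point))[1]

# Ground truth for this toy example: values below 5 are "low", 5 and above "high".
clean = [(1, "low"), (2, "low"), (8, "high"), (9, "high")]

# The same, correct algorithm - but fed mislabelled ("wrong") training data.
corrupted = [(1, "high"), (2, "high"), (8, "low"), (9, "low")]

print(nearest_label(clean, 3))      # "low"  - matches the ground truth
print(nearest_label(corrupted, 3))  # "high" - code unchanged, data wrong
```

The algorithm itself is identical in both calls and demonstrably correct; only the training data differs, and that alone flips the outcome. This is why checking the "virgin" algorithm is not enough.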
We can distinguish between two major scenarios for advanced digital technologies, including AI, which are important for the following discussion:
- AI usage within an industrial environment, based mainly on data from sensors and devices;
- Algorithms which directly influence individuals (which will not be addressed in this article).
To give some examples: AI can be used to generate fake news on social media or to replace judges, but it can also be used to give early warnings of future illnesses if it has access to enough personal data (e.g. from sensors attached to the body).
The definition of AI proposed in the EU Commission’s “Communication on AI” covers the aspects above in more general terms:
Artificial Intelligence (AI) refers to systems that display intelligent behavior by analyzing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistance, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications). Many AI technologies require data to improve their performance. Once they start performing well, they can help improve and automate decision-making in the same domain.
AI as a possible influencing tool or mechanism in processes that touch on human rights is currently the subject of intense discussion at many conferences and in numerous studies and papers. On the Council of Europe website, you can read that “the impact of ‘algorithms’ used by the public and private sector, in particular by Internet platforms, on the exercise of human rights and the possible regulatory implications has become one of the most hotly debated questions today.”
One such event was the Helsinki Conference “AI – Governing the Game Changer”, which took place in 2019 and was co-organized by the Information Society Department and the Finnish Presidency of the Committee of Ministers. The Council of Europe will follow up on the Helsinki Conclusions through a number of measures, including: continuing research into the impacts of AI development on human rights, democracy, and the rule of law; developing sector-specific recommendations and guidance; examining existing gaps in the current legislative and regulatory framework applicable to AI design, development, and implementation; and considering the feasibility and potential elements of a framework legal instrument in this area.
AI-based systems, software, and devices – sometimes referred to as AI applications – are providing new and valuable solutions to tackle needs and address challenges in a variety of fields, such as smart homes, smart cities, the industrial sector, healthcare, and crime prevention. AI applications may represent a useful tool for decision-making, in particular for supporting evidence-based and inclusive policies. As is the case with other technological innovations, however, these applications may have adverse consequences for individuals and society. To prevent this, there must be assurances that AI development and use respect the rights to privacy and data protection (Article 8 of the European Convention on Human Rights), thereby enhancing human rights and fundamental freedoms.
The Committee of Convention 108 adopted two sets of guidelines of core relevance to algorithmic systems:
- Guidelines on the protection of individuals with regard to the processing of personal data in a world of big data (adopted on 23 January 2017), and
- Guidelines on data protection and artificial intelligence (adopted on 25 January 2019).
The latter guidelines provide a set of baseline measures that governments, AI developers, manufacturers, and service providers should follow to ensure that AI applications do not undermine human dignity, human rights, and the fundamental freedoms of every individual, in particular with regard to the right to data protection.
Nothing in the present guidelines shall be interpreted as precluding or limiting the provisions of the European Convention on Human Rights.
The CoE guidelines on data protection and artificial intelligence comprise three sections. The first section, with six points of general guidance, recommends adopting a wider view of the possible outcomes of data processing, one that considers not only human rights and fundamental freedoms, but also the functioning of democracies and social and ethical values. In particular, when AI applications that may have consequences for individuals and society are used in decision-making processes, it is crucial that human dignity is protected and that human rights and fundamental freedoms are safeguarded.
The second section urges developers, manufacturers, and service providers to adopt a values-oriented approach in the design of their products and services:
“In all phases of the processing, including data collection, AI developers, manufacturers, and service providers should adopt a human rights by-design approach and avoid any potential biases, including unintentional or hidden, and the risk of discrimination or other adverse impacts on the human rights and fundamental freedoms of data subjects.”
The guidelines call on AI developers, manufacturers, and service providers to both set up and consult independent committees of experts and engage with independent academic institutions. These institutions can contribute to designing human rights-based and ethically and socially-oriented AI applications and can help detect potential bias. The guidelines also stress the rights of data subjects:
“All products and services should be designed in a manner that ensures the right of individuals not to be subject to a decision significantly affecting them based solely on automated processing, without having their views taken into consideration.”
The third section addresses legislators and policy-makers and includes the concept of ‘algorithm vigilance’:
“Without prejudice to confidentiality safeguarded by law, public procurement procedures should impose on AI developers, manufacturers, and service providers specific duties of transparency, prior assessment of the impact of data processing on human rights and fundamental freedoms, and vigilance on the potential adverse effects and consequences of AI applications (hereinafter referred to as algorithm vigilance).”
The guidelines recommend that AI developers, manufacturers, and service providers should consult supervisory authorities when AI applications have the potential to significantly impact the human rights and fundamental freedoms of data subjects. They also call on policy-makers to invest resources in digital literacy to increase awareness and understanding of AI applications and their effects, and to encourage professional training for AI developers on the potential effects of AI on individuals and society.
These guidelines are accompanied by a declaration of the Ministers of States participating in the Council of Europe Conference of Ministers responsible for media and information society (Nicosia, Cyprus, 28-29 May 2020) on the challenges and opportunities for media and democracy, including AI.
One of the actions taken to gain a better understanding of AI technologies is a study prepared for the Council of Europe by an expert committee. The study explores the implications of AI decision-making for the concept of responsibility within a human rights framework.
The scope of the study is to examine the implications of ‘new digital technologies and services, including artificial intelligence’ for the concept of responsibility from a human rights perspective, with a focus on the technologies referred to as artificial intelligence (AI). AI is notoriously difficult to define; even technical AI researchers do not appear to have settled on a widely agreed definition.
As the study’s rapporteur, Prof. Karen Yeung, states in her report, the study concludes that, if we are serious in our commitment to protect and promote human rights in a global and connected digital age, then we cannot allow the power of our advanced digital technologies and systems, and of those who develop and implement them, to be accrued and exercised without responsibility. The fundamental principle of reciprocity applies: those who deploy and reap the benefits of these advanced digital technologies (including AI) in the provision of services, and who derive profit from them, must be responsible for their adverse consequences.
It is therefore of vital importance that states committed to the protection of human rights uphold a commitment to ensure that those who wield digital power (including the power derived from accumulating masses of digital data) are held responsible for the consequences of its exercise. It follows from states’ obligation to protect human rights that they have a duty to put in place governance arrangements and enforcement mechanisms within national law which ensure that both prospective and historic responsibility for the adverse risks, harms, and wrongs arising from the operation of advanced digital technologies is duly allocated.
 Convention 108 has become the backbone of personal data protection legislation in Europe and beyond. It was modernized in 2018 and this current version is referred to as Convention 108+.
 The full guidelines can be downloaded here:
 A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework by Karen Yeung. The full text of the study can be found on the CoE website (https://www.coe.int/en/web/freedom-expression/msi-aut)
Michael Rotert is a pioneer and veteran of the Internet industry – and was the first person on German soil to receive an email. Amongst other posts, his stellar career spans Technical Head of the Data Center of the Informatics Department at the University of Karlsruhe, Founder and Managing Director of the ISP Xlink (later KPNQwest), and managing directorships of various service providers. After stepping down from his long-standing service as Chair of eco – Association of the Internet Industry in 2017, he became Honorary President of the eco Association in the same year. Rotert is also a founding member of the Internet Society, DE-NIC, and other Internet bodies, and contributes his industry expertise through membership of numerous committees and advisory councils.