
Autonomy in the Age of Algorithms – Are We Still in Control?

AI shapes our choices – but are we still in control? Melanie Ludolph from Fieldfisher explores why digital autonomy is the next frontier of human rights in the age of algorithms.


Artificial intelligence and automation are quietly reshaping how decisions are made – from what we read to the opportunities we’re offered. As algorithms increasingly act on our behalf, one question looms large: are we still in control of our own choices? Digital autonomy is emerging as the next frontier of human rights in the age of AI – a principle that may prove as vital as privacy once was. But as Europe leads with far-reaching regulation, we must ask whether our quest for protection is also limiting the very freedom it seeks to preserve.

Convenience comes at a cost

Automation promises efficiency, comfort, and safety. Yet the more deeply artificial intelligence and connected systems penetrate our daily decisions, the more we risk a subtle erosion of self-determination. Recommendation algorithms structure our information environments, smart assistants make background choices, and data-driven platforms steer behavior through subtle incentives – so-called dark patterns. The paradox is clear: the more convenient the technology, the less we control its mechanisms. The question of digital autonomy is therefore not an abstract privacy issue but a core societal challenge: How can human self-determination survive in a world optimized for prediction, personalization, and automation?

When our laws protect data, not decisions

Legally, a broad framework already exists: Fundamental rights such as human dignity, privacy, and the protection of personal data – enshrined in the EU Charter of Fundamental Rights (Articles 1, 7, and 8) – provide a strong foundation. They safeguard individual freedom from both state and corporate interference, set limits on data processing, and prevent discrimination.

However, these safeguards fail where human decision-making is silently undermined by algorithms. A person who formally consents to data use but no longer understands the algorithmic consequences may be legally compliant – yet effectively deprived of autonomy.

Autonomy therefore goes beyond data protection. Digital self-determination also demands safeguards against manipulative platform design, exploitative business models, and algorithmic market control. In the digital realm, autonomy is threatened not only by the state but by private architectures of choice – interfaces and incentives that steer behavior without explicit coercion.

What digital autonomy really means

“Digital autonomy” means the ability to act freely, on an informed basis, and without manipulation in digitally mediated environments. It comprises three dimensions: (i) informational autonomy – access to unbiased information and transparency of algorithmic processes; (ii) decisional autonomy – protection against manipulative or opaque systems; and (iii) practical autonomy – control over digital identity, data, and connected devices. Legally, this can be read as an evolution of informational self-determination – moving from “What may happen with my data?” to “How can I make my own choices in digital contexts?”

AI, IoT, and the erosion of choice

Artificial intelligence and the Internet of Things are more than technological advances – they are systems of pre-emptive decision-making. Algorithms assess risks, preselect options, or act autonomously: from credit scoring to self-driving vehicles and smart cities. Such systems challenge the classical notion of responsibility. When machines learn and act independently, accountability and liability blur. Lawmakers are reacting – with the EU AI Act, liability reforms, and ethical frameworks. But behind these debates lies a deeper question: How can autonomy be maintained when decision-making power is technically externalized?

Do we need a new fundamental right?

A constitutional right to digital autonomy sounds logical – yet risky. Europe’s AI Act, Data Act, and Digital Services Act – not to mention the Digital Markets Act, the Cyber Resilience Act, and others – already position the EU as a global pioneer of rule-based digital governance. Still, critics warn that Brussels may have overshot the mark: too many layers of regulation, overly complex obligations, too little space for experimentation.

A new fundamental right could intensify this tension. It would create additional duties of justification and compliance, potentially slowing innovation and burdening smaller players. In a world built on agile development and rapid learning cycles, over-regulation can easily become a brake on progress.

Perhaps what we need is not more rights, but clearer principles. Digital autonomy should guide policy as a framework for human-centric innovation: law as an enabler, not merely a constraint; trust as a foundation, not a substitute, for innovation.

Taking back control

Autonomy and innovation don’t have to compete – but regulation can easily tip the balance. Europe’s dense web of digital laws has already reached a point where compliance, not creativity, often dictates the pace of innovation. The legal intent is clear: to safeguard trust and accountability. Yet in practice, overregulation risks constraining precisely the human agency it seeks to protect.

At the same time, we must be honest about the limits of human understanding. No one can fully grasp the complexity of modern AI systems, nor trace every decision path within a self-learning model. Absolute transparency is an illusion – and perhaps not even desirable.

Digital autonomy is not about resisting technology; it is about designing systems that respect human decision-making even when full comprehension is impossible. But law alone cannot guarantee that balance. Some parts of autonomy must be protected not through regulation, but through responsible design, ethical standards, and digital literacy – through a culture that values agency as much as innovation. Perhaps that is where digital autonomy begins: in accepting that law can set boundaries, but genuine freedom depends on how we design, deploy, and engage with technology.


📚 Citation:

Ludolph, Melanie. (November 2025). Autonomy in the Age of Algorithms – Are We Still in Control? dotmagazine. https://www.dotmagazine.online/issues/ai-automation/digital-autonomy-


Melanie Ludolph is a Senior Associate at Fieldfisher’s Hamburg office, where she helps clients navigate the evolving world of EU digital regulation. With a strong focus on data protection, online marketing, and international data transfers, she now also advises on the fast-growing area of AI regulation as it moves to the top of the European policy agenda.


FAQ

1. What is digital autonomy in the context of AI?

Digital autonomy means being able to make informed, independent decisions in technology-driven environments. It includes access to unbiased information, protection against manipulation, and control over one’s digital identity and data.

2. Why does legal data protection not fully ensure autonomy?

Even when users consent to data use, they often don’t understand how algorithms shape outcomes. Legal compliance doesn’t always prevent systems from silently undermining free decision-making.

3. How do dark patterns and recommendation systems impact self-determination?

Designs that nudge behavior – like default settings or biased suggestions – can subtly limit choices. Ludolph warns that autonomy can erode even without explicit coercion.

4. Could a new fundamental right to digital autonomy help?

Possibly – but it might also complicate regulation. Ludolph argues that instead of more rights, clearer principles are needed to balance innovation with meaningful human agency.

5. What legal frameworks already support digital autonomy?

The EU Charter of Fundamental Rights protects dignity, privacy, and data. But Ludolph notes that additional safeguards may be needed to address AI’s behavioral influence and design power.

6. What practical steps can preserve autonomy in AI systems?

According to Ludolph, we need:
• Transparent, accountable design
• Ethical standards and usable interfaces
• Digital literacy to empower user choices

7. How can lawyers and technologists work together to support autonomy?

By embedding human-centric principles into system design and law, both communities can ensure autonomy isn’t lost in the pursuit of innovation. This aligns with eco’s focus on responsible digital governance.


Please note: The opinions expressed in articles published by dotmagazine are those of the respective authors and do not necessarily reflect the views of the publisher, eco – Association of the Internet Industry.