November 2025 - Artificial Intelligence

AI Between Autonomy and Oversight: Why Humans Must Remain in Control

Can AI be both powerful and safe? Olaf Pursche, Leader of the Security Competence Group at the eco Association, argues for human-in-the-loop and ethical responsibility in every system.


© Panya7 | istockphoto.com

Artificial intelligence is no longer an experiment, but an integral part of economic processes. It decides on loan approvals, sorts job applications, assesses security risks, and writes code. The crucial question is therefore no longer whether, but how AI can be integrated safely and responsibly.

However, with increasing performance come new vulnerabilities. Generative models deliver not only impressive text and images, but also tools for fraud, manipulation, and disinformation. The more autonomously systems operate, the more important human control becomes.

When reality becomes negotiable

With generative AI, the lines between perception and manipulation begin to blur. Deepfake videos, synthetic voices, and deceptively real chat dialogues make a person’s identity in the digital space increasingly uncertain.

Attackers exploit this development in targeted ways: they stage fake video conferences in which supposed executives grant approvals, or run AI-supported phishing campaigns that imitate the language and style of real colleagues.

In one documented case, such a deepfake call led to a transfer of USD 10 million – triggered by a virtual but convincingly animated copy of a CFO. Such incidents mark a turning point: authenticity is no longer a technical property, but a question of governance.

The blind spot of automation

AI has long been considered a key factor in the security industry.

Systems detect anomalies, assess risks, and respond within seconds. But machine precision is no substitute for judgment. Models can learn spurious correlations, respond to manipulated inputs, or simply misinterpret what they see. A purely automated defense therefore carries a structural risk: it responds to patterns, not motives. When context is missing, efficiency becomes a danger.

Human in the Loop – control as a system principle

The solution lies not in more AI, but in smarter integration. The concept of Human in the Loop describes an architecture in which humans remain a conscious part of the decision-making and control loop. In practice, this means that analysts review the results of models before automated measures are triggered, and feed their findings back into the systems to correct misclassifications and adaptively improve the models. Security-critical decisions – such as incident response or financial approvals – remain the responsibility of humans. This balances the speed of machine processes with the considered judgment of people.
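To make the loop concrete, the following Python sketch shows what such a review gate could look like. It is a minimal illustration under assumptions of our own: the Alert structure, the two thresholds, and the queue are hypothetical and not taken from the eco white paper. The point is simply that high-impact actions pass through an analyst, and analyst verdicts flow back as labeled feedback.

```python
# Minimal human-in-the-loop triage sketch. All names (Alert, triage_alert,
# ANALYST_QUEUE) and thresholds are hypothetical, for illustration only.

from dataclasses import dataclass
from typing import Callable

AUTO_BLOCK_THRESHOLD = 0.98   # only near-certain detections act automatically
REVIEW_THRESHOLD = 0.60       # anything above this is worth an analyst's time

@dataclass
class Alert:
    source: str
    description: str
    model_score: float                  # model confidence that this is malicious
    analyst_verdict: bool | None = None

ANALYST_QUEUE: list[Alert] = []         # pending human decisions
FEEDBACK_LOG: list[Alert] = []          # reviewed alerts, reused for retraining

def triage_alert(alert: Alert, block_action: Callable[[Alert], None]) -> str:
    """Route an alert: auto-contain, queue for human review, or dismiss."""
    if alert.model_score >= AUTO_BLOCK_THRESHOLD:
        block_action(alert)             # containment stays reversible and logged
        ANALYST_QUEUE.append(alert)     # post-hoc review, not fire-and-forget
        return "auto-contained, queued for review"
    if alert.model_score >= REVIEW_THRESHOLD:
        ANALYST_QUEUE.append(alert)     # a human decides before anything happens
        return "queued for analyst decision"
    return "dismissed"

def record_verdict(alert: Alert, is_malicious: bool) -> None:
    """Analyst feedback becomes labeled data that corrects misclassifications."""
    alert.analyst_verdict = is_malicious
    FEEDBACK_LOG.append(alert)

# Example: a mid-confidence phishing alert waits for a human; it does not act.
alert = Alert("mail-gateway", "suspected credential phishing", 0.72)
print(triage_alert(alert, block_action=lambda a: None))
record_verdict(alert, is_malicious=True)
```

Note the asymmetry in the thresholds: automation is reserved for near-certainty, while everything ambiguous defaults to a human decision.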

AI as a defender

The duality of AI is obvious: the same technologies that create deepfakes can also expose them. Machine learning models help to detect synthetic content, uncover unusual communication patterns, and stop attack chains early on. In modern security operations centers, AI-supported systems filter billions of log entries, prioritize threats, and automate responses. But the decisive strength emerges only when the two are combined: AI detects – humans evaluate.
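As an illustration of that division of labor, the sketch below uses scikit-learn's IsolationForest to score synthetic log features and surface only the most anomalous entries for an analyst. The feature set, data, and thresholds are invented for this example; no specific SOC product or the white paper's tooling is implied.

```python
# "AI detects – humans evaluate": an anomaly detector proposes, people decide.
# Features and data are synthetic; this is an illustration, not a SOC pipeline.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy features per log entry: [requests_per_min, bytes_out, failed_logins]
normal = rng.normal(loc=[50, 2_000, 0.2], scale=[10, 500, 0.5], size=(5_000, 3))
suspicious = rng.normal(loc=[300, 50_000, 8], scale=[50, 5_000, 2], size=(5, 3))
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.005, random_state=0).fit(X)
scores = model.score_samples(X)        # lower score = more anomalous

# AI detects: surface only the most anomalous entries ...
top_k = np.argsort(scores)[:10]

# ... humans evaluate: these rows go to an analyst queue, not to an
# automated response, because the model knows patterns, not motives.
for idx in top_k:
    print(f"entry {idx}: score={scores[idx]:.3f}, features={X[idx].round(1)}")
```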

Governance is a prerequisite

The EU AI Act, whose obligations are being phased in, shifts responsibility to companies.

It demands transparency, traceability, and human oversight – precisely the principles that are necessary in the security context anyway.

Governance is therefore not a regulatory addition, but an operational necessity. Clear responsibilities, documented decision-making processes, and defined control points prevent automation from becoming a black box.

AI systems must remain verifiable, correctable, and explainable – not only technically, but also organizationally.

Ethical responsibility for data and decisions

Artificial intelligence is changing company security architectures – and with them, the understanding of responsibility. Automation creates efficiency, but it must never become an end in itself. Trust arises where machines calculate and humans decide. The human-in-the-loop approach is not a nostalgic brake, but the necessary counterpoint to a technology that deceives ever more convincingly.

In addition to technical security and governance, the ethical dimension of AI is also central. The quality and integrity of the data used to train systems determine how fair and trustworthy their decisions are. Principles such as non-harm, fairness, data protection, and accountability must therefore be consistently applied.

The German UNESCO Commission warns that AI can reinforce existing stereotypes – for example, regarding gender, origin, or age – if training data is unbalanced. This is why diversity-conscious development teams, transparent data processes, and independent controls are needed. Only in this way can it remain clear how a system arrives at its results. Human autonomy, privacy, and a focus on societal welfare must remain the guiding principles of all AI development – so that progress does not come at the expense of dignity.

This article is based on the white paper “Artificial Intelligence in IT Security – Opportunities, Risks, and Protective Measures” by the Security Competence Group at eco – Association of the Internet Industry, which will be published at the end of 2025.


📚 Citation:

Pursche, O. (December 2025). AI Between Autonomy and Oversight: Why Humans Must Remain in Control. dotmagazine. https://www.dotmagazine.online/issues/ai-automation/ai-autonomy-human-control


Olaf Pursche is an independent IT security consultant, trusted advisor, journalist, and keynote speaker. Since 2025, he has headed the Security Competence Group of the eco Association, prior to which he advised eco on IoT security. For 15 years, he was responsible for IT security across all European editions of COMPUTER BILD and wrote for BILD, Welt, iX, kes, IT-Sicherheit, PC Professionell, and other magazines and newspapers. He was a member of advisory boards at the BSI and the GDV, among other expert committees. For 10 years, he was also responsible for communications, press relations, and marketing, as CCO of the AV-TEST Institute and as Head of Marketing & Communications at the SITS Group.


FAQ

Why must humans remain in control of AI systems?

AI is fast and scalable, but it can misinterpret context and be manipulated. Human oversight adds judgment, accountability, and governance that automation alone cannot provide. Olaf Pursche of eco – Association of the Internet Industry emphasizes that trust arises where machines calculate and humans decide, as discussed in dotmagazine.

What does “human in the loop” mean in practice for security operations?

It’s an architecture where analysts review model outputs before actions are triggered, and their feedback continually improves models.
• Analyst review before automated measures
• Human responsibility for incident response and approvals
• Feedback loops to correct misclassifications

Olaf Pursche (eco Association) details this approach in dotmagazine.

How can AI help defend against deepfakes and disinformation?

The same techniques that create synthetic content can detect it by spotting anomalies in media and communication patterns.
• Deepfake detection
• Threat prioritization
• Early interruption of attack chains

This dual-use perspective is outlined by Olaf Pursche in dotmagazine, published by eco – Association of the Internet Industry.

What governance measures are required by the EU AI Act?

Companies must ensure transparency, traceability, and human oversight, aligning with security best practices.
• Clear roles and decision logs
• Defined control points
• Verifiable, correctable, and explainable systems

The article by Olaf Pursche (eco Association) underscores governance as an operational necessity in dotmagazine.

How should organizations balance automation and human judgment?

Let AI handle scale – filtering, prioritization, and detection – while humans make context-rich decisions.
• AI detects
• Humans evaluate
• Shared workflows prevent “black box” outcomes

This balance is a core message in Olaf Pursche’s dotmagazine article for eco – Association of the Internet Industry.

What data ethics practices reduce bias in AI for security decisions?

Use diverse, well-documented datasets and independent reviews to minimize stereotype reinforcement.
• Diversity-conscious teams
• Transparent data processes
• External audits and controls

Olaf Pursche cites the need for fairness, non-harm, and accountability in dotmagazine (eco Association).

How can SMEs implement human oversight without slowing down workflows?

Start with lightweight checkpoints at high-impact decisions and integrate analyst reviews into existing tools.
• Risk-based human approvals
• Playbooks with clear escalation paths
• Continuous feedback to models

The dotmagazine article by Olaf Pursche (eco – Association of the Internet Industry) offers a pragmatic blueprint.