November 2025 - Interconnection | Digital Infrastructure | Artificial Intelligence

Navigating the Pathway to AI-ready Network Infrastructure: Common Pitfalls to Avoid

Ivo Ivanov from DE-CIX exposes the biggest traps on the road to AI readiness – from legacy data centers to public Internet bottlenecks – and shows how smart, low-latency interconnection powers real AI performance.


Image generated using Whisk AI

As enterprises race to harness artificial intelligence, many overlook a critical success factor: the infrastructure that powers it. From legacy data centers to public Internet pitfalls, common missteps can derail even the most ambitious AI strategies. Ivo Ivanov, CEO of DE-CIX, explains why low latency, direct interconnection, and hybrid approaches are essential to unlocking AI’s full potential – and how to future-proof your network for the era of distributed intelligence.

The rapid evolution of artificial intelligence has made it a defining force in modern business. With Gartner forecasting that spending on AI-optimized servers will surpass investment in traditional server hardware by 2028, and McKinsey reporting that 88% of companies already use AI in some capacity, its momentum is undeniable. In the U.S., AI is projected to boost GDP by 21% by 2030, while in Europe it is expected to add roughly 575 billion dollars to the economy over the next five years.

While the path to AI adoption has been mapped, many enterprises face significant hurdles along the way. Modern AI workloads require low-latency, high-performance connectivity, along with transparency and control over data flows to meet evolving regulatory demands. Without network infrastructure aligned to these requirements, businesses risk inefficiencies, missed opportunities, and wasted investments in AI implementation.

Here are some of the most common challenges to watch out for:

Overreliance on on-premise data centers

In the past, traditional on-premise data centers held one main advantage: they gave enterprises full control over their IT infrastructure, security, and data sovereignty. When it comes to supporting new AI workloads, however, they fall short. Training AI models relies on immense computational power and specialized GPU processors whose power density pushes legacy on-premise data centers well beyond their limits. Even AI inference, though computationally less demanding, is a challenge for on-prem infrastructure: it requires geographically distributed, low-latency environments that isolated setups can’t deliver.

While concerns over data sovereignty and security make some organizations hesitant to move fully away from on-premise systems, maintaining AI-ready infrastructure in-house is often too costly and inefficient. This dilemma can be overcome by choosing a hybrid approach – combining on-premise resources for sensitive tasks with cloud or colocation facilities for scalable AI processing. Interconnection solutions that link these environments can provide the performance and flexibility needed. A practical example would be GPU as a Service communities: by interconnecting GPUs across multiple colocation facilities through an AI Exchange (AI-IX), enterprises can build private AI environments that achieve the required computational power while remaining scalable – without sacrificing control over their digital infrastructure. This approach combines ultra-low latency and optimized routing to enable real-time inference, eliminating the constraint of having to house all resources in a single location. In the future, with the advent of updated versions of the new Ultra Ethernet protocol, GPU as a Service communities will also enable distributed and private AI training in a provider-neutral environment.

Accessing clouds via the public Internet

When connecting to cloud resources – where the vast majority of AI workloads are currently hosted – many enterprises still rely on the public Internet. This, however, can cause serious difficulties: latency issues, security risks, and compliance challenges. The public Internet is, after all, an infrastructure built on the principle of “best effort”, and its inherent lack of SLAs and performance guarantees doesn’t just affect AI workloads; it can also degrade SaaS applications and overall system performance. Increasing bandwidth won’t solve this. In the so-called “Application Performance Trap”, 82% of IT professionals pursue more bandwidth while overlooking the effects of packet loss and their overall connectivity performance, to the detriment of their quality of service. It is a trap because unresolved connectivity issues hit the bottom line: in Europe, 28% of enterprises report that poor connectivity results in revenue losses, and 46% cite additional operational costs.
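Why more bandwidth alone can’t fix this can be illustrated with the classic Mathis rule of thumb for sustained TCP throughput, which is capped by segment size, round-trip time, and packet loss rate – not by link capacity. The following sketch (the loss and RTT figures are illustrative assumptions, not measurements from the article) shows how even modest packet loss imposes a throughput ceiling that a bigger pipe cannot lift:

```python
import math

def mathis_ceiling_mbps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Approximate upper bound on sustained TCP throughput (Mathis model):
    throughput <= MSS / (RTT * sqrt(p)) -- independent of link bandwidth."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate)) / 1e6

# Same 0.1% packet loss on a 1 Gbps or a 10 Gbps link: the ceiling is identical,
# and it drops sharply as round-trip time grows (e.g. detours over the public Internet).
for rtt_ms in (5, 25, 80):
    cap = mathis_ceiling_mbps(mss_bytes=1460, rtt_s=rtt_ms / 1000, loss_rate=0.001)
    print(f"RTT {rtt_ms:3d} ms, 0.1% loss -> ceiling ~{cap:.1f} Mbit/s")
```

At 80 ms RTT and 0.1% loss, a single TCP flow tops out at only a few Mbit/s regardless of provisioned bandwidth – which is why reducing latency and loss, rather than buying capacity, is the effective lever.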

One solution to improve connectivity performance is direct interconnection of networks through an Internet Exchange (IX) or Cloud Exchange with AI-IX functionality. By connecting directly to the cloud provider’s network, companies ensure that their AI-relevant data bypasses the public Internet. The number of network “hops” required to transfer data from A to B is also minimized. This reduces latency and optimizes data flows by ensuring that there are no unnecessary detours via distant data centers and third-party networks. Direct interconnection can thus deliver the network performance, security, and reliability necessary to support AI-driven workloads and cloud-based applications.
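The latency cost of such detours can be estimated from propagation delay alone: light travels through optical fiber at roughly 200,000 km/s, i.e. about 1 ms of round-trip time per 100 km of path. A minimal sketch – the path lengths below are illustrative assumptions, not measured routes – compares a direct metro-area interconnect with a route that hairpins via a distant data center:

```python
# Propagation speed of light in optical fiber: roughly 200,000 km/s.
FIBER_KM_PER_S = 200_000.0

def propagation_rtt_ms(path_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way fiber path
    of the given length (ignores queuing and processing delay at each hop)."""
    return 2 * path_km / FIBER_KM_PER_S * 1000

# Illustrative one-way path lengths (assumptions): direct peering within a
# metro region vs. a detour through transit networks and a distant data center.
direct_km, detour_km = 80, 3_100
print(f"direct interconnect: ~{propagation_rtt_ms(direct_km):.2f} ms RTT")
print(f"detoured transit:    ~{propagation_rtt_ms(detour_km):.2f} ms RTT")
```

Under these assumptions the detour alone adds some 30 ms of round-trip time before any congestion or queuing is counted – a gap that matters for real-time AI inference and that no amount of bandwidth can close.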

Neglecting cloud providers’ SLAs and connectivity options

What’s more, cloud providers vary in network performance, uptime guarantees, and interconnection flexibility. Hence, inadequate SLA assessment can leave companies vulnerable to increased downtime, unexpectedly high egress fees, and poor cross-cloud performance. Another significant risk is vendor lock-in, as heavy reliance on one provider limits pricing flexibility, data mobility, and access to newer AI-enabled services. The reason is that when businesses build their AI workloads within a single ecosystem, migrating to a different provider can be extremely costly and complex – even when evolving business or regulatory requirements make a change necessary. This can also create compliance issues if the provider’s infrastructure fails to meet changing security or data sovereignty requirements.

Limiting potential through single-cloud policies

Managing multiple cloud environments can be challenging, but sticking to one provider can hinder AI agility and scalability. Since cloud vendors offer distinct AI capabilities – from model training to inference optimization – multi-cloud strategies have become essential. Enterprises hesitant to embrace multi-cloud due to integration challenges should leverage flexible interconnection platforms like Cloud Exchanges. With cloud and AI routing technology, they facilitate seamless data transfer between cloud providers and help mitigate vendor lock-in and performance bottlenecks. DE-CIX’s AI Internet Exchange (AI-IX), for example, provides scalable and secure, low-latency connectivity for AI workloads across multiple clouds, as well as enabling hybrid strategies.

The need to plan for future scalability of data center infrastructure and colocation

AI infrastructure demand is growing fast, and data center space is becoming scarce. Organizations that don’t plan ahead to secure colocation capacity may struggle to scale AI deployments and subsequently face delays, higher costs, and lost opportunities. To avoid these issues, enterprises should assess long-term needs and partner not only with cloud providers but also with colocation providers that offer access to high-performance interconnection capabilities and flexible expansion options. Strategies that take a multi-provider approach or work toward distributed setups like GPU-as-a-Service communities will reduce risk and improve agility.

While AI adoption is accelerating, businesses risk hitting major roadblocks if they don’t have the right network infrastructure in place. Avoiding strategic mistakes – from clinging to legacy data centers to underestimating the importance of direct interconnection – will be key to unlocking AI’s full potential. In a field where low latency and high-performance connectivity are critical, aligning infrastructure strategies with the demands of AI-driven innovation is essential.

 

📚 Citation:

Ivanov, I. (2025, December). Navigating the Pathway to AI-Ready Network Infrastructure: Common Pitfalls to Avoid. dotmagazine. https://www.dotmagazine.online/issues/ai-automation/ai-ready-network-infrastructure-common-pitfalls

 

Ivo Ivanov has been Chief Executive Officer at DE-CIX and Chair of the Board of the DE-CIX Group AG since 2022. Prior to this, Ivanov was Chief Operating Officer of DE-CIX and Chief Executive Officer of DE-CIX International, responsible for the global business activities of the world’s leading Internet Exchange operator. He has more than 20 years of experience in the regulatory, legal, and commercial Internet environment. Ranked as one of the top 100 most influential professionals in the telecom industry (Capacity Magazine’s Power 100 listing, 2021/2022/2023/2024/2025), Ivo is regularly invited to share his vision and thought leadership at industry-leading conferences around the globe.
