Only 13% of organizations believe they are ready to tap into AI's potential, even though the urgency is widely recognized. That readiness spans processing power, bandwidth, privacy, and security, all of which are pivotal to the network. [1]
Many are unclear on how to lay the foundations, including the infrastructure, to get the most out of AI, yet they know they need to move quickly. They recognize they need a strategy to prepare for AI's demands but don't know where to start.
You can’t properly deploy or scale up your AI solutions without remodeling your networking functions. According to Bain, taking an ‘AI everywhere’ approach to re-architecting the tech stack is a vital, foundational step. A surge in AI applications and networking demands is causing growing pains as companies look to balance cost, performance, and network design in an unpredictable and dynamic landscape. It is no surprise that Gartner forecasts that 30% of AI projects will be abandoned after proof of concept (PoC) and before operationalization due to several factors, including escalating costs and unclear business value [2].
AI models, especially deep learning models, demand huge amounts of computational resources. The problem is that network requirements for AI are still fluid, and as models become more complex, the network will become the bottleneck if it can't keep pace with that computational power.
What AI means for your network infrastructure
Clearly, enterprises are still learning and developing their AI strategies. However, according to an Orange Business report by Global Data [3], 43% of enterprises say they are looking to modernize their network infrastructure to accommodate GenAI. At the same time, 37% acknowledge that AI deployments take much longer and require more resources than they initially anticipated.
“When it comes to the network, the picture is still evolving. Why? Because we are dealing with a very dynamic environment. Orange Business has vast networks, and we have been putting probes into them to monitor the impact of AI, focusing mainly on OpenAI for both business and consumer traffic. For example, we are looking at areas such as the evolution of traffic loads, traffic patterns, usage curves, and bandwidth demands. At this early stage, it is still tough to predict AI's impact – but we expect it to be significant.”
Thomas SOURDON, Head of Marketing, Strategy & Innovation
We do, however, know that AI applications, data, and processing are highly distributed. This distribution is also dynamic and volatile, so there is no simple architectural fix. Unlike regular web applications, AI applications demand complex load balancing to optimize GPU efficiency and manage the unpredictability of model response time. In web apps, response times are highly predictable. In AI apps, response time depends on the type of prompt and the volume of data the model must process to produce an answer.
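To make that concrete, here is a minimal sketch of latency-aware request routing, assuming two hypothetical GPU pools and a least-outstanding-requests policy; it illustrates the principle only and does not describe how any particular platform balances AI traffic.

```python
# Minimal sketch: least-outstanding-requests routing for AI inference.
# Backend names and the in-flight counters are assumptions made up for this example.
class InferenceBalancer:
    def __init__(self, backends):
        self.in_flight = {b: 0 for b in backends}  # requests currently being processed

    def pick_backend(self) -> str:
        # Plain round-robin ignores the fact that one prompt can take far longer
        # than another; sending work to the least-loaded GPU pool evens out usage.
        return min(self.in_flight, key=self.in_flight.get)

    def start(self, backend: str):
        self.in_flight[backend] += 1

    def finish(self, backend: str):
        self.in_flight[backend] -= 1


balancer = InferenceBalancer(["gpu-pool-paris", "gpu-pool-frankfurt"])
target = balancer.pick_backend()
balancer.start(target)   # send the prompt to `target`; when the answer returns:
balancer.finish(target)
```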
AI applications also utilize multi-model inference, where multiple AI models process the same data and the system selects the best result, which also helps avoid bias and hallucinations. Sometimes this increases latency, with processing done at distant locations to save energy, but that is a trade-off for optimal resource usage. AI also brings new traffic patterns. Processing the same data in several models means traffic duplication and a higher volume of data. There may be more uploads than downloads, for example, because we send more data and rich content to the model. Caching mechanisms are of little help, because answers are usually unique: a distinct answer to a specific prompt and set of data. Take image or video generation, for example. Each generated image is unique and cannot be cached for future use.
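As a simplified illustration of the fan-out and traffic duplication this implies, the sketch below sends one prompt to several hypothetical models in parallel and keeps the most common answer; the model names and the naive consensus rule are assumptions chosen for the example, not a production technique.

```python
# Illustrative sketch of multi-model inference: one user prompt becomes several
# upstream requests carrying the same data (traffic duplication), and a simple
# consensus step keeps the answer most models agree on to filter outliers.
import asyncio

async def ask(model: str, prompt: str) -> str:
    # Placeholder for a real inference call over the network.
    await asyncio.sleep(0)  # stands in for network + compute latency
    return f"answer from {model} to: {prompt}"

def select_best(answers: list[str]) -> str:
    # Naive consensus: keep the most frequent answer; a lone outlier
    # (a possible hallucination) is discarded.
    return max(set(answers), key=answers.count)

async def multi_model_inference(prompt: str) -> str:
    models = ["model-a", "model-b", "model-c"]  # hypothetical model endpoints
    answers = await asyncio.gather(*(ask(m, prompt) for m in models))
    return select_best(list(answers))

print(asyncio.run(multi_model_inference("Summarize this contract clause")))
```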
Lastly, live voice and video conversations with AI mean more simultaneous communication and force us to revisit Erlang-based capacity rules. In a company of 100 people, there can be at most 50 simultaneous one-to-one conversations, and the network is sized for a percentage of that maximum (allowing for a blocking factor). With AI, all 100 employees can be in live conversations with AI agents at the same time.
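To show how that changes sizing, here is a small sketch using the standard Erlang B formula; the traffic figures (10 vs. 40 erlangs) and the 1% blocking target are assumptions chosen purely for illustration, not measured values.

```python
# Illustrative sketch: how per-employee AI conversations change Erlang-based sizing.
# Traffic figures below are hypothetical assumptions, not Orange Business data.

def erlang_b(channels: int, traffic_erlangs: float) -> float:
    """Blocking probability for a given channel count and offered traffic (Erlang B)."""
    b = 1.0
    for n in range(1, channels + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return b

def channels_needed(traffic_erlangs: float, target_blocking: float = 0.01) -> int:
    """Smallest channel count that keeps blocking below the target."""
    n = 1
    while erlang_b(n, traffic_erlangs) > target_blocking:
        n += 1
    return n

# 100 employees, one-to-one calls: at most 50 concurrent conversations, and only a
# fraction are active at once (assume ~10 erlangs of offered traffic).
print(channels_needed(10))   # 18 channels for ~10 erlangs at 1% blocking

# With AI agents, every employee can be in a live session simultaneously
# (assume ~40 erlangs if 100 staff spend 40% of their time talking to agents).
print(channels_needed(40))   # 53 channels for ~40 erlangs at 1% blocking
```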
These and other scenarios may not yet give us full clarity on the bigger picture for the enterprise, but we do know that yesterday's networks are neither flexible enough for this level of distributed processing nor able to adapt quickly to new traffic patterns and needs.
Assessing compliance, data security, and the risk of shadow AI
AI and GenAI utilize internal company data, which can be sensitive or regulated. This means enterprises must also carefully manage where and how their data is processed. Organizations with strict privacy requirements, such as those in finance or healthcare, may choose to run AI models on-premises or in private clouds rather than in the public cloud. This impacts network infrastructure decisions, as internal AI processing will consume more local computing resources; more local servers and GPUs may be needed to support workloads, for example. If processing is done in the cloud, network bandwidth and performance between sites become critical.
Another challenge is shadow AI, where employees use external AI tools such as ChatGPT and may unintentionally share confidential data, which can then be used to train those external models. Preventing unauthorized use of AI is paramount to maintaining compliance with data security policies.
As AI use grows, enterprises must also establish clear cybersecurity strategies to control AI usage and safeguard sensitive data and the network from emerging threats. AI is triggering new types of attack, such as data poisoning, in which false data is injected into a model's training data to skew its answers.
How are we helping our customers prepare for the unknown?
So, how do you prepare your network for the unknown? We explain to our customers that flexibility and visibility are the way forward on this uncertain AI journey, in both the design and the management of the network. We advise taking a platform-based model in which components can be linked together. This network-as-a-service (NaaS) approach gives enterprises the flexibility to rent infrastructure and services from a provider and to provision compute power and network bandwidth as and when needed to handle the dynamic nature of AI.
This pay-as-you-go model means you only pay for the network functions you use. You can scale bandwidth up and down as required, ensuring a cloud-like experience, avoiding bill shock, and doing away with large upfront costs for hardware and infrastructure.
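As a purely hypothetical sketch of what consumption-driven provisioning can look like in practice, the example below calls an invented NaaS REST endpoint to boost a site's bandwidth for a few hours; the URL, payload, and parameters are assumptions and do not describe the Evolution Platform API.

```python
# Hypothetical illustration only: neither the endpoint nor the payload reflects a
# real Evolution Platform interface; it simply shows the consumption-driven idea.
import requests

NAAS_API = "https://naas.example.com/v1"   # placeholder base URL

def scale_bandwidth(site_id: str, mbps: int, hours: int, token: str) -> dict:
    """Request a temporary bandwidth boost for a site, released after `hours`."""
    resp = requests.post(
        f"{NAAS_API}/sites/{site_id}/bandwidth",
        headers={"Authorization": f"Bearer {token}"},
        json={"committed_mbps": mbps, "duration_hours": hours},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: double a site's bandwidth for an 8-hour AI training data sync, then let
# it fall back automatically so you only pay for what you used.
# scale_bandwidth("paris-hq", mbps=2000, hours=8, token="<api-token>")
```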
Our answer within this spectrum is Evolution Platform, a networking solution that uses virtualization and automation to simplify WAN and SD-WAN services, ZTNA and remote access services, and cloud connectivity and multi-cloud networking. It enables you to easily compose, deploy, and manage connectivity, cybersecurity, and cloud services on demand. Users can choose from a menu of services to build customized solutions that satisfy their specific business needs.
Regarding visibility, observability is crucial for monitoring AI and detecting usage patterns on the network to maintain performance, security, and compliance. What is specific about observability and event correlation here is that they draw on data points from the network as well as from AI applications and users' laptops, enabling a 360° view. Looking only at the network, or only at the application, is not sufficient.
This also helps identify inefficiencies in AI workloads, optimize resource usage, and spot suspicious behavior that may indicate an attack.
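As a simplified sketch of such cross-layer correlation, the example below joins per-user samples from a network probe, an AI gateway, and an endpoint agent and flags suspect time windows; the field names, thresholds, and data sources are invented for illustration.

```python
# Illustrative only: field names, thresholds, and data sources are assumptions,
# not an actual Orange Business observability pipeline.
from dataclasses import dataclass

@dataclass
class Sample:
    user: str
    window: str            # e.g. "2025-01-01T10:00"
    net_mbps_up: float     # uplink usage seen by the network probe
    ai_tokens: int         # tokens reported by the sanctioned AI gateway
    endpoint_app: str      # AI tool observed on the user's laptop

def correlate(samples: list[Sample]) -> list[str]:
    """Flag windows where the three layers disagree or exceed simple thresholds."""
    alerts = []
    for s in samples:
        # Heavy uplink with no sanctioned AI-gateway activity hints at shadow AI.
        if s.net_mbps_up > 50 and s.ai_tokens == 0:
            alerts.append(f"{s.user}@{s.window}: high upload via {s.endpoint_app}, "
                          "no traffic through the approved AI gateway")
        # Sanctioned usage that is unusually token-heavy may need bandwidth or cost review.
        elif s.ai_tokens > 200_000:
            alerts.append(f"{s.user}@{s.window}: unusually heavy AI usage ({s.ai_tokens} tokens)")
    return alerts
```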
NaaS requires a change of mindset
Network-as-a-service (NaaS) solutions like the Evolution Platform require enterprises to significantly change how they think about networking: they must move from a hardware-centric model to a flexible, cloud-based, consumption-driven one. So far, only early adopters are on this journey, but we expect it to ramp up quickly as AI application usage grows and drives higher data loads.
NaaS and our own Evolution Platform are transforming network services and providing enhanced security, enabling enterprises to manage and optimize AI to drive business outcomes. To operationalize AI, enterprises will have to move to NaaS sooner rather than later.

I am a telecommunications professional with 25 years of experience in different positions and company profiles, always in direct contact with enterprise customers. First a network expert and pioneer of SDN, SD-WAN, and SASE, then a strategic consultant for enterprises, I am now helping Orange Business define and execute the best communication services strategy to address customers' business challenges in a fast-evolving, innovative, and ever more complex ecosystem.