Cisco announced an AI-native wireless stack for 6G-class applications alongside a new N9100 data center switch, aligning with NVIDIA's AI ecosystem at the GTC conference in Washington, D.C.
The reveal positioned Cisco for the next wave of intelligent connectivity, where networks sense, infer, and allocate resources with minimal human intervention. The strategy targets telecom and enterprise operators preparing for AI-rich services.
As traffic shifts to edge inference and time-sensitive workloads, Cisco emphasized architectures that pair radio intelligence with cloud-scale Ethernet fabrics, unifying radio, transport, and compute domains to deliver predictable latency and programmability.
How does an AI-native 6G stack change the network?
An AI-native stack embeds learning and inference into each layer of the mobile system, including radios, RAN control, and the core, enabling the network to classify traffic, predict congestion, and steer flows in real time.
The approach aims to automate optimization, align spectrum and compute resources, and expose APIs for developers who need consistent latency budgets.
Cisco aligned this model with AI-RAN initiatives that use acceleration libraries and model pipelines to inform scheduling and beamforming.
By bringing model execution closer to radios and the user plane, operators can combine connectivity and AI processing into a single service, reducing backhaul pressure and improving determinism for industrial control and robotics.
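The classify-predict-steer loop described above can be sketched in outline. This is a minimal illustration, not a Cisco API: the traffic classes, the 80% congestion threshold, and the path names are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    flow_id: str
    pkts_per_sec: float
    deadline_ms: float

def classify(flow: Flow) -> str:
    # Time-sensitive flows (e.g. robotics control) get their own class;
    # the 10 ms boundary is an illustrative assumption.
    return "time-critical" if flow.deadline_ms <= 10 else "best-effort"

def predict_congestion(link_util: float, arrival_rate: float, capacity: float) -> bool:
    # Naive predictor: flag congestion when projected utilization exceeds 80%.
    # A real AI-native stack would use a learned model here.
    return link_util + arrival_rate / capacity > 0.8

def steer(flow: Flow, congested: bool) -> str:
    # Move time-critical flows onto a shorter edge path when the core is congested.
    if classify(flow) == "time-critical" and congested:
        return "edge-path"
    return "core-path"

flow = Flow("agv-17", pkts_per_sec=2000, deadline_ms=5)
congested = predict_congestion(link_util=0.7, arrival_rate=2000, capacity=10000)
print(steer(flow, congested))  # → edge-path
```

The point of the sketch is the division of labor: classification and prediction are where learned models plug in, while steering remains a deterministic policy the operator can audit.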
Did you know?
Integrated sensing and communications, ISAC, allows 6G radios to both connect devices and sense environments, enabling services like traffic monitoring and public safety analytics without dedicated sensors.
What are AI for Wireless and Wireless for AI?
AI for Wireless refers to applying machine learning to plan, sense, and optimize wireless networks. It includes channel prediction, anomaly detection, and policy automation, which improve reliability and utilization.
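Anomaly detection is the most concrete of these tasks. A minimal version, assuming a simple z-score test on a radio KPI such as SINR (the threshold and sample values below are illustrative, not operator-recommended figures):

```python
import statistics

def is_anomalous(history: list[float], sample: float, z_threshold: float = 3.0) -> bool:
    # Flag a KPI sample that deviates sharply from recent history.
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return sample != mean
    return abs(sample - mean) / stdev > z_threshold

# SINR (dB) readings from a cell over recent intervals, then a sudden drop.
sinr_history = [21.0, 20.5, 21.3, 20.8, 21.1, 20.9]
print(is_anomalous(sinr_history, 9.0))   # True: sharp drop flagged
print(is_anomalous(sinr_history, 21.0))  # False: within normal range
```

Production systems replace the z-score with learned models, but the shape is the same: a stream of per-cell KPIs in, a flag for policy automation out.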
Wireless for AI treats the network as a distributed AI computer that assigns compute, places models, and manages data gravity near users.
Together, these pillars support what Cisco calls physical AI, where intelligent machines and vehicles interact with complex environments.
The architecture relies on edge inference, low-jitter transport, and fine-grained resource control, enabling applications to meet tight deadlines without overprovisioning cloud resources, thereby improving efficiency.
How does the N9100 switch reshape AI data centers?
The N9100 switch integrates NVIDIA Spectrum-X Ethernet silicon to deliver high bandwidth and low tail latency for AI workloads. It supports 51.2 terabits per second of throughput and runs Cisco NX-OS or SONiC, giving operators flexibility in deployment models and automation stacks for converged AI and cloud fabrics.
Cisco framed the platform as a reference architecture for NVIDIA-aligned AI clusters, enabling consistent telemetry, congestion control, and workload isolation.
This helps training and inference jobs coexist with storage and web services while preserving performance, reducing stranded capacity, and shortening time-to-value in mixed environments.
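The 51.2 Tbps figure maps onto familiar port arithmetic for this class of silicon. The breakdowns below show only the math; actual N9100 SKU port configurations are not specified here.

```python
# Aggregate switching capacity, in terabits per second.
TOTAL_TBPS = 51.2

def ports_at(speed_gbps: float) -> float:
    # Number of ports of a given speed that fully consume the capacity.
    return TOTAL_TBPS * 1000 / speed_gbps

print(ports_at(800))  # 64.0  -> e.g. 64 x 800GbE
print(ports_at(400))  # 128.0 -> e.g. 128 x 400GbE
```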
What did AI-WIN prove in six months?
The AI-WIN program, whose members include Cisco, NVIDIA, Booz Allen, MITRE, ODC, and T-Mobile, built a fully AI-native mobile stack in six months. The team deployed AI functions into the RAN and core, then conducted a user-to-user phone call on the system, demonstrating orchestration and performance across the pipeline.
The project also demonstrated integrated sensing and communications (ISAC) for public safety applications.
By combining inference with radio signals, the network identified events while maintaining service, which points to new use cases such as incident response, traffic analytics, and facility monitoring that were previously handled by separate sensor networks.
Where are the early use cases and ROI?
Initial targets include autonomous guided vehicles, warehouse robots, and teleoperation, where sub-ten-millisecond end-to-end latency and high reliability are critical.
AI-native control allows faster handovers, smarter scheduling, and edge execution of perception tasks, which reduces error rates and improves throughput at mission-critical sites.
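A sub-ten-millisecond target only holds if every hop fits inside the budget. The component values below are illustrative assumptions for a teleoperation flow, not measured figures from any deployment.

```python
# Hypothetical end-to-end latency budget for a sub-10 ms teleoperation flow.
budget_ms = 10.0
components_ms = {
    "radio_air_interface": 2.0,  # uplink + downlink over the air
    "ran_processing": 1.5,       # scheduling and baseband
    "transport": 1.0,            # fronthaul/backhaul to the edge site
    "edge_inference": 4.0,       # perception model execution
}

total = sum(components_ms.values())
headroom = budget_ms - total
print(f"total={total} ms, headroom={headroom} ms, within_budget={total <= budget_ms}")
# total=8.5 ms, headroom=1.5 ms, within_budget=True
```

The exercise shows why edge inference matters: moving the model to a distant cloud would add transport delay that consumes the remaining headroom.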
Service providers see potential in automated assurance, energy savings, and dynamic spectrum sharing. Enterprises may gain from private 5G and Wi-Fi convergence, enabled by AI-enhanced planning and policy.
As budgets for AI networking rise, platforms that unify radio intelligence and data center fabrics could capture a growing share of that spending and deliver measurable savings by 2030.
The shift to AI-native networks signals a new design era where connectivity and computation are jointly optimized.
If operators standardize on programmable RAN, edge inference, and AI-tuned Ethernet fabrics, early deployments will shape best practices and accelerate the path toward 6G readiness.