Issue #1

WHY NETWORK ENGINEERS DON'T TRUST AI (AND WHAT TO DO ABOUT IT)

We asked 700 network engineers and DevOps folks a simple question: what's your biggest challenge with AI?

64% said the same thing. They don't trust AI output for production networks.

Not "I don't know how to use it." Not "the tools are confusing." Just: I don't trust it.

That number didn't surprise us. But it did confirm something we'd been hearing from practitioners for a while. The problem with AI in networking isn't technical. It's cultural.

John Capobianco is the Head of AI and Developer Relations at Itential. He's been pushing AI and automation in networking longer than most. And he told us something that puts the whole trust debate in perspective.

According to the Network Automation Forum's survey last year, only 30% of enterprises have adopted automation. That means 70% are still doing things by hand, ten years after automation became a real option.

"What has slowed that adoption is not the technology," John told us. "It's the people and it's the culture."

AI is hitting the same wall, except now there's an extra layer of fear on top. The AI might hallucinate. It might push the wrong config. It might take down a link.

These are real concerns. Nobody's dismissing them. But John makes a point that's hard to argue with: "How many networks have gone down because the engineer was on the wrong device? The engineer issued the wrong command? The engineer was on the wrong interface? The engineer made a typo?"

We've all seen it happen. A fat-finger at 2am on a live router. A copy-paste into the wrong terminal window. A change that looked right but wasn't tested first.

The difference? When AI makes a mistake, it's visible. It's in code. You can catch it in review before it touches anything. When a human makes a mistake at 2am under pressure, there's no review step.
There's no undo button.

THE SKEPTICS ARE LOUDER THAN THE BUILDERS

John said something in our conversation that stuck with us. He was talking about the engineers who try AI for 15 minutes, call it "slop," and move on.

"Those people are three years behind," he said. "But they're very vocal and they say it with authority. And they might be a CCIE. So a lot of people tend to take that as their influence."

Here's what changed in those three years. RAG (retrieval-augmented generation) lets you ground AI responses in your own documentation, your own runbooks, your own configs. MCP (the Model Context Protocol) gives AI agents a structured, governed way to interact with your network tools. These aren't lab experiments. Lumen, one of the largest network operators in the world, is running closed-loop AI remediation in production right now. Problems that used to take hours are getting resolved in seconds.

But if your only experience with AI was asking ChatGPT to explain OSPF two years ago, you wouldn't know any of this. And you'd be right to be skeptical. You'd just be skeptical about something that no longer exists.

OK, SO HOW DO YOU ACTUALLY START TRUSTING IT?

We've been talking to people who are actually deploying AI on production networks. Not in theory. In practice. And a pattern keeps showing up. It's not dramatic. It's not a big leap. It's a ramp.

Here's how it works.

Start with read-only tasks. Don't push config. Don't change anything. Just have AI pull data you already look at (interface health, routing tables, log summaries) and present it to you in plain language. John's example: have an agent run "show ip interface brief" and send you a Slack message with the results. You're not automating anything. You're just getting a faster view of your own network. "That's completely 100% safe," he said. "And it saves us hours of human copy-paste."

Keep a human in the loop. Before any action hits the network, build in an approval step. A ServiceNow ticket. A Slack message that needs a thumbs-up. The AI proposes. You decide.
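The read-only step can be sketched in a few lines. Everything here is illustrative, not a reference implementation: the device details, credentials, and Slack webhook URL are placeholders, and netmiko and requests are assumed third-party libraries.

```python
# A minimal sketch of the read-only pattern: collect "show ip interface
# brief" from a device, flag anything that is down, and post the summary
# to Slack. Only "show" commands are sent; nothing touches the config.

def down_interfaces(show_output: str) -> list[str]:
    """Return interface names whose protocol column reads 'down'.

    Parses the plain-text output of 'show ip interface brief':
    Interface  IP-Address  OK?  Method  Status  Protocol
    """
    flagged = []
    for line in show_output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if fields and fields[-1].lower() == "down":
            flagged.append(fields[0])
    return flagged

def post_interface_summary():
    """Pull the table from one device and send it to a Slack channel."""
    from netmiko import ConnectHandler          # pip install netmiko
    import requests                             # pip install requests

    device = {
        "device_type": "cisco_ios",
        "host": "192.0.2.10",                   # placeholder (RFC 5737 range)
        "username": "readonly",
        "password": "example",                  # use a vault in real life
    }
    with ConnectHandler(**device) as conn:
        output = conn.send_command("show ip interface brief")

    down = down_interfaces(output)
    text = f"Down interfaces: {', '.join(down) or 'none'}\n```{output}```"
    requests.post("https://hooks.slack.com/services/EXAMPLE",  # placeholder
                  json={"text": text})
```

Run `post_interface_summary()` from a scheduler or chatbot command and you have the "faster view of your own network" without granting the agent any write access at all.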
John calls that human-in-the-loop setup "the best of both worlds." The agent does the work, but a human expert reviews the output before anything gets executed.

Test in a digital twin. Tools like ContainerLab and Cisco CML let you build a virtual copy of your network. Let AI agents run wild in there. Let them break things. That's the whole point. You want to see where they fail before you let them near production. This is exactly what practitioners are teaching in workshops right now: spin up a lab, give the AI room to operate, and watch what happens.

Find low-risk, high-value cases. The Lumen story is useful here. They didn't start by letting AI redesign their routing architecture. They started with a simple, high-frequency problem: a sub-interface gets accidentally shut down, service drops, and someone gets paged. Their AI system detects the root cause, runs a "no shut" to restore service, confirms it worked, and logs everything. Low risk. Happens all the time. Saves hours at scale.

Graduate slowly. Once read-only operations work and simple fixes are validated in a lab, you expand. But with guardrails: Markdown files that define what the agent can and can't do, approval gates for anything that writes config, full audit logs. Then you widen the scope, one use case at a time.

This isn't "move fast and break things." That doesn't work for networks. This is: move carefully, prove it works, expand.

THE COST OF DOING NOTHING

The trust debate tends to focus on the risks of using AI. Fair enough. But there's a cost on the other side too, and it doesn't show up in dashboards.

John put it in personal terms: "As someone who used to be a senior network architect and was on call, getting a call at 3:00 in the morning and spending two hours troubleshooting, and it turns out to be just a sub-interface that got shut down, that's not a good use of my time.
And the morale and the impact on my personal life and my family life."

Every manual syslog review is time a senior engineer isn't spending on design and strategy. Every 3am page for a routine issue is a hit to retention. Every change window that takes ten people three days is budget that could go somewhere else.

The question isn't whether AI is perfect. It's whether AI with human oversight is better than humans alone, under pressure, in the middle of the night, on the wrong terminal.

We think the evidence says yes. And we'll keep bringing you that evidence, from real practitioners, real deployments, and real results, in every issue of this newsletter.

Cheers,
Shreyans Singh
Editor-in-Chief