Artificial intelligence has already transformed how we build software, run networks and manage connected devices. But according to AT&T Chief Data Officer Andy Markus, the real disruption is just getting started.
In his December 2025 article, “Six AI predictions for 2026”, Markus outlines how a new wave of AI agents, AI‑fueled coding and telecom‑grade infrastructure will reshape enterprise technology over the next year.
For IoTWorlds.com readers—who live at the junction of IoT, 5G, edge and cloud—these predictions are especially relevant. They point to:
- a new development model for connected apps,
- a shift in where AI workloads run,
- and an expanded role for telcos as AI service providers, not just connectivity vendors.
This long‑form guide unpacks each of AT&T’s six AI predictions for 2026, translating them into practical implications for:
- IoT and Industrial IoT (IIoT) leaders,
- network and cloud architects,
- software teams building agentic solutions,
- and telecom providers exploring new revenue streams.
We will cover:
- Fine‑tuned small language models (SLMs) as the default enterprise AI
- AI‑fueled coding as the next development methodology
- On‑demand apps built and maintained by AI agents
- Private high‑capacity fiber networks connecting enterprises to compute
- Telcos productizing AI services such as fine‑tuning
- Accuracy, cost and speed as universal AI metrics
Then we’ll add an IoT‑specific roadmap for 2026.
1. Fine‑Tuned Small Language Models Will Dominate Enterprise AI
AT&T’s first prediction is that fine‑tuned small language models (SLMs) will become the most‑used models in enterprises by 2026.
1.1 What are SLMs and why do they matter?
Small language models are:
- Smaller in parameter count than frontier LLMs
- Fine‑tuned on narrow, domain‑specific datasets
- Optimized for speed, cost and targeted accuracy, not general world knowledge
According to Markus, they are “breaking the old adage: between good, cheap and fast, choose two.” Well‑designed SLMs can be:
- Good – highly accurate on their target tasks
- Cheap – much lower per‑token cost than giant models
- Fast – low latency, suitable for real‑time use
In practice, enterprises will not abandon large models. Instead, LLMs and SLMs will work together inside agentic workflows:
- A large reasoning model coordinates the overall task, calls tools, reasons across documents and orchestrates agents.
- Specialized SLMs handle focused subtasks—routing tickets, classifying logs, drafting standard emails, parsing IoT telemetry, etc.

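The LLM-orchestrator / SLM-worker split above can be sketched in a few lines. All model calls are stubbed out with plain functions; names like `classify_log` and `SPECIALISTS` are illustrative, not any vendor's actual tooling:

```python
# Minimal sketch of an orchestrator delegating focused subtasks to
# specialized "SLMs". Each stub stands in for a fine-tuned model call.

def classify_log(line: str) -> str:
    """Stand-in for an SLM that labels a single log line."""
    return "network-fault" if "link down" in line.lower() else "routine"

def draft_ticket(label: str, line: str) -> dict:
    """Stand-in for a second SLM that drafts a ticket from a label."""
    return {"severity": "high" if label == "network-fault" else "low",
            "summary": f"[{label}] {line}"}

# In production a large reasoning model would choose the specialist;
# here a simple dispatch table plays the orchestrator's role.
SPECIALISTS = {"classify": classify_log, "ticket": draft_ticket}

def orchestrate(log_line: str) -> dict:
    label = SPECIALISTS["classify"](log_line)
    return SPECIALISTS["ticket"](label, log_line)

ticket = orchestrate("eth0: link down on gateway 7")
```

The point of the pattern is that each specialist stays small and testable, while the orchestrator owns only the routing decision.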
1.2 Why SLMs are perfect for IoT and edge scenarios
For IoT systems, SLMs are exceptionally well aligned with constraints:
- Limited compute at the edge – SLMs can run on gateways, industrial PCs and even high‑end embedded devices.
- Domain‑specific vocabulary – industrial telemetry, medical device alerts or telco logs don’t need general‑purpose knowledge; they need deep accuracy on narrow semantics.
- Data‑sovereign deployments – SLMs fine‑tuned on private data can be deployed on‑premises or in private clouds to satisfy regulatory and latency requirements.
Example IoT use cases for SLMs:
- Smart factory copilot: SLM trained on MES logs, maintenance manuals and PLC fault codes to help technicians diagnose issues on the shop floor.
- Utility‑grid triage assistant: SLM interpreting SCADA alerts, weather feeds and trouble tickets to suggest likely root causes and next steps.
- Telecom NOC assistant: SLM classifying alarms across RAN, transport and core, mapping them to known incidents and recommended playbooks.
1.3 Strategic moves for 2026
To leverage the SLM wave:
- Build a high‑quality, labeled domain corpus – logs, tickets, manuals, SOPs, network traces.
- Choose an SLM base architecture that fits your deployment targets (GPU cluster vs. CPU‑only edge).
- Fine‑tune for specific roles, not generic “do everything” tasks—e.g., incident triage, anomaly explanation, work‑order drafting.
- Integrate SLMs as micro‑services that can be called by larger orchestrating models or agents.
The core insight: your data, not the model, is your competitive advantage. SLMs are just the vehicle to turn that data into reliable, low‑latency intelligence.
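One way to read the "SLMs as micro-services" step is a thin JSON-in / JSON-out endpoint that any orchestrating agent can call. A minimal sketch, with `TriageSLM` as a hypothetical stub for a loaded fine-tuned model:

```python
# Sketch of exposing a fine-tuned SLM behind a micro-service-style
# handler. A real deployment would load weights and sit behind HTTP.
import json

class TriageSLM:
    """Stub: keyword lookup standing in for model inference."""
    LABELS = {"overheat": "thermal", "link down": "network"}

    def predict(self, text: str) -> str:
        for key, label in self.LABELS.items():
            if key in text.lower():
                return label
        return "unknown"

def handle_request(body: str, model: TriageSLM = TriageSLM()) -> str:
    """JSON-in / JSON-out contract an orchestrating agent could call."""
    payload = json.loads(body)
    return json.dumps({"label": model.predict(payload["text"])})

resp = json.loads(handle_request('{"text": "Sensor 12 overheat alarm"}'))
```

Keeping the contract this narrow is what lets a larger model swap specialists in and out without redeploying the orchestrator.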
2. AI‑Fueled Coding Becomes the Default Development Methodology
AT&T’s second prediction: AI‑fueled coding will emerge as the next major development methodology, reducing some development cycles from weeks to minutes.
2.1 From Copilot to End‑to‑End Code Partner
Developers have used AI for autocomplete and boilerplate for years. GPT‑class models can already:
- Generate API clients and test suites
- Rewrite legacy code into modern frameworks
- Suggest fixes for compiler errors or failing tests
In AT&T’s internal experiments, an “internal curated data product” that would have taken six weeks was built in 20 minutes using AI‑fueled coding, while still adhering to AT&T’s security and compliance discipline.
The evolution Markus describes goes much further:
- Developers wear more hats (product owner, architect, implementer) because AI handles many hand‑offs.
- Non‑technical teams contribute – product managers or business analysts write natural‑language specs or interface sketches; AI turns them into working prototypes.
- For well‑bounded apps, a strong model plus clearly defined tests may produce production‑grade code “in one shot”, with only minor human edits.
2.2 Implications for IoT and telecom software
For connected‑system builders, AI‑fueled coding changes:
- Firmware and driver development – AI can scaffold board‑support packages, MODBUS/MQTT clients, and sensor fusion routines.
- Gateway and edge‑service orchestration – building micro‑services for protocol translation, data enrichment and local analytics becomes much faster.
- Network automation scripts – AI can write and maintain Ansible, Terraform, Netconf/YANG or Open RAN automation recipes.
This doesn’t mean you can skip engineering rigor. For IoT and telecom:
- Static analysis, fuzzing, unit tests, integration tests and security reviews remain essential.
- AI must be trained or constrained to adhere to existing coding standards, threat models and compliance frameworks—exactly as AT&T has done with its internal tools.
2.3 How to prepare your development organization
- Standardize prompts and templates for common tasks: new REST API, PROFINET driver, Kubernetes operator, etc.
- Invest in AI‑assisted CI/CD: automatic test‑case generation, upgrade suggestions, vulnerability scanning.
- Create “golden patterns”—approved scaffolds for micro‑services, device agents, pipelines—that AI tools are instructed to follow.
- Upskill engineers to think in terms of system design and constraint specification, letting AI handle boilerplate implementation.
By 2026, the competitive difference will be how effectively you orchestrate AI in your SDLC, not whether you use it at all.
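The "standardize prompts" and "golden patterns" steps can be combined into a small prompt builder. This is a sketch under assumptions: the pattern registry, field names, and standards list are all made up for illustration:

```python
# Sketch of a standardized prompt template that pins AI-fueled coding
# to approved scaffolds. Registry contents are illustrative only.
GOLDEN_PATTERNS = {
    "rest-api": "FastAPI service with health endpoint and structured logging",
    "device-agent": "MQTT client with TLS, reconnect backoff, telemetry schema",
}

def build_prompt(task: str, pattern_key: str, standards: list) -> str:
    """Compose a prompt that embeds the approved scaffold and rules."""
    rules = "\n".join(f"- {s}" for s in standards)
    return (f"Task: {task}\n"
            f"Follow the approved scaffold: {GOLDEN_PATTERNS[pattern_key]}\n"
            f"Mandatory standards:\n{rules}")

prompt = build_prompt("Add a /metrics endpoint", "rest-api",
                      ["no secrets in code", "unit tests required"])
```

Centralizing the template means coding standards are enforced at prompt time, not rediscovered in review.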
3. On‑Demand Apps Powered by AI Agents
Prediction three: businesses will begin building on‑demand applications, supported and maintained by AI agents.
3.1 From monolithic apps to ephemeral workflows
Historically, enterprise apps:
- required long requirements phases and implementation cycles,
- had to justify multi‑year maintenance and licensing,
- and often lagged behind fast‑changing business needs.
With AI‑fueled coding and autonomous agents, Markus argues that companies will increasingly spin up apps:
- for a specific workflow or campaign,
- with lifecycle measured in weeks or months,
- and possibly retired once the need passes.
Agents can continually adapt these micro‑apps based on:
- user feedback,
- data drift,
- new APIs or backend services.
Traditional apps won’t disappear overnight, but AI‑generated micro‑SaaS and internal tools will become the fastest path to solving immediate problems.
3.2 How this transforms IoT and OT environments
In IoT and operational technology (OT), on‑demand apps can:
- Spin up custom dashboards for a temporary production line or maintenance campaign.
- Generate field‑service mobile apps tailored to a specific asset family or region, then retire them post‑upgrade.
- Create situation rooms during incidents—pulling together maps, live device telemetry, historical trends and recommended playbooks.
Imagine a scenario:
- A chemical plant experiences a new type of fault across several reactors.
- Operations leaders describe the issue in natural language.
- Within hours, an AI agent generates:
- a diagnostic web app integrating historian data, sensor thresholds and safety rules,
- a workflow for engineers to label incidents,
- and integration hooks into the CMMS.
Once the anomaly is understood and fixed, the app may be archived—but the patterns live on for future reuse.
3.3 Governance for agent‑built applications
To avoid chaos:
- Treat AI‑built apps as first‑class artifacts in your portfolio—version‑controlled, tested and monitored.
- Establish guardrails: which data sources agents may use, what operations they may perform, who approves deployment.
- Use enterprise catalogs so employees can discover, rate and responsibly reuse on‑demand tools.
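The guardrails bullet above can be made concrete as a policy check: which data sources an agent may read and which operations it may perform. The policy shape and agent names here are hypothetical, not a real governance framework:

```python
# Sketch of an allow-list guardrail for agent-built apps. Each agent
# gets an explicit set of data sources and permitted operations.
POLICY = {
    "reporting-agent": {
        "data": {"historian", "tickets"},
        "ops": {"read", "create_dashboard"},
    },
}

def is_allowed(agent: str, data_source: str, operation: str) -> bool:
    """Deny by default: unknown agents, sources, or ops all fail."""
    rules = POLICY.get(agent)
    return bool(rules) and data_source in rules["data"] and operation in rules["ops"]

ok = is_allowed("reporting-agent", "historian", "read")
blocked = is_allowed("reporting-agent", "cmms", "write")
```

Deny-by-default is the important design choice: an agent's capabilities grow only when a human adds an entry, which doubles as an audit trail.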
4. Private Fiber Networks + Consolidated Compute Centers
The fourth AT&T prediction is highly infrastructural: private high‑capacity, low‑latency fiber networks will connect enterprises to consolidated compute centers, reshaping cloud and AI architectures.
4.1 Why connectivity becomes an AI bottleneck
Modern AI workloads, especially those involving:
- large models,
- continuous streaming data,
- and agentic orchestration,
demand massive bandwidth and ultra‑low latency between:
- edge devices (IoT sensors, cameras, robots),
- enterprise campuses,
- and regional or centralized compute clusters.
Public internet paths and best‑effort VPNs often cannot guarantee predictable performance, especially under heavy, bursty AI traffic.
4.2 The rise of private fiber for AI‑heavy enterprises
AT&T expects that major enterprises investing heavily in AI will:
- build or lease private fiber networks connecting key locations to cloud and data‑center sites,
- secure dedicated capacity and traffic engineering for AI workflows,
- use network slicing and QoS to prioritize latency‑critical tasks.
In turn, cloud providers will co‑locate or consolidate compute centers near these fiber hubs, moving closer to enterprise premises and 5G aggregation points.
We are essentially moving into a new era of:
- cloud + telecom co‑design,
- sophisticated hybrid‑cloud and on‑prem compositions,
- and AI‑native transport architectures.
4.3 Implications for IoT and edge
For IoT and IIoT:
- High‑bandwidth data streams (4K video analytics, digital twins) become more practical over dedicated fiber plus private 5G.
- Edge and fog compute nodes may act as local SLM hosts, while larger reasoning models sit in nearby data centers connected through deterministic fiber.
- Mission‑critical systems—energy, transportation, healthcare—get more predictable performance than over shared public networks.
Enterprises should:
- Re‑evaluate WAN and multi‑cloud strategies through an AI lens: where is data produced, where is it consumed, how often and how fast?
- Explore Network‑as‑a‑Service (NaaS) and telecom partnerships that provide programmable private connectivity.
- Incorporate AI‑driven traffic engineering, where agents dynamically route flows based on current load and model demands.
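The traffic-engineering idea in the last bullet reduces to a policy decision per flow: pick the lowest-latency path that still has headroom. A minimal sketch; the path inventory, latency figures, and utilization thresholds are invented for illustration:

```python
# Sketch of policy-driven path selection for AI traffic. A routing
# agent would refresh these measurements continuously.
PATHS = [
    {"name": "private-fiber", "latency_ms": 4,  "utilization": 0.82},
    {"name": "mpls",          "latency_ms": 12, "utilization": 0.40},
    {"name": "internet-vpn",  "latency_ms": 35, "utilization": 0.25},
]

def select_path(max_latency_ms: float, max_util: float = 0.9) -> str:
    """Choose the fastest path meeting the latency budget and headroom."""
    candidates = [p for p in PATHS
                  if p["latency_ms"] <= max_latency_ms
                  and p["utilization"] < max_util]
    best = min(candidates, key=lambda p: p["latency_ms"], default=None)
    return best["name"] if best else "best-effort"

path = select_path(max_latency_ms=10)                  # latency-critical inference
bulk = select_path(max_latency_ms=100, max_util=0.5)   # bulk training data
```

Note how the two calls land on different paths: real-time inference takes the fiber despite its load, while bulk transfers are steered to links with spare capacity.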
5. Telcos as AI Service Providers
In prediction five, Markus argues that telecom operators will offer more AI services—such as fine‑tuning—to business customers, extending beyond core connectivity.
5.1 Why telcos are well placed
Telecoms handle:
- massive volumes of network telemetry, call data records and IoT device traffic,
- large, distributed infrastructures with strong security and compliance practices,
- existing partnerships with hyperscale cloud providers and AI platforms.
They have already learned to:
- train and fine‑tune models on their own network data (for fraud detection, network optimization, customer‑care automation),
- run high‑availability AI services in both core data centers and at the edge (e.g., for RAN optimization).
That know‑how is directly reusable as productized AI services:
- SLM fine‑tuning on customer data, hosted in the telco’s cloud region,
- AI‑driven network analytics for enterprise SD‑WANs and private 5G networks,
- turnkey agentic platforms that automate observability, ticketing and remediation.
5.2 Potential service categories
- Network + AI bundles
- Private 5G / LAN plus embedded AI observability and optimization.
- Pay‑per‑use SLMs tuned for specific sectors: manufacturing, logistics, retail, healthcare.
- AI at the edge
- Co‑located MEC (multi‑access edge compute) with pre‑configured models for video analytics, predictive maintenance and XR.
- Edge app marketplaces where enterprises deploy AI agents close to their IoT data sources.
- Security and compliance AI
- Anomaly detection and threat‑hunting on customer traffic (with clear privacy contracts).
- Automated compliance reporting for regulated industries.
Telcos have historically diversified into cloud, TV, security and IoT. AI services are the next adjacency, enabled by their unique data position and network footprint.
For enterprise buyers, this means:
- more choice beyond hyperscalers,
- lower integration friction between connectivity and AI,
- and potentially tighter SLAs across the whole stack.
6. Metrics for AI Accuracy, Cost and Speed Become Universal
The final AT&T prediction: by 2026, every business unit will track clear metrics for AI accuracy, cost and speed, treating AI as a core utility rather than a novelty.
6.1 From experimentation to operational discipline
As AI tools proliferate:
- Marketing, HR, finance, operations and engineering are all using AI assistants, generators and agents.
- Boards and regulators increasingly ask, “What value does this deliver? At what risk and cost?”
Markus highlights three critical measures:
- Accuracy – Does the model produce correct, reliable outputs for the use case?
- Speed – Is latency acceptable for the workflow (e.g., real‑time triage vs. offline planning)?
- Cost – How much are we spending on inference, training, storage and supporting infrastructure per task?
Without standardized metrics:
- Teams may overspend on large models where SLMs would suffice.
- Hidden error rates can lead to costly mistakes (e.g., mis‑triaged incidents, bad code, compliance violations).
6.2 Building an AI metrics framework for IoT
For IoT organizations, extend these metrics with domain‑specific KPIs:
- MTTR reduction – impact of AI on Mean Time To Repair.
- Uptime / SLA adherence – improvement in network or equipment availability.
- False‑positive and false‑negative rates for anomalies.
- Energy per inference for edge‑deployed models.
- Cost per device / per site for AI‑driven monitoring or optimization.
Combine these into dashboards visible to:
- business leaders (ROI, risk),
- engineering teams (latency, error budgets),
- finance (run‑rate costs and capital efficiency).
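The accuracy / speed / cost triad plus the IoT KPIs above can be rolled into one scorecard function. A sketch with illustrative numbers (no real model is being measured):

```python
# Sketch of an AI scorecard combining accuracy, latency, cost, and an
# IoT-specific false-positive rate. Inputs are per-inference records.
def scorecard(preds, labels, latencies_ms, cost_usd):
    fp = sum(p and not l for p, l in zip(preds, labels))
    negatives = sum(not l for l in labels)
    return {
        "accuracy": sum(p == l for p, l in zip(preds, labels)) / len(labels),
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "p95_latency_ms": sorted(latencies_ms)[int(0.95 * (len(latencies_ms) - 1))],
        "cost_per_inference_usd": cost_usd / len(preds),
    }

m = scorecard(preds=[1, 1, 0, 0, 1], labels=[1, 0, 0, 0, 1],
              latencies_ms=[12, 15, 11, 30, 14], cost_usd=0.05)
```

Feeding one function from production telemetry keeps the business, engineering, and finance views of the same model consistent by construction.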
By 2026, AI governance will look much more like SRE and FinOps: data‑driven, cross‑functional and continuous.
7. What AT&T’s Predictions Mean for IoT and Edge Computing
Taken together, AT&T’s six AI predictions for 2026 paint a clear picture:
The next phase of AI is agentic, domain‑specialized, infrastructure‑aware and metrics‑driven.
For IoT and edge ecosystems, this implies several concrete shifts.
7.1 Architectures will be agent‑centric
IoT platforms will increasingly be orchestrated by AI agents that:
- monitor telemetry,
- call domain‑specific SLMs,
- invoke tools (APIs, automation scripts),
- and coordinate human‑in‑the‑loop decisions.
Rather than monolithic “IoT platforms,” we’ll see modular services—device registry, digital twins, rule engines—wrapped in agentic workflows that adapt over time.
7.2 Edge and cloud boundaries will blur
With:
- SLMs at the edge,
- large models and training clusters in nearby consolidated compute centers over private fiber,
- and telco‑hosted AI services at MEC sites,
enterprises will design tiered AI stacks:
- On‑device: safety checks, basic anomaly detection, control loops.
- On‑prem / edge: SLMs, multi‑sensor fusion, local orchestration.
- Regional cloud: heavy reasoning, cross‑site optimization, deep learning.
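Tier selection in such a stack is ultimately a routing decision per request. A sketch of one possible policy; the latency thresholds and the data-sovereignty rule are illustrative assumptions:

```python
# Sketch of routing an AI request to the right tier of a layered
# stack based on latency budget and whether data may leave the site.
def select_tier(latency_budget_ms: float, data_leaves_site: bool) -> str:
    if latency_budget_ms < 10:
        return "on-device"        # safety checks, control loops
    if latency_budget_ms < 100 or not data_leaves_site:
        return "edge-slm"         # local SLM on gateway / on-prem
    return "regional-cloud"       # heavy reasoning, cross-site optimization

tiers = [select_tier(5, False),    # actuator safety check
         select_tier(50, True),    # alarm triage
         select_tier(500, True)]   # fleet-wide optimization
```

The sovereignty clause matters: even a slow request stays at the edge when its data cannot leave the site, which is exactly the constraint SLMs are positioned to satisfy.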
7.3 Security models must become quantum‑ and AI‑aware
AI agents and massive automation introduce:
- new attack surfaces (prompt injection, tool‑abuse, data poisoning),
- and new dependencies on network performance and integrity.
Private fiber and telco AI services can help, but organizations must:
- enforce zero‑trust principles for agents,
- monitor AI behavior and drift,
- and adopt quantum‑safe encryption in anticipation of future compute advances.
7.4 Talent profiles will change
Teams will need:
- AI‑literate network engineers and IoT architects,
- developers comfortable orchestrating agents and SLMs, not just writing code,
- operations leaders who can interpret AI metrics dashboards and adjust strategy accordingly.
8. 2025–2026 Roadmap: How to Prepare Now
Here is a condensed, practical roadmap aligned with AT&T’s predictions and tailored for IoTWorlds readers.
Step 1 – Audit your AI and connectivity baseline
- Catalog current AI use across business units and IoT systems.
- Map where data originates, how it flows, and where models run.
- Identify clear pain points in accuracy, latency and cost.
Step 2 – Design your SLM strategy
- Choose 2–3 high‑value, narrow use cases (e.g., incident triage, maintenance‑ticket categorization, device‑alert summarization).
- Build a labeled dataset and fine‑tune at least one SLM.
- Measure improvements versus generic LLM prompts.
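The "measure improvements" step can be as simple as scoring both models on the same labeled set and reporting the deltas. The figures below are invented placeholders, not benchmark results:

```python
# Sketch of the Step 2 comparison: a fine-tuned SLM versus a generic
# LLM baseline on identical held-out data. All numbers are made up.
def compare(baseline: dict, slm: dict) -> dict:
    """Positive deltas are good for accuracy; negative for latency/cost."""
    return {k: round(slm[k] - baseline[k], 4) for k in baseline}

baseline = {"accuracy": 0.78, "p95_latency_ms": 900, "cost_per_1k_usd": 2.40}
slm      = {"accuracy": 0.93, "p95_latency_ms": 120, "cost_per_1k_usd": 0.15}
delta = compare(baseline, slm)
```

Reporting deltas rather than absolute scores keeps the business case front and center: the SLM earns its place only if it beats the generic prompt on the metrics that matter.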
Step 3 – Introduce AI‑fueled coding safely
- Roll out AI assistants to a limited set of dev teams.
- Define coding standards, security rules and architectural patterns that the AI must follow.
- Track cycle‑time reduction and defect rates.
Step 4 – Pilot on‑demand apps with agents
- Identify a workflow that changes frequently (ad‑hoc reporting, campaign‑specific dashboards).
- Use agents plus AI‑fueled coding to build and iterate a micro‑app.
- Document governance: who can spawn agents, what data they see, what actions they can take.
Step 5 – Evaluate your network for AI readiness
- Estimate future AI traffic between sites and clouds.
- Discuss private fiber and MEC options with your telecom provider.
- Pilot at least one AI‑accelerated edge use case (e.g., vision analytics) over a high‑SLA connection.
Step 6 – Engage telcos as AI partners, not just carriers
- Ask your operator about:
- fine‑tuning or hosting SLMs on their infrastructure,
- AI‑powered observability for your SD‑WAN / 5G slices,
- co‑developing vertical solutions (e.g., smart factory, connected health).
- Explore joint funding or go‑to‑market programs.
Step 7 – Build an AI metrics and governance stack
- Standardize definitions of accuracy, latency and cost for your key AI services.
- Create dashboards and review cadences.
- Implement risk‑management practices: model cards, incident tracking, change logs.
9. Final Thoughts: 2026 as the Year of Practical, Networked AI
AT&T’s 2026 AI predictions are not speculative sci‑fi—they are grounded in what a major U.S. telecom and data‑driven enterprise is already seeing across its networks and customers.
The overarching themes:
- Agents + SLMs will bring AI deeply into day‑to‑day operations.
- AI‑fueled coding will compress software timelines and empower non‑developers.
- Private fiber and telco AI services will become the backbone of intelligent infrastructure.
- Accuracy, speed and cost metrics will determine winners and losers.
For IoT and connected‑industry leaders, the takeaway is clear:
Treat AI as a system‑level capability that spans devices, networks, compute and people—not as an add‑on feature.
The organizations that act in 2025—building SLMs, modernizing connectivity, piloting agentic workflows and tightening AI governance—will enter 2026 ready to harness this new landscape. Those that wait may find themselves running on legacy architectures in a world of on‑demand, agent‑driven, network‑optimized AI.
