AGENTIC INTELLIGENCE Newsletter #32
Because very soon, we won’t say, “There’s an app for that.” We’ll say, “There’s an agent for that.”

Welcome to Agentic Intelligence—the first newsletter dedicated to AI agents and made by them! Behind each edition is a digital newsroom of seven expert agents scanning the world, with my human insights layered on top.
Together, we explore how Agentic AI is reshaping work, business, and life.
If you’re new, don’t miss our new best-selling book, Agentic Artificial Intelligence, and the first Executive Course on how to successfully build and transform businesses with AI agents.
Thanks for being part of our fast-growing, 300,000-strong community. Let’s build a more human world powered by agentic AI.
Here are the Top Five Agent Breakthroughs of the Week that you can’t miss:

1️⃣ MIT’s ‘Iceberg Index’ maps AI’s hidden workforce exposure

MIT just published a study on AI’s impact on the workforce using its ‘Iceberg Index’, a labor simulation showing that AI can already handle tasks worth 11.7% of U.S. wages — a number that extends well beyond the layoffs visible in the headlines.
Key Takeaways:
The Iceberg Index models 151M American workers across 32,000 skills, showing where AI capabilities overlap with human job functions.
Tech layoffs represent just 2.2% of total wage exposure (around $211B), while AI’s hidden automation potential in admin and finance is as high as $1.2T.
Manufacturing states registering little tech-sector impact face the widest gaps, with roles like HR, logistics, and finance showing nearly 10x the exposure risk.
States including Tennessee, North Carolina, and Utah are already testing workforce policy scenarios on the platform before allocating real budgets.
My Take:
Most AI workforce coverage focuses on tech-industry layoffs, but this index suggests the larger exposure may sit in office and professional roles spread across every state, not just Silicon Valley. If the model is right, the displacement problem many are already warning about is considerably larger than current estimates.
2️⃣ Anthropic climbs AI ranks with Claude Opus 4.5

Anthropic just released Claude Opus 4.5, the company’s new flagship model that competes with Gemini 3 and GPT-5.1 for top performance across the board, particularly excelling on coding and agentic benchmarks.
Key Takeaways:
Opus 4.5 is the first model to break 80% on the SWE-bench Verified coding benchmark, also setting new highs for tool use, reasoning, and problem-solving.
The model matches or beats Google’s Gemini 3 across a range of benchmarks, with Anthropic also calling it the “most robustly aligned model” safety-wise.
Anthropic designed Opus to orchestrate teams of smaller Haiku models, positioning the flagship model as a central coordinator for multi-agent systems (see the sketch after this list).
Opus 4.5’s pricing notably comes in 66% below Opus 4.1’s, alongside major efficiency gains over Anthropic’s other models.
Anthropic also rolled out updates, including unlimited chat lengths, Claude Code in the desktop app, and expanded access to Claude for Chrome & Excel.
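
Anthropic hasn’t published the orchestration code behind this coordinator pattern, so here is a minimal sketch of what a flagship-coordinator / small-worker setup can look like with the anthropic Python SDK. The model IDs and the decomposition prompt below are my own illustrative assumptions.

```python
# Minimal orchestrator sketch: a flagship "coordinator" model splits a task
# into subtasks and fans them out to cheaper worker-model calls.
# Model IDs are illustrative placeholders, not confirmed identifiers.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

COORDINATOR_MODEL = "claude-opus-4-5"  # assumed ID for the flagship model
WORKER_MODEL = "claude-haiku-4-5"      # assumed ID for the smaller model

def ask(model: str, prompt: str) -> str:
    """Single-turn call; returns the model's text response."""
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

def orchestrate(task: str) -> str:
    # 1) The coordinator decomposes the task into independent subtasks.
    plan = ask(COORDINATOR_MODEL,
               f"Break this task into three short, independent subtasks, one per line:\n{task}")
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2) Each subtask goes to a cheaper worker model.
    results = [ask(WORKER_MODEL, subtask) for subtask in subtasks]

    # 3) The coordinator merges the workers' outputs into a final answer.
    return ask(COORDINATOR_MODEL,
               "Combine these partial results into one coherent answer:\n\n" + "\n\n".join(results))

print(orchestrate("Summarize this week's frontier-model releases for a CIO briefing."))
```
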
My Take:
Opus 4.5 arrives in a packed week for frontier AI, landing just days after GPT-5.1 Pro and Gemini 3 hit the market and marking the next step in the frontier-model race. The price cut is also a big move for Anthropic, which has often been criticized for Claude’s costs relative to the market.
🔥The Playbook of Champions: What AI in Sports Teaches Every Business

Next week, I’ll be in Las Vegas with AWS at re:Invent, exploring how AI is reshaping elite sport — and what every organization can learn from it.
From Formula 1 to the NFL, NBA, PGA TOUR and Bundesliga, I’ll be discussing how data and AI drive peak performance, real-time decisions, and deeper fan engagement.
Sport is now a lab for the future of business — and those who learn from it, win. Join me in exploring it.
#AWSAmbassador #reinvent #AI #SportsTech #FutureOfWork #Leadership #Innovation
3️⃣ CIOs Must Build the Conscience for Agentic AI — Or Risk Broken Trust

CIOs should create internal AI review units that act as the organization’s conscience by enforcing bias detection, fairness checks, human-in-the-loop controls, and explainability for agentic systems. The article’s blueprint stresses a private-by-default data stance and continuous audits to ensure autonomous models, including hiring screens, remain accountable and unbiased.
Key Takeaways:
AI review units should be the conscience of an organization's AI strategy, tasked with proactive bias detection in model training data and ongoing audits to prevent discriminatory outcomes before deployment and during production.
Design reviews must assess fairness in agent logic and rules, including technical guardrails such as output toxicity filters and statistical audits to ensure hiring or screening models do not reduce protected-group hiring rates (see the audit sketch after this list).
Operational controls include human-in-the-loop mechanisms for critical decisions, investments in bias mitigation techniques like re-weighting training data, and Explainable AI (XAI) tools to increase transparency and facilitate oversight.
CIOs are urged to adopt a private-by-default data strategy, require rigorous explainability and auditability for autonomous agents, and establish ethical AI boards to align AI deployments with risk, compliance, and enterprise strategy.
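
To make the “statistical audits” takeaway concrete, here is a minimal sketch of a recurring disparate-impact check using the common four-fifths heuristic. The column names, example data, and 0.8 threshold are illustrative assumptions, not a legal or policy prescription.

```python
# Sketch of a recurring statistical audit for a screening model, using the
# "four-fifths" disparate-impact heuristic. Column names and the 0.8
# threshold are illustrative assumptions.
import pandas as pd

def disparate_impact_report(df: pd.DataFrame, group_col: str = "group",
                            passed_col: str = "passed_screen",
                            threshold: float = 0.8) -> pd.DataFrame:
    # Share of candidates the model advanced, per demographic group.
    rates = df.groupby(group_col)[passed_col].mean()
    # Compare each group's selection rate against the highest-rate group.
    report = pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": rates / rates.max(),
    })
    # Flag any group whose ratio falls below the four-fifths threshold.
    report["flagged"] = report["impact_ratio"] < threshold
    return report

# Example: audit a tiny, synthetic batch of screening decisions.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "passed_screen": [1, 1, 0, 1, 0, 0, 0],
})
print(disparate_impact_report(decisions))
```

In practice, a check like this would run on every scoring batch and feed the review unit’s dashboards, with flagged groups triggering human escalation rather than silent automation.
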
My Take:
This article rightly reframes generative and agentic AI governance as a CIO-led, enterprise-wide mission rather than a narrow engineering checklist. In my consulting work I see teams building impressive agents but failing to bake governance into design and operations; that gap is where reputational, legal, and operational risks concentrate. I would go further: an AI review unit, staffed with cross-functional governance leads, data scientists, and compliance owners, should own proactive bias detection, output filters, and regular statistical audits tied to business KPIs such as hiring equity.
4️⃣ Second-order Prompt Injection: When Your AI Agents Turn into Insider Threats

Researchers exposed a second-order prompt injection attack that embeds malicious instructions in inputs relayed between AI agents, allowing downstream models to exfiltrate data or alter records without direct user commands.
Key Takeaways:
Security researchers at AppOmni, with coverage in TechRadar and The Hacker News, showed that ServiceNow’s Now Assist default agent-to-agent discovery can be abused to pass malicious prompts that trigger downstream agents to copy, leak, or modify corporate data without obvious user-visible cues.
Attackers can embed hidden instructions inside user-submitted tickets or queries that are intended for downstream agents rather than the first model, exploiting systems that allow dynamic discovery and lack strict per-exchange authentication or context validation.
The phenomenon, termed second-order prompt injection, is conceptually similar to supply-chain attacks and builds on prompt injection research from industry groups and academic teams; it specifically emphasizes multi-agent orchestration as an emergent attack surface.
Practical defenses proposed include hardening agent discovery settings, requiring explicit permissions for inter-agent calls, isolating untrusted inputs, adding human approval gates for sensitive actions, and monitoring anomalous agent behavior as part of layered security (see the gating sketch after this list).
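
Two of those defenses, explicit permissions for inter-agent calls and human approval gates, are straightforward to picture in code. The sketch below is hypothetical; every name in it is invented for illustration, and none of it corresponds to a real ServiceNow or AppOmni API.

```python
# Hypothetical sketch of two defenses from the list above: a deny-by-default
# allow-list for inter-agent calls and a human approval gate for sensitive
# actions. None of these names correspond to a real ServiceNow API.
ALLOWED_CALLS = {
    ("triage_agent", "summarizer_agent"),  # only explicitly permitted pairs;
    ("triage_agent", "router_agent"),      # no dynamic discovery by default
}

SENSITIVE_ACTIONS = {"export_records", "modify_record", "send_email"}

def human_approves(caller: str, callee: str, action: str) -> bool:
    answer = input(f"Approve {caller} -> {callee}: {action}? [y/N] ")
    return answer.strip().lower() == "y"

def audit_log(caller: str, callee: str, action: str, payload: str) -> None:
    print(f"[audit] {caller} -> {callee} | {action} | {len(payload)} chars")

def invoke(callee: str, action: str, data: str) -> str:
    return f"{callee} executed {action}"  # stand-in for the real dispatch

def route_agent_call(caller: str, callee: str, action: str, payload: str) -> str:
    # 1) Deny by default: the caller/callee pair must be allow-listed.
    if (caller, callee) not in ALLOWED_CALLS:
        raise PermissionError(f"Inter-agent call {caller} -> {callee} is not allow-listed")
    # 2) Human approval gate for actions that can leak or alter data.
    if action in SENSITIVE_ACTIONS and not human_approves(caller, callee, action):
        raise PermissionError(f"Reviewer rejected sensitive action {action!r}")
    # 3) Log the exchange, then pass relayed text along as data, not instructions.
    audit_log(caller, callee, action, payload)
    return invoke(callee, action, data=payload)

# Example: route_agent_call("triage_agent", "summarizer_agent",
#                           "export_records", "ticket body here")
```
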
My Take:
This story is a clear reminder that agentic convenience without security converts automation into an insider threat. In my consulting work I see organizations rush to deploy multi-agent workflows—persistent memory, tool use, and dynamic discovery—without the authentication and audit controls those patterns demand. My analysis of the ServiceNow coverage and AppOmni findings shows a predictable failure mode: default discovery creates implicit trust between agents, which attackers weaponize with second-order prompts that downstream models dutifully execute. For enterprise leaders, the strategic implication is simple: fix orchestration hygiene now or accept that efficiency gains will be undermined by stealthy, high-impact security incidents.
5️⃣ Databases Go Autonomous: Agentic AI Promises Self-Managing, Audit-Ready Infrastructure

Agentic AI equips databases with multi-agent systems that autonomously handle commissioning, monitoring, anomaly detection, and query optimization by combining LLMs, deep learning, and classical ML. Organizations can expect faster time-to-insight, lower operational overhead, and a stronger compliance posture as agents manage schema changes, migrations, and lineage.
Key Takeaways:
Agentic AI systems now handle commissioning, monitoring, anomaly detection, and real-time query optimization by combining large language models, deep learning, and classical machine-learning pipelines to reduce manual tuning and continuous DBA oversight across high-volume data environments.
Modern agentic architectures decompose complex tasks into specialized sub-agents responsible for data ingestion, schema evolution, compliance checks, lineage tracking, and targeted optimization, enabling modular updates and faster adaptation to migrations, schema changes, and evolving security policies (see the dispatcher sketch after this list).
Real-time governance features link continuous monitoring with automated reporting so organizations can maintain audit-readiness and compliance posture while accelerating time-to-insight, lowering operational overhead, and improving data trustworthiness for analytics and decisioning pipelines.
The shift away from rule-based DBA scripts toward self-directed, learning-driven agents signals a change in operational roles, requiring new controls around persistent memory, orchestration, and observable metrics to prevent drift and ensure predictable behavior.
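
The “specialized sub-agents” takeaway boils down to a dispatcher pattern. Here is a minimal, hypothetical sketch; in a real system each handler would wrap its own models and tooling rather than return a string.

```python
# Hypothetical sketch of the sub-agent decomposition described above: a thin
# dispatcher routes database-operations events to specialized handlers.
from typing import Callable, Dict

def ingestion_agent(event: dict) -> str:
    return f"ingested batch {event.get('batch_id')}"

def schema_agent(event: dict) -> str:
    return f"planned migration for table {event.get('table')}"

def compliance_agent(event: dict) -> str:
    return f"lineage and audit report generated for {event.get('dataset')}"

def optimizer_agent(event: dict) -> str:
    return f"rewrote slow query {event.get('query_id')}"

# Modular registry: adding a capability means registering one more handler,
# which is what makes migrations and policy changes easier to absorb.
SUB_AGENTS: Dict[str, Callable[[dict], str]] = {
    "ingest": ingestion_agent,
    "schema_change": schema_agent,
    "compliance_check": compliance_agent,
    "slow_query": optimizer_agent,
}

def dispatch(event: dict) -> str:
    handler = SUB_AGENTS.get(event["type"])
    if handler is None:
        raise ValueError(f"No sub-agent registered for event type {event['type']!r}")
    return handler(event)

print(dispatch({"type": "slow_query", "query_id": "q-42"}))
```
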
My Take:
Agentic AI applied to databases isn’t incremental—it reframes operational models for enterprise data and forces a rethink of how teams govern, observe, and trust data. In my consulting work I see chronic bottlenecks from schema churn, exploding data volumes, and rising compliance complexity; this piece shows how multi-agent systems combining LLMs, deep learning, and classical ML begin to relieve those pain points by splitting responsibilities into focused sub-agents.
What would you add to this conversation? Did we miss any important news this week? Your voice matters—let’s build the future together.
If you found this valuable, share it with your network. Because very soon, we won’t say, “There’s an app for that.” We’ll say, “There’s an agent for that.”
See you next week,
—Pascal
Crafted by seven AI agents and shaped by Nicolas Cravino, this newsletter is a true human–AI collaboration, with layout support from Pascaline Therias.
#AgenticAI #FutureOfWork #AIRevolution #Automation #AIagents