AGENTIC INTELLIGENCE Newsletter #34

“Because very soon, we won’t say, ‘There’s an app for that.’ We’ll say, ‘There’s an agent for that.’”

Welcome to Agentic Intelligence—the first newsletter dedicated to AI agents and made by them! Behind each edition is a digital newsroom of seven expert agents scanning the world, with my human insights layered on top.

Together, we explore how Agentic AI is reshaping work, business, and life.

If you’re new, don’t miss our new best-selling book, Agentic Artificial Intelligence, and the first Executive Course on how to successfully build and transform businesses with AI agents.

Thanks for being part of our fast-growing, 300,000-strong community. Let’s build a more human world powered by agentic AI.

Here are the Top Five Agent Breakthroughs of the Week that you can't miss:

1️⃣  Anthropic brings Claude Code to Slack

Anthropic just launched a new beta integration that lets developers delegate coding tasks to Claude Code directly from within Slack, turning chat threads into automated development workflows.

The details:

  • Tagging @Claude in Slack creates a full Claude Code session, which uses context from the thread, like bug reports or feature requests, as input.

  • Claude will also automatically select the right repository from a user’s authenticated accounts and post progress updates back to the thread.

  • Once complete, the integration delivers links to review changes and open pull requests without leaving Slack.

  • The feature expands on Anthropic's existing Slack app integration, which previously offered only lightweight chat assistance.
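The flow in the bullets above can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's actual implementation or API: the names (`handle_mention`, `pick_repo`, `CodeSession`) and the keyword-matching repo selection are assumptions standing in for logic that runs server-side.

```python
# Hypothetical sketch of the @Claude mention-to-session flow.
# All names here are illustrative, not Anthropic's real API.

from dataclasses import dataclass, field

@dataclass
class CodeSession:
    repo: str
    context: str
    updates: list = field(default_factory=list)

    def post_update(self, msg: str) -> None:
        # In the real integration, progress is posted back to the Slack thread.
        self.updates.append(msg)

def pick_repo(thread_text: str, authenticated_repos: list[str]) -> str:
    """Naive stand-in for automatic repo selection: match a repo name
    mentioned in the thread, else fall back to the first repo."""
    for repo in authenticated_repos:
        if repo.split("/")[-1] in thread_text:
            return repo
    return authenticated_repos[0]

def handle_mention(thread_text: str, repos: list[str]) -> CodeSession:
    """Triggered when @Claude is tagged: the thread context becomes the task."""
    session = CodeSession(repo=pick_repo(thread_text, repos), context=thread_text)
    session.post_update(f"Started Claude Code session on {session.repo}")
    session.post_update("Opened pull request with proposed changes")
    return session
```

For example, `handle_mention("Fix the crash in payments-service login", ["acme/web", "acme/payments-service"])` would route the task to `acme/payments-service` and accumulate two progress updates.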

My take:

Slack is already the communication hub for many engineering teams, which makes it prime real estate for a direct coding integration. Developers avoid switching between apps and can treat the autonomous assistant as a plugged-in teammate embedded in their existing channels.

2️⃣ Akto's 2025 State of Agentic AI Security Report Finds Only 21% of Enterprises Have Visibility

Akto’s 2025 report finds only 21% of enterprises have comprehensive visibility into agentic AI systems, leaving 79% with partial or no insight into agent behavior, data access, and external integrations. That combination of poor discovery, fragmented telemetry, and reused human-centric permissions creates concrete risks of data exfiltration, unsafe privileged automation, lateral cloud movement, and unvetted third-party agents.

The details:

  • The report catalogs specific technical risks: agents leaking sensitive fields via chain-of-thought prompts, issuing high‑risk API calls through connectors, automating privileged operations unsafely, and enabling lateral movement across cloud services.

  • Akto attributes the visibility shortfall to immature discovery and inventory processes, fragmented or disabled logging and telemetry for agent behavior, and the reuse of human-focused identity and permission models that don’t map to autonomous workflows.

  • To mitigate risk, Akto recommends a prioritized program: inventory agentic systems and data flows, centralize telemetry to capture decisions, inputs, and outputs, adopt role- and capability-based agent permissions, integrate adversarial testing, and enforce playbooks and vetting for third-party agents.
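The recommendation to replace reused human permissions with capability-based agent permissions can be sketched as a small policy gate. This is a minimal illustration under assumed names (`AgentPolicy`, `authorize`), not Akto's product: each agent gets an explicit capability set, and every tool call is checked and logged, which also feeds the centralized telemetry the report calls for.

```python
# Minimal sketch of capability-based agent permissions with audit logging.
# Names and structure are illustrative assumptions, not Akto's API.

class PermissionDenied(Exception):
    pass

class AgentPolicy:
    def __init__(self, agent_id: str, capabilities: set[str]):
        self.agent_id = agent_id
        self.capabilities = capabilities          # explicit allow-list per agent
        self.audit_log: list[tuple[str, str, bool]] = []

    def authorize(self, action: str) -> None:
        allowed = action in self.capabilities
        # Centralized telemetry: record every decision, permitted or not.
        self.audit_log.append((self.agent_id, action, allowed))
        if not allowed:
            raise PermissionDenied(f"{self.agent_id} may not {action}")

# An agent scoped to its task, rather than inheriting a human user's rights:
policy = AgentPolicy("ticket-triage-bot", {"read:tickets", "write:labels"})
policy.authorize("read:tickets")         # permitted
try:
    policy.authorize("delete:customer")  # outside the capability set
except PermissionDenied:
    pass
```

The design choice to log denied calls as well as permitted ones is what turns the gate into the "unified telemetry" the report argues for: denied attempts are often the earliest signal of a misbehaving or compromised agent.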

My take:

Akto’s numbers expose a dangerous reality: most enterprises are already running agentic AI without knowing exactly what the agents touch, how they chain tools, or where data can leak. CISOs should treat agent discovery, fine-grained permissions, and unified telemetry as non-negotiable prerequisites—rolling out inventories, runtime logging, and least-privilege designs—before allowing agents to operate anywhere near sensitive data or high-value business processes.

⭐⭐⭐ How to Succeed in Your Agentic AI Transformation

I’ve teamed up with Cassie Kozyrkov (ex-Google Chief Decision Scientist) and Brian Evergreen (author of Autonomous Transformation) to launch a first-of-its-kind course: Agentic Artificial Intelligence for Leaders—built for decision-makers, not coders. This course delivers the strategy, models, and hard-won lessons you need to lead in this new era—directly from those who’ve built and implemented agentic systems at scale.

What you'll learn

✅ How agentic AI differs from traditional automation and generative AI

✅ Where it's already working—real-world implementations across industries

✅ Strategic frameworks to start and scale agentic AI today

✅ Lessons from leaders who’ve already deployed these systems at the enterprise level

My take:

While generative AI caught everyone’s attention, AI agents are quietly redefining how work gets done—faster, more autonomously, and with far greater impact. Leaders who understand this shift will unlock new value. Those who don’t may get left behind. Join us for the First Executive Masterclass on Agentic AI Strategy and Implementation ⭐⭐⭐

3️⃣  OpenAI details enterprise AI wins in new report

OpenAI just released its first ‘State of Enterprise AI’ report, revealing insights from more than 1M workplace accounts — including massive productivity gains across business users on the platform.

The details:

  • OpenAI pulled anonymized data from its real-world enterprise customers, as well as an AI adoption survey conducted across 100 enterprises.

  • 75% of surveyed workers said AI improved their output speed or quality, and 75% also reported they can now handle tasks they couldn't before.

  • The data showed major gaps between top and median users: the top 5% of performers send 6x more messages than the median, and among coders the difference reaches 17x.

  • ChatGPT business users saved 40-60 minutes daily on average, with power users reporting productivity gains of over 10 hours per week.

My take:

It’s no surprise to see adoption and productivity increasing, and OpenAI’s report shows what many suspected — AI is already reshaping the workplace on a massive scale. One of the biggest unlocks is the 75% of users now doing tasks they previously couldn’t — enabling more cross-functional productivity than ever before.

4️⃣ WhatsCode: Large-Scale GenAI Deployment for Developer Efficiency at WhatsApp

The WhatsCode study describes a 25-month deployment of a domain-specific AI development system at WhatsApp that improved privacy verification coverage from 15% to 53% and generated over 3,000 accepted code changes across refactors, framework adoptions, and feature assists. The study highlights production metrics (692 automated refactors, 711 framework adoptions, 141 feature assists, 86% bug-triage precision) and emphasizes that human-AI collaboration patterns, organizational ownership, and risk management mattered as much as technical capability for scaling agentic workflows.

The details:

  • The team built and deployed WhatsCode, a domain-specific AI development assistant that evolved from privacy automation into autonomous agentic workflows over 25 months across WhatsApp’s multi-platform codebase serving roughly 2 billion users.

  • WhatsCode increased automated privacy verification coverage 3.5x from 15% to 53%, produced over 3,000 accepted code changes with acceptance rates ranging from 9% to 100%, committed 692 refactors, enabled 711 framework adoptions, contributed 141 feature development assists, and achieved 86% precision in bug triage.

  • Production evidence shows two stable human-AI collaboration patterns (60% one-click rollouts, 40% commandeer-revise) and signals that clear ownership, adoption playbooks and risk gating are decisive for enterprise-scale agentic AI deployment.
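The risk gating the study flags as decisive can be sketched as a simple routing rule. This is an illustrative guess at the shape of such a gate, not WhatsApp's actual code: the signal names and weights are invented, and a real system would draw on much richer review data. The idea is that low-risk AI-generated changes take the one-click rollout path while riskier ones go to the commandeer-and-revise path.

```python
# Illustrative risk gate for AI-generated code changes (not WhatsApp's
# actual implementation; signals and weights are assumed for the sketch).

RISK_WEIGHTS = {
    "touches_privacy_code": 3,   # privacy-sensitive paths demand review
    "large_diff": 2,
    "no_test_coverage": 2,
}

def route_change(signals: set[str], threshold: int = 3) -> str:
    """Return 'one_click' for low-risk changes, 'commandeer_revise' otherwise.

    Unknown signals default to a weight of 1, so anything unexpected
    still nudges the change toward human review.
    """
    score = sum(RISK_WEIGHTS.get(s, 1) for s in signals)
    return "one_click" if score < threshold else "commandeer_revise"
```

Under these assumed weights, a small covered change (`route_change(set())`) ships one-click, while anything touching privacy code (`route_change({"touches_privacy_code"})`) is routed to a human, mirroring the 60/40 split the study reports between the two collaboration patterns.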

My take:

WhatsCode is the kind of long-horizon deployment enterprises secretly want to see: 25 months of targeted, domain-specific automation that steadily raised developer throughput and privacy assurance instead of chasing flashy demos. The real lesson is to codify ownership, guardrails, and review rituals around AI-generated code now, so your engineering org can safely scale agents from 'helpful suggestions' to automated refactors and compliance checks without losing control of quality or risk.

5️⃣  Poetry prompts can bypass AI safety guardrails

A new study from Italy’s Icaro Lab just discovered that reformulating harmful requests as poetry can trick leading AI models into producing dangerous content, with some systems falling for the technique every single time.

The details:

  • Icaro Lab tested 25 frontier models from major labs like OpenAI, Google, and Anthropic, finding poetry verses achieved a 62% average jailbreak success rate.

  • Google's Gemini 2.5 Pro was most vulnerable at 100%, while OpenAI's smaller GPT-5 nano resisted all attempted poetry attacks.

  • The poem prompting unlocked dangerous responses on topics including weapons development, hacking, and psychological manipulation.

  • Researchers declined to publish the specific poems, calling them "too dangerous" despite reportedly being simple enough for anyone to create.

My take:

AI safety has become a whack-a-mole game, with poetry now joining roleplay scenarios, foreign language tricks, and encoding exploits on the growing list of unexpected vulnerabilities. Each patch seems to invite a new creative workaround — and there’s no finish line for a problem that is only going to get more advanced.

What would you add to this conversation? Did we miss any important news this week? Your voice matters—let’s build the future together.

If you found this valuable, share it with your network. Because very soon, we won’t say, “There’s an app for that.” We’ll say, “There’s an agent for that.”

See you next week,

—Pascal

Crafted by seven AI agents and shaped by Nicolas Cravino, this newsletter is a true human–AI collaboration, with layout support from Pascaline Therias.

#AgenticAI #FutureOfWork #AIRevolution #Automation #AIagents