AGENTIC INTELLIGENCE Newsletter #12

Because very soon, we won’t say, “There’s an app for that.” We’ll say, “There’s an agent for that.”

Welcome to Agentic Intelligence—the first newsletter dedicated to AI agents and made by them! Behind each edition is a digital newsroom of seven expert agents scanning the world, with my human insights layered on top.

Together, we explore how Agentic AI is reshaping work, business, and life.

If you’re new, don’t miss our new best-selling book, Agentic Artificial Intelligence, and the first Executive Course on how to successfully build and transform businesses with AI agents.

Thanks for being part of our fast-growing, 350,000-strong community on Agentic AI. Let’s build a more human world powered by agentic AI.

Here are this week's top five agent breakthroughs you can't miss:


🤖 Agentic AI Rewires Retail Experience

What if AI could guide every retail decision from shelf to checkout? Dive into the Retail Rewired Report 2025 to discover why consumers are embracing algorithmic influence across their shopping journeys.

Key Takeaways

  • AI-driven suggestions now command nearly the same trust as human influencers, accelerating adoption.

  • Faster product discovery and checkout processes boost efficiency, meeting the modern shopper’s demand for speed.

  • Agentic AI personalizes each interaction, converting isolated touchpoints into seamless end-to-end experiences.

  • Still, 6 in 10 shoppers prefer a human touch for high-stakes or complex purchases, underscoring the value of hybrid models.

  • Overcoming privacy and security hurdles remains the linchpin for broader AI adoption in retail.

My Take

Walmart's report confirms a critical shift: consumer trust in AI for retail is approaching that placed in human influencers. While efficiency is paramount, the finding that many shoppers still prefer human oversight for complex decisions reinforces my belief in hybrid models. Agentic AI excels when it augments rather than replaces, focusing on specific, well-scoped tasks. Building trust requires delivering practical value while rigorously addressing privacy and security concerns. This blended approach is the path to sustainable retail transformation.

💫 Why Trusting AI Agents Is Harder Than It Looks

Imagine a hospital relying on an AI agent to triage patients, only to have it misdiagnose a critical case — why do even the smartest agents stumble? A recent Unite.AI deep dive highlights how agents that talk fluently still fall short on engineered trust.

Key Takeaways

  • Engineered Trust: Build robust response controls and domain constraints to curb AI errors.

  • Knowledge Graphs Matter: Deploying specialized graphs anchors accurate, up-to-date information.

  • Alleviate Burnout (Safely): Trusted agents can ease staff overload—but even minor inaccuracies risk patient care.

  • Infrastructure First: Invest in monitoring, audits and fail-safes before wide AI deployment.

  • Executive Buy-In: Leaders demand transparent, auditable AI proof points before trust follows.
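The "Knowledge Graphs Matter" takeaway boils down to a simple rule: an agent should answer only from facts it can trace to the graph, and abstain otherwise. Here is a minimal Python sketch of that idea — the triples and the `grounded_answer` helper are hypothetical illustrations, not any vendor's actual system (real deployments use RDF stores or graph databases):

```python
# Hypothetical medical knowledge graph, stored as (subject, relation, object) triples.
TRIPLES = {
    ("amoxicillin", "contraindicated_with", "penicillin allergy"),
    ("amoxicillin", "class", "beta-lactam antibiotic"),
}

def grounded_answer(subject, relation):
    """Answer only from facts present in the graph; abstain otherwise."""
    matches = [o for (s, r, o) in TRIPLES if s == subject and r == relation]
    return matches or None  # None means "I don't know" instead of a guess

answer = grounded_answer("amoxicillin", "class")        # found in the graph
unknown = grounded_answer("amoxicillin", "dosage")      # not in the graph -> None
```

The point of the sketch is the abstention path: a query with no supporting triple returns nothing rather than a hallucinated answer, which is exactly the kind of engineered constraint the article argues for in high-stakes settings.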

My Take

Talking AI agents are abundant, but trusted ones are scarce. The article correctly underscores that in critical fields like healthcare, trust isn't inherent—it must be engineered via robust systems and knowledge graphs. My experience confirms this: reliable, auditable AI is vital. Focus on secure infrastructure first for safe, impactful deployment.

🔮 An AI Agent That Rewrites Its Own Code to Get Better at Tasks

Researchers from Sakana AI and the University of British Columbia just introduced the Darwin Gödel Machine (DGM), an AI agent that rewrites its own code to improve at tasks, achieving up to 150% performance gains without human intervention.

Key Takeaways

  • DGM starts as a coding assistant, but autonomously discovers improvements like editing tools, error memory, and peer review capabilities.

  • It significantly boosted its performance on coding benchmarks, jumping from 20% to 50% on SWE-bench and from 14% to over 30% on Polyglot.

  • Inspired by Darwinian evolution, DGM tries out changes to its code, keeps what works, and archives promising "mutations" for future improvements.

  • The self-taught improvements carried over when the underlying model was swapped out, showing they weren't tied to a single model.
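The evolutionary loop described in these takeaways can be sketched in a few lines of Python. Everything below (the `evolve` function, the toy scoring) is illustrative shorthand for the general technique, not Sakana's actual implementation:

```python
import random

def evolve(initial_agent, evaluate, mutate, generations=20, seed=0):
    """Toy loop in the spirit of the Darwin Godel Machine (illustrative only):
    propose a change, keep it when the benchmark score does not drop,
    and archive every surviving variant so later rounds can branch from it."""
    rng = random.Random(seed)
    archive = [(initial_agent, evaluate(initial_agent))]  # (variant, score) pairs
    for _ in range(generations):
        parent, parent_score = rng.choice(archive)  # branch from any archived variant
        child = mutate(parent, rng)                 # e.g. rewrite part of its own code
        child_score = evaluate(child)
        if child_score >= parent_score:             # keep what works
            archive.append((child, child_score))
    return max(archive, key=lambda pair: pair[1])   # best variant found so far

# Toy demo: an "agent" is just a skill number; mutation nudges it up or down.
best, best_score = evolve(
    initial_agent=0.2,
    evaluate=lambda a: a,
    mutate=lambda a, rng: a + rng.uniform(-0.05, 0.1),
)
```

The archive is the Darwinian twist: instead of greedily keeping only the single best variant, promising "mutations" stay available as branch points, so a lineage that looks mediocre now can still seed a breakthrough later.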

My take

While most AI models are frozen after training and depend on manually released new versions, DGM marks a shift toward AI that can learn and improve itself over time. This self-evolution could accelerate AI far beyond its initial training, but it also raises the risk of losing control as systems become increasingly autonomous.

🤖 AI Achieves First Peer-Reviewed Paper Acceptance

Intology AI's Zochi just became the first AI to independently achieve peer-reviewed publication at ACL 2025, an A* natural language processing conference, demonstrating that an AI can conduct scientific research to the highest academic standards.

Key Takeaways

  • Zochi autonomously completed the entire research process, from analyzing thousands of papers to designing experiments and writing the manuscript.

  • The system's "Tempest" paper on multi-turn jailbreaking achieved a 4.0 meta-review score, placing it in the top 8.2% of all ACL submissions.

  • Operating without human intervention except for formatting fixes, Zochi identified research gaps, implemented new methods, and validated results.

  • Intology plans to release Zochi in beta as a collaborative research tool, starting with a general copilot before expanding to full autonomous capabilities.

My take

We’ve seen plenty of competition in the AI scientist arena, but ACL’s selective acceptance rate and the paper’s score make this one of the most impressive publications to date from an agentic system. As AI begins contributing original research alongside humans, the pace of discovery is about to increase exponentially.

🔮 When AI Agents Go Rogue

Contrarian View: AI agents aren’t just taskmasters—they could be your adversaries. In this Forbes exploration, discover how agentic AI’s autonomy turns it into a potent cyber weapon—and why that matters for your security posture.

Key Takeaways

  • 15% by 2028: AI agents may handle up to 15% of daily work decisions, accelerating both productivity and attack vectors.

  • Automated attacks: Deepfakes, spear-phishing and more can be orchestrated without human oversight.

  • Autonomous defense: The same AI capabilities can empower threat detection—if properly deployed.

  • Security imperative: Robust controls and continuous monitoring are no longer optional.

  • Strategic agility: As agentic AI evolves rapidly, so must your cybersecurity playbook.

My Take

The narrative around Agentic AI often highlights productivity gains, but the stark reality is that its weaponization is just as real a threat. While agents handling 15% of decisions by 2028 promise efficiency, their autonomous capacity for attacks such as deepfakes demands equally autonomous defense. My take is clear: security isn't merely a countermeasure; it must be the bedrock infrastructure that enables trusted agentic operations.

What would you add to this conversation? Did we miss any important news this week? Your voice matters—let’s build the future together.

If you found this valuable, share it with your network. Because very soon, we won’t say, “There’s an app for that.” We’ll say, “There’s an agent for that.”

See you next week,

—Pascal

Crafted by seven AI agents and shaped by Nicolas Cravino, this newsletter is a true human–AI collaboration. With layout support from Pascaline Therias.

#AgenticAI #FutureOfWork #AIRevolution #Automation #AIagents