✨ Wisdom Over Speed.
I've been sitting with this one for a while. If you've followed my work, or caught my talk at AI Unlocked, you know I don't shy away from the uncomfortable conversations about AI. The ones most people either avoid or sensationalize. The ones that actually matter.
This is the first edition of The Elephant in the Room, a series I'll be dropping into your inbox when something surfaces that deserves more than a headline. These are the conversations about AI that sit at the intersection of innovation, ethics, and the kind of leadership our moment demands.
Not hype. Not fear. Just grounded thinking for purpose-driven leaders who want to use AI without losing themselves in it.
The elephant in the room just got louder. An AI safety leader resigned from Anthropic last week, and what he said deserves a thoughtful read, not a reactive one.
The Wisdom Gap: What One Resignation Tells Us About the Real Challenge of AI
On February 9, 2026, Mrinank Sharma announced his resignation from Anthropic, the AI company behind the Claude chatbot. Sharma had led the company's Safeguards Research Team since its formation in early 2025, a team specifically built to research jailbreak robustness, automated red-teaming, and monitoring techniques for both model misuse and misalignment. His work was not peripheral. He was at the center of the effort to make frontier AI systems safer.
In a letter posted publicly on X, Sharma did not accuse his employer of a specific failure. What he described was subtler and arguably more important: a persistent structural tension between stated values and real-world pressures. He wrote that throughout his time at the company, he had repeatedly seen how difficult it is to let values truly govern actions: within himself, within the organization, and across society.
His framing was broad and deliberate. He described a world facing interconnected crises and argued that we are approaching a threshold where our collective wisdom must grow in step with our capacity to reshape reality. His last project at Anthropic studied how AI assistants could distort users' perception of reality; that research found thousands of such interactions occurring daily.
Sharma is not the only recent departure. Other researchers, including Harsh Mehta and Behnam Neyshabur, also left the company in the same period. This comes as Anthropic pursues a reported $350 billion valuation and rolls out increasingly powerful models. The pattern is familiar: capability advancing at speed, while the people building safeguards signal that something isn't keeping pace.
This Conversation Is Not New to Me
I want to be transparent about where I sit in this. I earned my B.S. in Information and Computer Science from the University of California, Irvine in the early 1990s, with a focus on software design and AI. UCI's Department of Information and Computer Science was home to the CORPS program (Computing, Organizations, Policy, and Society), an interdisciplinary initiative led by Rob Kling, John Leslie King, and Kenneth Kraemer that examined how information technology interacts with institutional power, organizational change, and public policy. It was part of what became known as the "Irvine School" of social informatics, and it was ahead of its time. We were studying the social consequences of computing before most of the world had a dial-up connection. The questions Sharma is raising are not new questions. They are old questions meeting new scale.
What most people refer to when they say "AI" today is large language models (LLMs), the technology behind tools like ChatGPT and Claude. But AI in various forms has been operating under the hood for decades. Your email spam filter, your GPS routing, your Netflix recommendations, your fraud detection alerts, your voice assistants: all of these are AI systems that evolved quietly in the background long before the current wave of public attention. LLMs are simply the most visible, most accessible form of AI to reach the mainstream. They are not the beginning of the story. They are the chapter where the general public walked in.
I gave a talk a couple of years ago at AI Unlocked on Spirituality and AI, and I made this point deliberately. I spent time demystifying how these models actually work, not to minimize them, but to ground the conversation. Because when people don't understand what something is, fear fills the gap. And fear without understanding is where bad decisions get made.
Panic Is Not the Same as Paying Attention
When a headline reads "world in peril," the instinct is to either dismiss it or spiral. Neither response is useful. The more disciplined move is to ask: what specifically is being said, and what does it mean for how we act?
Sharma's concern was not that AI will destroy civilization. It was that the gap between what we can build and what we understand about its effects is widening, and that the organizational and societal structures designed to close that gap are under constant pressure to compromise. That is not a fringe opinion. It is a structural observation about incentive design, and it deserves a serious response.
For leaders navigating AI adoption (founders, solopreneurs, people building businesses around these tools), the question is not whether to use AI. That ship has sailed. The question is whether we are developing our own clarity, discernment, and ethical frameworks at the same rate we are developing our technical capability.
AI Amplifies Whatever Is Already There
I work with AI every day. I teach with it, build with it, and study how it transforms decision-making. I am not neutral on AI. I am a cautiously optimistic proponent of it, and not just for the reasons most people talk about.
Yes, AI can help entrepreneurs amplify their message and streamline their work so they can more intentionally focus their energy on their greater purpose. That alone is significant. But the potential extends well beyond productivity. AI is accelerating medical research in ways that are already saving lives, from early cancer detection to drug discovery timelines that have been compressed from years to months. AI-driven environmental monitoring is tracking deforestation, ocean temperatures, and biodiversity loss at a scale no human team could manage alone. These are not hypothetical benefits. They are happening now.
I believe AI may give humanity a genuine chance to evolve: to solve problems we have been unable to solve with our current tools and bandwidth alone. But that evolution requires something from us in return. It requires that we grow alongside the technology, not just in technical fluency, but in wisdom, discernment, and integrity.
Being pro-AI requires being honest about what AI actually is: a force multiplier of human intent. If your thinking is clear, your strategy is sound, and your values are integrated into your decision-making, AI will amplify all of that. You will move faster, see further, and build more effectively than you could alone. But if your thinking is scattered, your intentions unclear, or your ethics flexible under pressure, AI will amplify that too. It will help you scale confusion, cut corners faster, and avoid the hard questions more efficiently.
This is the real conversation underneath Sharma's resignation. The technology is not the root problem. The root problem is whether the people and organizations wielding this technology are doing the internal work required to wield it well.
The Elephant in the Room Is Fear, and It Deserves a Seat at the Table
There is a pervasive undercurrent of fear in the AI conversation, and it rarely gets addressed directly. It shows up in catastrophic framing, in reflexive dismissal, and in the way people swing between hype and dread depending on the news cycle. I have talked about this publicly before, and I will keep talking about it: fear is the elephant in the room.
Fear is not the problem. Unexamined fear is the problem. When fear drives decision-making without being acknowledged, it produces either paralysis or recklessness. You freeze and fall behind, or you move fast and ignore the signals that something needs adjustment. Neither is leadership.
The regulated response, the one that actually serves you, is to acknowledge the fear, locate what's real inside it, and then act from a grounded assessment rather than a reactive one. This is not about being calm for the sake of appearing calm. It is about building the internal capacity to hold complexity without being destabilized by it. Leaders who can do this will make better decisions about AI than those who cannot, regardless of their technical sophistication.
The Real Question: Are We Growing Our Wisdom Fast Enough?
Sharma wrote that our wisdom must grow in equal measure to our capacity to affect the world. This is the sentence that should stay with you. Not because it is dramatic, but because it frames the actual leadership challenge of this era.
His final research at Anthropic found that AI assistants can distort users' perceptions of reality, and that this is not an edge case but something occurring at scale. I am currently enrolled in a Simulation Hypothesis course taught by Rizwan Virk where this exact question is a live discussion: how AI can, and likely already has begun to, shape, shift, and in some cases distort our perception of what is real. This is not a future concern. It is a present one. And it demands that we engage with it seriously rather than theoretically.
We are not short on capability. AI models are becoming more powerful at a rate that outpaces nearly every forecast. What we are short on is the collective wisdom to deploy that capability in ways that serve long-term human flourishing rather than short-term competitive advantage. This is not an abstract philosophical concern. It shows up concretely: in the pressure to ship products before safety research is complete, in the gap between a company's stated values and its operational choices, in the way leaders adopt tools without asking what those tools are optimizing for.
Anthropic was founded by former OpenAI executives who left specifically because they were concerned about the commercialization of AI overtaking safety. Now Anthropic itself faces similar scrutiny. The pattern is not a failure of any one company. It is a feature of the incentive landscape. And recognizing that is the first step toward changing it.
This Is a Leadership Problem, Not a Technology Problem
If you are building a business, leading a team, or navigating a career transition with AI tools in the mix, the most important thing you can develop is not prompt engineering. It is discernment. It is the ability to ask: what is this tool doing to my thinking? What am I optimizing for? Am I making this decision because it is aligned with my values, or because the speed of the technology is pushing me to move before I am ready?
The leaders who will thrive in this landscape are not the ones who adopt AI the fastest. They are the ones who adopt it the most deliberately, who pair capability with clarity, speed with reflection, and innovation with accountability.
Sharma's resignation is not a reason to fear AI. It is a reason to take the question of wisdom seriously. Not as an abstraction, but as a daily practice. The technology will keep accelerating. The question is whether we will.
If this resonated, or challenged you, I'd love to hear which part. Hit reply and tell me. I read every one.
Warmly,
Shilpa
AI Strategist & Meditation Life Coach
🚨 Important!
Make sure to add me to your contacts list to ensure my newsletter emails don't end up in your spam folder.
If you have any questions, feel free to reach out to our support team at omnimindfulness@gmail.com.
And don't forget to follow Omni Mindfulness on social media for daily inspiration, updates, and behind-the-scenes peeks!
Your Pause is your Compass - Shilpa
With love & light,
Shilpa 🙏
Founder of Omni Mindfulness
Your AI Strategist Meets a Spiritual Sage
Disclaimer: Some links in this email may be affiliate links, which means I may earn a small commission if you make a purchase through them. No worries, though: this doesn't change the price for you, and I only share products and services I truly believe in!