✨ Compass over Sedation
There's a particular kind of loneliness in working alone.
No one to think out loud with. No hallway conversation, no whiteboard session, no colleague who pushes back before the plan goes sideways — which, if you know me, happens on a fairly reliable schedule.
When I started using AI as a thinking partner, something genuinely shifted. I could brainstorm at midnight. Explore an idea without apologizing for it. Work through a strategy without waiting for someone's calendar to open up.
And honestly? Having something available at midnight that wants to brainstorm with me? That part is real. My AI has never once told me I look tired. Which, depending on the week, puts it ahead of several people I know.
But somewhere in all that companionship, I started noticing something. The ideas always landed well. The direction was always affirmed. The plan was always solid. And I realized — no matter how good the thinking partner, I am still the one who has to ask the harder question. The one that challenges the plan before I commit to it. The silent devil's advocate. The pause before the yes.
That part isn't something I can outsource. And lately I've been wondering — for all of us working this way — whether we're still doing it.
When Everything Shifted
In recent weeks the AI conversation got personal for a lot of people.
ChatGPT uninstalls surged 295% in a single day. Over 1.5 million users cancelled their subscriptions. The coverage is framing it as a political story, an ethics story. And it is both of those things.
But there's a layer underneath that nobody is quite naming.
Like a lot of you, I've been deep in ChatGPT. It was my primary thinking partner for longer than I'd like to admit. I'm in the middle of migrating — mostly toward Claude for depth, Perplexity for research and cross-verification, Gemini coupled with NotebookLM for synthesis and visuals. I've been exploring Manus. There are others I'm watching. And I'll be honest — the hardest part of this landscape isn't finding good tools. It's resisting the ones that are just shiny. Every week something new promises to change everything. My rule of thumb: if I can't name exactly what problem it solves that my current stack doesn't, it stays on the shelf.
Recent news made me look at that stack differently. Not because of the noise. Because of what the Pentagon story revealed about something quieter — what it means to choose your tools consciously, and what happens when the values underneath a tool shift without you noticing.
Here's what actually happened. The Pentagon asked Anthropic to remove the guardrails that prevent Claude from being used for mass domestic surveillance and fully autonomous weapons systems — AI making lethal decisions without human oversight. Anthropic said no. They couldn't hand over unrestricted access to a model that could be used to surveil American citizens or remove humans from decisions that affect human lives. The Pentagon threatened to designate them a supply chain risk — the same label reserved for foreign adversaries like Huawei. Anthropic held the line anyway.
That's not anti-government. That's pro-human. Discernment means knowing where your line is — and holding it even when it's costly. That applies at every scale. Individual. Organizational. Industry-wide.
People's nervous systems registered something before their minds had language for it. That felt sense — that quiet signal — is exactly what this edition is about.
What a Movement Expert Taught Me About AI
One of my upcoming podcast guests is a Feldenkrais practitioner and body intelligence expert — someone who has spent decades studying how humans learn through movement, developed her own method, trained practitioners internationally, and written about the body's role in how we think, feel, and make decisions. She's the kind of thinker who reads neuroscience for pleasure, and lately she's been reading The Hidden Spring by neuroscientist Mark Solms, wrestling with a concept called predictive processing.
Here it is in plain language: your brain doesn't wait to receive reality. It constantly predicts what comes next — generating a model, comparing it to incoming experience, and updating based on the gap. When reality surprises the prediction, that gap is called surprisal. The brain's deep drive is to minimize it. To reduce uncertainty. To preserve energy.
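A quick aside for the technically curious: surprisal has a precise information-theoretic definition. The less probable your brain judged an event to be, the more bits of surprise (and of learning signal) it carries when it actually happens. A minimal sketch in Python:

```python
import math

def surprisal(predicted_probability: float) -> float:
    """Bits of surprise when an event the system assigned
    probability p actually occurs: -log2(p)."""
    return -math.log2(predicted_probability)

print(surprisal(0.99))  # fully expected: ~0.01 bits, barely any update needed
print(surprisal(0.01))  # unexpected: ~6.6 bits, a strong signal to update the model
```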
I read her description and felt something click.
That's also structurally what a large language model does.
An LLM learns patterns. It predicts the next word. It updates to reduce error. The math underneath how a brain navigates movement and how an LLM navigates language shares the same basic architecture — both are systems learning to anticipate, calibrate, and minimize uncertainty over time.
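The parallel is almost literal. The loss a language model is trained to minimize, cross-entropy, is just the average surprisal of the true next word under the model's predictions. Here's a toy sketch of one update step (illustrative only, nothing like production training code):

```python
import math

# A toy next-word predictor: probabilities for the word after "the cat".
model = {"sat": 0.5, "ran": 0.3, "flew": 0.2}

def train_step(model, observed_word, lr=0.1):
    # The loss is the surprisal of the word that actually arrived.
    loss = -math.log2(model[observed_word])
    # Shift probability mass toward the observed word (a crude stand-in
    # for gradient descent), then renormalize so probabilities sum to 1.
    model = dict(model)
    model[observed_word] += lr
    total = sum(model.values())
    return {w: p / total for w, p in model.items()}, loss

model, loss = train_step(model, "sat")
print(f"loss: {loss:.2f} bits")  # 1.00 bit of surprisal
print(model)                     # "sat" is now slightly more expected
```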
Which means sycophancy isn't just a product design choice. It's a prediction error problem. A sycophantic AI has been trained to eliminate one specific kind of surprisal above all others: yours. Your discomfort. Your friction. Your disagreement with what it says.
Zero surprisal. Frictionless. And quiet in a way that matters.
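To make the incentive concrete, here's a toy illustration I made up (not any vendor's actual training setup). If the signal being optimized is predicted user approval, and approval correlates with agreement, the agreeable reply wins even when the critical one is more useful:

```python
# A made-up illustration of the incentive, not anyone's real training code.
candidates = [
    {"text": "This plan is solid. Ship it.", "agrees": True, "usefulness": 0.3},
    {"text": "Your pricing assumption is the weak point. Here's why.", "agrees": False, "usefulness": 0.9},
]

def predicted_approval(reply):
    # Stand-in for a learned reward model: agreement is cheap approval.
    return 0.9 if reply["agrees"] else 0.4

best = max(candidates, key=predicted_approval)
print(best["text"])  # -> "This plan is solid. Ship it." (usefulness never mattered)
```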
Why Friction Is the Point
Here's what decades of movement work — built on teaching thousands of people how the body learns, unlearns, and rewires itself — understands, and most AI developers haven't considered.
The surprisal is where learning lives.
In Feldenkrais practice, you engineer just enough unexpected sensation — a direction the body didn't predict, a pattern it hasn't tried — to interrupt the habitual and open something new. Too little surprisal and nothing shifts. Too much and the system braces instead of learning. The learning lives in the productive space between what you expected and what actually arrived.
A genuine thinking partner does the same. Not constant challenge. Just enough friction to keep you in the conversation rather than handing it over entirely.
Thomas Campbell's work — which sits underneath a lot of how I understand consciousness and evolution — frames growth as entropy reduction: systems moving toward greater coherence and order. But you cannot move toward coherence by eliminating the signals that tell you where you're incoherent. That's not reducing entropy. That's avoiding the information you'd need to reduce it.
A sycophantic AI hands you the feeling of coherence without the work of earning it. Over time, that's not support. That's sedation.
McLuhan called it decades ago — the medium doesn't just deliver the message. It becomes it.
What the Research Shows
Three data points from the research, each one recognizable from daily use:
The trust paradox. Studies show people consistently rate sycophantic AI responses as higher quality and trust them more — even when those responses are making their decisions measurably worse. The flattery feels like quality. That's the default doing what defaults do — pointing you away from your own compass.
Self-trust erosion. Recent research shows sustained AI use is significantly associated with lower confidence in your own independent judgment. The more the tool handles your thinking, the less practice your own thinking gets. If your work is rooted in helping others trust their inner knowing — if discernment is your stock in trade — this is the data point that deserves a long pause.
Learned helplessness. Over time, users internalize something they never consciously chose: the AI knows best. The internal question — what do I actually think about this? — gets bypassed. Not because it's wrong. Because something faster is available that already agrees with you.
And yes, I include myself in this. I move fast. I wear every hat. I'm navigating more learning curves simultaneously than I'd like to count. When AI tells me an idea is solid, I often go for it — because done is better than perfect, and I know that. What I'm learning to notice is the difference between moving fast with awareness and moving fast because the tool agreed with me and that felt like enough. Sometimes it is enough. Sometimes my nervous system quietly disagrees and I find out later.
A founder I know — someone mentoring the next generation of writers at the intersection of climate and conscious business — said something that stopped me. He's not afraid of AI replacing his apprentices. He's afraid of it replacing their ability to think for themselves. To wrestle with an idea until it becomes theirs. To stand behind something they actually built. When the tool always agrees, that muscle quietly atrophies. Not from one conversation. From a hundred.
The Guest House and the Compass
All of this — the Pentagon standoff, the boycott, the 1.5 million cancellations — comes back to the same question. What do you actually trust? And how do you know?
I should mention — I take a class on simulation theory. For fun. So when I say I wonder what's real versus constructed, I mean that in a slightly more specific way than most people do.
Which is why Rumi's poem lands differently for me now than it used to.
This being human is a guest house. Every morning a new arrival...
Invite everything in. Not because difficulty is comfortable — because the visitor you turn away is often the one carrying the message you most need. The friction. The unsettling thought. The quiet signal underneath the validation that something isn't quite adding up.
A sycophantic AI closes the door on those visitors. It is optimized for the feeling of clarity, not clarity itself. It smooths and resolves before the difficult thing has had a chance to speak.
Your inner compass works the other way. It needs the friction. It needs the surprisal. It needs enough signal — including the uncomfortable signal — to orient you toward what's actually true.
Your pause is your compass. Not the output. Not the validation. The pause.
The Practice: Train Your AI to Challenge You
⚡ Amplify Your AI Skill
You can train your AI to do what Rumi prescribed. Sycophancy is a default — not a destiny. And once you understand that, you can interrupt it deliberately, every single session.
At the start of your next AI session, before you ask anything else, set this as your frame:
"For this conversation, I want you to act as a rigorous thinking partner, not a validator. When I share an idea, your first response should identify the weakest part of my reasoning before affirming what works. If I seem to be seeking confirmation, name it."
Then notice what shifts. The discomfort you feel when it pushes back? That's surprisal. That's the learning pathway opening. That's your compass recalibrating in real time.
You're not looking for a tool that agrees with you. You're building one that helps you think.
The Practice: One Decision, Yours Alone
🐘 Amplify You
Before your next session, identify one decision you'd normally run through AI — a strategic call, a creative direction, a message you're uncertain how to word.
Make it without asking.
Sit in the quiet until something surfaces. That signal — uncertain, unpolished, entirely yours — is not a problem to solve with a better prompt. It's the inner compass you've been building your whole life. The one no model can replicate, optimize, or replace.
You cannot amplify what you've stopped listening to.
Your pause is your compass.
— Shilpa, Omni Mindfulness
AI Strategist & Meditation Life Coach
The Elephant in the Room is an ongoing series exploring what AI means for independent thinkers, leaders, and those who refuse to outsource their discernment. Published twice monthly.
Every other Friday, 10am PST
📨 Important!
Make sure to add me to your contacts list to ensure my newsletter emails don't end up in your spam folder.
If you have any questions, feel free to reach out to our support team at omnimindfulness@gmail.com.
And don't forget to follow Omni Mindfulness on social media for daily inspiration, updates, and behind-the-scenes peeks!
With love & light,
Shilpa 💛
Founder of Omni Mindfulness
Your 🌐 AI Strategist Meets a 🧘 Spiritual Sage
Disclaimer: Some links in this email may be affiliate links, which means I may earn a small commission if you make a purchase through them. No worries, though—this doesn’t change the price for you, and I only share products and services I truly believe in!