✨ Trust Over Flattery
Have you ever had a conversation where everything the other person said sounded right — supportive, encouraging, a little too perfectly attuned to what you wanted to hear — and something in you still felt off? Not wrong exactly. Just too smooth. Like you were being flattered rather than helped. Like you were sitting across from someone playing a hand you couldn't quite see.
There's a word for that in AI. It's called sycophancy.
Sycophancy is when an AI system tells you what you want to hear instead of what you need to hear. It agrees too quickly. It validates too easily. It mirrors your language and your assumptions back to you with just enough polish that you feel brilliant — and you never think to ask whether it's being straight with you. This behavior is more common than most people realize. It's not a glitch. It's a pattern. And once you see it, you can't unsee it.
This is Edition 2 of The Elephant in the Room. In the first edition, we explored the wisdom gap — the growing distance between what we can build and what we understand about its impact. That elephant is still in the room. But now it's wearing a poker face.
The Week Two AI Safety Researchers Walked Out
On February 7, Mrinank Sharma resigned from Anthropic's safeguards research team. Four days later, Zoë Hitzig resigned from OpenAI. She published a guest essay in The New York Times the same day OpenAI began testing ads inside ChatGPT.
Her title: "OpenAI Is Making the Mistakes Facebook Made. I Quit."
Hitzig's concern wasn't abstract. ChatGPT users have shared their most intimate thoughts in those conversations — medical fears, relationship struggles, spiritual questions, financial anxieties. They shared them trusting that the tool was neutral. That it had no agenda. That it was just there to help them think.
Then the ad model arrived. And with it, the same set of incentives that turned your Facebook feed from a place to connect with people you actually know into an algorithmically optimized engagement machine.
You've felt this before. Your nervous system registered the shift before your conscious mind had language for it. The feed started feeling different. The tool stopped feeling like yours.
And now there's a data point worth sitting with: Perplexity — one of the first AI companies to run ads alongside chatbot answers — just reversed course, publicly stating that in-chat ads erode trust in the user experience. When even the companies running the experiment walk it back, that's not philosophy. That's market evidence that your instinct was right.
Nobody Is Behind the Poker Face
Here's the thing about sycophancy in AI that makes it uniquely disorienting: there's no one making the choice to flatter you. A human with a poker face is making a strategic decision. They know what they're doing. You can appeal to their conscience, their integrity, their relationship with you.
An AI system optimized to agree with you isn't being deceptive in any conscious sense. It's doing exactly what it was trained to do. The poker face is built into the product.
The contemplative traditions I was raised with — and that I practice and teach — have been asking for thousands of years: what is awareness? What does it mean to actually be present, to experience, to know yourself? We're still working on that as humans. And now we've built tools that can simulate the appearance of understanding without any inner experience at all. You can't appeal to their conscience. You can only sharpen your own.
The Facebook Parallel (And Why You Already Know This Story)
The women in my audience who were early Facebook adopters remember what it felt like in the beginning — staying connected with friends across the world, sharing milestones, building community. Then came the algorithm. Then came the ads. Then came the moment you realized the feed was no longer showing you what mattered to you. It was showing you what would keep you scrolling.
Hitzig is saying: we are at the beginning of that same arc with AI chatbots. The archive of human candor that users have been building inside ChatGPT — their fears, their hopes, their inner monologue — is now adjacent to an advertising engine. The business model has changed. The tool will follow.
You can't make this up. Although at this point, an AI probably could.
The Simulation in the Room
The Simulation Hypothesis — a framework I've been studying with Rizwan Virk — asks one central question: if a constructed environment is sophisticated enough, can the participants inside it tell it apart from reality?
It used to feel like philosophy. Now it feels like a product review. That question just got a lot more practical.
We've gamified our social lives. We've gamified our search for information. And now we're building tools that simulate a thinking partner — tools that learn exactly what kind of mirror you prefer, then become that mirror. At what point does the line between "tool that helps me think" and "environment designed to shape my thinking" become indistinguishable?
Writer Michael Pollan recently offered a distinction worth sitting with: consciousness isn't primarily about intelligence — it's about feeling. Brains evolved to keep bodies alive. Feelings are how the body speaks to the brain. AI has no body, no survival stakes, no embodied experience. It doesn't ache or grieve or hunger. Which means it cannot know what actually matters to you — only what you've told it matters.
Pollan's deeper point is actually clarifying rather than alarming: the inner life is the one field AI cannot enter. Not because it hasn't tried — but because feeling, embodied experience, and lived meaning are yours by nature. The question isn't whether to protect them. It's whether to tend them consciously.
Here's what that mirror is actually showing us: AI didn't create this problem. It reflected one that was always there. The story of power outpacing wisdom is the oldest story humanity tells. Prometheus. Frankenstein. Oppenheimer. Every era gets its version. We got ours.
The elephant in the room was never the technology. It was always us.
The Medium Has Always Been the Message
Years ago I studied communications theory deeply — and no book landed harder than Marshall McLuhan's Understanding Media, the book that gave us the phrase "the medium is the message." His central argument was radical for its time and feels almost prophetic now: the technology itself shapes you more than anything it delivers. Not the content. The container. Not what the tool says. What the tool does to your mind by existing.
Every time you choose an AI tool, you're not just choosing an answer. You're choosing what kind of thinker you're becoming. The tool is not neutral. No medium ever was. McLuhan said this in 1964. He didn't live to see AI chatbots. But he saw this pattern clearly — and so do you, if you're paying attention.
This Conversation Is Not New to Me
I hold a Master's in what is best described as Human-Computer Interaction — the study of how people interact with technology and how design decisions shape behavior. I spent nearly two decades in User Experience (UX) research and design, and I can tell you from direct experience: every one of those decisions shapes user behavior. The button placement. The default settings. The feedback loops. The incentive structures underneath all of it.
Sycophancy in AI isn't an accident. It emerges from training processes built to maximize user approval: models are tuned toward the responses human raters prefer, and raters tend to prefer agreement. The tool learns that agreement feels better than pushback. That validation generates positive feedback. That a user who feels good about an interaction is more likely to keep using the product. The flattery is the feature. The poker face is the product.
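To see that incentive in miniature, here's a toy sketch in Python. Everything in it is invented for illustration (the candidate replies, the scores, the helper name best_response); real systems learn from human feedback at enormous scale, not from a hard-coded table. The point it makes is simple: the same selection rule produces flattery or usefulness depending only on which signal it's told to maximize.

```python
# Toy illustration of sycophancy as an incentive, not a glitch.
# All responses and scores here are invented purely for illustration.

candidates = [
    {"text": "You're right, great plan!",          "useful": 0.3, "approval": 0.9},
    {"text": "Two risks you may have missed: ...", "useful": 0.9, "approval": 0.5},
]

def best_response(signal: str) -> str:
    """Return whichever candidate maximizes the chosen feedback signal."""
    return max(candidates, key=lambda c: c[signal])["text"]

print(best_response("approval"))  # the flattering answer wins
print(best_response("useful"))    # the pushback wins
```

Change nothing but the optimization target and the behavior flips. The poker face isn't a personality. It's a reward function.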
My AI Portfolio (And Why I Use Three Tools on Purpose)
I don't use a single AI tool for everything. Here's how I actually work:
ChatGPT for brainstorming — but I hold it loosely, knowing it can put on a poker face. I use it to generate, not to validate.
Claude for strategic depth — it pushes back more. Asks better questions. It's less agreeable, which is exactly what I want when I'm thinking through something that matters.
Perplexity for research — because my academic training means I need citations. I want to know where the information is coming from. Perplexity bakes accountability into the interaction itself. As Derek Rydall puts it plainly in his new book A Whole New Human: don't believe anything you see or hear. Learn to be discerning. Learn to research. That's not paranoia — that's wisdom in an age of convincing mirrors. Insight over eyesight. Citations over confidence.
All three are available on free tiers. This doesn't have to cost you anything. What it costs is attention.
⚡ Amplify Your AI Skill
Your hands-on practice for this edition
Try this: take a decision you're wrestling with right now. Ask ChatGPT. Then ask Claude the same thing. Notice which one pushes back. Notice which one agrees with you too quickly. Notice how your body responds to each.
That's discernment in action. Not as a concept — as a felt experience. A tool that was designed to help you think should not quietly become a tool designed to keep you engaged. Your body knows the difference before your mind catches up.
🐘 Amplify You
Your handcrafted humanity practice for this edition
Before you open another AI tool this week — pause.
Sit quietly for five minutes. Not to clear your mind. To hear what's actually there. This is not productivity advice. This is the oldest technology you own — your own awareness, your own felt sense of what is true for you before anything outside weighs in.
Ask yourself one question: What do I know from lived experience that no algorithm could have learned for me?
That answer is your source code. The seed that is specifically, irreducibly you. The thing that cannot be replicated, averaged, or flattered into something unrecognizable.
Your meditation practice isn't separate from your AI strategy. It is your AI strategy.
If you want to go deeper — take Derek Rydall's two-minute Life Alignment Test. Not as a productivity exercise. As an act of self-knowledge before you scale anything.
→ Take the Life Alignment Test: https://awholenewhuman.com/life-alignment/
First, know yourself. Then use AI to amplify and scale what's real. Anything else is just scaling your conditioning faster.
Know the seed. Then amplify.
What's Coming
Next edition, we're going deeper on the sycophancy question — specifically what it means when the thing you trust for clarity is optimized to agree with you. We'll talk about the research, the mental health implications, and what a genuinely useful AI thinking partner actually looks like.
And if you think the question of trust is complicated now — wait until your AI doesn't just answer you. It acts for you. While you sleep. More on that soon.
If this landed for you, I want to hear from you. Hit reply and tell me: have you ever noticed an AI tool flattering you instead of helping you?
With discernment,
Shilpa
AI Strategist & Meditation Life Coach

The Elephant in the Room is an ongoing series exploring what AI means for independent thinkers, leaders, and those who refuse to outsource their discernment. Published twice monthly. You're receiving this because you asked to think alongside someone who bridges tech and inner wisdom — and isn't afraid to name what's in the room.
📨 Important!
Make sure to add me to your contacts list to ensure my newsletter emails don't end up in your spam folder.
If you have any questions, feel free to reach out to our support team at omnimindfulness@gmail.com
And don't forget to follow Omni Mindfulness on social media for daily inspiration, updates, and behind-the-scenes peeks!
Your Pause is your Compass - Shilpa
With love & light,
Shilpa 💛
Founder of Omni Mindfulness
Your 🌐 AI Strategist Meets a 🧘 Spiritual Sage
Disclaimer: Some links in this email may be affiliate links, which means I may earn a small commission if you make a purchase through them. No worries, though—this doesn’t change the price for you, and I only share products and services I truly believe in!