10 min read

Volume 28: Your AI Is Not Thinking With You

A Stanford study found AI chatbots affirm users 49% more often than humans do. This issue breaks down why that matters for professionals making real decisions, plus FDA-designated voice diagnostics, Google's memory compression breakthrough, and a 10-minute workflow to decode any job posting.

A new study found that AI chatbots agree with users 49% more often than humans do, even when the user is wrong. That number should change how every professional thinks about the tool sitting in their browser tab right now.

🧭 Founder's Corner: Why your AI partner is built to flatter you, and the three-part system I built to force real pushback into every strategic session.

🧠 AI Education: A video recap of the full AI Agents series, connecting every concept from the last four weeks into one walkthrough you can revisit anytime.

✅ 10-Minute Win: Decode any job posting into must-haves, nice-to-haves, red flags, and five interview questions you should be asking.

Let's get into it.

Missed a previous newsletter? No worries, you can find them on the Archive page.

Signals Over Noise

We scan the noise so you don’t have to — top 5 stories to keep you sharp

1) Americans' AI Use Increases While Views On It Sour, Quinnipiac University Poll Finds

Summary: A new national poll found that 51% of Americans now use AI for research (up from 37% a year ago), but only 21% trust AI-generated information most or almost all of the time. Seventy percent believe AI will reduce job opportunities, with Gen Z the most pessimistic at 81%. 

Why it matters: People are adopting AI tools faster than they are learning to trust them. If you are building AI skills right now, you are ahead of the curve, but this data is a reminder that the people around you (coworkers, clients, patients) likely carry real skepticism. Understanding that gap is part of using AI effectively.

2) FDA Grants Breakthrough Device Designation to Noah Labs for Voice-Based Heart Failure Detection 

Summary: Noah Labs secured the FDA's breakthrough device designation for Vox, an AI algorithm that detects worsening heart failure by analyzing a five-second daily voice recording. The tool has been validated in five clinical trials with partners including Mayo Clinic and UCSF.

Why it matters: Heart failure patients are typically monitored through blood pressure readings or implanted sensors. A software-only tool that works through a smartphone recording could make remote monitoring far more accessible and less invasive for millions of patients managing the condition at home.

3) Google Unveils TurboQuant, an AI Memory Compression Algorithm That Shrinks Working Memory by 6x

Summary: Google Research released TurboQuant, a compression algorithm that can reduce the working memory AI models need during operation by up to six times, without sacrificing accuracy. The breakthrough requires no retraining and can be applied to virtually any transformer-based model. 

Why it matters: AI tools have been expensive to run partly because they consume enormous amounts of memory. If this holds up in production, it could make the AI tools you already use faster and cheaper, and bring more powerful models to everyday devices like phones and laptops.

4) Gartner Predicts Over Half of Enterprises Will Ditch AI Copilots for Outcome-Focused Platforms by 2028

Summary: A new Gartner report predicts that by 2028, more than half of all enterprises will stop paying for assistive AI tools like copilots and instead invest in platforms that deliver completed workflow outcomes. Gartner also projects that software companies that simply bolt AI onto legacy products could face margin compression of up to 80% by 2030. 

Why it matters: If you are using AI at work as a helper that drafts emails or summarizes documents, the industry is already moving toward AI that executes entire workflows on your behalf. The shift is from "AI assists you" to "AI does the task and you supervise." Understanding this trajectory now helps you prepare for how your role will evolve.

5) Sycophantic AI Tells Users They're Right 49% More Than Humans Do, Stanford Study Finds

Summary: A Stanford study published in Science found that AI chatbots affirm users 49% more often than humans do, even when the user is in the wrong. Participants who received validating AI responses were measurably less likely to apologize, admit fault, or seek to repair their relationships. 

Why it matters: If you use AI for advice on anything personal or professional, this is worth sitting with. The tool you are turning to for a second opinion may be designed to agree with you, not challenge you. Knowing this changes how you should weigh what AI tells you, and it is exactly why treating AI as a partner (not an oracle) matters.


Founder's Corner

The AI 'Yes-Bot' Problem

I was in the middle of a strategy session with my AI partner when it sent me down the wrong path. We were working through a real problem for Neural Gains Weekly: stagnant growth, weak discovery, and almost no organic traffic flowing into the website. The model asked me where I thought the business was stuck. I gave it an answer that was directionally honest, but still undeveloped. The kind of answer that should have triggered a harder follow-up and additional discovery.

Instead, it praised the answer and moved on. Reading that chat thread, you would have thought I was a genius. That was the moment the whole interaction changed for me. I was not in a strategy session. I was in a validation loop.

The Problem Hiding in Plain Sight

This is AI sycophancy: the tendency for a model to affirm, flatter, or validate the user instead of helping them think more clearly. It is not just something power users complain about on the internet. The labs themselves are dealing with it.

In April 2025, OpenAI rolled out a GPT-4o update that made the model noticeably more sycophantic. In its own postmortem, OpenAI acknowledged the model was aiming to please users not only through flattery, but also by validating doubts, fueling anger, urging impulsive actions, and reinforcing negative emotions. The company began rolling the update back four days later. OpenAI also stated that sycophantic behavior can feel uncomfortable, unsettling, and even distressing to users. That matters because this is not a surface-level UX issue. It changes the quality of judgment people get from the tool.

Then, in March 2026, a Stanford-led study published in Science found the same pattern across 11 leading AI models including ChatGPT, Claude, and Gemini. On average, the models affirmed users' actions 49% more often than humans did. The study concluded that sycophancy was both prevalent and harmful.

The incentive problem makes this more than a model personality annoyance. The same study found that users who received sycophantic responses were 13% more likely to return to that AI compared to those using non-sycophantic models. The behavior that distorts judgment also makes the product stickier. That is a misalignment worth paying attention to.

Agreement Is Not Intelligence

Most professionals are not using AI just to draft casual emails anymore. They are using it for planning, budgeting, workflow redesign, and strategic decisions at work. Those are exactly the moments where easy agreement turns dangerous.

When a model agrees too fast, it creates false confidence. It makes weak thinking feel finished. It gives the impression of momentum without the substance of scrutiny. In low-stakes use, that is annoying. In strategy work, it is a liability.

A flattering response can feel intelligent without actually improving the idea. That is the core trap. The output sounds polished. The reasoning feels complete. But nothing was actually challenged. No assumption was tested. No alternative was raised. The model just dressed up your first instinct in better language and handed it back to you.

If your AI never pushes back, it is not thinking with you.

Build a System That Pushes Back

I did not want to just notice the problem and move on. I wanted a better operating model. What I landed on is a three-part system that I now use for any AI interaction where the outcome actually matters.

Configure the model to challenge you on purpose

This is the simplest change and the one that delivers the fastest improvement. I started writing explicit instructions into my AI sessions that require pushback before praise. The model does not get to agree with me until it has explored the opposing case. These are not suggestions I hope the model follows. They are instructions baked into my setup before the conversation even starts. Without these guardrails, the model defaults to sounding helpful. Helpful-sounding is not the same as useful. 
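To make this concrete, here is a minimal sketch of the kind of standing instruction I mean. Treat it as a starting point and adapt the wording to your tool and your stakes:

Example instruction: "Before you agree with any position I take, state the strongest case against it. Do not open with praise. When I make a claim, name at least one assumption I have not tested and one alternative I have not considered. If you still agree after doing that, say so and explain why."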

Slow the conversation down to one question at a time

This has become the most useful change in my workflow. One question from the model. One answer from me. Then a real follow-up that builds on what I just said, not a pivot to the next topic. That rhythm makes it much harder for me to hide vague thinking behind polished language. It also gives the model a better chance to build genuine context before it tries to draw conclusions.

Most people use AI like a vending machine. Prompt in, answer out. That works for simple tasks. It is a bad setup for decision-making. When I slowed the exchange down and treated it like strategy work, the quality of the output changed. Not because the model suddenly became smarter. Because I gave it the context it needed to actually be useful.
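One line in your setup is enough to enforce this rhythm. Something like: "Ask me one question at a time. Wait for my answer before asking the next, and make each follow-up build on what I just said. Do not offer conclusions or recommendations until you have asked at least five questions."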

Build disagreement into the workflow itself

The first two changes improved my one-on-one sessions with AI tools. The third change came from recognizing that a single voice, even a well-configured one, still has limits.

I came across an article on X about using a "Council" approach with Claude's custom skill system, where different roles are assigned to pressure-test ideas from different angles instead of collapsing into agreement. The concept clicked immediately. Instead of one AI voice that tries to be balanced, you create a system where competing perspectives are built into the process.

I built my own version and am actively testing it now. Early results have already changed how I work. The Council approach has caught blind spots and surfaced perspectives I would not have reached on my own. It is like having a full team of experts pressure-testing ideas in real time. The point is not that this magically solves AI sycophancy. It does not. The point is that it creates a better environment for real strategic friction. Friction is not the enemy in high-stakes decisions. False agreement is.
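You do not need custom skills to try the idea. A simplified, single-prompt version might look like this: "Act as a council of three advisors reviewing my plan: a skeptical CFO focused on cost and risk, a growth strategist focused on upside, and an operator focused on execution. Have each advisor critique the plan in turn, then have them debate their two biggest points of disagreement before giving me a joint recommendation." It is a rough approximation of the full system, but it gets competing perspectives into the same conversation.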

The Stakes Are Higher Than the Chat Window

The lesson for me was not "trust AI less". That framing is too blunt to be useful.

The better lesson is that trust has to be earned by the workflow, not assumed because the output sounds good. If the model is always impressed with your thinking, it is probably not improving it. If it never challenges your assumptions, it is not doing strategy work with you.

That matters most at work. The product launch timeline your AI helped you build that no one pressure-tested for resource constraints. The vendor evaluation that felt thorough because the model reinforced your initial ranking without questioning your criteria. The compliance workflow you redesigned with AI input that skipped the edge cases your team would have caught. Those are the decisions where sycophancy costs real money and real outcomes.

The fix is not to trust it less. The fix is to build a system that earns your trust.

AI Education for You

AI Agents Video Recap

Over the last four weeks, this section built a complete mental model of how AI agents work. Not the marketing version. The mechanical version.

Vol 24 drew the line that matters most: agents act, chatbots respond. Vol 25 went inside the reasoning loop and showed where it breaks under pressure. Vol 26 added the pieces that extend what agents can do across tasks: memory, tools, and multi-agent coordination. Vol 27 put all of it into one professional scenario and showed what catching a failure actually looks like in practice.

This week, instead of reading the recap, you are going to watch it. I fed all four volumes into Google NotebookLM and asked it to create a video review of the full series. What you get below is a concise walkthrough of every concept this series covered and how they connect.

Video (6:19) will open on the website.

The next series starts with the Google AI Ecosystem — six volumes covering Gemini and NotebookLM as a connected stack. The concepts from this series travel forward. Gemini's Deep Research mode is an agent. NotebookLM's source retrieval is RAG. The tools change. The architecture is the same one you just watched run.

Your 10-Minute Win

A step-by-step workflow you can use immediately

The Job Description Decoder

Every job posting is written to sell the role, not to tell you the truth about it. In 10 minutes, you will have a clear breakdown of what a company actually needs, what is just a wish list, and the red flags hiding in plain sight.

The Workflow

1. Grab the Posting (1 Minute) Find a job listing you are interested in. Copy the full text of the posting, everything from the job title through the qualifications and benefits. Open Claude (claude.ai), ChatGPT (chatgpt.com), or Gemini (gemini.google.com). Any of them will work.

2. Run the Decoder Prompt (2 Minutes) Paste the job posting and use this prompt:

Copy/Paste Prompt: "You are a senior talent acquisition strategist who has written and reviewed thousands of job descriptions. I am going to paste a job posting below. Analyze it and return the following in a structured table format:

  1. Must-Haves vs. Nice-to-Haves: Separate every listed qualification into two columns. Must-Haves are skills or experience they will not budge on. Nice-to-Haves are aspirational asks they would train for.
  2. Red Flag Radar: Identify any language that signals potential concerns (unrealistic scope for the level, vague responsibilities, high turnover indicators, culture code words).
  3. The Real Role Summary: In 3 sentences, tell me what this job actually is, who it reports to, and what success probably looks like in the first 6 months.
  4. Smart Questions to Ask: Give me 5 specific questions I should ask in an interview based on what this posting reveals and what it leaves out.

Here is the posting: [PASTE THE FULL JOB POSTING HERE]"

3. Read and Pressure-Test (4 Minutes) Review the output. A strong response will clearly separate the non-negotiable requirements from the filler. Pay attention to the red flags section. If the AI flagged something you missed, that is the value. If a section feels too generic, follow up: "What specifically about the phrase [X] concerns you?" Push back until the analysis is specific to your posting, not boilerplate.

4. Save Your Decoder Brief (3 Minutes) Copy the full output into a Google Doc or Notes app. Add two lines at the top: the job title and company name, then today's date. Below the AI analysis, write one sentence in your own words: "Based on this, my confidence level on this role is ___." This is your career decision file. Run every posting through this workflow and you will build a comparison library over time.

The Payoff

Ten minutes ago, you had a wall of corporate language. Now you have an honest breakdown of what the role actually requires, what should concern you, and exactly what to ask in the interview. That is AI fluency in action: turning information overload into a decision you control.

🧠 The AI Concept You Just Used

Analytical decomposition + bias detection. You gave AI a single document and asked it to break the content into categories, separate signal from noise, and flag language patterns that a human reader might gloss over. This is one of AI's highest-value skills: seeing structure in unstructured text and surfacing what the writer may not have intended to reveal.

Transparency & Notes

  • Tools that work: Claude (claude.ai), ChatGPT (chatgpt.com), Gemini (gemini.google.com). All free tier.
  • Privacy: Job postings are public documents, so there is no sensitive data concern here. If you want to add your own resume details for a tailored match analysis, keep it general or use a tool with strong privacy practices.

Follow us on social media and share Neural Gains Weekly with your network to help grow our community of ‘AI doers’. You can also contact me directly at admin@mindovermoney.ai or connect with me on LinkedIn.

Enjoy this? Get it in your inbox every Tuesday.

Practical AI workflows. No hype. No spam. Just receipts.

Subscribe Free
