Volume 25: Don't Be Easy to Influence
AI just became a kitchen table issue. Not in the abstract, but in real life: in your job description, your utility bill, your feed, and eventually your ballot. The talking points are already forming, and most of them will sound completely reasonable until you ask the second question.
This week is about building the habit of asking it.
🧭 Founder's Corner: A field guide to the four AI narratives headed your way this election cycle, and the questions worth asking before you accept any of them.
🧠 AI Education: Part 2 of the AI Agents series goes inside the decision loop nobody shows you, including where it breaks and what that means for how you work with it.
✅ 10-Minute Win: Turn your rough weekly notes into a status update that connects your work to your goals and pressure-tests it from your manager's perspective, in under 10 minutes.
Let's jump in.
Missed a previous newsletter? No worries, you can find them on the Archive page.
Signals Over Noise
We scan the noise so you don’t have to — top 5 stories to keep you sharp
1) YouTube expands AI deepfake detection to politicians, government officials and journalists
Summary: YouTube is expanding its “likeness detection” deepfake tool to a pilot group of politicians, government officials, and journalists so they can detect unauthorized AI-generated impersonations and request removal through YouTube’s existing policy review process.
Why it matters: Deepfakes are moving from “internet weirdness” to “civic risk.” Detection + takedown workflows like this are becoming core infrastructure for trust online.
2) ChatGPT, other AI chatbots approved for official use in US Senate
Summary: Reuters reports ChatGPT and two other AI chatbots have been approved for official use in the U.S. Senate, following internal review and rules for how staff can use them.
Why it matters: This is a mainstream adoption milestone: when government offices formalize “approved tools,” it accelerates normalization—and pushes policy and security standards into real practice.
3) Perplexity’s Personal Computer turns your spare Mac into an AI agent
Summary: Perplexity announced “Personal Computer,” an always-on agent that runs locally on a spare Mac, with controls like approvals for sensitive actions, audit trails, and a kill switch (waitlist for early access).
Why it matters: The agent trend is getting real—and “local + auditable” is a strong answer to the biggest blocker: people don’t trust a cloud agent to roam across their personal data without tight controls.
4) Microsoft and Anthropic team up to bring Claude Cowork to Microsoft 365
Summary: Microsoft says it worked with Anthropic to integrate Claude Cowork into Microsoft 365 Copilot, positioning Copilot as “multi-model” and able to choose the best model for a given task.
Why it matters: This is a major enterprise pattern: model choice becomes a platform feature. It also signals Microsoft is diversifying beyond a single model supplier as AI becomes core to productivity software.
5) Microsoft launches Copilot Health to help people make sense of personal health data
Summary: Microsoft launched Copilot Health, a dedicated experience to help people make sense of medical records, wearable data, and lab results with privacy controls and health-sourced answers—positioned as informational support, not diagnosis.
Why it matters: Health is one of the highest-stakes consumer use cases for AI. If this category works, it’s a clear example of “AI becomes a personal layer” across your most sensitive data.
Founder's Corner
The AI Election Is Here. Here’s How Not to Get Manipulated.
AI just became a kitchen table issue, not in the abstract but in everyday life. The conversation now reaches your career trajectory, your news feed, and, before long, your ballot.
Three years ago, AI was mostly a tech conversation. Then it became a business conversation. This year it turns into a voter conversation, and I do not think most people are ready for the way that shift will show up. A recent NBC News poll showed only 26% of voters reported positive feelings about AI and 46% reported negative feelings. AI is already underwater with voters, and the exact number matters less than what it signals: the public has a feeling before it has a framework. That is exactly the kind of gap politics knows how to exploit.
I have been watching this pattern play out for a while now. A corporate layoff happens, the word AI shows up in the headline, and people stop asking questions. A broken workflow becomes the center of attention, and someone says “just use AI” as if data quality, process design, and accountability do not exist. AI becomes either the villain or the savior, and conclusions get drawn without context.
Now those habits are moving into politics. Voters are about to get hit with plausible, emotional, incomplete AI arguments. This is not a piece about who to vote for or what side to pick. It is a field guide for how I plan to think the next time an AI talking point shows up trying to influence my decision making.
1. The jobs story will sound obvious. It usually isn’t.
The most effective AI talking point this cycle will probably be the simplest one: AI is taking your job, and I am the one who will protect you. It works because there is real fear underneath it, and I do not dismiss that. There are credible reasons to think AI will disrupt a lot of white-collar work, especially lower on the ladder where tasks are more repeatable and easier to unbundle.
What I do not trust is how clean this story gets once it enters public debate. I have watched smart professionals read a layoff headline, see the word AI, and stop there, with no second question about whether the company had demand problems, margin pressure, leadership issues, or a messy cost structure long before AI entered the press release. Once AI becomes the headline explanation, people often treat it like the full explanation.
From an operator’s perspective, that is where the thinking usually breaks down. “AI caused this layoff” can mean the work genuinely changed. It can mean leadership found a way to automate part of the workflow and cut headcount faster. It can mean the company was already in trouble and AI became the cleanest public explanation for a decision that was coming anyway. Those are very different stories with very different implications for workers and policy, but politics will flatten them into one.
Then comes the fix: universal basic income, automation taxes, worker protections, oversight boards. Different packaging, same basic move. Here is the villain, and here is the answer. Some of those ideas may be serious, but none of them are clean. Whenever someone gives me a simple answer to a messy systems problem, I want the operating model. Who pays for it? Who runs it? What incentive changes? What breaks next? If the person making the claim cannot walk me through the mechanism, I do not take the confidence at face value.
That is the question I want more voters to borrow: is this person explaining what actually changed in the work, or are they using AI to skip the harder story? And if they have a fix, can they explain how it works beyond the slogan?
2. The deepfake problem is bigger than fake content
AI image and video generation tools have improved fast enough that the average voter is going to have a harder time telling what is real, what is manipulated, and what is completely fake. Political operatives know that. The use of realistic AI-generated images to target political opponents is expected to grow substantially in the 2026 midterm cycle, with super PACs likely to experiment more aggressively with deepfake-style attack ads. It was not long ago that AI images were easy to laugh off because of extra fingers and distorted facial features. That era is ending fast.
Synthetic output does not need to be perfect to be effective. It just needs to land before skepticism does. That is why I think the deeper problem here is not just fake content spreading, but the doubt that follows it. People are going to react, share, and form opinions before they know whether the content is authentic.
I see a smaller version of this all the time. Someone gets burned by one hallucinated answer and swings too far in the other direction, distrusting everything. Blind trust is bad, but blanket distrust is not much better. Both are shortcuts, and neither is judgment. Politics is about to stress-test that exact weakness in public.
Due diligence will be table stakes during this election cycle, and there are a few basic questions that can help. Where did it first appear? Is it coming from an official or verified account, or from nowhere? Does it sound exactly right, or just emotionally convincing? Has any credible outlet verified it? Am I being pushed to react fast? That last one matters because urgency is a manipulation tool. That kind of discipline will matter more than people think.
3. “Regulate AI” will hide a lot of missing detail
Could AI regulation be a unifying topic during the midterm elections? A December 2025 Navigator Research survey found 60% of Americans support more AI regulation, including 63% of Democrats, 59% of Republicans, and 52% of independents. The partisan divide is not in whether to regulate. It is in what to regulate and who controls it.
My reaction to “we need to regulate AI” is basically the same as my reaction when someone at work says “we need an AI strategy.” Fine. What exactly are we talking about? Hiring tools? Deepfakes? Copyright? Data privacy? Political ad disclosures? Model training? Data centers? Consumer liability? State rules? Federal rules? People say “AI” like it is one neat object. It is not. It is a pile of different problems sitting at different layers of the tech stack, and they do not all need the same response.
That is why this part of the debate gets slippery so fast. A candidate can say “regulation” and sound serious without naming the actual rule. Another can say “innovation” and sound strong without naming who absorbs the downside while the market races ahead. Vague language creates fake clarity. It makes people feel informed when they are really just choosing which label sounds better.
I do not see this as only a political problem. There are countless examples from the business world where a company announces an “AI transformation” when what it really means is “we bought tools and have not worked through the process change yet.” Big language can hide thin thinking in any environment. The same pattern will show up here: strong words, weak definitions, lots of confidence.
So when someone says they want to regulate AI, I want to hear the nouns and verbs. What specific thing are they trying to regulate? What is the actual mechanism? Which level of government would do it? If they cannot answer those questions, I am not hearing a real policy plan. I am hearing a sales pitch.
4. Your electricity bill may be where AI gets most real
A lot of people think AI will feel real when more voters start using chatbots. I think it may feel real somewhere much less glamorous and much more immediate: the monthly electric bill. There is a race to build out capacity for the AI boom, leading to massive increases in electricity demand. What happens when those costs are passed down to consumers?
This is probably the most underestimated AI story in the whole election because it is concrete in a way most AI debates are not. People may not care much about model architecture or benchmark scores. They care about whether costs go up, who benefits, and whether they are being asked to absorb tradeoffs they never agreed to.
The costs are already moving. Electricity prices are forecast to rise 6% through 2027 and another 3% by 2028 as data center demand outpaces power supply. In communities near large data center developments, costs have risen by as much as 267% compared to 2020 levels. One study projects the average household electric bill will increase 8% by 2030 from data center and cryptocurrency demand alone.
Candidates will no doubt promise to solve the energy crunch, but uncontrollable factors will determine whether those promises can actually be kept, and that message does not fit neatly into a campaign line or talking point. I want to know who actually owns the constraint. Who controls the bill? Who approves the infrastructure? Who has authority, and who is just performing authority in public? Every situation will be different, but understanding the problem and its full set of tradeoffs will help you be a more informed voter.
The Filter
The people most likely to get played this cycle will not be the least intelligent voters. They will be the people who are smart enough to recognize the topic but not disciplined enough to slow down once the argument feels plausible. I know that because I have caught myself doing versions of this too. A claim lands, it sounds directionally right, and my brain wants to complete the story before I have inspected it.
That is the habit I am trying to break. This election is going to reward speed, emotion, and clean narratives. AI is messy, uneven, useful, disruptive, overclaimed, and misunderstood. Anyone offering you a one-line explanation for what AI is doing to jobs, truth, regulation, or your electric bill is probably giving you a frame before they give you the facts.
I do not want to be easy to influence.
That is not cynicism. It is basic defense. In a cycle full of plausible, emotional, incomplete AI arguments, basic defense may be one of the most useful skills you can build.
AI Education for You
AI Agents, Part 2 - How Agents Reason: The Decision Loop Nobody Shows You
What Is Actually Going On Here
The moment you hand an agent a goal, something starts running that you never see. The agent is not waiting for your next prompt. It is generating a chain of internal reasoning — thinking through what the goal requires, what the first step should be, which tool fits that step, and what it will do with the result. It takes the action, reads what came back, and reasons again. Then it acts again. This cycle — reason, act, observe, reason, act — runs continuously until the agent decides the goal is met or the task collapses under its own weight. The entire loop is invisible. What you see is the starting prompt and the final output. Everything in between happens without you.
The Problem That Made This Necessary
Early AI systems were built for single-turn responses. You asked a question. The model answered. That was the complete interaction. The problem was that most real tasks are not single-turn problems. They are sequences — each step depends on what the previous step returned.
Researchers at Google and elsewhere recognized this around 2022 and began formalizing what became known as the ReAct pattern — short for Reason and Act. The core insight was straightforward: if you interleave reasoning traces with tool actions inside the model's generation process, the model could plan a step, execute it, read the result, and plan the next step — all within a single extended run. Prior approaches had tried to separate planning from execution, handing a fixed plan to a separate execution layer. That broke constantly because real-world tool results are unpredictable. A search returns something unexpected. A file is formatted differently than assumed. A calendar API throws an error. A fixed plan has no mechanism for adapting. ReAct treated reasoning and acting as a continuous loop rather than two separate phases — and that architectural shift is what made agents viable.
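If it helps to picture the interleaving, here is the shape a single ReAct-style trace tends to take. The task, tool name, and figures below are invented purely for illustration; real traces are longer and messier.

```
Thought: I need the Q3 revenue figure before I can compare years.
Action: search("ACME Corp Q3 2025 revenue")
Observation: ACME Corp reported $4.2B in revenue for Q3 2025.
Thought: Now I need Q3 2024 to compute the year-over-year change.
Action: search("ACME Corp Q3 2024 revenue")
Observation: ACME Corp reported $3.8B in revenue for Q3 2024.
Thought: Revenue grew roughly 10.5%. I have enough to answer.
```

Each Thought line is reasoning, each Action line is a tool call, and each Observation line is a result feeding the next Thought. That interleaving is the whole trick.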
How It Actually Works
The loop runs in three repeating stages.
Reason. The agent generates a reasoning trace — an internal chain of thought that names what it is trying to do, what it knows so far, and what the next action should be. This reasoning is not visible in the final output. It is scaffolding the model builds for itself. The quality of this reasoning trace determines whether the next action is well-targeted or misdirected.
Act. The agent executes one tool call based on its reasoning: a web search, a file read, a code execution, a database query. One action at a time. Not a batch. The constraint is intentional — the result of each action informs the reasoning for the next one. Batching actions removes the feedback loop that makes the system adaptive.
Observe. The agent reads the result and feeds it back into its context. This is where the loop either tightens or begins to drift. If the result is clean and expected, the next reasoning step builds accurately on top of it. If the result is ambiguous, incomplete, or an error, the agent has to decide — with no human input — how to proceed.
Then the cycle repeats. A well-designed agent runs this loop efficiently, compressing a task that would take a professional multiple manual steps into a single continuous run. The tradeoff is opacity. The more steps the loop runs, the harder it is to audit what happened and why.
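For readers who want to see the shape of the loop in code, here is a minimal sketch. Everything in it is hypothetical: `llm` stands in for any model call, `tools` is a plain dictionary of callable functions, and real agent frameworks wrap this skeleton in far more machinery.

```python
# A minimal sketch of the reason-act-observe loop. `llm` stands in for
# any text-completion call and `tools` maps tool names to callables;
# both are hypothetical stand-ins, not any real framework's API.

def run_agent(goal: str, tools: dict, llm, max_steps: int = 10) -> str:
    context = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Reason: ask the model for its next move given everything so far.
        decision = llm("\n".join(context)
                       + "\nReply with FINISH: <answer> or ACT: <tool> <input>")
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        # Act: execute exactly one tool call based on that reasoning.
        _, tool_name, tool_input = decision.split(" ", 2)
        result = tools[tool_name](tool_input)
        # Observe: feed the result back so it shapes the next step.
        context.append(decision)
        context.append(f"Observation: {result}")
    return "Stopped: hit the step limit before the goal was met."
```

Note the two design choices the sketch makes explicit: exactly one action per iteration, so every result can inform the next reasoning step, and a hard step cap, which is the blunt guard most systems use against the runaway loops described next.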
Where It Still Breaks
Compounding errors. A wrong assumption in the reasoning trace at step two does not announce itself. It travels forward. By step eight the agent may be producing confident, well-formatted output built on a flawed foundation. The output looks finished. The error is buried.
Goal drift. On long runs, the agent's reasoning can drift away from the original goal — particularly when tool results pull the context in a different direction. The agent optimizes for what is in front of it, not always for what you originally asked.
Tool failures without recovery. When a tool returns an error or an unexpected format, agents vary significantly in how they handle it. Some retry intelligently. Some loop. Some abandon the task silently and produce a partial output that reads like a complete one.
Context window overflow. A long reasoning loop consumes tokens. As the context fills, earlier information — including the original goal — can get pushed out. The agent finishes the loop having forgotten what it was solving for.
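That last failure mode has a structural mitigation many agent builders reach for: pin the original goal so it can never be trimmed out, and drop the oldest observations first when the token budget tightens. A rough sketch of the idea, with a crude length estimate standing in for a real tokenizer:

```python
# A rough sketch of context trimming that pins the original goal.
# Naive trimming that drops the oldest lines first would eventually
# drop the goal itself, which is the failure mode described above.

def trim_context(goal: str, history: list[str], budget: int) -> list[str]:
    def estimate_tokens(text: str) -> int:
        return len(text) // 4  # crude rule of thumb, not a real tokenizer

    kept, used = [], estimate_tokens(goal)
    # Walk history newest-first so the most recent observations survive.
    for entry in reversed(history):
        cost = estimate_tokens(entry)
        if used + cost > budget:
            break
        kept.append(entry)
        used += cost
    return [goal] + list(reversed(kept))  # the goal is always present
```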
What This Means for How You Work With It
Verify outputs at the end, not just the beginning. The loop ran without you. What came back may look complete and be wrong in ways that are not obvious on the surface.
Check your goal definition before you run anything. The reasoning loop amplifies whatever goal you gave it — clearly stated goals compound well, vague goals compound poorly.
Ask what the agent actually did. Most agent interfaces expose a log of tool calls and reasoning steps. Reading it takes two minutes and tells you whether the loop ran cleanly or drifted mid-task.
Never assume a confident output means a correct one. The agent's tone does not change when its reasoning has gone sideways.
How This Connects
Part 1 established what separates agents from chatbots — the reasoning loop, tools, and memory working together. Now you have seen inside the loop itself: how ReAct patterns replaced fixed planning, why reason-act-observe runs continuously, and where the loop breaks under real conditions. Context windows (Vol 8) are directly relevant here — the longer the loop runs, the more tokens it consumes, and eventually the earlier context gets pushed out. That is not a hypothetical. It is a documented failure mode on long agent runs.
Vol 26 goes one layer deeper into the two components that extend what agents can do across tasks: memory architectures and the tool ecosystem. Vol 27 puts all of it together in a single professional scenario from goal to finished output — including a loop that nearly goes wrong and what catching it actually looks like.
Part 2 of 4 in the AI Agents series.
Your 10-Minute Win
A step-by-step workflow you can use immediately
The 1:1 Prep Assistant
A weekly status update ahead of your 1:1 is one of the most underrated career tools you have, but most people treat it like a chore. This workflow doesn't just clean up your bullet notes. It maps your work back to your actual goals so your manager sees the connection without you having to spell it out, then pressure-tests the whole thing from your manager's perspective before you hit send.
The Workflow
1. Brain Dump Your Week (2 Minutes)
Open a blank note and spend two minutes listing everything you worked on this week — rough bullets, incomplete thoughts, half-finished projects. Nothing needs to be polished. The messier the better.
Examples of what rough looks like:
- "finished the slide deck for the Q2 review"
- "had that call with the vendor, still waiting on pricing"
- "been stuck on the data issue, Sarah is looking into it"
Also grab your goals document, quarterly priorities list, or even a copy of your last performance review. A few bullet points of what you are being measured on this year is enough.
2. Run the Status Update Prompt (3 Minutes)
Open Claude, ChatGPT, or Gemini and paste the prompt below. This is prompt one of two — keep the chat window open after you run it.
Copy/Paste Prompt 1: "I am going to give you two things: my rough notes from this week and my current goals or priorities. Your job is to write my weekly status update and explicitly connect my completed work back to my stated goals so my manager can see the impact without me having to explain it.
My rough notes from this week: [PASTE YOUR BULLET NOTES HERE]
My current goals or priorities: [PASTE YOUR GOALS, QUARTERLY PRIORITIES, OR WHAT YOU ARE BEING MEASURED ON]
My role: [your job title or function]
Format the update in three sections:
- Completed This Week — what I finished, with a one-line note connecting each item to a relevant goal where possible
- In Progress — what is actively underway and any blockers
- Next Week — my top 2 to 3 priorities
Keep the tone professional but human. No corporate filler. Under 200 words total."
Read through the output. If a goal connection feels forced or inaccurate, tell the model to remove it. Accuracy matters more than coverage here.
3. Run the Manager Perspective Prompt (2 Minutes)
Stay in the same chat window — do not start a new conversation. The model already has full context on your work and goals. Now flip its perspective.
Copy/Paste Prompt 2: "Now read this status update as my manager. Based on what you know about my role and goals, tell me:
- What questions or concerns would this raise for you?
- Is there anything missing that a manager would want to know?
- Does anything sound vague or unconvincing?
Be direct and honest — I want to catch problems before I send this."
This is where the workflow earns its name. The model will surface gaps you cannot see because you are too close to your own work.
4. Finalize and Save Your Asset (3 Minutes)
Address any flags the model raised — either by editing the update directly or by asking the model to revise specific sections. Copy the final version into an email draft, Slack message, or wherever you share updates and send it. Save the two prompts somewhere reusable. Next Friday, your only job is to swap in new bullet notes and goals.
The Payoff
You now have a status update that connects your work to your goals and has been reviewed from your manager's perspective — all before you sent it. The prompt pair is reusable every single week. Over time this workflow does not just save you time, it trains you to communicate your impact more deliberately with or without AI.
🧠 The AI Concept You Just Used
Chained prompting + perspective switching. You ran two prompts in sequence in the same chat, letting the model carry context from the first into the second. Then you asked it to shift roles entirely and evaluate your work from someone else's point of view. These two techniques together are some of the most powerful — and most underused — moves in AI.
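If you are curious what this looks like under the hood, here is a minimal sketch of the same two-step pattern against a chat API. It assumes the OpenAI Python SDK purely for illustration; Claude's and Gemini's APIs expose the same conversational structure, and the prompts are heavily abbreviated versions of the two above.

```python
# A minimal sketch of chained prompting plus perspective switching,
# assuming the OpenAI Python SDK. Prompts are abbreviated stand-ins
# for the full Copy/Paste Prompts 1 and 2 above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{"role": "user",
             "content": "Write my weekly status update from these notes "
                        "and connect each item to my goals: ..."}]

first = client.chat.completions.create(model="gpt-4o-mini",
                                       messages=messages)
draft = first.choices[0].message.content

# Carry the full conversation forward, then flip the model's perspective.
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user",
                 "content": "Now read that update as my manager. What "
                            "questions or gaps would it raise?"})
review = client.chat.completions.create(model="gpt-4o-mini",
                                        messages=messages)
print(review.choices[0].message.content)
```

The whole technique lives in that `messages.append` step: the second prompt is answered with the first draft already in context, which is exactly what staying in the same chat window does for you.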
Transparency & Notes
- Tools that work: Claude (claude.ai), ChatGPT (chatgpt.com), Gemini (gemini.google.com) — all free tier, no credit card required.
- Privacy: Keep notes general. Avoid client names, confidential project details, or anything you would not say in an open meeting.
Follow us on social media and share Neural Gains Weekly with your network to help grow our community of ‘AI doers’. You can also contact me directly at admin@mindovermoney.ai or connect with me on LinkedIn.