The AI Election Is Here. Here’s How Not to Get Manipulated.
AI just became a kitchen-table issue, not in the abstract but in everyday life. The conversation now reaches your career trajectory, your news feed, and eventually your ballot.
Three years ago, AI was mostly a tech conversation. Then it became a business conversation. This year it turns into a voter conversation, and I do not think most people are ready for the way that shift will show up. A recent NBC News poll showed only 26% of voters reported positive feelings about AI and 46% reported negative feelings. AI is already underwater with voters, and the exact number matters less than what it signals: the public has a feeling before it has a framework. That is exactly the kind of gap politics knows how to exploit.
I have been watching this pattern play out for a while now. A corporate layoff happens, the word AI shows up in the headline, and people stop asking questions. A broken workflow becomes the center of attention, and someone says “just use AI” as if data quality, process design, and accountability do not exist. AI becomes either the villain or the savior, and conclusions get drawn without the context to support them.
Now those habits are moving into politics. Voters are about to get hit with plausible, emotional, incomplete AI arguments. This is not a piece about who to vote for or what side to pick. It is a field guide for how I plan to think the next time an AI talking point that could sway my decision shows up.
1. The jobs story will sound obvious. It usually isn’t.
The most effective AI talking point this cycle will probably be the simplest one: AI is taking your job, and I am the one who will protect you. It works because there is real fear underneath it, and I do not dismiss that. There are credible reasons to think AI will disrupt a lot of white-collar work, especially lower on the ladder where tasks are more repeatable and easier to unbundle.
What I do not trust is how clean this story gets once it enters public debate. I have watched smart professionals read a layoff headline, see the word AI, and stop there, with no second question about whether the company had demand problems, margin pressure, leadership issues, or a messy cost structure long before AI entered the press release. Once AI becomes the headline explanation, people often treat it like the full explanation.
From an operator’s perspective, that is where the thinking usually breaks down. “AI caused this layoff” can mean the work genuinely changed. It can mean leadership found a way to automate part of the workflow and cut headcount faster. It can mean the company was already in trouble and AI became the cleanest public explanation for a decision that was coming anyway. Those are very different stories with very different implications for workers and policy, but politics will flatten them into one.
Then comes the fix: universal basic income, automation taxes, worker protections, oversight boards. Different packaging, same basic move. Here is the villain, and here is the answer. Some of those ideas may be serious, but none of them are clean. Whenever someone gives me a simple answer to a messy systems problem, I want the operating model. Who pays for it? Who runs it? What incentive changes? What breaks next? If the person making the claim cannot walk me through the mechanism, I do not take the confidence at face value.
That is the question I want more voters to borrow: is this person explaining what actually changed in the work, or are they using AI to skip the harder story? And if they have a fix, can they explain how it works beyond the slogan?
2. The deepfake problem is bigger than fake content
AI image and video generation tools have improved fast enough that the average voter is going to have a harder time telling what is real, what is manipulated, and what is completely fake. Political operatives know that. The use of realistic AI-generated images to target political opponents is expected to grow substantially in the 2026 midterm cycle, with super PACs likely to experiment more aggressively with deepfake-style attack ads. It was not long ago that AI images were easy to laugh off because of extra fingers and distorted facial features. That era is ending fast.
Synthetic output does not need to be perfect to be effective. It just needs to land before skepticism does. That is why I think the deeper problem here is not just fake content spreading, but the doubt that follows it. People are going to react, share, and form opinions before they know whether the content is authentic.
I see a smaller version of this all the time. Someone gets burned by one hallucinated answer and swings too far in the other direction, distrusting everything. Blind trust is bad, but blanket distrust is not much better. Both are shortcuts, and neither is judgment. Politics is about to stress-test that exact weakness in public.
Due diligence will be table stakes during this election cycle, and there are a few basic questions that can help. Where did it first appear? Is it coming from an official or verified account, or from nowhere? Does it sound exactly right, or just emotionally convincing? Has any credible outlet verified it? Am I being pushed to react fast? That last one matters because urgency is a manipulation tool. That kind of discipline will matter more than people think.
3. “Regulate AI” will hide a lot of missing detail
Could AI regulation be a unifying topic during the midterm elections? A December 2025 Navigator Research survey found 60% of Americans support more AI regulation, including 63% of Democrats, 59% of Republicans, and 52% of independents. The partisan divide is not in whether to regulate. It is in what to regulate and who controls it.
My reaction to “we need to regulate AI” is basically the same as my reaction when someone at work says “we need an AI strategy.” Fine. What exactly are we talking about? Hiring tools? Deepfakes? Copyright? Data privacy? Political ad disclosures? Model training? Data centers? Consumer liability? State rules? Federal rules? People say “AI” like it is one neat object. It is not. It is a pile of different problems sitting at different layers of the tech stack, and they do not all need the same response.
That is why this part of the debate gets slippery so fast. A candidate can say “regulation” and sound serious without naming the actual rule. Another can say “innovation” and sound strong without naming who absorbs the downside while the market races ahead. Vague language creates fake clarity. It makes people feel informed when they are really just choosing which label sounds better.
I do not see this as only a political problem. There are countless examples from the business world where a company announces “AI transformation” when what it really means is “we bought tools and have not worked through the process change yet.” Big language can hide thin thinking in any environment. The same pattern will show up here: strong words, weak definitions, lots of confidence.
So when someone says they want to regulate AI, I want to hear the nouns and verbs. What specific thing are they trying to regulate? What is the actual mechanism? Which level of government would do it? If they cannot answer those questions, I am not hearing a real policy plan. I am hearing a sales pitch.
4. Your electricity bill may be where AI gets most real
A lot of people think AI will feel real when more voters start using chatbots. I think it may feel real somewhere much less glamorous and much more immediate: the monthly electric bill. There is a race to build out capacity for the AI boom, leading to massive increases in electricity demand. What happens when those costs are passed down to consumers?
This is probably the most underestimated AI story in the whole election because it is concrete in a way most AI debates are not. People may not care much about model architecture or benchmark scores. They care about whether costs go up, who benefits, and whether they are being asked to absorb tradeoffs they never agreed to.
The costs are already moving. Electricity prices are forecast to rise 6% through 2027 and another 3% by 2028 as data center demand outpaces power supply. In communities near large data center developments, costs have risen by as much as 267% compared to 2020 levels. One study projects the average household electric bill will increase 8% by 2030 from data center and cryptocurrency demand alone.
Candidates will no doubt promise to solve the energy crunch. But uncontrollable factors will influence whether those promises can actually be kept, and that message does not fit neatly into a campaign line or talking point. I want to know who actually owns the constraint. Who controls the bill? Who approves the infrastructure? Who has authority, and who is just performing authority in public? Every situation will be different, but understanding the problem and its full set of tradeoffs will help you be a more informed voter.
The Filter
The people most likely to get played this cycle will not be the least intelligent voters. They will be the people who are smart enough to recognize the topic but not disciplined enough to slow down once the argument feels plausible. I know that because I have caught myself doing versions of this too. A claim lands, it sounds directionally right, and my brain wants to complete the story before I have inspected it.
That is the habit I am trying to break. This election is going to reward speed, emotion, and clean narratives. AI is messy, uneven, useful, disruptive, overclaimed, and misunderstood. Anyone offering you a one-line explanation for what AI is doing to jobs, truth, regulation, or your electric bill is probably giving you a frame before they give you the facts.
I do not want to be easy to influence.
That is not cynicism. It is basic defense. In a cycle full of plausible, emotional, incomplete AI arguments, basic defense may be one of the most useful skills you can build.