Volume 29: Healthcare AI Is Moving. Are You?
Anthropic crossed $30 billion in annualized revenue this month. OpenAI says enterprise is now 40% of its business. Microsoft restructured its entire AI organization. All three are converging on the same target: healthcare.
🧠Founder's Corner: The AI labs are telling you exactly where they are going, and why waiting for your employer to catch you up is the most expensive bet you can make right now.
🧠AI Education: The start of a six-part Copilot Deep Dive, built around a healthcare professional who discovers what the tool sitting in her apps can actually do.
✅ 10-Minute Win: Turn vague manager feedback like "be more strategic" into a concrete action plan and smart follow-up questions in one session.
Let's dive in.
Enjoying the weekly content? Forward this volume to a colleague, friend, or family member so they can subscribe.
Signals Over Noise
We scan the noise so you don’t have to — top 5 stories to keep you sharp
1) Public Comfort with AI in Health Care Falls, Ohio State Survey Finds
Summary: A national survey of 1,007 adults found that only 42% are now open to AI being used in their healthcare, down from 52% in 2024. Despite that declining trust, 51% of those surveyed said they had used AI for an important health decision without consulting a medical professional.
Why it matters: People are using AI for health decisions faster than they are building confidence in it. If you work in healthcare or any industry where AI touches consumers, this gap between adoption and trust is your operating reality right now. The opportunity is to be the person in the room who understands both the tools and the limits.
2) How Health Systems Can Prepare for the Next Phase of AI Adoption
Summary: Healthcare IT leaders say the next wave of clinical AI will center on the Model Context Protocol (MCP) for connecting AI to trusted knowledge sources, smaller domain-specific models that run securely inside hospital environments, and voice-driven workflows that reduce documentation burden for clinicians.
Why it matters: If terms like "MCP" or "domain-specific models" are new to you, this article offers a practical preview of where healthcare AI is headed next. The shift from general-purpose AI to purpose-built tools running inside existing clinical systems is the bridge between experimentation and real-world deployment.
3) Anthropic Releases Preview of Mythos, Its Most Powerful Model, to 40 Organizations for Cybersecurity Work
Summary: Anthropic released a preview of its new frontier model, Mythos, to roughly 40 organizations as part of a cybersecurity initiative called Project Glasswing. Partner organizations include Amazon, Apple, Google, Microsoft, CrowdStrike, and Palo Alto Networks. Anthropic says the model has already identified thousands of zero-day vulnerabilities across critical infrastructure, including bugs in every major operating system and web browser, some dating back decades.
Why it matters: Anthropic built a model so capable at finding software flaws that it chose not to release it publicly. Instead, it handed it to the biggest names in tech to fix vulnerabilities before attackers can exploit them. For anyone working in a regulated or security-conscious industry, this signals a shift: AI is no longer just a productivity tool. It is becoming a frontline defense system, and the companies deploying it first are the ones building your infrastructure.
4) OpenAI Foundation Commits Over $100 Million to AI-Powered Alzheimer's Research
Summary: The OpenAI Foundation announced more than $100 million in grants across six research institutions to accelerate Alzheimer's prevention and treatment using AI. The initiative spans AI-assisted drug design with the Institute for Protein Design, causal mapping of disease pathways with Arc Institute, new biomarker development with UCSF, and open datasets for predicting drug activity.
Why it matters: Alzheimer's has resisted treatment partly because it involves too many interacting factors for traditional research to untangle. This is one of the clearest examples of AI being pointed at a problem that genuinely requires its ability to reason across massive, complex datasets. If you have been wondering what "AI for good" looks like beyond the marketing, this is the kind of initiative worth watching.
5) Google Integrates NotebookLM Into Gemini With New "Notebooks" Feature
Summary: Google launched a new "Notebooks" feature inside the Gemini app that syncs bidirectionally with NotebookLM. Users can now organize chats, files, and custom AI instructions into persistent project workspaces that share sources across both tools.
Why it matters: If you use either Gemini or NotebookLM, this changes your workflow. You can now start a research project in one tool and continue it in the other without re-uploading anything. For anyone building knowledge bases or managing ongoing projects with AI, this is worth exploring this week.
Missed a previous newsletter? No worries, you can find them on the Archive page.
Founder's Corner
The Enterprise AI Race Is Coming for Healthcare. Are You Ready?
In the last six months, the biggest AI labs have each made aggressive moves toward enterprise growth. Not incremental updates, but deliberate strategic shifts in priorities. Structural reorganizations, leadership hires, and capital allocation decisions show where these companies believe the real money is. Healthcare sits at the center of that target.
I am not offering a prediction. I am reading the receipts. The labs are telling you where they are going. The question is whether you know what to do with that information.
The Money Is Already Moving
Anthropic crossed $30 billion in annualized revenue as of April 2026, according to Bloomberg. That is up from roughly $9 billion at the end of 2025. More than 1,000 enterprise clients now spend over $1 million annually, a figure that has more than doubled in under two months. According to Ramp's AI Index, Anthropic is now capturing 73% of all spending among companies purchasing AI tools for the first time. As recently as January, first-time enterprise spend was split evenly between Anthropic and OpenAI. That split is no longer close.
OpenAI is making its own enterprise push. In an April 8, 2026 blog post, the company stated that enterprise now makes up more than 40% of revenue and is on track to reach parity with consumer revenue by the end of 2026. OpenAI brought in Slack's former CEO as Chief Revenue Officer in January 2026, a hire that only makes sense if the enterprise sales motion is the priority.
Microsoft restructured Copilot on March 17, 2026, unifying consumer and commercial AI under a new EVP reporting directly to CEO Satya Nadella. This was not a product update. It was an organizational signal that AI strategy is now directly owned at the top of the company.
I am not highlighting these moves to compare vendors. I am highlighting them because all three are converging on enterprise at the same time, with real money and real structural commitment. That does not happen by coincidence.
Healthcare Is Not Just Another Vertical
Healthcare is fragmented, heavily regulated, and touches every American. That combination makes it both the hardest market to crack and the most valuable one to win. The demand is already there. According to OpenAI, 230 million people globally ask health questions on ChatGPT every week. The American Medical Association reported in 2025 that 66% of physicians were already using AI in practice. The labs are not guessing that healthcare matters. They are responding to a market that is already moving.
OpenAI launched OpenAI for Healthcare on January 8, 2026, with rollouts at Boston Children's Hospital, Cedars-Sinai, HCA Healthcare, Memorial Sloan Kettering, and Stanford Medicine. Three days later, Anthropic launched Claude for Healthcare with HIPAA-ready infrastructure and direct connections to federal coverage databases and medical coding systems used across the industry. Banner Health reported that 85% of its Claude users were working faster, with more than 22,000 clinical providers on the platform.
These are not pilot programs buried in innovation labs. These are production deployments at some of the largest health systems in the country, and they are reshaping how clinical workflows, insurance operations, pharmacy processes, and patient engagement are built. If you work in or around healthcare, the tools your organization evaluates next year are being shaped by the decisions these labs are making right now.
The Misconception That Will Cost You
Here is the most dangerous response to all of this noise: tuning it out and waiting.
Many professionals assume their employer will train them when the time comes. The problem is that innovation is arriving faster than the training programs. According to a Zapier survey of 550 corporate executives published in February 2026, 98% of executives now expect employees to have some level of AI proficiency. But only 65% plan to train existing employees, and just 44% plan to hire new AI talent. That gap between expectation and support is where careers get stuck.
The World Economic Forum projects that 39% of core skills will change by 2030. PwC's Global AI Jobs Barometer found that workers with AI skills earn 56% more than peers without them. These are not distant forecasts. The shift is happening now, and the professionals who wait for permission to start learning will find themselves behind the ones who did not.
I am experiencing this within my own career: 2026 has felt night-and-day different from 2025. The urgency around exploring AI tools and platforms to drive business value is unlike anything I have experienced before. It is more serious, more structured, and more consequential. I picked up on signals early that told me AI was worth investing real time into. That is why I built Neural Gains Weekly. Not for revenue, but because the best way I knew to prepare was to learn in public and share what I found along the way. That decision is paying off in ways I did not expect, but none of it happened overnight. It started with small steps, and it built from there.
Four Moves That Require No Permission
This does not have to feel overwhelming. Here are four moves that require no budget, no special access, and no technical background.
Ground yourself in basic AI education. Understand how the models work and what drives good outputs. If you cannot explain the difference between prompting well and prompting poorly, you are not ready to evaluate AI tools at work. Do not be someone who says "we can throw AI at the problem." Be someone who can see through the hype and build real approaches to implementing AI into workflows.
Experiment with multiple tools. Your employer may only offer one tool today, but that could change next quarter. Even within tools, there are usually multiple models to choose from. Build transferable fluency, not loyalty to one platform. Microsoft itself launched Copilot Cowork powered by Anthropic's Claude rather than OpenAI's models. Even the platforms are not loyal to a single model. You should not be either.
Document your AI wins. Keep a running log of what you tried, what worked, and what time or effort it saved. My team does this through a shared Excel file where we log prompts and use cases for idea sharing. It started small. It has become one of the most useful knowledge-sharing habits we have built. The professionals who can show what they have built with AI will be promoted, retained, and recruited. The ones who can only say "I use ChatGPT sometimes" will not stand out.
Build a learning community around you. Podcasts, YouTube, newsletters, colleagues. Invest real time in AI education the same way you invest time in entertainment. You do not have to do this alone, and the pace of change means no single person can track everything. Find your people, share what you learn, and hold each other accountable.
The Window Is Not Going to Wait
The AI labs are telling you where they are going. The capital is moving. The enterprise contracts are landing. The healthcare deployments are live. None of this requires you to become a data scientist or write a single line of code. It requires you to take your own AI education seriously and start building fluency now, not when your employer asks you to demonstrate AI capabilities on a project that matters.
The professionals who move first will not just be ready. They will be the ones setting the direction, not scrambling to keep up.
Share Neural Gains Weekly with your network to help grow our community of ‘AI doers’. You can also contact me directly at admin@mindovermoney.ai or connect with me on LinkedIn.
AI Education for You
Copilot 101 - Part 1: Introducing the AI Tool You Have Had for Months
The Situation
Dana is a clinical operations coordinator at a regional health system. She manages provider schedules, tracks compliance deadlines, and spends a significant part of every week in Outlook, Teams, Word, and Excel. Her organization has Microsoft 365 Copilot licenses deployed across the company. Dana knows Copilot exists. She has seen the icon in her apps. She has even clicked it once or twice to summarize an email. But that is as far as she has gone.
She is not avoiding it. She is busy. Between the compliance report due Wednesday, the three Teams meetings stacked on Tuesday, and an inbox that refills faster than she can empty it, learning a new tool has not made it to the top of the list. What Dana does not realize is that Copilot is not a new tool she needs to learn from scratch. It is a layer built into the apps she already opens every morning, and the features most relevant to her week are the ones she has never tried.
What She Tries First
Dana opens Outlook on Monday morning to 47 unread emails. She scans subject lines, flags what looks urgent, and starts reading from the top. Two hours later she has responded to a dozen messages and still has not started the compliance summary her director needs by Wednesday. She opens Word, stares at a blank document, and begins writing from scratch, pulling data from three different spreadsheets and a shared Teams channel where her team discussed the latest audit results.
She knows Copilot could probably help with some of this, but she is not sure where to start. The one time she tried asking it a question in Word, the response felt generic and unhelpful. She closed the panel and went back to doing it manually. That single experience shaped her entire impression of the tool.
This is the most common pattern across organizations that have already deployed Copilot. The license is active. The features are live. But most users tried it once, got a mediocre result from a vague prompt, and concluded it was not ready yet. The tool was ready. The first interaction just did not show them what it could actually do.
What Most Users Never Discover
Microsoft 365 Copilot is not one feature. It is an AI layer that sits across Outlook, Teams, Word, Excel, PowerPoint, and the standalone Copilot Chat app. With a licensed deployment, Copilot has access to something most standalone AI tools do not: your work data. Emails, calendar events, Teams conversations, meeting transcripts, files in OneDrive and SharePoint. All of it is searchable and referenceable through a system called Microsoft Graph, which connects the data across your Microsoft 365 environment.
This is what makes Copilot fundamentally different from using a general AI chatbot for work. When Dana asks Copilot to "summarize the key decisions from last Tuesday's compliance meeting," it does not guess. It searches the meeting transcript, cross-references the follow-up emails, checks the Teams chat from that channel, and assembles a response grounded in her actual work data. Every response is shaped by what she has permission to access, meaning Copilot respects the same security and access boundaries her organization has already set.
That cross-app awareness is the core capability most users have never experienced because they have only used Copilot for single-app tasks like summarizing one email or rewriting one paragraph. The real value shows up when Copilot connects information across apps to save time on work that normally requires manual assembly.
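The "search first, then generate" pattern behind that cross-app awareness can be sketched in a few lines of Python. This is a toy illustration, not Microsoft's implementation: the in-memory WORK_DATA list, the keyword-overlap scoring, and the answer() helper are hypothetical stand-ins for Microsoft Graph indexing and model generation.

```python
# Toy illustration of the "search first, then generate" pattern.
# Every piece here is a simplified stand-in: in a real deployment,
# Microsoft Graph indexes the work data and a large model generates
# the response from the retrieved context.

WORK_DATA = [
    {"source": "meeting transcript",
     "text": "compliance meeting decided audit summary due wednesday"},
    {"source": "email",
     "text": "director asked for compliance status section using audit notes"},
    {"source": "teams chat",
     "text": "team discussed latest audit results in shared channel"},
    {"source": "calendar",
     "text": "three teams meetings stacked on tuesday"},
]

def retrieve(query, store, top_k=2):
    """Rank items by keyword overlap with the query (a crude stand-in
    for real semantic search across connected apps)."""
    words = set(query.lower().split())
    scored = sorted(
        store,
        key=lambda item: len(words & set(item["text"].split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(query, store):
    """Retrieve first, then 'generate' a response grounded in the hits."""
    hits = retrieve(query, store)
    cited = "; ".join(f"[{h['source']}] {h['text']}" for h in hits)
    return f"Based on your work data: {cited}"

print(answer("what did the compliance meeting decide about the audit", WORK_DATA))
```

Notice that a vague query matches everything weakly and nothing strongly, which is exactly why specific prompts produce better-grounded answers.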
A quick note worth knowing: your organization's IT team controls which Copilot features are enabled and how data access is configured. Some features covered in this series may not be active in your environment, or may work slightly differently depending on your admin settings. If something described here does not appear in your apps, that is likely an admin configuration, not a missing feature.
What Changes
Dana opens Copilot Chat in the Microsoft 365 app and switches to Work mode. She types: "What are the most important action items from my emails and Teams messages since Friday?"
Copilot returns a prioritized summary pulling from her inbox and her Teams channels. Five items need her attention. She handles three of them in twenty minutes.
She opens the compliance thread from her director, clicks the Copilot panel in Outlook, and asks it to summarize the requirements discussed across the full thread. Copilot pulls the key points from twelve messages she would have spent fifteen minutes re-reading. She copies the summary into Word and asks Copilot to draft a compliance status section using the audit notes from a file her team uploaded to SharePoint last week. Copilot finds the file, extracts the relevant data points, and produces a working first draft she can refine.
The compliance summary that normally takes most of her Wednesday is drafted by Tuesday afternoon. Not because Copilot wrote it for her. Because it handled the assembly work: finding the information across apps, pulling it together, and giving her a starting point that was grounded in her actual data rather than a blank page.
What This Reveals
The gap between having Copilot and getting value from it is not about the technology. It is about knowing where to point it. Most professionals try Copilot with a generic question like "help me write something" and get a generic response back. That first experience feels underwhelming because the prompt did not give Copilot anything specific to work with.
The pattern that changes everything is grounding. When you point Copilot at specific emails, specific meetings, specific files, or ask it to search across your work data for specific information, the output quality jumps. The tool is designed to work with your context, not to generate content from nothing. The more specific your ask, the more useful the response.
This is not a tool you need to master before it becomes useful. It is a tool that becomes useful the first time you ask it a specific question about your actual work.
How This Connects
The AI Agents series (Vol 24-27) built a framework for understanding how AI systems reason, use tools, and manage memory. Copilot is a live example of that architecture running inside an enterprise product suite. When Copilot searches your emails, meeting transcripts, and files to assemble a response, it is running the same retrieval pattern covered in the RAG series (Vol 19-22): search first, then generate. Context windows (Vol 8) explain why Copilot works better with specific, focused prompts than with broad, open-ended ones. A narrow question keeps the context tight. A vague one forces the system to fill space with generic output.
The next five volumes go deeper. Vol 30 opens the hood on what actually happens between your prompt and Copilot's response, including how Microsoft Graph and your organization's data permissions shape every answer. Vol 31 walks through Copilot in Outlook and Teams with a full workday scenario. Vol 32 tackles Word, Excel, and PowerPoint honestly, including where the tool falls short. Vol 33 explores the agentic layer: Researcher, Analyst, and what it means to build custom agents. Vol 34 closes with a practical framework for evaluating whether Copilot is actually saving you time.
The tool is already in your hands. This series is about learning what it can do.
Part 1 of 6 in the Copilot Deep Dive series.
Your 10-Minute Win
A step-by-step workflow you can use immediately
The Feedback Translator
Vague feedback like "you need to be more strategic" or "take more ownership" sounds important but gives you nothing to act on. In 10 minutes, you will turn ambiguous manager-speak into a concrete action plan you can start executing this week.
The Workflow
1. Capture the Feedback (1 Minute)
Open Microsoft Copilot at copilot.microsoft.com (free, no Microsoft 365 subscription required). Before you type anything, pull up the exact feedback you received. This could be from an email, a performance review, a Slack message, or notes you jotted down after a 1:1. Copy the exact words. Do not paraphrase or soften them.
2. Run the Translator Prompt (2 Minutes)
Before pasting your prompt, toggle on Think Deeper. This activates Copilot's reasoning model, which spends more time analyzing your input before responding. For a workflow that requires reading between the lines of vague language, that extra reasoning step makes a real difference. Now paste your feedback and use this prompt:
Copy/Paste Prompt: "You are an experienced executive coach who specializes in translating vague workplace feedback into specific, actionable development plans. I received the following feedback from my manager. I need you to:
- Decode the Intent: What is my manager most likely trying to tell me? Translate each piece of feedback into plain language.
- Identify the Underlying Behavior: For each point, what specific workplace behavior is probably triggering this feedback?
- Action Plan: Give me 2 concrete actions per feedback point that I could start doing this week. Each action should be specific enough that I would know if I did it.
- Clarifying Questions: Give me 2 questions I can bring back to my manager to confirm I understood their intent correctly, without sounding defensive.
Here is the feedback I received: [PASTE YOUR EXACT FEEDBACK HERE]"
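If you plan to reuse this workflow, the prompt above can be stored as a template and filled in each time instead of retyped. A minimal sketch in Python (the constant and function names are my own, and the template text is abbreviated; paste the full prompt from above in practice):

```python
# Reusable template for the Feedback Translator prompt.
# The wording below is abbreviated; use the full prompt text from the
# workflow above for real sessions.
TRANSLATOR_PROMPT = (
    "You are an experienced executive coach who specializes in translating "
    "vague workplace feedback into specific, actionable development plans. "
    "Decode the intent, identify the underlying behavior, give me 2 concrete "
    "actions per feedback point, and 2 clarifying questions for my manager.\n\n"
    "Here is the feedback I received: {feedback}"
)

def build_translator_prompt(feedback: str) -> str:
    """Fill the template with the exact feedback text, unparaphrased."""
    return TRANSLATOR_PROMPT.format(feedback=feedback.strip())

print(build_translator_prompt("You need to be more strategic."))
```

Keeping the template in one place means every feedback session starts from the same structure, so you can compare translations over time.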
3. Pressure-Test the Interpretation (4 Minutes)
Read through the decoded feedback carefully. The strongest signal that Copilot nailed it is when you feel a flash of recognition, the "oh, that is what they meant" moment. If any interpretation feels off or too generic, push back: "Your interpretation of [specific point] does not match my situation. Here is more context: [add detail]. Try again." The more context you give, the sharper the translation gets.
4. Build Your Response Plan (3 Minutes)
Copy the full output into a Word doc. Title it "Feedback Translation — [Date]." Highlight the 2 or 3 actions that feel most relevant right now. At the bottom, write one line: "I will bring these clarifying questions to my next 1:1 on [date]." You now have a document that turns a vague conversation into a trackable development plan.
The Payoff
Ten minutes ago, you had feedback that felt unclear and maybe even frustrating. Now you have a decoded translation, specific actions you can start this week, and smart follow-up questions that show your manager you took their input seriously. That is AI fluency: turning ambiguity into a plan.
🧠The AI Concept You Just Used
Inference and intent interpretation. You asked AI to go beyond what was literally said and reason about what was probably meant based on context and common workplace patterns. Toggling Think Deeper gave Copilot's reasoning model more time to analyze the nuance in your feedback before responding. This is one of AI's most powerful capabilities: reading between the lines of language to surface meaning the original speaker may not have articulated clearly.
Transparency & Notes
- Tool used: Microsoft Copilot (copilot.microsoft.com). Free tier. No Microsoft 365 subscription required.
- Think Deeper: This toggle activates Copilot's reasoning model. It is available on the free tier but may have daily usage limits.
- Privacy: Manager feedback can be sensitive. Keep your input general if you are concerned. You do not need to include names, company details, or identifying information for this workflow to deliver strong results.