Volume 11: The Models Are Changing. Are You?
Hey everyone!
The models are changing fast. New names, new features, new "best ever" launches every week. The real question is not whether the models are getting better; it is whether our habits are keeping up.
In AI Education, we keep building your mental model of how these systems work so you can adapt with them, not chase them. This week's 10-Minute Win gives you a practical workflow you can reuse in your own stock research. And in Founder's Corner, I share how I am adjusting my own routines as the tools evolve.
Missed a previous newsletter? No worries, you can find them on the Archive page. Don't forget to check out the Prompt Library, where I give you templates to use in your AI journey.
Signals Over Noise
We scan the noise so you don't have to: the top 5 stories to keep you sharp
1) OpenAI's GPT-5.2 "code red" response to Google is coming next week
Summary: OpenAI has reportedly pulled forward the launch of GPT-5.2 to around December 9 after CEO Sam Altman declared an internal "code red" in response to Google's Gemini 3 and Anthropic's latest models. Internal evals suggest 5.2 outperforms Gemini 3 on key benchmarks, with the update focused on speed, reliability, and reasoning rather than flashy new features.
Why it matters: Model quality is an arms race. A stronger, faster baseline inside ChatGPT changes what "normal" users can do with AI for work, research, and investing, and it keeps competitive pressure high across the whole ecosystem.
2) Anthropic plans an IPO as early as 2026, FT reports
Summary: Anthropic, backed by Google and Amazon, has hired Wilson Sonsini and begun informal talks with banks for a potential IPO as early as 2026, according to the Financial Times. The Claude maker is reportedly exploring a fresh funding round that could push its valuation north of $300B.
Why it matters: A public Anthropic would give markets a direct way to price a frontier-model pure play, and it would bring more disclosure on revenue, margins, and capex in the AI stack.
3) AWS unveils âfrontier agents,â including Kiro, a virtual developer
Summary: Amazon Web Services introduced frontier agents, autonomous AI workers that can run for hours or days with minimal intervention, including Kiro (a virtual developer), AWS Security Agent, and AWS DevOps Agent. These agents maintain context, learn over time, and are built to handle multi-step, production-grade workloads.
Why it matters: This is the "agents, not just chatbots" shift going mainstream in cloud. Expect more operations, security, and dev work to be handled by persistent AI systems rather than one-off prompts.
4) US senators unveil bill to stop easing of curbs on AI chip sales to China
Summary: A bipartisan group of senators introduced the SAFE CHIPS Act, which would block the administration from loosening controls on advanced U.S. AI-chip exports to China, Russia, Iran, and North Korea for 30 months, and would require that Congress be briefed before future rule changes.
Why it matters: Export controls are now a core weapon in the AI race. This bill directly affects who gets cutting-edge Nvidia/AMD silicon, and therefore who can train and deploy the most powerful models.
5) OpenAI and Accenture accelerate enterprise reinvention with advanced AI
Summary: Accenture named OpenAI one of its primary AI partners and will equip tens of thousands of staff with ChatGPT Enterprise. Together they're launching a flagship program to help large clients embed agentic AI into core functions like customer service, supply chain, finance, and HR.
Why it matters: This is how AI actually reaches the Fortune 500: consulting plus tooling. It's a strong signal that "AI agents over enterprise workflows" is becoming the default playbook, not an experiment.
AI Education for You
LLM Deep Dive - Part 2: How Large Language Models Learn From Data
Last time you learned that a large language model is "a system that learns patterns in text and predicts the next small piece of text."
Now we focus on how it learns those patterns.
Training data: what the model sees
Training data for a language model is mostly text:
- Sentences and paragraphs
- Many topics and writing styles
- Different formats, such as articles, documentation, sometimes code and other text-like sources
Key idea: the model is not memorizing a giant book. It is seeing countless examples of text sequences and learning: "When I see this kind of text, these next tokens are likely to follow."
You can think of the training data as millions of practice questions with answer keys.
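If you like seeing ideas in code, here is a toy Python sketch of that "practice questions" idea. It is my own illustration, not any lab's real pipeline: real systems use subword tokenizers and trillions of tokens, but the shape of the data is the same.

```python
# Toy illustration: one sentence becomes several next-token
# "practice questions with answer keys." Whitespace splitting stands
# in for a real subword tokenizer to keep the idea visible.
text = "the cat sat on the mat"
tokens = text.split()

# Each training example pairs a context with the token that follows it.
examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
for context, target in examples:
    print(context, "->", target)
# ['the'] -> cat
# ['the', 'cat'] -> sat
# ...and so on through the sentence
```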
Parameters: the tiny settings inside
Inside the model are parameters.
- Each parameter is a number
- A modern model has billions of them
- Together they control how the model maps input tokens to output tokens
Analogy: Imagine a sound mixing board with billions of tiny sliders. Each slider affects the sound a little bit. Training is the process of nudging those sliders based on feedback, so the overall sound (the predictions) gets closer to what we want.
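To make "billions of sliders" concrete, here is a hedged PyTorch sketch. The network is a toy of my own invention with thousands of parameters instead of billions, but the counting works exactly the same way at any scale.

```python
import torch.nn as nn

# A deliberately tiny network, just to show that parameters are
# ordinary numbers you can count and adjust.
tiny = nn.Sequential(
    nn.Embedding(1000, 64),  # 1,000-token vocabulary, 64 numbers per token
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1000),     # a score for every possible next token
)
total = sum(p.numel() for p in tiny.parameters())
print(f"{total:,} parameters")  # 133,160 tiny sliders in this toy
```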
The training loop: predict, check, adjust
Training runs a loop over and over (there is a code sketch of it after the analogy below):
- The model sees a piece of text from the training data
- It tries to predict the next token at each step
- It compares its prediction to the actual token
- It measures how wrong it was
- It adjusts the parameters slightly
- It repeats on new text
Analogy: A student answers a practice question, checks the answer key, sees where they went wrong, and adjusts their "mental settings" so they will answer similar questions better next time.
Do this billions of times and the student, or the model, gets very good at predicting.
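Here is that loop as a minimal PyTorch sketch. The "model" is a single toy layer and the data is made up, so this is a cartoon rather than real training, but the predict-check-adjust rhythm is the same one the big labs run at enormous scale.

```python
import torch
import torch.nn as nn

# Toy setup: one layer stands in for the model, and a fixed batch of
# fake "contexts" is paired with the token that actually came next.
model = nn.Linear(16, 100)                 # 16 input features -> 100-token vocab
context = torch.randn(32, 16)              # stand-in for encoded text
next_token = torch.randint(0, 100, (32,))  # the real next tokens
loss_fn = nn.CrossEntropyLoss()            # scores "how wrong" each guess was
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(200):
    logits = model(context)             # 1) predict a score for every token
    loss = loss_fn(logits, next_token)  # 2) compare predictions to reality
    optimizer.zero_grad()
    loss.backward()                     # 3) trace which parameters were at fault
    optimizer.step()                    # 4) nudge each parameter slightly
    if step % 50 == 0:
        print(step, round(loss.item(), 3))  # watch the error score fall
```

Run it and you will see the printed loss shrink as the toy model memorizes its 32 examples; real training does the same thing across trillions of examples it cannot simply memorize.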
Loss: a score for "how wrong" it is
Training needs a signal to know if it is improving. That signal is called loss.
- High loss means the predictions are far from the real text
- Lower loss means the predictions are closer
The whole training process is about pushing the loss down over time.
You can think of loss as the model's average "error score." Training tries to make that score as low as possible across many examples, not just one.
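Skipping the real math, loss is roughly "the average of minus the log of the probability the model gave the correct next token." A three-example illustration in plain Python:

```python
import math

# The model gave these probabilities to the token that actually came
# next in three examples: very confident, unsure, and badly wrong.
probs_for_true_token = [0.9, 0.5, 0.05]

# Cross-entropy averages -log(probability of the truth), so a confident
# correct guess costs little and a confident miss costs a lot.
loss = sum(-math.log(p) for p in probs_for_true_token) / 3
print(round(loss, 3))  # 1.265 -- training keeps pushing this toward 0
```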
Why so much data and compute?
The model has billions of parameters. Language is messy and rich.
To train that many settings well, you need:
- A huge number of text examples
- Many passes over that data
- Specialized hardware to run the math quickly
That is why training large models is expensive (some back-of-envelope math follows the list):
- More data → better coverage of patterns
- More compute → more training steps
- More steps → more chances to reduce loss
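A widely cited rule of thumb from the scaling-law literature puts training compute at roughly 6 × parameters × training tokens. The numbers below are illustrative, not any specific model's bill, but they show why "expensive" is an understatement:

```python
# Back-of-envelope: training FLOPs ~ 6 * parameters * training tokens.
params = 70e9    # a 70-billion-parameter model (illustrative)
tokens = 1.4e12  # 1.4 trillion training tokens (illustrative)
print(f"{6 * params * tokens:.1e} FLOPs")  # ~5.9e+23 floating-point operations
```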
Fine-tuning and specialization
After base training, there are often later stages:
- Fine-tuning on conversation-style examples
- Training with human feedback to prefer more helpful and safe answers
- Sometimes extra training on specific domains, such as code or support tasks
You can think of base training as broad education, and fine-tuning as job training.
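For a feel of what "conversation-style examples" means, here is a sketch of a single fine-tuning record. Formats vary by lab, and the topic and wording here are invented for illustration:

```python
# One fine-tuning example: a short exchange showing the kind of answer
# we want. Thousands of these teach the model the shape of a helpful
# conversation, on top of its broad base training.
example = {
    "messages": [
        {"role": "user", "content": "What does a P/E ratio tell me?"},
        {"role": "assistant",
         "content": "It compares a stock's price to its earnings per share, "
                    "a rough gauge of how expensive the stock is."},
    ]
}
print(example["messages"][1]["content"])
```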
Data quality and diversity
The model reflects what it learned from training data.
- If the data covers many viewpoints and styles, the model is more flexible
- If some topics are weak, the model is weaker there
- If the data contains biased or low-quality patterns, those can show up in outputs
This is why you should still treat outputs as suggestions, not truth. The modelâs behavior is shaped by what it saw, not by a deep understanding of reality.
Reader questions
Q: So is the model just memorizing the internet?
A: No. It learns statistical patterns, not a complete copy of specific pages. It may sometimes produce text similar to training examples, but most of the time it is generating new text that follows patterns it has learned.
Q: Can it see my private financial data from training?
A: No. A deployed model does not have live access to your bank accounts or personal records unless you or an app send that information in a prompt. Training shapes its internal settings, but it does not act as a searchable database of your private history.
Closing this week
You now have a simple view of how a model learns:
- It sees huge amounts of text
- It adjusts billions of internal settings to reduce mistakes
- It learns patterns, not deep understanding or live facts
- Its strengths and blind spots come from the data and the way it was trained
Next week, we shift to the part you see every day:
What happens between you typing a message and the model sending back an answer.
Your 10-Minute Win
A step-by-step workflow you can use immediately
Competitor Quickmap for Payment Stocks
Why this matters: Knowing tickers isn't the same as understanding the businesses behind them. Payments is a crowded, confusing space: network toll collectors, card issuers, wallets, platforms. In about 10 minutes, this workflow uses AI to auto-research Visa (V), Mastercard (MA), American Express (AXP), and PayPal (PYPL) and turn that into a simple Competitor Quickmap you can actually reason from. You'll walk away with a one-page snapshot of who does what, for whom, and where the edges and risks sit, before you sink hours into deeper research.
Step 1: Let ChatGPT auto-build your sector snapshot (4–5 minutes)
Open a fresh ChatGPT chat and run this prompt:
Role: You are my sector analyst and research assistant.
Context: I want a clean, up-to-date snapshot of the payments/fintech landscape focusing on four stocks: Visa (V), Mastercard (MA), American Express (AXP), and PayPal (PYPL).
Task:
- For each company (V, MA, AXP, PYPL), use only recent, public information from official sources (Investor Relations pages, company overviews, or 10-K "Business" sections) to create a short overview.
- Then build a single comparison table with columns:
- Company
- Core Business Model (plain English, 1–2 lines)
- Primary Customers (who actually pays them)
- Revenue Engine (how money flows in: network fees, interest, take rate, etc.)
- Competitive Angle / Edge (what seems to make them different)
- Key Risks / Watchpoints (from filings/IR language, no speculation)
- After the table, write 5 bullet insights as if you're explaining the sector to a new investor, covering:
- How the card networks (Visa, Mastercard) differ from AmEx and PayPal
- Where the strongest moat appears to be from the business descriptions alone
- Where you see the most sensitivity to the economic cycle or credit risk
- Any obvious "this one feels different" angle between the four
Rules:
- Cite which official pages you used in a short note under the table (no links needed, just names).
- Keep language beginner-friendly.
- Do not give buy/sell/hold recommendations; only describe the landscape.
Let ChatGPT do the heavy lifting: it finds the pages, reads them, and outputs a ready-made sector map.
Step 2: Drop the Quickmap into Sheets (2–3 minutes)
Open Google Sheets and:
- Create a new sheet named Payments_Quickmap.
- Copy the entire comparison table from ChatGPT and paste it into the sheet (starting at A1).
- Add one more column at the end: "My Take (1 line)".
- For each company, write a blunt one-liner that is just for you:
- "Pure network toll collector; volume-driven."
- "Card issuer with more direct credit risk."
- "Feels like a digital wallet / checkout network more than a card network."
This step forces you to actually internalize the differences instead of passively scrolling.
Step 3: Ask AI for investor-style questions to dig deeper (2 minutes)
Back in the same ChatGPT chat, ask:
Based on the Quickmap table you just created, give me:
- 5 follow-up questions I should explore before considering any of these as investments (things like unit economics, margins, regulatory risk, competitive threats).
- For each question, suggest where I might look (10-K, earnings call, segment breakdown, key metrics). Keep it short, numbered, and written for a curious individual investor, not a professional analyst.
Copy those questions into your research notes or into a second tab in your Sheets file. This becomes your research checklist the next time you pull up filings or earnings calls.
Step 4: Turn this into a reusable sector template (1–2 minutes)
Now make the process reusable for any group of competitors:
Create a reusable "Investor Competitor Quickmap" template I can use for any sector, with:
- A short instruction block on how you'll auto-research 3–5 tickers from official sources.
- A blank table structure using the same columns (Company, Core Business Model, Primary Customers, Revenue Engine, Competitive Angle, Key Risks).
- 5 generic sector-insight prompts (e.g., "Who has the clearest, simplest business model?", "Who is most exposed to credit risk?", "Where is the obvious gap in the market?").
- Format it so I can copy/paste the template into my notes and re-run it for cloud, streaming, brokers, etc.
Save that template somewhere you actually use (Notion, Docs, Obsidian). Next time, swap in your own tickers + sector and rerun the same play.
The Payoff
In a single 10-minute session, you've turned "I know the tickers" into "I have a structured map of what these businesses actually are." You see:
- Who is more like a toll-taking network vs. a lender vs. a digital wallet/platform
- Who gets paid by whom, and how that might behave in different macro environments
- Where moats and weaknesses appear just from how the companies describe themselves
And you've built a template you can reuse for any new watchlist idea instead of restarting from zero each time.
Transparency & Notes for Readers
- All tools are free: ChatGPT (Free or Plus) and Google Sheets; all company information comes from public web sources.
- Scope: This is an overview, not a full valuation or risk model; treat it as a first-pass map.
- Data: Donât paste personal or account info; ChatGPT should be querying public company pages only.
- Educational workflow, not financial advice.
Founder's Corner
Real world learnings as I build, succeed, and fail
Last week, I shared how I've been incorporating new AI models into my workflows for Neural Gains Weekly. The more I experiment, the more I realize how important it is to stay current as these models evolve. If you rely on AI at home or at work, staying up to date is no longer optional. Each release can change how a model reasons, how it follows instructions, and what it is actually good at. If you treat every new model like the last one, your results will quietly get worse over time. As the model arms race heats up between the big labs, that gap will only grow. So what can you do when a new model is released? In this week's Founder's Corner, I want to share three simple ways I think about learning new models so you can adapt faster and get better outputs as the AI landscape keeps shifting.
Tip 1: Learn What the Model Is Optimized For
Not every model is trained and fine-tuned to excel at the same tasks. That can be hard to see if most of your interactions are simple, search-like chats. It's a big reason I push myself (and you) to run more complex workflows through AI. Before I judge a model, I want to know what it was actually built to be great at.
When you pay attention to headlines and launch materials, you start to pick up on these cues directly from the labs. One model touts superiority in all things coding, another leans into being the "agentic" leader. I read release notes and listen to a few trusted podcasts to get a sense of where a model is supposed to shine and how it can best support my workflows. Those small habits shorten the learning curve and make it easier to plug the right model into the right task.
Tip 2: Adjust Your Prompts When the Model Changes
The prompt is our chance to provide context, structure, and guidelines that help the AI produce the outcome we want. It's easy to get comfortable with a certain prompting style or reuse the same prompt over and over again. But this can lead to poor outputs when a new model is introduced. New models are not just updates; they behave like completely new systems that often require reengineered inputs.
Luckily, the process to learn the prompting structure for a new model is straightforward. You might need to be more explicit about structure, set length constraints, or change how you chunk information, but you don't have to figure this out alone. I often ask the model directly for advice on how it wants to be prompted for a specific task and use that as a starting point. From there, I let the AI help me refine a "best prompt" for the workflow I am testing, which removes a lot of the guesswork. Over time, this builds a mental playbook for each model instead of forcing one-size-fits-all prompts on every new release.
Tip 3: Run Small "Same Task, Different Model" Experiments
Experimentation is crucial in the world of AI, especially when you are trying to learn a new model. The simplest way to do this is to run the same real task through different models and compare the results. You do not need benchmarks or lab tests, just a light habit of asking, "Which model handled this better, and why?"
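If you work through the API rather than the chat apps, the same habit takes a few lines. Here is a hedged sketch using the OpenAI Python SDK; the model names are placeholders you would swap for whichever two models you are comparing.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
prompt = "Explain how LLM training loss works for a beginner newsletter."

# "model-old" and "model-new" are placeholders, not real model names.
for model in ["model-old", "model-new"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```

Reading the two outputs side by side makes the differences obvious far faster than any benchmark chart.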
I started comparing outputs from ChatGPT 5.0 and ChatGPT 5.1 as soon as the newer model launched. I wanted to see if I could spot patterns that highlighted the differences between the two. The "AI Education" prompt was a great starting point to compare which model's output aligned with my vision, needed the least editing, and felt the most useful for the audience. It was obvious how much better 5.1 handled the task, even though I was using the same prompt I had originally optimized for 5.0. That simple experiment helped me compare the models directly and choose the best content for the newsletter. Over time, that kind of side-by-side testing builds a feel for each model that no release note can give you.
At this point, I see new models less as shiny toys and more as part of the foundation of my work. The models will keep changing, and if my habits do not change with them, my results will slowly fall behind. This rings true in the workplace as more employers start mandating the use of AI. We will all need to be educated and well-versed in the models that power the tools we use every day. My perspective is that learning how new models work will be the equivalent of learning to use email in the early 2000s: mandatory. The models will keep getting smarter, but the real leverage comes from how quickly we learn them and fold them into the work that matters most to us.
Follow us on social media and share Neural Gains Weekly with your network to help grow our community of "AI doers". You can also contact me directly at admin@mindovermoney.ai or connect with me on LinkedIn.