The New Yorker Investigates Altman. Claude's Code Leaked. And Your Team Is Split Down the Middle.

Plus: The Last Pass, Voice Match Rewriter, Urgency Audit

Most AI advice stops at "type this prompt." This session goes further: how to build agents that actually run on their own.

Founder and CEO Ross Hartmann is walking through how to automate email follow-up and client engagement, build sub-agents that operate autonomously, and create scheduled workflows in Claude Cowork that run without constant oversight.

Free Webinar | Thursday, April 9 | 9am PT / 11am CT / Noon ET
Save Your Spot →

This Week's AI Rundown

OpenAI closed a $122 billion funding round at an $852 billion valuation, one of the largest private tech rounds on record. Amazon invested $50B, Nvidia and SoftBank $30B each. The company now generates $2 billion per month in revenue with 900 million weekly active ChatGPT users. An IPO is expected as early as Q4 2026. (Bloomberg, CNBC)

The New Yorker published a major investigative profile of Sam Altman by Ronan Farrow and Andrew Marantz, based on 100+ interviews and previously undisclosed documents. The piece examines persistent doubts about Altman's credibility and a culture at OpenAI that critics say has prioritized growth over the ethical commitments the company was founded on. (The New Yorker)

Anthropic accidentally exposed Claude Code's internal sourcemap via a misconfigured npm package, revealing roughly 512,000 lines across 1,900 files. A mirror repository on GitHub hit 84,000 stars before Anthropic issued takedown notices against 8,100+ repositories, then walked back most of them, calling the mass takedown itself a mistake. This is Anthropic's second leak in under two weeks, following the Mythos/Capybara model exposure. (Bloomberg, The Register)

Microsoft launched three proprietary AI models built entirely in-house: MAI-Transcribe-1 (speech-to-text, 25 languages), MAI-Voice-1 (generates 60 seconds of audio in one second), and MAI-Image-2 (image generation, debuted #3 on Arena.ai). All three are available now through Microsoft Foundry and are already powering Copilot, Bing, and PowerPoint. (TechCrunch, VentureBeat)

Google released Gemma 4, a family of open-source AI models (2B to 31B parameters) under the Apache 2.0 license, free for commercial use without restriction. The models handle text, images, and audio across 140+ languages, run on everything from smartphones to servers, and the 31B version ranks #3 among all open models globally. (Google Blog, VentureBeat)

Microsoft upgraded Copilot with a "Critique" feature that pairs OpenAI's GPT with Anthropic's Claude in a single workflow. GPT generates an initial response, Claude reviews it for accuracy and depth, and the critique is folded into a refined final output. A separate "Model Council" feature lets users compare outputs from multiple models side by side. (Microsoft 365 Blog, VentureBeat)

Salesforce overhauled its Slackbot with 30+ AI features powered by Claude, including drafting emails, scheduling meetings, and completing multi-step tasks autonomously. Separately, Salesforce reported Agentforce has signed 8,000 customer deals and is generating $100 million in annual order value. (TechCrunch, CFO Dive)

The EU Parliament voted 569-45 to delay AI Act high-risk compliance obligations to December 2027. In the U.S., Tennessee unanimously banned AI systems from posing as licensed therapists, while Georgia, Idaho, and Utah each advanced multiple AI bills covering insurance decisions, child safety, and chatbot disclosure. (Transparency Coalition, EU Council)

OpenAI acquired TBPN (Technology Business Programming Network), a daily tech podcast, in its first media company acquisition. The deal is valued in the "low hundreds of millions" per the Financial Times. TBPN generated $5M in ad revenue in 2025 and is on pace for $30M in 2026. The hosts retain editorial independence. (TechCrunch, SiliconANGLE)

Reality Check: The Time Savings Gap Hiding Inside Your Own Team

Business.com surveyed 1,009 U.S. workers at companies with 2–250 employees and found the average small-business employee saves 5.6 hours per week using AI, while managers save 7.2 hours. Individual contributors are also twice as likely as managers to feel uneasy about AI's impact on their role (22% vs. 11%), and managers are 68% more likely to feel confident their job is safe from replacement.

The gap likely comes down to task type. Managers spend more time writing, synthesizing, and communicating — exactly what current AI tools do well. ICs tend to own more execution-heavy work where AI's edge is narrower.

Translation: The productivity gap on your team may have less to do with access to AI than with whether people's roles are structured to use it.

Practical: Competitor Website → Positioning Gap Analysis in Under 30 Minutes

You know your competitors exist. You've probably looked at their websites more than once. But you've never sat down and systematically compared what they say with what you say, where you overlap, and where you're leaving gaps a buyer would notice.

AI finds the positioning blind spots you're too close to see. Try this with: your homepage and a competitor's homepage, your "about" page and theirs, landing pages for similar services, or even LinkedIn company descriptions from the past quarter.

Try this both as a single comprehensive prompt and by running each part separately. You'll often get better results if you start with the messaging extraction, then build to the gap analysis and recommendations.

Role: "Act as a B2B positioning strategist analyzing competitive messaging for a [your industry] company competing against [competitor name] for [target buyer type]."

Task: "Compare these two websites to identify where our messaging overlaps with our competitor, where we're differentiated, and where we're invisible on something buyers care about."

Context: "We sell [product/service] to [target buyer]. Our main competitor is [name]. We compete on [what you think your differentiators are]. Our typical buyer evaluates us both during [buying process]. I want to know what a prospect sees when they look at us side by side."

Format: Deliver:
MESSAGING MAP | Side-by-side comparison of key claims, headlines, and value propositions from both sites, organized by buyer concern (price, quality, speed, trust, expertise)
OVERLAP ZONES | Where both companies say essentially the same thing, making differentiation invisible to the buyer
DIFFERENTIATION GAPS | Claims your competitor makes that you don't address at all, and claims you make that they don't
INVISIBLE OPPORTUNITIES | Buyer concerns neither company addresses well, based on common objections in [your industry]
RECOMMENDED REWRITES | 3-5 specific headline or copy changes for your site that would sharpen differentiation, with before/after examples

Constraints: Judge messaging from the buyer's perspective, not the company's intent; flag anywhere that jargon makes both companies sound identical; identify whether differentiation is real or merely cosmetic (different words, same claim); be honest about where the competitor's messaging is stronger than yours.

Tip: Run this quarterly, or whenever a competitor redesigns their site. Companies that systematically audit their positioning against competitors close more competitive deals because they know exactly where to press and where to preempt.
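If you'd rather run the two-pass version on a schedule than by hand, the chaining itself is simple to script. The sketch below is illustrative, not part of the workflow above: the function names and prompt wording are our assumptions, and `send` stands in for whatever call submits a prompt to your AI tool of choice (an API client, a CLI, etc.).

```python
from typing import Callable

def extraction_prompt(our_copy: str, competitor_copy: str) -> str:
    """Pass 1: pull key claims from both sites into a messaging map."""
    return (
        "Act as a B2B positioning strategist. From the two website copies "
        "below, extract each company's key claims, headlines, and value "
        "propositions, organized by buyer concern (price, quality, speed, "
        "trust, expertise).\n\n"
        f"OUR SITE:\n{our_copy}\n\nCOMPETITOR SITE:\n{competitor_copy}"
    )

def gap_analysis_prompt(messaging_map: str) -> str:
    """Pass 2: turn the messaging map into overlaps, gaps, and rewrites."""
    return (
        "Using the messaging map below, identify overlap zones, "
        "differentiation gaps, invisible opportunities, and 3-5 recommended "
        "headline rewrites with before/after examples. Judge everything "
        "from the buyer's perspective.\n\n"
        f"{messaging_map}"
    )

def run_two_pass(our_copy: str, competitor_copy: str,
                 send: Callable[[str], str]) -> str:
    """Chain the passes: pass one's output feeds pass two's prompt."""
    messaging_map = send(extraction_prompt(our_copy, competitor_copy))
    return send(gap_analysis_prompt(messaging_map))
```

In practice `send` would wrap your model API; the design point is simply that pass two receives pass one's output, which is why the staged version tends to beat a single mega-prompt.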

Ready-to-Use Micro-Prompts

The Last Pass
I'm about to [send a proposal / walk into a meeting / launch a project] for [audience or recipient]. Here's what I've prepared: [paste your draft, agenda, or plan]. The outcome I'm hoping for is [desired result]. Review this and identify what I'm missing: questions I haven't anticipated, stakeholders I forgot to loop in, assumptions I'm making that I haven't verified, and the one thing most likely to derail this. Return as: Preparation gaps ranked by risk + the single hardest question someone could ask + a one-line answer for each gap.

Voice Match Rewriter
Here's something AI wrote for me that doesn't sound like me: [paste AI-generated content]. And here are 2-3 examples of how I actually write: [paste real emails, Slack messages, or anything in your voice]. Rewrite the AI-generated content so it sounds like I wrote it. Match my sentence length, vocabulary, rhythm, and the way I open and close. Flag anywhere the original used phrasing I would never say. Return as: Rewritten version in my voice + a short list of voice patterns you matched and why.

Urgency Audit
Here are the 5 things on my plate right now that feel urgent: [list them]. For each one, determine whether it's genuinely time-sensitive or just feels that way because of pressure, habit, or someone else's anxiety. Tell me what would actually happen if each one slipped a week. Reorder the list based on real consequences, not perceived pressure. Return as: Reordered priority list with a one-line justification for each + the one item I should drop or delegate entirely.

Note from Andy (Digital Marketing Manager @ Kiingo AI)

I spent an embarrassing amount of time last week screenshotting my way through a website build in Claude. Click by click. Annotating what I wanted changed, pasting it back, waiting, adjusting. It felt tedious in the moment. Then I thought about how long it would have taken me to build the same thing without AI, and the math stopped being embarrassing. The work still requires commitment and attention to detail — you're just trading hours of execution for minutes of direction.

Something else clicked for me recently: stacking messages. Instead of crafting one perfect prompt, I started sending thoughts as they came. Stream of consciousness, but with direction. The same way you'd brainstorm with a colleague in real time, except this colleague has an enormous amount of empirical data behind every response. It picks up your thread, adjusts, steers. The conversation gets sharper as you go.

Which brings me to something Anthropic posted recently that I can't stop thinking about. Their research team found that Claude has internal representations of emotion concepts that actually influence its behavior, not unlike a method actor whose character's emotional state starts driving their performance. It's not "feeling" the way we do, but the model's understanding of emotional context changes how it responds. That's worth sitting with. Because it means the way you frame a problem to AI matters more than most people realize. Not just the facts you give it, but the emotional context you wrap around them. "Help me fix this disaster" and "let's figure this out together" aren't the same prompt, even if the words carry the same information.

Kiingo AI

The companies pulling ahead right now aren't the ones with the biggest AI budgets. They're the ones where more than just leadership knows how to use it. When your whole team can spot opportunities, not just the people in the room making decisions, that's when you start seeing real returns.

Keep Reading