Most AI advice stops at "type this prompt." This session goes further: how to build agents that actually run on their own. Founder and CEO Ross Hartmann will walk through how to automate email follow-up and client engagement, build sub-agents that operate autonomously, and create scheduled workflows in Claude Cowork that accomplish tasks independently, without constant oversight.
If you're past the "what can AI do?" phase and into "how do I make it work while I'm not looking?", this is the session.
Free Webinar | Wednesday, April 9 | 9am PT / 11am CT / Noon ET
Save Your Spot →
This Week's AI Rundown
• OpenAI is shutting down Sora, its AI video app, just six months after launch, and replacing it with a new model codenamed "Spud." Sora's estimated $15M/day inference costs dwarfed its $2.1M in total lifetime revenue, downloads dropped 66%, and Disney pulled both a content partnership and a planned $1B investment. On the same day, OpenAI revealed it completed pretraining on Spud, which Altman described as "a very powerful model" coming "within a few weeks." Unlike Sora, Spud will focus on physical world understanding for industrial and robotic applications, not consumer video. (Axios, NPR, NowadAIs)
• Google's March Gemini Drop made personal intelligence features free for all U.S. users, added chat history imports from competing AI providers, and doubled the length of Gemini 3.1's conversation memory. (Google Blog)
• OpenAI released GPT-5.4 Mini ($0.75/M input tokens) and Nano ($0.20/M), its cheapest models yet with 400K context windows. Mini approaches full GPT-5.4 on coding benchmarks (54.4% SWE-Bench Pro) at a fraction of the cost, designed for high-volume and agentic workloads. (OpenAI Blog)
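To put the per-million rates above in concrete terms, here's a minimal cost sketch. The rates are the input-token prices from the item ($0.75/M for Mini, $0.20/M for Nano); the 500M-token monthly volume is a hypothetical, and output-token rates aren't given, so this covers input only:

```python
def input_cost_usd(tokens: int, rate_per_million: float) -> float:
    """Cost of input tokens at a stated per-million-token rate."""
    return tokens / 1_000_000 * rate_per_million

# Hypothetical high-volume agentic workload: 500M input tokens/month.
monthly_tokens = 500_000_000
print(f"Mini: ${input_cost_usd(monthly_tokens, 0.75):,.2f}")  # $375.00
print(f"Nano: ${input_cost_usd(monthly_tokens, 0.20):,.2f}")  # $100.00
```

At that volume the spread between the two tiers is a few hundred dollars a month, which is why the benchmark-per-dollar comparison matters for agentic workloads.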
• An accidental data leak revealed Anthropic has been testing "Claude Mythos" (internally codenamed "Capybara"), a model tier above Opus. Roughly 3,000 unpublished internal documents were publicly accessible due to a CMS misconfiguration. Internal docs describe it as "the most capable we've built to date" with dramatically higher scores on coding, reasoning, and cybersecurity benchmarks. (Fortune via The Rundown)
• Anthropic published the largest qualitative AI study ever conducted: 80,508 Claude users across 159 countries in 70 languages. 67% expressed net positive sentiment toward AI. Top actual use: productivity (32%). Top concern: hallucinations (26.7%). Lower-income countries were significantly more optimistic than Western markets. (Anthropic)
• AI coding tools crossed a new line this week: Anthropic added computer use and auto mode to Claude Code, and OpenAI shipped a Codex plugin that runs inside it. Auto mode uses classifiers to approve safe actions automatically (users were already approving 93% of prompts). The Codex plugin adds code review and task delegation without leaving Claude Code, a competitor voluntarily putting its agent inside a rival's tool. (TechCrunch, Anthropic, OpenAI GitHub)
• Google launched Lyria 3 Pro, expanding AI music generation from 30-second clips to full 3-minute tracks at $0.08 per song. Now integrating across Google's product ecosystem for professional creators. (Google Blog)
• Sakana AI's "AI Scientist," a system that generates research ideas, writes code, runs experiments, and drafts papers, was published in Nature. Key finding: stronger AI models produce higher-quality scientific papers, suggesting a "scaling law of science." (Nature, Sakana AI)
• A supply chain attack hit LiteLLM, a popular AI development package with 3.4 million daily downloads. Malicious versions harvested credentials and attempted to move across cloud infrastructure. If your team uses AI development tools, it's worth auditing which packages you're running. (BleepingComputer, Snyk)
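If you want a quick inventory before auditing, here's a minimal sketch using only Python's standard library. It checks which packages from a watchlist are installed and at what version; the names beyond litellm are illustrative examples of a typical AI stack, not packages named in the advisory:

```python
from importlib import metadata

def installed_versions(names: list[str]) -> dict[str, str]:
    """Return {package: version} for whichever of the given packages are installed."""
    found = {}
    for name in names:
        try:
            found[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            pass  # package not installed in this environment
    return found

# litellm is from the advisory; the others are common examples, not affected packages.
watchlist = ["litellm", "openai", "anthropic"]
print(installed_versions(watchlist))
```

Cross-check any hits against the pinned-safe versions in the vendor advisory before upgrading.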
Reality Check: The Skill Nobody's Practicing Yet
OpenAI's chief scientist Jakub Pachocki told MIT Technology Review the company will ship an "AI research intern" by September 2026 that can autonomously handle tasks "that would take a person a few days." By 2028, the full system will handle not just research but "business and policy dilemmas." His framing: "You don't need to be as smart as people in all their ways in order to be very transformative."
When AI can work independently for days, your job shifts from doing the work to directing and verifying work you didn't do yourself: reviewing someone else's thinking, catching what looks right but isn't, knowing when to trust the output and when to push back. That's a management skill most teams haven't practiced yet. By September, it's not optional. It's the job.
Sources: MIT Technology Review (March 2026)
Practical: Client Win → Draft Case Study + Approval Request in Under 30 Minutes
You closed a project three months ago and it went well. Marketing has been asking for a case study ever since. But writing one from scratch feels like a whole project in itself, so it sits on the list. Meanwhile, your best proof of results stays locked in your head.
AI turns scattered project notes into a structured case study draft. Try this with: project kickoff docs, final deliverable summaries, Slack recaps, your own memory of what happened, client emails, or internal debrief notes from the past 6 months.
Try this both ways: as a single comprehensive prompt, and by running each part separately. You'll often get better results if you start with the story extraction, then build to the polished draft and approval request.
Role: "Act as a B2B content strategist specializing in case studies for a [your industry] company that sells to [target buyer type]."
Task: "Analyze these project notes to extract the narrative arc and produce a client-ready case study draft with a permission request I can send today."
Context: "We completed [project type] for [client name/description] over [timeframe]. The engagement started because [original problem]. We delivered [what you did]. The result was [outcomes you remember, even rough numbers]. Our typical buyer is [title/role] at [company type]. I want this case study to help close deals with similar prospects."
Format: Deliver:
STORY EXTRACTION | The problem the client had before you, what they tried that didn't work, why they chose you, what you actually did (not what the SOW said), measurable results, one unexpected benefit they didn't anticipate
DRAFT CASE STUDY | 400-600 words following Problem → Solution → Results structure with a pull quote placeholder, written for your target buyer, not the client's industry
STAT GAPS | Specific numbers or proof points that would strengthen the piece, phrased as questions to ask the client ("Can you confirm revenue increased by roughly X%?")
CLIENT APPROVAL EMAIL | Ready-to-send email requesting permission to publish, including the draft as an attachment reference, making it easy for them to say yes
SOCIAL PROOF EXTRACTS | 2-3 standalone sentences pulled from the case study that work as LinkedIn posts, website testimonials, or sales deck slides
Constraints: Distinguish between facts you've provided and inferences the AI is making — flag anything assumed; Write for your prospect, not the client's ego — focus on results the reader can picture themselves getting; Don't fabricate numbers — leave brackets where real data is needed; Keep the tone conversational, not corporate.
Tip: Run this right after every successful engagement. The sooner you ask, the more likely the client says yes: results are fresh, the relationship is warm, and the details haven't faded. Waiting six months makes every part of this harder.
Ready-to-Use Micro-Prompts
"Why Haven't They Responded?"
You sent a proposal, deliverable, or follow-up and heard nothing back. Describe what you sent, when you sent it, the relationship history, and their last few messages or interactions. Identify the three most likely reasons for the silence, ranked by probability based on the context. For each, draft a follow-up message that addresses that specific scenario without being pushy or presumptuous. Return as: Ranked silence diagnosis (most to least likely) + one tailored follow-up message per scenario + a recommended send date and channel.
The 5-Second First Impression Test
Paste your company's homepage headline, email signature tagline, or elevator pitch. Tell me what a stranger with no context would actually understand from this in five seconds, what's unclear or generic, and what's missing that would make someone want to learn more. Rewrite it three ways: one optimized for clarity, one for curiosity, and one for credibility. Return as: Five-second stranger interpretation + gap analysis + three rewrites with the reasoning behind each.
"You're Too Expensive" (Now What?)
A client or prospect just pushed back on your pricing. Paste exactly what they said and describe your offering, typical deal size, and who you were talking to. Determine whether this is a genuine budget constraint, a negotiation tactic, or a gap in how they perceive your value. For each scenario, draft a response that holds your price without being defensive. Return as: Objection diagnosis (budget, tactic, or perception) + one response per scenario + one question to ask that reveals which it actually is.
Note from Andy (Digital Marketing Manager @ Kiingo AI)
Gilles Deleuze and Félix Guattari, French philosophers writing in the 1970s and 80s, developed a concept called a "line of flight" — the idea that breaking from a familiar system isn't about escaping it, but about moving into new territory in a way that changes how you operate. Think of a business owner who spends a week shadowing a completely different industry, then comes home and sees their own operations differently.
I've started thinking about AI use the same way. The most valuable thing I've done with AI this month on a personal level wasn't optimizing a workflow I already had. It was asking it to help me think about a problem I had no business solving on my own, something outside my lane. I used it to break down a financial model I'd never built before, not because I needed to become a financial analyst, but because understanding the structure made me better at the marketing work that actually is my job.
That's the pattern worth paying attention to. AI lets you take these "flights" into unfamiliar territory without the usual cost of entry: months of study, expensive consultants, or trial and error. You can explore a new domain in an afternoon, extract what's useful, and come back to your core work with a wider lens.
The people getting the most from AI are also using it to learn things they wouldn't have had access to before, and folding that knowledge back into the work that matters most.
Kiingo AI
Most companies are still using AI for the obvious stuff. The edge isn't in doing the same work faster. It's in finding the tasks you didn't realize AI could handle, and building a team that spots those opportunities on their own.


