New From Kiingo: AI Champions Peer Groups

You're experimenting with AI. Your team tried a few things after training. Then what?

For most companies, AI adoption stalls. Not because the tools don't work—because there's no operating system to sustain momentum, test what's working elsewhere, or make sure wins compound rather than fade.

The AI Champions Group is a monthly peer cohort for the person in your organization responsible for making AI actually stick. In 4-hour facilitator-led sessions with 8-12 other business leaders, members see what's working now, work through use cases, stress-test their approaches, and leave with concrete next steps.

We're now enrolling for Q1 cohorts. If you have someone on your team who should be in this room—or you want to be that person—reply with "champions group" and we'll send more details.

Also, we just launched a new page on our website: AI For Executives. It has additional free prompts, videos, and practical AI techniques you can use today.

This Week’s AI Rundown:

  • OpenAI signed a ~$10B compute deal with Cerebras to secure up to 750 megawatts of capacity through 2028—because apparently owning the AI race requires electricity bills that could power a small country. (Financial Times)

  • Sam Altman is investing in brain-computer interfaces. Merge Labs emerged from stealth with $252M in funding (OpenAI among the investors) to pursue non-invasive ultrasound-based BCIs. When chatbots aren't enough, why not just read thoughts directly? (WIRED)

  • ChatGPT Translate launched as a standalone experience at chatgpt.com/translate, handling "dozens of languages"—OpenAI's polite way of telling Google Translate to watch its back. (OpenAI)

  • Anthropic announced Claude for Healthcare with new tooling for clinical and administrative workflows—regulated healthcare use cases now have a dedicated offering, and it's available on Azure AI Foundry alongside AWS Bedrock and Google Cloud Vertex AI. (Anthropic)

  • Meta formed "Meta Compute" to scale AI data-center capacity into the "tens of gigawatts" over the coming decade. Translation: AI is an energy and logistics business as much as a model business, and everyone knows it now.

  • NVIDIA and Eli Lilly announced a co-innovation AI lab at JPM Healthcare week, aiming to reinvent drug discovery. Pharma + chips = the new power couple. (Eli Lilly)

  • BlackRock's AI Infrastructure Partnership targets $30B (with $12.5B raised so far), while a client survey shows rising preference for energy/infrastructure "picks and shovels" exposure over Big Tech as an AI investment thesis. The smart money is betting on megawatts, not models. (The Information, Reuters)

  • Skild AI raised ~$1.4B at a valuation over $14B, while Bloomberg reports Replit is pursuing a funding round at around $9B. The robotics and coding-agent capital race continues. (Business Wire/Morningstar, Bloomberg)

  • AMD CEO Lisa Su on hiring "AI-forward": "We're not hiring fewer people, but we're hiring different people." Translation: the job market isn't shrinking—it's shifting. Whether your skills are on the right side of that shift is a different question. (Reuters/ET)

  • Forrester forecasts AI and automation will account for ~6% of U.S. job losses by 2030 (about 10.4M roles)—and warns against over-automation driven by hype followed by costly pullbacks. The pendulum swings both ways. (Forrester)

  • Actual hope file: OpenAI launched ChatGPT Health with added privacy/security controls for health conversations and integrations to connect certain health data sources. 230M+ users already ask health questions weekly—now there's a dedicated space with guardrails. (OpenAI)

From Schuyler, Our CMO: Tools I’ve Been Trying

Broke my wrist this week. Josh, our COO, got me into Wispr Flow—voice-to-text that transcribes as you speak, directly into whatever app you're working in. Not sponsored, just useful when typing hurts and I need to get thoughts down faster. (And they've made a convert out of me, even as an avid Siri hater!)

Also: NotebookLM makes surprisingly good slide decks from documents, images, whatever you upload. I've tried a bunch of tools for this—and it's by far the best starting point I've found.

Practical: All-Hands Transcripts + Survey Comments → Culture Reality Check + Messaging Gap Analysis in 20 minutes

You addressed remote work concerns in three all-hands meetings. The anonymous survey still shows 40% of employees don't believe leadership cares about flexibility. The gap between what you're saying and what they're hearing is where your credibility quietly erodes—and you won't see it until your best people start leaving.

AI surfaces where your words aren't landing. Try this with: all-hands meeting transcripts or recordings, town hall Q&A logs, employee survey verbatims, anonymous feedback channels, or Slack sentiment from the past 3-6 months.

Two approaches: Run as a single diagnostic before your next all-hands (20 minutes) or track quarterly to measure whether your messaging is actually shifting perception (45 minutes with trend analysis). Start with the gap analysis—it's usually more uncomfortable and more valuable than the content audit.

Role: "Act as an organizational psychologist analyzing leadership communication effectiveness for a [company size] [industry] company where the CEO suspects their messaging isn't landing the way they intend."

Task: "Compare what I've communicated in all-hands meetings against employee feedback to identify where my messaging isn't working, which concerns persist despite being addressed, and where I've lost credibility on specific topics."

Context: "We have [X] employees. Recent concerns include [topics: compensation transparency, career growth, remote policy, workload, leadership trust]. I've addressed [specific topics] in [X] all-hands meetings over the past [timeframe]. Latest survey shows [specific pain points]. I need to know where my communication is failing before it becomes a retention problem."

Format: "Deliver the following sections:"

MESSAGING GAP ANALYSIS | Table: Topic I Addressed | What I Said (key phrases) | What Employees Still Say (survey/feedback themes) | Gap Type (not heard / not believed / not enough) | Credibility Status (intact / eroding / lost)

PERSISTENT CONCERNS | Issues that appear in feedback despite being addressed 2+ times; Why the message isn't landing (tone, specificity, trust deficit, actions don't match words); What employees need to hear differently

CREDIBILITY AUDIT | Topics where I have high trust (keep doing this); Topics where trust is eroding (course correct); Topics where I've lost credibility (requires action, not words); Evidence for each assessment

COMMUNICATION FIXES | For top 3 gaps: What to say differently next time; What action would restore credibility faster than words; Who should deliver the message if not me; Proof points employees need to see

NEXT ALL-HANDS PREP | Topics to address (prioritized by credibility risk); Topics to avoid until you have action to announce; Questions to expect and how to answer them; One thing to do before the meeting that will matter more than anything you say

Constraints: Distinguish between "they didn't hear it" and "they don't believe it"—these require completely different responses; Flag where actions contradict words (no amount of messaging fixes this); Identify if the real problem is middle management not reinforcing the message; Be honest about topics where more communication will backfire without corresponding action.
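
If you'd rather run this against a folder of exported transcripts than paste everything into a chat window, the same prompt works through an API. A minimal sketch, assuming the OpenAI Python SDK, a local transcripts/ folder of .txt exports, and illustrative placeholder values (the company details and model name are examples, not prescriptions):

```python
from pathlib import Path
from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

client = OpenAI()

# Role from the template above, filled with illustrative values -- substitute your own
ROLE = ("Act as an organizational psychologist analyzing leadership "
        "communication effectiveness for a 200-person software company "
        "where the CEO suspects their messaging isn't landing as intended.")

# Condensed task + format + constraints from the template above
TASK = ("Compare what I've communicated in all-hands meetings against employee "
        "feedback. Deliver: MESSAGING GAP ANALYSIS, PERSISTENT CONCERNS, "
        "CREDIBILITY AUDIT, COMMUNICATION FIXES, NEXT ALL-HANDS PREP. "
        "Distinguish 'not heard' from 'not believed'; flag where actions "
        "contradict words.")

# Concatenate every transcript and survey export in the folder
sources = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("transcripts").glob("*.txt"))
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: use whichever model your plan includes
    messages=[
        {"role": "system", "content": ROLE},
        {"role": "user", "content": f"{TASK}\n\nSOURCES:\n{sources}"},
    ],
)
print(response.choices[0].message.content)
```

Same prompt, same output sections; the difference is you can rerun it quarterly without re-pasting anything. For several months of transcripts, you may need to chunk the sources to fit the model's context window.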

Reality Check: The Training Gap That's Eating Your AI Gains

Workday's January 2026 study of 3,200 employees at $100M+ companies reveals a disconnect: 66% of leaders say skills training is a priority—but only 37% of employees struggling with AI outputs actually receive it. The result? 85% of workers save 1–7 hours weekly with AI, yet much of that time disappears into fixing, rewriting, and double-checking. Leaders who close this gap—pairing tools with structured training and role redesign—turn productivity gains into lasting results.

Translation: The bottleneck isn't AI capability. It's whether you're investing in the people using it.

Ready-to-Use Micro-Prompts

Email Thread Untangler
Paste this email thread. Extract: What was actually decided? Who committed to what, by when? What's still unresolved? What context would someone new need to take action? Ignore social pleasantries and circular discussion. Return as: Decisions made + commitments (owner/deadline) + open items requiring follow-up + one sentence summarizing "where this stands."

Report Stack Synthesizer
Analyze these department updates together. Identify: Where do reports agree on what's working? Where do they contradict each other? What risks appear in multiple reports? What's mentioned once but seems significant? What's conspicuously absent? Return as: Consensus view (2-3 bullets) + contradictions to resolve + risks requiring attention + one question each report should have answered but didn't.

Spreadsheet Story Finder
Review this data. Tell me what it actually means—not what the numbers are, but what they reveal. What trend would concern a board member? What's improving that deserves attention? What looks fine but shouldn't? Where is the story the numbers are telling different from the story we're telling ourselves? Return as: 3-sentence executive summary + one "good news buried in the data" + one "warning sign hiding in plain sight."

Data Gap Identifier (Bonus)
I'm about to decide on [decision]. Before I commit: What information would I want that I don't have? What am I assuming without evidence? What data exists but I haven't looked at? What would change my mind if I knew it? Return as: Known facts supporting decision + assumptions treated as facts + data gaps ranked by how much they'd change the decision + one gap worth filling before proceeding.
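
All four are written to paste into any chat window, but if your team reuses them daily, a small command-line wrapper makes each one pipeable. A sketch under the same assumptions as above (OpenAI Python SDK; the script name, prompt keys, and abbreviated prompt text are illustrative):

```python
# micro.py -- usage: python micro.py untangle < thread.txt
import sys
from openai import OpenAI  # reads OPENAI_API_KEY from the environment

# Abbreviated versions of the micro-prompts above; paste in the full text you use
MICRO_PROMPTS = {
    "untangle": ("Extract from this email thread: decisions made, commitments "
                 "(owner/deadline), open items, and a one-sentence status."),
    "synthesize": ("Across these department updates, identify consensus, "
                   "contradictions, repeated risks, and conspicuous absences."),
    "story": ("Tell me what this data actually means: a 3-sentence summary, "
              "one piece of buried good news, one warning sign."),
}

prompt = MICRO_PROMPTS[sys.argv[1]]  # which micro-prompt to run
material = sys.stdin.read()          # the thread/reports/data piped in

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute your preferred model
    messages=[{"role": "user", "content": f"{prompt}\n\n{material}"}],
)
print(response.choices[0].message.content)
```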

Note from Schuyler (Chief Marketing Officer @ Kiingo AI)

There are too many AI tools to keep up with. Use cases too.

Because of this, I started a folder—every prompt I've actually used, where I used it, notes about which models work for what. Not a productivity system. Just a record of what I've already figured out.

The payoff is not starting from scratch every time. When a new tool drops, I don't wonder "what should I try?"—I pull from my library and test whether it handles my existing workflows better. And the more I practice and iterate, the more interesting use cases I stumble into… Things I wouldn't have thought to try if I were always starting cold.

Your AI fluency compounds when you stop rediscovering what you already know.

Kiingo AI

We help you define, build, and implement your AI strategy.

Peer groups for ongoing strategy and accountability. Training to build real capability. Resources to roll it out across your organization. 300+ companies. 5-15 hours saved per employee per week. Results in six weeks.

Kiingo is an AI consultancy & advisory firm that helps companies unlock real business value with artificial intelligence. From hands-on training to strategic planning and tailored implementation, we partner with growth-minded organizations to build AI fluency, generate more value per team member, reduce inefficiencies, and create lasting competitive advantage. We believe in humans, amplified by AI. Whether you're exploring AI for the first time or ready to scale your efforts, we'll meet you where you are and guide you forward—with clarity, confidence, and results.
