We ran into technical issues last week and had to reschedule. We’re back this Thursday, March 19 — and there’s still time to grab a spot.
Finance teams are built to drive decisions, not spend their week on data entry and manual reconciliation. AI can handle that work now, so yours doesn’t have to.
Josh Sullivan, our COO, will walk you through:
• Invoice processing and AP/AR follow-ups that run without manual input
• Month-end close tracking that doesn’t fall apart when someone’s out
• Budget variance explanations written in plain English, not formulas
• Audit prep that doesn’t require a last-minute all-hands
• Cash flow narratives your board can actually read
No slides full of buzzwords. No theory. Just demos of tools your team could use the following Monday.
This is for CFOs, Controllers, and Finance Managers at companies with 50 to 500 employees.
Date: Thursday, March 19
Time: 9am PT / 11am CT / Noon ET
Save Your Spot →
This Week’s AI Rundown
• Microsoft Copilot became a full agentic execution engine this week, handling multi-step tasks — build a deck, gather financials, email the team, schedule prep time — all from a single prompt. Copilot paid seats are up 160% year-over-year, and 90% of the Fortune 500 now uses it in some capacity. (CNET)
• Anthropic cut prices on extended context and launched a $100 million partner program in the same week. The up-to-100% surcharge on Opus 4.6 and Sonnet 4.6 for long-document workflows is gone, Accenture is training 30,000 professionals on Claude, and Cognizant is opening access across all 350,000 of its associates. If your outside advisors are recommending your AI stack, they’re increasingly Claude-certified. Anthropic also opened its first Washington, D.C. office and launched The Anthropic Institute, a consolidated safety and policy research organization, with a former White House NSC official as its new policy head. (The Decoder, Anthropic, Anthropic)
• Claude Code crossed $2.5 billion in annualized revenue and launched a Code Review tool to help development teams manage the growing volume of AI-assisted code and maintain quality standards at $15–$25 per review. (TechCrunch)
• Google released Gemini Embedding 2, its first fully multimodal embedding model, processing text, images, video, audio, and documents through a single unified system — no more stitching separate models together. The practical result is faster, more accurate search across every document type your business runs on, with 70% lower latency than conventional multi-model setups. Available now in public preview. (Google)
• GPT-5.4 solved a math problem that had stumped every previous AI model by surfacing a 2011 preprint the original problem’s own author didn’t know existed. The mechanism is the story: AI as a research retrieval engine that cross-references at a speed no human can match. GPT-5.2 jumped from 19% to 31% on Epoch AI’s advanced math benchmarks in a single release cycle. (Computerworld)
• Facebook Marketplace now lets Meta AI draft replies to buyer inquiries on behalf of sellers, pulling from listing details like pricing, availability, and pickup location. Sellers review and edit before anything sends. For small sellers managing dozens of inquiries a day, this is the kind of friction removal that actually moves transactions. (TechCrunch)
• OneVest launched an AI-native wealth management workspace for independent financial advisors, with autonomous client follow-ups, AI-generated meeting summaries, and intelligent next-step prompts surfaced automatically. Built for small RIA teams who want to spend less time on admin and more time in front of clients — the kind of workflow shift the finance webinar this Thursday is built around. (PlanAdviser)
• Savvy Cyber Kids partnered with Chatperone to bring COPPA-compliant AI to children and teens, with full parental monitoring, daily message limits, zero ads, and a “homework coaching mode” that teaches rather than just answers. The major platforms weren’t going to build this — so someone else did. (Business Insider)
Reality Check: The Performance Gap Opening Inside Your Sales Team
Eighty-seven percent of sales organizations report using AI in some form. So why are some teams growing revenue while others aren’t? Salesforce surveyed 5,500 sales professionals and found the answer isn’t access. High-performing sellers — the ones who actually grew year-over-year revenue — are 1.7x more likely to use AI prospecting agents than their underperforming counterparts. Those tools cut prospect research time by 34% and email drafting by 36%, which means top performers are spending more of their day on the work that actually closes deals. The difference isn’t whether your team has the tools. It’s whether they’ve built them deliberately into how they work.
The bottleneck isn’t whether AI works in sales. It’s whether your best sellers are quietly pulling away from the rest of your team because of it, and whether you have a plan to close that gap.
Practical: Sales Conversations → What’s Working + What to Change in 10 Minutes
You’ve had twenty sales conversations this month. Some moved forward. Some didn’t. Without stepping back to look at all of them at once, you’re making gut calls on what’s working instead of pattern calls.
AI reads across your conversations faster than you can in a pipeline review. Try this with: notes from recent calls, email threads with prospects, or any CRM fields you fill in after meetings.
Try it as one prompt first. If the output feels too broad, split it into two passes — patterns first, then scripts.
Role: “Act as a sales coach reviewing recent prospect conversations for a [company type] selling to [customer type].”
Task: “Read these conversation notes and tell me what’s working, what isn’t, and what I should say differently.”
Context: “Here are notes from [X] conversations over the past [30/60 days]. We sell [product/service] to [buyer title] at [company type]. Deal size is roughly [$X]. Some moved forward, some stalled.”
Format: “Deliver the following four sections:”
WHAT’S WORKING | The phrases, approaches, or moments that consistently moved conversations forward
WHAT’S HURTING YOU | The patterns that seem to stall or lose deals
TWO THINGS TO CHANGE | The highest-leverage adjustments to make this week, with specific language to try
ONE QUESTION TO ADD | The single question that’s missing from most of these conversations
Constraints: Be specific, not general; if a pattern only appears once, say so; don’t suggest changes that require a complete overhaul.
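If you run this review on a schedule against exported call notes, the five parts above snap together into a single prompt string you can send to whatever model you use. A minimal Python sketch; the function name and every filled-in value are illustrative placeholders, not tied to any particular CRM or AI tool:

```python
# Assemble the Role / Task / Context / Format / Constraints framework
# into one prompt string. All example values are placeholders.

def build_sales_review_prompt(company_type, customer_type, num_calls,
                              window_days, product, buyer_title,
                              deal_size, notes):
    return "\n\n".join([
        # Role
        f"Act as a sales coach reviewing recent prospect conversations "
        f"for a {company_type} selling to {customer_type}.",
        # Task
        "Read these conversation notes and tell me what's working, "
        "what isn't, and what I should say differently.",
        # Context
        f"Here are notes from {num_calls} conversations over the past "
        f"{window_days} days. We sell {product} to {buyer_title} at "
        f"{company_type}. Deal size is roughly {deal_size}. "
        "Some moved forward, some stalled.",
        # Format
        "Deliver: WHAT'S WORKING | WHAT'S HURTING YOU | "
        "TWO THINGS TO CHANGE | ONE QUESTION TO ADD",
        # Constraints
        "Constraints: Be specific, not general; if a pattern only "
        "appears once, say so; don't suggest changes that require a "
        "complete overhaul.",
        "Notes:\n" + notes,
    ])

prompt = build_sales_review_prompt(
    company_type="bookkeeping agency",
    customer_type="small-business owners",
    num_calls=20, window_days=30,
    product="monthly bookkeeping", buyer_title="owners",
    deal_size="$500/mo", notes="(paste call notes here)")
print(prompt)
```

Keeping the five parts as separate strings makes the monthly re-run trivial: only the Context block and the pasted notes change between runs.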
Tip: Run this monthly. Sales teams that review conversation patterns close 35% more late-stage deals than teams that treat each conversation as a one-off.
Ready-to-Use Micro-Prompts
The Reference Decoder
Paste in what a reference said about a candidate — word for word if you have it. Read the language for what’s being left unsaid: the qualifiers that trail off, the enthusiasm that isn’t there, the praise that pivots to “he works best with clear direction.” Tell me what this reference is actually signaling and whether I should hire this person. Return as: What they said | What they meant | Hire / Pause / Pass.
The Opening Line Test
Paste the first line of your last five emails, posts, or newsletters. For each one, tell me whether it earns the second line or gives someone a reason to stop. What’s the pattern in the ones that fall flat, and what’s the one fix I should apply to every opening going forward? Return as: Line | Pass/Fail | Why | The one fix to apply everywhere.
The Comment Reader
Paste the comments from your best-performing post or email — don’t read them as compliments, read them as data. What specific words, phrases, or ideas did your audience react to most? What did they want more of? Use that to tell me what my next piece of content should be about and how to open it. Return as: What landed | What they wanted more of | Next content idea with a suggested first line.
Note from Andy (Digital Marketing Manager @ Kiingo AI)
Something shifted for me recently that I didn’t see coming.
I’ve been deep in AI integration for work, building workflows, writing prompts, figuring out how to give clear enough instructions to actually get useful output. At some point I noticed the same habits showing up in my personal life. Not because I started using AI for everything outside of work, but because the practice had changed how I think. To get good output from AI, you have to get organized first. You think through what you actually need, what good output looks like, what you’re willing to accept. Do that enough times and it stops being a pre-step. It just becomes how you approach things.
I started bringing that same thinking to personal finances, to making plans with family, even to creative projects I’d been putting off. The AI wasn’t always in the picture at that point. The habit was.
The unexpected upside of AI integration isn’t just what it does for your business. It’s what the discipline of using it well does to your people. Every employee who gets genuinely good at working with AI is also getting better at thinking through problems on their own.
Kiingo AI
Kiingo is an AI consultancy & advisory firm that helps companies unlock real business value with artificial intelligence. From hands-on training to strategic planning and tailored implementation, we partner with growth-minded organizations to build AI fluency, generate more value per team member, reduce inefficiencies, and create lasting competitive advantage. We believe in humans, amplified by AI. Having the tools isn’t the hard part anymore. Knowing where they fit, training your team to use them, and measuring what comes back is.
Most of our conversations start the same way: the tools are already there, the results aren’t. If that sounds familiar, let’s figure out why together.