Upcoming Free Webinar
A practical session on AI use cases, workflows, and agents for ops teams.
Presented by Josh Sullivan, COO, Kiingo AI
What we'll cover:
• Real use cases where AI can support your day-to-day operations work
• How to turn those use cases into automated, repeatable workflows
• How to build agentic versions that run on their own like clockwork
Whether you're managing processes, coordinating across teams, or just trying to get more done with less, this session will give you practical insights you can apply right away.
April 22nd at 9:00 AM PT / 11:00 AM CT / 12:00 PM ET
Save your spot here →
This Week's AI Rundown
• Amazon committed another $25 billion to Anthropic at a $380 billion valuation, with Anthropic agreeing to spend over $100 billion on AWS over the next decade and lock in up to 5 gigawatts of training and inference capacity. The vendor landscape is now tied to specific clouds and custom silicon, and easy switching between Claude, GPT, and Gemini is getting harder to count on. (Anthropic, CNBC)
• Anthropic shipped four products in a single week: Claude Opus 4.7 with 3x higher-resolution vision and self-verification of its own output, Claude Design for turning plain-language prompts into prototypes, slides, and one-pagers (Pro/Max/Team/Enterprise), scheduled Claude Code Routines inside the Claude app, and an official preview of Claude Mythos, the company's most powerful model, restricted to select partners. (Anthropic: Opus 4.7, Anthropic: Claude Design, Anthropic: Mythos)
• OpenAI disclosed that enterprise revenue has crossed 40% of total and is on pace to match consumer revenue by year-end. Codex expanded beyond coding into computer use, image generation, memory, and automations, effectively a desktop agent. A new research-preview model, GPT-Rosalind, targets biology and drug discovery with partners including Amgen, Moderna, and Thermo Fisher. Separately, preview builds surfaced a ChatGPT Agents platform codenamed "Hermes" with a Studio UI, scheduled runs, Slack integration, and agent roles like "Data Analyst" and "Chief of Staff" — a direct answer to Anthropic's Cowork and Managed Agents. (OpenAI: Enterprise, OpenAI: Codex, OpenAI: GPT-Rosalind, Testing Catalog)
• Google shipped "Nano Banana 2" for high-volume image generation plus a wave of Workspace updates: new Gemini Enterprise admin console controls, Gemini 3.1 Flash text-to-speech across 70+ languages with SynthID watermarking, and semantic search over chat history across Gemini Enterprise and Business. (Google AI, Google Workspace)
• Microsoft added multi-agent orchestration to Copilot Studio using the open Agent-to-Agent (A2A) protocol, letting one Copilot agent delegate work to another across Microsoft Fabric, 365 Agents SDK, and external platforms. Microsoft also expanded its Publicis partnership, embedding Copilot Studio, Agent 365, and Microsoft IQ directly into the agency's marketing workflows. (Microsoft Copilot, Microsoft Newsroom)
• Salesforce launched Headless 360, exposing the entire CRM as APIs and MCP tools so AI agents built in Claude Code, Cursor, Codex, or Windsurf can access Salesforce data and workflows directly. 60+ new MCP tools and 30+ coding skills went live with the launch, plus new controls for governing agent behavior before and after deployment. (Salesforce, The Register)
• Canva unveiled AI 2.0 at Canva Create 2026: conversational design (describe an idea, get an editable design back), agentic orchestration across documents, spreadsheets, and interactive web pages, a "Memory Library" that retains brand preferences, and native connectors for Slack, Gmail, Notion, Zoom, and Google Calendar. (Canva)
• Allbirds shares jumped more than 400% after the shoemaker announced it was raising capital to pivot away from sneakers and toward "AI computing infrastructure." The footwear-to-AI pivot isn't the sign of a bubble. It's just a company whose stock had fallen 99% from its IPO finding a way to ride the narrative. (Reuters)
What Studies Are Saying
Three findings worth knowing this week.
• Gartner surveyed 353 data, analytics, and AI leaders and found successful AI adopters invest up to 4x more in foundations like data quality, governance, AI-ready people, and change management than peers seeing poor outcomes. Just 39% of tech leaders are confident their AI investments will have a positive financial impact. (Gartner, April 2026)
• Stanford's 2026 AI Index found organizational AI adoption has reached 88%, and generative AI hit 53% population adoption within three years, faster than the PC or the internet. Four in five university students now use generative AI. (Stanford HAI, April 2026)
• The same Stanford AI Index reports 62% of organizations cite security and risk as the primary blocker to scaling agentic AI. Only 18% now rate their AI incident response as "excellent," down from 28% a year ago. (Stanford AI Index via Kiteworks, April 2026)
Prompt of the Week: The Handoff Test
[Describe a task, workflow, or process you're thinking about handing off to an AI agent. It could be something repetitive you do every week, a back-office workflow your team runs, or something an agent product (ChatGPT Agents, Claude Code, Copilot Studio, Salesforce) has been pitching you on. Include what you know: how often it runs, who currently owns it, what data it touches, what decisions get made, and what's at stake if it goes wrong.]
Before telling me whether to deploy an agent here, stress-test the idea. First, tell me whether this is actually a good agent candidate, or whether I'd be better off using AI interactively (pulling it in when I need it) instead of handing off the whole thing. Be specific about why. Second, name what could go wrong. What data access does the agent need, what decisions does it make autonomously, and what happens if it's wrong or gets hijacked? Distinguish between failures that cost me a tedious correction and failures that cost me money or reputation; that difference matters. Third, tell me what I need in place before flipping the switch: data cleanliness, permission boundaries, audit trails, someone responsible for reviewing outputs, a way to roll it back. Fourth, tell me which parts of the task should stay with a human no matter what.
Return it as: agent candidate (yes, no, or partial) + why + top 3 risks + what must be in place first + what stays human.
Note from Andy (Digital Marketing Manager @ Kiingo AI)
Recurring tasks need more up-front organization than one-offs. I thought I knew that already. Then I started building more Claude Cowork scheduled tasks, and realized I didn't know it the way I know it now.
When you ask Claude to do something once, you can iterate. You see the output, you adjust, you try again. It's a conversation. But a scheduled Cowork task fires off into the world unattended. If the prompt is vague, the first output is vague, and the tenth one is still vague a week later. You're not iterating in real time anymore.
What's been saving me is using Claude Desktop as the planning layer. If I don't know exactly what I want the Cowork task to produce, I don't start by writing the Cowork prompt. I have a conversation with Desktop first: inputs, format, output shape, edge cases. Once I've landed on the shape of it, I ask Desktop to generate the Cowork scheduled task prompt for me.
It's the difference between hiring a contractor and talking to your architect first. The contractor can build anything. They just need a drawing. Desktop is the architect. Cowork is the contractor. Figure out what you're building before someone starts swinging a hammer unsupervised.
Kiingo AI
Kiingo helps mid-market companies figure out what actually works with AI and stop spending time on what doesn't. Training, strategy, implementation.
If your team is using AI but you're not sure it's moving the needle, that's exactly the conversation we have. kiingo.com/contact