Stop Renaming the CMO. Start Leveling Up Your Team
Marketing is marketing. Not engineering.
Another week, another post about what we should call the CMO now. Chief Market Orchestrator. Chief Revenue Architect. Chief Growth Something.
Matt Heinz had a post recently asking what the modern CMO should be called. Hundreds of comments debating titles. Meanwhile, most marketing teams can barely use AI beyond asking ChatGPT to write a headline.
I commented something like: “If we as marketers want to be taken seriously, we need to stop inventing new names and focus on the work that needs to be done.”
That’s it. That’s the post.
The work is changing. The titles are irrelevant. And the gap between marketing teams that are actually adopting AI and those just talking about it is getting wider every month.
The Real Problem: Most Teams Are Stuck at L1-L2
Geoff Charles at Ramp posted a thread recently about getting their company “AI-pilled.” 99.5% of employees active on AI tools. Non-engineers shipping code to production. The whole thing.
He included a leveling framework:
- L0: Sometimes uses ChatGPT
- L1: Built custom GPTs, dabbled with coding tools
- L2: Built an app that automates part of their job
- L3: Systems builders who create infrastructure for everyone else
It’s a useful frame. But it’s company-wide and generic. What does this actually look like for marketing teams specifically?
Here’s my version. Five levels, marketing-specific, based on what I’m seeing across our clients and our own team.
The 5-Level AI Adoption Ladder for Marketing
Level 1: AI as Search
ChatGPT as a slightly better Google. Basic queries. “Write me a headline for this ad.” “Give me 5 subject lines.” “What’s the difference between demand gen and lead gen?”
The output is generic. The user doesn’t know why it’s generic. They’re not giving context, not iterating, not building on responses. They type a question, get an answer, move on.
Most marketers are here. They’ve “tried AI” and concluded it’s useful for simple tasks but nothing transformative. They’re not wrong about the output quality. They’re wrong about why.
Level 2: Prompt Engineering
The user realizes the output is mediocre and starts learning why. They write longer prompts. They specify tone, audience, format. They iterate. “Make it shorter.” “More conversational.” “Add a statistic.”
Output improves. Copy is decent. Brainstorms are useful. Research summaries save time. But it’s still generic in a fundamental way — the AI doesn’t know anything about your company, your product, your voice, your customers.
This is where a lot of “AI-savvy” marketers plateau. They’ve gotten good at prompting. They think that’s the skill. It’s not.
Level 3: Context-Aware AI
This is where it gets interesting.
Custom GPTs. Claude Projects. NotebookLM. The user uploads their docs. Brand guidelines. Past campaigns. Competitor research. Customer interviews. Product specs. Pricing pages.
Now the AI isn’t prompting blind. It has context. It knows your voice because it’s read 50 examples of your writing. It understands your positioning because it’s ingested your messaging docs. It can reference actual data from your actual campaigns.
The output shifts from “generic but competent” to “actually useful for my specific situation.”
Most marketing teams haven’t made this jump. They’re still prompting raw ChatGPT with no context, getting mediocre output, and concluding AI is overhyped.
The delta between L2 and L3 is enormous. L3 is where AI starts feeling like a real teammate instead of a parlor trick.
Level 4: Data Analysis
Spreadsheets with AI. Code Interpreter. Actual analytical work, not just words.
Upload your campaign performance data. Ask questions. “Which creative themes are fatiguing fastest?” “What’s the correlation between ad spend and pipeline by channel?” “Identify anomalies in the last 90 days.”
This is beyond copywriting and brainstorming. This is using AI to see patterns in your data that you’d miss or that would take hours to find manually.
For paid media specifically, this is where AI gets powerful. You’re not asking it to write an ad. You’re asking it to analyze 10,000 rows of performance data and surface what matters.
Most marketers don’t even know this is possible. They think AI is for words. It’s also for numbers. And that’s where a lot of the leverage actually lives.
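To make this concrete, here’s a minimal sketch of the kind of analysis involved. The data is a toy stand-in for a real performance export (in practice you’d load something like `pd.read_csv("campaign_performance.csv")`), and the column names are assumptions — swap in whatever your platform actually exports:

```python
import pandas as pd

# Toy stand-in for a real performance export; column names are assumptions.
df = pd.DataFrame({
    "channel":  ["search"] * 4 + ["social"] * 4,
    "spend":    [100, 200, 300, 400, 100, 200, 300, 400],
    "pipeline": [1000, 2100, 2900, 4100, 400, 350, 300, 250],
})

# Spend-to-pipeline correlation per channel: a rough read on which
# channels still scale with budget and which are saturating.
corr = df.groupby("channel")[["spend", "pipeline"]].apply(
    lambda g: g["spend"].corr(g["pipeline"])
)

# Pipeline generated per dollar spent, per channel.
efficiency = df.groupby("channel")[["spend", "pipeline"]].apply(
    lambda g: g["pipeline"].sum() / g["spend"].sum()
)

print(corr)
print(efficiency)
```

The point isn’t that you’d write this yourself — it’s that when you hand AI your raw export and ask “which channels still scale with spend,” this is roughly the analysis it runs for you, across 10,000 rows instead of eight.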
Level 5: Agents and Automation
MCP. APIs. Claude Code. Replit. Building agents. Automating workflows.
At L5, the AI isn’t just answering questions or producing output. It’s doing work autonomously. Monitoring signals and triggering actions. Enriching leads and routing them to the right sequences. Pulling data, transforming it, generating reports, sending them — without human intervention.
This is infrastructure. It’s not “using AI” — it’s building with AI.
Very few marketers are here. The ones who are have effectively become a different kind of operator. They’re not just running campaigns. They’re building systems that run campaigns.
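The shape of an L5 workflow is simpler than it sounds: pull data, transform it, produce a report, deliver it. Here’s a hedged sketch — `fetch_metrics`, `build_report`, and `send` are all invented names, and `fetch_metrics` returns canned data where a real agent would call an ads API, while `send` would POST to a Slack webhook or email service:

```python
from datetime import date

def fetch_metrics() -> list[dict]:
    # Stand-in for a real API call (Google Ads, Meta, etc.); canned data here.
    return [
        {"channel": "search", "spend": 4200, "pipeline": 31000},
        {"channel": "social", "spend": 2600, "pipeline": 7800},
    ]

def build_report(rows: list[dict]) -> str:
    # Turn raw rows into a human-readable summary.
    lines = [f"Weekly performance report: {date.today()}"]
    for r in rows:
        roi = r["pipeline"] / r["spend"]
        lines.append(f"{r['channel']}: ${r['spend']} spend, {roi:.1f}x pipeline")
    return "\n".join(lines)

def send(report: str) -> None:
    # Stand-in for delivery; a real agent would POST to Slack or send email.
    print(report)

send(build_report(fetch_metrics()))
```

Wire that to a scheduler and a real API, and the Monday-morning reporting ritual stops being a human task.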
Why L3 Is the Unlock
If I had to pick the single biggest jump, it’s L2 to L3.
L1 and L2 are both “prompting a general-purpose AI.” You get better at it, but you’re still working with a tool that knows nothing about your specific situation.
L3 changes the game. Suddenly, the AI understands your context. It knows your brand voice because it’s read your best-performing content. It knows your competitive positioning because it’s ingested your battlecards. It knows your customer pain points because it’s read the interview transcripts.
The output quality gap is massive. And the use cases expand — you’re not just asking for headlines anymore. You’re asking for strategic recommendations grounded in your actual data.
Here’s the thing, though: L3 requires setup. You have to actually upload your docs. Organize them. Create the project or the custom GPT. Most marketers don’t do this because it feels like extra work upfront. So they stay at L2, prompting blind, getting generic output, and thinking that’s what AI can do.
L3 is maybe 30 minutes of setup. The ROI is hundreds of hours of better output. But humans are bad at upfront investment, so most people skip it.
The Adoption Problem
The issue isn’t capability. The tools exist. L5 is possible today for anyone willing to learn.
The issue is adoption. Getting people to default to AI. Not “can they use it” but “will they.”
This is what Geoff’s Ramp thread actually gets right. He doesn’t talk about tools. He talks about culture.
- Slack channels celebrating AI wins
- Office hours to help people level up
- All-hands demos where people show what they built
- Leaderboards tracking usage
- Manager accountability
He mentions that Ramp considered rewarding token usage before worrying about quality. Just get people using the tools. Build the habit first. Quality comes after.
Meta apparently does something similar — they track and reward AI usage to drive adoption, even before they measure impact.
It sounds dumb. Why would you reward activity over outcomes? Because the activity creates the habit, and the habit creates the skill, and the skill creates the outcomes. You can’t optimize for quality if people aren’t using the tools in the first place.
“Mandates decay. Culture is what remains.”
That’s the line from Geoff’s thread that stuck with me. You can mandate AI usage. Your team will nod and ignore it. But if they see their peers shipping faster, building cool stuff, getting recognized — they’ll want to level up on their own.
What We’re Actually Doing at 42
We’re a 20-person agency. We don’t have Ramp’s resources. We can’t build our own internal AI platform or host 700-person hackathons.
But we’re further along than most.
Claude Code as the operating system.
Not ChatGPT in a browser. Claude Code running locally, hooked into APIs, with custom skills that automate real workflows. Competitive intel. Campaign analysis. Reporting. Content production. The AI doesn’t just answer questions — it does work.
Building Agents & Skills, not just prompts.
We’re packaging workflows into reusable skills that anyone on the team can invoke. One command runs a full competitive analysis. Another generates a campaign brief from a transcript. This is the L5 stuff — building infrastructure that makes everyone faster.
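As a rough illustration — the skill name and steps here are invented, and the exact file format depends on how your Claude Code setup defines skills — a packaged skill can be as simple as a markdown file of instructions with a bit of metadata:

```markdown
---
name: competitive-analysis
description: Run a full competitive analysis from a competitor name
---

# Competitive Analysis

1. Pull our latest battlecard for this competitor from the shared repo.
2. Fetch the competitor's public pricing and positioning pages.
3. Compare against our messaging doc and flag gaps or outdated claims.
4. Write the summary to the reports folder with today's date.
```

The value isn’t the file. It’s that the workflow is written down once, versioned, and invocable by anyone on the team instead of living in one person’s head.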
1:1 onboarding.
We’re not sending a Loom and hoping people figure it out. We’re doing individual Claude Code setup sessions. Walking people through configuration. Getting their environment working for their specific role. High-touch, but it’s the only way to actually drive adoption.
GitHub for shared infrastructure.
Team repo with shared API keys, config files, skill definitions. When someone builds something useful, it’s available to everyone. The knowledge compounds instead of staying siloed.
Shipping real things.
Our creative director, Alejandra (not an engineer), is pushing interactive content to Vercel. Production deployments. From someone who six months ago had never touched a terminal. That’s what L4-L5 looks like in practice.
We’re not at 99.5% adoption. Not even close. But the people who’ve leveled up are building things that seemed impossible a year ago. And every week, someone else starts to see what’s possible.
The Point
Stop debating what to call the CMO. The title doesn’t matter.
What matters is whether your team can actually use AI at L3+ or whether they’re stuck at L1-L2, wondering why the output is generic.
The work is changing. Campaign setup, analysis, creative iteration, reporting, enrichment, outbound — all of it can be faster and better with AI. But only if people actually adopt it, build the habits, and level up.
That’s not a tool’s problem. It’s a culture problem.
And if you’re a marketing leader spending time on titles instead of figuring out how to move your team from L2 to L3, you’re focused on the wrong thing.
Where’s your team on the ladder? And what would it take to move them up one level?