How Vector uses Vector MCP

Kelly Arndt · May 14, 2026 · 5 min read

Using AI has changed the way I work.

My morning routine used to be something like:  

Alarm goes off. 

Eyes barely open. 

Grab a coffee. 

Play a couple 2:1 Chess games (great primer for getting my strategy brain moving).

Then, I’d sit down and open LinkedIn Campaign Manager to see if my budget paced itself off a cliff while I was sleeping. Then Google Ads. Then a spreadsheet. All before a second coffee. 

The core of my work as Vector’s Demand Gen Lead is campaign strategy, creative testing, budget allocation, and audience building. But the daily and weekly analysis takes real bandwidth, especially pulling, confirming, and analyzing data. I’d open every ad platform, pull spend numbers into a spreadsheet, calculate pacing by channel and in total, figure out whether I was over or under on budget, then do the same thing for creative performance and campaign optimization.

Five hours a week, minimum, just assembling data I already had access to. The answers were always in there. I just had to go dig them out, stitch them together, and make them look presentable before I could actually do anything.

Now, with Vector MCP, much of that’s done before I’m at my desk. Here’s the system I built. 

Step zero: giving Claude the context it needs


This is the part most people skip, and it’s the reason their outputs are useless.

The MCP gives Claude access to your ad platform data. But Claude doesn’t know why you’re running a campaign unless you tell it. And if you don’t tell it, you’ll get answers that are technically correct but completely miss the point.

Before I set up any of my workflows, I gave Claude three things:

1. What my campaigns are actually trying to do 

Not the objective set in the ad platform, but the real goal. What am I trying to accomplish? How does this campaign feed into the others? For example, my connected TV campaign is a brand awareness play measured only by reach. If I don’t tell Claude that, it sees no engagement metrics and no conversions and tells me to shut it off. Same with a product demo video where the only thing I care about is whether a small retargeting audience is engaging with bottom-of-funnel content. The in-platform metrics won’t tell Claude any of that. You have to.

2. My budget framework

I'm constantly thinking about coverage: building demand and capturing it. That means deciding what percentage goes to brand awareness, what goes to demand capture, and what goes to ABM and our signal-based plays, and which campaigns fall into which bucket. This matters because if Claude doesn’t know that a campaign is allocated to brand, it’ll look at the conversion numbers and wonder why you’re spending money on it. It also needs to know the framework for the quarter or the month so it can evaluate whether a campaign is doing its job within the plan you set, not just against some generic standard.

3. Benchmarks and thresholds

You want to give Claude parameters for when you’d be really happy, when things are fine, and when you’d start worrying. For me, that includes things like my target blended CPL and the number where it becomes a problem, but it extends to any metric I’m tracking. I also fed in custom benchmarks about what competitors in our space pay per lead. The goal is that Claude can look at a number and have enough context to know whether to flag it or move on.
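Written down, the three inputs from this section might look something like the sketch below. Every campaign name and number here is hypothetical; the point is the shape of the context you hand to Claude, not the specific values:

```python
# Hypothetical "step zero" context doc. All names and figures are made up
# for illustration; swap in your own goals, buckets, and thresholds.
CAMPAIGN_CONTEXT = {
    "goals": {
        "ctv_brand": "Awareness play; measure reach only. Do not judge on conversions.",
        "demo_retargeting": "Bottom-of-funnel engagement from a small retargeting audience.",
    },
    "budget_framework": {  # share of total paid budget per bucket
        "brand": 0.30,
        "demand_capture": 0.50,
        "abm_signal": 0.20,
    },
    "benchmarks": {  # what "great", "fine", and "worried" look like
        "blended_cpl": {"great": 80, "fine": 120, "worried": 150},
    },
}
```

Whether this lives in a project file, a Notion doc, or the prompt itself matters less than that Claude sees it before every analysis.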

The three workflows I run every week


With that context in place, I use the MCP for three things more than anything else: budget pacing, campaign performance, and creative performance. 

Here’s what each one looks like:

1. Budget pacing

Every day I need to know whether I’m on track to hit my monthly and quarterly spend targets, both by channel and overall.

This one is fully automated. I built an agent with a skill in Claude (basically a set of step-by-step instructions Claude follows on its own) that runs every day. Here’s what’s in it:

  1. My Q2 budget targets and the data sources to pull from.
  2. The math. Days elapsed in the month divided by total days = percent of month consumed. Spend to date divided by monthly budget = percent of budget consumed. It calculates a variance and flags anything more than 10% over or under.
  3. A data check. I built this in because I don’t want to blindly trust the output. Claude runs a quick sanity check before moving on. I always give data a second look, but I like to assign a specific job to be done here to cut out the hallucinations that commonly occur with LLMs.
  4. The output. Results drop into a Notion page tracking daily, monthly, and quarter-to-date spend. An Asana task gets created with my name on it so I have a reminder to review the data and act on anything that’s off. I also get a Slack notification before I’m at my desk.
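The math in step 2 is simple enough to spot-check by hand. Here’s a rough Python sketch of the same calculation; the skill itself is written as plain-language instructions to Claude, not code, and the example numbers below are made up:

```python
from datetime import date
import calendar

def pacing_check(spend_to_date, monthly_budget, today, threshold=0.10):
    """Flag spend that is more than `threshold` over or under pace."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    pct_month = today.day / days_in_month        # percent of month consumed
    pct_budget = spend_to_date / monthly_budget  # percent of budget consumed
    variance = pct_budget - pct_month            # + = overpacing, - = underpacing
    flag = abs(variance) > threshold
    return pct_month, pct_budget, variance, flag

# Day 15 of a 30-day month with $6,500 spent of a $10,000 budget:
pct_m, pct_b, var, flag = pacing_check(6500, 10000, today=date(2026, 6, 15))
# pct_m = 0.5, pct_b = 0.65, variance ≈ +0.15 -> flagged as overpacing
```

Run this per channel and once on the blended totals and you have the same two-games-at-once view the skill produces daily.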

All I do now is spot-check it daily. I’m looking for major drift in underpacing or overpacing, or significant performance issues in creative. I set thresholds in my campaign frameworks so Claude knows when to flag an issue or a win.

Before this, the same job meant opening every ad platform, pulling numbers into Excel, and deciding whether to shift budget between LinkedIn and Google. Am I overpacing on one? Underpacing on the other? Should I move money? Is my audience experiencing ad fatigue? Is my copy speaking relevantly to my ICP? I was playing two games at once: individual channel pacing and total paid pacing. The math itself doesn’t take long, but the mental overhead of deciding when to check and whether I was caught up was the real cost.

Now I just always know. And I have guardrails and thresholds in place to signal when I need to look a layer deeper. 

2. Campaign performance

This is the big-picture check: which campaigns are delivering, which ones aren’t, and where should I be moving budget.

I’ll ask Claude what’s working with my campaigns, what’s not, and what opportunities I have, and the follow-up is different every time. But because I’ve already loaded the context from step zero, Claude doesn’t just rank everything by conversions. It knows which campaigns are brand, which are demand, which are signal-based or built for a certain ABM stage, and what “good” looks like for each one.

What I usually get back:

  1. A reallocation suggestion based on both engagement and conversion performance. It looks at what’s actually delivering and where budget would be better spent.
  2. A kill list. Campaigns where, if certain metrics don’t shift within two weeks, I should cut the budget and move it somewhere else.
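To make the kill-list rule concrete, here’s a hypothetical sketch of the logic in Python. The campaign names, metrics, and targets are invented; in practice this lives as a standing instruction in the prompt, not as code:

```python
from datetime import date, timedelta

def kill_list(campaigns, window_days=14, today=None):
    """Return campaigns whose watched metric hasn't hit its target
    within the review window. Purely illustrative logic."""
    today = today or date.today()
    flagged = []
    for c in campaigns:
        window_elapsed = today - c["flagged_on"] >= timedelta(days=window_days)
        if window_elapsed and c["metric"] < c["target"]:
            flagged.append(c["name"])
    return flagged

# Hypothetical campaigns, each watching CTR against a 1% target:
candidates = kill_list(
    [
        {"name": "retarget_v1", "flagged_on": date(2026, 4, 20), "metric": 0.006, "target": 0.01},
        {"name": "abm_tier1", "flagged_on": date(2026, 5, 10), "metric": 0.014, "target": 0.01},
    ],
    today=date(2026, 5, 14),
)
# retarget_v1 has been under target for over two weeks -> candidate to cut
```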

Then the loop is the same every time:

  1. Claude does the analysis and outputs it to Notion.
  2. I get an Asana task to review the recommendation.
  3. I review the recommendation and log whether I agree/disagree and why.
  4. I go make the change in the platform.

3. Creative performance

Separate from how a campaign is doing overall, I need to know which specific ad creatives are performing and which ones are starting to fatigue. A campaign can be set up correctly and still underperform if the creative is stale.

One of the workflows I built for this does three things:

  1. Pulls creative performance for the last seven days.
  2. Calculates spend vs. my monthly target.
  3. Benchmarks against competitor data for what companies in our space are paying per lead. Real numbers, not Claude guessing.
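One plausible way to express the fatigue check, sketched in Python with made-up numbers (the real workflow pulls the 7-day data through Vector MCP and Claude does the comparison):

```python
def fatigue_flags(creatives, drop_threshold=0.20):
    """Flag creatives whose CTR fell more than `drop_threshold` versus
    the prior week. Illustrative logic only; thresholds are yours to set.

    `creatives` maps a creative name to (prior_week_ctr, last_week_ctr).
    """
    flags = []
    for name, (prior, recent) in creatives.items():
        if prior > 0 and (prior - recent) / prior > drop_threshold:
            flags.append(name)
    return flags

# Hypothetical creatives:
flags = fatigue_flags({
    "hero_video_v2": (0.012, 0.008),    # ~33% CTR drop -> fatiguing
    "static_carousel": (0.010, 0.011),  # improving -> fine
})
```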

I use this for our weekly creative review with the marketing team. Instead of spending a couple hours sorting and formatting campaign data into something presentable, Claude pulls the data and builds the visualization for me. One of the dashboards it built was even editable, so I could change a couple things on it.

How I verify everything

One thing you won’t catch me doing is taking what Claude gives me and running with it. Instead, I read through the takeaways and ask myself if it makes sense. If any data point surprises me, I go into the ad platform and check it manually.

For anything going into reporting, the first pass is Claude, the second pass is me. But I don’t have to deal with the formatting, the visualization, or the initial data pull. That saves me about an hour per report. My job at that point is just comparing numbers.

I still go back to the ad platforms for any action I need to take. Creating campaigns, pausing ads, uploading new creative, adjusting settings—those all stay squarely with me. 

Guardrails I built into every workflow


Claude is good at pulling data and running analysis, but it doesn’t come with judgment out of the box. If you don’t set boundaries for it, you’ll get outputs that look convincing but are missing something, or that include things Claude shouldn’t have made up. 

These are the guardrails I’ve baked into every skill and prompt I run to prevent that:

Tell Claude to never make up numbers

It will try. It might calculate your CAC even though it doesn’t have salary or overhead data. It’ll use industry benchmarks and present the result like it’s real. You have to be explicit: don’t fabricate anything, ever. And even then, double-check the numbers before making any significant directional change.

Make it cite its sources

I tell Claude to tag every output with where the data was pulled, a timestamp, and where I’d go to verify it. That way I know if a number came from the MCP, which is live data I can validate in the ad platform, versus a Notion doc I wrote six months ago that might be outdated. Best practice is to interrogate the data. 

Teach it that conversions aren’t everything

Claude defaults to conversions as the only metric worth optimizing for. And honestly, that’s how most of paid advertising works. But campaigns feed into each other. If you have 10 campaigns and five of them aren’t bottom of funnel, Claude will tell you to turn those off. You need to teach it how your campaigns work together.

Keep your asks small

The bigger the data set you try to pull at once, the more likely Claude is to hallucinate a number. Think about it like assigning tasks to a junior. You wouldn’t hand them five things at once and say “figure it out.” Give them one job at a time. Five jobs is fine, but they do them separately.

Where to start if you want to build this

Start with the thing you do every single day that takes a little bit of time. For most people running paid, that’s probably checking key metrics like conversions, audience penetration, dwell time, LP clicks, CTR, or CPM in your campaign manager.

Open Claude, connect the Vector MCP, and ask: “Pull me these three metrics on my campaigns, month to date.”

See what comes back. Then slowly build out how you’d normally analyze performance on that platform. Ask follow-up questions the way you’d think through the data yourself.

Once you’ve done that a few times, you’ll start to see the patterns in what you ask. That’s when you can turn it into a repeatable skill and stop typing the same prompt every Monday morning.

Before you start, make sure your conversion tracking is set up correctly in your ad platform. If you want to understand pipeline impact, you’ll need offline conversion tracking sorted out, too. But if you’re just looking at in-platform campaign performance, that’s enough to get going.

Ready to try this yourself? 

Vector MCP connects your ad platforms to Claude so you can do everything I walked through here: pull performance data, build pacing workflows, run creative reviews, and get answers about your campaigns without logging into five different dashboards first.

Remember, not all MCPs are built the same. Vector is an approved API partner with the ad platforms we connect to, which means you’re not going to get flagged or banned because some tool is scraping your account the wrong way. The data you’re working with is also tied to real people engaging with your actual ads, not company-level estimates stitched together from somewhere else. You want to be sure of that when you’re making budget decisions based on what Claude tells you.

We’re rolling out early access now. Sign up here →
