Pillar 1: Paid Media Operations

The Paid Media Operations Guide for B2B SaaS Teams

Paid media operations is the system that governs how changes get made, who approves them, and how your team answers "what changed?" when performance shifts. Most teams don't have one. This guide is how to build it.

CPL jumps 40% in a week. Three people on a call. One says targeting drift. One says seasonal. One thinks someone adjusted bids on a mid-funnel campaign without telling anyone. Nobody has a change log. Nobody can check.

This isn't a strategy problem. This is an operations problem.

What paid media operations actually means

Campaign management is creating campaigns, adjusting bids, reviewing creative, interpreting metrics. That's the work itself.

Paid media operations is the system around the work:

  • Who can make what changes, under what conditions
  • How proposed changes get reviewed before going live
  • How every change gets logged with rationale and expected outcome
  • How naming and UTM conventions are enforced across platforms
  • How the team answers "what happened this week?" without reconstructing it from memory

When one person controls one platform, the operational layer lives in their head. That works until you add a second person or a second platform. Then it breaks, usually silently.

Why it breaks at Series B specifically

At Seed and Series A, the setup is tight. One operator, one or two platforms, modest spend. You notice when something changes because you made the change.

Series B changes three conditions simultaneously.

More platforms. You've added LinkedIn to Google. Maybe Reddit. Each has different change cadences, different APIs, different blast radius. A "change" means something different on each one.

More people touching the accounts. Internal growth manager. Agency partner. Head of growth making strategic calls. All of them have credentials. All of them make changes. Coordination is now an operational problem, not a communication preference.

And underneath both: the budget is bigger, and the board is watching. Series B investors expect paid to scale. That creates urgency to move fast, which compounds errors when there's no operational infrastructure catching them.

What this looks like when it breaks

What happened | What it looked like | Root cause
Agency updated UTM template without notice | 3 weeks of broken attribution in CRM | No change log, no approval required for UTM changes
Campaign paused "temporarily" during a test | Paused for 6 weeks; nobody caught it | No audit trail, no review cadence to catch stale state
Budget shifted to high-performing campaign | Correct decision, but other campaigns starved | No documented rationale; change looked like an error in retrospect
Audience overlap grew across Google and LinkedIn | Frequency spiked; CPL rose gradually | No cross-platform naming or audience registry
CPL jumped week-over-week | Three theories, no answer | No change log to check; can't isolate the variable

None of these are strategy failures. The targeting was often sound. The bids were reasonable. The failures live in the gap between "decision made" and "decision documented, reviewed, and traceable."

The four components of a paid media ops system

You don't need a dedicated tool. A shared doc and a weekly 30-minute meeting get you 80% of the value. Each component below closes a specific failure mode.

1. A proposal workflow for changes

Separate "I think we should do this" from "this is now live." Not every change needs a committee. But high-impact changes need a proposal step before execution.

Require a proposal for:

  • Budget shifts greater than 10% at the campaign level
  • Pausing or unpausing active campaigns
  • Audience changes on any active campaign
  • New creative going live for the first time
  • UTM template changes
  • Any change touching more than one platform simultaneously

A proposal is not a formal document. It's five fields:

Field | What it captures | Example
What | The specific change being made | Shift $3k/mo from Brand campaign to Competitor Conquest
Why | The rationale driving the change | Brand CPL is 2x target; Conquest is at 0.8x with strong pipeline contribution
Expected impact | What you expect to happen | Reduce overall CPL by ~15% in 14 days; total pipeline volume unchanged
Rollback path | How to undo it if the expected impact doesn't materialize | Revert budget split if Conquest CPL rises above $280 over 7-day window
Reviewer | The named person who approved it | Head of Growth

The point of this table isn't process for its own sake. It's so that when performance shifts two weeks from now, you can actually isolate the variable instead of guessing.
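The five-field proposal can be captured in a few lines of code. A minimal sketch, assuming a Python dataclass with one field per row of the table above; the class and field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class ChangeProposal:
    """One proposed change, mirroring the five-field proposal table."""
    what: str             # the specific change being made
    why: str              # the rationale driving the change
    expected_impact: str  # what you expect to happen, with a timeframe
    rollback_path: str    # how to undo it if the impact doesn't materialize
    reviewer: str = ""    # named approver; empty until someone signs off

    def is_approved(self) -> bool:
        # A proposal without a named reviewer has not been approved.
        return bool(self.reviewer.strip())

proposal = ChangeProposal(
    what="Shift $3k/mo from Brand campaign to Competitor Conquest",
    why="Brand CPL is 2x target; Conquest is at 0.8x with strong pipeline contribution",
    expected_impact="Reduce overall CPL by ~15% in 14 days; pipeline volume unchanged",
    rollback_path="Revert budget split if Conquest CPL rises above $280 over a 7-day window",
)
assert not proposal.is_approved()   # not eligible to go live yet
proposal.reviewer = "Head of Growth"
assert proposal.is_approved()
```

The same shape works as a row in a spreadsheet; the point is that "reviewer" is a required field, not a courtesy.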

2. A change log

Not a Slack thread. Not a Google Sheet that gets abandoned after two weeks. A structured log that every person with account access writes to, every time they make a change.

Minimum fields:

Field | Notes
Date | When the change went live
Platform | Google Ads / LinkedIn / Reddit / Cross-platform
Change type | Budget / Bid / Audience / Creative / UTM / Structural
What changed | Specific and concrete: campaign name, amount, audience segment
Rationale | Why this was the right call at the time
Owner | Who made the change (person, not role)
Expected outcome | What you expected to happen
Actual outcome | Filled in 7-14 days later during review

The log only works if everyone who can make changes writes to it. One holdout (usually the agency) breaks the whole system. Make it a contractual expectation, not a cultural suggestion.

Write for the investigator

Write entries for the person investigating a performance shift two months from now, not the person reviewing this week. "Reduced bids" is useless. "Reduced target CPA from $180 to $155 on Mid-Funnel Google campaign based on 30-day CPL trend; expect pipeline volume to hold while CPL normalizes" is useful.
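A log entry check can enforce this mechanically. A minimal sketch, assuming one dict per log row with snake_case versions of the field names above; the 20-character vagueness threshold and the owner name are illustrative assumptions:

```python
REQUIRED_FIELDS = [
    "date", "platform", "change_type", "what_changed",
    "rationale", "owner", "expected_outcome",
]  # "actual_outcome" is filled in 7-14 days later, so it is not required at entry time

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry is log-ready."""
    problems = [f"missing: {f}" for f in REQUIRED_FIELDS if not entry.get(f)]
    # Guard against entries the investigator can't use: "Reduced bids" is too vague.
    if len(entry.get("what_changed", "")) < 20:
        problems.append("what_changed too vague; name the campaign and the amounts")
    return problems

entry = {
    "date": "2025-02-03",
    "platform": "Google Ads",
    "change_type": "Bid",
    "what_changed": "Reduced target CPA from $180 to $155 on Mid-Funnel Google campaign",
    "rationale": "30-day CPL trend above target",
    "owner": "J. Rivera",  # hypothetical operator
    "expected_outcome": "Pipeline volume holds while CPL normalizes",
}
assert validate_entry(entry) == []
assert validate_entry({"what_changed": "Reduced bids"})  # vague, and missing fields
```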

3. Naming conventions

Inconsistent naming breaks everything downstream. You can't filter the change log by campaign. You can't parse UTM data in your CRM. You can't compare across platforms. Fix naming first. It's the backbone.

Campaign naming formula: [Platform] | [Funnel Stage] | [Audience / Segment] | [Quarter-Year]

Examples:

  • GGL | TOFU | ICP-Founders-Series-B | Q1-2025
  • LI | MOFU | Retarget-Visitors-30d | Q1-2025
  • RDT | TOFU | Competitor-Intent | Q1-2025
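The formula is strict enough to check by machine. A minimal regex sketch: the platform codes come from the examples above, TOFU and MOFU from the document, and BOFU is an assumed addition; extend both sets to match your own accounts:

```python
import re

# [Platform] | [Funnel Stage] | [Audience / Segment] | [Quarter-Year]
PLATFORMS = r"(GGL|LI|RDT)"     # from the examples; extend as you add platforms
STAGES = r"(TOFU|MOFU|BOFU)"    # BOFU is an assumption, not in the source examples
NAME_RE = re.compile(
    rf"^{PLATFORMS} \| {STAGES} \| [A-Za-z0-9-]+ \| Q[1-4]-\d{{4}}$"
)

def is_valid_campaign_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

assert is_valid_campaign_name("GGL | TOFU | ICP-Founders-Series-B | Q1-2025")
assert is_valid_campaign_name("LI | MOFU | Retarget-Visitors-30d | Q1-2025")
assert not is_valid_campaign_name("Google Brand Campaign 2025")  # no convention
```

Run a check like this in the weekly naming audit and the "top 10 campaigns by spend" review becomes a one-liner.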

UTM parameters need their own convention:

UTM parameter | Convention | Example
utm_source | Lowercase platform name | google, linkedin, reddit
utm_medium | Always "paid" | paid
utm_campaign | Funnel stage + audience (hyphens) | tofu-icp-founders
utm_content | Creative ID or ad format | v3-carousel, headline-b
utm_term | Keyword or audience segment (Google only) | b2b-paid-media-ops

Pick a convention. Document it in one place. Enforce it on every new campaign. Don't retroactively rename existing campaigns. The disruption to historical data isn't worth it. Fix forward.

4. A weekly ops review

Without regular review, the system degrades. The change log fills up. "Actual outcome" columns stay empty. Proposals get approved but never evaluated. It becomes paperwork.

This is not a performance review. Different meeting, different stakeholders. The ops review covers:

  • Changes shipped this week: Review the log. Were all high-impact changes documented with proposals? Any surprises?
  • Proposals for next week: What changes are queued? Have they been reviewed?
  • Open evaluations: Which changes from two weeks ago should now have actual outcome data? Update the log.
  • Naming and tracking hygiene: Any new campaigns without proper naming? Any UTM issues surfaced this week?
  • Flags: Anything stale, paused without documentation, or showing unexpected results?

30 minutes. Once a week. Everyone with account access. The meeting runs from the change log. If you can't run the review from the log, the log is broken.

What to do if you're starting from scratch

Don't implement all four at once. Big-bang rollouts produce perfect systems nobody uses after week three.

Week 1: Start the change log
Create a shared doc or spreadsheet with the eight fields above. Retroactively fill in the last two weeks from memory as best you can. Even partial entries are valuable. From this point, anyone making a change writes to the log the same day.

Week 2: Add proposals for high-impact changes only
Don't gate every change. You'll kill velocity. Apply the proposal requirement to the five change types that do the most damage when undocumented: budget shifts >10%, audience changes, campaign pauses, UTM changes, and new creative launches.

Week 3: Audit and standardize naming on highest-spend campaigns
Don't rename everything. Pull your top 10 campaigns by spend. Check naming and UTM consistency. Fix forward on anything new, and flag any active campaigns with broken or inconsistent UTMs that are distorting CRM data.

Week 4: Run your first ops review
Schedule 30 minutes with everyone who has account access. Run through the agenda above using the change log as the source of truth. Fill in any missing "actual outcome" columns. Queue next week's proposals. This is now a standing weekly.

After four weeks you have a functioning operational layer. It won't be perfect. The point is a traceable record of what happened and why, and a weekly moment to improve it.

Who owns what

Paid media ops breaks when ownership is ambiguous. At Series B, you typically have three people (or groups) touching the accounts: an in-house operator, an agency, and a head of growth or marketing leader making strategic calls. All three have credentials. None of them have a clear ops mandate.

The result: everyone assumes someone else is maintaining the log. Proposals get discussed in Slack but never formally reviewed. Budget shifts happen without documentation because the person making the change "was going to mention it in standup."

Operational surface | Owner | Notes
Change log entries | Whoever made the change | No exceptions. Agency included.
Proposal creation | Whoever wants the change | Operator proposes tactical. Growth lead proposes strategic. Agency within scope.
Proposal approval | Head of growth | One person approves. Not a committee.
Naming enforcement | In-house operator | Agency follows convention. Operator audits weekly.
Ops review facilitation | In-house operator | Runs the meeting. Flags gaps before the call.
UTM consistency | In-house operator | Single owner prevents drift.
Budget reallocation | Head of growth | Anyone can propose. Approval is always the growth lead.

The pattern: operators own execution and log integrity. The growth lead owns approval and budget authority. The agency owns their deliverables but operates inside your system, not theirs.

If the agency insists on using their own change log or reporting cadence, you now have two systems. Two systems means one of them gets ignored. Insist on a single log.

Cross-platform operations

Running Google Ads and LinkedIn Ads is not "the same thing twice." The platforms have different change cadences, different risk profiles, and different failure modes.

 | Google Ads | LinkedIn Ads | Reddit Ads
Change cadence | High. Bid and audience tweaks daily or weekly. | Lower. Changes bi-weekly or monthly. | Moderate. Creative rotation matters more.
Blast radius | High. Bad bid strategy torches budget in hours. | Medium. Slower pacing gives you a day or two. | Medium-low. But creative missteps get community backlash.
Audit trail quality | Decent. Change history exists but buried. | Minimal. You're relying on your own log. | Almost none. If you didn't log it, it didn't happen.

Practical implications:

  • Proposal thresholds should differ by platform. Set thresholds based on blast radius, not uniformity.
  • The change log needs a platform column. "What changed on LinkedIn this week?" should be a one-line filter.
  • Cross-platform changes are the highest-risk category. Always require a proposal, regardless of size.

The rule

Build one log, one review cadence, one naming system, but let the operational rules flex per platform where the risk profile demands it.

Budget operations

Budget strategy is "we should spend $40k/month on paid, split 60/30/10 across Google, LinkedIn, and Reddit." Budget operations is how that split actually gets maintained, how reallocation decisions happen, and what prevents one platform from quietly eating another's share.

Most teams have the strategy. Almost none have the operations.

Tracking allocation vs. actual spend

Platform | Monthly allocation | WTD spend | MTD spend | Pacing | Notes
Google Ads | $24,000 | $5,800 | $14,200 | Slightly over | Conquest pulling ahead
LinkedIn Ads | $12,000 | $2,600 | $7,100 | On pace |
Reddit Ads | $4,000 | $700 | $2,200 | Under | New creative pending

Update this during the weekly ops review. If pacing is off by more than 15%, that's a flag.

Reallocation workflow

Budget moves between platforms should go through the proposal workflow. Not because reallocating $2k from Reddit to Google is risky. Because undocumented reallocation is how you end up three months later with a platform starved to zero and nobody able to explain when or why.

The proposal needs three fields beyond the standard: the source platform, the destination platform, and the duration. "Move $3k from Reddit to Google" is incomplete. "Move $3k/month from Reddit to Google for Q2, reassess in April ops review" is trackable.

Guardrails

Set floor allocations for each platform. If LinkedIn's minimum is $8k/month, reallocation proposals that would push it below $8k require head-of-growth approval. This prevents the common failure where one high-performing platform gradually absorbs everything and you lose optionality on the others.

Operational reporting

Most teams have performance reporting: CPL, pipeline contribution, ROAS, spend vs. budget. Your CFO sees this. Your board sees this. This is not what your ops team needs.

Operational reporting answers a different set of questions:

  • What changes shipped this week across all platforms?
  • How many proposals were submitted, approved, and rejected?
  • Which changes from two weeks ago now have outcome data? Did they work?
  • Are there campaigns that haven't been touched in 30+ days?
  • Is the naming convention being followed on new campaigns?
  • Are there open proposals that haven't been reviewed?

The performance report tells you how paid media is doing. The ops report tells you how the team running paid media is doing. You need both.

Building the ops report without making it a second job

Pull it together in 15 minutes before the weekly review:

  • Changes shipped: Count and categorize from the log. "8 changes this week: 3 budget, 2 audience, 2 creative, 1 structural."
  • Proposal throughput: How many submitted vs. approved vs. still pending? A growing backlog means the approval step is becoming a bottleneck.
  • Evaluation gaps: How many changes from 2+ weeks ago still have empty "actual outcome" columns? If growing, the feedback loop is broken.
  • Stale campaigns: Anything active with no logged change in 30+ days.

Circulate 24 hours before the weekly meeting. The meeting gets shorter because people come prepared.
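Most of the ops report falls out of the change log mechanically. A minimal sketch, assuming one dict per log entry with the field names used earlier (ISO date strings, a campaign name per entry); the 14-day evaluation window matches the "actual outcome" rule above, and the 30-day stale threshold matches the review agenda:

```python
from collections import Counter
from datetime import date

def ops_report(log: list[dict], today: date, stale_days: int = 30) -> dict:
    """Summarize a change log into the weekly ops-report bullets."""
    by_type = Counter(e["change_type"] for e in log)
    # Evaluation gaps: entries 14+ days old with an empty "actual outcome" column.
    gaps = [e for e in log
            if (today - date.fromisoformat(e["date"])).days >= 14
            and not e.get("actual_outcome")]
    # Stale campaigns: no logged change in the last `stale_days` days.
    last_touch: dict[str, date] = {}
    for e in log:
        d = date.fromisoformat(e["date"])
        last_touch[e["campaign"]] = max(last_touch.get(e["campaign"], d), d)
    stale = [c for c, d in last_touch.items() if (today - d).days > stale_days]
    return {"changes_by_type": dict(by_type),
            "evaluation_gaps": len(gaps), "stale": stale}

log = [
    {"date": "2025-01-02", "change_type": "Budget",
     "campaign": "GGL | TOFU | ICP-Founders-Series-B | Q1-2025", "actual_outcome": ""},
    {"date": "2025-02-10", "change_type": "Creative",
     "campaign": "LI | MOFU | Retarget-Visitors-30d | Q1-2025", "actual_outcome": "CPL flat"},
]
report = ops_report(log, today=date(2025, 2, 14))
assert report["changes_by_type"] == {"Budget": 1, "Creative": 1}
assert report["evaluation_gaps"] == 1   # the January change was never evaluated
assert report["stale"] == ["GGL | TOFU | ICP-Founders-Series-B | Q1-2025"]
```

One caveat: this only flags staleness for campaigns that appear in the log at all. A campaign nobody has ever logged a change against needs the platform's own campaign list as input.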

Working with an agency

Agencies are force multipliers when the operational handoff is clean. They are a source of invisible risk when it isn't. The problem is rarely competence. It's that the agency operates in their own system, with their own conventions, on their own cadence.

Defining the handoff

Before the engagement starts, agree on these in writing:

Item | What to agree on
Change log | Agency writes to your log, not theirs. Same fields, same format. Non-negotiable.
Proposal workflow | Agency submits through your system. No separate approval path.
Naming convention | Agency follows yours. Two conventions means broken data.
Reporting cadence | Agency attends ops review or submits their portion 24 hours in advance.
Access scope | Which platforms managed vs. read-only. Document explicitly.
Escalation path | What happens when agency wants a change outside approved scope.

Most agency contracts cover deliverables and reporting. Almost none cover operational integration. Add these as an addendum.

Where agency ops typically breaks

UTM drift. The agency uses their own convention or a client-generic template. Three weeks in, CRM attribution is fragmented. Fix: UTM convention is in the contract. Operator audits weekly.

Shadow changes. "Minor" adjustments that don't hit the proposal threshold and don't get logged. Over a month, they accumulate into a meaningful shift nobody can trace. Fix: every change gets logged, regardless of size.

Parallel reporting. The agency sends their own report in a different format. Now you have two sources of truth. Fix: one reporting format. Don't run two systems.

When the system outgrows the doc

The manual system works for 2-4 people on 2-3 platforms. It starts to strain when:

  • Multiple people writing to the log create version conflicts
  • The agency operates in their own tracking system
  • The log is too dense to surface what matters quickly
  • Leadership asks for "what we did last month" and it takes hours
  • Proposal reviews happen in Slack threads that get buried
  • The "actual outcome" column is consistently empty

These are signs you've outgrown a general-purpose tool. You need a system that:

  • Treats proposals as first-class objects with a reviewable lifecycle
  • Maintains an audit trail automatically
  • Works across Google Ads, LinkedIn Ads, and Reddit Ads
  • Surfaces what needs attention without manual synthesis

That's what Maple does. Proposals, approvals, apply, and audit across Google Ads, LinkedIn Ads, and Reddit Ads. No spreadsheet.


Start with the change log. Add the rest over four weeks. When the doc can't keep up anymore, you'll know because the symptoms above will be obvious.

Built for teams at this inflection point

When the doc becomes the bottleneck, Maple handles the ops layer.

Maple is the paid media operations workspace for teams running Google Ads, LinkedIn Ads, and Reddit Ads. Proposals, approvals, apply, and audit. No spreadsheet required.