
Facebook CBO vs ABO: When to Use Each

Campaign Budget Optimization vs Ad Set Budget Optimization. When each wins, common mistakes, and hybrid structures.

Vince Servidad April 13, 2026 12 min read



Campaign Budget Optimization (CBO) vs Ad Set Budget Optimization (ABO) was a religious war in Facebook ad communities for years. CBO advocates said let the algorithm decide. ABO advocates said control your spend manually.

The honest answer: neither is universally correct. The right choice depends on what you're trying to do.

Quick definitions

  • ABO (Ad Set Budget Optimization). You set a budget at the ad set level. Each ad set has its own daily/lifetime budget that doesn't shift between ad sets.
  • CBO (Campaign Budget Optimization). You set one budget at the campaign level, and the algorithm distributes it across ad sets based on which is performing best.

CBO is now Meta's default and called "Advantage+ Campaign Budget" in newer interfaces.

When CBO wins

CBO is the right choice when:

  • You have multiple ad sets with different audiences and want the algorithm to find the best one.
  • You're scaling a working campaign and want spend to flow toward winners.
  • You're optimizing for a single objective (Purchase, Lead, etc.) across all ad sets.
  • You have enough volume (campaign producing 50+ conversions/week) for the algorithm to learn quickly.

In other words: production-mode campaigns with established performance.

The CBO advantage: spend automatically shifts to the highest-ROAS ad set within the campaign, and within an ad set, to the highest-ROAS ad. Less manual tuning, faster optimization.

When ABO wins

ABO is the right choice when:

  • You're testing. You want equal spend across each ad set/audience/creative variant to compare them fairly.
  • You have ad sets at very different stages (new vs scaled). CBO underspends new ad sets.
  • You're targeting small audiences that you want to ensure get spend (CBO sometimes leaves small audiences with $0/day).
  • Budget allocation matters strategically (spending a guaranteed amount on a particular product or audience).

In other words: testing, exploration, or strategic allocations.

The hybrid approach: CBO with ABO tests

Most accounts run both:

  • CBO campaigns for production. Proven creatives and audiences scaled with budget flexibility.
  • ABO campaigns for testing. New creatives, new audiences, with controlled spend per variant.

Once a test winner emerges in the ABO campaign, graduate it to the CBO campaign with the existing performers.

Typical structure

A clean account might have:

Campaign 1: Prospecting (CBO)

  • Ad set 1: Lookalike 1% of top customers.
  • Ad set 2: Advantage+ broad targeting.
  • Ad set 3: Lookalike 3% of past 90-day purchasers.
  • Daily budget: $300, distributed by algorithm.

Campaign 2: Retargeting (CBO)

  • Ad set 1: Site visitors 30 days.
  • Ad set 2: Add-to-cart 14 days.
  • Ad set 3: Engagement 90 days.
  • Daily budget: $80.

Campaign 3: Creative testing (ABO)

  • Ad set 1: New creative concept A — $30/day.
  • Ad set 2: New creative concept B — $30/day.
  • Ad set 3: New creative concept C — $30/day.

Campaign 4: Audience testing (ABO)

  • Different audiences, same creative — $20–$40 each.

This structure stays clean as it scales, and it lets you test new variants without disrupting the campaigns that are already optimized.
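
The structure above can be written down as data and sanity-checked. A minimal sketch with the article's sample budgets (the audience-testing ad sets are assumed at $30/day, picked from the $20–$40 range above; ad set names are illustrative):

```python
# Illustrative account structure: CBO campaigns carry the budget at the
# campaign level; ABO campaigns carry it per ad set.
account = [
    {"name": "Prospecting (CBO)", "daily_budget": 300, "ad_sets": [
        {"name": "Lookalike 1%"}, {"name": "Advantage+ broad"},
        {"name": "Lookalike 3%"}]},
    {"name": "Retargeting (CBO)", "daily_budget": 80, "ad_sets": [
        {"name": "Visitors 30d"}, {"name": "ATC 14d"},
        {"name": "Engagement 90d"}]},
    {"name": "Creative testing (ABO)", "ad_sets": [
        {"name": "Concept A", "daily_budget": 30},
        {"name": "Concept B", "daily_budget": 30},
        {"name": "Concept C", "daily_budget": 30}]},
    {"name": "Audience testing (ABO)", "ad_sets": [
        {"name": "Audience 1", "daily_budget": 30},
        {"name": "Audience 2", "daily_budget": 30},
        {"name": "Audience 3", "daily_budget": 30}]},
]

def total_daily_spend(campaigns):
    """Campaign-level budget for CBO, sum of ad set budgets for ABO."""
    total = 0
    for c in campaigns:
        if "daily_budget" in c:  # CBO: ad sets must not carry budgets
            assert not any("daily_budget" in a for a in c["ad_sets"])
            total += c["daily_budget"]
        else:                    # ABO: every ad set carries its own
            total += sum(a["daily_budget"] for a in c["ad_sets"])
    return total

print(total_daily_spend(account))  # 560
```

With these figures, roughly 68% of daily spend ($380 of $560) sits in CBO production and the rest in ABO testing.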

CBO myths that won't die

"CBO doesn't spend evenly across ad sets."

True, and that's the point. The algorithm allocates spend based on performance. If you want even spend, use ABO.

"CBO needs lots of conversions to work."

Helpful but not strictly required. Even at 10–20 conversions per week, CBO can work. The risk is volatility — small samples create noisy "winners" that don't sustain.

"CBO requires minimum 3 ad sets."

False. CBO works with 1, 2, or many ad sets. More ad sets give the algorithm more options but aren't required.

"Spending limits in CBO don't work."

Minimum and maximum daily ad set spend caps do work. They're useful when you want to ensure no audience gets starved of spend.

Using ad set spend limits in CBO

If you run CBO but want guarantees, use minimum daily spend:

  • Force at least $X to each ad set so it has a chance to learn.

Or maximum daily spend:

  • Cap a high-performing ad set so the budget doesn't all go there (useful when you suspect the "winner" isn't sustainable).

These caps reduce the algorithm's freedom but increase predictability. Use sparingly.
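
To see the freedom-vs-predictability trade concretely, here's a toy allocator: a performance-proportional split with clamping and redistribution. This is not Meta's actual algorithm, just a sketch of how floors and caps constrain where budget can flow:

```python
def allocate_cbo_budget(total, performance, min_spend=None, max_spend=None):
    """Toy model of CBO with spend limits: split `total` across ad sets
    in proportion to `performance` (e.g. recent ROAS), clamp any ad set
    that falls outside its [min, max] range, and redistribute the rest.
    Illustration only -- not Meta's real allocation logic."""
    n = len(performance)
    min_spend = min_spend or [0.0] * n
    max_spend = max_spend or [float("inf")] * n
    alloc = [0.0] * n
    free = set(range(n))          # ad sets not yet pinned by a cap
    remaining = float(total)
    while free:
        perf_sum = sum(performance[i] for i in free)
        clamped = False
        for i in list(free):
            share = remaining * performance[i] / perf_sum
            if not (min_spend[i] <= share <= max_spend[i]):
                alloc[i] = min(max(share, min_spend[i]), max_spend[i])
                free.discard(i)
                remaining -= alloc[i]
                clamped = True
                break             # recompute shares after each clamp
        if not clamped:
            for i in free:        # everyone left gets a proportional share
                alloc[i] = remaining * performance[i] / perf_sum
            break
    return alloc

# $300/day, one strong ad set (score 10) and two weak ones (score 1):
print(allocate_cbo_budget(300, [10, 1, 1]))                        # [250.0, 25.0, 25.0]
print(allocate_cbo_budget(300, [10, 1, 1], min_spend=[0, 50, 50]))  # [200.0, 50, 50]
```

With $50 floors on the weaker ad sets, the winner drops from $250 to $200/day: the floors guarantee learning spend, at the cost of moving budget away from the best performer.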

Common CBO mistakes

  • Mixing wildly different audiences in one CBO campaign. Algorithm won't allocate fairly. Group similar audiences together.
  • Mixing prospecting and retargeting. Different objectives, different behavior. Always separate.
  • Increasing budget too aggressively. CBO handles 20% increases gracefully. Doubling a budget overnight resets the learning phase.
  • Constantly editing the campaign. Edits reset learning. Let CBO settle for 5–7 days before judging.
  • Mixing different optimization events. All ad sets must optimize for the same conversion event for CBO to make valid comparisons.

Common ABO mistakes

  • Setting different budgets per ad set in a "test." Defeats the test. Equal budgets only.
  • Running ABO at very low budgets. Below $20/day per ad set, the data is too sparse.
  • Pausing/resuming ad sets manually all day. Resets learning. Make changes, then leave it alone.

When CBO underperforms

Sometimes CBO doesn't work well in practice. Common causes:

  • Audience overlap. Multiple ad sets targeting overlapping users. CBO spends inefficiently. Fix exclusions.
  • One ad set has dramatically more volume potential than others. All spend goes there, others get $0. Switch to ABO with floors.
  • Creative variance is too high. One ad performs 5x better than others. Algorithm allocates almost everything to it. Force creative testing in a separate ABO campaign.

CBO and Advantage+ Shopping Campaigns

Advantage+ Shopping Campaigns (ASCs) are essentially super-CBO with even more algorithm control. Single campaign, broad targeting, automated creative/audience optimization.

For most e-commerce accounts at scale, Advantage+ Shopping has replaced traditional CBO prospecting as the default. Use ABO testing campaigns alongside ASCs.

A test framework for CBO vs ABO

If you're not sure which works in your account:

  • Take a working campaign.
  • Duplicate it. Run one as CBO, one as ABO with equal ad set budgets.
  • Run both for 14 days.
  • Compare overall ROAS and spend efficiency.
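
Scoring the duplicate test comes down to ROAS (revenue / spend). A small sketch — the 10% minimum-lift threshold is our own illustrative assumption, there to avoid crowning a noisy winner:

```python
def roas(c):
    return c["revenue"] / c["spend"]

def pick_winner(cbo, abo, min_lift=0.10):
    """Declare a winner only if ROAS differs by more than `min_lift`
    (10% by default, an assumed threshold); smaller gaps over a 14-day
    test are likely noise."""
    r_cbo, r_abo = roas(cbo), roas(abo)
    if abs(r_cbo - r_abo) <= min_lift * min(r_cbo, r_abo):
        return "no clear winner"
    return "CBO" if r_cbo > r_abo else "ABO"

print(pick_winner({"spend": 1000, "revenue": 3500},
                  {"spend": 1000, "revenue": 2800}))  # CBO (3.5x vs 2.8x)
```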

In our testing:

  • Established accounts: CBO wins 60–70% of the time.
  • New accounts (under $5K/month): ABO wins 50–60% of the time.
  • Testing campaigns: ABO always wins (because the test is the point).

What "good" looks like

A well-structured account in 2025:

  • 70–80% of spend in Advantage+ / CBO campaigns.
  • 20–30% in ABO testing campaigns.
  • Clear graduation path from ABO test → CBO production.
  • No campaign with overlap > 30% with another.
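
The overlap rule is checkable. One common way overlap tools report it is as a share of the smaller audience, which is easy to approximate if you can export user ID lists (the IDs below are made up for illustration):

```python
def audience_overlap(a, b):
    """Overlap as a share of the smaller audience -- one common way
    overlap tools report it. `a` and `b` are iterables of user IDs."""
    a, b = set(a), set(b)
    return len(a & b) / min(len(a), len(b))

# Two audiences sharing 2 of the smaller one's 4 users: 50% overlap,
# well past the 30% threshold, so these campaigns need exclusions.
print(audience_overlap({1, 2, 3, 4}, {3, 4, 5, 6, 7, 8, 9, 10}))  # 0.5
```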

The structure isn't the strategy. It's the chassis. Get it clean once, and your team can focus on what actually drives performance: creative, offer, and product.
