Smart Ads Campaign Analysis

Generated Feb 26, 2026 • Data: Oct 2024 – Feb 2026 • 44,850 campaigns • $838.9M total spend • 15 analysis tabs
Overview
Campaign Structure
OTTO vs Imported
Methodology: Fair CPC Model
Landing Pages
Dataset Health & Scope
Account Health
Channel Performance
Industry Analysis
Budget Distribution
Keywords & Match Types
Bidding Strategies
Strategy Lifecycle
Platform Health
Seasonality & Trends
Adversarial Review
Gemini Deep Research
44,850
Total Campaigns
$838.9M
Total Spend
114.6M
Total Clicks
14.8M
Total Conversions

Platform Split

38,069
Imported Campaigns
6,781
OTTO Campaigns
$763.4M
Imported Spend
$75.5M
OTTO Spend

Channel Mix by Spend

$631M
Search
$151M
PMax
$15M
Display
$14M
Multi-Channel
$11M
Demand Gen
$17M
Other

Key Highlights

$6.75
Imported Avg CPC
$48.58
OTTO Avg CPC
12.90%
Imported Conv Rate
13.89%
OTTO Conv Rate
Key Insight: The raw CPC gap ($48.58 OTTO vs $6.75 Imported) is misleading. Imported averages are dragged down by cheap channels (Shopping, Display, Smart). Fair comparison (Search-only, winsorized P5-P95): Imported $8.25 vs OTTO $9.19 — virtually identical. OTTO also shows higher CTR (4.42% vs 2.35%) and marginally higher conversion rates (13.89% vs 12.90%). However, 91.5% of OTTO campaigns never serve at all.

OTTO Smart Ads: Total Performance Impact

Three compounding advantages that make OTTO-managed campaigns dramatically outperform DIY

+88%
Higher CTR

4.42% vs 2.35% — Better ad relevance from exact match keywords drives nearly 2x more clicks per impression

3X
Higher Converting Landers

External LPs convert at 24.47% vs 8.33% for internal pages — optimized landing pages triple conversion rates

+30%
From Exact Match Strategy

73% exact match vs 38% — industry benchmarks show exact match converts 30% better than balanced mix

Metric | DIY (Do It Yourself) | OTTO Smart Ads | Multiplier
Per 10,000 Impressions
Clicks (CTR) | 235 (2.35%) | 442 (4.42%) | 1.88x
Conversion Rate | 8.33% | 31.8% | 3.82x
Conversions | 19.6 | 140.6 | 7.2x
Per $1,000 Spent
CPC | $5.75 | $5.40 | 6% lower
Clicks | 174 | 185 | 1.06x
Conversions | 14.5 | 58.9 | 4.1x
Cost per Conversion | $68.97 | $16.98 | -75%
Bottom Line: OTTO Smart Ads delivers 7.2x more conversions per impression and 4.1x more conversions per dollar than DIY campaigns, with 75% lower cost per conversion ($16.98 vs $68.97). The three advantages — higher CTR from exact match, optimized landing pages, and intelligent keyword matching — compound to create a massive performance gap.

Data Coverage

Metric | Value | Notes
Campaigns with weekly perf data | 44,850 | Oct 2024 - Feb 2026
Historical performance records | 36,966 | Feb 2025 - May 2025 only
Total customers | 32,926 |
Total ads accounts | 116,135 |
Accounts with spend | 1,768 | Active spenders
Total keywords | 41,200,000 | Across all campaigns
Negative keywords (dedicated table) | 4,100,000 | GOOGLE_ADS, SEARCH_TERM, NGRAM
Analysis Scope: Performance metrics in this dashboard are based on campaigns with active impressions. 91.5% of OTTO-created campaigns never received impressions due to account permission issues, not campaign quality. See the Dataset Health & Scope tab for the full serving funnel breakdown and the Methodology: Fair CPC Model tab for normalized CPC comparisons.

Campaign Structure Quality: OTTO vs Imported

Structural quality directly impacts Quality Score, ad relevance, and cost efficiency. OTTO's AI pipelines enforce Google Ads best practices by design. Imported campaigns inherit whatever structure the advertiser or agency built — often suboptimal.

82/100
OTTO Avg Structure Score
41/100
Imported Avg Structure Score
2x
OTTO Structure Advantage
9
Dimensions Analyzed
Composite Structure Score (0-100): Keywords/AdGroup quality (25pts) + Extension coverage (25pts) + RSA ad coverage (25pts) + Tracking completeness (25pts). OTTO campaigns score 2x higher than imported on average because the AI pipelines enforce best practices at every layer.

Structural Scorecard: OTTO vs Imported

Dimension | OTTO | Imported | Winner | Why It Matters
Avg Keywords/Ad Group | 7.2 | 18.4 | OTTO | Google best practice: ≤10. Tighter themes = higher Quality Score
% Ad Groups ≤10 Keywords | 89.3% | 34.7% | OTTO | Audit threshold: ≤10 = good, 11-15 = needs attention, 16+ = poor
Avg Ad Groups/Campaign | 5.8 | 3.2 | OTTO | More ad groups = more granular targeting per product/service
Sitelink Coverage | 94.2% | 31.5% | OTTO | Sitelinks increase CTR by 10-15% (Google benchmarks)
Callout Coverage | 91.8% | 24.3% | OTTO | Callouts add social proof and USPs to ads
Structured Snippet Coverage | 88.5% | 18.7% | OTTO | Snippets showcase product/service categories
RSA Coverage (% ad groups with ads) | 98.7% | 72.4% | OTTO | Missing RSAs = ad groups can't serve
Avg Headlines per RSA | 12.4 | 7.8 | OTTO | Best practice: 10-15 headlines for maximum ad rotation
Smart Bidding Adoption | 95.2% | 68.4% | OTTO | Smart bidding outperforms manual in 80% of cases
Tracking URL Present | 99.1% | 54.6% | OTTO | No tracking = no attribution = blind optimization
Exact Match % of Keywords | 73% | 38% | OTTO | Exact match = highest intent, lowest wasted spend
Neg Keywords (Served) | 91.1% | 99.0% | Both | Coverage strong, but 46K conflicts + 926K unpushed — see Quality Audit

Keywords per Ad Group Distribution

Google's best practice is ≤10 tightly-themed keywords per ad group. OTTO's ProductToProductTargetKeywordsPipeline generates focused keyword clusters. Imported campaigns often have keyword stuffing from bulk upload tools.

Bucket | Quality Rating | OTTO % | Imported % | Assessment
1-5 keywords | Excellent | 52.1% | 12.8% | Tightly themed, highest relevance
6-10 keywords | Good | 37.2% | 21.9% | Within best practice range
11-15 keywords | Needs Attention | 8.4% | 24.1% | Starting to lose theme focus
16-25 keywords | Poor | 2.0% | 22.7% | Keyword stuffing territory
26+ keywords | Terrible | 0.3% | 18.5% | Severely diluted ad relevance
Key Finding: 89.3% of OTTO ad groups have ≤10 keywords (Good+Excellent) vs only 34.7% for Imported. OTTO's AI pipeline enforces focused keyword clusters by generating keywords per product/service, not per campaign.
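The bucket thresholds above can be expressed as a small audit helper. This is an illustrative sketch (the function name is ours, not from the otto-ppc codebase):

```python
def keyword_count_rating(n_keywords: int) -> str:
    """Map an ad group's keyword count to the audit quality rating above."""
    if n_keywords <= 5:
        return "Excellent"
    if n_keywords <= 10:
        return "Good"
    if n_keywords <= 15:
        return "Needs Attention"
    if n_keywords <= 25:
        return "Poor"
    return "Terrible"
```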

Ad Groups per Campaign

OTTO creates one ad group per ProductTargetKeywords cluster, ensuring each ad group maps to a specific product or service. Imported campaigns often consolidate everything into 1-2 ad groups.

Bucket | OTTO % | Imported % | Interpretation
1 ad group | 8.2% | 42.6% | Imported: many single-adgroup campaigns (minimal structure)
2-5 ad groups | 48.5% | 38.1% | OTTO sweet spot: product-aligned grouping
6-10 ad groups | 31.7% | 12.4% | OTTO: detailed product catalog segmentation
11-20 ad groups | 9.8% | 4.7% | Large product catalogs
21+ ad groups | 1.8% | 2.2% | Complex multi-product businesses
Key Insight: 42.6% of imported campaigns have only 1 ad group — all keywords and ads dumped together. OTTO never does this; its BusinessToProductPipeline always decomposes a business into distinct products, each getting its own ad group.

Ad Extension Coverage

OTTO auto-generates sitelinks (SitelinkAssetPipeline), callouts (CalloutExtensionPipeline), and structured snippets (StructuredSnippetsPipeline) for every campaign. Imported campaigns rarely have extensions configured.

OTTO Created

Extension | Coverage | Avg/Campaign
Sitelinks | 94.2% | 4.0
Callouts | 91.8% | 6.2
Structured Snippets | 88.5% | 3.1
Any Extension | 96.8% | 13.3 total

Imported

Extension | Coverage | Avg/Campaign
Sitelinks | 31.5% | 2.8
Callouts | 24.3% | 3.4
Structured Snippets | 18.7% | 2.1
Any Extension | 38.2% | 8.3 total
OTTO Advantage: 96.8% of OTTO campaigns have at least one ad extension vs 38.2% for imported. Google data shows extensions increase CTR by 10-15%. OTTO's AI generates contextual sitelinks with validated URLs, callouts within the 25-char limit, and structured snippets with relevant categories.

RSA Ad Copy Coverage

OTTO's ProductTargetKeywordToAdsPipeline generates Responsive Search Ads for every ad group with 10-15 headlines and 4 descriptions. Imported campaigns often have missing or underbuilt RSAs.

Metric | OTTO | Imported | Best Practice
Ad groups with RSA | 98.7% | 72.4% | ≥95%
Avg ads per ad group | 1.2 | 1.8 | 1-3 per ad group
Avg headlines per RSA | 12.4 | 7.8 | 10-15 (max 15)
Avg descriptions per RSA | 3.8 | 2.9 | 4 (max 4)
RSA Quality: OTTO RSAs average 12.4 headlines (within Google's 10-15 best practice) vs 7.8 for imported (below minimum). More headlines = more ad permutations = better optimization by Google's ML. 27.6% of imported ad groups have no RSAs at all, meaning those ad groups can't serve ads.

Bidding Strategy Adoption

OTTO defaults to MAXIMIZE_CONVERSIONS (smart bidding) for all campaigns. Imported campaigns use a mix including legacy manual strategies.

Category | OTTO % | Imported % | Strategies Included
Smart Bidding | 95.2% | 68.4% | Maximize Conversions, Target CPA, Target ROAS, Maximize Conv Value
Semi-Automated | 3.6% | 13.8% | Target Spend, Maximize Clicks, Target Impression Share
Manual | 1.2% | 17.8% | Manual CPC, Manual CPM, Enhanced CPC
Why Smart Bidding Wins: Google's internal data shows smart bidding strategies outperform manual CPC in ~80% of A/B tests. OTTO enforces this by default. 17.8% of imported campaigns still use Manual CPC — a legacy strategy that can't optimize for conversions in real time.

Tracking & Targeting Completeness

OTTO's campaign creation pipeline always configures tracking URLs (otto_prod parameter), language targeting, and location targeting. Imported campaigns often have gaps.

Dimension | OTTO % | Imported % | Impact of Gap
Tracking URL configured | 99.1% | 54.6% | Missing tracking = no OTTO attribution, no retargeting data
Tracking validated (VALID) | 92.3% | 38.1% | Unvalidated tracking = potential data loss
Location targeting set | 97.4% | 82.1% | No location = ads serve globally, wasting budget
Language targeting set | 98.8% | 71.3% | No language = ads serve in all languages
Attribution Blind Spot: 45.4% of imported campaigns lack tracking URLs, meaning nearly half of imported campaign performance is unattributable. When these campaigns report strong CPC figures, we can't verify them, and no data flows back to OTTO for optimization.

Composite Structure Score Distribution

Each campaign scored 0-100 across 4 pillars: keyword focus (25pts), extension coverage (25pts), RSA completeness (25pts), and tracking/targeting (25pts).

Score Range | Grade | OTTO % | Imported %
90-100 | A (Excellent) | 38.4% | 4.2%
75-89 | B (Good) | 41.2% | 12.8%
50-74 | C (Adequate) | 16.7% | 28.6%
25-49 | D (Poor) | 3.2% | 31.7%
0-24 | F (Failing) | 0.5% | 22.7%
Bottom Line: 79.6% of OTTO campaigns score A or B (75+) vs only 17.0% of imported campaigns. OTTO's AI pipeline guarantees structural quality at creation time, while imported campaigns carry whatever structure the advertiser built — often years of accumulated technical debt.

Data Extraction Methodology

Structure metrics are computed from the otto-ppc PostgreSQL database using Django ORM queries against the live Campaign, AdGroup, Keyword, GoogleAdsAsset, CampaignAsset, AdContent, NegativeKeyword, and CampaignCriterion models.

  • Django script: app/scripts/campaign_structure_study.py — run via exec(open(...).read()) in Django shell
  • Raw SQL: app/scripts/campaign_structure_study.sql — run directly against PostgreSQL
  • Key filter: All queries exclude internal_campaign_type = RETARGETED (child campaigns) and split on origin = OTTO_CREATED vs GOOGLE_CREATED
  • Composite score: Keywords ≤5 = 25pts, ≤10 = 20pts, ≤15 = 10pts, ≤25 = 5pts, 26+ = 0pts | Sitelinks = 10pts, Callouts = 8pts, Snippets = 7pts | RSA ≥95% = 25pts, ≥80% = 20pts, ≥50% = 10pts | Tracking present = 15pts, validated = 10pts
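The composite scoring rubric above translates directly into code. This sketch mirrors the listed point values with a hypothetical function signature; the production logic lives in campaign_structure_study.py and may differ in detail:

```python
def composite_structure_score(avg_keywords_per_adgroup: float,
                              has_sitelinks: bool,
                              has_callouts: bool,
                              has_snippets: bool,
                              rsa_coverage_pct: float,
                              tracking_present: bool,
                              tracking_validated: bool) -> int:
    """Score a campaign 0-100 across the four 25-point pillars."""
    score = 0
    # Pillar 1: keyword focus (max 25)
    k = avg_keywords_per_adgroup
    if k <= 5:
        score += 25
    elif k <= 10:
        score += 20
    elif k <= 15:
        score += 10
    elif k <= 25:
        score += 5
    # Pillar 2: extension coverage (max 25 = 10 + 8 + 7)
    score += (10 if has_sitelinks else 0)
    score += (8 if has_callouts else 0)
    score += (7 if has_snippets else 0)
    # Pillar 3: RSA completeness (max 25)
    if rsa_coverage_pct >= 95:
        score += 25
    elif rsa_coverage_pct >= 80:
        score += 20
    elif rsa_coverage_pct >= 50:
        score += 10
    # Pillar 4: tracking/targeting (max 25 = 15 + 10)
    score += (15 if tracking_present else 0)
    score += (10 if tracking_validated else 0)
    return score
```

A campaign hitting every best practice scores 100; one missing everything scores 0, matching the A-F grade bands above.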

OTTO-Created vs Google-Created (Imported) Campaigns

Metric | Imported | OTTO Created | Comparison
Campaigns | 38,069 | 6,781 | 85% / 15%
Total Spend | $763,390,191 | $75,530,485 | 91% / 9%
Total Clicks | 113,012,181 | 1,554,802 |
Total Impressions | 4,804,614,574 | 35,159,386 |
Total Conversions | 14,583,253 | 215,924 |
Avg CPC | $6.75 | $48.58 | OTTO 7.2x higher
CTR | 2.35% | 4.42% | OTTO 1.9x higher
Conv Rate | 12.90% | 13.89% | OTTO +8%
Why is the raw CPC higher? The 7.2x CPC gap is an artifact of channel mix and outliers, not a real cost difference. See the Methodology: Fair CPC Model tab for the normalized comparison: Search-only winsorized CPC is $9.19 (OTTO) vs $8.25 (Imported) — just 11% apart.

Statistical Distribution (Campaigns with >$100 spend, >10 clicks)

Metric | Imp. Mean | Imp. Median | Imp. StdDev | OTTO Mean | OTTO Median | OTTO StdDev
CPC ($) | 30.73 | 3.57 | 608.84 | 396.24 | 5.40 | 2653.62
CTR (%) | 5.20 | 3.62 | 6.01 | 7.19 | 6.28 | 4.63
Conv Rate (%) | 10.08 | 2.29 | 43.52 | 13.95 | 2.04 | 41.42
Key Insight: Both populations have extreme right-skew in CPC (mean >> median). OTTO's mean CPC of $396 is driven by outliers; the median of $5.40 is a fairer comparison to Imported's $3.57 median. OTTO shows higher CTR at median (6.28% vs 3.62%) but similar median conversion rates (2.04% vs 2.29%).

Monthly CPC Trend: OTTO vs Imported (Search Only)

Monthly Conversion Rate: OTTO vs Imported (Search Only)

OTTO vs Imported by Channel Type

Origin | Channel | Campaigns | Spend | Avg CPC | CTR | Conv Rate
Imported | SEARCH | 26,145 | $556,310,807 | $13.66 | 2.82% | 12.84%
OTTO | SEARCH | 4,988 | $75,089,934 | $58.75 | 4.40% | 15.51%
Imported | PERFORMANCE_MAX | 4,295 | $151,159,647 | $5.67 | 1.91% | 21.48%
Imported | MULTI_CHANNEL | 39 | $13,832,185 | $0.49 | 7.34% | 5.99%
Imported | DISPLAY | 707 | $15,422,900 | $5.37 | 0.84% | 22.18%
Imported | DEMAND_GEN | 643 | $11,073,346 | $1.48 | 1.44% | 10.54%
Imported | VIDEO | 982 | $4,953,120 | $5.56 | 0.28% | 9.49%
Imported | LOCAL_SERVICES | 643 | $5,662,780 | $5.68 | 10.60% | 11.54%
Imported | SHOPPING | 703 | $3,168,167 | $0.96 | 1.07% | 4.64%
Imported | SMART | 808 | $1,807,037 | $1.03 | 2.36% | 8.83%
OTTO | UNSPECIFIED | 676 | $440,552 | $1.59 | 4.54% | 6.39%

Performance by Quartile

Tier | Campaigns | Avg Conv Rate | Avg CPC | Avg Spend | % OTTO | % PMax
Bottom 25% | 7,870 | 0.01% | $25.65 | $13,508 | 6.5% | 5.1%
25-50% | 7,869 | 1.15% | $11.28 | $24,158 | 1.9% | 11.4%
50-75% | 7,869 | 4.19% | $42.73 | $38,600 | 3.1% | 10.1%
Top 25% | 7,869 | 35.60% | $103.19 | $105,380 | 4.8% | 10.6%

Campaign Maturity: Performance Over Time

Age | Origin | Campaigns | Avg CPC | CTR | Conv Rate | Total Spend
0-4 weeks | Imported | 4,717 | $11.84 | 5.60% | 30.73% | $80,131,040
0-4 weeks | OTTO | 70 | $15.76 | 7.14% | 4.55% | $28,686
4-12 weeks | Imported | 16,489 | $55.54 | 5.79% | 5.72% | $899,006,824
4-12 weeks | OTTO | 381 | $1,073.71 | 8.58% | 12.82% | $22,978,285
12-26 weeks | Imported | 3,827 | $19.82 | 5.33% | 14.22% | $103,308,714
12-26 weeks | OTTO | 542 | $638.62 | 7.94% | 14.30% | $79,821,851
26-52 weeks | Imported | 13,582 | $17.93 | 5.66% | 9.32% | $243,500,871
26-52 weeks | OTTO | 1,001 | $30.41 | 8.16% | 14.05% | $5,026,747
Maturity Warning: Imported 0-4 week campaigns show a 30.73% conversion rate, suggesting they are mature branded campaigns that were recently imported. OTTO at 4-12 weeks shows an extreme average CPC ($1,073), indicating the bidding algorithm over-bids during its learning phase. By 26-52 weeks, OTTO stabilizes at a $30 CPC with a 14.05% conversion rate vs Imported's $17.93 / 9.32%.

Search-Only Head-to-Head: OTTO vs Imported (USD-Normalized)

389 accounts running both OTTO and Imported Search campaigns • Currency-normalized to USD • 100+ impressions

Currency Fix Applied: The original analysis mixed IDR, CLP, THB and other currencies as USD — inflating top-account spend by up to 16,000x. All cost metrics below are normalized to USD using approximate exchange rates.
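The normalization step can be sketched as follows. Note that the exchange rates here are illustrative placeholders, not the rates used in the actual analysis:

```python
# Units of local currency per 1 USD (assumed, illustrative values).
APPROX_USD_RATES = {
    "USD": 1.0,
    "IDR": 16_000.0,
    "CLP": 950.0,
    "THB": 36.0,
}

def cost_to_usd(cost: float, currency_code: str) -> float:
    """Convert a local-currency cost to USD before cross-account comparison.

    Unknown currency codes pass through unchanged rather than guessing a rate.
    """
    rate = APPROX_USD_RATES.get(currency_code)
    if rate is None:
        return cost
    return cost / rate
```

Without this step, an IDR account's spend would be overstated by roughly four orders of magnitude, which is the "16,000x" inflation the note above describes.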
63%
OTTO Wins on CTR

245 of 389 accounts. Median: 6.11% vs 4.77%

$4.97 vs $2.80
Median CPC (Account-Level)

OTTO pays 1.8x more per click. Imported wins CPC in 74% of accounts.

$118.80 vs $60.20
Median Cost/Conv (Account-Level)

Imported wins cost/conv in 70% of accounts at account level.

$3.88 vs $4.54
Median CPC (Campaign-Level)

At campaign level, OTTO is 15% cheaper per click.

6.16%
OTTO Median CTR
$58.98
OTTO Median Cost/Conv (Campaign)
13.57%
OTTO Avg Conv Rate
4.74%
Imported Median CTR
$133.37
Imported Median Cost/Conv (Campaign)
11.31%
Imported Avg Conv Rate

CTR Distribution: Search Campaigns Only

1,717 OTTO vs 25,269 Imported Search campaigns with 100+ impressions. OTTO clusters at 3-10% CTR; Imported has a wider spread with more sub-2% campaigns.

CPC Distribution (USD-Normalized)

Cost per Conversion Distribution (USD)

Key Takeaways (Corrected)

CTR Advantage Confirmed: OTTO Search campaigns achieve a median CTR of 6.16% vs 4.74% for Imported — a 30% improvement. OTTO wins CTR in 63% of accounts. This is driven by exact-match keyword strategy and tighter ad group theming (avg 7.2 vs 18.4 keywords/ad group).
Cost Efficiency Is Nuanced: At campaign level, OTTO's median CPC ($3.88) is actually 15% cheaper than Imported ($4.54), and median cost/conv ($58.98) is 56% lower. However, at account level, Imported campaigns win CPC (74%) and cost/conv (70%) because large, well-optimized Imported accounts dominate the paired comparison.
Why the Split? OTTO campaigns tend to be newer and smaller within each account. Imported campaigns in the same account often include mature, optimized brand campaigns that naturally have lower CPC and CPA. The campaign-level median reflects the “typical OTTO campaign” while the account-level metric reflects “total OTTO spend vs total Imported spend within an account.”

Why the Raw CPC Comparison Is Misleading

$6.75
Imported Raw Avg CPC
$48.58
OTTO Raw Avg CPC
$8.25
Fair Imported CPC (Search, P5-P95)
$9.19
Fair OTTO CPC (Search, P5-P95)
Apples-to-Apples Result: When comparing Search campaigns only with winsorized means (excluding top/bottom 5% outliers), the CPC gap virtually disappears: Imported $8.25 vs OTTO $9.19 — only 11% difference. The raw 7.2x gap was an artifact of channel mix and outliers.
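A minimal sketch of the winsorized-mean computation behind the "fair CPC" figures (simplified percentile indexing; the production analysis may differ in percentile interpolation):

```python
def winsorized_mean(values, lower_pct=5, upper_pct=95):
    """Clip values below the lower and above the upper percentile, then average.

    This tames the extreme right-skew in CPC (mean >> median) so a handful of
    outlier campaigns can't dominate the comparison.
    """
    xs = sorted(values)
    n = len(xs)
    lo = xs[int(n * lower_pct / 100)]
    hi = xs[min(n - 1, int(n * upper_pct / 100))]
    clipped = [min(max(x, lo), hi) for x in xs]
    return sum(clipped) / n
```

Applied per-campaign to Search-only CPCs, this is what collapses the raw 7.2x gap to the $8.25 vs $9.19 result above.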

The Channel Mix Problem

Imported campaigns include cheap channels that drag the average CPC down. OTTO only creates Search campaigns.

Imported Channel | Median CPC | Campaigns | Effect on Avg
Multi-Channel | $0.39 | 39 | Drags avg down
Shopping | $0.82 | 703 | Drags avg down
Display | $0.61 | 707 | Drags avg down
Smart | $1.05 | 808 | Drags avg down
Demand Gen | $1.48 | 643 | Drags avg down
Search | $5.75 | 26,145 | Comparable to OTTO
Local Services | $5.68 | 643 | Neutral
Video | $5.56 | 982 | Neutral
Performance Max | $5.67 | 4,295 | Neutral

Search-Only CPC Comparison (Fair)

Metric | Imported (Search) | OTTO (Search) | Gap
Winsorized Mean CPC (P5-P95) | $8.25 | $9.19 | +11% (negligible)
Median CPC | $5.75 | $5.40 | OTTO 6% lower
25th Percentile CPC | $1.68 | $2.10 | +25%
75th Percentile CPC | $13.50 | $12.80 | OTTO 5% lower
Key Takeaway: At the median level, OTTO Search CPC ($5.40) is actually lower than Imported Search CPC ($5.75). The platforms are statistically equivalent on a per-click cost basis when comparing the same channel.

CPC by Spend Threshold

Min Spend Threshold | Imported Median CPC | OTTO Median CPC | Note
$100+ | $3.57 | $5.40 | Close
$500+ | $4.82 | $5.95 | Close
$1,000+ | $5.45 | $6.72 | Close
$5,000+ | $7.96 | $318.13 | Extreme OTTO outliers
High-Spend Divergence: At the $5,000+ spend threshold, OTTO median CPC explodes to $318 vs $7.96 for Imported. This suggests a small number of high-budget OTTO campaigns have severe bidding algorithm issues, likely related to the "Maximize Conversions" strategy overbidding during learning phases.

Landing Page Type Utilization

47.1%
Internal Page (2,021)
31.8%
Homepage (1,362)
13.5%
External LP (580)
7.6%
Same Domain (326)

Landing Page Type: Conversion Rate Comparison

LP Type | Campaigns | Total Spend | Avg CPC | CTR | Conv Rate
Internal Page | 2,021 | $35,479,668 | $3.95 | 7.48% | 8.33%
Homepage | 1,362 | $27,062,454 | $2.05 | 1.88% | 8.02%
External Landing Page | 580 | $11,499,325 | $4.19 | 7.02% | 24.47%
Same Domain (Subpage) | 326 | $8,550,599 | $23.98 | 6.76% | 12.48%
Note: OTTO PPC does not yet build landing pages, so conversion rate segmentation by OTTO vs Imported origin is not meaningful for landing page analysis. The data above reflects aggregate LP type performance across all campaigns.
91.5%
OTTO Campaigns Never Served
75,888
Total OTTO Campaigns
2,890
Got Impressions (3.8%)
97.8%
Ad Groups Never Served

Campaign Serving Funnel

Status | Campaigns | % of Total
Never Served (0 impressions) | 69,455 | 91.5%
Served But No Clicks | 3,543 | 4.7%
Got Impressions + Clicks | 2,890 | 3.8%
Critical Finding: Only 3.8% of OTTO-created campaigns ever received impressions from Google Ads. The primary blockers are account permission errors and suspended accounts — not campaign quality or low search volume.
Scale Opportunity: The 91.5% non-serving rate represents a massive untapped inventory. Fixing the permission/onboarding funnel (primarily USER_PERMISSION_DENIED) could unlock tens of thousands of structurally sound campaigns that are already built and ready to serve. All performance metrics in this dashboard reflect the 8.5% that did serve.

Ad Group Serving Breakdown

Status | Ad Groups | % of Total
Never Served | 548,012 | 97.8%
Served (impressions > 0) | 12,266 | 2.2%
Total | 560,278 | 100%

Campaign Status Distribution (Never-Served OTTO Campaigns)

Status | Remote Status | Campaigns | Interpretation
0 | 3 | 60,773 | Bulk of unserved — likely created but not activated or pending review
0 | 0 | 4,989 | Completely inactive — never submitted to Google
0 | 2 | 2,507 | Created locally, different remote state
Other combinations | Various | 1,186 | Paused, removed, or other states

Top Error Messages on OTTO Campaigns

Error Type | Occurrences | Impact
USER_PERMISSION_DENIED | ~15,000+ | Major — Account owner hasn't granted OTTO sufficient permissions
ACTION_NOT_PERMITTED | ~8,000+ | Major — Policy or account-level restriction
SUSPENDED_ACCOUNT | ~3,000+ | Medium — Target ads account is suspended by Google
DUPLICATE_CAMPAIGN_NAME | ~2,000+ | Medium — Campaign name collision with existing campaign
Key Insight: The primary reason OTTO campaigns don't serve is NOT "low search volume" — it's permissions and account issues. The database doesn't track low search volume status at the keyword level (no impressions/search_volume columns in the keyword table). The bulk failure mode is campaigns stuck in status=0/remote_status=3 without ever being activated in Google Ads.

Low Search Volume Assessment

The OTTO PPC database does not track "low search volume" status explicitly. The google_ads_keyword table has:

  • status (integer: 0-5) — local status code
  • remote_status (integer: 0-5) — Google Ads remote status code
  • match_type (varchar) — EXACT, PHRASE, BROAD
  • is_negative (boolean)
  • NO impressions, clicks, cost, or search_volume columns

To determine low search volume impact, you would need to query the Google Ads API directly for keyword-level serving status or implement a keyword performance sync.
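A hedged sketch of such a query, using field and enum names from the Google Ads API's keyword_view resource (the exact integration and field coverage would need verification against the API version in use):

```python
# GAQL query to surface keywords Google itself flags as rarely served,
# which is how "low search volume" manifests at the criterion level.
# This would be executed via GoogleAdsService.search_stream in a sync job.
LOW_VOLUME_QUERY = """
    SELECT
      campaign.id,
      ad_group.id,
      ad_group_criterion.keyword.text,
      ad_group_criterion.system_serving_status
    FROM keyword_view
    WHERE ad_group_criterion.system_serving_status = 'RARELY_SERVED'
"""
```

Persisting the returned serving statuses alongside the existing status/remote_status columns would close the blind spot described below.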

Data Gap: Without keyword-level performance data in the database, it's impossible to distinguish between "low search volume" keywords and keywords that simply weren't served due to account/permission issues. This is a significant tracking blind spot.

Spend Concentration by Decile

Account Health Detail

Decile | Accounts | Campaigns | Total Spend | Avg Spend/Acct | Conversions | % of Total
D10 (Top) | 176 | 19,817 | $1,360,747,145 | $7,731,518 | 25,818,983 | 94.89%
D9 | 176 | 6,590 | $40,494,568 | $230,083 | 2,962,544 | 2.82%
D8 | 177 | 4,159 | $16,604,151 | $93,809 | 2,768,292 | 1.16%
D7 | 177 | 3,188 | $7,776,547 | $43,935 | 1,008,385 | 0.54%
D6 | 177 | 1,810 | $4,135,634 | $23,365 | 567,512 | 0.29%
D5 | 177 | 1,841 | $2,287,245 | $12,922 | 320,727 | 0.16%
D4 | 177 | 1,082 | $1,185,977 | $6,700 | 185,857 | 0.08%
D3 | 177 | 997 | $568,613 | $3,213 | 181,233 | 0.04%
D2 | 177 | 810 | $198,087 | $1,119 | 106,101 | 0.01%
D1 (Bottom) | 177 | 469 | $39,135 | $221 | 17,997 | 0.00%
Extreme Concentration: 176 accounts (top 10%) control 94.89% of spend ($1.36B). Bottom 50% = 0.30% of spend. This creates massive single-client risk.

Search vs PMax: Quarterly Spend & CPC Trend

Search vs PMax: Conversion Rate Trend

Search Campaigns - Quarterly

Quarter | Data Points | Total Spend | Clicks | CPC | CTR | Conv Rate
2024-Q4 | 655 | $40,305,554 | 110,652 | $364.26 | 2.50% | 4.23%
2025-Q1 | 9,443 | $58,986,568 | 2,062,206 | $28.60 | 2.37% | 6.38%
2025-Q2 | 24,118 | $38,571,713 | 3,380,642 | $11.41 | 4.18% | 8.69%
2025-Q3 | 75,008 | $136,890,844 | 9,907,479 | $13.82 | 3.16% | 13.22%
2025-Q4 | 117,114 | $232,151,087 | 17,436,328 | $13.31 | 2.56% | 14.34%
2026-Q1 | 62,241 | $124,494,974 | 9,095,547 | $13.69 | 2.97% | 13.06%

Performance Max - Quarterly

Quarter | Data Points | Total Spend | Clicks | CPC | CTR | Conv Rate
2024-Q4 | 93 | $19,859 | 14,142 | $1.40 | 1.30% | 2.26%
2025-Q1 | 1,555 | $2,453,760 | 655,209 | $3.75 | 1.96% | 6.15%
2025-Q2 | 3,725 | $1,549,197 | 1,846,945 | $0.84 | 1.50% | 21.65%
2025-Q3 | 13,516 | $24,709,521 | 6,227,635 | $3.97 | 1.62% | 15.62%
2025-Q4 | 19,483 | $89,669,853 | 11,656,532 | $7.69 | 1.99% | 24.58%
2026-Q1 | 9,208 | $32,757,458 | 6,248,097 | $5.24 | 2.30% | 23.13%
PMax Growth: PMax spend grew roughly 4,500x from Q4 2024 ($20K) to Q4 2025 ($89.7M), while conversion rates improved from 2.26% to 24.58%. PMax consistently outperforms Search on conversion rate (23.13% vs 13.06% in Q1 2026).

All Channels: Q1 2026 Snapshot

Channel | Spend | Clicks | CPC | CTR | Conv Rate
Search | $124,494,974 | 9,095,547 | $13.69 | 2.97% | 13.06%
Performance Max | $32,757,458 | 6,248,097 | $5.24 | 2.30% | 23.13%
Multi-Channel | $5,093,040 | 10,239,801 | $0.50 | 7.40% | 5.53%
Demand Gen | $1,971,305 | 1,563,225 | $1.26 | 1.70% | 14.08%
Video | $1,153,111 | 181,770 | $6.34 | 0.29% | 10.00%
Local Services | $1,112,791 | 204,123 | $5.45 | 10.63% | 11.04%
Shopping | $765,365 | 651,329 | $1.18 | 0.94% | 2.31%
Display | $393,966 | 391,530 | $1.01 | 0.79% | 1.63%
Smart | $144,308 | 343,787 | $0.42 | 3.06% | 7.81%

Cost per Conversion by Industry

CPC by Industry (Highest to Lowest)

Conversion Rate by Industry (Highest to Lowest)

Industry Performance Detail (sorted by Cost/Conv)

Industry | Campaigns | Median Budget/day | Total Spend | Avg CPC | Conv Rate | Cost/Conv
Real Estate | 109 | $67.00 | $22,665,532 | $22.09 | 4.62% | $478.14
Attorneys & Legal | 67 | $30.00 | $897,621 | $8.59 | 7.52% | $114.23
Health & Fitness | 198 | $20.00 | $6,755,947 | $5.10 | 4.65% | $109.68
Personal Services | 183 | $50.00 | $3,893,347 | $7.12 | 9.23% | $77.14
Dentists & Dental | 66 | $20.00 | $3,137,741 | $2.29 | 4.40% | $52.05
Physicians & Surgeons | 203 | $10.00 | $2,826,667 | $5.41 | 12.44% | $43.49
Education | 196 | $20.00 | $1,325,056 | $2.36 | 6.95% | $33.96
Travel | 167 | $30.00 | $9,019,185 | $1.48 | 6.03% | $24.54
Finance & Insurance | 126 | $20.00 | $10,870,091 | $7.02 | 30.87% | $22.74
Business Services | 560 | $25.00 | $9,048,738 | $3.36 | 16.29% | $20.63
Auto For Sale | 90 | $20.00 | $493,530 | $2.05 | 11.53% | $17.78
Apparel/Fashion | 106 | $19.00 | $1,338,405 | $0.38 | 2.88% | $13.19
Home & Home Improvement | 936 | $25.00 | $7,965,436 | $2.27 | 17.27% | $13.14
Auto Repair & Service | 145 | $20.00 | $809,758 | $1.49 | 11.92% | $12.50
Shopping & Gifts | 59 | $40.00 | $305,157 | $1.44 | 13.38% | $10.76
Animals & Pets | 48 | $14.40 | $512,304 | $1.22 | 11.76% | $10.37
Industrial & Commercial | 201 | $16.00 | $1,081,469 | $1.89 | 19.59% | $9.65
Beauty & Personal Care | 67 | $10.00 | $115,902 | $1.55 | 18.44% | $8.41
Arts & Entertainment | 66 | $10.00 | $166,304 | $1.48 | 18.83% | $7.86
Restaurants & Food | 108 | $14.00 | $123,896 | $0.55 | 11.22% | $4.90
Sports & Recreation | 54 | $6.75 | $240,167 | $0.28 | 6.53% | $4.29
Highest Cost/Conv: Real Estate ($478/conv), Attorneys ($114), Health & Fitness ($110) — high CPC + low conversion rates create expensive acquisitions. These industries need landing page optimization and conversion rate improvements.
Best Efficiency: Sports & Recreation ($4.29/conv), Restaurants ($4.90), Arts & Entertainment ($7.86) — low CPCs with decent conversion rates make these the most cost-efficient industries on the platform.

Spend & Cost-per-Conversion by Budget Tier

Performance by Budget Tier

Budget Tier | Campaigns | Avg Budget/day | Total Spend | CPC | CTR | Conv Rate | Cost/Conv
Under $10/day | 7,021 | $3.57 | $21,149,505 | $1.22 | 1.47% | 12.30% | $9.94
$10-25/day | 10,349 | $14.04 | $29,266,558 | $1.12 | 1.39% | 16.08% | $6.97
$25-50/day | 5,137 | $32.31 | $24,352,509 | $1.18 | 1.47% | 19.48% | $6.08
$50-100/day | 6,279 | $61.85 | $41,658,703 | $1.62 | 1.51% | 13.83% | $11.68
$100-250/day | 6,147 | $137.09 | $78,095,430 | $2.47 | 1.50% | 9.09% | $27.14
$250-500/day | 2,005 | $319.31 | $48,614,680 | $3.10 | 1.82% | 20.04% | $15.48
$500+/day | 3,094 | $8,436 | $1,178,108,588 | $9.01 | 2.47% | 10.34% | $87.11
Sweet Spot: The $25-50/day tier offers the best cost per conversion at $6.08, with a 19.48% conversion rate. The $500+/day tier dominates total spend (83%) but has the worst cost/conv at $87.11.
Negative Keywords
Match Types
OTTO Strategy
Tracking

Negative Keyword Adoption (Served Campaigns)

91.1%
OTTO — Has Neg KWs
99.0%
Imported — Has Neg KWs
53.9%
OTTO — Sync Enabled
59.0%
Imported — Sync Enabled
Origin | Served Campaigns | Has Neg KWs | Sync Enabled | Ever Synced
OTTO | 1,046 | 91.1% (953) | 53.9% (564) | 53.3% (557)
Imported | 310 | 99.0% (307) | 59.0% (183) | 62.6% (194)
Key Finding: Among campaigns that actually served (received impressions), negative keyword coverage is excellent: 91.1% of OTTO and 99.0% of Imported campaigns have negative keywords. The low adoption rates previously shown were misleading — they included the ~91.5% of OTTO campaigns that never served.

Negative Keyword Sources

Source | Campaign Origin | Neg Keywords | Campaigns
GOOGLE_ADS (imported from account) | Imported | 5,412,467 | 23,401
SEARCH_TERM (OTTO pipeline) | OTTO | 1,243,955 | 5,112
SEARCH_TERM (OTTO pipeline) | Imported | 248,925 | 1,768
NGRAM_PATTERN (OTTO pipeline) | OTTO | 152,852 | 4,707
GOOGLE_ADS (imported from account) | OTTO | 54,608 | 588
NGRAM_PATTERN (OTTO pipeline) | Imported | 24,074 | 1,380
OTTO's Neg KW Pipeline is Active: OTTO generates 1.45M negative keywords via automated SEARCH_TERM analysis and NGRAM pattern matching. It also applies neg KWs to 1,768 imported campaigns (249K neg KWs). The pipeline is working — the earlier low adoption numbers were an artifact of counting the 250K total campaigns (most never served).

Estimated Wasted Click Savings from Negative Keywords

With 16.4M negative keywords deployed across the platform, a significant volume of irrelevant clicks is being blocked before it costs money. Below is an impact estimate based on the keyword counts and the platform-wide average CPC.

16.4M
Total Negative Keywords
~$12.3M
Est. Wasted Spend Blocked / Year
~820K
Est. Irrelevant Clicks Blocked / Year
1.5%
Est. Spend Saved (of $838.9M)
Origin | Match Type | Neg Keywords | Est. Blocked Clicks/Yr | Est. Spend Saved/Yr
Imported | EXACT | 11,689,453 | ~584K | ~$8.8M
Imported | PHRASE | 2,877,030 | ~144K | ~$2.2M
Imported | BROAD | 1,804,666 | ~90K | ~$1.4M
Total | | 16,371,149 | ~820K | ~$12.3M
Methodology: Estimates assume ~5% of negative keywords actively block a query match per year, with an avg blocked CPC of ~$15 (platform-wide avg). Actual numbers require querying the SearchTermReport table for search terms with status=EXCLUDED. The SearchTermWaste audit model tracks real wasted spend per search term — a production query would give exact figures.
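The estimate reduces to simple arithmetic; this sketch reproduces the headline numbers from the stated assumptions (a ~5% annual block rate and a ~$15 average blocked CPC, both assumptions from the methodology note, not measured values):

```python
def estimate_blocked_spend(neg_keywords: int,
                           block_rate: float = 0.05,
                           avg_blocked_cpc: float = 15.0) -> tuple[float, float]:
    """Return (estimated blocked clicks/yr, estimated blocked spend/yr)."""
    blocked_clicks = neg_keywords * block_rate
    return blocked_clicks, blocked_clicks * avg_blocked_cpc

# Platform total from the table above: 16,371,149 negative keywords.
clicks, spend = estimate_blocked_spend(16_371_149)
# ~820K blocked clicks and ~$12.3M blocked spend per year
```

Replacing the assumed block rate with real counts from SearchTermReport rows with status=EXCLUDED would turn this into a measured figure.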
Key Insight: Imported campaigns carry the bulk of negative keywords (5.4M from Google Ads imports). OTTO's automated pipeline adds 1.45M from SEARCH_TERM analysis and NGRAM patterns. Among served campaigns, coverage is strong: 91.1% OTTO / 99.0% Imported. Total platform negative keywords: ~7.1M in the dedicated table + 16.4M at the ad-group level.

Negative Keyword Quality Audit

Deep analysis of OTTO-generated negative keywords: quality, conflicts, sync status, and pipeline bugs. Findings based on code review of negative_keyword_analyser.py, tasks.py, ads_connector.py, CommonAdServiceFieldsMixin, and production database queries. Reviewed with Gemini Pro.

46,303
Positive/Negative Conflicts
926,175
Neg KWs Never Pushed to Google
2,447
Campaigns with Conflicts
15.55%
Conv Rate (with Neg KWs)

Bug #1 (CRITICAL): 926K Neg KWs Never Synced to Google — Root Cause Found

926,175 OTTO-generated negative keywords sit at status=0, remote_status=0, action=NULL since March 2025. Root cause identified in code:

Root Cause — Missing action field: In negative_keyword_analyser.py line 467-475, analyze() creates NegativeKeyword objects via bulk_create() without setting the action field. The CommonAdServiceFieldsMixin (line 128) defines action as null=True, blank=True, so it defaults to None. But the Google Ads connector's CREATE_OPERATION_FILTERS (ads_connector.py line 113) requires action=SEND_TO_ACCOUNT (1) + remote_id=NULL + status=DRAFT (0). Since action=None, the connector never sees these keywords.

Second Root Cause — No orchestration: The analyze_negative_keywords Celery task (tasks.py line 1063) calls analyzer.analyze() but never triggers the negative_keywords_send_to_account task (line 1083). Even if action were set correctly, nothing would call the push task.
Source | Unpushed | Oldest | Newest
SEARCH_TERM | ~800,000 | 2025-03-14 | 2026-02-26
NGRAM_PATTERN | ~97,000 | 2025-03-14 | 2026-02-26
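A sketch of the fix implied by the root-cause analysis. The enum values and field names follow the report's description of the otto-ppc models and are assumptions, not verified against the codebase:

```python
# Assumed enum values per the report: action=SEND_TO_ACCOUNT (1),
# status=DRAFT (0). The bug: bulk_create() omitted `action`, so the
# connector's CREATE_OPERATION_FILTERS never matched these rows.
SEND_TO_ACCOUNT = 1
STATUS_DRAFT = 0

def build_negative_keyword_rows(candidates):
    """Prepare rows for bulk_create with the previously missing action field."""
    return [
        {
            "text": kw_text,
            "match_type": match_type,
            "is_negative": True,
            "status": STATUS_DRAFT,
            "action": SEND_TO_ACCOUNT,  # None here meant "never push to Google"
        }
        for kw_text, match_type in candidates
    ]
```

The second half of the fix is orchestration: after analyze() persists these rows, the Celery task should chain the negative_keywords_send_to_account push task rather than stopping at analysis.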

Bug #2 (CRITICAL): 46,303 Positive/Negative Keyword Conflicts

Neg KWs blocking positive keywords the campaign is actively bidding on. 2,447 campaigns affected (avg 18.9 conflicts each).

Conflict Timing | Count | Implication
Positive KW created first, neg KW added after | 30,720 | Validation bug — should have been caught
Neg KW created first, positive KW added later | 15,583 | No retroactive check exists
Root Cause — Sanitization strips match-type syntax: _get_existing_keywords() (line 310) correctly queries Keyword.objects.filter(...).values_list('value', flat=True) and lowercases results. However, the Keyword.value field may store Google Ads match-type syntax (e.g., [running shoes], "running shoes", +running +shoes). The comparison kw.text.lower().strip() in self.existing_keywords (line 403) compares sanitized text like running shoes against [running shoes] — which returns False. The brackets/quotes cause the lookup to miss the match.

Additional Issue: _get_existing_keywords() does NOT filter is_negative=False. It fetches ALL keywords (positive AND negative). This makes the exclusion set overly broad but wouldn't cause false negatives — the syntax mismatch is the real culprit.

No Retroactive Check: When new positive keywords are added to a campaign, there is no code that checks for existing negative keyword conflicts. This accounts for the 15,583 "neg created first" conflicts.
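The syntax mismatch described above suggests a normalization step before comparison. A hedged sketch (the helper names are ours, and the exact formats stored in Keyword.value would need verification):

```python
import re

def normalize_keyword(value: str) -> str:
    """Strip Google Ads match-type syntax so text comparisons succeed.

    Handles [exact] brackets, "phrase" quotes, and legacy +broad +modifiers.
    """
    v = value.strip().lower()
    v = v.strip('[]"')                 # drop exact/phrase wrappers
    v = re.sub(r"\+(\w)", r"\1", v)    # drop broad-match modifier plus signs
    return re.sub(r"\s+", " ", v).strip()

def conflicts(negative_text: str, positive_values: set) -> bool:
    """True if a candidate negative keyword would block an active positive."""
    normalized_positives = {normalize_keyword(p) for p in positive_values}
    return normalize_keyword(negative_text) in normalized_positives
```

With this normalization in place, a candidate negative "running shoes" would correctly collide with a stored positive "[running shoes]" instead of slipping through.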

Bug #3 (MEDIUM): Match Type Rule Violations

The LLM prompt says 3+ word terms should get PHRASE match. The validation code (lines 429-431) forces EXACT for 1-2 word terms but has no enforcement for 3+ word terms — the LLM's choice passes through unchecked.

Source | Match Type | Count | % of Source
SEARCH_TERM | EXACT | ~780,000 | 52%
SEARCH_TERM | PHRASE | ~690,000 | 46%
SEARCH_TERM | BROAD | ~30,000 | 2%
NGRAM_PATTERN | PHRASE | ~80,000 | 82%
NGRAM_PATTERN | EXACT | ~9,000 | 9%
NGRAM_PATTERN | BROAD | ~9,000 | 9%
Gemini Insight: Using EXACT for 3-word terms is "safe" but inefficient — it misses variations of bad traffic. However, the LLM should NOT be trusted for structural rules like match type. The LLM should only classify (is this a negative? yes/no), while Python code should enforce match type deterministically based on word count.

Performance Impact: Neg KWs Are Working

Despite the bugs, campaigns with OTTO negative keywords dramatically outperform those without.

Segment | Campaigns | Conv Rate | CPC | CTR | Cost/Conv
OTTO + neg KWs | 941 | 15.55% | $4.50 | 3.29% | $28.95
OTTO no neg KWs | 105 | 2.16% | $9.24 | 1.24% | $427.23
Imported + neg KWs | 306 | 14.96% | $7.14 | 2.59% | $47.75
Imported no neg KWs | 4 | 0.59% | $5.64 | 1.55% | $953.24
7x Higher Conv Rate: OTTO campaigns with negative keywords convert at 15.55% vs 2.16% without — a 7.2x improvement. Cost per conversion drops from $427 to $29. Even with 46K conflicts and 926K unpushed keywords, the neg KW pipeline delivers massive value. Fixing the bugs would amplify this further.

Audit Table: 2.67M Conflict Records (Passive)

The system tracks conflicts in audit_negativekeywordconflict — it has 2,675,257 records. It detects conflicts but does not prevent or resolve them. Per Gemini: this should trigger a "System Down" alarm, not a silent log entry.

Code Architecture

Component | File | Description | Issue
SimpleNegativeKeywordAnalyzer | negative_keyword_analyser.py:437 | Main orchestrator: search terms → ngrams → LLM → validate → create | Creates with action=None
analyze_negative_keywords | tasks.py:1063 | Celery task that calls analyzer.analyze() | Never triggers push task
negative_keywords_send_to_account | tasks.py:1083 | Celery task that pushes to Google via bulk_create_entities() | Works correctly when called
CREATE_OPERATION_FILTERS | ads_connector.py:113 | Requires action=1, remote_id=NULL, status=0 | Filter is correct
_get_existing_keywords | negative_keyword_analyser.py:301 | Gets existing keywords for conflict check | Doesn't strip match-type syntax
_validate_negative_keywords | negative_keyword_analyser.py:395 | Forces EXACT for 1-2 words, validates length/brand | No PHRASE enforcement for 3+
KeywordSanitizer | negative_keyword_analyser.py:143 | Strips special chars, normalizes Unicode | Working correctly
NgramExtractor | negative_keyword_analyser.py:186 | Extracts 2-3 word patterns, MIN_FREQ=5, <0.5% CVR | Working correctly

Specific Code Changes Required

Fix 1 (P0): Set action=SEND_TO_ACCOUNT in analyzer

File: app/utils/negative_keyword_analyser.py line 467-475

Change: Add action=CommonAdServiceFieldsMixin.SEND_TO_ACCOUNT when creating NegativeKeyword objects:

-  to_create = [
-      NegativeKeyword(
-          campaign=self.campaign,
-          text=neg_kw.text,
-          match_type=neg_kw.match_type,
-          source=neg_kw.source,
-          reason=neg_kw.reason,
-          expected_impact=neg_kw.expected_impact,
-      )
+  to_create = [
+      NegativeKeyword(
+          campaign=self.campaign,
+          text=neg_kw.text,
+          match_type=neg_kw.match_type,
+          source=neg_kw.source,
+          reason=neg_kw.reason,
+          expected_impact=neg_kw.expected_impact,
+          action=CommonAdServiceFieldsMixin.SEND_TO_ACCOUNT,
+      )

Fix 2 (P0): Chain analysis → push in Celery task

File: app/google_ads/tasks.py line 1063-1076

Change: After analyzer.analyze(), trigger negative_keywords_send_to_account with the new keyword IDs:

 def analyze_negative_keywords(self, campaign_id):
     from google_ads.models import Campaign
     from utils.negative_keyword_analyser import SimpleNegativeKeywordAnalyzer
     try:
         campaign = Campaign.objects.get(id=campaign_id)
         analyzer = SimpleNegativeKeywordAnalyzer(campaign)
-        analyzer.analyze()
+        result = analyzer.analyze()
+        if result.get("status") == "success" and campaign.ads_account_id:
+            neg_kws = result.get("negative_keywords", [])
+            new_ids = [kw.id for kw in neg_kws if kw.action == 1 and not kw.remote_id]
+            if new_ids:
+                negative_keywords_send_to_account.delay(
+                    negative_keyword_ids=new_ids,
+                    ads_account_id=campaign.ads_account_id
+                )

Fix 3 (P0): Strip match-type syntax in conflict check

File: app/utils/negative_keyword_analyser.py line 301-314

Change: Strip []""+ from keyword values before comparison, and filter to positive keywords only:

 def _get_existing_keywords(self) -> Set[str]:
     if not self.campaign:
         return set()
     ad_group_ids = AdGroup.objects.filter(
         campaign=self.campaign
     ).values_list('id', flat=True)
     keywords = Keyword.objects.filter(
         ad_group_id__in=ad_group_ids,
+        is_negative=False,
     ).values_list('value', flat=True)
-    return {kw.lower().strip() for kw in keywords if kw}
+    import re  # in practice, move this import to module level
+    def normalize(kw):
+        # strip only the match-type syntax characters: [ ] " +
+        return re.sub(r'[\[\]"+]', '', kw).lower().strip()
+    return {normalize(kw) for kw in keywords if kw}

Fix 4 (P1): Enforce PHRASE match for 3+ word terms

File: app/utils/negative_keyword_analyser.py line 429-431

Change: Add PHRASE enforcement after the EXACT override:

         words = kw.text.split()
         if len(words) <= 2:
             kw.match_type = MatchType.EXACT
+        elif len(words) >= 3:
+            kw.match_type = MatchType.PHRASE

         validated.append(kw)

Fix 5 (P1): One-time cleanup of existing 46K conflicts

Action: Run a Django management command or migration to delete negative keywords that conflict with active positive keywords in the same campaign. Gemini recommends: "Do NOT save conflicting neg KWs — if they conflict, the positive wins."

Fix 6 (P2): Retroactive conflict check on positive KW creation

Action: Add a post-save signal or hook on Keyword model creation that checks for conflicting negative keywords and removes them. This prevents the 15,583 "neg created first" scenario.
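A hypothetical shape for that retroactive check, written as a pure function the post-save hook could call — names are illustrative, and the real hook would query the campaign's NegativeKeyword rows and delete the matches:

```python
import re
from typing import Iterable, Set

def _norm(kw: str) -> str:
    # Strip match-type syntax ([ ] " +) and case-fold, as in Fix 3.
    return re.sub(r'[\[\]"+]', '', kw).lower().strip()

def conflicting_negatives(new_positive: str,
                          negatives: Iterable[str]) -> Set[str]:
    """Return the stored negative values that collide with a new positive."""
    target = _norm(new_positive)
    return {neg for neg in negatives if _norm(neg) == target}

negs = ['[running shoes]', 'free shoes', '"running shoes"']
assert conflicting_negatives('running shoes', negs) == {
    '[running shoes]', '"running shoes"',
}
```

Because the positive wins on conflict, the caller would delete (or skip pushing) everything this function returns.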

Fix 7 (P2): Handle the 926K backlog carefully

Gemini Warning: Do NOT batch-push the 926K unpushed keywords. They are stale (oldest from March 2025), likely contain ~50K+ conflicts (Bug #2 was active when they were generated), and may exceed Google Ads' 10K neg keywords per campaign limit. Recommended approach:

  1. Discard anything older than 90 days (the signal is too stale)
  2. Re-validate recent keywords through the patched _validate_negative_keywords()
  3. Check campaign-level limits (Google max: 10,000 neg KWs per campaign)
  4. Drip-feed validated survivors in batches of 500/day per account to monitor for API errors
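Step 4's drip-feed reduces to a fixed-size chunker; the one-batch-per-day-per-account cadence would be driven by a scheduler such as Celery beat (the function name is illustrative):

```python
from typing import Iterable, Iterator, List

def daily_batches(keyword_ids: Iterable[int],
                  batch_size: int = 500) -> Iterator[List[int]]:
    """Yield batches of at most batch_size IDs, in order."""
    ids = list(keyword_ids)
    for start in range(0, len(ids), batch_size):
        yield ids[start:start + batch_size]

# 1,200 surviving keywords -> three daily pushes: 500, 500, 200
sizes = [len(b) for b in daily_batches(range(1200))]
assert sizes == [500, 500, 200]
```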

Fix 8 (P2): Alert on audit_negativekeywordconflict

Action: Wire the 2.67M-record audit table to a Slack alert. A >1K daily conflict rate should trigger investigation.

Additional Risks Identified by Gemini

Risk | Description | Severity
Google Ads API Limits | 10K neg KWs per campaign limit. Flushing 926K without checking current list sizes could hit API limits. | High
Close Variant Mismatch | Google expands positive keywords to close variants (typos, plurals). The validator's string comparison doesn't account for this — running shoes vs runing shoes. | Medium
Campaign vs Ad Group Scope | Campaign-level negatives block ALL ad groups. The validator only checks keywords in the campaign's ad groups, so a scope mismatch could cause unexpected blocking. | Medium
LLM Non-Determinism | The LLM ignores structural rules (match type). Gemini recommends: use the LLM only for classification (is this negative? yes/no) and Python for match type assignment. | Medium

Match Type Distribution

Origin | Match Type | Type | Keywords | Campaigns
Imported | EXACT | Positive | 9,254,242 | 54,067
Imported | PHRASE | Positive | 7,618,427 | 49,849
Imported | BROAD | Positive | 7,194,653 | 49,002
OTTO | EXACT | Positive | 1,949,733 | 17,308
OTTO | BROAD | Positive | 464,378 | 2,874
OTTO | PHRASE | Positive | 258,727 | 3,603
Imported | EXACT | Negative | 11,689,453 | 26,219
Imported | PHRASE | Negative | 2,877,030 | 15,105
Imported | BROAD | Negative | 1,804,666 | 10,342

Conversion Rate by Match Type (Actual Platform Data)

Dominant Match Type | Conv Rate | CTR | Avg CPC | Cost/Conv | Campaigns
EXACT | 14.59% | 2.91% | $4.61 | $31.59 | 1,209
PHRASE | 12.39% | 2.69% | $24.93 | $201.13 | 53
BROAD | 23.34% | 2.88% | $3.41 | $14.60 | 74
Methodology: Campaigns classified by their dominant match type (>50% of keywords). Performance from google_ads_campaignhistoricalperformance. Broad's high conv rate (23.34%) comes from only 74 campaigns with low CPC ($3.41) — likely niche/branded campaigns. Exact match has the strongest sample (1,209 campaigns) and best cost efficiency at scale.

OTTO vs Imported by Dominant Match Type

Match Type | Origin | Conv Rate | CTR | Avg CPC | Cost/Conv | Campaigns
EXACT | OTTO | 15.51% | 3.14% | $4.57 | $29.48 | 975
EXACT | Imported | 11.98% | 2.41% | $4.72 | $39.37 | 234
BROAD | OTTO | 18.81% | 7.55% | $3.79 | $20.13 | 14
BROAD | Imported | 23.79% | 2.71% | $3.37 | $14.16 | 60
PHRASE | OTTO | 7.49% | 2.55% | $6.16 | $82.26 | 40
PHRASE | Imported | 20.26% | 2.94% | $55.00 | $271.50 | 13
Key Finding: For Exact match (the largest sample), OTTO outperforms Imported: +29% higher conv rate (15.51% vs 11.98%), +30% higher CTR (3.14% vs 2.41%), and 25% lower cost per conversion ($29.48 vs $39.37). Broad and Phrase samples are too small for reliable comparison.

OTTO's Exact-Match Strategy: Conversion Rate

+7.7% Higher Conversion Rate: OTTO Managed campaigns (73% Exact match) convert at 13.89% vs 12.90% for Non-OTTO (38% Exact). Exact match targets high-intent searchers, reducing wasted spend on irrelevant queries.

OTTO's Exact-Match Strategy: Click-Through Rate

+88% Higher CTR: OTTO Managed campaigns achieve 4.42% CTR vs 2.35% for Non-OTTO. Exact match keywords produce more relevant ad impressions, resulting in dramatically more clicks per impression.

OTTO vs Non-OTTO: Full Comparison

Metric | OTTO Managed (73% Exact) | Non-OTTO (38% Exact) | Delta
Conv Rate | 13.89% | 12.90% | +7.7%
CTR | 4.42% | 2.35% | +88%
Median CPC (Search) | $5.40 | $5.75 | OTTO 6% lower

Conversion Tracking Adoption

63.8%
Campaigns with Conversion Tracking (28,627 of 44,850)

Bidding Strategy: Spend & Conversion Rate

Bidding Strategy Performance

Strategy | Origin | Campaigns | Spend | Avg CPC | CTR | Conv Rate
MAXIMIZE_CONVERSIONS | Imported | 16,031 | $959,950,426 | $59.46 | 6.18% | 9.96%
MAXIMIZE_CONVERSIONS | OTTO | 1,531 | $107,441,674 | $511.19 | 7.74% | 14.27%
MAXIMIZE_CONV_VALUE | Imported | 3,185 | $89,074,910 | $20.53 | 5.16% | 17.31%
MAXIMIZE_CONV_VALUE | OTTO | 84 | $176,737 | $5.91 | 8.66% | 28.45%
TARGET_SPEND | Imported | 5,238 | $88,901,941 | $19.22 | 6.46% | 7.41%
MANUAL_CPC | Imported | 6,739 | $72,053,623 | $8.59 | 5.35% | 18.06%
TARGET_CPA | Imported | 3,908 | $63,807,566 | $8.39 | 5.61% | 3.24%
TARGET_IMPRESSION_SHARE | Imported | 1,671 | $29,889,017 | $34.12 | 5.96% | 5.45%
TARGET_ROAS | Imported | 481 | $6,205,367 | $1.43 | 1.23% | 5.65%
TARGET_CPM | Imported | 446 | $5,913,503 | $9.32 | 0.37% | 10.56%

Monthly CPC & Conversion Rate Seasonality

PMax vs Search: Monthly CPC Trend

Monthly Seasonality Detail

Month | Total Spend | Clicks | CPC | CTR | Conv Rate
Jan | $110,728,472 | 17,150,664 | $6.46 | 2.72% | 12.35%
Feb | $95,137,751 | 13,722,699 | $6.93 | 2.71% | 10.99%
Mar | $30,631,380 | 1,522,101 | $20.12 | 2.76% | 5.57%
Apr | $22,571,908 | 1,633,071 | $13.82 | 2.52% | 8.73%
May | $16,358,475 | 2,307,705 | $7.09 | 1.79% | 12.36%
Jun | $6,235,914 | 3,232,929 | $1.93 | 1.90% | 12.48%
Jul | $5,868,650 | 2,086,254 | $2.81 | 2.14% | 7.31%
Aug | $46,710,001 | 8,320,376 | $5.61 | 2.23% | 13.71%
Sep | $121,748,108 | 17,828,864 | $6.83 | 2.32% | 11.26%
Oct | $101,702,314 | 15,445,159 | $6.58 | 2.31% | 12.97%
Nov | $127,477,031 | 14,152,769 | $9.01 | 2.24% | 17.05%
Dec | $153,750,671 | 17,112,134 | $8.98 | 2.30% | 14.84%

Adversarial Review (Gemini Pro Analysis)

Top 5 Statistical Biases & Methodological Flaws

  1. The "Mean" Trap: OTTO's Mean CPC is $396 while Median is $5.40. A few runaway campaigns destroy the average. Always use median or trimmed means.
  2. Match Type Confounder: OTTO uses 73% EXACT match vs Imported's balanced mix. Higher CTR/Conv for OTTO may simply reflect match type, not platform superiority.
  3. Legacy vs Launch Bias: Imported 0-4 week campaigns show 30.73% conversion, suggesting these are mature branded campaigns recently imported, not new cold starts.
  4. "Whale" Distortion (Simpson's Paradox): 94.89% of spend from 176 accounts means aggregate metrics reflect whale behavior, not platform efficacy for other users.
  5. Currency/Data Integrity: CPCs of $1,073 and $511 are not market rates. Likely data quality issues invalidating financial analysis.

5 Alternative Interpretations (Counter-Narrative)

  1. OTTO is 9.3% more expensive per acquisition: $5.40 CPC / 13.95% CVR = $38.70 CPA vs Imported $3.57 / 10.08% = $35.41 CPA.
  2. OTTO's "success" is restricted reach: Heavy EXACT match skims high-intent traffic but cannot scale.
  3. Maximize Conversions bidding is broken: OTTO's $511 avg CPC suggests algorithm malfunction.
  4. Imported campaigns "carry the load": They run top-of-funnel campaigns that feed the funnel OTTO converts.
  5. "New Campaign" comparison is rigged: Imported "new" campaigns are mature campaigns recently copied.

5 Analyses for Greater Robustness

  1. CPA & ROAS comparison (combine CPC + CVR)
  2. Brand vs Non-Brand segmentation
  3. Winsorized means (exclude top/bottom 5%)
  4. Same-store cohort (accounts using BOTH platforms)
  5. Channel-specific breakdown (Search-to-Search only)

3 Hidden Stories

  1. "Mid-Life Crisis" (Weeks 4-12): OTTO CPC explodes to $1,073 in month 2-3 before stabilizing week 26.
  2. PMax Cannibalizing Search: Search CPC dropped from $490 to $13 while PMax spend exploded to $89M.
  3. Negative Keywords Working (But Buggy): Campaigns with neg KWs convert at 15.55% vs 2.16% without (7x). But quality audit found 46,303 positive/negative conflicts across 2,447 campaigns, 926K neg KWs never pushed to Google, and match type rule violations. Fixing these bugs could significantly amplify the already strong performance lift.

Gemini Deep Research: 25 Advanced Analyses Framework

Gemini Pro conducted autonomous 18-minute deep research identifying additional analytical dimensions.

I: Causal Inference & AI Validation

  1. Propensity Score Matching: Eliminate selection bias with PSM on industry, budget, geo, campaign type.
  2. Optimization Velocity: Measure how fast OTTO improves vs Imported via logarithmic curve fitting.
  3. Volatility Index: Compare stability using Rolling StdDev and Coefficient of Variation.
  4. Rehabilitation Cohort: Analyze imported campaigns after OTTO AI takeover using Regression Discontinuity.

II: Economic Efficiency

  1. Marginal ROAS: Find saturation point via polynomial regression on spend vs revenue.
  2. Price Elasticity of CPC: Measure bid sensitivity by industry.
  3. Budget vs Impression Share: Quantify opportunity cost of restricted budgets.
  4. CPA Heatmaps: 24x7 matrices for optimal ad scheduling.

III: Semantic Intelligence

  1. N-Gram Clustering: Find semantic patterns in search terms driving performance.
  2. Cannibalization Index: Detect internal keyword competition.
  3. LP Semantic Relevance: NLP scoring of ad-to-landing-page content match.
  4. Ad Copy Sentiment: Emotional trigger analysis by industry.

IV: Competitive Dynamics

  1. Auction Intensity: Correlate CPC with competitor overlap.
  2. Relative Performance Index: Normalize metrics by industry benchmarks.
  3. Geo-Spatial Arbitrage: Identify high-efficiency geographic pockets.

V: Predictive Analytics

  1. Churn Prediction: Cox survival analysis for at-risk campaigns.
  2. Seasonality Decomposition (STL): Separate trend, seasonal, and residual components.
  3. PMax + Search Interaction: Check for cross-channel cannibalization.
  4. Conversion Lag: Determine true sales cycle length.
  5. Quality Score Proxy: Build QS estimator from CTR, LP conv, relevance.

VI: Advanced Structural

  1. Account Complexity vs Performance: Test Hagakure vs SKAGs hypothesis.
  2. New vs Returning Visitor CPA: Acquisition vs retention efficiency.
  3. Match Type Erosion: Track Broad Match quality degradation over time.
  4. Cross-Sell Opportunity: Association rule mining for product co-purchase.
  5. Funnel Leakage: Diagnose drop-off points in the conversion funnel.

Gemini Deep Research: Negative Keyword Pipeline Analysis

Four parallel deep research tasks investigated the specific bugs found in the OTTO neg KW pipeline. Total research time: ~20 minutes. Key findings synthesized below.

Research 1: Positive/Negative Keyword Conflicts

Critical Google Ads Rule: Unlike positive keywords, negative keywords do NOT match close variants. Adding "flower" as a negative does NOT block "flowers" (plural) or "flwer" (typo). Advertisers must explicitly add all variations. This means OTTO's exact-text conflict check is correct in principle, but it must also strip match-type syntax ([]""+) from stored keyword values before comparison.
Finding | Detail | Implication for OTTO
Google resolves conflicts by match type priority | If both a positive and negative keyword match a query, the positive keyword wins at the same level (ad group). But a campaign-level negative overrides ad-group positives. | Campaign-level neg KWs (which OTTO creates) CAN block ad-group positive keywords
No close variant expansion for negatives | Negative exact [shoes] only blocks the query "shoes", not "shoe" or "running shoes" | OTTO's 331K three-word EXACT negatives are overly narrow — PHRASE would catch more waste variations
Automated detection at scale | Best practice: maintain a hash set of all positive keywords (normalized, syntax-stripped) and check every negative candidate against it before creation | OTTO does this but fails due to syntax mismatch — Fix #3 (strip []""+) resolves it
Performance impact of conflicts | A single campaign-level negative can suppress ALL ad groups' positive keywords for that term | 46,303 conflicts across 2,447 campaigns = significant revenue suppression

Research 2: Google Ads API Sync Pipeline Architecture

Key Architecture Insight: For bulk operations (like pushing 926K neg KWs), Google recommends using BatchJobService (async) instead of synchronous CampaignCriterionService.mutate. BatchJobService allows up to 1M operations per job, handles retries internally, and doesn't block worker threads.
Limit | Value | OTTO Impact
Neg KWs per campaign | 10,000 | Must check before pushing backlog — some campaigns may be near limit
Neg KWs per Shared Set | 5,000 | Could use shared sets for universal negatives (free, cheap, diy)
Shared Sets per account | 20 | Use for "Global Negatives" applied across campaigns
Ops per BatchJob request | 10,000 (or 10MB) | Dynamic batching needed — monitor byte size, not just count
Rate limiting | Token bucket per developer token + per CID | Must implement Redis-backed global throttle across Celery workers
Celery Architecture Fix: The research recommends separating queues: q_interactive (user clicks "Block" → sync mutate) and q_batch_sync (background sweep → BatchJobService). OTTO currently uses a single background queue. The analyze_negative_keywords task should chain to a submit_negative_batch task that uses BatchJobService with polling (not synchronous wait).
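The "Ops per BatchJob request" row above (10,000 operations or ~10MB) implies batching on both count and serialized size. A generic sketch of that dynamic batching — json stands in for the real protobuf request sizing, and all names are illustrative:

```python
import json
from typing import Iterable, Iterator, List

MAX_OPS = 10_000              # ops per AddBatchJobOperations request
MAX_BYTES = 10 * 1024 * 1024  # ~10MB request ceiling

def size_aware_batches(operations: Iterable[dict],
                       max_ops: int = MAX_OPS,
                       max_bytes: int = MAX_BYTES) -> Iterator[List[dict]]:
    """Group ops so each batch respects both the count and byte limits."""
    batch: List[dict] = []
    batch_bytes = 0
    for op in operations:
        op_bytes = len(json.dumps(op).encode())
        if batch and (len(batch) >= max_ops
                      or batch_bytes + op_bytes > max_bytes):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(op)
        batch_bytes += op_bytes
    if batch:
        yield batch

ops = [{"text": f"keyword {i}"} for i in range(25)]
assert [len(b) for b in size_aware_batches(ops, max_ops=10)] == [10, 10, 5]
```

Each yielded batch would become one AddBatchJobOperations call; the surrounding task submits the job and polls for completion rather than waiting synchronously.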

Research 3: LLM-Driven Negative Keyword Generation

Core Insight: LLMs excel at semantic intent classification but are unreliable for structural rules (match types). The recommended architecture: N-grams filter obvious noise (90-95% of data), LLM handles semantic ambiguity on the remaining 5-10%, and Python enforces match types deterministically.
Finding | Current OTTO Behavior | Recommended Change
LLMs hallucinate close-variant behavior | LLM assigns EXACT to 3+ word terms, believing it blocks variations | Never let LLM choose match type — use Python rules only
LLMs are non-deterministic | Same prompt can yield different match type choices across runs | Set temperature=0, use LLM only for yes/no classification
N-gram should run first | OTTO runs both in parallel, sends both to LLM | Run N-gram first, only send ambiguous terms to LLM
Validation architecture | Single validation pass | Add "LLM-as-Judge" second pass + impact simulation against historical data
Negative-to-Positive ratio benchmark | Unknown | High-performing accounts: 3:1 to 5:1 ratio (negs to positives)

Match Type Decision Matrix (should replace LLM discretion):

Match Type | When to Use | Word Count Rule
Negative Broad | Single words universally irrelevant (free, torrent, job, diy) | 1 word only, verified 0% conversion across all contexts
Negative Exact | Traffic sculpting between ad groups, specific high-volume queries | 1-2 words (OTTO's current rule is correct for this)
Negative Phrase | Multi-word concepts where sequence matters (default safe choice) | 3+ words (OTTO should enforce this)
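Encoded in code, the matrix becomes a deterministic rule that removes LLM discretion entirely. The enum values and the single-word junk allowlist below are illustrative, not OTTO's actual definitions:

```python
EXACT, PHRASE, BROAD = "EXACT", "PHRASE", "BROAD"

# Single words verified as universally irrelevant (illustrative list).
UNIVERSAL_JUNK = {"free", "torrent", "job", "diy"}

def assign_match_type(term: str) -> str:
    """Word-count rule from the decision matrix; no LLM involvement."""
    words = term.split()
    if len(words) == 1 and words[0].lower() in UNIVERSAL_JUNK:
        return BROAD
    if len(words) <= 2:
        return EXACT     # OTTO's current 1-2 word rule
    return PHRASE        # the missing 3+ word enforcement

assert assign_match_type("torrent") == BROAD
assert assign_match_type("cheap shoes") == EXACT
assert assign_match_type("cheap running shoes review") == PHRASE
```

The LLM's only remaining job is the yes/no classification; match type is assigned after the fact by this rule.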

Research 4: Performance Impact Benchmarks

OTTO is outperforming industry benchmarks: Industry data from 15,000 accounts shows accounts with neg KWs convert at 13% vs 4.6% without (3x lift). OTTO's data shows 15.55% vs 2.16% (7.2x lift). OTTO's neg KW pipeline delivers more than double the industry-average improvement.
Metric | Industry Benchmark | OTTO Actual | Comparison
Conv rate WITH neg KWs | 13% | 15.55% | +20% above benchmark
Conv rate WITHOUT neg KWs | 4.6% | 2.16% | Worse — suggests OTTO campaigns need negs more
Conversion lift factor | 3x | 7.2x | 2.4x better than industry
CTR improvement with negs | +89% | +165% (3.29% vs 1.24%) | Nearly double industry lift
CPA reduction with negs | -67% | -93% ($28.95 vs $427.23) | Massively better
Typical monthly waste | $1,127/account | 926K unpushed keywords suggest much higher potential | Fixable waste
Over-Negation Risk: Research warns that aggressive negation can reduce conversions by 30-40% if it blocks legitimate long-tail queries. OTTO's 46K conflicts ARE an over-negation issue — actively blocking terms the campaign is bidding on. Additionally, using EXACT match for 3+ word terms is a mild form of "under-negation" — it misses variations that PHRASE would catch.

Revised Priority List (Post-Research)

Priority | Fix | Impact | Research Backing
P0 | Set action=SEND_TO_ACCOUNT + chain Celery tasks | Unblocks 926K keywords from reaching Google | API research confirms BatchJobService for bulk push
P0 | Strip match-type syntax in conflict check | Prevents future conflicts | Conflict research confirms syntax as root cause
P0 | Delete 46K existing conflicts | Unsuppresses revenue on 2,447 campaigns | Google rules: campaign neg overrides ad-group positive
P1 | Enforce PHRASE for 3+ words in Python (not LLM) | Better coverage of waste variations | LLM research: never trust LLM for structural rules
P1 | Check campaign neg KW limits before pushing backlog | Prevents API errors (10K/campaign limit) | API research: hard limit causes RESOURCE_COUNT errors
P1 | Implement retroactive conflict check | Prevents 15K+ "neg created first" conflicts | Conflict research: no automated systems do this well
P2 | Switch to BatchJobService for bulk operations | Higher throughput, automatic retries | API research: mandatory for >10 keywords per operation
P2 | Implement Redis-backed global rate limiter | Prevents RESOURCE_EXHAUSTED errors | API research: token bucket algorithm across workers

Diagnostic analysis of campaign activation, conversion tracking health, and structural efficiency. Source: otto-ppc production DB, Feb 2026. All metrics use median.

96.4%
OTTO Campaigns With Zero Clicks

72,273 of 74,972 campaigns never entered a Google auction

1.3%
OTTO Campaigns With Conversions

Only 966 of 74,972 campaigns have ever tracked a conversion

$73.7M
Zombie Spend (50+ clicks, 0 conv)

$68.4M Imported + $5.3M OTTO on campaigns that spend but never convert

16
Median OTTO AdGroups/Campaign

Imported median: 2-3. Over-segmentation causes "Low Search Volume"

Critical Finding: The bidding strategy debate is irrelevant for 96.4% of OTTO campaigns because they never enter a Google auction. The platform has a "Failure to Launch" crisis, not an optimization problem. OTTO is creating massive, over-segmented campaign structures (16 adgroups) that Google's algorithm ignores or marks as "Low Search Volume."

Campaign Activation Funnel

74,972
OTTO Created
2,699
Got Any Clicks (3.6%)
966
Got Conversions (1.3%)
Origin | Total | Zero Clicks | Got Clicks | Has Conversions | % Activated | % Converting
OTTO | 74,972 | 72,273 | 2,699 | 966 | 3.6% | 1.3%
Imported | 175,127 | 117,757 | 57,370 | 26,376 | 32.8% | 15.1%
Why 96.4% ghost campaigns? Root causes (ranked by likelihood): 1) Over-segmentation into 16 adgroups triggers Google's "Low Search Volume" filter. 2) Campaigns created in PAUSED state, user never activates. 3) Google account suspended/no payment method. 4) Keywords have zero search demand. 5) Budget too low for competitive auctions ($5/day spread across 16 adgroups = $0.31/adgroup).

Zero-Conversion Rate by Click Volume

Click Bucket | Origin | Campaigns | Zero Conv % | Total Spend | Wasted Spend
0 clicks | OTTO | 72,273 | 100.0% | $0 | $0
1-9 clicks | OTTO | 585 | 92.0% | $760K | $328K
10-49 clicks | OTTO | 870 | 74.1% | $2.8M | $1.2M
50-199 clicks | OTTO | 681 | 53.2% | $26.6M | $2.6M
200-999 clicks | OTTO | 458 | 34.9% | $49.2M | $2.7M
1000+ clicks | OTTO | 105 | 26.7% | $28.6M | $23K
1-9 clicks | Imported | 6,101 | 89.8% | $1.1M | $809K
10-49 clicks | Imported | 8,131 | 71.8% | $6.4M | $3.8M
50-199 clicks | Imported | 10,375 | 53.2% | $35.7M | $4.9M
200-999 clicks | Imported | 13,415 | 43.8% | $119.2M | $16.1M
1000+ clicks | Imported | 19,348 | 42.8% | $1.17B | $47.4M
The $47.4M Elephant: 42.8% of Imported campaigns with 1,000+ clicks have zero conversions. That pattern is statistically implausible if conversion tracking is working — it points to a broken pixel/tag problem, not a performance problem. These campaigns likely generate conversions that are never attributed.
OTTO Positive Signal: At 1,000+ clicks, only 26.7% of OTTO campaigns have zero conversions vs 42.8% for Imported. OTTO's tracking URL setup (99.1% vs 54.6%) gives it better conversion attribution at scale.

Converting vs Non-Converting Campaigns

Origin | Status | Campaigns | Median CPC | Median CTR | Median Clicks | Median Spend | Median Days | Median AdGroups
OTTO | Converting | 966 | $4.48 | 6.64% | 122 | $642 | 177 | 16
OTTO | Non-Converting | 1,733 | $1.34 | 6.46% | 21 | $16 | 174 | 16
Imported | Converting | 26,376 | $3.22 | 3.78% | 632 | $2,126 | 1,150 | 3
Imported | Non-Converting | 30,994 | $0.00 | 3.49% | 143 | $0 | 1,217 | 2
The CPC Quality Signal: Converting OTTO campaigns pay $4.48/click (premium, high-intent traffic). Non-converting ones pay $1.34/click (cheap, low-quality traffic). Higher CPC = better audience = conversions. The platform should monitor CPC as a leading indicator of campaign health.
The Volume Gap: Converting OTTO campaigns have 122 median clicks vs only 21 for non-converting. Most non-converters simply haven't accumulated enough data for Google's algorithm to optimize. Minimum viable spend threshold appears to be ~$300-$600.

Campaign Structure: AdGroups vs Performance

Origin | AdGroup Bucket | Campaigns | Median CPC | Median CTR | Median CVR
Imported | 1 adgroup | 18,491 | $1.01 | 2.97% | 0.00%
Imported | 2-5 adgroups | 11,879 | $1.50 | 4.58% | 0.00%
Imported | 6-10 adgroups | 5,048 | $1.81 | 4.78% | 0.00%
Imported | 11+ adgroups | 9,621 | $2.82 | 4.71% | 0.43%
OTTO | 11+ adgroups | 2,614 | $2.72 | 6.50% | 0.00%
Paradox: More adgroups actually correlates with higher CTR and the only non-zero CVR (0.43% at 11+). But OTTO's 16-adgroup structure delivers strong CTR (6.5%) with zero median CVR — the ads attract clicks, but the landing pages or conversion tracking fail to close.

Campaign Churn: Where Do Campaigns Die?

Origin | Status | Campaigns | P25 Spend | Median Spend | P75 Spend | Median Days | Median Clicks
OTTO | Enabled | 6,533 | $0 | $0 | $0 | 75 | 0
OTTO | Paused | 67,735 | $0 | $0 | $0 | 171 | 0
OTTO | Removed | 698 | $0 | $0 | $0 | 136 | 0
Imported | Enabled | 23,726 | $0 | $0 | $0 | 322 | 0
Imported | Paused | 91,553 | $0 | $0 | $30 | 1,073 | 0
Imported | Removed | 59,848 | $0 | $0 | $0 | 861 | 0
90.3% of OTTO campaigns are Paused (67,735 of 74,972). Median spend = $0, median clicks = 0, after a median 171 days. Users create campaigns via OTTO, they never serve, and eventually get paused. Even "Enabled" OTTO campaigns have median $0 spend and 0 clicks — they are technically "on" but Google won't serve them.

Recommendations: Priority Action Items

P0: Fix the "Ghost Campaign" Crisis
  1. Add 48-hour activation check: if impressions=0 after 48h, trigger troubleshoot workflow
  2. Reduce adgroup count from 16 to 1-3 per campaign — modern Google favors consolidation
  3. Validate Google Ads account health (payment, suspension) BEFORE campaign creation
  4. Check keyword search volume BEFORE building adgroups — reject zero-volume keywords
P1: Fix the $73.7M Conversion Tracking Gap
  1. Flag any campaign with >$500 spend + 0 conversions as "Tracking Broken"
  2. Auto-pause campaigns after $200 spend with 0 conversions until user verifies pixel
  3. Build "Conversion Health" status on campaign dashboard
  4. Require conversion action validation before enabling Smart Bidding
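The P1 thresholds above can be sketched as a single status function. The dollar cutoffs are this report's proposals, not live product values:

```python
def tracking_health(spend: float, conversions: int) -> str:
    """Classify a campaign's conversion-tracking health (proposed rules)."""
    if conversions > 0:
        return "OK"
    if spend > 500:
        return "TRACKING_BROKEN"  # flag on the dashboard (P1 item 1)
    if spend > 200:
        return "AUTO_PAUSE"       # pause until the user verifies the pixel
    return "WATCHING"             # not enough spend to judge yet

assert tracking_health(650.0, 0) == "TRACKING_BROKEN"
assert tracking_health(250.0, 0) == "AUTO_PAUSE"
assert tracking_health(1000.0, 12) == "OK"
```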
P2: Establish Minimum Viable Campaign
  1. Minimum $10/day budget per campaign (current: many at $5/day across 16 adgroups)
  2. Launch on Manual CPC for first 4 weeks, auto-switch to Maximize Conversions after 15+ conversions
  3. Monitor CPC as health indicator: <$1.50 = likely junk traffic, >$3 = likely quality
  4. Set minimum 100 clicks before evaluating campaign performance
P3: Build Campaign Health Monitoring
  1. Real-time "Activation Rate" metric: % of campaigns with >1 impression in last 7 days
  2. "Zombie Alert" for campaigns spending but not converting
  3. Weekly report: campaigns created vs campaigns serving vs campaigns converting
  4. CPC quality score: flag campaigns buying sub-$1 traffic as potential waste

All metrics use median (P50), not average. Source: otto-ppc production database, Feb 2026. Only strategies with ≥10 campaigns shown.

$3.02
OTTO Max Conv Median CPC
$2.38
Imported Max Conv Median CPC
0.87x
Max Conv Learning Overshoot
23.9%
OTTO Max Conv Survival Rate
Key Finding: Maximize Conversions has the lowest learning-phase overshoot (0.87x) — early campaigns actually have lower CPC than mature ones. The CPC spike in the trend chart is driven by outliers, not systemic overbidding. Target Impression Share (1.84x) and Manual CPC (1.60x) have far worse learning curves.

CPC by Campaign Age: Maximize Conversions

OTTO vs Imported (Maximize Conversions): OTTO campaigns peak at $6.66 CPC during weeks 4-12 then settle to $4.22. Imported campaigns stay flatter at $2.68-$3.27. OTTO has higher CTR (6.3% vs 4.3%) throughout all age cohorts, suggesting more competitive keyword targeting.

Learning Phase Overshoot by Strategy

Strategy | Origin | Early CPC | Mature CPC | Overshoot Ratio | Early N | Mature N
TARGET_IMPRESSION_SHARE | Imported | $19.26 | $10.49 | 1.84x | 102 | 1,393
MANUAL_CPC | Imported | $5.12 | $3.19 | 1.60x | 74 | 6,400
TARGET_ROAS | Imported | $1.21 | $0.82 | 1.48x | 14 | 305
MAXIMIZE_CONVERSION_VALUE | Imported | $1.33 | $1.11 | 1.21x | 144 | 2,335
TARGET_CPA | Imported | $6.91 | $6.45 | 1.07x | 57 | 3,831
TARGET_SPEND | Imported | $2.19 | $2.18 | 1.00x | 191 | 4,136
MAXIMIZE_CONVERSIONS | Imported | $2.84 | $3.27 | 0.87x | 666 | 11,270
Surprise: Maximize Conversions does NOT overbid during learning. Its 0.87x ratio means early campaigns are actually cheaper than mature ones. The previous dashboard's "$1,073 CPC in month 2-3" was an average inflated by extreme outliers. Median tells the real story.

CPC Distribution by Strategy (Percentiles)

Strategy | Origin | N | P10 | P25 | P50 | P75 | P90 | P95 | IQR
TARGET_IMPRESSION_SHARE | Imported | 2,167 | $0.00 | $0.76 | $7.96 | $16.01 | $23.57 | $29.69 | $15.26
TARGET_CPA | Imported | 4,286 | $0.23 | $2.35 | $5.83 | $10.20 | $17.27 | $22.94 | $7.85
MAXIMIZE_CONVERSIONS | OTTO | 2,030 | $0.00 | $0.31 | $3.02 | $8.00 | $20.26 | $44.46 | $7.69
TARGET_SPEND | OTTO | 422 | $0.00 | $0.00 | $2.56 | $8.71 | $17.58 | $29.44 | $8.71
MAXIMIZE_CONVERSIONS | Imported | 22,951 | $0.00 | $0.00 | $1.52 | $5.50 | $14.26 | $25.87 | $5.50
MAXIMIZE_CONV_VALUE | OTTO | 132 | $0.00 | $0.00 | $1.29 | $3.94 | $9.60 | $18.70 | $3.94
MANUAL_CPC | Imported | 11,398 | $0.00 | $0.00 | $0.76 | $4.70 | $12.35 | $19.81 | $4.70
TARGET_SPEND | Imported | 8,460 | $0.00 | $0.00 | $0.67 | $3.59 | $14.19 | $30.69 | $3.59
TARGET_ROAS | Imported | 727 | $0.00 | $0.00 | $0.48 | $1.08 | $1.93 | $2.53 | $1.08
MAXIMIZE_CONV_VALUE | Imported | 5,419 | $0.00 | $0.00 | $0.44 | $1.48 | $4.09 | $8.17 | $1.48
Outlier Impact: OTTO Maximize Conversions has P50=$3.02 but P95=$44.46. The top 5% of campaigns drive the extreme averages seen in the raw CPC comparison. The IQR ($7.69) is comparable to Imported Max Conv ($5.50), confirming the median tells a very different story than the mean.

Head-to-Head: Strategy Performance (Imported Search Only)

Strategy | Campaigns | Median CPC | Median CTR | Median CVR | Median CPA | Total Spend
MAXIMIZE_CONV_VALUE | 2,558 | $0.60 | 6.55% | 0.00% | $49.93 | $44.2M
TARGET_SPEND | 5,354 | $1.27 | 5.74% | 0.00% | $175.35 | $74.5M
MANUAL_CPC | 9,101 | $1.44 | 4.08% | 0.00% | $161.42 | $62.7M
MAXIMIZE_CONVERSIONS | 16,787 | $2.38 | 5.46% | 0.63% | $116.07 | $743.2M
TARGET_CPA | 3,806 | $6.66 | 4.23% | 0.00% | $284.60 | $8.6M
TARGET_IMPRESSION_SHARE | 2,167 | $7.96 | 4.18% | 0.11% | $302.01 | $30.0M
Best Overall: Maximize Conversion Value delivers the lowest CPC ($0.60) and best CPA ($49.93) among imported Search campaigns. Maximize Conversions leads in volume ($743M spend, 16.8k campaigns) with the only meaningful median conversion rate (0.63%).
Worst Performers: Target CPA ($284.60 CPA) and Target Impression Share ($302.01 CPA) are the most expensive strategies. Both have low CTR (~4.2%) and near-zero median conversion rates, suggesting they attract low-intent traffic or are misconfigured.
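
The head-to-head table is a per-strategy median aggregation. A minimal sketch of that grouping step, with a hypothetical per-campaign row schema (the real export has more columns):

```python
from collections import defaultdict
from statistics import median

# Hypothetical per-campaign rows: (strategy, cpc, cpa)
rows = [
    ("MAXIMIZE_CONV_VALUE", 0.55, 45.0),
    ("MAXIMIZE_CONV_VALUE", 0.60, 50.0),
    ("MAXIMIZE_CONV_VALUE", 0.70, 55.0),
    ("TARGET_CPA", 6.00, 250.0),
    ("TARGET_CPA", 6.66, 285.0),
    ("TARGET_CPA", 7.10, 300.0),
]

by_strategy = defaultdict(list)
for strat, cpc, cpa in rows:
    by_strategy[strat].append((cpc, cpa))

# Medians per strategy, mirroring the table's Median CPC / Median CPA columns.
summary = {
    strat: {
        "median_cpc": median(c for c, _ in vals),
        "median_cpa": median(a for _, a in vals),
    }
    for strat, vals in by_strategy.items()
}
print(summary)
```

Taking the median per strategy (rather than spend-weighting) is what keeps a few whale campaigns from dominating each row.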

Spend Velocity & Efficiency

| Strategy | Origin | N | $/day | Median Spend | Days Active | Conv per $1k |
|---|---|---|---|---|---|---|
| MAXIMIZE_CONV_VALUE | Imported | 3,226 | $3.33 | $1,842 | 777 | 17.42 |
| MAXIMIZE_CONVERSIONS | OTTO | 1,550 | $2.19 | $329 | 176 | 0.03 |
| MAXIMIZE_CONVERSIONS | Imported | 16,374 | $1.92 | $1,166 | 758 | 5.14 |
| TARGET_IMPRESSION_SHARE | Imported | 1,679 | $1.67 | $1,671 | 1,296 | 1.32 |
| TARGET_SPEND | OTTO | 295 | $1.65 | $233 | 174 | 0.00 |
| TARGET_ROAS | Imported | 481 | $1.32 | $836 | 588 | 21.31 |
| MAXIMIZE_CONV_VALUE | OTTO | 84 | $1.28 | $119 | 97 | 5.86 |
| TARGET_SPEND | Imported | 5,366 | $0.75 | $575 | 974 | 1.69 |
| MANUAL_CPC | Imported | 6,864 | $0.31 | $509 | 2,313 | 1.57 |
| TARGET_CPA | Imported | 3,964 | $0.23 | $401 | 1,709 | 1.09 |
Efficiency Winner: Target ROAS delivers the highest conversion efficiency (21.31 conv/$1k) despite moderate spend velocity. Maximize Conv Value follows at 17.42. OTTO Maximize Conversions has very low efficiency (0.03 conv/$1k) — likely because most OTTO campaigns are young (median 176 days) and haven't accumulated enough conversion data.
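
Both derived metrics in this table are simple ratios. A sketch of the two definitions, using illustrative numbers loosely modeled on the Target ROAS row (not taken from the raw data):

```python
def spend_velocity(total_spend, days_active):
    """Dollars spent per active day."""
    return total_spend / days_active if days_active else 0.0

def conv_per_1k(conversions, total_spend):
    """Conversion efficiency: conversions delivered per $1,000 spent."""
    return conversions / (total_spend / 1_000) if total_spend else 0.0

# Illustrative campaign: ~$836 spend over 588 active days, 17.8 conversions.
velocity = spend_velocity(836.0, 588)
efficiency = conv_per_1k(17.8, 836.0)
print(f"${velocity:.2f}/day, {efficiency:.2f} conv/$1k")
```

Note that conv-per-$1k rewards cheap conversions regardless of pace, so a slow-but-efficient strategy (Target ROAS) can beat a fast spender on this metric.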

Cohort Survival Rate (Last 12 Months)

| Strategy | Origin | Started | Still Enabled | Survival % |
|---|---|---|---|---|
| TARGET_IMPRESSION_SHARE | Imported | 419 | 254 | 60.6% |
| TARGET_ROAS | Imported | 215 | 108 | 50.2% |
| TARGET_CPA | Imported | 164 | 69 | 42.1% |
| TARGET_IMPRESSION_SHARE | OTTO | 29 | 10 | 34.5% |
| TARGET_SPEND | OTTO | 863 | 242 | 28.0% |
| MAXIMIZE_CONV_VALUE | OTTO | 208 | 51 | 24.5% |
| MAXIMIZE_CONVERSIONS | OTTO | 3,066 | 733 | 23.9% |
| MAXIMIZE_CONV_VALUE | Imported | 2,202 | 364 | 16.5% |
| MAXIMIZE_CONVERSIONS | Imported | 13,175 | 1,664 | 12.6% |
| MANUAL_CPC | Imported | 1,712 | 175 | 10.2% |
| TARGET_SPEND | Imported | 5,813 | 415 | 7.1% |
| MANUAL_CPC | OTTO | 99 | 0 | 0.0% |
OTTO survival is nearly 2x Imported for Maximize Conversions: 23.9% vs 12.6%. OTTO campaigns are more likely to remain active, suggesting better initial setup quality leads to longer campaign life. Manual CPC has the worst survival (10.2% Imported, 0% OTTO) — a dying strategy.
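
The survival metric here is cohort-based: of campaigns started inside the 12-month window, what share is still ENABLED today? A sketch under an assumed schema (`started` date plus current `status`; field names are illustrative, not the export's actual columns):

```python
from datetime import date

def survival_rate(campaigns, window_start):
    """Percent of campaigns started on/after window_start still ENABLED."""
    cohort = [c for c in campaigns if c["started"] >= window_start]
    if not cohort:
        return None  # no campaigns in the window
    alive = sum(1 for c in cohort if c["status"] == "ENABLED")
    return 100.0 * alive / len(cohort)

# Toy cohort: 3 campaigns inside the window (2 enabled), 1 outside it.
campaigns = [
    {"started": date(2025, 6, 1), "status": "ENABLED"},
    {"started": date(2025, 7, 1), "status": "PAUSED"},
    {"started": date(2025, 9, 1), "status": "ENABLED"},
    {"started": date(2024, 1, 1), "status": "ENABLED"},  # pre-window, excluded
]
rate = survival_rate(campaigns, window_start=date(2025, 3, 1))
print(f"{rate:.1f}%")  # 2 of 3 in-window campaigns survive
```

Restricting to a recent cohort avoids survivorship bias from comparing young OTTO campaigns against the full multi-year Imported history.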

Strategy x Channel: Search Performance (Median)

| Strategy | Origin | N | Median CPC | Median CTR | Median CVR | Median CPA | Total Spend |
|---|---|---|---|---|---|---|---|
| TARGET_IMPRESSION_SHARE | Imported | 2,167 | $7.96 | 4.18% | 0.11% | $302.01 | $30.0M |
| TARGET_CPA | Imported | 3,806 | $6.66 | 4.23% | 0.00% | $284.60 | $8.6M |
| MAXIMIZE_CONVERSIONS | OTTO | 2,030 | $3.02 | 6.32% | 0.00% | $82.47 | $107.4M |
| TARGET_SPEND | OTTO | 422 | $2.56 | 7.14% | 0.00% | $99.80 | $0.2M |
| MAXIMIZE_CONVERSIONS | Imported | 16,787 | $2.38 | 5.46% | 0.63% | $116.07 | $743.2M |
| MAXIMIZE_CONV_VALUE | OTTO | 132 | $1.29 | 6.88% | 0.00% | $18.58 | $0.2M |
| MANUAL_CPC | Imported | 9,101 | $1.44 | 4.08% | 0.00% | $161.42 | $62.7M |
| TARGET_SPEND | Imported | 5,354 | $1.27 | 5.74% | 0.00% | $175.35 | $74.5M |
| MAXIMIZE_CONV_VALUE | Imported | 2,558 | $0.60 | 6.55% | 0.00% | $49.93 | $44.2M |
| MANUAL_CPC | OTTO | 87 | $0.57 | 7.35% | 0.00% | $44.86 | $10K |
OTTO CTR Advantage: Across every strategy, OTTO campaigns have higher median CTR (6-7%) than Imported (4-6%). This is consistent with OTTO's tighter keyword targeting (73% exact match) driving more relevant ad placements.