The Context Engine

Environmental Performance Isolation

The Problem: Averages Hide the Truth

Your campaign has a 3% conversion rate. Sounds fine.

But what if:

Desktop converts at 5% while mobile converts at 1%?

Fridays at 2 PM have a 12% conversion rate while Mondays at 9 AM have 0.5%?

Chrome users convert at 4.2% but Safari users convert at 0.3%?

Aggregate metrics smooth over these fault lines. You're making budget decisions based on averages that don't represent any real user.

The Context Engine exposes these hidden patterns so you can:

Increase bids during high-conversion windows

Reduce bids during dead zones

Fix device-specific bugs that silently kill conversions

Adjust geo targeting based on actual performance, not assumptions

1. The Temporal Heatmap

Visual: 24 × 7 grid (Hour of Day × Day of Week) color-coded by conversion rate.

168 cells — each representing one hour-slot across the week.

Color scale:

🟢 Dark Green: Prime Time (conversion rate >2× account average)

🟢 Light Green: Above average

🟡 Yellow: Average

🔴 Red: Kill Zone (conversion rate <0.5× account average)

⬛ Gray: Insufficient data

Click any cell to see raw metrics: clicks, conversions, cost, revenue, CPA.

Consistency Score: Each cell also shows how stable that time slot has been over the past 30 days. A green cell with high consistency = reliable pattern. A green cell with low consistency = could be noise.
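The color buckets above can be sketched as a small classifier. This is an illustrative sketch, not the product's actual code: the function name and the 10-click gate for gray cells (stated later in the FAQ) are assumptions drawn from the thresholds described in this section.

```python
MIN_CLICKS = 10  # below this, a cell is marked "Insufficient data" (gray)

def classify_cell(cell_cvr, account_cvr, clicks):
    """Return the color bucket for one hour-of-week heatmap cell."""
    if clicks < MIN_CLICKS:
        return "gray"          # insufficient data
    ratio = cell_cvr / account_cvr
    if ratio > 2.0:
        return "dark_green"    # Prime Time: >2x account average
    if ratio < 0.5:
        return "red"           # Kill Zone: <0.5x account average
    if ratio > 1.0:
        return "light_green"   # above average
    return "yellow"            # roughly average
```

Running this over all 168 cells, with each cell's clicks and conversion rate, reproduces the grid's coloring logic.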

How to use it:

Finding Prime Time: Look for clusters of dark green cells. These are your peak conversion windows. Set bid modifiers of +30% to +50% for these hours.

Finding Kill Zones: Look for red clusters. These are wasting budget. Set bid modifiers -50% or exclude entirely via ad scheduling.
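Turning those two rules into ad-schedule adjustments can be sketched as a simple lookup. The class names and the +40% midpoint for prime time are assumptions; the ranges come from the guidance above.

```python
# Illustrative mapping from heatmap cell classes to bid modifiers,
# using the ranges suggested above (+30-50% prime time, -50% kill zone).
BID_MODIFIERS = {
    "dark_green": +0.40,   # prime time: midpoint of the +30-50% range
    "light_green": +0.10,
    "yellow": 0.0,
    "red": -0.50,          # kill zone
    "gray": 0.0,           # not enough data: leave bids alone
}

def schedule_adjustments(cells):
    """cells: dict mapping (day, hour) -> class name.
    Returns only the slots that need a non-zero adjustment."""
    return {slot: BID_MODIFIERS[cls]
            for slot, cls in cells.items()
            if BID_MODIFIERS[cls] != 0.0}
```

Only the non-zero slots need to be pushed to ad scheduling; average and low-data slots are left untouched.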

Common patterns:

B2B: Weekdays 9 AM–5 PM (green), evenings and weekends (red)

E-Commerce: Evenings 7–10 PM + weekends (green), weekday mornings (yellow)

Local Services: Business hours (green), after 8 PM (dead)

2. Tech Diagnostics (Device & Browser Analysis)

What it shows: Performance breakdown by Device Type, Operating System, and Browser.

Why it matters: Not all traffic is created equal. A user on iPhone 15 / Safari might behave completely differently than a user on Samsung Galaxy / Chrome.

The Hidden Bug Pattern:

A SaaS client saw 40% of traffic from iPhone users but only 8% of conversions. The Context Engine revealed the issue: their demo signup form had a date picker broken specifically in Mobile Safari 15.x. Desktop and Android users had no issues.

By isolating the exact device + browser combination, they fixed the form in 48 hours and recovered significant monthly revenue that had been lost to the bug.

Table Columns:

Device / OS / Browser

Sessions

Conversion Rate

CPA

Revenue

Context Score (0-100)

Flags:

🔴 Zero-Convert Device: High traffic (100+ sessions) but 0 conversions → Likely a UX bug, not an audience issue

🟡 Underperformer: Conversion rate <50% of account average → Consider bid reduction or device-specific creative

🟢 Overperformer: Conversion rate >150% of account average → Consider bid increase
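The three flag rules above amount to a short decision function. A minimal sketch, assuming the thresholds listed (100+ sessions for zero-convert, 50%/150% of account average for under/overperformers); the function name is hypothetical.

```python
def flag_device(sessions, conversions, device_cvr, account_cvr):
    """Apply the Tech Diagnostics flag rules; returns None if no flag fires."""
    if sessions >= 100 and conversions == 0:
        return "zero_convert"    # likely a UX bug, not an audience issue
    if device_cvr < 0.5 * account_cvr:
        return "underperformer"  # candidate for bid reduction
    if device_cvr > 1.5 * account_cvr:
        return "overperformer"   # candidate for bid increase
    return None
```

Note the zero-convert check runs first: a high-traffic device with no conversions is a bug signal regardless of how its rate compares to the average.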

Action buttons:

"Exclude Device" → Adds device bid modifier of -100% via API

"Reduce Bid" → Sets -50% device bid modifier

"Boost Bid" → Sets +30% device bid modifier

3. Geographic Performance Mapping

Visual: Choropleth map color-coded by conversion rate or ROAS.

Drill-down levels: Country → State/Region → City → Postal Code (when data volume permits).

Why it matters:

You might assume all major cities perform similarly. The geo map often reveals surprising disparities:

City A: 6.2% CVR, 4.8x ROAS

City B (similar size/demographics): 1.1% CVR, 0.8x ROAS

Common causes of geographic disparity:

Local competitor dominance in City B

Shipping cost differences affecting checkout completion

Cultural or language variations in ad response

Regional internet speed differences (slow = more bounces)

Table below map:

Location

Spend

Conversions

CVR

ROAS

Context Score

Actions:

"Exclude Location" → Removes from targeting via API

"Reduce Bid" → Sets location bid modifier -30 to -50%

"Boost Bid" → Sets location bid modifier +20 to +50%

Recommended approach: Don't exclude locations with low performance immediately. First check if it's a creative fit issue (try location-specific ad copy). Only exclude if performance stays below 0.5x ROAS after 30 days of testing.
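The staged approach above can be expressed as a small decision function. This is a sketch: the parameter names are assumptions, and `roas_ratio` is assumed to mean the location's ROAS divided by the account average.

```python
def geo_decision(roas_ratio, tested_days, tried_local_creative):
    """Staged geo decision: test creative first, exclude only after
    30 days below 0.5x ROAS (thresholds from the guidance above)."""
    if roas_ratio >= 0.5:
        return "keep"
    if not tried_local_creative:
        return "test_location_specific_creative"
    if tested_days >= 30:
        return "exclude"
    return "keep_testing"
```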

4. Cross-Context Scoring Matrix

What it does: Combines all three dimensions (Time, Device, Geography) into a single scoring matrix.

Each combination gets a Context Score (0-100):

Example:

"Mobile + Tuesday 2 PM + New York" = Score: 82 (strong performer)

"Desktop + Saturday 11 PM + Rural Texas" = Score: 14 (dead zone)

Score Interpretation:

80-100: Boost bids, allocate more budget

50-79: Standard performance, no changes needed

30-49: Underperforming, investigate before excluding

0-29: Kill zone, reduce bids or exclude
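The four score tiers map directly to actions. A sketch of that mapping (the function and label names are hypothetical; the tier boundaries are the ones listed above):

```python
def recommended_action(score):
    """Map a 0-100 Context Score to the action tiers above."""
    if score >= 80:
        return "boost"              # increase bids, allocate more budget
    if score >= 50:
        return "hold"               # standard performance, no changes
    if score >= 30:
        return "investigate"        # underperforming; check creative first
    return "reduce_or_exclude"      # kill zone
```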

The power of cross-context analysis:

You might see that Mobile has a low overall score (45). But when you cross it with "Tuesday 2 PM," it jumps to 82. The problem isn't mobile—it's mobile *outside of peak hours.*

This prevents over-broad exclusions. Instead of cutting all mobile traffic (-100% bid modifier), you set:

Mobile + Peak Hours: +20% bid

Mobile + Off-Peak: -40% bid

Result: You keep the profitable mobile traffic and cut the waste.
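The mobile example can be sketched with hypothetical scores to show why per-context modifiers beat a single device-level cut. The scores and thresholds below illustrate the +20%/-40% split described above; none of this data is real.

```python
# Hypothetical cross-context scores: mobile's device-level score of 45
# hides the split between peak and off-peak hours.
scores = {
    ("mobile", "peak"): 82,
    ("mobile", "off_peak"): 31,
}

def context_modifier(score):
    """Boost strong contexts, cut weak ones, leave the middle alone."""
    if score >= 80:
        return +0.20
    if score < 40:
        return -0.40
    return 0.0

modifiers = {ctx: context_modifier(s) for ctx, s in scores.items()}
# mobile+peak gets a boost, mobile+off_peak a cut -- no blanket -100%.
```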

When to Use This Dashboard (vs. Other Tools)

Use Context Engine when you want to:

Optimize ad scheduling (bid by time of day)

Find device-specific conversion issues

Adjust geographic targeting

Understand why a campaign performs differently at different times

Don't use Context Engine when you want to:

Reallocate budgets between campaigns (use Budget Balancer)

Find wasted search terms (use Search Hygiene)

Check conversion tracking health (use Pulse Center)

Analyze ad creative performance (use Creative Lab)

Who Should Use This:

✅ Campaign Managers (bid optimization)

✅ Developers (fixing device-specific bugs)

✅ Analysts (finding hidden performance patterns)

❌ Executives (too granular for strategic decisions)

How Often: Monthly for most accounts. Weekly only if you just expanded to a new geographic market or launched a mobile-specific campaign.

Technical FAQ

Q: How much data do I need before the Context Engine is reliable?

At least 100 conversions across all contexts in a 30-day window. For temporal analysis specifically, you need 5-10 conversions for each day of the week. The system warns you when sample sizes are too small.
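Those minimums can be checked with a short gate. A sketch under the stated assumptions (100 total conversions, at least 5 per day-of-week bucket); the figures are guidelines from this FAQ, not hard API limits.

```python
def temporal_data_sufficient(total_conversions, conversions_by_weekday):
    """Rough reliability gate for temporal analysis:
    100+ conversions overall and 5+ in every day-of-week bucket."""
    if total_conversions < 100:
        return False
    return all(c >= 5 for c in conversions_by_weekday.values())
```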

Q: Should I exclude low-performing contexts immediately?

Not necessarily. Sometimes low performance is due to poor creative fit, not inherent audience issues. Run A/B tests with context-specific ad copy before excluding. Try mobile-optimized creative before cutting all mobile traffic.

Q: Can I use temporal data to adjust bids by time of day?

Absolutely. Export the heatmap data and feed it into Google Ads' ad schedule bid adjustments. If 8-10 PM converts at 2× the average rate, set a +100% bid modifier for that window.
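The "2x the average → +100%" rule generalizes to a simple formula: the modifier is the window's conversion-rate ratio minus one, expressed as a percentage. A sketch, assuming Google Ads' documented ad-schedule adjustment range of -90% to +900%:

```python
def schedule_bid_modifier(window_cvr, account_cvr):
    """Derive an ad-schedule bid adjustment (%) from relative conversion
    rate, so a 2x window maps to +100% as in the example above.
    Clamped to the ad-schedule adjustment range of -90% to +900%."""
    pct = (window_cvr / account_cvr - 1.0) * 100.0
    return max(-90.0, min(900.0, pct))
```

A window converting at half the average lands at -50%; extreme ratios get clamped rather than producing invalid adjustments.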

Q: Does geographic analysis work for international campaigns?

Yes, but currency and language differences can skew the analysis. Run separate analyses per country or currency zone for cleanest results.

Q: Why does the Context Engine show 'Insufficient Data' for some cells?

If a time-day-device-location combination has fewer than 10 clicks, the system marks it gray ('Insufficient Data') rather than showing a potentially misleading score. This prevents you from making decisions based on noise.

Q: How do I find device-specific bugs killing my conversions?

Sort the Tech Diagnostics table by Sessions (descending), then filter to 'Zero-Convert Devices.' Any device with 100+ sessions and 0 conversions is almost certainly a UX bug, not an audience issue. Share the device/browser/OS combination with your dev team.

Q: Can I apply context-based bid modifiers automatically?

Yes. Click the 'Apply Bid Modifiers' button next to any context row. The system sets the modifier via Google Ads API. You can also bulk-apply: select multiple rows, choose a modifier percentage, and apply to all at once.

Q: What's the difference between Context Engine and the PCC's Execution Heatmap?

The PCC heatmap is a quick reference showing ad delivery timing. The Context Engine goes deeper: it cross-references time with device and geography, scores each combination, and provides actionable bid modifier recommendations.

The Context Engine | ClickCatalyst