THE ENGINE LOGIC
Technical documentation for the algorithms, workbenches, and data models powering ClickHub.
Pipeline Setup
ClickCatalyst builds a direct, encrypted bridge between your ad spend and real user behavior. No spreadsheets, no CSV exports — just live data flowing into your private analytical vault.
Spend-to-Outcome Mapping
The engine matches your ad spend to actual user behavior by linking campaign IDs across both platforms. Every click gets a cost. Every conversion gets a value. Spend meets outcome.
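The join described above can be sketched in a few lines. This is an illustrative reconstruction, not the production pipeline: the row shapes (`campaign_id`, `cost`, `clicks`, `conversions`, `revenue`) are assumed stand-ins for the Google Ads and GA4 exports.

```python
from collections import defaultdict

def map_spend_to_outcomes(ad_rows, ga4_rows):
    """Join Google Ads spend to GA4 outcomes on campaign_id.

    ad_rows:  [{"campaign_id", "cost", "clicks"}]   (spend side)
    ga4_rows: [{"campaign_id", "conversions", "revenue"}]  (outcome side)
    Returns per-campaign CPC, value per conversion, and ROAS.
    """
    spend = defaultdict(lambda: {"cost": 0.0, "clicks": 0})
    for r in ad_rows:
        spend[r["campaign_id"]]["cost"] += r["cost"]
        spend[r["campaign_id"]]["clicks"] += r["clicks"]

    outcome = defaultdict(lambda: {"conversions": 0, "revenue": 0.0})
    for r in ga4_rows:
        outcome[r["campaign_id"]]["conversions"] += r["conversions"]
        outcome[r["campaign_id"]]["revenue"] += r["revenue"]

    merged = {}
    for cid in spend.keys() | outcome.keys():
        s, o = spend[cid], outcome[cid]
        merged[cid] = {
            "cost": s["cost"],
            "revenue": o["revenue"],
            "cpc": s["cost"] / s["clicks"] if s["clicks"] else 0.0,
            "value_per_conversion": (o["revenue"] / o["conversions"]
                                     if o["conversions"] else 0.0),
            "roas": o["revenue"] / s["cost"] if s["cost"] else 0.0,
        }
    return merged
```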
Surgical Deletion (You're in Control)
Delete one account and every row tied to those IDs is scrubbed from BigQuery. Delete your last account and the entire vault is destroyed. Your data, your call.
Tenant-Isolated Storage
Your BigQuery dataset is yours alone — isolated, encrypted with AES-256, and tagged to your account. No other user can see it. Not even us, unless you explicitly grant support access.
Data Governance
ClickCatalyst enforces strict data isolation. Every account gets a private, encrypted BigQuery dataset. You control the keys, the purge protocol, and the vault itself.
Tenant Isolation
Your email hash generates a dedicated BigQuery dataset. No shared tables, no cross-contamination. Your data lives in complete isolation from every other user.
Read-Only Access Model
OAuth 2.0 tokens are scoped to read-only permissions. The engine can see your advertising data to analyze it, but it cannot modify campaigns, adjust budgets, or spend your money.
On-Demand Purge Protocol
Delete one account and every associated row is scrubbed. Delete your last account and the entire dataset is destroyed with all tokens revoked. Both operations are permanent and irreversible.
Campaign Wizard
A guardrail-driven Google Ads campaign builder that forces goal-first architecture. Pick your business outcome, and the engine configures the campaign structure around that objective — preventing accidental waste from bad default settings.
Goal-First Architecture
Instead of navigating 47+ campaign settings, you pick one business outcome (E-Commerce Sales, High-Intent Leads, Brand Awareness, App Installs). The engine auto-configures everything else.
Playbook Selector
For Search campaigns, the wizard asks for strategy — not keywords. Brand Defense (protect your name), Competitor Conquest (steal rival traffic), or Intent Capture (high-value service search). Each playbook has pre-built structures.
Smart Asset Library
The wizard scans your existing Google Ads account for creatives — detecting logos, separating landscape from portrait images, and suggesting assets automatically based on the campaign type you've selected.
Auto-Save State
Campaign creation progress is cached in browser storage. Close the tab, lose internet, come back later — you pick up exactly where you left off. Cache expires after 7 days or on successful launch.
Pulse Center
A real-time diagnostic system that verifies the connection health between Google Ads, GA4, and BigQuery. If this dashboard shows red, stop spending — your data pipeline is broken and campaign performance can't be measured accurately.
Authorization Check
Verifies that ClickCatalyst holds valid OAuth 2.0 tokens with appropriate permissions for your Google Ads and GA4 accounts. Tokens are monitored for expiry and revocation.
Data Pipeline Status
Checks whether BigQuery is actively receiving fresh data from Google Ads and GA4 APIs. 'Initializing' means the engine is still running its first historical hydration — wait 3-5 minutes.
Signal Linkage Verification
Tests whether Google Ads and GA4 are communicating via the GCLID (Google Click ID) parameter. Broken linkage means your bidding algorithm is making decisions blind.
Conversion Heartbeat
Scans for recorded conversions in the last 30 days. Zero conversions means Smart Bidding has no learning signal — the AI needs conversion data to optimize effectively.
Conversion Architect
The Conversion Architect is the translation layer between GA4 events (what happens on your site) and Google Ads conversion actions (what the bidding algorithm optimizes for). If this isn't configured correctly, Smart Bidding optimizes for nothing — or worse, for the wrong thing.
Event Scanner
Automatically scans your GA4 property for trackable events like 'purchase', 'generate_lead', 'sign_up', and compares them to your existing Google Ads conversion actions.
Primary vs. Secondary Tagging
Primary = 'Bid for this' (e.g., Purchase). Secondary = 'Track this, but don't spend money optimizing for it' (e.g., Page View). Only Primary conversions influence Smart Bidding decisions.
One-Click Import
Found a GA4 event that isn't in Google Ads? Click 'Import to Ads' to create the conversion action via API in seconds. No need to open the Google Ads interface.
Value Assignment
For revenue events (purchase, subscription), the architect uses the transaction value from GA4 automatically. For lead events without transaction values, you set a static value in Google Ads based on your lead-to-customer economics.
Integrity Monitor
The Integrity Monitor is a profit-protection system for your Google Ads pipeline. Enter your URL and it runs a simultaneous three-layer audit: a live HTML scan of your site, a direct Google Ads API probe, and a BigQuery analysis of your actual session and conversion data. Within seconds you get a 0-100 health score, prioritized diagnosis cards, and — for the two most common attribution killers — one-click fixes that resolve the problem via API without touching a line of code.
Three-Layer Live Audit
A single scan runs three checks simultaneously: your live site HTML is fetched and inspected for GA4 tags and click ID redirect survival, your Google Ads account is queried via API for auto-tagging status and GA4 link health, and your BigQuery data is analyzed for click-to-session loss, zero-value purchase events, and conversion import gaps. All three layers feed a single 0-100 health score.
Match Rate Intelligence
Industry-standard click-to-session match rates run 85-95%. The monitor benchmarks your account against this range, tracks your 7-day trend, and distinguishes structural baseline loss — ad blockers, sub-second bounces — from fixable leaks caused by broken configuration. When your rate drops, you see whether it fell overnight (something broke) or gradually over weeks (page speed degrading).
Platform-Aware Diagnosis
The scanner fingerprints your tech stack on every scan. WordPress with WooCommerce gets different fix guidance than Shopify, which gets different guidance than Next.js or a React SPA. Instead of generic advice, you get the specific plugin name, config path, or code change that resolves the issue on your actual platform.
One-Click Repair Console
Auto-tagging disabled and GA4 not linked to Google Ads — two issues responsible for the majority of attribution failures — are fixed instantly via API. Click once, the connection is made or restored, no developer required. Every other detected issue gets a step-by-step pipeline verification checklist that walks you to the exact stage where the signal chain breaks.
Exploratory Data Analysis
Standard reports tell you what happened. The Explorer tells you why, where, and how to replicate success. Advanced visualizations — scatter plots, correlation matrices, and distribution histograms — expose the hidden patterns of profitability and waste that tabular data hides.
Multi-Lens Architecture
One unified interface, five entity views: Campaigns, Ad Groups, Ads, Keywords, and Search Terms. Switch lenses to analyze your account at any level of granularity.
Profitability Scatter Plots
Every entity is plotted as a bubble on an X/Y axis (Spend vs. Revenue), with bubble size representing conversion volume. Stars (top-left) are high-revenue, low-spend winners. Bleeders (bottom-right) are high-spend, low-revenue drains.
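The quadrant logic behind the plot is simple to state in code. The 'Workhorse' and 'Background' labels for the remaining two quadrants are hypothetical names added for completeness; the source only names Stars and Bleeders.

```python
def classify_quadrant(spend, revenue, median_spend, median_revenue):
    """Place an entity in a Spend (X) vs. Revenue (Y) scatter quadrant."""
    high_spend = spend >= median_spend
    high_revenue = revenue >= median_revenue
    if high_revenue and not high_spend:
        return "Star"        # top-left: high revenue, low spend
    if high_spend and not high_revenue:
        return "Bleeder"     # bottom-right: high spend, low revenue
    if high_spend and high_revenue:
        return "Workhorse"   # earns its budget (illustrative label)
    return "Background"      # low spend, low revenue (illustrative label)
```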
Correlation Matrix
Pearson's r correlation between all key metrics reveals hidden relationships. Does spending more actually increase conversions? Or does it just tank your ROAS? The matrix answers this with statistical precision.
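A minimal sketch of the matrix computation, assuming equal-length daily metric series keyed by name:

```python
import math

def pearson_r(xs, ys):
    """Pearson's r between two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def correlation_matrix(metrics):
    """metrics: {"spend": [...], "conversions": [...], ...}
    Returns the full pairwise r matrix as nested dicts."""
    names = list(metrics)
    return {a: {b: round(pearson_r(metrics[a], metrics[b]), 3)
                for b in names}
            for a in names}
```

A strongly positive r between spend and conversions supports scaling; a near-zero r with a negative spend-to-ROAS cell is the "just tanking ROAS" case.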
Distribution Histograms
See how your campaigns are distributed across any metric — ROAS, CTR, CPC. Most accounts follow a power law: a few campaigns carry all the profit while the majority underperform.
Performance Command Center
The Performance Command Center is a single-page operational dashboard built around one workflow: Diagnose in 10 seconds, Decide in 30 seconds, Act in 60 seconds. It combines real-time performance monitoring, AI-prioritized action alerts, and budget pacing forecasts into one scrollable view.
Three-Act Architecture
Act I: Status Check (10 seconds — am I on track?). Act II: Action Feed (30 seconds — what's burning?). Act III: Deep Dive Dispatch (60 seconds — where do I focus next?). One scroll, complete operational awareness.
Intelligence-Driven Alerts
The Action Feed doesn't just show data — it prioritizes actions using campaign intelligence signals: bleeding flags, pacing anomalies, quick-win opportunities, and temporal patterns. Each alert has a quantified financial impact.
Budget Pacing Simulator
Projects your month-end spend based on current daily run rate. Shows whether you're on pace, underspending (leaving opportunity), or overpacing (about to blow your budget by day 22). Adjustable daily budget slider lets you model scenarios.
Configurable Goals
Set your own monthly targets for Revenue, Spend, ROAS, and Conversions via the settings gear. The Performance Pulse gauges show real-time progress toward YOUR goals, not arbitrary benchmarks.
Search Hygiene Cockpit
A surgical cleanup tool that finds wasted spend on search terms with zero conversions. Configure your waste thresholds, visualize where budget bleeds, and block bad traffic with one click. This dashboard exists to recover money you're already losing.
Hygiene Scoring
A composite score (0-100) grading your search targeting efficiency. Based on waste ratio (spend on zero-conversion terms), match type discipline (how much waste comes from Broad Match), and Quality Score distribution. Green (above 80) = tight ship. Red (below 50) = bleeding money.
Waste Detection Engine
Identifies search terms that meet your configurable criteria: significant clicks, meaningful spend, but zero conversions. These are your negative keyword candidates — budget you can recover immediately by blocking them.
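The detection rule reduces to a filter over search-term rows. The default thresholds below are illustrative; in the product they are user-configurable.

```python
def find_waste_terms(terms, min_clicks=20, min_spend=25.0):
    """Flag search terms with meaningful traffic and spend but zero conversions.

    terms: [{"term", "clicks", "spend", "conversions"}]
    Returns negative-keyword candidates, worst bleeders first.
    """
    candidates = [
        t for t in terms
        if t["clicks"] >= min_clicks
        and t["spend"] >= min_spend
        and t["conversions"] == 0
    ]
    # Sort by recoverable spend, descending.
    return sorted(candidates, key=lambda t: t["spend"], reverse=True)
```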
Match Type Leakage Analysis
Breaks down wasted spend by match type: Broad vs. Phrase vs. Exact. If 90% of waste is Broad Match, the data is telling you to tighten targeting with more Phrase/Exact keywords and additional negative keywords.
One-Click Blocking
Every waste term has a 'Block' button that adds it as a negative keyword at campaign or account level. Bulk select multiple terms and block them all at once. Instant waste elimination.
The Creative Lab
Deconstructs creative performance at the element level—headline, description, call-to-action, device, and asset—to pinpoint exactly what converts and what doesn't.
Element-Level Scoring
Each ad component (headline, description, CTA, sitelink) is scored independently on CTR, CVR, and cost efficiency. This isolates whether the headline is brilliant but the CTA is killing conversions.
RSA Pin Analysis
For Responsive Search Ads, the lab analyzes which headlines and descriptions Google serves most often, which combinations drive the best CTR, and where pinning could improve performance.
Device-Creative Split
Cross-references creative performance by device type. An ad that dominates on desktop might underperform on mobile due to truncation, viewport differences, or user intent shifts.
Creative Fatigue Detection
Tracks CTR decay over time per ad. When an ad's CTR drops below 70% of its peak performance for 7+ consecutive days, it's flagged as fatigued with a recommendation to refresh.
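The fatigue rule above (CTR under 70% of peak for 7+ consecutive trailing days) can be sketched as:

```python
def is_fatigued(daily_ctr, threshold=0.70, min_days=7):
    """Flag an ad as fatigued when its CTR has stayed below
    threshold x peak CTR for at least min_days trailing days.

    daily_ctr: CTR per day, oldest first.
    """
    if not daily_ctr:
        return False
    peak = max(daily_ctr)
    if peak == 0:
        return False
    streak = 0
    for ctr in daily_ctr:
        # Track the run of below-threshold days ending at the most recent day.
        streak = streak + 1 if ctr < threshold * peak else 0
    return streak >= min_days
```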
The Budget Balancer
A financial simulator that identifies 'Rocket Ships' (High ROAS, Budget Constrained) and 'Dogs' (Low ROAS, High Spend) to facilitate drag-and-drop budget transfers between campaigns.
Opportunity Calculation
The engine calculates 'Potential Lift' by analyzing Impression Share Lost to Budget. Formula: (Current Revenue / (1 - Lost IS Budget%)) - Current Revenue. This tells you exactly how much revenue you're leaving on the table.
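The formula above, expressed directly (lost impression share passed as a fraction):

```python
def potential_lift(current_revenue, lost_is_budget):
    """Revenue left on the table due to budget-limited impression share.

    lost_is_budget: 'Lost IS (budget)' as a fraction, e.g. 0.25 for 25%.
    Potential Lift = (Current Revenue / (1 - Lost IS)) - Current Revenue
    """
    if not 0 <= lost_is_budget < 1:
        raise ValueError("lost IS share must be in [0, 1)")
    return current_revenue / (1 - lost_is_budget) - current_revenue
```

For example, a campaign earning 8,000 while losing 20% of impression share to budget is leaving roughly 2,000 on the table.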
Reallocation Safety
Transfers are capped at 50% of the source campaign's daily spend to prevent shock to Smart Bidding's learning phase. You can make multiple 50% moves over several days if data supports it.
ROAS × Headroom Scoring
Campaigns are scored on a composite index of Current ROAS × Impression Share Lost (%), creating a priority ranking. High ROAS + high IS Lost = top reallocation target.
Historical Volatility Check
Before recommending a transfer, the system checks 7-day ROAS stability. High volatility (standard deviation >25%) triggers a warning flag to prevent scaling unstable campaigns.
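A sketch of the stability gate, reading the ">25% standard deviation" rule as a coefficient of variation (stdev relative to mean), which is one reasonable interpretation rather than the confirmed internal definition:

```python
import statistics

def is_volatile(daily_roas, cv_threshold=0.25):
    """True when a short ROAS series is too unstable to scale safely.

    daily_roas: e.g. the last 7 daily ROAS values.
    """
    if len(daily_roas) < 2:
        return True  # not enough data to judge stability
    mean = statistics.mean(daily_roas)
    if mean == 0:
        return True
    return statistics.stdev(daily_roas) / mean > cv_threshold
```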
The Context Engine
Isolates performance variables often hidden in averages: Time of Day, Geographic Location, and User Device Technology. Your campaign has a 3% conversion rate—but what if desktop converts at 5% and mobile at 1%?
Temporal Heatmap
Aggregates conversion rate by Hour of Day × Day of Week to identify 'Prime Time' (scale here) vs 'Kill Zones' (reduce bids or exclude).
Tech Diagnostics
Identifies specific device models or browser versions with high traffic but 0% conversion rate—often indicating UX bugs rather than audience issues.
Geographic Performance Mapping
Groups cities and regions by performance similarity, revealing locations that should perform alike based on demographics but don't—flagging potential landing page or offer mismatches.
Cross-Context Scoring
Each context dimension is scored 0-100. Contexts scoring below 30 are flagged for exclusion or creative adjustment. Contexts above 80 are candidates for bid increases.
The Growth Engine
A multi-workbench system that converts raw advertising data into confidence-aware operational guidance. It uses statistical confidence scoring and lifecycle stage classification to answer the ultimate question: 'Should I wait, optimize, or scale?'
Minimum Viable Signal (MVS)
Uses a confidence interval approach to calculate statistical readiness. When Readiness hits 95%, you've gathered enough data to scale confidently. The engine tracks both conversion volume and spend efficiency simultaneously.
Lifecycle Stage Classification
Campaigns progress through 4 stages: Exploration (buying data, 0-70% readiness), Calibration (data exists but volatile, 70-95%), Exploitation (statistically safe to scale, 95%+), Degradation (efficiency decay confirmed).
The Shield (Protection Logic)
Campaigns in Exploration or with high volatility are 'shielded' from waste alerts. This prevents false panic and premature pausing. Shield drops when MVS is reached and volatility stabilizes.
Sensitivity Profiles
Three modes adjust confidence thresholds: Conservative (95% confidence, Z=1.64), Balanced (90% confidence, Z=1.28), Aggressive (85% confidence, Z=1.04). Higher confidence = more data required before green light.
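The exact readiness scoring is not published; the sketch below is an illustrative reconstruction that treats the conversion rate as a binomial proportion and scores how close its Z-scaled relative error is to a target precision, using the profile Z values listed above. The `target_rel_error` parameter is an assumption.

```python
import math

Z_BY_PROFILE = {            # one-sided Z values from the profiles above
    "conservative": 1.64,   # 95% confidence
    "balanced": 1.28,       # 90% confidence
    "aggressive": 1.04,     # 85% confidence
}

def readiness(conversions, clicks, profile="balanced", target_rel_error=0.20):
    """Rough MVS readiness score (0-100) for a campaign's conversion rate."""
    if clicks == 0 or conversions == 0:
        return 0.0
    z = Z_BY_PROFILE[profile]
    p = conversions / clicks
    se = math.sqrt(p * (1 - p) / clicks)   # standard error of the rate
    if se == 0:
        return 100.0
    rel_error = z * se / p                 # relative half-width of the CI
    return round(min(1.0, target_rel_error / rel_error) * 100, 1)
```

Note how the profiles trade off: the same data scores lower under "conservative" (larger Z, wider interval) than under "aggressive", so more data is required before the green light.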
The Strategist Center
Moves beyond Last-Click attribution to understand the full customer journey and the relationship between Awareness (Display, Video) and Action (Search, Shopping).
Time-Lagged Correlation
Runs Pearson's r between 'Driver' spend (Video/Display) and 'Harvester' conversions (Search) with rolling time lags (0-14 days). A peak correlation at lag=5 means Display ads take 5 days to drive Search activity.
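The lag sweep can be sketched as follows, comparing driver spend on day t with harvester conversions on day t + k for each lag k:

```python
def lagged_correlation(driver_spend, harvester_conversions, max_lag=14):
    """Pearson's r between driver spend and harvester conversions per lag.

    Returns {lag: r}; the lag with peak r estimates the
    awareness-to-action delay in days.
    """
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0

    out = {}
    for lag in range(max_lag + 1):
        xs = driver_spend[: len(driver_spend) - lag] if lag else driver_spend
        ys = harvester_conversions[lag:]
        n = min(len(xs), len(ys))
        if n >= 3:  # skip lags with too few overlapping days
            out[lag] = round(pearson(xs[:n], ys[:n]), 3)
    return out
```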
Ghost Value (Assisted Revenue)
Quantifies revenue 'assisted' by a campaign but not claimed by it under Last-Click attribution. Surfaces View-Through Conversions and cross-campaign paths from GA4.
Halo Effect Detection
Analyzes the lift in branded search volume driven by non-brand campaign activity. Tracks branded search impressions before and after Display/Video spend changes to isolate the awareness-to-intent pipeline.
Multi-Touch Path Analysis
Uses GA4 conversion path data to distribute credit across the customer journey. Shows which touchpoints consistently appear before conversions, even if they never get Last-Click credit.
Unit Economics Workbench
A workbench that connects your advertising metrics to business fundamentals—calculating Customer Acquisition Cost (CAC), modeling Lifetime Value (LTV), and showing exactly where your profit margin breaks.
Max Tolerable CAC
Formula: LTV × (1 - Target Margin %). If your actual CAC exceeds this, the Breakeven Thermometer turns red. This is your mathematical ceiling—spend above it and you lose money on every customer.
LTV Proxy
Uses 'AOV × Avg Orders (90 days) × Estimated Lifetime in Quarters' as a real-time proxy for Lifetime Value when full cohort data is unavailable. Replace with actual retention curves as your data matures.
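The two formulas above combine into a few lines. The argument names are illustrative; `avg_orders_90d` is the average order count over a 90-day window, read as orders per quarter.

```python
def ltv_proxy(aov, avg_orders_90d, est_lifetime_quarters):
    """LTV proxy: AOV x avg orders per quarter x lifetime in quarters."""
    return aov * avg_orders_90d * est_lifetime_quarters

def max_tolerable_cac(ltv, target_margin):
    """Max Tolerable CAC = LTV x (1 - target margin).
    Spend above this per customer and every acquisition loses money."""
    return ltv * (1 - target_margin)
```

For example, an AOV of 80 with 1.5 orders per quarter over 4 quarters gives an LTV proxy of 480; at a 40% target margin the CAC ceiling is 288.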
Cohort Analysis
Tracks CAC and LTV by monthly acquisition cohort, revealing whether customer quality is improving or degrading over time. Month-over-month trends matter more than absolute numbers.
Payback Period Calculator
Shows how many months it takes for the average customer to generate enough gross profit to cover their acquisition cost. Shorter payback = faster reinvestment into growth.
Loyalty Matrix
Segments customers by purchase frequency and recency, revealing which buyer groups generate the most long-term value and which are at risk of churning.
RFM Scoring
Recency (days since last purchase), Frequency (total purchases), Monetary (total spend). Each dimension is scored 1-5, creating segments like 'Champions' (5-5-5) or 'At Risk' (2-3-4).
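A sketch of the scoring, with one caveat: real RFM implementations derive the scoring edges from the account's own quintiles, so the fixed edge values and the "Active" fallback label below are placeholders.

```python
def rfm_segment(recency_days, frequency, monetary,
                r_edges=(7, 30, 60, 90),
                f_edges=(1, 2, 4, 8),
                m_edges=(50, 150, 400, 1000)):
    """Score each RFM dimension 1-5 and attach a coarse segment label."""

    def score_low_is_good(value, edges):    # recency: fewer days = better
        for i, edge in enumerate(edges):
            if value <= edge:
                return 5 - i
        return 1

    def score_high_is_good(value, edges):   # frequency / monetary
        for i, edge in enumerate(reversed(edges)):
            if value >= edge:
                return 5 - i
        return 1

    r = score_low_is_good(recency_days, r_edges)
    f = score_high_is_good(frequency, f_edges)
    m = score_high_is_good(monetary, m_edges)
    if (r, f, m) == (5, 5, 5):
        label = "Champion"
    elif r <= 2:
        label = "At Risk"
    else:
        label = "Active"
    return {"R": r, "F": f, "M": m, "segment": label}
```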
Channel-Loyalty Correlation
Maps acquisition channel (Search, Display, Shopping, Social) to long-term customer segment. Reveals which channels acquire 'Champions' vs 'One-and-Done' buyers.
Churn Probability Estimation
Based on historical purchase intervals, estimates the probability that a customer has churned. If average purchase interval is 30 days and a customer hasn't bought in 90 days, churn probability is high.
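One simple way to turn purchase cadence into a probability is an exponential model of time between purchases, where the chance a customer is still active decays as the gap stretches past their own mean interval. This is an illustrative model choice, not the product's confirmed method.

```python
import math

def churn_probability(days_since_last, avg_interval_days):
    """Estimate P(churned) from the gap since the last purchase,
    assuming exponentially distributed inter-purchase times."""
    if avg_interval_days <= 0:
        raise ValueError("average interval must be positive")
    p_alive = math.exp(-days_since_last / avg_interval_days)
    return 1 - p_alive
```

With a 30-day average interval, the worked example above (90 days silent) lands around 95% churn probability.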
Reactivation ROI Model
Calculates the expected return from reactivation campaigns targeting 'At Risk' or 'Hibernating' segments. Compares cost of reactivation vs cost of acquiring a new customer.
Semantic DNA (N-Gram Analysis)
Decomposes search queries into recurring word fragments (n-grams) to discover hidden patterns, intent clusters, and waste signals that individual query analysis misses.
N-Gram Extraction
Breaks search queries into unigrams (single words), bigrams (two-word pairs), and trigrams (three-word sequences). Aggregates performance metrics (spend, conversions, ROAS) across all queries containing each n-gram.
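The extraction-plus-aggregation step can be sketched as follows (bigrams by default; pass n=1 or n=3 for unigrams or trigrams):

```python
from collections import defaultdict

def ngram_performance(query_rows, n=2):
    """Aggregate performance across every query containing each n-gram.

    query_rows: [{"query": str, "spend": float, "conversions": int}]
    Returns {ngram: {"queries", "spend", "conversions"}}.
    """
    stats = defaultdict(lambda: {"queries": 0, "spend": 0.0, "conversions": 0})
    for row in query_rows:
        words = row["query"].lower().split()
        # A set, so each query counts once per n-gram even if it repeats.
        grams = {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
        for gram in grams:
            stats[gram]["queries"] += 1
            stats[gram]["spend"] += row["spend"]
            stats[gram]["conversions"] += row["conversions"]
    return dict(stats)
```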
Intent Pattern Grouping
Groups n-grams by inferred intent based on keyword modifiers: 'buy/purchase/order' = Transactional, 'how/what/why' = Informational, 'best/top/review' = Comparison, 'near me/local' = Local. Performance varies dramatically by intent group.
Waste Signal Detection
Identifies n-grams that appear across multiple queries and consistently produce poor results (high spend, low/zero conversions). These are systemic waste patterns—adding one n-gram as a negative keyword can eliminate waste across dozens of queries at once.
Opportunity Mining
Surfaces high-performing n-grams that could become exact match keywords or ad group themes. If the bigram 'enterprise solution' appears in 15 queries with 8x ROAS, it deserves its own dedicated ad group.
Quality Score Clinic
Diagnoses Quality Score issues at the keyword level and prescribes specific fixes for each of the three QS components: Expected CTR, Ad Relevance, and Landing Page Experience.
Component-Level Diagnosis
Quality Score has 3 components: Expected CTR (ad appeal), Ad Relevance (keyword-to-ad alignment), Landing Page Experience (post-click quality). Each is rated Above Average, Average, or Below Average. The Clinic isolates which component is the bottleneck.
QS Impact Calculation
Estimates the CPC premium you pay due to low QS. A keyword with QS 4 pays roughly 50-75% more per click than QS 8. The Clinic calculates your total 'QS Tax'—the annual cost of poor Quality Scores across all keywords.
Prescriptive Fix Mapping
Each QS component maps to specific fixes: Low Expected CTR → improve headlines in Creative Lab. Low Ad Relevance → tighten keyword-to-ad group alignment. Low LP Experience → fix page speed, mobile UX, content relevance.
QS Trend Tracking
Monitors Quality Score changes over time per keyword. Sudden QS drops indicate a problem (competitor improvement, LP issue, ad fatigue). Gradual improvement confirms optimization efforts are working.
Ready to deploy the engine?
Stop guessing with "Black Box" automation. Start optimizing with transparent, math-based workbenches.