Marketing Analytics
Frameworks for tracking, interpreting, and acting on marketing data across channels, campaigns, and the full customer funnel.
Channel Metric Reference
Email
| Metric | How It Is Calculated | Typical Range | Insight Provided |
|---|---|---|---|
| Delivery rate | Delivered / Sent | 95-99% | Sender reputation and list hygiene |
| Open rate | Unique opens / Delivered | 15-30% | Subject line and sender name effectiveness |
| Click-through rate (CTR) | Unique clicks / Delivered | 2-5% | Relevance of content and CTA |
| Click-to-open rate (CTOR) | Unique clicks / Unique opens | 10-20% | In-email content quality among openers |
| Unsubscribe rate | Unsubscribes / Delivered | Below 0.5% | Audience-content fit and send frequency tolerance |
| Bounce rate | Bounces / Sent | Below 2% | List data quality |
| Conversion rate | Conversions / Delivered | 1-5% | Full-funnel email performance |
| Revenue per send | Total revenue / Emails sent | Varies | Direct monetary contribution |
| List growth rate | (New subs - Unsubs) / Total list | 2-5% per month | Audience acquisition health |
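The ratios above can be computed directly from raw send counts. A minimal Python sketch (function and field names are illustrative, not drawn from any particular email platform's API):

```python
def email_metrics(sent, delivered, opens, clicks, unsubs, bounces, conversions):
    """Compute the email ratios from the table above (all inputs are unique counts)."""
    return {
        "delivery_rate": delivered / sent,
        "open_rate": opens / delivered,
        "ctr": clicks / delivered,
        "ctor": clicks / opens,            # click-to-open: clicks among openers only
        "unsub_rate": unsubs / delivered,
        "bounce_rate": bounces / sent,
        "conversion_rate": conversions / delivered,
    }

m = email_metrics(sent=10_000, delivered=9_700, opens=2_400,
                  clicks=300, unsubs=20, bounces=300, conversions=150)
print(f"Open rate: {m['open_rate']:.1%}, CTOR: {m['ctor']:.1%}")
```

Note that CTR and CTOR share a numerator but answer different questions: CTR reflects the whole delivered audience, CTOR only those who opened.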
Social Platforms
| Metric | How It Is Calculated | Insight Provided |
|---|---|---|
| Impressions | Times content appeared in feeds | Distribution breadth |
| Reach | Unique users who saw content | Audience coverage |
| Engagement rate | (Reactions + Comments + Shares) / Reach | Content resonance |
| Click-through rate | Link clicks / Impressions | Ability to drive traffic |
| Follower growth rate | Net new followers / Total followers per period | Audience expansion pace |
| Share/Repost rate | Shares / Reach | Virality and advocacy signal |
| Video view rate | Views / Impressions | Hook effectiveness for video |
| Video completion rate | Completed views / Total views | Content quality and length fit |
| Share of voice | Your mentions / Category total mentions | Competitive visibility |
Paid Advertising (Search and Social)
| Metric | How It Is Calculated | Insight Provided |
|---|---|---|
| Impressions | Times the ad appeared | Budget utilization and audience sizing |
| Click-through rate (CTR) | Clicks / Impressions | Creative and targeting relevance |
| Cost per click (CPC) | Spend / Clicks | Traffic generation efficiency |
| Cost per thousand impressions (CPM) | (Spend / Impressions) × 1,000 | Awareness cost efficiency |
| Conversion rate | Conversions / Clicks | Landing page and offer effectiveness |
| Cost per acquisition (CPA) | Spend / Conversions | Full-funnel cost efficiency |
| Return on ad spend (ROAS) | Revenue / Ad spend | Revenue generation return |
| Quality Score (search) | Platform relevance rating (1-10) | Alignment of ad, keyword, and destination |
| Frequency | Average exposures per user | Ad fatigue risk indicator |
| View-through conversions | Conversions from users who saw but did not click | Influence of display and awareness placements |
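As with email, the core paid-media ratios are simple arithmetic over spend and event counts. A brief sketch with illustrative numbers:

```python
def paid_metrics(spend, impressions, clicks, conversions, revenue):
    """Core paid-media efficiency ratios from the table above."""
    return {
        "ctr": clicks / impressions,
        "cpc": spend / clicks,
        "cpm": spend / impressions * 1000,   # cost per thousand impressions
        "conversion_rate": conversions / clicks,
        "cpa": spend / conversions,
        "roas": revenue / spend,
    }

p = paid_metrics(spend=5_000, impressions=250_000, clicks=4_000,
                 conversions=120, revenue=18_000)
print(f"CPC: ${p['cpc']:.2f}, CPA: ${p['cpa']:.2f}, ROAS: {p['roas']:.1f}x")
```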
Organic Search / SEO
| Metric | How It Is Calculated | Insight Provided |
|---|---|---|
| Organic sessions | Visits originating from search engines | Overall SEO health |
| Keyword positions | Rank for target search terms | Search result visibility |
| Organic CTR | Clicks / Search impressions | Title and meta description appeal |
| Indexed pages | Pages present in the search index | Crawlability and site architecture |
| Domain authority | Third-party composite score | Aggregate site strength |
| Backlink count | External domains linking inward | Off-page authority and content value |
| Page speed | Time to interactive | UX quality and ranking signal |
| Organic conversion rate | Conversions / Organic sessions | Intent alignment and content quality |
| Top organic entry pages | Most-visited pages from search | Highest-performing SEO content |
Content Performance
| Metric | How It Is Calculated | Insight Provided |
|---|---|---|
| Pageviews | Total views across content pages | Content reach |
| Unique visitors | Distinct users consuming content | True audience size |
| Average time on page | Duration spent on content pages | Depth of engagement |
| Bounce rate | Single-page sessions / All sessions | Content-audience alignment and UX |
| Scroll depth | Percentage of page scrolled | Engagement persistence |
| Social shares | Times content was distributed socially | Audience advocacy |
| Backlinks generated | External links earned by content | SEO value and authority |
| Leads attributed | Leads traced to content interaction | Conversion power |
| Content ROI | Attributed revenue / Production cost | Investment return |
Pipeline and Revenue Metrics
| Metric | How It Is Calculated | Insight Provided |
|---|---|---|
| Marketing qualified leads (MQLs) | Leads passing marketing qualification criteria | Top-of-funnel output |
| Sales qualified leads (SQLs) | MQLs accepted by the sales team | Lead quality |
| MQL-to-SQL conversion | SQLs / MQLs | Marketing-sales alignment |
| Pipeline created | Dollar value of new opportunities | Marketing revenue impact |
| Pipeline velocity | (Opportunities × Win rate × Avg deal size) / Sales cycle length | Campaign urgency and quality signal |
| Customer acquisition cost (CAC) | Total marketing + sales spend / New customers | Acquisition efficiency |
| CAC payback period | Months to recoup CAC from revenue | Unit economics viability |
| Marketing-sourced revenue | Revenue from marketing-originated deals | Direct marketing contribution |
| Marketing-influenced revenue | Revenue from deals with any marketing touchpoint | Broader marketing footprint |
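Two of these metrics reward a worked formula. CAC payback divides acquisition cost by monthly gross profit per customer; pipeline velocity is sketched here using one common formulation, opportunities × win rate × average deal size over cycle length (formulations vary across teams, so treat this as an assumption):

```python
def cac_payback_months(cac, monthly_revenue_per_customer, gross_margin=1.0):
    """Months to recoup acquisition cost from gross profit per customer."""
    return cac / (monthly_revenue_per_customer * gross_margin)

def pipeline_velocity(opportunities, win_rate, avg_deal_size, cycle_days):
    """One common formulation: expected revenue moving through the pipeline per day."""
    return opportunities * win_rate * avg_deal_size / cycle_days

# CAC of $1,200 recouped from $150/month at 80% gross margin -> 10 months
print(cac_payback_months(1200, 150, 0.80))
# 50 open opportunities, 25% win rate, $20k deals, 90-day cycle
print(pipeline_velocity(opportunities=50, win_rate=0.25, avg_deal_size=20_000, cycle_days=90))
```

Anything that shortens the cycle or lifts the win rate raises velocity even with the same opportunity count, which is why velocity is a quality signal, not just a volume one.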
Report Structures
Weekly Snapshot
Designed for rapid team consumption:
- Three headline metrics with week-over-week movement
- Wins: 1-2 data-backed highlights
- Watch items: 1-2 areas requiring attention with supporting numbers
- Upcoming actions: 3-5 priorities for the week ahead
Monthly Performance Review
Standard format for stakeholder reporting:
- Executive summary (3-5 sentences)
- Core metrics table with month-over-month and target comparisons
- Channel-level performance breakdown
- Campaign results and highlights
- What succeeded and what underperformed, with working hypotheses
- Recommendations and priorities for the coming month
- Budget spent vs. planned
Quarterly Strategic Review
For leadership-level analysis:
- Quarter results against stated goals
- Year-to-date progress and trajectory
- Channel-by-channel ROI assessment
- Campaign portfolio performance summary
- Competitive and market landscape observations
- Strategic recommendations for the next quarter
- Budget proposal and reallocation plan
- Experiment outcomes and key learnings
Dashboard Construction Principles
- Feature the metrics that tie directly to business goals, not vanity numbers
- Display trends over multiple periods rather than isolated data points
- Provide comparison anchors: prior period, target, industry benchmark
- Apply uniform color signaling: green for on-track, yellow for at-risk, red for off-track
- Organize by funnel stage or the business question being answered
- Confine the dashboard to a single screen; relegate granular data to an appendix
- Match the refresh cadence to the decision cadence (real-time for paid media, weekly for content)
Trend Analysis and Projection
Spotting Patterns
When examining performance data, investigate:
- Sustained direction: is the metric consistently rising, falling, or flat across 4+ consecutive periods?
- Turning points: at what moment did the trajectory change, and what event coincided?
- Cyclical patterns: are there recurring fluctuations by day of week, month, or quarter?
- Outliers: isolated spikes or dips — what triggered them, and could the cause be replicated or avoided?
- Predictive signals: which metrics shift first and foreshadow downstream outcomes?
Analytical Process
- Plot the metric across time with at least 8-12 data points for statistical reliability
- Characterize the overall trajectory (rising, declining, stable, or oscillating)
- Quantify the rate of change — is the trend accelerating or flattening?
- Layer in external events (campaign launches, product updates, market shifts)
- Benchmark against targets or industry norms
- Look for correlations with related metrics
- Formulate causal hypotheses and design experiments to test them
Projection Techniques
- Trend extension: project the existing trajectory forward (works best for stable metrics)
- Rolling average: average the most recent 3-6 periods to dampen noise
- Year-over-year overlay: use the prior year's seasonal pattern, adjusted for a growth coefficient
- Funnel arithmetic: forecast outputs from inputs (X leads at Y% conversion rate yields Z customers)
- Scenario planning: model optimistic, expected, and pessimistic cases
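Three of these techniques reduce to a few lines of arithmetic. A sketch of a rolling average, a naive trend extension, and funnel arithmetic (the series and rates are illustrative):

```python
def rolling_average(series, window=3):
    """Average of the most recent `window` periods to dampen noise."""
    return sum(series[-window:]) / window

def trend_extension(series, periods_ahead=1):
    """Naive linear extension: apply the average period-over-period change."""
    deltas = [b - a for a, b in zip(series, series[1:])]
    avg_delta = sum(deltas) / len(deltas)
    return series[-1] + avg_delta * periods_ahead

def funnel_forecast(leads, conversion_rate):
    """Funnel arithmetic: X leads at Y% conversion yields Z customers."""
    return leads * conversion_rate

monthly_mqls = [100, 110, 105, 120, 125, 135]
print(rolling_average(monthly_mqls))       # mean of the last 3 periods
print(trend_extension(monthly_mqls, 3))    # projected value 3 months out
print(funnel_forecast(500, 0.04))          # expected customers from 500 leads
```

Per the guardrails below, outputs like these should be reported as ranges (e.g. by running the forecast against optimistic and pessimistic conversion rates), not as single numbers.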
Projection Guardrails
- Near-term forecasts (1-3 months) are far more reliable than long-range ones
- Projections built on fewer than 12 data points should be labeled low-confidence
- External disruptions (market shifts, competitive moves, economic changes) can invalidate trend-based models
- Always express forecasts as ranges rather than single numbers
Attribution Fundamentals
Why Attribution Matters
Buyers rarely convert after a single interaction. Attribution assigns credit across the multiple touchpoints that precede a conversion, informing channel investment decisions.
Standard Attribution Models
| Model | Mechanism | Strength | Weakness |
|---|---|---|---|
| Last interaction | All credit to the final touchpoint | Identifies closing channels | Overlooks awareness and nurture |
| First interaction | All credit to the initial touchpoint | Highlights discovery channels | Ignores conversion drivers |
| Even distribution | Equal credit across all touchpoints | Acknowledges every channel | Fails to reflect relative influence |
| Recency-weighted | Increasing credit as touchpoints approach conversion | Balances awareness and closing | Can undervalue early awareness |
| Position-based (40/20/40) | Heavy credit to first and last, remainder split across the middle | Honors both discovery and conversion | Somewhat arbitrary weight assignment |
| Algorithmic | Machine-learned credit based on conversion path data | Most reflective of actual influence | Demands large conversion volumes |
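The position-based (40/20/40) split can be sketched in a few lines. This is a minimal illustration; real attribution tools differ on edge cases such as one- and two-touch paths (the 50/50 split for two touches here is one common convention, assumed rather than standard):

```python
def position_based_credit(touchpoints, first=0.40, last=0.40):
    """Position-based attribution: heavy credit to the first and last
    touchpoints, with the remainder split evenly across the middle."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: 1.0}
    if n == 2:
        # Assumed convention: with no middle touches, split evenly
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
    middle_share = (1.0 - first - last) / (n - 2)
    credit = {}
    for i, tp in enumerate(touchpoints):
        share = first if i == 0 else last if i == n - 1 else middle_share
        credit[tp] = credit.get(tp, 0.0) + share
    return credit

path = ["organic_search", "email", "webinar", "branded_search"]
print(position_based_credit(path))
# organic_search 0.40, email 0.10, webinar 0.10, branded_search 0.40
```

Note how the discovery channel (organic search) and the closing channel (branded search) each receive full weight, which is exactly the balance single-touch models lose.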
Practical Attribution Advice
- If you have no attribution system, begin with last-interaction — it is the simplest and most immediately actionable
- Contrast first-interaction and last-interaction views to learn which channels drive discovery vs. closure
- Position-based (40/20/40) is a pragmatic default for most B2B organizations
- Algorithmic models need high conversion volumes to produce statistically sound results
- Treat attribution as directional intelligence, never as absolute truth
- Any multi-touch model is more informative than a single-touch model, and any model outperforms none
Attribution Traps
- Optimizing a single channel based on single-touch data can starve the rest of the funnel
- Awareness-oriented channels (display, organic social, PR) will consistently underperform in last-touch reports
- Conversion-oriented channels (branded search, retargeting) will consistently underperform in first-touch reports
- Self-reported attribution ("How did you hear about us?") offers useful qualitative signal but is unreliable for quantitative allocation
- Cross-device and cross-channel tracking gaps guarantee that attribution data is always incomplete
Optimization Methodology
Systematic Improvement Process
- Detect: which metrics fall short of targets or benchmarks?
- Locate: where in the funnel does the breakdown occur? (impressions, clicks, conversions, retention)
- Theorize: what is causing the shortfall? (targeting, messaging, creative, offer design, timing, technical issues)
- Rank: which interventions promise the greatest impact relative to effort?
- Experiment: run a controlled test to validate or disprove the hypothesis
- Evaluate: did the metric improve meaningfully?
- Act: scale successful changes broadly; iterate on inconclusive or negative results
Intervention Levers by Funnel Position
| Funnel Position | Warning Sign | Available Levers |
|---|---|---|
| Awareness | Low impressions, limited reach | Budget levels, targeting parameters, channel mix, ad format |
| Interest | Low CTR, weak engagement | Creative execution, headline copy, content hooks, audience refinement |
| Consideration | High bounce rate, low dwell time | Page content, load speed, relevance alignment, user experience |
| Conversion | Low conversion rate | Offer structure, CTA wording, form complexity, trust elements, page layout |
| Retention | Elevated churn, declining re-engagement | Onboarding flow, email sequences, product experience, support quality |
Impact-Effort Prioritization
Score every optimization idea on two axes:
Impact (potential metric movement):
- High: directly addresses the primary bottleneck
- Medium: improves a contributing factor
- Low: yields incremental gains
Effort (implementation difficulty):
- Low: copy tweak, targeting adjustment, quick A/B test
- Medium: new creative asset, page redesign, workflow modification
- High: new tooling, cross-team initiative, major content production
Execution order:
- High impact, low effort — execute immediately
- High impact, high effort — plan and staff
- Low impact, low effort — pursue if bandwidth allows
- Low impact, high effort — defer or deprioritize
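The two-axis scoring above can be encoded as a simple sort: highest impact first, lowest effort as the tiebreaker. The idea names and scores below are illustrative:

```python
# Map the qualitative labels to sortable scores
IMPACT = {"high": 3, "medium": 2, "low": 1}
EFFORT = {"low": 1, "medium": 2, "high": 3}

def prioritize(ideas):
    """Order the backlog: highest impact first, then lowest effort."""
    return sorted(ideas, key=lambda i: (-IMPACT[i["impact"]], EFFORT[i["effort"]]))

backlog = [
    {"name": "Rewrite CTA copy", "impact": "high", "effort": "low"},
    {"name": "New tooling rollout", "impact": "low", "effort": "high"},
    {"name": "Landing page redesign", "impact": "high", "effort": "medium"},
]
for idea in prioritize(backlog):
    print(idea["name"])
```

The sort reproduces the execution order above: high-impact/low-effort items surface first, low-impact/high-effort items sink to the bottom.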
Experimentation Discipline
- Isolate a single variable per test for interpretable results
- Lock in the success metric before the test begins
- Calculate the required sample size in advance and resist ending tests prematurely
- Run each test for at least one complete business cycle (usually a full week for B2B)
- Record all experiments and outcomes, including negative and null results
- Circulate learnings across the team — a test that confirms the current approach still builds confidence
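Calculating the required sample size in advance is usually done with a two-proportion power formula. The sketch below uses a common approximation with z-values for 95% confidence and 80% power; production tests should use a dedicated power calculator or statistics library rather than this shorthand:

```python
import math

def sample_size_per_variant(p_base, p_target, z_alpha=1.96, z_beta=0.84):
    """Approximate per-variant sample size for a two-proportion A/B test.
    Defaults correspond to 95% confidence and 80% power."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = p_target - p_base
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Detecting a lift from a 3% to a 4% conversion rate requires
# several thousand visitors per variant:
print(sample_size_per_variant(0.03, 0.04))
```

The quadratic dependence on effect size is why small expected lifts demand large samples, and why ending a test early almost always means it was underpowered.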
Ongoing Optimization Rhythm
- Daily: check paid campaign pacing, flag anomalies, review ad approval status
- Weekly: assess channel-level performance, pause lagging efforts, amplify winners
- Biweekly: rotate ad creative and launch new test variants
- Monthly: conduct a comprehensive performance review, surface new optimization opportunities, refresh projections
- Quarterly: reassess channel strategy, budget distribution, and audience targeting at a strategic level