Measurement-Centric Terms

Measurement-Centric Terms are the vocabulary of marketing measurement - the words teams use to describe what they’re tracking, how they’re attributing credit, and whether anything is actually working. Words like CAC, LTV, ROAS, attribution, incrementality, cohort, funnel, conversion, MQL, SQL, pipeline, payback period, retention, NPS.

This page is a primer on the shape of that vocabulary rather than a comprehensive glossary in its own right - each of these terms has its own dedicated entry. The goal here is to explain how the vocabulary fits together and where the common traps live, so that the glossary as a whole makes sense as a system.

The four families of measurement vocabulary

Most measurement terms fall into one of four groups:

Efficiency metrics. How much did we spend to get this outcome? CAC (customer acquisition cost), CPA (cost per acquisition), CPL (cost per lead), ROAS (return on ad spend), payback period. These answer “is the growth economically sound?”

Volume and velocity metrics. How much of it is happening and how fast? Sessions, MQLs, SQLs, opportunities created, pipeline generated, deals closed, cycle length, conversion rates between stages. These answer “is the machine moving?”

Quality and retention metrics. How good are the outcomes after acquisition? Activation rate, 30-day retention, churn, net revenue retention (NRR), LTV, NPS, expansion revenue. These answer “did we acquire the right customers?”

Attribution and causal metrics. How do we know what caused what? First-touch, last-touch, multi-touch models, marketing-mix modelling, incrementality tests, holdout experiments, lift studies. These answer “was this program actually responsible for the result?”

A good measurement stack has terms from all four families in play. A stack that only reports efficiency metrics is flying blind on quality. A stack obsessed with volume rarely catches retention problems until it’s too late.
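The "all four families" check can be sketched mechanically. A minimal Python illustration - the metric names and family assignments below are examples following the groupings above, not a real reporting stack - that flags which families a report fails to cover:

```python
# Hypothetical mapping of example metrics to the four families described above.
FAMILIES = {
    "efficiency": {"cac", "cpa", "cpl", "roas", "payback_period"},
    "volume_velocity": {"sessions", "mqls", "sqls", "pipeline", "cycle_length"},
    "quality_retention": {"activation_rate", "retention_30d", "churn", "nrr", "ltv", "nps"},
    "attribution_causal": {"first_touch", "last_touch", "mmm", "incrementality", "lift"},
}

def missing_families(reported_metrics):
    """Return the families a report draws no metrics from, sorted by name."""
    reported = set(reported_metrics)
    return sorted(f for f, members in FAMILIES.items() if not members & reported)

# A deck that only reports efficiency and volume is blind on the other two:
print(missing_families(["cac", "roas", "mqls", "pipeline"]))
```

Running this prints `['attribution_causal', 'quality_retention']` - exactly the blind spots described above.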

Where measurement vocabulary gets misused

Four recurring problems:

Metrics without definitions. Two teams report CAC but compute it differently. One includes salaries; the other doesn’t. One uses blended CAC; the other paid-only. “Our CAC is $4,200” means nothing until the definition is agreed.

Vanity over outcome. “Traffic grew 80%” or “we have 150,000 followers”, with no connection to pipeline, revenue, or retention. Volume without outcome is theatre.

False precision. “LinkedIn drove $2.4M of pipeline last quarter” reported to the nearest dollar, based on a last-touch attribution model that everyone privately agrees is wrong. Attribution numbers are estimates - pretending otherwise erodes trust.

Metric-gaming. Teams bonused on MQL volume soften the MQL definition. Teams bonused on pipeline generate inflated opportunities. Any metric used as a target eventually gets gamed (Goodhart’s Law). Designing bonus structures around outcomes rather than intermediate metrics helps, but the problem never fully goes away.
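The definitional trap is easiest to see with numbers. A hypothetical sketch - every figure below is invented - of how one quarter's data yields two defensible "CAC" values depending on which definition a team uses:

```python
# Invented quarterly figures to illustrate the blended vs paid-only split.
paid_spend = 300_000       # media spend only
salaries = 180_000         # marketing team payroll
new_customers_paid = 90    # customers attributed to paid channels
new_customers_all = 150    # all new customers, any channel

# Paid-only CAC: media spend over paid-attributed customers.
paid_only_cac = paid_spend / new_customers_paid

# Blended CAC: all marketing cost over all new customers.
blended_cac = (paid_spend + salaries) / new_customers_all

print(f"paid-only CAC: ${paid_only_cac:,.0f}")  # $3,333
print(f"blended CAC:   ${blended_cac:,.0f}")    # $3,200
```

Both numbers are "CAC", both are defensible, and they answer different questions - which is why the definition has to be agreed before the number means anything.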

How to use measurement vocabulary well

Three practices:

Define once, document somewhere findable. A one-page “metric definitions” doc that everyone can point to. Revised quarterly. Prevents the slow drift where the same term means four different things across four quarterly reviews.

Pair efficiency with quality. Never report CAC without LTV. Never report ROAS without retention. Never report MQL volume without MQL-to-SQL conversion. Efficiency metrics alone always look better than they are.

Name the model, not just the number. “Using last-touch attribution, paid search contributed 38% of sourced pipeline. Under our multi-touch model, it’s 22%.” Naming the methodology keeps the conversation honest.
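The "pair efficiency with quality" practice can even be enforced mechanically. A hypothetical sketch of a guard that flags efficiency metrics reported without their quality counterpart - the pairings mirror the practice above; the metric names are examples:

```python
# Hypothetical efficiency -> quality pairings, following the practice above.
REQUIRED_PAIRS = {
    "cac": "ltv",
    "roas": "retention_30d",
    "mql_volume": "mql_to_sql_rate",
}

def unpaired_metrics(metrics):
    """Return efficiency metrics present in a report without their quality pair."""
    return [m for m, pair in REQUIRED_PAIRS.items()
            if m in metrics and pair not in metrics]

# CAC appears without LTV, so it gets flagged; ROAS is fine because
# 30-day retention is reported alongside it.
print(unpaired_metrics({"cac": 4200, "roas": 3.1, "retention_30d": 0.82}))
```

This prints `['cac']` - the one efficiency number in the report with no quality metric beside it.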

An example

A Series A B2B team was reporting healthy numbers in their quarterly board deck: 40% traffic growth, 60% MQL growth, CAC down 12%. The board approved another round of growth spend.

Four months later, revenue had only grown 8%. What went wrong: the measurement vocabulary was all volume and efficiency, no quality. MQL-to-SQL conversion had quietly dropped from 45% to 26% because the MQL definition had softened. LTV had fallen because newer customers skewed toward smaller accounts with higher churn. Each individual metric looked good; the system was rotting.

The fix wasn’t a new dashboard - it was adding three metrics to the top-line view: MQL-to-SQL conversion, 90-day customer retention, and net revenue retention. With those alongside the volume and efficiency numbers, the board deck stopped telling only half the story.
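The example's arithmetic is worth working through. Assuming a baseline of 1,000 MQLs (an invented figure - only the growth and conversion rates are the ones quoted above), 60% MQL growth combined with the conversion drop means SQL volume actually fell:

```python
# Working the example through. The baseline MQL count is hypothetical;
# the 60% growth and the 45% -> 26% conversion rates are from the text.
mqls_before = 1_000
mqls_after = mqls_before * 1.60   # 60% MQL growth from the board deck
sqls_before = mqls_before * 0.45  # 45% MQL-to-SQL conversion
sqls_after = mqls_after * 0.26    # conversion quietly fell to 26%

print(f"SQLs before: {sqls_before:.0f}")  # 450
print(f"SQLs after:  {sqls_after:.0f}")   # 416
```

Despite the healthy-looking 60% MQL growth, the number of sales-qualified leads shrank - which is why revenue lagged so far behind the deck.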
