Open rate. Click-through rate. Ask most marketing teams how their email program is performing and these are the numbers they will show you.
But here’s the problem: Open and click metrics don’t tell you whether your email program is actually working. They tell you how people interacted with the email message, not whether the email generated meaningful results for the company.
Relying on them as key performance indicators (KPIs) can lead you to optimize for entirely the wrong results.
What the data shows about open and click rates
I’ve seen subject line A/B split tests where the version with the highest open rate resulted in fewer conversions or lower revenue per email (RPE) than the other versions. I’ve also seen campaigns where the version with the highest click-through rate generated fewer conversions or a lower RPE than the other versions.
This isn’t just anecdotal. When I analyzed the subject line and campaign tests I have conducted for a wide range of clients and audiences over many years, the results were striking.
Open rate as a KPI: Rarely useful
Many marketing teams use open rate as their primary KPI for subject line testing. After all, the subject line affects whether someone opens the email. But if your business goal is conversions or revenue, open rate is a weak proxy.
Through subject line A/B tests conducted using the scientific method, I went back to see how often the version with the highest open rate was also the version with the highest conversion rate (for non-revenue actions) or RPE.

Data suggests that if your marketing department relies on open rate as a KPI for subject line testing:
- In 20% of cases, the open rate correctly identifies the version with the highest conversion rate or RPE.
- In 10% of cases, the open rate identifies the wrong version.
- In 70% of cases, the open rate difference falls within the margin of error and is therefore inconclusive, even when one version has a clearly higher conversion rate or RPE.
In other words, open rate either misled the analysis or provided no useful information in 80% of tests.
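The "margin of error" judgment above comes down to a standard significance test. As an illustration only (the send counts and open counts below are invented, not drawn from my test data), here is how a two-proportion z-test distinguishes a real open-rate difference from noise:

```python
import math

def two_proportion_z(opens_a: int, sent_a: int, opens_b: int, sent_b: int) -> float:
    """z-statistic for the difference between two open rates (pooled standard error)."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# Hypothetical test: 21% vs. 20% open rate on 10,000 sends each.
z = two_proportion_z(2100, 10_000, 2000, 10_000)
print(abs(z) >= 1.96)  # → False: inconclusive at the 95% confidence level
```

A full percentage point of open-rate "lift" on 10,000 sends still lands inside the margin of error, which is exactly how so many subject line tests end up inconclusive.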
The click-through rate is even worse
If open rate isn’t a reliable KPI, some turn to click-through rate instead. This seems reasonable: clicks occur later in the funnel, so they should correlate more closely with conversions or revenue.
Unfortunately, the data suggests otherwise. Comparing CTR against conversion rate and revenue per email across my campaign tests, the results looked like this:
- Only 7% of the time did the version with the highest CTR also produce the highest conversion rate or RPE.
- In 36% of cases, the CTR indicated the wrong winner.
- In 57% of cases, the CTR differences were not statistically significant, even though the differences in business metrics clearly were.
CTR was a reliable indicator of the real winner only 7% of the time.
The metrics that actually measure email performance
Diagnostic metrics such as opens and clicks do not reliably predict business outcomes. If you want to accurately evaluate email performance, you need to look at metrics that measure actual results.
Two of the most important are conversion rate and revenue per email.
Conversion Rate: The metric that shows whether email is working
Conversion rate measures the percentage of recipients who complete the action your email was designed to drive. A conversion doesn’t have to be a sale: it could be a purchase, a demo request, a webinar registration, a lead-generation form submission, an app download, or a subscription renewal.
Here is the formula:
- Conversion Rate = Conversions ÷ (Number of Emails Sent – Bounces)
- Multiply by 100 to express it as a percentage.
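The formula above translates directly into code. A minimal sketch, using made-up numbers purely for illustration:

```python
def conversion_rate(conversions: int, emails_sent: int, bounces: int) -> float:
    """Conversion rate = conversions / (emails sent - bounces), as a percentage."""
    delivered = emails_sent - bounces
    return conversions / delivered * 100

# Hypothetical campaign: 10,000 sent, 400 bounced, 240 conversions.
print(conversion_rate(240, 10_000, 400))  # → 2.5 (percent)
```

Note that the denominator is delivered emails, not sent emails, so a deliverability problem doesn't silently deflate the rate.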
Conversion rate directly measures whether your email campaign is achieving its objective. It answers the most important question: Did the email cause the behavior we wanted?
This is why conversion rate is often the best KPI for campaigns designed to drive actions rather than immediate revenue.
Email Revenue: The metric that demonstrates email ROI
For ecommerce programs, the metric I’m most interested in is RPE.
- Revenue per Email = Total Revenue ÷ (Number of Emails Sent – Bounces)
- Express this metric as a dollar figure.
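The RPE calculation is equally simple to compute. A quick sketch with invented figures:

```python
def revenue_per_email(total_revenue: float, emails_sent: int, bounces: int) -> float:
    """Revenue per email = total revenue / (emails sent - bounces), in dollars."""
    delivered = emails_sent - bounces
    return total_revenue / delivered

# Hypothetical campaign: $12,000 attributed revenue, 10,000 sent, 400 bounced.
print(revenue_per_email(12_000, 10_000, 400))  # → 1.25 (dollars per delivered email)
```

Because both metrics share the same delivered-email denominator, RPE and conversion rate can be compared across campaigns of very different sizes.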
Unlike clicks or opens, this metric directly links email performance to financial results. It often reveals information hidden from engagement metrics.
For example, I once talked to a company that sells high-end audio equipment. They had an aggressive A/B testing program and ran a test to see whether to include product prices in their emails. They used click-through rate as their KPI. Here are the results.

The unpriced version generated significantly more clicks and was therefore declared the winner. But think about what’s going on here. High-end audio equipment is expensive. Many recipients probably clicked simply to find out the price. If the price had been included in the email, as in the control version, recipients would not have needed to click.
The higher CTR version may simply have generated more curiosity clicks, not more clicks from qualified buyers. Because the company wasn’t tracking email conversions or revenue, it couldn’t determine which version actually generated more sales.
Without this data, the test winner is questionable at best.
The real lesson for email marketing experts
Open rates and click-through rates are not useless. They are valuable diagnostic metrics that help you understand how recipients interact with the content of your email.
But they shouldn’t be the primary KPIs for most email programs. If your goal is conversions or revenue, and for most businesses it is, performance metrics should directly measure those results.
This means focusing on:
- Conversion rate to measure whether your campaigns generate the desired action.
- Revenue per email to measure the financial impact of your program.
Because in the end, it isn’t opens and clicks that make a campaign successful; results do.
Author’s Note: Artificial intelligence tools were used to assist in writing and editing portions of this article. Test data, analysis, and conclusions are based on subject line and campaign testing conducted for client email programs over multiple years.
