Why Your Deliverability Rate Is Wrong (and what to do about it)
The #1 thing I hear from marketers seeking help with email performance issues is, “our open rates have gone down and we’re not reaching [insert any email goal or KPI here], but our deliverability rate is still really good.”
Unfortunately, the situation isn’t quite that simple. Let’s talk about why. Starting with...
What is deliverability?
If you’re readin’ this here blog, you probably already know what deliverability is. At least, you think you do. We’ll find out if that’s true right meowww.
Deliverability is the percentage of emails delivered to the inbox.
So, you did know that! Good on ya’.
On paper, it looks like this:
Deliverability Rate = Number of Emails Delivered to Inbox divided by Quantity Sent
You probably knew that, too. (smarty pants)
It sounds so simple, doesn’t it? Just gimme that data and I’ll punch it into my calculator. Easy peasy; done and done.
One problem: while we do have a definition for the term “deliverability”, and it does include a calculation, our deliverability rates are all wrong! Yours…mine…everybody’s.
Why’s my deliverability rate wrong?
I hate to be the bearer of bad news. Attribution’s hard enough as it is…ugh. But there are a few reasons why your deliverability rate isn’t as accurate as you think it is. For some people, it’s…
Reason 1: You’re thinking of another metric
Deliverability is a term often confused with the delivery rate. You can see why...just look at how similar the two calculations are.
Deliverability Rate = Number of Emails Delivered to Inbox divided by Quantity Sent
VS.
Delivery Rate = (Quantity Sent - Number of Bounces) divided by Quantity Sent
They look very similar, don't they? Heck, they even sound the same! But, they're different!
Email Deliverability ≠ Delivery
One of these is very accurate. (the delivery rate)
Because mailbox providers DO give you feedback to let you know whether an email you sent was accepted into their servers, or rejected. So, delivery rates should be quite accurate within your ESP reporting. That's gooood.
The other is incredibly subjective. Open to interpretation. Straight up voodoo, in some cases. (that’s the deliverability rate)
Because the mailbox providers don’t tell you what happens after that. They hang up the phone! That’s…not so good.
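To make the difference concrete, here’s a minimal sketch of both calculations (Python, with made-up numbers). Notice that the delivery rate only needs inputs your ESP actually reports back to you, while the deliverability rate needs a number nobody hands senders: how many messages actually reached the inbox.

```python
# Hypothetical numbers, purely for illustration.
sent = 10_000   # messages you attempted to send
bounces = 150   # rejections reported back by mailbox providers

# Delivery rate: every input comes from real feedback in your ESP reporting.
delivery_rate = (sent - bounces) / sent
print(f"Delivery rate: {delivery_rate:.1%}")  # 98.5%

# Deliverability rate: the key input is one mailbox providers never report.
inboxed = None  # nobody tells senders how many messages hit the inbox
if inboxed is not None:
    deliverability_rate = inboxed / sent
else:
    deliverability_rate = float("nan")  # the most honest answer available
print(f"Deliverability rate: {deliverability_rate}")  # nan
```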
Which brings us to the next reason why your deliverability rate’s not accurate…
Reason 2: Mailbox providers don’t tell us if mail went to the inbox
That’s right! One of the two metrics you need to calculate your deliverability rate is a data point that doesn’t exist for senders. Mailbox providers DO NOT tell us if a message went to the inbox or the spam folder…or if it was dropped on the floor in the basement (hello, Microsoft).
This lack of actual evidence about where a message landed is why I get so grumpy when I hear people say, “my deliverability rate is X,” or when vendors try to compare deliverability rates between ESPs.
Speaking of fibs people’ve been telling since the beginning of (email) time…
Reason 3: Somebody’s misleading you
There are hundreds of factors potentially impacting deliverability, and without anybody knowing where anything actually landed besides recipients and the mailbox providers who serve them, it’s all just…a guesstimate like cookies eaten by Santa divided by cookies baked on Christmas Eve.
Cool story, Hansel.
But 👏 we 👏 have 👏 no 👏 idea 👏 who 👏 ate 👏 the 👏 cookies.
Yet, some platforms and vendors have a metric within their dashboards called the “deliverability rate”, leaving it up to the marketer to apply the necessary pinches (pounds, really) of salt required to put that lil’ phantom metric into proper context.
More responsible vendors use terminology making it clear what the score represents and how it’s being calculated.
Validity’s got one called SenderScore. Pop in your sending IP or domain. Out pops a 0-100 score giving you insight into several aspects of reputation. A glimpse into how mailbox providers may be viewing your reputation, really.
We’ve got one where I work called StreamScore, too. It considers 25 different data points, including delivery and engagement metrics, bounce responses, spam trap hits and seed testing, mailbox provider feedback (such as Google Postmaster Tools), and more, in an attempt to capture more of the factors that matter most to deliverability. My colleagues spent years refining it.
Even still, these sender reputation scores are always just a guess. Even if it’s a super educated one.
Which is why we don’t call ours a “deliverability rate”!
…because inbox placement rates aren’t based on actual feedback from mailbox providers. Calling it a “deliverability rate” would be misleading, since they hang up right after a message is accepted or rejected.
Now you know! Any deliverability rate or score coming from your ESP, a deliverability monitoring tool, or that random guy on LinkedIn who thinks he has all the answers...all just a guess.
Which might lead you to wonder…
Should I just ignore my deliverability rate then?
Not necessarily. There are hundreds of factors affecting inbox placement, and almost as many data points you could be monitoring (a rough sketch of how you might keep tabs on a few of them follows this list):
Delivery and bounce rates
Replies and unsubscribe reasons
Conversions and website traffic around the time of send
Mailbox provider feedback (like Google Postmaster Tools and Microsoft’s SNDS)
Seed list test results
Spam trap and blocklist feeds
Deliverability rates…er, sender reputation scores (as long as you know how those scores are being calculated)
DMARC reports and authentication status
User behaviors like devices, mail clients, and geolocation
Just to name a few… 😰
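Here’s that sketch: one simple way to keep a handful of those signals together in one place. Every field name and number below is invented for illustration; it isn’t any vendor’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class SenderHealthSnapshot:
    """One day's worth of deliverability signals (names invented for this sketch)."""
    delivery_rate: float               # (sent - bounces) / sent, from ESP reporting
    complaint_rate: float              # spam complaints via feedback loops
    unsubscribe_rate: float
    seed_inbox_rate: float | None      # seed-list test result, if you run one
    gpt_domain_reputation: str | None  # e.g. "HIGH", from Google Postmaster Tools
    dmarc_pass_rate: float | None      # from aggregate DMARC reports
    blocklist_hits: list[str] = field(default_factory=list)
    spam_trap_hits: int = 0

# Hypothetical numbers, purely for illustration.
today = SenderHealthSnapshot(
    delivery_rate=0.985,
    complaint_rate=0.0008,
    unsubscribe_rate=0.002,
    seed_inbox_rate=0.91,
    gpt_domain_reputation="HIGH",
    dmarc_pass_rate=0.99,
)
print(today)
```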
No one expects you to monitor them all. It's why deliverability scores were created in the first place!
Just understand how your performance metrics are being calculated.
For example, do they actually mean “delivery rate” when they say “deliverability rate”? Did they calculate it based on delivery and engagement metrics, or maybe seed test results? Or is it just…magic?
A lot of the reputation scores (and “deliverability rates”) I’ve encountered use fewer than five data points, with a heavy focus on bounce rates, opens, and spam complaints. It’s a good start – but there’s a whole lot more to the inbox equation that’s not being considered.
This can cause both false positives (FPs) and false negatives (FNs), leading to senders stressing about non-issues while missing real problems impacting performance. This is one of the hidden costs associated with deliverability issues we dug into last time.
Others apply weighting to additional data points coming from spam trap and blocklist feeds, data feeds from mailbox providers like Gmail, Yahoo, and Microsoft...plus everything else in the list above.
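To see why the number (and weighting) of inputs matters so much, here’s a toy weighted score. Every signal name, value, and weight is invented for this sketch; real products like SenderScore or StreamScore use their own far more elaborate, proprietary models.

```python
# Toy composite "sender reputation" score, 0-100. Illustrative only.
def reputation_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of 0-1 'health' signals, scaled to 0-100."""
    total_weight = sum(weights[name] for name in signals)
    weighted = sum(signals[name] * weights[name] for name in signals)
    return 100 * weighted / total_weight

# A narrow model: only three inputs, heavy on bounces, opens, and complaints.
narrow = reputation_score(
    {"low_bounce": 0.98, "opens": 0.45, "low_complaints": 0.99},
    {"low_bounce": 0.4, "opens": 0.3, "low_complaints": 0.3},
)

# A broader model folds in more of the signals from the list above.
broad = reputation_score(
    {"low_bounce": 0.98, "opens": 0.45, "low_complaints": 0.99,
     "seed_inbox": 0.91, "no_spam_traps": 1.0, "gpt_reputation": 0.75},
    {"low_bounce": 0.2, "opens": 0.15, "low_complaints": 0.15,
     "seed_inbox": 0.2, "no_spam_traps": 0.15, "gpt_reputation": 0.15},
)

print(round(narrow), round(broad))  # same sender, two different "scores"
```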
Whether it’s the deliverability rate or another metric, be sure you understand how it’s been calculated before using it to inform decisions.
Also keep in mind, these scores are meant to be used directionally, not as gospel.
The more you know 🌈
There is no silver bullet to the inbox, and there’s no silver bullet to reporting on it, either.
My advice?
Take deliverability scores with several grains of salt because mailbox providers don’t tell senders if a message went to the inbox or the spam folder. They only tell you if it was accepted into their servers (aka delivered or bounced).
Know how the sausage is being made if your ESP or deliverability monitoring/testing tool is giving you a “deliverability rate”. The output and its accuracy will be wildly different if your reputation score is calculated from 25 data points…or 3. 😳
Focus on delivering the best possible experience to your recipients by sending what people signed up for and following your engagement metrics (like opens, clicks, unsubscribes, and spam complaints) to optimize your signup processes, content, and segmentation. Keep subscribers happy and your deliverability will take care of itself.
When in doubt, reach out…to your ESP's support team — or to an email nerd like me!
For now, go shake it off. Hydrate. Ponder life, or, you know…go read about how to build a sender reputation fit for the hall of fame on my Email Marketing Bl(aahhh!)g.
Want more content like this?
HubSpot recently gave it a shout-out on their blog, so you know it’s at least halfway decent.