
AI-powered deliverability scoring: we built an internal tool and here's what we learned

data_diego

Instead of waiting for deliverability problems to surface, we built an internal ML model to predict inbox placement before hitting send.

The model

We trained on 18 months of sending data — subject lines, content features, send volumes, historical engagement, and recipient domains — mapped to actual inbox placement rates.
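To make the pipeline concrete, here's a minimal sketch of turning one campaign into a feature vector. The field names and the spam-phrase list are illustrative assumptions, not our actual schema:

```python
# Hypothetical feature extraction for a single campaign.
# Field names (subject, body, num_links, ...) are illustrative,
# not the real production schema.

SPAM_PHRASES = {"act now", "free money", "limited time"}

def extract_features(campaign: dict) -> dict:
    body = campaign["body"].lower()
    n_words = max(len(body.split()), 1)
    return {
        "subject_len": len(campaign["subject"]),
        "link_density": campaign["num_links"] / n_words,
        "image_ratio": campaign["num_images"] / n_words,
        "spam_hits": sum(p in body for p in SPAM_PHRASES),
        # deviation from the sender's typical volume, as a fraction
        "volume_delta": abs(campaign["send_volume"] - campaign["avg_volume"])
                        / max(campaign["avg_volume"], 1),
    }

features = extract_features({
    "subject": "October product update",
    "body": "Here is what shipped this month. Act now to join the beta.",
    "num_links": 3,
    "num_images": 1,
    "send_volume": 52_000,
    "avg_volume": 50_000,
})
print(features["spam_hits"])  # 1 ("act now" appears in the body)
```

Each extracted vector gets paired with the campaign's measured inbox placement rate to form one training example.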

Features that predicted deliverability

  1. Sender reputation score (35% weight): Domain and IP reputation from Postmaster Tools
  2. Engagement recency (25%): How recently recipients opened or clicked
  3. Content signals (20%): Link density, image ratio, spam-trigger phrases
  4. Volume patterns (15%): Consistency with historical sending volume
  5. Authentication health (5%): SPF/DKIM/DMARC pass rates
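The percentages above are learned feature importances, not coefficients in a hand-tuned formula. Still, as an illustration of how the five signals could roll up into one number, here's a simple weighted composite (the 0–100 signal values are made up):

```python
# Illustrative only: combines the five signals into a single 0-100
# score using the published importance weights as linear weights.
# The real model is learned, not a fixed formula like this.

WEIGHTS = {
    "sender_reputation": 0.35,
    "engagement_recency": 0.25,
    "content_signals": 0.20,
    "volume_consistency": 0.15,
    "auth_health": 0.05,
}

def composite_score(signals: dict) -> float:
    """Each signal is assumed pre-normalized to a 0-100 scale."""
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

score = composite_score({
    "sender_reputation": 90,
    "engagement_recency": 70,
    "content_signals": 80,
    "volume_consistency": 95,
    "auth_health": 100,
})
print(round(score, 1))
```

Note the weights sum to 1.0, so a campaign that scores 0–100 on every signal also gets a composite in 0–100.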

Results

The model predicts inbox placement to within 5 percentage points for 80% of campaigns. Last quarter it flagged 3 campaigns that would have had deliverability issues — we adjusted them before sending and avoided reputation damage.
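The flagging step itself is just a threshold gate on the predicted placement. A sketch, where the 85% cutoff is an assumption rather than our actual policy:

```python
# Hypothetical pre-send gate: hold any campaign whose predicted
# inbox placement falls below a threshold. The 0.85 cutoff is an
# assumption for illustration, not the real policy.

THRESHOLD = 0.85

def should_flag(predicted_placement: float) -> bool:
    return predicted_placement < THRESHOLD

queue = {
    "welcome-series": 0.93,
    "re-engagement": 0.78,
    "october-promo": 0.91,
}
flagged = [name for name, p in queue.items() if should_flag(p)]
print(flagged)  # ['re-engagement']
```

Flagged campaigns go back to the team for content or list-hygiene fixes before they're released.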

#ai #deliverability #machine-learning
98

4 Comments

deliverability_dan (Deliverability Expert)

This is basically what Google Postmaster Tools does from the receiving side. Building it from the sending side gives you proactive rather than reactive insights. Smart approach.

15
metrics_mike (Data Analyst)

The feature importance ranking is fascinating. Sender reputation at 35% makes sense — it is the foundation everything else sits on.

13
growth_grace (Growth Marketer)

How much training data did you need before the model became useful? We have about 6 months of sending data — is that enough?

9
data_diego

Six months should be sufficient if you have consistent volume. We started seeing useful predictions at about 4 months. The model improves continuously as it gets more data.
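One common way to get that continuous improvement (an assumption here — the post doesn't specify the training setup) is online learning, where the model takes a small gradient step each time a campaign's real placement result comes back:

```python
# Sketch of online learning: update a tiny linear model one campaign
# at a time as placement results arrive. Purely illustrative; not
# necessarily how the production model is trained.

class OnlineLinearModel:
    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        """One SGD step on squared error for a single observation."""
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = OnlineLinearModel(n_features=2)
for _ in range(200):                 # repeated passes over two toy examples
    model.update([1.0, 0.0], 0.9)    # clean sender profile -> 90% inbox
    model.update([0.0, 1.0], 0.6)    # spammy content profile -> 60% inbox
print(round(model.predict([1.0, 0.0]), 2))  # 0.9
```

The practical upshot for your 6 months of data: start predicting early and let the model keep learning from each send, rather than waiting for a "complete" dataset.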

11