Detailed postmortem of the outage on February 15, 2026.
On February 15, 2026, starting at 10:19 PM UTC, Resend experienced an incident that caused email sending delays and errors loading the dashboard.
The outage lasted 3 hours and 31 minutes, with full service recovery at 1:50 AM UTC.
No emails were lost during this incident. However, most of the emails were delayed, and the dashboard was inaccessible due to database connection exhaustion.
We're sorry to everyone who was affected by this incident. You trust Resend with your emails, and we take that seriously. This post is a transparent account of what happened, how we responded, and the steps we're taking to prevent this from happening again.
The database reached its maximum connection limit, with idle connections not being released fast enough under load. This caused connection exhaustion across the system, preventing the dashboard and non-email API operations from functioning normally.
We resolved the outage by lowering the Max Pool Size on our database clients and capping the number of connections the faulty application could open, which allowed idle connections to be released and the rest of the platform to reconnect.
Email sending continued to work throughout the incident, but delivery was delayed by approximately 2 hours on average, with all queued emails fully delivered by the end of the incident.
All times are in Coordinated Universal Time (UTC).
Yesterday, due to a missing configuration in one of our databases, one service went from using an average of 60 database connections to over 330. This spike was unusual because it was not correlated with actual traffic.
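For context on how a spike like this becomes visible, Postgres tracks every open connection in the `pg_stat_activity` view. The sketch below (using node-postgres; the environment variable name is an assumption) groups connections by the application that opened them and by their state, which is one way to see a single service holding hundreds of idle connections:

```ts
import { Client } from "pg";

// Break down open connections by the application_name each client reports
// and by state (active, idle, idle in transaction, ...).
async function connectionBreakdown(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  const { rows } = await client.query(`
    SELECT application_name, state, count(*) AS connections
    FROM pg_stat_activity
    WHERE datname = current_database()
    GROUP BY application_name, state
    ORDER BY connections DESC
  `);
  console.table(rows);
  await client.end();
}

connectionBreakdown().catch(console.error);
```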

Our applications use different combinations of deployment type (long-lived, serverless, cron-based, etc.), Postgres client library, and database connection configuration (connecting directly vs. through a pooler). This mixed setup made it hard to ensure healthy usage of database connections.
Since we built the email sending platform to be resilient to database unavailability, emails were delayed but not lost.
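To illustrate why database unavailability translates into delay rather than loss, here is a minimal sketch of the general enqueue-first pattern. This is not Resend's actual implementation; the `DurableQueue` interface and function names are invented for the example:

```ts
// A durable queue that persists messages outside of Postgres, so accepting
// an email does not depend on the database being reachable.
interface DurableQueue {
  enqueue(payload: unknown): Promise<void>;
  dequeue(): Promise<unknown | null>;
}

// Accept the send request by writing it to the queue only.
async function acceptEmail(queue: DurableQueue, email: { to: string; html: string }) {
  await queue.enqueue(email);
  return { status: "queued" };
}

// Workers drain the queue. If delivery or its database bookkeeping fails,
// the message goes back on the queue: delayed, but never dropped.
async function processNext(queue: DurableQueue, deliver: (email: unknown) => Promise<void>) {
  const email = await queue.dequeue();
  if (!email) return;
  try {
    await deliver(email);
  } catch {
    await queue.enqueue(email);
  }
}
```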
After identifying the connection starvation issue, we realized the Max Pool Size for our database clients was too high, allowing that single application to use more database connections than it needed.
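As an illustration of the client-side knobs involved (the numbers below are hypothetical, not our production values), a node-postgres pool lets you cap how many connections each instance may hold and release idle ones:

```ts
import { Pool } from "pg";

// Hypothetical values for illustration only. The right cap depends on how
// many instances of the service run at once and on the server's
// max_connections setting.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,                        // max connections this instance may hold
  idleTimeoutMillis: 10_000,      // release connections idle for 10s
  connectionTimeoutMillis: 5_000, // fail fast instead of waiting forever
});

// Queries check a connection out of the pool and return it when finished.
export async function healthCheck(): Promise<Date> {
  const { rows } = await pool.query("SELECT now() AS now");
  return rows[0].now;
}
```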

We lowered the Max Pool Size twice within a 20-minute window, allowing Resend's services to recover and open new connections to the database. After that, we capped the maximum number of connections the faulty application could hold using Postgres's connection limit configuration.
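The server-side cap mentioned above is a per-role setting in Postgres, enforced regardless of how any client pool is configured. A sketch of applying it (the role name and limit are placeholders) could look like:

```ts
import { Client } from "pg";

// CONNECTION LIMIT caps how many concurrent connections a role can hold,
// enforced by Postgres itself, independent of any client pool settings.
// Role name and limit are placeholders for this example.
async function capRoleConnections(): Promise<void> {
  const admin = new Client({ connectionString: process.env.ADMIN_DATABASE_URL });
  await admin.connect();
  await admin.query("ALTER ROLE faulty_service CONNECTION LIMIT 40");
  await admin.end();
}

capRoleConnections().catch(console.error);
```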
After approximately 40 minutes, the whole platform recovered and started reprocessing the delayed emails.
It's important to note that the internal escalation process took way longer than it should have, and we're actively working on improving it.
To prevent this from happening again, here's what we're changing:
- Setting explicit Max Pool Size values and per-application Postgres connection limits so that no single service can exhaust the database.
- Standardizing how our applications connect to Postgres across deployment types, rather than mixing direct connections and the pooler ad hoc.
- Improving our internal escalation process so incidents like this are escalated and acted on faster.
Thank you to everyone who reported issues and gave us feedback. We hear you, and we're acting on it. If you have questions or concerns, please reach out to us.