Case Study · April 2026 · 6 min read

We Sent 27,000 Personalised Cold Emails Without a Single SDR. Here's What Happened.

Earlier this year, we ran 27,000 personalised cold emails across a B2B campaign covering four industry verticals. No SDR wrote a single one. No human reviewed a contact list or chose follow-up timing. The entire cycle — prospect identification, verification, sequence generation, send scheduling, reply handling — ran autonomously.

What happened was not what most people expect when they hear “AI-generated emails.” It was not a blast of generic templates with a first name inserted. Each email was built from scratch around the recipient's company, role, and the signal data we had collected about their current priorities. Some referenced a recent funding round. Some addressed a specific challenge common to companies at their stage. Some opened with an observation about their competitive position.

The results changed how we think about what autonomous outreach is actually capable of.

What We Set Out to Test

The experiment was not designed to prove that AI outbound beats human outbound. It was designed to answer a narrower question: at what point does autonomous personalisation become indistinguishable from — or better than — what a skilled SDR produces?

Most AI outbound tools optimise for volume. Send more, convert a fixed percentage, move on. That model produces the reply rates you would expect from a spray-and-pray approach: sub-two percent, high unsubscribe rates, domain reputation damage. We were not interested in that. We were interested in what happens when the AI is given enough context about each prospect to write something a person would actually want to read.

The system we ran had access to company intelligence data, decision-maker profiles, industry signals, and a learned understanding of what messaging had worked and not worked in previous campaigns. Each email was generated fresh, with that context loaded.

What the Numbers Showed

Across 27,000 emails, the aggregate open rate came in above 30 percent. Reply rates averaged 4.2 percent — more than double the industry median for outbound campaigns of comparable volume. Positive replies, meaning responses indicating genuine interest rather than polite deflections, accounted for just under half of all replies.
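For readers who want the rates above as absolute counts, here is the back-of-the-envelope arithmetic. The per-email counts are derived from the published rates, not separately reported figures, and the "just under half" positive share is assumed at 48 percent for illustration:

```python
# Derived counts from the campaign's reported rates.
# Only the rates come from the write-up; the counts are arithmetic.

TOTAL_SENT = 27_000
OPEN_RATE = 0.30        # "above thirty percent" — lower bound used here
REPLY_RATE = 0.042      # 4.2% average reply rate
POSITIVE_SHARE = 0.48   # "just under half" of replies — assumed value

opens = TOTAL_SENT * OPEN_RATE
replies = TOTAL_SENT * REPLY_RATE
positive = replies * POSITIVE_SHARE

print(f"opens    >= {opens:,.0f}")    # >= 8,100
print(f"replies  ~  {replies:,.0f}")  # ~ 1,134
print(f"positive ~  {positive:,.0f}") # ~ 544
```

In other words, a campaign at this scale and these rates yields on the order of a thousand replies, roughly half of them genuinely interested.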

The follow-up sequence performance was where the results diverged most sharply from typical outbound benchmarks. Step three — the third email in the sequence — generated nearly as many positive responses as step one. That is unusual. In most campaigns, reply rates decay steeply after the first touch. The system had learned to vary the angle and tone of each follow-up rather than restating the same value proposition with different words. That variation kept engagement alive through the full sequence.

The most important number was not in the open or reply rates. It was in the cost per qualified meeting. Against a benchmark of what those meetings would have cost through a traditional SDR model — fully loaded, including ramp time and tools — the autonomous system produced qualified meetings at roughly one-fifth of the cost.
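To make the one-fifth claim concrete, here is a hypothetical cost-per-meeting comparison. The dollar figures below are illustrative assumptions, not numbers from the campaign; only the roughly one-fifth ratio comes from the write-up:

```python
# Hypothetical cost-per-meeting comparison. All inputs are assumed
# placeholder values chosen so the ratio matches the reported ~1/5.

SDR_MONTHLY_COST = 12_500       # assumed fully loaded: salary, ramp, tools
SDR_MEETINGS_PER_MONTH = 10     # assumed qualified meetings per SDR

AUTO_MONTHLY_COST = 2_500       # assumed autonomous system cost
AUTO_MEETINGS_PER_MONTH = 10    # same meeting volume, for comparison

sdr_cpm = SDR_MONTHLY_COST / SDR_MEETINGS_PER_MONTH
auto_cpm = AUTO_MONTHLY_COST / AUTO_MEETINGS_PER_MONTH

print(f"SDR cost per qualified meeting:        ${sdr_cpm:,.0f}")
print(f"Autonomous cost per qualified meeting: ${auto_cpm:,.0f}")
print(f"Cost ratio: {auto_cpm / sdr_cpm:.2f}")  # 0.20 -> one-fifth
```

Swap in your own fully loaded SDR cost and meeting volume to see what the ratio implies for your pipeline.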

What Surprised Us

The finding that most changed our thinking was not about volume or cost. It was about quality signal. Because every interaction with the system was logged — every open, every click, every reply, every non-response — the campaign produced a dataset that a human SDR team would never generate. We knew exactly which opening angles produced the best response rates by industry. We knew which CTAs converted by role. We knew which day-of-week follow-up timings produced replies and which produced unsubscribes.

A human SDR team running the same campaign would have produced meetings. They would not have produced that dataset. The learning loop — where every campaign makes the next one better — does not exist in a human-led model at any comparable speed or granularity.

The system that ran that campaign is meaningfully better now than it was when it started. The gap between what it produces today and what it produced at launch is entirely the result of what it learned from those 27,000 interactions.

What It Means for Your Pipeline

We are not publishing this to suggest that autonomous outreach is appropriate for every company or every market. There are contexts where relationships and human judgment are genuinely irreplaceable. But for the research-and-prospecting layer of outbound — the part that consumes most of a typical SDR's week — the evidence now points in one direction.

The question worth asking is not whether autonomous outreach can produce results. At the volumes and quality levels we ran, it demonstrably can. The question is whether the companies in your market are already running it against you — and what that means for the speed at which your pipeline fills.

Veneris runs the full autonomous outbound cycle for B2B companies. If you want to understand what a campaign like this would look like for your market, book a conversation with us.