Experiment · May 27, 2026 · 10 min read

    The 100-Email Experiment - Three Approaches, One Winner

    A hypothetical controlled experiment comparing three cold email strategies across 300 total sends. Full methodology, raw data, and statistical analysis - presented as a structured research report.

Tags: email experiment, A/B testing, cold email test, outreach comparison, email methodology, response rates, email personalization, observation-based outreach, split testing, email results
300 Emails Sent · 3 Test Groups · 1 Clear Winner
    Section 1

    Abstract and Methodology

    Abstract

    This hypothetical experiment examines whether observation-based cold emails - those referencing specific, publicly visible details about a business - outperform generic templates and name-only personalization. Three groups of 100 local service businesses each received one of three email variants. The primary metrics measured were open rate, reply rate, and meeting conversion rate over a 14-day observation window. The observation-based approach (Group C) produced substantially higher engagement across all metrics.

    Disclosure: All data in this report is hypothetical. This experiment was not conducted with real businesses. It is presented as a structured thought experiment to illustrate the likely impact of different outreach strategies based on observable patterns in cold email performance.

    Hypothesis

    H1: Cold emails that reference specific, publicly observable details about a recipient's business will produce higher open rates, reply rates, and meeting conversion rates than emails using only the recipient's name or no personalization at all.

    The reasoning: business owners receive dozens of generic pitches. An email that demonstrates the sender has actually looked at their business signals relevance and reduces the perception of mass outreach. For more on why this matters, see our guide on how to personalize cold outreach at scale.

    Methodology

Total Sample Size: 300 local service businesses (hypothetical)
Group Size: 100 per group (randomized assignment)
Industry: Home services (plumbing, HVAC, electrical, roofing)
Geography: Mid-size US cities, 50K-200K population
Service Offered: Website design and local SEO
Observation Window: 14 days from initial send
Send Time: Tuesday 9:15 AM local time (all groups)
Follow-ups: None (single-send test only)
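The randomized assignment described in the methodology can be sketched in a few lines. This is an illustrative Python sketch, not part of any real campaign: the 300 lead IDs are placeholders, and the fixed seed is an assumption added so the split is reproducible.

```python
import random

def assign_groups(businesses, groups=("A", "B", "C"), seed=42):
    """Shuffle the lead list, then deal it round-robin into equal groups."""
    pool = list(businesses)
    random.Random(seed).shuffle(pool)  # fixed seed -> reproducible assignment
    return {g: pool[i::len(groups)] for i, g in enumerate(groups)}

# 300 hypothetical lead IDs split into three groups of 100
assignment = assign_groups(range(300))
print({g: len(v) for g, v in assignment.items()})
```

Round-robin dealing after a shuffle guarantees equal group sizes even when the total is not known in advance, which a naive "random group per lead" assignment does not.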

    Experimental Groups

Group A: Generic Template

    No personalization. Same email body for all 100 recipients. Only the company name is inserted via mail merge.

    Subject: Quick question about your business


    Hi there,

    I help local businesses get more customers through better websites and local SEO. I would love to show you how we could help [Company Name] grow online.

    Would you be open to a quick 10-minute call this week?

    Best regards

Group B: Name-Personalized

    Uses the owner's first name and company name. Same pitch and structure as Group A, but addresses the recipient directly.

    Subject: Quick question for [First Name]


    Hi [First Name],

    I help local businesses like [Company Name] get more customers through better websites and local SEO.

    Would you be open to a quick 10-minute call this week to see if we could help?

    Best regards

Group C: Observation-Based

    References a specific, publicly observable detail about the business - such as their review count, missing website, or listing gap. Each email is unique to the recipient. This approach aligns with outreach based on public facts.

    Subject: [First Name] - noticed something about [Company Name]


    Hi [First Name],

    I was looking at [Company Name]'s Google listing and noticed you have [X] reviews with a [Y]-star average, but no website linked. That means people searching for [service type] in [City] can find your reviews but have nowhere to go next.

    I build websites for businesses exactly like yours. Would it be worth a 10-minute call to see if it makes sense?

    Best regards
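A merge-field template like Group C's can be filled from a structured lead record. The sketch below is hypothetical: the field names (`review_count`, `rating`, and so on) and the sample lead are invented for illustration, not taken from any real data source.

```python
# Hypothetical lead record; field names and values are illustrative only.
lead = {
    "first_name": "Maria",
    "company": "Rapid Rooter Plumbing",
    "review_count": 47,
    "rating": 4.8,
    "service_type": "plumbers",
    "city": "Boise",
}

SUBJECT = "{first_name} - noticed something about {company}"
BODY = (
    "Hi {first_name},\n\n"
    "I was looking at {company}'s Google listing and noticed you have "
    "{review_count} reviews with a {rating}-star average, but no website "
    "linked. That means people searching for {service_type} in {city} can "
    "find your reviews but have nowhere to go next.\n\n"
    "I build websites for businesses exactly like yours. Would it be worth "
    "a 10-minute call to see if it makes sense?\n\n"
    "Best regards"
)

print(SUBJECT.format(**lead))
print(BODY.format(**lead))
```

The point of the structure: the writing is templated, but the *signals* (review count, rating, missing website) vary per lead, which is what makes each email read as researched rather than merged.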

    Section 2

    Results

    Reminder: All figures below are hypothetical. They are designed to reflect realistic proportional differences observed in published cold email benchmarks, not actual experimental data.

    Open Rate

Group A: 22% (22/100)
Group B: 34% (34/100)
Group C: 61% (61/100)

    Group C open rate: 2.8x Group A

    Reply Rate

Group A: 3% (3/100)
Group B: 7% (7/100)
Group C: 19% (19/100)

    Group C reply rate: 6.3x Group A

    Meeting Booked Rate

Group A: 1% (1/100)
Group B: 3% (3/100)
Group C: 9% (9/100)

    Group C meeting rate: 9x Group A

    Complete Results Table

Metric: Group A (Generic) / Group B (Name) / Group C (Observation)
Emails Sent: 100 / 100 / 100
Delivered: 94 / 93 / 95
Bounced: 6 / 7 / 5
Opened: 22 (23.4%) / 34 (36.6%) / 61 (64.2%)
Replied: 3 (3.2%) / 7 (7.5%) / 19 (20.0%)
Positive Replies: 1 / 4 / 14
Negative/Unsubscribe: 2 / 3 / 5
Meetings Booked: 1 (1.1%) / 3 (3.2%) / 9 (9.5%)
Reply-to-Open Conversion: 13.6% / 20.6% / 31.1%

Note: percentages in this table are calculated on delivered emails, which is why they differ slightly from the per-100-sent figures above.
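The percentages in the table can be reproduced from the raw counts. A minimal Python sketch, assuming rates are computed on delivered emails:

```python
# Raw counts from the (hypothetical) results table above.
groups = {
    "A": {"sent": 100, "delivered": 94, "opened": 22, "replied": 3, "meetings": 1},
    "B": {"sent": 100, "delivered": 93, "opened": 34, "replied": 7, "meetings": 3},
    "C": {"sent": 100, "delivered": 95, "opened": 61, "replied": 19, "meetings": 9},
}

for name, g in groups.items():
    open_rate = g["opened"] / g["delivered"]      # rates use delivered, not sent
    reply_rate = g["replied"] / g["delivered"]
    meeting_rate = g["meetings"] / g["delivered"]
    reply_to_open = g["replied"] / g["opened"]    # conversion among openers
    print(f"{name}: open {open_rate:.1%}, reply {reply_rate:.1%}, "
          f"meeting {meeting_rate:.1%}, reply/open {reply_to_open:.1%}")
```

Running this reproduces the table's 23.4% / 36.6% / 64.2% open rates and the 13.6% / 20.6% / 31.1% reply-to-open conversions.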

    Statistical Comparison (Hypothetical)

    Open Rate Lift (C vs A):

    +40.8 percentage points
    95% CI: [28.3, 53.3]

    Reply Rate Lift (C vs A):

    +16.8 percentage points
    95% CI: [8.5, 25.1]

    Meeting Rate Lift (C vs A):

    +8.4 percentage points
    95% CI: [2.7, 14.1]

    Reply Rate Lift (C vs B):

    +12.5 percentage points
    95% CI: [3.9, 21.1]

    Note: Confidence intervals are illustrative and assume proportional z-test calculations. With n=100 per group, these intervals are wide. A real experiment would benefit from a larger sample size.
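Intervals of this kind come from the unpooled two-proportion normal approximation. A minimal sketch using the delivered counts from the results table; exact endpoints will differ slightly from the quoted illustrative figures depending on rounding and pooling choices:

```python
import math

def diff_ci(success_a, n_a, success_b, n_b, z=1.96):
    """95% CI for the difference of two proportions
    (unpooled normal approximation)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    d = p_a - p_b
    return d - z * se, d + z * se

# Open-rate lift, Group C vs Group A, on delivered counts (95 and 94)
lo, hi = diff_ci(61, 95, 22, 94)
print(f"lift +{(61/95 - 22/94)*100:.1f} pp, 95% CI [{lo*100:.1f}, {hi*100:.1f}]")
```

With n near 100 per group the standard error is large, which is exactly why the intervals above span tens of percentage points.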

    Section 3

    Discussion and Limitations

    Why the Observation-Based Approach Won

    • Subject line specificity. Mentioning the owner's name alongside their company signals that the email was not mass-sent. It triggers curiosity rather than suspicion.
    • Demonstrated research. Citing a specific review count or a missing website proves the sender looked at the business. This is the difference between a cold pitch and a warm observation. It is also why spotting underperforming businesses through public signals matters so much.
    • Problem-first framing. Group C identified a gap before offering a solution. Groups A and B led with the service. Leading with the prospect's problem earns attention.
    • Implied expertise. Knowing their review count and listing status signals industry understanding, not just sales ability.

    What the Data Means in Practice

    If these numbers held in a real campaign, the observation-based approach would produce 9 meetings from 100 emails compared to 1 meeting from the generic template. That is the difference between a sustainable outreach operation and a failing one.

    Time investment comparison:
    Group A: ~2 min/email x 100 = 200 min for 1 meeting
    Group C: ~8 min/email x 100 = 800 min for 9 meetings

    Cost per meeting:
    Group A: 200 min / 1 = 200 min per meeting
    Group C: 800 min / 9 = 89 min per meeting

    Despite taking 4x longer per email, the observation-based approach is more than 2x more efficient on a per-meeting basis. For practical guidance on doing this at scale, see how to personalize cold outreach at scale.
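The per-meeting arithmetic above can be sketched directly; the per-email time estimates are the hypothetical figures from the text, not measured values.

```python
# Hypothetical time economics from the comparison above.
time_per_email = {"A": 2, "C": 8}  # minutes per email
meetings = {"A": 1, "C": 9}        # meetings booked per 100 emails
emails = 100

for g in ("A", "C"):
    total = time_per_email[g] * emails
    per_meeting = total / meetings[g]
    print(f"Group {g}: {total} min total, {per_meeting:.0f} min per meeting")
```

The ratio 200/89 ≈ 2.25 is where the "more than 2x more efficient" claim comes from.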

    Why Group B Underperformed Expectations

    Name personalization is widely recommended in cold email guides, but the lift over Group A was modest. The likely explanation: business owners are now accustomed to seeing their name in automated emails. A name alone no longer signals genuine research - it signals a mail merge field.

    The improvement from Group A to Group B was real but small. The improvement from Group B to Group C was dramatic. The lesson: the bar for perceived personalization has risen. Names are table stakes. Observations are the differentiator.

    Limitations

    • Hypothetical data. No actual emails were sent. All figures are modeled to illustrate likely proportional differences, not to claim exact numbers.
    • Single industry. Home services may respond differently than professional services, healthcare, or retail. Results should not be generalized across all verticals.
    • Small sample size. With n=100 per group, confidence intervals are wide. A real experiment would ideally use 500+ per group for statistical significance.
    • No follow-up sequence. Real outreach includes follow-ups. The single-send design isolates the first-touch effect but does not capture the full campaign lifecycle. See our post on how to structure a cold email sequence for that context.
    • US-only geography. Cultural norms around cold outreach vary significantly by region. These patterns may not hold internationally.
    • Sender reputation not controlled. In a real test, domain age, warmup status, and sender reputation would affect deliverability independently of email content.
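On the sample-size limitation: the standard normal-approximation formula for comparing two proportions shows why n=100 can resolve the large A-vs-C gap but not the small A-vs-B one. A rough sketch, assuming a two-sided test at α = 0.05 with 80% power:

```python
import math

def n_per_group(p1, p2, alpha_z=1.96, power_z=0.84):
    """Per-group sample size to detect p1 vs p2, using the standard
    normal-approximation formula for two independent proportions."""
    p_bar = (p1 + p2) / 2
    num = (alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
           + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.03, 0.19))  # A vs C reply rates: a large, easy-to-detect gap
print(n_per_group(0.03, 0.07))  # A vs B reply rates: a small gap needing far more data
```

Detecting 3% vs 19% needs only about 60 per group, but separating 3% from 7% needs several hundred, which is consistent with the 500+ per group suggested above.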
    FAQ

    Frequently Asked Questions

    Is this experiment based on real data?

    No. This is a hypothetical experiment. The numbers are modeled to illustrate the proportional differences you would likely see when comparing generic, name-personalized, and observation-based cold emails. The methodology is sound, but the data points are constructed for educational purposes.

    How long does it take to write an observation-based email?

    Typically 5 to 10 minutes per email if you are working from a lead list that already includes business signals like review count, website status, and listing completeness. The research time is where the value comes from - you are trading speed for relevance, and the data suggests that trade is worth making. Having structured data-driven lead information reduces this time substantially.

    What publicly observable signals should I reference in my emails?

    The most effective signals to reference are: whether the business has a website, their Google review count and rating, whether their listing is claimed and complete, and whether they have active social media profiles. Anything a customer could see when searching for that business is fair game.

    Would results differ in a different industry?

    Almost certainly. Home service businesses tend to be less digitally sophisticated, which makes observation-based outreach especially effective. In industries where businesses already have strong digital presences, the observable gaps would be different - but the principle of leading with a specific observation still holds.

    Can I combine Group B and Group C approaches?

    Group C already includes name personalization - it just adds observable details on top. The question is really whether you can do observation-based outreach at scale. The answer is yes, if your lead data includes the right signals. The bottleneck is data quality, not writing ability.

    Conclusion

    Key Takeaways

1. Observation Beats Personalization

    Using someone's first name is no longer enough. Referencing a specific observable fact about their business is what separates your email from the other 20 they received that day.

2. Slower Per-Email, Faster Per-Meeting

    Observation-based emails take 4x longer to write, but produce meetings at more than double the efficiency. The math favors quality over volume.

3. Lead With the Gap

    The most effective emails start with the prospect's problem, not your solution. Identify a visible gap - no website, few reviews, incomplete listing - and make that your opening.

4. Data Quality Is the Lever

    You can only write observation-based emails if your lead data includes observable signals. Without review counts, website status, and listing data, you are stuck in Group A territory.

5. Generic Templates Are Costly

    A 1% meeting rate means you need 100 emails for one conversation. At that rate, cold outreach feels broken. But the template is the problem, not the channel.

6. Test Your Own Version

    This experiment is hypothetical. The only way to know what works for your market is to run your own version. Start with 50 emails per group and compare the results after 14 days.
