When to kill an outreach experiment

Persistence is overrated when the hypothesis is weak. These are the signals that tell me to stop, reset, or change one variable at a time.

Not every dry spell means “send more.” Sometimes it means the test is broken — and grit won’t fix a bad hypothesis.

Signals I stop or reset

  • Same negative pattern three batches in a row. Ignores, polite brush-offs, or hostile replies — if nothing shifts when I change one variable, the ICP or offer cut is wrong.
  • I can’t explain the audience in one sentence. If the list is “anyone who might buy someday,” I’m not experimenting — I’m hoping.
  • I dread opening the tracker. That’s data too: the system is unsustainable, which means it will rot.
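The first signal above is really a stop rule, and it can be sketched as a tiny check. This is a minimal illustration, not anything from my actual tracker — the function name, outcome labels, and the threshold of three are assumptions for the sake of the example.

```python
# Hypothetical sketch of the "same negative pattern three batches in a row"
# kill rule. Outcome labels and the streak threshold are illustrative.

def should_kill(batch_outcomes, streak_needed=3):
    """Return True if the most recent `streak_needed` batches were all negative.

    batch_outcomes: outcomes oldest-first, e.g. "negative" or "positive".
    """
    if len(batch_outcomes) < streak_needed:
        return False
    return all(o == "negative" for o in batch_outcomes[-streak_needed:])

# Three ignored batches in a row, one variable changed each time:
print(should_kill(["positive", "negative", "negative", "negative"]))  # True
# A positive signal resets the streak:
print(should_kill(["negative", "negative", "positive"]))  # False
```

The point of encoding it at all is that the rule fires on a count, not on mood — which is exactly what keeps "one more batch" from going on forever.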

What “kill” actually means

Not rage-quitting — archiving the angle, keeping notes, and starting a new batch with a written hypothesis. I want failure to be legible next month.
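"Legible failure" just means the archived angle carries its hypothesis and its cause of death. A record like the one below is one way to sketch that — the field names and example values are my own illustration, not a prescribed schema.

```python
# Hypothetical sketch of an archived-experiment record, so a killed angle
# stays legible next month. All field names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str               # written down before the first batch
    variable_changed: str         # the one variable this run tested
    outcome: str                  # e.g. "ignored", "brush-off", "replies"
    status: str = "active"        # "active", "paused", or "archived"
    notes: list = field(default_factory=list)

    def kill(self, note: str) -> None:
        """Archive the angle and record why, instead of deleting it."""
        self.status = "archived"
        self.notes.append(note)

exp = Experiment(
    hypothesis="Seed-stage CTOs reply to a migration-cost opener",
    variable_changed="subject line",
    outcome="ignored",
)
exp.kill("Three batches ignored with one variable changed each time.")
print(exp.status)  # archived
```

Keeping the note on the record, rather than in my head, is what makes the next batch start from evidence instead of memory.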

What I don’t do

Double the volume to “prove” the approach works. That’s how you compound bad targeting with platform risk.

Kill vs. pause

Seasonal industries, hiring freezes, August in Europe — sometimes pause is rational. I mark the date and revisit with a fresh list — not the same tired message on a timer.

Experiments need endpoints. Otherwise “we’re still testing” becomes a permanent excuse for not learning.