Numerical Analysis · Final Project
The exact monthly user-churn rate above which no amount of growth saves the business.
What this is, in plain English
Every venture capitalist looks at a startup and tries to answer one question: will this company survive? You can answer that with intuition, or you can answer it with math. I built the math.
For a typical software-as-a-service startup — reasonable growth, reasonable conversion, reasonable costs — there is a single number: the monthly rate at which paying users cancel. Below it, the company survives a 10-year horizon. Above it, the runway runs out before the trajectory ever recovers. For our default profile that number is about 14% per month, with calibration uncertainty putting the 95% interval at [8%, 16%].
Below: the 4-dimensional ODE I built. Pick a startup profile to watch the trajectory unfold. Then drag the churn rate to find the survival threshold for yourself.
The model
A 4D ODE on Users, Active subscribers, Revenue, and Cash.
Logistic acquisition, conversion, churn, billing-cycle lag, fixed and variable costs. Solved with five from-scratch ODE methods (Euler, Heun, RK4, Adams-Bashforth-4, Adams-Moulton predictor-corrector). Pick a profile to see the trajectory.
Each archetype is an illustrative parameter set, not a calibration to a specific company. See About the data for the methodology disclosure.
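One integration step can be sketched as follows: a from-scratch RK4 integrator over a hypothetical 4D right-hand side with the ingredients named above (logistic acquisition, conversion and churn, a first-order billing-cycle lag, fixed and variable costs). Every functional form, variable name, and parameter value here is an illustrative assumption, not the project's calibrated model.

```python
import numpy as np

# Hypothetical right-hand side for the state y = (U, A, R, C):
# users, active subscribers, revenue, cash. All parameters illustrative.
def rhs(t, y, g=0.10, K=1e6, alpha=0.02, mu=0.03,
        mu_R=0.5, p=30.0, F=50_000.0, c=2.0):
    U, A, R, C = y
    dU = g * U * (1 - U / K)   # logistic user acquisition
    dA = alpha * U - mu * A    # conversion in, churn out
    dR = mu_R * (p * A - R)    # revenue relaxes toward ARPU * subscribers
    dC = R - F - c * U         # cash: revenue minus fixed and variable cost
    return np.array([dU, dA, dR, dC])

def rk4_step(f, t, y, h):
    # classic fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    y = np.array(y0, dtype=float)
    for i in range(n):
        y = rk4_step(f, t0 + i * h, y, h)
    return y

# terminal state after a 120-month horizon, 0.1-month steps
terminal = integrate(rhs, [1_000.0, 0.0, 0.0, 500_000.0], 0.0, 120.0, 1200)
```

The other four solvers (Euler, Heun, AB4, the Adams-Moulton corrector) slot into the same `integrate` shape with a different step function.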
The math, briefly
Calibrated against noisy quarterly revenue with from-scratch gradient descent and Adam. The threshold μ* is the root of terminal cash, found three ways: Newton in 3 iterations, secant in 4, bisection in 14, exactly the textbook ordering. The confidence interval comes from a Monte Carlo posterior over the calibrated parameters, computed with Kahan-summed estimators and antithetic variates.
Every method here is implemented from scratch — Burden et al., chapter by chapter. No SciPy. 114 unit tests. Validation-first: no engine module enters a notebook until its tests pass.
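A minimal sketch of the three root-finders side by side. The real objective integrates the 4D ODE, so a smooth surrogate with a root near μ = 0.142 stands in for Cash(T; μ) here; Newton uses a central-difference derivative, since the objective has no closed form.

```python
import math

# Stand-in for terminal cash as a function of monthly churn mu:
# a smooth decreasing curve with its root near mu = 0.142.
def terminal_cash(mu):
    return math.tanh(8 * (0.142 - mu))

def bisection(f, a, b, tol=1e-6):
    it = 0
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # root stays in the left half
            b = m
        else:
            a = m
        it += 1
    return (a + b) / 2, it

def secant(f, x0, x1, tol=1e-10, max_it=50):
    for it in range(1, max_it + 1):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol:
            return x2, it
        x0, x1 = x1, x2
    return x1, max_it

def newton(f, x0, tol=1e-10, h=1e-6, max_it=50):
    for it in range(1, max_it + 1):
        d = (f(x0 + h) - f(x0 - h)) / (2 * h)   # numerical derivative
        x1 = x0 - f(x0) / d
        if abs(x1 - x0) < tol:
            return x1, it
        x0 = x1
    return x0, max_it

root_b, n_b = bisection(terminal_cash, 0.0, 0.5)
root_s, n_s = secant(terminal_cash, 0.10, 0.20)
root_n, n_n = newton(terminal_cash, 0.10)
```

On this surrogate, bisection needs the most iterations while Newton and secant converge in a handful, mirroring the ordering reported above.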
Real-data anchor
The same engine, fit to Shopify's pre-IPO revenue.
The archetype profiles upstream are illustrative parameter sets. To close the loop, we ran the engine's Adam optimizer against 9 quarters of Shopify Inc.'s S-1 quarterly revenue (2012-Q4 through 2014-Q4, public SEC EDGAR data) — fitting the growth rate, holding business-domain parameters at public-data-informed anchors. Real numbers, real methodology.
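A from-scratch scalar Adam loop of the kind described, shown on a synthetic exponential-revenue series rather than the Shopify data; the forward model, the series, and the learning-rate choices are all stand-ins for illustration.

```python
import math

# Toy forward model: exponential quarterly revenue with growth rate g.
def model(g, quarters):
    return [math.exp(g * q) for q in quarters]

quarters = list(range(9))        # nine quarters, as in the S-1 fit
g_true = 0.25                    # synthetic "true" growth rate
data = model(g_true, quarters)

def loss_grad(g, h=1e-6):
    # mean squared error and its central-difference gradient
    def loss(x):
        pred = model(x, quarters)
        return sum((p - d) ** 2 for p, d in zip(pred, data)) / len(data)
    return loss(g), (loss(g + h) - loss(g - h)) / (2 * h)

# Adam with bias correction, fitting the single parameter g.
g, m, v = 0.05, 0.0, 0.0
b1, b2, lr, eps = 0.9, 0.999, 0.01, 1e-8
for t in range(1, 3001):
    _, grad = loss_grad(g)
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    g -= lr * m_hat / (math.sqrt(v_hat) + eps)
```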
The result I'm proudest of
A direction in parameter space the data can't distinguish — and a proof the answer is robust to it.
The fit is a curved valley, not a single best-fit point. Over a finite observation window, the growth rate g and the billing-cycle lag μ_R partially compensate for each other: a faster billing cycle mimics a higher growth rate, and the data can't tell them apart.
Does that ambiguity matter for the answer? I computed the smallest-eigenvalue eigenvector of the calibration loss Hessian — the direction the data is least informative about — and walked the threshold along it. μ* varies by 2.4%. Even though the calibration is ambiguous, the answer is robust to the ambiguity. That is a structural-identifiability finding.
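The walk can be sketched with a toy two-parameter loss whose flat valley is known by construction: build the Hessian by central differences, take the smallest-eigenvalue eigenvector as the least-informative direction, and evaluate a hypothetical linear stand-in for μ*(g, μ_R) along it (the real version re-runs bisection on terminal cash at every step).

```python
import numpy as np

# Toy calibration loss over theta = (g, mu_R): stiff across the valley,
# nearly flat along it, mimicking the (g, mu_R) compensation.
def loss(theta):
    g, mu_R = theta
    return 100.0 * (g - mu_R) ** 2 + 0.01 * (g + mu_R) ** 2

def hessian(f, x, h=1e-4):
    # full Hessian by central second differences
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

def mu_star(theta):
    # Hypothetical linear surrogate for the threshold; the real version
    # re-solves terminal cash by bisection at each theta.
    g, mu_R = theta
    return 0.142 + 0.01 * (g - 0.10) - 0.008 * (mu_R - 0.50)

theta_map = np.array([0.10, 0.50])
eigvals, eigvecs = np.linalg.eigh(hessian(loss, theta_map))
v_flat = eigvecs[:, 0]               # smallest-eigenvalue direction

steps = np.linspace(-0.05, 0.05, 11)
walk = [mu_star(theta_map + s * v_flat) for s in steps]
spread = max(walk) - min(walk)       # threshold variation along the valley
```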
Drive the threshold
Drag the churn rate. Watch the company recover or run out.
For each value of monthly churn, the cash balance at the end of the 10-year horizon is precomputed from the 4D ODE. The dashed line is the runway-survival threshold μ*: above it, every parameter setting we tested ends with the company cash-negative.
What a VC would learn from this model
Three takeaways the math earns.
- 01
Retention dominates growth in survival math.
On the sensitivity tornado, conversion rate α moves the boundary ~6× more than growth rate g; billing-cycle lag μ_R moves it ~2× more than g. The VC instinct that retention matters more than acquisition gets a quantitative version here, and the multiple is sharper than most people guess.
- 02
Calibration ambiguity barely shifts the answer.
When (g, μ_R) can't be separately identified from data — a curved valley in parameter space — the boundary μ* still varies by only ~2.4% along the worst direction. Caveat: this is the conditional CI, holding α at its calibrated MAP. The marginal CI under joint (α, g, μ_R) uncertainty is wider; full joint posterior is deferred work.
- 03
The boundary exists, and a bad quarter walks you toward it.
Default profiles sit around μ ≈ 3% monthly churn — well below the 14.2% non-viability boundary. But every company within ~5pp of the boundary is one cohort retention shock away from non-recovery. The model's job isn't to say 'companies are safe' — it's to draw the line and quantify the daylight.
About the data
The five archetype profiles above (High-ARPU SaaS, Horizontal SaaS, Two-sided marketplace, Enterprise-cycle, Consumer subscription) are illustrative parameter sets chosen to span the realistic shape space of subscription businesses. They are not calibrated to any specific company's financials. The 14.2% headline is the bisection root of Cash(T=120) for the default-SaaS archetype, computed by the from-scratch numerical pipeline.
The structural findings — that μ* exists, that it is robust to calibration ambiguity (~2.4%), that the sensitivity ordering is α > μ_R > g — survive any recalibration to a real subscription business with comparable cost structure. The methodology is the artifact, not the specific number.
How confident
Sample the calibrated parameters, refit μ* on each draw, look at the spread.
The 95% credible interval on μ* comes from this posterior: draw (g, μ_R) from their calibration distribution, run a fresh bisection to find μ*, repeat. The mean lands within 0.6% of the point estimate.
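A sketch of the estimator, with a hypothetical linear surrogate in place of the per-draw bisection; the antithetic pairing and the Kahan-compensated mean are the two pieces named earlier, and all distributions and coefficients here are illustrative assumptions.

```python
import random

def kahan_mean(xs):
    # Compensated summation: c recovers the low-order bits lost when
    # adding small terms to a large running sum.
    s = c = 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y
        s = t
    return s / len(xs)

def mu_star(g, mu_R):
    # Surrogate threshold map; the real pipeline re-runs bisection on
    # terminal cash for every posterior draw.
    return 0.142 + 0.06 * (g - 0.10) - 0.04 * (mu_R - 0.50)

random.seed(0)
g0, m0, sg, sm = 0.10, 0.50, 0.01, 0.05   # illustrative posterior
draws = []
for _ in range(2000):
    zg, zm = random.gauss(0, 1), random.gauss(0, 1)
    # antithetic pair (z, -z): halves variance for near-linear maps
    draws.append(mu_star(g0 + sg * zg, m0 + sm * zm))
    draws.append(mu_star(g0 - sg * zg, m0 - sm * zm))

draws.sort()
mean = kahan_mean(draws)
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
```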
Which parameter moves the threshold
∂μ*/∂(parameter), computed by central differences on the ODE output.
Conversion rate α dominates, with the billing-cycle lag μ_R second and growth rate g third. Fixed cost F and ARPU p barely move the threshold at all — they shift the magnitude of cash but not the existence of recovery.
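In sketch form, with a hypothetical linear surrogate for μ*(parameters) whose coefficients are chosen to mirror the ordering above; the real version re-solves the ODE and the bisection at each perturbed parameter set.

```python
def mu_star_of(params):
    # Hypothetical surrogate for the threshold: alpha dominant, mu_R
    # second, g third, fixed cost F nearly irrelevant. Coefficients are
    # illustrative, not the project's calibrated values.
    return (0.142
            + 1.20 * (params["alpha"] - 0.02)
            - 0.40 * (params["mu_R"] - 0.50)
            + 0.20 * (params["g"] - 0.10)
            + 1e-7 * (params["F"] - 50_000.0))

base = {"alpha": 0.02, "mu_R": 0.50, "g": 0.10, "F": 50_000.0}

def sensitivity(name, h=1e-4):
    # central difference d(mu*)/d(theta_name) at the base point
    up, dn = dict(base), dict(base)
    up[name] += h
    dn[name] -= h
    return (mu_star_of(up) - mu_star_of(dn)) / (2 * h)

sens = {name: sensitivity(name) for name in base}
```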
Three things this taught me
- 01
A wrong model produces precise nonsense.
I caught a structural defect in the revenue equation in week two and fixed it before any calibration ran on top. The original would have produced a precise answer to the wrong question.
- 02
Identifiability matters more than accuracy.
Whether the data can recover a parameter at all is a deeper question than how closely the model fits. The Hessian eigenvector along the calibration valley is the answer, not an eyeballed direction.
- 03
Validation-first separates engineering from coursework.
114 tests gate every notebook. No engine module enters a notebook until its tests pass. The discipline is the deliverable.