We Built a 30-Day Revenue Forecast That's 94% Accurate — Here's the Model
We were flying blind on monthly revenue until we pulled payout data through the API and built a rolling forecast model. Now we predict 30-day earnings within 6% — here's exactly how we did it.
For the first year running our agency, we could not answer a basic question: how much are we making next month?
Not even a ballpark. Every time a creator asked when they’d get paid, or we tried to plan a marketing push, or our accountant wanted a projection — we were guessing. We knew what had come in last month. We had no model for what was coming.
The number that finally broke us: we missed a $23,000 payout window because we’d already committed that cash to chatter salaries and a content production run we thought we could afford. Revenue came in lower than expected. We scrambled.
That was the last time we operated on gut feel. We pulled six months of payout data through the OnlyFans API, built a rolling forecast model, and now predict 30-day OnlyFans revenue to within 6%. Here’s the full model.
Why Agency Revenue Is Actually Forecastable
Most agency operators think OnlyFans revenue is too variable to predict. Subscriber counts fluctuate. A viral post changes everything. A creator goes quiet for a week and numbers tank.
That’s true at the day level. At the 30-day level, the signal is much stronger than the noise.
When we pulled six months of /payouts/statistics data, three patterns emerged immediately:
- Subscription revenue is remarkably stable — Recurring subs churn slowly. A creator with 800 subscribers at the start of a month tends to have 760-840 at the end.
- PPV and tips spike predictably — Mass PPV campaigns, new content drops, and promotional posts follow a weekly rhythm. The revenue from these isn’t random; it clusters around specific creator behaviors.
- Pending balance trends are leading indicators — The /payouts/balances endpoint shows what’s accrued but not yet paid out. A rising pending balance in week 2 almost always means a strong payout by day 30.
Once we saw these patterns, forecasting became a data engineering problem, not a guessing game.
The Data Pull
We started by pulling everything from two endpoints:
- /payouts/statistics — aggregated earnings by date range, broken into subscription revenue, PPV, tips, and referrals
- /payouts/balances — current pending balance and projected next payout
Here’s the initial data collection script:
```python
import requests
from datetime import datetime, timedelta

API_BASE = "http://157.180.79.226:4024/api/v1"
HEADERS = {"X-API-Key": "YOUR_API_KEY"}

def pull_payout_history(creator_id: str, days_back: int = 180):
    """Pull 6 months of daily payout statistics."""
    end_date = datetime.today()
    start_date = end_date - timedelta(days=days_back)

    # Pull statistics in 30-day chunks to avoid timeouts
    chunks = []
    cursor = start_date
    while cursor < end_date:
        chunk_end = min(cursor + timedelta(days=30), end_date)
        resp = requests.get(
            f"{API_BASE}/payouts/statistics",
            headers=HEADERS,
            params={
                "creator_id": creator_id,
                "start_date": cursor.strftime("%Y-%m-%d"),
                "end_date": chunk_end.strftime("%Y-%m-%d"),
                "granularity": "daily",
            },
        )
        chunks.extend(resp.json()["data"])
        cursor = chunk_end

    # Pull the current pending balance
    balance_resp = requests.get(
        f"{API_BASE}/payouts/balances",
        headers=HEADERS,
        params={"creator_id": creator_id},
    )
    pending = balance_resp.json()["pending_balance"]

    return {"history": chunks, "pending_balance": pending}
```
After collecting this for our top 12 creators over 180 days, we had roughly 2,160 data points to train against.
The Forecast Model
We tried three approaches before landing on what works:
Approach 1: Simple moving average. Take the last 30 days, use that as the next 30-day estimate. Accuracy: 78%. Not bad, but it completely misses momentum — a creator who just launched a PPV campaign looks the same as one who’s declining.
Approach 2: Linear regression on trailing 60 days. Better for trending accounts, worse for cyclical ones. Accuracy: 81%.
Approach 3: Weighted rolling average with pending balance adjustment. This is what we use now. Weight recent weeks more heavily, then adjust upward or downward based on where the pending balance sits relative to historical midpoints. Accuracy: 94%.
```python
import numpy as np
from collections import defaultdict
from datetime import datetime

def build_30day_forecast(history: list, pending_balance: float) -> dict:
    """
    Weighted rolling average forecast with pending balance adjustment.
    Returns predicted revenue and a confidence band.
    """
    # Convert daily records to weekly buckets
    weekly = defaultdict(float)
    for record in history:
        week_num = datetime.strptime(record["date"], "%Y-%m-%d").isocalendar()[1]
        weekly[week_num] += record["gross_revenue"]

    weekly_values = list(weekly.values())
    if len(weekly_values) < 8:
        raise ValueError("Need at least 8 weeks of history for a reliable forecast")

    # Weighted average: recent weeks count more.
    # Weights: last 4 weeks = 2x, weeks 5-8 = 1x, older = 0.5x
    weights = []
    values = []
    for i, val in enumerate(reversed(weekly_values)):
        if i < 4:
            weights.append(2.0)
        elif i < 8:
            weights.append(1.0)
        else:
            weights.append(0.5)
        values.append(val)

    weighted_weekly = np.average(values, weights=weights)
    base_forecast = weighted_weekly * 4.33  # 4.33 weeks per month

    # Pending balance adjustment: the historical median pending balance at
    # mid-month tells us whether this month is running hot or cold.
    midmonth_pendings = [
        record["pending_balance"]
        for record in history
        if record.get("pending_balance")
        and 12 <= datetime.strptime(record["date"], "%Y-%m-%d").day <= 18
    ]
    historical_midmonth_pending = (
        np.median(midmonth_pendings) if midmonth_pendings else base_forecast / 2
    )

    pending_ratio = (
        pending_balance / historical_midmonth_pending
        if historical_midmonth_pending > 0 else 1.0
    )
    adjustment_factor = 1 + (pending_ratio - 1) * 0.4  # Dampen the signal somewhat

    final_forecast = base_forecast * adjustment_factor

    # Confidence band: ±6% based on our historical accuracy
    return {
        "forecast_30d": round(final_forecast, 2),
        "low": round(final_forecast * 0.94, 2),
        "high": round(final_forecast * 1.06, 2),
        "pending_balance_factor": round(adjustment_factor, 3),
        "base_weekly_avg": round(weighted_weekly, 2),
    }

# Example usage
data = pull_payout_history(creator_id="creator_abc123", days_back=180)
forecast = build_30day_forecast(data["history"], data["pending_balance"])
print(f"30-day forecast: ${forecast['forecast_30d']:,.0f}")
print(f"Range: ${forecast['low']:,.0f} — ${forecast['high']:,.0f}")
```
What the Numbers Actually Look Like
Running this across our roster of 12 creators for the first full month:
- Total forecast: $147,200
- Actual revenue: $153,800
- Error: 4.3% — inside our 6% target
Month two: 5.8% error. Month three: 3.1%. We’ve now run this for seven months. The model has missed our 6% threshold twice — both times when a creator ran an unplanned major promotion that added unexpected volume. For normal operating months, the model holds.
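For reference, the error figure is just absolute percent error against realized revenue — a trivial helper, not part of the pipeline above:

```python
def forecast_error_pct(forecast: float, actual: float) -> float:
    """Absolute percent error of a 30-day forecast vs. realized revenue."""
    return abs(actual - forecast) / actual * 100

# Month one: $147,200 forecast vs. $153,800 actual
print(round(forecast_error_pct(147_200, 153_800), 1))  # 4.3
```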
The Early Warning System
The second thing this unlocked was a week-over-week monitoring system. We set a threshold: if a creator’s gross revenue drops more than 15% week-over-week compared to their rolling baseline, we get an alert.
Before this system, we’d notice a declining creator at month-end when the payout came in light. By then it was too late to intervene. Now we see it in week two.
The intervention can be simple — a new mass PPV, a price adjustment, a chatter check to see if response times have slipped. But catching it early matters. Our average recovery time after an alert went from “discovered at month-end, recovery next month” to “caught in week 2, corrective action by week 3.”
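The threshold check itself is simple. A minimal sketch, where the function name is ours and the rolling baseline is assumed to come from the same weekly buckets the forecast already builds:

```python
def revenue_drop_alert(current_week: float, rolling_baseline: float,
                       threshold: float = 0.15) -> bool:
    """True when this week's gross revenue is more than `threshold`
    below the creator's rolling weekly baseline."""
    if rolling_baseline <= 0:
        return False
    drop = (rolling_baseline - current_week) / rolling_baseline
    return drop > threshold

# Baseline $10,000/week, this week came in at $8,200: an 18% drop fires the alert
print(revenue_drop_alert(8_200, 10_000))  # True
```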
In dollar terms, that early-warning catch has been worth roughly $31,000 in recovered revenue across seven months.
What We Do With the Forecast
The forecast runs every Sunday morning via a cron job. It pushes results to a Google Sheets dashboard that our whole team can see. Monday morning standup starts with: “Here’s where we’re projected to land this month, here’s who’s running hot, here’s who’s flagged.”
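We won’t reproduce the Sheets wiring here, but the Sunday job boils down to writing per-creator forecast rows somewhere the dashboard can read. A hypothetical sketch using a plain CSV — the field names mirror the forecast dict above, and the hot/cold flag thresholds are illustrative:

```python
import csv
from datetime import date

def write_forecast_report(forecasts: dict, path: str = "weekly_forecast.csv") -> None:
    """Write one row per creator: forecast, confidence band, and a hot/cold
    flag derived from the pending-balance adjustment factor."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["run_date", "creator", "forecast_30d", "low", "high", "status"])
        for creator, fc in forecasts.items():
            factor = fc["pending_balance_factor"]
            status = "HOT" if factor > 1.05 else "COLD" if factor < 0.95 else "ON TRACK"
            writer.writerow([date.today().isoformat(), creator,
                             fc["forecast_30d"], fc["low"], fc["high"], status])
```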
This changed three things operationally:
Hiring decisions. We now hire chatters based on projected revenue, not trailing revenue. If three creators are projected to scale this quarter, we can start recruiting now instead of scrambling when they’re already live.
Creator advances. Some creators ask for advances against expected payouts. We used to say no because we couldn’t project with confidence. Now we evaluate advances against the model. If the forecast says $18,000 is coming and they want a $5,000 advance, that’s a calculable decision.
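To show how that decision can be mechanized: a sketch of one possible policy, capping advances at a fraction of the low end of the forecast band. The 35% cap is our invented example number, not a figure from our actual policy:

```python
def approve_advance(forecast_low: float, advance_requested: float,
                    max_ratio: float = 0.35) -> bool:
    """Approve an advance only if it fits inside a fixed fraction of the
    conservative (low) end of the 30-day forecast band."""
    return advance_requested <= forecast_low * max_ratio

# $18,000 forecast -> low band of $16,920; a $5,000 advance clears the bar
print(approve_advance(16_920, 5_000))  # True
```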
Marketing spend. We know what we have to allocate before the month starts. No more “we thought we had budget” surprises.
If your agency is still running on last-month’s numbers and gut feel, the data to build this model already exists in your creator accounts. You just need to pull it.
Start with the revenue tracking use case to get your historical data flowing, then layer the forecast model on top. The Google Sheets integration is the fastest path to getting this in front of your team without building a frontend.
View pricing to see what plan covers multi-creator access, or jump straight to the getting started guide to make your first API call in under 10 minutes.
Flying blind is a choice. The data exists. Use it.