OnlyFans KPI Framework: The 15 Metrics Every Agency Should Track Daily
We tracked 3 metrics our first year and nearly went under. Here's the full 15-metric framework we built to run a profitable agency — with benchmarks and the API calls to pull every number automatically.
The Monday morning pull takes four minutes now. It used to take two hours.
Two hours of opening tabs, copying numbers into a spreadsheet, calculating ratios by hand, then spending standup arguing about whether the data was even current. By the time we finished the pull, half the morning was gone and we were making decisions on numbers that were already three days stale.
Then one of our top creators dropped 40% in revenue in a single month and we had no idea why until three weeks after it started. We had all this data and none of it told us anything was wrong until the damage was done.
The problem wasn’t that we tracked the wrong metrics exactly — it’s that we tracked metrics that told us what had already happened instead of metrics that told us what was about to happen. Revenue is a lagging indicator. By the time it drops, the cause happened two weeks ago.
We rebuilt our tracking from scratch. We now track 15 metrics across four categories, pull them automatically every morning before standup, and the whole thing runs in under five minutes. Here’s the full framework, the benchmarks we’ve established across our creator roster, and the code to pull it all automatically.
Why Most Agency Dashboards Fail
Before getting into the framework, it’s worth understanding the failure mode. Most agency dashboards are vanity dashboards. They show subscriber counts (which fluctuate constantly and mean very little in isolation), total revenue (a lagging outcome, not a leading indicator), and message volume (activity, not effectiveness).
None of those tell you where the business is actually going. A creator can be gaining subscribers while hemorrhaging long-term fans. Revenue can be flat while PPV conversion craters — held up temporarily by a price increase or a viral moment. Messages can be high while chatter quality tanks.
The framework below is built around a different question: what is the earliest possible signal that something is going wrong or right? Leading indicators over lagging ones. Conversion rates over raw volumes. Retention signals before churn becomes revenue loss.
Category 1: Revenue Metrics
1. ARPU (Average Revenue Per Subscriber Per Month)
ARPU is your most important revenue health metric. It answers the question: “How much is the average subscriber actually worth?” A creator can have 5,000 subscribers and generate $8,000/month (ARPU: $1.60) while another has 1,200 subscribers and generates $12,000/month (ARPU: $10.00). List size is almost meaningless without ARPU context.
Benchmark: Healthy managed creators typically see ARPU between $8–$22/month. Below $5 is a red flag — usually a sign of a highly discounted subscriber base, weak chatter performance, or both. Above $25 is exceptional and usually indicates a highly engaged niche audience with strong PPV execution.
Formula: total_net_revenue / active_subscriber_count
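In code, the same calculation applied to the two example creators above:

```python
def arpu(total_net_revenue: float, active_subscribers: int) -> float:
    """Metric 1: average revenue per subscriber per month."""
    return round(total_net_revenue / active_subscribers, 2) if active_subscribers else 0.0

print(arpu(8_000, 5_000))   # 1.6  -> big list, weak monetization
print(arpu(12_000, 1_200))  # 10.0 -> smaller list, far healthier
```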
2. PPV Conversion Rate
What percentage of fans who receive a PPV offer actually purchase it? This is your single best measure of offer quality and chatter effectiveness combined. It also tells you immediately when something is off — a sudden drop in PPV conversion rate is almost always an early warning sign before revenue follows.
Benchmark: Mass-blast PPV to full list: 2–5%. Segmented PPV to warm fans: 8–15%. Personalized DM PPV from chatter: 18–35%. If any of these drop more than 3 percentage points week-over-week, that’s a signal worth investigating that day.
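Here's a minimal sketch of that week-over-week check; the purchase and send counts are illustrative, not from a real account:

```python
def ppv_conversion_rate(purchased: int, sent: int) -> float:
    """Metric 2: percentage of PPV recipients who purchased."""
    return round(purchased / sent * 100, 1) if sent else 0.0

def wow_drop_flag(this_week: float, last_week: float, threshold_pts: float = 3.0) -> bool:
    """True if conversion fell more than `threshold_pts` percentage points."""
    return (last_week - this_week) > threshold_pts

last_week = ppv_conversion_rate(purchased=118, sent=900)  # 13.1%
this_week = ppv_conversion_rate(purchased=81, sent=900)   # 9.0%
if wow_drop_flag(this_week, last_week):
    print(f"PPV conversion down {last_week - this_week:.1f} pts week-over-week: investigate today")
```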
3. Revenue Mix (Subscription vs. PPV vs. Tips)
What percentage of total revenue comes from each source? This matters because each revenue stream has different margin profiles and volatility characteristics. Subscription revenue is predictable. PPV is high-margin but volatile. Tips are often signals of exceptional fan relationships.
Benchmark: Healthy agencies tend to target 30–40% subscription, 45–55% PPV, 10–20% tips. Heavy reliance on subscriptions (>60%) means you’re probably leaving significant PPV revenue on the table. Heavy PPV dependence (>70%) creates revenue volatility when content quality dips.
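A quick sketch of the mix calculation, with the dependence flags from the benchmarks above (the revenue figures are illustrative):

```python
def revenue_mix(sub_rev: float, ppv_rev: float, tip_rev: float) -> dict:
    """Metric 3: percentage split of net revenue by source."""
    total = sub_rev + ppv_rev + tip_rev
    if total == 0:
        return {"subscription_pct": 0.0, "ppv_pct": 0.0, "tip_pct": 0.0}
    return {
        "subscription_pct": round(sub_rev / total * 100, 1),
        "ppv_pct": round(ppv_rev / total * 100, 1),
        "tip_pct": round(tip_rev / total * 100, 1),
    }

mix = revenue_mix(sub_rev=4_200, ppv_rev=6_100, tip_rev=1_400)  # illustrative
if mix["subscription_pct"] > 60:
    print("Subscription-heavy: likely leaving PPV revenue on the table")
if mix["ppv_pct"] > 70:
    print("PPV-heavy: expect volatility when content quality dips")
```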
4. Refund Rate
A metric most agencies ignore until it’s too late. High refund rates are an early signal of content quality issues, misleading PPV previews, or chatter overpromising. They also directly reduce net revenue and can trigger account flags.
Benchmark: Under 1.5% is healthy. Above 3% needs investigation. We’ve seen refund rates spike before a creator makes any production quality changes — it’s usually a sign that chatters have started overselling content that doesn’t deliver on the setup.
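One way to catch a refund spike early is to compare the latest weekly rate against a trailing baseline. A sketch, assuming you compute weekly refund rates from your own transaction data; the 2x-baseline heuristic is an illustrative threshold, only the 3% line comes from the benchmark above:

```python
from statistics import mean

def refund_flag(weekly_refund_rates: list[float]) -> str | None:
    """Compare the latest weekly refund rate (%) against a trailing 4-week mean."""
    if len(weekly_refund_rates) < 5:
        return None  # not enough history for a baseline
    *history, current = weekly_refund_rates[-5:]
    baseline = mean(history)
    if current > 3.0:
        return f"refund rate {current}% is above the 3% investigation threshold"
    if baseline > 0 and current > 2 * baseline:
        return f"refund rate {current}% is 2x the trailing baseline ({baseline:.2f}%)"
    return None

print(refund_flag([0.9, 1.1, 0.8, 1.0, 2.3]))  # spike vs baseline, still under the 3% cap
```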
Category 2: Retention Metrics
5. 30-Day Retention Rate
Of subscribers who joined 30 days ago, what percentage are still active? This is your churn early-warning system. If 30-day retention drops, you’ll feel it in revenue 30–60 days later. By the time you see the revenue impact, you’ve already lost the fans.
Benchmark: 65–80% is healthy for managed creators. Below 60% is a signal that either the onboarding experience is failing (first-month engagement is weak) or the subscriber acquisition is low-quality (promo subs who were never going to convert long-term).
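The full script later in this post reads 30-day retention straight from a retention endpoint, but the underlying calculation looks like this. A sketch assuming raw subscriber records with `subscribed_at` and `status` fields (both field names are assumptions):

```python
from datetime import datetime, timedelta

def retention_30d(subscribers: list[dict], today: datetime) -> float:
    """Metric 5: of the cohort that joined ~30 days ago, % still active."""
    window_end = today - timedelta(days=30)
    window_start = window_end - timedelta(days=7)  # one-week cohort window
    cohort = [
        s for s in subscribers
        if window_start <= datetime.fromisoformat(s["subscribed_at"]) < window_end
    ]
    if not cohort:
        return 0.0
    retained = sum(1 for s in cohort if s["status"] == "active")
    return round(retained / len(cohort) * 100, 1)

subs = [  # illustrative records
    {"subscribed_at": "2024-05-01", "status": "active"},
    {"subscribed_at": "2024-05-02", "status": "expired"},
    {"subscribed_at": "2024-05-03", "status": "active"},
]
print(retention_30d(subs, today=datetime(2024, 6, 3)))  # 66.7
```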
6. Churn Rate (Monthly)
Related to retention but distinct: what percentage of your existing subscriber base cancels in a given month? You want this number stable or declining. A rising churn rate while subscriber count holds steady usually means you’re acquiring as fast as you’re losing — expensive and unsustainable.
Benchmark: 8–15% monthly churn is typical. Below 8% is exceptional — usually a creator with a very strong personal brand and high community belonging. Above 20% is a crisis-level signal.
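The calculation itself is simple; the value is in tracking it every month against the bands above (the numbers below are illustrative):

```python
def monthly_churn_rate(cancelled: int, base_at_month_start: int) -> float:
    """Metric 6: percentage of the existing base that cancelled this month."""
    return round(cancelled / base_at_month_start * 100, 1) if base_at_month_start else 0.0

churn = monthly_churn_rate(cancelled=140, base_at_month_start=1_150)
print(churn)  # 12.2: inside the typical 8-15% band
if churn > 20:
    print("Crisis-level churn: escalate immediately")
```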
7. Reactivation Rate
What percentage of churned subscribers re-subscribe within 90 days? This is an undertracked metric that reveals the quality of your win-back sequences and tells you something important about why fans left. High reactivation rates usually mean the churn was price-sensitive (they left during a price increase and came back when you ran a promo) rather than content-sensitive (they left because they were dissatisfied and stayed gone).
Benchmark: 12–25% 90-day reactivation is healthy. Below 10% suggests your win-back messaging or offers aren’t compelling enough. We found that a targeted reactivation DM with a personalized offer beats a mass-discount blast by about 3x.
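A sketch of the 90-day reactivation calculation, assuming churn events carry a `churned_at` date and an optional `resubscribed_at` date (both field names are assumptions):

```python
from datetime import datetime, timedelta

def reactivation_rate_90d(churn_events: list[dict]) -> float:
    """Metric 7: % of churned fans who re-subscribed within 90 days.
    Assumes every churn event is at least 90 days old, so the window has closed."""
    if not churn_events:
        return 0.0
    reactivated = 0
    for event in churn_events:
        resub = event.get("resubscribed_at")
        if resub is None:
            continue
        churned = datetime.fromisoformat(event["churned_at"])
        if datetime.fromisoformat(resub) - churned <= timedelta(days=90):
            reactivated += 1
    return round(reactivated / len(churn_events) * 100, 1)

events = [  # illustrative
    {"churned_at": "2024-01-10", "resubscribed_at": "2024-02-20"},  # back in 41 days
    {"churned_at": "2024-01-15", "resubscribed_at": None},          # stayed gone
    {"churned_at": "2024-01-20", "resubscribed_at": "2024-06-01"},  # outside the window
]
print(reactivation_rate_90d(events))  # 33.3
```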
Category 3: Engagement Metrics
8. Message Response Rate
Of fans who message the creator, what percentage get a response? This sounds like a basic hygiene metric, but it’s more nuanced than that. Response rate varies significantly by time of day, chatter shift coverage, and account size. Gaps in response rate almost always correspond to gaps in revenue — unanswered messages are missed conversion opportunities.
Benchmark: 85–95% response rate for managed accounts. Below 80% means you have coverage gaps. We found a direct correlation: for every 5-point drop in response rate, revenue per subscriber drops approximately $1.20 the following week.
9. DM Open Rate
Of mass messages sent, what percentage get opened? This tells you the health of your subscriber relationship more than your content — a highly engaged list opens 60–80% of mass messages. A cold or disengaged list opens 20–35%. Tracking this over time tells you whether your relationship-building is working.
Benchmark: 55–75% for healthy managed accounts. Declining open rate over 4+ weeks is a sign of list fatigue, which usually means mass messages are going out too frequently or without enough personalization.
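A simple way to encode the "declining over 4+ weeks" rule; the strictly-every-week test is one reasonable interpretation, and the rates below are illustrative:

```python
def declining_for(weekly_open_rates: list[float], weeks: int = 4) -> bool:
    """True if the open rate fell every week for `weeks` consecutive weeks."""
    recent = weekly_open_rates[-(weeks + 1):]
    if len(recent) < weeks + 1:
        return False
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))

rates = [68.0, 66.5, 64.0, 61.2, 58.9]  # illustrative weekly open rates
if declining_for(rates):
    print("List fatigue: slow the mass-message cadence or add personalization")
```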
10. Tip Frequency (Tips Per Active Subscriber Per Month)
How often does the average fan tip? Tipping behavior is the clearest signal of emotional investment. Fans who tip are fans who feel seen, who feel a genuine connection. Tip frequency is almost impossible to manufacture through tactics — it emerges from authentic relationship quality. Tracking it over time tells you whether your chatters are actually building relationships or just transacting.
Benchmark: 0.08–0.15 tips per subscriber per month is healthy. Below 0.05 suggests the fan relationship is purely transactional. Above 0.20 is exceptional — usually a creator with a very strong parasocial relationship and chatters who actually invest in learning fan details.
Category 4: Operational Metrics
11. Revenue Per Chatter Message ($/Message)
How much revenue does each chatter message generate? This normalizes chatter performance across different shift lengths and conversation volumes. A chatter who sends 400 messages and generates $2,000 is outperforming one who sends 600 messages and generates $2,400 — the raw revenue number lies; the per-message rate tells the truth.
Benchmark: $3.50–$8.00 per chatter message is a healthy range. Below $2.50 suggests chatters are sending too many low-value messages (over-chatting without conversion intent). Above $12.00 is exceptional and usually indicates a high-value niche with strong chatter training.
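The comparison from the paragraph above, in code:

```python
def revenue_per_message(attributed_revenue: float, messages_sent: int) -> float:
    """Metric 11: revenue attributed per chatter message."""
    return round(attributed_revenue / messages_sent, 2) if messages_sent else 0.0

# Raw revenue favors chatter B; the per-message rate shows A is the stronger performer.
print(revenue_per_message(2_000, 400))  # 5.0
print(revenue_per_message(2_400, 600))  # 4.0
```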
12. Content Cadence Compliance
What percentage of weeks does each creator post at or above their target frequency? Inconsistent posting is the single most common driver of churn we’ve seen. Fans subscribe expecting a certain content volume, and when it drops, they don’t renew. Tracking cadence compliance gives you a leading indicator of coming churn.
Benchmark: 90%+ cadence compliance is the target. Below 80% means you have a production consistency problem that will show up in retention numbers within 4–6 weeks.
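A sketch of the compliance calculation from weekly post counts (the counts and target below are illustrative):

```python
def cadence_compliance(weekly_post_counts: list[int], target_per_week: int) -> float:
    """Metric 12: percentage of weeks at or above the target posting frequency."""
    if not weekly_post_counts:
        return 0.0
    hit = sum(1 for count in weekly_post_counts if count >= target_per_week)
    return round(hit / len(weekly_post_counts) * 100, 1)

print(cadence_compliance([5, 5, 3, 6, 5, 4, 5, 5], target_per_week=5))  # 75.0: below target
```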
13. PPV Price Per Piece (Average)
What’s the average price point of PPV content sold? This tracks pricing strategy and tells you whether you’re leaving money on the table or pricing yourself out of conversions. Many agencies default to a flat PPV price and never test elasticity.
Benchmark: $8–$20 average PPV for most creator tiers. Testing price sensitivity by segment often reveals that top-tier fans will pay $25–$40 for the right content while new fans respond better to $6–$10 entry points.
14. Onboarding Time (Days to First Revenue)
For new creators, how many days from signed contract to first revenue event? Slow onboarding costs money — every day a creator isn’t live is a day of missed revenue. We also found that the faster we onboard a creator, the better their first-month retention, because they launch with momentum.
Benchmark: Under 5 days to first subscriber. Under 10 days to first PPV revenue. Above 14 days to either is a process problem worth examining.
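Measuring this only takes two timestamps. A sketch using ISO dates (the dates are illustrative):

```python
from datetime import datetime

def days_to_first_revenue(contract_signed: str, first_revenue_event: str) -> int:
    """Metric 14: days from signed contract to first revenue event."""
    delta = datetime.fromisoformat(first_revenue_event) - datetime.fromisoformat(contract_signed)
    return delta.days

print(days_to_first_revenue("2024-03-01", "2024-03-04"))  # 3: under the 5-day target
print(days_to_first_revenue("2024-03-01", "2024-03-18"))  # 17: process problem
```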
15. Creator Revenue Per Content Piece
Total monthly revenue divided by number of pieces published. This tells you content efficiency — are you getting full value from your production investment? A creator publishing 30 pieces/month and generating $8,000 is at $267/piece. One publishing 12 pieces and generating $10,000 is at $833/piece. More output is not always better output.
Benchmark: Varies significantly by creator tier, but for actively managed creators, $400–$800 per content piece is a healthy range. Below $200 suggests content strategy isn’t optimized for the audience.
Pulling All 15 Metrics from the API
We run this every morning. The script hits several endpoints, normalizes the data, and populates our morning dashboard. Here’s the core pull:
import requests
from datetime import datetime, timedelta

API_BASE = "http://157.180.79.226:4024/api/v1"
HEADERS = {"X-API-Key": "YOUR_API_KEY"}


def pull_daily_kpis(creator_id: str, days_back: int = 30):
    """
    Pull the daily KPI metrics for a single creator.

    Covers 14 of the 15 metrics; onboarding time (metric 14) is a one-time
    measurement per creator, not part of the daily pull. Returns a dict of
    metric values; benchmark comparison happens separately in benchmark_check.
    """
    end_date = datetime.now().strftime("%Y-%m-%d")
    start_date = (datetime.now() - timedelta(days=days_back)).strftime("%Y-%m-%d")
    results = {}

    # --- Revenue Metrics ---

    # Subscriber stats
    sub_resp = requests.get(
        f"{API_BASE}/subscribers",
        headers=HEADERS,
        params={"creator_id": creator_id, "status": "active"},
        timeout=30,
    )
    subscribers = sub_resp.json()
    active_count = subscribers.get("total_count", 0)

    # Transactions
    txn_resp = requests.get(
        f"{API_BASE}/payouts/transactions",
        headers=HEADERS,
        params={"creator_id": creator_id, "start_date": start_date, "end_date": end_date},
        timeout=30,
    )
    transactions = txn_resp.json().get("transactions", [])
    total_revenue = sum(t["net_amount"] for t in transactions)
    ppv_revenue = sum(t["net_amount"] for t in transactions if t["type"] == "ppv_purchase")
    sub_revenue = sum(t["net_amount"] for t in transactions if t["type"] in ("subscription", "subscription_renewal"))
    tip_revenue = sum(t["net_amount"] for t in transactions if t["type"] == "tip")
    refunds = sum(t["net_amount"] for t in transactions if t["type"] == "refund")

    # PPV stats
    ppv_sent_resp = requests.get(
        f"{API_BASE}/stats/ppv",
        headers=HEADERS,
        params={"creator_id": creator_id, "start_date": start_date, "end_date": end_date},
        timeout=30,
    )
    ppv_stats = ppv_sent_resp.json()
    ppv_sent = ppv_stats.get("total_sent", 0)
    ppv_purchased = ppv_stats.get("total_purchased", 0)

    # Metrics 1-4: ARPU, PPV conversion, revenue mix, refund rate
    results["arpu"] = round(total_revenue / active_count, 2) if active_count > 0 else 0
    results["ppv_conversion_rate"] = round(ppv_purchased / ppv_sent * 100, 1) if ppv_sent > 0 else 0
    results["revenue_mix"] = {
        "subscription_pct": round(sub_revenue / total_revenue * 100, 1) if total_revenue > 0 else 0,
        "ppv_pct": round(ppv_revenue / total_revenue * 100, 1) if total_revenue > 0 else 0,
        "tip_pct": round(tip_revenue / total_revenue * 100, 1) if total_revenue > 0 else 0,
    }
    results["refund_rate"] = round(abs(refunds) / total_revenue * 100, 2) if total_revenue > 0 else 0

    # --- Retention Metrics (5-7) ---
    retention_resp = requests.get(
        f"{API_BASE}/stats/retention",
        headers=HEADERS,
        params={"creator_id": creator_id, "cohort_days": 30},
        timeout=30,
    )
    retention_data = retention_resp.json()
    results["retention_30d"] = retention_data.get("day_30_retention_rate", 0)
    results["churn_rate_monthly"] = retention_data.get("monthly_churn_rate", 0)
    results["reactivation_rate_90d"] = retention_data.get("reactivation_rate_90d", 0)

    # --- Engagement Metrics (8-10) ---

    # Message response rate and DM open rate
    msg_resp = requests.get(
        f"{API_BASE}/stats/messages",
        headers=HEADERS,
        params={"creator_id": creator_id, "start_date": start_date, "end_date": end_date},
        timeout=30,
    )
    msg_stats = msg_resp.json()
    results["message_response_rate"] = msg_stats.get("response_rate_pct", 0)
    results["dm_open_rate"] = msg_stats.get("mass_message_open_rate_pct", 0)

    # Tip frequency
    tip_count = sum(1 for t in transactions if t["type"] == "tip")
    results["tip_frequency"] = round(tip_count / active_count, 3) if active_count > 0 else 0

    # --- Operational Metrics (11-13, 15) ---

    # Chatter performance
    chatter_resp = requests.get(
        f"{API_BASE}/stats/chatters",
        headers=HEADERS,
        params={"creator_id": creator_id, "start_date": start_date, "end_date": end_date},
        timeout=30,
    )
    chatter_stats = chatter_resp.json()
    total_chatter_messages = chatter_stats.get("total_messages_sent", 0)
    chatter_attributed_revenue = chatter_stats.get("attributed_revenue", 0)
    results["revenue_per_chatter_message"] = round(
        chatter_attributed_revenue / total_chatter_messages, 2
    ) if total_chatter_messages > 0 else 0

    # Content stats
    content_resp = requests.get(
        f"{API_BASE}/stats/content",
        headers=HEADERS,
        params={"creator_id": creator_id, "start_date": start_date, "end_date": end_date},
        timeout=30,
    )
    content_stats = content_resp.json()
    results["content_pieces_published"] = content_stats.get("pieces_published", 0)
    results["content_cadence_compliance"] = content_stats.get("cadence_compliance_pct", 0)
    results["avg_ppv_price"] = ppv_stats.get("avg_price", 0)
    results["revenue_per_content_piece"] = round(
        total_revenue / results["content_pieces_published"], 2
    ) if results["content_pieces_published"] > 0 else 0

    return results


def benchmark_check(metrics: dict) -> list:
    """Flag metrics outside healthy benchmark ranges."""
    flags = []
    benchmarks = {
        "arpu": (8.0, 22.0, "Revenue: ARPU"),
        "ppv_conversion_rate": (8.0, 35.0, "Revenue: PPV conversion rate"),
        "refund_rate": (0.0, 1.5, "Revenue: refund rate"),
        "retention_30d": (65.0, 100.0, "Retention: 30-day retention"),
        "churn_rate_monthly": (0.0, 15.0, "Retention: monthly churn"),
        "message_response_rate": (85.0, 100.0, "Engagement: message response rate"),
        "dm_open_rate": (55.0, 100.0, "Engagement: DM open rate"),
        "revenue_per_chatter_message": (3.5, 12.0, "Ops: revenue per chatter message"),
    }
    for key, (low, high, label) in benchmarks.items():
        val = metrics.get(key, 0)
        if val < low:
            flags.append(f"BELOW BENCHMARK — {label}: {val} (target: >{low})")
        elif val > high and high < 100:
            # high == 100 means the metric is a capped rate; no upper flag needed
            flags.append(f"ABOVE BENCHMARK — {label}: {val} (target: <{high})")
    return flags


# Example: morning pull for all managed creators
creator_ids = ["creator_abc123", "creator_def456", "creator_ghi789"]

for cid in creator_ids:
    metrics = pull_daily_kpis(cid, days_back=30)
    flags = benchmark_check(metrics)
    print(f"\n=== {cid} ===")
    print(f"  ARPU: ${metrics['arpu']} | PPV conv: {metrics['ppv_conversion_rate']}% | Churn: {metrics['churn_rate_monthly']}%")
    print(f"  Response rate: {metrics['message_response_rate']}% | Rev/chatter msg: ${metrics['revenue_per_chatter_message']}")
    if flags:
        print(f"  FLAGS ({len(flags)}):")
        for f in flags:
            print(f"    {f}")
The benchmark_check function is what makes this useful operationally. When we run this in the morning, any creator with flags gets a dedicated review in standup. Creators with no flags are healthy — the team focuses their energy where the data says it’s needed.
Making This Actionable
Having 15 metrics is only useful if they drive decisions. Here’s how we mapped each category to action triggers; a small routing sketch follows the list:
Revenue metrics dropping → Investigate chatter performance and PPV offer quality first. Revenue problems are almost always downstream of conversion problems.
Retention metrics dropping → Look at content cadence first. Consistent posting is the highest-leverage retention intervention we’ve found.
Engagement metrics dropping → Check shift coverage and response time. Engagement drops are usually operational problems before they’re content problems.
Operational metrics dropping → Process review. If revenue per chatter message drops, pull the chatter attribution report. If cadence compliance drops, look at the content production workflow.
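Here's a minimal sketch of routing benchmark_check flags to these first actions. The category prefixes come from the labels in the benchmarks dict above; the action strings just condense this list:

```python
FIRST_ACTIONS = {  # condensed from the triggers above
    "Revenue": "investigate chatter performance and PPV offer quality",
    "Retention": "check content cadence first",
    "Engagement": "check shift coverage and response time",
    "Ops": "process review: attribution report / production workflow",
}

def route_flags(flags: list[str]) -> list[str]:
    """Map each benchmark_check flag to the first action for its category.
    Flags embed a label like 'Engagement: message response rate'."""
    routed = []
    for flag in flags:
        for prefix, action in FIRST_ACTIONS.items():
            if f"{prefix}:" in flag:
                routed.append(f"{flag} -> {action}")
                break
    return routed

sample = ["BELOW BENCHMARK — Engagement: message response rate: 78 (target: >85.0)"]
for line in route_flags(sample):
    print(line)
```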
The framework doesn’t tell you what’s wrong. It tells you where to look — and in a 30+ creator operation, that’s the difference between catching problems in week one and catching them in week six.
For the detailed chatter performance measurement that feeds into metric 11, see our post on how we measured the 400% performance spread across our chatter team. The attribution model there is what powers the $/message calculation above.
If you’re building this reporting layer for the first time, the getting started guide walks through authentication and your first API calls. View pricing to see what access looks like for your agency size — the daily KPI pull above runs on a single API key regardless of how many creators you manage.
Track everything. Benchmark everything. Act on what the data tells you first.