We Measured Every Chatter's Revenue-Per-Conversation — The Spread Was Nearly 4x
We had 8 chatters and no idea which ones were actually driving revenue. After correlating conversations with transaction data via the API, the performance gap was staggering — and fixable.
We had 8 chatters on payroll and a nagging feeling that some of them were dead weight. We just couldn’t prove it.
Every chatter looked roughly the same on the surface. They all responded to fans. They all sent PPV offers. They all showed up for their shifts. But OnlyFans chatter performance was a black box — we had no way to connect individual conversations to actual revenue events.
So we were paying everyone roughly the same flat rate, doing vibes-based performance reviews, and wondering why our conversion numbers weren’t moving.
The answer came when we finally built a system to correlate chat data with transaction data through the API. What we found: our best chatter generated $45 per conversation. Our worst generated $12 per conversation. Nearly 4x — on the same accounts, with the same fans, working the same shifts.
Here’s exactly how we measured it and what we did with the data.
The Measurement Problem
The core challenge is that OnlyFans doesn’t natively connect a specific chatter’s conversation to a revenue event. A fan chats with Chatter A at 3pm, then buys a PPV at 4pm — that’s one step removed. Did Chatter A cause that purchase? Probably. But you can’t know without building the attribution layer yourself.
We pulled three data sources and joined them:
- `/chats` — all active fan conversations per creator, with timestamps and chatter assignment
- `/chats/{chatId}/messages` — full message history per conversation
- `/payouts/transactions` — every revenue event (PPV purchase, tip, subscription) with timestamp and fan ID
The attribution logic: if a fan made a purchase within 4 hours of an active chat session with a specific chatter, we attributed it to that chatter. Not perfect — but consistent, and consistent is what matters for comparison.
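The rule itself fits in a few lines. A standalone sketch of the check, using the same window logic the full script applies (function name ours, for illustration):

```python
from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(hours=4)

def is_attributed(last_chatter_msg_at: datetime, purchased_at: datetime) -> bool:
    """True if the purchase lands inside the window after the chatter's
    last message. Purchases made before that message never count."""
    delta = purchased_at - last_chatter_msg_at
    return timedelta(0) <= delta <= ATTRIBUTION_WINDOW

msg_time = datetime(2026, 1, 5, 15, 0)                 # chatter active at 3pm
print(is_attributed(msg_time, datetime(2026, 1, 5, 16, 0)))   # 4pm buy: True
print(is_attributed(msg_time, datetime(2026, 1, 5, 20, 1)))   # past window: False
```

The one-sided window matters: a purchase an hour before the chat says nothing about the chatter, so negative deltas are excluded.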
The Data Pull
```python
import requests
from datetime import datetime, timedelta
from collections import defaultdict

API_BASE = "http://157.180.79.226:4024/api/v1"
HEADERS = {"X-API-Key": "YOUR_API_KEY"}


def pull_chatter_attribution(creator_id: str, start_date: str, end_date: str):
    """
    Pull chat and transaction data, build a chatter attribution model.
    Attribution window: purchase within 4 hours of the chatter's last message.
    """
    # Step 1: Pull all conversations in the date range
    chats_resp = requests.get(
        f"{API_BASE}/chats",
        headers=HEADERS,
        params={
            "creator_id": creator_id,
            "start_date": start_date,
            "end_date": end_date,
        },
    )
    chats = chats_resp.json()["chats"]

    # Step 2: Pull all transactions in the date range
    txn_resp = requests.get(
        f"{API_BASE}/payouts/transactions",
        headers=HEADERS,
        params={
            "creator_id": creator_id,
            "start_date": start_date,
            "end_date": end_date,
        },
    )
    transactions = txn_resp.json()["transactions"]

    # Build lookup: fan_id -> list of {timestamp, amount, type}
    fan_purchases = defaultdict(list)
    for txn in transactions:
        if txn["type"] in ("ppv_purchase", "tip", "subscription_renewal"):
            fan_purchases[txn["fan_id"]].append({
                "timestamp": datetime.fromisoformat(txn["created_at"]),
                "amount": txn["net_amount"],
                "type": txn["type"],
            })

    # Step 3: For each chat, pull messages to find the last chatter activity
    chatter_stats = defaultdict(lambda: {
        "conversations": 0,
        "revenue_attributed": 0.0,
        "ppv_closes": 0,
        "total_messages_sent": 0,
    })
    attribution_window = timedelta(hours=4)

    for chat in chats:
        chat_id = chat["id"]
        fan_id = chat["fan_id"]

        # Get messages for this conversation
        msg_resp = requests.get(
            f"{API_BASE}/chats/{chat_id}/messages",
            headers=HEADERS,
            params={"start_date": start_date, "end_date": end_date},
        )
        messages = msg_resp.json()["messages"]

        # Find chatter messages and their timestamps
        chatter_messages = [m for m in messages if m["sender_type"] == "chatter"]
        if not chatter_messages:
            continue

        # Credit the conversation to whoever sent the most recent chatter message
        chatter_id = chatter_messages[-1].get("chatter_id", "unknown")
        last_chatter_msg_time = max(
            datetime.fromisoformat(m["created_at"]) for m in chatter_messages
        )

        chatter_stats[chatter_id]["conversations"] += 1
        chatter_stats[chatter_id]["total_messages_sent"] += len(chatter_messages)

        # Attribution: purchases within 4 hours after the last chatter message
        for purchase in fan_purchases.get(fan_id, []):
            time_delta = purchase["timestamp"] - last_chatter_msg_time
            if timedelta(0) <= time_delta <= attribution_window:
                chatter_stats[chatter_id]["revenue_attributed"] += purchase["amount"]
                if purchase["type"] == "ppv_purchase":
                    chatter_stats[chatter_id]["ppv_closes"] += 1

    # Calculate per-conversation metrics
    results = []
    for chatter_id, stats in chatter_stats.items():
        if stats["conversations"] > 0:
            results.append({
                "chatter_id": chatter_id,
                "conversations": stats["conversations"],
                "revenue_attributed": round(stats["revenue_attributed"], 2),
                "revenue_per_conversation": round(
                    stats["revenue_attributed"] / stats["conversations"], 2
                ),
                "ppv_close_rate": round(
                    stats["ppv_closes"] / stats["conversations"] * 100, 1
                ),
                "avg_messages_per_conv": round(
                    stats["total_messages_sent"] / stats["conversations"], 1
                ),
            })
    return sorted(results, key=lambda x: x["revenue_per_conversation"], reverse=True)


# Run for 30 days
results = pull_chatter_attribution(
    creator_id="creator_abc123",
    start_date="2026-01-01",
    end_date="2026-01-31",
)
for r in results:
    print(f"Chatter {r['chatter_id']}: ${r['revenue_per_conversation']}/conv "
          f"| {r['ppv_close_rate']}% PPV close | {r['avg_messages_per_conv']} msgs/conv")
```
What We Found
Running this across our 8 chatters for a full 30-day period produced this leaderboard:
| Chatter | Rev/Conv | PPV Close Rate | Avg Msgs/Conv |
|---|---|---|---|
| Chatter A | $45.20 | 34% | 8.2 |
| Chatter B | $38.90 | 29% | 9.1 |
| Chatter C | $31.40 | 22% | 11.4 |
| Chatter D | $27.80 | 19% | 12.8 |
| Chatter E | $21.30 | 15% | 14.2 |
| Chatter F | $18.60 | 13% | 16.7 |
| Chatter G | $14.10 | 9% | 19.3 |
| Chatter H | $11.30 | 7% | 21.6 |
The nearly 4x spread between top and bottom wasn’t a fluke. We ran it for three months. The rankings barely moved.
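One way to quantify "barely moved" is to save each month's ranked chatter IDs and measure average position drift between runs. A minimal sketch, assuming you keep each month's output from the attribution script as an ordered list (`rank_stability` is a hypothetical helper, not part of the API):

```python
def rank_stability(monthly_rankings):
    """Average absolute rank shift per chatter between consecutive months.
    Input: list of leaderboards, each an ordered list of chatter IDs.
    Returns 0.0 when the leaderboard does not move at all."""
    shifts = []
    for prev, curr in zip(monthly_rankings, monthly_rankings[1:]):
        prev_pos = {cid: i for i, cid in enumerate(prev)}
        # Only compare chatters present in both months
        month_shift = [abs(prev_pos[cid] - i)
                       for i, cid in enumerate(curr) if cid in prev_pos]
        shifts.append(sum(month_shift) / len(month_shift))
    return sum(shifts) / len(shifts)

print(rank_stability([["A", "B", "C"], ["A", "B", "C"]]))  # prints 0.0
```

For 8 chatters, a value near zero means the ranking is a stable signal; a value of 2 or more means it is mostly noise and the attribution window needs rethinking.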
The Three Traits of Top Performers
Once we had the data, we went back into the actual message transcripts for our top two and bottom two chatters. We were looking for behavioral differences, not just output differences.
Three patterns separated the high-revenue chatters from the low ones:
1. They asked purchasing questions earlier.
Chatter A’s median time to first purchasing question (something like “have you seen her exclusive content?”) was message 3. Chatter H’s was message 11. By the time Chatter H got there, some fans had already moved on. Top performers created purchase context early — they weren’t pushy, but they were intentional.
2. Their openers were specific to the fan.
Top chatters referenced something from the fan’s profile or prior purchase history within the first two messages. “Saw you’ve been a member since March — she just dropped something I think you’d really like.” Bottom chatters opened generically: “Hey! How are you today?” It felt like a script because it was.
3. They sent fewer messages to close the same outcome.
This was counterintuitive. We assumed higher-revenue chatters were sending more messages, building more rapport. The data showed the opposite. Top performers got to a purchase event in 8-9 messages. Bottom performers sent 20+ messages and still didn’t close. More conversation was not better conversation.
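Trait #1 is straightforward to turn into a metric: scan each conversation, oldest message first, for the first chatter message that raises a purchase. A sketch with an illustrative keyword list (ours was tuned per account and is not reproduced here):

```python
from statistics import median

PURCHASE_KEYWORDS = ("exclusive", "unlock", "ppv")  # illustrative, tune per account

def first_purchase_question_index(messages):
    """1-based index of the first chatter message raising a purchase,
    or None if no chatter message ever does. `messages` is the
    oldest-first list from /chats/{chatId}/messages."""
    for i, m in enumerate(messages, start=1):
        if m["sender_type"] == "chatter" and any(
            kw in m.get("text", "").lower() for kw in PURCHASE_KEYWORDS
        ):
            return i
    return None

def median_time_to_first_question(conversations):
    """Median index over conversations where the question ever appeared."""
    indexes = [idx for msgs in conversations
               if (idx := first_purchase_question_index(msgs)) is not None]
    return median(indexes) if indexes else None
```

Keyword matching is crude; it misclassifies soft approaches. It was still good enough to separate a message-3 chatter from a message-11 chatter.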
What We Changed
We did four things with this data:
Performance-based compensation. We moved from flat hourly rate to a base + revenue share model. Chatters now earn a percentage of attributable revenue on top of their base. This aligned incentives immediately. Three chatters who were coasting started performing differently within two weeks.
Targeted training. Instead of general chatter training, we built specific training around the three traits. We took actual high-performing conversation excerpts (anonymized) and turned them into training material. New chatters now shadow Chatter A’s transcripts, not a generic script.
Account assignment by chatter tier. We moved our top two chatters onto our highest-value creator accounts. Why let Chatter H work an account generating $50K/month when Chatter A could be running it?
Weekly attribution report. The script now runs every Monday. Every chatter sees their own numbers. No one can pretend the data doesn’t exist.
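The Monday report is just a formatting pass over the attribution output, triggered by any scheduler (cron, a CI job). A sketch of the per-chatter lines, using the field names from the script above (the layout is ours):

```python
def format_report(results):
    """One ranked line per chatter, from pull_chatter_attribution() output."""
    lines = []
    for rank, r in enumerate(results, start=1):
        lines.append(
            f"{rank}. {r['chatter_id']}: ${r['revenue_per_conversation']}/conv "
            f"| {r['ppv_close_rate']}% PPV close | "
            f"{r['avg_messages_per_conv']} msgs/conv"
        )
    return "\n".join(lines)
```

Since the input is already sorted by revenue per conversation, the rank number doubles as the leaderboard position every chatter sees.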
The Revenue Impact
Three months after implementing the changes:
- Agency-wide revenue per conversation went from $26.40 average to $34.80 — a 32% increase
- Two chatters voluntarily left after seeing their performance data (they knew before we said anything)
- We replaced them with two new hires whom we onboarded with attribution targets from day one
That 32% improvement on a $180K/month operation is roughly $57,000 in additional monthly revenue. From measurement and realignment — not from hiring more chatters.
If you’re running chatters without attribution data, you’re making staffing decisions on gut feel — on your most important revenue driver. The data to fix this is sitting in your chat history and transaction logs right now.
Check out our guide on identifying the 3 phrases that 10x PPV conversion for the messaging-level analysis that pairs with this attribution model. The automated messaging use case shows how to systematize the winning patterns once you’ve identified them.
View pricing to get API access for your creator accounts, or start with the getting started guide to make your first call today.
The chatters you thought were performing might not be. The data will tell you.