Our Best Chatter Was Actually Our Worst — Here's How We Found Out
He sent 200 messages a day and we thought he was a star. When we built a revenue-per-message metric from the API data, we found out he was generating less revenue than the chatter sending a fifth as many messages.
Marcus was our most active chatter. Two hundred messages a day, sometimes more. He was the first one online in the morning and often the last one clocking out. When we had a creator who needed coverage, Marcus was the answer. He was reliable, enthusiastic, and by every metric we were tracking at the time, he was performing.
We were tracking the wrong metrics.
When we built a proper revenue attribution system from the chat and transaction data in the API, Marcus dropped to the bottom of our performance rankings. The chatter we had been gently suggesting “send more messages” to — a quieter operator named David who averaged 40 conversations per day — was generating more than thirteen times the revenue per message and was single-handedly responsible for the creator’s three highest-revenue months.
Marcus was generating $0.18 of revenue per message sent. David was generating $2.40.
The gap between them was $4,000 per month in labor efficiency. We were paying for volume when we should have been paying for conversion.
Why Message Count Became Our Default KPI
The reason message count became a default metric in our agency — and in most agencies we’ve talked to — is that it’s easy to track. You can eyeball a shift report, count the conversations logged, and have a number in 30 seconds. It feels like accountability.
The problem is that message count is an input metric, not an output metric. It tells you how active a chatter is. It tells you nothing about whether that activity is generating revenue. A chatter who sends 200 low-quality, low-intent messages is doing less useful work than a chatter who sends 40 high-craft messages that move fans through a purchase sequence.
We knew this intuitively. The problem was we didn’t have a clean way to prove it quantitatively until we started pulling data from the /chats, /chats/{chatId}/messages, and /payouts/transactions endpoints and building proper attribution.
Building the Attribution Model
The core challenge in OnlyFans chatter performance attribution is connecting a specific message or conversation to a specific revenue event. A fan doesn’t always buy immediately after a message — sometimes there’s a lag of hours or a day. And some revenue comes from fans who haven’t been messaged at all recently (auto-renewals, for example). You need a model that credits chatters for revenue that plausibly resulted from their work without over-attributing.
We landed on a 24-hour attribution window with decay weighting: a purchase within 4 hours of a chatter conversation gets 100% credit to that chatter. A purchase between 4 and 12 hours gets 70% credit. Between 12 and 24 hours gets 40%. After 24 hours, no attribution.
This isn’t perfect — no attribution model is — but it’s substantially more accurate than no attribution at all, and it’s consistent enough that the relative ranking of chatters is meaningful.
```python
import requests
from datetime import datetime, timedelta
from collections import defaultdict

API_KEY = "your_api_key"
BASE_URL = "http://157.180.79.226:4024/api/v1"
headers = {"X-API-Key": API_KEY}

def get_chats(creator_id, days_back=30):
    cutoff = (datetime.utcnow() - timedelta(days=days_back)).isoformat()
    response = requests.get(
        f"{BASE_URL}/chats",
        headers=headers,
        params={"creatorId": creator_id, "since": cutoff, "limit": 2000}
    )
    response.raise_for_status()
    return response.json().get("chats", [])

def get_chat_messages(chat_id):
    response = requests.get(
        f"{BASE_URL}/chats/{chat_id}/messages",
        headers=headers
    )
    response.raise_for_status()
    return response.json().get("messages", [])

def get_transactions(creator_id, days_back=30):
    cutoff = (datetime.utcnow() - timedelta(days=days_back)).isoformat()
    response = requests.get(
        f"{BASE_URL}/payouts/transactions",
        headers=headers,
        params={"creatorId": creator_id, "since": cutoff, "limit": 5000}
    )
    response.raise_for_status()
    return response.json().get("transactions", [])

def build_chatter_attribution(creator_id, days_back=30):
    chats = get_chats(creator_id, days_back)
    transactions = get_transactions(creator_id, days_back)

    # Build a map of fan_id -> list of transactions with parsed timestamps
    fan_transactions = defaultdict(list)
    for txn in transactions:
        fan_transactions[txn["fanId"]].append({
            "amount": txn["amount"],
            "timestamp": datetime.fromisoformat(txn["timestamp"]),
            "type": txn.get("type")
        })

    chatter_stats = defaultdict(lambda: {
        "messages_sent": 0,
        "conversations": 0,
        "attributed_revenue": 0.0,
        "attributed_transactions": 0
    })

    for chat in chats:
        fan_id = chat["fanId"]
        chatter_id = chat.get("chatterId", "unassigned")
        messages = get_chat_messages(chat["chatId"])
        outbound = [m for m in messages if m.get("direction") == "outbound"]
        if not outbound:
            continue

        chatter_stats[chatter_id]["messages_sent"] += len(outbound)
        chatter_stats[chatter_id]["conversations"] += 1

        # The last outbound message in this chat anchors the attribution window
        last_outbound_time = max(
            datetime.fromisoformat(m["timestamp"]) for m in outbound
        )

        # Attribution window: 24 hours, with decay weighting
        for txn in fan_transactions.get(fan_id, []):
            delta_hours = (txn["timestamp"] - last_outbound_time).total_seconds() / 3600
            if delta_hours < 0:
                continue  # transaction predates the message, no credit
            if delta_hours <= 4:
                weight = 1.0
            elif delta_hours <= 12:
                weight = 0.7
            elif delta_hours <= 24:
                weight = 0.4
            else:
                continue
            chatter_stats[chatter_id]["attributed_revenue"] += txn["amount"] * weight
            chatter_stats[chatter_id]["attributed_transactions"] += 1

    return chatter_stats

stats = build_chatter_attribution("creator_123", days_back=30)

print(f"\n{'Chatter':<20} {'Messages':>10} {'Convos':>8} {'Revenue':>12} {'$/Msg':>8} {'$/Convo':>10}")
print("-" * 72)
for chatter_id, data in sorted(stats.items(), key=lambda x: x[1]["attributed_revenue"], reverse=True):
    msgs = data["messages_sent"]
    convos = data["conversations"]
    rev = data["attributed_revenue"]
    per_msg = rev / msgs if msgs else 0
    per_convo = rev / convos if convos else 0
    print(f"{chatter_id:<20} {msgs:>10} {convos:>8} {rev:>12.2f} {per_msg:>8.2f} {per_convo:>10.2f}")
```
When we ran this for the first time across our main creator’s account, the output was a gut punch.
What the Numbers Showed
The table that printed out had Marcus at the top by message count and near the bottom by revenue per message. David was buried in the middle by message count and sitting alone at the top by every revenue metric.
The specific numbers for that 30-day period:
Marcus: 4,200 messages sent, 187 conversations, $756 attributed revenue. $0.18 per message. $4.04 per conversation.
David: 820 messages sent, 41 conversations, $1,968 attributed revenue. $2.40 per message. $48.00 per conversation.
David’s revenue-per-conversation was nearly 12 times higher. His revenue-per-message was more than 13 times higher.
Three other chatters sat in between. Across the whole team, Marcus represented roughly 35% of all messages sent but only 11% of attributed revenue; David represented 7% of all messages sent and 27% of attributed revenue.
The labor cost implications were immediate. Marcus was being paid hourly. His effective cost per dollar of attributed revenue was $0.87 — nearly a dollar of labor cost for every dollar of revenue generated. David’s effective cost per dollar of attributed revenue was $0.09.
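The cost-per-revenue-dollar figure is just labor cost divided by attributed revenue. A minimal sketch — the hourly rate and hours below are made-up illustration values, not our actual payroll numbers:

```python
def cost_per_attributed_dollar(hourly_rate, hours_worked, attributed_revenue):
    """Labor cost per dollar of attributed revenue for one chatter.

    hourly_rate and hours_worked come from payroll records;
    attributed_revenue comes from the attribution model above.
    """
    labor_cost = hourly_rate * hours_worked
    if attributed_revenue <= 0:
        return float("inf")  # all cost, no attributable revenue
    return labor_cost / attributed_revenue

# Hypothetical example: $5/hr, 120 hours, $690 attributed revenue
print(round(cost_per_attributed_dollar(5.0, 120, 690.0), 2))  # → 0.87
```

Anything approaching $1.00 of labor per $1.00 of attributed revenue means the role is roughly break-even before any other costs.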
Understanding Why Marcus Was Underperforming
The attribution model told us there was a gap. It didn’t tell us why. For that, we had to look at conversation content.
When we read through a sample of Marcus’s conversations, the pattern was clear: Marcus was a volume chatter. He opened conversations constantly — “Hey! How are you?” — but he was not good at building toward a purchase. His messages were friendly but low-stakes, low-intent. He kept conversations going without moving them anywhere.
David’s conversations were different. Shorter in message count, but more directed. He asked questions that surfaced fan preferences. He introduced PPV offers in context — after a fan mentioned wanting something specific — rather than as cold pitches. His conversion rate wasn’t magic; it was just competent sales craft applied to a chat window.
Marcus wasn’t a bad employee. He was a bad fit for the job as it should have been defined. He was good at high-volume outreach, not at conversion-focused conversation. We had been measuring him on volume (where he excelled) and ignoring conversion (where he didn’t).
The KPI Shift and Its Downstream Effects
We rebuilt our OnlyFans chatter KPI framework around three metrics in priority order:
1. Revenue per conversation (primary). This is the north star. It captures the quality of the work, not the quantity.
2. Conversations per shift (secondary). Volume still matters — you want chatters working, not sitting idle — but it’s a secondary constraint, not the primary objective. A chatter who has 50 conversations per shift and converts at $30 per conversation is doing better work than one who has 100 conversations and converts at $4.
3. Response time (hygiene metric). Fans expect replies within minutes during active hours. We track this as a minimum threshold, not a ranking metric. Everyone needs to pass; no one gets rewarded for being faster than necessary.
The shift to revenue-per-conversation as the primary KPI changed behavior almost immediately. Chatters who had been padding message counts with low-value follow-ups started being more deliberate. The ones who couldn’t make that shift became visible quickly — they dropped in the rankings without being able to hide behind volume.
We also changed how we train new chatters. Instead of “here’s the script, send as many messages as you can,” the training now centers on conversation flow: how to identify a fan’s interest, how to present an offer in context, how to handle an objection, when to close and when to pull back. The scripts still exist, but they’re taught as frameworks for conversion craft, not templates to copy-paste at volume.
Marcus After the Restructure
We had a direct conversation with Marcus about the data. To his credit, he took it well. He asked to see the numbers himself, and we walked through the attribution model with him. He understood what the gap represented.
We moved Marcus to a different role: new subscriber outreach. His strength — high volume, friendly, persistent — is actually an asset in that context. Opening conversations with new subscribers, getting them to reply, making them feel seen in the first 48 hours. That work is less about conversion and more about engagement, and Marcus is genuinely good at it.
His revenue attribution in the new role is still lower per message, but the engagement metrics — reply rates from new subscribers, day-7 retention correlated to his outreach — are meaningfully better than what we were seeing before. We’ve put his volume to work in a context where volume is the right input metric.
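Reply rate for new-subscriber outreach can be computed from the same message data used in the attribution script. A minimal sketch, assuming the same "direction" and ISO "timestamp" message fields as above and a 48-hour reply window:

```python
from datetime import datetime, timedelta

def new_subscriber_reply_rate(chats, window_hours=48):
    """Share of chats where the fan replied within `window_hours`
    of the chatter's first outbound message. Each chat is a list of
    message dicts with "direction" and ISO "timestamp" fields
    (field names assumed, matching the attribution script above)."""
    opened = replied = 0
    for messages in chats:
        outbound = [m for m in messages if m["direction"] == "outbound"]
        if not outbound:
            continue
        opened += 1
        first_out = min(datetime.fromisoformat(m["timestamp"]) for m in outbound)
        deadline = first_out + timedelta(hours=window_hours)
        if any(m["direction"] == "inbound"
               and first_out < datetime.fromisoformat(m["timestamp"]) <= deadline
               for m in messages):
            replied += 1
    return replied / opened if opened else 0.0

# Synthetic example: one chat gets a reply within the window, one doesn't
chat_a = [
    {"direction": "outbound", "timestamp": "2024-05-01T09:00:00"},
    {"direction": "inbound",  "timestamp": "2024-05-01T14:30:00"},
]
chat_b = [
    {"direction": "outbound", "timestamp": "2024-05-01T10:00:00"},
]
print(new_subscriber_reply_rate([chat_a, chat_b]))  # → 0.5
```

Correlating this rate with day-7 retention takes joining against subscription data, but the reply rate alone is enough to see whether high-volume outreach is actually landing.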
David is now our lead conversion chatter. He has first access to high-value fan queues. We pay him more. The business justification is in the data.
What This Means for Your Team
If you are managing chatters and your primary performance metric is message count or hours worked, you are almost certainly paying for volume when what you actually need is conversion. The data to find the gap is sitting in your chat logs and transaction records — it just needs to be connected.
The attribution model above is imperfect, but imperfect attribution that directionally ranks your chatters correctly is infinitely more useful than no attribution at all. Run it for 30 days. Look at the revenue-per-conversation column. If the ranking surprises you, that surprise is where your management attention should go.
For more on understanding what drives fan spending behavior, see our post on PPV pricing segmentation by fan spend history and the full analytics endpoint reference.
High-volume activity that doesn’t convert is not an asset — it is an overhead cost with a friendly interface. The only way to know the difference is to build attribution and measure what matters.
See the full capabilities available on the pricing page, or start pulling your own chatter and transaction data from the API documentation.