Case Study · 13 min read

What Breaks at 5, 10, and 20 Creators — Our Agency Scaling Retrospective

At 3 creators we felt like geniuses. At 8 we were drowning. At 15 we nearly shut down. This is an honest account of what specifically broke at each growth threshold and how data visibility solved each one.

By OFAPI Team

We scaled from 3 to 30 creators in 18 months. That sentence sounds like a success story. What it does not capture is the series of near-catastrophic operational failures we survived mostly through luck and stubbornness — and which we eventually solved not through management insight but through data.


This is the honest version. Here is what actually broke at each threshold, what it cost us, and what we did about it.

The agencies that scale past 20 creators do not do it by working harder. They do it by building the infrastructure before they hit the wall — not after.

At 3 Creators: We Thought We Were Good at This

With three creators, our operation felt lean and professional. Shared spreadsheet tracking revenue by creator. A group chat where the chatter team flagged anything unusual. Weekly check-ins. We knew the numbers by heart.

What we did not know — because we had no way to see it — was that performance was almost entirely a function of the creators being good, not the systems being good. One of three creators generated 70% of total revenue. She was proactive, prolific, and had an existing audience. Nothing particularly special was being done operationally — the operation was benefiting from her momentum.

When she left — amicably, after 11 months, to manage her own operation — we lost 70% of our revenue in a week.

The two remaining creators were not enough to sustain the team we had built. We rebuilt, but the experience taught us something important: we had been mistaking creator quality for agency quality. We had no real systems. We had a spreadsheet and a group chat. There is a difference between running an agency and riding a talented creator’s wave — and most early-stage agencies do not know which one they are doing.

At 5 Creators: Manual Chat Monitoring Fails

The first concrete operational breakdown happened at five creators. Five creators each generating 50-150 fan conversations per day meant 250-750 conversations happening simultaneously across the portfolio, all needing monitoring. Our chatter team was small, responsible for multiple creators. When one creator had a high-volume day, coverage on the others thinned.

We had no visibility into this at the time. We found out about coverage gaps the way most agencies do: a creator complained that fans were leaving, or we noticed a revenue dip and tried to trace it backward.

The fix required getting off manual monitoring entirely and building automated coverage alerts. We needed to know in real time which conversations were sitting unanswered, not find out a week later through revenue variance.

import requests
from datetime import datetime, timedelta
from collections import defaultdict

API_KEY = "your_api_key"
BASE_URL = "https://api.ofapi.dev/api/v1"

headers = {"X-API-Key": API_KEY}

def get_unanswered_chats(creator_id, threshold_minutes=45):
    since = (datetime.utcnow() - timedelta(hours=6)).isoformat()
    response = requests.get(
        f"{BASE_URL}/chats",
        headers=headers,
        params={"creatorId": creator_id, "since": since, "limit": 500}
    )
    response.raise_for_status()
    chats = response.json().get("chats", [])

    unanswered = []
    now = datetime.utcnow()

    for chat in chats:
        last_fan_message = chat.get("lastFanMessageAt")
        last_creator_message = chat.get("lastCreatorMessageAt")

        if not last_fan_message:
            continue

        # Strip a trailing "Z" if present so fromisoformat accepts the timestamp
        # and the naive-UTC comparison below stays consistent
        fan_time = datetime.fromisoformat(last_fan_message.rstrip("Z"))

        # Only flag if the fan's last message is more recent than the creator's last message
        if last_creator_message:
            creator_time = datetime.fromisoformat(last_creator_message.rstrip("Z"))
            if creator_time >= fan_time:
                continue

        minutes_waiting = (now - fan_time).total_seconds() / 60

        if minutes_waiting >= threshold_minutes:
            unanswered.append({
                "chat_id": chat["chatId"],
                "fan_id": chat["fanId"],
                "username": chat.get("fanUsername"),
                "minutes_waiting": round(minutes_waiting),
                "fan_spend_90d": chat.get("fanSpend90d", 0)
            })

    # Sort by highest spenders first
    unanswered.sort(key=lambda x: x["fan_spend_90d"], reverse=True)
    return unanswered

def portfolio_coverage_report(creator_ids):
    print(f"\nCoverage Report — {datetime.utcnow().strftime('%Y-%m-%d %H:%M')} UTC")
    print("=" * 60)

    total_unanswered = 0
    for creator_id in creator_ids:
        gaps = get_unanswered_chats(creator_id)
        total_unanswered += len(gaps)
        if gaps:
            high_value = [g for g in gaps if g["fan_spend_90d"] > 50]
            print(f"\n{creator_id}: {len(gaps)} unanswered ({len(high_value)} high-value fans)")
            for g in gaps[:5]:
                print(f"  {g['username']:<20} {g['minutes_waiting']:>4}min  ${g['fan_spend_90d']:.0f} 90d spend")

    print(f"\nTotal unanswered across portfolio: {total_unanswered}")

We ran this report every 30 minutes and routed alerts to the chatter team’s shared queue. The high-value sort was critical — when coverage was thin, chatters prioritized fans with demonstrated spend history, not whoever happened to message first.

After two weeks, unanswered conversation time across the portfolio dropped from an average of 3.2 hours to 28 minutes. That change alone moved aggregate monthly revenue by roughly 8% — not because we did anything new, but because fans who message and do not get a response for three hours often do not come back.
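Routing mattered as much as detection: when coverage was thin, a fan with $200 of spend history waiting 50 minutes needed a page, not a queue position. A minimal sketch of that split, consuming the gap dicts the report above produces (the thresholds and the `route_coverage_gaps` name are illustrative, not the exact rules we shipped):

```python
def route_coverage_gaps(gaps, spend_floor=50, wait_ceiling=90):
    """Split unanswered-chat gaps into an immediate-page list and a normal queue.

    A gap pages immediately when the fan has demonstrated spend history
    (90-day spend at or above spend_floor) or has been waiting past
    wait_ceiling minutes; everything else goes to the shared queue.
    """
    page_now, queue = [], []
    for gap in gaps:
        if gap.get("fan_spend_90d", 0) >= spend_floor or gap["minutes_waiting"] >= wait_ceiling:
            page_now.append(gap)
        else:
            queue.append(gap)
    # Page the highest spenders first; work the queue longest-waiting first.
    page_now.sort(key=lambda g: g["fan_spend_90d"], reverse=True)
    queue.sort(key=lambda g: g["minutes_waiting"], reverse=True)
    return page_now, queue
```

The point of the two-tier split is that "first come, first served" is the wrong policy under constrained coverage; demonstrated spend and extreme wait both jump the line.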

At 10 Creators: Revenue Tracking Becomes Impossible

The spreadsheet broke at ten creators. Not metaphorically — the actual Google Sheet broke. Formulas stopped referencing the right cells. People were editing simultaneously and overwriting each other. The “source of truth” was neither a source nor true.

Ten creators generating revenue across subscriptions, PPVs, custom content, and tips, with different payout schedules, different commission structures, and different monthly patterns. The complexity was beyond what any spreadsheet-based system could handle cleanly.
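The commission side of that complexity is worth isolating, because it is where spreadsheet formula chains fail silently. The arithmetic is trivial once the rates live in one place instead of scattered across cells; a sketch with hypothetical creator IDs and rates:

```python
# Hypothetical per-creator commission rates: the agency's share of gross.
COMMISSION_RATES = {
    "creator_a": 0.30,
    "creator_b": 0.25,
}
DEFAULT_RATE = 0.30

def split_gross(creator_id, gross):
    """Return (agency_take, creator_payout) for one creator's gross revenue."""
    rate = COMMISSION_RATES.get(creator_id, DEFAULT_RATE)
    agency_take = round(gross * rate, 2)
    return agency_take, round(gross - agency_take, 2)
```

One dict, one function, no cell references to break when a row is inserted.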

We needed a single pipeline that pulled live data from the API for every creator on a consistent schedule and stored it in a format we could query. No manual entry. No formula chains. Raw numbers from the source, transformed once, available to everyone.

import requests
from datetime import datetime

API_KEY = "your_api_key"
BASE_URL = "https://api.ofapi.dev/api/v1"

headers = {"X-API-Key": API_KEY}

def get_creator_revenue_snapshot(creator_id):
    overview_resp = requests.get(
        f"{BASE_URL}/statistics/overview",
        headers=headers,
        params={"creatorId": creator_id}
    )
    overview_resp.raise_for_status()
    overview = overview_resp.json()

    payout_resp = requests.get(
        f"{BASE_URL}/payouts/statistics",
        headers=headers,
        params={"creatorId": creator_id}
    )
    payout_resp.raise_for_status()
    payout = payout_resp.json()

    return {
        "creator_id": creator_id,
        "snapshot_at": datetime.utcnow().isoformat(),
        "revenue_30d": overview.get("revenue30d", 0),
        "revenue_prev_30d": overview.get("revenuePrev30d", 0),
        "revenue_mtd": overview.get("revenueMtd", 0),
        "active_subscribers": overview.get("activeSubscribers", 0),
        "new_subscribers_30d": overview.get("newSubscribers30d", 0),
        "churn_30d": overview.get("churnedSubscribers30d", 0),
        "ppv_revenue_30d": overview.get("ppvRevenue30d", 0),
        "available_balance": payout.get("currentBalance", 0),
        "total_paid_out_30d": payout.get("paidOut30d", 0),
    }

def build_portfolio_revenue_table(creator_ids):
    snapshots = []
    for creator_id in creator_ids:
        try:
            snap = get_creator_revenue_snapshot(creator_id)
            snapshots.append(snap)
        except Exception as e:
            print(f"Failed to pull {creator_id}: {e}")

    snapshots.sort(key=lambda x: x["revenue_30d"], reverse=True)

    total_30d = sum(s["revenue_30d"] for s in snapshots)
    total_prev = sum(s["revenue_prev_30d"] for s in snapshots)
    mom_change = ((total_30d - total_prev) / total_prev * 100) if total_prev else 0

    print(f"\nPortfolio Revenue — pulled {datetime.utcnow().strftime('%Y-%m-%d %H:%M')}")
    print(f"Total 30d: ${total_30d:,.0f}  |  Prev 30d: ${total_prev:,.0f}  |  MoM: {mom_change:+.1f}%")
    print(f"\n{'Creator':<20} {'30d Rev':>10} {'MoM':>8} {'Subs':>6} {'Churn':>6} {'PPV%':>6}")
    print("-" * 60)

    for s in snapshots:
        mom = ((s["revenue_30d"] - s["revenue_prev_30d"]) / s["revenue_prev_30d"] * 100
               if s["revenue_prev_30d"] else 0)
        ppv_pct = (s["ppv_revenue_30d"] / s["revenue_30d"] * 100
                   if s["revenue_30d"] else 0)
        print(
            f"{s['creator_id']:<20} "
            f"${s['revenue_30d']:>9,.0f} "
            f"{mom:>+7.1f}% "
            f"{s['active_subscribers']:>6} "
            f"{s['churn_30d']:>6} "
            f"{ppv_pct:>5.0f}%"
        )

    return snapshots

We ran this every morning and wrote the output to a shared read-only dashboard. The spreadsheet was retired. Revenue tracking became a pull operation, not a manual entry operation. Our weekly revenue review went from 90 minutes to compile to a five-minute read of the morning report.
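Printing the table was only half of the pipeline; the other half was keeping every pull so trends could be queried later. A minimal persistence sketch using SQLite, fed by the snapshot dicts the pipeline above produces (the schema and the `save_snapshots` name are illustrative):

```python
import sqlite3

def save_snapshots(db_path, snapshots):
    """Append daily revenue snapshots to a local SQLite table.

    Each row keeps the pull timestamp, so the table doubles as a history:
    per-creator trend lines fall out of a simple GROUP BY later.
    """
    conn = sqlite3.connect(db_path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS revenue_snapshots (
            creator_id TEXT NOT NULL,
            snapshot_at TEXT NOT NULL,
            revenue_30d REAL,
            active_subscribers INTEGER,
            ppv_revenue_30d REAL
        )
    """)
    conn.executemany(
        "INSERT INTO revenue_snapshots VALUES (?, ?, ?, ?, ?)",
        [
            (s["creator_id"], s["snapshot_at"], s["revenue_30d"],
             s["active_subscribers"], s["ppv_revenue_30d"])
            for s in snapshots
        ],
    )
    conn.commit()
    conn.close()
```

Append-only storage is the design choice that matters here: today's report is just the latest rows, and last quarter's question ("when did this creator's PPV mix shift?") is answerable without having kept a separate record.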

At 15 Creators: Accountability Disappears

The 15-creator threshold was the most dangerous one. It was where individual creator performance became invisible inside aggregate numbers.

When total portfolio revenue is growing — even modestly — it masks individual creator deterioration. Three creators can be declining sharply while two are growing fast enough to keep the total moving upward. In the monthly review, you look at the portfolio total, see growth, and move on. The declining creators do not get attention because the aggregate number does not ask for it.
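The masking is pure arithmetic, and a toy example makes it concrete (invented numbers, not our actual figures):

```python
# Toy portfolio: per-creator 30-day revenue, last month vs this month.
last_month = {"a": 10_000, "b": 10_000, "c": 8_000, "d": 6_000, "e": 6_000}
this_month = {"a": 16_000, "b": 14_000, "c": 5_500, "d": 4_000, "e": 4_500}

total_change = sum(this_month.values()) - sum(last_month.values())
decliners = [c for c in last_month if this_month[c] < last_month[c]]

# The portfolio is up $4,000 month over month...
assert total_change == 4_000
# ...while three of five creators declined, one of them by over 30%.
assert decliners == ["c", "d", "e"]
```

An aggregate review sees +10% and moves on; three creators are quietly in trouble.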

We nearly lost a creator at this stage — not because she left, but because her revenue had declined 41% over eight weeks and nobody had caught it. It was invisible inside a portfolio that was growing overall. When she brought it up herself on a creator call, we were embarrassed. We had no explanation. The answer — though we did not say it — was that we had no system that would have noticed.

The immediate fix was a per-creator weekly anomaly alert that fired whenever any creator’s 7-day revenue dropped more than 20% below their trailing 30-day average.

import requests
from datetime import datetime

API_KEY = "your_api_key"
BASE_URL = "https://api.ofapi.dev/api/v1"

headers = {"X-API-Key": API_KEY}

def check_revenue_anomalies(creator_ids, drop_threshold=0.20):
    alerts = []

    for creator_id in creator_ids:
        try:
            resp = requests.get(
                f"{BASE_URL}/statistics/overview",
                headers=headers,
                params={"creatorId": creator_id}
            )
            resp.raise_for_status()
            data = resp.json()

            rev_7d = data.get("revenue7d", 0)
            rev_30d = data.get("revenue30d", 0)

            if rev_30d == 0:
                continue

            daily_30d_avg = rev_30d / 30
            daily_7d_avg = rev_7d / 7

            if daily_30d_avg == 0:
                continue

            decline_pct = (daily_30d_avg - daily_7d_avg) / daily_30d_avg

            if decline_pct >= drop_threshold:
                alerts.append({
                    "creator_id": creator_id,
                    "decline_pct": decline_pct * 100,
                    "daily_7d_avg": daily_7d_avg,
                    "daily_30d_avg": daily_30d_avg,
                    "severity": "critical" if decline_pct >= 0.35 else "warning"
                })
        except Exception as e:
            print(f"Error checking {creator_id}: {e}")

    if alerts:
        alerts.sort(key=lambda x: x["decline_pct"], reverse=True)
        print(f"\nRevenue Anomaly Alerts — {datetime.utcnow().strftime('%Y-%m-%d')}")
        for a in alerts:
            label = "CRITICAL" if a["severity"] == "critical" else "WARNING"
            print(
                f"  [{label}] {a['creator_id']}: "
                f"-{a['decline_pct']:.1f}% vs 30d avg "
                f"(${a['daily_7d_avg']:.0f}/day vs ${a['daily_30d_avg']:.0f}/day)"
            )
    else:
        print("No revenue anomalies detected.")

    return alerts

The first time this ran, it flagged four creators. Two had already been flagged manually. Two had not. One of the two we had missed was down 38% over the prior seven days because a content posting schedule had quietly lapsed when the creator’s personal circumstances changed. A conversation within 24 hours of the alert caught the problem six weeks before it would have surfaced in an aggregate review.

At 20 Creators: What the Multi-Creator Dashboard Solved

By twenty creators, the individual alert systems were necessary but not sufficient. We needed a single aggregated view where a manager could look at the full portfolio in one place — not just anomalies, but the complete picture. Sortable. Comparable. Shared across the entire management team.

import requests
from datetime import datetime

API_KEY = "your_api_key"
BASE_URL = "https://api.ofapi.dev/api/v1"

headers = {"X-API-Key": API_KEY}

def get_org_creators(org_id):
    response = requests.get(
        f"{BASE_URL}/organizations/{org_id}/models",
        headers=headers
    )
    response.raise_for_status()
    return response.json().get("models", [])

def get_creator_summary(creator_id):
    resp = requests.get(
        f"{BASE_URL}/statistics/overview",
        headers=headers,
        params={"creatorId": creator_id}
    )
    resp.raise_for_status()
    data = resp.json()

    rev_30d = data.get("revenue30d", 0)
    rev_prev = data.get("revenuePrev30d", 0)
    mom = ((rev_30d - rev_prev) / rev_prev * 100) if rev_prev else 0

    active_subs = data.get("activeSubscribers", 0)
    churned = data.get("churnedSubscribers30d", 0)
    new_subs = data.get("newSubscribers30d", 0)
    retention = (active_subs / (active_subs + churned) * 100) if (active_subs + churned) > 0 else 0

    return {
        "creator_id": creator_id,
        "revenue_30d": rev_30d,
        "mom_change_pct": mom,
        "active_subscribers": active_subs,
        "new_subscribers_30d": new_subs,
        "churn_30d": churned,
        "retention_rate": retention,
        "ppv_revenue_30d": data.get("ppvRevenue30d", 0),
    }

def run_multi_creator_dashboard(org_id, sort_by="revenue_30d"):
    creators = get_org_creators(org_id)
    creator_ids = [c["creatorId"] for c in creators]

    summaries = []
    for cid in creator_ids:
        try:
            summaries.append(get_creator_summary(cid))
        except Exception as e:
            print(f"Failed to pull {cid}: {e}")

    summaries.sort(key=lambda x: x.get(sort_by, 0), reverse=True)

    portfolio_rev = sum(s["revenue_30d"] for s in summaries)
    portfolio_subs = sum(s["active_subscribers"] for s in summaries)

    print(f"\nPortfolio Dashboard — {datetime.utcnow().strftime('%Y-%m-%d')}")
    print(f"Creators: {len(summaries)}  |  Total 30d Revenue: ${portfolio_rev:,.0f}  |  Total Active Subs: {portfolio_subs:,}")
    print(f"\n{'Creator':<20} {'30d Rev':>10} {'MoM':>8} {'Subs':>6} {'Retention':>10} {'New Subs':>9}")
    print("-" * 67)

    for s in summaries:
        print(
            f"{s['creator_id']:<20} "
            f"${s['revenue_30d']:>9,.0f} "
            f"{s['mom_change_pct']:>+7.1f}% "
            f"{s['active_subscribers']:>6,} "
            f"{s['retention_rate']:>9.1f}% "
            f"{s['new_subscribers_30d']:>9,}"
        )

    return summaries

# Usage
dashboard = run_multi_creator_dashboard("org_abc123", sort_by="mom_change_pct")

The dashboard runs on demand and as a morning report. Every person on the management team starts their day with the same picture. When someone says “Creator X is struggling,” everyone can pull up the same numbers immediately rather than spending 10 minutes reconstructing context from memory.

What We Would Tell a Three-Creator Agency

If you are running three creators and thinking about scaling to ten or fifteen, these are the infrastructure investments that should precede growth, not follow it:

1. Automated coverage monitoring before you add the fourth creator. Manual monitoring does not scale past three. Build the unanswered conversation alert before you need it. The cost of building it early is a few hours. The cost of not having it is measured in revenue gaps you will never be able to explain.

2. API-powered revenue tracking before you add the seventh creator. Spreadsheets break at ten. Build the pipeline at six when you still have time to do it deliberately.

3. Per-creator anomaly alerts before you add the twelfth creator. Aggregate numbers hide individual problems. You need individual visibility before the portfolio gets large enough to mask it — because once it masks it, you find out the way we did: from a creator who is already 41% down calling you to ask what happened.

4. Portfolio-wide dashboard before you add the eighteenth creator. At fifteen or more, you need a single consistent view that everyone on the team shares. Ad hoc reporting from different data sources creates confusion and misalignment.

We built each of these systems reactively — after we hit the wall, not before. Building them proactively would have saved at minimum three months of operational chaos and one near-departure from a creator who felt she was being under-served.


Growth that outruns your data visibility is not growth. It is managed chaos that happens to be generating revenue for now. The ceiling is lower than you think, and it arrives faster than you expect.

Build the monitoring infrastructure at three creators. It takes a day. You will spend that day again — and again — in operational fires if you do not.

For more on the monitoring components referenced in this post, see creator health scoring, churn prediction from engagement signals, and chatter performance attribution.

See the full portfolio management capabilities on the pricing page, or start building your own multi-creator data pipeline from the API documentation.

Get API access — start building.

Full REST API for OnlyFans automation. Get started in minutes.
