
Building a Custom OnlyFans CRM from Scratch with the API

Off-the-shelf CRMs cost $68 per creator account per month and still did not fit our workflow. We built our own in two weeks using the OnlyFans API. Here is the full architecture, the fan profile builder, and the engagement scoring system we use in production.

OFAPI Team

We spent seven months paying for off-the-shelf chatter management software before we admitted it was not going to work. The tools we tried were well-built for what they were designed to do — streamline chat workflows for individual creators. They were not designed for an agency managing ten accounts simultaneously, each with a different chatter, each needing different analytics, all feeding into a single P&L.

The per-account pricing was the first problem. At $68 per creator account per month, running eight active accounts meant $544 monthly — before any other tooling. The second problem was that we could not get our data out in a useful format. Engagement scores existed inside the platform dashboards. Revenue attribution lived in a different view. Fan history was accessible only through the chat interface itself, not exportable, not queryable.

The third problem, which only became clear once we started building our own system, was that the tools we were paying for were answering questions we had not thought carefully enough about. We wanted to know which fans were at churn risk, which fans were likely to respond to a PPV push, which fans had gone cold but historically spent heavily. None of the off-the-shelf tools surfaced those answers cleanly.

Two weeks of development. Zero recurring software costs. Full ownership of the data. Here is how we built it.

The Architecture Decision

We considered several approaches: a hosted database service, a proper relational database, even a set of structured JSON files. We landed on SQLite for the fan profile store because our data volumes did not justify the operational overhead of a hosted database, and SQLite’s single-file portability made backup and migration trivial.

The backend is a Python service that runs on a schedule — hourly for active creator accounts, daily for the summary rollups. It pulls from the OnlyFans API, transforms the data, and writes it into the local database. A Flask dashboard sits on top of it for the chatters, accessible on our internal network.
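The scheduler itself can be as simple as cron invoking a script. A minimal sketch of one hourly pass, where the creator IDs are placeholders and `sync_creator` stands in for the pull/transform/upsert cycle built in the rest of this post:

```python
# Hypothetical sketch: ACTIVE_CREATORS and sync_creator are placeholders,
# not part of the API itself.
ACTIVE_CREATORS = ["creator_a", "creator_b"]

def sync_creator(creator_id: str) -> None:
    """Stub for the pull -> transform -> upsert cycle shown below."""
    print(f"syncing {creator_id}")

def run_cycle(creators: list[str]) -> list[str]:
    """One scheduled pass; one failing account never stalls the others."""
    synced = []
    for creator_id in creators:
        try:
            sync_creator(creator_id)
            synced.append(creator_id)
        except Exception as exc:
            print(f"sync failed for {creator_id}: {exc}")
    return synced

print(run_cycle(ACTIVE_CREATORS))  # ['creator_a', 'creator_b']
```

The per-creator try/except matters more than it looks: with ten accounts pulling hourly, one expired token or flaky response should log and move on, not kill the whole run.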

Total infrastructure cost: one small VPS at around $6 per month. The API calls cost credits based on usage. The database and dashboard are ours indefinitely.

Building the Fan Profile Database

The core data model is the fan profile: one record per fan per creator account, updated on each API pull. The profile stores everything we can derive from the available endpoints — spend history, engagement signals, content preferences inferred from purchase behavior, and our own risk scores.

import sqlite3
import requests
from datetime import datetime

API_KEY = "your_api_key"
BASE_URL = "http://157.180.79.226:4024/api/v1"
DB_PATH = "agency_crm.db"

headers = {"X-API-Key": API_KEY}

def init_database():
    conn = sqlite3.connect(DB_PATH)
    # Dict-like rows, so later code can do dict(row) on SELECT results
    conn.row_factory = sqlite3.Row
    conn.execute("""
        CREATE TABLE IF NOT EXISTS fan_profiles (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            creator_id TEXT NOT NULL,
            fan_id TEXT NOT NULL,
            username TEXT,
            -- Spend data
            total_spend REAL DEFAULT 0,
            spend_30d REAL DEFAULT 0,
            spend_prior_30d REAL DEFAULT 0,
            sub_spend REAL DEFAULT 0,
            ppv_spend REAL DEFAULT 0,
            tip_spend REAL DEFAULT 0,
            -- Engagement
            subscription_status TEXT,
            subscription_expires TEXT,
            last_active_at TEXT,
            days_since_active INTEGER,
            message_count_30d INTEGER DEFAULT 0,
            ppv_open_rate REAL DEFAULT 0,
            -- Scoring
            ltv_score REAL DEFAULT 0,
            engagement_score REAL DEFAULT 0,
            churn_risk_score REAL DEFAULT 0,
            -- Meta
            first_seen TEXT,
            last_updated TEXT,
            UNIQUE(creator_id, fan_id)
        )
    """)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS fan_events (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            creator_id TEXT NOT NULL,
            fan_id TEXT NOT NULL,
            event_type TEXT NOT NULL,
            event_value REAL,
            event_at TEXT,
            notes TEXT
        )
    """)
    conn.commit()
    return conn

def pull_fan_data(creator_id: str, limit: int = 100) -> list:
    """Pull top fans and their detailed profiles from the API."""
    # Get top fans by spend
    top_fans_resp = requests.get(
        f"{BASE_URL}/stats/fans/top",
        headers=headers,
        params={"creatorId": creator_id, "limit": limit},
        timeout=30,
    )
    top_fans_resp.raise_for_status()
    top_fans = top_fans_resp.json().get("fans", [])

    enriched = []
    for fan in top_fans:
        fan_id = fan.get("id")
        if not fan_id:
            continue

        # Get detailed fan info
        detail_resp = requests.get(
            f"{BASE_URL}/fans/info",
            headers=headers,
            params={"fanId": fan_id},
            timeout=30,
        )
        if detail_resp.status_code != 200:
            continue

        detail = detail_resp.json()
        enriched.append({**fan, **detail})

    return enriched

def build_fan_profile(creator_id: str, fan_data: dict) -> dict:
    """Transform raw API data into a structured fan profile."""
    now = datetime.utcnow().isoformat()

    total_spend = float(fan_data.get("totalSpend", 0))
    spend_30d = float(fan_data.get("last30dSpend", 0))
    spend_prior_30d = float(fan_data.get("prior30dSpend", 0))

    # Parse last active
    last_active_str = fan_data.get("lastActiveAt")
    days_since_active = None
    if last_active_str:
        try:
            last_active = datetime.fromisoformat(last_active_str.replace("Z", "+00:00"))
            days_since_active = (datetime.utcnow() - last_active.replace(tzinfo=None)).days
        except (ValueError, AttributeError):
            pass

    return {
        "creator_id": creator_id,
        "fan_id": fan_data.get("id"),
        "username": fan_data.get("username", "unknown"),
        "total_spend": total_spend,
        "spend_30d": spend_30d,
        "spend_prior_30d": spend_prior_30d,
        "sub_spend": float(fan_data.get("subscriptionSpend", 0)),
        "ppv_spend": float(fan_data.get("ppvSpend", 0)),
        "tip_spend": float(fan_data.get("tipSpend", 0)),
        "subscription_status": fan_data.get("subscriptionStatus"),
        "subscription_expires": fan_data.get("subscriptionExpires"),
        "last_active_at": last_active_str,
        "days_since_active": days_since_active,
        "message_count_30d": int(fan_data.get("messageCount30d", 0)),
        "ppv_open_rate": float(fan_data.get("ppvOpenRate", 0)),
        "first_seen": fan_data.get("firstSeenAt", now),
        "last_updated": now,
    }

def upsert_fan_profile(conn: sqlite3.Connection, profile: dict):
    """Insert or update a fan profile record."""
    conn.execute("""
        INSERT INTO fan_profiles (
            creator_id, fan_id, username, total_spend, spend_30d, spend_prior_30d,
            sub_spend, ppv_spend, tip_spend, subscription_status, subscription_expires,
            last_active_at, days_since_active, message_count_30d, ppv_open_rate,
            first_seen, last_updated
        ) VALUES (
            :creator_id, :fan_id, :username, :total_spend, :spend_30d, :spend_prior_30d,
            :sub_spend, :ppv_spend, :tip_spend, :subscription_status, :subscription_expires,
            :last_active_at, :days_since_active, :message_count_30d, :ppv_open_rate,
            :first_seen, :last_updated
        )
        ON CONFLICT(creator_id, fan_id) DO UPDATE SET
            username=excluded.username,
            total_spend=excluded.total_spend,
            spend_30d=excluded.spend_30d,
            spend_prior_30d=excluded.spend_prior_30d,
            sub_spend=excluded.sub_spend,
            ppv_spend=excluded.ppv_spend,
            tip_spend=excluded.tip_spend,
            subscription_status=excluded.subscription_status,
            subscription_expires=excluded.subscription_expires,
            last_active_at=excluded.last_active_at,
            days_since_active=excluded.days_since_active,
            message_count_30d=excluded.message_count_30d,
            ppv_open_rate=excluded.ppv_open_rate,
            last_updated=excluded.last_updated
    """, profile)
    conn.commit()

This gives us a continuously updated fan database. Every fan who appears in any creator’s top-fan list has a full profile record. The upsert logic means we never create duplicates — we always work with the most current data.
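The no-duplicates guarantee comes entirely from the `UNIQUE(creator_id, fan_id)` constraint plus `ON CONFLICT ... DO UPDATE`. A self-contained demo against an in-memory database, with the schema trimmed to the columns that matter:

```python
import sqlite3

# Two pulls for the same (creator_id, fan_id) pair must leave exactly
# one row, holding the newer spend figure.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fan_profiles (
        creator_id TEXT NOT NULL,
        fan_id TEXT NOT NULL,
        total_spend REAL DEFAULT 0,
        UNIQUE(creator_id, fan_id)
    )
""")

upsert = """
    INSERT INTO fan_profiles (creator_id, fan_id, total_spend)
    VALUES (?, ?, ?)
    ON CONFLICT(creator_id, fan_id) DO UPDATE SET
        total_spend = excluded.total_spend
"""
conn.execute(upsert, ("c1", "f1", 120.0))   # first hourly pull
conn.execute(upsert, ("c1", "f1", 185.5))   # later pull, higher lifetime spend
conn.commit()

rows = conn.execute("SELECT total_spend FROM fan_profiles").fetchall()
print(rows)  # [(185.5,)]
```

The `excluded.` prefix refers to the row that would have been inserted, which is how the second pull's values win on conflict.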

The Engagement Scoring Algorithm

Raw spend data tells you who has spent money. Engagement scoring tells you who is likely to spend more, and who is at risk of churning. We built a scoring system on top of the profile data that weights several signals:

def calculate_scores(conn: sqlite3.Connection, creator_id: str):
    """
    Calculate LTV, engagement, and churn risk scores for all fans
    of a given creator. Scores are 0-100.
    """
    cursor = conn.execute(
        "SELECT * FROM fan_profiles WHERE creator_id = ?", (creator_id,)
    )
    fans = [dict(row) for row in cursor.fetchall()]

    if not fans:
        return

    # Normalize total_spend across the creator's fan base
    max_spend = max(f["total_spend"] for f in fans) or 1
    max_spend_30d = max(f["spend_30d"] for f in fans) or 1

    for fan in fans:
        # LTV Score: blend of total spend and recent spend trajectory
        spend_pct = fan["total_spend"] / max_spend
        recent_pct = fan["spend_30d"] / max_spend_30d

        # Trend: is spend accelerating or decelerating?
        prior = fan["spend_prior_30d"] or 0.01  # guard against division by zero
        trend_ratio = fan["spend_30d"] / prior
        trend_bonus = min(max(trend_ratio - 1.0, -0.3), 0.3) * 20

        ltv_score = (spend_pct * 50) + (recent_pct * 30) + trend_bonus + 20
        ltv_score = max(0, min(100, ltv_score))

        # Engagement Score: activity signals
        engagement_score = 50  # baseline

        # Days since active (biggest signal)
        dsa = fan["days_since_active"] or 0  # unknown activity treated as recent
        if dsa == 0:
            engagement_score += 20
        elif dsa <= 1:
            engagement_score += 15
        elif dsa <= 3:
            engagement_score += 5
        elif dsa <= 7:
            engagement_score -= 10
        else:
            engagement_score -= min(dsa * 3, 40)

        # Message frequency
        msg_count = fan["message_count_30d"] or 0
        if msg_count >= 20:
            engagement_score += 15
        elif msg_count >= 5:
            engagement_score += 8
        elif msg_count == 0:
            engagement_score -= 15

        # PPV open rate
        ppv_rate = fan["ppv_open_rate"] or 0
        engagement_score += ppv_rate * 15  # up to +15 for 100% open rate

        engagement_score = max(0, min(100, engagement_score))

        # Churn Risk Score: higher = more at risk
        churn_risk = 0

        sub_status = fan["subscription_status"] or ""
        if sub_status == "expired":
            churn_risk = 100
        elif sub_status == "expiring_soon":
            churn_risk += 40

        # Spend decline
        if prior > 0 and fan["spend_30d"] < prior * 0.5:
            churn_risk += 30

        # Inactivity
        churn_risk += min(dsa * 4, 40)

        # Engagement counter-signal
        churn_risk = max(0, churn_risk - (engagement_score * 0.2))
        churn_risk = max(0, min(100, churn_risk))

        # Write scores back
        conn.execute("""
            UPDATE fan_profiles
            SET ltv_score = ?, engagement_score = ?, churn_risk_score = ?
            WHERE creator_id = ? AND fan_id = ?
        """, (ltv_score, engagement_score, churn_risk, creator_id, fan["fan_id"]))

    conn.commit()
    print(f"Scored {len(fans)} fans for creator {creator_id}")

We run score recalculation after every data pull. The churn risk score is what chatters see first when they open the dashboard in the morning — any fan with a churn risk above 60 gets a flag, and any fan above 80 gets an immediate action item assigned.
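The morning flag query is a single SELECT over the scored profiles. A sketch with illustrative fixture data, where the 60/80 tiers match the thresholds above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fan_profiles (creator_id TEXT, fan_id TEXT, churn_risk_score REAL)"
)
conn.executemany(
    "INSERT INTO fan_profiles VALUES (?, ?, ?)",
    [("c1", "f1", 85.0), ("c1", "f2", 65.0), ("c1", "f3", 30.0)],
)

def priority_queue(conn, creator_id):
    """Fans above the flag threshold, riskiest first, tiered for the dashboard."""
    return conn.execute("""
        SELECT fan_id, churn_risk_score,
               CASE WHEN churn_risk_score > 80 THEN 'action'
                    WHEN churn_risk_score > 60 THEN 'flag'
               END AS tier
        FROM fan_profiles
        WHERE creator_id = ? AND churn_risk_score > 60
        ORDER BY churn_risk_score DESC
    """, (creator_id,)).fetchall()

print(priority_queue(conn, "c1"))
# [('f1', 85.0, 'action'), ('f2', 65.0, 'flag')]
```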

The Flask Dashboard

The dashboard is intentionally minimal. Chatters do not need a complex interface — they need to see which fans need attention today and have quick access to the relevant context.

The three views they use most:

Priority Queue: fans sorted by churn risk score, filtered to the creator they are managing. One-click to see spend history, last active date, message frequency, and any notes the previous chatter left.

Revenue Forecast: fans sorted by LTV score with their 30-day spend trend. This is what the team leads use to plan PPV strategy — which fans have high LTV and high engagement, meaning they are likely PPV purchasers this week.

Winback List: fans who have gone inactive (no activity in 7+ days) but have significant historical spend. These are not fully churned — they are still subscribed but disengaged. The winback queue prompts the chatter to find a reason to re-open a conversation.
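The winback view is the same pattern: still subscribed, inactive seven or more days, meaningful historical spend. A sketch with fixture data, where the $100 spend floor is an illustrative threshold we tune per account:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fan_profiles (
    creator_id TEXT, fan_id TEXT, total_spend REAL,
    days_since_active INTEGER, subscription_status TEXT)""")
conn.executemany("INSERT INTO fan_profiles VALUES (?,?,?,?,?)", [
    ("c1", "whale_gone_quiet", 450.0, 12, "active"),   # the winback target
    ("c1", "active_whale",     600.0,  1, "active"),   # still engaged, skip
    ("c1", "quiet_low_spender", 15.0, 20, "active"),   # not worth the push
    ("c1", "churned",          300.0, 30, "expired"),  # already gone
])

winback = conn.execute("""
    SELECT fan_id FROM fan_profiles
    WHERE creator_id = ?
      AND subscription_status = 'active'
      AND days_since_active >= 7
      AND total_spend >= 100
    ORDER BY total_spend DESC
""", ("c1",)).fetchall()
print([r[0] for r in winback])  # ['whale_gone_quiet']
```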

What It Cost to Build and What It Saves

Development time: approximately two weeks, part-time. One developer, no external contractors.

Infrastructure cost: $6/month VPS. API credits based on usage — for ten creator accounts pulling data hourly, this runs under $40/month.

Previous software spend: $544/month for eight accounts.

Net monthly saving: roughly $500. The savings paid for the development time within the first three months.

The less quantifiable benefit is data ownership. The profiles, the scores, the event history — all of it lives in a database we control. We can run any query we want against it. We can export it, back it up, build new features on top of it without waiting for a vendor. When a creator churns from our roster, we retain the historical data. When we bring on a new creator, we have a framework ready to instrument them from day one.


The architecture described here is not clever or complex. It is a straightforward data pipeline with a simple scoring layer on top. The value comes entirely from having the raw data available — which requires the API — and from making decisions based on that data rather than intuition.

For how we use the fan database in practice for churn prediction, see our post on keeping whale fans. For context on how this fits into a broader monetization strategy, see our ARPU optimization framework.

Get started with the API at OFAPI pricing, or review the endpoints we used in the API documentation.

Ready to automate your OnlyFans operations?

Get full API access and start building in minutes.