What is the Total Addressable Market (TAM)?
~US$66B/year across three segments priced
by tier: IB Diploma students ($132M — 380K students × $348/yr at Analyst $29/mo),
broader high school + university students ($43.5B — 125M students in English/OECD
markets at Analyst $29/mo), and knowledge-worker professionals ($22.8B — 16M heavy
writers in consulting, policy, and advisory roles at Strategist $119/mo). Two‑tier
pricing: students on the Analyst plan ($29/mo), professionals on the
Strategist plan ($119/mo).
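The segment arithmetic can be checked directly; every count and price below is the one quoted above:

```python
# Back-of-envelope TAM check using the figures in this answer.
ANALYST_YR = 29 * 12      # Analyst tier, $348/yr
STRATEGIST_YR = 119 * 12  # Strategist tier, $1,428/yr

tam_ib = 380_000 * ANALYST_YR            # IB Diploma students
tam_students = 125_000_000 * ANALYST_YR  # broader HS + university students
tam_pros = 16_000_000 * STRATEGIST_YR    # knowledge-worker professionals

tam_total = tam_ib + tam_students + tam_pros
print(f"TAM ≈ ${tam_total / 1e9:.1f}B/yr")  # → TAM ≈ $66.5B/yr
```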
What is the Serviceable Available Market (SAM)?
~US$13.3B/year. Defined as 20% of the
English/OECD addressable population reachable via digital and organic channels over a
5-year horizon — approximately 28.3M individuals (76K IB + 25M students at Analyst
$29/mo; 3.2M professionals at Strategist $119/mo) — before institutional or
non-English expansion.
What is the Serviceable Obtainable Market (SOM)?
SOM is computed as blended ARR across both tiers
(Analyst at $29/mo + Strategist at $119/mo). In the very conservative
scenario, Year 1 yields ~3,000 Analyst subs + ~200 Strategist subs ≈
$1.33M blended ARR. Higher scenarios (Conservative, Base, and Aggressive) scale from there; individual inputs can be adjusted to stress-test any assumption.
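The very conservative Year-1 figure is straightforward to verify from the subscriber counts above:

```python
# Blended Year-1 ARR in the very conservative scenario.
analyst_arr = 3_000 * 29 * 12    # Analyst subscribers × monthly price × 12
strategist_arr = 200 * 119 * 12  # Strategist subscribers × monthly price × 12
blended_arr = analyst_arr + strategist_arr
print(f"${blended_arr / 1e6:.2f}M blended ARR")  # → $1.33M blended ARR
```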
Who is the primary customer?
Two tiers serve three segments: (1) IB Diploma
students (11th-12th grade) who must write a 4,000-word Extended Essay and
3+ Internal Assessments — high pain, high willingness to pay, seasonal usage.
(2) Broader students (AP, A-level, university undergrad/postgrad)
writing research papers with citations. Segments 1 and 2 map to the Analyst
tier ($29/mo) — the core research pipeline with academic voice and
fact-checking.
(3) Professionals — consultants, analysts, advisors, and LinkedIn
thought-leadership publishers producing research-backed content year-round. This segment
maps to the Strategist tier ($119/mo) — all voices, all analytical
lenses,
smart agent selection, gap-focused research, extended output, and unlimited archive.
What are the specific pain points?
Finding and validating research sources. Threading a thesis
through evidence. In-text citation formatting. Abstract generation. Maintaining
structural coherence across sections. These are acute, time-consuming tasks whether
you're a 12th-grader writing your Extended Essay or a PhD producing a literature review.
ChatGPT writes generic chatbot prose; StellarumAtlas produces thesis-driven,
citation-backed academic writing with validated sources.
What is the go-to-market wedge?
The Analyst tier ($29/mo) is the viral entry
point — IB Diploma is the initial wedge (small cohort, 380K, but extreme pain and
tight-knit communities). Students share tools inside cohorts and schools, driving
subscriber volume and brand awareness. As users' needs and budgets grow — postgrads,
independent researchers, working professionals — they upgrade to the Strategist
tier ($119/mo), which anchors perceived product value and delivers much
higher
ARPU. Translation features (Wave 1 roadmap) expand the student TAM 3–4× into Spanish, French, and other IB programme languages.
How does the Business Plan writer expand the market?
The research essay engine targets students and knowledge
workers.
The Business Plan writer opens an entirely new segment: founders, startups, and
SMBs who need investor-ready plans but can't justify $5K–$25K consultant
fees.
The pipeline is structurally different — founder intake → AI-generated due-diligence
questions → evidence-backed plan — but runs on the same multi-agent architecture. This
is a higher-ARPU segment ($199 credit pool vs. $29–$119/month subscriptions) with a
different purchase
pattern (project-based, not subscription), diversifying revenue away from seasonal
student cycles and expanding TAM into the broader startup ecosystem.
How is this different from ChatGPT?
ChatGPT is a single-model chatbot that produces unstructured,
uncited prose. StellarumAtlas is a 7-agent orchestrated pipeline —
Planner, Researcher, Critic, Thesis Generator, Decomposer, Writer, Titler — that
produces thesis-driven, structurally rigorous academic papers with validated real-world
citations. Critically, the engine doesn't just synthesize what's known — it
identifies what's missing. The research phase actively surfaces
contradictions, unresolved debates, and gaps in existing scholarship; the introduction
frames these gaps as the motivation for the essay's thesis. This is how real academic
research works — and no chatbot does it. It's the difference between asking someone a
question and deploying a research team.
What is SourceLock™ and why does it matter?
SourceLock™ is our proprietary evidence
integrity architecture. The core design principle: AI handles thought
(argument structure, pattern recognition, synthesis) while facts come
exclusively from verified external sources. Every research finding is stored as an
atomic NoteChunk with content, source URL, citation ID, and confidence tag — creating
an auditable chain of custody. The writer agent is then constrained to only
use citation IDs that exist in the verified evidence pool — it cannot
introduce unsourced claims. Four verification layers enforce this: (1) Perplexity
Sonar retrieves live web data with URLs, (2) HEAD-request validation strips dead
links, (3) a dual-pass fact-checker audits numeric precision and named-entity
fabrication, (4) the writer is citation-locked to the surviving evidence base. The
result: in manual spot-checking — clicking every citation tooltip and verifying
figures, phrases, and statistics against the source — output traces back with
near-100% fidelity. In a market where competing AI tools average
75% or worse factual accuracy, this is a structural differentiator, not a prompt
tweak. SourceLock™ is potentially licensable IP — the same architecture applies
anywhere AI needs to reason over verified facts without generating them.
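The citation-lock constraint can be sketched in a few lines. This is an illustrative sketch, not the shipped code: the NoteChunk fields mirror the description above, but the `citation_lock` helper and the `[S1]`-style ID format are assumptions for the example.

```python
# Sketch of the SourceLock citation-lock check: the writer may only cite
# IDs that exist in the verified evidence pool. Names are illustrative.
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class NoteChunk:
    """One atomic, verified research finding."""
    citation_id: str   # e.g. "[S3]"
    content: str
    source_url: str
    confidence: str    # e.g. "grounded" or "approximate"

def citation_lock(draft: str, evidence: list[NoteChunk]) -> list[str]:
    """Return citation IDs in the draft that are NOT in the verified pool.
    A non-empty result means an unsourced claim slipped into the draft."""
    verified = {chunk.citation_id for chunk in evidence}
    cited = set(re.findall(r"\[S\d+\]", draft))
    return sorted(cited - verified)

pool = [NoteChunk("[S1]", "...", "https://example.org/a", "grounded")]
# "[S2]" was never verified, so the lock flags it:
assert citation_lock("Claim one [S1]. Claim two [S2].", pool) == ["[S2]"]
```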
What does the product roadmap unlock?
Shipped: Inline paragraph
editing, fact-check rigor control, per-pipeline usage dashboard,
IB Essay Evaluator (rubric-based scoring with per-criterion feedback),
IB Literary Essay structure (Paper 2 comparative scaffold with
anti-hallucination quote sourcing), IB Extended Essay structure
(criterion-aligned scaffold mapping to IB assessment Criteria A–D for the 4,000-word
EE),
Business Plan document structure
(investor-ready pipeline: founder intake → AI due-diligence → evidence-backed plan with
Executive Summary through Investment Thesis),
Alternative Thesis Generation
(parallel LLM call generates diverse thesis angles — contrarian, scope-shift,
methodological — with self-regulated count and one-click re-generation).
Strategist-only shipped features:
gap-focused research, smart agent selection (11 specialist critics with LLM-powered
topic-fit selector), quantitative pressure control, user-selectable writer models,
19 writing voices, 8 analytical lenses, extended word targets, selective publish, and
unlimited archive. Wave 1 (in progress): Multiple
citation formats (APA/MLA/Chicago), essay translation. Wave 2: Document
ingestion (upload your own sources), post-generation critique loop, thesis selection
(choose from 3 options or supply your own). Wave 3: Checkpoint/resume,
Topic Sentinel (RSS-driven active monitoring of archived essays), social format
multiplier (LinkedIn/X/executive summary outputs), personal knowledge base
(cross-reference and search your research archive), publish-ready formatting.
Wave 4: Personal Research Library — upload PDFs, books, and private
sources as weighted context: the long-term platform moat.
What are the key technical capabilities?
Both tiers share the core pipeline: dual-pass
research, live web sources via Perplexity Sonar (no stale training data), URL
validation (HEAD request on every source), smart evidence deduplication,
dual-layer fact-checking (numeric claim audit + named-entity
verification), inline paragraph editing with Firestore persistence, per-pipeline usage
tracking, and real-time SSE telemetry. 8 essay structures (Academic, Narrative, Hybrid,
Magazine, Op-Ed, IB Literary, IB Extended Essay, Business Plan).
Strategist tier adds: 11-specialist critic roster with
LLM-powered agent selection (auto-picks the 3 most relevant domain experts
per
topic), gap-driven research methodology (surfaces contradictions and
unresolved questions to motivate the thesis), quantitative pressure
control (adjustable data density with guardrails against false precision),
user-selectable writer models, 19 authorial voices (vs. 1), 8 analytical lenses (vs. 1),
extended word targets up to 8,000 words, 6 body sections, selective publish, and
unlimited archive.
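The smart agent selection step can be sketched as follows. The shipped selector is LLM-powered; the keyword-overlap score below is a stand-in so the top-3 selection logic is runnable, and the critic names and keyword sets are illustrative.

```python
# Sketch of Strategist-tier critic selection: score each of the specialist
# critics for topic fit, keep the top 3. Real scoring is LLM-driven; this
# keyword overlap is a placeholder for the scoring function.
CRITICS = {
    "economist": {"market", "pricing", "inflation", "trade"},
    "historian": {"era", "empire", "revolution", "archive"},
    "statistician": {"sample", "regression", "variance", "dataset"},
    "ethicist": {"consent", "fairness", "harm", "policy"},
}

def select_critics(topic: str, roster=CRITICS, k=3) -> list[str]:
    """Rank critics by topic-fit score and return the k best."""
    words = set(topic.lower().split())
    ranked = sorted(roster, key=lambda c: len(roster[c] & words), reverse=True)
    return ranked[:k]
```

In production the roster holds 11 specialists and the fit score comes from an LLM call, but the shape of the step is the same: score all, keep three.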
How do you handle AI hallucinations?
This is the core technical risk for any AI writing product, and
we attack it at every layer of the pipeline. (1) Research layer:
Perplexity Sonar pulls live web data with source URLs — no stale training data. Every
source URL gets a HEAD request; dead links are stripped before they reach the writer.
(2) Fact-checking layer: A dedicated fact-checker agent runs a
dual-pass audit on all research notes. Pass 1 audits numeric claims — classifying each
as grounded, approximate (auto-hedged with cautious language), or unsupported (removed
at high rigor). Pass 2 audits named entities — legislation, regulatory programs,
government initiatives — catching fabricated Act names and overstated legal status.
(3) Writer guardrails: The writer agent's system prompt enforces six
precision rules: preserve approximate language, no false decimal precision, no
unverified legislation names, specific date-window qualifiers on volatile metrics and
market caps, no invented comparative specifics, and statistical coefficient hedging
(correlations, betas, R² must use ranges unless tied to a named dataset). (4)
User control: Both Quantitative Pressure and Fact Check Rigor are
user-facing sliders (0–100%), so users can dial accuracy constraints to their use case.
The result: output that is defensible with its cited sources, not just
plausible-sounding.
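How the rigor slider gates what survives the Pass-1 numeric audit can be sketched as follows. The grounded/approximate/unsupported labels come from the description above; the classification itself is model-driven and stubbed out here, and the function name, hedge wording, and 0.7 threshold are assumptions for the example.

```python
# Sketch of Pass-1 numeric-claim gating: grounded claims pass through,
# approximate claims are auto-hedged, unsupported claims are removed
# once the user-facing rigor slider is set high enough.
def apply_rigor(claims: list[tuple[str, str]], rigor: float) -> list[str]:
    """claims: (text, label) with label in {grounded, approximate, unsupported}.
    rigor: the 0.0-1.0 Fact Check Rigor slider."""
    kept = []
    for text, label in claims:
        if label == "grounded":
            kept.append(text)
        elif label == "approximate":
            kept.append(f"approximately {text}")  # auto-hedge
        elif rigor < 0.7:
            kept.append(f"reportedly {text}")     # survives only at low rigor
    return kept

audit = [("$2.1B revenue", "grounded"),
         ("40% growth", "approximate"),
         ("No. 1 market share", "unsupported")]
assert apply_rigor(audit, rigor=0.9) == ["$2.1B revenue",
                                         "approximately 40% growth"]
```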
Who are the competitors?
Jasper and Copy.ai target marketing copy, not academic research.
Jenni.ai and Writesonic offer AI writing assistance but lack the multi-agent pipeline,
dual-pass research, source validation, and structural rigor. No competitor runs a
7-agent orchestrated pipeline with critic panels, evidence deduplication, and
configurable analytical lenses.
What are the barriers to entry?
The moat is architectural, not model-dependent:
(1) The multi-agent coordinator pattern with typed handoffs between 7
specialized agents. (2) The prompt engineering corpus — each agent has
role-specific system prompts tuned for academic structure, not generic text.
(3) The validation pipeline (URL checking, relevance scoring, evidence
deduplication). (4) The configurable voice/structure/pillar system. A
competitor would need to rebuild the full pipeline, not just wrap an LLM API call.
What prevents OpenAI from building this?
OpenAI builds horizontal platforms. This is a vertical product
with domain-specific orchestration, academic structure logic, and curated UX. Same
reason Figma exists despite Adobe, or Notion despite Google Docs. Vertical
specialization wins when the workflow is complex enough that generic tools produce
generic output.
What is the pricing model?
Two subscription tiers plus credit top-ups.
Analyst ($29/mo) — core research pipeline with academic voice,
fact-checking, inline editing, and up to 10 archived essays. Targets students (IB, AP,
A-level, university) who subscribe ~6–9 months/year around essay deadlines.
Strategist ($119/mo) — the full product: all 19 writing voices, 8
analytical lenses, smart agent selection, gap-focused research, quantitative pressure
control,
writer model choice, extended output (8,000 words / 6 sections), selective publish, and
unlimited archive. Targets professionals (consultants, analysts, advisors) subscribing
~10–12 months/year. The Analyst tier drives subscriber volume and virality; the
Strategist tier anchors product value and delivers ~4× higher ARPU. Students can upgrade
as their needs grow — a clear expansion path without diluting positioning.
Additionally, the Business Plan writer is a separate $199 credit pool with a
3-month expiry — enough for ~3–4 full plan generations, matching the
project-based purchase pattern of founders. Credit top-ups ($20–$200) are also available
for pay-as-you-go usage. All tiers maintain 90%+ FCF margins.
What are the expected retention dynamics?
Analyst tier (students): ~40-60%
year-over-year retention — seasonal churn is structural, but new cohorts replace
graduating ones (each IB class is a fresh market). Strategist tier
(professionals): ~70-80% retention with lower churn if they publish regularly, and
significantly higher LTV at $119/mo. Blended retention improves as the Strategist
segment grows as a proportion of the base, pulling up both ARPU and cohort stickiness.
What is the expected LTV:CAC ratio?
Analyst: at $29/mo with ~6–9 month average
retention, LTV ≈ $174–$261. Low CAC via word of mouth in IB cohorts, teacher referrals,
and student forums. Strategist: at $119/mo with ~10–12 month retention,
LTV ≈ $1,190–$1,428, roughly 6× the Analyst LTV at the midpoint. Higher CAC (LinkedIn ads, content
marketing) but the ARPU more than compensates. Blended target: 3:1 LTV:CAC,
improving as organic referral compounds and the Strategist mix grows.
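The LTV arithmetic above is just price times retained months:

```python
# LTV ranges from the figures in this answer: monthly price × retained months.
analyst_ltv = (29 * 6, 29 * 9)         # ($174, $261)
strategist_ltv = (119 * 10, 119 * 12)  # ($1,190, $1,428)

def midpoint(lo_hi):
    return sum(lo_hi) / 2

ratio = midpoint(strategist_ltv) / midpoint(analyst_ltv)
print(round(ratio, 1))  # → 6.0 (midpoint Strategist LTV vs. Analyst)
```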
What does one essay cost you to generate?
Every pipeline run is instrumented with a per-agent usage
tracker that records token counts (input/output), model used, latency, and estimated
cost — broken down by agent role (planner, researcher, critic, fact-checker, writer,
titler). A typical 3,500-word essay with dual-pass research, fact-checking, and 4 body
sections costs $0.05–$0.15 in compute depending on writer model
selection, with the default Gemini 2.5 Flash Lite at the low end. At the Analyst tier
($29/mo), even a heavy user generating 20+ essays/month stays well within 90% margin.
Strategist runs cost slightly more (gap-focused research, smart agent selection,
extended
output)
but at $119/mo the margin is even richer. This data is exposed in-app per run and stored
in Firestore for aggregate analysis.
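A minimal sketch of such a per-agent tracker, assuming illustrative class and field names and placeholder per-token rates (real rates depend on the model selected per run):

```python
# Sketch of a per-agent usage tracker: accumulate token counts and
# estimated cost keyed by agent role. Rates below are placeholders.
from collections import defaultdict

class UsageTracker:
    def __init__(self, usd_per_1k_in=0.0001, usd_per_1k_out=0.0004):
        self.rows = defaultdict(lambda: {"in": 0, "out": 0, "cost": 0.0})
        self.rate_in, self.rate_out = usd_per_1k_in, usd_per_1k_out

    def record(self, agent: str, tokens_in: int, tokens_out: int) -> None:
        row = self.rows[agent]
        row["in"] += tokens_in
        row["out"] += tokens_out
        row["cost"] += (tokens_in / 1000) * self.rate_in \
                     + (tokens_out / 1000) * self.rate_out

    def total_cost(self) -> float:
        return sum(row["cost"] for row in self.rows.values())

tracker = UsageTracker()
tracker.record("writer", tokens_in=10_000, tokens_out=5_000)
tracker.record("researcher", tokens_in=8_000, tokens_out=2_000)
```

The same rows, persisted per run, are what make the in-app cost breakdown and the aggregate margin analysis possible.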
What are you raising?
Based on the very conservative scenario
(~3,000 Analyst subs at $29/mo + ~200 Strategist subs at $119/mo in Year 1, yielding
~$1.33M blended ARR), we are raising $10.9M for ~31% equity
(pre-money valuation $24.23M, post-money valuation $35.13M). This is a seed/Series A
raise to fund product development, customer acquisition, and team scaling through to
cash-flow positive.
How will the funds be deployed?
Product & Engineering (~40%): Ship Waves 1-4 of
the roadmap, including translation, document ingestion, checkpoint/resume, Topic
Sentinel, social format multiplier, and Personal Research Library. Customer
Acquisition (~35%): IB community outreach, student channel partnerships,
LinkedIn professional targeting, content marketing. Team (~20%):
Engineering, product, and growth hires. Infrastructure (~5%): Compute
scaling, monitoring, and security.
What are the key milestones for the next 18 months?
Ship Wave 1 features (citation formats, translation). Ship Wave
2 (document ingestion, critique loop, thesis selection). Validate CAC in both the
Analyst
(student) and Strategist (professional) channels. Hit Year-1 subscriber and blended ARR
targets per the selected scenario. Launch Spanish language
support to expand European IB market. Begin B2B pilot with 2-3 IB schools. Ship Wave 3
(Topic Sentinel, social format multiplier, publish-ready formatting, personal knowledge
base). Begin Wave 4 (Personal Research Library) to build the long-term platform moat.
What about AI detection tools in education?
StellarumAtlas produces research-backed, citation-verified
content — it's a research assistant, not a plagiarism tool. The user still needs to
understand, defend, and iterate on the work. Crucially, the platform surfaces all
intermediate deliverables — research notes, source critiques, thesis iterations —
providing auditable evidence that the user engaged with the research process. This is
the Grammarly argument: it helps you write better, it doesn't write for you.
What is the regulatory risk around AI in education?
Real concern, actively managed. Positioning: a research tool
(like Google Scholar + Zotero + a writing tutor combined), not a paper mill. The
in-between deliverables (notes, thesis iterations, source critiques, outline) are the
evidence of genuine intellectual engagement. Schools adopting via B2B licensing further
normalizes the tool as an approved research assistant.
What is the model dependency risk?
The architecture is model-agnostic by design. Research uses
Perplexity Sonar; planning/critique use Gemini; writing is user-selectable across 5+
models. All routed through OpenRouter. If any model degrades or is deprecated,
substitution is a config change, not a rewrite. The value is in the orchestration
pipeline, not any single model.