LinkCompute

Roadmap

Five execution phases of LinkCompute

We don't ship all five layers at once — each phase builds on the last: first prove the model business, then plug in compute supply, then sharpen the data and evaluation edge, then take it global, then layer on tools and orchestration.

Phase overview

| Phase | Focus | Primary revenue |
|---|---|---|
| Phase 1 — Domestic foundation (in progress) | Aggregate closed-source models, build channels, close the business loop | Model spread |
| Phase 2 — Open-source + evaluation | Absorb idle compute, deploy open models, launch price comparison and rankings | Self-operated APIs + compute-absorption fees |
| Phase 3 — Overseas site | Replicate the domestic model abroad; bring Chinese models out | Cross-border channel fees, settlement |
| Phase 4 — Tools & Agent ecosystem | Launch Agent marketplace | Distribution revenue share |
| Phase 5 — AI orchestration + compute ops | Intelligent scheduler; enterprise hosting and data insight | Compute hosting + data services |

Phase 1 — Domestic foundation (in progress)

Timing: current

Key work

  • Launch the platform and get the domestic version running
  • Aggregate domestic and overseas closed-source models into a full-category supply
  • Build the sales channel network, validate the "model spread" revenue model
  • Start relationships with compute centers to prep Phase 2

Goals

  • Complete coverage of mainstream closed-source models; one platform, any model
  • Close the loop: model call → compute consumption → billing
  • Accumulate a first batch of loyal users and channel partners
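The "model call → compute consumption → billing" loop above can be sketched in a few lines. This is purely illustrative: the function `bill`, the call-log fields, and the per-token prices are all assumptions, not LinkCompute's real API or pricing.

```python
# Illustrative sketch of the call -> consumption -> billing loop.
# All names and prices here are hypothetical.

def bill(call_log, price_per_1k_tokens):
    """Aggregate a list of model calls into a per-model bill."""
    totals = {}
    for call in call_log:
        model = call["model"]
        tokens = call["prompt_tokens"] + call["completion_tokens"]
        cost = tokens / 1000 * price_per_1k_tokens[model]
        totals[model] = totals.get(model, 0.0) + cost
    return totals

calls = [
    {"model": "model-a", "prompt_tokens": 800, "completion_tokens": 200},
    {"model": "model-a", "prompt_tokens": 500, "completion_tokens": 500},
    {"model": "model-b", "prompt_tokens": 1000, "completion_tokens": 1000},
]
prices = {"model-a": 0.02, "model-b": 0.01}
print(bill(calls, prices))
```

The point of the sketch is only that every call carries measurable consumption (tokens), so billing closes automatically once calls are metered.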

Phase 2 — Open-source deployment + data evaluation

Timing: starts once Phase 1 revenue is validated

Key work

  • Connect idle GPU capacity from compute centers and deploy popular open-source models
  • Launch the "Where does compute come from" comparison: the same open model served from different data centers, with real price, latency, and concurrency numbers
  • Launch the "AI Barometer" call-volume rankings built from platform data
  • Deliver the full closed + open matrix
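The price comparison described above amounts to ranking quotes for the same open model across data centers. A minimal sketch, assuming hypothetical field names and a simple price-plus-latency score (the real comparison would use its own schema and weighting):

```python
# Hypothetical sketch of the cross-data-center comparison: rank quotes
# for the same open model by price plus a latency penalty.
# Field names and the weighting are assumptions, not the platform's schema.

def rank_quotes(quotes, latency_weight=0.5):
    """Lower score is better."""
    def score(q):
        return q["price_per_1k_tokens"] + latency_weight * q["p50_latency_s"]
    return sorted(quotes, key=score)

quotes = [
    {"center": "dc-north", "price_per_1k_tokens": 0.010, "p50_latency_s": 0.8},
    {"center": "dc-east",  "price_per_1k_tokens": 0.008, "p50_latency_s": 1.5},
    {"center": "dc-west",  "price_per_1k_tokens": 0.012, "p50_latency_s": 0.4},
]
best = rank_quotes(quotes)[0]
print(best["center"])  # the cheapest quote is not always the best once latency counts
```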

Goals

  • Become the domestic platform with the broadest model coverage
  • Build industry credibility through comparison and rankings
  • Help compute centers absorb idle capacity; validate the compute-absorption fee model

Phase 3 — Overseas replication

Timing: starts once closed + open + evaluation are proven domestically

Key work

  • Launch the overseas site with the same scope as the domestic one
  • Replicate the model aggregation + compute absorption + evaluation stack abroad
  • Onboard local overseas closed- and open-source models; ship Chinese-origin models (Qwen, DeepSeek, etc.) outbound through the platform
  • Adapt to overseas payments, languages, and compliance

Goals

  • Build the "first stop for Chinese models going abroad" brand
  • Enable two-way global flow of models and compute
  • Run the overseas site as a standalone P&L

Unique tailwind: OpenRouter data shows call volume for Chinese models has overtaken that for US models for several consecutive weeks. The overseas site is built to ride that demand.

Phase 4 — Tool & Agent ecosystem

Timing: starts once both sites have stable user bases

Key work

  • Launch the tool marketplace / Agent store
  • Onboard indie developers and small teams to list their agents and tools
  • One-stop showcase, invocation, billing, settlement
  • Build a standardized evaluation framework so users see real effectiveness

Goals

  • Form the loop: developers ship → platform promotes → users call → developers get paid
  • More tools bring more users; more users drive more model and compute consumption — classic network effect

Phase 5 — AI orchestration + compute operations

Timing: once the ecosystem is mature

Key work

  • Launch the AI orchestration engine: users describe needs in natural language, AI picks the optimal plan
  • Scale compute hosting and data-center operations as enterprise-grade services
  • Build compliant AI data insights and industry reports on top of aggregated call-volume data
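The orchestration step above — turn a described need into a plan choice — can be reduced to filtering candidate plans by hard constraints and then optimizing on cost. The sketch below is an assumption-laden toy (plan fields, constraint names, and the cheapest-wins rule are all illustrative, and the real engine would parse the need from natural language first):

```python
# Toy sketch of "describe the need, pick the optimal plan":
# filter candidate model/compute plans by hard constraints, then
# take the cheapest survivor. All fields are illustrative.

def pick_plan(plans, max_latency_s=None, min_context=0):
    candidates = [
        p for p in plans
        if (max_latency_s is None or p["latency_s"] <= max_latency_s)
        and p["context_window"] >= min_context
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p["price_per_1k_tokens"])

plans = [
    {"name": "open-small", "latency_s": 0.5, "context_window": 8192,   "price_per_1k_tokens": 0.002},
    {"name": "open-large", "latency_s": 1.2, "context_window": 32768,  "price_per_1k_tokens": 0.006},
    {"name": "closed-pro", "latency_s": 0.9, "context_window": 128000, "price_per_1k_tokens": 0.020},
]
# A need like "at least 32k context, under a second of latency" becomes:
print(pick_plan(plans, max_latency_s=1.0, min_context=32000)["name"])
```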

Goals

  • Become the "intelligent operating system" of AI infrastructure
  • Evolve from a light-asset matchmaker into a full-stack, hybrid-asset service platform
