LinkCompute

Quick Start

Pick the path that matches your background

LinkCompute serves three kinds of users. Find the profile closest to yours and follow the links from there.

👋 Non-technical user

Your situation: you want to chat with models like GPT-4o or Claude, write text, generate images — without writing code or dealing with "APIs".

Shortest path (up and running in 5 minutes)

  1. Click "Register" in the top-right and create a username and password
  2. In Wallet Management, top up $10 via Alipay (~70 RMB)
  3. In Token Management, click "Add token", name it cherry-studio, keep the rest as default
  4. Install Cherry Studio (free desktop AI chat client) and follow Token Management → Using with Cherry Studio — paste the token you just created
  5. Start chatting with GPT-4o, Claude, DeepSeek, Qwen, and more



👨‍💻 Software engineer

Your situation: you're familiar with the OpenAI / Anthropic SDKs and want LinkCompute as a unified gateway — one Base URL for all models, no more juggling per-vendor keys.

Shortest path

  1. Register and top up (same as above)
  2. In Token Management, create a token named after your project (e.g. prod-backend, local-dev) and optionally restrict it to specific IPs
  3. In your client, set Base URL to https://ai.futurecore.com.cn/v1 and use the LinkCompute token as the API Key
  4. Call with the SDKs you already know — endpoints are compatible with openai / anthropic / gemini / image-generation / audio / video
curl https://ai.futurecore.com.cn/v1/chat/completions \
  -H "Authorization: Bearer sk-xxxx" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hello"}]}'
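The same request can be issued from Python using only the standard library. A minimal sketch (`sk-xxxx` is a placeholder for your LinkCompute token, as in the curl example; the request is built but not sent):

```python
import json
import urllib.request

BASE_URL = "https://ai.futurecore.com.cn/v1"

def build_chat_request(token: str, model: str, content: str) -> urllib.request.Request:
    """Build a chat-completions request against the LinkCompute gateway."""
    payload = {"model": model, "messages": [{"role": "user", "content": content}]}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("sk-xxxx", "gpt-4o", "hello")
# urllib.request.urlopen(req) would send it; omitted here so the sketch stays offline.
```

In practice you would use the official `openai` or `anthropic` SDK instead and only override the base URL and API key, as described in step 3 above.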

Capability index

The same model is priced differently across groups — for example, the mix-claude-hc 0.55x group is 45% cheaper than the default 1x group. For production, create a dedicated token in a low-multiplier group to cut costs immediately.
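As a sanity check on the multiplier arithmetic (the base price below is a made-up number for illustration; only the 0.55x and 1x multipliers come from the group table):

```python
def effective_price(base_price: float, multiplier: float) -> float:
    """Billed price = base model price x group multiplier."""
    return base_price * multiplier

base = 10.0  # hypothetical base price per 1M tokens, illustration only
default_group = effective_price(base, 1.0)   # default 1x group
hc_group = effective_price(base, 0.55)       # mix-claude-hc 0.55x group
saving = 1 - hc_group / default_group        # fraction saved vs. default
```

With a 0.55x multiplier the saving comes out to 0.45, i.e. 45% cheaper than the 1x group, matching the figure above.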


🤖 Heavy AI developer (Agent / Claude Code / Codex CLI)

Your situation: you live inside tools like Claude Code, Codex CLI, Cursor, or Cline, or you're building Agents / automation pipelines — call volume is high and you care about latency and reliability.

Recommended setup

  1. One token per tool — Claude Code / CC Switch / Codex CLI each get their own token, so you can trace heavy callers in the logs
  2. Use CC Switch as an upstream switcher — toggle one Claude Code instance between Anthropic official and LinkCompute; setup steps are in Token Management → Using with CC Switch
  3. Pick low-multiplier groups — for Claude try mix-claude-hc 0.55x or mix-claude-kiro 0.55x; for GPT try mix-gpt-kj 0.4x. Savings run 45–60% versus the default 1x group.
  4. Turn on balance alerts — configure Webhook / Bark / Gotify in Personal Settings so heavy usage doesn't silently drain your balance
  5. Task-type calls go through Task Logs — async tasks from MJ, Suno, Kling, Seedance land in Task Logs; search by task ID for progress and errors
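At high call volumes, transient failures are inevitable, so wrapping calls in retry-with-exponential-backoff is a common pattern. This is a generic sketch, not a LinkCompute SDK feature:

```python
import time

def with_retries(call, max_attempts: int = 4, base_delay: float = 0.5):
    """Run call(), retrying on exceptions with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted all attempts; surface the last error
            time.sleep(base_delay * (2 ** attempt))

# usage: with_retries(lambda: client.chat.completions.create(...))
```

In production you would retry only on transient errors (timeouts, 429/5xx status codes) rather than all exceptions, and add jitter to the delay to avoid synchronized retry storms.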


For enterprise procurement, compute partnerships, channel deals, or special SLAs — reach Business Cooperation at futurecore@zoho.com.cn.


