Quick Start
Pick the path that matches your background
LinkCompute serves three kinds of users. Find the profile closest to yours and follow the links from there.
👋 Non-technical user
Your situation: you want to chat with models like GPT-4o or Claude, write text, generate images — without writing code or dealing with "APIs".
Shortest path (up and running in 5 minutes)
- Click "Register" in the top-right and create a username and password
- In Wallet Management, top up $10 via Alipay (~70 RMB)
- In Token Management, click "Add token", name it `cherry-studio`, and keep the rest as default
- Install Cherry Studio (a free desktop AI chat client) and follow Token Management → Using with Cherry Studio to paste the token you just created
- Start chatting with GPT-4o, Claude, DeepSeek, Qwen, and more
Recommended reading
Model Plaza
Browse 448 models with real pricing and pick the one that fits.
FAQ
Signup, top-up, redemption codes, and client setup questions.
👨‍💻 Software engineer
Your situation: you're familiar with the OpenAI / Anthropic SDKs and want LinkCompute as a unified gateway — one Base URL for all models, no more juggling per-vendor keys.
Shortest path
- Register and top up (same as above)
- In Token Management, create a token named after your project (e.g. `prod-backend`, `local-dev`) and optionally restrict it to specific IPs
- In your client, set Base URL to `https://ai.futurecore.com.cn/v1` and use the LinkCompute token as the API Key
- Call with the SDKs you already know: endpoints are compatible with the openai / anthropic / gemini / image-generation / audio / video protocols
```shell
curl https://ai.futurecore.com.cn/v1/chat/completions \
  -H "Authorization: Bearer sk-xxxx" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hello"}]}'
```

Capability index
API Reference
Endpoints, parameters, and response formats for the OpenAI / Anthropic / Gemini protocols.
Model Plaza
Filter by tag, billing type, or endpoint. Toggle the multiplier switch to see real pricing.
Usage Logs
Look up any call by Request ID for latency, tokens, and cost.
The same model can be priced differently across groups: for example, the `mix-claude-hc` group at 0.55x is 45% cheaper than the `default` group at 1x. For production, create a dedicated token in a low-multiplier group to cut costs immediately.
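The multiplier arithmetic is simple: the effective price is the base price times the group multiplier, so a 0.55x group is 45% cheaper than the 1x default. A quick sketch (the base price of $0.01 is a placeholder, not a real model price):

```python
def effective_cost(base_usd: float, multiplier: float) -> float:
    """Per-call cost in a group = base price * group multiplier."""
    return base_usd * multiplier

default_cost = effective_cost(0.01, 1.0)   # default group, 1x
hc_cost = effective_cost(0.01, 0.55)       # mix-claude-hc group, 0.55x
savings = 1 - hc_cost / default_cost
print(f"{savings:.0%}")  # 45%
```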
🤖 Heavy AI developer (Agent / Claude Code / Codex CLI)
Your situation: you live inside tools like Claude Code, Codex CLI, Cursor, or Cline, or you're building Agents / automation pipelines — call volume is high and you care about latency and reliability.
Recommended setup
- One token per tool — Claude Code / CC Switch / Codex CLI each get their own token, so you can trace heavy callers in the logs
- Use CC Switch as an upstream switcher — toggle one Claude Code instance between Anthropic official and LinkCompute; setup steps are in Token Management → Using with CC Switch
- Pick low-multiplier groups — for Claude try `mix-claude-hc` (0.55x) or `mix-claude-kiro` (0.55x); for GPT try `mix-gpt-kj` (0.4x). Savings are around 50%.
- Turn on balance alerts — configure Webhook / Bark / Gotify in Personal Settings so heavy usage doesn't silently drain your balance
- Task-type calls go through Task Logs — async tasks from MJ, Suno, Kling, Seedance land in Task Logs; search by task ID for progress and errors
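As a back-of-envelope check on the "around 50%" savings figure, assuming an even split of spend between a 0.55x Claude group and the 0.4x GPT group quoted above:

```python
# Blended savings vs the 1x default group, assuming equal spend on each.
multipliers = {"claude": 0.55, "gpt": 0.40}
blended = sum(1 - m for m in multipliers.values()) / len(multipliers)
print(f"{blended:.1%}")  # 52.5%
```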
Popular links
AI Applications
Step-by-step setup for Cherry Studio, CC Switch, Claude Code, and Codex CLI.
Playground
Test model + parameter combinations in-browser, custom request body supported.
Dashboard
Profile your usage by cost distribution, call trends, and top models.
For enterprise procurement, compute partnerships, channel deals, or special SLAs — reach Business Cooperation at futurecore@zoho.com.cn.
📚 Full Documentation
API Documentation
Comprehensive API interface descriptions and call examples.
AI Applications
Integration guides for Cherry Studio, CC Switch, Claude Code, and Codex CLI.
Help & Support
Frequently asked questions and community communication.
Business Cooperation
Partner with us to jointly explore the AI ecosystem and business opportunities.