Lesson 520 of 2116
Multilingual Prompting on Kimi: Chinese-First, Globally Capable
Kimi was trained Chinese-first and is excellent across languages. Learn how to write multilingual prompts that take advantage of that — without accidentally degrading the output.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Where the model's strengths come from
2. Multilingual prompt patterns
3. Code-switching pitfalls
4. Translation and cross-checking
Section 1
Where the model's strengths come from
Kimi was trained on a corpus heavily weighted toward Chinese — including academic, legal, and literary sources that Western-headquartered models have less exposure to. Its English is excellent, but its Chinese is exceptional. That asymmetry is real and exploitable.
Three multilingual prompt patterns that work
1. Match-the-source: write the system prompt in the language of the source document
2. Anchor-and-translate: do the analysis in Chinese, then ask for an English summary at the end
3. Cross-check: ask for the answer in both languages and verify they agree
A Chinese system prompt grounds the model in its strongest language; the user gets the answer in their language.
System (Chinese): 你是一名资深律师助理。请仅根据下面的合同段落作答,并标注出处。 (You are a senior paralegal. Answer only from the contract paragraphs below, and cite your sources.)
User: Read the attached contract and answer in English: which clauses limit our indemnification?
Assistant: I will analyze the Chinese contract internally and report the relevant clauses (with citations) in English.
Compare the options
| Prompt language | Source language | Output language | Behavior |
|---|---|---|---|
| Chinese | Chinese | Chinese | Best raw quality |
| Chinese | Chinese | English | Strong, with translation step |
| English | Chinese | English | Good, but loses some nuance |
| English | English | English | On par with Western models |
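The three patterns above can be sketched as message builders. This assumes an OpenAI-style chat API (lists of role/content dicts), which many Kimi-compatible endpoints accept; the function names and the instruction strings are illustrative, not part of any official SDK.

```python
# Sketch: build chat messages for each multilingual prompt pattern.
# The role/content message format assumes an OpenAI-style chat API;
# function names here are illustrative, not from a real Kimi SDK.

ZH_SYSTEM = "你是一名资深律师助理。请仅根据下面的合同段落作答,并标注出处。"
EN_SYSTEM = ("You are a senior paralegal. Answer only from the contract "
             "paragraphs below, and cite your sources.")

def match_the_source(document_zh: str, question: str) -> list[dict]:
    """Pattern 1: system prompt in the source document's language (Chinese)."""
    return [
        {"role": "system", "content": ZH_SYSTEM},
        {"role": "user", "content": f"{document_zh}\n\n{question}"},
    ]

def anchor_and_translate(document_zh: str, question: str) -> list[dict]:
    """Pattern 2: analyze in Chinese, then give an English summary at the end."""
    # Instruction says: do the analysis in Chinese first, then close
    # with an English summary.
    instruction = "请先用中文完成分析,然后在结尾用英文给出摘要。"
    return [
        {"role": "system", "content": ZH_SYSTEM},
        {"role": "user", "content": f"{document_zh}\n\n{question}\n\n{instruction}"},
    ]

def cross_check(document_zh: str, question: str) -> list[list[dict]]:
    """Pattern 3: ask in both languages, then compare the two answers."""
    zh = [{"role": "system", "content": ZH_SYSTEM},
          {"role": "user", "content": f"{document_zh}\n\n请用中文回答:{question}"}]
    en = [{"role": "system", "content": EN_SYSTEM},
          {"role": "user", "content": f"{document_zh}\n\nAnswer in English: {question}"}]
    return [zh, en]
```

Keeping the patterns as separate builders makes the A/B comparison in "Apply this" a matter of swapping one function call.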
Code-switching pitfalls
- Mid-sentence language switches sometimes nudge the model into the wrong output language
- Avoid 'translate this English instruction in your head' phrasing — write the instruction in the target language directly
- Names of people, places, and products should be given in both scripts when possible
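The last two pitfalls can be handled mechanically. A minimal sketch, with helper names invented for illustration:

```python
# Sketch: avoid code-switching pitfalls by (a) giving proper nouns in both
# scripts and (b) pinning the output language explicitly at the end of the
# prompt. Helper names are illustrative, not from any SDK.

def bilingual(latin: str, han: str) -> str:
    """Render a name of a person, place, or product in both scripts."""
    return f"{latin} ({han})"

def pin_output_language(prompt: str, language: str = "English") -> str:
    """Append an explicit output-language directive so mid-sentence
    language switches don't nudge the model into the wrong language."""
    return f"{prompt}\n\nRespond only in {language}."

# Example: Moonshot AI's Chinese name is 月之暗面.
query = f"Summarize recent coverage of {bilingual('Moonshot AI', '月之暗面')}."
print(pin_output_language(query))
```

Note the directive is written directly in the target language, per the second pitfall, rather than asking the model to translate an instruction in its head.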
Apply this
- Pick a real document in a non-English language you work with
- Write the same task with English prompt, target-language prompt, and the anchor-and-translate pattern
- Score outputs on accuracy and fidelity — note which pattern wins for your domain
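For the cross-check pattern, agreement between the two answers can be spot-checked mechanically, for example by comparing which clause numbers each one cites. A rough sketch; the regexes cover only digit-style citations ("Clause 5" / "第5条"), and the Jaccard score is an assumption, not a standard metric:

```python
import re

# Sketch: spot-check cross-language agreement by comparing cited clause
# numbers. Digit-based citations only; Chinese-numeral forms like 第五条
# and domain-specific citation styles would need richer extraction.

def cited_clauses(answer: str) -> set[str]:
    """Extract clause numbers cited in English ("Clause 5") or Chinese ("第5条")."""
    western = re.findall(r"[Cc]lause\s+(\d+)", answer)
    chinese = re.findall(r"第\s*(\d+)\s*条", answer)
    return set(western) | set(chinese)

def agreement(answer_a: str, answer_b: str) -> float:
    """Jaccard overlap of cited clauses: 1.0 means the two answers agree."""
    a, b = cited_clauses(answer_a), cited_clauses(answer_b)
    if not a and not b:
        return 1.0  # neither answer cites anything; vacuously agreed
    return len(a & b) / len(a | b)
```

A score well below 1.0 flags a pair worth reading side by side before trusting either answer.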
The big idea: write prompts in the strongest language for the source. Kimi rewards you for going Chinese-first when the documents are Chinese, and English-first when they are English.
