On coffee, computers, chatbots and clones...
The Identity Crisis: When Kimi says “Hi, I’m Claude,” the best way to tell whether you’re talking to the American original or the Chinese clone is to ask it a question it can’t answer in China.
After a rapid but crowded ride on one of Beijing’s sleek new subway lines, you pop into Starbucks—or one of its many imitators—grab a cappuccino, hurry across the cement plaza, do a quick ID scan at security in a huge glass and steel building, and up you go to an elevated floor that offers spectacular views of Beijing when the air is good. And the air’s been very good lately.
You greet your coworkers on your way to your cubicle, turn on the computer, and take your first sip of coffee.
You’re excited to try out one of China’s hottest chatbots, said to rival the best comparable AI assistants from America. It’s called Kimi and it’s made by a Chinese company called Moonshot AI.
Ready, set, go. You type your query into Kimi’s text box and await what you know is going to be an almost instantaneous answer:
“Hi, I’m Claude...I was developed by Anthropic.”
Huh? What happened?
You do some quick research on Reddit and Google, if you have a VPN: Moonshot AI (the makers of Kimi) provides an API that is “Anthropic-compatible.”
“This allows users to ‘trick’ Claude-specific tools into using Kimi by changing the ANTHROPIC_BASE_URL to Moonshot’s server. But it’s Kimi under the hood.”
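For the curious, the redirection looks something like this. A hedged sketch only: the exact endpoint URL and key format are assumptions here, so check Moonshot’s own documentation before trying it.

```python
# Hypothetical sketch: Claude-specific tooling typically reads these environment
# variables, so pointing them at Moonshot's server swaps Kimi in under the hood.
import os

os.environ["ANTHROPIC_BASE_URL"] = "https://api.moonshot.cn/anthropic"  # assumed endpoint
os.environ["ANTHROPIC_API_KEY"] = "sk-your-moonshot-key"  # placeholder key
```

Nothing in the tool itself changes; it keeps speaking the Anthropic API dialect, just to a different listener.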
You also learn that Moonshot AI did extensive training of its chatbot using Claude, so much so that the clone at times shows signs of having a serious identity crisis.
What’s Chinese for “imitation is the sincerest form of flattery”?
I put the question to Kimi—no, I mean to Claude:
见贤思齐 (Jiàn xián sī qí)
“Seeing someone worthy makes you want to emulate them”
Better take a sip of that Starbucks—or is it Luckin?—or is it Manner, the latest Luckin clone? Whatever it is, better drink it before the cappuccino goes cold.
Claude’s Take on the Copycat
I put a question to Anthropic’s Claude, one of the hottest AI properties around and a serious contender for the best coding app in the world. How does it feel about its name popping up in the chatbox of what may soon turn out to be its biggest rival?
Claude turns out to be one cool operator. Hardly ruffled at all:
Moonshot provides an OpenAI/Anthropic-compatible API, which means developers can use Kimi’s API with code originally written for Claude or ChatGPT.
Claude breaks down the challenge into three categories:
Legitimate - Offering API compatibility (different databases can all use SQL)

Concerning - Training on Claude’s outputs without permission

Potentially problematic - If system prompts are being copied verbatim
The Legal Landscape
I asked Anthropic’s chatbot if any legal actions had been taken against Moonshot AI, and it said no. But it was honest enough to own up to its own corporate legal troubles:
In September 2025, Anthropic agreed to pay $1.5 billion to settle a class action lawsuit with authors over using pirated books from sites like LibGen and PiLiMi to train Claude. The settlement, which received preliminary approval on September 25, 2025, covers approximately 500,000 works and will pay authors roughly $3,000 per book.
One of those books was one I wrote about China, so I have reason to be grateful for the class action suit. And a modicum of respect for Anthropic, since they appear willing to pay. The final approval hearing is scheduled for April 23, 2026.
Why No Lawsuit?
As for Anthropic and Moonshot AI crossing swords—the imitated versus the imitator—the former sees little point in pursuing it, and Claude can explain exactly why:
Different jurisdictions - Anthropic is US-based, Moonshot is in China, making cross-border IP enforcement extremely difficult
Proving the claim is hard - If Kimi is saying “I’m Claude,” that’s likely a user configuration error or prompt issue rather than something Anthropic could easily prove in court as intentional trademark infringement by Moonshot
Limited enforcement options - Even if Anthropic wanted to take action, enforcing US intellectual property rights in China is notoriously challenging
API compatibility isn’t illegal - Offering compatible APIs (which Moonshot does) is standard industry practice and not grounds for a lawsuit
The DeepSeek Precedent
We’ve seen this script before. When DeepSeek, founded by Hangzhou-based whiz kid Liang Wenfeng, came out, its performance benchmarks were as good—or almost as good—as Sam Altman’s very expensive, much-hyped industry leader, ChatGPT. DeepSeek was homegrown in China at a fraction of the cost, with more transparency.
Accusations of theft coursed across the Internet. David Sacks, the White House AI and crypto czar, said there was “substantial evidence” that DeepSeek had used a technique known as “distillation” to build its AI models by extracting knowledge from OpenAI’s models. But most insiders were less vocal, knowing the score.
For one AI company to accuse another of theft is like the pot calling the kettle black.
The massive amounts of data used to run AI assistants don’t materialize out of a vacuum—they are scraped from the internet and from the outputs of other models, mostly without compensation, notification, or permission.
Distillation is legal; at worst it’s inappropriate. It’s like a student learning everything it can from a teacher by asking questions—millions of them.
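In toy form, the distinction is easy to see. Here’s a minimal sketch—no real LLMs, not any lab’s actual pipeline—where a “student” model fits itself to a “teacher’s” answers without ever touching the teacher’s internals:

```python
# Toy sketch of distillation (illustrative only): the student learns from the
# teacher's *answers*, never from its internal weights.
def teacher(x):
    return 2 * x + 1  # stands in for the teacher model's behavior

questions = [1, 2, 3, 4, 5]
answers = [teacher(q) for q in questions]  # "asking questions -- millions of them"

# Fit the student by least squares on the (question, answer) pairs.
n = len(questions)
mean_x = sum(questions) / n
mean_y = sum(answers) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(questions, answers))
         / sum((x - mean_x) ** 2 for x in questions))
intercept = mean_y - slope * mean_x

def student(x):
    return slope * x + intercept  # mimics the teacher, never saw its guts
```

On this toy data the student ends up reproducing the teacher’s behavior exactly—which is the point: nothing proprietary was copied, only questions asked and answers recorded.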
Stealing weights, on the other hand, gets into the guts of the model, where the proprietary parameters are lifted or copied to duplicate its performance outright. In a large language model, the weights are the billions or trillions of floating-point numbers, arranged in layers, that encode everything the model has learned. Computer engineers in the field are in a better position to point fingers than journalists and commentators.
The Business Model
Which gets me back to that now cold cup of coffee.
How can it possibly benefit a proud homegrown Chinese company to ship a chatbot that can be tricked into operating just like its American rival, right down to how it refers to itself?
Turns out that Moonshot AI, like DeepSeek, sees a market not in doing better but in doing the same at considerably less cost. That means if you run a chatbot in the Claude/Kimi class, it’s gonna be a lot cheaper in the long run to use the clone, which charges less per token of data-crunch.
It’s not necessarily in the spirit of “standing on the shoulders of giants,” but as with cars, coffee chains, and other easily imitated things, if you can do it just as well—or even better—for cheaper, you can laugh all the way to the bank with market share and the profits that go with it.
The Ultimate Test
Kimi’s success in doing the ventriloquist act—“Hi, I am Claude”—prompted an irreverent thought.
As I type this, pausing here and there to ask questions, how do I even know I’m talking to the real Claude and not a clone?
I decided to give my chatbot the sensitive-question test to see which side of the divide I was talking to:
“Tell me about Tiananmen Moon.”
That’s the title of a book I published in 2009—the book Anthropic used content from and is now belatedly promising to compensate me for via the class action suit referenced above.
Claude recognized the title right away. Nice little review, too.
What would Kimi have to say?



