After the Claude Code leak: functionality replicated, Anthropic's technology barrier tested.

21:07 02/04/2026
GMT Eight
The artificial intelligence company Anthropic (ANTHRO) is urgently working to contain the fallout after accidentally leaking the underlying instructions of Claude Code, its AI coding agent.
As of Wednesday morning, Anthropic representatives had used copyright takedown requests to remove more than 8,000 copies and adaptations of Claude Code's original instructions (source code) shared by developers on the programming platform GitHub. The company later narrowed the request to cover just 96 copies and adaptations, acknowledging that the initial takedown had affected more GitHub accounts than intended.

A company spokesperson said the leak of "some internal source code" exposed no customer information or data. Nor did it involve the valuable internal numerical parameters (known as weights) of the company's expensive and powerful AI models. "This was a human error leading to a packaging issue, not a security vulnerability. We are rolling out a series of measures to prevent such incidents from happening again," the spokesperson said.

The leak did, however, expose commercially sensitive information, including the proprietary techniques, tools, and instructions Anthropic uses to operate its AI models as coding agents. These techniques and tools are referred to as "kits" because they allow users to control and command the models. As a result, Anthropic's competitors, along with a number of startups and developers, now have a shortcut to replicating Claude Code's functionality without having to reverse engineer it, as was previously necessary.

The information was accidentally disclosed when the company updated the AI tool on Tuesday. Like most proprietary software, Claude Code's source code is normally difficult to reverse engineer.
This time, however, the company published a file type on GitHub that linked back to the source code, allowing it to be downloaded and parsed externally. A user on X discovered the leak and quickly spread the word; copies began to multiply within hours.

Programmers who reviewed the code were impressed by some of the tricks Anthropic uses to run its Claude AI model as Claude Code. One feature has the model periodically step back from its tasks and consolidate memories, a process the company calls "dreaming." Another suggests that Claude Code can enter an "undercover" mode in certain situations, concealing its AI identity when posting code to platforms such as GitHub.

After Anthropic requested the removal of copies of its proprietary code from GitHub, another programmer recreated Claude Code's functionality in a different programming language using other AI tools, posting on GitHub that this was done to keep the information available while avoiding the risk of takedown. The new version has gained popularity on the platform.

Dan Guido, CEO of the cybersecurity firm Trail of Bits, said the leak was useful because it revealed hidden features and upcoming models, but that it is unlikely to be exploited by hackers. Hackers could already reverse engineer the code before the leak, he added, and because Claude Code is frequently rewritten, the leaked code will quickly become outdated.

Last week, it was reported that Anthropic's latest and most powerful AI model, Claude Mythos, had been exposed through a data leak. The same night that news of the new model became public, Anthropic, which is backed by Amazon and Google, won a court order blocking the Trump administration's ban on government use of its AI models. The company is seeking a stay from the US Court of Appeals on the Pentagon's designation of it as a supply chain risk, pending judicial review of the case.
Anthropic has sued the US Department of Defense, which terminated its contract with the AI startup and labeled it a supply chain risk. In court documents, Anthropic said the US government's blacklisting could cost the company billions of dollars in revenue by 2026. Separately, Anthropic PBC is considering going public through an initial public offering (IPO) as early as October, joining rival OpenAI in the race to list.