Stable Diffusion startup launches LLM code generator StableCode


With StableCode, Stability AI releases a language model for code generation as open-source software under the Apache license, version 2.0.

According to Stability AI, StableCode comprises three models. The three-billion-parameter base model, built on EleutherAI’s GPT-NeoX, was first pre-trained on numerous programming languages from the BigCode dataset and then further trained on additional languages such as Python, Go, Java, JavaScript, C, Markdown, and C++, for a total of 560 billion tokens of code.

On top of the base model, Stability AI built an instruction model, fine-tuned in the Alpaca style on concrete application examples so it can “solve complex programming tasks.” In total, Stability AI used 120,000 pairs of instructions and their solutions.

Stability AI’s announcement does not include an evaluation of the model’s performance against existing models such as StarCoder or GitHub Copilot.


StableCode 16K: code model with large context window

In addition to the standard model with a 4K context window, StableCode is also available in a 16K variant. The larger context window allows the model to view more code at once to solve a task, potentially generating better code.

According to Stability AI, the 16K model can view or edit the equivalent of up to five medium-sized Python files at once, which should be especially helpful for beginners. Both models can generate and complete single or multiple lines of code.
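As a rough back-of-envelope illustration of the “five medium-sized Python files” claim, a 16K window split five ways leaves a few thousand tokens per file. The per-line token figure below is an assumption for typical Python source, not a number published by Stability AI:

```python
# Back-of-envelope: how a 16K context window maps to "up to five
# medium-sized Python files". TOKENS_PER_LINE is an illustrative
# assumption, not a figure from Stability AI.

CONTEXT_TOKENS_16K = 16_384   # the 16K variant's context window
CONTEXT_TOKENS_4K = 4_096     # the standard model's window
FILES = 5                     # files the 16K model can reportedly view at once
TOKENS_PER_LINE = 12          # rough average for a line of Python (assumption)

tokens_per_file = CONTEXT_TOKENS_16K // FILES
lines_per_file = tokens_per_file // TOKENS_PER_LINE

print(f"16K window: ~{tokens_per_file} tokens per file, "
      f"~{lines_per_file} lines of code each")
print(f"4K window: ~{CONTEXT_TOKENS_4K // TOKENS_PER_LINE} lines in total")
```

By this estimate each of the five files could be a few hundred lines long, while the standard 4K model would see only about a quarter as much code in total.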

“People of every background will soon be able to create code to solve their everyday problems and improve their lives using AI, and we’d like to help make this happen,” the company writes. Stability AI CEO Emad Mostaque also teases “very interesting variations” of StableCode in the pipeline, claiming that “programming will be transformed so that there will be 1b coders.”

In addition to version 1.0 of its SDXL image model, Stability AI released its first open-source language model, StableLM, in April, as well as “Free Willy,” a language model based on Meta’s Llama 2 and fine-tuned on a synthetic dataset. Free Willy matches or exceeds the performance of the original model and, to some extent, GPT-3.5 (ChatGPT).
