OpenAI Unveils First Open-Weight Reasoning Models Since GPT-2, Optimized for Laptops

OpenAI today announced the release of two open-weight language models designed to deliver strong reasoning capabilities on consumer-grade devices, the company’s first open models since GPT‑2 in 2019. Optimized to run on laptops and single GPUs, the models, gpt‑oss‑120b and gpt‑oss‑20b, are built to perform comparably to OpenAI’s small proprietary reasoning models.

Open-weight language models differ from open-source models in that they provide access to trained parameters, or “weights,” without disclosing the full training data or source code. This allows developers to analyze, fine‑tune and deploy models locally, often without relying on cloud infrastructure.

“One of the things that is unique about open models is that people can run them locally. People can run them behind their own firewall, on their own infrastructure,” OpenAI co‑founder Greg Brockman said during a press briefing on Tuesday.

The models were trained on a large text-only corpus emphasizing scientific, mathematical and coding material, which the company says gives them strong performance on coding competitions, competition mathematics problems and health-related questions.

OpenAI said gpt‑oss‑20b is small enough to run directly on a personal computer, while gpt‑oss‑120b runs efficiently on a single GPU. The models reportedly match OpenAI’s proprietary o3‑mini and o4‑mini reasoning models, though the company published no benchmark comparisons against DeepSeek’s R1.
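As a rough illustration of what running such a model locally might look like, the sketch below loads an open-weight checkpoint with the Hugging Face Transformers library. The model identifier "openai/gpt-oss-20b" and the hardware settings are assumptions for illustration, not details confirmed in the announcement.

```python
# Minimal sketch: local inference with an open-weight model via Hugging Face
# Transformers. The model ID "openai/gpt-oss-20b" is assumed for illustration;
# substitute whatever identifier the weights are actually published under.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed model identifier
    device_map="auto",           # place layers on available GPU(s) or CPU
    torch_dtype="auto",          # use the checkpoint's native precision
)

output = generator(
    "Explain why the sky is blue in two sentences.",
    max_new_tokens=128,
)
print(output[0]["generated_text"])
```

Because the weights are downloaded rather than accessed through an API, a workflow like this keeps prompts and outputs entirely on local infrastructure, which is the deployment pattern Brockman described.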

In a separate statement, Amazon Web Services (AWS) announced that OpenAI’s open-weight models can now be accessed through its Bedrock generative AI marketplace, the first time an OpenAI model has been listed on the AWS platform.

“OpenAI has been building fantastic models and we think these will be great open‑weight alternatives for customers,” noted Atul Deo, director of product at AWS Bedrock. He would not discuss contractual arrangements between AWS and OpenAI.

The release comes amid growing competition in the open-model segment. Meta’s Llama series was long the de facto market leader until it was displaced earlier this year by China’s DeepSeek, whose low-cost reasoning model arrived as Meta’s much-anticipated Llama 4 fell short of expectations.

The release also coincides with OpenAI’s ongoing efforts to raise up to $40 billion in new funding led by Japan’s SoftBank Group (9984.T), which could push the Microsoft‑backed company’s valuation to $300 billion.

Amazon’s move underscores the strategic importance of offering high-performance AI models through cloud marketplaces. Amazon shares fell recently after a report of slower growth at its AWS business compared with peers.

By publicly releasing its reasoning models without the end-to-end training code or data, OpenAI aims to bridge developer flexibility and enterprise-style control, potentially redefining how sophisticated AI tools are distributed beyond proprietary environments.

Written By Rodney Mbua