How a Public-Benefit Startup Plans to Make Open Source the Default for Serious AI

Manos Koukoumidis on Truly Open AI, The Oumi Platform, AI Safety, and Enterprise Adoption.

Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.

Oumi Labs CEO Manos Koukoumidis lays out a vision for “unconditionally open” foundation models—where data, code, weights, and recipes are all transparent and reproducible—arguing this is the only path to production-grade, trustworthy AI. He details Oumi’s DevOps-style platform that lets anyone curate data, run training pipelines, and benchmark results, plus HallOumi, a claim-verification model that outperforms general LLMs at spotting hallucinations. The conversation spans open-source governance, safety managed through community scrutiny, and a Red Hat-style business model designed to make open AI sustainable for enterprises.

Support our work by subscribing to our newsletter📩


Transcript

Below is a heavily edited excerpt, in Question & Answer format.

Company Structure and Team

What is Oumi and what does the name stand for?

Oumi is an open source AI lab. The name stands for Open Universal Machine Intelligence, with the tagline “Let’s build better AI and open is the path forward.” It’s structured as a public benefit corporation (PBC), which means it’s for-profit but has a strong, legally binding mission to benefit the public.

What are “founding scholars” at Oumi?

Founding scholars are academics who were involved with Oumi in the very early days, some even before the company was officially incorporated. They have equity stakes and are more deeply involved than typical advisors. There are around 15 founding scholars, with more collaborators joining as the project grows.

Open Source Vision for AI

What do you mean by AI having a “Linux moment,” and what does “unconditionally open” mean?

When we say “truly open,” we adhere to the OSI standard, which requires open data, open code, and open weights. But we go beyond this to “open collaboration” – making it easy for others to reproduce, extend, and contribute to making models better. If something is open but people can’t push it forward, it doesn’t help much.

Just as Linux became the foundation for operating systems, AI models should become a common utility that anyone can build upon. The community needs all the necessary pieces to replicate and improve upon the work without barriers.

Why is this openness important for AI development?

AI has become the foundation not just for the tech industry, but for healthcare and all sciences. It would be a disservice not to make it a public utility that’s easy for anyone to leverage and contribute to. The foundation models should be a common utility that benefits everyone.

How does the current state of “open” models compare to your vision?

Currently, even the most open models available today, such as Llama, DeepSeek, and Alibaba’s Qwen, provide only open weights. While this is a great start and we’re grateful for these efforts, it’s not the full picture of what “open” should mean.

For the near term, we’ll likely see openness primarily in post-training rather than pre-training (which requires enormous resources). Pre-training massive models from scratch is currently prohibitive for smaller organizations, but there’s a huge opportunity for the open community to take existing open models and make them better through post-training collaboration.
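To make the post-training idea concrete, here is a minimal, generic sketch of supervised fine-tuning applied to an open-weights base model with the Hugging Face transformers and datasets libraries. The model and dataset names are placeholders chosen for illustration, and this is not Oumi’s platform or its recipes; it only shows the general pattern of taking released weights and continuing to train them on new data.

```python
# Minimal sketch: supervised fine-tuning ("post-training") of an open-weights
# base model on instruction data. Model and dataset names are illustrative
# placeholders, not Oumi recipes.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "HuggingFaceTB/SmolLM2-135M"          # placeholder open-weights model
raw = load_dataset("yahma/alpaca-cleaned", split="train[:1000]")  # placeholder data

tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token      # ensure padding is defined
model = AutoModelForCausalLM.from_pretrained(base_model)

def tokenize(example):
    # Concatenate instruction and response into one training string.
    text = f"{example['instruction']}\n{example['output']}{tokenizer.eos_token}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = raw.map(tokenize, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="post-trained-model",
        per_device_train_batch_size=2,
        max_steps=100,                             # short run for illustration
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The broader point in the conversation is that this kind of workflow, along with the data curation and evaluation around it, should be reproducible end to end rather than locked behind closed recipes.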

For a full transcript, see our newsletter.