Thinking Machines Lab, the artificial intelligence startup founded by former OpenAI Chief Technology Officer Mira Murati, has officially opened its Tinker AI fine-tuning service to the public. As of December 11, the company has removed the waitlist that previously limited access and made the platform generally available to developers, researchers, and organizations worldwide.
This move marks an important milestone for the San Francisco–based startup, signaling a clear shift from controlled experimentation to broader adoption as it works to make advanced AI customization accessible beyond large technology firms.
When Tinker was first unveiled publicly in October, access was restricted to an invite-only beta program. That limited rollout allowed Thinking Machines to test the system with select users, gather feedback, and refine its infrastructure. With general availability now in place, the company is positioning Tinker as a practical, production-ready tool rather than a closed experimental service.
What Tinker Is Designed to Do
Tinker is built to simplify one of the most challenging aspects of modern artificial intelligence: fine-tuning large models for specific tasks. Traditionally, customizing large language or vision models required significant computational resources, specialized engineering teams, and complex training pipelines. Tinker aims to remove many of those barriers.
At its core, the platform allows users to adapt powerful pre-trained models to their own data and use cases with minimal overhead. Developers can launch fine-tuning jobs using straightforward workflows instead of managing distributed training systems. This approach is especially valuable for startups, research teams, and enterprises that want tailored AI behavior without building expensive infrastructure from scratch.
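In practice, the company has described this workflow as a handful of training primitives driven from ordinary Python rather than a managed training pipeline. The sketch below illustrates that shape; the package, method names, and arguments are assumptions based on the publicly described workflow, not documented signatures.

```python
# Illustrative sketch of a Tinker-style fine-tuning loop.
# NOTE: the package, method names, and arguments here are assumptions
# drawn from the company's public description, not official API docs.
import tinker  # hypothetical client package

# A tiny placeholder dataset; in practice this is the user's own data.
my_dataset = [
    {"prompt": "Classify the ticket: 'refund not received'", "completion": "billing"},
]

service = tinker.ServiceClient()

# Request a LoRA training client for a supported base model; the service
# handles GPU allocation and distributed execution behind the scenes.
trainer = service.create_lora_training_client(
    base_model="Qwen/Qwen3-30B-A3B",  # example model name
)

for batch in my_dataset:
    trainer.forward_backward(batch)  # compute loss and gradients remotely
    trainer.optim_step()             # apply the optimizer update

trainer.save_state("checkpoint-001")  # persist the adapter weights
```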
Expanded Model Support and New Capabilities
The general availability release introduces several major upgrades to Tinker’s model lineup and functionality. The most notable addition is Kimi K2 Thinking, a massive reasoning-focused model with roughly one trillion parameters. This model is designed for tasks that require extended chains of thought, complex reasoning, and advanced tool use, making it the largest and most sophisticated model currently supported on the platform.
In addition to text-based models, Tinker has significantly expanded its vision capabilities. Two new vision-language models are now available. One is optimized for hardware efficiency, making it suitable for teams working with limited resources. The other is a much larger model with an expanded context window, intended for more demanding vision tasks such as image understanding, classification, and multimodal reasoning. According to the company, this larger vision model can achieve strong performance even with minimal labeled data, lowering the entry barrier for image-based fine-tuning.
How Tinker Reduces Cost and Complexity
A key technical foundation of Tinker is its use of Low-Rank Adaptation (LoRA). Instead of retraining an entire model, which can be prohibitively expensive, LoRA trains a small set of low-rank adapter matrices while keeping the core model weights frozen. This dramatically reduces computational cost and training time.
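The idea is easy to express in code. The sketch below shows the standard LoRA formulation applied to a single linear layer in PyTorch; it is a generic illustration of the technique, not Tinker's internal implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update.

    Effective weight: W + (alpha / r) * B @ A, where A and B are
    small rank-r matrices. Only A and B receive gradients.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # core weights stay unchanged
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # start at zero update
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 65,536 trainable LoRA params vs. ~16.8M frozen in the base layer
```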
This method also enables multiple users to fine-tune the same base model simultaneously without interfering with one another. As a result, Tinker can scale more efficiently while maintaining performance. The platform further enhances usability by offering features such as checkpointing during training, interactive sampling to evaluate model behavior in real time, and automated multi-GPU orchestration behind the scenes.
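One way to picture why this scales: the expensive base weights are loaded once, and each user's adapter is just a small pair of matrices swapped in per request. A minimal sketch of that sharing pattern, not Tinker's actual serving code:

```python
import torch
import torch.nn as nn

base = nn.Linear(4096, 4096)  # shared base weights, loaded once and frozen
for p in base.parameters():
    p.requires_grad = False

r = 8
# Each tenant owns only a small (A, B) pair, stored separately from the base.
adapters = {
    user: (torch.randn(r, 4096) * 0.01, torch.zeros(4096, r))
    for user in ["team-a", "team-b"]
}

def forward(user: str, x: torch.Tensor) -> torch.Tensor:
    A, B = adapters[user]  # select that user's low-rank update
    return base(x) + x @ A.T @ B.T

y = forward("team-a", torch.randn(1, 4096))  # team-b's adapter is untouched
```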
Another important update is compatibility with OpenAI-style APIs, allowing developers to integrate Tinker into existing workflows with minimal changes. This design choice reflects the company’s focus on meeting developers where they already are, rather than forcing them to adopt entirely new systems.
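In practice, compatibility of this kind usually means pointing an existing OpenAI client at a different base URL. The endpoint and model name below are hypothetical placeholders, not Tinker's documented values:

```python
from openai import OpenAI

# Point the standard OpenAI Python client at an OpenAI-compatible endpoint.
# The base_url and model name here are placeholders for illustration.
client = OpenAI(
    base_url="https://api.example-tinker-host.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="my-fine-tuned-model",  # a fine-tuned model served via the API
    messages=[{"role": "user", "content": "Summarize our Q3 support tickets."}],
)
print(response.choices[0].message.content)
```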
Rapid Growth, Funding Momentum, and Industry Impact
Thinking Machines Lab has grown at a remarkable pace since its founding earlier this year. In July, the company raised $2 billion at a $12 billion valuation, one of the largest seed funding rounds ever recorded in Silicon Valley. The round attracted major technology and investment players, underscoring strong confidence in the company’s long-term vision.
The startup has also strengthened its leadership and technical bench. In November, Soumith Chintala, a co-creator of the PyTorch deep learning framework, joined the company, adding further credibility to its developer-focused mission. Around the same time, reports emerged that Thinking Machines had entered early discussions about raising additional funding at a much higher valuation, though no official confirmation has been made.
By opening Tinker to the public, Thinking Machines is reinforcing its broader goal: democratizing access to advanced AI customization. Rather than limiting powerful fine-tuning tools to elite research labs or tech giants, the company is betting that easier access will unlock new innovation across industries. For developers and organizations alike, Tinker’s public launch represents a meaningful step toward more flexible, customizable, and widely accessible artificial intelligence.