The global race for artificial intelligence (AI) dominance has taken center stage, with companies divided into two distinct camps: open source and closed source. As the debate heats up, the focus is on democratizing technology while prioritizing safety and profitability.
Open-source AI, in which the source code is publicly available for use, modification, and distribution, encourages innovation by allowing developers to build on existing algorithms and models. Closed-source AI, by contrast, keeps the code private; only the owning company can modify it.
The Open Source Initiative (OSI) defines open source as meeting 10 criteria, including free redistribution, non-discrimination, and license freedom. However, most companies claiming the label only partially meet these criteria. French AI company Mistral, for example, releases its model weights openly but not its training data or training process.
Proponents of open source AI, such as Alex Combessie, CEO of French company Giskard, argue that it levels the playing field by making technology accessible to all, allowing smaller players to compete with tech giants. Open source also enables transparency, as the code can be audited by regulators.
On the other hand, closed-source AI companies like OpenAI, the creator of ChatGPT, cite safety concerns as a reason for restricting access. In 2019, OpenAI initially withheld the full version of its GPT-2 language model, citing its potential to generate convincing fake news if misused. Closed-source AI also often includes usage policies that block harmful requests, such as instructions for creating weapons or deadly viruses.
The debate has divided Big Tech, with Meta and IBM launching the AI Alliance, advocating for an “open science” approach, while OpenAI, Google, and Microsoft remain in the closed-source camp. The Alliance, consisting of 74 companies, promotes open, responsible, and transparent AI to advance safety and mitigate risks.
Funding and monetization also shape the debate, with some investors favoring the defensible intellectual property that closed-source models offer. Open source proponents counter that broad access to models and code spurs creative and innovative development.
Regulators in Europe and the United States have taken differing stances on the issue. The upcoming EU AI Act is expected to exempt open source AI systems unless they are classified as high-risk, while the U.S. government has expressed concerns about the security risks associated with publicly available foundation models.
As the AI showdown continues, finding a balance between innovation, safety, and profitability will be crucial to shaping the future of this transformative technology.