#001 - Open Source or Closed Source for AI Development?

The future lies in a balanced approach, taking the best from open-source and closed AI models.

Some of the best technological breakthroughs have come from the open-source movement, be it the Linux operating system or the Apache web server. Open-source AI carries the same ethos of transparency and community-driven development, which offers enormous advantages: it democratizes AI development, accelerates innovation, and ensures the resulting benefits reach a wide population. Hugging Face is one of the best examples, having successfully rallied researchers and developers from across the globe to innovate together in natural language processing.

However, the potential dangers of open-source AI are too big to ignore. AI technologies misused in the wrong hands could threaten national security. Closed AI systems offer substantial benefits in protecting proprietary information and securing AI models. They also support faster development cycles, ease of use, and commercial advantage, all vital if the United States is to hold on to its competitive edge.

While critics of open-source AI often raise concerns about intellectual property protection and commercial viability, these challenges can be met with sound regulatory frameworks. Clear guidance on the responsible development and deployment of AI will help balance fostering innovation against its perils.

The EU's proposed AI Act is a prime example of such a regulatory effort. It aims to set harmonized rules for the development and use of AI systems, including open-source models, imposes stricter requirements on the AI applications deemed most risky, and lays down a risk-based regulatory framework for a well-functioning, balanced, and trusted AI ecosystem.

Given these considerations, I propose a hybrid approach to AI development for the United States: one that draws on the innovation and transparency of open-source AI while using strong frameworks to address security and ethical concerns.

For example, we can encourage sharing AI models and research in a way that keeps them open to community contribution and scrutiny, while keeping certain components secure.

This calls on industry leaders to embrace transparency and to work with researchers and communities by contributing to open-source AI initiatives. Developers and researchers, in turn, must contribute to and participate in open-source projects in accordance with established guidelines and norms for responsible AI development.

Thus, the debate over whether AI should be open or closed source is in no way binary; it is a spectrum of possibilities. We need a balanced approach that combines the strengths of both models in a way that meets the demands of innovation, national security, and ethical standards in AI development.


What questions do you have about artificial intelligence in Life sciences? No question is too big or too small.
