Meta will make their next LLM free for commercial use, putting immense pressure on OpenAI and Google
IMO, this is a major development in the open-source AI world, as Meta's LLaMA is already one of the most popular base models for researchers to build on.
My full deep dive is here, but I've summarized the key points on why this is important below for Reddit community discussion.
Why does this matter?
Meta plans on offering a commercial license for their next open-source LLM, which means companies can freely adopt and profit off their AI model for the first time.
Meta's current LLaMA is already the most popular open-source foundation model in use. Many of the new open-source LLMs you're seeing released use LLaMA as the foundation.
But LLaMA is licensed for research use only; opening it up for commercial use would really drive adoption. And this in turn places massive pressure on Google + OpenAI.
There's likely massive demand for this already: I speak with ML engineers in my day job and many are tinkering with LLaMA on the side. But they can't productionize these models into their commercial software, so the commercial license from Meta would be the big unlock for rapid adoption.
How are OpenAI and Google responding?
Google seems pretty intent on the closed-source route. Even though an internal memo from an AI engineer called them out for having "no moat" with their closed-source strategy, executive leadership isn't budging.
OpenAI is feeling the heat and plans on releasing their own open-source model. Rumors have it this won't be anywhere near GPT-4's power, but it clearly shows they're worried and don't want to lose market share. Meanwhile, Altman is pitching global regulation of AI models as his big policy goal.
Even the US government seems worried about open source: last week a bipartisan Senate group sent a letter to Meta asking them to explain why they irresponsibly released a powerful open-source model into the wild.
Meta, in the meantime, is enjoying the limelight from their contrarian approach.
In an interview this week, Meta's Chief AI Scientist Yann LeCun dismissed worries about AI posing dangers to humanity as "preposterously ridiculous."