Meta opens the gates to its generative AI tech


Michael Fassbender

Meta, the parent company of Facebook, has launched a standalone image-generator website called Imagine with Meta AI. The move comes in the wake of Google’s Gemini launch and deepens Meta’s push into generative AI; WhatsApp has offered a beta in-app image generator since August of this year. That earlier feature required a Meta app installed on a smartphone, whereas Imagine asks only for an email address to create an account. Once registered, users generate images by entering a simple text prompt, much as with DALL-E.

In testing, the site produced four 1,280 x 1,280-pixel JPEG images per prompt, downloadable by clicking the three dots in the upper-right corner and selecting the option from the drop-down menu. The model readily rendered famous cartoon characters such as Homer Simpson and Mickey Mouse, potential copyright restrictions notwithstanding. The results showed noticeable flaws, however, with parts of a picture melting into each other and characters appearing bizarre.

Despite the impressive output, Meta has implemented restrictions to keep the content family-friendly, refusing prompts that are violent, sexual, or that name famous individuals, though indirect wording can still bypass these filters. Meta also plans to introduce “invisible watermarking” for increased transparency and traceability, but that feature is still in development.
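Meta has not disclosed how its watermarking will work, but the general idea behind an invisible watermark can be sketched with a toy least-significant-bit scheme: tweak each pixel value by at most 1, which viewers cannot see but software can decode. This is purely illustrative; production systems use far more robust, tamper-resistant techniques.

```python
# Toy "invisible" watermark: hide a short message in the least significant
# bit of each pixel value. Illustrative only -- not Meta's actual method.

def embed(pixels, message):
    """Embed message bytes, bit by bit, into the LSB of each pixel value."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, length):
    """Recover `length` bytes from the pixel LSBs (MSB-first, as embedded)."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for bit in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (bit & 1)
        data.append(byte)
    return bytes(data)

pixels = [200, 13, 77, 255, 0, 128, 64, 32] * 8  # stand-in for grayscale data
marked = embed(pixels, b"meta-ai")
assert extract(marked, 7) == b"meta-ai"
# Each pixel changes by at most 1, so the mark is visually imperceptible.
assert all(abs(a - b) <= 1 for a, b in zip(pixels, marked))
```

The catch, and the reason this is only a sketch, is that an LSB mark is destroyed by recompression or resizing; real provenance schemes are designed to survive such transformations.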

The Open-Source Boom

A leaked memo from a senior engineer at Google highlighted the threat posed by open-source alternatives to the AI models developed by big tech companies. This open-source trend has led to the emergence of smaller, cheaper versions of the best-in-class AI models, which are shared for free, challenging the dominance of major firms in the AI space. Companies like Google and Meta, which have been integrating generative AI into various products, are now facing competition from open-source models that match their performance.

However, the open-source boom also presents challenges, as many of these models are built on top of large language models released by big firms. If these companies decide to restrict access to their models, it could hinder the progress of open-source AI development. The future of AI technology is at a crossroads, with the balance between open-source innovation and proprietary control being a key consideration.

Challenges and Opportunities in Open-Source AI

The open-source movement in AI has gained momentum in recent years, with startups and research organizations releasing alternative models to the ones developed by major tech companies. For instance, Hugging Face, a startup advocating for free and open access to AI, unveiled the first open-source alternative to ChatGPT, a popular chatbot released by OpenAI. This trend has democratized access to AI technology, inspiring developers to create new tools and explore its potential applications.

However, training large language models from scratch remains a challenging and costly endeavor, limiting the ability of smaller groups to develop advanced AI models. The reliance on existing models, such as Meta’s LLaMA, has become a common practice in open-source AI development. While open-source models have diversified the pool of contributors to AI technology, concerns about misinformation, prejudice, and hate speech have prompted the need for responsible democratization and accountability in releasing AI models.

The Role of Meta AI in Open-Source Development

Meta AI has played a crucial role in open-source AI development by training and releasing models to the research community. The company’s commitment to open-source development has enabled a diverse range of individuals and organizations to contribute to the advancement of AI technology. However, the balance between transparency and safety remains a key consideration, as large language models can be misused for harmful purposes.

As the open-source ecosystem evolves, the decisions companies like Meta AI and OpenAI make about releasing their models will shape the future of open-source innovation. The liability and safety risks these models carry have prompted calls for responsible democratization and for mechanisms to prevent misuse.

Implications of Closed Access to AI Models

The recent shift toward restricting access to AI models by companies like OpenAI and Meta AI has raised concerns about the impact on open-source innovation. While closed access may address safety implications and competitive pressures, it could stall open-source development. How far AI should be democratized, and what role big tech firms should play in shaping its future, remains an active debate within the AI community.

As the landscape evolves, companies’ decisions about the accessibility of their models will shape both open-source innovation and AI’s broader impact on society. Balancing open-source ideals against accountability and safety measures will remain a focal point in the development and deployment of the technology.


Open-source alternatives to the models of major tech companies have sparked a new wave of innovation and collaboration in the AI community. They have democratized access to the technology, but they have also raised hard questions about accountability, and the release decisions of companies like Meta AI and OpenAI will do much to determine how that tension resolves.
