Meta has launched a standalone AI app built on its Llama 4 model, directly challenging OpenAI's ChatGPT by offering users a dedicated platform for text and voice conversations, image generation, and personalized assistance. According to TechCrunch and other sources, the new Meta AI app leverages the company’s vast social data to deliver tailored responses, introduces features like a Discover feed for sharing AI interactions, and marks Meta’s most ambitious move yet to compete in the rapidly evolving AI assistant market.
One of the app’s standout features is the Discover feed, a social hub where users can browse, share, and remix AI prompts and interactions. You’ll find a curated stream of creative exchanges, from clever prompts to quirky AI-generated summaries, all shared by people who’ve opted in. This isn’t just passive scrolling: users can like, comment, share, or even remix prompts to put their own spin on someone else’s idea, making AI exploration a communal experience.
Importantly, nothing appears in the Discover feed unless you choose to share it, putting privacy firmly in your hands. The result is a feed that feels more like a collaborative playground than a sterile showcase, helping users discover new ways to interact with AI while connecting with friends and the broader community. Whether you’re looking for inspiration, want to show off a clever prompt, or just enjoy seeing what others are up to, the Discover feed makes AI feel social, not solitary.
One standout feature in the new app is its experimental full-duplex speech demo, which lets users and the AI speak and listen simultaneously, with no awkward pauses or rigid turn-taking. This technology is designed to mimic the natural rhythm of human conversation, allowing for interruptions, back-and-forth banter, and overlapping speech, much like chatting with a friend. The demo can be toggled on or off, giving users a taste of what fluid, real-time voice interaction with AI feels like, though it’s currently limited to select regions such as the US, Canada, Australia, and New Zealand.
Unlike traditional voice assistants that wait for you to finish before responding, this system relies on Llama 4’s conversational strengths: the model is trained on dialogue rather than simply reading text aloud, so it can generate spoken responses on the fly. While the feature doesn’t pull in live web data and may still have technical hiccups, it’s a bold step toward truly conversational AI, offering a glimpse into the future of seamless digital dialogue.
The Llama 4 model isn’t just powering Meta’s new AI app; it’s rapidly being woven into a broad ecosystem of platforms and services. Developers and enterprises can now access Llama 4 through major cloud providers and model hubs such as Azure AI Studio, Amazon Bedrock, and Hugging Face, making it easy to experiment, deploy, and scale across a variety of environments. The model’s Mixture-of-Experts architecture allows for efficient inference even at massive scale, activating only the “experts” needed for each task. This means Llama 4 can deliver its advanced multimodal and long-context capabilities without overwhelming infrastructure, whether it’s summarizing millions of tokens or generating personalized content from text, images, or voice.
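For developers curious about that access path, here is a minimal sketch of loading a Llama 4 checkpoint through the Hugging Face transformers library. The model ID, hardware assumptions, and prompt are illustrative only; the actual repositories are gated and require accepting Meta’s license, and none of these specifics come from Meta’s announcement.

```python
# Minimal sketch: calling a Llama 4 checkpoint via Hugging Face transformers.
# The model ID below is an assumption for illustration; the real repo is gated
# (license acceptance on huggingface.co) and needs substantial GPU memory.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed repo name
    device_map="auto",  # requires the accelerate package to spread weights
)

# Chat-style input; the pipeline applies the model's chat template.
messages = [
    {"role": "user", "content": "Summarize the key ideas of this report in five bullets."}
]

result = chat(messages, max_new_tokens=256)
print(result[0]["generated_text"])
```

The same model can be reached as a managed endpoint on Azure AI Studio or Amazon Bedrock instead, which trades local GPU requirements for per-token pricing; the local route above is simply the most direct way to experiment.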
Integration isn’t limited to cloud and enterprise tools: Llama 4 is also being embedded into Meta’s own products, from Facebook and Instagram to Ray-Ban smart glasses, and now the standalone app. This seamless cross-platform integration allows users to interact with Meta AI wherever they are, with context and personalization following them from one device to another. The result is a flexible, developer-friendly model that’s as at home in a data center as it is in your pocket, ready to deliver next-generation AI experiences at scale.