Meta launches standalone AI app challenging ChatGPT

Meta has launched a standalone AI app built on its Llama 4 model, directly challenging OpenAI's ChatGPT by offering users a dedicated platform for text and voice conversations, image generation, and personalized assistance. According to TechCrunch and other sources, the new Meta AI app leverages the company’s vast social data to deliver tailored responses, introduces features like a Discover feed for sharing AI interactions, and marks Meta’s most ambitious move yet to compete in the rapidly evolving AI assistant market.

Curated by mikeharb · 3 min read
Discover Feed Social Sharing

One of the app’s standout features is the Discover feed: a social hub where users can browse, share, and remix AI prompts and interactions. You’ll find a curated stream of creative exchanges, from clever prompts to quirky AI-generated summaries, all shared by people who’ve opted in. This isn’t just passive scrolling: users can like, comment, share, or even remix prompts to put their own spin on someone else’s idea, making AI exploration a communal experience.

Importantly, nothing appears in the Discover feed unless you choose to share it, putting privacy firmly in your hands. The result is a feed that feels more like a collaborative playground than a sterile showcase, helping users discover new ways to interact with AI while connecting with friends and the broader community. Whether you’re looking for inspiration, want to show off a clever prompt, or just enjoy seeing what others are up to, the Discover feed makes AI feel social, not solitary.

Full-Duplex Speech Demo

One standout feature in the new app is its experimental full-duplex speech demo, which lets users and the AI speak and listen simultaneously, with no awkward pauses or turn-taking. This technology is designed to mimic the natural rhythm of human conversation, allowing for interruptions, back-and-forth banter, and overlapping speech, much like chatting with a friend. The demo can be toggled on or off, giving users a taste of what fluid, real-time voice interaction with AI feels like, though it’s currently limited to select regions such as the US, Canada, Australia, and New Zealand.

Unlike traditional voice assistants that wait for you to finish before responding, this system leverages Llama 4’s conversational prowess to generate responses on the fly, trained specifically on dialogue rather than just reading text aloud. While the feature doesn’t pull in live web data and may still have technical hiccups, it’s a bold step toward truly conversational AI, offering a glimpse into the future of seamless digital dialogue.
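The key behavioral difference from half-duplex assistants is "barge-in": user speech interrupts the assistant mid-utterance instead of being queued until it finishes. The toy sketch below illustrates that turn-handling logic only; it is a hypothetical model, not Meta's actual implementation, and the class and method names are invented for illustration.

```python
class FullDuplexSession:
    """Toy sketch of full-duplex turn handling: the assistant can be
    interrupted mid-response instead of waiting for the user to finish."""

    def __init__(self):
        self.transcript = []        # ordered (speaker, utterance) pairs
        self.assistant_talking = False

    def assistant_say(self, text):
        self.assistant_talking = True
        self.transcript.append(("assistant", text))

    def user_speak(self, text):
        # Full-duplex behavior: user speech barges in, cutting the
        # assistant off immediately rather than queuing until it finishes.
        if self.assistant_talking:
            self.transcript.append(("assistant", "[interrupted]"))
            self.assistant_talking = False
        self.transcript.append(("user", text))


session = FullDuplexSession()
session.assistant_say("The weather today is sunny with a high of...")
session.user_speak("Actually, what about tomorrow?")
```

In a half-duplex design the second utterance would wait in a buffer; here it immediately ends the assistant's turn, which is what makes overlapping speech feel natural.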

Llama 4 Model Integration

The Llama 4 model isn’t just powering Meta’s new AI app; it’s rapidly being woven into a broad ecosystem of platforms and services. Developers and enterprises can now access Llama 4 through major cloud providers like Azure AI Studio, Amazon Bedrock, and Hugging Face, making it easy to experiment, deploy, and scale across a variety of environments. The model’s architecture, with its Mixture-of-Experts design, allows for efficient inference even at massive scale, letting platforms activate only the necessary “experts” for each task. This means Llama 4 can deliver its advanced multimodal and long-context capabilities without overwhelming infrastructure, whether it’s summarizing millions of tokens or generating personalized content from text, images, or voice.
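The efficiency claim behind Mixture-of-Experts is that a gating network scores every expert but only runs the top-k of them per input, so compute grows with k rather than with the total expert count. A minimal routing sketch, using toy linear experts rather than anything resembling Llama 4's real layers:

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Minimal Mixture-of-Experts routing: score all experts, run only
    the top-k, and mix their outputs with softmax-normalized weights."""
    scores = x @ gate_w                         # one score per expert
    top = np.argsort(scores)[-k:]               # indices of the k best experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                    # softmax over selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Four toy "experts", each just scaling the input differently.
experts = [lambda x, s=s: s * x for s in (1.0, 2.0, 3.0, 4.0)]
gate_w = np.array([[0.1, 0.9, 0.0, 0.2],
                   [0.0, 0.8, 0.1, 0.3]])      # shape: (input_dim, num_experts)

x = np.array([1.0, 1.0])
y = moe_layer(x, gate_w, experts, k=2)          # only 2 of the 4 experts execute
```

With k=2 of 4 experts active, only half the expert compute runs per token; production MoE models apply the same idea with far larger expert counts and learned gating.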

Integration isn’t limited to cloud and enterprise tools: Llama 4 is also being embedded into Meta’s own products, from Facebook and Instagram to Ray-Ban smart glasses, and now the standalone app. This seamless cross-platform integration allows users to interact with Meta AI wherever they are, with context and personalization following them from one device to another. The result is a flexible, developer-friendly model that’s as at home in a data center as it is in your pocket, ready to deliver next-generation AI experiences at scale.
