At its inaugural LlamaCon AI developer conference, Meta unveiled the Llama API in limited preview, allowing developers to build applications with its popular Llama models. It also launched a standalone Meta AI assistant app powered by Llama 4 that offers personalized text, voice, and image capabilities, competing directly with ChatGPT and other AI assistants.
Meta's standalone AI app marks its most ambitious push into consumer-facing AI, bringing the power of Llama 4 beyond its social platforms into a dedicated application for iOS and Android. The app features a chat interface with image generation and editing capabilities, and introduces an experimental full-duplex speech technology for more natural voice conversations. The voice feature also lets users multitask, continuing an AI conversation while using other apps.
What sets Meta AI apart from competitors like ChatGPT and Claude is its ability to leverage Meta's vast stores of user data for personalization. The app can remember user preferences and draw on information already shared across Meta's platforms, such as profile details and content engagement patterns. It also introduces a social dimension through its "Discover" feed, where users can view how others are using AI and share their own AI experiences. For Ray-Ban Meta glasses owners, the app replaces the previous Meta View companion app, allowing them to manage their glasses and handle media captured with the device.
Ray-Ban Meta glasses have evolved from simple camera glasses to sophisticated AI-powered wearables. The integration of Meta's Llama AI models has transformed these devices into hands-free assistants capable of understanding both visual and audio inputs. Initially powered by Llama 2, the glasses now use the Llama 3.1 70B model, with multimodal capabilities that allow users to interact with their surroundings in new ways.
The AI features include real-time language translation across English, French, Italian, and Spanish, visual analysis of surroundings, smart memory for remembering where you parked, and hands-free social media interactions like posting to Instagram or sending messages via Messenger. Users can activate these features with the "Hey Meta" voice command, enabling seamless interaction with the AI assistant while on the go. The glasses also support music streaming services including Spotify, Apple Music, and Shazam, making them versatile companions for daily activities without requiring users to reach for their phones.
The Llama API provides developers with a comprehensive toolkit for building AI-powered applications. Key features include support for multiple Llama models (including Llama 4 Maverick, Llama 4 Scout, and various Llama 3 versions); SDKs in Python, TypeScript, and Go, with Ruby and Java coming soon; and compatibility with the OpenAI SDK for easy migration of existing applications. The API offers function calling, allowing models to invoke custom functions such as database queries or email sending (see the sketch below), and supports advanced features such as structured outputs and fine-tuning options.
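To make the function-calling and OpenAI-compatibility points concrete, here is a minimal sketch in Python using the OpenAI SDK pointed at the Llama API. The base URL, model identifier, and the query_orders tool are illustrative assumptions rather than confirmed endpoint or model names.

```python
# Minimal sketch: function calling via the OpenAI-compatible interface.
# The base_url, model name, and tool definition below are assumptions.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_LLAMA_API_KEY",                 # key created in the Llama API dashboard
    base_url="https://api.llama.com/compat/v1/",  # assumed OpenAI-compatible endpoint
)

# Declare a custom function the model is allowed to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "query_orders",  # hypothetical application-side function
            "description": "Look up a customer's recent orders by email address.",
            "parameters": {
                "type": "object",
                "properties": {
                    "email": {"type": "string", "description": "Customer email address"},
                },
                "required": ["email"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="Llama-4-Maverick",  # model identifier is an assumption
    messages=[{"role": "user", "content": "What did jane@example.com order last week?"}],
    tools=tools,
)

# If the model chose to call the function, the call details appear here.
print(response.choices[0].message.tool_calls)
```

The application would then execute the requested function itself and send the result back in a follow-up message, as in the standard OpenAI-style tool-calling loop.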
Developers can get started with one-click API key creation and access to interactive playgrounds for exploring different models. The API supports both text-only and multimodal requests (depending on the model), with the latter enabling image analysis and visual reasoning. Parameters such as temperature, max tokens, and frequency penalty can be adjusted to control model outputs, as sketched in the example below, while the expanded 128K-token context length in newer models allows longer documents and more complex reasoning tasks to be processed.
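A brief sketch of tuning those sampling parameters while passing a long document into the large context window, again through the assumed OpenAI-compatible client; the model identifier, file name, and parameter values are illustrative.

```python
# Sketch: controlling output with sampling parameters on a long-context request.
# Endpoint, model name, and values are assumptions, not confirmed settings.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_LLAMA_API_KEY",
    base_url="https://api.llama.com/compat/v1/",  # assumed OpenAI-compatible endpoint
)

# With a 128K-token context window, a sizable document can fit in one request.
with open("quarterly_report.txt") as f:
    long_document = f.read()

response = client.chat.completions.create(
    model="Llama-4-Scout",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise technical summarizer."},
        {"role": "user", "content": f"{long_document}\n\nSummarize the key points."},
    ],
    temperature=0.2,        # lower values make output more deterministic
    max_tokens=512,         # cap the length of the generated summary
    frequency_penalty=0.5,  # discourage repeated phrasing
)

print(response.choices[0].message.content)
```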