Google is testing Audio Overviews, a new Search Labs feature that uses its latest Gemini AI models to generate spoken summaries of search results for certain queries. The goal is a hands-free way to absorb information while multitasking, or for users who simply prefer an audio format.
Gemini is Google's most capable and general AI model family, designed to be natively multimodal from the ground up rather than stitching together separate components for different modalities. Developers can integrate Gemini into their applications through several pathways:
Google AI Studio: The fastest way to start building with Gemini, offering a generous free tier and flexible pay-as-you-go plans for scaling.
Gemini API: Available through Google AI Studio or Google Cloud Vertex AI, giving developers access to models such as Gemini 2.0 Flash, 2.5 Pro, and 2.5 Flash.
Integration platforms: Tools like Orkes Conductor support Gemini integration by connecting to Google Cloud services using project IDs and service account credentials.
Development libraries: The langchain-google-genai package provides LangChain integration for Gemini models, supporting features like tool calling, structured output, and multimodal inputs.
Developers can leverage Gemini's sophisticated reasoning capabilities and multimodal understanding to build AI-powered applications that process text, images, audio, and more simultaneously.
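As a minimal sketch of the library pathway above, the snippet below calls a Gemini chat model through the langchain-google-genai package. The model id and prompt wording are illustrative assumptions, and a GOOGLE_API_KEY environment variable is assumed to be set before any network call is made.

```python
import os


def build_prompt(question: str) -> str:
    """Wrap a user question in a simple instruction (illustrative)."""
    return f"Answer concisely, in two sentences or fewer: {question}"


def ask_gemini(question: str) -> str:
    # Imported lazily so build_prompt stays usable without the package installed.
    from langchain_google_genai import ChatGoogleGenerativeAI

    llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")  # assumed model id
    # invoke() returns an AIMessage; its .content holds the reply text.
    return llm.invoke(build_prompt(question)).content


if __name__ == "__main__" and os.environ.get("GOOGLE_API_KEY"):
    print(ask_gemini("What modalities can Gemini models accept?"))
```

The API call is guarded behind the environment-variable check so the module can be imported and tested without credentials; the same ChatGoogleGenerativeAI object also exposes tool calling and structured output, per the package's LangChain integration.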
NotebookLM offers two tiers of service: a standard version and a Pro tier. Pro provides significantly expanded limits: Audio Overviews (20 vs. 3 daily), notebooks (500 vs. 100), chat queries (500 vs. 50 daily), and sources per notebook (300 vs. 50). Users can access Pro through various subscription options, including Google AI Pro, Google AI Ultra, qualifying Google Workspace plans, or Google Cloud for enterprise users.
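The tier limits quoted above can be captured as a small mapping for quick comparison; the figures are the ones stated in this article and may change as Google updates the product.

```python
# Standard vs. Pro limits as quoted above (subject to change by Google).
NOTEBOOKLM_LIMITS = {
    "audio_overviews_per_day": {"standard": 3, "pro": 20},
    "notebooks": {"standard": 100, "pro": 500},
    "chat_queries_per_day": {"standard": 50, "pro": 500},
    "sources_per_notebook": {"standard": 50, "pro": 300},
}


def pro_multiplier(limit: str) -> float:
    """How many times larger the Pro limit is than the standard one."""
    tier = NOTEBOOKLM_LIMITS[limit]
    return tier["pro"] / tier["standard"]
```

Note that the multiplier varies by limit (5x for notebooks, 10x for daily chat queries), which is why the figures are listed individually rather than under a single headline ratio.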
Beyond increased limits, Pro users gain premium features such as advanced chat customization, notebook analytics, and enhanced sharing options like "Chat-only" notebook sharing. In June 2025, Google introduced public link sharing, which lets users create shareable links granting view-only access to notebooks, FAQs, and Audio Overviews without individual email invitations, much like sharing a Google Doc. NotebookLM is available in over 180 regions where the Gemini API is supported and currently works with more than 35 languages.
Google's Audio Overviews feature transforms how users consume information by offering a convenient hands-free experience. The feature generates conversational audio summaries for search queries, letting users absorb information while multitasking or when they simply prefer listening over reading. The audio player includes essential controls: play/pause, volume adjustment, and playback speeds ranging from 0.25x to 2x, making it adaptable to different listening preferences.
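The effect of the speed control above is simple arithmetic: at speed s, audio of length t plays back in t / s. A small illustrative helper (the function name and range check are assumptions based on the 0.25x-2x range quoted above):

```python
def listening_time(duration_seconds: float, speed: float) -> float:
    """Real elapsed time to play audio of the given length at `speed`.

    The Audio Overviews player is described as offering 0.25x-2x speeds.
    """
    if not 0.25 <= speed <= 2.0:
        raise ValueError("speed outside the player's 0.25x-2x range")
    return duration_seconds / speed
```

So a two-minute overview takes one minute at 2x, and eight minutes at the slowest 0.25x setting.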
Beyond Search, Google has expanded audio-based information delivery across its ecosystem. "Daily Listen" provides personalized five-minute news rundowns based on users' Discover feeds and search history, while NotebookLM lets users convert documents into podcast-like experiences with AI hosts that discuss topics with natural banter and even respond to user questions in real time. All of these features display relevant source links within their interfaces, so users can explore topics more deeply when desired.