
Published on October 12, 2025
Nano Banana Expands to Google Search and Lens for Image Creation and Editing
Google is taking another big leap in integrating generative AI across its ecosystem. The company’s playful yet powerful Nano Banana AI model, originally known for producing quirky and vibrant images inside the Gemini app, is now making its way to Google Search and Google Lens.
According to a recent 9to5Google report, the tech giant is gradually rolling out this feature to Android users in the United States via the AI Mode in Search Labs. The move marks a major step toward blending Google’s creativity tools—like Gemini, Search, and Lens—into a single, cohesive AI experience.
What Is the Nano Banana AI Model?
The Nano Banana AI model is Google’s lightweight yet high-performance image generation model under the Gemini 2.5 Flash family. While the name may sound humorous, the model is no joke when it comes to performance.
It gained attention for its ability to generate fun, expressive, and visually rich images directly inside the Gemini app. Whether it was stylized art, playful photo edits, or vibrant scene creation, the model quickly became a fan favorite for its creativity and speed.
Now, Google is scaling that experience far beyond Gemini—right into Search and Lens, two of its most-used products.
New “Create Image” Option in Search’s AI Mode
With the rollout of Nano Banana AI in Google Search, Google is adding a fresh and simplified interface for creativity. Instead of the traditional carousel of suggested prompts, users will now see a clean list view—and a new plus icon (+) at the bottom left corner of the prompt box.
When tapped, this icon opens three creative options:
Gallery – to pick existing photos.
Camera – to capture new shots.
Create Images – the star feature, marked by a banana emoji.
Selecting Create Images changes the hint text to “Describe your image,” allowing users to generate visuals from scratch or edit uploaded photos using AI. Once processed, the images can be downloaded or shared, each carrying a subtle Gemini Spark watermark in the bottom-right corner—Google’s new signature mark for AI-generated content.
This approach aligns with Google’s efforts to maintain transparency about AI-generated media while ensuring easy access to creative tools.
Lens Gets a “Create” Tab for AI Selfies and Live Capture
Google Lens is also stepping up with an exciting new Create tab. After successfully integrating Search Live and Homework filters, Lens now gets its most dynamic update yet—AI-powered creativity.
The new Create tab brings along interface tweaks as well, such as repositioned text labels and more visible filters on-screen. The tab is identified by—you guessed it—a banana emoji on the shutter button.
When users open the Create mode, it automatically switches to the front-facing camera, perfect for AI selfies. You can take a photo, describe how you want it edited or stylized, and watch the Nano Banana model transform it within seconds.
Need to switch perspectives? The Lens toggle still allows you to flip between front and rear cameras seamlessly.
Gradual Rollout and Language Expansion Plans
Currently, the Nano Banana creation experience is limited to Android users in the United States, and only to accounts enrolled in AI Mode through Search Labs. However, Google has confirmed that this rollout is just the beginning.
The company aims to expand the feature globally, introducing multi-language support and deeper integration with Gemini and Google Photos in future updates. This means that soon, users worldwide may be able to create, remix, and edit images directly within the apps they already use every day—without third-party tools or plugins.
By bringing the Nano Banana model into Search and Lens, Google is effectively turning AI image creation into a native, everyday experience rather than a specialized one.
The Bigger Picture: AI Everywhere in Google’s Ecosystem
This expansion isn’t happening in isolation. Over the past year, Google has been aggressively embedding generative AI capabilities across its services—from Docs and Gmail (with “Help Me Write”) to Maps and YouTube (with AI recommendations and summaries).
The addition of Nano Banana to Search and Lens signals that AI creativity is now as integral to Google’s identity as Search itself. By merging powerful AI models like Gemini 2.5 Flash with user-facing apps, Google is aiming to make creation as simple as a search query.
It’s a strategic move against rising competitors like OpenAI’s DALL·E and Microsoft Designer, as Google positions itself to lead the next wave of visual creativity and productivity tools.
What This Means for Users
Here’s what Android users can expect from the new Nano Banana expansion:
| Feature | Description |
|---|---|
| Create Tab in Lens | Capture AI selfies and edit them instantly. |
| Create Image in Search AI Mode | Generate or modify images from prompts. |
| Gemini Spark Watermark | Clearly marks AI-generated content. |
| Gallery & Camera Integration | Edit existing or freshly captured photos. |
| Gradual US Rollout | Currently available via AI Mode in Search Labs for Android users. |
Once fully deployed, these tools will make it easier than ever to express creativity, experiment with visual ideas, and personalize content—right inside Google’s apps.
What’s Next?
Google hasn’t confirmed an exact global release timeline, but reports indicate the company plans to expand availability to more regions and languages by early 2026.
As generative AI becomes increasingly central to how users search, learn, and create, Google’s vision seems clear: AI should be seamlessly woven into everyday digital experiences.
From a playful banana emoji to a powerful AI engine—Nano Banana is no longer just a fun experiment; it’s the future of creative search.
Key Takeaway
Google’s Nano Banana AI model is evolving from a fun Gemini feature into a full-fledged creative companion inside Search and Lens. By merging image generation, live camera capture, and editing tools under one ecosystem, Google is redefining what “searching” and “creating” can mean in the AI era.