
google / nano-banana-2/edit

Edit images with text prompts. Make targeted changes like adding or removing objects, changing styles, or modifying specific elements while preserving the rest of the image.

Priced by megapixels

Model Input

- **`images`**: The input image(s) for editing. The model supports providing multiple images in a single request.
- **`prompt`**: The text prompt for image editing.
- Google Search grounding toggle: Enables Google Search grounding to provide more accurate and up-to-date information in the generated image.
- **`aspect_ratio`**: The aspect ratio of the generated image.
- **`resolution`**: The resolution of the generated image.

Additional Settings

Customize your input with more control.

- **`safety_settings`**: A list of unique safety settings for blocking unsafe content. Supported categories: `HARM_CATEGORY_HARASSMENT`, `HARM_CATEGORY_HATE_SPEECH`, `HARM_CATEGORY_SEXUALLY_EXPLICIT`, `HARM_CATEGORY_DANGEROUS_CONTENT`. Supported thresholds: `HARM_BLOCK_THRESHOLD_UNSPECIFIED`, `BLOCK_LOW_AND_ABOVE`, `BLOCK_MEDIUM_AND_ABOVE`, `BLOCK_ONLY_HIGH`, `BLOCK_NONE`. This setting can only be configured via the API. Each entry specifies:
  - `category`: The harm category to apply the setting to.
  - `threshold`: The threshold for blocking content in the specified category.
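The supported values listed above can be checked client-side before submitting a request. The sketch below is illustrative, not part of the client library or the model's API; it rejects unknown values and duplicate categories (the list must contain unique settings):

```javascript
// Supported values, copied from the safety_settings description above.
const HARM_CATEGORIES = new Set([
  "HARM_CATEGORY_HARASSMENT",
  "HARM_CATEGORY_HATE_SPEECH",
  "HARM_CATEGORY_SEXUALLY_EXPLICIT",
  "HARM_CATEGORY_DANGEROUS_CONTENT",
]);

const BLOCK_THRESHOLDS = new Set([
  "HARM_BLOCK_THRESHOLD_UNSPECIFIED",
  "BLOCK_LOW_AND_ABOVE",
  "BLOCK_MEDIUM_AND_ABOVE",
  "BLOCK_ONLY_HIGH",
  "BLOCK_NONE",
]);

// Hypothetical helper: returns true only if every entry uses a supported
// category and threshold, and no category appears twice.
function validateSafetySettings(settings) {
  const seen = new Set();
  for (const { category, threshold } of settings) {
    if (!HARM_CATEGORIES.has(category)) return false;
    if (!BLOCK_THRESHOLDS.has(threshold)) return false;
    if (seen.has(category)) return false; // settings must be unique per category
    seen.add(category);
  }
  return true;
}
```

Running such a check locally surfaces typos in category or threshold names before the request is sent, rather than relying on a server-side error.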




Model Pricing

Model pricing varies by the resolution of your output image.

| Resolution | Price per image |
|------------|-----------------|
| 512        | $0.0450         |
| 1K         | $0.0670         |
| 2K         | $0.1010         |
| 4K         | $0.1510         |
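Since pricing is a flat per-image rate at each resolution, estimating the cost of a batch is a simple lookup and multiply. This helper is illustrative only (the function and lookup table are not part of the client library), with prices taken from the table above:

```javascript
// Per-image prices in USD, from the pricing table above.
const PRICE_PER_IMAGE = {
  "512": 0.045,
  "1K": 0.067,
  "2K": 0.101,
  "4K": 0.151,
};

// Hypothetical helper: estimated cost of generating `numImages` outputs
// at the given resolution.
function estimateCost(resolution, numImages) {
  const unit = PRICE_PER_IMAGE[resolution];
  if (unit === undefined) {
    throw new Error(`Unknown resolution: ${resolution}`);
  }
  return unit * numImages;
}
```

For example, ten edits at 2K would come to roughly $1.01 under these rates.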

Model Details

Nano Banana 2/Edit is a high-efficiency model for editing images with text prompts. It excels at making specific, context-aware changes to your images, from simple additions to complete stylistic transformations. Provide one or more images and a descriptive prompt to modify elements, apply new styles, or create composite scenes while maintaining the original image's lighting, perspective, and overall coherence.

This model is ideal for a variety of creative and commercial tasks, including:

- **Inpainting and Masking:** Conversationally define a "mask" to edit a specific part of an image while leaving the rest untouched. For example, instruct the model to "change only the blue sofa to be a vintage, brown leather chesterfield," and it will preserve the rest of the room.
- **Adding & Removing Elements:** Seamlessly add new objects to your images or remove unwanted ones. The model intelligently matches the style, lighting, and perspective of the original photo, making edits appear natural.
- **Style Transfer:** Transform a photograph into a different artistic style. Provide a reference image and instruct the model to recreate it in the style of a famous artist or a specific art movement.

### Example Usage

```javascript
import { modelrunner } from "@modelrunner/client";

const result = await modelrunner.subscribe("google/nano-banana-2/edit", {
  input: {
    images: ["https://ai.google.dev/static/gemini-api/docs/images/cat_photo.png"],
    prompt: "Using the provided image of my cat, please add a small, knitted wizard hat on its head. Make it look like it's sitting comfortably and matches the soft lighting of the photo.",
    aspect_ratio: "1:1",
    resolution: "2K",
  },
});
```

## Safety & Content Moderation

The `safety_settings` parameter allows you to adjust content moderation filters for your specific use case. This setting can only be configured via the API. You can set a blocking threshold for four harm categories: Harassment, Hate Speech, Sexually Explicit, and Dangerous Content.

Each safety setting consists of a `category` and a `threshold`.

- **`category`**: The harm category to filter. One of `HARM_CATEGORY_HARASSMENT`, `HARM_CATEGORY_HATE_SPEECH`, `HARM_CATEGORY_SEXUALLY_EXPLICIT`, or `HARM_CATEGORY_DANGEROUS_CONTENT`.
- **`threshold`**: The confidence level at which to block content for the given category.
  - `BLOCK_NONE`: Always show content, regardless of the probability of it being unsafe.
  - `BLOCK_ONLY_HIGH`: Block content only when there is a high probability of it being unsafe.
  - `BLOCK_MEDIUM_AND_ABOVE`: Block content when there is a medium or high probability of it being unsafe.
  - `BLOCK_LOW_AND_ABOVE`: Block content when there is a low, medium, or high probability of it being unsafe.

```javascript
import { modelrunner } from "@modelrunner/client";

const result = await modelrunner.subscribe("google/nano-banana-2/edit", {
  input: {
    images: ["https://ai.google.dev/static/gemini-api/docs/images/cat_photo.png"],
    prompt: "A cat wearing a wizard hat.",
    safety_settings: [
      { category: "HARM_CATEGORY_HATE_SPEECH", threshold: "BLOCK_ONLY_HIGH" },
      { category: "HARM_CATEGORY_DANGEROUS_CONTENT", threshold: "BLOCK_MEDIUM_AND_ABOVE" },
    ],
  },
});
```