Image Editing
Edit existing images with AI by providing text instructions and an optional mask, using OpenAI DALL-E 2 or Stability AI.
The Image Edit node modifies existing images using AI. Instead of generating an image from scratch, you provide a source image and text instructions describing the changes you want. Optionally, you can supply a mask image to indicate which areas of the image should be edited. This is useful for removing objects, replacing backgrounds, adding elements, or modifying specific regions of an uploaded image.
Supported Providers and Models
| Provider | Model | Notes |
|---|---|---|
| OpenAI | DALL-E 2 | Supports image editing with masks and inpainting |
| Stability AI | Stable Image Core | AI-powered image editing |
How Image Editing Works
The Image Edit node uses inpainting -- a technique where the AI replaces specified regions of an image while preserving the rest. The process works as follows:
- You provide the source image (the original image to edit).
- Optionally, you provide a mask image (a black-and-white image where white areas indicate regions to edit).
- You write a prompt describing what the edited image should look like.
- The model generates a new image that matches your prompt in the masked areas while preserving the unmasked regions.
If no mask is provided, the model interprets the prompt more freely and may modify any part of the image. For precise edits, always provide a mask.
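The masking convention above can be sketched in plain Python. This is an illustrative toy, not the model's actual implementation: the "images" are 2D lists of pixel values, and the hypothetical `compose_inpaint` helper simply shows how white mask pixels select generated content while black pixels preserve the source.

```python
def compose_inpaint(source, generated, mask):
    """Combine a source image and generated content using a mask.

    All three arguments are 2D grids of pixel values. A mask value of 255
    (white) means "take the generated content"; 0 (black) means "preserve
    the source pixel" -- the same convention the Image Edit node uses.
    """
    return [
        [gen if m == 255 else src
         for src, gen, m in zip(src_row, gen_row, mask_row)]
        for src_row, gen_row, mask_row in zip(source, generated, mask)
    ]

source    = [[1, 1], [1, 1]]       # original image
generated = [[9, 9], [9, 9]]       # model output for the masked region
mask      = [[255, 0], [0, 255]]   # white = edit, black = preserve

result = compose_inpaint(source, generated, mask)
print(result)  # [[9, 1], [1, 9]]
```

The real model does more than pixel selection near mask boundaries (it blends generated content with surrounding context), but the preserve-versus-replace logic follows this pattern.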
Configuration
Provider
Select OpenAI or Stability AI.
Model
Choose the editing model:
- DALL-E 2 (OpenAI) -- The standard model for image editing with mask support.
- Stable Image Core (Stability AI) -- AI-powered image editing.
Credential
Select a saved API key for the chosen provider. See Credential Management for setup instructions.
Image URL
The URL of the source image to edit. This field supports template variables, typically referencing a file upload form field.
Requirements:
- Must be a valid, accessible URL
- Image must be a square PNG file (for OpenAI)
- Maximum size: 4MB
- Recommended resolution: 1024x1024 pixels
If the source image is not square, you may need to crop or pad it before passing it to the Image Edit node. Buildorado file uploads generate accessible URLs automatically.
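Padding a non-square image to a square can be done before upload. The sketch below works on a plain 2D list of pixel values for illustration; in practice you would apply the same idea with an image library and save the result as a PNG. The `pad_to_square` name and the white fill value are assumptions, not part of the platform.

```python
def pad_to_square(pixels, fill=255):
    """Pad a 2D pixel grid on the right and bottom with `fill` (white)
    so that width == height."""
    height = len(pixels)
    width = len(pixels[0])
    side = max(height, width)
    # Extend each row to the target width, then append rows to the target height.
    padded = [row + [fill] * (side - width) for row in pixels]
    padded += [[fill] * side for _ in range(side - height)]
    return padded

tall = [[0, 0]] * 3            # width 2, height 3 (not square)
square = pad_to_square(tall)
print(len(square), len(square[0]))  # 3 3
```

Padding only the right and bottom keeps the original content anchored at the top-left; center the content instead if your edit prompt refers to the whole frame.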
Edit Prompt
Text instructions describing what the edited image should look like. The prompt should describe the desired final result, not the editing action.
Good prompts (describe the result):
- A golden retriever sitting on a beach with palm trees in the background and a clear blue sky
- A modern office desk with a laptop, coffee mug, and a small potted plant, bright natural lighting

Less effective prompts (describe the action):
- Remove the person from the photo
- Change the background color to blue

The model generates content to fill the masked area based on the prompt and the surrounding context from the unmasked regions. Describing the desired outcome produces better results than describing the edit operation.
Mask URL
An optional mask image that defines which areas of the source image to edit. This field supports template variables.
Mask requirements:
- Must be a PNG file with an alpha channel, or a black-and-white PNG
- Same dimensions as the source image
- White areas (or fully transparent areas) indicate regions where the AI should generate new content
- Black areas (or fully opaque areas) indicate regions to preserve unchanged
Example mask scenarios:
| Scenario | Mask Description |
|---|---|
| Remove an object | White area covering the object to remove |
| Replace background | White area everywhere except the main subject |
| Add an element | White area where the new element should appear |
| Change sky | White area covering the sky region |
If no mask is provided, the model treats the entire image as editable. This is useful for style transfers or overall modifications, but less precise for targeted edits.
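For the "remove an object" scenario above, a mask is just a black image with a white region over the object. The sketch below builds such a mask as a 2D list (0 = preserve, 255 = edit); the `rectangle_mask` helper is illustrative, and in practice you would render the grid as a PNG with the same dimensions as the source image.

```python
def rectangle_mask(width, height, left, top, right, bottom):
    """Return a width x height grid: white (255) inside the rectangle
    [left, right) x [top, bottom), black (0) everywhere else."""
    return [
        [255 if left <= x < right and top <= y < bottom else 0
         for x in range(width)]
        for y in range(height)
    ]

# A 4x4 mask with a 2x2 editable region in the middle:
mask = rectangle_mask(4, 4, left=1, top=1, right=3, bottom=3)
for row in mask:
    print(row)
```

The same approach covers the other scenarios: invert the colors for "replace background" (white everywhere except the subject), or place the rectangle where a new element should appear.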
Size
The output dimensions of the edited image:
| Size | Notes |
|---|---|
| 256x256 | Fastest, lowest cost |
| 512x512 | Moderate quality |
| 1024x1024 | Best quality, recommended |
The output size does not need to match the input size. The model will resize as needed. However, for best results, use 1024x1024 for both input and output.
Output
The Image Edit node produces:
| Field | Type | Description |
|---|---|---|
| image | object | File reference with url, key, mimeType, sizeBytes, and filename |
| imageUrl | string | URL of the edited image (shortcut to image.url) |
| revisedPrompt | string | The revised prompt, if the model modified it |
| model | string | The model that was used |
| provider | string | The provider that was used |
The edited image URL is available to downstream nodes via template variables. You can include it in emails, store it in cloud storage, or pass it to another image processing node.
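The output shape described in the table can be sketched as a plain dictionary. All values below are made up for illustration; only the field names come from the table above.

```python
# Hypothetical example of the Image Edit node's output, with invented values.
node_output = {
    "image": {
        "url": "https://example.com/edited.png",   # illustrative URL
        "key": "uploads/edited.png",
        "mimeType": "image/png",
        "sizeBytes": 204800,
        "filename": "edited.png",
    },
    "imageUrl": "https://example.com/edited.png",  # shortcut to image.url
    "revisedPrompt": None,  # set only if the model rewrote the prompt
    "model": "dall-e-2",
    "provider": "openai",
}

# Downstream nodes typically only need the shortcut field:
print(node_output["imageUrl"])
```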
Use Cases
Product Photo Enhancement
Clean up and enhance product images submitted through a form:
- A seller uploads a product photo through a form.
- A mask highlights the background area.
- The prompt describes a clean, professional background.
- The edited image replaces the original in the product listing.
Document Redaction
Remove sensitive information from uploaded documents:
- A user uploads a document image.
- A mask covers areas containing personal information.
- The prompt describes the replacement content (e.g., solid colored blocks).
- The redacted version is stored for compliance purposes.
Creative Content Modification
Allow users to customize template images:
- A form presents options for modifying a template image (background scene, added elements).
- Based on the user's selections, the prompt and mask are configured dynamically.
- The customized image is generated and delivered.
Photo Restoration
Repair damaged or incomplete areas of photographs:
- A user uploads a damaged photo through a form.
- A mask covers the damaged region.
- The prompt describes what the restored area should look like.
- The AI fills in the damaged area with contextually appropriate content.
Image Edit vs. Image Generation
| Feature | Image Edit | Image Generation |
|---|---|---|
| Input | Existing image + prompt | Prompt only |
| Mask support | Yes | No |
| Models | DALL-E 2, Stable Image Core | DALL-E 2, DALL-E 3, Stability AI models |
| Use case | Modify parts of an image | Create entirely new images |
| Precision | High (with masks) | N/A (full generation) |
| Providers | OpenAI, Stability AI | OpenAI, Stability AI |
Use Image Edit when you have an existing image and want to modify specific parts of it.
Use Image Generation when you want to create an entirely new image from a text description.
Best Practices
- Always provide a mask for precise edits. Without a mask, the model may modify areas you intended to preserve.
- Use square images. DALL-E 2 image editing requires square PNG inputs. Crop or pad non-square images before processing.
- Describe the final result in your prompt, not the editing action. "A sunny beach with blue sky" works better than "remove the clouds."
- Use 1024x1024 resolution for both input and output to get the best quality results.
- Keep images under 4MB. Compress or resize large images before processing.
- Test with representative images. The quality of edits varies depending on the complexity of the source image and the requested changes.
- Combine with other AI nodes. Use a Vision node to analyze the source image before editing, or an OCR node to read text from the image before modification.
Limitations
- Source images must be square PNG files under 4MB (for OpenAI DALL-E 2).
- Mask images must match the source image dimensions exactly.
- The model may not perfectly preserve unmasked areas, especially near mask boundaries. Complex edits near detailed regions may produce artifacts.
- Image editing is less predictable than image generation. The same prompt and mask can produce different results across executions.
- The node processes one image per execution. For batch editing, use a Loop node.
- Execution is subject to a 120-second timeout.
- Generated images are uploaded to S3 and accessible via the returned URLs.