Image Generation
Image generation models allow you to create images from natural language prompts. This capability is powered by diffusion-based models exposed via the OpenAI-compatible Images API.
Overview
The Image Generation API creates one or more images based on a text prompt. You can control the number of images, size, quality, background handling, and output format.
Quick Start
Endpoint
POST https://api.inference.nebul.io/v1/images/generations
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt | String | Yes | The text description of the image to generate. |
| model | String | No | The model ID to use for generation. |
| n | Integer | No | Number of images to generate. Defaults to 1. |
| size | String | No | Target image size (e.g. auto). |
| quality | String | No | Quality preset (e.g. auto). |
| background | String | No | Background handling strategy (e.g. auto). |
| output_format | String | No | Image output format (e.g. png). |
| response_format | String | No | Response encoding format (e.g. b64_json). |
| guidance | Number | No | Optional guidance scale for prompt adherence. |
| user | String | No | Optional end-user identifier for abuse monitoring. |
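All parameters other than prompt can simply be left out of the request body. As a sketch (the build_generation_payload helper is ours, not part of the API), a small builder that drops any option that was not set:

```python
def build_generation_payload(prompt, model=None, n=None, size=None,
                             quality=None, background=None,
                             output_format=None, response_format=None,
                             guidance=None, user=None):
    """Assemble a /v1/images/generations request body, omitting
    every optional parameter that was not set."""
    options = {
        "model": model,
        "n": n,
        "size": size,
        "quality": quality,
        "background": background,
        "output_format": output_format,
        "response_format": response_format,
        "guidance": guidance,
        "user": user,
    }
    payload = {"prompt": prompt}
    payload.update({key: value for key, value in options.items()
                    if value is not None})
    return payload

payload = build_generation_payload(
    "A photo of a cat sitting on a windowsill at sunset",
    model="<IMAGE_MODEL_ID>",
    n=2,
    response_format="b64_json",
)
```

The resulting dictionary can be passed directly as the json argument of requests.post.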
Code Examples
Using the requests library:
```python
import base64

import requests

url = "https://api.inference.nebul.io/v1/images/generations"
headers = {
    "Authorization": "Bearer <YOUR_API_KEY>",
    "Content-Type": "application/json",
}
payload = {
    "model": "<IMAGE_MODEL_ID>",
    "prompt": "A photo of a cat sitting on a windowsill at sunset",
    "n": 1,
    "response_format": "b64_json",
}
response = requests.post(url, headers=headers, json=payload)
data = response.json()

# Decode the first returned image and write it to disk.
first_image = data["data"][0]["b64_json"]
image_bytes = base64.b64decode(first_image)
with open("generated.png", "wb") as image_file:
    image_file.write(image_bytes)
```
Using cURL:

```bash
curl -X POST https://api.inference.nebul.io/v1/images/generations \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "<IMAGE_MODEL_ID>",
    "prompt": "A photo of a cat sitting on a windowsill at sunset",
    "n": 1,
    "response_format": "b64_json"
  }'
```
Prompt Engineering: Be specific and descriptive in your prompts. Include details about style, composition, lighting, and subject matter. For example, "A photorealistic portrait of a cat sitting on a windowsill at sunset, soft golden lighting, shallow depth of field" produces better results than "a cat".
Response Formats: Use b64_json to receive base64-encoded images directly in the JSON response, or url if the service provides temporary URLs. Base64 format is convenient for immediate use but increases response size.
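To support either encoding, a response handler can branch on whichever field is populated. A minimal sketch (the decode_image_entry helper is ours; the entry shape follows the ImageResponse object returned by the API):

```python
import base64

def decode_image_entry(entry):
    """Return ('bytes', raw_image_bytes) for a b64_json entry, or
    ('url', url) for an entry that still needs to be downloaded."""
    if entry.get("b64_json"):
        return "bytes", base64.b64decode(entry["b64_json"])
    if entry.get("url"):
        return "url", entry["url"]
    raise ValueError("entry contains neither b64_json nor url")

# Example with a b64_json-style entry:
kind, value = decode_image_entry(
    {"b64_json": base64.b64encode(b"\x89PNG").decode(), "url": None}
)
```

For url entries, fetch the bytes promptly; services that return temporary URLs usually expire them after a short window.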
Response Format
The API returns an ImageResponse object:
```json
{
  "created": 1731500000,
  "data": [
    {
      "b64_json": "<BASE64_IMAGE_DATA>",
      "url": null
    }
  ],
  "background": "auto",
  "output_format": "png",
  "size": "auto",
  "quality": "auto",
  "usage": {
    "prompt_tokens": 0,
    "total_tokens": 0
  }
}
```
Model Specifications
The following image generation models are available:
- black-forest-labs/FLUX.1-Kontext-dev: 12B parameters, 512 context length, bfloat16 precision, supports Text, Image (Preview)
Image Editing
Image editing allows you to modify an existing image based on a text prompt, optionally using a mask to specify the editable region.
Overview
The Image Editing API accepts an input image and a prompt describing the desired changes. You can optionally provide a mask image to control which parts of the original image are edited.
Quick Start
Endpoint
POST https://api.inference.nebul.io/v1/images/edits
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | String | Yes | The model ID to use for editing. |
| prompt | String | Yes | The text description of how to edit the image. |
| image | File[] | Yes | One or more input images to edit. |
| mask | File | No | Optional mask image to restrict the editable region. |
| n | Integer | No | Number of images to generate. Defaults to 1. |
| size | String | No | Target image size (e.g. auto). |
| response_format | String | No | Response encoding format (e.g. b64_json). |
| quality | String | No | Quality preset (e.g. auto). |
| background | String | No | Background handling strategy (e.g. auto). |
| guidance | Number | No | Optional guidance scale for prompt adherence. |
| user | String | No | Optional end-user identifier for abuse monitoring. |
| output_format | String | No | Image output format (e.g. png). |
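A mask is typically an image with the same dimensions as the input, where fully transparent pixels mark the region the model may repaint; this convention matches the OpenAI Images API, so confirm it against your deployment. As a dependency-free sketch, a minimal RGBA PNG mask writer (write_mask_png and _chunk are our helpers, not part of any library):

```python
import struct
import zlib

def _chunk(tag, payload):
    """Encode one PNG chunk: length, tag, payload, CRC-32."""
    return (struct.pack(">I", len(payload)) + tag + payload
            + struct.pack(">I", zlib.crc32(tag + payload)))

def write_mask_png(path, width, height, editable_box):
    """Write an RGBA mask: opaque white everywhere except a fully
    transparent rectangle (x0, y0, x1, y1) marking the editable area."""
    x0, y0, x1, y1 = editable_box
    rows = bytearray()
    for y in range(height):
        rows.append(0)  # filter type 0 (None) for this scanline
        for x in range(width):
            inside = x0 <= x < x1 and y0 <= y < y1
            rows += b"\xff\xff\xff\x00" if inside else b"\xff\xff\xff\xff"
    # IHDR: width, height, 8-bit depth, color type 6 (RGBA), defaults.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)
    png = (b"\x89PNG\r\n\x1a\n"
           + _chunk(b"IHDR", ihdr)
           + _chunk(b"IDAT", zlib.compress(bytes(rows)))
           + _chunk(b"IEND", b""))
    with open(path, "wb") as handle:
        handle.write(png)

# Mark a 256x128 region in the top-left corner of a 1024x1024 mask as editable:
write_mask_png("mask.png", 1024, 1024, (0, 0, 256, 128))
```

In practice an image library such as Pillow is more convenient; the point here is only the transparent-equals-editable convention.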
Code Examples
Using the requests library:
```python
import base64

import requests

url = "https://api.inference.nebul.io/v1/images/edits"
headers = {
    "Authorization": "Bearer <YOUR_API_KEY>",
}
files = [
    ("image", ("original.png", open("original.png", "rb"), "image/png")),
    # Optional mask:
    # ("mask", ("mask.png", open("mask.png", "rb"), "image/png")),
]
form = {
    "prompt": "Add a red hat to the person in the image",
    "model": "<IMAGE_EDIT_MODEL_ID>",
    "n": 1,
    "response_format": "b64_json",
}
response = requests.post(url, headers=headers, files=files, data=form)
data = response.json()

# Decode the first returned image and write it to disk.
first_image = data["data"][0]["b64_json"]
image_bytes = base64.b64decode(first_image)
with open("edited.png", "wb") as image_file:
    image_file.write(image_bytes)
```
Using cURL:

```bash
curl -X POST https://api.inference.nebul.io/v1/images/edits \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -F "prompt=Add a red hat to the person in the image" \
  -F "model=<IMAGE_EDIT_MODEL_ID>" \
  -F "image=@original.png"
# Optionally add a mask:
#   -F "mask=@mask.png"
```
Response Format
The API returns an ImageResponse object, the same as for image generation:
```json
{
  "created": 1731500000,
  "data": [
    {
      "b64_json": "<BASE64_IMAGE_DATA>",
      "url": null
    }
  ],
  "background": "auto",
  "output_format": "png",
  "size": "auto",
  "quality": "auto",
  "usage": {
    "prompt_tokens": 0,
    "total_tokens": 0
  }
}
```
Image Variation
Image variation allows you to generate new images that are stylistic or semantic variations of an input image.
Overview
The Image Variation API accepts an input image and returns one or more related images. You can control the number of outputs, target size, quality, background handling, and output format.
Quick Start
Endpoint
POST https://api.inference.nebul.io/v1/images/variations
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | String | No | The model ID to use for variations. |
| image | File | Yes | The input image to base the variation on. |
| n | Integer | No | Number of images to generate. Defaults to 1. |
| size | String | No | Target image size (e.g. auto). |
| response_format | String | No | Response encoding format (e.g. b64_json). |
| quality | String | No | Quality preset (e.g. auto). |
| background | String | No | Background handling strategy (e.g. auto). |
| guidance | Number | No | Optional guidance scale for variation strength. |
| user | String | No | Optional end-user identifier for abuse monitoring. |
| output_format | String | No | Image output format (e.g. png). |
Code Examples
Using the requests library:
```python
import base64

import requests

url = "https://api.inference.nebul.io/v1/images/variations"
headers = {
    "Authorization": "Bearer <YOUR_API_KEY>",
}
files = {
    "image": ("original.png", open("original.png", "rb"), "image/png"),
}
form = {
    "model": "<IMAGE_VARIATION_MODEL_ID>",
    "n": 1,
    "response_format": "b64_json",
}
response = requests.post(url, headers=headers, files=files, data=form)
data = response.json()

# Decode the first returned image and write it to disk.
first_image = data["data"][0]["b64_json"]
image_bytes = base64.b64decode(first_image)
with open("variation.png", "wb") as image_file:
    image_file.write(image_bytes)
```
Using cURL:

```bash
curl -X POST https://api.inference.nebul.io/v1/images/variations \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -F "model=<IMAGE_VARIATION_MODEL_ID>" \
  -F "image=@original.png"
```
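When n is greater than 1, the data array contains one entry per image. A sketch that decodes and saves every returned image with an indexed filename (save_all_images is our helper; the response shape matches the ImageResponse object):

```python
import base64
import os

def save_all_images(response_json, output_dir, prefix="variation"):
    """Decode every b64_json entry in an ImageResponse and write it
    to output_dir as <prefix>_<index>.png. Returns the written paths."""
    os.makedirs(output_dir, exist_ok=True)
    paths = []
    for index, entry in enumerate(response_json["data"]):
        path = os.path.join(output_dir, f"{prefix}_{index}.png")
        with open(path, "wb") as handle:
            handle.write(base64.b64decode(entry["b64_json"]))
        paths.append(path)
    return paths
```

The same helper works for the generation and edit endpoints, since all three return the same ImageResponse shape.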
Response Format
The API returns an ImageResponse object, the same as for image generation and editing:
```json
{
  "created": 1731500000,
  "data": [
    {
      "b64_json": "<BASE64_IMAGE_DATA>",
      "url": null
    }
  ],
  "background": "auto",
  "output_format": "png",
  "size": "auto",
  "quality": "auto",
  "usage": {
    "prompt_tokens": 0,
    "total_tokens": 0
  }
}
```