Flux-Dev-LoRA-Mahi

Discover Flux-Dev-LoRA-Mahi, a LoRA-customized Flux model for advanced image generation and modification. Ready to experience the power of AI? Start your journey here!

Platform: Replicate
Tags: LoRA Image Generation · Image Inpainting · Customizable Image Synthesis
Runs: 243
Hardware: H100
License Check Required

🚀 Function Overview

Generates and modifies images using text prompts and reference images, with LoRA customization for style/object control and support for image-to-image transformations.

Key Features

  • Text-to-image generation with trigger word activation
  • Image-to-image and inpainting capabilities
  • LoRA scale control for style/object emphasis
  • Adjustable resolution, aspect ratio, and output quality
  • Fast mode optimization for quicker generation
  • Multiple output formats and safety checker options
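
The sketch below shows a minimal call to the model from Python with the Replicate client. The model identifier is a placeholder (this page does not list the exact owner/name on Replicate), the trigger word <TOKMAHI> is taken from Example 1 further down, and REPLICATE_API_TOKEN must be set in the environment.

import replicate  # pip install replicate; reads REPLICATE_API_TOKEN from the environment

# Placeholder identifier: replace with the model's actual owner/name(:version) on Replicate.
MODEL = "<owner>/flux-dev-lora-mahi"

output = replicate.run(
    MODEL,
    input={
        # Including the trigger word used during LoRA training activates the trained concept.
        "prompt": "studio portrait of <TOKMAHI>, cinematic lighting, fashion editorial",
        "model": "dev",
        "num_inference_steps": 28,
        "guidance_scale": 3,
        "lora_scale": 1.0,
        "aspect_ratio": "1:1",
        "output_format": "webp",
        "num_outputs": 1,
    },
)
print(output)  # list of image URLs (or file-like objects, depending on client version)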

Use Cases

  • Creating custom character/object illustrations
  • Editing existing images via inpainting
  • Style transfer applications
  • Prototyping fashion designs from reference images
  • High-quality digital art generation

⚙️ Input Parameters

prompt

string

Prompt for the generated image. If you include the `trigger_word` used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

image

string

Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.

mask

string

Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
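
As a hedged illustration of inpainting mode: supply both image and mask (placeholder URLs below), and the aspect_ratio, width, and height inputs are ignored, as noted above.

import replicate

output = replicate.run(
    "<owner>/flux-dev-lora-mahi",  # placeholder model identifier
    input={
        "prompt": "<TOKMAHI> wearing a navy blazer",
        "image": "https://example.com/reference.jpg",  # placeholder source image
        "mask": "https://example.com/mask.png",        # placeholder mask; conventionally white marks the region to repaint
        "prompt_strength": 0.8,                        # see prompt_strength below
        "num_inference_steps": 28,
    },
)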

aspect_ratio

string

Aspect ratio for the generated image. If custom is selected, the height and width inputs below are used, and generation runs in bf16 mode.

height

integer

Height of the generated image. Only used if `aspect_ratio` is set to custom. Rounded to the nearest multiple of 16. Incompatible with fast generation.

width

integer

Width of the generated image. Only used if `aspect_ratio` is set to custom. Rounded to the nearest multiple of 16. Incompatible with fast generation.
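
The rounding described for height and width can be illustrated with a tiny helper (a sketch of the documented behaviour, not the model's internal code):

def round_to_multiple_of_16(pixels: int) -> int:
    """Round a custom width/height to the nearest multiple of 16, as the inputs above are."""
    return int(round(pixels / 16)) * 16

print(round_to_multiple_of_16(1023))  # 1024
print(round_to_multiple_of_16(900))   # 896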

prompt_strength

number

Prompt strength when using img2img. 1.0 corresponds to full destruction of the information in the input image.

model

string

Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
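
A small sketch of using the step counts suggested above (guidance only, not values enforced by the API):

# Suggested denoising steps per model variant, per the description above.
recommended_steps = {"dev": 28, "schnell": 4}

variant = "dev"
params = {"model": variant, "num_inference_steps": recommended_steps[variant]}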

num_outputs

integer

Number of outputs to generate

num_inference_steps

integer

Number of denoising steps. More steps can give more detailed images, but take longer.

guidance_scale

number

Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3 and 3.5

seed

integer

Random seed. Set for reproducible generation

output_format

string

Format of the output images

output_quality

integer

Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs

disable_safety_checker

boolean

Disable safety checker for generated images.

go_fast

boolean

Run faster predictions with a model optimized for speed (currently fp8-quantized); disable to run in the original bf16 precision.

megapixels

string

Approximate number of megapixels for generated image

lora_scale

number

Determines how strongly the main LoRA should be applied. Sane results are between 0 and 1 for base inference. For go_fast, a 1.5x multiplier is applied to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
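
The 1.5x go_fast multiplier mentioned above can be made concrete with a short sketch (an illustration of the stated behaviour, not the model's internal code):

def effective_lora_scale(lora_scale: float, go_fast: bool) -> float:
    """Strength actually applied, per the description: go_fast multiplies the value by 1.5."""
    return lora_scale * 1.5 if go_fast else lora_scale

print(effective_lora_scale(1.0, go_fast=False))  # 1.0
print(effective_lora_scale(1.0, go_fast=True))   # 1.5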

extra_lora

string

Load LoRA weights. Supports Replicate models in the format <owner>/<model-name> or <owner>/<model-name>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.

extra_lora_scale

number

Determines how strongly the extra LoRA should be applied. Sane results are between 0 and 1 for base inference. For go_fast, a 1.5x multiplier is applied to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
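
Below is a sketch of stacking an additional LoRA on top of the main one, reusing the 'fofr/flux-pixar-cars' example from the extra_lora description; the model identifier is again a placeholder.

import replicate

output = replicate.run(
    "<owner>/flux-dev-lora-mahi",  # placeholder model identifier
    input={
        "prompt": "<TOKMAHI> as a stylized character",
        "lora_scale": 1.0,                     # strength of the main (trained) LoRA
        "extra_lora": "fofr/flux-pixar-cars",  # extra LoRA, example from the description above
        "extra_lora_scale": 0.8,               # strength of the stacked LoRA
    },
)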

💡 Usage Examples

Example 1

Input Parameters

{
  "image": "https://replicate.delivery/pbxt/MrMXqRPxd2vmMKPc7aa2Ddswje29nKDV6tZj6yOJGWj8aHVr/WhatsApp%20Image%202025-04-17%20at%2016.25.59.jpeg",
  "model": "dev",
  "prompt": "Generate an ultra-detailed full-body portrait of <TOKMAHI>, styled in the outfit shown in the uploaded reference image.\nMaintain <TOKMAHI>'s facial identity, natural skin texture, and hairstyle.\nStrictly replicate the same fabric material, clothing style, silhouette, patterns, embroidery, and construction details from the reference without alterations.\nAlso replicate the body type, height, posture, and overall proportions of the model shown wearing the outfit in the reference image.\nThe garment must fit <TOKMAHI> exactly as it fits the reference model, preserving proportions, silhouette, and garment behavior.\nUse cinematic studio lighting, soft defined shadows, and a crisp depth-of-field.\nBackground: professional fashion studio (charcoal gray or soft gradient).\nPrioritize ultra-realistic textile rendering, visible stitching where appropriate, and organic garment draping that matches the model's physique.\nPose: confident, slightly dynamic, fashion-editorial standard",
  "go_fast": false,
  "lora_scale": 1.1,
  "megapixels": "1",
  "num_outputs": 1,
  "aspect_ratio": "1:1",
  "output_format": "webp",
  "guidance_scale": 2.3,
  "output_quality": 80,
  "prompt_strength": 0.5,
  "extra_lora_scale": 1,
  "num_inference_steps": 30
}

Output Results

https://replicate.delivery/xezq/3BfweC7fxjFZPo7Weycb8lzRyoTISvwssDTX63AndfRFvlgkC/out-0.webp
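
A hedged sketch of reproducing Example 1 with the Replicate Python client and saving the results locally; the model identifier is a placeholder, and whether items come back as URL strings or file-like objects depends on the client version.

import json
import urllib.request

import replicate

# The input JSON from Example 1 above, saved locally as example1_input.json.
with open("example1_input.json") as f:
    example_input = json.load(f)

output = replicate.run(
    "<owner>/flux-dev-lora-mahi",  # placeholder model identifier
    input=example_input,
)

for i, item in enumerate(output):
    if hasattr(item, "read"):                      # newer clients: file-like objects
        data = item.read()
    else:                                          # older clients: plain URL strings
        data = urllib.request.urlopen(item).read()
    with open(f"out-{i}.webp", "wb") as out_file:
        out_file.write(data)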