Luma Portrait
Unleash your creativity with Luma Portrait, the diffusion model for stylized portrait images. Discover how this AI model can transform your workflow!
🚀 Function Overview
A diffusion model specialized in creating stylized portrait images with customizable parameters, supporting both text-to-image generation and image-to-image transformations like inpainting.
Key Features
- Prompt-based image generation with trigger word activation
- Image-to-image and inpainting capabilities
- Custom aspect ratios and image dimensions
- Adjustable LoRA scaling for style/concept control
- Multiple output formats and quality settings
- Speed optimization (FP8 quantized) vs precision (BF16) modes
Use Cases
- Creating artistic portrait photography
- Enhancing existing images with specific styles
- Generating marketing visuals with branded aesthetics
- Developing character designs for media productions
⚙️ Input Parameters
prompt
string: Prompt for the generated image. Including the `trigger_word` used during training makes it more likely to activate the trained object, style, or concept in the resulting image.
image
string: Input image for image-to-image or inpainting mode. If provided, the `aspect_ratio`, `width`, and `height` inputs are ignored.
mask
string: Image mask for inpainting mode. If provided, the `aspect_ratio`, `width`, and `height` inputs are ignored.
aspect_ratio
string: Aspect ratio for the generated image. If `custom` is selected, the `height` and `width` inputs below are used and the model runs in BF16 mode.
height
integer: Height of the generated image. Only applies when `aspect_ratio` is set to `custom`. Rounded to the nearest multiple of 16. Incompatible with fast generation.
width
integer: Width of the generated image. Only applies when `aspect_ratio` is set to `custom`. Rounded to the nearest multiple of 16. Incompatible with fast generation.
prompt_strength
number: Prompt strength when using img2img. 1.0 corresponds to full destruction of the information in the input image.
model
string: Which model to run inference with. The dev model performs best with around 28 inference steps, while the schnell model needs only 4 steps.
num_outputs
integer: Number of outputs to generate.
num_inference_steps
integer: Number of denoising steps. More steps can give more detailed images but take longer.
guidance_scale
number: Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3, and 3.5.
seed
integer: Random seed. Set for reproducible generation.
output_format
string: Format of the output images.
output_quality
integer: Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest. Not relevant for .png outputs.
disable_safety_checker
boolean: Disable the safety checker for generated images.
go_fast
boolean: Run faster predictions with a model optimized for speed (currently FP8-quantized); disable to run in the original BF16 precision.
megapixels
string: Approximate number of megapixels for the generated image.
lora_scale
number: Determines how strongly the main LoRA is applied. Sane results fall between 0 and 1 for base inference. With `go_fast`, a 1.5x multiplier is applied to this value; scaling the base value by that amount generally performs well, but you may still need to experiment to find the best value for your particular LoRA.
extra_lora
string: Load additional LoRA weights. Supports Replicate models in the format <owner>/<model-name> or <owner>/<model-name>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
extra_lora_scale
number: Determines how strongly the extra LoRA is applied. Sane results fall between 0 and 1 for base inference. With `go_fast`, a 1.5x multiplier is applied to this value; scaling the base value by that amount generally performs well, but you may still need to experiment to find the best value for your particular LoRA.
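The dimension rounding and the `go_fast` LoRA multiplier described above can be sketched in a few lines of Python. This is a minimal illustration of the documented behavior, not the model's actual implementation; the helper names are made up for this example:

```python
def round_to_multiple_of_16(value: int) -> int:
    # Custom widths/heights are rounded to the nearest multiple of 16,
    # as noted in the height/width parameter descriptions.
    return round(value / 16) * 16


def effective_lora_scale(base_scale: float, go_fast: bool) -> float:
    # With go_fast enabled, a 1.5x multiplier is applied to the
    # requested LoRA scale, per the lora_scale description.
    return base_scale * 1.5 if go_fast else base_scale


print(round_to_multiple_of_16(1004))            # 1004 / 16 = 62.75 -> 1008
print(effective_lora_scale(1.0, go_fast=True))  # 1.5
```

In practice, you would still pass the unmodified `lora_scale` to the model; the multiplier is applied internally, so this helper is only useful for reasoning about what effective strength a given base value produces.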
💡 Usage Examples
Example 1
Input Parameters
{
  "model": "dev",
  "prompt": "a young girl with her dog bright and vibrant lighting, glowing skin, pastel pink and blue tones in the background, cinematic shallow depth of field, shot on a Canon RF 85mm f/1.2, soft bokeh, fashion editorial styling, sharp facial detail, sparkling highlights, natural pose with the style of LUMPOR, holding cotton candy, wearing glitter makeup, backlit with fairy lights",
  "go_fast": false,
  "lora_scale": 1,
  "megapixels": "1",
  "num_outputs": 1,
  "aspect_ratio": "1:1",
  "output_format": "webp",
  "guidance_scale": 3,
  "output_quality": 80,
  "prompt_strength": 0.8,
  "extra_lora_scale": 1,
  "num_inference_steps": 28
}
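With the Replicate Python client, the same parameters are passed as the `input` dictionary. The sketch below mirrors Example 1 (with the long prompt abridged; see the example above for the full text); the model identifier is a placeholder, and the real slug should be taken from the model's Replicate page:

```python
# Input dictionary mirroring Example 1 above.
input_params = {
    "model": "dev",
    # Abridged; use the full prompt from Example 1.
    "prompt": "a young girl with her dog ... in the style of LUMPOR",
    "go_fast": False,
    "lora_scale": 1,
    "megapixels": "1",
    "num_outputs": 1,
    "aspect_ratio": "1:1",
    "output_format": "webp",
    "guidance_scale": 3,
    "output_quality": 80,
    "prompt_strength": 0.8,
    "extra_lora_scale": 1,
    "num_inference_steps": 28,
}

# Running the prediction requires the `replicate` package and an API token.
# The model slug below is a placeholder, not the real identifier:
# import replicate
# output = replicate.run("<owner>/luma-portrait", input=input_params)
```

Because `aspect_ratio` is `"1:1"` rather than `custom`, the `width` and `height` inputs are omitted here, and `prompt_strength` only takes effect if an `image` input is also supplied.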
Technical Specifications
- Hardware Type
- H100
- Run Count
- 50
- Commercial Use
- Unknown/Restricted
- Platform
- Replicate
Related Models
Fluxgram XD1 Image Generator
A model for generating and editing images using text prompts with support for inpainting, image transformation, and LoRA integration.
Sokolovski Image Generation
A model for generating and editing images using prompts and input images.
fgtensei Replicate
A model for generating and editing images using text prompts, image inputs, and inpainting capabilities with LoRA integration.