Stable Diffusion Dance
Unleash creativity with Stable Diffusion Dance. Generate dynamic video sequences and striking dance animations perfectly synchronized to your audio.
🚀 Function Overview
Generates dynamic video sequences or image frames that react to audio input, creating rhythmic visualizations synchronized to sound using Stable Diffusion.
Key Features
- Real-time audio synchronization for visual outputs
- Customizable stylistic prompts and scaling
- Adjustable audio parameters (smoothing, noise, loudness)
- Frame interpolation for smooth animations
- Batch generation for efficient processing
Use Cases
- Creating music visualization videos
- Generating dance animations synced to beats
- Producing dynamic AI art installations
- Developing rhythm-based game content
⚙️ Input Parameters
- prompts (string): Text prompt(s) used to guide image generation.
- style_suffix (string): Style suffix to add to the prompt. This can be used to apply the same style to each prompt.
- audio_file (string): Input audio file.
- prompt_scale (number): Determines the influence of your prompt on generation.
- random_seed (integer): Each seed generates a different image.
- diffusion_steps (integer): Number of diffusion steps. More steps can produce better results but take longer to generate. Maximum 30 (using K-Euler-Diffusion).
- audio_smoothing (number): Audio smoothing factor; see the sketch after this list for how the audio parameters interact.
- audio_noise_scale (number): Larger values mean the audio leads to bigger changes in the image.
- audio_loudness_type (string): Type of loudness to use for the audio. Options are 'rms' or 'peak'.
- frame_rate (number): Frames per second for the generated video.
- width (integer): Width of the generated image. The model was trained primarily on 512x512 images; other sizes tend to produce less coherent results.
- height (integer): Height of the generated image. The model was trained primarily on 512x512 images; other sizes tend to produce less coherent results.
- batch_size (integer): Number of images to generate at once. Higher batch sizes generate frames faster but use more GPU memory, so large batches may fail at higher resolutions.
- frame_interpolation (boolean): Whether to interpolate between frames using FFmpeg.
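The audio parameters work together: a per-frame loudness value ('rms' or 'peak') is smoothed over time and then scaled into how strongly each frame changes. The model's actual implementation is not published on this page, so the following is a minimal illustrative sketch only; librosa/numpy, the function name audio_reactivity, and the exponential-smoothing formula are assumptions used to show how audio_smoothing, audio_noise_scale, and audio_loudness_type plausibly interact.

```python
import numpy as np
import librosa


def audio_reactivity(audio_path, frame_rate=16, audio_smoothing=0.8,
                     audio_noise_scale=0.4, audio_loudness_type="peak"):
    # Load the track as mono and cut it into one analysis window per video frame.
    y, sr = librosa.load(audio_path, mono=True)
    hop = int(sr / frame_rate)
    frames = librosa.util.frame(y, frame_length=hop, hop_length=hop)

    # Per-frame loudness, matching the audio_loudness_type options above.
    if audio_loudness_type == "rms":
        loudness = np.sqrt((frames ** 2).mean(axis=0))
    else:  # "peak"
        loudness = np.abs(frames).max(axis=0)
    loudness = loudness / (loudness.max() + 1e-8)  # normalize to 0..1

    # Exponential smoothing: values of audio_smoothing near 1 give slower,
    # steadier motion; lower values let the visuals react more sharply.
    smoothed = np.zeros_like(loudness)
    for i, v in enumerate(loudness):
        smoothed[i] = v if i == 0 else (
            audio_smoothing * smoothed[i - 1] + (1 - audio_smoothing) * v
        )

    # Louder (smoothed) audio -> larger per-frame change in the image.
    return smoothed * audio_noise_scale
```

Under these assumptions, each returned value would drive how strongly the frame-to-frame image is perturbed before the diffusion steps run, which is consistent with the parameter descriptions above.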
💡 Usage Examples
Example 1
Input Parameters
{ "width": 384, "height": 512, "prompts": "Star and light, space and stardust", "audio_file": "https://replicate.delivery/pbxt/N1XZkpff9cLQB76lssrieB6Vx8XzLV9cwSAuLKatXx141k4d/Capo-In-The-Fields.wav", "batch_size": 24, "frame_rate": 16, "random_seed": 13, "prompt_scale": 15, "style_suffix": "by android jones, psychedelic, alien geometry, digital art, neon", "audio_smoothing": 0.8, "diffusion_steps": 20, "audio_noise_scale": 0.4, "audio_loudness_type": "peak", "frame_interpolation": true }
Output Results
Technical Specifications
- Hardware Type: 8x A100 (80GB)
- Run Count: 7
- Commercial Use: Unknown/Restricted
- Platform: Replicate
Related Models
PixVerse Video Generator
Quickly make 5s or 8s videos at 540p, 720p or 1080p, with enhanced motion, prompt coherence, and good handling of complex actions.
Luma Reframe Video
Change the aspect ratio of any video up to 30 seconds long; outputs are 720p.
Frames to Video Merger
Convert a set of image frames (JPG or PNG) into a high-quality MP4 video. Automatically handles sorting and frame order for smooth playback.