Hailuo Minimax Text to Video Node
The Hailuo Minimax Text to Video Node takes a start frame and converts it to a video using Hailuo’s Minimax model (www.hailuoai.video).
Inputs
The prompt that describes the video to be generated. Keep the start frame in mind: regardless of the prompt, the generated video will begin from the start frame image you pass in.
Settings
The output video aspect ratio.
The colors that you want in the output video.
The Hailuo Minimax model does not necessarily support all aspect ratios; specifically, it supports only 16:9 output. If this boolean is set, Rubbrband will center-crop the video to fit your aspect ratio. If not set, the video will not be cropped and will use the closest supported aspect ratio.
Duration of the output video.
A random number that will set the seed for the generation. Keeping the seed constant will result in the same outputs. -1 randomizes the seed.
Outputs
The output video from the model.
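The center-crop option described above can be illustrated with a small sketch: find the largest centered region of the frame that matches the requested aspect ratio, then trim the excess on the longer axis. This is a hypothetical helper showing how such a crop is typically computed, not Rubbrband's actual implementation.

```python
def center_crop_box(width, height, target_w, target_h):
    """Return the (left, top, right, bottom) box of the largest centered
    region of a width x height frame matching the target_w:target_h ratio."""
    target_ratio = target_w / target_h
    if width / height > target_ratio:
        # Frame is wider than the target ratio: trim the sides.
        new_w = round(height * target_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    # Frame is taller than (or equal to) the target ratio: trim top and bottom.
    new_h = round(width / target_ratio)
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)
```

For example, cropping a 1920x1080 (16:9) frame to 1:1 yields the centered 1080x1080 box (420, 0, 1500, 1080).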
Haiper Text to Video Node
The Haiper Text to Video Node takes a start frame and converts it to a video using the Haiper 2.5 model (haiper.ai).
Inputs
The prompt that describes the video to be generated. Keep the start frame in mind: regardless of the prompt, the generated video will begin from the start frame image you pass in.
Settings
The output video aspect ratio.
The colors that you want in the output video.
The Haiper model does not necessarily support all aspect ratios; specifically, it supports only 16:9, 9:16, 3:4, 4:3, and 1:1 output. If this boolean is set, Rubbrband will center-crop the video to fit your aspect ratio. If not set, the video will not be cropped and will use the closest supported aspect ratio.
Duration of the output video.
720p or 1080p resolution options.
A random number that will set the seed for the generation. Keeping the seed constant will result in the same outputs. -1 randomizes the seed.
Outputs
The output video from the model.
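When cropping is disabled, these nodes fall back to the closest supported aspect ratio. One reasonable way to pick the nearest ratio is to compare ratios in log space, so that being twice as wide and twice as tall count as equally far. The helper below is a hypothetical sketch of that idea, not Rubbrband's code.

```python
import math

def closest_aspect_ratio(requested_w, requested_h, supported):
    """Pick the supported (w, h) ratio closest to the requested one,
    comparing in log space so 2:1 and 1:2 are equally far from 1:1."""
    req = math.log(requested_w / requested_h)
    return min(supported, key=lambda r: abs(math.log(r[0] / r[1]) - req))
```

For instance, a 2:1 request against Haiper's supported set lands on 16:9, the widest available option.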
Hunyuan Text to Video Node
The Hunyuan Text to Video Node takes a start frame and converts it to a video using Tencent’s Hunyuan model (hunyuanvideoai.com).
Inputs
The prompt that describes the video to be generated. Keep the start frame in mind: regardless of the prompt, the generated video will begin from the start frame image you pass in.
Settings
The output video aspect ratio.
The colors that you want in the output video.
The Hunyuan model does not necessarily support all aspect ratios; specifically, it supports only 16:9 and 9:16 output. If this boolean is set, Rubbrband will center-crop the video to fit your aspect ratio. If not set, the video will not be cropped and will use the closest supported aspect ratio.
Duration of the output video.
A random number that will set the seed for the generation. Keeping the seed constant will result in the same outputs. -1 randomizes the seed.
Outputs
The output video from the model.
Kling Text to Video Node
The Kling Text to Video Node takes a start frame and converts it to a video using Kling models (klingai.com).
Inputs
The prompt that describes the video to be generated. Keep the start frame in mind: regardless of the prompt, the generated video will begin from the start frame image you pass in.
Settings
The output video aspect ratio.
The colors that you want in the output video.
The Kling model does not necessarily support all aspect ratios; specifically, it supports only 16:9, 9:16, and 1:1 output. If this boolean is set, Rubbrband will center-crop the video to fit your aspect ratio. If not set, the video will not be cropped and will use the closest supported aspect ratio.
Duration of the output video.
kling_1.5_pro: The latest Kling pro model
kling_1.0_pro: The previous edition Kling pro model
kling_1.0_standard: The cheaper, previous edition standard Kling model
A random number that will set the seed for the generation. Keeping the seed constant will result in the same outputs. -1 randomizes the seed.
Outputs
The output video from the model.
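The seed setting behaves like a standard pseudo-random seed: holding it constant reproduces the same generation, and -1 draws a fresh random seed each run. The snippet below illustrates the semantics with Python's random module; it is an analogy, not the model's actual sampler.

```python
import random

def sample_noise(seed, n=4):
    """Draw n pseudo-random values; seed == -1 means 'randomize'."""
    rng = random.Random(None if seed == -1 else seed)
    return [rng.random() for _ in range(n)]

# The same fixed seed always reproduces the same draw,
# which is what makes generations repeatable.
assert sample_noise(42) == sample_noise(42)
```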
Luma Text to Video Node
The Luma Text to Video Node takes a start frame and converts it to a video using Luma's model (lumalabs.ai).
Inputs
The prompt that describes the video to be generated. Keep the start frame in mind: regardless of the prompt, the generated video will begin from the start frame image you pass in.
Settings
The output video aspect ratio.
Describes how a theoretical camera would move if it were used to film the video.
The colors that you want in the output video.
The Luma model does not necessarily support all aspect ratios; specifically, it supports only 16:9, 9:16, 4:3, 3:4, 21:9, and 9:21 output. If this boolean is set, Rubbrband will center-crop the video to fit your aspect ratio. If not set, the video will not be cropped and will use the closest supported aspect ratio.
Duration of the output video.
If switched on, the video will automatically loop back to the start frame.
A random number that will set the seed for the generation. Keeping the seed constant will result in the same outputs. -1 randomizes the seed.
Outputs
The output video from the model.
Mochi Text to Video Node
The Mochi Text to Video Node takes a start frame and converts it to a video using Genmo’s Mochi model (genmo.ai).
Inputs
The prompt that describes the video to be generated. Keep the start frame in mind: regardless of the prompt, the generated video will begin from the start frame image you pass in.
Settings
The output video aspect ratio.
The colors that you want in the output video.
The Mochi model does not necessarily support all aspect ratios; specifically, it supports only 848x480 output. If this boolean is set, Rubbrband will center-crop the video to fit your aspect ratio. If not set, the video will not be cropped and will use the closest supported output.
Duration of the output video.
Switch on this setting to let the AI add details to your prompt to improve generations. This helps output quality, but can occasionally reduce prompt adherence.
A random number that will set the seed for the generation. Keeping the seed constant will result in the same outputs. -1 randomizes the seed.
Outputs
The output video from the model.