This is a unified control IC-LoRA trained on top of LTX-2-19b, enabling multiple control signals to be used for video generation from text and reference frames. It was trained with reference latents downscaled by a factor of 2.
It is based on the LTX-2 foundation model.
IC-LoRA enables conditioning video generation on reference video frames at inference time, allowing fine-grained video-to-video control on top of a text-to-video base model. It also supports using an initial image for image-to-video generation, and it can produce audio-visual output.
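As a rough illustration of how such a LoRA is attached to a base pipeline, here is a minimal sketch in the style of diffusers. It is an assumption-laden sketch, not the confirmed LTX-2 API: the repository ids, the concrete pipeline class that `DiffusionPipeline` resolves to, and the `reference_video` conditioning argument are placeholders for illustration; consult the official LTX-2 inference code for the actual entry points.

```python
# Hypothetical usage sketch: repo ids, the resolved pipeline class, and the
# `reference_video` keyword below are assumptions, not the confirmed LTX-2 API.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_video

# Load the base text-to-video model (assumed repo id).
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the IC-LoRA on top of the frozen base weights.
pipe.load_lora_weights(
    "Lightricks/LTX-2",  # assumed repo id hosting the LoRA checkpoint
    weight_name="ltx-2-19b-ic-lora-union-control-ref0.5.safetensors",
)

# A control video that is positionally aligned with the intended output,
# e.g. Canny edges or depth maps extracted from a source clip.
control_frames = load_video("control_signal.mp4")

video = pipe(
    prompt="a red sports car driving along a coastal road at sunset",
    reference_video=control_frames,  # assumed conditioning kwarg
    num_frames=121,
).frames[0]
```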
IC-LoRA uses a reference control signal, i.e. a video that is positionally aligned with the generated video and provides the conditioning context. For efficiency, the reference video can be smaller, so it consumes fewer tokens. The reference downscale factor specifies how much the reference video is downscaled relative to the generated resolution. To signal the expected reference size, the checkpoint name carries a 'ref' suffix followed by the scale relative to the output resolution, for example:
ltx-2-19b-ic-lora-union-control-ref0.5.safetensors
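To make the naming convention and the token savings concrete, here is a small sketch. The `parse_ref_scale` helper is hypothetical, the 1280x704 output resolution is only an example, and the token estimate simply assumes token count scales with pixel area:

```python
import re

def parse_ref_scale(checkpoint_name: str) -> float:
    """Extract the scale from a 'ref<scale>' suffix (hypothetical helper)."""
    match = re.search(r"ref([0-9.]+)\.safetensors$", checkpoint_name)
    if match is None:
        raise ValueError(f"no ref scale in {checkpoint_name!r}")
    return float(match.group(1))

scale = parse_ref_scale("ltx-2-19b-ic-lora-union-control-ref0.5.safetensors")
print(scale)  # 0.5 -> reference video at half the output resolution per side

# Expected reference size for an example 1280x704 output:
out_w, out_h = 1280, 704
ref_w, ref_h = int(out_w * scale), int(out_h * scale)
print(ref_w, ref_h)  # 640 352

# Assuming tokens scale with pixel area, a 0.5 scale means the reference
# contributes roughly a quarter of the tokens of a full-size reference.
print((ref_w * ref_h) / (out_w * out_h))  # 0.25
```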
See the LTX-2-community-license for full terms.
Place the checkpoint in your models/loras directory. The model was trained using the Lightricks/Canny-Control-Dataset, amongst others.
@article{hacohen2025ltx2,
  title={LTX-2: Efficient Joint Audio-Visual Foundation Model},
  author={HaCohen, Yoav and Brazowski, Benny and Chiprut, Nisan and Bitterman, Yaki and Kvochko, Andrew and Berkowitz, Avishai and Shalem, Daniel and Lifschitz, Daphna and Moshe, Dudu and Porat, Eitan and others},
  journal={arXiv preprint arXiv:2601.03233},
  year={2025}
}

@misc{LTXVideoTrainer2025,
  title={LTX-Video Community Trainer},
  author={Matan Ben Yosef and Naomi Ken Korem and Tavi Halperin},
  year={2025},
  publisher={GitHub}
}