
# SeC-4B Model Files - Multiple Precision Formats

Single-file model formats for the SeC (Segment Concept) video object segmentation model, optimized for use with ComfyUI SeC Nodes.

## Model Formats

| Format | Size | Description | GPU Requirements |
|--------|------|-------------|------------------|
| SeC-4B-fp16.safetensors | 7.35 GB | **Recommended** - best balance of quality and size | All CUDA GPUs |
| SeC-4B-fp8.safetensors | 3.97 GB | For VRAM-constrained systems (saves 1.5-2 GB of VRAM) | RTX 30 series or newer |
| SeC-4B-bf16.safetensors | 7.35 GB | Alternative to FP16 | All CUDA GPUs |
| SeC-4B-fp32.safetensors | 14.14 GB | Full precision | All CUDA GPUs |
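
To verify which precision variant a downloaded file actually contains, you can inspect its tensor dtypes. This is a minimal sketch assuming only the `safetensors` package; point `checkpoint` at whichever variant you downloaded.

```python
from safetensors import safe_open

checkpoint = "SeC-4B-fp16.safetensors"  # adjust to your downloaded variant

with safe_open(checkpoint, framework="pt") as f:
    key = next(iter(f.keys()))            # any tensor name from the file
    print(key, f.get_tensor(key).dtype)   # e.g. torch.float16 for the fp16 file
```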

## What is SeC?

SeC (Segment Concept) uses Large Vision-Language Models for video object segmentation, achieving a +11.8-point improvement over SAM 2.1 on complex semantic scenarios (SeCVOS benchmark).

Key features:

- Concept-driven tracking with semantic understanding
- Handles occlusions and appearance changes
- Bidirectional tracking support
- State-of-the-art performance on multiple benchmarks

## Usage

These models are designed for use with the ComfyUI SeC Nodes custom node pack.

**Installation:**

1. Download your preferred model format (a download sketch follows this list)
2. Place it in `ComfyUI/models/sams/`
3. Install ComfyUI SeC Nodes
4. The model is detected automatically and appears in the SeC Model Loader
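
As a convenience, the sketch below fetches one variant straight into the ComfyUI models folder using `huggingface_hub`. The `repo_id` is an assumption (it is not stated on this card); substitute the id of the repository you are reading this from.

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="VeryAladeen/SeC-4B",        # assumption: replace with this repo's actual id
    filename="SeC-4B-fp16.safetensors",
    local_dir="ComfyUI/models/sams",     # adjust to your ComfyUI install path
)
print("Model saved to", path)
```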

## Original Model

These are converted single-file versions of the original SeC-4B model released by OpenIXCLab.

## Credits

**Original model:** developed by OpenIXCLab

- Model architecture and weights: Apache 2.0 license
- Paper: Zhang et al., "SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction" (arXiv:2507.15852)

**Single-file conversions:** created for ComfyUI SeC Nodes

- Conversion script and ComfyUI integration: 9nate-drake
- FP8 quantization support via torchao (see the sketch below)
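
For reference, weight-only FP8 quantization with torchao generally looks like the sketch below. This illustrates the library's public API, not the exact conversion script used to produce these files; the `nn.Linear` is a toy stand-in for the real model.

```python
import torch
from torchao.quantization import quantize_, float8_weight_only

# Toy stand-in module; quantize_ works in place on any nn.Module and
# converts Linear weights to float8 (e4m3) weight-only storage.
model = torch.nn.Linear(128, 128).to(torch.bfloat16)
quantize_(model, float8_weight_only())
print(type(model.weight))  # torchao's quantized tensor subclass
```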

## License

Apache 2.0 (same as the original SeC-4B model)

## Citation

If you use this model in your research, please cite the original SeC paper:

```bibtex
@article{zhang2025sec,
  title   = {SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction},
  author  = {Zhixiong Zhang and Shuangrui Ding and Xiaoyi Dong and Songxin He and Jianfan Lin and Junsong Tang and Yuhang Zang and Yuhang Cao and Dahua Lin and Jiaqi Wang},
  journal = {arXiv preprint arXiv:2507.15852},
  year    = {2025}
}
```