LFM2‑VL is Liquid AI's first series of multimodal models, designed to process text and images at variable resolutions. Built on the LFM2 backbone, it is optimized for low-latency, on-device edge AI applications.
We're releasing the weights of two post-trained checkpoints with 450M (for highly constrained devices) and 1.6B (more capable yet still lightweight) parameters.
- 2× faster inference on GPUs compared to existing VLMs, while maintaining competitive accuracy
- Flexible architecture with user-tunable speed-quality tradeoffs at inference time
- Native resolution processing up to 512×512, with intelligent patch-based handling of larger images that avoids upscaling and distortion
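To illustrate the idea behind patch-based handling, here is a minimal sketch of how an image larger than the native resolution could be covered by non-overlapping crops, each at most 512×512, while smaller images pass through untouched. The function `patch_boxes` and the tiling strategy are assumptions for illustration only; LFM2-VL's actual patching logic may differ.

```python
MAX_SIZE = 512  # native resolution handled without tiling (per the feature list)

def patch_boxes(width, height, max_size=MAX_SIZE):
    """Return crop boxes (left, top, right, bottom) covering the image,
    each at most max_size on a side.

    Images already within the native resolution yield one full-frame box,
    so nothing is ever upscaled or distorted. Illustrative sketch only;
    not LFM2-VL's actual implementation.
    """
    if width <= max_size and height <= max_size:
        return [(0, 0, width, height)]
    boxes = []
    for top in range(0, height, max_size):
        for left in range(0, width, max_size):
            boxes.append((left, top,
                          min(left + max_size, width),
                          min(top + max_size, height)))
    return boxes

# A small image needs no tiling; a 1024x768 image splits into a 2x2 grid.
print(patch_boxes(400, 300))        # [(0, 0, 400, 300)]
print(len(patch_boxes(1024, 768)))  # 4
```

Edge patches are simply smaller than 512×512 rather than padded or stretched, which is one way to preserve the original aspect ratio end to end.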
Links