Fine-tuning a 450M VLM to beat Claude Sonnet 4.6
Demo


I recently fine-tuned LiquidAI's LFM2.5-VL-450M to classify deforestation into 3 classes:

  1. Currently a forest (STANDING_FOREST)
  2. Recently deforested (RECENTLY_CLEARED)
  3. Never a forest in recent time (LONG_TERM_NON_FOREST)
Example classification
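For reference, the three classes can be represented as plain string constants, with a small helper that maps raw model output back to a canonical label. This is a minimal sketch: the label strings come from the list above, but the parsing logic is my own assumption, not code from the repo.

```python
# Canonical labels used by the classifier (taken from the post).
LABELS = ["STANDING_FOREST", "RECENTLY_CLEARED", "LONG_TERM_NON_FOREST"]

def parse_label(raw: str) -> str:
    """Map raw model output text to one of the three canonical labels.

    Small VLMs sometimes emit extra tokens around the label, so we
    search for a known label instead of trusting the string exactly.
    This fallback behavior is an illustrative assumption.
    """
    cleaned = raw.strip().upper()
    for label in LABELS:
        if label in cleaned:
            return label
    raise ValueError(f"Unrecognized label in model output: {raw!r}")
```

Normalizing outputs like this makes the evaluation a simple string comparison against ground truth.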

The results were interesting: on a held-out test set of 996 images of Cambodia, the 450M model outperformed Claude Sonnet 4.6.
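A head-to-head comparison like this boils down to scoring each model's predictions against the same labeled test set. Here is a minimal sketch of that accuracy computation; the data below is hypothetical and for illustration only, not the actual test set or scores.

```python
def accuracy(predictions: list[str], labels: list[str]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    assert len(predictions) == len(labels) and labels
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical outputs for illustration only -- not the real results.
truth = ["STANDING_FOREST", "RECENTLY_CLEARED", "LONG_TERM_NON_FOREST", "STANDING_FOREST"]
preds = ["STANDING_FOREST", "RECENTLY_CLEARED", "STANDING_FOREST", "STANDING_FOREST"]
print(accuracy(preds, truth))  # 0.75
```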

When deployed to a cheap GMKTec mini PC, inference with this model via llama.cpp took a fraction of a second per image (~0.3 s).
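Serving on the mini PC looked roughly like the following. The GGUF file names and the prompt are illustrative assumptions; check the repo for the exact artifacts and flags.

```shell
# Start llama.cpp's OpenAI-compatible server with the vision projector.
# Model/projector file names are assumptions for illustration.
llama-server \
  -m lfm2.5vl-450m-deforestation-classifier-Q8_0.gguf \
  --mmproj mmproj-lfm2.5vl-450m.gguf \
  --port 8080

# Or classify a single image directly from the CLI:
llama-mtmd-cli \
  -m lfm2.5vl-450m-deforestation-classifier-Q8_0.gguf \
  --mmproj mmproj-lfm2.5vl-450m.gguf \
  --image tile_0001.png \
  -p "Classify this tile: STANDING_FOREST, RECENTLY_CLEARED, or LONG_TERM_NON_FOREST."
```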

There is so much to be excited about in the world of local AI. Fine-tuning small models for focused tasks is very underrated, and we should do it more often.

Check out the repo for my project. You can also download the model from HuggingFace.

Repo - urbanspr1nter/deforestation-classifier: Fine-tuning LiquidAI/LFM2.5-VL-450M to classify deforestation.

HuggingFace - urbanspr1nter/lfm2.5vl-450m-deforestation-classifier

What's even cooler is that I implemented the code with a local LLM: Qwen3.6-27B at Q4_K_XL quantization, using pi agent! It did very well helping me with the Docker pieces. Given the complexity of the deployment, Docker was really the only choice. Gotta use the right tool for the job.