Ready for the most powerful foundation model for medical images/videos?
**🚨 Just dropped: MedSAM2**
The next-gen foundation model for 3D medical image & video segmentation — built on top of SAM 2.1.
### Why it matters:
- Trained on **455K+ 3D image–mask pairs** & **76K+ annotated video frames**
- **>85% reduction** in human annotation costs (validated in 3 studies)
- Fast, accurate, and generalizes across organs, modalities, and pathologies
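The accuracy claims above are typically quantified with the Dice similarity coefficient, the standard overlap metric in medical image segmentation. A minimal pure-Python sketch (illustrative only, not code from the MedSAM2 release, which works on NumPy/torch tensors over 3D volumes):

```python
# Dice similarity coefficient for binary segmentation masks:
# Dice = 2|A ∩ B| / (|A| + |B|).
# Pure-Python sketch over flat 0/1 lists; real pipelines use array libraries.

def dice(pred, truth):
    """Compute the Dice coefficient between two flat binary masks."""
    assert len(pred) == len(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:  # both masks empty: treat as a perfect match
        return 1.0
    return 2.0 * intersection / total

# Toy example: an 8-pixel mask where 3 foreground pixels overlap
pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 0, 1, 1]
print(dice(pred, truth))  # 0.75
```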
### Big impact:
We used MedSAM2 to create 3 massive datasets:
- **5,000 CT lesions**
- **3,984 liver MRI lesions**
- **251,550 echo video frames**
### Plug & play:
Deployable in:
→ **3D Slicer**
→ **JupyterLab**
→ **Gradio**
→ **Google Colab**
**We open-sourced everything!**
### Explore more:
- **Datasets:**