AI Disruption

Meta Releases Open-Source "Segment Anything" 2.0 Model, Now Capable of Video Segmentation

Meta unveils Segment Anything Model 2 (SAM 2), providing real-time object segmentation for both static images and dynamic videos. Now open-source and faster than ever.

Meng Li
Jul 30, 2024

Remember Meta's "Segment Anything Model" (SAM)? Released in April 2023, it was widely regarded as groundbreaking research for traditional computer vision tasks.

Now, over a year later, Meta has announced at SIGGRAPH the launch of Segment Anything Model 2 (SAM 2).

Building on its predecessor, SAM 2 represents a major advancement in the field. It provides real-time, promptable object segmentation for both static images and dynamic videos, unifying image and video segmentation into one powerful system.

SAM 2 can segment any object in any video or image, even those it hasn't seen before, supporting various use cases without custom adaptations.
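To make "promptable segmentation" concrete, here is a toy sketch of the prompt-to-mask idea: the user supplies an image and a point prompt, and gets back a binary mask for the object under that point. This is not SAM 2's actual method (SAM 2 uses a learned transformer and streaming memory); the flood fill below is a stand-in purely to illustrate the interface.

```python
import numpy as np
from collections import deque

def segment_from_point(image: np.ndarray, point: tuple[int, int]) -> np.ndarray:
    """Toy 'promptable segmentation': flood-fill the region of uniform
    pixel value containing the prompt point. Illustrates the
    point-prompt -> mask contract only, not SAM 2's learned model."""
    h, w = image.shape
    y, x = point
    target = image[y, x]
    mask = np.zeros((h, w), dtype=bool)
    mask[y, x] = True
    queue = deque([(y, x)])
    while queue:
        cy, cx = queue.popleft()
        # Visit 4-connected neighbors with the same pixel value.
        for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] and image[ny, nx] == target:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Synthetic image: a 3x3 square of 1s on a background of 0s.
img = np.zeros((8, 8), dtype=int)
img[2:5, 2:5] = 1
mask = segment_from_point(img, (3, 3))  # point prompt inside the square
print(mask.sum())  # 9 pixels: the 3x3 square
```

In SAM 2 itself, the same point (or box, or mask) prompt can be given on any frame of a video, and the model propagates the resulting mask through the rest of the clip.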

In a conversation with Jensen Huang, Mark Zuckerberg mentioned SAM 2: "Being able to do this in video, with zero-shot capabilities, and tell it what you want is very cool."

This post is for paid subscribers.