Meta Unveils AI Image Segmentation Model, SAM


    Meta announced Segment Anything Model (SAM), an AI model that can identify individual objects in images, including those that have not been encountered before.

    Meta’s research arm said it has released new tools and corresponding datasets to facilitate research into underlying models of computer vision.

    Announcing the release on its Twitter account, Meta AI wrote: “Today we are releasing the Segment Anything Model (SAM), a step towards the first foundational model for image segmentation.

    “SAM enables one-click segmentation of any object from any photo or video + zero-shot transfer to other segmentation tasks.”

    According to the company’s blog post, SAM was trained on the largest dataset of its kind: over 1 billion masks across 11 million licensed images. Trained at this scale, the model can segment objects it has never seen before.

    “The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks,” the company said.

    “We evaluated its capabilities on a number of tasks and found its zero-shot performance to be impressive.”


    Reportedly, Meta already uses SAM-like technology under the hood for photo tagging, content moderation, and post suggestions on Facebook and Instagram.

    First foundational model for image segmentation?

    The SAM model lets users select an object in an image by clicking on it or by entering a text prompt. Given an image and the text prompt “cat,” SAM immediately drew a box around the cat in the image.

    Most users responded to Meta’s announcement tweet with enthusiasm. “Wow, this will accelerate the self-driving and robotics industry 10x,” replied one user, Arkash, while another described it as “very cool.”

    “This seems like an important step towards developing the first foundational model for image segmentation. Keep up the great work,” wrote another.

    Another user, Magic Of Barca, was less convinced, replying: “What’s the point of this, what’s the main use? I’d appreciate it if you could make a video about this.”

    According to Meta, SAM uses an image encoder that produces a one-time embedding of the image, while a lightweight prompt encoder converts each prompt into an embedding vector in real time.

    These two sources of information are combined in a lightweight decoder that predicts the segmentation mask. Once the image embedding has been computed, SAM can generate a segment in just 50 milliseconds.
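    The key design idea described above is that the expensive image encoding runs once, while each new prompt only touches the lightweight encoder and decoder. The following is a toy, dependency-free sketch of that split; the class and method names are hypothetical and the "mask" logic is a placeholder, not Meta's actual model:

```python
import math

class SamSketch:
    """Toy illustration of SAM's architecture split: one heavy, cached
    image encoding, then cheap per-prompt mask predictions."""

    def __init__(self):
        self._image_embedding = None
        self.encoder_calls = 0  # counts how often the heavy step runs

    def encode_image(self, image):
        # Expensive step (a ViT in real SAM): run once per image, then cached.
        self.encoder_calls += 1
        h, w = len(image), len(image[0])
        self._image_embedding = (h, w)
        return self._image_embedding

    def predict_mask(self, image, point):
        # Cheap step: reuses the cached embedding for every new click prompt.
        if self._image_embedding is None:
            self.encode_image(image)
        h, w = self._image_embedding
        px, py = point
        # Placeholder "mask": pixels within radius 2 of the clicked point.
        return [[1 if math.hypot(x - px, y - py) <= 2 else 0
                 for x in range(w)] for y in range(h)]
```

    Repeated calls to `predict_mask` with different click points reuse the cached embedding, which is why interactive prompting can stay fast once the one-time encoding is done.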

    Why not try it?

    SAM was developed by Meta AI Research and is published on GitHub. You can also try SAM in an online demo or download the dataset (SA-1B). Here is what you need to do:

    1. Visit the Segment Anything Model demo link.
    2. Upload an image or select one from the gallery.
    3. Click on the subject to mark the areas you want to segment.
    4. Add points to extend the mask (“Add area”) and select an object; click again to adjust the mask.
    5. Remove points to exclude unwanted regions from the mask.
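    The add/remove interaction in the demo steps above amounts to keeping two sets of click points: inclusions and exclusions. This is a minimal, hypothetical sketch of that workflow in plain Python (the real demo and the `segment-anything` library run a neural decoder instead of this distance check):

```python
class MaskSession:
    """Toy model of the demo's click-to-refine loop: 'add area' clicks
    pull pixels into the mask, 'remove area' clicks push them out."""

    def __init__(self):
        self.include = []  # points added to the mask ("Add area")
        self.exclude = []  # points removed from the mask

    def add_point(self, x, y):
        self.include.append((x, y))

    def remove_point(self, x, y):
        self.exclude.append((x, y))

    def mask_contains(self, x, y, radius=2):
        # A pixel is masked if it is near an include click and not near
        # any exclude click (Manhattan distance, placeholder logic).
        near = lambda pts: any(abs(x - px) + abs(y - py) <= radius
                               for px, py in pts)
        return near(self.include) and not near(self.exclude)
```

    Each remove click locally overrides earlier add clicks, mirroring how the demo lets you carve unwanted regions out of a selected object.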

    AI arms race

    Since the launch of OpenAI’s ChatGPT in November, the AI conversation has exploded around the world. This has led to serious competition in the space, with tech giants such as Meta, Microsoft and Google coming up with their own rival products or embedding the technology into their products and services.

    Microsoft is even going so far as to add ChatGPT functionality to the Bing search engine along with the Office suite.

    Meta, meanwhile, is experimenting with generative AI. CEO Mark Zuckerberg has said that incorporating such technology into the company’s apps is a priority this year, though that doesn’t mean he has abandoned the metaverse.

    Some of the AI tools the company is developing create surreal videos from text prompts. Others quickly generate children’s book illustrations from prose.

    The post Meta Unveils AI Image Segmentation Model, SAM first appeared on MetaNews.

