Facebook and Instagram users will be able to edit images and generate video clips from text prompts, thanks to Meta's Emu AI model.
By Jess Weatherbed, a news writer covering emerging companies, technology, and internet culture. Jess started her career at TechRadar, covering tech news and reviews.
Meta CEO Mark Zuckerberg announced two new features built on Emu, Meta's foundational image generation model, that are coming to Facebook and Instagram. The first, called "Emu Edit," lets users alter images using text instructions, similar to generative editing tools from Adobe, Google, and Canva, making it easy to remove or change objects and people without any advanced editing skills.
Notably, users don't need to manually select which part of the image to modify. Type an instruction like "transform the puppy into a panda," and Emu Edit identifies the dog and changes only that, leaving the rest of the image untouched. Meta says the model follows instructions precisely, so a request such as adding text to a baseball cap won't alter the cap itself.
The second tool, "Emu Video," can generate videos from text prompts, image inputs, or a combination of the two. The results aren't exactly photorealistic, but they're a clear step up from the stiffer animations produced by Meta's earlier Make-A-Video system.
Meta hasn't said when these features will roll out to Facebook and Instagram, or how they'll fit alongside the company's other upcoming AI-powered tools. Still, baking AI image editing directly into its social platforms makes sense: it's more convenient than sending images out to third-party services like Adobe Photoshop's generative AI tools or Google Photos' Magic Editor.
We've asked Meta for more details on when these features will be available and will update this story if we hear back.