OpenAI has unveiled a new tool called Sora that can generate videos from a single line of text. The tool, named after the Japanese word for “sky,” represents a significant advance for the artificial intelligence company and puts it in competition with similar efforts from Google, Meta, and Runway ML.
The model, built by the creators of ChatGPT, demonstrates an understanding of how objects interact in the physical world, accurately interpreting scenes and props and generating characters that convey vivid emotions. OpenAI shared several videos created by Sora, including a realistic depiction of a woman strolling down a rainy street in Tokyo and woolly mammoths traversing a snowy meadow.
Despite the innovation on display, concerns have been raised about the tool’s potential for misuse. Rachel Tobac, a member of the technical advisory council of the US Cybersecurity and Infrastructure Security Agency (CISA), emphasized the need to address the risks the model poses. Privacy and copyright issues have also been highlighted, with industry experts questioning the transparency of the training data and the implications for content creators.
OpenAI has acknowledged these concerns and says it is committed to working with stakeholders to address safety issues before releasing the tool to the public. The company is engaging experts to test the model for potential misuse and is building tools to detect misleading content generated by Sora.
While OpenAI aims to ensure its technology is used responsibly, the company concedes that it cannot predict every way an AI system will be used or abused, and says that learning from real-world deployment is crucial to making AI technologies safer over time.
In a separate development, The New York Times has filed a lawsuit against OpenAI and its major investor Microsoft, alleging that the newspaper’s articles were used without authorization to train ChatGPT. The lawsuit contends that the resulting AI model now competes with the newspaper’s information services, threatening its business.
OpenAI also recently terminated the accounts of state-affiliated groups from Russia, Iran, North Korea, and China that had been using the company’s language models for preliminary hacking tasks, underscoring the importance of responsible AI use and strong security measures in the evolving technological landscape.