
Sora AI’s Realism Raises Concerns for Cybersecurity Professionals

OpenAI confirms it is working to mitigate potential misuse of the program

OpenAI has again created buzz with its text-to-video software Sora, which has captivated the tech world with its ability to transform text descriptions into photorealistic videos. This innovation marks a significant advancement in AI technology, offering a glimpse into a future where text alone can bring vivid visualisations to life. However, there are growing concerns about the potential misuse of such technology, particularly in the realm of misinformation and disinformation during crucial election years worldwide.

“Sora is absolutely capable of creating videos that could trick everyday folks,” says Rachel Tobac, co-founder of SocialProof Security.

Hany Farid, a professor at the University of California, Berkeley, highlights the rapid advancements in generative AI and the looming challenge of distinguishing between real and fake content. He emphasises the potential ramifications of combining text-to-video technology with AI-powered voice cloning, which could open up new avenues for creating convincing deepfakes.

Sora builds upon OpenAI’s existing technologies, including DALL-E, an image generator, and large language models like GPT. By combining diffusion models and transformer architecture, Sora achieves a higher level of realism in its video generation process. Diffusion models gradually convert random image pixels into coherent visuals, while transformer architecture contextualises and assembles sequential data, such as text descriptions.
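As a rough illustration of the diffusion idea described above (and not OpenAI’s actual architecture, which has not been published in this form), the process can be sketched as a loop that starts from pure noise and nudges it, step by step, towards a coherent image. In a real model the denoising direction is predicted by a trained neural network; the toy sketch below substitutes the known target for that prediction so the loop structure stays simple and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_reverse_diffusion(target, steps=50):
    """Start from random pixels and iteratively nudge them towards `target`.

    A real diffusion model would use a learned network to predict the
    denoising direction at each step; here we cheat and use the true
    residual (target - x) purely to illustrate the loop structure.
    """
    x = rng.standard_normal(target.shape)  # begin with pure noise
    for t in range(steps):
        predicted_direction = target - x   # stand-in for a model's prediction
        # Take a progressively larger fraction of the remaining distance,
        # so the final step lands exactly on the target.
        x = x + (1.0 / (steps - t)) * predicted_direction
    return x

target = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # a tiny stand-in "image"
result = toy_reverse_diffusion(target)
print(np.allclose(result, target))  # → True
```

The point of the sketch is only the shape of the computation: many small denoising steps that gradually turn randomness into structure, which is the behaviour the article attributes to Sora’s diffusion component.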

“Sora is a data-driven physics engine that can simulate worlds,” says Jim Fan, a senior research scientist at NVIDIA.

Despite its impressive capabilities, Sora’s videos still have some flaws, such as visual glitches and inconsistencies. These glitches, which are particularly noticeable in complex scenes with lots of movement, suggest that deepfake videos generated by Sora are currently detectable. However, experts like Arvind Narayanan from Princeton University caution that society will need to adapt to the evolving landscape of AI-generated content in the long run.

OpenAI is taking proactive steps to mitigate the potential misuse of Sora. The company has refrained from making Sora publicly available until rigorous “red team” exercises are conducted to assess its safeguards against misuse. These exercises involve domain experts in areas such as misinformation and bias, ensuring that Sora undergoes thorough testing before its release.

“While OpenAI has not disclosed specific plans for making Sora widely available in 2024, the company is prioritizing safety measures to prevent the generation of harmful content by its AI models,” says an OpenAI spokesperson.

In conclusion, Sora represents a remarkable advancement in AI technology, with the potential to revolutionize content creation and storytelling. However, it also raises important ethical and societal considerations, underscoring the need for responsible development and deployment of AI models like Sora in the digital age. As society navigates the evolving landscape of AI-generated content, collaboration and proactive measures will be essential in mitigating the risks and maximizing the benefits of this groundbreaking technology.
