OpenAI’s latest AI videos prove why it was smart to limit access to Sora ahead of the US election

What you need to know

  • OpenAI shared two videos that showcase its Sora model.
  • Sora can follow text prompts to create realistic videos.
  • While the videos showcase the potential of Sora, they include several awkward moments and artifacts that look unrealistic.
  • OpenAI limited access to Sora to a shortlist of creators in order to study potential risks of the technology.

OpenAI's latest offering, Sora, leaves me both awestruck and cautious. The model's potential to revolutionize video creation is undeniable, as the showcase videos by Niceaunties and David Sheldrick demonstrate. But the awkward moments and unrealistic artifacts scattered through them are a reminder that the technology is still in its infancy.

Earlier this year, OpenAI unveiled Sora, an AI model that generates largely lifelike videos from text prompts. The content Sora produces still has imperfections, but much of it comes across as authentic and convincing. To show off the technology's progress, OpenAI released two "Sora Showcase" videos on YouTube.

The two videos, each made by an established artist, are meant to highlight what Sora can do. One comes from Singaporean artist Niceaunties, while the other is the work of British-Korean artist David Sheldrick.

Art is subjective, and I'm not an artist myself, so I won't dig into the deeper meaning of either video. Instead, let's focus on how realistic the footage looks. Niceaunties' still shots are captivating at first glance. Non-human objects, such as eggs, clocks, and kitchen utensils, look especially lifelike across several scenes.

Human movement is where things get difficult. Some of the unnatural-looking people in the video may be deliberate artistic choices, but it's hard to believe that every awkward scene is intentional.

Sheldrick's video follows a similar pattern: static shots and brief moments look strikingly authentic, while longer clips of people in motion hold up far worse. Arms and hands, in particular, look artificial in places, and it's unclear whether those effects were designed that way on purpose.

Sora is still quite new, so demanding flawlessness would be unreasonable. The amount of dynamic motion in these clips suggests the creators were eager to test the AI's limits. With further training and creators fine-tuning their prompts, I think it's highly likely we'll soon see videos like these that genuinely resemble real life.

Limiting AI access

Generative AI is a double-edged sword. Its capabilities are undeniably impressive, but advancements in the field have sparked valid worries: that jobs may become obsolete due to automation, and that AI's heavy consumption of electricity and water poses environmental problems.

In its current state, AI can hallucinate or give incorrect answers even under favorable conditions. Google's AI, for instance, once stated that Google was no longer in existence, suggested eating rocks, and offered harmful advice to someone feeling down, all while the technology was still relatively new and already widely accessible to consumers. My point is twofold: AI responses need to be understood in context, and some limitations may be necessary while the technology matures.

With the US election dominating the news, I'm glad that tools like Sora are limited to a select group of users, and that we're likely still years away from anyone being able to create a convincing fake video with just a few clicks. That time will come; I just don't think we've reached it yet.

Politics can bring out the worst in people. Misleading information already spreads rapidly on social platforms, even when it's blatantly false. Combine political passions, the speed of social media, and the ability to produce a convincing fake video in moments, and the situation could escalate dramatically.

We've already seen AI used to spread misinformation, and bad actors will do more damage if given better tools and left unchecked. Microsoft has plans to protect people from AI misuse during the US election, but laws and protections often lag behind criminals and bad actors.

It's prudent of OpenAI to restrict access to Sora. More safeguards need to be in place before people can freely use any AI model to generate video. Other video models already exist, but OpenAI deserves credit for its thoughtful approach.
