Search the Community

Showing results for tags 'sora'.

Found 5 results

  1. Sora fans just learned a hard lesson: filmmakers will be filmmakers and will do what's necessary to make their creations as convincing and eye-popping as possible. But if this made them think less of OpenAI's generative AI video platform, they're wrong.

     When OpenAI handed an early version of the generative video AI platform to a group of creatives, one team – Shy Kids – created an unforgettable video of a man with a yellow balloon for a head. Many declared Air Head to be a weird and powerful breakthrough, but a behind-the-scenes video has cast a rather different spin on it. It turns out that as good as Sora is at generating video from text prompts, there were many things the platform either couldn't do or didn't produce quite as the filmmakers wanted.

     The video's post-production editor, Patrick Cederberg, offered, in an interview with FxGuide, a lengthy list of changes his team made to Sora's output to create the stunning effects we saw in the final, 1-minute, 22-second Air Head video. Sora, for instance, has no understanding of typical film shots like panning, tracking, and zooming, so the team sometimes had to create a pan-and-tilt shot out of an existing, more static clip. And while Sora can output lengthy videos based on long text prompts, there is no guarantee that the subjects in each prompt will remain consistent from one output clip to another. It took considerable work and prompt experimentation to get videos that connected disparate shots into a semi-coherent whole. As Cederberg notes in the Air Head behind-the-scenes video, "What ultimately you're seeing took work time and human hands to get it looking semi-consistent."

     The balloon head sounds particularly challenging: Sora understands the idea of a balloon but doesn't base its output on, say, an individual video or photo of one. In Sora's original conception, every balloon had a string attached; Cederberg's team had to paint that out of each frame. More frustratingly, Sora often wanted to put the impression, outline, or drawing of a face on the balloons. And while the final video features a yellow balloon in every shot, the Sora output often had different balloon colors that Shy Kids would adjust in post.

     Shy Kids told FxGuide that all the video they used is Sora output; it's just that, had they used the footage untouched, the film would've lacked the continuity and cohesion of the final, wistful product.

     This is good news

     Does this news turn the charming Shy Kids video into Sora's Milkshake Duck? Not necessarily. Some of the unretouched videos and images in the behind-the-scenes video are still remarkable, and while post-production was necessary, Shy Kids never shot a single frame of real film to produce the initial images and video.

     Even as AI innovation races forward and we see huge generational leaps as often as every three months, AI of almost any stripe is far from perfect. ChatGPT's responses are usually accurate but can still miss context and get basic facts wrong. With text-to-image generation, the results are even more varied because, unlike AI-generated text responses – which can draw on fact-based sources and mostly predict the right next word – generative image models base their output on a learned representation of an idea or concept. That's particularly true of diffusion models, which use their training data to figure out what something should look like, meaning the output can vary wildly from image to image.

     "It's not as easy as a magic trick: type something in and get exactly what you're hoping for," Shy Kids producer Sydney Leeder says in the behind-the-scenes video.

     These models may have a general idea of what a balloon or a person looks like, but ask such a system to imagine a man on a bike six times and you'll get six different results (illustrated in the sketch at the end of this entry). They may all look good, but it's unlikely the man or the bicycle will be the same in every image. Video generation compounds the issue: the odds of maintaining scene and subject consistency across thousands of frames, and from clip to clip, are extremely low. With that in mind, Shy Kids' accomplishment is even more noteworthy. Air Head manages to maintain both the otherworldliness of an AI video and a cinematic essence.

     This is how AI should work

     Automation doesn't mean the complete removal of human intervention. That's as true for video as it is on the factory floor, where the introduction of robots has not meant people-free production. I vividly recall Elon Musk's efforts to automate as much of the Tesla Model 3's production as possible: it was a near disaster, and production went more smoothly once he brought the humans back.

     A creative process such as filmmaking will always require the human touch. Shy Kids needed an idea before they could start feeding prompts to Sora, and when Sora didn't understand their intentions, they adjusted the output by hand. As most creative endeavors do, it became a partnership, one where Sora provided a tremendous shortcut but still didn't take the project to completion.

     Instead of bursting Air Head's bubble, these revelations remind us that the marriage of traditional media and AI still requires a human's guiding hand, and that's unlikely to change – at least for the time being.
  2. A new interview with the team behind the viral Sora clip Air Head has revealed that AI played a smaller part in its production than was originally claimed. In the interview with Fxguide, Patrick Cederberg (who handled post-production on the viral video) confirmed that OpenAI's text-to-video program was far from the only force involved in its production. The 1-minute and 21-second clip was made with a combination of traditional filmmaking techniques and post-production editing to achieve the look of the final picture.

     Air Head was made by Shy Kids and tells the short story of a man with a literal balloon for a head. While a human voiceover is used, the way OpenAI pushed the clip on social channels such as YouTube certainly left the impression that the visuals were purely powered by AI, and that's not entirely true. As revealed in the behind-the-scenes clip, a ton of work was done by Shy Kids, who took the raw output from Sora and cleaned it up into the finished product. This included manually rotoscoping the backgrounds, removing the faces that would occasionally appear on the balloons, and color correcting.

     Then there's the fact that Sora takes a ton of time to actually get things right. Cederberg explains that there were "hundreds of generations at 10 to 20 seconds a piece", which were then tightly edited in what the team described as a "300:1" ratio of what was generated versus what was primed for further touch-ups (a rough check of those numbers appears at the end of this entry). Such manual work also included editing out the head, which would appear and reappear, and even changing the color of the balloon itself, which would appear red instead of yellow.

     While Sora was used to generate the initial imagery with good results, there was clearly a lot more happening behind the scenes to make the finished product look as good as it does, so we're still a long way out from instantly generated movie-quality productions. Sora remains tightly under wraps save for a handful of carefully curated projects that have been allowed to surface, with Air Head among the most popular. The clip has over 120,000 views at the time of writing, with OpenAI touting it as "experimentation" with the program, downplaying the obvious work that went into the final product.

     Sora is impressive but we're not convinced

     While OpenAI has done a decent job of showcasing what its text-to-video service can do, the lack of transparency is worrying. Air Head is an impressive clip by a talented team, but it was subject to a ton of editing to get the final product to where it is in the short. It's not quite the one-click-and-you're-done approach that many of the tech's boosters have represented it as. It turns out to be a tool that can enhance imagery rather than create it from scratch, something that is already common enough in video production, which makes Sora seem less revolutionary than it first appeared.
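     Those quoted figures invite a quick sanity check. Taking the 1-minute-21-second runtime and the "300:1" ratio at face value (these are only the article's quoted numbers, not confirmed production data), the back-of-envelope arithmetic looks like this:

     ```python
     # Back-of-envelope check of the quoted Air Head production figures.
     # All inputs are the article's numbers, taken at face value.
     final_runtime_s = 1 * 60 + 21          # "1-minute and 21-second clip"
     ratio = 300                            # "300:1" generated vs. used

     generated_s = final_runtime_s * ratio  # total raw Sora footage implied
     print(generated_s / 3600)              # -> 6.75 hours of output

     # At "10 to 20 seconds a piece", that implies on the order of:
     print(generated_s // 20, "to", generated_s // 10, "generations")
     # -> 1215 to 2430 generations, i.e. more than "hundreds", so at
     #    least one of the quoted figures is evidently approximate.
     ```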
  3. OpenAI's new Sora text-to-video generation tool won't be publicly available until later this year, but in the meantime it's serving up some tantalizing glimpses of what it can do – including a mind-bending new video showing what TED Talks might look like in 40 years.

     To create the FPV drone-style video, TED Talks worked with OpenAI and the filmmaker Paul Trillo, who's been using Sora since February. The result is an impressive, if slightly bewildering, fly-through of futuristic conference talks, weird laboratories, and underwater tunnels.

     The video again shows both the incredible potential of OpenAI's Sora and its limitations. The FPV drone-style effect has become a popular one for hard-hitting social media videos, but it traditionally requires advanced drone-piloting skills and expensive kit that goes way beyond the new DJI Avata 2. Sora's new video shows that these kinds of effects could be opened up to new creators, potentially at a vastly lower cost – although that comes with the caveat that we don't yet know how much OpenAI's new tool will cost and who it'll be available to.

     "What will TED look like in 40 years? For #TED2024, we worked with artist @PaulTrillo and @OpenAI to create this exclusive video using Sora, their unreleased text-to-video model. Stay tuned for more groundbreaking AI — coming soon to https://t.co/YLcO5Ju923! pic.twitter.com/lTHhcUm4Fi" – April 19, 2024

     But the video also shows that Sora is still quite far short of being a reliable tool for full-blown movies. The people in the shots are on-screen for only a couple of seconds, and there's plenty of uncanny-valley nightmare fuel in the background. The result is an experience that's exhilarating while also leaving you feeling strangely off-kilter – like touching down again after a skydive. Still, I'm definitely keen to see more samples as we hurtle towards Sora's public launch later in 2024.

     How was the video made?

     OpenAI and TED Talks didn't go into detail about how this specific video was made, but its creator Paul Trillo recently talked more broadly about his experiences as one of Sora's alpha testers. Trillo told Business Insider about the kinds of prompts he uses, including "a cocktail of words that I use to make sure that it feels less like a video game and something more filmic". Apparently these include prompts like "35 millimeter", "anamorphic lens", and "depth of field lens vignette", which are needed or else Sora will "kind of default to this very digital-looking output" (a sketch of this prompt-seasoning approach follows at the end of this entry). Right now, every prompt has to go through OpenAI so it can be run through its strict safeguards around issues like copyright.

     One of Trillo's most interesting observations is that Sora is currently "like a slot machine where you ask for something, and it jumbles ideas together, and it doesn't have a real physics engine to it". This means that it's still a long way off from being truly consistent with people and object states, something OpenAI admitted in an earlier blog post. OpenAI said that Sora "currently exhibits numerous limitations as a simulator", including the fact that "it does not accurately model the physics of many basic interactions, like glass shattering". These incoherencies will likely limit Sora to being a short-form video tool for some time, but it's still one I can't wait to try out.
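     Trillo's "cocktail of words" amounts to simple prompt augmentation: appending camera and film-stock vocabulary to steer the model away from its default digital look. Here's a minimal sketch of the idea; only the three quoted modifiers come from the interview, and the helper itself is a hypothetical illustration, not his actual workflow.

     ```python
     # Prompt "cocktail": append filmic modifiers to a plain scene
     # description so the model leans cinematic rather than digital.
     FILMIC_MODIFIERS = [
         "35 millimeter",                 # quoted by Trillo
         "anamorphic lens",               # quoted by Trillo
         "depth of field lens vignette",  # quoted by Trillo
     ]

     def filmic_prompt(scene: str) -> str:
         """Join a scene description with the filmic style modifiers."""
         return ", ".join([scene] + FILMIC_MODIFIERS)

     print(filmic_prompt("an FPV drone flight through a futuristic TED stage"))
     # -> an FPV drone flight through a futuristic TED stage,
     #    35 millimeter, anamorphic lens, depth of field lens vignette
     ```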
  4. OpenAI recently published a music video for the song Worldweight by August Kamp, made entirely with its text-to-video engine, Sora. You can check out the whole thing on the company's official YouTube channel, and it's pretty trippy, to say the least.

     Worldweight consists of a series of short clips in a wide 8:3 aspect ratio featuring fuzzy shots of various environments. You see a cloudy day at the beach, a shrine in the middle of a forest, and what looks like pieces of alien technology. The ambient track coupled with the footage results in a uniquely ethereal experience: half pleasant and half unsettling.

     It's unknown what text prompts were used on Sora; Kamp didn't share that information. But she did explain the inspiration behind the video in the description. She states that when she created the track, she imagined what a video representing Worldweight would look like, but she lacked a way to share her vision. Thanks to Sora, that's no longer an issue: the footage displays what she had always envisioned. It's "how the song has always 'looked'" from her perspective.

     Embracing Sora

     If you pay attention throughout the entire runtime, you'll notice hallucinations. Leaves turn into fish, bushes materialize out of nowhere, and flowers have cameras instead of petals. But because of the music's ethereal nature, it all fits together; nothing feels out of place. If anything, the video embraces the nightmares.

     We should mention August Kamp isn't the only person harnessing Sora for content creation. Media production company Shy Kids recently published a short film on YouTube called "Air Head", which was also made with the AI engine. It plays like a movie trailer about a man who has a balloon for a head.

     Analysis: Lofty goals

     It's hard to say from this content whether Sora will see widespread adoption. Granted, things are in the early stages, but ready or not, that hasn't stopped OpenAI from pitching its tech to major Hollywood studios. Studio executives are apparently excited at the prospect of AI saving time and money on production. August Kamp herself is a proponent of the technology, stating, "Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me". She looks forward to seeing "what other forms of storytelling" will appear as artificial intelligence continues to grow.

     In our opinion, tools such as Sora will most likely enjoy niche adoption among independent creators. Both Kamp and Shy Kids appear to understand what the generative AI can and cannot do. They embrace the weirdness, using it to great effect in their storytelling. Sora may be great at bringing strange visuals to life, but whether it can make "normal-looking content" remains to be seen. People still talk about how weird or nightmare-inducing content made by generative AI is, and unless OpenAI can surmount this hurdle, Sora may not amount to much beyond niche usage.

     It's still unknown when Sora will be made publicly available. OpenAI is holding off on a launch, citing potential interference in global elections as one of its reasons, although there are plans to release the AI by the end of 2024. If you're looking for other platforms, check out TechRadar's list of the best AI video makers for 2024.
  5. A man with a balloon for a head is somehow not the weirdest thing you'll see today, thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

     Unlike OpenAI's ChatGPT AI chatbot and the DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. However, on Monday, OpenAI revealed it had given "visual artists, designers, creative directors, and filmmakers" access to Sora and showcased their efforts in a "first impressions" blog post.

     While all of the films, ranging in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI Artist in Residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

     Not all the videos are so esoteric. If we had to give out an award for most entertaining, it might be multimedia production company shy kids' "Air Head". It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and...never mind.

     Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, "As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal." And yes, it's a funny and extremely surreal little movie.

     But wait, it gets stranger. The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's "Beyond Our Reality," which is like a twisted National Geographic nature film depicting never-before-seen animal hybrids like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

     OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was it an iterative process that somehow got them to the point where the man's balloon head perfectly met his shoulders, or the Bunny Armadillo transformed from grotesque to the final, cute product?

     That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products. "The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints," said Josephine Miller in the blog post.

     Go watch the clips, but don't blame us if you wake up in the middle of the night screaming.