OpenAI's new Sora text-to-video generation tool won't be publicly available until later this year, but in the meantime it's serving up some tantalizing glimpses of what it can do – including a mind-bending new video (below) showing what TED Talks might look like in 40 years.

To create the FPV drone-style video, TED Talks worked with OpenAI and the filmmaker Paul Trillo, who's been using Sora since February. The result is an impressive, if slightly bewildering, fly-through of futuristic conference talks, weird laboratories and underwater tunnels.

The video again shows both the incredible potential of OpenAI's Sora and its limitations. The FPV drone-style effect has become a popular one for hard-hitting social media videos, but it traditionally requires advanced drone piloting skills and expensive kit that goes well beyond the new DJI Avata 2. Sora's new video shows that these kinds of effects could be opened up to new creators, potentially at a vastly lower cost – although that comes with the caveat that we don't yet know how much OpenAI's new tool will cost or who it'll be available to.

"What will TED look like in 40 years? For #TED2024, we worked with artist @PaulTrillo and @OpenAI to create this exclusive video using Sora, their unreleased text-to-video model. Stay tuned for more groundbreaking AI — coming soon to https://t.co/YLcO5Ju923! pic.twitter.com/lTHhcUm4Fi" (April 19, 2024)

But the video (above) also shows that Sora is still quite far short of being a reliable tool for full-blown movies. The people in the shots are on-screen for only a couple of seconds, and there's plenty of uncanny valley nightmare fuel in the background. The result is an experience that's exhilarating while also leaving you feeling strangely off-kilter – like touching down again after a skydive. Still, I'm definitely keen to see more samples as we hurtle towards Sora's public launch later in 2024.

How was the video made?
(Image credit: OpenAI / TED Talks)

OpenAI and TED Talks didn't go into detail about how this specific video was made, but its creator Paul Trillo recently talked more broadly about his experience as one of Sora's alpha testers.

Trillo told Business Insider about the kinds of prompts he uses, including "a cocktail of words that I use to make sure that it feels less like a video game and something more filmic". Apparently these include prompts like "35 millimeter", "anamorphic lens", and "depth of field lens vignette", which are needed or else Sora will "kind of default to this very digital-looking output". Right now, every prompt has to go through OpenAI so it can be run through its strict safeguards around issues like copyright.

One of Trillo's most interesting observations is that Sora is currently "like a slot machine where you ask for something, and it jumbles ideas together, and it doesn't have a real physics engine to it". This means it's still a long way off from being truly consistent with people and object states, something that OpenAI admitted in an earlier blog post. OpenAI said that Sora "currently exhibits numerous limitations as a simulator", including the fact that "it does not accurately model the physics of many basic interactions, like glass shattering".

These incoherencies will likely limit Sora to being a short-form video tool for some time, but it's still one I can't wait to try out.

You might also like

OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

OpenAI's new voice synthesizer can copy your voice from just 15 seconds of audio

Elon Musk might be right about OpenAI — but that doesn't mean he should win