Showing results for tags 'meta'.

Found 20 results

  1. If you’ve lost track of this week’s most important tech news then don’t fret, as we’re here to catch you up to speed – and this is one week you won't want to have missed. We say that because some major things have happened in the last seven days. Meta announced that it’s sharing its Horizon OS with other VR headset makers – which is the biggest announcement it will likely make this decade – Apple teased its 2024 iPad lines and gave us a launch date for them, and DJI gave us a release date for its cheapest-ever 4K drone. And here at TechRadar we hosted our first Sustainability Week, producing a whole host of articles showcasing some fantastic examples of how the tech industry is focusing on sustainability – and a few instances where it still needs to do better. Let’s get you all caught up on the week’s biggest stories that you might have missed… 7. We hosted Sustainability Week (Image credit: Shutterstock / Troyan) We ran our first Sustainability Week, highlighting the sustainability heroes working to make a difference in tech – and some we think could do more. We talked to Samsung about its energy efficient, AI-driven appliances, delved deep into Apple's recent patent for removable uniform battery enclosures, and even spoke with the creators of a bioengineered plant purifier. It's not all sunshine and rainbows, though, as we also zoned in on the energy impact of processors and graphics cards, calling for the likes of AMD, Nvidia and Intel to work together towards a more sustainable chipmaking future. We've also shared insights on how you can do your bit, whether that's by opting for refurbished tech, seeking out sustainably sourced devices, or using your phone to save the planet. Read more: Our sustainability week coverage 6. Apple set a launch date for its new iPads The official invite for Apple's May 7 launch event (Image credit: Apple) Apple has confirmed the date for its next “special Apple Event” as May 7 at 7am PT / 10am ET / 3pm BST, which is midnight AEST on May 8, and while iPads aren’t officially on the cards, the invite includes a snazzy Apple logo featuring an Apple Pencil surrounded by splashes of color – a strong indication that new tablets will be shown off. We’re expecting the headline announcement to be a new iPad Pro with an OLED display and M3 chipset, with two iPad Air 6 models plus some accessories – such as a new Pencil and potentially a Magic Keyboard – also likely to get shown off. And if this isn’t enough Apple for you, just over a month later Apple’s annual developer conference, WWDC 2024, will kick off on June 10. So make sure to check back here regularly to keep yourself in the loop. Read more: Apple sets imminent launch date for new iPads 5. Meta made a massive OS announcement (Image credit: Meta) This week Meta announced that its Horizon OS – the operating system used by its VR headsets like the Meta Quest 3 – is coming to third-party hardware, starting with ASUS, Lenovo and Xbox devices. This is huge news, as it’ll hopefully lead to a much more diverse range of VR headsets in the near future, with the already teased gadgets including a “performance gaming headset,” “mixed-reality devices for productivity,” and a more Quest-like headset coming from the trio of partners respectively. However, Meta might want to make sure that Horizon OS doesn’t copy too many of the bad aspects of Windows, though only time will tell how this move will play out. Read more: Meta is making spatial computing's Windows with Horizon OS 4. 
A Sony wearable took us one step closer to Dune (Image credit: Future / Axel Metz) With the recent launch of Apple’s Vision Pro headset and Dyson’s Bane-like air-purifying headphones, you’d be forgiven for thinking that we’ve reached peak wearable tech. However, this week Sony showed us that the wearable product class is just getting started. The Sony Reon Pocket 5 is a wearable thermo device that cools or warms your body, depending on the conditions of your environment. Designed to sit neatly on the back of your neck, the Reon Pocket 5 uses a plate-like "thermo module" and five sensors to determine optimal body temperature and, hopefully, make you more comfortable while you're traveling on public transport or walking in less-than-ideal conditions. The Reon Pocket 5 offers five levels of cooling and four levels of warmth, meaning that – in theory – it’s just as useful on a stuffy commuter train as it is outdoors on a frosty morning. We took the device for a spin at a recent demo event, and we can confirm that it does indeed regulate body temperature pretty effectively – though you’ll have to put up with looking like an extra from a sci-fi movie when wearing it. Read more: Sony’s wearable air conditioner could improve your commute 3. An Android phone served up superior audio (Image credit: Moondrop) The lack of an audio jack in most of today’s phones hasn’t been popular with everyone – especially not audiophiles, who love the fidelity of wired connections for hi-res audio. Enter the Moondrop MIAD01 – this phone not only has a 3.5mm jack, it also has a 4.4mm balanced output for connecting to a powerful music system without distortion, and a “flagship” DAC to make sure high-end digital files get treated properly on the way to your ears. On top of that, it’s a pretty cool phone as well. It’s got a great futuristic look (as do all Moondrop products – just check out their earbuds when you’re bored), a large 120Hz OLED screen, and dual cameras on the back. For serious streaming audiophiles, a dedicated music player or a portable DAC tends to be a big part of listening on the go, and this phone aims to replace both – and the music lovers on the TechRadar team are watching closely. Read more: This Android phone for audiophiles offers a hi-res DAC and 3.5mm jack 2. The Deadpool and Wolverine trailer delivered Easter eggs galore (Image credit: Marvel Studios) Deadpool and Wolverine is edging closer towards its July 26 launch date, so it’s high time that Marvel released some new footage to further fuel our excitement for the duo’s multiversal buddy-cop flick. Thankfully, the comic book giant duly obliged earlier this week (April 22) with a brand-new trailer – and, unsurprisingly, the Marvel Phase 5 movie’s latest teaser is packed with Easter eggs. Some are easier to spot than others, mind you, so we’ve taken the liberty of picking out six of the best and/or easily missable ones from Deadpool 3’s newest trailer. Once you’ve read that, check out our X-Men movies in order guide to see what films you need to stream ahead of the MCU’s next flick, too. Read more: New Deadpool and Wolverine trailer is packed with Marvel Easter eggs 1. DJI’s cheapest-ever 4K drone got a release date (Image credit: DJI) A DJI drone announcement without the usual speculation, rumors and leaked pictures is a rare thing, but the DJI Mini 4K quietly popped up on the DJI Amazon store this week, complete with an April 29 release date.
We’re not expecting big things from the Mini 4K – it will likely be a modest refresh of the DJI Mini 2 SE, with similar specs, including a 31-minute flight time, level 5 wind resistance and a sub-250g body – but it will become DJI’s cheapest-ever drone to shoot 4K video, and that should make it one of 2024’s most popular drones for beginners. Read more: DJI Mini 4K release date confirmed, here's what to expect View the full article
  2. We are excited to partner with Meta to release the latest state-of-the-art large language model, Meta Llama 3, on Databricks. With Llama... View the full article
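For a rough sense of what using it looks like, here is a minimal sketch of querying a Llama 3 serving endpoint on Databricks through its OpenAI-compatible API; the workspace URL, token, and endpoint name are placeholders rather than values from the announcement, so check your own workspace before running it.

# Sketch: calling a Databricks-hosted Llama 3 endpoint via the OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    api_key="dapi-your-databricks-token",                                   # hypothetical token
    base_url="https://<your-workspace>.cloud.databricks.com/serving-endpoints",
)

response = client.chat.completions.create(
    model="databricks-meta-llama-3-70b-instruct",                           # assumed endpoint name
    messages=[{"role": "user", "content": "Summarize what Llama 3 is in one sentence."}],
    max_tokens=100,
)
print(response.choices[0].message.content)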
  3. After officially entering the AI game in September 2023, Meta just scaled up its AI chatbot experiment. Some WhatsApp users have been able to play around with the company's new AI assistant for a while now, and Meta's AI upgrade was first introduced in beta in November last year. More functionalities appeared in users' search bars later in March. However, the trial was restricted to people in the US in a limited capacity. Now, people in India and parts of Africa have spotted Meta AI on WhatsApp. Speaking to TechCrunch, the company confirmed that it plans to expand its AI trials to more users worldwide and integrate the AI chatbot into Facebook Messenger and Instagram, too. More platforms, more users "Our generative AI-powered experiences are under development in varying phases, and we’re testing a range of them publicly in a limited capacity," a Meta spokesperson told TechCrunch. The move perfectly illustrates the company's will to compete with AI's bigger players, most notably OpenAI and its ChatGPT-powered tools. What's more, India is the country worldwide with the most Facebook and WhatsApp users. WhatsApp monthly usage is also reportedly high in African countries, such as Nigeria, South Africa, and Kenya. To check if you're a chosen one, you should update your WhatsApp for iOS or Android app to the latest version directly from the official app store. Meta AI will appear on a rolling basis for selected users who have their app set to English. "Meta starts limited testing of Meta AI on WhatsApp in different countries! Some users in specific countries can now experiment with the Meta AI chatbot, exploring its capabilities and functionalities through different entry points." (via X, April 12, 2024) Designed to reply to users' queries and generate images from text prompts, the Meta AI chatbot is also landing on Facebook Messenger and Instagram in a limited capacity across the US, India, and a few more selected countries. On Instagram, the plan is also to use the feature for search queries, TechCrunch reported. These signs of Meta AI expansion aren't happening in a vacuum, either. A few days back, the company announced plans to release AI models with "human-level cognition" capabilities. "We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory," Joelle Pineau, the vice president of AI research at Meta, told the Financial Times when announcing the new Llama 3 model. The choice is now yours if you want to help Meta accelerate its work towards an even more powerful AI—but do we all want that, really?—or remain a silent and skeptical spectator. View the full article
  4. After a handful of rumors and speculation suggested Meta was working on a pair of AR glasses, it unceremoniously confirmed that Meta AR glasses are on the way – doing so via a short section at the end of a blog post celebrating the 10th anniversary of Reality Labs (the division behind its AR/VR tech). While not much is known about them, the glasses were described as a product merging Meta’s XR hardware with its developing Meta AI software to “deliver the best of both worlds” in a sleek wearable package. We’ve collected all the leaks, rumors, and some of our informed speculation in this one place so you can get up to speed on everything you need to know about the teased Meta AR glasses. Let’s get into it. Meta AR glasses: Price We’ll keep this section brief as right now it’s hard to predict how much a pair of Meta AR glasses might cost because we know so little about them – and no leakers have given a ballpark estimate either. Current smart glasses like the Ray-Ban Meta Smart Glasses or the Xreal Air 2 AR smart glasses will set you back between $300 and $500 / £300 and £500 / AU$450 and AU$800; Meta’s teased specs, however, sound more advanced than what we have currently. Meta's glasses could cost as much as Google Glass (Image credit: Future) As such, the Meta AR glasses might cost nearer $1,500 (around £1,200 / AU$2,300) – which is what the Google Glass smart glasses launched at. A higher price seems more likely given the AR glasses' novelty, and the fact Meta would need to create small yet powerful hardware to cram into them – a combo that typically leads to higher prices. We’ll have to wait and see what gets leaked and officially revealed in the future. Meta AR glasses: Release date Unlike price, several leaks have pointed to when we might get our hands – or I suppose eyeballs – on Meta’s AR glasses. Unfortunately, we might be waiting until 2027. That’s according to a leaked Meta internal roadmap shared by The Verge back in March 2023. The document explained that a precursor pair of specs with a display will apparently arrive in 2025, with ‘proper’ AR smart glasses due in 2027. (Image credit: Meta) In February 2024, Business Insider cited unnamed sources who said a pair of true AR glasses could be shown off at this year’s Meta Connect conference. However, that doesn’t mean they’ll launch sooner than 2027. While Connect does highlight soon-to-release Meta tech, the company takes the opportunity to show off stuff coming further down the pipeline too. So, its demo of Project Orion (as those who claim to be in the know call it) could be one of those ‘you’ll get this when it’s ready’ kind of teasers. Obviously, leaks should be taken with a pinch of salt. Meta could have brought the release of its specs forward, or pushed it back, depending on a multitude of technological factors – we won’t know until Meta officially announces more details. The fact that it has teased the specs suggests their release is at least a matter of when, not if. Meta AR glasses: Specs and features We haven't heard anything about the hardware you’ll find in Meta’s AR glasses, but we have a few ideas of what we’ll probably see from them based on Meta’s existing tech and partnerships. Meta and LG recently confirmed that they’ll be partnering to bring OLED panels to Meta’s headsets, and we expect OLED screens will come to its AR glasses too. OLED displays appear in other AR smart glasses, so it would make sense if Meta followed suit.
Additionally, we anticipate that Meta’s AR glasses will use a Qualcomm Snapdragon chipset, just like Meta’s Ray-Ban smart glasses. Currently, that’s the AR1 Gen 1, though considering Meta’s AR specs aren’t due until 2027, it seems more likely they’d be powered by a next-gen chipset – either an AR2 Gen 1 or an AR1 Gen 2. The AR glasses could let you bust ghosts wherever you go (Image credit: Meta) As for features, Meta’s already teased the two standouts: AR and AI abilities. What this means in actual terms is yet to be seen, but imagine virtual activities like being able to set up an AR Beat Saber jam wherever you go, an interactive HUD when you’re navigating from one place to another, or interactive elements that you and other users can see and manipulate together – either for work or play. AI-wise, Meta is giving us a sneak peek of what's coming via its current smart glasses. That is, you can speak to its Meta AI to ask it a variety of questions and for advice, just as you can with other generative AI, but in a more conversational way, since you use your voice. It also has a unique ability, Look and Ask, which is like a combination of ChatGPT and Google Lens. This lets the specs snap a picture of what’s in front of you to inform your question, allowing you to ask it to translate a sign you can see, for a recipe using ingredients in your fridge, or what the name of a plant is so you can find out how best to care for it. The AI features are currently in beta but are set to launch properly soon. And while they seem a little imperfect right now, we’ll likely only see them get better in the coming years – meaning we could see something very impressive by 2027, when the AR specs are expected to arrive. Meta AR glasses: What we want to see A slick Ray-Ban-like design The design of the Ray-Ban Meta Smart Glasses is great (Image credit: Meta) While Meta’s smart specs aren't amazing in every way – more on that down below – they are practically perfect in the design department. The classic Ray-Ban shape is sleek, they’re lightweight and super comfy to wear all day, and the charging case is not only practical, it's gorgeous. While it’s likely Ray-Ban and Meta will continue their partnership to develop future smart glasses – and by extension the teased AR glasses – there’s no guarantee. But if Meta’s reading this, we really hope that you keep working with Ray-Ban so that your future glasses have the same high-quality look and feel that we’ve come to adore. If the partnership does end, we'd like Meta to at least take cues from what Ray-Ban has taught it to keep the design game on point. Swappable lenses We want to change our lenses, Meta! (Image credit: Meta) While we will rave about Meta’s smart glasses design, we’ll admit there’s one flaw that we hope future models (like the AR glasses) improve on: they need easily swappable lenses. While a handsome pair of shades will be faultless for your summer vacations, they won’t serve you well in dark and dreary winters. If we could easily change our Meta glasses from sunglasses to clear lenses as needed, then we’d wear them a lot more frequently – as it stands, they’re left gathering dust most months because it just isn’t the right weather. As the glasses get smarter, more useful, and pricier (as we expect will be the case with the AR glasses), they need to be a gadget we can wear all year round, not just when the sun's out.
Speakers you can (quietly) rave to These open-ear headphones are amazing – Meta, take notes (Image credit: Future) Hardware-wise, the main upgrade we want to see in Meta’s AR glasses is better speakers. Currently, the speakers housed in each arm of the Ray-Ban Meta Smart Glasses are pretty darn disappointing – they can leak a fair amount of noise, the bass is practically nonexistent, and the overall sonic performance is put to shame by even basic over-ear headphones. We know it can be a struggle to get the balance right with open-ear designs. But when we’ve been spoiled by open-ear options like the JBL SoundGear Sense – which have an astounding ability to deliver great sound and let you hear the real world clearly (we often forget we’re wearing them) – we’ve come to expect a lot and are disappointed when gadgets don’t deliver. The camera could also get some improvements, but we expect the AR glasses won’t be as content creation-focused as Meta’s existing smart glasses – so we’re less concerned about this aspect getting an upgrade compared to their audio capabilities. View the full article
  5. The post Building new custom silicon for Meta’s AI workloads appeared first on Engineering at Meta. View the full article
  6. The post Building an infrastructure for AI’s future appeared first on Engineering at Meta. View the full article
  7. The post Introducing the next-gen Meta Training and Inference Accelerator appeared first on Engineering at Meta. View the full article
  8. The unique internal design of the Apple Vision Pro compared to its direct rivals was today revealed in CT scans performed by Lumafield. Lumafield used its Neptune industrial CT scanner and Voyager analysis software to conduct non-destructive teardowns of the Apple Vision Pro, Meta Quest Pro, and Meta Quest 3. The study began by examining the internal design and layout of the headsets, noting the Apple Vision Pro's emphasis on efficient use of space. The Vision Pro's components are arranged in a manner that maximizes internal space without compromising the exterior, featuring a flexible PCB ribbon and electronics positioned at various angles. This contrasts with the Meta Quest Pro and Quest 3, which utilize a more traditional approach by stacking primary elements on a single plane. An examination of the sensors across the devices reveals the Vision Pro's advanced use of eye and hand tracking technologies for UI navigation, involving a variety of sensors such as LiDAR and IR cameras. The Meta Quest devices, on the other hand, incorporate handheld controllers and an experimental version of hand tracking. Thermal management strategies also vary significantly between the headsets. The Quest Pro uses a combination of basic active and passive cooling, while the Vision Pro features micro-blowers. Battery design and placement further differentiate the headsets, with the Vision Pro opting for an external battery pack to prioritize performance, while the Meta Quest models integrate the battery within the headset for user convenience. Visit Lumafield's website for more information and to interact with its CT scans of Apple's Vision Pro headset. This article, "Apple Vision Pro CT Scans Showcase Internal Differences to Meta Quest" first appeared on MacRumors.com. View the full article
  9. This week has been another hectic one in the world of tech. Samsung's One UI 6.1 update – which was supposed to make its tech better – actually made Galaxy S23 phones worse (though there is a temporary fix), OpenAI released a weird AI-made music video, and Meta teased its first AR glasses. To help you get up to speed on the latest tech stories we've recapped these and the others so you can get caught up on the most important events from the last week. You'll also find links to our full coverage of every story if you want to learn more. So scroll down for your firmware update, and we'll see you next week for another ICYMI. 7. PlayStation Portal lost its PSP emulator The PlayStation Portal (Image credit: Future/Rob Dwiar) This week it became even more clear that the streaming-only PlayStation Portal remote player won't be getting any offline functionality any time soon. Back in February, a team of programmers claimed that they had managed to get some PlayStation Portable (PSP) games running natively on the handheld, which usually requires a PlayStation 5 console to play games over an internet connection. Don’t get too excited though, because the same team revealed that the exploits they used to get the games running have been patched after they “responsibly reported the issues to PlayStation”. The alleged change came as part of the wider version 2.0.6 software update and, while the official release notes cryptically state that the update simply “improved system software performance and stability”, it definitely seems plausible that Sony would patch out such an exploit if it was found. Whether we’ll see such games supported without an internet connection officially is yet to be seen, but the handheld is firmly remote play only for now. Read more: Programmers got PSP games running on the PlayStation Portal 6. Samsung's One UI 6.1 update wreaked havoc A Samsung Galaxy A54 (Image credit: Future | Alex Walker-Todd) It’s been a whirlwind week for Samsung Galaxy S23 users who downloaded the big One UI 6.1 update, which finally brings Galaxy AI features to last-gen Samsung phones. Just a few days after some Samsung fans blamed One UI 6.1 for causing slower charging speeds on older Galaxy phones, others reported that the company’s latest update had wreaked havoc on the touchscreen functionality of certain Galaxy S23, Galaxy S23 Plus and Galaxy S23 Ultra devices. Several users claimed that their Galaxy S23 displays were left “totally unresponsive” following the One UI 6.1 download, while others said their touchscreen functionality was limited to the S Pen. Yikes! Thankfully, Samsung quickly acknowledged the problem, identified the cause, and issued a temporary fix. The company blamed “compatibility issues with some Google app features”, specifically Google Discover, for the irregular touchscreen behavior triggered by One UI 6.1, and added that deleting the Google app's data should put a stop to any touchscreen slowdown (for now, at least). Read more: Samsung shares temporary fix for Galaxy S23 touchscreen issue 5. Google agreed to delete Incognito mode data Google Incognito Mode isn't so private (Image credit: Getty Images) Not to alarm you but all that incognito browsing you’ve been doing in Chrome? Turns out that maybe Google was storing some of the data related to it. No judgments here but it’s a revelation that we first heard whispers about a few years back when someone in California launched a class-action lawsuit. 
Google said, “Nope,” but now there’s a settlement that Google’s agreed to, which appears to indicate that, even as Google still sort of says, “Nope,” there is something to delete (not locally but somewhere in Google’s cloud). As part of the agreement, which will be signed in June, Google agreed to delete any incognito data it stored, change its incognito browser messaging, and allow people to proactively block third-party cookies when in this mode. As we see it, it’s a reminder that you should never assume that no one else can see what you’re browsing in any mode. Read more: Google may have been storing your incognito browsing data 4. Disney set a date for its password sharing crackdown Disney Plus password sharing is going extinct (Image credit: Netflix / Disney+ / Amazon Prime Video) It was announced that the Disney Plus password sharing crackdown will begin in June, according to Disney CEO Bob Iger. Thankfully, it won’t hit everyone right away, with Iger explaining that "In June, we'll be launching our first real foray into password sharing in just a few markets, but then it will grow significantly with a full rollout in September." What this means in practical terms is that by the end of September at the latest, you'll no longer be able to share your Disney Plus account with people you don’t live with. The move comes after years of Disney hemorrhaging money – though its financial situation has been improving as of late – and following Netflix’s wildly successful password sharing crackdown. Despite users claiming they’d abandon the streaming service, Netflix’s subscriber numbers instead rose massively – so clearly Disney is hoping to replicate that with its own account sharing shutdown. Read more: Disney Plus' password crackdown starts in June 3. OpenAI released Sora’s first (very odd) music video Sora – OpenAI’s text-to-video tool – was used to create a music video for the song Worldweight by August Kamp this week, and the result is a trippy romp through forests, beaches, underwater habitats and various otherworldly environments. It’s an interesting watch simply for the sheer novelty of the project, though the unsettling vibe caused by the classic AI-image blurred look and the various hallucinations – where Sora makes errors – are yet another reminder that the tool still struggles to create 'normal-looking content'. For independent creators looking to embrace weirdness, Sora might be a popular tool, but despite Hollywood studios reportedly being interested in the technology, we’re still far from convinced we’ll one day see a Sora-made blockbuster. Read more: OpenAI's Sora first music video is psychedelic trip 2. Apple reportedly started work on a robot The Amazon Astro home robot (Image credit: Amazon) Apple’s car project might have been permanently parked, but it’s apparently already working on its next pie-in-the-sky idea: a personal robot. This week, Bloomberg’s Mark Gurman (who often shares credible Apple insider information) reported that Apple “has teams investigating a push into personal robotics.” One of the proposed designs would be a mobile assistant that follows you around and can perform some household tasks, and another would in effect be a moving tablet. As with all leaks, we should take this with a pinch of salt – especially because the project is apparently still in its early stages, so who knows when or even if Apple’s robot will ever see the light of day. But we'll be watching with interest even so.
Read more: Apple could be planning a surprise Amazon Astro robot rival 1. Meta teased its first AR glasses Ray-Ban Meta Smart Glasses (Image credit: Meta) Meta’s Reality Labs – the team behind its VR tech – turned 10 this week, and to celebrate, Meta released a blog post highlighting major events from the team’s history. It’s a delightful trip down memory lane, sure, but what was more interesting was Meta’s first official teaser of its next major new hardware release: AR glasses. According to the blog post, these AR specs would “deliver the best of both worlds” by blending Meta’s VR hardware (like the Meta Quest 3) with the form factor and AI abilities of its Ray-Ban Meta Smart Glasses. Rumors have suggested Meta’s AR glasses could land in 2027 at the earliest – so we have a while to wait – but if they deliver on what’s been teased, then we can’t wait to test them out. Read more: We’re excited for Meta’s first AR glasses View the full article
  10. Meta’s Reality Labs division – the team behind its VR hardware and software efforts – has turned 10 years old, and to celebrate, the company has released a blog post outlining its decade-long history. However, while a trip down memory lane is fun, the most interesting part came right at the end, as Meta teased its next major new hardware release: its first-ever pair of AR glasses. According to the blog post, these specs would merge the currently distinct product pathways Meta’s Reality Labs has developed – specifically, melding its AR and VR hardware (such as the Meta Quest 3) with the form factor and AI capabilities of its Ray-Ban Meta Smart Glasses to, as Meta puts it, “deliver the best of both worlds.” Importantly for all you Quest fans out there, Meta adds that its AR glasses wouldn’t replace its mixed-reality headsets. Instead, it sees them being the smartphones to the headsets’ laptop/desktop computers – suggesting that the glasses will offer solid performance in a sleek form factor, but with less oomph than you’d get from a headset. Before we get too excited, though, Meta hasn’t said when these AR specs will be released – and unfortunately they might still be a few years away. When might we see Meta’s AR glasses? A report from The Verge back in March 2023 shared an apparent Meta Reality Labs roadmap that suggested the company wanted to release a pair of smart glasses with a display in 2025, followed by a pair of 'proper' AR smart glasses in 2027. We're ready for Meta's next big hardware release (Image credit: Meta) However, while we may have to wait some time to put these things on our heads, we might get a look at them in the next year or so. A later report that dropped in February this year, this time via Business Insider, cited unnamed sources who said a pair of true AR glasses would be demoed at this year’s Meta Connect conference. Dubbed 'Orion' by those who claim to be in the know, the specs would combine Meta’s XR (a catchall for VR, AR, and MR) and AI efforts – which is exactly what Meta described in its recent blog post. As always, we should take rumors with a pinch of salt, but given that this latest teaser came via Meta itself, it’s somewhat safe to assume that Meta AR glasses are a matter of when, not if. And boy are we excited. We want Meta AR glasses, and we want ‘em now Currently, Meta has two main hardware lines: its VR headsets and its smart glasses. And while it’s rumored to be working on new entries to both – such as a budget Meta Quest 3 Lite, a high-end Meta Quest Pro 2, and the aforementioned third-generation Ray-Ban glasses with a screen – these AR glasses would be its first big new hardware line since it launched the Ray-Ban Stories in 2021. And the picture Meta has painted of its AR glasses is sublime. Firstly, while Meta’s current Ray-Ban smart glasses aren’t yet the smartest, a lot of major AI upgrades are currently in beta – and should be launching properly soon. The Ray-Ban Meta Smart Glasses are set to get way better with AI (Image credit: Future / Philip Berne) Its Look and Ask feature combines the intelligence of ChatGPT – or in this instance its in-house Meta AI – with the image-analysis abilities of an app like Google Lens. This apparently lets you identify animals, discover facts about landmarks, and plan a meal based on the ingredients you have – it all sounds very sci-fi, and actually useful, unlike some AI applications.
We then take those AI abilities and combine them with Meta’s first-class Quest platform, which is home to the best software and developers working in the XR space. While many apps likely couldn’t be ported to the new system due to hardware restrictions – as the glasses might not offer controllers, will probably be AR-only, and might be too small to offer as powerful a chipset or as much RAM as its Quest hardware – we hope that plenty will make their way over. And Meta’s existing partners would plausibly develop all-new AR software to take advantage of the new system. Based on the many Quest 3 games and apps we’ve tried, even if just a few of the best make their way to the specs, they’d help make Meta’s new product feel instantly useful – a factor that’s a must for any new gadget. Lastly, we’d hopefully see Meta’s glasses adopt the single-best Ray-Ban Meta Smart Glasses feature: their design. These things are gorgeous, comfortable, and their charging case is the perfect combination of fashion and function. We couldn't ask for better-looking smart specs than these (Image credit: Meta) Give us everything we have already design-wise, and throw in interchangeable lenses so we aren’t stuck with sunglasses all year round – which in the UK where I'm based are only usable for about two weeks a year – and the AR glasses could be perfect. We’ll just have to wait and see what Meta shows off, either at this year’s Meta Connect or in the future – and as soon as they're ready for prime time, we’ll certainly be ready to test them. View the full article
  11. Support for the original Oculus Quest headset will soon end, as Meta has sent out emails to developers informing them of the company’s future plans for the device. Forbes managed to get its hands on the details, and according to its report, the tech giant is going to be strict. They really do not want the headset to stick around. Developers have until April 30 to roll out any “app updates for the Quest 1 to the Meta Quest store.” Past that date, nothing will be allowed to be released, even if dev teams want to continue catering to users of the older model. Meta will outright block the patch. If an app is available on other Quest devices, the update can roll out to those headsets, but the Quest 1 support will be denied. New apps that come out after April 30 are not going to appear on the online store, nor will owners even be allowed to buy them. They’ll be stuck in limbo. The email continues by saying Meta will maintain the Quest 1 by releasing “critical bug fixes and security patches” until August of this year. Once the summer is over, the company will be wiping its hands clean, marking the official end of its first mainline headset. Users who want to continue on the platform will need to upgrade to either a Quest 2 or Quest 3. Deprecation The deprecation of the Quest 1, as sudden as it may seem, has been a long time coming. Meta originally announced the end of the headset back in January 2023. Soon after, it began to periodically pull the plug on certain features. Upgrades eventually ground to a halt, and people lost the ability to create parties, as well as access to the social aspects of Horizon Home. Meta is turning the Quest 1 into a plastic brick as it cuts off support without any wiggle room. However, it's possible that the headset could see new life among niche online communities or platforms like SideQuest. No one is stopping independent developers from sideloading apps. If you plan on joining these groups, keep in mind the software you download from unofficial spaces could come with malware. Meta isn’t going to come in and save you. You’re on your own. Analysis: is the Quest 2 next? Despite knowing all this would happen ahead of time, the Quest 1 cutoff is harsh, to say the least, especially when you compare it to gaming consoles. The headset didn’t even reach its fifth birthday before getting the ax. Consoles, on the other hand, often see many more years of support, sometimes a full decade’s worth. Seeing the shutdown makes us wonder what’s going to happen to the Quest 2. The second-gen model was released about a year-and-a-half after the original headset. Although it brought many improvements at launch, the performance of the Quest 2 has been eclipsed by other headsets. It could potentially see a similar end, although we think it’s unlikely. The Quest 2 has proven itself to be much more popular than the original, so a sudden cutoff likely won’t happen any time soon. It should exist as the brand's mid-range option moving forward. If you're affected by the shutoff and want a new device, check out TechRadar's list of the best VR headsets for 2024. View the full article
  12. Meta has unveiled details about its AI training infrastructure, revealing that it currently relies on almost 50,000 Nvidia H100 GPUs to train its open source Llama 3 LLM. The company says it will have over 350,000 Nvidia H100 GPUs in service by the end of 2024, and the computing power equivalent to nearly 600,000 H100s when combined with hardware from other sources. The figures were revealed as Meta shared details on its 24,576-GPU data center scale clusters. Meta's own AI chips The company explained: “These clusters support our current and next generation AI models, including Llama 3, the successor to Llama 2, our publicly released LLM, as well as AI research and development across GenAI and other areas.” The clusters are built on Grand Teton (named after the National Park in Wyoming), an in-house-designed, open GPU hardware platform. Grand Teton integrates power, control, compute, and fabric interfaces into a single chassis for better overall performance and scalability. The clusters also feature high-performance network fabrics, enabling them to support larger and more complex models than before. Meta says one cluster uses a remote direct memory access network fabric solution based on the Arista 7800, while the other features an NVIDIA Quantum2 InfiniBand fabric. Both solutions interconnect 400 Gbps endpoints. "The efficiency of the high-performance network fabrics within these clusters, some of the key storage decisions, combined with the 24,576 NVIDIA Tensor Core H100 GPUs in each, allow both cluster versions to support models larger and more complex than could be supported in the RSC and pave the way for advancements in GenAI product development and AI research," Meta said. Storage is another critical aspect of AI training, and Meta has developed a Linux Filesystem in Userspace backed by a version of its 'Tectonic' distributed storage solution optimized for Flash media. This solution reportedly enables thousands of GPUs to save and load checkpoints in a synchronized fashion, in addition to "providing a flexible and high-throughput exabyte scale storage required for data loading". While the company's current AI infrastructure relies heavily on Nvidia GPUs, it's unclear how long this will continue. As Meta continues to evolve its AI capabilities, it will inevitably focus on developing and producing more of its own hardware. Meta has already announced plans to use its own AI chips, called Artemis, in servers this year, and the company previously revealed it was getting ready to manufacture custom RISC-V silicon. View the full article
  13. Marking a major investment in Meta’s AI future, we are announcing two 24k GPU clusters. We are sharing details on the hardware, network, storage, design, performance, and software that help us extract high throughput and reliability for various AI workloads. We use this cluster design for Llama 3 training. We are strongly committed to open compute and open source. We built these clusters on top of Grand Teton, OpenRack, and PyTorch and continue to push open innovation across the industry. This announcement is one step in our ambitious infrastructure roadmap. By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100 GPUs as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s. To lead in developing AI means leading investments in hardware infrastructure. Hardware infrastructure plays an important role in AI’s future. Today, we’re sharing details on two versions of our 24,576-GPU data center scale cluster at Meta. These clusters support our current and next generation AI models, including Llama 3, the successor to Llama 2, our publicly released LLM, as well as AI research and development across GenAI and other areas. A peek into Meta’s large-scale AI clusters Meta’s long-term vision is to build artificial general intelligence (AGI) that is open and built responsibly so that it can be widely available for everyone to benefit from. As we work towards AGI, we have also worked on scaling our clusters to power this ambition. The progress we make towards AGI creates new products, new AI features for our family of apps, and new AI-centric computing devices. While we’ve had a long history of building AI infrastructure, we first shared details on our AI Research SuperCluster (RSC), featuring 16,000 NVIDIA A100 GPUs, in 2022. RSC has accelerated our open and responsible AI research by helping us build our first generation of advanced AI models. It played and continues to play an important role in the development of Llama and Llama 2, as well as advanced AI models for applications ranging from computer vision, NLP, and speech recognition, to image generation, and even coding. Under the hood Our newer AI clusters build upon the successes and lessons learned from RSC. We focused on building end-to-end AI systems with a major emphasis on researcher and developer experience and productivity. The efficiency of the high-performance network fabrics within these clusters, some of the key storage decisions, combined with the 24,576 NVIDIA Tensor Core H100 GPUs in each, allow both cluster versions to support models larger and more complex than could be supported in the RSC, and pave the way for advancements in GenAI product development and AI research. Network At Meta, we handle hundreds of trillions of AI model executions per day. Delivering these services at a large scale requires a highly advanced and flexible infrastructure. Custom designing much of our own hardware, software, and network fabrics allows us to optimize the end-to-end experience for our AI researchers while ensuring our data centers operate efficiently. With this in mind, we built one cluster with a remote direct memory access (RDMA) over converged Ethernet (RoCE) network fabric solution based on the Arista 7800 with Wedge400 and Minipack2 OCP rack switches. The other cluster features an NVIDIA Quantum2 InfiniBand fabric. Both of these solutions interconnect 400 Gbps endpoints.
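To make the collective traffic these fabrics carry a little more concrete, here is a minimal, generic PyTorch sketch (not Meta's internal tooling) of an AllGather microbenchmark over the NCCL backend, which runs on either RoCE or InfiniBand; the message size, iteration counts, and torchrun launch line are illustrative assumptions.

# AllGather microbenchmark sketch -- launch with, e.g.:
#   torchrun --nproc_per_node=8 allgather_bench.py
import os
import time
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")          # NCCL rides on RoCE or InfiniBand transparently
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
world = dist.get_world_size()

msg_bytes = 256 * 1024 * 1024                    # 256 MiB per rank, near roofline message sizes
x = torch.empty(msg_bytes // 2, dtype=torch.float16, device="cuda")
out = [torch.empty_like(x) for _ in range(world)]

for _ in range(5):                               # warm-up iterations
    dist.all_gather(out, x)
torch.cuda.synchronize()
dist.barrier()

iters = 20
t0 = time.perf_counter()
for _ in range(iters):
    dist.all_gather(out, x)
torch.cuda.synchronize()
elapsed = time.perf_counter() - t0

if dist.get_rank() == 0:
    # Rough per-rank receive bandwidth: each rank pulls (world - 1) shards per iteration.
    gb = msg_bytes * (world - 1) * iters / 1e9
    print(f"approx. per-rank bandwidth: {gb / elapsed:.1f} GB/s across {world} ranks")
dist.destroy_process_group()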
With these two, we are able to assess the suitability and scalability of these different types of interconnect for large-scale training, giving us more insights that will help inform how we design and build even larger, scaled-up clusters in the future. Through careful co-design of the network, software, and model architectures, we have successfully used both RoCE and InfiniBand clusters for large GenAI workloads (including our ongoing training of Llama 3 on our RoCE cluster) without any network bottlenecks. Compute Both clusters are built using Grand Teton, our in-house-designed, open GPU hardware platform that we’ve contributed to the Open Compute Project (OCP). Grand Teton builds on the many generations of AI systems that integrate power, control, compute, and fabric interfaces into a single chassis for better overall performance, signal integrity, and thermal performance. It provides rapid scalability and flexibility in a simplified design, allowing it to be quickly deployed into data center fleets and easily maintained and scaled. Combined with other in-house innovations like our Open Rack power and rack architecture, Grand Teton allows us to build new clusters in a way that is purpose-built for current and future applications at Meta. We have been openly designing our GPU hardware platforms beginning with our Big Sur platform in 2015. Storage Storage plays an important role in AI training, and yet is one of the least talked-about aspects. As the GenAI training jobs become more multimodal over time, consuming large amounts of image, video, and text data, the need for data storage grows rapidly. The need to fit all that data storage into a performant, yet power-efficient footprint doesn’t go away, though, which makes the problem more interesting. Our storage deployment addresses the data and checkpointing needs of the AI clusters via a home-grown Linux Filesystem in Userspace (FUSE) API backed by a version of Meta’s ‘Tectonic’ distributed storage solution optimized for Flash media. This solution enables thousands of GPUs to save and load checkpoints in a synchronized fashion (a challenge for any storage solution) while also providing the flexible and high-throughput exabyte-scale storage required for data loading. We have also partnered with Hammerspace to co-develop and land a parallel network file system (NFS) deployment to meet the developer experience requirements for this AI cluster. Among other benefits, Hammerspace enables engineers to perform interactive debugging for jobs using thousands of GPUs, as code changes are immediately accessible to all nodes within the environment. When paired together, the combination of our Tectonic distributed storage solution and Hammerspace enables fast iteration velocity without compromising on scale. The storage deployments in our GenAI clusters, both Tectonic- and Hammerspace-backed, are based on the YV3 Sierra Point server platform, upgraded with the latest high-capacity E1.S SSDs we can procure in the market today. Aside from the higher SSD capacity, the number of servers per rack was customized to achieve the right balance of throughput capacity per server, rack count reduction, and associated power efficiency. Utilizing the OCP servers as Lego-like building blocks, our storage layer is able to flexibly scale to future requirements in this cluster as well as in future, bigger AI clusters, while being fault-tolerant to day-to-day infrastructure maintenance operations.
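As a simplified illustration of what synchronized, multi-rank checkpointing looks like at the framework level (a generic PyTorch sketch, not Meta's Tectonic/FUSE stack), torch.distributed.checkpoint can have every rank write its shard into one directory on a shared mount; the paths and model are placeholders, and a recent PyTorch (2.2 or later) is assumed.

# Synchronized distributed checkpointing sketch -- every rank participates,
# and shards land under a single directory on storage visible to all nodes.
import os
import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch import nn

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

model = nn.Linear(4096, 4096).cuda()             # stand-in for a real (FSDP-sharded) model
state = {"model": model.state_dict(), "step": 1000}

# Collective save: the writer coordinates ranks so they checkpoint in lockstep.
dcp.save(state, storage_writer=dcp.FileSystemWriter("/mnt/shared_fs/ckpt/step_1000"))

# Collective load back into the same structure, then restore the module.
dcp.load(state, storage_reader=dcp.FileSystemReader("/mnt/shared_fs/ckpt/step_1000"))
model.load_state_dict(state["model"])
dist.destroy_process_group()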
Performance One of the principles we have in building our large-scale AI clusters is to maximize performance and ease of use simultaneously without compromising one for the other. This is an important principle in creating the best-in-class AI models. As we push the limits of AI systems, the best way we can test our ability to scale up our designs is to simply build a system, optimize it, and actually test it (while simulators help, they only go so far). In this design journey, we compared the performance seen in our small clusters with that of our large clusters to see where our bottlenecks are. In the graph below, AllGather collective performance is shown (as normalized bandwidth on a 0-100 scale) when a large number of GPUs are communicating with each other at message sizes where roofline performance is expected. Our out-of-box performance for large clusters was initially poor and inconsistent, compared to optimized small cluster performance. To address this, we made several changes to how our internal job scheduler schedules jobs with network topology awareness – this resulted in latency benefits and minimized the amount of traffic going to upper layers of the network. We also optimized our network routing strategy in combination with NVIDIA Collective Communications Library (NCCL) changes to achieve optimal network utilization. This helped push our large clusters to achieve great, expected performance, just as our small clusters do. In the figure we see that small cluster performance (overall communication bandwidth and utilization) reaches 90%+ out of the box, but unoptimized large cluster performance has very poor utilization, ranging from 10% to 90%. After we optimize the full system (software, network, etc.), we see large cluster performance return to the ideal 90%+ range. In addition to software changes targeting our internal infrastructure, we worked closely with teams authoring training frameworks and models to adapt to our evolving infrastructure. For example, NVIDIA H100 GPUs open the possibility of leveraging new data types such as 8-bit floating point (FP8) for training. Fully utilizing larger clusters required investments in additional parallelization techniques, and new storage solutions provided opportunities to highly optimize checkpointing across thousands of ranks to run in hundreds of milliseconds. We also recognize debuggability as one of the major challenges in large-scale training. Identifying a problematic GPU that is stalling an entire training job becomes very difficult at a large scale. We’re building tools such as desync debug, or a distributed collective flight recorder, to expose the details of distributed training, and help identify issues in a much faster and easier way. Finally, we’re continuing to evolve PyTorch, the foundational AI framework powering our AI workloads, to make it ready for training on tens, or even hundreds, of thousands of GPUs. We have identified multiple bottlenecks for process group initialization, and reduced the startup time from sometimes hours down to minutes. Commitment to open AI innovation Meta maintains its commitment to open innovation in AI software and hardware. We believe open-source hardware and software will always be a valuable tool to help the industry solve problems at large scale. Today, we continue to support open hardware innovation as a founding member of OCP, where we make designs like Grand Teton and Open Rack available to the OCP community.
We also continue to be the largest and primary contributor to PyTorch, the AI software framework that is powering a large chunk of the industry. We also continue to be committed to open innovation in the AI research community. We’ve launched the Open Innovation AI Research Community, a partnership program for academic researchers to deepen our understanding of how to responsibly develop and share AI technologies – with a particular focus on LLMs. An open approach to AI is not new for Meta. We’ve also launched the AI Alliance, a group of leading organizations across the AI industry focused on accelerating responsible innovation in AI within an open community. Our AI efforts are built on a philosophy of open science and cross-collaboration. An open ecosystem brings transparency, scrutiny, and trust to AI development, and leads to innovations that everyone can benefit from, built with safety and responsibility top of mind. The future of Meta’s AI infrastructure These two AI training cluster designs are a part of our larger roadmap for the future of AI. By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100s as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s. As we look to the future, we recognize that what worked yesterday or today may not be sufficient for tomorrow’s needs. That’s why we are constantly evaluating and improving every aspect of our infrastructure, from the physical and virtual layers to the software layer and beyond. Our goal is to create systems that are flexible and reliable to support the fast-evolving new models and research. The post Building Meta’s GenAI Infrastructure appeared first on Engineering at Meta. View the full article
  14. Shortly after the Vision Pro launched, Meta CEO Mark Zuckerberg made it clear that he believes the Quest 3 is the better VR headset, and over the weekend, he again took to Threads to reiterate his belief that the $3,500 Vision Pro is inferior to the $500 Quest 3 (via 9to5Mac). Analyst Benedict Evans said that the Vision Pro is the "device Meta wants to reach in 3-5 years," and that it is confusing that Meta VR engineers have suggested the Vision Pro is "basically just the same thing" as the Quest. In response, Zuckerberg said that the Quest "is better" than the Vision Pro now, and that if the Meta Quest has the same "motion blur," weight, or "lack of precision inputs" as the Vision Pro in the future, then Meta will have "regressed significantly." I don't think we're saying the devices are the same. We're saying Quest is better. If our devices weigh as much as theirs in 3-5 years, or have the motion blur theirs has, or the lack of precision inputs, etc, then that means we'll have regressed significantly. Yes, their resolution is higher, but they paid for that with many other product tradeoffs that make their device worse in most ways. That's not what we aspire to. Zuckerberg also took offense to the Meta Quest being called "a games device," and clarified that some of the top apps on the Quest are social, browser, and video player apps. Actually, 3 of the top 7 Quest apps are already social apps - Horizon, VR Chat, and Rec Room. Browser and video player are top apps too. Fitness isn't as high up there, but has a passionate community as well. So I think the narrative that these headsets are only for games is out of date. And yes, more resolution is better - but trading off ergonomics and motion blur isn't a clear win when Quest's resolution is also quite good. Device weight and "motion blur" have been two points that Zuckerberg has focused on in his criticism of the Vision Pro, and he has dismissed the higher resolution of Apple's headset as unnecessary given the "tradeoffs" that he sees. Zuckerberg in February said that the Quest 3 is superior because it is 7x less expensive than the Vision Pro, it's more comfortable, the Quest is "crisper," there are "precision controllers," and there's a "deeper" immersive content library. Compared to the Apple Vision Pro's 4K microLED displays, the Quest 3 has two 2K LCD panels. It also weighs in at 515 grams, while the Vision Pro weighs 600 to 650 grams depending on the Light Seal combination used, and the Quest does not have a separate battery pack. This article, "Meta CEO Mark Zuckerberg Again Disparages Apple Vision Pro" first appeared on MacRumors.com. View the full article
  15. What’s going on with generative AI (GenAI) at Meta? And what does the future have in store? In this episode of the Meta Tech Podcast, Meta engineer Pascal Hartig (@passy) speaks with Devi Parikh, an AI research director at Meta. They cover a wide range of topics, including the history and future of GenAI and the most interesting research papers that have come out recently. And, of course, they discuss some of Meta’s latest GenAI innovations, including: Audiobox, a foundational model for generating sound and soundscapes using natural language prompts. Emu, Meta’s first foundational model for image generation. Purple Llama, a suite of tools to help developers safely and responsibly deploy GenAI models. Download or listen to the episode below: You can also find the episode on various podcast platforms: Spotify PocketCasts Apple Podcasts Google Podcasts The Meta Tech Podcast is a podcast, brought to you by Meta, where we highlight the work Meta’s engineers are doing at every level – from low-level frameworks to end-user features. Send us feedback on Instagram, Threads, or X. And if you’re interested in AI career opportunities at Meta visit the Meta Careers page. The post How Meta is advancing GenAI appeared first on Engineering at Meta. View the full article
  16. Amazon Bedrock is an easy way to build and scale generative AI applications with leading foundation models (FMs). Amazon Bedrock now supports fine-tuning for Meta Llama 2 and Cohere Command Light, along with Amazon Titan Text Lite and Amazon Titan Text Express FMs, so you can use labeled datasets to increase model accuracy for particular tasks. View the full article
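For a rough idea of how such a fine-tuning job is started programmatically, here is a minimal boto3 sketch; the base model identifier, IAM role, S3 URIs, and hyperparameter names are assumptions used to show the shape of the call rather than values from the announcement, so check the Bedrock documentation for what your account actually supports.

# Sketch: starting a Bedrock fine-tuning (model customization) job with boto3.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

job = bedrock.create_model_customization_job(
    jobName="llama2-support-tickets-ft",
    customModelName="llama2-13b-support-tickets",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # hypothetical role
    baseModelIdentifier="meta.llama2-13b-v1",                           # assumed base model ID
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/ft-output/"},
    hyperParameters={                                                   # names vary per base model
        "epochCount": "3",
        "batchSize": "1",
        "learningRate": "0.00005",
    },
)
print("Started customization job:", job["jobArn"])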
  17. In this week’s #TheLongView: Meta’s enforced hybrid work plan is failing badly, and Meta makes more layoffs. View the full article
  18. Why AI has everyone’s attention, what it means for different data roles, and how Alteryx and Snowflake are bringing AI to data use cases. LLaMA (Large Language Model Meta AI), along with other large language models (LLMs), has suddenly become more open and accessible for everyday applications. Ahmad Khan, Head of artificial intelligence (AI) and machine learning (ML) strategy at Snowflake, has not only witnessed the emergence of AI in the data space but helped shape it throughout his career. At Snowflake, Ahmad works with customers to solve AI use cases and helps define the product strategy and vision for AI innovation, which includes key technology partners like Alteryx. Read on for the interview highlights and listen to the podcast episode for the full conversation. View the full article
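As a small taste of how accessible these models have become, here is a minimal sketch of running an openly available Llama-family chat model locally with Hugging Face transformers; the model ID shown is a gated repository, so this assumes you have accepted Meta's license and configured a Hugging Face access token.

# Sketch: local text generation with an open Llama-family model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",   # gated repo: requires accepted license + HF token
    device_map="auto",                        # place weights on available GPU(s), fall back to CPU
)

prompt = "Explain in two sentences what a large language model is."
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])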
  19. The post Code Llama: Meta’s state-of-the-art LLM for coding appeared first on Engineering at Meta. View the full article
  20. Microsoft is committed to the responsible advancement of AI to enable every person and organization to achieve more. Over the last few months, we have talked about advancements in our Azure infrastructure, Azure Cognitive Services, and Azure Machine Learning to make Azure better at supporting the AI needs of all our customers, regardless of their scale. Meanwhile, we also work closely with some of the leading research organizations around the world to empower them to build great AI. Today, we’re thrilled to announce an expansion of our ongoing collaboration with Meta: Meta has selected Azure as a strategic cloud provider to help accelerate AI research and development… View the full article