Search the Community
Showing results for tags 'ai'.
The generative AI revolution is transforming the way that teams work, and Databricks Assistant leverages the best of these advancements. It allows you... View the full article
Tagged with: data engineering, ai (and 1 more)
Since I've been working with Azure OpenAI Service from a developer perspective as well, I've decided to build a sample application to demonstrate not just the IaC deployment of Azure OpenAI Service and GPT models, but also some basic use cases of integrating AI into your own enterprise applications. Here's a screenshot […] The article Introducing AIChatUI: Open Source AI Chat Sample with Azure OpenAI Service & GPT-3 / GPT-4 appeared first on Build5Nines. View the full article
Tagged with: open source, openai (and 5 more)
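The AIChatUI sample above is about wiring your own application code to an Azure OpenAI GPT deployment. As a rough sketch of the most basic form of that integration (assuming the `openai` Python package; the endpoint, key, API version, and deployment name below are placeholders, not values from the article):

```python
# Minimal sketch: calling an Azure OpenAI GPT deployment with the official
# openai Python package. The endpoint, key, API version, and deployment
# name are placeholders -- substitute your own resource's values.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# "model" is the *deployment* name chosen when deploying GPT-3.5/GPT-4,
# not the raw model identifier.
response = client.chat.completions.create(
    model="my-gpt4-deployment",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What does Azure OpenAI Service provide?"},
    ],
)
print(response.choices[0].message.content)
```

The IaC half of such a sample (provisioning the Azure OpenAI resource and model deployments) is out of scope here; this sketch covers only the application-side call.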
Snowflake Summit 2024 is set to bring together the data, AI, and tech communities to discuss advancements and cutting-edge innovation in the Data Cloud. It is an unmissable opportunity to connect with data experts and explore the limitless possibilities of AI in data and emerging trends in application development. Hevo is thrilled to be at […] View the full article
Many businesses rush to adopt AI but fail due to poor strategy. This post serves as your go-to playbook for success. View the full article
Tagged with: playbooks, strategies (and 1 more)
We're redefining the developer environment with GitHub Copilot Workspace – where any developer can go from idea, to code, to software in natural language. Sign up here.

In the past two years, generative AI has foundationally changed the developer landscape, largely as a tool embedded inside the developer environment. In 2022, we launched GitHub Copilot as an autocomplete pair programmer in the editor, boosting developer productivity by up to 55%. Copilot is now the most widely adopted AI developer tool. In 2023, we released GitHub Copilot Chat—unlocking the power of natural language in coding, debugging, and testing—allowing developers to converse with their code in real time.

After sharing an early glimpse at GitHub Universe last year, today we are reimagining the nature of the developer experience itself with the technical preview of GitHub Copilot Workspace: the Copilot-native developer environment. Within Copilot Workspace, developers can now brainstorm, plan, build, test, and run code in natural language. This new task-centric experience leverages different Copilot-powered agents from start to finish, while giving developers full control over every step of the process.

Copilot Workspace represents a radically new way of building software with natural language, and is expressly designed to deliver—not replace—developer creativity, faster and easier than ever before. With Copilot Workspace we will empower more experienced developers to operate as systems thinkers, and materially lower the barrier of entry for who can build software. Welcome to the first day of a new developer environment. Here's how it works:

It all starts with the task…

For developers, the greatest barrier to entry is almost always at the beginning. Think of how often you hit a wall in the first steps of a big project, feature request, or even bug report, simply because you don't know how to get started. GitHub Copilot Workspace meets developers right at the origin: a GitHub Repository or a GitHub Issue. By leveraging Copilot agents as a second brain, developers will have AI assistance from the very beginning of an idea.

…Workspace builds the full plan

From there, Copilot Workspace offers a step-by-step plan to solve the issue based on its deep understanding of the codebase, issue replies, and more. It gives you everything you need to validate the plan and test the code, in one streamlined list in natural language.

And it's entirely editable…

Everything that GitHub Copilot Workspace proposes—from the plan to the code—is fully editable, allowing you to iterate until you're confident in the path ahead. You retain all of the autonomy, while Copilot Workspace lifts your cognitive strain.

And once you're satisfied with the plan, you can run your code directly in Copilot Workspace, jump into the underlying GitHub Codespace, and tweak all code changes until you are happy with the final result. You can also instantly share a workspace with your team via a link, so they can view your work and even try out their own iterations.

All that's left then is to file your pull request, run your GitHub Actions and security code scanning, and ask your team members for human code review. And best of all, they can leverage your Copilot Workspace to see how you got from idea to code.

Also: GitHub Copilot Workspace is mobile compatible

And because ideas can happen anywhere, GitHub Copilot Workspace was designed to be used from any device—empowering a real-world development environment that can work on a desktop, laptop, or on the go.

This is our mark on the future of the development environment: an intuitive, Copilot-powered infrastructure that makes it easier to get started, to learn, and ultimately to execute.

Enabling a world with 1B developers

Early last year, GitHub celebrated over 100 million developers on our platform—and counting. As programming in natural language lowers the barrier of entry to who can build software, we are accelerating to a near future where one billion people on GitHub will control a machine just as easily as they ride a bicycle. We've constructed GitHub Copilot Workspace in pursuit of this horizon, as a conduit to help extend the economic opportunity and joy of building software to every human on the planet.

At the same time, we live in a world dependent on—and in short supply of—professional developers. Around the world, developers add millions of lines of code every single day to ever more complex systems, and are increasingly behind on maintaining the old ones. Just like any infrastructure in this world, we need real experts to maintain and renew the world's code. By quantifiably reducing boilerplate work, we will empower professional developers to increasingly operate as systems thinkers. We believe the step change in productivity gains that professional developers will experience by virtue of Copilot, and now Copilot Workspace, will only continue to increase labor demand.

That's the dual potential of GitHub Copilot: for the professional and hobbyist developer alike, channeling creativity into code just got a whole lot easier. Today, we begin the technical preview for GitHub Copilot Workspace. Sign up now. We can't wait to see what you will build from here.

https://github.blog/2024-04-29-github-copilot-workspace/
Tagged with: github copilot workspace, copilot-native (and 3 more)
This article is an overview of a particular subset of data structures useful in machine learning and AI development, along with explanations and example implementations. View the full article
Tagged with: data structures, ai (and 2 more)
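The linked article's own implementations aren't reproduced in this preview, but a size-bounded min-heap for top-k selection is a typical member of the genre, showing up in k-nearest neighbors, beam search, and retrieval re-ranking. A minimal illustrative sketch (not taken from the article) using Python's standard library:

```python
# Top-k selection with a size-bounded min-heap, a staple structure in ML
# code. Keeping only k items makes this O(n log k) rather than the
# O(n log n) cost of sorting everything.
import heapq

def top_k(scored_items, k):
    """Return the k highest-scoring (score, item) pairs, best first."""
    heap = []
    for score, item in scored_items:
        if len(heap) < k:
            heapq.heappush(heap, (score, item))
        elif score > heap[0][0]:  # beats the current k-th best
            heapq.heapreplace(heap, (score, item))
    return sorted(heap, reverse=True)

print(top_k([(0.2, "a"), (0.9, "b"), (0.5, "c"), (0.7, "d")], k=2))
# -> [(0.9, 'b'), (0.7, 'd')]
```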
It's a pretty good bet that the Google Pixel 8a is going to break cover at Google I/O 2024 on May 14, and as the day approaches, we've seen a pile of new leaks turn up that give us a better idea of what we can expect from this mid-ranger.

First up is well-known tipster Evan Blass, who has posted an extensive set of pictures of the Pixel 8a. You can see the phone from the front and the back, and at an angle, and in its four rumored colors: Obsidian (black), Porcelain (white-ish), Bay (blue), and Mint (green).

P8a pic.twitter.com/tqn9FvDGlw (April 25, 2024)

These designs have previously been leaked, so there's not a whole lot that's new here, but it's more evidence that this is indeed what the Pixel 8a is going to look like. The images are sharp and clear too, giving us a good look at the design. It appears this phone will look a lot like the Pixel 8 and the Pixel 7a, with the recognizable camera bar around the back. It does seem as though this year's mid-range Pixel is going to sport a more curved frame than its immediate predecessors, however.

Promo materials

To no one's surprise, the Pixel 8a will feature AI (Image credit: @OnLeaks / MySmartPrice)

Onward to the next leak, and MySmartPrice has managed to get hold of a promotional video for the Pixel 8a. It was briefly available to view on YouTube before being pulled – and as YouTube is owned by Google, we're assuming someone higher up had a word. If you want to see some stills taken from the video before it disappeared, you can find some over at Phandroid.

There's actually not too much that's new in this video, besides seeing the Pixel 8a itself – a lot of the AI features the clip shows off, like instant photo edits and live text translations, are already available in newer Pixel phones.

Our final leak for now is over at Android Headlines, where there are some promotional images showing off some of the capabilities of the Pixel 8a, including tools like Circle to Search. The images suggest all-day battery life, the Tensor G3 chipset, IP67 protection, and seven years of security updates.

The same source says the on-sale date for the Google Pixel 8a is going to be May 16, and there are some pictures of the official silicone cases that'll come along with it. Expect to hear all the details about this upcoming phone on May 14.

You might also like
- The Google Pixel 8a has now leaked on video
- What we're expecting from Google I/O 2024
- The Google Pixel 8a might have a 120Hz screen

View the full article
Intel has launched a new AI processor series for the edge, promising industrial-class deep learning inference. The new 'Amston Lake' Atom x7000RE chips offer up to double the cores and twice the graphics base frequency of the previous x6000RE series, all neatly packed within a 6W–12W BGA package.

The x7000RE series packs more performance into a smaller footprint. Boasting up to eight E-cores, it supports LPDDR5/DDR5/DDR4 memory and up to nine PCIe 3.0 lanes, delivering robust multitasking capabilities. Intel says its new processors are designed to withstand challenging conditions, enduring extreme temperature variations, shock, and vibration, and to operate in hard-to-reach locations. They offer 2x SATA Gen 3.2 ports, up to 4x USB 3.2 Gen 2 ports, a USB Type-C port, and a 2.5GbE Ethernet connection, along with Intel Wi-Fi, Bluetooth, and 5G platform capabilities.

Embedded, industrial, and communication

The x7000RE series consists of four SKUs, all suitable for embedded, industrial, and communication use under extended temperature conditions. The x7211RE and x7213RE have 2 cores and relatively lower base frequencies, while the x7433RE has 4 cores, and the x7835RE has 8 cores with higher base frequencies. All four SKUs support a GPU execution unit count of either 16 or 32, plus Intel's Time Coordinated Computing and Time-Sensitive Networking GbE features.

The x7000RE series offers integrated Intel UHD Graphics, Intel DL Boost, Intel AVX2 with INT8 support, and OpenVINO toolkit support. Intel says the chips will allow customers to easily deploy deep learning inference at the industrial edge and in smart cities, and "enhance computer vision solutions with built-in AI capabilities and ecosystem-enabled camera modules" as well as "capture power- and cost-efficient performance to enable latency-bounded workloads in robotics and automation."

More from TechRadar Pro
- Intel bets on a secret weapon to beat AMD in some AI workloads
- Intel unveils 288-core Leviathan 5th-gen Xeon CPU
- Intel could move away from regular CPU releases

View the full article
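OpenVINO support is what carries the deep-learning-inference claim, so here is a rough sketch of what deploying a model on a CPU target like these Atoms looks like with the OpenVINO Python runtime. The model file and input shape are placeholders, and nothing below is specific to the x7000RE:

```python
# Rough sketch: CPU inference with the OpenVINO runtime. The model path
# and the 1x3x224x224 input are placeholders for your own network.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # OpenVINO IR (ONNX also works)
compiled = core.compile_model(model, "CPU")  # run on the CPU, e.g. the Atom's E-cores

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy image batch
result = compiled([frame])[compiled.output(0)]
print(result.shape)
```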
With Apple set to announce iOS 18 (and a whole lot more) at its Worldwide Developers Conference (WWDC) on June 10, it's rumored that the company is in talks with ChatGPT developer OpenAI to help with a major AI upgrade for the iPhone.

This comes from the usually reliable Mark Gurman at Bloomberg, who says discussions between Apple and OpenAI have been "renewed" and are now intensifying, according to unnamed people "familiar with the matter".

Last month Bloomberg reported that Apple was speaking to Google about using the Gemini chatbot inside iOS 18, so this hasn't come completely out of the blue: Apple is clearly looking for a partnership with someone for its next big software upgrade.

What this latest report suggests is that OpenAI might have emerged as the frontrunner in the race, which means tools like ChatGPT and Dall-E (also developed by OpenAI) might find their way into the iOS 18 update, expected to be rolled out around September time.

To be confirmed

Siri could be in line for an upgrade (Image credit: Apple)

Gurman says that Apple hasn't made a decision yet: it might decide to work with Google, or with OpenAI, or with both companies. What is certain is that iOS 18 is going to be focused very much on artificial intelligence – Apple has already confirmed it.

We can expect iOS 18 to come with some kind of local, device-based AI too. Apple has already been showing off some new large language models (LLMs) that are small enough to be stored and run from a smartphone.

Exactly what we'll get remains to be seen, but some kind of AI text and image generation seems likely, plus a substantial upgrade to Siri. There have also been rumors of features like AI-powered playlist generation in Apple Music.

All eyes are now on WWDC 2024 in June, when everything Apple has been working on should be revealed – for iOS, macOS, watchOS, tvOS, visionOS, and more. Public betas of these updates will then follow, before the final versions get pushed out.

You might also like
- iOS 18 could finally let you properly customize your Home Screen
- 5 new features rumored to be coming to iOS 18
- The Apple Notes app could seriously step up its game in iOS 18

View the full article
Ampere Computing unveiled its AmpereOne Family of processors last year, boasting up to 192 single-threaded Ampere cores, the highest in the industry. These chips, designed for cloud efficiency and performance, were Ampere's first product based on its new custom core leveraging internal IP, signalling a shift in the sector, according to CEO Renée James.

At the time of the launch, James said, "Every few decades of compute there has emerged a driving application or use of performance that sets a new bar of what is required of performance. The current driving uses are AI and connected everything combined with our continued use and desire for streaming media. We cannot continue to use power as a proxy for performance in the data center. At Ampere, we design our products to maximize performance at a sustainable power, so we can continue to drive the future of the industry."

AmpereOne-3 on its way

Jeff Wittich, chief product officer at Ampere, recently spoke with The Next Platform about future generations of AmpereOne. He told the site that an updated chip, with 12 memory channels and an A2 core with improved performance, would be out later this year in keeping with the company's roadmap. This chip, which The Next Platform calls AmpereOne-2, will reportedly have a 33 percent increase in DDR5 memory controllers and up to 50 percent more memory bandwidth.

However, what's coming up beyond that, at some point in 2025, sounds the most exciting. The Next Platform says the third-generation chip, AmpereOne-3 as it is calling it, will have 256 cores and be "etched in 3 nanometer (3N to be precise) processes from TSMC". It will use a modified A2+ core with a "two-chiplet design on the cores, with 128 cores per chiplet. It could be a four-chiplet design with 64 cores per chiplet." The site expects the AmpereOne-3 to support PCI-Express 6.0 I/O controllers and maybe have a dozen DDR5 memory controllers, although there's some speculation here.

"We have been moving pretty fast on the compute side," Wittich told the site. "This design has got a lot of other cloud features in it – things around performance management to get the most out of all of those cores. In each of the chip releases, we are going to be making what would generally be considered generational changes in the CPU core. We are adding a lot in every single generation. So you are going to see more performance, a lot more efficiency, a lot more features like security enhancements, which all happen at the microarchitecture level. But we have done a lot to ensure that you get great performance consistency across all of the AmpereOnes. We are also taking a chiplet approach with this 256-core design, which is another step as well. Chiplets are a pretty big part of our overall strategy."

The AmpereOne-3 is reportedly being etched at TSMC right now, prior to its launch next year.

More from TechRadar Pro
- How Ampere Computing plans to ride the AI wave
- Ampere's new workstation could bring in a whole new dawn for developers
- Plucky CPU maker beats AMD and Intel to become first to offer 320 cores per server

View the full article
Tagged with: chipmakers, cpus (and 5 more)
Apple is once again talking with OpenAI about using OpenAI technology to power artificial intelligence features in iOS 18, reports Bloomberg's Mark Gurman. Apple held talks with OpenAI earlier in the year, but nothing came of the discussions. Apple and OpenAI are now said to be speaking about the terms of a possible agreement and how Apple might utilize OpenAI features.

Along with OpenAI, Apple is still having discussions with Google about licensing Google's Gemini AI. Apple has not come to a final decision, and Gurman suggests that the company could partner with both Google and OpenAI or pick another provider entirely.

Rumors suggest that iOS 18 will have a major focus on AI, with Apple set to introduce AI functionality across the operating system. Apple CEO Tim Cook confirmed in February that Apple plans to "break new ground" in AI. We'll get a first look at the AI features that Apple has planned in just over a month, with iOS 18 set to debut at the Worldwide Developers Conference that kicks off on June 10.

Related Roundup: iOS 18. Tag: Apple GPT. This article, "Apple Reignites Talks With OpenAI About Generative AI for iOS 18" first appeared on MacRumors.com. View the full article
A new interview with the director behind the viral Sora clip Air Head has revealed that AI played a smaller part in its production than was originally claimed. In an interview with Fxguide, Patrick Cederberg (who did the post-production for the viral video) confirmed that OpenAI's text-to-video program was far from the only force involved in its production. The 1-minute and 21-second clip was made with a combination of traditional filmmaking techniques and post-production editing to achieve the look of the final picture.

Air Head was made by ShyKids and tells the short story of a man with a literal balloon for a head. While human voiceover is used, the way OpenAI pushed the clip on social channels such as YouTube certainly left the impression that the visuals were purely powered by AI, but that's not entirely true. As revealed in the behind-the-scenes clip, a ton of work was done by ShyKids, who took the raw output from Sora and helped to clean it up into the finished product. This included manually rotoscoping the backgrounds, removing the faces that would occasionally appear on the balloons, and color correcting.

Then there's the fact that Sora takes a ton of time to actually get things right. Cederberg explains that there were "hundreds of generations at 10 to 20 seconds a piece", which were then tightly edited in what the team described as a "300:1" ratio of what was generated versus what was primed for further touch-ups. Such manual work also included editing out the head, which would appear and reappear, and even changing the color of the balloon itself, which would appear red instead of yellow.

While Sora was used to generate the initial imagery with good results, there was clearly a lot more happening behind the scenes to make the finished product look as good as it does, so we're still a long way out from instantly generated movie-quality productions. Sora remains tightly under wraps save for a handful of carefully curated projects that have been allowed to surface, with Air Head among the most popular. The clip has over 120,000 views at the time of writing, with OpenAI touting it as "experimentation" with the program, downplaying the obvious work that went into the final product.

Sora is impressive but we're not convinced

While OpenAI has done a decent job of showcasing what its text-to-video service can do through the large language model, the lack of transparency is worrying. Air Head is an impressive clip by a talented team, but it was subject to a ton of editing to get the final product to where it is in the short. It's not quite the one-click-and-you're-done approach that many of the tech's boosters have represented it as. It turns out that it is merely a tool that can be used to enhance imagery rather than create it from scratch, which is something that is already common enough in video production, making Sora seem less revolutionary than it first appeared.

You may also like
- OpenAI's new Sora video is an FPV drone ride through the strangest TED Talk you've ever seen – and I need to lie down
- OpenAI's Sora just made its first music video and it's like a psychedelic trip
- What is OpenAI's Sora?

View the full article
We're excited to announce the Databricks Generative AI Hackathon winners. This hackathon drew hundreds of data and AI practitioners spanning 60 invited companies... View the full article
Tagged with: databricks, genai (and 1 more)
Back in February, Samsung Mobile boss TM Roh teased that more Galaxy AI features are on the horizon for compatible Galaxy devices, and now we've got a better idea of what one of these new Galaxy AI features might be. According to serial Samsung leaker Ice Universe, "the key functional innovation of One UI 6.1.1 will be video AI."

One UI 6.1.1 refers to the next major Samsung software update, and while Ice Universe doesn't elaborate on what "video AI" means, specifically, there's a good chance that the term refers to either a generative AI-powered video editing tool, or some form of AI-powered video shooting assistance.

Ice Universe's claim comes just hours after Samsung's official X account again confirmed that new Galaxy AI features are in development: "Our collaboration with Google continues [...] Exciting things are coming up for the future of AI-powered Android and Galaxy experiences," the company writes in a new post.

Our collaboration with @Google continues as we work towards a shared vision of delivering the best Android ecosystem of products and services. Exciting things are coming up for the future of AI-powered Android and Galaxy experiences. https://t.co/QNvFEiSq9u (April 25, 2024)

At present, Samsung's suite of Galaxy AI features includes Generative Edit, which lets you resize, remove or reposition objects in an image, and Instant Slow-Mo, which uses AI-powered frame generation to let you turn almost any regular video into a slow-motion video. Might this mystery "video AI" feature build on those creative tools by letting you retroactively edit the composition of videos? Or perhaps Samsung is preparing to roll out a full-blown text-to-video generator à la OpenAI's Sora.

Generative Edit lets you resize, remove or reposition objects in an image (Image credit: Samsung)

We won't know for sure until Samsung confirms more details, but the company could use its upcoming Galaxy Unpacked event to showcase this rumored One UI 6.1.1 feature (since One UI 6.1 was unveiled at Samsung's previous Galaxy Unpacked event in January). The latest leaks suggest that the next Galaxy Unpacked event will take place on July 10, so hopefully we don't have too long to wait.

In any case, Samsung's assertion that new Galaxy AI features are on the way will come as a welcome reminder to Samsung Galaxy S24 owners of their new phones' longevity. Samsung is committing to seven generations of OS updates and seven years of security updates for every phone in the Galaxy S24 line, but it's exciting to hear that these phones will continue to be improved, rather than just maintained.

Perhaps this mystery "video AI" feature will come to a handful of previous-generation Galaxy phones, too. Samsung Galaxy S23 phones received every Galaxy AI feature two months after they debuted on the Galaxy S24 line, so we're inclined to believe that the same will be true of any new Galaxy AI features. For a device-by-device breakdown of the current state of Galaxy AI feature compatibility, check out our Samsung Galaxy AI compatibility explainer.

You might also like...
- Samsung Galaxy S21 phones are getting two Galaxy AI features soon
- Samsung's first budget foldable could cost less than the Galaxy S24
- Samsung shares fix for Galaxy S23 One UI 6.1 touchscreen issue

View the full article
Tagged with: samsung galaxy, ai (and 1 more)
The RSA Conference 2024 will kick off on May 6. Known as the "Oscars of Cybersecurity," the RSAC Innovation Sandbox has become a benchmark for innovation in the cybersecurity industry. Let's focus on the new hotspots in cybersecurity and understand the new trends in security development. Today, let's get to know Harmonic Security. Introduction of […] The post RSAC 2024 Innovation Sandbox | The Future Frontline: Harmonic Security's Data Protection in the AI Era appeared first on NSFOCUS, Inc. and on Security Boulevard. View the full article
Five years after LPDDR5 was first introduced, and a matter of months before JEDEC finalizes the LPDDR6 standard, Samsung has announced a new, faster version of its LPDDR5X DRAM. When the South Korean tech giant debuted LPDDR5X, its natural successor to LPDDR5, back in October 2022, it ran at a nippy 8.5Gbps. This new chip runs at 10.7Gbps, over 11% faster than the 9.6Gbps LPDDR5T variant offered by its archrival, SK Hynix.

Samsung is building its new chips on a 12nm-class process, which means the new DRAM isn't only faster, but much smaller too – the smallest chip size for any LPDDR, in fact – making it ideal for use in on-device AI applications.

Improved power efficiency

"As demand for low-power, high-performance memory increases, LPDDR DRAM is expected to expand its applications from mainly mobile to other areas that traditionally require higher performance and reliability such as PCs, accelerators, servers and automobiles," said YongCheol Bae, Executive Vice President of Memory Product Planning of the Memory Business at Samsung Electronics. "Samsung will continue to innovate and deliver optimized products for the upcoming on-device AI era through close collaboration with customers."

Samsung's 10.7Gbps LPDDR5X boosts performance by over 25% and increases capacity by upward of 30%, compared to LPDDR5. Samsung says it also elevates the single package capacity of mobile DRAM to 32GB. LPDDR5X offers several power-saving technologies, which bolster power efficiency by 25% and allow the chip to enter low-power mode for extended periods.

Samsung intends to begin mass production of the 10.7Gbps LPDDR5X DRAM in the second half of this year, upon successful verification with mobile application processor (AP) and mobile device providers.

More from TechRadar Pro
- Samsung to showcase the world's fastest GDDR7 memory
- Samsung is going after Nvidia's billions with new AI chip
- Scientists inch closer to holy grail of memory breakthrough

View the full article
In an attempt to make its users aware, Microsoft will overlay a watermark on Windows 11 24H2 PCs whose CPUs do not support SSE 4.2 instructions, which the OS's native apps use for AI. View the full article
Tagged with: windows 11, microsoft (and 1 more)
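If you want to check whether a given machine's CPU exposes SSE 4.2 before upgrading, the flag is easy to inspect. A quick sketch for Linux only (/proc/cpuinfo doesn't exist on Windows, where you would query CPUID through a library such as py-cpuinfo instead):

```python
# Check for SSE 4.2 support by reading the Linux kernel's CPU flags.
# On Windows you'd query CPUID instead, e.g. via the py-cpuinfo package.
def has_sse42() -> bool:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return "sse4_2" in line.split()
    return False

print("SSE 4.2 supported:", has_sse42())
```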
Apple has released a set of several new AI models that are designed to run locally on-device rather than in the cloud, possibly paving the way for an AI-powered iOS 18 in the not-too-distant future.

The iPhone giant has been doubling down on AI in recent months, with a carefully split focus across cloud-based and on-device AI. We saw leaks earlier this week indicating that Apple plans to make its own AI server chips, so this reveal of new local large language models (LLMs) demonstrates that the company is committed to both breeds of AI software. I'll dig into the implications of that further down, but for now, let's explain exactly what these new models are.

The suite of AI tools contains eight distinct models, called OpenELMs (Open-source Efficient Language Models). As the name suggests, these models are fully open-source and available on the Hugging Face Hub, an online community for AI developers and enthusiasts. Apple also published a whitepaper outlining the new models. Four were pre-trained on CoreNet (previously CVNets), a massive library of data used for training AI language models, while the other four have been 'instruction-tuned' by Apple; a process by which an AI model's learning parameters are carefully honed to respond to specific prompts.

Releasing open-source software is a somewhat unusual move for Apple, which typically retains quite a close grip on its software ecosystem. The company claims to want to "empower and enrich" public AI research by releasing the OpenELMs to the wider AI community.

So what does this actually mean for users?

Apple has been seriously committed to AI recently, which is good to see as the competition is fierce in both the phone and laptop arenas, with stuff like the Google Pixel 8's AI-powered Tensor chip and Qualcomm's latest AI chip coming to Surface devices. By putting its new on-device AI models out to the world like this, Apple is likely hoping that some enterprising developers will help iron out the kinks and ultimately improve the software - something that could prove vital if it plans to implement new local AI tools in future versions of iOS and macOS.

It's worth bearing in mind that the average Apple device is already packed with AI capabilities, with the Apple Neural Engine found on the company's A- and M-series chips powering features such as Face ID and Animoji. The upcoming M4 chip for Mac systems also appears to sport new AI-related processing capabilities, something that's swiftly becoming a necessity as more-established professional software implements machine-learning tools (like Firefly in Adobe Photoshop). In other words, we can probably expect AI to be the hot-button topic for iOS 18 and macOS 15. I just hope it's used for clever and unique new features, rather than Microsoft's constant Copilot nagging.

You might also like...
- iOS 18 might break the iPhone's iconic app grid, and it's a change no one asked for
- The latest iOS 17.5 beta gives iPhone users in the EU a new way to download apps
- This neat iPhone camera trick will let you take pictures using nothing but your voice

View the full article
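Because the OpenELM checkpoints are public on the Hugging Face Hub, trying one locally is a short script. A hedged sketch using the transformers library: the model ID matches Apple's Hub release, `trust_remote_code` is required because OpenELM ships custom modeling code, and the tokenizer line is an assumption, since Apple pairs OpenELM with a Llama-family tokenizer whose repo is gated and may need separate access:

```python
# Hedged sketch: loading Apple's smallest OpenELM model from the Hugging
# Face Hub. trust_remote_code=True is needed for OpenELM's custom code.
# The tokenizer choice is an assumption (Apple pairs OpenELM with a
# Llama-family tokenizer, which is gated); adapt as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```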
AI's content-creation capabilities have skyrocketed in the last year, yet the act of writing remains incredibly personal. When AI is used to help people communicate, respecting the original intent of a message is of paramount importance—but recent innovation, particularly in generative AI, has outpaced existing approaches to delivering responsible writing assistance.

When thinking about safety and fairness in the context of AI writing systems, researchers and industry professionals usually focus on identifying toxic language like derogatory terms or profanity and preventing it from appearing to users. This is an essential step toward making models safer and ensuring they don't produce the worst of the worst content. But on its own, this isn't enough to make a model safe. What if a model produces content that is entirely innocuous in isolation but becomes offensive in particular contexts? A saying like "Look on the bright side" might be positive in the context of a minor inconvenience yet outrageously offensive in the context of war.

As AI developers, it's not enough for us to block toxic language to claim our models are safe. To actually deliver responsible AI products, we must understand how our models work, what their flaws are, and what contexts they might be used in—and we must put controls in place to prevent harmful interactions between our AI systems and our users.

The problem and why it matters

According to a Forrester study, 70 percent of people use generative AI for most or all of their writing and editing at work. With this rise in the use of generative AI tools, more content than ever before is regularly interacting with AI, machine learning (ML), and large language models (LLMs). And we know that AI makes mistakes.

Typically, when an AI model makes a suggestion that changes the meaning of a sentence, it's a harmless error—it can simply be rejected. This gets more complicated as technology advances and as developers rely more on LLMs. For instance, if an LLM is prone to political bias, it might not be responsible to allow it to generate political reporting. If it's prone to misinformation and hallucination, it may be dangerous and unethical to allow it to generate medical advice and diagnoses. The stakes of inappropriate outputs are much higher, with harmless errors no longer the only outcome.

A way forward

The industry must develop new tactics for safety efforts to keep up with the capabilities—and flaws—of the latest AI models. I previously mentioned a few circumstances in which blocking toxic language is not enough to prevent dangerous interactions between AI systems and our users in today's ecosystem. When we take the time to explore how our models work, their weaknesses, and the contexts they will be used in, we can deliver responsible support in those examples and more:

- A generative AI writing tool can draft a summary of a medical diagnosis. However, given the risk of inserting misleading or out-of-context information, we can prevent the LLM from returning inaccurate information by using the right ML model as a guardrail.
- Political opinions are nuanced, and an AI product's suggestion or output can easily misconstrue the integrity of a point since it doesn't understand the intent or context. Here again, a carefully crafted model may prevent an LLM from engaging with some political topics in cases where there is a risk of misinformation or bias.
- If you're writing a condolence note to a coworker, a model can prevent an AI writing assistant from making a tone-deaf suggestion to sound more positive.

One example of a mechanism that can help deliver results like these is Seismograph—the first model of its kind that can be layered on top of large language models and proprietary machine learning models to mitigate the likelihood of dicey outputs. Much as a seismograph machine measures earthquake waves, Seismograph technology detects and measures how sensitive a text is so models know how to engage, minimizing the negative impact on customers. (A minimal sketch of this layered pattern follows at the end of this article.)

Seismograph is just one example of how a hybrid approach to building—with LLMs, ML, and AI models working together—creates more trustworthy and reliable AI products. By reducing the odds of AI delivering adverse content without appropriate context, the industry can provide AI communication assistance from a place of empathy and responsibility.

The future of responsible AI

When AI communication tools were primarily limited to the basic mechanics of writing, the potential damage done by a writing suggestion was minimal regardless of the context. Today, we rely on AI to take on more complex writing tasks where context matters, so AI providers have a greater responsibility to ensure their technology doesn't have unintended consequences. Product builders can follow these three principles to hold themselves accountable:

1. Test for weak spots in your product: Red teaming, bias and fairness evaluations, and other pressure tests can uncover vulnerabilities before they significantly impact customers.
2. Identify industry-wide solutions that make building responsible AI easier and more accessible: Developments in responsible approaches help us all improve the quality of our products and strengthen consumer trust in AI technology.
3. Embed Responsible AI teams across product development: This work can fall through the cracks if no one is explicitly responsible for ensuring models are safe. Companies must prioritize Responsible AI teams and empower them to play a central role in building new features and maintaining existing ones.

These principles can guide the industry's work and commitment to developing publicly available models like Seismograph. In doing so, we demonstrate that the industry can stay ahead of risk and provide people with more complex suggestions and generated outputs—without causing harm.

We've featured the best AI chatbot for business.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

View the full article
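The guardrail pattern the article describes, where a small sensitivity model screens the writing context before an LLM suggestion is surfaced, reduces to a simple pipeline shape. A minimal sketch: `classify_sensitivity` and the 0.7 threshold here are hypothetical stand-ins, not Seismograph's actual interface, which is not public in this form:

```python
# Minimal sketch of the guardrail pattern described in the article: a
# small sensitivity scorer screens the writing context before an LLM
# suggestion is shown. classify_sensitivity() and the 0.7 threshold are
# hypothetical stand-ins, not Seismograph's real interface.
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    shown: bool
    reason: str

def classify_sensitivity(context: str) -> float:
    """Stand-in scorer; a real system would run a trained ML classifier."""
    sensitive_markers = {"condolence", "diagnosis", "war", "funeral"}
    hits = sum(word in context.lower() for word in sensitive_markers)
    return min(1.0, hits / 2)

def guarded_suggest(context: str, llm_suggestion: str) -> Suggestion:
    score = classify_sensitivity(context)
    if score >= 0.7:  # hypothetical threshold for sensitive contexts
        return Suggestion(llm_suggestion, shown=False,
                          reason=f"suppressed (sensitivity={score:.2f})")
    return Suggestion(llm_suggestion, shown=True, reason="ok")

# A tone-deaf "sound more positive" suggestion is held back in a
# condolence-note context, echoing the article's own example.
print(guarded_suggest("a condolence note after a funeral",
                      "Look on the bright side!"))
```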