Showing results for tags 'ai'.

  1. Ampere Computing unveiled its AmpereOne Family of processors last year, boasting up to 192 single-threaded Ampere cores, which was the highest in the industry. These chips, designed for cloud efficiency and performance, were Ampere's first product based on its new custom core leveraging internal IP, signalling a shift in the sector, according to CEO Renée James. At the time of the launch, James said, "Every few decades of compute there has emerged a driving application or use of performance that sets a new bar of what is required of performance. The current driving uses are AI and connected everything combined with our continued use and desire for streaming media. We cannot continue to use power as a proxy for performance in the data center. At Ampere, we design our products to maximize performance at a sustainable power, so we can continue to drive the future of the industry."
AmpereOne-3 on its way
Jeff Wittich, chief product officer at Ampere, recently spoke with The Next Platform about future generations of AmpereOne. He told the site that an updated chip, with 12 memory channels and an A2 core with improved performance, would be out later this year, in keeping with the company's roadmap. This chip, which The Next Platform calls AmpereOne-2, will reportedly have a 33 percent increase in DDR5 memory controllers and up to 50 percent more memory bandwidth. However, what's coming beyond that, at some point in 2025, sounds the most exciting. The Next Platform says the third-generation chip, AmpereOne-3 as it is calling it, will have 256 cores and be "etched in 3 nanometer (3N to be precise) processes from TSMC". It will use a modified A2+ core with a "two-chiplet design on the cores, with 128 cores per chiplet. It could be a four-chiplet design with 64 cores per chiplet." The site expects the AmpereOne-3 will support PCI-Express 6.0 I/O controllers and maybe have a dozen DDR5 memory controllers, although there's some speculation here. "We have been moving pretty fast on the compute side," Wittich told the site. "This design has got a lot of other cloud features in it – things around performance management to get the most out of all of those cores. In each of the chip releases, we are going to be making what would generally be considered generational changes in the CPU core. We are adding a lot in every single generation. So you are going to see more performance, a lot more efficiency, a lot more features like security enhancements, which all happen at the microarchitecture level. But we have done a lot to ensure that you get great performance consistency across all of the AmpereOnes. We are also taking a chiplet approach with this 256-core design, which is another step as well. Chiplets are a pretty big part of our overall strategy." The AmpereOne-3 is reportedly being etched at TSMC right now, prior to its launch next year.
More from TechRadar Pro: How Ampere Computing plans to ride the AI wave · Ampere's new workstation could bring in a whole new dawn for developers · Plucky CPU maker beats AMD and Intel to become first to offer 320 cores per server
View the full article
  2. Apple is once again talking with OpenAI about using OpenAI technology to power artificial intelligence features in iOS 18, reports Bloomberg's Mark Gurman. Apple held talks with OpenAI earlier in the year, but nothing had come of the discussions. Apple and OpenAI are now said to be speaking about the terms of a possible agreement and how Apple might utilize OpenAI features. Along with OpenAI, Apple is still having discussions with Google about licensing Google's Gemini AI. Apple has not come to a final decision, and Gurman suggests that the company could partner with both Google and OpenAI or pick another provider entirely. Rumors suggest that iOS 18 will have a major focus on AI, with Apple set to introduce AI functionality across the operating system. Apple CEO Tim Cook confirmed in February that Apple plans to "break new ground" in AI. We'll get a first look at the AI features that Apple has planned in just over a month, with iOS 18 set to debut at the Worldwide Developers Conference that kicks off on June 10. This article, "Apple Reignites Talks With OpenAI About Generative AI for iOS 18", first appeared on MacRumors.com. View the full article
  3. The RSA Conference 2024 will kick off on May 6. Known as the "Oscars of Cybersecurity," the RSAC Innovation Sandbox has become a benchmark for innovation in the cybersecurity industry. Let's focus on the new hotspots in cybersecurity and understand the new trends in security development. Today, let's get to know Harmonic Security. Introduction of […] The post RSAC 2024 Innovation Sandbox | The Future Frontline: Harmonic Security's Data Protection in the AI Era appeared first on NSFOCUS, Inc. and was republished on Security Boulevard. View the full article
  4. Five years after LPDDR5 was first introduced, and a matter of months before JEDEC finalizes the LPDDR6 standard, Samsung has announced a new, faster version of its LPDDR5X DRAM. When the South Korean tech giant debuted LPDDR5X back in October 2022, its natural successor to LPDDR5 ran at a nippy 8.5Gbps. This new chip runs at 10.7Gbps, over 11% faster than the 9.6Gbps LPDDR5T variant offered by its archrival, SK Hynix. Samsung is building its new chips on a 12nm-class process, which means the new DRAM isn't only faster but much smaller too – the smallest chip size for any LPDDR, in fact – making it ideal for use in on-device AI applications.
Improved power efficiency
"As demand for low-power, high-performance memory increases, LPDDR DRAM is expected to expand its applications from mainly mobile to other areas that traditionally require higher performance and reliability such as PCs, accelerators, servers and automobiles," said YongCheol Bae, Executive Vice President of Memory Product Planning of the Memory Business at Samsung Electronics. "Samsung will continue to innovate and deliver optimized products for the upcoming on-device AI era through close collaboration with customers." Samsung's 10.7Gbps LPDDR5X boosts performance by over 25% and increases capacity by upward of 30%, compared to LPDDR5. Samsung says it also elevates the single package capacity of mobile DRAM to 32GB. LPDDR5X offers several power-saving technologies, which bolster power efficiency by 25% and allow the chip to enter low-power mode for extended periods. Samsung intends to begin mass production of the 10.7Gbps LPDDR5X DRAM in the second half of this year upon successful verification with mobile application processor (AP) and mobile device providers.
More from TechRadar Pro: Samsung to showcase the world's fastest GDDR7 memory · Samsung is going after Nvidia's billions with new AI chip · Scientists inch closer to holy grail of memory breakthrough
View the full article
  5. In an attempt to make its users aware, Microsoft will be overlaying a watermark in Windows 11 24H2 on PCs whose CPUs lack support for the SSE 4.2 instructions that the operating system's native AI apps rely on. View the full article
  6. Apple has released a set of several new AI models that are designed to run locally on-device rather than in the cloud, possibly paving the way for an AI-powered iOS 18 in the not-too-distant future. The iPhone giant has been doubling down on AI in recent months, with a carefully split focus across cloud-based and on-device AI. We saw leaks earlier this week indicating that Apple plans to make its own AI server chips, so this reveal of new local large language models (LLMs) demonstrates that the company is committed to both breeds of AI software. I'll dig into the implications of that further down, but for now, let's explain exactly what these new models are. The suite of AI tools contains eight distinct models, called OpenELMs (Open-source Efficient Language Models). As the name suggests, these models are fully open-source and available on the Hugging Face Hub, an online community for AI developers and enthusiasts. Apple also published a whitepaper outlining the new models. Four were pre-trained on CoreNet (previously CVNets), a massive library of data used for training AI language models, while the other four have been 'instruction-tuned' by Apple; a process by which an AI model's learning parameters are carefully honed to respond to specific prompts. Releasing open-source software is a somewhat unusual move for Apple, which typically retains quite a close grip on its software ecosystem. The company claims to want to "empower and enrich" public AI research by releasing the OpenELMs to the wider AI community. So what does this actually mean for users? Apple has been seriously committed to AI recently, which is good to see as the competition is fierce in both the phone and laptop arenas, with stuff like the Google Pixel 8's AI-powered Tensor chip and Qualcomm's latest AI chip coming to Surface devices. By putting its new on-device AI models out to the world like this, Apple is likely hoping that some enterprising developers will help iron out the kinks and ultimately improve the software - something that could prove vital if it plans to implement new local AI tools in future versions of iOS and macOS. It's worth bearing in mind that the average Apple device is already packed with AI capabilities, with the Apple Neural Engine found on the company's A- and M-series chips powering features such as Face ID and Animoji. The upcoming M4 chip for Mac systems also appears to sport new AI-related processing capabilities, something that's swiftly becoming a necessity as more-established professional software implements machine-learning tools (like Firefly in Adobe Photoshop). In other words, we can probably expect AI to be the hot-button topic for iOS 18 and macOS 15. I just hope it's used for clever and unique new features, rather than Microsoft's constant Copilot nagging.
You might also like: iOS 18 might break the iPhone's iconic app grid, and it's a change no one asked for · The latest iOS 17.5 beta gives iPhone users in the EU a new way to download apps · This neat iPhone camera trick will let you take pictures using nothing but your voice
View the full article
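If you want to poke at the OpenELM release described above, the checkpoints are published on the Hugging Face Hub. Below is a minimal sketch of loading one with the Hugging Face transformers library; the exact model ID and the Llama-2 tokenizer pairing are assumptions based on the release naming, not details confirmed by the article, so check the Hub model cards before relying on them.

# Sketch: loading an OpenELM checkpoint from the Hugging Face Hub.
# Assumptions: the model ID below matches Apple's published naming, and the
# release reuses a Llama-2 tokenizer (which is gated and needs Hub access).
# OpenELM ships custom modeling code, hence trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"  # 450M, 1.1B, and 3B variants were also released
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))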
  7. Nvidia CEO Jensen Huang hand-delivered the world's first DGX H200 computer to OpenAI's CEO and president, continuing a trend of connecting OpenAI with bleeding edge AI compute power. View the full article
  8. Sir Walter Richardson is using a Raspberry Pi to power his AI-based robot that follows runners, shouting messages of encouragement or discouragement depending on their performance. View the full article
  9. AI’s content-creation capabilities have skyrocketed in the last year, yet the act of writing remains incredibly personal. When AI is used to help people communicate, respecting the original intent of a message is of paramount importance—but recent innovation, particularly in generative AI, has outpaced existing approaches to delivering responsible writing assistance. When thinking about safety and fairness in the context of AI writing systems, researchers and industry professionals usually focus on identifying toxic language like derogatory terms or profanity and preventing it from appearing to users. This is an essential step toward making models safer and ensuring they don’t produce the worst of the worst content. But on its own, this isn’t enough to make a model safe. What if a model produces content that is entirely innocuous in isolation but becomes offensive in particular contexts? A saying like “Look on the bright side” might be positive in the context of a minor inconvenience yet outrageously offensive in the context of war. As AI developers, it’s not enough for us to block toxic language to claim our models are safe. To actually deliver responsible AI products, we must understand how our models work, what their flaws are, and what contexts they might be used in—and we must put controls in place to prevent harmful interactions between our AI systems and our users.
The problem and why it matters
According to a Forrester study, 70 percent of people use generative AI for most or all of their writing and editing at work. With this rise in the use of generative AI tools, more content than ever before is regularly interacting with AI, machine learning (ML), and large language models (LLMs). And we know that AI makes mistakes. Typically, when an AI model makes a suggestion that changes the meaning of a sentence, it’s a harmless error—it can simply be rejected. This gets more complicated as technology advances and as developers rely more on LLMs. For instance, if an LLM is prone to political bias, it might not be responsible to allow it to generate political reporting. If it’s prone to misinformation and hallucination, it may be dangerous and unethical to allow it to generate medical advice and diagnoses. The stakes of inappropriate outputs are much higher, with harmless errors no longer the only outcome.
A way forward
The industry must develop new tactics for safety efforts to keep up with the capabilities—and flaws—of the latest AI models. I previously mentioned a few circumstances in which blocking toxic language is not enough to prevent dangerous interactions between AI systems and our users in today’s ecosystem. When we take the time to explore how our models work, their weaknesses, and the contexts they will be used in, we can deliver responsible support in those examples and more:
• A generative AI writing tool can draft a summary of a medical diagnosis. However, given the risk of inserting misleading or out-of-context information, we can prevent the LLM from returning inaccurate information by using the right ML model as a guardrail.
• Political opinions are nuanced, and an AI product’s suggestion or output can easily misconstrue the integrity of a point since it doesn’t understand the intent or context. Here again, a carefully crafted model may prevent an LLM from engaging with some political topics in cases where there is a risk of misinformation or bias.
• If you’re writing a condolence note to a coworker, a model can prevent an AI writing assistant from making a tone-deaf suggestion to sound more positive.
One example of a mechanism that can help deliver results like these is Seismograph—the first model of its kind that can be layered on top of large language models and proprietary machine learning models to mitigate the likelihood of dicey outputs. Much as a seismograph machine measures earthquake waves, Seismograph technology detects and measures how sensitive a text is so models know how to engage, minimizing the negative impact on customers. Seismograph is just one example of how a hybrid approach to building—with LLMs, ML, and AI models working together—creates more trustworthy and reliable AI products. By reducing the odds of AI delivering adverse content without appropriate context, the industry can provide AI communication assistance from a place of empathy and responsibility.
The future of responsible AI
When AI communication tools were primarily limited to the basic mechanics of writing, the potential damage done by a writing suggestion was minimal regardless of the context. Today, we rely on AI to take on more complex writing tasks where context matters, so AI providers have a greater responsibility to ensure their technology doesn’t have unintended consequences. Product builders can follow these three principles to hold themselves accountable:
1. Test for weak spots in your product: Red teaming, bias and fairness evaluations, and other pressure tests can uncover vulnerabilities before they significantly impact customers.
2. Identify industry-wide solutions that make building responsible AI easier and more accessible: Developments in responsible approaches help us all improve the quality of our products and strengthen consumer trust in AI technology.
3. Embed Responsible AI teams across product development: This work can fall through the cracks if no one is explicitly responsible for ensuring models are safe. Companies must prioritize Responsible AI teams and empower them to play a central role in building new features and maintaining existing ones.
These principles can guide the industry's work and commitment to developing publicly available models like Seismograph. In doing so, we demonstrate that the industry can stay ahead of risk and provide people with more complex suggestions and generated outputs—without causing harm. We've featured the best AI chatbot for business. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
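The guardrail pattern this piece describes, in which a smaller model vets an LLM's suggestions against the surrounding context, can be illustrated in a few lines. The sketch below is purely hypothetical (Seismograph is not a public API, and a real sensitivity scorer would be a trained classifier rather than a keyword list); it only shows the shape of the gating logic.

# Toy sketch of a context-sensitivity guardrail layered over an AI writing
# assistant. All names are hypothetical; the lexicon stands in for a model.
from dataclasses import dataclass

SENSITIVE_TERMS = {"condolence", "funeral", "layoff", "diagnosis", "war"}

@dataclass
class Suggestion:
    text: str
    kind: str  # e.g. "tone_positive" or "grammar"

def sensitivity_score(context: str) -> float:
    # Crude stand-in for a trained model: fraction of flagged words.
    words = [w.strip(".,;!?").lower() for w in context.split()]
    return sum(w in SENSITIVE_TERMS for w in words) / max(len(words), 1)

def filter_suggestions(context: str, suggestions: list) -> list:
    # Grammar fixes are always safe; tone rewrites are gated on context.
    if sensitivity_score(context) > 0.02:
        return [s for s in suggestions if s.kind == "grammar"]
    return suggestions

suggestions = [Suggestion("Sound more upbeat!", "tone_positive"),
               Suggestion("'recieve' -> 'receive'", "grammar")]
print(filter_suggestions("So sorry for your loss; the funeral is Friday.", suggestions))
# Only the grammar fix survives in a condolence context.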
  10. Only a few months into 2024, experts had already recorded numerous cyber attacks on companies and government institutions - a taste of the technological threats that states, companies and societies will have to prepare for this year. And with the rise of Artificial Intelligence (AI), cyber attacks hold the potential to reach a completely new dimension, with attacks poised to happen faster, more frequently and more effectively as a result of the use of AI. In fact, according to the World Economic Forum's Global Risk Index, lack of cyber security ranked fourth among the greatest risks to humanity, with the number one threat claimed to be the spread of false information or disinformation campaigns.
Cyber attacks and disinformation to both rise
The potential extent, scale, and speed with which technology can be used to disrupt organisations and information flows is unprecedented. AI tools hold the potential to allow bad actors to carry out traditional cyber attacks as well as more effective disinformation campaigns via social media and other platforms, even with limited resources. Large, well-organised groups, often suspected of being nation-state-linked, have used cyberattacks to disrupt everything from business operations to civil infrastructure. This is alongside AI being used for disinformation campaigns, hacktivism, and sabotage. Disinformation currently stands to become an integral part of national conflicts and may affect important elections in different parts of the world. In my view, this year will see more intense and diverse cyberattacks and disinformation campaigns with commercial and economic motives, but also more targeted attacks on individuals, brands, and their reputations.
Ensuring your business is prepared for increased threats
There is no one-size-fits-all defense solution against cyberattacks or disinformation. When developing protection against cyber attacks, organizations and governments should ensure that the 'fundamentals' of cyber hygiene are in place and consistently applied. National and local authorities must focus on strengthening cyber defenses and work closely with experts to ensure they have the right strategies in place to identify, defend against, and prevent cyber threats. This includes a comprehensive and organized exchange of cyber knowledge, carrying out regular testing, the implementation of basic cyber hygiene and the use of powerful security and monitoring tools. Certain organizations should also concentrate on assessing the risk potential of threatened targets, defining which parts of the infrastructure, e.g. financial institutions, industrial capacities, power grids, telecommunications networks, etc., are primarily worth protecting. Organizations must take appropriate security measures and notify relevant national authorities of serious incidents. It's important to note that AI technologies, while complicating the context of cyber threats and disinformation, will also have a more positive role to play in cyber defense. In the coming years, various AI-powered tools will help to identify, assess, triage, and mitigate both traditional cyber attacks and disinformation via real-time automation, meaning anomalies can be managed at a scale and speed that human beings could not manage alone. The fight against disinformation will be particularly challenging, requiring wider education on how attackers work, how to recognize fake information and the steps to take to limit misinformation from spreading.
It will require companies, governments, and individuals to all play a role.
Final words
Cyber attacks aren't going anywhere, and as the technology we use continues to transform, so will the attack landscape in tandem. While it's an ongoing battle, government and industry have proven adept at adapting to protect our IT infrastructures. And with the rise of disinformation campaigns this year, I expect governments, businesses, and citizens to work together effectively to adjust to this new reality, finding ways of overcoming digital disinformation more effectively. We've featured the best online cybersecurity course. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
  11. Recently, there’s been a lot of talk about how Apple is going to infuse its products with artificial intelligence (AI) at its Worldwide Developers Conference (WWDC) in June. But there’s another way the company might be putting AI to good use – and it could help keep your Mac safe from malware and other digital nasties. As spotted by macOS developer and blogger Howard Oakley, Apple has just updated its XProtect anti-malware system with 74 new rules aimed specifically at the Adload adware virus, which hijacks your browser and forces you to visit malicious sites. XProtect is a built-in macOS feature that detects malicious code in third-party apps and prevents them from running, and an update to its definitions is not particularly unusual. But what is unusual is the sheer size of the XProtect update. As Oakley puts it, “developing that many [definitions] by hand would normally take considerable time and effort.” And that raises an interesting question: is Apple using AI to write its antivirus definitions? Oakley certainly thinks it’s a possibility. In the blog post, he suggests that it could be a potential solution to a problem like Adload, which is frequently updated to evade detection, forcing companies like Apple to react rapidly. If Apple is using AI to do the heavy lifting, it might “overwhelm [Adload’s] efforts to evade detection until the malware has been extensively rewritten,” Oakley says.
AI vs malware
There’s been much debate over what the rapid development of generative AI tools like ChatGPT will mean for malware creators and those who are fighting back against them. For some, it might help bad actors more rapidly craft viruses and trojans. For others, it’s an excellent tool for reverse engineering malware and building better defenses against it. Last year, I spoke to a range of cybersecurity experts on this topic. Joshua Long, Chief Security Analyst at antivirus firm Intego, suggested that AI can help to spot zero-day flaws by analyzing code uploaded into its chat window. And Martin Zugec, Technical Solutions Director at Bitdefender, noted: “The majority of novice malware writers are not likely to possess the skills required to bypass [ChatGPT’s] security measures, and therefore the risk posed by chatbot-generated malware remains relatively low at this time.” Whatever the case, it would be surprising if Apple was not at least looking into using AI to help write its antivirus definitions. Malware threats are always evolving, which means defenders need to adapt as quickly as possible to keep them out. With the speed that AI allows, it could become an invaluable tool in the antivirus arsenal. Interestingly, Oakley notes that there are already several AI tools that can write antivirus definitions, but that “Apple doesn’t appear to have made much use of them in the past, at least not on this unprecedented scale.” Given the Adload example, we might soon see AI playing a much more active role in keeping your Mac safe.
You might also like: iOS 18 could bring generative AI to your iPhone in the most Apple way possible · WWDC 2024: AI, iOS 18, and everything we're expecting from Apple's big show · ChatGPT explained – everything you need to know about the AI chatbot
View the full article
  12. Edge, Microsoft’s default web browser in Windows 11, is getting new text editing capabilities, including Copilot-assisted rewriting, improved clipboard functionality, and support for handwritten text in forms and web pages via a stylus. Windows Copilot is the AI assistant that Microsoft has been busy integrating into Windows 11 and various other products, including Microsoft Edge. It was presented as eventually being able to help you with any task on your device, and while it still looks like there’s a way to go before Copilot lives up to that lofty ambition, it is getting there. The new feature, AI Compose, will make rewrite suggestions for text selected by users in editable parts of a web page and can assist writers with possible phrasing improvements and pointers on sentence structure. It’ll also allow users to change the text suggestions’ tone, format, or length. MSPowerUser compares the new functionality to the popular AI-powered writing assistance tool Grammarly. Apparently, this update will make Copilot more competitive with Google’s large language model and AI assistant project, Gemini, which is rumored to bring similar features to Google’s rival Chrome web browser.
Adding support for digital pens and more
Edge will also get support for digital pen writing that will let users write directly in web pages’ input fields, turning their handwriting into text. Microsoft also describes in a blog post that users will be able to make use of Windows Ink support in Edge to do the following with digital pens:
• Enter text by writing with a pen in or near an input field
• Delete text by scribbling over words
• Add or remove spaces by drawing vertical lines in the text
• Add line breaks by drawing horizontal lines
Other text-related updates that are coming to Edge include a new EditContext API tool for web developers that’s intended to simplify the process of creating custom text editors, an enhanced copy-and-paste function that allows users to copy and paste formatted rich HTML content more reliably, and more control for web developers over Edge’s text prediction function. I think this certainly has the potential to be a very helpful addition to Edge, because as Microsoft itself points out, a lot of the web’s success in general is due to its form submission and text editing capabilities. Microsoft has also said it would like feedback to improve the feature, and this is one area where it could take the initiative and actively encourage users to try it out.
You might also like: Microsoft’s Edge browser is now more popular than ever – but why? · Google’s war on adblockers may have broken YouTube for Microsoft Edge users · Microsoft just updated Edge – and completely broke the browser according to some reports
View the full article
  13. Most businesses know that taking responsibility for their environmental and social impact is key for long-term success. But how can they make fully informed decisions when most companies only have visibility into their immediate suppliers? At Prewave, we’re driven by the mission to help companies make their entire supply chains more resilient, transparent, and sustainable. Our end-to-end platform monitors and predicts a wide range of supply chain risks, and AI is the driving force behind its success. Without AI, handling vast volumes of data and extracting meaningful insights from publicly available information would be almost unfathomable at the scale that we do to help our clients. Because of that, Prewave needs a rock-solid technology foundation that is reliable, secure, and highly scalable to continually handle this demand. That’s why we built the Prewave supply chain risk intelligence platform on Google Cloud from inception in 2019. Back then, as a small team, we didn’t want to have to maintain hardware or infrastructure, and Google Cloud managed services stood out for providing reliability, availability, and security while freeing us up to develop our product and focus on Prewave’s mission. A shared concern for sustainability also influenced our decision, and we’re proud to be working with data centers with such a low carbon footprint.
Tracking hundreds of thousands of suppliers
Prewave’s end-to-end platform solves two key challenges for customers: First, it makes supply chains more resilient by identifying disruption risks and developing the appropriate mitigation plans. And second, it makes supply chains more sustainable by detecting and solving ESG risks, such as forced labor or environmental issues. It all starts with our Risk Monitoring capability, which uses AI that was developed by our co-founder Lisa in 2012 during her PhD research. With it, we’re scanning publicly available information in 120+ languages, looking for insights that can indicate early signals of Risk Events for our clients, such as labor unrest, an accident, fire, or 140 other different risk types that can disrupt their supply chain. Based on the resulting insights, clients can take actions on our platform to mitigate the risk, from filing an incident review to arranging an on-site audit. With this information, Prewave also maps our clients’ supply chains from immediate and sub-tier suppliers down to the raw materials’ providers. Having this level of granularity and transparency is now a requirement of new regulations such as the European corporate sustainability due diligence directive (CSDDD), but it can be challenging for our clients to do without help. They usually have hundreds or thousands of suppliers and our platform helps them to know each one, but also to focus attention, when needed, on those with the highest risk. The Prewave platform keeps effort on the supplier’s side as light as possible. They only have to act if potential risk is flagged by our Tier-N Monitoring capability, in which case, we support them to fix issues and raise their standards. Additionally, this level of visibility frees them up from having to manually answer hundreds of questionnaires in order to qualify to do business with more partners. To make all this possible, our engineering teams rely heavily on scalable technology such as Google Kubernetes Engine (GKE) to support our SaaS.
We recently switched from GKE Standard to Autopilot and noticed great results in time efficiency now that we don’t need to ensure that node pools are in place or that all available CPU power is being used appropriately, helping save up to 30% of resources. This also has helped us to reduce costs because we only pay for the deployments we run. We also believe that having the best tools in place is key to delivering the best experience not only to customers but also to our internal teams. So we use Cloud Build and Artifact Registry to experiment, build, and deploy artifacts and manage the Docker containers that we also use for GKE. Meanwhile, Cloud Armor acts as a firewall protecting us against denial of service and web attacks. Because scalability is key for our purposes, the application development and data science teams use Cloud SQL as a database. This is a fully managed service that helps us focus on developing our product, since we don’t have to worry about managing the servers according to demand. Data science teams also use Compute Engine to host our AI implementations as we develop and maintain our own models, and these systems are at the core of Prewave’s daily work.
Helping more businesses improve their deep supply chains
Since 2020, Prewave has grown from three clients to more than 170, our team of 10 has grown to more than 160, and the company’s revenue has multiplied by 100, a significant milestone. We’ve also since then released many new features to our platform, which required us to scale the product alongside scaling the company. With Google Cloud, this wasn’t an issue. We simply extended the resources that the new implementations needed, helping us to gain more visibility at the right time and win new customers. Because our foundation is highly stable and scalable, growing our business has been a smooth ride. Next, Prewave is continuing its expansion plans into Europe that began in 2023, before moving to new markets, such as the US. This is going well and our association with Google Cloud is helping us win the trust of early-stage clients who clearly also trust in its reliability and security. We’re confident that our collaboration with Google Cloud will continue to bring us huge benefits as we help more companies internationally to achieve transparency, resilience, sustainability, and legal compliance along their deep supply chains. View the full article
  14. Plus, Salesforce bundles its AI implementation and data governance services. View the full article
  15. Google Cloud Champion Innovators are a global network of more than 600 non-Google professionals who are technical experts in Google Cloud products and services. Each Champion specializes in one of nine different technical categories, which are cloud AI/ML, data analytics, hybrid multi-cloud, modern architecture, security and networking, serverless app development, storage, Workspace and databases. In this interview series we sit down with Champion Innovators across the world to learn more about their journeys, their technology focus, and what excites them. Today we’re talking to Juan Guillermo Gómez. Currently Technical Lead at Wordbox, Juan Guillermo is a Cloud Architect, Google Developer Expert, Serverless App Development Champion, and community builder who is regularly invited to share his thoughts on software architecture, entrepreneurship, and innovation at events across Latin America.
Natalie Tack (Google Cloud Editorial): What technology area are you most fascinated with, and why?
Juan Guillermo Gómez: As a kid, I dreamed of developing software that would change the world. I've always been fascinated by programming and the idea of creating products and services that make people's lives easier. Nowadays, my focus is on architecture modernization and using innovative new tech to scale and improve systems. Following a Google Cloud hackathon around 10 years ago, I started to take an interest in serverless architecture and became really enthusiastic about Google Cloud serverless computing services, which enable you to focus on coding without worrying about infrastructure. Nowadays, I’m excited about the potential of AI to help us create better, more robust, and more efficient systems, and I'm really looking forward to seeing where that will take us.
NT: As a developer, what’s the best way to go about learning new things?
JGG: There’s a wealth of resources out there, whether it’s YouTube, podcasts or developer blogs. I find the Google Cloud developer blog and YouTube channel particularly instructive when it comes to real use cases. But the most important thing in my opinion is to be part of a community. It enables you to share experiences, collaborate on projects and learn from others with expertise in specific industries. Google Developer Groups, or GDGs, for example, are a great way to network and keep up with the latest developments. Becoming a Google Cloud Champion Innovator has been really beneficial. It enables me to learn directly from Googlers, collaborate around shared problems, and work directly with other people from my field. I can then share that knowledge with my community in Latin America, both as co-organizer of GDG Cali and via my podcast Snippets Tech.
NT: Could you tell us more about your experience as an Innovator?
JGG: I joined the Innovators program in September 2022, and it was a natural fit from the start. The Innovator culture is deeply rooted in that of developer communities, which I’ve participated in for over 20 years now. The core philosophy is sharing, collaboration and learning from others' experiences, not only through getting together at talks and conferences, but also open source software and libraries. Basically, creating things that benefit the community at large: the Innovator mindset is fundamentally collaborative. As a result, the program provides a wealth of information, and one of the biggest benefits from my point of view is the amount of real use cases it gives access to.
I’ve learnt a lot about embedding and semantic similarity that I’m putting into practice in my work as Technical Lead at Wordbox, a startup that helps people learn languages through TV and music.
NT: What are some upcoming trends you’re enthusiastic about?
JGG: Generally speaking, I’m very interested to see where we’ll go with generative AI. I started working with Google Cloud APIs such as Translate and Speech-to-Text around three years ago as part of my work with Wordbox, and I’m impressed with the way Google Cloud democratizes machine learning and AI, allowing anyone to work with it without extensive machine learning knowledge. There are so many potential use cases for gen AI. As a developer, you can work faster, solve programming language challenges, write unit tests, and refer to best practices. As an Innovator, I gained early access to Duet AI for Developers (now called Gemini Code Assist), which is a great tool to fill in knowledge gaps, suggest scripts and help out junior architects or those who are new to Google Cloud - basically helping you focus on creating great code.
NT: Can you tell us about an exciting new project you’re working on?
JGG: The Wordbox language learning app features a collection of short videos with English phrases that we show users based on semantic similarity, where the phrase in the new video is similar to the previous one. To enable that, we use Vertex AI, PaLM 2 and Vector Search, and are keen to explore Gemini models, as they offer advanced capabilities for comparing semantic similarities not only between text, but also between text and video, which would enable us to create stories around specific phrases. For example, if a user is watching a video related to a “Game of Thrones” review series and learning certain expressions from it, we can use Gemini models to find similarities with other videos or texts. This will allow us to produce a narrative around the learned expression, creating a comprehensive learning environment that’s tailored to the user’s proficiency level. The learner can then read or watch the story, answer questions and engage in a more interactive, personalized learning experience. As a side project, I’m also working on an AI-enabled platform that helps musicians and singers create lyrics based on keywords, genre, and context. They input the information, and the platform generates lyrics that hopefully serve as a jumping-off point for a great new piece of music.
NT: What advice would you give to budding innovators?
JGG: The Innovators program can be summed up in three words: networking, learning, and growth. I’d advise anyone interested in becoming an Innovator to be proactive both in learning and in engaging with their fellow developers. It feels pretty great to be given early access to an API and then a few months later tell your community about it while creating fantastic new features for your clients. Take the next steps on your Google Cloud journey and learn more about the Google Cloud Innovators Program, designed to help developers and practitioners grow their Google Cloud skills and advance their careers. No matter where you are on your cloud journey, the Innovators Program has something for you! View the full article
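The Wordbox retrieval flow described above boils down to nearest-neighbor search over embeddings. Here is a minimal sketch of that ranking step; the placeholder embed() stands in for a real embedding model (Vertex AI in Wordbox's case, per the interview), so the similarity scores here only demonstrate the mechanics.

# Sketch: pick the candidate phrase most similar to what the user just watched.
# embed() is a placeholder; in practice you would call a real embedding model
# and index the vectors in a service like Vector Search instead of brute force.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # fake, deterministic per run
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def most_similar(query: str, candidates: list) -> str:
    q = embed(query)
    # Cosine similarity reduces to a dot product on unit-length vectors.
    return max(candidates, key=lambda c: float(q @ embed(c)))

phrases = ["winter is coming", "hold the door", "you know nothing"]
print(most_similar("the cold winds are rising", phrases))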
  16. Consider this scenario: You fire up your Docker containers, hit an API endpoint, and … bam! It fails. Now what? The usual drill involves diving into container logs, scrolling through them to understand the error messages, and spending time looking for clues that will help you understand what’s wrong. But what if you could get a summary of what’s happening in your containers and potential issues with the proposed solutions already provided? In this article, we’ll dive into a solution that solves this issue using AI. AI can already help developers write code, so why not help developers understand their system, too? Signal0ne is a Docker Desktop extension that scans Docker containers’ state and logs in search of problems, analyzes the discovered issues, and outputs insights to help developers debug. We first learned about Signal0ne as the winning submission in the 2023 Docker AI/ML Hackathon, and we’re excited to show you how to use it to debug more efficiently.
Introducing Signal0ne Docker extension: Streamlined debugging for Docker
The magic of the Signal0ne Docker extension is its ability to shorten feedback loops for working with and developing containerized applications. Forget endless log diving — the extension offers a clear and concise summary of what’s happening inside your containers after logs and states are analyzed by an AI agent, pinpointing potential issues and even suggesting solutions. Developing applications these days involves more than a block of code executed in a vacuum. It is a complex system of dependencies and different user flows that need debugging from time to time. AI can help filter out all the system noise and focus on providing data about specific issues in the system so that developers can debug faster and better. Docker Desktop is one of the most popular tools used for local development with a huge community, and Docker features like Docker Debug enhance the community’s ability to quickly debug and resolve issues with their containerized apps. The Signal0ne Docker extension’s suggested solutions and summaries can help you while debugging your container or editing your code so that you can focus on bringing value as a software engineer. The term “developer experience” is often used, but this extension focuses on one crucial aspect: shortening development time. This translates directly to increased productivity, letting you build containerized applications faster and more efficiently.
How does the Docker Desktop extension work?
Between AI co-pilots highly integrated into IDEs that help write code, and browser AI chats that help explain software development concepts in a Q&A way, there is one piece missing: logs and runtime system data. The Signal0ne Docker Desktop extension consists of three components: two hosted on the user’s local system (UI and agent) and one in the Signal0ne cloud backend service. The agent scans the user’s local environment in search of containers with invalid states, runtime issues, or warnings or errors in the logs; after discovering an issue, it collects additional data from the container definition for enhanced analysis. The data is then sent to the backend service, where a combination of a pre-trained LLM and a solution search retrieval service performs the analysis.
The analysis of the issues can be seen from the Signal0ne extension UI, including:
• Short log summary — Outlines what is happening within a particular container; the logs on which the analysis was based can be accessed from the sources dropdown if you wish.
• Solutions summary — One paragraph about possible solutions to the analyzed issue, with some explanation of why this particular solution was proposed.
• Solution sources — Links to websites where the issue has already been solved by the community (for example, a GitHub issue or StackOverflow post).
Watch this short video demonstration for more details:
Technical details
The issue analysis pipeline is powered by the LangChain tooling framework, which utilizes the open source Mistral 7b LLM with LLM-optimized web search that browses sources of existing solutions on the open web (Figure 1).
Figure 1: Signal0ne architecture overview.
The following section shows sample issue analysis pipeline output.
Logs from Docker container:

$ irb
>> "12341:asdf"[/\d+/]
# => "12341"
>> "12341:asdf"[/\d*/]
# => "12341"
>> "12341:asdf"[0..5]
# => "12341:"
>> "12341:asdf"[0...5]
# => "12341"
>> "12341:asdf"[0, ':']
TypeError: can't convert String into Integer
  from (irb):5:in

Summary: The user is trying to extract a substring from the string 12341:asdf using different Ruby methods, such as [/\d+/], [/\d*/], [0..5], [0...5], and [0, ':']. However, when attempting to use the indexing method [0, ':'], an error occurs because the method expects an integer argument but receives a string instead.
Predicted solutions: "To extract a substring from the given string 12341:asdf without encountering an error when using indexing methods, consider the following options:
1. Using regular expressions: To extract a substring using regular expressions, you can use the scan method with a regular expression pattern. For instance, to extract the digits before the colon, you can use the following code snippet:

str = "12341:asdf"
substring = str.scan(/[\d]+/)
puts substring.first    # Output: ["12341"]
puts substring.first[0] # Output: 12341

2. Slicing: Another alternative to indexing is slicing, which allows you to extract a specified range of characters from a string. To extract the digits before the colon using slicing, you can use the following code snippet:

str = "12341:asdf"
substring = str[0...str.index(":")]
puts substring      # Output: "12341"
puts substring.to_i # Output: 12341

These two methods should help you extract the desired substring from the given string without encountering an error."
Getting started
Getting started with the Signal0ne Docker Desktop extension is a straightforward process that allows developers to leverage the benefits of unified development. Here are the steps for installing the Signal0ne Docker extension:
1. Install Docker Desktop.
2. Choose Add Extensions in the left sidebar. The Browse tab will appear by default (Figure 2).
Figure 2: Signal0ne extension installation from the marketplace.
3. In the Filters drop-down, select the Utility tools category.
4. Find Signal0ne and then select Install (Figure 3).
Figure 3: Extension installation process.
5. Log in after the extension is installed (Figure 4).
Figure 4: Signal0ne extension login screen.
6. Start developing your apps, and, if you face some issues while debugging, have a look at the Signal0ne extension UI. The issue analysis will be there to help you with debugging.
Make sure the Signal0ne agent is enabled by toggling it on (Figure 5):
Figure 5: Agent settings tab.
Figure 6 shows the summary and sources:
Figure 6: Overview of the inspected issue.
Proposed solutions and sources are shown in Figures 7 and 8. Solution sources will redirect you to a webpage with the predicted solution:
Figure 7: Overview of proposed solutions to the encountered issue.
Figure 8: Overview of the list of helpful links.
If you want to contribute to the project, you can leave feedback via the Like or Dislike button in the issue analysis output (Figure 9).
Figure 9: You can leave feedback about analysis output for further improvements.
To explore the Signal0ne Docker Desktop extension without using your own containers, consider experimenting with dummy containers using this docker compose file to observe how logs are analyzed and how helpful the output is:

services:
  broken_bulb: # c# application that cannot start properly
    image: 'Signal0neai/broken_bulb:dev'
  faulty_roger: # python api server that calls an unreachable database
    image: 'Signal0neai/faulty_roger:dev'
  smoked_server: # nginx server hosting the website with the misconfiguration
    image: 'Signal0neai/smoked_server:dev'
    ports:
      - '8082:8082'
  invalid_api_call: # python webserver with bug
    image: 'Signal0neai/invalid_api_call:dev'
    ports:
      - '5000:5000'

• broken_bulb: This service uses the image Signal0neai/broken_bulb:dev. It’s a C# application that throws System.NullReferenceException during startup. Thanks to that application, you can observe how Signal0ne discovers the failed container, extracts the error logs, and analyzes them.
• faulty_roger: This service uses the image Signal0neai/faulty_roger:dev. It is a Python API server that is trying to connect to an unreachable database on localhost.
• smoked_server: This service utilizes the image Signal0neai/smoked_server:dev. The smoked_server service is an Nginx instance that throws 403 Forbidden when the user tries to access the root path (http://127.0.0.1:8082/). Signal0ne can help you debug that.
• invalid_api_call: An API service with a bug in one of the endpoints; to generate an error, call http://127.0.0.1:5000/create-table after running the container. Follow the analysis of Signal0ne and try to debug the issue.
Conclusion
Debugging containerized applications can be time-consuming and tedious, often involving endless scrolling through logs and searching for clues to understand the issue. However, with the introduction of the Signal0ne Docker extension, developers can now streamline this process and boost their productivity significantly. By leveraging the power of AI and language models, the extension provides clear and concise summaries of what’s happening inside your containers, pinpoints potential issues, and even suggests solutions. With its user-friendly interface and seamless integration with Docker Desktop, the Signal0ne Docker extension is set to transform how developers debug and develop containerized applications. Whether you’re a seasoned Docker user or just starting your journey with containerized development, this extension offers a valuable tool that can save you countless hours of debugging and help you focus on what matters most — building high-quality applications efficiently. Try the extension in Docker Desktop today, and check out the documentation on GitHub.
Learn more
• Subscribe to the Docker Newsletter.
• Get the latest release of Docker Desktop.
• Vote on what’s next! Check out our public roadmap.
• Have questions? The Docker community is here to help.
• New to Docker? Get started.
View the full article
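For readers curious about the shape of the pipeline the article describes (summarize the failing container's logs, retrieve community fixes, then propose a solution), here is a rough, hypothetical outline. None of this is Signal0ne's actual code: call_llm() and search_web() are placeholder stand-ins for its hosted Mistral 7b model and LLM-optimized web search.

# Hypothetical three-stage log-analysis pipeline, loosely mirroring the
# architecture described above. The two helpers are placeholders to swap
# for real LLM and search clients.
from dataclasses import dataclass

@dataclass
class Analysis:
    summary: str
    predicted_solution: str
    sources: list

def call_llm(prompt: str) -> str:
    return f"[LLM output for: {prompt[:40]}...]"  # placeholder client

def search_web(query: str) -> list:
    return ["https://example.com/similar-issue"]  # placeholder client

def analyze_container(name: str, logs: str, definition: dict) -> Analysis:
    # Stage 1: condense noisy logs into a short problem statement.
    summary = call_llm(f"Summarize the failure in these container logs:\n{logs}")
    # Stage 2: look for places where the community already solved it.
    sources = search_web(summary)
    # Stage 3: ask the model for a fix grounded in the retrieved sources.
    solution = call_llm(
        f"Container: {name}\nDefinition: {definition}\n"
        f"Problem: {summary}\nKnown discussions: {sources}\n"
        "Propose a fix and explain why."
    )
    return Analysis(summary, solution, sources)

print(analyze_container("smoked_server", "403 Forbidden on GET /", {"image": "nginx"}))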
  17. Content creation can be tedious work and takes much of our time. With Generative AI, we can improve the quality and efficiency of our work. View the full article
  18. It looks like Google Maps is getting a cool new feature that’ll make use of generative AI to help you explore your town - grouping different locations to make it easier to find restaurants, specific shops, and cafes. In other words, no more sitting around and mulling over where you want to go today! Android Authority did an APK teardown (which basically means decompiling binary code within a program into a programming language that can be read normally) which hints at some new features on the horizon. The code within the Google Maps beta included mention of generative AI, which led Android Authority to Google Labs. If you’re unfamiliar with Google Labs, it’s a platform where users can experiment with Google’s current in-development tools and AI projects, like Gemini Chrome extensions and music ‘Time Travel’. So, what exactly is this new feature that has me so excited? Say you’re really craving a sweet treat. Instead of going back to your regular stop or simply Googling ‘sweet treats near me’, you’ll be able to ask Google Maps for exactly what you’re looking for and the app will give you suggestions for nearby places that offer it. Naturally, it will also provide you with pictures, ratings, and reviews from other users that you can use to make a decision.
Sweet treat treasure hunter
I absolutely love the idea and I really hope we get to see the feature come to life, as someone who has a habit of going to the same places over and over again because I either don’t know any alternatives or just haven’t discovered other parts of my city. The new feature has the potential to offer a serious upgrade to Google Maps’ more specific location search abilities, beyond simply typing in the name of the shop you want or selecting a vague group like ‘Restaurants’ as you can currently. You’ll be able to see your results sorted into categories, and if you want more in-depth recommendations you can ask follow-up questions to narrow down your search - much in the same way that AI assistants like Microsoft Copilot can ‘remember’ your previous chat history to provide more context-sensitive results. I often find myself craving a little cake or a delicious cookie, so if I want that specific treat I can tell the app what I’m craving and get a personalized list of reviewed recommendations. We’re yet to find out when exactly to expect this new feature, and without an official announcement, we can’t be 100% certain that it will ever make a public release. However, I’m sure it would be a very popular addition to Google Maps, and I can’t wait to discover new places in my town with the help of an AI navigator.
You might also like: Your phone is killing the planet – here are 3 ways to reduce your impact · Sustainability week 2024 · A huge Meta AI update finally arrives on Ray-Ban Meta smart glasses... for some
View the full article
  19. Up until now, you've needed a phone running Android 12 or later to make use of the Google Gemini AI app for Android, but that has now changed – while a new 'conversation mode' for the chatbot has also leaked. As per Android Authority, some digging by well-known tipster @AssembleDebug revealed that Android 10 was the new minimum requirement for Gemini, and the Play Store listing now also reflects the support for more devices. Android 10 and Android 12 launched in 2019 and 2021 respectively, so a substantial number of older phones should now be Gemini-compatible. The app can replace Google Assistant on handsets, if requested, though it doesn't yet support all of the same features. According to the official Gemini support page, you also need 4GB of RAM in your phone to run the AI chatbot properly. That page still mentions compatibility with Android 12 and higher, though we're assuming it'll be updated soon. For now, you can only get at Gemini on an iPhone by going through the Google app for iOS.
A little more conversation
"Gemini assistant to get a new 'Conversation' mode on Android... There is definitely some uncertainty about this feature about how it will exactly work," @AssembleDebug posted on April 24, 2024. Yet another new find from @AssembleDebug (who we're assuming never sleeps) and PiunikaWeb points to something called 'conversation mode' in Gemini for Android. The code for it is disabled right now, but could be enabled in the near future. As it doesn't work yet, it's difficult to say for sure what it could be. It might match the 'continued conversation' feature in Google Assistant, where you can keep chatting without having to manually trigger the Assistant's listening mode each time. Alternatively, it could be something to do with live translation, a feature that's already appeared in several AI-powered apps from Google and others. Time will tell if this is something Google keeps developing and sets live. The next date of note for Google and Gemini AI news is May 14, when Google I/O 2024 gets underway. Google is expected to tell us a lot more about its AI efforts then, and there should also be updates on Android 15 and the Google Pixel 8a.
You might also like: Android tablets could soon support Gemini · Google Gemini has plenty of ideas · An annoying Gemini problem gets fixed
View the full article
20. As a seasoned DevOps engineer, you've likely had your fair share of endless nights debugging issues, manually monitoring systems, and firefighting outages. While the role is deeply satisfying, there's no doubt operations can eat up a substantial chunk of your time each day. Well, friend, I'm here to tell you that help is on the way - in the form of artificial intelligence. AI and machine learning are advancing at a rapid pace, and they're poised to transform the world of DevOps in profound ways. In this article, I'll walk through some of the most impactful ways AI is already augmenting DevOps workflows and discuss the potential benefits. If you need a primer on DevOps principles, we offer a comprehensive overview in our article: What Is DevOps?

Applying AI to Automate Infrastructure Provisioning

One of the most time-consuming parts of any DevOps role is manually provisioning and configuring infrastructure. Whether it's spinning up new environments, cloning test setups, or patching existing systems, a lot of time gets spent on repetitive configuration tasks. With tools like Pulumi, HashiCorp Terraform, and AWS CloudFormation, you can programmatically define and deploy all your environments in code. But things get even better when you incorporate machine learning. AI assistants like Anthropic's Claude can parse your infrastructure code, automatically detect patterns, and generate reusable modules and abstractions. Over time, they get smarter - so your infrastructure setups become simpler, more standardized, and easier to maintain at scale.

Continuous Monitoring and Issue Detection

Monitoring your software stack, services, and metrics is key for stability and reliability. Unfortunately, manually scouring dashboards and alert floods is a constant drain. AI presents a smarter solution through self-supervised anomaly detection models. These systems learn your unique baselines over time and accurately detect issues the moment they occur. Many monitoring tools now offer AI-powered capabilities like automatic metrics clustering, correlation of logs and traces, and predictive problem diagnosis. With ML-based recommendations, you can jump directly to the root cause instead of wasting hours down rabbit holes. AI even enables continuous optimization by flagging unnecessary resource usage or outdated configurations.
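To make the baseline idea concrete, here is a minimal sketch of rolling-baseline anomaly detection in Python. The window size, threshold, and synthetic latency stream are illustrative assumptions, not the internals of any particular monitoring product:

```python
# Rolling-baseline anomaly detection: learn a local baseline from the
# previous `window` samples and flag points that deviate by more than
# `z_threshold` standard deviations from it.
import numpy as np

def detect_anomalies(series, window=60, z_threshold=3.0):
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean, std = baseline.mean(), baseline.std()
        if std > 0 and abs(series[i] - mean) / std > z_threshold:
            anomalies.append(i)
    return anomalies

# Synthetic request-latency stream: a steady baseline with one injected spike.
rng = np.random.default_rng(42)
latency_ms = rng.normal(loc=120, scale=5, size=500)
latency_ms[400] = 450  # simulated incident

print(detect_anomalies(latency_ms))  # flags index 400 (plus any chance 3-sigma noise)
```

Production detectors layer seasonality models, multi-metric correlation, and alert deduplication on top of this, but the core loop - learn a baseline, flag large deviations - is the same.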
Automated Testing at Scale

Any DevOps engineer knows comprehensive testing is vital but difficult to do well at scale. Running test suites takes forever, bugs slip through, and edge cases are missed. By leveraging techniques like natural language processing, AI is helping automate functional and security testing in whole new ways. Advanced testing bots can now emulate human behaviors, automatically exploring applications from every angle. They scour codebases, fuzz inputs, and generate synthetic traffic to uncover bugs earlier. With AI, your team gains oversight into risks across all environments through continuous automated testing. And thanks to smart test case generation, coverage increases while manual effort decreases substantially.

AI-Enhanced Predictive Analytics

Predictive analytics is where AI shines the most in DevOps. By analyzing historical data, AI can predict future outcomes with impressive accuracy. It can forecast potential system outages or failures before they occur, allowing you to proactively address issues, minimize downtime, and maintain a seamless user experience. Imagine if your monitoring systems could proactively warn of spikes several hours in advance, or if models learned seasonality to scale resources on demand, making waste a thing of the past. The power of prediction helps teams plan better by simulating changes before they happen, to avoid regressions or downtime. For more on predictive analytics, check out our article on How to Run Predictive Analytics in DevOps.

Intelligent Automation

Automation is at the heart of DevOps, and AI takes it a step further. Intelligent automation can learn from past actions and improve over time. It’s an apprentice that never stops learning, gradually taking over routine tasks with increasing efficiency, freeing you up for more complex challenges. It significantly accelerates your workflow while avoiding technical debt. To understand technical debt, check out our article on What Is Technical Debt?

Enhanced Security

Security is paramount, and AI provides an extra layer of defense. By continuously learning what ‘normal’ looks like, AI can detect deviations that might indicate a security breach. It also fuzz-tests code to find vulnerabilities, actively simulates exploits to patch gaps, and even helps roll back breaches through predictive analysis of earlier system states. It’s like having a tireless sentinel that’s always on the lookout for potential threats. For more on securing your DevOps environment, read the blog Container Security Best Practices.

Resource Optimization

AI also helps optimize resource usage through analysis of historical usage patterns - for instance, by generating Kubernetes limits and requests for containers based on peak memory and CPU consumption. This prevents any single container from using more than its fair share of resources and ensures performance isolation. Tools like Kluster or KubeAdvisor automatically tune configurations to maximize efficiency without compromising the customer experience, as the sketch after this section illustrates.
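As a rough illustration of that kind of right-sizing, the sketch below derives container requests and limits from historical usage samples. The percentile choice and headroom factor are assumptions for demonstration; real tools (for example, Kubernetes' Vertical Pod Autoscaler) use more elaborate estimators:

```python
# Derive Kubernetes requests/limits from observed usage: request near
# typical usage (90th percentile), limit near the observed peak plus headroom.
import numpy as np

def recommend_resources(cpu_millicores, memory_mib, headroom=1.2):
    return {
        "requests": {
            "cpu": f"{int(np.percentile(cpu_millicores, 90))}m",
            "memory": f"{int(np.percentile(memory_mib, 90))}Mi",
        },
        "limits": {
            "cpu": f"{int(max(cpu_millicores) * headroom)}m",
            "memory": f"{int(max(memory_mib) * headroom)}Mi",
        },
    }

# Hypothetical samples scraped from a metrics backend such as Prometheus.
cpu_samples = np.array([210, 180, 250, 300, 220, 600])   # millicores
mem_samples = np.array([300, 310, 305, 320, 290, 512])   # MiB

print(recommend_resources(cpu_samples, mem_samples))
```

The output maps directly onto the resources block of a container spec; the point is that the numbers come from measured behavior rather than guesswork.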
Limitations of AI in DevOps

The following are some limitations of AI in the DevOps environment:

Data Dependency: AI and ML models are heavily reliant on data. The quality, volume, and relevance of the data you feed into these models directly impact their effectiveness. Incomplete or biased data can lead to inaccurate predictions and automation.

Complexity and Interpretability: AI systems can be complex, and their decision-making processes opaque. This “black box” nature makes it difficult to interpret why certain decisions are made, which can be a significant issue when those decisions have substantial impacts on your systems.

Integration Challenges: Incorporating AI into existing DevOps workflows can be challenging. It requires seamless integration of AI tools with current infrastructure, which may involve significant changes to both tooling and processes.

Skill Gap: For now, there is a skill gap in the industry when it comes to AI. DevOps engineers need a solid understanding of AI principles to effectively implement and manage AI-driven systems, which often requires additional training and education.

Continuous Learning and Adaptation: Continuous learning sounds like a pure positive, but it is also a challenge: AI models require ongoing retraining to remain effective. As your systems and data change over time, models may become outdated and less accurate, and the regular updates and retraining they need cost money and time.

Ethical and Security Considerations: AI systems can raise ethical questions, especially around privacy and data usage. Additionally, they can become new targets for security breaches, requiring robust security measures to protect sensitive data.

Cost: Implementing AI can be costly. It involves not only the initial investment in technology but also ongoing costs for processing power, storage, and the human resources needed to manage and maintain AI systems.

Reliability and Trust: Building trust in AI’s capabilities is essential. Stakeholders may be hesitant to rely on AI for critical tasks without a clear understanding of its reliability and the ability to intervene when necessary.

By understanding these limitations, you can better prepare for and mitigate potential risks, ensuring a more successful integration of AI into your practices.

Conclusion

While AI still has room to grow, real value emerges when it is applied judiciously to target DevOps pain points. By carefully incorporating machine learning where it makes sense, you gain a potent set of assistants to streamline operations. And that translates directly into more focus on the innovation that drives business impact.

View the full article
21. Gemini may receive a big update on mobile in the near future where it’ll gain several new features, including a text box overlay. Details of the upgrade come from industry insider AssembleDebug, who shared his findings with a couple of publications. PiunikaWeb gained insight into the overlay, and it’s quite fascinating seeing it in action. It converts the AI’s input box into a small floating window located at the bottom of a smartphone display, staying there even if you close the app. You could, for example, talk to Gemini while browsing the internet or checking your email. AssembleDebug was able to activate the window and get it working on his phone while on X (the platform formerly known as Twitter). His demo video shows it behaving exactly like the Gemini app: you ask the AI a question, and after a few seconds, a response comes out complete with source links, images, as well as YouTube videos if the inquiry calls for it. Answers have the potential to obscure the app behind the window; AssembleDebug’s video reveals that the window's length depends on whether the question requires a long-form answer. We should mention that the overlay is multimodal, so you can write out an inquiry, verbally command the AI, or upload an image.

Smarter AI

The other notable changes were shared with Android Authority. First, Gemini on Android will gain the ability to accept different types of files besides photographs. Images show a tester uploading a PDF, and then asking the AI to summarize the text inside it. Apparently, the feature is present in the current version of Gemini; however, activating it doesn’t do anything. Android Authority speculates the update may be exclusive to either Google Workspace or Gemini Advanced - maybe both. It’s hard to tell at the moment. Second is a pretty basic but useful tool called Select Text. The way Gemini works right now, you’re forced to copy a whole block of text even if you just want a small portion. Select Text solves this issue by allowing you to grab a specific line or paragraph. Yeah, it’s not a flashy upgrade, and almost every app in the world has the same capability. Yet the tool has “huge implications for Gemini’s usability”, greatly improving the AI's ease of use by not being so restrictive.

"#Google Gemini Android app will finally not force you to copy an entire prompt response. Read on AndroidAuthority - https://t.co/M1EFGwfbNJ #Google #Gemini #AI pic.twitter.com/BFkKCbKylR" – April 23, 2024

A fourth, smaller update was found by AssembleDebug. It’s known as Real-time Responses. The descriptor text found alongside it claims the tool lets you see answers being written out in real time. However, as PiunikaWeb points out, it’s only an animation change; there aren’t any “practical benefits.” Instead of waiting for Gemini to generate a response as one solid mass, you can choose to see the AI write everything out line by line, similar to its desktop counterpart. Google I/O 2024 kicks off in about three weeks on May 14. No word on when these features will roll out, but we'll learn a lot more during the event. While you wait, check out TechRadar's roundup of the best Android smartphones for 2024 if you're looking to upgrade.

You might also like

This is what Gemini AI in Google Messages may look like
The first Android 15 public beta is out – here's how to download it
Google’s Gemini AI app could soon let you sync and control your favorite music streaming service

View the full article
22. More than one in two Americans have already tried generative AI in the past year in the hope that it could improve productivity and creativity in their personal lives, new research from Adobe has found. The company's study found over half (53%) had given the technology a go; however, only 30% had used GenAI in the workplace compared with 81% in their personal lives. Testament to artificial intelligence’s potential to impact lives, two in five (41%) now claim to use GenAI daily.

You can’t escape from generative AI

The survey of 3,000 consumers illustrates generative AI’s widespread use as well as the acceptance and enthusiasm for a relatively new technology – before the public preview launch of ChatGPT in late 2022, few consumers had ever heard of generative AI, let alone tried an AI application. However, despite the technology’s capability to process huge amounts of data reasonably quickly, only 17% of the survey’s participants admitted to using it within education, suggesting that users could be more inquisitive than reliant. Delving deeper into specific tasks, brainstorming (64%), creating first drafts of written content (44%), creating visuals or presentations (36%), trying an alternative to search (32%), summarizing written text (31%), creating images or art (29%), and creating programming code (21%) emerged as some key use cases for generative AI. Four in five (82%) also hope that GenAI can improve their creativity, despite almost as many (72%) believing that it will never match a human’s creativity. Looking ahead, consumers anticipate generative AI helping them with learning a new skill (43%), making price comparison and shopping easier (36%), accessing better customer support from companies (33%), creating social media content (18%), and coding (14%). The study also noted GenAI’s impact on retail and ecommerce. On the whole, generative AI is transitioning from a novelty to a productivity and experience enhancer as companies worldwide look to implement the technology across endless sectors.

More from TechRadar Pro

Microsoft announces new AI hub in London in latest AI push
We’ve rounded up the best AI tools and best AI writers
Check out all the best productivity tools

View the full article
23. Apple is said to be developing its own AI server processor using TSMC's 3nm process, targeting mass production by the second half of 2025. According to a post by the Weibo user known as "Phone Chip Expert," Apple has ambitious plans to design its own artificial intelligence server processor. The user, who claims to have 25 years of experience in the integrated circuit industry, including work on Intel's Pentium processors, suggests this processor will be manufactured using TSMC's 3nm node. TSMC is a vital partner for Apple, manufacturing all of its custom silicon chips. The chipmaker's 3nm technology is one of the most advanced semiconductor processes available, offering significant improvements in performance and energy efficiency over the previous 5nm and 7nm nodes. Apple's purported move toward developing a specialist AI server processor reflects the company's ongoing strategy of vertically integrating its supply chain. By designing its own server chips, Apple can tailor hardware specifically to its software needs, potentially leading to more powerful and efficient technologies. Apple could use its own AI processors to enhance the performance of its data centers and future AI tools that rely on the cloud. While Apple is rumored to be prioritizing on-device processing for many of its upcoming AI tools, it is inevitable that some operations will have to occur in the cloud. By the time the custom processor could be integrated into operational servers in late 2025, Apple's new AI strategy should be well underway. The Weibo user has a track record of accurate claims, including that the iPhone 7 would be water-resistant and that the standard iPhone 14 models would continue using the A15 Bionic chip, with the more advanced A16 chip being exclusive to the iPhone 14 Pro models. These predictions were later corroborated by multiple credible sources and proved correct upon the products' release.

Tags: TSMC, Artificial Intelligence, Phone Chip Expert

This article, "Apple Reportedly Developing Its Own Custom Silicon for AI Servers" first appeared on MacRumors.com

Discuss this article in our forums

View the full article
24. As a business leader, you know that artificial intelligence (AI) is no longer just a buzzword - it’s a transformative force that is reshaping every industry, redefining customer experiences, and unlocking unprecedented efficiencies. In his groundbreaking book, Adaptive Ethics for Digital Transformation, Mark Schwartz shines a light on the moral challenges that arise as companies race to harness the power of AI. He argues that our traditional, rule-based approaches to business ethics are woefully inadequate in the face of the complexity, uncertainty, and rapid change of the digital age. So, as a leader, how can you ensure that your organization is wielding AI in a way that is not only effective but also ethical? Here are some key takeaways from Schwartz’s book that can help you navigate this new terrain:

Cultivate a Culture of Ethical Awareness and Accountability

Too often, discussions about AI ethics are siloed within technical teams or relegated to an afterthought. Schwartz stresses that ethical considerations must be woven into the fabric of your organization’s culture. This means actively encouraging all employees, from data scientists to business leaders, to raise ethical questions and concerns. Foster an environment where it’s not only acceptable but expected to hit the pause button on an AI initiative if something doesn’t feel right. Celebrate those who have the courage to speak up, even if it means slowing down progress in the short term. By making ethics everyone’s responsibility, you can catch potential issues early, before they spiral out of control.

Embrace Humility and Adaptability

One of the most dangerous traps in the realm of AI is overconfidence. We may be tempted to believe that we can anticipate and control every possible outcome of the intelligent systems we create. But as Schwartz points out, the reality is that we are often venturing into uncharted territory. Instead of clinging to a false sense of certainty, Schwartz advises embracing humility and adaptability. Approach AI initiatives as ongoing ethical experiments. Put forward your best hypotheses for how to encode human values into machine intelligence, but be prepared to continuously test, learn, and iterate. This means building mechanisms for regular ethical review and course correction. It means being willing to slow down or even shut down an AI system if unintended consequences emerge. In a world of constant change, agility is not just a technical imperative, but an ethical one.

Make Transparency and Interpretability a Priority

One of the biggest risks of AI is the “black box” problem - the tendency for the decision-making logic of machine learning models to be opaque and inscrutable. When we can’t understand how an AI system arrives at its conclusions, it becomes nearly impossible to verify that it is operating in an ethical manner. Schwartz emphasizes the importance of algorithmic transparency and interpretability. Strive to make the underlying logic of your AI systems as clear and understandable as possible. This may require investing in tools and techniques for explaining complex models, or even sacrificing some degree of performance for the sake of transparency. The goal is to create AI systems that are not just high-performing, but also accountable and auditable. By shining a light into the black box, you can build trust with stakeholders and ensure that your AI is aligned with your organization’s values.
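As one concrete illustration of what "investing in tools and techniques for explaining complex models" can look like in practice, the Python sketch below uses permutation importance, a common model-agnostic interpretability technique, to surface which inputs actually drive a model's predictions. The dataset and model are generic stand-ins, not anything from Schwartz's book:

```python
# Permutation importance: shuffle one feature at a time and measure how much
# the model's held-out accuracy drops - a rough gauge of what the model
# actually relies on when it makes a decision.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```

A report like this won't make a model fully transparent, but it gives non-specialist stakeholders a defensible starting point for asking why the system decided what it did.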
Keep Humans in the Loop

Another key ethical principle that Schwartz stresses is the importance of human oversight and accountability. Even as AI becomes more sophisticated, it is critical that we resist the temptation to fully abdicate decision-making to machines. Establish clear protocols for human involvement in AI-assisted decisions, especially in high-stakes domains like healthcare, criminal justice, and financial services. Create mechanisms for human review and override of AI recommendations. Importantly, Schwartz cautions against using AI as a scapegoat for difficult decisions. We must be careful not to simply “blame the algorithm” when thorny ethical trade-offs arise. At the end of the day, it is human leaders who bear the responsibility for the AI systems they choose to deploy and the outcomes they generate.

Use AI as a Mirror to Examine Societal Biases

One of the most powerful ideas in Schwartz’s book is the notion of using AI as a tool for ethical introspection. Because AI models are trained on historical data, they often reflect and amplify the biases and inequities that are embedded in our society. Rather than seeing this as a flaw to be ignored or minimized, Schwartz encourages leaders to seize it as an opportunity. By proactively auditing your AI systems for bias, you can surface uncomfortable truths about the way your organization and society operate. This can spark much-needed conversations about fairness, inclusion, and social responsibility. In this way, AI can serve as a catalyst for positive change. By holding up a mirror to our collective blind spots, AI can challenge us to confront long-standing injustices and build a more equitable future.

Conclusion

As you embark on your own digital transformation journey, the insights from Adaptive Ethics for Digital Transformation provide an invaluable roadmap for navigating the ethical challenges of AI. By cultivating a culture of ethical awareness, embracing humility and adaptability, prioritizing transparency and human oversight, and using AI as a tool for introspection, you can harness the power of this transformative technology in a way that upholds your values and benefits society as a whole. The path forward won’t always be clear or easy. But with the right ethical framework and a commitment to ongoing learning and adaptation, you can lead your organization confidently into the age of AI - and create a future that you can be proud of.

The post Navigating the Ethical Minefield of AI appeared first on IT Revolution.

View the full article
  25. Artificial Intelligence (AI) and Machine Learning (ML) have emerged as transformative technologies, driving innovation across many industries. However, in addition to their benefits, AI and ML systems bring unique security challenges that demand a proactive and comprehensive approach. A new methodology that applies the principles of DevSecOps to AI and ML security, called AISecOps, ensures […] The article AISecOps: Applying DevSecOps to AI and ML Security appeared first on Build5Nines. View the full article