Search the Community
Showing results for tags 'genai'.
The generative AI revolution is transforming the way that teams work, and Databricks Assistant leverages the best of these advancements. It allows you... View the full article
Tagged with: data engineering, ai (and 1 more)
Apple is once again talking with OpenAI about using OpenAI technology to power artificial intelligence features in iOS 18, reports Bloomberg's Mark Gurman. Apple held talks with OpenAI earlier in the year, but nothing had come of the discussion. Apple and OpenAI are now said to be speaking about the terms of a possible agreement and how Apple might utilize OpenAI features. Along with OpenAI, Apple is still having discussions with Google about licensing Google's Gemini AI. Apple has not come to a final decision, and Gurman suggests that the company could partner with both Google and OpenAI or pick another provider entirely. Rumors suggest that iOS 18 will have a major focus on AI, with Apple set to introduce AI functionality across the operating system. Apple CEO Tim Cook confirmed in February that Apple plans to "break new ground" in AI. We'll get a first look at the AI features that Apple has planned in just over a month, with iOS 18 set to debut at the Worldwide Developers Conference that kicks off on June 10.
Related Roundup: iOS 18
Tag: Apple GPT
This article, "Apple Reignites Talks With OpenAI About Generative AI for iOS 18" first appeared on MacRumors.com. Discuss this article in our forums. View the full article
We’re excited to announce the Databricks Generative AI Hackathon winners. This hackathon garnered hundreds of data and AI practitioners spanning 60 invited companies... View the full article
Tagged with: databricks, genai (and 1 more)
Amid all the excitement around the potential of generative AI to transform business and unlock trillions of dollars in value across the global economy, it is easy to overlook the significant impact that the technology is already having. Indeed, the era of gen AI does not exist at some vague point in the not-too-distant future: it is here and now. The advent of generative AI marks a significant leap in the evolution of computing. For Media customers, generative AI introduces the ability to generate real-time, personalized and unique interactions that weren't possible before. This technology is not just revolutionizing the way we streamline the content creation process, but it is also transforming broadcasting operations, such as discovering and searching media archives. Simultaneously, in Telco, generative AI boosts productivity by creating a knowledge-based engine that can summarize and extract information from both structured and unstructured data, which employees can use to solve a customer's problem or to shorten the learning curve. Furthermore, generative AI can be easily implemented and understood by all levels of the organization without needing to know the model's complexity.
How generative AI is transforming the telco and media industry
The telecommunications and media industry is at the forefront of integrating generative AI into its operations, viewing it as a catalyst for growth and innovation. Industry leaders are enthusiastic about its ability to not only enhance current processes but also spearhead new innovations, create new opportunities, unlock new sources of value and improve overall business efficiency. Communication Service Providers (CSPs) are now using generative AI to significantly reduce the time it takes to perform network-outage root-cause analysis. Traditionally, identifying the root cause of an outage involved engineers mining through several logs, vendor documents, past trouble tickets, and their resolutions. Vertex AI Search enables CSPs to extract relevant information across structured and unstructured data, and significantly shorten the time for a human engineer to identify probable causes. "Generative AI is helping our employees to do their jobs and increase their productivity, allowing them to spend more time strengthening the relationship with our customers," explains Uli Irnich, CIO of Vodafone Germany. Media organizations are using generative AI to smoothly and successfully engage and retain viewers by enabling more powerful search and recommendations. With Vertex AI, customers are building an advanced media recommendations application and enabling audiences to discover personalized content, with Google-quality results that are customized by optimization objectives.
Responding to challenges with a responsible approach to development
While the potential of generative AI is widely recognised, challenges to its widespread adoption still persist. On the one hand, many of these stem from the sheer size of the businesses involved, with legacy architecture, siloed data, and the need for skills training presenting obstacles to more widespread and effective usage of generative AI solutions. On the other hand, many of these risk-averse enterprise-scale organizations want to be sure that the benefits of generative AI outweigh any perceived risks.
In particular, businesses seek reassurance around the security of customer data and the need to conform to regulation, as well as around some of the challenges that can arise when building generative AI models, such as hallucinations (more on that below). As part of our long-standing commitment to the responsible development of AI, Google Cloud puts our AI Principles into practice. Through guidance, documentation, and practical tools, we are supporting customers to help ensure that businesses are able to roll out their solutions in a safe, secure, and responsible way. By tackling challenges and concerns head on, we are working to empower organizations to leverage generative AI safely and effectively. One such challenge is "hallucinations," which occur when a generative AI model outputs incorrect or invented information in response to a prompt. For enterprises, it's key to build robust safety layers before deploying generative AI-powered applications. Models, and the ways that generative AI apps leverage them, will continue to get better, and many methods for reducing hallucinations are available to organizations. Last year, we introduced grounding capabilities for Vertex AI, enabling large language models to incorporate specific data sources for model response generation. By providing models with access to specific data sources, grounding tethers their output to specific data and reduces the chances of inventing content. Consequently, it reduces model hallucinations, anchors the model response to specific data sources and enhances the trustworthiness of generated content. Grounding lets the model access information that goes beyond its training data. By linking to designated data stores within Vertex AI Search, the grounded model can produce relevant responses. As AI-generated images become increasingly popular, we offer digital watermarking and verification on Vertex AI, making us the first cloud provider to enable enterprises with a robust, usable and scalable approach to create AI-generated images responsibly, and identify them with confidence. Digital watermarking on Vertex AI provides two capabilities: watermarking, which produces a watermark designed to be invisible to the human eye and does not damage or reduce the image quality; and verification, which determines whether an image was generated by Imagen, along with a confidence level. This technology is powered by Google DeepMind SynthID, a state-of-the-art technology that embeds the watermark directly into the image pixels, making it imperceptible to the human eye, and very difficult to tamper with without damaging the image.
Removing harmful content for more positive user experiences
Given the versatility of Large Language Models, predicting unintended or unexpected output is challenging. To address this, our generative AI APIs have safety attribute scoring, enabling customers to test Google's safety filters and set confidence thresholds suitable for their specific use case and business. These safety attributes include "harmful categories" and topics that can be considered sensitive, each assigned a confidence score between 0 and 1. This score reflects the likelihood of the input or response belonging to a given category. Implementing this measure is a step toward a positive user experience, ensuring outputs align more closely with the desired safety standards.
Embedding responsible AI governance throughout our processes
As we work to develop generative AI responsibly, we keep a close eye on emerging regulatory frameworks.
Google's AI/ML Privacy Commitment outlines our belief that customers should have a higher level of security and control over their data in the cloud. That commitment extends to Google Cloud generative AI solutions: by default, Google Cloud doesn't use customer data (including prompts, responses and adapter model training data) to train its foundation models. We also offer third-party intellectual property indemnity as standard for all customers. By integrating responsible AI principles and toolkits into all aspects of AI development, we are witnessing a growing confidence among organizations in using Google Cloud generative AI models and the platform. This approach enables them to enhance customer experience and, overall, foster a productive business environment in a secure, safe and responsible manner. As we progress on a shared generative AI journey, we are committed to empowering customers with the tools and protections they need to use our services safely, securely and with confidence. "Google Cloud generative AI is optimizing the flow from ideation to dissemination," says Daniel Hulme, Chief AI Officer at WPP. "And as we start to scale these technologies, what is really important over the coming years is how we use them in a safe, responsible and ethical way." View the full article
Tagged with: genai, media industry (and 2 more)
Copado's genAI tool automates testing in Salesforce software-as-a-service (SaaS) application environments. View the full article
Tagged with: genai, salesforce (and 1 more)
In the vast universe of programming, the era of generative artificial intelligence (GenAI) has marked a turning point, opening up a plethora of possibilities for developers. Tools such as LangChain4j and Spring AI have democratized access to the creation of GenAI applications in Java, allowing Java developers to dive into this fascinating world. With LangChain4j, for instance, setting up and interacting with large language models (LLMs) has become exceptionally straightforward. Consider the following Java code snippet:

public static void main(String[] args) {
    var llm = OpenAiChatModel.builder()
            .apiKey("demo")
            .modelName("gpt-3.5-turbo")
            .build();
    System.out.println(llm.generate("Hello, how are you?"));
}

This example illustrates how a developer can quickly instantiate an LLM within a Java application. By simply configuring the model with an API key and specifying the model name, developers can begin generating text responses immediately. This accessibility is pivotal for fostering innovation and exploration within the Java community. More than that, we have a wide range of models that can be run locally, and various vector databases for storing embeddings and performing semantic searches, among other technological marvels. Despite this progress, however, we are faced with a persistent challenge: the difficulty of testing applications that incorporate artificial intelligence. This aspect seems to be a field where there is still much to explore and develop. In this article, I will share a methodology that I find promising for testing GenAI applications.

Project overview

The example project focuses on an application that provides an API for interacting with two AI agents capable of answering questions. An AI agent is a software entity designed to perform tasks autonomously, using artificial intelligence to simulate human-like interactions and responses. In this project, one agent uses direct knowledge already contained within the LLM, while the other leverages internal documentation to enrich the LLM through retrieval-augmented generation (RAG). This approach allows the agents to provide precise and contextually relevant answers based on the input they receive. I prefer to omit the technical details about RAG, as ample information is available elsewhere. I'll simply note that this example employs a particular variant of RAG, which simplifies the traditional process of generating and storing embeddings for information retrieval. Instead of dividing documents into chunks and making embeddings of those chunks, in this project we will use an LLM to generate a summary of the documents. The embedding is generated based on that summary. When the user writes a question, an embedding of the question will be generated and a semantic search will be performed against the embeddings of the summaries. If a match is found, the user's message will be augmented with the original document. This way, there's no need to deal with the configuration of document chunks, worry about setting the number of chunks to retrieve, or worry about whether the way of augmenting the user's message makes sense. If there is a document that talks about what the user is asking, it will be included in the message sent to the LLM.
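To make the summary-based flow described above a little more concrete, here is a minimal, hypothetical sketch of how it could be wired together with LangChain4j's ChatLanguageModel, EmbeddingModel, and EmbeddingStore abstractions. The class name, fields, and prompt wording are illustrative rather than taken from the project, and the LangChain4j method names reflect the versions current at the time of writing:

import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingMatch;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the summary-based RAG variant described above.
class SummaryRag {

    private final ChatLanguageModel llm;
    private final EmbeddingModel embeddingModel;
    private final EmbeddingStore<TextSegment> summaryStore = new InMemoryEmbeddingStore<>();
    private final Map<String, String> documentsBySummary = new HashMap<>();

    SummaryRag(ChatLanguageModel llm, EmbeddingModel embeddingModel) {
        this.llm = llm;
        this.embeddingModel = embeddingModel;
    }

    // Ingestion: summarize each document and embed the summary instead of chunking it.
    void ingest(String document) {
        String summary = llm.generate("Summarize the following document in a few sentences:\n" + document);
        summaryStore.add(embeddingModel.embed(summary).content(), TextSegment.from(summary));
        documentsBySummary.put(summary, document);
    }

    // Answering: embed the question, search the summary embeddings, and if a good match
    // is found, augment the user's message with the original document.
    String answer(String question, double minScore) {
        Embedding questionEmbedding = embeddingModel.embed(question).content();
        List<EmbeddingMatch<TextSegment>> matches = summaryStore.findRelevant(questionEmbedding, 1);
        String prompt = question;
        if (!matches.isEmpty() && matches.get(0).score() >= minScore) {
            String document = documentsBySummary.get(matches.get(0).embedded().text());
            prompt = question + "\n\nUse the following document to answer:\n" + document;
        }
        return llm.generate(prompt);
    }
}

In the actual project the same flow sits behind Spring beans and the ChatController endpoints exercised by the tests below; the sketch only shows the shape of the ingestion and retrieval steps.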
Technical stack

The project is developed in Java and utilizes a Spring Boot application with Testcontainers and LangChain4j. For setting up the project, I followed the steps outlined in Local Development Environment with Testcontainers and Spring Boot Application Testing and Development with Testcontainers. I also use Testcontainers Desktop to facilitate database access, to verify the generated embeddings, and to review the container logs.

The challenge of testing

The real challenge arises when trying to test the responses generated by language models. Traditionally, we could settle for verifying that the response includes certain keywords, which is insufficient and prone to errors.

static String question = "How I can install Testcontainers Desktop?";

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    assertThat(answer).contains("https://testcontainers.com/desktop/");
}

This approach is not only fragile but also lacks the ability to assess the relevance or coherence of the response. An alternative is to employ cosine similarity to compare the embeddings of a "reference" response and the actual response, providing a more semantic form of evaluation. This method measures the similarity between two vectors/embeddings by calculating the cosine of the angle between them. If both vectors point in the same direction, it means the "reference" response is semantically the same as the actual response.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
        - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
        - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
        - Answer must be less than 5 sentences
        """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    double cosineSimilarity = getCosineSimilarity(reference, answer);
    assertThat(cosineSimilarity).isGreaterThan(0.8);
}

However, this method introduces the problem of selecting an appropriate threshold to determine the acceptability of the response, in addition to the opacity of the evaluation process.

Toward a more effective method

The real problem here arises from the fact that answers provided by the LLM are in natural language and non-deterministic. Because of this, using current testing methods to verify them is difficult, as these methods are better suited to testing predictable values. However, we already have a great tool for understanding non-deterministic answers in natural language: LLMs themselves. Thus, the key may lie in using one LLM to evaluate the adequacy of responses generated by another LLM. This proposal involves defining detailed validation criteria and using an LLM as a "Validator Agent" to determine if the responses meet the specified requirements. This approach can be applied to validate answers to specific questions, drawing on both general knowledge and specialized information. By incorporating detailed instructions and examples, the Validator Agent can provide accurate and justified evaluations, offering clarity on why a response is considered correct or incorrect.
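As a side note, the getCosineSimilarity helper used in the cosine-similarity test above is not shown in the post. A minimal, hypothetical version, assuming the embeddings come from a LangChain4j EmbeddingModel (the embeddingModel field here is illustrative), could look like this:

// Hypothetical helper, not from the original post: embeds both texts and
// compares them using cos(theta) = (a . b) / (|a| * |b|).
double getCosineSimilarity(String reference, String answer) {
    float[] a = embeddingModel.embed(reference).content().vector();
    float[] b = embeddingModel.embed(answer).content().vector();
    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.length; i++) {
        dot   += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

LangChain4j also ships a CosineSimilarity utility in dev.langchain4j.store.embedding that can replace the manual loop.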
static String question = "How I can install Testcontainers Desktop?";
static String reference = """
        - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
        - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
        - Answer must be less than 5 sentences
        """;

@Test
void verifyStraightAgentFailsToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/straight?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("no");
}

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("yes");
}

We can even test more complex responses where the LLM should suggest a better alternative to the user's question.

static String question = "How I can find the random port of a Testcontainer to connect to it?";
static String reference = """
        - Answer must not mention using getMappedPort() method to find the random port of a Testcontainer
        - Answer must mention that you don't need to find the random port of a Testcontainer to connect to it
        - Answer must indicate that you can use the Testcontainers Desktop app to configure fixed port
        - Answer must be less than 5 sentences
        """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToDebugWithTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("yes");
}

Validator Agent

The configuration for the Validator Agent doesn't differ from that of other agents. It is built using the LangChain4j AI Service and a list of specific instructions:

public interface ValidatorAgent {

    @SystemMessage("""
            ### Instructions
            You are a strict validator.
            You will be provided with a question, an answer, and a reference.
            Your task is to validate whether the answer is correct for the given question, based on the reference.
            Follow these instructions:
            - Respond only 'yes', 'no' or 'unsure' and always include the reason for your response
            - Respond with 'yes' if the answer is correct
            - Respond with 'no' if the answer is incorrect
            - If you are unsure, simply respond with 'unsure'
            - Respond with 'no' if the answer is not clear or concise
            - Respond with 'no' if the answer is not based on the reference

            Your response must be a json object with the following structure:
            {
                "response": "yes",
                "reason": "The answer is correct because it is based on the reference provided."
            }

            ### Example
            Question: Is Madrid the capital of Spain?
            Answer: No, it's Barcelona.
            Reference: The capital of Spain is Madrid
            ### Response:
            {
                "response": "no",
                "reason": "The answer is incorrect because the reference states that the capital of Spain is Madrid."
            }
            """)
    @UserMessage("""
            ### Question: {{question}}
            ### Answer: {{answer}}
            ### Reference: {{reference}}
            ###
            """)
    ValidatorResponse validate(@V("question") String question, @V("answer") String answer, @V("reference") String reference);

    record ValidatorResponse(String response, String reason) {}
}

As you can see, I'm using Few-Shot Prompting to guide the LLM on the expected responses. I also request a JSON format for responses to facilitate parsing them into objects, and I specify that the reason for the answer must be included, to better understand the basis of its verdict.

Conclusion

The evolution of GenAI applications brings with it the challenge of developing testing methods that can effectively evaluate the complexity and subtlety of responses generated by advanced artificial intelligences. The proposal to use an LLM as a Validator Agent represents a promising approach, paving the way towards a new era of software development and evaluation in the field of artificial intelligence. Over time, we hope to see more innovations that allow us to overcome the current challenges and maximize the potential of these transformative technologies.

Learn more
- Check out the GenAI Stack to get started with adding AI to your apps.
- Subscribe to the Docker Newsletter.
- Get the latest release of Docker Desktop.
- Vote on what's next! Check out our public roadmap.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.

View the full article
More than one in two Americans have already tried generative AI in the past year in the hope that it could improve productivity and creativity in their personal lives, new research from Adobe has found. The company's study found over half (53%) had given the technology a go, however only 30% had used GenAI in the workplace compared with 81% in their personal lives. Testament to artificial intelligence's potential to impact lives, two in five (41%) now claim to use GenAI daily.
You can't escape from generative AI
The survey of 3,000 consumers illustrates generative AI's widespread use as well as the acceptance and enthusiasm for a relatively new technology – before the public preview launch of ChatGPT in late 2022, few consumers had ever heard of generative AI, let alone tried an AI application. However, despite the technology's capability to process huge amounts of data reasonably quickly, only 17% of the survey's participants admitted to using it within education, suggesting that users could be more inquisitive than reliant. Delving deeper into specific tasks, brainstorming (64%), creating first drafts of written content (44%), creating visuals or presentations (36%), trying an alternative to search (32%), summarizing written text (31%), creating images or art (29%), and creating programming code (21%) emerged as some key use cases for generative AI. Four in five (82%) also hope that GenAI can improve their creativity, despite almost as many (72%) believing that it would never match a human's creativity. Looking ahead, consumers anticipate generative AI helping them with learning a new skill (43%), making price comparison and shopping easier (36%), accessing better customer support from companies (33%), creating social media content (18%), and coding (14%). The study also noted GenAI's impacts on retail and ecommerce. On the whole, generative AI is transitioning from a novelty to a productivity and experience enhancer as companies worldwide look to implement the technology across endless sectors.
More from TechRadar Pro
- Microsoft announces new AI hub in London in latest AI push
- We've rounded up the best AI tools and best AI writers
- Check out all the best productivity tools
View the full article
Agents for Amazon Bedrock enable generative AI applications to automate multi-step tasks across company systems and data sources. Agents remove the undifferentiated heavy lifting of orchestration, infrastructure hosting, and management, and we're making building Agents easier than ever. View the full article
It's no secret that Apple has been biding its time on the AI front, and the latest intelligence surrounding iOS 18 suggests that the company's upcoming generative AI features could differ from those already available on Samsung and Google Pixel devices in one key way. According to Bloomberg's resident Apple expert Mark Gurman (via MacRumors), Apple's generative AI features will be underpinned by a proprietary large language model (LLM) that runs entirely on-device, rather than via the cloud. This approach would prioritize speed and privacy, since an on-device LLM doesn't require an internet connection to function, though Apple's AI tools may be slightly less powerful than those available from cloud-based rivals (like Galaxy AI) as a result. To combat the latter point, Gurman hints that Apple could "fill in the gaps" by licensing technology from Google and other AI service providers. The tipster has previously reported that Apple is in "active negotiations" with Google to license Google Gemini for certain iOS 18 features, so a Google-assisted Apple LLM is looking increasingly likely, despite our initial skepticism. As above, on-device processing delivers quicker response times and superior privacy over cloud-based solutions, which fits with Apple's traditional commitment to style, simplicity and security. Indeed, according to Gurman, this is how Apple will market its AI features – as reliable, usable tools that enhance users' daily lives, rather than all-powerful creative ones.
Superior Siri
(Image credit: Apple)
There's still no word on what Apple's AI features will be, exactly, but the likes of Siri, Messages, Apple Music and Pages are expected to receive significant AI-based improvements in iOS 18, with the former reportedly in line for a ChatGPT-style makeover. Rumors suggest that Siri, specifically, will also harness generative AI to understand not just your vocal requests, but also the context behind them, which will presumably make the once-pioneering voice assistant a much more useful feature of the best iPhones, iPads and MacBooks, as well as, we hope, Apple's long-awaited HomePod with a touchscreen. In any case, Apple's suite of AI features is reportedly on track for a grand unveiling at WWDC 2024, so we don't have too long to wait before we find out how the iPhone 16, iPhone 16 Pro Max and other iOS 18-compatible devices will challenge the current best phones on the market in the AI department.
You might also like...
- More leaked iPhone 16 dummy units echo previous design leaks
- The iPhone 16 could be sold in seven shades
- Battery capacities for all four iPhone 16 models have leaked
View the full article
About
Experience everything that Summit has to offer. Attend all the parties, build your session schedule, enjoy the keynotes and then watch it all again on demand.
- Expo access to 150+ partners and hundreds of Databricks experts
- 500+ breakout sessions and keynotes
- 20+ hands-on trainings
- Four days of food and beverage
- Networking events and parties
- On-demand session streaming after the event
Join leading experts, researchers and open source contributors — from Databricks and across the data and AI community — who will speak at Data + AI Summit. Over 500 sessions covering everything from data warehousing and governance to the latest in generative AI. Join thousands of data leaders, engineers, scientists and architects to explore the convergence of data and AI. Explore the latest advances in Apache Spark™, Delta Lake, MLflow, PyTorch, dbt, Presto/Trino and much more. You'll also get a first look at new products and features in the Databricks Data Intelligence Platform. Connect with thousands of data and AI community peers and grow your professional network in social meetups, on the Expo floor or at our event party.
Register: https://dataaisummit.databricks.com/flow/db/dais2024/landing/page/home
Further details: https://www.databricks.com/dataaisummit/
Tagged with: summits, data & ai summit (and 12 more)
Apple is developing its own large language model (LLM) that runs on-device to prioritize speed and privacy, Bloomberg's Mark Gurman reports. Writing in his "Power On" newsletter, Gurman said that Apple's LLM underpins upcoming generative AI features. "All indications" apparently suggest that it will run entirely on-device, rather than via the cloud like most existing AI services. Since they will run on-device, Apple's AI tools may be less capable in certain instances than their direct cloud-based rivals, but Gurman suggested that the company could "fill in the gaps" by licensing technology from Google and other AI service providers. Last month, Gurman reported that Apple was in discussions with Google to integrate its Gemini AI engine into the iPhone as part of iOS 18. The main advantages of on-device processing will be quicker response times and superior privacy compared to cloud-based solutions. Apple's marketing strategy for its AI technology will apparently be based around how it can be useful to users' daily lives, rather than its power. Apple's broader AI strategy is expected to be revealed alongside previews of its major software updates at WWDC in June.
Tags: Bloomberg, Artificial Intelligence, Mark Gurman
This article, "Gurman: Apple Working on On-Device LLM for Generative AI Features" first appeared on MacRumors.com. Discuss this article in our forums. View the full article
Tricentis this week added a generative artificial intelligence (AI) capability, dubbed Tricentis Copilot, to its application testing automation platform to reduce the amount of code that DevOps teams need to manually create. Based on an instance of the large language model (LLM) created by OpenAI that has been deployed on the Microsoft Azure cloud, […] View the full article
Cloud technologies are a rapidly evolving landscape. Securing cloud applications is everyone's responsibility, meaning application development teams need to follow strict security guidelines from the earliest development stages and run continuous security scans throughout the whole application lifecycle. The rise of generative AI enables new innovative approaches for addressing longstanding challenges with reduced effort. This post showcases how engineering teams can automate efficient remediation of container CVEs (common vulnerabilities and exposures) early in their continuous integration (CI) pipeline. Using cloud services such as Amazon Bedrock, Amazon Inspector, AWS Lambda, and Amazon EventBridge, you can architect an event-driven serverless solution for automatically addressing container vulnerability detection and patching. Using the power of generative AI and serverless technologies can help simplify what used to be a complex challenge.

Overview

The exponential growth of modern applications has enabled developers to build highly decoupled microservice-based architectures. However, the distributed nature of those architectures comes with a set of operational challenges. Engineering teams were always responsible for various security aspects of their application environments, such as network security, IAM permissions, TLS certificates, and code vulnerability scanning. Addressing these aspects at the scale of dozens and hundreds of microservices requires a high degree of automation. Automation is imperative for efficient scaling as well as maintaining control and governance. Running applications in containers is a common approach for building microservices. It allows developers to have the same CI pipeline for their applications, regardless of whether they use Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), or AWS Lambda to run it. No matter which programming language you use for your application, the deployable artifact is a container image that commonly includes application code and its dependencies. It is imperative for application development teams to scan those images for vulnerabilities to ensure their safety prior to deploying them to cloud environments. Amazon Elastic Container Registry (Amazon ECR) is an OCI artifact registry that provides two types of scanning, Basic and Enhanced, powered by Amazon Inspector. The image scanning occurs after the container image is pushed to the registry. The basic scanning is triggered automatically when a new image is pushed, while the enhanced scanning runs continuously for images hosted in Amazon ECR. Both types of scans generate scan reports, but it is still the development team's responsibility to act on them: read the report, understand the vulnerabilities, patch code, open a pull request, merge, and run CI again. The following steps illustrate how you can build an automated solution that uses the power of generative AI and event-driven serverless architectures to automate this process. The following sample solution uses the "in-context learning" approach, a technique that tailors AI responses to narrow scenarios. Used for CVE patching, the solution builds AI prompts based on the programming language in question and a previously generated example of what a PR might look like.
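As a rough illustration of what the prompt-assembly and Bedrock-invocation step could look like, here is a minimal, hypothetical Java sketch using the AWS SDK for Java v2. The class name and helper are illustrative, the model ID and request-body fields follow the Llama 2 chat format on Bedrock and should be treated as assumptions, and the real sample project may structure this differently:

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelRequest;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelResponse;

// Hypothetical sketch: build a few-shot prompt from the dependency file, the
// aggregated findings, and a previously reviewed example PR, then call Bedrock.
class PrGenerator {

    private final BedrockRuntimeClient bedrock = BedrockRuntimeClient.create();

    String generatePrContent(String dependenciesFile, String findingsSummary, String inContextExample) {
        String prompt = """
                You are a security engineer. Given the dependency file and the CVE findings,
                produce an updated dependency file and a short pull request description.

                Example of a good answer:
                %s

                Current dependency file:
                %s

                Findings:
                %s
                """.formatted(inContextExample, dependenciesFile, findingsSummary);

        // Assumed Llama 2 request fields; other Bedrock models expect different JSON.
        String requestJson = """
                {"prompt": %s, "max_gen_len": 1024, "temperature": 0.2}
                """.formatted(toJsonString(prompt));

        InvokeModelResponse response = bedrock.invokeModel(InvokeModelRequest.builder()
                .modelId("meta.llama2-13b-chat-v1")   // assumed model ID; use whichever model you enabled
                .body(SdkBytes.fromUtf8String(requestJson))
                .build());

        return response.body().asUtf8String();        // real code would parse the model-specific response JSON
    }

    private static String toJsonString(String s) {
        // Minimal JSON string escaping for the sketch; use a JSON library in real code.
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\"";
    }
}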
This in-context learning approach underscores a crucial point: for some narrow use cases, using a smaller large language model (LLM), such as Llama 13B, with an assisted prompt might yield results as effective as a bigger LLM, such as Llama 2 70B. We recommend that you evaluate both few-shot prompts with smaller LLMs and zero-shot prompts with larger LLMs to find the model that works most efficiently for you. Read more about providing prompts and examples in the Amazon Bedrock documentation.

Solution architecture

Prior to packaging the application as a container, engineering teams should make sure that their CI pipeline includes steps such as static code scanning with tools such as SonarQube or Amazon CodeGuru, and image analysis tools such as Trivy or Docker Scout. Validating your code for vulnerabilities at this stage aligns with the shift-left mentality, and engineers should be able to detect and address potential threats in their code in the earliest stages of development. After packaging the new application code and pushing it to Amazon ECR, the image scanning with Amazon Inspector is triggered. Engineers can use languages supported by Amazon Inspector. As image scanning runs, Amazon Inspector emits EventBridge Finding events for each vulnerability detected.

1. CI is triggered by a developer pushing new code to the shared code repository. This step is not implemented in the provided sample, and different engineering teams can use different tools for their CI pipeline.
2. The application container image is built and pushed to Amazon ECR.
3. Amazon Inspector is triggered automatically. Note that you must first enable Amazon Inspector ECR enhanced scanning in your account.
4. As Amazon Inspector scans the image, it emits findings as events to EventBridge. Each finding generates a separate event. See the example JSON payload of a finding event in the Inspector documentation.
5. EventBridge is configured to invoke a Lambda function for each finding event.
6. Lambda is invoked for each finding. The function aggregates the findings and updates the Amazon DynamoDB table with each finding's information.
7. Once Amazon Inspector completes the scan, it emits the scan-complete event to EventBridge, which calls the PR creation microservice, hosted as an Amazon ECS Fargate task, to start the PR generation process.
8. The PR creation microservice clones the code repo to see the current dependencies list. Then it retrieves the aggregated findings data from DynamoDB and builds a prompt using the dependencies list, the findings data, and an in-context learning example based on previous scans.
9. The microservice invokes Amazon Bedrock to generate the new PR content.
10. Once the PR content is generated, the microservice opens a new PR and pushes the changes upstream.
11. Engineering teams validate the PR and merge it into the code repository. Over time, as engineering teams gain trust in the process, they might consider automating the merge step as well.

Sample implementation

Use the example project to replicate this solution in your AWS account. Follow the instructions in README.md for provisioning and testing the sample project using Hashicorp Terraform. Under the /apps directory of the sample project you should see two applications. The /apps/my-awesome-application intentionally contains a set of vulnerable dependencies. This application was used to create examples of what a PR should look like. Once the engineering team took this application through Amazon Inspector and Amazon Bedrock manually, a file containing this example was generated.
See in_context_examples.py. Although it can be a one-time manual process, engineering teams can also periodically add more examples as they evolve and improve the generative AI model response. The /apps/my-amazing-application is the actual application that the engineering team works on delivering business value. They deploy this application several times a day to multiple environments, and they want to make sure that it doesn't have vulnerabilities. Based on the in-context example created previously, they're continuously using Amazon Inspector to detect new vulnerabilities, as well as Amazon Bedrock to automatically generate pull requests that patch those vulnerabilities. The following example shows a pull request generated when a member of the development team has introduced vulnerable dependencies. The pull request contains details about the packages with detected vulnerabilities and CVEs, as well as recommendations for how to patch them. Moreover, the pull request already contains an updated version of the requirements.txt file with the changes in place. The only thing left for the engineering team to do is review and merge the pull request.

Conclusion

This post illustrates a simple solution to address container image (OCI) vulnerabilities using AWS services such as Amazon Inspector, Amazon ECR, Amazon Bedrock, Amazon EventBridge, AWS Lambda, and Amazon Fargate. The serverless and event-driven nature of this solution helps ensure cost efficiency and minimal operational overhead. Engineering teams do not need to run additional infrastructure to implement this solution. Using generative AI and serverless technologies helps simplify what used to be a complex and laborious process. Having an automated workflow in place allows engineering teams to focus on delivering business value, thereby improving overall security posture without extra operational overhead. Check out the step-by-step deployment instructions and sample code for the solution discussed in this post in this GitHub repository.

References
- https://aws.amazon.com/blogs/aws/amazon-bedrock-now-provides-access-to-llama-2-chat-13b-model/
- https://docs.aws.amazon.com/bedrock/latest/userguide/general-guidelines-for-bedrock-users.html
- https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-a-prompt.html#few-shot-prompting-vs-zero-shot-prompting

View the full article
The emergence of low/no-code platforms is challenging traditional notions of coding expertise. Gone are the days when coding was an exclusive skill set reserved for a knowledgeable few. Low/no-code platforms have democratized software development. They empower individuals from non-IT or technical backgrounds to translate their business ideas into applications without the need to master complex […] View the full article
A Pandora's Box: Unpacking 5 Risks in Generative AI
madhav | Thu, 04/18/2024 - 05:07
Generative AI (GAI) is becoming increasingly crucial for business leaders due to its ability to fuel innovation, enhance personalization, automate content creation, augment creativity, and help teams explore new possibilities. This is confirmed by surveys, with 83% of business leaders saying they intend to increase their investments in the technology by 50% or more in the next six to 12 months. Unfortunately, the increasing use of AI tools has also brought a slew of emerging threats that security and IT teams are ill-equipped to deal with. Almost half (47%) of IT practitioners believe security threats are increasing in volume or severity and that the use of AI exacerbates these risks. It has become a race between security teams and advanced attackers to see who will be the first to take advantage of AI's incredible abilities successfully.
The Rising Threat Landscape
Adversaries are already harnessing the power of generative AI for several nefarious purposes.
- Stealing the Model: AI models are the crown jewels of organizations leveraging machine learning algorithms. However, they are also prime targets for malicious actors seeking to gain an unfair advantage or disrupt operations. By infiltrating systems or exploiting vulnerabilities, adversaries can steal these models, leading to intellectual property theft and competitive disadvantage.
- AI Hallucinations: These are instances where artificial intelligence systems generate outputs that are not grounded in reality or are inconsistent with the intended task. These hallucinations can occur for various reasons, such as errors in the AI's algorithms, biases in the training data, or limitations in the AI's understanding of the context or task.
- Data Poisoning: The integrity of AI systems relies heavily on the quality and reliability of the data they are trained on. Data poisoning involves injecting malicious inputs into training datasets, corrupting the learning process, and compromising the model's performance. This tactic can manipulate outcomes, undermine decision-making processes, and even lead to catastrophic consequences in critical applications like healthcare or finance.
- Prompt Injection: Prompt injection attacks target natural language processing (NLP) models by injecting specific prompts or queries designed to elicit unintended responses. These subtle manipulations can deceive AI systems into generating misleading outputs or executing unauthorized actions, posing significant risks in applications such as chatbots, virtual assistants, or automated customer service platforms.
- Extracting Confidential Information: As AI systems process vast amounts of data, they often handle sensitive or proprietary information. Malicious actors exploit vulnerabilities within AI infrastructures to extract confidential data, including customer records, financial transactions, or trade secrets. Such breaches jeopardize privacy and expose enterprises to regulatory penalties, legal liabilities, and reputational damage.
Vulnerabilities in AI
There have already been instances where AI has caused vulnerabilities in popular apps and software. Security researchers from Imperva described in a blog called XSS Marks the Spot: Digging Up Vulnerabilities in ChatGPT how they discovered multiple security vulnerabilities in OpenAI's ChatGPT that, if exploited, would enable bad actors to hijack a user's account.
The company's researchers pinpointed two cross-site scripting (XSS) vulnerabilities, along with other security issues, in the ChatGPT backend. They outlined the process of refining their exploit from requiring a user to upload a malicious file and interact in a specific manner, to merely necessitating a single click on a citation within a ChatGPT conversation. This was accomplished by exploiting client-side path traversal and a broken function-level authorization bug in the ChatGPT backend. In another Imperva blog, Hacking Microsoft and Wix with Keyboard Shortcuts, researchers focused on the anchor tag and its behavior with varying target attributes and protocols. They noted that its inconsistent behavior can confuse security teams, enabling bugs to go unnoticed and become potential targets for exploitation. The technique shown in this blog post was instrumental in exploiting the second XSS bug they found in ChatGPT.
Upholding AI Responsibility: Enterprise Strategies
In the face of evolving risks, one thing is clear: The proliferation of GAI will change investment plans in cybersecurity circles. Enterprises must prioritize responsible AI governance to mitigate threats and uphold ethical standards. There are several key strategies for organizations to navigate this complex landscape:
- Secure Model Development and Deployment: Implement robust security measures throughout the AI lifecycle, from data collection and model training to deployment and monitoring. Employ encryption, access controls, and secure development practices to safeguard models against unauthorized access or tampering.
- Foster Collaboration and Knowledge Sharing: Foster a culture of collaboration and information sharing within the organization and across industry sectors. By collaborating with peers, academia, and cybersecurity experts, enterprises can stay informed about emerging threats, share best practices, and collectively address common challenges.
- Embrace Transparency and Accountability: Prioritize transparency and accountability in AI decision-making processes. Document model development methodologies, disclose data sources and usage policies, and establish mechanisms for auditing and accountability to ensure fairness, transparency, and compliance.
- Investments in Training: Security staff must be trained in AI systems and generative AI. All this requires new investments in cybersecurity and must be budgeted through a multi-year budgeting cycle.
- Developing Corporate Policies: Developing AI policies that govern the responsible use of GAI tools within the business is also critical to ensure ethical decision-making and mitigate the potential risks associated with their use. These policies can protect against biases, safeguard privacy, and ensure transparency, fostering trust both within the business and among all its stakeholders.
Changing regulations
Regulations are also changing to accommodate the threats posed by AI. Efforts such as the White House Blueprint for an AI Bill of Rights and the EU AI Act provide the guardrails to guide the responsible design, use, and deployment of automated systems. One of these principles is privacy by design, which calls for privacy protections by default, including making sure that data collection conforms to reasonable expectations and that only data strictly needed for the specific context is collected. Consent management is also considered critical.
In sensitive domains, consumer data and related inferences should only be used for strictly necessary functions, and the consumer must be protected by ethical review and use prohibitions. Several frameworks provide appropriate guidance for implementors and practitioners. However, we are still early and in the discovery phase, so it will take time for data privacy and security regulations to evolve to include GAI-related safety considerations. For now, the best thing that organizations and security teams can do is keep learning, invest in GAI training (not only for the security professionals but all staff), and budget for incremental investments. This should help them stay a step ahead of adversaries.
Luke Richardson | Product Marketing Manager, Imperva
The post A Pandora's Box: Unpacking 5 Risks in Generative AI appeared first on Security Boulevard. View the full article
Generative AI tools can use retrieval-augmented generation to access new information that wasn't included in the training dataset. What does this mean for your business? The post How GenAI Uses Retrieval-Augmented Generation & What It Means for Your Business appeared first on Security Boulevard. View the full article
Nvidia has unveiled new GPUs that it says will be able to bring the power of generative AI to a wider audience than ever before. The new Nvidia RTX A400 and A1000 GPUs will give creatives and professionals alike access to some of the most useful AI tools in their fields, without demanding huge amounts of computing power and resources as is currently the case. Built on the company's Ampere architecture, the new GPUs will bring tools such as real-time ray tracing to a wider array of desktops and workstations, allowing generative AI tools to reach a bigger audience.
AI for all
"AI integration across design and productivity applications is becoming the new standard, fueling demand for advanced computing performance," Nvidia's senior product marketing manager for enterprise platforms Stacy Ozorio noted in a blog post announcing the launch. "This means professionals and creatives will need to tap into increased compute power, regardless of the scale, complexity or scope of their projects." The RTX A400 includes 24 Tensor Cores for AI processing, taking it far beyond traditional CPU-based machines, which Nvidia says allows for running cutting-edge AI services such as chatbots and copilots directly on the desktop. In a first for the RTX 400 series, the A400 also includes four display outputs, making it a good fit in industries such as retail, transportation and financial services, which can benefit from high-density display environments showing off detailed 3D renders. The A1000 is the first in the RTX 1000 series to bring Tensor Cores and RT Cores to users, allowing them to utilize ray-tracing performance and accelerated AI tools, while boasting a sleek, single-slot design that consumes just 50W of power. With the power of 72 Tensor Cores, it offers 3x faster generative AI processing for tools like Stable Diffusion over the previous generation, as well as faster video processing, with its 18 RT Cores speeding up graphics and rendering tasks by up to 3x, making it ideal for tasks such as 4K video editing, CAD and architectural designs. "These new GPUs empower users with cutting-edge AI, graphics and compute capabilities to boost productivity and unlock creative possibilities," Ozorio added. "Advanced workflows involving ray-traced renders and AI are now within reach, allowing professionals to push the boundaries of their work and achieve stunning levels of realism." The A1000 GPU is available now, with the A400 set to go on sale later in the summer of 2024.
More from TechRadar Pro
- Nvidia GTC 2024 — all the updates as it happened
- Nvidia says its new Blackwell is set to power the next generation of AI
- We've also rounded up the best mobile workstations around
View the full article
From self-driving cars to AI agents and transformative drug discovery, humanity is entering a fourth industrial revolution - one powered by artificial intelligence. Nations around the world have taken notice. Harnessing generative AI promises massive socioeconomic, cultural and geopolitical benefits, yet modernizing a government's ability to enable and improve its AI capabilities requires creating nationwide accelerated IT infrastructure on a level as basic and critical as energy and water grids. Countries that fail to invest in sovereign AI not only risk being left behind by their more AI-literate counterparts but also resign themselves to dependency on other countries for critical 21st-century resources.
What is an AI factory?
While the first industrial revolution brought us coal-fired factories to make work more efficient and the telegraph to empower wider communication, this latest revolution is spurred by the most computationally demanding task to ever face humanity – generative AI. Generative AI enables users to quickly create new content based on a variety of inputs, such as text or images. Because of the massive amounts of data this entails, our current computing infrastructure simply won't suffice. European nations must prioritize the creation of sovereign AI infrastructure to meet demand. In practice, this means the creation of AI factories. At a basic level, an AI factory is where data comes in and intelligence comes out. It's an entirely new generation of data center that uses a full-stack accelerated computing platform to perform the most intensive computational tasks. Much like heavy machinery is needed to refine raw materials into more useful resources, substantial computing power is required to turn enormous amounts of raw data into intelligence. The AI factory will become the bedrock of modern economies across the world. Currently, the world's most powerful supercomputers are clustered, with the majority of AI computing power in prestigious universities, research labs and a handful of companies. This landscape prevents many nations from creating generative AI that takes advantage of valuable local data to understand the local language and its nuances. The Future of Compute Review, commissioned by the UK Government, found that for the UK to project its global power as a science and technology leader, it needed to ensure its own sovereign computing capability.
Cooperating with national champions
The sovereign AI race is already underway. Japan, India and Singapore have already announced plans to construct next-generation AI factories. While these countries are enjoying a head start, the race is far from over. Real progress is already starting to be made in Europe, as the European Commission has recently announced its support for a network of AI factories. However, governments are unable to power this new industrial revolution alone. Generative AI development on this scale requires vast resources in material wealth and technical skills, so partnering with the private sector will be critical to success. Every country already has its own strong domestic sector, filled with local technology champions. Making the most of their expertise and capabilities is the first step to success. The telecommunications industry is one such industry that is well-positioned to support generative AI infrastructure efforts by evolving into AI factories.
Leading telecom operators, such as Orange in France or BT and EE in the United Kingdom, are trusted service providers with large in-region customer bases. The demands of the telco industry have prepared these companies to effectively assist the generative AI infrastructure revolution. Telcos are already used to intensive investment and infrastructure replacement cycles, such as recent rollouts of 4G and 5G solutions. Moreover, they have access to secure, high-performance distributed data centers located close to large metropolitan areas, which helps to combat latency issues. If Europe is to sit in the driving seat of the latest industrial revolution, rather than just be a passenger, European countries must make AI infrastructure investment an absolute priority. A new understanding of sovereignty Although we are in the midst of a generative AI boom and interest keeps growing, development and deployment tools remain limited in terms of their accessibility. Most, if not all, of the most popular AI tools are primarily available in the English language. In a geographical area as culturally and linguistically diverse as Europe, AI tools need to be accessible to all – not only those who happen to speak English. Making this a reality means using local data, implementing local languages and, most of all, bringing the translation capabilities to do so within one's own borders. Changing the perception of sovereignty to include computing power is no small feat and is certainly not achievable without action. The shift to sovereign data centers both preserves cultures and native languages in AI tools and ensures that GenAI applications can function accurately within their specific context. But it will require generational investments and ongoing support. The AI infrastructure that tomorrow’s economies will be built upon simply does not exist yet, and those who begin building first will stand to have the most to gain. Securely store your business data with the best business cloud storage. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
San Francisco, Calif. — The amazing digital services we have today wouldn't have come to fruition without the leading technology and telecom giants investing heavily in R&D.
Related: GenAi empowers business
I had the chance to attend NTT Research's Upgrade … (more…) The post MY TAKE: GenAI revolution — the transformative power of ordinary people conversing with AI appeared first on Security Boulevard. View the full article
The PartyRock Generative AI Hackathon wrapped up earlier this month. Entrants were asked to use PartyRock to build a functional app based on one of four challenge categories, with the option to remix an existing app as well. The hackathon attracted 7,650 registrants who submitted over 1,200 projects, and published over 250 project blog posts on community.aws. As a member of the judging panel, I was blown away by the creativity and sophistication of the entries that I was asked to review. The participants had the opportunity to go hands-on with prompt engineering and to learn about Foundation Models, and pushed the bounds of what was possible. Let's take a quick look at the winners of the top 3 prizes…
First Place
First up, taking home the top overall prize of $20,000 in AWS credits is Parable Rhythm – The Interactive Crime Thriller by Param Birje. This project immerses you in a captivating interactive story using PartyRock's generative capabilities. Just incredible stuff. To learn more, read the hackathon submission and the blog post.
Second Place
In second place, earning $10,000 in credits, is Faith – Manga Creation Tools by Michael Oswell. This creative assistant app lets you generate original manga panels with the click of a button. So much potential there. To learn more, read the hackathon submission.
Third Place
And rounding out the top 3 overall is Arghhhh! Zombie by Michael Eziamaka. This is a wildly entertaining generative AI-powered zombie game that had the judges on the edge of their seats. Great work, Michael! To learn more, read the hackathon submission.
Round of Applause
I want to give a huge round of applause to all our category winners as well:
Category / Place | Submission | Prize (USD) | AWS Credits
Overall 1st Place | Parable Rhythm | – | $20,000
Overall 2nd Place | Faith – Manga Creation Tools | – | $10,000
Overall 3rd Place | Arghhhh! Zombie | – | $5,000
Creative Assistants 1st Place | Faith – Manga Creation Tools | $4,000 | $1,000
Creative Assistants 2nd Place | MovieCreator | $1,500 | $1,000
Creative Assistants 3rd Place | WingPal | $500 | $1,000
Experimental Entertainment 1st Place | Parable Rhythm | $4,000 | $1,000
Experimental Entertainment 2nd Place | Arghhhh! Zombie | $1,500 | $1,000
Experimental Entertainment 3rd Place | Find your inner potato | $500 | $1,000
Interactive Learning 1st Place | DeBeat Coach | $4,000 | $1,000
Interactive Learning 2nd Place | Asteroid Mining Assistant | $1,500 | $1,000
Interactive Learning 3rd Place | Unlock your pet's language | $500 | $1,000
Freestyle 1st Place | MindMap Party | $1,000 | $1,000
Freestyle 2nd Place | Angler Advisor | $750 | $1,000
Freestyle 3rd Place | SafeScares | $250 | $1,000
BONUS: Remix ChatRPG | Inferno | – | $2,500
BONUS: Remix ChatRPG | Chat RPG Generator | – | $2,500
From interactive learning experiences to experimental entertainment, the creativity and technical execution on display was off the charts. And of course, a big thank you to all 7,650 participants who dove in and pushed the boundaries of what's possible with generative AI. You all should be extremely proud.
Join the Party
You can click on any of the images above and try out the apps for yourself. You can remix and customize them, and you can build your own apps as well (read my post, Build AI apps with PartyRock and Amazon Bedrock, to see how to get started). Alright, that's a wrap. Congrats again to our winners, and a huge thanks to the PartyRock team and all our amazing sponsors. I can't wait to see what you all build next. Until then, keep building, keep learning, and keep having fun! — Jeff; View the full article
Forum Statistics
67.4k Total Topics
65.3k Total Posts