Showing results for tags 'anthropic'.

Found 11 results

  1. Agents for Amazon Bedrock enable developers to create generative AI-based applications that can complete complex tasks for a wide range of use cases and deliver answers based on company knowledge sources. To complete complex tasks with high accuracy, the reasoning capabilities of the underlying foundation model (FM) play a critical role. View the full article
  2. AWS Summits continue to rock the world, with events taking place in various locations around the globe. AWS Summit London (April 24) is the last one in April, and there are nine more in May, including AWS Summit Berlin (May 15–16), AWS Summit Los Angeles (May 22), and AWS Summit Dubai (May 29). Join us to connect, collaborate, and learn about AWS! While you decide which summit to attend, let’s look at last week’s new announcements. Last week’s launches Last week was another busy one in the world of artificial intelligence (AI). Here are some launches that got my attention. Anthropic’s Claude 3 Opus now available in Amazon Bedrock – Following Claude 3 Sonnet and Claude 3 Haiku, Opus, the third of Anthropic’s state-of-the-art Claude 3 models, is now available in Amazon Bedrock. Claude 3 Opus is at the forefront of generative AI, demonstrating comprehension and fluency on complicated tasks at nearly human levels. Like the rest of the Claude 3 family, Opus can process images and return text outputs. Claude 3 Opus shows an estimated twofold gain in accuracy over Claude 2.1 on difficult open-ended questions, reducing the likelihood of faulty responses. Meta Llama 3 now available in Amazon SageMaker JumpStart – Meta Llama 3 is now available in Amazon SageMaker JumpStart, a machine learning (ML) hub that can help you accelerate your ML journey. You can deploy and use Llama 3 foundation models (FMs) with a few steps in Amazon SageMaker Studio or programmatically through the Amazon SageMaker Python SDK. Llama 3 is available in two parameter sizes, 8B and 70B, and can be used to support a broad range of use cases, with improvements in reasoning, code generation, and instruction following. The model will be deployed in an AWS secure environment under your VPC controls, helping ensure data security. 
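The programmatic SageMaker path mentioned above can be sketched with the SageMaker Python SDK. This is a minimal sketch, not the official walkthrough: the JumpStart model IDs below are assumptions based on JumpStart naming conventions (verify them in SageMaker Studio), and the deploy call is wrapped in a function so nothing touches AWS on import.

```python
# Map of the two announced Llama 3 parameter sizes to assumed JumpStart model IDs.
LLAMA3_MODEL_IDS = {
    "8b": "meta-textgeneration-llama-3-8b",
    "70b": "meta-textgeneration-llama-3-70b",
}


def deploy_llama3(size: str = "8b"):
    """Deploy a Llama 3 endpoint from JumpStart (requires AWS credentials; not run here)."""
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id=LLAMA3_MODEL_IDS[size])
    # Accepting the end-user license agreement is required for Llama models.
    predictor = model.deploy(accept_eula=True)
    return predictor


print(sorted(LLAMA3_MODEL_IDS))  # → ['70b', '8b']
```

Once deployed, the returned predictor can be used for inference and the endpoint deleted when no longer needed to avoid charges.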
Built-in SQL extension with Amazon SageMaker Studio Notebooks – SageMaker Studio’s JupyterLab now includes a built-in SQL extension to discover, explore, and transform data from various sources using SQL and Python directly within the notebooks. You can now seamlessly connect to popular data services and easily browse and search databases, schemas, tables, and views. You can also preview data within the notebook interface. New features such as SQL command completion, code formatting assistance, and syntax highlighting improve developer productivity. To learn more, visit Explore data with ease: Use SQL and Text-to-SQL in Amazon SageMaker Studio JupyterLab notebooks and the SageMaker Developer Guide. AWS Split Cost Allocation Data for Amazon EKS – You can now receive granular cost visibility for Amazon Elastic Kubernetes Service (Amazon EKS) in the AWS Cost and Usage Reports (CUR) to analyze, optimize, and chargeback cost and usage for your Kubernetes applications. You can allocate application costs to individual business units and teams based on how Kubernetes applications consume shared Amazon EC2 CPU and memory resources, and aggregate these costs by cluster, namespace, and other Kubernetes primitives. These cost details will be accessible in the CUR 24 hours after opt-in. You can use the Containers Cost Allocation dashboard to visualize the costs in Amazon QuickSight and the CUR query library to query the costs using Amazon Athena. AWS KMS automatic key rotation enhancements – AWS Key Management Service (AWS KMS) introduces faster options for automatic symmetric key rotation. You can now customize rotation frequency from 90 days to 7 years, invoke key rotation on demand for customer-managed AWS KMS keys, and view the rotation history for any rotated AWS KMS key. 
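The KMS rotation enhancements above can be exercised with boto3. A minimal sketch, assuming the API names as of this launch, a hypothetical key ID, and treating the 90-day-to-7-year window as roughly 90–2560 days; the AWS calls are wrapped in a function so nothing touches KMS on import:

```python
# Approximate bounds for the configurable rotation period (90 days to ~7 years).
MIN_ROTATION_DAYS, MAX_ROTATION_DAYS = 90, 2560


def validate_rotation_period(days: int) -> int:
    """Reject rotation periods outside the supported window."""
    if not MIN_ROTATION_DAYS <= days <= MAX_ROTATION_DAYS:
        raise ValueError(
            f"rotation period must be {MIN_ROTATION_DAYS}-{MAX_ROTATION_DAYS} days"
        )
    return days


def rotate_and_audit(key_id: str, period_days: int = 180):
    """Customize rotation frequency, rotate on demand, and read the rotation history."""
    import boto3

    kms = boto3.client("kms")
    # Set a custom automatic rotation period on a customer-managed key.
    kms.enable_key_rotation(
        KeyId=key_id, RotationPeriodInDays=validate_rotation_period(period_days)
    )
    # Invoke an immediate, on-demand rotation.
    kms.rotate_key_on_demand(KeyId=key_id)
    # View the rotation history for the key.
    return kms.list_key_rotations(KeyId=key_id)["Rotations"]
```

The validator mirrors the announced range; the three client calls map one-to-one onto the three new capabilities described in the launch.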
There is a nice post on the Security Blog you can visit to learn more about this feature, including a little bit of history about cryptography. Amazon Personalize automatic solution training – Amazon Personalize now offers automatic training for solutions. With automatic training, you can set a cadence for your Amazon Personalize solutions to automatically retrain using the latest data from your dataset group. This process creates a newly trained machine learning (ML) model, also known as a solution version, and maintains the relevance of Amazon Personalize recommendations for end users. Automatic training mitigates model drift and makes sure recommendations align with users’ evolving behaviors and preferences. With Amazon Personalize, you can personalize your website, app, ads, emails, and more, using the same machine learning technology used by Amazon, without requiring any prior ML experience. To get started with Amazon Personalize, visit our documentation. For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page. We launched existing services and instance types in additional Regions: Amazon RDS for Oracle extends support for x2iedn in Asia Pacific (Hyderabad, Jakarta, and Osaka), Europe (Milan and Paris), US West (N. California), AWS GovCloud (US-East), and AWS GovCloud (US-West). X2iedn instances are targeted for enterprise-class high-performance databases with high compute (up to 128 vCPUs), large memory (up to 4 TB) and storage throughput requirements (up to 256K IOPS) with a 32:1 ratio of memory to vCPU. Amazon MSK is now available in Canada West (Calgary) Region. Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon Cognito is now available in Europe (Spain) Region. 
Amazon Cognito makes it easy to add authentication, authorization, and user management to your web and mobile apps, supporting sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, and enterprise identity providers through standards such as SAML 2.0 and OpenID Connect. AWS Network Manager is now available in AWS Israel (Tel Aviv) Region. AWS Network Manager reduces the operational complexity of managing global networks across AWS and on-premises locations by providing a single global view of your private network. AWS Storage Gateway is now available in AWS Canada West (Calgary) Region. AWS Storage Gateway is a hybrid cloud storage service that provides on-premises applications access to virtually unlimited storage in the cloud. Amazon SQS announces support for FIFO dead-letter queue redrive in the AWS GovCloud (US) Regions. Dead-letter queue redrive is an enhanced capability to improve the dead-letter queue management experience for Amazon Simple Queue Service (Amazon SQS) customers. Amazon EC2 R6gd instances are now available in Europe (Zurich) Region. R6gd instances are powered by AWS Graviton2 processors and are built on the AWS Nitro System. These instances offer up to 25 Gbps of network bandwidth, up to 19 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS), up to 512 GiB RAM, and up to 3.8 TB of NVMe SSD local instance storage. Amazon Simple Email Service is now available in the AWS GovCloud (US-East) Region. Amazon Simple Email Service (SES) is a scalable, cost-effective, and flexible cloud-based email service that allows you to send marketing, notification, and transactional emails from within any application. To learn more, visit the Amazon SES page. AWS Glue Studio Notebooks is now available in the Middle East (UAE), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Israel (Tel Aviv), Europe (Spain), and Europe (Zurich) Regions. 
AWS Glue Studio Notebooks provides interactive job authoring in AWS Glue, which helps simplify the process of developing data integration jobs. To learn more, visit Authoring code with AWS Glue Studio notebooks. Amazon S3 Access Grants is now available in the Middle East (UAE), Asia Pacific (Melbourne), Asia Pacific (Hyderabad), and Europe (Spain) Regions. Amazon Simple Storage Service (Amazon S3) Access Grants map identities in directories such as Microsoft Entra ID, or AWS Identity and Access Management (IAM) principals, to datasets in S3. This helps you manage data permissions at scale by automatically granting S3 access to end-users based on their corporate identity. To learn more, visit the Amazon S3 Access Grants page. Other AWS news Here is some additional news that you might find interesting: The PartyRock Generative AI Hackathon winners – The PartyRock Generative AI Hackathon concluded with over 7,650 registrants submitting 1,200 projects across four challenge categories, featuring top winners like Parable Rhythm – The Interactive Crime Thriller, Faith – Manga Creation Tools, and Arghhhh! Zombie. Participants showed remarkable creativity and technical prowess, with prizes totaling $60,000 in AWS credits. I tried the Faith – Manga Creation Tools app using my daughter Arya’s made-up stories and ideas and the result was quite impressive. Visit Jeff Barr’s post to learn more about how to try the apps for yourself. AWS open source news and updates – My colleague Ricardo writes about open source projects, tools, and events from the AWS Community. Check out Ricardo’s page for the latest updates. Upcoming AWS events Check your calendars and sign up for upcoming AWS events: AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Singapore (May 7), Seoul (May 16–17), Hong Kong (May 22), Milan (May 23), Stockholm (June 4), and Madrid (June 5). 
AWS re:Inforce – Explore cloud security in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania for 2.5 days of immersive cloud security learning designed to help drive your business initiatives. AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Turkey (May 18), Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), Nigeria (August 24), and New York (August 28). You can browse all upcoming AWS-led in-person and virtual events and developer-focused events here. That’s all for this week. Check back next Monday for another Weekly Roundup! — Esra This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS! View the full article
  3. Anthropic’s Claude 3 Opus foundation model, the most advanced and intelligent model in the Claude 3 family, is now available on Amazon Bedrock. The Claude 3 family of models (Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku) is the next generation of state-of-the-art models from Anthropic. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies, like Anthropic, along with a broad set of capabilities that provide you with the easiest way to build and scale generative AI applications. View the full article
  4. We are living in the generative artificial intelligence (AI) era, a time of rapid innovation. When Anthropic announced its Claude 3 foundation models (FMs) on March 4, we made Claude 3 Sonnet, a model balanced between skills and speed, available on Amazon Bedrock the same day. On March 13, we launched the Claude 3 Haiku model on Amazon Bedrock, the fastest and most compact member of the Claude 3 family for near-instant responsiveness. Today, we are announcing the availability of Anthropic’s Claude 3 Opus on Amazon Bedrock, the most intelligent Claude 3 model, with best-in-market performance on highly complex tasks. It can navigate open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding, leading the frontier of general intelligence. With the availability of Claude 3 Opus on Amazon Bedrock, enterprises can build generative AI applications to automate tasks, generate revenue through user-facing applications, conduct complex financial forecasts, and accelerate research and development across various sectors. Like the rest of the Claude 3 family, Opus can process images and return text outputs. Claude 3 Opus shows an estimated twofold gain in accuracy over Claude 2.1 on difficult open-ended questions, reducing the likelihood of faulty responses. As enterprise customers rely on Claude across industries like healthcare, finance, and legal research, improved accuracy is essential for safety and performance. How does Claude 3 Opus perform? Claude 3 Opus outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate-level expert knowledge (MMLU), graduate-level expert reasoning (GPQA), basic mathematics (GSM8K), and more. It exhibits high levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence. 
Source: https://www.anthropic.com/news/claude-3-family Here are a few supported use cases for the Claude 3 Opus model: Task automation: planning and execution of complex actions across APIs, databases, and interactive coding Research: brainstorming and hypothesis generation, research review, and drug discovery Strategy: advanced analysis of charts and graphs, financials and market trends, and forecasting To learn more about Claude 3 Opus’s features and capabilities, visit Anthropic’s Claude on Bedrock page and Anthropic Claude models in the Amazon Bedrock documentation. Claude 3 Opus in action If you are new to using Anthropic models, go to the Amazon Bedrock console and choose Model access on the bottom left pane. Request access separately for Claude 3 Opus. To test Claude 3 Opus in the console, choose Text or Chat under Playgrounds in the left menu pane. Then choose Select model and select Anthropic as the category and Claude 3 Opus as the model. To test more Claude prompt examples, choose Load examples. You can view and run examples specific to Claude 3 Opus, such as analyzing a quarterly report, building a website, and creating a side-scrolling game. By choosing View API request, you can also access the model using code examples in the AWS Command Line Interface (AWS CLI) and AWS SDKs. 
Here is a sample of the AWS CLI command:

aws bedrock-runtime invoke-model \
  --model-id anthropic.claude-3-opus-20240229-v1:0 \
  --body "{\"messages\":[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Your task is to create a one-page website for an online learning platform.\\n\"}]}],\"anthropic_version\":\"bedrock-2023-05-31\",\"max_tokens\":2000,\"temperature\":1,\"top_k\":250,\"top_p\":0.999,\"stop_sequences\":[\"\\n\\nHuman:\"]}" \
  --cli-binary-format raw-in-base64-out \
  --region us-east-1 \
  invoke-model-output.txt

As I mentioned in my previous Claude 3 model launch posts, you need to use the new Anthropic Claude Messages API format for some Claude 3 model features, such as image processing. If you use the Anthropic Claude Text Completions API and want to use Claude 3 models, you should upgrade to the Messages API. My colleagues, Dennis Traub and Francois Bouteruche, are building code examples for Amazon Bedrock using AWS SDKs. You can learn how to invoke Claude 3 on Amazon Bedrock to generate text or multimodal prompts for image analysis in the Amazon Bedrock documentation. Here is sample JavaScript code to send a Messages API request to generate text:

// claude_opus.js - Invokes Anthropic Claude 3 Opus using the Messages API.
import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";

const modelId = "anthropic.claude-3-opus-20240229-v1:0";
const prompt = "Hello Claude, how are you today?";

// Create a new Bedrock Runtime client instance
const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Prepare the payload for the model
const payload = {
  anthropic_version: "bedrock-2023-05-31",
  max_tokens: 1000,
  messages: [{ role: "user", content: [{ type: "text", text: prompt }] }],
};

// Invoke Claude with the payload and wait for the response
const command = new InvokeModelCommand({
  contentType: "application/json",
  body: JSON.stringify(payload),
  modelId,
});
const apiResponse = await client.send(command);

// Decode and print Claude's response
const decodedResponseBody = new TextDecoder().decode(apiResponse.body);
const responseBody = JSON.parse(decodedResponseBody);
const text = responseBody.content[0].text;
console.log(`Response: ${text}`);

Now, you can install the AWS SDK for JavaScript Runtime Client for Node.js and run claude_opus.js.

npm install @aws-sdk/client-bedrock-runtime
node claude_opus.js

For more examples in different programming languages, check out the code examples section in the Amazon Bedrock User Guide, and learn how to use system prompts with Anthropic Claude at Community.aws. Now available Claude 3 Opus is available today in the US West (Oregon) Region; check the full Region list for future updates. Give Claude 3 Opus a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts. — Channy View the full article
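For readers working in Python rather than JavaScript, the same Messages API request can be assembled with boto3. A minimal sketch, not an official SDK example: the invoke step is wrapped in a function so it only runs when AWS credentials are available, and the Region matches the US West (Oregon) launch Region.

```python
import json

MODEL_ID = "anthropic.claude-3-opus-20240229-v1:0"


def build_messages_payload(prompt: str, max_tokens: int = 1000) -> dict:
    """Assemble a Messages API request body for Claude 3 on Amazon Bedrock."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    }


def invoke_opus(prompt: str) -> str:
    """Send the request and return the generated text (needs AWS credentials)."""
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-west-2")
    response = client.invoke_model(
        modelId=MODEL_ID, body=json.dumps(build_messages_payload(prompt))
    )
    body = json.loads(response["body"].read())
    return body["content"][0]["text"]
```

The payload builder mirrors the JavaScript example's structure, so the two snippets produce equivalent request bodies.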
  5. Earlier this month, we shared the news that Anthropic’s Claude 3 family of models would soon be available to Google Cloud customers on Vertex AI Model Garden. Today, we’re announcing that Claude 3 Sonnet and Claude 3 Haiku are generally available to all customers on Vertex AI. Claude 3 Opus, Anthropic’s most capable and intelligent model to date, will also be available on Vertex AI in the coming weeks. Claude 3 Sonnet can help save time writing code or parsing text from images, excels at data-processing tasks like search and retrieval, and supports use cases like product recommendations, forecasting, and targeted marketing. Claude 3 Haiku is a great option for building quick and accurate customer interactions, applying content moderation, optimizing logistics, managing inventory, extracting knowledge from unstructured data, and more. In this blog post, we'll share how Google Cloud simplifies working with Anthropic’s newest models, highlight what our customers are saying, and provide a guide for getting started with Claude 3 on Vertex AI. Why use Claude 3 models on Google Cloud Vertex AI Model Garden includes over 130 models, from cutting-edge frontier models like Claude 3 and Google’s Gemini model family, to open models such as Llama 2 from Meta, Mixtral 8x7B from Mistral AI, and many more. This variety of leading models, in addition to a deep set of model development and deployment tools, makes Vertex AI the comprehensive, enterprise-ready destination for working at scale with generative foundation models like Claude 3: Flexibility and choice: Experimentation and optionality are crucial when balancing generative AI capabilities against budgetary constraints. Through Vertex AI, we are committed to providing customers a range of models and tools to enable greater flexibility and choice. 
Whether customers need large models for complex analysis, lighter models for powering conversational experiences at scale, or anything in between, Vertex AI allows customers to test, compare, and build generative AI applications for their specific use case. Built-in data privacy, security, and governance: With Vertex AI, data privacy and security are built-in. Customers control their data, which Google Cloud has committed not to use for model training and not to share with model providers. Google Cloud offers robust regional availability to meet various data residency requirements, Identity and Access Management (IAM) tools to create and manage permissions for access to foundation models, and a host of other enterprise-grade features. Customers interact with Claude 3 API endpoints on Vertex AI the same way they interact with other Vertex AI endpoints, making Anthropic's newest offerings simple for customers to integrate. Ease of use: Claude 3 is offered as a serverless, managed API on Vertex AI, meaning customers don't have to worry about managing the underlying infrastructure. The pay-as-you-go pricing model and auto-scaling abilities help organizations optimize costs and easily go from experimentation to deploying production-grade generative AI applications at speed. Hear what customers are saying Enterprises are excited to adopt Claude 3 on Vertex AI for a wide range of use cases, including enhancing customer chats, automating task handling, generating code, and much more: GitLab: “GitLab’s AI-powered DevSecOps platform embeds AI throughout the entire software development lifecycle with a privacy- and transparency-first approach,” said Hillary Benson, senior director, product management at GitLab. 
“By leveraging Anthropic’s Claude models on Vertex AI, we look forward to helping customers harness the benefits of AI to deliver secure software faster.” Poe by Quora: “At Poe, we’re helping shape how people interact with AI, providing millions of global users with one place to chat, explore and build with a wide variety of AI-powered bots,” said Spencer Chan, Product Lead at Poe by Quora. “Claude has become very popular on Poe due to its strengths in multiple areas, including creative writing and image understanding. Our users describe Claude's answers as detailed and easily understood, and they like that exchanges feel like natural conversations. With millions of messages exchanged between our users and Anthropic’s Claude-based bots daily, we’re excited to work with Anthropic’s Claude 3 models on Vertex AI.” How to get started with Claude 3 on Vertex AI Get access to the Claude 3 Sonnet and Claude 3 Haiku models by following these simple steps: Visit the Vertex AI Model Garden console and select the model tile for Claude 3 Sonnet or Claude 3 Haiku. Click on the “Enable” button and follow the subsequent instructions. That’s it – you now have immediate access to the selected Claude 3 model. Click the “View code” button to obtain sample code or use the built-in Colab notebook to start querying Claude 3 models on Vertex AI. To learn more about Anthropic's Claude 3 models on Google Cloud, explore the Claude 3 on Vertex AI documentation. Customers can also find the Claude 3 Sonnet and Claude 3 Haiku models on Google Cloud Marketplace. View the full article
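Once a Claude model is enabled in Model Garden, one concrete way to query it from Python is Anthropic's Vertex-aware client. A minimal sketch under stated assumptions: it requires the `anthropic[vertex]` package, the project ID is hypothetical, the `@<version-date>` model name is an assumption to verify in Model Garden, and the API call is wrapped so nothing runs without GCP credentials.

```python
# Model names on Vertex AI carry an "@<version-date>" suffix; this one is an assumption.
SONNET_VERTEX_MODEL = "claude-3-sonnet@20240229"


def ask_sonnet(prompt: str, project_id: str = "my-gcp-project") -> str:
    """Send one user message to Claude 3 Sonnet on Vertex AI (needs GCP credentials)."""
    from anthropic import AnthropicVertex

    client = AnthropicVertex(region="us-central1", project_id=project_id)
    message = client.messages.create(
        model=SONNET_VERTEX_MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text


print(SONNET_VERTEX_MODEL.split("@")[0])  # → claude-3-sonnet
```

Because the client exposes the same Messages API shape as Anthropic's first-party API, code written against one endpoint ports to the other with little change.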
  6. Last week, Anthropic announced their Claude 3 foundation model family. The family includes three models: Claude 3 Haiku, the fastest and most compact model for near-instant responsiveness; Claude 3 Sonnet, the ideal balanced model between skills and speed; and Claude 3 Opus, the most intelligent offering for top-level performance on highly complex tasks. AWS also announced the general availability of Claude 3 Sonnet in Amazon Bedrock. Today, we are announcing the availability of Claude 3 Haiku on Amazon Bedrock. The Claude 3 Haiku foundation model is the fastest and most compact model of the Claude 3 family, designed for near-instant responsiveness and seamless generative artificial intelligence (AI) experiences that mimic human interactions. For example, it can read a data-dense research paper on arXiv (~10k tokens) with charts and graphs in less than three seconds. With Claude 3 Haiku’s availability on Amazon Bedrock, you can build near-instant responsive generative AI applications for enterprises that need quick and accurate targeted performance. Like Sonnet and Opus, Haiku has image-to-text vision capabilities, can understand multiple languages besides English, and boasts increased steerability in a 200k context window. Claude 3 Haiku use cases Claude 3 Haiku is smarter, faster, and more affordable than other models in its intelligence category. It answers simple queries and requests with unmatched speed. With its fast speed and increased steerability, you can create AI experiences that seamlessly imitate human interactions. 
Here are some use cases for using Claude 3 Haiku: Customer interactions: quick and accurate support in live interactions, translations Content moderation: catch risky behavior or customer requests Cost-saving tasks: optimized logistics, inventory management, fast knowledge extraction from unstructured data To learn more about Claude 3 Haiku’s features and capabilities, visit Anthropic’s Claude on Amazon Bedrock and Anthropic Claude models in the AWS documentation. Claude 3 Haiku in action If you are new to using Anthropic models, go to the Amazon Bedrock console and choose Model access on the bottom left pane. Request access separately for Claude 3 Haiku. To test Claude 3 Haiku in the console, choose Text or Chat under Playgrounds in the left menu pane. Then choose Select model and select Anthropic as the category and Claude 3 Haiku as the model. To test more Claude prompt examples, choose Load examples. You can view and run examples specific to Claude 3 Haiku, such as advanced Q&A with citations, crafting a design brief, and non-English content generation. Using Compare mode, you can also compare the speed and intelligence between Claude 3 Haiku and the Claude 2.1 model using a sample prompt to generate personalized email responses to address customer questions. By choosing View API request, you can also access the model using code examples in the AWS Command Line Interface (AWS CLI) and AWS SDKs. 
Here is a sample of the AWS CLI command:

aws bedrock-runtime invoke-model \
  --model-id anthropic.claude-3-haiku-20240307-v1:0 \
  --body "{\"messages\":[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Write the test case for uploading the image to Amazon S3 bucket\\n\"}]}],\"anthropic_version\":\"bedrock-2023-05-31\",\"max_tokens\":2000,\"temperature\":1,\"top_k\":250,\"top_p\":0.999,\"stop_sequences\":[\"\\n\\nHuman:\"]}" \
  --cli-binary-format raw-in-base64-out \
  --region us-east-1 \
  invoke-model-output.txt

To make an API request with Claude 3, use the new Anthropic Claude Messages API format, which allows for more complex interactions such as image processing. If you use the Anthropic Claude Text Completions API, you should upgrade to the Messages API. Here is sample Python code to send a Messages API request describing an image file:

import base64
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")


def call_claude_haiku(base64_string):
    # Build a Messages API request with a base64-encoded image and a text instruction
    prompt_config = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 4096,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": "image/png",
                            "data": base64_string,
                        },
                    },
                    {"type": "text", "text": "Provide a caption for this image"},
                ],
            }
        ],
    }

    body = json.dumps(prompt_config)
    modelId = "anthropic.claude-3-haiku-20240307-v1:0"
    accept = "application/json"
    contentType = "application/json"

    response = bedrock_runtime.invoke_model(
        body=body, modelId=modelId, accept=accept, contentType=contentType
    )
    response_body = json.loads(response.get("body").read())
    results = response_body.get("content")[0].get("text")
    return results

For more sample code with Claude 3, see Get Started with Claude 3 on Amazon Bedrock, Diagrams to CDK/Terraform using Claude 3 on Amazon Bedrock, and Cricket Match Winner Prediction with Amazon Bedrock’s Anthropic Claude 3 Sonnet at Community.aws. Now available Claude 3 Haiku is available now in the US West (Oregon) Region with more Regions coming soon; check the full Region list for future updates. 
Claude 3 Haiku is the most cost-effective choice in its intelligence category. For example, Claude 3 Haiku is up to 68 percent cheaper per 1,000 input/output tokens than Claude Instant, while offering higher levels of intelligence. To learn more, see Amazon Bedrock Pricing. Give Claude 3 Haiku a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts. — Channy View the full article
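To make that cost comparison concrete, here is a small calculator. The per-1,000-token prices below are illustrative assumptions, not published rates (check Amazon Bedrock Pricing for current numbers), though they are chosen so the input-token saving lands near the "up to 68 percent" figure.

```python
# Illustrative (assumed) on-demand prices per 1,000 tokens, in USD.
PRICES = {
    "claude-3-haiku": {"input": 0.00025, "output": 0.00125},
    "claude-instant": {"input": 0.00080, "output": 0.00240},
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate a request's cost from token counts at the assumed rates."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]


# Relative saving on input tokens at the assumed rates (~68-69 percent).
saving = 1 - PRICES["claude-3-haiku"]["input"] / PRICES["claude-instant"]["input"]
print(f"Haiku input-token saving vs Claude Instant: {saving:.1%}")
```

Swapping in the published per-model rates turns this into a quick budgeting tool for comparing models on expected traffic.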
  7. Anthropic has released a new series of large language models and an updated Python API to access them. View the full article
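For context, calling one of those models through Anthropic's Python SDK looks roughly like the sketch below. The model name is an assumption (use whichever is current), and the API call is wrapped in a function so it only runs when an API key is configured.

```python
def build_turn(role: str, text: str) -> dict:
    """One conversation turn in the Messages API shape."""
    return {"role": role, "content": text}


def ask_claude(prompt: str, model: str = "claude-3-opus-20240229") -> str:
    """One-shot question via the Anthropic Messages API (reads ANTHROPIC_API_KEY)."""
    from anthropic import Anthropic

    client = Anthropic()  # picks up the API key from the environment
    message = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[build_turn("user", prompt)],
    )
    return message.content[0].text
```

The `messages.create` call is the SDK's successor to the older completions-style interface, which is the API update the announcement refers to.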
  8. At Google Cloud, we're committed to empowering customer choice and innovation through our curated collection of first-party, open-source, and third-party models available in Vertex AI. That’s why we're thrilled to announce that Claude 3 — Anthropic’s new family of state-of-the-art models — will be generally available in Vertex AI Model Garden over the coming weeks, including private preview access for one of the models starting today. Claude is Anthropic’s next-generation AI assistant that helps manage organizations’ tasks, no matter the scale. Anthropic’s launch of Claude 3 includes a family of three distinct models optimized for various enterprise applications: Claude 3 Opus: Anthropic’s most capable and intelligent model yet. Claude 3 Sonnet: Anthropic’s best combination of skills and speed. Claude 3 Haiku: Anthropic’s fastest, most compact model. Compared to earlier iterations of Claude, both Claude 3 Opus and Sonnet offer superior reasoning across complex tasks, content creation, scientific queries, math, and coding, while Haiku is Anthropic’s fastest and most cost-effective model. All Claude 3 models boast improved fluency in non-English languages, as well as vision capabilities that unlock tasks ranging from image metadata generation to insights extraction across PDFs, flow charts, and a diverse range of other formats. In the weeks ahead, Google Cloud customers will be able to select from all three Claude 3 models via API access in Vertex AI Model Garden. And starting today, customers can apply for private preview access to Claude 3 Sonnet in Model Garden. Build and deploy with Claude 3 in Vertex AI Through our partnership, we will bring Anthropic’s latest models to our customers via Vertex AI, the comprehensive AI development platform. The Claude 3 family joins over 130 models already available in Vertex AI Model Garden, further expanding customer choice and flexibility as gen AI use cases continue to rapidly evolve. 
By making Claude 3 models available in Vertex AI, customers have powerful new options to: Accelerate AI development with quick access to Claude's pre-trained models through simple API calls in Vertex AI. Focus on applications, not infrastructure, as Claude models are offered in Vertex AI as managed APIs — meaning customers can concentrate on building groundbreaking applications instead of worrying about backend complexity or the management overhead of underlying infrastructure. Optimize performance and costs by leveraging flexible auto-scaling and pay-only-for-what-you-use pricing as needs grow. And of course, leverage world-class infrastructure, purpose-built for AI workloads. Deploy responsibly with Google Cloud’s built-in security, privacy, and compliance, as Vertex AI’s assortment of models and tools are offered with Google Cloud's enterprise-grade security, privacy, and compliance for generative AI. Sign up to access Claude 3 in Vertex AI This is just the beginning of our partnership with Anthropic, and we’re excited to enable customer innovation with the newest models. We'll continue to work closely with Anthropic and other partners to keep our customers at the forefront of AI capabilities. To get started with Vertex AI, visit our product page. To access Claude 3 Sonnet via private preview, visit Model Garden, and to learn more about Claude 3, check out Anthropic’s announcement. View the full article
  9. Anthropic’s Claude 2.1 foundation model is now generally available in Amazon Bedrock. Claude 2.1 delivers key capabilities for enterprises, such as an industry-leading 200,000 token context window (2x the context of Claude 2.0), reduced rates of hallucination, improved accuracy over long documents, system prompts, and a beta tool use feature for function calling and workflow orchestration. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies, like Anthropic, along with a broad set of capabilities that provide you with the easiest way to build and scale generative AI applications with foundation models. View the full article
  10. Today, we’re announcing the availability of Anthropic’s Claude 2.1 foundation model (FM) in Amazon Bedrock. Last week, Anthropic introduced its latest model, Claude 2.1, delivering key capabilities for enterprises such as an industry-leading 200,000 token context window (2x the context of Claude 2.0), reduced rates of hallucination, improved accuracy over long documents, system prompts, and a beta tool use feature for function calling and workflow orchestration. With Claude 2.1’s availability in Amazon Bedrock, you can build enterprise-ready generative artificial intelligence (AI) applications using more honest and reliable AI systems from Anthropic. You can now use the Claude 2.1 model provided by Anthropic in the Amazon Bedrock console. Here are some key highlights about the new Claude 2.1 model in Amazon Bedrock:

200,000 token context window – Enterprise applications demand larger context windows and more accurate outputs when working with long documents such as product guides, technical documentation, or financial or legal statements. Claude 2.1 supports 200,000 tokens, the equivalent of roughly 150,000 words or over 500 pages of documents. When uploading extensive information to Claude, you can summarize, perform Q&A, forecast trends, and compare and contrast multiple documents for drafting business plans and analyzing complex contracts.

Strong accuracy upgrades – Claude 2.1 has also made significant gains in honesty, with a 2x decrease in hallucination rates, 50 percent fewer hallucinations in open-ended conversation and document Q&A, a 30 percent reduction in incorrect answers, and a 3–4 times lower rate of mistakenly concluding that a document supports a particular claim compared to Claude 2.0. Claude increasingly knows what it doesn’t know and will more likely demur rather than hallucinate. With this improved accuracy, you can build more reliable, mission-critical applications for your customers and employees. 
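The word and page equivalences quoted above follow from common rules of thumb: roughly 0.75 English words per token and roughly 300 words per page. A back-of-the-envelope helper, with those ratios stated explicitly as assumptions rather than tokenizer facts:

```python
def context_estimate(tokens: int, words_per_token: float = 0.75,
                     words_per_page: int = 300) -> tuple:
    """Approximate (words, pages) a token budget covers.

    The default ratios are rough rules of thumb for English text, not
    exact figures from any particular tokenizer.
    """
    words = int(tokens * words_per_token)
    return words, words // words_per_page
```

With the defaults, `context_estimate(200_000)` yields `(150000, 500)`, matching the figures in the announcement.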
System prompts – Claude 2.1 now supports system prompts, a new feature that can improve Claude’s performance in a variety of ways, including greater character depth and role adherence in role-playing scenarios, particularly over longer conversations, as well as stricter adherence to guidelines, rules, and instructions. This represents a structural change, but not a content change, from former ways of prompting Claude.

Tool use for function calling and workflow orchestration – Available as a beta feature, Claude 2.1 can now integrate with your existing internal processes, products, and APIs to build generative AI applications. Claude 2.1 accurately retrieves and processes data from additional knowledge sources as well as invokes functions for a given task. Claude 2.1 can answer questions by searching databases using private APIs and a web search API, translate natural language requests into structured API calls, or connect to product datasets to make recommendations and help customers complete purchases. Access to this feature is currently limited to select early access partners, with plans for open access in the near future. If you are interested in gaining early access, please contact your AWS account team.

To learn more about Claude 2.1’s features and capabilities, visit Anthropic Claude on Amazon Bedrock and the Amazon Bedrock documentation.

Claude 2.1 in action

To get started with Claude 2.1 in Amazon Bedrock, go to the Amazon Bedrock console. Choose Model access on the bottom left pane, then choose Manage model access on the top right side, submit your use case, and request model access to the Anthropic Claude model. It may take several minutes to get access to models. If you already have access to the Claude model, you don’t need to request access separately for Claude 2.1. To test Claude 2.1 in chat mode, choose Text or Chat under Playgrounds in the left menu pane. Then select Anthropic and then Claude v2.1. 
By choosing View API request, you can also access the model via code examples in the AWS Command Line Interface (AWS CLI) and AWS SDKs. Here is a sample of the AWS CLI command:

$ aws bedrock-runtime invoke-model \
    --model-id anthropic.claude-v2:1 \
    --body '{"prompt":"\n\nHuman: Tell me a funny joke about outer space!\n\nAssistant:", "max_tokens_to_sample": 50}' \
    --cli-binary-format raw-in-base64-out \
    invoke-model-output.txt

You can use system prompt engineering techniques provided by the Claude 2.1 model, where you place your inputs and documents before any questions that reference or utilize that content. Inputs can be natural language text, structured documents, or code snippets using <document>, <papers>, <books>, or <code> tags, and so on. You can also use conversational text, such as chat history, and Retrieval Augmented Generation (RAG) results, such as chunked documents. Here is a system prompt example for support agents to respond to customer questions based on corporate documents.

Here are some documents for you to reference for your task:

<documents>
<document index="1">
<document_content>
(the text content of the document - could be a passage, web page, article, etc)
</document_content>
</document>
<document index="2">
<source>https://mycompany.repository/userguide/what-is-it.html</source>
</document>
<document index="3">
<source>https://mycompany.repository/docs/techspec.pdf</source>
</document>
...
</documents>

You are Larry, and you are a customer advisor with deep knowledge of your company's products. Larry has a great deal of patience with his customers, even when they say nonsense or are sarcastic. Larry's answers are polite but sometimes funny. However, he only answers questions about the company's products and doesn't know much about other questions. Use the provided documentation to answer user questions.

Human: Your product is making a weird stuttering sound when I operate it. What might be the problem? 
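The same invocation can be made from an AWS SDK. The sketch below uses the AWS SDK for Python (boto3) and assumes configured AWS credentials plus granted model access; the helper names (`build_request`, `ask_larry`) and the shortened system prompt are illustrative, not part of any SDK or of the example above.

```python
import json


def build_request(system_prompt: str, question: str, max_tokens: int = 300) -> dict:
    """Build invoke-model parameters for Claude 2.1's text-completion format.

    For Claude 2.x, the system prompt is plain text placed before the
    first Human turn, and the prompt must end with an Assistant turn so
    the model knows it should respond.
    """
    prompt = f"{system_prompt}\n\nHuman: {question}\n\nAssistant:"
    return {
        "modelId": "anthropic.claude-v2:1",
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({"prompt": prompt,
                            "max_tokens_to_sample": max_tokens}),
    }


def ask_larry(question: str) -> str:
    """Invoke Claude 2.1 on Bedrock with a support-agent system prompt."""
    import boto3  # AWS SDK for Python; requires configured credentials

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(**build_request(
        "You are Larry, a patient customer advisor. Use the provided "
        "documentation to answer user questions.", question))
    # The response body is a JSON document whose "completion" field
    # holds the generated text.
    return json.loads(response["body"].read())["completion"]
```

In a real application, the document-tagged context shown above would be prepended to the system prompt string before the first Human turn.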
To learn more about prompt engineering on Amazon Bedrock, see the Prompt engineering guidelines included in the Amazon Bedrock documentation. You can learn general prompt techniques, templates, and examples for Amazon Bedrock text models, including Claude.

Now available

Claude 2.1 is available today in the US East (N. Virginia) and US West (Oregon) Regions. You only pay for what you use, with no time-based term commitments for on-demand mode. For text generation models, you are charged for every input token processed and every output token generated. Or you can choose the provisioned throughput mode to meet your application’s performance requirements in exchange for a time-based term commitment. To learn more, see Amazon Bedrock Pricing.

Give Anthropic Claude 2.1 a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts. — Channy View the full article
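The on-demand billing model described in item 10 (a charge for every input token processed and every output token generated) reduces to simple arithmetic. A sketch, with per-1,000-token rates passed in as parameters because the actual Bedrock prices are on the pricing page, not assumed here:

```python
def on_demand_cost(input_tokens: int, output_tokens: int,
                   in_rate_per_1k: float, out_rate_per_1k: float) -> float:
    """On-demand cost: input tokens and output tokens billed at
    separate per-1,000-token rates."""
    return ((input_tokens / 1000) * in_rate_per_1k
            + (output_tokens / 1000) * out_rate_per_1k)


# Hypothetical rates for illustration only -- not actual Bedrock prices.
estimate = on_demand_cost(10_000, 2_000,
                          in_rate_per_1k=0.008, out_rate_per_1k=0.024)
```

With those made-up rates, 10,000 input tokens cost 10 x 0.008 and 2,000 output tokens cost 2 x 0.024, for a total of 0.128.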
  11. I recently gained access to Anthropic's API, and I am impressed by how easy it is to use and how much faster it is than the OpenAI API. View the full article