Showing results for tags 'google cloud next 2024'.

  1. Welcome to the first Cloud CISO Perspectives for April 2024. In this update, we’ll list some of the major announcements of security products and security enhancements coming to Google Cloud; there’s an even longer list here. As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here. --Phil Venables, VP, TI Security & CISO, Google Cloud

20 major security announcements from Next ‘24
By Phil Venables, VP, TI Security & CISO, Google Cloud

We held our annual Google Cloud Next conference earlier this month, and from the start of our opening keynote we highlighted how AI is transforming the way that companies work, our incredible customer momentum, and of course our exciting product news — 218 announcements in total. You can check out a recap of the keynote here. We made significant announcements in powering Google Cloud with Gemini and powering the next generation of AI startups with Google Cloud, as well as improvements to database management, workload-optimized infrastructure, and application development. We also focused heavily on our work to advance secure products and security products, making Google part of your security team anywhere you operate, with defenses supercharged by AI.
As we said at Next ‘24, what organizations need are security essentials that can “bring simplicity, streamline operations, and enhance efficiency and effectiveness.” At Google Cloud, we’d of course like for all organizations to choose us as their security provider, but our approach is broader than the products we bring to market. We recognize that 92% of organizations that use at least one cloud provider actually employ a multicloud approach. Our solution is to focus on securing Google Cloud customers — and their entire environment.

I’ve split the list of 20 of our major security announcements from Next ‘24 into two groups: those focused on Gemini for Security, which further empowers defenders to identify and mitigate risk, and additional security announcements.

Gemini for Security announcements include:

• Gemini in Security Operations, a new assisted investigation feature, generally available at the end of this month, that guides analysts through their workflow in Chronicle Enterprise and Chronicle Enterprise Plus. You can now ask Gemini for the latest threat intelligence from Mandiant directly in-line, including any indicators of compromise found in your environment.
• Gemini in Threat Intelligence, in public preview, allows you to tap into Mandiant’s frontline threat intelligence using conversational search.
Further, VirusTotal now automatically ingests OSINT reports, which Gemini summarizes directly in the platform; generally available now.
• Gemini in Security Command Center, which now lets security teams search for threats and other security events using natural language in preview, provides summaries of critical- and high-priority misconfiguration and vulnerability alerts, and summarizes attack paths.
• Gemini Cloud Assist also helps with security tasks via IAM Recommendations, which can provide straightforward, contextual recommendations to remove roles from over-permissioned users or service accounts; Key Insights, which can help during encryption key creation based on its understanding of your data, your encryption preferences, and your compliance needs; and Confidential Computing Insights, which can recommend options for adding confidential computing protection to sensitive workloads based on your data and your compute usage.
Additional security announcements include:

• The new Chrome Enterprise Premium, now generally available, combines the popular browser with Google threat and data protection, Zero Trust access controls, enterprise policy controls, and security insights and reporting.
• Applied threat intelligence in Google Security Operations, now generally available, automatically applies global threat visibility to each customer’s unique environment.
• Security Command Center Enterprise is now generally available and includes Mandiant Hunt, now in preview.
• Introducing Isolator: Enabling secure multi-party collaboration with healthcare data.
• Confidential Computing, a vital solution for data security and confidentiality, now offers Confidential Accelerators for AI workloads, as well as an expanded portfolio of hardware options, support for data migrations, and additional partnerships.
• Identity and Access Management Privileged Access Manager (PAM), now available in preview, provides just-in-time, time-bound, and approval-based access elevations.
• Identity and Access Management Principal Access Boundary (PAB) is a new, identity-centered control now in preview that enforces restrictions on IAM principals.
• Cloud Next-Gen Firewall (NGFW) Enterprise is now generally available, including threat protection from Palo Alto Networks.
• Cloud Armor Enterprise is now generally available and offers a pay-as-you-go model that includes advanced network DDoS protection, web application firewall capabilities, network edge policy, adaptive protection, and threat intelligence.
• Sensitive Data Protection integration with Cloud SQL is now generally available, and is deeply integrated into the Security Command Center Enterprise risk engine.
• Key management with Autokey is now in preview, simplifying the creation and management of customer-managed encryption keys (CMEK).
• Bare metal HSM deployments in PCI-compliant facilities are now available in more regions.
• Regional Controls for Assured Workloads is now in preview and is available in 32 cloud regions in 14 countries.
• Audit Manager automates control verification with proof of compliance for workloads and data on Google Cloud, and is in preview.
• Advanced API Security, part of Apigee API Management, now offers shadow API detection in preview.
• We expanded data residency guarantees for data stored at rest for the Gemini, Imagen, and Embeddings APIs on Vertex AI to 11 new countries: Australia, Brazil, Finland, Hong Kong, India, Israel, Italy, Poland, Spain, Switzerland, and Taiwan.

To learn more about how your organization can benefit from our announcements at Next ‘24, you can contact us at Ask Office of the CISO, and stay tuned for our announcements next month at RSA Conference in San Francisco.

In case you missed it

Here are the latest updates, products, services, and resources from our security teams so far this month:

• Trends on zero days exploited in the wild in 2023: The first joint zero-day report from Mandiant and Google’s Threat Analysis Group shows that 97 zero-day vulnerabilities were exploited in 2023, a big increase over the 62 zero-day vulnerabilities identified in 2022 but still fewer than 2021’s peak of 106 zero days. Read more.
• Boosting data cyber-resilience for your Cloud Storage data with object retention lock: The new object retention lock for Cloud Storage makes it easier to meet regulatory standards, strengthen security, and improve data protection.
Read more.
• Google Cloud offers new cybersecurity training to unlock job opportunities: Google Cloud is on a mission to help everyone build the skills they need for in-demand cloud jobs. We’re excited to announce new learning opportunities that will help you gain these in-demand skills through new courses and certificates in AI, data analytics, and cybersecurity. Read more.
• Google Public DNS’s approach to fighting cache poisoning attacks: We look at DNS cache poisoning attacks, and how Google Public DNS addresses the risks associated with them. Read more.

Please visit the Google Cloud blog for more security stories published this month.

Threat Intelligence news

• Cyber threats linked to Russian businessman Prigozhin persist after his death: Mandiant has tracked and reported on covert information operations and threat activity linked to Prigozhin for years. We examine a sample of Prigozhin-linked campaigns to better understand their outcomes so far, and provide an overview of what can be expected from these activity sets in the future. Read more.
• Ivanti Connect Secure VPN post-exploitation lateral movement case studies: Our investigations into widespread Ivanti zero-day exploitation have continued. In this post, we catalog some of the different types of activity that Mandiant has observed on vulnerable Ivanti Connect Secure appliances. Read more.
• SeeSeeYouExec: Windows session hijacking via CcmExec: The security community has witnessed an uptick in System Center Configuration Manager (SCCM)-related attacks. Mandiant’s Red Team has utilized SCCM technology to perform novel attacks against mature clients, and released a tool to facilitate the technique. Read more.
• Apache XML Security for C++ Library allows for server-side request forgery: We identified a default configuration in an Apache library that could lead to server-side request forgery, which is being actively exploited, and provided recommendations and a patch to help defend against it. Read more.
Now hear this: Google Cloud Security and Mandiant podcasts

• How SecLM enhances security and what teams can do with it: Take a trip around Google Cloud’s security-trained model SecLM as Cloud Security podcast hosts Anton Chuvakin and Tim Peacock hear all about it from Google Cloud Security’s Umesh Shankar, distinguished engineer and chief technologist, and Scott Coull, head of data science research. Listen here.
• How Google Cloud defends against abuse: From stolen credit cards to fake accounts, Maria Riaz, Google Cloud’s counter-abuse engineering lead, discusses with Anton and Tim what “counter-abuse” is, how Google Cloud stops abuse, and the skill set needed to do so. Listen here.
• What’s so spiffy about SPIFFE: Modern cloud tech has made IAM, Zero Trust, and security (relatively) easy. Evan Gilman and Eli Nesterov, co-founders of Spirl, tell Anton and Tim why workload identity is important to cloud security, and how it differs from network micro-segmentation. Listen here.

To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in two weeks with more security-related updates from Google Cloud.
  2. Returning from Google Cloud Next ‘24, we couldn’t be more excited about the different ways that startups are working with Google Cloud and leveraging it to grow their businesses. As part of our programming in the Startup Lounge, 28 startups launched new products and features during their time with us in Las Vegas. Here are some of the highlights from startups who showcased their groundbreaking innovation to attendees and peers across the three days of Next ‘24:

• Arize AI launched a new capability, Prompt Variable Monitoring, designed to help AI engineering teams automatically detect bugs in prompt variables and surface problematic datasets when troubleshooting LLM-powered apps. This capability was built using Google Cloud and supports Vertex AI and Gemini models.
• AssemblyAI is building new AI systems that can understand human speech with superhuman abilities. AssemblyAI uses TPUs on Google Cloud to lower the cost of inference at scale for the thousands of organizations building cutting-edge AI features on its Speech AI models. AssemblyAI’s newest model, Universal-1, advances the state of the art in multilingual Speech AI accuracy, and uses Google Cloud infrastructure and TPUs for both training and inference.
• Astronomer launched a new set of features designed to bolster governance at scale, fortify the security of your data platform, and accelerate innovation.
• Atomo, Inc. launched AskMED.ai™, a new generative AI platform engineered to deliver fast healthcare insights from real-world data, using Google Cloud’s BigQuery and Vertex AI for unparalleled precision and speed.
• CAST AI launched a never-before-announced service built using Google Kubernetes Engine (GKE) that integrates with OpenAI’s API and automatically identifies the AI model that offers the most optimal performance and lowest inference costs, unlocking AI savings.
• Chronosphere launched a never-before-announced integration with Google Cloud Personalized Service Health designed to allow customers to centralize their change event data and use it to instantly correlate changes with system health issues. This integration gives customers deeper insight into their Google Cloud environment.
• Connected-Stories, an end-to-end creative management platform, launched a new feature powered by Gemini models on Google Cloud that allows users to generate dynamically personalized ads in seconds using only natural language.
• CrateDB launched its managed service, CrateDB Cloud, on Google Cloud and is using Vertex AI and Gemini models to power advanced forecasting, anomaly detection, and gen AI capabilities in its data platform.
• Fulfilld has built its intelligent warehouse management application on Google Cloud using Vertex AI. Targeted to both midmarket and large enterprise customers, Fulfilld’s platform helps optimize warehouse inventory management, product placement, and employee efficiency.
• Gretel is focused on helping developers generate high-quality synthetic data to develop and train AI models. Gretel offers its platform on Google Cloud Marketplace, and now Gretel has launched a new integration with BigQuery that will enable businesses to quickly access its synthetic data capabilities from within Google Cloud.
• Hiber has developed a lightweight gaming engine, Hiber3D, that allows game developers to build 3D worlds across platforms. Hiber launched a new feature, called Hiber3D SkyScape AI, that gives developers the ability to create a 3D world simply by uploading a picture or photo; it’s powered by Gemini models through Vertex AI.
• LimaCharlie launched a never-before-announced Bi-Directionality capability designed to enable automated response across all platforms, built using Google Cloud’s large suite of tools.
• Moov Data Streams allows technology companies of all sizes to report on their critical financial data in real time: card authorizations, transaction decline rates, revenue recognized, and any other payments data they need, all customizable to meet their ever-changing financial reporting needs, all from within BigQuery and Analytics Hub.
• Mozart Data launched a major update to its data platform, built on BigQuery, that will help customers better prepare their data for analysis, visualization, and AI.
• OctoAI is announcing a strategic partnership with Google Cloud to bring its generative AI developer stack to our trusted, AI-optimized infrastructure, enabling greater scale and performance for OctoAI’s platform and providing even more capabilities to developers working on AI applications.
• Orby AI is bringing together enterprise automation and generative AI to deliver a new AI-powered automation platform that helps users continuously find new ways to streamline and automate common, repetitive tasks. Its AI automation platform is built entirely on Google Cloud.
• PharmaGuide launched a new chatbot, powered by Google Cloud AI and integrated into its PHOX platform, used widely by pharmacists to streamline common processes and provide efficient care to patients. The new chatbot provides instant access to information and guidance to help surface timely, helpful information for pharmacists.
• Physna launched a never-before-announced integration of its patented 3D search technology into Unity Asset Manager, designed to enhance real-time spatial computing workloads with AI-based match reports driving deduplication, re-use, and suitable substitute discovery. Creators across industries can now unlock 3D intelligence and enhanced workflows within the Unity ecosystem, all built using Google Cloud services.
• Product Science, a leader in AI-based mobile performance engineering, launched CodeTuner for Cloud, revolutionizing management of cloud costs.
CodeTuner for Cloud uniquely identifies code-level insights into your server requests’ compute, storage, and network costs in the context of real user sessions, optimizing at the source without compromising user experience.
• Queenly, a Y Combinator company that offers marketplace and search functions, is powering a new AI-generated virtual try-on experience with Google Cloud’s AI and data cloud capabilities.
• Rad AI is applying AI models, trained on Google Cloud using GKE, to make important advances in its lung cancer screening model through its Continuity platform.
• Reality Defender builds software that helps detect deepfakes and AI-generated disinformation. At Next ‘24, it launched a real-time voice deepfake detection platform, built on Google Cloud and using NVIDIA A100 hardware, that will be used by financial services companies and call centers.
• Rocket Doctor, the digital health platform and marketplace, is rolling out new AI features built with a suite of products from Google Cloud, including Vertex AI, MedLM, and data analytics tools, which will help doctors intelligently search and summarize patient data in their EHR systems.
• Snorkel AI launched Snorkel Custom, an offering that combines Snorkel’s programmatic AI data development platform, Snorkel Flow, with hands-on support from Snorkel’s machine learning experts to help enterprises use their data to adapt LLMs and deliver production-quality AI faster. Snorkel also expanded native integrations for Google Cloud LLMs, announcing new support for Gemini models.
• Suggestic, a turnkey platform for launching tele-wellness applications, launched a new product, Viium AI, that delivers highly personalized GLP-1 weight loss experiences, improves patient outcomes, and boosts provider revenue. Viium AI is built using GKE, Vertex AI, and Gemini models.
• Swit is using Google Cloud’s AI models through Vertex AI to power a new Snap chatbot product that can help people simplify common tasks at work, like managing to-dos, building checklists, creating contextual responses, and more.
• VEED.IO launched a new AI text-to-video tool that turns simple text prompts into engaging videos in seconds. The tool is built on VEED’s industry-leading video technology and powered by Gemini models through Vertex AI.
• Vurvey launched vTeam, a groundbreaking AI platform powered by people. vTeam brings together advanced AI agents, state-of-the-art orchestration, and automated workflows to provide enterprises with cutting-edge AI backed by human insight. New products and strategies that once took years to research, ideate, and visualize can now be generated in a matter of hours with vTeam.

We’re so excited to see the enthusiasm from startups who shared their companies and products, and so proud to see how much Google Cloud and the team continue to invest in startups year after year, at Next and across the business. We are looking forward to keeping the momentum up and highlighting amazing startups throughout 2024 and at Next ‘25! Want to take your startup to the next level? Visit our Google Cloud for Startups page to learn more and sign up for updates on events, offers, and our vibrant community.
  3. Organizations are increasingly adopting streaming technologies, and Google Cloud offers a comprehensive solution for streaming ingestion and analytics. Cloud Pub/Sub is Google Cloud’s simple, highly scalable, and reliable global messaging service. It serves as the primary entry point for ingesting your streaming data into Google Cloud and is natively integrated with BigQuery, Google Cloud’s unified, AI-ready data analytics platform. You can then use this data for downstream analytics, visualization, and AI applications. Today, we are excited to announce recent Pub/Sub innovations answering customer needs for simplified streaming data ingestion and analytics.

One-click Streaming Import (GA)

Multi-cloud workloads are becoming a reality for many organizations that run certain workloads (e.g., operational) on one public cloud and their analytical workloads on another. However, it can be a challenge to gain a holistic view of business data spread across clouds. By consolidating data in one public cloud, you can run analytics across your entire data footprint; for Google Cloud customers, it is common to consolidate data in BigQuery, providing a source of truth for the organization. To ingest streaming data from external sources such as AWS Kinesis Data Streams into Google Cloud, you previously needed to configure, deploy, run, manage, and scale a custom connector, and then monitor and maintain it to ensure the streaming ingestion pipeline was running as expected. Last week, we launched a no-code, one-click capability to ingest streaming data into Pub/Sub topics from external sources, starting with Kinesis Data Streams. The Import Topics capability is now generally available (GA) and offers multiple benefits:

• Simplified data pipelines: You can streamline your cross-cloud streaming data ingestion pipelines by using the Import Topics capability. This removes the overhead of running and managing a custom connector.
• Auto-scaling: Streaming pipelines created with managed import topics scale up and down based on the incoming throughput.
• Out-of-the-box monitoring: Three new Pub/Sub metrics are now available out of the box to monitor your import topics.

Import Topics will support Cloud Storage as another external source later in the year.

Streaming analytics with Pub/Sub Apache Flink connector (GA)

Apache Flink is an open-source stream processing framework with powerful stream and batch processing capabilities, with growing adoption across enterprises. Customers often use Apache Flink with messaging services to power streaming analytics use cases. We are pleased to announce that a new version of the Pub/Sub Flink Connector is now GA with active support from the Google Cloud Pub/Sub team. The connector is fully open source under an Apache 2.0 license and hosted on our GitHub repository. With just a few steps, the connector lets you connect your existing Apache Flink deployment to Pub/Sub: you can publish Apache Flink output to Pub/Sub topics or use Pub/Sub subscriptions as a source in Apache Flink applications. The new GA version of the connector comes with multiple enhancements. It now leverages the StreamingPull API to achieve maximum throughput and low latency. We also added support for automatic message lease extensions to enable setting longer checkpointing intervals. Finally, the connector supports the latest Apache Flink source streaming API.

Enhanced Export Subscriptions experience

Pub/Sub has two popular export subscription types — BigQuery and Cloud Storage. BigQuery subscriptions can now be leveraged as a simple method to ingest streaming data into BigLake managed tables, BigQuery’s recently announced capability for building open-format lakehouses on Google Cloud. You can use this method to transform your streaming data into Parquet or Iceberg format files in your Cloud Storage buckets.
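To make the operational overhead concrete, here is a toy sketch of the kind of hand-rolled connector loop that Import Topics makes unnecessary. Everything here is a stand-in: the in-memory `source_stream` plays the role of a Kinesis stream and `publish` plays the role of a Pub/Sub client; a real connector would use the actual client libraries and still need deployment, scaling, and monitoring around this loop.

```python
import time

# In-memory stand-ins for an external stream and a Pub/Sub topic
# (hypothetical; a real connector would use Kinesis and Pub/Sub clients).
source_stream = [b"record-1", b"record-2", b"record-3"]
published = []

def fetch_batch(stream, offset, max_records=2):
    """Read up to max_records from the source, starting at offset."""
    return stream[offset:offset + max_records]

def publish(message, retries=3, backoff_s=0.01):
    """Publish with simple retry/backoff -- the kind of error handling
    a custom connector has to implement and maintain itself."""
    for attempt in range(retries):
        try:
            published.append(message)
            return
        except Exception:
            time.sleep(backoff_s * 2 ** attempt)
    raise RuntimeError("publish failed after retries")

# The forwarding loop a custom connector must run, scale, and monitor.
offset = 0
while offset < len(source_stream):
    batch = fetch_batch(source_stream, offset)
    for record in batch:
        publish(record)
    offset += len(batch)

print(f"forwarded {len(published)} records")
```

With an import topic, this entire loop (plus its retry, checkpointing, and scaling concerns) is replaced by topic configuration.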
We also launched a number of enhancements to these export subscriptions. BigQuery subscriptions support a growing number of ways to move your structured data seamlessly. The biggest change is the ability to write JSON data into columns in BigQuery without defining a schema on the Pub/Sub topic. Previously, the only way to get data into columns was to define a schema on the topic and publish data that matched that schema. Now, with the use-table-schema feature, Pub/Sub can write JSON messages to the BigQuery table using the table’s own schema. Basic types are supported now, and support for more advanced types like NUMERIC and DATETIME is coming soon.

Speaking of type support, BigQuery subscriptions now handle most Avro logical types: non-local timestamp types (compatible with the BigQuery TIMESTAMP type) and decimal types (compatible with the BigQuery NUMERIC and BIGNUMERIC types, coming soon). You can use these logical types to preserve the semantic meaning of fields across your pipelines.

Another highly requested feature coming soon to both BigQuery subscriptions and Cloud Storage subscriptions is the ability to specify a custom service account. Currently, only the per-project Pub/Sub service account can be used to write messages to your table or bucket, so when you grant access, anyone with permission to use this project-wide service account can write to the destination. With the upcoming feature, you can limit access to a specific service account instead.

Cloud Storage subscriptions will be enhanced in the coming months with a new batching option that lets you batch Cloud Storage files based on the number of Pub/Sub messages in each file. You will also be able to specify a custom datetime format in Cloud Storage filenames to support custom downstream data lake analysis pipelines. Finally, you’ll soon be able to use the topic schema to write data to your Cloud Storage bucket.
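To illustrate the idea behind writing schemaless JSON into table columns, here is a small stand-in sketch of the mapping, not Pub/Sub's actual implementation: a JSON message body is coerced onto a hypothetical table schema column by column, with integer timestamps interpreted as microseconds since the Unix epoch (the convention Avro's timestamp-micros logical type uses). The schema and field names are invented for the example.

```python
import json
from datetime import datetime, timezone

# Hypothetical table schema: column name -> BigQuery type.
TABLE_SCHEMA = {
    "user_id": "INT64",
    "event": "STRING",
    "active": "BOOL",
    "ts": "TIMESTAMP",
}

def json_message_to_row(payload: bytes) -> dict:
    """Map a JSON message body onto table columns (illustrative only)."""
    data = json.loads(payload)
    row = {}
    for column, bq_type in TABLE_SCHEMA.items():
        value = data[column]
        if bq_type == "INT64":
            row[column] = int(value)
        elif bq_type == "BOOL":
            row[column] = bool(value)
        elif bq_type == "TIMESTAMP":
            # Microseconds since the Unix epoch -> timezone-aware datetime.
            row[column] = datetime.fromtimestamp(value / 1_000_000, tz=timezone.utc)
        else:
            row[column] = str(value)
    return row

msg = b'{"user_id": "42", "event": "login", "active": true, "ts": 1712707200000000}'
row = json_message_to_row(msg)
print(row["user_id"], row["event"], row["ts"].isoformat())
```

With the real feature, this coercion happens inside the BigQuery subscription based on the destination table's schema, so publishers can send plain JSON without a topic schema.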
Getting started

We’re excited to introduce a set of new capabilities to help you leverage your streaming data for a variety of use cases. You can now simplify your cross-cloud ingestion pipelines with Import Topics, leverage Apache Flink with Pub/Sub for streaming analytics use cases, and use enhanced export subscriptions to seamlessly get data into either BigQuery or Cloud Storage. We are excited to see how you use these Pub/Sub features to solve your business challenges.
  4. Google Cloud Next made a big splash in Las Vegas this week! From our opening keynote showcasing incredible customer momentum to exciting product announcements, we covered how AI is transforming the way that companies work. You can catch up on the highlights in our 14-minute keynote recap! Developers were front and center at our Developer keynote and in our buzzing Innovators Hive on the Expo floor (which was triple the size this year!). Our nearly 400 partner sponsors were also deeply integrated throughout Next, bringing energy from the show floor to sessions and evening events throughout the week. Last year, we talked about the exciting possibilities of generative AI, and this year it was great to showcase how customers are now using it to transform the way they work. At Next ‘24, we featured 300+ customer and partner AI stories, 500+ breakout sessions, hands-on demos, interactive training sessions, and so much more. It was a jam-packed week, so we’ve put together a summary of our announcements which highlights how we’re delivering the new way to cloud. Read on for a complete list of the 218 (yes, you read that right) announcements from Next ‘24:

Gemini for Google Cloud

We shared how Google’s Gemini family of models will help teams accomplish more in the cloud, including: 1. Gemini for Google Cloud, a new generation of AI assistants for developers, Google Cloud services, and applications. 2. Gemini Code Assist, the evolution of Duet AI for Developers. 3. Gemini Cloud Assist, which helps cloud teams design, operate, and optimize their application lifecycle. 4. Gemini in Security Operations, generally available at the end of this month, converts natural language to new detections, summarizes event data, recommends actions to take, and navigates users through the platform via conversational chat. 5.
Gemini in BigQuery, in preview, enables data analysts to be more productive, improve query performance, and optimize costs throughout the analytics lifecycle. 6. Gemini in Looker, in private preview, provides a dedicated space in Looker to initiate a chat on any topic with your data and derive insights quickly. 7. Gemini in Databases, also in preview, helps developers, operators, and database administrators build applications faster using natural language; manage, optimize, and govern an entire fleet of databases from a single pane of glass; and accelerate database migrations.

Customer Stories

We shared new customer announcements, including: 8. Cintas is leveraging Google Cloud’s gen AI to develop an internal knowledge center that will allow its customer service and sales employees to easily find key information. 9. Bayer will build a radiology platform that will help Bayer and other companies create and deploy AI-first healthcare apps that assist radiologists, ultimately improving efficiency and diagnosis turnaround time. 10. Best Buy is leveraging Google Cloud’s Gemini large language model to create new and more convenient ways to give customers the solutions they need, starting with gen AI virtual assistants that can troubleshoot product issues, reschedule order deliveries, and more. 11. Citadel Securities used Google Cloud to build the next generation of its quantitative research platform that increased its research productivity and price-performance ratio. 12. Discover Financial is transforming customer experience by bringing gen AI to its customer contact centers to improve agent productivity through personalized resolutions, intelligent document summarization, real-time search assistants, and enhanced self-service options. 13. IHG Hotels & Resorts is using Gemini to build a generative AI-powered chatbot to help guests easily plan their next vacation directly in the IHG Hotels & Rewards mobile app. 14.
Mercedes-Benz will expand its collaboration with Google Cloud, using our AI and gen AI technologies to advance customer-facing use cases across e-commerce, customer service, and marketing. 15. Orange is expanding its partnership with Google Cloud to deploy generative AI closer to Orange’s and its customers’ operations to help meet local requirements for trusted cloud environments and accelerate gen AI adoption and benefits across autonomous networks, workforce productivity, and customer experience. 16. WPP will leverage Google Cloud’s gen AI capabilities to deliver personalization, creativity, and efficiency across the business. Following the adoption of Gemini, WPP is already seeing internal impacts, including real-time campaign performance analysis, streamlined content creation processes, AI narration, and more. 17. Covered California, California’s health insurance marketplace, will simplify the healthcare enrollment process using Google Cloud’s Document AI, enabling the organization to verify more than 50,000 healthcare documents per month with an 84% verification rate.

Workspace and collaboration

The next wave of innovations and enhancements is coming to Google Workspace: 18. Google Vids, a key part of our Google Workspace innovations, is a new AI-powered video creation app for work that sits alongside Docs, Sheets, and Slides. Vids will be released to Workspace Labs in June. 19. Gemini is coming to Google Chat in preview, giving you an AI-powered teammate to summarize conversations, answer questions, and more. 20. The new AI Meetings and Messaging add-on is priced at $10 per user, per month, and includes: Take notes for me, now in preview; Translate for me, coming in June, which automatically detects and translates captions in Meet, with support for 69 languages; and automatic translation of messages and on-demand conversation summaries in Google Chat, coming later this year. 21.
Using large language models, Gmail can now block 20% more spam and evaluate 1,000 times more user-reported spam every day.
22. A new AI Security add-on allows IT teams to automatically classify and protect sensitive files in Google Drive, and is available for $10 per user, per month.
23. We’re extending DLP controls and classification labels to Gmail in beta.
24. We’re adding experimental support for post-quantum cryptography (PQC) in client-side encryption with our partners Thales and Fortanix.
25. Voice prompting and instant polish in Gmail: Send emails easily when you’re on the go with voice input in Help me write, and convert rough notes to a complete email with one click.
26. A new tables feature in Sheets (generally available in the coming weeks) formats and organizes data with a sleek design and a new set of building blocks — from project management to event planning templates with automatic alerts based on custom triggers, like a change in a status field.
27. Tabs in Docs (generally available in the coming weeks) allow you to organize information in a single document rather than linking to multiple documents or searching through Drive.
28. Docs now supports full-bleed cover images that extend from one edge of your browser to the other; generally available in the coming weeks.
29. Generally available in the coming weeks, Chat will support increased member capacity of up to 500,000 in spaces.
30. Messaging interoperability for Slack and Teams is now generally available through our partner Mio.

AI infrastructure

31. Cloud TPU v5p is now generally available.
32. Google Kubernetes Engine (GKE) now supports Cloud TPU v5p and TPU multi-host serving, also generally available.
33. The A3 Mega compute instance, powered by NVIDIA H100 GPUs, offers double the GPU-to-GPU networking bandwidth of A3, and will be generally available in May.
34. Confidential Computing is coming to the A3 VM family, in preview later this year.
35.
The NVIDIA Blackwell GPU platform will be available on the AI Hypercomputer architecture in two configurations: NVIDIA HGX B200 for the most demanding AI, data analytics, and HPC workloads; and the liquid-cooled GB200 NVL72 GPU for real-time LLM inference and training of massive-scale models.
36. New caching capabilities for Cloud Storage FUSE improve training throughput and serving performance, and are generally available.
37. The Parallelstore high-performance parallel filesystem now includes caching, in preview.
38. Hyperdisk ML, in preview, is a next-generation block storage service optimized for AI inference/serving workloads.
39. The open-source MaxDiffusion is a new high-performance and scalable reference implementation for diffusion models.
40. MaxText, a JAX-based LLM, now supports new LLM models including Gemma, GPT-3, Llama 2, and Mistral across both Cloud TPUs and NVIDIA GPUs.
41. PyTorch/XLA 2.3 will follow the upstream release later this month, bringing single program, multiple data (SPMD) auto-sharding and asynchronous distributed checkpointing features.
42. For Hugging Face PyTorch users, the Hugging Face Optimum-TPU package lets you train and serve Hugging Face models on TPUs.
43. JetStream is a new open-source, throughput- and memory-optimized LLM inference engine for XLA devices (starting with TPUs); it supports models trained with both JAX and PyTorch/XLA, with optimizations for popular open models such as Llama 2 and Gemma.
44. Google models will be available as NVIDIA NIM inference microservices.
45. Dynamic Workload Scheduler now offers two modes: flex start mode (in preview) and calendar mode (in preview).
46. We shared the latest performance results from MLPerf™ Inference v4.0 using A3 virtual machines (VMs) powered by NVIDIA H100 GPUs.
47. We shared performance benchmarks for Gemma models using Cloud TPU v5e and JetStream.
48.
We introduced ML Productivity Goodput, a new metric to measure the efficiency of an overall ML system, as well as an API to integrate into your projects and methods to maximize ML Productivity Goodput.

Vertex AI

49. Gemini 1.5 Pro is now available in public preview in Vertex AI, bringing the world’s largest context window to developers everywhere.
50. Gemini 1.5 Pro on Vertex AI can now process audio streams, including speech and the audio portion of videos.
51. Imagen 2.0, our family of image generation models, can now be used to create short, 4-second live images from text prompts.
52. Image editing is generally available in Imagen 2.0, including inpainting/outpainting and digital watermarking powered by Google DeepMind’s SynthID.
53. We added CodeGemma, a new model from our Gemma family of lightweight models, to Vertex AI.
54. Vertex AI has expanded grounding capabilities, including the ability to directly ground responses with Google Search, now in public preview.
55. Vertex AI Prompt Management, in preview, helps teams improve prompt performance.
56. Vertex AI Rapid Evaluation, in preview, helps users evaluate model performance when iterating on the best prompt design.
57. Vertex AI AutoSxS is now generally available, and helps teams compare the performance of two models.
58. We expanded data residency guarantees for data stored at rest for the Gemini, Imagen, and Embeddings APIs on Vertex AI to 11 new countries: Australia, Brazil, Finland, Hong Kong, India, Israel, Italy, Poland, Spain, Switzerland, and Taiwan.
59. When using Gemini 1.0 Pro and Imagen, you can now limit machine-learning processing to the United States or European Union.
60. Vertex AI hybrid search, in preview, integrates vector-based and keyword-based search techniques to ensure relevant and accurate responses for users.
61.
The new Vertex AI Agent Builder, in preview, lets developers build and deploy gen AI experiences using natural language or open-source frameworks like LangChain on Vertex AI.
62. Vertex AI includes two new text embedding models in public preview: the English-only text-embedding-preview-0409, and the multilingual text-multilingual-embedding-preview-0409.

Core infrastructure

Thomas with the Google Axion chip

63. We expanded Google Cloud’s compute portfolio, with major product releases spanning compute and storage for general-purpose workloads, as well as for more specialized workloads like SAP and high-performance databases.
64. Google Axion is our first custom Arm-based CPU designed for the data center, and will be in preview in the coming months.
65. Now in preview, the Compute Engine C4 general-purpose VM provides high performance paired with a controlled maintenance experience for your mission-critical workloads.
66. The general-purpose N4 machine series is built for price-performance with Dynamic Resource Management, and is generally available.
67. C3 bare-metal machines, available in an upcoming preview, provide workloads with direct access to the underlying server’s CPU and memory resources.
68. New X4 memory-optimized instances are now in preview, available through this interest form.
69. Z3 VMs are designed for storage-dense workloads that require SSD, and are generally available.
70. Hyperdisk Storage Pools Advanced Capacity (generally available) and Advanced Performance (preview) allow you to purchase and manage block storage capacity in a pool that’s shared across workloads.
71. Coming to general availability in May, Hyperdisk Instant Snapshots provide near-zero RPO/RTO for Hyperdisk volumes.
72. Google Compute Engine users can now use zonal flexibility, VM family flexibility, and mixed on-demand and spot consumption to deploy their VMs.

As part of our Google Distributed Cloud (GDC) offering, we announced:

73.
A generative AI search packaged solution powered by Gemma open models will be available in preview in Q2 2024 on GDC, to help customers retrieve and analyze data at the edge or on-premises.
74. GDC has achieved ISO 27001 and SOC 2 compliance certifications.
75. A new managed Intrusion Detection and Prevention Solution (IDPS) integrates Palo Alto Networks threat prevention technology with GDC, and is now generally available.
76. GDC Sandbox, in preview, helps application developers build and test services designed for GDC in a Google Cloud environment, without needing to navigate the air gap and physical hardware.
77. A preview GDC storage flexibility feature can help you grow your storage independent of compute, with support for block, file, or object storage.
78. GDC can now run in disconnected mode for up to seven days, and offers a suite of offline management features to help ensure deployments and workloads are accessible and working while they are disconnected; this capability is generally available.
79. New Managed GDC Providers who can sell GDC as a managed service include Clarence, T-Systems, and WWT. A new Google Cloud Ready — Distributed Cloud badge signals that a solution has been tuned for GDC.
80. GDC servers are now available with an energy-efficient NVIDIA L4 Tensor Core GPU.
81. Google Distributed Cloud Hosted (GDC Hosted) is now authorized to host Top Secret and Secret missions for the U.S. Intelligence Community, and Top Secret missions for the Department of Defense (DoD).

From our Google Cloud Networking family, we announced:

82. Gemini Cloud Assist, in preview, provides AI-based assistance for a variety of networking tasks, such as generating configurations, recommending capacity, correlating changes with issues, identifying vulnerabilities, and optimizing performance.
83.
Now generally available, the Model as a Service Endpoint solution uses Private Service Connect, Cloud Load Balancing, and App Hub to let model creators own the model service endpoint to which application developers then connect.
84. Later this year, Cloud Load Balancing will add enhancements for inference workloads: Cloud Load Balancing with custom metrics, Cloud Load Balancing for streaming inference, and Cloud Load Balancing with traffic management for AI models.
85. Cloud Service Mesh is a fully managed service mesh that combines Traffic Director’s control plane and Google’s open-source Istio-based service mesh, Anthos Service Mesh.

A service-centric Cross-Cloud Network delivers a consistent, secure experience from any cloud to any service, and includes the following enhancements:

86. Private Service Connect transitivity over Network Connectivity Center, available in preview this quarter, enables services in a spoke VPC to be transitively accessible from other spoke VPCs.
87. Cloud NGFW Enterprise (formerly Cloud Firewall Plus), now generally available, provides network threat protection powered by Palo Alto Networks, plus network security posture controls for org-wide perimeters and Zero Trust microsegmentation.
88. Identity-based authorization with mTLS integrates Identity-Aware Proxy with our internal Application Load Balancer to support Zero Trust network access, including client-side and, soon, back-end mutual TLS.
89. In-line network data-loss prevention (DLP), in preview soon, integrates Symantec DLP into Cloud Load Balancers and Secure Web Proxy using Service Extensions.
90. Partners Imperva, HUMAN Security, Palo Alto Networks, and Traceable are integrating their advanced web protection services into Service Extensions, as are web services providers Cloudinary, Nagra, Queue-it, and Datadog.
91. Service Extensions now has a library of code examples to customize origin selection, adjust headers, and more.
92.
Private Service Connect is now fully integrated with Cloud SQL, and generally available.

There are many improvements to our storage offerings:

93. Generate insights with Gemini lets you use natural language to analyze your storage footprint, optimize costs, and enhance security across billions of objects. It is available now through the Google Cloud console as an allowlist experimental release.
94. Google Cloud NetApp Volumes is expanding to 15 new Google Cloud regions in Q2’24 (GA) and includes a number of enhancements: dynamically migrating files by policy to lower-cost storage based on access frequency (preview in Q2’24), and increasing Premium and Extreme service levels up to 1PB in size, with throughput performance up to 3x (preview in Q2’24). NetApp Volumes also includes a new Flex service level enabling volumes as small as 1GiB.
95. Filestore now supports single-share backup for Filestore Persistent Volumes and GKE (generally available) and NFS v4.1 (preview), plus expanded Filestore Enterprise capacity up to 100TiB.

For Cloud Storage:

96. Cloud Storage Anywhere Cache now uses zonal SSD read cache across multiple regions within a continent (allowlist GA).
97. Cloud Storage soft delete protects against accidental or malicious deletion of data by preserving deleted items for a configurable period of time (generally available).
98. The new Cloud Storage managed folders resource type allows granular IAM permissions to be applied to groups of objects (generally available).
99. Tag-based at-scale backup helps manage data protection for Compute Engine VMs (generally available).
100. The new high-performance backup option for SAP HANA leverages persistent disk (PD) snapshot capabilities for database-aware backups (generally available).
101. As part of Backup and DR Service Report Manager, you can now customize reports with data from Google Cloud Backup and DR using Cloud Monitoring, Cloud Logging, and BigQuery (generally available).

Databases

102.
Database Studio, a part of Gemini in Databases, brings SQL generation and summarization capabilities to our rich SQL editor in the Google Cloud console, as well as an AI-driven chat interface.
103. Database Center lets operators manage an entire fleet of databases through intelligent dashboards that proactively assess availability, data protection, security, and compliance issues, along with smart recommendations to optimize performance and troubleshoot issues.
104. Database Migration Service is also integrated with Gemini in Databases, including assistive code conversion (e.g., from Oracle to PostgreSQL) and explainability features.

Likewise, AlloyDB gains a lot of new functionality:

105. AlloyDB AI lets gen AI developers build applications that accurately query data with natural language, just like they do with SQL; available now in AlloyDB Omni.
106. AlloyDB AI now includes a new pgvector-compatible index based on Google’s approximate nearest neighbor algorithms (ScaNN); it’s available as a technology preview in AlloyDB Omni.
107. AlloyDB model endpoint management makes it easier to call remote Vertex AI, third-party, and custom models; available in AlloyDB Omni today and soon on AlloyDB in Google Cloud.
108. AlloyDB AI “parameterized secure views” secure data based on end-users’ context; available now in AlloyDB Omni.

Bigtable, which turns 20 this year, got several new features:

109. Bigtable Data Boost, a pre-GA offering, delivers high-performance, workload-isolated, on-demand processing of transactional data, without disrupting operational workloads.
110. Bigtable authorized views, now generally available, allow multiple teams to leverage the same tables and securely share data directly from the database.
111. New Bigtable distributed counters, in preview, process high-frequency event data like clickstreams directly in the database.
112.
Bigtable large nodes, the first of several workload-optimized node shapes, offer more performance stability at higher server utilization rates, and are in private preview.

Memorystore for Redis Cluster, meanwhile:

113. Now supports both AOF (Append Only File)- and RDB (Redis Database)-based persistence, and has new node shapes that offer better performance and cost management.
114. Offers ultra-fast vector search, now generally available.
115. Includes new configuration options to tune max clients, max memory, max memory policies, and more, now in preview.

Firestore users, take note:

116. Gemini Code Assist now incorporates assistive capabilities for developing with Firestore.
117. Firestore now has built-in support for vector search using exact nearest neighbors, the ability to automatically generate vector embeddings using popular embedding models via a turn-key extension, and integrations with popular generative AI libraries such as LangChain and LlamaIndex.
118. Firestore Query Explain, in preview, can help you troubleshoot your queries.
119. Firestore now supports Customer-Managed Encryption Keys (CMEK) in preview, which allows you to encrypt data stored at rest using your own specified encryption key.
120. You can now deploy Firestore in any available supported Google Cloud region, and Firestore’s Scheduled Backup feature can now retain backups for up to 98 days, up from seven days.
121. Cloud SQL Enterprise Plus edition now offers advanced failover capabilities, such as orchestrated switchover and switchback.

Data analytics

122. BigQuery is now Google Cloud’s single integrated platform for data-to-AI workloads, with BigLake, BigQuery’s unified storage engine, providing a single interface across BigQuery native and open formats for analytics and AI workloads.
123. BigQuery now better supports Iceberg, with DDL, DML, and high-throughput support in preview, while BigLake now supports the Delta file format, also in preview.
124.
BigQuery continuous queries are in preview, providing continuous SQL processing over data streams and enabling real-time pipelines with AI operators or reverse ETL.

The above-mentioned Gemini in BigQuery enables all manner of new capabilities and offerings:

125. New BigQuery integrations with Gemini models in Vertex AI support multimodal analytics, vector embeddings, and fine-tuning of LLMs.
126. BigQuery Studio provides a collaborative data workspace; the choice of SQL, Python, Spark, or natural language directly; and new integrations for real-time streaming and governance; it is now generally available.
127. The new BigQuery data canvas provides a notebook-like experience with embedded visualizations and natural language support, courtesy of Gemini.
128. BigQuery can now connect models in Vertex AI with enterprise data, without having to copy or move data out of BigQuery.
129. You can now use BigQuery with Gemini 1.0 Pro Vision to analyze both images and videos by combining them with your own text prompts, using familiar SQL statements.
130. Column-level lineage in BigQuery and expanded lineage capabilities for Vertex AI pipelines will be in preview soon.

Other updates to our data analytics portfolio include:

131. Apache Kafka for BigQuery, a managed service now in preview, enables streaming data workloads based on open-source APIs.
132. A serverless engine for Apache Spark integrated within BigQuery Studio is now in preview.
133. Dataplex features expanded data-to-AI governance capabilities in preview.

Developers & operators

Gemini Code Assist includes several new enhancements:

134. Full codebase awareness, in preview, uses Gemini 1.5 Pro to make complex changes, add new features, and streamline updates to your codebase.
135. A new code transformation feature, available today in Cloud Workstations and Cloud Shell Editor, lets you use natural language prompts to tell Gemini Code Assist to analyze, refactor, and optimize your code.
136.
Gemini Code Assist now has extended local context, automatically retrieving relevant local files from your IDE workspace and displaying references to the files used.
137. With code customization, in private preview, Gemini Code Assist lets you integrate private codebases and repositories for hyper-personalized code generation and completions, and connects to GitLab, GitHub, and Bitbucket source-code repositories.
138. Gemini Code Assist extends to Apigee and Application Integration, in preview, to access and connect your applications.
139. We extended our partnership with Snyk to Gemini Code Assist, letting you learn about vulnerabilities and common security topics right within your IDE.
140. The new App Hub provides an accurate, up-to-date representation of deployed applications and their resource dependencies. Integrated with Gemini Cloud Assist, App Hub is generally available.

Users of our Cloud Run and Google Kubernetes Engine (GKE) runtime environments can look forward to a variety of features:

141. Cloud Run application canvas lets developers generate, modify, and deploy Cloud Run applications with integrations to Vertex AI, Firestore, Memorystore, and Cloud SQL, as well as load balancing and Gemini Cloud Assist.
142. GKE now supports container and model preloading to accelerate workload cold starts.
143. GPU sharing with NVIDIA Multi-Process Service (MPS) is now offered in GKE, enabling concurrent processing on a single GPU.
144. GKE supports GCS FUSE read caching, now generally available, using a local directory as a cache to accelerate repeat reads for small and random I/Os.
145. GKE Autopilot mode now supports NVIDIA H100 GPUs, TPUs, reservations, and Compute Engine committed use discounts (CUDs).
146. Gemini Cloud Assist in GKE is available to help with optimizing costs, troubleshooting, and synthetic monitoring.

Cloud Billing tools help you track and understand Google Cloud spending, pay your bill, and optimize your costs; here are a few new features:

147.
Support for Cloud Storage costs at the bucket level and storage tags is included out of the box with Cloud Billing detailed data exports to BigQuery.
148. A new BigQuery data view for FOCUS allows users to compare costs and usage across clouds.
149. You can now convert cost management reports into BigQuery billing queries right from the Cloud Billing console.
150. A new Cloud FinOps Anomaly Detection feature is in private preview.
151. FinOps hub is now generally available and adds support to view top savings opportunities, and a preview of our FinOps hub dashboard lets you analyze costs by project, region, or machine type.
152. A new CUD Analysis solution is available across Google Compute Engine resource families, including TPU v5e, TPU v5p, A3, H3, and C3D.
153. There are new spend-based CUDs available for Memorystore, AlloyDB, Bigtable, and Dataflow.

Security

Building on natural language search and case summaries in Chronicle, Gemini in Security Operations is coming to the entire investigation lifecycle, including:

154. A new assisted investigation feature, generally available at the end of this month, that guides analysts through their workflow in Chronicle Enterprise and Chronicle Enterprise Plus.
155. The ability to ask Gemini for the latest threat intelligence from Mandiant directly in-line — including any indicators of compromise found in their environment.
156. Gemini in Threat Intelligence, in public preview, allows you to tap into Mandiant’s frontline threat intelligence using conversational search.
157. VirusTotal now automatically ingests OSINT reports, which Gemini summarizes directly in the platform; generally available now.
158. Gemini in Security Command Center now lets security teams search for threats and other security events using natural language, in preview, and provides summaries of critical- and high-priority misconfiguration and vulnerability alerts, as well as of attack paths.
159.
Gemini Cloud Assist also helps with security tasks, via: IAM Recommendations, which can provide straightforward, contextual recommendations to remove roles from over-permissioned users or service accounts; Key Insights, which help during encryption key creation based on its understanding of your data, your encryption preferences, and your compliance needs; and Confidential Computing Insights, which recommends options for adding confidential computing protection to sensitive workloads based on your data and your compute usage.

Other security news includes:

160. The new Chrome Enterprise Premium, now generally available, combines the popular browser with Google threat and data protection, Zero Trust access controls, enterprise policy controls, and security insights and reporting.
161. Applied threat intelligence in Google Security Operations, now generally available, automatically applies global threat visibility to each customer’s unique environment.
162. Security Command Center Enterprise is now generally available and includes Mandiant Hunt, now in preview.
163. Identity and Access Management Privileged Access Manager (PAM), now available in preview, provides just-in-time, time-bound, and approval-based access elevations.
164. Identity and Access Management Principal Access Boundary (PAB) is a new, identity-centered control, now in preview, that enforces restrictions on IAM principals.
165. Cloud Next-Gen Firewall (NGFW) Enterprise is now generally available, including threat protection from Palo Alto Networks.
166. Cloud Armor Enterprise is now generally available and offers a pay-as-you-go model that includes advanced network DDoS protection, web application firewall capabilities, network edge policy, adaptive protection, and threat intelligence.
167. Sensitive Data Protection integration with Cloud SQL is now generally available, and is deeply integrated into the Security Command Center Enterprise risk engine.
168.
Key management with Autokey is now in preview, simplifying the creation and management of customer-managed encryption keys (CMEK).
169. Bare metal HSM deployments in PCI-compliant facilities are now available in more regions.
170. Regional Controls for Assured Workloads is now in preview and is available in 32 cloud regions in 14 countries.
171. Audit Manager automates control verification with proof of compliance for workloads and data on Google Cloud, and is in preview.
172. Advanced API Security, part of Apigee API Management, now offers shadow API detection in preview.

As part of our Confidential Computing portfolio, we announced:

173. Confidential VMs with Intel TDX are now in preview on the C3 machine series. For AI and ML workloads, we support Intel AMX, which provides CPU-based acceleration by default on C3 series Confidential VMs.
174. Confidential VMs on the general-purpose N2D machine series with AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) are now in preview.
175. Live Migration on Confidential VMs is now generally available on the N2D machine series across all regions.
176. Confidential VMs on the A3 machine series with NVIDIA H100 Tensor Core GPUs will be in private preview later this year.

Migration

177. The Rapid Migration Program (RaMP) now covers migration and modernization use cases that span applications and the underlying infrastructure, data, and analytics. For example, as part of RaMP for Storage: storage egress costs from Amazon S3 to Google Cloud Storage are now completely free, and Cloud Storage’s client libraries for Python, Node.js, and Java now support parallelized uploads and downloads.

Migration Center also includes several excellent new additions:

178. Migration use case navigator, for mapping out how to migrate your resources (servers, databases, data warehouses, etc.)
from on-prem and other clouds directly into Google Cloud, including new Cloud Spend Estimators for rapid TCO assessments of on-premises VMware and Exadata environments.
179. Database discovery and assessment for Microsoft SQL Server, PostgreSQL, and MySQL to Cloud SQL migrations.

Google Cloud VMware Engine, an integrated VMware service on Google Cloud, now offers:

180. The intent to support VMware Cloud Foundation License Portability.
181. General availability of larger instance type (ve2-standard-128) offerings.
182. Networking enhancements, including next-gen VMware Engine Networking, automated zero-config VPC peering, and Cloud DNS for workloads.
183. Terraform Infrastructure-as-Code automation.

Migrate to Virtual Machines helps teams migrate their workloads. Here’s what we announced:

184. A new Disk Migration solution for migrating disk volumes to Google Cloud.
185. Image Import (preview) as a managed service.
186. BIOS-to-UEFI Conversion, in preview, which automatically converts bootloaders to the newer UEFI format.
187. Amazon Linux Conversion, in preview, for converting Amazon Linux to Rocky Linux in Google Compute Engine.
188. CMEK support, so you maintain control over your own encryption keys.

When replatforming VMs to containers in GKE or Cloud Run, there’s:

189. The new Migrate to Containers (M2C) CLI, which generates artifacts that you can deploy to either GKE or Cloud Run.
190. The M2C Cloud Code extension, in preview, which migrates applications from VMs to containers running on GKE, directly in Visual Studio Code.

Here are the enhancements to our Database Migration Service:

191. Database Migration Service now offers AI-powered last-mile code conversion from Oracle to PostgreSQL.
192. Database Migration Service now performs migration from SQL Server (on any platform) to Cloud SQL for SQL Server, in preview.
193. Datastream now supports SQL Server as a CDC source for data movement to BigQuery destinations.

Migrating from a mainframe?
Here are some new capabilities:

194. The Mainframe Assessment Tool (MAT), now powered by gen AI, analyzes the application codebase, performing fit assessment and creating application-level summarization and test cases.
195. Mainframe Connector sends a copy of your mainframe data to BigQuery for off-mainframe analytics.
196. G4 refactors mainframe application code (COBOL, RPG, JCL, etc.) and data from their original state/programming language to a modern stack (Java).
197. Dual Run lets you run a new system side by side with your existing mainframe, duplicating all transactions and checking for completeness, quality, and effectiveness of the new solution.

Partners & ecosystem

198. Partners showcased more than 100 solutions that leverage Google AI on the Next ’24 show floor.
199. We announced the 2024 Google Cloud Partner of the Year winners.
200. Gemini models will be available in the SAP Generative AI Hub.
201. GitLab announced that its authentication, security, and CI/CD integrations with Google Cloud are now in public beta for customers.
202. Palo Alto Networks named Google Cloud its AI provider of choice and will use Gemini models to improve threat analysis and incident summarization for its Cortex XSIAM platform.
203. Exabeam is using Google Cloud AI to improve security outcomes for customers.
204. Global managed security services company Optiv is expanding support for Google Cloud products.
205. Alteryx, Dynatrace, and Harness are launching new features built with Google Cloud AI to automate workflows, support data governance, and enable users to better observe and manage their data.
206. A new Generative AI Services Specialization is available for partners who demonstrate the highest level of technical proficiency with Google Cloud gen AI.
207. We introduced new Generative AI Delivery Excellence and Technical Bootcamps, and advanced Challenge Labs in generative AI.
208.
The Google Cloud Ready - BigQuery initiative has 21 new partners: Actable, AgileData, Amplitude, Boostkpi, CaliberMind, Calibrate Analytics, CloudQuery, DBeaver, Decube, DinMo, Estuary, Followrabbit, Gretel, Portable, Precog, Retool, SheetGo, Tecton, Unravel Data, Vallidio, and Vaultree.
209. The Google Cloud Ready - AlloyDB initiative has six new partners: Boostkpi, DBeaver, Estuary, Redis, Thoughtspot, and SeeBurger.
210. The Google Cloud Ready - Cloud SQL initiative has five new partners: BoostKPI, DBeaver, Estuary, Redis, and Thoughtspot.
211. CrowdStrike is integrating its Falcon Platform with Google Cloud products.

Members of our Google for Startups program, meanwhile, will be interested to learn that:

212. The Google for Startups Cloud Program has a new partnership with the NVIDIA Inception startup program. The benefits include providing Inception members with access to Google Cloud credits, go-to-market support, technical expertise, and fast-tracked onboarding to Google Cloud Marketplace.
213. As part of the NVIDIA Inception partnership, Google for Startups Cloud Program members can join NVIDIA Inception and gain access to technological expertise, NVIDIA Deep Learning Institute course credits, NVIDIA hardware and software, and more. Eligible members of the Google for Startups Cloud Program can also participate in NVIDIA Inception Capital Connect, a platform that gives startups exposure to venture capital firms interested in the space.
214. The new Google for Startups Accelerator: AI-First program for startups building AI solutions based in the U.S. and Canada has launched, and its cohort includes 15 AI startups: Aptori, Augmend, Backpack Healthcare, BrainLogic AI, Cicerai, CLIKA, Easel AI, Findly, Glass Health, Kodif, Liminal, mbue, Modulo Bio, Rocket Doctor, and Sibli.
215.
The Startup Learning Center provides startups with curated content to help them grow with Google Cloud, and will be launching an offering for startup developers and future founders via Innovators Plus in the coming months Finally, Google Cloud Consulting, has the following services to help you build out your Google Cloud environment: 216. Google Cloud Consulting is offering no-cost, on-demand training to top customers through Google Cloud Skills Boost, including new gen AI skill badges: Prompt Design in Vertex AI, Develop Gen AI Apps with Gemini and Streamlit, and Inspect Rich Documents with Gemini Multimodality and Multimodal RAG. 217. The new Isolator solution protects healthcare data used in collaborations between parties using a variety of Google Cloud technologies including Chrome Enterprise Premium, VPC Service Controls, Chrome Enterprise, and encryption. 218. Google Cloud Consulting’s Delivery Navigator is now generally available to all Google Cloud qualified services partners. Phew. What a week! On behalf of Google Cloud, we’re so grateful you joined us at Next ‘24, and can’t wait to host you again next year back in Las Vegas at the Mandalay Bay on April 9 - 11 in 2025! View the full article
  5. Regardless of what you’re looking to migrate or modernize, Google Cloud is committed to helping you on that journey, and we’re excited to announce a number of new updates to our portfolio of migration products and programs.

What’s new with the Rapid Migration Program (RaMP)?

We understand that customers’ journeys to the cloud are driven by the need to unlock the value that cloud computing offers, from cost savings, efficiency gains, and scalability to catalyzing innovation. To help customers fully realize that value, RaMP now covers migration and modernization use cases that span applications and the underlying infrastructure, data, and analytics. RaMP now has pillar programs, including RaMP for Databases, RaMP for Mainframe Migration, and RaMP for Storage, to provide customers with prescriptive guidance for specific workloads. This ensures Google Cloud and our partners consider a holistic view of customers’ workloads, and provide a roadmap that allows customers to take full advantage of cloud-native services like serverless and containerization. Recently we also announced a range of partner incentives to accelerate those customer migration and modernization outcomes across our product portfolio, including our gen AI offerings. In addition to these program enhancements, Google Cloud also offers comprehensive first-party managed services and third-party tools (via Marketplace) to analyze, plan, and execute cost-effective migration and modernization projects efficiently.

What’s new with Migration Center?

A key part of RaMP is our unified platform, Migration Center, which helps you accelerate your end-to-end cloud journey from your current on-premises or cloud environments to Google Cloud.
With features like cloud spend estimation, asset discovery of your current environment, and a variety of tooling for different migration scenarios, we were excited to share some excellent new additions:

- Migration use case navigator lets you map out exactly how to migrate your resources (servers, databases, data warehouses, etc.) from on-premises and other clouds directly into Google Cloud. This provides you with a complete, prescriptive plan with options that you can review, consider, and confirm.
- New Cloud Spend Estimators give you a rapid TCO assessment of on-premises VMware and Exadata estates for moves to Google Cloud VMware Engine (GCVE) and BigQuery, respectively.
- Database discovery & assessment lets you map out migration plans for MSSQL, PostgreSQL, and MySQL to Cloud SQL. For each source database, you’ll get detailed information on technical fit with shape recommendations for Cloud SQL, and a thorough TCO report.

What’s new with migrating my VMware workloads?

For Google Cloud VMware Engine, an integrated VMware service with the highest availability SLA that helps you easily migrate or host VMware workloads within Google Cloud with all the hardware and licenses included, we announced some exciting new features and functionality:

- First cloud provider to announce intent to support VMware Cloud Foundation License Portability, which means broader access for on-premises customers that are looking to bring their VMware Cloud Foundation license and use it to transform their VMware estate in Google Cloud.
- General availability of a larger instance type (ve2-standard-128) that features 1.8x vCPU, 2.7x RAM, and 1.3x storage compared to ve1, plus 200 Gbps east-west connectivity.
- Advancements in networking, including next-gen VMware Engine networking, automated zero-config VPC peering, and Cloud DNS for workloads.
- Terraform infrastructure-as-code automation for GCVE: Google Cloud VMware Engine now supports additional Terraform resources for automating private cloud, cluster, and network management. These enhancements to the VMware Engine Terraform provider enable full-environment infrastructure as code for your VMware Engine environment.

What’s new with migrating my virtual machines and disks?

Whether you’re looking to migrate one application from on-premises or one thousand enterprise-grade applications across multiple data centers, Migrate to Virtual Machines gives any IT team, large or small, the power to migrate their workloads to Google Cloud. Here’s what we announced:

- Disk Migration offers a robust solution for migrating disk volumes, rather than VMs, to Google Cloud with confidence and ease, serving use cases such as self-managed database migrations, OS rebuilds, and more.
- Image Import (preview) is now a managed service within our Migrate to Virtual Machines product, which will reduce friction for customers and accelerate their import journeys, as well as improve usability and supportability.
- BIOS to UEFI Conversion (preview) automatically converts bootloaders to the newer UEFI format as part of the migration, which helps ensure that migrated VMs can leverage the full spectrum of Google Cloud’s security enhancements.
- Amazon Linux Conversion (preview) makes it easy to convert Amazon Linux to Rocky Linux in Google Compute Engine.
- CMEK support lets you use Customer-Managed Encryption Keys (CMEK) within Migrate to Virtual Machines, letting you maintain control over your own encryption keys and further securing your migration to Google Cloud.

What’s new with replatforming my virtual machines to containers in GKE or Cloud Run?

The new Migrate to Containers (M2C) CLI is purpose-built for application owners, for fast and easy modernization of application components that run on VMs, using your local machine.
The CLI generates artifacts that you can deploy to GKE or Cloud Run. Offline mode enables you to use the CLI locally without pulling resources from the internet at runtime. We also announced a preview of our M2C Cloud Code extension, which provides an integrated experience to migrate applications from VMs to containers running on GKE, directly in Visual Studio Code. The extension provides a guided replatforming journey, technical fit assessment, and automated artifact generation to run existing applications on GKE, and helps boost developer productivity.

What’s new with migrating my databases?

Start migrating databases in just a few clicks with a single, integrated conversion and migration experience. Reduced migration complexity means you can enjoy the benefits of the cloud sooner. Here are some of our announcement highlights:

- Database Migration Service AI-powered last-mile code conversion from Oracle (on any platform) to PostgreSQL. Gemini automates the conversion of database code, such as stored procedures, functions, triggers, packages, and custom PL/SQL code, for Cloud SQL for PostgreSQL or AlloyDB destinations. To see it in action, watch this video. Plus, explainability (in preview) helps you reskill and retrain PL/SQL developers by allowing side-by-side comparison of the dialects, along with a detailed explanation of the code and recommendations.
- DMS for SQL Server migrations (preview), a new homogeneous, minimal-downtime migration from SQL Server (on any platform) to Cloud SQL for SQL Server.
- In Datastream, SQL Server as a source for CDC introduces seamless, next-level data movement to BigQuery destinations.
- The new RaMP for database modernization program combines best practices, tools, and attractive funding to help customers move from Oracle and SQL Server to Google Cloud databases.

What’s new with modernizing mainframes?
Google Cloud has significantly increased our investment in mainframe modernization to support our biggest customers running legacy applications on mainframe and midrange systems. With a wide portfolio of first- and third-party tooling, and key partnerships with delivery and tech partners for a complete end-to-end modernization solution, we were pleased to announce some great new functionality across the entire mainframe modernization journey (what we’re playfully referring to as our AARDVARK):

- Assess - The Mainframe Assessment Tool (MAT), now powered by gen AI, accelerates your migration journey by analyzing the application codebase, performing fit assessment, and creating application-level summarization and test cases. Watch this video to see it in action.
- Augment - Mainframe Connector sends a copy of your mainframe data to BigQuery for off-mainframe analytics, which you can see in this video.
- Refactor & rewrite - G4 refactors your mainframe application code (COBOL, RPG, JCL, etc.) and data from their original state/programming language to a modern stack (Java), or leverages AI to rewrite it entirely.
- De-risk - Dual Run lets you run a new system side by side with your existing mainframe, duplicating all transactions and checking the completeness, quality, and effectiveness of the new solution. It was designed and implemented together with Santander Bank, which is using it in production.

Partners participating in RaMP can also now leverage the new incentives to drive discovery and modernization of mainframe workloads to Google Cloud.

What’s new with storage migration?

Sometimes you just have a bunch of storage you need to migrate to Google Cloud. To help support that, we’ve got two exciting new announcements as part of the RaMP for Storage pillar program:

- Storage egress costs from Amazon S3 to Google Cloud Storage are now completely free.
- Cloud Storage's client libraries for Python, Node.js, and Java now support parallelization of uploads and downloads, greatly speeding up reads and writes for applications built on top of them. This functionality was previously available via Google's command-line interfaces, and is now fully integrated into the client libraries, ensuring that Cloud Storage customers get optimal performance regardless of which transfer mechanism they use.

That’s a wrap! What’s next?

Our investment in making your migration and modernization projects fast and easy continues to be a huge focus for Google Cloud. In addition to all of the exciting announcements above, we’ve got so much more to come, especially as we continue to bake generative AI into our migration portfolio. Keep an eye out for more exciting news in 2024! If you are ready to take the next step, you can visit our Rapid Migration Program website or sign up for a free discovery and assessment of your existing IT landscape, and then work with our migration experts to map out a plan to get you to the cloud!

Related Article: IDC: Migrating to Google Cloud IaaS has a 318 percent ROI. Google Cloud’s AI-optimized infrastructure services can drive meaningful business growth for organizations worldwide, according to IDC.

View the full article
  6. At Next ’24, we announced several FinOps innovations to help our customers inform on their cloud costs, optimize their cloud spend, and avoid day-to-day cost surprises. Didn’t attend Next? No problem: read our summary of Google Cloud FinOps announcements here.

1. Driving cloud FinOps with game-changing billing cost data

Cloud Billing cost data, specifically the output of price (p) x quantity (q), is the essential building block of any FinOps practice: you can’t optimize what you can’t see. The Cloud FinOps team has been working hard to ensure our cost data — what you pay for cloud — is the most timely, complete, and open.

- Timely: Google Cloud has focused on making end-to-end latency improvements to our cost data. During Next, we announced a reduction of 32%, bringing P99 end-to-end latency (the time it takes for 99% of costs to appear within Cloud Billing exports to BigQuery and our UI) to under 24 hours.
- Complete: Cloud Billing data is unique because Google Cloud streams its billing data. Streaming not only makes our billing data more timely but also more complete throughout the month, because we publish net costs (including most credits) when they occur. Our cost data can be relied upon at any time — it’s not an estimate. To increase data completeness, Cloud Billing announced support for Cloud Storage costs at the bucket level and storage tags, which comes out of the box with Cloud Billing detailed data exports to BigQuery.
- Open: Cloud Billing helped author the v0.5 and v1.0 Preview of the FinOps Open Cost and Usage Specification (FOCUS), which aims to standardize cloud billing data into one common data schema. In conjunction with this work, we released a new BigQuery data view to help our customers compare costs and usage across clouds. The new view allows users to explore and query Google Cloud data in the FOCUS v1.0 Preview fields and formats.
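To make the price-times-quantity building block concrete, here is a toy sketch of how a net cost falls out of it. The field layout and the figures are illustrative assumptions for this post, not the actual Cloud Billing export schema:

```python
# Toy illustration of the Cloud Billing building block: cost is
# price (p) x quantity (q), with credits netted out as they occur, so the
# running total is a net figure rather than an estimate.
# Names and numbers here are hypothetical, not the real export schema.

def net_cost(line_items):
    """line_items: iterable of (price_per_unit, quantity, credits) tuples."""
    return sum(p * q - credits for p, q, credits in line_items)

items = [
    (0.026, 730, 0.0),   # e.g., 730 hours of a small VM at $0.026/hour
    (0.020, 500, 2.50),  # 500 GiB-months of storage with a $2.50 credit
]
print(round(net_cost(items), 2))
```

Summing p x q per line item and applying credits when they land is exactly why streamed billing data stays complete mid-month: the same fold works at any point in the billing period.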
For those of you thinking that you don’t have a team of Google developers to analyze all this great data, we’ve got you covered. We also announced the ability to convert cost management reports into BigQuery billing queries right in the Cloud Billing console, so now anyone can easily visualize their data and dive deep — and you don’t have to be a data scientist to do it.

Filter to the desired cost report view and generate its underlying query to use for exporting Billing data to BigQuery.

2. Unlocking cost anomaly detection for all with the power of AI

To get ahead of cost surprises, customers can now use AI to continuously monitor costs and identify spikes that deviate from historical spend patterns with the preview of the new Cloud FinOps anomaly detection feature. Other benefits for customers include:

- No setup needed: Cost anomaly detection is available in the Cloud Billing console to any Google Cloud account that has a minimum of 30 days of historical spend and $100 of spend over the last six months.
- Anomaly tracking across all projects: Continuously monitor costs across all projects within an account at your specified cost-impact threshold.
- Granular root cause analysis: Easily identify the top three services, regions, and SKUs driving unexpected cost increases, with a detailed root cause analysis for every anomaly detected.

Interested in giving it a try? Join the preview today to help us shape the future of this feature.

A root cause analysis for an anomaly that shows the top drivers of spend, including service, region, and SKU.

3. Maximizing committed use discounts with hassle-free management

As the State of FinOps Report 2024 highlights, understanding and managing commitment-based discounts is a critical priority for maximizing cloud cost efficiency. Google Cloud is committed to empowering your FinOps success with a powerful suite of new features designed to optimize your committed use discounts (CUDs).
These new features provide:

- More clarity and better control: Get unmatched visibility into both your spend-based and resource-based commitments with Unified Committed Use Discount (CUD) Analysis, now in preview. This new integrated solution, fueled by the FinOps hub dashboard, enables you to effortlessly analyze costs at the most granular level – by project, region, or machine type. With Unified CUD Analysis, you can go beyond reporting and deliver actionable insights that directly improve your cost optimization strategies. Interested in joining? Sign up for the preview now.
- Better insights and tracking of your CUDs: Granular CUD metadata brings even more transparency to your CUDs. Now, you'll find a breakdown of your CUD fees linked to individual subscription IDs within your detailed BigQuery export from Cloud Billing. Combined with the improved CSV download of your CUD inventory, you can easily track costs, analyze commitment lifecycles, and gain a comprehensive view of your commitments using your own naming conventions.
- More CUDs and more ways to view savings in FinOps hub: You can leverage the new CUD Analysis solution across multiple Google Compute Engine (GCE) resource families, including TPU v5e, TPU v5p, A3, H3, and C3D. Additionally, we introduced new spend-based CUDs to unlock cost savings for Memorystore, AlloyDB, Bigtable, and Dataflow. All of these new CUDs come together for even greater cost savings opportunities in the FinOps hub, which is now generally available. The FinOps hub even comes with additional support to view top savings opportunities at the project level, so you can prioritize your optimization efforts for maximum impact.

View top savings opportunities by project within the FinOps hub, now generally available (GA).

If you are still reading this, our only request is that you sign up for any of our ongoing previews and get us your feedback today.
We work closely with our customers on developing our products and value your opinions and ideas, so don’t hesitate to let us know what you think. And if this excites you, we are just getting started! We have a lot more planned for this year, so stay tuned for FinOps X, June 19-22, 2024, where we will have a chance to meet up again in person and share even more exciting FinOps product developments! View the full article
  7. Developers love Firestore because of how fast they can build applications and services end to end. As of today, Firestore has more than 500,000 monthly active developers, and Firestore apps power more than 1.3 billion monthly active end users via Firebase Auth. Recently, we’ve made updates to Firestore to improve developer productivity, enable developers to build the next generation of AI-enabled applications, express richer queries, and help ensure that Firestore databases meet enterprises’ ever-growing needs.

Using Gemini to build applications with Firestore

We’re excited to share that Gemini Code Assist now incorporates assistive capabilities for developing with Firestore. Using Gemini Code Assist, along with your favorite integrated development environment (IDE), you can use natural language to define your Firestore data models and write queries. For example, you can express a query using a natural language statement like “get products from Firestore inventory collection” and translate that into Firestore SDK code in your favorite programming language.

Use natural language to write a Firestore query with Gemini Code Assist

To get started, refer to the Gemini Code Assist documentation to install the plugin for your favorite IDE, including Visual Studio Code, IntelliJ, or Cloud Code. You can also use Gemini to generate solution architectures and provision Firestore resources. Simply use the Gemini in Cloud Run application canvas, in private preview, and type in a prompt like “LangChain app with Firestore and Vertex.”

Use natural language with the Gemini in Cloud Run application canvas to generate a solution architecture that includes Firestore

You can learn more about the Gemini in Cloud Run application canvas here.

Build next-gen AI-enabled applications

If you’re trying to build AI-enabled solutions such as a chatbot or recommendations engine, Firestore has you covered.
Firestore now has built-in support for vector search using exact nearest neighbors, the ability to automatically generate vector embeddings using popular embedding models via a turn-key extension, and integrations with popular generative AI libraries such as LangChain and LlamaIndex. Here’s an example of using Firestore’s vector search capabilities:

```python
collection_ref = collection('beans')
(collection_ref
    .where("type", "==", "arabica")
    .find_nearest(
        vector_field="embedding_field",
        query_vector=Vector([0.1, 0.2, …, 1]),
        distance_measure=DistanceMeasure.COSINE,
        limit=5))
```

To get started, refer to the Firestore Vector Search documentation, the Firestore Vector Search extension documentation for generating embeddings, and the documentation for Firestore’s LangChain and LlamaIndex integrations.

Express richer queries

Often, applications need to express queries that filter on range conditions across multiple fields. For example, if you run an e-commerce site, an end user might want to filter based on t-shirt size and price range. With the recent launch of queries using range filters on multiple fields, in preview, you can easily and cost-efficiently perform these queries directly in Firestore:

```java
db.collection("products")
    .whereGreaterThanOrEqualTo("price", 100)
    .whereGreaterThanOrEqualTo("rating", 4);
```

To get started, refer to the multiple range queries documentation. We’re also introducing Firestore Query Explain in preview to help you troubleshoot your queries.
You can run Explain to retrieve the proposed query plan, or optionally indicate that you’d like to execute the query and analyze its performance and billing, retrieving the actual query results, using a special analyze flag:

```java
Query query = db.collection("products")
    .whereGreaterThanOrEqualTo("price", 100)
    .whereGreaterThanOrEqualTo("rating", 4);

ExplainResults<QuerySnapshot> explainResults =
    query.explain(ExplainOptions.builder().analyze(true).build());

ExplainMetrics metrics = explainResults.getMetrics();

System.out.println(metrics.getExecutionStats());
```

Here’s the output:

```json
{
  "executionStats": {
    "resultsReturned": "2",
    "bytesReturned": "190",
    "executionDuration": "0.059943s",
    "readOperations": "3",
    "debugStats": {
      "index_entries_scanned": "500",
      "documents_scanned": "2",
      "billing_details": {
        "index_entries_billable": "500",
        "documents_billable": "2",
        "small_ops": "0",
        "min_query_cost": "0"
      }
    }
  }
}
```

To get started, refer to the Query Explain documentation.

Meeting enterprise needs

One key aspect of enterprise readiness is privacy. Firestore now supports Customer-Managed Encryption Keys (CMEK) in preview, which lets you encrypt data stored at rest using your own specified encryption key. This is an alternative to Firestore’s default behavior, which uses a Google-managed encryption key to encrypt your data. Your customer-specified key can be stored in the Cloud Key Management Service, or you can even use your own external key manager. Get started with the Firestore CMEK documentation.
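As an aside on how customer-managed keys generally work: most systems use envelope encryption, where each object is encrypted with a data encryption key (DEK) and the DEK is wrapped by a key encryption key (KEK) that you control. The sketch below is a conceptual illustration only; the XOR keystream stands in for a real cipher, and nothing here reflects Firestore's internal implementation:

```python
# Conceptual sketch of the envelope-encryption pattern behind
# customer-managed keys: each object is encrypted with a data key (DEK),
# and the DEK is wrapped by a key you control (KEK). Revoking the KEK
# leaves only unreadable ciphertext and a wrapped DEK behind.
# The XOR keystream "cipher" below is a toy stand-in for illustration;
# it is NOT real cryptography and not how Firestore implements CMEK.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt_with_cmek(plaintext: bytes, customer_key: bytes):
    dek = secrets.token_bytes(32)                   # per-object data key
    ciphertext = keystream_xor(dek, plaintext)      # data encrypted with the DEK
    wrapped_dek = keystream_xor(customer_key, dek)  # DEK wrapped by the customer KEK
    return ciphertext, wrapped_dek                  # only the wrapped DEK is stored

def decrypt_with_cmek(ciphertext: bytes, wrapped_dek: bytes, customer_key: bytes):
    dek = keystream_xor(customer_key, wrapped_dek)  # unwrap the DEK, then decrypt
    return keystream_xor(dek, ciphertext)

kek = secrets.token_bytes(32)
ciphertext, wrapped = encrypt_with_cmek(b"sensitive document", kek)
assert decrypt_with_cmek(ciphertext, wrapped, kek) == b"sensitive document"
```

In real deployments the wrap/unwrap step is what Cloud KMS or an external key manager performs, which is why destroying or revoking the customer key renders the stored data unrecoverable.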
And to help minimize serving latency and maximize data privacy, you can now deploy Firestore in any available Google Cloud region. You can review a full list of supported Firestore locations and pricing here. Lastly, you can now retain daily backups using Firestore’s Scheduled Backup feature for up to 98 days, up from seven days. Get started with Firestore Scheduled Backup and Restore today.

Next steps

To learn more about Firestore and the new features launching at Next ‘24, check out the following resources:

- Getting started on Firestore: server client library or web & mobile client library
- Firestore Vector Search documentation
- Firestore Vector Search extension to automatically generate embeddings documentation
- Firestore gen AI library integrations: LangChain and LlamaIndex
- Firestore multiple range queries documentation
- Firestore Query Explain documentation
- Firestore Customer-Managed Encryption Keys documentation
- Firestore locations and pricing documentation
- Firestore Scheduled Backup and Restore documentation

View the full article
  8. Google Cloud's flagship cloud conference — Google Cloud Next — wrapped up April 11, and HashiCorp was fully engaged with demos, breakout sessions, presentations, and experts at our lively booth. This post shares announcements from the event and highlights recent developments in our partnership. HashiCorp and Google Cloud help organizations control cloud spend, improve their risk profile, and unblock developer productivity for faster time to market. The strength of our partnership can be seen in this recent milestone: the Google Cloud Terraform provider has now surpassed 600 million downloads. The sheer scale of that number demonstrates that HashiCorp technologies underpin a significant portion of Google Cloud services and provide developer-friendly ways to scale infrastructure and security. HashiCorp-Google Cloud developments on display at Google Cloud Next include:

Partnership update:
* HashiCorp joins the new Google Distributed Cloud partner program

Product integrations:
* Secrets sync with Google Cloud Secrets Manager
* Terraform Google Cloud provider-defined functions
* Consul fleet-wide availability on GKE Autopilot

Presentations, demos, and webinars:
* On the floor at Google Cloud Next
* Scaling infrastructure as code with Terraform on Google Cloud

Partnership update

HashiCorp joins the new Google Distributed Cloud partner program

At Google Cloud Next, Google announced that customers can now take advantage of an expanded marketplace of independent software vendors (ISVs) with the Google Cloud Ready — Distributed Cloud program. The new program works with partners to validate their solutions by tuning and enhancing existing integrations and features to better support customer use cases for Google Distributed Cloud (GDC), which can help customers identify software solutions that are compatible with GDC more quickly. HashiCorp is part of a diverse group of software partners that have committed to validating their solutions in this program. Read Google’s blog post to learn more about the program.
Product integrations

Sync secrets with Google Cloud Secrets Manager

Vault Enterprise secrets sync, now generally available in Vault Enterprise 1.16, is a new feature that helps organizations manage secrets sprawl by centralizing the governance and control of secrets that are stored within other secret managers. Secrets sync lets users manage multiple external secrets managers, which are called destinations in Vault. We’re proud to announce that, at the time of Vault 1.16’s launch, Google Cloud Secrets Manager is one of several supported destinations.

Terraform Google Cloud provider-defined functions

We have announced the general availability of provider-defined functions in the Google Cloud Terraform provider. This release represents yet another step in our unique approach to ecosystem extensibility. Provider-defined functions allow anyone in the Terraform community to build custom functions within providers and extend the capabilities of Terraform. You can find examples of provider-defined functions in the officially supported Google Cloud and Kubernetes providers in our blog post on Terraform 1.8 adding provider functions.

Consul fleet-wide availability on GKE Autopilot

As more customers use multiple cloud services or microservices, they face the difficulty of consistently managing and connecting their services across various environments, including on-premises datacenters, multiple clouds, and existing legacy systems. HashiCorp Consul's service mesh addresses this challenge by securely and consistently connecting applications on any runtime, network, cloud platform, or on-premises setup. In the Google Cloud ecosystem, Consul can be deployed across Google Kubernetes Engine (GKE) and Anthos GKE. Now, Consul 1.16 is also supported on GKE Autopilot, Google Cloud’s fully managed Kubernetes platform for containerized workloads. Consul 1.17 is currently on track to be supported on GKE Autopilot later this year.
You can learn more about the benefits of GKE Autopilot and how to deploy Consul on GKE Autopilot in our blog post on GKE Autopilot support for Consul.

Presentations, demos, and webinars

On the floor at Google Cloud Next

HashiCorp held two speaking sessions at Google Cloud Next: Multi-region, multi-runtime, multi-project infrastructure as code, and Scaling Infrastructure as Code: Proven Strategies and Productive Workflows. These sessions were recorded and will be posted on the Google Cloud Next homepage later in April. You can also join our upcoming webinar, which will cover many of the concepts from these talks (more on the webinar in a moment). Google Cloud Next also featured a generative AI demo where customers could discover more than 100 generative AI solutions from partners. HashiCorp was selected for the demo and presented an AI debugger for Terraform that resolves run issues, to better identify and remediate developer infrastructure deployment challenges. To learn more, check out the GitHub repo and read the Google Cloud partner gen AI demo blog.

Webinar: Scaling infrastructure as code with Terraform on Google Cloud

Now that Google Cloud Next is over, it’s time to make plans to join me and HashiCorp Developer Advocate Cole Morrison in our upcoming Scaling Infrastructure as Code on Google Cloud webinar on Thursday, May 2, at 9 a.m. PT. We’ll cover proven strategies and approaches for creating scalable infrastructure as code with HashiCorp Terraform, showing how large organizations find success in structuring projects across teams, architecting globally available systems, sharing sensitive information between environments securely, setting up developer-friendly workflows, and more. You'll see it all in action with a live demo and a codebase that deploys many services across multiple teams. View the full article
  9. Hello from Las Vegas, where day two of Google Cloud Next ‘24 just wrapped up. What happened in Vegas today? We got hands-on with Gemini and AI agents. At the annual developer keynote, Google Cloud Chief Evangelist Richard Seroter told the audience how Gemini in Google Cloud can not only meet you where you are today, “but frankly take you further than anyone else can.”

Google Cloud Next '24 Developer Keynote

In a wide-ranging presentation full of demos, Richard and Senior Developer Advocate Chloe Condon dove deep with fellow Googlers and partners into the Google Cloud AI technologies and integrations that help with the core tasks Google Cloud customers do every day: build, run, and operate amazing applications. Let’s take a deeper look.

Build

Google Cloud’s generative AI experience for developers starts with Gemini Code Assist. Google Cloud VP and GM Brad Calder showed the audience how support for Gemini 1.5 in Code Assist enables a 1M-token context window — the largest in the industry. Then, Google Cloud Developer Advocate Jason Davenport showed how Gemini Cloud Assist makes it easier to design, operate, troubleshoot, and optimize your application by using context from your specific cloud environment and resources, be they error logs, load balancer configurations, or firewall rules — you name it. Finally, with Gemini embedded across Google Cloud applications like BigQuery and Looker, support in Google Cloud databases for vector search and embeddings, plus integrations into developer tools like Cloud Workstations and web user interface libraries like React, developers got a taste of what AI brings to the table: now you can add AI capabilities like taking multi-modal inputs (i.e., both text and images) and use them to create recommendations, predictions, and syntheses — all in a fraction of the time it took before. Google Cloud Product Manager Femi Akinde and Chloe showed us how to go from a great idea to an immersive, inspirational AI app in just a few minutes.
New things that make this possible:

App Hub - Announced today, and with a deep integration into Gemini Cloud Assist, App Hub provides an accurate, up-to-date representation of deployed applications and their resource dependencies, regardless of the specific Google Cloud products they use.

BigQuery continuous queries - In preview, BigQuery can now provide continuous SQL processing over data streams, enabling real-time pipelines with AI operators or reverse ETL.

Natural language support in AlloyDB - With support for Google's state-of-the-art ScaNN algorithm, AlloyDB users get the enhanced vector performance that powers some of Google's most popular services.

Gemini Code Assist in Apigee API management - Use Gemini to help you build enterprise-grade APIs and integrations using natural language prompts.

Run

Building a generative AI app is one thing, but how do you make it production-grade? "That's the question of the moment," Google Cloud Developer Advocate Kaslin Fields told the audience. Thankfully, Google Cloud platforms like Cloud Run make it ridiculously fast to stand up and scale an application, while platforms like Google Kubernetes Engine (GKE) provide a robust feature set to power the most demanding or unique AI applications.

New things that make this possible:

Cloud Run application canvas - Generate, modify, and deploy AI applications in Cloud Run, with integrations to Vertex AI so you can consume generative APIs from Cloud Run services in just a few clicks.

Gen AI Quick Start Solutions for GKE - Run AI on GKE with a Retrieval Augmented Generation (RAG) pattern, or integrated with Ray.

Support for Gemma on GKE - GKE offers many paths for running Gemma, Google's open model based on Gemini. Better yet, the performance is excellent.

Operate

"AI apps can produce emergent behaviors, resulting in novel issues," said Google Cloud Reliability Advocate Steve McGhee during the developer keynote.
Indeed, "our systems used to fail in fairly predictable ways," said another presenter, Charity Majors, cofounder and CTO at Honeycomb.io. But now, "our systems are dynamic and chaotic, our architectures are far-flung and diverse and constantly changing." What generative AI taketh away — the predictability of the same old, same old — it also giveth back in the form of new tools to help you understand and deal with change.

New things that make this possible:

Vertex AI MLOps capabilities - In preview, Vertex AI Prompt Management lets customers experiment with, migrate, and track prompts and parameters, so they can compare prompt iterations and assess how small changes impact outputs, while Vertex AI Rapid Evaluation helps users evaluate model performance when iterating on the best prompt design.

Shadow API detection - In preview in Advanced API Security, shadow API detection helps you find APIs that lack proper oversight or governance and so could be the source of damaging security incidents.

Confidential Accelerators for AI workloads - Confidential VMs on the A3 machine series with NVIDIA H100 Tensor Core GPUs extend hardware-based data and model protection from the CPU to the GPUs that handle sensitive AI and machine learning data.

GKE container and model preloading - In preview, GKE can now accelerate workload cold starts to improve GPU utilization, save money, and keep AI inference latency low.

Then it was off to another day of spotlights, breakout sessions (295 on Day 2 alone), and trainings before the party tonight, where attendees will be entertained by Kings of Leon and Anderson .Paak. Tomorrow, Day 3, is also jam-packed, with sessions running all day, including reruns of many of the sessions you may have missed earlier in the week — be sure to add them to your agenda. And don't forget to bring your badge to the party tonight! View the full article
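The prompt-iteration workflow that Vertex AI Prompt Management and Rapid Evaluation enable — record prompt versions, then compare them side by side — can be sketched in plain Python. This is a minimal, hypothetical stand-in (the `PromptLog` class and its methods are illustrative, not the Vertex AI SDK):

```python
# Minimal sketch of tracking prompt iterations and comparing outputs.
# PromptLog is a hypothetical helper, not part of the Vertex AI SDK.

class PromptLog:
    def __init__(self):
        self.versions = []  # each entry: {"prompt": ..., "note": ...}

    def add(self, prompt, note=""):
        """Record a new prompt iteration with an optional status note."""
        self.versions.append({"prompt": prompt, "note": note})
        return len(self.versions) - 1  # version id

    def compare(self, v1, v2, score_fn):
        """Score two iterations with a caller-supplied evaluation function
        and report which one wins (a toy side-by-side evaluation)."""
        s1 = score_fn(self.versions[v1]["prompt"])
        s2 = score_fn(self.versions[v2]["prompt"])
        return {"scores": (s1, s2), "winner": v1 if s1 >= s2 else v2}

log = PromptLog()
a = log.add("Summarize this ticket.", note="baseline")
b = log.add("Summarize this support ticket in two sentences for an SRE.",
            note="added audience + length constraint")

# Stand-in scorer: reward specificity (word count); a real evaluation
# would score model responses, not the prompts themselves.
result = log.compare(a, b, score_fn=lambda p: len(p.split()))
print(result["winner"])  # → 1 (the more specific prompt)
```

In a real setup, `score_fn` would be replaced by a model-backed evaluator; the point is that versioning prompts alongside notes makes small wording changes comparable over time.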
  10. Hello from sunny Las Vegas, where we kicked off Google Cloud Next '24 today! What happened in Vegas on the first day of Next? In a word, AI. Read on for highlights of the day. We started things off with a reminder of how companies are using AI — not as a future thing, but a today thing.

New way, today

Google and Alphabet CEO Sundar Pichai then joined the Next keynote on screen, reminding us how far we've come with generative AI — and how fast. "Last summer, we were just beginning to imagine how this technology could transform businesses, and today, that transformation is well underway," Sundar said. On the keynote stage, Thomas Kurian and executives like Amin Vahdat, Aparna Pappu, and Brad Calder highlighted some of the biggest, most recognizable brands today: Goldman Sachs, Mercedes, Uber, Walmart, and Wayfair, to name a few. And throughout, they announced some incredible new products, partners, and technologies. Here's just a small taste of all the things we announced today, across four key themes:

1. Use AI to do amazing things

To quote Sundar, we're using AI to build products that are "radically more helpful." Whether you're a developer creating the next great app, an architect building and managing infrastructure, an end user collaborating on content with your colleagues, or a data scientist plumbing the depths of your business data, here are just a few of the new ways that Google AI can help you do your job better.

What we announced:

Gemini for Google Cloud helps users build AI agents to work and code more efficiently, manage their applications, gain deeper data insights, and identify and resolve security threats — all deeply integrated in a range of Google Cloud offerings:

Gemini Code Assist, the evolution of Duet AI for Developers, lets developers use natural language to add to, change, analyze, and streamline their code, across their private codebases and from their favorite integrated development environments (IDEs).
Gemini Cloud Assist helps cloud teams design, operate, and optimize their application lifecycle.

Gemini in Security, Gemini in Databases, Gemini in BigQuery, and Gemini in Looker help elevate teams' skills and capabilities across these critical workloads.

Then there's Google Workspace, which we built around core tenets of real-time creation and collaboration. Today, we took that up a level with:

Google Vids, your AI-powered video, writing, production, and editing assistant, all in one. Sitting alongside other productivity tools like Docs, Sheets, and Slides, Vids can help anyone become a great storyteller at work.

A new AI Meetings and Messaging add-on, which includes features like Take notes for me (now in preview), Translate for me (coming in June), and automatic translation of messages and on-demand conversation summaries in Google Chat. This add-on is available for $10 per user, per month, and can be added to most Workspace plans.

2. … built on the most advanced foundation models

All the above-mentioned capabilities are based on, you guessed it, Gemini, Google's most powerful model. Today at Next '24, we announced ways to help developers bring the power of Gemini to their own applications through Vertex AI and other AI development platforms.

What we announced:

Gemini 1.5 Pro is now available in public preview to Vertex AI customers. Now, developers can see for themselves what it means to build with a 1M-token context window.

Imagen, Google's text-to-image model, can now create live images from text, in preview. Just imagine generating animated images such as GIFs from a simple text prompt… Imagen also gets advanced photo editing features, including inpainting and outpainting, and a digital watermarking feature powered by Google DeepMind's SynthID.

Vertex AI has new MLOps capabilities: Vertex AI Prompt Management and new evaluation tools.
Vertex AI Agent Builder brings together the Vertex AI Search and Conversation products, along with a number of enhanced tools for developers, to make it much easier to build and deploy enterprise-ready gen AI experiences.

3. … running on AI-optimized infrastructure

None of this would be possible without the investments in workload-optimized infrastructure that we make to power our own systems, as well as yours.

What we announced:

Enhancements across our AI Hypercomputer architecture, including the general availability of Cloud TPU v5p, and A3 Mega VMs powered by NVIDIA H100 Tensor Core GPUs; storage portfolio optimizations including Hyperdisk ML; and open software advancements including JetStream and JAX and PyTorch/XLA releases.

Google Axion, our first custom Arm®-based CPU designed for the data center.

New compute and networking capabilities, including C4 and N4 general-purpose VMs powered by 5th Generation Intel Xeon processors, as well as enhancements across Google Distributed Cloud.

4. … all grounded on trusted data

An AI model can only be as good as the data it's trained on. And where better to get quality data than your trusted enterprise databases and data warehouses? At Google Cloud, we think of this as "enterprise truth" and build capabilities across our Data Cloud to make sure your AI applications are based on trusted data.

What we announced:

Database enhancements: Google Cloud databases are more AI-capable than ever. AlloyDB AI includes new vector capabilities, easier access to remote models, and flexible natural language support. Firestore, meanwhile, joins the long list of Google databases with strong vector search capabilities.

Big news for BigQuery: We're anchoring on BigQuery as our unified data analytics platform, including across clouds via BigQuery Omni. BigQuery also gets a data canvas — a new natural language-based experience for data exploration, curation, wrangling, analysis, and visualization workflows.
Last but not least, you can now ground your models not only in your enterprise data, but also in Google Search results, so they have access to the latest, high-quality information.

And that was just during the morning keynote! From there, attendees went off to explore more than 300 Spotlights and breakout sessions, the Innovators Hive, and of course, the show floor, where partners this week are highlighting over 100 AI solutions built with Google Cloud technologies. We can't wait to see you again tomorrow, when we'll share even more news, go deep on today's announcements, and host the perennial favorite — the Developer Keynote. Have fun in Vegas tonight. But don't stay out too late, because there's lots more ahead tomorrow! View the full article
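The vector search capabilities mentioned above for AlloyDB, Firestore, and BigQuery all reduce to the same core idea: rank stored embeddings by similarity to a query embedding. A minimal sketch in plain Python, using toy hand-made vectors rather than real model embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "document" embeddings; a real system would store model-generated
# vectors in a database with a vector index rather than a dict.
docs = {
    "return policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "gift cards":     [0.0, 0.2, 0.9],
}

# Pretend embedding of the query "how do I return an item?"
query = [0.8, 0.2, 0.1]

ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]),
                reverse=True)
print(ranked[0])  # → return policy
```

Managed vector indexes (like those announced for AlloyDB and BigQuery) do this same ranking approximately at scale, so the nearest neighbors come back in milliseconds instead of requiring a full scan.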
  11. Some Google Cloud customers will be able to run instances on the Arm-based Axion chip later this year. View the full article
  12. Welcome to our live coverage of Google Cloud Next 2024! We're live in Las Vegas for a packed week of news and announcements from one of the world's biggest players in cloud innovation. The conference starts this morning with an opening keynote from Google Cloud CEO Thomas Kurian, and we've been promised plenty of other special guests. We'll be keeping you updated with all the latest announcements here, so stay tuned for everything you need to know from Google Cloud Next 2024!

So what are we expecting to see from Google Cloud this week? In a word - AI. The company has been bullish in its use of AI across the board, and has been ramping up Gemini and Vertex AI in many of its product lines. Elsewhere, we're probably set to see a host of new updates and upgrades for Google Workspace - again, most likely around AI, but with the scale of the office software suite, who knows? We may also see some new hardware - with Nvidia recently announcing its latest flagship Blackwell chips, it's very likely we'll see a related Google announcement here.

Good morning from a beautifully sunny Las Vegas (although our view of the US eclipse yesterday was... sub-par, it has to be said). We're getting ready to head over for the day one keynote, hosted by none other than Google Cloud CEO Thomas Kurian himself. With a star-studded lineup of customers, partners, and guests, the keynote should also include a whole host of news and announcements, so stay tuned for more soon! View the full article
  13. Welcome to Google Cloud Next. We last came together just eight months ago at Next 2023, but since then, we have made well over a year's progress innovating and transforming with our customers and partners. We have introduced over a thousand product advances across Google Cloud and Workspace. We have expanded our planet-scale infrastructure to 40 regions and announced new subsea cable investments to connect the world to our cloud with predictable low latency. We have introduced new, state-of-the-art models — including our Gemini models — and brought them to developers and enterprises. And the industry is taking notice — we have been recognized as a Leader in 20 of the top industry analyst evaluations.

Last year, the world was just beginning to imagine how generative AI technology could transform businesses — and today, that transformation is well underway. More than 60% of funded gen AI startups and nearly 90% of gen AI unicorns are Google Cloud customers, including companies like Anthropic, AI21 Labs, Contextual AI, Essential AI, and Mistral AI that are using our infrastructure. Leading enterprises like Deutsche Bank, Estée Lauder, Mayo Clinic, McDonald's, and WPP are building new gen AI applications on Google Cloud. And today, we are announcing new or expanded partnerships with Bayer, Cintas, Discover Financial, IHG Hotels & Resorts, Mercedes-Benz, Palo Alto Networks, Verizon, WPP, and many more. In fact, this week at Next, more than 300 customers and partners will share their gen AI successes working with Google Cloud.

Central to the opportunities of gen AI are the connected AI agents that bring them to life. Agents help users achieve specific goals — like helping a shopper find the perfect dress for a wedding or helping nursing staff expedite patient hand-offs when shifts change. They can understand multi-modal information — processing video, audio, and text together, connecting and rationalizing different inputs.
They can learn over time and facilitate transactions and business processes. Today, our customers — including Best Buy, Etsy, The Home Depot, ING Bank, and many more — are seeing the benefits of the powerful, accurate, and innovative agents that make gen AI so revolutionary. This path to agents is built on our AI-optimized infrastructure, models, and platform, or by utilizing our own agents in Gemini for Google Cloud and Gemini for Google Workspace.

Today, at Next '24, we are making significant announcements to drive customer success and momentum, including: custom silicon advancements, like the general availability of TPU v5p and Google Axion, our first custom Arm®-based CPU designed for the data center; Gemini 1.5 Pro, which includes a breakthrough in long-context understanding, going into public preview; new grounding capabilities in Vertex AI; Gemini Code Assist for developers; expanded cybersecurity capabilities with Gemini in Threat Intelligence; new enhancements for Gemini in Google Workspace; and much more.
These innovations span every aspect of Google Cloud, including:

Our AI Hypercomputer, a supercomputing architecture that employs an integrated system of performance-optimized hardware, open software, leading ML frameworks, and flexible consumption models;

Our foundation models, including Gemini models, which process multi-modal information and have advanced reasoning skills;

Our Vertex AI platform, which helps organizations and partners access, tune, augment, and deploy custom models and connect them with enterprise data, systems, and processes to roll out generative AI agents;

Gemini for Google Cloud, which provides AI assistance to help users work and code more efficiently, manage their applications, gain deeper data insights, identify and resolve security threats, and more;

Gemini for Workspace, the agent built right into Gmail, Docs, Sheets, and more, with enterprise-grade security and privacy; and

A number of announcements across analytics, databases, cybersecurity, compute, networking, Google Workspace, our growing AI ecosystem, and more.

Scale with AI-optimized infrastructure

The potential for gen AI to drive rapid transformation for every business, government, and user is only as powerful as the infrastructure that underpins it. Google Cloud offers our AI Hypercomputer, an architecture that combines our powerful TPUs, GPUs, AI software, and more to provide an efficient and cost-effective way to train and serve models. Leading AI companies globally, like Bending Spoons and Kakao Brain, are building their models on our platform. Today, we are strengthening our leadership with key advancements to support customers across every layer of the stack:

A3 Mega: Developed with NVIDIA using H100 Tensor Core GPUs, this new GPU-based instance will be generally available next month and delivers twice the bandwidth per GPU compared to A3 instances, to support the most demanding workloads.
We are also announcing Confidential A3, which enables customers to better protect the confidentiality and integrity of sensitive data and AI workloads during training and inferencing.

NVIDIA HGX B200 and NVIDIA GB200 NVL72: The latest NVIDIA Blackwell platform will be coming to Google Cloud in early 2025 in two variations: the HGX B200 and the GB200 NVL72. The HGX B200 is designed for the most demanding AI, data analytics, and HPC workloads, while the GB200 NVL72 powers real-time large language model inference and massive-scale training performance for trillion-parameter-scale models.

TPU v5p: We're announcing the general availability of TPU v5p, our most powerful, scalable, and flexible AI accelerator for training and inference, with 4X the compute power per pod compared to our previous generation. We're also announcing availability of Google Kubernetes Engine (GKE) support for TPU v5p. Over the past year, the use of GPUs and TPUs on GKE has grown more than 900%.

AI-optimized storage options: We're accelerating training speed with new caching features in Cloud Storage FUSE and Parallelstore, which keep data closer to a customer's TPU or GPU. We're also introducing Hyperdisk ML (in preview), our next-generation block storage service that accelerates model load times up to 3.7X compared to common alternatives.

New options for Dynamic Workload Scheduler: Calendar mode for start-time assurance and flex start for optimized economics will help customers ensure efficient resource management for the distribution of complex training and inferencing jobs.

We are also bringing AI closer to where the data is being generated and consumed — to the edge, to air-gapped environments, to Google Sovereign Clouds, and cross-cloud. We are enabling AI anywhere through Google Distributed Cloud (GDC), allowing you to choose the environment, configuration, and controls that best suit your organization's specific needs.
For example, leading mobile provider Orange, which operates in 26 countries where local data must be kept in each country, leverages AI on GDC to improve network performance and enhance customer experiences. Today we are announcing a number of new capabilities in GDC, including:

NVIDIA GPUs on GDC: We are bringing NVIDIA GPUs to GDC for both connected and air-gapped configurations. Each of these will support new GPU-based instances to run AI models efficiently.

GKE on GDC: The same GKE technology that leading AI companies are using on Google Cloud will be available in GDC.

Support for AI models: We are validating a variety of open AI models, including Gemma, Llama 2, and more, to run on GDC in air-gapped and connected edge environments.

AlloyDB Omni for Vector Search: We are also bringing the power of AlloyDB Omni vector search to GDC, enabling search and information retrieval over your private and sensitive data with extremely low latency.

Sovereign Cloud: For the most stringent regulatory requirements, we deliver GDC in a fully air-gapped configuration with local operations and full survivability, managed by Google or through a partner of your choice. You have complete control, and when regulations change, we have the flexibility to help you respond quickly.

While not every workload is an AI workload, every workload you run in the cloud needs optimization — from web servers to containerized microservices. Each application has unique technical needs, which is why we're pleased to introduce new general-purpose compute options that help customers maximize performance, enable interoperability between applications, and meet sustainability goals, all while lowering costs:

Google Axion: Our first custom Arm®-based CPU designed for the data center delivers up to 50% better performance and up to 60% better energy efficiency than comparable current-generation x86-based instances.
We are also announcing N4 and C4, two new machine series in our general-purpose VM portfolio; native bare-metal machine shapes in the C3 machine family; the general availability of Hyperdisk Advanced Storage Pools; and much more.

Create agents with Vertex AI

Vertex AI, our enterprise AI platform, sits on top of our world-class infrastructure. It is the only unified platform that lets customers discover, customize, augment, deploy, and manage gen AI models. We offer more than 130 models, including the latest versions of Gemini, partner models like Claude 3, and popular open models including Gemma, Llama 2, and Mistral. Today, we're excited to deliver expanded access to a variety of models, giving customers the most choice when it comes to model selection:

Gemini 1.5 Pro: Gemini 1.5 Pro offers two context window sizes — 128K tokens and 1 million tokens — and is now available in public preview. In addition, we are announcing the ability to process audio, including videos with audio tracks. Customers can process vast amounts of information in a single stream, including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code, or over 700,000 words.

Claude 3: Claude 3 Sonnet and Claude 3 Haiku, Anthropic's state-of-the-art models, are generally available on Vertex AI thanks to our close partnership, with Claude 3 Opus to be available in the coming weeks.

CodeGemma: Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. A new fine-tuned version of Gemma designed for code generation and code assistance, CodeGemma, is now available on Vertex AI.

Imagen 2.0: Our most advanced text-to-image technology boasts a variety of image generation features to help businesses create images that match their specific brand requirements.
A new text-to-live-image capability allows marketing and creative teams to generate animated images, such as GIFs, equipped with safety filters and digital watermarks. In addition, we are announcing the general availability of advanced photo editing features, including inpainting and outpainting, and much more.

Digital watermarking: Powered by Google DeepMind's SynthID, digital watermarking is generally available today for AI-generated images produced by Imagen 2.0.

Vertex AI allows you to tune the foundation model you have chosen with your data. We provide a variety of techniques, including fine-tuning, Reinforcement Learning from Human Feedback (RLHF), distillation, and supervised, adapter-based tuning techniques such as Low-Rank Adaptation (LoRA). Today we are announcing support for supervised, adapter-based tuning to customize Gemini models in an efficient, lower-cost way.

Customers get far more from their models when they augment and ground them with enterprise data. Vertex AI helps you do this with managed tooling for extensions, function calling, and grounding. In addition, Retrieval Augmented Generation (RAG) connects your model to enterprise systems to retrieve information and take action, allowing you to get up-to-the-second billing and product data, update customers' contact info or subscriptions, or even complete transactions. Today, we are expanding Vertex AI grounding capabilities in two new ways:

Google Search: Grounding models in Google Search combines the power of Google's latest foundation models with access to fresh, high-quality information to significantly improve the completeness and accuracy of responses.

Your data: Give your agents enterprise truth by grounding models with your data from enterprise applications like Workday or Salesforce, or information stored in Google Cloud database offerings like AlloyDB and BigQuery.
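The RAG flow described above — retrieve enterprise records, then fold them into the prompt so the model answers from your data rather than from memory — can be sketched in a few lines. The retriever and the records here are toy stand-ins, not Vertex AI APIs; a real deployment would use managed grounding or a vector-indexed database lookup:

```python
# Minimal RAG sketch: retrieve relevant records, then ground the prompt
# in them. retrieve() and RECORDS are hypothetical illustrations.

RECORDS = [
    {"id": "inv-001", "text": "Customer Acme Corp: current balance $1,200, due April 30."},
    {"id": "inv-002", "text": "Customer Bayside Ltd: current balance $0, account closed."},
]

def retrieve(question, records, k=1):
    """Toy keyword retriever: rank records by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(records,
                    key=lambda r: len(q_words & set(r["text"].lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(question, context):
    """Instruct the model to answer only from the retrieved context."""
    ctx = "\n".join(r["text"] for r in context)
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{ctx}\n\nQuestion: {question}")

question = "What is the current balance for Acme Corp?"
context = retrieve(question, RECORDS)
prompt = build_grounded_prompt(question, context)

# A real system would now send `prompt` to the model; the grounding step
# is what makes the answer reflect up-to-the-second enterprise data.
print(context[0]["id"])  # → inv-001
```

Swapping the keyword retriever for an embedding-based lookup, and the final step for an actual model call, turns this toy into the production pattern the announcement describes.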
Once you have chosen the right model and tuned and grounded it, Vertex AI can help you deploy, manage, and monitor the models. Today, we are announcing additional MLOps capabilities:

Prompt management tools: These tools let you collaborate on built-in prompts with notes and status, track changes over time, and compare the quality of responses from different prompts.

Automatic side-by-side (AutoSxS): Now generally available, AutoSxS provides explanations of why one response outperforms another, plus certainty scores, which help users understand the accuracy of the evaluation.

Rapid Evaluation feature: Now in preview, this helps customers quickly evaluate models on smaller data sets when iterating on prompt design.

Finally, Vertex AI Agent Builder brings together foundation models, Google Search, and other developer tools to make it easy for you to build and deploy agents. It provides the convenience of a no-code agent builder console alongside powerful grounding, orchestration, and augmentation capabilities. With Vertex AI Agent Builder, you can now quickly create a range of gen AI agents, grounded in Google Search and your organization's data.

Accelerate development

Gemini Code Assist is our enterprise-focused AI code-assistance solution. We deployed it to a group of developers inside Google and found they completed common dev tasks more than 40% faster and spent roughly 55% less time writing new code. We're also seeing success with customers, like Quantiphi, who recorded developer productivity gains of over 30%. We are proud to share that Gemini Code Assist supports your private codebase wherever it lives — on premises, GitHub, GitLab, Bitbucket, or even multiple locations. Today, we're making key announcements to extend our industry leadership for developers:

Gemini 1.5 Pro in Gemini Code Assist: This upgrade, now in private preview, brings a massive 1-million-token context window, revolutionizing coding for even the largest projects.
Gemini Code Assist now delivers even more accurate code suggestions, deeper insights, and streamlined workflows.

Gemini Cloud Assist: This provides AI assistance across your application lifecycle, making it easier to design, secure, operate, troubleshoot, and optimize the performance and costs of your applications.

Unlock the potential of AI with data

Google Cloud lets you combine the best of AI with your grounded enterprise data, while keeping your data private and secure. This year, we've made many advancements in our AI-ready Data Cloud, such as LLM integration and vector matching capabilities across all of our databases, including AlloyDB and BigQuery. Now, data teams can use Gemini models for multimodal and advanced reasoning over their existing data. This can help improve patient care for healthcare providers, streamline supply chains, and increase customer engagement across industries like telco, retail, and financial services. For example, customers like Bayer, Mayo Clinic, Mercado Libre, NewsCorp, and Vodafone are already seeing benefits. And Walmart is building data agents to modernize their shopping experiences:

"Using Gemini, we've enriched our data, helping us improve millions of product listings across our site and ultimately, enabling customers to make better decisions when they shop with Walmart." - Suresh Kumar, EVP, Global Chief Technology Officer and Chief Development Officer, Walmart, Inc.

Today, we're announcing new enhancements to help organizations build great data agents:

Gemini in BigQuery: Gemini in BigQuery uses AI to help your data teams with data preparation, discovery, analysis, and governance. Combined with this, you can build and execute data pipelines with our new BigQuery data canvas, which provides a notebook-like experience with natural language and embedded visualizations, both available in preview.
Gemini in Databases: This makes it easy for you to migrate data safely and securely from legacy systems — for example, converting your database to a modern cloud database like AlloyDB.

Vector indexing: New query capabilities using vector indexing directly in BigQuery and AlloyDB allow you to leverage AI over your data where it is stored, enabling real-time and accurate responses.

Gemini in Looker: We're introducing new capabilities, currently in preview, that allow your data agents to easily integrate with your workflows. We have also added new gen AI capabilities that enable you to chat with your business data, integrated with Google Workspace.

Improve your cybersecurity posture with AI-driven capabilities

The number and sophistication of cybersecurity attacks continue to increase, and gen AI has the potential to tip the balance in favor of defenders, with security agents providing help across every stage of the security lifecycle: prevention, detection, and response. Today at Google Cloud Next, we are announcing new AI-driven innovations across our security portfolio that are designed to deliver stronger security outcomes and enable every organization to make Google a part of their security team:

Gemini in Threat Intelligence: Uses natural language to deliver deep insight about threat actor behavior. With Gemini, we are able to analyze much larger samples of potentially malicious code, and Gemini's larger context window allows for analysis of the interactions between modules, providing new insight into the code's true intent.

Gemini in Security Operations: A new assisted investigation feature converts natural language to detections, summarizes event data, recommends actions to take, and navigates users through the platform via conversational chat.

Leading global brands are already seeing the benefits in their security programs. At Pfizer, data sources that used to take days to aggregate can now be analyzed in seconds.
3M is using Gemini in Security Operations to help their team cut through security noise, while engineers in Fiserv's Security Operations Center are able to create detections and playbooks with significantly less effort, and analysts get answers more quickly.

Supercharge productivity with Google Workspace

Google Workspace is the world's most popular productivity suite, with more than 3 billion users and over 10 million paying customers, from individuals to enterprises. Over the last year, we have released hundreds of features and enhancements to Google Workspace and Gemini for Workspace, the AI-powered agent that's built right into Gmail, Docs, Sheets, and more. And customers are experiencing significant benefits: 70% of enterprise users who use "Help me write" in Docs or Gmail end up using Gemini's suggestions, and more than 75% of users who create images in Slides insert them into their presentations. Google Workspace is already helping employees at leading brands like Uber, Verizon, and Australian retailer Woolworths, and today we are announcing the next wave of innovations and enhancements to Gemini for Google Workspace, including:

Google Vids: This new AI-powered video creation app for work is your video, writing, production, and editing assistant, all in one. It can generate a storyboard you can easily edit, and after you choose a style, it pieces together a first draft with suggested scenes from stock videos, images, and background music. It can also help you land your message with the right voiceover — either one of our preset voiceovers or your own. Vids will sit alongside other Workspace apps like Docs, Sheets, and Slides, with a simple, easy-to-use interface and the ability to collaborate and share projects securely from your browser. Vids is being released to Workspace Labs in June.
AI Meetings and Messaging add-on: With “take notes for me”, chat summarization, and real-time translation in 69 languages (equal to 4,600 language pairs), this collaboration tool will cost only $10 per user, per month.

New AI Security add-on: Workspace admins can now automatically classify and protect sensitive files and data using privacy-preserving AI models and Data Loss Prevention controls trained for their organization. The AI Security add-on is available for $10 per user, per month and can be added to most Workspace plans.

Benefit from AI agents

With our entire AI portfolio — infrastructure, Gemini, models, Vertex AI, and Workspace — many customers and partners are building increasingly sophisticated AI agents. We are excited to see organizations building AI agents that serve customers, support employees, and help them create content, in addition to the coding agents, data agents, and security agents mentioned earlier. Great customer agents can understand what you want, know the products and facts, engage conveniently, and ultimately help your customers interact with your business more seamlessly. The most impactful customer agents work across channels — web, mobile, call center, and point of sale — and in multiple modalities, like text, voice, and more. The opportunity for customer agents is tremendous for every organization, and our customers are just getting started:

Discover Financial’s 10,000 contact center reps search and synthesize across detailed policies and procedures during calls.

IHG Hotels & Resorts will launch a generative AI-powered travel planning capability that can help guests easily plan their next vacation.

Minnesota’s Department of Public Safety helps foreign-language speakers get licenses and other services with two-way, real-time translation.

Target is optimizing offers and curbside pickup on the Target app and Target.com.

Employee agents help all your employees be more productive and work better together.
Employee agents can streamline processes, manage repetitive tasks, answer employee questions, and edit and translate critical communications. We’re seeing the impact of this every day with our customers, including Bristol Myers Squibb, HCA Healthcare, Sutherland, a leading services company, and more:

Dasa, the largest medical diagnostics company in Brazil, is helping physicians detect relevant findings in test results more quickly.

Etsy uses Vertex AI to improve search, provide more personalized recommendations to buyers, optimize their ads models, and increase the accuracy of delivery dates.

Pennymac, a leading US-based national mortgage lender, is using Gemini across several teams, including HR, where Gemini in Docs, Sheets, Slides and Gmail is helping them accelerate recruiting, hiring, and new employee onboarding.

Pepperdine University benefits from Gemini in Google Meet, which enables real-time translated captioning and notes for students and faculty who speak a wide range of languages.

Creative agents

Creative agents can serve as your best designer and production team — working across images, slides, and exploring concepts with you. We provide the most powerful platform and stack to build creative agents, and many customers are building agents for their marketing teams, audio and video production teams, and all the creative people who could use a hand. For example:

Canva is using Vertex AI to power its Magic Design for Video, helping users skip tedious editing steps.

Carrefour is pioneering new ways to use gen AI for marketing. Using Vertex AI, they were able to create dynamic campaigns across various social networks in weeks, not months.

Procter & Gamble is using Imagen to accelerate the development of photo-realistic images and creative assets, giving teams more time back to focus on high-level plans.
A leading AI ecosystem

To adopt gen AI broadly, customers need an enterprise AI platform that provides the broadest set of end-to-end capabilities, highly optimized for cost and performance — an open platform that offers choice, is easy to integrate with existing systems, and is supported by a broad ecosystem. At Google, we are engineering our AI platform to address these challenges. Google Cloud is the only major cloud provider offering both first-party and extensible, partner-enabled solutions at every layer of the AI stack. Through Google Cloud’s own innovations and those of our partners, we’re able to provide choice across infrastructure, chips, models, data solutions, and AI tooling to help customers build new gen AI applications and create business value with this exciting technology. At Next ‘24, we’re highlighting important news and innovations with our partners across every layer of the stack, including:

Broadcom will migrate its VMware workloads to our trusted infrastructure and use Vertex AI to enhance customer experiences.

Palo Alto Networks is choosing Google Cloud as its AI provider of choice, helping to improve cybersecurity outcomes for global businesses.

Accenture, Capgemini, Cognizant, Deloitte, HCLTech, KPMG, McKinsey, and PwC have all announced expanded gen AI implementation services for enterprises, and our ecosystem of services partners has now taken more than half a million gen AI courses to help deliver the cloud of connected agents.

To date, we have helped more than a million developers get started with gen AI, and our gen AI trainings have been taken millions of times. Looking back at this past year, it’s truly remarkable to see how quickly our customers have moved from enthusiasm and experimentation to implementing AI tools and launching early-stage products. Today, realizing the full potential of the cloud goes beyond infrastructure, network, and storage. It demands a new way of thinking.
It means embracing possibilities to solve problems in boldly creative ways, and reimagining solutions to achieve the previously impossible. We're both inspired and amazed to see this mindset quickly materialize in our customers’ work as they pave new paths forward in the AI era — whether automating day-to-day tasks or tackling complex challenges. The world is changing, but at Google, our north star is the same: to make AI helpful for everyone, and to improve the lives of as many people as possible. Thank you, customers, developers, and partners, for entrusting us to join you on this journey. We can't wait to see what you do next. View the full article
  14. We’re entering a new era for data analytics, going from narrow insights to enterprise-wide transformation through a virtuous cycle of data, analytics, and AI. At the same time, analytics and AI are becoming widely accessible, providing insights and recommendations to anyone with a question. Ultimately, we’re going beyond our own human limitations to leverage AI-based data agents to find deeply hidden insights for us. Organizations already recognize that data and AI can come together to unlock the value of AI for their business. Research from Google’s 2024 Data and AI Trends Report highlighted that 84% of data leaders believe generative AI will help their organization reduce time-to-insight, and 80% agree that the lines between data and AI are starting to blur. Today at Google Cloud Next ’24, we’re announcing new innovations for BigQuery and Looker that will help activate all of your data with AI:

BigQuery is a unified AI-ready data platform with support for multimodal data, multiple serverless processing engines, and built-in streaming and data governance to support the entire data-to-AI lifecycle.

New BigQuery integrations with Gemini models in Vertex AI support multimodal analytics, vector embeddings, and fine-tuning of LLMs from within BigQuery, applied to your enterprise data.

Gemini in BigQuery provides AI-powered experiences for data preparation, analysis and engineering, as well as intelligent recommenders to optimize your data workloads.

Gemini in Looker enables business users to chat with their enterprise data and generate visualizations and reports—all powered by the Looker semantic data model that’s seamlessly integrated into Google Workspace.

Let’s take a deeper look at each of these developments.

BigQuery: the unified AI-ready data foundation

BigQuery is now Google Cloud’s single integrated platform for data-to-AI workloads.
BigLake, BigQuery’s unified storage engine, provides a single interface across BigQuery native and open formats for analytics and AI workloads. It gives you the choice of where your data is stored and access to all of your data, whether structured or unstructured, along with a universal view of data supported by a single runtime metastore, built-in governance, and fine-grained access controls. Today we’re expanding open format support with the preview of a fully managed experience for Iceberg, with DDL, DML, and high-throughput support. In addition to support for Iceberg and Hudi, we’re also extending BigLake capabilities with native support for the Delta file format, now in preview.

“At HCA Healthcare we are committed to the care and improvement of human life. We are on a mission to redesign the way care is delivered, letting clinicians focus on patient care and using data and AI where it can best support doctors and nurses. We are building our unified data and AI foundation using Google Cloud's lakehouse stack, where BigQuery and BigLake enable us to securely discover and manage all data types and formats in a single platform to build the best possible experiences for our patients, doctors, and nurses. With our data in Google Cloud’s lakehouse stack, we’ve built a multimodal data foundation that will enable our data scientists, engineers, and analysts to rapidly innovate with AI." - Mangesh Patil, Chief Analytics Officer, HCA Healthcare

We’re also extending the cross-cloud capabilities of BigQuery Omni. Through partnerships with leading organizations like Salesforce and our recent launch of bidirectional data sharing between BigQuery and Salesforce Data Cloud, customers can securely combine data across platforms with zero copy and zero ops to build AI models and predictions on combined Salesforce and BigQuery data.
Customers can also enrich customer 360 profiles in Salesforce Data Cloud with data from BigQuery, driving additional personalization opportunities powered by data and AI.

“It is great to collaborate without boundaries to unlock trapped data and deliver amazing customer experiences. This integration will help our joint customers tap into Salesforce Data Cloud's rich capabilities and use zero copy data sharing and Google AI connected to trusted enterprise data.” - Rahul Auradkar, EVP and General Manager of United Data Services & Einstein at Salesforce

Building on this unified AI-ready data foundation, we are now making BigQuery Studio generally available; it already has hundreds of thousands of active users. BigQuery Studio provides a collaborative data workspace across data and AI that all data teams and practitioners can use to accelerate their data-to-AI workflows. BigQuery Studio provides the choice of SQL, Python, Spark, or natural language directly within BigQuery, as well as new integrations for real-time streaming and governance. Customers’ use of serverless Apache Spark for data processing increased by over 500% in the past year¹. Today, we are excited to announce the preview of our serverless engine for Apache Spark integrated within BigQuery Studio to help data teams work with Python as easily as they do with SQL, without having to manage infrastructure.

The data team at Snap Inc. uses these new capabilities to converge on a common data and AI platform with multiple engines that work across a single copy of data. This gives them the ability to enforce fine-grained governance and track lineage close to the data to easily expand the analytics and AI use cases needed to drive transformation. To make data processing on real-time streams directly accessible from BigQuery, we’re announcing the preview of BigQuery continuous queries, providing continuous SQL processing over data streams and enabling real-time pipelines with AI operators or reverse ETL.
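The distinction behind continuous queries is that a standing computation runs over an unbounded stream and emits updated results as each event arrives, instead of re-running a batch query. A minimal sketch of that pattern in plain Python, assuming an illustrative event shape and sink (these names are not BigQuery APIs):

```python
from collections import defaultdict

def continuous_count_by_key(events, key="page"):
    """Yield an updated running count after every incoming event,
    modeling a standing aggregation over a stream."""
    counts = defaultdict(int)
    for event in events:
        counts[event[key]] += 1
        # In a real pipeline each snapshot would be pushed to a sink
        # (a reverse ETL target, an AI operator, an alerting system, ...).
        yield dict(counts)

stream = [{"page": "/home"}, {"page": "/pricing"}, {"page": "/home"}]
for snapshot in continuous_count_by_key(stream):
    print(snapshot)  # final snapshot: {'/home': 2, '/pricing': 1}
```

The generator stands in for the always-on query; in the managed service the same logic would be expressed once in SQL and kept running by the engine.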
We are also announcing the preview of Apache Kafka for BigQuery, a managed service that enables streaming data workloads based on open-source APIs. We’re also expanding the governance capabilities of Dataplex, with new innovations for data-to-AI governance available in preview. You can now perform integrated search and drive gen AI-powered insights on your enterprise data, including data and models from Vertex AI, with a fully integrated catalog in BigQuery. We’re introducing column-level lineage in BigQuery and expanding lineage capabilities to support Vertex AI pipelines (available in preview soon) to help you better understand data-to-AI workloads. Finally, to facilitate data-access governance at scale, we are launching governance rules in Dataplex.

Multimodal analytics with new BigQuery and Vertex AI integrations

With BigQuery’s direct integration with Vertex AI, we are now announcing the ability to connect models in Vertex AI with your enterprise data, without having to copy or move your data out of BigQuery. This enables multimodal analytics using unstructured data, fine-tuning of LLMs, and the use of vector embeddings in BigQuery. Priceline, for instance, is using business data stored in BigQuery for LLMs across a wide range of applications.

“BigQuery gave us a solid data foundation for AI. Our data was exactly where we needed it. We were able to connect millions of customer data points from hotel information, marketing content, and customer service chat and use our business data to ground LLMs.” - Allie Surina Dixon, Director of Data, Priceline

The direct integration between BigQuery and Vertex AI now enables seamless preparation and analysis of multimodal data such as documents, audio, and video files. BigQuery features rich support for analyzing unstructured data using object tables and the Vertex AI Vision, Document AI, and Speech-to-Text APIs.
We are now enabling BigQuery to analyze images and video using Gemini 1.0 Pro Vision, making it easier than ever to combine structured and unstructured data in data pipelines using the generative AI capabilities of the latest Gemini models. BigQuery makes it easier than ever to run AI on enterprise data by letting you build prompts based on your BigQuery data and use LLMs for sentiment extraction, classification, topic detection, translation, data enrichment, and more. BigQuery now also supports generating vector embeddings and indexing them at scale using vector and semantic search. This enables new use cases that require similarity search, recommendations, or retrieval of your BigQuery data, including documents, images, or videos. Customers can use semantic search in the BigQuery SQL interface or via our integration with gen AI frameworks such as LangChain, and leverage retrieval-augmented generation based on their enterprise data.

Gemini in BigQuery and Gemini in Looker for AI-powered assistance

Gen AI is creating new opportunities for rich data-driven experiences that enable business users to ask questions, build custom visualizations and reports, and surface new insights using natural language. Beyond business users, gen AI assistive and agent capabilities can also accelerate the work of data teams, spanning data exploration, analysis, governance, and optimization. In fact, more than 90% of organizations believe business intelligence and data analytics will change significantly due to AI. Today, we are announcing the public preview of Gemini in BigQuery, which provides AI-powered features that enhance user productivity and optimize costs throughout the analytics lifecycle, from ingestion and pipeline creation to deriving valuable insights. What makes Gemini in BigQuery unique is its contextual awareness of your business through access to metadata, usage data, and semantics.
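The retrieval-augmented generation pattern mentioned above boils down to three steps: embed the question, retrieve the most similar stored documents, and build a grounded prompt for an LLM. A minimal sketch, where the tiny keyword-count "embedding" and the document set are illustrative stand-ins rather than the BigQuery or LangChain APIs:

```python
def embed(text):
    """Toy 3-dim embedding: counts of a few keyword groups.
    A real pipeline would call an embedding model instead."""
    words = [w.strip("?.,!") for w in text.lower().split()]
    return [
        sum(w in ("refund", "charge", "payment") for w in words),
        sum(w in ("password", "login", "account") for w in words),
        sum(w in ("shipping", "delivery", "order") for w in words),
    ]

DOCS = [
    "How to request a refund for a duplicate charge",
    "Reset your account password from the login page",
    "Track shipping and delivery for an order",
]

def retrieve(question, top_k=1):
    """Return the top_k documents most similar to the question."""
    q = embed(question)
    scored = sorted(DOCS, key=lambda d: -sum(a * b for a, b in zip(q, embed(d))))
    return scored[:top_k]

def grounded_prompt(question):
    """Build an LLM prompt grounded in the retrieved enterprise data."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("How do I reset my password?"))
```

Swapping in a real embedding model and a vector index over BigQuery (or pgvector) data turns this sketch into the production pattern the text describes, without changing its shape.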
Gemini in BigQuery also goes beyond chat assistance to include new visual experiences such as data canvas, a new natural language-based experience for data exploration, curation, wrangling, analysis, and visualization workflows. Imagine you are a data analyst at a bikeshare company: you can use the new data canvas in Gemini in BigQuery to explore the datasets, identify the top trips, and create a customized visualization, all using natural language prompts within the same interface.

Gemini in BigQuery's capabilities extend to query recommendations, semantic search capabilities, low-code visual data pipeline development tools, and AI-powered recommendations for query performance improvement, error minimization, and cost optimization. Additionally, it allows users to create SQL or Python code using natural language prompts and get real-time suggestions while composing queries. Today, we are also announcing the private preview of Gemini in Looker to enable business users and analysts to chat with their business data. Gemini in Looker capabilities include conversational analytics, report and formula generation, LookML and visualization assistance, and automated Google Slides generation. What’s more, these capabilities are being integrated with Workspace to enable users to easily access beautiful data visualizations and insights right where they work. Imagine you run an ecommerce store: you can ask Gemini in Looker about sales trends and market details and immediately explore the insights, with details on how the charts were created.

To learn more about our data analytics product innovations, hear customer stories, and gain hands-on knowledge from our developer experts, join our data analytics spotlights and breakout sessions at Google Cloud Next ‘24, or watch them on-demand.

1. Google internal data - YoY growth of data processed using Apache Spark on Google Cloud compared with Feb ‘23

View the full article
  15. Google Cloud Next ‘24 is right around the corner, and this year, there will be even more for developers to enjoy. Whether you're on the generative AI kick or keeping up with what's new with JavaScript frameworks, there's a session for technical practitioners in the Application Developers Zone! Meet the experts and get inspired to solve your biggest technical challenges over three days in Las Vegas, April 9-11. With over 600 sessions available, you can choose from a wide range of topics, covering everything from security to growing revenue with AI. Here are a few of the sessions you won't want to miss:

1. Building generative AI apps on Google Cloud with LangChain
Learn how to use LangChain, the most popular open-source framework for building LLM-based apps, on Cloud Run to add gen AI features to your own applications. See how to combine it with Cloud SQL's pgvector extension for vector storage to quickly locate similar data points with vector search and reduce the cost of deploying models with Vertex Endpoints.

2. How to deploy all the JavaScript frameworks to Cloud Run
Join this session, where we’ll deploy as many JavaScript frameworks as we can, including Angular, React, and Node.js, as quickly as we can to prove that it can be done and have some fun!

3. Build full stack applications using Firebase & Google Cloud
Discover how Firebase and Google Cloud simplify full-stack development, empowering you to rapidly build scalable, enterprise-grade applications.

4. Get developer assistance customized to your organization's code with Gemini
Join Gemini product managers and Capgemini to learn how to customize Duet AI with your own private codebase and bring AI code assistance to your development teams.

Also be sure to check out the Innovators Hive — a central hub where you can discover what’s new and next at Google Cloud — to dive into interactive demos and talk face-to-face with our developer experts.
Here are a few of the demos you can expect to find in the Innovators Hive AppDev zone:

Write your own Cloud Functions (with the help of Duet AI/Gemini) to interact with our programmable pinball machine, which uses Pub/Sub to collect events from the game in real time.

Learn all about serving a Gemma large language model using graphics processing units (GPUs) on Google Kubernetes Engine (GKE) Autopilot, working with the 2B and 7B parameter models in particular.

There’s so much more to enjoy at Next ‘24 that you’ll have to see it to believe it. And if you can’t go in person, you can still register for a digital pass and access keynotes and 100+ breakout sessions on demand. View the full article
  16. Google Cloud Next ‘24 is around the corner, and it’s the place to be if you’re serious about cloud development! Starting April 9 in Las Vegas, this global event promises a deep dive into the latest updates, features, and integrations for Google Cloud’s managed container platforms, Google Kubernetes Engine (GKE) and Cloud Run. From effortlessly scaling and optimizing AI models to providing tailored environments across a range of workloads — there’s a session for everyone. Whether you’re a seasoned cloud pro or just starting your serverless journey, you can expect to learn new insights and skills to help you deliver powerful, yet flexible, managed container environments in this next era of AI innovation. Don’t forget to add these sessions to your event agenda — you won’t want to miss them.

Google Kubernetes Engine sessions

OPS212: How Anthropic uses Google Kubernetes Engine to run inference for Claude
Learn how Anthropic is using GKE's resource management and scaling capabilities to run inference for Claude, its family of foundational AI models, on TPU v5e.

OPS200: The past, present, and future of Google Kubernetes Engine
Kubernetes turns 10 this June! Since its launch, Kubernetes has become the de facto platform to run and scale containerized workloads. The Google team will reflect on the past decade, highlight how some of the top GKE customers use our managed solution to run their businesses, and discuss what the future holds.

DEV201: Go from large language model to market faster with Ray, Hugging Face, and LangChain
Learn how to deploy Retrieval-Augmented Generation (RAG) applications on GKE using open-source tools and models like Ray, Hugging Face, and LangChain. We’ll also show you how to augment the application with your own enterprise data using the pgvector extension in Cloud SQL. After this session, you’ll be able to deploy your own RAG app on GKE and customize it.
DEV240: Run workloads, not infrastructure, with Google Kubernetes Engine
Join this session to learn how GKE's automated infrastructure can simplify running Kubernetes in production. You’ll explore cost optimization, autoscaling, and Day 2 operations, and learn how GKE allows you to focus on building and running applications instead of managing infrastructure.

OPS217: Access traffic management for your fleet using Google Kubernetes Engine Enterprise
Multi-cluster and tenant management are becoming increasingly important topics. The platform team will show you how GKE Enterprise makes operating a fleet of clusters easy, and how to set up multi-cluster networking to manage traffic by combining it with the Kubernetes Gateway API controllers for GKE.

OPS304: Build an internal developer platform on Google Kubernetes Engine Enterprise
Internal Developer Platforms (IDPs) are simplifying how developers work, enabling them to be more productive by focusing on providing value and letting the platform do the heavy lifting. In this session, the platform team will show you how GKE Enterprise can serve as a great starting point for launching your IDP and demo the GKE Enterprise capabilities that make it all possible.

Cloud Run sessions

DEV205: Cloud Run – What's new
Join this session to learn what's new and improved in Cloud Run in two major areas — enterprise architecture and application management.

DEV222: Live-code an app with Cloud Run and Flutter
During this session, see the Cloud Run developer experience in real time. Follow along as two Google Developer Relations Engineers live-code a Flutter application backed by Firestore and powered by an API running on Cloud Run.

DEV208: Navigating Google Cloud - A comprehensive guide for website deployment
Learn about the major options for deploying websites on Google Cloud.
This session will cover the full range of tools and services available to match different deployment strategies — from simple buckets to containerized solutions to serverless platforms like Cloud Run.

DEV235: Java on Google Cloud — The enterprise, the serverless, and the native
In this session, you’ll learn how to deploy Java apps to Google Cloud and explore all the options for running Java workloads using various frameworks.

DEV237: Roll up your sleeves - Craft real-world generative AI Java in Cloud Run
In this session, you’ll learn how to build powerful gen AI applications in Java and deploy them on Cloud Run using Vertex AI and Gemini models.

DEV253: Building generative AI apps on Google Cloud with LangChain
Join this session to learn how to combine the popular open-source framework LangChain and Cloud Run to build LLM-based applications.

DEV228: How to deploy all the JavaScript frameworks to Cloud Run
Have you ever wondered if you can deploy JavaScript applications to Cloud Run? Find out in this session as one Google Cloud developer advocate sets out to prove that you can by deploying as many JavaScript frameworks to Cloud Run as possible.

DEV241: Cloud-powered, API-first testing with Testcontainers and Kotlin
Testcontainers is a popular API-first framework for testing applications. In this session, you’ll learn how to use the framework with an end-to-end example that uses Kotlin code with BigQuery, Pub/Sub, Cloud Build, and Cloud Run to improve the testing feedback cycle.

ARC104: The ultimate hybrid example - A fireside chat about how Google Cloud powers (part of) Alphabet
Join this fireside chat to learn about the ultimate hybrid use case — running Alphabet services on some of Google Cloud’s most popular offerings. Learn how Alphabet leverages Google Cloud runtimes like GKE, why it doesn’t run everything on Google Cloud, and why some products run partially on cloud.
Firebase sessions

DEV221: Use Firebase for faster, easier mobile application development
Firebase is a beloved platform for developers, helping them develop apps faster and more efficiently. This session will show you how Firebase can accelerate application development with prebuilt backend services, including authentication, databases, and storage.

DEV243: Build full stack applications using Firebase and Google Cloud
Firebase and Google Cloud can be used together to build and run full stack applications. In this session, you’ll learn how to combine these two powerful platforms to enable enterprise-grade application development and create better experiences for users.

DEV107: Make your app super with Google Cloud Firebase
Learn how Firebase and Google Cloud are the superhero duo you need to build enterprise-scale AI applications. This session will show you how to extend Firebase with Google Cloud using Gemini — our most capable and flexible AI model yet — to build, secure, and scale your AI apps.

DEV250: Generative AI web development with Angular
In this session, you’ll explore how to use Angular v18 and Firebase Hosting to build and deploy lightning-fast applications with Google's Gemini generative AI.

See you at the show! View the full article
  17. Experience the magic at Google Cloud Next '24, where access to new opportunities awaits, helping you create, innovate, and supercharge your startup's growth. In the Startup Lounge, you’ll be able to grow your network, receive practical training on launching and scaling more efficiently, and acquire essential strategies to propel your startup forward. Over three days, dive into exclusive startup sessions and founder stories tailored to a startup's practical needs and transformational goals. You’ll also have the chance to learn the secrets to securing funding from Menlo Ventures, understand why AI unicorns are increasingly choosing Google Cloud, and find out about the state of the startup landscape around the world. This year’s agenda is packed with exciting moments you won’t want to miss. Come listen to Dario Amodei, CEO and Co-Founder of Anthropic, and Elad Gil, a renowned entrepreneur and investor, as they discuss the transformative impact of generative AI on the business landscape in a fireside chat. You’ll also want to bookmark sessions from the Founder’s Series, which will offer insights on redefining strategies related to cost, productivity, and innovation from the founders of startups like LangChain, Hugging Face, Runway, Character.AI, AI21 Labs, Contextual AI, and more. These are only some of the can’t-miss sessions geared towards startup leaders — make sure you explore the full list here. But that’s not all; here’s what else startups can look forward to at Next ‘24.

Build meaningful connections in the Startup Lounge

Make sure to join us after the Main Keynote for the "Startups at Next” Kickoff Event, where we’ll be sharing some big announcements that will create new opportunities for you and your startup to grow with AI. Connect with peers, venture capitalists, and industry leaders during events such as the Xoogler Networking Event, the DEI Founder’s Ice Cream Social, and the Founder's Toast.
Access tailored expertise with personal Office Hours

Facing challenges or seeking to refine your strategy? Book direct access to Google experts and engineers in 15-minute Startup Expert sessions, or pre-book 30-minute Office Hours sessions. These opportunities allow you to address technical questions, strategize on business development, or delve into AI, equipping you with the tools to unlock your startup's full potential. Book today before they are gone!

Experience innovation at the Startup Showcases

The excitement at Google Cloud Next '24 goes beyond traditional formats, with the Startup Lounge featuring unique activations that foster creativity and growth. Witness over 25 startups launching new products or features on stage, enjoy a life-sized game of Chutes & Ladders by Product Science, and explore the future with Octo.ai in the Google Next Yearbook. Engage with AssemblyAI Speech-to-text AI for a unique fortune-telling experience, and participate in the Suggestic three-day wellness challenge as you navigate Next ‘24. Don't miss out on grand prizes from each showcase and more.

Apply for Cloud Credits and More with 1:1 Application Help

This year, dedicated staff will be available at Next '24 to guide you through the Google for Startups Cloud Program application process. By joining, you'll gain access to startup experts, coverage for your Google Cloud and Firebase costs up to $200,000 USD (up to $350,000 USD for AI startups) over two years, technical training, business support, and exclusive Google-wide offers. We hope to see you at the event, and look forward to welcoming startup leaders from around the globe to Mandalay Bay in Las Vegas. View the full article
  18. The Public Sector track at Google Cloud Next '24 is our chance to learn, collaborate, and pave the way for the next generation of public service. To support this shared mission, we've designed a program of insightful sessions, lightning talks, and interactive demos. Here's how to make this collaborative experience work for you:

Focus on Impact

Sessions for Strategic Advantage: Delve into topics shaping the future of public service: cybersecurity resilience, AI for citizen-centric experiences, and the evolving landscape of education technology. Gain insights to guide your organization's strategy.

Lightning Talks for Rapid Innovation: Learn from pioneers like Adtalem, Purdue University Global, Hawaii's Department of Human Services, Illinois Office of Children’s Behavioral Health Transformation, and the USPTO. Discover how they're utilizing cutting-edge technology to drive efficiency and impact.

Join us at the Google Public Sector Booth #1240

Demos That Spark Solutions: See Google Cloud in action, tackling real-world public sector mission-critical challenges. Explore how these solutions could streamline your Medicaid processes, enhance transportation systems, or accelerate research initiatives. Hear from our Public Sector sponsoring partners and learn about our joint solutions tailored to supporting education and government missions.

Innovation in Action at the Public Sector Solution Spotlight: Explore ISV demos of solutions built on Google Cloud to drive strategic impact. See how agencies unlock insights with AI, transform education, enhance cybersecurity, simplify compliance, and deliver exceptional citizen services.

Your Essential Planning Tools

Maximize Your Time: Our Public Sector Quick View Guide and the Next '24 agenda builder are your keys to success. Personalize your schedule to focus on the areas most relevant to your organization's goals.

Inspiration Beyond Sessions

Take advantage of the keynotes, panels, and networking opportunities at Next '24.
Connect with peers, Google experts, and industry leaders for an invaluable exchange of ideas. Don't miss the dedicated Public Sector events hosted by Carahsoft and Deloitte for added value.

Let's Drive Transformation Together
The Public Sector track at Google Cloud Next '24 is our chance to learn, collaborate, and pave the way for the next generation of public service. Let's make it count! View the full article
  19. Inspiration awaits! Google Cloud Next takes over Las Vegas on April 9-11, bringing together a powerhouse collection of innovative customers who are pushing the boundaries with Google Cloud. In this blog, we'll shine a spotlight on customers leveraging Google Cloud databases to transform their businesses. And don't forget to add these sessions to your event agenda to catch their insights and experiences at Next ‘24.

Nuro
Autonomous driving company Nuro uses vector similarity search to help classify objects that autonomous vehicles encounter while driving on the road and ultimately trigger the right action. Nuro currently has hundreds of millions of vectors that are moving to AlloyDB AI in order to simplify their application architecture.
Fei Meng, Head of Data Platform, Nuro
>> Add to my agenda <<

Lightricks
Lightricks utilizes the popular pgvector extension on Cloud SQL for PostgreSQL to categorize video "Templates" within their Videoleap application. Videoleap UGC is a comprehensive social platform developed by Lightricks, designed for editing and sharing videos. Template categorization allows users to easily search through the provided templates to find one that matches their needs and generate their own customized videos.
"The usage of pgvector has enabled us to use semantic search instead of traditional keyword search. The retrieval rate due to the use of pgvector increased by 40% and the template usage from the retrieved results increased by 40% as well. The usage of the pgvector hnsw index enabled us to query millions of embeddings with high accuracy and response times below 100ms."
David Gang, Tech Lead, Brands
>> Add to my agenda <<

Bayer
Bayer Crop Science is a division of Bayer dedicated to agricultural advancements. Their modern data solution, "Field Answers," which stores and analyzes vast amounts of observational data, experienced an increase in data load and latency requirements.
And with the fall harvest season looming, Bayer needed a solution that would hold up to the upcoming demand. The team turned to AlloyDB for PostgreSQL, drawn by its compatibility with existing systems and low replication lag. This upgrade has helped streamline operations, centralize solutions, and improve collaboration with data scientists across the company.
>> Add to my agenda <<

Yahoo!
Yahoo!'s global reach demanded a data solution that would allow them to offer transformative experiences at scale. With audacious goals for modernization, Yahoo! leveraged Spanner as a database to meet its strategy requirements. With Spanner's superior performance, low cost, low operational overhead, and global consistency, Yahoo! plans to consolidate diverse databases and expand Spanner's footprint to support other services.
>> Add to my agenda <<

Statsig
Statsig helps companies ship, test, and manage software and application features with confidence. Facing bottlenecks and connectivity issues, the company realized it needed a performant, reliable, scalable, and fully managed Redis service—and Memorystore for Redis Cluster ticked all the boxes. With real-time analytics capabilities and robust storage (99.99% SLA) at a lower cost, Memorystore provides a higher queries-per-second (QPS) capacity. This allows Statsig to refocus on its core mission: building a full product observability platform that maximizes impact.
>> Add to my agenda <<

Hit the databases jackpot at Google Cloud Next
If these stories have sparked your imagination, then get ready for even more inspiration at Google Cloud Next '24! Register now and be sure to add the breakout sessions mentioned above to your agenda to experience firsthand how Google Cloud databases are empowering businesses to achieve amazing things. We'll see you in Las Vegas! View the full article
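The semantic-search pattern Lightricks describes, store a vector per item and rank by similarity, can be illustrated with a deliberately tiny, self-contained sketch. The letter-frequency "embedding" below is only a stand-in for a real embedding model, and the template names are invented for illustration; this is not Lightricks' actual pipeline.

```python
import math

def embed(text: str) -> list[float]:
    # Toy "embedding": letter-frequency vector over a-z.
    # A real pgvector-backed system would store vectors
    # produced by a learned embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product
    # of the vector magnitudes (0.0 for a zero vector).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical template catalog standing in for millions of rows.
templates = ["birthday party montage", "travel vlog intro", "product unboxing"]
index = [(name, embed(name)) for name in templates]

# Rank templates by similarity to a free-text query.
query = embed("birthday celebration video")
best = max(index, key=lambda item: cosine(query, item[1]))
print(best[0])
```

At scale, a sequential scan like this is replaced by an approximate index such as pgvector's HNSW, which trades a little recall for fast lookups over millions of vectors.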
  20. Google Cloud Next ‘24 is coming to Las Vegas and this year's offerings are packed with exciting experiences for cloud engineers of every type. In particular, cloud architects have a lot to choose from — Spotlights, Showcases, Breakouts, and the Innovator's Hive. Hot topics include networks, storage, distributed cloud and of course, AI. You can view the entire session library here, but here are a few that you should be sure to check out:
- Spotlight SPTL204, AI and modernization on your terms: from edge to sovereign to cross-cloud: Join VP/GM of Infrastructure Sachin Gupta to learn about leveraging AI in public cloud, edge, and sovereign cloud use cases.
- Spotlight SPTL205, Modern cloud computing: workload-optimized and AI-powered infrastructure: Here, VP/GM Compute ML Mark Lohmeyer shows how to build and scale modern AI workloads and use AI to optimize existing infrastructure.
- Showcase Demo INFRA-104, Simplify and secure cloud networks with Cross-Cloud Network: Come to this Infrastructure Showcase to see how Cross-Cloud Network can simplify your architecture.
- Breakout ARC215, AI anywhere: How to build Generative AI Applications On-Premises with Google Distributed Cloud: Join this breakout to learn how to deploy and operate generative AI applications on-premises.
- Innovator's Hive Lightning Talk IHLT103, 7 tips and tools to choose the Right Cloud Regions for your AI Workloads: This Lightning Talk will give you tips on how to deploy your AI workloads right the first time.
- Breakout ARC218, Accelerate AI inference workloads with Google Cloud TPUs and GPUs: In this breakout session, learn the finer points of scaling your inference workloads using TPUs and GPUs.
- Breakout ARC108, Increase AI productivity with Google's AI Hypercomputer: Learn the latest about Google's new AI Hypercomputer.
- Breakout ARC208, What's new in cloud networking: All the latest from Cloud Networking with VP/GM Muninder Sambi and team.
- Breakout ARC204, How to optimize block storage for any workload with the latest from Hyperdisk: Learn how to accelerate your AI inference workloads with optimized block storage using Hyperdisk.
But these are just a handful of the amazing sessions we're offering this year — be sure to add them to your personal agenda. And if you're a networking person, be sure to check out my networking session recommendations. See you at the show! View the full article
  21. Google Cloud Next, our global exhibition of inspiration, innovation, and education, is coming to the Mandalay Bay Convention Center in Las Vegas starting Tuesday, April 9. We're featuring a packed Security Professionals track, including more than 40 breakout sessions, demos, and lightning talks where you can discover new ways to secure your cloud workloads, get the latest in threat intelligence, and learn how AI is evolving security practices. Learn from an exciting lineup of speakers, including experts from Google Cloud and leaders from Charles Schwab, L'Oreal, Snap, Fiserv, Spotify, and Electronic Arts. Register here and get ready to defend your business with the latest in frontline intelligence, intel-driven SecOps, and cloud innovation. Here are some must-attend sessions for security professionals:

Security keynote
What's next for security professionals
Organizations need powerful security solutions to protect their most sensitive assets, but the familiar approach of bolting on more security products is unsustainable. Instead, the convergence of key capabilities can reduce complexity, increase productivity, and deliver stronger security outcomes. In this spotlight, leaders from Google and Mandiant will share our insight into the threat landscape and our latest innovations that make Google part of your security team.
Featured speakers: Sunil Potti, vice president and general manager, Cloud Security, Google Cloud; Kevin Mandia, CEO, Mandiant, Google Cloud; Parisa Tabriz, vice president, Chrome Browser, Google; Steph Hay, director of User Experience, Cloud Security, Google Cloud

Security breakout sessions
How AI can transform your approach to security
Generative AI from Google Cloud can help seasoned security professionals quickly discover, investigate, and respond to threats, and help newcomers boost their skills to get more done, faster.
Join this session to learn about new features, hear from customers, and gain insights on how to address common security challenges with AI.
Featured speakers: Anton Chuvakin, senior staff security consultant, Google Cloud; Bashar Abouseido, senior vice president and CISO, Charles Schwab; Steph Hay, director of User Experience, Cloud Security, Google Cloud

Simplified Google Cloud network security: Zero Trust and beyond
This demo-packed session on how to proactively defend your cloud network with Google Cloud's machine learning-powered Zero Trust architecture reveals the latest Cloud next-generation firewall (NGFW), Secure Web Proxy, and Cloud Armor enhancements, partner integrations, and real-world use cases. Learn to neutralize unauthorized access, sensitive data loss, and other cloud attack vectors across public and hybrid cloud environments.
Featured speakers: Ashok Narayanan, principal engineer, Google Cloud; Tobias Pischl, director of product management, Broadcom, Inc.; Olivier Richaud, vice president, Platforms, Symphony; Sid Shibiraj, group product manager, Google Cloud

Start secure, stay secure: Build a strong security foundation for your cloud
In this session, you'll learn how to start secure and stay secure in the cloud using the Security Foundation, which provides recommended controls for common cloud adoption use cases, including infrastructure modernization, AI workloads, data analytics, and application modernization.
Featured speakers: William Bear, cloud architect, Mayo Clinic; Dhananjay Dalave, strategic cloud engineer, Google Cloud; Marcus Hartwig, product marketing lead, Google Cloud; Stacy Moening, director of product management - IT, Mayo Clinic; Anil Nandigam, product marketing lead, Google Cloud

Practical use cases for AI in security operations
AI isn't a future concept: It's here and available, with early user feedback showing that AI can reduce the time required for common analyst tasks, such as triaging complex cases, by seven times.
In this session, we'll dive into the real-world applications of AI in security operations with hands-on demonstrations and case studies.
Featured speakers: Payal Chakravarty, product manager, Google Cloud; Mark Ruiz, head of Cybersecurity Analytics, Pfizer

A cybersecurity expert's guide to securing AI products with Google SAIF
AI is rapidly advancing, and it's important that effective security and privacy strategies evolve along with it. To support this evolution, Google Cloud introduced the Secure AI Framework (SAIF). Join us to learn why SAIF offers a practical approach to addressing the concerns that are top of mind for security and privacy professionals like you, and how to implement it.
Featured speakers: Anton Chuvakin, senior staff security consultant, Google Cloud; Shan Rao, group product manager, Google Cloud

Preventing data exfiltration with Google Cloud's built-in controls
Join us to hear from Charles Schwab and Commerzbank on their security journey to mitigate data exfiltration risks and reduce the rising cost of a breach. You'll also learn more about Google Cloud platform security technologies, including VPC Service Controls and Organization Restrictions, and how they can be deployed to effectively address different threat vectors.
Featured speakers: Sri B, product manager, Google Cloud; Steffen Kreis, senior cloud architect, Commerzbank AG; Florin Stingaciu, product manager, Google Cloud; Jonathan Wentz, cloud architect, Charles Schwab

Go beyond multi-cloud CNAPP security with Security Command Center Enterprise
Learn how to implement a full life cycle approach to reduce cloud risks with the industry's first solution that converges best-in-class cloud security and enterprise security operations capabilities.
Hear how Verizon security teams assess and mitigate cloud risks, and how your organization can proactively manage threats, posture, data, and identities, and respond to security events across clouds with a single, converged solution operating at Google Cloud scale.
Featured speakers: Arshad Aziz, director, Cloud Security, Verizon; Connor Hammersmith, Security Command Center GTM, Google Cloud; Jess Leroy, director, product management, Google Cloud

Cloud compromises: Lessons learned from Mandiant investigations in 2023
This session will draw from real-world experiences and highlight the top threats that emerged in 2023 in relation to cloud security. Participants will gain valuable insights through case studies derived from Mandiant's incident response engagements in 2023, covering threat actors such as Scattered Spider (aka UNC3944).
Featured speakers: Omar ElAhdan, principal consultant, Mandiant, Google Cloud; Will Silverstone, senior consultant, Google Cloud

Learn to love IAM: The most important step in securing your cloud infrastructure
Join this session to learn everything you need to get started with IAM, including how to structure and set up identities and access resources with minimal risk to your cloud environment.
Featured speakers: Elliott Abraham, cloud security architect, Google Cloud; Michele Chubirka, cloud security advocate, Google Cloud

Your model. Your data. Your business: A privacy-first approach to generative AI
Want to learn how to quickly and easily train and fine-tune production AI models? Use your data and own your model? Train in the cloud or on-premises, pain-free? Then this is the session for you!
Featured speakers: Todd Moore, vice president, Encryption Solutions, Thales; Amit Patil, director, Cloud Security Platform and Products, Google Cloud; Nelly Porter, director, product management, Google Cloud

Secure enterprise browsing: New ways to strengthen endpoint security
Discover how to leverage Chrome Enterprise to apply built-in security measures, including data-loss prevention and context-aware access controls, plus the latest product announcements. You'll leave with actionable best practices and tips to safeguard your corporate data and empower your team.
Featured speakers: Mark Berschadski, director, product management, Chrome Browser Enterprise, Google Cloud; Tanisha Rai, Google Cloud Security product manager, Google

Using Security Command Center for multi-cloud threat detection and response
Learn how Google detects threats against Google Cloud to protect multicloud environments, and find out how industry-leading Mandiant Threat Intelligence combines with planet-scale data analytics to detect new and persistent threats to cloud environments, cloud identities, and cloud workloads.
Featured speakers: Ken Duffy, staff cloud architect, Telus Corporation; Timothy Peacock, senior product manager, Google Cloud; Jason Sloderbeck, group product manager, Security Command Center, Google Cloud

TD shows Google Cloud's shared fate model in action
Discover how expecting more from Google Cloud's shared fate model has helped TD accelerate its transformation with peace of mind. Join this session to learn how TD is building out a generative AI platform designed to expand its business, create a next-generation user experience, and enhance the bank's security posture.
Featured speakers: Dan Bosman, senior vice president and CIO, TD Securities and TBSM; Andy Chang, group product manager, Google Cloud; Sean Leighton, principal architect, Google Cloud

Address retail security challenges with AI
Retail organizations are battling some challenging, financially motivated threat actors. To keep pace and stay within your resource budget, AI may provide some answers. In this session, we'll discuss the evolving retail cyber-threat landscape and explore how AI can help organizations improve their security.
Featured speakers: Brian Klenke, CISO, Bath & Body Works; Luke McNamara, deputy chief analyst, Mandiant Intelligence; Harpreet Sidhu, North American Cybersecurity Lead, Accenture

To explore the full catalog of breakout sessions and labs designed for security professionals, check out the security track in the Next ‘24 catalog. View the full article
  22. Calling all Firebase fanatics, Android experts, Flutter enthusiasts, Kaggle competitors, machine learning wizards, and anyone deeply involved with open-source Google AI projects! We're excited to unveil Dev Connect, a new experience built to help you harness the power of Google Cloud and build bridges between developer communities at Google Cloud Next ‘24. Designed for developers not currently using Google Cloud, but eager to explore it for building and deploying applications with Firebase, Android, Kaggle, Angular, Flutter, and Go, Dev Connect launches at Google Cloud Next '24.

Dev Connect: Your Google Cloud launchpad
Dev Connect at Next ‘24 is your fast track to building apps and solutions with Google Cloud, no matter your existing technical stack. Get hands-on workshops, expert sessions, an AI competition, and dedicated networking — all designed for developers not already working with Google Cloud technologies.

Immerse yourself in Google Cloud
Prepare to unlock the full potential of Google Cloud. Expect a packed schedule overflowing with experiences tailored for developers, including:
Tailored sessions: Dive into topics like building and deploying apps, proven cloud development strategies, and inspiring success stories from real-world Google Cloud users. You can expect sessions like:
- AIML163: Boost your Android app with generative AI: building the next generation of apps
- DEV222: Scale and deliver GenAI apps with Google's Firebase and Firestore
- DEV247: Live-coding an app with Cloud Run and Flutter
- AIML209: Build production AI applications in Go
- AIML233: Use large language models to answer difficult science questions
This is just a sample of the many sessions curated for developers like you. Whether you care about full-stack, web, or mobile app development, want to compete in Kaggle's AI and ML competitions, or are interested in getting experience with Google AI models, we've got you covered!
Hands-on workshops: Get ready to learn by doing.
Code, build, and create alongside experts.
Networking central: Connect with like-minded developers from diverse Google communities, building valuable connections and sharing knowledge.

But wait, there's more…
Dev Connect is so much more than just sessions and workshops. Get ready for:
The developer keynote: Hear about announcements that will make you smile, and discover the latest ways to accelerate your work.
Community Hub: Visit Innovators Hive and immerse yourself in the vibrant world of Google communities. AI, Android, Firebase, Flutter, and Kaggle experts are waiting to engage in 1:1 conversations, show you exciting demos, present inspiring projects, and swap ideas about extending projects to Google Cloud.
Lightning talks: Get quick bursts of inspiration from real-world case studies and demos that showcase the power of Google Cloud. Catch these exciting talks and more:
- IHLT217: Using Firebase for gamification on WhatsApp: A next-level DevRel strategy to measure impact
- IHLT107: From idea to full-stack solutions with Dart and Google Cloud
- IHLT216: OSS + AI: Framework and resources for technologists
- IHLT110: Power up your Android app: a guide to generative AI integration
AI & ML Kaggle competition: Put your skills to the test in a friendly Kaggle challenge with awesome prizes up for grabs. Kaggle Grandmasters and fellow AI enthusiasts will all be on hand. Whether you're a seasoned data scientist or just starting to explore LLMs, the LLM Trivia Time Kaggle competition is your chance to experiment, have fun, and compete for $20k in prizes. You get to tune an LLM to tackle an entertaining collection of trivia, riddles, and brain teasers. We also have a separate AI hackathon happening, in case the Kaggle competition is not enough for you.
Fun learning: Quizzes, demos, and developer badges turn education into fun.
Happy Hour: Unwind, network, and socialize with Googlers, experts, and fellow devs.
Whether you're a Firebase fanatic, Android aficionado, or an AI enthusiast ready to explore cloud technologies, attending Dev Connect at Google Cloud Next '24 is a great opportunity to accelerate your development journey. Discover why Google Cloud is the ideal platform to power your Firebase and Android projects, experiment with the latest AI innovations, and connect with Google's vibrant open-source communities. Mark your calendars! Google Cloud Next '24 kicks off on April 9th in Las Vegas. Keep an eye on the Google Cloud blog for the latest on Dev Connect sessions, the Kaggle competition, and how to get the most out of this thrilling developer experience. This is your chance to supercharge your development journey. Get ready to learn, connect, and be inspired – Dev Connect awaits! Reserve your seat in your favorite sessions today! View the full article
  23. Google Next is making its way to Las Vegas, and Ubuntu is joining the journey. As a proud sponsor, Canonical, the publisher of Ubuntu, invites you to join us at the event and visit booth #252 in the Mandalay Bay Expo Hall. As the publisher of one of the most popular Linux operating systems, Canonical is dedicated to providing commercial support and driving open source innovation across a diverse range of industries and applications. Stop by and learn more about how Canonical and GCP are collaborating to empower businesses with secure and scalable solutions for their cloud computing needs.

Ubuntu ‘Show you're a Pro' Challenge: Find and patch the vulnerabilities and earn awesome swag!
Are you an Ubuntu Pro? Test your skills at our booth! Sit down at our workstation and discover any unpatched vulnerabilities on the machine. Showcase your expertise by securing the system completely, and receive exclusive swag as a token of our gratitude.

Security maintenance for your full software stack
At Canonical, security is paramount. Ubuntu Pro offers a solution to offload security and compliance concerns for your open source stack, allowing you to concentrate on building and managing your business. Serving as an additional layer of services atop every Ubuntu LTS release, Ubuntu Pro ensures robust protection for your entire software stack, encompassing over 30,000 open source packages. Say farewell to fragmented security measures; Canonical provides a holistic approach, delivering security and support through a unified vendor. Additionally, enjoy the assurance of vendor-backed SLA support for open source software, providing peace of mind for your operations.

Confidential computing across clouds
Confidential computing is a revolutionary technology that disrupts the conventional threat model of public clouds.
In the past, vulnerabilities within the extensive codebase of the cloud's privileged system software, including the operating system and hypervisor, posed a constant risk to the confidentiality and integrity of code and data in operation. Likewise, unauthorized access by a cloud administrator could compromise the security of your virtual machine (VM). Ubuntu Confidential VMs (CVMs) on Google Cloud offer enhanced security for your workloads by utilizing hardware-protected Trusted Execution Environments (TEEs). With the broadest range of CVMs available, Ubuntu enables users on Google Cloud to benefit from the cutting-edge security features of AMD 4th Gen EPYC processors with SEV-SNP and Intel Trust Domain Extensions (Intel TDX).

Scale your AI projects with open source tooling
Empower your organization with Canonical's AI solutions. We specialize in the automation of machine learning workloads in any environment, whether private or public cloud, hybrid or multi-cloud. We provide an end-to-end MLOps solution to develop and deploy models in a secure, reproducible, and portable manner that seamlessly integrates with your existing technology stack. Let us help you unlock the full potential of AI.

Join Us at Google Next 2024
Mark your calendars and make plans to visit Canonical at Google Cloud Next 2024. Whether you're seeking cutting-edge solutions for cloud computing, robust security measures for your software stack, or innovative AI tools to propel your organization forward, our team will be on hand to offer insights, demonstrations, and personalized consultations to help you harness the power of open source technology for your business. Join us at booth #252 to discover how Canonical and Ubuntu can elevate your digital journey. See you there! View the full article
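As a rough companion to the Confidential VMs discussion above, launching an Ubuntu Confidential VM on Google Cloud is a single gcloud invocation. The instance name, zone, machine type, and image family below are illustrative placeholder choices, not Canonical's recommendation; check the current Google Cloud documentation for supported machine series and images.

```shell
# Create an AMD SEV-based Ubuntu Confidential VM (illustrative values).
# Confidential VMs run on N2D machine types and require the
# TERMINATE maintenance policy.
gcloud compute instances create demo-confidential-vm \
  --zone=us-central1-a \
  --machine-type=n2d-standard-2 \
  --confidential-compute \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud
```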
  24. As IT admins or architects, you have your work cut out for you. You need infrastructure that's fast, secure, cost-effective, and ready for everything from AI to analytics to large enterprise apps. And that's just your day job. You also face a constantly evolving IT environment where emerging technologies like generative AI force you to adapt and learn to stay ahead of the curve. If this sounds familiar, Google Cloud Next '24 is exactly what you need to learn about the latest cloud secrets, systems, and silicon. "Really?" you ask in a discernibly weary, sarcastic tone. Really. This year the event is in Las Vegas on April 9-11, with a broad agenda including keynotes, ‘how-to' deep-dives, panels, demos, and labs. You can view the entire session library here, but if you're still on the fence, let's power through six questions that might be on your mind right now, and how you can get answers to them at Next ‘24.

1. How can I reduce costs and evaluate the reliability of cloud providers?
Moving and grooving on the cloud doesn't need to be a headache. And not all providers are the same. You can save big with easy-to-use strategies, tools, and flexible infrastructure. Here are a few sessions to explore:
- The reality of reliability: Big differences between hyperscalers, explained - Get an objective assessment of outage/incident data from three major cloud providers, then learn how you can improve your operational reliability.
- Apps take flight: Migrating to Google Cloud - Learn modern migration strategies and how Google simplifies the transition from on-prem or other clouds.
- Optimize costs and efficiency with new compute operations solutions - Discover features and automations that streamline workload management; hear the latest product announcements and roadmap.

2. What are the opportunities, risks, and best practices when modernizing enterprise apps like VMware and SAP?
Cloud-native workloads are great and all, but enterprise workloads are the lifeblood of most organizations. Deciding if, when, and how to modernize them is challenging. At Next ‘24, we will share a ton of ideas and help you assess the trade-offs for yourself:
- Transform your VMware estate in the cloud without re-factoring or re-skilling - Explore how Google Cloud VMware Engine makes modernizing existing VMware setups fast and smooth.
- Storage solutions optimized for modern and enterprise workloads - Find the perfect cloud-based file storage to balance your workload's performance, availability, and cost needs.
- Transform your SAP workload with Google Cloud - Optimize SAP with Google Cloud's reliable infrastructure, tools, and best practices.

3. How should I architect my infrastructure for AI?
Tackling AI projects requires performance, scalability, and know-how. Google Cloud's AI Hypercomputer, the culmination of a decade of pioneering AI infrastructure advancements, helps businesses with performance-optimized hardware, open software, leading ML frameworks, and flexible consumption models. In fact, we were just named a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024, with the highest scores of any vendor evaluated in both the Current Offering and Strategy categories. Here are a few sessions where you can learn about our AI infrastructure during Next ‘24:
- Workload-optimized and AI-powered infrastructure - Hear the latest product announcements from VP/GM Compute & ML, Mark Lohmeyer.
- Increase AI productivity with Google Cloud's AI Hypercomputer - A 101 on our AI supercomputing architecture.
- How to get easy and affordable access to GPUs for AI/ML workloads - Get tips on making the most of GPU access for your AI/ML projects.

4. What's the best way to build and run container-based apps?
Using managed services and choosing the right database will help you build secure, scalable container-based applications.
Google Cloud's container offerings are developer favorites, packaging more than a decade's worth of experience launching several billion containers per week. Here are a few sessions you should check out during Next ‘24:
- The ultimate hybrid example: A fireside chat about how Google Cloud powers (part of) Alphabet - See how Google itself uses GKE, Cloud Run, and more to power its own services!
- A primer on data on Kubernetes - Explore the rise of Kubernetes for data management, including AI/ML and storage best practices.

5. Can you meet my sovereignty, scalability and control requirements?
Many customers face challenges in adopting public cloud services due to policies and regulations that affect connectivity, reliability, data volumes, and security. Google Distributed Cloud offers solutions for government, regulated industries, telecom, manufacturing or retail (check out McDonald's recent experience). Here are a few sessions where you can learn more:
- Google Distributed Cloud's 2024 Roadmap - Learn the basics and get a summary of new features, plus our roadmap.
- How to build on-premises compliance controls for AI in regulated industries - Explore solutions to meet your toughest data sovereignty, security, and compliance requirements.
- Deliver modern AI-enabled retail experiences for one or thousands of locations - Learn how to simplify configuration management at the edge, enabling store analytics, fast check-out, predictive analytics, and more.

6. I just want to see something cool. What's the one session I should attend?
You probably want to know how we're embedding generative AI into Google Cloud solutions. We can't reveal much yet, but make sure you add this session to your schedule: Transform your cloud operations and design capability with Duet AI for Google Cloud.
Any of those questions ring a bell? We've got lots of answers, and we're ready to share them at Next ‘24 starting on April 9th. Not registered yet? Don't delay! Space is limited.
View the full article
  25. Are you ready to bridge the digital divide, deliver citizen-centric services, and future-proof your agency in 2024? Public sector technology is evolving at breakneck speed, with AI, cybersecurity, and sustainable solutions taking center stage. Join us at Google Cloud Next ‘24, April 9-11. Here are 10 great reasons to register now for Next ‘24!
1. Master cutting-edge AI: Dive into Google's advanced AI tools through workshops, demos, and sessions that propel your mission forward. See real-world examples of AI transforming areas like disaster relief and research.
2. Supercharge your skills: Gain hands-on experience with the latest cloud technologies and data-driven solutions. Engage your leadership with your newfound expertise and elevate your career.
3. Network with your public sector community and cross-industry experts: Connect with fellow leaders, Google Cloud experts, and industry peers and partners. Share best practices, exchange ideas, and build collaborations that drive innovation.
4. Learn from inspiring success stories: Discover impactful case studies, like empowering examiners with AI, building disaster resilience, and revolutionizing education. See how technology is making a difference and get inspired to do the same.
5. Deep dive into your interests: Explore 500+ sessions across 10 tracks, covering security, analytics, citizen engagement, and more. Learn from 30+ public sector speakers, including leaders from the states of New Jersey and Wisconsin, and Deloitte Consulting, to gain industry and real-world insights.
6. Get ahead of cybersecurity threats: Discover how AI-powered security solutions are protecting public sector agencies. Gain the knowledge and tools to confidently safeguard your data and infrastructure.
7. Embrace sustainable solutions: Learn about technologies that promote environmental responsibility and social good. Be part of building a better future for everyone.
8. Save big with special discounts: Enjoy exclusive pricing for government and education registrants. Use your business email at registration. Make your budget stretch further and invest in your professional development.
9. Reimagine your mission: See how technology can transform public services and create a brighter future for your community. Leave feeling energized and ready to make a real impact.
10. Join a movement for good: Connect with a vibrant community of passionate public servants committed to positive change. Together, we can harness technology to serve better, faster, and stronger.
Ready to make a difference? Register for Google Cloud Next '24, unlock your full potential, and help take your mission to the next level! View the full article