Search the Community

Showing results for tags 'google gemini'.
Found 14 results

  1. A summarize feature powered by Gemini, Google's recently debuted generative AI model and digital assistant, is coming to the Gmail app for Android - and it could make reading and understanding emails much faster and easier. The feature is expected to roll out soon to all users, and they'll be able to provide feedback by rating the quality of the generated email summaries. The feature has been suspected to be in the works for some time now, as documented by Android Authority, and it's apparently close to launch. One of Android Authority's code-sleuth sources managed to get the feature working on Gmail app version 2024.04.21.626860299 after a fair amount of tinkering. The steps they took aren't disclosed, so if you want to replicate this you'll have to do some experimenting, but the fact that the summarize feature can be up and running shows that Android Gmail users may not have to wait very long.

A screenshot featured in Android Authority's report shows a chat window where the user asks Gemini to summarize an email that they currently have open, and Gemini obliging. Apparently, this feature will be available via a 'Summarize this email' button under an email's subject line, presumably triggering the above prompt and returning a compact summary of the email. This could prove especially helpful when dealing with a large number of emails, or with particularly long emails containing many details. Once the summary is provided, users will be shown thumbs-up and thumbs-down buttons under Gemini's output, similar to OpenAI's ChatGPT after it gives its reply to a user's query. This will give Google a better understanding of how helpful the feature is to users and how it could be improved. According to the screenshot, there will also be a button that allows you to copy the email summary to your clipboard.

When to expect the new feature

The speculation is that the feature could be rolled out during Google's I/O 2024 event, its annual developer conference, which is scheduled for May 14, 2024. Google is also expected to show off the next iteration of its Pixel A series, the Pixel 8A, along with its work on augmented reality (AR) technology and new software and service developments, especially for its devices and ChromeOS (the operating system that powers the best Chromebooks). Many Gmail users could find that the new summarize feature saves time and streamlines their inboxes, but as with any generative AI, there are concerns about the accuracy of the generated text. If Gemini omits or misinterprets important information, it could lead to oversights or misunderstandings. I'm glad that Google has the feedback system in place, as this will show whether the feature is actually serving its purpose well. We'll have to wait and see; the proof will be in the pudding as to whether it improves productivity and is reasonably accurate when it's finally released.

YOU MIGHT ALSO LIKE...
Google Gemini explained: 7 things you need to know about the new Copilot and ChatGPT rival
Google Gemini AI looks like it's coming to Android tablets and could coexist with Google Assistant (for now)
Google Gemini's Android app gets a wider rollout, but it's still no Assistant replacement

View the full article
  2. Find out all about Google Cloud's latest learning path, and learn how to use the Gemini language model in Google Cloud. View the full article
  3. Up until now, you've needed a phone running Android 12 or later to make use of the Google Gemini AI app for Android, but that has now changed – while a new 'conversation mode' for the chatbot has also leaked. As per Android Authority, some digging by well-known tipster @AssembleDebug revealed that Android 10 is the new minimum requirement for Gemini, and the Play Store listing now also reflects the support for more devices. Android 10 and Android 12 launched in 2019 and 2021 respectively, so a substantial number of older phones should now be Gemini-compatible. The app can replace Google Assistant on handsets, if requested, though it doesn't yet support all of the same features. According to the official Gemini support page, you also need 4GB of RAM in your phone to run the AI chatbot properly. That page still mentions compatibility with Android 12 and higher, though we're assuming it'll be updated soon. For now, you can only access Gemini on an iPhone by going through the Google app for iOS.

A little more conversation

[Tweet from @AssembleDebug, April 24, 2024: "Gemini assistant to get a new 'Conversation' mode on Android... There is definitely some uncertainty about how exactly this feature will work."]

Yet another new find from @AssembleDebug (who we're assuming never sleeps) and PiunikaWeb points to something called 'conversation mode' in Gemini for Android. The code for it is disabled right now, but could be enabled in the near future. As it doesn't work yet, it's difficult to say for sure what it could be. It might match the 'continued conversation' feature in Google Assistant, where you can keep chatting without having to manually trigger the Assistant's listening mode each time. Alternatively, it could be something to do with live translation, a feature that's already appeared in several AI-powered apps from Google and others. Time will tell whether this is something Google keeps developing and sets live. The next date of note for Google and Gemini AI news is May 14, when Google I/O 2024 gets underway. Google is expected to tell us a lot more about its AI efforts then, and there should also be updates on Android 15 and the Google Pixel 8a.

You might also like
Android tablets could soon support Gemini
Google Gemini has plenty of ideas
An annoying Gemini problem gets fixed

View the full article
  4. Gemini may receive a big update on mobile in the near future, gaining several new features including a text box overlay. Details of the upgrade come from industry insider AssembleDebug, who shared his findings with a couple of publications. PiunikaWeb gained insight into the overlay, and it's quite fascinating to see it in action. It converts the AI's input box into a small floating window located at the bottom of a smartphone display, staying there even if you close the app. You could, for example, talk to Gemini while browsing the internet or checking your email. AssembleDebug was able to activate the window and get it working on his phone while on X (the platform formerly known as Twitter). His demo video shows it behaving exactly like the Gemini app. You ask the AI a question, and after a few seconds, a response comes out complete with source links, images, and YouTube videos if the inquiry calls for it. Answers have the potential to obscure the app behind them; AssembleDebug's video reveals the length depends on whether the question requires a long-form answer. We should mention that the overlay is multimodal, so you can write out an inquiry, verbally command the AI, or upload an image.

Smarter AI

The other notable changes were shared with Android Authority. First, Gemini on Android will gain the ability to accept different types of files besides photographs. Images show a tester uploading a PDF and then asking the AI to summarize the text inside it. Apparently, the feature is present in the current version of Gemini; however, activating it doesn't do anything. Android Authority speculates the update may be exclusive to Google Workspace, to Gemini Advanced, or to both; it's hard to tell at the moment. Second is a pretty basic but nonetheless useful tool called Select Text. The way Gemini works right now, you're forced to copy a whole block of text even if you just want a small portion. Select Text solves this issue by allowing you to grab a specific line or paragraph. Yeah, it's not a flashy upgrade; almost every app in the world has the same capability. Yet the tool has "huge implications for Gemini's usability", greatly improving the AI's ease of use by not being so restrictive.

[Tweet from @AssembleDebug, April 23, 2024: "Google Gemini Android app will finally not force you to copy an entire prompt response."]

A fourth, smaller update was found by AssembleDebug. It's known as Real-time Responses. The descriptor text found alongside it claims the tool lets you see answers being written out in real time. However, as PiunikaWeb points out, it's only an animation change; there aren't any "practical benefits." Instead of waiting for Gemini to generate a response as one solid mass, you can choose to see the AI write everything out line by line, similar to its desktop counterpart. Google I/O 2024 kicks off in about three weeks on May 14. There's no word on when these features will roll out, but we'll learn a lot more during the event. While you wait, check out TechRadar's roundup of the best Android smartphones for 2024 if you're looking to upgrade.

You might also like
This is what Gemini AI in Google Messages may look like
The first Android 15 public beta is out – here's how to download it
Google's Gemini AI app could soon let you sync and control your favorite music streaming service

View the full article
  5. Google's latest AI experiment, Gemini, is about to get a whole lot more useful thanks to support for third-party music streaming services like Spotify and Apple Music. This new development was apparently found in Gemini's settings, and users will be able to pick their preferred streaming service to use within Gemini. Gemini has been taking on roles across different Google products, particularly as a digital assistant, sometimes in place of and sometimes in tandem with Google Assistant. It's still somewhat limited compared to Assistant and is not at the stage where it can fully replace the Google staple. One of these limitations is that it can't enlist a streaming service of a user's choice to play a song or other audio recording, like many popular digital assistants (including Google Assistant) can. This might not be the case for long, however. The tech blog PiunikaWeb and X user @AssembleDebug claim that Gemini is getting the feature, and they have screenshots to back up their claim. Screenshots from PiunikaWeb's tipster show that the Gemini app's settings now have a new "Music" option, with text reading "Select preferred services used to play music" underneath. This will presumably allow users to choose from whatever streaming services Google deems compatible. Once you choose a streaming service, Gemini will hopefully work seamlessly with that service and enable you to control it using voice commands. PiunikaWeb suggests that users will be able to use Gemini for song identification, possibly by letting Gemini listen to the song and then interact with a streaming app to find the song playing in their surroundings, similar to the way Shazam works. If that's the case, that's one fewer separate app you'll need.

What we don't know yet, but hope to soon

This is all very exciting, and from the screenshots it looks like the feature is well into development. It's not clear whether PiunikaWeb's tipster could get the feature to actually work, or which streaming services will work in sync with Gemini, and we don't know when Google will roll this feature out. Still, it's highly requested and a must if Google has plans for Gemini to take Assistant's place, so it'll probably be rolled out in a future Gemini update. It also suggests to me that Google is committed to expanding Gemini's repertoire so that it stands alongside Google's other popular products and services.

YOU MIGHT ALSO LIKE...
Google's new Gemini Code Assist tool could be the best thing to happen to developers this year
This could be the next Gemini AI trick you get on your Android phone
Google has fixed an annoying Gemini voice assistant problem – and more upgrades are coming soon

View the full article
  6. If you've been using Google Chrome for the past few years, you may have noticed that whenever you've had to think up a new password, or change your existing one, for a site or app, a little "Suggest strong password" dialog box would pop up - and it looks like it could soon offer AI-powered password suggestions. A keen-eyed software development observer has spotted that Google might be gearing up to infuse this feature with the capabilities of Gemini, its latest large language model (LLM). The discovery was made by @Leopeva64 on X, who found references to Gemini in patches of Gerrit, a web-based code review system developed by Google and used in the development of Google products like Android. These findings appear to be backed up by screenshots that show glimpses of how Gemini could be incorporated into Chrome to give you even better password suggestions when you're looking to create a new password or change one you've previously set.

Gemini guesswork

One line of code that caught my attention is that "deleting all passwords will turn this feature off." I wonder whether this does what it says on the tin, shutting the feature off if a user deletes all of their passwords, or whether it refers only to the passwords generated by the "Suggest strong passwords" feature. The final screenshot that @Leopeva64 provides is also intriguing, as it seems to show the prompt that Google engineers have included to get Gemini to generate a suitable password. This is a really interesting move by Google, and it could play out well for Chrome users who use the strong password suggestion feature. I'm a little wary of the potential risks associated with this method of password generation, similar to the risks you find with many such methods. LLMs are susceptible to information leaks caused by prompt injection hacks, which are designed to trick AI models into giving out information that their creators, individuals, or organizations might want to keep private, like someone's login information.

An important security consideration

Now, that sounds scary, but as far as we know, this hasn't happened yet with any widely deployed LLM, including Gemini. It's a theoretical fear, and there are standard password security practices that tech organizations like Google employ to prevent data breaches. These include encryption technologies, which encode data so that only authorized parties can access it at multiple stages of the password generation and storage process, and hashing, a one-way data conversion process that's intended to make data hard to reverse-engineer (a minimal sketch of the idea follows after this item). You could also use any other LLM like ChatGPT to generate a strong password manually, although I feel like Google knows more about how to do this, and I'd only advise experimenting with that if you're a software data professional. It's not a bad idea as a proposition, and a use of AI that could actually be very beneficial for users, but Google will have to put an equal (if not greater) amount of effort into making sure Gemini is bolted down and as impenetrable to outside attacks as can be. If Google implements this and it somehow causes a huge data breach, that will likely damage people's trust in LLMs and could hurt the reputations of the tech companies, including Google, that are championing them.

YOU MIGHT ALSO LIKE...
'The party is over for developers looking for AI freebies' — Google terminates Gemini API free access
Google has fixed an annoying Gemini voice assistant problem – and more upgrades are coming soon
Google Gemini is its most powerful AI brain so far – and it'll change the way you use Google

View the full article
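To make the hashing idea above concrete, here is a minimal sketch of salted one-way password hashing using only Python's standard library. The function names and cost parameters are illustrative assumptions, not anything Google has disclosed about its own systems.

import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # a random salt ensures identical passwords still produce different hashes
    salt = os.urandom(16)
    # scrypt is a deliberately expensive one-way function (cost parameters are illustrative)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # recompute with the stored salt; compare in constant time to avoid timing leaks
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True

The point of the one-way function is that even if the stored digests leak, the original passwords cannot be cheaply recovered from them.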
  7. It's been rumored for a while now that Google is considering charging users for AI-powered results, especially concerning the idea of a premium search option that leverages generative AI. Whether that will happen remains to be seen, but Google is ending the era of free access to its Gemini API, signaling a new financial strategy within its AI development. Developers previously enjoyed free access to lure them towards Google's AI products and away from OpenAI's, but that is set to change. OpenAI was first to market and has already monetized its APIs and LLM access. Now Google is planning to emulate this through its cloud and AI Studio services, and it seems the days of unfettered free access are numbered.

RIP PaLM API

In an email to developers, Google said it is shutting down developer access to its PaLM API (the pre-Gemini model that was used to build custom chatbots) via AI Studio on August 15. The API was deprecated back in February. The tech giant is hoping to convert free users into paying customers by promoting the stable Gemini 1.0 Pro. "We encourage testing prompts, tuning, inference, and other features with stable Gemini 1.0 Pro to avoid interruptions," the email reads. "You can use the same API key you used for the PaLM API to access Gemini models through Google AI SDKs." (A minimal sketch of such a call follows after this item.) Pricing for the paid plan begins at $7 per one million input tokens and rises to $21 for the same number of output tokens. There is one exception to Google's plans: PaLM and Gemini will remain accessible to customers paying for Vertex AI in Google Cloud. However, as HPCWire points out, "Regular developers on cheaper budgets typically use AI Studio as they cannot afford Vertex."

More from TechRadar Pro
These are the best AI chatbots for businesses around today
Google's non-profit arm launches AI accelerator to fund the next big thing
New search engines fueled by Generative AI will compete with Google

View the full article
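For developers making that switch, the call shape looks roughly like the following minimal sketch. It assumes the google-generativeai Python SDK; the API key placeholder and the prompt are invented for illustration.

import google.generativeai as genai

# per Google's email, the same API key used for the PaLM API also works for Gemini
genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

# Gemini 1.0 Pro is the stable model Google steers free-tier users towards
model = genai.GenerativeModel("gemini-1.0-pro")
response = model.generate_content("Say hello in three languages.")
print(response.text)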
  8. Discover the basics of using Gemini with Python via VertexAI, creating APIs with FastAPI, data validation with Pydantic, and the fundamentals of Retrieval-Augmented Generation (RAG).

[Photo by Kenny Eliason on Unsplash]

In this article, I share some of the basics needed to create an LLM-driven web application using various technologies: Python, FastAPI, Pydantic, VertexAI, and more. You will learn how to create such a project from the very beginning and get an overview of the underlying concepts, including Retrieval-Augmented Generation (RAG).

Disclaimer: I am using data from The Movie Database within this project. The API is free to use for non-commercial purposes and complies with the Digital Millennium Copyright Act (DMCA). For further information about TMDB data usage, please read the official FAQ.

Table of contents
- Inspiration
- System Architecture
- Understanding Retrieval-Augmented Generation (RAG)
- Python projects with Poetry
- Create the API with FastAPI
- Data validation and quality with Pydantic
- TMDB client with httpx
- Gemini LLM client with VertexAI
- Modular prompt generator with Jinja
- Frontend
- API examples
- Conclusion

The best way to share this knowledge is through a practical example, so I'll use my project Gemini Movie Detectives to cover the various aspects. The project was created as part of the Google AI Hackathon 2024, which is still running while I am writing this.

[Image: Gemini Movie Detectives (by author)]

Gemini Movie Detectives is a project aimed at leveraging the power of the Gemini Pro model via VertexAI to create an engaging quiz game using the latest movie data from The Movie Database (TMDB). Part of the project was also to make it deployable with Docker and to create a live version. Try it yourself: movie-detectives.com. Keep in mind that this is a simple prototype, so there might be unexpected issues. Also, I had to add some limitations in order to control the costs that might be generated by using GCP and VertexAI.

[Image: Gemini Movie Detectives (by author)]

The project is fully open source and is split into two separate repositories:
- GitHub repository for the backend: https://github.com/vojay-dev/gemini-movie-detectives-api
- GitHub repository for the frontend: https://github.com/vojay-dev/gemini-movie-detectives-ui

The focus of this article is the backend project and its underlying concepts; it will therefore only briefly explain the frontend and its components. I also give an overview of the project and its components in a video: [video embedded in the original post]

Inspiration

Growing up as a passionate gamer and now working as a Data Engineer, I've always been drawn to the intersection of gaming and data. With this project, I combined two of my greatest passions: gaming and data. Back in the '90s, I always enjoyed the video game series You Don't Know Jack, a delightful blend of trivia and comedy that not only entertained but also taught me a thing or two. Generally, the use of games for educational purposes is another concept that fascinates me. In 2023, I organized a workshop to teach kids and young adults game development. They learned about mathematical concepts behind collision detection, yet they had fun because everything was framed in the context of gaming. It was eye-opening that gaming is not only a huge market but also holds great potential for knowledge sharing.
With this project, called Movie Detectives, I aim to showcase the magic of Gemini, and AI in general, in crafting engaging trivia and educational games, but also how game design can profit from these technologies in general. By feeding the Gemini LLM with accurate and up-to-date movie metadata, I could ensure the accuracy of the questions from Gemini, an important aspect: without this Retrieval-Augmented Generation (RAG) methodology to enrich queries with real-time metadata, there's a risk of propagating misinformation, a typical pitfall when using AI for this purpose.

Another game-changer is the modular prompt generation framework I've crafted using Jinja templates. It's like having a Swiss Army knife for game design: effortlessly swapping show master personalities to tailor the game experience. And with the language module, translating the quiz into multiple languages is a breeze, eliminating the need for costly translation processes. From a business standpoint, this modularization opens doors to a wider customer base, transcending language barriers without breaking a sweat. And personally, I've experienced firsthand the transformative power of these modules. Switching from the default quiz master to the dad-joke quiz master was a riot: a nostalgic nod to the heyday of You Don't Know Jack, and a testament to the versatility of this project.

[Image: Movie Detectives - Example: Santa Claus personality (by author)]

System Architecture

Before we jump into details, let's get an overview of how the application was built.

Tech stack (backend):
- Python 3.12 + FastAPI for API development
- httpx for TMDB integration
- Jinja templating for modular prompt generation
- Pydantic for data modeling and validation
- Poetry for dependency management
- Docker for deployment
- TMDB API for movie data
- VertexAI and Gemini for generating quiz questions and evaluating answers
- Ruff as linter and code formatter, together with pre-commit hooks
- GitHub Actions to automatically run tests and the linter on every push

Tech stack (frontend):
- VueJS 3.4 as the frontend framework
- Vite for frontend tooling

Essentially, the application fetches up-to-date movie metadata from an external API (TMDB), constructs a prompt based on different modules (personality, language, ...), enriches this prompt with the metadata, and that way uses Gemini to initiate a movie quiz in which the user has to guess the correct title.

The backend infrastructure is built with FastAPI and Python, employing the Retrieval-Augmented Generation (RAG) methodology to enrich queries with real-time metadata. Using Jinja templating, the backend modularizes prompt generation into base, personality, and data enhancement templates, enabling the generation of accurate and engaging quiz questions. The frontend is powered by Vue 3 and Vite, supported by daisyUI and Tailwind CSS for efficient frontend development. Together, these tools provide users with a sleek and modern interface for seamless interaction with the backend.

In Movie Detectives, quiz answers are interpreted by the LLM once again, allowing for dynamic scoring and personalized responses. This showcases the potential of integrating an LLM with RAG in game design and development, paving the way for truly individualized gaming experiences. Furthermore, it demonstrates the potential for creating engaging trivia or educational games by involving an LLM.
Adding and changing personalities or languages is as easy as adding more Jinja template modules. With very little effort, this can change the full game experience, reducing the workload for developers.

[Image: System Overview (by author)]

As can be seen in the overview, Retrieval-Augmented Generation (RAG) is one of the essential ideas of the backend. Let's have a closer look at this particular paradigm.

Understanding Retrieval-Augmented Generation (RAG)

In the realm of Large Language Models (LLMs) and AI, one paradigm becoming more and more popular is Retrieval-Augmented Generation (RAG). But what does RAG entail, and how does it influence the landscape of AI development? At its essence, RAG enhances LLM systems by incorporating external data to enrich their predictions. This means you pass relevant context to the LLM as an additional part of the prompt, but how do you find relevant context? Usually, this data can be automatically retrieved from a database with vector search, or from dedicated vector databases. Vector databases are especially useful, since they store data in a way that makes it quick to query for similar entries. The LLM then generates the output based on both the query and the retrieved documents.

Picture this: you have an LLM capable of generating text based on a given prompt. RAG takes this a step further by infusing additional context from external sources, like up-to-date movie data, to enhance the relevance and accuracy of the generated text.

Let's break down the key components of RAG:
- LLMs: LLMs serve as the backbone of RAG workflows. These models, trained on vast amounts of text data, possess the ability to understand and generate human-like text.
- Vector indexes for contextual enrichment: A crucial aspect of RAG is the use of vector indexes, which store embeddings of text data in a format understandable by LLMs. These indexes allow for efficient retrieval of relevant information during the generation process. In the context of this project, that could be a database of movie metadata.
- Retrieval process: RAG involves retrieving pertinent documents or information based on the given context or prompt. This retrieved data acts as additional input for the LLM, supplementing its understanding and enhancing the quality of generated responses. Here, that means getting all relevant information known about and connected to a specific movie.
- Generative output: With the combined knowledge from both the LLM and the retrieved context, the system generates text that is not only coherent but also contextually relevant, thanks to the augmented data.

[Image: RAG architecture (by author)]

While in the Gemini Movie Detectives project the prompt is enhanced with external API data from The Movie Database, RAG typically involves the use of vector indexes to streamline this process, along with much more complex documents and far larger amounts of data for enrichment. These indexes act like signposts, guiding the system to relevant external sources quickly. This project is therefore a mini version of RAG, but it shows the basic idea and demonstrates the power of external data to augment LLM capabilities; a minimal sketch of the idea follows below.

In more general terms, RAG is a very important concept, especially when crafting trivia quizzes or educational games using LLMs like Gemini. This concept can avoid the risk of false positives, asking wrong questions, or misinterpreting answers from the users.
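To make the retrieval step concrete, here is a minimal, self-contained sketch of the RAG idea. It uses a toy keyword-overlap retriever in place of a real vector index, and the sample metadata and function names are invented for illustration, not taken from the project.

# Toy RAG sketch: retrieve the most relevant metadata entry, then enrich the prompt.
documents = {
    "Inception": "A thief enters people's dreams to steal their secrets.",
    "Godzilla vs. Kong": "Two giant monsters clash while humanity watches.",
}

def retrieve(query: str) -> str:
    # Toy retriever: score each document by word overlap with the query.
    # A real system would use embeddings and a vector index instead.
    query_words = set(query.lower().split())
    def score(text: str) -> int:
        return len(query_words & set(text.lower().split()))
    best = max(documents, key=lambda title: score(documents[title]))
    return f"{best}: {documents[best]}"

def build_prompt(query: str) -> str:
    # Augment the user query with the retrieved context before calling the LLM.
    return f"Context:\n{retrieve(query)}\n\nTask:\n{query}"

# The enriched prompt would then be sent to Gemini via the chat client.
print(build_prompt("Create a quiz question about a movie with dreams"))

Swapping the toy scorer for embedding similarity against a vector database is exactly the step that turns this sketch into the full RAG setup described above.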
Here are some open-source projects that might be helpful when approaching RAG in one of your projects:
- txtai: an all-in-one open-source embeddings database for semantic search, LLM orchestration, and language model workflows.
- LangChain: a framework for developing applications powered by large language models (LLMs).
- Qdrant: a vector search engine for the next generation of AI applications.
- Weaviate: a cloud-native, open-source vector database that is robust, fast, and scalable.

Of course, given the potential value of this approach for LLM-based applications, there are many more open- and closed-source alternatives, but with these you should be able to get your research on the topic started.

Python projects with Poetry

Now that the main concepts are clear, let's have a closer look at how the project was created and how dependencies are managed in general. The three main tasks Poetry can help you with are build, publish, and track: the idea is to have a deterministic way to manage dependencies, to share your project, and to track dependency states.

[Photo by Kat von Wood on Unsplash]

Poetry also handles the creation of virtual environments for you. By default, those live in a centralized folder within your system. However, if you prefer to have the virtual environment of the project in the project folder, like I do, it is a simple config change:

poetry config virtualenvs.in-project true

With poetry new you can then create a new Python project. It will create a virtual environment linked to your system's default Python. If you combine this with pyenv, you get a flexible way to create projects using specific versions. Alternatively, you can also tell Poetry directly which Python version to use: poetry env use /full/path/to/python. Once you have a new project, you can use poetry add to add dependencies to it. With this, I created the project for Gemini Movie Detectives:

poetry config virtualenvs.in-project true
poetry new gemini-movie-detectives-api
cd gemini-movie-detectives-api
poetry add 'uvicorn[standard]'
poetry add fastapi
poetry add pydantic-settings
poetry add httpx
poetry add 'google-cloud-aiplatform>=1.38'
poetry add jinja2

The metadata about your project, including the dependencies with their respective versions, is stored in the pyproject.toml and poetry.lock files. I added more dependencies later, which resulted in the following pyproject.toml for the project:

[tool.poetry]
name = "gemini-movie-detectives-api"
version = "0.1.0"
description = "Use Gemini Pro LLM via VertexAI to create an engaging quiz game incorporating TMDB API data"
authors = ["Volker Janz <volker@janz.sh>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.12"
fastapi = "^0.110.1"
uvicorn = {extras = ["standard"], version = "^0.29.0"}
python-dotenv = "^1.0.1"
httpx = "^0.27.0"
pydantic-settings = "^2.2.1"
google-cloud-aiplatform = ">=1.38"
jinja2 = "^3.1.3"
ruff = "^0.3.5"
pre-commit = "^3.7.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

Create the API with FastAPI

FastAPI is a Python framework that allows for rapid API development. Built on open standards, it offers a seamless experience without new syntax to learn. With automatic documentation generation, robust validation, and integrated security, FastAPI streamlines development while ensuring great performance.

[Photo by Florian Steciuk on Unsplash]

Implementing the API for the Gemini Movie Detectives project, I simply started from a Hello World application and extended it from there.
Here is how to get started:

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

Assuming you also keep the virtual environment within the project folder as .venv/ and use uvicorn, this is how to start the API with the reload feature enabled, in order to test code changes without needing a restart:

source .venv/bin/activate
uvicorn gemini_movie_detectives_api.main:app --reload
curl -s localhost:8000 | jq .

If you have not yet installed jq, I highly recommend doing so now; I might cover this wonderful JSON Swiss Army knife in a future article. This is what the response looks like:

[Image: Hello FastAPI (by author)]

From here, you can develop your API endpoints as needed. For example, this is the API endpoint implementation that starts a movie quiz in Gemini Movie Detectives:

@app.post('/quiz')
@rate_limit
@retry(max_retries=settings.quiz_max_retries)
def start_quiz(quiz_config: QuizConfig = QuizConfig()):
    movie = tmdb_client.get_random_movie(
        page_min=_get_page_min(quiz_config.popularity),
        page_max=_get_page_max(quiz_config.popularity),
        vote_avg_min=quiz_config.vote_avg_min,
        vote_count_min=quiz_config.vote_count_min
    )

    if not movie:
        logger.info('could not find movie with quiz config: %s', quiz_config.dict())
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail='No movie found with given criteria')

    try:
        genres = [genre['name'] for genre in movie['genres']]

        prompt = prompt_generator.generate_question_prompt(
            movie_title=movie['title'],
            language=get_language_by_name(quiz_config.language),
            personality=get_personality_by_name(quiz_config.personality),
            tagline=movie['tagline'],
            overview=movie['overview'],
            genres=', '.join(genres),
            budget=movie['budget'],
            revenue=movie['revenue'],
            average_rating=movie['vote_average'],
            rating_count=movie['vote_count'],
            release_date=movie['release_date'],
            runtime=movie['runtime']
        )

        chat = gemini_client.start_chat()

        logger.debug('starting quiz with generated prompt: %s', prompt)
        gemini_reply = gemini_client.get_chat_response(chat, prompt)
        gemini_question = gemini_client.parse_gemini_question(gemini_reply)

        quiz_id = str(uuid.uuid4())
        session_cache[quiz_id] = SessionData(
            quiz_id=quiz_id,
            chat=chat,
            question=gemini_question,
            movie=movie,
            started_at=datetime.now()
        )

        return StartQuizResponse(quiz_id=quiz_id, question=gemini_question, movie=movie)
    except GoogleAPIError as e:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f'Google API error: {e}')
    except Exception as e:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f'Internal server error: {e}')

Within this code, you can already see three of the main components of the backend:
- tmdb_client: a client I implemented using httpx to fetch data from The Movie Database (TMDB).
- prompt_generator: a class that helps to generate modular prompts based on Jinja templates.
- gemini_client: a client to interact with the Gemini LLM via VertexAI in Google Cloud.

We will look at these components in detail later, but first some more helpful insights regarding the usage of FastAPI. FastAPI makes it really easy to define the HTTP method and the data to be transferred to the backend. For this particular function, I expect a POST request, as it creates a new quiz. This can be done with the post decorator:

@app.post('/quiz')

Also, I am expecting some data within the request, sent as JSON in the body. In this case, I am expecting an instance of QuizConfig as JSON.
I simply defined QuizConfig as a subclass of BaseModel from Pydantic (covered later), and with that, I can pass it into the API function and FastAPI will do the rest:

class QuizConfig(BaseModel):
    vote_avg_min: float = Field(5.0, ge=0.0, le=9.0)
    vote_count_min: float = Field(1000.0, ge=0.0)
    popularity: int = Field(1, ge=1, le=3)
    personality: str = Personality.DEFAULT.name
    language: str = Language.DEFAULT.name

# ...

def start_quiz(quiz_config: QuizConfig = QuizConfig()):

Furthermore, you might notice two custom decorators:

@rate_limit
@retry(max_retries=settings.quiz_max_retries)

I implemented these to reduce duplicate code. They wrap the API function to retry it in case of errors and to introduce a global rate limit on how many movie quizzes can be started per day.

What I also liked personally is the error handling with FastAPI. You can simply raise an HTTPException, give it the desired status code, and the user will then receive a proper response, for example, if no movie could be found with a given configuration:

raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail='No movie found with given criteria')

With this, you should have an overview of creating an API like the one for Gemini Movie Detectives with FastAPI. Keep in mind: all code is open source, so feel free to have a look at the API repository on GitHub.

Data validation and quality with Pydantic

One of the main challenges with today's AI/ML projects is data quality. This does not only apply to ETL/ELT pipelines, which prepare datasets to be used in model training or prediction, but also to the AI/ML application itself. Using Python, for example, usually enables Data Engineers and Scientists to get a reasonable result with little code, but being (mostly) dynamically typed, Python lacks data validation when used in a naive way.

That is why in this project, I combined FastAPI with Pydantic, a powerful data validation library for Python. The goal was to make the API lightweight but strict and strong when it comes to data quality and validation. Instead of plain dictionaries, for example, the Movie Detectives API strictly uses custom classes inherited from the BaseModel provided by Pydantic. This is the configuration for a quiz, for example:

class QuizConfig(BaseModel):
    vote_avg_min: float = Field(5.0, ge=0.0, le=9.0)
    vote_count_min: float = Field(1000.0, ge=0.0)
    popularity: int = Field(1, ge=1, le=3)
    personality: str = Personality.DEFAULT.name
    language: str = Language.DEFAULT.name

This example illustrates how not only the correct type is ensured, but also further validation is applied to the actual values. Furthermore, up-to-date Python features, like StrEnum, are used to distinguish certain types, like personalities:

class Personality(StrEnum):
    DEFAULT = 'default.jinja'
    CHRISTMAS = 'christmas.jinja'
    SCIENTIST = 'scientist.jinja'
    DAD = 'dad.jinja'

Also, duplicate code is avoided by defining custom decorators.
For example, the following decorator limits the number of quiz sessions per day, to keep control over GCP costs:

call_count = 0
last_reset_time = datetime.now()

def rate_limit(func: callable) -> callable:
    @wraps(func)
    def wrapper(*args, **kwargs) -> callable:
        global call_count
        global last_reset_time

        # reset call count if the day has changed
        if datetime.now().date() > last_reset_time.date():
            call_count = 0
            last_reset_time = datetime.now()

        if call_count >= settings.quiz_rate_limit:
            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail='Daily limit reached')

        call_count += 1
        return func(*args, **kwargs)

    return wrapper

It is then simply applied to the related API function:

@app.post('/quiz')
@rate_limit
@retry(max_retries=settings.quiz_max_retries)
def start_quiz(quiz_config: QuizConfig = QuizConfig()):

The combination of up-to-date Python features and libraries, such as FastAPI, Pydantic, and Ruff, makes the backend less verbose but still very stable, and ensures a certain level of data quality so that the LLM input, and therefore its output, has the expected quality.

TMDB client with httpx

The TMDB client class uses httpx to perform requests against the TMDB API. httpx is a rising star in the world of Python libraries. While requests has long been the go-to choice for making HTTP requests, httpx offers a valid alternative. One of its key strengths is asynchronous functionality: httpx allows you to write code that can handle multiple requests concurrently, potentially leading to significant performance improvements in applications that deal with a high volume of HTTP interactions. Additionally, httpx aims for broad compatibility with requests, making it easier for developers to pick it up.

In the case of Gemini Movie Detectives, there are two main requests:
- get_movies: get a list of random movies based on specific settings, like average number of votes
- get_movie_details: get details for a specific movie to be used in a quiz

In order to reduce the number of external requests, the latter uses the lru_cache decorator, which stands for "Least Recently Used cache". It caches the results of function calls so that if the same inputs occur again, the function doesn't have to recompute the result; instead, it returns the cached result, which can significantly improve performance, especially for functions with expensive computations. In our case, we cache the details for 1,024 movies, so if two players get the same movie, we do not need to make a request again:

@lru_cache(maxsize=1024)
def get_movie_details(self, movie_id: int):
    response = httpx.get(f'https://api.themoviedb.org/3/movie/{movie_id}', headers={
        'Authorization': f'Bearer {self.tmdb_api_key}'
    }, params={
        'language': 'en-US'
    })

    movie = response.json()
    movie['poster_url'] = self.get_poster_url(movie['poster_path'])

    return movie

Accessing data from The Movie Database (TMDB) is free for non-commercial usage; you can simply generate an API key and start making requests.

Gemini LLM client with VertexAI

Before Gemini via VertexAI can be used, you need a Google Cloud project with VertexAI enabled and a Service Account with sufficient access, together with its JSON key file.

[Image: Create GCP project (by author)]

After creating a new project, navigate to APIs & Services -> Enable APIs and services -> search for VertexAI API -> Enable.

[Image: Enable VertexAI (by author)]

To create a Service Account, navigate to IAM & Admin -> Service Accounts -> Create service account. Choose a proper name and go to the next step.
[Image: Create Service Account (by author)]

Now ensure you assign the account the predefined role Vertex AI User.

[Image: Assign correct role (by author)]

Finally, you can generate and download the JSON key file by clicking on the new user -> Keys -> Add Key -> Create new key -> JSON. With this file, you are good to go.

[Image: Create JSON key file (by author)]

Using Gemini from Google with Python via VertexAI starts by adding the necessary dependency to the project:

poetry add 'google-cloud-aiplatform>=1.38'

With that, you can import and initialize vertexai with your JSON key file. You can also load a model, like the newly released Gemini 1.5 Pro model, and start a chat session like this:

import vertexai
from google.oauth2.service_account import Credentials
from vertexai.generative_models import GenerativeModel

project_id = "my-project-id"
location = "us-central1"
credentials = Credentials.from_service_account_file("credentials.json")
model = "gemini-1.0-pro"

vertexai.init(project=project_id, location=location, credentials=credentials)
model = GenerativeModel(model)
chat_session = model.start_chat()

You can now use chat.send_message() to send a prompt to the model. However, since you get the response in chunks of data, I recommend using a little helper function, so that you simply get the full response as one string:

def get_chat_response(chat: ChatSession, prompt: str) -> str:
    text_response = []
    responses = chat.send_message(prompt, stream=True)
    for chunk in responses:
        text_response.append(chunk.text)
    return ''.join(text_response)

A full example can then look like this:

import vertexai
from google.oauth2.service_account import Credentials
from vertexai.generative_models import GenerativeModel, ChatSession

project_id = "my-project-id"
location = "us-central1"
credentials = Credentials.from_service_account_file("credentials.json")
model = "gemini-1.0-pro"

vertexai.init(project=project_id, location=location, credentials=credentials)
model = GenerativeModel(model)
chat_session = model.start_chat()

def get_chat_response(chat: ChatSession, prompt: str) -> str:
    text_response = []
    responses = chat.send_message(prompt, stream=True)
    for chunk in responses:
        text_response.append(chunk.text)
    return ''.join(text_response)

response = get_chat_response(
    chat_session,
    "How to say 'you are awesome' in Spanish?"
)
print(response)

Running this, Gemini gave me the following response:

[Image: You are awesome (by author)]

I agree with Gemini: Eres increíble.

Another hint when using this: you can also configure the model generation by passing a configuration to the generation_config parameter as part of the send_message function. For example:

generation_config = {
    'temperature': 0.5
}

responses = chat.send_message(
    prompt,
    generation_config=generation_config,
    stream=True
)

I am using this in Gemini Movie Detectives to set the temperature to 0.5, which gave me the best results. In this context, temperature means: how creative the responses generated by Gemini are. The value must be between 0.0 and 1.0, where closer to 1.0 means more creativity.

One of the main challenges, apart from sending a prompt and receiving the reply from Gemini, is to parse the reply in order to extract the relevant information. One learning from the project is: specify a format for Gemini that does not rely on exact words but uses key symbols to separate information elements.

For example, the question prompt for Gemini contains this instruction:

Your reply must only consist of three lines!
You must only reply strictly using the following template for the three lines:
Question: <Your question>
Hint 1: <The first hint to help the participants>
Hint 2: <The second hint to get the title more easily>

The naive approach would be to parse the answer by looking for a line that starts with Question:. However, if we use another language, like German, the reply would look like: Antwort:. Instead, focus on the structure and key symbols. Read the reply like this:
- It has three lines
- The first line is the question
- The second line is the first hint
- The third line is the second hint
- Key and value are separated by :

With this approach, the reply can be parsed language-agnostically, and this is my implementation in the actual client:

@staticmethod
def parse_gemini_question(gemini_reply: str) -> GeminiQuestion:
    result = re.findall(r'[^:]+: ([^\n]+)', gemini_reply, re.MULTILINE)
    if len(result) != 3:
        msg = f'Gemini replied with an unexpected format. Gemini reply: {gemini_reply}'
        logger.warning(msg)
        raise ValueError(msg)

    question = result[0]
    hint1 = result[1]
    hint2 = result[2]

    return GeminiQuestion(question=question, hint1=hint1, hint2=hint2)

In the future, parsing responses will become even easier. During the Google Cloud Next '24 conference, Google announced that Gemini 1.5 Pro is now publicly available, and with that, they also announced some features including a JSON mode to get responses in JSON format. Check out this article for more details. Apart from that, I wrapped the Gemini client in a configurable class; you can find the full implementation open source on GitHub.

Modular prompt generator with Jinja

The prompt generator is a class which combines and renders Jinja2 template files to create a modular prompt. There are two base templates: one for generating the question and one for evaluating the answer. Apart from that, there is a metadata template to enrich the prompt with up-to-date movie data. Furthermore, there are language and personality templates, organized in separate folders with a template file for each option.

[Image: Prompt Generator (by author)]

Using Jinja2 allows for advanced features like template inheritance, which is used for the metadata. This makes it easy to extend this component, not only with more options for personalities and languages, but also to extract it into its own open-source project to make it available for other Gemini projects. A rough sketch of this template combination follows below.
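The project's actual templates live in the repository; as a rough illustration of the approach, here is a minimal, hypothetical sketch of combining a personality module with a base template using Jinja2. The template strings and variable names are invented for this example.

from jinja2 import Environment

env = Environment()

# hypothetical personality module, selected at runtime by the quiz config
personality = "Reply in the style of a cheerful quiz show host who loves dad jokes."

# hypothetical base template: personality and movie metadata are injected as variables
base = env.from_string(
    "{{ personality }}\n"
    "Create a quiz question about the movie '{{ title }}' ({{ release_date }}).\n"
    "Overview: {{ overview }}"
)

prompt = base.render(
    personality=personality,
    title="Inception",
    release_date="2010-07-16",
    overview="A thief enters people's dreams to steal their secrets.",
)
print(prompt)

Swapping the personality string (or loading a different template file per language) changes the whole game experience without touching any backend logic, which is the point of the modular design.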
Frontend

The Gemini Movie Detectives frontend is split into four main components and uses vue-router to navigate between them.

The Home component simply displays the welcome message.

The Quiz component displays the quiz itself and talks to the API via fetch. To create a quiz, it sends a POST request to api/quiz with the desired settings. The backend then selects a random movie based on the user settings, creates the prompt with the modular prompt generator, uses Gemini to generate the question and hints, and finally returns everything back to the component so that the quiz can be rendered. Additionally, each quiz gets a session ID assigned in the backend and is stored in a limited LRU cache.

A third component exists for debugging purposes and fetches data from the api/sessions endpoint, which returns all active sessions from the cache.

The fourth component displays statistics about the service. However, so far there is only one category of data displayed, which is the quiz limit. To limit the costs for VertexAI and GCP usage in general, there is a daily limit of quiz sessions, which resets with the first quiz of the next day. Data is retrieved from the api/limit endpoint.

[Image: Vue components (by author)]

API examples

Of course, using the frontend is a nice way to interact with the application, but it is also possible to just use the API. The following example shows how to start a quiz via the API using the Santa Claus / Christmas personality:

curl -s -X POST https://movie-detectives.com/api/quiz \
  -H 'Content-Type: application/json' \
  -d '{"vote_avg_min": 5.0, "vote_count_min": 1000.0, "popularity": 3, "personality": "christmas"}' | jq .

{
  "quiz_id": "e1d298c3-fcb0-4ebe-8836-a22a51f87dc6",
  "question": {
    "question": "Ho ho ho, this movie takes place in a world of dreams, just like the dreams children have on Christmas Eve after seeing Santa Claus! It's about a team who enters people's dreams to steal their secrets. Can you guess the movie? Merry Christmas!",
    "hint1": "The main character is like a skilled elf, sneaking into people's minds instead of houses. ",
    "hint2": "I_c_p_i_n "
  },
  "movie": {...}
}

[Image: Movie Detectives - Example: Santa Claus personality (by author)]

This example shows how to change the language for a quiz:

curl -s -X POST https://movie-detectives.com/api/quiz \
  -H 'Content-Type: application/json' \
  -d '{"vote_avg_min": 5.0, "vote_count_min": 1000.0, "popularity": 3, "language": "german"}' | jq .

{
  "quiz_id": "7f5f8cf5-4ded-42d3-a6f0-976e4f096c0e",
  "question": {
    "question": "Stellt euch vor, es gäbe riesige Monster, die auf der Erde herumtrampeln, als wäre es ein Spielplatz! Einer ist ein echtes Urviech, eine Art wandelnde Riesenechse mit einem Atem, der so heiß ist, dass er euer Toastbrot in Sekundenschnelle rösten könnte. Der andere ist ein gigantischer Affe, der so stark ist, dass er Bäume ausreißt wie Gänseblümchen. Und jetzt ratet mal, was passiert? Die beiden geraten aneinander, wie zwei Kinder, die sich um das letzte Stück Kuchen streiten! Wer wird wohl gewinnen, die Riesenechse oder der Superaffe? Das ist die Frage, die sich die ganze Welt stellt! ",
    "hint1": "Der Film spielt in einer Zeit, in der Monster auf der Erde wandeln.",
    "hint2": "G_dz_ll_ vs. K_ng "
  },
  "movie": {...}
}

And this is how to answer a quiz via an API call:

curl -s -X POST https://movie-detectives.com/api/quiz/84c19425-c179-4198-9773-a8a1b71c9605/answer \
  -H 'Content-Type: application/json' \
  -d '{"answer": "Greenland"}' | jq .

{
  "quiz_id": "84c19425-c179-4198-9773-a8a1b71c9605",
  "question": {...},
  "movie": {...},
  "user_answer": "Greenland",
  "result": {
    "points": "3",
    "answer": "Congratulations! You got it! Greenland is the movie we were looking for. You're like a human GPS, always finding the right way!"
  }
}

Conclusion

After I finished the basic project, adding more personalities and languages was so easy with the modular prompt approach that I was impressed by the possibilities this opens up for game design and development. I could change this game from a purely educational game about movies into a comedy trivia "You Don't Know Jack"-like game within a minute by adding another personality. Also, combining up-to-date Python functionality with validation libraries like Pydantic is very powerful and can be used to ensure good data quality for LLM input.

And there you have it, folks! You're now equipped to craft your own LLM-powered web application. Feeling inspired but need a starting point?
Check out the open-source code for the Gemini Movie Detectives project:
- GitHub repository for the backend: https://github.com/vojay-dev/gemini-movie-detectives-api
- GitHub repository for the frontend: https://github.com/vojay-dev/gemini-movie-detectives-ui

The future of AI-powered applications is bright, and you're holding the paintbrush! Let's go make something remarkable. And if you need a break, feel free to try https://movie-detectives.com/.

Create an AI-Driven Movie Quiz with Gemini LLM, Python, FastAPI, Pydantic, RAG and more was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story. View the full article
  9. Google appears to be working on adding quick access to its AI chatbot Gemini to the drop-down menu that appears in your address bar. If you want to talk to the bot quickly, you'll be able to type in '@gemini' and get instant access to it. According to Windows Report, Chrome is due to get a 'chat with Gemini' shortcut in the address bar, so you don't have to go to the official Gemini website. The feature isn't widespread yet, but with the 'chat with Gemini' shortcut, you should be able to give it a try. It's not surprising to see Google forging ahead with its newest AI assistant in Chrome (and on Chromebooks) - in fact, Gemini was technically implemented into the browser back in February, when Google introduced an AI-powered 'help me write' feature. Microsoft has been pushing its own AI helper Copilot aggressively across everything from the Edge browser to Windows tablets. Google has demonstrated a more reserved approach to AI tools compared to Microsoft's bull-in-a-china-shop efforts, but it makes sense that the search engine giant wants to keep pace with the competition.

How to use Gemini in Chrome right now

You can try typing out '@gemini' and see if anything comes up, but as of right now it's not functional - you can't click on it or select it. It's still part of a Chrome Canary patch; Canary is Google's channel for testing out potential new features that enthusiasts and developers can try out and give feedback on. If you'd like to try it out, you can launch and set up Chrome Canary, and once the test browser is installed you can enter chrome://flags in the address bar and hit enter. This should take you to the 'Expansion pack page for the site', where you'll be able to enable the starter pack. Restart the browser and you're done! You should then be able to chat with Gemini from the address bar. Of course, as this is still in testing, there's no guarantee that it'll work flawlessly, and we can't be sure just yet that the feature will make it to the public version of the browser. However, if it does, it'll be good news for Gemini fans or anyone who wants to get more familiar with the ChatGPT alternative.

You might also like...
Elon Musk has a plan to curb the influx of bots on X - but I fear it could end the social media platform for good
Windows 11 Moment 5 update reportedly causes a 'white screen of doom' along with installation failures
Google's next foldable could be the Pixel 9 Pro Fold and finally get flagship specs

View the full article
  10. Google Cloud Next made a big splash in Las Vegas this week! From our opening keynote showcasing incredible customer momentum to exciting product announcements, we covered how AI is transforming the way that companies work. You can catch up on the highlights in our 14-minute keynote recap! Developers were front and center at our Developer keynote and in our buzzing Innovators Hive on the Expo floor (which was triple the size this year!). Our nearly 400 partner sponsors were also deeply integrated throughout Next, bringing energy from the show floor to sessions and evening events throughout the week. Last year, we talked about the exciting possibilities of generative AI, and this year it was great to showcase how customers are now using it to transform the way they work. At Next '24, we featured 300+ customer and partner AI stories, 500+ breakout sessions, hands-on demos, interactive training sessions, and so much more. It was a jam-packed week, so we've put together a summary of our announcements which highlights how we're delivering the new way to cloud. Read on for a complete list of the 218 (yes, you read that right) announcements from Next '24:

Gemini for Google Cloud

We shared how Google's Gemini family of models will help teams accomplish more in the cloud, including:

1. Gemini for Google Cloud, a new generation of AI assistants for developers, Google Cloud services, and applications.
2. Gemini Code Assist, the evolution of Duet AI for Developers.
3. Gemini Cloud Assist, which helps cloud teams design, operate, and optimize their application lifecycle.
4. Gemini in Security Operations, generally available at the end of this month, converts natural language to new detections, summarizes event data, recommends actions to take, and navigates users through the platform via conversational chat.
5. Gemini in BigQuery, in preview, enables data analysts to be more productive, improve query performance, and optimize costs throughout the analytics lifecycle.
6. Gemini in Looker, in private preview, provides a dedicated space in Looker to initiate a chat on any topic with your data and derive insights quickly.
7. Gemini in Databases, also in preview, helps developers, operators, and database administrators build applications faster using natural language; manage, optimize, and govern an entire fleet of databases from a single pane of glass; and accelerate database migrations.

Customer Stories

We shared new customer announcements, including:

8. Cintas is leveraging Google Cloud's gen AI to develop an internal knowledge center that will allow its customer service and sales employees to easily find key information.
9. Bayer will build a radiology platform that will help Bayer and other companies create and deploy AI-first healthcare apps that assist radiologists, ultimately improving efficiency and diagnosis turnaround time.
10. Best Buy is leveraging Google Cloud's Gemini large language model to create new and more convenient ways to give customers the solutions they need, starting with gen AI virtual assistants that can troubleshoot product issues, reschedule order deliveries, and more.
11. Citadel Securities used Google Cloud to build the next generation of its quantitative research platform, increasing its research productivity and price-performance ratio.
Discover Financial is transforming customer experience by bringing gen AI to its customer contact centers to improve agent productivity through personalized resolutions, intelligent document summarization, real-time search assistants, and enhanced self-service options. 13. IHG Hotels & Resorts is using Gemini to build a generative AI-powered chatbot to help guests easily plan their next vacation directly in the IHG Hotels & Rewards mobile app. 14. Mercedes-Benz will expand its collaboration with Google Cloud, using our AI and gen AI technologies to advance customer-facing use cases across e-commerce, customer service, and marketing. 15. Orange is expanding its partnership with Google Cloud to deploy generative AI closer to Orange’s and its customers’ operations to help meet local requirements for trusted cloud environments and accelerate gen AI adoption and benefits across autonomous networks, workforce productivity, and customer experience. 16. WPP will leverage Google Cloud’s gen AI capabilities to deliver personalization, creativity, and efficiency across the business. Following the adoption of Gemini, WPP is already seeing internal impacts, including real-time campaign performance analysis, streamlined content creation processes, AI narration, and more. 17. Covered California, California’s health insurance marketplace, will simplify the healthcare enrollment process using Google Cloud’s Document AI, enabling the organization to verify more than 50,000 healthcare documents per month with an 84% verification rate. Workspace and collaboration The next wave of innovations and enhancements is coming to Google Workspace: 18. Google Vids, a key part of our Google Workspace innovations, is a new AI-powered video creation app for work that sits alongside Docs, Sheets and Slides. Vids will be released to Workspace Labs in June. 19. Gemini is coming to Google Chat in preview, giving you an AI-powered teammate to summarize conversations, answer questions, and more. 20. The new AI Meetings and Messaging add-on is priced at $10 per user, per month, and includes: ‘Take notes for me’, now in preview; ‘Translate for me’, coming in June, which automatically detects and translates captions in Meet, with support for 69 languages; and automatic translation of messages and on-demand conversation summaries in Google Chat, coming later this year. 21. Using large language models, Gmail can now block 20% more spam and evaluate 1,000 times more user-reported spam every day. 22. A new AI Security add-on allows IT teams to automatically classify and protect sensitive files in Google Drive, and is available for $10 per user, per month. 23. We’re extending DLP controls and classification labels to Gmail in beta. 24. We’re adding experimental support for post-quantum cryptography (PQC) in client-side encryption with our partners Thales and Fortanix. 25. Voice prompting and instant polish in Gmail: Send emails easily when you’re on the go with voice input in Help me write, and convert rough notes to a complete email with one click. 26. A new tables feature in Sheets (generally available in the coming weeks) formats and organizes data with a sleek design and a new set of building blocks — from project management to event planning templates with automatic alerts based on custom triggers like a change in a status field. 27. Tabs in Docs (generally available in the coming weeks) allow you to organize information in a single document rather than linking to multiple documents or searching through Drive. 28. 
Docs now supports full-bleed cover images that extend from one edge of your browser to the other; generally available in the coming weeks. 29. Generally available in the coming weeks, Chat will support increased member capacity of up to 500,000 in spaces. 30. Messaging interoperability for Slack and Teams is now generally available through our partner Mio. AI infrastructure 31. Cloud TPU v5p is now generally available. 32. Google Kubernetes Engine (GKE) now supports Cloud TPU v5p and TPU multi-host serving, also generally available. 33. The A3 Mega compute instance, powered by NVIDIA H100 GPUs, offers double the GPU-to-GPU networking bandwidth of A3, and will be generally available in May. 34. Confidential Computing is coming to the A3 VM family, in preview later this year. 35. The NVIDIA Blackwell GPU platform will be available on the AI Hypercomputer architecture in two configurations: NVIDIA HGX B200 for the most demanding AI, data analytics, and HPC workloads; and the liquid-cooled GB200 NVL72 GPU for real-time LLM inference and training massive-scale models. 36. New caching capabilities for Cloud Storage FUSE improve training throughput and serving performance, and are generally available. 37. The Parallelstore high-performance parallel filesystem now includes caching in preview. 38. Hyperdisk ML, in preview, is a next-generation block storage service optimized for AI inference/serving workloads. 39. MaxDiffusion is a new open-source, high-performance and scalable reference implementation for diffusion models. 40. MaxText, a JAX-based LLM, now supports new models including Gemma, GPT-3, Llama 2 and Mistral across both Cloud TPUs and NVIDIA GPUs. 41. PyTorch/XLA 2.3 will follow the upstream release later this month, bringing single program, multiple data (SPMD) auto-sharding, and asynchronous distributed checkpointing features. 42. For Hugging Face PyTorch users, the Hugging Face Optimum-TPU package lets you train and serve Hugging Face models on TPUs. 43. JetStream is a new open-source, throughput- and memory-optimized LLM inference engine for XLA devices (starting with TPUs); it supports models trained with both JAX and PyTorch/XLA, with optimizations for popular open models such as Llama 2 and Gemma. 44. Google models will be available as NVIDIA NIM inference microservices. 45. Dynamic Workload Scheduler now offers two modes: flex start mode (in preview), and calendar mode (in preview). 46. We shared the latest performance results from MLPerf™ Inference v4.0 using A3 virtual machines (VMs) powered by NVIDIA H100 GPUs. 47. We shared performance benchmarks for Gemma models using Cloud TPU v5e and JetStream. 48. We introduced ML Productivity Goodput, a new metric to measure the efficiency of an overall ML system, as well as an API to integrate into your projects, and methods to maximize ML Productivity Goodput. Vertex AI 49. Gemini 1.5 Pro is now available in public preview in Vertex AI, bringing the world’s largest context window to developers everywhere (a usage sketch appears at the end of this list). 50. Gemini 1.5 Pro on Vertex AI can now process audio streams including speech, and the audio portion of videos. 51. Imagen 2.0, our family of image generation models, can now be used to create short, 4-second live images from text prompts. 52. Image editing is generally available in Imagen 2.0, including inpainting/outpainting and digital watermarking powered by Google DeepMind’s SynthID. 53. We added CodeGemma, a new model from our Gemma family of lightweight models, to Vertex AI. 54. 
Vertex AI has expanded grounding capabilities, including the ability to directly ground responses with Google Search, now in public preview. 55. Vertex AI Prompt Management, in preview, helps teams improve prompt performance. 56. Vertex AI Rapid Evaluation, in preview, helps users evaluate model performance when iterating on the best prompt design. 57. Vertex AI AutoSxS is now generally available, and helps teams compare the performance of two models. 58. We expanded data residency guarantees for data stored at-rest for Gemini, Imagen, and Embeddings APIs on Vertex AI to 11 new countries: Australia, Brazil, Finland, Hong Kong, India, Israel, Italy, Poland, Spain, Switzerland, and Taiwan. 59. When using Gemini 1.0 Pro and Imagen, you can now limit machine-learning processing to the United States or European Union. 60. Vertex AI hybrid search, in preview, integrates vector-based and keyword-based search techniques to ensure relevant and accurate responses for users. 61. The new Vertex AI Agent Builder, in preview, lets developers build and deploy gen AI experiences using natural language or open-source frameworks like LangChain on Vertex AI. 62. Vertex AI includes two new text embedding models in public preview: the English-only text-embedding-preview-0409, and the multilingual text-multilingual-embedding-preview-0409. Core infrastructure (Image: Thomas with the Google Axion chip) 63. We expanded Google Cloud’s compute portfolio, with major product releases spanning compute and storage for general-purpose workloads, as well as for more specialized workloads like SAP and high-performance databases. 64. Google Axion is our first custom Arm-based CPU designed for the data center, and will be in preview in the coming months. 65. Now in preview, the Compute Engine C4 general-purpose VM provides high performance paired with a controlled maintenance experience for your mission-critical workloads. 66. The general-purpose N4 machine series is built for price-performance with Dynamic Resource Management, and is generally available. 67. C3 bare-metal machines, available in an upcoming preview, provide workloads with direct access to the underlying server’s CPU and memory resources. 68. New X4 memory-optimized instances are now in preview; you can request access through this interest form. 69. Z3 VMs are designed for storage-dense workloads that require SSD, and are generally available. 70. Hyperdisk Storage Pools Advanced Capacity (generally available) and Advanced Performance (in preview) allow you to purchase and manage block storage capacity in a pool that’s shared across workloads. 71. Coming to general availability in May, Hyperdisk Instant Snapshots provide near-zero RPO/RTO for Hyperdisk volumes. 72. Google Compute Engine users can now use zonal flexibility, VM family flexibility, and mixed on-demand and spot consumption to deploy their VMs. As part of our Google Distributed Cloud (GDC) offering, we announced: 73. A generative AI search packaged solution powered by Gemma open models will be available in preview in Q2 2024 on GDC to help customers retrieve and analyze data at the edge or on-premises. 74. GDC has achieved ISO27001 and SOC2 compliance certifications. 75. A new managed Intrusion Detection and Prevention Solution (IDPS) integrates Palo Alto Networks threat prevention technology with GDC, and is now generally available. 76. GDC Sandbox, in preview, helps application developers build and test services designed for GDC in a Google Cloud environment, without needing to navigate the air-gap and physical hardware. 77. 
A preview GDC storage flexibility feature can help you grow your storage independent of compute, with support for block, file, or object storage. 78. GDC can now run in disconnected mode for up to seven days, and offers a suite of offline management features to help ensure deployments and workloads are accessible and working while they are disconnected; this capability is generally available. 79. New Managed GDC Providers who can sell GDC as a managed service include Clarence, T-Systems, and WWT, and a new Google Cloud Ready — Distributed Cloud badge signals that a solution has been tuned for GDC. 80. GDC servers are now available with an energy-efficient NVIDIA L4 Tensor Core GPU. 81. Google Distributed Cloud Hosted (GDC Hosted) is now authorized to host Top Secret and Secret missions for the U.S. Intelligence Community, and Top Secret missions for the Department of Defense (DoD). From our Google Cloud Networking family, we announced: 82. Gemini Cloud Assist, in preview, provides AI-based assistance to solve a variety of networking tasks such as generating configurations, recommending capacity, correlating changes with issues, identifying vulnerabilities, and optimizing performance. 83. Now generally available, the Model as a Service Endpoint solution uses Private Service Connect, Cloud Load Balancing, and App Hub to let model creators own the model service endpoint to which application developers then connect. 84. Later this year, Cloud Load Balancing will add enhancements for inference workloads: Cloud Load Balancing with custom metrics, Cloud Load Balancing for streaming inference, and Cloud Load Balancing with traffic management for AI models. 85. Cloud Service Mesh is a fully managed service mesh that combines Traffic Director’s control plane and Google’s open-source Istio-based service mesh, Anthos Service Mesh. A service-centric Cross-Cloud Network delivers a consistent, secure experience from any cloud to any service, and includes the following enhancements: 86. Private Service Connect transitivity over Network Connectivity Center, available in preview this quarter, enables services in a spoke VPC to be transitively accessible from other spoke VPCs. 87. Cloud NGFW Enterprise (formerly Cloud Firewall Plus), now GA, provides network threat protection powered by Palo Alto Networks, plus network security posture controls for org-wide perimeter and Zero Trust microsegmentation. 88. Identity-based authorization with mTLS integrates the Identity-Aware Proxy with our internal application Load Balancer to support Zero Trust network access, including client-side and, soon, back-end mutual TLS. 89. In-line network data-loss prevention (DLP), in preview soon, integrates Symantec DLP into Cloud Load Balancers and Secure Web Proxy using Service Extensions. 90. Partners Imperva, HUMAN Security, Palo Alto Networks and Traceable are integrating their advanced web protection services into Service Extensions, as are web services providers Cloudinary, Nagra, Queue-it, and Datadog. 91. Service Extensions now has a library of code examples to customize origin selection, adjust headers, and more. 92. Private Service Connect is now fully integrated with Cloud SQL, and generally available. There are many improvements to our storage offerings: 93. Generate insights with Gemini lets you use natural language to analyze your storage footprint, optimize costs, and enhance security across billions of objects. It is available now through the Google Cloud console as an allowlist experimental release. 94. 
Google Cloud NetApp Volumes is expanding to 15 new Google Cloud regions in Q2’24 (GA) and includes a number of enhancements: dynamically migrating files by policy to lower-cost storage based on access frequency (in preview Q2’24); increasing Premium and Extreme service levels up to 1PB in size, with throughput performance up to 3X (preview Q2’24). NetApp Volumes also includes a new Flex service level enabling volumes as small as 1GiB. 95. Filestore now supports single-share backup for Filestore Persistent Volumes and GKE (generally available) and NFS v4.1 (preview), plus expanded Filestore Enterprise capacity up to 100TiB. For Cloud Storage: 96. Cloud Storage Anywhere Cache now uses zonal SSD read cache across multiple regions within a continent (allowlist GA). 97. Cloud Storage soft delete protects against accidental or malicious deletion of data by preserving deleted items for a configurable period of time (generally available). 98. The new Cloud Storage managed folders resource type allows granular IAM permissions to be applied to groups of objects (generally available). 99. Tag-based at-scale backup helps manage data protection for Compute Engine VMs (generally available). 100. The new high-performance backup option for SAP HANA leverages persistent disk (PD) snapshot capabilities for database-aware backups (generally available). 101. As part of Backup and DR Service Report Manager, you can now customize reports with data from Google Cloud Backup and DR using Cloud Monitoring, Cloud Logging, and BigQuery (generally available). Databases 102. Database Studio, a part of Gemini in Databases, brings SQL generation and summarization capabilities to our rich SQL editor in the Google Cloud console, as well as an AI-driven chat interface. 103. Database Center lets operators manage an entire fleet of databases through intelligent dashboards that proactively assess availability, data protection, security, and compliance issues, as well as with smart recommendations to optimize performance and troubleshoot issues. 104. Database Migration Service is also integrated with Gemini in Databases, including assistive code conversion (e.g., from Oracle to PostgreSQL) and explainability features. Likewise, AlloyDB gains a lot of new functionality: 105. AlloyDB AI lets gen AI developers build applications that accurately query data with natural language, just like they do with SQL; available now in AlloyDB Omni. 106. AlloyDB AI now includes a new pgvector-compatible index based on Google’s approximate nearest neighbor algorithms, or ScaNN; it’s available as a technology preview in AlloyDB Omni. 107. AlloyDB model endpoint management makes it easier to call remote Vertex AI, third-party, and custom models; available in AlloyDB Omni today and soon on AlloyDB in Google Cloud. 108. AlloyDB AI “parameterized secure views” secure data based on end-users’ context; available now in AlloyDB Omni. Bigtable, which turns 20 this year, got several new features: 109. Bigtable Data Boost, a pre-GA offering, delivers high-performance, workload-isolated, on-demand processing of transactional data, without disrupting operational workloads. 110. Bigtable authorized views, now generally available, allow multiple teams to leverage the same tables and securely share data directly from the database. 111. New Bigtable distributed counters in preview process high-frequency event data like clickstreams directly in the database. 112. 
Bigtable large nodes, the first of several workload-optimized node shapes, offer more performance stability at higher server utilization rates, and are in private preview. Memorystore for Redis Cluster, meanwhile: 113. Now supports both AOF (Append Only File) and RDB (Redis Database)-based persistence and has new node shapes that offer better performance and cost management. 114. Offers ultra-fast vector search, now generally available. 115. Includes new configuration options to tune max clients, max memory, max memory policies, and more, now in preview. Firestore users, take note: 116. Gemini Code Assist now incorporates assistive capabilities for developing with Firestore. 117. Firestore now has built-in support for vector search using exact nearest neighbors, the ability to automatically generate vector embeddings using popular embedding models via a turn-key extension, and integrations with popular generative AI libraries such as LangChain and LlamaIndex. 118. Firestore Query Explain in preview can help you troubleshoot your queries. 119. Firestore now supports Customer Managed Encryption Keys (CMEK) in preview, which allows you to encrypt data stored at-rest using your own specified encryption key. 120. You can now deploy Firestore in any available supported Google Cloud region, and Firestore’s Scheduled Backup feature can now retain backups for up to 98 days, up from seven days. 121. Cloud SQL Enterprise Plus edition now offers advanced failover capabilities such as orchestrated switchover and switchback. Data analytics 122. BigQuery is now Google Cloud’s single integrated platform for data-to-AI workloads, with BigLake, BigQuery’s unified storage engine, providing a single interface across BigQuery native and open formats for analytics and AI workloads. 123. BigQuery now offers improved Iceberg support, with DDL, DML, and high-throughput capabilities in preview, while BigLake now supports the Delta file format, also in preview. 124. BigQuery continuous queries are in preview, providing continuous SQL processing over data streams, enabling real-time pipelines with AI operators or reverse ETL. The above-mentioned Gemini in BigQuery enables all manner of new capabilities and offerings: 125. New BigQuery integrations with Gemini models in Vertex AI support multimodal analytics and vector embeddings, and fine-tuning of LLMs. 126. BigQuery Studio provides a collaborative data workspace, the choice of SQL, Python, Spark or natural language directly, and new integrations for real-time streaming and governance; it is now generally available. 127. The new BigQuery data canvas provides a notebook-like experience with embedded visualizations and natural language support courtesy of Gemini. 128. BigQuery can now connect models in Vertex AI with enterprise data, without having to copy or move data out of BigQuery. 129. You can now use BigQuery with Gemini 1.0 Pro Vision to analyze both images and videos by combining them with your own text prompts using familiar SQL statements (see the sketch after this section). 130. Column-level lineage in BigQuery and expanded lineage capabilities for Vertex AI pipelines will be in preview soon. Other updates to our data analytics portfolio include: 131. Apache Kafka for BigQuery as a managed service is in preview, to enable streaming data workloads based on open source APIs. 132. A serverless engine for Apache Spark integrated within BigQuery Studio is now in preview. 133. Dataplex features expanded data-to-AI governance capabilities in preview. 
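To make the BigQuery and Vertex AI integration above more concrete, here is a minimal sketch of calling a Gemini-backed remote model from SQL through the google-cloud-bigquery Python client. It assumes a remote model has already been created over a Vertex AI connection; the project, dataset, and model names are placeholders.

    # A minimal sketch: invoking a Vertex AI-hosted model from BigQuery SQL
    # via ML.GENERATE_TEXT. The remote model `my_dataset.gemini_model` and
    # the project ID are hypothetical placeholders.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    sql = """
    SELECT ml_generate_text_result
    FROM ML.GENERATE_TEXT(
      MODEL `my-project.my_dataset.gemini_model`,
      (SELECT 'Summarize this review: great product, fast shipping.' AS prompt),
      STRUCT(0.2 AS temperature, 128 AS max_output_tokens)
    )
    """

    # Each row carries the model's JSON response for one input prompt.
    for row in client.query(sql).result():
        print(row.ml_generate_text_result)
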
Developers & operators Gemini Code Assist includes several new enhancements: 134. Full codebase awareness, in preview, uses Gemini 1.5 Pro to make complex changes, add new features, and streamline updates to your codebase. 135. A new code transformation feature available today in Cloud Workstations and Cloud Shell Editor lets you use natural language prompts to tell Gemini Code Assist to analyze, refactor, and optimize your code. 136. Gemini Code Assist now has extended local context, automatically retrieving relevant local files from your IDE workspace and displaying references to the files used. 137. With code customization in private preview, Gemini Code Assist lets you integrate private codebases and repositories for hyper-personalized code generation and completions, and connects to GitLab, GitHub, and Bitbucket source-code repositories. 138. Gemini Code Assist extends to Apigee and Application Integration in preview, to access and connect your applications. 139. We extended our partnership with Snyk to Gemini Code Assist, letting you learn about vulnerabilities and common security topics right within your IDE. 140. The new App Hub provides an accurate, up-to-date representation of deployed applications and their resource dependencies. Integrated with Gemini Cloud Assist, App Hub is generally available. Users of our Cloud Run and Google Kubernetes Engine (GKE) runtime environments can look forward to a variety of features: 141. Cloud Run application canvas lets developers generate, modify and deploy Cloud Run applications with integrations to Vertex AI, Firestore, Memorystore, and Cloud SQL, as well as load balancing and Gemini Cloud Assist. 142. GKE now supports container and model preloading to accelerate workload cold starts. 143. GPU sharing with NVIDIA Multi-Process Service (MPS) is now offered in GKE, enabling concurrent processing on a single GPU. 144. GKE now supports Cloud Storage FUSE read caching, generally available, using a local directory as a cache to accelerate repeat reads for small and random I/Os. 145. GKE Autopilot mode now supports NVIDIA H100 GPUs, TPUs, reservations, and Compute Engine committed use discounts (CUDs). 146. Gemini Cloud Assist in GKE is available to help with optimizing costs, troubleshooting, and synthetic monitoring. Cloud Billing tools help you track and understand Google Cloud spending, pay your bill, and optimize your costs; here are a few new features: 147. Support for Cloud Storage costs at the bucket level and storage tags is included out of the box with Cloud Billing detailed data exports to BigQuery. 148. A new BigQuery data view for FOCUS allows users to compare costs and usage across clouds. 149. You can now convert cost management reports into BigQuery billing queries right from the Cloud Billing console. 150. A new Cloud FinOps Anomaly Detection feature is in private preview. 151. FinOps hub is now generally available, adds support to view top savings opportunities, and a preview of our FinOps hub dashboard lets you analyze costs by project, region, or machine type. 152. A new CUD Analysis solution is available across Google Compute Engine resource families including TPU v5e, TPU v5p, A3, H3, and C3D. 153. There are new spend-based CUDs available for Memorystore, AlloyDB, Bigtable, and Dataflow. Security Building on natural language search and case summaries in Chronicle, Gemini in Security Operations is coming to the entire investigation lifecycle, including: 154. 
A new assisted investigation feature, generally available at the end of this month, that guides analysts through their workflow in Chronicle Enterprise and Chronicle Enterprise Plus. 155. The ability to ask Gemini for the latest threat intelligence from Mandiant directly in-line — including any indicators of compromise found in their environment. 156. Gemini in Threat Intelligence, in public preview, allows you to tap into Mandiant’s frontline threat intelligence using conversational search. 157. VirusTotal now automatically ingests OSINT reports, which Gemini summarizes directly in the platform; generally available now. 158. Gemini in Security Command Center now lets security teams search for threats and other security events using natural language, in preview; it also provides summaries of critical- and high-priority misconfiguration and vulnerability alerts, and summarizes attack paths. 159. Gemini Cloud Assist also helps with security tasks, via: IAM Recommendations, which can provide straightforward, contextual recommendations to remove roles from over-permissioned users or service accounts; Key Insights, which help during encryption key creation based on its understanding of your data, your encryption preferences, and your compliance needs; and Confidential Computing Insights, which recommends options for adding confidential computing protection to sensitive workloads based on your data and your compute usage. Other security news includes: 160. The new Chrome Enterprise Premium, now generally available, combines the popular browser with Google threat and data protection, Zero Trust access controls, enterprise policy controls, and security insights and reporting. 161. Applied threat intelligence in Google Security Operations, now generally available, automatically applies global threat visibility to each customer’s unique environment. 162. Security Command Center Enterprise is now generally available and includes Mandiant Hunt, now in preview. 163. Identity and Access Management Privileged Access Manager (PAM), now available in preview, provides just-in-time, time-bound, and approval-based access elevations. 164. Identity and Access Management Principal Access Boundary (PAB) is a new, identity-centered control now in preview that enforces restrictions on IAM principals. 165. Cloud Next-Gen Firewall (NGFW) Enterprise is now generally available, including threat protection from Palo Alto Networks. 166. Cloud Armor Enterprise is now generally available and offers a pay-as-you-go model that includes advanced network DDoS protection, web application firewall capabilities, network edge policy, adaptive protection, and threat intelligence. 167. Sensitive Data Protection integration with Cloud SQL is now generally available, and is deeply integrated into the Security Command Center Enterprise risk engine. 168. Key management with Autokey is now in preview, simplifying the creation and management of customer-managed encryption keys (CMEK). 169. Bare metal HSM deployments in PCI-compliant facilities are now available in more regions. 170. Regional Controls for Assured Workloads is now in preview and is available in 32 cloud regions in 14 countries. 171. Audit Manager automates control verification with proof of compliance for workloads and data on Google Cloud, and is in preview. 172. Advanced API Security, part of Apigee API Management, now offers shadow API detection in preview. As part of our Confidential Computing portfolio, we announced: 173. 
Confidential VMs on Intel TDX are now in preview, available on the C3 machine series. For AI and ML workloads, we support Intel AMX, which provides CPU-based acceleration by default on C3 series Confidential VMs. 174. Confidential VMs on general-purpose N2D machine series with AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) are now in preview. 175. Live Migration on Confidential VMs is now in general availability on N2D machine series across all regions. 176. Confidential VMs on the A3 machine series with NVIDIA Tensor Core H100 GPUs will be in private preview later this year. Migration 177. The Rapid Migration Program (RaMP) now covers migration and modernization use cases that span applications and the underlying infrastructure, data and analytics. For example, as part of RaMP for Storage: Storage egress costs from Amazon S3 to Google Cloud Storage are now completely free. Cloud Storage's client libraries for Python, Node.js, and Java now support parallelization of uploads and downloads. Migration Center also includes several excellent new additions: 178. Migration use case navigator, for mapping out how to migrate your resources (servers, databases, data warehouses, etc.) from on-prem and other clouds directly into Google Cloud, including new Cloud Spend Estimators for rapid TCO assessments of on-premises VMware and Exadata environments. 179. Database discovery and assessment for Microsoft SQL Server, PostgreSQL and MySQL to Cloud SQL migrations. Google Cloud VMware Engine, an integrated VMware service on Google Cloud, now offers: 180. The intent to support VMware Cloud Foundation License Portability 181. General availability of larger instance type (ve2-standard-128) offerings. 182. Networking enhancements including next-gen VMware Engine Networking, automated zero-config VPC peering, and Cloud DNS for workloads. 183. Terraform Infrastructure as Code Automation. Migrate to Virtual Machines helps teams migrate their workloads. Here’s what we announced: 184. A new Disk Migration solution for migrating disk volumes to Google Cloud. 185. Image Import (preview) as a managed service. 186. BIOS to UEFI Conversion in preview, which automatically converts bootloaders to the newer UEFI format. 187. Amazon Linux Conversion in preview, for converting Amazon Linux to Rocky Linux in Google Compute Engine. 188. CMEK support, so you maintain control over your own encryption keys. When replatforming VMs to containers in GKE or Cloud Run, there’s: 189. The new Migrate to Containers (M2C) CLI, which generates artifacts that you can deploy to either GKE or Cloud Run. 190. M2C Cloud Code Extension, in preview, which migrates applications from VMs to containers running on GKE directly in Visual Studio Code. Here are the enhancements to our Database Migration Service: 191. Database Migration Service now offers AI-powered last-mile code conversion from Oracle to PostgreSQL. 192. Database Migration Service now performs migration from SQL Server (on any platform) to Cloud SQL for SQL Server, in preview. 193. In Datastream, SQL Server as a source for CDC performs data movement to BigQuery destinations. Migrating from a mainframe? Here are some new capabilities: 194. The Mainframe Assessment Tool (MAT), now powered by gen AI, analyzes the application codebase, performing fit assessment and creating application-level summarization and test cases. 195. Mainframe Connector sends a copy of your mainframe data to BigQuery for off-mainframe analytics. 196. 
G4 refactors mainframe application code (COBOL, RPG, JCL, etc.) and data from their original state/programming language to a modern stack (Java). 197. Dual Run lets you run a new system side by side with your existing mainframe, duplicating all transactions and checking for completeness, quality and effectiveness of the new solution. Partners & ecosystem 198. Partners showcased more than 100 solutions that leverage Google AI on the Next ‘24 show floor. 199. We announced the 2024 Google Cloud Partner of the Year winners. 200. Gemini models will be available in the SAP Generative AI Hub. 201. GitLab announced that its authentication, security, and CI/CD integrations with Google Cloud are now in public beta for customers. 202. Palo Alto Networks named Google Cloud its AI provider of choice and will use Gemini models to improve threat analysis and incident summarization for its Cortex XSIAM platform. 203. Exabeam is using Google Cloud AI to improve security outcomes for customers. 204. Global managed security services company Optiv is expanding support for Google Cloud products. 205. Alteryx, Dynatrace, and Harness are launching new features built with Google Cloud AI to automate workflows, support data governance, and enable users to better observe and manage their data. 206. A new Generative AI Services Specialization is available for partners who demonstrate the highest level of technical proficiency with Google Cloud gen AI. 207. We introduced new Generative AI Delivery Excellence and Technical Bootcamps, and advanced Challenge Labs in generative AI. 208. The Google Cloud Ready - BigQuery initiative has 21 new partners: Actable, AgileData, Amplitude, Boostkpi, CaliberMind, Calibrate Analytics, CloudQuery, DBeaver, Decube, DinMo, Estuary, Followrabbit, Gretel, Portable, Precog, Retool, SheetGo, Tecton, Unravel Data, Vallidio, and Vaultree. 209. The Google Cloud Ready - AlloyDB initiative has six new partners: Boostkpi, DBeaver, Estuary, Redis, Thoughtspot, and SeeBurger. 210. The Google Cloud Ready - Cloud SQL initiative has five new partners: BoostKPI, DBeaver, Estuary, Redis, and Thoughtspot. 211. CrowdStrike is integrating its Falcon Platform with Google Cloud products. Members of our Google for Startups program, meanwhile, will be interested to learn that: 212. The Google for Startups Cloud Program has a new partnership with the NVIDIA Inception startup program. The benefits include providing Inception members with access to Google Cloud credits, go-to-market support, technical expertise, and fast-tracked onboarding to Google Cloud Marketplace. 213. As part of the NVIDIA Inception partnership, Google for Startups Cloud Program members can join NVIDIA Inception and gain access to technological expertise, NVIDIA Deep Learning Institute course credits, NVIDIA hardware and software, and more. Eligible members of the Google for Startups Cloud Program also can participate in NVIDIA Inception Capital Connect, a platform that gives startups exposure to venture capital firms interested in the space. 214. The new Google for Startups Accelerator: AI-First program for startups building AI solutions based in the U.S. and Canada has launched, and its cohort includes 15 AI startups: Aptori, Augmend, Backpack Healthcare, BrainLogic AI, Cicerai, CLIKA, Easel AI, Findly, Glass Health, Kodif, Liminal, mbue, Modulo Bio, Rocket Doctor, and Sibli. 215. 
The Startup Learning Center provides startups with curated content to help them grow with Google Cloud, and will be launching an offering for startup developers and future founders via Innovators Plus in the coming months. Finally, Google Cloud Consulting has the following services to help you build out your Google Cloud environment: 216. Google Cloud Consulting is offering no-cost, on-demand training to top customers through Google Cloud Skills Boost, including new gen AI skill badges: Prompt Design in Vertex AI, Develop Gen AI Apps with Gemini and Streamlit, and Inspect Rich Documents with Gemini Multimodality and Multimodal RAG. 217. The new Isolator solution protects healthcare data used in collaborations between parties using a variety of Google Cloud technologies including Chrome Enterprise Premium, VPC Service Controls, Chrome Enterprise, and encryption. 218. Google Cloud Consulting’s Delivery Navigator is now generally available to all Google Cloud qualified services partners. Phew. What a week! On behalf of Google Cloud, we’re so grateful you joined us at Next ‘24, and can’t wait to host you again next year back in Las Vegas at the Mandalay Bay, April 9-11, 2025! View the full article
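As referenced in announcement #49 above, here is a usage sketch for Gemini 1.5 Pro in public preview on Vertex AI. It is a minimal example assuming the vertexai Python SDK; the project ID, region, and preview model version string are placeholders you should check against the current documentation.

    # A minimal sketch of calling Gemini 1.5 Pro on Vertex AI, assuming the
    # vertexai Python SDK; project, location, and model ID are placeholders.
    import vertexai
    from vertexai.generative_models import GenerativeModel

    vertexai.init(project="my-project", location="us-central1")

    # The long context window makes Gemini 1.5 Pro suited to summarizing
    # very large documents or codebases in a single request.
    model = GenerativeModel("gemini-1.5-pro-preview-0409")
    response = model.generate_content("Summarize the key themes of Google Cloud Next '24.")
    print(response.text)
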
  11. Developers love Firestore because of how fast they can build applications and services end to end. As of today, Firestore has more than 500,000 monthly active developers, and Firestore apps power more than 1.3 billion monthly active end users using Firebase Auth. Recently, we’ve made updates to Firestore to improve developer productivity, enable developers to build the next generation of AI-enabled applications, express richer queries, and help ensure that Firestore databases meet enterprises’ ever-growing needs. Using Gemini to build applications with Firestore We’re excited to share that Gemini Code Assist now incorporates assistive capabilities for developing with Firestore. Using Gemini Code Assist, along with your favorite Integrated Development Environment (IDE), you can use natural language to define your Firestore data models and write queries. For example, you can express queries using natural language statements like “get products from Firestore inventory collection” and translate that into Firestore SDK code in your favorite programming language. (Image: Use natural language to write a Firestore query with Gemini Code Assist) To get started, refer to the Gemini Code Assist documentation to install the plugin for your favorite IDE, including Visual Studio Code, IntelliJ or Cloud Code. You can also use Gemini to generate solution architectures and provision Firestore resources. Simply use the Gemini in Cloud Run application canvas, in private preview, and type in a prompt like “LangChain app with Firestore and Vertex.” (Image: Use natural language in the Gemini in Cloud Run application canvas to generate a solution architecture that includes Firestore) You can learn more about the Gemini in Cloud Run application canvas here. Build next-gen AI-enabled applications If you’re trying to build AI-enabled solutions such as a chatbot or recommendations engine, Firestore has you covered. Firestore now has built-in support for vector search using exact nearest neighbors, the ability to automatically generate vector embeddings using popular embedding models via a turn-key extension, and integrations with popular generative AI libraries such as LangChain and LlamaIndex. Here’s an example of using Firestore’s vector search capabilities:

    collection_ref = collection('beans')
    collection_ref \
        .where("type", "==", "arabica") \
        .find_nearest(
            vector_field="embedding_field",
            query_vector=Vector([0.1, 0.2, …, 1]),
            distance_measure=DistanceMeasure.COSINE,
            limit=5)

To get started, refer to the Firestore Vector Search documentation, the Firestore Vector Search extension to generate embeddings documentation, and the documentation for Firestore’s LangChain and LlamaIndex integrations (a minimal write-side sketch also appears at the end of this article). Express richer queries Often applications require the ability to express queries that filter on range conditions across multiple fields. For example, if you run an e-commerce site, an end user might want to filter based on t-shirt sizes and price ranges. With the recent launch of queries using range filters on multiple fields in preview, you can easily and cost-efficiently perform these queries directly in Firestore. 
    db.collection("products")
        .whereGreaterThanOrEqualTo("price", 100)
        .whereGreaterThanOrEqualTo("rating", 4);

To get started, refer to the multiple range queries documentation. We’re also introducing Firestore Query Explain in preview to help you troubleshoot your queries. You can run Explain to retrieve the proposed query plan, or use the special analyze flag to execute the query and retrieve performance and billing metrics along with the actual query results:

    Query query = db.collection("products")
        .whereGreaterThanOrEqualTo("price", 100)
        .whereGreaterThanOrEqualTo("rating", 4);

    ExplainResults<QuerySnapshot> explainResults = query.explain(ExplainOptions.builder().analyze(true).build());

    ExplainMetrics metrics = explainResults.getMetrics();

    System.out.println(metrics.getExecutionStats());

Here’s the output:

    {
      "executionStats": {
        "resultsReturned": "2",
        "bytesReturned": "190",
        "executionDuration": "0.059943s",
        "readOperations": "3",
        "debugStats": {
          "index_entries_scanned": "500",
          "documents_scanned": "2",
          "billing_details": {
            "index_entries_billable": "500",
            "documents_billable": "2",
            "small_ops": "0",
            "min_query_cost": "0"
          }
        }
      }
    }

To get started, refer to the Query Explain documentation. Meeting enterprise needs One key aspect of enterprise readiness is privacy. Firestore now supports Customer Managed Encryption Keys (CMEK) in preview, which allows you to encrypt data stored at-rest using your own specified encryption key. This is an alternative to Firestore’s default behavior, which generates a Google-managed encryption key to encrypt your data. Your customer-specified key can be stored using the Cloud Key Management Service, or you can even use your own external key manager. Get started with the Firestore CMEK documentation. And to help minimize serving latency and maximize data privacy, you can now deploy Firestore in any available Google Cloud region. You can review a full list of supported Firestore locations and pricing here. Lastly, you can now retain daily backups using Firestore’s Scheduled Backup feature for up to 98 days, up from seven days. Get started with Firestore Scheduled Backup and Restore today. Next steps To learn more about Firestore and the new features launching at Next ‘24, check out the following resources: Getting Started on Firestore: server client library or web & mobile client library Firestore Vector Search documentation Firestore Vector Search extension to automatically generate embeddings documentation Firestore Gen AI Library Integrations: LangChain and LlamaIndex Firestore Multiple Range queries documentation Firestore Query Explain documentation Firestore Customer Managed Encryption Keys documentation Firestore Locations and Pricing documentation Firestore Scheduled Backup and Restore documentation View the full article
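As referenced above, here is the write-side companion to the vector search example: a minimal sketch of storing documents with embedding fields, assuming the google-cloud-firestore Python client's vector support. The collection name, field names, and embedding values are hypothetical.

    # A minimal sketch of writing vector embeddings to Firestore, assuming the
    # google-cloud-firestore Python client; names and values are placeholders.
    from google.cloud import firestore
    from google.cloud.firestore_v1.vector import Vector

    db = firestore.Client()

    # Each document stores ordinary fields plus a vector field that a later
    # find_nearest() query can search against.
    db.collection("beans").add(
        {
            "type": "arabica",
            "embedding_field": Vector([0.1, 0.2, 0.3]),  # placeholder embedding
        }
    )

In practice the embedding values would come from an embedding model (or from the turn-key extension mentioned above) rather than being hard-coded.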
  12. At Google Cloud Next ‘23, we announced support in Database Migration Service (DMS) for Oracle-to-PostgreSQL migrations to make them smooth, intuitive and efficient. We also introduced integrated code and schema conversions to reduce friction in the most time-consuming phases of the migration. This week at Google Cloud Next ‘24, we unveiled Gemini for Google Cloud, a new generation of AI-assistive capabilities based on Google's Gemini family of models, including Gemini in Databases. In addition, we announced the preview of a key feature of Gemini in Databases: assistive code and schema conversion, allowing you to speed up the modernization of Oracle databases to PostgreSQL databases on Google Cloud. Finally, we announced Gemini-assisted code explainability, another key feature of Gemini in Databases, which simplifies code conversion and helps developers become proficient with the PostgreSQL dialect. Let’s dive in. Gemini-assisted code and schema conversion Migrating code and schema from Oracle PL/SQL to PostgreSQL can be very challenging due to the inherent differences between them in syntax, data types, procedural constructs, and more. The distinct nature of built-in functions, transaction-handling mechanisms, and system-specific objects further complicates the conversion process. When migrating data, developers must meticulously address variations in error handling, security models, and procedural language specifics. Yet successfully navigating these disparities is essential if you want to adopt Cloud SQL or AlloyDB for PostgreSQL. And while automatic code conversion can handle the bulk of the migration, there's often a last mile that requires manual intervention and user feedback. In Database Migration Service, Gemini is trained on your manual interventions, learning from the edits you make, from your responses to its real-time recommendations about the intricacies of your specific code patterns and transformations, and from other commonalities across your code base. For example, consider Oracle Bulk Collect, which lacks a direct equivalent in PostgreSQL. In the DMS conversion workspace, adjusting SQL code and converting Bulk Collect into PostgreSQL FOR/IN/LOOP statements is a manual process. But once the code is changed manually, Gemini identifies other occurrences so it can suggest additional edits based on what it learned. A look at Gemini explainability If you’re modernizing your database to a new, unfamiliar environment, then Gemini explainability is for you. With this enhancement, you can use simple natural language constructs to ask Database Migration Service questions like: How can I fix a conversion issue without a deep understanding of the PostgreSQL dialect? Why was my source converted to the PostgreSQL dialect in that particular way? How can I optimize my destination code to get the most out of the AlloyDB or Cloud SQL for PostgreSQL engine? How can I add in-line comments to the target code for future reference? Let’s go back to the previous example. For a non-PostgreSQL developer, finding the equivalent for Bulk Collect can take time. With Gemini explainability, simply ask Gemini “How can I fix this issue?” and get insightful information about the issue, how to fix it, and even revised code that you can use as a starting point! If you’re seeking to modernize your databases and leverage managed PostgreSQL services to reduce costs, increase flexibility, improve performance, and avoid vendor lock-in, Gemini explainability can help you get there much faster. 
You can sign up for the Gemini explainability preview today by filling out this form. Getting started Migrating with Database Migration Service is easy. To start, navigate to the Database Migration page in the Google Cloud console, create a new migration job, and follow these steps: Create your source connection profile, which contains information about the source database. The connection profile can later be used for additional migrations. Create a conversion workspace that automatically converts your source Oracle schema and PL/SQL code to a PostgreSQL schema and compatible SQL. If any exceptions are found and need attention, you can fix them and save the refined SQL. Once saved, Gemini will use your edits to train its model, and if enough training data has accumulated, will prompt a suggestion dialog. Review the Gemini suggestions and accept or reject them. Apply the converted schema objects and SQL code to your destination Cloud SQL for PostgreSQL instance. Create a migration job and choose the conversion workspace and connection profiles previously created. Test your migration job and start your migration whenever you're ready. Once the migration job starts, DMS takes an initial snapshot of your data, then replicates new changes as they happen. The migration job continues to replicate the source data until you decide to finalize the migration and cut over to the target database (a small monitoring sketch follows this article). What DMS customers are saying “A database migration is a complex, multi-step process that is important for modernization, yet it carries the potential for serious risk of enterprise disruption. Google’s Database Migration Service is addressing the key capabilities for minimal-downtime migrations in a single place, including Gemini-assisted code conversion to make this journey seamless. Google and Sabre have partnered on this journey and are collaborating to drive the modernization of Sabre’s databases and enhance Database Migration Service through real-world testing.” - Jeff Carroll, SVP, Platform and Cloud Engineering, Sabre “The Google Database Migration Service and our partner Nordcloud made our Proof of Concept a success. We were able to migrate and convert our entire 10TB Point of Sale database from Oracle to AlloyDB. The process of backfilling was completed with high efficiency. In general, we were surprised by the high level of automation for both conversion and data migration.” - Head of Software Engineering, Bison Learn more and start your database journey Gemini-assisted migrations in Database Migration Service are a huge step in the evolution of database migration tools. The ability to learn, adapt, and continuously refine the code conversion process not only helps reduce your burden but also sets you up for efficient migrations in the future. As you embark on your journey from Oracle to PostgreSQL, Gemini in Database Migration Service can reduce cost, time, and risk, setting you up for success. For more information, head over to the documentation or start training with this Database Migration Service Qwiklab. In addition, you can view this conversion workspace introduction video. To get started with the new Database Migration Service for Oracle to PostgreSQL migrations, simply visit the Database Migration page in the console. View the full article
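For teams that want to watch migration jobs outside the console, here is the minimal monitoring sketch referenced above, assuming the google-cloud-dms Python client library; the project ID and region in the resource path are placeholders.

    # A minimal sketch of listing Database Migration Service migration jobs,
    # assuming the google-cloud-dms client; resource names are placeholders.
    from google.cloud import clouddms_v1

    client = clouddms_v1.DataMigrationServiceClient()
    parent = "projects/my-project/locations/us-central1"  # placeholder path

    # Print each job with its current state (e.g., RUNNING, COMPLETED, FAILED).
    for job in client.list_migration_jobs(parent=parent):
        print(job.name, job.state.name)
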
  13. The journey of going from data to insights can be fragmented, complex and time consuming. Data teams spend time on repetitive and routine tasks such as ingesting structured and unstructured data, wrangling data in preparation for analysis, and optimizing and maintaining pipelines. Obviously, they’d prefer to spend that time on higher-value analysis and insights-led decision making. At Next ‘23, we introduced Duet AI in BigQuery. This year at Next ‘24, Duet AI in BigQuery becomes Gemini in BigQuery, which provides AI-powered experiences for data preparation, analysis and engineering, as well as intelligent recommendations to enhance user productivity and optimize costs. "With the new AI-powered assistive features in BigQuery and ease of integrating with other Google Workspace products, our teams can extract valuable insights from data. The natural language-based experiences, low-code data preparation tools, and automatic code generation features streamline high-priority analytics workflows, enhancing the productivity of data practitioners and providing the space to focus on high impact initiatives. Moreover, users with varying skill sets, including our business users, can leverage more accessible data insights to effect beneficial changes, fostering an inclusive data-driven culture within our organization." said Tim Velasquez, Head of Analytics, Veo Let’s take a closer look at the new features of Gemini in BigQuery. Accelerate data preparation with AI Your business insights are only as good as your data. When you work with large datasets that come from a variety of sources, there are often inconsistent formats, errors, and missing data. As such, cleaning, transforming, and structuring them can be a major hurdle. To simplify data preparation, validation, and enrichment, BigQuery now includes AI-augmented data preparation that helps users cleanse and wrangle their data. Additionally, we are enabling users to build low-code visual data pipelines, or rebuild legacy pipelines, in BigQuery. Once the pipelines are running in production, AI assists with finding and resolving issues such as schema or data drift, significantly reducing the toil associated with maintaining a data pipeline. Because the resulting pipelines run in BigQuery, users also benefit from integrated metadata management, automatic end-to-end data lineage, and capacity management. (Image: Gemini in BigQuery provides AI-driven assistance for users to clean and wrangle data) Kickstart the data-to-insights journey Most data analysis starts with exploration — finding the right dataset, understanding the data’s structure, identifying key patterns, and identifying the most valuable insights you want to extract. This step can be cumbersome and time-consuming, especially if you are working with a new dataset or if you are new to the team. To address this problem, Gemini in BigQuery provides new semantic search capabilities to help you pinpoint the most relevant tables for your tasks. Leveraging the metadata and profiling information of these tables from Dataplex, Gemini in BigQuery surfaces relevant, executable queries that you can run with just one click. You can learn more about BigQuery data insights here. (Image: Gemini in BigQuery suggests executable queries for tables that you can run in a single click) Reimagine analytics workflows with natural language To boost user productivity, we’re also rethinking the end-to-end user experience. 
The new BigQuery data canvas provides a reimagined natural language-based experience for data exploration, curation, wrangling, analysis, and visualization, allowing you to explore and scaffold your data journeys in a graphical workflow that mirrors your mental model. For example, to analyze a recent marketing campaign, you can use simple natural language prompts to discover campaign data sources, integrate with existing customer data, derive insights, and share visual reports with executives — all within a single experience. Watch this video for a quick overview of BigQuery data canvas. (Image: BigQuery data canvas allows you to explore and analyze datasets, and create a customized visualization, all using natural language prompts within the same interface) Enhance productivity with SQL and Python code assistance Even advanced users sometimes struggle to remember all the details of SQL or Python syntax, and navigating through numerous tables, columns, and relationships can be daunting. Gemini in BigQuery helps you write and edit SQL or Python code using simple natural language prompts, referencing relevant schemas and metadata. You can also leverage BigQuery’s in-console chat interface to explore tutorials, documentation and best practices for specific tasks using simple prompts such as: “How can I use BigQuery materialized views?” “How do I ingest JSON data?” and “How can I improve query performance?” Optimize analytics for performance and speed With growing data volumes, analytics practitioners, including data administrators, find it increasingly challenging to effectively manage capacity and enhance query performance. We are introducing recommendations that can help continuously improve query performance, minimize errors and optimize your platform costs. With these recommendations, you can identify materialized views that can be created or deleted based on your query patterns, and identify tables that would benefit from partitioning or clustering (see the sketch after this article). Additionally, you can autotune Spark pipelines and troubleshoot failures and performance issues. Get started To learn more about Gemini in BigQuery, watch this short overview video and refer to the documentation, and sign up to get early access to the preview features. If you’re at Next ‘24, join our data and analytics breakout sessions and stop by the demo stations to explore further and see these capabilities in action. Pricing details for Gemini in BigQuery will be shared when generally available to all customers. View the full article
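As referenced above, here is a minimal sketch of acting on a materialized-view recommendation with the google-cloud-bigquery Python client; the project, dataset, table, and query are hypothetical.

    # A minimal sketch of creating a BigQuery materialized view, assuming the
    # google-cloud-bigquery Python client; all names are placeholders.
    from google.cloud import bigquery

    client = bigquery.Client()

    # A materialized view precomputes and incrementally maintains the results
    # of an aggregation, so repeated queries over the base table get cheaper.
    ddl = """
    CREATE MATERIALIZED VIEW `my-project.my_dataset.daily_order_totals` AS
    SELECT order_date, SUM(order_total) AS revenue
    FROM `my-project.my_dataset.orders`
    GROUP BY order_date
    """

    client.query(ddl).result()  # blocks until the DDL job completes
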
  14. Everything you need to know to get started with Bard, Google’s experimental conversational AI chatbot. View the full article