Discover the basics of using Gemini with Python via VertexAI, creating APIs with FastAPI, data validation with Pydantic, and the fundamentals of Retrieval-Augmented Generation (RAG).

In this article, I share some of the basics of creating an LLM-driven web application using various technologies: Python, FastAPI, Pydantic, VertexAI and more. You will learn how to create such a project from the very beginning and get an overview of the underlying concepts, including Retrieval-Augmented Generation (RAG).

Disclaimer: I am using data from The Movie Database within this project. The API is free to use for non-commercial purposes and complies with the Digital Millennium Copyright Act (DMCA). For further information about TMDB data usage, please read the official FAQ.

Table of contents
– Inspiration
– System Architecture
– Understanding Retrieval-Augmented Generation (RAG)
– Python projects with Poetry
– Create the API with FastAPI
– Data validation and quality with Pydantic
– TMDB client with httpx
– Gemini LLM client with VertexAI
– Modular prompt generator with Jinja
– Frontend
– API examples
– Conclusion

The best way to share this knowledge is through a practical example. Hence, I'll use my project Gemini Movie Detectives to cover the various aspects. The project was created as part of the Google AI Hackathon 2024, which is still running while I am writing this.

Gemini Movie Detectives (by author)

Gemini Movie Detectives is a project that leverages the power of the Gemini Pro model via VertexAI to create an engaging quiz game using the latest movie data from The Movie Database (TMDB). Part of the project was also to make it deployable with Docker and to create a live version. Try it yourself: movie-detectives.com. Keep in mind that this is a simple prototype, so there might be unexpected issues. Also, I had to add some limitations in order to control costs that might be generated by using GCP and VertexAI.
Gemini Movie Detectives (by author)

The project is fully open-source and is split into two separate repositories:

– Github repository for backend: https://github.com/vojay-dev/gemini-movie-detectives-api
– Github repository for frontend: https://github.com/vojay-dev/gemini-movie-detectives-ui

The focus of this article is the backend project and its underlying concepts. It will therefore only briefly explain the frontend and its components. In the following video, I also give an overview of the project and its components.

Inspiration

Growing up as a passionate gamer and now working as a Data Engineer, I've always been drawn to the intersection of gaming and data. With this project, I combined two of my greatest passions: gaming and data. Back in the '90s, I always enjoyed the video game series You Don't Know Jack, a delightful blend of trivia and comedy that not only entertained but also taught me a thing or two. Generally, the use of games for educational purposes is another concept that fascinates me. In 2023, I organized a workshop to teach kids and young adults game development. They learned about the mathematical concepts behind collision detection, yet they had fun because everything was framed in the context of gaming. It was eye-opening that gaming is not only a huge market but also holds great potential for knowledge sharing.

With this project, called Movie Detectives, I aim to showcase the magic of Gemini, and AI in general, in crafting engaging trivia and educational games, but also how game design in general can profit from these technologies. By feeding the Gemini LLM with accurate and up-to-date movie metadata, I could ensure the accuracy of the questions it generates. This is an important aspect, because without the Retrieval-Augmented Generation (RAG) methodology to enrich queries with real-time metadata, there is a risk of propagating misinformation, a typical pitfall when using AI for this purpose.
Another game-changer lies in the modular prompt generation framework I've crafted using Jinja templates. It's like having a Swiss Army knife for game design: effortlessly swapping show master personalities to tailor the game experience. And with the language module, translating the quiz into multiple languages is a breeze, eliminating the need for costly translation processes. From a business standpoint, this modularization opens doors to a wider customer base, transcending language barriers without additional effort. And personally, I've experienced firsthand the transformative power of these modules. Switching from the default quiz master to the dad-joke quiz master was a riot: a nostalgic nod to the heyday of You Don't Know Jack, and a testament to the versatility of this project.

Movie Detectives — Example: Santa Claus personality (by author)

System Architecture

Before we jump into details, let's get an overview of how the application was built.

Tech Stack: Backend
– Python 3.12 + FastAPI for API development
– httpx for TMDB integration
– Jinja templating for modular prompt generation
– Pydantic for data modeling and validation
– Poetry for dependency management
– Docker for deployment
– TMDB API for movie data
– VertexAI and Gemini for generating quiz questions and evaluating answers
– Ruff as linter and code formatter together with pre-commit hooks
– Github Actions to automatically run tests and linter on every push

Tech Stack: Frontend
– VueJS 3.4 as the frontend framework
– Vite for frontend tooling

Essentially, the application fetches up-to-date movie metadata from an external API (TMDB), constructs a prompt based on different modules (personality, language, …), enriches this prompt with the metadata and, that way, uses Gemini to initiate a movie quiz in which the user has to guess the correct title.
The backend infrastructure is built with FastAPI and Python, employing the Retrieval-Augmented Generation (RAG) methodology to enrich queries with real-time metadata. Utilizing Jinja templating, the backend modularizes prompt generation into base, personality, and data enhancement templates, enabling the generation of accurate and engaging quiz questions.

The frontend is powered by Vue 3 and Vite, supported by daisyUI and Tailwind CSS for efficient frontend development. Together, these tools provide users with a sleek and modern interface for seamless interaction with the backend.

In Movie Detectives, quiz answers are interpreted by the Large Language Model (LLM) once again, allowing for dynamic scoring and personalized responses. This showcases the potential of integrating LLMs with RAG in game design and development, paving the way for truly individualized gaming experiences. Furthermore, it demonstrates the potential for creating engaging quiz trivia or educational games by involving LLMs. Adding and changing personalities or languages is as easy as adding more Jinja template modules. With very little effort, this can change the full game experience, reducing the effort for developers.

System Overview (by author)

As can be seen in the overview, Retrieval-Augmented Generation (RAG) is one of the essential ideas of the backend. Let's have a closer look at this particular paradigm.

Understanding Retrieval-Augmented Generation (RAG)

In the realm of Large Language Models (LLMs) and AI, one paradigm becoming more and more popular is Retrieval-Augmented Generation (RAG). But what does RAG entail, and how does it influence the landscape of AI development? At its essence, RAG enhances LLM systems by incorporating external data to enrich their predictions. This means you pass relevant context to the LLM as an additional part of the prompt. But how do you find relevant context?
Usually, this data can be automatically retrieved from a database with vector search or from dedicated vector databases. Vector databases are especially useful, since they store data in a way that makes it quick to query for similar data. The LLM then generates the output based on both the query and the retrieved documents.

Picture this: you have an LLM capable of generating text based on a given prompt. RAG takes this a step further by infusing additional context from external sources, like up-to-date movie data, to enhance the relevance and accuracy of the generated text.

Let's break down the key components of RAG:

– LLMs: LLMs serve as the backbone of RAG workflows. These models, trained on vast amounts of text data, possess the ability to understand and generate human-like text.
– Vector indexes for contextual enrichment: A crucial aspect of RAG is the use of vector indexes, which store embeddings of text data in a format understandable by LLMs. These indexes allow for efficient retrieval of relevant information during the generation process. In the context of this project, that could be a database of movie metadata.
– Retrieval process: RAG involves retrieving pertinent documents or information based on the given context or prompt. This retrieved data acts as additional input for the LLM, supplementing its understanding and enhancing the quality of generated responses. This could be getting all relevant information known about and connected to a specific movie.
– Generative output: With the combined knowledge from both the LLM and the retrieved context, the system generates text that is not only coherent but also contextually relevant, thanks to the augmented data.

RAG architecture (by author)

While in the Gemini Movie Detectives project the prompt is enhanced with external API data from The Movie Database, RAG typically involves the use of vector indexes to streamline this process.
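The retrieve-then-augment flow described above can be sketched with a toy example: embed documents as vectors, rank them by cosine similarity to the query, and prepend the best match to the prompt. This is a minimal illustration only; real systems use learned embeddings and a vector database, and the `embed` function here is a hypothetical bag-of-words stand-in.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # hypothetical stand-in for a real embedding model: bag-of-words counts
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str]) -> str:
    # retrieval step of RAG: find the document most similar to the query
    q = embed(query)
    return max(documents, key=lambda d: cosine(q, embed(d)))

documents = [
    "Inception (2010): a thief enters dreams to steal secrets",
    "Godzilla vs. Kong (2021): two giant monsters clash",
]

query = "movie about dreams and stealing secrets"
context = retrieve(query, documents)

# augmentation step: enrich the prompt with the retrieved context
prompt = f"Context: {context}\n\nQuestion: {query}"
print(prompt)
```

The same pattern scales up directly: swap `embed` for a real embedding model and `retrieve` for a vector database query, and the augmentation step stays unchanged.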
A full RAG setup usually works with much more complex documents as well as a much larger amount of data for enhancement. These indexes then act like signposts, guiding the system to relevant external sources quickly. This project is therefore a mini version of RAG, but it demonstrates the basic idea and the power of external data to augment LLM capabilities.

In more general terms, RAG is a very important concept, especially when crafting trivia quizzes or educational games using LLMs like Gemini. This concept can avoid the risk of false positives, asking wrong questions, or misinterpreting answers from the users.

Here are some open-source projects that might be helpful when approaching RAG in one of your projects:

– txtai: All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows.
– LangChain: A framework for developing applications powered by large language models (LLMs).
– Qdrant: Vector search engine for the next generation of AI applications.
– Weaviate: A cloud-native, open-source vector database that is robust, fast, and scalable.

Of course, given the potential value of this approach for LLM-based applications, there are many more open- and closed-source alternatives, but these should get your research on the topic started.

Python projects with Poetry

Now that the main concepts are clear, let's have a closer look at how the project was created and how dependencies are managed in general. The three main tasks Poetry can help you with are: build, publish and track. The idea is to have a deterministic way to manage dependencies, to share your project and to track dependency states.

Poetry also handles the creation of virtual environments for you. By default, those are created in a centralized folder within your system.
However, if you prefer to have the virtual environment of the project in the project folder, like I do, it is a simple config change:

poetry config virtualenvs.in-project true

With poetry new you can then create a new Python project. It will create a virtual environment linked to your system's default Python. If you combine this with pyenv, you get a flexible way to create projects using specific versions. Alternatively, you can also tell Poetry directly which Python version to use: poetry env use /full/path/to/python.

Once you have a new project, you can use poetry add to add dependencies to it. With this, I created the project for Gemini Movie Detectives:

poetry config virtualenvs.in-project true
poetry new gemini-movie-detectives-api
cd gemini-movie-detectives-api
poetry add 'uvicorn[standard]'
poetry add fastapi
poetry add pydantic-settings
poetry add httpx
poetry add 'google-cloud-aiplatform>=1.38'
poetry add jinja2

The metadata about your project, including the dependencies with their respective versions, is stored in the pyproject.toml and poetry.lock files. I added more dependencies later, which resulted in the following pyproject.toml for the project:

[tool.poetry]
name = "gemini-movie-detectives-api"
version = "0.1.0"
description = "Use Gemini Pro LLM via VertexAI to create an engaging quiz game incorporating TMDB API data"
authors = ["Volker Janz <volker@janz.sh>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.12"
fastapi = "^0.110.1"
uvicorn = {extras = ["standard"], version = "^0.29.0"}
python-dotenv = "^1.0.1"
httpx = "^0.27.0"
pydantic-settings = "^2.2.1"
google-cloud-aiplatform = ">=1.38"
jinja2 = "^3.1.3"
ruff = "^0.3.5"
pre-commit = "^3.7.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

Create the API with FastAPI

FastAPI is a Python framework that allows for rapid API development. Built on open standards, it offers a seamless experience without new syntax to learn.
With automatic documentation generation, robust validation, and integrated security, FastAPI streamlines development while ensuring great performance.

Implementing the API for the Gemini Movie Detectives project, I simply started from a Hello World application and extended it from there. Here is how to get started:

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

Assuming you also keep the virtual environment within the project folder as .venv/ and use uvicorn, this is how to start the API with the reload feature enabled, in order to test code changes without the need for a restart:

source .venv/bin/activate
uvicorn gemini_movie_detectives_api.main:app --reload
curl -s localhost:8000 | jq .

If you have not yet installed jq, I highly recommend doing so now. I might cover this wonderful JSON Swiss Army knife in a future article. This is what the response looks like:

Hello FastAPI (by author)

From here, you can develop your API endpoints as needed.
This is how the API endpoint implementation to start a movie quiz in Gemini Movie Detectives looks, for example:

@app.post('/quiz')
@rate_limit
@retry(max_retries=settings.quiz_max_retries)
def start_quiz(quiz_config: QuizConfig = QuizConfig()):
    movie = tmdb_client.get_random_movie(
        page_min=_get_page_min(quiz_config.popularity),
        page_max=_get_page_max(quiz_config.popularity),
        vote_avg_min=quiz_config.vote_avg_min,
        vote_count_min=quiz_config.vote_count_min
    )

    if not movie:
        logger.info('could not find movie with quiz config: %s', quiz_config.dict())
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail='No movie found with given criteria')

    try:
        genres = [genre['name'] for genre in movie['genres']]

        prompt = prompt_generator.generate_question_prompt(
            movie_title=movie['title'],
            language=get_language_by_name(quiz_config.language),
            personality=get_personality_by_name(quiz_config.personality),
            tagline=movie['tagline'],
            overview=movie['overview'],
            genres=', '.join(genres),
            budget=movie['budget'],
            revenue=movie['revenue'],
            average_rating=movie['vote_average'],
            rating_count=movie['vote_count'],
            release_date=movie['release_date'],
            runtime=movie['runtime']
        )

        chat = gemini_client.start_chat()

        logger.debug('starting quiz with generated prompt: %s', prompt)
        gemini_reply = gemini_client.get_chat_response(chat, prompt)
        gemini_question = gemini_client.parse_gemini_question(gemini_reply)

        quiz_id = str(uuid.uuid4())
        session_cache[quiz_id] = SessionData(
            quiz_id=quiz_id,
            chat=chat,
            question=gemini_question,
            movie=movie,
            started_at=datetime.now()
        )

        return StartQuizResponse(quiz_id=quiz_id, question=gemini_question, movie=movie)
    except GoogleAPIError as e:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f'Google API error: {e}')
    except Exception as e:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f'Internal server error: {e}')

Within this code, you can already see three of the main components of the backend:
– tmdb_client: A client I implemented using httpx to fetch data from The Movie Database (TMDB).
– prompt_generator: A class that helps to generate modular prompts based on Jinja templates.
– gemini_client: A client to interact with the Gemini LLM via VertexAI in Google Cloud.

We will look at these components in detail later, but first some more helpful insights regarding the usage of FastAPI.

FastAPI makes it really easy to define the HTTP method and the data to be transferred to the backend. For this particular function, I expect a POST request, as it creates a new quiz. This can be done with the post decorator:

@app.post('/quiz')

Also, I am expecting some data within the request, sent as JSON in the body. In this case, I am expecting an instance of QuizConfig as JSON. I simply defined QuizConfig as a subclass of BaseModel from Pydantic (which will be covered later) and with that, I can pass it into the API function and FastAPI will do the rest:

class QuizConfig(BaseModel):
    vote_avg_min: float = Field(5.0, ge=0.0, le=9.0)
    vote_count_min: float = Field(1000.0, ge=0.0)
    popularity: int = Field(1, ge=1, le=3)
    personality: str = Personality.DEFAULT.name
    language: str = Language.DEFAULT.name

# ...
def start_quiz(quiz_config: QuizConfig = QuizConfig()):

Furthermore, you might notice two custom decorators:

@rate_limit
@retry(max_retries=settings.quiz_max_retries)

I implemented these to reduce duplicate code. They wrap the API function to retry it in case of errors and to introduce a global rate limit on how many movie quizzes can be started per day.

What I also liked personally is the error handling with FastAPI.
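The rate_limit implementation is shown later in the article, but the retry decorator is not. A minimal sketch of how such a wrapper might look (the actual implementation in the repository may differ):

```python
from functools import wraps

def retry(max_retries: int):
    # hypothetical sketch of a @retry decorator: re-invoke the wrapped
    # function until it succeeds or the retries are exhausted
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    last_error = e
            raise last_error
        return wrapper
    return decorator

attempts = 0

@retry(max_retries=2)
def flaky():
    # fails twice, then succeeds, to demonstrate the retry behavior
    global attempts
    attempts += 1
    if attempts < 3:
        raise ValueError('transient error')
    return 'ok'

result = flaky()
print(result)
```

Because the decorator sits below @app.post in the stack, FastAPI registers the already-wrapped function, so every request gets the retry behavior transparently.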
You can simply raise an HTTPException, give it the desired status code, and the user will then receive a proper response, for example, if no movie could be found with a given configuration:

raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail='No movie found with given criteria')

With this, you should have an overview of creating an API like the one for Gemini Movie Detectives with FastAPI. Keep in mind: all code is open-source, so feel free to have a look at the API repository on Github.

Data validation and quality with Pydantic

One of the main challenges with today's AI/ML projects is data quality. And that does not only apply to ETL/ELT pipelines, which prepare datasets to be used in model training or prediction, but also to the AI/ML application itself. Using Python, for example, usually enables Data Engineers and Scientists to get a reasonable result with little code, but being (mostly) dynamically typed, Python lacks data validation when used in a naive way.

That is why in this project, I combined FastAPI with Pydantic, a powerful data validation library for Python. The goal was to make the API lightweight but strict and strong when it comes to data quality and validation. Instead of plain dictionaries, for example, the Movie Detectives API strictly uses custom classes inherited from the BaseModel provided by Pydantic. This is the configuration for a quiz, for example:

class QuizConfig(BaseModel):
    vote_avg_min: float = Field(5.0, ge=0.0, le=9.0)
    vote_count_min: float = Field(1000.0, ge=0.0)
    popularity: int = Field(1, ge=1, le=3)
    personality: str = Personality.DEFAULT.name
    language: str = Language.DEFAULT.name

This example illustrates how not only the correct type is ensured, but also how further validation is applied to the actual values.
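To see the value-level validation in action, here is a simplified standalone version of QuizConfig (without the Personality and Language enums, which are specific to the project) that rejects out-of-range input with a ValidationError:

```python
from pydantic import BaseModel, Field, ValidationError

class QuizConfig(BaseModel):
    # simplified version of the model above; defaults and constraints as in the article
    vote_avg_min: float = Field(5.0, ge=0.0, le=9.0)
    vote_count_min: float = Field(1000.0, ge=0.0)
    popularity: int = Field(1, ge=1, le=3)

# valid input: all constraints are satisfied
config = QuizConfig(vote_avg_min=7.5, popularity=2)
print(config.vote_avg_min, config.popularity)

# invalid input: popularity must be between 1 and 3
errors = 0
try:
    QuizConfig(popularity=5)
except ValidationError as e:
    errors = len(e.errors())
print('rejected with', errors, 'validation error(s)')
```

When such a model is used as a FastAPI request body, the same rejection happens automatically and the client receives a 422 response detailing the failed constraint.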
Furthermore, up-to-date Python features like StrEnum are used to distinguish certain types, like personalities:

class Personality(StrEnum):
    DEFAULT = 'default.jinja'
    CHRISTMAS = 'christmas.jinja'
    SCIENTIST = 'scientist.jinja'
    DAD = 'dad.jinja'

Also, duplicate code is avoided by defining custom decorators. For example, the following decorator limits the number of quiz sessions per day, to keep control over GCP costs:

call_count = 0
last_reset_time = datetime.now()

def rate_limit(func: callable) -> callable:
    @wraps(func)
    def wrapper(*args, **kwargs) -> callable:
        global call_count
        global last_reset_time

        # reset call count if the day has changed
        if datetime.now().date() > last_reset_time.date():
            call_count = 0
            last_reset_time = datetime.now()

        if call_count >= settings.quiz_rate_limit:
            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail='Daily limit reached')

        call_count += 1
        return func(*args, **kwargs)

    return wrapper

It is then simply applied to the related API function:

@app.post('/quiz')
@rate_limit
@retry(max_retries=settings.quiz_max_retries)
def start_quiz(quiz_config: QuizConfig = QuizConfig()):

The combination of up-to-date Python features and libraries, such as FastAPI, Pydantic or Ruff, makes the backend less verbose but still very stable, and ensures a certain data quality so that the LLM output has the expected quality.

TMDB client with httpx

The TMDB client class uses httpx to perform requests against the TMDB API. httpx is a rising star in the world of Python libraries. While requests has long been the go-to choice for making HTTP requests, httpx offers a valid alternative. One of its key strengths is asynchronous functionality. httpx allows you to write code that can handle multiple requests concurrently, potentially leading to significant performance improvements in applications that deal with a high volume of HTTP interactions. Additionally, httpx aims for broad compatibility with requests, making it easier for developers to pick it up.
In the case of Gemini Movie Detectives, there are two main requests:

– get_movies: Get a list of random movies based on specific settings, like the average number of votes.
– get_movie_details: Get details for a specific movie to be used in a quiz.

In order to reduce the number of external requests, the latter uses the lru_cache decorator, which stands for "Least Recently Used cache". It is used to cache the results of function calls, so that if the same inputs occur again, the function doesn't have to recompute the result. Instead, it returns the cached result, which can significantly improve the performance of the program, especially for functions with expensive computations. In our case, we cache the details for 1024 movies, so if two players get the same movie, we do not need to make a request again:

@lru_cache(maxsize=1024)
def get_movie_details(self, movie_id: int):
    response = httpx.get(f'https://api.themoviedb.org/3/movie/{movie_id}', headers={
        'Authorization': f'Bearer {self.tmdb_api_key}'
    }, params={
        'language': 'en-US'
    })

    movie = response.json()
    movie['poster_url'] = self.get_poster_url(movie['poster_path'])

    return movie

Accessing data from The Movie Database (TMDB) is free for non-commercial usage; you can simply generate an API key and start making requests.

Gemini LLM client with VertexAI

Before Gemini via VertexAI can be used, you need a Google Cloud project with VertexAI enabled and a Service Account with sufficient access, together with its JSON key file.

Create GCP project (by author)

After creating a new project, navigate to APIs & Services –> Enable APIs and services –> search for VertexAI API –> Enable.

Enable VertexAI (by author)

To create a Service Account, navigate to IAM & Admin –> Service Accounts –> Create service account. Choose a proper name and go to the next step.

Create Service Account (by author)

Now make sure to assign the account the pre-defined role Vertex AI User.
Assign correct role (by author)

Finally, you can generate and download the JSON key file by clicking on the new user –> Keys –> Add Key –> Create new key –> JSON. With this file, you are good to go.

Create JSON key file (by author)

Using Gemini from Google with Python via VertexAI starts by adding the necessary dependency to the project:

poetry add 'google-cloud-aiplatform>=1.38'

With that, you can import and initialize vertexai with your JSON key file. You can also load a model, like the Gemini 1.0 Pro model used here, and start a chat session like this:

import vertexai
from google.oauth2.service_account import Credentials
from vertexai.generative_models import GenerativeModel

project_id = "my-project-id"
location = "us-central1"

credentials = Credentials.from_service_account_file("credentials.json")
model_name = "gemini-1.0-pro"

vertexai.init(project=project_id, location=location, credentials=credentials)
model = GenerativeModel(model_name)

chat_session = model.start_chat()

You can now use chat_session.send_message() to send a prompt to the model.
However, since you get the response in chunks of data, I recommend using a little helper function, so that you simply get the full response as one string:

def get_chat_response(chat: ChatSession, prompt: str) -> str:
    text_response = []
    responses = chat.send_message(prompt, stream=True)
    for chunk in responses:
        text_response.append(chunk.text)
    return ''.join(text_response)

A full example can then look like this:

import vertexai
from google.oauth2.service_account import Credentials
from vertexai.generative_models import GenerativeModel, ChatSession

project_id = "my-project-id"
location = "us-central1"

credentials = Credentials.from_service_account_file("credentials.json")
model_name = "gemini-1.0-pro"

vertexai.init(project=project_id, location=location, credentials=credentials)
model = GenerativeModel(model_name)

chat_session = model.start_chat()

def get_chat_response(chat: ChatSession, prompt: str) -> str:
    text_response = []
    responses = chat.send_message(prompt, stream=True)
    for chunk in responses:
        text_response.append(chunk.text)
    return ''.join(text_response)

response = get_chat_response(
    chat_session,
    "How to say 'you are awesome' in Spanish?"
)
print(response)

Running this, Gemini gave me the following response:

You are awesome (by author)

I agree with Gemini: Eres increíble.

Another hint when using this: you can also configure the model generation by passing a configuration to the generation_config parameter as part of the send_message function. For example:

generation_config = {
    'temperature': 0.5
}

responses = chat.send_message(
    prompt,
    generation_config=generation_config,
    stream=True
)

I am using this in Gemini Movie Detectives to set the temperature to 0.5, which gave me the best results. In this context, temperature means: how creative are the responses generated by Gemini. The value must be between 0.0 and 1.0, where closer to 1.0 means more creativity.
One of the main challenges, apart from sending a prompt and receiving the reply from Gemini, is to parse the reply in order to extract the relevant information. One learning from the project is:

Specify a format for Gemini which does not rely on exact words but uses key symbols to separate information elements.

For example, the question prompt for Gemini contains this instruction:

Your reply must only consist of three lines! You must only reply strictly using the following template for the three lines:
Question: <Your question>
Hint 1: <The first hint to help the participants>
Hint 2: <The second hint to get the title more easily>

The naive approach would be to parse the answer by looking for a line that starts with Question:. However, if we use another language, like German, the reply would look like: Antwort:. Instead, focus on the structure and key symbols. Read the reply like this:

– It has three lines
– The first line is the question
– The second line is the first hint
– The third line is the second hint
– Key and value are separated by :

With this approach, the reply can be parsed language-agnostically, and this is my implementation in the actual client:

@staticmethod
def parse_gemini_question(gemini_reply: str) -> GeminiQuestion:
    result = re.findall(r'[^:]+: ([^\n]+)', gemini_reply, re.MULTILINE)
    if len(result) != 3:
        msg = f'Gemini replied with an unexpected format. Gemini reply: {gemini_reply}'
        logger.warning(msg)
        raise ValueError(msg)

    question = result[0]
    hint1 = result[1]
    hint2 = result[2]

    return GeminiQuestion(question=question, hint1=hint1, hint2=hint2)

In the future, parsing responses will become even easier. During the Google Cloud Next '24 conference, Google announced that Gemini 1.5 Pro is now publicly available, and with that, they also announced some features including a JSON mode to get responses in JSON format. Check out this article for more details.

Apart from that, I wrapped the Gemini client into a configurable class. You can find the full implementation open-source on Github.
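The key-symbol approach can be verified in isolation. Running the same regular expression against an English and a German-formatted reply (the German labels here are illustrative, not actual Gemini output) extracts the three values regardless of the label wording:

```python
import re

# same pattern as in parse_gemini_question: capture everything after "label: "
PATTERN = r'[^:]+: ([^\n]+)'

reply_en = (
    'Question: Which movie is about dreams?\n'
    'Hint 1: Directed by Christopher Nolan\n'
    'Hint 2: I_c_p_i_n'
)

# hypothetical German labels, to show the parsing is language agnostic
reply_de = (
    'Frage: In welchem Film geht es um Träume?\n'
    'Hinweis 1: Regie führte Christopher Nolan\n'
    'Hinweis 2: I_c_p_i_n'
)

for reply in (reply_en, reply_de):
    question, hint1, hint2 = re.findall(PATTERN, reply, re.MULTILINE)
    print(question)
```

Note that this only works because the template forbids colons inside the values; a reply whose question text contains a `:` would break the pattern, which is exactly the kind of case the length check in parse_gemini_question guards against.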
Modular prompt generator with Jinja

The prompt generator is a class which combines and renders Jinja2 template files to create a modular prompt. There are two base templates: one for generating the question and one for evaluating the answer. Apart from that, there is a metadata template to enrich the prompt with up-to-date movie data. Furthermore, there are language and personality templates, organized in separate folders with a template file for each option.

Prompt Generator (by author)

Using Jinja2 allows for advanced features like template inheritance, which is used for the metadata. This makes it easy to extend this component, not only with more options for personalities and languages, but also to extract it into its own open-source project to make it available for other Gemini projects.

Frontend

The Gemini Movie Detectives frontend is split into four main components and uses vue-router to navigate between them.

The Home component simply displays the welcome message.

The Quiz component displays the quiz itself and talks to the API via fetch. To create a quiz, it sends a POST request to api/quiz with the desired settings. The backend then selects a random movie based on the user settings, creates the prompt with the modular prompt generator, uses Gemini to generate the question and hints, and finally returns everything back to the component so that the quiz can be rendered. Additionally, each quiz gets a session ID assigned in the backend and is stored in a limited LRU cache.

A third component exists for debugging purposes; it fetches data from the api/sessions endpoint, which returns all active sessions from the cache.

The fourth component displays statistics about the service. However, so far there is only one category of data displayed, which is the quiz limit. To limit the costs for VertexAI and GCP usage in general, there is a daily limit of quiz sessions, which resets with the first quiz of the next day. Data is retrieved from the api/limit endpoint.
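A minimal sketch of how such a modular prompt generator might combine templates. The template strings and names below are illustrative, not the actual templates from the repository, and jinja2 is assumed to be installed:

```python
from jinja2 import Environment, DictLoader

# illustrative in-memory templates; the real project loads .jinja files
# from personality/ and language/ folders on disk
templates = {
    'personality/christmas.jinja': 'You are a cheerful Santa Claus quiz master.',
    'metadata.jinja': 'The movie is {{ title }} ({{ release_date }}).',
    'question.jinja': (
        '{% include personality %}\n'
        '{% include "metadata.jinja" %}\n'
        'Ask one quiz question about this movie.'
    ),
}

env = Environment(loader=DictLoader(templates))

def generate_question_prompt(personality: str, **movie_data) -> str:
    # the personality module is swappable via a variable include,
    # while the metadata template is rendered with the movie fields
    template = env.get_template('question.jinja')
    return template.render(personality=f'personality/{personality}', **movie_data)

prompt = generate_question_prompt('christmas.jinja', title='Inception', release_date='2010-07-15')
print(prompt)
```

Swapping the personality is then just a matter of passing a different template file name, which is exactly what makes adding new personalities or languages so cheap.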
Vue components (by author)

API examples

Of course, using the frontend is a nice way to interact with the application, but it is also possible to just use the API. The following example shows how to start a quiz via the API using the Santa Claus / Christmas personality:

curl -s -X POST https://movie-detectives.com/api/quiz \
  -H 'Content-Type: application/json' \
  -d '{"vote_avg_min": 5.0, "vote_count_min": 1000.0, "popularity": 3, "personality": "christmas"}' | jq .

{
  "quiz_id": "e1d298c3-fcb0-4ebe-8836-a22a51f87dc6",
  "question": {
    "question": "Ho ho ho, this movie takes place in a world of dreams, just like the dreams children have on Christmas Eve after seeing Santa Claus! It's about a team who enters people's dreams to steal their secrets. Can you guess the movie? Merry Christmas!",
    "hint1": "The main character is like a skilled elf, sneaking into people's minds instead of houses. ",
    "hint2": "I_c_p_i_n "
  },
  "movie": {...}
}

Movie Detectives — Example: Santa Claus personality (by author)

This example shows how to change the language for a quiz:

curl -s -X POST https://movie-detectives.com/api/quiz \
  -H 'Content-Type: application/json' \
  -d '{"vote_avg_min": 5.0, "vote_count_min": 1000.0, "popularity": 3, "language": "german"}' | jq .

{
  "quiz_id": "7f5f8cf5-4ded-42d3-a6f0-976e4f096c0e",
  "question": {
    "question": "Stellt euch vor, es gäbe riesige Monster, die auf der Erde herumtrampeln, als wäre es ein Spielplatz! Einer ist ein echtes Urviech, eine Art wandelnde Riesenechse mit einem Atem, der so heiß ist, dass er euer Toastbrot in Sekundenschnelle rösten könnte. Der andere ist ein gigantischer Affe, der so stark ist, dass er Bäume ausreißt wie Gänseblümchen. Und jetzt ratet mal, was passiert? Die beiden geraten aneinander, wie zwei Kinder, die sich um das letzte Stück Kuchen streiten! Wer wird wohl gewinnen, die Riesenechse oder der Superaffe? Das ist die Frage, die sich die ganze Welt stellt! ",
    "hint1": "Der Film spielt in einer Zeit, in der Monster auf der Erde wandeln.",
    "hint2": "G_dz_ll_ vs. K_ng "
  },
  "movie": {...}
}

And this is how to answer a quiz via an API call:

curl -s -X POST https://movie-detectives.com/api/quiz/84c19425-c179-4198-9773-a8a1b71c9605/answer \
  -H 'Content-Type: application/json' \
  -d '{"answer": "Greenland"}' | jq .

{
  "quiz_id": "84c19425-c179-4198-9773-a8a1b71c9605",
  "question": {...},
  "movie": {...},
  "user_answer": "Greenland",
  "result": {
    "points": "3",
    "answer": "Congratulations! You got it! Greenland is the movie we were looking for. You're like a human GPS, always finding the right way!"
  }
}

Conclusion

After I finished the basic project, adding more personalities and languages was so easy with the modular prompt approach that I was impressed by the possibilities this opens up for game design and development. I could change this game from a pure educational game about movies into a comedy trivia "You Don't Know Jack"-like game within a minute by adding another personality.

Also, combining up-to-date Python functionality with validation libraries like Pydantic is very powerful and can be used to ensure good data quality for LLM input.

And there you have it, folks! You're now equipped to craft your own LLM-powered web application. Feeling inspired but need a starting point? Check out the open-source code for the Gemini Movie Detectives project:

– Github repository for backend: https://github.com/vojay-dev/gemini-movie-detectives-api
– Github repository for frontend: https://github.com/vojay-dev/gemini-movie-detectives-ui

The future of AI-powered applications is bright, and you're holding the paintbrush! Let's go make something remarkable. And if you need a break, feel free to try https://movie-detectives.com/.
Create an AI-Driven Movie Quiz with Gemini LLM, Python, FastAPI, Pydantic, RAG and more was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story. View the full article
-
- google gemini
- llms
-
Want to write more robust Python applications? Learn how to use Pydantic, a popular data validation library, to model and validate your data. View the full article
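As a taste of what the linked article covers, here is a minimal Pydantic sketch (model and field names are made up for illustration): a model declares the expected types, matching input is coerced and accepted, and bad input raises a ValidationError.

```python
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    name: str
    age: int

# Valid input: a numeric string like "36" is coerced to int by default.
user = User(name="Ada", age="36")

# Invalid input raises a ValidationError describing every bad field.
try:
    User(name="Ada", age="not a number")
    failed = False
except ValidationError:
    failed = True

print(user.age, failed)
```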
-
- data validation
- python
-
-
LangChain is used to build language models or chatbots that can interact with humans in a chat-like way. As the name LangChain suggests, these chat messages are linked through chains, and the user can also store them in memory. LangChain allows developers to use memory libraries that provide built-in classes or to customize their own memory. Quick Outline This post will show: How to Add a Custom Memory Type in LangChain Installing Frameworks Importing Libraries Building Custom Memory Configuring Prompt Template Testing the Model Conclusion How to Add a Custom Memory Type in LangChain? Adding a customized memory type in LangChain lets the user tailor memory behavior to the application's requirements and get the most out of it. To add a custom memory type in LangChain, simply go through the following steps: Step 1: Installing Frameworks First, install the LangChain framework to get started: pip install langchain Running the above command in a Python notebook installs the dependencies for LangChain, as displayed in the following snippet: Install the OpenAI module to get the libraries that are used to configure the LLM: pip install openai This guide uses the spaCy framework to design the custom memory type in LangChain, and the following command installs the module: pip install spacy The custom memory will store observations, such as previous chat messages, in a dictionary keyed by the entities that spaCy extracts.
The following command downloads spaCy's large English model (en_core_web_lg), which provides the NLP pipeline used here for entity extraction: !python -m spacy download en_core_web_lg The "os" and "getpass" libraries are imported to enter the API key from the OpenAI account and set up its environment: import os import getpass os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") Step 2: Importing Libraries The next step is to import the required libraries for customizing the memory type according to the chat model: from langchain.schema import BaseMemory from langchain.chains import ConversationChain from pydantic import BaseModel from langchain.llms import OpenAI from typing import List, Dict, Any Import the spaCy library, load the "en_core_web_lg" model and assign it to the "nlp" variable, which is the Natural Language Processing pipeline: import spacy nlp = spacy.load("en_core_web_lg") Step 3: Building Custom Memory After that, build the custom memory class by inheriting from BaseMemory and BaseModel. Then, configure the entities (collected/stored from the input) that can be stored in the memory, either as complete information or as a single unit. The memory is configured to contain all the entities from the document to optimize the performance of the memory and model: class SpacyEntityMemory(BaseMemory, BaseModel): """ Memory class for storing information about entities""" entities: dict = {} memory_key: str = "entities" def clear(self): self.entities = {} @property def memory_variables(self) -> List[str]: """ Initialize the variables provided to the query""" return [self.memory_key] #define the memory variables using the arguments def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]: """ Call the variables for memory i.e. 
entity key""" doc = nlp(inputs[list(inputs.keys())[0]]) #configure entities to be stored in the memory for an individual unit entities = [ self.entities[str(ent)] for ent in doc.ents if str(ent) in self.entities ] return {self.memory_key: "\n".join(entities)} #define the save_context() to use the memory def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None: """Store observation from this chat to the memory""" text = inputs[list(inputs.keys())[0]] doc = nlp(text) for ent in doc.ents: ent_str = str(ent) if ent_str in self.entities: self.entities[ent_str] += f"\n{text}" else: self.entities[ent_str] = text Step 4: Configuring Prompt Template After that, simply configure the prompt template that explains the structure of the input provided by the user/human: from langchain.prompts.prompt import PromptTemplate template = """The following is an interaction between a machine and a human It says it does not know If the machine does not know the answer The machine (AI) provides details from its context and if it does not understand the answer to any question it simply says sorry Entity info: {entities} Communication: Human: {input} AI:""" prompt = PromptTemplate(input_variables=["entities", "input"], template=template) Step 5: Testing the Model Before testing the model, simply configure the LLM using the OpenAI() method and set up the ConversationChain() function with arguments: llm = OpenAI(temperature=0) conversation = ConversationChain( llm=llm, prompt=prompt, verbose=True, memory=SpacyEntityMemory() ) Give information to the model using the input argument while calling the predict() method with the conversation variable: conversation.predict(input="Harrison likes machine learning") Output The model has absorbed the information and stored it in the memory and also posed the question related to the information for getting on with the conversation: The user can respond to the question from the model to add more information to the memory or test the 
memory by asking a question about the stored information: conversation.predict( input="What is Harrison's favorite subject?" ) The model gives the output based on the previous information and displays it on the screen, as the following snippet shows: That's all about adding a custom memory type in LangChain. Conclusion To add a custom memory type in LangChain, first install the required modules and import the libraries to build the custom memory. spaCy is the key library used in this guide to add a custom memory based on its NLP model. After that, configure the custom memory and the prompt template to give structure to the chat interface. Once the configuration is done, simply test the memory of the model by asking for information related to the stored data. View the full article
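The entity-store logic in SpacyEntityMemory does not actually depend on LangChain or spaCy. Stripped to the standard library, with naive capitalized-word detection standing in for spaCy's named-entity recognition, the same save/load idea looks roughly like this (class and method names are my own, not LangChain's):

```python
class NaiveEntityMemory:
    """Simplified stand-in for the SpacyEntityMemory class above."""

    def __init__(self):
        self.entities = {}  # entity -> accumulated observations

    @staticmethod
    def _extract(text):
        # Crude placeholder for spaCy NER: treat capitalized words as
        # entities, trimming punctuation and a possessive 's.
        return [w.strip(".,!?'s") for w in text.split() if w[:1].isupper()]

    def save_context(self, text):
        # Append the full message to every entity it mentions.
        for ent in self._extract(text):
            if ent in self.entities:
                self.entities[ent] += f"\n{text}"
            else:
                self.entities[ent] = text

    def load_memory(self, text):
        # Return stored observations for entities mentioned in the input.
        found = [self.entities[e] for e in self._extract(text) if e in self.entities]
        return "\n".join(found)

memory = NaiveEntityMemory()
memory.save_context("Harrison likes machine learning")
print(memory.load_memory("What is Harrison's favorite subject?"))
```

The real class does the same bookkeeping, but lets spaCy decide what counts as an entity and plugs into ConversationChain via the BaseMemory interface.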
-
In the world of Python programming, Pydantic’s create_model function stands out as a powerful tool for creating dynamic models with unmatched flexibility. This feature allows developers to construct data models effortlessly, customized to the specific needs of their applications. By enabling models to be created at runtime, create_model eliminates repetitive code and promotes efficient development. In this article, we will look into the mechanics of Pydantic’s create_model function, exploring its capabilities and demonstrating how it can streamline data modeling. Example 1: Creating Dynamic Models Using Pydantic’s Create_Model Function In this era of Python programming, where versatility and efficiency are essential, Pydantic’s create_model function emerges as a powerful tool in a developer’s toolkit. It allows us to create dynamic and customizable data models effortlessly, simplifying the often-complex process of defining structured data. Imagine constructing a digital Lego set where each piece represents a specific aspect of our data. With create_model, we can precisely arrange these data pieces, eliminating repetition and improving the code style. Let’s go on a journey through utilizing Pydantic’s create_model function to create dynamic models step by step. Import the necessary modules to begin. Ensure that the Pydantic library is installed (pip install pydantic). Next, import the required modules using the following commands: !pip install pydantic from pydantic import BaseModel, create_model from typing import ClassVar Define the base model. Before diving into dynamic models, let’s set a foundation by defining a base model. This base model contains standard fields that are shared across various dynamic models. For instance, consider an application that manages users with name, age, and email attributes.
Create the base model for this like the following: class BaseUser(BaseModel): name: str age: int email: str Creating the dynamic models is the next step. Imagine that we want to create models for different roles within our application, like “Admin” and “RegularUser”. Instead of duplicating the fields, we can efficiently derive these models from the base model: class AdminUser(BaseUser): is_admin: ClassVar[bool] = True class RegularUser(BaseUser): is_admin: ClassVar[bool] = False In this example, “is_admin” is a class-level flag that distinguishes the two roles. Inheriting from “BaseUser” ensures that both models share the base model’s fields, supporting consistency while minimizing repetition of the code (the same inheritance can also be expressed with create_model’s “__base__” parameter, as Example 2 shows). With our dynamic models set, let’s explore how to use them effectively. Begin by instantiating these models, providing the values for their respective fields: admin_data = { "name": "Admin Name", "age": 30, "email": "admin@example.com", } regular_data = { "name": "Regular Name", "age": 25, "email": "regular@example.com", } admin_user = AdminUser(**admin_data) regular_user = RegularUser(**regular_data) By passing the data dictionaries to the model constructors, Pydantic automatically validates the input data against the model’s defined fields, ensuring accuracy and consistency. Pydantic’s strength extends beyond field definitions: we can also include custom validation logic and methods in our dynamic models. For instance, extend the base model by adding a method to greet the users (placed inside the BaseUser class, as the full code shows): def greet_user(self): return f"Hello, {self.name}!" Now, both “AdminUser” and “RegularUser” models can access this method: print(admin_user.greet_user()) print(regular_user.greet_user()) Pydantic’s create_model function empowers developers to design dynamic models with exceptional flexibility and better style. By eliminating redundant code and supporting reusability, it simplifies the process of creating custom data models. Through our step-by-step exploration, we saw how to create dynamic models, inherit properties from base models, and employ these models effectively within applications. Full code with the observed output: !pip install pydantic from pydantic import BaseModel, create_model from typing import ClassVar class BaseUser(BaseModel): name: str age: int email: str def greet_user(self): return f"Hello, {self.name}!" class AdminUser(BaseUser): is_admin: ClassVar[bool] = True class RegularUser(BaseUser): is_admin: ClassVar[bool] = False admin_data = { "name": "Admin Name", "age": 30, "email": "admin@example.com", } regular_data = { "name": "Regular Name", "age": 25, "email": "regular@example.com", } admin_user = AdminUser(**admin_data) regular_user = RegularUser(**regular_data) print(admin_user.greet_user()) print(regular_user.greet_user()) Example 2: Empowering Dynamic Model Creation with Pydantic’s Create_Model Function In the dynamic world of Python programming, Pydantic’s create_model function emerges as a key instrument to design versatile and adaptable data models. This powerful feature enables developers to construct models quickly and tailor them precisely to their application’s requirements.
Through a step-by-step journey, let’s uncover how to use the power of Pydantic’s create_model function to create dynamic models that can transform the way we manage data. Lay the foundation to begin. Ensure that Pydantic is installed within our environment using the “pip install pydantic” command. Then, import the necessary modules: !pip install pydantic from pydantic import BaseModel, create_model import datetime Define the base model. A base model acts as a template that keeps the common fields shared across multiple models. Think of it as creating the basic layout for our dynamic models. For instance, consider an e-commerce scenario where we want to define a common set of attributes for both customers and products. Define the base model as follows: class BaseDataModel(BaseModel): created_at: str updated_at: str After we set the foundation, it’s time to craft the dynamic models using Pydantic’s create_model function. Imagine that we want to create specific models for customers and products, extending the base model with unique attributes. Here’s how we can do it: CustomerModel = create_model( "CustomerModel", age=(int, ...), email=(str, ...), __base__=BaseDataModel ) ProductModel = create_model( "ProductModel", price=(float, ...), quantity=(int, ...), __base__=BaseDataModel ) In this illustration, we design the “CustomerModel” and “ProductModel” by adding distinct fields to each model. Each field is defined as a (type, default) tuple, where “...” denotes that the field is required, and the “__base__” parameter makes both models inherit from “BaseDataModel”. After the dynamic models are designed, we will see how to apply them effectively within our application. Start by creating instances of these models and filling their fields: customer_data = { "created_at": "2023-01-15", "updated_at": "2023-08-01", "age": 28, "email": "customer@example.com" } product_data = { "created_at": "2023-05-10", "updated_at": "2023-08-20", "price": 49.99, "quantity": 100 } customer_instance = CustomerModel(**customer_data) product_instance = ProductModel(**product_data) Pydantic automatically validates the input data against the defined fields, ensuring data integrity and accuracy. Pydantic’s capabilities extend beyond field definitions: we can introduce custom validation logic and methods into our dynamic models. We can enhance the base model by including a method that calculates the days since creation (defined inside the BaseDataModel class, as the full code shows): def get_age(self): created = datetime.datetime.strptime(self.created_at, "%Y-%m-%d") today = datetime.datetime.today() age = (today - created).days return age Now, both “CustomerModel” and “ProductModel” can access this method: print(customer_instance.get_age()) print(product_instance.get_age()) In Python development, Pydantic’s create_model function emerges as a valuable asset, empowering developers to generate custom data models effortlessly. By combining a strong base model with dynamic model creation, Pydantic simplifies the process of managing diverse datasets with elegance and efficiency. This journey through the previous examples shows the adaptability that Pydantic supports, opening a new chapter of data modeling in Python programming.
Full code with the observed output: !pip install pydantic from pydantic import BaseModel, create_model import datetime class BaseDataModel(BaseModel): created_at: str updated_at: str def get_age(self): created = datetime.datetime.strptime(self.created_at, "%Y-%m-%d") today = datetime.datetime.today() age = (today - created).days return age CustomerModel = create_model( "CustomerModel", age=(int, ...), email=(str, ...), __base__=BaseDataModel ) ProductModel = create_model( "ProductModel", price=(float, ...), quantity=(int, ...), __base__=BaseDataModel ) customer_data = { "created_at": "2023-01-15", "updated_at": "2023-08-01", "age": 28, "email": "customer@example.com" } product_data = { "created_at": "2023-05-10", "updated_at": "2023-08-20", "price": 49.99, "quantity": 100 } customer_instance = CustomerModel(**customer_data) product_instance = ProductModel(**product_data) print(customer_instance.get_age()) print(product_instance.get_age()) Conclusion Pydantic’s create_model function presents a transformative approach to dynamic model creation in Python programming. Through properly explained examples, we unveiled the power of this feature, highlighting its ability to design the flexible models while minimizing the code repetition and eliminating redundancy. By constructing the dynamic models based on specific needs and effortlessly inheriting the properties from the base models, Pydantic simplifies the development process and promotes code elegance. View the full article
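A footnote to Example 1: its role models were built with plain subclassing, but the same shape can be produced directly with create_model and the __base__ parameter, as in Example 2. Note one difference from the article's version: here is_admin is a regular per-instance field with a default value rather than a ClassVar.

```python
from pydantic import BaseModel, create_model

class BaseUser(BaseModel):
    name: str
    age: int
    email: str

# Each keyword argument defines a field as a (type, default) tuple;
# __base__ makes the generated classes inherit BaseUser's fields.
AdminUser = create_model("AdminUser", is_admin=(bool, True), __base__=BaseUser)
RegularUser = create_model("RegularUser", is_admin=(bool, False), __base__=BaseUser)

admin = AdminUser(name="Admin Name", age=30, email="admin@example.com")
regular = RegularUser(name="Regular Name", age=25, email="regular@example.com")
print(admin.is_admin, regular.is_admin)
```

This variant is handy when the set of roles or fields is only known at runtime, which is the scenario create_model is designed for.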
-
pydantic How to Convert Pydantic Model to Dict: A Step-by-Step Guide
Linux Hint posted a topic in Linux
Converting a Pydantic model to a dictionary is a simple way to transform structured data into a format that’s easy to work with. Pydantic models help ensure data validity and structure. By converting them to dictionaries, you can access and manipulate the data more flexibly. To convert a Pydantic model to a dictionary, you can use the “dict()” method on a model instance. This method instantly transforms your structured data into a format that’s easy to manipulate and share. Converting a Pydantic Model to a Dictionary This example demonstrates the process of creating a Pydantic model class, instantiating it with attribute values and then converting the instance to a dictionary. Let’s understand the procedure step by step. To implement the example, we must first import the Pydantic library into our project. from pydantic import BaseModel The script starts by importing the “BaseModel” class from the Pydantic module. This class is the foundation for Pydantic models, which define organized data structures accompanied by validation and parsing functionality. After importing the required module, we define the structure of the Pydantic model class. class Person(BaseModel): Name: str Age: int Country: str Here, we define a new class named “Person”, inheriting from “BaseModel”. This class represents a data structure that is expected to have specific features. Then, we specify three attributes of the “Person” class, and they are defined using class-level variables. In this case, “Name”, “Age”, and “Country” are the attributes of the “Person” class. We associate each attribute with a specific data type: “str”, “int”, and “str”, respectively. Now that the structure of the model is defined, we create an instance of the “Person” class. p_instance = Person(Name="Alexander", Age=35, Country="England") In this line of code, “p_instance” is the variable name that is chosen to store the instance of the “Person” class that we’re creating. 
The “Person” refers to the “Person” class that we defined earlier using “BaseModel” as the base class. Then, the (Name=”Alexander”, Age=35, Country=”England”) part of the code contains the constructor parameters. It’s used to provide the values for the attributes of the “Person” class. We assign each attribute a value using the “attribute_name=value” syntax. In this constructor part, we assign the “Alexander” value to the “Name” attribute, the value of 35 to the “Age” attribute, and the “England” value to the “Country” attribute of the “Person” instance. To sum up, this line of code creates an instance of the “Person” class with specific attribute values. The “p_instance” variable now holds an instance of the “Person” class with the “Name”, “Age”, and “Country” attributes set to “Alexander”, 35, and “England”, respectively. We can use this instance to access and manipulate the data associated with a person’s information. Now that we have an instance of the “Person” class, we will see how to convert it to a dictionary using the “dict()” method. By invoking the “dict()” method, we convert the instance to a dictionary in the following line of code: p_dict = p_instance.dict() Here, “p_dict” is the variable name that we choose to store the resulting dictionary that we get from the conversion process. The “p_instance” variable stores an instance of the “Person” class that we created earlier. We then invoke the “dict()” method on the “p_instance” object. Calling this method converts the specified instance to a dictionary representation. When we call the “dict()” method on a Pydantic model instance like “p_instance”, it converts the instance’s attributes and their corresponding values into a dictionary. The resulting dictionary is assigned to the “p_dict” variable. 
The complete code for observation is provided here: from pydantic import BaseModel class Person(BaseModel): Name: str Age: int Country: str p_instance = Person(Name="Alexander", Age=35, Country="England") p_dict = p_instance.dict() Running this whole code results in the following output: We successfully converted an instance of the model class to a dictionary. The code in this example demonstrates how Pydantic models can be used to define structured data and how instances of those models can be conveniently converted to dictionaries for various purposes. After converting a Pydantic model to a dictionary, we gain the ability to manipulate the resulting data representation using a range of available options. These options provide precise control over which fields are included or excluded from the dictionary, tailoring the data to our needs. We will demonstrate some of the options that could help in refining the dictionary to match specific use cases. Excluding the Unset Fields from the Resultant Dictionary Here, we will see how to convert a Pydantic model instance to a dictionary while excluding the fields that were never explicitly set. p_dict = p_instance.dict(exclude_unset=True) In this line of code, “p_instance” is a Pydantic model instance that represents the data, and “p_dict” is the resulting dictionary. When we pass “exclude_unset=True” as an argument to the “dict()” method, it tells Pydantic to omit the attributes that were not explicitly set when the instance was created (to omit attributes whose values equal their defaults instead, use “exclude_defaults=True”). This is useful to generate a minimal representation of the data. For example, if you have a Pydantic model with a default value of 0 for an attribute, and that attribute is not explicitly set when creating an instance, using “exclude_unset=True” excludes that attribute from the resulting dictionary. 
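Because every field of the running “Person” example is set explicitly, exclude_unset has no visible effect there. A sketch with a hypothetical model that does have defaults (not from the article) shows the difference, including the related exclude_defaults option:

```python
from typing import Optional
from pydantic import BaseModel

# Hypothetical model with default values.
class Profile(BaseModel):
    name: str
    country: str = "Unknown"
    nickname: Optional[str] = None

p = Profile(name="Alexander", country="Unknown")  # country set explicitly

# exclude_unset drops fields never assigned (nickname); country stays,
# even though it equals its default, because it was set explicitly.
print(p.dict(exclude_unset=True))

# exclude_defaults instead drops fields whose value equals the default.
# (In Pydantic v2, model_dump() is the preferred replacement for dict().)
print(p.dict(exclude_defaults=True))
```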
Here is the complete code: from pydantic import BaseModel class Person(BaseModel): Name: str Age: int Country: str p_instance = Person(Name="Alexander", Age=35, Country="England") p_dict = p_instance.dict(exclude_unset=True) Excluding and Including the Specific Fields in the Resultant Dictionary We can also exclude specific fields while converting the model instance to a dictionary that we do not need in the resultant dictionary. The code to this is as follows: p_dict = p_instance.dict(exclude={"Country"}) In this script, “p_instance” is an instance of a Pydantic model and “p_dict” is a dictionary that will be created. By calling the “dict()” method with the “exclude={“Country”}” argument, the code generates a “p_dict” dictionary from the “p_instance” model instance, excluding the “Country” attribute. The complete code is: from pydantic import BaseModel class Person(BaseModel): Name: str Age: int Country: str p_instance = Person(Name="Alexander", Age=35, Country="England") p_dict = p_instance.dict(exclude={"Country"}) This results in a dictionary representation of the data without the excluded attribute. Similarly, we can specify which fields we want to include in the output dictionary using the “include” argument in the “dict()” method. p_dict = p_instance.dict(include={"Name"}) By applying the “dict()” method with the include={“Name”} argument, the code generates a dictionary (p_dict) from the model instance (p_instance), containing only the specified field (Name). This allows for the selective inclusion of data attributes in the resulting dictionary, focusing solely on the “Name” attribute in this case. The code for observation is: from pydantic import BaseModel class Person(BaseModel): Name: str Age: int Country: str p_instance = Person(Name="Alexander", Age=35, Country="England") p_dict = p_instance.dict(include={"Name"}) The generated output is as follows: Conclusion Converting a Pydantic model to a dictionary is a simple and easy process. 
This article provided you with a step-by-step guide for doing the conversion. We used the “dict()” method to convert the model to the dictionary. Moreover, after getting the converted dictionary, certain options can be applied to manipulate the dictionary to match the specific requirements. We demonstrated the utilization of some options in the created example. View the full article -
Image: Alice Lang, alicelang-creations@outlook.fr At Docker, we are incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. The following is a guest post by Docker community member Gabriel de Marmiesse. Are you working on something awesome with Docker? Send your contributions to William Quiviger (@william) on the Docker Community Slack and we might feature your work! The most common way to call and control Docker is by using the command line. With the increased usage of Docker, users want to call Docker from programming languages other than shell. One popular way to use Docker from Python has been to use docker-py. This library has had so much success that even docker-compose is written in Python, and leverages docker-py. The goal of docker-py though is not to replicate the Docker client (written in Golang), but to talk to the Docker Engine HTTP API. The Docker client is extremely complex and is hard to duplicate in another language. Because of this, a lot of features that were in the Docker client could not be made available in docker-py. Users would sometimes get frustrated because docker-py did not behave exactly like the CLI. Today, we’re presenting a new project built by Gabriel de Marmiesse from the Docker community: Python-on-whales. The goal of this project is to have a 1-to-1 mapping between the Docker CLI and the Python library. We do this by communicating with the Docker CLI instead of calling the Docker Engine HTTP API directly. If you need to call the Docker command line, use Python-on-whales. And if you need to call the Docker engine directly, use docker-py.
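The talk-to-the-CLI approach can be illustrated in a few lines of standard-library Python: run the command-line program as a subprocess, capture what it prints, and parse it. This is a sketch of the idea, not python-on-whales' actual implementation; the demonstration uses echo so it runs without Docker installed.

```python
import subprocess

def run_cli(*argv):
    """Run a command-line program and return its stdout.

    Mirrors the python-on-whales idea: instead of talking to an HTTP
    API, shell out to the CLI and parse its output.
    """
    result = subprocess.run(list(argv), capture_output=True, text=True, check=True)
    return result.stdout

# With Docker installed, something like run_cli("docker", "inspect", "ubuntu")
# would return JSON ready for json.loads(). A portable demonstration:
print(run_cli("echo", "hello from the CLI"))
```

The upside of wrapping the CLI is automatic feature parity with it; the trade-off is that you inherit the CLI's output formats instead of a stable API schema.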
In this post, we’ll take a look at some of the features that are not available in docker-py but are available in Python-on-whales: Building with Docker buildx Deploying to Swarm with docker stack Deploying to the local Engine with Compose Start by downloading Python-on-whales with pip install python-on-whales and you’re ready to rock! Docker Buildx Here we build a Docker image. Python-on-whales uses buildx by default and gives you the output in real time. >>> from python_on_whales import docker >>> my_image = docker.build(".", tags="some_name") [+] Building 1.6s (17/17) FINISHED => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 32B 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [internal] load metadata for docker.io/library/python:3.6 1.4s => [python_dependencies 1/5] FROM docker.io/library/python:3.6@sha256:293 0.0s => [internal] load build context 0.1s => => transferring context: 72.86kB 0.0s => CACHED [python_dependencies 2/5] RUN pip install typeguard pydantic re 0.0s => CACHED [python_dependencies 3/5] COPY tests/test-requirements.txt /tmp 0.0s => CACHED [python_dependencies 4/5] COPY requirements.txt /tmp/ 0.0s => CACHED [python_dependencies 5/5] RUN pip install -r /tmp/test-requirem 0.0s => CACHED [tests_ubuntu_install_without_buildx 1/7] RUN apt-get update && 0.0s => CACHED [tests_ubuntu_install_without_buildx 2/7] RUN curl -fsSL https: 0.0s => CACHED [tests_ubuntu_install_without_buildx 3/7] RUN add-apt-repositor 0.0s => CACHED [tests_ubuntu_install_without_buildx 4/7] RUN apt-get update & 0.0s => CACHED [tests_ubuntu_install_without_buildx 5/7] WORKDIR /python-on-wh 0.0s => CACHED [tests_ubuntu_install_without_buildx 6/7] COPY . . 0.0s => CACHED [tests_ubuntu_install_without_buildx 7/7] RUN pip install -e . 
0.0s => exporting to image 0.1s => => exporting layers 0.0s => => writing image sha256:e1c2382d515b097ebdac4ed189012ca3b34ab6be65ba0c 0.0s => => naming to docker.io/library/some_image_name Docker Stacks Here we deploy a simple Swarmpit stack on a local Swarm. You get a Stack object that has several methods: remove(), services(), ps(). >>> from python_on_whales import docker >>> docker.swarm.init() >>> swarmpit_stack = docker.stack.deploy("swarmpit", compose_files=["./docker-compose.yml"]) Creating network swarmpit_net Creating service swarmpit_influxdb Creating service swarmpit_agent Creating service swarmpit_app Creating service swarmpit_db >>> swarmpit_stack.services() [<python_on_whales.components.service.Service object at 0x7f9be5058d60>, <python_on_whales.components.service.Service object at 0x7f9be506d0d0>, <python_on_whales.components.service.Service object at 0x7f9be506d400>, <python_on_whales.components.service.Service object at 0x7f9be506d730>] >>> swarmpit_stack.remove() Docker Compose Here we show how we can run a Docker Compose application with Python-on-whales. Note that, behind the scenes, it uses the new version of Compose written in Golang. This version of Compose is still experimental. Take appropriate precautions. 
$ git clone https://github.com/dockersamples/example-voting-app.git $ cd example-voting-app $ python >>> from python_on_whales import docker >>> docker.compose.up(detach=True) Network "example-voting-app_back-tier" Creating Network "example-voting-app_back-tier" Created Network "example-voting-app_front-tier" Creating Network "example-voting-app_front-tier" Created example-voting-app_redis_1 Creating example-voting-app_db_1 Creating example-voting-app_db_1 Created example-voting-app_result_1 Creating example-voting-app_redis_1 Created example-voting-app_worker_1 Creating example-voting-app_vote_1 Creating example-voting-app_worker_1 Created example-voting-app_result_1 Created example-voting-app_vote_1 Created >>> for container in docker.compose.ps(): ... print(container.name, container.state.status) example-voting-app_vote_1 running example-voting-app_worker_1 running example-voting-app_result_1 running example-voting-app_redis_1 running example-voting-app_db_1 running >>> docker.compose.down() >>> print(docker.compose.ps()) [] Bonus section: Docker objects attributes as Python attributes All information that you can access with docker inspect is available as Python attributes: >>> from python_on_whales import docker >>> my_container = docker.run("ubuntu", ["sleep", "infinity"], detach=True) >>> my_container.state.started_at datetime.datetime(2021, 2, 18, 13, 55, 44, 358235, tzinfo=datetime.timezone.utc) >>> my_container.state.running True >>> my_container.kill() >>> my_container.remove() >>> my_image = docker.image.inspect("ubuntu") >>> print(my_image.config.cmd) ['/bin/bash'] What’s next for Python-on-whales ? We’re currently improving the integration of Python-on-whales with the new Compose in the Docker CLI (currently beta). You can consider that Python-on-whales is in beta. Some small API changes are still possible. We encourage the community to try it out and give feedback in the issues! 
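The attributes-from-inspect behavior shown in the bonus section can be approximated with the standard library alone: parse the inspect JSON and wrap every nested object in a SimpleNamespace so dictionary lookups become attribute access. The payload below is an abbreviated, hypothetical docker inspect output; unlike python-on-whales, this sketch keeps the raw PascalCase keys and does not parse timestamps.

```python
import json
from types import SimpleNamespace

# Abbreviated, hypothetical `docker inspect` output for illustration.
raw = '{"State": {"Status": "running", "Running": true}, "Config": {"Cmd": ["/bin/bash"]}}'

# object_hook wraps every JSON object in a SimpleNamespace,
# turning dict lookups into attribute access.
container = json.loads(raw, object_hook=lambda d: SimpleNamespace(**d))

print(container.State.Status)   # running
print(container.State.Running)  # True
print(container.Config.Cmd)     # ['/bin/bash']
```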
To learn more about Python-on-whales: Documentation Github repository The post Guest Post: Calling the Docker CLI from Python with Python-on-whales appeared first on Docker Blog. View the full article
-
Forum Statistics
67.4k Total Topics
65.3k Total Posts