Showing results for tags 'python'.

  1. The quality of your data analysis and the insights derived from it depends directly on the quality of the data you feed in. This is why data cleaning is crucial to ensuring your datasets are accurate, consistent, and reliable for further analysis. Python, a versatile programming language, has many tools with various functionalities to streamline and optimize this […] View the full article
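As a small taste of what such cleaning looks like in practice, here is a minimal standard-library sketch; the records and cleaning rules are invented for illustration and are not taken from the linked article:

```python
# Minimal data-cleaning sketch using only the standard library.
# Records and rules are illustrative assumptions, not from the article.
def clean_records(records):
    seen = set()
    cleaned = []
    for rec in records:
        name = rec.get("name", "").strip().title()  # normalize whitespace and case
        if not name:       # drop rows with a missing name
            continue
        if name in seen:   # drop duplicates after normalization
            continue
        seen.add(name)
        cleaned.append({"name": name, "age": int(rec.get("age", 0) or 0)})
    return cleaned

raw = [
    {"name": "  alice ", "age": "31"},
    {"name": "ALICE", "age": "31"},   # duplicate once normalized
    {"name": "", "age": "22"},        # missing name
    {"name": "bob", "age": None},     # missing age, defaults to 0
]
print(clean_records(raw))  # [{'name': 'Alice', 'age': 31}, {'name': 'Bob', 'age': 0}]
```

Libraries like pandas wrap these steps (deduplication, normalization, missing-value handling) in far more capable form, but the underlying logic is the same.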
  2. This is a collection of free courses, books, projects, repositories, cheat sheets, and online compilers for Python to help you get started and gain experience. View the full article
  3. PyTorch's flexibility and dynamic nature make it a popular choice for deep learning researchers and practitioners. XLA, developed by Google, is a specialized compiler designed to optimize linear algebra computations, the foundation of deep learning models. PyTorch/XLA offers the best of both worlds: the user experience and ecosystem advantages of PyTorch, with the compiler performance of XLA.

[Image: PyTorch/XLA stack diagram]

We are excited to launch PyTorch/XLA 2.3 this week. The 2.3 release brings with it even more productivity, performance, and usability improvements.

Why PyTorch/XLA?

Before we get into the release updates, here's a short overview of why PyTorch/XLA is great for model training, fine-tuning, and serving. The combination of PyTorch and XLA provides key advantages:

- Easy performance: Retain PyTorch's intuitive, Pythonic flow while gaining significant and easy performance improvements through the XLA compiler. For example, PyTorch/XLA produces a throughput of 5000 tokens/second while fine-tuning Gemma and Llama 2 7B models and reduces the cost of serving down to $0.25 per million tokens.
- Ecosystem advantage: Seamlessly access PyTorch's extensive resources, including tools, pretrained models, and its large community.

These benefits underscore the value of PyTorch/XLA. Lightricks shares the following feedback on their experience with PyTorch/XLA 2.2:

"By leveraging Google Cloud's TPU v5p, Lightricks has achieved a remarkable 2.5X speedup in training our text-to-image and text-to-video models compared to TPU v4. With the incorporation of PyTorch XLA's gradient checkpointing, we've effectively addressed memory bottlenecks, leading to improved memory performance and speed. Additionally, autocasting to bf16 has provided crucial flexibility, allowing certain parts of our graph to operate on fp32, optimizing our model's performance. The XLA cache feature, undoubtedly the highlight of PyTorch XLA 2.2, has saved us significant development time by eliminating compilation waits. These advancements have not only streamlined our development process, making iterations faster, but also enhanced video consistency significantly. This progress is pivotal in keeping Lightricks at the forefront of the generative AI sector, with LTX Studio showcasing these technological leaps." - Yoav HaCohen, Research team lead, Lightricks

What's in the 2.3 release: distributed training, dev experience, and GPUs

PyTorch/XLA 2.3 keeps us current with the PyTorch Foundation's 2.3 release from earlier this week and offers notable upgrades from PyTorch/XLA 2.2. Here's what to expect:

1. Distributed training improvements
- SPMD with FSDP: Fully Sharded Data Parallel (FSDP) support enables you to scale large models. The new Single Program, Multiple Data (SPMD) implementation in 2.3 integrates compiler optimizations for faster, more efficient FSDP.
- Pallas integration: For maximum control, PyTorch/XLA + Pallas lets you write custom kernels specifically tuned for TPUs.

2. Smoother development
- SPMD auto-sharding: SPMD automates model distribution across devices. Auto-sharding further simplifies this process, eliminating the need for manual tensor distribution. In this release, this feature is experimental, supporting XLA:TPU and single-host training.
- Distributed checkpointing: This makes long training sessions less risky. Asynchronous checkpointing saves your progress in the background, protecting against potential hardware failures.

[Image: PyTorch/XLA auto-sharding architecture]

3. Hello, GPUs!
- SPMD XLA:GPU support: We have extended the benefits of SPMD parallelization to GPUs, making scaling easier, especially when handling large models or datasets.

Start planning your upgrade

PyTorch/XLA continues to evolve, streamlining the creation and deployment of powerful deep learning models. The 2.3 release emphasizes improved distributed training, a smoother development experience, and broader GPU support. If you're in the PyTorch ecosystem and seeking performance optimization, PyTorch/XLA 2.3 is worth exploring!

Stay up to date, find installation instructions, or get support on the official PyTorch/XLA repository on GitHub: https://github.com/pytorch/xla

PyTorch/XLA is also well integrated into the AI Hypercomputer stack, which optimizes AI training, fine-tuning, and serving performance end-to-end at every layer of the stack. Ask your sales representative about how you can apply these capabilities within your own organization. View the full article
  4. Looking to level up your Python skills without spending a dime? Check out this article featuring 5 advanced Python courses that you can take for free! View the full article
  5. Looking to level up your Python skills and ace coding interviews? Start practicing today on these platforms. View the full article
  6. Discover the basics of using Gemini with Python via VertexAI, creating APIs with FastAPI, data validation with Pydantic, and the fundamentals of Retrieval-Augmented Generation (RAG).

In this article, I share some of the basics of creating an LLM-driven web application using various technologies, such as Python, FastAPI, Pydantic, VertexAI, and more. You will learn how to create such a project from the very beginning and get an overview of the underlying concepts, including Retrieval-Augmented Generation (RAG).

Disclaimer: I am using data from The Movie Database within this project. The API is free to use for non-commercial purposes and complies with the Digital Millennium Copyright Act (DMCA). For further information about TMDB data usage, please read the official FAQ.

Table of contents:
- Inspiration
- System Architecture
- Understanding Retrieval-Augmented Generation (RAG)
- Python projects with Poetry
- Create the API with FastAPI
- Data validation and quality with Pydantic
- TMDB client with httpx
- Gemini LLM client with VertexAI
- Modular prompt generator with Jinja
- Frontend
- API examples
- Conclusion

The best way to share this knowledge is through a practical example. Hence, I'll use my project, Gemini Movie Detectives, to cover the various aspects. The project was created as part of the Google AI Hackathon 2024, which is still running while I am writing this.

Gemini Movie Detectives is a project aimed at leveraging the power of the Gemini Pro model via VertexAI to create an engaging quiz game using the latest movie data from The Movie Database (TMDB). Part of the project was also to make it deployable with Docker and to create a live version. Try it yourself: movie-detectives.com. Keep in mind that this is a simple prototype, so there might be unexpected issues. Also, I had to add some limitations in order to control costs that might be generated by using GCP and VertexAI.
The project is fully open-source and is split into two separate repositories:

- GitHub repository for the backend: https://github.com/vojay-dev/gemini-movie-detectives-api
- GitHub repository for the frontend: https://github.com/vojay-dev/gemini-movie-detectives-ui

The focus of this article is the backend project and its underlying concepts; it will therefore only briefly explain the frontend and its components.

Inspiration

Growing up as a passionate gamer and now working as a Data Engineer, I've always been drawn to the intersection of gaming and data, and with this project I combined those two passions. Back in the '90s, I always enjoyed the video game series You Don't Know Jack, a delightful blend of trivia and comedy that not only entertained but also taught me a thing or two. Generally, the use of games for educational purposes is another concept that fascinates me. In 2023, I organized a workshop to teach kids and young adults game development. They learned about the mathematical concepts behind collision detection, yet they had fun because everything was framed in the context of gaming. It was eye-opening that gaming is not only a huge market but also holds great potential for knowledge sharing.

With this project, called Movie Detectives, I aim to showcase the magic of Gemini, and AI in general, in crafting engaging trivia and educational games, but also how game design can profit from these technologies in general. By feeding the Gemini LLM accurate and up-to-date movie metadata, I could ensure the accuracy of the questions it generates. This is important because, without this Retrieval-Augmented Generation (RAG) methodology to enrich queries with real-time metadata, there is a risk of propagating misinformation, a typical pitfall when using AI for this purpose.
Another game-changer lies in the modular prompt generation framework I've crafted using Jinja templates. It's like having a Swiss Army knife for game design: effortlessly swapping show-master personalities to tailor the game experience. And with the language module, translating the quiz into multiple languages is a breeze, eliminating the need for costly translation processes. From a business standpoint, this modularization opens doors to a wider customer base, transcending language barriers without breaking a sweat. And personally, I've experienced firsthand the transformative power of these modules. Switching from the default quiz master to the dad-joke quiz master was a riot, a nostalgic nod to the heyday of You Don't Know Jack, and a testament to the versatility of this project.

[Image: Movie Detectives, example with the Santa Claus personality]

System Architecture

Before we jump into details, let's get an overview of how the application was built.

Tech Stack: Backend
- Python 3.12 + FastAPI for API development
- httpx for TMDB integration
- Jinja templating for modular prompt generation
- Pydantic for data modeling and validation
- Poetry for dependency management
- Docker for deployment
- TMDB API for movie data
- VertexAI and Gemini for generating quiz questions and evaluating answers
- Ruff as linter and code formatter, together with pre-commit hooks
- GitHub Actions to automatically run tests and the linter on every push

Tech Stack: Frontend
- VueJS 3.4 as the frontend framework
- Vite for frontend tooling

Essentially, the application fetches up-to-date movie metadata from an external API (TMDB), constructs a prompt based on different modules (personality, language, …), enriches this prompt with the metadata, and that way uses Gemini to initiate a movie quiz in which the user has to guess the correct title.
The backend infrastructure is built with FastAPI and Python, employing the Retrieval-Augmented Generation (RAG) methodology to enrich queries with real-time metadata. Utilizing Jinja templating, the backend modularizes prompt generation into base, personality, and data enhancement templates, enabling the generation of accurate and engaging quiz questions.

The frontend is powered by Vue 3 and Vite, supported by daisyUI and Tailwind CSS for efficient frontend development. Together, these tools provide users with a sleek and modern interface for seamless interaction with the backend.

In Movie Detectives, quiz answers are interpreted by the LLM once again, allowing for dynamic scoring and personalized responses. This showcases the potential of integrating an LLM with RAG in game design and development, paving the way for truly individualized gaming experiences. Furthermore, it demonstrates the potential for creating engaging trivia or educational games involving LLMs. Adding and changing personalities or languages is as easy as adding more Jinja template modules. With very little effort, this can change the full game experience, reducing the effort for developers.

[Image: System overview]

As can be seen in the overview, Retrieval-Augmented Generation (RAG) is one of the essential ideas of the backend. Let's have a closer look at this particular paradigm.

Understanding Retrieval-Augmented Generation (RAG)

In the realm of Large Language Models (LLMs) and AI, one paradigm becoming more and more popular is Retrieval-Augmented Generation (RAG). But what does RAG entail, and how does it influence the landscape of AI development? At its essence, RAG enhances LLM systems by incorporating external data to enrich their predictions. This means you pass relevant context to the LLM as an additional part of the prompt. But how do you find relevant context?
Usually, this data can be retrieved automatically from a database with vector search or from dedicated vector databases. Vector databases are especially useful, since they store data in a way that allows similar data to be queried quickly. The LLM then generates the output based on both the query and the retrieved documents.

Picture this: you have an LLM capable of generating text based on a given prompt. RAG takes this a step further by infusing additional context from external sources, like up-to-date movie data, to enhance the relevance and accuracy of the generated text. Let's break down the key components of RAG:

- LLMs: LLMs serve as the backbone of RAG workflows. These models, trained on vast amounts of text data, possess the ability to understand and generate human-like text.
- Vector indexes for contextual enrichment: A crucial aspect of RAG is the use of vector indexes, which store embeddings of text data in a format understandable by LLMs. These indexes allow for efficient retrieval of relevant information during the generation process. In the context of this project, this could be a database of movie metadata.
- Retrieval process: RAG involves retrieving pertinent documents or information based on the given context or prompt. This retrieved data acts as additional input for the LLM, supplementing its understanding and enhancing the quality of generated responses. This could be getting all relevant information known and connected to a specific movie.
- Generative output: With the combined knowledge from both the LLM and the retrieved context, the system generates text that is not only coherent but also contextually relevant, thanks to the augmented data.

[Image: RAG architecture]

While in the Gemini Movie Detectives project the prompt is enhanced with external API data from The Movie Database, RAG typically involves the use of vector indexes to streamline this process.
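The retrieve-then-augment flow can be sketched in a few lines. This is a toy illustration, not the project's code: it uses plain word overlap in place of a real vector index, and the names retrieve and build_prompt are invented for the demo:

```python
# Toy RAG sketch: word-overlap retrieval stands in for a vector index,
# and the augmented prompt would be sent to the LLM in a real system.
def retrieve(query, documents, top_k=1):
    """Return the top_k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, context_docs):
    """Augment the user query with retrieved context, as RAG does."""
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Matrix is a 1999 science fiction film about a simulated reality.",
    "Titanic is a 1997 romance and disaster film.",
]
query = "What year was The Matrix released?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

A production system would replace the overlap score with embedding similarity from a vector database, but the shape of the pipeline (retrieve relevant context, prepend it to the prompt, then generate) is exactly this.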
Full-scale RAG setups also work with much more complex documents and a much higher amount of data for enhancement. The indexes act like signposts, guiding the system to relevant external sources quickly. This project is therefore a mini version of RAG, but it demonstrates the basic idea and the power of external data to augment LLM capabilities.

In more general terms, RAG is a very important concept, especially when crafting trivia quizzes or educational games using LLMs like Gemini. It can reduce the risk of factually wrong statements, wrong questions, or misinterpreted answers from the users. Here are some open-source projects that might be helpful when approaching RAG in one of your own projects:

- txtai: All-in-one open-source embeddings database for semantic search, LLM orchestration, and language model workflows.
- LangChain: A framework for developing applications powered by large language models (LLMs).
- Qdrant: Vector search engine for the next generation of AI applications.
- Weaviate: A cloud-native, open-source vector database that is robust, fast, and scalable.

Of course, given the potential value of this approach for LLM-based applications, there are many more open- and closed-source alternatives, but these should be enough to get your research on the topic started.

Python projects with Poetry

Now that the main concepts are clear, let's have a closer look at how the project was created and how dependencies are managed in general. The three main tasks Poetry helps you with are build, publish, and track: the idea is to have a deterministic way to manage dependencies, to share your project, and to track dependency states.

Poetry also handles the creation of virtual environments for you. By default, those live in a centralized folder within your system.
However, if you prefer to have the virtual environment of a project inside the project folder, like I do, it is a simple config change:

    poetry config virtualenvs.in-project true

With poetry new you can then create a new Python project. It will create a virtual environment linked to your system's default Python. If you combine this with pyenv, you get a flexible way to create projects using specific versions. Alternatively, you can also tell Poetry directly which Python version to use: poetry env use /full/path/to/python. Once you have a new project, you can use poetry add to add dependencies to it. With this, I created the project for Gemini Movie Detectives:

    poetry config virtualenvs.in-project true
    poetry new gemini-movie-detectives-api
    cd gemini-movie-detectives-api
    poetry add 'uvicorn[standard]'
    poetry add fastapi
    poetry add pydantic-settings
    poetry add httpx
    poetry add 'google-cloud-aiplatform>=1.38'
    poetry add jinja2

The metadata about your project, including its dependencies with the respective versions, is stored in the pyproject.toml and poetry.lock files. I added more dependencies later, which resulted in the following pyproject.toml for the project:

    [tool.poetry]
    name = "gemini-movie-detectives-api"
    version = "0.1.0"
    description = "Use Gemini Pro LLM via VertexAI to create an engaging quiz game incorporating TMDB API data"
    authors = ["Volker Janz <volker@janz.sh>"]
    readme = "README.md"

    [tool.poetry.dependencies]
    python = "^3.12"
    fastapi = "^0.110.1"
    uvicorn = {extras = ["standard"], version = "^0.29.0"}
    python-dotenv = "^1.0.1"
    httpx = "^0.27.0"
    pydantic-settings = "^2.2.1"
    google-cloud-aiplatform = ">=1.38"
    jinja2 = "^3.1.3"
    ruff = "^0.3.5"
    pre-commit = "^3.7.0"

    [build-system]
    requires = ["poetry-core"]
    build-backend = "poetry.core.masonry.api"

Create the API with FastAPI

FastAPI is a Python framework that allows for rapid API development. Built on open standards, it offers a seamless experience without new syntax to learn.
With automatic documentation generation, robust validation, and integrated security, FastAPI streamlines development while ensuring great performance.

Implementing the API for the Gemini Movie Detectives project, I simply started from a Hello World application and extended it from there. Here is how to get started:

    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/")
    def read_root():
        return {"Hello": "World"}

Assuming you also keep the virtual environment within the project folder as .venv/ and use uvicorn, this is how to start the API with the reload feature enabled, in order to test code changes without the need for a restart:

    source .venv/bin/activate
    uvicorn gemini_movie_detectives_api.main:app --reload
    curl -s localhost:8000 | jq .

If you have not yet installed jq, I highly recommend doing so now. I might cover this wonderful JSON Swiss Army knife in a future article. The response is the JSON object {"Hello": "World"}. From here, you can develop your API endpoints as needed.
This is how the API endpoint implementation to start a movie quiz in Gemini Movie Detectives looks, for example:

    @app.post('/quiz')
    @rate_limit
    @retry(max_retries=settings.quiz_max_retries)
    def start_quiz(quiz_config: QuizConfig = QuizConfig()):
        movie = tmdb_client.get_random_movie(
            page_min=_get_page_min(quiz_config.popularity),
            page_max=_get_page_max(quiz_config.popularity),
            vote_avg_min=quiz_config.vote_avg_min,
            vote_count_min=quiz_config.vote_count_min
        )

        if not movie:
            logger.info('could not find movie with quiz config: %s', quiz_config.dict())
            raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail='No movie found with given criteria')

        try:
            genres = [genre['name'] for genre in movie['genres']]

            prompt = prompt_generator.generate_question_prompt(
                movie_title=movie['title'],
                language=get_language_by_name(quiz_config.language),
                personality=get_personality_by_name(quiz_config.personality),
                tagline=movie['tagline'],
                overview=movie['overview'],
                genres=', '.join(genres),
                budget=movie['budget'],
                revenue=movie['revenue'],
                average_rating=movie['vote_average'],
                rating_count=movie['vote_count'],
                release_date=movie['release_date'],
                runtime=movie['runtime']
            )

            chat = gemini_client.start_chat()

            logger.debug('starting quiz with generated prompt: %s', prompt)
            gemini_reply = gemini_client.get_chat_response(chat, prompt)
            gemini_question = gemini_client.parse_gemini_question(gemini_reply)

            quiz_id = str(uuid.uuid4())
            session_cache[quiz_id] = SessionData(
                quiz_id=quiz_id,
                chat=chat,
                question=gemini_question,
                movie=movie,
                started_at=datetime.now()
            )

            return StartQuizResponse(quiz_id=quiz_id, question=gemini_question, movie=movie)
        except GoogleAPIError as e:
            raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f'Google API error: {e}')
        except Exception as e:
            raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f'Internal server error: {e}')

Within this code, you can already see three of the main components of the backend:
- tmdb_client: A client I implemented using httpx to fetch data from The Movie Database (TMDB).
- prompt_generator: A class that helps to generate modular prompts based on Jinja templates.
- gemini_client: A client to interact with the Gemini LLM via VertexAI in Google Cloud.

We will look at these components in detail later, but first some more helpful insights regarding the usage of FastAPI.

FastAPI makes it really easy to define the HTTP method and the data to be transferred to the backend. For this particular function, I expect a POST request, as it creates a new quiz. This can be done with the post decorator:

    @app.post('/quiz')

Also, I am expecting some data within the request, sent as JSON in the body. In this case, I am expecting an instance of QuizConfig as JSON. I simply defined QuizConfig as a subclass of BaseModel from Pydantic (covered later), and with that, I can pass it in the API function and FastAPI will do the rest:

    class QuizConfig(BaseModel):
        vote_avg_min: float = Field(5.0, ge=0.0, le=9.0)
        vote_count_min: float = Field(1000.0, ge=0.0)
        popularity: int = Field(1, ge=1, le=3)
        personality: str = Personality.DEFAULT.name
        language: str = Language.DEFAULT.name

    # ...
    def start_quiz(quiz_config: QuizConfig = QuizConfig()):

Furthermore, you might notice two custom decorators:

    @rate_limit
    @retry(max_retries=settings.quiz_max_retries)

I implemented these to reduce duplicate code. They wrap the API function to retry it in case of errors and to introduce a global rate limit on how many movie quizzes can be started per day.

What I also liked personally is the error handling with FastAPI.
You can simply raise an HTTPException, give it the desired status code, and the user will then receive a proper response, for example, if no movie could be found with a given configuration:

    raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail='No movie found with given criteria')

With this, you should have an overview of creating an API like the one for Gemini Movie Detectives with FastAPI. Keep in mind: all code is open-source, so feel free to have a look at the API repository on GitHub.

Data validation and quality with Pydantic

One of the main challenges with today's AI/ML projects is data quality. And that applies not only to ETL/ELT pipelines, which prepare datasets to be used in model training or prediction, but also to the AI/ML application itself. Using Python, for example, usually enables Data Engineers and Scientists to get a reasonable result with little code, but being (mostly) dynamically typed, Python lacks data validation when used in a naive way.

That is why in this project, I combined FastAPI with Pydantic, a powerful data validation library for Python. The goal was to make the API lightweight but strict and strong when it comes to data quality and validation. Instead of plain dictionaries, for example, the Movie Detectives API strictly uses custom classes inherited from the BaseModel provided by Pydantic. This is the configuration for a quiz, for example:

    class QuizConfig(BaseModel):
        vote_avg_min: float = Field(5.0, ge=0.0, le=9.0)
        vote_count_min: float = Field(1000.0, ge=0.0)
        popularity: int = Field(1, ge=1, le=3)
        personality: str = Personality.DEFAULT.name
        language: str = Language.DEFAULT.name

This example illustrates how not only the correct type is ensured, but also how further validation is applied to the actual values.
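To see this validation in action, here is a minimal standalone sketch. It assumes only that Pydantic is installed; the project's Personality and Language enums are left out so the snippet is self-contained:

```python
# Minimal sketch of Pydantic field validation as described above.
# The project's Personality and Language fields are omitted for brevity.
from pydantic import BaseModel, Field, ValidationError

class QuizConfig(BaseModel):
    vote_avg_min: float = Field(5.0, ge=0.0, le=9.0)
    vote_count_min: float = Field(1000.0, ge=0.0)
    popularity: int = Field(1, ge=1, le=3)

config = QuizConfig()          # defaults satisfy all constraints
print(config.popularity)       # 1

try:
    QuizConfig(popularity=7)   # violates le=3, rejected at construction time
except ValidationError:
    print("rejected: popularity must be between 1 and 3")
```

The point is that invalid data never makes it past object construction, so every function receiving a QuizConfig can trust its contents.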
Furthermore, up-to-date Python features, like StrEnum, are used to distinguish certain types, like personalities:

    class Personality(StrEnum):
        DEFAULT = 'default.jinja'
        CHRISTMAS = 'christmas.jinja'
        SCIENTIST = 'scientist.jinja'
        DAD = 'dad.jinja'

Also, duplicate code is avoided by defining custom decorators. For example, the following decorator limits the number of quiz sessions per day, to keep control over GCP costs:

    call_count = 0
    last_reset_time = datetime.now()

    def rate_limit(func: callable) -> callable:
        @wraps(func)
        def wrapper(*args, **kwargs) -> callable:
            global call_count
            global last_reset_time

            # reset call count if the day has changed
            if datetime.now().date() > last_reset_time.date():
                call_count = 0
                last_reset_time = datetime.now()

            if call_count >= settings.quiz_rate_limit:
                raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail='Daily limit reached')

            call_count += 1
            return func(*args, **kwargs)

        return wrapper

It is then simply applied to the related API function:

    @app.post('/quiz')
    @rate_limit
    @retry(max_retries=settings.quiz_max_retries)
    def start_quiz(quiz_config: QuizConfig = QuizConfig()):

The combination of up-to-date Python features and libraries, such as FastAPI, Pydantic, and Ruff, makes the backend less verbose but still very stable, and ensures a certain data quality, so that the LLM output has the expected quality.

TMDB client with httpx

The TMDB client class uses httpx to perform requests against the TMDB API. httpx is a rising star in the world of Python libraries. While requests has long been the go-to choice for making HTTP requests, httpx offers a valid alternative. One of its key strengths is asynchronous functionality: httpx allows you to write code that can handle multiple requests concurrently, potentially leading to significant performance improvements in applications that deal with a high volume of HTTP interactions. Additionally, httpx aims for broad compatibility with requests, making it easier for developers to pick it up.
In the case of Gemini Movie Detectives, there are two main requests:

- get_movies: Get a list of random movies based on specific settings, like the average number of votes.
- get_movie_details: Get details for a specific movie to be used in a quiz.

In order to reduce the number of external requests, the latter uses the lru_cache decorator, which stands for "Least Recently Used cache". It caches the results of function calls so that if the same inputs occur again, the function does not have to recompute the result. Instead, it returns the cached result, which can significantly improve the performance of the program, especially for functions with expensive computations. In our case, we cache the details for 1024 movies, so if two players get the same movie, we do not need to make a request again:

    @lru_cache(maxsize=1024)
    def get_movie_details(self, movie_id: int):
        response = httpx.get(f'https://api.themoviedb.org/3/movie/{movie_id}', headers={
            'Authorization': f'Bearer {self.tmdb_api_key}'
        }, params={
            'language': 'en-US'
        })

        movie = response.json()
        movie['poster_url'] = self.get_poster_url(movie['poster_path'])

        return movie

Accessing data from The Movie Database (TMDB) is free for non-commercial use; you can simply generate an API key and start making requests.

Gemini LLM client with VertexAI

Before Gemini via VertexAI can be used, you need a Google Cloud project with VertexAI enabled and a Service Account with sufficient access, together with its JSON key file.

After creating a new project, navigate to APIs & Services -> Enable APIs and services -> search for the VertexAI API -> Enable.

To create a Service Account, navigate to IAM & Admin -> Service Accounts -> Create service account. Choose a proper name and go to the next step.

Now ensure to assign the account the pre-defined role Vertex AI User.
Finally, you can generate and download the JSON key file by clicking on the new user -> Keys -> Add Key -> Create new key -> JSON. With this file, you are good to go.

Using Gemini from Google with Python via VertexAI starts by adding the necessary dependency to the project:

    poetry add 'google-cloud-aiplatform>=1.38'

With that, you can import and initialize vertexai with your JSON key file. You can then load a model, such as Gemini 1.0 Pro, and start a chat session like this:

    import vertexai
    from google.oauth2.service_account import Credentials
    from vertexai.generative_models import GenerativeModel

    project_id = "my-project-id"
    location = "us-central1"
    credentials = Credentials.from_service_account_file("credentials.json")
    model_name = "gemini-1.0-pro"

    vertexai.init(project=project_id, location=location, credentials=credentials)
    model = GenerativeModel(model_name)
    chat_session = model.start_chat()

You can now use chat.send_message() to send a prompt to the model.
However, since you get the response in chunks of data, I recommend using a little helper function, so that you simply get the full response as one string:

    def get_chat_response(chat: ChatSession, prompt: str) -> str:
        text_response = []
        responses = chat.send_message(prompt, stream=True)
        for chunk in responses:
            text_response.append(chunk.text)
        return ''.join(text_response)

A full example can then look like this:

    import vertexai
    from google.oauth2.service_account import Credentials
    from vertexai.generative_models import GenerativeModel, ChatSession

    project_id = "my-project-id"
    location = "us-central1"
    credentials = Credentials.from_service_account_file("credentials.json")
    model_name = "gemini-1.0-pro"

    vertexai.init(project=project_id, location=location, credentials=credentials)
    model = GenerativeModel(model_name)
    chat_session = model.start_chat()

    def get_chat_response(chat: ChatSession, prompt: str) -> str:
        text_response = []
        responses = chat.send_message(prompt, stream=True)
        for chunk in responses:
            text_response.append(chunk.text)
        return ''.join(text_response)

    response = get_chat_response(
        chat_session,
        "How to say 'you are awesome' in Spanish?"
    )
    print(response)

Running this, Gemini gave me the response: Eres increíble. I agree with Gemini.

Another hint when using this: you can also configure the model generation by passing a configuration to the generation_config parameter of the send_message function. For example:

    generation_config = {
        'temperature': 0.5
    }

    responses = chat.send_message(
        prompt,
        generation_config=generation_config,
        stream=True
    )

I am using this in Gemini Movie Detectives to set the temperature to 0.5, which gave me the best results. In this context, temperature means: how creative are the responses generated by Gemini. The value must be between 0.0 and 1.0; closer to 1.0 means more creativity.
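The chunk-joining logic of the helper above can be exercised without any Google Cloud setup by replacing the VertexAI chat with a fake chunk iterator. FakeChunk is invented for this demo; only the .text attribute mirrors the real streamed chunks:

```python
# The streamed-response joining pattern, runnable anywhere: a list of fake
# chunks stands in for the iterator returned by chat.send_message(stream=True).
class FakeChunk:
    def __init__(self, text: str):
        self.text = text  # real VertexAI chunks also expose a .text attribute

def join_stream(chunks) -> str:
    """Collect the text of each streamed chunk into one string."""
    text_response = []
    for chunk in chunks:
        text_response.append(chunk.text)
    return ''.join(text_response)

stream = [FakeChunk("Eres "), FakeChunk("increíble")]
print(join_stream(stream))  # Eres increíble
```

The same accumulate-and-join shape works for any streaming API that yields partial text.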
One of the main challenges, apart from sending a prompt and receiving the reply from Gemini, is to parse the reply in order to extract the relevant information. One learning from the project is:

Specify a format for Gemini which does not rely on exact words but uses key symbols to separate information elements.

For example, the question prompt for Gemini contains this instruction:

Your reply must only consist of three lines! You must only reply strictly using the following template for the three lines:
Question: <Your question>
Hint 1: <The first hint to help the participants>
Hint 2: <The second hint to get the title more easily>

The naive approach would be to parse the answer by looking for a line that starts with Question:. However, if we use another language, like German, the reply would look like: Antwort:. Instead, focus on the structure and key symbols. Read the reply like this:

- It has 3 lines
- The first line is the question
- The second line is the first hint
- The third line is the second hint
- Key and value are separated by :

With this approach, the reply can be parsed language-agnostically, and this is my implementation in the actual client:

```python
@staticmethod
def parse_gemini_question(gemini_reply: str) -> GeminiQuestion:
    result = re.findall(r'[^:]+: ([^\n]+)', gemini_reply, re.MULTILINE)
    if len(result) != 3:
        msg = f'Gemini replied with an unexpected format. Gemini reply: {gemini_reply}'
        logger.warning(msg)
        raise ValueError(msg)

    question = result[0]
    hint1 = result[1]
    hint2 = result[2]

    return GeminiQuestion(question=question, hint1=hint1, hint2=hint2)
```

In the future, parsing responses will become even easier. During the Google Cloud Next ’24 conference, Google announced that Gemini 1.5 Pro is now publicly available, and with that, they also announced some features including a JSON mode to have responses in JSON format. Check out this article for more details. Apart from that, I wrapped the Gemini client into a configurable class. You can find the full implementation open-source on GitHub. 
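To see why this key-symbol approach is language agnostic, here is a minimal, self-contained sketch of the same parsing idea. It uses only the standard library; GeminiQuestion is a plain dataclass stand-in for the project's Pydantic model, and the sample replies are made up for illustration:

```python
import re
from dataclasses import dataclass

# Stand-in for the project's Pydantic model.
@dataclass
class GeminiQuestion:
    question: str
    hint1: str
    hint2: str

def parse_reply(reply: str) -> GeminiQuestion:
    # Match "<key without colons>: <value>" per line -- only the structure
    # and the ':' separator matter, not the words used for the keys.
    result = re.findall(r'[^:]+: ([^\n]+)', reply, re.MULTILINE)
    if len(result) != 3:
        raise ValueError(f'Unexpected format: {reply}')
    return GeminiQuestion(*result)

english = "Question: Which movie?\nHint 1: A heist.\nHint 2: I_c_p_i_n"
german = "Frage: Welcher Film?\nHinweis 1: Ein Raub.\nHinweis 2: I_c_p_i_n"

print(parse_reply(english).question)  # Which movie?
print(parse_reply(german).question)   # Welcher Film?
```

The same parser handles the English and the German reply without any changes, which is exactly the point of relying on structure instead of exact words.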
Modular prompt generator with Jinja

The Prompt Generator is a class which combines and renders Jinja2 template files to create a modular prompt. There are two base templates: one for generating the question and one for evaluating the answer. Apart from that, there is a metadata template to enrich the prompt with up-to-date movie data. Furthermore, there are language and personality templates, organized in separate folders with a template file for each option.

Prompt Generator (by author)

Using Jinja2 allows advanced features like template inheritance, which is used for the metadata. This makes it easy to extend this component, not only with more options for personalities and languages, but also to extract it into its own open-source project to make it available for other Gemini projects.

Frontend

The Gemini Movie Detectives frontend is split into four main components and uses vue-router to navigate between them. The Home component simply displays the welcome message. The Quiz component displays the quiz itself and talks to the API via fetch. To create a quiz, it sends a POST request to api/quiz with the desired settings. The backend then selects a random movie based on the user settings, creates the prompt with the modular prompt generator, uses Gemini to generate the question and hints, and finally returns everything back to the component so that the quiz can be rendered. Additionally, each quiz gets a session ID assigned in the backend and is stored in a limited LRU cache. For debugging purposes, another component fetches data from the api/sessions endpoint, which returns all active sessions from the cache. A further component displays statistics about the service. However, so far there is only one category of data displayed, which is the quiz limit. To limit the costs for VertexAI and GCP usage in general, there is a daily limit of quiz sessions, which resets with the first quiz of the next day. Data is retrieved from the api/limit endpoint. 
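The template-inheritance idea described above can be sketched in a few lines. The template names and contents below are hypothetical (the real project loads its templates from files in per-option folders); an in-memory DictLoader keeps the example self-contained:

```python
from jinja2 import Environment, DictLoader

# Hypothetical templates: a base question prompt and a personality
# override that extends it, mirroring the modular structure above.
templates = {
    "base_question.jinja": (
        "You are a movie quiz host.\n"
        "{% block personality %}You have a neutral personality.{% endblock %}\n"
        "Generate a question about the movie: {{ movie_title }}"
    ),
    "personality/christmas.jinja": (
        "{% extends 'base_question.jinja' %}"
        "{% block personality %}You speak like Santa Claus.{% endblock %}"
    ),
}

env = Environment(loader=DictLoader(templates))
prompt = env.get_template("personality/christmas.jinja").render(movie_title="Inception")
print(prompt)
```

Adding a new personality or language is then just a matter of dropping in another small template that overrides the relevant block, which is what makes the prompt generator so easy to extend.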
Vue components (by author)

API examples

Of course, using the frontend is a nice way to interact with the application, but it is also possible to just use the API. The following example shows how to start a quiz via the API using the Santa Claus / Christmas personality:

```shell
curl -s -X POST https://movie-detectives.com/api/quiz \
  -H 'Content-Type: application/json' \
  -d '{"vote_avg_min": 5.0, "vote_count_min": 1000.0, "popularity": 3, "personality": "christmas"}' | jq .
```

```json
{
  "quiz_id": "e1d298c3-fcb0-4ebe-8836-a22a51f87dc6",
  "question": {
    "question": "Ho ho ho, this movie takes place in a world of dreams, just like the dreams children have on Christmas Eve after seeing Santa Claus! It's about a team who enters people's dreams to steal their secrets. Can you guess the movie? Merry Christmas!",
    "hint1": "The main character is like a skilled elf, sneaking into people's minds instead of houses. ",
    "hint2": "I_c_p_i_n "
  },
  "movie": {...}
}
```

Movie Detectives — Example: Santa Claus personality (by author)

This example shows how to change the language for a quiz:

```shell
curl -s -X POST https://movie-detectives.com/api/quiz \
  -H 'Content-Type: application/json' \
  -d '{"vote_avg_min": 5.0, "vote_count_min": 1000.0, "popularity": 3, "language": "german"}' | jq .
```

```json
{
  "quiz_id": "7f5f8cf5-4ded-42d3-a6f0-976e4f096c0e",
  "question": {
    "question": "Stellt euch vor, es gäbe riesige Monster, die auf der Erde herumtrampeln, als wäre es ein Spielplatz! Einer ist ein echtes Urviech, eine Art wandelnde Riesenechse mit einem Atem, der so heiß ist, dass er euer Toastbrot in Sekundenschnelle rösten könnte. Der andere ist ein gigantischer Affe, der so stark ist, dass er Bäume ausreißt wie Gänseblümchen. Und jetzt ratet mal, was passiert? Die beiden geraten aneinander, wie zwei Kinder, die sich um das letzte Stück Kuchen streiten! Wer wird wohl gewinnen, die Riesenechse oder der Superaffe? Das ist die Frage, die sich die ganze Welt stellt! ",
    "hint1": "Der Film spielt in einer Zeit, in der Monster auf der Erde wandeln.",
    "hint2": "G_dz_ll_ vs. K_ng "
  },
  "movie": {...}
}
```

And this is how to answer a quiz via an API call:

```shell
curl -s -X POST https://movie-detectives.com/api/quiz/84c19425-c179-4198-9773-a8a1b71c9605/answer \
  -H 'Content-Type: application/json' \
  -d '{"answer": "Greenland"}' | jq .
```

```json
{
  "quiz_id": "84c19425-c179-4198-9773-a8a1b71c9605",
  "question": {...},
  "movie": {...},
  "user_answer": "Greenland",
  "result": {
    "points": "3",
    "answer": "Congratulations! You got it! Greenland is the movie we were looking for. You're like a human GPS, always finding the right way!"
  }
}
```

Conclusion

After I finished the basic project, adding more personalities and languages was so easy with the modular prompt approach that I was impressed by the possibilities this opens up for game design and development. I could change this game from a pure educational game about movies into a comedy trivia “You Don’t Know Jack”-like game within a minute by adding another personality. Also, combining up-to-date Python functionality with validation libraries like Pydantic is very powerful and can be used to ensure good data quality for LLM input.

And there you have it, folks! You’re now equipped to craft your own LLM-powered web application. Feeling inspired but need a starting point? Check out the open-source code for the Gemini Movie Detectives project:

- GitHub repository for backend: https://github.com/vojay-dev/gemini-movie-detectives-api
- GitHub repository for frontend: https://github.com/vojay-dev/gemini-movie-detectives-ui

The future of AI-powered applications is bright, and you’re holding the paintbrush! Let’s go make something remarkable. And if you need a break, feel free to try https://movie-detectives.com/. 
Create an AI-Driven Movie Quiz with Gemini LLM, Python, FastAPI, Pydantic, RAG and more was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story. View the full article
  7. In today’s data-driven world, developer productivity is essential for organizations to build effective and reliable products, accelerate time to value, and fuel ongoing innovation. To deliver on these goals, developers must have the ability to manipulate and analyze information efficiently. Yet while SQL applications have long served as the gateway to access and manage data, Python has become the language of choice for most data teams, creating a disconnect. Recognizing this shift, Snowflake is taking a Python-first approach to bridge the gap and help users leverage the power of both worlds. Our previous Python connector API, primarily available for those who need to run SQL via a Python script, enabled a connection to Snowflake from Python applications. This traditional SQL-centric approach often challenged data engineers working in a Python environment, requiring context-switching and limiting the full potential of Python’s rich libraries and frameworks. Since the previous Python connector API mostly communicated via SQL, it also hindered the ability to manage Snowflake objects natively in Python, restricting data pipeline efficiency and the ability to complete complex tasks. Snowflake’s new Python API (in public preview) marks a significant leap forward, offering a more streamlined, powerful solution for using Python within your data pipelines — and furthering our vision to empower all developers, regardless of experience, with a user-friendly and approachable platform. A New Era: Introducing Snowflake’s Python API With the new Snowflake Python API, readily available through pip install snowflake, developers no longer need to juggle between languages or grapple with cumbersome syntax. They can effortlessly leverage the power of Python for a seamless, unified experience across Snowflake workloads encompassing data engineering, Snowpark, machine learning and application development. 
This API is a testament to Snowflake’s commitment to a Python-first approach, offering a plethora of features designed to streamline workflows and enhance developer productivity. Key benefits of the new Snowflake Python API include:

- Simplified syntax and intuitive API design: Featuring a Pythonic design, the API is built on the foundation of REST APIs, which are known for their clarity and ease of use. This allows developers to interact with Snowflake objects naturally and efficiently, minimizing the learning curve and reducing development time.
- Rich functionality and support for advanced operations: The API goes beyond basic operations, offering comprehensive functionality for managing various Snowflake resources and performing complex tasks within your Python environment. This empowers developers to maximize the full potential of Snowflake through intuitive REST API calls.
- Enhanced performance and improved scalability: Designed with performance in mind, the API leverages the inherent scalability of REST APIs, enabling efficient data handling and seamless scaling to meet your growing data needs. This allows your applications to handle large data sets and complex workflows efficiently.
- Streamlined integration with existing tools and frameworks: The API seamlessly integrates with popular Python data science libraries and frameworks, enabling developers to leverage their existing skill sets and workflows effectively. This integration allows developers to combine the power of Python libraries with the capabilities of Snowflake through familiar REST API structures.

By prioritizing the developer experience and offering a comprehensive, user-friendly solution, Snowflake’s new Python API paves the way for a more efficient, productive and data-driven future.

Getting Started with the Snowflake Python API

Our Quickstart guide makes it easy to see how the Snowflake Python API can manage Snowflake objects. 
The API allows you to create, delete and modify tables, schemas, warehouses, tasks and much more. In this Quickstart, you’ll learn how to perform key actions — from installing the Snowflake Python API to retrieving object data and managing Snowpark Container Services. Dive in to experience how the enhanced Python API streamlines your data workflows and unlocks the full potential of Python within Snowflake. To get started, explore the comprehensive API documentation, which will guide you through every step.

We recommend that Python developers prioritize the new API for data engineering tasks, since it offers a more intuitive and efficient approach compared to the legacy SQL connector. While the legacy Python connector remains available for specific SQL use cases, the new API is designed to be your go-to solution. By general availability, we aim to achieve feature parity, empowering you to complete 100% of your data engineering tasks entirely through Python. This means you’ll only need to use SQL commands if you truly prefer them or for rare unsupported functionalities.

The New Wave of Native DevOps on Snowflake

The Snowflake Python API release is among a series of native DevOps tools becoming available on the Snowflake platform — all of which aim to empower developers of every experience level with a user-friendly and approachable platform. These benefits extend far beyond the developer team. The 2023 Accelerate State of DevOps Report, the annual report from Google Cloud’s DevOps Research and Assessment (DORA) team, reveals that a focus on user-centricity around the developer experience leads to a 40% increase in organizational performance. With intuitive tools for data engineers, data scientists and even citizen developers, Snowflake strives to enhance these advantages by fostering collaboration across your data and delivery teams. 
By offering the flexibility and control needed to build unique applications, Snowflake aims to become your one-stop shop for data — minimizing reliance on third-party tools for core development lifecycle use cases and ultimately reducing your total cost of ownership. We’re excited to share more innovations soon, making data even more accessible for all. For a deeper dive into Snowflake’s Python API and other native Snowflake DevOps features, register for the Snowflake Data Cloud Summit 2024. Or, experience these features firsthand at our free Dev Day event on June 6th in the Demo Zone. The post Snowflake’s New Python API Empowers Data Engineers to Build Modern Data Pipelines with Ease appeared first on Snowflake. View the full article
  8. Let's learn Python by building a command-line TO-DO list app, one step at a time.View the full article
  9. As application development evolves and programming languages advance, choosing the programming language to learn can be difficult. Two of the most widely used languages today are Python and JavaScript. Both languages offer unique features, advantages, and application areas. In this blog post, we'll discuss their characteristics, applications, and benefits to help you decide which language best fits your needs.

Introduction to Python and JavaScript

Python is an interpreted, high-level, general-purpose programming language that was publicly released in 1991. It has become popular as an excellent option for beginners due to its shallow learning curve. It emphasizes code readability while being user-friendly, making it perfect for novice programmers!

On the other hand, JavaScript is a dynamically typed, interpreted programming language created by Brendan Eich in 1995 for front-end web application development. Over time, JavaScript has also found use in server-side web development through Node.js. It is now a full-stack language capable of powering client-side and server-side processes for web apps.

Advantages of Learning Python

Below are the advantages of learning Python:

- Simplicity and Readability: Python's clean and readable syntax makes it a preferred choice for beginners who want to learn how to write, debug, and maintain code quickly.
- Versatility: Python's wide collection of libraries and frameworks makes it an excellent fit for different use cases. Additionally, learning Python offers a wide range of career prospects in many sectors.
- Community and Resources: Python boasts a lively and active developer community, contributing to its continued evolution and advancement. There is ample online documentation, tutorials, forums, and open-source projects available that make learning the language easy.
- High Demand in the Job Market: Python developers are in demand in the job market due to its wide adoption. From web development to data science or artificial intelligence, proficiency in Python increases your career opportunities significantly.

To get started with Python, check out our Python Basics course.

Advantages of Learning JavaScript

Below are the advantages of learning JavaScript:

- Full-Stack Development: With the introduction of Node.js, JavaScript has expanded to cover server-side development, making it possible for developers to create full-stack applications using only one language. This streamlines development by removing the need to learn multiple languages and frameworks.
- Rich Ecosystem of Libraries and Frameworks: JavaScript has an expansive ecosystem of libraries and frameworks such as React, Angular, and Vue.js that make web application development faster and simpler. These frameworks provide tools for building modern user interfaces with responsive design features.
- High Demand and Attractive Salaries: JavaScript developers are in high demand, and companies are willing to pay top salaries for experienced professionals. Proficiency in JavaScript could open doors to lucrative career opportunities across front-end, back-end, or full-stack development roles.

Python vs. JavaScript: Key Differences

Below are the main differences between Python and JavaScript:

Syntax and Language Features

Python emphasizes code readability through indentation to specify code blocks, making it intuitive and straightforward for beginner programmers to learn. It supports procedural and object-oriented programming paradigms and includes powerful features like dynamic typing, automatic memory management, and an extensive standard library.

JavaScript's syntax can be more complex for novice programmers than Python's, especially at the beginning. It follows C-style syntax, with curly braces and semicolons denoting code blocks and statements. 
JavaScript supports procedural and object-oriented programming paradigms and offers features like dynamic typing, first-class functions, and closures, which can be difficult for beginners to learn.

Performance and Execution Environment

Python is an interpreted language, meaning an interpreter executes code line by line without a separate compilation step. While Python's interpreted nature can lead to slower execution than compiled languages like C or C++, its simplicity and ease of use make it a preferred choice for many developers. Additionally, its performance can be optimized using techniques such as code optimization, built-in data structures, and leveraging third-party libraries written in C or C++.

JavaScript is also an interpreted language, but its execution environment varies depending on whether it's running on the client side (in a web browser) or the server side (using Node.js). In a web browser, JavaScript code is executed by the browser's JavaScript engine, such as V8 in Chrome or SpiderMonkey in Firefox. On the server side, JavaScript code is executed by the Node.js runtime environment, which allows developers to write server-side applications using JavaScript. Its performance varies depending on the execution environment and the efficiency of the underlying JavaScript engine.

Learning Curve and Resources

With its easy learning curve and extensive documentation, Python is an excellent option for newcomers to programming. Its clean syntax, comprehensive documentation, and extensive developer community contribute to its accessibility and ease of learning. Additionally, numerous online resources, such as tutorials, courses, books, and interactive coding platforms, are available.

Compared to Python, JavaScript presents a steeper learning curve for beginners. Its complex syntax, asynchronous nature, and peculiarities like type coercion and hoisting may make learning it daunting. However, with dedication and practice, you can overcome these challenges and become a proficient developer. Luckily, many resources such as tutorials, courses, books, and coding boot camps exist for learning JavaScript. Moreover, its community is active and welcoming, providing forums, meetups, and online communities where developers can collaborate while learning from one another.

Ecosystem and Community Support

Python boasts an extensive ecosystem featuring libraries, frameworks, and tools that enable development across many domains. The Python Package Index (PyPI) hosts over 300K packages to meet nearly any programming need imaginable. Popular frameworks like Django, Flask, and Pyramid facilitate web development, while libraries like NumPy, SciPy, and Matplotlib support data science. Furthermore, its active community fosters collaboration through conferences, meetups, Stack Overflow, Reddit, and more.

JavaScript's ecosystem has libraries, frameworks, and tools for front-end, back-end, and full-stack development. npm has grown into the largest open-source package ecosystem worldwide, housing over 1.5 million packages in its registry. Frameworks like React, Angular, and Vue.js have transformed front-end development by enabling developers to create powerful, interactive user interfaces. Additionally, frameworks like Express.js and NestJS provide efficient server-side solutions. JavaScript's large community of developers, designers, and enthusiasts thrives through collaboration and innovation, driving the language's evolution and shaping its ecosystem.

Job Market and Career Opportunities

Due to Python's versatility and wide adoption, its developers are in high demand across various industries, including tech giants, startups, academia, finance, healthcare, and government. There are multiple job roles for Python developers, ranging from web developers and software developers to data analysts and machine learning engineers. A Python certification can boost your career. 
Check out this article on How to Get Python Certification: The Step-By-Step Guide.

JavaScript's rise as the go-to web development language and server-side programming technology has increased the need for JavaScript developers. Front-end specialists and back-end developers are in high demand at the moment. Full-stack developers with expertise in both front-end and back-end JavaScript development are in even greater demand in today's tech economy!

Future Trends and Industry Outlook

Python continues its rise as a top programming language, experiencing steady adoption across industries and domains. As the artificial intelligence, machine learning, and data science fields grow, Python's popularity and adoption will continue to grow with them. Its simple programming model, versatility, and community support make it ideal for addressing emerging challenges in areas like cybersecurity, cloud computing, and Internet of Things (IoT) development.

JavaScript's evolution from a client-side scripting language to a full-stack powerhouse has cemented it as one of the cornerstones of modern web development. As web technologies advance and consolidate, its influence will extend beyond traditional web apps to emerging paradigms such as Progressive Web Apps (PWAs), serverless architecture, and real-time collaboration. JavaScript's versatility and cross-platform compatibility make it an excellent choice for building next-generation applications on the web, mobile, desktop, or IoT devices, securing its place within the ever-expanding tech landscape.

When to Learn Python

Here are some scenarios in which learning Python can be particularly advantageous:

- Beginners in programming
- Data science and analytics enthusiasts
- Individuals who want to pursue web development
- Individuals who want to pursue automation and scripting projects
- Machine learning and artificial intelligence enthusiasts

When to Learn JavaScript

Learning JavaScript can be advantageous in various scenarios, given its versatility and widespread adoption across different domains. Here are some situations in which learning JavaScript can be particularly beneficial:

- Individuals who want to pursue web development
- Individuals who want to become full-stack web developers
- Individuals who want to become mobile app developers
- When you want to learn APIs and browser automation
- Game development enthusiasts

Conclusion

Python and JavaScript are powerful programming languages with distinct advantages and applications. Ultimately, you should select which one to learn depending on your interests, career goals, and the projects you wish to undertake. No matter which language you decide to learn, proficiency in Python or JavaScript will undoubtedly advance your skills as a programmer and open up exciting career prospects in today's fast-moving technology sector.

More Python articles:

- Create A Simple Python Web Application That Interacts With Your Kubernetes Cluster
- Top 7 Skills Required for DevOps Engineers (with Roadmap)
- Top 10 Programming Languages

View the full article
  10. Python consistently ranks at the top of programming language popularity surveys due to its diverse usage. One of the areas where it excels is the development of robust web applications, using frameworks such as Django, Flask, and FastAPI. In this blog, you will learn why Python can be a very good choice for web development.

What is web development, and what does it entail?

Web development refers to the process of designing, creating, and hosting websites or applications accessible over the Internet. Web development can generally be divided into three major areas: front-end development, back-end development, and hosting.

Front-End Development

Front-end development specializes in designing websites or applications from the user's perspective and is therefore known as client-side development.

Back-End Development

Back-end development refers to the server-side logic and functionality that drives websites and web applications. It includes configuring servers, databases, and the application logic that handles data processing and user authentication.

Hosting and Deployment

Hosting and deployment involve making a website or web app accessible to users on the internet. This involves selecting a hosting provider, configuring servers, deploying application code, as well as ensuring the reliability, security, and performance of said application or website.

Python's Role in Web Development

Here are the different roles that Python can play in web development:

- Backend Development
- Web Frameworks
- API Development
- Data Processing and Analysis
- Web Scraping

Check out our Python Basics course to get started with Python.

What Python frameworks can you use in web development?

Below are the popular Python frameworks used in web development:

Django

Django is a high-level web framework that promotes rapid development and clean design. 
Following the "don't repeat yourself" (DRY) principle, it provides built-in features such as an ORM (Object-Relational Mapping), form handling, and authentication mechanisms. These features let developers focus on creating unique applications instead of performing routine, repetitive development tasks. Due to its extensive feature set and conventions, Django has a steeper learning curve for beginners compared to micro-frameworks. Django is ideal for creating large-scale web apps with complex requirements. Applications like content management systems, e-commerce platforms, social networking sites, and enterprise applications can all be constructed with Django.

Flask

Flask is a micro-framework that takes an approachable yet minimalistic approach to web application creation. Though simple in appearance, Flask provides developers with all of the essential tools required to build web apps, such as routing, templating, and session management. Flask does not come equipped with built-in features for authentication, database ORM, or form validation as Django does. Developers will, therefore, have to rely on third-party extensions or libraries to implement these features. Flask is ideal for building lightweight web apps and APIs of small to medium size that need flexible scaling, as well as for API prototyping and rapid prototyping in general. It offers great freedom for projects requiring flexibility and simplicity, with plenty of room to grow.

FastAPI

FastAPI is a relatively new addition to the Python web framework landscape, known for its exceptional performance and modern features. Built on Starlette and Pydantic, it leverages Python's asynchronous capabilities to deliver high-performance APIs with automatic validation and documentation. Its intuitive API design and automatic interactive documentation make it a compelling choice for building scalable and efficient web services. 
FastAPI is a relatively new framework compared to Django and Flask, which means it may have a smaller community and ecosystem of extensions. It is well suited for building high-performance APIs, microservices, and real-time applications that require fast response times and scalability. It's particularly useful for projects that prioritize performance, productivity, and automatic API documentation.

Considerations when choosing a Python framework

Here are a few considerations to keep in mind when selecting a Python framework for web development:

- Security: For optimal web security, consider frameworks with built-in protection against common web vulnerabilities, such as SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). Look out for features like secure authentication, HTML escaping, and input validation.
- Performance: Evaluate the framework's scalability features, such as caching, database connection pooling, and asynchronous programming capabilities. Choose frameworks that can handle heavy traffic volumes while scaling horizontally with growing user bases.
- Cost: When selecting a framework, be aware of its licensing model; certain frameworks may have licensing fees or restrictions that could eat into project costs. For beginner projects, look for frameworks with open-source licenses that fit the allocated budget.
- Capability: Evaluate the features and capabilities of the framework in question to ensure it can meet the project requirements. Check for features such as authentication, database integration, form validation, and RESTful APIs.
- Integration: Explore the framework's ecosystem of extensions, libraries, and integrations to determine its compatibility with third-party tools and services. Look for frameworks with expansive ecosystems that support common integrations, such as database connectors, authentication providers, and payment gateways. 
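To make the micro-framework style described above concrete, here is a minimal Flask sketch. The route and message are hypothetical, and Flask's built-in test client is used so the example runs without starting a server:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# A single hypothetical JSON endpoint -- just enough to show the
# routing and response handling Flask provides out of the box.
@app.route("/api/hello")
def hello():
    return jsonify(message="Hello from Flask")

# Flask's test client exercises the route in-process, without a server.
client = app.test_client()
response = client.get("/api/hello")
print(response.status_code, response.get_json())
```

Everything beyond this core (authentication, a database ORM, form validation) is added via extensions, which is exactly the trade-off between Flask and a batteries-included framework like Django.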
Advantages of Using Python in Web Development

Below are the advantages you get from using Python for web development:

- Ease of Learning and Readability: Python's syntax is simple, clean, and easy to understand, making it convenient for beginners and experienced developers alike. Its readability encourages collaborative development and reduces the time needed to onboard new team members.
- Extensive Ecosystem of Libraries and Frameworks: Python boasts a vast ecosystem of libraries and frameworks tailored for web development tasks. Frameworks like Django, Flask, and FastAPI provide developers with the tools to build robust, scalable, and maintainable web applications efficiently.
- Rapid Development: Python's concise syntax and high-level abstractions allow developers to write code quickly and focus on solving business problems. This results in faster development cycles and faster time-to-market for web applications, giving businesses a competitive edge.
- Versatility: Python is a versatile language used for various web development tasks, including backend development, frontend development (with frameworks like Django and Flask), scripting, automation, data analysis, and more. Its versatility makes it a valuable asset for developers working on diverse projects and domains.
- Scalability: Python offers an ideal environment for building web applications that can scale seamlessly as traffic and data volumes grow. Frameworks like Django offer built-in features like caching, database optimization, and load balancing to help developers expand their apps effortlessly as the user base grows.
- Community and Support: Python boasts an active community of developers contributing to its continuous development and evolution. This community offers resources like documentation, tutorials, forums, and open-source libraries that make finding solutions to problems easier. 
Cross-Platform Compatibility: Python is a cross-platform language, meaning code runs without modification on multiple operating systems. This greatly simplifies deployment and widens the audience of potential web app users.

Integration Capabilities: Python meshes seamlessly with a wide range of technologies and platforms, making integration with third-party APIs, services, libraries, and databases straightforward. From databases and web servers to frontend frameworks, its robust integration capabilities simplify development workflows significantly.

If you'd like to build a strong foundation in Python syntax and core concepts, I recommend checking out our interactive Python Basics course. In it, you'll find dozens of bite-sized lessons that help cement your understanding of data types, functions, object-oriented programming, and more.

Python Web Development Use Cases

Here are some real-world examples of Python-powered websites and applications:

Instagram: One of the world's largest social media platforms relies heavily on Python for its backend infrastructure; it was initially developed using the Django framework.

Pinterest: Like many popular social networking platforms, Pinterest uses Python for its backend services, powering its recommendation algorithms, content delivery systems, and user engagement features.

Reddit: The popular social news aggregation and discussion platform is powered by Python; its backend infrastructure was built using the Pylons framework.

YouTube: While YouTube's frontend is primarily built with JavaScript and other web technologies, its backend infrastructure relies on Python for tasks such as video processing, recommendation systems, content delivery, and analytics.
Netflix: One of the leading streaming entertainment services uses Python extensively in its backend services and tools, including content delivery, recommendation algorithms, data analysis, and open-source projects like Metaflow and Polynote.

NASA: Python is widely used at NASA for scientific computing, data analysis, and mission-critical applications. Its user-friendliness and extensive libraries make it well suited to processing and analyzing large volumes of scientific data efficiently. NASA's Jet Propulsion Laboratory (JPL) and other research centers use it for tasks like satellite image processing, climate modeling, and spacecraft control systems.

These examples highlight the diverse settings where Python is employed to build reliable and innovative web applications and services. Check out this article on How to Get Python Certification: The Step-By-Step Guide.

Conclusion

Python's suitability for web development is inarguable, thanks to its simplicity, flexibility, and robust ecosystem of frameworks and libraries. From simple websites and RESTful APIs to complex applications, it gives developers everything they need to bring their ideas to life efficiently and cost-effectively. Thanks to its rising popularity and widespread adoption, it continues to reshape web development, delivering innovative, scalable solutions across industries and domains.

More on Python:
Create A Simple Python Web Application That Interacts With Your Kubernetes Cluster
Top 7 Skills Required for DevOps Engineers (with Roadmap)
Top 10 Programming Languages

View the full article
  11. For weeks now, unidentified threat actors have been leveraging a critical zero-day vulnerability in Palo Alto Networks' PAN-OS software, running arbitrary code on vulnerable firewalls with root privilege. Multiple security researchers have flagged the campaign, including Palo Alto Networks' own Unit 42, noting that a single threat actor group has been abusing a command injection vulnerability since at least March 26, 2024. The vulnerability is now tracked as CVE-2024-3400 and carries a maximum severity score (10.0). The campaign, dubbed MidnightEclipse, targeted PAN-OS 10.2, PAN-OS 11.0, and PAN-OS 11.1 firewall configurations with GlobalProtect gateway and device telemetry enabled, since these are the only vulnerable endpoints.

Highly capable threat actor

The attackers have been using the vulnerability to drop a Python-based backdoor on the firewall, which Volexity, a separate security firm that observed the campaign in the wild, dubbed UPSTYLE. While the motives behind the campaign are subject to speculation, the researchers believe the endgame is to extract sensitive data. The researchers don't know exactly how many victims there are, nor whom the attackers primarily target. The threat actors have been given the moniker UTA0218 for now. "The tradecraft and speed employed by the attacker suggests a highly capable threat actor with a clear playbook of what to access to further their objectives," the researchers said. "UTA0218's initial objectives were aimed at grabbing the domain backup DPAPI keys and targeting active directory credentials by obtaining the NTDS.DIT file. They further targeted user workstations to steal saved cookies and login data, along with the users' DPAPI keys." In its writeup, The Hacker News reported that the U.S.
Cybersecurity and Infrastructure Security Agency (CISA) added this flaw to its Known Exploited Vulnerabilities (KEV) catalog, giving federal agencies a deadline of April 19 to apply the patch or otherwise mitigate the threat. "Targeting edge devices remains a popular vector of attack for capable threat actors who have the time and resources to invest into researching new vulnerabilities," Volexity said. "It is highly likely UTA0218 is a state-backed threat actor based on the resources required to develop and exploit a vulnerability of this nature, the type of victims targeted by this actor, and the capabilities displayed to install the Python backdoor and further access victim networks."

More from TechRadar Pro
North Korean hackers are posing as job interviewers - don't be fooled
Here's a list of the best firewalls around today
These are the best endpoint security tools right now

View the full article
  12. Let’s explore the most useful services OpenAI offers. View the full article
  13. Learn Python through tutorials, blogs, books, project work, and exercises. Access all of it on GitHub for free and join a supportive open-source community. View the full article
  14. Learn how to convert a Python dictionary to JSON with this quick tutorial. View the full article
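As a quick taste of what that tutorial covers, the standard library's json module handles the conversion in both directions (the dictionary below is just an illustrative example):

```python
import json

profile = {"name": "Ada", "languages": ["Python", "SQL"], "active": True}

# dict -> JSON string; indent and sort_keys are optional formatting knobs
encoded = json.dumps(profile, indent=2, sort_keys=True)
print(encoded)

# JSON string -> dict (round trip)
decoded = json.loads(encoded)
print(decoded == profile)  # True
```

Note that json.dumps maps Python types to their JSON equivalents (True becomes true, None becomes null), and only JSON-serializable types are accepted by default.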
  15. Ubuntu is a preferred Linux distribution, especially for programmers and developers. When using Ubuntu 24.04, you should know how to install pip, the Python package manager that lets you install and manage Python packages for your projects. pip also helps when a package you are installing requires additional dependencies. Installing pip on Ubuntu 24.04 is straightforward: with only a few commands, you will have pip installed and ready for use. This post shares all the details you should know about installing pip.

How to Install pip on Ubuntu 24.04

Different circumstances make installing pip on Ubuntu 24.04 a must-know for everyone. If you are a Python developer using Ubuntu 24.04, you inevitably need pip to install and manage Python packages. As a regular Ubuntu 24.04 user, pip helps install package dependencies and is the recommended way to install packages from indexes such as PyPI. It's worth mentioning that Python has two major versions, but this example focuses on installing pip for Python 3, the current version recommended for any Python work. Python 3 packages, including pip, carry the 'python3-' prefix in their names. Below are the steps to follow to install pip on Ubuntu 24.04.

Step 1: Update the Ubuntu 24.04 Package List

Before installing pip on Ubuntu 24.04, we must update the package list. Doing so refreshes the sources list, letting us access the most recent pip version. Run the update command below.

$ sudo apt update

You will be prompted to enter your password, as apt is an administrative task that requires sudo privileges. Once you enter the password, allow the process to complete.

Step 2: Install Python3 pip

Ubuntu 24.04 has pip in its repository. Since we want to install pip for Python 3, not Python 2, we must specify that when running the install command.
Here's how you install pip:

$ sudo apt install python3-pip

Once you run the command, it fetches the pip package and its dependencies. When the process completes, pip is installed on your Ubuntu 24.04 system.

Step 3: Verify the Installation

Although we've installed pip, we should still verify the installation. One way is to check the installed pip version; the command below returns the version if pip is installed.

$ pip3 --version

We've installed pip 24.0 for this guide.

How to Use pip on Ubuntu 24.04

After installing pip, the next task is understanding how to use it for different tasks. Like other packages, pip has a help page listing the various options and their descriptions. To access the help page, execute the command below.

$ pip3 --help

The output shows the available options alongside their descriptions. Go through it to understand the different actions you can take when running pip. For instance, to see the packages installed on the system, list them with the command below.

$ pip3 list

The output shows the installed packages and their versions. Feel free to explore pip further, depending on your project needs.

Conclusion

pip is a reliable Python package manager. With pip, you can install and manage Python packages with ease, which is convenient when installing project dependencies. To install pip, specify which Python version to target, then run the install command. This post focused on installing python3-pip, covering the steps from installation to a quick usage example. View the full article
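Incidentally, the same information pip3 list prints can be read from inside Python via the standard library's importlib.metadata module (Python 3.8+). A minimal sketch:

```python
from importlib import metadata

# Enumerate installed distributions, similar in spirit to `pip3 list`.
installed = sorted(
    (dist.metadata["Name"] or "", dist.version)
    for dist in metadata.distributions()
)

for name, version in installed[:10]:  # show the first few entries
    print(f"{name}=={version}")
```

This is handy in scripts that need to check which packages (and versions) are available without shelling out to pip.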
  16. Anaconda is an open-source distribution of the Python and R programming languages. It is powerful software for managing environments, packages, and other development tools like Jupyter Notebook and Spyder. Moreover, it comprises over 250 packages, making it easy to kickstart your development journey. Anaconda's features include package management, virtual environments, integrated development environment (IDE) support, and more. Its reproducibility function generates easy-to-share projects. This short guide provides brief information about installing Anaconda on Linux without any hassle.

How To Install Anaconda

First, download the Anaconda installer from Anaconda's official archive. Please ensure you download the appropriate version for your Linux architecture. Here are the commands you can follow:

sudo apt update
wget https://repo.anaconda.com/archive/Anaconda3-2024.02-1-Linux-x86_64.sh

Although we use the wget command to download the installer, you can alternatively download it through Anaconda's website. Once you have downloaded the installer script, run it with the command below:

bash Anaconda3-2024.02-1-Linux-x86_64.sh

Now follow the on-screen instructions; the installer will ask you to confirm the installation path. Press Enter to keep the default path, or specify your desired location. Lastly, enter 'yes' to automatically activate Conda on system startup. You can reverse this anytime by running:

conda init --reverse $SHELL

Finally, the terminal will show, "Thank you for installing Anaconda3!". Before moving further, you need to activate and initialize Anaconda3 with the following command:

export PATH="</path of anaconda3/>bin:$PATH"

Make sure you replace </path of anaconda3/> with the actual path of Anaconda3 on your system. Verifying all new packages is good practice to avoid unintentional system errors.
Let's now verify Anaconda's installation by checking the version information:

conda --version

If it shows the version number, there's no issue; otherwise, reinstall using the steps above.

How to Update Anaconda

If you ever need to update Anaconda, run the command below:

conda update --all

Difference Between Anaconda and Miniconda

Anaconda is a full distribution with over 250 standard machine learning and data science packages. Miniconda is a minimal installer that includes Conda, Python, and a few more packages, letting you install other packages according to your needs. The Anaconda distribution is best for beginners who are unsure which packages they need; Miniconda is for users who already know what they want to use.

A Quick Wrap-Up

Anaconda is powerful open-source software for running machine learning projects, creating virtual environments, distributing packages, and more. It lets you run your Python and R programs smoothly. This article comprehensively demonstrates installing the Anaconda command line client on Linux. We have also included a simple command to update Anaconda quickly. View the full article
  17. A comparative overview Continue reading on Towards Data Science » View the full article
  18. Modular created Mojo to provide Python developers with a programming language that uses the same familiar syntax to build high-performance applications. View the full article
  19. Perfect for first-time coders, this on-demand bootcamp shows you how to build apps, write automations and dive into data through 113 hours of content. View the full article
  20. Explore some of Python’s sharp corners by coding your way through simple yet helpful examples. View the full article
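One classic sharp corner of the kind such exercises explore is the mutable default argument (this example is an illustration of the pitfall, not taken from the article): default values are evaluated once at function definition time, not once per call.

```python
def append_bad(item, bucket=[]):
    # The same list object is reused across calls!
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    # Create a fresh list per call instead.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2]  <- surprise: state leaked between calls
print(append_good(1))  # [1]
print(append_good(2))  # [2]
```

The None-sentinel pattern in append_good is the idiomatic fix.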
  21. The Python Package Index (PyPI), the largest repository of Python packages, has once again been forced to suspend new account and new project registrations. Cybersecurity experts from both Checkmarx and Check Point observed a large-scale cyberattack in which threat actors tried to upload hundreds of malicious packages to the platform in an attempt to compromise software developers and mount supply chain attacks. The packages mimic legitimate ones already on PyPI, an attack usually called "typosquatting". It relies on careless developers picking up the malicious version of a package instead of the legitimate one. While Checkmarx says the attackers tried to upload some 365 packages, Check Point claims at least 500. Regardless of the total number, the attack's goal is to get victims to install an infostealer with persistence capabilities. This infostealer grabs, among other things, passwords stored in browsers, cookies, and cryptocurrency wallet-related information.

Registrations reopened

PyPI seems to have addressed the issue in the meantime, as at the time of writing, registrations were reopened. PyPI is the world's biggest repository for open-source Python packages, and as such faces a constant barrage of cyberattacks. In late May 2023, the platform was forced to do the same thing, as it faced an "unimaginable flood of malicious code" being uploaded. In an announcement posted on the PyPI status page, the organization said: "The volume of malicious users and malicious projects being created on the index in the past week has outpaced our ability to respond to it in a timely fashion, especially with multiple PyPI administrators on leave." It took the organization the entire weekend to lift the suspension.
Via BleepingComputer

View the full article
  22. Emergency stop button: The Python Package Index was drowning in malicious code again, so they had to shut down registration for cleanup. The post PyPI Goes Quiet After Huge Malware Attack: 500+ Typosquat Fakes Found appeared first on Security Boulevard. View the full article
  23. This article serves as a detailed guide on how to master advanced Python techniques for data science. It covers topics such as efficient data manipulation with Pandas, parallel processing with Python, and how to turn models into web services. View the full article
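On the parallel-processing topic that guide mentions, a minimal standard-library sketch with concurrent.futures looks like this (the per-item function is a made-up stand-in, not code from the article):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_length(text: str) -> int:
    # Stand-in for an I/O-bound task, e.g. an HTTP call per item.
    return len(text)

def parallel_map(items):
    # Threads suit I/O-bound work; for CPU-bound work,
    # ProcessPoolExecutor offers the same Executor interface.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(fetch_length, items))

print(parallel_map(["pandas", "numpy", "sklearn"]))  # [6, 5, 7]
```

Executor.map preserves input order, which makes it a drop-in replacement for a plain loop over the items.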
  24. Spend less time researching and more time recruiting the ideal Python developer. Find out how in this article. View the full article
  25. Cybersecurity researchers from Checkmarx have discovered a new infostealing campaign that leveraged typosquatting and stolen GitHub accounts to distribute malicious Python packages via the PyPI repository. In a blog post, Tal Folkman, Yehuda Gelb, Jossef Harush Kadouri, and Tzachi Zornshtain of Checkmarx said they discovered the campaign after a Python developer complained about falling victim to the attack. The company believes more than 170,000 people are at risk.

Infostealers and keyloggers

The attackers first took a popular Python mirror, Pythonhosted, and created a typosquatted version of the website, naming it PyPIhosted. Then they grabbed a major package, Colorama (150+ million monthly downloads), added malicious code to it, and uploaded it to their typosquatted fake mirror. "This strategy makes it considerably more challenging to identify the package's harmful nature with the naked eye, as it initially appears to be a legitimate dependency," the researchers explained. Another tactic involved stealing popular GitHub accounts. One account, "editor-syntax", was compromised, most likely via session cookie theft; by obtaining session cookies, the attackers bypassed authentication entirely and logged directly into the account. Editor-syntax is a major contributor who maintains the Top.gg GitHub organization, whose community counts more than 170,000 members. The threat actors used the access to commit malware to the Top.gg Python library. The goal of the campaign was to steal sensitive data from the victims.
Checkmarx's researchers said the malware stole browser data (cookies, autofill information, browsing history, bookmarks, credit cards, and login credentials from major browsers such as Opera, Chrome, Brave, Vivaldi, Yandex, and Edge), Discord data (including Discord tokens, which can be used to access accounts), cryptocurrency wallet data, Telegram chat sessions, computer files, and Instagram data. Further analysis found that the infostealer could also work as a keylogger.

View the full article