Showing results for tags 'docker images'.
Found 8 results

  1. Docker Official Images are a curated set of Docker repositories hosted on Docker Hub that provide a wide range of pre-configured images for popular language runtimes and frameworks, cloud-first utilities, data stores, and Linux distributions. These images are maintained and vetted, ensuring they meet best practices for security, usability, and versioning, making it easier for developers to deploy and run applications consistently across different environments. Docker Official Images are an important component of Docker's commitment to the security of both the software supply chain and open source software.

Docker Official Images provide thousands of images you can use directly or as a base image when building your own images. For example, there are Docker Official Images for Alpine Linux, NGINX, Ubuntu, PostgreSQL, Python, and Node.js. Visit Docker Hub to search through the currently available Docker Official Images. In this blog post, we address three common misconceptions about Docker Official Images and outline seven ways they help secure the software supply chain.

3 common misconceptions about Docker Official Images

Even though Docker Official Images have been around for more than a decade and have been used billions of times, they are somewhat misunderstood. Who "owns" Docker Official Images? What is with all those tags? How should you use Docker Official Images? Let's address some of the more common misconceptions.

Misconception 1: Docker Official Images are controlled by Docker

Docker Official Images are maintained through a partnership between upstream maintainers, community volunteers, and Docker engineers. External developers maintain the majority of Docker Official Images Dockerfiles, with Docker engineers providing insight and review to ensure best practices and uniformity across the Docker Official Images catalog. Additionally, Docker provides and maintains the Docker Official Images build infrastructure and logic, ensuring consistent and secure build environments that allow Docker Official Images to support more than 10 architecture/operating system combinations.

Misconception 2: Docker Official Images are designed for a single use case

Most Docker Official Images repositories offer several image variants and maintain multiple supported versions. In other words, the latest tag of a Docker Official Image might not be the right choice for your use case.

Docker Official Images tags

The documentation for each Docker Official Images repository contains a "Supported tags and respective Dockerfile links" section that lists all the current tags with links to the Dockerfiles that created the image with those tags (Figure 1). This section can be a little intimidating for first-time users, but keeping in mind a few conventions will allow even novices to understand what image variants are available and, more importantly, which variant best fits their use case.

Figure 1: Documentation showing the current tags with links to the Dockerfiles that created the image with those tags.

Tags listed on the same line all refer to the same underlying image. (Multiple tags can point to the same image.) For example, Figure 1 shows the ubuntu Docker Official Images repository, where the 20.04, focal-20240216, and focal tags all refer to the same image. Often the latest tag for a Docker Official Images repository is optimized for ease of use and includes a wide variety of software helpful, but not strictly necessary, when using the main software packaged in the Docker Official Image.
For example, latest images often include tools like Git and build tools. Because of their ease of use and wide applicability, latest images are often used in getting-started guides.

Some operating system and language runtime repositories offer "slim" variants that have fewer packages installed and are therefore smaller. For example, the python:3.12.2-bookworm image contains not only the Python runtime, but also any tool you might need to build and package your Python application — more than 570 packages! Compare this to the python:3.12.2-slim-bookworm image, which has about 150 packages.

Many Docker Official Images repositories offer "alpine" variants built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than "slim" variants. For example, the linux/amd64 node:latest image is 382 MB, the node:slim image is 70 MB, and the node:alpine image is 47 MB.

If you see tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as jammy, focal, and bionic), those indicate the codename of the Linux distribution they use as a base image. Debian release codenames are based on Toy Story characters, and Ubuntu releases use alliterative adjective-animal appellations. Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye).

Tags may contain other hints about the purpose of their image variant. Often these are explained later in the Docker Official Images repository documentation. Check the "How to use this image" and/or "Image Variants" sections.
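To see these size differences for yourself, you can pull two variants of the same release and compare them locally. A minimal sketch, assuming Docker is installed; the tags are the ones mentioned above, and the exact sizes you see will vary over time:

docker pull python:3.12.2-bookworm
docker pull python:3.12.2-slim-bookworm
# List the local python images with their sizes
docker images python --format "{{.Tag}}: {{.Size}}"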
Misconception 3: Docker Official Images do not follow software development best practices

Some critics argue that Docker Official Images go against the grain of best practices, such as not running container processes as root. While it's true that we encourage users to embrace a few opinionated standards, we also recognize that different use cases require different approaches. For example, some use cases may require elevated privileges for their workloads, and we provide options for them to do so securely.

7 ways Docker Official Images help secure the software supply chain

We recognize that security is a continuous process, and we're committed to providing the best possible experience for our users. Since the company's inception in 2013, Docker has been a leader in the software supply chain, and our commitment to security — including open source security — has helped to protect developers from emerging threats all along the way.

With the availability of open source software, efficiently building powerful applications and services is easier than ever. The transparency of open source allows unprecedented insight into the security posture of the software you create. But to take advantage of the power and transparency of open source software, fully embracing software supply chain security is imperative. A few ways Docker Official Images help developers build a more secure software supply chain include:

Open build process

Because visibility is an important aspect of the software supply chain, Docker Official Images are created from a transparent and open build process. The Dockerfile inputs and build scripts are all open source, all Docker Official Images updates go through a public pull request process, and the logs from all Docker Official Images builds are available to inspect (Jenkins / GitHub Actions).

Principle of least privilege

The Docker Official Images build system adheres strictly to the principle of least privilege (POLP), for example, by restricting writes for each architecture to architecture-specific build agents.

Updated build system

Ensuring the security of Docker Official Images builds and images is paramount. The Docker Official Images build system is kept up to date through automated builds, regular security audits, collaboration with upstream projects, ongoing testing, and security patches.

Vulnerability reports and continuous monitoring

Courtesy of Docker Scout, vulnerability insights are available for all Docker Official Images and are continuously updated as new vulnerabilities are discovered. We are committed to continuously monitoring our images for security issues and addressing them promptly. For example, we were among the first to provide reasoned guidance and remediation for the recent xz supply chain attack. We also use insights and remediation guidance from Docker Scout, which surfaces actionable insights in near-real-time by updating CVE results from 20+ CVE databases every 20-60 minutes.
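If you want to inspect these vulnerability insights from the command line, the Docker Scout CLI can summarize them for any image. A minimal sketch, assuming a recent Docker Desktop or the docker-scout CLI plugin is installed; the image tag is just an example:

# One-line health summary of an image
docker scout quickview python:3.12-slim
# Full list of known CVEs in the image's packages
docker scout cves python:3.12-slim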
Software Bill of Materials (SBOM) and provenance attestations

We are committed to providing a complete and accurate SBOM and detailed build provenance as signed attestations for all Docker Official Images. This allows our users to have confidence in the origin of Docker Official Images and easily identify and mitigate any potential vulnerabilities.

Signature validation

We are working on integrating signature validation into our image pull and build processes. This will ensure that all Docker Official Images are verified before use, providing an additional layer of security for our users.

Increased update frequency

Docker Official Images provide the best of both worlds: the latest version of the software you want, built upon stable versions of Linux distributions. This allows you to use the latest features and fixes of the software you are running without having to wait for a new package from your Linux distribution or being forced to use an unstable version of your Linux distribution. Further, we are working to increase the throughput of the Docker Official Images build infrastructure to allow us to support more frequent updates for larger swaths of Docker Official Images. As part of this effort, we are piloting builds on GitHub Actions and Docker Build Cloud.

Conclusion

Docker's leadership in security and protecting open source software has been established through Docker Official Images and other trusted content we provide our customers. We take a comprehensive approach to security, focusing on best practices, tooling, and community engagement, and we work closely with upstream projects and SIGs to address security issues promptly and proactively.

Docker Official Images provide a flexible and secure way for developers to build, ship, test, and run their applications. Docker Official Images are maintained through a partnership between the Docker Official Images community, upstream maintainers/volunteers, and Docker engineers, ensuring best practices and uniformity across the Docker Official Images catalog. Each Docker Official Image offers numerous image variants that cater to different use cases, with tags indicating the purpose of each variant. Developers can build using Docker tools and products with confidence, knowing that their applications are built on a secure, transparent foundation. Looking to dive in? Get started building with Docker Official Images today.

Learn more

Browse the available Docker Official Images.
Visit hub.docker.com and start developing.
Find Docker Official Images on GitHub.
Learn more about Docker Hub.
Read: Debian's Dedication to Security: A Robust Foundation for Docker Developers
How to Use the Postgres Docker Official Image
How to Use the NGINX Docker Official Image
How to Use the Alpine Docker Official Image
How to Use the Node Docker Official Image

View the full article
  2. Today, Amazon CodeCatalyst announces a new runtime Docker image for customers to use with their build and test actions within workflows, along with the ability to choose between curated images. The new image contains updated tooling, including Node 18. View the full article
  3. One question that often arises for both newcomers and seasoned Docker users alike is: "Where are Docker images stored?" Understanding the storage location of Docker images is crucial for managing disk space and optimizing Docker performance. Whether you're running Docker on macOS, Windows, or Linux, this blog post will guide you on how to locate Docker images on each platform. Let's get started!

Where Docker Images are Stored on macOS

The most important point to keep in mind is that Docker doesn't run natively on macOS. Instead, it operates within a virtual machine using HyperKit. For Docker Desktop on macOS, Docker stores Docker images within a disk image file. But what exactly is a disk image file? In simple terms, a disk image file is a single, large file that acts like a virtual "hard drive" for the Docker virtual machine. It contains the entire file system used by Docker, including images, containers, and volumes. To locate this disk image file, follow the steps below:

#1 Open Docker Desktop: Start Docker Desktop on your Mac if it's not already running.

#2 Open the Settings window: Once Docker Desktop is open, click on the Settings icon to open the Settings window.

#3 Navigate to the "Advanced" section: In the Settings window, click on the Resources tab, and then click on the Advanced section.

#4 Find the disk image location: Scroll down a bit in the Advanced section until you see Disk image location. Here, Docker Desktop displays the path to the disk image file on your Mac's file system.

Note that the disk image is managed by the Docker Desktop virtual machine and is not meant to be manually modified or directly accessed. Instead, you should interact with Docker images through Docker CLI commands.

Where Docker Images are Stored on Linux

Unlike macOS, Docker runs natively on Linux. This means that Docker directly interacts with the Linux kernel and filesystem without the need for a virtual machine. On Linux, all Docker-related data, including images, containers, volumes, and networks, is stored within the /var/lib/docker directory, known as the root directory. The exact directory inside the root directory where images are stored depends on the storage driver being used. For example, if the storage driver is overlay2, the image-related data will be stored within a subdirectory of /var/lib/docker/overlay2. To find out which storage driver you're using, you can run the docker info command and look for the Storage Driver field.

Note: While it's useful to know where Docker stores its image-related data, manually altering files or directories within /var/lib/docker is not recommended, as it can disrupt Docker's operations and lead to data loss.
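If you'd rather not scan the full docker info output, its Go-template formatting can pull out just the relevant fields. A minimal sketch, run on the Linux host:

docker info --format '{{.Driver}}'          # active storage driver, e.g. overlay2
docker info --format '{{.DockerRootDir}}'   # Docker's root directory, typically /var/lib/docker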
Where Docker Images are Stored on Windows

For Docker Desktop on Windows configured to use WSL 2 (recommended by Docker), Docker stores Docker images in a disk image. To locate it, follow the steps below:

#1 Open Docker Desktop: Start Docker Desktop on your machine if it's not already running.

#2 Open the Settings window: Once Docker Desktop is open, click on the Settings icon to open the Settings window.

#3 Navigate to the "Advanced" section: In the Settings window, click on the Resources tab, and then click on the Advanced section.

#4 Find the disk image location: Scroll down a bit in the Advanced section until you see Disk image location. Here, Docker Desktop displays the path to the disk image file on your Windows file system.

Cleaning up Images Used by Docker

As you use Docker, over time, your system might accumulate many unused and "dangling" images. An unused image is an image not currently used by any container (stopped or running). A dangling image is an image that has neither a repository name nor a tag; it appears in Docker image listings as <none>:<none>. These images can take up significant disk space, leading to inefficiencies and potential storage issues. To address this, Docker provides a simple command: docker image prune.

To remove all dangling images, run the following command:

docker image prune

This command will issue a prompt asking whether you want to remove all dangling images. Type y and hit enter to proceed.

To remove unused images as well, run the following command:

docker image prune -a

When you run this command, it'll issue a warning and prompt you to confirm whether you want to proceed. Type y and hit enter to proceed. You should regularly clean up unused and dangling Docker images to maintain an efficient and clutter-free Docker environment.

Conclusion

Now you know where Docker images are stored on three different platforms: Linux, macOS, and Windows. If you're still unable to find the location where Docker images are stored, which may be due to a specific configuration not covered in this blog post, consider asking questions on platforms such as Stack Overflow and the Docker Community forum. A deep understanding of Docker images, including knowledge of how to create and manage them, is crucial for mastering Docker. To help you with this, check out the blog posts below:

How to Build a Docker Image With Dockerfile From Scratch
How to Create Docker Image From a Container?
How to Remove Unused and Dangling Docker Images?
Why and How to Tag a Docker Image?
What Are Docker Image Layers and How Do They Work?
How to Run a Docker Image as a Container?

Interested in learning more about Docker? Check out the following courses from KodeKloud:

Docker for the Absolute Beginner: This course will help you understand Docker using lectures and demos. You'll get a hands-on learning experience and coding exercises that will validate your Docker skills. Additionally, assignments will challenge you to apply your skills in real-life scenarios.

Docker Certified Associate Exam Course: This course covers all the required topics from the Docker Certified Associate Exam curriculum. The course offers several opportunities for practice and self-assessment. There are hundreds of research questions in multiple-choice format, practice tests at the end of each section, and multiple mock exams that closely resemble the actual exam pattern.

View the full article
  4. Docker images play a pivotal role in containerized application deployment. They encapsulate your application and its dependencies, ensuring consistent and efficient deployment across various environments. However, security is a paramount concern when working with Docker images. In this guide, we will explore security best practices for Docker images to help you create and maintain secure images for your containerized applications.

1. Introduction: The Significance of Docker Images

Docker images are at the core of containerization, offering a standardized approach to packaging applications and their dependencies. They allow developers to work in controlled environments and empower DevOps teams to deploy applications consistently across various platforms. However, the advantages of Docker images come with security challenges, making it essential to adopt best practices to protect your containerized applications. View the full article
  5. Docker is a compelling platform to package and run web applications, especially when paired with one of the many Platform-as-a-Service (PaaS) offerings provided by cloud platforms. NGINX has long provided DevOps teams with the ability to host web applications on Linux and also provides an official Docker image to use as the base for custom web applications. In this post, I explain how DevOps teams can use the NGINX Docker image to build and run web applications on Docker. View the full article
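As a concrete illustration of the pattern that post describes, here is a minimal, hypothetical Dockerfile sketch that uses the official NGINX image as a base for a static site; the site/ directory and the stable tag are assumptions, not details from the post:

# Hypothetical static-site image built on the official NGINX base image
FROM nginx:stable
# Replace the default content with the site's files (site/ is assumed)
COPY site/ /usr/share/nginx/html/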
  6. What do Docker and IKEA have in common? Both companies have changed how products are built, stored, and shipped. In IKEA's case, they created the market of flat-packed furniture items, which made everything from shipping, warehousing, and delivering their furniture to the end location much easier and more cost-effective. This parallels what Docker has done for developers. Docker has changed the way that software is built, shipped, and stored, with Docker images taking up much less "shelf" space. In this post, contributing authors Karan Honavar and Fernando Dorado Rueda from IKEA walk through their MLOps solution, built with Docker... View the full article
  7. Introduction

Many applications built today, or modernized from monoliths, use microservice architectures. The microservice architecture makes applications easier to scale and faster to develop, which enables innovation and accelerates time-to-market for new features. Microservices also provide lifecycle autonomy, enabling applications to have independent build and deploy processes, which provides technological freedom (they can be implemented in different programming languages) and scaling flexibility (they can scale up or down independently based on workload utilization).

While microservices provide a lot of flexibility, building and deploying them while ensuring that the right application versions and required dependencies are used is a tedious process. This is where containers come in. Microservices can be packaged into a single lightweight and standalone executable artifact called a container image that includes everything needed to run an application. The process of packaging a microservice into a container image is called containerization. Containerization offers many benefits, such as portability, which allows containers to be deployed to different infrastructures. It also offers fault isolation, which ensures that containers run as isolated processes within their own user space in the host OS, so that one container's crash or failure doesn't impact the others, and it provides ease of management for deployment and version management.

With the benefits offered by microservices and subsequent containerization, the creation of container images has increased at a rapid scale. As the use of containerized applications continues to grow, it is important to ensure that your container images are optimized, secure, and reliable. Taking these tenets into account, in this post we discuss best practices for building better container images for use on Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and other services. For container image repositories, we focus on Amazon Elastic Container Registry (Amazon ECR).

Building better images

We'll look at the following approaches to build better images. While this isn't an exhaustive list, these topics provide a good base for your image builds, and you can adopt them as you wish.

Walkthrough

Use trusted images and source

Using a base image from a trusted source can improve the security and reliability of your container. This is because you can be confident that the base image has been thoroughly tested and vetted. It can also reduce the burden of establishing provenance, because you only need to consider the packages and libraries that you include in your image, rather than the entire base image. Here is an example of a Dockerfile using the official Python base image from the Amazon ECR repository. In fact, all of the Docker official images are available on the Amazon ECR Public Gallery.

Note: The code samples in this blog are for Docker. If you don't have Docker, then please refer to the Docker installation guide for information about how to install Docker on your particular operating system. Alternatively, you can also consider using Finch, an open source client for container development.
FROM public.ecr.aws/docker/library/python:slim

# Install necessary packages
RUN pip install flask

COPY app.py /app/
ENTRYPOINT ["python", "/app/app.py"]

Here's another example, using the latest version (as of the date of this post) of the Amazon Linux image, which is a secure and lightweight Linux distribution provided by AWS:

FROM public.ecr.aws/amazonlinux/amazonlinux:2023

It is important to keep images up to date by regularly updating to the latest secure versions of the software and libraries included in the image. As new versions of the images get created, they should be explicitly tagged with versions such as v1, v2, or rc_03122023, instead of being tagged as latest. Using explicit tags instead of the latest tag can prevent situations where the image with the latest tag isn't actually updated and instead gives a false appearance that the image contains the latest version of the application. If you're confident in your automation, then vanity tags such as latest or prod might be acceptable to use, but avoiding them also reduces ambiguity. This avoids confusion about which application version is actually in use.

Once images are created, they can be pushed into the Amazon ECR repository for secure storage and highly available distribution. Amazon ECR encrypts data at rest and offers lifecycle management, replication, and caching features. Amazon ECR can also scan the images to help in identifying software vulnerabilities through Basic and Enhanced Scanning. Stored images from Amazon ECR can then be pulled and run by services such as Amazon ECS, Amazon EKS, or other services and tools. Here is an example of using the AWS Command Line Interface (AWS CLI) and the Docker CLI to pull a versioned image from Amazon ECR.

Step 1 – Authenticate your Docker client to the Amazon Linux public registry. Authentication tokens are valid for 12 hours. For more information, see Private registry authentication. Alternatively, you can also use the Amazon ECR Docker Credential Helper, which is a credential helper for the Docker daemon that makes it easier to use Amazon Elastic Container Registry; it automatically gets credentials for Amazon ECR on docker push and docker pull. Note that this would only be required for Amazon ECR; ECR Public doesn't need authentication.

$ aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

Step 2 – Pull the Amazon Linux container image using the docker pull command. To view the Amazon Linux container image on the Amazon ECR Public Gallery, see Amazon ECR Public Gallery – amazonlinux.

docker pull public.ecr.aws/amazonlinux/amazonlinux:2023

Step 3 – Run the container locally.

docker run -it public.ecr.aws/amazonlinux/amazonlinux:2023 /bin/bash
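The steps above cover pulling; pushing your own versioned image to a private Amazon ECR repository is similar. A hedged sketch, where the account ID, region, repository name, and tag are placeholders rather than values from the post:

# Authenticate to the private registry (placeholders throughout)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Apply an explicit version tag, then push it
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v1.0.3
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v1.0.3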
Bonus

While Amazon Linux is a secure, trusted, and lightweight container image, AWS also offers a purpose-built container image called Bottlerocket, which is a Linux-based open-source operating system. Bottlerocket includes only the essential software required to run containers, and ensures that the underlying software is always secure. Additionally, Bottlerocket is available at no cost as an Amazon Machine Image (AMI) for Amazon Elastic Compute Cloud (Amazon EC2) and can be used on Amazon EKS and Amazon ECS setups.

Sign container images

Container image signing can help verify that the trusted images you have selected and vetted are in use throughout your build pipelines and deployments. This process involves trusted parties cryptographically signing images so that they can be verified when used. This can also be used to sign and verify images throughout your organization. Container image signing is fairly lightweight. Because container images and runtimes have built-in integrity checks and all image content is immutable, signing solutions can simply sign image manifests. Signatures are stored alongside images in the registry, and at any point in time a consumer of the image can retrieve its signature and verify it against a trusted publisher's identity or public key. It is a good practice to sign and verify container images as part of overall security practices, and verifying public content establishes trust in content authenticity. With signed images, you can implement a solution that blocks images from running in your container environment unless they can be verified as trusted. This not only guarantees the authenticity of container images but also reduces the need for additional work to validate the container images in other ways prior to their use. For a full walkthrough of such a solution, see the recent launch of Container Image Signing with AWS Signer and Amazon EKS.

Limit the number of layers

It is a good practice to limit the number of layers in your container images. Having a large number of layers can increase the size and complexity of the image, which can make it more difficult to manage and maintain. For example, consider the following Dockerfile:

FROM public.ecr.aws/docker/library/alpine:3.17.2

# Install all necessary dependencies in a single layer
RUN apk add --no-cache \
    curl \
    nginx \
    && rm -rf /var/cache/apk/*

# Set nginx as the entrypoint
ENTRYPOINT ["nginx"]

In this example, we install all necessary dependencies in a single layer, and then remove the cache to reduce the number of layers in the final image. This results in a smaller and more efficient image that is easier to manage and maintain.

Make use of multi-stage builds

A multi-stage build is a technique that allows you to separate the build tools and dependencies from the final image content. This can be beneficial for several reasons:

Reducing image size: By separating the build stage from the runtime stage, you can include only the necessary dependencies in the final image, rather than including all of the build tools and libraries. This can significantly reduce the size of your final image and reduce the total number of image layers.

Improved security: By including only the necessary dependencies in the final image, you can reduce the attack surface of the image. This is because there are fewer packages and libraries that could potentially have vulnerabilities.

Here is an example of a multi-stage build in a Dockerfile:

# Build stage
FROM public.ecr.aws/bitnami/golang:1.18.10 as builder
WORKDIR /app
COPY . .
RUN go build -o app .

# Runtime stage
FROM public.ecr.aws/amazonlinux/amazonlinux:2023
WORKDIR /app
# Copy only the compiled binary from the build stage
COPY --from=builder /app/app ./
ENTRYPOINT ["./app"]

In this example, we first use the golang:1.18.10 image as the build stage, in which we copy the source code and build the binary. Then, in the runtime stage, we use the amazonlinux:2023 container image, which is a minimal base image, and copy the built binary from the build stage. This results in a smaller and more secure final image that only includes the necessary dependencies.
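A side note on working with multi-stage Dockerfiles like the one above: docker build can target a single stage, which is handy for inspecting the build tooling in isolation. A small sketch with hypothetical tag names:

# Build only the first stage, for debugging the build environment
docker build --target builder -t my-go-app:build-debug .
# Build the full Dockerfile; only the final stage's layers end up in the tag
docker build -t my-go-app:v1 .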
Secure your secrets

Secrets management is a technique for securely storing and managing sensitive information such as passwords and encryption keys, and also for externalizing environment-specific application configuration. Secrets should not be stored in an image, but instead stored and managed externally in a service such as AWS Secrets Manager. For secrets that are required by your application during runtime, you can retrieve them from AWS Secrets Manager in real time, or use a container orchestration service such as Amazon ECS or Amazon EKS to mount secrets as volumes on the container so that the application can read them from the mounted volume. Details on how secrets can be used on Amazon EKS and on Amazon ECS can be found in the respective documentation.

If secrets are required during build time, then you can inject ephemeral secrets using Docker's secret mount type. For example, the following Dockerfile uses a multi-stage build process where the first stage, called builder_image_stage, uses mount=type=secret to load AWS credentials. The second stage then uses the build artifacts from the builder_image_stage. The resulting final image from the second stage won't contain any build tools or secrets from the builder_image_stage, and therefore doesn't retain the secrets.

FROM public.ecr.aws/amazonlinux/amazonlinux:2023 AS builder_image_stage
WORKDIR /tmp

# Install the AWS CLI in the build stage only
RUN yum install -y unzip && \
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
    unzip /tmp/awscliv2.zip && \
    /tmp/aws/install

# The secret is mounted only for the duration of this RUN instruction
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws s3 cp s3://... /app/...
# cd /app
# make

FROM public.ecr.aws/amazonlinux/amazonlinux:2023
COPY --from=builder_image_stage /app/files_from_builder_image /app
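For completeness, a hedged sketch of how a build like this might be invoked so the secret is actually supplied. The id must match the --mount=type=secret line in the Dockerfile; the credentials path and tag are placeholders:

# BuildKit is required for --secret (it is the default builder in recent Docker)
DOCKER_BUILDKIT=1 docker build \
  --secret id=aws,src=$HOME/.aws/credentials \
  -t my-app:v1.0.3 .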
Reduce the attack surface

Images, by default, are not secure and can potentially be vulnerable to attacks. The following approaches can help mitigate the attack surface.

Build from scratch

Most images are based on an existing base image such as Amazon Linux, Ubuntu, Alpine, etc. However, if you need to create a completely new base image with your own distribution, or if your application or service can be deployed as a single statically linked binary, you can use the special scratch image as a starting point. The scratch base image is empty and doesn't contain any files. All of the official base images are actually built on top of scratch, and you can place any content on subsequent image layers. Using FROM scratch is basically a signal to the image build process that the layer built by the next command in the Dockerfile should be the first layer of the image. Scratch gives you the ability to completely control the contents of your image starting from the beginning. It is important to note that when using scratch, you'll be responsible for the entire toolchain and all content in the resulting container image. Also, container scanning solutions, such as Amazon ECR Enhanced Scanning, do not support vulnerability scanning of images without an OS packaging system in place. If your container images have many environmental dependencies, use interpreted languages, or need vulnerability scanning, then using a minimal base image instead of scratch may be better.

Here are a few examples of using scratch. The first example adds a hello-world executable to the image as the first layer:

FROM scratch
ADD hello-world /
CMD ["/hello-world"]

The second example copies a root filesystem to the root of the image as the first layer. The container uses this root filesystem to spin off processes from binaries and libraries in that directory:

FROM scratch
ADD root-filesystem.tar.gz /
CMD ["bash"]

Remove unwanted packages

It is a good practice to remove any unwanted or unnecessary packages from your container image to reduce the attack surface and size of the image. For example, if your application does not require certain libraries or utilities, you should remove them from the image. Here is an example of removing the package manager, which may be unnecessary in a final image:

FROM public.ecr.aws/ubuntu/ubuntu:20.04_stable

# Remove the package manager
RUN apt-get -y --allow-remove-essential remove apt

Use a Minimal Base Image

Choosing a minimal base image, such as a Distroless image, can help to restrict what is included in your runtime container. Distroless images include only your application and the runtime dependencies necessary for the application to run; they don't contain package managers or shells. This can improve the security and performance of your container by reducing the attack surface, as there are fewer packages and libraries that could potentially have vulnerabilities or add unnecessary overhead. Here is an example of using the Distroless base image for a Go application. This base image contains a minimal Linux, glibc-based system and is intended for use directly by mostly-statically compiled languages like Go, Rust, or D. For more information, see the GitHub documentation on the base distroless image.

FROM gcr.io/distroless/base
COPY app /app
ENTRYPOINT ["/app"]

Secure image configuration

Images by default may not be secure and can allow privileged access. It is best to ensure that they are set up with least-privileged access and to remove configurations that are unnecessary for your application. One such approach is to run containers as a non-root user. Processes within Docker containers have root privileges by default to both the container and the underlying host. This opens up the container and host to security vulnerabilities that can be exploited. To prevent these vulnerabilities, it is a good practice to minimize the privileges granted to your container images, only giving the necessary permissions to perform required tasks. This helps to reduce the attack surface and the potential for privilege escalation. In a Dockerfile, you can use the USER directive to specify a non-root user to run the container, or use the securityContext field in the Kubernetes pod specification to specify a non-root user and set group ownership and file permissions. Furthermore, on Kubernetes with Amazon EKS, you can also limit the default capabilities assigned to a pod as explained in the EKS Best Practices Guides. For example, consider the following Dockerfile:

FROM public.ecr.aws/amazonlinux/amazonlinux:2023

# 1) install package for adduser
# 2) create new non-root user named myuser
# 3) create app folder
# 4) remove package for adduser
RUN yum install -y shadow-utils && \
    adduser myuser && \
    mkdir /app && \
    yum remove -y shadow-utils

COPY . /app

# Set group ownership and file permissions
RUN chown -R myuser:myuser /app

# Run as non-root user
USER myuser

# Set entrypoint
ENTRYPOINT ["/app/myapp"]

In this example, we create a non-root user named myuser and set the group ownership and file permissions of the application files to this user. We then set the entrypoint to run as the myuser user, rather than as root. This helps to reduce the potential for privilege escalation and increase the security of the container.
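Complementing the Dockerfile-level USER directive, a similar effect can be forced at run time. A minimal sketch; the UID:GID pair and image tag are arbitrary examples, not values from the post:

# Run the container process as a non-root UID:GID,
# even if the image itself does not set USER
docker run --rm --user 1000:1000 my-app:v1.0.3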
Tag Images

When building container images, it is recommended to use tags to identify them. If the image is built without any tags, then Docker will assign latest as the default tag. For instance, when building an image, you may use the following command:

docker build -t my-pricing-app .

With this command, Docker automatically assigns the latest tag to the image, since no specific tag was provided. However, building an updated image using the same command assigns the latest tag to the new image. This can cause ambiguity and potential deployment of outdated or unverified versions. To avoid this, consider tagging the image descriptively, as below:

docker build -t my-pricing-app:<git-commit-hash>-1.0-prod .

This command builds the my-pricing-app image tagged as <git-commit-hash>-1.0-prod. This allows a specific version of the image to be deployed, unlike the latest tag. Additionally, you can also use the docker tag command to tag existing images.

It is important to note that Docker tags are mutable by design, which allows you to build a new image with an existing tag. If you need immutable tags, then you can consider using Amazon Elastic Container Registry (Amazon ECR) to store your images. Amazon ECR supports immutable tags, a capability that prevents image tags from being overwritten. This enables users to rely on the descriptive tags of an image as a reliable mechanism to track and uniquely identify images, and also to trust that they have not been tampered with. More details can be found at Image tag mutability – Amazon ECR.

Conclusion

In this post, we showed you how to build better container images using best practices. As the adoption of containers increases, it becomes important to ensure that container images are secure, lightweight, and built from trusted sources. The options described in this post can act as a starting point for building such images. Additionally, this post also showed how you can use the secure and fully managed Amazon ECR to manage container images. View the full article
  8. Today we are pleased to announce that Docker and Snyk have extended our existing partnership to bring vulnerability scanning to Docker Official and certified images. As the exclusive scanning partner for these two image categories, Snyk will work with Docker to provide developers with insights into our most popular images. This builds on our announcement earlier this year, in which Snyk scanning was integrated into Docker Desktop and Docker Hub, and it means that developers can now incorporate vulnerability assessment along each step of the container development and deployment process.

Docker Official images represent approximately 25% of all of the pull activity on Docker Hub. Docker Official images are used extensively by millions of developers and development teams worldwide to build and run tens of millions of containerized applications. By integrating vulnerability scanning from Snyk, users are now able to get more visibility into the images and have a higher level of confidence that their applications are secure and ready for production.

Docker Official images that have been scanned by Snyk will be available early next year. You can read more about it from Snyk here, and you can catch Docker CEO Scott Johnson and Snyk CEO Peter McKay discussing the partnership during the SnykCon user conference keynote on Thursday morning, October 22, at 8:30 AM Pacific. You can register for SnykCon at http://bit.ly/SnykConDocker

Additional Resources

Get started with scanning in the desktop now: https://www.docker.com/get-started
Learn more about scanning in Docker Hub: https://goto.docker.com/on-demand-adding-container-security.html
Learn more about scanning in Docker Desktop: https://goto.docker.com/on-demand-find-fix-container-image-vulnerabilities.html

The post Docker and Snyk Extend Partnership to Docker Official and Certified Images appeared first on Docker Blog. View the full article