Showing results for tags 'distroless'.

Found 6 results

1. Containerization has drastically improved application security by giving engineers greater control over their applications' runtime environments. However, maintaining the security posture of those applications requires a significant time investment, given the daily discovery of new vulnerabilities and the regular release cadence of languages and frameworks. The concept of "distroless" images promises to greatly reduce the time needed to keep applications secure by eliminating most of the software contained in typical container images. This approach also reduces the time teams spend remediating vulnerabilities, letting them focus only on the software they actually use. In this article, we explain what makes an image distroless, describe tools that make creating distroless images practical, and discuss whether distroless images live up to their potential.

What's a distro?

A Linux distribution is a complete operating system built around the Linux kernel, comprising a package management system, GNU tools and libraries, additional software, and often a graphical user interface. Common Linux distributions include Debian, Ubuntu, Arch Linux, Fedora, Red Hat Enterprise Linux, CentOS, and Alpine Linux (which is more common in the world of containers). These distributions, like most Linux distros, take security seriously, with teams working diligently to release frequent patches and updates for known vulnerabilities.

A key challenge every Linux distribution faces is the usability/security dilemma. On its own, the Linux kernel is not very usable, so distributions include many utility commands to cover a wide array of use cases. Having the right utilities available without installing additional packages greatly improves a distro's usability. The downside of that usability, however, is a larger attack surface to keep up to date. A distro must strike a balance between the two, and different distros take different approaches. Keep in mind that a distro emphasizing usability is not "less secure" than one that does not; it simply means that the distro with more utility packages requires more effort from its users to keep it secure.

Multi-stage builds

Multi-stage builds allow developers to separate build-time dependencies from runtime ones. Developers can start from a full-featured build image with all the necessary components installed, perform the necessary build steps, and then copy only the result of those steps to a more minimal or even empty image, called "scratch". With this approach, there is no need to clean up dependencies and, as an added bonus, the build stages are cacheable, which can considerably reduce build time.

The following example shows a Go program taking advantage of multi-stage builds. Because the Go runtime is compiled into the binary, only the binary and the root certificates need to be copied to the blank-slate image.

FROM golang:1.21.5-alpine as build
WORKDIR /
COPY go.* .
RUN go mod download
COPY . .
RUN go build -o my-app

FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=build /my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]
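Building this is an ordinary docker build; as a quick usage sketch (the image name my-app is a placeholder), the scratch-based final stage is all that ends up in the resulting image:

docker build -t my-app .
# The result contains little more than the binary and the certificates
docker image ls my-app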
BuildKit

BuildKit, the current engine used by docker build, helps developers create minimal images thanks to its extensible, pluggable architecture. It provides the ability to specify alternative frontends (the default being the familiar Dockerfile) to abstract away and hide the complexity of creating distroless images. These frontends can accept more streamlined and declarative inputs for builds and can produce images that contain only the software needed for the application to run. The following example shows the input for mopy, a frontend for creating Python applications by Julian Goede.

#syntax=cmdjulian/mopy
apiVersion: v1
python: 3.9.2
build-deps:
  - libopenblas-dev
  - gfortran
  - build-essential
envs:
  MYENV: envVar1
pip:
  - numpy==1.22
  - slycot
  - ./my_local_pip/
  - ./requirements.txt
labels:
  foo: bar
  fizz: ${mopy.sbom}
project: my-python-app/

So, is your image really distroless?

Thanks to newer image-building tools like multi-stage builds and BuildKit, it is now much more practical to create images that contain only the required software and its runtime dependencies. However, many images claiming to be distroless still include a shell (usually Bash) and/or BusyBox, which provides many of the commands of a full Linux distribution, including wget, and can leave containers vulnerable to living-off-the-land (LOTL) attacks.
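A quick way to check an image you suspect is not truly shell-free, sketched here with the Docker CLI (my-image is a placeholder): if the image genuinely ships no shell, the run below fails with an "executable file not found" error instead of printing the message.

docker run --rm --entrypoint sh my-image -c "echo this image ships a shell"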
This raises the question: why would an image trying to be distroless still include key parts of a Linux distribution? The answer typically involves container initialization. Developers often have to make their applications configurable to meet the needs of their users. Most of the time, those configurations are not known at build time, so they need to be applied at run time. Often, these configurations are applied using shell initialization scripts, which in turn depend on common Linux utilities such as sed, grep, cp, etc. When this is the case, the shell and utilities are only needed for the first few seconds of the container's lifetime. Luckily, there is a way to create true distroless images while still allowing initialization, using a facility available in most container orchestrators: init containers.

Init containers

In Kubernetes, an init container is a container that starts and must complete successfully before the primary container can start. By using a non-distroless container as an init container that shares a volume with the primary container, the runtime environment and application can be configured before the application starts. The lifetime of that init container is short (often just a couple of seconds), and it typically doesn't need to be exposed to the internet. Much as multi-stage builds let developers separate build-time dependencies from runtime dependencies, init containers let developers separate initialization dependencies from execution dependencies. The concept may be familiar if you use relational databases, where an init container is often used to perform schema migration before a new version of an application is started.

Kubernetes example

Here are two examples of using init containers. First, using Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: kubecon-postgress-pod
  labels:
    app.kubernetes.io/name: KubeConPostgress
spec:
  containers:
  - name: postgress
    image: laurentgoderre689/postgres-distroless
    securityContext:
      runAsUser: 70
      runAsGroup: 70
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  initContainers:
  - name: init-postgress
    image: postgres:alpine3.18
    env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: kubecon-postgress-admin-pwd
          key: password
    command: ['docker-ensure-initdb.sh']
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  volumes:
  - name: db
    emptyDir: {}

> kubectl apply -f pod.yml && kubectl get pods
pod/kubecon-postgress-pod created
NAME                    READY   STATUS     RESTARTS   AGE
kubecon-postgress-pod   0/1     Init:0/1   0          0s
> kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
kubecon-postgress-pod   1/1     Running   0          10s

Docker Compose example

The init container concept can also be emulated in Docker Compose for local development, using service dependencies and conditions.

services:
  db:
    image: laurentgoderre689/postgres-distroless
    user: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/
    depends_on:
      db-init:
        condition: service_completed_successfully

  db-init:
    image: postgres:alpine3.18
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data/
    user: postgres
    command: docker-ensure-initdb.sh

volumes:
  pgdata:

> docker-compose up
[+] Running 4/0
 ✔ Network compose_default      Created
 ✔ Volume "compose_pgdata"      Created
 ✔ Container compose-db-init-1  Created
 ✔ Container compose-db-1       Created
Attaching to db-1, db-init-1
db-init-1  | The files belonging to this database system will be owned by user "postgres".
db-init-1  | This user must also own the server process.
db-init-1  |
db-init-1  | The database cluster will be initialized with locale "en_US.utf8".
db-init-1  | The default database encoding has accordingly been set to "UTF8".
db-init-1  | The default text search configuration will be set to "english".
db-init-1  | [...]
db-init-1 exited with code 0
db-1  | 2024-02-23 14:59:33.191 UTC [1] LOG:  starting PostgreSQL 16.1 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
db-1  | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db-1  | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db-1  | 2024-02-23 14:59:33.194 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1  | 2024-02-23 14:59:33.196 UTC [9] LOG:  database system was shut down at 2024-02-23 14:59:32 UTC
db-1  | 2024-02-23 14:59:33.198 UTC [1] LOG:  database system is ready to accept connections

As demonstrated by the previous example, an init container can be used alongside a container to remove the need for general-purpose software and allow the creation of true distroless images.

Conclusion

This article explained how Docker build tools allow the separation of build-time dependencies from runtime dependencies to create "distroless" images. For example, using init containers lets developers separate the logic needed to configure a runtime environment from the environment itself, yielding a more secure container. This approach also helps teams focus their efforts on the software they use and find a better balance between security and usability.

Learn more

Subscribe to the Docker Newsletter.
Get the latest release of Docker Desktop.
Vote on what's next! Check out our public roadmap.
Have questions?
The Docker community is here to help. New to Docker? Get started. View the full article
2. We are racing toward the finish line at KubeCon + CloudNativeCon Europe, March 19 – 22, 2024 in Paris, France. Join the Docker "pit crew" at Booth #J3 for an incredible racing experience, new product demos, and limited-edition SWAG. Meet us at our KubeCon booth, sessions, and events to learn about the latest trends in AI productivity and best practices in cloud-native development with Docker.

At our KubeCon booth (#J3), we'll show you how building in the cloud accelerates development and simplifies multi-platform builds with a side-by-side demo of Docker Build Cloud. Learn how Docker and Testcontainers Cloud provide a seamless integration within the testing framework to improve the quality and speed of application delivery. It's not all work, though: join us at the booth for our Megennis Motorsport Racing experience and try to beat the best! Take advantage of this opportunity to connect with the Docker team, learn from the experts, and contribute to the ever-evolving cloud-native landscape. Let's shape the future of cloud-native technologies together at KubeCon!

Deep dive sessions from Docker experts

Is Your Image Really Distroless? — Docker software engineer Laurent Goderre will dive into the world of "distroless" Docker images on Wednesday, March 20. In this session, Goderre will explain the significance of separating build-time and run-time dependencies to enhance container security and reduce vulnerabilities. He'll also explore strategies for configuring runtime environments without compromising security or functionality. Don't miss this must-attend session for KubeCon attendees keen on fortifying their Docker containers.

Simplified Inner and Outer Cloud Native Developer Loops — Docker Staff Community Relations Manager Oleg Šelajev and Diagrid Customer Success Engineer Alice Gibbons tackle the challenges of developer productivity in cloud-native development. On Wednesday, March 20, they will present tools and practices to bridge the gap between development and production environments, demonstrating how a unified approach can streamline workflows and boost efficiency across the board.

Engage, learn, and network at these events

Security Soiree: Hands-on cloud-native security workshop and party — Join Sysdig, Snyk, and Docker on March 19 for cocktails, team photos, music, prizes, and more at the Security Soiree. Listen to a compelling panel discussion led by industry experts, including Docker's Director of Security, Risk & Trust, Rachel Taylor, followed by an evening of networking and festivities. Get tickets to secure your invitation.

Docker Meetup at KubeCon: Development & data productivity in the age of AI — Join us at our meetup during KubeCon on March 21 and hear insights from Docker, Pulumi, Tailscale, and New Relic. This networking mixer at Tonton Becton Restaurant promises candid discussions on enhancing developer productivity with the latest AI and data technologies. Reserve your spot now for an evening of casual conversation, drinks, and delicious appetizers.

See you March 19 – 22 at KubeCon + CloudNativeCon Europe

We look forward to seeing you in Paris — safe travels and prepare for an unforgettable experience!

Learn more

New to Docker? Create an account.
Learn about Docker Build Cloud.
Subscribe to the Docker Newsletter.
Read about what rolled out in Docker Desktop 4.27, including synchronized file shares, Docker Init GA, a private marketplace for extensions, Moby 25, support for Testcontainers with ECI, Docker Build Cloud, and Docker Debug Beta.

View the full article
3. Google Kubernetes Engine (GKE) traces its roots back to Google's development of Borg in 2004, an internal Google system for managing clusters and applications. In 2014, Google introduced Kubernetes, an open-source platform based on Borg's principles, which rapidly gained popularity for automating containerized application deployment. In 2015, Google launched GKE, a managed Kubernetes service on Google Cloud Platform (GCP), streamlining infrastructure management for users. Over the years, GKE saw substantial growth, introducing Autopilot mode and surpassing 2 million managed clusters by 2019. Throughout 2020 and 2021, GKE continued evolving with features like Workload Identity and Anthos Config Management, aiding in access management and configuration. In 2022, GKE maintained its status as a leading managed Kubernetes platform, introducing innovations like Anthos Service Mesh and Anthos Multi-Cluster Ingress. GKE's trajectory underscores the significance of containerization and cloud-native technology, poised to further empower the development, deployment, and management of modern applications amid growing industry adoption.

Ubuntu can be used as the base operating system for GKE nodes, and GKE can be used to deploy and manage containerized applications on Ubuntu nodes. Here are some of the benefits of using Ubuntu and GKE together:

- Flexible: Ubuntu can be used with a variety of Kubernetes distributions, so development teams get a standardized experience across multiple clouds, such as Azure Kubernetes Service (AKS) and Amazon EKS.
- Secure: Timely updates and patches, building on Ubuntu's security track record.
- Stable: Long-term support with a consistent, predictable lifecycle and support commitment. Each GKE version has a customized version of Ubuntu that supports its latest features.
- Seamless: Developer-friendly, smooth experience from development to production.
- Small: Ultra-small, resource-efficient, yet robust images with chiselled (distroless) Ubuntu.

Let's go through the steps needed to deploy Ubuntu nodes in GKE:

1. In the Google Cloud console, locate Kubernetes Engine.
2. In "Clusters", click CREATE.
3. Choose "Standard: You manage your cluster" in the pop-up window.
4. On the left side of the console, click "default-pool".
5. Click "Nodes".
6. Find "Ubuntu with containerd (ubuntu_containerd)" in the "Image type" drop-down menu.
7. Choose "Ubuntu with containerd (ubuntu_containerd)" and click "CREATE" at the bottom.

Once the cluster is running, you can check "OS image" in the "Node details". A scripted alternative using the gcloud CLI is sketched below.
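The following is a rough equivalent of those console steps for scripted setups, assuming the gcloud CLI and the UBUNTU_CONTAINERD image-type identifier; the cluster name, zone, and node count are placeholders.

# Create a three-node GKE cluster whose nodes run Ubuntu with containerd
gcloud container clusters create my-ubuntu-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --image-type UBUNTU_CONTAINERD

# Confirm the node pool's OS image type after creation
gcloud container node-pools describe default-pool \
    --cluster my-ubuntu-cluster \
    --zone us-central1-a \
    --format "value(config.imageType)"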
View the full article

4. We're excited to announce our participation in Alamo Ace 2023 as Platinum Sponsors. As our collaboration with US Federal and Defense agencies strengthens, we look forward to meeting our partners and customers on-site to discuss the critical topics for 2024: cybersecurity, artificial intelligence, and open-source innovation.

With our commitment to securing open source, this year we announced the general availability of Ubuntu Pro. This subscription secures an organisation's Linux estate from the OS to the application level. Pro is available on-prem, in the cloud, and in air-gapped environments, automating security patching, auditing, access management, and compliance. For example, Ubuntu Pro delivers FIPS compliance and automation for security standards such as DISA's Ubuntu STIG, and CIS hardening via the Ubuntu Security Guide (USG). One of the growing concerns for 2024 is application security: many open-source packages for applications and toolchains exist in a space with no guarantee or SLA for security patching. With Ubuntu Pro, we secure over 23,000 open source packages.

Schedule a meeting with our DoD representative Devin Breen, or Federal representative Kelley Riggs, for an in-person discussion or a demo!

Secure Open Source with Canonical Ubuntu, NVIDIA, USAF & Platform One
Joint Panel Discussion | November 14th 2:15, San Xavier

Join Canonical, NVIDIA, Air Force, and Platform One leadership for a compelling discussion on how trusted software ecosystems help provide visibility into security threats, mitigate supply chain attacks, and minimize the impact of zero-day vulnerabilities. We will discuss tactical vs enterprise IT architectures and provide real-world examples of how Zero-Distance partner relationships are helping bring this trusted ecosystem and the latest AI/ML capability to the DoD. We will also discuss:

- Secure Containers – Zero-Distance Delivery to Iron Bank
- USAF perspective on Tactical vs Strategic Software Requirements
- Addressing Open-Source Bug/Fix/CVE with Upstream Maintainers
- Aligning AI with Security and Test and Evaluation – NVIDIA
- How does software security affect hardware choices?

Making Ubuntu available for DoD

Ubuntu STIG containers are now available in DoD Iron Bank. Our recent work with Platform One enabled a Zero-Distance approach in which we eliminated middlemen by delivering Ubuntu containers to a trusted source for DoD consumption. We've also discussed Chiselled Ubuntu containers, which benefit from combining Distroless and Ubuntu. The full recording of the talk is available below.

AI/ML solutions for the Public Sector

The public sector invests heavily in AI, aiming to impact real-time operational decisions and outcomes within the next 12 months. Organisations have kickstarted initiatives with different use cases in mind, such as smart cities or task automation, looking for tooling that enables them to run AI at scale. The obvious next step toward these goals is finding and integrating the right machine learning operations (MLOps) solutions, such as Charmed Kubeflow. Kubeflow helps professionals focus on the development and deployment of machine learning models, offering security patching, user management, and a wide range of integrations on top of any Kubernetes.

Read more about AI in the public sector.

Demos

Join us for two security demos. In the first demo, we will show how easy it is to deploy secure virtual machines running FIPS kernels in public clouds such as AWS, Azure, and GCP.
Canonical offers FIPS-enabled images with integrated pay-as-you-go billing, allowing for easy setup and deployment. The second demo will show our microcloud offerings by deploying secure container images from Platform One on MicroK8s, the simplest production-grade Kubernetes. MicroK8s makes it easy to deploy high-availability, pure-upstream K8s from developer workstations to production. A rough sketch of the Ubuntu Pro hardening steps featured in these demos follows below.
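For a flavor of what enabling these Pro features looks like on a single machine, here is a minimal sketch using the Ubuntu Pro client; the subscription token is a placeholder, and the CIS profile name is illustrative of the usg tooling rather than a guaranteed identifier.

# Attach the machine to an Ubuntu Pro subscription (token is a placeholder)
sudo pro attach <your-pro-token>

# Enable and install the Ubuntu Security Guide for CIS/STIG hardening
sudo pro enable usg
sudo apt install -y usg

# Audit against a CIS profile, then apply the hardening rules
sudo usg audit cis_level1_server
sudo usg fix cis_level1_server

# FIPS-certified kernel and crypto packages are enabled the same way
sudo pro enable fips-updates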
Schedule a meeting or in-person demo

Schedule a meeting with our DoD representative Devin Breen, or Federal representative Kelley Riggs, for an in-person discussion or a demo!

View the full article

5. Introduction

Many applications built today, or modernized from monoliths, use microservice architectures. The microservice architecture makes applications easier to scale and faster to develop, which enables innovation and accelerates time-to-market for new features. Microservices also provide lifecycle autonomy: applications have independent build and deploy processes, which gives them technological freedom (they can be implemented in different programming languages) and scaling flexibility (they can scale up or down independently based on workload utilization).

While microservices provide a lot of flexibility, building and deploying them while ensuring the right application versions and required dependencies are used is a tedious process. This is where containers come in. A microservice can be packaged into a single lightweight, standalone executable artifact called a container image that includes everything needed to run the application. This packaging process is called containerization. Containerization offers many benefits, such as portability, which allows containers to be deployed to different infrastructures. It also offers fault isolation, ensuring that containers run as isolated processes in their own user space on the host OS, so that one container's crash or failure doesn't impact the others, and it eases deployment and version management.

With the benefits offered by microservices and containerization, the creation of container images has increased at a rapid scale. As the use of containerized applications continues to grow, it is important to ensure that your container images are optimized, secure, and reliable. Taking these tenets into account, in this post we discuss best practices for building better container images for use on Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and other services. For container image repositories, we focus on Amazon Elastic Container Registry (Amazon ECR).

Building better images

We'll look at the following approaches to building better images. While this isn't an exhaustive list, these topics provide a good base for your image builds, and you can adopt them as you wish.

Walkthrough

Use trusted images and sources

Using a base image from a trusted source can improve the security and reliability of your container, because you can be confident the base image has been thoroughly tested and vetted. It also reduces the burden of establishing provenance, because you only need to consider the packages and libraries you include in your image rather than the entire base image.

Here is an example Dockerfile using the official Python base image from the Amazon ECR repository. In fact, all of the Docker official images are available in the Amazon ECR Public Gallery.

Note: The code samples in this blog are for Docker. If you don't have Docker, please refer to the Docker installation guide for information about how to install it on your particular operating system. Alternatively, you can also consider using Finch, an open source client for container development.
FROM public.ecr.aws/docker/library/python:slim

# Install necessary packages
RUN pip install flask
COPY app.py /app/
ENTRYPOINT ["python", "/app/app.py"]

Here's another example, using the latest version (as of the date of this post) of the Amazon Linux image, a secure and lightweight Linux distribution provided by AWS.

FROM public.ecr.aws/amazonlinux/amazonlinux:2023

It is important to keep images up to date by regularly updating to the latest secure versions of the software and libraries they include. As new versions of an image are created, they should be explicitly tagged with versions such as v1, v2, rc_03122023, etc., instead of being tagged as latest. Using explicit tags prevents situations where an image tagged latest isn't actually updated and gives a false appearance of containing the latest version of the application. If you're confident in your automation, vanity tags such as latest or prod might be acceptable to use, but avoiding them also reduces ambiguity about which application version is actually in use.

Once images are created, they can be pushed to an Amazon ECR repository for secure storage and highly available distribution. Amazon ECR encrypts data at rest and offers lifecycle management, replication, and caching features. Amazon ECR can also scan images to help identify software vulnerabilities through Basic and Enhanced Scanning. Stored images can then be pulled and run by services such as Amazon ECS, Amazon EKS, or other services and tools.

Here is an example of using the AWS Command Line Interface (AWS CLI) and the Docker CLI to pull a versioned image from Amazon ECR.

Step 1 – Authenticate your Docker client to the Amazon Linux public registry. Authentication tokens are valid for 12 hours; for more information, see Private registry authentication. Alternatively, you can use the Amazon ECR Docker Credential Helper, which automatically gets credentials for Amazon ECR on docker push and docker pull. Note that authentication is only required for private Amazon ECR registries; ECR Public doesn't need authentication for pulls.

$ aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

Step 2 – Pull the Amazon Linux container image using the docker pull command. To view the Amazon Linux container image in the Amazon ECR Public Gallery, see Amazon ECR Public Gallery – amazonlinux.

docker pull public.ecr.aws/amazonlinux/amazonlinux:2023

Step 3 – Run the container locally.

docker run -it public.ecr.aws/amazonlinux/amazonlinux:2023 /bin/bash

Bonus

While Amazon Linux is a secure, trusted, and lightweight container image, AWS also offers a purpose-built container operating system called Bottlerocket, a Linux-based open-source OS. Bottlerocket includes only the essential software required to run containers and ensures that the underlying software is always secure. Additionally, Bottlerocket is available at no cost as an Amazon Machine Image (AMI) for Amazon Elastic Compute Cloud (Amazon EC2) and can be used in Amazon EKS and Amazon ECS setups.

Sign container images

Container image signing can help verify that the trusted images you have selected and vetted are in use throughout your build pipelines and deployments. This process involves trusted parties cryptographically signing images so that they can be verified when used; it can also be used to sign and verify images throughout your organization. Container image signing is fairly lightweight: because container images and runtimes have built-in integrity checks and all image content is immutable, signing solutions can simply sign image manifests. Signatures are stored alongside images in the registry, and at any point in time a consumer of the image can retrieve its signature and verify it against a trusted publisher's identity or public key.

It is a good practice to sign and verify container images as part of your overall security posture, and verifying public content establishes trust in content authenticity. With signed images, you can implement a solution that blocks images from running in your container environment unless they can be verified as trusted. This not only guarantees the authenticity of container images but also reduces the need for additional work to validate them in other ways prior to their use. For a full walkthrough of such a solution, see the recent launch of Container Image Signing with AWS Signer and Amazon EKS.
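The AWS Signer walkthrough linked above covers the managed path; purely as an illustration of the sign-then-verify pattern, here is a hedged sketch using the open source cosign CLI (the registry URL and image name are placeholders, not from the post):

# Generate a signing key pair (prompts for a password)
cosign generate-key-pair

# Sign the image manifest in the registry
cosign sign --key cosign.key 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-app:v1

# Verify the signature before deployment
cosign verify --key cosign.pub 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-app:v1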
Limit the number of layers

It is a good practice to limit the number of layers in your container images. A large number of layers increases the size and complexity of the image, which can make it harder to manage and maintain. For example, consider the following Dockerfile:

FROM public.ecr.aws/docker/library/alpine:3.17.2

# Install all necessary dependencies in a single layer
RUN apk add --no-cache \
    curl \
    nginx \
    && rm -rf /var/cache/apk/*

# Set nginx as the entrypoint
ENTRYPOINT ["nginx"]

In this example, we install all necessary dependencies in a single RUN instruction and remove the package cache within that same instruction, keeping down the number and size of layers in the final image. This results in a smaller, more efficient image that is easier to manage and maintain.

Make use of multi-stage builds

A multi-stage build is a technique that allows you to separate build tools and dependencies from the final image content. This can be beneficial for several reasons:

- Reduced image size: By separating the build stage from the runtime stage, you can include only the necessary dependencies in the final image, rather than all of the build tools and libraries. This can significantly reduce the size of your final image and the total number of image layers.
- Improved security: Including only the necessary dependencies in the final image reduces its attack surface, because there are fewer packages and libraries that could potentially have vulnerabilities.

Here is an example of a multi-stage build in a Dockerfile:

# Build stage
FROM public.ecr.aws/bitnami/golang:1.18.10 as builder
WORKDIR /app
COPY . .
RUN go build -o app .

# Runtime stage
FROM public.ecr.aws/amazonlinux/amazonlinux:2023
WORKDIR /app
COPY --from=builder /app/app ./
ENTRYPOINT ["./app"]

In this example, we first use the golang:1.18.10 image as the build stage, in which we copy the source code and build the binary. Then, in the runtime stage, we use the amazonlinux:2023 container image, a minimal base image, and copy only the built binary from the build stage; the compiled binary does not need the Go toolchain at run time. This results in a smaller and more secure final image that includes only the necessary dependencies.
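To see the payoff of both practices, assuming the multi-stage image above was built and tagged as my-app (a placeholder name), docker history lists only the final stage's layers; the build stage's layers never make it into the image.

docker history my-app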
Secure your secrets

Secrets management is a technique for securely storing and managing sensitive information such as passwords and encryption keys, and for externalizing environment-specific application configuration. Secrets should not be stored in an image; they should be stored and managed externally, in a service such as AWS Secrets Manager. For secrets that your application requires at runtime, you can retrieve them from AWS Secrets Manager in real time, or use a container orchestrator such as Amazon ECS or Amazon EKS to mount secrets as volumes in the container so that the application can read them from the mounted volume. Details on using secrets with Amazon EKS and with Amazon ECS can be found in their respective documentation.

If secrets are required at build time, you can inject ephemeral secrets using Docker's secret mount type. For example, the following Dockerfile uses a multi-stage build in which the first stage, called builder_image_stage, uses mount=type=secret to load AWS credentials. The second stage then uses the build artifacts from builder_image_stage. The resulting final image contains neither the build tools nor the secrets of builder_image_stage.

FROM public.ecr.aws/amazonlinux/amazonlinux:2023 AS builder_image_stage
WORKDIR /tmp
RUN yum install -y unzip && \
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
    unzip /tmp/awscliv2.zip && \
    /tmp/aws/install
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws s3 cp s3://... /app/...
# cd /app
# make

FROM public.ecr.aws/amazonlinux/amazonlinux:2023
COPY --from=builder_image_stage /app/files_from_builder_image /app
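The secret referenced by id=aws is supplied on the command line at build time and never persists in any image layer. A sketch of the matching BuildKit invocation (the credentials path and image name are placeholders):

DOCKER_BUILDKIT=1 docker build \
    --secret id=aws,src=$HOME/.aws/credentials \
    -t my-s3-backed-app .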
Reduce the attack surface

Images, by default, are not secure and can be vulnerable to attack. The following approaches help mitigate the attack surface.

Build from scratch

Most images are based on an existing base image such as Amazon Linux, Ubuntu, Alpine, etc. However, if you need to create a completely new base image with your own distribution, or if your application or service can be deployed as a single statically linked binary, you can use the special scratch image as a starting point. The scratch base image is empty: it doesn't contain any files. All of the official base images are themselves built on top of scratch, and you can place any content on subsequent image layers. Using FROM scratch signals to the image build process that the layer built by the next command in the Dockerfile should be the first layer of the image. Scratch gives you the ability to completely control the contents of your image from the very beginning.

It is important to note that with scratch you are responsible for the entire toolchain and all content in the resulting container image. Also, container scanning solutions such as Amazon ECR Enhanced Scanning do not support vulnerability scanning of images without an OS packaging system in place. If your container images have many environmental dependencies, use interpreted languages, or need vulnerability scanning, then a minimal base image may be a better choice than scratch.

Here are a few examples of using scratch. The first adds a hello-world executable to the image as its first layer:

FROM scratch
ADD hello-world /
CMD ["/hello-world"]

The second copies a root filesystem to the root of the image as its first layer; the container uses this root filesystem to spin off processes from the binaries and libraries in that directory:

FROM scratch
ADD root-filesystem.tar.gz /
CMD ["bash"]

Remove unwanted packages

It is a good practice to remove any unwanted or unnecessary packages from your container image to reduce its attack surface and size. For example, if your application does not require certain libraries or utilities, remove them from the image. Here is an example of removing the package manager itself, which may be unnecessary in a final image:

FROM public.ecr.aws/ubuntu/ubuntu:20.04_stable

# Remove package manager
RUN apt-get -y --allow-remove-essential remove apt

Use a minimal base image

Choosing a minimal base image, such as a Distroless image, helps restrict what is included in your runtime container. Distroless images include only your application and the runtime dependencies necessary for it to run; they contain no package managers or shells. This can improve the security and performance of your container by reducing the attack surface, as there are fewer packages and libraries that could potentially have vulnerabilities or add unnecessary overhead. Here is an example of using the Distroless base image for a Go application. This base image contains a minimal, glibc-based Linux system and is intended for use by mostly statically compiled languages like Go, Rust, or D. For more information, see the GitHub documentation on the base distroless image.

FROM gcr.io/distroless/base
COPY app /app
ENTRYPOINT ["/app"]

Secure image configuration

Images, by default, may not be secure and can allow privileged access. It is best to ensure they are set up with least-privilege access and to remove configuration that is unnecessary for your application. One such approach is to run containers as a non-root user. Processes within Docker containers have root privileges by default, to both the container and the underlying host, which opens both up to security vulnerabilities that can be exploited. To prevent this, minimize the privileges granted to your container images, giving only the permissions needed to perform the required tasks. This helps reduce the attack surface and the potential for privilege escalation.

In a Dockerfile, you can use the USER directive to specify a non-root user to run the container, or use the securityContext field in a Kubernetes pod specification to specify a non-root user and set group ownership and file permissions. Furthermore, on Kubernetes with Amazon EKS, you can also limit the default capabilities assigned to a pod, as explained in the EKS Best Practices Guides. For example, consider the following Dockerfile:

FROM public.ecr.aws/amazonlinux/amazonlinux:2023

# 1) install package for adduser
# 2) create new non-root user named myuser
# 3) create app folder
# 4) remove package for adduser
RUN yum install -y shadow-utils && \
    adduser myuser && \
    mkdir /app && \
    yum remove -y shadow-utils

COPY . /app

# Set group ownership and file permissions
RUN chown -R myuser:myuser /app

# Run as non-root user
USER myuser

# Set entrypoint
ENTRYPOINT ["/app/myapp"]

In this example, we create a non-root user named myuser and set the group ownership and file permissions of the application files to this user. We then set the entry point to run as myuser rather than as root. This helps reduce the potential for privilege escalation and increases the security of the container. A sketch of the equivalent Kubernetes pod settings follows below.
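A minimal sketch of the securityContext approach mentioned above (the pod name and image URL are placeholders); runAsNonRoot makes the kubelet refuse to start the container if the image tries to run as root:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
  containers:
  - name: myapp
    image: 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-app:v1
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]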
Tag images

When building container images, it is recommended to use tags to identify them. If an image is built without any tags, Docker assigns latest as the default tag. For instance, when building an image, you may use the following command:

docker build -t my-pricing-app .

With this command, Docker automatically assigns the latest tag to the image, since no specific tag was provided. However, building an updated image using the same command assigns the latest tag to the new image. This can cause ambiguity and the potential deployment of outdated or unverified versions. To avoid this, consider tagging the image descriptively, as below:

docker build -t my-pricing-app:<git-commit-hash>-1.0-prod .

This command builds the my-pricing-app image tagged as <git-commit-hash>-1.0-prod, which allows a specific version of the image to be deployed, unlike the latest tag. Additionally, you can also use the docker tag command to tag existing images.

It is important to note that Docker tags are mutable by design, which allows you to build a new image with an existing tag. If you need immutable tags, consider using Amazon Elastic Container Registry (Amazon ECR) to store your images. Amazon ECR supports immutable tags, a capability that prevents image tags from being overwritten. This enables users to rely on an image's descriptive tags as a reliable mechanism to track and uniquely identify images, and to trust that they have not been tampered with. More details can be found at Image tag mutability – Amazon ECR. A sketch of enabling this setting follows below.
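Tag immutability is a per-repository setting; here is a hedged sketch with the AWS CLI, reusing the repository name from the example above:

aws ecr put-image-tag-mutability \
    --repository-name my-pricing-app \
    --image-tag-mutability IMMUTABLE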
Conclusion

In this post, we showed you how to build better container images using best practices. As the adoption of containers increases, it becomes important to ensure that container images are secure, lightweight, and built from trusted sources. The options described in this post can act as a starting point for building such images. Additionally, this post also showed how you can use the secure, fully managed Amazon ECR to manage container images.

View the full article

6. Last August, we announced 6 MB-sized Ubuntu base images designed for self-contained .NET applications; we called them "chiselled Ubuntu". How did we make our Ubuntu base images 15 times smaller? And how can you create your own chiselled Ubuntu images? In this blog, I explain the idea behind Distroless container images, which inspired us to create chiselled Ubuntu images, adding the "distro" back to distro-less! In a follow-up blog, I will then provide a step-by-step example of how you can create your own chiselled Ubuntu base container images, built for your specific needs.

[Figure: At-scale comparison of popular base images' sizes (compressed and uncompressed)]

Introduction to Distroless container images

Thanks to the widespread use of container images, developers now have an easy way to package an application and its dependencies into a self-contained unit compatible with any platform supporting the OCI standard (for example, Docker, Kubernetes, or one of The 17 Ways to Run Containers on AWS). Container images make it easy to package and run applications in a consistent environment without worrying about differences in the host operating system or runtime environment.

Distroless container images are ultra-small images that include only an application and its runtime dependencies, without additional libraries or utilities from a Linux distribution. This makes them smaller and more secure than traditional container images, which often include many libraries and utilities that the application does not need. In particular, traditional images usually have a package manager and a shell that give them their "look and feel"; by contrast, we could call them "distro-full".

Minimal and mighty: the benefits of Distroless container images

Smaller container images have a de facto smaller attack surface, decreasing the likelihood of including unpatched security vulnerabilities and removing opportunities for attackers to exploit. But this probabilistic approach needs to consider how well maintained the remaining content is: a large image with no CVEs and regular security updates is safer than an ultra-small, unstable, unmaintained one. The ultimate security of a containerised application depends on various factors, including how it is designed and deployed and how it is maintained and updated over time. Using a well-maintained and supported Linux distribution like Ubuntu can help improve the security of containerised applications. Additionally, smaller container images can save time and resources, especially in environments with limited storage capacity or where many container images are in use.

The best of both worlds: introducing Chiselled Ubuntu container images

Chiselled Ubuntu is a variation of Distroless container images built using the packages from the Ubuntu distribution. Chiselled Ubuntu images are carefully crafted to fit only the minimum required dependencies. They are constructed using a developer-friendly package manager called Chisel, which is only used at build time and is not shipped in the final image. This makes them smaller and more secure than traditional Ubuntu container images, which often include many additional libraries and utilities. Chiselled Ubuntu images inherit the advantages of the Ubuntu distribution, which is regularly updated and supported and offers a reliable and secure platform for creating and operating applications, while suppressing the downsides of using a "distro-full" image when shipping to production.
"Breaking the Chisel" – how Chisel works

Chisel uses an open database of package slices, which supersedes the Debian package database with specific file subsets and edited maintainer scripts for creating ultra-small runtime file systems. Chisel is a sort of from-scratch package manager that creates partial filesystems that just work for the intended use cases. The information contained in a package slice is what image developers used to define manually at the image-definition level when crafting Distroless-type images. With Chisel, community developers can now reuse this knowledge effortlessly.

[Figure: Illustration of package slices, reusing upstream Ubuntu packages information]

Chiselled Ubuntu container images offer many benefits, including a consistent and compatible developer experience. They are slices of the same libraries and utilities found in the regular Ubuntu distribution, making it easy to go from using Ubuntu in development to using chiselled Ubuntu in production. As a result, multi-stage builds work seamlessly with chiselled Ubuntu images. A rough preview sketch follows below.
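As a preview of the follow-up post, here is a minimal sketch of invoking Chisel directly; the flag syntax and the slice names (base-files_base, libc6_libs, ca-certificates_data) reflect the chisel-releases database as I understand it, so treat them as illustrative rather than exact.

# Cut only the needed slices of Ubuntu packages into a staging rootfs,
# which a multi-stage build can then copy into a scratch image
chisel cut --release ubuntu-22.04 --root /tmp/rootfs \
    base-files_base \
    libc6_libs \
    ca-certificates_data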
Next steps: creating your own chiselled Ubuntu base container images

In conclusion, chiselled Ubuntu container images combine the benefits of Distroless containers with those of Ubuntu to create smaller, more secure containers that are easier to use. In this blog, I have explained the idea behind Distroless containers and introduced the concept of chiselled Ubuntu images. In the next blog, I will provide a step-by-step guide for creating chiselled Ubuntu base container images built for your specific needs. Ready? Keep reading!

Chisel on GitHub: https://github.com/canonical/chisel
Open Source Summit Europe 2022 presentation on Chiselled Ubuntu
Chiselled Ubuntu for .NET announcement
Microsoft .NET developers' 2022 conf x Chiselled Ubuntu keynote

[Image: That's what chiselled Ubuntu containers could look like... well, if you ask the DALL-E 2 AI.]

View the full article