Showing results for tags 'buildkit'.

Found 2 results

1. January 31 update: Patches for runc and BuildKit are now available.

We at Docker prioritize the security and integrity of our software and the trust of our users. Security researchers at Snyk Labs recently identified and reported four security vulnerabilities in the container ecosystem. One of the vulnerabilities, CVE-2024-21626, concerns the runc container runtime, and the other three affect BuildKit (CVE-2024-23651, CVE-2024-23652, and CVE-2024-23653). We want to assure our community that our team, in collaboration with the reporters and open source maintainers, has been diligently working on coordinating and implementing the necessary remediations.

We are committed to maintaining the highest security standards. We will publish patched versions of runc, BuildKit, and Moby on January 31 and release an update for Docker Desktop on February 1 to address these vulnerabilities. Additionally, our latest Moby and BuildKit releases will include fixes for CVE-2024-23650 and CVE-2024-24557, discovered respectively by an independent researcher and through Docker's internal research initiatives.

Versions impacted:
- runc <= 1.1.11
- BuildKit <= 0.12.4
- Moby (Docker Engine) <= 25.0.1 and <= 24.0.8
- Docker Desktop <= 4.27.0

These vulnerabilities can only be exploited if a user actively engages with malicious content by incorporating it into the build process or running a container from a suspect image (particularly relevant for the CVE-2024-21626 container escape vulnerability). Potential impacts include unauthorized access to the host filesystem, compromising the integrity of the build cache, and, in the case of CVE-2024-21626, a scenario that could lead to full container escape.

We strongly urge all customers to prioritize security by applying the available updates as soon as they are released. Timely application of these updates is the most effective measure to safeguard your systems against these vulnerabilities and maintain a secure and reliable Docker environment.

What should I do if I'm on an affected version?

If you are using affected versions of runc, BuildKit, Moby, or Docker Desktop, make sure to update to the latest versions as soon as patched versions become available:

Patched versions:
- runc >= 1.1.12
- BuildKit >= 0.12.5
- Moby (Docker Engine) >= 25.0.2 and >= 24.0.9
- Docker Desktop >= 4.27.1

If you are unable to update to an unaffected version promptly after it is released, follow these best practices to mitigate risk:
- Only use trusted Docker images (such as Docker Official Images).
- Don't build Docker images from untrusted sources or untrusted Dockerfiles.
- If you are a Docker Business customer using Docker Desktop and unable to update to v4.27.1 immediately after it's released, make sure to enable Hardened Docker Desktop features such as:
  - Enhanced Container Isolation, which mitigates the impact of CVE-2024-21626 in the case of running containers from malicious images.
  - Image Access Management and Registry Access Management, which give organizations control over which images and repositories their users can access.
- For CVE-2024-23650, CVE-2024-23651, CVE-2024-23652, and CVE-2024-23653, avoid using a BuildKit frontend from an untrusted source. A frontend image is usually specified in the #syntax line of your Dockerfile or with the --frontend flag when using the buildctl build command.
- To mitigate CVE-2024-24557, make sure to either use BuildKit or disable caching when building images. From the CLI this can be done via the DOCKER_BUILDKIT=1 environment variable (the default for Moby >= v23.0 if the buildx plugin is installed) or the --no-cache flag. If you are using the HTTP API directly or through a client, the same can be done by setting nocache to true or version to 2 for the /build API endpoint (see the sketch after this list).
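As an illustration of the API-level mitigation above, here is a minimal sketch in Go using the Docker Engine API client. It disables the classic builder cache for a build; the image tag and context path are placeholder assumptions rather than values from the advisory, and it assumes a github.com/docker/docker client version where these options live in api/types. The commented-out alternative shows the equivalent of requesting the BuildKit builder (version 2) on the /build endpoint.

```go
package main

import (
	"context"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
	"github.com/docker/docker/pkg/archive"
)

func main() {
	ctx := context.Background()

	// Connect to the local Docker daemon using the standard environment settings.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Tar the build context; "." is a placeholder path for this sketch.
	buildContext, err := archive.TarWithOptions(".", &archive.TarOptions{})
	if err != nil {
		panic(err)
	}
	defer buildContext.Close()

	// NoCache: true is the client-side equivalent of nocache=true on /build.
	// Alternatively, Version: types.BuilderBuildKit requests the BuildKit builder
	// (version=2); note that full BuildKit builds may also need a build session.
	resp, err := cli.ImageBuild(ctx, buildContext, types.ImageBuildOptions{
		Tags:    []string{"example/app:latest"}, // placeholder tag
		NoCache: true,
	})
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Stream the raw JSON build output to stdout.
	io.Copy(os.Stdout, resp.Body)
}
```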
Technical details and impact

CVE-2024-21626 (High)

In runc v1.1.11 and earlier, due to certain leaked file descriptors, an attacker can gain access to the host filesystem by causing a newly spawned container process (from runc exec) to have a working directory in the host filesystem namespace, or by tricking a user into running a malicious image and allowing a container process to gain access to the host filesystem through runc run. The attacks can also be adapted to overwrite semi-arbitrary host binaries, allowing for complete container escapes. Note that when using higher-level runtimes (such as Docker or Kubernetes), this vulnerability can be exploited by running a malicious container image without additional configuration or by passing specific workdir options when starting a container. In the case of Docker, the vulnerability can also be exploited from within Dockerfiles. The issue has been fixed in runc v1.1.12.

CVE-2024-23651 (High)

In BuildKit <= v0.12.4, two malicious build steps running in parallel and sharing the same cache mounts with subpaths could cause a race condition, leading to files from the host system being accessible to the build container. This will only occur if a user is trying to build a Dockerfile of a malicious project. The issue will be fixed in BuildKit v0.12.5.

CVE-2024-23652 (High)

In BuildKit <= v0.12.4, a malicious BuildKit frontend or a Dockerfile using RUN --mount could trick the feature that removes empty files created for the mountpoints into removing a file outside the container, from the host system. This will only occur if a user is using a malicious Dockerfile. The issue will be fixed in BuildKit v0.12.5.

CVE-2024-23653 (High)

In addition to running containers as build steps, BuildKit also provides APIs for running interactive containers based on built images. In BuildKit <= v0.12.4, it is possible to use these APIs to ask BuildKit to run a container with elevated privileges. Normally, running such containers is only allowed if the special security.insecure entitlement is both enabled in the buildkitd configuration and allowed by the user initializing the build request. The issue will be fixed in BuildKit v0.12.5.

CVE-2024-23650 (Medium)

In BuildKit <= v0.12.4, a malicious BuildKit client or frontend could craft a request that could lead to the BuildKit daemon crashing with a panic. The issue will be fixed in BuildKit v0.12.5.

CVE-2024-24557 (Medium)

In Moby <= v25.0.1 and <= v24.0.8, the classic builder cache system is prone to cache poisoning if the image is built FROM scratch. Also, changes to some instructions (most importantly HEALTHCHECK and ONBUILD) would not cause a cache miss. An attacker with knowledge of the Dockerfile someone is using could poison their cache by making them pull a specially crafted image that would be considered a valid cache candidate for some build steps. The issue will be fixed in Moby >= v25.0.2 and >= v24.0.9.

How are Docker products affected?

The following Docker products are affected. No other products are affected by these vulnerabilities.

Docker Desktop

Docker Desktop v4.27.0 and earlier are affected. Docker Desktop v4.27.1 will be released on February 1 and includes patches for the runc, BuildKit, and dockerd binaries.
In addition to updating to this new version, we encourage all Docker users to use Docker images and Dockerfiles diligently and to ensure they only use trusted content in their builds.

Docker Build Cloud

Any new Docker Build Cloud builder instances will be provisioned with the latest Docker Engine and BuildKit versions after the fixes are released and will therefore be unaffected by these CVEs. Docker will also roll out gradual updates to any existing builder instances.

Security at Docker

At Docker, we know that part of being developer-obsessed is providing secure software to developers. We appreciate the responsible disclosure of these vulnerabilities. If you're aware of potential security vulnerabilities in any Docker product, report them to security@docker.com. For more information on Docker's security practices, see our website.

Advisory links
- runc: CVE-2024-21626
- BuildKit: CVE-2024-23650, CVE-2024-23651, CVE-2024-23652, CVE-2024-23653
- Moby: CVE-2024-24557

View the full article
2. Today we’re featuring a blog from Adam Gordon Bell at Earthly, who writes about how BuildKit, a technology developed by Docker and the community, works and how to write a simple frontend. Earthly uses BuildKit in their product.

Introduction

How are containers made? Usually from a series of statements like `RUN`, `FROM`, and `COPY`, which are put into a Dockerfile and built. But how are those commands turned into a container image and then a running container? We can build up an intuition for how this works by understanding the phases involved and creating a container image ourselves. We will create an image programmatically and then develop a trivial syntactic frontend and use it to build an image.

On `docker build`

We can create container images in several ways. We can use Buildpacks, or build tools like Bazel or sbt, but by far the most common way images are built is using `docker build` with a Dockerfile. The familiar base images Alpine, Ubuntu, and Debian are all created this way. Here is an example Dockerfile:

```dockerfile
FROM alpine
COPY README.md README.md
RUN echo "standard docker build" > /built.txt
```

We will be using variations on this Dockerfile throughout this tutorial. We can build it like this:

```console
docker build . -t test
```

But what is happening when you call `docker build`? To understand that, we will need a little background.

Background

A docker image is made up of layers. Those layers form an immutable filesystem. A container image also has some descriptive data, such as the start-up command, the ports to expose, and the volumes to mount. When you `docker run` an image, it starts up inside a container runtime.
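To see both halves of that, the layer stack and the descriptive data, here is a minimal sketch (not from the original post) that inspects an image with the Docker Engine API's Go client; the image reference "alpine" is just a placeholder, and it assumes a github.com/docker/docker client version that still exposes ImageInspectWithRaw.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Inspect a locally available image; "alpine" is a placeholder reference.
	inspect, _, err := cli.ImageInspectWithRaw(ctx, "alpine")
	if err != nil {
		panic(err)
	}

	// The immutable filesystem: one digest per layer.
	fmt.Println("layers:")
	for _, layer := range inspect.RootFS.Layers {
		fmt.Println("  ", layer)
	}

	// The descriptive data: start-up command, exposed ports, volumes.
	fmt.Println("cmd:", inspect.Config.Cmd)
	fmt.Println("ports:", inspect.Config.ExposedPorts)
	fmt.Println("volumes:", inspect.Config.Volumes)
}
```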
I like to think about images and containers by analogy. If an image is like an executable, then a container is like a process. You can run multiple containers from one image, and a running image isn't really an image at all but a container.

Continuing our analogy, BuildKit is a compiler, just like LLVM. But whereas a compiler takes source code and libraries and produces an executable, BuildKit takes a Dockerfile and a file path and creates a container image.

Docker build uses BuildKit to turn a Dockerfile into a docker image, OCI image, or another image format. In this walk-through, we will primarily use BuildKit directly. This primer on using BuildKit supplies some helpful background on using BuildKit, `buildkitd`, and `buildctl` via the command line. However, the only prerequisite for today is running `brew install buildkit` or the appropriate equivalent steps for your OS.

How Do Compilers Work?

A traditional compiler takes code in a high-level language and lowers it to a lower-level language. In most conventional ahead-of-time compilers, the final target is machine code. Machine code is a low-level programming language that your CPU understands.

Fun Fact: Machine Code vs. Assembly

Machine code is written in binary, which makes it hard for a human to understand. Assembly code is a plain-text representation of machine code that is designed to be somewhat human-readable. There is generally a 1-1 mapping between the instructions the machine understands (in machine code) and the OpCodes in Assembly.

Compiling the classic C "Hello, World" with the Clang frontend for LLVM lowers the C source into x86 assembly code. Creating an image from a Dockerfile works in a similar way: BuildKit is passed the Dockerfile and the build context, which here is the present working directory. In simplified terms, each line in the Dockerfile is turned into a layer in the resulting image.

One significant way image building differs from compiling is this build context. A compiler's input is limited to source code, whereas `docker build` takes a reference to the host filesystem as an input and uses it to perform actions such as `COPY`.

There Is a Catch

The earlier picture of compiling "Hello, World" in a single step missed a vital detail. Computer hardware is not a singular thing. If every compiler were a hand-coded mapping from a high-level language to x86 machine code, then moving to the Apple M1 processor would be quite challenging because it has a different instruction set.

Compiler authors have overcome this challenge by splitting compilation into phases. The traditional phases are the frontend, the backend, and the middle. The middle phase is sometimes called the optimizer, and it deals primarily with an internal representation (IR). This staged approach means you don't need a new compiler for each new machine architecture. Instead, you just need a new backend. LLVM is a good example of what this looks like in practice.

Intermediate Representations

This multiple-backend approach allows LLVM to target ARM, X86, and many other machine architectures using LLVM Intermediate Representation (IR) as a standard protocol. LLVM IR is a human-readable programming language that backends need to be able to take as input. To create a new backend, you need to write a translator from LLVM IR to your target machine code. That translation is the primary job of each backend.

Once you have this IR, you have a protocol that various phases of the compiler can use as an interface, and you can build not just many backends but many frontends as well. LLVM has frontends for numerous languages, including C++, Julia, Objective-C, Rust, and Swift. If you can write a translation from your language to LLVM IR, LLVM can translate that IR into machine code for all the backends it supports. This translation function is the primary job of a compiler frontend.

In practice, there is much more to it than that. Frontends need to tokenize and parse input files, and they need to return pleasant errors. Backends often have target-specific optimizations to perform and heuristics to apply. But for this tutorial, the critical point is that having a standard representation ends up being a bridge that connects many frontends with many backends. This shared interface removes the need to create a compiler for every combination of language and machine architecture. It is a simple but very empowering trick!

BuildKit

Images, unlike executables, have their own isolated filesystem. Nevertheless, the task of building an image looks very similar to compiling an executable. Dockerfiles can have varying syntax (dockerfile1.0, dockerfile1.2), and the result must target several machine architectures (arm64 vs. x86_64).

"LLB is to Dockerfile what LLVM IR is to C" – BuildKit Readme

This similarity was not lost on the BuildKit creators. BuildKit has its own intermediate representation, LLB. And where LLVM IR has things like function calls and garbage-collection strategies, LLB has things like mounting filesystems and executing statements. LLB is defined as a protocol buffer, which means that BuildKit frontends can make gRPC requests against buildkitd to build a container directly.

Programmatically Making an Image

Alright, enough background. Let's programmatically generate the LLB for an image and then build an image.

Using Go

In this example, we will be using Go, which lets us leverage the existing BuildKit libraries, but it's possible to accomplish this in any language with Protocol Buffer support.

Import the LLB definitions (along with `context` and `os`, which we will need shortly):

```go
import (
	"context"
	"os"

	"github.com/moby/buildkit/client/llb"
)
```

Create the LLB for an Alpine image:

```go
func createLLBState() llb.State {
	return llb.Image("docker.io/library/alpine").
		File(llb.Copy(llb.Local("context"), "README.md", "README.md")).
		Run(llb.Args([]string{"/bin/sh", "-c", "echo \"programmatically built\" > /built.txt"})).
		Root()
}
```

We are accomplishing the equivalent of a `FROM` by using `llb.Image`. Then, we copy a file from the local file system into the image using `File` and `Copy`. Finally, we `RUN` a command to echo some text to a file. LLB has many more operations, but you can recreate many standard images with these three building blocks.
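As a quick, hedged illustration of a few of those other operations (this snippet is mine, not from the post, and the values are invented), the same fluent style can also set environment variables, create directories, and change the working directory:

```go
// extendedLLBState sketches a few additional LLB operations on top of the same
// three building blocks: AddEnv sets an environment variable, Mkdir is another
// FileOp like Copy, and Dir sets the working directory for later Run steps.
func extendedLLBState() llb.State {
	return llb.Image("docker.io/library/alpine").
		AddEnv("BUILT_BY", "llb-example").
		File(llb.Mkdir("/app", 0755)).
		Dir("/app").
		Run(llb.Args([]string{"/bin/sh", "-c", "echo $BUILT_BY > built-by.txt"})).
		Root()
}
```

The rest of the walk-through sticks with the simpler `createLLBState` above.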
The final thing we need to do is turn this into a protocol buffer and emit it to standard out:

```go
func main() {
	dt, err := createLLBState().Marshal(context.TODO(), llb.LinuxAmd64)
	if err != nil {
		panic(err)
	}
	llb.WriteTo(dt, os.Stdout)
}
```

Let's look at what this generates using the `dump-llb` option of `buildctl`:

```console
go run ./writellb/writellb.go | buildctl debug dump-llb | jq .
```

We get this JSON-formatted LLB:

```json
{
  "Op": {
    "Op": {
      "source": {
        "identifier": "local://context",
        "attrs": {
          "local.unique": "s43w96rwjsm9tf1zlxvn6nezg"
        }
      }
    },
    "constraints": {}
  },
  "Digest": "sha256:c3ca71edeaa161bafed7f3dbdeeab9a5ab34587f569fd71c0a89b4d1e40d77f6",
  "OpMetadata": {
    "caps": {
      "source.local": true,
      "source.local.unique": true
    }
  }
}
{
  "Op": {
    "Op": {
      "source": {
        "identifier": "docker-image://docker.io/library/alpine:latest"
      }
    },
    "platform": {
      "Architecture": "amd64",
      "OS": "linux"
    },
    "constraints": {}
  },
  "Digest": "sha256:665ba8b2cdc0cb0200e2a42a6b3c0f8f684089f4cd1b81494fbb9805879120f7",
  "OpMetadata": {
    "caps": {
      "source.image": true
    }
  }
}
{
  "Op": {
    "inputs": [
      {
        "digest": "sha256:665ba8b2cdc0cb0200e2a42a6b3c0f8f684089f4cd1b81494fbb9805879120f7",
        "index": 0
      },
      {
        "digest": "sha256:c3ca71edeaa161bafed7f3dbdeeab9a5ab34587f569fd71c0a89b4d1e40d77f6",
        "index": 0
      }
    ],
    "Op": {
      "file": {
        "actions": [
          {
            "input": 0,
            "secondaryInput": 1,
            "output": 0,
            "Action": {
              "copy": {
                "src": "/README.md",
                "dest": "/README.md",
                "mode": -1,
                "timestamp": -1
              }
            }
          }
        ]
      }
    },
    "platform": {
      "Architecture": "amd64",
      "OS": "linux"
    },
    "constraints": {}
  },
  "Digest": "sha256:ba425dda86f06cf10ee66d85beda9d500adcce2336b047e072c1f0d403334cf6",
  "OpMetadata": {
    "caps": {
      "file.base": true
    }
  }
}
{
  "Op": {
    "inputs": [
      {
        "digest": "sha256:ba425dda86f06cf10ee66d85beda9d500adcce2336b047e072c1f0d403334cf6",
        "index": 0
      }
    ],
    "Op": {
      "exec": {
        "meta": {
          "args": [
            "/bin/sh",
            "-c",
            "echo \"programmatically built\" > /built.txt"
          ],
          "cwd": "/"
        },
        "mounts": [
          {
            "input": 0,
            "dest": "/",
            "output": 0
          }
        ]
      }
    },
    "platform": {
      "Architecture": "amd64",
      "OS": "linux"
    },
    "constraints": {}
  },
  "Digest": "sha256:d2d18486652288fdb3516460bd6d1c2a90103d93d507a9b63ddd4a846a0fca2b",
  "OpMetadata": {
    "caps": {
      "exec.meta.base": true,
      "exec.mount.bind": true
    }
  }
}
{
  "Op": {
    "inputs": [
      {
        "digest": "sha256:d2d18486652288fdb3516460bd6d1c2a90103d93d507a9b63ddd4a846a0fca2b",
        "index": 0
      }
    ],
    "Op": null
  },
  "Digest": "sha256:fda9d405d3c557e2bd79413628a435da0000e75b9305e52789dd71001a91c704",
  "OpMetadata": {
    "caps": {
      "constraints": true,
      "platform": true
    }
  }
}
```

Looking through the output, we can see how our code maps to LLB. Here is our `Copy` as part of a FileOp:

```json
"Action": {
  "copy": {
    "src": "/README.md",
    "dest": "/README.md",
    "mode": -1,
    "timestamp": -1
  }
}
```

Here is the mapping of our build context for use in our `COPY` command:

```json
"Op": {
  "source": {
    "identifier": "local://context",
    "attrs": {
      "local.unique": "s43w96rwjsm9tf1zlxvn6nezg"
    }
  }
}
```

Similarly, the output contains LLB that corresponds to our `RUN` and `FROM` commands.

Building Our LLB

To build our image, we must first start `buildkitd`:

```console
docker run --rm --privileged -d --name buildkit moby/buildkit
export BUILDKIT_HOST=docker-container://buildkit
```

Then we can build and push the image by piping our LLB into `buildctl`:

```console
go run ./writellb/writellb.go | buildctl build --local context=. --output type=image,name=docker.io/agbell/test,push=true
```

The output flag lets us specify what backend we want BuildKit to use. We will ask it to build an OCI image and push it to docker.io.

Real-World Usage

In a real-world tool, we might want to programmatically make sure `buildkitd` is running and send the RPC request directly to it, as well as provide friendly error messages. For tutorial purposes, we will skip all that.
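For a rough idea of what sending that RPC directly could look like, here is a hedged sketch (mine, not the post's) that uses BuildKit's Go client to submit the marshaled LLB to the `buildkitd` container we started above. It reuses `createLLBState` from earlier, and the address, image name, and option names assume a BuildKit client roughly contemporary with this post:

```go
package main

import (
	"context"
	"log"

	bkclient "github.com/moby/buildkit/client"
	// Registers support for docker-container:// addresses like BUILDKIT_HOST above.
	_ "github.com/moby/buildkit/client/connhelper/dockercontainer"
	"github.com/moby/buildkit/client/llb"
)

func main() {
	ctx := context.Background()

	// Marshal the same LLB definition we generated earlier.
	def, err := createLLBState().Marshal(ctx, llb.LinuxAmd64)
	if err != nil {
		log.Fatal(err)
	}

	// Connect to the buildkitd instance started with docker run.
	c, err := bkclient.New(ctx, "docker-container://buildkit")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Solve the LLB, exporting and pushing an image, mirroring
	// --local context=. --output type=image,name=...,push=true.
	_, err = c.Solve(ctx, def, bkclient.SolveOpt{
		Exports: []bkclient.ExportEntry{{
			Type: bkclient.ExporterImage,
			Attrs: map[string]string{
				"name": "docker.io/agbell/test", // placeholder image name
				"push": "true",
			},
		}},
		LocalDirs: map[string]string{
			"context": ".", // backs llb.Local("context")
		},
	}, nil)
	if err != nil {
		log.Fatal(err)
	}
}
```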
We can run the resulting image like this:

```console
docker run -it --pull always agbell/test:latest /bin/sh
```

And we can then see the results of our programmatic `COPY` and `RUN` commands:

```console
/ # cat built.txt
programmatically built
/ # ls README.md
README.md
```

There we go! The full code example can be a great starting place for your own programmatic docker image building.

A True Frontend for BuildKit

A true compiler frontend does more than just emit hardcoded IR. A proper frontend takes in files, tokenizes them, parses them, generates a syntax tree, and then lowers that syntax tree into the internal representation. Mockerfiles are an example of such a frontend:

```yaml
#syntax=r2d4/mocker
apiVersion: v1alpha1
images:
- name: demo
  from: ubuntu:16.04
  package:
    install:
    - curl
    - git
    - gcc
```

And because Docker build supports the `#syntax` command, we can even build Mockerfiles directly with `docker build`:

```console
docker build -f mockerfile.yaml .
```

To support the `#syntax` command, all that is needed is to put the frontend in a docker image that accepts a gRPC request in the correct format and to publish that image somewhere. At that point, anyone can use your frontend with `docker build` by just using `#syntax=yourimagename`.
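To make the shape of that gRPC contract concrete, here is a hedged sketch of roughly what the entrypoint of such a frontend image might contain, using BuildKit's gateway client. It ignores the incoming build file entirely and always emits the same fixed LLB, and the details are my assumptions rather than code from the post:

```go
package main

import (
	"context"
	"log"

	"github.com/moby/buildkit/client/llb"
	gwclient "github.com/moby/buildkit/frontend/gateway/client"
	"github.com/moby/buildkit/frontend/gateway/grpcclient"
	"github.com/moby/buildkit/util/appcontext"
)

// build is the BuildFunc that buildkitd invokes over gRPC. A real frontend
// would read and parse the build's Dockerfile (or Mockerfile, or Ickfile)
// here; this sketch just returns a fixed LLB graph.
func build(ctx context.Context, c gwclient.Client) (*gwclient.Result, error) {
	st := llb.Image("docker.io/library/alpine").
		Run(llb.Args([]string{"/bin/sh", "-c", "echo \"built by a custom frontend\" > /built.txt"})).
		Root()

	def, err := st.Marshal(ctx)
	if err != nil {
		return nil, err
	}

	// Ask buildkitd to solve the LLB we just generated.
	return c.Solve(ctx, gwclient.SolveRequest{Definition: def.ToPB()})
}

func main() {
	// RunFromEnvironment wires this process up as a gateway frontend when the
	// image is invoked via #syntax=... or buildctl's --frontend flag.
	if err := grpcclient.RunFromEnvironment(appcontext.Context(), build); err != nil {
		log.Fatal(err)
	}
}
```

A real frontend would lower the user's input file to LLB inside `build`, which is exactly what the frontend we modify next does.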
Building Our Own Example Frontend for `docker build`

Building a tokenizer and a parser as a gRPC service is beyond the scope of this article. But we can get our feet wet by extracting and modifying an existing frontend. The standard dockerfile frontend is easy to disentangle from the moby project, and I've pulled the relevant parts out into a stand-alone repo. Let's make some trivial modifications to it and test it out.

So far, we've only used the docker commands `FROM`, `RUN`, and `COPY`. At a surface level, with its capitalized commands, Dockerfile syntax looks a lot like the programming language INTERCAL. Let's change these commands to their INTERCAL equivalents and develop our own Ickfile format.

| Dockerfile | Ickfile   |
|------------|-----------|
| FROM       | COME FROM |
| RUN        | PLEASE    |
| COPY       | STASH     |

The modules in the dockerfile frontend split the parsing of the input file into several discrete steps, and execution flows through those stages in sequence. For this tutorial, we are only going to make trivial changes to the frontend. We will leave all the stages intact and focus on customizing the existing commands to our tastes.

To do this, all we need to do is change `command.go`:

```go
package command

// Define constants for the command strings
const (
	Copy = "stash"
	Run  = "please"
	From = "come_from"
	...
)
```

And we can then see the results of our `STASH` and `PLEASE` commands:

```console
/ # cat built.txt
custom frontend built
/ # ls README.md
README.md
```

I've pushed this image to Docker Hub. Anyone can start building images using our `ickfile` format by adding `#syntax=agbell/ick` to an existing Dockerfile. No manual installation is required!

Enabling BuildKit

BuildKit is enabled by default on Docker Desktop. It is not enabled by default in the current version of Docker for Linux (version 20.10.5). To instruct `docker build` to use BuildKit, set the environment variable `DOCKER_BUILDKIT=1` or change the Engine config.

Conclusion

We have learned that a three-phased structure borrowed from compilers powers image building, and that an intermediate representation called LLB is the key to that structure. Empowered by that knowledge, we have produced two frontends for building images.

This deep dive on frontends still leaves much to explore. If you want to learn more, I suggest looking into BuildKit workers. Workers do the actual building and are the secret behind `docker buildx` and multi-architecture builds. `docker build` also has support for remote workers and cache mounts, both of which can lead to faster builds.

Earthly uses BuildKit internally for its repeatable build syntax. Without it, our containerized Makefile-like syntax would not be possible. If you want a saner CI process, then you should check it out.

There is also much more to explore about how modern compilers work. Modern compilers often have many stages and more than one intermediate representation, and they are often able to do very sophisticated optimizations.

The post Compiling Containers – Dockerfiles, LLVM and BuildKit appeared first on Docker Blog.

View the full article