{
  "version": "https://jsonfeed.org/version/1",
  "title": "Ian's Digital Garden",
  "home_page_url": "https://ianwwagner.com/",
  "feed_url": "https://ianwwagner.com//tag-devops.json",
  "description": "",
  "items": [
    {
      "id": "https://ianwwagner.com//rootless-gitlab-ci-container-builds-with-buildkit.html",
      "url": "https://ianwwagner.com//rootless-gitlab-ci-container-builds-with-buildkit.html",
      "title": "Rootless GitLab CI Container Builds with BuildKit",
      "content_html": "<p>Forgive me in advance, but this post will probably be a bit rant-y.\nIf you're looking for a way to do container builds in GitLab CI without a lot of fuss,\nthis article is for you.</p>\n<h1><a href=\"#rip-kaniko\" aria-hidden=\"true\" class=\"anchor\" id=\"rip-kaniko\"></a>RIP Kaniko</h1>\n<p>I'm writing this post because Google recently canned yet another project: Kaniko.\nThis used to be pretty much the only way to build container images in Kubernetes.\nThis probably hasn't been the case for quite some time,\nbut devops is a chore for me, and my level of involvement is usually\njust enough to get my actual work done.</p>\n<p>Anyways, there are a few possible replacements, including podman buildah and BuildKit.\nWith not so much as a finger to the wind, I decided that BuildKit looked like the more polished and capable option,\nso I went with that (even though I actually use podman on servers way more often than Docker proper).\nI found the documentation for switching to be a bit lacking,\nso you all get this post!</p>\n<h1><a href=\"#gitlab-runner-setup\" aria-hidden=\"true\" class=\"anchor\" id=\"gitlab-runner-setup\"></a>GitLab Runner Setup</h1>\n<p>I've used GitLab CI for years and find it to be an extremely capable and easy to configure system.\nRunners are <a href=\"https://docs.gitlab.com/runner/install/\">extremely easy to set up</a>.\nRead the docs for details, but I'll whizz through a Docker-based setup (I haven't gotten around to k8s, so this is plain Docker).</p>\n<p>First, create a volume to persist the configuration:</p>\n<pre><code class=\"language-shell\">docker volume create buildkit-gitlab-runner-config\n</code></pre>\n<p>Then, start the runner for the first time.\nI've added a few flags to set up the required volumes, and ensure it restarts automatically.</p>\n<pre><code class=\"language-shell\">docker run -d --name buildkit-gitlab-runner --restart always -v /var/run/docker.sock:/var/run/docker.sock -v 
buildkit-gitlab-runner-config:/etc/gitlab-runner gitlab/gitlab-runner:latest\n</code></pre>\n<p>The runner will launch, ish, but won't do anything useful without configuration + registration.\nYou can use an ephemeral instance of the runner image to register it.\nThis prompts you for your GitLab server URL and a token.\nYou get a token by logging into the GitLab admin section and registering a new runner.\nI also set the default image to <code>moby/buildkit:rootless</code>, but this is optional.</p>\n<pre><code class=\"language-shell\">docker run --rm -it -v buildkit-gitlab-runner-config:/etc/gitlab-runner gitlab/gitlab-runner:latest register\n</code></pre>\n<p>Next, get the path to the volume with the config using <code>docker volume inspect buildkit-gitlab-runner-config</code>.\nInside that directory (probably only readable by root),\nyou'll see the newly generated <code>config.toml</code>.</p>\n<p>To enable running container builds, you'll need this in a runners.docker section:</p>\n<pre><code class=\"language-toml\">security_opt = [&quot;seccomp:unconfined&quot;, &quot;apparmor:unconfined&quot;]\n</code></pre>\n<p>This is a bit confusing, since the GitLab docs <em>do</em> document this but,\nat the time of this writing, there are no examples and the description of the format is confusing.\nSee <a href=\"https://github.com/moby/buildkit/blob/master/docs/rootless.md#docker\">the BuildKit docs</a>\nsection on Docker for an explanation of why these options are necessary but safe enough for rootless builds.\nThere's also a third flag, <code>systempaths=unconfined</code>, which I've omitted (we'll revisit in a moment).</p>\n<p>In short, the above gets you rootless image builds from within a dockerized GitLab CI runner,\n<strong>without resorting to privileged containers or docker-in-docker</strong>.</p>\n<h1><a href=\"#example-gitlab-ciyml\" aria-hidden=\"true\" class=\"anchor\" id=\"example-gitlab-ciyml\"></a>Example <code>.gitlab-ci.yml</code></h1>\n<p>Now let's 
look at what you need to get this working in your CI configuration.\nI'll assume your repo has a Dockerfile in it already and you want to build + push to your GitLab Container Registry.</p>\n<pre><code class=\"language-yaml\">stages:\n  - build\n\ncontainerize:\n  stage: build\n  image:\n    name: moby/buildkit:rootless\n    entrypoint: [ &quot;&quot; ]  # !!\n  variables:\n    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox\n  tags:\n    - docker\n    - buildkit-rootless\n  before_script:\n    # We have some more elaborate logic to our tag naming, but that's irrelevant...\n    - export IMAGE_TAG=&quot;$CI_COMMIT_TAG&quot;\n    # Container registry credentials\n    - mkdir -p ~/.docker\n    - echo &quot;{\\&quot;auths\\&quot;:{\\&quot;$CI_REGISTRY\\&quot;:{\\&quot;username\\&quot;:\\&quot;$CI_REGISTRY_USER\\&quot;,\\&quot;password\\&quot;:\\&quot;$CI_REGISTRY_PASSWORD\\&quot;}}}&quot; &gt; ~/.docker/config.json\n  script:\n    - buildctl-daemonless.sh build\n        --frontend dockerfile.v0\n        --local context=.\n        --local dockerfile=.\n        --output type=image,name=$CI_REGISTRY_IMAGE:$IMAGE_TAG,push=true\n        --opt build-arg:CI_JOB_TOKEN=$CI_JOB_TOKEN\n</code></pre>\n<p>This should be pretty much copy+paste for most projects.\nA few things to note:</p>\n<ol>\n<li>The GitLab documentation does not (presently) state that you need to explicitly clear the entrypoint.\nI am not sure if I've made a mistake elsewhere in my Kaniko migration, but we needed this for Kaniko,\nand we seem to need it for BuildKit too. 
Without this, it launches a build daemon or something\nand sits there patiently waiting for instructions.\nUpon trying to make an MR myself, I realized <a href=\"https://gitlab.com/gitlab-org/gitlab/-/merge_requests/199319\">there was one already open</a>.</li>\n<li>I've added some tags.\nWe operate a heterogeneous bunch of self-hosted runners, and use tags to ensure the right capabilities.\nThis is optional / can be adapted for your needs.</li>\n<li>We set <code>BUILDKITD_FLAGS</code> using the flag that BuildKit discourages.\nSince we'd like the same config to work on k8s runners too, we have to use this.</li>\n<li>This config propagates the CI job token to the Docker builder as an <code>ARG</code>. Why? I'm glad you asked...</li>\n</ol>\n<h1><a href=\"#bonus-transitive-dependencies-private-repos-and-cargo\" aria-hidden=\"true\" class=\"anchor\" id=\"bonus-transitive-dependencies-private-repos-and-cargo\"></a>Bonus: Transitive Dependencies, Private Repos, and Cargo</h1>\n<p>We have a lot of internal crates in private repos.\nRecently I hit a snag with our previous approach to authenticated pulls though.\nOur <code>Cargo.toml</code> files use git SSH links, which won't work in CI.\nBut you can authenticate using HTTPS and a job token!</p>\n<p>Our previous approach was to use <code>sed</code> to rewrite <code>Cargo.toml</code> and <code>Cargo.lock</code>.\nIt worked until we had a transitive dependency (a direct dependency on one private crate,\nwhich in turn had a dependency on another private crate).\nI don't know why this broke exactly, since we do use <code>Cargo.lock</code>,\nbut regardless, it was brittle.</p>\n<p>The solution was to amend our <code>Dockerfile</code> with an <code>ARG CI_JOB_TOKEN</code>.\nThe build script then does some <code>git</code> magic to rewrite the SSH requests into HTTPS ones.\nI don't know why this feature exists,\nbut I'm happy I don't need to figure out how to run a private crate registry!</p>\n<pre><code class=\"language-shell\"># Git hacks\ngit config --global credential.helper store\necho &quot;https://gitlab-ci-token:${CI_JOB_TOKEN}@git.mycompany.com&quot; &gt; ~/.git-credentials\ngit config --global url.&quot;https://gitlab-ci-token:${CI_JOB_TOKEN}@git.mycompany.com&quot;.insteadOf ssh://git@git.mycompany.com\n</code></pre>\n<p>Just add this to your build script and replace the domain with your actual GitLab domain,\nand you'll have no more issues with transitive dependencies and authenticated pulls.</p>\n",
      "summary": "",
      "date_published": "2025-07-31T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "devops"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//optimizing-rust-builds-with-target-flags.html",
      "url": "https://ianwwagner.com//optimizing-rust-builds-with-target-flags.html",
      "title": "Optimizing Rust Builds with Target Flags",
      "content_html": "<p>Recently I've been doing some work using <a href=\"https://datafusion.apache.org/\">Apache DataFusion</a> for some high-throughput data pipelines.\nOne of the interesting things I noticed on the user guide was the suggestion to set\n<code>RUSTFLAGS='-C target-cpu=native'</code>.\nThis is actually a pretty common optimization (which I periodically forget about and rediscover),\nso I thought I'd do a quick writeup on this.</p>\n<h1><a href=\"#background-cpu-features\" aria-hidden=\"true\" class=\"anchor\" id=\"background-cpu-features\"></a>Background: CPU features</h1>\n<p>A compiler translates your &quot;idiomatic&quot; code into low-level instructions.\nModern optimizing compilers are pretty good at figuring out ways to cleverly rewrite your code\nto make it faster, while still being functionally equivalent at execution time.\nThe instructions may be reordered from what your simple mental model expects,\nand they may even have no resemblance.\nThis includes rewriting some loop-like (or iterator) patterns into &quot;vectorized&quot; code using SIMD instructions\nthat perform some operation on multiple values at once.</p>\n<p>Special instruction families like this often vary within a single architecture,\nwhich may be surprising at first.\nThe compiler can be configured to enable (or disable!) 
specific &quot;features&quot;,\noptimizing for compatibility or speed.</p>\n<p>In <code>rustc</code>, each <em>target triple</em> has a default set of CPU features enabled.\nIn the case of my work laptop, that's <code>aarch64-apple-darwin</code>.\nSince this architecture doesn't have a lot of variation among chips,\nthe compiler can make some pretty good assumptions about what's available.\n(In fact, for my specific CPU, the M1 Max, it's perfect!)\nBut we'll soon see this is not the case for the most common target: x86_64 Linux.</p>\n<h1><a href=\"#checking-available-features\" aria-hidden=\"true\" class=\"anchor\" id=\"checking-available-features\"></a>Checking available features</h1>\n<p>To figure out what features we could theoretically enable,\nwe need some CPU info from the machine we intend to deploy on.\nThe canonical way of checking CPU features on Linux is probably to <code>cat /proc/cpuinfo</code>.\nThis gives a lot more output that you probably need though.\nHelpfully, <code>rustc</code> includes a simple command that shows you the config\nfor the native CPU capabilities: <code>rustc --print=cfg -C target-cpu=native</code>.\nHere's what it looks like on one Linux 
machine:</p>\n<pre><code>debug_assertions\npanic=&quot;unwind&quot;\ntarget_abi=&quot;&quot;\ntarget_arch=&quot;x86_64&quot;\ntarget_endian=&quot;little&quot;\ntarget_env=&quot;gnu&quot;\ntarget_family=&quot;unix&quot;\ntarget_feature=&quot;adx&quot;\ntarget_feature=&quot;aes&quot;\ntarget_feature=&quot;avx&quot;\ntarget_feature=&quot;avx2&quot;\ntarget_feature=&quot;bmi1&quot;\ntarget_feature=&quot;bmi2&quot;\ntarget_feature=&quot;cmpxchg16b&quot;\ntarget_feature=&quot;f16c&quot;\ntarget_feature=&quot;fma&quot;\ntarget_feature=&quot;fxsr&quot;\ntarget_feature=&quot;lzcnt&quot;\ntarget_feature=&quot;movbe&quot;\ntarget_feature=&quot;pclmulqdq&quot;\ntarget_feature=&quot;popcnt&quot;\ntarget_feature=&quot;rdrand&quot;\ntarget_feature=&quot;rdseed&quot;\ntarget_feature=&quot;sse&quot;\ntarget_feature=&quot;sse2&quot;\ntarget_feature=&quot;sse3&quot;\ntarget_feature=&quot;sse4.1&quot;\ntarget_feature=&quot;sse4.2&quot;\ntarget_feature=&quot;ssse3&quot;\ntarget_feature=&quot;xsave&quot;\ntarget_feature=&quot;xsavec&quot;\ntarget_feature=&quot;xsaveopt&quot;\ntarget_feature=&quot;xsaves&quot;\ntarget_has_atomic=&quot;16&quot;\ntarget_has_atomic=&quot;32&quot;\ntarget_has_atomic=&quot;64&quot;\ntarget_has_atomic=&quot;8&quot;\ntarget_has_atomic=&quot;ptr&quot;\ntarget_os=&quot;linux&quot;\ntarget_pointer_width=&quot;64&quot;\ntarget_vendor=&quot;unknown&quot;\nunix\n</code></pre>\n<p><del>Aside: I'm not quite sure why, but this isn't a 1:1 match with <code>/proc/cpuinfo</code> on this box!\nIt definitely does support some AVX512 instructions,\nbut those don't show up in the native CPU options.\nIf anyone knows why, let me know!</del></p>\n<p><strong>UPDATE:</strong> Shortly after publishing this post, Rust 1.89 was released.\n<a href=\"https://github.com/rust-lang/rust/pull/138940\">This PR</a> linked in the release notes caught my eye.\nApparently the target features for AVX512 were not actually stable at the time of writing,\nbut they are now.\nRe-running the above 
command with rustc version 1.89 shows that the output now includes the AVX512 target features.</p>\n<h1><a href=\"#checking-the-default-features\" aria-hidden=\"true\" class=\"anchor\" id=\"checking-the-default-features\"></a>Checking the default features</h1>\n<p>Perhaps the more interesting question, and the one that motivates this investigation, is\nwhat the <em>defaults</em> are.\nYou can get this with <code>rustc --print cfg</code>.\nThis shows what you get when you run <code>cargo build</code> without any special configuration.\nHere's the output for the same machine:</p>\n<pre><code>debug_assertions\npanic=&quot;unwind&quot;\ntarget_abi=&quot;&quot;\ntarget_arch=&quot;x86_64&quot;\ntarget_endian=&quot;little&quot;\ntarget_env=&quot;gnu&quot;\ntarget_family=&quot;unix&quot;\ntarget_feature=&quot;fxsr&quot;\ntarget_feature=&quot;sse&quot;\ntarget_feature=&quot;sse2&quot;\ntarget_has_atomic=&quot;16&quot;\ntarget_has_atomic=&quot;32&quot;\ntarget_has_atomic=&quot;64&quot;\ntarget_has_atomic=&quot;8&quot;\ntarget_has_atomic=&quot;ptr&quot;\ntarget_os=&quot;linux&quot;\ntarget_pointer_width=&quot;64&quot;\ntarget_vendor=&quot;unknown&quot;\nunix\n</code></pre>\n<p>Well, that's disappointing, isn't it?\nBy default, you'd only get up to SSE2, which is over 20 years old by now!\nThis is a consequence of the diversity of the <code>x86_64</code> architecture.\nIf you want your binary to run <em>everywhere</em>, this is the price you'd have to pay.</p>\n<h1><a href=\"#enabling-features-individually\" aria-hidden=\"true\" class=\"anchor\" id=\"enabling-features-individually\"></a>Enabling features individually</h1>\n<p>While <code>-C target-cpu=native</code> will usually make your code faster on the build machine,\na lot of modern software is built by a CI pipeline on cheap runners, but deployed elsewhere.\nTo reliably target a specific set of features, use the <code>target-feature</code> flag.\nThis lets you specifically enable features you know will be available on the machine running the code.\nHere's
an example of <code>RUSTFLAGS</code> that incorporates all of the above features.\nThis should enable builds to proceed from <em>any</em> other x86_64 Linux machine while producing a binary\nthat supports the exact features of the deployment machine.</p>\n<pre><code class=\"language-shell\">RUSTFLAGS=&quot;-C target-feature=+adx,+aes,+avx,+avx2,+bmi1,+bmi2,+cmpxchg16b,+f16c,+fma,+fxsr,+lzcnt,+movbe,+pclmulqdq,+popcnt,+rdrand,+rdseed,+sse,+sse2,+sse3,+sse4.1,+sse4.2,+ssse3,+xsave,+xsavec,+xsaveopt,+xsaves&quot;\n</code></pre>\n<h1><a href=\"#enabling-features-by-x86-microarchitecture-level\" aria-hidden=\"true\" class=\"anchor\" id=\"enabling-features-by-x86-microarchitecture-level\"></a>Enabling features by x86 microarchitecture level</h1>\n<p>A few days after writing this, I accidentally stumbled upon something else while working out target flags\nfor a program that I knew needed broader compatibility across several datacenters.\nIt sure would be nice if there were some &quot;groups&quot; of commonly supported features, right?</p>\n<p>Turns out this exists, and it was staring right at me in the CPU list: microarchitecture levels!\nIf you list out all the available target CPUs via <code>rustc --print target-cpus</code> on a typical x86_64 Linux box,\nyou'll see that your default target CPU is <code>x86-64</code>.\nThis means it will run on all x86_64 CPUs, and as we discussed above, this doesn't give much of a baseline.\nBut there are 4 versions in total, going up to <code>x86-64-v4</code>.\nIt turns out that AMD, Intel, Red Hat, and SUSE got together in 2020 to define these,\nand came up with some levels which are specifically designed for our use case of optimizing compilers!\nYou can find the <a href=\"https://en.wikipedia.org/wiki/X86-64\">full list of supported features by level on Wikipedia</a>\n(search for &quot;microarchitecture levels&quot;).</p>\n<p><code>rustc --print target-cpus</code> will also tell you which <em>specific</em> CPU you're on.\nYou can use this
info to find which &quot;level&quot; you support.\nBut a more direct way to map to level support is to run <code>/lib64/ld-linux-x86-64.so.2 --help</code>.\nThanks, internet!\nYou'll get some output like this on a modern CPU:</p>\n<pre><code>Subdirectories of glibc-hwcaps directories, in priority order:\n  x86-64-v4 (supported, searched)\n  x86-64-v3 (supported, searched)\n  x86-64-v2 (supported, searched)\n</code></pre>\n<p>And if you run on slightly older hardware, you might get something like this:</p>\n<pre><code>Subdirectories of glibc-hwcaps directories, in priority order:\n  x86-64-v4\n  x86-64-v3 (supported, searched)\n  x86-64-v2 (supported, searched)\n</code></pre>\n<p>This should help if you're trying to aim for broader distribution rather than enabling specific features for some known host.\nThe line to target an x86_64 microarch level is a lot shorter.\nFor example:</p>\n<pre><code>RUSTFLAGS=&quot;-C target-cpu=x86-64-v3&quot;\n</code></pre>\n<p><strong>NOTE:</strong> As mentioned above, Rust 1.89 was released shortly after this post.\nThis incidentally brings support for AVX512 CPU features in the <code>x86-64-v4</code> target CPU,\nwhich were previously marked unstable.</p>\n<h1><a href=\"#dont-forget-to-measure\" aria-hidden=\"true\" class=\"anchor\" id=\"dont-forget-to-measure\"></a>Don't forget to measure!</h1>\n<p>Enabling CPU features doesn't always make things faster.\nIn fact, in some cases, it can even do the opposite!\nThis <a href=\"https://internals.rust-lang.org/t/slower-code-with-c-target-cpu-native/17315\">thread</a>\nhas some interesting anecdotes.</p>\n<h1><a href=\"#summary-of-helpful-commands\" aria-hidden=\"true\" class=\"anchor\" id=\"summary-of-helpful-commands\"></a>Summary of helpful commands</h1>\n<p>In conclusion, here's a quick reference of the useful commands we covered:</p>\n<ul>\n<li><code>rustc --print cfg</code> - Shows the compiler configuration that your toolchain will use by default.</li>\n<li><code>rustc --print=cfg 
-C target-cpu=native</code> - List the configuration if you were to specifically target your CPU. Use this to see the delta between the defaults and the features supported by a specific CPU.</li>\n<li><code>rustc --print target-cpus</code> - List all known target CPUs. This also tells you what your current CPU is and what the default target CPU is for your current toolchain.</li>\n<li><code>/lib64/ld-linux-x86-64.so.2 --help</code> - Specifically for x86_64 Linux users, this will show you which microarchitecture levels your CPU supports.</li>\n<li><code>rustc --print target-features</code> - List <em>all available</em> target features with a short description. You can scope to a specific CPU with <code>-C target-cpu=</code>. Useful mostly to see what you're missing, I guess.</li>\n</ul>\n",
      "summary": "",
      "date_published": "2025-07-28T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "rust",
        "devops"
      ],
      "language": "en"
    }
  ]
}