{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "Ian's Digital Garden",
  "home_page_url": "https://ianwwagner.com/",
  "feed_url": "https://ianwwagner.com//index.json",
  "description": "",
  "items": [
    {
      "id": "https://ianwwagner.com//setting-up-a-wireguard-tunnel-on-freebsd-15.html",
      "url": "https://ianwwagner.com//setting-up-a-wireguard-tunnel-on-freebsd-15.html",
      "title": "Setting up a WireGuard Tunnel on FreeBSD 15",
      "content_html": "<p>It's not like the world needs yet another WireGuard tutorial,\nbut I thought I'd write one since one of the top SEO-ranked ones I stumbled upon was pretty low quality,\nwith several obvious errors and omissions.</p>\n<p>In this post, I'll focus on how you can set up a VPN tunnel\nin the sense that such things were used before shady companies hijacked the term.\nIt's just a way to tunnel traffic between networks.\nFor example, to connect non-internet-facing servers behind a firewall\nto a public host that firewalls and selectively routes traffic over the tunnel.</p>\n<p>I'll assume a pair of FreeBSD servers for the rest of the post,\none that's presumably more accessible (the &quot;server&quot;),\nand a client which is not necessarily routable over the internet.</p>\n<h1><a href=\"#server-setup\" aria-hidden=\"true\" class=\"anchor\" id=\"server-setup\"></a>&quot;Server&quot; setup</h1>\n<p>We'll start with the server setup.\nThis is where your client(s) will connect.\nAt a high level, we'll generate a keypair for the server,\na keypair for the client,\nand generate configuration files for both.\nAnd finally we'll do some basic firewall configuration.</p>\n<h2><a href=\"#wireguard-config\" aria-hidden=\"true\" class=\"anchor\" id=\"wireguard-config\"></a>WireGuard config</h2>\n<p>The following can be run,\neither in a script or line-by-line in a POSIX shell as root.</p>\n<pre><code class=\"language-sh\"># Set this to your server's public IP\nSERVER_PUBLIC_IP=&quot;192.0.2.42&quot;\n\n# We'll be setting up some config files here that we only want to be readable by root.\n# The umask saves us the effort of having to chmod these later.\numask 077\n\n# WireGuard kernel-level support is available in FreeBSD 14+,\n# but this port has a nice service wrapper\npkg install wireguard-tools\n\n# Set up WireGuard config directory\nchmod 770 /usr/local/etc/wireguard\ncd /usr/local/etc/wireguard\n\n# Create a keypair for the server\nSERVER_PRIV_KEY=$(wg 
genkey)\nSERVER_PUB_KEY=$(echo &quot;$SERVER_PRIV_KEY&quot; | wg pubkey)\n\n# Generate the first section of our WireGuard server config.\n# We'll use 172.16.0.1/24 (no real reason for the choice;\n# it's just somewhat convenient as it doesn't collide with the more common\n# Class A and Class C private networks).\ncat &gt; wg0.conf &lt;&lt;EOF\n[Interface]\nAddress = 172.16.0.1/24\nSaveConfig = true\nListenPort = 51820\nPrivateKey = ${SERVER_PRIV_KEY}\nEOF\n\n# Similarly, we need a client keypair\nCLIENT_PRIV_KEY=$(wg genkey)\nCLIENT_PUB_KEY=$(echo &quot;$CLIENT_PRIV_KEY&quot; | wg pubkey)\n\n# Add peer to the server config.\n# This is what lets your client connect later.\n# The server only stores the client's public key\n# and the private IP that it will connect as.\nCLIENT_IP=&quot;172.16.0.2&quot;\ncat &gt;&gt; wg0.conf &lt;&lt;EOF\n# bsdcube\n[Peer]\nPublicKey = ${CLIENT_PUB_KEY}\nAllowedIPs = ${CLIENT_IP}/32\nEOF\n\numask 022 # Revert to normal umask\n\n# Enable the wireguard service\nsysrc wireguard_interfaces=&quot;wg0&quot;\nsysrc wireguard_enable=&quot;YES&quot;\nservice wireguard start\n</code></pre>\n<p><strong>Don't ditch this shell session yet!</strong>\nWe'll come back to the client config later and will need the vars defined above.\nBut first, a brief interlude for packet filtering.</p>\n<h2><a href=\"#pf-setup\" aria-hidden=\"true\" class=\"anchor\" id=\"pf-setup\"></a><code>pf</code> setup</h2>\n<p>We'll use <code>pf</code>, the robust packet filtering (colloquially &quot;firewall&quot;) system\nported from OpenBSD.</p>\n<p>I'm using <code>vtnet0</code> for the external interface,\nsince that's the interface name with my VPS vendor.\nYou may need to change this based on what your main network interface is\n(check <code>ifconfig</code>).</p>\n<p><strong>DISCLAIMER</strong>: This is <em>not</em> necessarily everything you need to launch a production system.\nI've distilled just the parts that are relevant to a minimal WireGuard setup.\nThat said, here's a minimal 
<code>/etc/pf.conf</code>.</p>\n<pre><code class=\"language-pf\">ext_if = &quot;vtnet0&quot;\nwg_if = &quot;wg0&quot;\n\n# Pass all traffic on the loopback interface\nset skip on lo\n\n# Basic packet cleanup\nscrub in on $ext_if all fragment reassemble\n\n# Allows WireGuard clients to reach the internet.\n# I do not need this in my config, but noting it here\n# in case your use case is *that* sort of VPN.\n# nat on $ext_if from $wg_if:network to any -&gt; ($ext_if)\n\n# Allow all outbound connections\npass out keep state\n\n# SSH (there's a good chance you need this)\npass in on $ext_if proto tcp from any to ($ext_if) port 22\n\n# Allow inbound WireGuard traffic\npass in on $ext_if proto udp from any to ($ext_if) port 51820\n\n# TODO: Forwarding for the services that YOU need\n# Here's one example demonstrating how you would allow traffic\n# to route directly to one of the WireGuard network IPs (e.g. 172.16.0.1 in this example)\n# over port 8080.\n# pass in on $wg_if proto tcp from $wg_if:network to ($wg_if) port 8080\n\n# Allow ICMP\npass in inet proto icmp all\npass in inet6 proto icmp6 all\n</code></pre>\n<p>Next, we enable the service and start it.\nIf you're already running <code>pf</code>, then at least part of this isn't necessary.</p>\n<pre><code class=\"language-sh\"># Allow forwarding of traffic from WireGuard clients\nsysctl net.inet.ip.forwarding=1\n# Make forwarding persistent across reboots\nsysrc gateway_enable=&quot;YES&quot;\n\n# Enable pf\nsysrc pf_enable=&quot;YES&quot;\nservice pf start\n</code></pre>\n<h1><a href=\"#client-configuration\" aria-hidden=\"true\" class=\"anchor\" id=\"client-configuration\"></a>Client configuration</h1>\n<p>And now we come back to the client configuration.\nThe &quot;client&quot; in this case does not necessarily have to be routable over the internet;\nit just needs to be able to connect to the server.\nYou've still got the same shell session with those variables, right?</p>\n<pre><code class=\"language-sh\">cat &lt;&lt;EOF\n[Interface]\nPrivateKey = ${CLIENT_PRIV_KEY}\nAddress = 
${CLIENT_IP}/24\n\n[Peer]\nPublicKey = ${SERVER_PUB_KEY}\nAllowedIPs = 172.16.0.0/24  # Only route private subnet traffic over the tunnel\nEndpoint = ${SERVER_PUBLIC_IP}:51820\nPersistentKeepalive = 30\nEOF\n</code></pre>\n<p>That's it; that's the client config.\nRun through the same initial setup steps for adding the <code>wireguard-tools</code> package\nand creating the directory with the right permissions.\nThen put this config in <code>/usr/local/etc/wireguard/wg0.conf</code>.</p>\n<p>The client will also need a similar <code>pf</code> configuration,\nbut rather than blanket allowing traffic in over <code>$wg_if</code>,\nyou probably want something a bit more granular.\nFor example, allowing traffic in over a specific port (e.g. <code>8080</code>).\nI'll leave that as an exercise to the reader based on the specific scenario.</p>\n",
      "summary": "",
      "date_published": "2026-04-07T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "networking",
        "FreeBSD"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//two-weeks-of-emacs.html",
      "url": "https://ianwwagner.com//two-weeks-of-emacs.html",
      "title": "Two Weeks of Emacs",
      "content_html": "<p>I'm approximately 2 weeks into using emacs as my daily editor and, well, I haven't opened JetBrains since.\nI honestly didn't expect that, but here we are.</p>\n<h1><a href=\"#papercuts-i-said-i-would-solve-later\" aria-hidden=\"true\" class=\"anchor\" id=\"papercuts-i-said-i-would-solve-later\"></a>Papercuts I said I would solve later</h1>\n<p>Here's the list of things I noted in my last post that I said I'd come back to.\nThe list has changed a bit since the last post:</p>\n<p>Solved:</p>\n<ul>\n<li>Issues with automatic indentation</li>\n<li>Files not reloading automatically when changed externally (fixed with <code>global-auto-revert-mode</code>)</li>\n<li>Highlighting mutable variables</li>\n</ul>\n<p>Haven't bothered to try resolving (infrequently used):</p>\n<ul>\n<li>Macro expansion</li>\n<li>Code completion and jump to definition within rustdoc comments</li>\n</ul>\n<p>The highlighting one is worth a bit of explanation.\nHere's what I had to do to get it working:</p>\n<pre><code class=\"language-lisp\">;; Highlight mutable variables (like RustRover/JetBrains).\n;; NB: Requires eglot 1.20+\n(defface eglot-semantic-mutable\n  '((t :underline t))\n  &quot;Face for mutable variables via semantic tokens.&quot;)\n\n(with-eval-after-load 'eglot\n  (add-to-list 'eglot-semantic-token-modifiers &quot;mutable&quot;))\n</code></pre>\n<p>Apparently this requires a fairly recent version of eglot to work,\nand it isn't necessarily supported by every LSP,\nbut it works for me with rust-analyzer.\nI spent way too much time on this because for some reason running <code>M-x eglot-reconnect</code>\nor <code>M-x eglot</code> and accepting a restart didn't reset the buffer settings or something.\nIf this doesn't work, try killing the buffer and then find the file again.</p>\n<h1><a href=\"#other-new-papercuts\" aria-hidden=\"true\" class=\"anchor\" id=\"other-new-papercuts\"></a>Other (new) papercuts!</h1>\n<p>Here's a similarly categorized list of 
things that I found over the past week or so.</p>\n<p>Solved:</p>\n<ul>\n<li>&quot;Project&quot; views: I got even more than I bargained for with <code>(tab-bar-mode 1)</code>! It's great.\nIt's even better than I expected TBH since every tab can contain an arbitrary configuration of buffers.\nThis is a weird way of thinking at first, but it's really nice since stuff doesn't need to follow the traditional bounds\nthat I was used to in IDEs (e.g. a tab can be entirely terminal buffers, or cross &quot;projects&quot;, which is useful to me).</li>\n<li><code>xref-matches-in-files</code> was SLOW. Turned out to be an issue in my <code>fish</code> configuration (which isn't even my &quot;preferred&quot; shell,\nbut it's still my login shell due to being more supported than nushell, which I use for most things).\nRemoving pyenv fixed that.\nYou can also set it to use ripgrep with <code>(setq xref-search-program 'ripgrep)</code>.</li>\n<li>Fuzzy finding files by name within a project quickly annoyed me.\nTurns out the built-in project.el has a perfectly reasonable hotkey for this: <code>C-x p f</code> (mnemonic: project find).</li>\n<li>Searching the project by <em>symbol</em> (variable, struct, trait, etc.) works well with the <code>consult-eglot</code> package.\nSpecifically, it includes a <code>consult-eglot-symbols</code> command.</li>\n</ul>\n<p>Not solved yet:</p>\n<ul>\n<li>It was really nice to just fold sections of code by clicking something in the margin (&quot;fringe&quot; in emacs parlance; gutter in JetBrains).\nIt looks like there are ways to do this; I just haven't had time to mess with it.</li>\n<li>The language server can get confused if you do a big operation like a git branch switch. 
Restarting eglot fixes this.\nI'm sure this happened occasionally with JetBrains, but it seems worse here.</li>\n<li>The lovely <code>diff-hl</code> package doesn't get the hint when files reload for some reason.</li>\n</ul>\n<p>I'll also add a quick note that it's (still) surprisingly easy to screw up your own config.\nEmacs as a system is super flexible, but that also makes it somewhat fragile.\nEverything is programmable, in a single-threaded, garbage-collected language.</p>\n<p>One snag I hit was that after some period, the environment got super slow,\naffecting things like unit test runtimes in terminal buffers,\nand making input noticeably laggy.\nThe issue turned out to be my <code>global-auto-revert-mode</code> config.\nApparently if you do it wrong, it turns into a whole stack of polling operations for every buffer.\nThis was a consequence of Claude suggesting something dumb and me not researching it :P\nThe normal configuration will use filesystem notifications like kqueue or inotify.</p>\n<h1><a href=\"#whats-next\" aria-hidden=\"true\" class=\"anchor\" id=\"whats-next\"></a>What's next?</h1>\n<p>I'm pretty happy with the new setup overall.\nObviously there's still room for tweaks, but it's pretty great,\nand I'm really enjoying the tab bar approach for organizing things.\nI'm also frankly shocked at how little CPU I'm using relative to previous norms on my MacBook.</p>\n<p>Next up I'll probably try (in no particular order):</p>\n<ul>\n<li>Magit / Majitsu; I actually love Sublime Merge, but wouldn't mind one less context switch.\nEspecially if I can get a view of the current project easily based on context.\nSublime's search interface is terrible when you have hundreds of repos.</li>\n<li>Chezmoi for dotfile sync + see what breaks on my desktop (FreeBSD).</li>\n<li>More adventures with TRAMP. 
I used this extensively in the early '00s but have mostly been doing local dev this time around.\nBut I see emacs having a lot of potential for remote dev with TRAMP so I'll give that a shot for some stuff over the next few weeks.</li>\n</ul>\n",
      "summary": "",
      "date_published": "2026-03-28T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "software-engineering"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//returning-to-emacs.html",
      "url": "https://ianwwagner.com//returning-to-emacs.html",
      "title": "Returning to Emacs",
      "content_html": "<h1><a href=\"#jetbrains-woes\" aria-hidden=\"true\" class=\"anchor\" id=\"jetbrains-woes\"></a>JetBrains woes</h1>\n<p>I have been a fan of JetBrains products for over a decade by now,\nand an unapologetic lover of IDEs generally.\nI've used PyCharm since shortly after it launched,\nand over the years I've used IntelliJ IDEA,\nWebStorm, DataGrip, RustRover, and more.\nI literally have the All Products Pack (and have for many years).</p>\n<p>I truly believe that a good IDE can be a productivity multiplier.\nYou get refactoring, jump-to-definition, symbol-aware search,\nsaved build/run configurations, a nice and consistent interface\nto otherwise terrible tooling (looking at you, CMake and the half dozen Python package managers\nof the last decade and change).</p>\n<p>But something has changed over the past few years.\nThe quality of the product has generally deteriorated in several ways.\nWith the advent of LSP, the massive lead JetBrains had in &quot;code intelligence&quot;\nhas eroded, and in many cases no longer exists.\nThe resource requirements of the IDE have also ballooned massively,\neven occasionally causing memory pressure on my amply equipped MacBook Pro with 32GB of RAM.</p>\n<p>(Side note: I regularly have 3 JetBrains IDEs open at once because I need to work in many languages,\nand for some reason they refuse to ship a single product that does that.\nI would have paid for such a product.)</p>\n<p>And as if that weren't enough, it seems like I have to restart to install some urgent nagging update\nseveral times/week, usually related to their confusing mess of AI plugins\n(is AI Chat what we're supposed to use? Or Junie? Or... 
what?).\nTo top it all off, stability has gone out the window.\nAt least once/week, I will open my laptop from sleep,\nonly to find out that one or more of my JetBrains IDEs has crashed.\nUsually RustRover.\nWhich also eats up like 30GB of extra disk space for things like macro expansions\nand other code analysis.\nThe taxes are high and increasing on every front.</p>\n<h1><a href=\"#my-philosophy-of-editors\" aria-hidden=\"true\" class=\"anchor\" id=\"my-philosophy-of-editors\"></a>My philosophy of editors</h1>\n<p>So, I decided the time was right to give Emacs another shot.</p>\n<p>If you know me personally, you may recall that I made some strong statements in the past\nto the effect that spending weeks writing thousands of lines of Lua to get the ultimate Neovim config was silly.\nAnd my strongly worded statements of the past were partially based on my own experiences with such editors,\nincluding Emacs.\nBasically, I appreciate that you <em>can</em> &quot;build your own lightsaber&quot;,\nbut I did not consider that to be a good use of my time.\nOne of the reasons I like(d) JetBrains is that I <em>didn't</em> ever need to think about tweaking configs!</p>\n<p>But things have gotten so bad that I figured I'd give it a shot with a few stipulations.</p>\n<ol>\n<li>I would try it for a week, but if it seriously hampered my productivity after a few days, I'd switch back.</li>\n<li>I was only going to spend a few hours configuring it.</li>\n</ol>\n<p>With these constraints, I set off to see if I needed to revise my philosophy of editors.</p>\n<h1><a href=\"#why-emacs\" aria-hidden=\"true\" class=\"anchor\" id=\"why-emacs\"></a>Why Emacs?</h1>\n<p>Aside: why not (Helix|Neovim|Zed|something else)?\nA few reasons, in no particular order:</p>\n<ul>\n<li>I sorta know Emacs. I used it as one of my primary editors for a year or two in the early 2010s.</li>\n<li>I tried Helix for a week last year. 
It didn't stick; something about &quot;modal editing&quot; just does not fit with my brain.</li>\n<li>I don't mind a terminal per se, but we invented windowing systems decades before I was born and I don't understand the fascination\nwith running <em>everything</em> in a terminal (or a web browser, for that matter :P).</li>\n<li>If I'm going to go through the pain of switching, I want to be confident it'll be around and thriving in another 10 years.\nAnd it should work everywhere, including lesser known platforms like FreeBSD.</li>\n<li>If your movement keys require a QWERTY layout, I will be very annoyed.</li>\n</ul>\n<h1><a href=\"#first-impressions-3-days-in\" aria-hidden=\"true\" class=\"anchor\" id=\"first-impressions-3-days-in\"></a>First impressions (3 days in)</h1>\n<p>So, how's it going so far?\nHere are a few of the highlights.</p>\n<h2><a href=\"#lsps-have-improved-a-lot\" aria-hidden=\"true\" class=\"anchor\" id=\"lsps-have-improved-a-lot\"></a>LSPs have improved a lot!</h2>\n<p>It used to be the case that JetBrains had a dominant position in code analysis.\nThis isn't the case anymore, and most of the languages I use that would benefit from an LSP\nhave a great one available.\nThings have improved a lot, particularly in terms of Emacs integrations,\nover the past decade!\n<a href=\"https://www.gnu.org/software/emacs/manual/html_node/eglot/Eglot-Features.html\"><code>eglot</code></a> is now bundled with Emacs,\nso you don't even need to go out of your way to get some funky packages hooked up\n(like I had to with some flycheck plugin for Haskell back in the day).</p>\n<h3><a href=\"#refactoring-tools-have-also-improved\" aria-hidden=\"true\" class=\"anchor\" id=\"refactoring-tools-have-also-improved\"></a>Refactoring tools have also improved</h3>\n<p>The LSP-guided tools for refactoring have also improved a lot.\nIt used to be that only a &quot;real IDE&quot; had much better than grep and replace.\nI was happy to find that <code>eglot-rename</code> 
&quot;just worked&quot;.</p>\n<h3><a href=\"#docs\" aria-hidden=\"true\" class=\"anchor\" id=\"docs\"></a>Docs</h3>\n<p>I'm used to hovering my mouse over any bit of code, waiting a few seconds,\nand being greeted by a docs popover.\nThis is now possible in Emacs too with <code>eldoc</code> + your LSP.\nI added the <a href=\"https://github.com/casouri/eldoc-box\"><code>eldoc-box</code></a> plugin and configured it to my liking.</p>\n<h3><a href=\"#quick-fix-actions-work-too\" aria-hidden=\"true\" class=\"anchor\" id=\"quick-fix-actions-work-too\"></a>Quick fix actions work too!</h3>\n<p>So far, every single quick-fix action that I'm used to in RustRover\nseems to be there in the eglot integration with rust-analyzer.\nIt took me a few minutes to realize that this was called <code>eglot-code-actions</code>,\nbut once I figured that out, I was rolling.</p>\n<h2><a href=\"#jump-to-definition-works-great-but-navigation-has-caveats\" aria-hidden=\"true\" class=\"anchor\" id=\"jump-to-definition-works-great-but-navigation-has-caveats\"></a>Jump to definition works great, but navigation has caveats</h2>\n<p>I frequently use the jump-to-definition feature in IDEs.\nUsually by command+clicking.\nYou can do the same in Emacs with <code>M-.</code>, which is a bit weird, but okay.\nI picked up the muscle memory after less than an hour.\nThe weird thing though is what happens next.\nI'm used to JetBrains and most other well-designed software (<em>glares in the general direction of Apple</em>)\n&quot;just working&quot; with the forward+back buttons that many input devices have.\nEmacs did not out of the box.</p>\n<p>One thing JetBrains did fairly well was bookmarking where you were in a file, and even letting you jump back after\nnavigating to the definition or to another file.\nThis had some annoying side effects with multiple tabs, which I won't get into, but it worked overall.\nIn Emacs, you can return from a definition jump with <code>M-,</code>, but there is no general 
navigate forward/backward concept.\nThis is where the build-your-own-lightsaber philosophy comes in, I guess.\nI knew I'd hit it eventually.</p>\n<p>I tried out a package called <code>better-jumper</code> but it didn't <em>immediately</em> do what I wanted,\nso I abandoned it.\nI opted instead for simple backward and forward buffer navigation.\nIt works alright.</p>\n<pre><code class=\"language-lisp\">(global-set-key (kbd &quot;&lt;mouse-3&gt;&quot;) #'previous-buffer)\n(global-set-key (kbd &quot;&lt;mouse-4&gt;&quot;) #'next-buffer)\n</code></pre>\n<p>Aside: I had to use <code>C-h k</code> (<code>describe-key</code>) to figure out what the mouse buttons were.\nAdvice I saw online apparently isn't universally applicable,\nand Xorg, macOS, etc. may number the buttons differently!</p>\n<h2><a href=\"#terminal-emulation-within-emacs\" aria-hidden=\"true\" class=\"anchor\" id=\"terminal-emulation-within-emacs\"></a>Terminal emulation within Emacs</h2>\n<p>The Emacs <code>shell</code> mode is terrible.\nIt's particularly unusable if you're running any sort of TUI application.\nA friend recommended <a href=\"https://codeberg.org/akib/emacs-eat\"><code>eat</code></a> as an alternative.\nThis worked pretty well out of the box with most things,\nbut when I ran <code>cargo nextest</code> for the first time,\nI was shocked at how slow it was.\nMy test suite, which normally runs in under a second, took over 30!\nYikes.\nI believe the slowness is because it's implemented in elisp,\nwhich is still pretty slow even when native compilation is enabled.</p>\n<p>Another Emacs user recommended I try out <a href=\"https://github.com/akermu/emacs-libvterm\"><code>vterm</code></a>, so I did.\nHallelujah!\nIt's no iTerm 2, and it does have a few quirks,\nbut it's quite usable and MUCH faster.\nIt also works better with full-screen TUI apps like Claude Code.</p>\n<h2><a href=\"#claude-code-cli-is-actually-great\" aria-hidden=\"true\" class=\"anchor\" 
id=\"claude-code-cli-is-actually-great\"></a>Claude Code CLI is actually great</h2>\n<p>I'm not going to get into the pros and cons of LLMs in this post.\nBut if you use these tools in your work,\nI think you'll be surprised by how good the experience is with <code>vterm</code> and the <code>claude</code> CLI.\nI have been evaluating JetBrains' disjoint attempts at integrations with Junie,\nand more recently Claude Code and Codex.</p>\n<p>Junie is alright for some things.\nThe only really good thing I have to say about the product is that at least it let me select a GPT model.\nAnthropic models have been severely hampered in their ability to do anything useful in most codebases I work in,\ndue to tiny context windows.\nThat recently changed when Anthropic rolled out a 1 million token context window to certain users.</p>\n<p>JetBrains confusingly refers to Claude Code as &quot;Claude Agent&quot;, and team subscriptions automatically include some monthly credits.\nEvery single JetBrains IDE will install its own separate copy of Claude Code (yay).\nBut it <em>is</em> really just shelling out to Claude Code, it seems\n(it asks for your permission to download the binary.\nCodex is the same.)</p>\n<p>Given this, I assumed the experience and overall quality would be similar.\nWell, I was VERY wrong there.\nClaude Code in the terminal is far superior for a number of reasons.\nNot just access to the new model, though that helps.\nYou can also configure &quot;effort&quot; (lol), and the &quot;plan&quot; mode seems to be far more sophisticated than what you get in the JetBrains IDEs.</p>\n<p>So yeah, if you're going to use these tools, just use the official app.\nIt makes sense; they have an incentive to push people to buy direct.\nAnd it so happens that Claude Code fits comfortably in my Emacs environment.</p>\n<p>More directly relevant to this post,\nLLMs (any of them really) are excellent at recommending Emacs packages and config tweaks.\nSo it's never been easier to give it 
a try.\nI've spent something like 2-3x longer writing this post than I did configuring Emacs.\n(And yes, before you ask, this post is 100% hand-written.)\nMy basic flow was to work, get annoyed (that's pretty easy for me),\nand describe my problem to ChatGPT or Claude.\nI am nowhere near the hours I budgeted for config fiddling.\nThat surprised me!</p>\n<h2><a href=\"#vcs-integration\" aria-hidden=\"true\" class=\"anchor\" id=\"vcs-integration\"></a>VCS integration</h2>\n<p>While I'm no stranger to hacking around with nothing more than a console,\nI really don't like the git CLI.\nI've heard jj is better, but honestly I think GUIs are pretty great most of the time.\nI will probably try magit at some point,\nbut for now I'm very happy with Sublime Merge.</p>\n<p>But one thing I MUST have in my editor is a &quot;gutter&quot; view of lines that are new/changed,\nand a way to get a quick inline diff.\nJetBrains had a great UX for this which I used daily.\nAnd for Emacs, I found something just as great: <a href=\"https://github.com/dgutov/diff-hl\"><code>diff-hl</code></a>.</p>\n<p>My config for this is very simple:</p>\n<pre><code class=\"language-lisp\">(unless (package-installed-p 'diff-hl)\n  (package-install 'diff-hl))\n(use-package diff-hl\n  :config\n  (global-diff-hl-mode))\n</code></pre>\n<p>To get a quick diff of a section that's changed,\nI use <code>diff-hl-show-hunk</code>.\nI might even like the hunk review experience here better than in JetBrains!</p>\n<h2><a href=\"#project-wide-search\" aria-hidden=\"true\" class=\"anchor\" id=\"project-wide-search\"></a>Project-wide search</h2>\n<p>I think JetBrains has the best search around with their double-shift, cmd+shift+o, and cmd+shift+f views.\nI have not yet gotten my Emacs configured to be as good.\nBut <code>C-x p g</code> (<code>project-find-regexp</code>) is pretty close.\nI'll look into other plugins later for fuzzy filename/symbol search.\nI <em>do</em> miss that.</p>\n<h2><a 
href=\"#run-configurations\" aria-hidden=\"true\" class=\"anchor\" id=\"run-configurations\"></a>Run configurations</h2>\n<p>The final pleasant surprise is that I don't miss JetBrains run configurations as much as I expected.\nI've instead switched to putting a <a href=\"https://just.systems/man/en/introduction.html\"><code>justfile</code></a> in my repo and populating that with my run configurations\n(much of the software I work on has half a dozen switches which vary by environment).\nThis also has the side effect of cleaning up some of my CI configuration (<code>just</code> run the same thing!)\nand also serves as useful documentation to LLMs.</p>\n<h2><a href=\"#spell-checking\" aria-hidden=\"true\" class=\"anchor\" id=\"spell-checking\"></a>Spell checking</h2>\n<p>I have <a href=\"https://github.com/crate-ci/typos\"><code>typos</code></a> configured for most of my projects in CI,\nbut it drives me nuts when an editor doesn't flag typos for me.\nJetBrains did this well.\nEmacs has nothing out of the box (Zed also annoyingly doesn't ship with anything, which is really confusing to me).\nBut it's easy to add.</p>\n<p>I went with Jinx.\nThere are other options, but this one seemed pretty modern and worked without any fuss, so I stuck with it.</p>\n<h1><a href=\"#papercuts-to-solve-later\" aria-hidden=\"true\" class=\"anchor\" id=\"papercuts-to-solve-later\"></a>Papercuts to solve later</h1>\n<p>This is all a lot more positive than I was expecting, to be honest!\nI am not going to cancel my JetBrains subscription tomorrow;\nthey still <em>do</em> make the best database tool I know of.\nBut I've moved all my daily editing to Emacs.</p>\n<p>That said, there are still some papercuts I need to address:</p>\n<ul>\n<li>Macro expansion. I liked that in RustRover. There's apparently a way to get this with <code>eglot-x</code>, which I'll look into later.</li>\n<li>Automatic indentation doesn't work out of the box for all modes to my liking. 
I think I've fixed most of these but found the process confusing.</li>\n<li>Files don't reload in buffers automatically with disk changes (e.g. <code>cargo fmt</code>)!</li>\n<li>Code completion and jump to definition don't work inside rustdoc comments.</li>\n<li>RustRover used to highlight all of my <code>mut</code> variables. I would love to get that back in Emacs.</li>\n</ul>\n",
      "summary": "",
      "date_published": "2026-03-18T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "software-engineering",
        "shell"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//typing-hanja-on-macos.html",
      "url": "https://ianwwagner.com//typing-hanja-on-macos.html",
      "title": "Typing Hanja on macOS",
      "content_html": "<p>이제 漢字 쓰는 方法을 알게 되었다!</p>\n<p>I was today years old when I finally figured out how to type Hanja (characters from China that were historically used to write Korean).\nIt struck me as very strange that this didn't seem possible in any of the obvious input methods.\nIn Japanese, for example, you get search-as-you-type style suggestions popping up as you type,\nwhether in Kana or Romaji mode.\nIn fact, until now, I mostly relied on my prior study of Japanese,\nswitched to that layout, and typed in a Japanese reading.\nThis was quite clunky, though, as I am now learning the Korean readings.</p>\n<p>I even asked several Koreans if they knew how,\nand none did (at least for macOS), since it's relatively uncommon to use them these days,\nparticularly for younger people.\nWindows keyboards have a dedicated Hanja mode key,\nbut I've never seen an Apple keyboard with this,\nand I'm not even totally sure if macOS understands the key code\n(if anyone knows, let me know on Mastodon).</p>\n<p>It turns out this IS in fact possible; it's just uncharacteristically buried.\nThe trick is to press option+return.\nThen you'll get a menu where you can select Hanja matches\nfor the previous &quot;word&quot; (it seems to rely on spacing, which is not always completely consistent in colloquial writing,\nbut it's not too hard to get used to).\nI found this tip on Apple's <a href=\"https://support.apple.com/en-gb/guide/korean-input-method/welcome/mac\">website</a>\nvia a search.</p>\n<p>This is probably only relevant to like 2 other people on the internet, but I thought I'd spread the word\nsince it was relatively hard to find!</p>\n",
      "summary": "",
      "date_published": "2026-03-07T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "macos",
        "i18n",
        "korean",
        "languages"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//reqwest-0-13-upgrade-and-webpki.html",
      "url": "https://ianwwagner.com//reqwest-0-13-upgrade-and-webpki.html",
      "title": "reqwest 0.13 Upgrade and WebPKI",
      "content_html": "<p>In case you missed the <a href=\"https://seanmonstar.com/blog/reqwest-v013-rustls-default/\">announcement</a>,\nthe <code>reqwest</code> crate has a new and very important release out!\n<code>reqwest</code> is an opinionated, high-level HTTP client for Rust,\nand the main feature of this release is that <a href=\"https://rustls.dev/\"><code>rustls</code></a>\nis now the default TLS backend.\nRead the excellent blog posts from Sean and others on why <code>rustls</code>\nsafer and often faster than native TLS.\nIt's also a lot more convenient most of the time!</p>\n<h1><a href=\"#changes-to-certificate-verification\" aria-hidden=\"true\" class=\"anchor\" id=\"changes-to-certificate-verification\"></a>Changes to certificate verification</h1>\n<p>This post is about one of the more mundane parts of the release.\nPreviously there were a lot of somewhat confusing features related to certificate verification.\nThese have been condensed down to a smaller number of feature flags.\nThe summary of these changes took a bit to &quot;click&quot; for me so here's a rephrasing in my own words.</p>\n<ul>\n<li>By default, it uses the <a href=\"https://docs.rs/rustls-platform-verifier/latest/rustls_platform_verifier/\">native platform verifier</a>,\nwhich looks for root certificates in your system store, and inherits systemwide revocations and explicit trust settings\nin addition to the &quot;baseline&quot; root CAs trusted by your OS.</li>\n<li>The feature flag to enable WebPKI bundling of roots is gone.\nWebPKI is a bundle of CA root certificates trusted and curated by Mozilla.\nIt's a reasonably standard set, and most other trust stores look pretty similar.</li>\n<li>You can merge in your own <em>additionally</em> trusted root certificates using <a href=\"https://docs.rs/reqwest/latest/reqwest/struct.ClientBuilder.html#method.tls_certs_merge\"><code>tls_certs_merge</code></a>.</li>\n<li>You can be extra exclusive and use <a 
href=\"https://docs.rs/reqwest/latest/reqwest/struct.ClientBuilder.html#method.tls_certs_only\"><code>tls_certs_only</code></a>\nto limit verification to only the certificates you specify.</li>\n</ul>\n<p>The documentation and release notes also mention that <code>tls_certs_merge</code> is not always supported.\nI frankly have no idea what conditions cause this to be supported or not.\nBut <code>tls_certs_only</code> apparently can't fail. ¯\\_(ツ)_/¯</p>\n<h1><a href=\"#what-this-means-for-containerized-applications\" aria-hidden=\"true\" class=\"anchor\" id=\"what-this-means-for-containerized-applications\"></a>What this means for containerized applications</h1>\n<p>The reason I'm interested in this in mostly because at <code>$DAYJOB</code>, just about everything is deployed in containers.\nFor reasons that I don't fully understand (something about image size maybe??),\nthe popular container images like <code>debian:trixie-slim</code> <strong>do not include any root CAs</strong>.\nYou have to <code>apt-get install</code> them yourself.\nThis is to say that most TLS applications will straight up break in the out-of-the-box config.</p>\n<p>Previously I had seen this solved in two ways.\nThe first is to install the certs from your distribution's package manager like so:</p>\n<pre><code class=\"language-dockerfile\">RUN apt-get update \\\n &amp;&amp; apt-get install -y --no-install-recommends ca-certificates \\\n &amp;&amp; rm -rf /var/lib/apt/lists/*\n</code></pre>\n<p>The second is to add the WebPKI roots to your cargo dependencies.\nThis actually requires some manual work; adding the crate isn't enough.\nYou then have to add all of the roots (e.g. 
via <code>tls_certs_merge</code> or <code>tls_certs_only</code>).</p>\n<h1><a href=\"#which-approach-is-better\" aria-hidden=\"true\" class=\"anchor\" id=\"which-approach-is-better\"></a>Which approach is better?</h1>\n<p>The net result is <em>approximately</em> the same, but not entirely.\nThe system-level approach is more flexible.\nPresumably you would get updates in some cases without having to rebuild your application\n(though you do <em>not</em> get these automatically; the certs are only loaded once on app startup\nby <code>rustls_platform_verifier</code>!).\nPresumably you would also get any, say, enterprise-level trust, distrust, CRLs, etc.\nthat are dictated by your corporate IT department.</p>\n<p>The WebPKI approach on the other hand is baked at build time.\nThe <a href=\"https://docs.rs/webpki-root-certs/latest/webpki_root_certs/\">crate</a>\nhas a pretty strong, if slightly obtuse warning about this:</p>\n<blockquote>\n<p>This library is suitable for use in applications that can always be recompiled and instantly deployed. For applications that are deployed to end-users and cannot be recompiled, or which need certification before deployment, consider a library that uses the platform native certificate verifier such as <code>rustls-platform-verifier</code>. This has the additional benefit of supporting OS provided CA constraints and revocation data.</p>\n</blockquote>\n<p>Attempting to read between the lines, past that &quot;instantly deployed&quot; jargon,\nI think they are really just saying &quot;if you use this, certs are baked at compile time and you <em>never</em> get automatic updates. 
Be careful with that.&quot;</p>\n<p>So it's clear to me you shouldn't ship, say, a static binary to users with certs baked like this.\nBut I'm building server-side software.\nAnd as of February 2026, people look at you funny if you don't deploy using containers.\nI <em>can</em> deploy sufficiently instantly,\nthough to be honest I would have no idea <em>when</em> I should.\nMost apps get deployed frequently enough that I would assume this just doesn't matter,\nand so I'm not sure the warning as-written does much to help a lot of the Rust devs I know.</p>\n<h1><a href=\"#conclusion\" aria-hidden=\"true\" class=\"anchor\" id=\"conclusion\"></a>Conclusion</h1>\n<p>My conclusion is that if you're deploying containerized apps, there is approximately no functional difference.\nYour container is a static image anyways.\nThey don't typically run background tasks of any sort.\nAnd even if they did, the library won't reload the trusted store during application.\nSo it's functionally the same (delta any minor differences between WebPKI and Debian, which should be minimal).\nSimilarly, unless you work for a large enterprises / government,\nyou probably don't have mandated, hand-picked set of CAs and CRLs.\nSo again here there really is no difference as far as I can tell.</p>\n<p>In spite of that, I decided to switch away from using WebPKI in one of our containers that I upgraded.\nThe reason is that structuring this way\n(provided that the sources are copied from a previous layer!)\nensures that every image build always has the latest certs from Debian.\n<code>cargo build</code> is a lot more deterministic,\nand will use whatever you have in the lockfile unless you explicitly run <code>cargo update</code>.</p>\n<p>And even though I'm fortunate to not have an IT apparatus dictating cert policy today,\nyou never know... 
this approach seems to be both more flexible and creates a &quot;pit of success&quot;\nrather than a landmine where the trust store may not see an update for a year\ndespite regular rebuilds.</p>\n<p>In other words, I think Sean made the right choice, and you should <em>probably</em> delegate to the system,\nunless you have a particular reason to do otherwise.</p>\n<p>Hope this helps; I wrote this because I didn't understand the tradeoffs initially,\nand had some trouble parsing the existing writing on the subject.</p>\n",
      "summary": "",
      "date_published": "2026-02-13T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "rust",
        "cryptography"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//even-safer-rust-with-miri.html",
      "url": "https://ianwwagner.com//even-safer-rust-with-miri.html",
      "title": "Even Safer Rust with Miri",
      "content_html": "<p>Recently some of the Miri contributors published a <a href=\"https://plf.inf.ethz.ch/research/popl26-miri.html\">paper that was accepted to POPL</a>.\nI've been using Rust professionally for about 7 years now,\nand while I'd <em>heard of</em> Miri several times over the years,\nI think there's a wide lack of knowledge about what it does, and why anyone should care.\nI only recently started using it myself, so I'm writing this post to share\nwhat Miri is, why you should care, and how you can get started easily.</p>\n<h1><a href=\"#what-is-miri\" aria-hidden=\"true\" class=\"anchor\" id=\"what-is-miri\"></a>What is Miri?</h1>\n<p>Miri is an interpreter for Rust's mid-level intermediate representation (MIR; hence the acronym).\nThat's how I first remember seeing it described years ago,\nand that's what the GitHub project description still says.</p>\n<p>The latest README is a bit more helpful though: it's a tool for detecting <em>undefined behavior</em> (UB) in Rust code.\nIn other words, it helps you identify code that's unsafe or unsound.\nWhile it would be a bug to hit such behaviors in safe Rust,\nif you're using <code>unsafe</code> (or any of your dependency chain does!),\nthen this is a real concern!\nMiri has in fact even found soundness bugs in the Rust standard library,\nso even a transitive sort of <code>#![forbid(unsafe_code)]</code> won't help you.</p>\n<h1><a href=\"#what-is-ub-and-why-is-it-bad\" aria-hidden=\"true\" class=\"anchor\" id=\"what-is-ub-and-why-is-it-bad\"></a>What is UB (and why is it bad)?</h1>\n<p>I think to understand why Miri matters,\nwe first need to understand why UB is bad.\nThis is not something that most professional programmers have a great understanding of (myself included).</p>\n<p>In abstract, UB can mean &quot;anything that isn't specified&quot;, or something like that...\nBut that's not very helpful!\nAnd it doesn't really explain the stakes if we don't avoid it.\nThe Rust Reference has a <a 
href=\"https://doc.rust-lang.org/reference/behavior-considered-undefined.html\">list</a>\nof behaviors that are considered to be undefined in Rust,\nbut they note that this list is not exhaustive.</p>\n<p>When searching for a better understanding,\nI've seen people online make statements like\n&quot;UB means your program can do literally anything at this point, like launch nuclear missiles.&quot;\nWhile this is technically true, this isn't particularly helpful to most readers.\nI want something more concrete...</p>\n<p>The authors of the paper put UB's consequences in terms which really &quot;clicked&quot; for me\nusing a logical equivalence, which I'll quote here:</p>\n<blockquote>\n<p>Furthermore, Undefined Behavior is a massive security problem. Around 70% of critical security vulnerabilities are caused by memory safety violations [38, 18, 32], and all of these memory safety violations are instances of Undefined Behavior. After all, if the attacker overflows a buffer to eventually execute their own code, this is not something that the program does because the C or C++ specification says so—the specification just says that doing out-of-bounds writes (or overwriting the vtable, or calling a function pointer that does not actually point to a function, or doing any of the other typical first steps of an exploit chain) is Undefined Behavior, and executing the attacker’s code is just how Undefined Behavior happens to play out in this particular case.</p>\n</blockquote>\n<p>I never made this connection on my own.\nI equate UB most often with things like data races between threads,\nwhere you can have unexpected update visibility without atomics or locks.\nOr maybe torn reads of shared memory that's not properly synchronized.\nBut this is a new way of looking at it that makes the stakes more clear,\nespecially if you're doing anything with pointers.</p>\n<p>Another connection I never made previously is that UB is relative to a very specific context.\nHere's another 
quote from the paper:</p>\n<blockquote>\n<p>The standard random number crate used across the Rust ecosystem performed an unaligned memory access. Interestingly, the programmers seemed to have been aware that alignment is a problem in this case: there were dedicated code paths for x86 and for other architectures. Other architectures used read_unaligned, but the x86 code path had a comment saying that x86 allows unaligned reads, so we do not need to use this (potentially slower) operation. Unfortunately, this is a misconception: even though x86 allows unaligned accesses, Rust does not, no matter the target architecture—and this can be relevant for optimizations.</p>\n</blockquote>\n<p>This is REALLY interesting to me!\nIt makes sense in retrospect, but it's not exactly obvious.\nLanguages are free to define their own semantics in addition to or independently of hardware.\nI suspect Rust's specification here is somehow related to its concept of allocations\n(which the paper goes into more detail about).</p>\n<p>It is obviously not &quot;undefined&quot; what the hardware will do when given a sequence of instructions.\nBut it <em>is</em> undefined in Rust, which controls how those instructions are generated.\nAnd here the Rust Reference is explicit in calling this UB.\n(NOTE: I don't actually know what the &quot;failure modes&quot; are here, but you can imagine they could be very bad\nsince it could enable the compiler to make a bad assumption that leads to a program correctness or memory safety vulnerability.)</p>\n<p>I actually encountered the same confusion re: what the CPU guarantees vs what Rust guarantees for unaligned reads in <a href=\"https://github.com/stadiamaps/valinor/blob/5e75b2b8267cee2a57d4f22fcc5605728e0cf76e/valhalla-graphtile/src/graph_tile.rs#L857\">one of my own projects</a>,\nas a previous version of this function didn't account for alignment.\nI addressed the issue by using the native zerocopy <a 
href=\"https://docs.rs/zerocopy/latest/zerocopy/byteorder/struct.U32.html\"><code>U32</code></a> type,\nwhich is something I'd have needed to do anyways to ensure correctness regardless of CPU endianness.\n(If you need to do something like this at a lower level for some reason, there's a <a href=\"https://doc.rust-lang.org/std/ptr/fn.read_unaligned.html\"><code>read_unaligned</code> function in <code>std::ptr</code></a>).</p>\n<p>TL;DR - UB is both a correctness and a security issue, so it's really bad!</p>\n<h1><a href=\"#using-miri-for-great-good\" aria-hidden=\"true\" class=\"anchor\" id=\"using-miri-for-great-good\"></a>Using Miri for great good</h1>\n<p>One of the reasons I write pretty much everything that I can in Rust is because\nit naturally results in more correct and maintainable software.\nThis is a result of the language guarantees of safe Rust,\nthe powerful type system,\nand the whole ecosystem of excellent tooling.\nIt's a real <a href=\"https://blog.codinghorror.com/falling-into-the-pit-of-success/\">pit of success</a> situation.</p>\n<p>While you can run a program under Miri as a one-shot test,\nthis isn't a practical approach to ensuring correctness long-term.\nMiri is a <em>complementary</em> tool to existing things that you should be doing already.\nAutomated testing is the most obvious one,\nbut fuzzing and other strategies may also be relevant for you.</p>\n<p>If you're already running automated tests in CI, adding Miri is easy.\nHere's an example of how I use it in GitHub actions:</p>\n<pre><code class=\"language-yaml\">steps:\n    - uses: actions/checkout@v4\n    - uses: taiki-e/install-action@nextest\n\n    - name: Build workspace\n      run: cargo build --verbose\n\n    - name: Run tests\n      run: cargo nextest run --no-fail-fast\n\n    - name: Run doc tests (not currently supported by nextest https://github.com/nextest-rs/nextest/issues/16)\n      run: cargo test --doc\n\n    - name: Install big-endian toolchain (s390x)\n      run: 
rustup target add s390x-unknown-linux-gnu\n\n    - name: Install s390x cross toolchain and QEMU (Ubuntu only)\n      run: sudo apt-get update &amp;&amp; sudo apt-get install -y gcc-s390x-linux-gnu g++-s390x-linux-gnu libc6-dev-s390x-cross qemu-user-static\n\n    - name: Run tests (big-endian s390x)\n      run: cargo nextest run --no-fail-fast --target s390x-unknown-linux-gnu\n\n    - name: Install Miri\n      run: rustup +nightly component add miri\n\n    - name: Run tests in Miri\n      run: cargo +nightly miri nextest run --no-fail-fast\n      env:\n        RUST_BACKTRACE: 1\n        MIRIFLAGS: -Zmiri-disable-isolation\n\n    - name: Run doc tests in Miri\n      run: cargo +nightly miri test --doc\n      env:\n        RUST_BACKTRACE: 1\n        MIRIFLAGS: -Zmiri-disable-isolation\n\n    - name: Install nightly big-endian toolchain (s390x)\n      run: rustup +nightly target add s390x-unknown-linux-gnu\n\n    - name: Run tests in Miri (big-endian s390x)\n      run: cargo +nightly miri nextest run --no-fail-fast --target s390x-unknown-linux-gnu\n      env:\n        RUST_BACKTRACE: 1\n        MIRIFLAGS: -Zmiri-disable-isolation\n</code></pre>\n<p>I know that's a bit longer than what you'll find in the README,\nbut I wanted to highlight my usage in a more complex codebase\nsince these examples are less common.\n(NOTE: I assume an Ubuntu runner here, since Linux has the best support for Miri right now.)\nSome things to highlight:</p>\n<ul>\n<li>I use <a href=\"https://nexte.st/\">nextest</a>, which is significantly faster for large suites. (NOTE: It <a href=\"https://github.com/nextest-rs/nextest/issues/16\">does not support doc tests</a> at the time of this writing).</li>\n<li>I pass some <code>MIRIFLAGS</code> to disable host isolation for my tests, since they require direct filesystem access. You may not need this for your project, but I do for mine.</li>\n<li>Partly because I can, and partly because big-endian CPUS do still exist, I do tests under two targets. 
Miri is capable of doing this with target flags, which is REALLY cool, and the <code>s390x-unknown-linux-gnu</code> is the &quot;big-endian target of choice&quot; from the Miri authors. This requires a few dependencies and flags.</li>\n<li>Note that cargo doc tests <a href=\"https://github.com/rust-lang/cargo/issues/6460\">do not support building for alternate targets</a>.</li>\n</ul>\n<p>Hopefully you learned something from this post.\nI'm pretty sure I wrote my first line of unsafe Rust less than a year ago\n(after using it professionally for over 6 years prior),\nso even if you don't need this today, file it away for later.\nAs I said at the start, I'm still not an expert,\nso if you spot any errors, please reach out to me on Mastodon!</p>\n",
      "summary": "",
      "date_published": "2026-01-07T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "rust",
        "software-reliability"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//2025-in-review.html",
      "url": "https://ianwwagner.com//2025-in-review.html",
      "title": "2025 in Review",
      "content_html": "<p>I have never done one of these kinds of public posts, but saw a few from friends so I thought it might be useful!</p>\n<p>This year I was simultaneously more focused on my technical craft than ever,\nbut also had more of a &quot;life&quot; than ever.\nI took more random days off to go chill with friends, go skiing, etc.,\nand had more time with family.</p>\n<p>It is probably also one of the darkest years in world history as a whole.\nThe worst humanitarian abuses in a century continue,\nencouraged and perpetrated by what is supposed to be the &quot;free world.&quot;\nBut enough ink has already been spilled on that and you don't need to hear it from me.\nAnd South Korean politics show that you DO have a voice.\nSo make it heard, and let's focus on the good stuff.</p>\n<h1><a href=\"#travel\" aria-hidden=\"true\" class=\"anchor\" id=\"travel\"></a>Travel</h1>\n<p>I also traveled more than any year since COVID.\nIn addition to my annual pilgrimage to <a href=\"https://latitude59.ee/\">Latitude59</a> in Tallinn,\nother highlights were going to Hong Kong for Rust Asia\nand London for Anjunadeep Open Air.</p>\n<p>Surprisingly, this was my first time to visit the UK,\nand I have to say London is one of the few other cities I could actually see myself living in.\nDespite its flaws, London had a charming atmosphere,\namazing public spaces, loads of greenery,\ngreat food and drink (I don't really get the hate... 
I thoroughly enjoyed all of my meals),\nand well-functioning public transportation.\nOverall it was a very &quot;livable&quot; city to me,\nand joins Tallinn and Seoul as one of the few places I'd really enjoy living.</p>\n<h1><a href=\"#music\" aria-hidden=\"true\" class=\"anchor\" id=\"music\"></a>Music</h1>\n<p>2025 was a great year for musical experiences.\nHere are a few of my highlights of the year (in no particular order)\nwhich get regularly stuck in my head:</p>\n<ul>\n<li>Kasablanca - Higher Resolution (Side B)</li>\n<li>Monolink - The Beauty of it All</li>\n<li><a href=\"https://www.youtube.com/watch?v=S5UNox0G3xY\">Der Bahn Song</a> (niche bit of parody that I found <em>hilarious</em>)</li>\n<li>Estiva - Little Love (Icarus Remix)</li>\n<li>Perfume - Nebula Romance: Part II</li>\n<li>James Grant pres. Movement Vol. 3 (Live from Mount Agung, Bali)</li>\n</ul>\n<p>Besides all the great albums and mixes,\nI enjoyed more live shows than I have in a very long time (probably since 2015 or so).\nThe club nights and live bands in Tallinn were as amazing as ever.\nI got extremely lucky with tickets to a sold-out Fred Again tour show just 15 mins from home.\nThat was probably the best live show I've ever seen; absolutely incredible production and musical talent!\nAnd Anjunadeep Open Air was great.</p>\n<p>2025 also saw me get back into <em>creating</em> music for fun.\nI hadn't made much time for this in the past decade,\nbut the time felt right.\nI bought myself an Ableton Push,\nand will probably upload something on SoundCloud at some point.\nOr not.\nI'm making music for me, for fun.\nI wish I could do house parties where I'm just jamming,\nbut that probably won't happen in a Korean apartment anytime soon.</p>\n<h1><a href=\"#community\" aria-hidden=\"true\" class=\"anchor\" id=\"community\"></a>Community</h1>\n<p>I initially used &quot;conferences&quot; as a section heading,\nbut it struck me that the reason I go to conferences,\nmeetups, coworking, and 
online forums is the same: community.</p>\n<p>As I do basically every year, I went to Latitude59 in May\nfor the community gatherings.\nIt was a great time, and I got an early look at how AI agents were being adopted.</p>\n<p>The other international conference I attended was Rust Asia in Hong Kong.\nWhat a cool and diverse group of people!\nIt was also great to be back in Hong Kong again for the first time in quite a few years.\nI really hope they do the conference again in 2026.</p>\n<p>I also got to attend two local conferences late in the year: FOSS for All, and FOSS4G Korea.\nNeither conference would have been on my radar if not for some friends being involved in organizing them.</p>\n<p>FOSS for All is also a new conference, and the first edition was a huge success.\nIt was far more international than I expected for a Korean conference,\nand a model for running a properly international, bilingual conference.\nI was somewhat surprised that I gave the <em>only</em> talk with a heavy focus on Rust.\nAnd I was pleasantly surprised to see how much of the Korean FOSS community is active on Mastodon.\nI think I tripled the number of Koreans I follow in an afternoon.</p>\n<p>It was also a surprisingly good value for my company as a sponsor.\nI had something like 20 serious conversations with people at our table,\nwhich was something I didn't really expect (the conference was maybe 200 attendees)!\nI'll definitely be back next year.</p>\n<p>FOSS4G Korea was also surprisingly great!\nI think I was the only non-Asian there; a few dedicated people flew in from Japan, which was awesome!\nAI was definitely a theme, and it wasn't the slop-generating 10x &quot;productivity&quot; sort of narrative.\nThe talks were overall even more interesting than I had expected; better than the last international FOSS4G I attended!\nThis was also the first time I fully participated in a conference conducted in another language.\nI'm setting a goal to give a talk in Korean next 
time.</p>\n<p>And speaking of international FOSS4G, it seems the next edition will also be close by\nin Hiroshima!\nI'm very excited to go, after several years of them being quite far away.\nGuess I need to start working on my talk proposals ;)</p>\n<p>Meetup-wise, I took over hosting the Seoul Rust meetup this year, and we did a lot more events than any year since COVID.\nWe've had some great talks, and even started a <a href=\"https://www.youtube.com/@RustSeoul\">YouTube channel</a>,\nwhere we'll post recordings of talks in the future (provided that the speaker is OK with it).\nI also gave two talks at the Seoul iOS Meetup: one on Ferrostar, and another on Apple's new Foundation models.\nThe iOS meetup also spawned a new, more general meetup called Dev Korea,\nwhich is growing really fast and has a great community on Discord!</p>\n<h1><a href=\"#reading\" aria-hidden=\"true\" class=\"anchor\" id=\"reading\"></a>Reading</h1>\n<p>I read a lot last year!\nI finally <a href=\"finishing-dragonball-in-korean.html\">finished reading Dragonball in Korean</a>.\nI had never read / watched the series before (because I grew up in relatively rural America without cable TV),\nbut it came highly recommended.\nYou can read about that in my other post.</p>\n<p>Here are some other things that I read + highly recommend:</p>\n<ul>\n<li>Sarah Wynn-Williams - Careless People</li>\n<li>John Carreyrou - Bad Blood</li>\n<li>David Graeber - Debt: The First 5000 Years</li>\n<li>Joseph Cox - Dark Wire</li>\n<li>Geoff White - The Lazarus Heist</li>\n<li>John Bloom - Eccentric Orbits</li>\n<li>Sarah Goodyear, Doug Gordon, and Aaron Naparstek - Life After Cars</li>\n<li>Karl Popper - The Open Society and its Enemies</li>\n</ul>\n<h1><a href=\"#work\" aria-hidden=\"true\" class=\"anchor\" id=\"work\"></a>Work</h1>\n<p>I probably talk about this enough elsewhere, but it was a really fun year work-wise, and we grew a lot too!</p>\n<p>Ferrostar started as one of those audacious ideas which I just 
couldn't resist trying.\nIt's now a healthy open-source project with weekly meetings of the core contributors,\nover 300 stars on GitHub, and 56(!!) forks.\nI think it's pretty safe to say that it's now regarded as the first choice\nunless you want to pay Google millions of dollars, or have an <em>extremely</em> simple use case.\nIt's being adopted by large companies in the space,\nwe're benefiting from contributions back upstream,\nand we're getting new business as a result.</p>\n<p>I'm pretty proud of this as I think it's an example of how open source can balance\ncommunity, collaboration, and sustainability.\nThose last two points are worth emphasizing.\nAll of the core contributors are working in a professional capacity,\nand find it valuable to work together on a shared foundation.</p>\n<p>The other big achievement that I haven't written as much about is rewriting our geocoder,\nmore or less from scratch, in a matter of months.\nYou've probably heard of the <a href=\"https://en.wikipedia.org/wiki/Second-system_effect\">second-system effect</a>.\nThe popular trope these days is for engineers to take something that works but is clunky / limited,\nand decide to rewrite it (maybe in Rust, like me 🤣), and never ship, or ship VERY late due to feature creep\nand wanting to get everything perfect.\nI'm definitely guilty of being a perfectionist, but I also believe you can get there gradually while shipping something valuable quickly.</p>\n<p>I approached this rewrite with a clear set of things that I wanted to change,\nand focused almost all of the time initially on getting the foundations right,\nwhich would let me replace the higher layers in a more &quot;agile&quot; way\n(in the sense of the normal use of the word, not a specific methodology).\nIt worked.\nWithin a few months, I had replaced the existing API layer with a new one,\nwhich was serving 99% of our traffic.\nWe didn't have any downtime, and I'm only aware of one accidental breaking change.\nThis is a 
result of careful testing, including snapshot testing at several levels (using the <code>insta</code> crate),\nand oracle testing (simple Python scripts in this case which hit the current and next gen APIs and flagged any differences).</p>\n<p>There will always be more improvements to make, but what's important is that we shipped,\nand we have a solid foundation to build from here.\nAnd not just that, we also have a v2 API with a bunch of improvements.\nAnd since the new API system is serving all the traffic,\nwe even get to backport a lot of the improvements to v1!\nIn fact, we have zero plans of deprecating our v1 API, since the internals are shared,\nand we can continue improving it within the limits of that API contract.\nThis is an engineering achievement I'm really proud of.</p>\n<h1><a href=\"#the-year-ahead\" aria-hidden=\"true\" class=\"anchor\" id=\"the-year-ahead\"></a>The Year Ahead</h1>\n<p>I don't do New Year's resolutions per se,\nbut I expect to work at a slightly less crazy pace,\nand make more time for side projects like music and non-work-related tech.\nI've also decided on my next Korean reading series: Neon Genesis Evangelion.\nI'm currently on volume 5, and expect to finish that this year.</p>\n",
      "summary": "",
      "date_published": "2026-01-03T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "reflections"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//using-tar-with-your-favorite-compression.html",
      "url": "https://ianwwagner.com//using-tar-with-your-favorite-compression.html",
      "title": "Using tar with Your Favorite Compression",
      "content_html": "<p>Here's a fun one!\nYou may already know that tarball is a pure archive format,\nand that any compression is applied to the whole archive as a unit.\nThat is to say that compression is not actually applied at the <em>file</em> level,\nbut to the entire archive.</p>\n<p>This is a trade-off the designers made to limit complexity,\nand as a side-effect, is the reason why you can't randomly access parts of a compressed tarball.</p>\n<p>What you may not know is that the <code>tar</code> utility has built-in support for a few formats!\nGZIP is probably the most commonly used for historical reasons,\nbut <code>zstd</code> and <code>lz4</code> are built-in options on my Mac.\nThis is probably system-dependent, so check your local manpages.</p>\n<p>Here's an example of compressing and decompressing with <code>zstd</code>:</p>\n<pre><code class=\"language-shell\">tar --zstd -cf directory.tar.zst directory/\ntar --zstd -xf directory.tar.zst\n</code></pre>\n<p>You can also use this with <em>any</em> (de)compression program that operates on stdin and stdout!</p>\n<pre><code class=\"language-shell\">tar --use-compress-program zstd -cf directory.tar.zst directory/\n</code></pre>\n<p>Pretty cool, huh?\nIt's no different that using pipes at the end of the day,\nbut it does simplify the invocation a bit in my opinion.</p>\n<p>After I initially published this article,\n<code>@cartocalypse@norden.social</code> noted that some versions of tar include the\n<code>-a</code>/<code>--auto-compress</code> option which will automatically determine format and compression based on the suffix!\nCheck your manpages for details; it appears to work on FreeBSD, macOS (which inherits the FreeBSD implementation), and GNU tar.</p>\n",
      "summary": "",
      "date_published": "2025-12-14T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "shell",
        "compression",
        "tar"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//delightfully-simple-pipelines-with-nushell.html",
      "url": "https://ianwwagner.com//delightfully-simple-pipelines-with-nushell.html",
      "title": "Delightfully Simple Pipelines with Nushell",
      "content_html": "<p>I've been using <a href=\"https://www.nushell.sh/\">nushell</a> as my daily driver for about six months now,\nand wanted to show a few simple examples of why I'm enjoying it so much.\nI think it's a breath of fresh air compared to most shells.</p>\n<h1><a href=\"#why-a-new-shell\" aria-hidden=\"true\" class=\"anchor\" id=\"why-a-new-shell\"></a>Why a new Shell?</h1>\n<p>In case you've never heard of it before, nushell is a, well, new shell ;)\n<code>bash</code> has been the dominant shell for as long as I can remember,\nthough <code>zsh</code> have their fair share of devotees.\n<code>fish</code> is the only recent example I can think of as a &quot;challenger&quot; shell.\n<code>fish</code> gained enough traction that it's supported by tooling such as Python <code>virtualenv</code>\n(which only has integrations out of the box for a handful of shells).\nI think <code>fish</code> is popular because it had some slightly saner defaults out of the box,\nwas easier to &quot;customize&quot; with flashy prompts (which can make your shell SUPER slow to init),\nand had a saner scripting language than <code>bash</code>.\nBut it still retained a lot of the historical baggage from POSIX shells.</p>\n<p>Nushell challenges two common assumptions about shells\nand asks &quot;what if things were different?&quot;</p>\n<ol>\n<li>POSIX compliance is a non-goal.</li>\n<li>Many standard tools from GNU coreutils/base system (e.g. 
<code>ls</code> and <code>du</code>) are replaced by builtins.</li>\n<li>All nushell &quot;native&quot; utilities produce and consume <strong>structured data</strong> rather than text by default.</li>\n</ol>\n<p>By dropping the goal of POSIX compliance,\nnushell frees itself from decades of baggage.\nThis means you get a scripting language that feels a lot more like Rust.\nYou'll actually get errors by default when you try to do something stupid,\nunlike most shells which will happily proceed,\nusually doing something even more stupid.\nMaybe treating undefined variables as empty strings made sense in the 1970s,\nbut that's almost never helpful.</p>\n<p>nushell also takes a relatively unique approach to utilities.\nWhen you type something like <code>ls</code> or <code>ps</code> in nushell,\nthis is handled by a shell builtin!\nIt's just Rust code baked into the shell rather than calling out to GNU coreutils\nor whatever your base system includes.\nThis means that whether you type <code>ps</code> on FreeBSD, Debian, or macOS,\nyou'll get the same behavior!</p>\n<p>I can already hear some readers thinking &quot;doesn't this just massively bloat the shell?&quot;\nNo, not really.\nThe code for these is far less than that of the typical GNU utility,\nbecause nushell actually (IMO) embraces UNIX philosophy even better than the original utilities.\nThey are all extremely minimal and work with other builtins.\nFor example, there are no sorting flags for <code>ls</code>,\nand no format/unit flags for <code>du</code>.</p>\n<p>The reason that nushell <em>can</em> take this approach is because they challenge the notion that\n&quot;text is the universal API.&quot;\nYou <em>can't</em> meaningfully manipulate text without lots of lossy heuristics.\nBut you <em>can</em> do this for structured data!\nI admit I'm a bit of a chaos monkey, so I love to see a project taking a rare new approach\nin a space where nothing has fundamentally changed since the 1970s.</p>\n<p>Okay, enough about 
philosophy... here are a few examples of some shell pipelines I found delightful.</p>\n<h1><a href=\"#elastic-snapshot-status\" aria-hidden=\"true\" class=\"anchor\" id=\"elastic-snapshot-status\"></a>Elastic snapshot status</h1>\n<p>First up: I do a lot of work with Elasticsearch during <code>$DAYJOB</code>.\nOne workflow I have to do fairly often is spin up a cluster and restore from a snapshot.\nThe Elasticsearch API is great... for programs.\nBut I have a hard time grokking hundreds of lines of JSON.\nHere's an example of a pipeline I built which culls the JSON response down to just the section I care about.</p>\n<pre><code class=\"language-nu\">http get &quot;http://localhost:9200/myindex/_recovery&quot;\n  | get restored-indexname\n  | get shards\n  | get index\n  | get size\n</code></pre>\n<p>In case it's not obvious, the <code>http</code> command makes HTTP requests.\nThis is another nushell builtin that is an excellent alternative to <code>curl</code>.\nIt's not as feature-rich (<code>curl</code> has a few decades of head start),\nbut it brings something new to the table: it understands from the response that the content is JSON,\nand converts it into structured data!\nEvery command in this pipeline is a nushell builtin.\nAnd it'll work on <em>any</em> OS that nushell supports.\nEven Windows!\nThat's wild!</p>\n<p>Pro tip: you can press option+enter to add a new line when typing in the shell.</p>\n<h1><a href=\"#disk-usage-in-bytes\" aria-hidden=\"true\" class=\"anchor\" id=\"disk-usage-in-bytes\"></a>Disk usage in bytes</h1>\n<p>Here's an example I hinted at earlier.\nIf you type <code>help du</code> (to get built-in docs),\nyou won't find any flags for changing the units.\nBut you can do it using formatters like so:</p>\n<pre><code class=\"language-nu\">du path/to/bigfile.bin | format filesize B apparent\n</code></pre>\n<p>The <code>du</code> command <em>always</em> shows human-readable units by default. 
Which I very much appreciate!\nAnd did you notice <code>apparent</code> at the end there?\nWell, the version of <code>du</code> you'd find with a typical Linux distro doesn't <em>exactly</em> lie to you,\nbut it withholds some very important information.\nThe physical size occupied on disk is not necessarily the same as how large the file\n(in an abstract platonic sense) <em>actually</em> is.</p>\n<p>There are a bunch of reasons for this, but the most impactful one is compressed filesystems.\nIf I ask Linux <code>du</code> how large a file is in an OpenZFS dataset,\nit will report the physical size by default, which may be a few hundred megabytes\nwhen the file is really multiple gigabytes.\nNot <em>necessarily</em> helpful.</p>\n<p>Anyways, the nushell builtin always gives you columns for both physical and apparent.\nSo you can't ignore the fact that these sizes are often different.\nI like that!</p>\n<h1><a href=\"#some-other-helpful-bits-for-switching\" aria-hidden=\"true\" class=\"anchor\" id=\"some-other-helpful-bits-for-switching\"></a>Some other helpful bits for switching</h1>\n<p>If you want to give nushell a try,\nthey have some great documentation.\nRead the basics, but also check out their specific pages on, e.g.\n<a href=\"https://www.nushell.sh/book/coming_from_bash.html\">coming from bash</a>.</p>\n<p>Finally, here are two more things that tripped me up at first.</p>\n<ul>\n<li>If you want to get the PID of your process, use <code>$nu.pid</code> instead of <code>$$</code>.</li>\n<li>To access environment <em>variables</em>, you need to be explicit and go through <code>$env</code>. On the plus side, you can now explicitly differentiate regular variables from environment variables.</li>\n</ul>\n",
      "summary": "",
      "date_published": "2025-12-13T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "shell"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//faster-ssh-file-transfers-with-rsync.html",
      "url": "https://ianwwagner.com//faster-ssh-file-transfers-with-rsync.html",
      "title": "Faster SSH File Transfers with rsync",
      "content_html": "<p>If you're a developer or sysadmin, there's a pretty good chance you've had to transfer files back and forth.\nBack in the old days, you may have used the <code>ftp</code> utility or something similar\n(I think my first one was probably CuteFTP).\nThen you probably thought better of doing things in plaintext,\nand switched to SFTP/SCP, which operate over an SSH connection.</p>\n<p>My quick post today is not exactly a bombshell of new information,\nbut these tools are not the fastest way to transfer.\nThere's often not a huge difference if you're transferring between machines in the same datacenter,\nbut you can do many times better then transferring from home or the office.</p>\n<p>Of course I'm talking about <code>rsync</code>, which has a lot of features like compression, partial transfer resume,\nchecksumming so just the deltas are sent in a large file, and more.\nI don't know why, but it's rarely in my consciousness, and always strikes me as a bit quirky.\nYou need quite a few flags for what I would consider to be reasonable defaults.\nBut if you remember those (or use shel history, like me),\nit can save you a ton of time.</p>\n<p>In fact, this morning, using rsync saved me more time than it took to write this blog post.\nI was transferring a ~40GB file from a server halfway around the world,\nbut I only had to transfer bytes equivalent to 20% of the total.</p>\n<p>Here's a look at the <code>rsync</code> command I often use for pulling files from a remote server\n(I usually do this to download a large build artifact without going through a cloud storage intermediary,\nwhich is both slower and more eexpensive):</p>\n<pre><code class=\"language-shell\">rsync -Pzhv example.com:/remote/path/to/big/file ~/file\n</code></pre>\n<p>It's not really that bad compared to an <code>scp</code> invocation, but those flags make all the difference.\nHere's what they do:</p>\n<ul>\n<li><code>-P</code> - Keeps partially transferred files (in case of 
interruption) and shows progress during the transfer.</li>\n<li><code>-z</code> - Compresses data on the source server before sending to the destination. This probably isn't a great idea for an intra-datacenter transfer (it just wastes CPU), but it's perfect for long distance transfers over &quot;slower&quot; links (where I'd say slower is something less than like 100Mbps between you and the server... the last part is important because you may have a gigabit link, but the peering arrangements or other issues conspire to limit effective transfer rates to something much lower).</li>\n<li><code>-h</code> - Makes the output more &quot;human-readable.&quot; This shows prefixes like K, M, or G. You can add it twice if you'd like 1024-based values instead of 1000-based. To be honest, I don't know why this isn't the default.</li>\n<li><code>-v</code> - Verbose mode. By default, <code>rsync</code> is silent. This is another behavior that I find strange in the present, but probably made more sense in the era of teletype terminals and very slow links. It's not really that verbose; it just tells you what files are being transferred and a brief summary at the end. 
You actually have to give <em>two</em> v's (<code>-vv</code>) for <code>rsync</code> to tell you which files it's skipping!</li>\n</ul>\n<p>Hope this helps speed up your next remote file transfer.\nIf there are any options you like which I may have missed, hit me up on Mastodon with a suggestion!</p>\n<p>Bonus pro tip: I had (until I recently switched to nushell) over a decade of accumulated shell history,\nand sometimes it's hard to keep the options straight.\nNaturally my history has a few different variations on a command like <code>rsync</code>.\nRather than searching through manpages with the equivalent of <code>grep</code>,\nI usually go to <a href=\"https://explainshell.com\">explainshell.com</a>.\nIt seems to be a frontend to the manpages that understands the various sections,\nproviding a much quicker explanation of what your switches do!</p>\n",
      "summary": "",
      "date_published": "2025-12-08T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "shell"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//my-current-favorite-headphones-and-amp-for-listening.html",
      "url": "https://ianwwagner.com//my-current-favorite-headphones-and-amp-for-listening.html",
      "title": "My Current Favorite Headphones (and Amp) for Listening",
      "content_html": "<p>It occurred to me that things which work well are often under-represented online.\nI absolutely love my listening headphones, so I thought I'd write a quick post about them.</p>\n<p>To say I love music is an understatement.\nMy iTunes... er... Apple Music library is has 61.5 <em>days</em> of it.\nAnd I am reasonably picky.\nAnd unlike my Steam library, I have listened to all of it!\nMany times over.</p>\n<p>I also tend to listen a lot in situations where headphones are the best output device.\nEither nobody else is awake (I'm listening to Arthur Rbenstein's performance of Chopin's Nocturnes right now, if that gives you a clue)\nor others are very much around who may not appreciate classic trance blasting at 100dB.\nHeadphones are a really fantastic way to listen to music,\nand if you care about quality, they are some of the best bang for your buck.\nYou can get some AMAZING quality cans for a fraction of what you'd pay for a comparable speaker setup.</p>\n<p>But headphones are also hard to get right, and <em>highly personal</em>.\nWhat follows is my current favorite, and the journey to here.\nI'll try to note cases where I'm aware of my own subjectivity.\nbut I probably won't get everything.</p>\n<h1><a href=\"#what-makes-a-good-headphone-for-me\" aria-hidden=\"true\" class=\"anchor\" id=\"what-makes-a-good-headphone-for-me\"></a>What Makes a Good Headphone for Me</h1>\n<p>I've worn headphones for a significant chunk of my computer time (an inordinate portion of my life) since 2011 or so.\nThat's about 15 years at the time of this writing.\nHere are some things that are important for me (or not important), which may help frame my opinion:</p>\n<ul>\n<li><strong>Excellent sound quality</strong> is non-negotiable. No AirPods, sorry.</li>\n<li><strong>Relatively natural frequency response</strong> is also important to me. 
I really dislike units which skew the response (usually to the bass side for popular brands).</li>\n<li><strong>I strongly dislike closed back designs</strong> since they kinda kill the sound stage depth and make me feel like I have ear muffs on dampening everything around me.\nFor reference, I tend to work in more or less quiet environments, and that's where I do much of my listening.\nI also strongly dislike <em>active</em> noise cancelling as it makes me feel like I've got a cold.</li>\n<li>They need to be light enough to wear for ~2 hours.</li>\n</ul>\n<h1><a href=\"#previous-cans\" aria-hidden=\"true\" class=\"anchor\" id=\"previous-cans\"></a>Previous Cans</h1>\n<p>Over the years, I've really liked Sennheiser.\nI had several mid to high end pairs.\nI rather liked my HD558s from years past.\nThey were mostly neutral response and open back.\nBut those were a long time ago (I bought them in 2013).\nAnd current Sennheiser isn't what they once were.</p>\n<p>More recently, I bought the Grado 325x.\nThey were <em>crisp</em> on the high end.\nBut after about 8 months, I couldn't stand them any more.\nThey were far too unforgiving of all but a perfectly mastered track.\nThe lower frequencies were off, especially for electronic music (but sounded great for rock).\nThey were heavy.\nThey were uncomfortable (I hated the headband and the weird angle of the earpieces,\nwhich look cool as a piece of steampunk concept art but are highly impractical IMO).</p>\n<p>To make matters worse, the cable is non-detachable and soldered all the way through.\nIt also weighs a ton and is super inflexible.\nI hear there are a bunch of people that do cable mods,\nbut by the time my pair developed a loose connection (probably due to the above)\nthat made them practically unlistenable, I was ready for something else.</p>\n<h1><a href=\"#current-setup\" aria-hidden=\"true\" class=\"anchor\" id=\"current-setup\"></a>Current setup</h1>\n<p>Moderately annoyed, I decided to buy 
some new headphones almost a year ago.\nI didn't find much help in reviews honestly, and you probably shouldn't expect to here either.\nBut I ended up going for some Beyerdynamic DT 900 Pro X's.\nI definitely over-indexed on the detachable cable by the way\n(I haven't had a single issue with the cabling, nor have I bothered detaching it, but it's the principle of the thing).\nThe sound quality on these is excellent, and they have a better sound stage than my old Sennheisers.\nI liked some things about the Grados better, but these are way more versatile.\nThey are also comfortable to wear for 2-3 hours:\nthey are lighter, and the cushioning is amazing.\nEspecially with my glasses.\nMy only complaint is they are a bit tight fitting for my large head,\nand I can actually hear some squeaks from my glasses frame when moving around sometimes.</p>\n<p>Headphones also need a solid amp (+ DAC if you're using it like most people).\nI currently use a Schiit Magni.\nA ridiculously under-priced amp+DAC.\nI really don't have much to say except that it's an excellent little unit that sounds great.\nI haven't had any issues in the 2 years I've owned it,\nbeyond the decade-and-change-old issues with certain music players not filling buffers fast enough\nduring a cold start of a new album.</p>\n",
      "summary": "",
      "date_published": "2025-11-25T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "gear"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//elecom-deft-pro-two-months-in.html",
      "url": "https://ianwwagner.com//elecom-deft-pro-two-months-in.html",
      "title": "Elecom Deft Pro - Two Months In",
      "content_html": "<p>It's not quite two months in, but I said I'd give an update on the Elecom Deft Pro after some use, so here I am!</p>\n<h1><a href=\"#bad-stuff\" aria-hidden=\"true\" class=\"anchor\" id=\"bad-stuff\"></a>Bad Stuff</h1>\n<p>Bad news first...</p>\n<h2><a href=\"#initial-adjustment-period\" aria-hidden=\"true\" class=\"anchor\" id=\"initial-adjustment-period\"></a>Initial Adjustment Period</h2>\n<p>I'm not gonna lie, the initial adjustent period on this was rougher than I expected.\nThis trackballl was extremely uncomfortable for the first week,\nand not particularly comfortable for the following 2 or 3 weeks either.\nExpect to take some time getting used to this.</p>\n<h2><a href=\"#form-factor\" aria-hidden=\"true\" class=\"anchor\" id=\"form-factor\"></a>Form Factor</h2>\n<p>My biggest complaint is probably that it's too small and too light.\nIt should be 2-3 times heavier in my opinion to stay put.\nAnd I guess I should have guessed it from the name, but it's really small.\nAlmost too small for my hands.</p>\n<h2><a href=\"#ergonomics\" aria-hidden=\"true\" class=\"anchor\" id=\"ergonomics\"></a>Ergonomics</h2>\n<p>I'm not sure what it is with Elecom, but a lot of the angles make no sense to me.\n&quot;Straight&quot; as measured from the USB port puts the angle for the main (&quot;left&quot;) click button off by like 5-10 degrees.\nSo I always want to have it off center, but that's challenging on my admittedly small keyboard tray.\nI'm also not really sure whose hand they designed whatever is further right than the right (alt-right?) 
button for.\nI never planned on using this or the middle click button anyways, but they are just weirdly positioned.\nThe forward and back buttons are also weirdly positioned.</p>\n<p>These were also weirdly positioned on the HUGE, but the Deft Pro seems worse.\nIt's exacerbated by the fact that the thing weighs basically nothing,\nwhich is not a positive thing when you're trying to poke it at weird angles that will definitely move it on your desk.\nAnd I'm not sure what the other weird button and switch closest to the hand rest (on a level with the left click button) are for,\nbut they are in such weird positions I will NEVER use them.</p>\n<h2><a href=\"#scrolling\" aria-hidden=\"true\" class=\"anchor\" id=\"scrolling\"></a>Scrolling</h2>\n<p>Oh my, this was a huge (lol) adjustment...\nThe scroll wheel is SO CRISP!\nI also kinda hate it, since even at max speed it's way too slow.\nI literally reach for my MacBook trackpad any time I want to scroll fast through a large document.\nThe scroll wheel <em>is</em> precise, but it's also kinda terrible IMO.\nAnd unlike the HUGE where you could sort of scroll horizontally by awkwardly nudging the wheel up or down,\nthe Deft is so tiny and light, and the wheel is so stiff, that you can just forget even the most awkward horizontal scrolling.\nUse your laptop trackpad for this.</p>\n<h1><a href=\"#good-stuff\" aria-hidden=\"true\" class=\"anchor\" id=\"good-stuff\"></a>Good Stuff</h1>\n<p>I'm sure I sound VERY negative at this point.\nAnd I really do have a lot of bad to say about this trackball.\nBut I'm still using it and actually do have some good things to say.</p>\n<h2><a href=\"#ball--mouse-behavior\" aria-hidden=\"true\" class=\"anchor\" id=\"ball--mouse-behavior\"></a>Ball / mouse behavior</h2>\n<p>Honestly I don't have any complaints about this.\nI saw some very polarizing opinions about the ball not rolling smoothly but I have had relatively few issues.\nIt's not sticking or anything for me and feels pretty good 
overall.\nMy precision feels slightly better than with the HUGE.</p>\n<h2><a href=\"#buttons-actually-working\" aria-hidden=\"true\" class=\"anchor\" id=\"buttons-actually-working\"></a>Buttons actually working!</h2>\n<p>The reason I replaced my HUGE was the unreliable buttons.\nMaybe my HUGE was a lemon, but the Deft Pro has been great.\nNo missed / duplicate click events.\nIt really just works.</p>\n<h1><a href=\"#overall\" aria-hidden=\"true\" class=\"anchor\" id=\"overall\"></a>Overall</h1>\n<p>Well, I'm still using it ;)\nSeveral things annoy me, but it's better than <em>my</em> HUGE.\nThe other trackball I considered was a Kensington.\nI might still buy one, but only after I either:</p>\n<ol>\n<li>Replace my Kinesis Advantage2 with a smaller keyboard with fully disconnected halves\n(I'm eyeing the Advantage360 or one of the Cosmos Keyboards designs), or...</li>\n<li>Get a bigger keyboard drawer.</li>\n</ol>\n<p>Overall it's not a bad trackball, and you might even like it if you're primarily working mobile.\nBut I use this at my large desk and I have a few complaints.\nDespite my ergonomic complaints, it's still better than most mice,\nand your experience will <em>probably</em> be better than mine since my hands are large enough to make serious piano players envious.</p>\n",
      "summary": "",
      "date_published": "2025-11-19T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "gear"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//korean-music-beyond-k-pop-demon-hunters.html",
      "url": "https://ianwwagner.com//korean-music-beyond-k-pop-demon-hunters.html",
      "title": "Korean Music Beyond K-Pop Demon Hunters",
      "content_html": "<p>Pretty much overnight, it seems that a movie I had never even heard of became the most-watched movie ever on Netflix.\nPeople everywhere are now singing the songs from K-Pop Demon Hunters.\nIt originally seemed more popular outside Korea\n(the first dozen or so people I heard about it from were not Korean, nor were they living here).\nBut now it's all over the promotional marketing locally too.</p>\n<p>I absolutely <em>love</em> living here,\nand am immersed in many aspects of the culture,\nbut have never been a big K-pop fan.\nIn fact, I don't actually like <em>most</em> pop music anywhere in the world ;)</p>\n<p>If you, like me, have different musical tastes from the mainstream,\nthen this post is for you!\nIt's a collection of some of my favorite Korean music\nthat you've probably never heard of.</p>\n<p>There is no particular order to this post, so don't take this as some sort of ranking.\nIt's just a list of music and artists I like which can broaden your horizons.</p>\n<h1><a href=\"#candy-and-lollipop\" aria-hidden=\"true\" class=\"anchor\" id=\"candy-and-lollipop\"></a>Candy and Lollipop</h1>\n<p>Okay, let's kick off with some light... hip-hop?\nI don't really even know how to categorize this pair of songs,\nseparated by almost a decade but having a lot of similarities in both style and name.\nBefore the current American-influenced styles of rap and hip-hop started to dominate,\nthere was this more fun and weird side.</p>\n<p>First up is boy band H.O.T. with <a href=\"https://www.youtube.com/watch?v=3NUaXU1d-NY\">Candy</a>.\nThis was 1996, so it's probably the oldest on my list.\nIt's just light and fun.\nAnd very 90s... 
lighter side of hip hop.</p>\n<p>Over the next decade, hip-hop and rap grew even more popular.\nThis next song, Lollipop, has one of the more bizarre music videos,\nand is a collaboration with more people than you'll be able to keep track of.\nHere's the <a href=\"https://www.youtube.com/watch?v=zIRW_elc-rY\">MV</a>.</p>\n<h1><a href=\"#소찬휘---tears\" aria-hidden=\"true\" class=\"anchor\" id=\"소찬휘---tears\"></a>소찬휘 - Tears</h1>\n<p>This song was released in the year 2000.\nThis style of dance+rock fusion was actually quite popular in Korea at the time,\nbut you don't hear very many producers putting out stuff like this today.\nThis song is well-known and loved by many in their late 30s or older,\nand is one of the most popular 노래방 (singing room = Karaoke) songs from that era,\neven today.\nThe song is full of passion and energy,\nand is very technically challenging.\nHere's a recent <a href=\"https://youtu.be/3UZm-Mm7WDs?si=FkgXCFuvA_Mil4k3\">live performance</a>.</p>\n<h1><a href=\"#윤종신---좋니\" aria-hidden=\"true\" class=\"anchor\" id=\"윤종신---좋니\"></a>윤종신 - 좋니</h1>\n<p>The gist of the song is a post-breakup story.\nIt's more of a power ballad, which is NOT my usual style of music.\nBut this song is just SO good, it has to be on my list.\nWhen it was released in 2017, it did extremely well,\nand got loads of radio play, despite not fitting the typical pop mold of the time.\nHere's a <a href=\"https://www.youtube.com/watch?v=jy_UiIQn_d0&amp;list=RDjy_UiIQn_d0&amp;start_radio=1\">live performance</a>\nthat also has some English subtitles.</p>\n<h1><a href=\"#frontier-leaders---new-stage\" aria-hidden=\"true\" class=\"anchor\" id=\"frontier-leaders---new-stage\"></a>Frontier Leaders - New Stage</h1>\n<p>Okay, this song probably does qualify as K-pop...\nThere is <em>some</em> good stuff in there ;)</p>\n<p>This is also the newest release on the list.\nIt grabbed my attention immediately\nbecause the musical structure, choice of scales, and rhythms\naren't typical in 
current K-pop.</p>\n<p>Here's the <a href=\"https://youtu.be/KpzAm3QFB2A?si=vEALVclevY1_koew\">music video</a>.\nIt's definitely an idol group,\nbut the MV is artistically solid and generally quite creative\n(it alludes to both streamer culture and Korean-style photo booths)!</p>\n<h1><a href=\"#moon-kyoo\" aria-hidden=\"true\" class=\"anchor\" id=\"moon-kyoo\"></a>Moon Kyoo</h1>\n<p>I have a hard time recommending a <em>specific</em> track,\nbecause this guy is a pretty wide ranging artist,\nand much of what he does is live.\nHe does everything from <a href=\"https://www.youtube.com/watch?v=WUOUcT2aAtU\">deep house on one of my favorite labels</a>\nto <a href=\"https://youtu.be/poT_M6bVW7c?si=beq9pGK4x4vHb_2m\">ambient sets like this with a Eurorack modular</a>.</p>\n<h1><a href=\"#lazenca-save-us\" aria-hidden=\"true\" class=\"anchor\" id=\"lazenca-save-us\"></a>Lazenca, Save Us</h1>\n<p>This song is getting REALLY out of the mold now...\nWe're in metal territory.</p>\n<p>The original was released back in 1997 by the group N.E.X.T,\nbut a newer cover, performed in 2016 on the Korean TV show &quot;The King of Mask Singer&quot;, made the song much more popular.\n(The concept of the show is that some famous singer puts on a mask, sings a song,\nand then celebrity judges have to guess who's behind the mask at the end).</p>\n<p>On one of these episodes, 하현우 does an absolutely <em>stellar</em> cover,\nwhich I liked even better than the original.\nHis vocal style is really on point for the song,\nand the high notes are absolutely killer!</p>\n<p>You can watch a <a href=\"https://youtu.be/A5qx1_9yMZo?si=eR1v1z3ezByt_Qcx\">video of the performance on YouTube</a>.\nIt's truly something with a full orchestra, drums, guitar... 
the whole bit.</p>\n<h1><a href=\"#cherryfilter---오리-날다\" aria-hidden=\"true\" class=\"anchor\" id=\"cherryfilter---오리-날다\"></a>cherryfilter - 오리 날다</h1>\n<p>Last but definitely not least,\nthis song is fun, upbeat, and a bit punk-y in style.\nIt was released back in 2003 and bears a lot of resemblance to J-Rock at the time.</p>\n<p>The song is about a duck who dreams he can fly.\nHis mom scolds him and says ducks can't fly,\nbut that doesn't stop him from dreaming.</p>\n<p>It's such a beautiful song!\nAnd I love the metaphor about chasing your dreams.\nSomething that I really hope can continue to inspire the younger generation.</p>\n<p>And the vocal style of the lead singer is pretty unique.\nShe's happy and has piercing high notes, but also has that rock edge at several points.\nAnd in <a href=\"https://www.youtube.com/watch?v=1tR17y0lE-o&amp;list=RD1tR17y0lE-o&amp;start_radio=1\">live performances</a>\nshe does this cute little hand motion throughout,\nmimicking a wing flapping motion, which I thought was cool and endearing!</p>\n<p>I had this song stuck in my head for <em>weeks</em> after I discovered it.</p>\n",
      "summary": "",
      "date_published": "2025-11-09T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "music",
        "culture",
        "korea"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//const-assertions.html",
      "url": "https://ianwwagner.com//const-assertions.html",
      "title": "Const Assertions",
      "content_html": "<p>I'm currently working on a <a href=\"https://github.com/stadiamaps/valinor\">project</a> which involves a lot of lower level\ndata structures.\nBy lower level I mean things like layout and bit positions, and exact sizes being important.\nAs such, I have a number of pedantic lints enabled.</p>\n<p>One of the lints I use is <a href=\"https://rust-lang.github.io/rust-clippy/master/index.html#cast_precision_loss\"><code>cast_precision_loss</code></a>.\nFor example, casting from <code>usize</code> to <code>f32</code> using the <code>as</code> keyword is not guaranteed to be exact,\nsince <code>f32</code> can only precisely represent integrals up to 23 bits of precision (due to how floating point is represented).\nAbove this you can have precision loss.</p>\n<p>This lint is pedantic because it can generate false positives where you <em>know</em> the input can't ever exceed some threshold.\nBut wouldn't it be nice if we could go from &quot;knowing&quot; we can safely disable a lint to actually <em>proving</em> it?</p>\n<p>The first thing that came to mind was runtime assertions, but this is kind of ugly.\nIt requires that we actually exercise the code at runtime, for one.\nWe <em>should</em> be able to cover this in unit tests, but even if we do that,\nan assertion isn't as good as a compile time guarantee.</p>\n<h1><a href=\"#const\" aria-hidden=\"true\" class=\"anchor\" id=\"const\"></a><code>const</code></h1>\n<p>One thing I didn't mention, and the reason I &quot;know&quot; that the lint would be fine is that I'm using a <code>const</code> declaration.\nHere's a look at what that's like:</p>\n<pre><code class=\"language-rust\">pub const BUCKET_SIZE_MINUTES: u32 = 5;\npub const BUCKETS_PER_WEEK: usize = (7 * 24 * 60) as usize / BUCKET_SIZE_MINUTES as usize;\n</code></pre>\n<p>This isn't the same as a <code>static</code> or a <code>let</code> binding.\n<code>const</code> expressions are actually evaluated at compile time.\n(Well, most of the 
time... there's a funny edge case where <code>const</code> blocks which can <em>never</em> be executed at runtime\n<a href=\"https://doc.rust-lang.org/reference/expressions/block-expr.html#const-blocks\">are not guaranteed to be evaluated</a>.)</p>\n<p>You can't do everything in <code>const</code> contexts, but you can do quite a lot, including many kinds of math.\nNot all math, though; some operations like square root and trigonometry are not yet usable in <code>const</code> contexts\nsince they are not reproducible across architectures (and sometimes even on the same machine, it seems).</p>\n<h1><a href=\"#assert-in-a-const-block\" aria-hidden=\"true\" class=\"anchor\" id=\"assert-in-a-const-block\"></a><code>assert!</code> in a <code>const</code> block</h1>\n<p>And now for the cool trick!\nI want to do a division here, and to do so, I need to ensure the types match.\nThis involves casting a <code>usize</code> to <code>f32</code>,\nwhich can cause precision loss as noted above.</p>\n<p>But since <code>BUCKETS_PER_WEEK</code> is a constant value,\nwe can actually do an assertion against it <em>in our <code>const</code> context</em>.\nThis lets us safely silence the lint, while ensuring we'll get a compile-time error if this ever changes!\nThis has no runtime overhead.</p>\n<pre><code class=\"language-rust\">#[allow(clippy::cast_precision_loss, reason = &quot;BUCKETS_PER_WEEK always fits in 24 bits&quot;)]\nconst PI_BUCKET_CONST: f32 = {\n    // Asserts the invariant; panics at compile time if violated\n    assert!(BUCKETS_PER_WEEK &lt; 2usize.pow(24));\n    // Computes the value\n    std::f32::consts::PI / BUCKETS_PER_WEEK as f32\n};\n</code></pre>\n<p>This is all possible in stable Rust at the time of this writing (tested on 1.89).\nI saw some older crates out there which appeared to do this,\nbut as far as I can tell, they are no longer necessary.</p>\n<p>Here's a <a 
href=\"https://play.rust-lang.org/?version=stable&amp;mode=debug&amp;edition=2024&amp;gist=dd294501c156f8d67f72a21f7dea27c4\">Rust Playground</a>\npreloaded with the sample code\nwhere you can verify that changing <code>BUCKETS_PER_WEEK</code> to a disallowed value causes a compile-time error.</p>\n",
      "summary": "",
      "date_published": "2025-10-07T00:00:00-00:00",
      "image": "",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "rust"
      ],
      "language": "en"
    },
    {
      "id": "https://ianwwagner.com//elecom-deft-pro-first-impressions.html",
      "url": "https://ianwwagner.com//elecom-deft-pro-first-impressions.html",
      "title": "Elecom Deft Pro - First Impressions",
      "content_html": "<p>I recently picked up a new trackball to replace my prematurely aging Elecom HUGE.\nHere are my first impressions after only a few hours, since there wasn't much I could find online comparing them properly.</p>\n<p>NOTE: I have an updated review after ~2 months with the unit <a href=\"elecom-deft-pro-two-months-in.html\">here</a>.</p>\n<h1><a href=\"#why-i-use-a-trackball\" aria-hidden=\"true\" class=\"anchor\" id=\"why-i-use-a-trackball\"></a>Why I use a Trackball</h1>\n<p>To get the obvious question out of the way first,\nI have used a trackball for about 5 years as my primary pointing device.\nI switched from all manner of mice and trackpads due to ergonomics.\nI had pretty bad RSI for many years, due to excessive computer use,\nand a trackball was one of the key changes that solved my issues.</p>\n<p>The trackball actually had a larger impact than switching to Dvorak (minimal improvement)\nor switching to a better ergonomic keyboard (Kinesis Advantage 2 QD; incremental, but not an order-of-magnitude improvement).\nAnd yes, I'm aware of vertical mice.\nI tried a Logitech one for about 2 years and it actually made things worse.</p>\n<h1><a href=\"#size\" aria-hidden=\"true\" class=\"anchor\" id=\"size\"></a>Size</h1>\n<p>The biggest difference is probably the size.\nThe Deft Pro has a much smaller footprint than the HUGE.\nPart of the reason for this early replacement is that I ordered a replacement keyboard drawer online the other week.\nBeing too lazy to measure, and just going off the scale photos, it looked <em>plenty</em> big.</p>\n<p><figure><img src=\"media/huge-on-the-tray.jpeg\" alt=\"An Elecom HUGE trackball, with the side hanging off the edge of a white keyboard tray\" /></figure></p>\n<p>Narrator: it was, in fact, far too small and barely fit the keyboard.</p>\n<p>So, with the HUGE practically falling off,\ncombined with my bad habit of snacking and drinking at my desk late at night contributing to click malfunction\n(weirdly it 
would DOUBLE register clicks!), I needed a replacement.</p>\n<p><figure><img src=\"media/deft-on-the-tray.jpeg\" alt=\"An Elecom Deft Pro trackball, which barely fits on the keyboard tray\" /></figure></p>\n<p>The Deft Pro is pretty much the size I wanted.\nIt fits (just barely) on my tiny keyboard tray,\nand it is quite comfortable to use.</p>\n<h1><a href=\"#weightgrip\" aria-hidden=\"true\" class=\"anchor\" id=\"weightgrip\"></a>Weight/grip</h1>\n<p>I have to say I wish this thing were a bit heavier.\nI want it to sit on my desk like a rock and not move.\nI tend to manipulate the ball with my fingertips,\nand keep other fingers on the buttons, but this tends to push the base off to the left.\nMaybe I'll look at a sticky setup or something...</p>\n<h1><a href=\"#control-positions\" aria-hidden=\"true\" class=\"anchor\" id=\"control-positions\"></a>Control positions</h1>\n<p>I can only give a partial review here, since I do not use most of the extra buttons.\nPartly because I can't be bothered to configure them (most apps won't support them easily),\nand partly because I don't like installing proprietary software with low-level system access for marginal benefit.\n(This is a subtoot at Logitech Options, but Elecom is not much better.)\nSo, I only use the left+right click and the forward/back buttons.\nThe forward/back buttons are plug-and-play\nwith my web browser and many dev tools like JetBrains IDEs.</p>\n<p>I think the HUGE positioning was a bit better for the forward/back buttons.\nThe scroll wheel and left click positions are fine for my usage style.\nThe right click though... 
That will take some getting used to.\nI think they designed the Deft to be held in your hand pretty closely,\nbut that isn't how I got used to using the HUGE.</p>\n<p>I'll give this a shot and see how it goes after a few weeks trying a different grip.\nI think right click is positioned fine for the intended style of gripping.</p>\n<h1><a href=\"#scroll-wheel\" aria-hidden=\"true\" class=\"anchor\" id=\"scroll-wheel\"></a>Scroll wheel</h1>\n<p>The Deft allows for much better precision, with each step being a nice crisp click.\nI personally liked the HUGE better.\nIt was pretty fluid, smooth, and not &quot;clicky.&quot;\nIt could also generate acceptable scrolling speeds.\nNot as good as an Apple trackpad, but acceptable.\nI had to jack up the scroll speed to max for the Deft,\nand will probably continue using my trackpad for gestures like this when I want to go fast.</p>\n<p>This is not a criticism of Elecom specifically; it's a universal gripe I have with most things\n<em>except</em> the Apple trackpad.</p>\n<h1><a href=\"#the-ball\" aria-hidden=\"true\" class=\"anchor\" id=\"the-ball\"></a>The ball</h1>\n<p>The ball itself is great!\nI found some reviews online complaining that it was sticky,\nor hard to manipulate precisely.\nI have not had any issues so far; it feels as good as or better than the HUGE.</p>\n<h1><a href=\"#connectivity\" aria-hidden=\"true\" class=\"anchor\" id=\"connectivity\"></a>Connectivity</h1>\n<p>If I recall correctly, the HUGE required you to select a connectivity option at purchase time.\nThe Deft Pro includes all three in the same package: Bluetooth, RF via a USB wireless dongle, and a wired USB connection.\nThis is great if you like to travel with an external input device!</p>\n<p>I personally don't since I love my MacBook trackpad.\nBut if you do, the only thing I'd flag is that there is no internal battery.\nYou need a single AA battery rather than an internal rechargeable battery like the higher-end Logitech mice.</p>\n<h1><a href=\"#overall-impressions\" 
aria-hidden=\"true\" class=\"anchor\" id=\"overall-impressions\"></a>Overall impressions</h1>\n<p>It lives up to the name: it's Deft.\nI like a few things about the HUGE better (the scroll wheel and right mouse button positions being the main ones),\nbut I <em>did</em> buy this for a smaller footprint,\nand I think I can get used to the button position with a tighter grip.</p>\n<p>I'll post a follow-up later once I have a chance to use it more.</p>\n",
      "summary": "",
      "date_published": "2025-09-28T00:00:00-00:00",
      "image": "media/huge-on-the-tray.jpeg",
      "authors": [
        {
          "name": "Ian Wagner",
          "url": "https://fosstodon.org/@ianthetechie",
          "avatar": "media/avi.jpeg"
        }
      ],
      "tags": [
        "gear"
      ],
      "language": "en"
    }
  ]
}