+
+ {{partial "header.html" .}}
+
+ {{ i18n "tag" }}: {{ .Title }}
+ {{ .Content }}
+ {{with .Params.summary}}
+ {{.}}
+ {{end}}
+ {{partial "page-list.html" .Paginator.Pages}}
+ {{ template "_internal/pagination.html" . }}
+
+ {{partial "contact.html" .}}
+
+
+
+```
+
+### Where does it come from?
+I ran into a really confusing issue with the title section of the term page. The title was appearing correctly, even pulling from `_index.md`, but it was being prefixed with `Tag: `.
+
+It took me some digging to find the cause. I started by grepping for `Tag` and found it in `i18n/en.toml`. That told me a template was calling `i18n`, and there it was, in `layouts/_default/term.html`:
+```html
+
+
+ {{partial "header.html" .}}
+
+ {{ i18n "tag" }}: {{ .Title }}
+ {{ .Content }}
+ {{with .Params.summary}}
+ {{.}}
+ {{end}}
+ {{partial "page-list.html" .Paginator.Pages}}
+ {{ template "_internal/pagination.html" . }}
+
+ {{partial "contact.html" .}}
+
+
+
+```
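+
+For reference, the prefix string itself comes from the translation file. Based on the key in the template, the entry in `i18n/en.toml` presumably looks something like this:
+```toml
+[tag]
+other = "Tag"
+```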
+
+## Now what?
+We've got a nice-looking series page now:
+
+{{< figure
+ src="series-display-screenshot.png"
+ alt="Screenshot of a website with four links to blog posts connected as a series"
+ caption=""
+>}}
+
+The next steps are to start filling out my series pages and writing about my projects. This actually clears out the outstanding list of projects I had for the blog, so I don't have any big structural stuff to do.
\ No newline at end of file
diff --git a/content/posts/adding-content-to-taxonomy-terms/series-display-screenshot.png b/content/posts/adding-content-to-taxonomy-terms/series-display-screenshot.png
new file mode 100644
index 0000000..de1931e
--- /dev/null
+++ b/content/posts/adding-content-to-taxonomy-terms/series-display-screenshot.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfb309fed293348de4b3a3d44eb061d1e56cb12f53dff49b186a0335454ff368
+size 86850
diff --git a/content/posts/automating-caddy-on-my-droplet/index.md b/content/posts/automating-caddy-on-my-droplet/index.md
new file mode 100644
index 0000000..1a36802
--- /dev/null
+++ b/content/posts/automating-caddy-on-my-droplet/index.md
@@ -0,0 +1,130 @@
+---
+draft: false
+title: "Automating Caddy on my DigitalOcean Droplet"
+date: "2023-01-05"
+author: "Nick Dumas"
+authorTwitter: "" #do not include @
+cover: ""
+tags: ["webdev", "devops"]
+keywords: ["", ""]
+summary: "Automation ambitions fall flat"
+showFullContent: false
+---
+
+## Defining units of work
+I've got a few different websites that I want to run: this blog, my portfolio, and my about page which acts as a hub for my other sites, my Bandcamp, and whatever else I end up wanting to show off.
+
+To keep things maintainable and reproducible, I decided to stop trying to maintain a monolithic all-in-one configuration file. The monolithic approach made it way harder to keep changes atomic; multiple iterations on my blog started impacting my prank websites, and it became harder and harder to return my sites to a working state.
+
+## Proof of concept
+
+The first test case was my blog because I knew the Hugo build was fine, I just needed to get Caddy serving again.
+
+A static site is pretty straightforward with Caddy:
+
+```
+blog.ndumas.com {
+ encode gzip
+ file_server
+ root * /var/www/blog.ndumas.com
+}
+```
+
+And telling Caddy to load it:
+
+```bash
+curl "http://localhost:2019/load" \
+ -H "Content-Type: text/caddyfile" \
+ --data-binary @blog.ndumas.com.caddy
+```
+
+This all works perfectly. Caddy's more than happy to load this, but it does warn that the file hasn't been formatted with `caddy fmt`:
+
+```
+[{"file":"Caddyfile","line":2,"message":"input is not formatted with 'caddy fmt'"}]
+```
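+
+The warning is easy to quiet: `caddy fmt` can rewrite the file in place before you load it ( filename from the example above ):
+
+```bash
+caddy fmt --overwrite blog.ndumas.com.caddy
+```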
+
+## The loop
+
+Here's where things went sideways. Now that I have two unit files, I'm ready to work on the tooling that will dynamically load my config. For now, I'm just chunking it all out in `bash`. I've got no particular fondness for `bash`, but it's always a bit of a matter of pride to see whether or not I *can*.
+
+```bash
+#! /bin/bash
+# load-caddyfile
+
+function loadConf() {
+ curl localhost:2019/load \
+ -X POST \
+ -H "Content-Type: text/caddyfile" \
+ --data-binary @"$1"
+}
+
+loadConf "$1"
+```
+
+```bash
+#! /bin/bash
+# load-caddyfiles
+
+source load-caddyfile
+
+# sudo caddy stop
+# sudo caddy start
+for f in "$1"/*.caddy; do
+ echo -e "Loading $(basename $f)"
+ loadConf "$f"
+ echo
+done
+```
+
+After implementing the loop, my barelylegaltrout.biz site started throwing a 525 ( SSL handshake failure ) while blog.ndumas.com continued working perfectly. This was a real head-scratcher, and I had to let the problem sit for a day before I came back to it.
+
+After some boring troubleshooting legwork, I realized I'd misunderstood how the `/load` endpoint works. It completely replaces the current config with the provided payload. To do partial updates, I'd need to use the `PATCH` calls instead, and look who's back: JSON.
+
+## can't escape JSON
+
+The `PATCH` API *does* let you do partial updates, but it requires that your payloads be JSON, which does make sense. Because my current set of requirements explicitly excludes any JSON ( for now ), I'm going to have to ditch my dreams of modular code.
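+
+For illustration, a partial update targets a path inside the config tree rather than the whole document. A sketch of what that looks like, with the caveat that the exact path ( `srv0` in particular ) depends on how your servers are named in the JSON config:
+
+```bash
+curl -X PATCH "http://localhost:2019/config/apps/http/servers/srv0/routes/0" \
+	-H "Content-Type: application/json" \
+	--data-binary @route.json
+```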
+
+
+
+Not all my code has gone to waste, though. Now that I know `POST`ing to `/load` overwrites the whole configuration, I don't need to worry about stopping/restarting the Caddy process to get a clean slate. `load-caddyfile` will let me keep iterating as I make changes.
+
+## Proxies
+
+In addition to the static sites, I'm running a few applications to make life a little easier. I'll showcase my Gitea and Asciinema configs. At the moment, my setup for these is really janky: I've got a `tmux` session on my droplet where I've manually invoked `docker-compose up`. I'll leave cleaning that up and making systemd units or something proper out of them for a future project.
+
+Reverse proxying with Caddy is blessedly simple:
+```
+cast.ndumas.com {
+ encode gzip
+ reverse_proxy localhost:10083
+}
+```
+
+```
+code.ndumas.com {
+ encode gzip
+ reverse_proxy localhost:3069
+}
+```
+
+With that, my Gitea instance is up and running, and my Asciinema instance is as well:
+
+[![asciicast](https://cast.ndumas.com/a/28.svg)](https://cast.ndumas.com/a/28)
+
+## Back to Square One
+After finally making an honest attempt to learn how to work with Caddy 2 and its configurations and admin API, I want to take a swing at making a systemd unit file for Caddy to make this a proper setup.
+
+## Finally
+
+Here's what's currently up and running:
+- [My blog](https://blog.ndumas.com)
+- [Asciinema](https://cast.ndumas.com)
+- [Gitea](https://code.ndumas.com)
+
+I've had loads of toy projects over the years ( stay tuned for butts.ndumas.com ) which may come back, but for now I'm hoping these are going to help me focus on creative, stimulating projects in the future.
+
+The punchline is that I still haven't really automated Caddy; good thing you can count on `tmux`:
+[![asciicast](https://cast.ndumas.com/a/aWYCFj69CjOg94kgjbGg4n2Uk.svg)](https://cast.ndumas.com/a/aWYCFj69CjOg94kgjbGg4n2Uk)
+
+The final code can be found [here](https://code.ndumas.com/ndumas/caddyfile). Nothing fancy, but troublesome enough that it's worth remembering the problems I had.
diff --git a/content/posts/beautiful-builds-with-bazel/index.md b/content/posts/beautiful-builds-with-bazel/index.md
new file mode 100644
index 0000000..73629c3
--- /dev/null
+++ b/content/posts/beautiful-builds-with-bazel/index.md
@@ -0,0 +1,494 @@
+---
+draft: false
+title: "Beautiful Builds with Bazel"
+aliases: ["Beautiful Builds with Bazel"]
+series: ["building-with-bazel"]
+series_order: 1
+date: "2023-08-25"
+author: "Nick Dumas"
+cover: ""
+summary: "bzlmod makes bazel extremely appealing and isn't hard to grasp for anyone already familiar with go modules. My frustration with make for complex builds led me to bazel."
+showFullContent: false
+tags:
+ - bazel
+ - golang
+ - devops
+---
+
+## What am I Doing?
+I write programs to solve problems. Most of the time these are pretty personal and only get used once or twice, never see the light of day again, and that's fine.
+
+Lately, though, I've been working on tooling for my Obsidian notes and I want to make my tools as accessible as possible. This involves a couple steps that are particularly important, tedious, and error prone when done manually:
+- trying to cross compile my binaries for a relatively long list of cpu/arch combinations
+- build a docker image
+ - push that docker image to OCI image repositories
+- run tests
+- run benchmarks
+- cache builds effectively
+
+I've started with a Makefile I stole from some gist I didn't save a link to. This [makefile](https://code.ndumas.com/ndumas/wikilinks-parser/src/tag/v0.0.5/Makefile) is kinda hefty so I'm gonna focus on compiling go binaries and preparing OCI images that contain those binaries.
+
+This makefile's extremely opinionated, hyper-targeted at Go builds. It assumes that your binaries live in `cmd/binaryName/`.
+
+```Makefile
+# Parameters
+PKG = code.ndumas.com/ndumas/obsidian-markdown
+NAME = parse-wikilinks
+DOC = README.md LICENSE
+
+DISTDIR ?= $(WD)/dist
+CMDS := $(shell find "$(CMDDIR)/" -mindepth 1 -maxdepth 1 -type d | sed 's/ /\\ /g' | xargs -n1 basename)
+INSTALL_TARGETS := $(addprefix install-,$(CMDS))
+
+VERSION ?= $(shell git -C "$(MD)" describe --tags --dirty=-dev)
+COMMIT_ID := $(shell git -C "$(MD)" rev-parse HEAD | head -c8)
+LDFLAGS = -X $(PKG).Version=$(VERSION) -X $(PKG).Build=$(COMMIT_ID)
+
+GOCMD = go
+GOINSTALL = $(GOCMD) install -a -tags "$(BUILD_TAGS)" -ldflags "$(LDFLAGS)"
+GOBUILD = gox -osarch="!darwin/386" -rebuild -gocmd="$(GOCMD)" -arch="$(ARCHES)" -os="$(OSES)" -output="$(OUTTPL)" -tags "$(BUILD_TAGS)" -ldflags "$(LDFLAGS)"
+
+GZCMD = tar -czf
+SHACMD = sha256sum
+ZIPCMD = zip
+
+
+build: $(CMDS)
+$(CMDS): setup-dirs dep
+ $(GOBUILD) "$(CMDPKG)/$@" | tee "$(RPTDIR)/build-$@.out"
+install: $(INSTALL_TARGETS)
+$(INSTALL_TARGETS):
+ $(GOINSTALL) "$(CMDPKG)/$(subst install-,,$@)"
+
+dist: clean build
+ for docfile in $(DOC); do \
+ for dir in "$(DISTDIR)"/*; do \
+ cp "$(PKGDIR)/$$docfile" "$$dir/"; \
+ done; \
+ done
+ cd "$(DISTDIR)"; for dir in ./*linux*; do $(GZCMD) "$(basename "$$dir").tar.gz" "$$dir"; done
+ cd "$(DISTDIR)"; for dir in ./*windows*; do $(ZIPCMD) "$(basename "$$dir").zip" "$$dir"; done
+ cd "$(DISTDIR)"; for dir in ./*darwin*; do $(GZCMD) "$(basename "$$dir").tar.gz" "$$dir"; done
+ cd "$(DISTDIR)"; find . -maxdepth 1 -type f -printf "$(SHACMD) %P | tee \"./%P.sha\"\n" | sh
+ $(info "Built v$(VERSION), build $(COMMIT_ID)")
+```
+
+Because this isn't a makefile tutorial, I'm going to just hit the high notes and explain why this isn't working. Given the parameters at the top, it looks in `cmd/` for directories and passes them to `go build` with `-ldflags` thrown in.
+
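+For context, those `-X` flags work by overwriting package-level string variables at link time. The Go side of the `LDFLAGS` line is just a pair of declarations; the package name below is a guess, what matters is that the variables live in the package that `$(PKG)` points at:
+```go
+package obsidian
+
+// Version and Build are overwritten at link time by
+// -ldflags "-X $(PKG).Version=... -X $(PKG).Build=..."
+var (
+	Version = "dev"
+	Build   = "unknown"
+)
+```
+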
+Here we have the machinery behind `make bump`, github link below. `bump` is a tool that'll automatically create semantic versioning tags in a git repo based on existing tags. You can `bump {patch,minor,major}` and it'll create the next tag in the versioning sequence for you.
+```Makefile
+setup-bump:
+ go install github.com/guilhem/bump@latest
+
+bump-major: setup-bump
+ bump major
+
+bump-minor: setup-bump
+ bump minor
+
+bump-patch: setup-bump
+ bump patch
+```
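+
+In practice that makes version bumps a one-liner each. Assuming the repo's latest tag is `v0.0.5`, the flow looks something like:
+
+```bash
+$ git tag --list 'v*' | tail -n1
+v0.0.5
+$ make bump-patch  # creates the v0.0.6 tag
+```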
+
+## Why does it work?
+Automation is a great thing. This makefile inspired me to start actually using semantic versioning diligently. It didn't hurt that I was working on a lot of Drone pipelines at the time and was getting incredibly frustrated debugging `:latest` images, never certain what code was running.
+
+Working with bash is never...pleasant, but it definitely gets the job done. I'm no stranger to shell scripts, and the minor modifications needed to integrate `bump` ( plus other miscellany I've legitimately forgotten by now; document your code for your own sake ) posed no real burden.
+
+This makefile helped me rapidly iterate on code and release it in a way that was easily consumable, including docker images pushed to my self-hosted registry on Gitea. The pipeline that handles this blog post is using a docker image tagged by the makefile components described above, in fact.
+
+## Why doesn't it work?
+The real kink in the hose ended up being gox. Gox worked great until I tried to generate alpine builds. It was possible, but I'd have to start changing the makefile pretty significantly, write bash helper functions, and more. I decided pretty quickly that wasn't worth the maintenance overhead and started looking into alternatives.
+
+The underlying problem is that `make` isn't "smart": the solutions for cross-compilation ended up being clunky to compose with Docker builds.
+## What are the options?
+The only real solution is a smarter build system. I had to choose between hand-rolling something with a bunch of switch statements in bash, or I could look into more modern toolkits. I looked into three:
+- meson
+- bazel
+- scons
+## The contenders
+Bazel looked like it had the most to offer:
+- hermetic builds
+- reproducible builds
+- aggressive, fine-grained caching
+- extensible
+
+All of these fit the bill for what I needed. In particular, it has pretty decent go support through [rules_go](https://github.com/bazelbuild/rules_go) and [gazelle](https://github.com/bazelbuild/bazel-gazelle), which we'll look at in more depth later.
+
+There's not a lot to say here, I knew nothing about any of the three candidates and when I started I wasn't certain I'd stick with bazel all the way. Sometimes you just have to try stuff and see how it feels.
+
+### Caution
+bazel seems to be going through an ecosystem shift from the WORKSPACE paradigm to bzlmod. Documentation does exist, but it might not be in the README yet. I've tested the code here and it works in this narrow case. Caveat emptor.
+
+## Getting Going with Gazelle
+With that, here is how a modern bzlmod-enabled Go repo is born.
+
+### Building Go code
+The first step is, in no particular order, init your git repository and init your go module. The former is helpful for keeping track of when you broke something and the latter is required for gazelle to do its job.
+- `go mod init`
+- `git init`
+
+Write your go code. The simplest hello world will work for demonstration purposes.
+
+Create your `MODULE.bazel` file.
+``` {title="MODULE.bazel"}
+module(
+ name = "obsidian-markdown", # set this manually
+ repo_name = "code.ndumas.com_ndumas_obsidian-markdown", # this is the name of your go module, with /'s replaced with _'s
+)
+
+bazel_dep(name = "gazelle", version = "0.32.0")
+bazel_dep(name = "rules_go", version = "0.41.0")
+
+go_deps = use_extension("@gazelle//:extensions.bzl", "go_deps")
+go_deps.from_file(go_mod = "//:go.mod")
+
+```
+`module()` is how you declare a top-level bazel project. Everything is namespaced under this module.
+
+`bazel_dep` tells bazel to retrieve modules from the [bazel registry](https://registry.bazel.build/).
+
+`use_extension` imports functions from bazel modules; here we're importing `go_deps` because it'll read our `go.mod` file and help bazel automatically calculate direct and transitive dependencies.
+
+
+and `BUILD.bazel`
+``` {title="BUILD.bazel"}
+load("@gazelle//:def.bzl", "gazelle")
+
+gazelle(name = "gazelle")
+
+gazelle(
+ name = "gazelle-update-repos",
+ args = [
+ "-from_file=go.mod",
+ "-to_macro=deps.bzl%go_dependencies",
+ "-prune",
+ ],
+ command = "update-repos",
+)
+```
+
+This is straight from the gazelle README. You `load()` the gazelle module and declare two build targets: `gazelle` and `gazelle-update-repos`. After the rest of the setup, these targets are what will do the work of actually generating build/test targets for all your code.
+
+Next, `.bazelrc`
+``` {title=".bazelrc"}
+common --experimental_enable_bzlmod
+
+# Disable lockfiles until it works properly.
+# https://github.com/bazelbuild/bazel/issues/19068
+common --lockfile_mode=off
+
+###############################
+# Directory structure #
+###############################
+
+# Artifacts are typically placed in a directory called "dist"
+# Be aware that this setup will still create a bazel-out symlink in
+# your project directory, which you must exclude from version control and your
+# editor's search path.
+build --symlink_prefix=dist/
+
+###############################
+# Output #
+###############################
+
+# A more useful default output mode for bazel query, which
+# prints "ng_module rule //foo:bar" instead of just "//foo:bar".
+query --output=label_kind
+
+# By default, failing tests don't print any output, it's logged to a
+# file instead.
+test --test_output=errors
+```
+
+Only the first line is required; the rest are just conveniences. I do **strongly** recommend the `query` setting though, extremely nice for debugging.
+
+Finally, a `.gitignore` to mask out generated artifacts.
+``` {title=".gitignore"}
+dist/*
+reports/*
+bazel-*
+*.bazel.lock
+```
+
+Run `bazel run //:gazelle`. This will auto-generate a lot of scaffolding, and probably emit a `buildozer` command that will modify something. This is the build system ( specifically gazelle ) automatically detecting dependencies that are declared in `go.mod` but not in your bazel code.
+```bash
+$ bazel run //:gazelle
+WARNING: /home/ndumas/work/gomud/MODULE.bazel:8:24: The module extension go_deps defined in @gazelle//:extensions.bzl reported incorrect imports of repositories via use_repo():
+
+Not imported, but reported as direct dependencies by the extension (may cause the build to fail):
+ com_github_therealfakemoot_go_telnet
+
+ ** You can use the following buildozer command(s) to fix these issues:
+
+buildozer 'use_repo_add @gazelle//:extensions.bzl go_deps com_github_therealfakemoot_go_telnet' //MODULE.bazel:all
+INFO: Analyzed target //:gazelle (0 packages loaded, 0 targets configured).
+INFO: Found 1 target...
+Target //:gazelle up-to-date:
+ dist/bin/gazelle-runner.bash
+ dist/bin/gazelle
+INFO: Elapsed time: 0.473s, Critical Path: 0.01s
+INFO: 1 process: 1 internal.
+INFO: Build completed successfully, 1 total action
+INFO: Running command line: dist/bin/gazelle
+$ git st
+## dev
+?? .bazelrc
+?? .gitignore
+?? BUILD
+?? MODULE.bazel
+?? cmd/BUILD.bazel
+?? protocol/BUILD.bazel
+$
+```
+
+Running the `buildozer` command and then `git diff` shows its work:
+```bash
+$ buildozer 'use_repo_add @gazelle//:extensions.bzl go_deps com_github_therealfakemoot_go_telnet' //MODULE.bazel:all
+fixed /home/ndumas/work/gomud/MODULE.bazel
+$ git st
+## dev
+ M MODULE.bazel
+$ git diff
+diff --git a/MODULE.bazel b/MODULE.bazel
+index b482f31..8e82690 100644
+--- a/MODULE.bazel
++++ b/MODULE.bazel
+@@ -7,3 +7,4 @@ bazel_dep(name = "gazelle", version = "0.32.0")
+
+ go_deps = use_extension("@gazelle//:extensions.bzl", "go_deps")
+ go_deps.from_file(go_mod = "//:go.mod")
++use_repo(go_deps, "com_github_therealfakemoot_go_telnet")
+$
+```
+This diff shows how bazel references external dependencies. gazelle's `go_deps` tool acts as a provider for these lookups and offers information bazel needs to verify its build graphs. Yours may look different depending on what you've imported, if anything.
+
+Examining the produced `BUILD.bazel` file should yield something like this for a `main` package.
+``` {title="cmd/echo/BUILD.bazel"}
+load("@rules_go//go:def.bzl", "go_binary", "go_library")
+
+go_library(
+ name = "echo_lib",
+ srcs = ["server.go"],
+ importpath = "code.ndumas.com/ndumas/gomud/cmd/echo",
+ visibility = ["//visibility:private"],
+)
+
+go_binary(
+ name = "echo",
+ embed = [":echo_lib"],
+ visibility = ["//visibility:public"],
+)
+```
+
+
+If the package is importable, you'll see something like this:
+``` {title="protocol/BUILD.bazel"}
+load("@rules_go//go:def.bzl", "go_library")
+
+go_library(
+ name = "protocol",
+ srcs = ["telnet.go"],
+ importpath = "code.ndumas.com/ndumas/gomud/protocol",
+ visibility = ["//visibility:public"],
+)
+```
+These are examples of `rules_go` build targets. They do a bunch of magic to invoke Go toolchains and, in theory, let bazel cache builds at a pretty granular level. I'm hoping this is true; I've got a few pipelines that are starting to run way longer than I like.
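+
+With those files generated, individual targets can be built or run by label, matching the paths gazelle wrote out above:
+
+```bash
+bazel build //protocol:protocol  # build the library on its own
+bazel run //cmd/echo:echo        # build and run the binary
+```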
+
+### OCI Images
+For ease of use, I like to build docker images containing my packages. This is particularly important for Drone pipelines.
+
+We're gonna amend our `MODULE.bazel` to add some new tools.
+``` {title="MODULE.bazel"}
+bazel_dep(name = "rules_oci", version = "1.3.1") # gives us ways to interact with OCI images and repositories
+bazel_dep(name = "rules_pkg", version = "0.9.1") # exposes a way to tar our app, which is necessary for packing with rules_oci
+
+
+oci = use_extension("@rules_oci//oci:extensions.bzl", "oci")
+oci.pull(
+ name = "distroless_base",
+ image = "gcr.io/distroless/base",
+ tag = "latest", # This is temporary. For reproducible builds, you'll want to use digest hashes.
+)
+
+use_repo(oci, "distroless_base")
+```
+
+`pull()` does more or less what it says: it creates a target that represents an OCI image pulled from a registry, and another `use_repo()` call tells bazel that we're *using* our image.
+
+And add this to the `BUILD.bazel` file for the binary you want built into an OCI image
+``` {title="cmd/echo/BUILD.bazel"}
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
+pkg_tar(
+ name = "tar",
+ srcs = [":echo"],
+)
+
+load("@rules_oci//oci:defs.bzl", "oci_image")
+
+oci_image(
+ name = "image",
+ base = "@distroless_base",
+ entrypoint = ["/echo"],
+ tars = [":tar"],
+)
+```
+
+`oci_image` requires that whatever you package into the image it creates be contained in a tar file, which seems pretty reasonable. `rules_pkg` handles that for us.
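+
+`rules_oci` can also push the result. A sketch of an `oci_push` target for the same `BUILD.bazel` file, with the caveat that the repository URL is a placeholder and you'd probably template the version into `remote_tags` rather than hardcoding `latest`:
+``` {title="cmd/echo/BUILD.bazel"}
+load("@rules_oci//oci:defs.bzl", "oci_push")
+
+oci_push(
+    name = "push",
+    image = ":image",
+    repository = "registry.example.com/ndumas/echo",
+    remote_tags = ["latest"],
+)
+```
+Pushing is then a matter of `bazel run //cmd/echo:push`.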
+
+Run `bazel build //cmd/echo:image` and you'll see another `buildozer` command and a lot of errors. This is to be expected: bazel wants builds to be reproducible, and because we haven't specified a digest it can't guarantee that. It helpfully emits the `buildozer` command that'll set the proper digest hash and platforms bazel needs to resolve its builds.
+```
+bazel build //cmd/echo:image
+WARNING: fetching from https://gcr.io/v2/distroless/base/manifests/latest without an integrity hash. The result will not be cached.
+WARNING: for reproducible builds, a digest is recommended.
+Either set 'reproducible = False' to silence this warning,
+or run the following command to change oci.pull to use a digest:
+(make sure you use a recent buildozer release with MODULE.bazel support)
+
+buildozer 'set digest "sha256:73deaaf6a207c1a33850257ba74e0f196bc418636cada9943a03d7abea980d6d"' 'remove tag' 'remove platforms' 'add platforms "linux/amd64" "linux/arm64" "linux/arm" "linux/s390x" "linux/ppc64le"' MODULE.bazel:distroless_base
+
+WARNING: fetching from https://gcr.io/v2/distroless/base/manifests/latest without an integrity hash. The result will not be cached.
+INFO: Repository rules_oci~1.3.1~oci~distroless_base_single instantiated at:
+ callstack not available
+Repository rule oci_pull defined at:
+ /home/ndumas/.cache/bazel/_bazel_ndumas/482ba52ed14b5c036eb1d379e90911a8/external/rules_oci~1.3.1/oci/private/pull.bzl:437:27: in