cleaning up
continuous-integration/drone/push Build is failing
parent ccea3d3913
commit 23b8623639
@@ -1,3 +0,0 @@
[submodule "themes/terminal"]
	path = themes/terminal
	url = https://github.com/panr/hugo-theme-terminal.git
@@ -1,6 +0,0 @@
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---

@@ -1,130 +0,0 @@
+++
draft = false
title = "Automating Caddy on my DigitalOcean Droplet"
date = "2023-01-05"
author = "Nick Dumas"
authorTwitter = "" #do not include @
cover = ""
tags = ["webdev", "devops"]
keywords = ["", ""]
description = "Automation ambitions fall flat"
showFullContent = false
+++

# Defining units of work

I've got a few different websites that I want to run: this blog, my portfolio, and my about page, which acts as a hub for my other sites, my Bandcamp, and whatever else I end up wanting to show off.

To keep things maintainable and reproducible, I decided to stop trying to maintain a monolithic all-in-one configuration file. The monolithic approach made it way harder to keep changes atomic; multiple iterations on my blog started impacting my prank websites, and it became harder and harder to return my sites to a working state.

# Proof of concept

The first test case was my blog because I knew the Hugo build was fine, I just needed to get Caddy serving again.

A static site is pretty straightforward with Caddy:

```
blog.ndumas.com {
	encode gzip
	file_server
	root * /var/www/blog.ndumas.com
}
```

And telling Caddy to load it:

```bash
curl "http://localhost:2019/load" \
	-H "Content-Type: text/caddyfile" \
	--data-binary @blog.ndumas.com.caddy
```

This all works perfectly; Caddy's more than happy to load this, but it does warn that the file hasn't been formatted with `caddy fmt`:

```
[{"file":"Caddyfile","line":2,"message":"input is not formatted with 'caddy fmt'"}]
```

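The warning is harmless, but Caddy ships a formatter that will silence it. A minimal sketch, assuming the unit file is saved as `blog.ndumas.com.caddy`:

```bash
# Rewrite the Caddyfile in place with canonical formatting so the
# admin API stops warning about it on every load.
caddy fmt --overwrite blog.ndumas.com.caddy
```
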
# The loop

Here's where things went sideways. Now that I have two unit files, I'm ready to work on the tooling that will dynamically load my config. For now, I'm just chunking it all out in `bash`. I've got no particular fondness for `bash`, but it's always a bit of a matter of pride to see whether or not I *can*.

```bash
#! /bin/bash
# load-caddyfile

function loadConf() {
	curl localhost:2019/load \
		-X POST \
		-H "Content-Type: text/caddyfile" \
		--data-binary @"$1"
}

loadConf "$1"
```

```bash
#! /bin/bash
# load-caddyfiles

source load-caddyfile

# sudo caddy stop
# sudo caddy start
for f in "$1"/*.caddy; do
	echo -e "Loading $(basename "$f")"
	loadConf "$f"
	echo
done
```

After implementing the loop, my barelylegaltrout.biz site started throwing a 525 while blog.ndumas.com continued working perfectly. This was a real head scratcher, and I had to let the problem sit for a day before I came back to it.

After some boring troubleshooting legwork, I realized I misunderstood how the `/load` endpoint works. This endpoint completely replaces the current config with the provided payload. In order to do partial updates, I'd need to use the `PATCH` calls, and look who's back?

# can't escape JSON

The `PATCH` API *does* let you do partial updates, but it requires your payloads be JSON, which does make sense. Because my current set of requirements explicitly excludes any JSON ( for now ), I'm going to have to ditch my dreams of modular code.

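For reference, a partial update looks roughly like this — a sketch only, since the exact config path (server name, route index) depends entirely on how the JSON config ends up structured:

```bash
# PATCH replaces only the value at the given config path instead of the
# whole config; the path and payload here are illustrative, not my real setup.
curl "http://localhost:2019/config/apps/http/servers/srv0/routes/0/handle" \
	-X PATCH \
	-H "Content-Type: application/json" \
	--data '[{"handler": "reverse_proxy", "upstreams": [{"dial": "localhost:10083"}]}]'
```
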

Not all my code has gone to waste, though. Now that I know `POST`ing to `/load` overwrites the whole configuration, I don't need to worry about stopping/restarting the caddy process to get a clean slate. `load-caddyfile` will let me keep iterating as I make changes.

# Proxies

In addition to the static sites, I'm running a few applications to make life a little easier. I'll showcase my Gitea and Asciinema configs. At the moment, my setup for these is really janky: I've got a `tmux` session on my droplet where I've manually invoked `docker-compose up`. I'll leave cleaning that up and making systemd units or something proper out of them for a future project.

Reverse proxying with Caddy is blessedly simple:

```
cast.ndumas.com {
	encode gzip
	reverse_proxy localhost:10083
}
```

```
code.ndumas.com {
	encode gzip
	reverse_proxy localhost:3069
}
```

With that, my Gitea instance is up and running, and my Asciinema instance is as well:

[![asciicast](https://cast.ndumas.com/a/28.svg)](https://cast.ndumas.com/a/28)

# Back to Square One

After finally making an honest attempt to learn how to work with Caddy 2 and its configurations and admin API, I want to take a swing at making a systemd unit file for Caddy to make this a proper setup.

# Finally

Here's what's currently up and running:
- [My blog](https://blog.ndumas.com)
- [Asciinema](https://cast.ndumas.com)
- [Gitea](https://code.ndumas.com)

I've had loads of toy projects over the years ( stay tuned for butts.ndumas.com ) which may come back, but for now I'm hoping these are going to help me focus on creative, stimulating projects in the future.

The punchline is that I still haven't really automated Caddy; good thing you can count on `tmux`.

[![asciicast](https://cast.ndumas.com/a/aWYCFj69CjOg94kgjbGg4n2Uk.svg)](https://cast.ndumas.com/a/aWYCFj69CjOg94kgjbGg4n2Uk)

The final code can be found [here](https://code.ndumas.com/ndumas/caddyfile). Nothing fancy, but troublesome enough that it's worth remembering the problems I had.

@@ -1,10 +0,0 @@
---
title: "Bf in Go"
date: 2020-01-27T13:14:31-05:00
draft: true
toc: false
images:
tags:
  - untagged
---

@@ -1,89 +0,0 @@
---
title: "Data Interfaces"
date: 2019-02-06T14:58:22Z
draft: false
toc: false
images:
tags:
  - go
---

# interfaces

I'm a fan of Go's interfaces. They're really simple and don't require a lot of legwork.

{{< highlight go "linenos=table">}}
type Mover interface {
	Move(x, y int) (int, int)
}

type Dog struct {
	Name string
}

func (d Dog) Move(x, y int) (int, int) {
	return x, y
}
{{< / highlight >}}

Dog is now a Mover! No need for keywords like `implements`. The compiler just checks at the various boundaries in your app, like struct definitions and function signatures.

{{< highlight go "linenos=table">}}
type Map struct {
	Actors []Mover
}

func something(m Mover, x, y int) bool {
	// do something
	return true
}
{{< / highlight >}}

# Functionality vs Data

This is where things get tricky. Interfaces describe *functionality*. What if you want the compiler to enforce the existence of specific members of a struct? I encountered this problem in a project of mine recently and I'll use it as a case study for a few possible solutions.

## Concrete Types

If your only expectation is that the compiler enforce the existence of specific struct members, specifying a concrete type works nicely.

{{< highlight go "linenos=table">}}
type Issue struct {
	Key     string
	Title   string
	Created time.Time
	Updated time.Time
	Body    string
	Attrs   map[string][]string
}

type IssueService interface {
	Get() []Issue
}
{{< / highlight >}}

There's a few benefits to this. Because Go will automatically zero out all members of a struct on initialization, you only have to fill in what you explicitly want or need to provide. For example, the `Issue` type may represent a Jira ticket, or a Gitlab ticket, or possibly something as simple as lines in a TODO.txt file in a project's root directory.

In the context of this project, the user provides their own functions which "process" these issues. By virtue of being responsible for both the production and consumption of issues, the user/caller doesn't have to worry about mysteriously unpopulated data causing issues.

Where this falls apart, though, is when you want to allow for flexible/arbitrary implementations of functionality while still enforcing the presence of "data" members.

## Composition

I think the correct solution involves breaking your struct apart: you have the `Data` and the `Thinger` that needs the data. Instead of making `Issue` itself have methods, `Issue` can have a member that satisfies a given interface. This seems to offer the best of both worlds: you don't lose type safety, and consumers of your API can plug in their own concrete implementations of functionality while still respecting your requirements.

{{< highlight go "linenos=table">}}
type Issue struct {
	Key     string
	Title   string
	Created time.Time
	Updated time.Time
	Body    string
	Checker IssueChecker
	Attrs   map[string][]string
}

type IssueChecker interface {
	Check(Issue) bool
}

{{< / highlight >}}
@@ -1,198 +0,0 @@
+++
draft = false
title = "Copying HTML files by hand is for suckers"
date = "2023-02-02"
author = "Nick Dumas"
authorTwitter = ""
cover = ""
tags = ["drone", "gitea", "obsidian", "devops"]
keywords = ["drone", "gitea", "obsidian", "devops"]
description = "How I built a drone instance and pipeline to publish my blog"
showFullContent = false
+++
### Attribution
Credit to Jim Sheldon in the Harness Slack server, who pointed me [here](https://blog.ruanbekker.com/blog/2021/03/09/cicd-with-droneci-and-gitea-using-docker-compose/), which provided much of the starting skeleton of the project.

## The Old way
I use [hugo](https://gohugo.io/) to build my blog, and I love it. Static sites are the way to go for most content, and keeping them in git provides strong confidence that I'll never lose my work. I really like working in Markdown, and hosting is cheap and easy. Unfortunately, my current setup is extremely manual: I run `hugo` myself and copy the files into `/var/www`.

For a long time, this has been a really uncomfortable process and is part of why I find myself so uninterested in writing with any frequency. When the new year rolled around, I decided it was time to do better.

I want every push to my blog repository to generate a new hugo build and publish my content somewhere. The tools I've chosen are [gitea](/posts/gitea-lfs-and-syncing-obsidian-vaults) for managed git services, [drone](https://www.drone.io/) for continuous integration/deployment, and hugo to build the site.

## Hello Drone

Standing up a working Drone instance involves a few moving pieces:
1) Configure an `oauth2` application in your hosted git service with which to authenticate your Drone instance
2) You need the `drone` server itself, which hosts the web UI and database and responds to webhooks
3) The `drone-runner` is a separate entity that communicates with `drone` and actually executes pipelines. There's a few flavors of `drone-runner` and I've selected the [docker runner](https://docs.drone.io/runner/docker/overview/).

Step 1 is accomplished [manually](https://docs.drone.io/server/provider/gitea/), or with the gitea admin API. Using `docker-compose`, I was able to assemble the following configuration files to satisfy points 2 and 3.

### docker-compose

```yaml
version: '3.6'
services:
  drone:
    container_name: drone
    image: drone/drone:${DRONE_VERSION:-1.6.4}
    restart: unless-stopped
    environment:
      # https://docs.drone.io/server/provider/gitea/
      - DRONE_DATABASE_DRIVER=sqlite3
      - DRONE_DATABASE_DATASOURCE=/data/database.sqlite
      - DRONE_GITEA_SERVER=https://code.ndumas.com
      - DRONE_GIT_ALWAYS_AUTH=false
      - DRONE_RPC_SECRET=${DRONE_RPC_SECRET}
      - DRONE_SERVER_PROTO=https
      - DRONE_SERVER_HOST=drone.ndumas.com
      - DRONE_TLS_AUTOCERT=false
      - DRONE_USER_CREATE=${DRONE_USER_CREATE}
      - DRONE_GITEA_CLIENT_ID=${DRONE_GITEA_CLIENT_ID}
      - DRONE_GITEA_CLIENT_SECRET=${DRONE_GITEA_CLIENT_SECRET}
    ports:
      - "3001:80"
      - "3002:443"
    networks:
      - cicd_net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./drone:/data:z

  drone-runner:
    container_name: drone-runner
    image: drone/drone-runner-docker:${DRONE_RUNNER_VERSION:-1}
    restart: unless-stopped
    depends_on:
      - drone
    environment:
      # https://docs.drone.io/runner/docker/installation/linux/
      # https://docs.drone.io/server/metrics/
      - DRONE_RPC_PROTO=https
      - DRONE_RPC_HOST=drone.ndumas.com
      - DRONE_RPC_SECRET=${DRONE_RPC_SECRET}
      - DRONE_RUNNER_NAME="${HOSTNAME}-runner"
      - DRONE_RUNNER_CAPACITY=2
      - DRONE_RUNNER_NETWORKS=cicd_net
      - DRONE_DEBUG=false
      - DRONE_TRACE=false
    ports:
      - "3000:3000"
    networks:
      - cicd_net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

networks:
  cicd_net:
    name: cicd_net
```

All of the `docker-compose` files were ripped straight from documentation so there's very little surprising going on. The most common pitfall seems to be setting `DRONE_SERVER_HOST` to a URL instead of a hostname.

For me, the biggest hurdle I had to vault was SELinux. Because this is a fresh Fedora install, SELinux hasn't been relaxed in any way.

When dealing with SELinux, your friends are `ausearch` and `audit2{why,allow}`. In my case, I needed to grant `system_u:system_r:container_t` on `/var/run/docker.sock` so `drone` and `drone-runner` can access the host Docker service.

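For reference, this is roughly the loop for turning those denials into a policy module — a sketch rather than my exact shell history, and the module name is arbitrary:

```bash
# Inspect recent AVC denials mentioning the docker socket, then compile
# them into a local policy module and load it.
ausearch -m avc -ts recent | grep docker.sock | audit2why
ausearch -m avc -ts recent | audit2allow -M drone_docker
semodule -i drone_docker.pp
```
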

That wasn't the end of my SELinux woes, though. Initially, my Drone instance was crashing with "cannot open database file" errors. To that end, observe the `:z` on the following line. This tells docker to automatically apply the SELinux labels necessary to make the directory mountable.

```yaml
      - ./drone:/data:z
```

Why didn't this work for `docker.sock`? I really couldn't say; I did try it. With all the SELinux policies configured, I had a Drone instance that was able to see my Gitea repositories.

### caddy config
```
drone.ndumas.com {
	encode gzip
	reverse_proxy localhost:3001
}
```

The caddy configuration is a very simple reverse-proxy. Caddy has builtin LetsEncrypt support, so it's a pretty nice choice for the last hop for internet traffic. `sudo caddy start` will run caddy and detach, and with that, Drone has been exposed to the internet under a friendly subdomain.

### startup script
```bash
#!/usr/bin/env bash

export HOSTNAME=$(hostname)
export DRONE_VERSION=2.16.0
export DRONE_RUNNER_VERSION=1.8.3
export DRONE_ADMIN_USER="admin"
export DRONE_RPC_SECRET="$(echo ${HOSTNAME} | openssl dgst -md5 -hex|cut -d' ' -f2)"
export DRONE_USER_CREATE="username:${DRONE_ADMIN_USER},machine:false,admin:true,token:${DRONE_RPC_SECRET}"

# These are set in ~/.bash_profile
# export DRONE_GITEA_CLIENT_ID=""
# export DRONE_GITEA_CLIENT_SECRET=""
docker-compose -f docker-compose/drone.yml up -d
caddy start --config caddy/drone --adapter caddyfile
```

The startup script, `drone.sh`, injects some environment variables. Most of these are boring, but `DRONE_RPC_SECRET` and `DRONE_USER_CREATE` are the two most important. This script is set up to make these deterministic; it will create an admin user whose access token is the `md5` of your host machine's hostname.

This really saved my bacon when I realized I didn't know how to access the admin user for my drone instance when I needed it. Diving into your Drone instance's database is technically on the table, but I wouldn't advise it.

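Because the token is deterministic, you can recompute it at any time and ask Drone who it belongs to. A quick sketch, assuming the instance at drone.ndumas.com and the token recipe from `drone.sh` above:

```bash
# Rebuild the admin token the same way drone.sh does, then ask the API
# which user that token authenticates as.
export DRONE_SERVER="https://drone.ndumas.com"
export DRONE_TOKEN="$(echo $(hostname) | openssl dgst -md5 -hex | cut -d' ' -f2)"
curl -s -H "Authorization: Bearer ${DRONE_TOKEN}" "${DRONE_SERVER}/api/user"
```
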

## It's pipeline time
Once I had drone up and running, getting my blog publishing pipeline going was a relatively straightforward process: write a pipeline step, commit, push, check Drone for a green build. After a couple days of iterating, the complete result looks like this:

```yaml
kind: pipeline
name: default

steps:
  - name: submodules
    image: alpine/git
    commands:
      - git submodule update --init --recursive
  - name: build
    image: alpine:3
    commands:
      - apk add hugo
      - hugo
  - name: publish
    image: drillster/drone-rsync
    settings:
      key:
        from_secret: blog_sync_key
      user: blog
      delete: true
      recursive: true
      hosts: ["blog.ndumas.com"]
      source: ./public/
      target: /var/www/blog.ndumas.com
      include: ["*"]

```

The steps are pretty simple:
1) Clone the repository ( this is actually handled by Drone itself ) and populate submodules, a vehicle for my Hugo theme
2) Building the site with Hugo is as simple as running `hugo`. Over time, I'm going to add more flags to the invocation, things like `--build{Drafts,Future,Expired}=false`, `--minify`, and so on.
3) Deployment of the static files to the destination server. This did require pulling in a pre-made Drone plugin, but I did vet the source code to make sure it wasn't trying anything funny. This could be relatively easily reproduced on a raw Alpine image if desired. The `blog_sync_key` secret it pulls has to exist in Drone already; see the sketch after this list.

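Registering that secret is a one-time step. Here's a sketch using the Drone CLI; the repository slug and key path are placeholders, not my actual values:

```bash
# Store the rsync private key as a per-repository secret named blog_sync_key.
export DRONE_SERVER="https://drone.ndumas.com"
export DRONE_TOKEN="<admin token>"
drone secret add --repository ndumas/blog --name blog_sync_key --data @"$HOME/.ssh/blog_deploy_key"
```
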

## Green checkmarks
At this point, I've got a fully automated publishing pipeline. As soon as a commit gets pushed to my blog repository, Drone jumps into action and runs a fresh Hugo build. The process is far from perfect, though.

![[Resources/attachments/obsidian-pipeline-screenshot.png]]
[[Resources/attachments/obsidian-pipeline-screenshot.png]]

You might've noticed a lack of screenshots or other media in my posts. At the moment, I'm authoring my blog posts in [Obsidian](https://obsidian.md), my preferred note-taking application, because it gives me quick access to...well, my notes. The catch is that Obsidian and Hugo use different conventions for linking between documents and referencing attachments/images.

In the long term, what I want to do is probably write a script and pipeline which can
1) convert Obsidian-style links and frontmatter blocks to their Hugo equivalents, so I can more easily cross-link between posts while drafting
2) find embedded media ( images, etc ) and pull them into the blog repository, then commit and push to trigger the blog publish pipeline.

## Unsolved Mysteries

For some reason, `audit2allow` was emitting invalid output as the result of something in my audit log. I never tracked it down. Whatever was causing this wasn't related to my `drone` setup, since I got everything running without fixing it.

```
[root@drone x]# cat /var/log/audit/audit.log|audit2allow -a -M volumefix
compilation failed:
volumefix.te:24:ERROR 'syntax error' at token 'mlsconstrain' on line 24:
mlsconstrain sock_file { write setattr } ((h1 dom h2 -Fail-) or (t1 != mcs_constrained_type -Fail-) ); Constraint DENIED
# mlsconstrain sock_file { ioctl read getattr } ((h1 dom h2 -Fail-) or (t1 != mcs_constrained_type -Fail-) ); Constraint DENIED
/usr/bin/checkmodule: error(s) encountered while parsing configuration
```
Binary file not shown.
@@ -1,17 +0,0 @@
---
title: "First Post"
date: 2018-02-10T23:24:24Z
tags: [ "Code", "Site Updates"]
draft: false
---

# Introduction
I've been programming with a passion for the last 15 years. I started with Python 2.2 or so and stuck with that for a good while. I also spent a few years doing web development work and most recently added Golang to my kit.

# This Site
This site is going to be a portfolio/showroom for my projects. I'll try to find clever ways to interact with my tools through the browser to take advantage of the optimizations and growing array of high-powered tools being baked into browsers like Chrome and Firefox.

A section of this site will also be used for my streaming content and tools. I've finished the first prototype of my [idle screen](http://idle.ndumas.com) which will continue to get polished.

## Tech
Shoutout to [jcmdln](https://github.com/jcmdln) for the CSS framework, `yttrium`. I'll try to do it justice in making a site that doesn't look a mess.
@@ -1,46 +0,0 @@
---
title: "Genesis Roadmap"
date: 2013-06-03T00:00:00Z

tags: ["genesis","python"]
series: ''
draft: false
---

Recently, I was working on an idea for a MUD; the general idea was to build a game with detailed 'inhabited' areas such as cities and dungeons as well as expansive wilderness regions for exploring and claiming by players. Naturally, I realized that doing this by hand would be unbearably tedious. A semi-realistic MUD world could contain 4,000,000 rooms; manually creating "Meadowy Plains #1421" would be error-prone and would drain the creative ability of a human. Thus, [Genesis](https://github.com/therealfakemoot/genesis-retired) was conceived. Note: this repository is now retired; the Python version of this project will not be pursued any longer.

# Moving Beyond the MUD
Planning a world generator for a MUD was an excellent idea, but very limited in scope. In particular, I realized how little I wanted to indelibly restrict myself to [Evennia](http://evennia.com/) as the framework in which the creation process happened. Evennia offers a great deal of power and flexibility, but the restriction of MUD concepts got me thinking that I should generalise the project even further.

In the end, I want Genesis to be a completely standalone world-design toolkit. The target audience includes tabletop gamers who need a physical setting for their campaigns, as well as authors who can create characters and stories, but have trouble with the tedium of drawing coastlines and mountain ranges out by hand.

# The Vision

As of the time of writing, implementation of Phase 1 is only partially completed. What follows are my overall goals for what each phase can accomplish.

## Phase 1: Heightmap Generation
Using a [simplex](http://en.wikipedia.org/wiki/Simplex) function, I populate an array with values representing the height of the world's terrain at a given coordinate. This is pretty simple; it's a pure function and when run with PyPy it absolutely screams. There's little to be said about this step because it produces the least interesting output. If so desired, however, a user could take the topological map generated and do whatever they please without following any further phases.

## Phase 2: Water Placement
The water placement phase is the simplest phase, yet has the most potentially drastic consequences in further phases. Given a heightmap from Phase 1, the user will be able to select a sea level, and at that point all areas of the map with a height below sea level will be considered underwater. This step can be reapplied to smaller subsections of the map, allowing for the creation of mountain lakes and other bodies of water which are not at sea level.

## Phase 3: Biome Assignment
Biome assignment is a rather complex problem. Full weather simulations are way beyond what one needs for an interesting map. To this end, I've found what I believe to be two excellent candidates for biome classification systems.

{{< figure src="/img/two_biome.jpg" caption="Two-axis Biome Chart">}}

This graph uses two axes to describe biomes: rainfall and temperature. This is an exceedingly easy metric to use. Proximity to water is a simple way to determine a region's average rainfall. Temperature is also an easy calculation, given a planet's axial tilt and the latitude and longitude of a location.

{{< figure src="/img/tri_biome.png" caption="Three-axis Biome Chart">}}

This graph is slightly more detailed, factoring a location's elevation into the determination of its biome. As this phase is still unimplemented, options remain open.

## Phase 4: Feature Generation
In the Milestones and Issues, I use the term 'feature' as a kind of catch-all; this phase is the most complex because it involves procedural generation of landmarks, cities, dungeons, villages, huts, and other details that make a piece of land anything more than an uninteresting piece of dirt. This phase will have the most direct interaction by a user, primarily in the form of reviewing generated features and approving or rejecting them for inclusion in the world during Phase 5. In this Phase, the user will determine what types of features they desire (large above ground stone structures, small villages, underground dungeons, and so on).

## Phase 5: Feature Placement
Phase 5 takes the objects generated during Phase 4 and allows the user the option of manually placing features, allowing Genesis to determine on its own where to place them, or some combination of both. Although it wouldn't make much sense to have multiple identical cities in the same world, this phase will allow duplication of features, allowing for easy placement of templates which can be customised at some future point.

# In Practice
The Genesis Github repository currently has a working demo of Phase 1. CPython is exceedingly slow at generating large ranges of simplex values and as such, the demo will crash or stall when given exceedingly large inputs. This is currently being worked on as per #8.
@@ -1,129 +0,0 @@
+++
draft = false
title = "Gitea, git-lfs, and syncing Obsidian Vaults"
date = "2023-01-31"
author = "Nick Dumas"
authorTwitter = ""
cover = ""
tags = ["obsidian", "git", "gitea"]
keywords = ["obsidian", "git", "gitea"]
description = "A brief overview of how I stood up a gitea instance for the purpose of backing up and syncing my Obsidian vault."
showFullContent = false
+++
## What am I Doing?
I take notes on a broad spectrum of topics ranging from tabletop roleplaying games to recipes to the last wishes of my loved ones. Because of how valuable these notes are, I need to accomplish two things:
1) Back up my notes so that no single catastrophe can wipe them out
2) Make my notes accessible on multiple devices like my phone and various work laptops

For writing and organizing my notes, I use an application called [Obsidian](https://obsidian.md), an Electron Markdown reader and editor with an emphasis on plaintext, local-only files to represent your notes. This has a lot of interesting implications which are well beyond the scope of this post, but this is the one that's germane: your notes are a textbook use-case for version control.

Markdown files are plain-text, human-readable content that every modern Version Control System is supremely optimized for handling. In this arena, there's a lot of options ( mercurial, bzr, git, svn, fossil, and more ) but I'm partial to git.

## Life with git
```bash
nick@DESKTOP-D6H8V4O MINGW64 ~/Desktop/general-notes (main)
$ git log $(!!)
git log $(git rev-list --max-parents=0 HEAD)
commit 18de1f967d7d9c667ec42f0cb41ede868d6bdd31
Author: unknown <>
Date:   Tue May 31 09:44:49 2022 -0400

    adding gitignore
```
I've kept my vault under git for all but the first 2 months of its lifetime, and I cannot count the number of times it's saved me from a mistake or a bug.

A few times a day, I'll commit changes to my notes, plugins, or snippets and push them up. This is a manual process, but by reviewing all my changes as they're committed I kill a few birds with one stone:
1) I get a crude form of spaced repetition by forcing myself to review notes as they change
2) I verify that templates and other code/plugins are working correctly and if they aren't, I can revert to a known-good copy trivially
3) Reorganizations become much easier ( see point 2, reverting to known-good copies )

For convenience, I chose to start off with Github as my provider. I set up a private repository because my notes contain sensitive information of various flavors, and I had no problems with it except for attachments. This works great: Github is a fast, reliable provider and meets all the requirements I laid out above.
## The catch
There is no free lunch. On Github, free repositories have restrictions:
1) Github will warn you if you commit files larger than 50mb and ask you to consider removing them or using git-lfs
2) Github will not permit any files larger than 100mb to be committed
3) You're allowed a limited number of private repositories, depending on the type and tier of your account.

My vault does not exclusively consist of plaintext files, though; there's PDFs, PNGs, PSDs, and more hanging out, taking up space and refusing to diff efficiently. I've got a lot of PDFs of TTRPG content, screenshots of important parts of software I care about for work or my personal life, and a lot of backup copies of configuration files.

In theory, this is sustainable. None of my attachments currently exceed 100mb, and the median size is well under 1mb.
```bash
$ pwd
~/evac/obsidian-vaults/bonk/Resources/attachments
$ ls -lah|awk '{print $5}'|sort -hr|head -n5
62M
36M
8.4M
3.1M
2.9M
```

I'm not satisfied with theoretical sustainability, though. For something this important and sensitive, I'd like to have total confidence that my system will work as expected for the foreseeable future.

## What are the options?
1) Github has its own [lfs service](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage) with the free tier capped at 2gb of storage
2) Pay for a higher tier of Github's LFS
3) Managed Gitlab (or similar) instance
4) Host my own

Options 1 and 2 are the lowest-effort solutions and rely the most on third parties. I've opted not to go with these because Github may change its private repository or git-lfs policies at any time.

Option 3 is better; a managed git hosting service splits the difference nicely. Using Gitlab would give me built-in CI/CD.

I've opted out of this mostly for price, and partly because I know for a fact that I can implement option 4.

## Option 4
I chose to use what I'm already familiar with: [Gitea](https://gitea.io/en-us/). Gitea is a fork of Gogs, a hosted git service written in Go. It's lightweight and its simplest implementation runs off an sqlite database, so I don't even need a PostgreSQL service running.

I've been using gogs and gitea for years and they've been extremely reliable and performant. It also integrates tightly with [Drone](https://www.drone.io/), a CI/CD system which will help me automate my blog, publish my notes, and more I haven't had the energy to plan.

## docker-compose and gitea
For my first implementation, I'm going to host gitea using docker-compose. This will give me a simple, reproducible setup that I can move between providers if necessary.

Hosting will be done on my DigitalOcean droplet, which is running a comically old version of Fedora for now. This droplet is really old, and up until now I've had very poor reproducibility on my setups. I'm working on fixing that with [caddy](/posts/automating-caddy-on-my-droplet), and using gitea for code management is next.

Below you'll see the `docker-compose.yaml` for my gitea instance. This is ripped directly from the gitea documentation so there's very little to comment on. The `ports` field is arbitrary and needs to be adjusted based on your hosting situation.
```yaml
version: "3"

networks:
  gitea:
    external: false

services:
  server:
    image: gitea/gitea:1.18.0
    container_name: gitea
    environment:
      - USER_UID=1000
      - USER_GID=1000
    restart: always
    networks:
      - gitea
    volumes:
      - ./gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3069:3000"
      - "222:22"
```

Starting it up is similarly uninteresting; I use detached mode for "production" work because I'm not super interested in watching all the logs. If something breaks, I can start it back up again without detaching and see whatever error output is getting kicked up.
```bash
$ docker-compose up -d
Starting gitea ... done
$
```

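Concretely, that looks something like this — a quick sketch using the `server` service name from the compose file above:

```bash
# Run in the foreground to watch startup output ( Ctrl-C to stop ),
docker-compose up
# or follow the logs of the already-running container.
docker-compose logs -f server
```
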

Once this is done, you've got a gitea instance waiting to be configured with an admin user and a few other bootstrap settings. Navigate to the URL you chose for your gitea instance while following the docs and you're ready to create a repository for your vault.

The web UI will guide you from there.

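Once the repository exists, pointing a vault at it is ordinary git plumbing plus a little git-lfs setup. A sketch — the remote URL and tracked patterns are examples, not my exact vault layout:

```bash
# Track binary attachment types with git-lfs before the first push,
# then add the new Gitea remote and push everything up.
cd ~/obsidian-vaults/bonk
git lfs install
git lfs track "*.pdf" "*.png" "*.psd"
git add .gitattributes
git remote add origin https://code.ndumas.com/ndumas/bonk.git
git push -u origin main
```
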

## Success Story???

This solution is only a week or two old, so it has not been put under a lot of load yet, but gitea has a good reputation and supports a lot of very high profile projects, and DigitalOcean has been an extremely reliable provider for years.

Migrating my attachments into git-lfs was trivial, but it did rewrite every commit, which is something to be mindful of if you're collaborating between people or devices.

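The migration itself boiled down to a couple of commands. Roughly, and with illustrative include patterns — the force-push at the end is exactly why collaborators need a heads-up:

```bash
# Rewrite history so matching files become LFS pointers on every branch,
# then push the rewritten history up.
git lfs migrate import --everything --include="*.pdf,*.png,*.psd"
git push --force --all origin
```
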

I don't intend to get more aggressive with adding large media attachments to my vault; I *prefer* plaintext when it's an option. Backing up my notes was only one item on a list of reasons I stood gitea up; in the coming weeks I'm going to work on using Drone to automate blog posts and use that as a springboard into more automation.
@@ -1,120 +0,0 @@
---
title: "Golang Quantize"
date: 2018-04-22T17:30:51Z

tags: ["golang","math"]
series: ''
draft: false
---

# The Goal

Before going too deep into the implementation details of Genesis, I'll touch on the high level aspect of quantization. Quantization is a technique used to map arbitrary inputs into a well defined output space. This is, practically speaking, a hash function. When the term 'quantization' is used, however, it's typically numeric in nature. Quantization is typically used in audio/image processing to compress inputs for storage or transmission.

# Quantizing OpenSimplex
My use case is a little less straightforward. The OpenSimplex implementation I'm using as a default noisemap generator produces values in the [interval](https://en.wikipedia.org/wiki/Interval_(mathematics)#Including_or_excluding_endpoints) [-1,1]. The 3d simplex function produces continuous values suitable for a noisemap, but the values are by nature infinitesimally small and diverge from their neighbors in similarly small quantities. Here's an example of a small sampling of 25 points:

```
[1.9052595476929043e-65 0.23584641815494023 -0.15725758120580122 -0.16181229773462788 -0.2109552918614408 -0.24547524871149487 0.4641016420951697 0.08090614886731387 -0.3720484238283594 -0.5035758520116665 -0.14958647968356706 -0.22653721682847863 0.4359742698469777 -0.6589156578369094 -1.1984697154842467e-66 0.2524271844660192 -0.3132366454912306 -0.38147748611610527 5.131908781688952e-66 0.3814774861161053 0.07543249830197025 0.513284589875744 -1.4965506447200717e-65 0.031883015701786095 0.392504694554317]
```

As you can see, there are some reasonably comprehensible values, like `-0.50357585`, `-0.222`, `0.075432`, but there's also values like `-1.1984697154842467e-66` and `1.9052595476929043e-65`. Mathematically, these values end up being continuous and suitable for generating a noisemap, but for a human being doing development work and examining raw data, it's almost impossible to have any intuitive grasp of the numbers I'm seeing. Furthermore, when I pass these values to a visualization tool or serialize them to a storage format, I want them to be meaningful and contextually "sane". The noisemap values describe the absolute height of terrain at a given (X,Y) coordinate pair. If we assume that terrain height is measured in meters, a world whose total height ranges between -1 meter and 1 meter isn't very sensible. A good visualization tool can accommodate this data, but it's not good enough for my purposes.

To that end, I'm working on implementing a quantization function to scale the [-1,1] float values to arbitrary user defined output spaces. For example, a user might desire a world with very deep oceans, but relatively short mountain features. They should be able to request from the map generator a range of [-7500, 1000], and Quantize() should evenly distribute inputs between those desired outputs.

In this way, I'll kill two birds with one stone. The first bird has been fudging coefficients in the noise generation algorithm, and at the "edges" of the `Eval3` function, to modify the "scale" of the output. This has been an extremely troublesome process because I do not have enough higher maths education to fully grasp the entirety of the simplex noise function, and because there's so many coefficients and "magic numbers" involved in the process that mapping each of their effects on each other and the output simultaneously is a very daunting task. The second bird is a longer term goal of Genesis involving detailed customizability of terrain output.

By virtue of having an effective quantization function, a user will be able to customize terrain to their liking in a well-defined manner, uniformly scaling the map however they desire.

# The problem
Unfortunately, my hand-rolled implementation of Quantize is not yet fully functional. Open source/free documentation on quantization of floating point values to integer domains is very sparse. I was only able to find one StackOverflow post where someone posted their MATLAB implementation, which was marginally useful but largely incomprehensible, as I currently do not know or own MATLAB.

With some trial and error, however, I was able to get very close to a working *and* correct implementation:

```
Output Domain: {Min:-5 Max:5 Step:1}
[1.9052595476929043e-65 0.23584641815494023 -0.15725758120580122 -0.16181229773462788 -0.2109552918614408 -0.24547524871149487 0.4641016420951697 0.08090614886731387 -0.3720484238283594 -0.5035758520116665 -0.14958647968356706 -0.22653721682847863 0.4359742698469777 -0.6589156578369094 -1.1984697154842467e-66 0.2524271844660192 -0.3132366454912306 -0.38147748611610527 5.131908781688952e-66 0.3814774861161053 0.07543249830197025 0.513284589875744 -1.4965506447200717e-65 0.031883015701786095 0.392504694554317]
[-1 1 -2 -2 -3 -3 3 0 -4 -6 -2 -3 3 -7 -1 1 -4 -4 -1 2 0 5 -1 0 2]
```

Unfortunately, my current implementation is outputting values outside the provided domain. It almost looks like the output is twice what it should be, but I know that doing things incorrectly and just dividing by 2 afterwards isn't sufficient. I've got relatively few leads at the moment, but I'm not giving up.

# The Code

{{< highlight go "linenos=table">}}
package main

import (
	"fmt"
	noise "github.com/therealfakemoot/genesis/noise"
	// "math"
	// "math/rand"
)

// Domain describes the integer space to which float values must be mapped.
type Domain struct {
	Min  float64
	Max  float64
	Step float64
}

// func quantize(delta float64, i float64) float64 {
// 	return delta * math.Floor((i/delta)+.5)
// }

func quantize(steps float64, x float64) int {
	if x >= 0.5 {
		return int(x*steps + 0)
	}
	return int(x*(steps-1) - 1)
}

// Quantize normalizes a given set of arbitrary inputs into the provided output Domain.
func Quantize(d Domain, fs []float64) []int {
	var ret []int
	var steps []float64

	for i := d.Min; i <= d.Max; i += d.Step {
		steps = append(steps, i)
	}

	stepFloat := float64(len(steps))
	// quantaSize := (d.Max - d.Min) / (math.Pow(2.0, stepFloat) - 1.0)

	for _, f := range fs {
		ret = append(ret, quantize(stepFloat, f))
	}

	fmt.Printf("Steps: %v\n", steps)
	// fmt.Printf("Quanta size: %f\n", quantaSize)

	return ret
}

func main() {
	d := Domain{
		Min:  -5.0,
		Max:  5.0,
		Step: 1.0,
	}

	n := noise.NewWithSeed(8675309)

	var fs []float64
	for x := 0.0; x < 10.0; x++ {
		for y := 0.0; y < 10.0; y++ {
			fs = append(fs, n.Eval3(x, y, 0))
		}
	}

	// for i := 0; i < 20; i++ {
	// 	fs = append(fs, rand.Float64())
	// }

	v := Quantize(d, fs)

	fmt.Printf("%v\n", fs)
	fmt.Printf("%v\n", v)
}
{{< / highlight >}}

@@ -1,45 +0,0 @@
---
title: "Making Noise"
date: 2019-02-28T19:37:06Z
draft: false
toc: true
images:
tags:
  - genesis
  - golang
---

# The Conceit
I've written about Genesis [before](/posts/genesis-roadmap/), but it's got a lot of complexity attached to it, and the roadmap I originally laid out has shifted a bit. For this post I'm focusing solely on Phase 1, the generation of geography. This is obviously a fundamental starting point, and it has roadblocked my progress on Genesis for quite some time ( somewhere on the order of 8 years or so ).

My [original implementation](https://github.com/therealfakemoot/genesis_retired) was written in Python; this was...serviceable, but not ideal. Specifically, visualizing the terrain I generated was impossible at the time. Matplotlib would literally lock up my entire OS if I tried to render a contour plot of my maps if they exceeded 500 units on a side. I had to power cycle my desktop computer many times during testing.

Eventually, I jumped from Python to Go, which was a pretty intuitive transition. I never abandoned Genesis, spiritually, but until this point I was never able to find technology that felt like it was adequate. Between Go's natural performance characteristics and the rise of tools like D3.js, I saw an opportunity to start clean.

It took a year and some change to make real progress.

# Making Noise
Noise generation, in the context of "procedural" world or map generation, describes the process of creating random numbers or data in a way that produces useful and sensible representations of geography. To generate values that are sensible, though, there are some specific considerations to keep in mind. Your typical (P)RNG generates values with relatively little correlation to each other. This is fine for cryptography, rolling dice, and so on, but it's not so great for generating maps. When assigning a "height" value to point (X, Y), you may get 39; for (X+1, Y), you might get -21, and so on.

This is problematic because adjacent points on the map plane can vary wildly, leading to inconsistent or impossible geography: sharp peaks directly adjacent to impossibly deep valleys with no transition or gradation between. This is where noise functions come in. Noise functions have the property of being "continuous", which means that when given inputs that are close to each other, the outputs change smoothly. A noise function, given (X, Y) as inputs, might produce 89; when given (X+1, Y) it might produce 91. (X, Y+1) could yield 87. All these values are close together, and as the inputs vary, the outputs vary *smoothly*.

There seem to be two major candidates for noise generation in amateur projects: [Perlin noise](https://en.wikipedia.org/wiki/Perlin_noise) and [Simplex noise](https://en.wikipedia.org/wiki/Simplex_noise). Perlin noise was popular for a long while, but eventually deemed slow and prone to generating artifacts in the output that disrupted the "natural" feel of its content. Simplex noise is derived from the idea of extruding triangles into higher and higher dimensional space, but beyond that I haven't got a single clue how it works under the hood. I do know that it accepts integers ( in my use case, coordinates on the X,Y plane ) and spits out a floating point value in the range of `[-1,1]`.

# Quantization
This is something I've written about [before](/golang-quantize/), but shockingly, I was entirely wrong about the approach to a solution. At best, I overcomplicated it. Quantization is, using technical terms, transforming inputs in one interval to outputs in another. Specifically, my noise generation algorithm returns floating point values in the range `[-1, 1]`. Conceptually, this is fine; the values produced for adjacent points in the x,y plane are reasonably similar.

Practically speaking, it's pretty bad. When troubleshooting the noise generation and map rendering, trying to compare `1.253e-64` and `1.254e-64` is problematic; these values aren't super meaningful to a human. When expressed in long-form notation, it's almost impossible to properly track the values in your head. Furthermore, the rendering tools I experimented with would have a lot of trouble dealing with infinitesimally small floating point values, from a configuration perspective if not a mathematical one.

In order to make this noise data comprehensible to humans, you can quantize it using, roughly speaking, three sets of parameters:

1) The value being quantized
2) The maximum and minimum input values
3) The maximum and minimum output values

Given these parameters, the function is `(v - input.Min) * (output.Max - output.Min) / (input.Max - input.Min) + output.Min`. As a quick sanity check: with inputs in [-1, 1] and a desired output range of [0, 100], a noise value of 0.25 maps to (0.25 - (-1)) * (100 - 0) / (1 - (-1)) + 0 = 62.5. I won't go into explaining the math, because I didn't create it and probably don't understand it fully. But the important thing is that it works; it's pure math with no conditionals, no processing. As long as you provide these five parameters, it will work for all positive and negative inputs.

Now, with the ability to scale my simplex noise into ranges that are useful for humans to look at, I was ready to start generating visualizations of the "maps" produced by this noise function. At long last, I was going to see the worlds I had been creating.

# Until Next Time

This is where I'll close off this post and continue with the [solutions](/posts/unfolding-the-map/).
@@ -1,103 +0,0 @@
---
title: "Path of Market: Part 1"
date: 2019-07-08T10:45:07-04:00
draft: false
toc: true
images:
tags:
  - golang
  - prometheus
---

Path of Exile is an ARPG similar to Diablo: procedurally generated maps, kill monsters to get loot so you can kill monsters faster. It's pretty fun and offers a really flexible build system that allows for a lot of creativity in how you achieve your goals. Of particular interest is the API exposed by the development team.

# Stashes

Each character has a set of "stashes". These are storage boxes which can be flagged as public. Public boxes are exposed via the [api](https://www.pathofexile.com/developer/docs/api-resource-public-stash-tabs) endpoint. This API is interesting in how it handles paging; each request gives you an arbitrary number of stashes, and a GUID indicating the last stash provided by the API. Subsequent requests to the API can include the aforementioned GUID in the `id` url parameter to request the next batch of stashes. This is a sorta crude stream, in practice. Maybe one day they'll reimplement it as a websocket API or something fun like that.

# The Market

This API is what powers the market for the Path of Exile community. There's [quite](https://poe.watch/prices?league=Legion) [a](https://poe.trade/) few sites and tools leveraging this API, including the [official market site](https://www.pathofexile.com/trade/search/Legion). These market sites are very handy because they offer complex search functionality and various levels of "live" alerting when new items become available.

What I found fascinating, though, is the ability to monitor trends, more than finding individual items. As a former EVE player, I was used to relatively advanced market features like price histories, buy/sell orders, advanced graphing options, etc., and it's something I've missed everywhere I've gone since. After some investigation, I found that Prometheus and Grafana could offer a powerful base to build upon. Prometheus is a tool for storing time-based "metrics", and Grafana is a visualizer that can connect to Prometheus and other data sources and provide graphs, charts, tables, and all sorts of tools for seeing your data. Below is an example of a chart showing memory usage on a Kubernetes pod.

{{< figure src="/img/grafana-mem-usage.png" caption="Grafana memory usage chart">}}

# First Steps

Obviously, the first step is talking to the official Path of Exile API and getting these stashes into a format that I can work with programmatically. The JSON payload was moderately complex, but with the help of some [tooling](https://mholt.github.io/json-to-go/) and unit testing I was able to build out some Go structs that contained all the metadata available.

A particularly fun challenge was this one, describing how "gem" items could be slotted into an item. This was a challenge because the API can return either a string *or* a boolean for a specific set of fields. This is, in my opinion, not a "well behaved" API, but you don't always get the luxury of working with ones that are well behaved. This unmarshaling solution helps account for this inconsistency and populates the relevant fields accordingly.

{{< highlight go "linenos=table" >}}
|
|
||||||
|
|
||||||
type SocketAttr struct {
|
|
||||||
Type string
|
|
||||||
Abyss bool
|
|
||||||
}
|
|
||||||
|
|
||||||
func (sa *SocketAttr) UnmarshalJSON(data []byte) error {
|
|
||||||
var val interface{}
|
|
||||||
|
|
||||||
err := json.Unmarshal(data, &val)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
switch val.(type) {
|
|
||||||
case string:
|
|
||||||
sa.Type = val.(string)
|
|
||||||
case bool:
|
|
||||||
sa.Abyss = val.(bool)
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
type SocketColour struct {
|
|
||||||
Colour string
|
|
||||||
Abyss bool
|
|
||||||
}
|
|
||||||
|
|
||||||
func (sc *SocketColour) UnmarshalJSON(data []byte) error {
|
|
||||||
var val interface{}
|
|
||||||
|
|
||||||
err := json.Unmarshal(data, &val)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
switch val.(type) {
|
|
||||||
case string:
|
|
||||||
sc.Colour = val.(string)
|
|
||||||
case bool:
|
|
||||||
sc.Abyss = val.(bool)
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
{{< /highlight >}}
|
|
||||||
|
|
||||||
With that done, I had passing tests that parsed a variety of sample items I had manually extracted from the API. Next was turning these into a "stream" that I could process. Channels seemed like a natural fit for this task; the API did not guarantee any number of results at any time and only declares that you periodically request the last ID you were given by the API.
The full code is [here](https://github.com/therealfakemoot/pom/blob/master/poe/client.go), but I'll highlight the parts that are interesting, and not standard issue HTTP client fare.
{{< highlight go >}}
d := json.NewDecoder(resp.Body)
err = d.Decode(&e)
if err != nil {
	sa.Err <- StreamError{
		PageID: sa.NextID,
		Err:    err,
	}
	log.Printf("error decoding envelope: %s", err)
	continue
}
log.Printf("next page ID: %s", e.NextChangeID)
sa.NextID = e.NextChangeID

for _, stash := range e.Stashes {
	sa.Stashes <- stash
}
{{< /highlight >}}
This snippet is where the magic happens. JSON gets decoded, and errors are pushed into a channel for processing. Finally, stashes are pushed into their own channel to be consumed outside the client, in the main loop. And here's where I'll leave off for now. There's quite a bit more code to cover, and I'm still refactoring pieces of it relatively frequently, so I don't want to write too much about things that I expect to change.
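
For completeness, the consuming side is just a couple of loops draining those channels. A minimal sketch, where `stream` and `processStash` are stand-ins for whatever the main loop actually does with the data:

{{< highlight go >}}
// Log failures as they arrive so a bad page doesn't stall the stream.
go func() {
	for serr := range stream.Err {
		log.Printf("page %s failed: %s", serr.PageID, serr.Err)
	}
}()

// Hand each stash off for processing, e.g. extracting prices to feed
// into Prometheus.
for stash := range stream.Stashes {
	processStash(stash)
}
{{< /highlight >}}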
@ -1,76 +0,0 @@
---
title: "Prometheus Primer"
date: 2019-07-04T14:56:12-04:00
draft: false
toc: true
tags:
- prometheus
---

# Querying Basics
Queries run against *metrics*, which are sets of timeseries data. They have millisecond granularity and are stored as floating point values.
# Using Queries
Queries reference individual metrics and perform some analysis on them. Most often you use the `rate` function to "bucket" a metric into time intervals. Once the metric in question has been bucketed, you can do comparisons.
```
(rate(http_response_size_bytes[1m])) > 512
```
This query takes the size of HTTP responses in bytes, computes its per-second rate over one-minute windows, and drops anything below 512 bytes per second. Variations on this query could be used to analyse how bandwidth is being consumed across your instrumented processes; a spike or a rising trend in high-bandwidth requests could trigger an alert before data overages break the bank.
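
For context on where a metric like `http_response_size_bytes` comes from in a Go service, here's a rough sketch using the `client_golang` library. The helper choices, buckets, and `appHandler` are assumptions on my part, not code from any of the services behind these graphs:

```go
// import (
//     "net/http"
//     "github.com/prometheus/client_golang/prometheus"
//     "github.com/prometheus/client_golang/prometheus/promhttp"
// )

respSize := prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "http_response_size_bytes",
		Help:    "Size of HTTP responses in bytes.",
		Buckets: prometheus.ExponentialBuckets(128, 2, 10),
	},
	[]string{"code", "method"},
)
prometheus.MustRegister(respSize)

// Wrap the application handler so every response's size is observed,
// then expose /metrics for Prometheus to scrape.
http.Handle("/", promhttp.InstrumentHandlerResponseSize(respSize, appHandler))
http.Handle("/metrics", promhttp.Handler())
```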
```
sum without(instance, node_name, hostname, kubernetes_io_hostname) (rate(http_request_duration_microseconds[1m])) > 2000
```
This query looks at the metric `http_request_duration_microseconds`, buckets it into one minute intervals, aggregates away the per-instance labels, and then drops all data points smaller than 2000 microseconds. Increases in response durations might indicate network congestion or other I/O contention.
## Labels
Prometheus lets you apply labels to your metrics. Some are specified in the scrape configurations; these are usually things like the hostname of the machine, its datacenter or geographic region, etc. Instrumented applications can also specify labels when generating metrics; these are used to indicate things known at runtime, like the specific HTTP route ( e.g. `/blog` or `/images/kittens` ) being measured.
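
For example, a Go service using `client_golang` might attach the route as a label at the point where the metric is generated. A minimal sketch; the metric and route names here are just illustrative:

```go
// import "github.com/prometheus/client_golang/prometheus"

requestsByRoute := prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "HTTP requests served, partitioned by route.",
	},
	[]string{"route"},
)
prometheus.MustRegister(requestsByRoute)

// Inside each handler, increment the counter with that route's label.
requestsByRoute.WithLabelValues("/blog").Inc()
requestsByRoute.WithLabelValues("/images/kittens").Inc()
```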
Prometheus queries let you specify labels to match against, which controls how your data is grouped together; you can query against geographic regions, specific hostnames, and so on. Matching also supports regular expressions, so you can match against patterns instead of strict literal values.
```
(rate(http_response_size_bytes{kubernetes_io_hostname="node-y3ul"}[1m])) > 512
(rate(http_response_size_bytes{version=~"v1\.2\.*"}[1m])) > 512
```
An important consideration is that when querying, prometheus considers metrics with any difference in labels as distinct sets of data. Two HTTP servers running in the same datacenter can have different hostnames in their labels; this is useful when you want to monitor error rates per-container but can be detrimental when you want to examine the data for the datacenter as a whole.
To that end, prometheus gives you the ability to strip labels off the metrics in the context of a given query. This is useful for generating aggregate reports.
```
sum without(instance, node_name, hostname, kubernetes_io_hostname)(rate(go_goroutines[1m]))
```
# Alerts
All of this is fun to play with, but none of it is useful if you have to run the queries by hand all the time. Prometheus can generate "alerts", but on their own these don't go anywhere; they're defined in the config file and look like this:
```
groups:
- name: example
  rules:
  - alert: HighErrorRate
    expr: job:request_latency_seconds:mean5m{job="myjob"} > 0.5
    for: 10m
    labels:
      severity: page
    annotations:
      summary: High request latency
  - alert: TotalSystemFailure
    expr: avg_over_time(up{job="appName"}[5m]) < .5
    for: 5m
    labels:
      severity: page
    annotations:
      summary: Large scale application outage
```
Alerts can have labels and metadata applied much like regular data sources. On their own, however, they don't *do* anything. Fortunately, the prometheus team has released [AlertManager](https://github.com/prometheus/alertmanager) to work with these alerts. AlertManager receives these events and dispatches them to various services, ranging from email to slack channels to VictorOps or other paging services.
AlertManager lets you define teams and hierarchies that alerts can cascade through, and create conditions during which some subsets of alerts are temporarily muted; if a higher priority event is breaking, more trivial alerts can be ignored for a short time if desired.
@ -1,46 +0,0 @@
---
title: "Selinux and Nginx"
date: 2018-04-13T16:28:20Z
tags: ["selinux","nginx","fedora"]
series: ''
draft: false
---

# SELinux
DigitalOcean's Fedora droplets include SELinux. I don't know a great deal about SELinux but it's presumably a good thing for helping prevent privilege escalations and so on. Unfortunately, it can be troublesome when trying to do simple static site stuff with nginx.
## nginx
With Fedora, nginx, and SELinux all in use simultaneously, whether nginx may serve files owned/grouped under a user other than its own is governed by more than the usual Unix permissions; this matters a great deal when working with something like hugo, where the site is built as a regular user and copied into the web root. SELinux monitors/intercepts syscalls relating to file access and approves/denies them based on context, role, and type. SELinux concepts are covered pretty thoroughly [here](https://www.digitalocean.com/community/tutorials/an-introduction-to-selinux-on-centos-7-part-1-basic-concepts) and [here](https://www.digitalocean.com/community/tutorials/an-introduction-to-selinux-on-centos-7-part-2-files-and-processes).
By default, nginx runs under the SELinux `system_u` user, the `system_r` role, and the `httpd_t` type:
```
$ ps -efZ|grep 'nginx'
system_u:system_r:httpd_t:s0 root 30543 1 0 Apr09 ? 00:00:00 nginx: master process /usr/sbin/nginx
system_u:system_r:httpd_t:s0 nginx 30544 30543 0 Apr09 ? 00:00:02 nginx: worker process
system_u:system_r:httpd_t:s0 nginx 30545 30543 0 Apr09 ? 00:00:00 nginx: worker process
$
```
Roughly speaking, SELinux compares nginx's user, role, and type against the same values on any file it's trying to access. If the values conflict, SELinux denies access. In the context of "I've got a pile of files I want nginx to serve", this denial manifests as a 403 error. This has caused issues for me repeatedly. genesis generates terrain renders as directories containing html and json files, and during development and debugging I just copy these directly into the `/var/www` directory for my renders.ndumas.com subdomain. Before I discovered the long term fix described below, every one of these new pages threw a 403 because the directory and its files did not have the `httpd_sys_content_t` type set, so nginx was denied permission to read them.
A correctly set directory looks like this:
```
$ ls -Z
unconfined_u:object_r:httpd_sys_content_t:s0 demo/        unconfined_u:object_r:user_home_t:s0 sampleTest2/  unconfined_u:object_r:httpd_sys_content_t:s0 test2/
unconfined_u:object_r:httpd_sys_content_t:s0 sampleTest1/ unconfined_u:object_r:httpd_sys_content_t:s0 test1/
$
```
# The solution
There are two solutions to serving static files in this way. You can set the `httpd_sys_content_t` type for a given directory and its contents, or you can alter SELinux's configuration regarding the access of user files.
## Short Term
The short term fix is rather simple: `chcon -R -t httpd_sys_content_t /var/www/`. This sets a type value on the directory and its contents that tells SELinux that the `httpd_t` process context can read those files.
## Long Term
Unfortunately, in the context of my use case, I had to run that `chcon` invocation every time I generated a new page. I hate manual labor, so I had to find a way to make this stick. Fortunately, [StackOverflow](https://stackoverflow.com/questions/22586166/why-does-nginx-return-a-403-even-though-all-permissions-are-set-properly#answer-26228135) had the answer.
You can tell SELinux "httpd processes are allowed to access files owned by other users" with the following command: `setsebool -P httpd_read_user_content 1`. This is pretty straightforward and I confirmed that any content I move into the /var/www directories can now be read by nginx.
@ -1,34 +0,0 @@
---
title: "Standing Up Gogs"
date: 2018-02-20T23:57:30Z
tags: ["nginx"]
draft: false
---

# The Reveal
It took me way longer than I'd like to admit, but I finally discovered why I was not able to properly reverse proxy a subdomain ( [git.ndumas.com](http://git.ndumas.com) ). As it turns out, SELinux was the culprit. I wanted to write a short post about this, partly to reinforce my own memory and partly to make this sort of thing easier to find for others encountering 'mysterious' issues on SELinux Fedora installations like DigitalOcean droplets.
# Symptoms
SELinux interference with nginx doing its business will generally manifest as a variation on "permission denied". Here's one such error message:
```
2018/02/20 23:32:51 [crit] 4679#0: *1 connect() to 127.0.0.1:3000 failed (13: Permission denied) while connecting to upstream, client: xxx.xxx.xxx.xxx, server: git.ndumas.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:3000/favicon.ico", host: "git.ndumas.com", referrer: "http://git.ndumas.com/"
```
# Solution
Resolving this requires whitelisting whatever action/context SELinux is restricting. There's a useful tool, `audit2allow`, that will build a module for you automatically. You can invoke it like this:
```
sudo cat /var/log/audit.log|grep nginx|grep denied|audit2allow -M filename
```
Once you've done this, you'll have a file named `filename.pp`. All you have to do next is:
```
semodule -i filename.pp
```
SELinux will revise its policies and any restrictions nginx encountered will be whitelisted.
@ -1,10 +0,0 @@
---
title: "Unfolding the Map"
date: 2019-02-28T20:09:50Z
draft: true
toc: false
images:
tags:
- untagged
---

@ -1 +0,0 @@
Subproject commit 2b14b3d4e5eab53aa45647490bb797b642a82a59