ROBBIE LYMAN
Seamstress is an art engine
Seamstress is an "art engine". That's not a term, as far as I'm aware, but I think it describes accurately where I'm hoping to take it. Programmers might be familiar with the concept of a "game engine", like Unreal Engine, Unity, Godot, or Love2D. None of these programs are games, rather, they enable building games by attempting to let a user focus on what is expressive and joyful about making a game rather than what can feel like drudgery.
I started writing Seamstress because I was lonely and a little bored. It grew out of a love letter to the monome norns, a little "sound computer" which orchestrates an inspiring and accessible palette for a truly special community by weaving together SuperCollider, a general-purpose synthesis and musical coding framework, softcut, a powerful audio buffer manipulation library, and Lua scripting for designing sequencers, user interfaces, speaking with external devices, drawing to a screen, and so on.
- Beginnings
- From C to Zig
- Version 1.0.0
- Gotta go fast
- ...By reducing parallelism?
- Where to keep the stuff?
- How to abstract it?
- Lean in to what Lua provides
- The refactor hits main() again...
- Forward!
- I am the runtime
- Released yet incomplete
Beginnings
As a child with a demonstrated interest in computers, I was excited by the concept of programming—maybe more than the execution. I remember my dad had a couple C and C++ textbooks at home; on one family vacation I read what might have been "The C Programming Language" as if it were a novel during long car rides. I remember I checked out a punchy book on Design Patterns (inspired by but not the Gang of Four book) in Java from the local library—at least twice. Despite this, I didn't do much programming—once I tried learning enough about RSA encryption to implement it in Java, but what I wrote compiled and didn't work, and I didn't attempt to fix it.
In college most “programming” that I did was in LaTeX. Although LaTeX has a compilation step and is Turing complete, it’s difficult for me to think of it as a programming language, since I mainly use it to create documents with more ease and control than I could with Microsoft Word, for instance. I jumped into learning LaTeX fairly early among my math major peers, but honestly I did so mainly because improving at LaTeX felt accessible and tangible in a way that doing well on math problem sets did not.
I learned about Vim pretty early in college but didn't commit to properly learning it (as Neovim) until grad school. Configuring Neovim was my introduction to Lua and, after a long break, programming qua programming again.
Towards the end of grad school, norns was a key one of several ways back into my interest in coding. Monome has a truly excellent collection of studies, which I eagerly completed at first, only to be left with an "ok, now what?" feeling. In 2021, though, I suddenly wanted to try my hand at making a little 4-track "DAW" in norns, inspired by the Teenage Engineering OP-1's cheeky little "tape" metaphor. To my dismay, I discovered that the softcut library didn't support reading in a sound file while also preserving existing data in the buffer. Since it was 2021, somehow it felt like all of us were hanging out on Discord constantly—Ezra Buchla, the author of softcut, encouraged me to add the functionality I wanted. Generously, they pointed out the places where I should look in the codebase to find out what was what and what would need to be added. Going mostly by pattern-matching and a vague sense of what things meant, I was able to contribute my first lines of C, C++ and probably also Lua to a codebase used by thousands. I'm really so grateful to Ezra for the encouragement.
The script that I wrote, waver, wasn't particularly fun to actually use, but writing it taught me a lot. I've written a couple other scripts for norns since, lately mostly sketchier attempts at finding a way of getting closer to the kind of touching sound you can do as an acoustic instrumentalist or a singer.
But anyway, so I was lonely and a little bored—spending a semester in Montreal that I pinned perhaps too many hopes on. I had brought my norns, but I found I mostly wanted to find ways of bringing that same playful energy to my laptop. I wrote a little Lua script that would allow monome's Arc device to act as a controller for TouchDesigner, but it felt somehow fragile. Somehow one thing led to another and I was sitting on the bed in my sublet with one pane of my text editor open to the source code of Matron, the C program Brian Crabtree and Ezra Buchla (and collaborators and contributors) wrote to house the norns Lua environment, and started typing C code into the other pane.
From C to Zig
I posted an initial commit on April 30, 2023. I'm really grateful to the folks in the norns study group Discord who showed enthusiasm (or even just willingness to chat) early on. Jordan Besly and Dani Derks were the first contributors besides myself, and Dani especially really gave a lot of time to the project—Seamstress's params + menu code in version 1, for example, was a beautiful gift from them.
Since Matron is written in C (and Lua itself is implemented as a C library; indeed, it is most often used as an extension language, designed to be embedded in codebases written in other languages), Seamstress began its life in C. The C programming language is an incredible achievement, to be honest. Its longevity and present ubiquity (even if it is often derided as unfriendly and easy to mess up) are unparalleled. There are lots of things to dislike about writing C, but there's lots to love too—I read somebody describe the good feeling that writing C gives them, and I agree. By June 10, 2023, though, it was clear to me that I was not very good at it. I was hitting persistent segfaults, and it became clear that I was either going to have to switch to a language that would hopefully guide me a little more firmly down the straight and narrow, or learn to use a debugger. I did both—on June 10, 2023 I pushed a commit titled "feat: rewrite in zig".
I chose Zig for not particularly enlightened reasons. The only other serious contenders were C++ and Rust, both languages with angle brackets and double colons. (I actually don't have anything against these bits of syntax, especially given that C's only answer is more macros, more underscores.) In actual fact, perhaps the main deciding factor was that Nathan Craddock's ziglua package seemed friendly and feature-complete, especially as compared with similar Rust offerings. C++, for better or worse, was never a real consideration.
Working on Seamstress has really been a profoundly world-expanding learning experience for me. Earlier in drafting this document I started writing about things that Zig taught me, and some of those things are truly foundational, like "what does a pointer mean and how do I work with one". To be clear, one can spend a deep, complete and fulfilling life in computing without a truly clear understanding of pointers, but on my particular journey, this has started to look like peanuts compared to other concepts.
Version 1.0.0
With encouragement from Brian, Dani and many others, I released version 1.0.0 of Seamstress on October 19, 2023.
Structurally, Seamstress, like many similar programs, spends most of its time in an event loop, waiting for input, e.g. from the user, and running code in response to that input. In version 1, a complete collection of events is determined beforehand and maintained in code in events.zig. Each event has a bit of data associated to it (in Zig, the event data type is an example of what is called a "tagged union"; in Rust the same concept goes by the name "enum"), and the event loop attempts to pull events from a queue, handle them, and then discard them. To populate the queue, Seamstress version 1 spawns various threads, each one responsible for collecting user input and putting it on the queue. The main() function for Seamstress version 1 makes a number of calls to various initialization functions in different files to spawn these threads, and then starts running the event loop.
This structure is okay, but having worked with it for a while, I found it starting to grate. For one thing, passing data between threads in this way means potentially having to wait for memory allocation and deallocation as data is copied. (The queue itself, which I wrote based on Zig's PriorityQueue and FIFO data structures, does not allocate. I think I decided "hey, if we have more than a couple thousand events in the queue, maybe we should just crash".)
Initially I began by toying with using Zig's Writer and Reader abstractions as a way of avoiding this memory allocation. Since Zig doesn't have C++'s classes or Rust's traits, the precise form of the Writer and Reader structs is evolving, caught between the runtime cost of "vtable lookups" on the one hand and the code bloat of "monomorphization" on the other. But the basic idea is pretty clear: a Writer is something that knows how to take in a sequence of bytes and write them somewhere else with a function called write. A Reader is the inverse; it is something you consume a sequence of bytes from. By passing a Writer or Reader along with an event instead of a copy of the string payload, one can avoid making that copy entirely.
This change made it seem possible to reduce the coupling between the list of events and the rest of the code significantly—many different event handlers now took a Reader parameter and maybe one other argument. Suddenly it became possible to think about an "abstract" or "generic" event; it might look like this:
pub const Event = struct {
    // type-erased pointer to whatever state the handler needs
    userdata: *anyopaque,
    reader: std.io.AnyReader,
    handler: *const fn (*anyopaque, std.io.AnyReader) void,

    pub fn handle(ev: Event) void {
        ev.handler(ev.userdata, ev.reader);
    }
};
Gotta go fast
Thinking this way, I started to wonder about other ways I could improve the structure of the event loop. There's this naïve understanding of programming that I had very much bought into that goes "wanna go fast? add more threads!" One problem with this kind of thinking is that in order to make this happen in practice, often there are bottlenecks where it matters that only one thread be allowed to operate on an object at a time. Seamstress version 1 manages this by (sensibly!) using a mutex, which one can think of as a kind of "lock" on a section of code.
Like, imagine you're at a popular café in the city and you're using their single-occupancy bathroom. To access the bathroom, you ask the barista for the key, and they give it to you—probably it's attached to something silly and obvious in hopes that you won't forget to return the key when you're done. But if it's really busy, you might go up to the barista and find that they don't have the key—somebody else must be in the bathroom and you have to wait.
Now, there are other ways of managing state; often these go by the name of "lock-free" and/or "wait-free" programming methods. For most of grad school, I lived in an apartment with my brother in Somerville, Massachusetts which had one bathroom whose door might have been capable of locking, but at least didn't make that fact obvious. So, we agreed "closed means locked". Lock-free programming relies on making the computer perform certain actions "atomically". Like, in the sense of an atom as being "indivisible". The collection of techniques for doing this goes by the term "atomics", but has nothing to do with bombs or reactors. One might imagine the atomic action of using the bathroom as being signaled by closing and opening the door. One might close the door with "acquire" semantics and open it again with "release" semantics.
Unfortunately, the truth is a little more complicated—because you're not locking, the computer needs a pretty literally atomic action from you to provide the same guarantees. If you're going to do a bunch of different things in the bathroom, in other words, that sequence of actions is not actually atomic. A more accurate use of the metaphor would be the way I've sometimes shared a bathroom with a romantic partner—if one of you needs the sink or the toilet, you can actually atomically acquire and release ownership of it, while the other person is free to perform other bathroom tasks in parallel.
Okay, we're really straining the bounds of metaphor here, but I promise I'm almost done. Something that's interesting about even the act of trying to make the metaphor more accurate is that the scope and parameters of who shares and how became intensely narrow. I think that's a fair description of my understanding of the uses of lock-free programming—it goes best when you have a high degree of trust in how each player plays its part.
Indeed, while trying to learn more about this stuff I watched a talk by King Butcher on parallelism and concurrency and was surprised and a little dismayed that the message of so much of the talk was "... don't, actually". He points out, for example, that if you have multiple threads, you lose determinacy—it's no longer possible for you to know for sure which of several parallel producers of events, for example, will actually have its event land on top of the queue.
...By reducing parallelism?
Slowly it started to dawn on me that having multiple threads made little sense. Indeed, although I realized the extent and relevance of the comparison only later, I started poking at Node.js to understand a little of how it responds to events and executes JavaScript code, and discovered that in large part it rigorously pretends, at least, to be a single-threaded application based around one giant poll() call. Naomi Seyfer encouraged me in this line of thinking. poll() is a function provided by libc, the C standard library, which waits, for example, until a file has data to read. This model of waiting (possibly with a timeout) for something to do is often called "polling" as a result, although many other APIs exist for the same concept.
Ultimately I decided to use Mitchell Hashimoto's libxev library, a cross-platform event loop library written for Zig that relies under the hood on, and attempts to unify the APIs of, a bunch of platform-specific spiritual successors to poll(). It is most heavily inspired by io_uring, a Linux-specific kernel interface which is interesting in part because it notifies the caller when work is complete, rather than when it is ready to be done. Indeed, it is able to do this because it asks the Linux kernel to do that work, like reading a file. Since no comparable API exists on macOS, for instance, on those platforms libxev is actually multi-threaded, often using, for example, blocking read() calls on other threads rather than relying on kqueue(). Still, I was grateful to discover that Mitchell (relying on work of King) has done the heavy lifting of coordinating these threads so that I don't have to.
Items on the event loop in libxev are elements of a platform-specific tagged union type called Completion. In many ways the Completion type is actually quite similar to the Event struct I sketched above. However, rather than managing a buffer of Completion objects, libxev asks users to supply their own and instead stores them in a linked list. Briefly I thought I would rise to this challenge by making use of Zig's MemoryPool(T) abstraction, which makes it cheap to create and reuse allocated objects of a single type T. But as I kept writing, it became clear that I needed a better way to keep track of Completion objects, particularly in the case of a request to exit Seamstress.
Where to keep the stuff?
This realization dovetailed with another desire. Seamstress version 1 maintains a discomfortingly large number of global variables. To be sure, storing these objects statically has advantages, but given that I had spent so much of the development process attempting to understand how I felt about software best practices (like SemVer or Conventional Commits) by following them, "avoid global mutable state" felt like a goal to strive for. I'm very happy that Seamstress version 2 has only two global variables, neither of which can be meaningfully accessed outside of the file in which they are declared. Since they are used to interface with the Zig standard library's logging and panicking functions, neither of which provides a user-specified parameter, global variables are unavoidable.
Thus most of Seamstress's modules started to look like each one might work best as a struct type. Various functions that Seamstress provides to Lua could be implemented as Lua closures, taking a reference (pointer) to an element of one of these struct types as an invisible parameter. These struct types could house the Completion objects, allowing easy access to them for reuse or cancelation.
How to abstract it?
But how to organize the modules? Each one appeared to have three verbs associated with it: "init", which provided initial setup for the struct type; "launch", which revealed its capabilities fully to Lua and placed recurring events onto the event loop; and "deinit", which provided cleanup in anticipation of shutting down (or crashing) Seamstress.
Zig does not provide a notion of an "interface" or a "trait". There are several ways of providing generic abstractions in Zig, and initially it seemed like the one that best fit my situation was the pattern exemplified by Zig's std.mem.Allocator type, which I wrote about in a previous blog post. Seamstress could create a list of these Module types and call init, launch or deinit on each of them as appropriate.
This worked okay for a while, until I started running into dependencies and clashes between different modules. Gracefully closing one module as part of the startup process of another one seemed very difficult.
Lean in to what Lua provides
In parallel with this, I rethought another design decision from Seamstress version 1: namely, Seamstress's effect on the global Lua namespace. Rather than having MIDI functionality available via a global table named midi, it seemed more polite to me to provide seamstress.midi. But then not every user of Seamstress wants to use MIDI functionality every time. Setting up and tearing down MIDI functionality is not particularly expensive, but it would help the versatility of the program to not launch MIDI functionality if it is not asked for. Initially, inspired by the Lua library penlight, I thought of using some Lua trickery to make seamstress.midi load lazily—only when referenced. This turned out to be tricky to get right and to not play nicely with the possibility of later unloading a module.
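The flavor of trickery I mean looks something like this: a metatable's __index function runs the real require only the first time the field is touched. (This is a generic sketch of the technique, not Seamstress's actual code.)

local seamstress = setmetatable({}, {
  __index = function(t, key)
    if key == "midi" then
      local midi = require 'seamstress.midi' -- only loaded on first access
      rawset(t, key, midi) -- cache it so __index never fires for midi again
      return midi
    end
  end,
})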
Eventually I realized that I was essentially trying to duplicate the effects of the Lua builtin function require, and that many libraries for Lua written in C hook into this functionality just fine. Indeed, the Lua C API makes it easy to provide a user-specified function to respond to require with, which loads the library and returns some token of its success, be that a function, a table, or a userdata.
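From pure Lua, the same hook is visible as the package.preload table; require consults it before searching the filesystem and caches whatever the loader returns. Here's a minimal sketch with a stand-in table (Seamstress registers its loader from the Zig side via the C API):

package.preload['seamstress.midi'] = function()
  local midi = {} -- stand-in for the table or userdata a real loader would build
  -- ... open devices, register callbacks, and so on ...
  return midi
end

local midi = require 'seamstress.midi' -- runs the loader exactly once
assert(midi == require 'seamstress.midi') -- later calls return the cached value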
(Full) Userdata in Lua are pretty magical. I wrote a little about them in a previous blog post. Crucially, in addition to being able to run setup code when require runs, each full userdata is able to provide a __gc metamethod which prepares the userdata for garbage collection.
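Lua 5.4 also lets tables carry a __gc metamethod, which makes the idea easy to demo without any C at all; this is a toy sketch, since Seamstress's userdata get their __gc from the Zig side:

local device = setmetatable({ name = 'fake midi device' }, {
  __gc = function(self)
    print('cleaning up ' .. self.name) -- runs when the object is collected
  end,
})

device = nil
collectgarbage('collect') -- the finalizer fires once the table is unreachable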
The refactor hits main() again...
By the time I realized this, I had already gone from imagining an incremental improvement to Seamstress version 1 to a full rewrite with libxev and Module types. Actually, I had made thousands of lines of progress towards writing a version 2.0.0 that made use of these insights. So this final realization was a little disorienting. Finally, it felt like, I had found the code structure I had been looking for, but the refactor to take advantage of this new structure quickly spread all the way up to main(), and then suddenly it was a rewrite within a rewrite. On August 15, 2024, I started a further branch on top of my version 2 branch and started from scratch yet again.
I spent most of August worrying—productively—rather than coding. For example, I wanted to make it so that—barring a crash—Seamstress exits if and only if the event loop has run out of things to do. This puts the burden on the author of Zig code for Seamstress to be prepared to correctly and quickly remove from the event loop every item she adds to it in the event that the user signals an intention to stop the program. Adding code to this effect to the Lua side of Seamstress also introduces the possibility for (accidentally?) messing up this code from Lua. I'm grateful to Brock Wilcox and Michael Dewberry for listening to my worries on this front and pointing out kindly that if somebody writes badly behaved Lua code that messes with processes like this, they are accepting responsibility for the results.
Forward!
Once I made a real start, the benefits were immediately apparent. For example, over the summer I realized that it would be useful to be able to unit test Seamstress from Lua. (Testing from Zig, which provides test as a reserved word, is beautifully simple, but Lua tests are also desirable to test the use of Seamstress in addition to the details of the implementation.) Rather than reinvent yet another wheel, I decided to attempt to make use of busted, a Lua testing framework available through the luarocks project. Doing this introduced a pair of interesting challenges.
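For anyone who hasn't met it, a busted spec is just a Lua file built out of describe and it blocks, and a test passes if its function returns without raising an error. A toy example of mine (not from Seamstress's test suite):

describe('a tiny example spec', function()
  it('adds numbers', function()
    assert.are.equal(37, 15 + 22)
  end)

  it('errors on bad input', function()
    assert.has_error(function() return nil + 1 end)
  end)
end)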
The first is that Seamstress needs to know how to access luarocks if it is available. Of course, one possibility is to require the user to run the commands output by the invocation luarocks path before starting to test Seamstress, but this is needlessly unfriendly. Since those commands set a couple of environment variables, it seemed better to attempt to set those environment variables just for Seamstress programmatically. At first I attempted to do this only on demand, but this step properly belongs to main(), so there it lives now.
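(The variables in question are LUA_PATH and LUA_CPATH, which Lua reads at startup to seed package.path and package.cpath. At runtime the effect amounts to something like the following, with example paths for a user-level Lua 5.4 rocks tree; the real values come from luarocks path.)

local home = os.getenv('HOME')
package.path = package.path .. ';' .. home .. '/.luarocks/share/lua/5.4/?.lua'
package.cpath = package.cpath .. ';' .. home .. '/.luarocks/lib/lua/5.4/?.so'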
I am the runtime
The other problem is that busted, like all of Lua, is fundamentally single-threaded. Lua provides a beautiful implementation of coroutines, but no "runtime" to churn through them until they are finished. Jeremy Kaplan jokingly observed to me, "ah, so you're the runtime," and every now and then when I see him I'll exclaim that yes, "I am the runtime!!" This joke really empowered me to solve this problem thoroughly. Explicitly, I needed a way to pause execution of Lua code in the middle of a busted test block (because a test succeeds if its function returns without errors) to allow the Seamstress event loop to tick through anything it has accumulated.
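Coroutines are the mechanism that makes this possible: a coroutine can yield in the middle of a function and be resumed later. Here's a toy sketch of what "being the runtime" means, with a plain while loop standing in for the Seamstress event loop:

local co = coroutine.create(function()
  print('test begins')
  coroutine.yield() -- pause here so the event loop can take a turn
  print('test finishes')
end)

while coroutine.status(co) ~= 'dead' do
  assert(coroutine.resume(co))
  -- ... in Seamstress, this is where pending events would be handled ...
end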
Here the analogy with Node.js started to become very apparent. In JavaScript, async/await is a beautiful abstraction over the notion of a Promise. A Promise is a little bit of code that JavaScript will place onto the event loop and execute when and how the event loop allows it to. Fundamentally (a strong but I think correct choice), Promise objects cannot be cancelled; the function they capture will always execute. I wrote seamstress.async.Promise with the same semantics in mind.
In JavaScript, Promise objects have a method called then which allows one to chain execution of code to the success or failure of the Promise function. In Lua, then is a reserved word, so currently I've settled on the archaic English anon, which is at least a little similar in flavor and helpfully short to type. I am, of course, always open to suggestions on ways to improve Seamstress. The seamstress.async function serves a similar purpose to JavaScript's async keyword. However, since Lua has no such keyword, it takes in a function and returns yet another function—calling the returned function instantiates the passed-in function as a Promise.
Rather than manually chaining Promise objects with then, JavaScript also provides the keyword await, which lifts the success or failure value of the Promise out into the current scope, which must itself be somehow asynchronous. Here's a Seamstress example of two ways of writing code with the same effect:
local async = require 'seamstress.async'
local fn = async(function(x, y) return x + y end)

-- both methods will print "we got 37"

-- method one: manual promise chaining
local p = fn(15, 22)
p:anon(function(sum)
  print("we got " .. sum)
end)

-- method two: async / await
local p = async.Promise(function()
  local sum = fn(15, 22):await()
  print("we got " .. sum)
end)
By making the busted tests run in an asynchronous context, I was able to use my async module to allow busted to test the event loop.
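Put together, a spec can now await the event loop from inside a test. This is a sketch along the lines of the earlier example, assuming the it block itself runs in the asynchronous context just described:

describe('seamstress.async', function()
  it('settles a Promise once the loop ticks', function()
    local async = require 'seamstress.async'
    local fn = async(function(x, y) return x + y end)
    local sum = fn(15, 22):await() -- yields until the event loop settles the promise
    assert.are.equal(37, sum)
  end)
end)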
Released yet incomplete
When I set out to write Seamstress version 2.0.0, I wanted to hold off on releasing it until it was at least as feature-complete as the previous release of Seamstress, version 1.4.7. I have given up on this goal, with a little embarrassment. I don't like that the current version of Seamstress is less capable than the previous one, or that most of the improvements are improvements for my vanity and comfort as a developer, only minimally visible to an author of Seamstress Lua code.
To be sure, there are real gains. For example, Seamstress 2.0.0 can be run inside of other Lua environments capable of loading dynamic libraries; your system's lua prompt is a good example.
But ultimately, I realized, it serves nobody to know that version 2.0.0 is coming at all if it remains relatively inaccessible. Given that humans are single-threaded, and I have done most of the maintenance and development work on Seamstress myself, waiting till it's "ready" is a standard which hurts both Seamstress 1 and Seamstress 2. I like working "with the garage door up", so to speak. Given that I've accepted one increase to the major version number, I would trade the possibility of more increases for the ability to work on this project in the way I did when I started it—sharing with folks the wins I find as I go.
Seamstress version 2.0.0 is now available. I am developing it on macOS primarily, but would like it to run on Linux (and Windows if it turns out to be feasible). You can install it from source or via Homebrew. The GitHub repository has more directions. I welcome suggestions, issues, bug reports, pull requests, etc. I'll keep this blog abreast of plans and developments as I continue to work on Seamstress.
Thanks very much to Nicole Tietz-Sokolskaya for feedback on a draft of this post!