• React
  • Redux
  • TypeScript
  • Node.js
  • Express
  • PostgreSQL


Grimoire is a long-running project for tracking time, productivity, and creative inspiration in a "logbook" format. It has taken several forms over the years: the original was an Electron app written in plain JavaScript without frameworks (seen above); the latest is a React-Redux-Node-Express web app.

In the latest version, the primary interface view is a "timebar", similar to a timeline in video editing software, where activities are logged. There are three main "entities" - Logs, Tasks, and Projects. Logs are descriptions of how time was spent, and can be associated with tasks and projects, but do not have to be. Tasks are to-dos, actionable milestones to complete - they can have subtasks as well as metadata about progress, inspiration, and other "amorphous" traits. Projects are simply a "tag" associated with a group of tasks (and thereby logs).

Grimoire is designed to be a simple and extensible foundation for tracking not just productivity but all kinds of life data over time, from mental health to sleep, so that it can be analyzed and used to understand trends in our habits and use of time.

Because Grimoire is not a system that passively collects data but requires the user to take the time to make entries themselves, a central aim of Grimoire is to make the user experience as simple, natural, and frictionless as possible. It's for this reason that I've chosen keyboard inputs and scrolling as the primary interaction modes, with most app interactions taking place with simple typed commands. I aim to make the experience of adding logs satisfying, with a fluid input flow where the user's hands never leave the keyboard.

One of the main design hurdles with this app was developing a timeline component that can display and scroll thousands of DOM elements performantly, while remaining accurate across a zoomable timespan from years down to minutes. This involved several refactoring passes. An additional issue was fetching and selecting logs from Redux for the timeline view - I initially wanted a "visibility culling" approach, where only a certain timespan around the currently visible span would be selected and rendered. Eventually I realized this was mostly premature optimization: the view performed fine even with thousands of logs - although I may return to it to optimize the timeline component further.
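Shelved or not, the culling idea is simple to state: select only the logs overlapping a padded window around the visible span. A rough sketch (in Python for brevity - the real selector lives in Redux, and the field names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Log:
    start: float  # epoch seconds
    end: float

def select_visible(logs, view_start, view_end, pad_ratio=1.0):
    """Return logs overlapping the visible span, padded on both sides
    so scrolling doesn't immediately hit unrendered regions."""
    pad = (view_end - view_start) * pad_ratio
    lo, hi = view_start - pad, view_end + pad
    return [log for log in logs if log.end >= lo and log.start <= hi]
```

With `pad_ratio=1.0`, one full viewport of logs is kept "warm" on each side of the visible span.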

I spent quite a bit of time thinking about the data schema for Grimoire, especially trying not to "over-describe" what I wanted to be a very open system. What I settled on was a fundamental "Log" type to describe "time spent", and two other types that are basically containers - Task and Project. Task and Project are mainly there to provide a minimal structure for project tracking.

The other main type is the Sector type, a base type that is extended by the user. A Sector describes a certain category of activity and its associated metadata. For instance, weightlifting may have a metadata associated with PRs, while music may have an integer description of inspiration level (or writer's block-ness). This is intended to be a rich system for tracking all the "little qualities" of all sorts of activities over time.

The final type is the Habit type, used to describe and track intentional habits the user would like to create and maintain. Habits simply remind the user to do the activity, as well as providing a benchmark to measure logged activity against, to see consistency, burnout, and other patterns.

These types all relate in some way, and, just as with the Eita project, understanding this dependency graph was key to architecting Grimoire in a sane way. Having a table with a "column of arbitrary inherited type" presented an immediate sticking point. To preserve strong typing, I chose to go with a "sparse table" approach, where each Log holds a relation to each and every derived type of Sector. I figured the number of Sectors most users would create and track would remain a small enough N to make this a chill tradeoff, even with a large number of Logs. Habits are essentially a name plus metadata with a relation to a Sector. Tasks have a one-to-many relationship with Logs and other Tasks, allowing the creation of subtasks. Projects have a one-to-many relationship with Tasks and Logs.
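A minimal sketch of the sparse-table idea, with dataclasses standing in for table rows (Python for illustration; the Sector subtype names and fields are hypothetical examples, not Grimoire's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

# Each user-defined Sector subtype gets its own strongly typed table...
@dataclass
class WeightliftingSector:
    id: int
    pr_kg: float  # example metadata: a personal record

@dataclass
class MusicSector:
    id: int
    inspiration: int  # example metadata: 0-10 inspiration level

# ...and Log carries one nullable foreign key per subtype. For any
# given Log, at most one sector column is non-null, so the table is
# sparse but every relation stays typed.
@dataclass
class Log:
    id: int
    description: str
    weightlifting_id: Optional[int] = None
    music_id: Optional[int] = None
    task_id: Optional[int] = None
    project_id: Optional[int] = None
```

The tradeoff is one column per Sector subtype; with a small N of Sectors per user, the width stays manageable even as Logs grow.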

Grimoire is not meant to be Jira or Trello. This project is in service of productivity, but on a personal level, and past that, it's about intentionality. With so many platforms conditioning the user into unintentional patterns of behavior, I want to empower users in the opposite direction, towards intention, self-awareness, and "light patterns".



The Kuma KARASU is an open hardware project intended to enable new modes of connection and sharing that empower the individual user in a secure, private, and decentralized manner. I aim to enable powerful community-level capabilities that are the opposite of "global by default" monolithic internet services - capabilities that are compelling even at a small network size.

The nRF9160 connects as an MQTT client to a "repeater" server acting as a broker. The repeater rebroadcasts transmissions from each connected device to the other talkgroup members.

The Nordic nRF9160 is used to enable secure, private push-to-talk capability, an application similar to Digital Mobile Radio or VHF/UHF voice. It also facilitates the transmission and reception of high-quality audio broadcasts, as with an internet radio station. To remain as performant and available as possible in low-bandwidth conditions, push-to-talk audio coming in through the I2S interface as PCM is encoded with Codec 2 (via the FreeDV API from the Codec2 project). This enables ultra-low-bitrate voice encoding, down to 10% of the bitrate of GSM full-rate voice coding.
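Stripped of transport details, the repeater's job is fan-out within a talkgroup. A sketch of that routing logic (illustrative Python with hypothetical names - the real broker is an MQTT server and the firmware is C):

```python
def rebroadcast(sender, talkgroup, members, payload, send):
    """Forward a transmission from `sender` to every other member of
    `talkgroup`. `members` maps talkgroup -> set of device ids; `send`
    is a callable that delivers the payload to one device."""
    delivered = []
    for device in members.get(talkgroup, set()):
        if device != sender:  # never echo the transmission back
            send(device, payload)
            delivered.append(device)
    return delivered
```

An MQTT broker gets this behavior almost for free via per-talkgroup topics; the explicit loop above just makes the fan-out visible.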

The reception and broadcast of high-quality audio is handled by encoding incoming I2S audio with Opus, which enables much higher fidelity as well as adaptive bitrate. This stream, too, is sent to the "repeater", this time over a traditional TCP socket.

Going forward, a key goal of this project is security and privacy. For the minimum viable product, authentication will happen from the clients to the "repeater" server, using basic Diffie-Hellman key exchange. However, this makes the server a single point of failure, so I'd like to use a multiparty key exchange protocol, such as the fault-tolerant multiparty Diffie-Hellman approach in "Key agreement in ad hoc networks" (Asokan, 2000) or "Scalable Protocols for Authenticated Group Key Exchange" (Katz, 2003). This would fit the overall design goal of the "repeater" being a purely blind intermediary, simply forwarding encrypted messages. With multiparty key agreement usually being O(n²) in communication, I'd like to review the current literature and take a modern approach - I believe Telegram has a writeup on their approach as well.
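For reference, the basic two-party Diffie-Hellman exchange the MVP relies on looks like this - shown with deliberately tiny textbook parameters (p=23, g=5) that must never be used in practice; a real deployment would use a vetted group or X25519:

```python
def dh_public(g, secret, p):
    # Public value: g^secret mod p
    return pow(g, secret, p)

def dh_shared(their_public, secret, p):
    # Shared secret: (their_public)^secret mod p
    return pow(their_public, secret, p)

# Toy demo group -- illustrative only, trivially breakable at this size
p, g = 23, 5
client_secret, server_secret = 6, 15
A = dh_public(g, client_secret, p)   # client -> repeater
B = dh_public(g, server_secret, p)   # repeater -> client
assert dh_shared(B, client_secret, p) == dh_shared(A, server_secret, p)
```

Both sides arrive at the same secret without it ever crossing the wire, which is exactly the property the multiparty variants generalize to n devices.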

The Espressif ESP32 is used to enable peer-to-peer data sharing without internet connectivity, taking place at a physically local level over Wi-Fi - a "sneakernet" or "meshnet" vision. The aim is for this functionality to dovetail with other tools for both consuming and creating content, to enable a slower, more private, decentralized form of social media - as well as regular file sharing.

Devices hold a unique ID / public key as well as a "membership list" of group IDs. When "beaconing" to other devices to initiate a share, the group IDs of the groups the user belongs to are sent as well. Determining the other party's memberships lets each device know which friends' updates can be passed along second- or third-hand. On device, shared content is stored flat in the filesystem, with a directory per friend ID.

The device discovers peers by performing an AP scan in station mode. Peers broadcast their public key as the SSID. When a known public key is found in the scan, the Wi-Fi connection is established using the shared key as the password. If the devices are members of the same group but haven't yet exchanged keys, the connection is authenticated with the shared secret associated with the group. This also flags the device to only share updates with a permission level of "share to the entire group."
Once the connection has been established, a socket is opened between the devices. The "server" (the device acting as AP) walks its filesystem; each file is stat'd, and its modify time, size, and absolute filename are sent. On the client side, when a (filename, mtime, size) tuple is received, the local file of that name is opened and stat'd, and the mtimes and sizes are compared. This is a more rudimentary and coarse approach than rsync's complete file-signature list, with more failure modes - but it's intended to pare down the bandwidth footprint and spend less CPU time opening and streaming files over the not-so-fast eMMC bus. After profiling this syncing approach, I'll eventually aim for a more exact "first pass" diff.
If the remote file is newer, the local device pushes the filename onto a "generate and send file signature" stack. If the local file is newer, the filename goes onto a "request file signature" stack. The list of requested signature filenames is sent to the server device. The server generates and sends the requested signatures while the client sends its own. The rdiff library is used to generate deltas from the corresponding local signatures, which are written directly across the socket. Deltas are applied to the local files, and the sync is complete.
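The coarse first-pass triage described above can be sketched as follows (Python for illustration - the firmware is C operating on raw stat() results, and the names here are hypothetical):

```python
def triage(remote_listing, local_stats):
    """Coarse mtime/size first pass over the AP's file listing.
    remote_listing: iterable of (filename, mtime, size) from the AP side.
    local_stats: dict of filename -> (mtime, size) for local files.
    Returns two filename stacks:
      send_signature    -- remote is newer: generate + send our signature
      request_signature -- local is newer: ask for the remote signature
    """
    send_signature, request_signature = [], []
    for name, r_mtime, r_size in remote_listing:
        local = local_stats.get(name)
        if local is None:
            # No local copy at all: treat as "remote newer"
            send_signature.append(name)
            continue
        l_mtime, l_size = local
        if (r_mtime, r_size) == (l_mtime, l_size):
            continue  # assume unchanged -- the coarse tradeoff
        if r_mtime > l_mtime:
            send_signature.append(name)
        else:
            request_signature.append(name)
    return send_signature, request_signature
```

The failure mode is visible in the `continue` branch: equal mtime and size is taken as "in sync" without hashing, which is exactly what a later, more exact first pass would tighten up.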
Espressif has developed a connectionless Wi-Fi protocol called ESP-NOW that facilitates communication with 802.11 action frames alone. I've used this to implement a method for authenticated key agreement when two devices are in physical proximity. On a specific UI interaction, the device sets its SSID to a "magic word" known to all devices, becoming "discoverable". This allows the other device to add it as an ESP-NOW peer by its MAC and send a "hello" packet containing the public key and "display name" of the device. Upon reception of this packet, the public key and name are displayed in the UI with an "add as friend?" modal. If the user consents to add the peer as a friend, the device sends its own public key and name back. Both sides generate shared keys, and the discoverable device sets its SSID back to its public key. A test Wi-Fi connection is established and, if successful, is reflected in the UI.
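The pairing flow can be sketched as a small state machine (illustrative Python; the "shared key" derivation below is a hash-based stand-in, not the actual cryptography, and the magic word and field names are hypothetical):

```python
import hashlib

MAGIC_SSID = "KARASU-DISCOVER"  # hypothetical magic word

class Device:
    def __init__(self, pubkey, name):
        self.pubkey, self.name = pubkey, name
        self.ssid = pubkey       # normally broadcast the public key
        self.friends = {}        # peer pubkey -> shared key

    def become_discoverable(self):
        self.ssid = MAGIC_SSID   # advertise the magic word

    def hello(self):
        # The ESP-NOW "hello" packet payload
        return {"pubkey": self.pubkey, "name": self.name}

    def accept(self, peer_hello):
        # User tapped "add as friend?". Stand-in key derivation:
        # hash both public keys in a canonical order (NOT real crypto).
        material = "|".join(sorted([self.pubkey, peer_hello["pubkey"]]))
        self.friends[peer_hello["pubkey"]] = hashlib.sha256(
            material.encode()).hexdigest()
        self.ssid = self.pubkey  # stop being discoverable
```

Because the derivation is symmetric, both devices end up holding the same key for each other, mirroring the "both sides generate shared keys" step.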

Using LoRa and Wi-Fi, the ESP32 will silently and passively sync data updates from friends when you pass within reception distance, with these updates passing virally through the friend network, governed by fine-grained privacy rules. The idea is that your Karasu is associated with a certain directory on your computer; on your trusted home network, it passively downloads all the new data put in this folder. When you go out and see friends, based on the privacy rules you've set, Karasu passively syncs some or all of those files and subdirectories on to your friends' Karasu, where they continue to be shared from friend to friend. Due to LoRa's range, this process doesn't have to happen in direct physical proximity as with Wi-Fi, but can happen over distances of 500m or more.

My goal with the ESP32 part of Karasu is to enable a form of sharing that feels more valuable for the sharer and recipient, like getting a package in the mail. This platform doesn't enable instant updates, as with the internet, but I believe this is a huge plus. This platform enables a slower, "trickle" form of social network connectivity, where longer-form, higher-value content is shared more infrequently.


  • Python
  • Blender API


Eita is a project to implement a rich system for procedural 3D modeling of architecture. Eita is a fork of a project by @Isimic, and leverages Blender's best-in-class mesh representations and operations. Eita is planned to connect to an implementation of a modeling language based on the CGA shape grammar described by Pascal Müller et al. While most CGA implementations act as "lego sets", applying simple rule-based transformations to existing mesh "modules", the Blender API allows me to develop rule-based routines for each constituent part of modern urban buildings.

This image demonstrates dynamically generating extruded molding / fascia profiles as well as paned glass windows.
My goal with Eita is to eventually generate rich and complex cities dynamically, going through a whole "layer cake" of procedural processes: terrain generation, road network generation using tensor fields, parceling, building generation, and finally modeling the evolution of a city's architecture over time.

The scope of this project is ridiculous and I will probably be working on it for quite a while.

Blender's Python API provides a rich and extensive suite of mesh modeling operations. This allows me to spend less development time implementing mesh modeling operations, and more in understanding and implementing the style rules that lie under the structure of windows, facades, roofs, and other parts of buildings.

One of the fundamental "building blocks" of Eita is the generation of profiles for extrusion. Much of the geometry is formed by parametrically defining profiles to be extruded along a given curve (or, rather, an edge/vertex list).
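The building block can be sketched without bpy: sweep a 2D profile along a list of positions, emitting vertices and quad faces. This toy version extrudes along a straight axis only - the real implementation uses Blender's mesh operators and arbitrary edge/vertex paths:

```python
def extrude_profile(profile_2d, path):
    """Sweep a 2D profile (list of (y, z) points) along a path of
    x positions, producing (verts, faces) in the usual index form.
    Simplified: the path is assumed to run along the x axis."""
    verts = []
    for x in path:
        for (y, z) in profile_2d:
            verts.append((x, y, z))
    faces = []
    n = len(profile_2d)
    for i in range(len(path) - 1):        # one ring pair per segment
        for j in range(n - 1):            # one quad per profile edge
            a = i * n + j
            faces.append((a, a + 1, a + n + 1, a + n))
    return verts, faces
```

A molding profile then reduces to a parametric point list - change the profile, and every fascia generated from it follows.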

When procedurally generating complex forms, a key quality is the "order of operations". In my opinion, approaches that attempt to be order-independent are necessarily hamstrung, because the nature of the structure naturally follows an ordered series of steps, from coarse to fine complexity. Moreover, the dimensional dependencies, while interrelated, are strictly hierarchical. Decomposing the building's structure into "blueprints" for each module lets me understand the dependency graph of dimensional relationships much more clearly.

The purpose of breaking a building down into constituent components, with a schema and "blueprint" for modeling it in a sequence of steps, is to be able to abstract buildings into "style families" of "style classes" with varying "style rules". For instance, a style class may have windows of 3 styles - dependent on the style class, they may have a variety of dimensions, panes, etc. Many buildings fall into a small number of similar style families. By developing rich modular building blocks dependent on a hierarchy of style rules, we can procedurally generate a vast variety of buildings that still obey real-life dimensional relationships and style rules.
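One way to sketch the hierarchy: a style class consults its own overrides first, then falls back to its family's defaults (illustrative Python; the family, class, and rule names are made up):

```python
class StyleFamily:
    """A family holds the default style rules shared by its classes."""
    def __init__(self, name, rules):
        self.name, self.rules = name, rules

class StyleClass:
    """A class belongs to a family and may override individual rules."""
    def __init__(self, family, overrides):
        self.family, self.overrides = family, overrides

    def rule(self, key):
        # Class-level overrides win; otherwise fall back to the family
        if key in self.overrides:
            return self.overrides[key]
        return self.family.rules[key]

# Hypothetical example data
victorian = StyleFamily("victorian", {"window_panes": 6, "window_ratio": 1.6})
queen_anne = StyleClass(victorian, {"window_panes": 9})
```

A generator routine then only ever asks the style class for a rule, and the lookup chain decides where the value actually comes from.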

After I reach a good set of modular primitives, the next step will be to write a simple declarative language for describing style rules. I plan to use the PyParsing parser-combinator library.



Tetrakarn is a collection of a few different implementations of pitch classification for musical analysis, as well as chord detection and key detection, in JavaScript.
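The core of pitch-class-based chord detection can be sketched as template matching (Python here for illustration - Tetrakarn itself is JavaScript, and this template set is deliberately minimal):

```python
# Map MIDI note numbers to pitch classes (0 = C ... 11 = B) and match
# the resulting pitch-class set against interval templates per root.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
TEMPLATES = {"maj": {0, 4, 7}, "min": {0, 3, 7}, "dim": {0, 3, 6}}

def detect_chord(midi_notes):
    pcs = {n % 12 for n in midi_notes}
    for root in range(12):
        # Transpose so the candidate root becomes 0, then compare
        transposed = {(pc - root) % 12 for pc in pcs}
        for quality, template in TEMPLATES.items():
            if transposed == template:
                suffix = "" if quality == "maj" else quality
                return f"{NOTE_NAMES[root]}{suffix}"
    return None  # no template matched
```

Key detection follows the same shape, just with longer scale templates and a scoring pass instead of exact set equality.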

Table Talk

Table Talk was a project I worked on in college to create a web platform for collaborative discussion around texts. The idea was to create a commentary platform for source texts, such as journal articles, literature, or other written works. Rather than commenting on the text as a whole, commenters would highlight a section of the text to leave a comment on that particular part, spawning a discussion thread around the highlighted section. Planned features included "connections", where a commenter could highlight multiple sections to comment on the parts taken together; rich comments that could include markup, LaTeX, images, and other hypertext; and support for importing a wide variety of document formats.


Myrrh is a slow-simmering long-term project to build a high-performance 3D game engine in C++. Myrrh aims to strike a balance between complexity and capability, choosing powerful and robust strategies for animation, rendering, and audio.