Published Dec 5, 2025
music · technology · software

truthbox

creating an immersive hip-hop jukebox

5-10 min read

At its most basic, truthbox is my take on a jukebox with a visualizer component—an homage to the old Windows Media Player visualizers I enjoyed as a kid on Windows XP. It’s intentionally a bit silly: we’re embedding a screen within a screen, which I find both perplexing and interesting.

What real problem is this solving? None. This is an art piece that leverages technology, not technology that solves a problem. It’s a focused digital project that does a thing in a way I find satisfying & cool.

That said, I love the idea of building something my friends and other artists can use to share their music in an interesting way. It worked out perfectly that my good friend Olu was ready to release “PWR”, a track I’d first heard a few years back. From the beginning I felt it deserved something interesting to promote it. It was an honor to use this as the first track loaded into truthbox. There’s even a lil pun to be enjoyed here around the song “powering” it up.


A little backstory

I’m a technologist & software engineer with nearly two decades of professional experience. Within a few hours of using ChatGPT for the first time a few years back, I knew we were stepping into a qualitatively new era in software. As these tools have progressed I’ve witnessed & been a part of some pretty wild feats of engineering, done in stints of time that hardly even make sense.

So when I had the idea for this jukebox, I knew implementing it wasn’t going to be the hard part. The challenge would be crafting the visual style and working out the effects layer.

My process started with visual references from movies, examples online, and working with ChatGPT to generate surfaces and textures. I brought in my current website style, my general thesis as an artist, and the kind of futurism I want to play with—and out came a style reminiscent of an e-reader: matte, metal, carbon-fiber-like surfaces with dials that feel like modern digital audio equipment. There’s a touch of Game Boy & Etch a Sketch thrown in there as well. All of that fused into the visual direction for the “device” itself.

visual style palette


The tech

I knew I was going to use Astro as my build tool. My preferred language on the modern web is TypeScript—with generative models, type safety is useful friction. I’ve found that it increases the chances for outputs to meet expectations, makes it easier for LLM agents to reliably self-correct, and makes it safer to compose modules. That last bit becomes especially important later on, as code scales & modularity becomes necessary to reduce cognitive load (read: LLM context limits & token burn).
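
To make that “useful friction” concrete, here’s roughly the kind of typed contract I mean. This is a sketch with illustrative names, not anything lifted from the truthbox codebase:

```typescript
// Illustrative sketch, not actual truthbox code: a typed boundary that
// generated modules have to satisfy. If an agent returns the wrong shape
// or forgets a field, the compiler flags it before it ever hits the browser.
interface AudioFeatures {
  bass: number;  // 0..1 energy in the low band
  mids: number;  // 0..1 energy in the mid band
  highs: number; // 0..1 energy in the high band
}

type ParamMapper = (time: number, audio: AudioFeatures) => {
  intensity: number;
  hue: number;
};

// A generated module either fits this shape or it doesn't compile.
export const pulse: ParamMapper = (time, audio) => ({
  intensity: Math.min(1, audio.bass * 1.5),
  hue: (time * 10 + audio.highs * 120) % 360,
});
```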

I also knew I wanted to lean on WebGL shaders. Any visualizer is compute-intensive, and the graphics card is the right place for that work.
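
The pattern that makes this cheap is the CPU/GPU split: the CPU only updates a handful of uniforms per frame, and the per-pixel work happens in a fragment shader. A minimal sketch of that split (not truthbox’s shader; the compile/link/quad boilerplate is omitted):

```typescript
// Sketch of the CPU/GPU split: the CPU hands over a few numbers per frame,
// and every pixel's work happens in the fragment shader on the GPU.
const fragmentSrc = /* glsl */ `
  precision highp float;
  uniform vec2  u_resolution;
  uniform float u_time;
  uniform float u_bass;   // 0..1 audio feature, updated every frame

  void main() {
    vec2 uv = gl_FragCoord.xy / u_resolution;
    // Per-pixel work lives here; the CPU never touches individual pixels.
    float glow = u_bass * (1.0 - distance(uv, vec2(0.5)));
    gl_FragColor = vec4(vec3(glow), 1.0);
  }
`;

// Per frame, after compiling/linking the program and binding a full-screen
// quad (boilerplate omitted), the CPU side is just:
//   gl.uniform1f(uBassLoc, bass);
//   gl.uniform1f(uTimeLoc, time);
//   gl.drawArrays(gl.TRIANGLES, 0, 6);
```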

Outside of Astro, I chose not to use any frameworks. The entire thing is built with TypeScript, WebGL, CSS, & HTML.

For implementation, I started in VS Code using agentic chat mode with Sonnet 4. I used it to scaffold a basic interface, get audio playback working, wire up audio analysis, and build the initial reactivity between the audio signal and the visuals.
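
In the browser, that kind of audio analysis mostly comes down to the Web Audio API’s AnalyserNode feeding a render loop. A rough sketch of the shape (not the actual truthbox code; the band cutoffs are arbitrary):

```typescript
// Sketch: hook an <audio> element up to an AnalyserNode and collapse the
// spectrum into a few coarse bands the visuals can react to each frame.
const audioEl = document.querySelector<HTMLAudioElement>("audio")!;
const ctx = new AudioContext(); // must be resumed from a user gesture, e.g. the play button
const source = ctx.createMediaElementSource(audioEl);
const analyser = ctx.createAnalyser();
analyser.fftSize = 2048;

source.connect(analyser);
analyser.connect(ctx.destination); // keep playback audible

const bins = new Uint8Array(analyser.frequencyBinCount);

function frame() {
  analyser.getByteFrequencyData(bins);

  // Average a slice of bins and normalize to 0..1.
  const band = (from: number, to: number) => {
    let sum = 0;
    for (let i = from; i < to; i++) sum += bins[i];
    return sum / ((to - from) * 255);
  };

  const bass = band(0, 32);
  const mids = band(32, 256);
  const highs = band(256, bins.length);

  // ...push bass / mids / highs into shader uniforms here...

  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```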

As the project progressed, Sonnet 4.5 became available, which took things to another level. I also started using Claude Code and found myself running multiple agents at once on different parts of the system. That evolution—new models appearing mid-project—is why building software that is either disposable (where appropriate) or evolutionary seems like the right approach in the coming years.


On the visualizer side: I’ve been involved in video work even longer than software. I’ve spent a lot of time in tools like After Effects. More recently I’ve been enamored with TouchDesigner.

A big part of my process as a video editor is drawing out sonic qualities into visuals, creating a sense of synesthesia. And I love to make things trippy. So I decided to make this visualizer revolve around evolving noise and symmetrical mirroring.

That combination gives you a layer of chaos (the noise) with order imposed through mirroring. For how the noise evolves, I experimented with different ways of tying it to audio amplitudes. The key difference from my past work is that here, amplitude acts as an additive evolution. With every punch of sonic energy in a given frequency band, the noise is pushed forward rather than snapping back and forth.
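
In code, the difference is accumulation versus direct mapping. A rough sketch of the idea (not the actual shader; the constants and the squaring are just for illustration):

```typescript
// Sketch: amplitude advances the noise field rather than driving it directly,
// so every hit pushes the pattern forward and quiet stretches let it rest
// instead of snapping back.
let bassPhase = 0;
let highsPhase = 0;

function updatePhases(dt: number, bass: number, highs: number) {
  // Direct mapping (what I moved away from) would look like: bassPhase = bass * k;
  bassPhase += bass * bass * 2.0 * dt; // squared to emphasize the punches
  highsPhase += highs * 0.5 * dt;
}

// The phases then go to the shader as uniforms. On the GLSL side the
// mirroring is roughly:
//   vec2 uv = abs(vUv * 2.0 - 1.0);                  // fold across both axes
//   float n = noise(uv * scale + vec2(u_bassPhase)); // sample the evolving field
```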

When I got that working cleanly in the shader, it was an “aha” moment. I was up late with my friend Olu and we found ourselves gazing at the screen, grinning ear to ear. That feeling unlocked finishing the project.


Visualizers meet “fluidware”

I didn’t handwrite any of the effects. I followed my own prescription (fluidware) and developed this software playing the role of “governor” instead of “programmer”—treating it as yet another case study of this wompy new software paradigm we seem to be sloshing towards.

Every effect was written by one of three models: Sonnet 4.5, Gemini 2.5, or GPT-5. I designed the pipeline and constraints. Each had its own strengths: Sonnet was technically brilliant and produced well-organized, optimized effects; Gemini created the most creative effects but struggled with performance; GPT was somewhere in the middle.

I created a modular effects system where each effect lives in its own file tied to a simple index, with no coupling to anything else, but with enough parameterization to respond to shared signals like audio features and the effects / colors / background settings exposed on the device. This let me run agents in parallel without merge conflicts. Around that, I built a small workbench for working with individual effects in isolation.
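
The shape of an effect module is something like this (a sketch with made-up names, not the actual files):

```typescript
// Sketch of the effect-module shape (hypothetical names, not the real files).

// effect.ts: the contract every effect satisfies.
export interface EffectContext {
  gl: WebGL2RenderingContext;
  time: number;                                         // seconds since start
  audio: { bass: number; mids: number; highs: number }; // shared audio features, 0..1
  settings: { effect: number; color: number; background: number }; // device dials
}

export interface Effect {
  id: number;   // the simple index a device dial maps to
  name: string;
  init(gl: WebGL2RenderingContext): void;
  render(ctx: EffectContext): void;
}

// effects/index.ts: collects the modules. No effect imports another effect,
// which is what lets multiple agents work in parallel without merge conflicts.
//
//   import ripple from "./ripple";
//   import bloom from "./bloom";
//   export const effects: Effect[] = [ripple, bloom];
```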

Then I created a prompt and agentic loop instructing models on how to create or iterate on effects. This served a few purposes.

Because the models were multimodal, they could “see” what they produced and use that as feedback. With static guardrails and a constrained interface, I could let the system iterate safely. I’m excited to give this same system to newer models and see what comes out. I plan to add effects to truthbox over time.


Closing

This is probably one of my favorite things I’ve ever made.

For anyone who has used TouchDesigner and similar tools for audio-reactive visuals, truthbox will feel a bit mundane. But I love how accessible this is. It runs in a browser. It should run on most phones reasonably well.

Most importantly, I was able to build something quickly that acts as an accelerant for a friend’s art—a way to get a beautiful song into the world in a cool wrapper that might help more people hear it. On that note, you probably came here because you’ve already seen truthbox or listened to “PWR”. But if you haven’t: