This project transforms a laptop into a modular, programmable effects rig, replacing hardware pedals with custom code. Designed specifically for live performance with monophonic instruments (e.g., trumpet), it explores the boundaries of software-defined instrumentation, where stability and low latency are paramount.
The core of the system is a flexible DSP chain with two distinct Octaver engines and a distortion stage. Unlike rigid DAW plugins, this SuperCollider environment uses smart routing logic that dynamically handles signal flow, ensuring phase coherence and unity gain even when effects are bypassed or hot-swapped during a performance.
Technical Features
Hybrid Octaver Engine: Switchable between a standard granular pitch-shifter and a Subharmonic Generator that synthesizes a new sine wave from the tracked input pitch, for a massive, synth-like low end (see the sketch after this list).
DSP Distortion: Implementation of wave-shaping algorithms, selectable between Tube Saturation (tanh) and aggressive Wave Folding (fold2).
Smart Signal Routing: Custom architecture using Groups and Audio Busses to prevent signal doubling and gain spikes when bypassing effects.
Fail-Safe Architecture: Includes a "Zombie-Killer" routine that cleans up leftover server nodes before booting, ensuring stability during live-coding sessions.
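By way of illustration, the sketch below shows the subharmonic idea outside of SuperCollider: a crude autocorrelation pitch tracker feeds a sine oscillator one octave below the detected fundamental. The block size, sample rate, and test signal are assumptions for the demo, not the rig's actual parameters.

```python
# Illustrative sketch (not the SuperCollider rig itself): the subharmonic
# octaver idea in NumPy. Estimate the fundamental of a monophonic block via
# autocorrelation, then synthesize a sine one octave below it.
import numpy as np

SR = 48000

def estimate_f0(block, sr=SR, fmin=60.0, fmax=1200.0):
    """Crude autocorrelation pitch tracker for a monophonic block."""
    block = block - np.mean(block)
    ac = np.correlate(block, block, mode="full")[len(block) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])          # strongest periodicity in range
    return sr / lag

def subharmonic(block, phase=0.0, sr=SR):
    """Synthesize a sine one octave below the tracked pitch of `block`."""
    f0 = estimate_f0(block, sr)
    t = np.arange(len(block)) / sr
    sub = np.sin(2 * np.pi * (f0 / 2) * t + phase)
    amp = np.sqrt(np.mean(block ** 2))       # follow the input's RMS level
    return amp * sub, phase + 2 * np.pi * (f0 / 2) * len(block) / sr

# Demo: a 440 Hz stand-in signal should yield a 220 Hz sub-octave sine.
t = np.arange(2048) / SR
test_block = 0.5 * np.sin(2 * np.pi * 440.0 * t)
out, _ = subharmonic(test_block)
print(round(estimate_f0(test_block)))        # ~440
```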
This project is a computational study on inharmonic spectral structures and the synthesis of metallic timbres. Based on the psychoacoustic research of Jean-Claude Risset, I developed a parametric synthesizer that reconstructs the complex timbre of a bell from scratch using additive synthesis.
Instead of using samples, the engine generates sound by summing multiple sine wave oscillators tuned to specific non-integer ratios (modes). The system simulates physical damping, where higher partials decay markedly faster than the fundamental, allowing for a realistic recreation of the "strike tone," "hum," and "tierce" characteristic of cast bells.
Technical Features
Spectral Reconstruction: Precise tuning of partials to replicate the inharmonic series of various bell geometries.
Physics-based Damping: Implementation of frequency-dependent envelope generators to simulate material energy loss.
Parametric Design: Real-time control over the "Inharmonicity Factor," allowing the user to stretch or compress the spectral ratios; a direct precursor to my research on Spectral Elasticity.
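A minimal NumPy sketch of the additive principle described above (the project itself runs in SuperCollider): sine partials at non-integer ratios, each with an exponential envelope that decays faster for higher partials, plus a stretch parameter standing in for the "Inharmonicity Factor." The ratio set, amplitudes, and decay constants are example values, not the engine's exact tuning.

```python
# Illustrative additive bell: a bank of sine partials at non-integer ratios,
# each with an exponential envelope that decays faster for higher partials.
import numpy as np

SR = 48000

def bell(f0=220.0, dur=6.0, stretch=1.0, sr=SR):
    """Sum inharmonic partials; `stretch` scales the ratio exponents
    (>1 stretches the spectrum, <1 compresses it)."""
    ratios = np.array([0.56, 0.92, 1.19, 1.71, 2.00, 2.74, 3.00, 3.76])
    amps   = np.array([1.00, 0.67, 1.00, 1.80, 2.67, 1.67, 1.46, 1.33])
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for r, a in zip(ratios ** stretch, amps):
        freq = f0 * r
        decay = dur / (1.0 + r)              # higher partials die away sooner
        out += a * np.exp(-t / decay) * np.sin(2 * np.pi * freq * t)
    return out / np.max(np.abs(out))

tone = bell(f0=220.0, stretch=1.1)           # a slightly "stretched" bell
```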
NeuralMIDI-Streamer is a real-time generative music system that bridges the gap between Deep Learning and live audio synthesis. Unlike static AI generation tools, this system functions as a responsive virtual instrument, using LSTM (Long Short-Term Memory) neural networks to generate polyphonic MIDI streams on the fly.
Trained on datasets of classical composers (e.g., Bach, Beethoven), the system now operates in two distinct modes: a high-performance Desktop Mode connecting Python to professional audio engines, and a highly accessible Web Mode running entirely client-side in the browser. Both versions feature a custom state machine to handle complex voice allocation and allow the performer to modulate generation parameters such as "Temperature" (creativity) and "Density" (sparsity) live during performance.
Technical Features
LSTM Architecture: Implementation of a recurrent neural network capable of learning and predicting polyphonic chord structures and time-based dependencies.
Dual-Platform Pipeline:
Desktop: A robust communication bridge via OSC between Python (inference) and PureData (hosting VST plugins via vstplugin~).
Web: A fully client-side implementation using TensorFlow.js for inference and Tone.js for synthesis, eliminating the need for server-side processing.
Live Modulation: Real-time control over stochastic parameters (entropy/density) and "Hot-Swapping" capabilities to switch composer models (e.g., morphing from Bach to Beethoven) without audio interruption.
Smart Voice Allocation: A custom dispatcher logic ensures note-off pairing and velocity smoothing, preventing "stuck notes" and keeping phrasing organic across both platforms.
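Two of the building blocks named above can be sketched compactly in Python: temperature-scaled sampling over the model's output logits, and a dispatcher that pairs every note-on with a scheduled note-off before forwarding it over OSC. The OSC address "/note", the port, and the stand-in logits are assumptions for illustration, not the system's actual protocol.

```python
# Sketch of two building blocks, not the full system:
# (1) temperature-scaled sampling from the model's output logits,
# (2) a dispatcher that pairs every note-on with a scheduled note-off,
#     sent over OSC to the PureData host.
import time
import heapq
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # assumed Pd listening port

def sample_with_temperature(logits, temperature=1.0):
    """Higher temperature flattens the distribution (more 'creative')."""
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

class Dispatcher:
    """Guarantees a note-off for every note-on to avoid stuck notes."""
    def __init__(self):
        self.pending = []                     # (off_time, pitch) min-heap

    def note_on(self, pitch, velocity, duration):
        client.send_message("/note", [pitch, velocity])
        heapq.heappush(self.pending, (time.time() + duration, pitch))

    def flush(self):
        now = time.time()
        while self.pending and self.pending[0][0] <= now:
            _, pitch = heapq.heappop(self.pending)
            client.send_message("/note", [pitch, 0])   # velocity 0 = off

# Usage: sample a pitch from the model output, trigger it, flush note-offs.
dispatcher = Dispatcher()
fake_logits = np.random.randn(128)            # stand-in for LSTM output
pitch = sample_with_temperature(fake_logits, temperature=1.2)
dispatcher.note_on(pitch, velocity=90, duration=0.5)
time.sleep(0.6)
dispatcher.flush()
```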
This cross-media generative art installation transforms the global pulse of human knowledge into a real-time audiovisual experience. By connecting to the live stream of Wikipedia edits, the system translates abstract metadata, such as byte size, user type, and edit wars, into organic soundscapes and dynamic particle visualizations.
The project treats the Wikipedia API not just as a data source, but as a generative seed. A core research focus lies in the aesthetic distinction between human contributions (mapped to warm, organic subtractive synthesis) and bot activity (represented by precise, cold digital chatter). It explores how invisible digital infrastructure can be made perceptible through sensory translation.
Technical Features
The project was developed in two distinct implementations to explore different technical ecosystems:
1. Public Web Installation (JavaScript / WebGL)
Browser-Based Real-time Processing: Direct connection to the Wikimedia EventStream via Server-Sent Events (SSE) handled entirely client-side.
3D Particle Engine: A high-performance visualization using Three.js (WebGL) to render edits as decaying light particles in a 3D space.
Web Audio Synthesis: A custom-built subtractive synthesis engine using the Web Audio API to recreate the sonic characteristics of the SuperCollider prototype directly in the browser.
2. Research Prototype (Python / SuperCollider)
Distributed Architecture: A modular Publish–Subscribe system using OSC (Open Sound Control) via UDP for low-latency communication between distinct processes.
Algorithmic Mapping: Complex translation logic mapping text metadata (e.g., article title length) to microtonal frequencies and visual coordinates (a minimal sketch follows the tech stack below).
High-End Audio Engine: A sample-free, generative synthesis engine built in SuperCollider, featuring dynamic reverb tails based on edit magnitude.
Tech Stack
Web Implementation
Core: JavaScript (ES6+), Server-Sent Events (SSE)
Visuals: Three.js (WebGL)
Audio: Web Audio API
Local Research Version
Core Logic: Python 3.10+ (requests, python-osc)
Visuals: Pygame
Audio Engine: SuperCollider (sclang)
Data Source: Wikimedia EventStreams API (Server-Sent Events)
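A minimal sketch of the research prototype's data path, using only the stack listed above (requests and python-osc): read the public recentchange stream, derive a few synthesis parameters from the edit metadata, and forward them to sclang. The OSC address, port, and mapping constants are illustrative assumptions.

```python
# Sketch of the research prototype's data path: Wikimedia EventStreams (SSE)
# in, a couple of metadata-to-parameter mappings, OSC out to SuperCollider.
import json
import requests
from pythonosc.udp_client import SimpleUDPClient

STREAM = "https://stream.wikimedia.org/v2/stream/recentchange"
sc = SimpleUDPClient("127.0.0.1", 57120)      # sclang's default port

def title_to_freq(title, base=110.0):
    """Map article-title length to a microtonal frequency (illustrative)."""
    return base * 2 ** ((len(title) % 48) / 24)   # steps of 1/24 octave

with requests.get(STREAM, stream=True,
                  headers={"Accept": "text/event-stream"}) as resp:
    for raw in resp.iter_lines(decode_unicode=True):
        if not raw or not raw.startswith("data:"):
            continue                           # skip comments / keep-alives
        event = json.loads(raw[len("data:"):])
        if event.get("type") != "edit":
            continue
        delta = abs(event.get("length", {}).get("new", 0)
                    - event.get("length", {}).get("old", 0))
        sc.send_message("/wiki/edit", [
            title_to_freq(event.get("title", "")),  # pitch
            min(delta / 5000.0, 1.0),               # edit magnitude -> reverb
            1 if event.get("bot") else 0,           # human vs. bot voice
        ])
```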
Terry is a browser-based "music creation playground" developed as a collaborative Bachelor's project at the Berlin University of Applied Sciences (BHT). The application invites users to experiment with musical structures through an intuitive canvas interface.
Instead of traditional notation, users interact with dynamic 3D spheres to modulate sound parameters (timbre, duration, reverb) in real-time. The core feature is a custom-built Looper engine that records user interactions as timestamped events, allowing for complex, layered polyrhythmic structures within a harmonically constrained C-Major context.
Technical Features
Audio Engine: Implementation of Tone.js for web-based synthesis and signal processing.
3D Interface: Utilization of React-three-fiber to create responsive, manipulatable visual control elements.
Custom Event Looper: Development of a logic system that captures click events and state changes relative to a start time, enabling non-destructive overdubbing.
Full-Stack Architecture: A Python (Flask) backend connected to a MongoDB database allows users to save, load, and share their compositions via a REST API.
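As an illustration of the save/load flow, a minimal Flask + MongoDB sketch follows; the route paths, database and collection names, and document shape are assumptions, not the project's actual API.

```python
# Sketch of the save/load REST idea behind the Flask + MongoDB backend.
from flask import Flask, jsonify, request
from pymongo import MongoClient
from bson.objectid import ObjectId

app = Flask(__name__)
compositions = MongoClient("mongodb://localhost:27017")["terry"]["compositions"]

@app.post("/compositions")
def save_composition():
    """Store the looper's timestamped event list as one document."""
    doc = {
        "title": request.json.get("title", "untitled"),
        "events": request.json.get("events", []),   # [{time, sphere, params}, ...]
    }
    result = compositions.insert_one(doc)
    return jsonify({"id": str(result.inserted_id)}), 201

@app.get("/compositions/<comp_id>")
def load_composition(comp_id):
    doc = compositions.find_one({"_id": ObjectId(comp_id)})
    if doc is None:
        return jsonify({"error": "not found"}), 404
    doc["_id"] = str(doc["_id"])
    return jsonify(doc)
```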
This project is a dedicated additive synthesis engine developed in SuperCollider, designed for exploring microtonal tuning systems and non-standard harmonic relations. Unlike traditional synthesizers tied to 12-TET scales, this tool allows for the precise definition of frequency ratios and partial distribution, enabling the creation of complex, evolving timbres and "elastic" harmonic structures.
It serves as a technical foundation for my ongoing research into spectral stretching, providing a programmable environment to test how shifted partials affect our perception of consonance and dissonance (a minimal sketch of this stretched-partial mapping follows the feature list below).
Technical Features
Dynamic Partial Control: Real-time manipulation of individual overtone amplitudes and frequencies.
Microtonal Flexibility: Support for custom tuning tables and logarithmic frequency mapping.
Algorithmic Modulation: Automated timbral movement through LFOs and envelopes applied to the spectral array.
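One common way to formalise the stretching idea is to place partial k at f0 * k**s rather than f0 * k, where s = 1 yields the harmonic series, s > 1 stretches it, and s < 1 compresses it. The short sketch below uses this formulation; the exponent form and demo values are illustrative, not the engine's exact parameterisation.

```python
# Illustrative stretched partial series: s = 1.0 gives the harmonic series,
# s > 1 stretches the spectrum, s < 1 compresses it.
import numpy as np

def stretched_partials(f0, n_partials, s=1.0):
    k = np.arange(1, n_partials + 1)
    return f0 * k ** s

print(stretched_partials(110.0, 6, s=1.0))   # harmonic: 110, 220, 330, ...
print(stretched_partials(110.0, 6, s=1.05))  # slightly stretched spectrum
```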
DrawTone is an interactive, generative creative-coding project by Dave Kronawitter and me. Through this project, we explored the intersection of gestural drawing and real-time sound synthesis. By capturing visual trajectories and translating them into spectral data, the system allows users to "play" an instrument through intuitive graphic input.
The project focuses on the translation of geometric properties—such as line height, curvature, and density—into synthesis parameters like frequency, amplitude, and timbral modulation. It serves as an experimental bridge between visual arts and algorithmic composition.
Technical Features
Real-time Synthesis: Direct mapping of XY coordinates to spectral oscillators (a minimal sketch follows the tech stack below).
Algorithmic Translation: Implementation of custom logic to interpret visual speed and pressure.
Modular Architecture: Designed for easy integration with external MIDI and OSC-controlled environments.
Tech Stack
Image Recognition: Python with the OpenCV framework
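A minimal sketch of the geometry-to-parameter mapping using the OpenCV stack listed above: the drawn stroke is isolated by thresholding, its mean height is mapped to frequency and its pixel density to amplitude. The threshold, frequency range, and input image are assumptions for illustration.

```python
# Sketch of the geometry-to-parameter mapping: isolate the drawn stroke,
# map its mean y position to frequency and its density to amplitude.
import cv2
import numpy as np

def stroke_to_params(frame_bgr, fmin=110.0, fmax=1760.0):
    """Return (frequency, amplitude) derived from the dark stroke pixels."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None                              # nothing drawn yet
    height = frame_bgr.shape[0]
    rel_y = 1.0 - np.mean(ys) / height           # higher on canvas = higher pitch
    freq = fmin * (fmax / fmin) ** rel_y         # exponential pitch mapping
    amp = min(len(ys) / mask.size * 20.0, 1.0)   # denser stroke = louder
    return freq, amp

# Usage: feed frames from a file or camera and forward the values via OSC/MIDI.
frame = cv2.imread("drawing.png")                # hypothetical input image
if frame is not None:
    print(stroke_to_params(frame))
```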