Project Overview
This cross-media generative art installation transforms the global pulse of human knowledge into a real-time audiovisual experience. By connecting to the live stream of Wikipedia edits, the system translates abstract metadata—such as byte size, user type, and edit wars—into organic soundscapes and dynamic particle visualizations.
The project treats the Wikipedia API not just as a data source, but as a generative seed. A core research focus lies in the aesthetic distinction between human contributions (mapped to warm, organic subtractive synthesis) and bot activity (represented by precise, cold digital chatter). It explores how invisible digital infrastructure can be made perceptible through sensory translation.
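The human/bot distinction described above can be sketched as a small mapping function. This is only an illustrative shape for the translation logic: the output parameter names (`cutoff_hz`, `noise_mix`, `decay_s`) and all numeric constants are hypothetical, while the input fields (`bot`, `title`, `length`) mirror real Wikimedia `recentchange` event metadata.

```python
def map_edit_to_synth(edit: dict) -> dict:
    """Translate Wikipedia edit metadata into synthesis parameters.

    `edit` mirrors a few fields of a Wikimedia EventStreams
    'recentchange' event; the output keys are hypothetical names
    for parameters a synth engine might expose.
    """
    byte_delta = abs(edit.get("length", {}).get("new", 0)
                     - edit.get("length", {}).get("old", 0))
    title_len = len(edit.get("title", ""))

    if edit.get("bot", False):
        # Bots: bright, noisy, short bursts of "digital chatter"
        return {
            "cutoff_hz": 8000 + 40 * title_len,  # high, precise timbre
            "noise_mix": 0.8,
            "decay_s": 0.1,
        }
    # Humans: warm subtractive tones, longer tails for larger edits
    return {
        "cutoff_hz": 200 + 10 * title_len,       # low fundamental region
        "noise_mix": 0.1,
        "decay_s": min(4.0, 0.5 + byte_delta / 500),
    }
```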
Technical Features
- Distributed Architecture: A modular Publish–Subscribe system using OSC (Open Sound Control) via UDP to ensure low-latency communication between the data fetcher, visualizer, and sound engine.
- Algorithmic Mapping: Complex translation logic that maps text metadata (e.g., article title length) to sonic parameters (e.g., fundamental frequency) and visual coordinates.
- Custom Sound Engine: A sample-free, generative synthesis engine built in SuperCollider featuring microtonal frequency scaling and dynamic reverb tails based on edit magnitude.
- Particle Visualization: A Python-based rendering engine using alpha-blending to visualize the magnitude and "decay" of information changes over time.
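In the project itself, python-osc handles message construction, but the OSC 1.0 wire format behind the Publish–Subscribe transport is simple enough to sketch in dependency-free Python. The address `/edit` and the argument values are illustrative only:

```python
import struct

def osc_string(s: str) -> bytes:
    """Encode an OSC string: UTF-8, null-terminated, zero-padded to 4 bytes."""
    b = s.encode("utf-8") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address: str, *args) -> bytes:
    """Build a binary OSC message supporting int, float, and string arguments."""
    typetags, payload = ",", b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("bool has no standard OSC mapping in this sketch")
        elif isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, float):
            typetags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        elif isinstance(a, str):
            typetags += "s"
            payload += osc_string(a)
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return osc_string(address) + osc_string(typetags) + payload

# Such a packet travels over UDP with the standard socket module, e.g.:
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#     osc_message("/edit", 150, 0.8), ("127.0.0.1", 57120))
```

Keeping the messages this small is what makes the UDP transport low-latency: each edit becomes one fire-and-forget datagram to the visualizer and the sound engine.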
Tech Stack
- Core Logic: Python 3.10+ (requests, python-osc)
- Visuals: Pygame
- Audio Engine: SuperCollider (sclang)
- Data Source: Wikimedia EventStreams API (Server-Sent Events)
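The EventStreams feed delivers each edit as a Server-Sent Events `data:` line containing a JSON payload. A minimal sketch of the consuming side might look like this; the parser is dependency-free, and the live `requests` loop is shown only as a comment since the project's actual fetcher logic is not reproduced here:

```python
import json

RECENTCHANGE_URL = "https://stream.wikimedia.org/v2/stream/recentchange"

def parse_sse(lines):
    """Yield JSON payloads from an iterable of raw SSE lines.

    Only 'data:' lines are decoded; comments (': ...'), event-name
    lines, and keep-alive blanks are skipped.
    """
    for line in lines:
        if line and line.startswith("data: "):
            yield json.loads(line[len("data: "):])

# Live usage sketch (requires the `requests` package):
# import requests
# resp = requests.get(RECENTCHANGE_URL, stream=True,
#                     headers={"Accept": "text/event-stream"})
# for event in parse_sse(resp.iter_lines(decode_unicode=True)):
#     print(event["title"], event.get("bot"))
```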
