I am teaching a web-based class in June about Ableton and Max for Live. All skill levels are welcome: we are going to turn pictures into sounds, make plug-ins unlike anything you could buy, and hijack Ableton using cameras, Kinects, and smartphones. Hope to see you there.
Digital psychedelia created by deliberately breaking video files. The source videos were macro footage of an old TV screen. They were then altered by hand in a hex editor, and by a program I wrote that made the video change in reaction to the sounds, which were assembled as a collage in Max.
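For anyone curious what the hex-editing half of that process amounts to, here is a minimal Python sketch of the same idea: overwrite random bytes in a copy of a video file and see what the decoder makes of it. The filenames, corruption count, and header offset are all arbitrary assumptions, and results vary wildly by codec (older formats like AVI/MJPEG tend to glitch more gracefully than MP4, which may simply refuse to play).

```python
import random

SRC = "input.avi"      # hypothetical source clip
DST = "glitched.avi"   # corrupted copy to play back
FLIPS = 200            # how many random bytes to overwrite (arbitrary)
HEADER = 4096          # leave the first bytes alone so the file still opens

data = bytearray(open(SRC, "rb").read())
for _ in range(FLIPS):
    # pick a random offset past the header and replace that byte
    data[random.randrange(HEADER, len(data))] = random.randrange(256)

open(DST, "wb").write(bytes(data))
```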
In the future, music production software ought to include smart tools that take the overwhelming number of options the digital world provides and give people more powerful control over them from a higher vantage point. Rather than trying to bundle a bunch of knobs together, or drawing automation by hand (both of which limit real-time expressive options), music software needs to look toward ways a person can delegate the responsibility of making low-level choices to the computer.
In other words, the computer needs to become a collaborator.
One way to do this is with machine learning and artificial intelligence. This video is a demonstration of a simpler approach using the “intelligence” contained within the sound itself: it shows how, using Max for Live, software can analyze an audio signal for loudness, timbre, or pitch, and then use that information to automatically turn any knob, dial, or fader in Ableton.
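The actual patch is built in Max, but the core idea translates to a few lines in any language: measure a feature of the signal (here, loudness as RMS per frame) and rescale it to a normalized 0..1 value that can drive a parameter. A hypothetical Python sketch, assuming a mono float audio buffer; the function name, hop size, and dB floor are my own choices, not anything from the patch:

```python
import numpy as np

def loudness_to_param(signal, hop=512, floor_db=-60.0):
    """Map the loudness of a mono signal (floats in -1..1) to a 0..1 curve."""
    n = len(signal) // hop
    frames = signal[: n * hop].reshape(n, hop)
    rms = np.sqrt(np.mean(frames**2, axis=1))          # loudness per frame
    db = 20 * np.log10(np.maximum(rms, 1e-6))          # convert to decibels
    return np.clip((db - floor_db) / -floor_db, 0, 1)  # floor_db..0 dB -> 0..1

# Example: a fading sine tone yields a descending parameter curve,
# as if a hand were slowly closing a filter cutoff.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t) * np.linspace(1, 0, sr)
param = loudness_to_param(tone)
print(param[0], param[-1])  # near 1.0 at the start, 0.0 at the end
```

In Ableton the last step would be writing that value to a device parameter each frame; swapping the RMS measurement for a pitch or brightness estimate gives the other mappings mentioned above.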
Musicians have always been designers of relationships between sounds. This is what that can look like in the contemporary world of music production software.