SAM HU

Game Designer & Developer | Media Artist

Therem{Ai}n

Designed for lonely kids and adults alike, Therem{Ai}n lets users perform music simply by moving their two hands in the air. In response to what the user plays, the instrument answers with new melodies generated by a neural network.

Core Tech Used: Magenta.js, TensorFlow, Leap Motion, Python, etc.
This project was built at the 2018 Shanghai Google Design Sprint Hackathon, where it took 2nd place.

Making of Therem{Ai}n

Concept

Our team focused on how AI can work hand in hand with an activity that is inherently special to human beings: the creation of music. We wanted to go beyond the novelty of an AI-driven performance and home in on how, in a world where AI is often seen as a replacement for jobs and production, AI can instead accompany, assist, and respond to our creative process.

Making Steps

Our first step was to make a digital theremin. We used a Leap Motion as our main interface and mapped its tracking data (in our case, the positions of the musician's two hands) to MIDI pitch numbers and amplitudes. The pitch numbers were then used to synthesize sound waves for live playback with the Python library Pyo, while the amplitudes controlled the volume.
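For a rough idea of the mapping, here is a minimal sketch assuming the Leap SDK v2 Python bindings (the `Leap` module) and Pyo; the height-to-pitch ranges and hand roles are illustrative placeholders, not the exact values from our hackathon build.

```python
import sys
import Leap
from pyo import Server, Sine

s = Server().boot()
s.start()
osc = Sine(freq=440, mul=0).out()  # silent until both hands are tracked

def y_to_midi(y, lo=100.0, hi=500.0):
    # Map palm height (mm above the sensor) linearly onto MIDI notes C3..C6.
    t = max(0.0, min(1.0, (y - lo) / (hi - lo)))
    return int(48 + t * 36)

class ThereminListener(Leap.Listener):
    def on_frame(self, controller):
        frame = controller.frame()
        if len(frame.hands) < 2:
            osc.mul = 0.0  # mute when we can't see both hands
            return
        pitch_hand = frame.hands.rightmost    # right hand picks the pitch
        volume_hand = frame.hands.leftmost    # left hand sets the volume
        midi = y_to_midi(pitch_hand.palm_position.y)
        osc.freq = 440.0 * 2 ** ((midi - 69) / 12.0)  # MIDI number -> Hz
        osc.mul = max(0.0, min(1.0, volume_hand.palm_position.y / 500.0))

listener = ThereminListener()
controller = Leap.Controller()
controller.add_listener(listener)
sys.stdin.readline()  # play until Enter is pressed
controller.remove_listener(listener)
```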

Our second step was to feed the MIDI pitch numbers obtained from the musician's input into Magenta.js, an RNN-based neural network library that generates new melodies from input MIDI data.
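Since Magenta.js runs in the browser, the captured notes need a channel from the Python instrument to the page hosting the model. Below is a minimal sketch of one plausible bridge; the WebSocket transport (via the Python `websockets` library) and the JSON message shape are assumptions for illustration, with the page expected to pass the notes to Magenta.js's `MusicRNN.continueSequence()` and send the generated melody back.

```python
import asyncio
import json
import websockets  # assumed transport; any Python-to-browser channel works

async def handler(websocket):
    # Illustrative phrase, shaped roughly like a Magenta.js quantized
    # NoteSequence so the browser side can forward it with little work.
    phrase = {
        "notes": [
            {"pitch": 60, "quantizedStartStep": 0, "quantizedEndStep": 4},
            {"pitch": 64, "quantizedStartStep": 4, "quantizedEndStep": 8},
        ],
        "totalQuantizedSteps": 8,
    }
    await websocket.send(json.dumps(phrase))
    # The browser feeds this to MusicRNN.continueSequence() and replies
    # with the generated continuation.
    melody = json.loads(await websocket.recv())
    print("Generated pitches:", [n["pitch"] for n in melody["notes"]])

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # serve until cancelled

asyncio.run(main())
```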

The last step was simply to play back the generated MIDI data in the same manner as the live input.
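A minimal sketch of that playback, reusing the `osc` voice from the first step and the note shape from the bridge sketch above (the step duration is an arbitrary illustrative value):

```python
import time

def play_melody(osc, notes, seconds_per_step=0.125):
    # Step through the generated notes on the same oscillator used live,
    # so the AI response sounds like the instrument itself.
    for note in notes:
        osc.freq = 440.0 * 2 ** ((note["pitch"] - 69) / 12.0)
        osc.mul = 0.3
        steps = note["quantizedEndStep"] - note["quantizedStartStep"]
        time.sleep(steps * seconds_per_step)
    osc.mul = 0.0  # release the voice once the response ends
```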

Collaborated with:
Aven Zhou
David Santiano