AI-Enhanced Music Instruments

ZICHI — Jan. 2019

Primary Roles:
Lead Programmer
Sound Designer
Technical Support
Collaborated with:
Aven Zhou — Artist
David Santiano — Musician, Programmer


ZICHI is an interactive musical installation enhanced by AI. More than a playable instrument, ZICHI can “understand” input from users, “compose” a new melody, and “respond” to the audience in the tone of the Guqin, a traditional Chinese stringed instrument.

ZICHI was inspired by the Chinese legend of Yu Boya, the greatest Guqin musician of his time. Yu Boya's music could be understood only by his closest friend, Zichi. Their story has been told generation after generation to honor the value of friendship and understanding.

In this project, we proposed an AI system that responds to human input and generates Guqin music. Our aim is to rediscover tradition and let its essence evolve through emerging technology while keeping elements both familiar and new. Where deep cultural heritage meets rapidly changing, technology-driven forms of art, this project is dedicated to building a creative and collaborative AI profile, one in which AI captivates the imagination to enhance creativity.

This project is part of an ongoing installation series, Chinese New Literati, by the artist Aven Le Zhou.

Therem{Ai}n — Nov. 2018


Collaborated with:
Aven Zhou
David Santiano

Designed for lonely kids and adults alike, Therem{Ai}n lets users perform music simply by moving both hands in the air. Based on what the user plays, the instrument answers with new melodies generated by a neural network.

Core Tech Used: Magenta.js, TensorFlow, Leap Motion, Python, etc.
This project was built at the 2018 Shanghai Google Design Sprint Hackathon, where it won 2nd place.

Making of Therem{Ai}n

Concept

Our team focused on how AI can work hand in hand with an activity that is inherently special to human beings: the creation of music. We wanted to go beyond the novelty of an AI-driven performance and home in on how, in a world where AI is often seen as a replacement for jobs and production, AI can instead accompany, assist, and respond to our creative process.

Making Steps

Our first step was to make a digital theremin. We used a Leap Motion as our main interface and mapped its tracking data, in our case the positions of the musician's two hands, to MIDI pitch numbers and amplitudes. The pitch numbers were used to synthesize sound waves for live playback with the Python library Pyo, while the amplitudes controlled the volume.
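For illustration, here is a minimal sketch of that mapping. On stage we used Python and Pyo; this version instead assumes the leapjs library and a browser AudioContext, and the axis ranges and pitch bounds are illustrative guesses rather than our actual calibration.

```ts
import * as Leap from 'leapjs';

const PITCH_LOW = 48;  // C3, assumed lower bound of the playable range
const PITCH_HIGH = 84; // C6, assumed upper bound

// Normalize a value into [0, 1], clamping at the edges.
const norm = (v: number, min: number, max: number) =>
  Math.min(1, Math.max(0, (v - min) / (max - min)));

// One continuous voice, as on a theremin: an oscillator through a gain node.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const gain = ctx.createGain();
osc.connect(gain).connect(ctx.destination);
gain.gain.value = 0;
osc.start();

Leap.loop((frame: any) => {
  const left = frame.hands.find((h: any) => h.type === 'left');
  const right = frame.hands.find((h: any) => h.type === 'right');
  if (!left || !right) return;

  // palmPosition is [x, y, z] in millimeters above the sensor.
  // Right-hand x chooses the pitch; left-hand height sets the volume.
  const pitch = Math.round(
    PITCH_LOW + norm(right.palmPosition[0], -200, 200) * (PITCH_HIGH - PITCH_LOW)
  );
  const amp = norm(left.palmPosition[1], 100, 500);

  // Standard MIDI-to-frequency conversion: 440 Hz at MIDI note 69.
  osc.frequency.value = 440 * Math.pow(2, (pitch - 69) / 12);
  gain.gain.value = amp * 0.5;
});
```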

Our second step was to feed the MIDI pitch numbers captured from the musician's input into an RNN-based melody model from Magenta.js, which generates a new melody that continues the input MIDI data.
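A sketch of this step with Magenta.js's MusicRNN model follows. MusicRNN, quantizeNoteSequence, and continueSequence are the library's published API; the checkpoint choice, note durations, and step count here are assumptions, not necessarily what we shipped.

```ts
import * as mm from '@magenta/music';

const model = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn'
);

// Wrap pitches captured from the theremin, e.g. [60, 62, 64, 65],
// into a NoteSequence, assuming half a second per note.
function toNoteSequence(pitches: number[]): mm.INoteSequence {
  return {
    notes: pitches.map((pitch, i) => ({
      pitch,
      startTime: i * 0.5,
      endTime: (i + 1) * 0.5,
    })),
    totalTime: pitches.length * 0.5,
  };
}

async function respond(pitches: number[]): Promise<mm.INoteSequence> {
  await model.initialize();
  // MusicRNN works on quantized sequences: 4 steps per quarter note here.
  const quantized = mm.sequences.quantizeNoteSequence(toNoteSequence(pitches), 4);
  // Continue the seed for 32 steps; temperature 1.1 adds some variety.
  return model.continueSequence(quantized, 32, 1.1);
}
```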

The last step was simply to play back the generated MIDI data in the same manner as the input MIDI.
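A matching playback sketch, reusing respond() from the previous snippet: Magenta.js ships a Player that schedules a NoteSequence through Web Audio, though the generated notes could just as well be routed back through the theremin's own synth voice. The tempo value is an assumption.

```ts
const player = new mm.Player();

async function playResponse(pitches: number[]) {
  const generated = await respond(pitches); // from the previous sketch
  // The model returns a quantized sequence; convert steps back to seconds
  // at an assumed tempo of 80 quarter notes per minute before playing.
  await player.start(mm.sequences.unquantizeSequence(generated, 80));
}
```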