Neural Synthesis
Uploaded June 29th, 2024 by Yashique Chalil

I’ve had the privilege of working extensively in neural synthesis, particularly with cutting-edge architectures like IRCAM’s RAVE, where my primary focus has been training neural networks that capture and synthesize diverse sonic styles as AI models. My research spans various alternative architectures, as I continually seek to develop more stable, refined models that push the boundaries of creative sound design.
This journey has led me to collaborate with some of the leading figures in AI and music research, allowing me to merge academic rigor with innovative, artistic sound exploration. Through these partnerships, I’ve gained deeper insights into how neural synthesis can reshape our understanding and creation of sound.
Here are some demos of a plant model I’ve been developing. This model is trained on sounds that reflect the umwelt—the sensory world—of plants, capturing a fictional “auditory environment”. My goal is to use a refined version of this model for the sound design of an upcoming audio-visual play centered around plants, where the soundscapes will reflect their rich, yet often imperceptible, sonic world.
Running the isolated bass and drums of John Coltrane's Giant Steps, pitched down, into the Plant Model v1.
Hallucinations (the sounds the model makes when no input is fed in) of the Plant Model v2. The variations were performed by manipulating the latent variables of the trained neural network (sketched in code below the demos).
Creating grass footstep sounds with the Plant Model v2.
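All three demos above come down to two operations on the trained model: encoding audio into latent variables (for timbre transfer) and decoding latents back into sound (for hallucination, no encoder needed at all). Below is a minimal offline sketch, assuming an exported RAVE-style TorchScript model that exposes encode()/decode(), as the pre-trained models used with nn~ typically do. The file names, latent-walk scaling, and clip length are hypothetical choices for illustration, not the exact settings used in the demos.

```python
import torch
import soundfile as sf

# Hypothetical paths; any exported RAVE-style TorchScript model should behave similarly.
MODEL_PATH = "plant_model_v2.ts"
INPUT_PATH = "giant_steps_bass_drums_pitched_down.wav"

model = torch.jit.load(MODEL_PATH).eval()

# --- Timbre transfer: push external audio through the model ---
audio, sr = sf.read(INPUT_PATH)              # expects mono audio at the model's sample rate
x = torch.from_numpy(audio).float().reshape(1, 1, -1)

with torch.no_grad():
    z = model.encode(x)                      # latent trajectory: (1, n_latents, time_frames)
    y = model.decode(z)                      # resynthesis through the "plant" timbre

sf.write("plant_timbre_transfer.wav", y.reshape(-1).numpy(), sr)

# --- Hallucination: decode latents with no input audio at all ---
n_latents = z.shape[1]
frames = 512                                 # rough length of the generated clip, in latent frames

with torch.no_grad():
    # A slowly drifting random walk through latent space; the step size is
    # an arbitrary starting point, tuned by ear in practice.
    steps = 0.05 * torch.randn(1, n_latents, frames)
    z_free = torch.cumsum(steps, dim=-1)
    hallucination = model.decode(z_free)

sf.write("plant_hallucination.wav", hallucination.reshape(-1).numpy(), sr)
```

Manipulating individual rows of the latent tensor (biasing one dimension, freezing another) is what produces the variations heard in the hallucination demo.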
DataMind Audio
Model Reliability Engineer
As a Model Reliability Engineer at DataMind Audio, I helped maintain and train neural audio models. There, we present a groundbreaking real-time neural audio synthesis plugin that integrates seamlessly into your Digital Audio Workstation (DAW). Simply input any audio signal, and watch as our plugin recreates its timbre, drawing from meticulously curated datasets crafted in collaboration with leading sound designers and music producers. What sets us apart is our exclusive use of neural networks: no samples involved. This avant-garde synthesis process empowers users to delve into the infinite possibilities of an “Artist Brain.” Moreover, our platform is committed to ethical sourcing, guaranteeing that artists receive proper compensation for their invaluable contributions to the dataset. I was also fortunate to lead a workshop and demo at the Creative Tech Scotland Gathering 2024, where I showcased some of these advancements.
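The "input any audio, get its timbre recreated" loop can be illustrated with a toy real-time sketch. To be clear, this is not DataMind's plugin code (that is a native DAW plugin); it only assumes a streaming-capable RAVE-style TorchScript export, the third-party sounddevice library, and hypothetical file names and block sizes.

```python
import torch
import sounddevice as sd

MODEL_PATH = "artist_brain.ts"   # hypothetical path to a streaming-capable export
BLOCK = 2048                     # should be a multiple of the model's compression ratio
SAMPLE_RATE = 44100              # use the rate the model was trained at

model = torch.jit.load(MODEL_PATH).eval()

def callback(indata, outdata, frames, time, status):
    """Run one audio block through the network: your timbre in, the 'Artist Brain' out."""
    if status:
        print(status)
    x = torch.from_numpy(indata[:, 0].copy()).float().reshape(1, 1, -1)
    with torch.no_grad():
        y = model(x)             # forward pass = encode + decode
    out = y.reshape(-1).numpy()[:frames]
    outdata[:len(out), 0] = out
    outdata[len(out):, 0] = 0.0  # zero-pad if the model returns fewer samples

# Full-duplex stream: mono in, mono out, one model call per block.
# A pure-Python loop like this is illustrative only; real low-latency use
# needs a compiled plugin or careful buffering.
with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK, channels=1,
               dtype="float32", callback=callback):
    print("Processing live input... press Enter to stop.")
    input()
```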
Articles: Gearspace, Music Radar