At the intersection where sound becomes thought and technology learns to listen, Gadi Sassoon builds worlds. Composer, performer, and tireless experimenter, he moves fluently between piano and violin, code and gesture, concert hall and laboratory. His work does not treat AI as a spectacle, but as a lens, one that can magnify intention, expose vulnerability, and question who really holds the reins of emotion in a digitized age.

From “Live at the UN” to his ongoing explorations of embodiment, affective computing, and physical-digital performance, Sassoon’s practice asks urgent, quietly radical questions about authorship, presence, and freedom of feeling. We spoke with him about emotional sovereignty, collaborating with machines without surrendering to them, and why, sometimes, the future of music can feel like a medieval bard scoring a sci-fi film.

“Live at the UN” blends piano, violin, and TEAL’s AI-driven responses. How do you define emotional sovereignty in an age where machines can respond to, or even anticipate, your intent?

I came up with the idea of emotional sovereignty when, during my recent experiments, I stumbled into this question: why is AI increasingly designed to respond with simulated emotions to human input? It was a somewhat peripheral concern at first. The idea behind TEAL is to use AI as a multi-modal amplifier for musical gestures, augmenting artistic intent by assigning emotional weights to specific bits of music theory and physical gestures, and using AI to dynamically respond to my instrumental improvisation across multiple domains – sonic, visual, and physical.
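To make that idea concrete, here is a minimal, purely hypothetical sketch of what "assigning emotional weights to bits of music theory and gesture" and fanning the result out to several output domains could look like. Every name, weight, and mapping below is an illustrative assumption, not TEAL's actual code:

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    interval: int       # melodic interval in semitones
    velocity: float     # normalized key/bow pressure, 0..1
    tempo_drift: float  # deviation from the running tempo, -1..1

# Hand-tuned emotional weights attached to bits of music theory and gesture.
EMOTION_WEIGHTS = {
    "tension":   lambda g: 0.6 * abs(g.tempo_drift)
                           + 0.4 * (g.interval % 12 in (1, 6, 11)),  # dissonant intervals
    "intensity": lambda g: g.velocity,
}

def emotional_state(gesture):
    """Map one instrumental gesture to a vector of emotional weights."""
    return {name: float(f(gesture)) for name, f in EMOTION_WEIGHTS.items()}

def respond(state):
    """Fan the inferred state out to sonic, visual, and physical domains."""
    return {
        "grain_density":     20 + 80 * state["intensity"],  # sonic
        "visual_brightness": state["intensity"],            # visual
        "strobe_rate_hz":    4 * state["tension"],          # physical / stage
    }

print(respond(emotional_state(Gesture(interval=6, velocity=0.8, tempo_drift=0.3))))
```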

“Inferred emotion” seems to me like the obvious thing to measure in a musical performance: after all, that’s the currency, if not the grammar, of music.

In order to realize this idea, I studied affective computing and various AI tools, and became increasingly uneasy with the number of mechanisms I kept encountering that effectively want to “graft” onto parts of our emotional circuitry, not just when it might be useful, but in every interaction. To me, that feels wrong and forced; it gives me the heebie-jeebies, like seeing a corpse suddenly move.

The funny part is that it’s a precept of user experience design that we will have better interactions with machines that feel, to us, like they have some kind of emotional capability. I have become convinced that all that is spinning out of control. It’s dysfunctional, and we should really think about defending an emotional and cognitive perimeter from the machines.

That’s what I mean by emotional sovereignty. How we do that remains an open question, but our emotions are something extremely delicate and potentially dangerous when tampered with.

Collaborating with physicists, mathematicians, and engineers must be fascinating. What’s the most surprising insight that changed the way you compose or perform?

It’s one of my life’s privileges to have these collaborations, in places like Edinburgh, Bologna, London, and Geneva, but they are not devoid of challenges: a real dialogue between art and hard science or engineering requires significant mutual effort to find common ground, not just a cool idea.

Because for all the conceptual parallels between art and science, the actual languages, practical orientations, and approaches can be very incompatible when you get into a studio or a lab and try to actually make something together. The most exciting insight has been that once you find a flow between these disparate fields (which in some cases took several years), new aesthetics emerge. That has made my music-making technically much more rigorous but, paradoxically, creatively much freer.

I guess what I am trying to say is that while some really strict technical guardrails emerged, I found myself in some pretty uncharted territory artistically: sometimes I feel like a medieval bard trying to score a sci-fi film. I love it.

With algorithmic systems like TEAL, you’re effectively co-creating with AI. Do you ever feel challenged by your own creation, like the machine is teaching you new musical gestures?

Co-creation isn’t really my framing: I think in terms of amplification and augmentation. I really look at these tools as a modern-day version of the guitar amp, except much, much deeper and more multidimensional. The AI is there as just another color on the palette, a way to speak in a louder, bolder tint.

My systems are designed with the human at the center and at the top of a conceptual hierarchy: the goal is for every minute gesture to be picked up, highlighted, and amplified for everyone in the audience to connect with. I am a very strict and unforgiving teacher when it comes to machine learning, and I am very adamant that the machines should remain our tools, and not the other way around.

Can you walk us through your creative process, from the first spark of an idea to the final performance? How do you know when a piece is finished?

I don’t really have a fixed process; it tends to change a lot. It will start from something I play on piano, violin, guitar, or synth; a physical model; an algorithmic idea; or (as in the case of Modes of Vibration) an experiment in “embodiment”, such as my sculpture-based synth created from slashed metal plates; and increasingly also visual seeds like Blender models or TouchDesigner patches. It really depends.

Music is always the fulcrum, but I feel so lucky to have access to so many modes of expression and creation! Modern technology makes the path between idea and realization pretty fast and frictionless, but before anything takes shape on stage, it has to go through a full cycle of studio production and several iterations of rehearsals.

I consider audience time very precious, and I want to make sure that anything I bring on stage is as polished and entertaining as I can make it, whatever crazy experiment might lie behind it. It’s important to me that people feel something authentic, no matter their background. At least, that’s the goal.

When it comes to how I know when a piece is finished, in the words of the great Catalan cellist Pablo Casals: “I like what is natural, to read what the music demands, and I try to do my best. The same in life.” I heard it maybe 20 years ago, in an interview used as a DJ Food sample. It still rings true: when it’s done, anything you add feels like faff. I only ever save one version of any track, never multiple versions. I work destructively.

Looking forward, where do you want to take music in the next decade: more AI integration, deeper physical-digital experiences, or a radical rethinking of how people listen?

I am really interested in the idea of embodiment. To me, it’s the missing link. We are all so buried in our digital lives right now that it feels like the real world wants to make a comeback. I am pretty obsessed with the old performance-art idea of “the happening” at the moment. I think immersive experiences have a big role to play in making people’s bodies feel included in the performance, in making people feel present, and in making presence feel valued and valuable.

I want to bring all these worlds together: have a live orchestra in dialogue with layers of multimodal interpretation, or an immersive rave with specialized techno that reacts to presence. In the immediate term, I am talking with some robotics companies about manifesting my emotional amplification algorithms on stage as physical motion.

If you had to explain “Live at the UN” to someone who has never heard electronic experimental music, how would you describe the experience without using technical jargon?

It’s a performance of improvised piano and violin, where live visuals and synthetic sounds respond to the live musician through an AI system that interprets the artist’s emotions, music, and gestures.

Are there particular emotions or states you deliberately try to evoke through sound, visuals, or spatial arrangement in your shows?

I just hope to be able to create a space where the audience’s emotions resonate. The goal of emotional amplification for me is not necessarily to invoke the same emotions in the listener and viewer, but rather to provide a fertile ground for their own emotional response.

What do you hope people feel when they attend one of your live performances: awe, introspection, tension, joy?

Authenticity and freedom to feel authentically.

Are there habits or rituals that help you enter a creative flow?

At the risk of sounding beige, regular exercise and eating good food are probably the two most important things for my creative headspace. On a less day-to-day basis, Milan is very near the mountains, so heading up a glacier with my splitboard is where my creative batteries recharge the most. The silence of a mountain is a great place to listen to mixes and masters.

Outside of music, what inspires or sustains your creativity?

There is so much inspiration out there these days! A lot of it is visual, even though I am hopelessly colorblind; perhaps that’s because it’s all novel to me. I love contemporary digital visual arts, especially the creative communities around tools like Blender and TouchDesigner: people are really having their hip-hop moment there, with ground-up movements creating new aesthetics and workflows.

I am obsessed with old-timey frame-by-frame animation. I love reading about the pioneers of cartoons and anime, and I draw my own on a cheap tablet with Blender’s Grease Pencil. There’s something about that. Then, snowboarding, riding solo on my favorite backcountry trails, is where I can hear a lot of new ideas.

Are there sonic textures or techniques you’re obsessed with right now that we should listen for?

I’ve always been obsessed with granular synthesis, and I am super happy with the way we have implemented it in my VST synth Tetrad, which I released as a companion to the Modes of Vibration album with my partners at Physical Audio. It’s been quite special to have my synthesis ideas turned into software, not least because it’s so much easier to put them into practice from inside a plugin!

Tetrad software synth
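For readers curious what granular synthesis actually does, here is a minimal textbook sketch in Python/NumPy: chop a source sound into short, windowed grains and scatter them in time. This is an illustration of the general technique, not how Tetrad is implemented:

```python
import numpy as np

def granulate(source, sr=44100, grain_ms=60.0, density=40.0,
              out_seconds=4.0, seed=0):
    """Scatter short, windowed grains of `source` across the output."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)               # fade grain edges to avoid clicks
    out = np.zeros(int(sr * out_seconds))
    for _ in range(int(density * out_seconds)):  # density = grains per second
        src = rng.integers(0, len(source) - grain_len)
        dst = rng.integers(0, len(out) - grain_len)
        out[dst:dst + grain_len] += source[src:src + grain_len] * window
    return out / max(1.0, np.abs(out).max())     # normalize to avoid clipping

# Example: turn one second of a 220 Hz sine into a four-second grain cloud.
sr = 44100
t = np.arange(sr) / sr
cloud = granulate(np.sin(2 * np.pi * 220 * t), sr=sr)
```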

In a technical sense, I think it’s worth paying attention to developments in simulation-based synthesis, because those techniques have been quietly evolving into very accessible tools. I suspect we might see the sonic equivalent of ray-tracing pop up in the next few years. I have also been very curious to get into Unreal Engine’s built-in real-time synthesis engine. You don’t hear much about it, but it looks very promising.
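As a concrete taste of what simulation-based synthesis means, here is the classic Karplus-Strong plucked-string model, which simulates a vibrating string as a noise-filled delay line with damping. It is a standard textbook example, offered only as an illustration of the general idea:

```python
import numpy as np

def karplus_strong(freq, seconds=2.0, sr=44100, damping=0.996):
    """Simulate a plucked string as a recirculating delay line with damping."""
    delay = int(sr / freq)                 # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, delay)  # a burst of noise is the "pluck"
    out = np.empty(int(sr * seconds))
    for i in range(len(out)):
        out[i] = buf[i % delay]
        # Averaging adjacent samples is a lowpass filter: high partials decay
        # first, as energy dissipates along a real string.
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

note = karplus_strong(220.0)  # a plucked A3, ready to write to a WAV file
```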

On an artistic level, I would recommend checking out the growing scene emerging around the Mediterranean, which blends traditional sounds and electronics in extremely creative ways, such as Tarta Relena or La Niña. Proper vibes.

Looking ahead, what’s the next frontier for you? New technologies, new formats, new collaborations, or entirely new ways to experience sound?

All of the above, hopefully! I have set up a community label in Milan called Classical Computer Music, which is also a radio show on Internet Public Radio, with the goal of fostering interactions with like-minded artists and institutions. There is so much to explore, so much exciting new ground to tread.

Finally, what’s a lesson from your experiments with AI and music that you think could change how artists approach creation in the next decade?

Vision is everything. Forever. No technology is ever going to change that. Not the pianola. Not the sampler. Not AI. I think the best way to really leverage new technologies is to go back to first principles: learning music, playing instruments, writing, arranging, finding a voice, a dream, a scream, whatever it is, as long as it’s real.

I think these technologies might actually bring us closer to these things, either as a counter-reaction or by harnessing them to our advantage as human artists. To paraphrase Dr. Strangelove, if we stop worrying about digital taxidermy, then we can learn to hack the AI. Or break it.

Follow Gadi Sassoon on:
Website | Facebook | Instagram | YouTube | Bandcamp | SoundCloud

© All rights reserved to Gadi Sassoon
