
Watch Hexorcismos make AI performative and personal in this live show


In contrast to big data, automation, and bland prompts, artist Moisés Horta Valenzuela (Hexorcismos) is making machine learning into a live AV instrument using his own data sets and an idiosyncratic, technoshamanistic approach. Check out the latest iteration of his live show Nahualtia Tlatzotzonalli (Shapeshifter Musician).

We’ve followed the Berlin-based, Mexican-born artist for a while – see the links below – but this latest iteration deserves an update of its own. It’s fresh, too, from June’s KI-Camp. Details:

KI-Camp 2023 | AI concert dark allies, bright futures

Performance: Nahualtia Tlatzotzonalli
Artist: Hexorcismos
https://linktr.ee/hexorcismos
https://www.instagram.com/hexorcismos/

Video: Ranav Adhikari, Boiling Head Media
https://www.boiling-head.com/

The KI-Camp, organized by the German Federal Ministry of Education and Research (BMBF) and the German Informatics Society (GI), brings together AI talents and renowned AI experts from all over the world. In interactive discussion rounds, lecture sessions, and hands-on formats, the free convention addresses future transdisciplinary issues from the fields of society, sustainability, health, art, and niche phenomena in AI research. At the closing of the #KICamp23 event, participants danced along to the AI concert ‘dark allies, bright futures’.
https://kicamp.org/en/
https://www.instagram.com/kicamp3000/

The performance builds on his SemillaAI project, which focuses on training on local, artist-provided data sets (and has been accompanied by invitations to various artists to collaborate).

And like a handful of artists in the sound community, he’s making heavy use of RAVE – the Realtime Audio Variational autoEncoder. Quite unlike a lot of the currently trending directions in machine learning, RAVE is something you can easily run locally and train on your own data. It deserves a longer article, but to make a long story short, it turns neural networks into a tool for audio deconstruction and reconstruction, for manipulating your own sound sets, and for audio processing (with style transfer). Artistically speaking, I’d even argue it has more in common with techniques like granular synthesis than with “show me a picture of Mario baking a pizza” models ripped from the entire Internet’s worth of artwork.

Check the official IRCAM repository:

https://github.com/acids-ircam/rave
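
If you’re curious what that looks like in practice, here’s a minimal sketch of pushing audio through a trained RAVE model in Python. It assumes a TorchScript export with encode/decode methods (the kind of export used by nn~ and similar tools); the file names are placeholders, not anything from Moisés’ sets.

```python
import torch
import torchaudio

# Load an exported RAVE model (a TorchScript .ts file). "percussion.ts" is a
# placeholder name for whatever model you have trained or downloaded.
model = torch.jit.load("percussion.ts").eval()

# Load audio and fold it to mono; it should match the sample rate the model
# was trained at (commonly 44.1 or 48 kHz).
audio, sr = torchaudio.load("input.wav")
x = audio.mean(dim=0, keepdim=True).unsqueeze(0)  # shape: (1, 1, samples)

with torch.no_grad():
    z = model.encode(x)                 # audio -> compact latent sequence
    z = z + 0.5 * torch.randn_like(z)   # nudge the latents to mutate the sound
    y = model.decode(z)                 # latents -> resynthesized audio

torchaudio.save("output.wav", y.squeeze(0), sr)
```

Feed a model trained on one sound set with audio from a completely different source, and the decode step is where the rough “style transfer” character comes from.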

But Moisés’ own RAVE-Latent Diffusion is particularly worth a look:

RAVE-Latent Diffusion is a denoising diffusion model designed to generate new RAVE latent codes with a large context window, faster than realtime, while maintaining music structural coherency.

https://github.com/moiseshorta/RAVE-Latent-Diffusion
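
To make that description concrete: generation happens in RAVE’s compact latent space rather than at the audio sample rate, which is what makes it faster than realtime. Here’s a purely illustrative sketch of the decode side, with random noise standing in for the diffusion sampler’s output – none of the names below are the repo’s actual API.

```python
import torch
import torchaudio

# Hypothetical exported RAVE model, used here purely as a decoder.
rave = torch.jit.load("percussion.ts").eval()

# Shape of the latent sequence; both values depend on the trained model.
latent_dim = 16     # number of latent channels in the RAVE model
num_frames = 2048   # the "context window" in latent frames; each frame decodes to many samples

# Stand-in for the diffusion model's output. In RAVE-Latent Diffusion this
# sequence comes from iterative denoising, which is what gives the result
# long-range musical structure instead of aimless noise.
z = torch.randn(1, latent_dim, num_frames)

with torch.no_grad():
    audio = rave.decode(z)   # (1, 1, samples) audio tensor

torchaudio.save("generated.wav", audio.squeeze(0), 48000)  # assuming a 48 kHz model
```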

For an earlier example of his work with live visuals, here’s a 2021 performance – back in those virtual-only pandemic days.

Electromagnetic fields and electricity are the life-force of the digital realm and Artificial Intelligence. In this exclusive performance for the AI and Music Festival, Hexorcismos, aka multidisciplinary Berlin-based artist Moisés Horta, takes a multi-sensory approach for an animistic ritual featuring A.I. synthesized visuals and sounds, alongside the chilling effects of electromagnetic microphones, exploring the non-human sonic qualities of digital devices. Recorded live in the XR room at Factory Berlin, these energies are channeled through a techno-shamanistic ritual into an intense soundscape aided by a custom-built, neural network-powered Djembe A.I. In collaboration with Factory Berlin.

From the same time, it’s worth reading this proposition on decolonizing AI – a call that has only grown more urgent as large-dataset machine learning has evolved since:

Towards Decolonizing AI: Propositions by Isabella Salas, Nora Golic and Moisés Horta Valenzuela

And for some past background:

By the way, I’m eons behind writing up an interview with artist Patten – coming, finally, in the next days, right after I get over this fresh cold I caught!
