TouchDesigner will add POPs for GPU-powered 3D operations


Points, polygons, point clouds, particles, line strips – and therefore simulations of everything from fog to cloth and collision – are now getting a powerful set of operators in TouchDesigner. It’s a generational leap forward for the tool that powers a lot of the world’s most bleeding-edge visuals. Details, plus more from Berlin:

Across all the dataflow / visual patching environments, there has traditionally been some kind of cliff. Up to the edge of that cliff, you can work entirely visually by connecting boxes with patch cords. Go past it, though, and tasks become either impossible or too slow, and you drop back to lower-level code. On the sound side, that means lower-level DSP; on visuals, it very often means resorting to shader code.

POPs are Point Operators, a new family of largely GPU-accelerated operators for manipulating 3D point data. They join Surface Operators (SOPs, which are CPU-bound), Texture Operators (TOPs), and Channel Operators (CHOPs). You can do some of this with the existing operators now, but typically you would need to add shader code for GPU acceleration and greater functionality. Unlike computation in a SOP, you aren’t bound by CPU memory and processor constraints; POPs are optimized for both memory and GPU usage. And you can keep patching without writing shaders. I look forward to seeing fewer framerate shudders in novice Touch work.
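To make the idea concrete, here is a minimal conceptual sketch in plain NumPy (not TouchDesigner's API, and not how POPs are implemented): a point operator is essentially a data-parallel function applied independently to every point in a geometry buffer, which is exactly the property that lets it run efficiently on the GPU.

```python
import numpy as np

# A "point cloud" here is just an N x 3 array of positions.
rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(10_000, 3))

def displace_along_y(pts, amplitude=0.1, frequency=4.0):
    """A toy point operator: displace each point along Y by a sine
    of its X coordinate. Every point is computed independently of
    the others -- the data-parallel shape that maps well to a GPU."""
    out = pts.copy()
    out[:, 1] += amplitude * np.sin(frequency * pts[:, 0])
    return out

displaced = displace_along_y(points)
print(displaced.shape)  # (10000, 3)
```

Chaining several such per-point transforms is, loosely, what wiring POPs together does, with the buffers staying in GPU memory between operators instead of round-tripping through the CPU.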

Oh, and if you do work with compute shaders (and C++), you’ll still find this useful, because you can do pre- and post-processing of data. This isn’t the first environment to attempt this, but it is shaping up to be one of the most extensive.

POPs are just a preview for now, shown at an event in Berlin (while I’m on another continent for the moment). But Derivative promises they’ll appear in experimental builds later this year.

This isn’t just a “code” versus “visual programming” question. (Oddly enough, the late Max Mathews once told me he really preferred textual coding because it was more like writing – a bit ironic for the namesake of Max, but I do understand what he meant!) Rather – connecting it back to Mathews’ original MUSIC language (the predecessor of Csound) – if you can work with defined unit operators, the result is easier to share and understand, more consistent, more portable, and less likely to break with updates. Heck, it’ll also typically be easier for you to understand if you’ve been away from it for a while.

That’s theoretically true whether you’re in a visual patching environment or a textual code / livecoding environment. Nothing against shader code, which is itself designed to be portable, but I don’t think anyone would argue it’s particularly easy to read. Sound environments have already grown low-level DSP patching layers like gen~ in Max or Core in Reaktor; visual environments are due for the same kind of evolution.

TouchDesigner is such a reference point for media art and live visuals that any change like this is a big deal—and might even influence other environments.

Berlin TouchDesigner event, virtually

Speaking of that Berlin event, the folks at Node Institute have kindly posted the full video. Day 1 launches POPs in a short presentation by Greg Hermanovic – plus Chagall van der Berg showing her real-time motion capture, Stas Glazov on Stable Diffusion integration with Python, games, teaching, tools, and more:

Day 2 is an international community takeover, with topics like fluid simulation, virtual production, museum applications, art, and more Python:

Day 3 has more community presentations with artists Torin Blankensmith, Florence To, Martina Assum, Roy & Tim Gerritsen, Wieland Hilker, and more (with a large laser representation and some AI):
