
AI Art Music Video

New tools: Lucid Sonic Dreams, Traverser and Hallucinator (also: New music video)

johannezz Mar 20

I have a new video, and the link is below this post. Here is a rundown of each of the three tools that I used to make the graphics. Each of them is worthy of a dedicated blog post, and it is quite likely I will write those sometime in the future.

To begin with, Lucid Sonic Dreams by Mikael Alafriz has been a bit of a sensation in the generative arts community since it was introduced a week ago. It is a highly customizable tool for producing AI videography set to a music track. By default it uses StyleGAN2-ADA, and you can easily load graphics from fifty or so pretrained models, an incredible wealth of images. However, you can use other generators such as BigGAN if you like, and what’s more, you can even define new effects.

Out of the box, LSDreams acts like a glorified oscilloscope or a WinAmp visualization: the images writhe, twitch and jump around to the music. The problem, in my humble opinion, is that this kind of throbbing is fine for colored lights in a disco, but with complex GAN-derived imagery it can actually distract from the music it is supposed to enhance. Since I demand more from my visuals than finger-snapping (I want to see ballet), I had to do some experimenting to get closer to my requirements.

It turned out that less was more. The two main “axes” of movement in LSDreams are harmonic motion and percussive pulse (there is a third one, tied to pitches, but it works only with some models). Once I disabled pulse and all flashing effects, I was left with just two settings. The first, speed_fpm, determines the number of latent vectors displayed per minute, or in other words how often the image on the screen changes. The second, motion_react, ranges from 0 to 2 and dictates the degree to which the images are transformed by the cumulative change in the music’s amplitude.
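To make the role of those two settings concrete, here is a rough pure-Python sketch of the idea (my own illustration, not LSDreams’ actual code): speed_fpm fixes how many latent keyframes appear per minute, while motion_react simply scales how strongly the accumulated amplitude change pushes the current latent around.

```python
def keyframes_for(duration_sec, speed_fpm):
    """speed_fpm latent vectors per minute => total keyframes shown."""
    return max(2, round(duration_sec / 60 * speed_fpm))

def motion_step(amplitude_change, motion_react):
    """Scale a frame-to-frame amplitude change (0..1) into a
    latent-space step size. motion_react in [0, 2] amplifies
    or damps the reaction."""
    return motion_react * amplitude_change

# A 3-minute track at speed_fpm=10 cycles through 30 latent vectors.
print(keyframes_for(180, 10))   # 30
# Silence produces no motion; a loud passage moves the latent a lot.
print(motion_step(0.0, 0.9))    # 0.0
print(motion_step(0.8, 0.9))    # 0.72
```

This is why disabling pulse calms the video down: with only motion_react left, quiet passages barely move the image at all.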

Lucid Sonic Dreams example, demonstrating how the images move during 1. silence, 2. pads, 3. percussion. Style = “microscope images”, speed_fpm = 12, motion_react = 0.5. Samples come from here and here.

With just those two settings, I could produce material that greatly pleased me. In my recent music video I use speed_fpm=10 and motion_react=0.9, and the style (i.e. pretrained model) is “textures”. I really like how the graphics go nuts when Jukebox starts to improvise on an acid house riff I made with Propellerhead ReBirth some fifteen years ago.
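For reference, those settings translate into a call roughly like the following. This is hedged from memory of the LSDreams interface, so treat the parameter names as approximate and check the library’s README for the exact signature:

```python
def make_video(song_path, out_path="out.mp4"):
    # Import inside the function: lucidsonicdreams needs a GPU
    # runtime and downloads pretrained weights on first use.
    from lucidsonicdreams import LucidSonicDream

    L = LucidSonicDream(song=song_path, style="textures")
    L.hallucinate(
        file_name=out_path,
        speed_fpm=10,       # 10 latent vectors per minute
        motion_react=0.9,   # strong reaction to amplitude change
        pulse_react=0,      # disable percussive throbbing
        flash_strength=0,   # disable flashing effects
    )
    return out_path
```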

I have barely scratched the surface of LSDreams’ versatility, but I can already envisage it becoming a major, if not the only, generative tool I use in making these music videos.

The “melting glass in the air” sequence at the beginning of my music video (below this post, remember?) comes from Ryan Murdock’s (aka Advadnoun’s) “Traverser” or, more officially, BigSleep4Video. It adds to BigSleep (discussed in this earlier post) another “traversing” network, which is fed with random noise and from there starts to approach multiple coordinates in latent space. It is set to optimize two things: agreement with the text description, and the difference between the images it finds. You thus end up with many dissimilar images, each of which nevertheless aims to illustrate the text prompt. Lastly, Traverser does a spherical interpolation between the images, so you get a nice “circular” animation suitable for looping.
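The spherical interpolation at the end is the standard “slerp” commonly used for GAN latents; a minimal pure-Python version (my sketch, not Traverser’s actual code) looks like this:

```python
import math

def slerp(v0, v1, t):
    """Spherical interpolation between latent vectors v0 and v1.
    Unlike a straight line, it keeps intermediate points at a
    plausible distance from the origin, which GANs tend to prefer."""
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    omega = math.acos(max(-1.0, min(1.0, dot / (norm0 * norm1))))
    so = math.sin(omega)
    if so < 1e-8:  # nearly parallel vectors: fall back to lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    k0 = math.sin((1 - t) * omega) / so
    k1 = math.sin(t * omega) / so
    return [k0 * a + k1 * b for a, b in zip(v0, v1)]

# Halfway between two perpendicular unit vectors lands on the arc.
print(slerp([1.0, 0.0], [0.0, 1.0], 0.5))  # ~[0.7071, 0.7071]
```

Sweeping t from 0 to 1 between each pair of images, and back to the first image at the end, is what produces the loopable “circular” animation.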

Traverser video of text input “Chemical and alchemical experiments in futuristic laboratory”

The Traverser, along with many other innovative tools, is available to members of Ryan’s Patreon group, where you can also find the first release of the last tool discussed here, tentatively called “The Hallucinator”. This is my own project, although I am of course indebted to the help of fellow artist-engineers in the aforementioned group. It is an improved story-to-hallucination notebook that takes multiple lines of text and produces seamless morphs between images that try to illustrate the texts. I will be making a publicly available version soon, at which point it will get a blog post of its own. For now, you can see it in the ending sequence of my music video, where the images correspond to these sentences:

“a giant butterfly filling up the sky”,

“painting of a close-up of cats face”,

“photo of the coronation of queen of scotland”,

“sad ending of a french art cinema”
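The control flow behind those morphs can be sketched roughly like this (my own illustrative pseudostructure with hypothetical names; the real notebook drives BigSleep/CLIP underneath): each sentence becomes a latent keyframe, and frames are interpolated between consecutive keyframes so the video flows seamlessly from one illustration to the next.

```python
def morph_schedule(prompts, frames_per_morph):
    """For each consecutive pair of text prompts, emit
    (prompt_a, prompt_b, t) triples, with t running 0 -> 1
    over the morph between their two latent keyframes."""
    schedule = []
    for a, b in zip(prompts, prompts[1:]):
        for i in range(frames_per_morph):
            t = i / (frames_per_morph - 1)
            schedule.append((a, b, t))
    return schedule

prompts = [
    "a giant butterfly filling up the sky",
    "painting of a close-up of cats face",
    "photo of the coronation of queen of scotland",
    "sad ending of a french art cinema",
]
# Four sentences give three morphs of frames_per_morph frames each.
print(len(morph_schedule(prompts, 30)))  # 90
```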

Without further ado, here’s the video. Thanks for reading!

PS. I am now on Instagram, so come and take a look! (Still haven’t figured out how to show my social links on these pages…!)


  1. Roo March 24, 2021

    This is fascinating! I’ve just been discussing AI and what it can do, as opposed to what it can’t do that only humans are able to, in my drama pedagogy class. The general consensus seems to be that AI cannot be creative, that this is somehow a singular human quality. I’m really not so sure.

