Won Pound by Won Pound is released!

This post is one in a series about GANce

Close-readers, twitter-followers and corporeal-comrades will have already beheld the good news that Won Pound by Won Pound has been released! This is Won’s second album-length project (the first, of course, being Post Space, released in 2018), and it graces listeners’ ears courtesy of Minaret Records, a California jazz label.

The record is accompanied by an album-length music video synthesized with GANce, containing a completely unique video for each track. These 93,960 frames have been the ultimate goal of this project since its inception, and they serve as the best demonstration of what GANce can do. Within the video (linked below), the video for ‘buzzz’ is a personal favorite, demonstrating the three core elements of a projection file blend:

Continue reading →

GANce Overlays

This post is one in a series about GANce

As it stood, the three main features that would comprise the upcoming collaboration with Won Pound (slated for release mid-April) were:

  • Projection Files (using a StyleGAN2 network to project each of the individual frames in a source video, resulting in a series of latent vectors that can be manipulated and fed back into the network to create synthetic videos)
  • Audio Blending (using alpha compositing to combine a frequency-domain representation of an audio signal with a series of projected vectors)
  • Network Switching (feeding the same latent vector into multiple networks produced in the same training run, resulting in visually similar results; a rough sketch of how these pieces fit together follows this list)
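To make that pipeline a little more concrete, here is a hedged, illustrative sketch of how the three pieces could compose. This is not GANce’s actual API; it is plain numpy, and every name in it (spectrogram_features, alpha_blend, the stand-in latents and audio) is hypothetical:

```python
import numpy as np

def spectrogram_features(samples: np.ndarray, num_frames: int, latent_dim: int) -> np.ndarray:
    """Reduce an audio signal to one latent-sized, frequency-domain row per video frame."""
    rows = []
    for chunk in np.array_split(samples, num_frames):
        spectrum = np.abs(np.fft.rfft(chunk))  # frequency-domain representation of this slice
        # Resample the spectrum down to the latent dimensionality.
        rows.append(np.interp(
            np.linspace(0, len(spectrum) - 1, latent_dim),
            np.arange(len(spectrum)),
            spectrum,
        ))
    stacked = np.stack(rows)
    # Normalize to roughly the scale of StyleGAN2 latent vectors.
    return (stacked - stacked.mean()) / (stacked.std() + 1e-8)

def alpha_blend(latents: np.ndarray, audio_features: np.ndarray, alpha: float = 0.35) -> np.ndarray:
    """Alpha-composite per-frame audio features onto the projected latent vectors."""
    return (1.0 - alpha) * latents + alpha * audio_features

# Stand-in inputs: a projection file's latents and one minute of audio at 44.1 kHz.
num_frames, latent_dim = 24 * 60, 512
projected_latents = np.random.randn(num_frames, latent_dim)
audio_samples = np.random.randn(44_100 * 60)

blended = alpha_blend(projected_latents, spectrogram_features(audio_samples, num_frames, latent_dim))

# Network switching would then feed the same blended vectors through several networks
# saved during one training run, e.g. frame[i] = networks[i % len(networks)](blended[i]).
```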

These three features are detailed in the previous post; their combined effect can be seen in this demo:


Knowing we had enough runway to add another large feature to the project, and feeling particularly inspired following a visit to Clifford Ross’ exhibit at the Portland Museum of Art, I began exploring the relationship between the projection source video and the output images synthesized by the network.
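As a purely hypothetical illustration of that exploration (again, not GANce’s actual implementation), one could score how closely each synthesized frame resembles its projection source and blend the source back in wherever the two agree; the function names and threshold below are invented for the sketch:

```python
import numpy as np

def frame_similarity(source: np.ndarray, synthesized: np.ndarray) -> float:
    """Normalized cross-correlation between two frames of identical shape."""
    a = source.astype(np.float64).ravel()
    b = synthesized.astype(np.float64).ravel()
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.dot(a, b) / a.size)

def maybe_overlay(source: np.ndarray, synthesized: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Blend the source frame over the synthesized frame when the two agree closely."""
    if frame_similarity(source, synthesized) >= threshold:
        return (0.5 * source.astype(np.float64) + 0.5 * synthesized.astype(np.float64)).astype(np.uint8)
    return synthesized

# Stand-in frames; in practice these come from the source video and the network output.
source_frame = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
output_frame = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
result = maybe_overlay(source_frame, output_frame)
```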

Continue reading →

Introducing GANce

This post is one in a series about GANce

In collaboration with Won Pound for his forthcoming album release via Minaret Records, I was recently commissioned to lead an expedition into latent space, encountering intelligences of my own haphazard creation.

A word of warning:

This post, the subsequent ones, and the GitHub repositories should all be considered toy projects. Development thus far has been results-oriented, with my git HEAD chasing whatever seemed confusing and exciting. The goal was to make interesting artistic assets for Won’s release, with as little bandwidth as possible devoted to overthinking the engineering side. This is a fun role reversal; typically, the things that leave my studio look more like brushes than paintings. In publishing this work, the expected outcome is also inverted from my usual desire to share engineering techniques and methods: I hope that sharing the results shifts your perspective on the possible ways to bushwhack through latent space.

So, with that out of the way, the following post is a summary of development progress thus far. Here’s a demo:

There are a few repositories associated with this work:

  • GANce, the tool that creates the output images seen throughout this post.
  • Pitraiture, the utility to capture portraits for training.

If you’re uninterested in the hardware/software configurations for image capture and GPU work, you should skip to Synthesizing Images.

Continue reading →