XCV || Cut Copy Paste is a project investigating ‘machine listening’ through a series of experiments with large-scale audio-visual datasets and algorithms. The creative development outcome will be a suite of tools for audio-visual composition, interactive installation and/or live performance, and will feed into Liquid Architecture’s City of Melbourne programs.

Support Video – Sampling of Google AudioSet

The support video on the left consists of 32 video clips selected from the Google AudioSet in order of download (effectively at random). I have only edited the length (in and out points) of each clip and scaled the clips to fill the frame.

The Google AudioSet consists of 2 million audio clips categorized into “an expanding ontology of 632 audio event classes…to provide a common, realistic-scale evaluation task” for the application of AI and machine learning to audio event detection, recognition and classification.

The audio clips in the dataset were drawn from a larger library, YouTube-8M, which contains 8 million labelled video clips. Researchers usually download and pre-process the audio clips using a suite of scripts (like this one) that automates the process. By modifying the download script it is possible to keep the video component of these clips, which adds an uncanny affect and often humorous aspect to the “audio events”.

You can download my version of the script here (you’ll also need Python 3 and ffmpeg).
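To illustrate the core of the modification, here is a minimal Python sketch, assuming the standard AudioSet segment-CSV format (YTID, start seconds, end seconds, labels). The function name, source filename and output path are hypothetical; the actual script linked above also handles the downloading and batching:

```python
def build_ffmpeg_cmd(csv_line, src="input.mp4", out_dir="clips"):
    """Build an ffmpeg command that trims a downloaded video to its
    AudioSet segment while keeping the video stream, not just the audio."""
    # AudioSet segment CSV lines look like:
    #   YTID, start_seconds, end_seconds, "label1,label2"
    ytid, start, end, _labels = [f.strip() for f in csv_line.split(",", 3)]
    start, end = float(start), float(end)
    out = f"{out_dir}/{ytid}_{int(start)}_{int(end)}.mp4"
    # -ss/-to trim to the labelled segment; stream-copying both streams
    # keeps the video component that audio-only pipelines discard.
    return ["ffmpeg", "-ss", str(start), "-to", str(end),
            "-i", src, "-c", "copy", out]
```

Passing the resulting list to `subprocess.run` would perform the trim; `-c copy` avoids re-encoding, so the cut is fast and lossless.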

Support Letter – Joel Stern, Artistic Director, Liquid Architecture

Download Support Letter PDF

I am pleased to write to confirm my support of Matt Gingold’s application for a City of Melbourne Quick Response Grant for his project XCV || Cut Copy Paste.

Over the last several months I have been in conversation with Matt about how he might contribute to Liquid Architecture’s ‘Machine Listening’ program, an ongoing project investigating the ways in which our sonic worlds are captured, ordered, aggregated, transformed and deployed by new technologies, networks, and formations of power. Technology-literate artists like Matt are crucial to the relevance of this project, and XCV is a prime example.

A large part of the ‘Machine Listening’ project is about experimentation and cross-disciplinary collaboration. We are particularly interested in how the XCV project will not only develop new audio-visual work, but will result in the creation of collection, composition, analysis and performance tools for use by other artists. As such, we are very excited to support a project that is as critical and technical as it is humorous and engaging.

We look forward to working with Matt toward further development and outcomes as part of our artistic program.

Joel Stern

Artistic Director

Support Material – Previous Works


Deep Listening is one of two key media experiences in the First Peoples permanent exhibition at the Bunjilaka Cultural Centre, Melbourne Museum (AU). Winner of the 26th annual American Alliance of Museums (AAM) Excellence in Exhibitions Competition (2014), this innovative exhibition focuses on Aboriginal Victorian culture, tradition, values and language.

Deep Listening is the result of a long-term collaboration with the staff of Museum Victoria and a broad spectrum of the Victorian Koori community. Comprising 50 interviews with Kooris from across the State, Deep Listening is a survey of community viewpoints and experience.

Custom software was developed to display up to 8 interviewees at once, across two curved, full HD screens – allowing the audience to truly engage in “the processes of deep and respectful listening”.


Multiple Presentations
Canada ― US ― Germany ― 2013-2016

Longing and Forgetting was created in collaboration with Profs Thecla Schiphorst and Philippe Pasquier during a residency at the School of Interactive Art and Technology, Simon Fraser University (Vancouver, CA). Originally commissioned as an interactive public artwork for the Surrey Urban Screen (Vancouver, 2013), Longing and Forgetting has had several presentations, including live performance, generative video projection (Generations, Vancouver, 2015) and real-time data visualization (Scores + Traces, New York, 2016).

As part of the Moving Stories research program at SIAT, Longing and Forgetting investigates the relationships between artificial intelligence and machine learning and higher-level meanings in aesthetic and kinesthetic gesture.

From video games to online love, from driving cars and flying planes to online trading and state-of-the-art military strategy – the notion of the ‘artificial agent’ is now the prevalent paradigm for embodying machines in virtual bodies.

The work presents the idea of generative choreographies amongst multiple video agents, or ‘digital performers’. Using a simple movement ‘alphabet’ that borrows from Laban concepts such as effort, weight, and space, the 12 physical performers were filmed traversing a climbing wall. This resulted in thousands of video clips which were then analysed and tagged according to basic movement properties.

As such, each video agent comprises many fragments of movement, which can be stitched together according to rules that govern the agent’s goals and actions. From the actions of these simple, multiple agents, complex movements and assemblages emerge.
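The stitching logic can be sketched roughly as follows. The tags, clip records and matching rule here are invented for illustration and are far simpler than the system used in the work; the idea is only that an agent chooses its next fragment so the movement reads as continuous:

```python
import random

# Hypothetical clip database: each fragment is tagged with Laban-style
# movement properties, including where the body starts and ends.
CLIPS = [
    {"id": "c01", "effort": "light",  "start": "low",  "end": "mid"},
    {"id": "c02", "effort": "strong", "start": "mid",  "end": "high"},
    {"id": "c03", "effort": "light",  "start": "mid",  "end": "low"},
    {"id": "c04", "effort": "strong", "start": "high", "end": "mid"},
]

def next_clip(current, clips, rng=random):
    """Pick a clip whose start position matches where the current clip
    ends, so consecutive fragments stitch into continuous movement."""
    candidates = [c for c in clips if c["start"] == current["end"]]
    return rng.choice(candidates) if candidates else rng.choice(clips)

def choreography(seed_clip, clips, length=4, rng=random):
    """Chain clips into a short generative phrase for one agent."""
    phrase = [seed_clip]
    for _ in range(length - 1):
        phrase.append(next_clip(phrase[-1], clips, rng))
    return [c["id"] for c in phrase]
```

Running many such agents in parallel, each with its own goals and rule set, is what produces the emergent ensemble behaviour described above.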


Collaborators: Prof. Thecla Schiphorst and Prof. Philippe Pasquier



The Perfect Artist explores creativity: the people who create and what it means to be creative. The project was exhibited as part of Present Tense, at the Australian National Portrait Gallery (ACT) from May to August 2010.

Artists, arts workers and administrators were invited from around Australia to have their portraits recorded, forming a unique and colourful record of 84 creative people ‘performing themselves’. In total there are over 50 hours of footage, edited into 1400 clips and catalogued in a database according to subject matter, emotional tone and performative criteria.

The portrait process was designed to enable a live performance involving a series of improvised vignettes. Whilst participants were provided with an overall structure, there was no rehearsal, script or traditional direction. Participants chose what to wear, what to bring and what to say.

Reflecting these live performances, the final installation consists of a computer ‘performing live’ in the exhibition space – generatively sorting, selecting and composing the video clips in real-time.

The Perfect Artist was made possible with support from: Arts Victoria, Aphids, PVI Collective, Performance Space, PACT, Head Quarters, Bill & George, Tiny Stadiums and the Australian National Portrait Gallery.



This work is the first of several exploring the boundary between banality and the sublime through the synthesis of physical and digital choreographies. It has been shown in both single- and multi-channel formats in Melbourne (2007), Sydney (Carriageworks, 2008), Malaysia (Melaka, 2009) and Taiwan (Museum of Contemporary Art, Taipei, 2012).

Each dancer was filmed individually while lying on the floor, with the camera positioned directly above them. Since the dancers were literally ‘grounded’ – forced to move their bodies ‘naturally’ in the most unnatural of poses – the illusion of flying, falling and floating is all the more extraordinary. The set was painstakingly lit so that the black background could be used without any keying or chroma effects.