As part of Masino Bay, we designed an installation for the D&AD Festival 2019: a generative collage that learns which images people like based on the interest they give the piece.
The project was sponsored by Shutterstock and GreatCoat Film.
The installation is made of three computers running three different programs:
A pose-capture program using OpenNI 2 (Open Natural Interaction), which analyses where people are looking and how they pose in front of the screen.
A dedicated machine learning computer, Nvidia's Jetson Nano, running a custom reinforcement learning model. The model learns which images appeal most to people by analysing how long they stay in front of the piece. It then controls what is shown on screen by sending OSC messages to the main collage app.
A collage app that pulls from a pool of videos and images and arranges them in an ever-changing way, with different effects and composition parameters. All of those parameters are controlled by the machine learning model (the OSC link is sketched below).
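The installation's actual OSC address space isn't documented here; the following is a minimal sketch, assuming hypothetical addresses like /collage/density and a hypothetical port, of how the Jetson could drive the collage app over OSC with openFrameworks' ofxOsc addon.

```cpp
// Minimal sketch of the OSC link between the learning model and the collage app (ofxOsc).
// The address names (/collage/...) and port are hypothetical; the real parameter set isn't documented here.
#include "ofMain.h"
#include "ofxOsc.h"

class CollageApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    float density = 0.5f;   // how many elements are collaged on screen
    float speed   = 0.2f;   // how fast the composition changes

    void setup() override { receiver.setup(9000); }   // listen for the Jetson's messages

    void update() override {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            if (m.getAddress() == "/collage/density") density = m.getArgAsFloat(0);
            if (m.getAddress() == "/collage/speed")   speed   = m.getArgAsFloat(0);
        }
        // ...use density and speed to rearrange the collage...
    }
};

// On the Jetson side, the reinforcement learning model would send something like:
//   ofxOscSender sender;
//   sender.setup("192.168.0.10", 9000);   // collage machine's address (hypothetical)
//   ofxOscMessage m;
//   m.setAddress("/collage/density");
//   m.addFloatArg(0.8f);
//   sender.sendMessage(m, false);

int main() { ofSetupOpenGL(1920, 1080, OF_WINDOW); ofRunApp(new CollageApp()); }
```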
Masino Bay is a creative technology studio formed by Simon Oxley (GreatCoat Film), Gemma Yin, and me.
For this project, we were helped by Malte Lichtenberg, a PhD student in machine learning, who handled the core machine learning algorithms. He also helped on site, alongside Jayson Haebich, babysitting the installation. Gemma handled the creative direction, and Simon the production and PR.
As part of the Genuine X team, I designed a prototype called Betty as a product design piece. The object is a memory jukebox with a very simple UI, so that elderly people can use it easily.
The prototype has a digiPad, an LCD screen, a conductive ink button, a volume potentiometer, and a power button. The user enters their date of birth, and the jukebox plays songs from when they were 15 to 25 years old (the year-range logic is sketched below the technology list).
Technology used: Raspberry Pi, electronic design, buttons, Fritzing, conductive ink, openFrameworks, DigiAMP+, audio processing on the Raspberry Pi.
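The selection logic isn't spelled out above; as a minimal sketch, assuming the jukebox simply filters its library by release year, the playback window could be computed like this:

```cpp
// Minimal sketch: derive the playback window (ages 15 to 25) from a birth year.
// The real Betty prototype's filtering code isn't shown here; this only illustrates the idea.
#include <iostream>

struct YearRange { int from; int to; };

YearRange playbackWindow(int birthYear) {
    return { birthYear + 15, birthYear + 25 };
}

int main() {
    YearRange w = playbackWindow(1948);
    std::cout << "Play songs released between " << w.from
              << " and " << w.to << "\n";   // 1963 and 1973
    return 0;
}
```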
I designed a prototype, using openFrameworks, for a pitch for HKSTP's new center in Hong Kong.
The prototype was a small designed object with a screen, a mini printer, three buttons, and a dial embedded in it. The user is prompted to agree or disagree with a series of questions on screen. Once the question stage is finished, abstract graphics appear on screen and rotate, disappearing at the top in synchronisation with the printing of the same graphics. The time spent on each question and the degree of agreement influence the final visuals (see the sketch below the technology list).
The project was done in 2 weeks.
Technology used: openFrameworks, Raspberry Pi, mini printer, Fritzing, buttons
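The exact mapping from answers to visuals isn't documented above; as a minimal sketch, assuming each answer records a response time and a dial-based agreement value, the influence on the final graphics could look like this (the parameter names are hypothetical):

```cpp
// Minimal sketch: turn question responses into parameters for the final abstract graphics.
// The real HKSTP prototype's mapping isn't documented; the parameters below are assumptions.
#include <algorithm>
#include <iostream>
#include <vector>

struct Answer {
    float secondsSpent;   // time spent on the question
    float agreement;      // dial position, 0 = disagree, 1 = agree
};

struct VisualParams {
    float complexity;     // more hesitation -> denser graphics (assumption)
    float hue;            // overall agreement -> colour shift (assumption)
};

VisualParams paramsFromAnswers(const std::vector<Answer>& answers) {
    float totalTime = 0.f, totalAgreement = 0.f;
    for (const Answer& a : answers) { totalTime += a.secondsSpent; totalAgreement += a.agreement; }
    float meanTime      = totalTime / answers.size();
    float meanAgreement = totalAgreement / answers.size();
    VisualParams p;
    p.complexity = std::min(1.f, meanTime / 10.f);   // ~10 s per question maps to full complexity
    p.hue        = meanAgreement;                    // 0..1, mapped to a colour later
    return p;
}

int main() {
    std::vector<Answer> answers = { {3.2f, 0.9f}, {8.5f, 0.2f}, {5.0f, 0.6f} };
    VisualParams p = paramsFromAnswers(answers);
    std::cout << "complexity " << p.complexity << ", hue " << p.hue << "\n";
    return 0;
}
```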
I designed applications in openFrameworks to make a lyric video for Sigrid. The lyrics and video were designed and directed by Gemma Yin Taylor.
You can check it out here.
I created augmented-reality visual effects for the video. It was shot on an iPhone X and filmed entirely through the apps I made, with no post-production other than editing.
The effects produced were inspired by the app Weird Type by Zach Lieberman.
I worked part-time at Goldsmiths University as a software developer in the EAVI group, specifically on the Immersive Pipeline. I wrote a library for multi-screen video mapping under the supervision of the professor and artist Atau Tanaka, and alongside artist-researcher Blanca Regina.
Coded in C++ (openFrameworks), the library handles multi-screen video mapping. Using warping and edge-blending algorithms from ofxWarp, it loads and saves a configuration file specific to the Immersive Pipeline's SIMIL space. After the video projectors have been calibrated to the space, the app can be used to create immersive audiovisual experiences. We presented our work at the Splice Festival and hosted an event on 12 April 2018.
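The library itself isn't published here, and the actual file format for the SIMIL space isn't documented; as a minimal sketch, assuming one warp surface per projector described by four corner points, saving and reloading a calibration with openFrameworks' JSON helpers could look like this:

```cpp
// Minimal sketch of saving/loading a multi-screen calibration file.
// The real EAVI library builds on ofxWarp; the file layout below is a simplified assumption.
#include "ofMain.h"

struct ScreenConfig {
    std::string name;                 // e.g. "projector_left"
    std::vector<glm::vec2> corners;   // four warped corner points, in pixels
};

void saveCalibration(const std::string& path, const std::vector<ScreenConfig>& screens) {
    ofJson root = ofJson::array();
    for (const ScreenConfig& s : screens) {
        ofJson j;
        j["name"] = s.name;
        for (const glm::vec2& c : s.corners) j["corners"].push_back({ c.x, c.y });
        root.push_back(j);
    }
    ofSavePrettyJson(path, root);
}

std::vector<ScreenConfig> loadCalibration(const std::string& path) {
    std::vector<ScreenConfig> screens;
    ofJson root = ofLoadJson(path);
    for (auto& j : root) {
        ScreenConfig s;
        s.name = j["name"].get<std::string>();
        for (auto& c : j["corners"]) s.corners.push_back(glm::vec2(c[0].get<float>(), c[1].get<float>()));
        screens.push_back(s);
    }
    return screens;
}
```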
Thanks to everyone at Goldsmiths, Atau Tanaka in particular, Bryan Dumphy, Lillevan, Alex Augier, and Alba Corral.
Using openFrameworks, I designed a video mapping application to accompany Jane Fitz's music during an event at the Factory. The theme of the visuals was underwater creatures and darkness.
The application works with a controller whose parameters drive the elements on screen (positions, scale, speed, reactivity to the music, effects, camera positions, etc.). Everything was made custom for Jane, using a few libraries.
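The write-up doesn't say which controller was used; the following is a minimal sketch assuming a MIDI controller and the ofxMidi addon, with hypothetical CC numbers, showing how knob values could drive the on-screen parameters:

```cpp
// Minimal sketch: mapping controller knobs to visual parameters with ofxMidi.
// The actual controller and CC numbers used for the Jane Fitz set aren't documented; these are assumptions.
#include "ofMain.h"
#include "ofxMidi.h"

class ofApp : public ofBaseApp, public ofxMidiListener {
public:
    ofxMidiIn midiIn;
    float creatureScale = 1.0f;
    float swimSpeed     = 0.2f;
    float audioReact    = 0.5f;

    void setup() override {
        midiIn.openPort(0);        // first available MIDI device
        midiIn.addListener(this);
    }

    void newMidiMessage(ofxMidiMessage& msg) override {
        if (msg.status != MIDI_CONTROL_CHANGE) return;
        float v = msg.value / 127.0f;                      // normalise 0..1
        if (msg.control == 1) creatureScale = ofLerp(0.5f, 3.0f, v);
        if (msg.control == 2) swimSpeed     = v;
        if (msg.control == 3) audioReact    = v;
    }

    void draw() override {
        // The real app draws underwater creatures; here we just visualise one parameter.
        ofDrawCircle(ofGetWidth() / 2, ofGetHeight() / 2, 50 * creatureScale);
    }
};

int main() { ofSetupOpenGL(1280, 720, OF_WINDOW); ofRunApp(new ofApp()); }
```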
Thanks to Rich for putting me in touch with her.
Using WebGL, I designed a website for Fuel 3D, to be used at the Goodwood Festival of Speed.
Using the data from a 3D face scan, the user's face is uploaded to the website and can be selected in the gallery.
The face can be visualised with different textures. Measurements of the face are shown on top of the 3D model. The last page compares the user's face measurements against Johnny Mowlem's, giving a proportion match to his features.
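The match metric used on the site isn't described above; as a minimal sketch, assuming each face is reduced to a handful of measurements, a simple proportion match could be computed like this (plain C++ here, although the site itself runs on WebGL/JavaScript):

```cpp
// Minimal sketch of a proportion match between two sets of face measurements.
// The measurements and the formula are assumptions; the Fuel 3D site may compute this differently.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Percentage match: average of per-measurement similarity (1 - relative difference).
float proportionMatch(const std::vector<float>& user, const std::vector<float>& reference) {
    float total = 0.f;
    for (size_t i = 0; i < user.size(); ++i) {
        float relDiff = std::fabs(user[i] - reference[i]) / reference[i];
        total += std::max(0.f, 1.f - relDiff);
    }
    return 100.f * total / user.size();
}

int main() {
    // Hypothetical measurements in millimetres: nose length, eye distance, jaw width.
    std::vector<float> user      = { 52.f, 63.f, 118.f };
    std::vector<float> reference = { 55.f, 61.f, 121.f };   // stand-in for the reference scan
    std::cout << "Proportion match: " << proportionMatch(user, reference) << "%\n";
    return 0;
}
```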
Using the 0-Coast from Make Noise, I tracked various outputs on the synth, such as the random generator, the slope output, the square wave, and the dynamics section. Using a Kinect 1414, I recorded the depth environment on a Raspberry Pi using openFrameworks. The sound was recorded simultaneously with the depth video, on a different laptop, and produced using my live set (Electribe, Digitakt, 0-Coast, MicroBrute, BeatStep Pro).
The data recorded from the 0-Coast creates waves of colors through the video.
This is an example of depth recording, using electrical signals from the 0-Coast to trigger visual elements on screen with precision.
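The exact trigger logic isn't shown above; as a minimal sketch, assuming the 0-Coast's signal is read through an audio input and the depth image comes from the ofxKinect addon, washing a colour over the depth image could look like this:

```cpp
// Minimal sketch: tint the Kinect depth image with a colour driven by an incoming control signal.
// Reading the 0-Coast signal through the sound card input is an assumption about the original setup.
#include "ofMain.h"
#include "ofxKinect.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    float signalLevel = 0.f;     // smoothed level of the incoming 0-Coast signal

    void setup() override {
        kinect.init();
        kinect.open();
        ofSoundStreamSettings settings;
        settings.setInListener(this);
        settings.numInputChannels = 1;
        settings.sampleRate = 44100;
        ofSoundStreamSetup(settings);
    }

    void audioIn(ofSoundBuffer& input) override {
        signalLevel = ofLerp(signalLevel, input.getRMSAmplitude(), 0.2f);  // smooth the level
    }

    void update() override { kinect.update(); }

    void draw() override {
        // Map the signal level to a hue and wash it over the depth image.
        ofSetColor(ofColor::fromHsb(signalLevel * 255.f, 200, 255));
        kinect.drawDepth(0, 0, ofGetWidth(), ofGetHeight());
    }
};

int main() { ofSetupOpenGL(640, 480, OF_WINDOW); ofRunApp(new ofApp()); }
```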
Using WebVR, I created a semi-random wave with Three.js, which can be experienced here.
The topology of the wave is created randomly (within a minimum and a maximum), and the RGB components of the colors either map the topology (rainbow style) or are proportioned to 1. The wave pulses through two dimensions: y and z.
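The original runs in Three.js; to stay consistent with the other code on this page, here is the same colour-mapping idea sketched in openFrameworks (the grid size and height bounds are assumptions):

```cpp
// Minimal sketch: random wave topology with heights mapped to rainbow colours.
// The Three.js original isn't reproduced here; this only illustrates the mapping idea.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofMesh wave;
    ofEasyCam cam;
    std::vector<float> baseHeights;
    const int cols = 80, rows = 80;
    const float minH = -30.f, maxH = 30.f;   // random topology bounds (assumption)

    void setup() override {
        wave.setMode(OF_PRIMITIVE_POINTS);
        for (int y = 0; y < rows; ++y) {
            for (int x = 0; x < cols; ++x) {
                float h = ofRandom(minH, maxH);                  // random height within min/max
                baseHeights.push_back(h);
                wave.addVertex(glm::vec3(x * 10.f, h, y * 10.f));
                float hue = ofMap(h, minH, maxH, 0.f, 255.f);    // "rainbow style" colour mapping
                wave.addColor(ofColor::fromHsb(hue, 255, 255));
            }
        }
    }

    void update() override {
        // Pulse the heights with a sine driven by time (the original also pulses along z).
        float pulse = sin(ofGetElapsedTimef() * 2.f);
        for (size_t i = 0; i < wave.getNumVertices(); ++i) {
            glm::vec3 v = wave.getVertex(i);
            v.y = baseHeights[i] * (1.f + 0.5f * pulse);
            wave.setVertex(i, v);
        }
    }

    void draw() override {
        ofBackground(0);
        cam.begin();
        wave.draw();
        cam.end();
    }
};

int main() { ofSetupOpenGL(1024, 768, OF_WINDOW); ofRunApp(new ofApp()); }
```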
Thanks to Bruno for his quick help, and his precise responses.
This was an internal UNIT9 project: a Bluetooth speaker using gestures and movement to control the parameters of the device.
To recreate famous works of art with a Popeyes-flavoured difference, UNIT9 designed and built the world's first Popeyes Sauce Printer. Working with GSD&M Austin, we designed, built and tested the machine in just two weeks. And to demonstrate the printer's saucy skills, we created a Sauce Art Gallery, full of masterpieces.
We built the Popeyes Sauce Printer by hacking a CNC machine and using a 3D printer's brain. We first rigged the machine with a syringe, and then drove it with software (C++) that converts images into line data. The printer could then read the data and dispense the sauce in an even, motion-controlled design.
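The conversion code isn't published; as a minimal sketch, assuming dark pixels become horizontal sauce strokes and the machine accepts simple move/extrude commands, the image-to-line step could look like this:

```cpp
// Minimal sketch: convert a grayscale image into horizontal line segments for the sauce head.
// The real printer's command format and tool-path strategy aren't documented; this is a simplified assumption.
#include "ofMain.h"

struct Segment { float x0, x1, y; };   // one horizontal stroke of sauce, in image pixels

std::vector<Segment> imageToLines(const ofPixels& px, unsigned char threshold = 128, int rowStep = 4) {
    std::vector<Segment> segments;
    for (int y = 0; y < (int)px.getHeight(); y += rowStep) {       // scan every few rows
        int start = -1;
        for (int x = 0; x < (int)px.getWidth(); ++x) {
            bool dark = px.getColor(x, y).getBrightness() < threshold;
            if (dark && start < 0) start = x;                      // stroke begins
            if ((!dark || x == (int)px.getWidth() - 1) && start >= 0) {   // stroke ends
                segments.push_back({ (float)start, (float)x, (float)y });
                start = -1;
            }
        }
    }
    return segments;
}

int main() {
    ofPixels px;
    ofLoadImage(px, "mona_lisa.png");          // hypothetical input artwork
    px.setImageType(OF_IMAGE_GRAYSCALE);
    auto segments = imageToLines(px);
    // Emit simple move/extrude commands, one per stroke (the format is an assumption).
    for (const Segment& s : segments) {
        ofLog() << "MOVE " << s.x0 << " " << s.y << "  EXTRUDE_TO " << s.x1 << " " << s.y;
    }
    return 0;
}
```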
Users could choose a piece of artwork from a gallery in a tablet app, and then send it to the Popeyes Sauce Printer to produce their own canvas print. Besides creating art masterpieces, the machine could also print basic text, and we even created our own typeface to be used throughout the campaign.
It was great to work with Jeffrey and Christian.
As part of the UNIT9 team, I designed a few GIFs to be included in the video celebrating the 30th anniversary of the VaporMax shoe.
Using Processing, I designed some interactive visuals involving geometric forms and colors (one example below).
The final video was projected onto the iconic Centre Pompidou in Paris. The projection, which mapped the entire façade, was a powerful visual spectacle.
All GIFs are the property of Nike.
Video designed for Jotta Studio in the context of an American Airlines week of events. I designed it in Processing, as a time lapse representing one year of American Airlines flights. Overall, more than 8,000 flights were represented on the map, with over 300 different airports mapped.
This is an example of live visual mapping made with an application I designed in Processing. I used an APC40 to control the visuals, and various parameters respond to the frequency bands of the music.
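The Processing app itself isn't reproduced here; as a minimal sketch in openFrameworks (to match the other examples on this page), frequency bands from a playing track can drive visual parameters like this, with the band indices and mappings being assumptions:

```cpp
// Minimal sketch: drive visual parameters from frequency bands of a playing track.
// The original app is in Processing and analyses the live set; the file name and mappings here are assumptions.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofSoundPlayer track;
    static const int nBands = 64;
    float bassSize = 0.f, midRotation = 0.f, highBrightness = 0.f;

    void setup() override {
        track.load("set_recording.mp3");   // hypothetical file name
        track.play();
    }

    void update() override {
        ofSoundUpdate();
        float* spectrum = ofSoundGetSpectrum(nBands);
        // Low bands -> size, mids -> rotation, highs -> brightness (assumed mapping).
        bassSize       = ofLerp(bassSize, spectrum[2] * 400.f, 0.3f);
        midRotation   += spectrum[20] * 10.f;
        highBrightness = ofLerp(highBrightness, spectrum[50] * 2550.f, 0.3f);
    }

    void draw() override {
        ofBackground(0);
        ofSetColor(255, ofClamp(highBrightness, 40.f, 255.f), 255);
        ofPushMatrix();
        ofTranslate(ofGetWidth() / 2, ofGetHeight() / 2);
        ofRotateDeg(midRotation);
        ofDrawRectangle(-bassSize / 2, -bassSize / 2, bassSize, bassSize);
        ofPopMatrix();
    }
};

int main() { ofSetupOpenGL(1024, 768, OF_WINDOW); ofRunApp(new ofApp()); }
```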