This is a series of studies exploring the possibility of generating art from facial expressions. In this context, AI generates images intended to reflect the viewer's emotional state from moment to moment. The images are generated in real time with StreamDiffusion, and the face tracking is done with MediaPipe inside TouchDesigner 099. In the following videos, my facial expressions are exaggerated to demonstrate the effect. The concept is similar to the "Dialog 2023" installation, only this time the images are generated in real time, with better technology.
This project was presented at the "Universal Language of Arts and Science" symposium in 2024.
In this video, the system generates images based on my facial expression: if I smile, the images lean toward a happier prompt. Tracking is done with MediaPipe in TouchDesigner 099, and the images are generated with StreamDiffusion.
In this video, the system again generates images based on my facial expression: if I smile, the images lean toward a happier prompt. The model here is an image-to-image model, so the output ends up being a portrait of myself. Tracking is done with MediaPipe in TouchDesigner 099, and the images are generated with StreamDiffusion.
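The smile-to-prompt mapping described above can be sketched roughly as follows. This is a minimal illustration, not the actual TouchDesigner network: the landmark keys, the width/height thresholds, and the prompt strings are all hypothetical, and a real setup would read MediaPipe Face Mesh landmarks instead of a plain dictionary.

```python
def smile_score(landmarks):
    """Estimate a 0..1 smile score from mouth landmarks.

    `landmarks` maps names to (x, y) points in normalized image
    coordinates. The keys and the ratio thresholds below are
    illustrative, not the real MediaPipe Face Mesh indices.
    """
    left, right = landmarks["mouth_left"], landmarks["mouth_right"]
    top, bottom = landmarks["lip_top"], landmarks["lip_bottom"]
    width = abs(right[0] - left[0])
    height = abs(bottom[1] - top[1]) or 1e-6  # avoid division by zero
    ratio = width / height
    # A wide, flat mouth (high ratio) reads as a smile; map an
    # assumed typical ratio range of 2..4 onto the 0..1 interval.
    return max(0.0, min(1.0, (ratio - 2.0) / 2.0))

def blend_prompts(score,
                  sad="a wilted grey landscape",
                  happy="a bright blooming garden"):
    """Weight two prompts by the smile score, using the common
    Stable-Diffusion-style (prompt:weight) syntax."""
    return f"({happy}:{score:.2f}) ({sad}:{1.0 - score:.2f})"
```

Each frame, the score would be smoothed and fed into the diffusion prompt, so the imagery drifts between the two moods rather than switching abruptly.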
This video showcases the possibility of using facial recognition as an input to a generative AI system. The main idea is that if I look away, the flowers die. Tracking is done with MediaPipe in TouchDesigner 099, and the images are generated with StreamDiffusion. The concept is similar to the AI EEG Study, except that there a Muse headset was used to measure the attention level.
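The "flowers die when I look away" behaviour can be sketched as a simple state update: a "life" value grows while the viewer faces the screen and decays when they look away, and that value drives the prompt blend. This is a hypothetical sketch; the rate constants and prompt strings are illustrative tuning values, and in the actual piece the gaze signal would come from the MediaPipe head-pose or eye landmarks.

```python
def update_life(life, looking, dt, grow_rate=0.5, decay_rate=0.8):
    """Advance the flowers' 'life' value by one time step.

    Grows while the viewer is looking at the screen, decays when
    they look away; clamped to 0..1. Rates are illustrative.
    """
    rate = grow_rate if looking else -decay_rate
    return max(0.0, min(1.0, life + rate * dt))

def life_to_prompt(life):
    """Map the life value onto a weighted prompt pair (weights in
    the common Stable-Diffusion-style syntax; strings illustrative)."""
    return f"(blooming flowers:{life:.2f}) (dead dried flowers:{1.0 - life:.2f})"
```

Because the value decays gradually rather than flipping instantly, the flowers wilt over a few seconds of inattention, which reads much more naturally on screen than a hard cut.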