How AI Sees Us: ‘Analytic Portrait’ Series Combines AI Vision Tech With Real Portraits

Seoul-based artists Shin Seung Back, who is also an engineer, and Kim Yong Hun are back with another AI-related photography project, Analytic Portrait.
The artistic duo, known together as Shinseungback Kimyonghun, were first featured on PetaPixel for their series Mou n tain, which investigated how much of a photo can be removed before AI can no longer recognize it. Although AI is a hot topic in photography now, it was far less so when they completed that project in 2021, and rarely discussed when the duo created Cloud Face in 2012, an art project in which they had AI look at moving clouds and try to extract human faces.
Analytic Portrait again investigates how AI sees the world. In this case, Shinseungback Kimyonghun created a series of portraits that show how 14 different AI vision algorithms view real human portraits. The integrated results from these vision algorithms are then used to create the final portrait: a multi-color wireframe recreation of the original photograph or, in one case, Leonardo da Vinci's Mona Lisa.
“Each portrait is created using data from 14 different AI vision algorithms,” the artists explain. They provided PetaPixel with a list of each algorithm and its influence on the final artwork, detailed below.
- OpenCV Face Detection: face location and face size
- Dlib Face Detection: face location, face size, and facial landmarks
- DSFD: face location, face size, and confidence score
- Distinctive Image Features from Scale-Invariant Keypoints: feature points
- Keypoint Communities: face location, face size, facial landmarks, human pose estimation (body parts)
- Distribution-Aware Coordinate Representation for Human Pose Estimation: human pose estimation (body parts)
- DensePose: Dense Human Pose Estimation in the Wild: body outline
- Ad-Corre: Adaptive Correlation-Based Loss for Facial Expression Recognition in the Wild: emotion recognition
- Instances as Queries: instance segmentation (detecting objects and their boundaries)
- FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation: race, gender, and age classification
- L2CS-Net: Fine-Grained Gaze Estimation in Unconstrained Environments: gaze estimation
- Real-time deep hair matting on mobile devices: hair segmentation
- Multi-ethnic MEBeauty dataset and facial attractiveness: facial beauty score
- Google Cloud Vision API: dominant color identification
“We layer the outputs from these algorithms to construct each portrait,” the artists explain.
The color spectrum shown in the upper right-hand corner of each portrait represents the dominant colors of the original image, which are determined by the Google Cloud Vision API. The outputs of the remaining 13 algorithms are each assigned distinct colors to construct the final portraits.
It is worth noting that while all 14 algorithms typically contribute to the final portrait, the colors serve only to differentiate input sources and do not correspond consistently to specific algorithms.
“The assigned colors may vary across different portraits,” Shinseungback Kimyonghun say.
Further, because four of the algorithms provide face-related information, some data was selectively removed to avoid redundancy.
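The layering process described above can be sketched in code. The following is a minimal, hypothetical illustration, not the artists' actual pipeline: each algorithm's output is reduced to simple drawing primitives (boxes, points, polylines), and each layer is assigned its own color before everything is flattened into one wireframe draw list. The layer names, coordinates, and palette here are all invented for illustration.

```python
# Hypothetical sketch of layering vision-algorithm outputs into one
# wireframe portrait. Not the artists' actual code.
from dataclasses import dataclass


@dataclass
class Primitive:
    kind: str                  # "box", "point", or "polyline"
    coords: tuple              # geometry in image pixel coordinates
    color: tuple = (0, 0, 0)   # RGB, assigned when layers are composed


# Invented outputs from three of the algorithms, reduced to primitives.
layers = {
    "face_detection": [Primitive("box", (120, 80, 260, 240))],
    "facial_landmarks": [Primitive("point", (160, 140)),
                         Primitive("point", (220, 140))],
    "pose_estimation": [Primitive("polyline", ((190, 240), (190, 400)))],
}

# One distinct color per layer; per the article, the assignment is not
# fixed and may vary from portrait to portrait.
palette = [(230, 57, 70), (42, 157, 143), (69, 123, 157)]


def compose(layers, palette):
    """Flatten all layers into a single draw list, coloring each layer."""
    strokes = []
    for color, prims in zip(palette, layers.values()):
        for p in prims:
            strokes.append(Primitive(p.kind, p.coords, color))
    return strokes


strokes = compose(layers, palette)
```

In this sketch the final `strokes` list is what a renderer would draw over a blank canvas; the color carried by each primitive identifies which algorithm produced it, mirroring how the portraits distinguish their input sources.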
The complete Analytic Portrait series is available on Shinseungback Kimyonghun’s website.
Image credits: Images from Shinseungback Kimyonghun’s new series, ‘Analytic Portrait.’