How AI Sees Us: ‘Analytic Portrait’ Series Combines AI Vision Tech With Real Portraits

Seoul-based artists Shin Seung Back, who is also an engineer, and Kim Yong Hun are back with another project combining photography and AI: Analytic Portrait.

The artistic duo, known together as Shinseungback Kimyonghun, were first featured on PetaPixel for their series Mou ta n, which investigated how much of a photo can be removed before AI can no longer recognize it. Although AI is a hot topic in photography now, it was much less of one when they completed that project in 2021, and even less so in 2012, when the duo created Cloud Face, an art project that used face-detection AI to find human faces in moving clouds.

A diagram with overlaid measurements and annotations on a human face outline. Labels indicate facial features like eyebrows, lips, and expressions with percentages. Text notes gender and ethnicity. Various colored lines and boxes outline sections.

A person faces forward, with various digital markers and text over their face, analyzing features like eyes, mouth, and expressions. Text indicates high human detection confidence and provides demographic information and expression percentages.

Analytic Portrait again investigates how AI sees the world. For this series, Shinseungback Kimyonghun created portraits that show how 14 different AI vision algorithms interpret real human portraits. The combined results from these algorithms are used to create each final piece: a multicolor wireframe recreation of the original photograph or, in one case, of Leonardo da Vinci’s famous Renaissance painting, the Mona Lisa.

A digitally augmented version of the Mona Lisa with overlaid analytical data. Various charts and labels assess emotions like happiness, age range, gender, and body proportions, alongside geometric lines and measurements.

A portrait of a woman with a serene expression, dark hair, and an enigmatic smile. She is seated with her hands crossed, wearing a dark dress. A distant landscape with winding paths and a river is visible in the background.

“Each portrait is created using data from 14 different AI vision algorithms,” the artists explain. They provided PetaPixel with a list of each algorithm and its contribution to the final artwork, detailed below; a short code sketch after the list illustrates the kind of data the first two detectors return.

  1. OpenCV Face Detection: face location and face size
  2. Dlib Face Detection: face location, face size, and facial landmarks
  3. DSFD (Dual Shot Face Detector): face location, face size, and confidence score
  4. Distinctive Image Features from Scale-Invariant Keypoints: feature points
  5. Keypoint Communities: face location, face size, facial landmarks, and human pose estimation (body parts)
  6. Distribution-Aware Coordinate Representation for Human Pose Estimation: human pose estimation (body parts)
  7. DensePose: Dense Human Pose Estimation in the Wild: body outline
  8. Ad-Corre: Adaptive Correlation-Based Loss for Facial Expression Recognition in the Wild: emotion recognition
  9. Instances as Queries: instance segmentation (detecting objects and their boundaries)
  10. FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation: race, gender, and age classification
  11. L2CS-Net: Fine-Grained Gaze Estimation in Unconstrained Environments: gaze estimation
  12. Real-Time Deep Hair Matting on Mobile Devices: hair segmentation
  13. MEBeauty (multi-ethnic facial beauty dataset): facial beauty score
  14. Google Cloud Vision API: dominant color identification
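
To make the list concrete, here is a minimal sketch of the kind of data the first two entries return, assuming OpenCV’s bundled Haar cascade and Dlib’s standard 68-point landmark model (a separate download from dlib.net); the image path is a placeholder, and none of this reflects the artists’ actual pipeline.

```python
import cv2
import dlib

# Load a portrait; "portrait.jpg" is a placeholder path.
image = cv2.imread("portrait.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# 1. OpenCV Haar-cascade face detection: face location and size.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    print(f"OpenCV face at ({x}, {y}), size {w}x{h}")

# 2. Dlib HOG face detection plus 68-point facial landmarks.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
for rect in detector(gray):
    landmarks = predictor(gray, rect)
    points = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(68)]
    print(f"Dlib face at ({rect.left()}, {rect.top()}) with {len(points)} landmarks")
```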

“We layer the outputs from these algorithms to construct each portrait,” the artists explain.
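
The artists have not published their compositing code, but conceptually the layering might look something like the sketch below: each algorithm’s output is drawn in its own color onto a blank canvas, producing the wireframe look of the finished portraits. The `outputs` data structure and all coordinates here are invented for illustration.

```python
from PIL import Image, ImageDraw

# Hypothetical pre-computed results: one entry per algorithm, each a set of
# primitives (boxes, points, polylines) in image coordinates. The structure
# is illustrative; the artists' actual data format is not public.
outputs = {
    "opencv_face": {"color": "#e63946", "boxes": [(120, 80, 260, 240)]},
    "dlib_landmarks": {"color": "#457b9d", "points": [(150, 130), (230, 130)]},
    "pose_estimation": {"color": "#2a9d8f",
                        "lines": [[(190, 240), (190, 400), (150, 520)]]},
}

# Start from a blank canvas and draw every algorithm's output as its own
# colored layer.
canvas = Image.new("RGB", (400, 600), "black")
draw = ImageDraw.Draw(canvas)

for name, layer in outputs.items():
    color = layer["color"]
    for box in layer.get("boxes", []):
        draw.rectangle(box, outline=color, width=2)
    for x, y in layer.get("points", []):
        draw.ellipse((x - 3, y - 3, x + 3, y + 3), fill=color)
    for line in layer.get("lines", []):
        draw.line(line, fill=color, width=2)

canvas.save("analytic_portrait_sketch.png")
```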

A person wearing a red cap and plaid shirt crouches near a grey wall. The image has overlaying digital annotations highlighting facial expression analysis and body posture percentages. The floor is concrete.

A person wearing a red cap, checkered jacket, blue jeans, and checkered slip-on shoes is crouching on a gray concrete floor against a plain gray background.

A diagram of a person is overlaid with colorful lines, shapes, and labels indicating body parts like shoulder, torso, and legs. There are statistics for emotions and demographics in blue text and percentages on the upper left.

The color spectrum shown in the upper-right corner of each portrait represents the dominant colors of the original image, as determined by the Google Cloud Vision API. The outputs of the remaining 13 algorithms are each assigned a distinct color in the final portraits.
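
The Google Cloud Vision API exposes this through its image-properties feature, which returns a ranked list of dominant colors. A minimal query, assuming the google-cloud-vision Python client and configured credentials, looks roughly like this:

```python
from google.cloud import vision

# Assumes Google Cloud credentials are configured in the environment.
client = vision.ImageAnnotatorClient()

with open("portrait.jpg", "rb") as f:  # placeholder path
    image = vision.Image(content=f.read())

# The image-properties feature returns the dominant colors of the image,
# each with an RGB value, a confidence score, and the fraction of pixels
# it covers.
response = client.image_properties(image=image)
for color in response.image_properties_annotation.dominant_colors.colors:
    rgb = (int(color.color.red), int(color.color.green), int(color.color.blue))
    print(f"RGB {rgb}  score={color.score:.2f}  "
          f"pixel_fraction={color.pixel_fraction:.2f}")
```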

It is worth noting that while, in most cases, all 14 algorithms contribute to the final portrait, the colors serve only to differentiate the input sources; they do not correspond consistently to specific algorithms.

AI-generated image analysis of a person with annotations indicating body parts and posture. Text details emotion detection and demographic estimation, like age and gender. Colors highlight different analysis metrics, overlaying a stylized figure sketch.

A diagram of a human figure with annotations, showing percentages for emotions and attributes. It highlights body parts like torso, shoulders, arms, legs, and face, with additional notes on gender, age, and beauty.

Two annotated human figures with highlighted facial expressions and body parts. Each figure has percentages for emotions like neutral and happy. Labels include facial areas, arms, and torso, with estimates of gender and age.

“The assigned colors may vary across different portraits,” Shinseungback Kimyonghun say.

Further, because four of the algorithms return overlapping face-related information, some data was selectively removed to avoid redundancy.

Two people standing side by side with digital overlays displaying percentages and annotations related to posture, mood, and stance analysis. They both wear black t-shirts, and their arms are positioned differently. Background is plain.

The complete Analytic Portrait series is available on Shinseungback Kimyonghun’s website.


Image credits: Images from Shinseungback Kimyonghun’s new series, ‘Analytic Portrait.’

