Mr. Cam | Final Project: A&HA 4084 | Summer 2017

Summary

Mr. Cam is a site-specific, camera-connected video installation designed to be projected onto the wall surfaces of buildings in public areas. For this work, an animated face with an indifferent expression appears to be monitoring the local area. The work draws attention to the role and intention of the many computer-controlled video cameras that supervise and analyze the activities of the public in particular ways. In this way, Mr. Cam puts a face to the mechanisms of those who watch, presenting a larger-than-life, visible presence to those who are watched and complicating the watcher-watched relationship that underpins structures of surveillance, control, and regulation.

Inspiration

After learning about the OpenCV library in Processing, I began to wonder about the current bounds of such computer vision technologies. For example, if this particular open-source library for computer vision and human recognition exists in the public domain, what proprietary tools exist that the public is not aware of? What are the capabilities of those tools? What initiatives are governments and public safety organizations undertaking to implement these tools at mass scale? How good are they at “seeing” citizens? What do companies and governments intend to do with this data? How will it affect individual privacy, and do people even care about being watched in public at all times?

 

My initial thoughts are that agencies will do as much as they can without public consent, and that ordinary people give tacit approval because the cameras are often situated in inconspicuous ways or because they fail to see how this data can be used.

What if the cameras had faces? People are very used to recognizing faces, and we associate eye contact with being watched.

 

“Pareidolia is a psychological phenomenon in which the mind responds to a stimulus by perceiving a familiar pattern where none exists. Common examples include seeing faces in inanimate objects.”

Do modern surveillance technology and the “capture-it-all” methodology in use today simply reduce humanity to data points?

 

Research

I easily found several references to official programs, such as the FBI’s Next Generation Identification initiative, which details how the agency plans to use modern biometric technologies for enhanced surveillance and to “increase the range and quality of its identification and investigative capabilities”.

Regarding trends in human-computer interaction, I referenced the predictions posited by futurists working with the production team of the 2002 film Minority Report. Among many other prescient predictions, the film depicted situated, public, camera-enabled kiosks that perform retinal scans on passing pedestrians and engage them with customized advertisements replete with personal data such as their name and purchase history. The advertisements in Minority Report were handled by Jeff Boortz of Concrete Pictures.

“Personalized Advertising” from the film Minority Report.

I also read news about trends in the surveillance art movement and about notable recent works.

Process

My process for pursuing an installation project that uses Processing’s OpenCV library effectively started with watching Dan Shiffman’s video and computer vision tutorials on YouTube.

I next tinkered with a number of the OpenCV example projects located in the OpenCV library folder. I used the included reference documentation to work out some other options, such as detectable patterns other than “FRONTALFACE”, including those for “EYE”, “PROFILEFACE”, and “PEDESTRIAN”; a minimal sketch of this kind of cascade swapping appears below.
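To give a sense of what that tinkering looked like, here is a minimal sketch, not code from the original experiments, that swaps in the profile-face cascade and outlines every detection over the live webcam feed; the frame size and the particular cascade choice are arbitrary, and the cascade constant mirrors the commented-out option in the final code below.

// Minimal illustrative sketch: swap the detection cascade to track profile faces instead of frontal faces
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_PROFILEFACE); // try CASCADE_EYE or CASCADE_FRONTALFACE here
  video.start();
}

void draw() {
  opencv.loadImage(video);
  image(video, 0, 0);
  noFill();
  stroke(0, 255, 0);
  for (Rectangle r : opencv.detect()) { // outline every detection from the chosen cascade
    rect(r.x, r.y, r.width, r.height);
  }
}

void captureEvent(Capture c) {
  c.read();
}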

I next experimented with the eye detection mode, writing a script that detected eyes in famous hand-painted portraits, such as the Mona Lisa, and then overlaid computer-drawn googly eyes onto them. The goal at the time was to playfully comment on artists whose understanding of dimensional space allows them to paint portraits that viewers feel are watching them wherever they happen to be standing. That path stalled when I found it too challenging to make the OpenCV objects switch from analyzing still images to the live webcam feed for interaction. The initial prototypes were still entertaining, at least; a rough reconstruction follows.
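A rough reconstruction of that prototype might look like the following; the filename monalisa.jpg, the canvas size, and the pupil offset are placeholders rather than values from the original script.

// Googly-eyes prototype sketch: detect eyes in a still portrait and paint cartoon eyes over them
// (monalisa.jpg is a placeholder filename, not one of the original project files)
import gab.opencv.*;
import java.awt.*;

OpenCV opencv;
PImage portrait;

void setup() {
  size(600, 900);
  portrait = loadImage("monalisa.jpg");
  opencv = new OpenCV(this, portrait);      // run OpenCV on the still image rather than a camera
  opencv.loadCascade(OpenCV.CASCADE_EYE);   // detect eyes rather than whole faces
}

void draw() {
  image(portrait, 0, 0);
  for (Rectangle eye : opencv.detect()) {
    float cx = eye.x + eye.width/2;
    float cy = eye.y + eye.height/2;
    fill(255);
    stroke(0);
    ellipse(cx, cy, eye.width, eye.height);                         // white eyeball over the detected eye
    fill(0);
    noStroke();
    ellipse(cx + eye.width * 0.15, cy, eye.width/3, eye.width/3);   // offset pupil for the googly look
  }
}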

 

Upon shifting the project to the generic Mr. Cam avatar, I initially had some trouble achieving believable eye-tracking animations for the installation. Plotting viewers' faces in three-dimensional space and mapping them onto a two-dimensional animation is challenging. If the eyes track the detected facial coordinates directly, they are too active and do not feel like they are actually gazing at the viewer. My solution was to encode a “dead zone” in the center of the avatar's field of view, with conditional statements that force the animation to stare straight ahead, which appears to better lock eyes with a viewer. An improvement that I have not yet built might map the animated eye movement as a function of both the detected face's x coordinate and its detected width, on the assumption that the width of the detected face corresponds to its proximity to the embedded camera; a rough sketch of that idea follows.
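Sketched as a hypothetical helper written against the globals of the final sketch below (radius, scaleFactor, leftEye, rightEye, leftPupilX, rightPupilX), that unbuilt improvement might look roughly like this; the mapping ranges are guesses that would need tuning against a real camera.

// Hypothetical refinement (not part of the installed version): scale how far the pupils travel
// by the detected face width, so nearby (wide) faces pull the gaze more strongly than distant ones.
void updateGaze(Rectangle face) {
  float faceCenterX = face.x + face.width/2;
  // Horizontal gaze target: map the face's position across the scaled-down frame
  // (960 / scaleFactor pixels wide) onto pupil travel within the eyeball radius.
  float gazeX = map(faceCenterX, 0, 960 / scaleFactor, -radius / 2.0, radius / 2.0);
  // Proximity factor: wider detections are assumed to be closer to the camera.
  // The 20-200 pixel range is a guess, not a measured value.
  float proximity = map(constrain(face.width, 20, 200), 20, 200, 0.4, 1.0);
  leftPupilX  = leftEye.x  + gazeX * proximity;
  rightPupilX = rightEye.x + gazeX * proximity;
}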

Another challenge was constraining the motion of the animated pupil within the outer white eyeball ellipse. Processing's constrain() function will not meet this need, as it only constrains values to rectangular ranges, not elliptical shapes. And yet another challenge was steadying the motion of the pupils: because the faces OpenCV detects are actually a bit jumpy on camera, they make the animations quite jittery and unpleasant as well. Both of these problems were addressed by borrowing some code from a post on the Processing.org forum, which helped me apply some vector math to constrain the pupils elliptically and apply easing to their motion; a standalone demonstration of both techniques follows.
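In isolation, the borrowed technique amounts to easing toward a target and then clamping the pupil back onto the eyeball's boundary. Here is a minimal standalone demonstration, with the mouse standing in for a detected face; it is a simplification of the logic in the final sketch below, not the forum code itself.

// Standalone demo of the two borrowed techniques: easing toward a target,
// then clamping the pupil inside a circular eyeball.
PVector eye = new PVector(200, 200);   // eyeball center
PVector pupil = new PVector(200, 200); // current pupil position
float eyeRadius = 60;
float easing = 0.2;

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  // Easing: move a fraction of the remaining distance toward the target each frame,
  // which smooths out jumpy input (the mouse stands in for a detected face here).
  pupil.x += (mouseX - pupil.x) * easing;
  pupil.y += (mouseY - pupil.y) * easing;

  // Circular clamp: if the pupil strays outside the eyeball, project it back
  // onto the boundary by normalizing the offset vector and rescaling it.
  PVector offset = PVector.sub(pupil, eye);
  if (offset.mag() > eyeRadius) {
    offset.normalize();
    offset.mult(eyeRadius);
  }
  PVector drawn = PVector.add(eye, offset);

  fill(255);
  stroke(0);
  ellipse(eye.x, eye.y, eyeRadius * 2 + 20, eyeRadius * 2 + 20); // eyeball
  fill(0);
  noStroke();
  ellipse(drawn.x, drawn.y, 20, 20); // pupil
}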

This resulted in my most refined version of the project to date.


Final Project Documentation

// Mr. Cam : A camera-connected video installation designed to put a face to the cameras that watch us in public spaces
// Dylan Ryder | June 2017
// Licensed under Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)


import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

float mouthLength = 50;
float mouthX = 120;
float mouthY = 175;
float leftPupilX;
float leftPupilY;
float rightPupilX;
float rightPupilY;
int radius = 40; // Radius of white eyeball ellipse
float pupilSize = 20;

PVector leftEye = new PVector(100, 100);
PVector rightEye = new PVector(200, 100);

int x, y = 120;
float easing = 0.2;
int scaleFactor = 3;

int counter;

void setup() {
 size(960, 720);
 smooth();

 video = new Capture(this, 960/scaleFactor, 720/scaleFactor);
 opencv = new OpenCV(this, 960/scaleFactor, 720/scaleFactor); 
 opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
 //opencv.loadCascade(OpenCV.CASCADE_PROFILEFACE);
 //opencv.loadCascade(OpenCV.CASCADE_EYE); 
 video.start();
 frameRate(24);
}

void draw() {
 background(255, 255, 0); // Yellow
 scale(scaleFactor);

 opencv.loadImage(video);
 opencv.flip(OpenCV.HORIZONTAL); // flip horizontally
 Rectangle[] faces = opencv.detect();

 strokeWeight(3);
 // Dead zone behavior: by default, ease the pupils back toward the eye centers
 // so the avatar appears to stare straight ahead and lock eyes with a centered viewer
 leftPupilX = leftPupilX + (100 - leftPupilX) * easing;
 rightPupilX = rightPupilX + (200 - rightPupilX) * easing;
 leftPupilY = rightPupilY = leftPupilY + (100 - leftPupilY) * easing;

 for (int i = 0; i < faces.length; i++) {
  noFill();
  stroke(0, 255, 0); // face detection rectangle color
  rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);

 // Face detected well left of center (outside the dead zone): ease the pupils toward it
 if (faces[i].x < 80 ) {
  leftPupilX = (leftPupilX + (faces[i].x - leftPupilX) * easing);// + (faces[i].width * 0.2);
  rightPupilX = leftPupilX + 100;
 }

 // Face detected well right of center: ease the pupils toward it
 if ( faces[i].x > 175) {
  rightPupilX = rightPupilX + (faces[i].x - rightPupilX) * easing;// + (faces[i].width * 0.2);
  leftPupilX = rightPupilX - 100;
 }

 // Face detected well above or below center: ease the pupils vertically toward it
 if ( (faces[i].y > 120) || (faces[i].y < 30) ) {
  leftPupilY = leftPupilY + (faces[i].y - leftPupilY) * easing;
  rightPupilY = rightPupilY + (faces[i].y - rightPupilY) * easing;
 }
}

 // Mouth
 noFill();
 stroke(0);
 line(mouthX, mouthY, mouthX + mouthLength, mouthY);
 arc(mouthX-15, mouthY, 30, 30, radians(-30), radians(30)); // left cheek
 arc(mouthX+65, mouthY, 30, 30, radians(145), radians(205)); // right cheek

 // Eyes
 fill(255); // white
 ellipse(leftEye.x, leftEye.y, radius+25, radius + 25); // left eyeball ellipse
 ellipse(rightEye.x, rightEye.y, radius+25, radius + 25); // right eyeball ellipse

 // Keep each pupil inside its eyeball: if it strays past the allowed radius,
 // project it back onto the circle by normalizing and rescaling the offset vector
 PVector leftPupil = new PVector(leftPupilX, leftPupilY);
 if (dist(leftPupil.x, leftPupil.y, leftEye.x, leftEye.y) > radius/2) {
  leftPupil.sub(leftEye);
  leftPupil.normalize();
  leftPupil.mult(radius/2);
  leftPupil.add(leftEye);
 }

PVector rightPupil = new PVector(rightPupilX, rightPupilY);
 if (dist(rightPupil.x, rightPupil.y, rightEye.x, rightEye.y) > radius/2) {
  rightPupil.sub(rightEye);
  rightPupil.normalize();
  rightPupil.mult(radius/2);
  rightPupil.add(rightEye);
 }

 // Actually draw the pupils
 noStroke();
 fill(0); // black pupil color
 ellipse(leftPupil.x, leftPupil.y, pupilSize, pupilSize); // new left pupil
 ellipse(rightPupil.x, rightPupil.y, pupilSize, pupilSize); // new right pupil

 counter++; // frame counter: blink for 5 frames out of every 195 (see blink() below)
 if (counter > 195) {
   counter = 0;
 }
 if (counter >= 190 && counter < 195) {
   blink();
 }
}

void captureEvent(Capture c) {
 c.read();
}

// Blink: briefly cover the eyes with background-colored ellipses and draw closed-lid lines
void blink() {
 fill(255, 255, 0); // Yellow
 stroke(255, 255, 0);
 ellipse(leftEye.x, leftEye.y, radius+26, radius + 26); // left eyeball ellipse
 ellipse(rightEye.x, rightEye.y, radius+26, radius + 26);
 stroke(0);
 noFill();
 line(67, leftEye.y, 133, leftEye.y);
 translate(100, 0);
 line(67, leftEye.y, 133, leftEye.y);
}