The Origin

    In Spring 2018, I took a class called "Lighting for Photography," in which we explored a wide variety of ways to use light in our photographs. We also got the chance to experiment with all kinds of lighting equipment owned by the Massachusetts College of Art and Design. It was also the first time I encountered a little flashlight called a "mini slave flash": a tiny flash unit with a light sensor that fires in response to other flashes. Because it is small, light, and simply reacts to light, it is a flexible tool not only for outdoor shooting but also for studio scenarios. I took it out with a model for a nighttime street shoot and tried placing the little flash on the model's shoulder. The results were fascinating: it looked as if "something" were casting a light, brightening the path in front of the subject. I showed the photos in class, and everyone had a different reaction. Some said it was a ghost, some said it was an angel, some said it was a sign of a miracle. I found this feedback exciting, and I felt I should extend the idea from a photo series into an interactive installation.

The portrait I took with the flash

Design

    How can I make people see something that is behind them yet isn't visible in real life? I immediately thought of a mirror. In movies, mirrors are often used as surfaces that connect two worlds. That felt like the right metaphor, since the project is about showing people a "thing" they can't see. I wanted the piece to look like a regular mirror when no one is in front of it, but it is not easy to build a device that both looks like a real mirror and acts as a display. At first I considered projection, but the problem with projection is that the bigger the image needs to be, the more physical space the projector requires. I therefore decided to use an actual display instead. For the appearance, since I wanted it to look like a regular mirror, I disguised it as a framed mirror: a frame not only covers the edges of the screen behind the glass but also contains and hides the rest of the components. The mirror itself is a sheet of plexiglass with a semi-transparent two-way mirror film applied to it. A two-way mirror hides the dark areas of the screen and lets the bright parts of the picture show through, while at the same time reflecting whatever is bright on the viewing side, just like a mirror. With this setup, people can't tell there is a screen behind the plexiglass. For the back end, I chose a Microsoft Kinect camera as the sensor to detect people's locations in front of the mirror. I track the position of each viewer's head and make the shining-light video follow it.
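The follow logic can be sketched in a few lines. This is illustrative Python, not the actual Max patch; the coordinate ranges, the smoothing factor, and all function names here are assumptions made for the sake of the sketch.

```python
# Hypothetical sketch of the "light follows the head" logic, assuming a
# Kinect-style depth image 512 px wide and a 1920 px wide screen.
# Not the actual Max/dp.kinect patch; all names and values are illustrative.

def head_to_screen_x(head_x, sensor_width=512, screen_width=1920):
    """Map a sensor x-coordinate to screen pixels, flipped horizontally
    so the displayed image behaves like a mirror reflection."""
    return (1.0 - head_x / sensor_width) * screen_width

def smooth(current, target, alpha=0.2):
    """Exponential smoothing: each frame, move a fraction of the way
    toward the target so the glow drifts rather than jitters."""
    return current + alpha * (target - current)

light_x = 960.0  # start the light at screen center
for head_x in (256, 256, 300, 300):  # simulated per-frame head readings
    light_x = smooth(light_x, head_to_screen_x(head_x))
```

The horizontal flip matters because the screen sits behind a two-way mirror: for the glow to track the viewer the way a reflection would, the rendered image has to be mirrored.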

Concept illustration - the piece in a gallery space
Concept illustration - a combination of a reflective plexiglass layer on top of a screen

Prototype & User Testing

    For the first prototype, I used an old 20-inch LCD as the screen. I ordered a piece of plexiglass with the mirror film pre-applied and taped it onto the display with masking tape; the tape along the edges simulated the frame, and it worked quite well. For this version I used Kinect contour-tracking code. By finding the highest point in the image, which most of the time is the head, I could pin the light effect to the contour line of the audience in front of the sensor. I took this project to the MIT Media Lab for user testing and received positive feedback. People responded much as they had to the original concept photos, and everyone had a different definition of the light. It was a successful prototype, and it established the system for me. However, it wasn't perfect. Because I was using contour data, the software merely tracked the highest point and pinned the image to that location. It couldn't distinguish one person from a group, so the light jumped around whenever two viewers were in the frame at once, or whenever someone raised a hand above their head. I therefore switched to a Max external called dp.kinect, which recognizes people as joints and skeletons. It tracks in real time (without the awkward Kinect calibration pose) and is very efficient. Because it works from each person's skeleton, I can always get the position of each head, which means the tracking isn't thrown off by raised hands or by multiple people in the frame at the same time.
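The difference between the two tracking approaches can be shown in a short sketch. This is illustrative Python, not the actual patch; the contour and skeleton data structures are hypothetical stand-ins for what the Kinect code and dp.kinect report.

```python
# Hypothetical sketch contrasting the two tracking strategies.
# Points are (x, y) pixels with y = 0 at the top of the image.

def highest_point(contour_points):
    """First prototype: pin the light to the topmost contour point,
    which is usually, but not always, a head."""
    return min(contour_points, key=lambda p: p[1])

def head_positions(skeletons):
    """dp.kinect-style approach: read each tracked person's head joint
    directly, giving one stable position per person."""
    return [person["head"] for person in skeletons]

# A viewer raises a hand above their head:
contour = [(100, 300), (120, 80), (110, 150)]  # (120, 80) is the raised hand
print(highest_point(contour))   # -> (120, 80): the light jumps to the hand

# Skeleton tracking is unaffected, even with two people in frame:
skeletons = [{"head": (110, 150)}, {"head": (300, 160)}]
print(head_positions(skeletons))  # -> one head position per person
```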

First Prototype, Photo by Martha Rettig
Final piece shown in the MassArt MFA Thesis Show 2019, at MASS MoCA

Conclusion

    The design process can be applied to many things, and the power of prototyping is limitless. By creating a prototype, we can not only test the functionality of a design but also learn whether it is the right direction at all. Even though this project turned out to be an interactive art installation, the moment when people's reactions matched my expectations was crucial to my process, especially for a project like this one, for which I built a full-scale 48.5 × 34 inch version of the piece for the 2019 MassArt MFA Thesis Show. Prototyping not only helped me achieve the results I wanted but also saved me the time and money of rebuilding the final piece.

Visual simulation of the actual effect

Media: Interactive Installation, Archival Inkjet Print

Hardware: Computer, Kinect Camera, LCD Screen, Two-way Mirror, Wooden Frame

Software: Cycling '74 Max 8, dp.kinect

Date: May 2019

Exhibition: MassArt MFA Thesis Show 2019 at MASS MoCA

© 2020 Richard W.P. Huang