Studio/Lab Final Presentation – Make a Face!

Final Presentation Slides: makeAFacePresentation



Notes on Reflection

Unlike most of my other projects, for which I could quickly settle on one solid idea, this project went through several iterations before I reached the final one – a witty and playful interaction/game between human vision and computer vision. Through user testing, and especially from the advice of my game design colleagues, the most valuable lesson I learned was the importance of step-by-step instructions that let users adapt to the rules and speed of the game. I tried using icons and visuals, instead of heavy text, to direct players into the game more intuitively. Still, understanding the meaning of each icon demanded some patience from players at the beginning.

Aside from the structure of the game, I found one idea in Rory’s notes that could push the potential of the project further. Rory suggested that many networks, especially social networks, intentionally yet imperceptibly steer us toward certain emotions or encourage us to participate in certain activities. For instance, social networks encourage us to engage with friends, while shopping websites sell not only products but also the pleasant mood of buying things. Therefore, this project could become a critical object exploring the unconscious control and guidance of our emotions and activities, which could be just as threatening as public surveillance.

Final Project – Work in Progress

For the past week, I modified my original project idea by creating a playful interaction that lets the audience realize the functions of computer vision, instead of showing them directly. Four buttons and four corresponding LED matrices are placed on a headset worn by the audience. Players participate in this interaction/game by tapping the buttons on their heads while making different facial expressions; the head of the wearer is turned into a game controller.

On the technical side, physical-computing parts will be used with an Arduino, which will be connected to an openFrameworks sketch built with the ofxCv and ofxFaceTracker libraries.
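As a rough sketch of the game logic on the computer side – the function names and expression labels here are my own placeholders, not the final code – the mapping from the four head-mounted buttons to the expression the player must make could look like this:

```cpp
#include <array>
#include <string>

// Hypothetical mapping from the four head-mounted buttons to the
// facial expression the player must make when that button is tapped.
// The labels are placeholders; the real sketch would use whatever
// expression categories ofxFaceTracker's matching provides.
const std::array<std::string, 4> kTargetExpressions = {
    "smile", "frown", "surprise", "neutral"};

// Returns the expression prompted by a button press, or "" for an
// invalid button index.
std::string targetForButton(int button) {
    if (button < 0 || button >= static_cast<int>(kTargetExpressions.size()))
        return "";
    return kTargetExpressions[button];
}

// One round of the game: the player scores when the expression the
// face tracker detects matches the one the pressed button asks for.
bool roundWon(int button, const std::string& detectedExpression) {
    return !targetForButton(button).empty() &&
           targetForButton(button) == detectedExpression;
}
```

In the actual piece, the button index would arrive from the Arduino over serial and the detected expression from ofxFaceTracker each frame.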

The following pictures are sketches and the OF sketch by now.


Yumeng’s Untitled Studio/Lab Final Project

Sorry for the late post.

Before the Project:

1. Can a machine think? Can a machine have intelligence? Can a machine have consciousness or emotion? This set of questions is more philosophical than technological, because after all, the answers will vary depending on the criteria used to define intelligence, consciousness, subjectivity, etc. However, technological developments in artificial intelligence are expanding our understanding of AI’s abilities in different areas, as well as challenging and redefining the philosophical study of AI.

2. Some references regarding what it entails for a machine to have intelligence.

a) Turing Test: 

To pass the Turing Test, a computer has to be able to imitate the characteristics of a real human being well enough to converse smoothly with a human tester, who will eventually mistake his or her conversation partner, the computer, for a real human.

b) The Chinese Room:

To prove the falsity of “Strong AI” – the claim that the programs running on a computer enable the computer to behave intelligently – American philosopher John Searle proposed a scenario called the Chinese Room in the 1980s. A person who only speaks English is in a room with a pen, paper, and a set of rules for matching English vocabulary to Chinese characters (like a dictionary). This person will be able to communicate with a Chinese-speaking person in written Chinese, even though he or she doesn’t understand Chinese. Thus, the computer doesn’t need to understand Chinese in order to translate; there are merely inputs, outputs, and the rules to convert the former to the latter.
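The rule-following in the Chinese Room can be sketched as a simple lookup table. The strings below are invented placeholders (romanized for readability), but they show the point: a sensible-looking reply is produced with no understanding involved, only rules.

```cpp
#include <map>
#include <string>

// A toy "Chinese Room": the operator has only a rule book that maps
// incoming symbol strings to outgoing ones. Looking up a reply
// requires no comprehension of what the symbols mean.
std::string chineseRoomReply(const std::string& input) {
    static const std::map<std::string, std::string> ruleBook = {
        {"ni hao", "ni hao"},         // greeting -> greeting
        {"ni hao ma", "wo hen hao"},  // "how are you" -> "I'm fine"
    };
    auto it = ruleBook.find(input);
    return it != ruleBook.end() ? it->second : "";  // no rule, no reply
}
```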

c) Gödel, Escher, Bach: An Eternal Golden Braid (still reading some of the chapters from this book)

d) Comparing human intelligence and AI

In the 1960s, Allen Newell and Herbert Simon proposed that “symbol manipulation” is the common method of both human and machine intelligence. In addition, philosopher Hubert Dreyfus summarized the view this way: “The mind can be viewed as a device operating on bits of information according to formal rules.” (There are other theories on this topic, but this is the one I’m most interested in.)

3. As a critical component of AI, computer vision has reached a new stage after its flourishing development during the 90s and 00s. CV is now able to recognize not only a single object, but also the environment around it. For example, CV might report: “an elephant is standing next to two people who are drinking water.” However, although there are abundant databases for recognizing nouns, verbs and even adjectives – which are often more subjective – remain the missing links in CV’s interpretation. Teaching CV to tell a story from an image is a crucial challenge. (Reference: a recent TED talk from the Stanford Vision Lab. There are also some demos on their website.)

4. Many art and design projects use CV as a tool, sometimes in superficial ways for cool visual effects. However, I want to explore the essence of CV, its potential in the future, and the philosophical and social implications behind it. One project that Justin sent to me, CV Dazzle, caught my attention because it examines our living condition in an environment surrounded by computer vision: the designer created a make-up pattern that enables people to camouflage themselves from being recognized by the computer.

The Project:

In this project, the color of an image (or of a webcam feed), as the most straightforward element for representing emotions, will be detected by a color tracking add-on from openFrameworks. Then, according to the percentage of the most used color, the original image will be deformed and recreated through the manipulation of meshes and the motion of their different parts. For example, a blue-dominant image’s mesh might be torn apart into broken pieces shaking against a black background, to represent a melancholic state of mind. This project is an idealistic representation of a computer’s mind and emotion. It might over-simplify the process of CV, but it is an optimistic expectation for the future of implementing CV in various areas of our lives.

Technical Challenges:

Color tracking (and analyzing? How do I read the color with the highest percentage of presence?)

ofMesh (this tutorial will be what I am going for)
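For the color-analysis question above, one possible approach – a minimal sketch of the idea, not the ofxCv way specifically – is to quantize every pixel into a coarse color bucket and take the fraction of the most common bucket as “the percentage of the most used color”:

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct Pixel { unsigned char r, g, b; };

// Quantizes each channel to 4 levels, giving 64 coarse color buckets,
// then returns the fraction (0..1) of pixels in the most common bucket.
// In the real project the pixels would come from ofImage/ofVideoGrabber;
// here they are a plain vector so the idea can be tested on its own.
double dominantColorFraction(const std::vector<Pixel>& pixels) {
    if (pixels.empty()) return 0.0;
    std::array<std::size_t, 64> counts{};
    for (const Pixel& p : pixels) {
        int bucket = (p.r / 64) * 16 + (p.g / 64) * 4 + (p.b / 64);
        ++counts[bucket];
    }
    std::size_t best = 0;
    for (std::size_t c : counts) if (c > best) best = c;
    return static_cast<double>(best) / pixels.size();
}
```

That fraction could then drive how violently the mesh pieces shake, and the bucket itself would pick the emotional palette.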



[Studio: Environment] Project 2 – Sleep Time and Quality Tracking

Sleep Time and Quality Tracking 

Back in high school, I attended a very strict boarding school where every student had to follow a rigid daily schedule and study diligently. While most students couldn’t get enough sleep because of the heavy study load, I was called the God of Sleep: no matter how stressed we were, I always slept well, slept enough, and had dreams. However, this semester, for the first time, I seem to have lost this ability. The magical dreams have all been replaced by nightmares. Sometimes, even when I sleep long enough, I still feel exhausted the next morning. To address this, I set a general sleeping schedule for myself and started to track my activities before sleep, which may interfere with my sleep quality, and how I feel the morning after.

According to my academic schedule and personal habits, I set my standard sleeping schedule as 12:30am – 9:30am (9 hours).


March 12-13 Thursday:

10:45pm – 8:50am

Before sleep (activities): Crying, doing homework, listening to music

After sleep (how I feel): Intern, I wanted to sleep a little late but had no choice

March 13-14 Friday:

12:28am – 11:10am

Before sleep (activities): Watching movies, video chatting with parents, texting Yi, crying, lying on the bed doing nothing, listening to music

After sleep (how I feel): exhausted, woke up at 9:30 am but didn’t want to get up until 11:15am 

March 14-15 Saturday:

2:40am – 9:15am

Before: A big party with half of the people that I don’t know, drinking, crying, cleaning up the house, video chatting with parents 

After: I technically woke up when the sun was not up yet because my stomach wasn’t feeling well after the drinking. I constantly woke up about every half an hour and then finally decided to get out of bed at 9:15am. Surprisingly, I didn’t feel tired. I had a little headache but after a shower, I felt sober and came to D12 to work directly in the morning. 

March 15-16 Sunday:

I forgot to document. 

March 16-17 Monday:

12:30am – 10:15am

Before: Watched a documentary about 30 Seconds to Mars, video chatted with Mom, read online articles and news, played with my RasPi!

After: Woke up naturally at 9:15 then slept again until 10:15am. Feeling OK.

Before-Sleep Activity Types:




Listening to Music

Doing Nothing

Special Occasions(Party, etc)

Reading News/Articles 

Watching Movies

Playing Raspberry Pi

[Studio: Environment] Project 1 Description and Issues


First, shadows are the reflection of our existence in space. Then, our shadows overlap, intertwine, communicate. In this project, shadows are turned into subjective representations of our mental states. Instead of their shadows being shaped by the light source, audience members will be able to control them: headsets collect EEG data from the wearers and accordingly form fake shadows projected onto the ground beside them.

For hardware, this project will use a NeuroSky MindWave headset to sense the EEG data, a laptop to run the Processing sketch that visualizes the data, and a mini projector to project the visualization/fake shadows.
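Since the MindWave’s attention value (0–100) arrives only about once per second and is fairly noisy, the sketch will probably need to smooth it before driving the shadow. A minimal sketch of that idea – written here in plain C++ for illustration; the class name, the smoothing factor, and the opacity mapping are my own assumptions, not NeuroSky’s API:

```cpp
#include <algorithm>

// Exponentially smooths the raw attention reading (0-100) and maps
// the result onto a 0-255 opacity for the projected fake shadow.
class AttentionSmoother {
public:
    explicit AttentionSmoother(double alpha = 0.2) : alpha_(alpha) {}

    // Feed one raw reading; returns the smoothed value.
    double update(double raw) {
        raw = std::clamp(raw, 0.0, 100.0);
        if (!started_) { value_ = raw; started_ = true; }
        else           { value_ = alpha_ * raw + (1.0 - alpha_) * value_; }
        return value_;
    }

    // Map the smoothed 0-100 attention onto a 0-255 shadow opacity.
    int shadowOpacity() const {
        return static_cast<int>(value_ / 100.0 * 255.0);
    }

private:
    double alpha_;
    double value_ = 0.0;
    bool started_ = false;
};
```

The same smoothing would translate directly into the Processing sketch; the point is just that sudden spikes in the EEG data shouldn’t make the shadow flicker.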


1. Connecting the EEG headset is troublesome on OS X Yosemite due to software issues; however, it connects successfully on a PC. My current solution is to use a trial of Parallels to run Windows on my Mac.

2. There are a few Processing libraries for bringing the EEG data into a sketch. However, as I tested, none of them work with the new version of Processing. Combined with the previous problem, I’m now running Processing 1.5.1 on Windows in order to run the example sketch successfully.

3. For now I’m using the laptop. However, I still want the piece to be wearable, so I want to use a Raspberry Pi instead (if I have enough time). Here are the subsequent issues:

1. I just started setting up my Raspberry Pi and have never used one before.
2. The EEG headset doesn’t connect to Linux. I probably still need it to send data to my computer and then use Bluetooth or something to relay the data to the Raspberry Pi? Do you have suggestions for how to do this?
3. I have to install Processing on the Raspberry Pi. Will it run very slowly?
4. Do you happen to have an HDMI to mini HDMI cable that I can borrow tomorrow in class?
5. Are there other options for executing the project as a wearable piece?

[Studio: Environment] Reading Response – Thoughtless Acts, Suri

Examples of Thoughtless Acts

In the reading, Suri suggests that good designs are based on observing users’ intuitive reactions to products and environments. Such designs not only follow the principles of intuitiveness but also enhance the overall user experience. Thoughtless acts are some of the best inspiration for designers. Here are two examples of thoughtless acts, and one witty design that makes us conscious of them.


Putting sunglasses back in their case every time is inconvenient, and putting them in a pocket will scratch the lenses. Hanging them from the front of the shirt seems easy, convenient, and cool.


Formula 1 drivers have to wear high-tech fireproof jumpsuits during the race for their safety. Want to know how uncomfortable those jumpsuits are? After two hours of racing inside the fully enclosed cockpit, the drivers have essentially taken a shower in their own sweat. Therefore, the picture above is often seen in the paddock before and after the race: the driver (Kimi Raikkonen!) takes off the upper part of the jumpsuit and wraps it around his waist to be comfortable. Maybe it is also the racers’ fashion to wear jumpsuits this way.


This instance is a witty reminder of how important thoughtless acts can be. No one doubts that when it rains, we hide under shelters to stay out of the rain. The designer painted the cement ground with hydrophobic paint: the message is invisible on sunny days, but when it rains, the caption appears: “Stay dry out there.”