P2P Network Examples and More

In peer-to-peer (P2P) communication, both parties have the capacity to transmit and receive data, unlike client-server communication, where the server fulfills the client's requests. In a sense, each party in a P2P network plays the role of both server and client. P2P communication has become ubiquitous since the beginning of the 21st century, driven by the accelerating development of the Internet of Things. It forms a dynamic group of peers that can upload and download at the same time for file sharing; if one user is offline or disconnected, the other users can still transmit. However, P2P communication also raises security issues, such as more exposed ports, software viruses and the risks of downloaded files.

Here are three examples of P2P communication.

  1. Skype

Launched in 2003, Skype changed the way we make phone calls and expanded its functions to instant messaging, file transfer and multi-party video chat. Although P2P communication can harm the interests of certain large companies, Skype saw P2P as a creative approach to bringing individuals' ideas together within a relatively small group. In Skype's case, P2P also avoids the heavy cost of the traditional centralized infrastructure used, to varying degrees, by most other instant messaging services, so Skype can invest more resources in advanced features and user experience to thrive in the industry. In addition, Skype protects users' privacy by encrypting their calls, messages, videos and files. However, I still have doubts about Skype's explanation of its security protection. It says, "Skype is as secure as we can possibly make it." Security issues are inherent in P2P communication from its very birth; to what level, exactly, does Skype resolve these security dilemmas?

  2. 360 Smart Camera

360 Smart Camera is a low-cost Wi-Fi camera released this year by the Chinese software company Qihoo 360. It was initially designed as a tool for watching your house or store remotely by connecting the camera to its phone app. Through the app, the user can turn the camera to see different sections of the surroundings and speak into the phone; the sound is transmitted to a speaker embedded in the camera. The camera even has a facial recognition function: when it detects a face or movement in an empty house, the app automatically pushes a notification to the user. The camera also contains a microSD card slot; an 8 GB card is sufficient to store 24 hours of recording, letting the user play back videos on the phone. However, as the camera has become increasingly popular for daily use, it raises critical privacy concerns, since it can be set up easily and the recorded content can be accessed online through the broadcasting function with the click of a button.

  3. File Sharing Applications

Some file-sharing applications, such as BitTorrent, Pando and Tribler, explicitly reflect the salient features of a P2P network. The most interesting aspect of these applications and websites is that they form communities among users with similar interests; every member is encouraged to contribute resources and download files from their peers at the same time. For instance, Tribler has advanced features for improving the quality of its file-sharing community, such as a resource filter through which every user can rate the quality of resources and provide feedback for other users' reference.

An art project related to P2P networks:

P2P is a useful tool for interaction design. At the DT MFA thesis show this past May, Chinese designer Hang Ye showcased his Unity game, combined with an Oculus Rift and a 3D-printed game controller that he designed exclusively for the game. In the game, the player enters a cubic room from a first-person perspective and can move forward by staring at a circle in the center of the Oculus view. The player cannot move in other directions directly; instead, they can twist the game controller, itself a cube, to turn the virtual room upside down or sideways and so change the direction of forward movement. The genius of the game is the cubic controller, a physical analog of the virtual cubic room, which adds a layer of philosophical duality. On the technical side, there is an Arduino Uno, a gyroscope, an accelerometer and an XBee module inside the 3D-printed controller. Through XBee, the controller's orientation data are sent to a computer with a USB XBee module; Unity reads the data and feeds them into the game.
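The XBee link essentially delivers plain serial lines to the computer. Here is a minimal sketch of how such a line might be parsed on the receiving side, assuming a hypothetical comma-separated "roll,pitch,yaw" wire format; the actual project may pack its sensor data differently.

```cpp
#include <array>
#include <sstream>
#include <string>

// Parse one serial line of the form "roll,pitch,yaw" (degrees) into three
// floats. The comma-separated format is an assumption for illustration.
std::array<float, 3> parseOrientationLine(const std::string& line) {
    std::array<float, 3> out{0.0f, 0.0f, 0.0f};
    std::stringstream ss(line);
    std::string token;
    for (int i = 0; i < 3 && std::getline(ss, token, ','); ++i) {
        out[i] = std::stof(token);
    }
    return out;
}
```

On the Unity side, the equivalent logic would run in C# on the data read from the USB XBee's serial port.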

Reference:

https://support.skype.com/en/faq/FA10983/what-are-p2p-communications

http://www.infosec.gov.hk/english/technical/files/peer.pdf

http://p2peducation.pbworks.com/w/page/8897427/FrontPage

[Lab-Systems] Response to Reading – The Mail Art History

Unlike traditional art forms such as painting and sculpture, where the medium merely carries the art, mail art and correspondence art celebrated the medium's unique qualities as part of the artwork. Crucially, Fluxus questioned the inherent nature of art: "What could art be?" "Why not?" "How else?" Fluxus experimented with mailing various forms of art, including objects that were almost impossible to mail without damage, in order to challenge the limits of the medium. More importantly, the interaction involved in mailing artworks profoundly influenced today's new media art and interaction design. Correspondence art, or mail art, can be considered an analog predecessor of present-day digital interaction.

 

Mail art expands the function of mail beyond mere communication: it became art that spread art. After an early period as a closed community of artists, Fluxus opened intermedia art to the public, constructing a system of communication among artists and practitioners in the experimental field. The expanding Fluxus community embraced the sparks that flew when different art forms collaborated and intersected with other disciplines. The idea of hybrid art permeated every aspect of Fluxus artworks. For instance, Yoko Ono's art book Grapefruit got its title because Ono considered a grapefruit the hybrid of a lemon and an orange, reflecting the essence of her work across multiple art forms. To some extent, Fluxus was like a 1960s version of DT.

Lab-Systems: Week1 Self Introduction and 2 Examples of Connected Objects

A bit about myself: As I presented in our first class, I am passionate about creating head-mounted devices that alter the user's senses and behaviors, prompting people to rethink everyday things from different angles of perception. More generally, I want to create immersive experiences, installations and interactions that hopefully give the audience confidence that everyone can live in a dreamy, magical world like in a sci-fi movie.

 

2 Examples of Connected Objects:

  1. J!ns Meme smart glasses, collecting data from the user and connecting to a computer or mobile device

J!ns Meme is the latest project from Professor Masahiko Inami, who is well known for his invisibility cloak that enables people to "see" with X-ray-like vision. (I worked with him briefly at SIGGRAPH in LA this summer.) J!ns Meme is a pair of wearable glasses embedded with multiple sensors to detect various activities, such as blinking, reading, talking and walking. Data are collected from a gyroscope, an accelerometer, and three electrodes between the eyes, then examined for the patterns that characterize different activities. The glasses can measure the user's reading speed, the number of steps the user has walked, and so on. These data can be fed into computer or mobile applications for further analysis of daily activities. For instance, when the user stares at a laptop screen for too long without blinking, the screen blurs to remind the user to rest their eyes, and when the user does not move their head for a certain amount of time, the screen dims a little.
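The blink-triggered blurring described above can be sketched as a small timer check. This is my own illustration, not the real J!ns Meme SDK, and the ten-second threshold in the usage below is an assumption.

```cpp
// Minimal sketch: blur the screen when no blink has been seen for a while.
class BlinkMonitor {
public:
    explicit BlinkMonitor(double blurAfterSeconds)
        : threshold(blurAfterSeconds), lastBlink(0.0) {}

    // Call when the glasses' electrodes report a blink.
    void onBlink(double nowSeconds) { lastBlink = nowSeconds; }

    // True when the screen should blur because no recent blink was seen.
    bool shouldBlur(double nowSeconds) const {
        return nowSeconds - lastBlink > threshold;
    }

private:
    double threshold;   // seconds without blinking before blurring
    double lastBlink;   // timestamp of the most recent blink
};
```

Usage: construct `BlinkMonitor m(10.0);`, call `m.onBlink(t)` on each detected blink, and poll `m.shouldBlur(t)` from the application loop.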


  2. 360 Smart Camera connecting to its phone app

360 Smart Camera is a low-cost Wi-Fi camera released this year by the Chinese software company Qihoo 360. It was initially designed as a tool for watching your house or store remotely by connecting the camera to its phone app. Through the app, the user can turn the camera to see different sections of the surroundings and speak into the phone; the sound is transmitted to a speaker embedded in the camera. The camera even has a facial recognition function: when it detects a face or movement in an empty house, the app automatically pushes a notification to the user. The camera also contains a microSD card slot; an 8 GB card is sufficient to store 24 hours of recording, letting the user play back videos on the phone. However, as the camera has become increasingly popular for daily use, it raises critical privacy concerns, since it can be set up easily and the recorded content can be accessed online through the broadcasting function with the click of a button.


 

 

Studio/Lab Final Presentation – Make a Face!

Final Presentation Slides: makeAFacePresentation

Video: https://vimeo.com/127794759

 

Notes on Reflection

Unlike most of my other projects, where I quickly settle on one solid idea, this project went through several iterations before I reached the final one – a witty, playful interaction/game between human vision and computer vision. Through user testing, and especially the advice of my game design colleagues, the most valuable lesson I learned was to give step-by-step instructions so that users can adapt to the rules and speed of the game. I tried using icons and visuals instead of heavy text to direct players into the game more intuitively; still, understanding the meaning of each icon demanded extra patience at the beginning. Aside from the structure of the game, I found one idea in Rory's notes that could push the project's potential further. Rory suggested that many networks, especially social networks, intentionally yet imperceptibly steer us toward certain emotions or encourage us to participate in certain activities. For instance, social networks encourage us to engage with friends, while shopping websites sell products along with the pleasant mood of buying things. This project could therefore become a critical object exploring the unconscious guidance of our emotions and activities, which could be just as threatening as public surveillance.

Final Project – Work in Progress

For the past week, I modified my original project idea, creating a playful interaction that lets the audience realize the functions of computer vision instead of showing them directly. Four buttons and four corresponding LED matrices are placed on a headset worn by the audience. The audience participates in this interaction/game by tapping the buttons on their heads while making different facial expressions; the wearer's head is turned into a game controller.

On the technical side, physical computing parts will be used with an Arduino, which will be connected to an openFrameworks sketch using the ofxCv and ofxFaceTracker addons.
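As a rough illustration of the game logic, here is a hypothetical mapping from the four head-mounted buttons to the facial expression the player must make. The expression names are my own placeholders; in the real sketch they would be paired with ofxFaceTracker's gesture detection.

```cpp
#include <string>

// Placeholder mapping: which expression each of the four head buttons asks
// for. These names are invented for illustration.
std::string expressionForButton(int buttonIndex) {
    switch (buttonIndex) {
        case 0: return "smile";
        case 1: return "open mouth";
        case 2: return "raise eyebrows";
        case 3: return "frown";
        default: return "none"; // out-of-range button index
    }
}
```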

The following pictures are sketches and the OF sketch by now.



Yumeng’s Untitled Studio/Lab Final Project

Sorry for the late post.

Before the Project:

1. Can a machine think? Can a machine have intelligence? Can a machine have consciousness or emotion? This set of questions is philosophical rather than technological, because after all, the answer will vary depending on the criteria used to define intelligence, consciousness, subjectivity, etc. However, technological developments in artificial intelligence are expanding our understanding of AI's abilities in different areas, as well as challenging and redefining the philosophical study of AI.

2. Some references regarding what it entails for a machine to have intelligence.

a) Turing Test: 

To pass the Turing Test, a computer has to imitate the characteristics of a real human being well enough to converse smoothly with a human tester, who will eventually mistake his/her conversation partner, the computer, for a real human.

b) The Chinese Room:

To refute "Strong AI" – the claim that the programs running on a computer enable it to genuinely understand, not merely behave intelligently – American philosopher John Searle proposed a scenario called the Chinese Room in the 1980s. A person who speaks only English sits in a room with pen, paper, and a book of rules for matching incoming Chinese characters to outgoing Chinese characters. By following the rules, this person can hold a written conversation with a Chinese speaker outside the room, even though he/she doesn't understand Chinese at all. Likewise, a computer doesn't need to understand Chinese in order to respond in it; there are merely inputs, outputs, and rules for converting the former into the latter.
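Searle's point can be illustrated with a toy rule table: the function below "converses" by pure symbol matching, understanding nothing. The phrases are invented examples of my own, not from Searle's paper.

```cpp
#include <map>
#include <string>

// A rule book as a lookup table: input symbols map to output symbols.
// Nothing here "understands" Chinese; it only matches patterns.
std::string chineseRoomReply(const std::string& input) {
    static const std::map<std::string, std::string> rules = {
        {"ni hao",    "ni hao"},      // greeting -> greeting
        {"ni hao ma", "wo hen hao"},  // "how are you" -> "I am fine"
    };
    auto it = rules.find(input);
    return it != rules.end() ? it->second : "wo bu zhi dao"; // "I don't know"
}
```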

c) Gödel, Escher, Bach: An Eternal Golden Braid (still reading some of the chapters from this book)

d) Comparing human intelligence and AI

In the 1960s, Allen Newell and Herbert Simon proposed that "symbol manipulation" is the common basis of both human and machine intelligence. Philosopher Hubert Dreyfus summarized the underlying assumption (one he himself famously criticized): "The mind can be viewed as a device operating on bits of information according to formal rules." (There are also other theories on this topic, but this one is what I'm most interested in.)

3. As a critical component of AI, computer vision (CV) has reached a new stage after its flourishing development during the '90s and '00s. CV is now able to recognize not only a single object but also the environment around it. For example, CV might detect: "an elephant is standing next to two people who are drinking water." However, while the databases for recognizing nouns are abundant, verbs and especially adjectives, which are often more subjective, remain missing links in CV's interpretation. Teaching CV to tell a story from an image is a crucial challenge. (Reference: a recent TED talk from the Stanford Vision Lab; there are also demos on their website.)

http://www.ted.com/talks/fei_fei_li_how_we_re_teaching_computers_to_understand_pictures?language=en

http://vision.stanford.edu/

4. Many art and design projects use CV as a tool, sometimes in superficial ways for cool visual effects. However, I want to explore the essence of CV, its potential in the future, and the philosophical and social implications behind it. A project that Justin sent me, CV Dazzle, caught my attention because it examines our condition of living in an environment surrounded by computer vision; the designer created make-up patterns that enable people to camouflage themselves from being recognized by the computer.

http://ahprojects.com/projects/cv-dazzle/

The Project:

In this project, the color of an image (or a webcam feed), as the most straightforward element for representing emotion, will be detected with a color-tracking add-on for openFrameworks. Then, according to the percentage of the most-used color, the original image will be deformed and recreated by manipulating meshes and animating their parts. For example, the meshes of a blue-dominant image might be torn into broken pieces shaking against a black background, representing a melancholic state of mind. This project is an idealistic representation of a computer's mind and emotion. It may oversimplify the workings of CV, but it is an optimistic expectation for the future of implementing CV in various areas of our life.

Technical Challenges:

Color tracking (and analyzing? How do I read the color with the highest percentage of presence?)

ofMesh (this tutorial is what I am going for: http://openframeworks.cc/tutorials/graphics/generativemesh.html)
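The first challenge, reading the color with the highest percentage of presence, could be approached by quantizing every pixel into coarse buckets and counting. A sketch of that idea, using plain vectors instead of ofPixels so it stays self-contained:

```cpp
#include <map>
#include <tuple>
#include <vector>

struct RGB { unsigned char r, g, b; };

// Quantize each pixel into coarse buckets (32 levels per channel) and
// return the center color of the most populated bucket. In the real
// openFrameworks sketch this would iterate over ofPixels from the webcam.
RGB dominantColor(const std::vector<RGB>& pixels) {
    std::map<std::tuple<int, int, int>, int> counts;
    for (const RGB& p : pixels) {
        counts[{p.r / 32, p.g / 32, p.b / 32}]++;
    }
    std::tuple<int, int, int> best{0, 0, 0};
    int bestCount = 0;
    for (const auto& [bucket, n] : counts) {
        if (n > bestCount) { bestCount = n; best = bucket; }
    }
    // Report the center of the winning bucket as the dominant color.
    return { static_cast<unsigned char>(std::get<0>(best) * 32 + 16),
             static_cast<unsigned char>(std::get<1>(best) * 32 + 16),
             static_cast<unsigned char>(std::get<2>(best) * 32 + 16) };
}
```

The bucket count (and therefore the percentage of the dominant color) could then drive how strongly the meshes are deformed.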

 

[Studio: Environment] Project 2 – Sleep Time and Quality Tracking


Back in high school, I stayed at a very strict boarding school where every student had to follow a rigid daily schedule and study diligently. While most students couldn't get enough sleep because of the heavy study load, I was called the God of Sleep: no matter how stressed we were, I always slept long and well, and had dreams. However, this semester I seem to have lost that ability for the first time. The magical dreams have all been replaced by nightmares, and sometimes, even when I sleep long enough, I still feel exhausted the next morning. To solve this problem, I set a general sleeping schedule for myself and started to track the activities before sleep that may interfere with my sleep quality, as well as how I feel the next morning.

According to my academic schedule and personal habits, I set my standard sleeping schedule as 12:30am – 9:30am (9 hours).
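For tallying the log entries below, the sleep duration across midnight reduces to a small arithmetic helper. This is my own sketch, with times expressed as minutes since midnight.

```cpp
// Minutes asleep between a bedtime and a wake-up time. When the wake-up
// time is earlier in the day than the bedtime, the sleep is assumed to
// have crossed midnight. Inputs are minutes since midnight (0-1439).
int minutesAsleep(int bedMinutes, int wakeMinutes) {
    int diff = wakeMinutes - bedMinutes;
    if (diff <= 0) diff += 24 * 60; // crossed midnight
    return diff;
}
```

For example, 10:45pm to 8:50am comes out to 605 minutes, just over ten hours.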

————————————————————-

March 12-13 Thursday:

10:45pm – 8:50am

Before sleep (activities): Crying, doing homework, listening to music

After sleep (how I feel): Intern, I wanted to sleep a little late but had no choice

March 13-14 Friday:

12:28am – 11:10am

Before sleep (activities): Watching movies, video chatting with parents, texting Yi, crying, lying on the bed doing nothing, listening to music

After sleep (how I feel): exhausted, woke up at 9:30 am but didn’t want to get up until 11:15am 

March 14-15 Saturday:

2:40am – 9:15am

Before: A big party where I didn't know half the people, drinking, crying, cleaning up the house, video chatting with parents

After: I technically woke up before the sun was up because my stomach wasn't feeling well after the drinking. I kept waking up about every half hour and finally decided to get out of bed at 9:15am. Surprisingly, I didn't feel tired. I had a slight headache, but after a shower I felt sober and went straight to D12 to work in the morning.

March 15-16 Sunday:

I forgot to document. 

March 16-17 Monday:

12:30am – 10:15am

Before: Watched a documentary about 30 Seconds to Mars, video chatted with mom, read online articles and news, played with my RasPi!

After: Woke up naturally at 9:15, then slept again until 10:15am. Feeling OK.

Before-Sleep Activity Types:

Crying

Homework

Chatting/Texting

Listening to Music

Doing Nothing

Special Occasions (Party, etc.)

Reading News/Articles 

Watching Movies

Playing Raspberry Pi