My project was initially about paramecia, the single-celled organisms I had been experimenting with in Ecology Lab. For more information on that ideation process and the research I did, please see my other blog post: http://portfolio.newschool.edu/remia021/2016/04/18/final/
A short summary of things I learned from my first attempt includes:
- Magnetism / ferrofluid research
https://www.youtube.com/watch?v=9Qk0IcAJQWI
- Banana Font (slow growth/decay, to be animated)
- Paramecium & Physarum
- Video processing with Rose Engine
- Strong bubble creation (using corn oil & various colors of dish detergent); a good mixture with ferrofluid
- ofxKinect
All of the above had some influence on the final project.
Explain your concept:
My concept was to make an environment that would simulate the experience of being a rainbow mantis shrimp.
Initial Questions:
What would it be like to see infrared waves? Gravitational waves?
What would it be like to have sight which keenly detects motion?
Research:
What I knew about the rainbow mantis shrimp when I began was that it is often said to have the best vision in the animal kingdom. Looking at the table above, you can see the mantis shrimp has 16 different types of color receptors (vs. our 3), so its vision carries far more information than ours.
Rainbow mantis shrimp also have incredibly fast punches: their claws accelerate like a bullet fired from a .22 caliber rifle.
In fact, the strike is so fast that the water around the claw briefly boils (cavitates) when they punch. Their punches are used to break open crab shells for consumption.
I listened to podcasts (Radiolab: Bigger Than Bacon), watched videos (YouTube: Snapping Shrimp breaks Tank), and read articles.
I also tried many addons:
ofxCameraFilter
ofxKsmrFragmentFx
ofxSTLModel
ofxBlur
ofxAssimpModelLoader
ofxCv
ofxIO
Many didn't work, or offered little that I was able to extract or understand.
Precedents:
Looking for code that would simulate optical flow and other video-processing mechanisms, I found Mirror Fun by Denis Perevalov on YouTube. This sketch warps a grid displayed over the webcam stream based on movement read from the webcam via openCV.
The sketch was, however, outdated. After some email correspondence, I got it up and running on the version of oF + OS I was working with.
See the code here: https://github.com/gatana/lab/tree/master/MirrorFun2
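The core of the mechanism is small enough to sketch. Below is my own condensed reconstruction of the idea (not Perevalov's actual code), assuming the ofxCv addon: a grid of points is drawn over the webcam image, and each point is pushed around by the local optical-flow vector. The grid spacing and motion gain are arbitrary values of mine.

// main.cpp: condensed sketch of the grid-warping idea, assuming ofxCv
#include "ofMain.h"
#include "ofxCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxCv::FlowFarneback flow; // dense Farneback optical flow
    bool hasFlow = false;

    void setup(){
        cam.setup(640, 480);
    }
    void update(){
        cam.update();
        if(cam.isFrameNew()){
            flow.calcOpticalFlow(cam); // compare this frame to the last one
            hasFlow = true;
        }
    }
    void draw(){
        cam.draw(0, 0);
        if(!hasFlow) return;
        int step = 20;    // grid spacing in pixels (arbitrary)
        float gain = 5.0; // exaggerates the displacement (arbitrary)
        ofSetColor(255);
        for(int y = 0; y < 480; y += step){
            for(int x = 0; x < 640; x += step){
                // displace each grid point by the motion measured at it
                ofVec2f offset = flow.getFlowOffset(x, y);
                ofDrawCircle(x + offset.x * gain, y + offset.y * gain, 2);
            }
        }
    }
};

int main(){
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}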
Further Research:
Ideation / Sketches / First Process:
My first process involved video processing: attempting to alter a live-streamed video with code. In the video below, I used a Kinect and drew its depth image with much larger pixels than usual; a minimal sketch of the effect follows. This was my first experience with basic video processing.
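Here is a minimal sketch of that big-pixel effect, assuming ofxKinect (ofApp.h is assumed to declare an ofxKinect member named kinect; the 16-pixel block size is a placeholder, not the exact value from the video):

// ofApp.cpp: draw the Kinect depth image as coarse blocks
void ofApp::setup(){
    kinect.init();
    kinect.open();
}
void ofApp::update(){
    kinect.update();
}
void ofApp::draw(){
    ofBackground(0);
    ofPixels & depth = kinect.getDepthPixels();
    int step = 16; // each sampled depth value becomes a 16x16 block
    for(int y = 0; y < (int) depth.getHeight(); y += step){
        for(int x = 0; x < (int) depth.getWidth(); x += step){
            ofSetColor(depth[y * depth.getWidth() + x]); // nearer reads brighter
            ofDrawRectangle(x, y, step, step);
        }
    }
}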
Other Kinect experiments:
Prototypes:
LED strip for video:
https://vimeo.com/162867460
What Worked / What Didn't / What I Learned:
What didn't work was hosting the sketch on the Raspberry Pi, a last-minute thought that was never fully fleshed out (an openCV-based program on a camera-less Raspberry Pi? hahah).
The accelerometer glove (described in detail in the Future Directions section) did not work out in time.
I learned a great deal about the variety of addons and libraries available. I also got close to mastering the Kinect's depth-reading abilities, and really dove into openCV, tinkering with filters and compositing them to create more overwhelming effects.
Final Instantiation:
Libraries used:
#include "ofxOpticalFlowFarneback.h"
#include "ofxGui.h"
#include "ofxFlowTools.h"
#include "ofxSerial.h"
#include "ofxKinect.h"
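For context, here is a compilable skeleton showing where each addon sits. This is my reconstruction, not the project's actual code; the real wiring lives in the repo linked under Code Link below.

// ofApp skeleton: where each addon fits
#include "ofMain.h"
#include "ofxGui.h"
#include "ofxKinect.h"
#include "ofxOpticalFlowFarneback.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;             // depth + video input
    ofxOpticalFlowFarneback flow; // dense motion field driving the visuals
    ofxPanel gui;                 // live tuning of blur / threshold / flow gain

    void setup(){
        kinect.init();
        kinect.open();
        gui.setup("params"); // parameter sliders get added here
    }
    void update(){
        kinect.update();
        // each new Kinect frame would feed the optical-flow object here;
        // ofxFlowTools then smears that flow into fluid-like trails, and
        // ofxSerial carries punch/LED data to the Arduino (both omitted)
    }
    void draw(){
        kinect.draw(0, 0);
        gui.draw();
    }
};

int main(){
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}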
Reflective Review: (What translated well? What didn't?)
What I think did work was the video, and presenting the program as a video. I enjoyed editing it, though I now have many more ideas, also described in the Future Directions section.
What I think was lacking was the performative part of the video. Although I think the video does emulate the vision of a shrimp, a fair point came up in my critique: why would a shrimp encounter a human in its range of vision? More shrimp imagery would push the project further in the shrimp-landia direction.
BOM (Bill of Materials):
mirror
mylar
projector
oF program, running on computer with webcam
LED Strip (used in video)
Arduino
Fritzing:
Code Link:
https://github.com/gatana/Code2_BFADT.S16/tree/master/amvmt_visualizer
Future Directions:
Something that was brought up during critique was the idea of this as a performance, made more explicit through the use of a model shrimp or shrimp costume. The night before the project was due, I did consider fashioning a shrimp headpiece or outfit out of the mylar, however, there was no time.
Since making this project, I have learned about another interesting shrimp: the snapping pistol shrimp, known for the loud sound it makes. Heard above water, the sound is like sizzling bacon. During World War II, the US Navy actually used the noise created by massed groups of these shrimp to its advantage, hiding submarines in the popping shrimp beds; Japanese naval ships could not detect the subs by sonar through all the interfering noise.
This is why I think it would be ideal to create a crab-claw-like glove with an embedded accelerometer that reads punch values/speed. When a punch exceeds a certain acceleration reading, say 300, a fan would begin to blow, producing satisfying bubbles; a rough sketch of that trigger logic follows below.
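A hedged sketch of that trigger, written for Arduino (already in the BOM above): a generic analog accelerometer on pin A0, and the fan switched through a transistor or relay on pin 9. All pin choices and the 2-second blow time are hypothetical; 300 is the raw threshold mentioned above and would need calibration against real punches.

// Hypothetical punch trigger for the claw glove idea
const int ACCEL_PIN = A0;            // analog accelerometer output (assumed)
const int FAN_PIN = 9;               // fan via transistor/relay (assumed)
const int PUNCH_THRESHOLD = 300;     // raw analogRead units, to be calibrated
const unsigned long BLOW_MS = 2000;  // how long the fan blows per punch

void setup() {
  pinMode(FAN_PIN, OUTPUT);
  Serial.begin(9600);                // also report punches over serial
}

void loop() {
  int reading = analogRead(ACCEL_PIN); // 0-1023
  if (reading > PUNCH_THRESHOLD) {
    Serial.println(reading);           // log punch strength
    digitalWrite(FAN_PIN, HIGH);       // blow the bubbles
    delay(BLOW_MS);
    digitalWrite(FAN_PIN, LOW);
  }
}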
This would make for a very fun installation, in which I think a larger, more permanent set-up of mirrors, mylar, and projections would really create an environment. Perhaps here, with a Kinect, a Raspberry Pi could be incorporated, for projection-mapping purposes in particular.
As for the video, I think animating the cut-out mylar shrimp would be very useful in placing more emphasis on them. I also agree that creating a model shrimp or a shrimp costume would complete the imagery: the user of the webcam, the person picked up by the openCV vision, is a shrimp. This would complete the circle: the user is a shrimp, seeing in shrimp vision.