Many ideas interested me (see bottom of post for links). Q: Thinking ahead to when we begin to incorporate Raspberry Pis, I don't yet know which projects they would fit into.
I am doing project idea #1 for the iterative project. I'm going with Mug Music because I see potential for an interactive musical object, such as an instrumental mug, to become a prototype for a similar interactive DJ gadget for the interactive room in idea #4. Also, as a beginner I feel more confident I can accomplish it; the SugarCube is far more complex, something I will have to build up to.
1.) The simplest, Mug Music, involves turning a cup into a musical instrument. I'm attracted to this project for this stage because it involves constructing an enclosure, and it can also be iterated on by installing new sounds and songs. I may want to 3D print the enclosure in ceramic, which means more prototyping and iterative steps. The mug's sound is made with ChucK, a music programming language that's new to me and worth checking out.
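Mug Music does its synthesis in ChucK, but the core idea (a touch reading becomes a note) can be sketched in Python. Everything here is hypothetical: the 0-1023 sensor range, the base note, and the pentatonic mapping are my own stand-ins, not the project's actual values.

```python
# Hypothetical sketch: map a raw capacitive-touch reading to a pentatonic
# MIDI note. The real Mug Music project does this in ChucK; this Python
# version only illustrates the sensor-value -> pitch mapping idea.

PENTATONIC = [0, 2, 4, 7, 9]  # C-major pentatonic scale degrees, in semitones


def reading_to_midi(reading, lo=0, hi=1023, base_note=60):
    """Map a sensor reading (assumed range lo..hi) onto two pentatonic octaves."""
    reading = max(lo, min(hi, reading))          # clamp noisy readings
    steps = len(PENTATONIC) * 2                  # two octaves of scale degrees
    idx = int((reading - lo) / (hi - lo) * (steps - 1))
    octave, degree = divmod(idx, len(PENTATONIC))
    return base_note + 12 * octave + PENTATONIC[degree]


if __name__ == "__main__":
    for r in (0, 512, 1023):
        print(r, "->", reading_to_midi(r))
```

A light touch near the bottom of the range stays around middle C, while a full-contact reading jumps an octave up, which is roughly the playful behavior the mug is after.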
2.) Lab focused~ I was also interested in making a flight simulator for people interested in flying, or trying to learn to fly, without committing wholly to the true experience as an unaided novice. Flying lessons are very expensive, so a simulator would give people time-tested, habit-forming educational exercises, such as requiring the user to properly check the exterior of the plane before getting inside and taking off on a journey of knowledge.
The environment and objects would be made with Unity/Maya. It would require a lot of pcomp: a few joysticks, switches, buttons, and lights. I'd like it to be fully interactive, reading the conditions of the space you're in with real sensors and feeding that information back as a simulated, heightened experience that is malleable to the user's choices.
3.) I have been wanting to make a nice lamp, but not one attached to a computer. Rechargeable battery? Infinity mirror (cliché)?
4.) A much longer-term (final: code iteration?) idea I have is to make an interactive room or mural where people could walk in, their motions and bodies detected by a Kinect. Their position and movement in the room would be translated into tonalities, and a touchscreen or an interactive object like the SugarCube could be a hub of samples for the user to apply. The music would instigate visuals, or perhaps the whole thing could work inversely: user movements > visuals > music. Or the former, user movements > music > visuals. They both interest me equally.
First thoughts on approach:
begin with an OF sketch that reads Kinect movements
have that OF sketch react visually to the body-sensing data
turn that visual output into audio input data,
DJed by the user through an interactive object
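The chain above can be sketched as two small mapping functions. This is only a hedged illustration in Python: the real pipeline would live in an openFrameworks (C++) sketch fed by Kinect data, and the specific mappings (hue from horizontal position, pitch from hue, the sample names) are invented placeholders, not design decisions.

```python
# Hypothetical sketch of the movements -> visuals -> music pipeline.
# Real Kinect/OF code would replace these stand-in numbers; the point is
# only that each stage's output becomes the next stage's input.


def body_to_visual(x, y):
    """Map a normalized body position (0..1, 0..1) to a visual state."""
    hue = x * 360.0        # left-right position sweeps the color wheel
    brightness = y         # height in the frame sets the brightness
    return hue, brightness


def visual_to_audio(hue, brightness, sample_bank=("pad", "pluck", "drum")):
    """Map the visual state to audio: pitch, volume, and a chosen sample.

    sample_bank stands in for the hub of samples the user would DJ
    through an interactive object like the SugarCube.
    """
    pitch = 220.0 * (2 ** (hue / 360.0))   # one octave across the hue wheel
    volume = brightness
    sample = sample_bank[int(hue) % len(sample_bank)]
    return pitch, volume, sample


if __name__ == "__main__":
    hue, bright = body_to_visual(0.5, 0.8)   # someone mid-room, arms raised
    print(visual_to_audio(hue, bright))
```

Flipping the order of the two functions (audio first, then visuals derived from it) would give the other variant, user movements > music > visuals, with the same shape of code.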