Interaction, Motion Control and My Recent Projects

I've been thinking a lot about interactions and interaction design lately, and now I'm finally forcing myself to sit down and hash out some of my thoughts. I really got moving on this path in February, when I started working on Arduino, Leap Motion and Processing projects.

I had previously had ideas about interactive sculpture and other interactive media projects but hadn't followed through. A lot of the inspiration for this most recent push came from the 100 days of hustle, but another big piece was hearing this Jack Dorsey talk. The part that really stuck with me was when he talks about doing whatever it takes to manifest the things you want to see in this world.

One of the things I've been thinking about for years is an immersive, interactive space: a room that collects information about each member of the audience and creates the art accordingly (in my head it's a blend of projection, music and other moving elements). The piece could also influence the behavior of the audience through projection, sound and light, creating a two-way street of influence and some interesting feedback loops. This project is years away for me, but it's definitely something I would like to build, so I'm trying to push as hard as I can in that direction.

The Arduino/Leap Motion/Processing projects have been the first step towards that goal. So far, in the short time I've been playing with them, I've come up with a ton of new ideas, and can see how things I found fascinating before fall under this umbrella. 

One of the things I find so interesting about interactive media is how it plays with the roles of actor and observer. I love the simple question, "am I influencing the actions of the art or is the art influencing my actions?" I guess this is the case with all art, but I think it is particularly tangible with interactive and kinetic pieces. I love the idea that the audience has no choice but to participate. 

One of the reasons I want to explore the physical-digital-physical interaction chain, particularly with motion control, is that I think we are fairly distanced from these interactions in the real world. For example, a light switch is a physical-electrical-optical interaction but feels so simple, so natural because we’re used to it. The on/off binary is simple, so why think about the chain of events that creates the on- or off-state? 

I want to create ‘unnatural’ interactions that can be learned through play and intuition. One thing that I think many current physical-digital-physical interactions lack is adaptivity and creative decision making. Usually it’s just binary on/off. I want to build things that take (and give) more than that.

Working with the Leap has gotten me thinking about gestures and what sorts of interactions make sense for motion control. Making a motion-controlled menu for my Tether drawing program was very difficult because precision becomes an issue: it's hard to hold a hand or finger perfectly still. I ended up scrapping the motion/gesture-controlled menu for a mouse-controlled GUI.

To get around this, the program needs to be smarter. For example, there could be a gesture that indicates you want to use the menu, putting the program into menu mode. In menu mode, a cursor moves toward your hand/finger but doesn't sit exactly at your finger position, so small tremors don't translate directly into cursor motion. Essentially, everything has to be re-thought because the inputs/controls are different, though they can't be too different or people won't be able to learn/understand them.
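That cursor idea boils down to easing: instead of pinning the cursor to the fingertip, move it a fraction of the remaining distance each frame. Here's a minimal sketch of the idea, with mouseX/mouseY standing in for the Leap's finger position (the exact calls depend on which Leap library you use):

    float cursorX, cursorY;
    float easing = 0.15;  // lower = smoother but laggier

    void setup() {
      size(640, 480);
      cursorX = width / 2;
      cursorY = height / 2;
      noStroke();
    }

    void draw() {
      background(255);
      // Move a fraction of the way to the target each frame, so jitter in
      // the tracked position gets smoothed out instead of passed through.
      cursorX += (mouseX - cursorX) * easing;
      cursorY += (mouseY - cursorY) * easing;
      fill(0);
      ellipse(cursorX, cursorY, 20, 20);
    }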

What would GUIs and interfaces look like if we hadn't gone through the age of the mouse? What will the future look like? Is it Minority Report, or something else? What else is needed?

Another thing I wonder about, especially with motion tracking like the Leap/Kinect, is that it doesn't have the start/stop affordance a mouse has (i.e., you take your hand off to stop). I've tried to mimic this by using depth (toward the screen) to create a threshold: behind it, the action doesn't occur; past it, it does. But I'm not sure this is the way to go, and there may need to be a physical indicator for the interface. It may make sense to pair the Leap with an actual physical controller/switch that lets the user change modes, or to have a gesture that only gets picked up in one mode.
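The threshold logic itself is tiny. In the sketch below, mouseY stands in for the Leap's depth reading; using two thresholds instead of one is just a guess at how to keep the state from flickering when the hand hovers right at the boundary:

    float engageZ  = 150;   // crossing closer than this (toward the screen) starts the action
    float releaseZ = 200;   // have to pull back past this before the action stops
    boolean active = false;

    void setup() {
      size(400, 400);
    }

    void draw() {
      float handZ = mouseY;  // stand-in for the Leap's depth value
      if (!active && handZ < engageZ) {
        active = true;    // pushed past the plane: start the action
      } else if (active && handZ > releaseZ) {
        active = false;   // pulled back out: stop
      }
      background(active ? 0 : 255);  // black while the action is engaged
    }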

As things progress, the physical controller might be replaced by something like a certain posture, a nod, or even a vocal command. I think in the future there will be developers/designers who build custom interfaces for people. Essentially they would act as technicians/consultants who learn about a person's behaviors/movements and customize/configure the controller just for them. With 3D printing and motion capture, this will make for some highly adaptable systems.

There is so much information coming through a device like the Leap Motion controller, and I wonder: which parameters will be used and passed through to make an effective user experience? Will that be the same for every person? Could the interface learn a person's preferences and adapt to suit their needs (i.e., you say a command and do a gesture, and eventually you don't need to say the command because the system has learned the gesture)? What about multiple users? How do they work together with different gestures? Would they each need their own set of gestures?

There are so many interesting questions, which is why I'm so inspired by this field. There seems to be so much to experiment and play with, and a lot of space to grow and develop.

Teeter

After more or less finishing up the Tether drawing program, I'm starting to revisit the Teeter game idea. It's getting easier now that I know a little more about Processing and some of the cool things you can do (mostly thanks to The Nature of Code by Daniel Shiffman). Using Box2D makes the game physics a non-issue, which is a relief since I thought I might have to program it all myself.
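The basic setup really is only a few lines. Here's a stripped-down sketch in the style of the Nature of Code examples, with a single falling ball rather than the actual Teeter code (older versions of the library call the main class PBox2D instead of Box2DProcessing):

    import shiffman.box2d.*;
    import org.jbox2d.dynamics.*;
    import org.jbox2d.collision.shapes.*;
    import org.jbox2d.common.*;

    Box2DProcessing box2d;
    Body ball;

    void setup() {
      size(640, 360);
      box2d = new Box2DProcessing(this);
      box2d.createWorld();  // sets up the physics world with default gravity

      // Bodies live in Box2D's world coordinates, so pixel positions get converted.
      BodyDef bd = new BodyDef();
      bd.type = BodyType.DYNAMIC;
      bd.position.set(box2d.coordPixelsToWorld(width / 2, 50));
      ball = box2d.createBody(bd);

      CircleShape cs = new CircleShape();
      cs.m_radius = box2d.scalarPixelsToWorld(12);
      ball.createFixture(cs, 1);  // attach the shape with a density of 1
    }

    void draw() {
      background(255);
      box2d.step();  // advance the simulation; Box2D does all the physics
      Vec2 pos = box2d.getBodyPixelCoord(ball);  // convert back to pixels for drawing
      fill(0);
      ellipse(pos.x, pos.y, 24, 24);
    }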

At this point I have a working program, with a Leap-controlled teeter-totter and semi-random particles of two different shapes. I want to add a few more particle types, tweak how and where the particles spawn, and make it so the game has levels.

In the video below, the red and green dots show hand height for the left and right hands. The direction of the torque on the Teeter is determined by which hand is higher, and the strength of the torque is determined by how far apart the hands are.
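In code, that mapping is just a sign and a magnitude. Something along these lines, where the hand heights come from the Leap and the 200 mm normalizing gap is an illustrative value, not the exact one in the game:

    // Map two hand heights to a torque: the sign comes from which hand is
    // higher, the magnitude from how far apart they are vertically.
    float torqueFromHands(float leftY, float rightY, float maxTorque) {
      float diff = rightY - leftY;  // positive when the right hand is higher (Leap y points up)
      float strength = constrain(abs(diff) / 200.0, 0, 1);  // normalize the gap, clamp to [0, 1]
      return (diff > 0 ? 1 : -1) * strength * maxTorque;  // sign convention depends on the physics world
    }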

Strings

Today I finished a sculpture that I've been working on for about a week and a half: twine wrapped around a frame, with armature wire providing some out-of-frame shape. The process was long and tedious, but it was a great learning experience, especially seeing how things turn out differently from the way you imagined them. I've had this idea in my head for a while, so it was nice to get it out so I can start thinking about the interactive pieces I want to do with twine.

I also had the good fortune to stumble upon the work of Kia Utzon-Frank. She has done some very cool things with thread and string that remind me of a few ideas I've had. Seeing her work gave me a huge spark, and I now have a slew of new ideas. As I was looking at her work, I started playing around with a few of those ideas in Processing. It's amazing how easy it is to get an idea up and running and to play around. I could do the same thing with pencil and paper, but it would take a long time and be very tedious. Being able to draw with code helps me work through ideas much more quickly.
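To give a sense of how little code these playthings take, here is the classic string-art envelope (not one of my sketches below, just the general pattern): lines strung between two edges of a frame, the way twine wraps between pegs.

    void setup() {
      size(600, 600);
      background(255);
      stroke(0, 60);  // translucent strokes layer like overlapping thread
      int n = 60;
      for (int i = 0; i < n; i++) {
        float t = i / float(n - 1);
        // Connect a point sliding along the top edge to one sliding down the right edge.
        line(lerp(0, width, t), 0, width, lerp(0, height, t));
      }
    }

A few of my sketches are below.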


Molecules Update

I have been working to clean up the Molecules program and add some new interactions and controls. Some of it was just rewriting the Processing code in a more elegant way, using classes and PVectors instead of the brute-force method I used when starting out. Incorporating the different interactions is an interesting exercise because I really have to think about what motions make intuitive sense for a certain control. For example, I went back and forth between using a closed hand or an open hand for freezing the molecules before finally deciding to use two hands to freeze. It also requires a bit of playing around with the controls to see which motions get picked up and how the motions blend.
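The cleanup mostly meant turning loose arrays of floats into objects. Roughly like this (this is the shape of the refactor, not the actual Molecules code):

    class Molecule {
      PVector pos, vel;
      boolean frozen = false;

      Molecule(float x, float y) {
        pos = new PVector(x, y);
        vel = PVector.random2D();  // random unit-length starting velocity
      }

      void update() {
        if (!frozen) pos.add(vel);  // freezing just skips the motion step
      }

      void display() {
        ellipse(pos.x, pos.y, 8, 8);
      }
    }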

Mid-March Update

I haven't posted anything in the last few days, but I have been working intensely on build-outs of the Tether drawing program and a Molecules program with more gestures. One thing that has been very interesting is trying to decide what the Leap should control in these projects.

For the Tether drawing program, I went back and forth between using a menu library to build a mouse-controlled menu and building my own Leap-controlled menu. The Leap-controlled menu proved to be a little difficult because the Leap is not well suited to small precision motions (mostly because it is hard to hold your hand still). Additionally, including more advanced functionality and gestures requires a lot of refinement, as it is easy for gestures/motions to be recognized when you don't want them to be.

On the Molecules front, I'm working to include more control modes to practice working with gestures and other ways of defining behavior. Hopefully that will be up and running soon.

Lastly, I started playing around with using Processing to manipulate images, especially pulling colors from an image to make color palettes.
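The simplest version of the palette idea is just sampling the pixel array. Something like this, where the filename is a placeholder and a more careful version would probably cluster similar colors instead of sampling at random:

    PImage img;

    void setup() {
      size(500, 100);
      img = loadImage("photo.jpg");  // placeholder filename; any image in /data works
      img.loadPixels();
      noStroke();
      // Draw five swatches, each filled with a randomly sampled pixel color.
      for (int i = 0; i < 5; i++) {
        color c = img.pixels[int(random(img.pixels.length))];
        fill(c);
        rect(i * width / 5, 0, width / 5, height);
      }
    }

A few images from these experiments are below.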