more thoughts…

Having discovered the little animation on The Forsythe Company’s website, I have been on a little journey.

Currently considering:

Processing – for visual representation, possibly of a jelly-like matrix, or of excited points/balls that bounce up and down and disturb a flexible surface, like a wave in water, etc.

Mass-spring systems – to be researched, possibly with Jitter in Max, or in mechanical motion. (A rough sketch of the kind of mass-spring surface I have in mind follows below.)
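To give myself something concrete to poke at, here is a minimal sketch – in Python rather than Processing or Jitter, and with entirely arbitrary constants – of a row of masses joined by springs, where one point is excited at the start and the disturbance ripples outwards like a wave:

```python
import numpy as np

# A row of point masses joined to their neighbours by springs: a crude
# "flexible surface". One mass is kicked at the start and the disturbance
# then travels outwards like a ripple. All constants are arbitrary.
N = 40            # number of masses along the surface
k = 0.3           # spring stiffness between neighbours
damping = 0.995   # velocity damping per step
dt = 1.0          # time step

y = np.zeros(N)   # vertical displacement of each mass
v = np.zeros(N)   # vertical velocity of each mass
y[N // 2] = 5.0   # the "excited ball" lands in the middle

for step in range(60):
    # Force on each mass from its left and right neighbours (fixed ends).
    left = np.roll(y, 1); left[0] = 0.0
    right = np.roll(y, -1); right[-1] = 0.0
    force = k * (left + right - 2 * y)

    v = (v + force * dt) * damping
    y = y + v * dt

    if step % 10 == 0:
        # Crude text rendering of the wave every few steps.
        print("".join("#" if abs(h) > 0.5 else "." for h in y))
```

In a Processing sketch the same update would simply drive the vertical positions of drawn points or balls.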

Also got into developing the idea, which I presented in November, of performing with myself on screen – performing live alongside a non-real-time video of myself. This idea has been gathering favour as something that could be quite important to do. How would I take its development further?

Having delved a little into Max, I have seen quite a few Jitter patches being developed online around video noise and audio manipulation using the matrix idea. Very intriguing. One such patch allowed the laptop camera to project your image onto a plane in 3D space that could be manipulated via the trackpad on the x, y and z axes. The resulting trajectory was very much like that of the 9-degrees-of-freedom board, which allowed for very fine tracking of movement. Perhaps I can use the presentation film of myself playing – or a new one – and map onto it a matrix of co-ordinates that can detect as much movement as possible. Can this be mapped to a series of parameters for audio manipulation? And will this affect my involvement in the performance of a duet with myself on video? Or will I be manipulating the co-ordinates whilst playing live and thus affecting the video? And lastly, it would be nice to find a way to make layers of video transparent / opaque. I have a feeling that the idea of performing with my past could be the key to this piece…
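As a first stab at the "matrix of co-ordinates" question, here is a rough sketch – in Python with OpenCV rather than Jitter, with a placeholder file name and an arbitrary 4 × 4 grid – that differences consecutive frames of a video and gives one movement value per grid cell:

```python
import cv2
import numpy as np

# Divide every frame of a (non-real-time) video into a coarse grid of cells and
# measure how much movement happens in each cell by differencing consecutive
# frames. The resulting matrix of values is the kind of thing that could be
# mapped onto audio parameters. File name and grid size are placeholders.
ROWS, COLS = 4, 4
cap = cv2.VideoCapture("presentation_film.mov")

prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        diff = np.abs(gray - prev)
        # Crop so the frame divides evenly into the grid, then average per cell.
        h, w = (diff.shape[0] // ROWS) * ROWS, (diff.shape[1] // COLS) * COLS
        cells = diff[:h, :w].reshape(ROWS, h // ROWS, COLS, w // COLS)
        motion = cells.mean(axis=(1, 3)) / 255.0   # one 0..1 value per cell
        print(np.round(motion, 2))                 # e.g. send each value to a synth parameter
    prev = gray
cap.release()
```

Each value in that small matrix could then be scaled onto an audio parameter – a filter cutoff, a playback speed, a grain density – which is roughly the mapping question above.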

I will make a series of small video sessions, all with very limited movements – drawing on this other idea of 'hands as a language', where one of the observations at my presentation was about the possibility of greater clarity when endeavouring to perform / improvise with limited movements. When free improv is happening, for somebody watching there can be too much information at hand – too spectacular, perhaps? How do we refine things and delve into a deeper and tinier space of observation? With these new sessions I will try to make a new video which allows me (living) to interact with me (past) more easily. Perhaps layers too? Transparency.
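On the layers / transparency question, here is a minimal sketch of what I mean – again in Python with OpenCV as a stand-in for Jitter or Processing, with placeholder file names and an arbitrary opacity – blending the laptop camera over a pre-recorded session:

```python
import cv2

# Blend a live camera image ("me, living") over a pre-recorded video
# ("me, past") with an adjustable opacity. File name and opacity value
# are placeholders; in practice this would sit inside Jitter or Processing.
alpha = 0.5                                     # 0 = only past, 1 = only live
past = cv2.VideoCapture("session_past.mov")     # hypothetical recorded session
live = cv2.VideoCapture(0)                      # laptop camera

while True:
    ok_p, frame_p = past.read()
    ok_l, frame_l = live.read()
    if not (ok_p and ok_l):
        break
    frame_p = cv2.resize(frame_p, (frame_l.shape[1], frame_l.shape[0]))
    mixed = cv2.addWeighted(frame_l, alpha, frame_p, 1.0 - alpha, 0)
    cv2.imshow("duet with my past self", mixed)
    if cv2.waitKey(1) & 0xFF == 27:             # press Esc to stop
        break

past.release(); live.release(); cv2.destroyAllWindows()
```

The interesting control would be changing the opacity while playing, so the past self fades in and out of the live image.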

A further idea is a hologram of myself playing, surrounded by the audience. Can I record myself playing from all angles? And what would it look like in the end? Imagine Princess Leia's holographic image asking for some help… Could be a nice trick.

I do like this idea of the performance with the self. The hardest part I found was the interaction with the video: it is hard to create nuance and a real relationship (as a duo) with a fixed thing. How do you push this relationship into more interesting realms?

Found another very nice website on interface design, interaction, analysis and synthesis, video analysis, etc., and on it a lovely patch from the University of Oslo – http://www.uio.no/english/research/groups/fourms/software/VideoAnalysis/ – which analyses non-real-time videos. The data can be printed out as text, and could well be useful for real-time interactions.
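For my own understanding, here is a rough Python/OpenCV stand-in for that kind of offline analysis – placeholder file name, arbitrary threshold – which writes one line of text per frame: an overall quantity of motion plus the centroid of the moving pixels:

```python
import cv2
import numpy as np

# Offline analysis of a non-real-time video: for each frame, compute a rough
# "quantity of motion" and the centroid of the moving pixels, and write them
# out as plain text so the numbers could later drive a real-time patch.
# The file name and the change threshold are placeholders.
cap = cv2.VideoCapture("session.mov")
prev = None
frame_no = 0
with open("motion_data.txt", "w") as out:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            diff = np.abs(gray - prev)
            qom = float(diff.mean() / 255.0)        # quantity of motion, 0..1
            ys, xs = np.nonzero(diff > 20)          # pixels that changed noticeably
            if len(xs):
                cx, cy = xs.mean() / gray.shape[1], ys.mean() / gray.shape[0]
            else:
                cx = cy = 0.5                       # no movement: park centroid mid-frame
            out.write(f"{frame_no}\t{qom:.4f}\t{cx:.3f}\t{cy:.3f}\n")
        prev = gray
        frame_no += 1
cap.release()
```

A text file like that could then be read back frame by frame during a live performance – the non-real-time data feeding real-time interaction mentioned above.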