Concept and Background Research
For this project, my initial idea was to make a performance in the real world - a performance using live camera filming and projection on the stage. I was drawn to this by performances like Tesseract by Charles Atlas, Rashaun Mitchell and Silas Riener, and Not a Moment Too Soon by Ferran Carvajal. In those performances, the camera / the projection screen becomes an additional dimension on the stage. Because of the videographer's different viewing angles and the various kinds of live interactive video processing, the audience sees the performers and the stage not only with their own eyes, but also through the camera's eye and the computer's eye. This multi-dimensional watching blends different times and spaces together and offers the audience a more complete, more conscious spectatorship. And the performers are not only acting by themselves, they are also reacting to the video projection on the stage. For them, the live feed projection is another self on the stage, another world they are experiencing there, so there is a collaborative relationship between the videos and the performers: how the performers act affects what appears on the video, and vice versa. I think it works the same way for the cameraman filming that feed. So in this case, the whole scene is created collaboratively by the performer, the cameraman, and the video feed.
Initial project proposal
Working Process
I was eager to experience and explore this hybrid creative process, so I asked my friend, a textile designer, to collaborate with me on a performance presenting her collection for this project. Our workflow went like this:
Our initial plan was to have the models wear the garments and do some movements/behaviors as the performance, to use a half-transparent mesh as the projection surface, and to film the performance in the studio and use that as the fashion film for the collection. However, after I started working on the interaction design and the code, I realized that modeling is actually very different from dancing as a performance: the strategies that work for a dance performance don't work for modeling. My reference projects are all dance performances, and the key to working with interactive visuals like frame differencing / background differencing is movement. But when I actually tested with the model wearing the garments, I found that there is not much (dramatic) movement in modeling... (Of course the model can move dramatically, but that didn't work well for presenting the details of the garments.) The projection mesh also brought problems: firstly, the environment needs to be a dark space with a very specific spotlight behind the mesh; secondly, with the model standing behind the mesh, neither the projection on the mesh nor the model was visible clearly enough to the audience. And we wouldn't have enough time to solve these problems. Basically, we found that it would be a mess, and in no way satisfy the demands of showing the garments, if we insisted on making it a performance. Therefore, I decided to give up the idea of using the projection. Instead, in later experiments, I found that using windows on the screen (monitor) could build the multi-view scene as well - not on the stage, but in virtual space. And that's how I came up with the idea of the live stream: an online, multiple-view live performance.

However, in the actual filming there was only one cameraman (me), and I also had to look after the program on the computer (keeping it running smoothly and not crashing), so it was impossible to film from multiple cameras at the same time. Also, the designer (Jojo) wanted an edited version of the film, and the resolution of footage recorded from multiple small windows on one screen might have been too low, so in the end we ran one view in one big window at a time on the screen during the actual filming.
Technicals

This project was built with OpenFrameworks. The first big challenge for me was to bring the camcorder live feed into OpenFrameworks (because I can't do proper filming with a webcam). I researched the question in the OpenFrameworks and GitHub forums and found that I could either try the Blackmagic Design UltraStudio Mini Recorder with ofxBlackmagic, or try ofxCanon. I started by buying the Mini Recorder and the cables and adapter I needed, and ran the example program of ofxBlackmagic, but it failed to compile... I asked a few questions on GitHub, but haven't got any response so far. So I turned to ofxCanon. However, although I applied for and got the Canon developer license, it didn't work out in the end. (I think it's because I didn't get the Canon live-stream app?) Then, when I had almost given up, I found that there is an addon for transferring data between other software and OpenFrameworks - ofxSyphon. It works with the Blackmagic hardware through the software BlackSyphon, which publishes the Blackmagic feed to Syphon, and ofxSyphon then brings that data into OpenFrameworks. With these tools, I could use the live feed from the camcorder in OpenFrameworks.
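For reference, the OpenFrameworks side of that pipeline is only a few lines. Below is a minimal sketch of receiving the BlackSyphon feed with ofxSyphonClient; the server/application name strings are placeholders - they should be whatever BlackSyphon actually publishes on the machine.

    // ofApp.h (sketch)
    #pragma once
    #include "ofMain.h"
    #include "ofxSyphon.h"

    class ofApp : public ofBaseApp {
    public:
        void setup();
        void draw();
        ofxSyphonClient camFeed;   // receives the camcorder texture published by BlackSyphon
    };

    // ofApp.cpp (sketch)
    #include "ofApp.h"

    void ofApp::setup(){
        ofSetWindowShape(1280, 720);
        camFeed.setup();
        // Placeholder names: use the server / app names BlackSyphon shows on this machine.
        camFeed.set("Black Syphon", "BlackSyphon");
    }

    void ofApp::draw(){
        ofBackground(0);
        // Draw the live feed full-window; the same texture can also be bound or
        // read back into pixels for the effects described below.
        camFeed.draw(0, 0, ofGetWidth(), ofGetHeight());
    }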

In the ‘Full figure scan’ part, I calculated the average color (in RGB) of the pixels where the scan line passed, and used those RGB values to affect the color tone of the MRI scan video, and the shape size and rotation angle of the RGB sphere. I found the way to calculate the average color from this post in openLab.
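As a rough illustration (my own reconstruction, not the project's exact code), averaging the RGB values of the column the scan line is currently passing could look like this, assuming the feed is available as an ofPixels object and the scan line is vertical at column scanX:

    // Average the RGB values of one column of the feed (sketch).
    ofColor averageColorAtColumn(const ofPixels& pixels, int scanX){
        float r = 0, g = 0, b = 0;
        int h = pixels.getHeight();
        for (int y = 0; y < h; y++){
            ofColor c = pixels.getColor(scanX, y);
            r += c.r;
            g += c.g;
            b += c.b;
        }
        return ofColor(r / h, g / h, b / h);
    }

    // The average can then drive the other visuals, e.g. (hypothetical mappings):
    // ofColor avg = averageColorAtColumn(camPixels, scanX);
    // sphereRadius   = ofMap(avg.r, 0, 255, 50, 200);
    // sphereRotation = ofMap(avg.g, 0, 255, 0, 360);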

In the ‘Scan distortion’ part, I used drawSubsection() with a sine term in its parameters to create the dynamic scan distortion.
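The idea, roughly (my own sketch, not the project's exact parameters), is to draw the feed in thin horizontal strips and shift each strip's source x by a sine of time and row, so the image wobbles like a glitching scan:

    // Draw 'img' in horizontal strips, each sampled from a slightly shifted
    // source x position, producing a dynamic scan-like distortion.
    void drawScanDistortion(ofImage& img, float time){
        int stripHeight = 4;   // strip height in pixels (arbitrary)
        for (int y = 0; y < (int)img.getHeight(); y += stripHeight){
            float offset = 20.0f * sin(time * 2.0f + y * 0.05f);   // arbitrary amplitude/frequency
            // drawSubsection(destX, destY, w, h, srcX, srcY)
            img.drawSubsection(0, y, img.getWidth(), stripHeight, offset, y);
        }
    }

    // in ofApp::draw():  drawScanDistortion(camImage, ofGetElapsedTimef());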

In the ‘Hidden’ part, I used frame differencing: I copied the pixel colors from the camera feed image and drew them only at the pixels where a frame difference occurred. I noticed that in the lab examples, to reduce the CPU load, we don't draw a shape on every differenced pixel, but only on about 60~70% of them (or even fewer). But when drawing the camera feed onto the differenced pixels, 60~70% was not enough to show the video feed properly (it looked too sparse), while 100% slowed the program down extremely, so 85~90% gave a better result.
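The per-pixel logic is roughly the following (a sketch with my own variable names; drawProbability is the 0.85~0.90 value mentioned above):

    // Frame differencing: draw the live-feed color only where the frame changed,
    // and only for a random subset of those pixels to keep the CPU load down.
    void drawDifferencedFeed(const ofPixels& current, const ofPixels& previous,
                             float threshold, float drawProbability){
        for (int x = 0; x < (int)current.getWidth(); x++){
            for (int y = 0; y < (int)current.getHeight(); y++){
                ofColor c = current.getColor(x, y);
                ofColor p = previous.getColor(x, y);
                float diff = fabs(c.getBrightness() - p.getBrightness());
                if (diff > threshold && ofRandom(1.0) < drawProbability){
                    ofSetColor(c);                 // copy the camera pixel's color
                    ofDrawRectangle(x, y, 1, 1);   // draw it where the difference occurred
                }
            }
        }
    }

    // e.g. drawDifferencedFeed(camPixels, prevPixels, 30, 0.88);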

In the ‘virtual physical body/object interaction’ parts, I used the addon ofxLiquidFun for the main visual elements and created the interactions with OpenCV centroid locating and/or optical flow. OfxLiquidFun is a modified version of ofxBox2d; it's an amazing tool for creating virtual worlds with physical forces/gravity. By learning from the example projects, I created liquid-like particles, wind-like particles, stretchy forced balls, a pool of bouncing balls, etc. in this project. And by digging into ofxOpenCv, I got more practical experience with the OpenCV contour finder, bounding rects, centroids, etc., which are very useful for locating an object's position (and size) in the Kinect image.
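Putting those pieces together, the interaction loop follows this pattern (a sketch using the plain ofxOpenCv contour finder and ofxBox2d-style calls; ofxLiquidFun's particle API is different, so the attraction call below is illustrative rather than the project's actual code):

    // 'grayImage' is assumed to be an ofxCvGrayscaleImage already filled from the Kinect,
    // 'box2d' an ofxBox2d world, and 'balls' a vector<shared_ptr<ofxBox2dCircle>>.
    contourFinder.findContours(grayImage, 20, (640 * 480) / 2, 4, false);

    ofVec2f target(ofGetWidth() / 2, ofGetHeight() / 2);
    if (contourFinder.nBlobs > 0){
        // Use the first blob's centroid as the attraction point
        // (scaling from Kinect coordinates to screen space is omitted here).
        target.set(contourFinder.blobs[0].centroid.x,
                   contourFinder.blobs[0].centroid.y);
    }

    box2d.update();
    for (auto& ball : balls){
        ball->addAttractionPoint(target, 4.0);   // pull the circles toward the body
        ball->draw();
    }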