Presented my project to the group. Killing two birds with one stone, I did it as a blog (which you're looking at). It went well, but I was slightly thrown by Mani asking what the content of the work was. I'd forgotten to think about this!
I showed some Java sketches I'd done and ideas for sub-projects:
Rather than using an additive, light-based photographic process (as Marey, Muybridge & Edgerton did), which tends to bleed out the more static parts of images, use a digital camera in movie mode to capture a sequence of frames. These can then be blended into a single image using either:
a) photo editing software (eg Photoshop) – an example of which heads this blog.
b) small Java programs written to automate the process.
I’ve started experimenting with these programs on some footage taken out of my window. This happened to capture a double-decker bus, which the process converts into a super-long bendy bus. The words above refer to the different crude algorithms I used to create and enhance the averaging across the frames.
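As a sketch of what these small Java programs might look like — class and method names are just placeholders of mine, and this is only the plain per-channel mean, not the enhanced versions — the basic blend could be:

```java
import java.awt.image.BufferedImage;

public class FrameAverager {
    // Blend a stack of same-size frames into one image by
    // averaging each pixel's R, G and B channels independently.
    public static BufferedImage average(BufferedImage[] frames) {
        int w = frames[0].getWidth(), h = frames[0].getHeight();
        long[] sumR = new long[w * h], sumG = new long[w * h], sumB = new long[w * h];
        for (BufferedImage f : frames) {
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int rgb = f.getRGB(x, y);
                    int i = y * w + x;
                    sumR[i] += (rgb >> 16) & 0xFF;
                    sumG[i] += (rgb >> 8) & 0xFF;
                    sumB[i] += rgb & 0xFF;
                }
            }
        }
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        int n = frames.length;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int i = y * w + x;
                int r = (int) (sumR[i] / n), g = (int) (sumG[i] / n), b = (int) (sumB[i] / n);
                out.setRGB(x, y, (r << 16) | (g << 8) | b);
            }
        }
        return out;
    }
}
```

The enhancement algorithms would then be variations on this loop (weighting frames differently, thresholding, and so on).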
The same sort of idea using DV camcorder sequences: the sequence is broken down into individual frames, which are then treated and recombined. Sketches towards certain video ideas:
» create a video of a subset of the frames ghosting through the sequence.
» create a predetermined faint trail of all frames. A lone bold figure traverses this trail (bottom left below).
» create sequences where all frames one second apart are combined into one frame – then cycling through the 24 combined frames in a loop.
» I’ve also become interested in just the blurring aspects of time on stillish objects (eg. a bush).
Eventually I’ll write programs to do these kinds of processes automatically.
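The "frames one second apart" idea above mostly boils down to grouping frame indices before blending each group into one combined frame. A sketch of just that grouping (assuming a fixed frame rate — 24 fps would give the 24 combined frames to loop through):

```java
import java.util.ArrayList;
import java.util.List;

public class SecondApartGroups {
    // Group frame indices so that group k holds frames
    // k, k + fps, k + 2*fps, ... — each group gets blended into
    // one combined frame, and the fps combined frames play in a loop.
    public static List<List<Integer>> groups(int totalFrames, int fps) {
        List<List<Integer>> out = new ArrayList<>();
        for (int k = 0; k < fps; k++) {
            List<Integer> g = new ArrayList<>();
            for (int i = k; i < totalFrames; i += fps) {
                g.add(i);
            }
            out.add(g);
        }
        return out;
    }
}
```

Each group would then be fed through the same averaging step as the still experiments.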
I’d like to use some of these processes on more complex trails. For example, below is the type of route I take from home to the shops.
I also had a simple idea to merge Muybridge’s studies, putting the sequences back into a single frame, and showed an example of one I’d done already.
Using frame grabbing and recombining in a different way, here is a sequence of stills from Planet Of The Apes (1968). Each still consists of one minute’s worth of film frames, with each RGB pixel averaged out across these 1500 frames. Altogether there are 106 images, which can then be put back together into a slide-show.
Finally, all 600,000+ frames from the entire film are averaged into a single image, thumbnailed below.
Using webcams to monitor audience motion.
I’d like to expand upon David Rokeby’s ideas, manipulating the data once recorded in different ways.
- geometrically mapping trails
- treating trails as sand particles which fall under gravity
- attempting to map the audience from three dimensions to two
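All of these trail ideas first need some motion data out of the webcam feed. The crudest starting point — simple frame differencing, not Rokeby’s actual technique, and the names here are just my placeholders — would be:

```java
public class MotionMask {
    // Mark the pixels whose brightness changed more than a threshold
    // between two consecutive webcam frames. The resulting mask is the
    // raw material the trail manipulations above would consume.
    public static boolean[] diff(int[] prev, int[] curr, int threshold) {
        boolean[] moved = new boolean[curr.length];
        for (int i = 0; i < curr.length; i++) {
            moved[i] = Math.abs(brightness(curr[i]) - brightness(prev[i])) > threshold;
        }
        return moved;
    }

    // Rough brightness of a packed 0xRRGGBB pixel.
    private static int brightness(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        return (r + g + b) / 3;
    }
}
```

Accumulating these masks over time gives trails, which could then be geometrically remapped or handed to a falling-sand simulation.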