Falmouth Summary

A couple of weeks ago I went to Falmouth University and gave a talk to the Autonomatic Research Cluster, as part of an exchange visit arranged by SCIRIA. My interest in Autonomatic was that they have experimented with rapid prototyping and CNC milling at a very low level – directly inputting machine instructions and, to an extent, bypassing common 3D modelling packages. They have also shown an interest in automating parts of the creative process, either through the machine or through third-party users. Others who attended were Barbara Rauch (who talked about her work exploring consciousness, and prototyping models of human emotions transplanted onto animals), Jem Mackay (discussing his work in collaborative film making), and Richard Osborne (the college philosopher and searcher for a Digital Aesthetic). I gave a talk which I described as a throughline of my two years at Camberwell.

In 2006 I had an idea for a series of art pieces: life size, full colour sculptures which substantiated the motion of a human body. I simply wanted to see what these sculptures would look like. What were the ‘unique forms of continuity in space’, to use the title of Boccioni’s most famous work (pictured below, top centre)? What would these renderings tell us about what it is to be a human occupying space and time?

At this stage the particular action, or content, of the work was not of much significance to me. I suspected that the actions should be mundane, possibly representing the repetitiveness of human life and work.

Despite this lack of consideration of content, I could see that the project would work for me on three levels – the three with which I try to underpin all my art:

1) Scientific – the sculptures should be as objective as possible. I am always interested in removing my particular artistic signature from my work, be it through generative compositional techniques, or a drive toward objectivity. My battle with objectivity has been a central challenge throughout this MA.

2) Artistic – as far as I knew at the time, these types of objects didn’t yet exist. Throughout history artists have attempted to realise motion using traditional art techniques – and the technology now available to artists meant that for the first time these objects could actually be made. I wanted to see them, and I figured other people would too.

3) Programmatic – realising these objects would involve a fair degree of technical and programming skill, which I felt was a suitable challenge. Working with 3D mathematics has always been one of the hairier regions of programming, and one which to date I had avoided. But useful algorithms are now fairly well established.

I felt the project would suitably encompass all three methodologies.

The project was achievable, and vitally it would result in traditional art objects. I had spent the previous years working on digital art projects resulting in interactive installations or internet games or films, and I was looking for something which used my programming and software skills to generate real physical objects – objects which could be displayed in galleries and just looked at. Since going to art college some 20 years ago and choosing to specialise in what was then called Intermedia (practically video with computer graphics overlays) I had turned my back on the physical, desiring to be an artist whose work would ‘fit onto a floppy disk’. There is only so much time you can spend looking at a computer monitor, and furthermore you can sell physical objects – an endeavour which I feel mature enough (or skint enough) as an artist to address. I wanted to create (to use the euphemism) a desirable object.

After doing some preliminary research and discovering such important figures as E-J. Marey, who inspired both the Futurists and Duchamp, and Dan Collins, one of the first artists to use rapid prototyping, I came across the work of Geoffrey Mann, who was working along very similar lines with animal motion. He called his project Long Exposure Sculptures, and I borrowed his title and expanded it to include Photography and Film, as this proved the easiest way to explain what I was trying to do.

I chose Camberwell College of Arts, which was local to me, after researching other student projects and talking with the course tutor Andy Stiff, and determining that he would be interested in helping me achieve my goal.

One of the prime reasons this project appealed to me was the recent improvement in the 3D modelling software Poser. This application would allow you to model a particular human in motion and then export these animations as a series of 3D OBJ files, one of the primary formats in polygon mesh modelling. I envisioned that these OBJ files could then be merged (Boolean Unioned) in other 3D modelling programs and then sent off to be rendered, either by CNC milling or rapid prototyping or other commercially available 3D printing techniques. After researching different programs I settled on Rhino, as the interface was intuitive and scriptable.
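For anyone curious what those OBJ files actually contain: they are plain text, essentially a list of vertices and faces. A minimal sketch of a reader in Java (the language I later used for Voxeliser), handling only the ‘v’ and ‘f’ lines and assuming the faces have already been triangulated:

```java
import java.io.*;
import java.util.*;

// Minimal OBJ reader: vertex lines ("v x y z") and triangular face
// lines ("f a b c"). A sketch only: real OBJ files also carry normals,
// texture coordinates, quads and groups, all ignored here.
public class ObjReader {
    public static void main(String[] args) throws IOException {
        List<double[]> vertices = new ArrayList<>();
        List<int[]> faces = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] t = line.trim().split("\\s+");
                if (t[0].equals("v")) {
                    vertices.add(new double[] {
                        Double.parseDouble(t[1]),
                        Double.parseDouble(t[2]),
                        Double.parseDouble(t[3]) });
                } else if (t[0].equals("f")) {
                    // face indices are 1-based and may look like "12/5/7"
                    faces.add(new int[] {
                        Integer.parseInt(t[1].split("/")[0]) - 1,
                        Integer.parseInt(t[2].split("/")[0]) - 1,
                        Integer.parseInt(t[3].split("/")[0]) - 1 });
                }
            }
        }
        System.out.println(vertices.size() + " vertices, " + faces.size() + " faces");
    }
}
```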

And here’s where my problems started. To render a rapid prototype requires that the file sent to the printer is watertight; that is, the polygon mesh mustn’t have any geometrical holes or inconsistencies in it. There is a proprietary piece of software which comes with rapid prototyping machines, called Magics STL, which checks files about to be sent off for printing. This program was telling me that my models contained in the region of 8,000 holes. On top of which, merging different objects (and I was planning to merge up to 60) would result in catastrophic breakdowns in the pose’s geometry. Hand filling all these holes was something I didn’t want to think about with my rudimentary knowledge of Rhino.
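The watertight rule itself is simple to state, even if Poser’s output failed it thousands of times over: in a closed mesh, every edge must be shared by exactly two triangles. A hedged sketch of that test (my own illustration of the rule, not how Magics STL works internally):

```java
import java.util.*;

// Watertightness by edge counting: in a hole-free mesh every edge is
// shared by exactly two triangles. An edge used once borders a hole;
// used three or more times, it signals broken geometry.
public class WatertightCheck {
    // faces: each int[3] holds the three vertex indices of a triangle
    public static int countBadEdges(List<int[]> faces) {
        Map<Long, Integer> edgeUse = new HashMap<>();
        for (int[] f : faces) {
            for (int i = 0; i < 3; i++) {
                int a = f[i], b = f[(i + 1) % 3];
                // order-independent key so (a,b) and (b,a) count as one edge
                long key = ((long) Math.min(a, b) << 32) | Math.max(a, b);
                edgeUse.merge(key, 1, Integer::sum);
            }
        }
        int bad = 0;
        for (int n : edgeUse.values()) if (n != 2) bad++;
        return bad; // 0 means watertight, by this edge test at least
    }
}
```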

Several months of my first year were spent trying to resolve these problems technically; they stemmed from the fact that the object files coming out of Poser were hopelessly inadequate. Poser was designed to produce 2D renders of 3D figures (the sort of thing that might adorn a still shot of a 3D interior designed in 3D Studio Max), so when you look in detail at the back of the figures, or all the places where clothes join bodies, or where body parts cross over each other, you can see the 3D shortcomings.

However I was encouraged by the types of 2D renders I was producing (below). From simple, almost default motions – walking, running, etc. – the output figures held a charming, perhaps mythical quality.

Eventually I managed to get a free trial of Magics STL (retail price £5,000) and after a month of slogging away managed to achieve my first watertight model.

The model looked to me as if a man were trapped inside himself, trying to break free. I christened it Golem and sent it off to the print technician at Central St Martins. After 9 months I had made my first prototype. Taking 8 hours to render and costing £80, it stood all of 9.6 centimetres tall…

…and it looked like something out of a cornflake packet. Most of the detail in the 3D model was gone, several small fingers had fallen off in the process of removing the scaffolding, it was a dull cream monotone, and scaling it up would cost many thousands of pounds. On top of this, my trial of the essential Magics STL had run out, and after a year’s work I was completely stuck as to how to proceed.

About this time I received an email from a Dutch artist called Peter Jansen saying, “Hello, I think you might be interested in my project…”

Peter had already done my project! What is more, he had used exactly the same pipeline: Poser to Rhino to Magics STL to rapid prototype. He had just successfully exhibited these objects at the Turin Design Show and received a lot of praise for them. Thus, I finished my first year. I wrote my PGPD essay on ‘The Impossibility of Originality in Digital Art’ and went away to consider where to take my project.

Analysing Peter’s impressive achievement closely, I started to think carefully about the gap between objective and subjective approaches to visualising motion. What had initially drawn me to this subject was the incredible artworks made 100 years ago by Duchamp, Balla and Boccioni. Indeed, Peter had been looking at the same artists and had made his own version of ‘Nude Descending a Staircase’. The gulf between these two versions suggested that perhaps a more subjective interpretation might be scientifically weaker but artistically stronger. The static quality of Peter’s sculptures also told me that what I was really attempting was not to express motion but, in fact, what can be seen as its opposite – to freeze time. Mathematically this could be described as the subset of 4D space passed through by a 3D object (a rough gloss of this is sketched below). I started thinking about other works which I had made throughout the year.
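In set notation (my own shorthand, nothing formal): if B(t) ⊂ ℝ³ is the region the body occupies at time t, the long exposure sculpture is the union of every region visited during the exposure interval:

```latex
S \;=\; \bigcup_{t \in [t_0,\, t_1]} B(t)
  \;=\; \bigl\{\, p \in \mathbb{R}^3 : \exists\, t \in [t_0, t_1],\ p \in B(t) \,\bigr\}
```

Freezing time, in other words, means collapsing the t axis into one shape.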

I had successfully been applying long exposure techniques to video data, whereby each frame of video is considered as a slice of 3D data over time. I’d then extract a region of each slice and recombine them into a 2D image, using a slit scan technique familiar to photographic and digital artists for many years.
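In code the slit scan is almost trivial, which is part of its appeal. A minimal Java sketch (assuming the video has already been decoded into an array of equally sized frames):

```java
import java.awt.image.BufferedImage;

// Slit scan: take one vertical column of pixels from each video frame
// and lay the columns side by side, so the x axis of the output image
// becomes the time axis.
public class SlitScan {
    public static BufferedImage scan(BufferedImage[] frames, int slitX) {
        int h = frames[0].getHeight();
        BufferedImage out = new BufferedImage(frames.length, h,
                BufferedImage.TYPE_INT_RGB);
        for (int t = 0; t < frames.length; t++)
            for (int y = 0; y < h; y++)
                out.setRGB(t, y, frames[t].getRGB(slitX, y));
        return out;
    }
}
```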

I’d even used similar ideas to recombine webcam footage in real time, creating interactive video pieces which recorded motion in front of the camera as a trail or trace.
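The trail effect works on the same principle, but live: each new frame is blended into a slowly decaying accumulator, so recent movement lingers as a ghost. A rough sketch, with the decay factor left as a knob to tune rather than a value I can vouch for:

```java
import java.awt.image.BufferedImage;

// Motion trail: blend each incoming frame into an accumulator image.
// With decay near 1.0 the accumulator changes slowly, so anything that
// moved recently leaves a fading trace behind it.
public class MotionTrail {
    public static void blendInto(BufferedImage acc, BufferedImage frame,
                                 double decay) {
        for (int y = 0; y < acc.getHeight(); y++) {
            for (int x = 0; x < acc.getWidth(); x++) {
                int a = acc.getRGB(x, y), f = frame.getRGB(x, y);
                int r = (int) (((a >> 16) & 0xFF) * decay + ((f >> 16) & 0xFF) * (1 - decay));
                int g = (int) (((a >> 8) & 0xFF) * decay + ((f >> 8) & 0xFF) * (1 - decay));
                int b = (int) ((a & 0xFF) * decay + (f & 0xFF) * (1 - decay));
                acc.setRGB(x, y, (r << 16) | (g << 8) | b);
            }
        }
    }
}
```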

Both these techniques were achievable because at any given moment I knew where every pixel (of a 2D image or video frame) was, and could recombine them algorithmically at will. The problems I had been having with 3D data stemmed from the fact that the figures I was trying to recombine were made up of, say, 40,000 triangles of 3D geometry. Whether any particular triangle crossed another in 3D space was unknown to me; they were just huge lists of unmanageable data.

About this time I saw a video of a technical talk given by a young American artist called Nathan Wade. He had made a 3D model of a head slowly turning using medical data available in the public domain. He had written algorithms to decimate and rotate this data, and described what he was doing as Data Mining – the attempt to find new ways of extracting information from pre-existing data. To create new ways of artistically looking. Crucially the data he was using was scanned: effectively a series of bitmaps – so he too knew where every pixel was.

This made me think: why don’t I take my awkward 3D data and convert it into something that I can understand and work with – a 3D array of pixels, or voxels as they are called? And this is what I did. This finally resulted in Voxeliser, a Java program to convert the geometric data into, at first, a 3D object file of small cubes. The beauty of this approach is that the problem of Boolean Unioning simply disappears. Each 3D object ‘frame’ given to the program just fills in certain boxes or not. The next frame then does the same: if a box is already filled in then it moves on, it doesn’t care, but if the box is empty then it fills it. The resulting voxel sculptures fulfil my brief of describing the space passed through by a human in motion.
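The core of the idea is small enough to sketch here (reconstructed from the description above, not pasted from the original source):

```java
// Every animation frame marks the grid cells its geometry touches.
// Because the grid is just booleans, the Boolean Union of sixty frames
// is nothing more than repeated flag-setting: a cell already filled by
// an earlier frame simply stays filled.
public class VoxelGrid {
    final boolean[][][] cells;
    final int n;

    VoxelGrid(int n) {
        this.n = n;
        cells = new boolean[n][n][n];
    }

    // Mark one point of a frame's geometry, coordinates already scaled
    // to grid units. A hypothetical entry point: a real voxeliser would
    // rasterise whole triangles into the grid, not isolated points.
    void fill(double x, double y, double z) {
        int i = (int) x, j = (int) y, k = (int) z;
        if (i >= 0 && i < n && j >= 0 && j < n && k >= 0 && k < n)
            cells[i][j][k] = true; // union for free: filled stays filled
    }
}
```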

I didn’t think there would be any point in sending these voxel objects to a rapid prototyping machine, as the poor resolution and associated Lego aesthetic weren’t what I was looking for. However, once I had created the array I could output it as a series of slices, and these could be sent to a laser cutter and the cut slices glued together into a 3D model. This would become the backbone of my approach for the rest of the course. I finally had a production method where I could control all of the stages. This method started me off on a series of cardboard works called Slugmen, which passed through various prototypes and were eventually shown as part of Xhibit09 in the Arts Gallery at UAL. The slottable models, representing the space passed through in a single stride, were designed to be built by the audience to the show; several were built and others were taken away. So I guess I had made a desirable object.
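The slicing step is equally mechanical: walk the grid one layer at a time and write each layer out as a bitmap. A sketch, with the file naming and orientation as assumptions of mine:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Write each horizontal layer of the voxel grid as a black-and-white
// bitmap, one file per layer, ready to be traced and laser cut.
public class SliceExporter {
    static void export(boolean[][][] cells, String prefix) throws Exception {
        int n = cells.length;
        for (int k = 0; k < n; k++) {
            BufferedImage img = new BufferedImage(n, n,
                    BufferedImage.TYPE_BYTE_BINARY);
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (cells[i][j][k])
                        img.setRGB(i, j, 0xFFFFFF); // filled cell = white
            ImageIO.write(img, "png", new File(prefix + k + ".png"));
        }
    }
}
```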

The major concession to subjectivity, or human interference, I had made was that once I had output the bitmaps for each layer, I hand traced around them in Illustrator to produce a more flowing surface. I could have written a program to automate this, but it looked complex and I wasn’t sure how useful it would be, as there was one thing I couldn’t control in the process: access to the laser cutter. Each of the Slugmen took something like 25 minutes to cut, and the high demand for the cutter meant you could only feasibly get a few hours on it each week. I wanted to scale these models up beyond the size of the cutter, which meant tracing the outlines from a projector onto large card – so I would have to use an artistic approximation anyway.

About this time I started to think of my voxelisation process in a broader light. The reason why polygon meshes, as opposed to voxels, have dominated 3D modelling is simply that polygons are an efficient way to describe space: a single polygon might encompass thousands of voxels. But with unlimited computer memory and power the voxel approach is much easier and more flexible, and computers are starting to have enough power to deal with it. If everything is seen as a huge 3D array of data, then why not edit that array in much the same way that Photoshop is used to edit 2D data?

For example, by converting video data into a 3D array and then removing anything which remains static, it is possible to produce long exposure images like the one below.
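A hedged sketch of that operation in Java, using the first frame as the ‘static’ reference and a guessed threshold rather than anything tuned:

```java
import java.awt.image.BufferedImage;

// "Remove whatever stays still": compare every frame against a
// reference (here simply the first frame) and composite only the
// pixels that differ noticeably into one long exposure image.
public class LongExposure {
    public static BufferedImage expose(BufferedImage[] frames) {
        BufferedImage ref = frames[0];
        int w = ref.getWidth(), h = ref.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (BufferedImage f : frames)
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    if (diff(f.getRGB(x, y), ref.getRGB(x, y)) > 30)
                        out.setRGB(x, y, f.getRGB(x, y)); // moving pixel kept
        return out; // static regions stay black
    }

    static int diff(int a, int b) { // sum of per-channel differences
        return Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF))
             + Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF))
             + Math.abs((a & 0xFF) - (b & 0xFF));
    }
}
```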

The grand thought came to me that any type of data can be effectively converted into a 3D array – film, images, 3D models, scanned data and so on. One of the foundation stones of the digital age is that all data is qualitatively the same, just a series of 1s and 0s. So why not create a program which can manipulate this data? This thought led to my PhD proposal, entitled ‘Sculpting in Time: An Assessment of Voxelisation for 4D Data Manipulation’.

The image above is a simple example of what this program could do – input a video and then output a still image, as contained in the white parallelogram, effectively slicing on the z (time) axis. This is recognisable as a basic slit scan operation.

The program would be capable of handling my long exposure sculptures by immediately outputting the 3D model from a series of input 3D animations.

Just as any visual data can be input, so too can the output take any visual form – stills, films, 3D models, virtual spaces, etc. I see this program as a kind of Ur-digital-manipulation tool, of which the projects described above, and even something like Photoshop, would be subsets.

Back to my MA: I’ve entered an exciting phase, the last three months, where I’m actually starting to build things, which are getting bigger all the time. The latest work is done using the voxelisation technique and then projecting the slices onto 6mm corrugated card, hand tracing the projections, cutting them out and gluing them back together.

This piece was made from an action as mundane as touching my toes, but has a rich organic (almost labial) form which could only be guessed at. The way in which the object is balanced between human and abstract is interesting, and reminds me of an Italian sculptor whose work has fascinated me recently. The cardboard layers give a geological reading, which fits with the theme of compressing motion in time. Overall the hand-crafted artistic approach, and its messy inevitabilities, have started to take over the work, which I’m fine with for now. Generating one of these models objectively has defeated me (particularly budget-wise – about £5,000 for a polystyrene, CNC 5-axis-milled life size figure). But I know that within these crude models lies a core of objective, realistic modelling which creates an interesting counterpoint, just about available through inspection. I think the object works well as a sculpture on a simple visual level too.

My next model will be done in the same way, but I’m then going to papier-mâché over the top and colour it. I’m still experimenting with process, working towards the final life size model.

I said at the start that I suspected the content wasn’t too important and that each sculpture would generate its own narratives or readings, and I think I’m starting to find that is true. That said, currently the main idea I have for the end piece is a life size model of the artist being pushed off a plinth – addressing artistic balance, the collapse (of digital information? of time? of history?), and nodding towards works by Manzoni and Klein.


4 Responses to “Falmouth Summary”

  1. yaghoob-babazadeh Says:

    Dear sir,
    I want to know how I can produce a 3D file from sonography machines. I have three files from the machine, with jpg, avi and mvl extensions. I want to make a 3D file from these files but I don’t know how, or with which software. After making the 3D file I will use a rapid prototyping machine to make an RP product from the baby photo I took from the sonography machine.
    Regards,
    babazadeh

    • timpickup Says:

      I believe one of the major rapid prototyping bureaus offers a service which converts a photograph to a rapid prototyped semi-relief, using a displacement map on the brightness. This would work well for a sonogram. Sorry, I can’t remember the company.

  2. hagar Says:

    Loved your work. Do you live in the UK? And how are you progressing?

    • timpickup Says:

      Thanks. Yes, in London. I’m working on more (but smaller) long exposure sculptures, and also some digital prints using code to collapse video sequences. As I make progress I’ll post to this blog.
