Friday, 20 July 2012

007 - Aspect Ratio

I spent a long time at the beginning of this project trying to decide which aspect ratio to output the final video in. There are numerous standards to choose from. 16:9 would be the smart choice here, so as to conform with most modern television sets and PC monitors.

However after researching some feature length and short films I began to appreciate a more extreme widescreen format. I find the super widescreen experience you get at the cinema to be far more theatrical and exciting. From an artistic point of view I think there are opportunities for more dramatic close-ups and creative compositions.

The aspect ratio standard for general theatrical releases is 2.39:1, the width more than double the height of the screen. The Blu-ray disc release standard is very similar at 2.40:1, which at full HD resolution works out to 1920x800.

One thing I've had to be constantly aware of in this project is timescales. More specifically, animation render times. Careful management of the output size could help keep render times under control. So, in order to make a more informed decision about screen aspect ratio and the potential speed of frame outputs, I thought I'd work out how many pixels each option contains. Here are the results in order of total pixels:

16:9 1920x1080p Full HD = 2,073,600 pixels
2.40:1 1920x800p Full HD = 1,536,000 pixels
16:9 1280x720p HD = 921,600 pixels
2.40:1 1280x533p HD = 682,240 pixels
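
For anyone who wants to reproduce these figures, here is a quick Python sketch (purely illustrative; the resolutions are the ones listed above) that prints the pixel counts and the rough saving of each 2.40:1 frame over its 16:9 counterpart, on the assumption that render time scales roughly with pixel count:

```python
# Pixel counts for each candidate output format.
formats = {
    "16:9   1920x1080 (Full HD)": (1920, 1080),
    "2.40:1 1920x800  (Full HD)": (1920, 800),
    "16:9   1280x720  (HD)":      (1280, 720),
    "2.40:1 1280x533  (HD)":      (1280, 533),
}

for name, (w, h) in formats.items():
    print(f"{name}: {w * h:,} pixels")

# Rough render-time saving of 2.40:1 over 16:9, assuming render time
# scales roughly linearly with the number of pixels.
print(f"Full HD saving: {1 - (1920 * 800) / (1920 * 1080):.0%}")   # ~26%
print(f"HD saving:      {1 - (1280 * 533) / (1280 * 720):.0%}")    # ~26%
```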

Diagram I did to investigate the differences in size between formats
You could argue that there is an advantage to using the 2.40:1 ratio, as fewer pixels will be involved in the rendering process.

Example of how the final video will look proportion-wise
So the final output I've decided on will actually conform to the 16:9 ratio but will be letterboxed to suit the 'Panavision' format. This should make playback easier across most screen sizes, formats and internet video hosting sites.
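
As a quick sanity check on the letterboxing itself, the height of the black bars for a 2.40:1 picture sitting inside a 16:9 frame can be worked out with a couple of lines of Python (an illustrative sketch, not tied to any particular piece of software):

```python
def letterbox_bars(frame_w, frame_h, target_ratio=2.40):
    """Return the active picture height and the height of each black bar
    when fitting a target_ratio picture inside the given frame."""
    active_h = round(frame_w / target_ratio)
    bar_h = (frame_h - active_h) // 2
    return active_h, bar_h

print(letterbox_bars(1920, 1080))  # (800, 140): 140 px bars top and bottom
print(letterbox_bars(1280, 720))   # (533, 93):  ~93 px bars top and bottom
```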

Wednesday, 18 July 2012

006 - Stereoscopic 3D

A significant part of this project is focused on the use of stereoscopic 3D in film, and more recently, video games. It's the same effect you see at the cinema when you don the 3D glasses.


At the inception of this project I'll admit I didn't know much at all about how 3D works in this way. It had always intrigued me, so I leapt at the chance to investigate it in depth for this project.

I realised that in order for this effect to work there must be two completely separate views of the same scene, one for each eye, offset by the distance between the right and the left eye. It's just like real life: when you focus on something in the foreground, the background not only blurs out of focus but also appears doubled horizontally. The same thing happens to a foreground object when you focus on the background, except the horizontal doubling happens in the opposite direction.

This effect can then be harnessed by 3D technology to mimic the eyes' natural behaviour. By setting up two cameras the right distance apart in a 3D modeling package like 3ds Max 2012, you can composite the two views and create the illusion of depth. It tricks the eyes by forcing them to apply the rules of three-dimensional vision to a flat 2D surface.
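
As an illustration of the compositing step, here is a minimal sketch of one common delivery method, a red-cyan anaglyph, built from the two camera renders with NumPy and Pillow (left.png and right.png are placeholder filenames for the left and right views, which are assumed to be the same size; this isn't the only way to present stereo footage, just the simplest to show):

```python
import numpy as np
from PIL import Image

# Load the two renders (placeholder filenames for the left/right camera views).
left = np.asarray(Image.open("left.png").convert("RGB"))
right = np.asarray(Image.open("right.png").convert("RGB"))

# Red-cyan anaglyph: take the red channel from the left eye
# and the green/blue channels from the right eye.
anaglyph = np.dstack([left[..., 0], right[..., 1], right[..., 2]])

Image.fromarray(anaglyph).save("anaglyph.png")
```

Viewed through red-cyan glasses, each eye then only sees its own camera's image, which is what produces the sense of depth.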

First test with two iPhone photographs

I tried some initial experiments with this using several 3D characters I modeled. They all sit at different points on a flat surface, some closer to the camera, all of them a different size. The goal of the experiment was to see if the viewer could identify which characters were closer and which were in the distance. The effect works very well here, with everyone I tested being able to correctly read the '3D' space.



I am hoping to do one scene, or all, of my final animation in stereoscopic 3D to further dramatise the on-screen events and to push my own knowledge of using 3D systems.

(I suspect 3D will be a passing phase in general society, impressive as it is. It's a very clever gimmick, one that has been polished considerably in recent years. 3D TV sets have come right down in price, yet they're still not a popular choice with today's consumers. I think this may be because people don't like wearing special glasses to watch television; it may well be the only barrier holding the technology back right now.)

005 - The Writing Process

I really wanted this project to have a solid foundation. A central idea or concept that would underpin all subsequent decisions and design choices. It should be a cohesive and 'complete' piece of work.



I therefore sought to write a screenplay-style script as though I were writing a piece for film. My thinking was that if I'm going to be creating a short film animation then I may as well get involved in the whole process from start to finish: to be not only the 3D Modeler, Animator, Director, Producer, Mo-Cap Choreographer etc., but also the Writer.

So I spent some time researching the conventions of writing for film, reading over numerous guides and looking into character and plot development. I read some full-length film scripts (including one from my favorite film, Quentin Tarantino's Kill Bill Vol. 1) to try and learn this whole new style of writing for the screen.

It really is a different style of writing. I'm used to creative writing in the form of essays and short stories. Even academic writing. But writing for the screen is a different beast entirely. You literally write what you want the viewer to see on the screen at any given moment, no more, no less. At first I found it strange, but then found it unusually liberating. It forced me to think about my project in a different way. You can't write how a character feels, for example, but you can allude to how they feel by changing what we see on screen. It's that good advice 'Show, Don't Tell', but in a more literal sense. It also leaves the final work more open to interpretation.

I know my final written output from this exercise was only three pages long and definitely won't be winning any Oscars, but the process itself brought out so many ideas.

The resulting script was then used to directly inform the storyboarding part of this project. Some ideas have changed or had to be cut from the original scripting phase, which I'm fine with. A project like this naturally evolves over time.

To be honest I probably spent far too much time on the writing. Having said that, I know there are ideas in this project that would not exist had I not gone through this deep and rigorous process. I'm glad I did it and would stress the importance of spending time with words.

Tuesday, 17 July 2012

004 - Puddle Test

This was a test where the aim was to recreate a realistic ground surface with a puddle. It's a small scene taken directly from my storyboard where a pair of sneakers walks past our view over a puddle. I wanted to include this scene to further emphasize the gritty, rugged feeling of the street. You see the sneakers treading the ground; they're so close up you could almost feel the grit under your own feet. It also acts as a teasing introduction to the main protagonist, as you don't see him in his entirety until later in the film.

End result of puddle test
I began with numerous tests into how normal maps could be generated, especially in relation to the reflection values of materials. This, I thought, could be advantageous over actually 3D-modeling the ripples of a puddle (which could negatively impact polygon counts and render times). If I could also isolate the area of influence the normal map had on the puddle, then I could perhaps control a ripple effect with a looped, animated normal-mapped material. That was the theory, anyway.


I started 'baking' the normal map data from high-poly ripple models in 3ds Max 2012. This gave me a flat square output that I could apply to a flat planar surface. I used Photoshop CS5 to cut out a puddle shape and then mixed in a normal-mapped stone ground texture I created using the Crazy Bump software. This, layered with the diffuse material, provided a rather satisfactory result.
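
For anyone curious what that normal map data amounts to, a similar concentric-ripple map can be approximated procedurally from a height field. The sketch below (NumPy and Pillow, my own illustration rather than what 3ds Max does when baking) builds a ripple height field and converts its gradients into a tangent-space normal map image:

```python
import numpy as np
from PIL import Image

size = 512
y, x = np.mgrid[0:size, 0:size]

# Concentric ripples radiating out from the centre of the texture,
# fading with distance.
r = np.hypot(x - size / 2, y - size / 2)
height = 0.5 * np.sin(r * 0.15) * np.exp(-r / 200.0)

# The gradients of the height field give the surface slope...
dy, dx = np.gradient(height)

# ...which become the X/Y components of a tangent-space normal.
strength = 8.0
nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(height)
length = np.sqrt(nx**2 + ny**2 + nz**2)
normals = np.dstack([nx, ny, nz]) / length[..., None]

# Pack from the [-1, 1] range into the usual 0-255 RGB encoding.
rgb = ((normals * 0.5 + 0.5) * 255).astype(np.uint8)
Image.fromarray(rgb).save("ripple_normal.png")
```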


However I thought it still looked quite fake. It didn't look 'wet'. I was after a more tactile feeling from this scene, something more photo-realistic. I then looked at some more reference pictures and noticed the image was missing that slight rim of dampness that puddles often have. Not shiny wet, but very damp and dark-looking. I then added this rim to the diffuse material and really liked the resulting effect.

The end result shown above is created with just one polygon. I wanted to see how far one polygon could be pushed in this test. So something that could potentially have been created with thousands of polygons can be created with just one (by using materials in a different way).



This effect was then animated over ten frames and looped to create the light rain drops in this animation.
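
For the loop to be seamless, the ripple phase has to advance by exactly one full cycle over the ten frames so that the last frame lines up with the first. A small sketch of that idea, building on the procedural ripple above (illustrative only; the actual frames were rendered out of 3ds Max):

```python
import numpy as np

frames = 10
size = 512
y, x = np.mgrid[0:size, 0:size]
r = np.hypot(x - size / 2, y - size / 2)

ripples = []
for i in range(frames):
    # The phase advances by 2*pi across the whole sequence,
    # so frame 10 wraps cleanly back round to frame 0.
    phase = 2 * np.pi * i / frames
    ripples.append(0.5 * np.sin(r * 0.15 - phase) * np.exp(-r / 200.0))
```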

I still think I could add more to this scene to make the rain drops more convincing. Perhaps altering the ripples and adding some 'splash back' for each drop of rain.

003 - Capturing Motion

As I suspected would happen, a significant amount of time has passed since my last update. This, however, doesn't mean the project has not progressed. Quite the opposite! Things have moved on, grown, evolved, come into being etc. So let's not dwell on my time absent from this blog; let's focus on the task at hand and (more importantly) what has been happening in the last month or two.


Today was 'motion capture' day. I got covered in 35 white reflective balls (markers) and acted out a segment of the short film. 16 motion-sensing cameras recorded my movements, translating them on-screen into a 3D representation of my body. We used the Qualisys Motion Capture System, a very sophisticated piece of kit used mostly for medical applications at the University. The system costs around £100,000.

Three of us were using this kit today. Myself and colleagues Kevin Smith and Sarah Martin took turns operating the software and performing the motion capture. Lecturer Stevie Anderson was also there to lend us a hand. The lab itself belongs to a lecturer named Danny. He was a great help to us today and walked us through the calibration and setup processes.

I wanted to perform my character's interpretive dance piece. I have always had an interest in dance and in my spare time I do Ballet and Street Dance classes. I thought I'd be able to use some of those skills for the short film. Part of me also really wanted to generally experience being the subject for motion capture.


One drawback that concerned me today was the height restriction we had to work within: I couldn't raise my hands above my head or jump whilst performing. I felt very limited in my range of movement, which isn't ideal for an expressive piece of dance. This was due to where the cameras were positioned. I feel it could have been rectified by moving most of the cameras further out to the sides of the room, but Danny seemed reluctant to stray from the current setup (perhaps because it is designed for foot movement, an area this department focuses on). However, we made the best of what we had.

So all the movement data has been captured, but the files are still not ready for use with character animation. The data we sampled (in .qtm format) is still not 'clean', as some of the marker positions have broken off from their assigned body part names. Tomorrow I'll go through all the files, reassign any broken positions and hopefully end up with a data set that is ready for conversion to C3D, a format compatible with Autodesk MotionBuilder.
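
The clean-up is mostly a matter of filling gaps where a marker dropped out and making sure each trajectory stays attached to a single label. Here's a rough sketch of the gap-filling half of that job (assuming the marker positions have been exported to a NumPy array with NaNs for missing frames; the real relabelling happens inside the Qualisys software, and the array layout here is hypothetical):

```python
import numpy as np

def fill_gaps(trajectory):
    """Linearly interpolate missing (NaN) frames in a (frames, 3) marker trajectory."""
    filled = trajectory.copy()
    frame_idx = np.arange(len(trajectory))
    for axis in range(trajectory.shape[1]):
        values = trajectory[:, axis]
        valid = ~np.isnan(values)
        filled[:, axis] = np.interp(frame_idx, frame_idx[valid], values[valid])
    return filled

# Hypothetical marker that drops out for two frames mid-capture.
marker = np.array([[0.0, 0.0, 1.0],
                   [1.0, 0.0, 1.0],
                   [np.nan, np.nan, np.nan],
                   [np.nan, np.nan, np.nan],
                   [4.0, 0.0, 1.0]])
print(fill_gaps(marker))  # the two missing frames are filled in by interpolation
```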

Even then the work is not complete as I still have to attach the motion data to the actual character via a CAT (Character Animation Toolkit) Rig. So much to do!