Monday, February 15, 2010

Title Crawl: Introduction to the thesis

  • Catastrophe! This thesis project has been made obsolete by the release of Mental Images' iray product. Unfortunately for me, this came to light about halfway through the project. The post below is unedited, so I can look back on it later. But to see the new direction my thesis has taken, replace the below post with this one.




This weekend marks the start of my senior thesis for Digital Imaging and Design at NYU's CADA program. This blog will serve as a record of my research and ramblings while I put it together. I will also be producing an animated short using the ideas in my thesis, which will hopefully demonstrate that my high-minded ideals are grounded in some portion of reality. The short is tentatively titled 'The Robin Danger School of Culinary Excellence', and more info on that will appear soon.

Brevity is a virtue, so I'm going to try to summarize the thesis with one sentence: 
I aim to demonstrate that animated entertainment (shorts, commercials and films) can be produced more effectively with a real-time hardware rendering engine than with traditional software rendering.

    And what do I mean by that? Well, there are many steps in the production of animated entertainment. I am focusing here on one of the last parts of the process, which is also one of the most time-consuming: rendering out the final set of images that go into the finished product. I think this crucial step can be accomplished more cheaply and more quickly, with a negligible loss in quality, by using a hardware rendering technique.

    And why do I think this? Well, we currently don't use most of our hardware's capabilities when rendering out an image. Usually, we use a software program to do all of the complex calculations required. It is slower, but it remains the current standard because of the level of visual quality and customization it has offered for many years. Here are some of the software rendering tools that are currently used:
    RenderMan - Rendering standard best known from Pixar's implementation
    V-Ray - Commonly used in professional 3D programs
    Mental Ray - The de facto standard of high-end rendering

    So if software rendering gives us so much customization and quality, why am I bothering to look at another technique? To put it bluntly, video card hardware is closing the gap that used to exist between the two methods. Hardware tools now offer some degree of customization and visual quality. Here are some of the tools on the current hardware market:
    MachStudio Pro - a real-time production tool
    Unreal Engine - a prolific tool using OpenGL and DirectX
    Valve Source Engine - a proprietary engine that uses DirectX

          I will be comparing and contrasting the process of using one method from each of these groups. In the software corner, I will be focusing on Mental Ray. Packaged with many high-end 3D programs, Mental Ray is both widely used and very powerful. In the hardware corner, we have the Source Engine, created by Valve Software for its own products. It's an inexpensive tool that is both well documented and not difficult to learn. It has also been used to create media that has aired on network TV, which puts it in the running with the 'big dogs' as far as I'm concerned.

          My one-sentence summary up above is basically saying that Source can be better than Mental Ray. I will say up front, in my first post on this blog, that this argument will not always hold water. There are cases where a software renderer is the only way to get the kind of control you need to get the job done. Effects animation and visuals that are unusual or require complex physics and lighting simulations are not the kind of thing that can easily be replaced by a real-time tool.

          However, I've worked with a couple of software render farms here in NYC that used Mental Ray, and I can say that most of what I saw getting pushed through the software renderer did not fit the above caveats. And you know what? I think that was a huge waste of time, money, and hardware. Some of those frames took several hours apiece to render, meaning you could go a whole day and not have a full second of animation to show for it. Employees were needed to wrangle the render farm, bucketloads of electricity were needed to keep the farm running and cool, and since the farm was made up of workstations, the whole team's productivity slowed as their computers struggled to keep up. And all of this was eating up time before the big deadline.

          For cases such as these, I will attempt to offer a solution through this thesis. As mentioned above, I'm going to make a short using the methods in my thesis. But to put a finer point on it, I'm going to make the same short TWICE: once with the software method and once with the hardware method. I will then measure the time and resources each production requires, and either show that this is a valid alternative or show that I've made a bold claim that is actually untrue.

          By this time next year, we'll know the answer.

          3 comments:

          1. Hey, it's Helen. I'm on a bus! Just read your posts and I find you very easy to understand: it's clear what you want to do, as well as how and why. Which is great! Very interesting! The only thing I don't really understand is what hardware rendering means, or rather why it's called that. I'm sure this is something any animator would know, so I'll talk to you later.

          2. A good question! I can simplify it by saying that 'hardware' rendering uses your computer's graphics card to create the image, while software rendering uses your computer's main processor and mostly ignores the graphics card.

            To get a little more technical, one of the biggest tasks in generating an image is depth sorting: the process of figuring out which objects are closest to the camera. If an object is obscured by something else, there's no need to do any further calculations for it! A graphics card can perform this operation blindingly fast; graphics chips are built specifically for it. A software renderer has to do the same calculation in a program running on the main processor, and that's slower.

            I think this is an important enough issue to deserve its own post. That will come soon!
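            In the meantime, here's a minimal sketch of the idea in Python (purely illustrative; it isn't taken from Mental Ray, Source, or any real renderer, and the buffers and fragments are made up for the example). It shows the per-pixel depth test described above: keep a fragment only if it's closer than whatever is already at that pixel. A graphics card runs this comparison in dedicated hardware across huge numbers of pixels at once; a software renderer runs it as ordinary program code on the main processor.

                # Minimal z-buffer ("depth-sorting") sketch. Hypothetical data,
                # not from any real renderer.
                WIDTH, HEIGHT = 4, 4

                # Depth buffer: every pixel starts out "infinitely far away".
                depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
                # Color buffer: every pixel starts out as background black.
                color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

                def draw_fragment(x, y, depth, color):
                    # Keep this fragment only if it's closer than what's already
                    # at the pixel; otherwise it's hidden, so skip all further work.
                    if depth < depth_buffer[y][x]:
                        depth_buffer[y][x] = depth
                        color_buffer[y][x] = color

                # Two overlapping "objects" land on the same pixel:
                draw_fragment(1, 1, depth=10.0, color=(255, 0, 0))  # far, red
                draw_fragment(1, 1, depth=2.0, color=(0, 0, 255))   # near, blue

                print(color_buffer[1][1])  # (0, 0, 255): the nearer object wins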

          3. wow, this is quite the undertaking. the fact that you intend to do this in under a year is also quite the challenge. Although with the advances in technology and artistic understanding we have today, I do think you know what you are doing.

            I saw the FF13 trailer on Will's high def gigantor television and if that is the future of graphics and animation... my eyes are going to bleed. Why is there such a focus on creating things in a way that the human eye is not meant to see? Or is that something we are aiming for?


            also HELLO! I MISS YOU!
