Limitations of 3D

In terms of 3D software there is one main limitation: in many cases it is the price of particular 3D software suites.

For example, 3ds Max, perhaps the most renowned 3D modelling package, is a very effective piece of software for creating 3D models; however, because it is such a thorough program, it carries a very high price.


The latest edition of 3ds Max is available for almost $3,500.

The high cost of 3D software is one of the very few limitations, and it applies to most of the more professional, higher-standard packages.

One further limitation relates to the actual production of 3D models: when creating professional-standard 3D models it becomes very easy to produce files with large file sizes, especially after the model has been rendered.

The final limitation is the use of polygons and how these can limit an image and its quality.

Each model is made up of a number of polygons, and the number used is called the poly count. The higher the poly count, the more detailed the image will be, but a high poly count in a scene will also create large files and make rendering and animation slower.
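As a rough illustration of how poly count drives file size, here is a small sketch in Python. The `estimate_mesh_bytes` helper and its byte sizes are illustrative assumptions of my own, not figures from any particular 3D package.

```python
# Rough sketch: estimating the in-memory size of a triangle mesh
# from its poly count. The byte sizes are illustrative assumptions
# (3 floats of 4 bytes per vertex position, 3 vertex indices of
# 4 bytes per triangle), not figures from any real file format.

def estimate_mesh_bytes(poly_count, verts_per_poly=3,
                        bytes_per_vertex=12, bytes_per_index=4):
    # In a typical closed triangle mesh, shared vertices mean the
    # vertex count is roughly half the face count (V ~ F / 2).
    vertex_count = poly_count // 2
    vertex_bytes = vertex_count * bytes_per_vertex
    index_bytes = poly_count * verts_per_poly * bytes_per_index
    return vertex_bytes + index_bytes

low_poly = estimate_mesh_bytes(1_000)       # a simple prop
high_poly = estimate_mesh_bytes(1_000_000)  # a detailed character
print(low_poly, high_poly)  # the high-poly mesh is ~1000x larger
```

Even under these simplified assumptions, a thousandfold increase in poly count produces a thousandfold increase in mesh size, which is why high-poly scenes create such large files.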

Radiosity and raytracing

Radiosity 

Radiosity is an algorithm that is very dependent on light and on the direction from which the light hits the object, which creates a more effective light reflection.

In recent times radiosity has become renowned for being more beneficial than standard lighting effects. There are two main benefits which radiosity has over standard lighting: improved image quality and more lifelike lighting.

Radiosity technology is very prominent in 3ds Max, which has the ability to produce more accurate photometric simulations of the lighting created in specific scenes. Indirect light, soft shadows and colour bleeding between surfaces all help produce images in a realistic manner. This is one major benefit, because when rendered the image will have a sense of realism which standard scanline rendering is unable to offer.

In terms of radiosity techniques, 3ds Max also provides a real-world lighting interface, in which light intensity is measured in photometric units. By working with a real-life lighting interface, you can effectively set up the lighting for your scenes. The user is therefore able to concentrate on designing the lighting or exploring different lighting effects, rather than having to visualize them manually as in other computer graphics techniques.

The main disadvantage is that even though the image quality is a lot better, the rendering time rises and the image takes a lot longer to render. (Photometric units are the units used to measure light.)

In terms of the differences between local and global illumination rendering algorithms, local illumination algorithms only describe how individual surfaces reflect or transmit light. Given a description of the light arriving at a surface, these mathematical algorithms, called shaders in 3ds Max, can predict the intensity, colour and distribution of the light leaving that surface.
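To illustrate what such a local illumination shader does, here is a minimal sketch in Python of the standard Lambert diffuse model: given the light arriving at a surface, it predicts the intensity of the light leaving it. The `lambert_shade` function is my own illustrative example, not 3ds Max's actual shader code.

```python
import math

# A minimal local-illumination "shader": given a description of the
# light arriving at a surface, predict the light leaving it. This is
# the classic Lambert diffuse model.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def lambert_shade(surface_normal, light_dir, light_intensity, diffuse_colour):
    n = normalize(surface_normal)
    l = normalize(light_dir)
    # Light leaving the surface is proportional to the cosine of the
    # angle between the normal and the light direction (clamped at 0).
    cos_theta = max(0.0, dot(n, l))
    return tuple(c * light_intensity * cos_theta for c in diffuse_colour)

# Light hitting a surface head-on gives full intensity...
print(lambert_shade((0, 0, 1), (0, 0, 1), 1.0, (0.8, 0.2, 0.2)))
# ...while grazing light gives none.
print(lambert_shade((0, 0, 1), (1, 0, 0), 1.0, (0.8, 0.2, 0.2)))
```

Note that the function only ever looks at the light arriving at one surface; it knows nothing about other surfaces in the scene, which is exactly the limitation global illumination addresses.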

Global illumination algorithms, by contrast, also take into account the ways in which light is transferred between different surfaces.

An algorithm is a list of well-defined instructions for completing a task; a shader is a set of software instructions used by graphics hardware primarily to perform rendering effects.

3ds Max also offers two global illumination algorithms, which have become integral parts of its production rendering system. One key objective of a global illumination algorithm is to re-create, as accurately as possible, the way light behaves in the real world.

One of the very first global illumination algorithms to be developed was ray-tracing.

Raytracing

Raytracing is camera dependent: the reflection depends heavily on the position of the camera.

A ray is traced backwards from the position of the eye, through a pixel of the monitor, until it encounters a surface. To determine the total illumination, we then trace a ray from the point of the encounter to each light source in the environment; this ray is named the shadow ray.
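The process described above can be sketched in a few lines of Python. The scene here (spheres and a point light) is a toy example of my own, not production ray-tracer code, and it simplifies the shadow test by ignoring the hit object itself rather than offsetting the ray.

```python
import math

# Minimal backward ray tracing: a ray is traced from the eye through a
# pixel until it encounters a surface, then a shadow ray is fired from
# that point towards the light.

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def ray_sphere_hit(origin, direction, centre, radius):
    # Solve |origin + t*direction - centre|^2 = r^2 for the nearest
    # t > 0 (direction is assumed normalized, so the quadratic's a = 1).
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace(origin, direction, spheres, ignore=None):
    # Return the nearest sphere the ray hits, or None.
    nearest_t, nearest = None, None
    for s in spheres:
        if s is ignore:
            continue
        t = ray_sphere_hit(origin, direction, s["centre"], s["radius"])
        if t is not None and (nearest_t is None or t < nearest_t):
            nearest_t, nearest = t, s
    return nearest

def shade_pixel(eye, pixel_dir, light_pos, spheres):
    d = normalize(pixel_dir)
    hit = trace(eye, d, spheres)
    if hit is None:
        return "background"
    t = ray_sphere_hit(eye, d, hit["centre"], hit["radius"])
    point = [e + t * x for e, x in zip(eye, d)]
    # The shadow ray: if another surface lies between the hit point and
    # the light, the point is in shadow. (A full tracer would also check
    # that the occluder is nearer than the light itself.)
    to_light = normalize([l - p for l, p in zip(light_pos, point)])
    if trace(point, to_light, spheres, ignore=hit) is not None:
        return "in shadow"
    return "lit"

ball = {"centre": (0, 0, -5), "radius": 1}
blocker = {"centre": (0, 4, -5), "radius": 1}
light = (0, 10, -5)

print(shade_pixel((0, 0, 0), (0, 0, -1), light, [ball]))           # lit
print(shade_pixel((0, 0, 0), (0, 0, -1), light, [ball, blocker]))  # in shadow
```

The same pixel changes from lit to shadowed simply by adding an object between the hit point and the light, which is exactly the job of the shadow ray.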

One key disadvantage of ray-tracing compared with radiosity is that radiosity has the ability to calculate diffuse interreflections between surfaces, whereas ray-tracing does not account for these interreflections. Radiosity also produces results for the whole scene that can be reused, whereas ray-tracing does not: the rendering of images using ray-tracing depends on the number of light sources, and the process has to be repeated for each different viewing angle, making it view-dependent.

In terms of the history of ray-tracing and radiosity, the algorithms came about when researchers began to research alternative techniques for calculating global illumination.

In the early 1960s, engineers developed methods for simulating the radiative heat transfer between surfaces, to determine how well their designs would cope under demanding conditions in places such as furnaces and engines.

Rather than determining the colour of each pixel on a screen, as ray-tracing does, radiosity is a technique which calculates the light intensity for all surfaces in the environment.
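That per-surface idea can be sketched very compactly: radiosity solves for the light intensity of every surface patch at once, by repeatedly gathering the light each patch receives from the others. The form factors below (the fraction of light leaving one patch that arrives at another) are made-up illustrative values, not computed from real geometry.

```python
# Sketch of the radiosity idea: solve B = E + reflectance * F * B by
# simple iteration, where B is each patch's radiosity, E its emission,
# and F[i][j] the (made-up) form factor from patch j to patch i.

def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    n = len(emission)
    radiosity = list(emission)  # start from the emitted light only
    for _ in range(iterations):
        radiosity = [
            emission[i] + reflectance[i] * sum(
                form_factors[i][j] * radiosity[j] for j in range(n))
            for i in range(n)
        ]
    return radiosity

# Three patches: a light source (patch 0) and two reflective surfaces
# that each see the light and each other.
emission = [1.0, 0.0, 0.0]
reflectance = [0.0, 0.5, 0.5]
form_factors = [
    [0.0, 0.2, 0.2],
    [0.3, 0.0, 0.3],
    [0.3, 0.3, 0.0],
]
print(solve_radiosity(emission, reflectance, form_factors))
```

The result is an intensity for every surface, with no camera anywhere in the calculation; that is why radiosity is view-independent, while a ray-traced image has to be recomputed for each new viewpoint.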

Rendering Hardware

One key stage during the 3D development of a particular object is that it has to be rendered at the end of the process for the object to be complete. In this post I am going to state some simple facts about rendering hardware and its capabilities.

API

Firstly, API in rendering terms stands for Application Programming Interface. An API is the interface between a 3D application and the hardware which is being used to render the object. Rendering through the API usually happens after all the steps in the animation process have been completed.

GPU 

The next point in relation to rendering hardware is the GPU, which stands for graphics processing unit. A GPU is a dedicated graphics rendering processor found in a number of devices, such as PCs and games consoles.

“A GPU implements a number of graphics primitive operations in a way that makes running them much faster than drawing directly to the screen with the host CPU.”

http://en.wikipedia.org/wiki/GPU

Throughout their history, GPUs have developed mainly from the monolithic graphics chips of the 1980s and '90s. One of the first home computers to use a form of GPU was the Commodore Amiga, in the mid 1980s.


Shaders

A shader is something which has the ability to define the final surface properties of an object, such as the colour, lighting, reflectivity, and translucency of a surface.

Both the DirectX and OpenGL graphics libraries use three types of shaders: vertex, geometry and pixel (fragment) shaders. Vertex shaders operate on individual vertices and can therefore only affect per-vertex properties like position, colour, and texture coordinates.
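To show what "only affecting per-vertex properties" means, here is a sketch of a vertex shader written in Python rather than in GLSL or HLSL; the dictionary layout and matrix are illustrative assumptions, not either API's real interface.

```python
# Sketch of a vertex shader: it runs once per vertex and may only
# change per-vertex properties such as position, colour, and texture
# coordinates. Real vertex shaders are written in GLSL/HLSL and run
# on the GPU; this Python stand-in just shows the data flow.

def vertex_shader(vertex, model_view_projection):
    x, y, z = vertex["position"]
    m = model_view_projection
    # Transform the position by a 4x4 matrix (w assumed to be 1).
    pos = [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3]
           for r in range(4)]
    # Colour and texture coordinates are passed through unchanged.
    return {"position": pos, "colour": vertex["colour"],
            "uv": vertex["uv"]}

# Identity matrix with a translation of 2 units along X.
mvp = [[1, 0, 0, 2],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]

v = {"position": (1.0, 0.0, 0.0), "colour": (1, 0, 0), "uv": (0.0, 0.0)}
print(vertex_shader(v, mvp)["position"])  # [3.0, 0.0, 0.0, 1.0]
```

Notice that the shader has no access to neighbouring vertices or to the final pixels; it sees one vertex at a time, which is exactly the restriction described above.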

Rendering Engine

A rendering engine is the process which generates an image from a model. The model is a description of 3D objects in a data structure. The image contains information about elements such as geometry, viewpoint, texture, lighting, and shading.

Rendering is used in many forms of media entertainment to render final productions.

Some of these include:

  • video games,
  • simulators,
  • movie or TV special effects, and
  • design visualization.

Each of these forms of entertainment uses a different balance of techniques.

Why are SGI graphics considered to be important in the development of hardware renderers?

Silicon Graphics is considered to play such a key role in the development of hardware renderers because Silicon Graphics created OpenGL.

James Clark

James Clark is a very well known entrepreneur who formerly specialised in computer science. In more recent times he has founded several notable Silicon Valley technology companies.

The reason James Clark is one of the most notable figures in computer graphics is that his research into computer graphics led to the development of systems for the fast rendering of computer images.

Pros and cons of OpenGL and DirectX shaders

OpenGL

OpenGL shaders are cross-platform.

OpenGL is very portable, can be used in a variety of situations, and is very straightforward to use.

OpenGL is unlikely to evolve at a fast rate.

DirectX

DirectX supports a greater set of features.

DirectX gives programmers a great deal of control over the rendering pipeline if they want it. DirectX 9.0 features programmable pixel and vertex shaders.

DirectX has better support for modern chipset features.

DirectX is also very straightforward; however, it is not cross-platform like OpenGL, as DirectX in most cases only works on the Windows platform.

However it has some driver issues.

DirectX is not portable and probably never will be.

3D Software development

Throughout the world of 3D software there is a mixture of both well-known and lesser-known packages. However, they are all effective in what they do, and all have their strengths and specialist areas.

  •  3ds Max 
  • AC3D
  • Cinema 4D
  • Houdini
  • LightWave 3D
  • Massive
  • Maya
  • Modo
  • Truespace
  • Softimage|XSI
  • Unreal Editor
  • AutoCad
  • After Burn
  • Mojo World
  • 3D Plus(Serif Plus)
  • Havok
  • ZBrush
  • Sketch Up Pro
  • MilkShape 3D
  • Anim8or

http://en.wikipedia.org/wiki/3D_computer_graphics_software

Examples of Software Workspaces and Images

       

  

Basic Information

Anim8or is a 3D modelling and character animation program. Some key advantages of the program are that the controls are easy to follow and the program is small in size compared with similar programs like 3ds Max.

Even though Anim8or is not on the same level of professionalism as its competitors, it can still produce effective rendered images.

 

http://www.anim8or.com/main/welcome.html

3ds Max

3ds Max is a full-featured 3D graphics application and one of the most widely used off-the-shelf 3D animation programs among content creation professionals. It is a very powerful application for creating 3D models and animations, and is recognised worldwide as one of the most in-depth applications of its kind.

3ds Max is used in many industries that utilize 3D graphics. It is used in the video game industry for developing models and creating cinema cut-scenes.

Maya

Maya is currently used in the film and television industry. Maya has a steep learning curve but has developed over the years into an application platform in and of itself through extensibility via its MEL programming language.

CGI Timeline

The use of the PC and other computerized systems became very influential in the development of CGI. The 1960s in particular were influential in utilizing the computer's attributes.

1962

The Sketchpad computer graphics system was developed by Ivan Sutherland.

  

1964

Boeing, a Seattle-based company, created a 3D animation of an aircraft carrier landing, along with drawings by William Fetter and W. Bernhart.

1966

Charles Csuri creates Hummingbird, the first example of computer generated representational animation.

1970’s

1971

Fred I. Parke created animated faces.

1972

MAGI animated computer-rendered polygonal objects. Fred Parke created the first ever computer-generated facial animation.

1974

Interactive keyframing techniques were introduced with the short film “Hunger/La Faim”, directed by Peter Foldes.

1979

A computer group opened at George Lucas' Industrial Light & Magic.

The 1980s were a major decade in which CGI developed into what it is known as today, as computers became more powerful and able to handle high-resolution productions.

A number of new companies opened, all with a common focus: 3D animation.

Wavefront, Digital Productions and R. Greenberg Associates all opened in 1981 across America.

In 1982, “Tron” became the first film with over 20 minutes of computer animation.

The Genesis effect in “Star Trek II” was the first fully computer-animated visual effects shot.

In 1983, Bill Reeves of Lucasfilm published techniques for modelling particle systems.

Only a year later, Porter and Duff, also at Lucasfilm, published a paper on digital compositing using an alpha channel. This paved the way for effectively combining live action and CG imagery.

In 1986 the animation studio Pixar opened, having grown out of Lucasfilm's computer division.

Later in the 1980s, John Lasseter at Pixar published a paper describing traditional animation principles, which was followed for years to come by fellow animators.

In early 1988, “Locomotion”, a short film by Pacific Data Images, provided an early example of squash and stretch.

1990’s

Terminator 2 was the first blockbuster film to use multiple morphing effects and simulated natural human motion.

Disney's “Aladdin” was one of the first films to fully utilize computer animation, producing a full character in computer-generated images.

The huge hit “Jurassic Park” was the inaugural film to use kinematics in creating realistic living creatures.

In the mid 1990s, the wildebeest stampede in Disney's “The Lion King” was a superb integration of 3D computer-animated flocking systems with traditional animation.

In 1995 “Toy Story” was released as the first full-length CGI film, and the first of its kind to achieve huge commercial and critical success.

In 1997, Pixar's “Geri's Game”, modelled with subdivision surfaces, won the award for best animated short film.

1998 produced some of the best animated films of the decade, including “Antz” and “A Bug's Life”.

Warner Brothers' “The Iron Giant” used computer animation to great effect in animating the title character.

Throughout 2001 and 2002, Pixar created two short animated films, “For the Birds” and “Mike's New Car” from Monsters, Inc. During this time, games consoles like the PlayStation 2, Xbox and GameCube further improved the reputation of computer graphics.

In 2002, “The Lord of the Rings: The Two Towers” created a unique character in Gollum by using a combination of performance capture and keyframe animation. During this year Blue Sky Studios released “Ice Age”.

 

History of CGI

CGI Development

The first believed use of graphics and animation was the thaumatrope. It was invented in the mid 1820s by John Ayrton Paris. The device is a disc with images on both sides.


Following this came the zoetrope, often known as the “Wheel of Life”, invented in 1834 by William G. Horner.

Many people in the CGI (computer generated imagery) industry believe that the classic thesis by Sutherland in the early 1960s was the official beginning of computer graphics and animation in the form it is known today. The classic thesis by the pioneer Ivan Sutherland was the beginning of computer graphics: an interactive computer graphics interface, called Sketchpad, that demonstrated for the first time the power of computer graphics as a method for controlling and interacting with computers. Simple computer graphics had been created earlier, during the fifties, to generate simple output displays. However, it was not until Ivan Sutherland developed his original software that people became aware of the skills and potential of computer graphics.


The potential of the computer to create graphically enhanced productions was at first slow to develop. During this era there were three main barriers which often stopped people from fully utilizing the power which computers had to offer. The first was the high cost of computing at the time, which appeared in a number of different forms. Computer graphics, especially interactive graphics, required large amounts of memory; the cost of this was very high, and the cost demands often could not be met.

The only occasions when these costs could be justified were when universities and large industrial research laboratories used the computer to aid them with research. The second barrier was the lack of understanding of the intricate controls for generating graphics, due to the then complicated picture-generating software which an effective computer graphics system required.

The complexity of both the software and its applications was much underestimated. Many of the early graphics achievements were impressive in themselves but rather inadequate in comparison to the demands of professional, economically sound interactive graphic design applications. Due to many technological innovations, time favoured CGI: computer equipment continued to drop in price year after year.

Also, during this era, operating systems were improved, and the complexity people had to cope with lessened as the software became more sophisticated. Progress was also made in the development of algorithms.

“Algorithms are a definite list of well-defined instructions for completing a task; that given an initial state will proceed through a well-defined series of successive states, eventually terminating in an end-state. The concept of an algorithm originated as a means of recording procedures for solving mathematical problems such as finding the common divisor of two numbers or multiplying two numbers.”

http://en.wikipedia.org/wiki/Algorithm
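The quote above mentions finding the common divisor of two numbers as a classic example of an algorithm. Euclid's algorithm does exactly that, and it fits the definition word for word: a definite list of steps that moves through well-defined states and terminates in an end-state.

```python
# Euclid's algorithm for the greatest common divisor: each loop
# iteration is one well-defined state transition, and the loop
# terminates in an end-state when b reaches 0.

def greatest_common_divisor(a, b):
    while b != 0:
        a, b = b, a % b
    return a  # end-state: b is 0 and a holds the divisor

print(greatest_common_divisor(1071, 462))  # prints 21
```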

Algorithms were successful in generating pictures, especially those which represented views of 3D objects. Computer graphics entails both hardware and software technology; as with conventional numerical computing, we may have both batch and interactive modes.

In the early days of computer graphics, the main attention of artists was given to the hardware side of CGI. Due to the high performance of modern hardware, attention has since shifted towards the software side.

http://www.beanblossom.in.us/larryy/cgi.html – information on the history of CGI and the classic thesis

Cartesian Coordinate System

http://blackboard.ntc.ac.uk/webapps/portal/frameset.jsp?tab=courses&url=/bin/common/course.pl?course_id=_291_1

3D software packages use a system to create the illusion of working in a 3D space. The system used is called the Cartesian coordinate system.

The Frenchman René Descartes developed the system used to create this illusion of working in 3D space. He did so in an effort to merge algebra and Euclidean geometry, and his work has played an important role in the development of analytic geometry, calculus and cartography.

Two axes, X and Y, are commonly used to define the two-dimensional Cartesian system, each running in a positive and a negative direction. The point where the X and Y axes meet is called the origin, which is labelled O.


The Z axis is important to the Cartesian system because it enables us to locate any point in three-dimensional space.

In the Cartesian system, points are located using the X, Y and Z axes. For example, a point 59 units along the negative X-axis is at -59, 100 units along the positive Y-axis is at 100, and 50 units along the negative Z-axis is at -50, giving the coordinates (-59, 100, -50).
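The point described above can be written directly as an (X, Y, Z) tuple, and once a point is expressed in Cartesian coordinates we can do geometry with it; the distance calculation below is a small illustrative example.

```python
import math

# The point from the example above, as (X, Y, Z) coordinates.
point = (-59, 100, -50)

# The Cartesian system lets us compute geometry directly from the
# coordinates, e.g. the straight-line distance from the origin O
# using Pythagoras extended to three dimensions.
distance = math.sqrt(sum(c * c for c in point))
print(round(distance, 2))  # about 126.42 units from the origin
```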


There are also other key parts to the system: the viewports. These are influential because they allow us to work from all angles, as each shows a different pair of axes.


An orthographic view is one which displays two axes at the same time.

Comparison between 3D pipeline and game pipeline


http://www.pbs.org/kcts/videogamerevolution/inside/how/index.html

  • Once the basic game concept is decided upon, writers and artists work together on a storyboard. A storyboard consists of rough sketches and technical instructions sequentially organized to depict each scene of the game. It is a visual representation of the story. A video game can have thousands of outcomes.
  • Therefore various levels, or “worlds,” of the game must be sketched out.
  • As the storyboard is made, designers begin to create the characters. Rough sketches of major characters are drawn and redrawn until they are perfect. It’s important for the artists to refine the characters as much as possible at this stage because it will be difficult to make changes later.
  • Once the character design is finalized, it’s time to transform the sketches into controllable 3D characters.
  • This can take up to 5 days per character expression.

  • The sketches are first scanned into the computer.
  • Then, a digital “exoskeleton” is created to define the character’s shape and to give the computer the control points necessary to animate the figure.
  • The game programmers bring this figure to life by instructing the computer to move the character. Several techniques can be used to do this, depending on the type of game and motion desired. In some games, the motions of a human actor are captured using a special suit of sensors to represent the control points of the character’s skeleton. These movements then can be mapped onto the character’s skeleton to produce ultra-realistic motion.
  • One of the most important aspects of modern game creation is the environment. Reflections in shiny surfaces and varied cloud patterns often go unnoticed by players, but they help create a much more natural environment.
  • The majority of 3-D objects created for computer games are made up of polygons. A polygon is an area defined by lines. Each polygon has a set of vertices to define its shape, and it needs information that tells it what to look like.
  • This allows games to have incredibly detailed 3-D environments that you can interact with in real time.
  • Unseen to the user, but making all of the game elements work together, is the code. Code is the set of computer language instructions that controls every aspect of the game. Most games are written with the C programming language.
  • A 3-D code engine is almost always used to generate the incredibly complex code necessary for all of the polygons, shadows and textures the user sees on the screen.
  • Once the game is complete, it enters the postproduction phase. This phase includes extensive testing, review, marketing and finally, distribution.
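The polygon representation mentioned in the steps above (an area defined by vertices, plus information telling it what to look like) can be sketched as a tiny data structure. The vertex positions and material names here are made-up illustrative values, not taken from any real game engine.

```python
# A minimal polygon mesh: a shared vertex list, plus polygons that
# each hold a set of vertex indices and appearance information.

vertices = [
    (0.0, 0.0, 0.0),  # 0
    (1.0, 0.0, 0.0),  # 1
    (1.0, 1.0, 0.0),  # 2
    (0.0, 1.0, 0.0),  # 3
]

# A square wall built from two triangles sharing vertices 0 and 2.
polygons = [
    {"indices": (0, 1, 2), "material": "brick"},
    {"indices": (0, 2, 3), "material": "brick"},
]

def polygon_corners(poly):
    # Look up the actual 3D positions of a polygon's vertices.
    return [vertices[i] for i in poly["indices"]]

print(len(polygons), "polygons,", len(vertices), "shared vertices")
print(polygon_corners(polygons[0]))
```

Sharing vertices between polygons is what keeps real-time meshes compact enough to render interactively, which is why detailed 3D environments remain playable in real time.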

Ralph Eggleston


Ralph Eggleston is an art director who works for Pixar Animation Studios. He is best known for his work on Monsters, Inc. and on the award-winning short For the Birds.


Ralph Eggleston is very renowned within the animation world. Many of his fellow animators believe he is an integral part of the animation world, due to the successful films he has made alongside Pixar studios.

Some of these include:

  • The Incredibles (2004)
  • Toy Story (1995)
  • Beverly Hills Cop III (1994) (character designer)
  • Finding Nemo (2003)
  • For the Birds (2001)

 


Ralph Eggleston spent the early days of his career as one of the storyboard artists on the early episodes of The Simpsons, most notably “Krusty Gets Busted” (1990).


Ralph Eggleston is a key figure in the world of animation and film. One of the main reasons for this is that he has always strived to improve on his latest releases. One key example is the 1995 hit “Toy Story”: many people thought it could not be bettered, as it was one of the first CGI films. However, it was followed by the smash hit Monsters, Inc., which was voted one of the best animated films in 2003.

Ralph Eggleston is therefore a key figure in animation due to his talent for making better and better creations, as shown in later works such as Monsters, Inc. and The Incredibles. Another key part of his success has been that he has always followed the same process in the creations he has been an integral part of. He is also a key figure because he has talents in many fields: for example, he was involved in creating the “Krusty Gets Busted” episode of The Simpsons, as well as storyboarding the inaugural episode of The Simpsons and working on the 1994 film Beverly Hills Cop III.

 

He has been involved in both the animation and art departments on his films, and has worked in both fields throughout his career, which shows he has multiple talents in the creation of both animations and films.

Other films which Ralph Eggleston has been involved in creating and producing include;

FernGully: The Last Rainforest

The Making of Me.

Episode 1 of The Simpsons (storyboard artist)


My key figure, Ralph Eggleston, who works for Pixar, fits into the history of computer graphics and animation as a follower, following the advancements in technology of previous years. Ralph Eggleston is a follower because he was among the first set of artists who could use advanced technology to make images or clips in a new format, and Pixar was one of the effects companies that were among the original followers. This is shown in Pixar's 1995 film Toy Story, on which Ralph Eggleston was art director. Toy Story was one of the first fully 3D CGI films and is often credited as being the first fully 3D animated production.