09/28/01
The Future of Creating Photo-Realistic 3D Web Images
Users in every industry are accustomed to seeing amazing pre-rendered animations, where modelers and animators have spent days perfecting the geometry, textures, lighting and animation paths to hide flaws. In a flat rendering, it is easy to conceal bad shadows, geometry that lacks detail, or badly stretched textures. Now, with the new 3D web viewing engines, models are put to the test: end-users view them on a standalone basis, spin a model manually, zoom in, and investigate every aspect of it.
A great parallel for the 3D industry is the evolution of photography. The first photographers were skilled artists who learned the trade of capturing pictures - a complex art involving sensitive plates, chemical processes and lengthy exposure times. It was a small niche industry with a great deal of potential. Not until the advent of the personal camera, film cartridges, and centralized processing at a reasonable price did the industry really start to flourish: photography had to be simplified and made available to the masses. There are still professional photographers, but they concern themselves with the quality of the picture, not with how to capture the image on film and develop it. The 3D industry needs to make a similar shift. 3D will move beyond its current state with easier-to-use 3D software, faster computers, and the capability to capture photo-realistic 3D content easily.
Today's method for building photo-realistic 3D models from real-world objects is divided into two separate processes. One is building the geometry, either from scratch or with a 3D capture device. The other is capturing and optimizing the texture maps from photos or other references. Once the geometry and the texture are each created and optimized, they are married using planar, cylindrical, spherical, or UV mapping, among other methods. Even after this is finished, a great deal of work remains to achieve just the right look.
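The simplest of these marriage methods, planar mapping, amounts to projecting each vertex onto a plane and normalizing the result into the unit square of the texture. The sketch below illustrates that idea only; `planar_uv` is a hypothetical helper, and real tools add cylindrical, spherical, and hand-edited UV layouts on top of this:

```python
import numpy as np

def planar_uv(vertices, axis=2):
    """Project vertices onto a plane (dropping one axis) and
    normalize the remaining two coordinates to [0, 1] UVs.
    A minimal sketch of planar texture mapping."""
    verts = np.asarray(vertices, dtype=float)
    # Drop the projection axis, keeping the other two coordinates.
    uv = np.delete(verts, axis, axis=1)
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    return (uv - lo) / (hi - lo)  # scale into the unit texture square
```

Stretched textures, the flaw noted above, appear precisely when geometry curves away from the projection plane: distant surface points collapse onto nearby UV coordinates.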
Photogrammetry, the process of generating geometry from photos, is another method used today to create 3D content. It has its own set of challenges, because the lights and shadows in the source photos remain part of the completed 3D model: ambient lighting and fixed shadows baked into the texture maps make the model look unrealistic when rotated, and the geometry is not detailed enough to stand up to close inspection.
Some scanning systems capture geometry and a separate digital image at the same time, then attach the image to the geometry with planar or cylindrical mapping. Once again, the textures do not match the geometry exactly: they stretch across the surface and are affected by ambient light.
All of these techniques are accepted methods of building photo-realistic content. The problem is that the 3D industry has accepted that building geometry and texture should be separate processes, and that building a photo-realistic 3D model should take many man-hours.
The ultimate process for creating photo-realistic 3D content rapidly is to capture the geometry and texture from the same coordinate, eliminating the need to create textures from 2D images. One patented technology does this: a laser scanner that uses three lasers (red, green and blue). The lasers converge into a single white point of light; at the same moment the exact geometric coordinate of a point is calculated, an RGB reading for that point is gathered from the return values of the red, green and blue lasers. This gives an exact color value for each and every point. Because the lasers are the only source of illumination, ambient light and shadows are not recorded in the color map; every shadow and color on the map exactly represents the real-world object being visualized in 3D. This technology is called Foundation and is available through Arius3D in Toronto. The company will have automated systems available for shipping in early 2001.
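The key idea is that each laser sample yields a single fused record: an XYZ position plus an RGB value, with no separate texture map to align later. The sketch below is a hypothetical illustration of that data model; the `fuse_sample` function and the linear scaling of return intensities to 8-bit color are assumptions for illustration, not the Arius3D implementation:

```python
from dataclasses import dataclass

@dataclass
class ColorPoint:
    """One scan sample: position and color from the same coordinate."""
    x: float
    y: float
    z: float
    r: int
    g: int
    b: int

def fuse_sample(coord, red_return, green_return, blue_return, max_return=1.0):
    # Hypothetical scaling: map each laser return intensity
    # linearly onto an 8-bit color channel.
    scale = lambda v: int(round(255 * min(v, max_return) / max_return))
    x, y, z = coord
    return ColorPoint(x, y, z,
                      scale(red_return), scale(green_return), scale(blue_return))
```

Because color arrives already registered to geometry, the mapping step described earlier (planar, cylindrical, spherical, or UV) disappears entirely.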
The next step in creating photo-realistic 3D content is processing the color point-cloud data captured from Foundation scans. Point-cloud processing is becoming more efficient and will eventually be automated. A few years ago it took weeks to process a point cloud in traditional 3D software. Today, companies like Raindrop Geomagic, Paraform, Alias|Wavefront (Spider), Inus (Rapidform), and others are refining point-cloud processing, which lets a developer build geometry and color simultaneously, eliminating the time-consuming texture-mapping process the industry is accustomed to. In the future, these technologies will automatically build optimized NURBS patches, organized organic polygonal geometry, and subdividable surface geometry at the push of a button, using intelligent triangulation calculations and templates from existing 3D models. The advantage of these tools will be the ability to create photo-realistic geometry automatically.
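A typical first stage of such point-cloud processing is reduction: collapsing millions of raw colored samples onto a regular grid before any surface fitting. The following is a minimal sketch of voxel-grid downsampling under assumed inputs (parallel position and color arrays); the named tools above do far more, including the NURBS fitting and triangulation that follow this step:

```python
import numpy as np

def voxel_downsample(points, colors, cell=0.05):
    """Collapse a colored point cloud onto a voxel grid of size
    `cell`, averaging position and color within each occupied cell.
    A minimal sketch of point-cloud reduction."""
    pts = np.asarray(points, dtype=float)
    cols = np.asarray(colors, dtype=float)
    # Assign every point to an integer voxel key.
    keys = np.floor(pts / cell).astype(int)
    # Group points sharing a voxel and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n = inverse.max() + 1
    sum_p = np.zeros((n, 3))
    sum_c = np.zeros((n, 3))
    count = np.zeros(n)
    np.add.at(sum_p, inverse, pts)
    np.add.at(sum_c, inverse, cols)
    np.add.at(count, inverse, 1)
    return sum_p / count[:, None], sum_c / count[:, None]
```

Averaging color and position in the same pass reflects the point made above: once color travels with geometry, every processing step handles both at once.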
CG has a great deal of potential, and the industry is just beginning to see the opportunities on the horizon. It is up to the many companies innovating these new technologies to drive industry growth. Most people in the business of building 3D content anticipate the day of automated photo-realistic 3D modeling, which must happen for 3D web technologies to succeed.
Source: Digital Media Online. All Rights Reserved.