OpenGL ES is, at its core, a 3D graphics programming API. As such it has a pretty nice and (hopefully) easy-to-understand programming model that we can illustrate with a simple analogy.
Think of OpenGL ES as working like a camera. To take a picture you have to first go to the scene you want to photograph. Your scene is composed of objects—say, a table with more objects on it. They all have a position and orientation relative to your camera, as well as different materials and textures. Glass is translucent and a little reflective, a table is probably made out of wood, a magazine has the latest photo of some politician on it, and so on. Some of the objects might even move around (e.g., a fruit fly you can't get rid of). Your camera also has some properties, such as focal length, field of view, image resolution and size the photo will be taken at, and its own position and orientation within the world (relative to some origin). Even if both objects and the camera are moving, when you press the button to take the photo you catch a still image of the scene (for now we'll neglect the shutter speed, which might cause a blurry image). For that infinitely small moment everything stands still and is well defined, and the picture reflects exactly all those configurations of positions, orientations, textures, materials, and lighting. Figure 7-1 shows an abstract scene with a camera, a light, and three objects with different materials.
Each object has a position and orientation relative to the scene's origin. The camera, indicated by the eye, also has a position in relation to the scene's origin. The pyramid in Figure 7-1 is the so-called view volume or view frustum, which shows how much of the scene the camera captures and how the camera is oriented. The little white ball with the rays is our light source in the scene, which also has a position relative to the origin.
We can directly map this scene to OpenGL ES, but to do so we need to define a couple of things:
■ Objects (aka models): These are generally composed of four attributes: their geometry, as well as their color, texture, and material. The geometry is specified as a set of triangles. Each triangle is composed of three points in 3D space, so we have x-, y-, and z-coordinates defined relative to the coordinate system origin, as in Figure 7-1. Note that the z-axis points toward us. The color is usually specified as an RGB triple, as we are already used to. Textures and materials are a little bit more involved. We'll get to those later on.
■ Lights: OpenGL ES offers us a couple of different light types with various attributes. They are just mathematical objects with a position and/or direction in 3D space, plus attributes such as color.
■ Camera: This is also a mathematical object that has a position and orientation in 3D space. Additionally, it has parameters that govern how much of the image we see, similar to a real camera. All these things together define a view volume, or view frustum (indicated as the pyramid with the top cut off in Figure 7-1). Anything inside this pyramid can be seen by the camera; anything outside will not make it into the final picture.
■ Viewport: This defines the size and resolution of the final image. Think of it as the type of film you put into your analog camera or the image resolution you get for pictures taken with your digital camera.
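To make the "objects" part a bit more concrete, here is a small sketch of how a single triangle's geometry might be packed for OpenGL ES. The class and method names are our own inventions for illustration; only the data layout (x, y, z per vertex, an RGBA color) follows OpenGL ES conventions. OpenGL ES reads vertex data from native-order direct buffers, which is why we copy the plain Java array into a `FloatBuffer`:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Hypothetical sketch: one triangle's geometry and color, laid out the
// way OpenGL ES expects vertex data. Names here are not part of any API.
public class TriangleModel {
    // Three vertices in 3D space, defined relative to the scene origin;
    // remember that the z-axis points toward us.
    static final float[] VERTICES = {
        -0.5f, -0.5f, 0f,   // bottom-left
         0.5f, -0.5f, 0f,   // bottom-right
         0.0f,  0.5f, 0f    // top
    };

    // A single RGBA color for the whole triangle (red, fully opaque).
    static final float[] COLOR = { 1f, 0f, 0f, 1f };

    // Copies a float array into a native-order direct buffer, the form
    // in which OpenGL ES consumes vertex attributes.
    static FloatBuffer toBuffer(float[] data) {
        ByteBuffer bb = ByteBuffer.allocateDirect(data.length * 4); // 4 bytes per float
        bb.order(ByteOrder.nativeOrder());
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(data);
        fb.flip();
        return fb;
    }

    public static void main(String[] args) {
        FloatBuffer vertices = toBuffer(VERTICES);
        // 3 vertices * 3 coordinates = 9 floats
        System.out.println(vertices.remaining());
    }
}
```

In a real program this buffer would be handed to OpenGL ES as the triangle's vertex data; textures and materials add further attributes per vertex, which is why we defer them until later.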
Given all this, OpenGL ES can construct a 2D bitmap of our scene from the point of view of the camera. Notice that we define everything in 3D space. So how can OpenGL ES map that to two dimensions?
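The core trick is perspective projection. The following is a deliberately minimal sketch of the idea, not OpenGL ES's full matrix pipeline: a point's x- and y-coordinates are divided by its distance from the camera, so objects that are farther away land closer to the center of the image and thus appear smaller. The camera sits at the origin looking down the negative z-axis, and `d` plays the role of the focal length:

```java
// Minimal perspective-projection sketch (our own simplification, not
// the actual OpenGL ES pipeline): map a 3D point to 2D by dividing
// by its distance from the camera.
public class PerspectiveSketch {
    // 'd' is the distance from the camera to the projection plane.
    // Points in front of the camera have negative z, hence the -z below.
    static float[] project(float x, float y, float z, float d) {
        float scale = d / -z;               // the perspective divide
        return new float[] { x * scale, y * scale };
    }

    public static void main(String[] args) {
        // The same point twice as far away projects half as far from the center.
        float[] near = project(1f, 1f, -2f, 1f);
        float[] far  = project(1f, 1f, -4f, 1f);
        System.out.println(near[0] + " " + far[0]); // 0.5 0.25
    }
}
```

OpenGL ES performs this divide (plus clipping against the view frustum and mapping to the viewport) for every vertex, which is how the 3D scene ends up as a 2D bitmap.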