
Friday, 7 March 2014

Computer Graphics Notes - Lab 25 - SUTHERLAND AND COHEN SUBDIVISION LINE CLIPPING ALGORITHM

SUTHERLAND AND COHEN SUBDIVISION LINE CLIPPING ALGORITHM
This is one of the oldest and most popular line-clipping algorithms, developed by Dan Cohen and Ivan Sutherland. To speed up the processing, this algorithm performs initial tests that reduce the number of intersections that must be calculated. The algorithm uses a four-digit (bit) code to indicate which of nine regions contains the endpoint of a line. The four-bit codes are called region codes or outcodes. These codes identify the location of the point relative to the boundaries of the clipping rectangle, as shown in the figure below.

Each bit position in the region code is used to indicate one of the four relative coordinate positions of the point with respect to the clipping window: to the left, right, top or bottom. The rightmost bit is the first bit and the bits are set to 1 based on the following scheme:
Set Bit 1 - if the end point is to the left of the window
Set Bit 2 - if the end point is to the right of the window
Set Bit 3 - if the end point is below the window
Set Bit 4 - if the end point is above the window
Otherwise, the bit is set to zero.

Once we have established region codes for all end points, we can determine which lines are completely inside the clipping window and which are completely outside. Any line that is completely inside the window boundaries has a region code of 0000 for both endpoints, and we trivially accept these lines. Any line that has a 1 in the same bit position in the region codes of both endpoints is completely outside the clipping rectangle, and we trivially reject these lines. The test for total rejection is equivalent to the logical AND operator: if the result of the logical AND operation on the two endpoint codes is not 0000, the line is completely outside the clipping region. Lines that cannot be identified as completely inside or completely outside the clipping window by these tests are checked for intersection with the window boundaries.
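The region-code tests can be illustrated with a short sketch in Python. The window-boundary names xw_min, xw_max, yw_min, yw_max are illustrative assumptions, not from the original notes:

LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8   # bits 1-4 of the four-bit region code

def outcode(x, y, xw_min, xw_max, yw_min, yw_max):
    # Build the four-bit region code for point (x, y).
    code = 0                        # 0000 means inside the window
    if x < xw_min: code |= LEFT     # bit 1: left of the window
    if x > xw_max: code |= RIGHT    # bit 2: right of the window
    if y < yw_min: code |= BOTTOM   # bit 3: below the window
    if y > yw_max: code |= TOP      # bit 4: above the window
    return code

A segment with both endpoint codes equal to 0000 is trivially accepted; a segment whose two endpoint codes have a nonzero bitwise AND is trivially rejected.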


SUTHERLAND AND COHEN SUBDIVISION LINE CLIPPING ALGORITHM



1.     Read two end points of the line, say P1 (x1, y1) and P2 (x2, y2).
2.     Read two corners (left-top and right-bottom) of the window, say (Wx1, Wy1) and (Wx2, Wy2).
3.     Assign the region codes for the two endpoints P1 and P2 using the following steps:
Initialize the code with bits 0000
Set Bit 1 - if (x < Wx1)
Set Bit 2 - if (x > Wx2)
Set Bit 3 - if (y < Wy2)
Set Bit 4 - if (y > Wy1)


4.     Check the visibility of the line P1P2 using the following steps:
a)     If region codes for both endpoints P1 and P2 are zero then the line is completely visible. Hence draw the line and go to step 9.
b)     If region codes for endpoints are not zero and the logical ANDing of them is also nonzero then the line is completely invisible, so reject the line and go to step 9.
c)     If the region codes for the two endpoints do not satisfy the conditions in 4a) and 4b), the line is partially visible.

5.     Determine the intersecting edge of the clipping window by inspecting the region codes of the two endpoints P1 and P2.
a)     If the region codes for both endpoints are nonzero, find intersection points P1' and P2' with the boundary edges of the clipping window with respect to points P1 and P2, respectively.
b)     If the region code for only one endpoint is nonzero, find the intersection point P1' or P2' with the boundary edge of the clipping window with respect to that endpoint.
6.     Divide the line segment at the intersection points.
7.     Reject the line segment if any one of its end points lies outside the clipping window.
8.     Draw the remaining line segments.
9.     Stop.
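A complete sketch of the algorithm, reusing the outcode helper above, might look as follows. This is the standard iterative formulation: each pass clips the segment against one window edge, subdividing the line as steps 5-8 describe (again, the window-boundary names are illustrative):

def cohen_sutherland_clip(x1, y1, x2, y2, xw_min, xw_max, yw_min, yw_max):
    # Returns the visible portion (x1, y1, x2, y2) of the segment, or None.
    code1 = outcode(x1, y1, xw_min, xw_max, yw_min, yw_max)
    code2 = outcode(x2, y2, xw_min, xw_max, yw_min, yw_max)
    while True:
        if code1 == 0 and code2 == 0:          # step 4a: completely visible
            return (x1, y1, x2, y2)
        if code1 & code2 != 0:                 # step 4b: completely invisible
            return None
        code = code1 if code1 != 0 else code2  # an endpoint outside the window
        # Step 5: intersect with the boundary edge flagged in its region code.
        if code & TOP:
            x = x1 + (x2 - x1) * (yw_max - y1) / (y2 - y1); y = yw_max
        elif code & BOTTOM:
            x = x1 + (x2 - x1) * (yw_min - y1) / (y2 - y1); y = yw_min
        elif code & RIGHT:
            y = y1 + (y2 - y1) * (xw_max - x1) / (x2 - x1); x = xw_max
        else:                                  # LEFT
            y = y1 + (y2 - y1) * (xw_min - x1) / (x2 - x1); x = xw_min
        # Steps 6-7: replace the outside endpoint with the intersection point.
        if code == code1:
            x1, y1 = x, y
            code1 = outcode(x1, y1, xw_min, xw_max, yw_min, yw_max)
        else:
            x2, y2 = x, y
            code2 = outcode(x2, y2, xw_min, xw_max, yw_min, yw_max)

Each replacement shortens the segment, so the loop always terminates with either a trivial accept of the remaining visible portion or a trivial reject.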


Computer Graphics Notes - Lab 25 - 2D CLIPPING

INTRODUCTION

The procedure that identifies the portions of a picture that are either inside or outside of a specified region of space is referred to as clipping. The region against which an object is to be clipped is called a clip window or clipping window. It is usually rectangular in shape.

The clipping algorithm determines which points, lines or portions of lines lie within the clipping window. These points, lines or portions of lines are retained for display; all others are discarded. We will consider algorithms for clipping the following primitive types:
•    Point Clipping
•    Line Clipping
•    Polygon Clipping

POINT CLIPPING

Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for display if the following inequalities are satisfied:
xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax
where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world-coordinate window boundaries or viewport boundaries. If any one of these four inequalities is not satisfied, the point is clipped (not saved for display).
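As a minimal sketch (assuming the same boundary names as above), point clipping is just a pair of range checks:

def clip_point(x, y, xw_min, xw_max, yw_min, yw_max):
    # Save the point for display only if all four inequalities hold.
    return xw_min <= x <= xw_max and yw_min <= y <= yw_max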

Although point clipping is applied less often than line or polygon clipping, some applications may require a point-clipping procedure.


LINE CLIPPING

A line is said to be interior to the clipping window, and hence visible, if both of its end points are interior to the window, e.g., line P3P4 in the figure. However, if both end points of a line are exterior to the window, the line is not necessarily completely exterior to the window, e.g., line P7P8 in the figure. If both end points of a line are completely to the right of, completely to the left of, completely above, or completely below the window, then the line is completely exterior to the window and hence invisible, e.g., lines P1P2 and P9P10 in the figure.
Line Clipping
Find the part of a line inside the clip window

Lines that cross one or more clipping boundaries require calculation of multiple intersection points to determine their visible portions. To minimize the intersection calculations and to increase the efficiency of the clipping algorithm, completely visible and completely invisible lines are identified first, and then intersection points are calculated for the remaining lines.




Computer Graphics Notes - Lab 24 - Z-BUFFER ALGORITHM

One of the simplest and most commonly used image-space approaches to eliminating hidden surfaces is the Z-buffer or depth-buffer algorithm, developed by Catmull. This algorithm compares surface depths at each pixel position on the projection plane. The surface depth is measured from the view plane along the z axis of the viewing system. When an object description is converted to projection coordinates (x, y, z), each pixel position on the view plane is specified by its x and y coordinates, and the z value gives the depth information. Thus object depths can be compared by comparing the z values.


The Z-buffer algorithm is usually implemented in normalized coordinates, so that z values range from 0 at the back clipping plane to 1 at the front clipping plane. The implementation requires a second buffer memory, called the Z-buffer, along with the frame buffer memory required for raster display devices. The Z-buffer is used to store depth values for each (x, y) position as surfaces are processed, and the frame buffer stores the intensity values for each position. At the beginning, the Z-buffer is initialized to zero, representing the z value at the back clipping plane, and the frame buffer is initialized to the background color. Each surface listed in the display file is then processed, one scan line at a time, calculating the depth (z value) at each pixel position. The calculated depth value is compared to the value previously stored in the Z-buffer at that position. If the calculated depth value is greater than the value stored in the Z-buffer, the new depth value is stored, and the surface intensity at that position is determined and placed in the same (x, y) location in the frame buffer.


For example, in the above figure, among the three surfaces, surface S1 has the smallest depth at the view position (x, y) and hence the highest z value, so it is visible at that position.
Z-BUFFER ALGORITHM
1.     Initialize the Z-buffer and frame buffer so that for all buffer positions Z-buffer(x, y) = 0 and frame-buffer(x, y) = Ibackground.
2.     During the scan-conversion process, for each position on each polygon surface, compare depth values to previously stored values in the depth buffer to determine visibility:
Calculate the z value for each (x, y) position on the polygon.
If z > Z-buffer(x, y), then set Z-buffer(x, y) = z and frame-buffer(x, y) = Isurface(x, y).
3.     Stop

Note that Ibackground is the value of the background intensity, and Isurface is the projected intensity value of the surface at pixel position (x, y). After all surfaces have been processed, the Z-buffer contains depth values for the visible surfaces and the frame buffer contains the corresponding intensity values for those surfaces.

To calculate z-values, the plane equation
Ax + By + Cz + D = 0
is used, where (x, y, z) is any point on the plane, and the coefficients A, B, C and D are constants describing the spatial properties of the plane.

Therefore, we can write Z = (- Ax - By - D) / C
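The following sketch puts the pieces together for a single pixel. The buffer dimensions, plane representation and intensity values are illustrative assumptions, and C is assumed nonzero (i.e. the polygon is not parallel to the z axis):

WIDTH, HEIGHT = 640, 480
I_BACKGROUND = 0   # illustrative background intensity

# Step 1: depths start at 0 (back plane), intensities at the background value.
z_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[I_BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def surface_depth(x, y, A, B, C, D):
    # z value from the plane equation Ax + By + Cz + D = 0 (C must be nonzero).
    return (-A * x - B * y - D) / C

def process_pixel(x, y, plane, intensity):
    # Step 2: depth-test one (x, y) position on a polygon surface.
    A, B, C, D = plane
    z = surface_depth(x, y, A, B, C, D)
    if z > z_buffer[y][x]:            # larger z means nearer the front plane
        z_buffer[y][x] = z
        frame_buffer[y][x] = intensity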

ADVANTAGES
1.     It is easy to implement.
2.     It can be implemented in hardware to overcome the speed problem.
3.     Since the algorithm processes objects one at a time, the total number of polygons in a picture can be arbitrarily large.

DISADVANTAGES
1.     It requires an additional buffer and hence extra memory.
2.     It is a time-consuming process, as it requires a comparison for each pixel instead of for the entire polygon.

Computer Graphics Notes - Lab 23 - HIDDEN SURFACE ELIMINATION METHOD

INTRODUCTION

For the generation of realistic graphic displays, we have to identify those parts of a scene that are visible from a chosen viewing position. Many algorithms, called visible-surface algorithms, have been developed to solve this problem. In the early days of computer graphics, visible-surface algorithms were called hidden-line or hidden-surface algorithms.

Given a set of 3D objects and a viewing specification, we wish to determine which lines or surfaces of the objects are visible, so that we can display only the visible lines or surfaces. This process is known as hidden-surface or hidden-line elimination, or visible-surface determination. The hidden-line or hidden-surface algorithm determines the lines, edges, surfaces or volumes that are visible or invisible to an observer located at a specific point in space. These algorithms are broadly classified according to whether they deal with object definitions directly or with their projected images. These two approaches are called object-space methods (or object-precision methods) and image-space methods, respectively.

Object-space Method: Object-space method is implemented in the physical coordinate system in which objects are described. It compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible. Object-space methods are generally used in line-display algorithms.

Image-Space Method: Image space method is implemented in the screen coordinate system in which the objects are viewed. In an image-space algorithm, visibility is decided point by point at each pixel position on the view plane. Most hidden line/surface algorithms use image-space methods.

Computer Graphics Notes - Lab 22 - Bezier Curves

The Bezier curve is one approach to constructing a curve. A Bezier curve is determined by a defining polygon. Bezier curves have a number of properties that make them highly useful and convenient for curve and surface design, and they are also easy to implement. Bezier curves are therefore widely used in various CAD systems and in general graphics packages.


APPROACH FOR CONSTRUCTING BEZIER CURVE

In this approach the Bezier curve is constructed simply by taking midpoints. The midpoints of the lines connecting the four control points (A, B, C, D) are determined, giving AB, BC and CD. Line segments connect these midpoints, and their midpoints ABC and BCD are determined. Finally, these two midpoints are connected and their midpoint ABCD is determined.

The point ABCD lies on the Bezier curve and divides it into two sections. This makes the points A, AB, ABC and ABCD the control points of the first section, and the points ABCD, BCD, CD and D the control points of the second section. By considering the two sections separately, we can get two more sections for each of them, i.e. the original Bezier curve gets divided into four different curves. This process can be repeated to split the curve into sections so short that they can be replaced by straight lines, or even until the sections are no bigger than individual pixels.

ALGORITHM
1.     Get four control points, say A (xA, yA), B (xB, yB), C (xC, yC) and D (xD, yD).
2.     Divide the curve represented by points A, B, C and D into two sections:
xAB = (xA + xB) / 2
yAB = (yA + yB) / 2
xBC = (xB + xC) / 2
yBC = (yB + yC) / 2
xCD = (xC + xD) / 2
yCD = (yC + yD) / 2
xABC = (xAB + xBC) / 2
yABC = (yAB + yBC) / 2
xBCD = (xBC + xCD) / 2
yBCD = (yBC + yCD) / 2
xABCD = (xABC + xBCD) / 2
yABCD = (yABC + yBCD) / 2

3.     Repeat step 2 for the sections A, AB, ABC, ABCD and ABCD, BCD, CD, D.
4.     Repeat step 3 until we have sections so short that straight lines can replace them.
5.     Replace small sections by straight lines.
6.     Stop.
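A recursive sketch of this subdivision in Python (the flatness threshold used as the "short enough" test in step 4 is an illustrative choice):

def midpoint(p, q):
    # Midpoint of two 2D points, as in the equations of step 2.
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def subdivide(a, b, c, d, points, flatness=0.5):
    # Step 2: midpoints of the control polygon.
    ab, bc, cd = midpoint(a, b), midpoint(b, c), midpoint(c, d)
    abc, bcd = midpoint(ab, bc), midpoint(bc, cd)
    abcd = midpoint(abc, bcd)      # lies on the curve; splits it in two
    # Steps 4-5: if the section is short enough, replace it by a straight line.
    if abs(a[0] - d[0]) + abs(a[1] - d[1]) < flatness:
        points.append(d)
    else:
        subdivide(a, ab, abc, abcd, points, flatness)   # first section
        subdivide(abcd, bcd, cd, d, points, flatness)   # second section

# Usage: points = [a]; subdivide(a, b, c, d, points); then draw straight
# line segments between consecutive entries of points.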

MATHEMATICAL APPROACH FOR BEZIER CURVE

The equation for the Bezier Curve is given as
P(u) = (1 - u)³ P1 + 3u(1 - u)² P2 + 3u²(1 - u) P3 + u³ P4,  for 0 ≤ u ≤ 1
PROPERTIES OF BEZIER CURVE
1.     The basis functions are real.
2.     A Bezier curve always passes through the first and last control points, i.e. the curve has the same end points as the guiding polygon.
3.     The degree of the polynomial defining the curve segment is one less than the number of defining polygon points. Therefore, for 4 control points, the degree of the polynomial is three, i.e. a cubic polynomial.
4.     The curve generally follows the shape of the defining polygon.
5.     The direction of the tangent vector at each end point is the same as that of the vector determined by the first or last segment of the defining polygon.
6.     The curve lies entirely within the convex hull formed by four control points.
7.     The convex hull property for a Bezier curve ensures that the polynomial smoothly follows the control points.
8.     The curve exhibits the variation diminishing property. This means that the curve does not oscillate about any straight line more often than the defining polygon.
9.     The curve is invariant under an affine transformation.
EXAMPLE
Construct the Bezier curve of order 3 and with polygon vertices A (1, 1), B (2, 3), C (4, 3) and D (6, 4).
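A quick sketch to evaluate such a construction numerically, applying the cubic formula above to these vertices at a few parameter values:

def bezier_point(u, p1, p2, p3, p4):
    # P(u) = (1-u)^3 P1 + 3u(1-u)^2 P2 + 3u^2(1-u) P3 + u^3 P4
    w = ((1 - u) ** 3, 3 * u * (1 - u) ** 2, 3 * u ** 2 * (1 - u), u ** 3)
    pts = (p1, p2, p3, p4)
    return (sum(wi * p[0] for wi, p in zip(w, pts)),
            sum(wi * p[1] for wi, p in zip(w, pts)))

A, B, C, D = (1, 1), (2, 3), (4, 3), (6, 4)
for u in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(u, bezier_point(u, A, B, C, D))

For instance, P(0.5) = (3.125, 2.875), while P(0) = A and P(1) = D, as the endpoint-interpolation property (property 2 above) requires.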

Computer Graphics Notes - Lab 21 - 2D VIEWING AND CLIPPING CONT.

WORKSTATION TRANSFORMATION
The window defined in world coordinates is first transformed into normalized device coordinates. The normalized window is then transformed into viewport coordinates. This window-to-viewport coordinate transformation is known as the workstation transformation. It is achieved by performing the following steps (see the sketch after this list):
1.     The object together with its window is translated until the lower left corner of the window is at the origin.
2.     Object and window are scaled until the window has the dimensions of the viewport.

3.     Translate the viewport to its correct position on the screen.
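These three steps compose into a single scale-and-translate mapping. A minimal sketch, with the window and viewport given as (xmin, ymin, xmax, ymax) tuples (names are illustrative):

def workstation_transform(x, y, window, viewport):
    wx_min, wy_min, wx_max, wy_max = window
    vx_min, vy_min, vx_max, vy_max = viewport
    sx = (vx_max - vx_min) / (wx_max - wx_min)   # step 2: scale factors
    sy = (vy_max - vy_min) / (wy_max - wy_min)
    # Steps 1 and 3: shift the window corner to the origin, scale,
    # then shift to the viewport's position on the screen.
    return (vx_min + (x - wx_min) * sx,
            vy_min + (y - wy_min) * sy)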

Computer Graphics Notes - Lab 20 - 2D VIEWING AND CLIPPING

INTRODUCTION
We now consider the formal mechanism for displaying views of a picture on an output device. Typically, a graphics package allows a user to specify which part of a defined picture is to be displayed and where that part is to be placed on the display device. Any convenient Cartesian coordinate system, referred to as the world-coordinate reference frame, can be used to define the picture. For a two-dimensional picture, a view is selected by specifying a subarea of the total picture area. The picture parts within the selected areas are then mapped onto specified areas of the device coordinates. The process of selecting and viewing the picture with different views is called windowing, and the process which divides each element of the picture into its visible and invisible portions, allowing the invisible portions to be discarded, is called clipping.

THE VIEWING PIPELINE

A world-coordinate area selected for display is called a window. An area on a display device to which a window is mapped is called a viewport. The window defines what is to be viewed; the viewport defines where it is to be displayed. In general, the mapping of a part of a world-coordinate scene to device coordinates is referred to as a viewing transformation. Sometimes the two-dimensional viewing transformation is simply referred to as the window-to-viewport transformation or the windowing transformation.




In computer graphics terminology, the term window originally referred to an area of a picture that is selected for viewing, as defined at the beginning. We will only use the term window to refer to an area of a world-coordinate scene that has been selected for display.

Some graphics packages that provide window and viewport operations allow only standard rectangles, but a more general approach is to allow the rectangular window to have any orientation. In this case, we carry out the viewing transformation in several steps.




MC → [Construct world-coordinate scene using modeling-coordinate transformations] → WC → [Convert world coordinates to viewing coordinates] → VC → [Map viewing coordinates to normalized viewing coordinates using window-viewport specifications] → NVC → [Map normalized viewport to device coordinates] → DC

The two-dimensional viewing-transformation pipeline


Objects are placed into the scene by modeling transformations to a master coordinate system, commonly referred to as the world coordinate system (WC). A rectangular window with its edges parallel to the axes of the WC is used to select the portion of the scene for which an image is to be generated; the coordinate system attached to it is referred to as the viewing coordinate system (VC). The viewing-coordinate reference frame provides a method for setting up arbitrary orientations for rectangular windows. Once the viewing reference frame is established, we can transform descriptions in world coordinates to viewing coordinates. We can then define a viewport in normalized coordinates (in the range from 0 to 1) and map the viewing-coordinate description of the scene to normalized coordinates. At the final step, all parts of the picture that lie outside the viewport are clipped, and the contents of the viewport are transferred to device coordinates.

By changing the position of the viewport, we can view objects at different positions on the display area of an output device. Also, by varying the size of viewports, we can change the size and proportions of displayed objects.