
Computer Graphics and Clipping

This document provides an overview of a CAD/CAM/CAE course. The course covers topics like computer graphics techniques for geometric modeling, transformation and data storage, NC and CNC technology, and computer aided engineering. It lists reference books and intended course outcomes. The modules will cover computer graphics, geometric modeling techniques like parametric curves, CSG, B-Rep and feature-based modeling. It also discusses concepts like window to viewport transformation, clipping, and line clipping algorithms like Cohen-Sutherland.


CAD/CAM/CAE

Overview
By Keval K. Patil
SYLLABUS
1. Computer Graphics and Techniques for Geometric Modeling
2. Transformation, Manipulation & Data Storage
3. NC & CNC Technology
4. Computer Aided Engineering (CAE)
5. Computer Integrated Manufacturing & Technology Driven
Practices
6. Rapid Prototyping and Tooling
REFERENCE BOOKS
 "Principles of Interactive Computer Graphics" by William M. Newman and Robert F. Sproull, McGraw-Hill Book Co., Singapore
 "CAD/CAM: Computer Aided Design and Manufacturing" by Mikell P. Groover and Emory W. Zimmers, Jr., Eastern Economy Edition
 "CAD/CAM: Theory & Practice" by Ibrahim Zeid and R. Sivasubramanian, Tata McGraw Hill Publications
 "Computer Graphics" by Donald Hearn and M. Pauline Baker, Eastern Economy Edition
 "CAD/CAM: Principles, Practice and Manufacturing Management" by Chris McMahon and Jimmie Browne, Pearson Education
 "CAD/CAM/CIM" by P. Radhakrishnan, S. Subramanyan and V. Raju, New Age International Publishers
 "CAD/CAM: Principles and Applications" by P. N. Rao, Tata McGraw Hill Publications
COURSE OUTCOMES
 At the end of the course, you should be able to…
 Identify proper computer graphics techniques for geometric modelling.
 Transform and manipulate objects, and store and manage data.
 Prepare part programs applicable to CNC machines.
 Apply rapid prototyping and tooling concepts to real-life applications.
 Identify the tools for analysis of a complex engineering component.
MODULE 1
COMPUTER GRAPHICS AND TECHNIQUES
FOR GEOMETRIC MODELING
OVERVIEW
 Two-dimensional computer graphics
 Three-dimensional computer graphics
 Applications of computer graphics
 CAD/CAM software
 Clipping
 Hidden line and hidden surface removal algorithms
 The parametric representation of geometry
 Types of curves
 Geometric modelling and its methods
 Constructive solid geometry (CSG)
 Boundary representation (B-Rep)
 Parametric modeling
 Feature-based modeling
 Feature recognition
 Design by feature
WHAT IS COMPUTER
GRAPHICS?
 Creation, Manipulation, and Storage of geometric objects
(modeling) and their images (rendering)
 Display those images on screens or hardcopy devices
 Image processing
 Others: GUI, Haptics, Displays (VR)...
TYPES
 Two-dimensional computer graphics
 Generation of 2-D models
 2-D software: AutoCAD
 Points, lines, circles, curves, etc.

 Three-dimensional computer graphics
 Generation of 3-D models
 3-D software: CATIA, PTC Creo, SOLIDWORKS, Autodesk Inventor, etc.
 Cones, spheres, cubes, etc.
WHAT DRIVES COMPUTER GRAPHICS?
 Movie Industry
 Leaders in quality and artistry
 Not slaves to conceptual purity
 Big budgets and tight schedules
 Reminder that there is more to
CG than technology
 Defines our expectations
WHAT DRIVES COMPUTER GRAPHICS?
 Game Industry
 The newest driving force in
CG
 Why? Volume and Profit
 This is why we have commodity
GPUs
 Focus on interactivity
 Cost effective solutions
 Avoiding computation and
other tricks
WHAT DRIVES COMPUTER GRAPHICS?
 Medical Imaging and Scientific Visualization
 Tools for teaching and diagnosis
 New data representations and modalities
 Drive issues of precision and correctness
 Focus on presentation and interpretation of data
 Construction of models from acquired data
[Image credits: Nanomanipulator (UNC); Joe Kniss (Utah); Gordon Kindlmann (Utah)]
WHAT DRIVES COMPUTER GRAPHICS?
 Computer Aided Design
 Mechanical, Electronic, Architecture,...
 Drives the high end of the hardware market
 Integration of computing and display resources
 Reduced design cycles == faster systems, sooner
WINDOW TO VIEWPORT
TRANSFORMATION
What is a window?
• A world-coordinate area selected for display is called a window.
• You can define the window to be larger than, the same size as, or smaller than the actual range of data values, depending on whether you want to show all of the data or only part of it.
• The window defines what is to be viewed.
What is a viewport?
• An area on a display device to which a window is mapped is called a viewport.
• It is the rectangular portion of the interface window that defines where the image will actually appear.
• The viewport defines where the window is to be displayed.
• If we change the position of the window while keeping the viewport location constant, a different part of the object is displayed at the same position on the display device.
• If we change the location of the viewport, the same part of the object is drawn at a different place on the screen.
WINDOW-TO-VIEWPORT TRANSFORMATION
 Window-to-viewport transformation is the process of mapping or transforming a two-dimensional, world-coordinate object to device coordinates.
• Objects inside the world or clipping window are mapped to the viewport.
• The clipping window is used to select the part of the scene that is to be displayed.
• The viewport then positions the scene on the output device.
STEPS FOR WINDOW TO VIEWPORT TRANSFORMATION
 Step 1: Translate the window towards the origin.
 To shift the window towards the origin, the translation factors become negative (-tx, -ty).
 Step 2: Resize the window to the size of the viewport.
 Step 3: Translate the window so that its position matches the position of the viewport (see the code sketch below).
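 A minimal Python sketch of the resulting mapping, assuming the window and viewport are given as (xmin, ymin, xmax, ymax) tuples (the names below are illustrative, not from any particular CAD package):

def window_to_viewport(xw, yw, window, viewport):
    """Map a world point (xw, yw) from the window into the viewport."""
    xw_min, yw_min, xw_max, yw_max = window      # world-coordinate window
    xv_min, yv_min, xv_max, yv_max = viewport    # device-coordinate viewport

    # Scale factors: ratio of viewport size to window size
    sx = (xv_max - xv_min) / (xw_max - xw_min)
    sy = (yv_max - yv_min) / (yw_max - yw_min)

    # Translate to the window origin, scale, then translate to the viewport origin
    xv = xv_min + (xw - xw_min) * sx
    yv = yv_min + (yw - yw_min) * sy
    return xv, yv

# Example: the centre of a 100 x 100 window maps to the centre of a 400 x 300 viewport
print(window_to_viewport(50, 50, (0, 0, 100, 100), (0, 0, 400, 300)))   # (200.0, 150.0)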
CLIPPING
 We have been assuming that all primitives (lines, triangles, polygons) lie entirely within the viewport.
 In general, this assumption will not hold.
 Clipping is the process of determining the portion of the geometry model that lies outside the window and making that portion invisible.
CLIPPING OR CLIPPING ALGORITHM
 Any procedure that identifies those portions of a picture that are either inside or outside of a specified region of space is referred to as a clipping algorithm, or simply clipping.
 The region against which an object is to be clipped is called a clip window.
 Clipping algorithms exist for both 2D and 3D.
 The purpose of a clipping algorithm is to determine which points, lines or portions of lines lie within the clipping window.
 These points, lines or portions of lines are retained for display; all others are discarded.
 Depending on the application, the clip window can be a general polygon, or it can even have curved boundaries.
APPLICATIONS OF CLIPPING
 Extracting part of a defined scene for viewing.
 Identifying visible surfaces in 3D views.
 Antialiasing line segments or object boundaries.
 Creating objects using solid-modeling procedures.
 Displaying a multi-window environment.
 Drawing and painting operations that allow part of a picture to be selected for clipping, moving, erasing or duplicating.
TYPES OF CLIPPING
 Point Clipping
 Line Clipping
 Polygon Clipping
POINT CLIPPING
 A point P = (x, y) is retained for display if the following inequalities are satisfied:
 Xwmin <= X <= Xwmax
 Ywmin <= Y <= Ywmax
 where Xwmin, Ywmin, Xwmax and Ywmax are the edges of the clip window (see the sketch below).
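 A minimal Python sketch of this test, with the clip window bounds passed in explicitly (names are illustrative):

def clip_point(x, y, xw_min, yw_min, xw_max, yw_max):
    """Return True if the point (x, y) lies inside the clip window."""
    return xw_min <= x <= xw_max and yw_min <= y <= yw_max

print(clip_point(3, 4, 0, 0, 10, 10))    # True  -> retain the point for display
print(clip_point(12, 4, 0, 0, 10, 10))   # False -> discard the point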
LINE CLIPPING
 In many cases the large majority of points or lines are either interior or exterior to the clipping window.
 Therefore it is important to be able to quickly accept a line which is completely interior to the window and reject a line which is completely exterior to it.
 Lines are interior to the clipping window, and hence visible, if both end points are interior to the window.
 Points are interior to the clipping window provided that:
 XL <= X <= XR
 YB <= Y <= YT
LINE CLIPPING
 However, if both endpoints of a line are exterior to the window, the line is not necessarily completely exterior to the window.
 If both end points of a line are:
 completely to the right of the window, or
 completely to the left of the window, or
 completely above the window, or
 completely below the window,
 then the line is completely exterior to the window and hence invisible.
COHEN & SUTHERLAND ALGORITHM
 The tests for totally visible lines and the region tests for totally invisible lines can be formalized using a technique known as the Cohen-Sutherland algorithm.
 The technique uses a four-bit code, known as a "region code", to indicate which of nine regions contains each end point of a line.
COHEN & SUTHERLAND ALGORITHM
 The bits are set to 1 based on the following scheme:
1. First bit set - if the end point is above the window
2. Second bit set - if the end point is below the window
3. Third bit set - if the end point is to the right of the window
4. Fourth bit set - if the end point is to the left of the window
 If both end point codes are zero, then both ends of the line lie inside the window and the line is visible.
COHEN & SUTHERLAND ALGORITHM
 First bit: Y > YT
 Second bit: Y < YB
 Third bit: X > XR
 Fourth bit: X < XL
 If the bit-by-bit logical intersection (AND) of the two end point codes is not zero, then the line is totally invisible and may be trivially rejected.
 When the logical intersection is zero, the line may be totally or partially visible, or in fact totally invisible.
 For this reason, it is necessary to check both end point codes separately to determine total visibility.
 Lines that cannot be identified as completely inside or completely outside a clip window by these tests are checked for intersection with the window boundaries.
 Such lines may or may not cross into the window interior.
 We begin the clipping process for a line by comparing an outside endpoint to a clipping boundary to determine how much of the line can be discarded.
 Then the remaining part of the line is checked against the other boundaries, and we continue until either the line is totally discarded or a section is found inside the window.
 We set up our algorithm to check line end points against clipping boundaries in the order left, right, bottom and top.
COHEN & SUTHERLAND ALGORITHM
 The equation of the infinite line through P1(X1, Y1) and P2(X2, Y2) is
   Y = m(X - X1) + Y1   or   Y = m(X - X2) + Y2,   where m = (Y2 - Y1) / (X2 - X1)
 The intersections with the window edges are given by:
 Left (X = XL):    Y = m(XL - X1) + Y1
 Right (X = XR):   Y = m(XR - X1) + Y1
 Top (Y = YT):     X = X1 + (1/m)(YT - Y1)
 Bottom (Y = YB):  X = X1 + (1/m)(YB - Y1)
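 A Python sketch of the complete Cohen-Sutherland procedure, using the bit scheme and intersection formulas above (the particular bit values assigned to the four regions below are an assumption; only the relative layout matters):

ABOVE, BELOW, RIGHT, LEFT = 8, 4, 2, 1   # assumed bit values for the four region-code bits

def region_code(x, y, XL, YB, XR, YT):
    """Four-bit region code for a point against the clip window."""
    code = 0
    if y > YT: code |= ABOVE      # first bit:  Y > YT
    if y < YB: code |= BELOW      # second bit: Y < YB
    if x > XR: code |= RIGHT      # third bit:  X > XR
    if x < XL: code |= LEFT       # fourth bit: X < XL
    return code

def cohen_sutherland(x1, y1, x2, y2, XL, YB, XR, YT):
    """Return the visible portion of the line, or None if it is totally invisible."""
    c1 = region_code(x1, y1, XL, YB, XR, YT)
    c2 = region_code(x2, y2, XL, YB, XR, YT)
    while True:
        if c1 == 0 and c2 == 0:        # both codes zero: trivially accept
            return (x1, y1), (x2, y2)
        if c1 & c2:                    # logical AND non-zero: trivially reject
            return None
        c = c1 if c1 else c2           # pick an endpoint that is outside the window
        if c & ABOVE:
            x, y = x1 + (x2 - x1) * (YT - y1) / (y2 - y1), YT
        elif c & BELOW:
            x, y = x1 + (x2 - x1) * (YB - y1) / (y2 - y1), YB
        elif c & RIGHT:
            x, y = XR, y1 + (y2 - y1) * (XR - x1) / (x2 - x1)
        else:                          # LEFT
            x, y = XL, y1 + (y2 - y1) * (XL - x1) / (x2 - x1)
        if c == c1:
            x1, y1 = x, y
            c1 = region_code(x1, y1, XL, YB, XR, YT)
        else:
            x2, y2 = x, y
            c2 = region_code(x2, y2, XL, YB, XR, YT)

# Example: clip a horizontal line against the window XL=0, YB=0, XR=10, YT=10
print(cohen_sutherland(-5, 5, 15, 5, 0, 0, 10, 10))   # ((0, 5.0), (10, 5.0))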
POLYGON CLIPPING
LET US UNDERSTAND THIS CLIPPING METHOD -
POLYGON CLIPPING (SUTHERLAND-HODGMAN ALGORITHM)
• The Sutherland-Hodgman polygon clipping algorithm is used for polygon clipping.
• In this algorithm, all the vertices of the polygon are clipped against each edge of the clipping window.
• First the polygon is clipped against the left edge of the clipping window to get new vertices of the polygon.
• These new vertices are used to clip the polygon against the right edge, top edge and bottom edge of the clipping window, as shown in the following figure.
• While processing an edge of the polygon against a clipping window boundary, an intersection point is found if the edge is not completely inside the clipping window, and the partial edge from the intersection point to the outside is clipped. The following figures show left, right, top and bottom edge clipping.
SUTHERLAND-HODGMAN ALGORITHM
 As each pair of adjacent polygon vertices is passed to a window boundary clipper, we make the following tests (a code sketch of this edge clipper follows the list):
 If the first vertex is outside the window boundary and the second vertex is inside, the intersection point of the polygon edge with the window boundary and the second vertex are added to the output vertex list.
 If both input vertices are inside the window boundary, only the second vertex is added to the output vertex list.
 If the first vertex is inside the window boundary and the second vertex is outside, only the edge intersection with the window boundary is added to the output vertex list.
 If both input vertices are outside the window boundary, nothing is added to the output list.
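 A minimal Python sketch of the per-boundary clipper described above; the full Sutherland-Hodgman algorithm simply applies it once per window edge (the inside/intersect helpers for the left boundary below are illustrative):

def clip_against_boundary(polygon, inside, intersect):
    """polygon  : list of (x, y) vertices
       inside   : predicate - is a vertex inside this window boundary?
       intersect: returns the intersection of an edge with this boundary"""
    output = []
    for i in range(len(polygon)):
        first, second = polygon[i - 1], polygon[i]     # each pair of adjacent vertices
        if inside(second):
            if not inside(first):                      # outside -> inside
                output.append(intersect(first, second))
            output.append(second)                      # inside -> inside adds only the second vertex
        elif inside(first):                            # inside -> outside
            output.append(intersect(first, second))
        # outside -> outside adds nothing
    return output

# Example: clip a triangle against the left boundary x = 0
left_inside = lambda p: p[0] >= 0
def left_intersect(p, q):
    t = (0 - p[0]) / (q[0] - p[0])
    return (0.0, p[1] + t * (q[1] - p[1]))

print(clip_against_boundary([(-2, 0), (2, 0), (2, 3)], left_inside, left_intersect))
# [(0.0, 1.5), (0.0, 0.0), (2, 0), (2, 3)]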
HIDDEN LINE AND HIDDEN SURFACE REMOVAL
VISIBLE SURFACE DETECTION
 Also known as visible surface detection or hidden surface removal.
 In realistic scenes, closer objects occlude the objects behind them.
 Classification:
 – Object-space methods
 – Image-space methods
TECHNIQUES FOR HIDDEN LINE AND HIDDEN SURFACE REMOVAL
 Object-space method:
 The object-space method is implemented in the physical coordinate system in which the objects are described.
 It compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labelled as visible.
 Object-space methods are generally used in line-display algorithms.
 Pipeline: objects (world coordinates) -> HSR for all the objects -> 2-D clipping
 Image-space method:
 The image-space method is implemented in the screen coordinate system in which the objects are viewed.
 In an image-space algorithm, visibility is decided point by point at each pixel position on the view plane.
 Most hidden line/surface algorithms use image-space methods.
 Pipeline: objects -> 3-D clipping -> screen coordinates -> HSR
OBJECT SPACE METHODS
• Algorithms that determine which parts of the shapes are to be rendered, working in 3D coordinates.
• Methods based on comparing objects for their 3D positions and dimensions with respect to a viewing position.
• Efficient for a small number of objects, but difficult to implement.
• Examples: depth sorting, area subdivision methods (a tiny depth-sorting sketch follows the pseudo code).
• Pseudo code:
 For each object A in the scene:
  Compare the polygons in object A to the other polygons in A and to the polygons in every other object in the scene
  Determine which parts of object A are visible
  Draw these parts in the appropriate color
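 A tiny Python sketch in the spirit of the depth-sorting approach named above: polygons are ordered by depth and drawn from farthest to nearest so that nearer ones overwrite farther ones (the data and draw() below are illustrative; a real depth sort must also resolve overlap ambiguities):

# Each polygon: (name, representative depth z); larger z = farther from the viewer
polygons = [("near", 1.0), ("far", 9.0), ("middle", 4.0)]

def draw(polygon):
    print("drawing", polygon[0])   # stand-in for actual rasterization

# Sort by depth and draw back to front
for polygon in sorted(polygons, key=lambda p: p[1], reverse=True):
    draw(polygon)
# Output order: far, middle, near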
IMAGE SPACE METHODS
 Based on the pixels to be drawn in 2D.
 Try to determine which object should contribute to each pixel.
 Running time complexity is the number of pixels times the number of objects.
 Space complexity is two times the number of pixels:
 One array of pixels for the frame buffer
 One array of pixels for the depth buffer
 Coherence properties of surfaces can be used.
 Pseudo code:
 For each pixel in the frame buffer:
  Determine which polygon is closest to the viewer at that pixel location
  Color the pixel with the color of that polygon at that location
Hidden line removal:
 Floating horizon
 Object-space algorithms
 Image-space algorithms
 List-priority algorithms
 Appel's algorithm
 Roberts' algorithm
 Warnock's algorithm
Hidden surface removal:
 Z-buffer algorithm
 A-buffer algorithm
 Scan-line Z-buffer algorithm
 Object-space algorithms
 Appel's algorithm
 Roberts' algorithm
 Warnock's algorithm
Z-BUFFER METHOD
• Also known as the depth-buffer method.
• Proposed by Catmull in 1974.
• Easy to implement.
• The Z-buffer is like a frame buffer, but it contains depth values.
Z-BUFFER METHOD
 It is an image space approach
 Each surface is processed separately one pixel position at a time
across the surface
 The depth values for a pixel are compared and the closest
(smallest z) surface determines the color to be displayed in
the frame buffer.
 Applied very efficiently on polygon surfaces
 Surfaces are processed in any order
VISIBILITY
 How do we ensure that closer polygons overwrite farther ones in general?
Z-BUFFER METHOD
 Two buffers are used:
 – Frame buffer
 – Depth buffer
 The z-coordinates (depth values) are usually normalized to the range [0, 1].
Z-BUFFER ALGORITHM
 Initialize all d[i,j] = 1.0 (max depth) and c[i,j] = background color
 For (each polygon)
   For (each pixel (i,j) in the polygon's projection)
   {
     Find the depth z of the polygon at the (x,y) corresponding to pixel (i,j);
     If (z < d[i,j])
     {
       d[i,j] = z;
       c[i,j] = polygon's color;
     }
   }
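 A runnable Python sketch of the same loop, assuming each polygon supplies the pixels of its projection together with a depth value (the scene data here is purely illustrative):

WIDTH, HEIGHT = 4, 3
MAX_DEPTH, BACKGROUND = 1.0, 0

depth = [[MAX_DEPTH] * WIDTH for _ in range(HEIGHT)]    # d[i,j] = 1.0 (max depth)
frame = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]   # c[i,j] = background color

# Each polygon: (color, list of (i, j, z) samples from its projection)
polygons = [
    (1, [(0, 0, 0.6), (0, 1, 0.6), (1, 1, 0.6)]),
    (2, [(0, 1, 0.3), (1, 1, 0.8)]),    # closer than polygon 1 at (0,1), farther at (1,1)
]

for color, pixels in polygons:
    for i, j, z in pixels:
        if z < depth[i][j]:             # closer than anything drawn so far at this pixel
            depth[i][j] = z             # update the depth buffer...
            frame[i][j] = color         # ...and the frame buffer

print(frame)   # [[1, 2, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]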
CALCULATING DEPTH VALUES EFFICIENTLY
 We know the depth values at the vertices. How can we calculate the depth at any other point on the surface of the polygon?
 Using the polygon's plane equation Ax + By + Cz + D = 0:
   z = -(Ax + By + D) / C
CALCULATING DEPTH VALUES EFFICIENTLY
 Along a scan line, adjacent horizontal x positions (or vertical y positions) differ by 1 unit.
 The depth value of the next position (x+1, y) on the scan line can therefore be obtained incrementally:
   z' = -(A(x+1) + By + D) / C = z - A/C
Z-BUFFER EXAMPLE
 [Figure: depth-buffer values for a polygon parallel with the image plane]
 [Figure: depth-buffer values for a polygon not parallel with the image plane]
FLOATING HORIZON ALGORITHM