CNC Machining & 3D Printing Guide
ME 475: Mechatronics
SUBMITTED TO:
Dr. Mohammad Mamun
Professor
Department of Mechanical Engineering,
BUET
SUBMITTED BY:
SOURAJIT MAJUMDER
1410142
DATE OF SUBMISSION:
24 FEBRUARY, 2019
Precision Manufacturing with CNC Machines
Short for “computer numerical control,” the CNC process runs in contrast to — and thereby
supersedes — the limitations of manual control, where live operators are needed to prompt and
guide the commands of machining tools via levers, buttons and wheels. To the onlooker, a
CNC system might resemble a regular set of computer components, but the software programs
and consoles employed in CNC machining distinguish it from all other forms of computation.
When a CNC system is activated, the desired cuts are programmed into the software and dictated
to corresponding tools and machinery, which carry out the dimensional tasks as specified, much
like a robot.
In CNC machining, machine tools function through numerical control. A computer program
is customized for each part, and the machine is programmed in a CNC machining language
(called G-code) that controls features such as feed rate, axis coordination, position and
speed. With CNC machining, the computer can control exact positioning and velocity.
First, a CAD drawing is created (either 2D or 3D), and then code that the CNC machine can
interpret is generated from it. The program is loaded, and finally an operator runs a test of the
program to ensure there are no problems. This trial run is referred to as "cutting air" and it is an
important step, because a mistake in speed or tool position could result in a scrapped part or a
damaged machine.
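As a rough illustration of how a tool path becomes G-code, the short Python sketch below emits a minimal program for a rectangular pass; the coordinates, feed rate and spindle speed are invented values for demonstration, not settings for any real machine or post-processor.

# Illustrative sketch: emit a minimal G-code program for a rectangular
# pass. Coordinates, feed rate and spindle speed are made-up values
# for demonstration, not settings for any particular machine.

def rectangle_toolpath(width, height, step_down, depth):
    """Yield (x, y, z) points tracing a rectangle at successive depths."""
    z = 0.0
    while z > -depth:
        z = max(z - step_down, -depth)
        yield from [(0, 0, z), (width, 0, z), (width, height, z),
                    (0, height, z), (0, 0, z)]

def to_gcode(points, feed_rate=300.0, spindle_rpm=1200):
    lines = ["G21 ; millimetre units", "G90 ; absolute positioning",
             f"S{spindle_rpm} M3 ; start spindle"]
    for x, y, z in points:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f} F{feed_rate:.1f}")
    lines += ["M5 ; stop spindle", "M30 ; end of program"]
    return "\n".join(lines)

if __name__ == "__main__":
    program = to_gcode(rectangle_toolpath(50.0, 30.0, 1.0, 3.0))
    print(program)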
Position control is determined through an open-loop or closed-loop system. With the former, the
signaling runs in a single direction between the controller and motor. With a closed-loop system,
the controller is capable of receiving feedback, which makes error correction possible. Thus, a
closed-loop system can rectify irregularities in velocity and position.
In CNC machining, movement is usually directed across X and Y axes. The tool, in turn, is
positioned and guided via stepper or servo motors, which replicate exact movements as
determined by the G-code. If the force and speed are minimal, the process can be run via open-
loop control. For everything else, closed-loop control is necessary to ensure the speed,
consistency and accuracy required for industrial applications, such as metalwork.
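To make the open-loop versus closed-loop distinction concrete, the following Python sketch simulates a single axis pushed off course by a constant disturbance (standing in for a cutting load); the plant model, gain and numbers are assumptions made only for illustration.

# Minimal sketch contrasting open-loop and closed-loop (proportional
# feedback) position control of a single axis. The plant model and
# gains are invented for illustration only.

def simulate(target, steps=200, dt=0.01, load_drift=0.05, closed_loop=True, kp=8.0):
    """Drive a 1-D axis toward `target`; `load_drift` models a disturbance
    (e.g. a cutting force) that an open-loop drive cannot compensate for."""
    position = 0.0
    for _ in range(steps):
        if closed_loop:
            error = target - position          # error measured by feedback (e.g. an encoder)
            velocity = kp * error              # proportional correction
        else:
            velocity = target / (steps * dt)   # pre-planned constant speed, no feedback
        position += (velocity - load_drift) * dt
    return position

if __name__ == "__main__":
    print("open-loop final position:  ", round(simulate(10.0, closed_loop=False), 3))
    print("closed-loop final position:", round(simulate(10.0, closed_loop=True), 3))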
Basically, CNC machining makes it possible to pre-program the speed and position of machine
tool functions and run them via software in repetitive, predictable cycles, all with little
involvement from human operators. Due to these capabilities, the process has been adopted
across all corners of the manufacturing sector and is especially vital in the areas of metal and
plastic production.
1. Pre-Start
Before starting the machine, check to ensure oil and coolant levels are full. Check the machine
maintenance manual if you are unsure about how to service it. Ensure the work area is clear of
any loose tools or equipment. If the machine requires an air supply, ensure the compressor is on
and pressure meets the machine requirements.
2. Start/Home
Turn power on the machine and control. The main breaker is located at the back of the machine.
The machine power button is located in the upper-left corner on the control face.
3. Load Tools
Load all tools into the tool carousel in the order listed in the CNC program tool list.
8. Dry Run
Run the program in the air about 2.00 in. above the part (a Z-offset sketch of this idea appears after the last step).
9. Run Program
Run the program, using extra caution until the program is proven to be error-free.
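The dry-run step above can be pictured as shifting every Z move of the program upward by a safety offset. The Python sketch below applies a 2.00 in. offset to the Z words of a few G-code lines; real controllers usually achieve this with a work-offset shift, so this is only an illustration of the idea.

# Illustrative sketch of preparing a "cutting air" dry run: every Z word
# in a G-code program is raised by a safety offset (here 2.00 in) so the
# tool traces the program above the part. Real controllers usually do
# this with a work-offset shift; this only demonstrates the idea.
import re

DRY_RUN_OFFSET = 2.00  # inches above the part

def raise_z(line, offset=DRY_RUN_OFFSET):
    return re.sub(r"Z(-?\d+\.?\d*)",
                  lambda m: f"Z{float(m.group(1)) + offset:.3f}",
                  line)

program = ["G20 ; inch units", "G0 X0 Y0 Z0.1", "G1 Z-0.25 F10.0", "G1 X2.0 Y1.0"]
for line in program:
    print(raise_z(line))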
Rapid Prototyping & 3D Printing
Rapid prototyping is a group of techniques used to quickly fabricate a scale model of a physical
part or assembly using three-dimensional computer-aided design (CAD) data.
3D printing is any of various processes in which material is joined or solidified under computer
control to create a three-dimensional object, with material being added together (such as liquid
molecules or powder grains being fused together), typically layer by layer. In the 1990s, 3D
printing techniques were considered suitable only for the production of functional or aesthetic
prototypes and a more appropriate term was rapid prototyping.
The umbrella term additive manufacturing (AM) gained wide currency in the 2000s, inspired by
the theme of material being added together (in any of various ways). In contrast, the term
subtractive manufacturing appeared as a retronym for the large family of machining processes
with material removal as their common theme. The term 3D printing still referred only to the
polymer technologies in most minds, and the term AM was likelier to be used in metalworking
and end use part production contexts than among polymer, inkjet, or stereolithographic
enthusiasts.
In the 1970s, Joseph Henry Condon and others at Bell Labs developed the Unix Circuit Design
System (UCDS), automating the laborious and error-prone task of manually converting drawings
to fabricate circuit boards for the purposes of research and development.
By the 1980s, U.S. policy makers and industrial managers were forced to take note that
America's dominance in the field of machine tool manufacturing had evaporated, in what was named
the machine tool crisis. Numerous projects sought to counter these trends in the traditional CNC
CAM area, which had begun in the US. Later, when rapid prototyping systems moved out of labs
to be commercialized, it was recognized that developments were already international and U.S.
rapid prototyping companies would not have the luxury of letting a lead slip away. Under the
umbrella of the National Science Foundation, the National Aeronautics and Space Administration
(NASA), the US Department of Energy, the US Department of Commerce (NIST), the US Department
of Defense, the Defense Advanced Research Projects Agency (DARPA), and the Office of Naval
Research coordinated studies to inform strategic planners in their deliberations. One such report
was the 1997 Rapid Prototyping in Europe and Japan Panel Report, in which Joseph J. Beaman,
founder of DTM Corporation, provides a historical perspective:
“The roots of rapid prototyping technology can be traced to practices in topography and
photosculpture. Within topography, Blanther (1892) suggested a layered method for making a mold
for raised-relief paper topographical maps. The process involved cutting the contour lines on a
series of plates which were then stacked. Matsubara (1974) of Mitsubishi proposed a
topographical process with a photo-hardening photopolymer resin to form thin layers stacked to
make a casting mold. Photosculpture was a 19th-century technique to create exact three-
dimensional replicas of objects. Most famously, François Willème (1860) placed 24 cameras in a
circular array and simultaneously photographed an object. The silhouette of each photograph
was then used to carve a replica. Morioka (1935, 1944) developed a hybrid photosculpture and
topographic process using structured light to photographically create contour lines of an object.
The lines could then be developed into sheets and cut and stacked, or projected onto stock
material for carving. The Munz (1956) process reproduced a three-dimensional image of an object
by selectively exposing, layer by layer, a photo emulsion on a lowering piston. After fixing, a solid
transparent cylinder contains an image of the object.”
The technologies referred to as Solid Freeform Fabrication are what we recognize today as rapid
prototyping, 3D printing or additive manufacturing. Swainson (1977) and Schwerzel (1984) worked
on polymerization of a photosensitive polymer at the intersection of two computer-controlled
laser beams. Ciraud (1972) considered magnetostatic or electrostatic deposition with electron
beam, laser or plasma for sintered surface cladding. These were all proposed, but it is unknown
whether working machines were built. Hideo Kodama of the Nagoya Municipal Industrial Research
Institute was the first to publish an account of a solid model fabricated using a photopolymer rapid
prototyping system (1981). Even at that early date the technology was seen as having a place in
manufacturing practice. A low-resolution, low-strength output had value in design verification,
mold making, production jigs and other areas. Outputs have steadily advanced toward higher-
specification uses.
Innovations are constantly being sought to improve speed and the ability to cope with mass-
production applications. A dramatic development which RP shares with related CNC areas is the
freeware open-sourcing of high-level applications which constitute an entire CAD/CAM toolchain.
This has created a community of low-resolution device manufacturers. Hobbyists have even made
forays into more demanding laser-effected device designs.
By the early 2010s, the terms 3D printing and additive manufacturing evolved senses in which
they were alternate umbrella terms for additive technologies, one being used in popular
vernacular by consumer-maker communities and the media, and the other used more formally
by industrial end-use part producers, machine manufacturers, and global technical standards
organizations. Until recently, the term 3D printing has been associated with machines low-end in
price or in capability. Both terms reflect that the technologies share the theme of material
addition or joining throughout a 3D work envelope under automated control. Peter Zelinski, the
editor-in-chief of Additive Manufacturing magazine, pointed out in 2017 that the terms are still
often synonymous in casual usage but that some manufacturing industry experts are increasingly
making a sense distinction whereby Additive Manufacturing comprises 3D printing plus other
technologies or other aspects of a manufacturing process.
Other terms that have been used as synonyms or hypernyms have included desktop
manufacturing, rapid manufacturing (as the logical production-level successor to rapid
prototyping), and on-demand manufacturing (which echoes on-demand printing in the 2D sense
of printing). That such application of the adjectives rapid and on-demand to the noun
manufacturing was novel in the 2000s reveals the prevailing mental model of the long industrial
era in which almost all production manufacturing involved long lead times for laborious tooling
development. Today, the term subtractive has not replaced the term machining, instead
complementing it when a term that covers any removal method is needed. Agile tooling is the
use of modular means to design tooling that is produced by additive manufacturing or 3D printing
methods to enable quick prototyping and responses to tooling and fixture needs. Agile tooling
uses a cost effective and high quality method to quickly respond to customer and market needs,
and it can be used in hydroforming, stamping, injection molding and other manufacturing
processes.
Industrial Robotics
An industrial robot is a robot system used for manufacturing. Industrial robots are automated,
programmable and capable of movement on three or more axes. Typical applications of robots
include welding, painting, assembly, pick and place for printed circuit boards, packaging and
labeling, palletizing, product inspection, and testing; all accomplished with high endurance,
speed, and precision. They can also assist in material handling.
The earliest known industrial robot, conforming to the ISO definition, was completed by "Bill"
Griffith P. Taylor in 1937 and published in Meccano Magazine, March 1938. The crane-like device
was built almost entirely using Meccano parts and powered by a single electric motor.
Five axes of movement were possible, including grab and grab rotation. Automation was
achieved using punched paper tape to energize solenoids, which would facilitate the movement
of the crane's control levers. The robot could stack wooden blocks in preprogrammed patterns.
The number of motor revolutions required for each desired movement was first plotted on graph
paper. This information was then transferred to the paper tape, which was also driven by the
robot's single motor. Chris Shute built a complete replica of the robot in 1997.
The most commonly used robot configurations are articulated robots, SCARA robots, delta robots
and Cartesian coordinate robots (gantry robots or x-y-z robots). In the context of general
robotics, most types of robots would fall into the category of robotic arms (inherent in the use of
the word manipulator in ISO standard 1738).
Other robots are much more flexible as to the orientation of the object on which they are
operating or even the task that has to be performed on the object itself, which the robot may
even need to identify. For example, for more precise guidance, robots often contain machine
vision sub-systems acting as their visual sensors, linked to powerful computers or controllers.
Artificial intelligence, or what passes for it, is becoming an increasingly important factor in the
modern industrial robot.
Defining parameters:
• Number of axes –
Two axes are required to reach any point in a plane; three axes are required to reach any
point in space. To fully control the orientation of the end of the arm (i.e. the wrist) three more
axes (yaw, pitch, and roll) are required. Some designs (e.g. the SCARA robot) trade limitations in
motion possibilities for cost, speed, and accuracy.
• Degrees of freedom –
This is usually the same as the number of axes.
• Working envelope –
The region of space a robot can reach.
• Kinematics –
The actual arrangement of rigid members and joints in the robot, which determines the
robot's possible motions. Classes of robot kinematics include articulated, Cartesian, parallel and
SCARA.
• Speed –
How fast the robot can position the end of its arm. This may be defined in terms of the
angular or linear speed of each axis or as a compound speed i.e. the speed of the end of the arm
when all axes are moving.
• Acceleration –
How quickly an axis can accelerate. Since this is a limiting factor a robot may not be able
to reach its specified maximum speed for movements over a short distance or a complex path
requiring frequent changes of direction.
• Accuracy –
How closely a robot can reach a commanded position. When the absolute position of the
robot is measured and compared to the commanded position the error is a measure of accuracy.
Accuracy can be improved with external sensing for example a vision system or Infra-Red. See
robot calibration. Accuracy can vary with speed and position within the working envelope and
with payload (see compliance).
• Repeatability –
How well the robot will return to a programmed position. This is not the same as accuracy.
It may be that when told to go to a certain X-Y-Z position the robot gets only to within 1 mm of that
position; this is its accuracy, which may be improved by calibration. But if that position is
taught into controller memory and each time it is sent there it returns to within 0.1 mm of the
taught position, then the repeatability will be within 0.1 mm (see the numerical sketch after this list).
• Motion control –
For some applications, such as simple pick-and-place assembly, the robot need merely
return repeatedly to a limited number of pre-taught positions. For more sophisticated
applications, such as welding and finishing (spray painting), motion must be continuously
controlled to follow a path in space, with controlled orientation and velocity.
• Power source –
Some robots use electric motors, others use hydraulic actuators. The former are faster;
the latter are stronger and advantageous in applications such as spray painting, where a spark
could set off an explosion; however, low internal air-pressurization of the arm can prevent ingress
of flammable vapours as well as other contaminants. Nowadays, hydraulic robots are rarely seen
on the market; additional sealing, brushless electric motors and spark-proof protection have made
it easier to build electric units that can work in environments with an explosive atmosphere.
• Drive –
Some robots connect electric motors to the joints via gears; others connect the motor to
the joint directly (direct drive). Using gears results in measurable 'backlash' which is free
movement in an axis. Smaller robot arms frequently employ high speed, low torque DC motors,
which generally require high gearing ratios; this has the disadvantage of backlash. In such cases
the harmonic drive is often used.
• Compliance –
This is a measure of the amount in angle or distance that a robot axis will move when a
force is applied to it. Because of compliance, a robot that goes to a position carrying its maximum
payload will be at a position slightly lower than when it carries no payload. Compliance can
also be responsible for overshoot when carrying high payloads, in which case acceleration may
need to be reduced.
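The numerical sketch below, in Python, illustrates the accuracy/repeatability distinction from the list above using invented measurements of one commanded position; formal evaluations follow standards such as ISO 9283.

# Numerical sketch of the accuracy / repeatability distinction described
# above. The measured positions are invented values for one commanded
# point; real evaluations follow standards such as ISO 9283.
from statistics import mean

commanded = 100.000  # mm, commanded X position
measured = [100.92, 101.05, 100.98, 101.01, 100.95]  # mm, repeated visits

offsets = [m - commanded for m in measured]
accuracy_error = abs(mean(offsets))                      # systematic offset from the command
centre = mean(measured)
repeatability = max(abs(m - centre) for m in measured)   # spread about the robot's own mean

print(f"accuracy error : {accuracy_error:.3f} mm")   # roughly 1 mm
print(f"repeatability  : {repeatability:.3f} mm")    # roughly 0.1 mm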
Motions and sequences for an industrial robot are typically set up or programmed by linking
the robot controller to a laptop, desktop computer or (internal or Internet) network. A
robot and a collection of machines or peripherals is referred to as a workcell, or cell. A typical cell
might contain a parts feeder, a molding machine and a robot. The various machines are
'integrated' and controlled by a single computer or PLC. How the robot interacts with other
machines in the cell must be programmed, both with regard to their positions in the cell and
synchronizing with them.
Software:
The computer is installed with corresponding interface software. The use of a computer greatly
simplifies the programming process. Specialized robot software is run either in the robot
controller or in the computer or both depending on the system design.
There are two basic entities that need to be taught (or programmed): positional data and
procedure. For example, in a task to move a screw from a feeder to a hole the positions of the
feeder and the hole must first be taught or programmed. Secondly the procedure to get the screw
from the feeder to the hole must be programmed along with any I/O involved, for example a
signal to indicate when the screw is in the feeder ready to be picked up. The purpose of the robot
software is to facilitate both these programming tasks.
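A minimal sketch of the two kinds of teaching described above is given below in Python. The Robot class and its methods are a hypothetical stand-in for a vendor controller API, and the poses and signal name are invented; only the structure (taught positions plus a procedure with I/O) reflects the text.

# Sketch of the two things that must be taught: positions and procedure.
# The `Robot` class below is a hypothetical stand-in for a vendor
# controller API, used only to illustrate the structure of the task.

class Robot:
    def move_to(self, pose): print(f"moving to {pose}")
    def wait_for_input(self, signal): print(f"waiting for {signal}")
    def close_gripper(self): print("gripper closed")
    def open_gripper(self): print("gripper opened")

# Positional data: taught poses, e.g. recorded with a teach pendant (invented values).
FEEDER_PICK = {"x": 310.0, "y": -42.5, "z": 55.0}
HOLE_PLACE  = {"x": 118.0, "y": 200.0, "z": 30.0}

def place_screw(robot):
    """Procedure: wait for a screw, pick it from the feeder, place it in the hole."""
    robot.wait_for_input("screw_present")   # I/O signal from the feeder
    robot.move_to(FEEDER_PICK)
    robot.close_gripper()
    robot.move_to(HOLE_PLACE)
    robot.open_gripper()

place_screw(Robot())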
Positional commands
The robot can be directed to the required position using a GUI or text based commands in which
the required X-Y-Z position may be specified and edited.
Teach pendant
Robot positions can be taught via a teach pendant. This is a handheld control and programming
unit. The common features of such units are the ability to manually send the robot to a desired
position, or "inch" or "jog" to adjust a position. They also have a means to change the speed since
a low speed is usually required for careful positioning, or while testrunning through a new or
modified routine. A large emergency stop button is usually included as well. Typically once the
robot has been programmed there is no more use for the teach pendant. All teach pendants are
equipped with a 3-position deadman switch. In the manual mode, it allows the robot to move
only when it is in the middle position (partially pressed). If it is fully pressed in or completely
released, the robot stops. This principle of operation allows natural reflexes to be used to
increase safety.
Lead-by-the-nose
This is a technique offered by many robot manufacturers. In this method, one user holds the
robot's manipulator, while another person enters a command that de-energizes the robot,
causing it to go limp. The user then moves the robot by hand to the required positions and/or
along a required path while the software logs these positions into memory. The program can
later run the robot to these positions or along the taught path. This technique is popular for tasks
such as paint spraying.
Offline programming
In offline programming, the entire cell (the robot and all the machines or instruments in the
workspace) is mapped graphically. The robot can then be moved on screen and the process simulated. A
robotics simulator is used to create embedded applications for a robot without depending on
the physical operation of the robot arm and end effector. The advantage of robotics simulation
is that it saves time in the design of robotics applications. It can also increase the level of safety
associated with robotic equipment since various "what if" scenarios can be tried and tested
before the system is activated. Robot simulation software provides a platform to teach, test, run,
and debug programs that have been written in a variety of programming languages.
These allow for robotics programs to be conveniently written and debugged off-line with the final
version of the program tested on an actual robot. The ability to preview the behavior of a robotic
system in a virtual world allows for a variety of mechanisms, devices, configurations and
controllers to be tried and tested before being applied to a "real world" system. Robotics
simulators have the ability to provide real-time computing of the simulated motion of an
industrial robot using both geometric modeling and kinematics modeling.
Others:
In addition, machine operators often use user interface devices, typically touchscreen units,
which serve as the operator control panel. The operator can switch from program to program,
make adjustments within a program and also operate a host of peripheral devices that may be
integrated within the same robotic system. These include end effectors, feeders that supply
components to the robot, conveyor belts, emergency stop controls, machine vision systems,
safety interlock systems, barcode printers and an almost infinite array of other industrial devices
which are accessed and controlled via the operator control panel.
The teach pendant or PC is usually disconnected after programming and the robot then runs on
the program that has been installed in its controller. However a computer is often used to
'supervise' the robot and any peripherals, or to provide additional storage for access to numerous
complex paths and routines.
Safety standards are being developed by the Robotic Industries Association (RIA) in conjunction
with the American National Standards Institute (ANSI).[2] On October 5, 2017,
OSHA, NIOSH and RIA signed an alliance to work together to enhance technical expertise, identify
and help address potential workplace hazards associated with traditional industrial robots and
the emerging technology of human-robot collaboration installations and systems, and help
identify needed research to reduce workplace hazards. On October 16 NIOSH launched the
Center for Occupational Robotics Research to "provide scientific leadership to guide the
development and use of occupational robots that enhance worker safety, health, and wellbeing."
So far, the research needs identified by NIOSH and its partners include: tracking and preventing
injuries and fatalities, intervention and dissemination strategies to promote safe machine control
and maintenance procedures, and translating effective evidence-based interventions into
workplace practice.
The current focus of the industry tends to be on giving robots vision, specifically through the rise
of machine vision technology. This, combined with the advancement of the Internet of Things (IoT),
gives machines the ability to process images and understand what they are “seeing.”
As this technology continues to proliferate, the next step is giving robots the ability to apply these
things to learn on their own. For example, a robot can currently be programmed to pick up and
place items, but in the future, it will combine machine vision with machine learning to figure out
its own programming through trial and error.
Machine Learning through Artificial
Intelligence
Machine learning is a subset of AI. That is, all machine learning counts as AI, but not all AI counts
as machine learning. For example, symbolic logic – rules engines, expert systems and knowledge
graphs – could all be described as AI, and none of them are machine learning.
One aspect that separates machine learning from the knowledge graphs and expert systems is
its ability to modify itself when exposed to more data; i.e. machine learning is dynamic and does
not require human intervention to make certain changes. That makes it less brittle, and less
reliant on human experts.
Supervised machine learning algorithms learn a function that maps labeled training examples to
known outputs. In contrast, unsupervised machine learning algorithms are used when the information
used to train is neither classified nor labeled. Unsupervised learning studies how systems can infer a
function to describe a hidden structure from unlabeled data. The system does not figure out the
right output, but it explores the data and can draw inferences from datasets to describe hidden
structures in unlabeled data.
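As a small illustration of unsupervised learning, the Python sketch below runs a bare-bones k-means pass over unlabeled readings; the data and the choice of two clusters are assumptions made purely for demonstration.

# Minimal k-means sketch: an unsupervised algorithm grouping unlabeled
# 1-D readings into two clusters without being told the "right" output.
# Data and cluster count are invented for illustration.
from statistics import mean

readings = [1.0, 1.2, 0.9, 8.1, 7.9, 8.3]   # unlabeled data
centres = [readings[0], readings[-1]]        # crude initialisation

for _ in range(10):                          # a few refinement passes
    clusters = {0: [], 1: []}
    for x in readings:
        nearest = min((abs(x - c), i) for i, c in enumerate(centres))[1]
        clusters[nearest].append(x)
    centres = [mean(clusters[i]) if clusters[i] else centres[i] for i in (0, 1)]

print("cluster centres:", [round(c, 2) for c in centres])  # roughly [1.03, 8.1]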
Feature learning
Several learning algorithms aim at discovering better representations of the inputs provided
during training. Classic examples include principal components analysis and cluster analysis.
Feature learning algorithms, also called representation learning algorithms, often attempt to
preserve the information in their input but also transform it in a way that makes it useful, often
as a pre-processing step before performing classification or predictions. This technique allows
reconstruction of the inputs coming from the unknown data-generating distribution, while not
being necessarily faithful to configurations that are implausible under that distribution. This
replaces manual feature engineering, and allows a machine to both learn the features and use
them to perform a specific task.
Manifold learning algorithms attempt to do so under the constraint that the learned
representation is low-dimensional. Sparse coding algorithms attempt to do so under the
constraint that the learned representation is sparse, meaning that the mathematical model has
many zeros. Multilinear subspace learning algorithms aim to learn low-dimensional
representations directly from tensor representations for multidimensional data, without
reshaping them into higher-dimensional vectors. Deep learning algorithms discover multiple
levels of representation, or a hierarchy of features, with higher-level, more abstract features
defined in terms of (or generating) lower-level features. It has been argued that an intelligent
machine is one that learns a representation that disentangles the underlying factors of variation
that explain the observed data.
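As a concrete instance of the classic feature-learning example mentioned above, the following sketch uses principal components analysis to learn a one-dimensional representation of correlated two-dimensional data; the synthetic data and the use of NumPy are assumptions for illustration only.

# Sketch of principal components analysis as a feature-learning /
# pre-processing step: project 2-D points onto their main direction of
# variation. The data are synthetic; NumPy is assumed to be available.
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D data: variation mostly along one direction.
x = rng.normal(size=200)
data = np.column_stack([x, 0.5 * x + 0.05 * rng.normal(size=200)])

centred = data - data.mean(axis=0)
cov = np.cov(centred, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)      # eigenvalues in ascending order
principal = eigenvectors[:, -1]                      # direction of largest variance

features = centred @ principal                       # learned 1-D representation
print("explained variance ratio:", eigenvalues[-1] / eigenvalues.sum())
print("first learned features:", np.round(features[:3], 3))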
Feature learning is motivated by the fact that machine learning tasks such as classification often
require input that is mathematically and computationally convenient to process. However, real-
world data such as images, video, and sensory data has not yielded to attempts to algorithmically
define specific features. An alternative is to discover such features or representations through
examination, without relying on explicit algorithms.
In image de-noising with sparse coding, for example, the key idea is that a clean image patch can
be sparsely represented by an image dictionary, but the noise cannot.
Anomaly detection
In data mining, anomaly detection, also known as outlier detection, is the identification of rare
items, events or observations which raise suspicions by differing significantly from the majority
of the data. Typically, the anomalous items represent an issue such as bank fraud, a structural
defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties,
noise, deviations and exceptions.
In particular, in the context of abuse and network intrusion detection, the interesting objects are
often not rare objects, but unexpected bursts in activity. This pattern does not adhere to the
common statistical definition of an outlier as a rare object, and many outlier detection methods
(in particular, unsupervised algorithms) will fail on such data, unless it has been aggregated
appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters
formed by these patterns.
Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection
techniques detect anomalies in an unlabeled test data set under the assumption that the majority
of the instances in the data set are normal, by looking for instances that seem to fit least to the
remainder of the data set. Supervised anomaly detection techniques require a data set that has
been labeled as "normal" and "abnormal" and involves training a classifier (the key difference to
many other statistical classification problems is the inherent unbalanced nature of outlier
detection). Semi-supervised anomaly detection techniques construct a model representing
normal behavior from a given normal training data set, and then test the likelihood of a test
instance to be generated by the model.
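A minimal sketch of the semi-supervised style described above is shown below: a model of normal behavior (here just a mean and standard deviation) is built from a normal-only training set, and test readings far outside it are flagged. The data and threshold are invented.

# Sketch of semi-supervised anomaly detection as described above: model
# "normal" behaviour from a normal-only training set, then flag test
# readings that fall too far outside it. Data and threshold are invented.
from statistics import mean, stdev

normal_training = [20.1, 19.8, 20.4, 20.0, 19.9, 20.2, 20.3]  # e.g. sensor readings
mu, sigma = mean(normal_training), stdev(normal_training)

def is_anomaly(value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    return abs(value - mu) / sigma > threshold

for reading in [20.2, 19.7, 25.6]:
    print(reading, "anomalous" if is_anomaly(reading) else "normal")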
Decision trees
Decision tree learning uses a decision tree as a predictive model to go from observations about
an item (represented in the branches) to conclusions about the item's target value (represented
in the leaves). It is one of the predictive modeling approaches used in statistics, data mining and
machine learning. Tree models where the target variable can take a discrete set of values are
called classification trees; in these tree structures, leaves represent class labels and branches
represent conjunctions of features that lead to those class labels. Decision trees where the target
variable can take continuous values (typically real numbers) are called regression trees. In
decision analysis, a decision tree can be used to visually and explicitly represent decisions and
decision making. In data mining, a decision tree describes data, but the resulting classification
tree can be an input for decision making.
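The sketch below trains a tiny classification tree in which branches encode feature tests and leaves carry class labels, matching the description above. It assumes scikit-learn is available, and the toy data (temperature and vibration readings labeled ok or faulty) are invented.

# Sketch of a classification tree: leaves carry class labels, branches
# encode feature tests. Assumes scikit-learn is available; the toy data
# (features = [temperature, vibration], label = ok / faulty) are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[60, 0.2], [65, 0.3], [85, 0.8], [90, 0.7], [62, 0.25], [88, 0.9]]
y = ["ok", "ok", "faulty", "faulty", "ok", "faulty"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["temperature", "vibration"]))
print(tree.predict([[70, 0.4]]))   # classify a new observation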
Association rules
Association rule learning is a rule-based machine learning method for discovering relationships
between variables in large databases. It is intended to identify strong rules discovered in
databases using some measure of "interestingness". This rule-based approach generates new
rules as it analyzes more data. The ultimate goal, assuming the set of data is large enough, is to
help a machine mimic the human brain’s feature extraction and abstract association capabilities
for data that has not been categorized.
Rule-based machine learning is a general term for any machine learning method that identifies,
learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of
a rule-based machine learning algorithm is the identification and utilization of a set of relational
rules that collectively represent the knowledge captured by the system. This is in contrast to
other machine learning algorithms that commonly identify a singular model that can be
universally applied to any instance in order to make a prediction. Rule-based machine learning
approaches include learning classifier systems, association rule learning, and artificial immune
systems.
Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami
introduced association rules for discovering regularities between products in large-scale
transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, the rule
{onions, potatoes} ⇒ {hamburger meat} found in the sales data of a supermarket would indicate that
if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such
information can be used as
the basis for decisions about marketing activities such as promotional pricing or product
placements. In addition to market basket analysis, association rules are employed today in
application areas including Web usage mining, intrusion detection, continuous production, and
bioinformatics. In contrast with sequence mining, association rule learning typically does not
consider the order of items either within a transaction or across transactions.
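The following sketch computes the support and confidence of a rule like the supermarket example above on a made-up set of point-of-sale transactions; these two measures are typical of the "interestingness" measures mentioned earlier.

# Sketch of the support / confidence measures behind association rules,
# using a made-up set of point-of-sale transactions like the supermarket
# example above.
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
    {"potatoes", "burger"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """How often the consequent appears when the antecedent does."""
    return support(antecedent | consequent) / support(antecedent)

rule_lhs, rule_rhs = {"onions", "potatoes"}, {"burger"}
print("support   :", support(rule_lhs | rule_rhs))    # 0.4
print("confidence:", confidence(rule_lhs, rule_rhs))  # about 0.67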
Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that
combine a discovery component, typically a genetic algorithm, with a learning component,
performing either supervised learning, reinforcement learning, or unsupervised learning. They
seek to identify a set of context-dependent rules that collectively store and apply knowledge in
a piecewise manner in order to make predictions.
Machine learning enables analysis of massive quantities of data. While it generally delivers faster,
more accurate results when identifying profitable opportunities or dangerous risks, it may also
require additional time and resources to train properly.