Machine Vision and Image Processing (LabVIEW Self-Study Guide)
Pass the browser test at http://onlinecourses.ni.com/browsertest.htm to use the National Instruments remote computers for your exercises, or install LabVIEW 8 on your computer and download the exercise and solution VIs to complete the exercises on your own computer.
ni.com/training
TOPICS
- Image acquisition (IMAQ) products
- Vision software products
- Camera configuration using MAX
The National Instruments line of IMAQ devices features image acquisition boards that connect to parallel digital, analog, and Camera Link cameras. These devices include advanced triggering and digital I/O features that you can use to trigger an acquisition from an outside signal. IMAQ devices feature up to 128 MB of onboard memory, which allows you to acquire images at extremely high rates while sustaining high-speed throughput and greater overall system performance. Most IMAQ devices work with motion control and data acquisition hardware using the National Instruments real-time system integration (RTSI) bus. On National Instruments PCI boards, you can use a ribbon cable to connect RTSI connectors on adjacent boards to send triggering and timing information from one board to another. Real-time acquisition is available with LabVIEW Real-Time (LV RT) on a PXI chassis, or with LV RT and Vision Builder for AI on a CVS (1450 Series).
Vision Assistant
Prototype and generate scripts
NI Vision is the image processing toolkit, or library, that adds high-level machine vision and image processing to your programming environment. NI Vision includes routines for LabVIEW, LabWindows/CVI, Measurement Studio, and Microsoft Visual Studio. NI Vision Assistant is a tool for prototyping and testing image processing applications. Create custom algorithms with the Vision Assistant scripting feature, which records every step of your processing algorithm. Vision Assistant can also produce a working LabVIEW VI based on the script you create, or generate a builder file that contains the calls needed to execute the script in C or Visual Basic. Vision Assistant is included as part of the Vision Development Module. NI Vision Builder for Automated Inspection (Vision Builder AI) is a stand-alone prototyping and testing program, much like Vision Assistant. However, you can run your final inspection application from within Vision Builder AI. Vision Builder AI requires no programming experience, which enables you to develop projects in a shorter amount of time. The software includes functions for setting up complex pass/fail decisions, controlling digital I/O devices, and communicating with serial devices such as PLCs.
NI Vision Development Module includes an extensive set of MMX-optimized functions for the following machine vision tasks:
- Grayscale, color, and binary image display
- Image processing, including statistics, filtering, and geometric transforms
- Pattern matching and geometric matching
- Particle analysis
- Gauging
- Measurement
- Object classification
- Optical character recognition
Use NI Vision Development Module to accelerate the development of industrial machine vision and scientific imaging applications.
Vision Assistant uses the NI Vision library but can be used independently of other development environments. In addition to being a tool for prototyping vision systems, you can use Vision Assistant to learn how different image processing functions perform. The Vision Assistant interface makes prototyping your application easy and efficient because of features such as a reference window that displays your original image, a script window that stores your image processing steps, and a processing window that reflects changes to your images as you apply new parameters.
NI Vision Builder for Automated Inspection is configurable machine vision software that you can use to prototype, benchmark, and deploy applications. It does not require programming, yet is scalable to powerful programming environments such as LabVIEW. A built-in deployment interface is included so you can quickly deploy your inspection, guidance, and identification applications. The software also includes the ability to set up complex pass/fail decisions, control digital I/O devices, and communicate with serial devices such as PLCs. Note: Though Vision Builder AI can build LabVIEW code much like Vision Assistant, the generated code is difficult to modify. If a customer is interested in building a LabVIEW application, they should use Vision Assistant.
TOPICS
- How to choose a camera
- Lighting considerations
- How to choose an image acquisition device
In order to properly prepare your imaging environment for your specific application, you must first examine your inspection task to determine your machine vision requirements. During system setup, you decide the type of lighting and lens you need, and you determine some of the basic specifications of your imaging tasks. Setting up your imaging environment is a critical first step to any imaging application. If you set up the system properly, you can focus your development energy on the application rather than problems caused by the environment, and you can save precious time during execution.
[Figure: camera selection considerations, including scan type (line or area) and budget]
The following slides provide more information about each of these topics:
- Imaging system parameters
- Sensor resolution
- Focal length
- Sensor size
- Scan type
- Format and standard
  - Analog formats: standard interlaced formats, progressive scan
  - Digital formats: digital standards, taps
- Section summary
Sensor resolution is the number of columns and rows of CCD pixels in the camera sensor. Sensor size is the physical area of the sensor array. The working distance is the distance from the front of the lens to the object under inspection. Feature resolution indicates the amount of object detail that the imaging system can reproduce. The field of view (FOV) is the area under inspection that the camera can acquire; the horizontal and vertical dimensions of the inspection area determine the FOV.
Sensor Resolution
Camera sensors contain an array of pixels that:
- Sense incident light intensity
- Output video data through registers
[Figure: sensor pixel array with X and Y pixel resolution feeding output registers]
Acquiring images involves exposing a sensor to light and then measuring the light intensity across a two-dimensional field. You can transmit this measurement data serially or in parallel through output registers to your NI-IMAQ software, where it is digitized.
You can determine the required sensor resolution of your imaging system by measuring (in real-world units) the size of the smallest feature you need to detect in the image. To compute sensor resolution, you must first find the field of view, which is defined by the horizontal and vertical dimensions of the inspection area. Use the same units for FOV and size of smallest feature. Choose the largest FOV value (horizontal or vertical).
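The calculation above can be sketched as a quick check. This is a minimal sketch using the common rule of thumb of two sensor pixels across the smallest feature (consistent with the sensor table in this guide); the helper function name is illustrative, not part of NI-IMAQ:

```python
def required_sensor_resolution(fov, smallest_feature):
    """Minimum pixels needed along the chosen axis, assuming the common
    rule of thumb of two sensor pixels per smallest feature.
    FOV and feature size must be in the same units."""
    return 2 * fov / smallest_feature

# 60 mm field of view, smallest feature 0.2 mm -> 600 pixels needed
pixels_needed = required_sensor_resolution(60, 0.2)
```

A camera with a 640 x 480 sensor would therefore cover a 60 mm FOV with 0.2 mm features along its 640-pixel axis, with a small margin to spare.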
Common Sensors
Cameras are manufactured with a limited number of standard sensors
Number of CCD Pixels    FOV      Sensor Resolution
640 x 480               60 mm    0.185 mm
768 x 572               60 mm    0.156 mm
1281 x 1072             60 mm    0.093 mm
2048 x 2048             60 mm    0.058 mm
4000 x 2624             60 mm    0.030 mm
If your required sensor resolution does not correspond to a standard sensor resolution, choose a camera whose sensor resolution is larger than you require or use multiple cameras. Be aware of camera prices as sensor sizes increase. By determining the sensor resolution you need, you narrow down the number of camera options that meet your application needs.
[Figure: lens geometry showing FOV, size, working distance, and focal length]
Another important factor that affects your camera choice is the physical size of the sensor, known as the sensor size. The sensor's diagonal length specifies the size of the sensor's active area. The number of pixels in your sensor should be greater than or equal to the required pixel resolution. Lenses are manufactured with a limited number of standard focal lengths; common values include 6 mm, 8 mm, 12.5 mm, 25 mm, and 50 mm. Once you choose a lens whose focal length is closest to the focal length required by your imaging system, you need to adjust the working distance to bring the object under inspection into focus. Lenses with short focal lengths (less than 12 mm) produce images with a significant amount of distortion. If your application is sensitive to image distortion, try to increase the working distance and use a lens with a longer focal length. If you cannot change the working distance, you are somewhat limited in choosing a lens. As you set up your system, you will need to fine-tune the various parameters of the focal length equation until you achieve the right combination of components that matches your inspection needs and meets your cost requirements.
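The focal length equation mentioned above is commonly approximated with the thin-lens sizing relation (focal length = sensor size x working distance / FOV). This is a hedged sketch of that approximation, not an exact optical model, and the function name is illustrative:

```python
def focal_length(sensor_size_mm, working_distance_mm, fov_mm):
    """Approximate lens focal length for a machine vision setup,
    using the thin-lens sizing relation:
    focal length = sensor size * working distance / field of view."""
    return sensor_size_mm * working_distance_mm / fov_mm

# 10 mm sensor, 250 mm working distance, 100 mm FOV -> 25 mm lens,
# which happens to be one of the standard focal lengths listed above
f = focal_length(10, 250, 100)
```

In practice you would pick the nearest standard focal length and then adjust the working distance to bring the object into focus, as described above.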
GOAL
Determine the focal length and camera resolution for a barcode application
Scenario: You are developing a system to verify that the correct barcode is placed on each newly assembled product. The length of the barcode is 62 mm. The smallest bar has a width of 0.2 mm. Due to mechanical constraints in the system, the lens can be no closer than 124 mm from the barcode. The sensor size on your camera is 10 mm. 1. Determine the optimal focal length (in mm) of the lens you would purchase for this application. Focal Length = ____________________________________________ 2. Your camera features a resolution of 640 x 480 pixels. Given that the smallest bar has a width of 0.2 mm, determine whether the resolution of this camera is acceptable for reading this barcode. Resolution needed for barcode: _____________________________ Acceptable? ____________________________________________
Scan Type
- Area scan: scans an area of pixels and acquires the entire rectangular image at once.
- Line scan: scans one line of pixels at a time; the image is pieced together afterward.
Line scan advantages:
- Faster acquisition
- Accommodates moving objects
Line scan disadvantages:
- Processing required to build the image
- Expensive
Cameras use different methods of acquiring the pixels of an image. Two popular methods are area scan and line scan. An area scan camera acquires an area of pixels at a time. A line scan camera scans only one line of pixels at a time, providing faster acquisition. However, you must fit the lines together with software to create a whole image. Line scan cameras are useful in web inspection applications during which the object under inspection moves along a conveyor or stage in a production system. Line scan cameras also are useful in high-resolution applications because you can arbitrarily lengthen the image by fitting a specified number of lines together.
Analog cameras output video signals in an analog format. The horizontal sync (HSYNC) pulse identifies the beginning of a line; several lines make up a field. An additional pulse, the vertical sync (VSYNC), identifies the beginning of a field. Notice the black level identified in the figure. This is a reference voltage used for measuring pixel intensities. Low voltages typically indicate darker pixels, while higher voltages indicate lighter pixels.
For most low-end cameras, the odd and even fields are interlaced to increase the perceived image update rate, a technique that has been used by the television industry for several years. Two fields are combined to make up a frame. In MAX, the VSYNC and HSYNC timing information relative to the pixel data is automatically set by selecting the appropriate camera configuration file. For example, you might select RS-170 or CCIR for a monochrome camera and NTSC or PAL for a color camera. Consult your camera documentation for more information on timing.
This table is included for your reference. If you use an analog camera, it will most likely adhere to one of these four standards. Note: These are all interlaced.
Progressive scan cameras are typically used in applications where the object or the background is in motion. Instead of acquiring the image one field at a time and then interlacing them for display, the CCD array in progressive scan cameras acquires the entire scene at once. If you use a standard analog camera in a motion application, there is a slight delay between the acquisition of each of the two fields in a frame. This slight delay causes blurring in the acquired image. Progressive scan cameras eliminate this problem by acquiring the entire frame at once, without interlacing. If you have motion in a scene and you only have a standard analog camera with interlaced video, you can use the National Instruments configuration software to scan only one field of each frame, which will eliminate blurring in the acquired image.
Digital cameras use three types of signals: data lines, a pixel clock, and enable lines.
Data lines
- Parallel wires that carry digital signals corresponding to pixel values
- Digital cameras typically represent pixels with 8, 10, 12, or 14 bits
- Color digital cameras can represent pixels with up to 24 bits
- Depending on your camera, you may have as many as 24 data lines representing each pixel
Pixel clock
- A high-frequency pulse train that determines when the data lines contain valid data
- On the active edge of the pixel clock, the data lines have a constant value that is input to your IMAQ device
- The pixel clock frequency determines the rate at which pixels are acquired
Enable lines
- Indicate when data lines contain valid data
- The HSYNC signal, also known as the Line Valid signal, is active while a row of pixels is acquired and goes inactive at the end of that row
- The VSYNC signal, or Frame Valid signal, is active during the acquisition of an entire frame
Digital line scan cameras consist of a single row of CCD elements and require only an HSYNC timing signal. Digital area scan cameras need both HSYNC and VSYNC signals.
The parallel interface is a well-established standard that provides a wide range of acquisition speeds, image sizes, and pixel depths. Parallel cameras often require you to customize cables and connectors to suit your image acquisition device. The IEEE 1394 standard offers simple daisy-chain cabling with a uniform interface, but it lacks some data throughput capabilities as well as trigger synchronization capabilities: you can trigger the camera, but not the board. The Camera Link standard was developed by a consortium of companies, including National Instruments, representing the frame grabber and camera industries. This standard is designed to offer speed and trigger functionality with the ease of standardized cables and interfaces.
[Figure: sensor tap configurations: 1 tap vs. 4 taps in quadrants]
Increasing the speed of a digital camera's pixel clock or acquiring more than one pixel at a time (using multiple taps) greatly improves acquisition speed. Note: Taps apply only to Camera Link and parallel cameras. The IEEE 1394 bus transfers data in packets, so the idea of taps does not apply to those cameras.
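The effect of taps on acquisition speed can be sketched with a simple upper-bound calculation; the numbers and the helper name below are illustrative assumptions, not values from any specific camera:

```python
def max_frame_rate(pixel_clock_hz, taps, width, height):
    """Upper bound on frames per second: pixels delivered per second
    (pixel clock x taps) divided by pixels per frame.
    Ignores line and frame blanking overhead."""
    return pixel_clock_hz * taps / (width * height)

# Hypothetical 40 MHz pixel clock, 1024 x 1024 image:
one_tap = max_frame_rate(40e6, 1, 1024, 1024)   # roughly 38 fps
four_tap = max_frame_rate(40e6, 4, 1024, 1024)  # four times faster
```

Each additional tap multiplies the pixel delivery rate, which is why multi-tap Camera Link and parallel cameras can sustain much higher frame rates than a single-tap camera at the same pixel clock.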
Digital
Advantages:
- High speed, high pixel depth, and large image sizes
- Programmable controls
- Less image noise
Disadvantages:
- Expensive
- May require custom cables
- May require camera files for custom configuration
When choosing between digital and analog cameras, keep in mind that digital cameras tend to be more expensive than analog cameras, but they allow faster frame rates, higher bit and spatial resolution, and higher signal-to-noise ratios. Analog cameras are based on older, proven technology and are therefore more common and less expensive. A low-cost, monochrome analog camera is an appropriate choice for most beginner vision applications.
Because digital cameras vary in specifications such as speed, image size, pixel depth, number of taps, and modes, NI-IMAQ requires a camera file specific to your camera to define all of these values in order to use that camera with your image acquisition device. Camera files are custom designed to provide efficient and effective interaction between your camera and your image acquisition device. You can find a list of camera files that have been tested and approved by National Instruments online at the Camera Advisor (ni.com/cameras).
If you do not find a camera file for your camera at the Camera Advisor, you can create a custom camera file using the NI Camera File Generator. The NI Camera File Generator is a menu-driven configuration environment for generating new camera files for cameras that National Instruments does not have files for, or for adding features to existing NI camera files. The NI Camera File Generator is a free software tool available from the Camera Advisor Web site.
Selecting a Camera
Selecting a camera is an important step in preparing your imaging environment. You should use the parameters of your application to decide whether you need an analog or digital imaging system. National Instruments offers Camera Advisor (www.ni.com/cameras), a one-stop Web resource for selecting an imaging camera that provides features and specifications for more than 100 cameras.
Camera Advisor
- Helps you select the right camera
- Contains full camera specifications
ni.com/cameras
Camera Advisor is a one-stop Web resource that engineers and scientists can use when selecting an imaging camera. Using this virtual catalog, you can view features, specifications, and technical details for more than 100 cameras, and compare different models and makes, such as line scan, area scan, progressive scan, digital, and analog cameras. Camera Advisor also explains how various cameras work with National Instruments hardware and software, and provides a complete list of cameras that have been tested and are fully compatible with National Instruments products.
If objects in your image are covered by shadows or glare, it becomes much more difficult to examine the images effectively. Some objects reflect large amounts of light due to the nature of their external coating or their curvature. Poor lighting setups in the imaging environment can create shadows that fall across the image. When possible, position your lighting setup and your imaged object such that glare and shadows are reduced or eliminated. If this is not possible, you may need to use special lighting filters or lenses, which are available from a variety of vendors.
Ring Lighting
- Light encircles the camera lens
- Advantage: even illumination without shadows along the lens axis
- Disadvantage: can produce a circular glare
Ring lighting creates intense, shadow-free light along the axis of the camera. It is often combined with polarizers, which filter the glare caused by illuminating shiny objects.
Strobe Lighting
- Light pulses as the frame is acquired
- Advantage: reduces motion blur
- Disadvantage: may have to apply artificial gain to avoid dark images
Strobe lights turn on and off very rapidly in order to illuminate objects at very specific points in time. When an object is in motion in front of the camera, it sometimes helps to illuminate the object only briefly with a strobe light in order to reduce blur in the image. Because the object is illuminated only briefly, it is sometimes necessary to apply a gain to the image so that the pixels do not appear too dark. This gain can be applied within the camera, onboard your IMAQ device, or programmatically within the software.
Backlighting
- Object placed between camera and light source
- Advantage: creates a sharp contrast that makes finding edges and measuring distances easy
- Disadvantage: curved objects can diffract light
Use backlighting when you can solve your application by looking only at the shape of an object. This figure shows an image of a stamped metal part that was acquired using a backlight. Many other factors, such as your camera choice, contribute to your decision about appropriate lighting for your application. You may want to choose lighting sources and filters whose wavelengths match the sensitivity of the CCD sensor in your camera and the color of the object under inspection.
Diffused Lighting
- Some objects reflect light due to their surface texture or curvature
- You can use diffused lighting to eliminate glare
In this example, the first barcode image was acquired using highly directional light, which increased the sensitivity of specular highlights, or glints, causing a glare that makes the image difficult to analyze. The second barcode was acquired using diffused lighting to reduce glare.
Optics Resources
National Instruments recommends the following partner companies for your lighting and lens needs:
Graftek Imaging: www.graftek.com
Edmund Industrial Optics: www.edmundoptics.com
Fostec: www.fostec.com
Stocker & Yale: www.stkr.com
Visit ni.com for information on some of NI's partners who specialize in lighting equipment for the vision industry.
The IMAQ PCI/PXI-1409 image acquisition board offers easy-to-use driver and camera configuration software for up to four standard and nonstandard cameras. You can use the 1409 to acquire data from nonstandard cameras featuring variable pixel clocks from 2 MHz to 40 MHz. You can configure monochrome acquisition from RS-170, CCIR, NTSC, PAL, and RGB, and progressive scan cameras. You can configure color image acquisition from NTSC, PAL, and RGB cameras using the StillColor feature. The 1407 Series boards offer low-cost, single-channel monochrome accuracy. You can configure the 1407 for standard RS-170 and CCIR analog monochrome cameras. Unlike other low-cost machine vision image acquisition boards, the 1407 Series offers advanced features such as partial image acquisition, onboard decimation, lookup table processing, programmable gain, and triggering. You can configure your 1411 Series board for color image acquisition from standard NTSC, PAL, and S-Video cameras. You can also acquire from monochrome RS-170 and CCIR cameras. Unlike other color machine vision boards, the 1411 offers fast color conversion to hue, saturation, and luminance (HSL) image data. This is especially useful for high-speed color matching and inspection applications, even in varying illumination conditions.
With a digital camera and a digital IMAQ board, you can acquire images at thousands of frames/s with greater grayscale resolution and more spatial resolution. National Instruments 1424 Series boards for PCI and PXI/CompactPCI are some of the fastest digital image acquisition boards available, with a 50 MHz pixel clock and 32-bit wide digital input (four 8-bit pixels). Using up to 128 MB of onboard memory, the 1424 can acquire data at a top rate of 200 MB/s. The 1422 Series boards feature a 16-bit input and a 40 MHz pixel clock for lower cost digital image acquisition applications. National Instruments offers IMAQ hardware for the low-voltage differential signaling (LVDS) standard. LVDS extends the performance of the commonly-used digital camera RS-422 differential data bus, which limits frequency to the 20 MHz range. However, LVDS cameras can clock data out at 50 MHz using the IMAQ PCI-1424 LVDS board. You can also use the LVDS of the IMAQ board to transmit data as far as 100 ft. LVDS also reduces noise significantly. The IMAQ PCI-1428 is the newest digital image acquisition board from National Instruments. The PCI-1428 offers high-resolution digital imaging with simple cabling for Camera Link cameras. Camera Link is a new industrial, high speed serial data and cabling standard developed by NI and other companies for easy connectivity between the PC and camera. Camera Link offers future data rate capabilities up to 2.3 gigabits per second.
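The throughput figures quoted above are consistent with a simple bandwidth calculation. This sketch just cross-checks the arithmetic (using 1 MB = 10**6 bytes); it is not part of any NI API:

```python
# A 50 MHz pixel clock with a 32-bit (4-byte) wide digital input
# delivers 4 bytes per clock cycle.
pixel_clock_hz = 50e6
bytes_per_clock = 4  # 32-bit wide input = four 8-bit pixels per clock

throughput_mb_s = pixel_clock_hz * bytes_per_clock / 1e6
# throughput_mb_s works out to 200 MB/s, matching the quoted top rate
```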
The new National Instruments CVS-1454 Compact Vision System extends the power of LabVIEW Real-Time to a new rugged machine vision package that withstands the harsh environments common in robotics, automated test, and industrial inspection systems. The NI CVS-1454 offers unprecedented I/O capabilities and network connectivity for distributed machine vision applications. It uses FireWire (IEEE 1394) technology, compatible with more than 40 cameras with a wide range of functionality, performance, and price. In addition, you can connect up to three cameras to one CVS-1454 to significantly lower the price of your deployed system. You can also integrate the NI CVS-1454 with the industrial measurement and control capabilities of NI Compact FieldPoint. To program the CVS-1454, you have the choice of configuring your machine vision application quickly with Vision Builder for Automated Inspection, or programming your application with LabVIEW and the Vision Development Module.
Images in Memory
Images are not displayed directly when acquired.
Stored in a memory buffer so they can be accessed and manipulated
[Figure: memory buffer holding pixel intensity values, for example:
 21  23  25  23  24
 22  21  24  22 137
 17  18  20 137 135]
NI-IMAQ is a complete and robust API for image acquisition. Whether you are using LabVIEW, Measurement Studio, Visual Basic, or Visual C++, NI-IMAQ gives you high-level control of National Instruments image acquisition devices. NI-IMAQ performs all of the computer- and board-specific tasks, allowing straightforward image acquisition without register-level programming. NI-IMAQ is compatible with NI-DAQ and all other National Instruments driver software for easily integrating an imaging application into any National Instruments solution. NI-IMAQ is included with your hardware at no charge. NI-IMAQ features an extensive library of functions that you can call from your application programming environment. These functions include routines for video configuration, image acquisition (continuous and single-shot), memory buffer allocation, trigger control, and board configuration. NI-IMAQ performs all functionality required to acquire and save images. For image analysis functionality, refer to the IMAQ Vision software analysis libraries, which are discussed later in this course. NI-IMAQ resolves many of the complex issues between the computer and IMAQ hardware internally, such as programming interrupts and DMA controllers. NI-IMAQ also provides the interface path between LabVIEW, Measurement Studio, or other programming environments and the hardware product.
Trigger
Synchronize acquisition with real-world events
Display
Customize image display and user interface
NI-IMAQ and IMAQ Vision use five categories of functions to acquire and display images:
- Acquisition management functions: allocate and free memory used for storing images; begin and end image acquisition sessions
- Single-buffer acquisition functions: acquire images into a single buffer using the snap and grab functions
- Multiple-buffer acquisition functions: acquire continuous images into multiple buffers using the ring and sequence functions
- Display controls: display images for processing
- Trigger functions: link a vision function to an event external to the computer, such as receiving a pulse to indicate the position of an item on an assembly line
IMAQ Close
Ends a session
IMAQ Dispose
Removes an image from memory
IMAQ Init loads camera configuration information and configures the IMAQ device. Interface Name (default img0) refers to the device name from MAX. IMAQ Init generates an IMAQ Session that will be used to reference any future NI-IMAQ driver calls. IMAQ Close directs the board to stop acquiring images, release any allocated resources back to the system, and close the specified IMAQ Session. Restarting LabVIEW produces the same results as IMAQ Close. IMAQ Create creates an image buffer that you input into any of the acquisition functions of your IMAQ device. IMAQ Dispose disposes an image and frees the memory allocated for the image. Call IMAQ Dispose only when the image is no longer required for the remainder of the processing.
Snap: the image is acquired directly into a processing buffer.
Grab: images are acquired continuously into an acquisition buffer and copied into a processing buffer on request.
Snaps and grabs are the most basic types of acquisitions. You used these functions in the MAX configuration exercise earlier in the course. A snap is simply a snapshot, in which you acquire a single image from the camera. A grab is more like a video, in which you acquire every image that comes from the camera. The images in a grab are displayed successively, producing full-motion video at around 25 to 30 frames per second.
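The sustained bandwidth of a grab follows directly from the frame size and rate. This is a rough sizing sketch with assumed example numbers (1 MB = 10**6 bytes), useful when budgeting bus and memory throughput:

```python
def grab_data_rate_mb_s(width, height, bytes_per_pixel, fps):
    """Sustained data rate of a continuous grab, in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

# Example: 640 x 480, 8-bit monochrome camera at 30 frames/s
rate = grab_data_rate_mb_s(640, 480, 1, 30)  # just over 9 MB/s
```

At these rates a standard PCI bus keeps up easily; the calculation matters more for the high-speed digital cameras discussed later.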
These are high-level functions that configure your IMAQ device for you. If you choose to control more of the low-level features of your board, take a look at the LL Snap and LL Grab examples in the Example Finder.
Image Display
- Most common method of viewing images
- Display is not necessary for acquiring images
- Displayed image is not necessarily the acquired image:
  - Color palette may change or be limited
  - Viewed image may not represent all data acquired
One of the first things you will want to do when you acquire an image is display it on your monitor. LabVIEW 7.0 introduced the Image Display control, which lets you embed the image on the front panel. This is the easiest method and the one you will use during this course. Remember that when you display images, the display may differ from the actual image stored in memory. For instance, if your monitor only displays 16 or 256 colors, but you want to display an 8-bit monochrome image, the image may appear distorted depending on how many colors are being used by other programs. In addition, if you acquire a 10-, 12-, 14-, or 16-bit image, you will only be able to display an 8-bit representation of that image. (This is limited by the operating system and video output hardware.) Displaying your images is not required for acquisition and processing. If your application does not require a human operator, you may want to eliminate all displays from the application's source code.
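The 8-bit display limitation above can be illustrated with one common reduction strategy: keep only the 8 most significant bits of each pixel. This is a sketch of one possible mapping, not necessarily the one a given driver uses (drivers may instead scale between the image minimum and maximum):

```python
def to_8bit(pixel, source_bits):
    """Map a 10-, 12-, 14-, or 16-bit pixel value to the 0-255 range
    by discarding the low-order bits."""
    return pixel >> (source_bits - 8)

full_scale_10bit = to_8bit(1023, 10)  # 255: full scale maps to full scale
half_scale_12bit = to_8bit(2048, 12)  # 128: proportions are preserved
```

Whatever the mapping, the fine intensity gradations of the high-bit-depth image are lost on screen, which is why processing should always operate on the image in memory rather than on its displayed representation.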
WindDraw VIs
- Alternate display method based on floating (dialog) windows
- Separate palette and functions
- Provide up to 16 different display windows that are not connected to the main program window
IMAQ WindDraw opens or refreshes a display window. Use this VI to open up to 16 display windows and give each window certain properties, such as a window title.
Images in LabVIEW: The Image type is a pointer to a memory location (buffer) that holds image data. It does not follow usual dataflow; data can change without notice. Functions read data from this location without consideration for what that data is.
[Figure: sequence acquisition filling processing buffers 0 through n-1]
Sequences and rings are cousins to the other two types of acquisitions you have studied, snaps and grabs. A sequence of images is a one-shot acquisition that fills multiple buffers with images a single time. You can use NI-IMAQ functionality to specify a certain number of frames to skip between each acquisition buffer. For example, if your camera acquires 30 frames per second, a sequence of 30 images with no frames skipped takes one second to acquire, while a sequence of 30 images with one frame skipped after each buffer takes two seconds.
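The frame-skipping arithmetic above can be sketched as a small helper (the function name is ours, not an NI-IMAQ call):

```python
def sequence_time_s(num_buffers, frames_skipped, camera_fps):
    """Time to fill a sequence when `frames_skipped` camera frames are
    discarded after each buffer: every buffer consumes
    (1 + frames_skipped) frame periods."""
    return num_buffers * (1 + frames_skipped) / camera_fps

# The two cases described above, with a 30 frames/s camera:
no_skip = sequence_time_s(30, 0, 30)   # 1.0 second
one_skip = sequence_time_s(30, 1, 30)  # 2.0 seconds
```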
Ring
[Figure: ring acquisition cycling through acquisition buffers 0 through n-1]
A ring is similar to a sequence in that it uses multiple buffers, but a ring acquires images continuously. Each buffer is filled one at a time until all buffers contain an image. When all buffers are full, NI-IMAQ writes over the existing images one at a time, beginning with the first buffer. When you get ready to process or display an image from one of these buffers, you simply extract it from the ring, while the ring continues in the background. Note: A ring is typically better than a grab for obtaining images for Vision analysis. A ring function will extract one buffer while another buffer is being updated.
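The wrap-around behavior described above amounts to modular indexing. This is a sketch of the idea only; the actual buffer selection is handled internally by the NI-IMAQ driver:

```python
def ring_buffer_for_frame(frame_number, num_buffers):
    """Which buffer an incoming frame lands in: once every buffer has
    been filled, the driver wraps around and overwrites buffers starting
    again from the first one."""
    return frame_number % num_buffers

# With 4 buffers, frames 0..7 land in buffers 0,1,2,3,0,1,2,3
landing = [ring_buffer_for_frame(f, 4) for f in range(8)]
```

This is also why extracting one buffer for processing does not stall the acquisition: the driver keeps writing into the other buffers in the ring.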
Ring
- IMAQ Configure List: allocates memory for a list of acquisition buffers
- IMAQ Configure Buffer: adds a buffer to the list
- IMAQ Extract Buffer: extracts a buffer from the acquisition list into the processing buffer
The IMAQ Sequence VI fills up multiple buffers with a series of images. After initializing a session with the image acquisition device, the IMAQ Configure List VI is used to set parameters for the list of acquisition buffers used in the session. Each buffer is allocated in memory and assigned to the list with the IMAQ Configure Buffer VI. IMAQ Configure Buffer allocates one buffer at a time, so it must be used in a loop. Each buffer is given a unique name based on the loop index. During the acquisition, the IMAQ Extract Buffer VI is used to access a specific buffer in the list. The VI extracts an acquisition buffer into a processing buffer (which is automatically allocated by the driver) so the user can manipulate and view it.
Using Triggers
May need to coordinate an image acquisition with motion control, data acquisition, or real-world events.
Can drive or receive trigger signals
- RTSI bus for NI device synchronization
- External lines for real-world coordination
Examples:
- Start image acquisition with an external trigger received as the unit under test passes in front of the camera.
- Send a trigger to a DAQ card to acquire data as each frame is read from the camera.
All triggers, whether they are driven or received, have programmable polarity. In the following sections, a trigger is said to have a value of logical 0 if it is unasserted (if it is low for a high-true trigger signal and high for a low-true trigger signal), and a logical 1 if the trigger is asserted.
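The polarity convention above can be sketched as a tiny truth function; this is an illustration of the convention, not an NI-IMAQ call:

```python
def trigger_logical_value(line_is_high, high_true):
    """Logical trigger value per the convention above: a high-true
    trigger reads 1 when the physical line is high; a low-true trigger
    reads 1 when the physical line is low."""
    return 1 if line_is_high == high_true else 0

asserted_high_true = trigger_logical_value(True, high_true=True)    # 1
asserted_low_true = trigger_logical_value(False, high_true=False)   # 1
```

Programmable polarity means the same physical wiring can serve either convention; only the software configuration changes.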
Trigger
IMAQ Trigger Drive
Signal                   Description
Acquisition Done         Pulses to a logical 1 once the last piece of data from an image acquisition is captured by the IMAQ board
Acquisition in Progress  Remains a logical 1 from the time a capture command is sent to the IMAQ board until the last byte of data is captured
Pixel Clock              The clock signal used to latch incoming data from the A/D converter
Unasserted               Logical 0
Asserted                 Logical 1
HSYNC                    Horizontal synchronization signal produced at the beginning of each line by the camera
VSYNC                    Vertical synchronization signal produced at the beginning of each field by the camera
Frame Start              High when a frame is being captured
Frame Done               Asserted at the end of each frame that is captured
Trigger
Assert External Trigger 0:
1. Call IMAQ Init to get an interface number to the board.
2. Call the IMAQ Trigger Drive VI to drive External Trigger 2 asserted. The Get/Set input must be set to TRUE to set the value of the trigger.
Check the value of External Trigger Line 0:
1. Call IMAQ Init to get an interface number to the board.
2. Use the IMAQ Trigger Read VI to sense the current value of External Trigger 2. The value of the trigger is returned in Trigger Status.
The IMAQ Property Node and the IMAQ 1394 Property Node get and/or set image acquisition properties. The nodes are expandable. Evaluation starts from the top and proceeds downward until an error occurs or the final evaluation completes.