DIP UNIT 1 & 2 BOOK
Digital Image Processing
Uploaded by mythily.bme
© All Rights Reserved
TABLE OF CONTENTS

CHAPTER 1 - DIGITAL IMAGE FUNDAMENTALS
1.1 Introduction
1.2 The Origins of Digital Image Processing
    1.2.1 Bartlane Cable Picture Transmission System
    1.2.2 Development of Digital Computers
    1.2.3 Applications
1.3 Steps in Digital Image Processing
    1.3.1 Block Diagram
1.4 Components (or) Elements of an Image Processing System
    1.4.1 Block Diagram
1.5 Elements of Visual Perception
    1.5.1 Structure of the Human Eye
    1.5.2 Image Formation in the Eye
    1.5.3 Brightness Adaptation
    1.5.4 Brightness Discrimination
1.6 Image Sensing and Acquisition
    1.6.1 Image Acquisition Using a Single Sensor
    1.6.2 Image Acquisition Using Sensor Strips
    1.6.3 Image Acquisition Using Sensor Arrays
    1.6.4 Image Formation Model
1.7 Image Sampling and Quantization
    1.7.1 Generating a Digital Image
    1.7.2 Digital Image Representation
    1.7.3 Spatial Resolution
    1.7.4 Gray-Level Resolution
    1.7.5 Aliasing Effect
    1.7.6 Moire Patterns
    1.7.7 Zooming of Digital Images
    1.7.8 Shrinking of Digital Images
1.8 Relationships Between Pixels
    1.8.1 Neighbors of a Pixel
    1.8.2 Adjacency
    1.8.3 Path
    1.8.4 Connectivity
    1.8.5 Region
    1.8.6 Boundary
    1.8.7 Edge
    1.8.8 Distance Measures
    1.8.9 Image Operations on a Pixel Basis
1.9 Fundamentals of Color Image Processing
    1.9.1 Characterization of Light
    1.9.2 Primary and Secondary Colors
    1.9.3 Trichromatic Coefficients
    1.9.4 Chromaticity Diagram
1.10 Color Models
    1.10.1 The RGB (Red, Green, Blue) Color Model
    1.10.2 The HSI (Hue, Saturation, Intensity) Color Model
    1.10.3 Image Format Conversion
    1.10.4 Advantages of the HSI Model
Short Questions and Answers

CHAPTER 2 - IMAGE ENHANCEMENT
2.1 Introduction
2.2 Spatial Domain Methods - Fundamentals
    2.2.1 Defining a Neighborhood
    2.2.2 Gray Level Transformation Function T(r)
    2.2.3 Mask Processing (or) Filtering
2.3 Gray Level Transformations
    2.3.1 Negative Transformation
    2.3.2 Log Transformations
    2.3.3 Power-Law Transformations
    2.3.4 Piecewise-Linear Transformation Functions
2.4 Histogram Processing
    2.4.1 Histogram Equalization (or) Histogram Linearization
    2.4.2 Histogram Matching (or) Histogram Specification
    2.4.3 Local Histogram Processing
    2.4.4 Use of Histogram Statistics for Image Enhancement
2.5 Basics of Spatial Filtering
    2.5.1 Filtering Technique
    2.5.2 Types
    2.5.3 Problems Encountered
    2.5.4 Ways to Handle the Problems
    2.5.5 Applications
2.6 Smoothing Spatial Filtering
    2.6.1 Smoothing by Linear Filters
    2.6.2 Smoothing by Non-Linear Filters
2.7 Sharpening Spatial Filtering
    2.7.1 Image Enhancement Using Second Derivatives - the Laplacian Filters
    2.7.2 Image Enhancement Using First Derivatives - the Gradient
    2.7.3 Comparison Between First- and Second-Order Derivatives
2.8 Frequency Domain Filtering
    2.8.1 Filtering
    2.8.2 Filters Used
2.9 Introduction to the Fourier Transform
    2.9.1 The One-Dimensional Fourier Transform
    2.9.2 The Two-Dimensional Fourier Transform
    2.9.3 The Discrete Fourier Transform (DFT)
    2.9.4 The Fast Fourier Transform (FFT)
    2.9.5 Properties of the Two-Dimensional Fourier Transform
2.10 Smoothing Frequency-Domain Filters
    2.10.1 Ideal Lowpass Filter (ILPF)
    2.10.2 Butterworth Lowpass Filter (BLPF)
    2.10.3 Gaussian Lowpass Filter (GLPF)
    2.10.4 Applications of Lowpass Filtering
2.11 Sharpening Frequency-Domain Filters
    2.11.1 Ideal Highpass Filter (IHPF)
    2.11.2 Butterworth Highpass Filter (BHPF)
    2.11.3 Gaussian Highpass Filter (GHPF)
    2.11.4 The Laplacian in the Frequency Domain
    2.11.5 Enhancing the Filtered Images
Short Questions and Answers

CHAPTER 3 - IMAGE RESTORATION AND SEGMENTATION
3.1 Image Restoration - Introduction
3.2 Image Restoration / Degradation Model
3.3 Noise Models
    3.3.1 Noise Probability Density Functions (PDFs)
    3.3.2 Periodic Noise
    3.3.3 Estimation of Noise Parameters
3.4 Mean Filters
    3.4.1 Arithmetic Mean Filter
    3.4.2 Geometric Mean Filter
    3.4.3 Harmonic Mean Filter
    3.4.4 Contraharmonic Mean Filter
3.5 Order-Statistic Filters
    3.5.1 Median Filter
    3.5.2 Max Filter
    3.5.3 Min Filter
    3.5.4 Midpoint Filter
    3.5.5 Alpha-Trimmed Mean Filter
3.6 Adaptive Filters
    3.6.1 Adaptive Local Noise Reduction Filters
    3.6.2 Adaptive Median Filter
    3.6.3 Advantages over the Traditional Median Filter
3.7 Periodic Noise Reduction
3.8 Band Reject Filters
3.9 Band Pass Filters
3.10 Notch Filters
3.11 Optimum Notch Filtering
    3.11.1 Procedure
3.12 Inverse Filtering
    3.12.1 Concept
    3.12.2 Drawbacks
    3.12.3 Limitation
    3.12.4 Application
3.13 Wiener (or) Least Mean Square (LMS) Filtering
    3.13.1 Mean Square Error
    3.13.2 Assumptions Made
    3.13.3 Approximated Image
    3.13.4 Advantages
    3.13.5 Disadvantages
3.14 Image Segmentation - Introduction
3.15 Detection of Discontinuities
    3.15.1 Point Detection
    3.15.2 Line Detection
    3.15.3 Edge Detection
3.16 Edge Linking and Boundary Detection
    3.16.1 Local Processing
    3.16.2 Regional Processing
    3.16.3 Global Processing Using the Hough Transform
3.17 Region-Based Segmentation
    3.17.1 Region Growing
    3.17.2 Region Splitting and Merging
3.18 Morphological Processing
    3.18.1 Dilation
    3.18.2 Erosion
Short Questions and Answers

CHAPTER 4 - WAVELETS AND IMAGE COMPRESSION
4.1 Wavelets and Multiresolution Processing - Introduction
4.2 Subband Coding
    4.2.1 Subband Coding Technique
    4.2.2 Digital Filtering
    4.2.3 Two-Band Subband Image Coding
    4.2.4 Four-Band Subband Image Coding
4.3 Multiresolution Expansions
    4.3.1 Series Expansions
    4.3.2 Scaling Functions
    4.3.3 Wavelet Functions
4.4 Image Compression - Introduction
4.5 Need for Data Compression
4.6 Fundamentals
    4.6.1 Data Redundancy
    4.6.2 Types of Redundancy
    4.6.3 Fidelity Criteria
4.7 Image Compression Models
    4.7.1 The Source Encoder
    4.7.2 The Source Decoder
    4.7.3 The Channel Encoder
    4.7.4 The Channel Decoder
4.8 Error-Free (or) Lossless Compression
4.9 Variable-Length Coding
    4.9.1 Huffman Coding
    4.9.2 Near-Optimal Variable Length Codes
    4.9.3 Arithmetic Coding
4.10 Bit-Plane Coding and Run-Length Coding
    4.10.1 Bit-Plane Decomposition
    4.10.2 Compression of Bit Planes
4.11 Lossless Predictive Coding
    4.11.1 Encoder
    4.11.2 Decoder
    4.11.3 Role of the Prediction Error
    4.11.4 Advantages
4.12 Lossy Compression
4.13 Lossy Predictive Coding
    4.13.1 Encoder
    4.13.2 Decoder
4.14 Image Compression Standards
    4.14.1 Binary Image Compression Standards
    4.14.2 Continuous Tone Still Image Compression Standards - JPEG
    4.14.3 Video Compression Standard - MPEG
Short Questions and Answers

CHAPTER 5 - IMAGE REPRESENTATION AND RECOGNITION
5.1 Image Representation and Description - Introduction
5.2 Boundary Representation
    5.2.1 Chain Codes
    5.2.2 Polygonal Approximations (or) Polygonal Representations
    5.2.3 Boundary Segments
    5.2.4 Signatures
    5.2.5 Skeletons
5.3 Boundary Descriptors
    5.3.1 Simple Descriptors
    5.3.2 Fourier Descriptors
    5.3.3 Shape Numbers
    5.3.4 Statistical Moments
5.4 Regional Descriptors
    5.4.1 Simple Descriptors
    5.4.2 Texture
    5.4.3 Topological Descriptors
    5.4.4 Moments of Two-Dimensional Functions
5.5 Object Recognition - Introduction
5.6 Patterns and Pattern Classes
    5.6.1 Vectors
    5.6.2 Strings
    5.6.3 Trees
5.7 Recognition Based on Matching
    5.7.1 Decision-Theoretic Methods
    5.7.2 Matching Techniques
    5.7.3 Minimum Distance Classifier
    5.7.4 Matching by Correlation
Short Questions and Answers

Appendix A: Anna University Solved Question Papers

CHAPTER 1 - DIGITAL IMAGE FUNDAMENTALS

1.1 INTRODUCTION
An image contains descriptive information about the object it represents. An image is defined as a two-dimensional function f(x, y), where x and y are known as spatial or plane coordinates. The amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.

Analog Image
An analog image is mathematically represented as a continuous range of values giving position and intensity. Examples: camera film and, in general, the objects we can see.

Digitization
Digitization is the process of transforming images, text or sound from analog media into electronic (digital) data that we can save, organize, retrieve and restore through electronic devices.

Digital Image
A digital image is created through the process of digitization.
It is the representation of a two-dimensional (2D) image using ones and zeros. All the amplitude values and coordinate values (x, y) in a digital image are finite.

Pixels
- Pixels are the small individual elements of a digital image.
- They are also known as image elements or picture elements.
- Each pixel has a particular location and a brightness or intensity value.
- A finite number of pixels forms a digital image.

Image Processing
Image processing is defined as the process of analyzing and manipulating images using a computer.

Analog Image Processing
Any image processing task conducted on two-dimensional analog signals by analog means is known as analog image processing.

Digital Image Processing (DIP)
Using computer algorithms to perform image processing on digital images is referred to as digital image processing, i.e. processing digital images by means of a digital computer. The main advantages of DIP over analog image processing are:
- It allows a wide range of algorithms to be applied to the input data.
- It avoids noise and signal-distortion problems.

Need for Digital Image Processing
The important needs for DIP are:
- To improve pictorial information for human interpretation.
- To process image data for storage, transmission and representation for autonomous machine perception.

Fundamental Steps
The fundamental steps in digital image processing are:
- Image acquisition
- Image enhancement
- Image restoration
- Image compression
- Image segmentation
- Image representation and description
All these steps are explained in the following chapters.
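The definitions above (finite coordinates, finite gray levels, pixels as the elements of the image) can be made concrete with a small sketch. This uses NumPy, and the pixel values are invented purely for illustration:

```python
import numpy as np

# A tiny 4 x 4 digital image: the value at [y, x] is the gray level f(x, y),
# quantized to 8 bits, i.e. a finite integer in [0, 255].
image = np.array([
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 16,  80, 144, 208],
    [  8,  72, 136, 200],
], dtype=np.uint8)

print(image.shape)                         # finite spatial extent: (4, 4)
print(image[1, 2])                         # gray level of one pixel: 160
print(int(image.min()), int(image.max()))  # amplitudes are bounded: 0 255
```

Every operation discussed in the later chapters (enhancement, filtering, compression, segmentation) is ultimately an operation on an array of this kind.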
Applications
Digital image processing is mainly applied in the following fields:
- Gamma-ray imaging
- X-ray imaging
- Imaging in the ultraviolet (UV) band
- Imaging in the visible and infrared (IR) bands
- Imaging in the microwave band
- Imaging in the radio band
- Ultrasound imaging

1.2 THE ORIGINS OF DIGITAL IMAGE PROCESSING
The history and development of digital image processing are summarized in this section.

1.2.1 Bartlane Cable Picture Transmission System
In the 1920s, newspaper pictures were sent by submarine cable between London and New York, and sending a single picture took more than a week. The Bartlane cable picture transmission system was then introduced. It used specialized printing equipment to code a picture before sending and to reconstruct it after receiving. The early Bartlane systems could code images in only five distinct gray levels.
In 1929, Bartlane systems with 15 gray levels were introduced, and further methods were developed to improve the visual quality of the images. One important development was the film-plate reproduction method using light beams.

1.2.2 Development of Digital Computers
Digital image processing in its present sense began with the invention of digital computers, because it requires considerable storage space and computational power. Although the basic idea of computing dates back more than 5000 years to the abacus, modern digital computers began to be developed in the 1940s with the introduction of two Von Neumann concepts:
(i) a memory to hold a stored program and data, and
(ii) conditional branching.
These two are the basic ideas behind the Central Processing Unit (CPU).
Computers then developed rapidly in the following years, as summarized below:
(i) 1948 - invention of the transistor at Bell Laboratories
(ii) 1950s-1960s - development of high-level programming languages: COBOL, FORTRAN
(iii) 1958 - invention of the integrated circuit (IC) at Texas Instruments
(iv) 1960s - development of operating systems (OS)
(v) 1970s - development of the microprocessor by Intel; miniaturization of components started with Large Scale Integration (LSI)
(vi) 1981 - introduction of the personal computer by IBM
(vii) 1980s - Very Large Scale Integration (VLSI), which later developed into Ultra Large Scale Integration (ULSI)
These developments, together with progress in mass storage and display systems, further improved digital image processing techniques, which then began to be applied in different areas.

1.2.3 Applications
(1) Space Applications
In 1964, pictures of the moon transmitted by Ranger 7 were corrected by computer to remove image distortions; this was the first use of image processing techniques. The work was carried out at the Jet Propulsion Laboratory, California.

(2) Medical Imaging
A very important event in this field was the invention of Computerized Tomography (CT) in the 1970s. By passing X-rays through an object, this technology produces a three-dimensional (3D) rendition of the inside of the object. The two inventions, CT by Sir Godfrey N. Hounsfield and Professor Allan M. Cormack, and the X-ray by Wilhelm Conrad Roentgen in 1895, received Nobel prizes and are the basis for some of the most important applications of image processing today. From the 1960s until the present there has been vigorous growth in the image processing field.

(3) Geography
Digital image processing techniques are used to study pollution patterns from aerial and satellite imagery. Image enhancement and restoration procedures are used to process degraded images of unrecoverable objects and to duplicate expensive experimental results. The methods are also used for remote Earth-resources observations.

(4) Archeology
Image processing methods are used to restore blurred pictures that were the only available records of rare artifacts.

(5) Physics-Related Fields
Using computer techniques, images from experiments such as high-energy plasmas and electron microscopy are enhanced.

(6) Machine Perception
Image processing techniques also help solve machine-perception problems. Examples include character recognition, industrial machine vision, fingerprint processing, and weather prediction from aerial and satellite imagery.

(7) Other Areas
Digital image processing methods are also applied in astronomy, biology, nuclear medicine, law enforcement, defense and industry. The techniques continue to develop and are being applied in ever more areas.

1.3 STEPS IN DIGITAL IMAGE PROCESSING
All digital image processing methods can be broadly divided into two categories:
(1) methods whose inputs and outputs are images, and
(2) methods whose inputs are images but whose outputs are attributes, i.e. features extracted from those images.

1.3.1 Block Diagram
A block diagram showing all the processing steps in digital image processing is given in Fig. 1.1.

[Fig. 1.1 Steps in Digital Image Processing: (1) image acquisition, (2) image filtering and enhancement, (3) image restoration, (4) color image processing, (5) wavelets and multiresolution processing, (6) image compression, (7) morphological processing, (8) image segmentation, (9) image representation and description, (10) object recognition, all connected to a common knowledge base and to the problem domain.]

Among the modules shown, (1)-(6) are methods whose outputs are images, while (7)-(10) are methods whose outputs are image attributes.

Knowledge Base
- The knowledge base represents knowledge about the problem domain.
- It may be simple, such as details of image regions, or complex, such as an image database containing high-resolution satellite images for change-detection applications.
- It guides the operation of each processing module in Fig. 1.1.
- It also controls the interaction between processing modules.

(1) Image Acquisition
- Image acquisition is the process of capturing or generating digital images using imaging sensors.
- It can be as simple as being given an image that is already in digital form.
- This stage usually involves preprocessing, such as scaling.

(2) Image Enhancement
- Image enhancement is the process of manipulating an image so that the result is more suitable than the original for a specific application.
- There is a variety of enhancement techniques using many different image processing approaches.
- These methods are subjective and hence problem-oriented.

(3) Image Restoration
- Image restoration is also a process of improving the appearance of an image.
- Restoration techniques are objective, i.e. based on mathematical or probabilistic models of image degradation.

(4) Color Image Processing
- Color is one of the most important features extracted from an image.
- Color image processing techniques treat an image's color as an important attribute in addition to its other attributes.

(5) Wavelets and Multiresolution Processing
- Wavelets are used to represent images at various degrees of resolution.
- They are mainly used for:
  - image data compression, and
  - pyramidal representation, the process of subdividing images successively into smaller regions.
(10)Image Recognition - Image recognition is a process that assigns a label or name to an object identified from an image, based on its descriptors. Scanned with CamScannerDigital Image Processing 1.9 Output — The output of processing the image ca 7 shown fa fig. 11 ig the image can be obtained from any stage of the modules — The modules that are required for an application i applica bosckend. pplication is totally dependent on the problem to 1.4 COMPONENTS (or) ELEMENT: : SYSTEM ‘S OF IMAGE PROCESSING The elements of a general-purpose image processing system are. (1) Image sensors (2) Specialized image processing hardware (3) A Computer (4) Image processing software (3) Mass storage (6) Image displays (7) Hardcopy devices (8) Network 14.1 Block Diagram The block diagram of a general purpose image processing system is shown in fig 1.2. (1) Image sensors Image sensing or image acquisition is used to acquire i.e. to get digital images. It requires two elements, which are, a. A physical device that is sensitive to the energy radiated by the object to be imaged. Example: A digital video camera ligitizer to convert the output of the physical sensing device into ital form. bA Scanned with CamScanner1.10 Digital Image Fundamentals NETWORK ———_ COMPUT! MASS IMAGE — KB} COMPUTER lame) Binye STORAGE DISPLAYS SPECIALIZED. HARDCOPY IMAGE, ROG PROCESSING SOFTWARE HARDWARE, [ IMAGE, SENSORS t PROBLEM DOMAIN Fig, 1.2 Elements of Digital Image Processing System (2) Specialized Image Processing Hardware ¢ This hardware consists of the digitizer and some hardware to perform other basic operations. Example: Arithmetic Logic Unit (ALU) which perform arithmetic and logical operations on entire images in parallel. * This type of hardware is also known as front-end subsystem. . The main feature of this hardware is its high speed. Therefore, fast functions which cannot be performed by the main computer can be handled by this unit. 
(3) Computer The computer used in an image processing system is a general - purpose computer. It can range from a personal computer (PC) to a supercomputer. In some dedicated applications, specially designed compute : the required performance. ign puters are used to achieve Scanned with CamScannerDigital Image Processing 1.1! (4) Image processing software The software for image processing has specialized modules which perform specific tasks. Some software packages have the facility for the user to write code using the specialized modules. (5) Mass storage Since images require large amount of storage space, mass storage capability is very important in image proce 8 applications, Example: A 1024 x 1024 size image with each pixel represented in 8 - bits, Tequires 1 megabyte of storage space, without compression. Measurement: Storage is measured in the following units: e — Bytes = 8 bits ¢ K bytes (Kilobytes) = One thousand bytes e Mbytes (Megabytes) = One million bytes * Gbytes (Gigabytes) = One billion bytes © T bytes (Terabytes) = One trillion bytes Types: There are three categories of digital storage for image processing applications. They. are, . (i) Short-term storage for use during processing ~ This can be provided by using computer memory or frame buffers. - Frame buffers are specialized boards that can store one or more images and can be accessed rapidly at video rates. This method allows instantaneous image zoom, scroll (vertical shifts) and pan (horizontal shifts) also. Gi) On-line storage for fast recall - This type of storage gives frequent access to the stored data, ~ Itis provided by magnetic disks and optical - media storage. (iii) Archival storage for infrequent access - It requires large amount of storage space and the stored data is accessed _ infrequently. Scanned with CamScanner1.12 Digital Image Fundamentals ~ Magnetic tapes and optical disk packed in “jukeboxes” provide this type of storage. (© Image displays i itors. 
Thes itors are driven by the ‘only used displays are color TV monitors. These monit amt of siimage and graphics display cards” which are a part of the computer system, If stereo display are needed in sonie applications, a headgear containing two small displays is used. (7) Hardcopy devices Hardcopy devices are used for recording images. These devices include, e Laser printers e Film cameras e Heat - sensitive devices e Inkjet units © Digital units like optical and CD - ROM disks etc. Eventhough the highest resolution is provided by camera film, the written material preferred is paper. (8) Network Networking is a function used in all computer systems today. Since image processing applications need large amount of data, the main consideration here is the bandwidth. Also, communications with remote sites are done through the Internet, which uses optical fiber and other broadband technologies. 15 ELEMENTS OF VISUAL PERCEPTION Vision is the most advanced huma: i n sense. So, images play the most i t role i human perception. camel suporsant role in Human visual perception is very important because the ion of i i i is selection of ima: techniques is based only on visual judgements. ge processing Scanned with CamScannerDigital Image Processing 1.13 5.1 Structure of the Human Eye The human eye is nearly in the shape of a sphere. Its average diameter is approximately 20mm. The eye, called the optic globe is enclosed by three membranes known as, (1) The Comea and Sclera outer cover (2) The Choroid and (3) The Retina The simplified horizontal cross section of the human eye is shown in fig. 1.3. Choroid Sclera \ Anterior ‘Chamber Ciliary Fibers ciliary Muscle Fig, 1.3 Human Eye - Cross Section (1) The Cornea and Sclera outer cover sete. tom © The cornea is a tough, transparent tissue that covers the anterior ie. surface of the eye. is ‘inuous © The sclera is an opaque (i-¢. not transparent) membrane that is continu with the cornea and encloses the remaining portion of ‘the eye. 
Scanned with CamScanner1.14 Digital Image Fundamentals (2) The Choroid i The choroid is located directly below the sclera. It has a network of blood vessels which are the major nutrition source to . . the eye. © Slight injury to the choroid can lead to severe eye damage as it causes restriction of blood flow. The outer cover of the choroid is heavily pigmented i.e. colored. This reduces the amount of light entering the eye from outside and backscatter within the optical globe. F ; \ e The choroid is divided into two at its anterior extreme as, (i) The Ciliary Body and Gi) The Iris Diaphragm © Iris Diaphragm: e Lens: It contracts and expands to control the amount of light enters the eye. The central opening of the iris is known as the pupil, whose diameter varies from 2 to 8 mm. The front of the iris contains the visible pigment of the eye and the back has a black pigment. The lens is made up of many layers of fibrous cells, It is suspended i.e. hang up by the fibers attached to the ciliary body. It contains 60 % to 70% water, 6% fat and more protein © Cataracts: The lens is colored by a slightly yellow pigmentation, This coloring increases with age, which leads to the clouding of lens, Excessive clouding of lens happens in extreme cases which is known as “cataracts”. This leads to poor color discrim ination (i.e. differentiation) and loss ¢ ation) Scanned with CamScannerQ) The Retina Digital Image Processing 1.15 The retina is a the innermost membrane of the eye. It covers the inside of the wall’s entire posterior i.e. back portion. Fovea: - The central portion of the retina is called the fovea. - Itisa circular indentation with a diameter of 1.5mm. Light receptors: - When the eye is properly focused, light from an object outside the eye is imaged on the retina. - Light receptors provide this “pattern vision” to the eye. - These receptors are distributed over the surface of the retina. 
- There are two classes of discrete light receptors, known as ° (ii) Rods (i) Cones and (ii) Rods (i) Cones In each eye 6 to 7 million cones are present. They are highly sensitive to color and are located in the fovea. Each cone is connected with its own nerve end. Therefore, humans can resolve fine details with the use of cones. Cone vision is called photopic or bright - light vision. The number of rods in each eye ranges from 75 to 159 million. They are sensitive to low levels of illumination i.e. lightings and are not involved in color vision. Many number of rods are connected to a common, single nerve. Thus, the amount of detail recognizable is less. ‘Therefore, rods provide only a general, overall picture of the field of view. Scotopic or dim-light vision: in i jects that appear with Due to the stimulation of rods, the objects that y bright color in daylight, will appear colorless in moonlight. This phenomenon is called as ‘scotopic or dim-light vision’. Scanned with CamScanner1,16 Digital Image Fundamentals Rods and Cones Distribution The distribution of rods and cones in the retina is illustrated in Fig. 1.4. The recepto, density is measured in degrees from the fovea. Navef rods or cones per mm? 180,000 135,000 90,000 45,000 Degrees from Visual Axis (Center of Fovea) Fig. 1.4 Distribution of Rods and Cones From the above figure, the fol lowing points are understood. Blind spot: The area in which there is no presence of light receptors is called the “blind spot”. (see in fig. 1.3 also) Receptors have symmetrical distribution about the fovea in all regions except the blind spot, Rods density increases from the Centre and it starts decreasing from 20° off axis, Cones have higher density in the center of the retina i.e. fovea. 1.5.2 Image Formation in the Eye Role of the Lens . 
The lens of the eye is flexible, whereas an ordinary optical lens is not, The radius of curvature of the an 0 terior surface of the lens is greater than the radius Of its posterior surface, (i) To focus distance objects (3m) the lens is made Flattened by the controlling muscles and it will have lowest refractive power, Scanned with CamScannerDigital Image Processing 1.17 (ii) To focus nearer objects, the muscles allow the lens to become thicker, and most strongly refractive, Focal Length ¢ The distance between the center of the lens length? and the retina is called the ‘focal ¢ Itranges from 14mm to 17mm ra as the refractive power decreases from maximum to minimum, To Calculate the Retinal Image Size To calculate the size of the retinal image, consider fig. 1.5 in which the observer is looking at a tree of height 20mm at a distance of 110m. Retinal image -/-——__—____—_- ttm Fig. 1.5 Image Formation in the Eye ‘Now, the object is at a distance > 3m. The refractive power of the lens is minimum. = Focal length = 17mm 20 h From fig. 1.5, =, monn Bgl 5 110” 17x10" > Size of the retinal image, h = Perceiving an Object The process of perceiving an object is done by the following steps: @ First, the retinal image of height ‘h’ is reflected in the area of fovea. Gi) Then, perception takes place by the relative excitation of light receptros. (iii) The receptors transform the radiant energy into electrical impulses. (iv) At last, these electrical impulses are decoded by the brain. Scanned with CamScanner1.18 Digital Image Fundamentals 1.5.3 Brightness Adaptation (Adaptation — Adjustable for a new situation) Human visual system cannot operate over a wide range of light intensity levels simultaneously. This large variation is accomplished by changing its overall sensitivity, Thi phenomenon is known as ‘brightness adaptation’. 
Brightness Adaptation Level
The current sensitivity level of the visual system for any given set of conditions is called the 'brightness adaptation level'.

Adaptable Range
The range of intensity levels to which the visual system can adapt is of the order of 10^10. This range extends from the scotopic threshold (i.e. dim light) to the glare limit (i.e. strong light).

Subjective Brightness
The light intensity as perceived by the human visual system is known as the 'subjective brightness'. It is a logarithmic function of the light intensity incident on the eye. A curve of light intensity versus subjective brightness is shown in fig. 1.6.

Fig. 1.6 Light Intensity vs Subjective Brightness (subjective brightness, from the scotopic threshold to the glare limit, plotted against the log of intensity in millilamberts, mL, with the adaptation range marked)

In the figure:
- The long solid curve 'A' represents the range of intensities to which the visual system can adapt.
- The short intersecting curve (B-C) represents the range of subjective brightness at that adaptation level.
- The double branches show that the transition from scotopic to photopic vision is gradual over the range from 0.001 to 0.1 mL.

1.5.4 Brightness Discrimination
(Discrimination: perceiving things as different)
The ability to discriminate, i.e. differentiate, various intensity levels is very important in presenting image processing results, since digital images are displayed as a discrete set of intensities. Also, the total range of intensity levels that can be discriminated simultaneously is smaller than the total adaptation range.

1.5.4.1 Experimental Setup
The experimental setup used to characterize the brightness discrimination of the human visual system is shown in fig. 1.7.

Fig. 1.7 Characterization of Brightness Discrimination (a diffuser uniformly illuminated at intensity I, with a central short-duration flash of intensity I + dI)

The setup is a flat area acting as a diffuser, such as opaque glass. It is uniformly illuminated by a light source with a variable light intensity, I.
Case (i)
The intensity of the field is increased by an amount dI in the form of a short-duration flash. This appears as a circle in the center of the area, as shown in fig. 1.7.

Therefore, the total illumination of the field = I + dI.

Now,
- If dI is not bright enough => there is no perceivable change and the subject says "no".
- If dI is strong enough => there is some perceived change and the subject says "yes".

Weber Ratio:
It is given by,

    Weber ratio = dIc / I        ... (1.1)

where dIc is the increment of illumination that is distinguishable 50% of the time against a background illumination I.

- A small Weber ratio => "good" brightness discrimination, because only a small percentage change in intensity is recognizable.
- A large Weber ratio => "poor" brightness discrimination, because a large percentage change in intensity is needed.

The Weber ratio as a function of intensity is shown in fig. 1.8.

Fig. 1.8 Weber Ratio as a Function of Intensity (log dIc/I plotted against log I)

Case (ii)
In this case, the background illumination is kept constant and the other source intensity is increased from a very low to a very high level. In total, an observer can see about twenty-four different intensity changes. This gives the number of different intensities that can be seen at any one point in a monochrome image. It implies that the eye is capable of a much broader range of overall intensity discrimination, and that it can also detect objectionable contouring effects.

1.5.4.2 Perceived Brightness and Intensity
In actual fact, perceived brightness is not a function of intensity alone. This concept can be explained with the use of two phenomena, namely:
(1) The Mach band phenomenon
(2) Simultaneous contrast

(1) Mach Band Phenomenon
It shows that the human visual system tends to undershoot or overshoot around the boundary regions of different intensities.
An example of this phenomenon, described by Ernst Mach in 1865, is shown in fig. 1.9.

Fig. 1.9 The Mach Band Phenomenon: (a) intensity stripes, (b) perceived brightness

Here, the intensity of each individual stripe in fig. 1.9 (a) is constant. But the visual system perceives a strongly scalloped brightness pattern, as shown in fig. 1.9 (b). These seemingly scalloped bands are called "Mach bands".

(2) Simultaneous Contrast
This phenomenon shows that the perceived brightness does not depend simply on intensity. An example of simultaneous contrast is shown in fig. 1.10.

Fig. 1.10 Simultaneous Contrast

Here, all the inner squares have the same intensity. But as the background gets lighter, they appear to become darker.

1.5.4.3 Optical Illusions
(Illusion: something that does not exist)
Optical illusions are a characteristic of the human visual system in which the eye fills in non-existing information or wrongly perceives the geometrical properties of objects. Some examples of optical illusions are shown in fig. 1.11:
(a) The outline of a square is seen in the figure, even though no lines defining such a square are given.
(b) Just a few lines create the illusion of a complete circle.
(c) Even though the two line segments are of the same length, they seem to be different.
(d) All the lines are oriented at 45 degrees, equidistant and parallel. But the crosshatching gives the illusion that they are not parallel.

Fig. 1.11 Optical Illusions

1.6 IMAGE SENSING AND ACQUISITION
- Image sensing and acquisition is the process of capturing or generating digital images using imaging sensors.
- The images are generated by the combination of two things:
  - An illumination source
  - Reflection or absorption of energy from the elements of the scene
The type of reflected energy depends on the scene being imaged.

Types of Sensors
Three types of imaging sensors are mainly used for image acquisition. They are:
(1) Single imaging sensor
(2) Line sensor
(3) Array sensor

Imaging Process
The incoming illumination energy is transformed into digital images in the following way:
- The sensor material detects the particular type of energy, combines it with input electrical power, and transforms it into a voltage.
- The voltage obtained at the output of the sensor is digitized, and this gives the digital image.

1.6.1 Image Acquisition Using a Single Sensor

Components
The components of a single sensor are shown in fig. 1.12.

Fig. 1.12 Single Imaging Sensor (incoming energy passes through a filter onto the sensing material inside a housing; with electrical power in, the sensor produces a voltage output)

The filter in front of the sensor improves 'selectivity': e.g., if a green-pass filter is placed in front of the sensor, the output will be stronger for green light, and so on.

Example
A good example of a single sensor is the photodiode. The photodiode is constructed from silicon materials, and its output waveform is proportional to the incident light energy.

Generation of a 2-D Image
To generate a two-dimensional image, relative displacement between the sensor and the area to be imaged is needed in both the x and y directions. For this purpose, the sensor is arranged as shown in fig. 1.13.

Fig. 1.13 2-D Image Generation (a film negative mounted on a rotating drum, with the single sensor on a lead screw providing linear motion from left to right)

Here,
- The film negative mounted on a drum provides displacement in one direction as the drum rotates.
- The single sensor placed on a lead screw provides motion in the perpendicular direction.
- Thus, a 2-D image is generated.

Advantages: (i) This method is inexpensive. (ii) High-resolution images can be obtained.
Drawback: It is a slow method.

Other Arrangements
(1) In one method, a flat bed is used and the sensor moves in two linear directions. Such mechanical digitizers are called 'microdensitometers'.
oes, ae ; : : i This mechanical digitizers-are called ‘microdensitiomerer 1m tao dic Scanned with CamScannerDigital Image Processing 1.25 (2) In another method, the sensors is placed with a laser source. Moving mirrors are used to reflect the laser signal onto the sensor, 1.62. Image Acquisition Using Sensor Strips Sensor strips are more frequently used for image acquisition than single sensors. ‘Arrangement Ina sensor strip, a number of sensors are arranged in-line as shown in fig. 1.14. Fig. 1.14 A sensor strip This sensor strip can be used in two different ways. (1) Linear sensor strip (2) Circular sensor strip ()) Linear Sensor Strip - Linear strip method is used in most flat band scanners. - Inasingle strip, more than 4000 in-line sensors can be connected. - The image acquisition using linear sensor strip is illustrated in fig. 1.15. Linear. motion Sensor strip Fig, 1.15 Linear sensor strip imaging Scanned with CamScanner1.26 Digital Image Fundamentals Here, the strip provides image elements in one direction and the motion perpendicular to the strip provides imaging in the other direction. Thus, 2-D image acquisition is done. Application: This method of image acquisition is mainly used in airborne applications where the imaging system is mounted on an aircraft that flies at a constant height and speed over the geographical area to be imaged. (2) Circular Sensor Strip In this arrangement, sensor strips are mounted in a ring configuration. An important thing here is that the images are not directly obtained by the motion of sensors. The sensed data must be processed by reconstruction algorithms in order to convert them into cross-sectional images. One such arrangement is shown in fig. 1.16. Cross — sectional image image of 3D object reconstruction 3-D objects Linear motion Sensor ring Fig. 1.16 Circular sensor strip imaging Scanned with CamScannerDigital Image Processing 1.27 Here, a rotating x-ray source provides illumination. 
The sensors opposite the source collect the X-ray energy that passes through the object. The collected data is processed further, and images are produced.

- Applications: This image acquisition method is used in medical and industrial imaging to obtain cross-sectional images of 3-D objects. Some specific applications are:
  - Computerized Axial Tomography (CAT)
  - Magnetic Resonance Imaging (MRI)
  - Positron Emission Tomography (PET)

1.6.3 Image Acquisition Using Sensor Arrays
- This type of image acquisition is found commonly in electromagnetic and ultrasonic sensing devices, especially in digital cameras.

Arrangement
- For this image acquisition process, individual sensors are arranged in the form of a two-dimensional array, as shown in fig. 1.17.

Fig. 1.17 Array Sensor

- More than 4000 x 4000 elements or sensors can be packed in this arrangement.
- The sensor array used in digital cameras and other light-sensing instruments is known as a 'CCD array'.
- In CCD sensors, the response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor.

Image Generation
The arrangement for image acquisition using sensor arrays is shown in fig. 1.18.

Fig. 1.18 Image Acquisition Using Sensor Arrays (energy from an illumination source is reflected by a scene element, collected by the imaging system, focused onto the internal image plane holding the sensor array, and digitized into the output image)

Here,
- The energy from an illumination source is reflected from a scene element.
- The imaging system collects the incoming energy and focuses it onto an image plane.
- The sensor array in the imaging system then produces outputs proportional to the integral of the light received at each sensor.
- These outputs are converted first into analog signals and finally into a digital image.

Advantage
The advantage of using sensor arrays for image acquisition is that, since the sensor array itself is two-dimensional, no motion of the sensors is necessary for imaging.
A complete image can be obtained by simply focusing the reflected energy pattern onto the surface of the array.

1.6.4 Image Formation Model
Let a two-dimensional image be denoted by f(x, y), where the function f has some value at each spatial coordinate (x, y). As the image is generated from a physical process, f(x, y) must be nonzero and finite, i.e.

    0 < f(x, y) < infinity        ... (1.2)

Components
The function f(x, y) is characterized by two components:
(1) Illumination, i(x, y): denotes the amount of source illumination incident on the scene. Its nature is determined by the illumination source.
(2) Reflectance, r(x, y): denotes the amount of illumination reflected by the objects in the scene. Its nature is determined by the characteristics of the objects.

Now, f(x, y) can be written as the product of these two components:

    f(x, y) = i(x, y) r(x, y)

where

    0 < i(x, y) < infinity   and   0 < r(x, y) < 1

(r(x, y) is bounded by 0, total absorption, and 1, total reflectance.)
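The image formation model f(x, y) = i(x, y) r(x, y) can be illustrated with a short NumPy sketch. The array sizes and values below are illustrative, not from the text; the point is only that the formed image inherits the bounds of its factors.

```python
import numpy as np

# Minimal sketch of the image formation model f(x, y) = i(x, y) * r(x, y),
# with illumination i in (0, inf) and reflectance r in (0, 1).
# Array shapes and values are illustrative assumptions.
i = np.full((4, 4), 100.0)                     # uniform illumination
r = np.linspace(0.05, 0.95, 16).reshape(4, 4)  # per-pixel reflectance in (0, 1)
f = i * r                                      # the formed image

# f inherits the bounds of its factors: 0 < f <= i everywhere.
print(f.min() > 0 and (f <= i).all())  # → True
```

With a nonuniform i(x, y) (e.g. a vignetting profile) the same product still satisfies 0 < f(x, y) < infinity, as required by equation (1.2).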
Fig. 1.23 Isopreference Curves (in the N-k plane, with N = 32, 64, 128, 256)

From the figure, the following points are understood:
- Varying N and k affects the amount of detail present in an image.
- When the values of N and k increase, the curves shift up and to the right, which implies better picture quality.
- The curves become more vertical as the amount of detail in the image increases. This indicates that images with a large amount of detail may need only a few gray levels.
- If the value of N is fixed, the perceived quality of such an image is nearly independent of the number of gray levels used.
- If the value of k is decreased, the apparent contrast of the image increases, which in turn improves its perceived quality.

1.7.3 Spatial Resolution
- Spatial resolution is defined as the smallest number of discernible, i.e. recognizable, line pairs per unit distance.
- It is the smallest discernible detail in an image.
- The principal factor determining the spatial resolution of an image is sampling.
- The spatial resolution of an image of size M x N with L gray levels is M x N pixels.

Example
Let a chart be constructed with vertical lines of width W, with the space between the lines also of width W. A line pair thus consists of one line and its adjacent space.

Therefore, the width of a line pair = 2W
=> Line pairs per unit distance = 1/(2W) = spatial resolution of the image

False Contouring
If the number of gray levels in the smooth areas of a digital image is not sufficient, a set of very fine, ridge-like structures (ridge: a narrow elevation) appears in those smooth gray-level areas. This effect is known as 'false contouring'.

1.7.4 Gray-Level Resolution
- Gray-level resolution is defined as the smallest discernible change in gray level.
- Measuring discernible changes in gray level depends on human perception. Therefore, it is a 'subjective' process.
- The gray-level resolution of an L-level digital image of size M x N is L levels.

1.7.5
Aliasing Effect
Aliasing is an unwanted effect that is always present in a sampled image.

Band-Limited Functions:
Finite functions can be represented in terms of sines and cosines of various frequencies. The sine/cosine component with the highest frequency determines the 'highest frequency content' of the function. If the highest frequency is finite and the function is of unlimited duration, it is known as a 'band-limited function'.

Sampling Rate:
The sampling rate is defined as the number of samples taken per unit distance.

Shannon's Sampling Theorem:
Shannon's sampling theorem states that if a function is sampled at a rate greater than or equal to twice its highest frequency, the original function can be completely recovered from its samples.

    i.e.  fs >= 2 fmax        ... (1.15)

where
    fs   -> sampling frequency
    fmax -> highest input frequency

Aliasing:
A function is undersampled if the sampling frequency is too low to satisfy the Shannon theorem. In this condition, additional frequency components are introduced into the sampled function, which corrupt the sampled image. This effect is known as 'aliasing'. The additional frequency components introduced are known as 'aliased frequencies'.

To Reduce Aliasing:
The aliasing effect can be decreased by reducing the high-frequency components. This is done by blurring or smoothing the image before sampling.

1.7.6 Moire Patterns
Moire patterns are used to view the effect of aliased frequencies on a sampled image. The moire effect is a special case in which a function of finite duration can be sampled over a finite interval without violating the Shannon sampling theorem.

1.7.7 Zooming of Digital Images
Zooming may be viewed as oversampling. It is applied after digitizing the image.
Steps:
Two steps are needed for zooming. They are:
(a) The creation of new pixel locations
(b) The assignment of gray levels to those locations

Methods:
Zooming can be implemented by three methods, namely:
(1) Nearest neighbor interpolation
(2) Pixel replication
(3) Bilinear interpolation

(1) Nearest Neighbor Interpolation
The steps followed in this method are explained with an example below:
(i) Consider an image of size 500 x 500 pixels which is to be enlarged 1.5 times, i.e. to 750 x 750 pixels.
(ii) Let an imaginary grid of size 750 x 750 be laid over the original image.
(iii) Since the imaginary grid has more points than the original image, new pixel locations are created, which completes step (a) mentioned above.
(iv) Compress the grid to the original image size.
(v) Now step (b), i.e. gray-level assignment, is performed by assigning to each new pixel in the grid the gray level of the closest pixel in the original image. This is done for all the new points.
(vi) After the assignment is over, expand the grid back to the specified size to obtain the zoomed image.

Advantage: This method is faster than the other methods.
Disadvantage: It produces a checkerboard effect, which is objectionable especially at high magnification factors.

(2) Pixel Replication
- Pixel replication is a special case of nearest neighbor interpolation.
- It is applicable when the size of an image is to be increased by an integer factor, e.g. doubled, tripled or quadrupled.
- The concept is based on duplicating pixels the required number of times to achieve the desired size. Therefore, the new locations are exact duplicates of the old locations.

Example:
Let the size of an image be doubled. First, each column of the image is duplicated, which doubles the size in the horizontal direction. Then, each row of the enlarged image is duplicated to double the size in the vertical direction.
Thus, the image of the required size is obtained.

(3) Bilinear Interpolation
- Bilinear interpolation is the most preferred method of gray-level assignment.
- This method overcomes the drawback of nearest neighbor interpolation.
- It uses the four nearest neighbors of a point, as follows:

Let (x', y') represent the coordinates of a point in the zoomed image, and let v(x', y') be the gray level assigned to that point. The assigned gray level for bilinear interpolation is then given by,

    v(x', y') = ax' + by' + cx'y' + d

where the coefficients a, b, c, d are determined from the four nearest neighbors of the point (x', y'). If more neighbors are used for interpolation, smoother results can be obtained.

1.7.8 Shrinking of Digital Images
Image shrinking can be viewed as undersampling, and it is applied to an image after digitizing. Shrinking is performed by two methods, which are similar to the methods of zooming.

Method I:
- This method is called the row-column deletion method.
- It is the counterpart of the pixel replication method in zooming.
- Here, the required size of the image is obtained by deleting the surplus rows and columns.
- Example: To shrink an image by one-half, every other row and column is deleted.

Method II:
This method is similar to the grid analogy for zooming, and consists of the following steps:
(i) Lay an imaginary grid of the required size over the original image.
(ii) Expand the grid to fit over the original image.
(iii) Perform gray-level assignment by either nearest neighbor or bilinear interpolation.
(iv) Shrink the grid back to its original specified size, which gives the shrunk image of the required size.

To reduce aliasing effects, the image is blurred or smoothened before shrinking.
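The zooming and shrinking methods above can be sketched in a few lines of NumPy. This is a hedged sketch, not the textbook's implementation: zoom_nn realizes pixel replication for integer factors (a special case of nearest-neighbor assignment), and shrink_half realizes row-column deletion; the function names are illustrative.

```python
import numpy as np

# Hedged sketch of zooming by pixel replication (integer-factor
# nearest-neighbor assignment) and shrinking by row-column deletion.
def zoom_nn(img, factor):
    """Enlarge by an integer factor: each new pixel takes the gray
    level of the closest pixel in the original image."""
    rows = np.arange(img.shape[0] * factor) // factor  # map new row -> old row
    cols = np.arange(img.shape[1] * factor) // factor  # map new col -> old col
    return img[rows[:, None], cols]

def shrink_half(img):
    """Shrink by one-half by deleting every other row and column."""
    return img[::2, ::2]

img = np.array([[1, 2],
                [3, 4]])
big = zoom_nn(img, 2)
print(big.shape, np.array_equal(shrink_half(big), img))  # → (4, 4) True
```

Shrinking a pixel-replicated zoom recovers the original exactly, which is why the text pairs the two methods; real images would be blurred before shrinking to reduce aliasing.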
1.8 RELATIONSHIPS BETWEEN PIXELS
The relationships between the pixels in an image should be known clearly in order to understand image processing techniques. Those relationships, for an image f(x, y), are explained below.

1.8.1 Neighbors of a Pixel
A pixel p can have three types of neighbors, known as:
(1) 4-neighbors, N4(p)
(2) Diagonal neighbors, ND(p)
(3) 8-neighbors, N8(p)

(1) 4-Neighbors, N4(p):
The 4-neighbors of a pixel p at coordinates (x, y) include two horizontal and two vertical neighbors. The coordinates of these neighbors are given by:

    (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)

Fig. 1.24 A Region of an Image Centered at (x, y):

    (x-1, y-1) | (x-1, y) | (x-1, y+1)
    (x,   y-1) | (x,   y) | (x,   y+1)
    (x+1, y-1) | (x+1, y) | (x+1, y+1)

Here, each 4-neighbor is at unit distance from (x, y), as shown in fig. 1.24. If (x, y) is on the border of the image, some of the neighbors of p lie outside the digital image.

(2) Diagonal Neighbors, ND(p):
The coordinates of the four diagonal neighbors of p are given by:

    (x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)

Here also, some of the neighbors lie outside the image if (x, y) is on the border of the image.

(3) 8-Neighbors, N8(p):
The diagonal neighbors together with the 4-neighbors are called the 8-neighbors of the pixel p. This set is denoted by N8(p).
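The three neighborhoods of Section 1.8.1 can be written down directly. A minimal sketch, with illustrative function names; as the text notes, for a border pixel some of the returned coordinates fall outside the image, and no bounds check is attempted here.

```python
# Hedged sketch of the neighborhoods in Section 1.8.1.
def n4(x, y):
    # 4-neighbors: two horizontal and two vertical neighbors
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    # the four diagonal neighbors
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    # 8-neighbors: the 4-neighbors together with the diagonal neighbors
    return n4(x, y) | nd(x, y)

print(len(n8(5, 5)))  # → 8
```

Returning sets makes the definition N8(p) = N4(p) ∪ ND(p) literal in code, since the two neighbor sets are disjoint.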