Applications of Digital Image Processing #
Digital image processing finds applications across various fields:
- Forensics: Used for fingerprint analysis, facial recognition, and document examination, helping law enforcement in criminal investigations.
- Military Applications: Utilized in target detection, night vision, and object tracking, enhancing battlefield awareness.
- Medical Imaging: Techniques like CT, MRI, and X-ray rely on digital image processing to enhance visualization for diagnosis and treatment planning.
- Consumer Electronics: Common in cameras, mobile devices, and AR/VR systems for functions like facial recognition and image enhancement.
- Industrial Applications: Applied in automation, quality control, and robotics, where machine vision systems inspect products on production lines.
Machine Vision Basic Steps #
Image Acquisition
- Process of capturing an image through a sensor or camera.
- Types of images:
- Reflection images: Generated when light reflects off the surface.
- Emission images: Generated by the emission of light from the object.
- Absorption images: Formed by light absorption in the object.
Preprocessing
- Often used to correct non-uniform lighting or remove noise.
- Common techniques:
- Histogram equalization for contrast adjustment.
- Smoothing filters to remove noise (e.g., Gaussian filter).
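A minimal sketch of the two preprocessing techniques above, assuming an 8-bit grayscale NumPy array and SciPy's Gaussian filter (function names and parameters are illustrative, not from the course material):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def equalize_histogram(img):
    """Spread the grey levels of an 8-bit image over the full range to improve contrast."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()                                  # cumulative distribution of grey levels
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalize to [0, 1]
    lut = (cdf * 255).astype(np.uint8)                   # lookup table: old level -> new level
    return lut[img]

def smooth(img, sigma=1.5):
    """Suppress pixel noise with a Gaussian filter."""
    return gaussian_filter(img.astype(float), sigma=sigma)

# Example: a noisy, low-contrast test image
img = (np.random.rand(64, 64) * 60 + 100).astype(np.uint8)
pre = smooth(equalize_histogram(img))
```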
Segmentation
- Divides the image into meaningful parts or objects.
- Key for separating distinct objects from one another.
Labeling
- Assigns identifiers (labels) to segmented objects (e.g., assigning a number to each object).
Feature Extraction
- Involves calculating object properties such as perimeter or area, used for classification.
Classification
- Final step where conclusions are drawn based on extracted features.
- Could involve machine learning algorithms to identify or classify objects.
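The remaining steps can be sketched end to end: thresholding for segmentation, connected components for labeling, area and bounding box as extracted features, and a toy size rule standing in for a real classifier. The threshold value and the size rule are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def analyze(img, threshold=128):
    # Segmentation: separate bright foreground objects from the background
    binary = img > threshold

    # Labeling: assign a unique integer label to every connected object
    labels, num_objects = ndimage.label(binary)

    results = []
    for obj_id in range(1, num_objects + 1):
        mask = labels == obj_id
        # Feature extraction: simple region properties
        area = int(mask.sum())
        rows, cols = np.nonzero(mask)
        bbox = (rows.min(), cols.min(), rows.max(), cols.max())
        # Classification: a toy rule based on the extracted feature
        kind = "large object" if area > 50 else "small object"
        results.append({"id": obj_id, "area": area, "bbox": bbox, "class": kind})
    return results

img = np.zeros((32, 32), dtype=np.uint8)
img[4:10, 4:10] = 200      # a small bright square
img[15:30, 10:28] = 220    # a larger bright rectangle
for obj in analyze(img):
    print(obj)
```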
Image Processing Concepts #
- From Top to Bottom (image acquisition → classification): as data moves through the steps, it becomes more abstract (raw pixels give way to higher-level features and decisions).
- From Bottom to Top: the amount of data increases (pixel-level data at the acquisition end is large, while the abstracted features are small).
Components of an Image Processing System #
An image processing system typically includes several key components:
- Light Source: Illuminates the object or scene for imaging.
- Sensor: Common types include CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide Semiconductor) sensors, each with unique benefits (e.g., CCDs offer high-quality images but use more power, while CMOS sensors are more cost-effective and power-efficient).
- Frame Grabber: Captures images from the sensor and converts them to digital form.
- Computer and Software: Stores, processes, and analyzes the captured images using algorithms and interfaces, often including software like MATLAB or specialized vision libraries.
Image Basics: Pixel Structure and Intensity Values #
A digital image consists of a grid of points called pixels, each representing an intensity or color value:
- Binary Images: Use only two colors, typically black and white.
- Grayscale Images: Range from 0 (black) to 255 (white) in 8-bit representation.
- Color Images: Represent colors using combinations of red, green, and blue (RGB) intensities.
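A small sketch of how these three image types look as NumPy arrays (the values are illustrative):

```python
import numpy as np

# Binary image: each pixel is 0 (black) or 1 (white)
binary = np.array([[0, 1, 0],
                   [1, 1, 1],
                   [0, 1, 0]], dtype=np.uint8)

# 8-bit grayscale image: each pixel is an intensity from 0 (black) to 255 (white)
gray = np.array([[  0,  64, 128],
                 [ 64, 128, 192],
                 [128, 192, 255]], dtype=np.uint8)

# Color image: each pixel holds three intensities (red, green, blue)
color = np.zeros((3, 3, 3), dtype=np.uint8)
color[0, 0] = [255, 0, 0]   # a pure red pixel
color[1, 1] = [0, 255, 0]   # a pure green pixel
color[2, 2] = [0, 0, 255]   # a pure blue pixel
```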
Pixel Connectivity: Adjacency Types #
Pixel connectivity defines how pixels relate to their neighbors:
- 4-Adjacency: Pixels are adjacent if they share a side (horizontal or vertical).
- 8-Adjacency: Pixels are adjacent if they share a side or corner (horizontal, vertical, or diagonal).
- Mixed-Adjacency: Combines 4-adjacency and diagonal connectivity, using a set of rules to avoid diagonal ambiguities and ensure connectivity paths.
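A minimal sketch of the 4- and 8-neighbourhoods of a pixel, using a (row, column) coordinate convention (the helper names are illustrative):

```python
def neighbors_4(r, c):
    """Pixels that share a side with (r, c)."""
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

def neighbors_8(r, c):
    """Pixels that share a side or a corner with (r, c)."""
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if not (dr == 0 and dc == 0)]

# Two pixels p and q are 4-adjacent (or 8-adjacent) if q lies in the
# corresponding neighbourhood of p and both have foreground values.
print(neighbors_4(2, 2))  # the 4 side-neighbours of (2, 2)
print(neighbors_8(2, 2))  # the 8 side- and corner-neighbours of (2, 2)
```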
Color Formats #
- Subtractive Color (Paint): Color is produced by subtracting light (e.g., cyan, magenta, yellow).
- Additive Color (Light): Color is produced by adding light (e.g., red, green, blue).
Indexed Image #
- Stores a palette (colormap) of only the colors that appear in the image; each pixel holds just an index into that palette, saving memory and storage space.
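A minimal sketch of the idea: the file stores a small palette of the colors that occur, and each pixel holds only an index into that palette (the values are illustrative):

```python
import numpy as np

# Palette: the only colors that actually occur in the image
palette = np.array([[255, 255, 255],   # index 0: white
                    [255,   0,   0],   # index 1: red
                    [  0,   0, 255]],  # index 2: blue
                   dtype=np.uint8)

# Indexed image: each pixel stores only a palette index (1 byte here)
indexed = np.array([[0, 1, 1],
                    [0, 2, 1],
                    [2, 2, 0]], dtype=np.uint8)

# Reconstruct the full RGB image when needed (3 bytes per pixel)
rgb = palette[indexed]
print(rgb.shape)  # (3, 3, 3)
```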
Perceptual Color Space (Human Perception) #
- Hue: The actual color.
- Saturation: The purity of the color (how vivid it is, i.e. how little it is diluted by white).
- Brightness: Lightness or darkness of the color.
Brightness Calculation (for RGB images) #
Brightness = 0.2989 × Red + 0.5870 × Green + 0.1140 × Blue
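A sketch of this weighted sum applied to an RGB image stored as an H × W × 3 NumPy array (the function name is illustrative):

```python
import numpy as np

def brightness(rgb):
    """Convert an RGB image (H x W x 3) to a brightness/grayscale image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

rgb = np.zeros((2, 2, 3), dtype=np.float64)
rgb[0, 0] = [1.0, 1.0, 1.0]   # white -> brightness 1.0
rgb[0, 1] = [0.0, 1.0, 0.0]   # green -> brightness 0.587
print(brightness(rgb))
```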
Light Spectrum #
- Visible light wavelengths range from 380 nm to 780 nm.
Weber’s Law #
- The just-noticeable difference in luminance is proportional to the background luminance; humans can typically detect about a 1% difference (ΔL/L ≈ 0.01). Luminance is measured in candela per square metre (cd/m^2).
Spatial Resolution #
- Defines the density of pixels in an image, measured in dots per inch (dpi).
- The higher the spatial resolution, the more detail is present in the image.
Image Digitization #
Sampling: Converts a continuous image into discrete samples (digitizes the spatial coordinates).
- Moiré patterns can arise from improper (too coarse) sampling.
- Example: the “wagon wheel effect”, where helicopter blades or wheel spokes appear to spin backward, is a temporal aliasing artifact caused by sampling (filming) at too low a rate.
Quantization: Maps the amplitude (intensity values) of the image to discrete levels.
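A minimal sketch of both operations on an 8-bit grayscale image: spatial subsampling keeps every n-th pixel, and requantization reduces the number of grey levels (the factor and level count are illustrative):

```python
import numpy as np

def subsample(img, factor=2):
    """Spatial sampling: keep every `factor`-th pixel in both dimensions."""
    return img[::factor, ::factor]

def requantize(img, levels=16):
    """Quantization: map 256 grey levels down to `levels` grey levels."""
    step = 256 // levels
    return (img // step) * step

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
small  = subsample(img, 2)     # 8 x 8, same grey levels
coarse = requantize(img, 16)   # 16 x 16, only 16 distinct grey levels
print(small.shape, len(np.unique(coarse)))
```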
Practical Difference Between CCD and CMOS Sensors #
- CCD (Charge-Coupled Device): Known for high-quality, low-noise images, commonly used in scientific applications. Drawbacks include higher power consumption and cost.
- CMOS (Complementary Metal-Oxide Semiconductor): More affordable and power-efficient, widely used in consumer electronics. Modern CMOS sensors now offer comparable quality to CCDs in many applications.
Nyquist Theorem #
- For accurate digitization, the sampling rate should be at least twice the highest frequency present in the signal.
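A 1-D illustration of the theorem: a 5 Hz sine sampled at 50 Hz (well above 2 × 5 Hz) is captured faithfully, while sampling it at 6 Hz falls below the Nyquist rate and produces an alias at an apparent frequency of 1 Hz (the frequencies are illustrative):

```python
import numpy as np

f_signal = 5.0          # highest frequency present in the signal (Hz)

def sample(f_sample, duration=1.0):
    t = np.arange(0.0, duration, 1.0 / f_sample)
    return t, np.sin(2 * np.pi * f_signal * t)

t_ok,  x_ok  = sample(50.0)   # 50 Hz >= 2 * 5 Hz: signal preserved
t_bad, x_bad = sample(6.0)    # 6 Hz  <  2 * 5 Hz: aliasing, apparent frequency |5 - 6| = 1 Hz
```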
Questions #
What is digital image processing?
Digital image processing is the application of computer algorithms to process digital images.
Why do we need digital image processing?
Digital image processing is needed to put more emphasis on the details we believe are important.
Please summarize the applications of digital image processing in society.
The main applications of digital image processing are in remote sensing, medical imaging, forensics, transportation, robotics, and consumer electronics.
How is human visual perception formed?
Light reflected from an object enters the eye and is projected onto the fovea, where the receptors convert it into nerve signals that the brain interprets as the perceived image.
What is the difference between image sampling and quantization?
Overly simplified: sampling is the digitizing of space (creating pixels), while quantization is the digitizing of intensity (assigning a value to each pixel). More precisely, sampling refers to the digitization of the spatial coordinates, and quantization to the digitization of the amplitude values.
What is the difference between adjacency and connectivity?
Adjacency is related to the neighborhood of one particular pixel. Connectivity is related to a path linking two pixels.
Compute the Euclidean distance between pixel p and pixel q?
p | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | q |
- Pixel p is located at (0, 0)
- Pixel q is located at (4, 4)
Distance = √(4^2 + 4^2) = √32 = 4√2 ≈ 5.66. The Euclidean distance between p and q is approximately 5.66.
Assume a picture with 1024 pixels and 256 grey levels. It has a 32-byte header and no compression. Calculate the total file size in bytes.
2^8 = 256, so 256 grey levels require 1 byte per pixel. 1024 pixels × 1 byte = 1024 bytes of pixel data. 1024 bytes of pixel data + 32 bytes of header data = 1056 bytes.
Suppose the original image has been subsampled by a factor of 2 in both dimensions. Calculate the new file size (in bytes), assuming the header size has not changed.
1024 bytes of pixel data, halved in both the vertical and horizontal directions: 1024 / 2^2 = 256 bytes of pixel data. 256 bytes of pixel data + 32 bytes of header = 288 bytes.
Suppose the original image has been requantised to allow encoding 2 pixels per byte. Calculate the new file size (assuming the header has not changed).
1024 pixels at 2 pixels per byte -> 512 bytes of pixel data. 512 + 32 = 544 bytes total file size.
How many gray levels will the requantised image have?
1024 pixels -> 512 bytes, i.e. 0.5 byte (4 bits) per pixel. 2^4 = 16 possible values per pixel, so the requantised image has 16 gray levels.
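The three file-size answers above can be checked with a small helper (the function name is illustrative):

```python
def file_size(num_pixels, bits_per_pixel, header_bytes=32):
    """Uncompressed file size in bytes: header plus packed pixel data."""
    return header_bytes + (num_pixels * bits_per_pixel) // 8

print(file_size(1024, 8))        # original: 8 bits per pixel        -> 1056 bytes
print(file_size(1024 // 4, 8))   # subsampled by 2 in both dims      -> 288 bytes
print(file_size(1024, 4))        # requantised: 2 pixels per byte    -> 544 bytes
```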