Multispectral imaging techniques and camera selection

Multispectral Camera Technology
The first multispectral systems were used either for space science imaging or for analyzing and digitizing paintings and cultural heritage. The original LANDSAT 1 satellite, launched in 1972, was equipped with a four-band multispectral imaging system, including visible green and red channels as well as two NIR bands.

By the time of the LANDSAT 7 launch in 1999, the system had been expanded to eight multispectral bands, ranging from visible blue light to thermal infrared. These multispectral satellites and their successors are primarily used for agricultural and environmental analysis, including coastal and ocean current observation, vegetation analysis, drought stress, burn/fire-affected areas, and even cloud cover patterns. From the optics to the sensors used, these are extremely complex and expensive systems.

Similarly, advanced multispectral static cameras have been used in art and archaeology for years. These cameras use up to 18 multispectral bands to map and preliminarily identify pigments and embellishments on artworks. These images are also used for digitizing and/or visually enhancing old and faded documents and artifacts. Conservators can also use multispectral imaging to distinguish between original and overpainted sections and select appropriate preservation procedures.

Over time, different types of multispectral systems have been developed based on Fourier transform spectroscopy, liquid crystal tunable filters, broadband and narrowband filters, and other techniques. As these methods have matured, multispectral imaging has migrated from ultra-high-end satellite and artwork-preservation systems to machine vision cameras, offering a combination of resolution, frame rate, and price that makes it suitable for a wide range of multispectral applications. In this technical guide, we will focus on these camera-based multispectral imaging technologies, which are increasingly popular in machine vision applications.

Two (or More) Separate Cameras (Area or Line Scan)
The original method to add more spectral range to a machine vision setup is to align multiple cameras towards the target. For instance, if a fruit producer wants to inspect color and check for bruises, they might add an NIR camera to their setup alongside a color camera. However, combining the spectral data from two images into one inspection step is highly challenging and prone to errors. Even if two cameras are placed close together, there can still be enough optical parallax that aligning the pixels of the two images becomes nearly impossible. Therefore, any attempt to "fuse" the two images often fails. Instead, most customers treat additional spectral imaging as a completely separate inspection step, using separate cameras, lighting, lenses, and installations (and expenses), and are unable to leverage the image data from any other cameras used in the process.

Filter Wheel Camera (Area Scan)
A filter wheel camera, also known as a multi-narrowband filter-based imager, captures multi-channel spectral images by rotating filters installed in a filter wheel placed in front of the sensor or lens. Such a filter wheel can typically support up to 12 bands. The spectral reflectance of each pixel is then estimated from the multispectral image. The advantage of a filter wheel-based camera is full spatial resolution for each band. Filters can be customized according to application requirements, and the filter wheel can be modified. The disadvantages of this system include slow, time-consuming imaging, complex image registration, geometric distortion, and high costs for custom filters. Another issue is that adding a mechanical component (a motorized wheel) to the system may require regular maintenance or replacement.

Multispectral cameras using filter wheels capture multispectral images by rotating a filter wheel mounted in front of the lens or between the lens and the sensor.

Pixelated Multispectral Filter Array (Area Scan)
The use of Bayer Color Filter Array (CFA) and demosaicing for single-sensor imaging has been well-established in current compact, low-cost color digital cameras. By extending the concept of CFA to Multispectral Filter Array (MSFA), one can capture multispectral images, and even hyperspectral images in some cases, without increasing size or cost. This capture method is also known as snapshot mosaic imaging. Snapshot mosaic sensors can support 4 to 40 channels in VIS (Visible), VIS-NIR, and NIR-SWIR wavelengths. Achieving very high pixel-based consistency in manufacturing between batches has been challenging. Real-world bands may have relatively high crosstalk, which can affect overall spectral sensitivity, pixel-related noise parameters, and the accuracy of spectral reconstruction. Algorithmic correction for these filters is quite complex. More importantly, due to the very sparse sampling of each spectral band in the filter array, multispectral demosaicing of multispectral filter arrays has always been a challenging problem. The more bands there are, the lower the spatial accuracy of each band becomes.
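The sparse-sampling problem described above can be illustrated with a toy demosaicing routine. This is a minimal sketch, assuming a hypothetical 4-band sensor with a repeating 2x2 filter tile and simple neighborhood averaging; real MSFA demosaicing uses far more sophisticated spectral-correlation methods.

```python
import numpy as np

# Hypothetical 4-band snapshot mosaic: a 2x2 filter tile repeated across
# the sensor, so each band is sampled at only 1/4 of the pixel positions.
BANDS = 4
TILE = np.array([[0, 1],
                 [2, 3]])  # band index at each position in the 2x2 tile

def sum_filter(img):
    """3x3 neighborhood sum via zero-padded shifts (no SciPy needed)."""
    p = np.pad(img, 1)
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Naive per-band demosaicing: mask out each band's sparse samples,
    then fill the gaps by averaging the available 3x3 neighbors."""
    h, w = raw.shape
    rows, cols = np.indices((h, w))
    band_map = TILE[rows % 2, cols % 2]          # which band each pixel saw
    cube = np.zeros((h, w, BANDS))
    for b in range(BANDS):
        mask = (band_map == b)
        sparse = np.where(mask, raw, 0.0)        # keep only band-b samples
        # local sum of samples / local count of samples = local average
        cube[:, :, b] = sum_filter(sparse) / np.maximum(
            sum_filter(mask.astype(float)), 1e-9)
    return cube
```

With more bands the tile grows (e.g. 4x4 for 16 bands), the samples per band become sparser, and this kind of interpolation degrades — which is exactly the spatial-accuracy limitation noted above.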

Two Cameras with Beam Splitter (Area Scan)
One approach to addressing the issues of the multi-camera method is to introduce a beam splitter element that simultaneously projects images from a common set of optics onto multiple cameras. For example, using two Bayer pattern cameras, one can capture two 3-band images and reconstruct them into a 6-channel (2x RGB) spectral image. Alternatively, a Bayer camera can be combined with an NIR camera to produce a 4-channel RGB+NIR output. Additional beam splitters and cameras can be added to capture extra bands. This method alleviates the image capture and image registration issues associated with the basic multi-camera approach, and spectral information can be correlated and combined across the captured images. The biggest disadvantage is that with multiple cameras the system can become very bulky and expensive. Furthermore, each beam splitter causes a loss of light intensity, so this method often requires high-power illumination, forcing a trade-off between high speed and the system's light sensitivity.

This multispectral imaging technique uses a beam splitter so that multiple cameras can capture images simultaneously.

In another beam-splitting variant, all optical components, including the lens, are shared by both sensors through a common beam splitter, instead of using two separate cameras with separate lenses as in the previous method.
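Once the two beam-splitter frames are co-registered through the shared optics, fusing them is essentially a channel-stacking step. A minimal sketch (the helper name is hypothetical, not any vendor's API):

```python
import numpy as np

# Sketch: fusing two co-registered frames from a beam-splitter setup into
# a single 4-channel RGB+NIR image. Assumes both sensors share the same
# optics, so pixel (y, x) sees the same scene point in both frames.
def fuse_rgb_nir(rgb, nir):
    """rgb: (H, W, 3) array; nir: (H, W) array -> (H, W, 4) array."""
    if rgb.shape[:2] != nir.shape[:2]:
        raise ValueError("beam-splitter frames must be co-registered")
    return np.dstack([rgb, nir])
```

In practice the interesting work is upstream of this call: mechanical alignment of the sensors and per-channel exposure compensation for the light lost at each split.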

Multi-Sensor Dichroic Prism Camera (Area or Line Scan)
At first glance, this seems very similar to the beam splitter method, but there are two notable differences. Firstly, only the sensors, not complete cameras, are mounted and aligned to the prism face. This significantly reduces the size compared to the previously described multi-camera beam splitter imaging system. Secondly, the prism block uses hard dichroic coatings that act as interference filters, directing the appropriate spectral range of incident light to each sensor. Therefore, instead of splitting the same light into multiple channels and reducing its intensity, each channel receives the full amount of light it needs to capture, whether it's a broadband or narrowband in the visible or invisible regions. Unlike the mosaic method, full spatial resolution for each band can be achieved. In area scanning scenarios, resolutions of up to 3.2 MP are now possible, with speeds exceeding 100 fps per band, while in line scanning, the camera can achieve 8192 pixels per band at 35 kHz. The main limitation of this method is that the size of the prism (and therefore the camera) needs to support multiple large sensors. This can limit the maximum resolution and/or pixel size of the sensors that can be used.

In a prism camera, the prism block consists of hard dichroic coatings, which are essentially interference filters. These filters are responsible for the initial separation of the incoming light.

Additional filters on the prism block are used for secondary separation.

Multi-Line Cameras (Three-Line, Four-Line, TDI-Type Line Scanning with Filters)
Line scan cameras with multi-line sensors can also be used for multispectral applications. Cameras with three-line RGB sensors are commonly used in color imaging, while four-line sensor cameras can combine RGB with NIR or monochrome lines. The number of lines on a multi-line sensor can range from 3 to several dozen. Today's most popular cameras have 8 to 16 lines, each with a unique spectral bandpass filter, allowing the capture of multispectral images with up to 16 bands. The same technology can be extended to TDI-type sensors, which consist of nearly 200 lines divided into 3 or 4 spectral domains. Multi-line cameras can also mount additional optical filters over an existing RGB sensor. This divides the horizontal line resolution into as many parts as there are optical filters; by combining 5 optical filters with an RGB sensor, up to 15 spectral bands can be achieved. The disadvantage of this method is that as the number of spectral channels increases, the horizontal resolution of the system decreases.

Line scan cameras with multi-line sensors can be used for multispectral applications, where each pixel line has a unique spectral bandpass filter.

This approach uses a line scan sensor and by adding additional filters to the optical assembly, the horizontal resolution of the sensor can be divided into a multispectral domain. Here, a three-line sensor is divided into three spectral separations, resulting in a 9-channel multispectral camera.
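The resolution-for-bands trade described above can be sketched as a simple array operation. This assumes an idealized three-line RGB readout with equal-width filter zones; a real system additionally needs per-zone spatial and spectral calibration.

```python
import numpy as np

# Sketch of the filtered multi-line idea: a three-line RGB sensor whose
# horizontal field is split into n_zones optical filter zones, yielding
# 3 * n_zones spectral channels at 1/n_zones of the horizontal resolution.
def split_line(rgb_lines, n_zones=3):
    """rgb_lines: (3, W) array, one readout of the R, G, B lines.
    Returns (3 * n_zones, W // n_zones): one row per spectral channel."""
    n_lines, w = rgb_lines.shape
    seg = w // n_zones
    channels = [rgb_lines[line, z * seg:(z + 1) * seg]
                for z in range(n_zones) for line in range(n_lines)]
    return np.stack(channels)
```

With three zones, a 3 x W readout becomes a 9 x (W/3) multispectral line, mirroring the 9-channel example in the caption above.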

Pushbroom Cameras for Multispectral Imaging (Line Scan)
The pushbroom method, traditionally used in hyperspectral cameras, can also be applied to multispectral imaging, offering significant flexibility in the number of spectral bands that can be captured. The x-λ scan (i.e., across the horizontal resolution and all bands) is performed simultaneously, while scanning along the transport direction (y-axis) is continuous. This technology captures complete spatial and spectral information line by line. Pushbroom cameras consist of three main components: a lens, an imaging spectrometer, and a silicon-based image sensor (for VIS-NIR) or an InGaAs sensor (for NIR-SWIR). The imaging spectrometer, composed of a light dispersion unit and focusing optics, is the key component of the pushbroom camera. Light passes through the input slit and collimator to the dispersion unit and is then focused onto the image sensor, providing the x-λ coordinates for a single line. Today, line resolutions can reach 1024 pixels, with between 5 and 224 freely selectable wavelength bands. The spectral range depends on the type of sensor used, but VIS-NIR is popular. While this technology offers good flexibility, the drawback is that speed drops as the number of channels increases. At the full range (224 bands), this is a hyperspectral method with a frame rate of only 500 Hz, which is too slow for many industrial applications.

LRF

Multispectral imaging is possible using push-broom hyperspectral camera technology, which can capture complete spatial and spectral information line by line.
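Conceptually, the host-side job in a pushbroom acquisition is to stack successive x-λ slices into a (y, x, λ) cube as the object moves past the slit. A minimal sketch:

```python
import numpy as np

# Pushbroom acquisition sketch: each camera frame is one x-lambda slice
# (spatial line x spectral bands); stacking slices along the transport
# direction (y) builds the full multispectral cube.
def build_cube(frames):
    """frames: iterable of (X, BANDS) arrays, one per scan line.
    Returns a (Y, X, BANDS) data cube."""
    return np.stack(list(frames), axis=0)
```

The cube grows with transport distance, so buffering and processing must keep up with the line rate; this is where the band-count/speed trade-off discussed above bites.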

Area Scanning vs. Line Scanning in Multispectral Imaging

Among the multispectral imaging methods described, only a few are suitable for high-speed industrial applications. In area scanning, the multi-sensor prism-based approach is highly suitable for inspecting bulk products in high-speed production. Other area scan methods, such as pixelated multispectral filter arrays (snapshot mosaics) and filter wheel-based approaches, are too slow for industrial imaging. Additionally, spatial resolution and pixel information reconstruction with snapshot mosaic cameras are quite challenging.

Filter wheel-based cameras are bulky and consist of multiple moving parts, which reduces the robustness of this method. Having said that, snapshot mosaics and filter wheel methods offer more spectral bands than multi-sensor prism-based methods. Snapshot mosaics are suitable for agriculture, smart farming, and medical imaging applications where high spatial accuracy is not required. Filter wheel-based cameras are particularly suitable for digital archiving of old paintings and classical art. Multi-sensor prism-based cameras are highly suitable for precision and smart agriculture, inline inspection of commodities like fruits, vegetables, meat, and seafood, and industrial products such as food and pharmaceutical packaging, electronics, and printed circuit boards.

For multispectral imaging with line scan cameras, two main approaches show good potential. One is using pushbroom hyperspectral sensors, which allow scaling down from hyperspectral operation (224 spectral bands) to multispectral operation (5 spectral bands at a 6.5 kHz line rate), making this approach suitable for mid-speed industrial applications such as inspecting food, recyclables, and packaged goods.

The multi-sensor prism-based line sensor approach achieves extremely high speeds (up to 77 kHz at 4K pixels) and can simultaneously image visible and near-infrared bands, combining up to four spectral bands. The speed makes this approach suitable for all high-speed applications based on belt, channel, or free-fall sorting.

A third approach, using standard three-line sensors with optical filters to trade horizontal line resolution for 6 to 12 channels, has been attempting to enter the printing, food, ceramic, and textile inspection sectors for years, but has failed to gain traction due to complex calibration procedures, low accuracy, and difficult-to-use APIs.

Key Considerations When Choosing Multispectral Imaging Camera Technology

Ease of Setup (System Integration): Using multispectral imaging is significantly more complex than using standard machine vision cameras. To set up and integrate different components of a multispectral imaging system, it is important to have good expertise, not only in cameras but also in calibration procedures involving light sources, the nature of objects to be inspected, and bottlenecks arising from data processing and image data correction. The overall system integration may not be as complex as hyperspectral systems, but it actually depends on what the user wants to achieve through the multispectral imaging system.

Speed and Resolution: Industrial inspection requires high throughput, but the readout architecture of many multispectral systems limits their speed. Speed depends on the number of wavelength channels, the type of multispectral technology used, and the interface. The more spectral bands, the more difficult it is to capture the required amount of light for high-speed applications. Spatial resolution is also a challenge for multispectral imaging, especially when detecting small objects. Cameras based on snapshot mosaic sensors use interpolation to estimate missing spatial information from single pixel values, but this method is not very accurate when detecting smaller defects. Each application may require a different trade-off between the number of multispectral channels and the achievable speed and resolution.
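As a rough illustration of the speed trade-off, the raw data rate is the product of line width, band count, pixel depth, and line rate. The figures below reuse the 4K-pixel, four-band, 77 kHz line scan example mentioned in this guide and assume 8-bit pixels:

```python
# Back-of-the-envelope data-rate check for a multispectral line scan
# setup. The figures are illustrative, not a spec for any one camera.
def data_rate_mb_s(width_px, n_bands, bits_per_px, line_rate_hz):
    """Raw uncompressed data rate in MB/s (1 MB = 1e6 bytes)."""
    return width_px * n_bands * (bits_per_px / 8) * line_rate_hz / 1e6

rate = data_rate_mb_s(4096, 4, 8, 77_000)
print(f"{rate:.0f} MB/s")  # prints "1262 MB/s"
```

The camera interface and host software must sustain this rate continuously, which is why adding bands without reducing line rate quickly exhausts interface bandwidth.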

Number of Spectral Bands: The number of spectral bands required for an application actually depends on the nature of the object to be detected, the required detection accuracy, and the accuracy that can be achieved in image processing using additional spectral estimation techniques. In some applications, such as red edge detection or NDVI analysis, it is clear which bands in the red and NIR regions are needed to capture the required data from plants. The same is true for plastics and organic materials where spectral data is well known. Another example is fluorescence endoscopy, where ICG absorption and fluorescence reflection bands are known. In such cases, a limited number of bands may be sufficient. However, there are also applications involving mixtures of different materials to be inspected or requiring multiple spectral bands to accurately identify specific wavelength bands, or spectral color measurement applications based on multispectral imaging. Such applications require a relatively large number of spectral bands.
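For a two-band case such as the NDVI analysis mentioned above, the per-pixel computation is simple once the red and NIR bands are captured; the epsilon term is just a guard against division by zero:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), computed per pixel from two
# co-registered bands. Values near +1 indicate dense vegetation.
def ndvi(red, nir, eps=1e-9):
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + eps)
```

This is why such applications can get by with very few, well-chosen bands, whereas material-mixture or spectral color measurement tasks need many more.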

Flexibility: A flexible or scalable multispectral system is mainly suitable for applications where different types of materials are inspected on the same machine. Flexibility allows users to adjust the multispectral imaging system to the application's requirements, mainly by changing the number of spectral bands, which in turn increases or decreases the speed of the imaging system. In some systems, flexibility also means lower robustness because it requires interchangeable or moving components (for example, in the filter wheel method, the filter wheel can easily be swapped, but it adds a moving part that affects robustness). On the other hand, some cameras are flexible at manufacturing time but not after the product is finalized. Multi-sensor prism-based cameras offer flexibility during manufacturing, allowing the desired spectral response to be selected through the hard dichroic coatings and basic prism parameters; however, once the prism-sensor assembly is manufactured, it cannot be changed. Cameras based on snapshot mosaic sensors follow the same logic: once the multispectral filter array is fixed on the sensor, it cannot be replaced or modified for a given inspection task.

Processing Multispectral Data Cubes and Data Streams: One challenge of multispectral imaging is processing the multispectral data cube. This is far less complex than with hyperspectral data cubes, which may hold several hundred spectral values per pixel, but it is more complex than processing data from traditional RGB camera systems. The system architecture must be able to properly process, filter, and interpret multispectral data; the fewer the spectral channels, the simpler this is. A second challenge comes from how the data is streamed from the camera to the processing station. With multiple streams, the advantage is that each data stream can be controlled independently; the challenge lies in managing this in the application software. Processing multiple streams requires a software architecture capable of handling two or more streams simultaneously. Software designed for a single stream expects the device to send a single frame, or multiple payloads made available simultaneously, so the user can call a single function and receive images from one stream. However, there are also platforms, such as JAI's eBUS Player, that can open a camera device a second or third time in read-only mode and process multiple data streams.
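The multi-stream case can be sketched generically (this is not the eBUS API; `get_frame()` is a stand-in for whatever acquisition call the SDK provides): one worker thread per stream, with frames paired downstream.

```python
import queue
import threading

# Generic sketch of consuming two camera data streams concurrently:
# each worker thread drains one stream into its own queue, and the
# application pairs up frames for joint processing afterwards.
def consume(stream, out_q, n_frames):
    for _ in range(n_frames):
        out_q.put(stream.get_frame())  # get_frame() is a stand-in method

def run_two_streams(stream_a, stream_b, n_frames):
    qa, qb = queue.Queue(), queue.Queue()
    ta = threading.Thread(target=consume, args=(stream_a, qa, n_frames))
    tb = threading.Thread(target=consume, args=(stream_b, qb, n_frames))
    ta.start(); tb.start()
    ta.join(); tb.join()
    # Pair frames from the two streams for joint processing
    return [(qa.get(), qb.get()) for _ in range(n_frames)]
```

A single-stream design, by contrast, would block on one acquisition call and never service the second stream; this is the architectural difference the paragraph above points at.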

System Cost: Cost is always a driving factor in decision-making. Compact, user-friendly, mass-produced cameras cost less than highly specialized and bulky systems. The cost also depends on the inspection task to be performed. Applications driven by, or close to, end consumers, such as food and agricultural inspection, are more price-sensitive than applications in research, high-tech, or scientific imaging. Today, high-end hyperspectral imaging systems start at around 20,000 euros per camera system; mass-produced multispectral cameras should be significantly below 10,000 euros to be commercially attractive. Multispectral setups based on multiple cameras are more expensive than other methods, such as multi-sensor prism-based cameras or multispectral filter array-based cameras. Ultimately, the cost must be weighed against the value that multispectral imaging provides in solving existing imaging problems.
