Thermal Imaging – Theory and Applications

What is Thermal Imagery?

We know that our eyes see reflected light, so it is easy for us to understand how an image is formed by visual sensors (daylight and night-vision cameras). But if there is not enough light, it is impossible for us, or for such a camera, to see. This is not the case in the thermal imaging domain. Thermal cameras sense the infrared radiation emitted by objects in the scene, which depends on their temperature and emissivity. In the thermal infrared, most of the captured radiation is emitted by the observed objects, in contrast to the visual and near-infrared bands, where most of the radiation is reflected. Thus, knowing or assuming material and environmental properties, temperatures can be measured using a thermal camera (such a camera is said to be radiometric). But let’s not forget: “Thermal cameras detect more than just heat though; they detect tiny differences in heat – as small as 0.01°C – and display them as shades of grey or with different colors.” [1]

A thermal image is different from a visual camera image and cannot be treated as a grayscale visual image. In the thermal infrared there are no shadows, and the noise characteristics are different from those in the visual domain. There are also no color patterns as in the visual domain; instead, patterns emerge from variations in the material or temperature of objects.

The infrared wavelength band is usually divided into different sub-bands, according to their different properties: near infrared (NIR, wavelengths 0.7–1 µm), shortwave infrared (SWIR, 1–3 µm), midwave infrared (MWIR, 3–5 µm), and longwave infrared (LWIR, 7.5–12 µm). These bands are separated by regions where the atmospheric transmission is very low (i.e., the air is opaque) or where sensor technologies reach their limits. LWIR, and sometimes MWIR, is commonly referred to as thermal infrared (TIR). TIR cameras should not be confused with NIR cameras, which depend on illumination and in general behave in a similar way to visual cameras. Thermal cameras are either cooled or uncooled. Images are typically stored as 16 bits per pixel to allow a large dynamic range. Uncooled cameras give noisier images at a lower frame rate, but are smaller, silent, and less expensive. [2,3]
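To give a feel for how such 16-bit data are handled in practice, here is a minimal Python sketch that converts raw radiometric counts to temperature and contrast-stretches a frame for display. The centikelvin scaling is only one common convention and the frame is synthetic; the real scale and offset must come from the specific camera's documentation.

```python
import numpy as np

def raw_to_celsius(raw, scale=0.01, offset=-273.15):
    """Convert 16-bit radiometric counts to degrees Celsius.

    Assumes the counts are linear in centikelvin (T_K = raw / 100), which
    is one common convention; the real scale and offset depend on the
    camera and its radiometric mode and must come from its documentation.
    """
    return raw.astype(np.float32) * scale + offset

def to_display(raw):
    """Contrast-stretch a 16-bit frame to 8 bits for visualization."""
    lo, hi = np.percentile(raw, (1, 99))          # robust display range
    norm = np.clip((raw - lo) / max(hi - lo, 1), 0.0, 1.0)
    return (norm * 255).astype(np.uint8)

# Purely synthetic 16-bit frame: a warm blob on a cool background.
frame = np.full((120, 160), 29500, dtype=np.uint16)   # ~21.9 °C background
frame[40:80, 60:100] = 31000                          # ~36.9 °C "person"
print(raw_to_celsius(frame).max())                    # ~36.85
img8 = to_display(frame)                              # ready for display
```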

Q&A

1. What is the biggest difference between a high-cost and a low-cost thermal camera?

   The biggest difference is typically resolution. The higher the resolution, the better the picture clarity. This translates to a better picture at a greater distance as well, similar to the megapixels of a regular digital camera.

2. Can thermal imaging cameras see through objects?

   No. Thermal imaging cameras only detect heat; they will not “see” through solid objects, clothing, brick walls, etc. They see the heat coming off the surface of the object.

3. Is there a difference between night vision and thermal imaging?

    Yes. Night vision relies on at least a very low level of light (less than the human eye can detect) in order to amplify it so that it can produce a picture. Night vision will not work in complete darkness, whereas thermal imaging will, because it only “sees” heat.

4. Can rain and heavy fog limit the range of thermal imaging cameras?

    Yes. Rain and heavy fog can severely limit the range of thermal imaging cameras, because the infrared radiation is scattered and absorbed by water droplets in the air.

[4]

Applications

Applications of thermal vision are numerous, in the civil as well as the military sector, but here we will focus on civil applications that can be helpful in everyday life. For example, this technology can be used to observe and analyze human activities from a distance in a noninvasive manner. Traditional computer vision relies on RGB cameras, but a key problem with this sensor is its dependence on light. Thermal cameras operate independently of light and measure the radiated infrared waves that represent the temperature of the scene. To showcase the possibilities, both indoor and outdoor applications that use thermal imaging only are presented.

Surveillance: People counting in urban environments

Human movement can be automatically registered and analyzed. From both real-time and long-term perspectives, this knowledge can be beneficial for urban planning and for shopkeepers in the city. Real-time information can be used for analyzing the current flow and occupancy of the city, while long-term analysis can reveal trends and patterns related to specific days, times, or events in the city.

Security: Analyzing the use of sports arenas

The interest in analyzing and optimizing the use of public facilities in cities covers a large variety of applications in both indoor and outdoor spaces. Here, the focus is on sports arenas, but other possible applications could be libraries, museums, shopping malls, etc. The aim is to estimate the occupancy of sports arenas in terms of the number of people and their positions in real time. Potential uses of this information include online booking systems and post-processing of the data for analyzing the general use of the facilities. For that purpose, we also try to estimate the type of sport being played based on people's positions.

In indoor spaces, the temperature is often kept constant and cooler than human body temperature. Foreground segmentation can therefore be accomplished by automatically thresholding the image. In some cases, unwanted hot objects, such as hot water pipes and heaters, appear in the scene. In these situations, background subtraction can be utilized, as sketched below.
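To make the two segmentation strategies concrete, here is a minimal Python/OpenCV sketch. It assumes an 8-bit thermal frame (for example, the contrast-stretched output sketched earlier); the function names and the minimum blob area are illustrative choices rather than part of any particular system.

```python
import cv2
import numpy as np

def segment_by_threshold(frame8):
    """Automatic (Otsu) thresholding of an 8-bit thermal frame.

    Works when people are the hottest objects against a cooler,
    roughly constant indoor background.
    """
    _, mask = cv2.threshold(frame8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# When static hot objects (pipes, heaters) are present, a background model
# that flags only changing pixels is more appropriate.
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def segment_by_background(frame8):
    """Foreground mask from a learned background model (call per frame)."""
    return bg_model.apply(frame8)

def count_blobs(mask, min_area=50):
    """Very rough people count: number of sufficiently large foreground blobs."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return sum(1 for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area)
```

In practice, Otsu thresholding suffices when people are the only warm objects in view, while the background model is worth the extra state once heaters or pipes enter the scene.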

Health and safety: Gas leak localization and event alert

Public buildings of interest can be monitored with thermal cameras so that gas or water leakage can be discovered before a hazardous situation occurs.

Localizing a suspected leak in a building can turn out to be delicate, sometimes requiring operations to be stopped, if not walls or floors to be probed. Whatever the mix of construction materials, thermal imaging can be the right answer: in most cases, a leak translates into an abnormal temperature pattern. Thermal imaging is by nature a non-contact operation, which increases inspector safety, and it can visualize fluid leakage as well as electrical faults. It can of course also detect thermal bridges and, as such, is a key tool for generating property investigation reports.

Water leaks can be either hot or cold, and thermal imagers can catch both. It can sometimes be close to impossible to spot a water leak on your own, especially when it is hidden behind a wall. That is why thermal cameras help prevent dangerous situations.

Traffic control: Traffic monitoring and specific event alert

For monitoring heterogeneous traffic, thermal imaging can be a valuable camera type, reducing overall system costs and increasing reliability. Contrary to visible- and NIR-based detectors, LWIR cameras are not affected by the lighting conditions of the scene, e.g., night vs. day or sun orientation. This remains true over long distances, enabling the detection of a child, a cyclist, a car, or a truck. Coupled with relevant processing, LWIR cameras turn out to be a key asset of intelligent transportation systems (ITS), reducing the number of cameras while increasing the reliability of alarms. This helps the manager on duty quickly take the right decision in case of, e.g., obstacle detection, a wrong-way vehicle, or an abnormal traffic jam, ensuring road-user safety as well as optimal commuting times.

Energy saving: Building occupancy

Monitoring building occupancy turns out to be highly relevant for the management of commercial complexes or public infrastructure: optimal adjustment of the energy supply, scheduling of maintenance services, as well as the comfort and health of occupants.
It is also useful for sizing security services, and it is of crucial importance in case of an event requiring building evacuation. Advanced solutions integrate thermal imaging at two levels: low-resolution detectors (detecting presence and human activity) and/or a high-resolution thermal camera covering relevant doorways (for people counting and for characterizing human activity).

This time, our goal was to explain more of the science behind thermal cameras and their applications. If you have additional questions, or there is anything else you would like to know about this topic, feel free to ask via mail or in the comments.

Outliers in data and how to deal with them

Term explanation

In image processing, as an area of signal processing, modelling the data and their expected values is very important in all kinds of applications; the data represent the problem that needs to be addressed. That is why it is necessary to know what kind of data to expect, and which values are the result of measurement errors, faulty data, erroneous procedures, or simply regions where a certain theory might not be valid. To improve the model and obtain better results in our applications, we must recognize and deal with outliers in the data.

What is an outlier?

In statistics, an outlier is a data point that differs significantly from the other observations. Outliers in the data can be very dangerous, since they distort the classical data statistics, such as the mean and the variance of the data. This affects the results of algorithms of any kind (image processing, machine learning, deep learning…). So, when modeling, it is extremely important to clean the data sample to ensure that the observations represent the problem well.
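A tiny, made-up numerical example shows how strongly a single bad measurement can shift these statistics:

```python
import numpy as np

clean = np.array([9.8, 10.1, 9.9, 10.0, 10.2, 9.9, 10.1])
with_outlier = np.append(clean, 100.0)          # one faulty measurement

print(clean.mean(), clean.std())                # ~10.0, ~0.13
print(with_outlier.mean(), with_outlier.std())  # ~21.25, ~29.8
```

One corrupted value out of eight moves the mean from about 10 to about 21 and inflates the standard deviation by more than two orders of magnitude.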

How to deal with outliers in the data

What we know about outliers is that they do not fit the model we assumed, but we do not know anything else about them: when they will appear or what values they will have. We just know that we must stop them from messing with our results. But how?

  • The first step in determining the outliers is getting to know the data for the specific application, so we must have a test dataset and start from there.
  • The next step is to find the data distribution (according to the available dataset), which can sometimes be a tricky task. Let us assume that the data have a normal (Gaussian) distribution.
(Figure: the normal (Gaussian) distribution.)
  • Once we are familiar with the distribution of the data, we can identify outliers more easily. There is no precise way to define and identify outliers in general, so we must know how to define them for our specific application.
  • We can now use statistical methods to identify observations that appear to be rare or unlikely given the available data (a minimal sketch is given after this list). Outliers can occur by chance in any distribution, but they often indicate either measurement error or that the population has a heavy-tailed distribution.
  • In the former case one wishes to discard them or use statistics that are robust to outliers, while in the latter case they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution.
  • In most larger samplings of data, some data points will be further away from the sample mean than what is deemed reasonable. This can be due to incidental systematic error or flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations are far from the center of the data. In large samples, a small number of outliers is to be expected (and not due to any anomalous condition).
  • Now we can deal with the outliers. We can remove them from the dataset if we are dealing with offline applications. If, on the other hand, we are dealing with real-time online processing, then we must use procedures that make our application more robust.
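As a concrete illustration of the "identify, then handle" step under the normality assumption, here is a minimal Python sketch of the classical three-sigma rule; the cutoff k = 3 and the synthetic data are assumptions for the example, not a universal recipe.

```python
import numpy as np

def three_sigma_outliers(x, k=3.0):
    """Flag points more than k standard deviations from the sample mean.

    Only meaningful under the rough-normality assumption made above; in a
    clean Gaussian sample, k=3 flags about 0.3% of points purely by chance.
    """
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.abs(z) > k

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(10.0, 0.1, size=50), [100.0]])  # one gross error
print(np.flatnonzero(three_sigma_outliers(data)))  # expected: [50], the gross error
```

Note that the cutoff itself is computed from the contaminated mean and standard deviation, so a larger fraction of outliers can partially mask themselves; the robust statistics discussed below address exactly this weakness.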

Remark:

One might think that a simple way to handle outliers is to detect them and remove them from the data set. Deleting an outlier, although better than doing nothing, still poses a number of problems:

  • When is deletion justified? Deletion requires a subjective decision.
  • When is an observation “outlying enough” to be deleted?
  • The user or the author of the data may think that “an observation is an observation” (i.e., observations should speak for themselves) and hence feel uneasy about deleting them.
  • Since there is generally some uncertainty as to whether an observation is really atypical, there is a risk of deleting “good” observations, which results in underestimating data variability.

Since the results depend on the user’s subjective decisions, it is difficult to determine the statistical behavior of the complete procedure.

Robust Statistics

Let’s say something about the normal distribution assumption. It is very common to assume a Gaussian distribution in many kinds of engineering problems. The most widely used model formalization is the assumption that the observed data have a normal (Gaussian) distribution. This assumption has been present in statistics as well as engineering for two centuries and has been the framework for all the classical methods in regression, analysis of variance and multivariate analysis. The main justification for assuming a normal distribution is that it gives an approximate representation to many real data sets, and at the same time is theoretically quite convenient because it allows one to derive explicit formulas for optimal statistical methods such as maximum likelihood, likelihood ratio tests, etc. We refer to such methods as classical statistical methods and note that they rely on the assumption that normality holds exactly. The classical statistics are, by modern computing standards, quite easy to compute. Unfortunately, theoretical and computational convenience does not always deliver an adequate tool for the practice of statistics and data analysis. It often happens in practice that an assumed normal distribution model (e.g., the standard Kalman filter) holds approximately in that it describes the majority of observations, but some observations follow a different pattern or no pattern at all.

Now, we know that such atypical data are called outliers, and even a single outlier can have a large distorting influence on a classical statistical method that is optimal under the assumption of normality or linearity. The kind of “approximately” normal distribution that gives rise to outliers is one that has a normal shape in the central region but has tails that are heavier or “fatter” than those of a normal distribution. One might naively expect that if such approximate normality holds, then the results of using a normal distribution theory would also hold approximately. This is unfortunately not the case.

The robust approach to statistical modeling and data analysis aims at deriving methods that produce reliable parameter estimates and associated tests and confidence intervals, not only when the data follow a given distribution exactly, but also when this happens only approximately in the sense just described.

Robust methods fit the bulk of the data well: if the data contain no outliers the robust method gives approximately the same results as the classical method, while if a small proportion of outliers are present the robust method gives approximately the same results as the classical method applied to the “typical” data. As a consequence of fitting the bulk of the data well, robust methods provide a very reliable method of detecting outliers, even in high-dimensional multivariate situations.
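To make the contrast with the classical approach concrete, here is a small sketch of one widely used robust alternative, the median combined with the median absolute deviation (MAD). This is a standard textbook estimator shown for illustration, not the specific method of any reference cited here.

```python
import numpy as np

def mad_outliers(x, k=3.0):
    """Flag outliers using the median and the median absolute deviation (MAD).

    The factor 1.4826 makes the MAD a consistent estimate of the standard
    deviation for Gaussian data, so k plays the same role as in the classical
    z-score, but the cutoff is barely influenced by the outliers themselves.
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))
    return np.abs(x - med) > k * mad

data = np.array([9.8, 10.1, 9.9, 10.0, 10.2, 9.9, 10.1, 100.0])
print(mad_outliers(data))   # only the gross error (100.0) is flagged
print(np.median(data))      # 10.05: barely moved by the single outlier
```

With this small sample the classical three-sigma rule would not even flag the value 100 (its z-score is only about 2.6), whereas the MAD-based rule does, because both the center and the spread are estimated from the bulk of the data.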

We note that one approach to dealing with outliers is the diagnostic approach. Diagnostics are statistics generally based on classical estimates that aim at giving numerical or graphical clues for the detection of data departures from the assumed model. There is a considerable literature on outlier diagnostics, and a good outlier diagnostic is clearly better than doing nothing. However, these methods present two drawbacks. One is that they are in general not as reliable for detecting outliers as examining departures from a robust fit to the data. The other is that, once suspicious observations have been flagged, the actions to be taken with them remain the analyst’s personal decision, and thus there is no objective way to establish the properties of the result of the overall procedure.

Robust methods have a long history that can be traced back at least to the end of the nineteenth century. But the first great steps forward occurred in the 1960s, and the early 1970s with the fundamental work of John Tukey (1960, 1962), Peter Huber (1964, 1967) and Frank Hampel (1971, 1974). The applicability of the new robust methods proposed by these researchers was made possible by the increased speed and accessibility of computers.

In this post we will not go further into robust statistics. If you want to find out more, a new post will be published soon, or you can get some information from the references given at the end. This was just a beginning and a warm-up for those who want to get started with designing more robust applications.