How can Artificial Intelligence help solve environmental problems like Air Pollution?

Air pollution

Air pollution is caused by solid and liquid particles and certain gases that are suspended in the air. These particles and gases can come from car and truck exhaust, factories, dust, pollen, mold spores, volcanoes and wildfires. The solid and liquid particles suspended in our air are called aerosols.

Certain gases in the atmosphere can cause air pollution. For example, in cities, a gas called ozone is a major cause of air pollution. Ozone is also a greenhouse gas that can be both good and bad for our environment. It all depends where it is in Earth’s atmosphere.

Ozone high up in our atmosphere is a good thing. It helps block harmful energy from the Sun, called radiation. But, when ozone is closer to the ground, it can be really bad for our health. Ground level ozone is created when sunlight reacts with certain chemicals that come from sources of burning fossil fuels, such as factories or car exhaust.

When particles in the air combine with ozone, they create smog. Smog is a type of air pollution that looks like smoky fog and makes it difficult to see. (https://climatekids.nasa.gov/air-pollution/)

The major outdoor pollution sources include vehicles, power generation, building heating systems, agriculture/waste incineration and industry. In addition, more than 3 billion people worldwide rely on polluting technologies and fuels (including biomass, coal and kerosene) for household cooking, heating and lighting, releasing smoke into the home and leaching pollutants outdoors.

Air quality is closely linked to Earth's climate and ecosystems globally. Many of the drivers of air pollution (e.g., combustion of fossil fuels) are also sources of high CO2 emissions. Some air pollutants, such as ozone and black carbon, are short-lived climate pollutants that greatly contribute to climate change and affect agricultural productivity. Policies to reduce air pollution therefore offer a “win-win” strategy for both climate and health, lowering the burden of disease attributable to air pollution as well as contributing to the near- and long-term mitigation of climate change.

Air pollution can be significantly reduced by expanding access to clean household fuels and technologies, as well as prioritizing: rapid urban transit, walking and cycling networks; energy-efficient buildings and urban design; improved waste management; and electricity production from renewable power sources. (https://www.who.int/airpollution/ambient/about/en/)

How does air pollution affect our health?

Breathing in polluted air can be very bad for our health. Long-term exposure to air pollution has been associated with diseases of the heart and lungs, cancers and other health problems. That’s why it’s important for us to monitor air pollution.


Source: World Health Organization

AI can be used to improve urban sustainability and quality of life, and it is about time it was applied to something important for the whole planet. That is why we will talk about AI solutions that address the problem of air pollution.

Air pollution – AI solutions

Artificial Intelligence for cleaner air in Smart Cities

In Singapore, where air pollution and related health costs are particularly high, a team of researchers investigated the possibility of combining the power of sensor technologies, the Internet of Things (IoT) and AI to obtain reliable and valid environmental data and feed better, greener policy-making. As reported by The Business Times, through the computation of real-time IoT sensor data measuring spatial and temporal pollutants, user-friendly air quality heat maps and executive dashboards can be created, and the most severe pollution hotspots can be determined with the help of machine learning algorithms for predictive modelling. This is the first step towards proactive actions to further decarbonize the economy, including incentives for virtuous businesses, the development of wiser land use plans, the revitalization of urban precincts, and more. (https://www.pdxeng.ch/2019/03/28/artificial-intelligence-for-cleaner-air-in-smart-cities/)
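As a rough illustration of that first step, here is a minimal sketch of turning raw IoT sensor readings into a per-cell heat map and a list of pollution hotspots. The grid-cell names, readings and the 35 µg/m³ threshold are made up for illustration; a real system would feed a predictive model rather than a simple cutoff:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical readings: (grid_cell, PM2.5 in ug/m3) pairs from IoT sensors.
readings = [
    ("A1", 12.0), ("A1", 15.0), ("B2", 55.0),
    ("B2", 61.0), ("C3", 20.0), ("C3", 18.0),
]

def hotspot_cells(readings, threshold=35.0):
    """Average readings per grid cell and flag cells whose mean PM2.5
    exceeds the threshold."""
    by_cell = defaultdict(list)
    for cell, value in readings:
        by_cell[cell].append(value)
    heat_map = {cell: mean(vals) for cell, vals in by_cell.items()}
    return heat_map, sorted(c for c, v in heat_map.items() if v > threshold)

heat_map, hotspots = hotspot_cells(readings)
```

The `heat_map` dictionary is what a dashboard would render, and `hotspots` is the list a city planner would act on.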

An Artificial Intelligence-Based Environment Quality Analysis System

The paper describes an environment quality analysis system based on a combination of artificial intelligence techniques: artificial neural networks and rule-based expert systems. Two case studies of the system's use are discussed: air pollution analysis and flood forecasting, with their impact on the environment and on population health. The system can be used by an environmental decision support system to manage various environmental critical situations (such as floods and environmental pollution) and to inform the population about the state of environment quality. (An Artificial Intelligence-Based Environment Quality Analysis System – https://link.springer.com/chapter/10.1007/978-3-642-23957-1_55)

AI non-profit to track air pollution from every power plant in the world and make data public

A nonprofit artificial intelligence firm called WattTime is going to use satellite imagery to precisely track the air pollution (including carbon emissions) coming out of every single power plant in the world, in real time. And it’s going to make the data public. This system promises to effectively eliminate poor monitoring and gaming of emissions data.

The plan is to use data from satellites that make their imagery publicly available, as well as data from a few private companies that charge for theirs. The images will be processed by various algorithms to detect signs of emissions. Google.org, Google's philanthropic wing, is getting the project off the ground…with a $1.7 million grant. WattTime made a splash earlier this year with Automated Emissions Reduction (AER), a program that uses real-time grid data and machine learning to determine exactly when the grid is producing the cleanest electricity.

Author: David Roberts, Vox, Published on: 8 May 2019

“We’ll soon know the exact air pollution from every power plant in the world. That’s huge,” 7 May 2019. (https://www.business-humanrights.org/en/ai-non-profit-to-track-air-pollution-from-every-power-plant-in-the-world-and-make-data-public)

A fresher breeze: How AI can help improve air quality

As part of our AI for Earth commitment, Microsoft supports five projects from Germany in the areas of environmental protection, biodiversity and sustainability. In the next few weeks, we will introduce the project teams and their innovative ideas that made the leap into our global programme and group of AI for Earth grantees.

AI for Earth

The AI for Earth program helps researchers and organizations use artificial intelligence to develop new approaches to protect water, agriculture, biodiversity and the climate. Over the next five years, Microsoft will invest $50 million in AI for Earth. To become part of the program, developers, researchers and organizations can apply with their idea for a so-called “grant”. If you manage to convince the jury of Microsoft representatives, you'll receive financial and technological support and also benefit from knowledge transfer and contacts within the global AI for Earth network. As part of Microsoft Berlin's EarthLab and beyond, five ideas were convincing and will become part of the “AI for Earth” program, further promoting their environmental innovations. (https://news.microsoft.com/europe/2019/08/20/a-fresher-breeze-how-ai-can-help-improve-air-quality/)

Artificial Intelligence For Air Quality Control Systems: A Holistic Approach

Abstract

Recent environmental regulations introduced by the United States Environmental Protection Agency, such as the Mercury and Air Toxics Standards and Hazardous Air Pollution Standards, have challenged environmental particulate control equipment, especially electrostatic precipitators, to operate beyond their design specifications. The impact is exacerbated in power plants burning a wide range of low- and high-ranking fossil fuels and relying on co-benefits from upstream processes such as the selective catalytic reactor and boilers. To alleviate and mitigate the challenge, this manuscript presents the utilization of modern machine learning and artificial intelligence algorithms for improving the efficiency and performance of electrostatic precipitators, reflecting a holistic approach by considering upstream processes as model parameters. In addition, the paper discusses input relevance algorithms for neural networks and random forests, such as partial derivatives, input perturbation and Gini importance, comparing their performance and applicability for our case study. Our approach comprises applying random forest and neural network algorithms to an electrostatic precipitator, extending the model to include upstream process parameters such as the selective catalytic reactor and the air heaters. To study variable importance differences and model generalization performance between our employed algorithms, we developed a statistical approach to compare the impact of feature data distributions on input relevance.
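Of the input relevance techniques the abstract lists, input perturbation is the easiest to sketch: jitter one input at a time and measure how much the model's output moves. The toy "precipitator model" and parameter names below are assumptions for illustration, not the paper's actual ESP model:

```python
import random
from statistics import mean

random.seed(0)

# Toy stand-in for the precipitator model (assumed form and parameter
# names): collection efficiency as a function of two upstream parameters.
def esp_efficiency(scr_load, flue_temp):
    return 0.9 - 0.05 * scr_load + 0.001 * flue_temp

def perturbation_importance(model, samples, n_inputs, noise=0.5):
    """Input-perturbation relevance: jitter one input at a time and
    record the mean absolute change in the model output."""
    scores = []
    for i in range(n_inputs):
        deltas = []
        for x in samples:
            x_pert = list(x)
            x_pert[i] += random.gauss(0, noise)
            deltas.append(abs(model(*x_pert) - model(*x)))
        scores.append(mean(deltas))
    return scores

samples = [(random.uniform(0, 1), random.uniform(300, 400)) for _ in range(200)]
scores = perturbation_importance(esp_efficiency, samples, 2)
# scores[0] (SCR load) should dominate scores[1] (flue temperature),
# since the toy model is far more sensitive to the first input.
```

The same loop works unchanged for a trained neural network or random forest; only the `model` callable changes.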

Read more here:

 (https://ieeexplore.ieee.org/document/8635295)

Artificial intelligence based approach to forecast PM2.5 during haze episodes: A case study of Delhi, India

Highlights

•Neural network and fuzzy logic are combined for forecasting of PM2.5 during haze conditions.

•The haze occurs when the level of PM2.5 is more than 50 μg/m3 and relative humidity is less than 90%.

•The neuro-fuzzy model forecasts haze episodes over the urbanized area better than the ANN and MLR models.

Abstract

Delhi has been listed as the worst performer across the world with respect to alarmingly high levels of haze episodes, exposing its residents to a host of diseases including respiratory disease, chronic obstructive pulmonary disorder and lung cancer. This study aimed to analyze the haze episodes in a year and to develop forecasting methodologies for them. Air pollutants, e.g., CO, O3, NO2, SO2 and PM2.5, as well as meteorological parameters (pressure, temperature, wind speed, wind direction index, relative humidity, visibility, dew point temperature, etc.) have been used in the present study to analyze the haze episodes in the Delhi urban area. The nature of these episodes, their possible causes, and their major features are discussed in terms of fine particulate matter (PM2.5) and relative humidity. The correlation matrix shows that temperature, pressure, wind speed, O3, and dew point temperature are the dominating variables for PM2.5 concentrations in Delhi. The hour-by-hour analysis of past data patterns at different monitoring stations suggests that haze hours occurred in approximately 48% of the total observed hours in the year 2012 over the Delhi urban area. The haze hour forecasting models in terms of PM2.5 concentrations (more than 50 μg/m3) and relative humidity (less than 90%) have been developed through artificial intelligence based neuro-fuzzy (NF) techniques and compared with other modeling techniques, e.g., multiple linear regression (MLR) and artificial neural network (ANN). The haze hour data for nine months, i.e., January to September, were chosen for training, and the remaining three months, i.e., October to December 2012, were chosen for validation of the developed models. The forecasted results are compared with the observed values using different statistical measures, e.g., correlation coefficient (R), normalized mean square error (NMSE), fractional bias (FB) and index of agreement (IOA).
The analysis indicated R values of 0.25 for MLR, 0.53 for ANN and 0.72 for NF between the observed and predicted PM2.5 concentrations during haze hours in the validation period. The results show that the artificial intelligence implementations have a more reasonable agreement with the observed values. Finally, it can be concluded that the artificial intelligence based NF model forecasts haze episodes in the Delhi urban area better than the ANN and MLR models.
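The four statistical measures named in the abstract (R, NMSE, FB, IOA) have standard definitions and are easy to implement directly. A self-contained sketch with made-up observed/predicted PM2.5 values:

```python
from math import sqrt
from statistics import mean

def eval_forecast(obs, pred):
    """Standard air-quality model metrics: correlation coefficient (R),
    normalized mean square error (NMSE), fractional bias (FB) and
    index of agreement (IOA)."""
    mo, mp = mean(obs), mean(pred)
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    r = cov / sqrt(sum((o - mo) ** 2 for o in obs) *
                   sum((p - mp) ** 2 for p in pred))
    nmse = mean([(o - p) ** 2 for o, p in zip(obs, pred)]) / (mo * mp)
    fb = 2 * (mo - mp) / (mo + mp)
    ioa = 1 - sum((o - p) ** 2 for o, p in zip(obs, pred)) / \
        sum((abs(p - mo) + abs(o - mo)) ** 2 for o, p in zip(obs, pred))
    return r, nmse, fb, ioa

# Invented hourly PM2.5 values (ug/m3), for illustration only.
obs = [60.0, 80.0, 120.0, 150.0, 90.0]
pred = [65.0, 75.0, 110.0, 160.0, 95.0]
r, nmse, fb, ioa = eval_forecast(obs, pred)
```

A good forecast has R and IOA near 1, and NMSE and |FB| near 0; the paper's model comparison is exactly a comparison of these numbers across MLR, ANN and NF.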

Read more here:

(https://www.sciencedirect.com/science/article/abs/pii/S1352231014009157)

Artificial intelligence modeling to evaluate field performance of photocatalytic asphalt pavement for ambient air purification

Abstract

In recent years, the application of titanium dioxide (TiO2) as a photocatalyst in asphalt pavement has received considerable attention for purifying ambient air from traffic-emitted pollutants via photocatalytic processes. In order to control the increasing deterioration of ambient air quality, urgent and proper risk assessment tools are deemed necessary. However, in practice, monitoring all process parameters for various operating conditions is difficult due to the complex and non-linear nature of air pollution-based problems. Therefore, the development of models to predict air pollutant concentrations is very useful because it can provide early warnings to the population and also reduce the number of measuring sites. This study used artificial neural network (ANN) and neuro-fuzzy (NF) models to predict NOx concentration in the air as a function of traffic count (Tr) and climatic conditions including humidity (H), temperature (T), solar radiation (S), and wind speed (W) before and after the application of TiO2 on the pavement surface. These models are useful because of their ability to be trained using historical data and their capability for modeling highly non-linear relationships. To build these models, data were collected from a field study where an aqueous nano TiO2 solution was sprayed on a 0.2-mile section of asphalt pavement in Baton Rouge, LA. Results of this study showed that the NF model provided a better fit to NOx measurements than the ANN model in the training, validation, and test steps. Results of a parametric study indicated that traffic level, relative humidity, and solar radiation had the most influence on photocatalytic efficiency.
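The study's models are trained on field data we don't have, but the underlying regression idea can be sketched with synthetic data: a single linear neuron fitted by stochastic gradient descent to predict NOx from scaled traffic count and wind speed. Every number below is invented for illustration, and a real ANN would add hidden layers and nonlinearities:

```python
import random

random.seed(1)

# Synthetic stand-in data (not the paper's field study): NOx rises with
# traffic count (Tr) and falls with wind speed (W).
def make_sample():
    tr, w = random.uniform(100, 1000), random.uniform(0, 5)
    nox = 0.02 * tr - 1.5 * w + 5.0
    return tr / 1000.0, w / 5.0, nox  # inputs scaled to roughly [0, 1]

data = [make_sample() for _ in range(200)]

# A single linear neuron trained by stochastic gradient descent: the
# simplest possible stand-in for the study's ANN regressor.
w1, w2, b, lr = 0.0, 0.0, 0.0, 0.05
for _ in range(300):
    for x1, x2, y in data:
        err = (w1 * x1 + w2 * x2 + b) - y
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

pred = [w1 * x1 + w2 * x2 + b for x1, x2, _ in data]
mae = sum(abs(p - y) for p, (_, _, y) in zip(pred, data)) / len(data)
```

After training, the learned weights should recover the signs of the synthetic relationship: positive for traffic, negative for wind speed.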

Read more here:

 (https://link.springer.com/article/10.1007/s11356-014-2821-z)

Neuro Fuzzy Modeling Scheme for the Prediction of Air Pollution

Abstract

Artificial intelligence techniques based on fuzzy logic and neural networks are frequently applied together. The reasons to combine these two paradigms come from the difficulties and inherent limitations of each isolated paradigm. Hybrids of Artificial Neural Networks (ANN) and Fuzzy Inference Systems (FIS) have attracted growing interest among researchers in various scientific and engineering areas due to the growing need for adaptive intelligent systems to solve real-world problems. An ANN learns from scratch by adjusting the interconnections between layers. A FIS is a popular computing framework based on the concepts of fuzzy set theory, fuzzy if-then rules, and fuzzy reasoning. The structure of the model is based on a three-layered neural fuzzy architecture with a back-propagation learning algorithm. The main objective of this paper is twofold. The first objective is to develop a fuzzy controller scheme for predicting the change of NO2 or SO2 over urban zones based on measurements of NO2 or SO2 over defined industrial sources. The second objective is to develop a neural net (NN) scheme for predicting O3 based on NO2 and SO2 measurements.
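The fuzzy-controller idea in the first objective can be sketched as a minimal Mamdani-style rule base: fuzzify the source measurement into low/medium/high memberships, fire one rule per grade, and defuzzify by a weighted average. The membership breakpoints and rule outputs below are invented for illustration, not the paper's actual rules:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_urban_no2(industrial_no2):
    """Map NO2 measured at an industrial source (ug/m3) to an urban-zone
    estimate via low/medium/high rules, defuzzified by a weighted
    average of the rule outputs (assumed rule base)."""
    rules = [
        (tri(industrial_no2, -1, 0, 40), 10.0),     # low source  -> low urban
        (tri(industrial_no2, 20, 50, 80), 30.0),    # medium      -> medium
        (tri(industrial_no2, 60, 100, 141), 60.0),  # high        -> high
    ]
    num = sum(weight * out for weight, out in rules)
    den = sum(weight for weight, _ in rules)
    return num / den if den else 0.0
```

In the paper's hybrid scheme the rule shapes would be tuned by the neural component rather than hand-written as here.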

Read more here:

 (https://pdfs.semanticscholar.org/1fee/92567748cc2fd4530557bd8d8ebf6395d4e5.pdf)

Sensing the Air We Breathe — The OpenSense Zurich Dataset

Abstract

Monitoring and managing urban air pollution is a significant challenge for the sustainability of our environment. We quickly survey the air pollution modeling problem, introduce a new dataset of mobile air quality measurements in Zurich, and discuss the challenges of making sense of these data.

Read more here:

 (https://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/view/4896/5158)

This article is good for getting started and gives a dataset to work with!

Development of artificial intelligence based NO2 forecasting models at Taj Mahal, Agra

Abstract

Statistical regression and computational intelligence based models are presented in this paper for the forecasting of hourly NO2 concentrations at a historical monument, the Taj Mahal in Agra. The model was developed for the purpose of public health oriented air quality forecasting. Analysis of the last ten years of air pollution data reveals that the concentration of air pollutants has increased significantly, and that pollution levels are always higher during November around the Taj Mahal. Therefore, hourly observed data during November were used in the development of air quality forecasting models for Agra, India. Firstly, multiple linear regression (MLR) was used to build an air quality forecasting model for NO2 concentrations at Agra. Further, a novel approach based on principal component analysis (PCA) was used to find the correlations of different predictor variables between meteorology and air pollutants. The significant variables were then taken as input parameters for a reliable artificial neural network (ANN) multilayer perceptron model for forecasting air pollution in Agra. The MLR and PCA–ANN models were evaluated through statistical analysis. The correlation coefficients (R) were 0.89 and 0.91, respectively, for PCA–ANN and 0.69 and 0.89, respectively, for MLR in the training and validation periods. Similarly, the values of normalized mean square error (NMSE), index of agreement (IOA) and fractional bias (FB) were in good agreement with the observed values. It was concluded that the PCA–ANN model performs better and can be used for forecasting air pollution at the Taj Mahal, Agra.
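The PCA-then-regression pipeline can be sketched in a few lines for two correlated predictors: standardize them, project onto the first principal component (for a 2×2 correlation matrix the leading eigenvector is (1, 1)/√2 when the correlation is positive), then fit ordinary least squares on the component scores. The temperature/pressure/NO2 numbers are toy values, not the Agra data:

```python
from math import sqrt
from statistics import mean, stdev

# Toy predictors (invented): temperature and pressure both track a
# common meteorological factor that also drives NO2.
temp  = [10.0, 12.0, 15.0, 18.0, 21.0, 24.0, 26.0, 29.0]
press = [1010.0, 1011.5, 1013.0, 1016.0, 1018.0, 1021.0, 1022.5, 1025.0]
no2   = [20.0, 23.0, 27.0, 33.0, 37.0, 42.0, 45.0, 50.0]

def zscores(xs):
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

# PCA step: first principal component of the two standardized predictors.
t, p = zscores(temp), zscores(press)
r_tp = sum(a * b for a, b in zip(t, p)) / (len(t) - 1)
pc1 = [(a + b) / sqrt(2) for a, b in zip(t, p)]  # valid since r_tp > 0

# Regression step: ordinary least squares of NO2 on PC1 (pc1 has zero
# mean, so the intercept is just the mean of NO2).
my = mean(no2)
slope = sum(x * (y - my) for x, y in zip(pc1, no2)) / sum(x * x for x in pc1)
pred = [slope * x + my for x in pc1]
```

With more predictors the PCA step needs a full eigendecomposition, and the paper replaces the final linear fit with a multilayer perceptron, but the data flow is the same.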

Read more here:

 (https://reader.elsevier.com/reader/sd/pii/S1309104215302567?token=9C6D5D566E2D1892B932A804377D82A742BDCA1C793AFEF1C357C03B4004FC916FE4196F6EDEC2F9CA093DB4E1B54E8C)

A Novel Air Quality Early-Warning System Based on Artificial Intelligence

Abstract

The problem of air pollution is a persistent issue for mankind that has become increasingly serious in recent years and has drawn worldwide attention. Establishing a scientific and effective air quality early-warning system is significant and important. Regrettably, previous research has not thoroughly explored either air pollutant prediction or air quality evaluation, and relevant research work is still scarce, especially in China. Therefore, a novel air quality early-warning system composed of prediction and evaluation was developed in this study. Firstly, the advanced data preprocessing technology Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (ICEEMDAN), combined with the powerful swarm intelligence algorithm Whale Optimization Algorithm (WOA) and the efficient artificial neural network Extreme Learning Machine (ELM), formed the prediction model. Then the predictive results were further analyzed by the method of fuzzy comprehensive evaluation, which offered intuitive air quality information and corresponding measures. The proposed system was tested in the Jing-Jin-Ji region of China, a representative research area, and two years of daily concentration data for six main air pollutants in Beijing, Tianjin, and Shijiazhuang were used to validate its accuracy and efficiency. The proposed system is therefore believed to have an important role to play in air pollution control and smart city construction around the world.
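The evaluation half of such a system can be sketched as a fuzzy comprehensive evaluation of a predicted concentration: compute the membership of the forecast in each quality grade and report the grade with the highest degree, plus an advisory. The grade breakpoints below are assumptions loosely inspired by China's PM2.5 bands, not the paper's actual membership functions:

```python
def membership(x, left, peak, right):
    """Triangular membership degree of x in a fuzzy grade."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

# Assumed grade centres in ug/m3 PM2.5 (illustrative only).
GRADES = [
    ("excellent", -1, 0, 50),
    ("good", 25, 60, 95),
    ("lightly polluted", 70, 110, 150),
    ("heavily polluted", 120, 200, 10**9),
]

def evaluate(pm25_forecast):
    """Map a predicted PM2.5 concentration to the grade with the highest
    membership and a corresponding advisory message."""
    degrees = [(name, membership(pm25_forecast, a, b, c)) for name, a, b, c in GRADES]
    name, _ = max(degrees, key=lambda d: d[1])
    advice = "limit outdoor activity" if "polluted" in name else "no action needed"
    return name, advice
```

In the paper this step runs on the output of the ICEEMDAN-WOA-ELM predictor; here any number can be passed in, e.g. `evaluate(130)`.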

Read more here:

 (https://www.mdpi.com/1660-4601/16/19/3505)

How AI and IoT could help people combat air pollution issues

It is little surprise that the UN's 2019 World Environment Day is a call to action to #beatairpollution. IT, as a sector, influences air quality through the energy used to drive our electronics and data centers and, indeed, through business travel. With a large-scale industry presence in Asia, home to some of the most polluted cities in the world, we need to do what we can to minimize these impacts.

But technology can also be part of the solution. Last year, Capgemini announced a new global ambition to leverage technology to help organizations with their sustainability challenges, recognizing that this is the biggest impact we can make. Technology can be an enabler to help address prevention at source, helping organizations optimize their operations and reduce their impact. But with 4.2 million deaths every year as a result of exposure to ambient outdoor air pollution, how can we also leverage technology to monitor, inform, and ultimately change the behaviors of those most affected as they head into our many cities?

The advances in technology give us the opportunity to reach people directly and build a more sophisticated monitoring and communication network. We could leverage both artificial intelligence (AI) and the internet of things (IoT) with the capabilities of an increasing range of personal devices, whether the 2.5 billion smartphones or the estimated 278 million smartwatches in the world. Indeed, the wearable health and fitness technology sector is set to grow 10–20% in the next five years, with an expanding set of capabilities. These devices measure elements such as heart rate, blood pressure, and breathing rate, which are indicators of overall health and are also measurables that change with exposure to air pollutants such as PM, nitrogen oxides and sulfur oxides. They also capture spatial and GPS data which, if combined, could demonstrate the impact of the external environment on health factors and better inform people of the issues. Data from different sources and AI technology could allow us to drill down on very local issues.

If we overlay current air quality monitoring data sources onto an individual's location, we could give a very precise prediction of local air quality issues. We could then integrate AI to refine this and include a wider range of factors such as weather conditions and traffic levels. Added to this, if automatic number plate recognition (ANPR) is integrated, we could discern the proportion of vehicle fuel types being used in specific locations. This is important because diesel vehicles emit 90% of particulate matter.

Data analytics over time would allow people to understand impacts on their health – and change behavior.

Over time, as an individual's health and diagnostics data are fed into a data analytics model alongside their own spatial data and air pollution exposure data, they could receive an analysis of how air pollution is affecting their physiology, along with tailored suggested actions. The ability to overlay air quality data on a Google Map of your walk to school or work could, instead of highlighting traffic congestion, show air quality issues and provide options to re-route around them or to shift the start time of a journey.
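The re-routing idea boils down to a scoring problem: weight each route segment by pollutant concentration times time spent, then pick the route with the lowest cumulative exposure rather than the shortest travel time. The routes and numbers below are invented for illustration:

```python
# Hypothetical candidate routes, each a list of (PM2.5 ug/m3, minutes)
# segments estimated from monitoring data along the way.
routes = {
    "main road": [(55.0, 6.0), (70.0, 5.0), (60.0, 4.0)],
    "park path": [(20.0, 8.0), (25.0, 7.0), (18.0, 5.0)],
}

def exposure(segments):
    """Cumulative exposure proxy: concentration x time, summed."""
    return sum(conc * minutes for conc, minutes in segments)

def cleanest_route(routes):
    """Pick the route with the lowest cumulative exposure."""
    return min(routes, key=lambda name: exposure(routes[name]))
```

Here the park path takes longer but accumulates less than half the exposure of the main road, which is exactly the trade-off such an app would surface.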

Read more here:

(https://www.capgemini.com/2019/06/beatairpollution-how-ai-and-iot-could-help-people-combat-air-pollution-issues/)

So, this time we listed some novel AI solutions to the environmental problem of air pollution. Next time we cover this topic, expect our idea for combining Smart Imaging and AI in a Smart City solution for cleaner air. Do you have any suggestions?

Artificial Intelligence is getting better – latest news and trends in AI concerning image processing

Artificial intelligence is now part of new, more useful applications, and it is getting better. In this blog post we will present some of these new and interesting AI apps. And let us just mention that, starting from this blog post, every couple of months we will show and discuss news and trends in the image processing field, including new papers, research and applications!

And now, let’s start with news from our favorite, NVIDIA. What is NVIDIA up to?

AI can Detect Open Parking Spaces

With as many as 2 billion parking spaces in the United States, finding an open spot in a major city can be complicated. To help city planners and drivers more efficiently manage and find open spaces, MIT researchers developed a deep learning-based system that can automatically detect open spots from a video feed.

“Parking spaces are costly to build, parking payments are difficult to enforce, and drivers waste an excessive amount of time searching for empty lots,” the researchers stated in their paper.

Article from:
https://news.developer.nvidia.com/ai-algorithm-aims-to-help-you-find-a-parking-spot/

New AI Imaging Technique Reconstructs Photos with Realistic Results

Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can reconstruct a corrupted image, one that has holes or is missing pixels. The method, which performs a process called “image inpainting”, can also be used to edit images by removing unwanted content and filling the resulting holes with a realistic computer-generated alternative, and could be implemented in photo editing software.

“Our model can robustly handle holes of any shape, size, location, or distance from the image borders. Previous deep learning approaches have focused on rectangular regions located around the center of the image, and often rely on expensive post-processing,” the NVIDIA researchers stated in their research paper.

Article from:
https://news.developer.nvidia.com/new-ai-imaging-technique-reconstructs-photos-with-realistic-results/

AI Can Now Fix Your Grainy Photos by Only Looking at Grainy Photos

What if you could take your photos that were originally taken in low light and automatically remove the noise and artifacts? Have grainy or pixelated images in your photo library and want to fix them? This deep learning-based approach has learned to fix photos by simply looking at examples of corrupted photos only. The work was developed by researchers from NVIDIA, Aalto University, and MIT, and was presented at the International Conference on Machine Learning in Stockholm, Sweden.

Recent deep learning work in the field has focused on training a neural network to restore images by showing example pairs of noisy and clean images. The AI then learns how to make up the difference. This method differs because it only requires pairs of noisy images, with no clean reference at all.

Without ever being shown what a noise-free image looks like, this AI can remove artifacts, noise, grain, and automatically enhance your photos.

“It is possible to learn to restore signals without ever observing clean ones, at performance sometimes exceeding training using clean exemplars,” the researchers stated in their paper.
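The statistical trick behind this result can be demonstrated in one dimension: with an L2 loss, a model regressed against noisy targets converges to their mean, and for zero-mean noise that mean is the clean signal. A toy sketch, with per-pixel averaging standing in for the network (NVIDIA's actual method trains a deep CNN, not this):

```python
import random

random.seed(0)

# A known clean 1-D "image" and a generator of noisy copies of it.
clean = [float(i % 7) for i in range(64)]

def noisy():
    return [c + random.gauss(0, 1.0) for c in clean]

# "Train" the simplest possible model, a per-pixel estimate, by
# averaging noisy TARGETS over many training pairs; no clean signal
# is ever observed.
n_pairs = 2000
estimate = [0.0] * len(clean)
for _ in range(n_pairs):
    target = noisy()  # the regression target is itself noisy
    for i, t in enumerate(target):
        estimate[i] += t / n_pairs

max_err = max(abs(e - c) for e, c in zip(estimate, clean))
```

Despite never seeing `clean`, the averaged estimate lands very close to it, which is the intuition the quote above compresses into one sentence.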

Article from:
https://news.developer.nvidia.com/ai-can-now-fix-your-grainy-photos-by-only-looking-at-grainy-photos/

AI Model Can Generate Images from Natural Language Descriptions

To potentially improve natural language queries, including the retrieval of images from speech, researchers from IBM and the University of Virginia developed a deep learning model that can generate objects and their attributes from natural language descriptions.

“We show that under minor modifications, the proposed framework can handle the generation of different forms of scene representations, including cartoon-like scenes, object layouts corresponding to real images, and synthetic images,” the researchers stated in their paper.

Article from:
https://news.developer.nvidia.com/ai-model-can-generate-images-from-natural-language-descriptions/

Now, some new research papers from different fields that use AI as well as image processing:

Digital image analysis in breast pathology—from image processing techniques to artificial intelligence 

From: https://www.sciencedirect.com/science/article/pii/S1931524417302955 

Abstract: Breast cancer is the most common malignant disease in women worldwide. In recent decades, earlier diagnosis and better adjuvant therapy have substantially improved patient outcome. Diagnosis by histopathology has proven to be instrumental to guide breast cancer treatment, but new challenges have emerged as our increasing understanding of cancer over the years has revealed its complex nature. As patient demand for personalized breast cancer therapy grows, we face an urgent need for more precise biomarker assessment and more accurate histopathologic breast cancer diagnosis to make better therapy decisions. The digitization of pathology data has opened the door to faster, more reproducible, and more precise diagnoses through computerized image analysis. Software to assist diagnostic breast pathology through image processing techniques has been around for years. But recent breakthroughs in artificial intelligence (AI) promise to fundamentally change the way we detect and treat breast cancer in the near future. Machine learning, a subfield of AI that applies statistical methods to learn from data, has seen an explosion of interest in recent years because of its ability to recognize patterns in data with less need for human instruction. One technique in particular, known as deep learning, has produced groundbreaking results in many important problems including image classification and speech recognition. In this review, we will cover the use of AI and deep learning in diagnostic breast pathology, and other recent developments in digital image analysis.

Predicting tool life in turning operations using neural networks and image processing

From: https://www.sciencedirect.com/science/article/pii/S088832701730599X 

Abstract: A two-step method is presented for the automatic prediction of tool life in turning operations. First, experimental data are collected for three cutting edges under the same constant processing conditions. In these experiments, the parameter of tool wear, VB, is measured with conventional methods, and the same parameter is estimated using Neural Wear, a customized software package that combines flank wear image recognition and Artificial Neural Networks (ANNs). Second, an ANN model of tool life is trained with the data collected from the first two cutting edges, and the resulting model is evaluated on two different subsets for the third cutting edge: the first subset is obtained from the direct measurement of tool wear and the second is obtained from the Neural Wear software that estimates tool wear using edge images. Although the fully automated solution (Neural Wear software for tool wear recognition plus the ANN model of tool life prediction) presented a slightly higher error than the direct measurements, it was within the same range and can meet all industrial requirements. These results confirm that the combination of image recognition software and ANN modelling could potentially be developed into a useful industrial tool for low-cost estimation of tool life in turning operations.
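The evaluation protocol above (train on two cutting edges, test on the third) is essentially a grouped train/test split. A schematic sketch with invented wear data, using a trivial least-squares wear-rate fit as a stand-in for the paper's ANN:

```python
# Invented flank-wear observations per cutting edge: (minutes, VB in mm).
wear_data = {
    "edge1": [(10, 0.05), (20, 0.11), (30, 0.16)],
    "edge2": [(10, 0.06), (20, 0.10), (30, 0.17)],
    "edge3": [(10, 0.05), (20, 0.12), (30, 0.15)],
}

def fit_wear_rate(edges):
    """Fit VB ~ rate * time (least squares through the origin) on the
    training edges; a stand-in for the paper's ANN tool-life model."""
    pts = [p for e in edges for p in wear_data[e]]
    return sum(t * vb for t, vb in pts) / sum(t * t for t, _ in pts)

# Train on the first two edges, evaluate on the held-out third edge.
rate = fit_wear_rate(["edge1", "edge2"])
test_err = max(abs(rate * t - vb) for t, vb in wear_data["edge3"])
```

Splitting by edge rather than by random sample is the important part: it tests whether the model generalizes to a cutting edge it has never seen, which is what the paper's industrial use case requires.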

Automatic food detection in egocentric images using artificial intelligence technology 

From:
https://www.cambridge.org/core/journals/public-health-nutrition/article/automatic-food-detection-in-egocentric-images-using-artificial-intelligence-technology/CAE3262B945CC45E4B14C06C83A68F42  

Abstract:

Objective: To develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment.

Design: To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable device, called eButton, from free-living individuals. Three thousand nine hundred images containing real-world activities, which formed eButton data set 1, were manually selected from thirty subjects. eButton data set 2 contained 29 515 images acquired from a research participant in a week-long unrestricted recording. They included both food- and non-food-related real-life activities, such as dining at both home and restaurants, cooking, shopping, gardening, housekeeping chores, taking classes, gym exercise, etc. All images in these data sets were classified as food/non-food images based on their tags generated by a convolutional neural network.

Results: A cross data-set test was conducted on eButton data set 1. The overall accuracy of food detection was 91·5 and 86·4 %, respectively, when one-half of data set 1 was used for training and the other half for testing. For eButton data set 2, 74·0 % sensitivity and 87·0 % specificity were obtained if both ‘food’ and ‘drink’ were considered as food images. Alternatively, if only ‘food’ items were considered, the sensitivity and specificity reached 85·0 and 85·8 %, respectively.

Conclusions: The AI technology can automatically detect foods from low-quality, wearable camera-acquired real-world egocentric images with reasonable accuracy, reducing both the burden of data processing and privacy concerns.
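For readers unfamiliar with the sensitivity and specificity figures reported above: sensitivity is the fraction of true food images the detector catches, and specificity is the fraction of non-food images it correctly rejects. The counts in this sketch are invented for illustration, not taken from the paper.

```python
# Sketch of the evaluation metrics used in the abstract, on made-up labels.

def sensitivity_specificity(labels, preds):
    """labels/preds: lists of 'food' or 'non-food' strings."""
    tp = sum(1 for y, p in zip(labels, preds) if y == "food" and p == "food")
    fn = sum(1 for y, p in zip(labels, preds) if y == "food" and p == "non-food")
    tn = sum(1 for y, p in zip(labels, preds) if y == "non-food" and p == "non-food")
    fp = sum(1 for y, p in zip(labels, preds) if y == "non-food" and p == "food")
    sens = tp / (tp + fn)  # proportion of food images correctly detected
    spec = tn / (tn + fp)  # proportion of non-food images correctly rejected
    return sens, spec

# Hypothetical ground truth (8 food, 12 non-food) and detector output:
labels = ["food"] * 8 + ["non-food"] * 12
preds = ["food"] * 6 + ["non-food"] * 2 + ["non-food"] * 10 + ["food"] * 2

sens, spec = sensitivity_specificity(labels, preds)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # → sensitivity=0.75, specificity=0.83
```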

Bioinformatics and Image Processing—Detection of Plant Diseases 

From:
https://link.springer.com/chapter/10.1007/978-981-13-1580-0_14 

Abstract:

This paper shows how a combination of image processing and bioinformatics can detect deadly diseases in plants and agricultural crops. Such diseases are not recognizable by the naked human eye; their first occurrence is microscopic in nature. When plants are affected by diseases of this kind, the quality of their production deteriorates. We need to correctly identify the symptoms, treat the diseases and improve production quality. Computers can help make the correct decisions and can support industrialization of the detection work. We present in this paper a technique for image segmentation using the HSI (hue, saturation, intensity) colour model to classify various categories of disease; the technique can also distinguish different types of plant disease. Genetic algorithms (GA) have repeatedly proven useful in image segmentation.
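The HSI colour model mentioned in the abstract separates hue (the colour itself) from intensity, which makes colour-based segmentation more robust to lighting. The sketch below converts RGB pixels to HSI with the standard formula and flags pixels whose hue falls outside an assumed healthy-green band; the hue thresholds and sample pixels are illustrative, not from the paper.

```python
import math

# Hypothetical sketch of HSI-based segmentation: convert RGB to
# hue/saturation/intensity and flag non-green (possibly diseased) pixels.

def rgb_to_hsi(r, g, b):
    """r, g, b in [0, 1]; returns (hue in degrees, saturation, intensity)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) or 1e-12  # avoid /0 on greys
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i

def looks_diseased(pixel, green_range=(70.0, 160.0)):
    """Flag pixels whose hue is outside the (assumed) healthy-green band."""
    h, s, _ = rgb_to_hsi(*pixel)
    return s > 0.1 and not (green_range[0] <= h <= green_range[1])

leaf = [(0.2, 0.7, 0.2),    # healthy green
        (0.5, 0.3, 0.1),    # brownish lesion
        (0.3, 0.6, 0.25)]   # healthy green

mask = [looks_diseased(p) for p in leaf]
print(mask)  # → [False, True, False]
```

A real system would run this per pixel over a leaf image and then refine the binary mask, e.g. with a genetic algorithm as the paper suggests.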

And finally, some news from the public sector and applied algorithms:

China Now has Facial Recognition Based Toilets 

China has integrated facial recognition into public toilets across the country. Citizens now need WeChat or a face scan to get toilet paper: a person stands on the yellow recognition spot and brings their face near the identification machine, and after about three seconds, 90 centimeters of toilet paper is dispensed. Occupancy is also time-limited, as an alarm buzzes if someone stays too long. Inside the toilet, sensors measure ammonia levels and spray a deodorant if required. Two bathrooms were fitted with face scanners for being “clean and convenient” and for “reducing toilet paper waste.”

Read more here:
https://www.aitechnologies.com/china-now-has-facial-recognition-based-toilets/ 

Apple’s Camera-Toting Watch Band Uses Facial Recognition For Flawless FaceTime Calls 

The U.S. Patent and Trademark Office has granted Apple a patent suggesting the tech titan wants to widen its wearable’s capabilities by integrating a novel camera system able to automatically crop subject matter, track objects such as the user’s face, and produce angle-adjusted avatars for FaceTime calls. Apple’s “Image-capturing watch,” U.S. Patent No. 10,129,503, describes a hardware and software solution for a camera-toting Apple Watch that is both handy and feasible. With a camera on the Watch, consumers could put aside a heavy handheld device while playing sports, exercising or doing other energetic activities; a feasible smartwatch camera, however, is hard to accomplish. The camera captures motion data, which the watch processes and maps onto a computer-generated picture that imitates the consumer’s facial movements and expressions in real time. Alternatively, the source motion data can be used to drive the motion of non-human avatars such as Apple’s Memoji and Animoji. It remains unknown whether Apple intends to bring its Apple Watch camera band tech to market.

Read more here:
https://www.aitechnologies.com/apples-camera-toting-watch-band-uses-facial-recognition-for-flawless-facetime-calls/

Metropolitan Police London is to Integrate Face Recognition Tech 

London’s police will trial face recognition tech as an experiment for two days. In the areas of Leicester Square, Piccadilly Circus and Soho in London, the technology will examine the faces in crowds and compare them against a database of individuals wanted by the courts and the Metropolitan Police. If the tech finds a match, the police officers in the field will review it and perform further checks to confirm the identity of that individual.

Read more here:
https://www.aitechnologies.com/metropolitan-police-london-is-to-integrate-face-recognition-tech/
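The matching step in systems like this typically works on face embeddings: each face is reduced to a numeric vector, and a probe face is compared against the watch-list by a similarity score with a decision threshold. The sketch below uses cosine similarity; the embeddings, names and threshold are all invented for illustration.

```python
import math

# Illustrative sketch of watch-list matching with cosine similarity.
# Real systems use high-dimensional embeddings from a face-recognition
# network; these 3-D vectors and the 0.95 threshold are made up.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(probe, watchlist, threshold=0.95):
    """Return the closest watch-list name above threshold, else None."""
    name, score = max(((n, cosine(probe, e)) for n, e in watchlist.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

watchlist = {"suspect_a": [0.9, 0.1, 0.4],
             "suspect_b": [0.1, 0.8, 0.6]}

print(best_match([0.88, 0.12, 0.42], watchlist))  # → suspect_a
print(best_match([0.5, 0.5, 0.5], watchlist))     # → None (below threshold)
```

The threshold trades false matches against missed matches, which is exactly why officers perform further checks before acting on a hit.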

That’s all for now, folks. But tell me, what do you think: in which areas will AI bring the most benefits? Where, in your opinion, is there room for more research? Can you believe it is possible to have AI solutions in everyday life?

All news items are drawn from the sites mentioned, where you can find the full text on each topic.