Tuesday, 19 April 2022

Smart Agriculture

According to the UN Food and Agriculture Organization, the population will increase by 2 billion by 2050, to a projected total of 10 billion people worldwide. However, only 4% more land will come under cultivation by then.


In this context, the use of the latest technological solutions to make farming more efficient remains one of the greatest imperatives. While Artificial Intelligence (AI) already sees plenty of direct application across sectors, it can also bring a paradigm shift in how we see farming today. AI-powered solutions will not only enable farmers to be more efficient, they will also improve the quality of crops.


The top five areas where AI-powered solutions can help agriculture are listed below:


1. IoT devices

Every day, massive amounts of structured and unstructured data are generated. These include historical weather patterns, soil reports, rainfall, pest infestation, and photographs from drones and cameras, among other things. Cognitive IoT solutions can sense all of this data and then deliver actionable insights to increase yield. Proximity sensing and remote sensing are two technologies commonly used to combine data intelligently.


2. Insight based on image recognition

Drone-based photos can assist with in-depth field analysis, crop monitoring, field scanning, and other tasks. Farmers can combine computer vision technology, IoT, and drone data to ensure rapid responses. Drone images provide real-time monitoring for farmers, and computer vision technology can be put to use for disease detection, crop-readiness identification, and field management.


3. Agronomic optimisation

Cognitive solutions give suggestions to farmers on the best crops and hybrid seeds based on many characteristics such as soil condition, weather forecast, type of seeds, infestation in a specific location, and so on. The advice can be tailored even further based on the farm's needs, local conditions, and historical data on successful farming. 


4. Crop health monitoring 

Developing crop metrics across thousands of acres requires remote sensing techniques, hyperspectral imaging, and 3D laser scanning. Together, these technologies have the potential to fundamentally change how farmers monitor fields, in terms of both time and effort. They can also be used to track crops and to generate reports when anomalies occur.


5. Automated irrigation

Irrigation is one of the most labor-intensive operations in agriculture. It can be automated, and total productivity increased, using machines trained on historical weather patterns, soil condition, and the type of crops to be cultivated.





Autonomous vehicles and artificial intelligence

Artificial intelligence (AI) and self-driving cars are often complementary topics in technology. Simply put, you cannot really discuss one without the other.


Autonomous vehicles (AVs) are equipped with multiple sensors, such as cameras, radar, and lidar, which help them understand their surroundings and plan a path. These sensors generate a massive amount of data, and to make sense of it, AVs need supercomputer-like, nearly instant processing capabilities. Companies developing AV systems rely heavily on AI, in the form of machine learning and deep learning, to process this data efficiently and to train and validate their autonomous driving systems.


The first use of AI for autonomous driving goes back to the second Defense Advanced Research Projects Agency (DARPA) Grand Challenge in 2005, which was won by the Stanford University Racing Team's autonomous robotic car 'Stanley'. The winning team, led by Sebastian Thrun, an associate professor of computer science and director of the Stanford Artificial Intelligence Laboratory, attributed the victory to the use of machine learning. Stanley was equipped with multiple sensors and backed by custom-written software, including machine learning algorithms, which helped the vehicle find a path, detect obstacles, and avoid them while staying on course.


Artificial intelligence powers self-driving vehicle systems. Engineers of self-driving vehicles use vast amounts of data from image recognition systems, together with machine learning and neural networks, to build systems that can drive autonomously. The neural networks identify patterns in the data, which is fed to the machine learning algorithms; that data includes images from the vehicle's cameras. The neural networks learn to recognize traffic lights, trees, curbs, pedestrians, street signs, and other parts of any given driving environment.



“The autonomous vehicle segment is the fastest growing segment in the automotive industry. Artificial Intelligence is indeed the most important and sophisticated component of self driving vehicles” (Carmody, Thomas, 2019).


The challenges in developing AI systems for something as complex as a self-driving vehicle are many. The AI has to interact with a multitude of sensors and has to use data in real time. Many AI algorithms are computationally intensive, which makes them hard to run on CPUs with memory and speed restrictions. Modern vehicles are real-time systems that have to produce deterministic results in the time domain; this is tied directly to driving safety. Complicated distributed systems like these require a lot of internal communication, which is prone to latency that can disturb the decision making of the AI algorithms. In addition, there is the issue of the power consumption of the software running in the car: the more intensive the AI algorithms, the more power they consume, which is a problem especially for electric vehicles that depend only on the charge of the battery (Carmody, Thomas, 2019).


AI is used for several important tasks in a self-driving automobile. One of the main tasks is path planning, that is, the navigation system of the vehicle (Sagar and Nanjundeswaraswamy, 2019). Another big task for AI is interacting with the sensory system and interpreting the data coming out of the sensors.


Google has also started to develop self-driving cars, which use a mix of sensors, light detectors, and technologies like GPS and cameras. The following are some basic notes on how a Google car works:

  • The driver sets a destination. The vehicle’s software calculates a route.

  • A rotating, roof-mounted lidar sensor monitors a 60-meter range around the vehicle and creates a dynamic three-dimensional (3D) map of the vehicle’s current environment.

  • A sensor on the left rear wheel monitors sideways movement to determine the vehicle’s position relative to the 3D map.

  • Radar systems on the front and rear bumpers calculate distances to obstacles.

  • Artificial intelligence software in the vehicle is connected to all of the sensors and gathers input from Google Street View and video cameras inside the vehicle.

  • The AI simulates human perceptual and decision-making processes using deep learning algorithms and controls actions in the driver-control systems, such as steering and braking.

  • The vehicle’s software consults Google Maps for advance notice of things like landmarks, traffic signs and lights, and other obstacles.

  • An override function is available to enable a human to take control of the vehicle.

Autonomous vehicles are starting to become a real possibility in some parts of industry (agriculture, transportation, and the military are some examples). The day when we will see autonomous vehicles in everyday life for the regular consumer is quickly approaching.


Bibliography:


  1. https://ihsmarkit.com/research-analysis/artificial-intelligence-driving-autonomous-vehicle-development.html

  2. https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry

  3. https://www.eescorporation.com/do-self-driving-cars-use-ai/

  4. https://www.embedded.com/the-role-of-artificial-intelligence-in-autonomous-vehicles/

Cartoonify Image


As you might know, sketching or creating a cartoon doesn’t always need to be done manually. Nowadays, many apps can turn your photos into cartoons. But what if I told you that you can create your own effect with a few lines of code?

There is a library called OpenCV which provides a common infrastructure for computer vision applications and has optimized machine learning algorithms. It can be used to recognize and detect objects and to produce high-resolution images.

To create a cartoon effect, we need to pay attention to two things: edges and the color palette. Those are what make the difference between a photo and a cartoon.


OpenCV

But first of all, let’s see what OpenCV actually is and how it works. OpenCV is an open-source Python library used for computer vision and machine learning. It is mainly aimed at real-time computer vision and image processing, and it is used to perform different operations on images, transforming them using different techniques. It supports several languages, such as Python, C++, and Java, and platforms such as Android; it is easy to use and in demand due to its features, and it is used to create image processing and rendering applications in different languages.

Now that we know that, let’s get to the actual process of cartoonifying an image.


Steps

1. First of all, we import the necessary modules (collected in the code sketch after this list):

    - cv2: imported to use OpenCV for image processing.

    - easygui: imported to open a file box; it allows us to select any file from our system.

    - NumPy: images are stored and processed as numbers, taken as arrays.

    - imageio: used to read the file chosen via the file box, using a path.

    - Matplotlib: this library is used for visualization and plotting.

    - os: for OS interaction; here, to read the path and save images to that path.

    - Flask: a micro web framework written in Python.
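A minimal import block for the sketch developed over the following steps (the GUI and Flask parts are left out here):

    import os                        # OS interaction: read paths, save images
    import cv2                       # OpenCV for image processing
    import easygui                   # opens a file box to select a file
    import imageio                   # reads a chosen file from a path
    import matplotlib.pyplot as plt  # visualization and plotting
    import numpy as np               # images as arrays of numbers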


2. In this step, we build the main window of our application, where the buttons, labels, and images will reside (sketched below with Tkinter, as one possible choice).
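A minimal window sketch, assuming Tkinter as the GUI toolkit (widget names and sizes are illustrative, not prescribed by the text):

    import tkinter as tk

    top = tk.Tk()
    top.geometry("400x400")                      # window size is an assumed value
    top.title("Cartoonify Your Image!")
    label = tk.Label(top, text="Cartoonify an Image", font=("calibri", 20))
    label.pack(pady=20)
    upload = tk.Button(top, text="Choose an image")  # callback wired up later
    upload.pack(side="top", pady=50)
    top.mainloop()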


3. Now, just think: how will a program read an image? For a computer, everything is just numbers, so we convert our image into a NumPy array. imread is a method in cv2 used to store images in the form of numbers, which lets us perform operations according to our needs. The image is read as a NumPy array in which the cell values hold the B, G, and R values of a pixel (OpenCV stores channels in BGR order).
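Continuing the sketch, we open a file box and read the chosen image:

    image_path = easygui.fileopenbox()   # the user picks a file from the system
    original = cv2.imread(image_path)    # NumPy array of pixel values (BGR order)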


4. Transforming an image to grayscale

cvtColor(image, flag) is a method in cv2 used to transform an image into the color space specified by ‘flag’. Our first step is to convert the image into grayscale, using the COLOR_BGR2GRAY flag; this returns the image in grayscale, which we store as grayScaleImage.
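Continuing the sketch:

    grayScaleImage = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)  # grayscale copy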



5. Smoothing the grayscale image

To smooth an image, we simply apply a blur effect using the medianBlur() function: the center pixel is assigned the median value of all the pixels that fall under the kernel, which creates the blur effect.
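Continuing the sketch (the kernel size 5 is an assumed value):

    smoothGrayScale = cv2.medianBlur(grayScaleImage, 5)  # median blur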


6. Retrieving the edges of an image

A cartoon effect has two ingredients: highlighted edges and smooth colors.

          6.1 Retrieving and highlighting the edges. This is attained by the adaptive thresholding technique. The threshold value is the mean of the neighborhood pixel values minus the constant C, where C is a constant subtracted from the mean or weighted sum of the neighborhood pixels. THRESH_BINARY is the type of threshold applied, and the remaining parameters determine the block size.
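Continuing the sketch (block size 9 and constant C = 9 are assumed values):

    getEdge = cv2.adaptiveThreshold(smoothGrayScale, 255,
                                    cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY, 9, 9)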


           6.2 Preparing a mask image

We prepare a lightened color image that we mask with the edges at the end to produce the cartoon image. We use bilateralFilter, which removes noise and can be thought of as smoothing an image to an extent. The result is similar to the BEAUTIFY or AI effects in the cameras of modern mobile phones.
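Continuing the sketch (the diameter 9 and the two sigma values of 300 are assumptions):

    colorImage = cv2.bilateralFilter(original, 9, 300, 300)  # smooth colors, keep edges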



7. Giving a Cartoon Effect

So, let’s combine the two specialties. This is done using masking: we perform a bitwise AND on the two images. Remember that images are just numbers? That is how we mask the edge image onto our “beautified” image, and this finally cartoonifies our image!
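Continuing the sketch, and showing the result (Matplotlib expects RGB while OpenCV uses BGR, hence the conversion):

    cartoonImage = cv2.bitwise_and(colorImage, colorImage, mask=getEdge)
    plt.imshow(cv2.cvtColor(cartoonImage, cv2.COLOR_BGR2RGB))
    plt.axis("off")
    plt.show()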





Bibliography:

-  https://projectworlds.in/image-to-cartoon-python-opencv-machine-learning/

-  https://towardsdatascience.com/turn-photos-into-cartoons-using-python-bb1a9f578a7e

-  https://towardsdatascience.com/using-opencv-to-catoonize-an-image-1211473941b6

-  https://data-flair.training/blogs/cartoonify-image-opencv-python/


Facial Emotion Recognition

 

Human emotion detection is implemented in many areas that require additional security or information about a person. Human emotions can be classified as: fear, contempt, disgust, anger, surprise, sad, happy, and neutral.

Facial Emotion Recognition (FER) is a technology used for analyzing sentiments from different sources, such as pictures and videos.

FER typically has four steps. The first is to detect a face in an image and draw a rectangle around it; the next step is to detect landmarks in this face region. The third step is to extract spatial and temporal features from the facial components. The final step is to feed the extracted features to a classifier, which produces the recognition results.

The FER-2013 dataset is usually used, and several images for the five emotions considered (happy, sad, angry, fear, and neutral) are selected. These images are converted into NumPy arrays, and landmark features are identified and extracted. A CNN model was developed with four phases, where the first three phases had convolution, pooling, batch normalization, and dropout layers; the final phase consists of flatten, dense, and output layers. A sketch of such a model follows.
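A minimal sketch of a four-phase CNN of the kind described above, using Keras; the input shape, filter counts, and dense size are assumptions, not the authors' exact model:

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),   # FER-2013 images are 48x48 grayscale
        # Phases 1-3: convolution, pooling, batch normalization, dropout
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),
        layers.Dropout(0.25),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),
        layers.Dropout(0.25),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),
        layers.Dropout(0.25),
        # Final phase: flatten, dense, and output layers
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(5, activation="softmax"),  # five emotion classes
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])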

The convolutional neural network (CNN) is a deep learning algorithm. LeCun first proposed the idea in 1989 and in 1998 proposed applying the algorithm to handwritten digit recognition. In 2012, Alex Krizhevsky won the ImageNet 2012 competition with a CNN.

A CNN can take an image directly as input and produce the final classification result without manual data preprocessing. By building a neural network model with a certain depth and combining nonlinear operations such as convolution and pooling, we can realize two important functions: imitating the hierarchical processing of the human brain and the local perception of the visual nerve. It has been shown that such networks achieve good results in face recognition, speech recognition, vehicle detection, and target tracking. One role of a CNN is to reduce images into a form that is easier to process without losing the features that are critical for good prediction.

According to some psychologists, communication occurring through facial expressions accounts for about 55% of communication, so machines can offer us more help if they are able to perceive and recognize human emotions.

 



Webliography:

https://padlet.com/andreeaoprea99/w2otzumzrpqmhxnp

 

Tuesday, 12 April 2022

NORMAN - World's first psychopath AI

Norman is an AI developed by MIT researchers to demonstrate that algorithms are not biased or unfair in themselves: they become so when biased data is fed into them. For example, the same algorithm, if trained on a different category of data, can see very different things in an image.

Norman is an AI trained to perform image captioning, the process of generating a textual description of an image based on the actions and objects in it. There are two parts: the first extracts features from the image, using Convolutional Neural Networks (CNNs); the second translates the features from the first part into a natural sentence, using Recurrent Neural Networks (RNNs), specifically Long Short-Term Memory (LSTM) networks. A generic sketch of this architecture follows.
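A minimal, generic sketch of the CNN-encoder / LSTM-decoder captioning setup described above (not Norman's actual code; the vocabulary size, sequence length, and feature dimension are assumed values):

    from tensorflow.keras import layers, models

    vocab_size, max_len, feat_dim = 5000, 20, 2048  # assumed values

    # Image features (e.g. from a pretrained CNN) projected into the decoder space
    image_input = layers.Input(shape=(feat_dim,))
    img_embed = layers.Dense(256, activation="relu")(image_input)

    # The partial caption so far, as a sequence of word indices
    caption_input = layers.Input(shape=(max_len,))
    word_embed = layers.Embedding(vocab_size, 256, mask_zero=True)(caption_input)
    seq = layers.LSTM(256)(word_embed)

    # Merge the image and text representations and predict the next word
    merged = layers.add([img_embed, seq])
    output = layers.Dense(vocab_size, activation="softmax")(merged)

    model = models.Model(inputs=[image_input, caption_input], outputs=output)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")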


In this experiment, Norman was trained on image captions from an infamous subreddit dedicated to images and videos about death; however, no images of real people were used, due to ethical concerns, the researchers said. Additionally, a standard AI model was trained on the MSCOCO dataset, a large-scale dataset for training image captioning systems. The MIT researchers then compared Norman's responses with the standard AI's on Rorschach inkblots, a psychological test that analyzes a person's perceptions of the inkblots to detect disorders.

The interesting part is that the results were very different. The standard AI saw "A black and white photo of a small bird", whereas Norman saw "Man gets pulled into dough machine". For another inkblot, the standard AI generated "A black and white photo of a baseball glove", whereas Norman wrote "Man is murdered by machine gun in broad daylight". Similarly, for yet another inkblot, the standard AI saw "A group of birds sitting on top of a tree branch", whereas Norman saw "A man is electrocuted and catches to death". More examples can be found in the link below.

MIT said that Norman "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms".

This experiment showed that the data provided to an algorithm can sometimes matter more than the algorithm itself.


Sources:


Color Detection

Color detection is the process of detecting the name of a color. For humans this is an extremely easy task, but for computers it is tougher. Human eyes and brains work together to translate light into color: our eyes transmit the light signal to the brain, and the brain then recognizes the color. Computers can’t use this strategy; they have to do some calculations in order to detect the color.

Colors are made up of three primary components: red, green, and blue. Computers assign each component a value within a range of 0 to 255, which makes 16,777,216 different colors. The dataset of the project includes 865 color names with their RGB and hexadecimal values. The data is arranged in six columns: color, color name, hex value, R, G, B. For example: royal_blue_traditional, "Royal Blue (Traditional)", #002366, 0, 35, 102.

The goal of the system is to find the color of the point on which the picture was clicked. Since there are more than 16.7 million colors and the dataset contains only 865, once the system has found the RGB values it has to calculate the shortest distance to a listed color.

The distance is calculated by this formula: 

d = abs(Red – ithRedColor) + abs(Green – ithGreenColor) + abs(Blue – ithBlueColor) 


Once the shortest distance is found, the system displays the color's name and RGB values at the top-left of the picture.


Steps (sketched in code after the list):

  1. Load the image

  2. Read the CSV file

  3. Show the image 

  4. Wait for a click event 

  5. Get the coordinates of the clicked point 

  6. Get the RGB values of the clicked point

  7. Calculate the shortest distance to a color 

  8. Display the color name and RGB values 
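A minimal sketch of these steps with OpenCV and pandas; the file names and the CSV column layout are assumptions:

    import cv2
    import pandas as pd

    img = cv2.imread("sample.jpg")                     # 1. load the image
    columns = ["color", "color_name", "hex", "R", "G", "B"]
    colors = pd.read_csv("colors.csv", names=columns)  # 2. read the CSV file

    def closest_color_name(r, g, b):
        # 7. shortest distance: d = abs(R - Ri) + abs(G - Gi) + abs(B - Bi)
        d = (colors["R"] - r).abs() + (colors["G"] - g).abs() + (colors["B"] - b).abs()
        return colors.loc[d.idxmin(), "color_name"]

    def on_click(event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN:             # 4-5. click event and coordinates
            b, g, r = (int(v) for v in img[y, x])      # 6. pixel values (BGR order)
            name = closest_color_name(r, g, b)
            # 8. display the color name and RGB values on the image
            cv2.rectangle(img, (10, 10), (500, 50), (b, g, r), -1)
            cv2.putText(img, f"{name}  R={r} G={g} B={b}", (20, 40),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)

    cv2.namedWindow("image")
    cv2.setMouseCallback("image", on_click)
    while True:
        cv2.imshow("image", img)                       # 3. show the image
        if cv2.waitKey(20) & 0xFF == 27:               # Esc quits
            break
    cv2.destroyAllWindows()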


Bibliography:
COLOR DETECTION USING PANDAS AND OPENCV - C K Gomathy

Sunday, 10 April 2022

AutoML: Automatic Machine Learning

Automated Machine Learning (AutoML) provides methods and processes to make machine learning available to non-experts, to improve the efficiency of machine learning, and to accelerate research on machine learning.

    

Machine learning (ML) has achieved considerable success in recent years, and an ever-growing number of disciplines rely on it. However, this success crucially relies on human machine learning experts to perform the following tasks:

  • Preprocess and clean the data.
  • Select and construct appropriate features.
  • Select an appropriate model family.
  • Optimize model hyperparameters.
  • Design the topology of neural networks (if deep learning is used).
  • Postprocess machine learning models.
  • Critically analyze the results obtained.

As the complexity of these tasks is often beyond non-ML experts, the rapid growth of machine learning applications has created a demand for off-the-shelf machine learning methods that can be used easily and without expert knowledge. We call the resulting research area, which targets the progressive automation of machine learning, AutoML.

AutoML helps make machine learning less of a black box by making it more accessible. It automates the parts of the machine learning process that apply an algorithm to real-world scenarios; a human performing these tasks would need an understanding of the algorithm's internal logic and how it relates to those scenarios. AutoML learns about learning and makes choices that would be too time-consuming or resource-intensive for humans to make efficiently at scale.

   

With automated ML you provide the training data used to train the models, and you can specify what type of model validation to perform. Automated ML performs model validation as part of training: it uses validation data to tune model hyperparameters, searching for the combination that best fits the training data. However, the same validation data is used for each iteration of tuning, which introduces model evaluation bias, since the model keeps improving its fit to the validation data.

To help confirm that such bias isn't carried into the final recommended model, automated ML supports the use of test data to evaluate the final model that it recommends at the end of your experiment. When you provide test data as part of your AutoML experiment configuration, this recommended model is tested by default at the end of the experiment. The sketch below illustrates the idea.
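A minimal illustration of the validation/test split idea with scikit-learn (a plain hyperparameter search standing in for a full AutoML system; the dataset and parameter grid are arbitrary choices):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Cross-validated search plays the role of AutoML's hyperparameter tuning;
    # the validation folds are reused on every iteration, hence the bias risk.
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
        cv=5,
    )
    search.fit(X_train, y_train)

    # The untouched test set gives an unbiased estimate of the recommended model.
    print("validation score:", search.best_score_)
    print("test score:", search.score(X_test, y_test))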

AutoML is still maturing: it can already give useful results, but it needs improvement, because it is currently largely limited to supervised learning and has a lot of difficulties with unsupervised and reinforcement learning.

References consulted during research:

https://towardsdatascience.com/automl-for-predictive-modeling-32b84c5a18f6

https://medium.com/@miloudbelarebia/does-auto-machine-learning-auto-ml-really-exists-64fa538eb7a6

https://www.techtarget.com/searchenterpriseai/definition/automated-machine-learning-AutoML


Neural network can read tree heights from satellite images

Researchers at ETH Zurich have created a high-resolution global vegetation height...