Tuesday, April 12, 2022

NORMAN - World's first psychopath AI

Norman AI was developed by MIT researchers to demonstrate that an algorithm is not biased and unfair unless biased data is fed into it. In other words, the same algorithm, trained on a different category of data, can see entirely different things in an image.

Norman is an AI trained to perform image captioning, the process of generating a textual description of an image based on the actions and objects it contains. There are two parts: the first extracts features from the image using Convolutional Neural Networks (CNNs); the second translates those features into a natural-language sentence using Recurrent Neural Networks (RNNs), in particular Long Short-Term Memory (LSTM) networks.
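
As a rough sketch of this encoder-decoder pattern (an illustration, not Norman's actual code), a captioning model can pair a pretrained CNN with an LSTM, as in the PyTorch example below; the vocabulary size, embedding size, and ResNet feature dimension are placeholder values.

    import torch
    import torch.nn as nn
    from torchvision import models

    class CaptionNet(nn.Module):
        # Part 1: a CNN encoder extracts image features.
        # Part 2: an LSTM decoder turns them into a sentence.
        def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
            super().__init__()
            cnn = models.resnet18(weights="DEFAULT")
            self.encoder = nn.Sequential(*list(cnn.children())[:-1])
            self.project = nn.Linear(512, embed_dim)
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.to_vocab = nn.Linear(hidden_dim, vocab_size)

        def forward(self, images, captions):
            feats = self.encoder(images).flatten(1)     # (batch, 512)
            feats = self.project(feats).unsqueeze(1)    # image as first "token"
            words = self.embed(captions)                # previous caption words
            hidden, _ = self.lstm(torch.cat([feats, words], dim=1))
            return self.to_vocab(hidden)                # word scores per step

    # Dummy forward pass: 2 images and 5-token caption prefixes.
    model = CaptionNet()
    scores = model(torch.rand(2, 3, 224, 224), torch.randint(0, 10000, (2, 5)))
    print(scores.shape)  # torch.Size([2, 6, 10000])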


In this experiment, Norman was trained on image captions from an infamous subreddit dedicated to images and videos of death; however, no images of real people were used, due to ethical concerns, the researchers said. Additionally, a standard AI model was trained on the MSCOCO dataset, a large-scale dataset for training image captioning systems. The MIT researchers then compared Norman's responses with the standard AI's on Rorschach inkblots, a psychological test that analyzes a person's perceptions of inkblots to detect underlying disorders.

The interesting part is that the results were very different. The standard AI saw "A black and white photo of a small bird", whereas Norman saw "Man gets pulled into dough machine". For another inkblot, the standard AI generated "A black and white photo of a baseball glove", whereas Norman wrote "Man is murdered by machine gun in broad daylight". Similarly, for yet another inkblot, the standard AI saw "A group of birds sitting on top of a tree branch", whereas Norman saw "A man is electrocuted and catches to death". More examples can be found in the link below.

MIT said that Norman "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms".

This experiment showed that the data provided to an algorithm sometimes matters more than the algorithm itself.


Sources:


Color Detection

Color detection is the process of identifying the name of a color. For humans this is an extremely easy task, but for computers it is tougher. Human eyes and brains work together to translate light into color: our eyes transmit the light signal to the brain, and the brain recognizes the color. Computers can't use this strategy; they have to do some calculations in order to detect the color.

That is because colors are made up of 3 primary components: red, green, and blue. Computers represent each component as a value within a range of 0 to 255, which makes 16,777,216 different colors. The dataset of the project includes 865 color names along with their RGB and hexadecimal values. The data is arranged in 6 columns: color, color name, hex value, R, G, B. For example: royal_blue_traditional, "Royal Blue (Traditional)", #002366, 0, 35, 102.

The goal of the system is to find the color of the point on which the picture was clicked. Since there are more than 16.7 million possible colors and the dataset contains only 865, once the system has found the RGB values of the clicked point it has to calculate which listed color is at the shortest distance.

The distance is calculated by this formula: 

d = abs(Red – ithRedColor) + abs(Green – ithGreenColor) + abs(Blue – ithBlueColor) 
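
For example, if the clicked pixel has RGB values (10, 40, 110), its distance to the Royal Blue (Traditional) entry above, (0, 35, 102), is abs(10 - 0) + abs(40 - 35) + abs(110 - 102) = 10 + 5 + 8 = 23; the listed color with the smallest such distance wins.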


Once the shortest distance is found, the system displays the color's name and RGB values in the top-left corner of the picture.

Result

Steps (a short code sketch implementing them follows the list):

  1. Load the image
  2. Read the CSV file
  3. Show the image
  4. Wait for a click event
  5. Get the coordinates of the clicked point
  6. Get the RGB values of the clicked point
  7. Calculate the shortest distance to a color
  8. Display the color name and RGB values
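
A minimal sketch of these steps using OpenCV and pandas might look as follows; the file names ("colors.csv", "picture.jpg") and the exact CSV column layout are assumptions based on the format described above.

    import cv2
    import pandas as pd

    # Step 2: read the CSV; columns follow the format described above.
    columns = ["color", "color_name", "hex", "R", "G", "B"]
    df = pd.read_csv("colors.csv", names=columns, header=None)

    # Step 7: Manhattan distance in RGB space to every listed color.
    def closest_color(r, g, b):
        distances = (df["R"] - r).abs() + (df["G"] - g).abs() + (df["B"] - b).abs()
        return df.loc[distances.idxmin()]

    # Steps 4-6 and 8: on click, read the pixel and draw the result.
    def on_click(event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN:
            b, g, r = img[y, x]                 # OpenCV stores pixels as BGR
            match = closest_color(int(r), int(g), int(b))
            text = f"{match['color_name']} R={r} G={g} B={b}"
            cv2.putText(img, text, (20, 30), cv2.FONT_HERSHEY_SIMPLEX,
                        0.7, (255, 255, 255), 2)

    img = cv2.imread("picture.jpg")             # Step 1: load the image
    cv2.namedWindow("image")
    cv2.setMouseCallback("image", on_click)
    while True:                                 # Step 3: show the image
        cv2.imshow("image", img)
        if cv2.waitKey(20) & 0xFF == 27:        # Esc to quit
            break
    cv2.destroyAllWindows()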


Bibliography:
C. K. Gomathy, "Color Detection Using Pandas and OpenCV"

Sunday, April 10, 2022

AutoML: Automatic Machine Learning

     Automated Machine Learning provides methods and processes to make Machine Learning available to non-Machine Learning experts, to improve the efficiency of Machine Learning and to accelerate research on Machine Learning.

    

       Machine learning (ML) has achieved considerable successes in recent years and an ever-growing number of disciplines rely on it. However, this success crucially relies on human machine learning experts to perform the following tasks:

  • Preprocess and clean the data.
  • Select and construct appropriate features.
  • Select an appropriate model family.
  • Optimize model hyperparameters.
  • Design the topology of neural networks (if deep learning is used).
  • Postprocess machine learning models.
  • Critically analyze the results obtained.

    As the complexity of these tasks is often beyond non-ML-experts, the rapid growth of machine learning applications has created a demand for off-the-shelf machine learning methods that can be used easily and without expert knowledge. We call the resulting research area that targets progressive automation of machine learning AutoML.

    AutoML helps to make machine learning less of a black box by making it more accessible. It automates the parts of the machine learning process that apply an algorithm to real-world scenarios, a task that would otherwise require a human with an understanding of the algorithm's internal logic and of how it relates to those scenarios. In effect, AutoML learns about learning, and makes choices that would be too time-consuming or resource-intensive for humans to make efficiently at scale.

   

    With automated ML you provide the training data to train ML models, and you can specify what type of model validation to perform. Automated ML performs model validation as part of training: it uses the validation data to tune model hyperparameters and find the combination that best fits the training data. However, the same validation data is used for each iteration of tuning, which introduces model evaluation bias, since the model continues to improve by fitting to the validation data.

    To help confirm that such bias isn't carried into the final recommended model, automated ML supports the use of test data to evaluate the final model that automated ML recommends at the end of your experiment. When you provide test data as part of your AutoML experiment configuration, this recommended model is tested by default at the end of the experiment.
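
    The distinction between validation data (reused at every tuning iteration) and a held-out test set (consulted only once at the end) can be illustrated with the following scikit-learn sketch; this is a generic illustration of the principle, not any specific AutoML product's API.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    X, y = load_breast_cancer(return_X_y=True)

    # Hold out a test set that the tuning loop is never allowed to see.
    X_trainval, X_test, y_trainval, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # AutoML-style search: every candidate is scored on the same
    # validation folds, so the selected model is slightly biased toward them.
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [50, 100], "max_depth": [4, 8, None]},
        cv=5)
    search.fit(X_trainval, y_trainval)
    print("best validation score:", search.best_score_)

    # Unbiased final estimate: evaluate the recommended model exactly once
    # on the untouched test set.
    print("test score:", search.score(X_test, y_test))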

    AutoML is still in development. It can already give efficient results, but it needs improvements: for now it is largely limited to supervised learning and has significant difficulties with unsupervised and reinforcement learning.

References consulted during research:

https://towardsdatascience.com/automl-for-predictive-modeling-32b84c5a18f6

https://medium.com/@miloudbelarebia/does-auto-machine-learning-auto-ml-really-exists-64fa538eb7a6

https://www.techtarget.com/searchenterpriseai/definition/automated-machine-learning-AutoML


Tuesday, April 5, 2022

Robots that can adapt like natural animals

 

Animals can naturally adapt to many types of injuries, while current robots cannot create compensatory behaviors when damaged: they are either limited to their manufacturer's design or need too many hours to search for optimal compensatory behaviors. Antoine Cully, Jeff Clune and Jean-Baptiste Mouret introduced an Intelligent Trial and Error algorithm that gives robots the ability to adapt to damage in less than two minutes, thanks to intuitions they develop before their mission and to experiments they conduct after the damage is inflicted to validate or invalidate those intuitions. The result is a never-before-seen process that adapts to a variety of injuries, including damaged, broken, and missing legs. This discovery will make possible the development of more robust, effective, autonomous robots, and suggests principles that animals themselves may use to adapt.

Current self-repairing robots work in two phases: self-diagnosis, followed by selection of the best pre-designed contingency plan. Such robots are expensive to manufacture, because the diagnosis requires advanced, high-cost sensors, and robot engineers cannot foresee every situation: this approach often fails either because the diagnosis is incorrect or because an appropriate contingency plan is not provided.

Injured animals respond differently: they learn by trial and error how to compensate for damage. Trial-and-error learning algorithms could allow machines to creatively discover compensatory behaviors. Recovery from damage would be more practical and effective if robots could adapt as creatively and quickly as animals. 

A Gaussian process model captures all these ideas: it approximates the performance function using the already acquired data, while a Bayesian optimization procedure exploits this model to find the maximum of the performance function. The robot selects which behaviors to test by balancing exploration of points where the performance is uncertain against exploitation of points where the performance is expected to be high. The selected behavior is tested on the physical robot and the actual performance is recorded. The algorithm then updates the expected performance of the tested behavior and lowers the uncertainty about it.

For example, after damage occurs the robot is unable to walk straight, and damage recovery via Intelligent Trial and Error begins. The robot tests different types of behaviors from an automatically generated behavior repertoire. After each test, the robot updates its predictions of which behaviors will perform well despite the damage. This select/test/update loop is repeated until a behavior tested on the physical robot performs better than 90% of the best predicted performance in the repertoire, a threshold that can decrease with each test. This way, the robot rapidly discovers an effective compensatory behavior.
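
To make the select/test/update loop concrete, here is a minimal, hypothetical sketch of Gaussian-process-based Bayesian optimization using scikit-learn; the one-dimensional behavior space, the simulated performance measurement, and the exact acquisition rule are simplified stand-ins for the method described in the paper.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Hypothetical 1-D "behavior space" standing in for the repertoire.
    behaviors = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

    def test_on_robot(b):
        # Stand-in for running a behavior on the damaged robot and
        # measuring its walking performance (unknown to the algorithm).
        return float(np.sin(6 * b) * (1 - b) + np.random.normal(0, 0.01))

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-4)
    tried, observed = [], []

    for trial in range(20):
        if tried:
            mu, sigma = gp.predict(behaviors, return_std=True)
        else:
            # In the paper the prior comes from a behavior-performance map
            # built in simulation; a flat prior is used here for brevity.
            mu, sigma = np.zeros(len(behaviors)), np.ones(len(behaviors))
        # Acquisition: favor behaviors with high expected performance
        # (exploitation) or high uncertainty (exploration).
        candidate = behaviors[np.argmax(mu + sigma)]
        performance = test_on_robot(candidate[0])
        tried.append(candidate)
        observed.append(performance)
        gp.fit(np.array(tried), np.array(observed))
        mu_post = gp.predict(behaviors)
        # Stop once a tested behavior beats 90% of the best prediction.
        if performance >= 0.9 * mu_post.max():
            break

    print(f"compensatory behavior found after {len(tried)} trial(s)")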

A parallel with biology is that the simulator and the Gaussian process components of Intelligent Trial and Error are two forms of predictive models, which are known to exist in animals.

  



    Consulted during research: 

Robots that can adapt like natural animals by Antoine Cully, Jeff Clune and Jean-Baptiste Mouret

Brief overview of Earth Observation practices

Within the realm of Earth Observation, Machine Learning plays a substantial role in facilitating data pinpointing and extraction. The ability to autonomously process and analyze large quantities of data with the help of various Machine Learning and Computer Vision techniques has significantly sparked the interest of Earth Observation analysts in recent years.

Though it may not be readily obvious at first, satellite imagery processing and analysis brings valuable contributions to a broad range of domains, often beyond Artificial Intelligence. In this regard, numerous use cases may be identified, among which: renewable energy area suitability evaluation, improving disaster response, vegetation and crop monitoring, active conflict area monitoring and many others.

One might (naively) assume that what the eye can see is all there is to it. In fact, the value of satellite data comes, more often than not, from analyzing parameters outside the visible spectrum of wavelengths. Translating wavelengths that are not visible to the human eye into colors may aid in accurately distinguishing features of interest. For instance, false color infrared greatly emphasizes vegetation in bright red, with everything else colored in darker tones. A false color urban band combination may be used to outline urban regions as well as areas with flooding risk. Furthermore, the NDWI2 band combination enhances water presence in drought-affected areas.

Shifting focus to the technical details, it is commonly acknowledged that "a machine learning model is only as good as the data it is fed", and Earth Observation models are not exempt from this remark. Thus, several procedures are frequently employed to prepare and clean such datasets, including but not limited to geometric, radiometric and atmospheric corrections. However, processing data from satellites is no easy feat, given how large such data tends to be, especially for spatial and aerial imagery. There are plenty of software tools and open-source libraries highly specialized in working with large files, among which: BigTIFF, Rasterio, Georaster, GDAL and others. Convolutional neural networks are most frequently used for satellite data analysis, though other architectures, such as recurrent neural networks, self-organizing maps or generative adversarial networks, may also be employed.
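
As a small, hypothetical illustration of working with such data, the sketch below computes a McFeeters-style NDWI water index from a multi-band GeoTIFF using Rasterio and NumPy; the file name and the band order (green as band 3, near-infrared as band 8, as in Sentinel-2 products) are assumptions that depend on the sensor and product.

    import numpy as np
    import rasterio

    # Hypothetical multi-band GeoTIFF; band indices depend on the sensor.
    with rasterio.open("scene.tif") as src:
        green = src.read(3).astype("float32")
        nir = src.read(8).astype("float32")
        profile = src.profile

    # NDWI: (green - NIR) / (green + NIR); values near +1 indicate open
    # water, values near -1 indicate vegetation or bare soil.
    ndwi = np.where(green + nir == 0, 0, (green - nir) / (green + nir))

    # Write the single-band index next to the input, as float32.
    profile.update(count=1, dtype="float32")
    with rasterio.open("scene_ndwi.tif", "w", **profile) as dst:
        dst.write(ndwi, 1)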

To conclude, the demand for real-time applications using satellite imagery is substantial and is expected to increase as more industries find advantages in adopting such systems.


Several of the references consulted during research:

Using Cross-Embodiment Inverse Reinforcement Learning to help robots perform tasks.

 Very often, people learn to do things by watching others do them first. One popular scenario nowadays is watching someone perform a certain skill or action on YouTube (as a tutorial) in order to learn it or get better at it. Now, what if robots could do the same? Today, however, the predominant paradigm for teaching robots is to remote-control them using specialized teleoperation hardware and then train them to imitate pre-recorded demonstrations.

If robots could instead self-learn new tasks by watching humans, this capability could allow them to be deployed in more unstructured settings like the home, and make it dramatically easier for anyone to teach or communicate with them, expert or otherwise. Perhaps one day, they might even be able to use YouTube videos to grow their collection of skills over time.


The biggest impediment is obvious but often overlooked: a robot is physically different from a human, which means it often completes tasks differently than we do. As a perfect example, the images below show how a human would outperform a gripper robot in a pen manipulation task simply through the motion that allows a human to grip all the pens at the same time. The problem is not just performance, but how exactly a robot should approach the task in order to mimic the human's approach.


Left: The hand grabs all pens and quickly transfers them between containers.
Right: The two-fingered gripper transports one pen at a time.


Cross-Embodiment Inverse Reinforcement Learning (XIRL)

At the Conference on Robot Learning (CoRL) 2021, the team formed by Kevin Zakka, Andy Zeng, Pete Florence, Jonathan Tompson, Jeannette Bohg and Debidatta Dwibedi presented XIRL as an oral paper. Their statement: "We explore these challenges further and introduce a self-supervised method for Cross-embodiment Inverse Reinforcement Learning (XIRL). Rather than focusing on how individual human actions should correspond to robot actions, XIRL learns the high-level task objective from videos, and summarizes that knowledge in the form of a reward function that is invariant to embodiment differences, such as shape, actions and end-effector dynamics. The learned rewards can then be used together with reinforcement learning to teach the task to agents with new physical embodiments through trial and error. Our approach is general and scales autonomously with data — the more embodiment diversity presented in the videos, the more invariant and robust the reward functions become. Experiments show that our learned reward functions lead to significantly more sample efficient (roughly 2 to 4 times) reinforcement learning on new embodiments compared to alternative methods. To extend and build on our work, we are releasing an accompanying open-source implementation of our method along with X-MAGICAL, our new simulated benchmark for cross-embodiment imitation.

The underlying observation in this work is that in spite of the many differences induced by different embodiments, there still exist visual cues that reflect progression towards a common task objective. For example, in the pen manipulation task above, the presence of pens in the cup but not the mug, or the absence of pens on the table, are key frames that are common to different embodiments and indirectly provide cues for how close to being complete a task is. The key idea behind XIRL is to automatically discover these key moments in videos of different length and cluster them meaningfully to encode task progression. This motivation shares many similarities with unsupervised video alignment research, from which we can leverage a method called Temporal Cycle Consistency (TCC), which aligns videos accurately while learning useful visual representations for fine-grained video understanding without requiring any ground-truth correspondences."


XIRL self-supervises reward functions from expert demonstrations using temporal cycle consistency (TCC), then uses them for downstream reinforcement learning to learn new skills from third-person demonstrations.
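
To make the idea concrete, here is a minimal, hypothetical sketch of how such a learned reward could be used: observations are mapped through a TCC-pretrained encoder, and the reward is the negative distance of the current embedding to a goal embedding averaged over the demonstrations' final frames. The encoder architecture, frame size, and scaling constant below are placeholders, not the authors' exact implementation.

    import torch
    import torch.nn as nn

    # Placeholder standing in for a TCC-pretrained network that maps image
    # observations to an embedding space shared across embodiments.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
    encoder.eval()

    def goal_embedding(demo_final_frames):
        # Average embedding of the last frame of each expert demonstration.
        with torch.no_grad():
            return encoder(demo_final_frames).mean(dim=0)

    def xirl_style_reward(observation, goal, scale=1.0):
        # Reward = negative distance to the goal embedding: the closer the
        # current observation is to "task done", the higher the reward.
        with torch.no_grad():
            z = encoder(observation.unsqueeze(0)).squeeze(0)
        return -scale * torch.norm(z - goal).item()

    # Usage with dummy tensors standing in for 64x64 RGB frames.
    demos = torch.rand(10, 3, 64, 64)   # final frames of 10 demonstrations
    obs = torch.rand(3, 64, 64)         # current environment observation
    print(xirl_style_reward(obs, goal_embedding(demos)))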


Results and highlights

To evaluate the performance of XIRL and baseline alternatives (e.g., TCN, LIFS, Goal Classifier) in a consistent environment, they created X-MAGICAL, a simulated benchmark for cross-embodiment imitation.

The task:  A simplified 2D equivalent of a common household robotic sweeping task, where an agent has to push three objects into a goal zone in the environment.




In their first set of experiments, they checked whether the learned embodiment-invariant reward function can enable successful reinforcement learning when the expert demonstrations are provided through the agent itself. The results? XIRL significantly outperforms alternative methods, especially on the tougher agents (e.g., short-stick and gripper).



For more details and experiments, check out their paper and download the code from the GitHub repository.


Conclusion 

XIRL learns an embodiment-invariant reward function that encodes task progress using a temporal cycle-consistency objective. Policies learned using the reward functions are significantly more sample-efficient than baseline alternatives. Furthermore, the reward functions do not require manually paired video frames between the demonstrator and the learner, giving them the ability to scale to an arbitrary number of embodiments or experts with varying skill levels.


Sources:

  • https://arxiv.org/abs/2106.03911
  • https://github.com/google-research/google-research/tree/master/xirl
  • https://github.com/kevinzakka/x-magical



LG Smart Home DeepThinQ To Improve AI Products and Services

    LG Electronics is advancing its Artificial Intelligence and Deep Learning capabilities with the rollout of its own AI development platform, named "DeepThinQ 1.0". The platform enables seamless integration of Artificial Intelligence into a wider range of products, making it possible for developers to apply deep learning to future products. The technology supports voice and video recognition and sends the resulting information to cloud servers.

It can automate all kinds of different activities, like turning off the lights when the door is locked from the outside, running the robot vacuum in the owner's absence, or turning on the air purifier based on the owner's average arrival time. Many processes are improved as well: the LG washer studies its owner's habits and automatically learns which settings to apply, and it can further offer guidance on how to dry various types of loads. In the same way, the LG air conditioner can detect the number of people inside the room, identify them, and keep the temperature according to their preferences. On top of these features, it can play music and adjust the temperature in the car.

Based on the list above, DeepThinQ greatly improves the day-to-day tasks of average people, and this is only the beginning as we move into an era where Artificial Intelligence will play a bigger and bigger role in our lives.


"DeepThinQ is the embodiment of our open philosophy - to provide the most powerful AI solutions to our customers via a strategy of open platform, open partnership and open connectivity" said DR. I. P. Park, which is the LG Electronics Chief Technology Officer. This information reflects the mentality of one of the most important people in the Electronics field, which can guide us to what the future will provide us, the beautiful and unknown field of Artificial Intelligence.


Sources:

https://www.prnewswire.com/news-releases/lg-enters-deepthinq-mode-to-advance-ai-products-and-services-300578996.html

https://www.futurebridge.com/blog/smart-homes-impact-of-artificial-intelligence-in-connected-home/

Neural network can read tree heights from satellite images

Researchers at ETH Zurich have created a high-resolution global vegetation height...