Tuesday, April 5, 2022

LG Smart Home DeepThinQ To Improve AI Products and Services

    LG Electronics is advancing its Artificial Intelligence and Deep Learning capabilities with the rollout of its own AI development platform, named "DeepThinQ 1.0". The platform enables seamless integration of Artificial Intelligence into a wider range of products, making it possible for developers to apply deep learning to future products. The technology supports voice and video recognition, sending the collected information to cloud servers.

It can automate many different activities, such as turning off the lights when the door is locked from the outside, running the robot vacuum in the owner's absence, and turning on the air purifier based on the owner's average arrival time. Many processes are also improved: the LG washer studies its owner's habits and automatically learns what settings to apply, and can further suggest how to dry various types of loads. In the same way, the LG air conditioner can detect the number of people inside the room and their identities, and keep the temperature according to their preferences. On top of these features, it can play music and adjust car temperature.
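The automation described above boils down to simple trigger-action rules. Here is a minimal sketch in Python; the device and event names are hypothetical illustrations, not LG's actual API:

```python
# A minimal sketch of trigger-action home automation rules.
# All event and action names are hypothetical, not LG's real interface.

def make_rules():
    # Each rule maps a triggering event to the action the home should take.
    return {
        "door_locked_from_outside": "turn_off_lights",
        "owner_left_home": "start_robot_vacuum",
        "owner_arrival_soon": "turn_on_air_purifier",
    }

def handle_event(event, rules):
    # Look up the action for an event; unknown events trigger nothing.
    return rules.get(event, "no_action")

if __name__ == "__main__":
    rules = make_rules()
    print(handle_event("door_locked_from_outside", rules))
```

In a real platform the "learning" part would replace the hand-written rule table with rules inferred from usage patterns, but the dispatch logic stays the same.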

Based on the list above, DeepThinQ considerably improves the day-to-day tasks of average people, and this is only the beginning as we move into an era where Artificial Intelligence will play a bigger and bigger role in our lives.


"DeepThinQ is the embodiment of our open philosophy - to provide the most powerful AI solutions to our customers via a strategy of open platform, open partnership and open connectivity," said Dr. I. P. Park, LG Electronics' Chief Technology Officer. This statement reflects the mentality of one of the most important people in the electronics field, and hints at what the future will bring us in the beautiful and unknown field of Artificial Intelligence.


Sources:

https://www.prnewswire.com/news-releases/lg-enters-deepthinq-mode-to-advance-ai-products-and-services-300578996.html

https://www.futurebridge.com/blog/smart-homes-impact-of-artificial-intelligence-in-connected-home/

Tuesday, March 29, 2022

The powerful impact that Artificial Neural Networks have in the Medical Field.

 Artificial intelligence has the potential to transform the way surgery is taught and practiced.


Although the surgeon-patient-computer relationship's potential is a long way from being fully discovered, the use of AI in surgery is already driving significant changes for doctors and patients alike. AI is currently perceived as a supplement and not a replacement for the skill of a human surgeon.

Deep learning recurrent neural networks (RNNs) are a variant of conventional feedforward artificial neural networks that can deal with sequential data and can be trained to hold knowledge about the past. Among many other uses in the medical field, RNNs are being used to predict outcomes ranging from renal failure in real time to mortality and postoperative bleeding after cardiac surgery, with improved results compared to standard clinical reference tools.
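To make the idea concrete, here is a toy forward pass of a simple Elman-style recurrent network over a sequence of vital-sign vectors, producing one risk score. The weights and inputs are random placeholders, not a trained clinical model:

```python
import numpy as np

# Toy Elman-style RNN: the hidden state h carries knowledge of past time
# steps, which is what makes RNNs suited to sequential clinical data.
# Weights and inputs are random placeholders, not a trained model.

rng = np.random.default_rng(0)
n_features, n_hidden, seq_len = 4, 8, 6

W_xh = rng.normal(scale=0.1, size=(n_hidden, n_features))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))    # hidden -> hidden
w_out = rng.normal(scale=0.1, size=n_hidden)               # hidden -> score

def risk_score(sequence):
    h = np.zeros(n_hidden)                 # hidden state: memory of the past
    for x_t in sequence:                   # one step per time point
        h = np.tanh(W_xh @ x_t + W_hh @ h)
    return 1 / (1 + np.exp(-(w_out @ h)))  # sigmoid -> probability-like score

vitals = rng.normal(size=(seq_len, n_features))  # e.g. hourly vital signs
print(round(float(risk_score(vitals)), 3))
```

The clinical models cited above are far larger (e.g. LSTM variants trained on real patient records), but they share this core loop: a hidden state updated once per time step.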

Neural Networks are essentially a part of Deep Learning, which in turn is a subset of Machine Learning. ML allows a computer to use partial labeling of the data (supervised learning) or the structure detected in the data itself (unsupervised learning) to explain or make predictions about the data. Supervised learning is useful for training an ML algorithm to predict a known result or outcome, while unsupervised learning is useful in searching for patterns within data.



In supervised learning, human-labeled data are fed to a machine-learning algorithm to teach the computer a function, such as recognizing a gallbladder in an image or detecting a complication in a large claims database. In unsupervised learning, unlabeled data are fed to a machine-learning algorithm, which then attempts to find a hidden structure to the data, such as identifying bright red (e.g. bleeding) as different from non-bleeding tissue.
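The contrast can be sketched on toy data: synthetic 1-D "redness" values standing in for pixels, with a nearest-class-mean classifier for the supervised case and a simple two-cluster k-means for the unsupervised case. The data are made-up illustrations, not real surgical images:

```python
import numpy as np

# Supervised: human labels teach the algorithm which values mean "bleeding".
# Unsupervised: k-means finds the same two-group structure without labels.

redness = np.array([0.1, 0.15, 0.2, 0.8, 0.85, 0.9])
labels  = np.array([0, 0, 0, 1, 1, 1])  # 1 = "bleeding", human-provided

# Supervised: compute each class mean, then classify by the nearest mean.
means = np.array([redness[labels == c].mean() for c in (0, 1)])
def classify(x):
    return int(abs(x - means[1]) < abs(x - means[0]))

# Unsupervised: simple 2-means on the same values, ignoring the labels.
centers = np.array([redness.min(), redness.max()])
for _ in range(10):
    assign = np.abs(redness[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([redness[assign == c].mean() for c in (0, 1)])

print(classify(0.75), assign.tolist())
```

Note that k-means recovers the same grouping the labels encode, which is exactly the "hidden structure" the paragraph above describes.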


Sources:


https://www.mobihealthnews.com/news/contributed-power-ai-surgery
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5995666/


Applying artificial intelligence for cancer immunotherapy

 




Artificial intelligence (AI) is a general term that refers to the use of a machine to imitate intelligent behavior for performing complex tasks with minimal human intervention, such as machine learning; this technology is revolutionizing and reshaping medicine. AI has considerable potential to perfect health-care systems in areas such as diagnostics, risk analysis, health information administration, lifestyle supervision, and virtual health assistance. In terms of immunotherapy, AI has been applied to the prediction of immunotherapy responses based on immune signatures, medical imaging and histological analysis. These features could also be highly useful in the management of cancer immunotherapy given their ever-increasing performance in improving diagnostic accuracy, optimizing treatment planning, predicting outcomes of care and reducing human resource costs.

Although immunotherapy is a great breakthrough in the field of cancer treatment, judging whether a particular patient will respond to the therapy can be difficult. However, the advent of AI increases the chance of successful cancer immunotherapy by forecasting the therapeutic effect based on immunotherapy predictive scores, including the immunoscore and immunophenoscore. These two scoring systems were developed to predict the response to immune checkpoint blockade (ICB) therapy. Meanwhile, some limitations, such as the unknown predictive power of individual biomarkers, the difficulty of integrating diverse biomarkers into one system, and the lack of ICB response prediction models that can integrate different biomarkers, are the main barriers that warrant further study. A previous study showed that integrating an AI-based diagnostic algorithm with physicians' interpretations can improve diagnostic accuracy for indiscernible cancer subtypes. AI technology obtains approximately 91.66% accuracy when recognizing major histocompatibility complex patterns associated with immunotherapy response. More importantly, AI can be applied to standardize assessments across institutions rather than depending on the interpretation of clinicians, which can be inherently subjective. Therefore, the application of AI in cancer immunotherapy may lead to positive outcomes in patients.

To date, one of the most notable successes is the application of AI to immunotherapy in cancer research. Machine learning can keep pace with the data generated in modern medicine and detect phenotypic variations that slip through human screening. The range of machine screening can also be adjusted to detect only phenotype changes of interest or to screen for broader phenotypes. Currently, AI-based methods have shown good results in the prediction of MHC-II epitopes based on amino acid sequences and in the development of vaccines targeting the MHC-II immunopeptidome, which demonstrates the increasingly extensive application of AI in immunotherapy.

 



AI’s next big leap - Neuro-symbolic AI

 


                Artificial intelligence research has made great achievements in solving specific applications, but we’re still far from the kind of general-purpose AI systems that scientists have been dreaming of for decades.

                Among the solutions being explored to overcome the barriers of AI is the idea of neuro-symbolic systems that bring together the best of different branches of computer science.

                What Is Neuro-Symbolic AI? It combines deep learning neural network architectures with symbolic reasoning techniques. For instance, we have been using neural networks to identify what shape or color a particular object has. Applying symbolic reasoning to the result can take things a step further and tell us more interesting properties of the object, such as its area or volume.
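A toy sketch of that division of labor: a mock perceive() function stands in for the neural classifier (in a real system, a CNN would produce its output), and explicit formulas act as the symbolic layer:

```python
import math

# Toy neuro-symbolic pipeline. perceive() is a MOCK standing in for a
# neural classifier; the symbolic layer derives new properties (area)
# from explicit, human-readable rules.

def perceive(image):
    # Mock "neural" step: a real CNN would output a label and measurements.
    return {"shape": "circle", "radius": 2.0}

# Symbolic knowledge: explicit area formulas keyed by shape.
AREA_RULES = {
    "circle":    lambda p: math.pi * p["radius"] ** 2,
    "rectangle": lambda p: p["width"] * p["height"],
}

def describe(image):
    percept = perceive(image)                      # neural: what is it?
    area = AREA_RULES[percept["shape"]](percept)   # symbolic: reason about it
    return percept["shape"], round(area, 2)

print(describe(None))  # -> ('circle', 12.57)
```

The point is the interface: the network only has to recognize and measure, while properties like area come from rules that can be inspected and extended without retraining anything.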

                It’s also taking baby steps toward reasoning like humans and might one day take the wheel in self-driving cars, meaning that neuro-symbolic AI brings us closer to machines with common sense. What exactly does that mean?

                Our minds are built not just to see patterns in pixels and soundwaves but to understand the world through models. As humans, we start developing these models as early as three months of age, by observing and acting in the world.

                For example, people (and sometimes animals) can learn to use a new tool to solve a problem or figure out how to repurpose a known object for a new goal (e.g., use a rock instead of a hammer to drive in a nail).

                These capabilities are often referred to as “intuitive physics” and “intuitive psychology” or “theory of mind,” and they are at the heart of common sense.

                These cognitive systems are the bridge between all the other parts of intelligence such as the targets of perception, the substrate of action-planning, reasoning, and even language.

                AI agents should be able to reason and plan their actions based on mental representations they develop of the world and other agents through intuitive physics and theory of mind.

 

                Overcoming The Shortfalls Of Neural Networks And Symbolic AI 

                If we look at human thought and reasoning, humans use symbols as an essential part of communication, and they are part of what makes us intelligent. To make machines work like humans, researchers tried to build symbols into them. This symbolic AI was rule-based and involved explicitly embedding human knowledge and behavioral rules into computer programs, making the process cumbersome. It also made systems expensive, and they became less accurate as more rules were incorporated.

                To deal with these challenges, researchers explored a more data-driven approach, which led to the popularity of neural networks. While symbolic AI needed to be fed every bit of information, neural networks could learn on their own if provided with large datasets. While this works well, the lack of model interpretability and the large amount of data needed to keep learning call for a better system.

                To understand this more deeply: while deep learning is suitable for large-scale pattern recognition, it struggles to capture compositional and causal structure from data, whereas symbolic models are good at capturing compositional and causal structure but struggle to capture complex correlations.

                The shortfall in these two techniques has led to their merging into neuro-symbolic AI, which is more efficient than either alone. The idea is to merge learning and logic, making systems smarter. Researchers believe that symbolic AI algorithms will help incorporate common-sense reasoning and domain knowledge into deep learning. For instance, while detecting a shape, a neuro-symbolic system would use a neural network's pattern recognition capabilities to identify objects and symbolic AI's logic to understand them better.

                A neuro-symbolic system therefore uses both logic and language processing to answer a question, much as a human would. It is not only more efficient but requires very little training data, unlike neural networks.

 

In conclusion, the primary goals of neuro-symbolic AI (NS) are to demonstrate the capability to:

  1. Solve much harder problems
  2. Learn with dramatically less data, ultimately for a large number of tasks rather than one narrow task
  3. Provide inherently understandable and controllable decisions and actions
  4. Demonstrate common sense
  5. Solve the AI black box problem


Bibliography:

1. https://knowablemagazine.org/article/technology/2020/what-is-neurosymbolic-ai

2. https://bdtechtalks.com/2022/03/14/neuro-symbolic-ai-common-sense/

3. https://analyticsindiamag.com/what-is-neuro-symbolic-ai-and-why-are-researchers-gushing-over-it/

4. https://researcher.watson.ibm.com/researcher/view_group.php?id=10518

 

AI in exploring the Universe

        The Universe has fascinated the human race for thousands of years. Looking at the sky makes one wonder how vast the Universe is. There is so much out there to explore and discover. Cosmologists and astrophysicists are trying their best to uncover the mysteries of the Universe, and since it is so humongous, it is natural for us to wonder about the various concepts and philosophies surrounding it.

        Artificial Intelligence is the bright star in the dark Universe. It could very well be the perfect solution to solve the complexities and conquer the mysteries of the Universe. With advancements in the technologies of artificial intelligence, in fields like data science, exploratory data analysis, and computer vision, we can achieve results beyond our imagination. Applying this technology to the hunt for gravitational lenses (distributions of matter, such as clusters of galaxies, that bend light from more distant sources) was surprisingly straightforward.

        First, the scientists made a dataset to train the neural network with, which meant generating 6 million fake images showing what gravitational lenses do and do not look like. Then, they turned the neural network loose on the data, leaving it to slowly identify patterns. A bit of fine-tuning later, and they had a program that recognised gravitational lenses in the blink of an eye.
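That generate-train-detect loop can be sketched at toy scale. Here logistic regression stands in for the real convolutional network, and the fake-image generator is a placeholder in which "lens-like" images simply get a bright central pixel:

```python
import numpy as np

# Sketch of "generate fake images, train, then detect", shrunk to toy size.
# Logistic regression replaces the CNN; the data generator is a placeholder.

rng = np.random.default_rng(1)

def fake_image(is_lens):
    img = rng.normal(scale=0.1, size=(5, 5))
    if is_lens:
        img[2, 2] += 3.0                 # crude stand-in for a lensing arc
    return np.append(img.ravel(), 1.0)   # flatten pixels, add bias feature

# Step 1: build a labeled synthetic dataset (half lenses, half not).
X = np.array([fake_image(i % 2 == 1) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

# Step 2: train by plain gradient descent on the logistic loss.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.5 * X.T @ (p - y) / len(y)

# Step 3: the trained weights now flag lens-like images instantly.
accuracy = np.mean((X @ w > 0) == (y == 1))
print(accuracy)
```

The real pipeline differs in scale (6 million images, a deep CNN) but not in shape: synthetic labeled data in, a learned detector out.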

        The credit for this accomplishment goes to the researchers who developed the deep neural network architecture called the Deep Density Displacement Model (D³M). D³M learns from a set of pre-run numerical simulations to predict the nonlinear large-scale structure of the Universe. Their extensive analysis demonstrates that D³M outperforms other models and is also able to accurately extrapolate far beyond its training data, predicting structure formation for significantly different cosmological parameters. As a result, it produced exceptionally accurate simulations and even a 3-D simulation of the entire Universe.

         Another major new development in the field of artificial intelligence, and perhaps its most significant and exceptional discovery, is a new AI called the "Dark Emulator". The one concept which has left scientists with a question mark for generations is the theory behind dark matter. Not only could the secrets of the entire structure of the Universe be unveiled, but hypotheses and complex problems of modern physics could also potentially be solved with a detailed study and breakthrough in dark matter or dark energy. The Dark Emulator AI could be the best possible tool to solve the problems of astrophysicists. It learns from existing data, creates multiple virtual universes, and keeps learning from them repeatedly.

        The potential for the interpretation of the gigantic Universe using the various tools and technologies of artificial intelligence is colossal. In a distant future, the enigma, paradoxes, and secrets about the Universe will unfold, and we will have a clear perception about the various mysteries, or at the very least, a brief idea to explore, examine and envision the eternity of the Universe.


Sources:

Finding strong gravitational lenses in the kilo degree survey with convolutional neural networks

Learning to predict the cosmological structure formation

Tuesday, March 22, 2022

Image Segmentation with Machine Learning


First of all, you might be wondering - What exactly is image segmentation and how does it work? Well, long story short, Image segmentation is a prime domain of computer vision, backed by a huge amount of research involving both image processing-based algorithms and learning-based techniques. In other words, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.
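A minimal illustration of "partitioning an image into segments": assign each pixel of a tiny synthetic grayscale array to a region by intensity thresholding (the simplest segmentation technique, well short of the learning-based methods discussed below):

```python
import numpy as np

# Threshold-based segmentation of a synthetic 4x4 grayscale image:
# every pixel gets a segment label (0, 1 or 2) based on its intensity.

image = np.array([
    [0.1, 0.2, 0.8, 0.9],
    [0.1, 0.3, 0.7, 0.9],
    [0.5, 0.5, 0.6, 0.6],
    [0.0, 0.1, 0.9, 1.0],
])

# Two thresholds split the pixels into three labeled segments:
# dark (< 0.4) -> 0, mid [0.4, 0.65) -> 1, bright (>= 0.65) -> 2.
segments = np.digitize(image, bins=[0.4, 0.65])

print(segments)
```

The output is the "more meaningful representation" the definition above mentions: instead of raw intensities, every pixel now belongs to a named region.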

Here is an example of how Image Segmentation works on a sample picture:



    As you can see, the machine automatically finds many things that can be distinguished by the human eye, such as cars, traffic lights, or even pedestrians. This example is, if you will, the most basic way to show this technology's true power and capabilities.

    Because this kind of technology has been used for a while now, engineers and programmers have managed to apply the algorithm in multiple ways. Some notable examples are:

-     Self-driving cars: Image segmentation can be used in self-driving cars to easily distinguish between various objects, be it traffic signals, signboards, humans, or cars. It can help the driving algorithm better assess the surroundings before generating the next instruction.

-     Circuit Board Defect Detection: A company has to bear the cost of defective devices. If a camera backed by an Image Segmentation model keeps scanning for defects in the final product, a lot of money and time can be saved on fixing defective devices.

-     Face detection: Nowadays, the majority of phone cameras support portrait mode, which is technically an outcome of Image Segmentation. Apart from this, security surveillance becomes much more effective when faces can be distinguished from noisy objects.

-     Medical Imaging: Image segmentation can be used to extract clinically relevant information from medical reports. For example, image segmentation can be used to segment tumors.

    For our project, we want to test the efficacy of the first example shown above, the standard object detection system. For this, we will use Python, along with some specialized libraries (numpy, scipy, pillow, cython, matplotlib, scikit-image, tensorflow, keras, opencv, h5py, imgaug and IPython).

    The output produced after running will consist of the test image, modified so that every object found is highlighted in a specific colour and named accordingly. The exact number of objects found is output as well.

    The main technology that makes this work is called Mask R-CNN (a Region-Based Convolutional Neural Network), a type of artificial neural network used in image recognition and processing that is optimized to process pixel data. Convolutional Neural Networks are therefore the fundamental building blocks for the computer vision task of image segmentation (CNN segmentation).
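The building block itself can be shown directly: a 2-D convolution (implemented, as in most deep-learning libraries, as cross-correlation) sliding a small filter over pixel data. The edge-detecting filter and the input image here are toy values, not part of Mask R-CNN:

```python
import numpy as np

# The core CNN operation: slide a small filter over an image and take a
# weighted sum at every position. Toy values, for illustration only.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):            # visit every position the filter fits
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.array([
    [0., 0., 1., 1.],
    [0., 0., 1., 1.],
    [0., 0., 1., 1.],
])
# Vertical-edge filter: responds where intensity changes from left to right.
kernel = np.array([[-1., 1.],
                   [-1., 1.]])

print(conv2d(image, kernel))
```

The response is zero everywhere except along the dark-to-bright boundary, which is exactly how stacked convolutional layers come to detect object edges, and eventually whole objects, in pixel data.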

    More details will be presented in the upcoming weeks. Thank you for your time! 

Bibliography:

https://data-flair.training/blogs/image-segmentation-machine-learning/

https://viso.ai/deep-learning/mask-r-cnn/

https://en.wikipedia.org/wiki/Image_segmentation


Monday, March 21, 2022

OMOY - Robot that seems to convey emotion while reading

  




Scientists from the Faculty of Engineering, Information and Systems at the University of Tsukuba devised a text message mediation robot that can help users control their anger when receiving upsetting news. This device may help improve social interactions as we move towards a world with increasingly digital communications.

While a quick text message apology is a fast and easy way for friends to let us know they are going to be late for a planned meet-up, it is often missing the human element that would accompany an explanation face-to-face, or even over the phone. It is likely to be more upsetting when we are not able to perceive the emotional weight behind our friends' regret at making us wait.

OMOY is equipped with a movable weight actuated by mechanical components inside its body. By shifting the internal weight, the robot can express simulated emotions. The robot was deployed as a mediator for reading text messages: a text with unwelcome or frustrating news could be followed by an exhortation from OMOY not to get upset, or even by sympathy for the user. The mediator robot was designed to suppress the user's anger and other negative interpersonal motivations, such as thoughts of revenge, and instead foster forgiveness.

The robot's body expression produced by weight shifts did not require any specific external components, such as arms or legs, which implied that the internal weight movements could reduce a user's anger or other negative emotions without the use of rich body gestures or facial expressions.

Examples of perceived emotions and intentions: a slow, repetitive motion to the left side was perceived as embattled, denying emotions, while a fast, repetitive motion from center to left in a "V" shape was perceived as refusal, urgency or joy.



Source: 

https://www.sciencedaily.com/releases/2022/03/220310100010.htm

https://www.youtube.com/watch?v=B3WEsEGT0xM


Neural network can read tree heights from satellite images

Researchers at ETH Zurich have created a high-resolution global vegetation height...