Tuesday, 29 March 2022

The powerful impact that Artificial Neural Networks have on the Medical Field.

 Artificial intelligence has the potential to transform the way surgery is taught and practiced.


Although the surgeon-patient-computer relationship's potential is a long way from being fully discovered, the use of AI in surgery is already driving significant changes for doctors and patients alike. AI is currently perceived as a supplement and not a replacement for the skill of a human surgeon.

Recurrent neural networks (RNNs) are a deep learning variant of conventional feedforward artificial neural networks that can deal with sequential data and can be trained to retain knowledge about the past. Among many other uses in the medical field, RNNs are being applied to predict outcomes ranging from renal failure in real time to mortality and postoperative bleeding after cardiac surgery, and they have obtained improved results compared to standard clinical reference tools.
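To make the idea concrete, here is a minimal sketch of how such a sequence model might be set up in Keras. The data shapes, feature counts and training settings are illustrative placeholders, not details from the cited studies.

```python
# Minimal sketch (not the cited studies' model): an LSTM reads a sequence of
# per-patient measurements and outputs the probability of an adverse outcome.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical data: 500 patients, 48 hourly time steps, 12 vital-sign features.
X = np.random.rand(500, 48, 12).astype("float32")
y = np.random.randint(0, 2, size=(500,))           # 1 = complication occurred

model = keras.Sequential([
    layers.Input(shape=(48, 12)),
    layers.LSTM(32),                                # carries information from earlier time steps
    layers.Dense(1, activation="sigmoid"),          # risk probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```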

Neural Networks are essentially a part of Deep Learning, which in turn is a subset of Machine Learning. ML allows a computer to utilize partial labeling of the data (supervised learning) or the structure detected in the data itself (unsupervised learning) to explain or make predictions about the data. Supervised learning is useful for training an ML algorithm to predict a known result or outcome, while unsupervised learning is useful in searching for patterns within data.



In supervised learning, human-labeled data are fed to a machine-learning algorithm to teach the computer a function, such as recognizing a gallbladder in an image or detecting a complication in a large claims database. In unsupervised learning, unlabeled data are fed to a machine-learning algorithm, which then attempts to find a hidden structure to the data, such as identifying bright red (e.g. bleeding) as different from non-bleeding tissue.
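As a toy illustration of that distinction, the sketch below fits a supervised classifier on synthetic "human-labelled" colour features and then runs an unsupervised clustering step on the same features without labels. The feature vectors and labels are placeholders, not real surgical data.

```python
# Sketch of supervised vs. unsupervised learning with scikit-learn.
# The "bleeding vs. non-bleeding" feature vectors here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.random((200, 3))                  # e.g. mean R, G, B of a tissue patch
labels = (features[:, 0] > 0.6).astype(int)      # pretend a human marked "bright red" patches

# Supervised: learn a function from human-labelled examples.
clf = LogisticRegression().fit(features, labels)
print(clf.predict(features[:5]))

# Unsupervised: look for hidden structure without any labels.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(clusters[:5])
```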


Sources:


https://www.mobihealthnews.com/news/contributed-power-ai-surgery
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5995666/


Applying artificial intelligence for cancer immunotherapy

 




Artificial intelligence (AI) is a general term that refers to the use of a machine to imitate intelligent behavior and perform complex tasks with minimal human intervention, machine learning being one example; this technology is revolutionizing and reshaping medicine. AI has considerable potential to improve health-care systems in areas such as diagnostics, risk analysis, health information administration, lifestyle supervision, and virtual health assistance. In terms of immunotherapy, AI has been applied to the prediction of immunotherapy responses based on immune signatures, medical imaging and histological analysis. These features could also be highly useful in the management of cancer immunotherapy given AI's ever-increasing performance in improving diagnostic accuracy, optimizing treatment planning, predicting outcomes of care and reducing human resource costs.

Although immunotherapy is a great breakthrough in the field of cancer treatment, it is often unclear whether a particular patient will respond to the therapy. However, AI increases the chance of successful cancer immunotherapy by forecasting the therapeutic effect through immunotherapy predictive scores, including the immunoscore and immunophenoscore. These two scoring systems were developed to predict the response to immune checkpoint blockade (ICB) therapy. Meanwhile, some limitations, such as the unknown predictive power of individual biomarkers, the difficulty of integrating diverse biomarkers into one system, and the lack of ICB response prediction models that can integrate different biomarkers, are the main barriers that warrant further study. A previous study showed that integrating an AI-based diagnostic algorithm with physicians' interpretations can improve diagnostic accuracy for indiscernible cancer subtypes. AI technology obtains approximately 91.66% accuracy when recognizing major histocompatibility complex (MHC) patterns associated with immunotherapy response. More importantly, AI can be applied to standardize assessments across institutions instead of depending on clinicians' interpretations, which can be inherently subjective. Therefore, the application of AI in cancer immunotherapy may lead to positive outcomes in patients.

To date, the most notable success is the application of AI to immunotherapy in cancer research. Machine learning can keep pace with modern medicine in terms of the volume of data generated and the detection of phenotypic varieties that slip through human screening. The scope of machine screening can also be adjusted to detect only phenotype changes of interest or to screen for broader phenotypes. Currently, AI-based methods have shown good results in the prediction of MHC-II epitopes based on amino acid sequences and in the development of vaccines targeting the MHC-II immunopeptidome, which demonstrates the increasingly extensive application of AI in immunotherapy.
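To give a flavour of what sequence-based epitope prediction can look like, here is a purely illustrative sketch: peptides are one-hot encoded and fed to a standard classifier. The sequences and binding labels are synthetic, and this is not the published MHC-II prediction method.

```python
# Illustrative sketch only: predict whether a short peptide binds an MHC-II
# molecule from its amino-acid sequence. Data and labels here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(peptide: str) -> np.ndarray:
    """Encode a fixed-length peptide as a flat one-hot vector."""
    vec = np.zeros((len(peptide), len(AMINO_ACIDS)))
    for pos, aa in enumerate(peptide):
        vec[pos, AA_INDEX[aa]] = 1.0
    return vec.ravel()

rng = np.random.default_rng(0)
peptides = ["".join(rng.choice(list(AMINO_ACIDS), size=15)) for _ in range(300)]
binds = rng.integers(0, 2, size=300)              # placeholder binding labels

X = np.stack([one_hot(p) for p in peptides])
model = RandomForestClassifier(n_estimators=100).fit(X, binds)
print(model.predict_proba(X[:3])[:, 1])           # predicted binding probabilities
```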

 



AI’s next big leap - Neuro-symbolic AI

 


 

                Artificial intelligence research has made great achievements in solving specific applications, but we’re still far from the kind of general-purpose AI systems that scientists have been dreaming of for decades.

                Among the solutions being explored to overcome the barriers of AI is the idea of neuro-symbolic systems that bring together the best of different branches of computer science.

                What Is Neuro-Symbolic AI? It is a fancier version of the AI we have known until now: it uses deep learning neural network architectures and combines them with symbolic reasoning techniques. For instance, we have been using neural networks to identify what kind of shape or color a particular object has. Applying symbolic reasoning on top can take this a step further and derive more interesting properties of the object, such as its area or volume.
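                As a toy illustration of that division of labor (not any particular published system), the sketch below uses a stand-in "neural" shape detector and then applies explicit symbolic rules to derive the area. In a real system the placeholder function would be a trained network.

```python
# Toy neuro-symbolic split: a (stand-in) neural perception step plus symbolic rules.
import math

def neural_shape_classifier(image) -> dict:
    # Placeholder for a CNN that detects a shape and measures it in pixels.
    return {"shape": "circle", "radius": 12.0}

# Symbolic part: explicit, human-readable rules that reason over the network's output.
RULES = {
    "circle":    lambda p: math.pi * p["radius"] ** 2,
    "square":    lambda p: p["side"] ** 2,
    "rectangle": lambda p: p["width"] * p["height"],
}

def describe(image) -> str:
    props = neural_shape_classifier(image)
    area = RULES[props["shape"]](props)
    return f"A {props['shape']} with area {area:.1f} square pixels"

print(describe(image=None))    # -> "A circle with area 452.4 square pixels"
```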

                It’s also taking baby steps toward reasoning like humans and might one day take the wheel in self-driving cars, meaning that neuro-symbolic AI brings us closer to machines with common sense. What exactly does that mean?

                Our minds are built not just to see patterns in pixels and soundwaves but to understand the world through models. As humans, we start developing these models as early as three months of age, by observing and acting in the world.

                For example, people (and sometimes animals) can learn to use a new tool to solve a problem or figure out how to repurpose a known object for a new goal (e.g., use a rock instead of a hammer to drive in a nail).

                These capabilities are often referred to as “intuitive physics” and “intuitive psychology” or “theory of mind,” and they are at the heart of common sense.

                These cognitive systems are the bridge between all the other parts of intelligence such as the targets of perception, the substrate of action-planning, reasoning, and even language.

                AI agents should be able to reason and plan their actions based on mental representations they develop of the world and other agents through intuitive physics and theory of mind.

 

                Overcoming The Shortfalls Of Neural Networks And Symbolic AI 

                If we look at human thoughts and reasoning processes, humans use symbols as an essential part of communication, which is part of what makes them intelligent. To make machines work like humans, researchers tried to encode symbols into them. This symbolic AI was rule-based and involved explicitly embedding human knowledge and behavioral rules into computer programs, which made the process cumbersome. It also made systems expensive, and they became less accurate as more rules were incorporated.

                To deal with these challenges, researchers explored a more data-driven approach, which led to the popularity of neural networks. While symbolic AI needed to be fed every bit of information explicitly, neural networks can learn on their own if provided with large datasets. While this works well, as mentioned earlier, the lack of model interpretability and the large amount of data needed to keep learning call for a better system.

                To put it more precisely, deep learning is suitable for large-scale pattern recognition but struggles at capturing compositional and causal structure from data, whereas symbolic models are good at capturing compositional and causal structure but struggle with complex correlations.

                The shortfalls of these two techniques have led to their merging into neuro-symbolic AI, which is more efficient than either of the two alone. The idea is to merge learning and logic, hence making systems smarter. Researchers believe that symbolic AI algorithms will help incorporate common sense reasoning and domain knowledge into deep learning. For instance, while detecting a shape, a neuro-symbolic system would use a neural network's pattern recognition capabilities to identify objects and symbolic AI's logic to understand them better.

                A neuro-symbolic system, therefore, uses both logic and language processing to answer the question, which is similar to how a human would respond. It is not only more efficient but requires very little training data, unlike neural networks. 

 

In conclusion, the primary goals of neuro-symbolic AI (NS) are to demonstrate the capability to:

  1. Solve much harder problems
  2. Learn with dramatically less data, ultimately for a large number of tasks rather than one narrow task
  3. Provide inherently understandable and controllable decisions and actions
  4. Demonstrate common sense
  5. Solve the AI black box problem


Bibliography:

                   1. https://knowablemagazine.org/article/technology/2020/what-is-neurosymbolic-ai

                   2. https://bdtechtalks.com/2022/03/14/neuro-symbolic-ai-common-sense/

                   3. https://analyticsindiamag.com/what-is-neuro-symbolic-ai-and-why-are-researchers-gushing-over-it/

                   4. https://researcher.watson.ibm.com/researcher/view_group.php?id=10518

 

AI in exploring the Universe

        The Universe has fascinated the human race for thousands of years. Looking at the sky makes one wonder how vast the Universe is. There is so much out there to explore and discover. Cosmologists and astrophysicists are trying their best to uncover the mysteries of the Universe. Since the Universe is so humongous, it is natural for us to wonder about the various concepts and philosophies that try to describe it.

        Artificial Intelligence is the bright star in the dark Universe. It could very well be the perfect tool for untangling the complexities and conquering the mysteries of the Universe. With advancements in artificial intelligence technologies, in fields such as data science, exploratory data analysis, and computer vision, we can achieve results beyond our imagination. Applying this technology to the hunt for gravitational lenses (distributions of matter, such as clusters of galaxies, that bend the light of background sources) turned out to be surprisingly straightforward.

        First, the scientists made a dataset to train the neural network with, which meant generating 6 million fake images showing what gravitational lenses do and do not look like. Then, they turned the neural network loose on the data, leaving it to slowly identify patterns. A bit of fine-tuning later, and they had a program that recognised gravitational lenses in the blink of an eye.
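        A minimal sketch of the kind of convolutional classifier used for such a task is shown below; the image size, architecture and training settings are illustrative stand-ins, not those of the published lens-finding survey.

```python
# Minimal sketch (not the published architecture): a small CNN trained on
# simulated 64x64 sky cutouts labelled "lens" vs. "not a lens".
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(1000, 64, 64, 1).astype("float32")   # stand-in for simulated cutouts
y = np.random.randint(0, 2, size=(1000,))                # 1 = contains a gravitational lens

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.1)
```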

        Another accomplishment belongs to the researchers who developed the deep neural network architecture called the Deep Density Displacement Model (D³M). D³M learns from a set of pre-run numerical simulations to predict the nonlinear large-scale structure of the Universe. Their extensive analysis demonstrates that D³M outperforms other models and is also able to accurately extrapolate far beyond its training data and predict structure formation for significantly different cosmological parameters. As for the result, it produced exceptionally accurate simulations, including full 3-D simulations of the large-scale structure of the Universe.

         Another major new development in the field of artificial intelligence, which could perhaps be the most significant, is a new AI called the "Dark Emulator". The one concept that has left scientists with a question mark for generations is the theory behind dark matter. Not only could the secrets of the entire structure of the Universe be unveiled, but hypotheses and subtle distinctions among modern physics concepts could potentially be resolved through a detailed study of, and breakthrough in, dark matter or dark energy. The Dark Emulator AI could be the best possible tool for solving these problems for astrophysicists. The Dark Emulator learns from existing data, creates multiple virtual universes, and keeps learning from them repeatedly.

        The potential for interpreting the gigantic Universe using the various tools and technologies of artificial intelligence is colossal. In the distant future, the enigmas, paradoxes, and secrets of the Universe will unfold, and we will have a clear perception of its various mysteries, or at the very least a basic idea with which to explore, examine and envision the eternity of the Universe.


Sources:

Finding strong gravitational lenses in the kilo degree survey with convolutional neural networks

Learning to predict the cosmological structure formation

Tuesday, 22 March 2022

Image Segmentation with Machine Learning


First of all, you might be wondering - What exactly is image segmentation and how does it work? Well, long story short, Image segmentation is a prime domain of computer vision, backed by a huge amount of research involving both image processing-based algorithms and learning-based techniques. In other words, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.

Here is an example of how Image Segmentation works on a sample picture:



    As you can see, the machine automatically finds many of the things that a human eye can distinguish, such as cars, traffic lights, and even pedestrians. This example is, if you will, the most basic way to show this technology’s true power and capabilities.

    Because this kind of technology has been in use for a while now, engineers and programmers have managed to apply this algorithm in many different ways. Some notable examples are:

-     Self-driving cars: Image segmentation can be used in self-driving cars to draw clear distinctions between various objects, be they traffic signals, signboards, humans, or cars. It can help the driving instruction algorithm better assess the surroundings before generating the next instruction.

-     Circuit Board Defect Detection: A company has to bear the responsibility for defective devices. If a camera backed by an Image Segmentation model keeps scanning for defects in the final product, a lot of money and time can be saved in fixing a defective device.

-     Face detection: Nowadays, we have observed that the majority of cameras in phones support portrait mode. Portrait mode is technically an outcome of Image Segmentation. Apart from this, security surveillance will be much more effective when the faces are distinguishable from noisy objects.

-     Medical Imaging: Image segmentation can be used to extract clinically relevant information from medical reports. For example, image segmentation can be used to segment tumors.

    For our project, we want to test and see the efficacy of the first example shown above, the standard object detection system. For this, we will use Python, along with some specialized libraries (numpy, scipy, pillow, cython, matplotlib, scikit-image, tensorflow, keras, opencv, h5py, imgaug and IPython).

    The output given after the running will consist of the image that we put to the test, modified so that every object found is highlighted with a specific colour, and named accordingly. Also, the exact number of objects found is going to be output as well.

    The main technology that makes this work is called Mask R-CNN (a Region-Based Convolutional Neural Network), a type of artificial neural network used in image recognition and processing that is optimized to work on pixel data. Convolutional Neural Networks are therefore the fundamental building blocks for the computer vision task of image segmentation (CNN segmentation).
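    As a quick illustration of what the output of such a model looks like, the sketch below runs torchvision's off-the-shelf pre-trained Mask R-CNN on a single image (this is an alternative implementation to the Keras/TensorFlow stack listed above, and the image filename is hypothetical).

```python
# Quick sketch: per-object masks, labels and confidence scores from a pre-trained
# Mask R-CNN in torchvision (not the Matterport/Keras implementation listed above).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")    # hypothetical test image
with torch.no_grad():
    output = model([to_tensor(image)])[0]                 # one result dict per input image

keep = output["scores"] > 0.7                             # drop low-confidence detections
print("Objects found:", int(keep.sum()))
print(output["labels"][keep])                             # COCO class indices
print(output["masks"][keep].shape)                        # (N, 1, H, W) soft masks
```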

    More details will be presented in the upcoming weeks. Thank you for your time! 

Bibliography:

https://data-flair.training/blogs/image-segmentation-machine-learning/

https://viso.ai/deep-learning/mask-r-cnn/

https://en.wikipedia.org/wiki/Image_segmentation


Monday, 21 March 2022

OMOY - Robot that seems to convey emotion while reading

  




Scientists from the Faculty of Engineering, Information and Systems at the University of Tsukuba devised a text message mediation robot that can help users control their anger when receiving upsetting news. This device may help improve social interactions as we move towards a world with increasingly digital communications.

While a quick text message apology is a fast and easy way for friends to let us know they are going to be late for a planned meet-up, it is often missing the human element that would accompany an explanation face-to-face, or even over the phone. It is likely to be more upsetting when we are not able to perceive the emotional weight behind our friends' regret at making us wait.

OMOY is equipped with a movable weight actuated by mechanical components inside its body. By shifting the internal weight, the robot can express simulated emotions. The robot was deployed as a mediator for reading text messages: a text with unwelcome or frustrating news could be followed by an exhortation from OMOY not to get upset, or even by sympathy for the user. The mediator robot was designed to suppress the user's anger and other negative interpersonal motivations, such as thoughts of revenge, and instead foster forgiveness.

The robot's body expressions, produced by weight shifts, do not require any specific external components, such as arms or legs, which implies that the internal weight movements can reduce a user's anger or other negative emotions without the use of rich body gestures or facial expressions.

Examples of perceived emotions and intentions: a slow and repetitive motion to the left side was perceived as embattled and denying, while a fast, repetitive motion from the center to the left in a "V" shape was perceived as refusal, urgency or joy.



Source: 

https://www.sciencedaily.com/releases/2022/03/220310100010.htm

https://www.youtube.com/watch?v=B3WEsEGT0xM


Sunday, 20 March 2022

Deep-learning technique predicts clinical treatment outcomes

 by Quasar

A new methodology simulates counterfactual, time-varying, and dynamic treatment strategies, allowing doctors to choose the best course of action.

 

Tuesday, 15 March 2022

An artificial intelligence system that rapidly predicts how two proteins will attach

By MAC (Giupana Alexandru & Goina Dacian)

Viruses can be neutralized by antibodies, which attach to the virus and destroy it. But developing the right antibody is not an easy task. The process requires researchers to have detailed knowledge of how that attachment will happen. Proteins are physical entities with 3D structures, which can come together in millions of possible combinations. This enormous number of combinations makes finding the right match extremely time-consuming.

To solve this issue, researchers have developed a model called EquiDock. It takes the 3D structures of two proteins and converts them into 3D graphs that can be processed by a neural network. Proteins are formed from chains of amino acids, and each of those amino acids is represented by a node in the graph. The model also has mathematical knowledge built in: this ensures the proteins always attach in the same way, no matter where they sit in 3D space.
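A rough sketch of the kind of graph construction described here is shown below: one node per residue, with edges between residues that are close in space. The coordinates are random placeholders, and this is not EquiDock's actual preprocessing code.

```python
# Rough sketch of turning a protein into a graph: one node per amino-acid residue,
# with edges between residues that lie close together in 3D space.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n_residues = 120
coords = rng.random((n_residues, 3)) * 50.0     # stand-in for residue positions (angstroms)

tree = cKDTree(coords)
pairs = tree.query_pairs(r=8.0)                 # residues within 8 A become graph neighbours

edges = np.array(sorted(pairs))                 # (num_edges, 2) array of node-index pairs
print(f"{n_residues} nodes, {len(edges)} edges")
```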

After the model was trained, the researchers compared it with other similar software methods. The results show that EquiDock is able to predict a protein complex in only 1 to 5 seconds, much faster than other software solutions, most of which require between 10 minutes and an hour or more to find the complex. On quality measures, which calculate how closely the predicted protein complex matches the actual protein complex, EquiDock is often comparable with the baselines.


Source:

Article: https://news.mit.edu/2022/ai-predicts-protein-docking-0201

Full paper: https://openreview.net/forum?id=GQjaI9mLet

Studying the Big Bang with artificial intelligence

by MyCin - Oprean Andreea & Bica Anamaria

Can machine learning be used to uncover the secrets of the quark-gluon plasma? Yes - but only with sophisticated new methods.

        Imagine the following scenario: immediately after the Big Bang, the state of the universe could be described by countless interactions occurring in a tangled mess of quantum particles. This state of matter is known as "quark-gluon plasma". It is therefore not surprising that such processes can only be studied using high-performance computers and highly complex computer simulations whose results are difficult to evaluate. Using artificial intelligence or machine learning for this purpose seems like an obvious idea. However, ordinary machine-learning algorithms are not suitable for this task: the mathematical properties of particle physics require a very special structure of neural networks. At TU Wien (Vienna), it has now been shown how neural networks can be successfully used for these challenging tasks in particle physics.

        As Dr. Andreas Ipp from the Institute for Theoretical Physics at TU Wien puts it, "Even the largest supercomputers in the world are overwhelmed by this [simulating a quark-gluon plasma]." It would therefore be desirable not to calculate every detail precisely, but to recognize and predict certain properties of the plasma with the help of artificial intelligence. This is precisely why neural networks are used, and why the researchers developed completely new network layers that not only predict the values but also take into account the quantum fields used to mathematically describe the particles and the forces between them.

        It will be some time before it is possible to fully simulate atomic core collisions at CERN with such methods, but the new type of neural networks provides a completely new and promising tool for describing physical phenomena for which all other computational methods may never be powerful enough.

 

Source:    Vienna University of Technology

Site:         sciencedaily.com

Machine Learning in Cybersecurity

by the Learning Machines - Dragoș Răzvan, Galiș Fabian, Slivilescu Vlad


    First of all, we have to disappoint you. Unfortunately, machine learning will never be a silver bullet for cybersecurity the way it is for image recognition or natural language processing, two areas where machine learning is thriving. 

    There will always be someone trying to find weaknesses in systems or ML algorithms and to bypass security mechanisms. What’s worse, hackers are now able to use machine learning to carry out their nefarious endeavors. Fortunately, machine learning can aid in solving the most common tasks, including regression, prediction, and classification. In an era of extremely large amounts of data and a cybersecurity talent shortage, ML seems to be the only viable solution.


    Intrusion detection systems attempt to discover the presence of unauthorized activities on computer networks, typically by focusing on behavior profiles and searching for signs of malicious activity. They're typically classified as either misuse-based or anomaly-based. In misuse-based detection, attacks are identified based on their resemblance to previously seen attacks, whereas in anomaly-based detection, a baseline of “normal” behavior is constructed and anything that does not match that baseline is flagged as a potential attack. Both methods can make use of different ML methods.
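    As a small illustration of the anomaly-based approach, the sketch below fits an outlier detector on "normal" traffic features and flags deviations. The features are synthetic placeholders rather than real network telemetry.

```python
# Sketch of anomaly-based intrusion detection: model "normal" behavior, flag deviations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(size=(5000, 4))      # e.g. bytes sent, duration, port entropy, packet rate
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_traffic = np.vstack([
    rng.normal(size=(5, 4)),             # ordinary-looking connections
    rng.normal(loc=8.0, size=(2, 4)),    # wildly unusual connections
])
print(detector.predict(new_traffic))     # 1 = looks normal, -1 = flagged as anomalous
```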

    While intrusion detection systems monitor a system or network’s behavior to identify signs that a network is under attack, malware detection systems examine specific files to determine if they are malicious. Traditional detection techniques can be easily evaded by so-called polymorphic or metamorphic viruses—types of malware that change their own code each time they propagate—thereby ensuring that different versions will have different signatures. Machine learning, however, excels at identifying shared features between samples that can’t be classified using simple rules. As early as 1996, researchers at IBM began to explore the use of neural networks to classify boot sector viruses, a specific type of virus that targets a machine’s instructions for booting up.
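    In the same spirit, here is a toy sketch of feature-based malware classification: each file is represented by its byte-frequency histogram and fed to a standard classifier. The samples and labels are random placeholders; real detectors use far richer features than this.

```python
# Toy sketch of feature-based malware classification using byte histograms.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def byte_histogram(data: bytes) -> np.ndarray:
    """256-bin normalized histogram of byte values in a file."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

rng = np.random.default_rng(0)
samples = [rng.integers(0, 256, size=4096, dtype=np.uint8).tobytes() for _ in range(200)]
labels = rng.integers(0, 2, size=200)               # 1 = malicious (placeholder labels)

X = np.stack([byte_histogram(s) for s in samples])
clf = GradientBoostingClassifier().fit(X, labels)
print(clf.predict(X[:5]))
```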

    Threat hunting — proactively searching for cyber threats that are lurking undetected in an organization’s network — used to be a manual and time-consuming process. However, with the adoption of machine learning, advanced analytics, and user behavior analytics (UBA), you can partially automate threat hunting, thus increasing its efficiency.

    Machine learning can also be useful in detecting code vulnerabilities. Both attackers and application developers hunt for code vulnerabilities. The first one to detect a vulnerability wins. One of the modern ways to search for dangerous flaws in code is using AI and ML algorithms that can quickly scan vast amounts of code and detect known vulnerabilities before hackers notice and exploit them.


    Undoubtedly, there are many issues with interpretability (particularly for deep learning algorithms), but humans also cannot interpret their own decisions, right?

    On the other hand, with the growing amount of data and the decreasing number of experts, ML is the only remedy. It works now and will be mandatory soon. It is better to start right now.

    Keep in mind, hackers are also starting to use ML in their attacks. Their activities are divided into 5 groups of high-level tasks that ML can solve:

  • Information gathering — preparing for an attack;
  • Impersonation — attempting to imitate a confidant;
  • Unauthorized access — bypassing restrictions to gain access to some resources or user accounts;
  • Attack — performing an actual attack such as malware or DDoS;
  • Automation — automating the exploitation and post-exploitation.
Russian hacker (source: www.ukrgate.com/eng/?p=17131)

    In the context of the present day's ongoing war, A.I. might also play a vital role. Many fear that A.I. techniques such as deepfakes—highly realistic video fakes created using an A.I. technique—will supercharge Russian disinformation campaigns. Machine learning can also be used to help detect disinformation. The large social media platforms already deploy these systems, although their track record in accurately identifying and removing disinformation is spotty at best. 

    A few years ago, everyone had a skeptical attitude towards the use of machine learning. Today’s research findings and its implementation in products prove that ML actually works and is here to stay. Meanwhile, hackers are not waiting: they are already looking ahead and benefiting from it. 

    
Sources:


Saturday, 12 March 2022

Automatic License Number Plate Recognition

By Error404 (Bianca Ștefănescu, Alex Silaghi, Cătălin Vanciu)



    Automatic recognition of car license plates has become a very important part of our daily life because of the relentless increase in cars and transportation systems, which makes it impossible for them to be fully managed and monitored by humans. Examples are plentiful: traffic monitoring, tracking stolen cars, managing parking tolls, red-light violation enforcement, and border and customs checkpoints. Yet it is a very challenging problem, due to the diversity of plate formats and the different scales, rotations and non-uniform illumination conditions during image acquisition.

    Most number plate detection algorithms fall into more than one category, based on the different techniques they use. To detect a vehicle number plate, the following factors should be considered: 

  • Plate size: a plate can be of different size in a vehicle image. 
  • Plate location: a plate can be located anywhere in the vehicle. 
  • Plate background: a plate can have different background colors based on vehicle type. For example, a government vehicle's number plate might have a different background than other public vehicles. 
  • Screw: a plate may have screws, which could be mistaken for characters. 

    A number plate can be extracted by using an image segmentation method. There are numerous image segmentation methods available in the literature. In most of them, image binarization is used. Some authors first convert the color image to a gray-scale image and then apply Otsu's method for binarization. Some plate segmentation algorithms are based on color segmentation.

    In principle, the image should first be pre-processed by applying Gaussian blur, Sobel edge detection and morphological operations. In the end, the only thing left to do is to extract the text using "pytesseract" and recognize the numbers and characters of the number plate.
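    A condensed sketch of that pipeline is shown below. It assumes OpenCV and pytesseract are installed and that a test image named "car.jpg" exists; the thresholds and kernel sizes are illustrative rather than tuned values.

```python
# Sketch of the plate-recognition pipeline: blur -> edges -> binarize -> morphology -> OCR.
import cv2
import pytesseract

gray = cv2.cvtColor(cv2.imread("car.jpg"), cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)                        # suppress noise
edges = cv2.Sobel(blur, cv2.CV_8U, 1, 0, ksize=3)               # vertical edges dominate on plates
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological closing merges character edges into one plate-shaped blob.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    if 2.0 < w / float(h) < 6.0 and w > 80:                     # rough plate aspect-ratio filter
        plate = gray[y:y + h, x:x + w]
        text = pytesseract.image_to_string(plate, config="--psm 7")
        print("Candidate plate:", text.strip())
```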



    Bibliography:

  • Chirag Patel, Dipti Shah, PhD., Atul Patel, PhD. - "International Journal of Computer Applications (0975 – 8887) Volume 69– No.9, May 2013" 
  • Amr Badr, Mohamed M. Abdelwahab, Ahmed M. Thabet, and Ahmed M. Abdelsadek - "Annals of the University of Craiova, Mathematics and Computer Science Series Volume 38(1), 2011"
  • Xifan Shi, Weizhong Zhao, and Yonghang Shen - "Automatic License Plate Recognition System Based on Color Image Processing"





Neural network can read tree heights from satellite images

Researchers at ETH Zurich have created a high-resolution global vegetation height...