
The role of Artificial Intelligence in the healthcare evolution

18 January

The incredible amount of health data available nowadays offers a multi-modal vision of each patient, which Artificial Intelligence can leverage to predict that patient's specific evolution and to make better choices by learning from the data.

In recent years, Artificial Intelligence (AI) has heavily influenced the healthcare environment, largely because of the incredible amount of data available for a single patient. For each individual, it is possible to gather imaging (MRIs, CTs, PETs), Electronic Health Records (EHR), IoT acquisitions (e.g. from wearable devices), and much more. These data offer a multi-modal and complementary vision of the patient, which Artificial Intelligence can leverage.

Scientific research focusing on single modalities (and their related outputs) has grown exponentially since the beginning of the 2010s, together with an equally increasing number of FDA-certified devices (one example is AI-aided endoscopy that automatically detects and characterizes polyps). At the same time, there has been deep research on how to combine different modalities. Most of the time, big data brings big problems: data may be missing or sit in silos, so the information is difficult to merge and often not labelled, which makes it unusable for training AI-based algorithms. An AI algorithm that works with multi-modal data must therefore be able to merge this varied information in a useful and intelligent way.

[Image: a patient on a gurney being rushed through a hospital corridor by doctors and nurses towards an emergency room]

By nature, medical data lend themselves to being represented as Knowledge Graphs: structures in which elements are nodes, connected whenever a relation exists between them. Such structures can easily connect the different pieces of information about a patient but, more ambitiously, we may also think of a knowledge graph whose nodes are patients, each carrying several attributes (the different modalities listed above). In this case, a connection between two patients means that they are similar. When data are represented this way, many possibilities open up. One straightforward application is the link prediction task, in which an AI model is trained to predict whether a new patient should be connected to the existing nodes in the graph; another AI-aided application is predicting a specific attribute of a node (e.g., which blood pressure value will this patient have?).
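The patient graph and the link prediction task above can be sketched in a few lines of Python. This is a minimal, illustrative example: the patient IDs, attributes, and similarity edges are invented, and the Jaccard similarity of neighbourhoods is used only as a simple stand-in for a trained link-prediction model.

```python
# Nodes are patients carrying multi-modal attributes (imaging flags,
# EHR values, IoT readings); all values here are illustrative.
patients = {
    "p1": {"age": 64, "systolic_bp": 142, "has_mri": True},
    "p2": {"age": 59, "systolic_bp": 138, "has_mri": True},
    "p3": {"age": 31, "systolic_bp": 118, "has_mri": False},
    "p4": {"age": 66, "systolic_bp": 145, "has_mri": True},
}
# An edge means "these two patients are clinically similar".
edges = {("p1", "p2"), ("p2", "p3"), ("p1", "p4")}

def neighbours(node):
    """All patients directly connected to `node`."""
    return {b for a, b in edges if a == node} | {a for a, b in edges if b == node}

def link_score(u, v):
    """Jaccard similarity of the two neighbourhoods: a simple structural
    heuristic standing in for a trained link-prediction model."""
    nu, nv = neighbours(u), neighbours(v)
    return len(nu & nv) / len(nu | nv) if nu | nv else 0.0

# p1 and p3 share one neighbour (p2) out of two in total (p2, p4) -> 0.5
print(link_score("p1", "p3"))
```

A real system would replace `link_score` with a model learned from the graph (e.g. node embeddings), but the structure of the task is the same: score candidate edges for a new patient against the existing nodes.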

Moreover, it is possible to use this graph to create a condensed version of all this information and then train an AI model to predict specific outcomes: for example, whether a patient will become sick (a classification task), or what value a specific physiological parameter will take in the future (a regression task). This connects directly to what we are trying to achieve in the AICCELERATE program, where we aim to create a system that optimizes resources inside (and outside) hospitals by leveraging the large amount of information in medical data. One example is the assignment of patients to surgical rooms: today this task is performed manually, but how could Artificial Intelligence improve it?
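The classification task described above can be illustrated with a toy example. Here each patient is assumed to have already been condensed into a small feature vector (in practice this could be a graph embedding); the features, labels, and the simple nearest-neighbour rule are all illustrative stand-ins for a properly trained model.

```python
import math

# Training data: (condensed features [age, systolic_bp], label: 1 = became sick).
# Values are invented for illustration only.
train = [
    ([64, 142], 1),
    ([59, 138], 1),
    ([31, 118], 0),
    ([28, 115], 0),
]

def classify(features, k=3):
    """Predict sick / not sick by majority vote among the k nearest
    training patients (Euclidean distance) - a minimal classifier sketch."""
    nearest = sorted(train, key=lambda t: math.dist(features, t[0]))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

print(classify([62, 140]))  # close to the "sick" patients -> 1
```

The regression variant would be structurally identical: instead of a majority vote over labels, the model would average (or otherwise aggregate) a numeric target, such as a future blood pressure value, over the nearest patients.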

Following the approach described above, we could collect the information for a specific patient from different registries and predict the elements that are decisive for the assignment to surgical rooms: the duration of the operation, the risk of post-surgical complications, and so on. An optimization algorithm could then assign patients to surgery timeslots efficiently, relying on the AI model's predictions, with the obvious consequence of improving the quality of the service by minimizing the losses caused by errors.
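The two-stage idea (predict, then optimize) can be sketched as follows. The predicted durations, room names, and slot lengths are invented, and the greedy rule (longest predicted operation into the longest remaining slot) is a deliberately simple stand-in for a proper assignment or scheduling solver.

```python
# Stage 1 (assumed done): an AI model has predicted each operation's
# duration in hours. All names and numbers are illustrative.
predicted = {"p1": 2.5, "p2": 1.0, "p3": 4.0, "p4": 1.5}

# Available surgery timeslots and their lengths in hours.
slots = {"room A 08:00": 4.0, "room A 13:00": 3.0,
         "room B 08:00": 2.0, "room B 11:00": 1.0}

# Stage 2: greedy matching - give the longest predicted operation the
# longest remaining slot; a real scheduler would use a proper solver
# and also weigh risks such as post-surgical complications.
schedule = {}
free_slots = sorted(slots, key=slots.get, reverse=True)
for patient in sorted(predicted, key=predicted.get, reverse=True):
    schedule[patient] = free_slots.pop(0)

print(schedule)
```

Even this crude rule shows the benefit of having predictions at all: every operation fits inside its slot, whereas a manual assignment made without predicted durations risks overruns and idle rooms.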

A natural question may arise: how can we trust an algorithm's decisions when real people are involved? How can we be sure we are really making the best choice? Here a pivotal role is played by Explainable AI, the branch of AI that focuses on explaining how AI algorithms work, so that the end user does not have to treat them as a black box. But this will be the topic of the next post!

Giampaolo Pileggi
Research scientist. NEC Laboratories Europe GmbH

