AI in healthcare: ethical and legal challenges
8 November
AI in Digital Health is happening NOW
We have seen it in movies and TV series, but also in futuristic literature such as Isaac Asimov’s books. Artificial Intelligence has incredible potential and will revolutionize most aspects of our daily lives within a few years.
In recent years, AI has become a reference point for tech innovation, and its adoption is expected to grow further over the next couple of years. From entertainment to the food industry and sports, AI is changing most of today’s enterprises, improving their efficiency, products, and services.
But what about healthcare and medicine?
This “novel” technology (not so novel, given that Alan Turing wrote about it back in 1950) is now a reality in healthcare, supporting doctors and specialists worldwide.
Rest assured: AI won’t replace medical professionals, with their insight, vision, and humanity. It will, however, help them improve the efficiency of their daily work routines. From diagnosis and treatment protocol development to assistance with repetitive tasks, precision and personalized medicine, and patient monitoring and care, AI will bring positive waves of innovation to the healthcare industry.
As a result, the collaboration between healthcare professionals and modern technology will bring positive change and disrupt medicine, improving outcomes and delivering better health services. And this is not a distant or utopian future: we are gradually getting there, step by step.
We are trying to help this disruption by bringing AI to patient care worldwide.
But to achieve this goal, it is crucial to follow shared guidelines and, thus, to create rules and processes that regulate the use of AI, enabling its full potential without incurring avoidable issues.
AI regulations in healthcare
The AI Act is a proposed European law on Artificial Intelligence and is set to become the first comprehensive set of rules in the world to regulate AI. Like the EU’s GDPR, which came into force in 2018, the AI Act has the potential to become a global standard, making waves far beyond Europe. The regulation will apply to any AI system operating within the European Union: as we can imagine, it will have tremendous implications for innovative organizations using AI around the world.
A 2020 survey carried out by McKinsey highlights that only 48% of the organizations interviewed reported that they recognize regulatory-compliance risks. Furthermore, only 28% reported actively working to address them.
Although not yet in force, the proposed rules provide a clear vision of the future of AI regulation as a whole. Now is the time to begin understanding their implications and to prepare actions that mitigate the risks that may emerge.
But what type of AI will fall under the new EU legislation?
The EU identifies four risk categories of Artificial Intelligence (a short illustrative sketch follows the list):
- Unacceptable-risk AI systems: “All AI systems considered a clear threat to the safety, livelihoods, and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.”
- High-risk AI systems: All remote biometric identification systems are considered high-risk and subject to strict requirements; their use in publicly accessible spaces for law enforcement purposes is, in principle, prohibited. This category is subject to the most extensive set of requirements, including human oversight, cybersecurity, transparency, and risk management. Most AI systems used in healthcare will fall into this category.
- Limited-risk AI systems: Limited risk refers to AI systems with specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine, so they can make an informed decision to continue or step back.
- Minimal or no-risk AI systems: The proposal allows the unrestricted use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
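To make the tiering concrete, here is a minimal, purely illustrative sketch in Python of how a team might encode the four risk categories and record example classifications. The tier names follow the proposal; the example systems and their assignments are hypothetical, and classifying a real system requires legal analysis of its specific use case.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers defined in the draft EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. government social scoring
    HIGH = "high"                  # strict requirements; most healthcare AI
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # unrestricted, e.g. spam filters, video games

# Hypothetical example systems mapped to tiers, for illustration only;
# a real classification depends on the legal analysis of each use case.
EXAMPLE_CLASSIFICATIONS = {
    "government social scoring": AIActRiskTier.UNACCEPTABLE,
    "diagnostic imaging assistant": AIActRiskTier.HIGH,
    "patient-intake chatbot": AIActRiskTier.LIMITED,
    "email spam filter": AIActRiskTier.MINIMAL,
}

for system, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{system}: {tier.value} risk")
```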
First steps to start an AI compliance journey
This draft represents only the first step toward an international effort to manage the risks associated with AI. Although no measures have been put in place yet, this is the right time to start thinking about creating processes to ensure compliance.
The sooner digital health companies adapt to this new regulatory reality, the better their long-term success in the market will be.
At the beginning of this compliance journey, innovative companies working in the digital health ecosystem should:
- Establish an AI-related risk management program, including the processes used and chosen when developing AI systems.
- Compile an inventory of all AI systems used within the company, describing each system’s current and planned use cases along with its risk classification (a minimal inventory sketch follows this list).
- Outline a risk-classification system, a risk-mitigation strategy, and data-risk management processes that cover all the risks each system poses, including unintended consequences that may arise.
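As a concrete starting point for such an inventory, the sketch below shows one possible shape for a per-system record. All field names and example values are hypothetical, chosen only to illustrate the idea; a real inventory would be tailored to the company’s systems and legal advice.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a company-wide AI-system inventory (illustrative fields only)."""
    name: str
    description: str
    risk_tier: str                  # one of: unacceptable, high, limited, minimal
    current_use_cases: list[str] = field(default_factory=list)
    planned_use_cases: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)  # incl. unintended consequences
    mitigations: list[str] = field(default_factory=list)

# A hypothetical inventory entry, with made-up values for illustration.
inventory = [
    AISystemRecord(
        name="triage-assistant",
        description="Model that suggests triage priority from intake notes",
        risk_tier="high",
        current_use_cases=["emergency-department triage support"],
        planned_use_cases=["telehealth intake"],
        known_risks=["bias across patient groups", "over-reliance by staff"],
        mitigations=["human oversight of every suggestion", "periodic bias audits"],
    ),
]

for record in inventory:
    print(f"{record.name}: {record.risk_tier} risk, {len(record.known_risks)} known risks")
```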
In conclusion, this draft EU AI regulation should be seen as a reminder for companies to ensure they have a solid set of processes to manage AI risk and comply with present and future regulations.
To deliver real innovation in this field, it is essential to focus efforts on creating frameworks for risk management and compliance that allow Artificial Intelligence to be improved and deployed continuously, safely, and quickly.
Jovan Stevovic, PhD – CEO of Chino.io
* * *
Chino.io combines technological and legal expertise to help innovative companies and projects navigate EU regulatory frameworks, enabling successful launches and reimbursement approvals while ensuring people’s data privacy and security. The company is certified ISO 27001 (information security), ISO 13485 (quality management for medical devices and medical software), and ISO 9001 (general quality assurance).