Every time a patient is admitted to a healthcare facility, large volumes of data are collected from various sources. This data landscape is difficult for doctors and other healthcare staff to navigate, so there are potentially major gains to be made with AI and machine learning methods, that is, methods in which software is taught to carry out a certain task in order to support decision-making and thereby improve the quality of healthcare. Moreover, AI functionality is not affected by stress or the time of day. However, AI methods can also be hard to understand and non-transparent:
“Put simply, we feed a large volume of data into a complex system and the system produces a diagnosis or a risk estimate. However, the system is not able to explain why the diagnosis has been made or why the risk is elevated. Compared with simpler decision rules, such as recommended measures when a blood test result is too high, the rules in these systems are far more complex,” explains Jonas Björk, Professor of epidemiology at Lund University and project manager for the research project.
There are also privacy risks involved in increased linkage of sensitive registry data, as well as legal issues surrounding responsibility and transparency. The methods used may also give rise to ethical dilemmas, along with the risk of misleading or discriminatory decision recommendations.
“We therefore need a more comprehensive evaluation of these methods – both pros and cons – and a refinement of existing approaches, so that self-learning software can provide comprehensible explanations. The project has a uniquely wide-reaching approach: we are attempting both to develop complex systems for health and care services and to study the consequences of their use.
“The project focuses on cardiovascular diseases and three specific areas of application for self-learning software: prevention, diagnosis and prognosis. If the project is successful, the long-term goal is to implement more decision support of this kind within health and care services. At the same time, it is important to emphasise that AI is a tool, and that a doctor will always be responsible for decisions about a patient’s care.”
“I’m looking forward to working across disciplines, both with researchers from different fields and with experts working clinically. Halmstad University is a leader in the development of machine learning technology, and Region Halland has an impressive data infrastructure of interlinked registry data from various healthcare organisations. We also hope to contribute to developing ethical guidelines, ensuring that such support systems can be used in a fair, transparent and secure manner,” concludes Jonas Björk.
The research project, which ends in 2023, is a collaboration between Lund University – the Faculty of Medicine, the Faculty of Law, the Faculty of Engineering (LTH), the Faculty of Science and the School of Economics and Management – and Halmstad University.