Expertise

Machine Learning

Machine learning (ML) applications are making a considerable impact on healthcare and have the potential to redefine disease prediction and prevention strategies. Access to vast amounts of health data across modalities such as electronic health records (EHR), medical imaging, and genomics enables wide-scale advancements encompassing predictive analytics, medical imaging, personalized medicine, and patient management.

Predictive analytics uses ML algorithms to analyze historical data and predict future outcomes (e.g., mortality, hospitalization, or length of stay), which is particularly beneficial for identifying individuals at risk of developing certain conditions such as diabetes or cardiovascular disease (CVD). ML solutions applied in medical imaging can analyze complex medical images, such as X-rays, MRIs, and CT scans, and detect abnormalities like tumors or other diseases at an early stage. Personalized medicine tailors treatment strategies to individual patients based on their genetic data, medical history, and lifestyle factors, enabling healthcare providers to design more effective and accurate treatment plans that lead to better patient outcomes. In the area of patient management, natural language processing models (ML-based algorithms) can automate the documentation process by transcribing and analyzing doctors’ notes.
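
As a concrete illustration, a minimal predictive-analytics sketch on tabular EHR-style data might look as follows; the file name, feature columns, and outcome label are hypothetical placeholders rather than a real cohort.

```python
# Minimal sketch: 10-year CVD risk classification from tabular EHR features.
# "ehr_cohort.csv" and all column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("ehr_cohort.csv")
X = df[["age", "sbp", "hba1c", "ldl", "smoker"]]  # illustrative risk factors
y = df["cvd_event_10y"]                           # hypothetical binary outcome

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```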

Within the Computational Cardiology group, we have identified novel features for the 10-year risk of cardiovascular disease (CVD) in people with type 2 diabetes, which can be used to improve early identification of high-risk individuals and help with CVD management. Furthermore, we have developed CVD polygenic risk score models for 10-year CVD risk prediction, which may offer unique possibilities to identify people with a meaningful risk of developing CVD. Additionally, using ML models we have derived a highly discriminative risk-prediction model for end-stage heart failure (HF) in patients with non-ischemic dilated cardiomyopathy, which may meaningfully improve clinical decision-making.

The current research focuses on developing personalized tools for early identification of cardiovascular risk, diagnosis, simulation of potential treatment effects, and the inclusion of AI-based treatment suggestions based on the Dutch General Practitioner and European Society of Cardiology (ESC) guidelines.

Data Standardization and Federated Learning

Recent advancements in Natural Language Processing (NLP), particularly the rise of Large Language Models (LLMs), have opened doors to exciting possibilities in healthcare. LLMs are generally pretrained on massive amounts of textual data to learn general language constructs (including grammar, syntax, and semantics), and then fine-tuned on instruction data for multiple tasks and aligned with human preferences. They have demonstrated remarkable capabilities across a range of medical tasks, including answering clinical questions, summarizing medical texts, assisting in diagnostics, and predicting patient outcomes.

A significant portion of electronic health record (EHR) data is stored as unstructured data, i.e., data without a predefined format, such as clinical notes and imaging reports. LLMs provide the opportunity to automatically analyze and derive insights from these unstructured data. However, LLMs are inherently limited to the knowledge gained during training and can struggle with tasks outside their specific training scope. In addition, a key issue with LLMs is their potential to generate hallucinations, where the model produces information that appears coherent but is factually incorrect or unsubstantiated. In healthcare, such hallucinations could lead to unsafe recommendations, misinterpretations, or harmful outcomes if not properly mitigated. To address these concerns, fine-tuning LLMs on domain-specific medical data is important.
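
As a rough sketch of what such fine-tuning can look like, the example below adapts a small causal language model to a domain corpus using Hugging Face Transformers. The base model (gpt2 as a stand-in) and the clinical_notes.txt corpus are assumptions; a real clinical setup would add de-identification, careful evaluation, and typically parameter-efficient methods such as LoRA.

```python
# Hedged sketch of domain-specific fine-tuning with Hugging Face Transformers.
# Model name and training corpus are placeholders, not our actual pipeline.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in; clinical work would use a larger medical LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

ds = load_dataset("text", data_files={"train": "clinical_notes.txt"})  # hypothetical
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-clinical", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```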

Within the Amsterdam Center for Computational Cardiology, we focus on the development of a central data platform to leverage multi-modal clinical data spanning imaging, genomics, electronic health records, and clinical trials. Data will be mapped to international standards to allow for efficient data storage and interoperability across centers. In the context of various projects, data harmonization pipelines are further optimized for different formats (FHIR, OMOP) using internationally recognized standards. This allows for collaborations with other institutes through a federated learning strategy as well as external validation, providing a solid foundation for AI-based research while enhancing data privacy and improving the generalizability and applicability of the developed models by combining information from various clinical institutes. The ultimate goal is to provide a platform that allows for the clinical embedding of developed AI models within the healthcare infrastructure to support doctors in their clinical decision-making.
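
To make the federated strategy concrete, here is a toy sketch of federated averaging (FedAvg) across three simulated centers: each center computes a local logistic-regression update and only model weights, never patient-level data, are exchanged. The data are randomly generated for illustration.

```python
# Toy sketch of federated averaging (FedAvg): each center trains locally and
# only model weights are exchanged, never patient-level data.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One center's local logistic-regression update via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient of the logistic loss
    return w

rng = np.random.default_rng(0)
centers = [(rng.normal(size=(100, 3)), rng.integers(0, 2, 100)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):                        # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in centers]
    sizes = [len(y) for _, y in centers]
    global_w = np.average(local_ws, axis=0, weights=sizes)  # size-weighted mean
print("federated model weights:", global_w)
```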

Natural Language Processing

Harnessing large language models to evaluate medication treatment adherence in accordance with the ESC guidelines for heart failure

Cardiovascular disease encompasses various heart and blood vessel conditions, with heart failure being notably concerning due to its high rates of hospitalization and mortality among older adults and its substantial contribution to healthcare costs. Early diagnosis and effective management of heart failure are vital for enhancing patient outcomes, improving quality of life, and mitigating healthcare expenses.

Clinical guidelines serve as invaluable tools for healthcare professionals, offering evidence-based recommendations for the diagnosis, treatment, and management of various medical conditions [ref]. In the realm of heart failure, the European Society of Cardiology (ESC) provides comprehensive guidelines for the management of heart failure (hereafter referred to as ESC-HF). These guidelines provide recommendations on the diagnosis, treatment, and management of heart failure based on the latest scientific evidence and expert consensus. As such, they aid healthcare professionals in their decision-making to optimize symptom control, prevent disease progression, and reduce the burden of heart failure on both patients and healthcare systems.

Within the Amsterdam Center for Computational Cardiology (ACCC), we drive innovation in several key areas by leveraging LLMs and EHR data from cardiovascular patients, fine-tuning these models on domain-specific medical data.

  • Assessing treatment alignment with guidelines:
    We utilize LLMs to evaluate whether treatment plans and procedures align with the European Society of Cardiology (ESC) guidelines. By analyzing patient records, we identify patterns and discrepancies, promoting adherence to evidence-based practices and improving patient outcomes.
  • Prognostic modeling for cardiovascular outcomes:
    We enhance traditional risk prediction models by incorporating both structured data (e.g., demographics, lab results) and unstructured data (e.g., clinical notes, imaging reports) using LLMs. This approach improves predictive performance and enables more personalized care.
  • Converting clinical notes into structured data:
    We apply LLMs to convert unstructured clinical notes into structured data, making it easier to analyze and use for statistical analysis and clinical research (see the sketch after this list).
  • Automating medical coding:
    We streamline the medical coding process by using LLMs to automate the extraction of relevant codes from clinical notes. This reduces the administrative burden on healthcare providers while ensuring accuracy and consistency for billing, reporting, and compliance.
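
The snippet below sketches the note-to-structured-data step: an LLM is prompted to emit a small JSON record for a few heart-failure fields. Here, call_llm is a hypothetical stand-in for whatever model endpoint is used, and the field schema is purely illustrative.

```python
# Hedged sketch: prompting an LLM to turn a free-text note into structured JSON.
# `call_llm` is a hypothetical placeholder, not a real API.
import json

PROMPT = """Extract the following fields from the clinical note as JSON:
diagnosis, lvef_percent, nyha_class, medications (list). Use null if absent.

Note:
{note}
"""

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM endpoint; replace with a real client."""
    raise NotImplementedError

def extract_structured(note: str) -> dict:
    raw = call_llm(PROMPT.format(note=note))
    return json.loads(raw)  # a real pipeline would validate against a schema

note = "68M with HFrEF, LVEF 30%, NYHA III. On bisoprolol and dapagliflozin."
# extract_structured(note) -> {"diagnosis": "HFrEF", "lvef_percent": 30, ...}
```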

Cardiovascular Epidemiology​

Cardiovascular diseases (CVDs) remain the leading cause of morbidity and mortality worldwide, with an estimated death toll of almost 18 million each year. The overall group of CVDs comprises different conditions, including coronary artery disease, heart failure, stroke, structural heart diseases, and more.

Yet, the burden of CVDs is not derived from these conditions alone. Numerous risk factors predispose individuals to developing CVD and place a substantial burden on healthcare. The most important risk factors are high blood pressure, high cholesterol, overweight/obesity, diabetes, physical inactivity, and smoking. Effective control of these risk factors has been shown to significantly reduce the risk of developing CVDs. Additionally, appropriate management, including pharmacological interventions, of both CVDs and their risk factors can substantially alleviate their overall impact.

In cardiovascular epidemiology, we focus on understanding the patterns, causes, and effects of CVDs and their risk factors. We examine the factors that contribute to the development, progression, and outcomes of CVDs, employing a range of research methodologies.

Within the Amsterdam Center for Computational Cardiology, we apply cardiovascular epidemiology to uncover causal associations between CVDs, outcomes, and risk factors. Using methodologies from causal inference, such as propensity scores, inverse probability weighting (IPW), and instrumental variable analysis, we account for confounding variables and draw accurate conclusions about relationships.
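
As an illustration of one of these techniques, the sketch below estimates an inverse-probability-weighted treatment effect in a hypothetical observational cohort; the column names (statin as treatment, cvd_event as outcome) are placeholders, and validity rests on the usual no-unmeasured-confounding assumption.

```python
# Minimal sketch of inverse probability weighting (IPW) for a binary treatment.
# File and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("cohort.csv")            # hypothetical observational cohort
confounders = df[["age", "sex", "sbp", "diabetes"]]
treated = df["statin"].to_numpy()         # 1 = treated, 0 = untreated
outcome = df["cvd_event"].to_numpy()

# Propensity score: P(treatment | confounders)
ps = (LogisticRegression(max_iter=1000)
      .fit(confounders, treated).predict_proba(confounders)[:, 1])

# Stabilized IPW weights (marginal treatment probability in the numerator)
w = np.where(treated == 1, treated.mean() / ps, (1 - treated.mean()) / (1 - ps))

# IPW-adjusted risk difference between treated and untreated
ate = (np.average(outcome[treated == 1], weights=w[treated == 1])
       - np.average(outcome[treated == 0], weights=w[treated == 0]))
print("IPW-adjusted risk difference:", ate)
```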

Additionally, we focus on developing individualized risk prediction models. By incorporating genetic information, clinical measurements, and lifestyle factors, we create precise tools for predicting an individual’s risk of developing CVD, personalizing prevention strategies, and optimizing treatment plans.
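
A minimal sketch of such an individualized risk model, assuming a hypothetical follow-up cohort and using a Cox proportional hazards fit from the lifelines package, might look as follows; the columns, including a polygenic risk score prs, are illustrative.

```python
# Sketch: individualized 10-year CVD risk from a Cox proportional hazards model.
# Uses the `lifelines` package; file and column names are illustrative.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("followup_cohort.csv")  # hypothetical: one row per patient
cols = ["followup_years", "cvd_event", "age", "sex", "ldl", "prs"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="followup_years", event_col="cvd_event")

# 10-year risk = 1 - S(10 | covariates) for a few new individuals
new_patients = df[["age", "sex", "ldl", "prs"]].head()
surv_10y = cph.predict_survival_function(new_patients, times=[10])
print(1 - surv_10y.iloc[0])              # one predicted 10-year risk per patient
```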

We also examine the progression of CVDs over time to identify critical intervention periods and factors influencing disease trajectories, aiming to improve long-term outcomes. Our research is supported by robust statistical and epidemiological techniques, with which we strive to advance the field of cardiovascular epidemiology and improve patient care.

  • Te Hoonte F, Spronk M, Sun Q, Wu K, Fan S, Wang Z, Bots ML, Van der Schouw YT, Uijl A, Vernooij RWM. Ideal cardiovascular health and cardiovascular-related events: a systematic review and meta-analysis. Eur J Prev Cardiol. 2024 Jun 3;31(8):966-985. doi: 10.1093/eurjpc/zwad405.
  • Schmidt AF, Joshi R, Gordillo-Marañón M, Drenos F, Charoen P, Giambartolomei C, Bis JC, Gaunt TR, Hughes AD, Lawlor DA, Wong A, Price JF, Chaturvedi N, Wannamethee G, Franceschini N, Kivimaki M, Hingorani AD, Finan C. Biomedical consequences of elevated cholesterol-containing lipoproteins and apolipoproteins on cardiovascular and non-cardiovascular outcomes. Commun Med (Lond). 2023 Jan 20;3(1):9. doi: 10.1038/s43856-022-00234-0.
  • Uijl A, Koudstaal S, Vaartjes I, Boer JMA, Verschuren WMM, van der Schouw YT, Asselbergs FW, Hoes AW, Sluijs I. Risk for Heart Failure: The Opportunity for Prevention With the American Heart Association’s Life’s Simple 7. JACC Heart Fail. 2019 Aug;7(8):637-647. doi: 10.1016/j.jchf.2019.03.009.
  • Uijl A, Koudstaal S, Direk K, Denaxas S, Groenwold RHH, Banerjee A, Hoes AW, Hemingway H, Asselbergs FW. Risk factors for incident heart failure in age- and sex-specific strata: a population-based cohort using linked electronic health records. Eur J Heart Fail. 2019 Oct;21(10):1197-1206. doi: 10.1002/ejhf.1350.

Image and electrocardiographic analysis

Electrocardiography (ECG) and image analysis play an important role in cardiology, aiding diagnosis, monitoring, and treatment planning for cardiovascular diseases. The ECG assesses cardiac electrical activity, providing important information on heart rhythm, rate, and electrical function, and is especially valuable in the detection of arrhythmias, ischemic events, and structural abnormalities.

The simplicity and non-invasive nature of the ECG make it an essential diagnostic tool in both acute and routine cardiovascular care. Additional insight into cardiac structure and mechanical function can be obtained using cardiac imaging modalities such as echocardiography, cardiac MRI, and CT, which provide detailed visualization of the heart’s anatomy, structure, and function. Advanced imaging techniques help in assessing myocardial function, valvular abnormalities, and coronary artery disease. Together, ECG and cardiac imaging are indispensable for diagnosing, managing, and monitoring a wide spectrum of heart conditions.

However, only parameters describing global cardiac function (e.g., ejection fraction) or electrical activity (e.g., QRS duration or T-wave polarity) are currently considered in clinical practice to track disease progression. Recent studies indicate that more subtle markers for disease onset and progression can be identified, for example using radiomics techniques, AI-based techniques or advanced signal processing.
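
As a toy example of such signal processing, the sketch below detects R peaks in a single-lead ECG and derives simple heart-rate-variability markers; the recording file and sampling rate are assumptions, and a real pipeline would use validated delineation algorithms and filtering.

```python
# Toy sketch: deriving quantitative ECG features beyond global parameters.
# Recording file and sampling rate are hypothetical assumptions.
import numpy as np
from scipy.signal import find_peaks

fs = 500                                     # sampling rate in Hz (assumed)
ecg = np.loadtxt("lead_ii.txt")              # hypothetical single-lead recording

# R peaks as tall, well-separated maxima (crude but illustrative)
peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 99), distance=int(0.4 * fs))
rr = np.diff(peaks) / fs * 1000              # R-R intervals in ms

sdnn = np.std(rr, ddof=1)                    # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat variability
print(f"mean HR: {60000 / rr.mean():.1f} bpm, "
      f"SDNN: {sdnn:.1f} ms, RMSSD: {rmssd:.1f} ms")
```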

Through in-depth ECG- and imaging-based analysis, detailed insight into cardiovascular mechanical, structural, and electrical functioning can be obtained, contributing to improved outcomes and a reduced cardiovascular disease burden. This can support the early identification of adverse events and the assessment of baseline patient risk profiles. Additionally, the ability to identify patients not at risk of disease progression or adverse events will further alleviate the healthcare burden. Within our lab, we focus on in-depth (AI-based) analysis of ECG and imaging data for the improved characterization of, specifically, inherited cardiomyopathies and valvular heart disease. Through the extraction of novel quantitative features representing (subtle) disease progression, we aim to improve risk stratification and ultimately use this information to further personalize treatment. In this effort, we both build on existing techniques from the literature, assessing model generalizability and implementation strategies, and develop and improve local algorithms.

Digital Twins and Synthetic Data

Digital twins are digital representations of physical assets (e.g., objects, processes, or devices) that model their data, functionality, and communication interfaces. Acting as replicas of the physical objects or processes they represent, digital twins enable real-time monitoring and evaluation without requiring close proximity. With its origin in engineering, the digital twin concept has only recently found application in the healthcare sector. The use of health digital twins represents a new advance in the era of personalized medicine, aiming to enhance personalized prognosis and treatment by using data from several sources (e.g., electronic health records, wearable data) as input.

Synthetic data provides a means of making heterogeneous and highly sensitive medical data more harmonized and more easily shareable via generative digital twin models. Analysis of coherent patient cohorts requires a large amount of information, often from multiple medical centers. Synthetic data offers a privacy-preserving way to create larger datasets that can lead to better models and better predictions of patient risk and personalized outcomes.
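
As a deliberately simple baseline for what a generative model provides, the sketch below fits a multivariate Gaussian to a few tabular features and samples synthetic rows; the input file and columns are hypothetical, and real projects would use stronger generative models together with formal privacy evaluation.

```python
# Minimal synthetic-data baseline: sample new rows from a multivariate Gaussian
# fitted to real tabular features. File and column names are hypothetical.
import numpy as np
import pandas as pd

real = pd.read_csv("tabular_cohort.csv")[["age", "sbp", "ldl", "bmi"]]
mu = real.mean().to_numpy()
cov = np.cov(real.to_numpy(), rowvar=False)

rng = np.random.default_rng(7)
synthetic = pd.DataFrame(rng.multivariate_normal(mu, cov, size=len(real)),
                         columns=real.columns)

# Quick sanity check: marginal means should roughly match the real data
print(real.mean().round(1), synthetic.mean().round(1), sep="\n")
```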

Within the Amsterdam Center for Computational Cardiology, we focus on applying the digital twin concept to patients with cardiovascular disease (CVD) in the Netherlands. Given that CVD is a leading cause of mortality worldwide, there is a pressing need for more accurate and personalized risk assessments. We aim to leverage the recent Dutch government initiative to centralize individual health data within a digital personal health environment (PHE), where all medical records are stored in a centralized, cloud-based system.

This initiative facilitates the development and testing of digital twins by integrating comprehensive data from electronic health records, imaging, genetics, and wearable devices. By creating digital twins that closely resemble real-world patients, we aim to simulate future health events, assess risks, and explore the outcomes of various interventions, ultimately enhancing personalized treatment for CVD patients.

Within the Amsterdam Center for Computational Cardiology, we investigate the use of generative AI models that can create realistic multi-modal synthetic data for the purposes of augmenting patient cohorts, facilitating the sharing of fully anonymized data, and predicting synthetic outcomes of interventions. We explore both deep learning and physics-informed digital twin models for these purposes.

Genetically guided drug target identification

Only 10% of all drug candidates are eventually approved for clinical use. The majority of drugs tested in clinical trials fail due to a lack of clinical efficacy, suggesting the wrong target has been selected. Genetically guided drug target identification offers a promising solution, providing a platform of human-based evidence on target efficacy available at an early pre-clinical development stage.

By analyzing the genetic makeup of individuals, particularly those with specific diseases, we can identify genetic variations linked to the disease. These variations point to gene products, such as specific proteins or pathways involved in the disease, which then become potential drug targets. Such targets are more likely to succeed because they address the underlying disease mechanism rather than merely alleviating symptoms, as many current drugs do. This specificity furthermore allows for fewer side effects and higher efficacy.

Genetically guided drug target identification is based on high-throughput computational analyses and can uncover novel targets, or opportunities for drug repurposing, that conventional methods might overlook, opening new therapeutic avenues. Additionally, this method enables personalized treatment, tailoring therapies to a patient’s genetic profile and resulting in better outcomes. Overall, this approach enhances the efficiency and effectiveness of drug development, significantly advancing personalized medicine.

Within the Computational Cardiology group, we apply genetically guided drug target identification to several cardiac diseases by developing and using sophisticated methods. We have led and contributed to several genome-wide association studies (GWAS), for example on cardiac MRI parameters and heart failure. These results can point to potential drug targets and can be used to identify causal relationships between exposures and outcomes using methods such as colocalization or Mendelian randomization (MR).

MR uses genetic variants as instruments to anticipate the effect an exposure will have on the onset of disease, by sourcing genome-wide variants associated with the exposure and determining whether they show a dose-response association with the disease. In previous studies, MR was applied to metabolites, cardiac MRI parameters, and several clinical outcomes, such as left ventricular ejection fraction, heart failure, and atrial fibrillation, identifying promising drug targets.
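
For intuition, the sketch below computes the standard fixed-effect inverse-variance weighted (IVW) MR estimate from per-variant summary statistics; the effect sizes are made-up numbers for illustration only.

```python
# Worked sketch of the inverse-variance weighted (IVW) MR estimate from
# per-variant summary statistics. All numbers are illustrative, not real data.
import numpy as np

beta_x = np.array([0.12, 0.08, 0.15, 0.05])      # variant -> exposure effects
beta_y = np.array([0.030, 0.018, 0.041, 0.010])  # variant -> outcome effects
se_y = np.array([0.010, 0.009, 0.012, 0.008])    # SEs of the outcome effects

# Per-variant Wald ratios (beta_y / beta_x), combined with inverse-variance weights
weights = beta_x**2 / se_y**2
ivw = np.sum(weights * (beta_y / beta_x)) / np.sum(weights)
ivw_se = 1.0 / np.sqrt(np.sum(weights))
print(f"IVW causal estimate: {ivw:.3f} (SE {ivw_se:.3f})")
```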

We are identifying and prioritizing additional drug targets to improve drug development and are interested in developing long-term collaborations with wet-lab partners in both academia and industry, focusing on both cardiac and non-cardiac therapeutic areas.

Clinically driven drug trials

Randomized controlled trials (RCTs) are widely regarded as the gold standard in clinical research, offering robust evidence on the efficacy and safety of treatments. However, they also pose significant challenges, including high costs, logistical complexity, and limitations in generalizability.

Due to strict inclusion and exclusion criteria, RCT populations often do not reflect the diversity of real-world patients, which can limit the applicability of trial findings. In everyday clinical practice, patients present with a wider range of comorbidities, varying susceptibilities to side effects, and diverse responses to treatment, making it difficult to translate RCT results directly to these settings.

Our research group focuses on innovative trial designs and novel methodologies that enhance the efficiency of trials and strengthen causal inference in the context of observational data. We explore the potential of real-world data sources, such as electronic health records (EHRs) and patient registries, to conduct trials within routine clinical practice. This approach increases the relevance and applicability of findings by reflecting the complexity of real-world patient populations.

In addition, we use techniques such as Mendelian randomization and target trial emulation. Mendelian randomization uses genetic variations as natural experiments to infer causal relationships between risk factors and health outcomes, reducing biases in observational data. Target trial emulation allows us to mimic the structure of RCTs using observational data, enabling robust causal analysis when traditional RCTs are impractical.

Furthermore, by incorporating phenomapping, which uses advanced data analytics and machine learning, we can identify distinct subgroups of patients based on clinical, genetic, and biomarker characteristics. This allows for better stratification of patients and tailoring of interventions, enhancing both the precision and the generalizability of trial results.
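
A minimal phenomapping sketch, assuming a hypothetical table of baseline characteristics, could cluster patients on standardized features with k-means as below; the feature set and number of clusters are illustrative choices.

```python
# Sketch of phenomapping: unsupervised clustering of patients on standardized
# clinical features. File, columns, and cluster count are illustrative.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("trial_baseline.csv")   # hypothetical baseline characteristics
features = ["age", "bmi", "egfr", "nt_probnp", "lvef"]

X = StandardScaler().fit_transform(df[features])
df["phenogroup"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print(df.groupby("phenogroup")[features].mean())  # profile each subgroup
```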

  • Schmidt AF, et al. Genetic drug target validation using Mendelian randomisation. Nat Commun. 2020 Jun 26;11(1):3255. doi: 10.1038/s41467-020-16969-0.
  • Linschoten et al. Rationale and design of the HOVON 170 DLBCL – ANTICIPATE trial: Preventing anthracycline-induced cardiotoxicity with dexrazoxane. [submitted]
  • Handoko ML, et al. Embedding routine health care data in clinical trials: with great power comes great responsibility. Neth Heart J. 2024 Mar;32(3):106-115. doi: 10.1007/s12471-023-01837-5.
  • Basile C, Lindberg F, Uijl A, …, Savarese G. Real-world data to provide information on drug effects: always a bad approach? [submitted to Int J Cardiol, July 2024; invited review]