A sedentary lifestyle, poor diet, smoking, and genetic and other health factors are major contributors to coronary heart disease (CHD). Despite recent medical advances that have lowered the number of deaths compared with past decades, CHD remains the leading cause of death in the UK (73,000 deaths per year) and imposes a tremendous economic burden: estimates put the cost to the UK economy at £6.7 billion per year. The overriding goal of this project is to take advantage of multimodal information within cardiac magnetic resonance images to improve their analysis, thereby facilitating the diagnosis and improving the treatment of CHD.
Magnetic Resonance Imaging (MRI) is uniquely positioned to help as a diagnostic imaging tool, as it is non-invasive and does not use ionising radiation. A typical cardiac protocol relies on several MR imaging sequences that provide images of different contrast, termed modalities hereafter, to assess disease status and progression. As a result of this range of acquisitions, hundreds of multidimensional, multimodal images are generated in a single patient exam, leading to severe data overload.
Robust, automated analysis algorithms would therefore help alleviate the clinical reading burden. Several algorithms have been proposed to segment and register the myocardium in the most commonly used modalities, considering each modality independently. However, the problem remains difficult and performance is not yet adequate. As a result, the analysis of cardiac imaging data remains a manual, time-consuming, and expensive process typically performed by clinical experts, and despite the huge amount of data generated in both clinical and research settings, only a fraction is analysed robustly.
This proposal aims to address these shortcomings by developing mechanisms that exploit the information shared across modalities to enable the joint analysis of cardiac imaging data, and thus make a significant leap in how such data are analysed.
Conference: Agis’s MICCAI paper on semi-supervised learning and segmentation (WP2) has been accepted for oral presentation.
Conference: Tom presented a paper (oral) at MIDL in Amsterdam on unsupervised learning and factorisation.
Publication: Agis submitted a paper to MICCAI on semi-supervised learning and segmentation (WP2).
Presentation: Sotos gave a seminar at the University of Bristol on image synthesis.
Code: Our code on multimodal synthesis and modality-invariant representation learning is now available.
Publication: Agis presented his work (joint with Thomas) on learning mappings between modalities at MICCAI (WP1).
Publication: Agis presented his work on learning mappings between modalities without pairing and co-registration at SASHIMI @ MICCAI 2017 (WP1).
Dataset: Our data collection is growing. Data from 40 patients have been obtained from the Royal Infirmary.
Kick-off: The project has officially started.