Eric Mörth

PhD Scientist

Multimodal Medical Visualization

Team Smit

Currently on a Research Stay at the VCG Group at Harvard University

I am a PhD Scientist in Multimodal Medical Visualization in Bergen. My PhD is a collaboration between the Mohn Medical Imaging and Visualization Center (MMIV) and the University of Bergen, and is funded through my supervisor Noeska Smit by the Trond Mohn Foundation (Trond Mohn Stiftelse).

My main focus is research into new and innovative ways to visualize and explore medical data, e.g., MRI data, to enable doctors to gain better insight into their data. My latest projects resulted in successful publications in the field of medical visualization. I recently finished my PhD (02.09.2022); before that, my master's studies in Medical Informatics at the Medical University of Vienna and in Biomedical Engineering at the Technical University of Vienna prepared me well for the challenges I faced during my PhD studies. Furthermore, I gathered more than five years of experience in software development at various companies in Austria.

* Best Paper Honorable Mention at VCBM2022 for MuSIC
* Best Short Paper at VINCI2022 for IceVIS
* Best Interdisciplinary Presentation Award at PacificGraphics2020 for ParaGlyder

In my free time I love photography, videography, travelling, and riding and building bicycles.



    @article{mittenentzwei2023userbehavior,
    year = {2023},
    title={Investigating user behavior in slideshows and scrollytelling as narrative genres in medical visualization},
    author={Mittenentzwei, Sarah and Garrison, Laura A and M{\"o}rth, Eric and Lawonn, Kai and Bruckner, Stefan and Preim, Bernhard and Meuschke, Monique},
    journal={Computers \& Graphics},
    abstract={In this study, we explore the impact of genre and navigation on user comprehension, preferences, and behaviors when experiencing data-driven disease stories. Our between-subject study (n=85) evaluated these aspects in-the-wild, with results pointing towards some general design considerations to keep in mind when authoring data-driven disease stories. Combining storytelling with interactive new media techniques, narrative medical visualization is a promising approach to communicating topics in medicine to a general audience in an accessible manner. For patients, visual storytelling may help them to better understand medical procedures and treatment options for more informed decision-making, boost their confidence and alleviate anxiety, and promote stronger personal health advocacy. Narrative medical visualization provides the building blocks for producing data-driven disease stories, which may be presented in several visual styles. These different styles correspond to different narrative genres, e.g., a Slideshow. Narrative genres can employ different navigational approaches. For instance, a Slideshow may rely on click interactions to advance through a story, while Scrollytelling typically uses vertical scrolling for navigation. While a common goal of a narrative medical visualization is to encourage a particular behavior, e.g., quitting smoking, it is unclear to what extent the choice of genre influences subsequent user behavior. Our study opens a new research direction into choice of narrative genre on user preferences and behavior in data-driven disease stories.},
    pdf = {pdfs/mittenentzwei2023userbehavior.pdf},
    images = {images/mittenentzwei2023userbehavior.png},
    thumbnails = {images/mittenentzwei2023userbehavior-thumb.png},
    project = {VIDI},
    }


    @phdthesis{Moerth-2022-PhDThesis,
    title = {Scaling Up Medical Visualization: Multi-Modal, Multi-Patient, and Multi-Audience Approaches for Medical Data Exploration, Analysis and Communication},
    author = {Mörth, Eric},
    year = 2022,
    month = {September},
    isbn = 9788230862193,
    url = {},
    school = {Department of Informatics, University of Bergen, Norway},
    abstract = {
    Medical visualization is one of the most application-oriented areas of visualization research. Close collaboration with medical experts is essential for interpreting medical imaging data and creating meaningful visualization techniques and visualization applications. Cancer is one of the most common causes of death, and with increasing average age in developed countries, gynecological malignancy case numbers are rising. Modern imaging techniques are an essential tool in assessing tumors and produce an increasing number of imaging data radiologists must interpret. Besides the number of imaging modalities, the number of patients is also rising, leading to visualization solutions that must be scaled up to address the rising complexity of multi-modal and multi-patient data. Furthermore, medical visualization is not only targeted toward medical professionals but also has the goal of informing patients, relatives, and the public about the risks of certain diseases and potential treatments. Therefore, we identify the need to scale medical visualization solutions to cope with multi-audience data.
    This thesis addresses the scaling of these dimensions in different contributions we made. First, we present our techniques to scale medical visualizations in multiple modalities. We introduced a visualization technique using small multiples to display the data of multiple modalities within one imaging slice. This allows radiologists to explore the data efficiently without having several juxtaposed windows. In the next step, we developed an analysis platform using radiomic tumor profiling on multiple imaging modalities to analyze cohort data and to find new imaging biomarkers. Imaging biomarkers are indicators based on imaging data that predict clinical outcome related variables. Radiomic tumor profiling is a technique that generates potential imaging biomarkers based on first and second-order statistical measurements. The application allows medical experts to analyze the multi-parametric imaging data to find potential correlations between clinical parameters and the radiomic tumor profiling data. This approach scales up in two dimensions, multi-modal and multi-patient. In a later version, we added features to scale the multi-audience dimension by making our application applicable to cervical and prostate cancer data and the endometrial cancer data the application was designed for. In a subsequent contribution, we focus on tumor data on another scale and enable the analysis of tumor sub-parts by using the multi-modal imaging data in a hierarchical clustering approach. Our application finds potentially interesting regions that could inform future treatment decisions. In another contribution, the digital probing interaction, we focus on multi-patient data. The imaging data of multiple patients can be compared to find interesting tumor patterns potentially linked to the aggressiveness of the tumors. 
Lastly, we scale the multi-audience dimension with our similarity visualization applicable to endometrial cancer research, neurological cancer imaging research, and machine learning research on the automatic segmentation of tumor data. In contrast to the previously highlighted contributions, our last contribution, ScrollyVis, focuses primarily on multi-audience communication. We enable the creation of dynamic scientific scrollytelling experiences for a specific or general audience. Such stories can be used for specific use cases such as patient-doctor communication or communicating scientific results via stories targeting the general audience in a digital museum exhibition.
    Our proposed applications and interaction techniques have been demonstrated in application use cases and evaluated with domain experts and focus groups. As a result, we brought some of our contributions into practical use at other research institutes. We want to evaluate their impact on other scientific fields and the general public in future work.
    },
    pdf = {pdfs/Moerth-PhD-Thesis-2022.pdf},
    images = {images/Moerth-PhD-Thesis-2022.PNG},
    thumbnails = {images/Moerth-PhD-Thesis-2022.PNG},
    project = {ttmedvis}
    }
    @inproceedings {EichnerMoerth2022MuSIC,
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    editor = {Renata G. Raidou and Björn Sommer and Torsten W. Kuhlen and Michael Krone and Thomas Schultz and Hsiang-Yun Wu},
    title = {{MuSIC: Multi-Sequential Interactive Co-Registration for Cancer Imaging Data based on Segmentation Masks}},
    author = {Eichner, Tanja* and Mörth, Eric* and Wagner-Larsen, Kari S. and Lura, Njål and Haldorsen, Ingfrid S. and Gröller, Eduard and Bruckner, Stefan and Smit, Noeska N.},
    note = {Best Paper Honorable Mention at VCBM2022},
    project = {ttmedvis},
    year = {2022},
    abstract = {In gynecologic cancer imaging, multiple magnetic resonance imaging (MRI) sequences are acquired per patient to reveal different tissue characteristics. However, after image acquisition, the anatomical structures can be misaligned in the various sequences due to changing patient location in the scanner and organ movements. The co-registration process aims to align the sequences to allow for multi-sequential tumor imaging analysis. However, automatic co-registration often leads to unsatisfying results. To address this problem, we propose the web-based application MuSIC (Multi-Sequential Interactive Co-registration). The approach allows medical experts to co-register multiple sequences simultaneously based on a pre-defined segmentation mask generated for one of the sequences. Our contributions lie in our proposed workflow. First, a shape matching algorithm based on dual annealing searches for the tumor position in each sequence. The user can then interactively adapt the proposed segmentation positions if needed. During this procedure, we include a multi-modal magic lens visualization for visual quality assessment. Then, we register the volumes based on the segmentation mask positions. We allow for both rigid and deformable registration. Finally, we conducted a usability analysis with seven medical and machine learning experts to verify the utility of our approach. Our participants highly appreciate the multi-sequential setup and see themselves using MuSIC in the future.
    },
    publisher = {The Eurographics Association},
    ISSN = {2070-5786},
    ISBN = {978-3-03868-177-9},
    DOI = {10.2312/vcbm.20221190},
    pdf = {pdfs/EichnerMoerth_2022.pdf},
    thumbnails = {images/EichnerMoerth_2022.PNG},
    images = {images/EichnerMoerth_2022.PNG},
    }
    @inproceedings {Kleinau2022Tornado,
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    editor = {Renata G. Raidou and Björn Sommer and Torsten W. Kuhlen and Michael Krone and Thomas Schultz and Hsiang-Yun Wu},
    title = {{Is there a Tornado in Alex's Blood Flow? A Case Study for Narrative Medical Visualization}},
    project = {ttmedvis},
    author = {Kleinau, Anna and Stupak, Evgenia and Mörth, Eric and Garrison, Laura A. and Mittenentzwei, Sarah and Smit, Noeska N. and Lawonn, Kai and Bruckner, Stefan and Gutberlet, Matthias and Preim, Bernhard and Meuschke, Monique},
    year = {2022},
    abstract = {Narrative visualization advantageously combines storytelling with new media formats and techniques, like interactivity, to create improved learning experiences. In medicine, it has the potential to improve patient understanding of diagnostic procedures and treatment options, promote confidence, reduce anxiety, and support informed decision-making. However, limited scientific research has been conducted regarding the use of narrative visualization in medicine. To explore the value of narrative visualization in this domain, we introduce a data-driven story to inform a broad audience about the usage of measured blood flow data to diagnose and treat cardiovascular diseases. The focus of the story is on blood flow vortices in the aorta, with which imaging technique they are examined, and why they can be dangerous. In an interdisciplinary team, we define the main contents of the story and the resulting design questions. We sketch the iterative design process and implement the story based on two genres. In a between-subject study, we evaluate the suitability and understandability of the story and the influence of different navigation concepts on user experience. Finally, we discuss reusable concepts for further narrative medical visualization projects.},
    publisher = {The Eurographics Association},
    ISSN = {2070-5786},
    ISBN = {978-3-03868-177-9},
    DOI = {10.2312/vcbm.20221183},
    pdf = {pdfs/Kleinau_2022.pdf},
    thumbnails = {images/Kleinau_2022.PNG},
    images = {images/Kleinau_2022.PNG},
    }
    @article{Moerth-2022-ScrollyVis,
    author = {Mörth, Eric and Bruckner, Stefan and Smit, Noeska N.},
    title = {ScrollyVis: Interactive visual authoring of guided dynamic narratives for scientific scrollytelling},
    journal = {IEEE Transactions on Visualization and Computer Graphics},
    year = {2022},
    volume = {},
    abstract = {Visual stories are an effective and powerful tool to convey specific information to a diverse public. Scrollytelling is a recent visual storytelling technique extensively used on the web, where content appears or changes as users scroll up or down a page. By employing the familiar gesture of scrolling as its primary interaction mechanism, it provides users with a sense of control, exploration and discoverability while still offering a simple and intuitive interface. In this paper, we present a novel approach for authoring, editing, and presenting data-driven scientific narratives using scrollytelling. Our method flexibly integrates common sources such as images, text, and video, but also supports more specialized visualization techniques such as interactive maps as well as scalar field and mesh data visualizations. We show that scrolling navigation can be used to traverse dynamic narratives and demonstrate how it can be combined with interactive parameter exploration. The resulting system consists of an extensible web-based authoring tool capable of exporting stand-alone stories that can be hosted on any web server. We demonstrate the power and utility of our approach with case studies from several diverse scientific fields and with a user study including 12 participants of diverse professional backgrounds. Furthermore, an expert in creating interactive articles assessed the usefulness of our approach and the quality of the created stories.},
    project = {ttmedvis},
    pdf = {pdfs/Moerth_2022_ScrollyVis.pdf},
    thumbnails = {images/Moerth_2022_ScrollyVis.png},
    images = {images/Moerth_2022_ScrollyVis.png},
    }
    @article{Moerth-2022-ICEVis,
    title = {ICEVis: Interactive Clustering Exploration for tumor sub-region analysis in multiparametric cancer imaging},
    author = {Mörth, Eric and Eichner, Tanja and Haldorsen, Ingfrid and Bruckner, Stefan and Smit, Noeska N.},
    year = 2022,
    journal = {Proceedings of the International Symposium on Visual Information Communication and Interaction (VINCI'22)},
    volume = {15},
    pages = {5},
    doi = {10.1145/3554944.3554958},
    issn = {},
    url = {},
    abstract = {Tumor tissue characteristics derived from imaging data are gaining importance in clinical research. Tumor sub-regions may play a critical role in defining tumor types and may hold essential information about tumor aggressiveness. Depending on the tumor’s location within the body, such sub-regions can be easily identified and determined by physiology, but these sub-regions are not readily visible to others. Regions within a tumor are currently explored by comparing the image sequences and analyzing the tissue heterogeneity present. To improve the exploration of such tumor sub-regions, we propose a visual analytics tool called ICEVis. ICEVis supports the identification of tumor sub-regions and corresponding features combined with cluster visualizations highlighting cluster validity. It is often difficult to estimate the optimal number of clusters; we provide rich facilities to support this task, incorporating various statistical measures and interactive exploration of the results. We evaluated our tool with three clinical researchers to show the potential of our approach.
    },
    note = {Best Short Paper at VINCI2022},
    images = {images/Moerth_2022_ICEVis.png},
    thumbnails = {images/Moerth_2022_ICEVis.png},
    pdf = {pdfs/Moerth_2022_ICEVis.pdf},
    vid = {vids/ICEVis.mp4},
    project = {ttmedvis}
    }


    @article{Beiglboeck-2021-Mortality,
    doi = {10.1007/s11695-021-05763-6},
    year = {2021},
    month = nov,
    publisher = {Springer Science and Business Media {LLC}},
    author = {Hannes Beiglb\"{o}ck and Eric M\"{o}rth and Berthold Reichardt and Tanja Stamm and Bianca Itariu and J\"{u}rgen Harreiter and Miriam Hufgard-Leitner and Paul Fellinger and Jakob Eichelter and Gerhard Prager and Alexander Kautzky and Alexandra Kautzky-Willer and Peter Wolf and Michael Krebs},
    title = {Sex-Specific Differences in Mortality of Patients with a History of Bariatric Surgery: a Nation-Wide Population-Based Study},
    journal = {Obesity Surgery},
    abstract = {Bariatric surgery reduces mortality in patients with severe obesity and is predominantly performed in women.
    Therefore, an analysis of sex-specific differences after bariatric surgery in a population-based dataset from Austria was
    performed. The focus was on deceased patients after bariatric surgery.
    The Austrian health insurance funds cover about 98% of the Austrian population. Medical health
    claims data of all Austrians who underwent bariatric surgery from 01/2010 to 12/2018 were analyzed. In total, 19,901 patients
    with 107,806 observed years postoperative were eligible for this analysis. Comorbidities based on International Classification
    of Diseases (ICD)-codes and drug intake documented by Anatomical Therapeutical Chemical (ATC)-codes were analyzed
    in patients deceased and grouped according to clinically relevant obesity-associated comorbidities: diabetes mellitus (DM),
    cardiovascular disease (CV), psychiatric disorder (PSY), and malignancy (M).
    In total, 367 deaths were observed (1.8%) within the observation period from 01/2010 to 04/2020. The overall
    mortality rate was 0.34% per year of observation and significantly higher in men compared to women (0.64 vs. 0.24%;
    p < 0.001(Chi-squared)). Moreover, the 30-day mortality was 0.19% and sixfold higher in men compared to women (0.48
    vs. 0.08%; p < 0.001). CV (82%) and PSY (55%) were the most common comorbidities in deceased patients with no sex-
    specific differences. Diabetes (38%) was more common in men (43 vs. 33%; p = 0.034), whereas malignant diseases (36%)
    were more frequent in women (30 vs. 41%; p = 0.025).
    After bariatric surgery, short-term mortality as well as long-term mortality was higher in men compared to
    women. In deceased patients, diabetes was more common in men, whereas malignant diseases were more common in women.},
    pdf = {pdfs/Beiglboeck2021_Article_Sex-SpecificDifferencesInMorta.pdf},
    thumbnails = {images/2021-Moerth-Diabetes-thumb.png},
    images = {images/2021-Moerth-Diabetes.png},
    keywords = {Bariatric surgery, Sex differences, Mortality, Population-based registry analysis, Comorbidities, Healthcare, research},
    }


    @article{Moerth-2020-RadEx,
    author = {M\"{o}rth, E. and Wagner-Larsen, K. and Hodneland, E. and Krakstad, C. and Haldorsen, I. S. and Bruckner, S. and Smit, N. N.},
    title = {RadEx: Integrated Visual Exploration of Multiparametric Studies for Radiomic Tumor Profiling},
    journal = {Computer Graphics Forum},
    volume = {39},
    number = {7},
    year = {2020},
    pages = {611--622},
    abstract = {Better understanding of the complex processes driving tumor growth and metastases is critical for developing targeted treatment strategies in cancer. Radiomics extracts large amounts of features from medical images which enables radiomic tumor profiling in combination with clinical markers. However, analyzing complex imaging data in combination with clinical data is not trivial and supporting tools aiding in these exploratory analyses are presently missing. In this paper, we present an approach that aims to enable the analysis of multiparametric medical imaging data in combination with numerical, ordinal, and categorical clinical parameters to validate established and unravel novel biomarkers. We propose a hybrid approach where dimensionality reduction to a single axis is combined with multiple linked views allowing clinical experts to formulate hypotheses based on all available imaging data and clinical parameters. This may help to reveal novel tumor characteristics in relation to molecular targets for treatment, thus providing better tools for enabling more personalized targeted treatment strategies. To confirm the utility of our approach, we closely collaborate with experts from the field of gynecological cancer imaging and conducted an evaluation with six experts in this field.},
    pdf = {pdfs/Moerth-2020-RadEx.pdf},
    images = {images/Moerth-2020-RadEx.jpg},
    youtube = {},
    thumbnails = {images/Moerth-2020-RadEx-thumb.jpg},
    project = {ttmedvis},
    doi = {10.1111/cgf.14172}
    }
    @inproceedings{Moerth-2020-ParaGlyder,
    author = "M\"{o}rth, E. and Haldorsen, I.S. and Bruckner, S. and Smit, N.N.",
    title = "ParaGlyder: Probe-driven Interactive Visual Analysis for Multiparametric Medical Imaging Data",
    booktitle = "Proceedings of Computer Graphics International",
    pages = "351--363",
    year = "2020",
    abstract = "Multiparametric medical imaging describes approaches that include multiple imaging sequences acquired within the same imaging examination, as opposed to one single imaging sequence or imaging from multiple imaging modalities. Multiparametric imaging in cancer has been shown to be useful for tumor detection and may also depict functional tumor characteristics relevant for clinical phenotypes. However, when confronted with datasets consisting of multiple values per voxel, traditional reading of the imaging series fails to capture complicated patterns. Those patterns of potentially important imaging properties of the parameter space may be critical for the analysis. Standard approaches, such as transfer functions and juxtapositioned visualizations, fail to convey the shape of the multiparametric parameter distribution in sufficient detail. For these reasons, in this paper we present an approach that aims to enable the exploration and analysis of such multiparametric studies using an interactive visual analysis application to remedy the trade-offs between details in the value domain and in spatial resolution. Interactive probing within or across subjects allows for a digital biopsy that is able to uncover multiparametric tissue properties. This may aid in the discrimination between healthy and cancerous tissue, unravel radiomic tissue features that could be linked to targetable pathogenic mechanisms, and potentially highlight metastases that evolved from the primary tumor. We conducted an evaluation with eleven domain experts from the field of gynecological cancer imaging, neurological imaging, and machine learning research to confirm the utility of our approach.",
    note= "The final authenticated version is available online at",
    pdf = "pdfs/Moerth-2020-CGI-ParaGlyder.pdf",
    images = "images/Moerth-2020-ParaGlyder.PNG",
    thumbnails = "images/Moerth-2020-ParaGlyder-thumb.png",
    youtube = "",
    publisher = "LNCS by Springer",
    project = "ttmedvis",
    doi = "10.1007/978-3-030-61864-3_29"
    }


    @inproceedings {Moerth-2019-VCBM,
    booktitle = "Eurographics Workshop on Visual Computing for Biology and Medicine",
    editor = "Kozlíková, Barbora and Linsen, Lars and Vázquez, Pere-Pau and Lawonn, Kai and Raidou, Renata Georgia",
    abstract = "Three-dimensional (3D) ultrasound imaging and visualization
    is often used in medical diagnostics, especially in prenatal
    screening. Screening the development of the fetus is
    important to assess possible complications early on. State
    of the art approaches involve taking standardized
    measurements to compare them with standardized tables. The
    measurements are taken in a 2D slice view, where precise
    measurements can be difficult to acquire due to the fetal
    pose. Performing the analysis in a 3D view would enable the
    viewer to better discriminate between artefacts and
    representative information. Additionally making data
    comparable between different investigations and patients is
    a goal in medical imaging techniques and is often achieved
    by standardization. With this paper, we introduce a novel
    approach to provide a standardization method for 3D
    ultrasound fetus screenings. Our approach is called “The
    Vitruvian Baby” and incorporates a complete pipeline for
    standardized measuring in fetal 3D ultrasound. The input of
    the method is a 3D ultrasound screening of a fetus and the
    output is the fetus in a standardized T-pose. In this pose,
    taking measurements is easier and comparison of different
    fetuses is possible. In addition to the transformation of
    the 3D ultrasound data, we create an abstract representation
    of the fetus based on accurate measurements. We demonstrate
    the accuracy of our approach on simulated data where the
    ground truth is known.",
    title = "The Vitruvian Baby: Interactive Reformation of Fetal Ultrasound Data to a T-Position",
    author = "M\"{o}rth, Eric and Raidou, Renata Georgia and Viola, Ivan and Smit, Noeska",
    year = "2019",
    publisher = "The Eurographics Association",
    ISSN = "2070-5786",
    ISBN = "978-3-03868-081-9",
    DOI = "10.2312/vcbm.20191245",
    pdf = "pdfs/VCBM_TheVitruvianBaby_ShortPaper_201-205.pdf",
    images = "images/vcbmVitruvianBaby.jpg",
    thumbnails = "images/vcbmVitruvianBaby.jpg",
    url = "",
    project = {VIDI}
    }
    @MISC {Moerth-2019-EUROVIS,
    booktitle = "EuroVis 2019 - Posters",
    editor = "Madeiras Pereira, João and Raidou, Renata Georgia",
    title = "The Vitruvian Baby: Interactive Reformation of Fetal Ultrasound Data to a T-Position",
    author = "M\"{o}rth, Eric and Raidou, Renata Georgia and Smit, Noeska and Viola, Ivan",
    year = "2019",
    abstract = "Three dimensional (3D) ultrasound is commonly used in prenatal screening, because it provides insight into the shape as well
    as the organs of the fetus. Currently, gynecologists take standardized measurements of the fetus and check for abnormalities by
    analyzing the data in a 2D slice view. The fetal pose may complicate taking precise measurements in such a view. Analyzing the
    data in a 3D view would enable the viewer to better distinguish between artefacts and representative information. Standardization
    in medical imaging techniques aims to make the data comparable between different investigations and patients. It is
    already used in different medical applications for example in magnetic resonance imaging (MRI). With this work, we introduce
    a novel approach to provide a standardization method for 3D ultrasound screenings of fetuses. The approach consists of six
    steps and is called “The Vitruvian Baby”. The input is the data of the 3D ultrasound screening of a fetus and the output shows
    the fetus in a standardized T-pose in which measurements can be made. The precision of standardized measurements compared
to the gold standard is 91.08% for the finger-to-finger span and 94.05% for the head-to-toe measurement.",
    publisher = "The Eurographics Association",
    howpublished = "Poster presented at the EuroVis conference 2019",
    ISBN = "978-3-03868-088-8",
    DOI = "10.2312/eurp.20191147",
    pdf = "pdfs/EUROVIS_TheVitruvianBaby_Poster.pdf",
    images = "images/EUROVISTheVitruvianBabyPoster.png",
    thumbnails = "images/EUROVISTheVitruvianBabyPoster.png",
    url = ""
    }