Tenure-Track Position in Medical Visualization

This grant supports a tenure-track position funded by the Bergen Research Foundation and the University of Bergen. Noeska Smit from the Netherlands started in her position as associate professor at the Department of Informatics, UiB, on January 1, 2017. Smit's specialty is medical visualization. Her recruitment is part of a joint prioritization of the field of medical visualization at Helse Bergen and the University of Bergen. This shared priority has also resulted in the establishment of a joint center for medical imaging and visualization located at Helse Bergen. The center is likewise supported financially by the Bergen Research Foundation.

The visualization group at the Department of Informatics has a leading position in the visualization of complex data from numerous application domains and is actively engaged in research in the emerging field of computational medicine. The recruitment of Smit is a strategic move toward this field and strengthens the fundamental research that will be of crucial importance for the newly established medical imaging and visualization centre, MMIV.

Related links:

Publications

2022

    [PDF] [Bibtex]
    @phdthesis{moerth2022thesis,
    title = {Scaling Up Medical Visualization: Multi-Modal, Multi-Patient, and Multi-Audience Approaches for Medical Data Exploration, Analysis and Communication},
    author = {Mörth, Eric},
    year = 2022,
    month = {September},
    isbn = 9788230862193,
    url = {https://hdl.handle.net/11250/3014336},
    school = {Department of Informatics, University of Bergen, Norway},
    abstract = {
    Medical visualization is one of the most application-oriented areas of visualization research. Close collaboration with medical experts is essential for interpreting medical imaging data and creating meaningful visualization techniques and visualization applications. Cancer is one of the most common causes of death, and with increasing average age in developed countries, gynecological malignancy case numbers are rising. Modern imaging techniques are an essential tool in assessing tumors and produce an increasing number of imaging data radiologists must interpret. Besides the number of imaging modalities, the number of patients is also rising, leading to visualization solutions that must be scaled up to address the rising complexity of multi-modal and multi-patient data. Furthermore, medical visualization is not only targeted toward medical professionals but also has the goal of informing patients, relatives, and the public about the risks of certain diseases and potential treatments. Therefore, we identify the need to scale medical visualization solutions to cope with multi-audience data.
    This thesis addresses the scaling of these dimensions in different contributions we made. First, we present our techniques to scale medical visualizations in multiple modalities. We introduced a visualization technique using small multiples to display the data of multiple modalities within one imaging slice. This allows radiologists to explore the data efficiently without having several juxtaposed windows. In the next step, we developed an analysis platform using radiomic tumor profiling on multiple imaging modalities to analyze cohort data and to find new imaging biomarkers. Imaging biomarkers are indicators based on imaging data that predict clinical outcome related variables. Radiomic tumor profiling is a technique that generates potential imaging biomarkers based on first and second-order statistical measurements. The application allows medical experts to analyze the multi-parametric imaging data to find potential correlations between clinical parameters and the radiomic tumor profiling data. This approach scales up in two dimensions, multi-modal and multi-patient. In a later version, we added features to scale the multi-audience dimension by making our application applicable to cervical and prostate cancer data and the endometrial cancer data the application was designed for. In a subsequent contribution, we focus on tumor data on another scale and enable the analysis of tumor sub-parts by using the multi-modal imaging data in a hierarchical clustering approach. Our application finds potentially interesting regions that could inform future treatment decisions. In another contribution, the digital probing interaction, we focus on multi-patient data. The imaging data of multiple patients can be compared to find interesting tumor patterns potentially linked to the aggressiveness of the tumors. 
Lastly, we scale the multi-audience dimension with our similarity visualization applicable to endometrial cancer research, neurological cancer imaging research, and machine learning research on the automatic segmentation of tumor data. In contrast to the previously highlighted contributions, our last contribution, ScrollyVis, focuses primarily on multi-audience communication. We enable the creation of dynamic scientific scrollytelling experiences for a specific or general audience. Such stories can be used for specific use cases such as patient-doctor communication or communicating scientific results via stories targeting the general audience in a digital museum exhibition.
    Our proposed applications and interaction techniques have been demonstrated in application use cases and evaluated with domain experts and focus groups. As a result, we brought some of our contributions to usage in practice at other research institutes. We want to evaluate their impact on other scientific fields and the general public in future work.
    },
    pdf = {pdfs/Moerth-PhD-Thesis-2022.pdf},
    images = {images/Moerth-PhD-Thesis-2022.PNG},
    thumbnails = {images/Moerth-PhD-Thesis-2022.PNG},
    project = {ttmedvis}
    }
    [PDF] [DOI] [Bibtex]
    @inproceedings{EichnerMoerth2022MuSIC,
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    editor = {Renata G. Raidou and Björn Sommer and Torsten W. Kuhlen and Michael Krone and Thomas Schultz and Hsiang-Yun Wu},
    title = {{MuSIC: Multi-Sequential Interactive Co-Registration for Cancer Imaging Data based on Segmentation Masks}},
    author = {Eichner, Tanja* and Mörth, Eric* and Wagner-Larsen, Kari S. and Lura, Njål and Haldorsen, Ingfrid S. and Gröller, Eduard and Bruckner, Stefan and Smit, Noeska N.},
    note = {Best Paper Honorable Mention at VCBM2022},
    project = {ttmedvis},
    year = {2022},
    abstract = {In gynecologic cancer imaging, multiple magnetic resonance imaging (MRI) sequences are acquired per patient to reveal different tissue characteristics. However, after image acquisition, the anatomical structures can be misaligned in the various sequences due to changing patient location in the scanner and organ movements. The co-registration process aims to align the sequences to allow for multi-sequential tumor imaging analysis. However, automatic co-registration often leads to unsatisfying results. To address this problem, we propose the web-based application MuSIC (Multi-Sequential Interactive Co-registration). The approach allows medical experts to co-register multiple sequences simultaneously based on a pre-defined segmentation mask generated for one of the sequences. Our contributions lie in our proposed workflow. First, a shape matching algorithm based on dual annealing searches for the tumor position in each sequence. The user can then interactively adapt the proposed segmentation positions if needed. During this procedure, we include a multi-modal magic lens visualization for visual quality assessment. Then, we register the volumes based on the segmentation mask positions. We allow for both rigid and deformable registration. Finally, we conducted a usability analysis with seven medical and machine learning experts to verify the utility of our approach. Our participants highly appreciate the multi-sequential setup and see themselves using MuSIC in the future.
    },
    publisher = {The Eurographics Association},
    ISSN = {2070-5786},
    ISBN = {978-3-03868-177-9},
    DOI = {10.2312/vcbm.20221190},
    pdf = {pdfs/EichnerMoerth_2022.pdf},
    thumbnails = {images/EichnerMoerth_2022.PNG},
    images = {images/EichnerMoerth_2022.PNG},
    }
    [PDF] [DOI] [Bibtex]
    @inproceedings{Kleinau2022Tornado,
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    editor = {Renata G. Raidou and Björn Sommer and Torsten W. Kuhlen and Michael Krone and Thomas Schultz and Hsiang-Yun Wu},
    title = {{Is there a Tornado in Alex's Blood Flow? A Case Study for Narrative Medical Visualization}},
    project = {ttmedvis},
    author = {Kleinau, Anna and Stupak, Evgenia and Mörth, Eric and Garrison, Laura A. and Mittenentzwei, Sarah and Smit, Noeska N. and Lawonn, Kai and Bruckner, Stefan and Gutberlet, Matthias and Preim, Bernhard and Meuschke, Monique},
    year = {2022},
    abstract = {Narrative visualization advantageously combines storytelling with new media formats and techniques, like interactivity, to create improved learning experiences. In medicine, it has the potential to improve patient understanding of diagnostic procedures and treatment options, promote confidence, reduce anxiety, and support informed decision-making. However, limited scientific research has been conducted regarding the use of narrative visualization in medicine. To explore the value of narrative visualization in this domain, we introduce a data-driven story to inform a broad audience about the usage of measured blood flow data to diagnose and treat cardiovascular diseases. The focus of the story is on blood flow vortices in the aorta, with which imaging technique they are examined, and why they can be dangerous. In an interdisciplinary team, we define the main contents of the story and the resulting design questions. We sketch the iterative design process and implement the story based on two genres. In a between-subject study, we evaluate the suitability and understandability of the story and the influence of different navigation concepts on user experience. Finally, we discuss reusable concepts for further narrative medical visualization projects.},
    publisher = {The Eurographics Association},
    ISSN = {2070-5786},
    ISBN = {978-3-03868-177-9},
    DOI = {10.2312/vcbm.20221183},
    pdf = {pdfs/Kleinau_2022.pdf},
    thumbnails = {images/Kleinau_2022.PNG},
    images = {images/Kleinau_2022.PNG},
    }
    [PDF] [DOI] [Bibtex]
    @article{Moerth2022ScrollyVis,
    author = {Mörth, Eric and Bruckner, Stefan and Smit, Noeska N.},
    title = {ScrollyVis: Interactive visual authoring of guided dynamic narratives for scientific scrollytelling},
    journal = {IEEE Transactions on Visualization and Computer Graphics},
    year = {2022},
    abstract = {Visual stories are an effective and powerful tool to convey specific information to a diverse public. Scrollytelling is a recent visual storytelling technique extensively used on the web, where content appears or changes as users scroll up or down a page. By employing the familiar gesture of scrolling as its primary interaction mechanism, it provides users with a sense of control, exploration and discoverability while still offering a simple and intuitive interface. In this paper, we present a novel approach for authoring, editing, and presenting data-driven scientific narratives using scrollytelling. Our method flexibly integrates common sources such as images, text, and video, but also supports more specialized visualization techniques such as interactive maps as well as scalar field and mesh data visualizations. We show that scrolling navigation can be used to traverse dynamic narratives and demonstrate how it can be combined with interactive parameter exploration. The resulting system consists of an extensible web-based authoring tool capable of exporting stand-alone stories that can be hosted on any web server. We demonstrate the power and utility of our approach with case studies from several diverse scientific fields and with a user study including 12 participants of diverse professional backgrounds. Furthermore, an expert in creating interactive articles assessed the usefulness of our approach and the quality of the created stories.},
    project = {ttmedvis},
    pdf = {pdfs/Moerth_2022_ScrollyVis.pdf},
    thumbnails = {images/Moerth_2022_ScrollyVis.png},
    images = {images/Moerth_2022_ScrollyVis.png},
    pages={1-12},
    doi={10.1109/TVCG.2022.3205769},
    }
    [PDF] [DOI] [VID] [Bibtex]
    @article{Moerth2022ICEVis,
    title = {ICEVis: Interactive Clustering Exploration for tumor sub-region analysis in multiparametric cancer imaging},
    author = {Mörth, Eric and Eichner, Tanja and Haldorsen, Ingfrid S. and Bruckner, Stefan and Smit, Noeska N.},
    year = 2022,
    journal = {Proceedings of the International Symposium on Visual Information Communication and Interaction (VINCI'22)},
    volume = {15},
    pages = {5},
    doi = {10.1145/3554944.3554958},
    abstract = {Tumor tissue characteristics derived from imaging data are gaining importance in clinical research. Tumor sub-regions may play a critical role in defining tumor types and may hold essential information about tumor aggressiveness. Depending on the tumor’s location within the body, such sub-regions can be easily identified and determined by physiology, but these sub-regions are not readily visible to others. Regions within a tumor are currently explored by comparing the image sequences and analyzing the tissue heterogeneity present. To improve the exploration of such tumor sub-regions, we propose a visual analytics tool called ICEVis. ICEVis supports the identification of tumor sub-regions and corresponding features combined with cluster visualizations highlighting cluster validity. It is often difficult to estimate the optimal number of clusters; we provide rich facilities to support this task, incorporating various statistical measures and interactive exploration of the results. We evaluated our tool with three clinical researchers to show the potential of our approach.
    },
    note = {Best Short Paper at VINCI 2022},
    images = "images/Moerth_2022_ICEVis.png",
    thumbnails = "images/Moerth_2022_ICEVis.png",
    pdf = {pdfs/Moerth_2022_ICEVis.pdf},
    vid = {vids/ICEVis.mp4},
    project = "ttmedvis",
    }
    [DOI] [YT] [Bibtex]
    @article{Sugathan2022Longitudinal,
    title = {Longitudinal visualization for exploratory analysis of multiple sclerosis lesions},
    author = {Sugathan, Sherin and Bartsch, Hauke and Riemer, Frank and Gr{\"u}ner, Renate and Lawonn, Kai and Smit, Noeska},
    year = 2022,
    journal = {Computers & Graphics},
    volume = 107,
    pages = {208--219},
    doi = {10.1016/j.cag.2022.07.023},
    issn = {0097-8493},
    url = {https://www.sciencedirect.com/science/article/pii/S0097849322001479},
    images = "images/Sugathan-2022-Longitudinal.PNG",
    thumbnails = "images/Sugathan-2022-Longitudinal.PNG",
    project = {ttmedvis},
    youtube = "https://youtu.be/uwcqSf1W-dc"
    }
    [DOI] [Bibtex]
    @article{VandenBossche2022Digital,
    title = {Digital body preservation: Technique and applications},
    author = {Vandenbossche, Vicky and Van de Velde, Joris and Avet, Stind and Willaert, Wouter and Soltvedt, Stian and Smit, Noeska and Audenaert, Emmanuel},
    year = 2022,
    journal = {Anatomical Sciences Education},
    volume = {15},
    number = {4},
    pages = {731--744},
    doi = {10.1002/ase.2199},
    url = {https://anatomypubs.onlinelibrary.wiley.com/doi/abs/10.1002/ase.2199},
    images = "images/VandenBossche-2022-Digital.PNG",
    thumbnails = "images/VandenBossche-2022-Digital.PNG",
    project = {ttmedvis}
    }
    [DOI] [Bibtex]
    @article{Wagner2022Interobserver,
    title = {Interobserver agreement and prognostic impact for {MRI}-based 2018 {FIGO} staging parameters in uterine cervical cancer},
    author = {Wagner-Larsen, Kari S and Lura, Nj{\aa}l and Salvesen, {\O}yvind and Halle, Mari Kylles{\o} and Forsse, David and Trovik, Jone and Smit, Noeska and Krakstad, Camilla and Haldorsen, Ingfrid S},
    year = 2022,
    journal = {European Radiology},
    publisher = {Springer},
    pages = {1--12},
    doi = {10.1007/s00330-022-08666-x},
    url = {https://link.springer.com/article/10.1007/s00330-022-08666-x},
    images = "images/Wagner-2022-Interobserver.PNG",
    thumbnails = "images/Wagner-2022-Interobserver.PNG",
    project = {ttmedvis}
    }
    [DOI] [Bibtex]
    @article{Hodneland2022Fully,
    title = {Fully Automatic Whole-Volume Tumor Segmentation in Cervical Cancer},
    author = {Hodneland, Erlend and Kaliyugarasan, Satheshkumar and Wagner-Larsen, Kari Str{\o}no and Lura, Nj{\aa}l and Andersen, Erling and Bartsch, Hauke and Smit, Noeska and Halle, Mari Kylles{\o} and Krakstad, Camilla and Lundervold, Alexander Selvikv{\aa}g and Haldorsen, Ingfrid S},
    year = 2022,
    journal = {Cancers},
    publisher = {MDPI},
    volume = 14,
    number = 10,
    pages = 2372,
    doi = {10.3390/cancers14102372},
    url = {https://pubmed.ncbi.nlm.nih.gov/35625977/},
    images = "images/Hodneland-2022-Fully.PNG",
    thumbnails = "images/Hodneland-2022-Fully.PNG",
    project = {ttmedvis}
    }

2021

    [PDF] [DOI] [Bibtex]
    @article{Gillmann-2021-Viewpoints,
    author = {C. Gillmann and N. N. Smit and E. Gr\"{o}ller and B. Preim and A. Vilanova and T. Wischgoll},
    journal = {IEEE Computer Graphics and Applications},
    title = {Ten Open Challenges in Medical Visualization},
    year = {2021},
    volume = {41},
    number = {05},
    issn = {1558-1756},
    pages = {7-15},
    keywords = {deep learning;uncertainty;data visualization;medical services;standardization;artificial intelligence;biomedical imaging},
    doi = {10.1109/MCG.2021.3094858},
    publisher = {IEEE Computer Society},
    address = {Los Alamitos, CA, USA},
    pdf = {pdfs/Gillmann-2021-Viewpoints.pdf},
    thumbnails = {images/Gillmann-2021-Viewpoints.png},
    images = {images/Gillmann-2021-Viewpoints.png},
    project = {ttmedvis},
    abstract = {The medical domain has been an inspiring application area in visualization research for many years already, but many open challenges remain. The driving forces of medical visualization research have been strengthened by novel developments, for example, in deep learning, the advent of affordable VR technology, and the need to provide medical visualizations for broader audiences. At IEEE VIS 2020, we hosted an Application Spotlight session to highlight recent medical visualization research topics. With this article, we provide the visualization community with ten such open challenges, primarily focused on challenges related to the visualization of medical imaging data. We first describe the unique nature of medical data in terms of data preparation, access, and standardization. Subsequently, we cover open visualization research challenges related to uncertainty, multimodal and multiscale approaches, and evaluation. Finally, we emphasize challenges related to users focusing on explainable AI, immersive visualization, P4 medicine, and narrative visualization.}
    }
    [PDF] [DOI] [Bibtex]
    @inproceedings{Hushagen-2021-VCBM,
    title = {The Role of Depth Perception in {XR} from a Neuroscience Perspective: A Primer and Survey},
    author = {Hushagen, Vetle and Tresselt, Gustav C. and Smit, Noeska N. and Specht, Karsten},
    year = 2021,
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    publisher = {The Eurographics Association},
    doi = {10.2312/vcbm.20211344},
    isbn = {978-3-03868-140-3},
    issn = {2070-5786},
    url = {https://diglib.eg.org/handle/10.2312/vcbm20211344},
    pdf = {pdfs/Hushagen-2021-VCBM.pdf},
    thumbnails = {images/Hushagen-2021-VCBM.png},
    images = {images/Hushagen-2021-VCBM.png},
    project = {ttmedvis},
    abstract = {Augmented and virtual reality (XR) are potentially powerful tools for enhancing the efficiency of interactive visualization of complex data in biology and medicine. The benefits of visualization of digital objects in XR mainly arise from enhanced depth perception due to the stereoscopic nature of XR head mounted devices. With the added depth dimension, XR is in a prime position to convey complex information and support tasks where 3D information is important. In order to inform the development of novel XR applications in the biology and medicine domain, we present a survey which reviews the neuroscientific basis underlying the immersive features of XR. To make this literature more accessible to the visualization community, we first describe the basics of the visual system, highlighting how visual features are combined to objects
    and processed in higher cortical areas with a special focus on depth vision. Based on state of the art findings in neuroscience literature related to depth perception, we provide several recommendations for developers and designers. Our aim is to aid development of XR applications and strengthen development of tools aimed at molecular visualization, medical education, and surgery, as well as inspire new application areas.}
    }
    [PDF] [DOI] [Bibtex]
    @inproceedings{Sugathan-2021-VCBM,
    title = {Interactive Multimodal Imaging Visualization for Multiple Sclerosis Lesion Analysis},
    author = {Sugathan, Sherin and Bartsch, Hauke and Riemer, Frank and Gr{\"u}ner, Renate and Lawonn, Kai and Smit, Noeska N},
    year = 2021,
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    publisher = {The Eurographics Association},
    doi = {10.2312/vcbm.20211346},
    isbn = {978-3-03868-140-3},
    issn = {2070-5786},
    url = {https://diglib.eg.org/handle/10.2312/vcbm20211346},
    pdf = {pdfs/Sugathan-2021-VCBM.pdf},
    thumbnails = {images/Sugathan-2021-VCBM.png},
    images = {images/Sugathan-2021-VCBM.png},
    project = {ttmedvis},
    abstract = {Multiple Sclerosis (MS) is a brain disease that is diagnosed and monitored extensively through MRI scans. One of the criteria is the appearance of so-called brain lesions. The lesions show up on MRI scans as regions with elevated or reduced contrast compared to the surrounding healthy tissue.
    Understanding the complex interplay of contrast, location and shape in images from multiple modalities from 2D MRI slices is challenging.
    Advanced visualization of appearance- and location-related features of lesions would help researchers in defining better disease characterization through MS research.
    Since a permanent cure is not possible in MS and medication-based disease modification is a common treatment path, providing better visualizations would strengthen research which investigates the effect of white matter lesions. Here we present an advanced visualization solution that supports analysis from multiple imaging modalities acquired in a clinical routine examination. The solution holds potential for enabling researchers to have a more intuitive perception of lesion features. As an example for enhancing the analytic possibilities, we demonstrate the benefits of lesion projection using both Diffusion Tensor Imaging (DTI) and gradient-based techniques. This approach enables users to assess brain structures across individuals as the atlas-based analysis provides 3D anchoring and labeling of regions across a series of brain scans from the same participant and across different participants. The projections on the brain surface also enable researchers to conduct detailed studies on the relationship between cognitive disabilities and location of lesions. This allows researchers to correlate lesions to Brodmann areas and related brain functions.
    We realize the solutions in a prototype application that supports both DTI and structural data. A qualitative evaluation demonstrates that our approach supports MS researchers by providing new opportunities for MS research.}
    }
    [PDF] [Bibtex]
    @inproceedings{Garrison-2021-EPP,
    author = {Laura Garrison and Monique Meuschke and Jennifer Fairman and Noeska Smit and Bernhard Preim and Stefan Bruckner},
    title = {An Exploration of Practice and Preferences for the Visual Communication of Biomedical Processes},
    booktitle = {Proceedings of VCBM},
    year = {2021},
    abstract = {The visual communication of biomedical processes draws from diverse techniques in both visualization and biomedical illustration. However, matching these techniques to their intended audience often relies on practice-based heuristics or narrow-scope evaluations. We present an exploratory study of the criteria that audiences use when evaluating a biomedical process visualization targeted for communication. Designed over a series of expert interviews and focus groups, our study focuses on common communication scenarios of five well-known biomedical processes and their standard visual representations. We framed these scenarios in a survey with participant expertise spanning from minimal to expert knowledge of a given topic. Our results show frequent overlap in abstraction preferences between expert and non-expert audiences, with similar prioritization of clarity and the ability of an asset to meet a given communication objective. We also found that some illustrative conventions are not as clear as we thought, e.g., glows have broadly ambiguous meaning, while other approaches were unexpectedly preferred, e.g., biomedical illustrations in place of data-driven visualizations. Our findings suggest numerous opportunities for the continued convergence of visualization and biomedical illustration techniques for targeted visualization design.
    },
    note = {Best Paper Honorable Mention at VCBM 2021},
    project = {VIDI,ttmedvis},
    pdf = {pdfs/Garrison-2021-EPP.pdf},
    thumbnails = {images/Garrison-2021-EPP.png},
    images = {images/Garrison-2021-EPP.jpg},
    url = {https://github.com/lauragarrison87/Biomedical_Process_Vis},
    keywords = {biomedical illustration, visual communication, survey},
    }
    [DOI] [Bibtex]
    @incollection{Smit-2021-COMULIS,
    author = {Smit, Noeska and Bühler, Katja and Vilanova, Anna and Falk, Martin},
    title = {Visualisation for correlative multimodal imaging},
    booktitle = {Imaging Modalities for Biological and Preclinical Research: A Compendium, Volume 2},
    publisher = {IOP Publishing},
    year = {2021},
    series = {2053-2563},
    type = {Book Chapter},
    pages = {III.4.e-1 to III.4.e-10},
    url = {http://dx.doi.org/10.1088/978-0-7503-3747-2ch28},
    doi = {10.1088/978-0-7503-3747-2ch28},
    isbn = {978-0-7503-3747-2},
    thumbnails = "images/Smit-2021-COMULIS.PNG",
    images = "images/Smit-2021-COMULIS.PNG",
    project = "ttmedvis",
    abstract = {The field of visualisation deals with finding appropriate visual representations of data so people can effectively carry out tasks related to data exploration, analysis, or presentation using the power of the human visual perceptual system. In the context of biomedical imaging data, interactive visualisation techniques can be employed, for example, to visually explore data, as image processing quality assurance, or in publications to communicate findings. When dealing with correlative imaging, challenges arise in how to effectively convey the information from multiple sources. In particular, the information density leads to the need for a critical reflection on the visual design with respect to which parts of the data are important to show and at what level of importance they should be visualised. In this chapter, we describe several approaches to interactive imaging data visualisation in general, highlight several strategies for visualising correlative multimodal imaging data, and provide examples and practical recommendations.}
    }

2020

    [PDF] [DOI] [YT] [Bibtex]
    @article{RadEx,
    author = {M\"{o}rth, E. and Wagner-Larsen, K. and Hodneland, E. and Krakstad, C. and Haldorsen, I. S. and Bruckner, S. and Smit, N. N.},
    title = {RadEx: Integrated Visual Exploration of Multiparametric Studies for Radiomic Tumor Profiling},
    journal = {Computer Graphics Forum},
    volume = {39},
    number = {7},
    year = {2020},
    pages = {611--622},
    abstract = {Better understanding of the complex processes driving tumor growth and metastases is critical for developing targeted treatment strategies in cancer. Radiomics extracts large amounts of features from medical images which enables radiomic tumor profiling in combination with clinical markers. However, analyzing complex imaging data in combination with clinical data is not trivial and supporting tools aiding in these exploratory analyses are presently missing. In this paper, we present an approach that aims to enable the analysis of multiparametric medical imaging data in combination with numerical, ordinal, and categorical clinical parameters to validate established and unravel novel biomarkers. We propose a hybrid approach where dimensionality reduction to a single axis is combined with multiple linked views allowing clinical experts to formulate hypotheses based on all available imaging data and clinical parameters. This may help to reveal novel tumor characteristics in relation to molecular targets for treatment, thus providing better tools for enabling more personalized targeted treatment strategies. To confirm the utility of our approach, we closely collaborate with experts from the field of gynecological cancer imaging and conducted an evaluation with six experts in this field.},
    pdf = "pdfs/Moerth-2020-RadEx.pdf",
    images = "images/Moerth-2020-RadEx.jpg",
    youtube = "https://youtu.be/zwtDzwwX790",
    thumbnails = "images/Moerth-2020-RadEx-thumb.jpg",
    project = "ttmedvis",
    doi = {10.1111/cgf.14172}
    }
    [PDF] [DOI] [YT] [Bibtex]
    @inproceedings{Moerth-2020-CGI,
    author = "M\"{o}rth, E. and Haldorsen, I.S. and Bruckner, S. and Smit, N.N.",
    title = "ParaGlyder: Probe-driven Interactive Visual Analysis for Multiparametric Medical Imaging Data",
    booktitle = "Proceedings of Computer Graphics International",
    pages = "351--363",
    year = "2020",
    abstract = "Multiparametric medical imaging describes approaches that include multiple imaging sequences acquired within the same imaging examination, as opposed to one single imaging sequence or imaging from multiple imaging modalities. Multiparametric imaging in cancer has been shown to be useful for tumor detection and may also depict functional tumor characteristics relevant for clinical phenotypes. However, when confronted with datasets consisting of multiple values per voxel, traditional reading of the imaging series fails to capture complicated patterns. Those patterns of potentially important imaging properties of the parameter space may be critical for the analysis. Standard approaches, such as transfer functions and juxtapositioned visualizations, fail to convey the shape of the multiparametric parameter distribution in sufficient detail. For these reasons, in this paper we present an approach that aims to enable the exploration and analysis of such multiparametric studies using an interactive visual analysis application to remedy the trade-offs between details in the value domain and in spatial resolution. Interactive probing within or across subjects allows for a digital biopsy that is able to uncover multiparametric tissue properties. This may aid in the discrimination between healthy and cancerous tissue, unravel radiomic tissue features that could be linked to targetable pathogenic mechanisms, and potentially highlight metastases that evolved from the primary tumor. We conducted an evaluation with eleven domain experts from the field of gynecological cancer imaging, neurological imaging, and machine learning research to confirm the utility of our approach.",
    note = "The final authenticated version is available online at https://doi.org/10.1007/978-3-030-61864-3_29",
    pdf = "pdfs/Moerth-2020-CGI-ParaGlyder.pdf",
    images = "images/Moerth-2020-ParaGlyder.PNG",
    thumbnails = "images/Moerth-2020-ParaGlyder-thumb.png",
    youtube = "https://youtu.be/S_M4CWXKz0U",
    series = "LNCS",
    publisher = "Springer",
    project = "ttmedvis",
    doi = "10.1007/978-3-030-61864-3_29"
    }
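    The "digital biopsy" idea from the ParaGlyder abstract — probing a spherical region of a multiparametric volume and collecting the per-parameter value distributions of the covered voxels — can be illustrated with a toy sketch. This is not the authors' implementation; the data layout (a dict of voxel coordinates to parameter tuples) and all names are hypothetical.

    ```python
    import math

    def probe_distributions(volume, center, radius):
        """Collect per-parameter value distributions inside a spherical probe.

        volume: dict mapping (x, y, z) voxel coordinates to a tuple of
                parameter values (one value per imaging sequence).
        center: (x, y, z) probe center; radius: probe radius in voxel units.
        Returns a list of sorted value lists, one list per parameter.
        """
        samples = []
        for coord, values in volume.items():
            if math.dist(coord, center) <= radius:
                samples.append(values)
        if not samples:
            return []
        n_params = len(samples[0])
        return [sorted(v[i] for v in samples) for i in range(n_params)]

    # Toy 2-parameter "volume": the first parameter differs inside (x < 2)
    # versus outside a synthetic lesion; the second varies smoothly with x.
    volume = {(x, y, z): (1.0 if x < 2 else 5.0, 0.1 * x)
              for x in range(4) for y in range(4) for z in range(4)}
    dists = probe_distributions(volume, center=(0, 0, 0), radius=1.5)
    ```

    Comparing such per-parameter distributions between two probe placements (e.g., suspected lesion versus healthy tissue) is the kind of within- and across-subject comparison the abstract describes.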
    [PDF] [DOI] [Bibtex]
    @article{Solteszova-2019-MLT,
    author = {Solteszova, V. and Smit, N. N. and Stoppel, S. and Gr\"{u}ner, R. and Bruckner, S.},
    title = {Memento: Localized Time-Warping for Spatio-Temporal Selection},
    journal = {Computer Graphics Forum},
    volume = {39},
    number = {1},
    pages = {231--243},
    year = {2020},
    keywords = {interaction, temporal data, visualization, spatio-temporal projection},
    images = "images/Solteszova-2019-MLT.jpg",
    thumbnails = "images/Solteszova-2019-MLT-1.jpg",
    pdf = "pdfs/Solteszova-2019-MLT.pdf",
    doi = {10.1111/cgf.13763},
    abstract = {Interaction techniques for temporal data are often focused on affecting the spatial aspects of the data, for instance through the use of transfer functions, camera navigation or clipping planes. However, the temporal aspect of the data interaction is often neglected. The temporal component is either visualized as individual time steps, an animation or a static summary over the temporal domain. When dealing with streaming data, these techniques are unable to cope with the task of re-viewing an interesting local spatio-temporal event, while continuing to observe the rest of the feed. We propose a novel technique that allows users to interactively specify areas of interest in the spatio-temporal domain. By employing a time-warp function, we are able to slow down time, freeze time or even travel back in time, around spatio-temporal events of interest. The combination of such a (pre-defined) time-warp function and brushing directly in the data to select regions of interest allows for a detailed review of temporally and spatially localized events, while maintaining an overview of the global spatio-temporal data. We demonstrate the utility of our technique with several usage scenarios.},
    project = "MetaVis,ttmedvis,VIDI"
    }
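    The time-warp function at the heart of the Memento abstract — slowing down, freezing, or travelling back in time around events of interest — can be sketched as a piecewise-linear mapping from wall-clock stream time to playback time. This is a minimal illustration, not the paper's implementation; the segment-list representation is an assumption.

    ```python
    def make_time_warp(segments):
        """Build a piecewise-linear time-warp: wall-clock time -> playback time.

        segments: list of (duration, rate) pairs. rate 1.0 plays in real time,
        0 < rate < 1 slows down, rate 0.0 freezes, and rate < 0 travels back
        in time. After the last segment, playback resumes in real time.
        """
        def warp(t):
            playback = 0.0
            for duration, rate in segments:
                step = min(t, duration)
                playback += step * rate
                t -= step
                if t <= 0:
                    return playback
            return playback + t  # past the last segment: real-time playback
        return warp

    # Play 2 s normally, freeze for 3 s, rewind for 1 s, then resume.
    warp = make_time_warp([(2.0, 1.0), (3.0, 0.0), (1.0, -1.0)])
    ```

    Brushing a spatio-temporal region of interest would then amount to applying such a warp only to the selected region while the rest of the feed keeps playing in real time.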

2019

    [DOI] [Bibtex]
    @article{kraima2019role,
    title={The role of the longitudinal muscle in the anal sphincter complex: Implications for the Intersphincteric Plane in Low Rectal Cancer Surgery?},
    author={Kraima, Anne C and West, Nicholas P and Roberts, Nicholas and Magee, Derek R and Smit, Noeska N and van de Velde, Cornelis JH and DeRuiter, Marco C and Rutten, Harm J and Quirke, Philip},
    journal={Clinical Anatomy},
    year={2019},
    doi="10.1002/ca.23444",
    url = "https://onlinelibrary.wiley.com/doi/full/10.1002/ca.23444",
    publisher={Wiley Online Library},
    project = "ttmedvis",
    images = {images/kraima-2019-role.png},
    thumbnails = {images/kraima-2019-role.png},
    abstract = {Intersphincteric resection (ISR) enables radical sphincter-preserving surgery in a subset of low rectal tumors impinging on the anal sphincter complex (ASC). Excellent anatomical knowledge is essential for optimal ISR. This study describes the role of the longitudinal muscle (LM) in the ASC and implications for ISR and other low rectal and anal pathologies. Six human adult en bloc cadaveric specimens (three males, three females) were obtained from the University of Leeds GIFT Research Tissue Programme. Paraffin-embedded mega blocks containing the ASC were produced and serially sectioned at 250 µm intervals. Whole mount microscopic sections were histologically stained and digitally scanned. The intersphincteric plane was shown to be potentially very variable. In some places adipose tissue is located between the external anal sphincter (EAS) and internal anal sphincter (IAS), whereas in others the LM interdigitates to obliterate the plane. Elsewhere the LM is (partly) absent with the intersphincteric plane lying on the IAS. The LM gave rise to the formation of the submucosae and corrugator ani muscles by penetrating the IAS and EAS. In four of six specimens, striated muscle fibers from the EAS curled around the distal IAS reaching the anal submucosa. The ASC formed a complex structure, varying between individuals with an inconstant LM affecting the potential location of the intersphincteric plane as well as a high degree of intermingling striated and smooth muscle fibers potentially further disrupting the plane. The complexity of identifying the correct pathological staging of low rectal cancer is also demonstrated.}
    }
    [DOI] [Bibtex]
    @incollection{Smit-2019-AtlasVis,
    title={Towards Advanced Interactive Visualization for Virtual Atlases},
    author={Smit, Noeska and Bruckner, Stefan},
    booktitle={Biomedical Visualisation},
    pages={85--96},
    year={2019},
    publisher={Springer},
    doi = {10.1007/978-3-030-19385-0_6},
    url = "http://noeskasmit.com/wp-content/uploads/2019/07/Smit_AtlasVis_2019.pdf",
    images = "images/Smit-2019-AtlasVis.png",
    thumbnails = "images/Smit-2019-AtlasVis.png",
    abstract = "An atlas is generally defined as a bound collection of tables, charts or illustrations describing a phenomenon. In an anatomical atlas for example, a collection of representative illustrations and text describes anatomy for the purpose of communicating anatomical knowledge. The atlas serves as reference frame for comparing and integrating data from different sources by spatially or semantically relating collections of drawings, imaging data, and/or text. In the field of medical image processing, atlas information is often constructed from a collection of regions of interest, which are based on medical images that are annotated by domain experts. Such an atlas may be employed for example for automatic segmentation of medical imaging data. The combination of interactive visualization techniques with atlas information opens up new possibilities for content creation, curation, and navigation in virtual atlases. With interactive visualization of atlas information, students are able to inspect and explore anatomical atlases in ways that were not possible with the traditional method of presenting anatomical atlases in book format, such as viewing the illustrations from other viewpoints. With advanced interaction techniques, it becomes possible to query the data that forms the basis for the atlas, thus empowering researchers to access a wealth of information in new ways. So far, atlas-based visualization has been employed mainly for medical education, as well as biological research. In this survey, we provide an overview of current digital biomedical atlas tasks and applications and summarize relevant visualization techniques. We discuss recent approaches for providing next-generation visual interfaces to navigate atlas data that go beyond common text-based search and hierarchical lists. Finally, we reflect on open challenges and opportunities for the next steps in interactive atlas visualization.",
    project = "ttmedvis,MetaVis,VIDI"
    }
    [DOI] [Bibtex]
    @article{Meuschke-2019-EvalViz,
    title = {EvalViz--Surface Visualization Evaluation Wizard for Depth and Shape Perception Tasks},
    author = {Meuschke, Monique and Smit, Noeska N and Lichtenberg, Nils and Preim, Bernhard and Lawonn, Kai},
    journal = {Computers \& Graphics},
    year = {2019},
    publisher = {Elsevier},
    number = "1",
    volume = "82",
    DOI = {10.1016/j.cag.2019.05.022},
    images = "images/Meuschke_EvalViz_2019.png",
    thumbnails = "images/Meuschke_EvalViz_2019.png",
    abstract = "User studies are indispensable for visualization application papers in order to assess the value and limitations of the presented approach. Important aspects are how well depth and shape information can be perceived, as coding of these aspects is essential to enable an understandable representation of complex 3D data. In practice, there is usually little time to perform such studies, and the establishment and conduction of user studies can be labour-intensive. In addition, it can be difficult to reach enough participants to obtain expressive results regarding the quality of different visualization techniques.
    In this paper, we propose a framework that allows visualization researchers to quickly create task-based user studies on depth and shape perception for different surface visualizations and perform the resulting tasks via a web interface. With our approach, the effort for generating user studies is reduced and at the same time the web-based component allows researchers to attract more participants to their study. We demonstrate our framework by applying shape and depth evaluation tasks to visualizations of various surface representations used in many technical and biomedical applications.",
    project = "ttmedvis"
    }
    [PDF] [DOI] [Bibtex]
    @inproceedings {Smit-2019-DBP,
    booktitle = {Eurographics 2019 - Dirk Bartz Prize},
    editor = {Bruckner, Stefan and Oeltze-Jafra, Steffen},
    title = {{Model-based Visualization for Medical Education and Training}},
    author = {Smit, Noeska and Lawonn, Kai and Kraima, Annelot and deRuiter, Marco and Bruckner, Stefan and Eisemann, Elmar and Vilanova, Anna},
    year = {2019},
    publisher = {The Eurographics Association},
    ISSN = {1017-4656},
    DOI = {10.2312/egm.20191033},
    pdf = "pdfs/Smit_DBPrize_2019.pdf",
    images = "images/Smit_DBPrize_2019.png",
    thumbnails = "images/Smit_DBPrize_2019.png",
    abstract = "Anatomy, or the study of the structure of the human body, is an essential component of medical education. Certain parts of human anatomy are considered to be more complex to understand than others, due to a multitude of closely related structures. Furthermore, there are many potential variations in anatomy, e.g., different topologies of vessels, and knowledge of these variations is critical for many in medical practice.
    Some aspects of individual anatomy, such as the autonomic nerves, are not visible in individuals through medical imaging techniques or even during surgery, placing these nerves at risk for damage.
    3D models and interactive visualization techniques can be used to improve understanding of this complex anatomy, in combination with traditional medical education paradigms.
    We present a framework incorporating several advanced medical visualization techniques and applications for teaching and training purposes, which is the result of an interdisciplinary project.
    In contrast to previous approaches which focus on general anatomy visualization or direct visualization of medical imaging data, we employ model-based techniques to represent variational anatomy, as well as anatomy not visible from imaging. Our framework covers the complete spectrum including general anatomy, anatomical variations, and anatomy in individual patients.
    Applications within our framework were evaluated positively with medical users, and our educational tool for general anatomy is in use in a Massive Open Online Course (MOOC) on anatomy, which had over 17000 participants worldwide in the first run.",
    project = "ttmedvis,VIDI"
    }

2018

    [PDF] [DOI] [YT] [Bibtex]
    @INPROCEEDINGS {Meuschke2018VCBM,
    author = "Monique Meuschke and Noeska N. Smit and Nils Lichtenberg and Bernhard Preim and Kai Lawonn",
    title = "Automatic Generation of Web-Based User Studies to Evaluate Depth Perception in Vascular Surface Visualizations",
    booktitle = "Proceedings of VCBM 2018",
    year = "2018",
    editor = "Anna Puig Puig and Thomas Schultz and Anna Vilanova and Ingrid Hotz and Barbora Kozlikova and Pere-Pau Vázquez",
    pages = "033--044",
    address = "Granada, Spain",
    publisher = "Eurographics Association",
    abstract = "User studies are often required in biomedical visualization application papers in order to provide evidence for the utility of the presented approach. An important aspect is how well depth information can be perceived, as depth encoding is important to enable an understandable representation of complex data. Unfortunately, in practice there is often little time available to perform such studies, and setting up and conducting user studies may be labor-intensive. In addition, it can be challenging to reach enough participants to support the contribution claims of the paper. In this paper, we propose a system that allows biomedical visualization researchers to quickly generate perceptual task-based user studies for novel surface visualizations, and to perform the resulting experiment via a web interface. This approach helps to reduce effort in the setup of user studies themselves, and at the same time leverages a web-based approach that can help researchers attract more participants to their study. We demonstrate our system using the specific application of depth judgment tasks to evaluate vascular surface visualizations, since there is a lot of recent interest in this area. However, the system is also generally applicable for conducting other task-based user studies in biomedical visualization.",
    pdf = "pdfs/meuschke2018VCBM.pdf",
    images = "images/vcbm2018.png",
    thumbnails = "images/vcbm2018.png",
    youtube = "https://www.youtube.com/watch?v=8lns8GGpPJI",
    crossref = "VCBM-proc",
    doi = "10.2312/vcbm.20181227",
    project = "ttmedvis"
    }
    [PDF] [YT] [Bibtex]
    @ARTICLE {lichtenbergsmithansenlawonn2018,
    author = "Nils Lichtenberg and Noeska Smit and Christian Hansen and Kai Lawonn",
    title = "Real-time field aligned stripe patterns",
    journal = "Computers \& Graphics",
    year = "2018",
    volume = "74",
    pages = "137-149",
    month = "aug",
    abstract = "In this paper, we present a parameterization technique that can be applied to surface meshes in real-time without time-consuming preprocessing steps. The parameterization is suitable for the display of (un-)oriented patterns and texture patches, and to sample a surface in a periodic fashion. The method is inspired by existing work that solves a global optimization problem to generate a continuous stripe pattern on the surface, from which texture coordinates can be derived. We propose a local optimization approach that is suitable for parallel execution on the GPU, which drastically reduces computation time. With this, we achieve on-the-fly texturing of 3D, medium-sized (up to 70k vertices) surface meshes. The algorithm takes a tangent vector field as input and aligns the texture coordinates to it. Our technique achieves real-time parameterization of the surface meshes by employing a parallelizable local search algorithm that converges to a local minimum in a few iterations. The calculation in real-time allows for live parameter updates and determination of varying texture coordinates. Furthermore, the method can handle non-manifold meshes. The technique is useful in various applications, e.g., biomedical visualization and flow visualization. We highlight our method's potential by providing usage scenarios for several applications.",
    pdf = "pdfs/lichtenberg_2018.pdf",
    images = "images/Selection_384.png",
    thumbnails = "images/1-s2.0-S0097849318300591-fx1_lrg.jpg",
    youtube = "https://www.youtube.com/watch?v=7CpkHy8KPK8",
    project = "ttmedvis"
    }
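    Once texture coordinates aligned to a tangent field exist, sampling a periodic stripe pattern from them is straightforward: threshold the fractional part of the scaled coordinate. The toy sketch below illustrates only that final sampling step for a constant 2D field (the paper's contribution — the GPU-parallel local optimization that produces field-aligned coordinates on a mesh — is not reproduced here); all names and parameters are hypothetical.

    ```python
    import math

    def stripe_intensity(u, frequency=10.0, duty=0.5):
        """Sample a periodic stripe pattern from a 1D texture coordinate u.

        frequency controls the stripe count per unit of u; duty is the
        fraction of each period that is "on". Returns 1.0 inside a stripe,
        0.0 between stripes.
        """
        phase = (u * frequency) % 1.0  # fractional part of scaled coordinate
        return 1.0 if phase < duty else 0.0

    def stripes_along_field(points, field, frequency=10.0):
        """For a constant tangent direction, the aligned coordinate u is
        simply each point's projection onto the (normalized) field."""
        fx, fy = field
        norm = math.hypot(fx, fy)
        fx, fy = fx / norm, fy / norm
        return [stripe_intensity(px * fx + py * fy, frequency)
                for px, py in points]

    pattern = stripes_along_field([(0.0, 0.0), (0.02, 0.0), (0.07, 0.0)],
                                  field=(1.0, 0.0), frequency=10.0)
    ```

    On a real mesh the coordinate u varies per vertex and must follow a non-constant field, which is exactly the local optimization problem the paper solves in real time.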