Noeska Smit

Professor

Medical Visualization

Head of Team Smit

**on leave until Oct/Nov 2023**

I’m a Professor in Medical Visualization (a tenure-track position funded by the Trond Mohn Foundation), with a background in computer science and radiography. I am also affiliated with the Mohn Medical Imaging and Visualization (MMIV) centre as a senior researcher and part of its leadership team. Currently, I am researching novel interactive visualization approaches for improved exploration, analysis, and communication of multimodal medical imaging data. In this context, our team focuses on multi-parametric MR acquisitions.

This page displays only publications I have authored in my current affiliation. For a full overview, please see my Google Scholar profile; for more information, visit my personal website.

Publications

2023

    @article{wagner2023mri,
    title={MRI-based radiomic signatures for pretreatment prognostication in cervical cancer},
    author={Wagner-Larsen, Kari S and Hodneland, Erlend and Fasmer, Kristine E and Lura, Nj{\aa}l and Woie, Kathrine and Bertelsen, Bj{\o}rn I and Salvesen, {\O}yvind and Halle, Mari K and Smit, Noeska and Krakstad, Camilla and others},
    journal={Cancer Medicine},
    volume={12},
    number={20},
    pages={20251--20265},
    year={2023},
    publisher={Wiley Online Library},
    doi = {10.1002/cam4.6526},
    url = {https://onlinelibrary.wiley.com/doi/full/10.1002/cam4.6526},
    images = {images/wagner2023radiomics.PNG},
    thumbnails = {images/wagner2023radiomics.PNG},
    pdf = {pdfs/wagner2023radiomics.pdf}
    }
    @book{preim2023visualization,
    title={Visualization, Visual Analytics and Virtual Reality in Medicine: State-of-the-art Techniques and Applications},
    author={Preim, Bernhard and Raidou, Renata and Smit, Noeska and Lawonn, Kai},
    year={2023},
    publisher={Elsevier},
    abstract = {Visualization, Visual Analytics and Virtual Reality in Medicine: State-of-the-art Techniques and Applications describes important techniques and applications that show an understanding of actual user needs as well as technological possibilities. The book includes user research, for example, task and requirement analysis, visualization design and algorithmic ideas without going into the details of implementation. This reference will be suitable for researchers and students in visualization and visual analytics in medicine and healthcare, medical image analysis scientists and biomedical engineers in general. Visualization and visual analytics have become prevalent in public health and clinical medicine, medical flow visualization, multimodal medical visualization and virtual reality in medical education and rehabilitation. Relevant applications now include digital pathology, virtual anatomy and computer-assisted radiation treatment planning.},
    images = {images/Smit2023book.png},
    thumbnails = {images/Smit2023book.png},
    project = {ttmedvis},
    url = {https://shop.elsevier.com/books/visualization-visual-analytics-and-virtual-reality-in-medicine/preim/978-0-12-822962-0}
    }

2022

    @inproceedings{EichnerMoerth2022MuSIC,
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    editor = {Renata G. Raidou and Björn Sommer and Torsten W. Kuhlen and Michael Krone and Thomas Schultz and Hsiang-Yun Wu},
    title = {{MuSIC: Multi-Sequential Interactive Co-Registration for Cancer Imaging Data based on Segmentation Masks}},
    author = {Eichner, Tanja* and Mörth, Eric* and Wagner-Larsen, Kari S. and Lura, Njål and Haldorsen, Ingfrid S. and Gröller, Eduard and Bruckner, Stefan and Smit, Noeska N.},
    note = {Best Paper Honorable Mention at VCBM2022},
    project = {ttmedvis},
    year = {2022},
    abstract = {In gynecologic cancer imaging, multiple magnetic resonance imaging (MRI) sequences are acquired per patient to reveal different tissue characteristics. However, after image acquisition, the anatomical structures can be misaligned in the various sequences due to changing patient location in the scanner and organ movements. The co-registration process aims to align the sequences to allow for multi-sequential tumor imaging analysis. However, automatic co-registration often leads to unsatisfying results. To address this problem, we propose the web-based application MuSIC (Multi-Sequential Interactive Co-registration). The approach allows medical experts to co-register multiple sequences simultaneously based on a pre-defined segmentation mask generated for one of the sequences. Our contributions lie in our proposed workflow. First, a shape matching algorithm based on dual annealing searches for the tumor position in each sequence. The user can then interactively adapt the proposed segmentation positions if needed. During this procedure, we include a multi-modal magic lens visualization for visual quality assessment. Then, we register the volumes based on the segmentation mask positions. We allow for both rigid and deformable registration. Finally, we conducted a usability analysis with seven medical and machine learning experts to verify the utility of our approach. Our participants highly appreciate the multi-sequential setup and see themselves using MuSIC in the future.
    },
    publisher = {The Eurographics Association},
    ISSN = {2070-5786},
    ISBN = {978-3-03868-177-9},
    DOI = {10.2312/vcbm.20221190},
    pdf = {pdfs/EichnerMoerth_2022.pdf},
    thumbnails = {images/EichnerMoerth_2022.PNG},
    images = {images/EichnerMoerth_2022.PNG},
    }
    @inproceedings{Kleinau2022Tornado,
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    editor = {Renata G. Raidou and Björn Sommer and Torsten W. Kuhlen and Michael Krone and Thomas Schultz and Hsiang-Yun Wu},
    title = {{Is there a Tornado in Alex's Blood Flow? A Case Study for Narrative Medical Visualization}},
    project = {ttmedvis},
    author = {Kleinau, Anna and Stupak, Evgenia and Mörth, Eric and Garrison, Laura A. and Mittenentzwei, Sarah and Smit, Noeska N. and Lawonn, Kai and Bruckner, Stefan and Gutberlet, Matthias and Preim, Bernhard and Meuschke, Monique},
    year = {2022},
    abstract = {Narrative visualization advantageously combines storytelling with new media formats and techniques, like interactivity, to create improved learning experiences. In medicine, it has the potential to improve patient understanding of diagnostic procedures and treatment options, promote confidence, reduce anxiety, and support informed decision-making. However, limited scientific research has been conducted regarding the use of narrative visualization in medicine. To explore the value of narrative visualization in this domain, we introduce a data-driven story to inform a broad audience about the usage of measured blood flow data to diagnose and treat cardiovascular diseases. The focus of the story is on blood flow vortices in the aorta, with which imaging technique they are examined, and why they can be dangerous. In an interdisciplinary team, we define the main contents of the story and the resulting design questions. We sketch the iterative design process and implement the story based on two genres. In a between-subject study, we evaluate the suitability and understandability of the story and the influence of different navigation concepts on user experience. Finally, we discuss reusable concepts for further narrative medical visualization projects.},
    publisher = {The Eurographics Association},
    ISSN = {2070-5786},
    ISBN = {978-3-03868-177-9},
    DOI = {10.2312/vcbm.20221183},
    pdf = {pdfs/Kleinau_2022.pdf},
    thumbnails = {images/Kleinau_2022.PNG},
    images = {images/Kleinau_2022.PNG},
    }
    @article{Meuschke2022narrative,
    title = {Narrative medical visualization to communicate disease data},
    author = {Meuschke, Monique and Garrison, Laura A. and Smit, Noeska N. and Bach, Benjamin and Mittenentzwei, Sarah and Wei{\ss}, Veronika and Bruckner, Stefan and Lawonn, Kai and Preim, Bernhard},
    year = 2022,
    journal = {Computers \& Graphics},
    volume = 107,
    pages = {144--157},
    doi = {10.1016/j.cag.2022.07.017},
    issn = {0097-8493},
    url = {https://www.sciencedirect.com/science/article/pii/S009784932200139X},
    abstract = {This paper explores narrative techniques combined with medical visualizations to tell data-driven stories about diseases for a general audience. The field of medical illustration uses narrative visualization through hand-crafted techniques to promote health literacy. However, data-driven narrative visualization has rarely been applied to medical data. We derived a template for creating stories about diseases and applied it to three selected diseases to demonstrate how narrative techniques could support visual communication and facilitate understanding of medical data. One of our main considerations is how interactive 3D anatomical models can be integrated into the story and whether this leads to compelling stories in which the users feel involved. A between-subject study with 90 participants suggests that the combination of a carefully designed narrative structure, the constant involvement of a specific patient, high-qualitative visualizations combined with easy-to-use interactions, are critical for an understandable story about diseases that would be remembered by participants.},
    pdf = {pdfs/Narrative_medical_MEUSCHKE_DOA18072022_AFV.pdf},
    thumbnails = {images/Meuschke2022narrative-thumb.png},
    images = {images/Meuschke2022narrative.png},
    project = {VIDI}
    }
    @Article{Moerth2022ScrollyVis,
    author = {Mörth, Eric and Bruckner, Stefan and Smit, Noeska N.},
    title = {ScrollyVis: Interactive visual authoring of guided dynamic narratives for scientific scrollytelling},
    journal = {IEEE Transactions on Visualization and Computer Graphics},
    year = {2022},
    abstract = {Visual stories are an effective and powerful tool to convey specific information to a diverse public. Scrollytelling is a recent visual storytelling technique extensively used on the web, where content appears or changes as users scroll up or down a page. By employing the familiar gesture of scrolling as its primary interaction mechanism, it provides users with a sense of control, exploration and discoverability while still offering a simple and intuitive interface. In this paper, we present a novel approach for authoring, editing, and presenting data-driven scientific narratives using scrollytelling. Our method flexibly integrates common sources such as images, text, and video, but also supports more specialized visualization techniques such as interactive maps as well as scalar field and mesh data visualizations. We show that scrolling navigation can be used to traverse dynamic narratives and demonstrate how it can be combined with interactive parameter exploration. The resulting system consists of an extensible web-based authoring tool capable of exporting stand-alone stories that can be hosted on any web server. We demonstrate the power and utility of our approach with case studies from several diverse scientific fields and with a user study including 12 participants of diverse professional backgrounds. Furthermore, an expert in creating interactive articles assessed the usefulness of our approach and the quality of the created stories.},
    project = {ttmedvis},
    pdf = {pdfs/Moerth_2022_ScrollyVis.pdf},
    thumbnails = {images/Moerth_2022_ScrollyVis.png},
    images = {images/Moerth_2022_ScrollyVis.png},
    pages={1--12},
    doi={10.1109/TVCG.2022.3205769},
    }
    @inproceedings{Moerth2022ICEVis,
    title = {ICEVis: Interactive Clustering Exploration for tumor sub-region analysis in multiparametric cancer imaging},
    author = {Mörth, Eric and Eichner, Tanja and Haldorsen, Ingfrid S. and Bruckner, Stefan and Smit, Noeska N.},
    year = 2022,
    booktitle = {Proceedings of the International Symposium on Visual Information Communication and Interaction (VINCI'22)},
    volume = {15},
    pages = {5},
    doi = {10.1145/3554944.3554958},
    abstract = {Tumor tissue characteristics derived from imaging data are gaining importance in clinical research. Tumor sub-regions may play a critical role in defining tumor types and may hold essential information about tumor aggressiveness. Depending on the tumor’s location within the body, such sub-regions can be easily identified and determined by physiology, but these sub-regions are not readily visible to others. Regions within a tumor are currently explored by comparing the image sequences and analyzing the tissue heterogeneity present. To improve the exploration of such tumor sub-regions, we propose a visual analytics tool called ICEVis. ICEVis supports the identification of tumor sub-regions and corresponding features combined with cluster visualizations highlighting cluster validity. It is often difficult to estimate the optimal number of clusters; we provide rich facilities to support this task, incorporating various statistical measures and interactive exploration of the results. We evaluated our tool with three clinical researchers to show the potential of our approach.
    },
    note = {Best Short Paper at VINCI2022},
    images = "images/Moerth_2022_ICEVis.png",
    thumbnails = "images/Moerth_2022_ICEVis.png",
    pdf = {pdfs/Moerth_2022_ICEVis.pdf},
    vid = {vids/ICEVis.mp4},
    project = "ttmedvis",
    }
    @article{Sugathan2022Longitudinal,
    title = {Longitudinal visualization for exploratory analysis of multiple sclerosis lesions},
    author = {Sugathan, Sherin and Bartsch, Hauke and Riemer, Frank and Gr{\"u}ner, Renate and Lawonn, Kai and Smit, Noeska},
    year = 2022,
    journal = {Computers \& Graphics},
    volume = 107,
    pages = {208--219},
    doi = {10.1016/j.cag.2022.07.023},
    issn = {0097-8493},
    url = {https://www.sciencedirect.com/science/article/pii/S0097849322001479},
    images = "images/Sugathan-2022-Longitudinal.PNG",
    thumbnails = "images/Sugathan-2022-Longitudinal.PNG",
    project = {ttmedvis},
    youtube = "https://youtu.be/uwcqSf1W-dc"
    }
    @article{VandenBossche2022Digital,
    title = {Digital body preservation: Technique and applications},
    author = {Vandenbossche, Vicky and Van de Velde, Joris and Avet, Stind and Willaert, Wouter and Soltvedt, Stian and Smit, Noeska and Audenaert, Emmanuel},
    year = 2022,
    journal = {Anatomical Sciences Education},
    volume = {15},
    number = {4},
    pages = {731--744},
    doi = {10.1002/ase.2199},
    url = {https://anatomypubs.onlinelibrary.wiley.com/doi/abs/10.1002/ase.2199},
    images = "images/VandenBossche-2022-Digital.PNG",
    thumbnails = "images/VandenBossche-2022-Digital.PNG",
    project = {ttmedvis}
    }
    @article{Wagner2022Interobserver,
    title = {Interobserver agreement and prognostic impact for {MRI}-based 2018 {FIGO} staging parameters in uterine cervical cancer},
    author = {Wagner-Larsen, Kari S and Lura, Nj{\aa}l and Salvesen, {\O}yvind and Halle, Mari Kylles{\o} and Forsse, David and Trovik, Jone and Smit, Noeska and Krakstad, Camilla and Haldorsen, Ingfrid S},
    year = 2022,
    journal = {European Radiology},
    publisher = {Springer},
    pages = {1--12},
    doi = {10.1007/s00330-022-08666-x},
    url = {https://link.springer.com/article/10.1007/s00330-022-08666-x},
    images = "images/Wagner-2022-Interobserver.PNG",
    thumbnails = "images/Wagner-2022-Interobserver.PNG",
    project = {ttmedvis}
    }
    @article{Hodneland2022Fully,
    title = {Fully Automatic Whole-Volume Tumor Segmentation in Cervical Cancer},
    author = {Hodneland, Erlend and Kaliyugarasan, Satheshkumar and Wagner-Larsen, Kari Str{\o}no and Lura, Nj{\aa}l and Andersen, Erling and Bartsch, Hauke and Smit, Noeska and Halle, Mari Kylles{\o} and Krakstad, Camilla and Lundervold, Alexander Selvikv{\aa}g and Haldorsen, Ingfrid S},
    year = 2022,
    journal = {Cancers},
    publisher = {MDPI},
    volume = 14,
    number = 10,
    pages = 2372,
    doi = {10.3390/cancers14102372},
    url = {https://pubmed.ncbi.nlm.nih.gov/35625977/},
    images = "images/Hodneland-2022-Fully.PNG",
    thumbnails = "images/Hodneland-2022-Fully.PNG",
    project = {ttmedvis}
    }

2021

    @inproceedings{Rijken-2021-Illegible,
    title = {Illegible Semantics: Exploring the Design Space of Metal Logos},
    author = {Gerrit J. Rijken and Rene Cutura and Frank Heyen and Michael Sedlmair and Michael Correll and Jason Dykes and Noeska Smit},
    year = 2021,
    booktitle = {Proceedings of the {alt.VIS} workshop at {IEEE VIS}},
    eprint = {2109.01688},
    archiveprefix = {arXiv},
    primaryclass = {cs.HC},
    pdf = {pdfs/Rijken-2021-Illegible.pdf},
    thumbnails = {images/Rijken-2021-Illegible.png},
    images = {images/Rijken-2021-Illegible.png},
    abstract = {The logos of metal bands can be by turns gaudy, uncouth, or nearly illegible. Yet, these logos work: they communicate sophisticated notions of genre and emotional affect. In this paper we use the design considerations of metal logos to explore the space of ``illegible semantics'': the ways that text can communicate information at the cost of readability, which is not always the most important objective. In this work, drawing on formative visualization theory, professional design expertise, and empirical assessments of a corpus of metal band logos, we describe a design space of metal logos and present a tool through which logo characteristics can be explored through visualization. We investigate ways in which logo designers imbue their text with meaning and consider opportunities and implications for visualization more widely.},
    youtube = "https://youtu.be/BZOdIhU-mrA",
    }
    @inproceedings{Smit-2021-DataKnitualization,
    title = {Data Knitualization: An Exploration of Knitting as a Visualization Medium},
    author = {Noeska Smit},
    year = 2021,
    booktitle = {Proceedings of the {alt.VIS} workshop at {IEEE VIS}},
    doi = {10.31219/osf.io/xahj9},
    pdf = {pdfs/Smit-2021-DataKnitualization.pdf},
    thumbnails = {images/Smit-2021-DataKnitualization.jpg},
    images = {images/Smit-2021-DataKnitualization.jpg},
    abstract = {While data visualization can be achieved in many media, from hand-drawn on paper to 3D printed via data physicalization, the ancient craft of knitting is not often considered as a visualization medium. With this work, I explore hand knitting as a potential data visualization medium based on my personal experience as a knitter and visualization researcher.},
    youtube = "https://youtu.be/K3D-M7jzbMs",
    }
    @article{Gillmann-2021-Viewpoints,
    author = {C. Gillmann and N. N. Smit and E. Gr{\"o}ller and B. Preim and A. Vilanova and T. Wischgoll},
    journal = {IEEE Computer Graphics and Applications},
    title = {Ten Open Challenges in Medical Visualization},
    year = {2021},
    volume = {41},
    number = {05},
    issn = {1558-1756},
    pages = {7--15},
    keywords = {deep learning;uncertainty;data visualization;medical services;standardization;artificial intelligence;biomedical imaging},
    doi = {10.1109/MCG.2021.3094858},
    publisher = {IEEE Computer Society},
    address = {Los Alamitos, CA, USA},
    pdf = {pdfs/Gillmann-2021-Viewpoints.pdf},
    thumbnails = {images/Gillmann-2021-Viewpoints.png},
    images = {images/Gillmann-2021-Viewpoints.png},
    project = {ttmedvis},
    abstract = {The medical domain has been an inspiring application area in visualization research for many years already, but many open challenges remain. The driving forces of medical visualization research have been strengthened by novel developments, for example, in deep learning, the advent of affordable VR technology, and the need to provide medical visualizations for broader audiences. At IEEE VIS 2020, we hosted an Application Spotlight session to highlight recent medical visualization research topics. With this article, we provide the visualization community with ten such open challenges, primarily focused on challenges related to the visualization of medical imaging data. We first describe the unique nature of medical data in terms of data preparation, access, and standardization. Subsequently, we cover open visualization research challenges related to uncertainty, multimodal and multiscale approaches, and evaluation. Finally, we emphasize challenges related to users focusing on explainable AI, immersive visualization, P4 medicine, and narrative visualization.}
    }
    @inproceedings{Hushagen-2021-VCBM,
    title = {The Role of Depth Perception in {XR} from a Neuroscience Perspective: A Primer and Survey},
    author = {Hushagen, Vetle and Tresselt, Gustav C. and Smit, Noeska N. and Specht, Karsten},
    year = 2021,
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    publisher = {The Eurographics Association},
    doi = {10.2312/vcbm.20211344},
    isbn = {978-3-03868-140-3},
    issn = {2070-5786},
    url = {https://diglib.eg.org/handle/10.2312/vcbm20211344},
    pdf = {pdfs/Hushagen-2021-VCBM.pdf},
    thumbnails = {images/Hushagen-2021-VCBM.png},
    images = {images/Hushagen-2021-VCBM.png},
    project = {ttmedvis},
    abstract = {Augmented and virtual reality (XR) are potentially powerful tools for enhancing the efficiency of interactive visualization of complex data in biology and medicine. The benefits of visualization of digital objects in XR mainly arise from enhanced depth perception due to the stereoscopic nature of XR head mounted devices. With the added depth dimension, XR is in a prime position to convey complex information and support tasks where 3D information is important. In order to inform the development of novel XR applications in the biology and medicine domain, we present a survey which reviews the neuroscientific basis underlying the immersive features of XR. To make this literature more accessible to the visualization community, we first describe the basics of the visual system, highlighting how visual features are combined to objects and processed in higher cortical areas with a special focus on depth vision. Based on state of the art findings in neuroscience literature related to depth perception, we provide several recommendations for developers and designers. Our aim is to aid development of XR applications and strengthen development of tools aimed at molecular visualization, medical education, and surgery, as well as inspire new application areas.}
    }
    @inproceedings{Sugathan-2021-VCBM,
    title = {Interactive Multimodal Imaging Visualization for Multiple Sclerosis Lesion Analysis},
    author = {Sugathan, Sherin and Bartsch, Hauke and Riemer, Frank and Gr{\"u}ner, Renate and Lawonn, Kai and Smit, Noeska N},
    year = 2021,
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    publisher = {The Eurographics Association},
    doi = {10.2312/vcbm.20211346},
    isbn = {978-3-03868-140-3},
    issn = {2070-5786},
    url = {https://diglib.eg.org/handle/10.2312/vcbm20211346},
    pdf = {pdfs/Sugathan-2021-VCBM.pdf},
    thumbnails = {images/Sugathan-2021-VCBM.png},
    images = {images/Sugathan-2021-VCBM.png},
    project = {ttmedvis},
    abstract = {Multiple Sclerosis (MS) is a brain disease that is diagnosed and monitored extensively through MRI scans. One of the criteria is the appearance of so-called brain lesions. The lesions show up on MRI scans as regions with elevated or reduced contrast compared to the surrounding healthy tissue.
    Understanding the complex interplay of contrast, location and shape in images from multiple modalities from 2D MRI slices is challenging.
    Advanced visualization of appearance- and location-related features of lesions would help researchers in defining better disease characterization through MS research.
    Since a permanent cure is not possible in MS and medication-based disease modification is a common treatment path, providing better visualizations would strengthen research which investigates the effect of white matter lesions. Here we present an advanced visualization solution that supports analysis from multiple imaging modalities acquired in a clinical routine examination. The solution holds potential for enabling researchers to have a more intuitive perception of lesion features. As an example for enhancing the analytic possibilities, we demonstrate the benefits of lesion projection using both Diffusion Tensor Imaging (DTI) and gradient-based techniques. This approach enables users to assess brain structures across individuals as the atlas-based analysis provides 3D anchoring and labeling of regions across a series of brain scans from the same participant and across different participants. The projections on the brain surface also enable researchers to conduct detailed studies on the relationship between cognitive disabilities and location of lesions. This allows researchers to correlate lesions to Brodmann areas and related brain functions.
    We realize the solutions in a prototype application that supports both DTI and structural data. A qualitative evaluation demonstrates that our approach supports MS researchers by providing new opportunities for MS research.}
    }
    @InProceedings{Garrison-2021-EPP,
    author = {Laura Garrison and Monique Meuschke and Jennifer Fairman and Noeska Smit and Bernhard Preim and Stefan Bruckner},
    title = {An Exploration of Practice and Preferences for the Visual Communication of Biomedical Processes},
    booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
    year = {2021},
    abstract = {The visual communication of biomedical processes draws from diverse techniques in both visualization and biomedical illustration. However, matching these techniques to their intended audience often relies on practice-based heuristics or narrow-scope evaluations. We present an exploratory study of the criteria that audiences use when evaluating a biomedical process visualization targeted for communication. Designed over a series of expert interviews and focus groups, our study focuses on common communication scenarios of five well-known biomedical processes and their standard visual representations. We framed these scenarios in a survey with participant expertise spanning from minimal to expert knowledge of a given topic. Our results show frequent overlap in abstraction preferences between expert and non-expert audiences, with similar prioritization of clarity and the ability of an asset to meet a given communication objective. We also found that some illustrative conventions are not as clear as we thought, e.g., glows have broadly ambiguous meaning, while other approaches were unexpectedly preferred, e.g., biomedical illustrations in place of data-driven visualizations. Our findings suggest numerous opportunities for the continued convergence of visualization and biomedical illustration techniques for targeted visualization design.
    },
    note = {Best Paper Honorable Mention at VCBM 2021},
    project = {VIDI,ttmedvis},
    pdf = {pdfs/Garrison-2021-EPP.pdf},
    thumbnails = {images/Garrison-2021-EPP.png},
    images = {images/Garrison-2021-EPP.jpg},
    url = {https://github.com/lauragarrison87/Biomedical_Process_Vis},
    keywords = {biomedical illustration, visual communication, survey},
    }
    @incollection{Smit-2021-COMULIS,
    author = {Smit, Noeska and Bühler, Katja and Vilanova, Anna and Falk, Martin},
    title = {Visualisation for correlative multimodal imaging},
    booktitle = {Imaging Modalities for Biological and Preclinical Research: A Compendium, Volume 2},
    publisher = {IOP Publishing},
    year = {2021},
    series = {2053-2563},
    type = {Book Chapter},
    pages = {III.4.e-1 to III.4.e-10},
    url = {http://dx.doi.org/10.1088/978-0-7503-3747-2ch28},
    doi = {10.1088/978-0-7503-3747-2ch28},
    isbn = {978-0-7503-3747-2},
    thumbnails = "images/Smit-2021-COMULIS.PNG",
    images = "images/Smit-2021-COMULIS.PNG",
    project = "ttmedvis",
    abstract = {The field of visualisation deals with finding appropriate visual representations of data so people can effectively carry out tasks related to data exploration, analysis, or presentation using the power of the human visual perceptual system. In the context of biomedical imaging data, interactive visualisation techniques can be employed, for example, to visually explore data, as image processing quality assurance, or in publications to communicate findings. When dealing with correlative imaging, challenges arise in how to effectively convey the information from multiple sources. In particular, the information density leads to the need for a critical reflection on the visual design with respect to which parts of the data are important to show and at what level of importance they should be visualised. In this chapter, we describe several approaches to interactive imaging data visualisation in general, highlight several strategies for visualising correlative multimodal imaging data, and provide examples and practical recommendations.}
    }

2020

    @article{Garrison-2020-IVE,
    author = {Garrison, Laura and Va\v{s}\'{i}\v{c}ek, Jakub and Craven, Alex R. and Gr\"{u}ner, Renate and Smit, Noeska and Bruckner, Stefan},
    title = {Interactive Visual Exploration of Metabolite Ratios in MR Spectroscopy Studies},
    journal = {Computers \& Graphics},
    volume = {92},
    pages = {1--12},
    keywords = {medical visualization, magnetic resonance spectroscopy data, information visualization, user-centered design},
    doi = {10.1016/j.cag.2020.08.001},
    abstract = {Magnetic resonance spectroscopy (MRS) is an advanced biochemical technique used to identify metabolic compounds in living tissue. While its sensitivity and specificity to chemical imbalances render it a valuable tool in clinical assessment, the results from this modality are abstract and difficult to interpret. With this design study we characterized and explored the tasks and requirements for evaluating these data from the perspective of a MRS research specialist. Our resulting tool, SpectraMosaic, links with upstream spectroscopy quantification software to provide a means for precise interactive visual analysis of metabolites with both single- and multi-peak spectral signatures. Using a layered visual approach, SpectraMosaic allows researchers to analyze any permutation of metabolites in ratio form for an entire cohort, or by sample region, individual, acquisition date, or brain activity status at the time of acquisition. A case study with three MRS researchers demonstrates the utility of our approach in rapid and iterative spectral data analysis.},
    year = {2020},
    pdf = "pdfs/Garrison-2020-IVE.pdf",
    thumbnails = "images/Garrison-2020-IVE.png",
    images = "images/Garrison-2020-IVE.jpg",
    project = "VIDI",
    git = "https://github.com/mmiv-center/spectramosaic-public",
    }
    @article{RadEx,
    author = {M\"{o}rth, E. and Wagner-Larsen, K. and Hodneland, E. and Krakstad, C. and Haldorsen, I. S. and Bruckner, S. and Smit, N. N.},
    title = {RadEx: Integrated Visual Exploration of Multiparametric Studies for Radiomic Tumor Profiling},
    journal = {Computer Graphics Forum},
    volume = {39},
    number = {7},
    year = {2020},
    pages = {611--622},
    abstract = {Better understanding of the complex processes driving tumor growth and metastases is critical for developing targeted treatment strategies in cancer. Radiomics extracts large amounts of features from medical images which enables radiomic tumor profiling in combination with clinical markers. However, analyzing complex imaging data in combination with clinical data is not trivial and supporting tools aiding in these exploratory analyses are presently missing. In this paper, we present an approach that aims to enable the analysis of multiparametric medical imaging data in combination with numerical, ordinal, and categorical clinical parameters to validate established and unravel novel biomarkers. We propose a hybrid approach where dimensionality reduction to a single axis is combined with multiple linked views allowing clinical experts to formulate hypotheses based on all available imaging data and clinical parameters. This may help to reveal novel tumor characteristics in relation to molecular targets for treatment, thus providing better tools for enabling more personalized targeted treatment strategies. To confirm the utility of our approach, we closely collaborate with experts from the field of gynecological cancer imaging and conducted an evaluation with six experts in this field.},
    pdf = "pdfs/Moerth-2020-RadEx.pdf",
    images = "images/Moerth-2020-RadEx.jpg",
    youtube = "https://youtu.be/zwtDzwwX790",
    thumbnails = "images/Moerth-2020-RadEx-thumb.jpg",
    project = "ttmedvis",
    doi = {10.1111/cgf.14172}
    }
    [PDF] [DOI] [YT] [Bibtex]
    @INPROCEEDINGS{Moerth-2020-CGI,
    author = "M\"{o}rth, E. and Haldorsen, I.S. and Bruckner, S. and Smit, N.N.",
    title = "ParaGlyder: Probe-driven Interactive Visual Analysis for Multiparametric Medical Imaging Data",
    booktitle = "Proceedings of Computer Graphics International",
    pages = "351--363",
    year = "2020",
    abstract = "Multiparametric medical imaging describes approaches that include multiple imaging sequences acquired within the same imaging examination, as opposed to one single imaging sequence or imaging from multiple imaging modalities. Multiparametric imaging in cancer has been shown to be useful for tumor detection and may also depict functional tumor characteristics relevant for clinical phenotypes. However, when confronted with datasets consisting of multiple values per voxel, traditional reading of the imaging series fails to capture complicated patterns. Those patterns of potentially important imaging properties of the parameter space may be critical for the analysis. Standard approaches, such as transfer functions and juxtapositioned visualizations, fail to convey the shape of the multiparametric parameter distribution in sufficient detail. For these reasons, in this paper we present an approach that aims to enable the exploration and analysis of such multiparametric studies using an interactive visual analysis application to remedy the trade-offs between details in the value domain and in spatial resolution. Interactive probing within or across subjects allows for a digital biopsy that is able to uncover multiparametric tissue properties. This may aid in the discrimination between healthy and cancerous tissue, unravel radiomic tissue features that could be linked to targetable pathogenic mechanisms, and potentially highlight metastases that evolved from the primary tumor. We conducted an evaluation with eleven domain experts from the field of gynecological cancer imaging, neurological imaging, and machine learning research to confirm the utility of our approach.",
    note= "The final authenticated version is available online at https://doi.org/10.1007/978-3-030-61864-3_29",
    pdf = "pdfs/Moerth-2020-CGI-ParaGlyder.pdf",
    images = "images/Moerth-2020-ParaGlyder.PNG",
    thumbnails = "images/Moerth-2020-ParaGlyder-thumb.png",
    youtube = "https://youtu.be/S_M4CWXKz0U",
    publisher = "LNCS by Springer",
    project = "ttmedvis",
    doi = "10.1007/978-3-030-61864-3_29"
    }
    [PDF] [DOI] [Bibtex]
    @article{Solteszova-2019-MLT,
    author = {Solteszova, V. and Smit, N. N. and Stoppel, S. and Gr\"{u}ner, R. and Bruckner, S.},
    title = {Memento: Localized Time-Warping for Spatio-Temporal Selection},
    journal = {Computer Graphics Forum},
    volume = {39},
    number = {1},
    pages = {231--243},
    year = {2020},
    keywords = {interaction, temporal data, visualization, spatio-temporal projection},
    images = "images/Solteszova-2019-MLT.jpg",
    thumbnails = "images/Solteszova-2019-MLT-1.jpg",
    pdf = "pdfs/Solteszova-2019-MLT.pdf",
    doi = {10.1111/cgf.13763},
    abstract = {Interaction techniques for temporal data are often focused on affecting the spatial aspects of the data, for instance through the use of transfer functions, camera navigation or clipping planes. However, the temporal aspect of the data interaction is often neglected. The temporal component is either visualized as individual time steps, an animation or a static summary over the temporal domain. When dealing with streaming data, these techniques are unable to cope with the task of re-viewing an interesting local spatio-temporal event, while continuing to observe the rest of the feed. We propose a novel technique that allows users to interactively specify areas of interest in the spatio-temporal domain. By employing a time-warp function, we are able to slow down time, freeze time or even travel back in time, around spatio-temporal events of interest. The combination of such a (pre-defined) time-warp function and brushing directly in the data to select regions of interest allows for a detailed review of temporally and spatially localized events, while maintaining an overview of the global spatio-temporal data. We demonstrate the utility of our technique with several usage scenarios.},
    project = "MetaVis,ttmedvis,VIDI"
    }

2019

    [DOI] [Bibtex]
    @article{kraima2019role,
    title={The role of the longitudinal muscle in the anal sphincter complex: Implications for the Intersphincteric Plane in Low Rectal Cancer Surgery?},
    author={Kraima, Anne C and West, Nicholas P and Roberts, Nicholas and Magee, Derek R and Smit, Noeska N and van de Velde, Cornelis JH and DeRuiter, Marco C and Rutten, Harm J and Quirke, Philip},
    journal={Clinical Anatomy},
    year={2019},
    doi="10.1002/ca.23444",
    url = "https://onlinelibrary.wiley.com/doi/full/10.1002/ca.23444",
    publisher={Wiley Online Library},
    project = "ttmedvis",
    images = {images/kraima-2019-role.png},
    thumbnails = {images/kraima-2019-role.png},
    abstract = {Intersphincteric resection (ISR) enables radical sphincter-preserving surgery in a subset of low rectal tumors impinging on the anal sphincter complex (ASC). Excellent anatomical knowledge is essential for optimal ISR. This study describes the role of the longitudinal muscle (LM) in the ASC and implications for ISR and other low rectal and anal pathologies. Six human adult en bloc cadaveric specimens (three males, three females) were obtained from the University of Leeds GIFT Research Tissue Programme. Paraffin-embedded mega blocks containing the ASC were produced and serially sectioned at 250 µm intervals. Whole mount microscopic sections were histologically stained and digitally scanned. The intersphincteric plane was shown to be potentially very variable. In some places adipose tissue is located between the external anal sphincter (EAS) and internal anal sphincter (IAS), whereas in others the LM interdigitates to obliterate the plane. Elsewhere the LM is (partly) absent with the intersphincteric plane lying on the IAS. The LM gave rise to the formation of the submucosae and corrugator ani muscles by penetrating the IAS and EAS. In four of six specimens, striated muscle fibers from the EAS curled around the distal IAS reaching the anal submucosa. The ASC formed a complex structure, varying between individuals with an inconstant LM affecting the potential location of the intersphincteric plane as well as a high degree of intermingling striated and smooth muscle fibers potentially further disrupting the plane. The complexity of identifying the correct pathological staging of low rectal cancer is also demonstrated.}
    }
    [PDF] [DOI] [YT] [Bibtex]
    @INPROCEEDINGS {Garrison2019SM,
    author = {Garrison, Laura and Va\v{s}\'{\i}\v{c}ek, Jakub and Gr\"{u}ner, Renate and Smit, Noeska and Bruckner, Stefan},
    title = {SpectraMosaic: An Exploratory Tool for the Interactive Visual Analysis of Magnetic Resonance Spectroscopy Data},
    month = {sep},
    year = {2019},
    booktitle = {Proceedings of VCBM 2019},
    pages = {1--10},
    event = "VCBM 2019",
    proceedings = "Proceedings of the 9th Eurographics Workshop on Visual Computing in Biology and Medicine",
    keywords = {medical visualization, magnetic resonance spectroscopy data, information visualization, user-centered design},
    images = "images/garrison_VCBM19spectramosaic_full.PNG",
    thumbnails = "images/garrison_VCBM19spectramosaic_thumb.png",
    pdf = "pdfs/garrison_VCBM19spectramosaic.pdf",
    youtube = "https://www.youtube.com/watch?v=Rzl7sl4WvdQ",
    abstract = {Magnetic resonance spectroscopy (MRS) allows for assessment of tissue metabolite characteristics used often for early detection and treatment evaluation of brain-related pathologies. However, meaningful variations in ratios of tissue metabolites within a sample area are difficult to capture with current visualization tools. Furthermore, the learning curve to interpretation is steep and limits the more widespread adoption of MRS in clinical practice. In this design study, we collaborated with domain experts to design a novel visualization tool for the exploration of tissue metabolite concentration ratios in spectroscopy clinical and research studies. We present a data and task analysis for this domain, where MRS data attributes can be categorized into tiers of visual priority. We furthermore introduce a novel set of visual encodings for these attributes. Our result is SpectraMosaic, an interactive insight-generation tool for rapid exploration and comparison of metabolite ratios. We validate our approach with two case studies from MR spectroscopy experts, providing early qualitative evidence of the efficacy of the system for visualization of spectral data and affording deeper insights into these complex heterogeneous data.},
    git = "https://git.app.uib.no/Laura.Garrison/spectramosaic",
    doi = "10.2312/vcbm.20191225",
    project = "VIDI"
    }
    [DOI] [Bibtex]
    @incollection{Smit-2019-AtlasVis,
    title={Towards Advanced Interactive Visualization for Virtual Atlases},
    author={Smit, Noeska and Bruckner, Stefan},
    booktitle={Biomedical Visualisation},
    pages={85--96},
    year={2019},
    publisher={Springer},
    doi = {10.1007/978-3-030-19385-0_6},
    url = "http://noeskasmit.com/wp-content/uploads/2019/07/Smit_AtlasVis_2019.pdf",
    images = "images/Smit-2019-AtlasVis.png",
    thumbnails = "images/Smit-2019-AtlasVis.png",
    abstract = "An atlas is generally defined as a bound collection of tables, charts or illustrations describing a phenomenon. In an anatomical atlas for example, a collection of representative illustrations and text describes anatomy for the purpose of communicating anatomical knowledge. The atlas serves as reference frame for comparing and integrating data from different sources by spatially or semantically relating collections of drawings, imaging data, and/or text. In the field of medical image processing, atlas information is often constructed from a collection of regions of interest, which are based on medical images that are annotated by domain experts. Such an atlas may be employed for example for automatic segmentation of medical imaging data. The combination of interactive visualization techniques with atlas information opens up new possibilities for content creation, curation, and navigation in virtual atlases. With interactive visualization of atlas information, students are able to inspect and explore anatomical atlases in ways that were not possible with the traditional method of presenting anatomical atlases in book format, such as viewing the illustrations from other viewpoints. With advanced interaction techniques, it becomes possible to query the data that forms the basis for the atlas, thus empowering researchers to access a wealth of information in new ways. So far, atlas-based visualization has been employed mainly for medical education, as well as biological research. In this survey, we provide an overview of current digital biomedical atlas tasks and applications and summarize relevant visualization techniques. We discuss recent approaches for providing next-generation visual interfaces to navigate atlas data that go beyond common text-based search and hierarchical lists. Finally, we reflect on open challenges and opportunities for the next steps in interactive atlas visualization.",
    project = "ttmedvis,MetaVis,VIDI"
    }
    [DOI] [Bibtex]
    @article{Meuschke-2019-EvalViz,
    title = {EvalViz--Surface Visualization Evaluation Wizard for Depth and Shape Perception Tasks},
    author = {Meuschke, Monique and Smit, Noeska N and Lichtenberg, Nils and Preim, Bernhard and Lawonn, Kai},
    journal = {Computers \& Graphics},
    year = {2019},
    publisher = {Elsevier},
    number = "1",
    volume = "82",
    DOI = {10.1016/j.cag.2019.05.022},
    images = "images/Meuschke_EvalViz_2019.png",
    thumbnails = "images/Meuschke_EvalViz_2019.png",
    abstract = "User studies are indispensable for visualization application papers in order to assess the value and limitations of the presented approach. Important aspects are how well depth and shape information can be perceived, as coding of these aspects is essential to enable an understandable representation of complex 3D data. In practice, there is usually little time to perform such studies, and the establishment and conduction of user studies can be labour-intensive. In addition, it can be difficult to reach enough participants to obtain expressive results regarding the quality of different visualization techniques. In this paper, we propose a framework that allows visualization researchers to quickly create task-based user studies on depth and shape perception for different surface visualizations and perform the resulting tasks via a web interface. With our approach, the effort for generating user studies is reduced and at the same time the web-based component allows researchers to attract more participants to their study. We demonstrate our framework by applying shape and depth evaluation tasks to visualizations of various surface representations used in many technical and biomedical applications.",
    project = "ttmedvis"
    }
    [PDF] [YT] [Bibtex]
    @MISC {Garrison2019SM_eurovis,
    title = {A Visual Encoding System for Comparative Exploration of Magnetic Resonance Spectroscopy Data},
    author = {Garrison, Laura and Va\v{s}\'{\i}\v{c}ek, Jakub and Gr\"{u}ner, Renate and Smit, Noeska and Bruckner, Stefan},
    abstract = "Magnetic resonance spectroscopy (MRS) allows for assessment of tissue metabolite characteristics used often for early detection and treatment evaluation of intracranial pathologies. In particular, this non-invasive technique is important in the study of metabolic changes related to brain tumors, strokes, seizure disorders, Alzheimer's disease, depression, as well as other diseases and disorders affecting the brain. However, meaningful variations in ratios of tissue metabolites within a sample area are difficult to capture with current visualization tools. Furthermore, the learning curve to interpretation is steep and limits the more widespread adoption of MRS in clinical practice. In this work we present a novel, tiered visual encoding system for multi-dimensional MRS data to aid in the visual exploration of metabolite concentration ratios. Our system was developed in close collaboration with domain experts including detailed data and task analyses. This visual encoding system was subsequently realized as part of an interactive insight-generation tool for rapid exploration and comparison of metabolite ratio variation for deeper insights to these complex data.",
    booktitle = {Proceedings of the EuroVis Conference - Posters (EuroVis 2019)},
    year = {2019},
    howpublished = "Poster presented at the EuroVis conference 2019",
    keywords = {medical visualization, magnetic resonance spectroscopy data, information visualization, user-centered design},
    images = "images/garrison_eurovis2019_SM_encodings.png",
    thumbnails = "images/garrison_eurovis2019_SM_encodings.png",
    pdf = "pdfs/garrison_eurovis2019_SM.pdf",
    youtube = "https://youtu.be/Rzl7sl4WvdQ",
    project = "VIDI"
    }
    [PDF] [DOI] [Bibtex]
    @inproceedings {Smit-2019-DBP,
    booktitle = {Eurographics 2019 - Dirk Bartz Prize},
    editor = {Bruckner, Stefan and Oeltze-Jafra, Steffen},
    title = {{Model-based Visualization for Medical Education and Training}},
    author = {Smit, Noeska and Lawonn, Kai and Kraima, Annelot and deRuiter, Marco and Bruckner, Stefan and Eisemann, Elmar and Vilanova, Anna},
    year = {2019},
    publisher = {The Eurographics Association},
    ISSN = {1017-4656},
    DOI = {10.2312/egm.20191033},
    pdf = "pdfs/Smit_DBPrize_2019.pdf",
    images = "images/Smit_DBPrize_2019.png",
    thumbnails = "images/Smit_DBPrize_2019.png",
    abstract = "Anatomy, or the study of the structure of the human body, is an essential component of medical education. Certain parts of human anatomy are considered to be more complex to understand than others, due to a multitude of closely related structures. Furthermore, there are many potential variations in anatomy, e.g., different topologies of vessels, and knowledge of these variations is critical for many in medical practice. Some aspects of individual anatomy, such as the autonomic nerves, are not visible in individuals through medical imaging techniques or even during surgery, placing these nerves at risk for damage. 3D models and interactive visualization techniques can be used to improve understanding of this complex anatomy, in combination with traditional medical education paradigms. We present a framework incorporating several advanced medical visualization techniques and applications for teaching and training purposes, which is the result of an interdisciplinary project. In contrast to previous approaches which focus on general anatomy visualization or direct visualization of medical imaging data, we employ model-based techniques to represent variational anatomy, as well as anatomy not visible from imaging. Our framework covers the complete spectrum including general anatomy, anatomical variations, and anatomy in individual patients. Applications within our framework were evaluated positively with medical users, and our educational tool for general anatomy is in use in a Massive Open Online Course (MOOC) on anatomy, which had over 17000 participants worldwide in the first run.",
    project = "ttmedvis,VIDI"
    }
    [PDF] [DOI] [Bibtex]
    @inproceedings {Moerth-2019-VCBM,
    booktitle = "Eurographics Workshop on Visual Computing for Biology and Medicine",
    editor = "Kozlíková, Barbora and Linsen, Lars and Vázquez, Pere-Pau and Lawonn, Kai and Raidou, Renata Georgia",
    abstract = "Three-dimensional (3D) ultrasound imaging and visualization is often used in medical diagnostics, especially in prenatal screening. Screening the development of the fetus is important to assess possible complications early on. State of the art approaches involve taking standardized measurements to compare them with standardized tables. The measurements are taken in a 2D slice view, where precise measurements can be difficult to acquire due to the fetal pose. Performing the analysis in a 3D view would enable the viewer to better discriminate between artefacts and representative information. Additionally making data comparable between different investigations and patients is a goal in medical imaging techniques and is often achieved by standardization. With this paper, we introduce a novel approach to provide a standardization method for 3D ultrasound fetus screenings. Our approach is called “The Vitruvian Baby” and incorporates a complete pipeline for standardized measuring in fetal 3D ultrasound. The input of the method is a 3D ultrasound screening of a fetus and the output is the fetus in a standardized T-pose. In this pose, taking measurements is easier and comparison of different fetuses is possible. In addition to the transformation of the 3D ultrasound data, we create an abstract representation of the fetus based on accurate measurements. We demonstrate the accuracy of our approach on simulated data where the ground truth is known.",
    title = "The Vitruvian Baby: Interactive Reformation of Fetal Ultrasound Data to a T-Position",
    author = "M\"{o}rth, Eric and Raidou, Renata Georgia and Viola, Ivan and Smit, Noeska",
    year = "2019",
    publisher = "The Eurographics Association",
    ISSN = "2070-5786",
    ISBN = "978-3-03868-081-9",
    DOI = "10.2312/vcbm.20191245",
    pdf = "pdfs/VCBM_TheVitruvianBaby_ShortPaper_201-205.pdf",
    images = "images/vcbmVitruvianBaby.jpg",
    thumbnails = "images/vcbmVitruvianBaby.jpg",
    url = "https://diglib.eg.org/handle/10.2312/vcbm20191245",
    project = {VIDI}
    }
    [PDF] [DOI] [Bibtex]
    @MISC {Moerth-2019-EUROVIS,
    booktitle = "EuroVis 2019 - Posters",
    editor = "Madeiras Pereira, João and Raidou, Renata Georgia",
    title = "The Vitruvian Baby: Interactive Reformation of Fetal Ultrasound Data to a T-Position",
    author = "M\"{o}rth, Eric and Raidou, Renata Georgia and Smit, Noeska and Viola, Ivan",
    year = "2019",
    abstract = "Three dimensional (3D) ultrasound is commonly used in prenatal screening, because it provides insight into the shape as well as the organs of the fetus. Currently, gynecologists take standardized measurements of the fetus and check for abnormalities by analyzing the data in a 2D slice view. The fetal pose may complicate taking precise measurements in such a view. Analyzing the data in a 3D view would enable the viewer to better distinguish between artefacts and representative information. Standardization in medical imaging techniques aims to make the data comparable between different investigations and patients. It is already used in different medical applications, for example in magnetic resonance imaging (MRI). With this work, we introduce a novel approach to provide a standardization method for 3D ultrasound screenings of fetuses. The approach consists of six steps and is called “The Vitruvian Baby”. The input is the data of the 3D ultrasound screening of a fetus and the output shows the fetus in a standardized T-pose in which measurements can be made. The precision of standardized measurements compared to the gold standard is 91.08% for the finger-to-finger span and 94.05% for the head-to-toe measurement.",
    publisher = "The Eurographics Association",
    howpublished = "Poster presented at the EuroVis conference 2019",
    ISBN = "978-3-03868-088-8",
    DOI = "10.2312/eurp.20191147",
    pdf = "pdfs/EUROVIS_TheVitruvianBaby_Poster.pdf",
    images = "images/EUROVISTheVitruvianBabyPoster.png",
    thumbnails = "images/EUROVISTheVitruvianBabyPoster.png",
    url = "https://diglib.eg.org/handle/10.2312/eurp20191147"
    }

2018

    [PDF] [DOI] [YT] [Bibtex]
    @INPROCEEDINGS {Meuschke2018VCBM,
    author = "Monique Meuschke and Noeska N. Smit and Nils Lichtenberg and Bernhard Preim and Kai Lawonn",
    title = "Automatic Generation of Web-Based User Studies to Evaluate Depth Perception in Vascular Surface Visualizations",
    booktitle = "Proceedings of VCBM 2018",
    year = "2018",
    editor = "Anna Puig Puig and Thomas Schultz and Anna Vilanova and Ingrid Hotz and Barbora Kozlikova and Pere-Pau Vázquez",
    pages = "033--044",
    address = "Granada, Spain",
    publisher = "Eurographics Association",
    abstract = "User studies are often required in biomedical visualization application papers in order to provide evidence for the utility of the presented approach. An important aspect is how well depth information can be perceived, as depth encoding is important to enable an understandable representation of complex data. Unfortunately, in practice there is often little time available to perform such studies, and setting up and conducting user studies may be labor-intensive. In addition, it can be challenging to reach enough participants to support the contribution claims of the paper. In this paper, we propose a system that allows biomedical visualization researchers to quickly generate perceptual task-based user studies for novel surface visualizations, and to perform the resulting experiment via a web interface. This approach helps to reduce effort in the setup of user studies themselves, and at the same time leverages a web-based approach that can help researchers attract more participants to their study. We demonstrate our system using the specific application of depth judgment tasks to evaluate vascular surface visualizations, since there is a lot of recent interest in this area. However, the system is also generally applicable for conducting other task-based user studies in biomedical visualization.",
    pdf = "pdfs/meuschke2018VCBM.pdf",
    images = "images/vcbm2018.png",
    thumbnails = "images/vcbm2018.png",
    youtube = "https://www.youtube.com/watch?v=8lns8GGpPJI",
    doi = "10.2312/vcbm.20181227",
    project = "ttmedvis"
    }
    [PDF] [YT] [Bibtex]
    @ARTICLE {lichtenbergsmithansenlawonn2018,
    author = "Nils Lichtenberg and Noeska Smit and Christian Hansen and Kai Lawonn",
    title = "Real-time field aligned stripe patterns",
    journal = "Computers \& Graphics",
    year = "2018",
    volume = "74",
    pages = "137--149",
    month = "aug",
    abstract = "In this paper, we present a parameterization technique that can be applied to surface meshes in real-time without time-consuming preprocessing steps. The parameterization is suitable for the display of (un-)oriented patterns and texture patches, and to sample a surface in a periodic fashion. The method is inspired by existing work that solves a global optimization problem to generate a continuous stripe pattern on the surface, from which texture coordinates can be derived. We propose a local optimization approach that is suitable for parallel execution on the GPU, which drastically reduces computation time. With this, we achieve on-the-fly texturing of 3D, medium-sized (up to 70k vertices) surface meshes. The algorithm takes a tangent vector field as input and aligns the texture coordinates to it. Our technique achieves real-time parameterization of the surface meshes by employing a parallelizable local search algorithm that converges to a local minimum in a few iterations. The calculation in real-time allows for live parameter updates and determination of varying texture coordinates. Furthermore, the method can handle non-manifold meshes. The technique is useful in various applications, e.g., biomedical visualization and flow visualization. We highlight our method's potential by providing usage scenarios for several applications.",
    pdf = "pdfs/lichtenberg_2018.pdf",
    images = "images/Selection_384.png",
    thumbnails = "images/1-s2.0-S0097849318300591-fx1_lrg.jpg",
    youtube = "https://www.youtube.com/watch?v=7CpkHy8KPK8",
    project = "ttmedvis"
    }
    [PDF] [Bibtex]
    @MISC {Smit18MMIV,
    author = "N. N. Smit and S. Bruckner and H. Hauser and I. Haldorsen and A. Lundervold and A. S. Lundervold and E. Hodneland and L. Oltedal and K. Specht and E. R. Gr\"{u}ner",
    title = "Research Agenda of the Mohn Medical Imaging and Visualization Centre in Bergen, Norway",
    howpublished = "Poster presented at the EG VCBM workshop 2018",
    month = "September",
    year = "2018",
    abstract = "The Mohn Medical Imaging and Visualization Centre (MMIV) was recently established in collaboration between the University of Bergen, Norway, and the Haukeland University Hospital in Bergen with generous financial support from the Bergen Research Foundation (BFS) to conduct cross-disciplinary research related to state-of-the-art medical imaging, including preclinical and clinical high-field MRI, CT and hybrid PET/CT/MR. The overall goal of the Centre is to research new methods in quantitative imaging and interactive visualization to predict changes in health and disease across spatial and temporal scales. This encompasses research in feature detection, feature extraction, and feature prediction, as well as on methods and techniques for the interactive visualization of spatial and abstract data related to and derived from these features. With special emphasis on the natural and medical sciences, the long-term goal of the Centre is to consolidate excellence in the interplay between medical imaging (physics, chemistry, radiography, radiology), and visualization (computer science and mathematics) and develop novel and refined imaging methods that may ultimately improve patient care. In this poster, we describe the overall research agenda of MMIV and describe the four core projects in the centre.",
    pdf = "pdfs/smit2018posterabstract.pdf",
    images = "images/MMIVPoster.png",
    thumbnails = "images/MMIVPoster.png",
    location = "Granada, Spain",
    project = "VIDI"
    }

2017

    [PDF] [DOI] [YT] [Bibtex]
    @ARTICLE {Smit-2017-PAS,
    author = "Noeska Smit and Kai Lawonn and Annelot Kraima and Marco DeRuiter and Hessam Sokooti and Stefan Bruckner and Elmar Eisemann and Anna Vilanova",
    title = "PelVis: Atlas-based Surgical Planning for Oncological Pelvic Surgery",
    journal = "IEEE Transactions on Visualization and Computer Graphics",
    year = "2017",
    volume = "23",
    number = "1",
    pages = "741--750",
    month = "jan",
    abstract = "Due to the intricate relationship between the pelvic organs and vital structures, such as vessels and nerves, pelvic anatomy is often considered to be complex to comprehend. In oncological pelvic surgery, a trade-off has to be made between complete tumor resection and preserving function by preventing damage to the nerves. Damage to the autonomic nerves causes undesirable post-operative side-effects such as fecal and urinal incontinence, as well as sexual dysfunction in up to 80 percent of the cases. Since these autonomic nerves are not visible in pre-operative MRI scans or during surgery, avoiding nerve damage during such a surgical procedure becomes challenging. In this work, we present visualization methods to represent context, target, and risk structures for surgical planning. We employ distance-based and occlusion management techniques in an atlas-based surgical planning tool for oncological pelvic surgery. Patient-specific pre-operative MRI scans are registered to an atlas model that includes nerve information. Through several interactive linked views, the spatial relationships and distances between the organs, tumor and risk zones are visualized to improve understanding, while avoiding occlusion. In this way, the surgeon can examine surgically relevant structures and plan the procedure before going into the operating theater, thus raising awareness of the autonomic nerve zone regions and potentially reducing post-operative complications. Furthermore, we present the results of a domain expert evaluation with surgical oncologists that demonstrates the advantages of our approach.",
    pdf = "pdfs/Smit-2017-PAS.pdf",
    images = "images/Smit-2017-PAS.jpg",
    thumbnails = "images/Smit-2017-PAS.png",
    youtube = "https://www.youtube.com/watch?v=vHp05I5-hp8",
    doi = "10.1109/TVCG.2016.2598826",
    event = "IEEE SciVis 2016",
    keywords = "atlas, surgical planning, medical visualization",
    location = "Baltimore, USA"
    }
    [PDF] [DOI] [Bibtex]
    @ARTICLE {LawonnSmit-2017-Survey,
    author = "Lawonn, K. and Smit, N.N. and B{\"u}hler, K. and Preim, B.",
    title = "A Survey on Multimodal Medical Data Visualization",
    journal = "Computer Graphics Forum",
    year = "2017",
    volume = "37",
    number = "1",
    pages = "413--438",
    abstract = "Multi-modal data of the complex human anatomy contain a wealth of information. To visualize and explore such data, techniques for emphasizing important structures and controlling visibility are essential. Such fused overview visualizations guide physicians to suspicious regions to be analysed in detail, e.g. with slice-based viewing. We give an overview of the state of the art in multi-modal medical data visualization techniques. Multi-modal medical data consist of multiple scans of the same subject using various acquisition methods, often combining multiple complementary types of information. Three-dimensional visualization techniques for multi-modal medical data can be used in diagnosis, treatment planning, doctor–patient communication as well as interdisciplinary communication. Over the years, multiple techniques have been developed in order to cope with the various associated challenges and present the relevant information from multiple sources in an insightful way. We present an overview of these techniques and analyse the specific challenges that arise in multi-modal data visualization and how recent works aimed to solve these, often using smart visibility techniques. We provide a taxonomy of these multi-modal visualization applications based on the modalities used and the visualization techniques employed. Additionally, we identify unsolved problems as potential future research directions.",
    pdf = "pdfs/LawonnSmit-2017-MULTI.pdf",
    images = "images/LawonnSmit-2017-MULTI.jpg",
    thumbnails = "images/LawonnSmit-2017-MULTI-TN.png",
    issn = "1467-8659",
    url = "http://dx.doi.org/10.1111/cgf.13306",
    doi = "10.1111/cgf.13306",
    keywords = "medical imaging, visualization, scientific visualization, volume visualization, Medical Imaging [Visualization], Scientific Visualization [Visualization], Volume Visualization [Visualization], Multimodal Medical Data"
    }

2016

    [PDF] [Bibtex]
    @INPROCEEDINGS {Smit2016SLINE,
    author = "Nils Lichtenberg and Noeska Smit and Christian Hansen and Kai Lawonn",
    title = "Sline: Seamless Line Illustration for Interactive Biomedical Visualization",
    booktitle = "Proceedings of VCBM 2016",
    year = "2016",
    month = "sep",
    abstract = "In medical visualization of surface information, problems often arise when visualizing several overlapping structures simultaneously. There is a trade-off between visualizing multiple structures in a detailed way and limiting visual clutter, in order to allow users to focus on the main structures. Illustrative visualization techniques can help alleviate these problems by defining a level of abstraction per structure. However, clinical uptake of these advanced visualization techniques so far has been limited due to the complex parameter settings required. To bring advanced medical visualization closer to clinical application, we propose a novel illustrative technique that offers a seamless transition between various levels of abstraction and detail. Using a single comprehensive parameter, users are able to quickly define a visual representation per structure that fits the visualization requirements for focus and context structures. This technique can be applied to any biomedical context in which multiple surfaces are routinely visualized, such as neurosurgery, radiotherapy planning or drug design. Additionally, we introduce a novel hatching technique that runs in real time and does not require texture coordinates. An informal evaluation with experts from different biomedical domains reveals that our technique allows users to design focus-and-context visualizations in a fast and intuitive manner.",
    pdf = "pdfs/Lichtenberg-2016-SLINE.pdf",
    images = "images/Smit-2016-SLINE.PNG",
    thumbnails = "images/Smit-2016-SLINE.jpg",
    proceedings = "Proceedings of Eurographics Workshop on Visual Computing in Biology and Medicine",
    event = "VCBM 2016",
    keywords = "surface rendering, medical visualization, illustrative rendering",
    location = "Bergen, Norway"
    }