VIDI: Visual Data Science for Large Scale Hypothesis Management in Imaging Biomarker Discovery

Project Overview

Technology is revolutionizing medicine. New scanners enable doctors to “look into the patient’s body” and study their anatomy and physiology without the need for a scalpel. New scanning technologies emerge at an amazing speed, providing an ever-growing and increasingly varied look into medical conditions. Today, we can not only look at the bones within a body, but also examine soft tissue, blood flow, activation networks in the brain, and many more aspects of anatomy and physiology. The increasing amount and complexity of the acquired medical imaging data leads to new challenges in knowledge extraction and decision making.

In order to optimally exploit this new wealth of information, it is crucial that all this imaging data is successfully linked to the medical condition of the patient. In many cases, this is challenging, for example, when diagnosing early-stage cancer or mental disorders. Analogous to biomarkers, molecular structures that are used to identify medical conditions, imaging biomarkers are information structures in medical images that can help with diagnostics and treatment planning; they are formulated in terms of features that can be computed from the imaging data. Imaging biomarker discovery is a highly challenging task, and traditionally only a single hypothesis (for a new biomarker) is examined at a time.
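To make the notion of "features computed from the imaging data" concrete, the following is a minimal sketch of how simple candidate features might be derived from a 3D scan volume. All names here (`candidate_features`, the specific features chosen) are hypothetical illustrations, not part of the VIDI framework itself:

```python
import numpy as np

def candidate_features(volume, roi_mask):
    """Compute a few simple candidate imaging features for a region of interest.

    volume:   3D array of voxel intensities
    roi_mask: boolean 3D array marking the region of interest
    """
    roi = volume[roi_mask]
    return {
        "mean_intensity": float(roi.mean()),
        "intensity_std": float(roi.std()),
        "roi_volume_voxels": int(roi_mask.sum()),
        # fraction of ROI voxels more than one std above the ROI mean
        "high_intensity_fraction": float((roi > roi.mean() + roi.std()).mean()),
    }

# Example on synthetic data standing in for a real scan
rng = np.random.default_rng(0)
vol = rng.normal(100, 15, size=(32, 32, 32))
mask = np.zeros(vol.shape, dtype=bool)
mask[8:24, 8:24, 8:24] = True
feats = candidate_features(vol, mask)
```

Each hypothesis for a new imaging biomarker can then be phrased as a claim that some such feature (or combination of features) separates a medical condition from healthy controls.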

This makes it impossible to explore a large number of hypotheses, as well as more complex imaging biomarkers, across multi-aspect data. In the VIDI project, we propose to research and advance visual data science to improve imaging biomarker discovery through the visual integration of multi-aspect medical data with a new visualization-enabled hypothesis management framework.

We aim to reduce the time it takes to discover new imaging biomarkers by studying structured sets of hypotheses, examined at the same time, through the integration of computational approaches and interactive visual analysis techniques. A related goal is to enable the discovery of more complex imaging biomarkers, across multiple modalities, that can potentially characterize diseases more accurately. This should lead to a new way of designing innovative and effective imaging protocols and to the discovery of new imaging biomarkers, improving suboptimal imaging protocols and thus also reducing scanning costs. Our project is a truly interdisciplinary research effort, bringing visualization research and imaging research together in one project, and it is perfectly suited for the new Centre for Medical Imaging and Visualization that has been established in Bergen, Norway.

VIDI Approach

To achieve these goals, we have divided the VIDI project into seven discrete work packages (WPs), which can be executed, to some extent, in parallel:

WP1: Hypothesis management
Research and design of the methodologies necessary for structuring, representing, exploring, and analyzing hypothesis sets. Development of a visual language for data interactions and of methods for linking spatial with non-spatial data.

WP2: Data & Features
Exploration of medical image feature extraction and visualization, and definition of the user experience for selecting and refining data features, extending into additional dimensions.

WP3: Hypotheses Scoring
Development of methods for the interactive visual ranking and analysis of user hypothesis sets. Exploration of methods to provide users with an evaluation preview of the investigated hypotheses, linked to the hypothesis visualization and rankings.

WP4: Optimized Imaging
Evaluation of existing imaging protocols and development of new imaging techniques. Investigation of the imaging process to guard against suboptimal image acquisitions.

WP5: Integration
Integration of the solutions from work packages 1–4.

WP6: Evaluation
Evaluation of the new hypothesis management methods in the context of three target applications: 1) gynecologic cancer; 2) neuroinflammation in multiple sclerosis (MS); and 3) neurodegenerative disorders.

WP7: Management & Dissemination
Coordination between the involved partners, planning and reporting, and dissemination.
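The hypothesis scoring idea of WP3 can be sketched in a few lines: given candidate biomarker features for two patient groups, each hypothesis ("feature X separates the groups") is scored and the set is ranked for visual inspection. This is a simplified, hypothetical illustration using a rank-based AUC as the score, not the actual VIDI scoring method; all function names are invented for this sketch:

```python
import numpy as np

def auc_score(values_pos, values_neg):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) statistic.

    Assumes untied values for simplicity.
    """
    all_vals = np.concatenate([values_pos, values_neg])
    ranks = all_vals.argsort().argsort() + 1  # 1-based ranks
    n_pos, n_neg = len(values_pos), len(values_neg)
    rank_sum_pos = ranks[:n_pos].sum()
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def rank_hypotheses(features_by_name, labels):
    """Score each candidate feature by how well it separates the two groups."""
    labels = np.asarray(labels, dtype=bool)
    scores = {}
    for name, values in features_by_name.items():
        values = np.asarray(values, dtype=float)
        auc = auc_score(values[labels], values[~labels])
        scores[name] = max(auc, 1 - auc)  # direction-agnostic separability
    # best-separating hypotheses first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: one cleanly separating feature, one noisy one
ranked = rank_hypotheses(
    {"tumor_mean_intensity": [5, 6, 7, 1, 2, 3],
     "noise_feature": [3, 1, 6, 2, 7, 5]},
    labels=[1, 1, 1, 0, 0, 0],
)
```

In the project itself, such scores would feed the interactive ranking and preview views of WP3 rather than a static sorted list.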

VIDI Project Team

PI: Helwig Hauser
Co-PIs: Stefan Bruckner and Renate Grüner, MMIV
Associated researcher: Noeska Smit
PhD students: Laura Garrison and Fourough Gharbalchi

This project is funded by the Bergen Research Foundation (BFS) and the University of Bergen.



    @ARTICLE {Garrison-2021-DimLift,
    author = {Garrison, Laura and M\"{u}ller, Juliane and Schreiber, Stefanie and Oeltze-Jafra, Steffen and Hauser, Helwig and Bruckner, Stefan},
    title = {DimLift: Interactive Hierarchical Data Exploration through Dimensional Bundling},
    journal = {IEEE Transactions on Visualization and Computer Graphics},
    year = {2021},
    abstract = {The identification of interesting patterns and relationships is essential to exploratory data analysis. This becomes increasingly difficult in high dimensional datasets. While dimensionality reduction techniques can be utilized to reduce the analysis space, these may unintentionally bury key dimensions within a larger grouping and obfuscate meaningful patterns. With this work we introduce DimLift, a novel visual analysis method for creating and interacting with dimensional bundles. Generated through an iterative dimensionality reduction or user-driven approach, dimensional bundles are expressive groups of dimensions that contribute similarly to the variance of a dataset. Interactive exploration and reconstruction methods via a layered parallel coordinates plot allow users to lift interesting and subtle relationships to the surface, even in complex scenarios of missing and mixed data types. We exemplify the power of this technique in an expert case study on clinical cohort data alongside two additional case examples from nutrition and ecology.},
    pdf = {pdfs/garrison-2021-dimlift.pdf},
    images = {images/garrison_dimlift.jpg},
    thumbnails = {images/garrison_dimlift_thumb.jpg},
    doi = {10.1109/TVCG.2021.3057519},
    project = {VIDI},
    note = {Accepted for publication, to appear in an upcoming issue}
    }
    @ARTICLE {Mueller-2021-IDA,
    author = {M\"{u}ller, Juliane and Garrison, Laura and Ulbrich, Philipp and Schreiber, Stefanie and Bruckner, Stefan and Hauser, Helwig and Oeltze-Jafra, Steffen},
    title = {Integrated Dual Analysis of Quantitative and Qualitative High-Dimensional Data},
    journal = {IEEE Transactions on Visualization and Computer Graphics},
    year = {2021},
    abstract = {The Dual Analysis framework is a powerful enabling technology for the exploration of high dimensional quantitative data by treating data dimensions as first-class objects that can be explored in tandem with data values. In this work, we extend the Dual Analysis framework through the joint treatment of quantitative (numerical) and qualitative (categorical) dimensions. Computing common measures for all dimensions allows us to visualize both quantitative and qualitative dimensions in the same view. This enables a natural joint treatment of mixed data during interactive visual exploration and analysis. Several measures of variation for nominal qualitative data can also be applied to ordinal qualitative and quantitative data. For example, instead of measuring variability from a mean or median, other measures assess inter-data variation or average variation from a mode. In this work, we demonstrate how these measures can be integrated into the Dual Analysis framework to explore and generate hypotheses about high-dimensional mixed data. A medical case study using clinical routine data of patients suffering from Cerebral Small Vessel Disease (CSVD), conducted with a senior neurologist and a medical student, shows that a joint Dual Analysis approach for quantitative and qualitative data can rapidly lead to new insights based on which new hypotheses may be generated.},
    pdf = {pdfs/Mueller_2020_IDA.pdf},
    images = {images/Mueller_2020_IDA.jpg},
    thumbnails = {images/Mueller_2020_IDA.png},
    doi = {10.1109/TVCG.2021.3056424},
    project = {VIDI},
    note = {Accepted for publication, to appear in an upcoming issue}
    }


    @ARTICLE {Garrison-2020-IVE,
    author = {Garrison, Laura and Va\v{s}\'{i}\v{c}ek, Jakub and Craven, Alex R. and Gr\"{u}ner, Renate and Smit, Noeska and Bruckner, Stefan},
    title = {Interactive Visual Exploration of Metabolite Ratios in MR Spectroscopy Studies},
    journal = {Computers \& Graphics},
    volume = {92},
    pages = {1--12},
    keywords = {medical visualization, magnetic resonance spectroscopy data, information visualization, user-centered design},
    doi = {10.1016/j.cag.2020.08.001},
    abstract = {Magnetic resonance spectroscopy (MRS) is an advanced biochemical technique used to identify metabolic compounds in living tissue. While its sensitivity and specificity to chemical imbalances render it a valuable tool in clinical assessment, the results from this modality are abstract and difficult to interpret. With this design study we characterized and explored the tasks and requirements for evaluating these data from the perspective of a MRS research specialist. Our resulting tool, SpectraMosaic, links with upstream spectroscopy quantification software to provide a means for precise interactive visual analysis of metabolites with both single- and multi-peak spectral signatures. Using a layered visual approach, SpectraMosaic allows researchers to analyze any permutation of metabolites in ratio form for an entire cohort, or by sample region, individual, acquisition date, or brain activity status at the time of acquisition. A case study with three MRS researchers demonstrates the utility of our approach in rapid and iterative spectral data analysis.},
    year = {2020},
    pdf = "pdfs/Garrison-2020-IVE.pdf",
    thumbnails = "images/Garrison-2020-IVE.png",
    images = "images/Garrison-2020-IVE.jpg",
    project = "VIDI"
    }
    @ARTICLE {Solteszova-2019-MLT,
    author = {Solteszova, V. and Smit, N. N. and Stoppel, S. and Grüner, R. and Bruckner, S.},
    title = {Memento: Localized Time-Warping for Spatio-Temporal Selection},
    journal = {Computer Graphics Forum},
    volume = {39},
    number = {1},
    pages = {231--243},
    year = {2020},
    keywords = {interaction, temporal data, visualization, spatio-temporal projection},
    images = "images/Solteszova-2019-MLT.jpg",
    thumbnails = "images/Solteszova-2019-MLT-1.jpg",
    pdf = "pdfs/Solteszova-2019-MLT.pdf",
    doi = {10.1111/cgf.13763},
    abstract = {Abstract Interaction techniques for temporal data are often focused on affecting the spatial aspects of the data, for instance through the use of transfer functions, camera navigation or clipping planes. However, the temporal aspect of the data interaction is often neglected. The temporal component is either visualized as individual time steps, an animation or a static summary over the temporal domain. When dealing with streaming data, these techniques are unable to cope with the task of re-viewing an interesting local spatio-temporal event, while continuing to observe the rest of the feed. We propose a novel technique that allows users to interactively specify areas of interest in the spatio-temporal domain. By employing a time-warp function, we are able to slow down time, freeze time or even travel back in time, around spatio-temporal events of interest. The combination of such a (pre-defined) time-warp function and brushing directly in the data to select regions of interest allows for a detailed review of temporally and spatially localized events, while maintaining an overview of the global spatio-temporal data. We demonstrate the utility of our technique with several usage scenarios.},
    project = "MetaVis,ttmedvis,VIDI"
    }


    @inproceedings {Bartsch-2019-MVA,
    booktitle = {Proceedings of VCBM 2019 (Short Papers)},
    title = {MedUse: A Visual Analysis Tool for Medication Use Data in the ABCD Study},
    author = {Bartsch, Hauke and Garrison, Laura and Bruckner, Stefan and Wang, Ariel and Tapert, Susan F. and Grüner, Renate},
    abstract = {The RxNorm vocabulary is a yearly-published biomedical resource providing normalized names for medications. It is used to capture medication use in the Adolescent Brain Cognitive Development (ABCD) study, an active and publicly available longitudinal research study following 11,800 children over 10 years. In this work, we present medUse, a visual tool allowing researchers to explore and analyze the relationship of drug category to cognitive or imaging derived measures using ABCD study data. Our tool provides position-based context for tree traversal and selection granularity of both study participants and drug category. Developed as part of the Data Exploration and Analysis Portal (DEAP), medUse is available to more than 600 ABCD researchers world-wide. By integrating medUse into an actively used research product we are able to reach a wide audience and increase the practical relevance of visualization for the biomedical field.},
    year = {2019},
    pages = {97--101},
    images = "images/Bartsch-2019-MVA.jpg",
    thumbnails = "images/Bartsch-2019-MVA.png",
    pdf = "pdfs/Bartsch-2019-MVA.pdf",
    publisher = {The Eurographics Association},
    ISSN = {2070-5786},
    ISBN = {978-3-03868-081-9},
    DOI = {10.2312/vcbm.20191236},
    project = {VIDI}
    }
    @INPROCEEDINGS {Garrison2019SM,
    author = {Garrison, Laura and Va\v{s}\'{\i}\v{c}ek, Jakub and Gr\"{u}ner, Renate and Smit, Noeska and Bruckner, Stefan},
    title = {SpectraMosaic: An Exploratory Tool for the Interactive Visual Analysis of Magnetic Resonance Spectroscopy Data},
    month = {sep},
    year = {2019},
    booktitle = {Proceedings of VCBM 2019},
    pages = {1--10},
    event = "VCBM 2019",
    proceedings = "Proceedings of the 9th Eurographics Workshop on Visual Computing in Biology and Medicine",
    keywords = {medical visualization, magnetic resonance spectroscopy data, information visualization, user-centered design},
    images = "images/garrison_VCBM19spectramosaic_full.PNG",
    thumbnails = "images/garrison_VCBM19spectramosaic_thumb.png",
    pdf = "pdfs/garrison_VCBM19spectramosaic.pdf",
    abstract = {Magnetic resonance spectroscopy (MRS) allows for assessment of tissue metabolite characteristics used often for early detection and treatment evaluation of brain-related pathologies. However, meaningful variations in ratios of tissue metabolites within a sample area are difficult to capture with current visualization tools. Furthermore, the learning curve to interpretation is steep and limits the more widespread adoption of MRS in clinical practice. In this design study, we collaborated with domain experts to design a novel visualization tool for the exploration of tissue metabolite concentration ratios in spectroscopy clinical and research studies. We present a data and task analysis for this domain, where MRS data attributes can be categorized into tiers of visual priority. We furthermore introduce a novel set of visual encodings for these attributes. Our result is SpectraMosaic (see Figure~\ref{fig:teaser}), an interactive insight-generation tool for rapid exploration and comparison of metabolite ratios. We validate our approach with two case studies from MR spectroscopy experts, providing early qualitative evidence of the efficacy of the system for visualization of spectral data and affording deeper insights into these complex heterogeneous data.},
    doi = "10.2312/vcbm.20191225",
    project = "VIDI"
    }
    @INCOLLECTION {Smit-2019-AtlasVis,
    title={Towards Advanced Interactive Visualization for Virtual Atlases},
    author={Smit, Noeska and Bruckner, Stefan},
    booktitle={Biomedical Visualisation},
    doi = {10.1007/978-3-030-19385-0_6},
    images = "images/Smit-2019-AtlasVis.png",
    thumbnails = "images/Smit-2019-AtlasVis.png",
    abstract = "An atlas is generally defined as a bound collection of tables, charts or illustrations describing a phenomenon. In an anatomical atlas for example, a collection of representative illustrations and text describes anatomy for the purpose of communicating anatomical knowledge. The atlas serves as reference frame for comparing and integrating data from different sources by spatially or semantically relating collections of drawings, imaging data, and/or text. In the field of medical image processing, atlas information is often constructed from a collection of regions of interest, which are based on medical images that are annotated by domain experts. Such an atlas may be employed for example for automatic segmentation of medical imaging data. The combination of interactive visualization techniques with atlas information opens up new possibilities for content creation, curation, and navigation in virtual atlases. With interactive visualization of atlas information, students are able to inspect and explore anatomical atlases in ways that were not possible with the traditional method of presenting anatomical atlases in book format, such as viewing the illustrations from other viewpoints. With advanced interaction techniques, it becomes possible to query the data that forms the basis for the atlas, thus empowering researchers to access a wealth of information in new ways. So far, atlas-based visualization has been employed mainly for medical education, as well as biological research. In this survey, we provide an overview of current digital biomedical atlas tasks and applications and summarize relevant visualization techniques. We discuss recent approaches for providing next-generation visual interfaces to navigate atlas data that go beyond common text-based search and hierarchical lists. Finally, we reflect on open challenges and opportunities for the next steps in interactive atlas visualization.",
    project = "ttmedvis,MetaVis,VIDI"
    }
    @MISC {Garrison2019SM_eurovis,
    title = {A Visual Encoding System for Comparative Exploration of Magnetic Resonance Spectroscopy Data},
    author = {Garrison, Laura and Va\v{s}\'{\i}\v{c}ek, Jakub and Gr\"{u}ner, Renate and Smit, Noeska and Bruckner, Stefan},
    abstract = "Magnetic resonance spectroscopy (MRS) allows for assessment of tissue metabolite characteristics used often for early detection and treatment evaluation of intracranial pathologies. In particular, this non-invasive technique is important in the study of metabolic changes related to brain tumors, strokes, seizure disorders, Alzheimer's disease, depression, as well as other diseases and disorders affecting the brain. However, meaningful variations in ratios of tissue metabolites within a sample area are difficult to capture with current visualization tools. Furthermore, the learning curve to interpretation is steep and limits the more widespread adoption of MRS in clinical practice. In this work we present a novel, tiered visual encoding system for multi-dimensional MRS data to aid in the visual exploration of metabolite concentration ratios. Our system was developed in close collaboration with domain experts including detailed data and task analyses. This visual encoding system was subsequently realized as part of an interactive insight-generation tool for rapid exploration and comparison of metabolite ratio variation for deeper insights to these complex data.",
    booktitle = {Proceedings of the EuroVis Conference - Posters (EuroVis ’19)},
    year = {2019},
    howpublished = "Poster presented at the EuroVis conference 2019",
    keywords = {medical visualization, magnetic resonance spectroscopy data, information visualization, user-centered design},
    images = "images/garrison_eurovis2019_SM_encodings.png",
    thumbnails = "images/garrison_eurovis2019_SM_encodings.png",
    pdf = "pdfs/garrison_eurovis2019_SM.pdf",
    project = "VIDI"
    }
    @inproceedings {Smit-2019-DBP,
    booktitle = {Eurographics 2019 - Dirk Bartz Prize},
    editor = {Bruckner, Stefan and Oeltze-Jafra, Steffen},
    title = {{Model-based Visualization for Medical Education and Training}},
    author = {Smit, Noeska and Lawonn, Kai and Kraima, Annelot and deRuiter, Marco and Bruckner, Stefan and Eisemann, Elmar and Vilanova, Anna},
    year = {2019},
    publisher = {The Eurographics Association},
    ISSN = {1017-4656},
    DOI = {10.2312/egm.20191033},
    pdf = "pdfs/Smit_DBPrize_2019.pdf",
    images = "images/Smit_DBPrize_2019.png",
    thumbnails = "images/Smit_DBPrize_2019.png",
    abstract = "Anatomy, or the study of the structure of the human body, is an essential component of medical education. Certain parts of human anatomy are considered to be more complex to understand than others, due to a multitude of closely related structures. Furthermore, there are many potential variations in anatomy, e.g., different topologies of vessels, and knowledge of these variations is critical for many in medical practice.
    Some aspects of individual anatomy, such as the autonomic nerves, are not visible in individuals through medical imaging techniques or even during surgery, placing these nerves at risk for damage.
    3D models and interactive visualization techniques can be used to improve understanding of this complex anatomy, in combination with traditional medical education paradigms.
    We present a framework incorporating several advanced medical visualization techniques and applications for teaching and training purposes, which is the result of an interdisciplinary project.
    In contrast to previous approaches which focus on general anatomy visualization or direct visualization of medical imaging data, we employ model-based techniques to represent variational anatomy, as well as anatomy not visible from imaging. Our framework covers the complete spectrum including general anatomy, anatomical variations, and anatomy in individual patients.
    Applications within our framework were evaluated positively with medical users, and our educational tool for general anatomy is in use in a Massive Open Online Course (MOOC) on anatomy, which had over 17000 participants worldwide in the first run.",
    project = "ttmedvis,VIDI"
    }
    @inproceedings {Moerth-2019-VCBM,
    booktitle = "Eurographics Workshop on Visual Computing for Biology and Medicine",
    editor = "Kozlíková, Barbora and Linsen, Lars and Vázquez, Pere-Pau and Lawonn, Kai and Raidou, Renata Georgia",
    abstract = "Three-dimensional (3D) ultrasound imaging and visualization is often used in medical diagnostics, especially in prenatal screening. Screening the development of the fetus is important to assess possible complications early on. State of the art approaches involve taking standardized measurements to compare them with standardized tables. The measurements are taken in a 2D slice view, where precise measurements can be difficult to acquire due to the fetal pose. Performing the analysis in a 3D view would enable the viewer to better discriminate between artefacts and representative information. Additionally making data comparable between different investigations and patients is a goal in medical imaging techniques and is often achieved by standardization. With this paper, we introduce a novel approach to provide a standardization method for 3D ultrasound fetus screenings. Our approach is called “The Vitruvian Baby” and incorporates a complete pipeline for standardized measuring in fetal 3D ultrasound. The input of the method is a 3D ultrasound screening of a fetus and the output is the fetus in a standardized T-pose. In this pose, taking measurements is easier and comparison of different fetuses is possible. In addition to the transformation of the 3D ultrasound data, we create an abstract representation of the fetus based on accurate measurements. We demonstrate the accuracy of our approach on simulated data where the ground truth is known.",
    title = "The Vitruvian Baby: Interactive Reformation of Fetal Ultrasound Data to a T-Position",
    author = "Mörth, Eric and Raidou, Renata Georgia and Viola, Ivan and Smit, Noeska",
    year = "2019",
    publisher = "The Eurographics Association",
    ISSN = "2070-5786",
    ISBN = "978-3-03868-081-9",
    DOI = "10.2312/vcbm.20191245",
    pdf = "pdfs/VCBM_TheVitruvianBaby_ShortPaper_201-205.pdf",
    images = "images/vcbmVitruvianBaby.jpg",
    thumbnails = "images/vcbmVitruvianBaby.jpg",
    project = {VIDI}
    }


    @MISC {Smit18MMIV,
    author = "N. N. Smit and S. Bruckner and H. Hauser and I. Haldorsen and A. Lundervold and A. S. Lundervold and E. Hodneland and L. Oltedal and K. Specht and E. R. Gruner",
    title = "Research Agenda of the Mohn Medical Imaging and Visualization Centre in Bergen, Norway",
    howpublished = "Poster presented at the EG VCBM workshop 2018",
    month = "September",
    year = "2018",
    abstract = "The Mohn Medical Imaging and Visualization Centre (MMIV) was recently established in collaboration between the University of Bergen, Norway, and the Haukeland University Hospital in Bergen with generous financial support from the Bergen Research Foundation (BFS) to conduct cross-disciplinary research related to state-of-the-art medical imaging, including preclinical and clinical high-field MRI, CT and hybrid PET/CT/MR. The overall goal of the Centre is to research new methods in quantitative imaging and interactive visualization to predict changes in health and disease across spatial and temporal scales. This encompasses research in feature detection, feature extraction, and feature prediction, as well as on methods and techniques for the interactive visualization of spatial and abstract data related to and derived from these features. With special emphasis on the natural and medical sciences, the long-term goal of the Centre is to consolidate excellence in the interplay between medical imaging (physics, chemistry, radiography, radiology), and visualization (computer science and mathematics) and develop novel and refined imaging methods that may ultimately improve patient care. In this poster, we describe the overall research agenda of MMIV and describe the four core projects in the centre.",
    pdf = "pdfs/smit2018posterabstract.pdf",
    images = "images/MMIVPoster.png",
    thumbnails = "images/MMIVPoster.png",
    location = "Granada, Spain",
    project = "VIDI"
    }