
Project Overview
Technology is revolutionizing medicine. New scanners enable doctors to “look into the patient’s body” and study anatomy and physiology without the need for a scalpel. New scanning technologies emerge at an amazing speed, providing an ever-growing and increasingly varied look into medical conditions. Today, we can not only look at the bones within a body, but also examine soft tissue, blood flow, activation networks in the brain, and many other aspects of anatomy and physiology. The increased amount and complexity of the acquired medical imaging data lead to new challenges in knowledge extraction and decision making.
In order to optimally exploit this new wealth of information, it is crucial that all this imaging data is successfully linked to the medical condition of the patient. In many cases, this is challenging, for example when diagnosing early-stage cancer or mental disorders. Analogous to biomarkers, which are molecular structures used to identify medical conditions, imaging biomarkers are information structures in medical images that can help with diagnostics and treatment planning, formulated in terms of features that can be computed from the imaging data. Imaging biomarker discovery is a highly challenging task, and traditionally only a single hypothesis (for a new biomarker) is examined at a time.
This makes it impossible to explore a large number of imaging biomarkers, as well as more complex ones, across multi-aspect data. In the VIDI project, we propose to research and advance visual data science to improve imaging biomarker discovery through the visual integration of multi-aspect medical data within a new visualization-enabled hypothesis management framework.
We aim to reduce the time it takes to discover new imaging biomarkers by studying structured sets of hypotheses, examined simultaneously, through the integration of computational approaches and interactive visual analysis techniques. A related goal is to enable the discovery of more complex imaging biomarkers, across multiple modalities, that can potentially characterize diseases more accurately. This should lead to a new way of designing innovative and effective imaging protocols and to the discovery of new imaging biomarkers, improving suboptimal imaging protocols and thus also reducing scanning costs. Our project is a truly interdisciplinary research effort, bringing visualization research and imaging research together in one project, and it is perfectly suited for the novel Centre for Medical Imaging and Visualization that has been established in Bergen, Norway.
VIDI Approach
To achieve these goals, we have divided the VIDI project into seven discrete work packages (WPs), which can, to some extent, be executed in parallel:
WP1: Hypothesis management
Research and design of the methodologies necessary for structuring, representing, exploring, and analyzing hypothesis sets; development of a visual language for data interactions and of methods for linking spatial with non-spatial data.
WP2: Data & Features
Exploration of medical image feature extraction and visualization; definition of the user experience (UX) for the selection and refinement of data features, extending into additional dimensions.
WP3: Hypotheses Scoring
Development of methods for the interactive visual ranking and analysis of user hypothesis sets; exploration of methods to provide the user with an evaluation preview of investigated hypotheses, linked to hypothesis visualizations and rankings.
WP4: Optimized Imaging
Evaluation of existing imaging protocols and development of new imaging techniques; investigation of the imaging process to guard against suboptimal image acquisitions.
WP5: Integration
Integration of the solutions from work packages 1-4.
WP6: Evaluation
Evaluation of the new hypothesis management methods in the context of three target applications: 1) gynecologic cancer; 2) neuroinflammation in multiple sclerosis (MS); and 3) neurodegenerative disorders.
WP7: Management & Dissemination
Coordination between the involved partners, planning and reporting, and dissemination.
VIDI Project Team
PI: Helwig Hauser
Co-PIs: Stefan Bruckner and Renate Grüner, MMIV
Associated researcher: Noeska Smit
PhD students: Laura Garrison and Fourough Gharbalchi
This project is funded by the Bergen Research Foundation (BFS) and the University of Bergen.
Publications
2022
@article{Garrison2022MolColor,
author = "Laura A. Garrison and Stefan Bruckner",
title = "Considering Best Practices in Color Palettes for Molecular Visualizations",
journal = "Journal of Integrative Bioinformatics",
year = "2022",
abstract = "Biomedical illustration and visualization techniques provide a window into complex molecular worlds that are difficult to capture through experimental means alone. Biomedical illustrators frequently employ color to help tell a molecular story, e.g., to identify key molecules in a signaling pathway. Currently, color use for molecules is largely arbitrary and often chosen based on the client, cultural factors, or personal taste. The study of molecular dynamics is relatively young, and some stakeholders argue that color use guidelines would throttle the growth of the field. Instead, content authors have ample creative freedom to choose an aesthetic that, e.g., supports the story they want to tell. However, such creative freedom comes at a price. The color design process is challenging, particularly for those without a background in color theory. The result is a semantically inconsistent color space that reduces the interpretability and effectiveness of molecular visualizations as a whole. Our contribution in this paper is threefold. We first discuss some of the factors that contribute to this array of color palettes. Second, we provide a brief sampling of color palettes used in both industry and research sectors. Lastly, we suggest considerations for developing best practices around color palettes applied to molecular visualization.",
images = "images/garrison-molecularcolor-full.png",
thumbnails = "images/garrison-molecularcolor-thumb.png",
pdf = "pdfs/garrison-molecularcolor.pdf",
publisher = "De Gruyter",
project = "VIDI"
}
@article{Garrison2022PhysioSTAR,
author = "Laura A. Garrison and Ivan Kolesar and Ivan Viola and Helwig Hauser and Stefan Bruckner",
title = "Trends & Opportunities in Visualization for Physiology: A Multiscale Overview",
journal = "Computer Graphics Forum",
year = "2022",
abstract = "Combining elements of biology, chemistry, physics, and medicine, the science of human physiology is complex and multifaceted. In this report, we offer a broad and multiscale perspective on key developments and challenges in visualization for physiology. Our literature search process combined standard methods with a state-of-the-art visual analysis search tool to identify surveys and representative individual approaches for physiology. Our resulting taxonomy sorts literature on two levels. The first level categorizes literature according to organizational complexity and ranges from molecule to organ. A second level identifies any of three high-level visualization tasks within a given work: exploration, analysis, and communication. The findings of this report may be used by visualization researchers to understand the overarching trends, challenges, and opportunities in visualization for physiology and to provide a foundation for discussion and future research directions in this area. ",
images = "images/garrison-STAR-taxonomy.png",
thumbnails = "images/garrison-STAR-thumb.png",
pdf = "pdfs/Garrison_STAR_cameraready.pdf",
publisher = "The Eurographics Association and John Wiley \& Sons Ltd.",
project = "VIDI"
}
2021
@article{Kristiansen-2021-SSG,
author = {Kristiansen, Y. S. and Garrison, L. and Bruckner, S.},
title = {Semantic Snapping for Guided Multi-View Visualization Design},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2021},
abstract = {Visual information displays are typically composed of multiple visualizations that are used to facilitate an understanding of the underlying data. A common example are dashboards, which are frequently used in domains such as finance, process monitoring and business intelligence. However, users may not be aware of existing guidelines and lack expert design knowledge when composing such multi-view visualizations. In this paper, we present semantic snapping, an approach to help non-expert users design effective multi-view visualizations from sets of pre-existing views. When a particular view is placed on a canvas, it is “aligned” with the remaining views–not with respect to its geometric layout, but based on aspects of the visual encoding itself, such as how data dimensions are mapped to channels. Our method uses an on-the-fly procedure to detect and suggest resolutions for conflicting, misleading, or ambiguous designs, as well as to provide suggestions for alternative presentations. With this approach, users can be guided to avoid common pitfalls encountered when composing visualizations. Our provided examples and case studies demonstrate the usefulness and validity of our approach.},
note = {Accepted for publication, to be presented at IEEE VIS 2021},
project = {MetaVis,VIDI},
pdf = {pdfs/Kristiansen-2021-SSG.pdf},
vid = {vids/Kristiansen-2021-SSG.mp4},
thumbnails = {images/Kristiansen-2021-SSG.png},
images = {images/Kristiansen-2021-SSG.jpg},
keywords = {tabular data, guidelines, mixed initiative human-machine analysis, coordinated and multiple views},
doi = {10.1109/TVCG.2021.3114860},
}
@inproceedings{Garrison-2021-EPP,
author = {Laura Garrison and Monique Meuschke and Jennifer Fairman and Noeska Smit and Bernhard Preim and Stefan Bruckner},
title = {An Exploration of Practice and Preferences for the Visual Communication of Biomedical Processes},
booktitle = {Proceedings of VCBM},
year = {2021},
abstract = {The visual communication of biomedical processes draws from diverse techniques in both visualization and biomedical illustration. However, matching these techniques to their intended audience often relies on practice-based heuristics or narrow-scope evaluations. We present an exploratory study of the criteria that audiences use when evaluating a biomedical process visualization targeted for communication. Designed over a series of expert interviews and focus groups, our study focuses on common communication scenarios of five well-known biomedical processes and their standard visual representations. We framed these scenarios in a survey with participant expertise spanning from minimal to expert knowledge of a given topic. Our results show frequent overlap in abstraction preferences between expert and non-expert audiences, with similar prioritization of clarity and the ability of an asset to meet a given communication objective. We also found that some illustrative conventions are not as clear as we thought, e.g., glows have broadly ambiguous meaning, while other approaches were unexpectedly preferred, e.g., biomedical illustrations in place of data-driven visualizations. Our findings suggest numerous opportunities for the continued convergence of visualization and biomedical illustration techniques for targeted visualization design.},
note = {Accepted for publication, to be presented at VCBM 2021},
project = {VIDI,ttmedvis},
pdf = {pdfs/Garrison-2021-EPP.pdf},
thumbnails = {images/Garrison-2021-EPP.png},
images = {images/Garrison-2021-EPP.jpg},
url = {https://github.com/lauragarrison87/Biomedical_Process_Vis},
keywords = {biomedical illustration, visual communication, survey},
}
@article{Garrison-2021-DimLift,
author = {Garrison, Laura and M\"{u}ller, Juliane and Schreiber, Stefanie and Oeltze-Jafra, Steffen and Hauser, Helwig and Bruckner, Stefan},
title = {DimLift: Interactive Hierarchical Data Exploration through Dimensional Bundling},
journal={IEEE Transactions on Visualization and Computer Graphics},
year = {2021},
abstract = {The identification of interesting patterns and relationships is essential to exploratory data analysis. This becomes increasingly difficult in high dimensional datasets. While dimensionality reduction techniques can be utilized to reduce the analysis space, these may unintentionally bury key dimensions within a larger grouping and obfuscate meaningful patterns. With this work we introduce DimLift, a novel visual analysis method for creating and interacting with dimensional bundles. Generated through an iterative dimensionality reduction or user-driven approach, dimensional bundles are expressive groups of dimensions that contribute similarly to the variance of a dataset. Interactive exploration and reconstruction methods via a layered parallel coordinates plot allow users to lift interesting and subtle relationships to the surface, even in complex scenarios of missing and mixed data types. We exemplify the power of this technique in an expert case study on clinical cohort data alongside two additional case examples from nutrition and ecology.},
volume = {27},
number = {6},
pages = {2908--2922},
pdf = {pdfs/garrison-2021-dimlift.pdf},
images = {images/garrison_dimlift.jpg},
thumbnails = {images/garrison_dimlift_thumb.jpg},
youtube = {https://youtu.be/JSZuhnDyugA},
doi = {10.1109/TVCG.2021.3057519},
git = {https://github.com/lauragarrison87/DimLift},
project = {VIDI},
}
@article{Mueller-2021-IDA,
author = {M\"{u}ller, Juliane and Garrison, Laura and Ulbrich, Philipp and Schreiber, Stefanie and Bruckner, Stefan and Hauser, Helwig and Oeltze-Jafra, Steffen},
title = {Integrated Dual Analysis of Quantitative and Qualitative High-Dimensional Data},
journal={IEEE Transactions on Visualization and Computer Graphics},
year = {2021},
abstract = {The Dual Analysis framework is a powerful enabling technology for the exploration of high dimensional quantitative data by treating data dimensions as first-class objects that can be explored in tandem with data values. In this work, we extend the Dual Analysis framework through the joint treatment of quantitative (numerical) and qualitative (categorical) dimensions. Computing common measures for all dimensions allows us to visualize both quantitative and qualitative dimensions in the same view. This enables a natural joint treatment of mixed data during interactive visual exploration and analysis. Several measures of variation for nominal qualitative data can also be applied to ordinal qualitative and quantitative data. For example, instead of measuring variability from a mean or median, other measures assess inter-data variation or average variation from a mode. In this work, we demonstrate how these measures can be integrated into the Dual Analysis framework to explore and generate hypotheses about high-dimensional mixed data. A medical case study using clinical routine data of patients suffering from Cerebral Small Vessel Disease (CSVD), conducted with a senior neurologist and a medical student, shows that a joint Dual Analysis approach for quantitative and qualitative data can rapidly lead to new insights based on which new hypotheses may be generated.},
volume = {27},
number = {6},
pages = {2953--2966},
pdf = {pdfs/Mueller_2020_IDA.pdf},
images = {images/Mueller_2020_IDA.jpg},
thumbnails = {images/Mueller_2020_IDA.png},
doi = {10.1109/TVCG.2021.3056424},
git = {https://github.com/JulianeMu/IntegratedDualAnalysisAproach_MDA},
project = {VIDI},
}
2020
@article{Garrison-2020-IVE,
author = {Garrison, Laura and Va\v{s}\'{i}\v{c}ek, Jakub and Craven, Alex R. and Gr\"{u}ner, Renate and Smit, Noeska and Bruckner, Stefan},
title = {Interactive Visual Exploration of Metabolite Ratios in MR Spectroscopy Studies},
journal = {Computers \& Graphics},
volume = {92},
pages = {1--12},
keywords = {medical visualization, magnetic resonance spectroscopy data, information visualization, user-centered design},
doi = {10.1016/j.cag.2020.08.001},
abstract = {Magnetic resonance spectroscopy (MRS) is an advanced biochemical technique used to identify metabolic compounds in living tissue. While its sensitivity and specificity to chemical imbalances render it a valuable tool in clinical assessment, the results from this modality are abstract and difficult to interpret. With this design study we characterized and explored the tasks and requirements for evaluating these data from the perspective of a MRS research specialist. Our resulting tool, SpectraMosaic, links with upstream spectroscopy quantification software to provide a means for precise interactive visual analysis of metabolites with both single- and multi-peak spectral signatures. Using a layered visual approach, SpectraMosaic allows researchers to analyze any permutation of metabolites in ratio form for an entire cohort, or by sample region, individual, acquisition date, or brain activity status at the time of acquisition. A case study with three MRS researchers demonstrates the utility of our approach in rapid and iterative spectral data analysis.},
year = {2020},
pdf = "pdfs/Garrison-2020-IVE.pdf",
thumbnails = "images/Garrison-2020-IVE.png",
images = "images/Garrison-2020-IVE.jpg",
project = "VIDI",
git = "https://github.com/mmiv-center/spectramosaic-public",
}
@article{Solteszova-2019-MLT,
author = {Solteszova, V. and Smit, N. N. and Stoppel, S. and Gr\"{u}ner, R. and Bruckner, S.},
title = {Memento: Localized Time-Warping for Spatio-Temporal Selection},
journal = {Computer Graphics Forum},
volume = {39},
number = {1},
pages = {231--243},
year = {2020},
keywords = {interaction, temporal data, visualization, spatio-temporal projection},
images = "images/Solteszova-2019-MLT.jpg",
thumbnails = "images/Solteszova-2019-MLT-1.jpg",
pdf = "pdfs/Solteszova-2019-MLT.pdf",
doi = {10.1111/cgf.13763},
abstract = {Interaction techniques for temporal data are often focused on affecting the spatial aspects of the data, for instance through the use of transfer functions, camera navigation or clipping planes. However, the temporal aspect of the data interaction is often neglected. The temporal component is either visualized as individual time steps, an animation or a static summary over the temporal domain. When dealing with streaming data, these techniques are unable to cope with the task of re-viewing an interesting local spatio-temporal event, while continuing to observe the rest of the feed. We propose a novel technique that allows users to interactively specify areas of interest in the spatio-temporal domain. By employing a time-warp function, we are able to slow down time, freeze time or even travel back in time, around spatio-temporal events of interest. The combination of such a (pre-defined) time-warp function and brushing directly in the data to select regions of interest allows for a detailed review of temporally and spatially localized events, while maintaining an overview of the global spatio-temporal data. We demonstrate the utility of our technique with several usage scenarios.},
project = "MetaVis,ttmedvis,VIDI"
}