
Project Overview
Technology is revolutionizing medicine. New scanners enable doctors to “look into the patient’s body” and study anatomy and physiology without the need for a scalpel. New scanning technologies emerge at an amazing speed, providing an ever-growing and increasingly varied view of medical conditions. Today, we can not only look at the bones within a body, but also examine soft tissue, blood flow, activation networks in the brain, and many more aspects of anatomy and physiology. The increasing amount and complexity of the acquired medical imaging data lead to new challenges in knowledge extraction and decision making.
In order to optimally exploit this new wealth of information, it is crucial that all this imaging data is successfully linked to the medical condition of the patient. In many cases, this is challenging, for example, when diagnosing early-stage cancer or mental disorders. Analogous to biomarkers, which are molecular structures that are used to identify medical conditions, imaging biomarkers are information structures in medical images, formulated in terms of features that can be computed from the imaging data, that can help with diagnostics and treatment planning. Imaging biomarker discovery is a highly challenging task, and traditionally only a single hypothesis (for a new biomarker) is examined at a time.
This makes it practically impossible to explore large sets of hypotheses, as well as more complex imaging biomarkers, across multi-aspect data. In the VIDI project, we propose to research and advance visual data science to improve imaging biomarker discovery through the visual integration of multi-aspect medical data with a new visualization-enabled hypothesis management framework.
We aim to reduce the time it takes to discover new imaging biomarkers by studying structured sets of hypotheses, examined at the same time, through the integration of computational approaches and interactive visual analysis techniques. A related goal is to enable the discovery of more complex imaging biomarkers, across multiple modalities, that can potentially characterize diseases more accurately. This should lead to a new way of designing innovative and effective imaging protocols and to the discovery of new imaging biomarkers, improving suboptimal imaging protocols and thus also reducing scanning costs. Our project is a truly interdisciplinary research effort, bringing visualization research and imaging research together in one project, and it is perfectly suited for the newly established Centre for Medical Imaging and Visualization in Bergen, Norway.
VIDI Approach
To achieve these goals, we have divided the VIDI project into seven work packages (WPs), which can, to some extent, be executed in parallel:
WP1: Hypothesis management
Research and design of the methodologies necessary for the structuring, representation, exploration, and analysis of hypothesis sets. Development of a visual language for data interactions and of methods for linking spatial with non-spatial data.
WP2: Data & Features
Exploration of medical image feature extraction and visualization, and definition of the user experience for selecting and refining data features to extend into additional dimensions.
WP3: Hypotheses Scoring
Development of methods for the interactive visual ranking and analysis of user hypothesis sets. Exploration of methods that provide the user with an evaluation preview of the investigated hypotheses, linked to the hypothesis visualizations and rankings.
WP4: Optimized Imaging
Evaluation of existing imaging protocols and development of new imaging techniques. Investigation of the imaging process to guard against suboptimal image acquisitions.
WP5: Integration
Integration of the solutions from work packages 1-4.
WP6: Evaluation
Evaluation of the new hypothesis management methods in the context of three target applications: 1) gynecologic cancer; 2) neuroinflammation in multiple sclerosis (MS); and 3) neurodegenerative disorders.
WP7: Management & Dissemination
Coordination between the involved partners, planning and reporting, and dissemination.
VIDI Project Team
PI: Helwig Hauser
Co-PIs: Stefan Bruckner and Renate Grüner, MMIV
Associated researcher: Noeska Smit
PhD students: Laura Garrison and Fourough Gharbalchi
This project is funded by the Bergen Research Foundation (BFS) and the University of Bergen.
Publications
2022
@phdthesis{garrison2022thesis,
title = {
From Molecules to the Masses: Visual Exploration, Analysis, and Communication
of Human Physiology
},
author = {Laura Ann Garrison},
year = 2022,
month = {September},
isbn = 9788230841389,
url = {https://hdl.handle.net/11250/3015990},
school = {Department of Informatics, University of Bergen, Norway},
abstract = {
The overarching theme of this thesis is the cross-disciplinary application of
medical illustration and visualization techniques to address challenges in
exploring, analyzing, and communicating aspects of physiology to audiences
with differing expertise.
Describing the myriad biological processes occurring in living beings over
time, the science of physiology is complex and critical to our understanding
of how life works. It spans many spatio-temporal scales to combine and bridge
the basic sciences (biology, physics, and chemistry) to medicine. Recent
years have seen an explosion of new and finer-grained experimental and
acquisition methods to characterize these data. The volume and complexity of
these data necessitate effective visualizations to complement standard
analysis practice. Visualization approaches must carefully consider and be
adaptable to the user's main task, be it exploratory, analytical, or
communication-oriented. This thesis contributes to the areas of theory,
empirical findings, methods, applications, and research replicability in
visualizing physiology. Our contributions open with a state-of-the-art report
exploring the challenges and opportunities in visualization for physiology.
This report is motivated by the need for visualization researchers, as well
as researchers in various application domains, to have a centralized,
multiscale overview of visualization tasks and techniques. Using a
mixed-methods search approach, this is the first report of its kind to
broadly survey the space of visualization for physiology. Our approach to
organizing the literature in this report enables the lookup of topics of
interest according to spatio-temporal scale. It further subdivides works
according to any combination of three high-level visualization tasks:
exploration, analysis, and communication. This provides an easily-navigable
foundation for discussion and future research opportunities for audience- and
task-appropriate visualization for physiology. From this report, we identify
two key areas for continued research that begin narrowly and subsequently
broaden in scope: (1) exploratory analysis of multifaceted physiology data
for expert users, and (2) communication for experts and non-experts alike.
Our investigation of multifaceted physiology data takes place over two
studies. Each targets processes occurring at different spatio-temporal scales
and includes a case study with experts to assess the applicability of our
proposed method. At the molecular scale, we examine data from magnetic
resonance spectroscopy (MRS), an advanced biochemical technique used to
identify small molecules (metabolites) in living tissue that are indicative
of metabolic pathway activity. Although highly sensitive and specific, the
output of this modality is abstract and difficult to interpret. Our design
study investigating the tasks and requirements for expert exploratory
analysis of these data led to SpectraMosaic, a novel application enabling
domain researchers to analyze any permutation of metabolites in ratio form
for an entire cohort, or by sample region, individual, acquisition date, or
brain activity status at the time of acquisition. A second approach considers
the exploratory analysis of multidimensional physiological data at the
opposite end of the spatio-temporal scale: population. An effective
exploratory data analysis workflow critically must identify interesting
patterns and relationships, which becomes increasingly difficult as data
dimensionality increases. Although this can be partially addressed with
existing dimensionality reduction techniques, the nature of these techniques
means that subtle patterns may be lost in the process. In this approach, we
describe DimLift, an iterative dimensionality reduction technique enabling
user identification of interesting patterns and relationships that may lie
subtly within a dataset through dimensional bundles. Key to this method is
the user's ability to steer the dimensionality reduction technique to follow
their own lines of inquiry.
Our third question considers the crafting of visualizations for communication
to audiences with different levels of expertise. It is natural to expect that
experts in a topic may have different preferences and criteria to evaluate a
visual communication relative to a non-expert audience. This impacts the
success of an image in communicating a given scenario. Drawing from diverse
techniques in biomedical illustration and visualization, we conducted an
exploratory study of the criteria that audiences use when evaluating a
biomedical process visualization targeted for communication. From this study,
we identify opportunities for further convergence of biomedical illustration
and visualization techniques for more targeted visual communication design.
One opportunity that we discuss in greater depth is the development of
semantically-consistent guidelines for the coloring of molecular scenes. The
intent of such guidelines is to elevate the scientific literacy of non-expert
audiences in the context of molecular visualization, which is particularly
relevant to public health communication.
All application code and empirical findings are open-sourced and available
for reuse by the scientific community and public. The methods and findings
presented in this thesis contribute to a foundation of cross-disciplinary
biomedical illustration and visualization research, opening several
opportunities for continued work in visualization for physiology.
},
pdf = {pdfs/garrison-phdthesis.pdf},
images = {images/garrison-thesis.png},
thumbnails = {images/garrison-thesis-thumb.png},
project = {VIDI}
}
@article{Meuschke2022narrative,
title = {Narrative medical visualization to communicate disease data},
author = {Meuschke, Monique and Garrison, Laura A. and Smit, Noeska N. and Bach, Benjamin and Mittenentzwei, Sarah and Wei{\ss}, Veronika and Bruckner, Stefan and Lawonn, Kai and Preim, Bernhard},
year = 2022,
journal = {Computers \& Graphics},
volume = 107,
pages = {144--157},
doi = {10.1016/j.cag.2022.07.017},
issn = {0097-8493},
url = {https://www.sciencedirect.com/science/article/pii/S009784932200139X},
abstract = {This paper explores narrative techniques combined with medical visualizations to tell data-driven stories about diseases for a general audience. The field of medical illustration uses narrative visualization through hand-crafted techniques to promote health literacy. However, data-driven narrative visualization has rarely been applied to medical data. We derived a template for creating stories about diseases and applied it to three selected diseases to demonstrate how narrative techniques could support visual communication and facilitate understanding of medical data. One of our main considerations is how interactive 3D anatomical models can be integrated into the story and whether this leads to compelling stories in which the users feel involved. A between-subject study with 90 participants suggests that the combination of a carefully designed narrative structure, the constant involvement of a specific patient, high-qualitative visualizations combined with easy-to-use interactions, are critical for an understandable story about diseases that would be remembered by participants.},
pdf = {pdfs/Narrative_medical_MEUSCHKE_DOA18072022_AFV.pdf},
thumbnails = {images/Meuschke2022narrative-thumb.png},
images = {images/Meuschke2022narrative.png},
project = {VIDI}
}
@ARTICLE {Garrison2022MolColor,
author = "Laura A. Garrison and Stefan Bruckner",
title = "Considering Best Practices in Color Palettes for Molecular Visualizations",
journal = "Journal of Integrative Bioinformatics",
year = "2022",
abstract = "Biomedical illustration and visualization techniques provide a window into complex molecular worlds that are difficult to capture through experimental means alone. Biomedical illustrators frequently employ color to help tell a molecular story, e.g., to identify key molecules in a signaling pathway. Currently, color use for molecules is largely arbitrary and often chosen based on the client, cultural factors, or personal taste. The study of molecular dynamics is relatively young, and some stakeholders argue that color use guidelines would throttle the growth of the field. Instead, content authors have ample creative freedom to choose an aesthetic that, e.g., supports the story they want to tell. However, such creative freedom comes at a price. The color design process is challenging, particularly for those without a background in color theory. The result is a semantically inconsistent color space that reduces the interpretability and effectiveness of molecular visualizations as a whole. Our contribution in this paper is threefold. We first discuss some of the factors that contribute to this array of color palettes. Second, we provide a brief sampling of color palettes used in both industry and research sectors. Lastly, we suggest considerations for developing best practices around color palettes applied to molecular visualization.",
images = "images/garrison-molecularcolor-full.png",
thumbnails = "images/garrison-molecularcolor-thumb.png",
pdf = "pdfs/garrison-molecularcolor.pdf",
publisher = "De Gruyter",
doi = "10.1515/jib-2022-0016",
project = "VIDI"
}
@ARTICLE {Garrison2022PhysioSTAR,
author = "Laura A. Garrison and Ivan Kolesar and Ivan Viola and Helwig Hauser and Stefan Bruckner",
title = "Trends & Opportunities in Visualization for Physiology: A Multiscale Overview",
journal = "Computer Graphics Forum",
year = "2022",
volume = "41",
number = "3",
pages = "609-643",
doi = "10.1111/cgf.14575",
abstract = "Combining elements of biology, chemistry, physics, and medicine, the science of human physiology is complex and multifaceted. In this report, we offer a broad and multiscale perspective on key developments and challenges in visualization for physiology. Our literature search process combined standard methods with a state-of-the-art visual analysis search tool to identify surveys and representative individual approaches for physiology. Our resulting taxonomy sorts literature on two levels. The first level categorizes literature according to organizational complexity and ranges from molecule to organ. A second level identifies any of three high-level visualization tasks within a given work: exploration, analysis, and communication. The findings of this report may be used by visualization researchers to understand the overarching trends, challenges, and opportunities in visualization for physiology and to provide a foundation for discussion and future research directions in this area. ",
images = "images/garrison-STAR-taxonomy.png",
thumbnails = "images/garrison-STAR-thumb.png",
pdf = "pdfs/Garrison_STAR_cameraready.pdf",
publisher = "The Eurographics Association and John Wiley \& Sons Ltd.",
project = "VIDI"
}
2021
@Article{Kristiansen-2021-SSG,
author = {Kristiansen, Y. S. and Garrison, L. and Bruckner, S.},
title = {Semantic Snapping for Guided Multi-View Visualization Design},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2021},
abstract = {Visual information displays are typically composed of multiple visualizations that are used to facilitate an understanding of the underlying data. A common example are dashboards, which are frequently used in domains such as finance, process monitoring and business intelligence. However, users may not be aware of existing guidelines and lack expert design knowledge when composing such multi-view visualizations. In this paper, we present semantic snapping, an approach to help non-expert users design effective multi-view visualizations from sets of pre-existing views. When a particular view is placed on a canvas, it is “aligned” with the remaining views–not with respect to its geometric layout, but based on aspects of the visual encoding itself, such as how data dimensions are mapped to channels. Our method uses an on-the-fly procedure to detect and suggest resolutions for conflicting, misleading, or ambiguous designs, as well as to provide suggestions for alternative presentations. With this approach, users can be guided to avoid common pitfalls encountered when composing visualizations. Our provided examples and case studies demonstrate the usefulness and validity of our approach.},
note = {Presented at IEEE VIS 2021},
project = {MetaVis,VIDI},
pdf = {pdfs/Kristiansen-2021-SSG.pdf},
vid = {vids/Kristiansen-2021-SSG.mp4},
thumbnails = {images/Kristiansen-2021-SSG.png},
images = {images/Kristiansen-2021-SSG.jpg},
keywords = {tabular data, guidelines, mixed initiative human-machine analysis, coordinated and multiple views},
doi = {10.1109/TVCG.2021.3114860},
}
@InProceedings{Garrison-2021-EPP,
author = {Laura Garrison and Monique Meuschke and Jennifer Fairman and Noeska Smit and Bernhard Preim and Stefan Bruckner},
title = {An Exploration of Practice and Preferences for the Visual Communication of Biomedical Processes},
booktitle = {Proceedings of VCBM},
year = {2021},
abstract = {The visual communication of biomedical processes draws from diverse techniques in both visualization and biomedical illustration. However, matching these techniques to their intended audience often relies on practice-based heuristics or narrow-scope evaluations. We present an exploratory study of the criteria that audiences use when evaluating a biomedical process visualization targeted for communication. Designed over a series of expert interviews and focus groups, our study focuses on common communication scenarios of five well-known biomedical processes and their standard visual representations. We framed these scenarios in a survey with participant expertise spanning from minimal to expert knowledge of a given topic. Our results show frequent overlap in abstraction preferences between expert and non-expert audiences, with similar prioritization of clarity and the ability of an asset to meet a given communication objective. We also found that some illustrative conventions are not as clear as we thought, e.g., glows have broadly ambiguous meaning, while other approaches were unexpectedly preferred, e.g., biomedical illustrations in place of data-driven visualizations. Our findings suggest numerous opportunities for the continued convergence of visualization and biomedical illustration techniques for targeted visualization design.},
note = {Best Paper Honorable Mention at VCBM 2021},
project = {VIDI,ttmedvis},
pdf = {pdfs/Garrison-2021-EPP.pdf},
thumbnails = {images/Garrison-2021-EPP.png},
images = {images/Garrison-2021-EPP.jpg},
url = {https://github.com/lauragarrison87/Biomedical_Process_Vis},
keywords = {biomedical illustration, visual communication, survey},
}
@ARTICLE {Garrison-2021-DimLift,
author = {Garrison, Laura and M\"{u}ller, Juliane and Schreiber, Stefanie and Oeltze-Jafra, Steffen and Hauser, Helwig and Bruckner, Stefan},
title = {DimLift: Interactive Hierarchical Data Exploration through Dimensional Bundling},
journal={IEEE Transactions on Visualization and Computer Graphics},
year = {2021},
abstract = {The identification of interesting patterns and relationships is essential to exploratory data analysis. This becomes increasingly difficult in high dimensional datasets. While dimensionality reduction techniques can be utilized to reduce the analysis space, these may unintentionally bury key dimensions within a larger grouping and obfuscate meaningful patterns. With this work we introduce DimLift, a novel visual analysis method for creating and interacting with dimensional bundles. Generated through an iterative dimensionality reduction or user-driven approach, dimensional bundles are expressive groups of dimensions that contribute similarly to the variance of a dataset. Interactive exploration and reconstruction methods via a layered parallel coordinates plot allow users to lift interesting and subtle relationships to the surface, even in complex scenarios of missing and mixed data types. We exemplify the power of this technique in an expert case study on clinical cohort data alongside two additional case examples from nutrition and ecology.},
volume = {27},
number = {6},
pages = {2908--2922},
pdf = {pdfs/garrison-2021-dimlift.pdf},
images = {images/garrison_dimlift.jpg},
thumbnails = {images/garrison_dimlift_thumb.jpg},
youtube = {https://youtu.be/JSZuhnDyugA},
doi = {10.1109/TVCG.2021.3057519},
git = {https://github.com/lauragarrison87/DimLift},
project = {VIDI},
}
![[PDF]](https://vis.uib.no/wp-content/plugins/papercite/img/pdf.png)
![[DOI]](https://vis.uib.no/wp-content/plugins/papercite/img/external.png)
@ARTICLE {Mueller-2021-IDA,
author = {M\"{u}ller, Juliane and Garrison, Laura and Ulbrich, Philipp and Schreiber, Stefanie and Bruckner, Stefan and Hauser, Helwig and Oeltze-Jafra, Steffen},
title = {Integrated Dual Analysis of Quantitative and Qualitative High-Dimensional Data},
journal={IEEE Transactions on Visualization and Computer Graphics},
year = {2021},
abstract = {The Dual Analysis framework is a powerful enabling technology for the exploration of high dimensional quantitative data by treating data dimensions as first-class objects that can be explored in tandem with data values. In this work, we extend the Dual Analysis framework through the joint treatment of quantitative (numerical) and qualitative (categorical) dimensions. Computing common measures for all dimensions allows us to visualize both quantitative and qualitative dimensions in the same view. This enables a natural joint treatment of mixed data during interactive visual exploration and analysis. Several measures of variation for nominal qualitative data can also be applied to ordinal qualitative and quantitative data. For example, instead of measuring variability from a mean or median, other measures assess inter-data variation or average variation from a mode. In this work, we demonstrate how these measures can be integrated into the Dual Analysis framework to explore and generate hypotheses about high-dimensional mixed data. A medical case study using clinical routine data of patients suffering from Cerebral Small Vessel Disease (CSVD), conducted with a senior neurologist and a medical student, shows that a joint Dual Analysis approach for quantitative and qualitative data can rapidly lead to new insights based on which new hypotheses may be generated.},
volume = {27},
number = {6},
pages = {2953--2966},
pdf = {pdfs/Mueller_2020_IDA.pdf},
images = {images/Mueller_2020_IDA.jpg},
thumbnails = {images/Mueller_2020_IDA.png},
doi = {10.1109/TVCG.2021.3056424},
git = {https://github.com/JulianeMu/IntegratedDualAnalysisAproach_MDA},
project = {VIDI},
}