Noeska Smit

Associate Professor

Medical Visualization

I’m an Associate Professor in Medical Visualization (tenure-track position funded by the BFS), with a background in computer science and radiography. My research focuses on model-based visualization for medical applications, as well as multi-modal visualization in the context of computational medicine. My position is also associated with the Mohn Medical Imaging and Visualization (MMIV) centre.


This page displays only publications I have authored under my current affiliation. For a full overview, please see my Google Scholar profile.



    [PDF] [DOI] [YT] [Bibtex]
    @INPROCEEDINGS {Meuschke2018VCBM,
    author = "Meuschke, Monique and Smit, Noeska N. and Lichtenberg, Nils and Preim, Bernhard and Lawonn, Kai",
    title = "Automatic Generation of Web-Based User Studies to Evaluate Depth Perception in Vascular Surface Visualizations",
    booktitle = "Proceedings of VCBM 2018",
    year = "2018",
    editor = "Puig Puig, Anna and Schultz, Thomas and Vilanova, Anna and Hotz, Ingrid and Kozlikova, Barbora and Vázquez, Pere-Pau",
    pages = "033-044",
    address = "Granada, Spain",
    publisher = "Eurographics Association",
    abstract = "User studies are often required in biomedical visualization application papers in order to provide evidence for the utility of the presented approach. An important aspect is how well depth information can be perceived, as depth encoding is important to enable an understandable representation of complex data. Unfortunately, in practice there is often little time available to perform such studies, and setting up and conducting user studies may be labor-intensive. In addition, it can be challenging to reach enough participants to support the contribution claims of the paper. In this paper, we propose a system that allows biomedical visualization researchers to quickly generate perceptual task-based user studies for novel surface visualizations, and to perform the resulting experiment via a web interface. This approach helps to reduce effort in the setup of user studies themselves, and at the same time leverages a web-based approach that can help researchers attract more participants to their study. We demonstrate our system using the specific application of depth judgment tasks to evaluate vascular surface visualizations, since there is a lot of recent interest in this area. However, the system is also generally applicable for conducting other task-based user studies in biomedical visualization.",
    pdf = "pdfs/meuschke2018VCBM.pdf",
    images = "images/vcbm2018.png",
    thumbnails = "images/vcbm2018.png",
    youtube = "",
    crossref = "VCBM-proc",
    doi = "10.2312/vcbm.20181227",
    project = "ttmedvis"
    }
    [PDF] [YT] [Bibtex]
    @ARTICLE {lichtenbergsmithansenlawonn2018,
    author = "Lichtenberg, Nils and Smit, Noeska and Hansen, Christian and Lawonn, Kai",
    title = "Real-time field aligned stripe patterns",
    journal = "Computers & Graphics",
    year = "2018",
    volume = "74",
    pages = "137-149",
    month = "aug",
    abstract = "In this paper, we present a parameterization technique that can be applied to surface meshes in real-time without time-consuming preprocessing steps. The parameterization is suitable for the display of (un-)oriented patterns and texture patches, and to sample a surface in a periodic fashion. The method is inspired by existing work that solves a global optimization problem to generate a continuous stripe pattern on the surface, from which texture coordinates can be derived. We propose a local optimization approach that is suitable for parallel execution on the GPU, which drastically reduces computation time. With this, we achieve on-the-fly texturing of 3D, medium-sized (up to 70k vertices) surface meshes. The algorithm takes a tangent vector field as input and aligns the texture coordinates to it. Our technique achieves real-time parameterization of the surface meshes by employing a parallelizable local search algorithm that converges to a local minimum in a few iterations. The calculation in real-time allows for live parameter updates and determination of varying texture coordinates. Furthermore, the method can handle non-manifold meshes. The technique is useful in various applications, e.g., biomedical visualization and flow visualization. We highlight our method's potential by providing usage scenarios for several applications.",
    pdf = "pdfs/lichtenberg_2018.pdf",
    images = "images/Selection_384.png",
    thumbnails = "images/1-s2.0-S0097849318300591-fx1_lrg.jpg",
    youtube = "",
    project = "ttmedvis"
    }
    [PDF] [Bibtex]
    @MISC {Smit18MMIV,
    author = "N. N. Smit and S. Bruckner and H. Hauser and I. Haldorsen and A. Lundervold and A. S. Lundervold and E. Hodneland and L. Oltedal and K. Specht and E. R. Gruner",
    title = "Research Agenda of the Mohn Medical Imaging and Visualization Centre in Bergen, Norway",
    howpublished = "Poster presented at the EG VCBM workshop 2018",
    month = "September",
    year = "2018",
    abstract = "The Mohn Medical Imaging and Visualization Centre (MMIV) was recently established in collaboration between the University of Bergen, Norway, and the Haukeland University Hospital in Bergen with generous financial support from the Bergen Research Foundation (BFS) to conduct cross-disciplinary research related to state-of-the-art medical imaging, including preclinical and clinical high-field MRI, CT and hybrid PET/CT/MR. The overall goal of the Centre is to research new methods in quantitative imaging and interactive visualization to predict changes in health and disease across spatial and temporal scales. This encompasses research in feature detection, feature extraction, and feature prediction, as well as on methods and techniques for the interactive visualization of spatial and abstract data related to and derived from these features. With special emphasis on the natural and medical sciences, the long-term goal of the Centre is to consolidate excellence in the interplay between medical imaging (physics, chemistry, radiography, radiology) and visualization (computer science and mathematics), and to develop novel and refined imaging methods that may ultimately improve patient care. In this poster, we describe the overall research agenda of MMIV and present the four core projects in the Centre.",
    pdf = "pdfs/smit2018posterabstract.pdf",
    images = "images/MMIVPoster.png",
    thumbnails = "images/MMIVPoster.png",
    location = "Granada, Spain",
    project = "VIDI"
    }


    [PDF] [DOI] [YT] [Bibtex]
    @ARTICLE {Smit-2017-PAS,
    author = "Smit, Noeska and Lawonn, Kai and Kraima, Annelot and DeRuiter, Marco and Sokooti, Hessam and Bruckner, Stefan and Eisemann, Elmar and Vilanova, Anna",
    title = "PelVis: Atlas-based Surgical Planning for Oncological Pelvic Surgery",
    journal = "IEEE Transactions on Visualization and Computer Graphics",
    year = "2017",
    volume = "23",
    number = "1",
    pages = "741--750",
    month = "jan",
    abstract = "Due to the intricate relationship between the pelvic organs and vital structures, such as vessels and nerves, pelvic anatomy is often considered to be complex to comprehend. In oncological pelvic surgery, a trade-off has to be made between complete tumor resection and preserving function by preventing damage to the nerves. Damage to the autonomic nerves causes undesirable post-operative side-effects such as fecal and urinal incontinence, as well as sexual dysfunction in up to 80 percent of the cases. Since these autonomic nerves are not visible in pre-operative MRI scans or during surgery, avoiding nerve damage during such a surgical procedure becomes challenging. In this work, we present visualization methods to represent context, target, and risk structures for surgical planning. We employ distance-based and occlusion management techniques in an atlas-based surgical planning tool for oncological pelvic surgery. Patient-specific pre-operative MRI scans are registered to an atlas model that includes nerve information. Through several interactive linked views, the spatial relationships and distances between the organs, tumor and risk zones are visualized to improve understanding, while avoiding occlusion. In this way, the surgeon can examine surgically relevant structures and plan the procedure before going into the operating theater, thus raising awareness of the autonomic nerve zone regions and potentially reducing post-operative complications. Furthermore, we present the results of a domain expert evaluation with surgical oncologists that demonstrates the advantages of our approach.",
    pdf = "pdfs/Smit-2017-PAS.pdf",
    images = "images/Smit-2017-PAS.jpg",
    thumbnails = "images/Smit-2017-PAS.png",
    youtube = "",
    doi = "10.1109/TVCG.2016.2598826",
    event = "IEEE SciVis 2016",
    keywords = "atlas, surgical planning, medical visualization",
    location = "Baltimore, USA"
    }
    [PDF] [DOI] [Bibtex]
    @ARTICLE {LawonnSmit-2017-Survey,
    author = "Lawonn, K. and Smit, N.N. and B{\"u}hler, K. and Preim, B.",
    title = "A Survey on Multimodal Medical Data Visualization",
    journal = "Computer Graphics Forum",
    year = "2017",
    volume = "37",
    number = "1",
    pages = "413-438",
    abstract = "Multi-modal data of the complex human anatomy contain a wealth of information. To visualize and explore such data, techniques for emphasizing important structures and controlling visibility are essential. Such fused overview visualizations guide physicians to suspicious regions to be analysed in detail, e.g. with slice-based viewing. We give an overview of the state of the art in multi-modal medical data visualization techniques. Multi-modal medical data consist of multiple scans of the same subject using various acquisition methods, often combining multiple complementary types of information. Three-dimensional visualization techniques for multi-modal medical data can be used in diagnosis, treatment planning, doctor–patient communication as well as interdisciplinary communication. Over the years, multiple techniques have been developed in order to cope with the various associated challenges and present the relevant information from multiple sources in an insightful way. We present an overview of these techniques and analyse the specific challenges that arise in multi-modal data visualization and how recent works aimed to solve these, often using smart visibility techniques. We provide a taxonomy of these multi-modal visualization applications based on the modalities used and the visualization techniques employed. Additionally, we identify unsolved problems as potential future research directions.",
    pdf = "pdfs/LawonnSmit-2017-MULTI.pdf",
    images = "images/LawonnSmit-2017-MULTI.jpg",
    thumbnails = "images/LawonnSmit-2017-MULTI-TN.png",
    issn = "1467-8659",
    url = "",
    doi = "10.1111/cgf.13306",
    keywords = "medical imaging, visualization, scientific visualization, volume visualization, multimodal medical data"
    }


    [PDF] [Bibtex]
    @INPROCEEDINGS {Lichtenberg-2016-SLINE,
    author = "Nils Lichtenberg and Noeska Smit and Christian Hansen and Kai Lawonn",
    title = "Sline: Seamless Line Illustration for Interactive Biomedical Visualization",
    booktitle = "Proceedings of VCBM 2016",
    year = "2016",
    month = "sep",
    abstract = "In medical visualization of surface information, problems often arise when visualizing several overlapping structures simultaneously. There is a trade-off between visualizing multiple structures in a detailed way and limiting visual clutter, in order to allow users to focus on the main structures. Illustrative visualization techniques can help alleviate these problems by defining a level of abstraction per structure. However, clinical uptake of these advanced visualization techniques so far has been limited due to the complex parameter settings required. To bring advanced medical visualization closer to clinical application, we propose a novel illustrative technique that offers a seamless transition between various levels of abstraction and detail. Using a single comprehensive parameter, users are able to quickly define a visual representation per structure that fits the visualization requirements for focus and context structures. This technique can be applied to any biomedical context in which multiple surfaces are routinely visualized, such as neurosurgery, radiotherapy planning or drug design. Additionally, we introduce a novel hatching technique that runs in real-time and does not require texture coordinates. An informal evaluation with experts from different biomedical domains reveals that our technique allows users to design focus-and-context visualizations in a fast and intuitive manner.",
    pdf = "pdfs/Lichtenberg-2016-SLINE.pdf",
    images = "images/Smit-2016-SLINE.PNG",
    thumbnails = "images/Smit-2016-SLINE.jpg",
    proceedings = "Proceedings of Eurographics Workshop on Visual Computing in Biology and Medicine",
    event = "VCBM 2016",
    keywords = "surface rendering, medical visualization, illustrative rendering",
    location = "Bergen, Norway"
    }