Noeska Smit

Associate Professor

Medical Visualization

I’m an Associate Professor in Medical Visualization (a tenure-track position funded by the BFS), with a background in computer science and radiography. My research focuses on model-based visualization for medical applications, as well as multi-modal visualization in the context of computational medicine. My position is also associated with the Mohn Medical Imaging and Visualization (MMIV) centre.




This page only lists publications I have authored under my current affiliation. For a full overview, please see my Google Scholar profile.



    @ARTICLE {lichtenbergsmithansenlawonn2018,
    author = "Nils Lichtenberg and Noeska Smit and Christian Hansen and Kai Lawonn",
    title = "Real-time field aligned stripe patterns",
    journal = "Computers & Graphics",
    year = "2018",
    volume = "74",
    pages = "137-149",
    month = "aug",
    abstract = "In this paper, we present a parameterization technique that can be applied to surface meshes in real-time without time-consuming preprocessing steps. The parameterization is suitable for the display of (un-)oriented patterns and texture patches, and to sample a surface in a periodic fashion. The method is inspired by existing work that solves a global optimization problem to generate a continuous stripe pattern on the surface, from which texture coordinates can be derived. We propose a local optimization approach that is suitable for parallel execution on the GPU, which drastically reduces computation time. With this, we achieve on-the-fly texturing of 3D, medium-sized (up to 70 k vertices) surface meshes. The algorithm takes a tangent vector field as input and aligns the texture coordinates to it. Our technique achieves real-time parameterization of the surface meshes by employing a parallelizable local search algorithm that converges to a local minimum in a few iterations. The calculation in real-time allows for live parameter updates and determination of varying texture coordinates. Furthermore, the method can handle non-manifold meshes. The technique is useful in various applications, e.g., biomedical visualization and flow visualization. We highlight our method’s potential by providing usage scenarios for several applications.",
    images = "images/Selection_384.png",
    thumbnails = "images/1-s2.0-S0097849318300591-fx1_lrg.jpg",
    project = "ttmedvis"
}


    @ARTICLE {Smit-2017-PAS,
    author = "Noeska Smit and Kai Lawonn and Annelot Kraima and Marco DeRuiter and Hessam Sokooti and Stefan Bruckner and Elmar Eisemann and Anna Vilanova",
    title = "PelVis: Atlas-based Surgical Planning for Oncological Pelvic Surgery",
    journal = "IEEE Transactions on Visualization and Computer Graphics",
    year = "2017",
    volume = "23",
    number = "1",
    pages = "741--750",
    month = "jan",
    abstract = "Due to the intricate relationship between the pelvic organs and vital
    structures, such as vessels and nerves, pelvic anatomy is often considered
    to be complex to comprehend. In oncological pelvic surgery, a trade-off
    has to be made between complete tumor resection and preserving function
    by preventing damage to the nerves. Damage to the autonomic nerves
    causes undesirable post-operative side-effects such as fecal and
    urinary incontinence, as well as sexual dysfunction in up to 80 percent
    of the cases. Since these autonomic nerves are not visible in pre-operative
    MRI scans or during surgery, avoiding nerve damage during such a
    surgical procedure becomes challenging. In this work, we present
    visualization methods to represent context, target, and risk structures
    for surgical planning. We employ distance-based and occlusion management
    techniques in an atlas-based surgical planning tool for oncological
    pelvic surgery. Patient-specific pre-operative MRI scans are registered
    to an atlas model that includes nerve information. Through several
    interactive linked views, the spatial relationships and distances
    between the organs, tumor and risk zones are visualized to improve
    understanding, while avoiding occlusion. In this way, the surgeon
    can examine surgically relevant structures and plan the procedure
    before going into the operating theater, thus raising awareness of
    the autonomic nerve zone regions and potentially reducing post-operative
    complications. Furthermore, we present the results of a domain expert
    evaluation with surgical oncologists that demonstrates the advantages
    of our approach.",
    pdf = "pdfs/Smit-2017-PAS.pdf",
    images = "images/Smit-2017-PAS.jpg",
    thumbnails = "images/Smit-2017-PAS.png",
    doi = "10.1109/TVCG.2016.2598826",
    event = "IEEE SciVis 2016",
    keywords = "atlas, surgical planning, medical visualization",
    location = "Baltimore, USA"
}

    @ARTICLE {LawonnSmit-2017-Survey,
    author = "Lawonn, K. and Smit, N.N. and B{\"u}hler, K. and Preim, B.",
    title = "A Survey on Multimodal Medical Data Visualization",
    journal = "Computer Graphics Forum",
    year = "2017",
    abstract = "Multi-modal data of the complex human anatomy contain a wealth of information. To visualize and explore such data, techniques for emphasizing important structures and controlling visibility are essential. Such fused overview visualizations guide physicians to suspicious regions to be analysed in detail, e.g. with slice-based viewing. We give an overview of the state of the art in multi-modal medical data visualization techniques. Multi-modal medical data consist of multiple scans of the same subject using various acquisition methods, often combining multiple complementary types of information. Three-dimensional visualization techniques for multi-modal medical data can be used in diagnosis, treatment planning, doctor–patient communication as well as interdisciplinary communication. Over the years, multiple techniques have been developed in order to cope with the various associated challenges and present the relevant information from multiple sources in an insightful way. We present an overview of these techniques and analyse the specific challenges that arise in multi-modal data visualization and how recent works aimed to solve these, often using smart visibility techniques. We provide a taxonomy of these multi-modal visualization applications based on the modalities used and the visualization techniques employed. Additionally, we identify unsolved problems as potential future research directions.",
    pdf = "pdfs/LawonnSmit-2017-MULTI.pdf",
    images = "images/LawonnSmit-2017-MULTI.jpg",
    thumbnails = "images/LawonnSmit-2017-MULTI-TN.png",
    note = "CGF Early View",
    issn = "1467-8659",
    doi = "10.1111/cgf.13306",
    keywords = "medical imaging, scientific visualization, volume visualization, multimodal medical data"
}


    @INPROCEEDINGS {Lichtenberg-2016-SLINE,
    author = "Nils Lichtenberg and Noeska Smit and Christian Hansen and Kai Lawonn",
    title = "Sline: Seamless Line Illustration for Interactive Biomedical Visualization",
    booktitle = "Proceedings of VCBM 2016",
    year = "2016",
    month = "sep",
    abstract = "In medical visualization of surface information, problems often arise when visualizing several overlapping structures simultaneously. There is a trade-off between visualizing multiple structures in a detailed way and limiting visual clutter, in order to allow users to focus on the main structures. Illustrative visualization techniques can help alleviate these problems by defining a level of abstraction per structure. However, clinical uptake of these advanced visualization techniques so far has been limited due to the complex parameter settings required. To bring advanced medical visualization closer to clinical application, we propose a novel illustrative technique that offers a seamless transition between various levels of abstraction and detail. Using a single comprehensive parameter, users are able to quickly define a visual representation per structure that fits the visualization requirements for focus and context structures. This technique can be applied to any biomedical context in which multiple surfaces are routinely visualized, such as neurosurgery, radiotherapy planning or drug design. Additionally, we introduce a novel hatching technique that runs in real-time and does not require texture coordinates. An informal evaluation with experts from different biomedical domains reveals that our technique allows users to design focus-and-context visualizations in a fast and intuitive manner.",
    pdf = "pdfs/Lichtenberg-2016-SLINE.pdf",
    images = "images/Smit-2016-SLINE.PNG",
    thumbnails = "images/Smit-2016-SLINE.jpg",
    proceedings = "Proceedings of Eurographics Workshop on Visual Computing in Biology and Medicine",
    event = "VCBM 2016",
    keywords = "surface rendering, medical visualization, illustrative rendering",
    location = "Bergen, Norway"
}