The focus of the IllustraSound project is the visualization of medical ultrasound data. Ultrasound is an important clinical imaging tool used extensively by medical doctors around the world. Ultrasound images are hard to interpret for non-experts, and sometimes even for experts, which creates particular challenges in the communication between medical doctors and their patients. The vision of the IllustraSound project is to provide new visualization technology that addresses this readability problem by enriching the ultrasound data with other types of medical data: intuitive illustrative renderings are added on top of the ultrasound images, so that the patient (or doctor) can get a better understanding of them. The IllustraSound project was funded by the Norwegian Research Council (project no. 193170) and carried out by the Visualization Group at the Department of Informatics, University of Bergen. The project started in September 2009 and ran until 2012.


Feel free to download the open-source IllustraSound prototype. It contains the source files and all libraries required to run the binaries on the Win32 platform. The demonstration bundle additionally contains an example dataset and a document describing how to carry out the Couinaud-enhanced liver examination using the technologies that resulted from the IllustraSound research project.



    @PHDTHESIS {birkeland13thesis,
    author = "{\AA}smund Rognerud Birkeland",
    title = "Ultrasonic Vessel Visualization: From Extraction to Perception",
    school = "Department of Informatics, University of Bergen, Norway",
    year = "2013",
    month = "March",
    abstract = "Ultrasound is one of the most frequently used imaging modalities in modern medicine. Thanks to their high versatility and availability, ultrasound workstations are applied in various medical scenarios, such as diagnosis, treatment planning, intra-operative imaging, and more. Modern ultrasound workstations provide live imaging of anatomical structures, as well as physiological processes, such as blood flow. However, the imaging technique suffers from a high presence of noise, a small scan sector, and strong attenuation artefacts. Thus, traditional techniques for segmentation and visualization are not applicable to ultrasound data. In this thesis, we present our latest advancements in segmentation and visualization techniques, tailored specifically for the characteristics of ultrasound data. We present new methods for interactive vessel segmentation for both 3D freehand and 4D ultrasound. By directly involving the examiner in the segmentation approach as well as combining data from different probe viewpoints, we are able to obtain 3D models of blood vessels rapidly and robustly. With the ability of robust vessel extraction, we introduce novel visualization techniques which utilize the previously acquired 3D vessel models. For anatomical imaging, we present a new physics-based approach for volume clipping, enhanced slice rendering, and even defining curved Couinaud surfaces. The technique creates a deformable membrane to adapt to structures in the underlying data, defined either by predefined segmentation, iso-values, or other data attributes. For functional imaging, medical ultrasound can use the Doppler principle to image blood flow. However, Doppler ultrasound only measures a projected velocity magnitude of the data. In this thesis, we present a technique that uses the direction of the blood vessels in order to reconstruct 3D blood flow from Doppler ultrasound. By extending Doppler ultrasound with this directional information, we are able to apply traditional flow visualization techniques for displaying the blood flow. Finally, we investigated the usage of moving particles as a means to depict velocity in flow visualization. Based on a series of studies targeted at motion perception, we present a new compensation model to correct for distortions in the human visual system. This model can help users make a more consistent estimation of velocities when evaluating the motion of particles.",
    pdf = "pdfs/birkeland13thesis.pdf",
    images = "images/birkeland13thesis.png",
    thumbnails = "images/birkeland13thesis_thumb.png",
    isbn = "?? ",
    project = "illustrasound,medviz,illvis"
    }
    @INPROCEEDINGS {Birkeland13Doppler,
    author = "{\AA}smund Birkeland and Dag Magne Ulvang and Kim Nylund and Trygve Hausken and Odd Helge Gilja and Ivan Viola",
    title = "Doppler-based 3D Blood Flow Imaging and Visualization",
    booktitle = "Proceedings of the 29th Spring Conference on Computer Graphics",
    year = "2013",
    abstract = "Blood flow is a very important part of human physiology. In this paper, we present a new method for estimating and visualizing 3D blood flow on-the-fly based on Doppler ultrasound. We add semantic information about the geometry of the blood vessels in order to recreate the actual velocities of the blood. Assuming a laminar flow, the flow direction is related to the general direction of the vessel. Based on the center line of the vessel, we create a vector field representing the direction of the vessel at any given point. The actual flow velocity is then estimated from the Doppler ultrasound signal by back-projecting the velocity in the measured direction onto the vessel direction. Additionally, we estimate the flux at user-selected cross-sections of the vessel by integrating the velocities over the area of the cross-section. In order to visualize the flow and the flux, we propose a visualization design based on traced particles colored by the flux. The velocities are visualized by animating particles in the flow field. Further, we propose a novel particle velocity legend as a means for the user to estimate the numerical value of the current velocity. Finally, we perform an evaluation of the technique where the accuracy of the velocity estimation is measured using a 4D MRI dataset as a basis for the ground truth.",
    pdf = "pdfs/Birkeland13Doppler.pdf",
    images = "images/Birkeland13Doppler01.png, images/Birkeland13Doppler02.png",
    thumbnails = "images/Birkeland13Doppler01_thumb.png, images/Birkeland13Doppler02_thumb.png",
    project = "illustrasound,medviz,illvis"
    }
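The velocity reconstruction described in the abstract above (back-projecting the Doppler-measured velocity onto the vessel direction under a laminar-flow assumption, then integrating velocities over a cross-section to estimate flux) can be sketched as follows. The function names and the near-perpendicular guard are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def reconstruct_velocity(v_measured, beam_dir, vessel_dir):
    """Back-project a Doppler velocity measured along the beam onto the
    vessel direction, assuming laminar flow parallel to the vessel.

    v_measured : scalar projected velocity from the Doppler signal
    beam_dir, vessel_dir : unit vectors, shape (3,)
    Returns the reconstructed 3D velocity vector along the vessel.
    """
    cos_angle = np.dot(beam_dir, vessel_dir)
    if abs(cos_angle) < 1e-3:
        # Beam nearly perpendicular to the flow: the Doppler signal
        # carries no usable velocity information.
        return np.zeros(3)
    speed = v_measured / cos_angle
    return speed * vessel_dir

def flux(velocities, normal, cell_area):
    """Approximate flux through a cross-section sampled on a grid.

    velocities : (n, 3) velocity vectors at the sample points
    normal     : unit normal of the cross-section
    cell_area  : area associated with each sample point
    """
    return float(np.sum(velocities @ normal) * cell_area)
```

For a beam along z and a vessel at 45 degrees in the xz-plane, the reconstructed vector's z-component reproduces the measured Doppler velocity, as expected from the projection geometry.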
    @INPROCEEDINGS {Viola2013Dirk,
    author = "Ivan Viola and {\AA}smund Birkeland and Veronika \v{S}olt{\'e}szov{\'a} and Linn Helljesen and Helwig Hauser and Spiros Kotopoulis and Kim Nylund and Dag M. Ulvang and Ola K. {\O}ye and Trygve Hausken and Odd H. Gilja",
    title = "High-Quality 3{D} Visualization of In-Situ Ultrasonography",
    booktitle = "EG 2013---Dirk Bartz Prize",
    year = "2013",
    pages = "1-4",
    abstract = "In recent years medical ultrasound has experienced a rapid development in the quality of real-time 3D ultrasound (US) imaging. The image quality of a 3D volume that previously took a few seconds to achieve is now possible in a fraction of a second. This technological advance offers entirely new opportunities for the use of US in the clinic. In our project, we investigate how real-time 3D US can be combined with high-performance processing on today's graphics hardware to allow for high-quality 3D visualization and precise navigation during the examination.",
    images = "images/2013-05-08-DirkBartzPrizeComb.jpg",
    thumbnails = "images/2013-05-08-DirkBartzPrizeComb.jpg",
    doi = "10.2312/conf/EG2013/med/001-004",
    project = "illustrasound,medviz,illvis"
    }


    @ARTICLE {Helljesen12Klinisk,
    author = "Linn Emilie S{\ae}vil Helljesen and Spiros Kotopoulis and Kim Nylund and Ivan Viola and Trygve Hausken and Odd Helge Gilja",
    title = "Klinisk bruk av 3D-ultralyd",
    journal = "Kirurgen",
    year = "2012",
    volume = "2",
    pages = "118--120",
    month = "June",
    pdf = "pdfs/Helljesen12Klinisk.pdf",
    images = "images/Helljesen12Klinisk01.png, images/Helljesen12Klinisk02.png",
    thumbnails = "images/Helljesen12Klinisk01_thumb.png, images/Helljesen12Klinisk02_thumb.png",
    url = "",
    project = "illustrasound,medviz"
    }
    @ARTICLE {Birkeland12TheUltrasound,
    author = "{\AA}smund Birkeland and Veronika \v{S}olt{\'e}szov{\'a} and Dieter H{\"o}nigmann and Odd Helge Gilja and Svein Brekke and Timo Ropinski and Ivan Viola",
    title = "The Ultrasound Visualization Pipeline - A Survey",
    journal = "CoRR",
    year = "2012",
    volume = "abs/1206.3975",
    abstract = "Ultrasound is one of the most frequently used imaging modalities in medicine. Its high spatial resolution, interactive nature, and non-invasiveness make it the first choice in many examinations. Image interpretation is one of ultrasound's main challenges: much training is required to obtain a confident skill level in ultrasound-based diagnostics, and state-of-the-art graphics techniques are needed to provide meaningful visualizations of ultrasound in real-time. In this paper we present the process-pipeline for ultrasound visualization, including an overview of the tasks performed in the specific steps. To provide an insight into the trends of ultrasound visualization research, we have selected a set of significant publications and divided them into a technique-based taxonomy covering the topics pre-processing, segmentation, registration, rendering and augmented reality. For the different technique types we discuss the difference between ultrasound-based techniques and techniques for other modalities.",
    images = "images/Birkeland2012TheUltrasound.png",
    thumbnails = "images/Birkeland2012TheUltrasound_thumb.png",
    url = "",
    project = "illustrasound,medviz,illvis"
    }
    @ARTICLE {Birkeland-2012-IMC,
    author = "{\AA}smund Birkeland and Stefan Bruckner and Andrea Brambilla and Ivan Viola",
    title = "Illustrative Membrane Clipping",
    journal = "Computer Graphics Forum",
    year = "2012",
    volume = "31",
    number = "3",
    pages = "905--914",
    month = "jun",
    abstract = "Clipping is a fast, common technique for resolving occlusions. It  only requires simple interaction, is easily understandable, and thus  has been very popular for volume exploration. However, a drawback  of clipping is that the technique indiscriminately cuts through features.  Illustrators, for example, consider the structures in the vicinity  of the cut when visualizing complex spatial data and make sure that  smaller structures near the clipping plane are kept in the image  and not cut into fragments. In this paper we present a new technique,  which combines the simple clipping interaction with automated selective  feature preservation using an elastic membrane. In order to prevent  cutting objects near the clipping plane, the deformable membrane  uses underlying data properties to adjust itself to salient structures.  To achieve this behaviour, we translate data attributes into a potential  field which acts on the membrane, thus moving the problem of deformation  into the soft-body dynamics domain. This allows us to exploit existing  GPU-based physics libraries which achieve interactive frame rates.  For manual adjustment, the user can insert additional potential fields,  as well as pinning the membrane to interesting areas. We demonstrate  that our method can act as a flexible and non-invasive replacement  of traditional clipping planes.",
    pdf = "pdfs/Birkeland-2012-IMC.pdf",
    vid = "vids/Birkeland12Illustrative.avi",
    images = "images/Birkeland12Illustrative01.png, images/Birkeland12Illustrative02.png, images/Birkeland12Illustrative03.png",
    thumbnails = "images/Birkeland-2012-IMC.png",
    youtube = "",
    note = "presented at EuroVis 2012",
    doi = "10.1111/j.1467-8659.2012.03083.x",
    event = "EuroVis 2012",
    keywords = "clipping, volume rendering, illustrative visualization",
    location = "Vienna, Austria",
    project = "illustrasound,medviz,illvis",
    url = ""
    }
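The membrane idea in the abstract above, translating data attributes into a potential field that pushes a deformable sheet away from salient structures while a smoothness term keeps it taut, can be illustrated with a much-reduced sketch: a 1D membrane (one height per column of a 2D scalar field) relaxed by explicit iteration instead of a GPU soft-body solver. All parameter values and the wrap-around boundary handling are invented for the illustration:

```python
import numpy as np

def relax_membrane(field, plane_y, iters=200, stiffness=0.25, repel=2.0):
    """Sketch of membrane clipping: the membrane starts on the clipping
    plane and is displaced by a potential derived from the data values
    under each vertex, balanced by a smoothness (stiffness) term and a
    weak pull back toward the plane.

    field   : 2D scalar data (rows = depth, columns = membrane vertices)
    plane_y : row index of the initial clipping plane
    Returns one membrane height per column.
    """
    h, w = field.shape
    y = np.full(w, float(plane_y))
    for _ in range(iters):
        yi = np.clip(np.round(y).astype(int), 0, h - 1)
        potential = field[yi, np.arange(w)]   # data value under each vertex
        force = repel * potential             # push membrane off salient data
        # discrete Laplacian as a smoothness force (wrap-around boundaries)
        smooth = stiffness * (np.roll(y, 1) + np.roll(y, -1) - 2.0 * y)
        pull = 0.05 * (plane_y - y)           # weak restoring pull to the plane
        y = y + force + smooth + pull
    return y
```

In data-free regions all forces vanish and the membrane coincides with the ordinary clipping plane, which matches the intended behaviour of a non-invasive clipping-plane replacement.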
    @INPROCEEDINGS {Solteszova12Stylized,
    author = "Veronika \v{S}olt{\'e}szov{\'a} and Ruben Patel and Helwig Hauser and Ivan Viola",
    title = "Stylized Volume Visualization of Streamed Sonar Data",
    booktitle = "Proceedings of Spring Conference on Computer Graphics (SCCG 2012)",
    year = "2012",
    pages = "13--20",
    month = "May",
    abstract = "Current visualization technology implemented in the software for 2D sonars used in marine research is limited to slicing whilst volume visualization is only possible as post processing. We designed and implemented a system which allows for instantaneous volume visualization of streamed scans from 2D sonars without prior resampling to a voxel grid. The volume is formed by a set of most recent scans which are being stored. We transform each scan using its associated transformations to the view-space and slice their bounding box by view-aligned planes. Each slicing plane is reconstructed from the underlying scans and directly used for slice-based volume rendering. We integrated a low frequency illumination model which enhances the depth perception of noisy acoustic measurements. While we visualize the 2D data and time as 3D volumes, the temporal dimension is not intuitively communicated. Therefore, we introduce a concept of temporal outlines. Our system is a result of an interdisciplinary collaboration between visualization and marine scientists. The application of our system was evaluated by independent domain experts who were not involved in the design process in order to determine real life applicability.",
    pdf = "pdfs/Solteszova12Stylized.pdf",
    vid = "vids/Solteszova12Stylized.mp4",
    images = "images/Solteszova12Stylized01.png, images/Solteszova12Stylized02.png, images/Solteszova12Stylized03.png",
    thumbnails = "images/Solteszova12Stylized01_thumb.png, images/Solteszova12Stylized02_thumb.png, images/Solteszova12Stylized03_thumb.png",
    note = "Second best paper and second best presentation awards",
    location = "Smolenice castle, Slovakia",
    project = "illustrasound,medviz,illvis"
    }
    @INPROCEEDINGS {Ford12HeartPad,
    author = "Steven Ford and Gabriel Kiss and Ivan Viola and Stefan Bruckner and Hans Torp",
    title = "HeartPad: Real-Time Visual Guidance for Cardiac Ultrasound",
    booktitle = "Proceedings of Workshop at SIGGRAPH ASIA 2012",
    year = "2012",
    series = "WASA 2012",
    address = "Fusionopolis, Singapore",
    month = "November",
    abstract = "Medical ultrasound is a challenging modality when it comes to image interpretation. The goal we address in this work is to assist the ultrasound examiner and partially alleviate the burden of interpretation. We propose to address this goal with visualization that provides clear cues on the orientation and the correspondence between anatomy and the data being imaged. Our system analyzes the stream of 3D ultrasound data and in real-time identifies distinct features that are the basis for a dynamically deformed mesh model of the heart. The heart mesh is composited with the original ultrasound data to create the data-to-anatomy correspondence. The visualization is broadcasted over the internet allowing, among other opportunities, a direct visualization on the patient on a tablet computer. The examiner interacts with the transducer and with the visualization parameters on the tablet. Our system has been characterized by domain specialists as useful in medical training and for guiding occasional ultrasound users.",
    pdf = "pdfs/Ford12HeartPad.pdf",
    images = "images/Ford12HeartPad01.png, images/Ford12HeartPad02.png",
    thumbnails = "images/Ford12HeartPad01_thumb.png, images/Ford12HeartPad02_thumb.png",
    project = "illustrasound,medviz,illvis"
    }
    @MISC {Helljesen12CEUS,
    author = "Linn Emilie S{\ae}vil Helljesen",
    title = "{CEUS} av leverlesjoner hos pasienter henvist etter uklare funn p{\aa} {CT} ",
    howpublished = "Talk at the NFUD-symposium / Frie foredrag",
    month = "March",
    year = "2012",
    pdf = "pdfs/Helljesen12CEUS.pdf",
    images = "images/no_thumb.png",
    thumbnails = "images/no_thumb.png",
    location = "Stavanger, Norway",
    url = "",
    project = "illustrasound,medviz"
    }
    @ARTICLE {Solteszova12APerceptual,
    author = "Veronika \v{S}olt{\'e}szov{\'a} and Cagatay Turkay and Mark Price and Ivan Viola",
    title = "A Perceptual-Statistics Shading Model",
    journal = "IEEE Transactions on Visualization and Computer Graphics",
    year = "2012",
    volume = "18",
    number = "12",
    pages = "2265--2274",
    month = "Dec",
    abstract = "The process of surface perception is complex and based on several influencing factors, e.g., shading, silhouettes, occluding contours, and top down cognition. The accuracy of surface perception can be measured and the influencing factors can be modified in order to decrease the error in perception. This paper presents a novel concept of how a perceptual evaluation of a visualization technique can contribute to its redesign with the aim of improving the match between the distal and the proximal stimulus. During analysis of data from previous perceptual studies, we observed that the slant of 3D surfaces visualized on 2D screens is systematically underestimated. The visible trends in the error allowed us to create a statistical model of the perceived surface slant. Based on this statistical model we obtained from user experiments, we derived a new shading model that uses adjusted surface normals and aims to reduce the error in slant perception. The result is a shape-enhancement of visualization which is driven by an experimentally-founded statistical model. To assess the efficiency of the statistical shading model, we repeated the evaluation experiment and confirmed that the error in perception was decreased. Results of both user experiments are publicly-available datasets.",
    pdf = "pdfs/Solteszova12APerceptual.pdf",
    images = "images/Solteszova12APerceptual01.png, images/Solteszova12APerceptual02.png, images/Solteszova12APerceptual03.png",
    thumbnails = "images/Solteszova12APerceptual01_thumb.png, images/Solteszova12APerceptual02_thumb.png, images/Solteszova12APerceptual03_thumb.png",
    event = "IEEE Scientific Visualization Conference 2012",
    location = "Seattle, WA, USA",
    doi = "10.1109/TVCG.2012.188",
    issn = "1077-2626",
    extra = "extra/",
    project = "illustrasound,medviz,illvis"
    }
    @MISC {Helljesen12Contrast,
    author = "Linn Emilie S{\ae}vil Helljesen and Kim Nylund and Trygve Hausken and Georg Dimcevski and Odd Helge Gilja",
    title = "Contrast-Enhanced Ultrasonography of liver lesions in patients referred after inconclusive findings on {CT} - Preliminary Data",
    howpublished = "Poster presented at the EUROSON Conference 2012",
    month = "April",
    year = "2012",
    abstract = "Ultrasound is cost-effective and has become a useful tool in modern clinical diagnostics.",
    pdf = "pdfs/Helljesen12Contrast.pdf",
    images = "images/Helljesen12Contrast01.png, images/Helljesen12Contrast02.png",
    thumbnails = "images/Helljesen12Contrast01_thumb.png, images/Helljesen12Contrast02_thumb.png",
    location = "Madrid, Spain",
    url = "",
    project = "illustrasound,medviz"
    }
    @INPROCEEDINGS {Solteszova12Lowest,
    author = "Veronika \v{S}olt{\'e}szov{\'a} and Linn Emilie S{\ae}vil Helljesen and Wolfgang Wein and Odd Helge Gilja and Ivan Viola",
    title = "Lowest-Variance Streamlines for Filtering of 3D Ultrasound",
    booktitle = "Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM 2012)",
    year = "2012",
    pages = "41--48",
    month = "Sep",
    abstract = "Ultrasound as an acoustic imaging modality suffers from various kinds of noise. The presence of noise especially hinders the 3D visualization of ultrasound data, both in terms of resolving the spatial occlusion of the signal by surrounding noise, and mental decoupling of the signal from noise. This paper presents a novel type of structure-preserving filter that has been specifically designed to eliminate the presence of speckle and random noise in 3D ultrasound datasets. This filter is based on a local distribution of variance for a given voxel. The lowest variance direction is assumed to be aligned with the direction of the structure. A streamline integration over the lowest-variance vector field defines the filtered output value. The new filter is compared to other popular filtering approaches and its superiority is documented on several use cases. A case study where a clinician was delineating vascular structures of the liver from 3D visualizations further demonstrates the benefits of our approach compared to the state of the art.",
    pdf = "pdfs/Solteszova12Lowest.pdf",
    images = "images/Solteszova12Lowest01.png, images/Solteszova12Lowest02.png",
    thumbnails = "images/Solteszova12Lowest01_thumb.png, images/Solteszova12Lowest02_thumb.png",
    location = "Norrk{\"o}ping, Sweden",
    url = "",
    doi = "10.2312/VCBM/VCBM12/041-048",
    project = "illustrasound,medviz,illvis"
    }
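The core idea of the lowest-variance filter above can be shown with a much-simplified 2D sketch: at each pixel, probe intensity profiles along several directions, keep the direction with minimal variance (assumed to follow the local structure), and output the mean along it. The actual method works in 3D and additionally integrates streamlines over the lowest-variance vector field, which this sketch omits:

```python
import numpy as np

def lowest_variance_filter(img, radius=2, n_dirs=8):
    """Structure-preserving filter sketch for a 2D image.

    For each pixel, sample a short intensity profile along n_dirs
    directions, pick the direction whose profile has the lowest
    variance, and replace the pixel by that profile's mean.
    """
    h, w = img.shape
    angles = np.linspace(0.0, np.pi, n_dirs, endpoint=False)
    offsets = np.arange(-radius, radius + 1)
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            best_var, best_mean = np.inf, float(img[y, x])
            for a in angles:
                # sample positions along direction a, clamped to the image
                ys = np.clip(np.round(y + offsets * np.sin(a)).astype(int), 0, h - 1)
                xs = np.clip(np.round(x + offsets * np.cos(a)).astype(int), 0, w - 1)
                profile = img[ys, xs]
                v = profile.var()
                if v < best_var:
                    best_var, best_mean = v, profile.mean()
            out[y, x] = best_mean
    return out
```

Because averaging happens only along the minimal-variance direction, a sharp edge between two homogeneous regions is left untouched, unlike an isotropic smoothing filter.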
    @INPROCEEDINGS {Oye12Real,
    author = "Ola Kristoffer {\O }ye and Wolfgang Wein and Dag Magne Ulvang and Knut Matre and Ivan Viola",
    title = "Real Time Image-based Tracking of 4D Ultrasound Data",
    booktitle = "Lecture Notes in Computer Science (LNCS)",
    year = "2012",
    address = "Nice, France",
    month = "October",
    abstract = "We propose a methodology to perform real time image-based tracking on streaming 4D ultrasound data, using image registration to deduce the positioning of each ultrasound frame in a global coordinate system. Our method provides an alternative approach to traditional external tracking devices used for tracking probe movements. We compare the performance of our method against magnetic tracking on phantom and liver data, and show that our method is able to provide results in agreement with magnetic tracking.",
    images = "images/Oye12Real01.png, images/Oye12Real02.png",
    thumbnails = "images/Oye12Real01_thumb.png, images/Oye12Real02_thumb.png",
    url = "",
    event = "15th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)",
    extra = "",
    project = "illustrasound,medviz"
    }


    @INCOLLECTION {oye11illustrativeCouinaud,
    author = "Ola Kristoffer {\O }ye and Dag Magne Ulvang and Odd Helge Gilja and Helwig Hauser and Ivan Viola",
    title = "Illustrative Couinaud Segmentation for Ultrasound Liver Examinations",
    booktitle = "Smart Graphics",
    publisher = "Springer Berlin / Heidelberg",
    year = "2011",
    volume = "6815",
    series = "Lecture Notes in Computer Science",
    pages = "60--77",
    abstract = "Couinaud segmentation is a widely used liver partitioning scheme for describing the spatial relation between diagnostically relevant anatomical and pathological features in the liver. In this paper, we propose a new methodology for effectively conveying these spatial relations during the ultrasound examinations. We visualize the two-dimensional ultrasound slice in the context of a three-dimensional Couinaud partitioning of the liver. The partitioning is described by planes in 3D reflecting the vascular tree anatomy, specified in the patient by the examiner using her natural interaction tool, i.e., the ultrasound transducer with positional tracking. A pre-defined generic liver model is adapted to the specified partitioning in order to provide a representation of the patient's liver parenchyma. The specified Couinaud partitioning and parenchyma model approximation is then used to enhance the examination by providing visual aids to convey the relationships between the placement of the ultrasound plane and the partitioned liver. The 2D ultrasound slice is augmented with Couinaud partitioning intersection information and dynamic label placement. A linked 3D view shows the ultrasound slice, cutting the liver and displayed using fast exploded view rendering. The described visual augmentation has been characterized by the clinical personnel as very supportive during the examination procedure, and also as a good basis for pre-operative case discussions.",
    images = "images/oye11illustrativeCouinaud1.jpg, images/oye11illustrativeCouinaud2.jpg, images/oye11illustrativeCouinaud3.jpg",
    thumbnails = "images/oye11illustrativeCouinaud1_thumb.jpg, images/oye11illustrativeCouinaud2_thumb.jpg, images/oye11illustrativeCouinaud3_thumb.jpg",
    isbn = "978-3-642-22570-3",
    url = "",
    project = "illustrasound,medviz,illvis"
    }
    @INPROCEEDINGS {solteszova11chromatic,
    author = "Veronika \v{S}olt{\'e}szov{\'a} and Daniel Patel and Ivan Viola",
    title = "Chromatic Shadows for Improved Perception",
    booktitle = "Proc. Non-photorealistic Animation and Rendering (NPAR 2011)",
    year = "2011",
    pages = "105--115",
    abstract = "Soft shadows are effective depth and shape cues. However, traditional shadowing algorithms decrease the luminance in shadow areas. The features in shadow become dark and thus shadowing causes information hiding. For this reason, in shadowed areas, medical illustrators decrease the luminance less and compensate the lower luminance range by adding color, i.e., by introducing a chromatic component. This paper presents a novel technique which enables an interactive setup of an illustrative shadow representation for preventing overdarkening of important structures. We introduce a scalar attribute for every voxel denoted as shadowiness and propose a shadow transfer function that maps the shadowiness to a color and a blend factor. Typically, the blend factor increases linearly with the shadowiness. We then let the original object color blend with the shadow color according to the blend factor. We suggest a specific shadow transfer function, designed together with a medical illustrator, which shifts the shadow color towards blue. This shadow transfer function is quantitatively evaluated with respect to relative depth and surface perception.",
    images = "images/solteszova11chromatic3.jpg, images/solteszova11chromatic2.jpg, images/solteszova11chromatic.jpg, images/solteszova11chromatic4.jpg",
    thumbnails = "images/solteszova11chromatic3_thumb.jpg, images/solteszova11chromatic2_thumb.jpg, images/solteszova11chromatic_thumb.jpg, images/solteszova11chromatic4_thumb.jpg",
    location = "Vancouver, Canada",
    url = "",
    project = "illustrasound,medviz,illvis"
    }
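The shadow transfer function described above (shadowiness mapped to a blend factor and a bluish shadow color, then blended with the object color) might look like this in a minimal sketch; the specific color and maximum blend factor are invented placeholders, not the values designed with the medical illustrator:

```python
import numpy as np

def shadow_transfer(shadowiness, shadow_color=(0.1, 0.15, 0.45), max_blend=0.7):
    """Map a per-voxel shadowiness in [0, 1] to a shadow color and a blend
    factor. The blend factor grows linearly with shadowiness, as in the
    typical setup described in the paper; the bluish color and max_blend
    are illustrative assumptions."""
    blend = max_blend * np.clip(shadowiness, 0.0, 1.0)
    return np.asarray(shadow_color, dtype=float), blend

def apply_chromatic_shadow(base_color, shadowiness):
    """Blend the object's base color with the shadow color by the blend
    factor, shifting shadowed regions toward blue instead of pure black."""
    shadow_color, blend = shadow_transfer(shadowiness)
    return (1.0 - blend) * np.asarray(base_color, dtype=float) + blend * shadow_color
```

Note that even at full shadowiness the result keeps a nonzero luminance and a chromatic (blue) bias, which is exactly the overdarkening prevention the technique aims for.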
    @INPROCEEDINGS {palmerius11ultrasoundPalpation,
    author = "Karljohan Lundin Palmerius and Roald Flesland Havre and Odd Helge Gilja and Ivan Viola",
    title = "Ultrasound palpation by haptic elastography",
    booktitle = "Proc. International Symposium on Computer-Based Medical Systems (CBMS)",
    year = "2011",
    month = "june",
    abstract = "Palpation is an important method in the medical physical examination. Surface palpation alone, however, cannot be used in many situations due to the anatomical positions. Elastography images are therefore in many cases a complement to other imaging modalities. In this article we present a method for providing haptic feedback from elastography imaging data, allowing palpation of the hardness data. A prototype implementation was used in a demonstration session with domain experts providing feedback on the presented algorithm and also on the basic principle of palpating data from elastography imaging.",
    images = "images/palmerius11ultrasoundPalpation.jpg",
    thumbnails = "images/palmerius11ultrasoundPalpation_thumb.jpg",
    location = "Bristol, UK",
    url = "",
    project = "illustrasound,medviz,illvis"
    }
    @ARTICLE {angelelli11ultrasoundStatistics,
    author = "Paolo Angelelli and Kim Nylund and Odd Helge Gilja and Helwig Hauser",
    title = "Interactive Visual Analysis of Contrast-enhanced Ultrasound Data based on Small Neighborhood Statistics",
    journal = "Computers \& Graphics - Special Issue on Visual Computing in Biology and Medicine",
    year = "2011",
    volume = "35",
    number = "2",
    pages = "218--226",
    abstract = "Contrast-enhanced ultrasound (CEUS) has recently become an important technology for lesion detection and characterization in cancer diagnosis. CEUS is used to investigate the perfusion kinetics in tissue over time, which relates to tissue vascularization. In this paper we present a pipeline that enables interactive visual exploration and semi-automatic segmentation and classification of CEUS data. For the visual analysis of this challenging data, with characteristic noise patterns and residual movements, we propose a robust method to derive expressive enhancement measures from small spatio-temporal neighborhoods. We use this information in a staged visual analysis pipeline that leads from a more local investigation to global results such as the delineation of anatomic regions according to their perfusion properties. To make the visual exploration interactive, we have developed an accelerated framework based on the OpenCL library that exploits modern many-core hardware. Using our application, we were able to analyze datasets from CEUS liver examinations, being able to identify several focal liver lesions, segment and analyze them quickly and precisely, and eventually characterize them.",
    pdf = "pdfs/angelelli11CEUSIVA.pdf",
    vid = "vids/angelelli11CEUSSegmentation.wmv",
    images = "images/angelelli11ultrasoundStatistics2.jpg, images/angelelli11ultrasoundStatistics1.jpg",
    thumbnails = "images/angelelli11ultrasoundStatistics2_thumb.jpg, images/angelelli11ultrasoundStatistics1_thumb.jpg",
    url = "",
    project = "illustrasound,medviz,illvis"
    }
    @ARTICLE {ruiz11automaticTFs,
    author = "Marc Ruiz and Anton Bardera and Imma Boada and Ivan Viola and Miquel Feixas and Mateu Sbert",
    title = "Automatic Transfer Functions based on Informational Divergence",
    journal = "IEEE Transactions on Visualization and Computer Graphics",
    year = "2011",
    volume = "17",
    number = "12",
    pages = "1932--1941",
    abstract = "In this paper we present a framework to define transfer functions from a target distribution provided by the user. A target distribution can reflect the data importance, or highly relevant data value interval, or spatial segmentation. Our approach is based on a communication channel between a set of viewpoints and a set of bins of a volume data set, and it supports 1D as well as 2D transfer functions including the gradient information. The transfer functions are obtained by minimizing the informational divergence or Kullback-Leibler distance between the visibility distribution captured by the viewpoints and a target distribution selected by the user. The use of the derivative of the informational divergence allows for a fast optimization process. Different target distributions for 1D and 2D transfer functions are analyzed together with importance-driven and view-based techniques.",
    images = "images/ruiz11automaticTFs1.jpg, images/ruiz11automaticTFs2.jpg, images/ruiz11automaticTFs3.jpg, images/ruiz11automaticTFs4.jpg",
    thumbnails = "images/ruiz11automaticTFs1_thumb.jpg, images/ruiz11automaticTFs2_thumb.jpg, images/ruiz11automaticTFs3_thumb.jpg, images/ruiz11automaticTFs4_thumb.jpg",
    event = "IEEE Visualization Conference 2011",
    location = "Providence, RI, USA",
    project = "illustrasound"
    }


    @ARTICLE {Bruckner-2010-HVC,
    author = "Stefan Bruckner and Peter Rautek and Ivan Viola and Mike Roberts and Mario Costa Sousa and Meister Eduard Gr{\"o}ller",
    title = "Hybrid Visibility Compositing and Masking for Illustrative Rendering",
    journal = "Computers \& Graphics",
    year = "2010",
    volume = "34",
    number = "4",
    pages = "361--369",
    month = "aug",
    abstract = "In this paper, we introduce a novel framework for the compositing of interactively rendered 3D layers tailored to the needs of scientific illustration. Currently, traditional scientific illustrations are produced in a series of composition stages, combining different pictorial elements using 2D digital layering. Our approach extends the layer metaphor into 3D without giving up the advantages of 2D methods. The new compositing approach allows for effects such as selective transparency, occlusion overrides, and soft depth buffering. Furthermore, we show how common manipulation techniques such as masking can be integrated into this concept. These tools behave just like in 2D, but their influence extends beyond a single viewpoint. Since the presented approach makes no assumptions about the underlying rendering algorithms, layers can be generated based on polygonal geometry, volumetric data, point-based representations, or others. Our implementation exploits current graphics hardware and permits real-time interaction and rendering.",
    pdf = "pdfs/Bruckner-2010-HVC.pdf",
    images = "images/Bruckner-2010-HVC.jpg",
    thumbnails = "images/Bruckner-2010-HVC.png",
    youtube = "",
    doi = "10.1016/j.cag.2010.04.003",
    keywords = "compositing, masking, illustration",
    project = "illustrasound,medviz,illvis",
    url = ""
    }

    [PDF] [VID] [Bibtex]
    @INPROCEEDINGS {angelelli10guided,
    author = "Paolo Angelelli and Ivan Viola and Kim Nylund and Odd Helge Gilja and Helwig Hauser",
    title = "Guided Visualization of Ultrasound Image Sequences",
    booktitle = "Proceedings of Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM)",
    year = "2010",
    pages = "125--132",
    abstract = "Ultrasonography allows informative and expressive real time examinations of patients. Findings are usually reported as printouts, screen shots and video sequences. However, in certain scenarios, the amount of imaged ultrasound data is considerable or it is challenging to detect the anatomical features of interest. Post-examination access to the information present in the data is, therefore, cumbersome. The examiner must, in fact, review entire video sequences or risk losing relevant information by reducing the examination to single screen shots and printouts. In this paper we propose a novel post-processing pipeline for guided visual exploration of ultrasound video sequences, to allow easier and richer exploration and analysis of the data. We demonstrate the usefulness of this approach by applying it to a liver examination case, showing easier and quicker ultrasound image selection and data exploration.",
    pdf = "pdfs/angelelli2010usvideovis.pdf",
    vid = "vids/angelelli10DOISound.mp4",
    images = "images/angelelli10guided0.jpg, images/angelelli10guided3.jpg, images/angelelli10guided2.jpg, images/angelelli10guided4.jpg",
    thumbnails = "images/angelelli10guided0_thumb.jpg, images/angelelli10guided3_thumb.jpg, images/angelelli10guided2_thumb.jpg, images/angelelli10guided4_thumb.jpg",
    location = "Leipzig, Germany",
    project = "illustrasound,medviz,illvis"
    }

    [PDF] [DOI] [YT] [Bibtex]
    @ARTICLE {Solteszova-2010-MOS,
    author = "Veronika \v{S}olt{\'e}szov{\'a} and Daniel Patel and Stefan Bruckner and Ivan Viola",
    title = "A Multidirectional Occlusion Shading Model for Direct Volume Rendering",
    journal = "Computer Graphics Forum",
    year = "2010",
    volume = "29",
    number = "3",
    pages = "883--891",
    month = "jun",
    abstract = "In this paper, we present a novel technique which simulates directional light scattering for more realistic interactive visualization of volume data. Our method extends the recent directional occlusion shading model by enabling light source positioning with practically no performance penalty. Light transport is approximated using a tilted cone-shaped function which leaves elliptic footprints in the opacity buffer during slice-based volume rendering. We perform an incremental blurring operation on the opacity buffer for each slice in front-to-back order. This buffer is then used to define the degree of occlusion for the subsequent slice. Our method is capable of generating high-quality soft shadowing effects, allows interactive modification of all illumination and rendering parameters, and requires no pre-computation.",
    pdf = "pdfs/Solteszova-2010-MOS.pdf",
    images = "images/Solteszova-2010-MOS.jpg",
    thumbnails = "images/Solteszova-2010-MOS.png",
    youtube = "",
    doi = "10.1111/j.1467-8659.2009.01695.x",
    event = "EuroVis 2010",
    keywords = "global illumination, volume rendering, shadows, optical model",
    location = "Bordeaux, France",
    project = "illustrasound,medviz,illvis",
    url = ""
    }


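The occlusion shading model above accumulates an opacity buffer during front-to-back slicing, incrementally blurs it, and uses the blurred buffer to darken the next slice. A toy sketch of that sweep on a stack of 1D slices (a shifted box filter stands in for the paper's tilted elliptic cone footprint; all names are illustrative, not the paper's code):

```python
import numpy as np

def occlusion_sweep(volume_alpha, blur=1, tilt=0):
    """Front-to-back sweep over slices of per-voxel opacity.
    An opacity buffer is accumulated and incrementally blurred
    (box filter, optionally shifted by `tilt` pixels to mimic a
    tilted light direction); the blurred buffer defines the
    occlusion applied to each subsequent slice."""
    n_slices, width = volume_alpha.shape
    buf = np.zeros(width)                   # accumulated opacity buffer
    shading = np.ones_like(volume_alpha)    # 1.0 = fully lit
    kernel = np.ones(2 * blur + 1) / (2 * blur + 1)
    for s in range(n_slices):
        # light reaching this slice is attenuated by the blurred buffer
        shading[s] = 1.0 - np.clip(buf, 0.0, 1.0)
        # accumulate this slice's opacity (front-to-back compositing)
        buf = buf + (1.0 - buf) * volume_alpha[s]
        # shift for a tilted light source, then incremental blur
        if tilt:
            buf = np.roll(buf, tilt)
        buf = np.convolve(buf, kernel, mode="same")
    return shading
```

An opaque voxel in an early slice thus casts a soft shadow that widens with each subsequent slice, which is the effect the paper exploits for interactive soft shadowing.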
    @BOOK {sbert09informationTheory,
    author = "Mateu Sbert and Miquel Feixas and Jaume Rigau and Miguel Chover and Ivan Viola",
    title = "Information Theory Tools for Computer Graphics",
    publisher = "Morgan \& Claypool Publishers",
    year = "2009",
    series = "Synthesis Lectures on Computer Graphics and Animation",
    abstract = "Information theory (IT) tools, widely used in scientific fields such as engineering, physics, genetics, neuroscience, and many others, are also emerging as useful transversal tools in computer graphics. In this book, we present the basic concepts of IT and how they have been applied to the graphics areas of radiosity, adaptive ray-tracing, shape descriptors, viewpoint selection and saliency, scientific visualization, and geometry simplification. Some of the approaches presented, such as the viewpoint techniques, are now the state of the art in visualization. Almost all of the techniques presented in this book have been previously published in peer-reviewed conference proceedings or international journals. Here, we have stressed their common aspects and presented them in a unified way, so the reader can clearly see which problems IT tools can help solve, which specific tools to use, and how to apply them. A basic level of knowledge in computer graphics is required but basic concepts in IT are presented. The intended audiences are both students and practitioners of the fields above and related areas in computer graphics. In addition, IT practitioners will learn about these applications.",
    images = "images/viola09hand.jpg",
    thumbnails = "images/viola09hand_thumb.jpg",
    isbn = "1598299298",
    url = "",
    project = "illustrasound,illvis"
    }


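Among the IT tools the book surveys, viewpoint selection commonly scores a candidate view by the Shannon entropy of the normalized projected face areas: a view that balances visibility over many faces scores higher. A minimal sketch of that score (the helper name is our own):

```python
import numpy as np

def viewpoint_entropy(projected_areas):
    """Shannon entropy (bits) of the normalized per-face projected
    areas; a higher value means a more balanced, informative view."""
    a = np.asarray(projected_areas, dtype=float)
    p = a[a > 0] / a.sum()          # drop back-facing / hidden faces
    return float(-np.sum(p * np.log2(p)))
```

Four equally visible faces give the maximum of 2 bits, while a view showing a single face gives 0, so ranking candidate viewpoints reduces to maximizing this score.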
    [VID] [Bibtex]
    @INPROCEEDINGS {viola08illustrasound,
    author = "Ivan Viola and Kim Nylund and Ola Kristoffer {\O}ye and Dag Magne Ulvang and Odd Helge Gilja and Helwig Hauser",
    title = "Illustrated Ultrasound for Multimodal Data Interpretation of Liver Examinations",
    booktitle = "Proceedings of Eurographics Workshop on Visual Computing in Biomedicine",
    year = "2008",
    pages = "125--133",
    month = "Oct",
    abstract = "Traditional visualization of real-time 2D ultrasound data is difficult to interpret, even for experienced medical personnel. To make the interpretation during the education phase easier, we enhance the visualization during liver examinations with an abstracted depiction of relevant anatomical structures, here denoted as illustrated ultrasound. The specifics of enhancing structures are available through an interactively co-registered computed tomography, which has been enhanced by semantic information. To assist the orientation in the liver, we partition the liver into Couinaud segments. They are defined in a rapid segmentation process based on linked 2D slice views and 3D exploded views. The semantics are interactively related from the co-registered modality to the real-time ultrasound via co-registration. During the illustrated ultrasound examination training we provide visual enhancements that depict which liver segments are intersected by the ultrasound slice.",
    vid = "vids/viola08illustrasound.mp4",
    images = "images/viola08illustrasound.jpg, images/viola08illustrasound1.jpg, images/viola08illustrasound2.jpg, images/viola08illustrasound3.jpg",
    thumbnails = "images/viola08illustrasound_thumb.jpg, images/viola08illustrasound1_thumb.jpg, images/viola08illustrasound2_thumb.jpg, images/viola08illustrasound3_thumb.jpg",
    location = "Delft, The Netherlands",
    url = "",
    project = "illvis,illustrasound,medviz"
    }