About me

somada141
Melbourne, Australia

Bio: Electrical & Computer Engineer with an MSc in Telecommunications and a PhD in Biomedical Engineering. Strong knowledge and experience in computational algorithm development, high-performance computing, multi-physics simulations, big-data analysis, and medical imaging/therapy modalities. Extensive experience collaborating with medical & technical personnel, researchers, and industrial partners on large international projects such as the safety evaluation of medical devices, treatment planning & optimization of therapeutic modalities, and the development of next-generation simulation platforms.


6 thoughts on “About me”

  1. I have just found your excellent link about Python and DICOM. We are trying to extract a poly mesh from a DICOM file created by a 3D ultrasound sonograph. Our aim is to use our college’s 3D printers to make a physical model of a fetus (12 to 16 weeks old). Do you have any experience of extracting a mesh from an ultrasound scan?
    Malcolm Kesson
    Dept. Visual Effects
    Savannah College of Art and Design
    Savannah, GA, USA

    • Hey Malcolm, while I haven’t done such a surface extraction before, I doubt it would be too hard 🙂

      That being said, you’re gonna have a lot of noise in your image data (speckle, holes, imaging artifacts) which you’d likely want to ‘clean’ prior to extracting your surface, so I would suggest basic smoothing and segmentation with SimpleITK and then surface extraction with VTK; I’ve appended a rough sketch of such a pipeline at the end of this reply. I take it you’ve read my corresponding posts on those topics?

      Are we talking several planar cross-sections of the fetus or the kind of dataset used to extract fetus models in a 3D B-mode scan?

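      In case it helps, here’s roughly the pipeline I have in mind. This is only a quick, untested sketch: the DICOM directory, the seed point, and the intensity thresholds are all placeholders you’d need to adapt to your scan.

      ```python
      # Rough, untested sketch: assumes the sonograph exported a DICOM series
      # that SimpleITK can read as a single 3D volume, and that a simple
      # region-growing segmentation can roughly separate the fetus from the
      # surrounding fluid. Every path, seed, and threshold is a placeholder.
      import SimpleITK as sitk
      import vtk

      dicom_dir = "/path/to/ultrasound/dicom"  # placeholder

      # Read the DICOM series into one 3D image
      reader = sitk.ImageSeriesReader()
      reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
      img = sitk.Cast(reader.Execute(), sitk.sitkFloat32)

      # Smooth out the speckle noise typical of B-mode ultrasound
      img_smooth = sitk.CurvatureFlow(image1=img,
                                      timeStep=0.125,
                                      numberOfIterations=10)

      # Crude segmentation: region-growing from a seed placed inside the fetus
      # (index coordinates and intensity bounds are guesses to be tuned)
      seed = (100, 100, 50)
      label = sitk.ConnectedThreshold(img_smooth,
                                      seedList=[seed],
                                      lower=80,
                                      upper=255,
                                      replaceValue=1)

      # Hand the label map to VTK through a MetaImage file (keeps spacing/origin)
      sitk.WriteImage(label, "label.mha")

      mha_reader = vtk.vtkMetaImageReader()
      mha_reader.SetFileName("label.mha")
      mha_reader.Update()

      # Extract the surface of label '1' and save it as an STL for 3D printing
      dmc = vtk.vtkDiscreteMarchingCubes()
      dmc.SetInputConnection(mha_reader.GetOutputPort())
      dmc.GenerateValues(1, 1, 1)
      dmc.Update()

      writer = vtk.vtkSTLWriter()
      writer.SetInputConnection(dmc.GetOutputPort())
      writer.SetFileName("fetus_surface.stl")
      writer.Write()
      ```

      If the region-growing picks up too much fluid or too many artifacts, swapping ConnectedThreshold for a plain BinaryThreshold plus some morphological clean-up (e.g. BinaryMorphologicalClosing) would be the usual next thing to try.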

  2. Hi!

    I just wanted to thank you for your excellent work! I am an MSc student in Biomedical Engineering and your posts are extremely helpful and real time savers, since I am focusing my work on volume rendering and surface extraction from head CT scans.

    Thank you very much!

  3. Great posts about SimpleITK for processing DICOMs.

    When you do brain segmentation between grey and white matter, you specify that you’re limiting yourself to just 2D segmentation within a given slice because the 3D segmentation is more computationally expensive.

    If I wanted to perform the 3D segmentation, would I simply pass the entire 3D SimpleITK image in and use a 3D point for the seed? Is there any other trick, or does SimpleITK know how to handle the operations in 3D?

    Thanks.

    • Hey Tony, you’re sorta posting this on the wrong page, mate 🙂

      But, to answer your question: yeah, doing the operations in 3D is entirely straightforward. You may have noticed that we explicitly sliced the originally 3D image down to a 2D one in that post, and all you gotta do is skip that step 🙂 A minimal sketch is included below.

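      For what it’s worth, here’s a minimal (untested) sketch of what I mean; the file name, seed coordinates, and intensity bounds are placeholders. The only practical difference is memory and runtime, which is why the post stuck to a single slice.

      ```python
      # Same recipe as the 2D case, only without the slicing step: keep the
      # full 3D image and give ConnectedThreshold a 3-component (x, y, z)
      # seed in index coordinates. All values below are placeholders.
      import SimpleITK as sitk

      # Placeholder 3D volume; cast to float for the smoothing filter
      img = sitk.Cast(sitk.ReadImage("/path/to/brain_volume.mha"),
                      sitk.sitkFloat32)

      # The smoothing filter operates on whatever dimensionality the input has
      img_smooth = sitk.CurvatureFlow(image1=img,
                                      timeStep=0.125,
                                      numberOfIterations=5)

      # 3D seed instead of a 2D one; the seed just has to match the image
      # dimensionality
      seed = (150, 75, 30)  # (x, y, z) index, placeholder
      white_matter = sitk.ConnectedThreshold(img_smooth,
                                             seedList=[seed],
                                             lower=130,
                                             upper=190,
                                             replaceValue=1)

      sitk.WriteImage(white_matter, "white_matter_3d.mha")
      ```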
