Available Projects

Topic 1: Multi-modal image disease mapping

One in four people will be affected by cancer in their lifetime. Our research aims to produce computationally derived cancer disease maps that extract and quantify important disease characteristics from a very large biomedical image data repository. The outcome will vastly improve personalised diagnosis and treatment of these cancers by providing new insights into how some cancers spread and how they are unique to individuals.

1. Modelling tumour growth and spread in PET-CT imaging data

PET-CT is regarded as the imaging modality of choice for the evaluation, staging and assessment of response in most cancers. It is also common for PET-CT scans to be acquired at intervals during treatment to monitor the patient’s response to therapy, e.g., whether the cancer is shrinking, growing, or spreading to other sites. In diseases such as lymphoma, there can be dozens or hundreds of sites of disease, some of which may change independently of other sites during treatment (e.g., some sites may grow while others shrink). The current technique for quantifying these changes is to either report on the disease burden as a whole or to manually analyse each site, which is not feasible as the number of disease sites increases.

In this project, we will derive a new deep learning technique for modelling changes across multiple disease sites by integrating convolutional neural networks (for analysing image data) and recurrent neural networks (for analysing temporal information). Ultimately, this will provide additional information to physicians when assessing patient response to therapy.
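As a rough illustration of this CNN-plus-RNN pattern, the sketch below (assuming PyTorch; all module names, layer sizes and the three-way response classes are placeholders, not the project's actual architecture) encodes each time point's image patch with a small CNN and summarises the sequence with an LSTM.

```python
# Minimal sketch (PyTorch assumed): a per-timepoint CNN encoder feeding an LSTM
# that models how one disease site changes across scans. Shapes, layer sizes and
# the response classes are illustrative only.
import torch
import torch.nn as nn

class SiteChangeModel(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=64, num_classes=3):
        super().__init__()
        # CNN encoder applied independently to each timepoint's image patch
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        # LSTM aggregates the per-timepoint features into a temporal summary
        self.temporal = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # e.g. predict a response category per site (growing / stable / shrinking)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (sites, timepoints, 1, H, W) patches around each disease site
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.temporal(feats)
        return self.classifier(h[-1])

model = SiteChangeModel()
scans = torch.randn(2, 4, 1, 64, 64)   # 2 sites, 4 timepoints, 64x64 patches
print(model(scans).shape)               # -> torch.Size([2, 3])
```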

2. Functional structure detection on PET-CT imaging data

PET-CT is regarded as the imaging modality of choice for the evaluation, staging and assessment of response in most cancers. Sites of disease (abnormalities) usually comprise regions of high uptake (hot spots) together with other visual characteristics such as shape, volume and localisation. Existing methods for detecting abnormalities rely on modelling the characteristics of these abnormalities; however, this is challenging due to inconsistent image-omic (visual) features, varying anatomical localisation, and the similarity of abnormalities to some normal structures that also exhibit high uptake.

In this project, we aim to develop a new approach that automatically detects abnormalities in a reverse manner, through the filtering (removal) of normal, known structures that occur in the human body. We will pioneer state-of-the-art deep learning algorithms to iteratively filter out known structures, leaving abnormal structures as the output. This project could significantly improve the segmentation and classification performance of existing methods and potentially increase physicians' diagnostic confidence in a clinical environment.

3. Robust segmentation and classification with limited medical imaging training data

Deep learning methods based on convolutional neural networks (CNNs) have recently achieved great success in image classification, object detection and segmentation problems. This success is primarily attributed to the capability of CNNs to learn image feature representations that carry a high level of semantic meaning. Many investigators have therefore attempted to adapt deep learning methods to medical image segmentation and classification problems. However, annotated medical image training data are comparatively scarce because manual annotation of medical images is costly and complicated. Consequently, without sufficient training data to cover all variations (e.g., lesions from different patients can differ greatly in size, shape and texture), deep learning methods cannot provide accurate results.

In this project, we will derive a new approach to train an accurate deep learning model for medical images with limited data. More specifically, we will develop a deep learning based data augmentation approach that derives additional information and features to boost the training process. Ultimately, this project could change the existing way of training deep models for medical imaging and minimise the cost of building training datasets.
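For context, the sketch below (assuming PyTorch and torchvision; transform choices and parameters are illustrative) shows a conventional augmentation baseline that expands a small labelled batch with perturbed copies. The learned, deep-learning-based augmentation proposed here would aim to go beyond this kind of hand-chosen pipeline.

```python
# Minimal sketch (PyTorch/torchvision assumed): a conventional augmentation
# baseline for scarce labelled medical image patches. Parameters are illustrative.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

def augmented_batch(images, copies=4):
    """Expand a small labelled batch by generating perturbed copies of each image."""
    # images: (N, C, H, W) tensor of training patches with values in [0, 1]
    out = [augment(img) for img in images for _ in range(copies)]
    return torch.stack(out, dim=0)

batch = torch.rand(8, 1, 64, 64)      # 8 scarce labelled patches
expanded = augmented_batch(batch)      # 32 augmented training samples
print(expanded.shape)                  # -> torch.Size([32, 1, 64, 64])
```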

 

Topic 2: Image-Omics

Recent findings have demonstrated the feasibility of using high-dimensional features derived from large datasets of radiology (x-ray and computed tomography [CT]) images for lesion characterisation, response prediction, and prognostication in lung cancer patients. Current approaches to integrating imaging and omics data rely on deriving homogeneous disease phenotypes from a patient’s imaging data, typically acquired from a single modality at a single time point. We suggest that harnessing semantic knowledge and models derived from patient populations will allow much better disease characterisation from images, change current practices, and lead to breakthroughs in disease classification and precision medicine.

4. Image-omics features for the analysis of lung cancer in PET-CT images

Conventional approaches generally only consider the radiology data, without integrating the complementary information provided by other imaging modalities, such as nuclear medicine functional images (positron emission tomography [PET]). Further, conventional approaches rely on an ad-hoc definition of “traditional” image features (e.g., texture, shape) that have been used across a wide range of generic object recognition tasks and may not be meaningful for PET, e.g., texture analysis due to the inherent noise and shape analysis due to low spatial resolution.

In this project, we will derive a new approach to extract new image-omic features that are specific to 18F-FDG PET-CT images using state-of-the-art deep learning coupled with conventional (texture, morphological, statistical and regional) features. We will use these new data-specific image-omic features to analyse the relationship between visual features and clinical parameters (age, sex, histological type, tumour grade, stage and prognosis) to identify the features that may predict the metabolic type of a lung tumour and help select specific chemotherapy when planning an individualised therapeutic strategy. We will target our study at lung cancer data, where the global 5-year survival rate is lower than 20% and where our image-omics approach may reveal insights that could one day be used to improve this survival rate.
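One simple way to couple deep and conventional features is to concatenate them and fit a classifier against a clinical parameter, as in the sketch below (assuming PyTorch, torchvision and scikit-learn; the pretrained backbone, placeholder handcrafted features and random labels are illustrative only, not the project's feature set).

```python
# Minimal sketch (PyTorch + scikit-learn assumed): concatenating deep CNN features
# with conventional (texture/shape/statistical) features, then fitting a classifier
# against a clinical parameter. All data here are placeholders.
import numpy as np
import torch
from torchvision.models import resnet18
from sklearn.linear_model import LogisticRegression

backbone = resnet18(weights="IMAGENET1K_V1")   # downloads ImageNet weights
backbone.fc = torch.nn.Identity()              # keep the 512-d penultimate features
backbone.eval()

def deep_features(patches):
    # patches: (N, 3, 224, 224) tumour regions rendered as RGB-like inputs
    with torch.no_grad():
        return backbone(patches).numpy()

n = 40
patches = torch.rand(n, 3, 224, 224)
handcrafted = np.random.rand(n, 20)            # placeholder texture/shape/statistical features
labels = np.random.randint(0, 2, size=n)       # placeholder clinical parameter

combined = np.hstack([deep_features(patches), handcrafted])
clf = LogisticRegression(max_iter=1000).fit(combined, labels)
print(clf.score(combined, labels))
```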

 

5. Extrapolating image-genomics features across heterogeneous sites of disease

Image-genomics is a state-of-the-art area of imaging informatics research in which disease quantification is derived by coupling medical imaging data with genetic data extracted from tissue biopsies, as a means of unravelling the heterogeneity of disease and how it affects an individual. Image-genomics is based on the hypothesis that the visual characteristics in an image encode the tumour’s underlying genotype and the influence of its biological environment.

However, current approaches mainly focus on a 1-to-1 correlation between image features and genetic patterns at a single site of disease, which depends on obtaining repeated tissue samples from biopsies. In clinical practice, multiple tissue biopsies are rarely undertaken. This is a major hurdle because the image-genomic features from a single biopsied location may not be representative of the genetic characteristics at other sites of disease (i.e., in other organs); as such, a major challenge is the selection of image features that are most representative of the entire disease burden.

In this project, we will derive a new unsupervised deep learning technique for extracting image features that encode the image characteristics common across different sites of disease. This is in contrast to existing ad-hoc feature extraction and selection, which has been subjectively designed by humans for different purposes at single sites and risks overlooking features that may be relevant to other sites of disease. Such a technique may allow some genetic information from one site to be extrapolated to other sites and, more generally, will allow the inference of the genetic information of tumours from the morphology presented in medical imagery.
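One plausible unsupervised route, sketched below purely for illustration (assuming PyTorch; the project's actual technique is not specified here), is a convolutional autoencoder trained on patches pooled from multiple disease sites, whose bottleneck code then serves as a shared image feature.

```python
# Minimal sketch (PyTorch assumed): a convolutional autoencoder whose bottleneck
# code acts as an unsupervised image feature shared across disease sites.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = PatchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(16, 1, 64, 64)   # patches pooled from multiple disease sites
recon, codes = model(patches)
loss = nn.functional.mse_loss(recon, patches)   # reconstruction objective
loss.backward()
opt.step()
print(codes.shape)                    # -> torch.Size([16, 64]) shared features
```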

 

Topic 3: Biomedical Data Visualisation

The next generation of medical imaging scanners is introducing new diagnostic capabilities that improve patient care. These medical images are multi-dimensional (3D), multi-modality (e.g., fusion of PET and MRI) and also time varying (that is, 3D volumes acquired over multiple time points, and functional MRI). Our research innovates in coupling volume rendering technologies with machine learning and image processing to render realistic and detailed 3D volumes of the body.

6. Semantic-driven biomedical image visualisation

Modern medical images are complex data entities; they have multiple dimensions and are often derived from multiple modalities acquired either sequentially or simultaneously (e.g., combined PET-CT and combined PET-MR). There are usually multiple potential regions of interest within these images, which are often examined with different visualisation parameters for different diseases and types of applications. The current approach generally involves many trial-and-error manual parameter adjustments that must be repeated for different patient and disease characteristics.

In this project, we will design a deep learning approach to estimate and initialise the visualisation parameters for 3D renderings of multi-modality medical imaging data. Specifically, the technique will relate the semantics of a patient case (as described by disease stage, clinical reports, etc.) to the visualisation parameters that emphasise the relevant anatomical and functional regions of interest. This will lead to new approaches to interpret complex medical data for diagnosis, staging, and assessing response to therapy.
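The sketch below gives a sense of the mapping involved (assuming PyTorch; the case encoding, the transfer-function parameterisation and the network are placeholders, not the project's design): a small network regresses transfer-function control points from an encoded case description.

```python
# Minimal sketch (PyTorch assumed): mapping an encoded case description (disease
# stage, modality, report keywords, ...) to transfer-function parameters that
# initialise a 3D rendering. Everything here is a placeholder.
import torch
import torch.nn as nn

NUM_CONTROL_POINTS = 8   # each control point: (intensity position, opacity)

class SemanticsToTransferFunction(nn.Module):
    def __init__(self, case_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(case_dim, 64), nn.ReLU(),
            nn.Linear(64, NUM_CONTROL_POINTS * 2),
            nn.Sigmoid(),            # positions/opacities normalised to [0, 1]
        )

    def forward(self, case_vector):
        out = self.net(case_vector)
        return out.view(-1, NUM_CONTROL_POINTS, 2)

model = SemanticsToTransferFunction()
case = torch.rand(1, 32)     # placeholder case encoding (stage, keywords, ...)
tf = model(case)             # (1, 8, 2) initial transfer-function control points
print(tf.shape)
```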

 

7. Holographic biomedical image recognition and visualisation

The objective of this project is to innovate in mixed reality experiences that visualise a patient’s medical record history (spatial + temporal). Microsoft’s HoloLens device will be used to display a complex visualisation of all the available data, which includes 2D/3D renderings of images (volumetric, multi-modal images), annotations, e.g., regions of interest (ROIs) delineated on the images, and graphical charts of numerical and text information, e.g., patient history and pathology. The centrepiece of the patient record visualisation is the medical imaging data, which are multi-dimensional (3D spatial + multi-modal + temporal sequences) and are viewed along with various supporting data that need to be interactively manipulated. All visualisation computing will be performed on a remote GPU, with each HoloLens acting as a thin client; this ensures the necessary computing resources are available and addresses concurrency challenges.

 

Topic 4: Image Analysis + Data

8. Adrenal Tumor Recognition and Analysis

Adrenal tumors are currently very difficult to assess at the onset of the disease. The abnormal site is visible but cannot be differentiated as either malignant or benign. To confirm the diagnosis, multiple additional tests, such as CT, PET-CT and MRI, are typically performed to identify and characterise the disease. Early confirmation of the disease can lead to significant improvement in the quality of care.

Tumor detection is a fundamental requirement for an adrenal tumor computer-aided diagnosis (CAD) system to provide essential information for clinical decision support. However, there is currently no algorithm to detect and classify adrenal tumors. In this project, we aim to develop a machine learning algorithm to accurately and efficiently detect adrenal tumors in medical images.

 

9. Machine Learning for Automatic Thyroid Eye Disease Classification

Thyroid Eye Disease (TED), also known as Thyroid Associated Orbitopathy (TAO), affects many people worldwide; for example, around 400,000 people are affected by TED in the United Kingdom. For a sizeable minority, TED is an extremely unpleasant, painful, cosmetically distressing, and occasionally sight-threatening condition. Early diagnosis is particularly important since early treatment can minimise the risk of sight loss in patients with severe TED.

A TED diagnosis is currently made by summing scores given to typical clinical features, which are then used to select a disease classification. However, even for experienced physicians, diagnosis by human vision can be subjective, inaccurate and non-reproducible. This is primarily attributed to the complexity of the eye features used to describe the disease.

In this project, we aim to develop a machine learning algorithm to automatically classify TED into its subclasses. This will involve developing training models based on labelled eye images.

We propose to leverage machine learning algorithms, such as supervised deep learning, to extract rich visual descriptors from eye images to model TED. Then, from the learned model, we will apply state-of-the-art multi-label classification techniques that exploit correlations among clinical features to automatically predict the “NO SPECS” score as well as the “Clinical Activity Score”.
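The multi-label setup can be pictured as one output per clinical feature, as in the sketch below (assuming PyTorch; the number of outputs, the feature dimension and the labels are placeholders, not a validated grading model).

```python
# Minimal sketch (PyTorch assumed): a multi-label head over CNN features of an
# eye photograph, one output per clinical feature (e.g. the components summed in
# a NO SPECS-style grading). All values are placeholders.
import torch
import torch.nn as nn

NUM_CLINICAL_FEATURES = 7             # e.g. one output per clinical feature

class TEDMultiLabelHead(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Linear(feat_dim, NUM_CLINICAL_FEATURES)

    def forward(self, features):
        return self.head(features)    # raw logits; sigmoid applied at inference

model = TEDMultiLabelHead()
features = torch.randn(4, 512)                            # CNN features of 4 eye images
targets = torch.randint(0, 2, (4, NUM_CLINICAL_FEATURES)).float()
loss = nn.BCEWithLogitsLoss()(model(features), targets)   # one loss term per feature
probs = torch.sigmoid(model(features))                    # per-feature probabilities
print(loss.item(), probs.shape)
```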

 

10. Robotic Surgical Video Processing

Robotic surgery provides the surgeon with articulated instruments capable of the full range of movements that would otherwise only be possible in open surgery. The robotic system uses binocular telescopes (whose output can be recorded as video), giving the surgeon full depth perception and greatly improving surgical precision. The ability to process surgical videos can provide an exceptional level of clinical decision support, via real-time data augmentation during surgery, as well as facilitating surgical training and measuring surgical competence.

This research will develop the following core algorithms: (i) recognition of surgical tools and gestures from video; (ii) visual tracking of major anatomical structures from video; (iii) cloud-based computation for real-time video data processing; and (iv) a surgical decision support system within Augmented Reality and/or Virtual Reality environments.

 

11. Detecting and Measuring Fetal Structures in Ultrasound Images

Ultrasound is the imaging modality of choice for non-invasively assessing fetal development in pregnant mothers. Currently, this assessment is performed by measuring a standard set of fetal organ structures (e.g. head circumference and femur length) within the ultrasound images. However, recent clinical research has revealed that there are potentially many other biometrics that may be helpful in assessing the health of the fetus and the child after birth (e.g., the fetal thalamus and a potential link to neurodevelopment). This requires developing new growth models for a variety of different structures.

In this project, we will derive new machine learning algorithms for ultrasound image analysis to detect and measure a variety of fetal anatomical structures. We will target structures in different body regions to obtain information about different developmental functions (brain for neurodevelopment, heart for circulatory and respiratory development, abdomen for the digestive and endocrine systems).
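To make the measurement step concrete, the sketch below shows one way a biometric such as head circumference could be derived once a structure has been segmented (assuming NumPy and OpenCV; the mask is synthetic and the pixel spacing is a hypothetical value, and the detection model itself is out of scope).

```python
# Minimal sketch (NumPy/OpenCV assumed): fit an ellipse to a segmented structure's
# contour and approximate its perimeter as a circumference measurement.
import math
import numpy as np
import cv2

def head_circumference_mm(mask, pixel_spacing_mm=0.5):
    """Approximate circumference of the largest contour in a binary mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    (_, _), (w, h), _ = cv2.fitEllipse(contour)          # full axes in pixels
    a, b = (w / 2) * pixel_spacing_mm, (h / 2) * pixel_spacing_mm
    # Ramanujan's approximation for the perimeter of an ellipse
    hh = ((a - b) ** 2) / ((a + b) ** 2)
    return math.pi * (a + b) * (1 + 3 * hh / (10 + math.sqrt(4 - 3 * hh)))

mask = np.zeros((256, 256), dtype=np.uint8)
cv2.ellipse(mask, (128, 128), (90, 60), 0, 0, 360, 1, thickness=-1)  # synthetic "head"
print(round(head_circumference_mm(mask), 1), "mm")
```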

 

12. Computer Aided Diagnosis for Skin Lesion Detection, Tracking and Analysis

Skin cancer costs the Australian health system more than $500 million each year, the highest cost of all cancers. Melanoma is one of the most lethal forms of skin cancer and is responsible for more than 75% of skin cancer deaths. Unfortunately, Australia has the highest incidence of melanoma in the world. On average, 30 Australians are diagnosed with melanoma every day and more than 1,200 die from the disease each year (according to Melanoma Institute Australia).

Early detection of melanoma is particularly important, since simple treatment can result in a complete cure if the disease is detected at an early stage. In the clinical environment, non-invasive diagnosis and the tracking of changes over time by human vision alone can be inaccurate, subjective, and irreproducible, even among experienced dermatologists. This is attributed to the challenges of interpreting skin lesions with diverse characteristics, including lesions of varying sizes and shapes. In addition, when it is uncertain whether or not a skin lesion is malignant, excision is usually necessary. However, it is almost impossible to excise a lesion without scarring. Consequently, patients suffer unnecessary pain and discomfort, and this is particularly true for patients with many skin lesions.

There have been tremendous advances in melanoma detection; however, the majority of work has focused on improving detection, segmentation and classification on single images rather than on studying the evolution of the disease (temporal tracking).

In this project, we will develop an accurate automated algorithm to detect, track and analyse skin lesions. Existing methods for skin lesion analysis are either invasive or inaccurate due to the complexity and variability of skin lesions. We will leverage state-of-the-art artificial intelligence techniques on ~10,000 pathology-confirmed skin lesion images to progressively learn the most important visual characteristics for identifying melanoma and for segmenting the skin lesions (for tracking changes over time). This project could improve diagnostic confidence and accuracy for dermatologists by providing a second opinion and indicating malignancy and changes over time. It will also potentially increase clinical efficiency, improve patient awareness and provide a cost-effective solution for Australia.

 

Projects Advertised as part of the ARC Training Centre

A number of opportunities are available for outstanding PhD scholars to work in a multidisciplinary research team in the new Australian Research Council (ARC) Training Centre for Innovative BioEngineering (ARC-TCIB). The ARC-TCIB is a multidisciplinary collaboration among researchers at The University of Sydney, University of Technology Sydney, Swinburne University of Technology, and Beth Israel Deaconess Medical Centre, together with leading industry partners including Allegra Orthopaedics, Osseointegration International Pty Ltd, Peter Brehm GmbH, Ti2Medical, and Royal Prince Alfred Hospital. The purpose of the ARC-TCIB is to provide the next generation of skilled graduates with the technical, translational, commercialisation, and entrepreneurial skills to overcome industry-focused challenges in biomedical engineering.

 

13. Deep neural networks for omni-modality musculoskeletal (MSK) image analysis

The discovery of biomarkers requires accurate delineation of the bone and tissues surrounding an MSK defect, but this is difficult because different types of images (x-ray, PET, CT, MRI) depict different characteristics. We will derive a computerised image segmentation algorithm to automatically delineate the bones and musculature surrounding the defect, using state-of-the-art convolutional neural networks (CNNs), a data-driven approach that identifies the quantifiable image characteristics most relevant for a particular task (in this case, segmentation). The key challenge will be to train the CNNs across all image types (both functional and anatomical) to identify the correlations between them so that bones and muscles can be optimally delineated regardless of the image type. The outcomes will be techniques that improve diagnostic processes by allowing automated localisation and biomarker analysis of the anatomical defect sites.
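One straightforward baseline for exposing several co-registered modalities to a single CNN is to stack them as input channels, as in the sketch below (assuming PyTorch; the tiny encoder-decoder, the three modalities and the class set are illustrative only, not the project's method).

```python
# Minimal sketch (PyTorch assumed): a small encoder-decoder segmentation network
# that accepts co-registered modalities stacked as input channels. Sizes and
# classes are illustrative only.
import torch
import torch.nn as nn

class MultiModalSegNet(nn.Module):
    def __init__(self, modalities=3, classes=3):   # e.g. background / bone / muscle
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(modalities, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, classes, 1),              # per-pixel class logits
        )

    def forward(self, x):
        return self.up(self.down(x))

model = MultiModalSegNet()
volume_slice = torch.rand(1, 3, 128, 128)   # e.g. CT, MRI, PET channels, co-registered
logits = model(volume_slice)
print(logits.shape)                          # -> torch.Size([1, 3, 128, 128])
```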

 

14. Advanced 3D visualisation of musculoskeletal (MSK) imaging

Clinicians and surgeons need to interpret imaging data for diagnosis and pre-surgical planning. However, viewing the images directly is not optimal because the defect has a non-trivial risk of being obscured by noise or occluded by other structures. The project will involve research on 3D graphics optimisations to create an algorithm that exploits graphics processing hardware to enable 3D visualisation of the anatomical defect on a computer display. The outcome will be a new 3D visualisation algorithm that enables improved diagnosis and pre-surgical planning by allowing clinicians to view the anatomical characteristics of the MSK defect without the noise and obstruction inherent in the medical images.

 

15. Advanced segmentation of musculoskeletal (MSK) imaging and surgical planning

With the advent of imaging and navigation technologies, pre-operative planning and the subsequent execution of these plans have become a reality in knee replacement surgery. However, the current approach of semi-automated or manual segmentation of CT scans, followed by manipulation to complete the surgical plan, is a time-consuming and inexact process.

The project will involve research on state-of-the-art deep learning and shape-modelling-based segmentation algorithms and their application to CT scans of the knee. In particular, quantitative measurements derived from tracking changes in 3D models from preoperative to postoperative scans will be explored. The student will be involved in the planning of knee replacement surgeries using a novel instrument platform and in assessing the surgical result by comparing the postoperative CT scan to the plan.

 

Telehealth

16. An IT-enabled telehealth model of care

Telehealth technologies enhance the delivery of healthcare through the introduction of powerful mechanisms such as social networking, notifications, patient education/information portals, and patient monitoring, either remotely (by the care team) or by family and friends. Telehealth can be broadly applied to many diseases. As a case study, obesity in multiple family members is common, and the importance of a family-based approach to weight management is well known. Our research aims to develop a family-focused application (app) with novel concepts to incentivise the whole family through family (social) networking, gamification, notifications, personalised analytics, goal setting and a reward mechanism. The app will be supported by a remote study nurse to encourage adherence. The proposed app features will need to be evaluated with regard to their usefulness in helping to induce lifestyle change.

Existing Works

Feature of Interest-Based Direct Volume Rendering Using Contextual Saliency-Driven Ray Profile Analysis

Y. Jung, J. Kim, A. Kumar, D.D. Feng, M. Fulham

Computer Graphics Forum

Direct volume rendering (DVR) visualization helps interpretation because it allows users to focus attention on the subset of volumetric data that is of most interest to them. The ideal visualization of the features of interest (FOIs) in a volume, however, is still a major challenge. The clear depiction of FOIs depends on accurate identification of the FOIs and appropriate specification of the optical parameters via transfer function (TF) design and it is typically a repetitive trial-and-error process. We address this challenge by introducing a new method that uses contextual saliency information to group the voxels along a viewing ray into distinct FOIs where 'contextual saliency' is a biologically inspired attribute that aids the identification of features that the human visual system considers important. The saliency information is also used to automatically define the optical parameters that emphasize the visual depiction of the FOIs in DVR. We demonstrate the capabilities of our method by its application to a variety of volumetric data sets and highlight its advantages by comparison to current state-of-the-art ray profile analysis methods.

A graph-based approach for the retrieval of multi-modality medical images

Ashnil Kumar, Jinman Kim, Lingfeng Wen, Michael Fulham, Dagan Feng

Medical Image Analysis

In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging.

An Ensemble of Fine-Tuned Convolutional Neural Networks for Medical Image Classification

Ashnil Kumar, Jinman Kim, David Lyndon, Michael Fulham, Dagan Feng

IEEE Journal of Biomedical and Health Informatics

The availability of medical imaging data from clinical archives, research literature, and clinical manuals, coupled with recent advances in computer vision offer the opportunity for image-based diagnosis, teaching, and biomedical research. However, the content and semantics of an image can vary depending on its modality and as such the identification of image modality is an important preliminary step. The key challenge for automatically classifying the modality of a medical image is due to the visual characteristics of different modalities: some are visually distinct while others may have only subtle differences. This challenge is compounded by variations in the appearance of images based on the diseases depicted and a lack of sufficient training data for some modalities. In this paper, we introduce a new method for classifying medical images that uses an ensemble of different convolutional neural network (CNN) architectures. CNNs are a state-of-the-art image classification technique that learns the optimal image features for a given classification task. We hypothesise that different CNN architectures learn different levels of semantic image representation and thus an ensemble of CNNs will enable higher quality features to be extracted. Our method develops a new feature extractor by fine-tuning CNNs that have been initialized on a large dataset of natural images. The fine-tuning process leverages the generic image features from natural images that are fundamental for all images and optimizes them for the variety of medical imaging modalities. These features are used to train numerous multiclass classifiers whose posterior probabilities are fused to predict the modalities of unseen images. Our experiments on the ImageCLEF 2016 medical image public dataset (30 modalities; 6776 training images, and 4166 test images) show that our ensemble of fine-tuned CNNs achieves a higher accuracy than established CNNs. Our ensemble also achieves a higher accuracy than methods in the literature evaluated on the same benchmark dataset and is only overtaken by those methods that source additional training data.
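As a toy illustration of the fusion step described in this abstract (and not the paper's exact pipeline), the sketch below averages softmax posteriors from two CNNs whose natural-image classifier heads have been swapped for modality classifiers; PyTorch and torchvision are assumed, the class count mirrors the ImageCLEF 2016 setting, and the replaced heads here are untrained placeholders rather than fine-tuned models.

```python
# Toy illustration (PyTorch/torchvision assumed) of posterior fusion across
# CNNs initialised from natural-image weights. The replaced classifier heads
# are untrained placeholders; in practice they would be fine-tuned first.
import torch
from torchvision.models import resnet18, densenet121

NUM_MODALITIES = 30   # e.g. the ImageCLEF 2016 modality classes

def with_modality_head(model, head_attr):
    # replace the natural-image classifier layer with a modality classifier
    layer = getattr(model, head_attr)
    setattr(model, head_attr, torch.nn.Linear(layer.in_features, NUM_MODALITIES))
    return model.eval()

members = [
    with_modality_head(resnet18(weights="IMAGENET1K_V1"), "fc"),
    with_modality_head(densenet121(weights="IMAGENET1K_V1"), "classifier"),
]

def ensemble_predict(images):
    # images: (N, 3, 224, 224); average the softmax posteriors across members
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(images), dim=1) for m in members])
    return probs.mean(dim=0).argmax(dim=1)

print(ensemble_predict(torch.rand(2, 3, 224, 224)))
```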