
Process for Applying

(New instructions, please read first).

In the first instance, all candidates who are interested in applying for a PhD or a Masters in Robotics, Autonomous Systems or related fields should go through the following process:

  1. Email our Robotics Higher Degree Research (HDR) applications coordinator, Dr. Felipe Gonzalez <felipe.gonzalez@qut.edu.au>, with a clear subject line, and register your interest using our online application form.
  2. Read the general information on how to apply for a research degree at QUT (https://www.qut.edu.au/study/applying/phd-and-research-degree-applications) and on scholarships (https://www.qut.edu.au/study/fees-and-scholarships/phd-and-research-degree-financial-support). Note that as part of this process you will need to nominate a supervisor at QUT and a research topic - please consult the list of topics below for further information on available projects (note this list is not exhaustive; other projects can also be proposed) and then proceed with your application.

QUT PhD Top-Up Scholarships in Robotics, Autonomous Systems and Computer Vision

Special top-up scholarships of up to $10,000 AUD per annum may be offered to exceptional candidates who receive an Australian Postgraduate Award (APA), QUT Postgraduate Research Award (QUT-PRA), Women Re-Entry Scholarship (WRE), Indigenous Postgraduate Research Award or International Postgraduate Research Scholarship (IPRS).

General scholarship information is provided here: https://www.qut.edu.au/study/fees-and-scholarships/phd-and-research-degree-financial-support

List of Available Projects/Topics

The following sections detail available projects in the Robotics and Autonomous Systems group at QUT, broken down by major topic area.

Many of the listed projects have scholarships and top-up funding available for high quality applicants.

Australian Centre for Robotic Vision Projects

Main Supervisor(s) | Title/Topic | Short Description

Professor Peter Corke

 

Robust vision

Multiple PhD Positions Available for a number of projects in the Robotic Vision Centre of Excellence (www.roboticvision.org) related to creating robots that see:

  • Visual simulators: if we create robots that see, how can we test their operation under all viewing conditions? How do we couple high-fidelity virtual worlds to robot controllers, and how do we evaluate robot performance?
  • New camera technologies: how can we create better cameras? Can we do awesome things with existing cheap cameras (http://graphics.stanford.edu/projects/camera-2.0)? Can we combine large numbers of cheap cameras to synthesise one really good camera? Can we use light-field cameras (e.g. Lytro) for robots? Can we use night-vision technology? How do we improve the colour accuracy of cameras to allow for better classification of objects and shadows in the world?
  • How do we reduce the energy consumption of a robot vision system? Do we need to pay attention to all the pixels all the time, or can we somehow pay attention only to the important stuff?
  • Why do cameras have to be connected by cables to robots? Could we use wireless cameras? How do we manage the bandwidth, and how do we split computation between the camera, the robot and the cloud? Can a robot look around corners by finding cameras around the corner that it could use? How does it find the cameras it needs to do the job? How is that visual information used to control motion?
  • Very close-quarters flying: can we fly multi-rotor aircraft through ventilation ducts, chimneys and shafts using vision alone?
  • and much more...

More information available here: http://roboticvision.org/what-we-do/research/

Associate Professor Michael Milford

Learning robust robotic vision algorithms for recognition and scene understanding

Multiple PhD positions available as part of the Robotic Vision Centre of Excellence (www.roboticvision.org)

This project is developing robust means for performing typical recognition tasks such as place recognition and object recognition in the world using visual sensing alone. This is a crucial robotic competence yet presents significant challenges given a wide range of environments as well as variation due to weather, daily or seasonal cycles and structural changes. This project will combine machine learning and deep learning techniques to develop highly robust approaches to typical robotic vision tasks such as place recognition and object recognition under challenging conditions.
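
One way to approach place recognition under severe appearance change is to match sequences of heavily downsampled, patch-normalised images rather than single frames (in the spirit of SeqSLAM). The sketch below is illustrative only; the image size, sequence length and matching cost are assumptions, not the project's implementation.

```python
import numpy as np

def preprocess(img, rows=32, cols=64):
    """Downsample a grayscale image by block-averaging, then patch-normalise it."""
    h, w = img.shape
    h, w = h - h % rows, w - w % cols                      # crop so blocks tile exactly
    small = img[:h, :w].reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    return (small - small.mean()) / (small.std() + 1e-6)

def match_sequence(query_seq, database, seq_len=5):
    """Return the database index whose preceding *sequence* of frames best
    matches the query sequence under sum-of-absolute-differences."""
    q = [preprocess(f) for f in query_seq[-seq_len:]]
    d = [preprocess(f) for f in database]
    best_idx, best_cost = None, np.inf
    for i in range(len(d) - seq_len + 1):
        cost = sum(np.abs(qi - di).mean() for qi, di in zip(q, d[i:i + seq_len]))
        if cost < best_cost:
            best_idx, best_cost = i + seq_len - 1, cost
    return best_idx, best_cost
```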

 

Prof Gordon Wyeth

Vision for Human Robot Interaction

1 PhD Position available. This project will develop computer vision tools to allow future robots to work intimately and collaboratively with a range of human users. For such robots to be accepted as useful members of a workforce they must be capable of recognising people, anticipating and reacting to the intent of humans, and recalling and using recent interaction history. Many of the robotic vision techniques required for interaction with humans will find their origins in the other projects of the Centre, but will be specifically tailored to challenges in human robot interaction in this project. Humans are a special case for robotic vision, critical to the development of application areas where robots are to work safely and effectively alongside humans.

More information available here (see under SV3): http://roboticvision.org/what-we-do/research/semantic-vision/projects/

Dr. Markus Eich

Vision for Human Robot Interaction

1 PhD Position available: Human Robot Interaction based on Vision

The key idea is to use visual cues to understand human intentions during a manual task, e.g. assembling an object, preparing food, etc. By observing a human, a robot should be able to assist them, much like a theatre nurse during surgery: the robot observes and anticipates the next step. This requires an understanding of human tasks on a semantic level, consisting of actions applied to objects. An internal (semantic) action plan of such a process is needed, and the robot should be able to localise the currently executed action within the task network and learn new actions by observation. One approach could be, e.g., to link robot and human actions to symbols and vice versa.

1 PhD Position available: Semantic 3D Scene understanding

Object detection in computer vision has made significant progress due to the renaissance of neural networks and deep learning. The drawback of such approaches is that huge datasets still have to be processed, and it is difficult to add knowledge to a network without retraining at least some of its layers. Neural networks are still a black box, and it is hard to extract symbolic knowledge about the scene from the network. This project will address the question of how semantic knowledge about shape, features, structures and co-appearances - using ontologies, Bayesian networks, etc. - can be used to reason about what a robot sees. For example, if a neural network perceives a cup hanging from a ceiling, this is very likely a misclassification. What can semantic knowledge tell the robot about the usual appearance and placement of cups, and about the kinds of things that hang from ceilings, and where does that knowledge come from? Can a knowledge base also trigger robotic actions if a correctly classified object does not belong where it was found?
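
As a toy illustration of the kind of reasoning described above (not an existing system), a detector's label scores can be re-weighted by a semantic context prior; the labels, contexts and prior values below are made up.

```python
CONTEXT_PRIOR = {                          # assumed P(label | spatial context)
    ("cup", "on_table"): 0.30,
    ("cup", "on_ceiling"): 0.001,
    ("lamp", "on_ceiling"): 0.40,
    ("lamp", "on_table"): 0.10,
}

def rerank(detections, context):
    """detections: list of (label, detector_score). Re-rank labels by
    detector_score * P(label | context), a crude Bayes-like fusion."""
    scored = []
    for label, score in detections:
        prior = CONTEXT_PRIOR.get((label, context), 0.05)  # default prior for unknown pairs
        scored.append((label, score * prior))
    total = sum(s for _, s in scored) or 1.0
    return sorted(((l, s / total) for l, s in scored), key=lambda x: -x[1])

# Example: a network says "cup" (0.8) vs "lamp" (0.2) for something on the ceiling.
print(rerank([("cup", 0.8), ("lamp", 0.2)], "on_ceiling"))   # "lamp" now ranks first
```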

 

Dr Juxi Leitner

Learning for Vision and Action

Projects are available at the intersection of computer vision, robotic control and machine learning (AI). Cameras have become a ubiquitous sensor, yet which representations to choose for these visual observations in robotic tasks is still not clear. Machine learning has been shown to perform visual classification tasks with quite impressive results. How can we make use of these in robotic systems? What are the features a robot needs to focus on to pick up and sort objects? What other affordances does a robot need to learn to interact with objects on a daily basis, in a similar manner to humans? Can a robot learn affordances of the world by observing humans?

Especially interesting are developments in Deep Learning, e.g. see our workshop at RSS http://Juxi.net/workshop/deep-learning-rss-2016

Possible projects include: (deep) learning of robust image-based visual servoing, multi-modal learning of servoing, recurrent networks to control robotic arms, transfer of learned actions/actuations from simulation to the real world, deep-learn everything (for long-term autonomous operations), ...
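
As background, classical image-based visual servoing computes a camera velocity from the error between current and desired image features; the sketch below is a textbook-style illustration (assumed point features, depths and gain), not the group's implementation.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 interaction (image Jacobian) matrix for one point feature
    at normalised image coordinates (x, y) and estimated depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity (vx, vy, vz, wx, wy, wz) driving the features towards the goal."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: four tracked points, each assumed to be 1 m from the camera.
feats = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
goal = [(0.2, 0.2), (-0.2, 0.2), (-0.2, -0.2), (0.2, -0.2)]
print(ibvs_velocity(feats, goal, depths=[1.0] * 4))
```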

Have a look at my current projects at http://Juxi.net/

 

Dr Niko Sünderhauf

Visual Semantic SLAM for Robotic Scene Understanding

 

How can robots understand the world around them? How can they build a detailed map or a world model that does not only express where things are, but also what these things are, and what the robot can do with them? Ideally such a semantic map of the world would comprise all the objects, their position and orientation (where is it?), along with semantic information (what is it?) and affordances (what can be done with it?).
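
A minimal sketch, under assumed names and fields, of what one entry in such a semantic map could look like, keeping geometry (where), semantics (what) and affordances (what can be done with it) together per object:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectLandmark:
    label: str                    # what is it? e.g. "mug"
    position: tuple               # where is it? (x, y, z) in the map frame
    orientation: tuple            # quaternion (qx, qy, qz, qw)
    confidence: float             # detection / data-association confidence
    affordances: list = field(default_factory=list)   # e.g. ["graspable"]

semantic_map = [
    ObjectLandmark("mug", (1.2, 0.4, 0.9), (0, 0, 0, 1), 0.87, ["graspable", "fillable"]),
    ObjectLandmark("door", (3.0, -1.1, 1.0), (0, 0, 0.707, 0.707), 0.95, ["openable"]),
]
```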

If you are interested in combining typical robotics algorithms like SLAM, mapping, and localization with bleeding edge computer vision and deep learning techniques, consider applying for a PhD with me.

You will be working closely with other researchers in the Australian Centre for Robotic Vision (ACRV) and be at the forefront of robotic vision research in the world. There will be opportunities to collaborate and travel to other nodes within the centre (e.g. to the University of Adelaide), to our international partner investigators (e.g. Zurich, London, Oxford), and to international conferences.

This research project has enough potential to support multiple PhD students. You can work with various real robot platforms, but can also choose how much time you want to spend with robotic experiments depending on your skills and preferences.

You should be eager to learn new things, be highly self-motivated, and be a good programmer (we typically work with Python, C++, Matlab). Ideally you have taken at least one course in robotics, computer vision, or machine learning, or done a VRES or final year project in these areas.

More detailed research questions of interest in this project:

  • How can semantics (using object detection techniques based on deep learning) help to obtain more meaningful maps for robotics? 
  • How can robots exploit such maps for various task planning problems? 
  • How can objects be used for robust localization despite appearance changes (e.g. at night or in difficult weather conditions)? 
  • Can other robotic vision algorithms benefit from semantic knowledge, e.g. visual odometry? 
  • How can complex spatial relationships between objects be modelled for SLAM (e.g. some objects tend to occur together in certain spatial configurations, like keyboard & monitor, while other objects never occur together)? 
  • How can we use such relationships to improve the detection accuracy of deep learning algorithms? 
  • How can such knowledge be learned and incorporated into the classifiers? 
  • How can the structure that typically occurs in man-made environments be exploited for robotic SLAM? 
  • How can deep neural networks be treated as sensors in a sensor data fusion framework?
  • Are current deep learning approaches fundamentally flawed in ways that prevent them from being truly useful for robotics (e.g. due to the open set recognition problem)? Can they be fixed?

Agricultural Robotics Projects

Please view the overall call for applicants by clicking here.

Main Supervisor(s) | Title/Topic | Short Description

Prof Tristan Perez

Agricultural Cybernetics

Several PhD Positions available. The next agricultural revolution will be driven by digital agriculture. This will integrate deep agricultural knowledge and systems science with powerful digital technologies – from robots, autonomous systems and sensor networks to data analytics, economic modelling and artificial intelligence.

Cybernetics is the study of systems capable of receiving, storing and processing information and using it to control system behaviours. A significant body of system science results applied to engineering and finance could be adapted to food production systems. To date, agricultural cybernetics research has focused on protected cropping (i.e. greenhouse farming).

This project seeks to apply system science (modelling, estimation, control) to particular management problems in agriculture such as irrigation, crop nutrient and pest management.

Dr Chris McCool

Prof Tristan Perez

Image-based Crop detection and classification in challenging conditions

1 PhD Position available. Robotic technology has the potential to soon transform the way we produce food. Swarms of small, highly autonomous robots will find application in weed and crop management, leading to improved operations, reduced soil compaction, and more hours and shifts worked safely.

This project will develop novel computer vision and machine learning methods to robustly detect and classify crops for a robotic system. This is a key issue for practical agricultural and horticultural robotics. The project will explore statistical pattern recognition approaches as well as the use of a range of imagery, including near-infrared, 3D and multi-spectral cameras. This is an integral element of the recently funded $3 million Agricultural Robotics Program at QUT.
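
One common baseline for separating vegetation from background in multi-spectral imagery (a starting point only, not the project's method) is to threshold the Normalised Difference Vegetation Index, NDVI = (NIR - Red) / (NIR + Red); the band inputs and threshold below are assumptions.

```python
import numpy as np

def ndvi_mask(nir, red, threshold=0.4):
    """Return a boolean mask of likely vegetation pixels from NIR and red bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    ndvi = (nir - red) / (nir + red + 1e-6)   # small constant avoids division by zero
    return ndvi > threshold
```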

Prof Tristan Perez

Multi-vehicle control for robot farm operations in agriculture

1 PhD Position available. Robotic technology has the potential to soon transform the way we produce food. Swarms of small, highly autonomous robots will find application in weed and crop management, leading to improved operations, reduced soil compaction, and more hours and shifts worked safely.

 This project will develop novel decentralised strategies for coordinated motion control of multiple robots for conducting operations in agriculture. This is the core capability for swarm operations.
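
As a toy illustration of decentralised coordination (not the project's algorithm), a standard consensus update lets each robot adjust its position using only its neighbours' relative positions; the graph and gain below are arbitrary.

```python
import numpy as np

def consensus_step(positions, neighbours, gain=0.1):
    """positions: (N, 2) array; neighbours: dict mapping robot id -> neighbour ids."""
    new = positions.copy()
    for i, nbrs in neighbours.items():
        for j in nbrs:
            new[i] += gain * (positions[j] - positions[i])
    return new

# Example: three robots on a line, connected in a chain; they converge towards a meeting point.
p = np.array([[0.0, 0.0], [4.0, 0.0], [8.0, 0.0]])
nbrs = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(100):
    p = consensus_step(p, nbrs)
print(p)
```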

Prof Tristan Perez

Robotic manipulation of agricultural crops

1 PhD Position available. Robotic technology has the potential to soon transform the way we produce food. Swarms of small, highly autonomous robots will find application in weed and crop management, leading to improved operations, reduced soil compaction, and more hours and shifts worked safely.


This project seeks to design a robotic manipulator for autonomous harvesting of horticultural produce (vision and action). Such a design involves translating operational requirements into a robot task space, optimising the number of links and their lengths to attain maximum coverage of the task space, designing the end effector, and designing the task plan and motion control strategy for harvesting based on computer vision information. This project is part of the Agricultural Robotics Programme funded by the Department of Agriculture, Fisheries and Forestry and will be conducted under the Centre of Excellence for Robotic Vision.
For this project we are seeking a student of Mechanical or Mechatronics Engineering with a strong background in control systems.

Prof Tristan Perez

Robotic weed management in agriculture

1 PhD Position available. Robotic technology has the potential to soon transform the way we produce food. Swarms of small, highly autonomous robots will find application in weed and crop management, leading to improved operations, reduced soil compaction, and more hours and shifts worked safely.

This project seeks to design an integrated weed management system based on dual chemical and mechanical modes. It will consider the development of novel adaptive chemical delivery methods based on feedback information from computer vision and weather. The system will adjust the position of nozzles, the size of droplets, and chemical mixtures to optimise the operation and minimise drift according to the dominant weather conditions and weed type.

The project will also consider a robotic manipulator for mechanical weed destruction. Such a design involves translating operational requirements into a robot task space, optimising the number of links and their lengths to attain maximum coverage of the task space, designing the end effector, and designing the task plan and motion control strategy based on computer vision information. This project is part of the Agricultural Robotics Programme funded by the Department of Agriculture, Fisheries and Forestry and will be conducted under the Centre of Excellence for Robotic Vision.

Robotics and Neuroscience Projects (Australian Research Council Future Fellowship)

Click here to read the call for applications

Main Supervisor(s) | Title/Topic | Short Description

Associate Professor Michael Milford

Human Vision Modelling Stream: Superhuman place recognition with a unified model of human visual processing and rodent spatial memory

This PhD position will focus on the hierarchical, variably selective and tolerant human vision model.

Current robotic and personal navigation systems leave much to be desired; GPS only works in open outdoor areas, lasers are expensive and cameras are highly sensitive to changing environmental conditions. In contrast, nature has evolved superb navigation systems. This project will solve the challenging problem of place recognition, a key component of navigation, by modelling the visual recognition skills of humans and the rodent spatial memory system. This approach combines the best understood and most capable components of place recognition in nature to create a whole more capable than its parts. The project will produce advances in robotic and personal navigation technology and lead to breakthroughs in our understanding of the brain.

Click here to read the call for applications

This project is a strongly interdisciplinary one spanning neuroscience, robotics and computer vision and candidates will require experience or a skill set that facilitates this interdisciplinary approach. PhD students will have the very challenging but rewarding task of achieving research breakthroughs that impact multiple disciplines simultaneously.

Associate Professor Michael Milford

Multi-scale Spatial Memory Model Stream: Superhuman place recognition with a unified model of human visual processing and rodent spatial memory

This PhD position will focus on the multi-scale learning and recall system with a focus on multi-scale spatial memory inspired by the mammalian brain.

Current robotic and personal navigation systems leave much to be desired; GPS only works in open outdoor areas, lasers are expensive and cameras are highly sensitive to changing environmental conditions. In contrast, nature has evolved superb navigation systems. This project will solve the challenging problem of place recognition, a key component of navigation, by modelling the visual recognition skills of humans and the rodent spatial memory system. This approach combines the best understood and most capable components of place recognition in nature to create a whole more capable than its parts. The project will produce advances in robotic and personal navigation technology and lead to breakthroughs in our understanding of the brain.

Click here to read the call for applications

This project is a strongly interdisciplinary one spanning neuroscience, robotics and computer vision and candidates will require experience or a skill set that facilitates this interdisciplinary approach. PhD students will have the very challenging but rewarding task of achieving research breakthroughs that impact multiple disciplines simultaneously.

Associate Professor Michael Milford

An Infinitely Scalable Learning and Recognition Network

This project combines modelling of the spatial memory encoding system in the mammalian brain with machine learning techniques to investigate new compression techniques for encoding and recalling information that scale well to very large datasets.

This project is a strongly interdisciplinary one spanning neuroscience, robotics and computer vision and candidates will require experience or a skill set that facilitates this interdisciplinary approach. PhD students will have the very challenging but rewarding task of achieving research breakthroughs that impact multiple disciplines simultaneously.

Funded by the Asian Office of Aerospace Research & Development

Field Robotics

Main Supervisor(s) | Title/Topic | Short Description

Dr Matthew Dunbabin

Adaptive sampling for dynamic events

The project will develop novel adaptive sampling methodologies that integrate static and mobile sensing assets to advance the observation fidelity and rate of scientific discovery from large-scale environmental monitoring. It is an interdisciplinary project that brings together areas of autonomous systems, machine learning, sampling theory, statistics and biology. The research will focus on: (1) event-driven adaptive sampling, and (2) coordinated multi-robot planning and control.

Natural scene evaluation for robot-based stealthy tracking

Robotic platforms offer a unique capability for covertly tracking mobile targets, such as wildlife, over extended periods of time. To realise persistent, opportunistic tracking and behavioural studies of wildlife requires capabilities that allow the robot to autonomously select vantage points for observing the target without being detected, and to then transition between vantage points as the target moves. This requires robust interpretation and assessment of the natural scene around the robot and target, from the robot's on-board sensors, for selection of appropriate vantage points. This project will investigate, develop and apply novel algorithms for image processing and 3D mapping for natural scene characterisation.

Dr. Thierry Peynot

Prof Tristan Perez

Fault-Tolerant Fusion of Multi-Sensing-Modality Data for Robot Perception

—Beyond (Catastrophic) Bayesian Fusion 

To conduct long-term missions, field-and-service robots must be able to operate safely and reliably in challenging environments and operational conditions, such as in the presence of fog, smoke, airborne dust, and rain. The use of multiple sensing modalities, such as laser, radar, visual cameras and infrared cameras, has been widely recommended in the literature to achieve resilient perception in such conditions. Distinct sensing modalities use physical processes to sense the environment that can react differently to different materials or environmental conditions, contrary to what is commonly assumed in the literature. For example, in dusty environments radars see through dust clouds while lasers will detect airborne dust particles. Conversely, glass windows in indoor environments are likely to be detected by a radar but missed by a laser. This can lead to failures of traditional Bayesian data fusion methods (i.e. catastrophic fusion). The aim of this project is to develop and validate novel algorithms for the fusion of data acquired by distinct sensor technologies so robot navigation becomes resilient to such conflicting sensor data.
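
A small numerical illustration of the catastrophic-fusion problem (values made up): with naive independent Bayesian fusion, a modality that confidently reports the wrong thing, such as a laser returning hits from a dust cloud, can dominate the fused belief.

```python
def fuse(prior, likelihoods):
    """Naive independent Bayesian fusion over the hypotheses {'obstacle', 'free'}."""
    post = dict(prior)
    for lik in likelihoods:
        post = {h: post[h] * lik[h] for h in post}
    z = sum(post.values())
    return {h: v / z for h, v in post.items()}

prior = {"obstacle": 0.5, "free": 0.5}
radar = {"obstacle": 0.2, "free": 0.8}    # radar sees through the dust cloud: probably free
laser = {"obstacle": 0.95, "free": 0.05}  # laser detects airborne dust: reports an "obstacle"
print(fuse(prior, [radar, laser]))        # fused belief is dominated by the confident laser
```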

Some aspects of this project are in collaboration with LAAS-CNRS in Toulouse, France. PhD candidates will be encouraged to spend some time at LAAS-CNRS to work with diagnosis experts.

Dr. Thierry Peynot

Development of an Autonomous Astrobiologist Rover

Astrobiologists look for signs of life on other planets, such as Mars. In particular, they hope to find stromatolites, i.e. rock structures that were formed by a biogenic process. The goal of this project is to give a planetary rover the ability to help astrobiologists with this mission, by autonomously detecting stromatolites using computer vision, and any other relevant sensing modality.

 This project is in collaboration with astrobiologists at NASA-JPL and with the Astrobiology Centre at UNSW.

Dr. Thierry Peynot

Terrain Traversability Estimation / Obstacle Detection

(2 projects)

  • Traversability Estimation in Vegetated Environments. Operating safely and efficiently in vegetated environments is a major challenge for autonomous mobile robots. Vegetation may appear like a dangerous obstacle geometrically, although a robot may be able to drive through it safely. On the other hand, it may be relatively easy to classify vegetation using vision, but there may be a stone hidden behind it that constitutes a real obstacle for the robot. This project concerns the development of novel methods for terrain traversability estimation that are reliable in vegetated environments, using a combination of camera/laser and radar.
  • Reliable Detection and Localisation of Negative Obstacles. Obstacle detection is a fundamental requirement for any autonomous mobile robot. Existing systems are quite good at detecting "positive" obstacles, i.e. elements above the ground that the robot should avoid. However, reliably detecting and recognising "negative obstacles", such as gaps or holes in the ground, remains challenging. The goal of this project is to investigate methods to detect negative obstacles, using multiple sensing modalities available on a mobile robot, and to determine the danger they represent for a particular vehicle.

Mining Robotics

Main Supervisor(s) | Title/Topic | Short Description

Dr. Thierry Peynot

Mining Robotics & Automation

QUT has recently become a member of CRCMining, the pre-eminent mining research organisation with an international reputation for delivering excellence in mining-focused research and related industry outcomes. The Robotics & Autonomous Systems (RAS) group at QUT is providing leadership in CRCMining's Mining Automation program. As such, we are involved in a number of projects at the forefront of mining robotics and automation, in partnership with CRCMining and some of the world's leading mining companies and original equipment manufacturers (OEMs).

A number of related PhD projects are available, including some with scholarships and/or top-ups. For further details please contact Dr. Thierry Peynot, who is leading this activity for RAS.

Aerial Robotics

Supervisor(s) | Title/Topic | Short Description

Prof Tristan Perez

Strategies for mid-air collision avoidance in aircraft based on bird behaviours

1 PhD position available. Mid-air collisions between aircraft are becoming increasingly likely due to growing airspace complexity and the presence of unmanned aircraft in uncontrolled airspace. Existing 'sense and avoid' collision technologies are bulky, expensive, and not 100% reliable. In this project, we will draw inspiration from Nature to seek improvements. Birds fly rapidly and safely through complex and cluttered environments and rarely collide with objects and other birds. We will study how birds use sensory information, make decisions, and perform manoeuvres, and draw inspiration from experiments on bird flight to develop and test novel strategies for the detection and avoidance of potential aircraft mid-air collisions.


This project is co-funded between the Australian Research Council and Boeing Research & Technology Australia and is conducted as a collaboration between QUT and the Queensland Brain Institute at the University of Queensland (UQ).

Prof Tristan Perez

Biologically-inspired detection, pursuit and interception of moving objects by unmanned aircraft systems

1 PhD position available. Although it is well known that aggressive honeybees are very effective at detecting, pursuing and intercepting moving targets, this behaviour has never been studied quantitatively. In this project, we are using high-speed video cinematography to investigate this behaviour, to develop visual algorithms for the detection of moving targets, and to create dynamical models of the mechanisms that control pursuit. The  results will be used to design novel, biologically-inspired guidance systems for unmanned aerial vehicles that are engaged in surveillance, security and safety missions. 

This project is funded by the Australian Research Council and is conducted as a collaboration between QUT and the Queensland Brain Institute at the University of Queensland (UQ).


Associate Professor Felipe Gonzalez

 

 

Design of artificial intelligence partially observable decision-making tools for environmental monitoring using UAVs

UAVs are revolutionising the spatial sciences. Masters and PhD opportunities exist in designing and flight testing onboard heuristic and probabilistic multi-criteria decision-making methods for plant biosecurity, volatile organic compound and air quality data acquisition tasks with autonomous UAVs, taking into account resource (time or energy) constraints. Exhaustive data collection by UAVs or by humans is difficult, with the number of observations as well as the altitudes at which images are acquired being limited. This leads to a trade-off between the covered area and the reliability of density and probability-of-detection estimates. A video overview of the project is available at http://youtu.be/Kt8tOuMru7Q
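
As a sketch of the resource trade-off described above (illustrative assumptions only, not the project's method), one simple baseline is to greedily choose observation waypoints by expected value per unit cost until the time or energy budget is exhausted.

```python
def plan_greedy(candidates, budget):
    """candidates: list of (name, expected_value, cost). Return the chosen waypoint names."""
    chosen, remaining = [], budget
    for name, value, cost in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen

# Example with made-up waypoints (value = expected detections, cost = minutes of flight).
print(plan_greedy([("A", 5.0, 10), ("B", 3.0, 4), ("C", 2.0, 8)], budget=15))
```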

Associate Professor Felipe Gonzalez

Investigation of the use of UAS-based hyperspectral imagery and photogrammetry for mapping key areas of the Great Barrier Reef and Ningaloo Reef, WA

UAVs are revolutionising the spatial sciences. Masters and PhD opportunities exist in combining field-validated, image-derived spectra from a representative range of cover types with automated classification to determine levels of benthic cover and several abiotic and biotic seabed components and hard coral growth forms in dominant or mixed states of macro-algal communities. We aim to go to a finer level of detail and demonstrate differences between bleached and non-bleached coral, as well as between tabular coral and digitate, massive and soft corals.

Associate Professor Felipe Gonzalez

Autonomous UAV Wildlife Tracking using Thermal Imaging

The tracking of wildlife has been part of the work of environmentalists and farmers for many years. This project's goal is to make tracking and locating animals a faster, simpler and more efficient process by automating it through the use of Unmanned Aerial Vehicles (UAVs). The research explores computer vision and its ability to pinpoint the location of wildlife and display it on a map for the convenience of the user. A video overview of the project is available.

Associate Professor Felipe Gonzalez

A Markov-Based Dynamic Waypoint Navigation Model for Unmanned Aircraft Systems and an Application to Autonomous Front-On Aerial Photography

This research explores a probabilistic model for front-on location prediction of a target, based on low-cost and readily available sensor systems, with the general intent of improving the capabilities of dynamic waypoint-based navigation systems for low-cost unmanned aerial systems (UAS). The behavioural dynamics of target movement inform the design of Kalman filter and Markov model based prediction algorithms. A video overview of the project is available.
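
The sketch below shows the kind of constant-velocity Kalman predictor this description refers to; the state, time step and noise covariances are assumptions, not the project's tuned model.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model, state [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position (x, y) is measured
Q = np.eye(4) * 0.01                        # assumed process noise
R = np.eye(2) * 0.5                         # assumed measurement noise

def kalman_step(x, P, z):
    """Predict the target's next state, then correct with the measurement z."""
    x, P = F @ x, F @ P @ F.T + Q           # predict
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P
```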

Associate Professor Felipe Gonzalez

UAVs for Precision Agriculture and Plant Biosecurity

UAVs are revolutionising the spatial sciences. Masters and PhD opportunities exist combining field-validated, image-derived spectra across a representative range of projects in agriculture and plant biosecurity.

More info here http://www.arcaa.aero/projects/

Dr. Jason Ford

Advanced vision-based aircraft "sense and avoid"

1 PhD position available. One of the key barriers to more routine use of small unmanned aerial vehicles (UAVs), or drones, is the requirement to replicate the human pilot's "sense and avoid" function (onboard the aircraft) to avoid mid-air collisions with other aircraft.

Over the last few years, in collaboration with Boeing and other partners, QUT has developed world-leading vision-based mid-air collision avoidance technology, which has demonstrated reliable detection of near-collision course aircraft. This proposed project will investigate refinement of these technologies, and will likely involve interaction with industry partners.

Medical Robotics

Supervisor(s) | Title/Topic | Short Description

Prof Jonathan Roberts

Remote Ultrasound for Intensive Care Units

This project will develop the techniques required to perform remote ultrasound diagnostic capabilities by an expert located remotely from the patient. The clinical settings would include rural, remote, regional and austere environments. The technological basis of the project is a lightweight robotic system for remote tele-operation of an ultrasound probe, which is able to work in a human environment. The research challenge is to adequately sense the patient’s body and safely manipulate the transducer in a medical environment with a patient and medical staff in close proximity.

The outcome of the research will include a design of a tele-operated ultrasound probe robotic manipulator, the integration of an ultrasound probe adequate for this diagnostic and medical environment, and the design of interfaces for trialling at ICU units and by the remote diagnostic specialists.


This project is a collaboration between the CyPhy Lab and QUT's Medical Engineering Research Facility (MERF). The successful student will be co-supervised by MERF Director Prof Ross Crawford and Dr Oran Rigby (Clinical Director NSW State Institute of Trauma & Injury Management).

Prof Jonathan Roberts

& Dr. Thierry Peynot

 

Towards Robotic Knee Arthroscopy

In this project PhD candidates will contribute to the development of a robotic system to perform knee arthroscopy (a form of minimally-invasive surgery).

In the short term the system will assist a surgeon; in the long term it should be capable of performing the arthroscopy fully autonomously. A number of aspects need to be investigated, including (but not limited to):

  • Real-time reliable 3D reconstruction of the inside of a knee using an arthroscope camera
  • Visual tracking of a surgeon's tool from the arthroscope camera view
  • Control of a leg manipulator with visual feedback from the inside of the knee

General Robotics and Autonomous Systems Projects

 

Main Supervisor(s) | Title/Topic | Short Description

Prof Tristan Perez

Dr. Jason Ford

Non-linear observer design for submarines close to the free surface

1 PhD Position available. In order to implement high-performance motion control systems of submarines, it is necessary to estimate motion attributes that are not directly measured. This can be achieved by combining data and mathematical models through on-line algorithms for inference. These algorithms are called observers in control literature. This project seeks to develop observers for submarine motion characteristics when sailing close to the free-surface where the waves affect the motion of the boat.
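
For a linear(ised) model x_{k+1} = A x_k + B u_k with measurement y_k = C x_k, the basic observer idea is to run a copy of the model and correct it with the measurement error; the sketch below is generic (the matrices are placeholders, not a submarine model).

```python
import numpy as np

def observer_step(x_hat, u, y, A, B, C, L):
    """One Luenberger-style observer update: model prediction plus measurement correction."""
    return A @ x_hat + B @ u + L @ (y - C @ x_hat)
```

The observer gain L is chosen so that the estimation error dynamics (A - L C) are stable; the project's challenge is the non-linear, wave-affected version of this problem near the free surface.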

We are looking for a student with either an engineering or mathematics background and an interest in vehicle dynamics and motion control, in particular with a strong background in mathematics related to system dynamics and control (ODEs).

This project has an associated $10,000 per year scholarship top-up from the Australian Department of Defence. The applicant must be an Australian citizen.

Prof Tristan Perez

Robot Trusted Autonomy

1 PhD Position available. Robots and autonomous systems will have to be integrated into operational spaces shared by humans and human-operated machinery. This is creating tremendous pressure on the safety regulatory agencies that design regulations and are responsible for certification. This project seeks to develop novel methods and technology for probabilistic assessment of robust autonomy in single and multi-vehicle operations. This can have a significant bearing on the uptake of mobile and service robot technology such as UAVs, UGVs, and USVs in civilian operations.

Dr. Jason Ford

Non-fragile path planning and collision avoidance in uncertain environments (August 2016).

1 PhD position available. Background: safe and efficient automated operation in dynamic, uncertain, real-world environments requires access to a range of sophisticated detection and decision-making capabilities. Over the last few decades, optimisation-based control design approaches such as optimal and/or minimax robust control have been powerful tools; however, concepts such as non-fragile control, anti-fragile thinking and satisficing decision-making paradigms have each exposed serious (hidden) weaknesses in optimisation-based control stemming from assumptions about model accuracy, the common under-modelling of risk, and ill-posed decision-making objectives. In practice, in many important problems we are more concerned with the "non-fragility" of systems with respect to unusual but possible events than with full optimisation of performance. Aim: this project will use concepts from non-fragile, anti-fragile or satisficing paradigms to further develop emerging "non-fragile" system decision and design concepts. One important application is path planning and collision avoidance in uncertain environments, but the project is expected to have broader importance.

Past work at QUT on non-fragile system design includes: Airborne vision-based collision-detection system, Practical stability of approximating discrete-time filters with respect to model mismatch, Control of aircraft for inspection of linear infrastructure, Sense and avoid technology developments at Queensland University of Technology

Dr. Feras Dayoub

Intelligent autonomy for mobile service robots

The capabilities that the public expect from mobile service robots of the near future are vastly different from most of what we program these robots to achieve now. People expect these robots to interact with them naturally, in a similar way to how humans interact with each other. These robots are also expected to collaborate with humans and work side by side with them to achieve a task. These expectations come from the fact that when humans are given the slightest hint that something is vaguely capable of autonomy and decision making, they respond with a wide range of social responses and treat the autonomous agent as if it has an internal thought process (for more information, see "theory of mind").

This PhD will exploit this fact to develop algorithms that enable a mobile service robot to detect and utilise such social responses (e.g. facial expressions, hand and body gestures, personal space) in order to adjust its behaviour during the execution of collaborative tasks with humans.

Dr. Frederic Maire


Deep learning for robotics

My research interests range from computer vision, humanoid robots to deep learning.

Deep neural networks are multi-layer neural networks capable of learning, from data, the features relevant for classification tasks. Some types of deep neural networks (stacks of restricted Boltzmann machines) are capable of learning hidden relationships between data sets (also known as latent variables). For example, it becomes possible to learn non-trivial relationships between streams of sensor data. Other recent advances in deep learning, combined with classical reinforcement learning, have large potential for robotics.
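
As a toy illustration of the "layers of learned features" idea (not tied to any particular project below), a two-layer network transforms raw inputs into intermediate features and then into output scores; the weights here are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # layer 1: 8 inputs -> 16 hidden features
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)    # layer 2: 16 features -> 3 output scores

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)              # hidden features (ReLU)
    return W2 @ h + b2                            # output scores

print(forward(rng.normal(size=8)))
```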

Following the link below, you will find a non-exhaustive list of potential research projects. A few have already started, but all can be scaled up!

Research project topics for prospective students.

   

 

Graduate Outcomes Case Study

Our PhD students go on to work in top industry and academic institutions all over the world.

Case Study: William Maddern

Google Scholar Profile

Will completed his PhD at the Queensland University of Technology in Brisbane, Australia in 2014 studying with Professor Gordon Wyeth and Dr Michael Milford. His research topic was large-scale appearance-based SLAM using a continuous trajectory representation. During his PhD he spent time as a visiting research student at the Mobile Robotics Group from April to August 2011. He investigated information-theoretic methods for point cloud registration to calibrate 2D and 3D laser scanners, mono and stereo cameras and INS systems for the Wildcat platform.

Will is now a postdoctoral researcher in the Mobile Robotics Group at Oxford University and is leading research on robotic cars.

 

Case Study: Stephanie Lowry

Google Scholar Profile

Stephanie completed her PhD at the Queensland University of Technology in Brisbane, Australia in 2014 studying with A.Prof Michael Milford and Professor Gordon Wyeth. Her research focused on visual place recognition algorithms for robotic navigation under challenging conditions.

Stephanie is now a postdoctoral researcher at Orebro University.


 

 
