
PhDs at QUT

Details about the conditions, timing and scholarships of a PhD at QUT can be found here. Specific details about a PhD within the Science and Engineering Faculty, such as entrance requirements and fees, can be found here.

In Australia a PhD:

  • is a 3-4 year course of study
  • involves no coursework
  • is assessed entirely by a thesis, which is examined externally
  • allows the thesis to be a monograph or a collection of papers (thesis by publication)
  • fees
    • there are no fees for PhD students who are Australian or New Zealand citizens, or permanent residents of Australia
    • for international students the fees are ???

Process for Expressing Interest

Thanks for your interest in studying for a PhD with the QUT Centre for Robotics.

Due to the volume of enquiries we receive, we have a streamlined process for expressing your interest in a PhD at QUT Robotics.

If you haven’t already, please send us (robotics@qut.edu.au) your CV / resume with the subject heading "QCR PHD" – that’s all you need to do at this stage. If you are already talking to a specific academic (see topics below) please feel free to copy them on that email as well.

If you don’t hear from us, you can assume that we don’t currently have a suitable PhD opening for you. This applies whether your application is unsolicited or responds to a specific call for PhD applications.

If we follow up with you, that is when we can engage in more detailed discussion about eligibility, timing, application processes, etc.

General Information

Assuming that you aren’t already funded, please note that there is typically one major scholarship round per year, with applications likely closing in September 2020 for a start (if successful) from January 2021 onwards. A limited number of scholarships are available during the year, as well as scholarships directly funded by projects; we will contact you if it is suitable to proceed with an application.

Scholarships are generally quite competitive, so we always encourage students to apply to several universities to maximize their chances of finding a funded position. There are more than 40 universities within Australia alone that you can consider, as well as hundreds internationally.

Topic Areas

Click a topic area below or use your browser's search to find keywords:

PERCEPTION & LOCALIZATION

Broad topic area: How a robot or autonomous vehicle uses perception to create maps and calculate and track its location in the world

Key contact: Professor Michael Milford, michael.milford@qut.edu.au


Neuro-Autonomy: Neuroscience-Inspired Perception, Navigation, and Spatial Awareness for Autonomous Robots

State-of-the-art Autonomous Vehicles (AVs) are trained for specific, well-structured environments and, in general, would fail to operate in unstructured or novel settings. This project aims at developing next-generation AVs, capable of learning and on-the-fly adaptation to environmental novelty. These systems need to be orders of magnitude more energy efficient than current systems and able to pursue complex goals in highly dynamic and even adversarial environments.

Biological organisms exhibit the capabilities envisioned for next-generation AVs. From insects to birds, rodents and humans, one can observe the fusing of multiple sensor modalities, spatial awareness, and spatial memory, all functioning together as a suite of perceptual modalities that enable navigation in unstructured and complex environments. With this motivation, the project will leverage deep neurophysiological insights from the living world to develop new neuroscience-inspired methods capable of achieving advanced, next-generation perception and navigation for AVs.

PhD topics are available in bio-inspired place perception, bio-inspired sensing, and bio-inspired machine/deep learning for mapping and navigation.

These topics are part of a major AUSMURI project with US and Australian collaborating partners including MIT, BU, Uni Melb, QUT, UNSW and Macquarie University.

Place-informed Robotics and Autonomous Vehicles

There is great potential for improving many of the capabilities of robots and autonomous vehicles by using spatial information - both specific information about where they are in the environment, and contextual spatial information - "in a forest" versus "in a city". These spatially-informed capabilities include terrain traversability detection, object recognition, vulnerable road user detection and localization, and place recognition itself.

We are seeking PhD students interested in investigating this topic area.

This topic area has potential for collaborations with a number of research and industry / government partners, including QUT's Centre for Accident Research and Road Safety.

Bio-inspired Mapping and Navigation

Every human, animal, robot and autonomous system is defined and limited by its ability to navigate the world in which it exists. Despite major advances in sensing technology, computational hardware and machine learning techniques, the best navigation technologies available today have critical shortcomings, including reliance on GPS and limits on performance. In stark contrast with technological implementations, nature has evolved highly efficient and robust navigation systems that enable animals to find food, shelter and mates, even in extreme environments, and to migrate around the planet.

This research area takes the best-understood and best-performing aspects of both natural and artificial navigation systems, in terms of computation, sensing and compute hardware, and recreates them from scratch to generate new navigation knowledge and technologies for industry, research, government and public stakeholders.

Relevant current and past research projects include an ARC Future Fellowship, "Superhuman Place Recognition with a Unified Model of Human Visual Processing and Rodent Spatial Memory", and a US Air Force project, "An infinitely scalable learning and recognition network".

Robust Robotic Vision for Mapping and Localization

This project is developing robust means of performing typical recognition tasks, such as place recognition and object recognition, using visual sensing alone. This is a crucial robotic competence, yet it presents significant challenges given the wide range of environments robots operate in, as well as variation due to weather, daily and seasonal cycles, and structural change.

This project will combine machine learning and deep learning techniques to develop highly robust approaches to typical robotic vision tasks such as place recognition and object recognition under challenging conditions. This project includes opportunities for collaboration with international partners including the University of Essex, UK.

Platform-specific perception & localization for service and social robots

The requirements on perception and localization systems vary widely depending on the platform and context: an autonomous vehicle may need to localize precisely with respect to other vehicles and vulnerable road users in its environment, but less precisely in a global frame. Likewise, many robots in service, social or logistics roles place varied requirements on their perception and localization systems.

This research area is concerned with developing "fit for purpose" perception and localization systems for specific application domains and environments. At a deep technical level, this requirement concerns compute, memory and power constraints, but at a higher level it concerns the types of localization and perception capabilities which facilitate the functional requirements of the robot.

Visual Memory Summarisation for Lifelong Mobile Service Robot Operation in Everyday Environments

This research project aims to answer the question: how can video summarisation methods be used for efficient review and storage of visual sensory data captured during the lifelong operation of a mobile service robot? Video summarisation refers to the process of generating a summary that best conveys the most informative content of a longer video.
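
A toy sketch of the summarisation idea (our illustration, not this project's method): greedily keep a frame as a keyframe whenever its feature vector is sufficiently far from every keyframe kept so far. The per-frame descriptors and the novelty threshold tau are hypothetical placeholders for a real feature extractor and a tuned parameter.

```python
import numpy as np

def summarise(features, tau=3.0):
    """features: (n_frames, d) array of per-frame descriptors.
    Returns indices of a keyframe summary of the full stream."""
    keyframes = [0]                              # always keep the first frame
    for i in range(1, len(features)):
        dists = np.linalg.norm(features[keyframes] - features[i], axis=1)
        if dists.min() > tau:                    # novel enough: keep it
            keyframes.append(i)
    return keyframes

frames = np.random.rand(500, 64)                 # stand-in for real descriptors
print(summarise(frames))
```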

VISUAL LEARNING & UNDERSTANDING

Broad topic area: How a robot can learn to reliably interpret its environment, and build an internal representation of its surroundings in order to decide on its actions

Key contact: Dr Niko Suenderhauf, niko.suenderhauf@qut.edu.au

Semantic SLAM for Robotic Scene Understanding

Making a robot understand what it sees is one of the most fascinating goals in my current research. To this end, we develop novel methods for Semantic Mapping and Semantic SLAM by combining object detection with simultaneous localisation and mapping (SLAM) techniques.

We work on novel approaches to SLAM that create semantically meaningful maps by combining geometric and semantic information. Such semantically enriched maps will help robots understand our complex world and will ultimately increase the range and sophistication of interactions that robots can have in domestic and industrial deployment scenarios.

In our research, we tightly combine modern deep learning and computer vision approaches with classical probabilistic robotics. 
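
One small, concrete ingredient of such a pipeline is fusing noisy per-frame detections into a per-landmark class belief. A minimal sketch (our illustration, not the group's actual implementation) using a recursive Bayes update over a discrete label distribution:

```python
import numpy as np

def fuse_detection(belief, likelihood):
    """One recursive Bayes update of a landmark's class belief.
    belief: (n_classes,) prior probabilities over object classes.
    likelihood: (n_classes,) detector score for each class this frame."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.full(3, 1 / 3)                        # uniform prior, 3 classes
for det in ([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]):    # two noisy detections
    belief = fuse_detection(belief, np.asarray(det))
print(belief)                         # belief concentrates on class 0
```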

Robot Learning for Navigation, Interaction, and Complex Tasks

How can robots best learn to navigate in challenging environments and execute complex tasks, such as tidying up an apartment or assisting humans in their everyday domestic chores?

Often, hand-crafted architectures are based on complicated state machines that become intractable to design and maintain as task complexity grows. I am interested in developing learning-based approaches that are effective and efficient, and that scale better to complicated tasks.

Learning based on semantic information (such as that extracted by the semantic SLAM research above), or learning based on algorithmic priors, is an especially fascinating research direction.

Deep Learning for Robotics in Open-World Conditions: Uncertainty, Continuous Learning, Active Learning

In order to fully integrate deep learning into robotics, deep learning systems must be able to reliably estimate the uncertainty in their predictions. This would allow robots to treat a deep neural network like any other sensor and use established Bayesian techniques to fuse the network's predictions with prior knowledge or other sensor measurements, or to accumulate information over time.

Deep learning systems, e.g. for classification or detection, typically return scores from their softmax layers that are proportional to the system's confidence but are not calibrated probabilities, and are therefore not usable in a Bayesian sensor fusion framework.

Current approaches towards uncertainty estimation for deep learning are calibration techniques or Bayesian deep learning with approximations such as Monte Carlo Dropout or ensemble methods.

PhD topics in this area can focus on reliably extracting uncertainty using Bayesian deep learning approaches for the specific use case of object detection on a robot in open-set conditions, and on using that uncertainty information to actively accumulate new knowledge about the environment, e.g. by asking a human for ground truth labels (active continuous learning).
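
For concreteness, a minimal Monte Carlo Dropout sketch in PyTorch (a toy classifier of our own construction, not a detector from this project): dropout is kept active at test time, and the spread of the softmax outputs over repeated stochastic forward passes approximates predictive uncertainty.

```python
import torch
import torch.nn as nn

class MCDropoutClassifier(nn.Module):
    def __init__(self, in_dim=128, n_classes=10, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Dropout(p),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    model.train()                 # keep dropout stochastic at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(0), probs.var(0)   # predictive mean and variance

x = torch.randn(4, 128)                  # a dummy batch of features
mean, var = mc_dropout_predict(MCDropoutClassifier(), x)
print(mean.shape, var.shape)
```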

Augmented and Mixed Reality Applications of Object-based Semantic SLAM
Performance Monitoring of Deep Learning Models for Robotic Perception

This project challenges the common but weak assumption in the literature that the performance of deep learning models reported on a held-out dataset is indicative of their performance under all future, yet to be encountered conditions during deployment. In reality, performance fluctuates and can drop below critical thresholds when the robot travels through particular places, times and conditions; we were able to observe this fluctuation thanks to labelled ground truth images. This PhD will investigate new methods and approaches for performance monitoring without the need for ground truth data.
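
As one hedged illustration of label-free monitoring (our assumption, not the project's prescribed method), a deployment-time monitor can compare a running statistic of the model's confidence against a baseline established on validation data and raise an alarm when it drifts:

```python
import numpy as np

def confidence_monitor(confidences, baseline_mean, baseline_std,
                       window=100, z_thresh=4.0):
    """Flag sliding windows whose mean max-softmax confidence drifts more
    than z_thresh standard errors from the validation-time baseline."""
    se = baseline_std / np.sqrt(window)
    alarms = []
    for t in range(window, len(confidences) + 1):
        m = confidences[t - window:t].mean()
        if abs(m - baseline_mean) / se > z_thresh:
            alarms.append(t)
    return alarms

conf = np.concatenate([np.random.normal(0.90, 0.05, 500),   # nominal
                       np.random.normal(0.60, 0.05, 200)])  # degraded
print(confidence_monitor(conf, baseline_mean=0.90, baseline_std=0.05)[:3])
```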


Lifelong Semantic Mapping of Large Scale Environments

When building a map of objects inside a real environment on the scale of a house, a warehouse or a city, the map can become out of date very quickly as objects are moved around over time. This PhD project investigates new methods for keeping a high-resolution semantic map up to date using partial and possibly low-resolution snapshots of the environment captured by sensors mounted on moving agents (e.g. vehicles, mobile robots, personal devices).

Semantics-based onboard UAV navigation

In recent years the field of robotic navigation has increasingly harnessed semantic information to facilitate the planning and execution of robotic tasks. Semantic approaches employ representations that are more understandable by humans in order to accomplish tasks robustly under environmental change, limit memory requirements and improve scalability. Contemporary computer vision algorithms for extracting semantic information have continuously improved their performance on benchmark datasets; however, most are computationally expensive, limiting their use on robotic platforms constrained by size, weight and power, such as unmanned aerial vehicles (UAVs). Recent advances have demonstrated the potential for navigation systems based on semantic information to be incorporated into the real-time operation of UAVs. This PhD focuses on the development and incorporation of semantic information into a UAV navigation system.


Also involving Prof Michael Milford 

DECISION & CONTROL

Broad topic area: How to reliably make autonomous decisions and control for robots in the presence of uncertainty

Key contact: Professor Jason Ford, j2.ford@qut.edu.au

Wide field of view sense and avoid

Sense and avoid (SAA) refers to the implied regulatory requirement that UAVs be capable of sensing and avoiding potential mid-air collision threats. Developing systems capable of matching or exceeding the reported performance of human pilots, and thereby meeting the implied SAA regulatory requirement, is one of the key technical challenges hindering the routine, standard and flexible operation of UAVs in the national airspace.

Whilst much progress has been made over the last decade with narrow field of view (FOV) sensors, it is still extremely difficult to replicate a human pilot's ability, using computer vision, to sense potential aircraft at ranges exceeding 2km from a wide field of regard. 

This project will research how to achieve long-range aircraft detection from an image sequence taken by a wide FOV sensor. The project will involve investigation of candidate image processing approaches, building from what is already known about narrow FOV image-based aircraft detection. Key aspects of this project will relate to discovering how to replace the planar image homography operation and how to detect very small, low signal-to-noise ratio objects in distorted image sequences. The project may also involve advanced anomaly detection techniques and advanced mathematical techniques.
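
To make the low signal-to-noise aspect concrete, a heavily simplified sketch (our toy, not the project's detection pipeline): temporal averaging of registered frames suppresses zero-mean noise and raises the effective SNR of a persistent dim target before thresholding. A real system would add background modelling, morphological filtering and temporal tracking.

```python
import numpy as np

def detect_dim_targets(frames, k=3.5):
    """frames: (T, H, W) image stack already registered to a common frame.
    Averaging T frames reduces noise std by sqrt(T), so a target with
    per-frame SNR near 1 becomes separable from the background."""
    avg = frames.mean(axis=0)
    score = avg - np.median(avg)                   # crude background removal
    return np.argwhere(score > k * score.std())    # candidate pixel locations

T, H, W = 32, 64, 64
frames = np.random.normal(0.0, 1.0, (T, H, W))
frames[:, 40, 17] += 1.0                           # dim target, per-frame SNR ~ 1
print(detect_dim_targets(frames))
```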

Also involving Dr Jasmin Martin

The insufficient informativeness of measurements in Bayesian detection problems

Shiryaev's Bayesian Quickest Change Detection (QCD) problem is to detect a change in the statistical properties of an observed process. This is an important signal processing problem with applications in a diverse range of areas including automatic control, quality control, statistics, target detection and more.

This PhD project will identify, characterise and then create new solutions to a number of signal processing problems where non-ergodic signal models are currently used. This includes the mathematics of statistical processing: Markov chains, probability and mathematical expectation operations, and dynamic programming/recursion equations. Some algorithm development and data analysis will take place during this project.
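
For concreteness, the heart of Shiryaev's formulation is a scalar recursion on the posterior probability that the change has already occurred; an alarm is declared when that posterior crosses a threshold. A minimal sketch under illustrative Gaussian pre- and post-change densities (the densities, prior rho and threshold are our assumptions):

```python
import numpy as np

def gauss_pdf(y, mu, sigma=1.0):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def shiryaev(ys, rho, f0, f1, threshold=0.95):
    """rho: prior probability of the change occurring at each time step.
    f0 / f1: pre- and post-change observation densities."""
    p = 0.0
    for k, y in enumerate(ys):
        pred = p + (1 - p) * rho                 # predict: change may occur
        num = pred * f1(y)
        p = num / (num + (1 - pred) * f0(y))     # Bayes measurement update
        if p >= threshold:
            return k, p                          # alarm time, posterior
    return None, p

ys = np.concatenate([np.random.normal(0, 1, 50),    # pre-change samples
                     np.random.normal(1, 1, 50)])   # post-change mean shift
print(shiryaev(ys, rho=0.01,
               f0=lambda y: gauss_pdf(y, 0.0),
               f1=lambda y: gauss_pdf(y, 1.0)))
```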

UAV Navigation in GPS Denied Environments 

This PhD project aims to develop a framework for unmanned aerial vehicles (UAVs) that optimally balances localisation, mapping and other objectives in order to solve sequential decision tasks under map and pose uncertainty. This project expects to generate new knowledge in UAV navigation using an innovative approach combining simultaneous localisation and mapping algorithms with partially observable Markov decision processes. The project’s expected outcomes will enable UAVs to solve multiple objectives under map and pose uncertainty in GPS-denied environments. This will provide significant benefits, such as more responsive disaster management, bushfire monitoring and biosecurity, and improved environmental monitoring.

Multi-UAV Navigation in GPS Denied Environments 

The aim of this research is to develop a framework for multiple Unmanned Aerial Vehicles (UAVs) that balances information sharing, exploration, localization, mapping and other planning objectives, allowing a team of UAVs to navigate complex environments in time-critical situations. This project expects to generate new knowledge in UAV navigation using an innovative approach combining Simultaneous Localization and Mapping (SLAM) algorithms with Partially Observable Markov Decision Processes (POMDPs) and deep reinforcement learning. This should provide significant benefits, such as more responsive search and rescue inside collapsed buildings or underground mines, as well as fast target detection and mapping under the tree canopy.

Automating drone traffic management systems

Unmanned Traffic Management (UTM) describes a set of systems, services and procedures that will be developed to manage drone (unmanned aircraft systems/unmanned aerial vehicle/remotely piloted aircraft system) operations in and around our cities, including package delivery and inspection tasks and passenger transport. Essentially, UTM is a new air traffic control system for drones with high levels of automation and advanced decision making. We are developing powerful and scalable technologies that allow thousands of drones to operate safely in our skies.

This research topic contains multiple areas of investigation including:

Autonomous control of drones - Investigate robust and stable control algorithms that enable multiple drones to coordinate their motion for formation flight, collision avoidance, platooning, optimised surveillance and flight along intersecting routes.

Low-level airspace and traffic networks - Investigate manned and unmanned traffic modelling approaches for collision probability or risk analysis to aid airspace design and characterisation, automated flight approval, separation standard development and tactical mitigation performance metrics.

Air traffic configuration modelling and prediction - Investigate novel representations of air traffic movement, patterns and configurations using machine learning, Markov chains or other methods (a minimal Markov chain illustration follows this list).
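
As a hedged illustration of the Markov chain option above (placeholder traffic-configuration labels and data, not a real UTM model), a first-order transition matrix can be estimated from a sequence of discretised traffic states simply by counting transitions:

```python
import numpy as np

def estimate_transition_matrix(states, n_states):
    """Maximum-likelihood estimate of a first-order Markov chain from an
    observed sequence of discrete traffic-configuration labels."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1                  # count observed transitions a -> b
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1            # unseen states keep all-zero rows
    return counts / row_sums

# Hypothetical labels: 0 = light traffic, 1 = moderate, 2 = congested
seq = [0, 0, 1, 1, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1, 0]
print(estimate_transition_matrix(seq, n_states=3))
```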

Robust Feature Selection and Correspondence for Visual Control of Robots

Stable correspondence-free image-based visual servoing is a challenging and important problem. In classical image-based visual controllers, explicit feature correspondence (matching) to some desired feature set is required before a control input can be obtained. Instead, this project will investigate robust feature selection and correspondence methods that can simultaneously solve the feature correspondence and visual servoing problems, removing any feature tracking requirement or additional image processing.
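
For background, the classical law this project seeks to move beyond drives the error between matched current and desired image features to zero through the interaction matrix. A minimal point-feature sketch in the standard textbook form (the depths and gain are assumed values):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 image Jacobian for a normalised image point (x, y) at depth Z,
    mapping camera spatial velocity to image feature velocity."""
    return np.array([
        [-1 / Z,      0, x / Z, x * y,     -(1 + x * x),  y],
        [     0, -1 / Z, y / Z, 1 + y * y, -x * y,       -x],
    ])

def ibvs_velocity(s, s_star, depths, lam=0.5):
    """Classical IBVS: v = -lam * pinv(L) @ (s - s*). Note the explicit
    correspondence s[i] <-> s_star[i]: removing this requirement is
    precisely what the project proposes."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(s, depths)])
    error = (np.asarray(s) - np.asarray(s_star)).ravel()
    return -lam * np.linalg.pinv(L) @ error

s      = [(0.10, 0.05), (-0.08, 0.12), (0.02, -0.10)]   # current features
s_star = [(0.00, 0.00), (-0.10, 0.10), (0.00, -0.12)]   # desired features
print(ibvs_velocity(s, s_star, depths=[1.0, 1.2, 0.9])) # camera velocity
```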


Example of recent past work: https://eprints.qut.edu.au/113191/

Also involving Prof Jason Ford.

Coordinated control of multi-robot systems for dynamic task execution 

Managing multiple robotic systems simultaneously poses many challenges around coordination and control. This is particularly true in environments where accurate localisation is unavailable, sensing is uncertain and communications are limited, yet there is an overarching mission objective or series of tasks that must be completed.

In this project, you will explore and develop approaches to multi-robot swarming and coordinated formation control for dynamic process monitoring, target tracking and coordinated mapping. There will be a particular focus on underwater and surface robotic systems, which inherently have different levels of sensing, mobility and communication uncertainty.


PHYSICAL INTERACTION

Broad topic area: How a robot interacts with the physical world 

Key contact: Distinguished Professor Peter Corke, peter.corke@qut.edu.au


Learning about objects by looking and manipulating

We learn about objects by picking them up and looking at them. How can we create a robot that does something similar: picking up a thing it hasn't seen before in order to better understand its geometry, color, texture and other properties? Examples include fruit and vegetables, manufactured objects, or manipulating the leaves of a plant to see its fruit or stem structure.

Learning about environments through physical interaction

How can a robot learn the physical characteristics of an environment by interacting with it? For example, how can a wheeled robot learn ground properties as, or even before, it drives over them?

Robotic grasping: the last inch problem

Robotic grasping is an important and challenging contemporary problem in robotics. Most grasping systems are open-loop, i.e. the object is observed, a grasp is planned, and the robot moves to the planned grasping pose. However, if the object is moving, or the robot is not sufficiently accurate, the grasp will fail. The alternative is closed-loop motion, where a vision sensor ensures the robot moves to the desired object-relative pose. However, as we approach the object the vision problem becomes challenging: the object might become occluded by the gripper, or the object might go out of focus. How can we solve this last inch problem and have usable vision right up until the moment the fingers close on the object? Can we put tiny cameras in the palm of the robot's hand or even its fingertips?
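
A minimal sketch of the closed-loop alternative (illustrative only; the perception and robot interfaces are hypothetical placeholders): every control cycle, the object is re-observed relative to the gripper and a velocity proportional to the remaining error is commanded, so the approach converges even if the object drifts.

```python
import numpy as np

def servo_to_grasp(observe_object, command_velocity, close_gripper,
                   gain=2.0, tol=0.002, max_steps=500):
    """Closed-loop grasp approach: re-sense the object each cycle and
    command a velocity proportional to the remaining error."""
    for _ in range(max_steps):
        error = observe_object()          # object position in gripper frame
        if np.linalg.norm(error) < tol:
            close_gripper()               # within tolerance: grasp
            return True
        command_velocity(gain * error)    # proportional servo law
    return False                          # did not converge in time

class ToyWorld:
    """Stand-in for a real robot and camera."""
    def __init__(self):
        self.pos = np.array([0.05, -0.02, 0.10])  # object in gripper frame (m)
    def observe(self):
        return self.pos.copy()
    def move(self, v, dt=0.02):
        self.pos -= v * dt                # gripper closes the gap this step

world = ToyWorld()
print(servo_to_grasp(world.observe, world.move, lambda: print("fingers close")))
```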

Looking for candidates with some or all of the following experience: robotic grasping, computer vision, mechatronic design and coding.

Very high-speed dynamic motion planning for arm robots

Robot manipulator arms are increasingly used for logistics applications. These typically require robots to run at the limits of their performance: motor torque and motor velocity. Added challenges include significant payloads (if we are schlepping heavy parcels) with a priori unknown mass, the possibility of boxes detaching from the gripper under high acceleration, and fixed obstacles in the workspace. How can we determine the limits of performance, quickly identify the payload mass, and then plan the fastest path to get from A to B?
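
To ground the "fastest path from A to B" question, a textbook baseline (our hedged illustration, not this project's planner): under symmetric velocity and acceleration limits, the time-optimal rest-to-rest motion for a single axis is a trapezoidal velocity profile, degenerating to a triangle when the velocity limit is never reached. A heavier payload lowers the achievable acceleration of a torque-limited motor, which the example makes explicit.

```python
import math

def min_time_profile(distance, v_max, a_max):
    """Minimum rest-to-rest travel time for one axis with |v| <= v_max
    and |a| <= a_max (trapezoidal / triangular velocity profile)."""
    d_accel = v_max ** 2 / a_max        # distance spent accelerating + braking
    if distance < d_accel:              # triangular: v_max never reached
        return 2 * math.sqrt(distance / a_max)
    return 2 * v_max / a_max + (distance - d_accel) / v_max

print(min_time_profile(1.0, v_max=2.0, a_max=5.0))  # light payload
print(min_time_profile(1.0, v_max=2.0, a_max=2.0))  # heavy payload: slower
```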

Looking for candidates with some or all of the following experience: robot arm kinematics and dynamics, control theory, system identification and modelling, simulation and coding.

High-speed vision controlled motion of flexible planar robotic arms

Modern robot arms are capable of high-speed motion with very high precision. However, to achieve this they require accurately machined components, accurate encoders and stiff links that don't deflect during aggressive motion. Stiff links are typically heavy, which requires more powerful motors, and these are more expensive.

What if we took a different approach? We could admit that the robot is flexible and inaccurate, and use vision to control the relative position of the end-effector. The camera could be a high-frame-rate conventional camera or an event camera. The project requires development of the experimental testbed, development of good dynamic models of the machine, actuators and sensors, synthesis of appropriate control laws, and demonstration of all-up performance.

Looking for candidates with some or all of the following experience: robot arm kinematics and dynamics, control theory, system identification and modelling, computer vision, hands-on mechatronic design, simulation and coding.

High-speed robotic waste separation

Sorting waste or recyclables is an important but unpleasant job, currently done by specialised machinery, with humans handling the hard bits. Which of the core challenges could be taken on by "robots that see"? This is a challenging problem in perception, dynamic path planning and control.

Looking for candidates with some or all of the following experience: robot arm kinematics and dynamics, computer vision, deep learning, hands-on mechatronic design, simulation and coding.

Outdoor litter collection

Clean Up Australia Day in 2019 collected 17,000 ute loads of rubbish from rivers, parks, beaches, roadways and bushland. Imagine a robot ground vehicle or boat that could identify litter and plan its motion so that it picks up items in the right sequence without having to stop, while also navigating around obstacles in the environment. This is a challenging problem in perception, dynamic path planning and control.

Looking for candidates with some or all of the following experience: robot arm kinematics and dynamics, mobile robotics, path planning for redundant robots, computer vision, deep learning, hands-on mechatronic design, simulation and coding.

Robotic maintenance of equipment
Robotic Rehabilitation and Simulation for Joint Biomechanics

A new Australian Research Council Industrial Transformation Training Centre headquartered at QUT will transform the area of personalised surgical treatment of joints using state-of-the-art biomechanical techniques, led by Professors Yuantong Gu, Peter Pivonka and Graham Kerr, partnering with surgeons Ashish Gupta & Kenneth Cutbush.

We are seeking expressions of interest from prospective PhD students focusing on the development of robotic device testing and robotic rehabilitation devices. Looking for candidates with some or all of the following experience: collaborative robot arms, health robotics, computer vision, control and planning, hardware, sensing and coding.

Please e-mail your CV to Professor Peter Pivonka peter.pivonka@qut.edu.au and Michael Milford michael.milford@qut.edu.au with the subject heading "ITTCPHD".

Terrain Traversability

Autonomous and non-autonomous vehicles face a wide range of challenges in identifying and safely traversing terrain across environments ranging from mine sites to farms, forests and deserts.

We are looking for PhD students interested in developing new techniques for improving terrain traversability detection, using both "traditional" methods as well as modern deep learning-based pipelines.

Also involving Associate Professor Thierry Peynot

Search and Retrieve with a Fully Autonomous Aerial Manipulator

The aim of this PhD is to develop an autonomous multirotor-based aerial manipulator capable of searching an environment, visually identifying a payload, and performing a pick-and-place type maneuver. Full 3D trajectories for the search stage of the flight need to be predefined, with the grasping maneuver generated dynamically once the payload is identified. The manipulator and payload interactions with the multirotor base need to be actively compensated for by the controller to ensure stable flight during all maneuvers.

Also involving Dr Aaron McFadyen 

Reaching in Clutter using Force and Tactile Feedback

Reaching into cluttered and unstructured environments for robotic manipulation is still a largely unsolved problem. Current motion planning strategies optimise for reaching while avoiding collisions with the environment, which is fundamentally limiting when interacting with real-world environments where contact is inevitable.

This PhD seeks to understand how we can use tactile or other sensory feedback and advanced control methods to exploit the environment for solving robotic tasks that are not achievable with current techniques. 

This PhD aims to give robots the ability to safely interact with the world without damaging themselves, others or the environment. 

This would take the robotics community one step closer to human-like performance in cluttered and unstructured environments.

We are looking for candidates with an interest in motion planning, control and sensor design in the application areas of robotic grasping and manipulation.

Moving to See

This PhD aims to investigate methods for enabling robots to intelligently move their perception systems to improve their view of a target object.

Typically, robots capture images of their environment and then decide how to act: grasping an item, moving to a location, etc. However, sometimes a robot needs to gather more information in order to make a better decision. How can a robot decide where to move its sensors (e.g. a camera) so that it learns more about its environment?

One application area is highly occluded environments, such as natural agricultural settings where leaves, branches and other obstacles block the view to a target.

Taking inspiration from optimal control and visual servoing techniques, can new methods be developed that enable robots to intelligently maneuver their perception systems in order to improve their view of a target and hence improve the likelihood of succeeding at their tasks?

Perception-to-action for collision avoidance using robotic boats

Much like driving cars on our roads, there are rules for operating maritime systems (boats) on waterways, governing where you can travel and how to behave in, and avoid, potential collision situations.

In this project, you'll explore and develop state-of-the-art perception and decision support solutions that allow robotic surface vessels (robot boats) to safely travel complex waterways in and around human-driven vessels. This will involve diving deep into vision- and laser-based sensor processing and fusion algorithms, as well as robust autonomous decision support frameworks for compliance with the collision regulations (COLREGs). A goal will be to validate these algorithms on our extensive state-of-the-art robotic boat fleet.

This project is funded by an industry collaborator.

Gesture-based control of underwater helper-bots

Underwater robotic systems have been in use for several decades. In recent years, various groups have been adding manipulators and other payloads to increase their utility. The next frontier is to have human divers and robotic systems collaborate safely and productively in the same space to jointly complete complex tasks.

In this project, you'll explore gesture-based interaction that allows a diver and an underwater robotic system to collaborate on various tasks. This will involve exploring vision processing and fusion algorithms, gesture recognition, and intent detection and prediction. You'll then implement and validate these algorithms on our extensive state-of-the-art robotic platforms. This project involves co-supervision with a US-based collaborator.

HUMAN INTERACTION

Broad topic area: How robots can effectively interact socially with humans

Key contact: Distinguished Professor Peter Corke, peter.corke@qut.edu.au








