My day job
My book was published in 2011 (visit its website), and a second edition was published this year.
PhD student topics
Right now I'm looking for PhD students to work on any of the following questions (or related):
- How can we split robot navigation functionality between a small/cheap local computer and a wireless network connection to a cloud computing resource? How do we minimise the amount of data transmitted and handle latency?
- How do we reduce the energy consumption of a robot vision system? Do we need to pay attention to all the pixels all the time, or can we somehow pay attention to the important stuff? Could we create the equivalent of an operating system, a vision operating system, that allocates "attention" according to the task at hand?
- How should robots move to get the best view of what's interesting to them? Should they move to eliminate a specular reflection, or to move an occluding obstacle out of the way?
- How can robots reach for and pick up variously shaped objects in a fast, graceful and natural way?
- Visual simulators. If we create robots that see, how can we test their operation under all viewing conditions? How do we couple high-fidelity virtual worlds to robot controllers, and how do we evaluate robot performance?
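Several of these topics connect to visual servoing, where image measurements drive robot motion directly. As a flavour of the area, here is a minimal sketch (in Python with NumPy; the point feature, its depth, and the gain are illustrative assumptions, not part of any specific project above) of the classic image-based visual servoing control law v = -λ L⁺ e:

```python
import numpy as np

# Minimal image-based visual servoing (IBVS) sketch for a single point
# feature seen by a perspective camera. Quantities are in normalised
# image coordinates; the specific values below are illustrative only.

def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix for a point feature at
    normalised image coordinates (x, y) with depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(p, p_star, Z, lam=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) driving the
    feature p towards its goal p_star: v = -lam * pinv(L) @ e."""
    e = p - p_star                          # feature error in the image
    L = interaction_matrix(p[0], p[1], Z)
    return -lam * np.linalg.pinv(L) @ e

# Example: the feature is to the right of where we want it, so the
# controller commands a camera motion that shrinks the image error.
p = np.array([0.2, 0.0])        # current feature position
p_star = np.array([0.0, 0.0])   # desired feature position (image centre)
v = ibvs_velocity(p, p_star, Z=1.0)
```

In practice the velocity screw would be sent to the robot at each control cycle and the loop repeated as the feature moves, with the depth Z estimated or approximated.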
If you are thinking about applying, you can find details about admission to QUT here. For:
- Australian students there is no tuition fee, but you need a living allowance. You are eligible to apply for a scholarship, the Australian Postgraduate Award (APA). These are quite competitive, based on your GPA, but with extra points for research (e.g. a Masters degree by research, and/or publication of a research paper).
- Overseas students you need to pay a tuition fee (~AUD25k/year) and show you have a living allowance (~AUD25k/year). You can apply for a scholarship that would cover tuition as well as a living allowance. These are limited in number and very competitive, based on your grades, but with extra points for research (e.g. a Masters degree by research, and/or publication of a research paper).
Current students:
- Will Chamberlain is working on middleware for robotic vision systems
- John Skinner is working on using photorealistic computer graphics to help create better vision systems
- Fangyi Zhang is working on deep reinforcement learning for visual robot control
- Dan Richards is working on ultra low light imaging for robotic navigation
- Dorian Tsai is working on light field imaging for low-light imaging and specularity removal
- Peter Kujala is working on ultra high speed hand-eye coordination
Past students and theses:
- Patrick Ross developed vision-based collision detection for agricultural robotics, now at Uber Tech Centre
- Andrew English developed robust vision-based crop row following for agricultural robotics, now at Oxbotica
- Alex Bewley developed robust vision-based detection and tracking in novel and dynamic outdoor environments, now at Oxford
- Steve Martin developed techniques to sense terrain drivability and plan energy efficient paths accordingly, now at Australian Centre for Robotic Vision
- Zongyuan Ge on fine grained classification of animal species, now a research scientist at IBM
- Aaron McFadyen on visual servoing for aircraft collision avoidance, now at QUT
- Chris Djamaludin on security for autonomous Delay Tolerant Networks, now at PwC
- Edward Pepperell on robust vision-based localisation and navigation
- Inkyu Sa developed high-speed monocular camera vision techniques for a quadrotor flying robot, now at ETH Zurich
- Paul Pounds quadrotor dynamics and control (ANU), now at UQ
- Peter Hansen: Wide-baseline keypoint detection and matching with wide-angle images, now at Uber Tech Centre
- Kane Usher: Visual homing for a car-like vehicle
- Jasmine Banks: Reliability analysis of transform-based stereo matching techniques, now at QUT
- Adrian Bonchis: Modelling and Control of Hydraulic Servo Systems, now at CSIRO
My Erdős number is no more than 3.
What do I do?
I am interested in how robots can use the sense of vision to accomplish a broad range of tasks. These might range from recognising places or text in the world to dynamic tasks such as hand-eye coordination, manipulation of objects, or driving/piloting a mobile robot on land, in the air or underwater.
Why vision, not GPS? GPS has limitations: there are lots of places where it won't work, and it only tells the robot where it is, not about the things the robot needs to deal with. We do all manner of complex tasks without GPS and coordinates, using relative position determined by our eyes. Nature has invented the eye ten different times, so it must be an effective sensor for doing a diverse range of tasks. Vision sensors and computing power are getting cheaper and cheaper. Now is the time to be doing vision for robotics!
Our research centre, the ARC Centre of Excellence for Robotic Vision, is pushing the envelope in this area – more details at www.roboticvision.org.
You can find lists of my publications in quite a few places. I've published over 400 papers and have over 20,000 citations and h-index > 60.
- Google Scholar citations
- my papers on QUT ePrints server (many available for download)
- BibServer display
- Microsoft Academic Search
At QUT I developed "Introduction to Robotics" (EGB339) and "Advanced Robotics" (EGB439) and have taught them several times, as well as "Advanced Control" (ENB458).
I'm really interested in teaching at scale using internet technologies such as MOOCs:
- In 2015-16 we ran two six-week MOOCs on the edcast platform: Introduction to Robotics, and Robotic Vision. These were university undergraduate-level courses.
- Since 2016 we've migrated a simplified subset of these courses to multi-course programs on the FutureLearn platform
- In 2017 we launched the QUT Robot Academy, which has over 200 video lessons (5-10 minute videos) drawn from the edcast MOOC content. Free to access and available 24x7.
The first runs of these two MOOCs, Introduction to Robotics and Robotic Vision, kicked off on 16 Feb 2015.
I joined Queensland University of Technology at the start of 2010 as a Professor of Robotic Vision. I'm now also director of the ARC-funded Centre of Excellence for Robotic Vision. I'm known for my research in vision-based robot control, field robotics and wireless sensor networks. I received B.Eng. and M.Eng.Sc. degrees, both in Electrical Engineering, and a PhD in Mechanical and Manufacturing Engineering, all from the University of Melbourne.
Prior to QUT I was a senior principal research scientist at CSIRO where I founded the Autonomous Systems laboratory, a 50-person team undertaking research in mining, ground, aerial and underwater robotics, as well as sensor networks. I subsequently led a major cross-organizational "capability platform" in wireless sensor networks.
Professional and Group Associations
- IEEE Robotics and Automation Society. Elected as Fellow in 2007. Elected to the AdCom (board of governors) for 2008-13 and 2016-18.
- International Foundation of Robotics Research (IFRR). Elected as a board member in 2009.
Scientific Community Service
- Program chair for IEEE Conf. Robotics and Automation (ICRA) 2018
- Editor-in-chief of the IEEE Robotics & Automation magazine (2009-2013)
- Founding multi-media editor and editorial board member of the International Journal of Robotics Research
- Founding and associate editor of the Journal of Field Robotics
- Member of the editorial advisory board of the Springer Tracts on Advanced Robotics series
- Past president of the Australian Robotics and Automation Association
- Region chair, area chair, and member of technical committees for major international conferences such as ICRA, IROS, RSS, SenSys and IPSN
Things I'm working on this year
- Teaching EGB439
- Developing more MOOCs for FutureLearn and participating in existing MOOCs
- Launching the Robot Academy
- Updating Robotics Toolbox and Machine Vision Toolbox for MATLAB
- Mid-term review of the ARC Centre of Excellence for Robotic Vision
I post a bit on Google+.