
The Robotics and Autonomous Systems (RAS) seminar series is open to the public. Everyone is welcome to attend.


Speaker: Nicolas Hudson

Title: Mobile Manipulation

Abstract: An overview of key insights and winning strategies used by NASA’s Jet Propulsion Laboratory in the DARPA ARM, ARL's RCTA and DARPA Robotics Challenge programs, and how this intersects with Google's "AI-First" world.

Where: QUT Gardens Point S-Block, 11th floor, The Cantina Lounge

When: 11:00AM-12:00PM on 24/April/2018

Seminar internal speaker list (constantly updated):

If your name is near the top, please be prepared to give a RAS presentation soon (it is going to keep floating there until you do one).

Kulatunga Mudiyanselage Eranda Bankara Tennakoon (Eranda) :  None
William Hooper :
Matthew Dunbabin :
Jonathan Roberts :
Chris McCool :
Michael Milford :
Juxi Leitner :

Niko Suenderhauf :
Chris Lehnert :
Fangyi Zhang :
Ajay Pandey :
Leo Wu :
Valerio Ortenzi :
Andres Marmol :
Jason Ford :
John Skinner :
James Mount :
Sean McMahon :
Anders Eriksson :
William Chamberlain :
Douglas Morrison :
Fan Zeng :
Sourav Garg :
Norton Kelly-Boxall :
Peter Corke :
Feras Dayoub :
Thierry Peynot :
Luis Mejias Alvarez :
Suman Bista :

(Will do later) Anjali Jaiprakash :
(Will do later) Felipe Gonzalez :
(Will do later) Jeremy Opie :  None
(Will do later) Lachlan Nicholson :
(Will do later) Mario Strydom :
(Will do later) Sean Wade-McCue :
(Will do later) Troy Cordie :

(Will do later) Jordan Laurie :

(Will do later) Vibhavari Dasagi :

(??) Matt McTaggert :  None

(??) Riccardo Grinover :
(??) Steve Martin : None

(??) Fahimeh Rezazadegan :


This list contains all the members from here

Ranked by most recent seminar date. People who haven't given a seminar since 2016 are listed first, in alphabetical order by first name.

Organiser: Please contact Fan Zeng, the organiser of this seminar series, if:

  1. Your name is not included in the list, and you'd like to add it into the list.
  2. Your name is included in the list, but you'd like it removed.
  3. You, or your visitor would like to give a talk in one of the upcoming sessions.
  4. Your name is near the top of the list, but you cannot give a seminar due to various reasons.

Thank you very much for your attention!

Full List of Seminars:

Seminars 2018:

Please email Fan Zeng if your presentation is arranged in the table below on a date you are not available. Please also remember to email the title and abstract when ready. A short biography would be appreciated, for a brief introduction of the speaker before the presentation. Thanks!




05-Jun-18: Tomas Krajnik


FreMEn: Frequency Map Enhancement for Long-Term Autonomy of Mobile Robots


While robotic mapping of static environments has been widely studied, life-long mapping in non-stationary environments is still an open problem. We present an approach for long-term representation of natural environments, where many of the observed changes are caused by pseudo-periodic factors, such as seasonal variations or humans performing their daily chores.

Rather than using a fixed probability value, our method models the uncertainty of the elementary environment states by their frequency spectra. This allows us to integrate sparse and irregular observations obtained during long-term deployments of mobile robots into memory-efficient models that reflect the recurring patterns of activity in the environment. The frequency-enhanced spatio-temporal models allow us to predict future environment states, which improves the efficiency of mobile robot operation in changing environments. In a series of experiments performed over periods of weeks to years, we demonstrate that the proposed approach improves mobile robot localization, path and task planning, and activity recognition, and allows for life-long spatio-temporal exploration.
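For intuition, the frequency-map idea can be sketched in a few lines of Python: model each binary environment state as a mean occupancy plus its strongest periodic components, fitted from timestamped observations. This is an illustrative simplification only, not the authors' FreMEn implementation; the function names, the candidate-period list, and the number of retained components are all assumptions.

```python
import numpy as np

def fremen_fit(times, states, periods, k=2):
    """Fit a FreMEn-style model: mean occupancy plus the k most
    prominent periodic components among the candidate periods."""
    times = np.asarray(times, dtype=float)
    states = np.asarray(states, dtype=float)
    mean = states.mean()
    residual = states - mean
    comps = []
    for T in periods:
        # Complex amplitude of the candidate frequency 1/T,
        # estimated directly from (possibly irregular) samples.
        phase = 2 * np.pi * times / T
        amp = np.mean(residual * np.exp(-1j * phase))
        comps.append((abs(amp), T, np.angle(amp)))
    comps.sort(reverse=True)          # strongest components first
    return mean, comps[:k]

def fremen_predict(t, mean, comps):
    """Predict the state's occupancy probability at time t."""
    p = mean + sum(2 * a * np.cos(2 * np.pi * t / T + ph)
                   for a, T, ph in comps)
    return float(np.clip(p, 0.0, 1.0))
```

Because only a handful of (amplitude, period, phase) triples are stored per state, the model stays memory-efficient while still extrapolating recurring daily or weekly patterns into the future.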

29-May-18: Stéphane Caron


The Inverted Pendulum: a simple model for 3D Bipedal Walking


Walking pattern generators based on the Linear Inverted Pendulum Model (LIPM) have been successfully showcased on real robots. However, due to key assumptions made in this model, they only work for walking over horizontal floors (2D walking). In this talk, we will see how to generalize the LIPM to 3D walking over uneven terrains, opening up old but refreshed questions on the analysis and control of bipeds. Our aim is to enable humanoids to walk in new environments: outdoors, staircases, hazardous areas, etc. Today's public research has reached the simulation stage in this field, as we will see in live simulations during the talk. We will finally discuss our ongoing efforts to make this a reality (in the public world) on the HRP-4 robot.
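The planar LIPM the abstract builds on reduces to a single linear equation, xdd = (g/h)(x - p), for centre-of-mass position x, constant height h, and zero-moment point p. The sketch below is illustrative only; the constants, the Euler integrator, and the capture-point helper are assumptions for the example, not material from the talk.

```python
import numpy as np

G = 9.81   # gravity [m/s^2]
H = 0.8    # assumed constant centre-of-mass height [m]

def lipm_step(x, xd, p, dt):
    """One Euler step of the planar LIPM: the CoM is pushed away
    from the zero-moment point p with xdd = (g/h) * (x - p)."""
    xdd = (G / H) * (x - p)
    return x + xd * dt, xd + xdd * dt

def capture_point(x, xd):
    """Foot placement that asymptotically brings the CoM to rest."""
    omega = np.sqrt(G / H)
    return x + xd / omega
```

Placing the ZMP at the capture point cancels the divergent component of motion; properties like this stop holding verbatim once the constant-height assumption is dropped for 3D walking over uneven terrain, which is the gap the talk addresses.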

No seminar. Reason: ICRA2018
GP-S11 Cantina Lounge

No seminar. Reason: RAS Discipline Meeting
GP-S11 Cantina Lounge
08-May-18: Thierry Peynot

GP-S11 Cantina Lounge
01-May-18: Kulatunga Mudiyanselage Eranda Bankara Tennakoon (Eranda)

24-Apr-18: Nicolas Hudson

Title: Mobile Manipulation

Abstract: An overview of key insights and winning strategies used by NASA’s Jet Propulsion Laboratory in the DARPA ARM, ARL's RCTA and DARPA Robotics Challenge programs, and how this intersects with Google's "AI-First" world.

GP-S11 Cantina Lounge
17-Apr-18: Suman Raj Bista

Title: Indoor navigation of mobile robots based on visual memory and image-based visual servoing

Abstract: This talk will focus on a method for appearance-based navigation from an image memory by Image-Based Visual Servoing (IBVS). The entire navigation process is based on 2D image information, without using any 3D information at all. The environment is represented by a set of reference images with overlapping landmarks, which are selected automatically during a prior learning phase. These reference images define the path to follow during navigation. The switching of reference images during navigation is done by comparing the current acquired image with nearby reference images. Based on the current image and two succeeding key images, the rotational velocity of the mobile robot is computed under an IBVS control law. Navigation methods using local features such as lines, and using the entire image via mutual information, will be presented with experimental results.
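As a toy illustration of the purely 2D flavour of IBVS described above (this hypothetical helper is not the control law from the talk; the feature format and gain are assumptions), rotational velocity can be driven directly by the horizontal image error between current and reference features:

```python
def ibvs_rotational_velocity(current_features, desired_features, lam=0.5):
    """Minimal IBVS-flavoured steering law for a mobile robot:
    rotate to reduce the mean horizontal offset between matched
    features in the current image and in the next key image.
    Features are (x, y) image coordinates."""
    errors = [cx - dx for (cx, _), (dx, _) in
              zip(current_features, desired_features)]
    mean_error = sum(errors) / len(errors)
    return -lam * mean_error   # exponential decay of the image error
```

Note how no depth or pose estimate appears anywhere: the command is a function of image measurements alone, which is the point of the 2D-only formulation.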

GP-S11 Cantina Lounge
10-Apr-18: Luis Mejias Alvarez

Title: Experiences and Work during PDL 2017

Abstract: In this talk, I will present the main experiences during my stay in France, Canada and Spain in 2017. I will also present the work developed during this time, which deals with the development of UAV navigation approaches that do not rely on GPS. The main technique behind the approach is called visual control, in particular of a type that exponentially decouples the translational from the rotational degrees of freedom. I will present motivation, flight experiments and results from this work.

GP-S11 Cantina Lounge

Peter Corke

Melissa Johnston

MARS conference experience

New SEF 3D printing capabilities to show you!

GP-S11 Cantina Lounge
27-Mar-18: Dorian Tsai

Title: Distinguishing Refracted Features using Light Field Cameras with Application to Structure from Motion

Abstract: Robots must reliably interact with refractive objects in many applications; however, refractive objects can cause many robotic vision algorithms to become unreliable or even fail, particularly feature-based matching applications, such as structure-from-motion. We propose a method to distinguish between refracted and Lambertian image features using a light field camera. Specifically, we propose to use textural cross-correlation to characterise apparent feature motion in a single light field, and compare this motion to its Lambertian equivalent based on 4D light field geometry. Our refracted feature distinguisher has a 34.3% higher rate of detection compared to state-of-the-art for light fields captured with large baselines relative to the refractive object. Our method also applies to light field cameras with much smaller baselines than previously considered, yielding up to 2 times better detection for 2D-refractive objects, such as a sphere, and up to 8 times better for 1D-refractive objects, such as a cylinder. For structure from motion, we demonstrate that rejecting refracted features using our distinguisher yields up to 42.4% lower reprojection error, and a lower failure rate when the robot is approaching refractive objects. Our method leads to more robust robot vision in the presence of refractive objects.

GP-S11 Cantina Lounge
20-Mar-18: Paul Wilson

Title: Distributed acoustic sensing of conveyors


Mining3 has been investigating the use of fibre optic distributed acoustic sensing since 2014 for monitoring the condition of conveyor belts. Because fibre optic cable behaves rather differently from microphones or piezo pickups, it has taken a great deal of work to be able to extract meaningful spectra from the data collected. In combination with research into the various failure modes and wear patterns of conveyor bearings, it has been necessary to research the acoustic properties of conveyor belt steelwork and methods of attaching the fibre to the frames in order to ensure good acoustic coupling. Currently two extended field trials are being undertaken, at Moranbah North coal mine in Queensland and Argyle diamond mine in Western Australia. The outputs from the fibre interrogator units and the signal processing computers automatically generate condition reports, relying on pattern matching techniques and rules-based decision-making software. The technology is now at a pre-commercial stage sufficient for first adoption by mining companies. The next phase of the project is to try to improve the spectral signature pattern matching and the rule-based decision making by employing modern machine learning techniques to:

1. Speed up and improve the accuracy of the automated report generation

2. See if there are other patterns in the data that are not yet recognised

The assistance of QUT robotics group and their pattern-matching skills would be appreciated.
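The "extract meaningful spectra" step can be illustrated with a standard averaged periodogram in Python. The sample rate, fault frequency and segment length below are invented for the example; real interrogator data would replace the simulated channel.

```python
import numpy as np

fs = 2000.0                        # assumed sample rate [Hz]
t = np.arange(0, 2.0, 1 / fs)

# Simulated channel: broadband noise plus a 120 Hz bearing-fault tone
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, t.size) + 3.0 * np.sin(2 * np.pi * 120 * t)

# Welch-style averaged periodogram: window each segment, average spectra
seg = 1024
window = np.hanning(seg)
chunks = [signal[i:i + seg] for i in range(0, signal.size - seg + 1, seg)]
psd = np.mean([np.abs(np.fft.rfft(c * window)) ** 2 for c in chunks], axis=0)
freqs = np.fft.rfftfreq(seg, 1 / fs)

peak_hz = freqs[np.argmax(psd[1:]) + 1]   # skip the DC bin
```

A condition-monitoring rule would then compare peaks like `peak_hz` against the expected fault frequencies implied by the bearing geometry and belt speed.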
GP-S11 Cantina Lounge
13-Mar-18: Weizhao Zhao

Title: An Optical Tracking System for Cyberknife Radiosurgery on Ocular Tumor

Abstract: Image-guided radiosurgery has been popularly used in cancer treatment. Tracking tumor movement during the treatment is crucially important for radiation therapy. A treatment option for ocular tumor has been investigated using the Cyberknife system, due to its advantage of real-time image guidance during therapy. However, unpredictable eyeball movement imposes challenges to the state-of-the-art technology. This presentation describes a 2D/3D transformation solution to predict the tumor's 3D positions in real time. We designed a mechanical phantom to validate the developed method. In both the calibration procedure and the validation procedure, the error between the predicted position and actual position for the gravity center of the tumor in the eyeball was within submillimeter level. Based on the developed method, a surrogate to the CyberKnife system is under construction. This invention has been awarded a United States Patent.

GP-S11 Cantina Lounge
06-Mar-18: Steven Bulmer

Presentation by some VRES students who developed an FPGA to do vision processing.

GP-S11 Cantina Lounge
27-Feb-18: Gavin Suddrey

Title: Almost Fury Road - The Story of an Autonomous Laboratory Tour ft. Pepper the Robot


This talk will focus on the problems and solutions inherent in getting a Pepper robot to give an autonomous tour of S11. This will cover various areas including motion control, sensing and autonomous navigation. In addition to discussing Pepper, I will also talk about how we integrated Pepper with other robots/technology in the lab to create a more interactive experience. This talk will be largely informal, and with any luck I will have some interesting videos to go along with it.

GP-S11 Cantina Lounge

Stuart McCarthy

Daniel Mcleod

Invited speakers from Manabotix, a local robotics / automation company.

GP-S11 Cantina Lounge

Arnab Ghosh

Lu Gan

Steven Parkison

Arash Ushani

Axel Gandia

Generative Models for Computer Vision and Video Generation

Toward a Probabilistic Sound Semantic Perception Framework for Robotic Systems

Improving Point Cloud Registration

Understanding a Dynamic World

Character navigation based on optical flow

GP-S11 Cantina Lounge
08-Feb-18: Oliver Sawodny

Title: The Bionic Handling Assistant - Modeling and control of continuum manipulators

Presenter: Prof. Dr.-Ing. Dr. h.c. Oliver Sawodny, Institute for System Dynamics (ISYS), University of Stuttgart

Abstract: The Bionic Handling Assistant is a novel continuum manipulator with nine pressure-driven actuators called bellows that is manufactured using the rapid prototyping method Selective Laser Sintering. Unlike common rigid link manipulators, continuum manipulators provide a flexible actuation system by bending and extending their actuators. Using a pneumatic actuation system, the Bionic Handling Assistant is inherently safe and therefore well suited for tasks that require human contact. However, the pneumatic system and the coupled mechanics require highly developed control concepts, especially due to the redundancy between the tool center point and its actuators. Therefore, model-based control concepts and path-planning algorithms have to be developed, especially as most concepts for rigid link manipulators cannot simply be applied to this new class of manipulators.

GP-S11 Cantina Lounge
30-Jan-18: David Lane

From Research to Revenues in the Edinburgh Centre for Robotics

The Edinburgh Centre for Robotics is a joint venture between Heriot-Watt and Edinburgh Universities, with £100M investment from UK research councils, innovation agencies and international industry. 30 academics, 100 PhD students, 30 postdocs and 30+ industrial partners pursue research and cross-sector innovation impact around the central themes of robot interaction - with the environment, with each other, with people and with themselves (for certification). The talk will describe some of the advances underway in the Centre and the cross-sector applications and commercialisation successes, for example in marine (SeeByte, Hydrason), assisted living (Consequential Robotics), the newly created £36M ORCA Hub - Offshore Robotics for Certification of Assets, and podium success in the recent Amazon Alexa Challenge.

GP-S11 Cantina Lounge
23-Jan-18: Thierry Peynot

Title: Impressions on CES 2018 (Consumer Electronics Show, Las Vegas)


Early January 2018 I had the opportunity to attend the famous CES show in Las Vegas. CES is a huge annual event where most of the big players in electronics show off their latest novelties. This year Self-driving Cars, Robotics, Drones, AI and VR obviously had an important presence, and many other technologies that are relevant to us were on display. In this short seminar I propose to share my impressions on the event, including what I found promising, impressive, disappointing etc.

GP-S11 Cantina Lounge

Peter Corke

Feras Dayoub

Title: How the ICRA Selection Process Works

GP-S11 Cantina Lounge
09-Jan-18: Jasmin James

Title: Quickest Detection of Intermittent Signals With Application to Vision Based Aircraft Detection


In this paper we consider the problem of quickly detecting changes in an intermittent signal that can (repeatedly) switch between a normal and an anomalous state. We pose this intermittent signal detection problem as an optimal stopping problem and establish a quickest intermittent signal detection (ISD) rule with a threshold structure. We develop bounds to characterise the performance of our ISD rule and establish a new filter for estimating its detection delays. Finally, we examine the performance of our ISD rule in both a simulation study and an important vision-based aircraft detection application, where the ISD rule demonstrates improvements in detection range and false alarm rates relative to current state-of-the-art aircraft detection techniques.
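For readers unfamiliar with quickest detection, the threshold structure can be illustrated with the classical CUSUM rule. This is a sketch of the general idea only, not the ISD rule or filter from the paper; the Gaussian observation model and all parameters are assumptions.

```python
def cusum_detector(samples, mu0, mu1, sigma, threshold):
    """Alarm at the first time the cumulative log-likelihood-ratio
    statistic (clamped at zero) crosses the threshold, for a change
    in the mean of Gaussian observations from mu0 to mu1."""
    s = 0.0
    for k, x in enumerate(samples):
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)       # clamping keeps false alarms rare
        if s >= threshold:
            return k                # stopping (alarm) time
    return None                     # no alarm raised
```

Raising the threshold trades longer detection delay for a lower false alarm rate, which is exactly the trade-off evaluated in the aircraft detection experiments.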
GP-S11 Cantina Lounge






External guest speaker: Chris Jeffery

Title: Start-up Adventures



GP-S11 Cantina Lounge

External guest speaker: Fredrik Kahl


From Projective Geometry to City-Scale Reconstructions in Computer Vision


Research in geometrical computer vision has undergone a remarkable journey over the last two decades. Not long ago, the field was dominated by mathematicians interested in projective geometry, but today, the area has matured and practical systems for performing large-scale 3D reconstructions are commonplace. In this talk, I will first review some of the progress achieved so far and then give examples of present state of the art, especially on robust methods for city-scale reconstruction and localization. In the end, future challenges will be discussed.

Bio Slides

GP-S11 Cantina Lounge
26-Jan-16: Australia Day. Holiday, no seminar.

Luis Mejias

Update on IEEE Control Systems / Robotics and Automation Societies QLD joint-chapters ( 10 mins).


GP-S11 Cantina Lounge
09-Feb-16: Leo Wu

Title: From industrial robots to medical robots: An individual perspective

Abstract: In this presentation, I will talk about some projects I participated in at Tsinghua University and National University of Singapore. In particular, I will discuss kinematic calibration of industrial robots and introduce a flexible medical robot named concentric tube robot. Finally I will make some rough comparisons between industrial robots and medical robots based on my experience.


GP-S11 Cantina Lounge
16-Feb-16: Jason Ford

Title: Automation for large scale infrastructure inspection: Why and How

Abstract: This talk will describe our journey in developing the Flight Assist System (FAS) for automation of ROAMES infrastructure inspection aircraft. (ROAMES won a 2015 International Edison Award, and is having an international impact on the infrastructure inspection industry). I will also share some personal reflections on industry collaboration.

GP-S11 Cantina Lounge
23-Feb-16: Victor Vicario

Title: Growing a Startup at QUT - The VBK Motors experience 


Abstract: Creating a startup while being at university is tough, especially since funds are limited. That said, a number of opportunities are made available by the university itself which can make a great business a reality. VBK Motors has been lucky enough to take advantage of these opportunities. In the past 6 months VBK Motors has been selected both as a finalist in the QUT BlueBox Innovation Challenge and as a participant in the QUT BlueBox Hardware Accelerator Program. In this talk, the co-founder and CEO of VBK Motors will discuss his experience since the start of his startup journey, identifying current opportunities available from QUT to promote innovative startups as well as what lies ahead for his young company.

Bio Photo

GP-S11 Cantina Lounge
01-Mar-16: Cancelled
08-Mar-16: External guest speaker: Will Browne

Title: Cognitive Systems: Robotic Vision and Learning

(Note: The purpose of the talk is to encourage discussion over the next few days of my visit to QUT, so overviews of the topic will be presented.)

Abstract: Artificial Cognitive Systems encompasses robots that learn and adapt through exploring their environment. This talk will highlight research into Artificial Cognitive Systems that enables robots to improve autonomous operation. Perception, including robotic vision, is essential in obtaining the state of the world. Advances in salient object detection and pattern recognition will be presented. Representing, reasoning and learning about appropriate actions for given tasks, such as active SLAM, will also be outlined. Advances in Affective Computing will be shown for robotic navigation. Finally, methods for artificial systems to scale and reuse information will be outlined.

Bio Slides Photo

GP-S11 Cantina Lounge
15-Mar-16: Donald Dansereau

Title: Computational Imaging: What has it ever done for me?

Abstract: I will briefly introduce the field of computational imaging and discuss recent developments in industry, academia, and within the ACRV.

 Slides Photo

GP-S11 Cantina Lounge
22-Mar-16: External guest speaker: Thibault Schwartz (architect, co-founder of HAL Robotics)

Title: Simplifying machine control for architectural applications.

Abstract: The democratization of CAD technologies, perceivable in architecture schools as well as in the construction industry, has over the last decade progressively led to the creation of consortia combining architectural academics and their professional counterparts, seeking to extend their morphological research, undertaken at a virtual level, towards a systematic practice of manufacturing geometrical abstractions. As a result, and taking advantage of lower-cost CNC machines, university workshops are becoming genuine micro-factories, although various parameters inhibit the scaling of such experimentation beyond pavilions. We highlight software issues and propose solutions to help architectural robotics move beyond its current limitations and reach the robustness required for use on construction sites.

GP-S12 Owen J Wordsworth Room
29-Mar-16: Cancelled

External guest speaker: Anne Walsh, and Leanne Kennedy

Title: QUT Trade Controls


This presentation will outline trade controls, the Defence Trade Control Act 2012 (the Act) and the impact to the university research sector. The Act was implemented by the Federal Government in support of Australia's international obligations to meet strengthened export controls and to prevent sensitive technology that can be used in conventional and weapons of mass destruction programs from getting into the wrong hands.
In March of 2015, the amended Act (Defence Trade Control Act Amendment Bill 2015 (the Bill)) was passed through parliament and comes into effect on 2 April 2016. Through the Act, intangible supply, brokering and publication of controlled goods and technology listed within the Defence and Strategic Goods List (DSGL) will be regulated; this applies equally to industry, education/university institutions and the research sectors.
The presentation will include information on what impact the Act may have for research and what processes QUT has put in place to assist researchers in meeting their obligations under the Act.

Bio Slides Photo Video

GP-S11 Cantina Lounge
12-Apr-16: External guest speaker: Chunhua Shen

Title: Dense prediction on images using very deep convolutional networks


In this talk I will present an overview of my recent results on deep learning.
First I will introduce two deep structured learning methods. Structured output learning concerns the problem of predicting multiple variables that have dependencies, with conditional random fields (CRFs) as a typical example. The first application is to learn depth from single monocular images using a deep structured learning scheme, where the unary and pairwise potentials of continuous CRFs are learned in a unified deep CNN framework. For the second application, a new, efficient deep structured model learning scheme is proposed for semantic segmentation. We achieve the best reported results on seven public benchmark datasets.
Inspired by the deep residual network, originally designed for classification, we design very deep fully convolutional networks (FCNN) which significantly improve performance on dense pixel-level prediction for both high-level and low-level problems, including semantic segmentation, depth estimation, denoising, super-resolution.

Bio Slides



Subject to cancellation due to the centre's Robot Vision Summer School (RVSS).

David Ball

Title: Interviewing experiences


Ben asked me to talk about my experiences with interviewing for start-up robotics companies in order to help those who might consider this path. I found the process to be quite different from interviewing for academic and standard engineering positions … and was initially caught by surprise. I’ll describe the process and the range of questions that were asked. Also, I’ll talk a little bit about Modern C++ and its advantages. Then I’ll give some sources which I found useful to prepare for these interviews. Lastly, if there is time, I’ll demonstrate a new machine learning toolbox and methodology which I found while taking a Coursera unit on machine learning.

Photo Slides


ACRV meeting

ACRV meeting



ICRA2016 practice talks

(3-min spotlight pitch and 2-3 mins of feedback for each speaker.)

1. William Chamberlain
2. Benjamin Talbot
3. Chris McCool
4. Chris Lehnert
5. Donald Dansereau
6. Michael Milford (on behalf of James Mount) 
7. Ben Upcroft (on behalf of Niko Suenderhauf)

17-May-16: David Hall

Intern experience at Bosch
24-May-16: Stryker (visitor)

Stryker will talk about their work in vision-based navigation for medical robotics. They will also outline what the company Stryker does.
31-May-16: Michael Milford

Open slot


Title: Callings - Finding and Living an Authentic Work / Life

Abstract: Ray Russell relates his 35 year quest to find the perfect work-life balance. Take a break from your SLAM, your occupancy grids, your quadratic equations and your grant writing. Part cracker-jack philosophy, part transcendental exploration - for a few minutes, let's examine together the motivations behind why we are all Here in the first place.

Photo1, Photo2, Slides


Ahmed Abbas

Title: Enabling Robots To Assist People During Assembly Tasks By Linking Visual Information With Symbolic Knowledge Representation

Abstract: Future robots should have the ability to perform daily tasks in various conditions. One of the future applications of robotics is to assist workers in assembly tasks. The aim of our project is to create a robot that can assist workers in an assembly task. The goal of the project and the current state of our work will be presented in order to receive feedback that could help to improve the future work.



Open slot 
28-Jun-16: Amos Albert

Title: TBA

About the speaker: Amos is the CEO of Deepfield Robotics - a subsidiary of Bosch Germany





Title:  Not your grandmother’s MATLAB

This is unashamedly a talk about MATLAB and the gory details thereof.  Those of us who use MATLAB tend not to keep up with new functionality as it’s added - they have 3500 people working on enhancing the product: new core functionality and new toolboxes.  In this talk I’ll demonstrate (live!) some of the newer features that might be relevant to folk who work in our field: strings (yay!), categorical arrays, tables, tall arrays, graphics, apps, connectors, compilers and coders.




Title: Robotics: Science and Systems Conference Debrief

Abstract: I will give an update on the conference overall and the Deep Learning Workshop organised by Juxi and Niko. I’ll highlight a couple of talks/papers that I found very interesting. The first presentation was by Raia Hadsell from Deep Mind on Progressive Networks which enabled rapid transfer from simulation to real robots. The second presentation was by Dieter Fox from U. of Washington on how deep learnt features dramatically improved hand and gesture recognition.



Title: Learning Tasks from Natural Language Dialogue

Abstract: Providing robots with the ability to learn everyday tasks, such as cleaning, directly from users within the environment will allow them to be adapted to a wide variety of real-world problems, including aged and disability care. Previous research in task learning has focused on two key approaches: learning from demonstration, in which the agent observes the user performing the task; and learning from natural language, in which the agent learns from a spoken/written description of the task. While both approaches are complementary in nature, for the purpose of this talk we will focus on the latter.

We will discuss the results of our recently published work, in which we demonstrated a task learning/planning approach that enabled a robot to both learn generalizable tasks from natural language inputs and exploit domain knowledge during planning. In addition, we will provide an overview of the current direction of our work, which includes learning generalizable tasks from situation specific explanations, as well as recognising repeatable patterns for repetitive tasks.




Title: How to place 6th in the Amazon Picking Challenge

Abstract: In early March, team ACRV was selected as one of 16 teams to participate in this year's Amazon Picking Challenge. This talk will summarise what followed. In particular, I will highlight some of the key lessons we learned as well as the tools and processes that worked and didn't work for us. I'll also mention my ideas on how team ACRV might win next year's APC.


Donald Dansereau

Title: Light Fields: Has it been 20 years already?!

Abstract: On the 20th anniversary of the seminal paper by Levoy and Hanrahan, I'll review recent developments in this still-growing field. I'll also discuss my upcoming move to the Stanford Computational Imaging Lab, and some of the work going on there. Finally I'll cover some of the present and ongoing work in light field imaging here at QUT.


09-Aug-16: Duncan Campbell

Title: Where are UAVs at and how can we get them connected with the Internet of Things (or Industry V4)?

Abstract: UAVs, or flying robots to some, are becoming ever closer to the ubiquitous technology often touted. I will present a snapshot of where we are at in terms of widespread adoption of UAVs in our airspace to do really useful and economically beneficial things, and what big challenges remain. UAVs can form a critical sensing front-end and actor in the context of the Internet of Things (IoT), also known as Industry V4 in the industrial context. Industry is well progressed down the path of large scale systems integration and open data communication protocols, which has much to offer in terms of integrating multiple UAV systems, and possibly that of land and sea robotic platforms. The second part of the presentation will present a framework and early work on how heterogeneous robotic platforms may benefit from the industrial automation world to provide seamless data communication between intelligent sensing platforms in the field, and the realms of big data, cloud computing and decisioning. I will encourage discussion on this aspect as there are some great things that we can trial across the discipline and have more of our platforms interconnected and connected.




Title: Robotics Deployment of Machine Learning

Abstract: For a robotic application, training a machine learning model is generally not the end of the project. Even if the purpose of the model is to obtain knowledge about certain aspects of a dataset, the knowledge gained, to be useful, needs to generalise to new data that the robot will feed to the model during its deployment. However, most of these models fail to demonstrate the same level of performance, shown on their test set, when deployed on a robot. In this talk, I'll highlight some of the lessons I learned while deploying supervised machine learning on mobile robots. If you are new to machine learning and would like to use it on your robot, or if you are an expert in the topic and would like to hear about the deployment stage, or if you are remotely interested in the subject, this talk will give you a wide overview and I hope it will stimulate discussion beyond the presented ideas.


Adjunct Associate Prof Oran Rigby

Title: Medical rescue, training and future use of robotics

Abstract: A review of how robotics currently and in the future may influence the delivery of critical care in the prehospital and medical environment focusing on patient rescue, remote diagnostics, and the opportunities for remote therapeutics in critical clinical decision pathways.


Matthew Dunbabin




Title: STEM Connectors – where STEM experts and schools connect.

Abstract: Dr Julia Davies will present an overview of this new STEM engagement program, where teachers invite experts into their classroom (via Skype or other means of video-telephony) to show students the relevance and application of STEM. For any researchers who subsequently may be interested in getting involved, Julia will then lead you through the registration process to create a profile page.





Title: Robots, neurons, and the fabric of reality

Abstract: I'll talk about lessons learned from robotics work in Vancouver, current and future directions toward better robots, and some reasons to think those directions might work. I'll introduce my current research in computational neuroscience and its value to robotics, and zoom out to the big picture to address the long-term ways in which I think robotics can impact neuroscience, philosophy, and the universe.



No seminar - RoboVis 2016 



Title: AgBotII Software Development and Latest Results

Abstract: The AgBotII was built at QUT as part of the Strategic Investment in Farming Robotics which is now nearing the successful completion of all milestones. These milestones included the development of the platform, weed management in the field, autonomous replenishment and fertilising in the field. In this seminar I will firstly talk a little bit about the software inside the AgBotII that made achieving these milestones possible, including the scale and size of the software development task. Secondly, I will present a summary of the most recent round of completed milestones, including the autonomous docking, refilling, recharging and broadcast fertilising. Finally, I will talk about using the Gazebo simulation which appears to be underutilised in the lab.

11-Oct-16 Thierry


Title: Mining Robotics at QUT and MINExpo 2016

Abstract: In the first part of this talk I will give a quick overview of the current status of activities in robotics and automation for the mining industry at QUT, including our membership in CRCMining/Mining3, recently confirmed projects such as the Advance Queensland Innovation Partnership “Automation-Enabling Positioning for Underground Mining”. Some of the many opportunities for the future will also be mentioned.

 In the second part, I will discuss impressions of MINExpo 2016, the largest mining exhibition in the world, which was just held in Las Vegas. This will include a preview of a 400+ ton autonomous truck. 


Peter Corke

Report on IROS 2016 



Title : Alone in the dark : robotic vision in low light environments 

Abstract: In most cases it is expected that robots can operate 24 hours a day, which means 50% of their operational time is at night, where lighting may be insufficient. This talk will look at how cameras operate in these low-light environments, sources of noise, the effects of demosaicing, and why it is important for us to consider these things when testing robotic vision algorithms. Next I will briefly cover some other camera technologies that could be helpful in low-light conditions, and finish with an overview of what I hope to accomplish throughout my PhD.



Title: Vision for Agricultural Robotics

Abstract: In this presentation I'll give an overview of the vision systems developed during the SIFR project for AgBot II (weed classification) and Harvey (crop segmentation and detection). 

The vision systems are used for a range of detection, classification and segmentation tasks, making use of traditional vision features as well as incorporating recent advances in convolutional neural networks.




Title: Commonwealth Bank robotics initiative 


Abstract: Commonwealth Bank (CBA) has purchased a REEM humanoid robot in order to explore potential use cases for robotics within the financial services industry. The Bank and Stockland Retail Services Pty Limited (Stockland) have entered into a project agreement to run a range of robotics experiments.

In this context, the Australian Technology Network of Universities (ATN) directorate has worked with CBA to devise an initiative offering opportunities to teams of students to engage in social robotics and robotics coding research during Semester 2 of 2016.

Last month a group of three QUT undergraduate students enrolled in  BEB801 went to demo their work at CBA Innovation Lab in Sydney.

In this talk, I will discuss the demonstrations presented in Sydney. In particular, I will explain how the QUT team successfully managed to have the REEM robot play a game of "Simon Says". The key module of this system is a deep neural network that takes as input a single image and predicts the pose of the skeleton of the person closest to the centre of the image.
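
As a toy illustration of the final step of such a pose network (hypothetical, not the QUT team's code): predicted per-joint heatmaps are commonly decoded into skeleton keypoints by taking each map's argmax.

```python
import numpy as np

def decode_keypoints(heatmaps):
    """Decode (J, H, W) per-joint heatmaps into (J, 2) pixel coordinates
    by taking the argmax of each joint's map."""
    J, H, W = heatmaps.shape
    flat = heatmaps.reshape(J, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat, (H, W))
    return np.stack([xs, ys], axis=1)

# toy example: one joint with a peak at (x=3, y=2)
hm = np.zeros((1, 5, 6))
hm[0, 2, 3] = 1.0
print(decode_keypoints(hm))  # [[3 2]]
```

Real systems refine the argmax (e.g. sub-pixel interpolation), but the decoding idea is the same.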


Title: Uni and start-up adventures in Austria, Switzerland, Germany, Italy, China and Singapore: A Pictorial Journey

Abstract: I'll cover the more interesting aspects of several international trips this year to a number of universities, start-ups and companies in Europe and Asia.





Title: “Deep Adventures in Germany, Portugal, England and France”

Abstract: Reporting on some of the activities in European labs around robotic vision, adaptive systems, and deep learning. Labs I visited include CITEC (Cognitive Interactive Technology Cluster of Excellence) at the University of Bielefeld, VisLab at the Instituto Superior Técnico Lisbon, Oxbotica, DeepMind, Lagadic at Inria Rennes, and SoftBank Robotics Europe.





Title: How to Build a World #1 Robotics Company.

 Abstract: Ben Sand will share insights from his time in Silicon Valley and from coaching people to build highly technical companies. Ben is a co-founder of Meta which raised AU$100M over 3.5 years. Meta builds augmented reality hardware with a strong computer vision component. Key hires included Prof. Steve Feiner (Columbia), Prof. Steve Mann (Toronto), Jayse Hansen (creator of graphics from Iron Man, Avengers), and Alan Beltran (head of hardware for Google's Project Tango).

Ben has experience creating high-quality partnerships with universities where both tech companies and universities gain, and will give an overview of some of the models he has used previously.


Obadiah Lam


Ben Talbot

Title: Human Cues for Robot Navigation

Abstract: This talk covers the outcomes of the Discovery Project "Human Cues for Robot Navigation". We set out to investigate how robots could use navigation cues in environments designed for humans. We will cover some different types of spatial symbolic information, and how a robot can use this information as cues for navigation. Along the way, the robot must deal with the fluidity and ambiguity naturally inherent in these cues. The talk will then move to robot vision approaches for locating such cues in the world. We focused on textual cues, including signs and room labels, which reduces to a wild text spotting problem with an unseen lexicon. Finally, occlusions and specular highlights can prevent the robot from reading textual cues in the world, and we present a method for repositioning the robot to reduce the impact of specular highlights.


Tim McLennan &

Tim MacTaggart

QUT Bluebox

Title: Commercialisation models

Interactive discussion: an outline of setting up a company called Q-botics; what other models of commercialisation exist, and what we would need to think about if they were to be exercised; and what technologies or models you are thinking about. The University is also doing a lot in building entrepreneurial spaces (startups etc.).




(Uni of Adelaide)

Title: Amazon Picking Challenge 2016: Team NimbRo of University of Bonn

 Abstract: Automation in warehouses is becoming increasingly important in order to relieve humans from mundane and heavy tasks. This talk will present Team NimbRo's successful solution for this year's Amazon Picking Challenge. We will first give a broad overview of the entire system and then focus on two challenging aspects. First, motion generation using a highly flexible IK-based keyframe interpolation framework featuring null space cost optimization. Second, our approach to object perception, which includes online learning from deep features, semantic segmentation on GPUs using pre-trained models, as well as 6D object pose estimation for better grasp point selection. Finally, we will point out the most difficult items for our setup and our approaches to handle them.
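
As a hedged sketch of the null-space idea mentioned above (illustrative, not Team NimbRo's implementation): a secondary objective can be projected into the null space of the task Jacobian so that it does not disturb the primary end-effector motion.

```python
import numpy as np

def ik_step(J, dx, z, damping=1e-6):
    """One resolved-rate IK step with a secondary objective.
    Primary task: joint velocities achieving end-effector velocity dx.
    Secondary: gradient z (e.g. of a joint-limit or keyframe cost),
    projected into the null space of J so the primary task is unaffected."""
    J_pinv = J.T @ np.linalg.inv(J @ J.T + damping * np.eye(J.shape[0]))
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ dx + N @ z

# 2-DoF task, 3-DoF arm: the secondary motion must not disturb the task
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 1.0]])
dq = ik_step(J, dx=np.array([0.1, 0.0]), z=np.array([0.0, 0.0, 0.2]))
print(np.allclose(J @ dq, [0.1, 0.0], atol=1e-4))  # True: task preserved
```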



Mejias Alvarez

Title: Experiences in Flight Testing on Manned/Unmanned Aircraft

Abstract: In this presentation, I will provide an overview of the approach followed to test our research in a manned aircraft (Cessna 172) and a fixed–wing UAV. The software architecture developed allowed us to transparently execute on both aircraft the same core algorithms with minimal changes. Reusability, modularity and transparency were the criteria when developing this architecture allowing for seamless switching between simulation and real flight testing.




Title: A gentle introduction to generative models and Bayesian deep learning  

Abstract: In this talk I will give a gentle introduction to two of the most regarded research topics at the recent NIPS (Neural Information Processing Systems) conference: generative models and Bayesian deep learning. Both techniques are not yet widely adopted in our community, but have the potential to overcome many of the deficiencies of current deep learning approaches for robotic applications where real-world robustness is paramount. Typical deep neural networks, such as used by many in our group, are trained as discriminative classifiers. Generative models are more powerful in the sense that they go beyond mere classification and attempt to learn the true distribution of the data instead. This is highly beneficial for robustness in situations where new unknown classes are regularly encountered, or when training has to be weakly supervised due to the high costs of obtaining labeled data. I will cover two recent techniques for generative models: generative adversarial networks and variational autoencoders. Another shortcoming of typical deep neural networks is that they are not able to properly represent their uncertainty in a classification. Instead, they merely exhibit uncalibrated confidence scores. While this meets the requirements for in-dataset classification (such as the ImageNet or COCO challenges), robotic systems that have to make decisions and act in the physical world based on a neural network's output, need trustworthy uncertainty information. Bayesian deep learning provides the techniques to achieve this. 
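
One concrete Bayesian deep learning technique in the spirit of this abstract is Monte Carlo dropout, which keeps dropout active at test time and treats the spread of repeated stochastic forward passes as a measure of model uncertainty. A minimal numpy sketch on a made-up toy regressor (illustrative only, not a trained network):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "pre-trained" 1-hidden-layer regressor (weights are made up)
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)) / 4.0, np.zeros(1)

def mc_dropout_predict(x, T=200, p_drop=0.5):
    """Keep dropout active at test time and average T stochastic
    forward passes; the spread approximates model uncertainty."""
    preds = []
    for _ in range(T):
        h = np.maximum(0.0, W1 @ x + b1)        # ReLU layer
        mask = rng.random(h.shape) > p_drop     # dropout at test time
        h = h * mask / (1.0 - p_drop)
        preds.append(W2 @ h + b2)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

mean, std = mc_dropout_predict(np.array([0.3]))
print(mean.shape, std.shape)  # (1,) (1,)
```

The standard deviation gives the kind of calibrated-ish uncertainty signal that a plain softmax confidence score does not.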


Donald G. Dansereau

Title: Computational Imaging for Robotic Vision

Abstract: This talk argues for combining the fields of robotic vision and computational imaging. Both consider the joint design of hardware and algorithms, but with dramatically different approaches and results. Roboticists seldom design their own cameras, and computational imaging seldom considers performance in terms of autonomous decision-making.

The union of these fields considers whole-system design from optics to decisions. This yields impactful sensors offering greater autonomy and robustness, especially in challenging imaging conditions. Motivating examples are drawn from autonomous ground and underwater robotics, and the talk concludes with recent advances in the design and evaluation of novel cameras for robotics applications.

10-Feb-17 Mohammed Deghat

Title: Distributed Multi-Robot Formation Control

Abstract: Multi-agent systems are progressively being used in a broad range of modern applications such as multi-robot or multi-vehicle coordination and control, air traffic management systems, control of sensor networks, unmanned vehicles, energy systems and logistics. This presentation will review a number of concepts and results on multi-agent system control and will consider the types of communication, control and sensing architecture that allow preservation of the formation shape. It is assumed that the amount of sensing, communication and control computation by any one agent is limited. For example, each agent is only able to communicate over a limited range, and can only measure or receive its neighbours' state information.
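
A minimal sketch of one classical distance-based formation control law consistent with these assumptions (each agent uses only relative positions of its neighbours; this is illustrative, not the speaker's specific results):

```python
import numpy as np

def formation_step(x, edges, d, gain=0.1):
    """One step of the gradient control law for distance-based formation
    control: each agent moves along its edges to reduce squared-distance
    error, using only relative positions of its neighbours."""
    u = np.zeros_like(x)
    for (i, j), dij in zip(edges, d):
        rel = x[i] - x[j]
        err = rel @ rel - dij**2
        u[i] -= gain * err * rel
        u[j] += gain * err * rel
    return x + u

# triangle formation with unit side lengths
edges = [(0, 1), (1, 2), (0, 2)]
d = [1.0, 1.0, 1.0]
x = np.array([[0.0, 0.0], [1.3, 0.1], [0.4, 1.2]])
for _ in range(200):
    x = formation_step(x, edges, d)
errors = [abs(np.linalg.norm(x[i] - x[j]) - dij)
          for (i, j), dij in zip(edges, d)]
print(max(errors) < 1e-3)  # True: formation shape recovered
```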




Title: Robotic Manipulation in Real World Environments

Abstract: In this presentation, I will describe the development of robotic systems for manipulation in real world environments such as agriculture and warehouse automation. I will briefly outline the methods developed for manipulation in agriculture and how they can also be deployed to solve manipulation problems for warehouse automation.

 The presentation will focus on Harvey, a robot which autonomously harvests capsicum in a greenhouse. The horticulture industry remains heavily reliant on manual labour, and as such is highly affected by labour costs. In Australia, harvesting labour costs in 2013-14 accounted for 20% to 30% of total production costs. These costs along with other pressures such as scarcity of skilled labour and volatility in production due to uncertain weather events are putting profit margins for farm enterprises under tremendous pressure. Robotic harvesting offers an attractive potential solution to reducing labour costs while enabling more regular and selective harvesting, optimising crop quality, scheduling and therefore profit. Autonomous harvesting is a particularly challenging task that requires integrating multiple subsystems such as crop detection, motion planning, and dexterous manipulation. Further perception challenges also present themselves, such as changing lighting conditions, variability in crop and occlusions.

 We have demonstrated an effective vision-based algorithm for crop detection, two different grasp selection methods to handle natural variation in the crop, and a custom end-effector design for harvesting. Experimental results in a real greenhouse demonstrate successful grasping, detachment and overall harvesting rates. We believe these results represent an improvement on the previous state-of-the-art and show encouraging progress towards a commercially viable autonomous capsicum harvester.




Title: Angle Sensitive Imaging: A New Paradigm for Light Field Imaging

 Abstract: Imaging is a process of mapping information from higher dimensions of a light field into lower dimensions. Conventional cameras do this mapping into two dimensions of the image sensor array. These sensors lose directional information contained in the light rays passing through the camera aperture as each sensor element integrates all the light rays arriving at its surface. Directional information is lost and only intensity information is retained. 

 This talk takes you through a host of ideas to decouple this link and enable image sensors to capture both intensity and direction without sacrificing much of the spatial resolution as the existing techniques do. Some of the ideas that we explore in this talk are differential quadrature pixels, polarization pixels, multi-finger pixels and combinations of these to effectively capture the angular information of light by consuming only a very small imager area. These advances are facilitated by the miniaturization of the CMOS fabrication processes and enable low cost, robust computational cameras.

The presented work builds heavily on the theoretical premise laid down by prior work on multi-aperture imaging. Practical aspects are modelled on the diffraction-based Talbot effect. The presented solutions fall into the general category of sub-wavelength apertures and are a one-dimensional case of the same. These solutions enable a rich set of applications, among which are fast-response auto-focus camera systems and single-shot passive 3D imaging.


Aaron McFadyen

Title: Visual Servoing - Alternate Approaches and Applications

Abstract: Humans use vision as feedback to help control their actions all the time, particularly when operating vehicles such as cars, heavy machinery and aircraft. If we want to remove the human operator such that these vehicles or agents become autonomous, then replicating some of the control tasks may require the use of vision-based control or visual servoing.

This seminar explores how visual servoing can be used to control such mobile agents or robots. First, I will provide a brief introduction to visual servoing (including position and image-based control frameworks), as well as a step by step guide on how to derive a classical image-based visual controller. Second, I will introduce some new (non-classical) image-based visual servoing approaches that leverage alternative control frameworks to provide additional benefits such as guaranteed stability, constraint satisfaction and the removal of feature tracking requirements. The goal of this seminar is to highlight the design considerations and potential benefits and drawbacks when contemplating the use of visual servoing for autonomous robot control. For those new to visual servoing, this should provide suitable background information to further explore the subject matter. For those already familiar with visual servoing, the material should help you to decide what approaches may be suitable for your application.

Throughout the seminar, various concepts will be highlighted with the aid of example applications for unmanned aircraft (drone) control including some core functionality (collision avoidance) and application specific tasks (control of a suspended load).
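
For those new to the topic, the classical image-based derivation mentioned above ends in a control law of the form v = -λ L⁺ e. A toy numpy sketch for point features with assumed depths (illustrative only, not the seminar's code):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix (image Jacobian) of a normalised image point
    (x, y) at depth Z, relating camera velocity to feature velocity."""
    return np.array([
        [-1/Z,    0, x/Z,      x*y, -(1 + x**2),  y],
        [   0, -1/Z, y/Z, 1 + y**2,       -x*y,  -x],
    ])

def ibvs_velocity(features, goals, depths, lam=0.5):
    """Classical IBVS law v = -lambda * L^+ * e for stacked point features."""
    e = (features - goals).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e

# four points with a small uniform offset from the goal configuration
goals = np.array([[0.1, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
feats = goals + 0.05
v = ibvs_velocity(feats, goals, depths=[1.0] * 4)
print(v.shape)  # (6,)
```

Applying the commanded velocity drives the feature error exponentially towards zero, which is the guaranteed local behaviour of the classical scheme.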




Title: Multi-target Tracking: Challenges and Solutions 

Abstract: Despite significant progress, the problem of tracking multiple targets in crowded real-world scenarios is still far from solved. The task is highly relevant for a wide range of applications in robotics and computer vision, including autonomous vehicles, surveillance, video analysis and life sciences. In this seminar, I will present the remaining challenges to be addressed and some of the recently proposed solutions. In particular, I will comment on the differences between online and batch approaches and emphasise the importance of a centralised benchmark to advance the state of the art.


Guilherme Maeda

Title: Semi-Autonomy in Human-Robot Collaboration

 Abstract: Semi-autonomous robots are robots whose actions are, in part, functions of human decisions. Semi-autonomy allows robots to interact with a human partner in a collaborative manner. Potential applications can vary from the assembly of products in factories, to the aid of the elderly at home, to the shared control in teleoperated processes. However, the sense-plan-act paradigm established by industrial robotics does not account for the interaction with humans, and methods to program collaborative robots are still unclear. In this talk, I will introduce interaction primitives, a data-driven approach based on the use of imitation learning, for learning movement primitives for human-robot interaction. The core idea is to learn a parametric representation of joint trajectories of a robot and a human from multiple demonstrations. Using a probabilistic treatment, the method uses the correlation between the learned parameters such that the robot task and trajectory can be inferred from human observations. As a proof-of-concept, experiments with a 7-DoF lightweight arm collaborating with a human to assemble a toolbox will be shown.
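
The inference step described above (inferring the robot's trajectory from human observations via learned correlations) reduces, in the simplest probabilistic treatment, to conditioning a joint Gaussian over the learned parameters. A toy sketch with made-up numbers:

```python
import numpy as np

def condition_gaussian(mu, Sigma, idx_obs, obs):
    """Condition a joint Gaussian N(mu, Sigma) on observing the entries
    idx_obs equal to obs; returns mean/cov of the remaining entries.
    In interaction primitives the joint is over human+robot trajectory
    parameters learned from demonstrations."""
    n = len(mu)
    idx_rest = [i for i in range(n) if i not in idx_obs]
    Soo = Sigma[np.ix_(idx_obs, idx_obs)]
    Sro = Sigma[np.ix_(idx_rest, idx_obs)]
    Srr = Sigma[np.ix_(idx_rest, idx_rest)]
    K = Sro @ np.linalg.inv(Soo)
    mu_post = mu[idx_rest] + K @ (obs - mu[idx_obs])
    Sigma_post = Srr - K @ Sro.T
    return mu_post, Sigma_post

# correlated "human" (dim 0) and "robot" (dim 1) parameters
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
m, S = condition_gaussian(mu, Sigma, [0], np.array([1.0]))
print(m, S)  # robot mean shifts to 0.8, variance shrinks to 0.36
```

Observing the human partner both shifts the robot's predicted trajectory and reduces its uncertainty, which is exactly what makes the collaboration responsive.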




Title: Language, Logic, and Motion: Synthesizing Robot Software

Abstract: Robots offer the potential to become regular helpers in our daily lives, yet challenges remain for complex autonomy in human environments. We address the challenge of complex autonomy by automating robot programming. Many useful robot tasks combine discrete decisions about objects and actions with continuous decisions about collision-free motion. We introduce a new planning framework that reasons over the combined logical and geometric space in which the robot operates. By grounding this planning framework in formal language and automata theory, we achieve not only efficient performance but also verifiable operation. Finally, such a rigorously grounded framework offers a firm base to scale to large domains, handle uncertainty in the environment, and incorporate behaviors learned from humans.




Title:  My trip to the US: Experience in enabling robots to manipulate in a kitchen scenario

Abstract: I am going to share my experience from a three-month project on object manipulation in a kitchen scenario at the University of Maryland. The project consists of three tasks: fetching an object from a fridge, heating it using a microwave, and cleaning a table after dinner. The solution is based on a Baxter robot with a mobile base, mostly implemented using engineering techniques, except for deep learning for object recognition. In the talk, I will introduce the solution and show some demo videos.




Title: “The Fast & the Compressible” - Reconstructing the 3D World through Mobile Devices

Abstract: Mobile devices are shifting from being a tool for communication to one that is used increasingly for perception. In this talk we will discuss my group’s work in the rapidly emerging space of using mobile devices to visually sense the 3D world. First, we will discuss the employment of high-speed (240+ FPS) cameras, now found on most consumer mobile devices. In particular, we will discuss how these high frame rates afford the application of direct photometric methods that allow for - previously unattainable - accurate, dense, and computationally efficient camera tracking & 3D reconstruction. Second, we will discuss how the problem of object category specific dense 3D reconstruction (e.g. “chair”, “bike”, “table”, etc.) can be posed as a Non-Rigid Structure from Motion (NRSfM) problem. We will discuss some theoretical advancements we have made recently surrounding this problem - in particular when one assumes the 3D shape being reconstructed is compressible. We will then relate these theoretical advancements to practical algorithms that can be applied to most modern mobile devices.  




Title: Human Action Understanding as a Pathway to Human-Robot Collaboration

Abstract: In this talk, I will first cover the motivating applications of human action recognition in the real world. Then, I will talk about some basics about temporal feature extraction such as 3D space-time interest point detection, optical flow features, temporal templates, dense trajectories, and motion boundary histograms.



Peter Corke, Timo Korthals (PhD student at Bielefeld), Thomas Schöpping (PhD student at Bielefeld), Stephen James (PhD students at Imperial College London)




Title: "Going further with direct visual servoing methods"

Abstract: This talk will cover different ways to improve the performance of direct visual servoing positioning methods, ranging from the use of global descriptors and particle filters to, ultimately, CNNs.




Title: Advanced Organic Optoelectronics for Making Robots See and Sense Better

Abstract: We see and feel the world by the sense of vision and touch that is brought to us by our eyes and skin. The rise of robotics would most certainly require robust vision and rich sensation for dexterous manipulation of soft objects and safe human-robot interaction. In this talk, I will first introduce the field of organic optoelectronics and discuss its potential in advancing current robotic vision and tactile sensing platforms. This will be followed by some of my most recent research on the design and development of advanced optoelectronic sensors for low level light sensing, reversible pixel operation, multi-spectral pixel design and tactile sensors that can be embedded in robotic arms of different shapes and forms for sophisticated sensing and smart functionality. I’ll also discuss some of my ongoing collaborative projects on brain-computer interface (with QBI/UQ) and night vision (with MIT).




Title: Event-Based Vision Algorithms for Mobile Robotics

Abstract: Event cameras, such as the Dynamic Vision Sensor (DVS), are biologically inspired sensors that present a new paradigm on the way that dynamic visual information is acquired and processed. Each pixel of an event camera operates independently from the rest, continuously monitoring its intensity level and transmitting only information about brightness changes of given size ("events") whenever they occur, with microsecond resolution. Hence, visual information is no longer acquired based on an external clock (e.g. global shutter); instead, each pixel has its own sampling rate, based on the visual input. This different representation of the visual information offers significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. This talk will focus on the research carried out at the Robotics and Perception Group (University of Zurich) on the development of such algorithms for ego-motion estimation and scene reconstruction, so that a robot equipped with an event camera can build a map of the scene and infer its pose with respect to it.
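
A toy simulation of the idealised event-generation model described above (a single pixel emitting polarity-coded events whenever log intensity crosses a contrast threshold; illustrative only):

```python
import numpy as np

def events_from_signal(t, intensity, threshold=0.2):
    """Idealised DVS pixel: emit (time, +1/-1) events whenever the
    log intensity deviates from the level at the last event by the
    contrast threshold. Output is asynchronous and polarity-coded."""
    logI = np.log(intensity)
    ref = logI[0]
    events = []
    for ti, li in zip(t, logI):
        while li - ref >= threshold:      # brightness increase: ON events
            ref += threshold
            events.append((ti, +1))
        while ref - li >= threshold:      # brightness decrease: OFF events
            ref -= threshold
            events.append((ti, -1))
    return events

# a brightening then darkening ramp produces ON events followed by OFF events
t = np.linspace(0, 1, 100)
I = np.concatenate([np.linspace(1.0, 2.0, 50), np.linspace(2.0, 1.0, 50)])
evs = events_from_signal(t, I)
print(evs[0][1], evs[-1][1])  # 1 -1
```

Note how the output depends only on signal change, not on an external clock, which is what gives the sensor its high dynamic range and microsecond latency.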




Title: Product of Exponentials formula – An alternative approach to modelling your robot

Abstract: Kinematics is a fundamental topic in robotics. The Denavit-Hartenberg (DH) model has been a standard approach to modelling the kinematics of a robot and has been adopted for decades. This talk will introduce another method, referred to as the Product of Exponentials (POE) formula, which has been gaining popularity as an alternative model. After describing the basic ideas of POE and comparing it to DH, the talk will show the equivalence between these two models, i.e., that they can be converted into each other analytically. Finally, the talk will discuss a few examples using the POE model and show that in some circumstances the POE model provides a simpler and more insightful interpretation of the kinematics of a robot.
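
As a small worked example of the POE idea (a 2R planar arm, not taken from the talk): the end-effector pose is a product of joint-twist exponentials applied to the home pose, T(θ) = exp(ξ₁θ₁) exp(ξ₂θ₂) M, and it matches the familiar closed-form kinematics.

```python
import numpy as np

def exp_twist(q, theta):
    """Exponential of a planar revolute twist: rotation by theta about the
    axis through point q (an SE(2) element as a 3x3 homogeneous matrix)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = (np.eye(2) - R) @ q          # the axis point stays fixed
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

def poe_fk(thetas, l1=1.0, l2=1.0):
    """POE forward kinematics of a 2R planar arm:
    T(theta) = exp(xi1 th1) exp(xi2 th2) M, with M the home pose."""
    M = np.eye(3)
    M[0, 2] = l1 + l2                 # end-effector position at home
    T = exp_twist(np.array([0.0, 0.0]), thetas[0]) \
        @ exp_twist(np.array([l1, 0.0]), thetas[1]) @ M
    return T[:2, 2]

# matches the textbook closed form for a 2R arm
th1, th2 = 0.3, 0.7
p = poe_fk([th1, th2])
expected = [np.cos(th1) + np.cos(th1 + th2), np.sin(th1) + np.sin(th1 + th2)]
print(np.allclose(p, expected))  # True
```

Unlike DH, no intermediate link frames need to be assigned; each joint contributes one twist defined in the fixed frame.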


Valerio Ortenzi 

Title: Vision-based trajectory control of unsensored robots to increase functionality, without robot hardware modification 

Abstract: In nuclear decommissioning operations, very rugged remote manipulators are used, which lack proprioceptive joint angle sensors. Hence these machines are simply tele-operated, where a human operator controls each joint of the robot individually using a teach pendant or a set of switches. Moreover, decommissioning tasks often involve forceful interactions between the environment and powerful tools at the robot's end-effector. Such interactions can result in complex dynamics, large torques at the robot's joints, and can also lead to erratic movements of a mobile manipulator's base frame with respect to the task space. My work seeks to address these problems by, firstly, showing how the configuration of such robots can be tracked in real-time by a vision system and fed back into a trajectory control scheme. Secondly, my work investigates the dynamics of robot-environment contacts, and proposes several control schemes for detecting, coping with, and also exploiting such contacts. Several contributions are advanced. Specifically, a control framework is presented which exploits the constraints arising at contact points to effectively reduce commanded torques to perform tasks; methods are advanced to estimate the constraints arising from contacts in a number of situations, using only kinematic quantities; a framework is proposed to estimate the configuration of a manipulator using a single monocular camera; and finally, a general control framework is described which uses all of the above contributions to servo a manipulator. The results of a number of experiments are presented which demonstrate the feasibility of the proposed methods.




Title: Evaluation of keypoint detectors and descriptors in arthroscopic images for feature-based matching applications

 Abstract: Knee arthroscopy is the most common minimally invasive orthopaedic procedure in the world. During this procedure, a camera and an arthroscope allow surgeons to observe unstructured and narrow views of the inside of the knee. Given visually challenging monocular images, the surgeon needs to a) estimate where the camera and the instruments are within the knee, b) maintain a mental map of the knee environment, and c) perform the appropriate therapeutic action while manipulating multiple instruments. These tasks are both mentally and physically demanding for surgeons and often lead to involuntary injury in patients.

 Surgeons would strongly benefit from systems that can continuously map the inside of the knee, localize the arthroscope and surgical tools, and control instruments using visual information. In this talk I will provide a quick overview of the research around robotic assisted knee arthroscopy within the Medical and Healthcare robotics group. I will then present in detail the outcomes of a recent submission to RA-L entitled “Evaluation of keypoint detectors and descriptors in arthroscopic images for feature-based matching applications”. I will conclude the talk with an overview of future research directions.


Mark McDonnell




David Hall: Towards Unsupervised Weed Scouting for Agricultural Robotics

Leo Wu: Dexterity analysis of three 6-DOF continuum robots combining concentric tube mechanisms and cable driven mechanisms

Michael Milford: Deep Learning Features at Scale for Visual Place Recognition

Fahimeh Rezazadegan: Action Recognition: From Static Datasets to Moving Robots

Juxi Leitner: ACRV Picking Benchmark




Title: Our journey with Hidden Markov Model filters for vision based aircraft detection.

Abstract: A short overview of our ten-year journey with HMM filters for aircraft detection. I will briefly highlight key milestones and advancements and show a glimpse of recent developments.
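
The core of such an HMM filter is a recursive predict-update over a hidden "target present/absent" state, in which weak per-frame evidence accumulates into a confident detection. A toy sketch (illustrative, not the actual detection system):

```python
import numpy as np

def hmm_filter(likelihoods, A, p0):
    """Recursive HMM (forward) filter: at each frame, predict with the
    transition matrix A, update with the measurement likelihood, and
    normalise. Returns the posterior over states at every frame."""
    p = np.array(p0, dtype=float)
    out = []
    for lik in likelihoods:
        p = A.T @ p              # predict
        p = p * lik              # measurement update
        p = p / p.sum()          # normalise
        out.append(p.copy())
    return np.array(out)

# states: [no target, target]; sticky dynamics, weak per-frame detections
A = np.array([[0.99, 0.01],
              [0.01, 0.99]])
lik_target = np.array([0.4, 0.6])     # each frame slightly favours "target"
posts = hmm_filter([lik_target] * 50, A, p0=[0.9, 0.1])
print(posts[-1][1] > 0.9)  # True: weak evidence accumulates over frames
```

This temporal integration is what lets HMM-based detectors pull dim, sub-pixel aircraft out of noisy image sequences where single-frame detection would fail.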




Title: Tools for Robot Vision research: scalable experiments and databases

Abstract: In order to perform experiments in robot vision, we have to write a bunch of surrounding code that sends the data to the system and interprets the results. Because each dataset and each robot vision system handles input differently, this code gets bigger and bigger and more and more complex. In this talk, I'm going to describe how I tackle this problem, and how I use MongoDB to manage all the data and metadata around running experiments. Hopefully some of these tools and solutions will be useful for you when conducting your research.
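
As a hedged sketch of the pattern (field names are illustrative, not the talk's actual schema): each trial becomes a self-describing document that, with pymongo, would be passed to `insert_one`. Plain dicts are used here so the example runs without a MongoDB server.

```python
from datetime import datetime, timezone

def make_trial_doc(system, dataset, params, results):
    """One experiment trial as a MongoDB-style document. With pymongo this
    dict would go straight to db.trials.insert_one(doc); all field names
    here are illustrative."""
    return {
        "system": system,
        "dataset": dataset,
        "params": params,        # everything needed to re-run the trial
        "results": results,
        "created": datetime.now(timezone.utc).isoformat(),
    }

def find(docs, **filters):
    """Minimal stand-in for collection.find({...}) with equality filters."""
    return [d for d in docs
            if all(d.get(k) == v for k, v in filters.items())]

trials = [
    make_trial_doc("orb_slam", "kitti_00", {"features": 2000}, {"ate": 1.3}),
    make_trial_doc("orb_slam", "kitti_05", {"features": 2000}, {"ate": 2.1}),
]
print(len(find(trials, dataset="kitti_00")))  # 1
```

Because documents are schemaless, each robot vision system can log whatever parameters it needs while the query side stays uniform.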




Title: Entrepreneurship


Abstract: This short seminar will present the work I have done outside of my PhD, including:

 - Hacking an RC car and developing a demo for Robotronica,

 - The various methods to crowd fund an idea/start-up,

 - The lessons learnt from running a successful KickStarter, and

 - Applying for MIT's Global Entrepreneurship Bootcamp.




Title: Multi-Modal Trip Hazard Detection On Construction Sites.

Abstract: Trip hazards are a significant contributor to accidents on construction and manufacturing sites, where over a third of Australian workplace injuries occur [1]. Current safety inspections are labour intensive and limited by human fallibility, making automation of trip hazard detection appealing from both a safety and an economic perspective. Trip hazards present an interesting challenge to modern learning techniques because they are defined as much by affordance as by object type; for example, wires on a table are not a trip hazard, but can be if lying on the ground. To address these challenges, we conduct a comprehensive investigation into the performance characteristics of 11 different colour and depth fusion approaches, including four fusion and one non-fusion approach, using colour and two types of depth images. Trained and tested on over 600 labelled trip hazards over 4 floors and 2,000 m² in an active construction site, this approach was able to differentiate between identical objects in different physical configurations (see Figure 1). Outperforming a colour-only detector, our multi-modal trip detector fuses colour and depth information to achieve a 4% absolute improvement in F1-score. These investigative results and the extensive publicly available dataset move us one step closer to assistive or fully automated safety inspection systems on construction sites.


Feras Dayoub

Feras's top ten favourite papers from ICRA2017 (download slides here) 

Henrik Christensen


Dr. Henrik I. Christensen is a Professor of Computer Science in the Department of Computer Science and Engineering at UC San Diego. He is also the director of the Institute for Contextual Robotics. Dr. Christensen does research on systems integration, human-robot interaction, mapping and robot vision. The research is performed within the Cognitive Robotics Laboratory. He has published more than 350 contributions across AI, robotics and vision. His research has a strong emphasis on "real problems with real solutions". He is actively engaged in the setup and coordination of robotics research in the US (and worldwide). Dr. Christensen received the Engelberger Award 2011, the highest honour awarded by the robotics industry. He was also awarded the "Boeing Supplier of the Year 2011". Dr. Christensen is a fellow of the American Association for the Advancement of Science (AAAS) and of the Institute of Electrical and Electronics Engineers (IEEE). His research has been featured in major media such as CNN, the NY Times and the BBC.




Title: Dealing with change in large-scale urban localisation 

Abstract: Autonomous vehicles in urban environments encounter a wide range of variation, including illumination, weather, dynamic objects, seasonal changes, roadworks and building construction. These changes occur over a range of timescales, from the day-night illumination cycle to construction that can span multiple years. In this talk I will discuss the challenges we have encountered during long-term autonomy trials in Oxford, Milton Keynes and Greenwich, and present two of our newest approaches to dealing with change in both localisation and mapping with vision and LIDAR. I will also cover our Oxford RobotCar Dataset and the upcoming long-term autonomy benchmark due in late 2017.

25-July-17 Anders Eriksson: Duality and Robotic Vision

Troy Bruggemann

Title: Evaluating UAS Team Reliability

Abstract: There is a need to enable greater efficiency, utilisation and safety of Unmanned Aircraft Systems (UAS) operating in teams with humans in the loop. UAS are limited in their ability to cope with failures and other unexpected events and continue their missions, and for this reason high human capital is typically required to support their safe operation. This talk discusses how to assess and design for UAS team reliability with humans in the loop.




Title: Inverse Dynamic Games

Abstract: Inverse dynamic games is the problem of recovering the underlying objectives of players in a dynamic game from observations of their optimal strategies. The problem of inverse dynamic games arises naturally in the study of economics, biological systems, cooperative automation, and conflict scenarios. Despite its many potential applications, the theory of inverse dynamic games has received limited attention. In this talk, recent advances in the theory of inverse dynamic games that have been made possible by exploiting the minimum (or maximum) principle of optimal control will be presented. The potential application of this work to autonomous collision avoidance will also be discussed.


Michael Rosemann







Title: The Strange Case of Grasping with Soft Hands - Exploiting Dr. Jekyll and Taming Mr. Hyde

Abstract: Squashy and flexible robotic end-effectors such as the RBO Hand 2 provide opportunities (Dr. Jekyll) and challenges (Mr. Hyde) for long-standing problems in grasping and manipulation. Opportunities, because getting into contact is easy and forgiving and the mechanical compliance of soft hands creates large basins of attraction when grasping objects. On the other hand, controlling soft hands exhibits significant challenges: good contact models are missing and sensor feedback is limited.

In this talk I will present a high-level grasp planner that exploits environmental contact and a low-level control method which learns models of simple manipulations for a soft hand.




Dr. Lesley Jolly holds a PhD in Anthropology and has worked alongside engineering educators throughout her career in an attempt to improve learning in STEM. She has facilitated the AAEE Winter School for many years and is a wealth of knowledge on everything to do with Engineering Education and its various pedagogies (flipped classroom, project-based learning, problem-based learning, etc.).
12-Sep-17 No Seminar (ICRA deadline)
20-Sep-17 Qi Wu
Title: Turing Test 2.0 - Vision and Language
Abstract: The fields of natural language processing (NLP) and computer vision (CV) have seen great advances in their respective goals of analysing and generating text, and of understanding images and videos. While both fields share a similar set of methods rooted in artificial intelligence and machine learning, they have historically developed separately. Recent years, however, have seen an upsurge of interest in problems that require the combination of linguistic and visual information. For example, Image Captioning and Visual Question Answering (VQA) are two important research topics in this area. Image captioning requires the machine to describe the image using human-readable sentences, while VQA asks a machine to answer language-based questions based on the visual information. In this talk I will outline some of the most recent progress, present some theories and techniques for these two Vision-to-Language tasks, and show a live demo of image captioning and Visual Question Answering. I will also cover some recent hot topics in the area, such as Visual Dialog.



Title: Shallow Networks for Inverse Projection in 3D Human Pose Estimation

Abstract: Projecting a 3D scene onto a 2D image is a relatively straightforward and well understood process common in computer vision. The inverse problem - recovering a 3D scene from a single 2D projection - is inherently ill-posed. Deep neural networks have been shown to perform well at this task by learning patterns in large datasets, though most fail to take advantage of the inverse nature of the problem. This talk will cover a couple of approaches we have taken to learn small, shallow networks to embed within a typical optimisation framework, and discuss areas we are looking to pursue.


Will Chamberlain

Title: Borrowing eyes: robotic vision beyond line-of-sight

Abstract: We can expand robots' vision envelope beyond line-of-sight with data from remote cameras, and exploit fast communications to gather visual information on demand.  We can also use smart cameras to distribute the image processing as well as image capture, enabling robots to be cheaper, and scaling to a large number of remote cameras.  This talk will cover my approach to distributed robotic vision on mobile phone smart cameras, and some of the challenges of distributed vision: describing robots’ information needs, managing a changeable set of available cameras, and aggregating conflicting data. 


No Seminar

RoboVis 2017 in Tangalooma 

Michael Lucas

Title: Opportunities and Challenges for Automation, Robotics, and Computer Vision in Australian Supply Chains (a practitioner's perspective)     

Abstract: I'll talk about my experiences with Automation, Robotics and Computer Vision in supply chain applications around the world over the last 20 years (with brief case studies), and where the future challenges and opportunities are given the current market and industrial relations climate.





Robotic Grasping: A brief history of robots picking things up    


Robotic grasping has been studied for decades, and a wide variety of techniques have been developed for synthesising stable grasps. I will present a brief overview of the robotic grasping literature and techniques, from analytical methods to data-driven and more modern machine learning approaches which show great potential in robotic grasping. Finally, I will discuss how this leads in to my PhD research topic.





SESAME and SVM for Underground Visual Place Recognition


Autonomous vehicles are increasingly being used in the underground mining industry, but competition and a challenging market are placing pressure for further improvements in autonomous vehicle technology with respect to cost, infrastructure requirements, robustness in varied environments and versatility. In this seminar I will share some of our recent work on several new vision-based techniques for underground visual place recognition that improve on currently available technologies while only requiring camera input. I will present a Shannon Entropy-based salience generation approach (SESAME) that enhances the performance of single image-based place recognition by selectively processing image regions. I will also discuss the effectiveness of adding a learning-based scheme realised by Support Vector Machines (SVMs) to remove problematic images. The approaches have been evaluated on new large real-world underground vehicle mining datasets, and their generality has been demonstrated on a non-mining-based benchmark dataset. Together this research serves as a step forward in developing domain-appropriate improvements to existing state-of-the-art place recognition algorithms that will hopefully lead to improved efficiencies in the mining industry.
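The entropy-based salience idea behind SESAME can be illustrated with a toy sketch: low-texture image regions have low Shannon entropy and can be skipped, while high-texture regions score highly and are worth processing. This is only a hedged reconstruction of the general idea, not the authors' implementation; the patch size and bin count are made up:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (bits) of a grey-level patch's intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.full((16, 16), 128)              # textureless region, e.g. a blank wall
textured = rng.integers(0, 256, (16, 16))  # high-variance region

# A salience mask could keep only patches whose entropy exceeds a threshold:
assert patch_entropy(flat) < patch_entropy(textured)
```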




Title: Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition

When a human drives a car along a road for the first time, they later recognize where they are on the return journey, typically without needing to look in their rearview mirror or turn around to look back, despite significant viewpoint and appearance change. Such navigation capabilities are typically attributed to our semantic visual understanding of the environment [1] beyond geometry, to recognizing the types of places we are passing through, such as "passing a shop on the left" or "moving through a forested area". Humans are in effect using place categorization [2] to perform specific place recognition even when the viewpoint is 180 degrees reversed. Recent advances in deep neural networks have enabled high-performance semantic understanding of visual places and scenes, opening up the possibility of emulating what humans do. In this work, we develop a novel methodology for using the semantics-aware higher-order layers of deep neural networks for recognizing specific places from within a reference database. To further improve the robustness to appearance change, we develop a descriptor normalization scheme that builds on the success of normalization schemes for pure appearance-based techniques such as SeqSLAM [3]. Using two different datasets - one road-based, one pedestrian-based - we evaluate the performance of the system in performing place recognition on reverse traversals of a route with a limited field of view camera and no turn-back-and-look behaviors, and compare to existing state-of-the-art techniques and vanilla off-the-shelf features. The results demonstrate significant improvements over the existing state of the art, especially for extreme perceptual challenges that involve both great viewpoint change and environmental appearance change.
We also provide experimental analyses of the contributions of the various system components: the use of spatiotemporal sequences, place categorization and place-centric characteristics as opposed to object-centric semantics.





The Progress of 3D Printing and how it Could be Useful for Research


3D printing has recently become very popular, with consumer-grade printers making it easier and easier to download a file and hit print. With this in mind, how can we as researchers utilise this technology to increase our research productivity? This talk will delve into the different 3D printing technologies and creations that we can use in our demos, hardware development and experiments, so that we can reduce cost, reduce lead time on parts and spend more time on the areas that matter. There will also be a small introduction to the Gummi Arm, a 3D printed variable stiffness manipulator that I built and will be working on.

21-Nov-17 Andrea Cherubini




Traditionally, heterogeneous sensor data was fed to fusion algorithms (e.g., Kalman or Bayesian-based), so as to provide state estimation for modeling the environment. However, since robot sensors generally measure different physical phenomena, it is preferable to use them directly in the low-level servo controller rather than to apply them to multi-sensory fusion or to design complex state machines. This idea, originally proposed in the hybrid position-force control paradigm, when extended to multiple sensors brings new challenges to the control design; challenges related to the task representation and to the sensor characteristics (synchronization, hybrid control, task compatibility, etc.). 
The rationale behind our work has precisely been to use sensor-based control as a means to facilitate the physical interaction between robots and humans. 
In particular, we have used vision, proprioceptive force, touch and distance to address case studies, targeting four main research axes: teach-and-repeat navigation of wheeled mobile robots, collaborative industrial manipulation with safe physical interaction, force and visual control for interacting with humanoid robots, and shared robot control. Each of these axes will be presented here, before concluding with a general view of the issues at stake, and on the research projects that we plan to carry out in the upcoming years.

28-Nov-17 Juan Jairo Inga Charaja


Human Behaviour Identification Using Inverse Reinforcement Learning


Recent trends in human-machine collaboration have led to increased interest in shared control systems, where both a human and a machine or automation simultaneously interact with a dynamic system. However, for a systematic control design that enables automation to participate in cooperation with a human, modeling and identification of human behavior become essential. Considering a model of shared control based on a differential game, the identification problem consists in finding the cost function describing observed human behavior. This seminar will show the potential of Inverse Reinforcement Learning techniques for identification in such scenarios.


Ashley Stewart

Fangyi Zhang

Sean McMahon

James Mount

ACRA rehearsal
12-Dec-17 No seminar (ACRA)
19-Dec-17 Girish Chowdhary


The Robotics are Coming - for your Food!


26-Dec-17 No seminar (Xmas)

Seminars 2015:






Title: Marine Vessel Inspection as a Novel Field for Service Robotics: A Contribution to Systems, Control Methods and Semantic Perception Algorithms.

Abstract: Seagoing vessels, such as bulk carriers, dry cargo ships, and tankers, have to undergo regular inspections at survey intervals. This is performed by ship surveyors, using visual close-up surveys or non-destructive testing methods. Vessel inspection is performed on a regular basis, depending on the requirements of the ship classification society. For a close-up survey, the ship surveyor usually has to get within arm's reach of the inspection point. Structural damage, pitting, and corrosion are visually estimated based on the experience of the surveyor. The most cost-intensive part of the inspection process is providing access to all parts of a ship. The talk will present a novel, robot-based approach to the marine inspection process. Within the talk, several locomotion concepts for inspection robots are presented. Additionally, perception concepts based on spatial-semantic ontologies and on spatial Fuzzy Description Logic are proposed. It will be discussed how such concepts can be used to classify structural parts of a ship, which in turn can enhance a robot-based inspection process with semantic annotations.

03-Feb-15 Juxi Leitner: "From Vision To Actions - Towards Adaptive and Autonomous Humanoid Robots"

Andrew English,

Adam Jacobson,

Michael Milford, and

Thierry Peynot

Post IROS2015 and ROSCon2015



Sandra Mau

Title: TrademarkVision, a spin-out computer vision company.


Sandra will be talking about her computer vision spin-out company TrademarkVision, sharing her journey from research to commercialisation, and giving insight on creating the right environment to unlock innovation and entrepreneurship for women in technology.

GP-S11 Cantina Lounge


John Vial


The SLAM algorithm has a fundamental problem in that its memory requirements grow linearly over time. To combat this, robot poses can be marginalised (typically causing fill-in and increasing memory overhead), forgotten entirely, or the system can be approximated.

This talk will describe a recent PhD dissertation that addresses this problem through approximation, while also guaranteeing that the approximate distributions are both close (in the Kullback-Leibler divergence sense) and conservative; this new technique is called Conservative Sparsification. A variant of the technique is developed that is appropriate for distributed estimation systems by employing Junction Trees.
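For the multivariate Gaussian distributions used in SLAM, the Kullback-Leibler divergence used as the closeness measure has a standard closed form (this is the textbook result, not a formula specific to the dissertation); with means $\mu_0, \mu_1$, covariances $\Sigma_0, \Sigma_1$ and state dimension $k$:

```latex
D_{\mathrm{KL}}\left(\mathcal{N}_0 \,\|\, \mathcal{N}_1\right)
  = \frac{1}{2}\left[
      \operatorname{tr}\left(\Sigma_1^{-1}\Sigma_0\right)
      + (\mu_1-\mu_0)^{\top}\Sigma_1^{-1}(\mu_1-\mu_0)
      - k
      + \ln\frac{\det\Sigma_1}{\det\Sigma_0}
    \right]
```

"Conservative" additionally means the approximation must not understate uncertainty, i.e. the approximate covariance is at least as large (in the positive semi-definite sense) as the true one.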


Photos: p1.jpg, p2.jpg, p3.jpg.

GP-S11 Cantina Lounge

David Ball

Title: Reflections on: Robotics for Zero-Tillage Agriculture (ARC Linkage Project)


Farmers are under growing pressure to intensify production to feed a growing population, while managing environmental impact. Robotics has the potential to address these challenges by replacing large sophisticated farm machinery with fleets of small autonomous robots. The first half of this seminar will present research from the completed ARC Linkage project “Robotics for Zero-Tillage Agriculture” towards the goal of coordinated teams of autonomous robots that can perform typical farm coverage operations. The second half of the seminar will reflect on the other aspects of the grant such as expectations about technology readiness levels, impact, timeline, testing, and the real cost of the project. 

With a large fleet of robots it will become time consuming to monitor, control and resupply them all. To alleviate this problem we describe a multi-robot coverage planner and autonomous docking system. Making a large fleet of autonomous robots economical requires using inexpensive sensors such as cameras for localisation and obstacle avoidance. To this end we describe a vision-based obstacle detection system that continually adapts to environmental and illumination variations, and a vision-assisted localisation system that can guide a robot along crops with challenging appearance. This research included three months of field trials on a broad-acre farm, culminating in a two-day autonomous coverage task of 59 ha using two real robots, four simulated robots and an automatic refill station.



GP-S11 Cantina Lounge


Juxi Leitner,
James Mount, Brent Watts,
Sue Keay

Title: Sharing experiences for robot business commercialisation


Brent - Overview of QUT Bluebox and how they can help turn research and innovation into commercialization opportunities.

Michael - His startup and how Bluebox helped him

Sue - Discussing her experience at RoboBusiness in Silicon Valley

 Juxi and James - Übercamp Experience Overview


With more people investigating the idea of startups, we must ensure we are utilising every opportunity and tool at our disposal. With robotics being an emerging technology, there will be potential for several commercialisation opportunities just on the horizon. So, this week's seminar will have several presentations all in the area of startups, commercialisation and entrepreneurship! There will be four short presentations on a variety of aspects within the entrepreneurial space. The first half of the seminar will discuss the recent experiences of some of our RAS members at QUT's Ubercamp and RoboBusiness. The second half of the seminar will show how QUT Bluebox can help turn your idea, or product, into a viable commercialisation opportunity.


Photos: IMG_2699.jpg, IMG_2698.jpg

GP-S11 Cantina Lounge

François Chaumette

Title: Visual servoing without image processing

Speaker: François Chaumette


GP-S11 Cantina Lounge
24-Nov-15 External speaker

Title: The HPec Project: Self-Adaptive, Energy Efficient, High Performance Embedded Computing, UAV case study.

Speaker: Jean-Philippe Diguet

Title: Embedded Health Management for Autonomous UAV Mission

Speaker: Catherine Dezan

Abstract: (HPeC Summary)

The HPeC project aims at demonstrating the relevance of self-adaptive hardware architectures in responding to the growing demands of high-performance computing in an increasing class of embedded systems that also have demanding footprint and energy-efficiency constraints. This is typically the kind of embedded system found in small autonomous systems like UAVs, which require high computing capabilities to perceive the environment (e.g. embedded vision) and make decisions about which tasks to execute according to uncertainties related to the environment, safety-critical systems, the health of the system and processing results (e.g. an identified object).


GP-S11 Cantina Lounge
1-Dec-15 NOTE: due to ACRA being held this week, the lab might be very empty! (GP-S11 Cantina Lounge)
8-Dec-15 TBD (GP-S11 Cantina Lounge)

ACRA, AusAI, and Deep learning workshop recap.

Speakers: TBD

GP-S11 Cantina Lounge
22-Dec-15 Andres F. Marmol V. (GP-S11 Cantina Lounge)
29-Dec-15 NOTE: the university will be closed this week, so no seminar is scheduled. (GP-S11 Cantina Lounge)




Seminars 2014:





Ioannis Rekleitis 

Algorithmic Field Robotics: Enabling Autonomy in Challenging Environments



Chunhua Shen

Tristan Perez

New image features and insights for building state-of-the-art human detectors

Robust Autonomy in Field Robotics – Assessment and Design




Mohan Sridharan

Towards Autonomy in Human-Robot Collaboration (GP-P-512)

Geoffrey Walker

Solarcars and EVs to Agbots and UAVs (GP-O-603)

Neil Davidson

Design as Strategy (GP-O-603)

Ben Upcroft

Sabbatical at Oxford - "We Are Never Lost" (GP-O-603)

Obadiah Lam

A new type of neural network: Hierarchical Temporal Memory – Cortical Learning Algorithm (GP-O-603)

Niko Sünderhauf

What is beneath the snow? – Towards a probabilistic model of visual appearance changes (GP-O-603)


Zetao Chen

Thierry Peynot

Multi-scale Bio-inspired Place Recognition (ICRA 2014)

Radars: a complementary sensing modality to Vision for Robotics and Aerospace at QUT?

28-04-2014 Keith L. Clark

Programming Robotic Agents: A Multi-tasking Teleo-Reactive Approach [Slides]


Stephanie Lowry

Paper 1: Towards Training-Free Appearance-Based Localization: Probabilistic Models for Whole-Image Descriptors (ICRA 2014)

Paper 2: Transforming Morning to Afternoon using Linear Regression Techniques (ICRA 2014)


Edward Pepperell

Steven Martin

All-Environment Visual Place Recognition with SMART (ICRA 2014)

(ICRA 2014)


Tim Morris

Multiple map hypotheses for planning and navigating in non-stationary environments (ICRA2014)

20-05-2014 RAS overview (GP-Z-606)

Alex Bewley

Patrick Ross

Online Self-Supervised Multi-Instance Segmentation of Dynamic Objects (ICRA 2014) 

Novelty-based visual obstacle detection in agriculture (ICRA 2014)


Ben Upcroft

Michael Milford

Lighting Invariant Urban Street Classification (ICRA 2014)

Condition-Invariant, Top-Down Visual Place Recognition (ICRA 2014)


Jason Kulk

A Chronology of Previous Experiences with Robots

20-06-2014 Raymond Russell: "RoPro Design - the Struggles Facing a Mobile Robotics Company in 2014" (GP-O-603)
27-06-2014 Timothy Molloy
- Asymptotic Minimax Robust and Misspecified Lorden Quickest Change Detection For Dependent Stochastic Processes
- Compressed sensing using hidden Markov models with application to vision based aircraft tracking

Change from Fridays to Tuesdays 11:00 am

15-07-2014 Duncan Campbell: Overview of Project ResQu (GP-B-507)
29-07-2014 Steven Wright: Optimization with a focus on machine learning applications [Slides] (GP-B-507)
05-08-2014 Niko Suenderhauf: Overview of the CVPR 2014 conference (GP-B-507)
07-08-2014 Charles Gretton: Calculating Economical Visually Appealing Routes [Thursday 4:00 - 5:00 pm] (GP-S-301)
12-08-2014 Andre Barczak: Fast Feature Extraction Using Geometric Moment Invariants (GP-B-507)
19-08-2014 Joseph Young: QUT gear and services around HPC (GP-B-507)
26-08-2014 Jonathan Roberts: Museum Robot - how to deploy two robots for four years (GP-B-507)
12-09-2014 Jochen Trumpf: Observers for systems with symmetry [Friday 11:00am - 12:00pm] (GP-S-405)
30-09-2014 Group: IROS recap (GP-B-507)
07-10-2014 Tor Arne Johansen: Autonomous Marine Operations and Systems, with emphasis on Unmanned Aerial Vehicles (GP-B-507)
14-10-2014 Alfredo Nantes: Traffic SLAM, a Robotics Approach to a New Traffic Engineering Challenge (GP-B-507)
16-10-2014 Ken Skinner: Trusted Autonomy [slides] (GP-S-407)
21-10-2014 Sareh Shirazi: Video Analysis Based on Learning on Special Manifolds for Visual Recognition (GP-B-507)
28-10-2014 Remi Ayoko: Workspace configurations, employee wellbeing and productivity (GP-B-507)
Matthew Dunbabin
RobotX Challenge (GP-B-507)

Anjali Jaiprakash

25-11-2014 Navinda Kottege: Hexapods and other stories: Autonomous Systems for Perceiving our Environment (GP-B-507)
09-12-2014 Franz Andert: Integrating Vision Sensors to Unmanned Aircraft (GP-B-507)

Seminars 2013





Adrien Durand Petiteville

Multi-sensor based navigation of a mobile robot in a cluttered environment                                                  


Matthew Garratt

Unmanned Aerial Vehicles Research at UNSW, ADFA. WHERE: S403 TIME: 12.00 pm - 1.00 pm


Alex Bewley

PhD student introduction: Who is Alex Bewley and what is he doing here?


Wesam Al Sabban

Path Planning for Small, Electric Unmanned Aerial Vehicles in Dynamic Conditions


David Ball + Others

OpenRatSLAM + other related work


David Schmale

Time: 2-3pm Place: S Block room 305.


Arren Glover 

Final Seminar. Room B121


Frederic Maire

Marine mammals detection in aerial images


Patrick Ross

PhD Confirmation Seminar - Outdoor traversability (10-11am)


Robert Zlot and Mike Bosse

Title to be decided.


ICRA practice talks:

[David Ball] [Chris Lehnert]


ICRA practice talks:

[David Ball]


ICRA attendees:

summaries of papers they liked


Michael Warren and Chris Lehnert



Kyran Findlater, Ryan Steindl

4th year thesis presentations: AgaBot flash light, $100 UAV


Paul Furgale

Autonomous Systems Lab, ETH Zurich (Title to be decided.)


Brett Browning

Mobile Robotics for Oil and Gas Production and Heavy Industry


Andre Gustavo Scolari Conceicao

Formation Control of Mobile Robots Using Decentralized Nonlinear Model Predictive Control


Matthew Dunbabin

How to blow-up a robot… and other cool ways to monitor the environment


Matthew Walter

 Acquiring Rich Models of Objects and Space Through Vision and Natural Language


Kok Yew (Mark) Ng

Robust fault reconstruction using sliding mode observer


Thierry Peynot

Resilient perception and navigation for unmanned ground vehicles in challenging environmental conditions


Hu (Kyle) He

Joint 2D and 3D cues for image segmentation


Video: Jeff Hawkins

Talk followed by discussion: on Intelligence by Jeff Hawkins


Video: Chris Manning

Talk followed by discussion: on Deep Learning for NLP


Timothy Morris and Feras Dayoub

The Guiabot...TBA


Timothy Morris
Stephanie Lowry

Vision-Only Autonomous Navigation Using Topometric Maps (IROS2013 practice)
Odometry-driven Inference to Link Multiple Exemplars of a Location (IROS2013 practice)


Adam Jacobson

Michael Warren

Autonomous Movement-Driven Place Recognition Calibration for Generic Multi-Sensor Robot Platforms (IROS2013 practice)

Robust Scale Initialization for Long-Range Stereo Visual Odometry (IROS2013 practice)

15-11-2013 Anthony Finn, Director, Defence & Systems Institute - Title: TBA


Michael Milford

6 Months of Awesome in Boston


IROS2013 recap

Michael Warren

Stephanie Lowry

Timothy Morris


Walter Scheirer

The Open Set Recognition Problem


Tim Barfoot

Visual Route Following for Mobile Robots

18-12-2013 Denny Oetomo: TBA
20-12-2013 Jasmine Banks: FPGAs and Applications