The Robotics and Autonomous Systems (RAS) seminar series is open for the public. Everyone is welcome to attend.
Upcoming:
Speaker: Nicolas Hudson
Title: Mobile Manipulation
Abstract: An overview of key insights and winning strategies used by NASA’s Jet Propulsion Laboratory in the DARPA ARM, ARL's RCTA and DARPA Robotics Challenge programs, and how this intersects with Google's "AI-First" world.
Where: QUT Gardens Point S-Block, 11th floor, The Cantina Lounge
When: 11:00AM-12:00PM on 24/April/2018
Seminar internal speaker list (constantly updated):
If your name is near the top, please be prepared to give a RAS presentation soon (it is going to keep floating there until you do one).
William Hooper : william.hooper@hdr.qut.edu.au
Matthew Dunbabin : m.dunbabin@qut.edu.au
Jonathan Roberts : jonathan.roberts@qut.edu.au
Chris McCool : c.mccool@qut.edu.au
Michael Milford : michael.milford@qut.edu.au
Juxi Leitner : j.leitner@qut.edu.au
Niko Suenderhauf : niko.suenderhauf@qut.edu.au
Chris Lehnert : c.lehnert@qut.edu.au
Fangyi Zhang : fangyi.zhang@qut.edu.au
Ajay Pandey : a2.pandey@qut.edu.au
Leo Wu : liao.wu@qut.edu.au
Valerio Ortenzi : valerio.ortenzi@qut.edu.au
Andres Marmol : andres.marmolvelez@qut.edu.au
Jason Ford : j2.ford@qut.edu.au
John Skinner : jr.skinner@hdr.qut.edu.au
James Mount : j.mount@qut.edu.au
Sean McMahon : S1.mcmahon@hdr.qut.edu.au
Anders Eriksson : anders.eriksson@qut.edu.au
William Chamberlain : william.chamberlain@qut.edu.au
Douglas Morrison : douglas.morrison@hdr.qut.edu.au
Fan Zeng : fan.zeng@qut.edu.au
Sourav Garg : sourav.garg@hdr.qut.edu.au
Norton Kelly-Boxall : norton.kellyboxall@hdr.qut.edu.au
(Will do later) Anjali Jaiprakash : anjali.jaiprakash@qut.edu.au
(Will do later) Felipe Gonzalez : felipe.gonzalez@qut.edu.au
(Will do later) Jeremy Opie : None
(Will do later) Lachlan Nicholson : lachlan.nicholson@hdr.qut.edu.au
(Will do later) Mario Strydom : mario@ch3.com.au
(Will do later) Sean Wade-McCue : Sean.wademccue@hdr.qut.edu.au
(Will do later) Troy Cordie : troy.cordie@hdr.qut.edu.au
(Will do later) Jordan Laurie : jordan.laurie@hdr.qut.edu.au
(Will do later) Vibhavari Dasagi : vibhavari.dasagi@hdr.qut.edu.au
(??) Matt McTaggert : None
(??) Riccardo Grinover : ricardo.grinover@connect.qut.edu.au
(??) Steve Martin : None
(??) Fahimeh Rezazadegan : fahimeh.rezazadegan@qut.edu.au
This list contains all the members from here
https://www.roboticvision.org/rv_person_category/researchers/
https://www.roboticvision.org/rv_person_category/students/
Ranked by the order of most recent seminar date. People who haven't given a seminar since 2016 are ranked first, in alphabetical order by First Name.
Organiser : Please contact Fan Zeng, the organiser of this seminar series at fan.zeng@qut.edu.au, If
- Your name is not included in the list, and you'd like to add it into the list.
- Opposite of the above.
- You, or your visitor would like to give a talk in one of the upcoming sessions.
- Your name is near the top of the list, but you cannot give a seminar due to various reasons.
Thank you very much for your attention!
Full List of Seminars:
Seminars 2018:
Please email Fan Zeng (fan.zeng@qut.edu.au), if your presentation is arranged in the table below on a date you are not available. Please also remember to email the title and abstract when ready. A short biography would be appreciated for a brief introduction of the speaker before the presentation. Thanks!
Date ____ | Speaker | Topic | Room |
---|---|---|---|
05-Jun-18 | Tomas Krajnik | Title: FreMEn: Frequency Map Enhancement for Long-Term Autonomy of Mobile Robots Abstract:While robotic mapping of static environments has been widely studied, Rather than using a fixed probability value, our method models the | |
29-May-18 | Stéphane Caron | Title: The Inverted Pendulum: a simple model for 3D Bipedal Walking Abstract: Walking pattern generators based on the Linear Inverted Pendulum Model (LIPM) have been successfully showcased on real robots. However, due to key assumptions made in this model, they only work for walking over horizontal floors (2D walking). In this talk, we will see how to generalize the LIPM to 3D walking over uneven terrains, opening up old but refreshed questions on the analysis and control of bipeds. Our aim is to enable humanoids to walk in new environments: outdoors, staircases, hazardous areas, etc. Today's public research has reached the simulation stage in this field, as we will see in live simulations during the talk. We will finally discuss our ongoing efforts to make this a reality (in the public world) on the HRP-4 robot. | GP-S11 Cantina Lounge |
22-May-18 | Cancelled | Cancelled | |
15-May-18 | Cancelled | Cancelled | GP-S11 Cantina Lounge |
08-May-18 | Thierry Peynot | GP-S11 Cantina Lounge | |
01-May-18 | Kulatunga Mudiyanselage Eranda Bankara Tennakoon (Eranda) | ||
24-Apr-18 | Nicolas Hudson | Title: Mobile Manipulation Abstract: An overview of key insights and winning strategies used by NASA’s Jet Propulsion Laboratory in the DARPA ARM, ARL's RCTA and DARPA Robotics Challenge programs, and how this intersects with Google's "AI-First" world. | GP-S11 Cantina Lounge |
17-Apr-18 | Suman Raj Bista | Title : Indoor navigation of mobile robots based on visual memory and image-based visual servoing Abstract: This talk will focus on a method for appearance-based navigation from an image memory by Image-Based Visual Servoing (IBVS). The entire navigation process is based on 2D image information without using any 3D information at all. The environment is represented by a set of reference images with overlapping landmarks, which are selected automatically during a prior learning phase. These reference images define the path to follow during the navigation. The switching of reference images during navigation is done by comparing the current acquired image with nearby reference images. Based on the current image and two succeeding key images, the rotational velocity of a mobile robot is computed under IBVS control law. The navigation methods using local features like lines and the entire image using mutual information will be presented with the experimental results. | GP-S11 Cantina Lounge |
10-Apr-18 | Luis Mejias Alvarez | Title: Experiences and Work during PDL 2017Abstract: In this talk, I will present the main experiences during my stay in France, Canada and Spain in 2017. I will also present the work developed during this time which deals with the development of UAV navigation approaches that do not rely on GPS. The main technique behind the approach is called visual control, in particular of a type that exponentially decouples the translational from the rotational degrees of freedom. I will present motivation, flight experiments and results from this work. | GP-S11 Cantina Lounge |
03-Apr-18 | Peter Corke Melissa Johnston | MARS conference experience New SEF 3D printing capabilities to show you! | GP-S11 Cantina Lounge |
27-Mar-18 | Dorian Tsai | Title: Distinguishing Refracted Features using Light Field Cameras with Application to Structure from Motion Abstract: Robots must reliably interact with refractive objects in many applications; however, refractive objects can cause many robotic vision algorithms to become unreliable or even fail, particularly feature-based matching applications, such as structure-from-motion. We propose a method to distinguish between refracted and Lambertian image features using a light field camera. Specifically, we propose to use textural cross-correlation to characterise apparent feature motion in a single light field, and compare this motion to its Lambertian equivalent based on 4D light field geometry. Our refracted feature distinguisher has a 34.3\% higher rate of detection compared to state-of-the-art for light fields captured with large baselines relative to the refractive object. Our method also applies to light field cameras with much smaller baselines than previously considered, yielding up to 2 times better detection for 2D-refractive objects, such as a sphere, and up to 8 times better for 1D-refractive objects, such as a cylinder. For structure from motion, we demonstrate that rejecting refracted features using our distinguisher yields up to 42.4\% lower reprojection error, and lower failure rate when the robot is approaching refractive objects. Our method lead to more robust robot vision in the presence of refractive objects. | GP-S11 Cantina Lounge |
20-Mar-18 | Paul Wilson | Title: Distributed acoustic sensing of conveyors Abstract: Mining3 has been investigating the use of fibre optic distributed acousting sensing since 2014 for monitoring the condition of conveyor belts. Because fibre optic cable behaves rather differently from microphones or piezo pickups, it has taken a great deal of work to be able to extract meaningful spectra from the data collected. In combination with research into the various failure modes and wear patterns of conveyor bearings it has been necessary to research the acoustic properties of conveyor belt steelwork and methods of attaching the fibre to the frames in order to assure good acoustic coupling. Currently two extended field trials are being undertaken at Moranbah North coal mine in Queensland and Argyle diamond mine in Western Australia. The results from the fibre interrogator units and the signal processing computers are automatically generating condition reports and these rely on some pattern matching techniques and some rules-based decision-making software. The technology is now at a pre-commercial stage sufficient for first adoption by mining companies. The next phase of the project is to try and improve the spectral signature pattern matching and the rule-based decision making by employing modern machine learning techniques to: 1. Speed up and improve the accuracy of the automated report generation 2. See if there are other patterns in the data that are not yet recognised The assistance of QUT robotics group and their pattern-matching skills would be appreciated. | GP-S11 Cantina Lounge |
13-Mar-18 | Weizhao Zhao | Title: An Optical Tracking System for Cyberknife Radiosurgery on Ocular Tumor Abstract: Image-guided radiosurgery has been popularly used in cancer treatment. Tracking tumor movement during the treatment is crucially important for radiation therapy. A treatment option for ocular tumor has been investigated using the Cyberknife system, due to its advantage of real-time image guidance during therapy. However, unpredictable eyeball movement imposes challenges to the state-of-art technology. This presentation describes a 2D/3D transformation solution to predict the tumor’s 3D positions in real- time. We designed a mechanical phantom to validate the developed method. In both calibration procedure and validation procedure, the error between the predicted position and actual position for the gravity center of the tumor in eyeball was within submillimeter level. Based on the developed the method, a surrogate to the CyberKnife system is under construction. This invention has been awarded a United States Patent. | GP-S11 Cantina Lounge |
06-Mar-18 | Steven Bulmer | Presentation by some VRES students who developed an FPGA to do vision processing. | GP-S11 Cantina Lounge |
27-Feb-18 | Gavin Suddrey | Title: Almost Fury Road - The Story of an Autonomous Laboratory Tour ft. Pepper the Robot Abstract: This talk will focus on the problems and solutions inherent in getting a Pepper robot to give an autonomous tour of S11. This will cover various areas including motion control, sensing and autonomous navigation. In addition to discussing Pepper, I will also talk about how we integrated Pepper with other robots/technology in the lab to create a more interactive experience. This talk will be largely informal, and with any luck I will have some interesting videos to go along with it. | GP-S11 Cantina Lounge |
20-Feb-18 | Stuart McCarthy Daniel Mcleod | Invited speakers from Manabotix, a local robotics / automation company. | GP-S11 Cantina Lounge |
13-Feb-18 | Arnab Ghosh Lu Gan Steven Parkison Arash Ushani Axel Gandia | Generative Models for Computer Vision and Video Generation Toward a Probabilistic Sound Semantic Perception Framework for Robotic Systems Improving Point Cloud Registration Understanding a Dynamic World Character navigation based on optical flow | GP-S11 Cantina Lounge |
08-Feb-18 | Oliver Sawodny | Title: The Bionic Handling Assistant - Modeling and control of continuum manipulators Presenter: Prof. Dr.-Ing. Dr. h.c. Oliver Sawodny, Institute for System Dynamics (ISYS), University of Stuttgart Abstract: The Bionic Handling Assistant is a novel continuum manipulator with nine pressure-driven actuators called bellows that is manufactured using the rapid prototyping method Selective Laser Sintering. Unlike common rigid link manipulators, continuum manipulators provide a flexible actuation system by bending and extending their actuators. Using a pneumatic actuation system, the Bionic Handling Assistant is inherently safe and therefore well suited for tasks that require human contact. However, the pneumatic system and the coupled mechanics require highly developed control concepts, especially due to the redundancy between the tool center point and its actuators. Therefore, model-based control concepts and path-planning algorithms have to be developed, especially as most concepts for rigid link manipulators cannot simply be applied to this new class of manipulators. | GP-S11 Cantina Lounge |
30-Jan-18 | David Lane | Title: Abstract: | GP-S11 Cantina Lounge |
23-Jan-18 | Thierry Peynot | Title: Impressions on CES 2018 (Consumer Electronics Show, Las Vegas) Abstract: Early January 2018 I had the opportunity to attend the famous CES show in Las Vegas. CES is a huge annual event where most of the big players in electronics show off their latest novelties. This year Self-driving Cars, Robotics, Drones, AI and VR obviously had an important presence, and many other technologies that are relevant to us were on display. In this short seminar I propose to share my impressions on the event, including what I found promising, impressive, disappointing etc. | GP-S11 Cantina Lounge |
16-Jan-18 | Peter Corke Feras Dayoub | Title: How the ICRA Selection Process Works | GP-S11 Cantina Lounge |
09-Jan-18 | Jasmin James | Title: Quickest Detection of Intermittent Signals With Application to Vision Based Aircraft Detection Abstract: In this paper we consider the problem of quickly detecting changes in an intermittent signal that can (repeatedly) switch between a normal and an anomalous state. We pose this intermittent signal detection problem as an optimal stopping problem and establish a quickest intermittent signal detection (ISD) rule with a threshold structure. We develop bounds to characterise the performance of our ISD rule and establish a new filter for estimating its detection delays. Finally, we examine the performance of our ISD rule in both a simulation study and an important vision based aircraft detection application where the ISD rule demonstrates improvements in detection range and false alarm rates relative to the current state of the art aircraft detection techniques. | GP-S11 Cantina Lounge |
Date ____ | Speaker | Topic | Room |
---|---|---|---|
12-Jan-16 | External guest speaker: Chris Jeffery | Title: Start-up Adventures | GP-S11 Cantina Lounge |
19-Jan-16 | External guest speaker: Fredrik Kahl | Title: From Projective Geometry to City-Scale Reconstructions in Computer Vision Abstract: Research in geometrical computer vision has undergone a remarkable journey over the last two decades. Not long ago, the field was dominated by mathematicians interested in projective geometry, but today, the area has matured and practical systems for performing large-scale 3D reconstructions are commonplace. In this talk, I will first review some of the progress achieved so far and then give examples of present state of the art, especially on robust methods for city-scale reconstruction and localization. In the end, future challenges will be discussed. | GP-S11 Cantina Lounge |
26-Jan-16 | Australian Day | Holiday off. | |
02-Feb-16 | Luis Mejias | Update on IEEE Control Systems / Robotics and Automation Societies QLD joint-chapters ( 10 mins). | GP-S11 Cantina Lounge |
09-Feb-16 | Leo Wu | Title: From industrial robots to medical robots: An individual perspective Abstract: In this presentation, I will talk about some projects I participated in at Tsinghua University and National University of Singapore. In particular, I will discuss kinematic calibration of industrial robots and introduce a flexible medical robot named concentric tube robot. Finally I will make some rough comparisons between industrial robots and medical robots based on my experience. | GP-S11 Cantina Lounge |
16-Feb-16 | Jason Ford | Title: Automation for large scale infrastructure inspection: Why and How Abstract: This talk will describe our journey in developing the Flight Assist System (FAS) for automation of ROAMES infrastructure inspection aircraft. (ROAMES won a 2015 International Edison Award, and is having an international impact on the infrastructure inspection industry). I will also share some personal reflections on industry collaboration. | GP-S11 Cantina Lounge |
23-Feb-16 | Victor Vicario | Title: Growing a Startup at QUT - The VBK Motors experience
Abstract: Creating a Startup while being at university is tough, especially since funds are limited. That said, a number of opportunities are made available by the university itself which can make a great business a reality. VBK motorshas been lucky enough to take advantage of these opportunities. In the past 6 months VBK Motors has been selected both as a finalist at the QUT BlueBox innovation Challenge as well as a participantto the QUT BlueBox Hardware Accelerator Program. In this Talk, the Co Founder and CEO of VBK Motors will discuss about his experience since the start of his Startup journey, identifying current opportunities available from QUT to promote innovative startups as well as what lies ahead for his young company. | GP-S11 Cantina Lounge |
01-Mar-16 | Cancelled | ||
8-Mar-16 | External guest speaker: Will Browne | Title: Cognitive Systems: Robotic Vision and Learning (Note: The purpose of the talk is to encourage discussion over the next few days of my visit to QUT, so overviews of the topic will be presented.) Abstract: Artificial Cognitive Systems encompasses robots that learn and adapt through exploring their environment. This talk will highlight research into Artificial Cognitive Systems that enables robots to improve autonomous operation. Perception, including robotic vision, is essential in obtaining the state of the world. Advances in salient object detection and pattern recognition will be presented. Also representing, reasoning and learning about appropriate actions for given tasks, such as active SLAM will be outlined. Advances in Affective Computing will be shown for robotic navigation. Finally, methods for artificial systems to scale and reuse information will be outlined. | GP-S11 Cantina Lounge |
15-Mar-16 | Donald Dansereau | Title: Computational Imaging: What has it ever done for me? Abstract: I will briefly introduce the field of computational imaging and discuss recent developments in industry, academia, and within the ACRV. | GP-S11 Cantina Lounge |
22-Mar-16 | External guest speaker: Thibault Schwartz (architect, co-founder of HAL Robotics) | Title: Simplifying machine control for architectural applications. Abstract: The democratization of CAD technologies, perceivable in architecture schools as well as in the construction industry, has, during the last decade, progressively led to the creation of consortia combining architectural academics and their professional counterparts, seeking to extend their morphological research, undertaken at a virtual level, towards a systematic practice of manufacturing of geometrical abstractions. As a result, and taking advantage of lower cost CNC machines, university workshops are becoming genuine micro-factories, although various parameters inhibit the scaling of such experimentations beyond pavilions. We highlight software issues and propose solutions to help architectural robotics move beyond its current limitations, and reach the required robustness to be used on construction sites. | GP-S12 Owen J Wordsworth Room |
29-Mar-16 | Cancelled | ||
5-Apr-16 | External guest speaker: Anne Walsh, and Leanne Kennedy | Title: QUT Trade Controls Abstract: This presentation will outline trade controls, the Defence Trade Control Act 2012 (the Act) and the impact tothe University research sectors. The Act was implemented by the Federal Government in support of Australia’s international obligations to meet strengthened export controls and to prevent sensitive technology that can be used in conventional and weapons of mass destruction programs, from getting into the wrong hands. | GP-S11 Cantina Lounge |
12-Apr-16 | External guest speaker: Chunhua Shen | Title: Dense prediction on images using very deep convolutional networks Abstract: In this talkI will present an overview of my recent results on deep learning. | |
19-Apr-16 |
| Subject to be canceled due tocentre's Robot Vision Summer School (RVSS). | |
26-Apr-16 | David Ball | Title: Interviewing experiences Abstract: Ben asked me to talk about my experiences with interviewing for start-up robotics companies in order to help those who might consider this path. I found the process to be quite different from interviewing for academic and standard engineering positions … and was initially caught by surprise. I’ll describe the process and the range of questions that were asked. Also, I’ll talk a little bit about Modern C++ and its advantages. Then I’ll give some sources which I found useful to prepare for these interviews. Lastly, if there is time, I’ll demonstrate a new machine learning toolbox and methodology which I found while taking a Coursera unit on machine learning. | |
3-May-16 | ACRV meeting | ACRV meeting | |
10-May-16 | ICRA2016 practice talks | (3 mins spotlight pitch and 2-3 mins feedbacks for each speaker.) 1. William Chamberlain | |
17-May-16 | David Hall | Intern experience at Bosch | |
24-May-16 | Stryker (visitor) | Stryker will talk about their work in vision-based navigation for medical robotics. They will also outline what the company Stryker does. | |
31-May-16 | Open slot | ||
7-Jun-16 | Ray Russel | Title: Callings - Finding and Living an Authentic Work / Life Abstract: Ray Russell relates his 35 year quest to find the perfect work-life balance. Take a break from your SLAM, your occupancy grids, your quadratic equations and your grant writing. Part cracker-jack philosophy, part transcendental exploration - for a few minutes, let's examine together the motivations behind why we are all Here in the first place. | |
14-Jun-16 | Ahmed Abbas | Title: Enabling Robots To Assist People During Assembly Tasks By Linking Visual Information With Symbolic Knowledge Representation Abstract: Future robots should have the ability to perform daily tasks in various conditions. One of the future applications of robotics is to assist workers in assembly tasks. The aim of our project is to create a robot that can assist workers in an assembly task. The goal of the project and the current state of our work will be presented in order to receive feedback that could help to improve the future work. | |
21-Jun-16 |
| Open slot | |
28-Jun-16 | Amos Albert | Title: TBA About the speaker: Amos is the CEO of Deepfield Robotics - a subsidiary of Bosch Germany
| |
5-July-16 | Peter Corke | Title: Not your grandmother’s MATLAB This is unashamedly a talk about MATLAB and the gory details thereof. Those of us who use MATLAB tend not to keep up with new functionality as it’s added - they have 3500 people working on enhancing the product: new core functionality and new toolboxes. In this talk I’ll demonstrate (live!) some of the newer features that might be relevant to folk who work in our field: strings (yay!), categorical arrays, tables, tall arrays, graphics, apps, connectors, compilers and coders. | |
12-July-16 | Ben Upcroft | Title: Robotics: Science and Systems Conference Debrief Abstract: I will give an update on the conference overall and the Deep Learning Workshop organised by Juxi and Niko. I’ll highlight a couple of talks/papers that I found very interesting. The first presentation was by Raia Hadsell from Deep Mind on Progressive Networks which enabled rapid transfer from simulation to real robots. The second presentation was by Dieter Fox from U. of Washington on how deep learnt features dramatically improved hand and gesture recognition. | |
19-July-16 | Gavin Suddrey | Title: Learning Tasks from Natural Language Dialogue Abstract: Providing robots with the ability to learn everyday tasks, such as cleaning, directly from users within the environment will allow them to be adapted to a wide variety of real-world problems, including aged and disability care. Previous research in task learning has focused on two key approaches: learning from demonstration, in which the agent observes the user performing the task; and learning from natural language, in which the agent learns from a spoken/written description of the task. While both approaches are complimentary in nature, for the purpose of this talk we will focus on the latter. We will discuss the results of our recently published work, in which we demonstrated a task learning/planning approach that enabled a robot to both learn generalizable tasks from natural language inputs and exploit domain knowledge during planning. In addition, we will provide an overview of the current direction of our work, which includes learning generalizable tasks from situation specific explanations, as well as recognising repeatable patterns for repetitive tasks. | |
26-July-16 | Adam Tow | Title: How to place 6th in the Amazon Picking Challenge Abstract: In early March, team ACRV was selected as one of 16 teams to participate in this year's Amazon Picking Challenge. This talk will summarise what followed. In particular, I will highlight some of the key lessons we learned as well as the tools and processes that worked and didn't work for us. I'll also mention my ideas on how team ACRV might win next years APC. | |
2-Aug-16 | Donald Dansereau | Title: Light Fields: Has it been 20 years already?! Abstract: On the 20th anniversary of the seminal paper by Levoy and Hanrahan, I'll review recent developments in this still-growing field. I'll also discuss my upcoming move to the Stanford Computational Imaging Lab, and some of the work going on there. Finally I'll cover some of the present and ongoing work in light field imaging here at QUT. | |
9-Aug-16 | Duncan Campbell | Title: Where are UAVs at and how can we get them connected with the Internet of Things (or Industry V4)? Abstract: UAVs, or flying robots to some, are becoming ever closer to the ubiquitous technology often touted. I will present a snapshot of where we are at in terms of widespread adoption of UAVs in our airspace to do really useful and economically beneficial things, and what big challenges remain. UAVs can form a critical sensing front-end and actor in the context of the Internet of Things (IoT), also known as Industry V4 in the industrial context. Industry is well progressed down the path of large scale systems integration and open data communication protocols, which has much to offer in terms of integrating multiple UAV systems, and possibly that of land and sea robotic platforms. The second part of the presentation will present a framework and early work on how heterogeneous robotic platforms may benefit from the industrial automation world to provide seamless data communication between intelligent sensing platforms in the field, and the realms of big data, cloud computing and decisioning. I will encourage discussion on this aspect as there are some great things that we can trial across the discipline and have more of our platforms interconnected and connected. | |
16-Aug-16 | Feras Dayoub | Title: Robotics Deployment of Machine Learning Abstract: For a robotic application, training a machine learning model is generally not the end of the project. Even if the purpose of the model is to obtain knowledge about certain aspects of a dataset, the knowledge gained, to be useful, need to be generalised to cover new data that the robot will feed to the model during its deployment, however, most of these models fail to demonstrate the same level of performance, shown on their test set, when deployed on a robot. In this talk, I'll highlight some of the lessons I learned while deploying supervised machine learning on mobile robots. If you are new to machine learning and you would like to use it ion your robot or if you are an expert in the topic and you would like to hear about the deployment stage, or if you are remotely interested in the subject, this talk will give you a wide overview and I hope it will stimulate discussion beyond the presented ideas. | |
23-Aug-16 | Adjunct Associate Prof Oran Rigby | Title: Medical rescue, training and future use of robotics Abstract: A review of how robotics currently and in the future may influence the delivery of critical care in the prehospital and medical environment focusing on patient rescue, remote diagnostics, and the opportunities for remote therapeutics in critical clinical decision pathways. | |
6-Sep-16 | Matthew Dunbabin | ||
13-Sep-16 | Julia Davies | Title: STEM Connectors – where STEM experts and schools connect. Abstract: Dr Julia Davies will present an overview of this new STEM engagement program, where teachers invite experts into their classroom (via Skype or other means of video-telephony) to show students the relevance and application of STEM. For any researchers who subsequently may be interested in getting involved, Julia will then lead you through the registration process to create a profile page. Check out https://stemconnectors.qld.edu.au/#/ | |
20-Sep-16 | Jacob Bruce | Title: Robots, neurons, and the fabric of reality Abstract: I'll talk about lessons learned from robotics work in Vancouver, current and future directions toward better robots, and some reasons to think those directions might work. I'll introduce my current research in computational neuroscience and its value to robotics, and zoom out to the big picture to address the long-term ways in which I think robotics can impact neuroscience, philosophy, and the universe. | |
27-Sep-16 |
| No seminar - RoboVis 2016 | |
4-Oct-16 | Jason Kulk | Title: AgBotII Software Development and Latest Results Abstract: The AgBotII was built at QUT as part of the Strategic Investment in Farming Robotics which is now nearing the successful completion of all milestones. These milestones included the development of the platform, weed management in the field, autonomous replenishment and fertilising in the field. In this seminar I will firstly talk a little bit about the software inside the AgBotII that made achieving these milestones possible, including the scale and size of the software development task. Secondly, I will present a summary of the most recent round of completed milestones, including the autonomous docking, refilling, recharging and broadcast fertilising. Finally, I will talk about using the Gazebo simulation which appears to be underutilised in the lab. | |
11-Oct-16 | Thierry Peynot | Title: Mining Robotics at QUT and MINExpo 2016 Abstract: In the first part of this talk I will give a quick overview of the current status of activities in robotics and automation for the mining industry at QUT, including our membership in CRCMining/Mining3, recently confirmed projects such as the Advance Queensland Innovation Partnership “Automation-Enabling Positioning for Underground Mining”. Some of the many opportunities for the future will also be mentioned. In the second part, I will discuss impressions of MINExpo 2016, the largest mining exhibition in the world, which was just held in Las Vegas. This will include a preview of a 400+ ton autonomous truck. | |
18-Oct-16 | Peter Corke | Report on IROS 2016 | |
25-Oct-16 | Daniel Richards | Title : Alone in the dark : robotic vision in low light environments Abstract : In most cases it is expected that robots can operate 24 hours a day this means 50% of their operational time is night where lighting may be insufficient. This talk will look at how cameras operate in these low light environments, sources of noise, the effects of demosaicing and why it is important for use to consider these things when testing robotic vision algorithms. Next I will briefly cover so other camera technologies that could be helpful in low light conditions and finish with an overview of what I hope to accomplish throughout my PhD. | |
1-Nov-16 | Christopher McCool | Title: Vision for Agricultural Robotics Abstract: In this presentation I'll give an overview of the vision systems developed during the SIFR project for AgBot II (weed classification) and Harvey (crop segmentation and detection). The vision systems are used for a range of detection, classification and segmentation tasks making use of traditional vision features through as well as incorporating recent advances in convolutional neural networks. | |
8-Nov-16 | Frederic Maire | Title: Commonwealth Bank robotics initiative Abstract: Commonwealth Bank (CBA) has purchased a REEM humanoid robot in order to explore potential use cases for robotics within the financial services industry. The Bank and Stockland Retail Services Pty Limited (Stockland) have entered into a project agreement to run a range of robotics experiments. In this context, the Australian Technology Network of Universities (ATN) directorate has worked with CBA to devise an initiative offering opportunities to teams of students to engage in social robotics and robotics coding research during Semester 2 of 2016. Last month a group of three QUT undergraduate students enrolled in BEB801 went to demo their work at CBA Innovation Lab in Sydney. In this talk, I will discuss the demonstrations presented in Sydney. In particular, I will explain how the QUT team successfully managed to have the REEM robot play a game of "Simon Says". The key module of this system is a deep neural network that takes as input a single image and predicts the pose of the skeleton of the person closest to the center of the image. | |
15-Nov-16 | Michael Milford | Title: Uni and start-up adventures in Austria, Switzerland, Germany, Italy, China and Singapore: A Pictorial Journey Abstract: I'll cover the more interesting aspects of several international trips this year to a number of universities, start-ups and companies in Europe and Asia. | |
22-Nov-16 | Juxi Leitner | Title: “Deep Adventures in Germany, Portugal, England and France” Abstract: Reporting on some of the activities in European labs around robotic vision, adaptive systems, and deep learning. Labs I visited include, CITEC (Cognitive Interactive Technology Cluster of Excellence) at the University of Bielefeld, VisLab at the Istituto Superiore Tecnico Lisbon, Oxbotica, DeepMind, Lagadic at inria Rennes, and SoftBank Robotics Europe. | |
29-Nov-16 | Ben Sand | Title: How to Build a World #1 Robotics Company. Abstract: Ben Sand will share insights from his time in Silicon Valley and from coaching people to build highly technical companies. Ben is a co-founder of Meta which raised AU$100M over 3.5 years. Meta builds augmented reality hardware with a strong computer vision component. Key hires included Prof. Steve Feiner (Columbia), Prof. Steve Mann (Toronto), Jayse Hansen (creator of graphics from Iron Man, Avengers), and Alan Beltran (head of hardware for Google's Project Tango). Ben has experience creating high quality partnerships with universities were both tech companies and universities gain and will overview some of the models he has used previously. | |
6-Dec-16 | Obadiah Lam & Ben Talbot | Title: Human Cues for Robot Navigation Abstract: This talk covers the outcomes of the Discovery Project "Human Cues for Robot Navigation". We set out to investigate how robots could use navigation cues in environments designed for humans. We will cover some different types of spatial symbolic information, and how a robot can use this information as cues for navigation. Along the way, the robot must deal with the fluidity and ambiguity naturally inherent in these cues. The talk will then move to robot vision approaches for locating such cues in the world. We focused on textual cues, including signs and room labels, which reduces to a wild text spotting problem with an unseen lexicon. Finally, occlusions and specular highlights can prevent the robot from reading textual cues in the world, and we present a method for repositioning the robot to reduce the impact of specular highlights. | |
13-Dec-16 | Tim McLennan & Tim MacTaggart | QUT Bluebox Title: Commercialisation models Interactive discussion: outline of setting up a company called Q-botics, what are other models of commercialisation and what would we need to think about with those if there were to be exercised and what are some of the technology or models you are thinking about. The Uni is also doing a lot in the building entrepreneur spaces (startups and etc) | |
20-Jan-17 | Anton Milan (Uni of Adelaide) | Title: Amazon Picking Challenge 2016: Team NimbRo of University of Bonn Abstract: Automation in warehouses is becoming increasingly important in order to relieve humans from mundane and heavy tasks. This talk will present Team NimbRo's successful solution for this year's Amazon Picking Challenge. We will first give a broad overview of the entire system and then focus on two challenging aspects. First, motion generation using a highly flexible IK-based keyframe interpolation framework featuring null space cost optimization. Second, our approach to object perception, which includes online learning from deep features, semantic segmentation on GPUs using pre-trained models, as well as 6D object pose estimation for better grasp point selection. Finally, we will point out the most difficult items for our setup and our approaches to handle them. | |
31-Jan-17 | Luis Mejias Alvarez | Title: Experiences in Flight Testing on Manned/Unmanned Aircraft Abstract: In this presentation, I will provide an overview of the approach followed to test our research in a manned aircraft (Cessna 172) and a fixed–wing UAV. The software architecture developed allowed us to transparently execute on both aircraft the same core algorithms with minimal changes. Reusability, modularity and transparency were the criteria when developing this architecture allowing for seamless switching between simulation and real flight testing. | |
07-Feb-17 | Niko Suenderhauf | Title: A gentle introduction to generative models and Bayesian deep learning Abstract: In this talk I will give a gentle introduction to two of the most regarded research topics at the recent NIPS (Neural Information Processing Systems) conference: generative models and Bayesian deep learning. Both techniques are not yet widely adopted in our community, but have the potential to overcome many of the deficiencies of current deep learning approaches for robotic applications where real-world robustness is paramount. Typical deep neural networks, such as used by many in our group, are trained as discriminative classifiers. Generative models are more powerful in the sense that they go beyond mere classification and attempt to learn the true distribution of the data instead. This is highly beneficial for robustness in situations where new unknown classes are regularly encountered, or when training has to be weakly supervised due to the high costs of obtaining labeled data. I will cover two recent techniques for generative models: generative adversarial networks and variational autoencoders. Another shortcoming of typical deep neural networks is that they are not able to properly represent their uncertainty in a classification. Instead, they merely exhibit uncalibrated confidence scores. While this meets the requirements for in-dataset classification (such as the ImageNet or COCO challenges), robotic systems that have to make decisions and act in the physical world based on a neural network's output, need trustworthy uncertainty information. Bayesian deep learning provides the techniques to achieve this. | |
10-Feb-17 | Donald G. Dansereau | Title: Computational Imaging for Robotic Vision Abstract: This talk argues for combining the fields of robotic vision and computational imaging. Both consider the joint design of hardware and algorithms, but with dramatically different approaches and results. Roboticists seldom design their own cameras, and computational imaging seldom considers performance in terms of autonomous decision-making. The union of these fields considers whole-system design from optics to decisions. This yields impactful sensors offering greater autonomy and robustness, especially in challenging imaging conditions. Motivating examples are drawn from autonomous ground and underwater robotics, and the talk concludes with recent advances in the design and evaluation of novel cameras for robotics applications. | |
10-Feb-17 | Mohammed Deghat | Title: Distributed Multi-Robot Formation Control Abstract: Multi-agent systems are progressively being used in a broad range of modern applications such as multi-robot or multi-vehicle coordination and control, air traffic management systems, control of sensor networks, unmanned vehicles, energy systems and logistics.This presentation will review a number of concepts and results on multi-agent system control and will consider the types of communication, control and sensing architecture that allow preservation of the formation shape. It is assumed that the amount of sensing, communication and control computation by any one agent is limited. For example, each agent is only able to communicate over a limited range, and can only measure or receive its neighbours' state information. | |
14-Feb-17 | Chris Lehnert | Title: Robotic Manipulation in Real World Environments Abstract: In this presentation, I will describe the development of robotic systems for manipulation in real world environments such as agriculture and warehouse automation. I will briefly outline the methods developed for manipulation in agriculture and how they can also be deployed to solve manipulation problems for warehouse automation. The presentation will focus on Harvey, a robot which autonomously harvests capsicum in a greenhouse. The horticulture industry remains heavily reliant on manual labour, and as such is highly affected by labour costs. In Australia, harvesting labour costs in 2013-14 accounted for 20% to 30% of total production costs. These costs along with other pressures such as scarcity of skilled labour and volatility in production due to uncertain weather events are putting profit margins for farm enterprises under tremendous pressure. Robotic harvesting offers an attractive potential solution to reducing labour costs while enabling more regular and selective harvesting, optimising crop quality, scheduling and therefore profit. Autonomous harvesting is a particularly challenging task that requires integrating multiple subsystems such as crop detection, motion planning, and dexterous manipulation. Further perception challenges also present themselves, such as changing lighting conditions, variability in crop and occlusions. We have demonstrated an effective vision-based algorithm for crop detection, two different grasp selection methods to handle natural variation in the crop, and a custom end-effector design for harvesting. Experimental results in a real greenhouse demonstrate successful grasping, detachment and overall harvesting rates. We believe these results represent an improvement on the previous state-of-the-art and show encouraging progress towards a commercially viable autonomous capsicum harvester. | |
16-Feb-17 | Vigil Varghese | Title: Angle Sensitive Imaging: A New Paradigm for Light Field Imaging Abstract: Imaging is a process of mapping information from higher dimensions of a light field into lower dimensions. Conventional cameras do this mapping into two dimensions of the image sensor array. These sensors lose directional information contained in the light rays passing through the camera aperture as each sensor element integrates all the light rays arriving at its surface. Directional information is lost and only intensity information is retained. This talk takes you through a host of ideas to decouple this link and enable image sensors to capture both intensity and direction without sacrificing much of the spatial resolution as the existing techniques do. Some of the ideas that we explore in this talk are differential quadrature pixels, polarization pixels, multi-finger pixels and combinations of these to effectively capture the angular information of light by consuming only a very small imager area. These advances are facilitated by the miniaturization of the CMOS fabrication processes and enable low cost, robust computational cameras. The presented work builds heavily on the theoretical premise laid down by the prior work on multi-aperture imaging. Practical aspects are modeled on the diffraction based Talbot effect. The presented solutions fall into a general category of sub-wavelength apertures and is a one-dimensional case of the same. These solutions enable a rich set of applications among which are fast response auto-focus camera systems and single-shot passive 3D imaging. | |
21-Feb-17 | Aaron McFadyen | Title: Visual Servoing - Alternate Approaches and Applications Abstract: Humans use vision as feedback to help control their actions all the time, particularly when operating vehicles such as cars, heavy machinery and aircraft. If we want to remove the human operator such that these vehicles or agents become autonomous, then replicating some of the control tasks may require the use of vision-based control or visual servoing. This seminar explores how visual servoing can be used to control such mobile agents or robots. First, I will provide a brief introduction to visual servoing (including position and image-based control frameworks), as well as a step by step guide on how to derive a classical image-based visual controller. Second, I will introduce some new (non-classical) image-based visual servoing approaches that leverage alternative control frameworks to provide additional benefits such as guaranteed stability, constraint satisfaction and the removal of feature tracking requirements. The goal of this seminar is to highlight the design considerations and potential benefits and drawbacks when contemplating the use of visual servoing for autonomous robot control. For those new to visual servoing, this should provide suitable background information to further explore the subject matter. For those already familiar with visual servoing, the material should help you to decide what approaches may be suitable for your application. Throughout the seminar, various concepts will be highlighted with the aid of example applications for unmanned aircraft (drone) control including some core functionality (collision avoidance) and application specific tasks (control of a suspended load). | |
23-Feb-17 | Anton Milan | Title: Multi-target Tracking: Challenges and Solutions Abstract: Despite significant progress, the problem of tracking multiple targets in crowded real-world scenarios is still far from solved. The task is highly relevant for a wide range of applications in robotics and computer vision, including autonomous vehicles, surveillance, video analysis and life sciences. In this seminar, I will present the remaining challenges to be addressed and some of the recently proposed solutions. In particular, I will comment on the differences between online and batch approaches and emphasise the importance of a centralised benchmark to advance the state of the art. | |
23-Feb-17 | Guilherme Maeda | Title: Semi-Autonomy in Human-Robot Collaboration Abstract: Semi-autonomous robots are robots whose actions are, in part, functions of human decisions. Semi-autonomy allows robots to interact with a human partner in a collaborative manner. Potential applications can vary from the assembly of products in factories, to the aid of the elderly at home, to the shared control in teleoperated processes. However, the sense-plan-act paradigm established by industrial robotics does not account for the interaction with humans, and methods to program collaborative robots are still unclear. In this talk, I will introduce interaction primitives, a data-driven approach based on the use of imitation learning, for learning movement primitives for human-robot interaction. The core idea is to learn a parametric representation of joint trajectories of a robot and a human from multiple demonstrations. Using a probabilistic treatment, the method uses the correlation between the learned parameters such that the robot task and trajectory can be inferred from human observations. As a proof-of-concept, experiments with a 7-DoF lightweight arm collaborating with a human to assemble a toolbox will be shown. | |
23-Feb-17 | Neil Dantam | Title: Language, Logic, and Motion: Synthesizing Robot Software Abstract: Robots offer the potential to become regular helpers in our daily lives, yet challenges remain for complex autonomy in human environments. We address the challenge of complex autonomy by automating robot programming. Many useful robot tasks combine discrete decisions about objects and actions with continuous decisions about collision-free motion. We introduce a new planning framework that reasons over the combined logical and geometric space in which the robot operates. By grounding this planning framework in formal language and automata theory, we achieve not only efficient performance but also verifiable operation. Finally, such a rigorously grounded framework offers a firm base to scale to large domains, handle uncertainty in the environment, and incorporate behaviors learned from humans. | |
28-Feb-17 | Fangyi Zhang | Title: My trip to the US: Experience in enabling robots to manipulate in a kitchen scenario Abstract: I am going to share my experience in a three-month project for objects manipulation in a kitchen scenario at the University of Maryland. The project consists of three tasks: fetching an object from a fridge, heating it using a microwave, and cleaning a table after dinner. The solution is based on a Baxter robot with a mobile base, mostly implemented using engineering techniques except deep learning for object recognition. In the talk, I will introduce the solution and show some demo videos. | |
10-Mar-17 | Simon Lucey | Title: The Fast & the Compressible” - Reconstructing the 3D World through Mobile Devices Abstract: Mobile devices are shifting from being a tool for communication to one that is used increasingly for perception. In this talk we will discuss my group’s work in the rapidly emerging space of using mobile devices to visually sense the 3D world. First, we will discuss the employment of high-speed (240+ FPS) cameras, now found on most consumer mobile devices. In particular, we will discuss how these high frame rates afford the application of direct photometric methods that allow for - previously unattainable - accurate, dense, and computationally efficient camera tracking & 3D reconstruction. Second, we will discuss how the problem of object category specific dense 3D reconstruction (e.g. “chair”, “bike”, “table”, etc.) can be posed as a Non-Rigid Structure from Motion (NRSfM) problem. We will discuss some theoretical advancements we have made recently surrounding this problem - in particular when one assumes the 3D shape being reconstructed is compressible. We will then relate these theoretical advancements to practical algorithms that can be applied to most modern mobile devices. | |
14-Mar-17 | Sareh Shirazi | Title: Human Action Understanding as a Pathway to Human-Robot Collaboration Abstract: In this talk, I will first cover the motivating applications of human action recognition in the real world. Then, I will talk about some basics about temporal feature extraction such as 3D space-time interest point detection, optical flow features, temporal templates, dense trajectories, and motion boundary histograms. | |
21-Mar-17 |
| Peter Corke, Timo Korthals (PhD student at Bielefeld), Thomas Schöpping (PhD student at Bielefeld), Stephen James (PhD students at Imperial College London) | |
28-Mar-17 | Quentin Bateux | Title: "Going further with direct visual servoing methods" Abstract: This talk will be about different ways to improve the performances of direct visual servoing positioning methods, ranging from the use of global descriptors, particular filters and ultimately CNNs. | |
04-Apr-17 | Ajay Pandey | Title: Advanced Organic Optoelectronics for Making Robots See and Sense Better Abstract: We see and feel the world by the sense of vision and touch that is brought to us by our eyes and skin. The rise of robotics would most certainly require robust vision and rich sensation for dexterous manipulation of soft objects and safe human-robot interaction. In this talk, I will first introduce the field of organic optoelectronics and discuss its potential in advancing current robotic vision and tactile sensing platforms. This will be followed by some of my most recent research on the design and development of advanced optoelectronic sensors for low level light sensing, reversible pixel operation, multi-spectral pixel design and tactile sensors that can be embedded in robotic arms of different shapes and forms for sophisticated sensing and smart functionality. I’ll also discuss some of my ongoing collaborative projects on brain-computer interface (with QBI/UQ) and night vision (with MIT). | |
11-Apr-17 | GuillermoGallego | Title: Event-Based Vision Algorithms for Mobile Robotics Abstract: Event cameras, such as the Dynamic Vision Sensor (DVS), are biologically inspired sensors that present a new paradigm on the way that dynamic visual information is acquired and processed. Each pixel of an event camera operates independently from the rest, continuously monitoring its intensity level and transmitting only information about brightness changes of given size ("events") whenever they occur, with microsecond resolution. Hence, visual information is no longer acquired based on an external clock (e.g. global shutter); instead, each pixel has its own sampling rate, based on the visual input. This different representation of the visual information offers significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. This talk will focus on the research carried out at the Robotics and Perception Group (University of Zurich) on the development of such algorithms for ego-motion estimation and scene reconstruction, so that a robot equipped with an event camera can build a map of the scene and infer its pose with respect to it. | |
18-Apr-17 | Leo Wu | Title: Product of Exponentials formula – An alternative approach to modelling your robot Abstract: Kinematics is a fundamental topic in robotics. Denavit-Hartenberg (DH) model has been a standard approach to modelling the kinematics of a robot and has been adopted for decades. This talk will introduce another method referred to as the Product of Exponentials formula (POE), which has been gaining increasing popularity as an alternative model. After describing the basic ideas of POE and comparing it to DH, the talk will show the equivalence between these two models, i.e., they can be converted into each other analytically. Finally, the talk will discuss a few examples using the POE model and show that in some circumstances, the POE model provides a simpler and more insightful interpretation of the kinematics of a robot. | |
02-May-17 | Valerio Ortenzi | Title: Vision-based trajectory control of unsensored robots to increase functionality, without robot hardware modification Abstract: In nuclear decommissioning operations, very rugged remote manipulators are used, which lack proprioceptive joint angle sensors. Hence these machines are simply tele-operated, where a human operator controls each joint of the robot individually using a teach pendant or a set of switches. Moreover, decommissioning tasks often involve forceful interactions between the environment and powerful tools at the robot's end-effector. Such interactions can result in complex dynamics, large torques at the robot's joints, and can also lead to erratic movements of a mobile manipulator's base frame with respect to the task space. My work seeks to address these problems by, firstly, showing how the configuration of such robots can be tracked in real-time by a vision system and fed back into a trajectory control scheme. Secondly, my work investigates the dynamics of robot-environment contacts, and proposes several control schemes for detecting, coping with, and also exploiting such contacts. Several contributions are advanced. Specifically a control framework is presented which exploits the constraints arising at contact points to effectively reduce commanded torques to perform tasks; methods are advanced to estimate the constraints arising from contacts in a number of situations, using only kinematic quantities; a framework is proposed to estimate the configuration of a manipulator using a single monocular camera; and finally, a general control framework is described which uses all of the above contributions to servo a manipulator. The results of a number of experiments are presented which demonstrate | |
09-May-17 | Andres Marmol | Title: Evaluation of keypoint detectors and descriptors in arthroscopic images for feature-based matching applications Abstract: Knee arthroscopy is the most common minimally invasive orthopaedic procedure in the world. During this procedure, a camera and an arthroscope allow surgeons to observe unstructured and narrow views of the inside of the knee. Given visually challenging monocular images, the surgeon needs to a) estimate where the camera and the instruments are within the knee, b) maintain a mental map of the knee environment, and c) perform the appropriate therapeutic action while manipulating multiple instruments. These tasks are both mentally and physically demanding for surgeons and often lead to involuntary injury in patients. Surgeons would strongly benefit from systems that can continuously map the inside of the knee, localize the arthroscope and surgical tools, and control instruments using visual information. In this talk I will provide a quick overview of the research around robotic assisted knee arthroscopy within the Medical and Healthcare robotics group. I will then present in detail the outcomes of a recent submission to RA-L entitled “Evaluation of keypoint detectors and descriptors in arthroscopic images for feature-based matching applications”. I will conclude the talk with an overview of future research directions. | |
16-May-17 | Mark McDonnell | ||
23-May-17 | ICRA Practice | David Hall: Towards Unsupervised Weed Scouting for Agricultural Robotics Leo Wu: Dexterity analysis of three 6-DOF continuum robots combining concentric tube mechanisms and cable driven mechanisms Michael Milford: Deep Learning Features at Scale for Visual Place Recognition Fahimeh Rezazadegan: Action Recognition: From Static Datasets to Moving Robots Juxi Leitner: ACRV Picking Benchmark | |
30-May-17 | Jason Ford | Title: Our journey with Hidden Markov Model filters for vision-based aircraft detection Abstract: A short overview of our ten-year journey with HMM filters for aircraft detection. I will briefly highlight key milestones and advancements and show a glimpse of recent developments. | |
13-Jun-17 | John Skinner | Title: Tools for Robot Vision research: scalable experiments and databases Abstract: In order to perform experiments in robot vision, we have to write a lot of surrounding code that sends data to the system under test and interprets the results. Because each dataset and each robot vision system handles input differently, this code grows ever larger and more complex. In this talk, I'm going to describe how I tackle this problem, and how I use MongoDB to manage all the data and metadata around running experiments. Hopefully some of these tools and solutions will be useful for you when conducting your research. (A small illustrative sketch appears after this table.) | |
20-Jun-17 | James Mount | Title: Entrepreneurship Abstract: This short seminar will present the work I have done outside of my PhD, including: - Hacking an RC car and developing a demo for Robotronica, - The various methods to crowd fund an idea/start-up, - The lessons learnt from running a successful KickStarter, and - Applying for MIT's Global Entrepreneurship Bootcamp. | |
27-Jun-17 | Sean McMahon | Title: Multi-Modal Trip Hazard Detection On Construction Sites Abstract: Trip hazards are a significant contributor to accidents on construction and manufacturing sites, where over a third of Australian workplace injuries occur [1]. Current safety inspections are labour intensive and limited by human fallibility, making automation of trip hazard detection appealing from both a safety and economic perspective. Trip hazards present an interesting challenge to modern learning techniques because they are defined as much by affordance as by object type; for example, wires on a table are not a trip hazard, but can be if lying on the ground. To address these challenges, we conduct a comprehensive investigation into the performance characteristics of 11 different colour and depth fusion approaches, including 4 fusion and one non-fusion approach, using colour and two types of depth images. Trained and tested on over 600 labelled trip hazards over 4 floors and 2,000 m² in an active construction site, this approach was able to differentiate between identical objects in different physical configurations (see Figure 1). Outperforming a colour-only detector, our multi-modal trip detector fuses colour and depth information to achieve a 4% absolute improvement in F1-score. These investigative results and the extensive publicly available dataset move us one step closer to assistive or fully automated safety inspection systems on construction sites. | |
4-July-17 | Feras Dayoub | Feras's top ten favourite papers from ICRA2017 (download slides here) | |
11-July-17 | Henrik Christensen | Bio: Dr. Henrik I. Christensen is a Professor of Computer Science in the Dept. of Computer Science and Engineering, UC San Diego. He is also the director of the Institute for Contextual Robotics. Dr. Christensen does research on systems integration, human-robot interaction, mapping and robot vision. The research is performed within the Cognitive Robotics Laboratory. He has published more than 350 contributions across AI, robotics and vision. His research has a strong emphasis on "real problems with real solutions". He is actively engaged in the setup and coordination of robotics research in the US (and worldwide). Dr. Christensen received the Engelberger Award 2011, the highest honor awarded by the robotics industry. He was also awarded the "Boeing Supplier of the Year 2011". Dr. Christensen is a fellow of the American Association for the Advancement of Science (AAAS) and the Institute of Electrical and Electronics Engineers (IEEE). His research has been featured in major media outlets such as CNN, the NY Times and the BBC. | |
18-July-17 | Will Maddern | Title: Dealing with change in large-scale urban localisation Abstract: Autonomous vehicles in urban environments encounter a wide range of variation, including illumination, weather, dynamic objects, seasonal changes, roadworks and building construction. These changes occur over a range of timescales, from the day-night illumination cycle to construction that can span multiple years. In this talk I will discuss the challenges we have encountered during long-term autonomy trials in Oxford, Milton Keynes and Greenwich, and present two of our newest approaches to dealing with change in both localisation and mapping with vision and LIDAR. I will also cover our Oxford RobotCar Dataset and the upcoming long-term autonomy benchmark due in late 2017. | |
25-July-17 | Anders Eriksson | Title: Duality and Robotic Vision | |
1-Aug-17 | Troy Bruggemann | Title: Evaluating UAS Team Reliability Abstract: There is a need for enabling greater efficiency, utilization and safety of Unmanned Aircraft Systems (UAS) operating in teams with humans in the loop. UAS are limited in their ability to cope with and continue their missions in the presence of failures and other things going wrong, and for this reason high human capital is typically required to support their safe operation. This talk discusses how to assess and design for UAS team reliability with humans in the loop. | |
8-Aug-17 | Tim Molloy | Title: Inverse Dynamic Games Abstract: Inverse dynamic games concern the problem of recovering the underlying objectives of players in a dynamic game from observations of their optimal strategies. This problem arises naturally in the study of economics, biological systems, cooperative automation, and conflict scenarios. Despite its many potential applications, the theory of inverse dynamic games has received limited attention. In this talk, recent advances in the theory of inverse dynamic games, made possible by exploiting the minimum (or maximum) principle of optimal control, will be presented. The potential application of this work to autonomous collision avoidance will also be discussed. (One way to formalise the problem is sketched after this table.) | |
15-Aug-17 | Michael Rosemann | ||
22-Aug-17 | Andra Keay | ||
29-Aug-17 | Clemens Eppner | Title: The Strange Case of Grasping with Soft Hands - Exploiting Dr. Jekyll and Taming Mr. Hyde Abstract: Squashy and flexible robotic end-effectors such as the RBO Hand 2 provide opportunities (Dr. Jekyll) and challenges (Mr. Hyde) for long-standing problems in grasping and manipulation. Opportunities, because getting into contact is easy and forgiving and the mechanical compliance of soft hands creates large basins of attraction when grasping objects. On the other hand, controlling soft hands exhibits significant challenges: good contact models are missing and sensor feedback is limited. In this talk I will present a high-level grasp planner that exploits environmental contact and a low-level control method which learns models of simple manipulations for a soft hand. | |
5-Sep-17 | Lesley Jolly | Dr. Lesley Jolly holds a PhD in Anthropology and has worked alongside engineering educators throughout her career to improve learning in STEM. She has facilitated the AAEE Winter School (http://www.aaee.net.au/index.php/news1/events/239-aaee-winter-school-university-of-technology-sydney-10-14-july-2017) for many years and is a wealth of knowledge on everything to do with Engineering Education and its various pedagogies (flipped classroom, project-based learning, problem-based learning, etc.). | |
12-Sep-17 | No Seminar | ICRA deadline | |
20-Sep-17 | Qi Wu | Title: Turing Test 2.0 - Vision and Language Abstract: The fields of natural language processing (NLP) and computer vision (CV) have seen great advances in their respective goals of analysing and generating text, and of understanding images and videos. While both fields share a similar set of methods rooted in artificial intelligence and machine learning, they have historically developed separately. Recent years, however, have seen an upsurge of interest in problems that require a combination of linguistic and visual information. For example, Image Captioning and Visual Question Answering (VQA) are two important research topics in this area. Image captioning requires the machine to describe the image using human-readable sentences, while VQA asks a machine to answer language-based questions based on the visual information. In this talk I will outline some of the most recent progress, present some theories and techniques for these two Vision-to-Language tasks, and show a live demo of image captioning and Visual Question Answering. I will also cover some recent hot topics in the area, such as Visual Dialog. | |
26-Sep-17 | Dominic Jack | Title: Shallow Networks for Inverse Projection in 3D Human Pose Estimation Abstract: Projecting a 3D scene onto a 2D image is a relatively straightforward and well-understood process common in computer vision. The inverse problem - recovering a 3D scene from a single 2D projection - is inherently ill-posed. Deep neural networks have been shown to perform well at this task by learning patterns from large datasets, though most fail to take advantage of the inverse nature of the problem. This talk will cover a couple of approaches we have taken to learning small, shallow networks to embed within a typical optimization framework, and discuss areas we are looking to pursue. | |
3-Oct-17 | Will Chamberlain | Title: Borrowing eyes: robotic vision beyond line-of-sight Abstract: We can expand robots' vision envelope beyond line-of-sight with data from remote cameras, and exploit fast communications to gather visual information on demand. We can also use smart cameras to distribute the image processing as well as image capture, enabling robots to be cheaper, and scaling to a large number of remote cameras. This talk will cover my approach to distributed robotic vision on mobile phone smart cameras, and some of the challenges of distributed vision: describing robots’ information needs, managing a changeable set of available cameras, and aggregating conflicting data. | |
10-Oct-17 | No Seminar | RoboVis 2017 in Tangalooma | |
17-Oct-17 | Michael Lucas | Title: Opportunities and Challenges for Automation, Robotics, and Computer Vision in Australian Supply Chains (a practitioner's perspective) Abstract: I'll talk about my experiences with Automation, Robotics and Computer Vision in supply chain applications around the world over the last 20 years (with brief case studies), and where the future challenges and opportunities are given the current market and industrial relations climate. | |
24-Oct-17 | Doug Morrison | Title: Robotic Grasping: A brief history of robots picking things up Abstract: Robotic grasping has been studied for decades, and a wide variety of techniques have been developed for synthesising stable grasps. I will present a brief overview of the robotic grasping literature and its techniques, from analytical methods to data-driven and more modern machine learning approaches which show great potential in robotic grasping. Finally, I will discuss how this leads into my PhD research topic. | |
31-Oct-17 | Fan Zeng | Title: SESAME and SVM for Underground Visual Place Recognition Abstract: Autonomous vehicles are increasingly being used in the underground mining industry, but competition and a challenging market are placing pressure on autonomous vehicle technology for further improvements in cost, infrastructure requirements, robustness in varied environments, and versatility. In this seminar I will share some of our recent work on several new vision-based techniques for underground visual place recognition that improve on currently available technologies while only requiring camera input. I will present a Shannon Entropy-based salience generation approach (SESAME) that enhances the performance of single-image-based place recognition by selectively processing image regions. I will also discuss the effectiveness of adding a learning-based scheme realised by Support Vector Machines (SVMs) to remove problematic images. The approaches have been evaluated on new large real-world underground mining vehicle datasets, and their generality has been demonstrated on a non-mining benchmark dataset. Together this research serves as a step forward in developing domain-appropriate improvements to existing state-of-the-art place recognition algorithms that will hopefully lead to improved efficiencies in the mining industry. (The entropy measure at the core of SESAME is sketched after this table.) | |
07-Nov-17 | Sourav Garg | Title: Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition Abstract: | |
14-Nov-17 | Norton Kelly-Boxall | Title: The Progress of 3D Printing and how it Could be Useful for Research Abstract: 3D printing has recently become very popular, with consumer-grade printers making it easier and easier to download a file and hit print. With this in mind, how can we as researchers utilise this technology to increase our research productivity? This talk will delve into the different 3D printing technologies and creations that we can use in our demos, hardware development and experiments, so that we can reduce cost, reduce lead time on parts and spend more time on the areas that matter. There will also be a small introduction to the Gummi Arm, a 3D-printed variable-stiffness manipulator that I built and will be working on. | |
21-Nov-17 | Andrea Cherubini | Title: Sensor-Based Control for Navigation and Physical Human-Robot Interaction Abstract: Traditionally, heterogeneous sensor data was fed to fusion algorithms (e.g., Kalman or Bayesian-based), so as to provide state estimation for modelling the environment. However, since robot sensors generally measure different physical phenomena, it is preferable to use them directly in the low-level servo controller rather than to apply them to multi-sensory fusion or to design complex state machines. This idea, originally proposed in the hybrid position-force control paradigm, when extended to multiple sensors brings new challenges to the control design: challenges related to the task representation and to the sensor characteristics (synchronisation, hybrid control, task compatibility, etc.). | |
28-Nov-17 | Juan Jairo Inga Charaja | Title: Human Behaviour Identification Using Inverse Reinforcement Learning Abstract: Recent trends in human-machine collaboration have led to increased interest in shared control systems, where both a human and a machine or automation simultaneously interact with a dynamic system. However, for a systematic control design that enables automation to participate in cooperation with a human, modelling and identification of human behaviour become essential. Considering a model of shared control based on a differential game, the identification problem consists of finding the cost function that describes observed human behaviour. This seminar will show the potential of Inverse Reinforcement Learning techniques for identification in such scenarios. | |
5-Dec-17 | Ashley Stewart, Fangyi Zhang, Sean McMahon, James Mount | ACRA rehearsal | |
12-Dec-17 | No seminar | ACRA | |
19-Dec-17 | Girish Chowdhary | Title: The Robotics are Coming - for your Food! Abstract: | |
26-Dec-17 | No seminar | Xmas |
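The event-generation model described in Guillermo Gallego's abstract above (11-Apr-17) is compact enough to sketch in code. The following is a minimal illustration, not the Robotics and Perception Group's implementation; the function name, the threshold value, and the frame-stack input are our own assumptions:

```python
import numpy as np

def generate_events(log_frames, timestamps, contrast=0.15):
    """Emit (t, x, y, polarity) events whenever a pixel's log-intensity
    changes by at least `contrast` since that pixel last fired.
    `log_frames` is a (T, H, W) stack of log-intensity images and
    `contrast` is an illustrative threshold, not a value from the talk."""
    events = []
    ref = log_frames[0].copy()              # per-pixel reference level
    for t, frame in zip(timestamps[1:], log_frames[1:]):
        diff = frame - ref
        ys, xs = np.where(np.abs(diff) >= contrast)
        for y, x in zip(ys, xs):
            events.append((t, int(x), int(y), int(np.sign(diff[y, x]))))
            ref[y, x] = frame[y, x]         # reset reference after firing
    return events
```

A real event camera does this asynchronously, in per-pixel analog circuitry; the frame-based loop above only approximates that behaviour at the input frame rate.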
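For reference alongside Leo Wu's POE talk above (18-Apr-17), the standard form of the Product of Exponentials formula is worth writing out. This is textbook notation, not material from the talk:

```latex
% Forward kinematics of an n-joint serial robot via the POE formula:
% \xi_i is the twist of joint i, [\xi_i] its 4x4 matrix form in se(3),
% \theta_i the joint variable, and M the end-effector pose at \theta = 0.
T(\theta) = e^{[\xi_1]\theta_1}\, e^{[\xi_2]\theta_2} \cdots e^{[\xi_n]\theta_n}\, M
```

Unlike DH parameters, the joint twists can all be expressed in a single common frame, which is one source of the simpler, more insightful interpretation the abstract mentions.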
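The MongoDB-backed bookkeeping John Skinner describes above (13-Jun-17) can be illustrated in a few lines of pymongo. The database name, collection name, and document schema below are invented for illustration; only the pymongo calls themselves are real API:

```python
# Minimal sketch (invented schema) of keeping experiment runs and their
# metadata queryable in MongoDB, in the spirit of the talk.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["robot_vision_experiments"]   # hypothetical database name

# Record one trial: which system ran, on which dataset, with what results.
db.trials.insert_one({
    "system": "hypothetical-vo-system",
    "dataset": "hypothetical-dataset-01",
    "params": {"feature_count": 2000},
    "results": {"trajectory_error_m": 1.3},
})

# Later, pull every trial of that system back out for analysis.
for trial in db.trials.find({"system": "hypothetical-vo-system"}):
    print(trial["dataset"], trial["results"])
```

Because each trial is a self-describing document, new systems and datasets with different parameters can be added without changing any schema.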
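Tim Molloy's abstract above (8-Aug-17) is precise enough to state as an equation. One common minimum-principle formulation, written in our own notation (the talk's exact formulation may differ):

```latex
% Assume player i's running cost is linear in unknown weights \theta_i:
%   J_i(\theta_i) = \int_0^T \theta_i^{\top} \phi_i(x(t), u_i(t)) \, dt,
% with Hamiltonian H_i = \theta_i^{\top}\phi_i(x,u_i) + \lambda_i^{\top} f(x,u).
% The minimum principle requires \partial H_i / \partial u_i = 0 along an
% optimal trajectory, so the weights can be estimated by least squares
% over the observed states x^* and controls u^*:
\hat{\theta}_i = \arg\min_{\theta_i} \sum_{t}
  \left\| \frac{\partial H_i}{\partial u_i}\bigl(x^*(t), u^*(t), \lambda_i(t); \theta_i\bigr) \right\|^2
```

The costates \lambda_i must themselves satisfy the adjoint equations, and recovering them from data is part of what makes the inverse problem non-trivial.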
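Fan Zeng's abstract above (31-Oct-17) names Shannon entropy as the salience measure in SESAME. A hedged sketch of that measure on an image patch follows; the histogram granularity is an assumption, and the real SESAME pipeline involves more than this single function:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy H = -sum(p * log2 p) of a patch's grey-level
    histogram; low-entropy (low-information) regions can be skipped
    when generating salient regions for place recognition."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))
```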
Seminars 2015:
Date ____ | Speaker | Topic | Room |
---|---|---|---|
27-Jan-15 | | Title: Marine Vessel Inspection as a Novel Field for Service Robotics: A Contribution to Systems, Control Methods and Semantic Perception Algorithms Abstract: Seagoing vessels, such as bulk carriers, dry cargo ships, and tankers, have to undergo regular inspections at survey intervals. This is performed by ship surveyors, using visual close-up surveys or non-destructive testing methods. Vessel inspection is performed on a regular basis, depending on the requirements of the ship classification society. For a close-up survey, the ship surveyor usually has to get within arm's reach of the inspection point. Structural damage, pitting, and corrosion are visually estimated based on the experience of the surveyor. The most cost-intensive part of the inspection process is providing access to all parts of a ship. The talk will present a novel, robot-based approach to the marine inspection process. Within the talk, several locomotion concepts for inspection robots are presented. Additionally, perception concepts based on spatial-semantic ontologies and on spatial Fuzzy Description Logic are proposed. The talk will discuss how such concepts can be used to classify structural parts of a ship, which in turn can enhance a robot-based inspection process through semantic annotations. | GP-Z-302 |
03-Feb-15 | Juxi Leitner | "From Vision To Actions - Towards Adaptive and Autonomous Humanoid Robots" | |
13-Oct-15 | Andrew English, Adam Jacobson, Michael Milford, and Thierry Peynot | Post IROS2015 and ROSCon2015 | GP-S405 |
20-Oct-15 | Sandra Mau | Title: TrademarkVision, a spin-out computer vision company. Abstract: Sandra will be talking about her computer vision spin-out company TrademarkVision, sharing her journey from research to commercialisation, and giving insight on creating the right environment to unlock innovation and entrepreneurship for women in technology. | GP-S11 Cantina Lounge |
27-Oct-15 | John Vial | Abstract: The SLAM algorithm has a fundamental problem in that its memory requirements grow linearly over time. To combat this, robot poses can be marginalised (typically causing fill-in and increasing memory overhead), forgotten entirely, or the system can be approximated. This talk will describe a recent PhD dissertation that addresses this problem through approximation, while also guaranteeing that the approximate distributions are both close (as measured by the Kullback-Leibler divergence) and conservative; this new technique is called Conservative Sparsification. A variant of the technique appropriate for distributed estimation systems is developed by employing Junction Trees. (The Gaussian form of this divergence is given after this table.) | GP-S11 Cantina Lounge |
3-Nov-15 | David Ball | Title: Reflections on: Robotics for Zero-Tillage Agriculture (ARC Linkage Project) Abstract: Farmers are under growing pressure to intensify production to feed a growing population, while managing environmental impact. Robotics has the potential to address these challenges by replacing large sophisticated farm machinery with fleets of small autonomous robots. The first half of this seminar will present research from the completed ARC Linkage project “Robotics for Zero-Tillage Agriculture” towards the goal of coordinated teams of autonomous robots that can perform typical farm coverage operations. The second half of the seminar will reflect on the other aspects of the grant, such as expectations about technology readiness levels, impact, timeline, testing, and the real cost of the project. With a large fleet of robots it will become time consuming to monitor, control and resupply them all. To alleviate this problem, we describe a multi-robot coverage planner and autonomous docking system. Making a large fleet of autonomous robots economical requires using inexpensive sensors such as cameras for localisation and obstacle avoidance. To this end we describe a vision-based obstacle detection system that continually adapts to environmental and illumination variations and a vision-assisted localisation system that can guide a robot along crops with challenging appearance. This research included three months of field trials on a broad-acre farm, culminating in a two-day autonomous coverage task of 59 ha using two real robots, four simulated robots and an automatic refill station. | GP-S11 Cantina Lounge |
10-Nov-15 | Juxi Leitner | Title: Sharing experiences for robot business commercialisation Speakers: Brent - Overview of QUT Bluebox and how they can help turn research and innovation into commercialisation opportunities. Michael - His startup and how Bluebox helped him. Sue - Discussing her experience at RoboBusiness in Silicon Valley. Juxi and James - Übercamp Experience Overview Abstract: With more people investigating the idea of startups, we must ensure we are utilising every opportunity and tool at our disposal. With robotics being an emerging technology, there will be the potential for several commercialisation opportunities just on the horizon. So, this week's seminar will have several presentations all in the area of Startups, Commercialisation and Entrepreneurship! There will be four short presentations on a variety of aspects within the entrepreneurial space. The first half of the seminar will discuss the recent experiences of some of our RAS members at QUT's Übercamp and RoboBusiness. The second half of the seminar will show how QUT Bluebox can help turn your idea, or product, into a viable commercialisation opportunity. Photos: IMG_2699.jpg, IMG_2698.jpg | GP-S11 Cantina Lounge |
17-Nov-15 | François Chaumette | Title: Visual servoing without image processing | GP-S11 Cantina Lounge |
24-Nov-15 | External speaker | Title: The HPeC Project: Self-Adaptive, Energy Efficient, High Performance Embedded Computing, UAV case study. Speaker: Jean-Philippe Diguet Title: Embedded Health Management for Autonomous UAV Mission Speaker: Catherine Dezan Abstract: (HPeC Summary) The HPeC project aims at demonstrating the relevance of self-adaptive hardware architectures in responding to the growing demands of high-performance computing in an increasing class of embedded systems that also have demanding footprint and energy-efficiency constraints. This is typically the kind of embedded system found in small autonomous systems like UAVs, which require high computing capabilities to perceive the environment (e.g. embedded vision) and make decisions about which tasks to execute according to uncertainties related to the environment, safety-critical constraints, the health of the system, and processing results (e.g. an identified object). | GP-S11 Cantina Lounge |
1-Dec-15 | | NOTE: due to ACRA being held this week, the lab might be very empty! | GP-S11 Cantina Lounge |
8-Dec-15 | TBD | TBD | GP-S11 Cantina Lounge |
15-Dec-15 | TBD | ACRA, AusAI, and Deep learning workshop recap | GP-S11 Cantina Lounge |
22-Dec-15 | Andres F. Marmol V. | GP-S11 Cantina Lounge | |
29-Dec-15 | | NOTE: the university is closed this week, so no seminar is scheduled. | GP-S11 Cantina Lounge |
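John Vial's talk above (27-Oct-15) measures the closeness of the sparsified distribution with the Kullback-Leibler divergence. For the Gaussian posteriors used in SLAM this has a standard closed form (a textbook result, not taken from the talk):

```latex
% KL divergence between k-dimensional Gaussians p = N(\mu_0, \Sigma_0)
% and q = N(\mu_1, \Sigma_1):
D_{\mathrm{KL}}(p \,\|\, q) = \tfrac{1}{2}\left(
    \operatorname{tr}\!\left(\Sigma_1^{-1}\Sigma_0\right)
    + (\mu_1-\mu_0)^{\top}\Sigma_1^{-1}(\mu_1-\mu_0)
    - k + \ln\frac{\det\Sigma_1}{\det\Sigma_0} \right)
```

Conservative Sparsification additionally requires the approximation to be conservative, i.e. its covariance must not understate the true uncertainty.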
Seminars 2014:
Date ____ | Speaker | Topic | Room |
---|---|---|---|
20-01-2014 | Ioannis Rekleitis | Algorithmic Field Robotics: Enabling Autonomy in Challenging Environments | GP-P-512 |
21-01-2014 | Chunhua Shen, Tristan Perez | New image features and insights for building state-of-the-art human detectors Robust Autonomy in Field Robotics – Assessment and Design | GP-P-512 |
22-01-2014 | Mohan Sridharan | Towards Autonomy in Human-Robot Collaboration | GP-P-512 |
31-01-2014 | Geoffrey Walker | Solarcars and EVs to Agbots and UAVs | GP-O-603 |
07-02-2014 | Neil Davidson | Design as Strategy | GP-O-603 |
21-03-2014 | Ben Upcroft | Sabbatical at Oxford - "We Are Never Lost" | GP-O-603 |
28-03-2014 | Obadiah Lam | A new type of neural network: Hierarchical Temporal Memory – Cortical Learning Algorithm | GP-O-603 |
04-04-2014 | Niko Sünderhauf | What is beneath the snow? – Towards a probabilistic model of visual appearance changes | GP-O-603 |
11-04-2014 | Zetao Chen, Thierry Peynot | Multi-scale Bio-inspired Place Recognition (ICRA 2014) Radars: a complementary sensing modality to Vision for Robotics and Aerospace at QUT? | GP-O-603 |
28-04-2014 | Keith L. Clark | Programming Robotic Agents: A Multi-tasking Teleo-Reactive Approach [Slides] | GP-S-405 |
02-05-2014 | Stephanie Lowry | Paper 1: Towards Training-Free Appearance-Based Localization: Probabilistic Models for Whole-Image Descriptors (ICRA 2014) Paper 2: Transforming Morning to Afternoon using Linear Regression Techniques (ICRA 2014) | GP-O-603 |
09-05-2014 | Edward Pepperell, Steven Martin | All-Environment Visual Place Recognition with SMART (ICRA 2014) (ICRA 2014) | GP-O-603 |
16-05-2014 | Tim Morris | Multiple map hypotheses for planning and navigating in non-stationary environments (ICRA2014) | GP-O-603 |
20-05-2014 | RAS overview | RAS overview | GP-Z-606 |
23-05-2014 | Alex Bewley, Patrick Ross | Online Self-Supervised Multi-Instance Segmentation of Dynamic Objects (ICRA 2014) Novelty-based visual obstacle detection in agriculture (ICRA 2014) | GP-O-603 |
30-05-2014 | Ben Upcroft, Michael Milford | Lighting Invariant Urban Street Classification (ICRA 2014) Condition-Invariant, Top-Down Visual Place Recognition (ICRA 2014) | GP-O-603 |
13-06-2014 | Jason Kulk | A Chronology of Previous Experiences with Robots | GP-O-603 |
20-06-2014 | Raymond Russell | "RoPro Design - the Struggles Facing a Mobile Robotics Company in 2014" | GP-O-603 |
27-06-2014 | Timothy Molloy | - Asymptotic Minimax Robust and Misspecified Lorden Quickest Change Detection For Dependent Stochastic Processes - Compressed sensing using hidden Markov models with application to vision based aircraft tracking | GP-O-603 |
Change from Fridays to Tuesdays 11:00 am | |||
15-07-2014 | Duncan Campbell | Overview of Project ResQu | GP-B-507 |
29-07-2014 | Steven Wright | Optimization with a focus on machine learning applications [Slides] | GP-B-507 |
05-08-2014 | Niko Suenderhauf | Overview of the CVPR 2014 conference | GP-B-507 |
07-08-2014 | Charles Gretton | Calculating Economical Visually Appealing Routes [Thursday 4:00 - 5:00 pm] | GP-S-301 |
12-08-2014 | Andre Barczak | Fast Feature Extraction Using Geometric Moment Invariants | GP-B-507 |
19-08-2014 | Joseph Young | QUT gear and services around HPC | GP-B-507 |
26-08-2014 | Jonathan Roberts | Museum Robot - how to deploy two robots for four years | GP-B-507 |
12-09-2014 | Jochen Trumpf | Observers for systems with symmetry [Friday 11:00am - 12:00pm] | GP-S-405 |
30-09-2014 | Group | IROS recap | GP-B-507 |
07-10-2014 | Tor Arne Johansen | Autonomous Marine Operations and Systems, with emphasis on Unmanned Aerial Vehicles | GP-B-507 |
14-10-2014 | Alfredo Nantes | Traffic SLAM: a Robotics Approach to a New Traffic Engineering Challenge | GP-B-507 |
16-10-2014 | Ken Skinner | Trusted Autonomy [slides] | GP-S-407 |
21-10-2014 | Sareh Shirazi | Video Analysis Based on Learning on Special Manifolds for Visual Recognition | GP-B-507 |
28-10-2014 | Remi Ayoko | Workspace configurations, employee wellbeing and productivity | GP-B-507 |
04-11-2014 | Matthew Dunbabin | RobotX Challenge | GP-B-507 |
18-11-2014 | Anjali Jaiprakash | TBA | GP-B-507 |
25-11-2014 | Navinda Kottege | Hexapods and other stories: Autonomous Systems for Perceiving our Environment | GP-B-507 |
09-12-2014 | Franz Andert | Integrating Vision Sensors to Unmanned Aircraft | GP-B-507 |
Seminars 2013:
Date | Speaker | Topic |
---|---|---|
1-2-2013 | Adrien Durand Petiteville | Multi-sensor based navigation of a mobile robot in a cluttered environment |
8-2-2013 | Matthew Garratt | Unmanned Aerial Vehicles Research at UNSW, ADFA. WHERE: S403 TIME: 12.00 pm - 1.00 pm |
15-2-2013 | Alex Bewley | PhD student introduction: Who is Alex Bewley and what is he doing here? |
22-2-2013 | Wesam Al Sabban | Path Planning for Small, Electric Unmanned Aerial Vehicles in Dynamic Conditions |
1-3-2013 | David Ball + Others | OpenRatSLAM + other related work |
5-3-2013 | David Schmale | Time: 2-3pm Place: S Block room 305. |
21-3-2013 | Arren Glover | Final Seminar. Room B121 |
5-4-2013 | Frederic Maire | Marine mammals detection in aerial images |
12-4-2013 | Patrick Ross | PhD Confirmation Seminar - Outdoor traversability (10-11am) |
19-4-2013 | Robert Zlot and Mike Bosse | Title to be decided. |
26-4-2013 | ICRA practice talks: | [David Ball] [Chris Lehnert] |
3-5-2013 | ICRA practice talks: | [David Ball] |
24-5-2013 | ICRA attendees: | summaries of papers they liked |
31-5-2013 | Michael Warren and Chris Lehnert | ROScon |
14-6-2013 | Kyran Findlater, Ryan Steindl | 4th year thesis presentations: AgaBot flash light, $100 UAV |
21-6-2013 | Paul Furgale | Autonomous Systems Lab, ETH Zurich (Title to be decided.) |
26-6-2013 | Brett Browning | Mobile Robotics for Oil and Gas Production and Heavy Industry |
5-7-2013 | Andre Gustavo Scolari Conceicao | Formation Control of Mobile Robots Using Decentralized Nonlinear Model Predictive Control |
12-7-2013 | Matthew Dunbabin | How to blow-up a robot… and other cool ways to monitor the environment |
30-7-2013 | Matthew Walter | Acquiring Rich Models of Objects and Space Through Vision and Natural Language |
31-7-2013 | Kok Yew (Mark) Ng | Robust fault reconstruction using sliding mode observer |
31-7-2013 | Thierry Peynot | Resilient perception and navigation for unmanned ground vehicles in challenging environmental conditions |
9-8-2013 | Hu (Kyle) He | Joint 2D and 3D cues for image segmentation |
30-8-2013 | Video: Jeff Hawkins | Talk followed by discussion: on Intelligence by Jeff Hawkins |
27-9-2013 | Video: Chris Manning | Talk followed by discussion: on Deep Learning for NLP |
11-10-2013 | Timothy Morris and Feras Dayoub | The Guiabot...TBA |
18-10-2013 | Timothy Morris | Vision-Only Autonomous Navigation Using Topometric Maps (IROS2013 practice) |
25-10-2013 | Adam Jacobson, Michael Warren | Autonomous Movement-Driven Place Recognition Calibration for Generic Multi-Sensor Robot Platforms (IROS2013 practice) Robust Scale Initialization for Long-Range Stereo Visual Odometry (IROS2013 practice) |
15-11-2013 | Anthony Finn | Director, Defence & Systems Institute - Title: TBA |
22-11-2013 | Michael Milford | 6 Months of Awesome in Boston |
28-11-2013 | IROS2013 recap | Michael Warren, Stephanie Lowry, Timothy Morris |
6-12-2013 | | The Open Set Recognition Problem |
13-12-2013 | Tim Barfoot | Visual Route Following for Mobile Robots |
18-12-2013 | Denny Oetomo | TBA |
20-12-2013 | Jasmine Banks | FPGAs and Applications |