The Robotics and Autonomous Systems (RAS) seminar series is open to the public. Everyone is welcome to attend.
Speaker: Juan Jairo Inga Charaja
Human Behaviour Identification Using Inverse Reinforcement Learning
Recent trends in human-machine collaboration have led to increased interest in shared control systems, where both a human and a machine or automation simultaneously interact with a dynamic system. However, for a systematic control design that enables automation to cooperate with a human, modeling and identification of human behavior becomes essential. Considering a model of shared control based on a differential game, the identification problem consists of finding the cost function that describes observed human behavior. This seminar will show the potential of Inverse Reinforcement Learning techniques for identification in such scenarios.
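The core idea (finding cost weights that explain observed choices) can be illustrated with a small sketch. This is not the speaker's method, just a minimal discrete-choice flavour of maximum-likelihood IRL under assumed softmax-optimal behaviour and a cost linear in hand-picked features; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: at each decision the human picks one of K candidate
# controls; each (state, control) pair has a feature vector phi.
# Assume the human acts softmax-optimally w.r.t. a cost c(phi) = w . phi.
true_w = np.array([2.0, -1.0])          # ground-truth cost weights (unknown to learner)
K, N, D = 5, 400, 2                     # choices per decision, decisions, feature dim

features = rng.normal(size=(N, K, D))   # phi for every candidate control
costs = features @ true_w
probs = np.exp(-costs) / np.exp(-costs).sum(axis=1, keepdims=True)
choices = np.array([rng.choice(K, p=p) for p in probs])   # observed demonstrations

# Maximum-likelihood recovery of w: gradient ascent on the log-likelihood
# of the softmax choice model (the discrete-choice analogue of MaxEnt IRL).
w = np.zeros(D)
for _ in range(500):
    c = features @ w
    p = np.exp(-c) / np.exp(-c).sum(axis=1, keepdims=True)
    # gradient = expected features under the model minus observed features
    grad = (p[:, :, None] * features).sum(axis=1) - features[np.arange(N), choices]
    w += 0.1 * grad.mean(axis=0)

print("recovered weights:", w)          # approaches true_w as N grows
```

The identification problem in the talk is harder (dynamic game, continuous trajectories), but the same principle applies: search for cost parameters under which the observed behaviour is (near-)optimal.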
Where: QUT Gardens Point S-Block, 11th floor, The Cantina Lounge
When: 11:00AM-12:00PM on 28/Nov/2017
Seminar internal speaker list (constantly updated):
Anjali Jaiprakash : firstname.lastname@example.org
Dorian Tsai : email@example.com
Fahimeh Rezazadegan : firstname.lastname@example.org
Felipe Gonzalez : email@example.com
Jasmin James : firstname.lastname@example.org
Jeremy Opie : None
Jordan Laurie : None
Kulatunga Mudiyanselage Eranda Bankara Tennakoon (Eranda) : None
Lachlan Nicholson : email@example.com
Mario Strydom : None
Matt McTaggert : firstname.lastname@example.org
Riccardo Grinover : email@example.com
Sean Wade-McCue : Sean.firstname.lastname@example.org
Steve Martin : email@example.com
Suman Bista : firstname.lastname@example.org
Tristan Perez : email@example.com
Troy Cordie : None
Vibhavari Dasagi : None
William Hooper : firstname.lastname@example.org
Matthew Dunbabin : email@example.com
Thierry Peynot : firstname.lastname@example.org
Peter Corke : email@example.com
Jonathan Roberts : firstname.lastname@example.org
Chris McCool : email@example.com
Michael Milford : firstname.lastname@example.org
Juxi Leitner : email@example.com
Luis Mejias Alvarez : firstname.lastname@example.org
Niko Suenderhauf : email@example.com
Chris Lehnert : firstname.lastname@example.org
Fangyi Zhang : email@example.com
Ajay Pandey : firstname.lastname@example.org
Leo Wu : email@example.com
Valerio Ortenzi : firstname.lastname@example.org
Andres Marmol : email@example.com
Jason Ford : firstname.lastname@example.org
John Skinner : email@example.com
James Mount : firstname.lastname@example.org
Sean McMahon : S1.email@example.com
Feras Dayoub : firstname.lastname@example.org
Anders Eriksson : email@example.com
William Chamberlain : firstname.lastname@example.org
Douglas Morrison : email@example.com
Fan Zeng : firstname.lastname@example.org
Sourav Garg : email@example.com
Norton Kelly-Boxall : firstname.lastname@example.org
This list contains all the members from here
Ranked by most recent seminar date. People who haven't given a seminar since 2016 are listed first, in alphabetical order by first name.
Organiser: Please contact Fan Zeng, the organiser of this seminar series, at email@example.com if:
- Your name is not included in the list, and you'd like it added.
- Your name is included in the list, and you'd like it removed.
- You, or your visitor would like to give a talk in one of the upcoming sessions.
- Your name is near the top of the list, but you cannot give a seminar due to various reasons.
Thank you very much for your attention!
Full List of Seminars:
Please email Fan Zeng (firstname.lastname@example.org) if your presentation is scheduled in the table below on a date when you are not available. Please also remember to email the title and abstract when ready. A short biography would be appreciated for a brief introduction of the speaker before the presentation. Thanks!
|GP-S11 Cantina Lounge|
External guest speaker: Chris Jeffery
Title: Start-up Adventures
|GP-S11 Cantina Lounge|
External guest speaker: Fredrik Kahl
From Projective Geometry to City-Scale Reconstructions in Computer Vision
Research in geometrical computer vision has undergone a remarkable journey over the last two decades. Not long ago, the field was dominated by mathematicians interested in projective geometry, but today, the area has matured and practical systems for performing large-scale 3D reconstructions are commonplace. In this talk, I will first review some of the progress achieved so far and then give examples of present state of the art, especially on robust methods for city-scale reconstruction and localization. In the end, future challenges will be discussed.
|GP-S11 Cantina Lounge|
|26-Jan-16||Australia Day||Public holiday; no seminar.|
Update on IEEE Control Systems / Robotics and Automation Societies QLD joint-chapters ( 10 mins).
|GP-S11 Cantina Lounge|
Title: From industrial robots to medical robots: An individual perspective
Abstract: In this presentation, I will talk about some projects I participated in at Tsinghua University and National University of Singapore. In particular, I will discuss kinematic calibration of industrial robots and introduce a flexible medical robot named concentric tube robot. Finally I will make some rough comparisons between industrial robots and medical robots based on my experience.
|GP-S11 Cantina Lounge|
Title: Automation for large scale infrastructure inspection: Why and How
Abstract: This talk will describe our journey in developing the Flight Assist System (FAS) for automation of ROAMES infrastructure inspection aircraft. (ROAMES won a 2015 International Edison Award, and is having an international impact on the infrastructure inspection industry). I will also share some personal reflections on industry collaboration.
|GP-S11 Cantina Lounge|
Title: Growing a Startup at QUT - The VBK Motors experience
Abstract: Creating a startup while at university is tough, especially since funds are limited. That said, a number of opportunities are made available by the university itself which can make a great business a reality. VBK Motors has been lucky enough to take advantage of these opportunities. In the past 6 months VBK Motors has been selected both as a finalist in the QUT BlueBox Innovation Challenge and as a participant in the QUT BlueBox Hardware Accelerator Program. In this talk, the co-founder and CEO of VBK Motors will discuss his experience since the start of his startup journey, identifying current opportunities available from QUT to promote innovative startups, as well as what lies ahead for his young company.
|GP-S11 Cantina Lounge|
|8-Mar-16||External guest speaker: Will Browne|
Title: Cognitive Systems: Robotic Vision and Learning
(Note: The purpose of the talk is to encourage discussion over the next few days of my visit to QUT, so overviews of the topic will be presented.)
Abstract: Artificial Cognitive Systems encompasses robots that learn and adapt through exploring their environment. This talk will highlight research into Artificial Cognitive Systems that enables robots to improve autonomous operation. Perception, including robotic vision, is essential in obtaining the state of the world. Advances in salient object detection and pattern recognition will be presented. Also representing, reasoning and learning about appropriate actions for given tasks, such as active SLAM will be outlined. Advances in Affective Computing will be shown for robotic navigation. Finally, methods for artificial systems to scale and reuse information will be outlined.
|GP-S11 Cantina Lounge|
Title: Computational Imaging: What has it ever done for me?
Abstract: I will briefly introduce the field of computational imaging and discuss recent developments in industry, academia, and within the ACRV.
|GP-S11 Cantina Lounge|
|22-Mar-16||External guest speaker: Thibault Schwartz (architect, co-founder of HAL Robotics)|
Title: Simplifying machine control for architectural applications.
Abstract: The democratization of CAD technologies, perceivable in architecture schools as well as in the construction industry, has, during the last decade, progressively led to the creation of consortia combining architectural academics and their professional counterparts, seeking to extend their morphological research, undertaken at a virtual level, towards a systematic practice of manufacturing of geometrical abstractions. As a result, and taking advantage of lower cost CNC machines, university workshops are becoming genuine micro-factories, although various parameters inhibit the scaling of such experimentations beyond pavilions. We highlight software issues and propose solutions to help architectural robotics move beyond its current limitations, and reach the required robustness to be used on construction sites.
|GP-S12 Owen J Wordsworth Room|
External guest speaker: Anne Walsh, and Leanne Kennedy
Title: QUT Trade Controls
This presentation will outline trade controls, the Defence Trade Controls Act 2012 (the Act) and its impact on the university research sector. The Act was implemented by the Federal Government in support of Australia's international obligations to meet strengthened export controls and to prevent sensitive technology that can be used in conventional weapons and weapons-of-mass-destruction programs from getting into the wrong hands.
|GP-S11 Cantina Lounge|
|12-Apr-16||External guest speaker: Chunhua Shen|
Title: Dense prediction on images using very deep convolutional networks
In this talk, I will present an overview of my recent results on deep learning.
|May be cancelled due to the centre's Robot Vision Summer School (RVSS).|
Title: Interviewing experiences
Ben asked me to talk about my experiences with interviewing for start-up robotics companies in order to help those who might consider this path. I found the process to be quite different from interviewing for academic and standard engineering positions … and was initially caught by surprise. I’ll describe the process and the range of questions that were asked. Also, I’ll talk a little bit about Modern C++ and its advantages. Then I’ll give some sources which I found useful to prepare for these interviews. Lastly, if there is time, I’ll demonstrate a new machine learning toolbox and methodology which I found while taking a Coursera unit on machine learning.
ICRA2016 practice talks
(3 min spotlight pitch and 2-3 min of feedback for each speaker.)
1. William Chamberlain
|17-May-16||David Hall||Intern experience at Bosch|
|24-May-16||Stryker (visitor)||Stryker will talk about their work in vision-based navigation for medical robotics. They will also outline what the company Stryker does.|
Title: Callings - Finding and Living an Authentic Work / Life
Abstract: Ray Russell relates his 35 year quest to find the perfect work-life balance. Take a break from your SLAM, your occupancy grids, your quadratic equations and your grant writing. Part cracker-jack philosophy, part transcendental exploration - for a few minutes, let's examine together the motivations behind why we are all Here in the first place.
Title: Enabling Robots To Assist People During Assembly Tasks By Linking Visual Information With Symbolic Knowledge Representation
Abstract: Future robots should have the ability to perform daily tasks in various conditions. One of the future applications of robotics is to assist workers in assembly tasks. The aim of our project is to create a robot that can assist workers in an assembly task. The goal of the project and the current state of our work will be presented in order to receive feedback that could help to improve the future work.
About the speaker: Amos is the CEO of Deepfield Robotics - a subsidiary of Bosch Germany
Title: Not your grandmother’s MATLAB
This is unashamedly a talk about MATLAB and the gory details thereof. Those of us who use MATLAB tend not to keep up with new functionality as it’s added - they have 3500 people working on enhancing the product: new core functionality and new toolboxes. In this talk I’ll demonstrate (live!) some of the newer features that might be relevant to folk who work in our field: strings (yay!), categorical arrays, tables, tall arrays, graphics, apps, connectors, compilers and coders.
Title: Robotics: Science and Systems Conference Debrief
Abstract: I will give an update on the conference overall and the Deep Learning Workshop organised by Juxi and Niko. I’ll highlight a couple of talks/papers that I found very interesting. The first presentation was by Raia Hadsell from Deep Mind on Progressive Networks which enabled rapid transfer from simulation to real robots. The second presentation was by Dieter Fox from U. of Washington on how deep learnt features dramatically improved hand and gesture recognition.
Title: Learning Tasks from Natural Language Dialogue
Abstract: Providing robots with the ability to learn everyday tasks, such as cleaning, directly from users within the environment will allow them to be adapted to a wide variety of real-world problems, including aged and disability care. Previous research in task learning has focused on two key approaches: learning from demonstration, in which the agent observes the user performing the task; and learning from natural language, in which the agent learns from a spoken/written description of the task. While both approaches are complementary in nature, for the purpose of this talk we will focus on the latter.
We will discuss the results of our recently published work, in which we demonstrated a task learning/planning approach that enabled a robot to both learn generalizable tasks from natural language inputs and exploit domain knowledge during planning. In addition, we will provide an overview of the current direction of our work, which includes learning generalizable tasks from situation specific explanations, as well as recognising repeatable patterns for repetitive tasks.
Title: How to place 6th in the Amazon Picking Challenge
Abstract: In early March, team ACRV was selected as one of 16 teams to participate in this year's Amazon Picking Challenge. This talk will summarise what followed. In particular, I will highlight some of the key lessons we learned as well as the tools and processes that worked and didn't work for us. I'll also mention my ideas on how team ACRV might win next year's APC.
Title: Light Fields: Has it been 20 years already?!
Abstract: On the 20th anniversary of the seminal paper by Levoy and Hanrahan, I'll review recent developments in this still-growing field. I'll also discuss my upcoming move to the Stanford Computational Imaging Lab, and some of the work going on there. Finally I'll cover some of the present and ongoing work in light field imaging here at QUT.
Title: Where are UAVs at and how can we get them connected with the Internet of Things (or Industry V4)?
Abstract: UAVs, or flying robots to some, are becoming ever closer to the ubiquitous technology often touted. I will present a snapshot of where we are at in terms of widespread adoption of UAVs in our airspace to do really useful and economically beneficial things, and what big challenges remain. UAVs can form a critical sensing front-end and actor in the context of the Internet of Things (IoT), also known as Industry V4 in the industrial context. Industry is well progressed down the path of large scale systems integration and open data communication protocols, which has much to offer in terms of integrating multiple UAV systems, and possibly that of land and sea robotic platforms. The second part of the presentation will present a framework and early work on how heterogeneous robotic platforms may benefit from the industrial automation world to provide seamless data communication between intelligent sensing platforms in the field, and the realms of big data, cloud computing and decisioning. I will encourage discussion on this aspect as there are some great things that we can trial across the discipline and have more of our platforms interconnected and connected.
Title: Robotics Deployment of Machine Learning
Abstract: For a robotic application, training a machine learning model is generally not the end of the project. Even if the purpose of the model is to obtain knowledge about certain aspects of a dataset, the knowledge gained needs to generalise to the new data the robot will feed to the model during deployment. However, most of these models fail to demonstrate the same level of performance on a robot as they showed on their test set. In this talk, I'll highlight some of the lessons I learned while deploying supervised machine learning on mobile robots. Whether you are new to machine learning and would like to use it on your robot, an expert who would like to hear about the deployment stage, or just remotely interested in the subject, this talk will give you a wide overview, and I hope it will stimulate discussion beyond the presented ideas.
Adjunct Associate Prof Oran Rigby
Title: Medical rescue, training and future use of robotics
Abstract: A review of how robotics currently and in the future may influence the delivery of critical care in the prehospital and medical environment focusing on patient rescue, remote diagnostics, and the opportunities for remote therapeutics in critical clinical decision pathways.
Title: STEM Connectors – where STEM experts and schools connect.
Abstract: Dr Julia Davies will present an overview of this new STEM engagement program, where teachers invite experts into their classroom (via Skype or other means of video-telephony) to show students the relevance and application of STEM. For any researchers who subsequently may be interested in getting involved, Julia will then lead you through the registration process to create a profile page.
Check out https://stemconnectors.qld.edu.au/#/
Title: Robots, neurons, and the fabric of reality
Abstract: I'll talk about lessons learned from robotics work in Vancouver, current and future directions toward better robots, and some reasons to think those directions might work. I'll introduce my current research in computational neuroscience and its value to robotics, and zoom out to the big picture to address the long-term ways in which I think robotics can impact neuroscience, philosophy, and the universe.
|No seminar - RoboVis 2016|
Title: AgBotII Software Development and Latest Results
Abstract: The AgBotII was built at QUT as part of the Strategic Investment in Farming Robotics which is now nearing the successful completion of all milestones. These milestones included the development of the platform, weed management in the field, autonomous replenishment and fertilising in the field. In this seminar I will firstly talk a little bit about the software inside the AgBotII that made achieving these milestones possible, including the scale and size of the software development task. Secondly, I will present a summary of the most recent round of completed milestones, including the autonomous docking, refilling, recharging and broadcast fertilising. Finally, I will talk about using the Gazebo simulation which appears to be underutilised in the lab.
Title: Mining Robotics at QUT and MINExpo 2016
Abstract: In the first part of this talk I will give a quick overview of the current status of activities in robotics and automation for the mining industry at QUT, including our membership in CRCMining/Mining3, recently confirmed projects such as the Advance Queensland Innovation Partnership “Automation-Enabling Positioning for Underground Mining”. Some of the many opportunities for the future will also be mentioned.
In the second part, I will discuss impressions of MINExpo 2016, the largest mining exhibition in the world, which was just held in Las Vegas. This will include a preview of a 400+ ton autonomous truck.
|Report on IROS 2016|
Title: Alone in the dark: robotic vision in low-light environments
Abstract: Robots are often expected to operate 24 hours a day, which means up to 50% of their operational time is at night, where lighting may be insufficient. This talk will look at how cameras operate in these low-light environments, sources of noise, the effects of demosaicing, and why it is important for us to consider these things when testing robotic vision algorithms. Next, I will briefly cover some other camera technologies that could be helpful in low-light conditions, and finish with an overview of what I hope to accomplish throughout my PhD.
Title: Vision for Agricultural Robotics
Abstract: In this presentation I'll give an overview of the vision systems developed during the SIFR project for AgBot II (weed classification) and Harvey (crop segmentation and detection).
The vision systems are used for a range of detection, classification and segmentation tasks, making use of traditional vision features as well as incorporating recent advances in convolutional neural networks.
|Title: Commonwealth Bank robotics initiative|
Commonwealth Bank (CBA) has purchased a REEM humanoid robot in order to explore potential use cases for robotics within the financial services industry. The Bank and Stockland Retail Services Pty Limited (Stockland) have entered into a project agreement to run a range of robotics experiments.
In this context, the Australian Technology Network of Universities (ATN) directorate has worked with CBA to devise an initiative offering opportunities to teams of students to engage in social robotics and robotics coding research during Semester 2 of 2016.
Last month a group of three QUT undergraduate students enrolled in BEB801 went to demo their work at CBA Innovation Lab in Sydney.
In this talk, I will discuss the demonstrations presented in Sydney. In particular, I will explain how the QUT team successfully managed to have the REEM robot play a game of "Simon Says". The key module of this system is a deep neural network that takes as input a single image and predicts the pose of the skeleton of the person closest to the center of the image.
Title: Uni and start-up adventures in Austria, Switzerland, Germany, Italy, China and Singapore: A Pictorial Journey
Abstract: I'll cover the more interesting aspects of several international trips this year to a number of universities, start-ups and companies in Europe and Asia.
Title: “Deep Adventures in Germany, Portugal, England and France”
Abstract: Reporting on some of the activities in European labs around robotic vision, adaptive systems, and deep learning. Labs I visited include, CITEC (Cognitive Interactive Technology Cluster of Excellence) at the University of Bielefeld, VisLab at the Istituto Superiore Tecnico Lisbon, Oxbotica, DeepMind, Lagadic at inria Rennes, and SoftBank Robotics Europe.
Title: How to Build a World #1 Robotics Company.
Abstract: Ben Sand will share insights from his time in Silicon Valley and from coaching people to build highly technical companies. Ben is a co-founder of Meta which raised AU$100M over 3.5 years. Meta builds augmented reality hardware with a strong computer vision component. Key hires included Prof. Steve Feiner (Columbia), Prof. Steve Mann (Toronto), Jayse Hansen (creator of graphics from Iron Man, Avengers), and Alan Beltran (head of hardware for Google's Project Tango).
Ben has experience creating high-quality partnerships with universities where both tech companies and universities gain, and he will overview some of the models he has used previously.
Title: Human Cues for Robot Navigation
Abstract: This talk covers the outcomes of the Discovery Project "Human Cues for Robot Navigation". We set out to investigate how robots could use navigation cues in environments designed for humans. We will cover some different types of spatial symbolic information, and how a robot can use this information as cues for navigation. Along the way, the robot must deal with the fluidity and ambiguity naturally inherent in these cues. The talk will then move to robot vision approaches for locating such cues in the world. We focused on textual cues, including signs and room labels, which reduces to a wild text spotting problem with an unseen lexicon. Finally, occlusions and specular highlights can prevent the robot from reading textual cues in the world, and we present a method for repositioning the robot to reduce the impact of specular highlights.
Tim McLennan &
Title: Commercialisation models
Interactive discussion: an outline of setting up a company called Q-botics; what other models of commercialisation exist, and what we would need to consider if they were to be exercised; and what technologies or models you are thinking about. The university is also doing a lot in the entrepreneurship space (startups etc.).
(Uni of Adelaide)
Title: Amazon Picking Challenge 2016: Team NimbRo of University of Bonn
Abstract: Automation in warehouses is becoming increasingly important in order to relieve humans from mundane and heavy tasks. This talk will present Team NimbRo's successful solution for this year's Amazon Picking Challenge. We will first give a broad overview of the entire system and then focus on two challenging aspects. First, motion generation using a highly flexible IK-based keyframe interpolation framework featuring null space cost optimization. Second, our approach to object perception, which includes online learning from deep features, semantic segmentation on GPUs using pre-trained models, as well as 6D object pose estimation for better grasp point selection. Finally, we will point out the most difficult items for our setup and our approaches to handle them.
Title: Experiences in Flight Testing on Manned/Unmanned Aircraft
Abstract: In this presentation, I will provide an overview of the approach followed to test our research in a manned aircraft (Cessna 172) and a fixed–wing UAV. The software architecture developed allowed us to transparently execute on both aircraft the same core algorithms with minimal changes. Reusability, modularity and transparency were the criteria when developing this architecture allowing for seamless switching between simulation and real flight testing.
Title: A gentle introduction to generative models and Bayesian deep learning
Abstract: In this talk I will give a gentle introduction to two of the most regarded research topics at the recent NIPS (Neural Information Processing Systems) conference: generative models and Bayesian deep learning. Neither technique is yet widely adopted in our community, but both have the potential to overcome many of the deficiencies of current deep learning approaches for robotic applications where real-world robustness is paramount. Typical deep neural networks, such as those used by many in our group, are trained as discriminative classifiers. Generative models are more powerful in the sense that they go beyond mere classification and attempt to learn the true distribution of the data instead. This is highly beneficial for robustness in situations where new unknown classes are regularly encountered, or when training has to be weakly supervised due to the high cost of obtaining labeled data. I will cover two recent techniques for generative models: generative adversarial networks and variational autoencoders. Another shortcoming of typical deep neural networks is that they are not able to properly represent their uncertainty in a classification. Instead, they merely exhibit uncalibrated confidence scores. While this meets the requirements for in-dataset classification (such as the ImageNet or COCO challenges), robotic systems that have to make decisions and act in the physical world based on a neural network's output need trustworthy uncertainty information. Bayesian deep learning provides the techniques to achieve this.
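One popular approximation of Bayesian deep learning is Monte Carlo dropout: keep dropout active at test time and treat the spread over repeated stochastic passes as predictive uncertainty. The toy sketch below (not from the talk; weights are random stand-ins for a trained network) illustrates only the mechanism, not a real model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-layer network with fixed (pretend-trained) weights; the point is
# only to show Monte Carlo dropout as approximate Bayesian inference.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def mc_dropout_predict(x, passes=200, p_drop=0.5):
    outs = []
    for _ in range(passes):
        h = np.maximum(x @ W1, 0.0)                                  # ReLU layer
        h *= rng.binomial(1, 1 - p_drop, size=h.shape) / (1 - p_drop)  # dropout at TEST time
        logits = h @ W2
        e = np.exp(logits - logits.max())
        outs.append(e / e.sum())                                     # softmax
    outs = np.array(outs)
    return outs.mean(axis=0), outs.std(axis=0)   # predictive mean and spread

mean, std = mc_dropout_predict(rng.normal(size=4))
print("class probabilities:", mean)
print("per-class uncertainty:", std)
```

A downstream robotic system could, for instance, refuse to act when the per-class spread is large instead of trusting an uncalibrated confidence score.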
Donald G. Dansereau
Title: Computational Imaging for Robotic Vision
Abstract: This talk argues for combining the fields of robotic vision and computational imaging. Both consider the joint design of hardware and algorithms, but with dramatically different approaches and results. Roboticists seldom design their own cameras, and computational imaging seldom considers performance in terms of autonomous decision-making.
The union of these fields considers whole-system design from optics to decisions. This yields impactful sensors offering greater autonomy and robustness, especially in challenging imaging conditions. Motivating examples are drawn from autonomous ground and underwater robotics, and the talk concludes with recent advances in the design and evaluation of novel cameras for robotics applications.
Title: Distributed Multi-Robot Formation Control
Abstract: Multi-agent systems are progressively being used in a broad range of modern applications such as multi-robot or multi-vehicle coordination and control, air traffic management systems, control of sensor networks, unmanned vehicles, energy systems and logistics. This presentation will review a number of concepts and results on multi-agent system control and will consider the types of communication, control and sensing architecture that allow preservation of the formation shape. It is assumed that the amount of sensing, communication and control computation by any one agent is limited. For example, each agent is only able to communicate over a limited range, and can only measure or receive its neighbours' state information.
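The "local information only" constraint mentioned above is the essence of consensus-based control. A minimal sketch (illustrative, not the speaker's algorithm): each agent repeatedly moves toward the average of its neighbours' states, using only the communication graph, and all agents converge to a common value.

```python
import numpy as np

# Ring communication graph over four agents: each agent sees two neighbours.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([0.0, 4.0, 8.0, 12.0])       # initial scalar states

for _ in range(200):
    neighbour_avg = (A @ x) / A.sum(axis=1)
    x = x + 0.5 * (neighbour_avg - x)      # purely local update, no global info

print(x)   # all agents approach a common consensus value
```

Because the update matrix here is doubly stochastic, the agents converge to the average of the initial states (6.0); formation control replaces the scalar state with positions plus desired inter-agent offsets.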
Title: Robotic Manipulation in Real World Environments
Abstract: In this presentation, I will describe the development of robotic systems for manipulation in real world environments such as agriculture and warehouse automation. I will briefly outline the methods developed for manipulation in agriculture and how they can also be deployed to solve manipulation problems for warehouse automation.
The presentation will focus on Harvey, a robot which autonomously harvests capsicum in a greenhouse. The horticulture industry remains heavily reliant on manual labour, and as such is highly affected by labour costs. In Australia, harvesting labour costs in 2013-14 accounted for 20% to 30% of total production costs. These costs, along with other pressures such as scarcity of skilled labour and volatility in production due to uncertain weather events, are putting profit margins for farm enterprises under tremendous pressure. Robotic harvesting offers an attractive potential solution to reducing labour costs while enabling more regular and selective harvesting, optimising crop quality, scheduling and therefore profit. Autonomous harvesting is a particularly challenging task that requires integrating multiple subsystems such as crop detection, motion planning, and dexterous manipulation. Further perception challenges also present themselves, such as changing lighting conditions, variability in crop and occlusions.
We have demonstrated an effective vision-based algorithm for crop detection, two different grasp selection methods to handle natural variation in the crop, and a custom end-effector design for harvesting. Experimental results in a real greenhouse demonstrate successful grasping, detachment and overall harvesting rates. We believe these results represent an improvement on the previous state-of-the-art and show encouraging progress towards a commercially viable autonomous capsicum harvester.
Title: Angle Sensitive Imaging: A New Paradigm for Light Field Imaging
Abstract: Imaging is a process of mapping information from higher dimensions of a light ﬁeld into lower dimensions. Conventional cameras do this mapping into two dimensions of the image sensor array. These sensors lose directional information contained in the light rays passing through the camera aperture as each sensor element integrates all the light rays arriving at its surface. Directional information is lost and only intensity information is retained.
This talk takes you through a host of ideas to decouple this link and enable image sensors to capture both intensity and direction without sacriﬁcing much of the spatial resolution as the existing techniques do. Some of the ideas that we explore in this talk are diﬀerential quadrature pixels, polarization pixels, multi-ﬁnger pixels and combinations of these to eﬀectively capture the angular information of light by consuming only a very small imager area. These advances are facilitated by the miniaturization of the CMOS fabrication processes and enable low cost, robust computational cameras.
The presented work builds heavily on the theoretical premise laid down by prior work on multi-aperture imaging. Practical aspects are modeled on the diffraction-based Talbot effect. The presented solutions fall into the general category of sub-wavelength apertures and are a one-dimensional case of the same. These solutions enable a rich set of applications, among which are fast-response auto-focus camera systems and single-shot passive 3D imaging.
Title: Visual Servoing - Alternate Approaches and Applications
Abstract: Humans use vision as feedback to help control their actions all the time, particularly when operating vehicles such as cars, heavy machinery and aircraft. If we want to remove the human operator such that these vehicles or agents become autonomous, then replicating some of the control tasks may require the use of vision-based control or visual servoing.
This seminar explores how visual servoing can be used to control such mobile agents or robots. First, I will provide a brief introduction to visual servoing (including position- and image-based control frameworks), as well as a step-by-step guide on how to derive a classical image-based visual controller. Second, I will introduce some new (non-classical) image-based visual servoing approaches that leverage alternative control frameworks to provide additional benefits such as guaranteed stability, constraint satisfaction and the removal of feature tracking requirements. The goal of this seminar is to highlight the design considerations and potential benefits and drawbacks when contemplating the use of visual servoing for autonomous robot control. For those new to visual servoing, this should provide suitable background information to further explore the subject matter. For those already familiar with visual servoing, the material should help you to decide what approaches may be suitable for your application.
Throughout the seminar, various concepts will be highlighted with the aid of example applications for unmanned aircraft (drone) control including some core functionality (collision avoidance) and application specific tasks (control of a suspended load).
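For readers new to the topic, the classical image-based control law has a compact standard form (the notation below follows common tutorial treatments, not necessarily the slides from this talk):

```latex
\[
\mathbf{e}(t) = \mathbf{s}(t) - \mathbf{s}^{*}, \qquad
\dot{\mathbf{e}} = \mathbf{L}_s \,\mathbf{v}_c, \qquad
\mathbf{v}_c = -\lambda \,\widehat{\mathbf{L}_s}^{+}\,\mathbf{e}
\]
```

Here \(\mathbf{s}\) is the vector of measured image features, \(\mathbf{s}^{*}\) their desired values, \(\mathbf{L}_s\) the interaction matrix relating feature velocities to the camera velocity \(\mathbf{v}_c\), and \(\widehat{\mathbf{L}_s}^{+}\) the pseudo-inverse of an estimate of that matrix; choosing \(\lambda > 0\) drives the feature error towards zero exponentially.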
Title: Multi-target Tracking: Challenges and Solutions
Abstract: Despite significant progress, the problem of tracking multiple targets in crowded real-world scenarios is still far from solved. The task is highly relevant for a wide range of applications in robotics and computer vision, including autonomous vehicles, surveillance, video analysis and life sciences. In this seminar, I will present the remaining challenges to be addressed and some of the recently proposed solutions. In particular, I will comment on the differences between online and batch approaches and emphasise the importance of a centralised benchmark to advance the state of the art.
Title: Semi-Autonomy in Human-Robot Collaboration
Abstract: Semi-autonomous robots are robots whose actions are, in part, functions of human decisions. Semi-autonomy allows robots to interact with a human partner in a collaborative manner. Potential applications vary from the assembly of products in factories, to aiding the elderly at home, to shared control in teleoperated processes. However, the sense-plan-act paradigm established by industrial robotics does not account for the interaction with humans, and methods to program collaborative robots are still unclear. In this talk, I will introduce interaction primitives, a data-driven approach based on the use of imitation learning, for learning movement primitives for human-robot interaction. The core idea is to learn a parametric representation of joint trajectories of a robot and a human from multiple demonstrations. Using a probabilistic treatment, the method uses the correlation between the learned parameters such that the robot task and trajectory can be inferred from human observations. As a proof-of-concept, experiments with a 7-DoF lightweight arm collaborating with a human to assemble a toolbox will be shown.
Title: Language, Logic, and Motion: Synthesizing Robot Software
Abstract: Robots offer the potential to become regular helpers in our daily lives, yet challenges remain for complex autonomy in human environments. We address the challenge of complex autonomy by automating robot programming. Many useful robot tasks combine discrete decisions about objects and actions with continuous decisions about collision-free motion. We introduce a new planning framework that reasons over the combined logical and geometric space in which the robot operates. By grounding this planning framework in formal language and automata theory, we achieve not only efficient performance but also verifiable operation. Finally, such a rigorously grounded framework offers a firm base to scale to large domains, handle uncertainty in the environment, and incorporate behaviors learned from humans.
Title: My trip to the US: Experience in enabling robots to manipulate in a kitchen scenario
Abstract: I am going to share my experience from a three-month project on object manipulation in a kitchen scenario at the University of Maryland. The project consists of three tasks: fetching an object from a fridge, heating it using a microwave, and cleaning a table after dinner. The solution is based on a Baxter robot with a mobile base and is mostly implemented using classical engineering techniques, with deep learning used for object recognition. In the talk, I will introduce the solution and show some demo videos.
Title: “The Fast & the Compressible” - Reconstructing the 3D World through Mobile Devices
Abstract: Mobile devices are shifting from being a tool for communication to one that is used increasingly for perception. In this talk we will discuss my group’s work in the rapidly emerging space of using mobile devices to visually sense the 3D world. First, we will discuss the employment of high-speed (240+ FPS) cameras, now found on most consumer mobile devices. In particular, we will discuss how these high frame rates afford the application of direct photometric methods that allow for - previously unattainable - accurate, dense, and computationally efficient camera tracking & 3D reconstruction. Second, we will discuss how the problem of object category specific dense 3D reconstruction (e.g. “chair”, “bike”, “table”, etc.) can be posed as a Non-Rigid Structure from Motion (NRSfM) problem. We will discuss some theoretical advancements we have made recently surrounding this problem - in particular when one assumes the 3D shape being reconstructed is compressible. We will then relate these theoretical advancements to practical algorithms that can be applied to most modern mobile devices.
Title: Human Action Understanding as a Pathway to Human-Robot Collaboration
Abstract: In this talk, I will first cover the motivating applications of human action recognition in the real world. Then, I will talk about some basics about temporal feature extraction such as 3D space-time interest point detection, optical flow features, temporal templates, dense trajectories, and motion boundary histograms.
Peter Corke, Timo Korthals (PhD student at Bielefeld), Thomas Schöpping (PhD student at Bielefeld), Stephen James (PhD student at Imperial College London)
Title: "Going further with direct visual servoing methods"
Abstract: This talk will be about different ways to improve the performance of direct visual servoing positioning methods, ranging from the use of global descriptors and particle filters to, ultimately, CNNs.
Title: Advanced Organic Optoelectronics for Making Robots See and Sense Better
Abstract: We see and feel the world by the sense of vision and touch that is brought to us by our eyes and skin. The rise of robotics would most certainly require robust vision and rich sensation for dexterous manipulation of soft objects and safe human-robot interaction. In this talk, I will first introduce the field of organic optoelectronics and discuss its potential in advancing current robotic vision and tactile sensing platforms. This will be followed by some of my most recent research on the design and development of advanced optoelectronic sensors for low level light sensing, reversible pixel operation, multi-spectral pixel design and tactile sensors that can be embedded in robotic arms of different shapes and forms for sophisticated sensing and smart functionality. I’ll also discuss some of my ongoing collaborative projects on brain-computer interface (with QBI/UQ) and night vision (with MIT).
Title: Event-Based Vision Algorithms for Mobile Robotics
Abstract: Event cameras, such as the Dynamic Vision Sensor (DVS), are biologically inspired sensors that present a new paradigm on the way that dynamic visual information is acquired and processed. Each pixel of an event camera operates independently from the rest, continuously monitoring its intensity level and transmitting only information about brightness changes of given size ("events") whenever they occur, with microsecond resolution. Hence, visual information is no longer acquired based on an external clock (e.g. global shutter); instead, each pixel has its own sampling rate, based on the visual input. This different representation of the visual information offers significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. This talk will focus on the research carried out at the Robotics and Perception Group (University of Zurich) on the development of such algorithms for ego-motion estimation and scene reconstruction, so that a robot equipped with an event camera can build a map of the scene and infer its pose with respect to it.
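To make the event representation concrete, here is a minimal, idealised simulation of how a DVS-style pixel array turns log-intensity changes into asynchronous events. The fixed-threshold model below is a textbook simplification (not the exact DVS circuit), and the function name and interface are illustrative only:

```python
import numpy as np

def generate_events(log_frames, timestamps, C=0.2):
    """Idealised event-camera model: each pixel fires an event whenever
    its log intensity drifts by more than the contrast threshold C from
    the level at which it last fired.  Returns (t, x, y, polarity) tuples."""
    ref = log_frames[0].astype(float).copy()   # per-pixel reference level
    events = []
    for frame, t in zip(log_frames[1:], timestamps[1:]):
        diff = frame - ref
        fired = np.abs(diff) >= C
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
        ref[fired] = frame[fired]              # reset reference where events fired
    return events
```

Note that the output is a sparse, time-stamped stream rather than a dense image, which is exactly why conventional frame-based vision algorithms cannot be applied directly.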
Title: Product of Exponentials formula – An alternative approach to modelling your robot
Abstract: Kinematics is a fundamental topic in robotics. The Denavit-Hartenberg (DH) model has been a standard approach to modelling the kinematics of a robot and has been adopted for decades. This talk will introduce another method referred to as the Product of Exponentials formula (POE), which has been gaining increasing popularity as an alternative model. After describing the basic ideas of POE and comparing it to DH, the talk will show the equivalence between these two models, i.e., they can be converted into each other analytically. Finally, the talk will discuss a few examples using the POE model and show that in some circumstances, the POE model provides a simpler and more insightful interpretation of the kinematics of a robot.
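For reference, the POE forward kinematics mentioned above is usually written as follows (notation as in standard textbook treatments, with screw axes expressed in the base frame):

```latex
\[
T(\theta) = e^{[\mathcal{S}_1]\theta_1}\, e^{[\mathcal{S}_2]\theta_2} \cdots e^{[\mathcal{S}_n]\theta_n}\, M
\]
```

Here \(M\) is the end-effector pose at the zero configuration, \(\mathcal{S}_i\) is the screw axis of joint \(i\) in the base frame, and \([\mathcal{S}_i]\) its \(4\times 4\) matrix form. Unlike DH, no intermediate link frames need to be assigned, which is one source of the simpler interpretation the talk refers to.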
Title: Vision-based trajectory control of unsensored robots to increase functionality, without robot hardware modification
Abstract: In nuclear decommissioning operations, very rugged remote manipulators are used, which lack proprioceptive joint angle sensors. Hence these machines are simply tele-operated, where a human operator controls each joint of the robot individually using a teach pendant or a set of switches. Moreover, decommissioning tasks often involve forceful interactions between the environment and powerful tools at the robot's end-effector. Such interactions can result in complex dynamics, large torques at the robot's joints, and can also lead to erratic movements of a mobile manipulator's base frame with respect to the task space. My work seeks to address these problems by, firstly, showing how
the configuration of such robots can be tracked in real-time by a vision system and fed back into a trajectory control scheme. Secondly, my work investigates the dynamics of robot-environment contacts, and proposes several control schemes for detecting, coping with, and also exploiting such contacts. Several contributions are advanced. Specifically, a control framework is presented which exploits the constraints arising at contact points to effectively reduce commanded torques to perform tasks; methods are advanced to estimate the constraints arising from contacts in a number of situations, using only kinematic quantities; a framework is proposed to estimate the configuration of a manipulator using a single monocular camera; and finally, a general control framework is described which uses all of the above contributions to servo a manipulator. The results of a number of experiments are presented which demonstrate the effectiveness of these approaches.
Title: Evaluation of keypoint detectors and descriptors in arthroscopic images for feature-based matching applications
Abstract: Knee arthroscopy is the most common minimally invasive orthopaedic procedure in the world. During this procedure, a camera and an arthroscope allow surgeons to observe unstructured and narrow views of the inside of the knee. Given visually challenging monocular images, the surgeon needs to a) estimate where the camera and the instruments are within the knee, b) maintain a mental map of the knee environment, and c) perform the appropriate therapeutic action while manipulating multiple instruments. These tasks are both mentally and physically demanding for surgeons and often lead to involuntary injury in patients.
Surgeons would strongly benefit from systems that can continuously map the inside of the knee, localize the arthroscope and surgical tools, and control instruments using visual information. In this talk I will provide a quick overview of the research around robotic assisted knee arthroscopy within the Medical and Healthcare robotics group. I will then present in detail the outcomes of a recent submission to RA-L entitled “Evaluation of keypoint detectors and descriptors in arthroscopic images for feature-based matching applications”. I will conclude the talk with an overview of future research directions.
David Hall: Towards Unsupervised Weed Scouting for Agricultural Robotics
Leo Wu: Dexterity analysis of three 6-DOF continuum robots combining concentric tube mechanisms and cable driven mechanisms
Michael Milford: Deep Learning Features at Scale for Visual Place Recognition
Fahimeh Rezazadegan: Action Recognition: From Static Datasets to Moving Robots
Juxi Leitner: ACRV Picking Benchmark
Title: Our journey with Hidden Markov Model filters for vision-based aircraft detection.
Abstract: A short overview of our ten-year journey with HMM filters for aircraft detection. I will briefly highlight key milestones and advancements and show a glimpse of recent developments.
Title: Tools for Robot Vision research: scalable experiments and databases
Abstract: In order to perform experiments in robot vision, we have to write a bunch of surrounding code that sends the data to the system and interprets the results. Because each dataset and each robot vision system handles input differently, this code gets bigger and bigger and more and more complex. In this talk, I'm going to describe how I tackle this problem, and how I use MongoDB to manage all the data and metadata around running experiments. Hopefully some of these tools and solutions will be useful for you when conducting your research.
This short seminar will present the work I have done outside of my PhD, including:
- Hacking an RC car and developing a demo for Robotronica,
- The various methods to crowd fund an idea/start-up,
- The lessons learnt from running a successful KickStarter, and
- Applying for MIT's Global Entrepreneurship Bootcamp.
Title: Multi-Modal Trip Hazard Detection On Construction Sites.
Abstract: Trip hazards are a significant contributor to accidents on construction and manufacturing sites, where over a third of Australian workplace injuries occur. Current safety inspections are labour intensive and limited by human fallibility, making automation of trip hazard detection appealing from both a safety and economic perspective. Trip hazards present an interesting challenge to modern learning techniques because they are defined as much by affordance as by object type; for example, wires on a table are not a trip hazard, but can be if lying on the ground. To address these challenges, we conduct a comprehensive investigation into the performance characteristics of 11 different colour and depth fusion approaches, including 4 fusion and one non-fusion approach, using colour and two types of depth images. Trained and tested on over 600 labelled trip hazards over 4 floors and 2000 m² in an active construction site, this approach was able to differentiate between identical objects in different physical configurations (see Figure 1). Outperforming a colour-only detector, our multi-modal trip detector fuses colour and depth information to achieve a 4% absolute improvement in F1-score. These investigative results and the extensive publicly available dataset move us one step closer to assistive or fully automated safety inspection systems on construction sites.
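For readers unfamiliar with the metric, the F1-score quoted above is simply the harmonic mean of precision and recall. A minimal sketch, with made-up counts chosen purely for illustration (not figures from the paper):

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive and false-negative counts:
    the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical detector: 80 hazards found, 20 false alarms, 20 hazards missed.
score = f1_score(80, 20, 20)   # precision = recall = 0.8, so F1 = 0.8
```

An "absolute" improvement of 4% means the F1 value itself rises by 0.04, e.g. from 0.80 to 0.84, rather than by 4% of its previous value.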
|Feras's top ten favourite papers from ICRA2017 (download slides here)|
Dr. Henrik I. Christensen is a Professor of Computer Science in the Department of Computer Science and Engineering at UC San Diego. He is also the director of the Institute for Contextual Robotics. Dr. Christensen does research on systems integration, human-robot interaction, mapping and robot vision. The research is performed within the Cognitive Robotics Laboratory. He has published more than 350 contributions across AI, robotics and vision. His research has a strong emphasis on "real problems with real solutions". He is actively engaged in the setup and coordination of robotics research in the US (and worldwide). Dr. Christensen received the Engelberger Award 2011, the highest honor awarded by the robotics industry. He was also awarded the "Boeing Supplier of the Year 2011". Dr. Christensen is a fellow of the American Association for the Advancement of Science (AAAS) and the Institute of Electrical and Electronics Engineers (IEEE). His research has been featured in major media such as CNN, NY Times, BBC, ...
Title: Dealing with change in large-scale urban localisation
Abstract: Autonomous vehicles in urban environments encounter a wide range of variation, including illumination, weather, dynamic objects, seasonal changes, roadworks and building construction. These changes occur over a range of timescales, from the day-night illumination cycle to construction that can span multiple years. In this talk I will discuss the challenges we have encountered during long-term autonomy trials in Oxford, Milton Keynes and Greenwich, and present two of our newest approaches to dealing with change in both localisation and mapping with vision and LIDAR. I will also cover our Oxford RobotCar Dataset and the upcoming long-term autonomy benchmark due in late 2017.
|25-July-17||Anders Eriksson||Title: Duality and Robotic Vision|
Title: Evaluating UAS Team Reliability
Abstract: There is a need for enabling greater efficiency, utilization and safety of Unmanned Aircraft Systems (UAS) operating in teams with humans in the loop. UAS are limited in their ability to cope with and continue their missions in the presence of failures and other things going wrong, and for this reason high human capital is typically required to support their safe operation. This talk discusses how to assess and design for UAS team reliability with humans in the loop.
Title: Inverse Dynamic Games
Abstract: Inverse dynamic games is the problem of recovering the underlying objectives of players in a dynamic game from observations of their optimal strategies. The problem of inverse dynamic games arises naturally in the study of economics, biological systems, cooperative automation, and conflict scenarios. Despite its many potential applications, the theory of inverse dynamic games has received limited attention. In this talk, recent advances in the theory of inverse dynamic games that have been made possible by exploiting the minimum (or maximum) principle of optimal control will be presented. The potential application of this work to autonomous collision avoidance will also be discussed.
Title: The Strange Case of Grasping with Soft Hands - Exploiting Dr. Jekyll and Taming Mr. Hyde
Abstract: Squashy and flexible robotic end-effectors such as the RBO Hand 2 provide opportunities (Dr. Jekyll) and challenges (Mr. Hyde) for long-standing problems in grasping and manipulation. Opportunities, because getting into contact is easy and forgiving and the mechanical compliance of soft hands creates large basins of attraction when grasping objects. On the other hand, controlling soft hands exhibits significant challenges: good contact models are missing and sensor feedback is limited.
In this talk I will present a high-level grasp planner that exploits environmental contact and a low-level control method which learns models of simple manipulations for a soft hand.
|Dr. Lesley Jolly holds a PhD in Anthropology and has worked alongside Engineering Educators throughout her career in an attempt to improve learning within STEM. She has facilitated the AAEE Winter School (http://www.aaee.net.au/index.php/news1/events/239-aaee-winter-school-university-of-technology-sydney-10-14-july-2017) for many years and is a wealth of knowledge on everything to do with Engineering Education and the various pedagogies (flipped classroom, project-based learning, problem-based learning etc).|
|12-Sep-17||No Seminar||ICRA deadline|
Title: Turing Test 2.0 - Vision and Language
Abstract: The fields of natural language processing (NLP) and computer vision (CV) have seen great advances in their respective goals of analysing and generating text, and of understanding images and videos. While both fields share a similar set of methods rooted in artificial intelligence and machine learning, they have historically developed separately. Recent years, however, have seen an upsurge of interest in problems that require the combination of linguistic and visual information. For example, Image Captioning and Visual Question Answering (VQA) are two important research topics in this area. Image captioning requires the machine to describe the image using human-readable sentences, while VQA asks a machine to answer language-based questions based on the visual information. In this talk I will outline some of the most recent progress, present some theories and techniques for these two Vision-to-Language tasks, and show a live demo of image captioning and Visual Question Answering. I will also cover some recent hot topics in the area, such as Visual Dialog.
Title: Shallow Networks for Inverse Projection in 3D Human Pose Estimation
Abstract: Projecting a 3D scene onto a 2D image is a relatively straight-forward and well understood process common in computer vision. The inverse problem - recovering a 3D scene from a single 2D projection - is an inherently ill-posed problem. Deep neural networks have been shown to perform well at this task by learning patterns in large datasets, though most fail to take advantage of the inverse nature. This talk will cover a couple of approaches we have taken to learn small, shallow networks to embed within a typical optimization framework and discuss areas we are looking to pursue.
Title: Borrowing eyes: robotic vision beyond line-of-sight
Abstract: We can expand robots' vision envelope beyond line-of-sight with data from remote cameras, and exploit fast communications to gather visual information on demand. We can also use smart cameras to distribute the image processing as well as image capture, enabling robots to be cheaper, and scaling to a large number of remote cameras. This talk will cover my approach to distributed robotic vision on mobile phone smart cameras, and some of the challenges of distributed vision: describing robots’ information needs, managing a changeable set of available cameras, and aggregating conflicting data.
|RoboVis 2017 in Tangalooma|
Title: Opportunities and Challenges for Automation, Robotics, and Computer Vision in Australian Supply Chains (a practitioner's perspective)
Abstract: I'll talk about my experiences with Automation, Robotics and Computer Vision in supply chain applications around the world over the last 20 years (with brief case studies), and where the future challenges and opportunities are given the current market and industrial relations climate.
Robotic Grasping: A brief history of robots picking things up
Robotic grasping has been studied for decades, and a wide variety of techniques have been developed for synthesising stable grasps. I will present a brief overview of the robotic grasping literature and its techniques, from analytical methods to data-driven and more modern machine learning approaches, which show great potential in robotic grasping. Finally, I will discuss how this leads into my PhD research topic.
SESAME and SVM for Underground Visual Place Recognition
Autonomous vehicles are increasingly being used in the underground mining industry, but competition and a challenging market is placing pressure for further improvements in autonomous vehicle technology with respect to cost, infrastructure requirements, robustness in varied environments and versatility. In this seminar I will share some of our recent work on several new vision-based techniques for underground visual place recognition that improve on currently available technologies while only requiring camera input. I will present a Shannon Entropy-based salience generation approach (SESAME) that enhances the performance of single image-based place recognition by selectively processing image regions. I will also discuss the effectiveness of adding a learning-based scheme realised by Support Vector Machines (SVMs) to remove problematic images. The approaches have been evaluated on new large real-world underground vehicle mining datasets, and their generality has been demonstrated on a non-mining-based benchmark dataset. Together this research serves as a step forward in developing domain-appropriate improvements to existing state-of-the-art place recognition algorithms that will hopefully lead to improved efficiencies in the mining industry.
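To illustrate the entropy-based salience idea, here is a minimal sketch of a Shannon-entropy score for a grey-level image patch. The function and parameters are illustrative assumptions, not the actual SESAME implementation; the point is only that high-entropy regions tend to carry more visual structure and are therefore better candidates for place-recognition processing:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (in bits) of the grey-level histogram of a patch.
    Higher entropy suggests richer visual structure, a rough salience cue."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins: 0 * log 0 := 0
    return float(-(p * np.log2(p)).sum())
```

A salience-driven pipeline would score patches with a function like this and pass only the most informative regions on to the place-recognition back end.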
Title: Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition
The Progress of 3D Printing and how it Could be Useful for Research
3D printing has recently become very popular, with consumer-grade printers making it easier and easier to download a file and hit print. With this in mind, how can we as researchers utilise this technology to increase our research productivity? This talk will delve into the different 3D printing technologies and creations that we can use in our demos, hardware development and experiments, so that we can reduce cost, reduce lead time on parts and spend more time on the areas that matter. There will also be a small introduction to the Gummi Arm, a 3D-printed variable stiffness manipulator that I built and will be working on.
SENSOR-BASED CONTROL FOR NAVIGATION AND PHYSICAL HUMAN ROBOT INTERACTION
Traditionally, heterogeneous sensor data was fed to fusion algorithms (e.g., Kalman or Bayesian-based) so as to provide state estimation for modeling the environment. However, since robot sensors generally measure different physical phenomena, it is preferable to use them directly in the low-level servo controller rather than to apply multi-sensory fusion or to design complex state machines. This idea, originally proposed in the hybrid position-force control paradigm, brings new challenges to the control design when extended to multiple sensors: challenges related to the task representation and to the sensor characteristics (synchronization, hybrid control, task compatibility, etc.).
|28-Nov-17||Juan Jairo Inga Charaja|
Human Behaviour Identification Using Inverse Reinforcement Learning
Abstract: Recent trends in human-machine collaboration have led to increased interest in shared control systems, where both a human and a machine or automation simultaneously interact with a dynamic system. However, for a systematic control design to enable automation to participate in a cooperation with a human, modeling and identification of human behavior becomes essential. Considering a model of shared control based on a differential game, the identification problem consists in finding the cost function describing observed human behavior. This seminar will show the potential of Inverse Reinforcement Learning techniques for identification in such scenarios.
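As a toy illustration of the underlying idea (recovering a cost function that explains observed optimal behaviour), consider the scalar LQR case. This is a deliberately simplified sketch of inverse optimal control, not the method presented in the seminar: given an observed feedback gain, the algebraic Riccati equation can be inverted for the state-cost weight once the control weight is fixed to resolve the usual scale ambiguity.

```python
def inverse_lqr_scalar(a, b, k, r=1.0):
    """Recover the state cost weight q that makes the observed gain k
    optimal for xdot = a*x + b*u with cost integral(q*x^2 + r*u^2).
    Uses k = b*p/r and the scalar Riccati equation
    2*a*p - (b**2/r)*p**2 + q = 0; r is fixed to remove scale ambiguity."""
    p = r * k / b                       # Riccati solution implied by the gain
    q = (b * b / r) * p * p - 2.0 * a * p
    return q
```

Forward check of the algebra: for a = -1, b = 1, q = 3, r = 1, the Riccati equation gives p = 1 and hence k = 1, so feeding k = 1 back in recovers q = 3.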
Title: Marine Vessel Inspection as a Novel Field for Service Robotics: A Contribution to Systems, Control Methods and Semantic Perception Algorithms.
Abstract: Seagoing vessels, such as bulk carriers, dry cargo ships, and tankers, have to undergo regular inspections at survey intervals. This is performed by ship surveyors, using visual close-up surveys or non-destructive testing methods. Vessel inspection is performed on a regular basis, depending on the requirements of the ship classification society. For a close-up survey, the ship surveyor usually has to get within arm's reach of the inspection point. Structural damage, pitting, and corrosion are visually estimated based on the experience of the surveyor. The most cost-intensive part of the inspection process is providing access to all parts of a ship. The talk will present a novel, robot-based approach to the marine inspection process. Within the talk, several locomotion concepts for inspection robots are presented. Additionally, perception concepts based on spatial-semantic ontologies and on spatial Fuzzy Description Logic are proposed. It will be discussed how such concepts can be used to classify structural parts of a ship, which in turn can enhance a robot-based inspection process with semantic annotations.
|03-Feb-15||Juxi Leitner||"From Vision To Actions - Towards Adaptive and Autonomous Humanoid Robots"|
Michael Milford, and
Post IROS2015 and ROSCon2015
Title: TrademarkVision, a spin-out computer vision company.
Sandra will be talking about her computer vision spin-out company TrademarkVision, sharing her journey from research to commercialisation, and giving insight on creating the right environment to unlock innovation and entrepreneurship for women in technology.
GP-S11 Cantina Lounge
The SLAM algorithm has a fundamental problem in that its memory requirements grow linearly over time. To combat this, robot poses can be marginalised (typically causing fill-in and increasing memory overhead), forgotten entirely, or the system can be approximated.
This talk will describe a recent PhD dissertation that addresses this problem through approximation, while also guaranteeing that the approximate distributions are both close to the originals (as measured by the Kullback-Leibler Divergence) and conservative. This new technique is called Conservative Sparsification. A variant of the technique is developed that is appropriate for distributed estimation systems by employing Junction Trees.
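For context, the Kullback-Leibler divergence between two multivariate Gaussians (the "closeness" measure mentioned above) has a well-known closed form. The snippet below is an illustrative sketch, not code from the dissertation:

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL(N(mu0, S0) || N(mu1, S1)) for multivariate Gaussians,
    using the standard closed form."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

A conservative approximation inflates uncertainty rather than shrinking it (e.g. a covariance no smaller than the true one), so it remains safe to fuse downstream; the sparsification then seeks the conservative approximation with the smallest such divergence from the original distribution.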
|GP-S11 Cantina Lounge|
Title: Reflections on: Robotics for Zero-Tillage Agriculture (ARC Linkage Project)
Farmers are under growing pressure to intensify production to feed a growing population, while managing environmental impact. Robotics has the potential to address these challenges by replacing large sophisticated farm machinery with fleets of small autonomous robots. The first half of this seminar will present research from the completed ARC Linkage project “Robotics for Zero-Tillage Agriculture” towards the goal of coordinated teams of autonomous robots that can perform typical farm coverage operations. The second half of the seminar will reflect on the other aspects of the grant such as expectations about technology readiness levels, impact, timeline, testing, and the real cost of the project.
With a large fleet of robots it will become time consuming to monitor, control and resupply them all. To alleviate this problem, we describe a multi-robot coverage planner and autonomous docking system. Making a large fleet of autonomous robots economical requires using inexpensive sensors such as cameras for localisation and obstacle avoidance. To this end we describe a vision-based obstacle detection system that continually adapts to environmental and illumination variations and a vision-assisted localisation system that can guide a robot along crops with challenging appearance. This research included three months of field trials on a broad-acre farm, culminating in a two-day autonomous coverage task of 59 ha using two real robots, four simulated robots and an automatic refill station.
GP-S11 Cantina Lounge
Title: Sharing experiences for robot business commercialisation
Brent - Overview of QUT Bluebox and how they can help turn research and innovation into commercialization opportunities.
Sue - Discussing her experience at RoboBusiness in Silicon Valley
With more people investigating the idea of startups, we must ensure we are utilizing every opportunity and tool at our disposal. With robotics being an emerging technology, there will be the potential for several commercialization opportunities just on the horizon. So, this week's seminar will have several presentations all in the area of startups, commercialization and entrepreneurship! There will be four short presentations on a variety of aspects within the entrepreneurial space. The first half of the seminar will discuss the recent experiences of some of our RAS members at QUT's Ubercamp and RoboBusiness. The second half of the seminar will show how QUT Bluebox can help turn your idea, or product, into a viable commercialization opportunity.
|GP-S11 Cantina Lounge|
Title: Visual servoing without image processing
Speaker: François Chaumette
|GP-S11 Cantina Lounge|
Title: The HPeC Project: Self-Adaptive, Energy Efficient, High Performance Embedded Computing, UAV case study.
Speaker: Jean-Philippe Diguet
Title: Embedded Health Management for Autonomous UAV Mission
Speaker: Catherine Dezan
Abstract: (HPeC Summary)
The HPeC project aims at demonstrating the relevancy of self-adaptive hardware architectures in responding to the growing demands of high performance computing in an increasing class of embedded systems that also have demanding footprint and energy efficiency constraints. This is typically the kind of embedded system we have in small autonomous systems like UAVs, which require high computing capabilities to perceive the environment (e.g. embedded vision) and make decisions about which tasks to execute according to uncertainties related to the environment, safety-critical systems, the health of the system and processing results (e.g. identified objects).
|GP-S11 Cantina Lounge|
|1-Dec-15||NOTE: due to ACRA being held this week the lab might be very empty!||GP-S11 Cantina Lounge|
|8-Dec-15||TBD||TBD||GP-S11 Cantina Lounge|
ACRA, AusAI, and Deep learning workshop recap.
|GP-S11 Cantina Lounge|
|22-Dec-15||Andres F. Marmol V.||GP-S11 Cantina Lounge|
|29-Dec-15||NOTE: the university will be closed this week, so no seminar is scheduled.||GP-S11 Cantina Lounge|
Algorithmic Field Robotics: Enabling Autonomy in Challenging Environments
New image features and insights for building state-of-the-art human detectors
Robust Autonomy in Field Robotics – Assessment and Design
|Towards Autonomy in Human-Robot Collaboration||GP-P-512|
|Solarcars and EVs to Agbots and UAVs||GP-O-603|
|Design as Strategy||GP-O-603|
|Sabbatical at Oxford - "We Are Never Lost"||GP-O-603|
|A new type of neural network: Hierarchical Temporal Memory – Cortical Learning Algorithm||GP-O-603|
|What is beneath the snow? – Towards a probabilistic model of visual appearance changes||GP-O-603|
Multi-scale Bio-inspired Place Recognition (ICRA 2014)
Radars: a complementary sensing modality to Vision for Robotics and Aerospace at QUT?
|28-04-2014||Keith L. Clark|
Programming Robotic Agents: A Multi-tasking Teleo-Reactive Approach [Slides]
Paper 1: Towards Training-Free Appearance-Based Localization: Probabilistic Models for Whole-Image Descriptors (ICRA 2014)
Paper 2: Transforming Morning to Afternoon using Linear Regression Techniques (ICRA 2014)
All-Environment Visual Place Recognition with SMART (ICRA 2014)
Multiple map hypotheses for planning and navigating in non-stationary environments (ICRA2014)
|20-05-2014||RAS overview||RAS overview||GP-Z-606|
Online Self-Supervised Multi-Instance Segmentation of Dynamic Objects (ICRA 2014)
Novelty-based visual obstacle detection in agriculture (ICRA 2014)
Lighting Invariant Urban Street Classification (ICRA 2014)
Condition-Invariant, Top-Down Visual Place Recognition (ICRA 2014)
A Chronology of Previous Experiences with Robots
|20-06-2014||Raymond Russell||"RoPro Design - the Struggles Facing a Mobile Robotics Company in 2014"||GP-O-603|
- Asymptotic Minimax Robust and Misspecified Lorden Quickest Change Detection For Dependent Stochastic Processes
- Compressed sensing using hidden Markov models with application to vision based aircraft tracking
Change from Fridays to Tuesdays 11:00 am
|15-07-2014||Duncan Campbell||Overview of Project ResQu||GP-B-507|
|29-07-2014||Steven Wright||Optimization with a focus on machine learning applications [Slides]||GP-B-507|
|05-08-2014||Niko Suenderhauf||Overview of the CVPR 2014 conference||GP-B-507|
|07-08-2014||Charles Gretton||Calculating Economical Visually Appealing Routes [Thursday 4:00 - 5:00 pm]||GP-S-301|
|12-08-2014||Andre Barczak||Fast Feature Extraction Using Geometric Moment Invariants||GP-B-507|
|19-08-2014||Joseph Young||QUT gear and services around HPC||GP-B-507|
|26-08-2014||Jonathan Roberts||Museum Robot - how to deploy two robots for four years||GP-B-507|
|12-09-2014||Jochen Trumpf||Observers for systems with symmetry [Friday 11:00am - 12:00pm]||GP-S-405|
|07-10-2014||Tor Arne Johansen||Autonomous Marine Operations and Systems, with emphasis on Unmanned Aerial Vehicles||GP-B-507|
|14-10-2014||Alfredo Nantes||Traffic SLAM: a Robotics Approach to a New Traffic Engineering Challenge||GP-B-507|
|16-10-2014||Ken Skinner||Trusted Autonomy [slides]||GP-S-407|
|21-10-2014||Sareh Shirazi||Video Analysis Based on Learning on Special Manifolds for Visual Recognition||GP-B-507|
|28-10-2014||Remi Ayoko||Workspace configurations, employee wellbeing and productivity||GP-B-507|
|25-11-2014||Navinda Kottege||Hexapods and other stories: Autonomous Systems for Perceiving our Environment||GP-B-507|
|09-12-2014||Franz Andert||Integrating Vision Sensors to Unmanned Aircraft||GP-B-507|
Adrien Durand Petiteville
Multi-sensor based navigation of a mobile robot in a cluttered environment
Unmanned Aerial Vehicles Research at UNSW, ADFA. WHERE: S403 TIME: 12.00 pm - 1.00 pm
PhD student introduction: Who is Alex Bewley and what is he doing here?
Wesam Al Sabban
Path Planning for Small, Electric Unmanned Aerial Vehicles in Dynamic Conditions
David Ball + Others
OpenRatSLAM + other related work
Time: 2-3pm Place: S Block room 305.
Final Seminar. Room B121
Marine mammals detection in aerial images
PhD Confirmation Seminar - Outdoor traversability (10-11am)
Robert Zlot and Mike Bosse
Title to be decided.
ICRA practice talks:
[David Ball] [Chris Lehnert]
ICRA practice talks:
summaries of papers they liked
Michael Warren and Chris Lehnert
Kyran Findlater, Ryan Steindl
4th year thesis presentations: AgaBot flash light, $100 UAV
* *Autonomous Systems Lab, ETH Zurich (Title to be decided.)
Mobile Robotics for Oil and Gas Production and Heavy Industry
Andre Gustavo Scolari Conceicao
Formation Control of Mobile Robots Using Decentralized Nonlinear Model Predictive Control
How to blow-up a robot… and other cool ways to monitor the environment
Acquiring Rich Models of Objects and Space Through Vision and Natural Language
Kok Yew (Mark) Ng
Robust fault reconstruction using sliding mode observer
Resilient perception and navigation for unmanned ground vehicles in challenging environmental conditions
Hu (Kyle) He
Joint 2D and 3D cues for image segmentation
Video: Jeff Hawkins
Talk followed by discussion: on Intelligence by Jeff Hawkins
Video: Chris Manning
Talk followed by discussion: on Deep Learning for NLP
Timothy Morris and Feras Dayoub
Vision-Only Autonomous Navigation Using Topometric Maps (IROS 2013 practice)
Autonomous Movement-Driven Place Recognition Calibration for Generic Multi-Sensor Robot Platforms (IROS 2013 practice)
Robust Scale Initialization for Long-Range Stereo Visual Odometry (IROS 2013 practice)
|15-11-2013||Anthony Finn||Director, Defence & Systems Institute - Title: TBA|
6 Months of Awesome in Boston
The Open Set Recognition Problem
Visual Route Following for Mobile Robots
|20-12-2013||Jasmine Banks||FPGAS and Applications|