The world has many navigational cues for the benefit of humans: signposts, maps, and the wealth of information on the internet. Yet, to date, robotic navigation has made little use of this abundant symbolic information as a resource. This project will develop a robot navigation system that can navigate using information beyond the robot's range sensors by incorporating knowledge gained by reading room labels, following human route directions or interpreting maps found on the web. We will demonstrate the robot's navigation ability by comparing its performance with a human as it learns to find its way around campus by asking for directions, reading signs and maps, and searching the internet for clues.
The aim of the project is to create and demonstrate a new framework that integrates information intended for humans into a navigation resource that can be used for autonomous navigation in urban spaces.
Lam, Obadiah, Dayoub, Feras, Schulz, Ruth, & Corke, Peter (2015) Automated Topometric Graph Generation from Floor Plan Analysis. In 2015 Australasian Conference on Robotics and Automation (ACRA 2015), 2-4 December 2015, Canberra, ACT.
The world is rich with information such as signage and maps to assist humans to navigate. We present a method to extract topological spatial information from a generic bitmap floor plan and build a topometric graph that can be used by a mobile robot for tasks such as path planning and guided exploration. The algorithm first detects and extracts text in an image of the floor plan. Using the locations of the extracted text, flood fill is used to find the rooms and hallways. Doors are found by matching SURF features, and these form the connections between rooms, which are the edges of the topological graph. Our system is able to automatically detect doors and differentiate between hallways and rooms, which is important for effective navigation. We show that our method can extract a topometric graph from a floor plan and is robust to ambiguous cases commonly seen in floor plans, such as elevators and stairwells.
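The core idea of the flood-fill and graph-construction steps can be sketched as follows. This is a simplified illustration, not the paper's implementation: the toy floor plan, cell symbols, and function names are invented here, and the real pipeline operates on bitmap images with text detection and SURF-based door matching rather than a character grid.

```python
from collections import deque

# Toy floor plan: '#' = wall, 'D' = door, '.' = free space.
PLAN = [
    "#########",
    "#...#...#",
    "#...D...#",
    "#...#...#",
    "#########",
]

def segment_rooms(plan):
    """Flood-fill each connected region of free space into a room id."""
    rows, cols = len(plan), len(plan[0])
    labels, next_id = {}, 0
    for r in range(rows):
        for c in range(cols):
            if plan[r][c] == '.' and (r, c) not in labels:
                queue = deque([(r, c)])
                labels[(r, c)] = next_id
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and plan[ny][nx] == '.'
                                and (ny, nx) not in labels):
                            labels[(ny, nx)] = next_id
                            queue.append((ny, nx))
                next_id += 1
    return labels

def connect_rooms(plan, labels):
    """Each door cell links its adjacent rooms: the graph's edges."""
    edges = set()
    for r, row in enumerate(plan):
        for c, cell in enumerate(row):
            if cell == 'D':
                adjacent = {labels[(r + dy, c + dx)]
                            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if (r + dy, c + dx) in labels}
                edges |= {(a, b) for a in adjacent for b in adjacent if a < b}
    return edges
```

Here the two free-space regions become nodes 0 and 1, and the door cell yields the single edge (0, 1); attaching the extracted room-label text to each node would give the topometric graph described above.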
Talbot, Ben, Schulz, Ruth, Upcroft, Ben, & Wyeth, Gordon (2015) Reasoning about natural language phrases for semantic goal driven exploration. In 2015 Australasian Conference on Robotics and Automation (ACRA 2015), 2-4 December 2015, Canberra, ACT.
This paper presents a symbolic navigation system that uses spatial language descriptions to inform goal-directed exploration in unfamiliar office environments. An abstract map is created from a collection of natural language phrases describing the spatial layout of the environment. The spatial representation in the abstract map is controlled by a constraint-based interpretation of each natural language phrase. In goal-directed exploration of an unseen office environment, the robot links the information in the abstract map to observed symbolic information and its grounded world representation. This paper demonstrates the ability of the system to efficiently find target rooms in three simulated environments, as well as a real-world environment that it has not previously visited. It is shown that, using only natural language phrases, the system can navigate to rooms in completely unexplored environments by travelling only 8.42% further than the optimal path.
Hou, Jun, Schulz, Ruth, Wyeth, Gordon, & Nayak, Richi (2015) Finding Within-Organisation Spatial Information on the Web. In 28th Australasian Joint Conference on Artificial Intelligence 2015, 2-4 December 2015, Canberra, ACT.
Information available on company websites can help people navigate to the offices of groups and individuals within the company. Automatically retrieving this within-organisation spatial information is a challenging AI problem. This paper introduces a novel unsupervised pattern-based method to extract within-organisation spatial information by taking advantage of HTML structure patterns, together with a novel Conditional Random Fields (CRF) based method to identify different categories of within-organisation spatial information. The results show that the proposed method achieves high performance in terms of F-score, indicating that this purely syntactic method, based on web search and an analysis of HTML structure, is well suited for retrieving within-organisation spatial information.
Schulz, Ruth, Talbot, Ben, Upcroft, Ben, & Wyeth, Gordon (2015) Constructing Abstract Maps from Spatial Descriptions for Goal-directed Exploration. Presented at Robotics: Science and Systems 2015 Workshop on Model Learning for Human-Robot Communication, 16 July 2015, Rome, Italy.
This paper describes ongoing work on a system using spatial descriptions to construct abstract maps that can be used for goal-directed exploration in an unfamiliar office environment. Abstract maps contain membership, connectivity, and spatial layout information extracted from symbolic spatial information. In goal-directed exploration, the robot would then link this information with observed symbolic information and its grounded world representation. We demonstrate the ability of the system to extract and represent membership, connectivity, and spatial layout information from spatial descriptions of an office environment. In the planned study, the robot will navigate to the goal location using the abstract map to inform the best direction to explore in.
Schulz, Ruth, Talbot, Ben, Lam, Obadiah, Dayoub, Feras, Corke, Peter, Upcroft, Ben, & Wyeth, Gordon (2015) Robot navigation using human cues: A robot navigation system for symbolic goal-directed exploration. In IEEE International Conference on Robotics and Automation 2015, 26-30 May 2015, Washington State Convention Center, Seattle, WA.
In this paper we present for the first time a complete symbolic navigation system that performs goal-directed exploration in unfamiliar environments on a physical robot. We introduce a novel construct called the abstract map to link provided symbolic spatial information with observed symbolic information and actual places in the real world. Symbolic information is observed using a text recognition system that has been developed specifically for the application of reading door labels. In the study described in this paper, the robot was provided with a floor plan and a target room. The target room was specified by its room number, which appears both in the floor plan and on the door to the room. The robot autonomously navigated to the target room using its text recognition, abstract map, mapping, and path planning systems. The robot used the symbolic navigation system to determine an efficient path to the target room, and reached the goal in two different real-world environments. Simulation results show that the system reduces the time required to navigate to a goal when compared to random exploration.
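One way to picture how an abstract map can bias exploration is as a set of expected 2D positions for symbolic places, with the robot steering toward the unexplored frontier closest to the goal's expected position. This is only an illustrative sketch under that assumption; the room labels, coordinates, and function below are invented here and do not reproduce the paper's actual abstract-map construct.

```python
import math

# Hypothetical abstract map: room labels (as read from a floor plan)
# mapped to rough expected positions in the plan's coordinate frame.
abstract_map = {
    "S1101": (0.0, 0.0),
    "S1102": (5.0, 0.0),
    "S1105": (20.0, 0.0),
}

def best_frontier(frontiers, goal, amap):
    """Pick the unexplored frontier nearest the goal's expected place."""
    gx, gy = amap[goal]
    return min(frontiers, key=lambda f: math.hypot(f[0] - gx, f[1] - gy))
```

For example, with frontiers observed at (4, 1) and (18, -1) and goal "S1105", the second frontier is chosen because it lies much closer to the goal's expected position, so exploration heads in that direction instead of wandering randomly.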
Lam, Obadiah, Dayoub, Feras, Schulz, Ruth, & Corke, Peter (2014) Text recognition approaches for indoor robotics: a comparison. In 2014 Australasian Conference on Robotics and Automation, 2-4 December 2014, University of Melbourne, Melbourne, VIC.
This paper evaluates the performance of different text recognition techniques for a mobile robot in an indoor (university campus) environment. We compared four different methods: our own approach using existing text detection methods (Maximally Stable Extremal Regions detector and Stroke Width Transform) combined with a convolutional neural network, two modes of the open source program Tesseract, and the experimental mobile app Google Goggles. The results show that a convolutional neural network combined with the Stroke Width Transform gives the best performance in correctly matched text on images with single characters, whereas Google Goggles gives the best performance on images with multiple words. The dataset used for this work is also released.
During the development of the vision system for the project, we have created a dataset of images of text around QUT Gardens Point Campus, with a focus on room and building labels that are useful for navigation.