Distributed system for domestic robot operation using computer vision
Links to Files: http://library.towson.edu/cdm/ref/collection/etd/id/52902
Type of Work: application/pdf
xiv, 209 pages
Department: Towson University. Department of Computer and Information Sciences
Anticipated for a long time in science-fiction literature, domestic robots are slowly starting to appear. While the first applications, such as vacuum cleaners and toys, share little of the versatility of the robotic servants envisioned in literature and film, it is only a matter of time before more capable robots arrive, driven largely by the demand for assistance for the aging population of the industrialized world. This dissertation aims to study, propose, and begin developing an integrated home automation (domotic) system in the form of an Intelligent House infrastructure. We envision the fully integrated Smart House of the future as a self-sufficient intelligent system able to take care of its inhabitants, with robots being simply the autonomous mobile components of the assisted-living environment. Computer Vision (CV) currently appears to be the most promising technology for robot navigation in domestic environments, so a large part of this work focuses on using CV for domestic robot operation and on the challenges that come with it.

The work began by proposing a distributed processing architecture for controlling a robot operating in a domestic environment and navigating it using computer vision. The system is composed of a set of fixed cameras mounted on the walls near the ceiling overlooking the various rooms, a set of networked computers located in the house, home automation devices communicating with domotic computers, and one or more mobile units (robots) carrying their own camera and processing equipment. We take advantage of the Wi-Fi and wired networks already present in any modern house to provide communication between the equipment, which keeps the cost of the system within an affordable range.
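To make the distributed architecture concrete, the following is a minimal sketch of how a fixed-camera node could report a detection to another machine over the house network. This is purely illustrative: the message fields, function names, and the use of JSON over UDP are assumptions for the sketch, not the actual sDOMO wire format described later in the dissertation.

```python
import json
import socket

# Hypothetical sketch (NOT the actual sDOMO wire format): a fixed-camera
# module serializing a detection event for transmission over the LAN.

def make_detection_msg(camera_id, obj_id, x, y):
    """Serialize a detection event as JSON; all field names are illustrative."""
    return json.dumps({
        "src": camera_id,   # which wall-mounted camera saw the object
        "object": obj_id,   # tracker-assigned object identifier
        "pos": [x, y],      # pixel coordinates in that camera's frame
    }).encode("utf-8")

def parse_detection_msg(payload):
    """Decode a detection message back into a dict on the receiving side."""
    return json.loads(payload.decode("utf-8"))

# Sending would use an ordinary UDP socket on the house Wi-Fi/wired network,
# e.g. (address and port are placeholders):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(make_detection_msg("cam-livingroom", 7, 412.0, 238.5),
#               ("192.168.1.10", 5005))
```

Because the traffic stays on the existing home network, no extra communication infrastructure is needed beyond the routers and cabling a modern house already has.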
As a basic implementation of the proposed architecture, we experimented with a set of software components: Camera Modules processing data from the fixed cameras, a Situation Awareness Module integrating data from all the other modules and building a map of the environment, and the robot control software. The robot control software is itself distributed between the computer board on the robot and a part running on the base station; its components are in constant communication with each other over Wi-Fi.

The first task on the Computer Vision side was to implement an object tracking algorithm by fusing several well-known CV operations into a multi-paradigm tracker. The resulting MP Tracker algorithm runs inside a Camera Module, receiving images from a fixed camera, detecting moving objects, and sending information about them to the other components of the system. The Situation Awareness Module (SAM) uses homographic projection to create a map of the environment. Once the model is built, the same equations translate the coordinates of moving objects reported by the tracker into absolute coordinates in the room. The Robot Module is the part of the robot control software that runs on the base station; it uses the data model built by SAM to perform path planning and sends navigation commands to the Autonomous Robot Module. The Autonomous Robot Module (ARM) is the part of the robot control software that runs on the robot itself. Besides translating high-level navigation commands from the Robot Module into hardware control signals, it also processes video from the on-board camera. Images from the mobile camera are used both to calculate optical flow for low-level navigation and trajectory maintenance, and to supply other modules, on request, with frames for epipolar geometry calculations.
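The coordinate translation step that SAM performs can be sketched as applying a planar homography in homogeneous coordinates. The 3x3 matrix below is a toy example (pure scale plus translation); in practice each camera's homography would be estimated once from known point correspondences between the image and the floor plane.

```python
import numpy as np

# Sketch of the homographic projection SAM could use to map a tracked
# object's pixel coordinates from a fixed camera into room (floor-plane)
# coordinates. The matrix H is illustrative, not a calibrated value.

def apply_homography(H, x, y):
    """Map image point (x, y) to floor coordinates via homogeneous coords."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]   # de-homogenize

# Toy homography: 1 px -> 0.01 m scale, plus a translation of the origin.
H = np.array([[0.01, 0.0,  1.0],
              [0.0,  0.01, 2.0],
              [0.0,  0.0,  1.0]])

fx, fy = apply_homography(H, 300.0, 200.0)   # pixel -> approx. metres
```

The same projection, applied in reverse, lets the system express a planned path in room coordinates and check it against objects detected in any camera's view.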
To tie all the modules together, we developed a new communication protocol, sDOMO, designed from the beginning as a protocol for domestic home automation and robotic systems. A major concern when using robots, and automation devices in general, in domestic environments is the security and privacy of the inhabitants. To address this concern, we designed sDOMO to implement a self-sufficient, home-centered automation network in which devices perform their duties over the house network and any access to the outside world is strictly controlled. The protocol has multiple layers of security and privacy protection and is offered as an open-source project for general-purpose home automation and the building of robotic systems.

A new framework for processing multiple messages in parallel is proposed as a new design pattern. The Multi-Threaded Message Dispatcher framework is a generalization of the single-threaded Reactor design pattern to a heavily multi-threaded environment, encapsulating all the logic required to guarantee deadlock avoidance. The framework takes care of all the low-level details of locking and unlocking access to critical resources, allowing programmers to focus on the problem they need to solve instead of being distracted by critical-section management.

To test the system, we built from scratch a small domestic robot powered by a Raspberry Pi 2 embedded computer board and an Arduino Nano micro-controller that accesses the hardware in real time. The robot is based on a differential drive platform with two DC motors commanded by the Arduino via H-bridge circuits. The robot's camera is mounted on a pan-tilt mechanism driven by two servo motors. The embedded computer board runs the ARM software, communicating with the rest of the system via sDOMO over the Wi-Fi network. This dissertation provides most of the information required for our robot to be replicated by other researchers.
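The deadlock-avoidance idea behind a multi-threaded message dispatcher can be sketched with one classical technique: acquiring the locks of every resource a handler touches in a single fixed global order, which rules out circular wait. The class below is an illustrative sketch under that assumption; its names and API are hypothetical and do not reproduce the dissertation's framework.

```python
import threading
import queue

# Illustrative sketch of a multi-threaded message dispatcher: worker
# threads pull (handler, resources) pairs from a shared queue, and each
# worker locks the handler's resources in one fixed global order (sorted
# by name). A fixed acquisition order prevents circular wait, so no two
# workers can deadlock on each other's locks.

class Dispatcher:
    def __init__(self, resources, n_workers=4):
        # One lock per named resource; sorting by name fixes the order.
        self._locks = {name: threading.Lock() for name in resources}
        self._queue = queue.Queue()
        for _ in range(n_workers):
            threading.Thread(target=self._run, daemon=True).start()

    def dispatch(self, handler, needed_resources):
        """Queue a handler together with the resources it will touch."""
        self._queue.put((handler, needed_resources))

    def _run(self):
        while True:
            handler, needed = self._queue.get()
            ordered = sorted(needed)          # fixed global lock order
            for name in ordered:
                self._locks[name].acquire()
            try:
                handler()                     # critical section runs here
            finally:
                for name in reversed(ordered):
                    self._locks[name].release()
                self._queue.task_done()

    def join(self):
        """Block until every queued message has been processed."""
        self._queue.join()

# Usage sketch: many concurrent handlers mutating a shared map structure.
shared = {"updates": 0}
d = Dispatcher(["map", "robot"], n_workers=4)
for _ in range(100):
    d.dispatch(lambda: shared.__setitem__("updates", shared["updates"] + 1),
               ["map", "robot"])
d.join()
```

As in the framework described above, the caller only declares which resources a message handler needs; all lock management stays inside the dispatcher.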