“Robot” is a word coined by Karel Čapek in the first half of the last century. It entered our vocabulary so swiftly and comfortably that it seems as old as the word “table” or, say, “apple”. Indeed, it is a notion that reflects a widely used technology.
Some imagine robots exclusively as androids, but that is certainly not the only type. As the Longman Dictionary puts it, a robot is a “machine that can move and do some of the work of a person, and is usually controlled by a computer”. So it is any piece of equipment that can replace a person for one or several tasks. Fun, isn’t it? I wish I had had a robot to do my sewing lessons in 6th grade for me.
We all dream about the times when boring day-to-day tasks will be performed by machines. The recent Android(OS)-based robot designed to aid human experiments on the International Space Station brings those times closer. This example is only one in a range of fascinating, breathtaking engineering and research being done at the moment. And we have our two cents to throw in here: our company has been doing robotics-related development for around 5 years and has accumulated some experience we would like to share.
First, a bit of generalisation on the topic: we have recently been working on autonomous charging in general and the complex of tasks associated with it in particular. To “set the scene”, at the current stage we single out the following major robotics tendencies:
- Autonomous 2D and 3D mapping. This may seem a rather narrow issue, but in fact it is used in almost every field of robotics. It is a crucial part of robot movement, as it allows the machine to orient itself in space. The process goes approximately like this: the robot first builds a map of its surroundings and, based on it, performs the next action, whether movement or any other operation.
- Object detection. This can be anything from road detection to distinguishing the robot’s owner from other people. It is very important, as it is the core element that allows interaction with the environment. If we want a robot to perform tasks, we need it to be able to identify and use objects according to their purpose.
- SLAM (Simultaneous Localization and Mapping). The ability of a machine to build a map of an unknown environment while at the same time keeping track of its own position within that map. Mapping and localization each depend on the other, which is what makes the problem hard, and why SLAM underpins most autonomous navigation.
- Multi-robot control and interaction with the environment. A complex task: each robot should be able to identify its environment, including objects and other robots, and perform tasks in a coordinated manner. For example, one robot classifies a tool and another uses it according to that classification.
- Autonomous operation. Our example is charging. The main idea is to avoid using the services of an operator during charging. The primary difficulty for the robot is finding the charger, which means this task incorporates all the previous ones: the robots should be able to interact with the environment (find the charger and orient themselves in space) and with each other (coordinated movement), identify the charger, perform self-charging, and so on.
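The mapping step described above can be sketched in a few lines. This is a deliberately minimal illustration, not our actual implementation: it assumes an idealised range sensor that returns (bearing, distance) pairs, and it records obstacle hits in a sparse occupancy grid keyed by cell coordinates.

```python
import math

def update_grid(grid, pose, scan, cell_size=0.5):
    """Mark grid cells hit by range readings as occupied.

    grid: dict mapping (ix, iy) cell indices to hit counts
    pose: (x, y, heading) of the robot in metres/radians
    scan: list of (bearing, distance) readings relative to the robot
    """
    x, y, heading = pose
    for bearing, dist in scan:
        # Convert the reading to world coordinates of the obstacle.
        ox = x + dist * math.cos(heading + bearing)
        oy = y + dist * math.sin(heading + bearing)
        # Quantise to a grid cell and count the hit.
        cell = (int(round(ox / cell_size)), int(round(oy / cell_size)))
        grid[cell] = grid.get(cell, 0) + 1
    return grid
```

A real mapper would also mark the cells along each ray as free and fuse repeated scans probabilistically, but the build-a-map-then-act loop is the same.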
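The autonomous-charging behaviour itself can be thought of as a small state machine. The sketch below is illustrative only; the state names, thresholds, and sensor inputs are our assumptions for this example, not the logic running on the prototype.

```python
from enum import Enum, auto

class State(Enum):
    OPERATE = auto()   # normal work
    SEARCH = auto()    # battery low: explore to find the charger
    APPROACH = auto()  # charger detected: drive toward it
    CHARGE = auto()    # docked and charging

class ChargingController:
    def __init__(self, low_threshold=0.2, full_threshold=0.95):
        self.state = State.OPERATE
        self.low = low_threshold
        self.full = full_threshold

    def step(self, battery_level, charger_visible, docked):
        """Advance the state machine one tick from sensor readings."""
        if self.state is State.OPERATE and battery_level < self.low:
            self.state = State.SEARCH
        elif self.state is State.SEARCH and charger_visible:
            self.state = State.APPROACH
        elif self.state is State.APPROACH and docked:
            self.state = State.CHARGE
        elif self.state is State.CHARGE and battery_level >= self.full:
            self.state = State.OPERATE
        return self.state
```

The point of the structure is that each state reuses one of the capabilities from the list above: SEARCH relies on mapping, APPROACH on object detection of the charger, and the whole loop removes the operator from the process.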
What has been done so far: a prototype of such a robot has been built on our premises. We are currently performing tests in a robot arena.
The hardware we are using for our solution:
- Compass – http://www.ocean-server.com/download/OS5000_Compass_Manual.pdf
- Platform – http://www.roboticsconnection.com/p-15-traxster-robot-kit.aspx
- Camera – http://www.mightexsystems.com/family_info.php?cPath=1_170_20&categories_id=20
- Embedded Control System – http://www.compactpc.com.tw/ebox-4300.htm
- Board – http://www.sparkfun.com/products/9184
- Operating system – Windows Embedded Standard 7
- Android in space – http://googleblog.blogspot.com/2011/09/android-in-spaaaace-part-2.html
- Definition of the word “robot” in the Longman Dictionary of Contemporary English – http://www.ldoceonline.com/dictionary/robot
- OCSICO contact details, in case you would like to know more – https://ocsico.com/contact-us
This article was written by Jane Matsesha in cooperation with http://robotics.by/.