Six-Legged mobile robot with obstacle detection whiskers and
Video Capture Camera
Functions: moves around in all directions and changes path when its
whiskers bump into obstacles. Can walk in tripod, wave, and ripple
gaits. Can simulate cradle and swing motions, act dizzy, do push-ups
and even dance around, according to a predefined program script.
Parts used:
Electronics: 2 SSCs, Hobbico servo motors.
Mechanical: locally fabricated materials.
Friday, July 17, 2009
Outdoor robot
Built mostly from recycled parts. Driven by two car wiper motors at about 17 V. Direction is changed
by a third wiper motor in the middle, which alters the angle between the two halves of the chassis;
it works with a steel rope wrapped around the motor's shaft and may only be activated while driving,
because of the risk of self-destruction.
The wheels are from a lawn mower. The rear wheels are fixed on a tiltable axle mounted on springs.
Power is supplied by two 12 V lead-acid rechargeable batteries in series (7 Ah each).
A main PWM circuit stabilizes the motor voltage at around 17 V.
Controlled by an 8052AH-BASIC evaluation board from Elektor magazine. Most other circuits were self-
developed and built; two kits were assembled. It can receive commands from a TV infrared remote
control via an RC5-code receiver.
Two bumpers with microswitches on the front end, two on the rear and two in the middle for the
direction motor.
Two IR diodes on the front and one IR-modulated receiver (38 kHz). Ultrasonic obstacle detection on
the rear.
It has some obstacle detecting and avoiding routines, but had lots of problems with its own weight.
The remote control was nearly unusable when the sun was shining.
Email: gklares@ara.lu
Thursday, July 16, 2009
Wallie robot
Wallie is an attempt to make a very small and very simple robot that is still able to perform a
certain task; in this case, that task is wall-following. As you can see in the picture, Wallie's body
is an old PC mouse. It uses differential steering to navigate across its world. Its two motors are
very small 5 V gearbox motors. I salvaged a tape pressure roller from an old cassette deck and
transformed it into a very small castor wheel. This works beautifully.
Wallie uses three infrared obstacle-detection sensors to locate and follow a wall. These are mounted
on the front of the bot, as can be seen in the picture: one pointing a little to the left, one
pointing forward and one pointing a little to the right. These sensors either see an obstacle or
they don't; there is no distance-measuring capability. A sensor triggers when an object comes within
approximately 7 cm of it.
The brain of Wallie is an Atmel AT90S2313 microcontroller. It is programmed in C using the AVR port
of the Linux GCC compiler.
The wall-following procedure is as follows: first, Wallie waits until it is offered a wall to
follow. In other words, you have to put it so close to the wall that its sensors see it.
Wallie then starts to drive forward, angled slightly away from the wall. When the distance to the
wall gets too great, the sensor pointing at the wall no longer sees it; Wallie then steers toward
the wall until it sees it again, then moves away from the wall again, and so on. This way it follows
the wall without touching it. If it does not find the wall within a short period, the wall has
fallen sharply away from the bot; Wallie then turns sharply toward where it expects the wall to be
until it is found again. The second special situation is when both the sensor facing the wall and
the sensor facing forward see the wall: the wall has made a sharp turn toward the robot, and Wallie
reacts by turning away from the wall until only the wall-facing sensor sees it. The third special
situation is when all three sensors see the wall: Wallie has driven into a dead end or a very sharp
bend in the wall. It then turns on the spot until the front-facing sensor no longer sees the wall,
at which point it faces the correct direction again.
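The decision logic described above can be sketched as a single function. This is an illustrative reconstruction, not the original AT90S2313 firmware: the action names, the assumption that the wall is on the robot's left, and the LOST_LIMIT constant are all mine.

```c
#include <assert.h>

/* Sensor flags: 1 = obstacle seen within ~7 cm. Wall assumed on the left. */
#define LOST_LIMIT 10  /* assumed: decision cycles before the wall counts as lost */

typedef enum { VEER_AWAY, VEER_TOWARD, SHARP_TOWARD, TURN_AWAY, SPIN_AWAY } action_t;

action_t wall_step(int left, int front, int right, int lost_count)
{
    if (left && front && right)   /* dead end: spin until the front sensor clears */
        return SPIN_AWAY;
    if (left && front)            /* wall bends sharply toward the robot */
        return TURN_AWAY;
    if (left)                     /* wall in view: ease away to keep clearance */
        return VEER_AWAY;
    if (lost_count > LOST_LIMIT)  /* wall fell sharply away: turn hard toward it */
        return SHARP_TOWARD;
    return VEER_TOWARD;           /* wall just out of view: ease back toward it */
}
```

Calling this once per sensor poll, with lost_count reset whenever the wall-facing sensor triggers, reproduces the oscillating "ease away / ease back" behaviour plus the three special cases.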
Seeker Robot - a robot that looks around for human beings
The goal of Seeker is to look around for human beings, drive towards the first one it sees and then
try to follow that person. Seeker locates humans using a passive infrared (PIR) sensor, which is
capable of detecting the heat signature of a human being. It is mounted inside the white cone on
the sensor unit at the front of the robot. The white cone holds a Fresnel lens that focuses the
infrared (heat) radiation on the sensor element. The sensor unit also holds three SHARP GP2D02
infrared distance-measurement units. These sensors take over when Seeker gets close to the person it
wants to follow; this cannot be done with the PIR sensor because it is not accurate and directional
enough at close range. The sensors of Seeker are mounted on a pan/tilt unit, which enables Seeker to
look around and point its sensors at any object of interest in its field of view.
The drive train of Seeker is the same as that of Roamer and Wallie: two propelled front wheels and a
castor wheel at the back, enabling the bot to navigate the world using differential steering.
The brain of Seeker is an Atmel AVR AT90S8535 microcontroller. It is programmed in C using the AVR
port of the Linux GCC compiler.
The procedure to locate and follow humans is as follows: at power-up, Seeker starts to turn on the
spot. When it detects a heat signature, it drives towards it. When the heat signature is lost during
the approach of the target, it starts turning in a circle again to relocate it. When Seeker gets
close enough to the target, the infrared distance-measurement sensors take over. To follow the
target, the distance measured by the left sensor is compared to that of the right sensor (the third
sensor is not used right now). If the left sensor measures a greater distance than the right sensor,
Seeker concludes the target is located on its right side; if it is the other way around, it assumes
the target is on the left. It then moves its sensor head in the direction of the target. The motor
controller of Seeker is programmed to drive in the direction the sensor head is looking, and
therefore the whole robot will start following the target. If Seeker gets close to the target it
will stop. If the person being followed starts to move towards Seeker, it will drive backwards to
avoid being stepped on. If Seeker loses the target, it starts looking for a new heat signature.
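The left/right comparison above reduces to a small decision function. This is a sketch of the principle, not Seeker's actual code: the deadband value and the raw distance units are assumptions.

```c
#include <assert.h>

/* Compare the two SHARP GP2D02 readings: the sensor that measures the
   GREATER distance is looking past the target, so the target sits on
   the other sensor's side. DEADBAND (raw units) is an assumed tolerance. */
#define DEADBAND 5

/* Returns +1 = target to the right, -1 = target to the left, 0 = centered. */
int target_side(int dist_left, int dist_right)
{
    if (dist_left - dist_right > DEADBAND)
        return +1;  /* left sensor looks past the target: target is right */
    if (dist_right - dist_left > DEADBAND)
        return -1;  /* right sensor looks past the target: target is left */
    return 0;
}
```

The returned side would then drive the pan servo, and the motor controller follows wherever the head points.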
Roverbot
Description
When the robot hits something on the right side of the bumper, the right push button
is depressed. This makes the robot stop, back up, turn to the left, and continue moving forward. If
the robot collides with an object on the left side of the bumper, the left push button is hit. The
robot stops, backs up, turns to the right, and continues going forward. Because of the bumper, the
robot can maneuver around obstacles and keep moving without any human interference.
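The bumper behaviour maps directly onto a decision table. A minimal sketch, with illustrative names (each EVADE value stands for the full stop, back-up, turn, resume-forward sequence described above):

```c
#include <assert.h>

typedef enum { GO_FORWARD, EVADE_TURN_LEFT, EVADE_TURN_RIGHT } evade_t;

/* 1 = that side of the bumper's push button is depressed. */
evade_t bumper_react(int left_hit, int right_hit)
{
    if (right_hit) return EVADE_TURN_LEFT;   /* right hit: back up, turn left */
    if (left_hit)  return EVADE_TURN_RIGHT;  /* left hit: back up, turn right */
    return GO_FORWARD;                       /* no contact: keep going forward */
}
```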
Line Follower
Functions
The buggy features two main wheels positioned opposite each other, and independently driven by
stepper motors. The chassis is balanced with a simple peg that skids along the ground.
The motors and sensors plug into two circuit boards mounted in the buggy chassis, and these in turn
are linked by an umbilical ribbon cable to an input/output port used in conjunction with a
Sinclair ZX81.
The ZX81 provides the intelligence to make the buggy follow a black line (electrical black
insulation tape). It could be argued that a basic line follower does not really require a computer:
the buggy could be made to operate properly by having the sensors control the motors through more
direct electronic means. However, using a computer allows the behaviour to be refined easily through
software changes. For example, after the basic line following was implemented, the buggy was
programmed to negotiate branches in the line.
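The basic following behaviour with two sensors can be sketched as one decision per step. This is an assumed policy, not the original ZX81 program: the sensors are taken to straddle the tape, and a sensor reads 1 when it sees black (infrared absorbed rather than reflected).

```c
#include <assert.h>

typedef enum { STEP_BOTH, STEP_LEFT_ONLY, STEP_RIGHT_ONLY, AT_JUNCTION } steer_t;

/* One control decision: which stepper motor(s) to clock this cycle. */
steer_t line_step(int left_black, int right_black)
{
    if (left_black && right_black)
        return AT_JUNCTION;      /* both on tape: a branch or crossing */
    if (left_black)
        return STEP_RIGHT_ONLY;  /* tape drifted left: advance right wheel to steer left */
    if (right_black)
        return STEP_LEFT_ONLY;   /* tape drifted right: advance left wheel to steer right */
    return STEP_BOTH;            /* tape between the sensors: straight ahead */
}
```

Branch negotiation would then be layered on top, triggered by the AT_JUNCTION case.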
Specifications
The chassis is built from a combination of Meccano® and Perspex®. The Meccano enabled a chassis to
be constructed quickly, and the Perspex allowed the non-Meccano parts (stepper motors and wheels)
to be incorporated easily into the design.
The robot electronics comprise two circuit boards - the driver board and the sensor board - stacked
one over the other.
The step resolution of the stepper motors is 1.8 degrees. To turn this step size into a smaller
wheel travel, each motor drive uses reduction gearing comprising a small cog on the motor shaft and
a much larger cog connected to the wheel.
Driver board
Two SAA1027 stepper-motor driver ICs are employed on the driver board, each controlling a four-
phase stepper motor. The ICs simplify control of the stepper motors by requiring just a digital
direction signal (clockwise/anticlockwise) and a digital clock signal (advance one step) for each
motor. The SAA1027 ICs require a 12 V power supply and 12 V control signals; LM324 quad operational
amplifiers are used to level-shift the 5 V TTL signals from the ZX81 up to the 12 V control signals.
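The direction-plus-clock interface can be modelled in a few lines. This is a simulation stand-in for the real port writes (the struct and helper names are mine), just to show the control contract the SAA1027 presents and the step arithmetic that follows from the 1.8-degree resolution:

```c
#include <assert.h>

/* Latch a direction, then each clock pulse advances one 1.8-degree step. */
typedef struct { int clockwise; long steps; } motor_t;

void saa1027_set_dir(motor_t *m, int clockwise) { m->clockwise = clockwise; }
void saa1027_clock(motor_t *m)                  { m->steps += m->clockwise ? 1 : -1; }

/* 360 deg / 1.8 deg per step = 200 pulses per motor revolution
   (integer math in tenths of a degree to avoid rounding). */
long steps_per_rev(void) { return 3600 / 18; }
```

The reduction cogs then divide this further, so one wheel revolution takes correspondingly more pulses.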
Sensor Board
To enable the buggy to follow a black line, two optical sensors (TIL81) are used. They are
positioned at the front underside of the buggy, separated by a distance of 1 cm. Additionally, an
infrared LED (TLN110) is placed between the sensors so that they are less affected by the
surrounding ambient light. Depending on whether the surface is black or white, the infrared beam is
either absorbed or reflected respectively.
The sensor board comprises two identical circuits, each connected to a corresponding optical sensor.
Each circuit converts the optical sensor's analogue output into a 5 V TTL signal that can be read by
the ZX81 via the input port.
TeMo - Telerobotics over Mobile packet data services
TeMo is a tele-operated mobile internet robot. While other internet robots mostly use WiFi (or a nearby PC with an internet connection), TeMo connects to the internet using mobile packet data services (e.g. GPRS / EDGE / UMTS / HSDPA). The advantage is virtually unlimited mobility for the robot.
Simply put, TeMo is a robot that can:
Move around and do stuff (since it has tank tracks and a robotic arm)
Be controlled from anywhere in the world (since it has Internet connectivity)
Boldly go where no robot has gone before! (since it uses mobile packet data services for Internet connectivity)
TeMo is controlled using an Ajax-based webpage. The webpage is served by a tiny webserver running on a mobile phone that is mounted on the robotic platform. TeMo is also capable of sending pictures in real time to the user terminal (and possibly also video in the near future).
Let's look at TeMo in detail. TeMo is made up of the following parts:
Lego (Technic) blocks for the basic mechanical structure.
5 servo motors for mobility, torso rotation and arm control.
A microcontroller that controls the motors and listens for commands from the webserver.
A standard mobile phone that runs a tiny webserver, connects to the Internet using GPRS/EDGE/UMTS and communicates with the microcontroller over infrared.
The following diagram shows how the overall system works.
Tuesday, July 14, 2009
Educational Robot Simulator: Webots
Webots is a professional robot simulator widely used for educational purposes. The Webots project started in 1996, initially developed by Dr. Olivier Michel at the Swiss Federal Institute of Technology (EPFL) in Lausanne, Switzerland.
Webots uses the ODE (Open Dynamics Engine) for collision detection and rigid-body dynamics simulation. The ODE library allows one to accurately simulate physical properties of objects such as velocity, inertia and friction.
A large collection of freely modifiable robot models comes in the software distribution. In addition, it is also possible to build new models from scratch. When designing a robot model, the user specifies both the graphical and the physical properties of the objects. The graphical properties include the shape, dimensions, position and orientation, colors, and texture of the object. The physical properties include the mass, friction factor, as well as the spring and damping constants.
Webots includes a set of sensors and actuators frequently used in robotic experiments, e.g. proximity sensors, light sensors, touch sensors, GPS, accelerometers, cameras, emitters and receivers, servo motors (rotational & linear), position and force sensors, LEDs, grippers, gyros and compasses.
The robot controller programs can be written in C, C++, Java, Python and MATLAB. The AIBO, Nao and E-puck robot models can also be programmed with the URBI language (URBI license required).
Webots offers the possibility to take PNG screenshots and to record simulations as MPEG (Mac/Linux) or AVI (Windows) movies. Webots worlds are stored in .wbt files, whose format is very similar to VRML. It is also possible to import and export Webots worlds or objects in the VRML format. Another useful feature is that the user can interact with a running simulation at any time, i.e. it is possible to move the robots and other objects with the mouse.
Webots is used in several online robot programming contests. The Robotstadium[1] competition is a simulation of the RoboCup Standard Platform League. In this simulation two teams of Nao play soccer with rules similar to regular soccer. The robots use simulated cameras, ultrasound and pressure sensors. In the Rat's Life[2] competition two simulated e-puck robots compete for energy resources in a Lego maze. Matches are run on a daily basis and the results can be watched in online videos.
Light Sensor Virtual Flower Robot
In this project we build a robot that has two optical light sensors and turns its head in the direction of light. The head is the only moving part of the robot, and it is controlled by a gearbox manufactured by Tamiya. The light sensors are two CdS photoresistors available from RadioShack; I used the two smallest ones from the package of 5 available there. The cell diameter is about 5 mm, the maximum dark resistance is about 14 MΩ and the minimum light resistance is about 0.5 kΩ. The daylight resistance in my room is about 50 kΩ.
The first prototype
The photoresistors are mounted on the robot head, which in turn is attached to the gear axle. I used a paper strip to separate the photoresistors; its optimal length in my setting is 1 in, measured from the photoresistors. The separator is needed to shadow one of the photoresistors when the light source moves. For simplicity, the head can move in a two-dimensional horizontal plane only, unlike a real sunflower. The head is formed by a small breadboard, which for now has just the photoresistors and the paper separator mounted on it.
The light sensors (L and R on the schematic) are connected to the PIC, which periodically measures their resistance and controls the motor accordingly. To measure the resistance of the photocells I use a classic RC chain and measure the time taken to charge a capacitor, which for a fixed C is proportional to R. The direction of motor rotation is controlled by a classic H-bridge composed entirely of TIP120 NPN Darlington transistors. These transistors contain built-in diodes protecting them from the high voltages caused by the inductive load. The bases of the bridge transistors are connected to the PIC. If the RB6 and RB7 outputs are both 0, the motor does not rotate. If one of them is 0 and the other is 1, the motor rotates in the corresponding direction. The situation where both outputs are 1 is prevented by the software, since in that case the 3 V battery would be short-circuited.
This is just the first prototype of the design, and I use an LCD for tuning and debugging. The LCD displays the numbers coming out of the resistance measurement; larger numbers correspond to higher (darker) resistance. The built-in PIC program does not allow the numbers to exceed 255. The minimum number, corresponding to lighting the device with a 60 W desk lamp, is about 30, so we have almost the full range of light-intensity measurements, 30 - 255. The motor starts to move if the absolute difference between the numbers is larger than 15, a value determined experimentally. This constant defines how much the light source can move before the robot starts following it: the larger the constant, the lower the accuracy of following the light. The sensor resistance is measured approximately every 80 ms, which is also near optimal for the 719:1 gear ratio and a motor voltage in the range 3 - 5 V. Increasing the measurement interval to 250 ms causes the head to move back and forth several times before it finally stabilizes.
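The motor decision described above can be sketched as follows. This mirrors the stated rules (threshold of 15, larger readings mean darker, both bridge inputs never high together), but which pin maps to which rotation direction is my assumption, not taken from the schematic:

```c
#include <assert.h>

#define THRESHOLD 15  /* experimentally determined, per the text */

/* Compute the H-bridge inputs (RB6, RB7) from the two light readings.
   Both 0 = motor off; exactly one high = rotate toward the brighter side. */
void head_drive(int left_reading, int right_reading, int *rb6, int *rb7)
{
    *rb6 = 0;
    *rb7 = 0;
    if (left_reading - right_reading > THRESHOLD)
        *rb7 = 1;   /* left side darker: rotate head toward the right (assumed pin mapping) */
    else if (right_reading - left_reading > THRESHOLD)
        *rb6 = 1;   /* right side darker: rotate head toward the left (assumed pin mapping) */
    /* otherwise both outputs stay 0, so the bridge can never be shorted */
}
```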
The embedded program source for the first prototype is photo1.asm
The Second prototype
The LCD is actually not needed in a real device and can be excluded. This decreases the number of interface pins to 6, so a smaller PIC can be used, as shown in the updated schematics. This PIC, the 12F675, has a built-in 4 MHz RC oscillator, which further simplifies the circuit. Also, smaller transistors can be used to drive the motor; however, they do require diodes to protect them from the high-voltage peaks caused by the motor.
Excluding the LCD significantly simplifies the program. One needs, however, to rename the output ports and other registers according to the PIC specs. The 12F675 has built-in comparators and an ADC that are not used in this design and must be turned off. Also, all I/O ports must be set up for digital mode. Finally, the PIC configuration fuses have some extra bits.
The embedded program source for the second prototype is photo2.asm
The Final Design
The robot electronics are assembled on a small board available from RadioShack. To simplify the power supply I added 3 silicon 1N4003 diodes that drop the 5 V supply down to about 3 V for the motor. This way the entire unit can be powered from a single 5 V source. The maximum current consumption is about 200 mA when the motor is on and just a couple of milliamperes when it is off.
The code is practically the same as that for the second prototype, with just a few changes: the two procedures that measure the light intensity are merged into one, and I set all the PIC control registers manually instead of relying on their default values after power-on reset.
The embedded program source for the final design is photo3.asm
Things to consider
The method used for measuring the resistance is not optimal: it takes two PIC pins, one for charging/discharging the cap and one for actually measuring the voltage. This can be accomplished with just one PIC pin. To do so, disconnect the right (on the schematic) end of the cap and attach it to +5 V. Raising the voltage on pin GP3 (configured as an output in this case) discharges the cap. Then configure the pin as an input and measure the voltage as described above.
For more details, circuits and help, contact
report4all@gmail.com