Tuesday, July 28, 2009

Obstacle Avoidance with NXT


Implementing obstacle avoidance on the NXT is very complicated. The fact that the NXT is following the leader while trying to avoid obstacles and other NXTs makes the whole process much harder to manage. I am facing several different problems with my algorithm. The first problem was "the monkey in the middle", where one NXT gets in the way of another NXT and blocks its view of the light. Then, when I used a single ultrasonic sensor, I did not have enough data for the NXT to avoid obstacles effectively; I was limited in the directions it could detect. So I decided to use two NXT ultrasonic sensors, but I faced two major issues with that. The first was that the sensor only covers about 30 degrees measured from the center of the sensor, not from the sides as I had thought. The other problem was placing the sensors at the right angle: I could not do that using the NXT parts we had; there was no way to mount the sensors on the NXT at the angles I desired. After all these problems I started considering three ultrasonic sensors, with one on each side and one in front. Using three sensors is complicated in its own way; a lot of data comes in at once, which makes it harder for the NXT to process and react to all of it.

I wrote code for the three sensors and started testing it. Testing showed that there are still blind spots that lead the NXT to crash into some objects. Covering the whole NXT with sonar sensors alone, leaving no blind spots, is very hard to do. Currently I am considering using another motor to sweep a front-mounted sensor from side to side, leaving the NXT with no blind spots, but I still have to discuss that with my professor.

As I mentioned previously, I had to raise the light to solve the monkey-in-the-middle problem. But raising the light became a nightmare. My algorithm for following works as follows: when the NXTCam detects an object, it drives the motors so that the object stays in its center of vision. I could flip the camera backwards and my algorithm would still work with no need to modify the code at all. Since the light is higher than the eye level of the NXTCam, I had to change the angle of the NXTCam. When I changed the angle, my algorithm just stopped working, and following the leader was not happening. I have had to make many modifications to the constants in my code, trying to find the right numbers and the right angle for the NXTCam. This process was much harder than I expected. It is very frustrating because I am limited in the angles I can put the NXTCam at, but none of them seems to work with my constants.

Time: 8hrs

Friday, July 24, 2009

Problems and solutions.


After testing my code, two major problems appeared. The first is that one of the NXT followers gets in the way of the other NXT follower and prevents it from seeing the light. For this case I decided to raise the light, but that broke my code, because it is very hard to find the best angle that allows the NXTCam to chase the light without easily losing it. So I need to make more modifications to that part.

The other problem is placing the ultrasonic sensors. I wrote code for one, two, and three ultrasonic sensors. One sensor does not provide enough data for the NXT to achieve the desired obstacle avoidance. I would have liked to use only two sonar sensors, but there is no way of placing them at the angles I need, so I had to use three. Using three sensors is effective; I am now in the process of developing the code needed to process all that data, and then I will integrate it with my follower code.

Time: 5hrs.

Thursday, July 23, 2009

Implementing obstacle avoidance


I worked on implementing obstacle avoidance using an ultrasonic sensor mounted on the front of the NXT. I updated my code to include the implementation. Testing showed that steering the robot around an obstacle becomes complicated with only one sonar sensor.

#pragma config(Sensor, S4, sonarSensor, sensorSONAR)
#pragma config(Motor, motorA, r_motor, tmotorNormal, PIDControl, )
#pragma config(Motor, motorC, l_motor, tmotorNormal, PIDControl, )
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//

#include "nxtcamlib.c"

/************************************************************************************/
// When a single blob is found in the image the robot will try to centre the blob
// by moving the motors. When more than one blob is found the robot halts and
// displays a message.
// Tracked image resolution of 88 x 144 pixels at 30 frames/second.
// Blob coordinates are reported at full resolution (176 x 144 pixels).
/************************************************************************************/

task main ()
{
  int nblobs;                      // Number of blobs
  int_array bc, bl, bt, br, bb;    // Blob colour and bounding-box arrays
  int x_centre, x_error;
  int y_centre, y_error;
  bool erased = false;

  // Initialise the camera
  init_camera(S1);

  while (true) {
    // Get the blobs from the camera into the arrays
    get_blobs(S1, nblobs, bc, bl, bt, br, bb);

    if (nblobs == 1) {
      if (!erased) {
        nxtDisplayTextLine(0, "Tracking ...");
        erased = true;
      }
      // Find the centre of the blob using double resolution of camera
      x_centre = bl[0] + br[0];
      y_centre = bt[0] + bb[0];

      // Compute the error from the desired position of the blob (double resolution)
      x_error = 176 - x_centre;
      y_error = 144 - y_centre;

      // Drive the motors proportional to the error
      motor[l_motor] = (y_error - x_error) / 3;
      motor[r_motor] = (y_error + x_error) / 3;
    }
    else {
      // Not exactly one blob: spin in place searching for the leader
      motor[l_motor] = 20;
      motor[r_motor] = -20;
      nxtDisplayTextLine(0, "Found %d blobs.", nblobs);
      erased = false;
    }

    // Obstacle avoidance: back up while an obstacle is closer than 30 cm
    while (SensorValue[sonarSensor] < 30) {
      motor[l_motor] = -5;
      motor[r_motor] = -5;
      wait10Msec(60);
    }
  }
}


Time: 6 hrs.

Wednesday, July 22, 2009

NXT Ultrasonic Sensor


Now I am working on implementing obstacle avoidance in my algorithm. I decided to use the NXT ultrasonic sensor because it is the best sensor we have. I did some testing on the sensor myself and looked up the features that come with it.

The Ultrasonic Sensor enables the robot to see and detect objects. It can sense and measure distance, and detect movement. The Ultrasonic Sensor measures distance in centimeters and in inches. It is able to measure distances from 0 to 255 centimeters (98 in) with a precision of +/- 3 cm.

The Ultrasonic Sensor uses the same scientific principle as bats: it measures distance by calculating the time it takes for a sound wave to hit an object and return, just like an echo. Note that two or more Ultrasonic Sensors operating in the same room may interfere with each other's readings. Distances smaller than 3 cm cannot be measured with the ultrasonic sensor.

The ultrasonic sensor should always be mounted horizontally; other orientations reduce both the field of vision and the sighting distance of the sensor. The sensor seems to be a bit 'blind' on the left eye, which can be explained by the fact that the left eye is actually the receiver of the ultrasonic wave while the right eye is the sender.

Testing showed two weaknesses of the ultrasonic sensor. The first is that in some areas the sensor tends to report 255 cm instead of the actual distance. The second, more important issue is a critical zone between 25 cm and 50 cm where the sensor can return the wrong value of 48 cm, though this does not happen very often.

Time: 5hrs.

Tuesday, July 21, 2009

Two NXT Cameras


I was thinking of ways to decrease the chance of the NXT losing the leader, and I decided to use two NXT cameras. This provides the NXT with better tracking abilities and decreases the chance of losing the object while tracking. Using both NXT cameras was a great idea; it improved the NXT's tracking ability with a notable difference. More modifications need to be made to the source code to allow better coordination between the cameras and to provide more precise data to the motors.

Here is the old source code:

#pragma config(Motor, motorA, r_motor, tmotorNormal, PIDControl, )
#pragma config(Motor, motorC, l_motor, tmotorNormal, PIDControl, )
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//

#include "nxtcamlib.c"

/************************************************************************************/
// When a single blob is found in the image the robot will try to centre the blob
// by moving the motors. When more than one blob is found the robot halts and
// displays a message.
// Tracked image resolution of 88 x 144 pixels at 30 frames/second.
// Blob coordinates are reported at full resolution (176 x 144 pixels).
/************************************************************************************/

task main ()
{
  int nblobs;                      // Number of blobs
  int_array bc, bl, bt, br, bb;    // Blob colour and bounding-box arrays
  int x_centre, x_error;
  int y_centre, y_error;
  bool erased = false;

  // Initialise the camera
  init_camera(S1);

  while (true) {
    // Get the blobs from the camera into the arrays
    get_blobs(S1, nblobs, bc, bl, bt, br, bb);

    if (nblobs == 1) {
      if (!erased) {
        nxtDisplayTextLine(0, "Tracking ...");
        erased = true;
      }
      // Find the centre of the blob using double resolution of camera
      x_centre = bl[0] + br[0];
      y_centre = bt[0] + bb[0];

      // Compute the error from the desired position of the blob (double resolution)
      x_error = 176 - x_centre;
      y_error = 144 - y_centre;

      // Drive the motors proportional to the error
      motor[l_motor] = (y_error - x_error) / 3;
      motor[r_motor] = (y_error + x_error) / 3;
    }
    else {
      // Not exactly one blob: stop and report
      motor[l_motor] = 0;
      motor[r_motor] = 0;
      nxtDisplayTextLine(0, "Found %d blobs.", nblobs);
      erased = false;
    }
  }
}

Here is the new source code:

#pragma config(Motor, motorA, r_motor, tmotorNormal, PIDControl, )
#pragma config(Motor, motorC, l_motor, tmotorNormal, PIDControl, )
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//

#include "nxtcamlib.c"

/***********************************************************************************/
// When a single blob is found in the image the robot will try to centre the blob
// by moving the motors. When more than one blob is found the robot halts and
// displays a message.
// Tracked image resolution of 88 x 144 pixels at 30 frames/second.
// Blob coordinates are reported at full resolution (176 x 144 pixels).
/***********************************************************************************/

task main ()
{
  int nblobs, nblobst;                 // Number of blobs seen by each camera
  int_array bc, bl, bt, br, bb;        // Blob data from the first camera (S1)
  int_array bct, blt, btt, brt, bbt;   // Blob data from the second camera (S4)
  int x_centre, x_error;
  int y_centre, y_error;
  int x_centret, x_errort;
  int y_centret, y_errort;
  bool erased = false;

  // Initialise both cameras
  init_camera(S1);
  init_camera(S4);

  while (true) {
    // Get the blobs from each camera into its arrays
    get_blobs(S1, nblobs, bc, bl, bt, br, bb);
    get_blobs(S4, nblobst, bct, blt, btt, brt, bbt);

    if (nblobs == 1 || nblobst == 1) {
      if (!erased) {
        nxtDisplayTextLine(0, "Tracking ...");
        erased = true;
      }
      // Find the centre of the blob on each camera (double resolution)
      x_centre  = bl[0]  + br[0];
      y_centre  = bt[0]  + bb[0];
      x_centret = blt[0] + brt[0];
      y_centret = btt[0] + bbt[0];

      // Compute the error from the desired position of the blob (double resolution)
      x_error  = 176 - x_centre;
      y_error  = 144 - y_centre;
      x_errort = 176 - x_centret;
      y_errort = 144 - y_centret;

      // If a camera did not see exactly one blob, fall back on the other camera
      if (nblobs != 1) {
        x_error = x_errort;
        y_error = y_errort;
      }
      if (nblobst != 1) {
        x_errort = x_error;
        y_errort = y_error;
      }

      // Average the errors from the two cameras
      x_error = (x_error + x_errort) / 2;
      y_error = (y_error + y_errort) / 2;

      // Drive the motors proportional to the error
      motor[l_motor] = (y_error - x_error) / 3;
      motor[r_motor] = (y_error + x_error) / 3;
    }
    else {
      // Neither camera sees a single blob: turn slowly searching for the leader
      motor[l_motor] = 5;
      motor[r_motor] = -5;
      nxtDisplayTextLine(0, "Found %d blobs.", nblobs);
      erased = false;
    }
  }
}

Time: 7 hrs.