Tuesday, July 7, 2009

Light Follower Algorithm


This algorithm requires the NXT to have at least two light sensors mounted out in front and spaced apart, one on the left side and the other on the right.

We read the value from both sensors; the sensor that reads more light indicates the direction the NXT should turn. For example, if the left light sensor reads more light than the right one, the NXT turns toward the left. If the two values are almost equal, the NXT drives straight.

pseudocode:

read left_light sensor
read right_light sensor

if left_light sensor detects more light than right_light sensor
then turn robot left

if right_light sensor detects more light than left_light sensor
then turn robot right

if right_light sensor detects about the same as left_light sensor
then robot goes straight

loop


Modifications will be added to give the robot the ability to avoid objects. The turning will also be modified so the robot is no longer limited to the three modes of turn left, turn right, and go straight; instead, it will respond to commands like 'turn left by 10 degrees' or 'turn right really fast'.

pseudocode:

read left_light sensor
read right_light sensor

left_motor = base_speed + (right_light sensor - left_light sensor) * arbitrary_constant
right_motor = base_speed + (left_light sensor - right_light sensor) * arbitrary_constant

loop


I am very interested in the Split Brain Approach. This algorithm works without comparing the light sensor values. Instead, the right motor is commanded based only on light from the left sensor, and the left motor based only on data from the right sensor, as shown below.

pseudocode:

read left_light sensor
read right_light sensor

move left_wheel_speed = right_light sensor * arbitrary_constant
move right_wheel_speed = left_light sensor * arbitrary_constant

loop

I am also interested in using more than just light sensors, such as IR emitter/detector pairs and other sensor types.

Time: 6.5 hrs.

Monday, July 6, 2009

Leader-Follower Algorithm using NXTCam


To implement the Leader-Follower algorithm, we first need to see the object, then move the motors so the object is centered, and then follow it. Finding the object was explained in previous posts; the code below starts by centering the object in the middle of the camera's view. I will be modifying this code to complete the mission.


const tSensors cam = (tSensors) S1; //sensorI2CCustomFast
const tMotor r_motor = (tMotor) motorA;
const tMotor l_motor = (tMotor) motorC;

#include "nxtcamlib.c"

task main ()
{
  int nblobs;                      // Number of blobs
  int_array bc, bl, bt, br, bb;    // Blob colour and edge coordinates
  int x_centre, x_error;
  int y_centre, y_error;
  bool erased = false;

  // Initialise the camera
  init_camera(cam);

  while (true) {
    // Get the blobs from the camera into the arrays
    get_blobs(cam, nblobs, bc, bl, bt, br, bb);

    if (nblobs == 1) {
      if (!erased) {
        nxtDisplayTextLine(0, "Tracking ...");
        erased = true;
      }

      // Find the centre of the blob using double resolution of camera
      x_centre = bl[0] + br[0];
      y_centre = bt[0] + bb[0];

      // Compute the error from the desired position of the blob
      // (using double resolution)
      x_error = 176 - x_centre;
      y_error = 143 - y_centre;

      // Drive the motors proportional to the error
      motor[l_motor] = (y_error - x_error) / 5;
      motor[r_motor] = (y_error + x_error) / 5;
    } else {
      motor[l_motor] = 0;
      motor[r_motor] = 0;
      nxtDisplayTextLine(0, "Found %d blobs.", nblobs);
      erased = false;
    }
  }
}

Time: 6.5 hrs

Sunday, July 5, 2009

Understanding C Code for the NXTcam


I got my hands on some code written for the NXT in C using the NXTCam, so I decided to read and understand it before writing my own, which was very helpful. I also read the library for the NXTCam; it was a bit hard to follow, but it gave me a better understanding of the background behind NXTCam programming.

One of the sample programs was "camtest.c", a simple program that displays the blobs returned from the camera as text on the NXT display. The left and right buttons can be used to choose which blob to look at. Another was "cam_display.c", which draws the blobs returned from the camera on the NXT display. Note that its scaling functions reflect the actual scaling required, not the scaling implied by the documented camera coordinate values. These programs were written by Gordon Wyeth.

Camtest.c

const tSensors cam = (tSensors) S1; //sensorI2CCustomFast

#include "nxtcamlib.c"

// Global
int cb; // Current blob index to display

// task button_handler() - increments blob index when right button is pressed,
// decrements when left button is pressed. Keeps values between 0 and 7.
task button_handler()
{
  while (true) {
    // Wait for a button press
    while (nNxtButtonPressed == -1)
      ;

    if (nNxtButtonPressed == 2) {
      if (cb == 0) {
        cb = 7;
      } else {
        cb--;
      }
    } else if (nNxtButtonPressed == 1) {
      if (cb == 7) {
        cb = 0;
      } else {
        cb++;
      }
    }

    // Wait for the button to be released
    while (nNxtButtonPressed != -1)
      ;
  }
}

task main ()
{
  int nblobs;
  int_array bc, bl, bt, br, bb;

  // Initialise the camera
  init_camera(cam);

  // Start with blob 0
  cb = 0;

  // Setup button handler
  nNxtButtonTask = -2;
  StartTask(button_handler);

  while (true) {
    // Get the current blob data from the camera
    get_blobs(cam, nblobs, bc, bl, bt, br, bb);

    // Print the data on the screen
    nxtDisplayTextLine(1, "Blob %d of %d", cb + 1, nblobs);
    nxtDisplayTextLine(2, "Color: %d", bc[cb]);
    nxtDisplayTextLine(3, "Left: %d", bl[cb]);
    nxtDisplayTextLine(4, "Top: %d", bt[cb]);
    nxtDisplayTextLine(5, "Right: %d", br[cb]);
    nxtDisplayTextLine(6, "Bottom: %d", bb[cb]);
  }
}

Cam_display.c

const tSensors cam = (tSensors) S1; //sensorI2CCustomStd

#include "nxtcamlib.c"

// int xscale(int x) - Scales x values from camera coordinates to screen coordinates.
int xscale(int x) {
  return ((x - 12) * 99) / 176;
}

// int yscale(int y) - Scales y values from camera coordinates to screen coordinates.
int yscale(int y) {
  return ((143 - y) * 63) / 143;
}

task main ()
{
  int n;                           // Number of blobs
  int i;
  int_array bc, bl, bt, br, bb;    // Blob data from the camera
  int l, t, r, b;                  // Intermediate values for scaled corners

  // Initialise the camera
  init_camera(cam);

  while (true) {
    // Get the blobs into the arrays for display
    get_blobs(cam, n, bc, bl, bt, br, bb);

    // Clear the display
    eraseDisplay();

    // Draw the scaled blobs
    for (i = 0; i < n; i++) {
      l = xscale(bl[i]);
      t = yscale(bt[i]);
      r = xscale(br[i]);
      b = yscale(bb[i]);
      nxtFillRect(l, t, r, b);
    }
  }
}

After reading and understanding this code, I have a much better grasp of what I am doing, and I feel ready to start writing and modifying my own code for the Leader-Follower algorithm.

Time: 6 hrs.