In this lab you will familiarize yourself with pyrobot, a Python-based tool for controlling both simulated and physical robots. All of the source code for pyrobot is located in /usr/local/pyrobot/. Feel free to explore any aspect that interests you. Pyrobot includes many machine learning tools, such as neural networks, which we will explore this week. To begin, run update81 to copy some starting-point files into your home directory (cs81/labs/1/).
You may work with a partner on this lab.
I have provided three examples of simple neural networks for solving the logic problems AND, OR, and XOR.
% python -i and.py

One pass through the entire data set is defined as an epoch of training. During training the total summed squared error across all the output units is reported. In addition the percentage of output units with activations within tolerance of the desired target values is reported.
>>> n.showPerformance()
>>> n.showNetwork()
>>> n.showPerformance()

A new window will appear depicting the current activations in the network. There will be one box for each unit in the network. The whiter the box, the more positive the activation. The blacker the box, the more negative the activation.
>>> n.printWeights()

Draw a picture of the network with its associated weights and biases and try to understand how it has solved the problem.
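To make this concrete, here is a minimal sketch of how a single sigmoid unit could compute AND. The weights (5, 5) and bias (-7.5) are made-up illustrative values, not the ones and.py will learn; your network may also have a hidden layer, so compare against what printWeights() actually reports.

# A hypothetical single sigmoid unit computing AND.
# The weights (5, 5) and bias (-7.5) are illustrative, not taken from and.py.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w1, w2, bias = 5.0, 5.0, -7.5
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    activation = sigmoid(w1 * x1 + w2 * x2 + bias)
    print("%d %d -> %.3f" % (x1, x2, activation))

Only the (1, 1) input pushes the weighted sum above zero, so only that pattern produces an activation near 1, which is the AND function to within a typical tolerance.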
Try experimenting with the learning rate (called epsilon). Set it very low (say, 0.1) and then very high (say, 1.0). How does the speed of learning change?
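To see why epsilon matters, below is a small self-contained sketch, independent of pyrobot's own network code, that trains a single sigmoid unit on the AND patterns with two different learning rates and counts how many epochs each needs before every output is within an assumed tolerance of 0.4. It is only a rough analogue of what the provided scripts do.

# Rough, self-contained illustration of how epsilon affects learning speed.
# This is not pyrobot's network code: it is a single sigmoid unit trained by
# plain gradient descent on the AND patterns.
import math

PATTERNS = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
TOLERANCE = 0.4   # assumed tolerance for counting an output as correct

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def epochs_to_learn(epsilon, max_epochs=20000):
    w1, w2, bias = 0.1, -0.1, 0.0              # small fixed starting weights
    for epoch in range(1, max_epochs + 1):
        within = 0
        for (x1, x2), target in PATTERNS:
            out = sigmoid(w1 * x1 + w2 * x2 + bias)
            error = target - out
            if abs(error) < TOLERANCE:
                within += 1
            delta = epsilon * error * out * (1 - out)   # delta rule with sigmoid derivative
            w1 += delta * x1
            w2 += delta * x2
            bias += delta
        if within == len(PATTERNS):
            return epoch
    return max_epochs

for eps in (0.1, 1.0):
    print("epsilon %.1f -> %d epochs" % (eps, epochs_to_learn(eps)))

With this toy setup the larger epsilon typically reaches tolerance in far fewer epochs, although in general a learning rate that is too large can overshoot and oscillate.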
One method for teaching a neural network to control a robot, called offline training, involves the following steps. First, hand-code a robot controller. Second, use this controller to collect data for training a neural network. Third, train a neural network on the collected data. Fourth, use the trained network to control the robot and test its performance.
% pyrobot -s PyrobotSimulator -w Tutorial.py -r PyrobotRobot60000 -b wallFollow.py

The -s flag specifies which simulator you would like to use. Each simulator has a number of different worlds, which are specified by the -w flag. The simulator can control multiple robots simultaneously. Each robot has a name specified by the -r flag. Finally, the robot is controlled by a brain, which is specified by the -b flag.
This will open two windows: the pyrobot control window and the simulator window. In the simulator window, select the View menu and choose Trail to see the robot's path. In the pyrobot window, push the Run button to start the robot. Every 500 steps the program will report the average distance of the robot from the wall on its left side. We will use data collected from a program running a controller like this to train a neural network.
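The provided wallFollow.py is the real controller; the sketch below only illustrates the general shape of such a hand-coded brain, and may be a useful starting point when you write your own teacher program later. It assumes the standard pyrobot Brain interface (a step() method and robot.move(translate, rotate)); the sonar indices, distance thresholds, and sign convention are illustrative guesses, so check the provided brain and your robot's actual sonar layout.

# Rough sketch of a hand-coded left-wall follower, in the spirit of the
# provided wallFollow.py (see that file for the real controller).  The sonar
# indices, distance thresholds, and sign convention (positive rotation = turn
# left) are illustrative guesses.
from pyrobot.brain import Brain

class SimpleWallFollow(Brain):
    def step(self):
        # closest reading on the left side and straight ahead (indices are a
        # guess; inspect robot.sonar[0] to see your robot's actual layout)
        left = min([self.robot.sonar[0][i].value for i in (0, 1)])
        front = min([self.robot.sonar[0][i].value for i in (3, 4)])
        if front < 0.5:                   # obstacle ahead: turn away from the wall
            self.robot.move(0.0, -0.3)
        elif left > 1.0:                  # too far from the wall: curve toward it
            self.robot.move(0.3, 0.2)
        elif left < 0.5:                  # too close: curve away
            self.robot.move(0.3, -0.2)
        else:                             # in the sweet spot: go straight
            self.robot.move(0.3, 0.0)

# factory function pyrobot uses to load the brain (same pattern as the
# provided example brains)
def INIT(engine):
    return SimpleWallFollow("SimpleWallFollow", engine)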
To quit, first press the Stop button in the pyrobot window to suspend the brain. Then go to the Robot menu and select Unload Robot. Finally, go to the File menu and select Exit. If the pyrobot window fails to close, then in a terminal window do: killall -9 pyrobot.
% pyrobot -s PyrobotSimulator -w Tutorial.py -r PyrobotRobot60000 -b collectData.py

When you see a message that the data collection is done, stop the robot and exit from pyrobot. You should now have two files, sensorInputs.dat and motorTargets.dat, saved in your directory.
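The provided collectData.py already handles this step; conceptually, a data-collection brain simply runs the hand-coded rules and logs each (sensor inputs, motor commands) pair. The sketch below shows the idea only: the number of sonar readings, the scaling, the file format, and the teacherCommand() placeholder are all assumptions rather than what the provided brain actually does.

# Conceptual sketch of offline data collection: run the hand-coded policy and
# log (sensor inputs, motor commands) pairs.  The real collectData.py may use
# a different set of inputs, scaling, and file format.
from pyrobot.brain import Brain

class CollectData(Brain):
    def setup(self):
        self.inputFile = open("sensorInputs.dat", "w")
        self.targetFile = open("motorTargets.dat", "w")

    def step(self):
        # sensor inputs: a few sonar readings scaled into [0, 1] (count and
        # scale are guesses)
        readings = [min(self.robot.sonar[0][i].value / 3.0, 1.0) for i in range(8)]
        # motor targets: whatever the hand-coded teacher would do here
        translate, rotate = self.teacherCommand(readings)
        self.inputFile.write(" ".join([str(r) for r in readings]) + "\n")
        self.targetFile.write("%f %f\n" % (translate, rotate))
        self.robot.move(translate, rotate)

    def teacherCommand(self, readings):
        # placeholder: the hand-coded wall-following rules would go here
        return 0.3, 0.0

def INIT(engine):
    return CollectData("CollectData", engine)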
% python trainNetwork.py

This will take several minutes to complete. It will execute 1000 epochs of training on the collected data. Watch the error and percent correct during training. Does error consistently go down? Does percent correct consistently go up? After training is complete, you will have a new file in your directory called wallFollow.wts containing the weights and biases of the most successful version of the trained neural network.
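The provided trainNetwork.py uses pyrobot's own neural network code, but the two quantities it reports are easy to state directly. Below is a self-contained sketch of how one epoch's total summed squared error and percent correct could be computed from the collected data; the tolerance of 0.4, the whitespace-separated file format, and the dummy stand-in network are assumptions for illustration only.

# Sketch of the two quantities reported during training: total summed squared
# error over an epoch, and the percentage of output units within tolerance of
# their targets.  This is not the provided trainNetwork.py.
TOLERANCE = 0.4   # assumed; check the value used in trainNetwork.py

def epoch_metrics(inputs, targets, network_output):
    tss = 0.0
    within = 0
    total = 0
    for pattern, target in zip(inputs, targets):
        output = network_output(pattern)          # list of output activations
        for o, t in zip(output, target):
            tss += (t - o) ** 2
            total += 1
            if abs(t - o) < TOLERANCE:
                within += 1
    return tss, 100.0 * within / total

# assumes the .dat files hold whitespace-separated numbers, one pattern per line
inputs  = [[float(x) for x in line.split()] for line in open("sensorInputs.dat")]
targets = [[float(x) for x in line.split()] for line in open("motorTargets.dat")]
constant_net = lambda pattern: [0.5] * len(targets[0])    # dummy stand-in network
print("TSS %.3f, %% correct %.1f" % epoch_metrics(inputs, targets, constant_net))

One such pass over the whole data set is one epoch; trainNetwork.py performs 1000 of them, updating the weights in between.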
% pyrobot -s PyrobotSimulator -w Tutorial.py -r PyrobotRobot60000 -b testNetwork.py

Be sure to "View" the robot's "Trail" in the simulator window before pressing the "Run" button in the pyrobot window. How does the neural network controlled robot perform compared to the hand-coded teacher program? In what ways does the neural network controlled robot behave differently from the teacher?
You will go through the same series of steps as above, but this time using your own hand-coded teacher program. Feel free to explore other worlds in the pyrobot simulator. You can look at the source code of any simulator world in:
/usr/local/pyrobot/plugins/worlds/Pyrobot/

You may even create your own simulator world. If you create your own world it can be loaded directly from your home directory.
You may also explore other robot devices beyond sonar, including light sensors, cameras, and grippers. Below is a summary of how to use each type of device. In order to experiment with a device it must be added to the robot that you are using in a particular world. You can experiment with these devices by typing the commands described below in the command-line of the pyrobot window.
robot.sonar[0][sensorNum].value

More information is available about range sensing in pyrobot.
robot.light[0][sensorNum].value

If you'd like to see a brain that uses light sensors, try running the brain BraitenbergVehicle2b.py in the Braitenberg.py world.
Next, point the robot so that it is facing the blue box in the upper-left corner of the Tutorial.py world. In the camera window, left-click the mouse on the blue color; this will turn all the blue color to red. Notice in the pyrobot window that you've just added a match filter, which takes three parameters for RGB values. Next, in the camera window, select Filter, then Blobify, and then Red. To see all the filters you've created, in the camera window, select Filter and then List filters. The current filters will be displayed in the pyrobot window. If you were doing these steps in a program, you could use these filter commands directly and would only need to do them once at the start of the program.
The filter results will be numbered in the order that you added them. In this example, the match filter will be 0 and the blobify filter will be 1. You can access the camera filter results by doing:
robot.camera[0].filterResults[filterNum]

The blobify filter returns a list of lists. Each sublist represents the information on one blob. The sublists are ordered from largest to smallest blob. Typically we only focus on the sublist numbered 0, representing the biggest blob in view. To get the largest blob's data do:
robot.camera[0].filterResults[1][0]

This will return a list of five values representing the bounding box of the blob and its area: (x1, y1, x2, y2, area).
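For example, once the match and blobify filters above are in place, a brain could steer toward the biggest blob by comparing the blob's horizontal center to the middle of the image. In the sketch below, the image width, the turn gain, and the sign convention (positive rotation turning left) are assumptions to check against your own setup.

# Sketch of steering toward the largest blob, assuming a match filter at
# index 0 and a blobify filter at index 1 (as set up above).
IMAGE_WIDTH = 60.0   # assumed camera width in pixels; check your camera device

def blob_turn_command(robot):
    blobs = robot.camera[0].filterResults[1]       # blobify results, biggest blob first
    if len(blobs) == 0:
        return 0.0                                 # nothing matched: don't turn
    x1, y1, x2, y2, area = blobs[0]                # bounding box of the biggest blob
    center = (x1 + x2) / 2.0
    offset = (IMAGE_WIDTH / 2.0 - center) / (IMAGE_WIDTH / 2.0)   # roughly -1 .. 1
    return 0.4 * offset    # assumes positive rotation turns the robot left

Inside a brain, this might be used as self.robot.move(0.2, blob_turn_command(self.robot)).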
robot.gripper[0].gripperCommand()

You can see a list of all the gripper commands by going to the Device area of the pyrobot window and selecting gripper[0]. Then press the View button.
robot.simulation[0].getPose("RedPioneer")

This will return a list of three values representing its x, y location and heading in radians. To move the robot to a particular location do:
robot.simulation[0].setPose("RedPioneer", x, y, heading)

where x and y are within the bounds of the world and heading is in radians.
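These simulation calls are handy for evaluating a controller: for example, you can record the robot's pose before and after a trial to measure how far it traveled, and then reset it to a fixed starting pose for the next trial. The sketch below uses only the two calls documented above and assumes the robot in your world is named "RedPioneer".

# Sketch: measure straight-line displacement over a trial, then reset the
# robot to its starting pose for the next trial.
import math

start_x, start_y, start_heading = robot.simulation[0].getPose("RedPioneer")

# ... let the brain run for a while ...

end_x, end_y, end_heading = robot.simulation[0].getPose("RedPioneer")
print("displacement: %.2f" % math.hypot(end_x - start_x, end_y - start_y))

# put the robot back where it started
robot.simulation[0].setPose("RedPioneer", start_x, start_y, start_heading)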
Put all of the files you created in the cs81/labs/1 directory. In the summary file provided, be sure to describe what your teacher program does, how you trained the network, and your results.
When you are done, run handin81 to turn in your completed lab work.