In part (a) of this lab, you learned how to create neural networks using conx. In part (b) of this lab, you generated training data based on your find light brain. Now you will put this all together in order to train a neural network to control a robot.
from conx import *
from jyro.simulator import *
Complete the function below. It should open the file you created previously and return a list of patterns, where each pattern is a list containing an input pattern and a target pattern. It should also print the number of patterns read.
def get_inputs_targets(filename):
    """Returns a list of patterns"""
    return []
patterns = get_inputs_targets("training_data.txt")
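For reference, here is one possible completion. It is only a sketch: it assumes part (b) wrote one pattern per line as comma-separated floats, with the final two values being the targets (translation, rotation). Adjust the parsing to match your actual file format.
def get_inputs_targets(filename):
    """Returns a list of [inputs, targets] patterns"""
    patterns = []
    with open(filename) as f:
        for line in f:
            # assumed format: comma-separated floats, targets last
            values = [float(v) for v in line.strip().split(",")]
            inputs, targets = values[:-2], values[-2:]
            patterns.append([inputs, targets])
    print("Read %d patterns" % len(patterns))
    return patterns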
Machine learning tools, neural networks included, will find the most direct way to reduce error. Let's do a little analysis on the data you collected. Write a helper function to tally up what percentage of your training data has the robot moving forward vs. turning left vs. turning right. My data was predominantly forward movement, so an easy way for the network to reduce error was to always output a positive translation value.
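A minimal sketch of such a tally, assuming each target is a [translation, rotation] pair in which positive rotation means turning left, negative rotation means turning right, and zero rotation means moving forward (adjust the tests to your own encoding):
def tally_movements(patterns):
    forward = left = right = 0
    for inputs, (translate, rotate) in patterns:
        if rotate > 0:
            left += 1
        elif rotate < 0:
            right += 1
        else:
            forward += 1
    total = len(patterns)
    print("forward: %.1f%%" % (100.0 * forward / total))
    print("left:    %.1f%%" % (100.0 * left / total))
    print("right:   %.1f%%" % (100.0 * right / total))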
One option for counteracting this is to preprocess the data to balance it. Take all of the data and divide it into separate lists based on the type of movement. Shuffle each of these lists (shuffle is available in Python's random module), then slice off an equal number of examples from each. Combine these balanced sublists back into a single list, and shuffle that as well. The data set will now have a balanced number of examples for each type of movement.
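A sketch of that balancing step, under the same [translation, rotation] target assumption as the tally above:
import random

def balance(patterns):
    # split by movement type (assumed: rotation sign distinguishes them)
    forward = [p for p in patterns if p[1][1] == 0]
    left = [p for p in patterns if p[1][1] > 0]
    right = [p for p in patterns if p[1][1] < 0]
    for group in (forward, left, right):
        random.shuffle(group)
    # keep an equal number of examples from each group
    n = min(len(forward), len(left), len(right))
    balanced = forward[:n] + left[:n] + right[:n]
    random.shuffle(balanced)
    return balanced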
Another potential issue in machine learning is that the raw data values may not be very distinct or may not match the range of the available activation functions. For example, in my data set the translation values are either 0.3 (slow forward) or 0.0 (no translation when turning in place). The activation function tanh outputs values in the range [-1, 1], however, and 0 is a hard value for tanh to hit reliably, since it requires the unit's net input to be exactly zero. You may want to transform your data to match the profile of the activation function more closely. For instance, we could transform all of the 0.0 translations to -1.0 and all of the 0.3 translations to 1.0.
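A sketch of that transformation, assuming translation values of exactly 0.0 or 0.3 as in my data (substitute the values from your own data set):
def transform(patterns):
    transformed = []
    for inputs, (translate, rotate) in patterns:
        # map 0.3 -> 1.0 and 0.0 -> -1.0 to match tanh's range
        translate = 1.0 if translate > 0 else -1.0
        transformed.append([inputs, [translate, rotate]])
    return transformed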
You should experiment with preprocessing your data, either by balancing the types of movement or by transforming the data to better fit the activation function (or try both!). Describe how well your network learns on the original data compared with the preprocessed data.
Remember that if you transform the data for the benefit of the neural network, then you'll need to undo this transformation before using the output of the network to control the robot.
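For example, a small helper that undoes the 0.0/0.3 transformation sketched above before the value is passed to robot.move (again assuming those particular values):
def untransform_translate(value):
    # network output near 1.0 means translate 0.3; near -1.0 means 0.0
    return 0.3 if value > 0 else 0.0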
Create a neural network with three layers: input, hidden, and output. The inputs will represent the robot's sensors and the outputs will represent the robot's actions. The hidden layer will help to transform the sensors into the appropriate actions.
The network should have as many inputs as you used in your find light brain. For example, in my program I used 2 lights and the 6 front sonars for a total of 8 inputs. You may have used a different number of sonars, and therefore have a different input size.
Start with a hidden layer size of 5, but you can experiment with this to see what size works best.
The output layer size should be 2, since this layer represents the translation and rotation amounts to move the robot.
Use the "mse" loss function (which stands for Mean-Squared Error) and the "adam" optimizer (which is an adaptive gradient descent procedure).
robot_net = Network("FindLightController")
# complete the network definition here
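One way to complete the cell above, sketched under my assumptions: 8 inputs (2 lights plus 6 front sonars), tanh activations as discussed earlier, and conx's compile method taking the loss via its error keyword. Adjust the input size to match your own brain.
robot_net.add(Layer("input", 8))
robot_net.add(Layer("hidden", 5, activation="tanh"))
robot_net.add(Layer("output", 2, activation="tanh"))
robot_net.connect()
robot_net.compile(error="mse", optimizer="adam")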
We will use the patterns you read in as the dataset for this network. You can choose to create a validation set if you'd like. We can also shuffle the data.
#Uncomment these lines once you have a dataset
#robot_net.set_dataset(patterns)
#robot_net.dataset.shuffle()
#robot_net.dataset.summary()
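If you want a validation set, conx can hold out part of the dataset; for example (assuming conx's dataset split method):
#robot_net.dataset.split(0.25)  # hold out 25% of the patterns for validation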
Let's train the network for 5000 epochs, while attempting to reach an accuracy of 80%, with a tolerance of 0.1, and a report rate of 100 epochs.
Don't worry if your final performance isn't perfect. This is a difficult problem. As a point of reference, my network achieves a training accuracy of 76% and a validation accuracy of 23% after 5000 epochs.
#Uncomment these lines once you have a network and a dataset ready for training
#robot_net.reset() # resets weights, starts training from scratch
#robot_net.train(epochs=5000, accuracy=0.8, tolerance=0.1, report_rate=100)
Training can take several minutes, so save the resulting weights so that you don't have to retrain each time.
robot_net.save("robot_net")
Complete the brain begun below.
def network_brain(robot):
    lights = robot["light"].getData()
    sonars = robot["sonar"].getData()
    # normalize the sonar data in the same way you did before, grabbing only the sonars you need
    # combine the light data with the normalized sonar data into a list called inputs
    outputs = robot_net.propagate(inputs)
    robot.move(outputs[0], outputs[1])
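One possible completion, shown only as a sketch. It assumes the 6 front sonars sit at indices 1 through 6 of the 16, that sonar readings are capped at 3 meters and scaled to [0, 1] (substitute whatever normalization you used in part (b)), and that the translation transform from earlier must be undone with the untransform_translate helper sketched above.
def network_brain(robot):
    lights = robot["light"].getData()
    sonars = robot["sonar"].getData()
    # assumed normalization: cap at 3 meters, scale to [0, 1]
    normalized = [min(s, 3.0) / 3.0 for s in sonars[1:7]]
    inputs = list(lights) + normalized
    outputs = robot_net.propagate(inputs)
    translate = untransform_translate(outputs[0])  # undo the training transform
    robot.move(translate, outputs[1])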
def make_world(physics):
    physics.addBox(0, 0, 4, 4, fill="backgroundgreen", wallcolor="gray")
    physics.addBox(1.75, 2.9, 2.25, 3.0, fill="blue", wallcolor="blue")
    physics.addLight(2, 3.5, 1.0)
def make_robot():
    robot = Pioneer("Pioneer", 2, 1, 0)  # parameters are x, y, heading (in radians)
    robot.addDevice(Pioneer16Sonars())
    light_sensors = PioneerFrontLightSensors(3)  # parameter defines max range
    robot.addDevice(light_sensors)
    return robot
#Uncomment this line if you want to use saved weights
#robot_net.load("robot_net") #re-load a saved network
#Uncomment these once you have a trained neural network that you are ready to test
#robot = make_robot()
#vsim = VSimulator(robot, make_world)
#robot.brain = network_brain
Describe the robot's behavior when using the original manual program.
Next, describe the robot's behavior when using the neural network trained on the data generated by your manual program.
Compare and contrast the two. Does the network's behavior mimic the original program's behavior? Is it better? Worse? In what circumstances?
Be sure to save this notebook to the repo.