Don't forget your statement of sources. Remember to list any outside reading you consulted!
For this homework, all of your plots must be computer-generated and fully labeled, with the exception of the SVM plot in question 4.
Describe how to modify a classic decision tree algorithm to obtain oblique splits (i.e., splits that are not parallel to an axis).
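To make the distinction concrete (this is an illustration of the two split types, not the answer to the question), an axis-parallel split tests a single feature against a threshold, while an oblique split tests a weighted combination of features. The function names and parameters below are hypothetical:

```python
def axis_split(x, feature, threshold):
    """Classic decision-tree test: send x left iff one feature <= threshold."""
    return x[feature] <= threshold

def oblique_split(x, w, t):
    """Oblique test: send x left iff it falls on one side of the
    hyperplane w . x = t (a linear combination of ALL features)."""
    return sum(wi * xi for wi, xi in zip(w, x)) <= t
```

Your answer should describe how the tree-growing procedure would search for the weight vector w and threshold t at each node, since that search is what distinguishes oblique trees from the classic algorithm.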
(a) (10 pts) (Adapted from Sutton and Barto Exercise 3.5) Imagine that you are designing a robot to run a maze. You decide to give it a reward of +1 for escaping the maze and a reward of zero at all other times. The task seems to break down naturally into episodes -- the successive runs through the maze -- so you decide to treat it as an episodic task, where the goal is to maximize the expected total reward R_t = r_{t+1} + r_{t+2} + r_{t+3} + ... + r_T, where T is the final time step of an episode. After running the learning agent for a while, you find that it is showing no improvement in escaping from the maze. Something is going wrong. Does the reward function effectively communicate the goal to the agent? If not, can you suggest another reward function that will work? If the reward function is fine, what else is going wrong?
(b) (20 pts) (Adapted from Sutton and Barto Exercise 3.10.) Consider reinforcement learning in a gridworld where rewards are positive for goals, negative for running into the edge of the world, and zero otherwise. Are the signs of these rewards important, or only the intervals between them? Prove that adding a constant C to all the rewards simply adds a constant, K, to the values of all states, and thus does not affect the relative value of any policies. What is K in terms of C and gamma? In your proof, you will find it useful to use the following equations we studied in class for the expected discounted return and value of a state:
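As a numerical sanity check before you write the proof (this is not a substitute for the proof), you can verify that adding a constant C to every reward shifts the discounted return by a constant. The reward sequence, gamma, C, and horizon T below are all illustrative values; the horizon merely truncates the infinite sum:

```python
import random

# Illustrative check: shifting every reward by C shifts the discounted
# return by K = C / (1 - gamma) in the continuing (infinite-horizon) case.
gamma, C, T = 0.9, 1.0, 1000

rewards = [random.uniform(-1, 1) for _ in range(T)]

def discounted_return(rs, gamma):
    """Expected discounted return: sum over k of gamma^k * r_{k+1}."""
    return sum(gamma**k * r for k, r in enumerate(rs))

G = discounted_return(rewards, gamma)
G_shifted = discounted_return([r + C for r in rewards], gamma)
K = C / (1 - gamma)
# G_shifted - G matches K up to the truncation error gamma**T.
```

Note that the constant shift is the same for every state regardless of the reward sequence, which is exactly why the relative values of policies are unaffected.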
Consider the following labeled data set, where each row gives (x, y, class):
2, 4, +
4, 2, +
4, 4, -
4, 6, +
6, 2, -
6, 4, +
(a) (6 pts) Plot the data twice and draw the decision surface that separates the two classes (+ and -) for both k=1 and k=3.
(b) (2 pts) How would you classify a point at (8,1) for both k=1 and k=3?
(c) (2 pts) How would you classify a point at (8,8) for both k=1 and k=3?
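If you want to sanity-check your hand-drawn decision surfaces, the k-NN rule can be sketched in a few lines. This assumes Euclidean distance and majority vote (ties, if any, are broken arbitrarily); it is a checking aid, not a replacement for the required plots:

```python
from collections import Counter
from math import dist

# The labeled data set from the problem: ((x, y), class).
data = [((2, 4), '+'), ((4, 2), '+'), ((4, 4), '-'),
        ((4, 6), '+'), ((6, 2), '-'), ((6, 4), '+')]

def knn_classify(query, k):
    """Classify `query` by majority vote among its k nearest neighbors."""
    neighbors = sorted(data, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

Evaluating `knn_classify` at the query points in (b) and (c) for k=1 and k=3 lets you confirm your answers.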
Russell and Norvig Exercise 20.12. Draw the SVM plots by hand, being sure to label all parts of your diagram (e.g., title, axes, margin, hyperplane, etc.).
In this problem, you will gain familiarity with the WEKA (Waikato Environment for Knowledge Analysis) machine learning toolkit. It is widely used in the machine learning community and provides both a graphical interface and a Java API to a wide variety of standard machine learning algorithms. Using WEKA will help reinforce the material we covered in class.
You must download and install WEKA from the following location: http://www.cs.waikato.ac.nz/ml/weka/. The "Getting Started" section of that page has links for information on system requirements, how to download the software, and documentation. WEKA is written in Java and should run on any platform with Java 1.4 or higher.
Here is a site that contains documentation on WEKA:
When you run WEKA, you should use the "WEKA Explorer" when answering these questions. One problem that I sometimes run into is that WEKA needs more memory than Java gives it by default. You can use Java's "-Xmx" option to allocate more memory to the virtual machine. For example, "java -Xmx200M ..." would have Java allocate a maximum of 200 MB to the virtual machine.
Download the hw6-mushroom.arff data file and load it into WEKA in the Preprocess tab. The mushroom task involves identifying poisonous or edible mushrooms from their features. In the Classify tab, try out the following classifiers, using 10-fold cross-validation to get accurate estimates of each algorithm's performance:
(a) (8 pts) Create a table reporting the classification accuracy of each algorithm on the mushroom data.
(b) (5 pts) On this particular problem, WEKA reported the following performance information for two of the algorithms (note that your numbers in part (a) may vary):
J48 Decision Tree
  [WEKA performance output (TP/FP rates) not recoverable from the source]
1-Nearest-Neighbor Classifier
  [WEKA performance output (TP/FP rates) not recoverable from the source]
Even though the algorithms both have the same reported accuracy, their True/False positive rates are different. For this particular problem (identifying poisonous mushrooms), is one algorithm better? If so, which one? Justify your answer by explaining why one is better or why they're both equally good.
(c) (8 pts) Run the J48 algorithm with and without reduced error pruning on the hw6-zoo.arff data. Explore varying the numFolds variable for reduced error pruning between 2 and 10 folds. You do not need to run every value; just choose your sample points intelligently.
Plot the performance accuracy as a function of the number of folds in a properly-formatted graph (axes labeled, clear title, etc.). (Hand-drawn plots are not acceptable.) In a few sentences, describe the effect of reduced error pruning on the induced decision tree. Why does reduced error pruning improve the classification accuracy of the decision tree? (Hint: what machine learning phenomenon is occurring?) How does the numFolds variable affect the pruning and why?
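One way to produce the required computer-generated plot is with matplotlib. The fold counts and accuracies below are placeholders only -- substitute the values WEKA reports for your runs; the file name is also just an example:

```python
import matplotlib
matplotlib.use("Agg")  # render to a file without needing a display
import matplotlib.pyplot as plt

# PLACEHOLDER data -- replace with your own WEKA results.
num_folds = [2, 4, 6, 8, 10]
accuracy = [90.0, 91.0, 92.0, 92.5, 93.0]  # hypothetical accuracies (%)

plt.plot(num_folds, accuracy, marker="o")
plt.xlabel("numFolds (reduced error pruning)")
plt.ylabel("Cross-validated accuracy (%)")
plt.title("J48 accuracy vs. reduced-error-pruning folds (hw6-zoo)")
plt.savefig("pruning_accuracy.png")
```

Any plotting tool is fine as long as the axes, title, and labels are all present.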
(d) (10 pts) For this problem, you will create a classifier for some unknown data. The training data is contained in hw6-unknown2-train.arff. You should attempt to construct the best model you can for this training data. You can use cross-validated accuracy to determine your chosen model, or any other evaluation criteria you want. You can use any WEKA classifier you want, even those other than the ones we've used in this homework, with any settings. Some classifiers perform dramatically differently depending on their settings.
Report the best 10-fold cross-validated accuracy you can on this training data. What algorithm did you use to create the model? Justify any parameter settings you changed. Include a paragraph describing the algorithm you used for creating the model. If you settled on one that we didn't cover in class, you will have to do a bit of outside reading to understand it and be able to describe it. Tell why you think the algorithm you chose did the best.
(e) (4 pts) The real test of machine learning algorithms is taking a learned model and applying it to never-before-seen data to test its effectiveness.
The file hw6-unknown2-test-unlabeled.arff contains a number of unlabeled instances drawn from the same distribution as the unknown training data you saw in (d) above. Once you've developed your learned model, you can apply it to the new test data. Make sure that the training data set is loaded into WEKA and your classifier options are all set as before. (If you do this immediately after (d), you should only have to do the next instruction.) Change the test option from cross-validation to "Supplied test set" and set it to use the hw6-unknown2-test-unlabeled.arff file for testing. Also, under "More options...", enable "Output predictions" for the data. When you click "Start", WEKA will train your model on the training data and output predictions for the unlabeled data.
The classifier output should now have predicted labels for every test instance. Ignore that WEKA reports every prediction as an error -- it does this because the test data does not have any class labels. The table will look something like the following (the predicted labels are contained in the third column):
=== Predictions on test set ===
inst#, actual, predicted, error, probability distribution
    1    ?    1:f    +    *0.926  0.074
    2    ?    2:t    +     0.045 *0.955
    3    ?    2:t    +     0.045 *0.955
  ...  ...    ...  ...       ...    ...
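If you want to double-check which labels you are submitting, a few lines of Python can pull the predicted class out of each row. This hypothetical helper (not required for the assignment) assumes the column layout shown above -- inst#, actual, predicted, error, probability distribution:

```python
def predicted_labels(table_lines):
    """Extract the predicted class from each data row of a WEKA
    'Predictions on test set' table (e.g., '1:f' -> 'f')."""
    labels = []
    for line in table_lines:
        parts = line.split()
        if not parts or not parts[0].isdigit():
            continue  # skip the header and the trailing '...' row
        labels.append(parts[2].split(":")[1])
    return labels

# Example rows taken from the table above.
rows = ["1 ? 1:f + *0.926 0.074",
        "2 ? 2:t + 0.045 *0.955"]
```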
Copy this entire table (ONLY this table, trimming whitespace above and below it) into a text file called <LASTNAME><FIRSTNAME>HW6.out (where <LASTNAME> and <FIRSTNAME> are your names, of course; for example, my file would be called EatonEricHW6.out), and submit it using the handin63 program.
I'll evaluate the performance of your classifier by comparing your predictions against the true labels for the test data. The three people with the highest test accuracies will get 5 additional extra credit points!