If you want to work alone do:

setup63-Lisa labs/02 none

If you want to work with a partner, then one of you needs to run the following command while the other one waits until it finishes:

setup63-Lisa labs/02 partnerUsername

Once the script finishes, the other partner should run it on their account.
cd ~/cs63/labs/02
cp -r /home/bryce/public/cs63/labs/02/* .
git add *
git commit -m "lab2 start"
git push
cd ~/cs63/labs/02
git pull
The objectives of this lab are to:
You will implement the following local search methods:
You will need to modify the following Python files:
A traveling salesperson tour visits each city in a collection exactly once before returning to the starting point. The objective is to minimize the total distance traveled. We will represent a tour by listing the cities in the order visited.
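Since the cost of a tour is just the sum of the distances of its legs (including the leg back to the start), it can be computed in a few lines. The sketch below assumes straight-line distances between (x, y) points; TSP.cost() in TSP.py may use a different distance measure, and the helper name tour_cost is invented for illustration:

```python
import math

def tour_cost(tour, coords):
    """Total distance of a closed tour.

    tour: list of city names in the order visited.
    coords: dict mapping city name -> (x, y).
    Assumes straight-line distance; the real TSP.cost() may differ.
    """
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = coords[tour[i]]
        # The index wraps around so the last leg returns to the start.
        x2, y2 = coords[tour[(i + 1) % len(tour)]]
        total += math.hypot(x2 - x1, y2 - y1)
    return total
```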
The class TSP in the file TSP.py represents a problem instance and provides several useful methods. TSP.random_tour() returns a random state from which we can begin a local search. TSP.cost(tour) evaluates the total distance traveled to complete a tour. TSP.get_neighbor(tour) calls functions you will write to return a new tour in which two cities have swapped positions.
A TSP instance is constructed by passing in the name of a JSON file specifying the latitude and longitude of locations that must be visited. Example JSON files are located in the coordinates/ folder. The __init__() method also takes a string specifying how to select a neighboring state: passing "best" or "random" will determine what gets called by the TSP.get_neighbor() method.
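For reference, a coordinates file along these lines would fit that description. The place names, values, and exact schema here are invented for illustration, so check the files in the coordinates/ folder for the real format:

```json
{
    "Philadelphia": [39.95, -75.17],
    "Pittsburgh": [40.44, -79.99],
    "Harrisburg": [40.27, -76.88]
}
```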
The TSP class leaves three methods for you to implement:
Neighbors of a tour are produced by swapping the positions of any two cities in the ordered list.
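A random neighbor under this definition can be generated by picking two distinct positions and swapping them. This is a sketch of one way to do it; the function name is illustrative, not necessarily one of the methods TSP.py asks you to write:

```python
import random

def random_swap_neighbor(tour):
    """Return a copy of tour with two randomly chosen cities swapped."""
    # Pick two distinct positions in the tour.
    i, j = random.sample(range(len(tour)), 2)
    neighbor = list(tour)  # copy so the original tour is untouched
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor
```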
You are strongly encouraged to write a main function for TSP.py in order to test your implementation of the neighbor functions.
tour = tsp.random_tour()
cost = tsp.cost(tour)
return tour, cost

This will allow you to run HillClimbing.py without generating errors:
./HillClimbing.py coordinates/United_States_25.json

Doing so will print the random tour and its cost, and will also create a file that plots the tour in latitude/longitude coordinates. By default, the plot is saved in the file map.pdf, but you can change this with the -plot command line argument:
./HillClimbing.py coordinates/India_15.json -plot random_tour.pdf

Your job is to implement the following hill climbing algorithm:
state = random tour
best = state
while evals < MAX_EVALS:
    if random restart:
        state = random tour
    else if random move:
        state = random neighbor of state
    else:
        neighbor = generate (best or random) neighbor of state
        if cost(neighbor) < cost(state):
            state = neighbor
    if cost(state) < cost(best):
        best = state
end while
return best, cost(best)

Note that because the TSP class is counting the number of times cost() gets called, you should save the value rather than calling it again when possible. Several of the parameters used above are specified in a config file. By default, HillClimbing.py reads HC_defaults.json. You should copy this file and modify it, passing in the new file via the -config command line argument:
cp HC_defaults.json HC_config.json
*edit HC_config.json*
./HillClimbing.py coordinates/South_Africa_10.json -config HC_config.json
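The pseudocode above can be sketched as standalone Python. In this sketch the TSP methods are replaced by plain function arguments, only the random-neighbor option is shown, and the parameter names (max_evals, restart_prob, move_prob) are illustrative rather than the keys actually used in HC_defaults.json:

```python
import random

def hill_climb(cost, random_state, random_neighbor,
               max_evals=1000, restart_prob=0.05, move_prob=0.10):
    """Hill climbing with random restarts and random moves (sketch)."""
    state = random_state()
    state_cost = cost(state)              # cache the cost; don't re-call
    best, best_cost = state, state_cost
    for _ in range(max_evals):
        r = random.random()
        if r < restart_prob:              # random restart
            state = random_state()
            state_cost = cost(state)
        elif r < restart_prob + move_prob:  # random (possibly uphill) move
            state = random_neighbor(state)
            state_cost = cost(state)
        else:                             # ordinary hill-climbing step
            neighbor = random_neighbor(state)
            neighbor_cost = cost(neighbor)
            if neighbor_cost < state_cost:
                state, state_cost = neighbor, neighbor_cost
        if state_cost < best_cost:        # track the best state seen
            best, best_cost = state, state_cost
    return best, best_cost
```

With the actual TSP class, cost would be tsp.cost, random_state would be tsp.random_tour, and random_neighbor would be tsp.get_neighbor.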
The config file for simulated annealing includes a slightly different set of parameters. In particular, config["INIT_TEMP"] specifies the starting temperature, and config["TEMP_DECAY"] gives the geometric rate at which the temperature decays. The RUNS and STEPS parameters specify how many random starting points simulated annealing should be run from, and how long each run should last.
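To make those two parameters concrete, here is a sketch of how they commonly interact, assuming the standard Metropolis acceptance rule; the variable values are illustrative, not the defaults from the config file:

```python
import math
import random

def accept(delta, temp):
    """Metropolis rule: always accept an improvement (delta <= 0);
    accept a worsening move with probability e^(-delta / temp)."""
    return delta <= 0 or random.random() < math.exp(-delta / temp)

# Geometric decay: multiplying by TEMP_DECAY each step means step k
# of a run uses temperature INIT_TEMP * TEMP_DECAY**k.
init_temp, temp_decay = 100.0, 0.95   # illustrative values only
temp = init_temp
for step in range(3):
    temp *= temp_decay                # after 3 steps: 100 * 0.95**3
```

High temperatures make uphill moves likely (exploration); as the temperature decays toward zero the search becomes nearly pure hill climbing.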
Stochastic beam search considers all neighbors of all individual candidates in the population and preserves each with probability determined by its cost: lower-cost tours are more likely to survive. It is recommended that you add a helper function for beam search that evaluates the cost of each neighbor and determines its probability of selection based on the current temperature, then randomly selects the neighbors that will appear in the next iteration. For the larger maps, the number of neighbors grows quite large, so it may be impractical to evaluate the cost of every neighbor. Your helper function should therefore consider only a random subset of each individual's neighbors. The size of this subset is given by the MAX_NEIGHBORS parameter in the beam search config file.
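One reasonable way for such a helper to turn costs into selection probabilities is a Boltzmann (softmax) weighting, where lower-cost tours get higher probability and the temperature controls how greedy the selection is. This is a sketch under that assumption, and selection_probs is an invented name; your lab may specify a different formula:

```python
import math

def selection_probs(costs, temp):
    """Map tour costs to selection probabilities.

    Lower cost -> higher probability; higher temp -> flatter
    (more uniform) distribution. One possible weighting, not
    necessarily the one your assignment requires.
    """
    # Shift by the minimum cost first so exp() cannot overflow.
    m = min(costs)
    weights = [math.exp(-(c - m) / temp) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]
```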
A very useful Python function for stochastic beam search is numpy.random.multinomial. This function takes a number of samples and a list representing a discrete probability distribution. It returns a list giving the number of times each element was drawn over those random samples. For example:
>>> from numpy.random import multinomial
>>> for i in range(5):
...     print(multinomial(10, [.4, .2, .1, .3]))
...
[6 2 0 2]
[6 1 0 3]
[5 2 1 2]
[4 1 1 4]
[5 2 2 1]
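Those counts map directly onto building the next population: each count says how many copies of the corresponding candidate survive. A sketch with an invented helper name:

```python
from numpy.random import multinomial

def sample_population(candidates, probs, pop_size):
    """Draw the next beam-search population.

    multinomial returns, for each candidate, how many copies of it
    appear among pop_size random draws from the distribution probs.
    """
    counts = multinomial(pop_size, probs)
    next_pop = []
    for candidate, count in zip(candidates, counts):
        next_pop.extend([candidate] * count)
    return next_pop
```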
You are encouraged to try automating this exploration process. Parameter tuning for new algorithms is in fact one of the most common uses of local search! However, if you plan to run long jobs on lab machines please be courteous to other users. Monitor your memory usage with the program htop, and make sure that Python isn't hogging all of a lab machine's memory unless you are sitting at that machine. When running long jobs on a shared machine, it also helps to use the nice command to give other users priority, but nice does not absolve you from the need to monitor your memory footprint.
cd ~/cs63/labs/02
git add TSP.py HillClimbing.py SimulatedAnnealing.py BeamSearch.py best_params.txt
git commit
git push