I have provided a scheme implementation of a simulated vacuum world. In this world, a vacuuming agent gets a three-element perception list on each turn. The first element is true (#t in scheme) when the machine has bumped into a wall and false (#f in scheme) otherwise. The second element is true when the machine senses dirt underneath it and false otherwise. The third is true when the machine is at its home location and false otherwise.
A vacuuming agent can execute one of five actions on each turn: go forward one unit, turn left 90 degrees, turn right 90 degrees, suck up dirt, and turn itself off. Each of these actions is represented as a scheme symbol ('forward, 'left, 'right, 'suck, 'shutoff).
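For reference, a pure reflex agent is simply a function from the current percept list to one of these action symbols. Here is a minimal sketch (the names and the particular rules are hypothetical; the example agent provided in vacuum.ss may differ):

```scheme
;; Hypothetical minimal reflex agent: maps the current percept
;; directly to an action, with no memory between turns.
(define (tiny-reflex-agent percept-list)
  (let ((wall (list-ref percept-list 0))   ; #t if bumped a wall
        (dirt (list-ref percept-list 1)))  ; #t if dirt underneath
    (cond (dirt 'suck)       ; always clean up dirt first
          (wall 'left)       ; blocked: turn away from the wall
          (else 'forward)))) ; otherwise keep moving
```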
The goal for a vacuuming agent is to clean up the room and return to the home location. The agent will receive 100 points for each piece of dirt vacuumed up, minus 1 point for each action taken, and minus 1000 points if it is not at home when it shuts itself off or the simulation ends.
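The scoring rule can be summarized as a small function. This is just a sketch to make the trade-offs concrete (the provided simulator computes the score for you; this function is not part of the lab code):

```scheme
;; Sketch of the scoring rule: +100 per piece of dirt vacuumed,
;; -1 per action taken, -1000 if the agent is not home at the end.
(define (score dirt-vacuumed actions-taken ended-at-home?)
  (+ (* 100 dirt-vacuumed)
     (- actions-taken)
     (if ended-at-home? 0 -1000)))
```

For example, cleaning up 3 pieces of dirt in 50 actions and finishing at home scores 250, while the same run ending away from home scores -750. Note that the penalty for not being home outweighs the reward for several pieces of dirt, so returning home matters.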
The environment consists of a grid of squares. Some squares contain walls; the rest are open, and some of the open squares contain dirt. Each forward action moves the agent one square in the direction it is facing unless a wall is in the way, in which case the agent does not move but will sense the wall. A suck action always cleans up any dirt in the agent's current square. A shutoff action ends the simulation; the simulation also ends when a designated number of steps has been reached.
The agent always begins at position (1,1), its home location, facing east. The size of the room can be varied, and dirt is randomly placed in 25% of the open squares.
Copy the scheme files provided in the directory below to your home directory.
~meeden/Public/cs63/lab1-agents/

There are four files: agent.ss, env.ss, vacuum.ss, and init.ss. The main file, and the one you will work with, is vacuum.ss. In this file, I have written a simple reflex agent as an example. To test this agent, use the run-test function described below.
The run-test function expects an agent program, an agent name, a number of steps, a height of the environment, a width of the environment, and a boolean that designates whether each step should be shown or not.
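For example, a call like the following would run an agent for 50 steps in a 6-by-8 room, displaying each step. (The agent name reflex-agent and these parameter values are illustrative assumptions, not from the provided files; substitute the actual agent name defined in vacuum.ss, and load the files first.)

```scheme
;; Illustrative call to run-test; argument values are assumptions.
(run-test reflex-agent 'reflex 50 6 8 #t)
```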
Problem 1: Implement a better pure reflex agent than the one I provided. Call this agent better-reflex-agent. In your comments, explain what prevents a reflex agent from performing well at this task.
Problem 2: Implement an agent with internal state that always visits every square in the environment and then returns home and shuts itself off. Call this agent state-agent. In creating your own state information, you may assume that the agent always begins at position (1,1) facing east. Your agent should be able to work equally well in any rectangular environment. In your comments explain your agent's overall strategy as well as the purpose of each state variable you create.
Recall that you can use a let expression in scheme to create state. A let inside a lambda exists only for the current function call, but a let outside a lambda retains its state between function calls. For example:
(define state-agent
  ;;; state variables can go in the let below
  (let ( ... )
    (lambda (percept-list)
      (let ((wall (list-ref percept-list 0))
            (dirt (list-ref percept-list 1))
            (home (list-ref percept-list 2)))
        ;;; helper functions can be defined here
        (define local-helper
          (lambda (...) ...))
        ;;; main body of the state-agent program
        (cond ...)))))

Use set! to update the state variables. You may also create local helper functions by using define before the body of the main function.
You may want to reset the state variables before you issue the 'shutoff action so that you can easily run multiple tests.
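To see the let-over-lambda idiom and the reset trick in isolation, here is a self-contained sketch (the agent name and its behavior are hypothetical) of an agent that counts its own steps, persists that count between calls, and resets it before shutting off:

```scheme
;; Hypothetical step-counting agent: the let sits outside the
;; lambda, so steps persists across calls; set! updates it.
(define counting-agent
  (let ((steps 0))
    (lambda (percept-list)
      (let ((home (list-ref percept-list 2)))
        (set! steps (+ steps 1))
        (cond ((and home (>= steps 3))
               (set! steps 0)   ; reset state before shutting off
               'shutoff)
              (else 'forward))))))
```

Because the counter is reset just before 'shutoff is returned, calling the agent again after a test starts it from a fresh state, which is exactly why resetting before shutoff makes repeated runs easy.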
Use cs63handin to submit your revised version of the vacuum.ss file. You should not modify any of the other files.