Homework
1 - State Evaluation
Introduction
In this lab, you will construct an evaluation function for a reflex agent playing Pacman. Recall that a reflex agent uses only the current state of the world to make decisions and thus acts greedily: it selects whatever looks best right now.
Coding
Improve the ReflexAgent in multiAgents.py to play respectably by modifying its evaluationFunction. The provided reflex agent code includes some helpful examples of methods that query the GameState for information. A capable reflex agent will have to consider both food locations and ghost locations to perform well. Your agent should easily and reliably clear the testClassic layout. You can see how your agent does using just the score of future states by running the following command:
python pacman.py -p ReflexAgent -l testClassic
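The food/ghost tradeoff can be sketched outside the project code. The helper below is a hypothetical, self-contained scorer over a Pacman position, a list of food coordinates, and a list of ghost coordinates; it is not the GameState API, and a real evaluationFunction would pull these values from the state objects instead.

```python
def evaluate(pacman_pos, food_list, ghost_positions):
    """Toy state scorer: reward closeness to food, penalize adjacent ghosts.

    Hypothetical sketch, not the project's GameState API.
    All positions are (x, y) tuples of ints.
    """
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    score = 0.0
    if food_list:
        nearest_food = min(manhattan(pacman_pos, f) for f in food_list)
        # Reciprocal feature: closer food contributes more (see Hints below).
        score += 1.0 / (nearest_food + 1)
    for g in ghost_positions:
        # A ghost one step away can kill us next move: large penalty.
        if manhattan(pacman_pos, g) <= 1:
            score -= 100.0
    return score
```

A complete agent would also fold in the successor state's game score and handle scared ghosts (via each ghost state's scaredTimer) rather than always fleeing.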
Try out your reflex agent on the default mediumClassic layout with one ghost or two (and with animation off to speed up the display). Recall that you can change the speed of the game by adjusting the --frameTime option:
python pacman.py --frameTime 0.05 -p ReflexAgent -k 1
python pacman.py --frameTime 0.05 -p ReflexAgent -k 2
How does your agent fare? With two ghosts on the default board, it will likely die often unless your evaluation function is quite good.
Hints
- Remember that newFood has an asList() method.
- As features, try the reciprocal of important values (such as distance to food) rather than the values themselves.
- The evaluation function you're writing evaluates state-action pairs; in later parts of the project, you'll be evaluating states.
- The util file has a Manhattan distance function you can use (util.manhattanDistance).
- You may find it useful to view the internal contents of various objects for debugging. You can do this by printing the objects' string representations. For example, you can print newGhostStates with print(newGhostStates).
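To see why the reciprocal hint matters: a raw distance grows without bound, so a single far-away pellet can swamp the rest of the evaluation, while 1/(d + 1) stays in (0, 1] and changes fastest exactly when food is near. A quick comparison, using a plain Manhattan distance as a stand-in for util.manhattanDistance:

```python
def manhattan(a, b):
    # Same metric as util.manhattanDistance in the project code.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

pacman = (3, 3)
near, far = (4, 3), (20, 20)

# Raw distances: the far pellet dominates any sum of distances.
print(manhattan(pacman, near), manhattan(pacman, far))  # 1 34

# Reciprocal features: bounded, and the near pellet matters most.
print(1 / (manhattan(pacman, near) + 1))  # 0.5
print(1 / (manhattan(pacman, far) + 1))   # ~0.029
```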
Options: Default ghosts are random; you can also play for fun against slightly smarter directional ghosts using -g DirectionalGhost. If the randomness is preventing you from telling whether your agent is improving, you can use -f to run with a fixed random seed (the same random choices every game). You can also play multiple games in a row with -n. Turn off graphics with -q to run lots of games quickly.
Grading: Your agent will be tested on the openClassic layout 10 times. You will receive 0 points if your agent times out or never wins. You will receive 1 point if your agent wins at least 5 times, or 2 points if your agent wins all 10 games. You will receive an additional 1 point if your agent's average score is greater than 500, or 2 points if it is greater than 1000. You can try your agent out under these conditions with:
python autograder.py -q q1
Submitting
Submit your completed version of multiAgents.py through Gradescope. The Gradescope tests are the same as the local autograder.