
IT159: Artificial Intelligence

Lab#1/Assignment#1: Designing Pac-Man Agents

Introduction
In this lab assignment, you will familiarise yourself with the Pac-Man World. Over the next few
assignments, you will implement your Pac-Man agent to find paths through its maze world in order
to reach a particular location and collect food efficiently. You will also build general search
algorithms and apply them to Pac-Man scenarios.

The code for the assignment consists of several Python files, some of which you will need to read
and understand to complete the assignments, and some of which you can ignore. The Pac-Man code
was developed by John DeNero and Dan Klein at UC Berkeley for the class CS188. The code is
written in Python 3.x, so you will need a Python 3 interpreter to run the lab assignment.

Figure 1: The Pac-Man World

Step 1: Download Code


The code you will be using can be downloaded as a zip archive on Blackboard, namely search.zip.
Extract the files into a directory/folder on your computer. A folder called “search” will be created,
and in it you will find several dozen files. To ensure that you have a working version of the files, run
the following command:

python pacman.py

You should see a game screen pop up (see Figure 1). This is a basic Pac-Man game. In the game,
you control the movements of Pac-Man using arrow keys on your keyboard. Go ahead and try it.

The Pac-Man world is laid out as corridors (with shiny blue walls) where Pac-Man can move
about. Little white pellets are sometimes littered throughout the corridors. This is food for Pac-
Man (larger pellets are power food, or capsules; try to figure out what those are for). In the world
shown in Figure 1, Pac-Man has adversaries: colored ghosts that eat Pac-Man when it runs into
them. Ghosts move about without eating any food. When Pac-Man is eaten, it dies, the game
ends, and the screen disappears.

Step 2: Pac-Man Agent


In this and the next few assignments, you will be writing agent programs to control the actions of
Pac-Man; that is, you will create a Pac-Man agent. The code enables you to use different environments
to try out your Pac-Man agent programs. To specify an environment (for example, testMaze), you
use the command:

python pacman.py --layout testMaze

Go ahead and try it. It is a simple maze with one corridor. Here is one you will use more often:

python pacman.py --layout tinyMaze

Figure 2: Pac-Man agent in tinyMaze.

There are several other environments defined: mediumMaze, bigMaze, openSearch, etc. You
can also vary the scale of the screen by using the --zoom option as shown below:

python pacman.py --layout tinyMaze --zoom 2

python pacman.py --layout bigMaze --zoom 0.5

All of these are single-agent environments, the agent being Pac-Man. In these environments, Pac-
Man always starts at the top right corner, and at the bottom left corner is a single food pellet (see
picture above). The game ends when Pac-Man eats the very last pellet (there can be pellets anywhere
in its world).

Step 3: Learning the Pac-Man Grid and Actions


Grid: The environment is essentially a grid of squares. At any given time, Pac-Man occupies a
square and faces one of four directions: North, South, East, or West. There may be walls
between squares (like the t-shaped wall in tinyMaze), or entire squares might be blocked by
walls (like the bottom right corner of tinyMaze). Regardless, the location of Pac-Man is determined
by the x- and y-coordinates of the grid (as shown below):

Figure 3: The Pac-Man Grid. Pac-Man is at position (5, 5). Food pellet is at (1, 1)
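To make the coordinate system concrete, here is a minimal, self-contained sketch (illustrative only, not code from the actual codebase) of how an action maps to a change in (x, y) coordinates, assuming north increases y and east increases x as in Figure 3:

```python
# Illustrative sketch: mapping actions to coordinate offsets.
# Assumes north increases y and east increases x, matching Figure 3.
DIRECTION_VECTORS = {
    'North': (0, 1),
    'South': (0, -1),
    'East': (1, 0),
    'West': (-1, 0),
    'Stop': (0, 0),
}

def apply_action(position, action):
    """Return the square Pac-Man would occupy after taking the action."""
    x, y = position
    dx, dy = DIRECTION_VECTORS[action]
    return (x + dx, y + dy)

# Starting from (5, 5) as in Figure 3, going West leads to (4, 5):
print(apply_action((5, 5), 'West'))  # → (4, 5)
```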

Actions

Pac-Man can only carry out the following actions:

• ‘North’: go one step north
• ‘South’: go one step south
• ‘East’: go one step east
• ‘West’: go one step west
• ‘Stop’: stop, do not move

Below, you will see how these are specified to be carried out.

Step 4: Diving into Some Code


Now that you are familiar with the basic world, it is time to get familiar with some of the code.
Start by looking at the contents of the file game.py.

Skim through the “parts worth reading” section of the code. Focus first on the following classes:
Agent, Directions, and Configuration.

Agent

The Agent class is very simple. It is the class you will subclass to create your Pac-Man agent. For
example, here is a very simple, and dumb, agent:
from game import Agent
from game import Directions

class DumbAgent(Agent):
    "An agent that goes East until it can't."

    def getAction(self, state):
        "The agent always goes East."
        return Directions.EAST

The way it is set up, when you specify to the game (see below) that Pac-Man will be controlled
by an instance of DumbAgent, the action returned by the getAction() method will be carried
out at each time step. Important things to note in the above code are:

• You should create a new file called Agents.py, in the same directory/folder as the rest of
the code base. Enter the code above exactly as shown. Be sure to save the file.
• Every subclass of Agent (like DumbAgent) is required to implement a getAction()
method. This is the method called at each time step of the game and, as mentioned above,
it should return a valid action for Pac-Man to carry out.
• Notice that we are importing the classes Agent and Directions from game.py.

• The getAction() method is supplied a parameter: state, which it can use to find out about
the current game state (more on this below). For now, we are ignoring it.
• Study the class Directions (defined in game.py).
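Before you open the file, it helps to know that the heart of the Directions class is just a set of named string constants. A minimal sketch (the real class in the codebase also defines relationships between directions, such as left and right turns):

```python
class Directions:
    # Sketch only: each direction is a plain string constant.
    # getAction() must return one of these values.
    NORTH = 'North'
    SOUTH = 'South'
    EAST = 'East'
    WEST = 'West'
    STOP = 'Stop'

print(Directions.EAST)  # → East
```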

Step 5: Run the code


Next, run the Pac-Man game with its control as DumbAgent using the command:

python pacman.py --layout tinyMaze --pacman DumbAgent

The command above is specifying to run the Pac-Man game using the tinyMaze environment and
the agent is controlled by the DumbAgent. What happens?

In the Pac-Man game, if the path to the next square is blocked and Pac-Man tries to move into it,
the game crashes with an “Illegal action” exception. This is OK. After all, it is a dumb agent. We’ll
fix that next. Try the same agent in the mediumMaze. Same result, right? Good!

Step 6: Learning about GameState


Next, let us try to use the information present in the state parameter. This is an object of type
GameState, which is defined in the file pacman.py. Study the GameState class closely and note
the methods defined. Using these, you can get all kinds of information about the current state of
the game and base your agent’s actions on it. Below, we show how you can use some of these
methods to prevent the game from crashing.

class DumbAgent(Agent):
    "An agent that goes East until it can't."

    def getAction(self, state):
        "The agent receives a GameState (defined in pacman.py)."
        print("Location: ", state.getPacmanPosition())
        print("Actions available: ", state.getLegalPacmanActions())
        if Directions.EAST in state.getLegalPacmanActions():
            print("Going East.")
            return Directions.EAST
        else:
            print("Stopping.")
            return Directions.STOP

As in Step 4, save this version of your program in Agents.py and run it on tinyMaze, as well as
mediumMaze. Observe the behavior. Try out some of the other methods defined in GameState
to get an idea of what information is available to your agent.
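To see concretely why checking the legal actions prevents the crash, here is a self-contained sketch that drives the same decision logic with a mock state object (illustrative only; MockState is a stand-in, not the real GameState):

```python
class MockState:
    """Illustrative stand-in for GameState; mimics only one method."""
    def __init__(self, legal_actions):
        self._legal = legal_actions

    def getLegalPacmanActions(self):
        return self._legal

class DumbAgent:
    "An agent that goes East until it can't."
    def getAction(self, state):
        # Only return 'East' if it is actually a legal move.
        if 'East' in state.getLegalPacmanActions():
            return 'East'
        return 'Stop'

agent = DumbAgent()
# In an open corridor, East is legal, so the agent keeps moving:
print(agent.getAction(MockState(['East', 'West', 'Stop'])))  # → East
# Facing a wall to the east, the agent stops instead of crashing:
print(agent.getAction(MockState(['North', 'Stop'])))         # → Stop
```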

Step 7: A Random Agent


OK, now it is time to write your own agent code.

Exercise 1: Create a new class called RandomAgent (in the Agents.py file), which looks at the
currently available legal actions and picks a random one to carry out. Run your agent in the tinyMaze
environment as well as the mediumMaze environment. Observe the agent’s behavior. Does it get to
the food? Always? Without crashing?
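The core of such an agent is a single call to Python's random module. A sketch, assuming the list of legal actions has already been obtained from the state (the list here is hypothetical):

```python
import random

legal = ['North', 'East', 'Stop']  # hypothetical legal actions for one step
action = random.choice(legal)      # picks one uniformly at random
assert action in legal
```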

Step 8: Exploring Environments


See the files in the layouts folder/directory. Environments are specified using simple text files
(*.lay), which are then rendered nicely by the graphics modules in the code base. Examine several
layout files to see how to specify walls, ghosts, Pac-Man, food, etc.

Exercise 2: Create a small environment of your own. Make sure it has walls and corridors, as well
as some food. Save it as myLayout.lay in the layouts directory.

Run your RandomAgent in this environment and observe how it does.

Also, try your agent out in the openSearch environment (files are already provided in the layouts
directory). Run your agent several times and record, on average, what score you get.
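Averaging your recorded scores is simple bookkeeping; a sketch with made-up numbers (your actual scores will differ):

```python
# Made-up scores from five hypothetical runs of RandomAgent in openSearch:
scores = [-142, 310, -87, 25, 198]
average = sum(scores) / len(scores)
print(average)  # → 60.8
```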

Step 9: A Better Random Agent


If you print out and look at the choice of actions at each step, you will notice that RandomAgent
always includes the ‘Stop’ action among its choices. This tends to slow it down. Stopping is needed
in situations where you need to evade ghosts; for now, in environments without any ghosts, you can
simply choose never to pick the ‘Stop’ action.

Exercise 3: Create a new class called BetterRandomAgent (in the file Agents.py) that never
chooses ‘Stop’ as its action. Run the agent in the openSearch and myLayout environments and
observe how it does.
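Dropping ‘Stop’ amounts to a one-line filter over the legal actions before the random choice. A sketch with a hypothetical action list:

```python
import random

legal = ['North', 'East', 'Stop']          # hypothetical legal actions
moves = [a for a in legal if a != 'Stop']  # exclude 'Stop' from the choices
print(moves)                               # → ['North', 'East']
action = random.choice(moves)
assert action != 'Stop'
```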

Step 10: Percepts


What the Pac-Man agent can perceive is based on the methods of the GameState class, which is
defined in the file pacman.py. Open this file and look through the options.

It is important to realize that the game has several different agents (Pac-Man and the ghosts). Each
agent in the game has a unique index; Pac-Man is always index 0, with ghosts starting at index 1.

Pac-Man can perceive:

• Its own position
• The positions of all the ghosts
• The locations of the walls
• The positions of the capsules
• The position of each food pellet
• The total number of food pellets still available
• Whether it has won or lost the game
• Its current score in the game

In addition, given the action it chooses, Pac-Man can also determine what the next state of the
environment will be, by using the method generatePacmanSuccessor(). It is clear from the
methods available here that Pac-Man's environment is fully observable. Pac-Man's environment is
also static, because the ghosts do not move until Pac-Man decides what to do and takes an action.
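The idea behind such a reflex agent can be sketched without the real GameState: given Pac-Man's position, the set of food positions, and a direction-to-offset table, check which legal actions land on food. All names below are illustrative, not the codebase's API:

```python
# Illustrative sketch of reflex logic; assumes north is +y, east is +x.
VECTORS = {'North': (0, 1), 'South': (0, -1), 'East': (1, 0), 'West': (-1, 0)}

def actions_leading_to_food(position, legal, food):
    """Return the legal actions whose next square contains a food pellet."""
    x, y = position
    result = []
    for action in legal:
        if action == 'Stop':
            continue
        dx, dy = VECTORS[action]
        if (x + dx, y + dy) in food:
            result.append(action)
    return result

# Pac-Man at (2, 1) with food at (3, 1): going East eats a pellet.
print(actions_leading_to_food((2, 1), ['East', 'West', 'Stop'], {(3, 1)}))  # → ['East']
```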

Exercise 4: In the file [Link] create a new agent called ReflexAgent. This agent should look
at the possible legal actions, and if one of these actions would cause a food pellet to be eaten, it
should choose that action. If none of the immediate actions lead to food, it should choose randomly
from the possibilities (excluding 'Stop'). Test your agent in both the openSearch and myLayout
layouts.

python pacman.py -l openSearch -p ReflexAgent

What to submit:
Your submission should include the following:

1. Lab report answers to the following questions:


a. Describe the behavior of RandomAgent from Step 7.
b. A screenshot of your myLayout environment from Step 8.
c. Describe the behavior of BetterRandomAgent from Step 9.
d. Describe the behavior of ReflexAgent from Step 10.
e. For each of the percepts listed in Step 10, show what command/code enables you
to access it. For example:
Pac-Man’s position: state.getPacmanPosition()

2. Source code + README (how to compile and run your code)


3. Please create a folder called "yourname_studentID" that includes all the required files
and generate a zip file called "yourname_studentID.zip".
4. Please submit your work (.zip) to Blackboard.
