Search

A search problem consists of:

▪ A state space

▪ A successor function
(with actions, costs)

▪ A start state and a goal test

A solution is a sequence of actions (a plan) which
transforms the start state to a goal state
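The three components above can be written down directly. A minimal sketch in Python, where the toy graph, action names, and costs are all illustrative:

```python
# State space + successor function: state -> list of (action, next_state, cost).
# The graph, action names, and costs here are made up for illustration.
SUCCESSORS = {
    'S': [('go-A', 'A', 1), ('go-B', 'B', 4)],
    'A': [('go-G', 'G', 2)],
    'B': [('go-G', 'G', 1)],
}

START = 'S'           # start state

def is_goal(state):   # goal test
    return state == 'G'

# A solution (plan) is an action sequence, e.g. ['go-A', 'go-G'],
# which transforms the start state S into the goal state G.
```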

State Space Graphs vs. Search Trees

State space graph: A mathematical
representation of a search problem

▪ Nodes are (abstracted) world configurations

▪ Arcs represent successors (action results)

▪ The goal test is a set of goal nodes (maybe only one)

Search tree: The possible futures

▪ A “what if” tree of plans and their outcomes

▪ The start state is the root node

▪ Children correspond to successors

▪ Nodes show states, but correspond to PLANS that achieve those states

▪ For most problems, we can never actually build the whole tree

Searching with a search tree:

▪ Expand out potential plans (tree nodes)

▪ Maintain a fringe of partial plans under consideration

▪ Try to expand as few tree nodes as possible
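The three bullets above can be sketched as one generic loop over a fringe of partial plans; the choice of which plan to pop is the whole difference between strategies. This is an illustrative sketch over an explicit adjacency-dict graph, not any particular library's API:

```python
from collections import deque

def tree_search(graph, start, is_goal, lifo):
    """Generic tree search: the fringe holds partial plans (paths).
    lifo=True pops the newest plan (DFS); lifo=False pops the oldest (BFS).
    No cycle checking here, so a cyclic graph can loop forever."""
    fringe = deque([[start]])                   # fringe of partial plans
    while fringe:
        path = fringe.pop() if lifo else fringe.popleft()
        state = path[-1]
        if is_goal(state):
            return path                         # the plan that reaches a goal
        for successor in graph.get(state, []):  # expand: extend the partial plan
            fringe.append(path + [successor])
    return None
```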

Tree Search: DFS and BFS

Depth-First Search (DFS) Properties

What nodes does DFS expand?

▪ Some left prefix of the tree.

▪ Could process the whole tree!

▪ If m is finite, takes time O(b^m)
(b = branching factor, m = maximum depth)

How much space does the fringe take?

▪ Only has siblings on path to root, so O(bm)

Is it complete?

▪ m could be infinite, so only if we prevent
cycles (more later)

Is it optimal?

▪ No, it finds the “leftmost” solution,
regardless of depth or cost
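The properties above can be seen in a short sketch of DFS with an explicit LIFO fringe (an illustrative adjacency-dict version, with cheap along-the-path cycle prevention):

```python
def dfs(graph, start, is_goal):
    """DFS tree search: LIFO fringe (a stack of partial plans).
    Without the cycle check below, a cyclic graph could recurse forever,
    matching the completeness caveat above."""
    fringe = [[start]]                 # stack of partial plans
    while fringe:
        path = fringe.pop()
        state = path[-1]
        if is_goal(state):
            return path                # the "leftmost" solution found
        # Push successors in reverse so the leftmost child is expanded first.
        for successor in reversed(graph.get(state, [])):
            if successor not in path:  # prevent cycles along this path
                fringe.append(path + [successor])
    return None
```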

Breadth-First Search (BFS)

Strategy: expand the shallowest nodes first, i.e.
process the tree level by level

Example: map search.
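Swapping the stack for a FIFO queue turns the same loop into BFS. An illustrative sketch (adjacency-dict graph, not any particular library's API):

```python
from collections import deque

def bfs(graph, start, is_goal):
    """BFS tree search: FIFO fringe, expands shallowest nodes first.
    Finds a shallowest solution, which is optimal when all step costs
    are equal."""
    fringe = deque([[start]])          # queue of partial plans
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for successor in graph.get(state, []):
            if successor not in path:  # prevent cycles along this path
                fringe.append(path + [successor])
    return None
```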

Heuristic

A heuristic is:

▪ A function that estimates how close a state is to a goal

▪ Designed for a particular search problem

▪ Examples: Manhattan distance, Euclidean distance for
pathing
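The two example heuristics for pathing on a grid are one-liners (states taken as (x, y) coordinate pairs for illustration):

```python
import math

def manhattan(p, q):
    """Manhattan distance: grid moves in 4 directions, no diagonals."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Euclidean (straight-line) distance."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```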

Greedy Search

Strategy: expand the node that seems closest to
a goal state

  • Heuristic: estimate of distance to nearest goal for
    each state
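This strategy is a priority queue ordered by the heuristic alone. An illustrative sketch (adjacency-dict graph; `h` maps a state to its estimated distance to the nearest goal):

```python
import heapq

def greedy_search(graph, start, is_goal, h):
    """Greedy best-first search: always expand the fringe node with the
    smallest heuristic value h(state), ignoring path cost so far."""
    fringe = [(h(start), [start])]     # priority queue ordered by h
    while fringe:
        _, path = heapq.heappop(fringe)
        state = path[-1]
        if is_goal(state):
            return path
        for successor in graph.get(state, []):
            if successor not in path:  # prevent cycles along this path
                heapq.heappush(fringe, (h(successor), path + [successor]))
    return None
```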

A common case:

  • Best-first takes you straight to the (wrong) goal

  • Worst-case: like a badly-guided DFS
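The "straight to the (wrong) goal" failure is easy to reproduce. In this made-up example, the heuristic values make A look closer, so greedy pays 11 even though the path through B costs only 2:

```python
import heapq

# Toy graph: state -> list of (next_state, step_cost). All values illustrative.
GRAPH = {'S': [('A', 1), ('B', 1)], 'A': [('G', 10)], 'B': [('G', 1)]}
H = {'S': 3, 'A': 1, 'B': 2, 'G': 0}   # made-up heuristic: A looks closer than B

def greedy(graph, start, goal, h):
    """Greedy best-first: order the fringe by h only, tracking cost for reporting."""
    fringe = [(h[start], 0, [start])]  # (h-value, cost so far, path)
    while fringe:
        _, cost, path = heapq.heappop(fringe)
        if path[-1] == goal:
            return path, cost
        for nxt, step in graph.get(path[-1], []):
            if nxt not in path:
                heapq.heappush(fringe, (h[nxt], cost + step, path + [nxt]))
    return None, None

path, cost = greedy(GRAPH, 'S', 'G', H)
# Greedy commits to A (h=1 < h=2) and pays total cost 11,
# even though S -> B -> G would have cost only 2.
```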