
AI Search Algorithm

Problem Solving and Search

Environment Types

  • Accessible
  • Deterministic
  • Episodic
  • Static
  • Discrete

Problem Types

  • single-state problem
    • deterministic, accessible
    • the agent knows the exact state
    • it can calculate an optimal action sequence to reach the goal state
  • multiple-state problem
    • deterministic, inaccessible
    • the agent does not know the exact state
    • it must reason about the set of states it might be in while working towards the goal state
  • contingency problem
    • nondeterministic, inaccessible
    • must use sensors during execution (could be in any of the possible states)
    • the solution is a tree or policy
    • often interleaves search and execution
  • exploration problem
    • unknown state space

Problem Formulation

A problem is defined by four items:

  • initial state
  • operators
  • goal test
  • path cost

Function General-Search(problem, strategy) returns a solution, or failure
    initialize the search tree using the initial state of problem
    loop do
        if there are no candidates for expansion then return failure
        choose a leaf node for expansion according to strategy
        if the node contains a goal state then
            return the corresponding solution
        else expand the node and add the resulting nodes to the search tree
    end

For a more detailed implementation:

Function General-Search(problem, Queuing-Fn) returns a solution, or failure
    nodes <- make-queue(make-node(initial-state[problem]))
    loop do
        if nodes is empty then return failure
        node <- Remove-Front(nodes)
        if Goal-Test[problem] applied to State(node) succeeds then return node
        nodes <- Queuing-Fn(nodes, Expand(node, Operators[problem]))
    end
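
The pseudocode above maps directly onto a short runnable sketch. Below is a minimal Python version under assumed names: the Problem interface (initial_state, successors, goal_test), Node, expand, and general_search are illustrative, not taken from the slides.

from collections import namedtuple

# A node wraps a state plus search bookkeeping: parent, operator, path cost g, depth.
Node = namedtuple("Node", ["state", "parent", "operator", "path_cost", "depth"])

def expand(node, problem):
    """Generate child nodes by applying every applicable operator to node's state."""
    # Assumed interface: problem.successors(state) yields (operator, next_state, step_cost).
    return [Node(s, node, op, node.path_cost + cost, node.depth + 1)
            for op, s, cost in problem.successors(node.state)]

def general_search(problem, queuing_fn):
    """Generic tree search: the queuing function alone determines the strategy."""
    nodes = [Node(problem.initial_state, None, None, 0, 0)]
    while nodes:
        node = nodes.pop(0)                       # Remove-Front
        if problem.goal_test(node.state):
            return node                           # follow .parent links to recover the path
        nodes = queuing_fn(nodes, expand(node, problem))
    return None                                   # failure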

Node and State

Know the difference between a state in the state space and a node in the search tree; they are different! (p58 in 02-04)

A state is a representation of a physical configuration.

A node is a data structure that includes the parent, children, depth, and path cost g(n), and is expanded by an EXPAND function.

Analyze

  • Completeness
  • Time complexity
  • Space complexity
  • Optimality

Complexity is measured in terms of:

  • b - max branching factor of the search tree
  • d - depth of the least-cost solution
  • m - max depth of the search tree (may be infinite)

Uninformed Search

Breadth-First Search (BFS)

Enqueue expanded (children) nodes to the back of the queue (FIFO order)
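
With the hypothetical general_search sketch from the problem-formulation section, BFS is just the queuing function that appends children to the back of the node list (a deque would make this O(1) in practice):

def bfs_queuing_fn(nodes, children):
    return nodes + children        # FIFO: older (shallower) nodes are expanded first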

Analyze

  • Completeness: yes, if b is finite
  • Time: O(b^d)
  • Space: O(b^d)
  • Optimality: yes, assuming cost = 1 per step

Uniform Cost Search (UCS)

The queuing function keeps the node list sorted by increasing path cost, and expands the first unexpanded node (hence the one with the smallest path cost).
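
In the same hypothetical framework as the BFS snippet, the UCS queuing function keeps the list sorted by path cost g:

def ucs_queuing_fn(nodes, children):
    return sorted(nodes + children, key=lambda n: n.path_cost)   # smallest g at the front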

Analyze

  • Completeness: Yes, if step cost > 0
  • Time: number of nodes with g <= cost of the optimal solution; bounded by O(b^d)
  • Space: number of nodes with g <= cost of the optimal solution; bounded by O(b^d)
  • Optimality: Yes, as long as path costs never decrease along any path (i.e., step costs are non-negative)

Problem

Uniform-cost search is not optimal if it terminates as soon as any node in the queue contains a goal state; the goal test must be applied when a node is selected for expansion.

Improved UCS

Refine the queuing function for an optimal search: queue up a node only if

  • its state does not match the state of any parent
  • its path cost is smaller than the path cost of any unexpanded node with the same state in the queue

Function UniformCost-Search(problem, Queuing-Fn) returns a solution, or failure
    open <- make-queue(make-node(initial-state[problem]))
    closed <- [empty]
    loop do
        if open is empty then return failure
        currnode <- Remove-Front(open)
        if Goal-Test[problem] applied to State(currnode) then return currnode
        children <- Expand(currnode, Operators[problem])
        while children not empty
            [...]
        end
        closed <- Insert(closed, currnode)
        open <- Sort-By-PathCost(open)
    end

Here is the code that goes in [...]:

children <- Expand(currnode, Operators[problem])
while children not empty
    child <- Remove-Front(children)
    if no node in open or closed has child's state
        open <- Queuing-Fn(open, child)
    else if there exists node in open that has child's state
        if PathCost(child) < PathCost(node)
            open <- Delete-Node(open, node)
            open <- Queuing-Fn(open, child)
    else if there exists node in closed that has child's state
        if PathCost(child) < PathCost(node)
            closed <- Delete-Node(closed, node)
            open <- Queuing-Fn(open, child)
end
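
A compact runnable version of the same idea, sketched with a heap keyed on path cost and a best-cost table instead of explicit scans over open and closed (assumed names, not the slides' exact algorithm):

import heapq, itertools

def uniform_cost_search(start, goal_test, successors):
    """successors(state) yields (next_state, step_cost) pairs; returns (path, cost)."""
    counter = itertools.count()                   # tie-breaker so states are never compared
    frontier = [(0, next(counter), start, [start])]
    best_g = {start: 0}
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g                        # goal test at expansion time -> optimal
        if g > best_g.get(state, float("inf")):
            continue                              # stale entry; a cheaper path was found later
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, next(counter), nxt, path + [nxt]))
    return None, float("inf")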

Depth-First Search (DFS)

Enqueue expanded (children) nodes to the front of the queue (LIFO order)
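
In the same hypothetical framework as the BFS snippet, the DFS queuing function puts children at the front:

def dfs_queuing_fn(nodes, children):
    return children + nodes        # LIFO: the most recently generated nodes are expanded first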

Analyze

  • Completeness: No for infinite-depth state spaces; Yes for finite state spaces
  • Time: O(b^m)
  • Space: O(bm)
  • Optimality: No

Depth-Limited Search (DLS)

A depth-first search with depth limit l.

Analyze

  • Completeness: Yes, if l >= d
  • Optimality: No

Iterative Deepening Search (IDS)

Function Iterative-Deepening-Search(problem) returns a solution, or failure
    for depth = 0 to infinity do
        result <- Depth-Limited-Search(problem, depth)
        if result succeeds then
            return result
    end
    return failure
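
A runnable sketch of depth-limited search plus iterative deepening (the functional interface is an assumption, matching the earlier sketches):

def depth_limited_search(state, goal_test, successors, limit):
    """Recursive DFS that refuses to expand below the depth limit."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None                               # cutoff
    for nxt, _ in successors(state):
        result = depth_limited_search(nxt, goal_test, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening_search(start, goal_test, successors, max_depth=50):
    for depth in range(max_depth + 1):            # "to infinity", capped for the sketch
        result = depth_limited_search(start, goal_test, successors, depth)
        if result is not None:
            return result
    return None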

Analyze

  • Completeness: Yes
  • Time: O(b^d)
  • Space: O(bd)
  • Optimality: Yes, if step cost = 1

Iterative deepening is preferred to depth-first or breadth-first search when the search space is large and the depth of the solution is not known.

Comparing Uninformed Search Strategies

Type   Time      Space    Optimal   Complete
BFS    O(b^d)    O(b^d)   Yes       Yes
UCS    O(b^d)    O(b^d)   Yes       Yes
DFS    O(b^m)    O(bm)    No        No
DLS    O(b^l)    O(bl)    No        Yes, if l >= d
IDS    O(b^d)    O(bd)    Yes       Yes

Informed Search

Use heuristics to guide the search

  • Idea: use an evaluation function for each node, estimate of “desirability”
  • Queuing function: Insert successors in decreasing order of desirability
  • Special cases:
    • Greedy search
    • A* search

Greedy search uses the estimation function h(n) = estimated cost from n to the goal (the heuristic), and always expands the node that appears closest to the goal.
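
A minimal greedy best-first sketch: the priority queue is ordered by h(n) alone, with repeated-state checking (illustrative names, same assumed interface as the earlier sketches):

import heapq, itertools

def greedy_search(start, goal_test, successors, h):
    counter = itertools.count()
    frontier = [(h(start), next(counter), start, [start])]
    visited = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in visited:
            continue                              # repeated-state checking
        visited.add(state)
        for nxt, _ in successors(state):          # step costs are ignored by greedy search
            heapq.heappush(frontier, (h(nxt), next(counter), nxt, path + [nxt]))
    return None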

Analyze

  • Complete: No, can get stuck in loops
    • Complete in finite space with repeated-state checking.
  • Time: O(b^m), but a good heuristic can give dramatic improvement
  • Space: O(b^m), keeps all nodes in memory
  • Optimal: No.

A* search uses an admissible heuristic

f(n) = g(n) + h(n)

  • f(n) is the estimated total cost of the path through n to the goal
  • g(n) is the cost so far to reach n
  • h(n) is the estimated cost from n to the goal (see the sketch below)
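
A minimal A* sketch under the same assumed interface, with the priority queue keyed on f(n) = g(n) + h(n):

import heapq, itertools

def a_star_search(start, goal_test, successors, h):
    counter = itertools.count()
    frontier = [(h(start), 0, next(counter), start, [start])]   # (f, g, tie, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + h(nxt), new_g, next(counter), nxt, path + [nxt]))
    return None, float("inf")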

Analyze

  • Complete: Yes, unless there are infinitely many nodes with f <= f(G)
  • Time: exponential in (relative error in h) x (length of solution)
  • Space: keeps all nodes in memory
  • Optimal: Yes; A* cannot expand any node with f = f_(i+1) until every node with f = f_i has been expanded

Optimality of A*

Suppose a suboptimal goal G2 is in the queue, and let n be an unexpanded node on the shortest path to the optimal goal G1. Since h is admissible, h(G2) = 0, so:

f(G2) = g(G2) + h(G2) = g(G2) > g(G1) >= f(n)

Since f(G2) > f(n), A* will never select G2 for expansion.

f(n) Is Non-Decreasing

A* expands nodes in order of increasing f value.

Assume n reaches a successor n' by a step of cost d:

    n ---- d ----> n'
     \            /
     h(n)     h(n')
       \        /
        \      /
         target

g(n') = g(n) + d
f(n') = g(n') + h(n') = g(n) + d + h(n') >= g(n) + h(n) = f(n)   (assuming h is consistent, i.e. h(n) <= d + h(n'))
thus f(n') >= f(n)

If h is not consistent, the pathmax equation can be applied when generating n': f(n') = max(g(n') + h(n'), f(n)).

Hill Climbing

Iteratively maximize the "value" of the current state by replacing it with the successor state that has the highest value, for as long as possible.

Here is the pseudocode:

function Hill-Climbing(problem) returns a solution state
    inputs: problem, a problem
    local vars: current, a node
                next, a node

    current <- Make-Node(Initial-State[Problem])
    loop do
        next <- a highest-valued successor of current
        if Value[next] < Value[current] then return current
        current <- next
    end

Weakness: it can only find a local minimum/maximum.
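
A runnable hill-climbing sketch on a toy objective (the 1-D objective and integer neighbourhood are assumptions made for illustration):

def hill_climbing(start, value, neighbours):
    """Move to the best neighbour until no neighbour strictly improves the current state."""
    current = start
    while True:
        best = max(neighbours(current), key=value)
        if value(best) <= value(current):
            return current                        # local maximum (not necessarily global)
        current = best

# Toy example: maximize -(x - 3)^2 over integer steps of 1; climbs from 0 up to 3.
print(hill_climbing(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1]))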

Simulated Annealing

Boltzmann distribution

P(A)/P(B) = exp((E(A) - E(B))/T) = exp(E(A)/T) / exp(E(B)/T)

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
    inputs: problem, a problem
            schedule, a mapping from time to "temperature"

    current <- MAKE-NODE(problem.INITIAL-STATE)
    for t = 1 to ∞ do
        T <- schedule(t)
        if T = 0 then return current
        next <- a randomly selected successor of current
        ΔE <- next.VALUE - current.VALUE
        if ΔE > 0 then current <- next
        else current <- next only with probability e^(ΔE/T)

Case         ΔE     ΔE/T                    exp(ΔE/T)    Accept bad move?
T is large   < 0    < 0, small magnitude    close to 1   high probability
T is small   < 0    < 0, large magnitude    close to 0   low probability
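
A runnable sketch of simulated annealing with an exponential cooling schedule (the schedule and the toy objective are assumptions, not from the slides):

import math, random

def simulated_annealing(start, value, neighbours, t0=10.0, cooling=0.95, steps=1000):
    current = start
    for t in range(steps):
        T = t0 * (cooling ** t)                   # exponential "temperature" schedule
        if T < 1e-9:
            return current
        nxt = random.choice(neighbours(current))
        delta_e = value(nxt) - value(current)
        # Always accept improvements; accept bad moves only with probability e^(dE/T).
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
    return current

# Toy example: maximize -(x - 3)^2 over integer neighbours.
print(simulated_annealing(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1]))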

Game playing

Basic Concepts

  • Initial state
  • Operators
  • Terminal state
  • Utility function: AKA payoff function

The Minimax Algorithm

Minimize the opponent's best achievable payoff, and thus maximize your own.

Pseudocode of the minimax algorithm:

function minimax_decision(game) returns an operator
    for each op in operators[game] do
        values[op] = minimax_value(apply(op, game), game)
    end
    return the op with the highest values[op]

------------------------------------------------------------

function minimax_value(state, game) returns a utility value
    if terminal_test[game](state) then
        return utility[game](state)
    else if MAX is to move in state then    // i.e., it is the maximizing player's turn
        return the highest minimax_value of successors(state)
    else
        return the lowest minimax_value of successors(state)
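
A runnable minimax sketch on an explicit game tree (the representation is an assumption: leaves are utility numbers, internal nodes are lists of successor nodes):

def minimax_value(node, maximizing):
    """Return the minimax value of node, with MAX to move when maximizing is True."""
    if isinstance(node, (int, float)):            # terminal test: a leaf utility
        return node
    values = [minimax_value(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX at the root, MIN below; value is max(min(3,12,8), min(2,4,6), min(14,5,2)) = 3.
game_tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax_value(game_tree, True))             # 3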

Analyze

  • Complete: Yes, for finite state-space
  • Optimal: Yes
  • Time Complexity: O(b^m)
  • Space Complexity: O(bm)

Resource Limitations

  • Evaluation function: estimates the value of a state using heuristics so that the search can be cut off early
  • Using new minimax
    • CUTOFF-TEST: cutoff test to replace the termination condition
    • EVAL: evaluation function to replace utility function

Note: exact values do not matter; only the order matters.

Alpha-beta Pruning

  • Prune portions of the search tree that cannot improve the utility value.
  • Pruning does not affect the final result.
  • Time complexity: O(b^(m/2)) with perfect move ordering.

function max_value(state, game, a, b) returns the maximum value of state
    inputs: state,
            game,
            a, the best score for MAX along the path to state
            b, the best score for MIN along the path to state
    // a and b are both local variables
    // at the start of the algorithm, a = -infinity, b = +infinity

    if cutoff(state) then return eval(state)
    for each s in successors(state) do
        a = max(a, min_value(s, game, a, b))
        if a >= b then return b
    end
    return a

-------------------------------------------------------------------------------

function min_value(state, game, a, b) returns the minimum value of state
    inputs: state,
            game,
            a, the best score for MAX along the path to state
            b, the best score for MIN along the path to state

    if cutoff(state) then return eval(state)
    for each s in successors(state) do
        b = min(b, max_value(s, game, a, b))
        if b <= a then return a
    end
    return b
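
The same tree representation as the minimax sketch, with alpha-beta pruning added (a sketch, not the slides' exact code):

def alphabeta(node, maximizing, a=float("-inf"), b=float("inf")):
    if isinstance(node, (int, float)):            # cutoff / terminal test
        return node
    if maximizing:
        for child in node:
            a = max(a, alphabeta(child, False, a, b))
            if a >= b:
                return a                          # beta cutoff: MIN will never allow this branch
        return a
    else:
        for child in node:
            b = min(b, alphabeta(child, True, a, b))
            if b <= a:
                return b                          # alpha cutoff: MAX already has something better
        return b

print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]], True))   # 3, with some leaves never evaluated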

Elements Of Chance

Expectimax and expectimin compute expected values over all possible chance outcomes. Because the result depends on the magnitudes of the utilities, order-preserving transformations of the evaluation function do not necessarily behave the same!
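
A small expectimax sketch on an explicit tree (the representation is an assumption: leaves are numbers, ("chance", [...]) nodes average equally likely outcomes, plain lists alternate MAX/MIN):

def expectimax_value(node, maximizing):
    if isinstance(node, (int, float)):            # terminal utility
        return node
    if isinstance(node, tuple) and node[0] == "chance":
        outcomes = node[1]                        # equally likely outcomes for this sketch
        return sum(expectimax_value(c, maximizing) for c in outcomes) / len(outcomes)
    values = [expectimax_value(c, not maximizing) for c in node]
    return max(values) if maximizing else min(values)

# A MAX root choosing between two fair chance nodes: averages 5.0 and 6.0, so the value is 6.0.
print(expectimax_value([("chance", [3, 7]), ("chance", [0, 12])], True))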

Constraint Satisfaction

Constraint Satisfaction Problems (CSP)

Basic Concepts

  • Variables
  • Domains
  • Constraints
  • Consistent
  • Complete

Varieties of CSPs

  • Discrete variables
    • finite domains
    • infinite domains
  • Continuous variables

Varieties Of Constraints

  • Unary
  • Binary
  • Higher-order

Example: Crypt-arithmetic

    T W O
  + T W O
  -------
  F O U R
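
A brute-force check of the puzzle, trying digit assignments directly (a sketch; a real CSP solver would propagate constraints instead of enumerating permutations):

from itertools import permutations

def solve_two_two_four():
    letters = "TWOFUR"
    for digits in permutations(range(10), len(letters)):
        env = dict(zip(letters, digits))
        if env["T"] == 0 or env["F"] == 0:        # no leading zeros
            continue
        two = env["T"] * 100 + env["W"] * 10 + env["O"]
        four = env["F"] * 1000 + env["O"] * 100 + env["U"] * 10 + env["R"]
        if two + two == four:
            return env
    return None

print(solve_two_two_four())                       # one valid assignment, e.g. 734 + 734 = 1468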

Backtracking Search For CSPs

Depth-first search for CSPs with single-variable assignments is called backtracking search.

Pseudocode:

function BACKTRACKING-SEARCH(csp) returns a solution, or failure
    return BACKTRACK({ }, csp)

----------------------------------------------------------------

function BACKTRACK(assignment, csp) returns a solution, or failure
    if assignment is complete then return assignment
    var <- SELECT-UNASSIGNED-VARIABLE(csp)
    for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
        if value is consistent with assignment then
            add {var = value} to assignment
            result <- BACKTRACK(assignment, csp)
            if result != failure then return result
            remove {var = value} from assignment
    return failure
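
A compact runnable solver matching the pseudocode (the CSP representation, a dict of domains plus a consistency predicate, is an assumption):

def backtracking_search(domains, consistent):
    """domains: {var: [values]}; consistent(assignment) checks the constraints on assigned vars."""
    def backtrack(assignment):
        if len(assignment) == len(domains):
            return assignment                     # assignment is complete
        var = next(v for v in domains if v not in assignment)    # SELECT-UNASSIGNED-VARIABLE
        for value in domains[var]:                               # ORDER-DOMAIN-VALUES
            assignment[var] = value
            if consistent(assignment):
                result = backtrack(assignment)
                if result is not None:
                    return result
            del assignment[var]                   # undo and try the next value
        return None
    return backtrack({})

# Toy example: 3-colour a triangle graph, where every pair of variables must differ.
doms = {v: ["R", "G", "B"] for v in "XYZ"}
all_different = lambda a: all(a[u] != a[v] for u in a for v in a if u < v)
print(backtracking_search(doms, all_different))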

Improved Backtracking

  • choose the variable with the fewest legal values (the minimum-remaining-values heuristic)
  • choose the variable with the most constraints on the remaining variables (the degree heuristic)

Arc Consistency

  • node-consistent: every value in a variable's domain satisfies the variable's unary constraints
  • arc-consistent: for every value in a variable's domain, some value in the other variable's domain satisfies the binary constraint between them (a sketch follows this list)
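
A sketch of the basic "revise" step behind arc consistency: remove values of X that have no supporting value of Y (a minimal illustration, not a full AC-3 implementation):

def revise(domains, x, y, allowed):
    """Delete values of x with no value of y satisfying allowed(vx, vy). Returns True if any were pruned."""
    pruned = False
    for vx in list(domains[x]):
        if not any(allowed(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            pruned = True
    return pruned

# Example: constraint X < Y with both domains {1, 2, 3}; revising X removes 3, which no Y supports.
doms = {"X": [1, 2, 3], "Y": [1, 2, 3]}
revise(doms, "X", "Y", lambda a, b: a < b)
print(doms["X"])                                  # [1, 2]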

Local Search For CSPs