# Problem Solving and Search

## Environment Types

- Accessible
- Deterministic
- Episodic
- Static
- Discrete

## Problem Types

- single-state problem: **deterministic, accessible**
  - the agent knows its exact state
  - it can compute an optimal action sequence to the goal state offline

- multiple-state problem: **deterministic, inaccessible**
  - the agent does not know its exact state, only a set of possible states
  - it narrows that set down while working towards the goal state

- contingency problem: **nondeterministic, inaccessible**
  - the agent must use its sensors during execution (it could be in any of the possible states)
  - the solution is a tree or policy
  - often interleaves search and execution

- exploration problem: **unknown state space**

## Problem Formulation

A problem is defined by four items:

- initial state
- operators
- goal test
- path cost

## General Search

```
Function General-Search(problem, strategy) returns a solution, or failure
```

For a more detailed implementation, the strategy is passed in as a queuing function:

```
Function General-Search(problem, Queuing-Fn) returns a solution, or failure
```
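As a minimal Python sketch of this scheme (the function names and the toy state space are mine, not from the notes), the queuing function is the only thing that changes between strategies:

```python
def general_search(initial, goal_test, expand, queuing_fn):
    """Generic tree search: the queuing function decides the strategy."""
    frontier = [initial]
    while frontier:
        node = frontier.pop(0)           # always take the front node
        if goal_test(node):
            return node
        frontier = queuing_fn(frontier, expand(node))
    return None                          # failure

# FIFO queuing (children at the back) gives breadth-first behaviour.
def fifo(frontier, children):
    return frontier + children

# Toy problem: states are integers, successors are n+1 and n*2, goal is 6.
result = general_search(1, lambda n: n == 6,
                        lambda n: [n + 1, n * 2], fifo)
```

Plugging in a different `queuing_fn` (children at the front, or sorted by cost) turns the same loop into depth-first or uniform-cost search.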

## Node and State

Know the difference between a state in the state space and a node in the search tree — they are **different**! `p58 in 02-04`

A state is a representation of a physical configuration.

A node is a data structure with fields such as `parent`, `children`, `depth`, `path` and `cost (g)`, and it is expanded by some kind of `EXPAND` function.
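A sketch of such a node structure in Python (the field names follow the list above; the `successors` callback and the toy step costs are my own assumptions):

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Search-tree node: a wrapper around a state, not the state itself."""
    state: object
    parent: 'Node' = None
    depth: int = 0
    cost: float = 0.0                    # path cost g(n)

def expand(node, successors):
    """Create one child node per (state, step_cost) successor."""
    return [Node(s, parent=node, depth=node.depth + 1, cost=node.cost + c)
            for s, c in successors(node.state)]

root = Node('A')
children = expand(root, lambda s: [('B', 1), ('C', 2)])
```

Note that two different nodes can wrap the same state, reached along different paths with different costs.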

## Evaluation of Search

Analyze

- Completeness
- Time complexity
- Space complexity
- Optimality

with parameters

- b: max branching factor
- d: depth of the least-cost solution
- m: max depth of the search tree

# Uninformed Search

## Breadth-First Search (BFS)

Enqueue expanded (children) nodes to the back of the queue (**FIFO** order)

Analyze

- Completeness: yes, **if b is finite**
- Time: O(b^d)
- Space: O(b^d)
- Optimality: yes, assuming cost = 1 per step
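A small runnable BFS sketch (the graph and function name are mine, chosen for illustration); the frontier holds whole paths so the answer can be read off directly:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: enqueue children at the back (FIFO)."""
    frontier = deque([[start]])          # a queue of paths
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                  # first goal found has fewest steps
        for child in graph.get(node, []):
            frontier.append(path + [child])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

`bfs(graph, 'A', 'D')` returns the path with the fewest edges, which is optimal only under the cost-1-per-step assumption above.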

## Uniform Cost Search (UCS)

The **queuing function** keeps the node list sorted by increasing path cost, and expands the first unexpanded node (hence the one with the smallest path cost).

Analyze

- Completeness: yes, if every step cost > 0
- Time: number of nodes with g <= cost of optimal solution; at most O(b^d)
- Space: number of nodes with g <= cost of optimal solution; at most O(b^d)
- Optimality: yes, provided path cost never decreases along a path (non-negative step costs)

### Problem

Uniform-cost search is **not optimal** if it terminates as soon as any node in the queue contains a goal state.

Example: when a goal node is first generated, a cheaper path to the goal may still be sitting unexpanded in the queue, so the goal test must be applied when a node is expanded, not when it is generated.

### Improved UCS

Refine the queuing function for an optimal search: queue up a node only if

- its state does not match the state of any of its ancestors (avoids looping)
- its path cost is smaller than the path cost of any unexpanded node with the same state in the queue

```
Function UniformCost-Search(problem, Queuing-Fn) returns a solution, or failure
```

with the expansion step

```
children <- Expand(currnode, Operators[problem])
```
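A compact Python sketch of the improved scheme (graph encoding and names are mine): a priority queue ordered by g, a `best_g` table for the duplicate-state check, and the goal test applied at expansion time:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the smallest path cost g;
    the goal test is applied at expansion, not at generation."""
    frontier = [(0, start)]
    best_g = {start: 0}                  # cheapest known cost per state
    while frontier:
        g, state = heapq.heappop(frontier)
        if state == goal:
            return g                     # optimal if all step costs > 0
        if g > best_g.get(state, float('inf')):
            continue                     # stale queue entry, skip it
        for child, step in graph.get(state, []):
            new_g = g + step
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child))
    return None

# Direct edge A->G costs 10, but A->B->G costs 1 + 2 = 3.
graph = {'A': [('G', 10), ('B', 1)], 'B': [('G', 2)], 'G': []}
```

On this graph the expensive goal node (cost 10) is generated first but never expanded, illustrating why terminating at generation time would be suboptimal.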

## Depth-First Search (DFS)

Enqueue expanded (children) nodes to the front of the queue (**LIFO** order)

Analyze

- Completeness: no in infinite state spaces; yes in finite state spaces (with repeated-state checking)
- Time: O(b^m)
- Space: O(bm)
- Optimality: No
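A recursive Python sketch (graph and names mine); the call stack plays the role of the LIFO queue, and the `visited` set is the repeated-state check that makes it complete on finite spaces:

```python
def dfs(graph, start, goal, visited=None):
    """Depth-first search via recursion (implicit LIFO stack)."""
    if visited is None:
        visited = set()
    if start == goal:
        return [start]
    visited.add(start)
    for child in graph.get(start, []):
        if child not in visited:
            rest = dfs(graph, child, goal, visited)
            if rest:
                return [start] + rest
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

Only the current path and its siblings are held at once, which is where the O(bm) space bound comes from.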

## Depth-limited Search

A depth-first search with depth limit `l`

Analyze

- Completeness: yes, if l >= d
- Optimality: no

## Iterative Deepening Search

```
Function Iterative-Deepening-Search(problem) returns a solution, or failure
```

Analyze

- Completeness: yes
- Time: O(b^d)
- Space: O(bd)
- Optimality: yes, if step cost = 1

Iterative deepening is preferred to depth-first or breadth-first search when the search space is large and the **depth of the solution is not known**.
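A short Python sketch of the idea (graph and names mine): run a depth-limited DFS with limits 0, 1, 2, ... until a solution appears:

```python
def depth_limited(graph, node, goal, limit):
    """Depth-first search that refuses to go deeper than `limit` edges."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        rest = depth_limited(graph, child, goal, limit - 1)
        if rest:
            return [node] + rest
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    """Repeat depth-limited search with growing limits until success."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit)
        if result:
            return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

Shallow levels are re-searched on every iteration, but since most nodes live at the deepest level the repeated work only costs a constant factor.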

## Comparing Uninformed Search Strategies

Type | Time | Space | Optimal | Complete |
---|---|---|---|---|
BFS | O(b^d) | O(b^d) | Yes | Yes |
UCS | O(b^d) | O(b^d) | Yes | Yes |
DFS | O(b^m) | O(bm) | No | No |
DLS | O(b^l) | O(bl) | No | Yes, if l >= d |
IDS | O(b^d) | O(bd) | Yes | Yes |

# Informed Search

Use heuristics to guide the search

## Best First Search

- Idea: use an evaluation function for each node, an estimate of "desirability"
- Queuing function: insert successors in decreasing order of **desirability**
- Special cases:
  - Greedy search
  - A* search

## Greedy Search

Estimation function: h(n) = estimate of cost from n to **goal** (heuristic)

Analyze

- Complete: no, it can get stuck in loops; complete in a finite space with **repeated-state checking**
- Time: O(b^m), but a good heuristic can give dramatic improvement
- Space: O(b^m), keeps all nodes in memory
- Optimal: no

## A* Search

A* search uses an admissible heuristic in the evaluation function

`f(n) = g(n) + h(n)`

where

- `g(n)` is the cost so far to reach n
- `h(n)` is the estimated cost from n to the goal
- `f(n)` is the estimated total cost of the path through n to the goal

Analyze

- Complete: yes, unless there are infinitely many nodes with f <= f(G)
- Time: exponential in `(relative error in h) x (length of solution)`
- Space: keeps all nodes in memory
- Optimal: yes; A* cannot expand a node with f = f_{i+1} until all nodes with f = f_i are finished

### Optimality of A*

Suppose a suboptimal goal G2 is in the queue, and let n be an unexpanded node on the shortest path to the optimal goal G1. Then

`f(G2) = g(G2) + h(G2) = g(G2) > g(G1) >= f(n)`

since h(G2) = 0 for a goal state, so A* expands n before G2 and never returns a suboptimal goal.

### F(n) Increment

A* expands nodes in order of increasing f value.

With a consistent heuristic, f never decreases along any path: for a successor n' of n reached by a step of cost d, h(n) <= d + h(n'), hence g(n') + h(n') >= g(n) + h(n). When h is admissible but not consistent, the pathmax correction enforces this when traversing the node:

`f(n') = max(g(n') + h(n'), f(n))`
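A runnable A* sketch (the graph, heuristic table and names are mine): a priority queue ordered by f = g + h, with an admissible `h` that never overestimates the true remaining cost:

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand nodes in order of f = g + h (h assumed admissible)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for child, step in graph.get(state, []):
            new_g = g + step
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + h[child], new_g, child, path + [child]))
    return None

# Path S-A-G costs 6, path S-B-G costs 5; h never overestimates.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
h = {'S': 4, 'A': 5, 'B': 1, 'G': 0}
```

Setting `h` to all zeros degrades this to uniform-cost search, and ranking by `h` alone gives greedy search, matching the "special cases" above.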

## Hill-climbing Search

Iteratively maximize “value” of current state, by replacing it by successor state that has highest value, as long as possible.

Here is the pseudo code:

```
function Hill-Climbing(problem) returns a solution state
```

Weakness: it may only find a **local maximum/minimum**, not the global one.
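A minimal Python sketch of the loop (the toy objective and names are mine): keep moving to the best neighbour while it improves on the current state:

```python
def hill_climbing(value, neighbors, start):
    """Move to the best neighbour while it improves the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current               # local maximum reached
        current = best

# Maximize -(x - 3)^2 over integer states; the maximum is at x = 3.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
```

On this single-peaked landscape the local maximum is also global; on a bumpier `value` the same loop would stop at whichever peak it climbs first.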

## Simulated Annealing

The acceptance rule follows the Boltzmann distribution:

`P(A)/P(B) = exp((E(A) - E(B))/T) = exp(E(A)/T) / exp(E(B)/T)`

```
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
```

Case | ΔE | ΔE/T | exp(ΔE/T) | accept bad move |
---|---|---|---|---|
T is large | < 0 | < 0, close to 0 | close to 1 | high probability |
T is small | < 0 | < 0, very negative | close to 0 | low probability |
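A small Python sketch of the algorithm (the toy landscape, cooling schedule and names are my own choices, not from the notes): downhill moves are accepted with probability exp(ΔE/T), which shrinks as T cools:

```python
import math
import random

def simulated_annealing(value, neighbors, start, schedule, steps=500):
    """Hill climbing that sometimes accepts downhill moves:
    a move with dE < 0 is accepted with probability exp(dE/T)."""
    current = start
    for t in range(1, steps + 1):
        T = schedule(t)
        if T < 1e-9:
            break                        # frozen: effectively greedy now
        nxt = random.choice(neighbors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
    return current

# Maximize -(x - 3)^2; high T explores, low T settles near the peak.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
schedule = lambda t: 10.0 * 0.95 ** t    # exponential cooling

random.seed(0)
result = simulated_annealing(value, neighbors, 0, schedule)
```

Early on (T large) the walk is nearly random; once T is small only improving moves survive, matching the two rows of the table above.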

# Game playing

## Basic Concepts

- Initial state
- Operators
- Terminal state
- Utility function: AKA **payoff function**

## The Minimax Algorithm

Assume the opponent plays optimally to minimize your utility, and choose the move that maximizes your guaranteed payoff.

Pseudo code of the minimax algorithm:

```
function minimax_decision(game) returns an operator
```

Analyze

- Complete: yes, for finite state spaces
- Optimal: yes
- Time complexity: **O(b^m)**
- Space complexity: O(bm)
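A compact recursive sketch in Python (the tiny game tree and names are mine): MAX and MIN levels alternate, and leaves carry the payoff for the MAX player:

```python
def minimax(state, is_max, children, utility):
    """Exhaustive minimax value of a game tree given by `children`."""
    kids = children(state)
    if not kids:                         # terminal state
        return utility(state)
    values = [minimax(k, not is_max, children, utility) for k in kids]
    return max(values) if is_max else min(values)

# Tiny two-ply game: leaves hold payoffs for the MAX player.
tree = {'root': ['L', 'R'], 'L': [3, 12], 'R': [8, 2]}
children = lambda s: tree.get(s, [])
utility = lambda s: s                    # leaves are the payoffs themselves
```

Here MAX prefers the left move: MIN forces 3 on the left but only 2 on the right.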

## Resource Limitations

- Evaluation function: evaluates value of state using heuristics and cuts off search
- Using new minimax
- CUTOFF-TEST: cutoff test to replace the termination condition
- EVAL: evaluation function to replace utility function

Note: **exact values** do not matter, only **the order** matters.

## Alpha-beta Pruning

- prune portions of the search tree that cannot improve the utility value.
- pruning does not affect the final result.
- time complexity: O(b^(m/2)) with perfect move ordering

```
function max_value(state, game, alpha, beta) returns the minimax value of state
```
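A runnable sketch of the pruning (same toy game tree as above, names mine): alpha tracks the best value MAX can already guarantee, beta the best MIN can guarantee, and a branch is cut as soon as alpha >= beta:

```python
def alphabeta(state, is_max, children, utility,
              alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning: same answer, fewer nodes visited."""
    kids = children(state)
    if not kids:
        return utility(state)
    if is_max:
        v = float('-inf')
        for k in kids:
            v = max(v, alphabeta(k, False, children, utility, alpha, beta))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                    # MIN above will never allow this branch
        return v
    else:
        v = float('inf')
        for k in kids:
            v = min(v, alphabeta(k, True, children, utility, alpha, beta))
            beta = min(beta, v)
            if alpha >= beta:
                break                    # MAX above already has a better option
        return v

tree = {'root': ['L', 'R'], 'L': [3, 12], 'R': [8, 2]}
children = lambda s: tree.get(s, [])
```

On this tree the right subtree is cut after seeing the leaf 2, since MAX already has 3 guaranteed from the left.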

## Elements Of Chance

Expectimax and expectimin nodes take expected values over all possible chance outcomes. Because expected values are compared rather than just orderings, an order-preserving transformation of the evaluation function does not necessarily produce the same behaviour; only positive linear transformations are guaranteed to.

# Constraint Satisfaction

## Constraint Satisfaction Problems (CSP)

### Basic Concepts

- Variables
- Domains
- Constraints
- Consistent
- Complete

### Varieties of CSPs

- Discrete variables
- finite domains
- infinite domains

- Continuous variables

### Varieties Of Constraints

- Unary
- Binary
- Higher-order

### Example: Crypt-arithmetic

```
  T W O
+ T W O
-------
F O U R
```

Variables are the letters (plus carry variables); each letter stands for a distinct digit.

## Backtracking Search For CSPs

Depth-first search for CSPs with single-variable assignments is called **backtracking search**.

```
function BACKTRACKING-SEARCH(csp) returns a solution, or failure
```

### Improved Backtracking

- Minimum remaining values (MRV): choose the variable with the fewest legal values
- Degree heuristic: choose the variable with the most constraints on the remaining variables

### Arc Consistency

- Node-consistent: all values in the variable's domain satisfy the variable's unary constraints
- Arc-consistent: an arc X -> Y is consistent iff for every value of X there is some value of Y that satisfies the binary constraint between them
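The backtracking scheme above can be sketched in a few lines of Python; the map-colouring instance (WA, NT, SA mutually adjacent) and all names are my own illustration, not from the notes:

```python
def backtracking_search(variables, domains, consistent, assignment=None):
    """Assign one variable at a time; undo and retry on a violation."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtracking_search(variables, domains,
                                         consistent, assignment)
            if result:
                return result
        del assignment[var]              # backtrack
    return None

# Toy map colouring: three mutually adjacent regions.
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green', 'blue'] for v in variables}
adjacent = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA')]

def consistent(a):
    return all(a[x] != a[y] for x, y in adjacent if x in a and y in a)

solution = backtracking_search(variables, domains, consistent)
```

Swapping the `next(...)` line for a fewest-legal-values choice would give the MRV improvement described above.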