State Spaces
- Definition
- A state space is the set of all configurations that a given problem and its environment
could achieve.
- Each configuration is called a state, and contains
- Static information.
This is often extracted and held separately in a "knowledge base"
- Dynamic information, which changes between states.
- For example: a die, chess, Missionaries & Cannibals (M&C), N-queens.
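As a minimal sketch (an assumed representation, not one fixed by the notes), an N-queens state can separate the two kinds of information: the board size is static and held once, while the queen placements are the dynamic information that changes between states.

```python
# Static information: the board size N, held separately
# (the "knowledge base") and shared by every state.
N = 4

# Dynamic information: state[i] is the column of the queen in row i.
# A partial placement is one state in the state space.
state = (0, 2)  # queens in row 0 col 0, and row 1 col 2

print(len(state))  # number of queens placed so far in this state
```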
- Problems in terms of state spaces
- Initial states
- State generator: Given certain achieved states in a state space (these may be some
specified initial states), new states may be achieved.
- Independent generation of states, e.g., roll a die.
- State transformations applied to achieved states, e.g. flip a die, move in chess.
- Described as LHS => RHS, where the LHS determines applicability and the RHS
specifies results.
- Typically there are too many for a ground representation (e.g., chess has roughly 10^120 states), so variables are used.
- Changes the dynamic information in a state
- Solution detector: Test to recognise a solution configuration
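The generator and detector can be sketched for N-queens (an assumed formulation; the function names are hypothetical). The safety check plays the LHS role, determining applicability; appending a column is the RHS, specifying the resulting state; the detector recognises a solution configuration.

```python
N = 4  # static information: board size

def safe(state, col):
    """LHS: a queen may go in the next row at `col` only if no
    placed queen attacks that square (same column or diagonal)."""
    row = len(state)
    return all(c != col and abs(c - col) != row - r
               for r, c in enumerate(state))

def successors(state):
    """RHS: achieve new states by placing one more queen."""
    return [state + (col,) for col in range(N) if safe(state, col)]

def is_solution(state):
    """Solution detector: all N queens placed (conflicts are
    excluded by the generator)."""
    return len(state) == N

print(successors((0,)))  # states reachable from one placed queen
```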
- Solving Problems
- A problem may be solved by applying the methods of achieving new states until a goal state is recognised.
- The difficulty of finding goal states:
- A branching factor of N gives rise to a state space of size N^D at a
search depth of D.
- This is typical of problem state spaces, i.e., problems are typically exponentially
hard.
- Only a finite number of independent generations or state transformations can be
made in a finite amount of time, thus the solution of exponentially hard problems
by exhaustive search or independent generation takes exponential time.
- We don't have exponential time.
- Consider N-queens.
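The N^D growth is easy to make concrete. A quick sketch (illustrative numbers only):

```python
def space_size(branching, depth):
    """Size of a search tree with a fixed branching factor:
    branching ** depth states at the given depth."""
    return branching ** depth

print(space_size(10, 6))   # a modest problem: a million states
print(space_size(35, 80))  # chess-like magnitudes: astronomically many
```

Even a modest branching factor yields astronomically many states at realistic depths, which is why exhaustive search takes exponential time.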
- The difficulty of finding goal states independently
- Cannot use control strategies.
- Cannot provide a sequence of transformations for reuse
- Only useful if there is no sequence of transformations that leads to a goal state.
- Consider N-queens.
- We will only consider searching henceforth.
- State space search using transformations and control strategies
- Search strategies that exhaustively explore the entire state space, e.g., DFS,
BFS, and iterative deepening, cannot be effective in solving problems.
- A control strategy will decide which (finite number of) transformations should be
made at any point in time, from which achieved state.
- Hopefully this strategy will allow the solution of exponentially hard problems in
less than exponential time.
- Three types of information can be used:
- The static information
- The dynamic information in achieved states, considered statically, state by
state, as used in many game-playing programs.
- The structure of the space of achieved states.
- For a control strategy to succeed, it is necessary that:
- The goal states are not randomly distributed.
- The pattern of distribution is detectable.
- The problem solver be able to react so as to search according to the
detected pattern.
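One common shape for a control strategy is greedy best-first search, sketched below under assumptions: the heuristic `h` reacts only to the dynamic information of each achieved state (the second kind of information listed above), and `successors`, `is_goal`, and `h` are hypothetical problem-supplied functions.

```python
import heapq
from itertools import count

def best_first(start, successors, is_goal, h):
    """Control strategy: at each point in time, choose the achieved
    state whose heuristic value h is lowest and expand it."""
    tie = count()  # tiebreaker so states themselves are never compared
    frontier = [(h(start), next(tie), start)]
    seen = {start}
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), next(tie), nxt))
    return None  # no goal state achievable
```

If h detects the distribution pattern of goal states well, far less than the full N^D space is explored; if the goals really are randomly distributed, no h can help.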
- Solution information
- If there is more than one solution state (sequence of transformations that achieves
a solution state), and a "best" solution state (sequence) is required, the search
for solutions (sequences) must continue until it is assured that the best solution
(sequence) has been found.
- If the sequence of steps required to get to a solution configuration is required,
the state transformations are recorded within each newly achieved state.
- Best may be measured by the cost of the transformations and by the quality of the goal state found.
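Recording the transformation sequence inside each achieved state can be sketched as follows (the configurations and moves here are hypothetical placeholders): each state is a (configuration, path) pair, so the solution sequence can be read straight off a goal state.

```python
def apply_move(config, move):
    # assumed transformation for illustration: add to a counter
    return config + move

def expand(state, moves):
    """Achieve new states, recording each transformation used
    within the newly achieved state."""
    config, path = state
    return [(apply_move(config, m), path + [m]) for m in moves]

start = (0, [])                  # initial configuration, empty path
step1 = expand(start, [1, 2])    # two achieved states
step2 = expand(step1[0], [1, 2])
print(step2[1])                  # configuration reached, and how
```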
- Representing States
- Three problems
- How can individual objects be represented?
- How can the individual object representations be combined into a state?
- How can sequences of states be represented without duplicating constant information?
- Solutions
- The first two are the problem of knowledge representation.
What data structures are appropriate?
- Non-structured knowledge representations (the computer science approach)
- Logics (the mathematical approach)
- The last is called the "frame problem", and can be solved by structure sharing.
- Build the current state by keeping only changes on each state change and
working from the initial state.
- Can have a re-start every now and again to avoid continuous rebuilding.
- Can keep the current state completely until a new branch is used.
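The structure-sharing approach to the frame problem can be sketched like this (an assumed representation): each state stores only its change (a delta) plus a link to its parent, and the full configuration is rebuilt by replaying deltas forward from the initial state.

```python
def make_state(parent, key, value):
    """A new state is just (parent, delta); unchanged information
    is shared with the parent, never copied."""
    return (parent, {key: value})

def rebuild(state, initial):
    """Reconstruct the full configuration by working from the
    initial state through the chain of deltas (the continuous
    rebuilding that occasional restarts would shortcut)."""
    deltas = []
    while state is not None:
        parent, delta = state
        deltas.append(delta)
        state = parent
    config = dict(initial)
    for delta in reversed(deltas):
        config.update(delta)
    return config

s0 = None                                  # the initial state itself
s1 = make_state(s0, "on_table", "blockA")  # hypothetical changes
s2 = make_state(s1, "holding", "blockB")
print(rebuild(s2, {"arm": "empty"}))
```

Keeping a periodic snapshot, or keeping the current state complete until a new branch is taken, trades memory for rebuild time exactly as the bullets above describe.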
- Direction of search
- Forward search
- Backward search - only if goal states are known.
- The better search direction depends on the topology of the problem space.
- Working in the direction of a lower branching factor is superior, as the search
tree is smaller.
- The number of start states versus the number of goal states.
- If the branching factor is the same in both directions, this becomes the
deciding factor; otherwise the branching factor is more important.
- It is better to start from a few states and work towards many states.
- In some cases the start states may not be precisely known, e.g., which axioms to
use in theorem proving.
- Whether justification for the reasoning is required.
- Backward reasoning can justify itself.
- Forward reasoning cannot absolutely justify itself.
- Working in both directions is called bi-directional search.
- Bi-directional search has the danger of the two frontiers passing each other
in the dark, but cuts the exponential growth of the search space by searching
to roughly half the depth from each end.
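A sketch of bi-directional search in an assumed setting (an undirected graph given by a `neighbours` function): two breadth-first frontiers grow from the start and the goal, and search stops when they meet.

```python
from collections import deque

def bidirectional(start, goal, neighbours):
    """Grow BFS frontiers from both ends; return a meeting state."""
    if start == goal:
        return start
    f_seen, b_seen = {start}, {goal}
    f_q, b_q = deque([start]), deque([goal])
    while f_q and b_q:
        # expand the whole current layer of the smaller frontier
        if len(f_q) <= len(b_q):
            q, seen, other = f_q, f_seen, b_seen
        else:
            q, seen, other = b_q, b_seen, f_seen
        for _ in range(len(q)):
            node = q.popleft()
            for nxt in neighbours(node):
                if nxt in other:
                    return nxt   # the two frontiers have met
                if nxt not in seen:
                    seen.add(nxt)
                    q.append(nxt)
    return None                  # frontiers exhausted: no path
```

Each side only searches to about half the total depth, turning N^D into roughly 2·N^(D/2); the meeting test is what prevents the frontiers from passing each other in the dark.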
- Means-ends analysis
- This search strategy does not work in a particular direction, but rather works
to detect and reduce differences between the initial and goal states.
- Large sub-problems are solved first, and the gaps between are solved later.
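Means-ends analysis can be sketched as difference reduction (a toy formulation; the states, differences, and operators here are hypothetical): repeatedly find a difference between the current and goal configurations and apply an operator known to remove it.

```python
def differences(current, goal):
    """The aspects in which the current state differs from the goal."""
    return [k for k in goal if current.get(k) != goal[k]]

def means_ends(current, goal, operators):
    """operators maps each difference to a function that removes it.
    Loops forever if an operator fails to remove its difference."""
    plan = []
    while True:
        diffs = differences(current, goal)
        if not diffs:
            return plan          # no differences left: goal reached
        d = diffs[0]             # attack one difference at a time
        current = operators[d](current)
        plan.append(d)

goal = {"door": "open", "robot": "inside"}
ops = {
    "door": lambda s: {**s, "door": "open"},
    "robot": lambda s: {**s, "robot": "inside"},
}
print(means_ends({"door": "shut", "robot": "outside"}, goal, ops))
```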
- Avoiding repeated states
- Common with commutative transformations (e.g., dressing) and reversible
transformations (e.g., M&C).
- In increasing order of expense and effectiveness
- Do not return to where you just came from
- Do not create cycles
- Do not create duplicate states
- Checking for states less general than new ones generalises the check for identical states.
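The three checks above can be sketched as predicates, in the same increasing order of cost (the names `state`, `previous`, `path`, and `visited` are hypothetical):

```python
def allowed_no_backtrack(state, previous):
    """Cheapest: never step straight back; O(1) per check."""
    return state != previous

def allowed_no_cycle(state, path):
    """Never revisit a state on the current path; O(depth) per check."""
    return state not in path

def allowed_no_duplicate(state, visited):
    """Never recreate any achieved state; most memory, most effective."""
    return state not in visited

path = ["A", "B", "C"]
visited = {"A", "B", "C", "D"}
print(allowed_no_backtrack("B", path[-1]))  # not an immediate return
print(allowed_no_cycle("B", path))          # but it would close a cycle
print(allowed_no_duplicate("E", visited))   # never achieved before
```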
Exam Style Questions
- Define a state space.
- What are the three components of a classic definition of a problem?
- What two types of information are held in a state?
- How may a problem be described in terms of a state space?
- What are the three types of information that can be used to guide the search for a solution
in a state space?
- In what two ways may one problem solution be considered better than another?
- Give two reasons why independent generation of states is not suitable as a mechanism
for reaching a goal state in a state space.
- Explain the fundamental and computational reasons why depth-first search and breadth-first
search are not suitable as mechanisms for reaching a goal state in a state space.
- What does the control strategy do in state space search?
- What are the three requirements for a control strategy to succeed?
- Describe three common approaches to the frame problem (storing the dynamic information in
states).
Indicate the advantages and disadvantages of each.
- Describe the criteria that determine the best direction for search in a problem space.
- Explain the problems and advantages of bi-directional search.
- Describe three principles for avoiding loops in state space search.
Mention their relative costs.
- Explain how state space search can be implemented in terms of a physical symbol system.