The goal of Artificial Intelligence (AI) is to build machines with minds in the full and literal sense, wrote John Haugeland in his book Artificial Intelligence: The Very Idea. Not a clever fake, nor a machine that merely mimics human intelligence.
An autonomous intelligent agent senses the world around it, creates symbolic internal representations of what it knows, and plans and reasons with those representations. We call this approach to problem solving symbolic AI.
While there has been considerable debate on the nature of intelligence, it is now widely accepted that there is no single process behind intelligent behavior but many acting in tandem. An intelligent machine would not be a solo performer but an orchestra, with different processes working together to create a symphony. Marvin Minsky described this as the society of mind.
There are three principal approaches to problem solving:
- Remember the past. Learn from it.
- Understand the present. Be aware of and reason about the world around you.
- Imagine the future. Search for the means to achieve your goals.
In this course, Search Methods in AI, we focus on the last of these: the ability to solve problems from first principles by searching for solutions. Students will develop an armory of search algorithms that can be used for problem solving.
We begin by defining the search spaces that algorithms navigate in the search for solutions. We start with simple uninformed search, which traverses the space in a predetermined fashion; such blind search is completely oblivious to the goal. A heuristic function gives search a sense of direction. But heuristic functions are not perfect: in route finding in a city, for example, using Euclidean distance as a heuristic might drive search to a riverfront with no bridge to cross in sight.
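As a minimal sketch of how a heuristic steers search, here is greedy best-first search on a hypothetical city map: it always expands the junction that looks closest to the goal in straight-line (Euclidean) distance. The coords and neighbours dictionaries are illustrative assumptions supplied by the caller.

```python
import heapq
import math

def euclidean(a, b, coords):
    """Straight-line distance between two junctions (the heuristic)."""
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

def greedy_best_first(start, goal, neighbours, coords):
    """Expand the node the heuristic rates closest to the goal.
    neighbours: dict mapping each junction to its adjacent junctions."""
    frontier = [(euclidean(start, goal, coords), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in neighbours[node]:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(
                    frontier,
                    (euclidean(nxt, goal, coords), nxt, path + [nxt]))
    return None  # no route found
```

The route this returns need not be the shortest one, precisely because the heuristic can be misleading, as in the riverfront example above.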
Search, in general, requires exponential space and time. To combat this complexity, we often resort to local search methods such as steepest gradient ascent (or descent), which are computationally inexpensive. Steepest gradient ascent moves to the best neighbor if it is better than the current state, and stops otherwise. It is a popular optimization method with many applications, including the training of neural networks.
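A minimal sketch of that rule, move to the best neighbor as long as it improves, otherwise stop, follows; the neighbours and evaluate functions are placeholders to be supplied for a concrete problem.

```python
def hill_climb(start, neighbours, evaluate):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbour; stop when no neighbour improves on the current node."""
    current = start
    while True:
        options = neighbours(current)
        if not options:
            return current
        best = max(options, key=evaluate)
        if evaluate(best) <= evaluate(current):
            return current          # a local optimum (or a plateau)
        current = best
```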
But one cannot have one’s cake and eat it too. Local search methods often miss the best solutions by getting trapped in local optima. We look at search algorithms designed to escape them: tabu search, simulated annealing, genetic algorithms, and ant colony optimization.
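As one illustration of escaping local optima, here is a hedged sketch of simulated annealing: it occasionally accepts a worse neighbor, with a probability that shrinks as the temperature is cooled. The random_neighbour and evaluate functions, and the cooling parameters, are illustrative assumptions rather than prescriptions.

```python
import math
import random

def simulated_annealing(start, random_neighbour, evaluate,
                        t0=1.0, cooling=0.995, t_min=1e-3):
    """Always accept improving moves; accept worse moves with probability
    exp(delta / T), so escapes from local optima become rarer as T falls."""
    current, best = start, start
    t = t0
    while t > t_min:
        candidate = random_neighbour(current)
        delta = evaluate(candidate) - evaluate(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
            if evaluate(current) > evaluate(best):
                best = current
        t *= cooling
    return best
```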
We then turn to the well-known A* algorithm, which is guaranteed to find optimal solutions when its heuristic is admissible. Algorithm A* uses a combination of a look-ahead heuristic function and a look-back cost function that together guide search towards an optimal solution, and find it quickly. Algorithm A* may also need exponential space and time, and we investigate a variety of space-saving versions of A* search.
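A compact sketch of the idea: g is the cost accumulated so far (the look-back component), h is the heuristic estimate of the cost remaining (the look-ahead component), and nodes are expanded in order of f = g + h. The graph interface, neighbours(n) yielding (next_node, step_cost) pairs, and the heuristic h are assumptions supplied by the caller.

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A*: expand nodes in order of f = g + h.
    neighbours(n) yields (next_node, step_cost) pairs; h(n) estimates
    the remaining cost from n to the goal."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbours(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```

With an admissible h the path returned is optimal; with h = 0 everywhere the procedure reduces to uniform cost search.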
We move on to board games like chess and the search algorithms used to play them. Such games are represented by game trees. We look at the tree from the perspective of one player, traditionally called Max, and search for an optimal strategy for that player. The chess game tree is too large to be inspected fully, so we study a few search algorithms that do a finite k-ply lookahead to decide the next move for Max.
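A hedged sketch of the k-ply lookahead idea is depth-limited minimax: look ahead a fixed number of plies, then fall back on a static evaluation of the position from Max's point of view. The moves, result, and evaluate functions are hypothetical placeholders for the game at hand; refinements such as alpha-beta pruning are omitted here.

```python
def minimax(state, depth, maximizing, moves, result, evaluate):
    """Depth-limited minimax: search `depth` plies ahead, then score
    the position with a static evaluation function."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state), None
    best_move = None
    if maximizing:                       # Max picks the highest backed-up value
        best = float("-inf")
        for m in ms:
            value, _ = minimax(result(state, m), depth - 1, False,
                               moves, result, evaluate)
            if value > best:
                best, best_move = value, m
    else:                                # the opponent picks the lowest
        best = float("inf")
        for m in ms:
            value, _ = minimax(result(state, m), depth - 1, True,
                               moves, result, evaluate)
            if value < best:
                best, best_move = value, m
    return best, best_move
```

Calling minimax(position, k, True, moves, result, evaluate) returns the backed-up value and the move chosen for Max after a k-ply lookahead.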
We also look at automated planning, an emerging field with its own conferences. Planning is concerned with “the reasoning that lies behind actions”. Here again we study a host of search algorithms, including goal stack planning, partial order planning, and Graphplan.
We then look at an alternative problem solving approach embodied by the AO* algorithm. The idea is to explore different ways of breaking down a problem into simpler subproblems, in a structure called an And-Or tree. This is a goal-directed approach to search.
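The decomposition idea can be seen in a simplified recursive evaluation of an And-Or tree, which is not the incremental AO* algorithm itself: an Or node is solved by the cheapest of its alternative decompositions, while an And node requires all of its subproblems to be solved. The node encoding used here is an assumption made purely for illustration.

```python
def and_or_cost(node):
    """node is ('LEAF', cost) | ('OR', [children]) | ('AND', [children]).
    Returns the cost of the cheapest solution, or None if unsolvable."""
    kind, payload = node
    if kind == 'LEAF':
        return payload                          # cost of a primitive problem (None = unsolvable)
    child_costs = [and_or_cost(c) for c in payload]
    if kind == 'OR':                            # pick the best decomposition
        feasible = [c for c in child_costs if c is not None]
        return min(feasible) if feasible else None
    # AND: every subproblem must be solved
    if any(c is None for c in child_costs):
        return None
    return sum(child_costs)
```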
We move on to rule-based expert systems and study the well-known Rete algorithm for efficient forward chaining, in which one does not have to repeatedly search for matching data. This technology is widely used in business rule management systems.
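For intuition only, here is a naive propositional forward chainer, deliberately not Rete: it rescans the whole rule set until no new facts can be derived, which is precisely the repeated matching that Rete avoids by caching partial matches. The rule and fact encoding is an assumption for illustration.

```python
def forward_chain(rules, facts):
    """Naive forward chaining over propositional Horn rules.
    rules: list of (antecedents, consequent) pairs,
    e.g. (["bird", "healthy"], "flies"). facts: initial set of propositions."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)       # fire the rule, derive a new fact
                changed = True
    return facts
```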
We also look at deductive reasoning in logic and the foundations of logic programming. We study forward and backward reasoning in theorem proving, both of which have search under the hood, and observe how backward chaining is the basis of the programming language Prolog.
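A matching sketch of backward chaining over the same propositional rule format shows the Prolog-style, goal-directed strategy: to prove a goal, find a rule that concludes it and recursively prove that rule's antecedents. Real Prolog adds unification over terms with variables, which this sketch omits.

```python
def backward_chain(goal, rules, facts, seen=None):
    """Prove `goal` by working backwards: either it is a known fact, or some
    rule concludes it and all of that rule's antecedents can be proved."""
    seen = set() if seen is None else seen
    if goal in facts:
        return True
    if goal in seen:                    # avoid looping on circular rules
        return False
    seen = seen | {goal}
    for antecedents, consequent in rules:
        if consequent == goal and all(
                backward_chain(a, rules, facts, seen) for a in antecedents):
            return True
    return False
```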
Finally, we look at a unifying framework for problem solving, constraint processing, which gives us a mechanism for combining search with reasoning. Solving a Sudoku, for example, involves both: one can sometimes infer the number to insert, but at other times may have to resort to search.
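As a hedged illustration of that interplay, here is a small Sudoku solver sketch: propagate fills in cells that have exactly one legal candidate (reasoning), and solve guesses and backtracks when inference alone stalls (search). The 9x9 board is assumed to be a list of lists with 0 marking an empty cell.

```python
def candidates(board, r, c):
    """Digits that can legally be placed in cell (r, c)."""
    used = set(board[r]) | {board[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {board[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [d for d in range(1, 10) if d not in used]

def propagate(board):
    """Reasoning: repeatedly fill cells that have exactly one candidate.
    Returns False if some empty cell has no legal candidate (a dead end)."""
    changed = True
    while changed:
        changed = False
        for r in range(9):
            for c in range(9):
                if board[r][c] == 0:
                    cands = candidates(board, r, c)
                    if not cands:
                        return False
                    if len(cands) == 1:
                        board[r][c] = cands[0]
                        changed = True
    return True

def solve(board):
    """Search: when inference stalls, guess in the most constrained cell
    and backtrack on failure. Modifies board in place; returns True if solved."""
    if not propagate(board):
        return False
    empties = [(r, c) for r in range(9) for c in range(9) if board[r][c] == 0]
    if not empties:
        return True
    r, c = min(empties, key=lambda rc: len(candidates(board, *rc)))
    snapshot = [row[:] for row in board]
    for d in candidates(board, r, c):
        board[r][c] = d
        if solve(board):
            return True
        board[:] = [row[:] for row in snapshot]   # undo the guess and its consequences
    return False
```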
At the end of the course the student will be well versed in the role of search in problem solving, including logical deduction, and will have acquired some of the knowledge and skills needed for building autonomous agents.
By Dr. Deepak Khemani, Professor at Plaksha University. His area of expertise is Artificial Intelligence.
Textbook: Deepak Khemani, Search Methods in Artificial Intelligence, Cambridge University Press, 2024.