What is Intelligence? What is Artificial Intelligence?
Sessions 1 & 2: Introduction to AI; Planning & Search
CSE 592 Applications of Artificial Intelligence
Henry Kautz
Winter 2003

What is Artificial Intelligence?
• The study of the principles by which natural or artificial machines manipulate knowledge:
– how knowledge is acquired
– how goals are generated and achieved
– how concepts are formed
– how collaboration is achieved

". . . Exactly what the computer provides is the ability not to be rigid and unthinking but, rather, to behave conditionally. That is what it means to apply knowledge to action: It means to let the action taken reflect knowledge of the situation, to be sometimes this way, sometimes that, as appropriate. . . ." – Allen Newell

• Classical AI: Disembodied Intelligence
• Autonomous Systems: Embodied Intelligence

Classical AI
• The principles of intelligence are separate from any hardware / software / wetware implementation
– logical reasoning
– probabilistic reasoning
– strategic reasoning
– diagnostic reasoning
• Look for these principles by studying how to perform tasks that require intelligence
Success Story: Medical Expert Systems
• Mycin (1980)
– Expert-level performance in diagnosis of blood infections
• Today: 1,000's of systems
– Everything from diagnosing cancer to designing dentures
– Often outperform doctors in clinical trials
– Major hurdle today is the non-expert part: doctor/machine interaction

Success Story: Chess
"I could feel – I could smell – a new kind of intelligence across the table." – Kasparov
• Examines 5 billion positions / second
• Intelligent behavior emerges from brute-force search

Autonomous Systems
• In the 1990's there was a growing concern that work in classical AI ignored crucial scientific questions:
– How do we integrate the components of intelligence (e.g. learning & planning)?
– How does perception interact with reasoning?
– How does the demand for real-time performance in a complex, changing environment affect the architecture of intelligence?
• Provide a standard problem where a wide range of technologies can be integrated and examined
• By 2050, develop a team of fully autonomous humanoid robots that can win against the human world champion team in soccer.

Speed & Capacity
Started: January 1996
Launch: October 15th, 1998
Experiment: May 17-21
(courtesy JPL)
Not Speed Alone…
• Speech Recognition
– "Word spotting" feasible today
– Continuous speech: rapid progress
– Turns out that "low level" signal not as ambiguous as we once thought
• Translation / Interpretation / Question-answering
– Very limited progress
The spirit is willing but the flesh is weak. (English)
The vodka is good but the meat is rotten. (Russian)

Varieties of Knowledge
What kinds of knowledge are required to understand –
• Time flies like an arrow.
• Fruit flies like a banana.
• Fruit flies like a rock.

Historic Perspective
1940's – 1960's: Artificial neural networks
• McCulloch & Pitts 1943
1950's – 1960's: Symbolic information processing
• General Problem Solver – Simon & Newell
• "Weak methods" for search and learning
• 1969 – Minsky's Perceptrons
1940's – 1970's: Control theory for adaptive (learning) systems
• USSR – Cybernetics – Norbert Wiener
• Japan – Fuzzy logic
1970's – 1980's: Expert systems
• "Knowledge is power" – Ed Feigenbaum
• Logical knowledge representation
• AI Boom
1985 – 2000: A million flowers bloom
• Resurgence of neural nets – backpropagation
• Control theory + OR + Pavlovian conditioning = reinforcement learning
• Probabilistic knowledge representation – Bayesian Nets – Judea Pearl
• Statistical machine learning
2000's: Towards a grand unification
• Unification of neural, statistical, and symbolic machine learning
• Unification of logic and probabilistic KR
• Autonomous systems

"… In sum, technology can be controlled especially if it is saturated with intelligence to watch over how it goes, to keep accounts, to prevent errors, and to provide wisdom to each decision." – Allen Newell

Course Mechanics
Topics
• What is AI?
• Search, Planning, and Satisfiability
• Bayesian Networks
• Statistical Natural Language Processing
• Decision Trees and Neural Networks
• Data Mining: Pattern Discovery in Databases
• Planning under Uncertainty and Reinforcement Learning
• Autonomous Systems
• Project Presentations
Assignments
• 4 homeworks
• Significant project & presentation
Information
• http://www.cs.washington.edu/education/courses/592/03wi/

Planning & Search
Search – the foundation for all work in AI:
• Deduction
• Probabilistic reasoning
• Perception
• Learning
Case Studies
• Game playing
• Expert systems
• Planning
Reading: R&N Ch 3, 4, 5, 11
Planning
• Simplifying assumptions
– Atomic time
– Actions have deterministic effects
– Agent knows complete initial state of the world
– Agent knows the effects of all actions
– States are either goal or non-goal states, rather than numeric utilities or rewards
– Agent is sole cause of change
• All these assumptions can be relaxed, as we will see by the end of the course…

Classical Planning
• Input
– Description of set of all possible states of the world (in some knowledge representation language)
– Description of initial state of world
– Description of goal
– Description of available actions
• May include costs for performing actions
• Output
– Sequence of actions that convert the initial state into one that satisfies the goal
– May wish to minimize length or cost of plan

Example: Route Planning
Input:
• State set
• Start state
• Goal state test
• Operators (and costs)
Output:

Example: Robot Control (Blocks World)
Input:
• State set
• Start state
• Goal state test
• Operators
Output:

Implicitly Generated Graphs
• Planning can be viewed as finding paths in a graph, where the graph is implicitly specified by the set of actions
• Blocks world:
– vertex = relative positions of all blocks
– edge = robot arm stacks one block
e.g. stack(blue,table), stack(green,blue), stack(blue,red), stack(green,red)
• How many states for K blocks?

Missionaries and Cannibals
• 3 missionaries M1, M2, M3
• 3 cannibals C1, C2, C3
• Cross in a two-person boat, so that cannibals never outnumber missionaries on either shore
• What is a state? How many states?
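The "what is a state?" question on the Missionaries and Cannibals slide can be made concrete with a small program. Below is a minimal sketch (not course code; all names are illustrative) that represents a state as a triple (missionaries on the left bank, cannibals on the left bank, boat on left?) and runs breadth-first search over the implicitly generated graph, exactly in the sense of the Implicitly Generated Graphs slide.

```python
from collections import deque

def legal(m, c):
    # Cannibals may not outnumber missionaries on either shore
    # (a shore with zero missionaries is always safe).
    left_ok = m == 0 or m >= c
    right_ok = (3 - m) == 0 or (3 - m) >= (3 - c)
    return left_ok and right_ok

def successors(state):
    # The boat carries one or two people across.
    m, c, boat = state
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        nm, nc = (m - dm, c - dc) if boat else (m + dm, c + dc)
        if 0 <= nm <= 3 and 0 <= nc <= 3 and legal(nm, nc):
            yield (nm, nc, not boat)

def bfs(start=(3, 3, True), goal=(0, 0, False)):
    # Breadth-first search over paths; the graph is never built explicitly.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

plan = bfs()
print(len(plan) - 1)  # number of crossings in a shortest plan
```

There are at most 4 × 4 × 2 = 32 candidate states (fewer once illegal ones are removed), so the space is tiny; BFS returns a shortest plan, which for this puzzle is the classic 11-crossing solution.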
How to Represent Actions?
• World = set of propositions true in that world
• Actions:
– Precondition: conjunction of propositions
– Effects: propositions made true & propositions made false (deleted from the state description)

operator: stack_B_on_R
precondition: (on B Table) (clear R)
effect: (on B R) (:not (clear R))

STRIPS Representation
• Description of initial state of world
Set of propositions that completely describes a world:
{ (block a) (block b) (block c) (on-table a) (on-table b) (clear a) (clear b) (clear c) (arm-empty) }
• Description of goal (i.e. set of desired worlds)
Set of propositions that partially describes a world:
{ (on a b) (on b c) }
• Description of available actions

Action Schemata
• Compact representation of a large set of actions

(:operator pickup
 :parameters ((block ?ob1))
 :precondition (:and (clear ?ob1) (on-table ?ob1) (arm-empty))
 :effect (:and (:not (on-table ?ob1))
               (:not (clear ?ob1))
               (:not (arm-empty))
               (holding ?ob1)))

Search Algorithms
Backtrack Search
1. DFS
2. BFS / Dijkstra's Algorithm
3. Iterative Deepening
4. Best-first search
5. A*
Constraint Propagation
1. Forward Checking
2. k-Consistency
3. DPLL & Resolution
Local Search
1. Hillclimbing
2. Simulated annealing
3. Walksat

Depth First Search
• Maintain stack of nodes to visit
• Evaluation
– Complete? Not for infinite spaces
– Time Complexity? O(b^d)
– Space Complexity? O(d)

Breadth First Search
• Maintain queue of nodes to visit
• Evaluation
– Complete? Yes
– Time Complexity? O(b^d)
– Space Complexity? O(b^d)

[Example search tree from the slides: root a; children b, c; leaves d, e, f, g, h]
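The STRIPS update rule above is easy to state in code. The following is a hedged sketch (not course code): a world is a set of ground propositions represented as tuples, and applying an action removes the delete list and unions in the add list. The pickup schema from the slide is instantiated by hand for block a.

```python
def applicable(state, preconds):
    # An action is applicable iff every precondition holds in the state.
    return preconds <= state

def apply_action(state, preconds, add_list, del_list):
    # STRIPS semantics: delete, then add. States are immutable sets here.
    assert applicable(state, preconds), "preconditions not satisfied"
    return (state - del_list) | add_list

# Initial state from the STRIPS Representation slide (blocks a and b shown).
initial = {("block", "a"), ("block", "b"), ("on-table", "a"),
           ("on-table", "b"), ("clear", "a"), ("clear", "b"), ("arm-empty",)}

# (:operator pickup ...) instantiated with ?ob1 = a
pre = {("clear", "a"), ("on-table", "a"), ("arm-empty",)}
add = {("holding", "a")}
dele = {("on-table", "a"), ("clear", "a"), ("arm-empty",)}

result = apply_action(initial, pre, add, dele)
print(("holding", "a") in result)   # True
print(("arm-empty",) in result)     # False: deleted by pickup
```

Note that propositions not mentioned in the effect, such as (on-table b), persist unchanged; this "only what is listed changes" convention is exactly what the delete-list representation buys.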
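The DFS/BFS comparison above comes down to a single data-structure choice: popping the frontier as a stack gives depth-first order, popping it as a queue gives breadth-first. A minimal sketch (illustrative names; the example tree is arbitrary, loosely following the one on the slide):

```python
from collections import deque

def search(start, is_goal, succ, frontier_pop):
    # Generic graph search; 'frontier_pop' decides DFS vs BFS.
    frontier = deque([start])
    visited = {start}
    while frontier:
        node = frontier_pop(frontier)
        if is_goal(node):
            return node
        for nxt in succ(node):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return None

# A small explicit tree standing in for an implicitly generated graph.
tree = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f", "g", "h"],
        "d": [], "e": [], "f": [], "g": [], "h": []}
succ = lambda n: tree[n]
goal = lambda n: n == "h"

dfs_result = search("a", goal, succ, deque.pop)       # stack: LIFO
bfs_result = search("a", goal, succ, deque.popleft)   # queue: FIFO
print(dfs_result, bfs_result)  # both reach h
```

The complexity figures on the slide follow from this picture: both explore O(b^d) nodes in the worst case, but BFS must hold a whole frontier level, O(b^d) space, while DFS only holds the current path.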