AI-class Sebastian Thrun and Peter Norvig



http://www.ai-class.com/


1. Welcome to AI
1. Introduction
2. Course Overview
3. Intelligent Agents
4. Applications of AI
5. Terminology
6. Poker Question
7. Robot Car Question
8. AI and Uncertainty
9. Machine Translation
10. Chinese Translation 1
11. Chinese Translation 2
12. Chinese Translation 3
13. Summary


1. Introduction.

Sebastian Thrun:
Welcome to the online introduction to artificial intelligence.
My name is Sebastian Thrun.
Peter Norvig:
I'm Peter Norvig.
Sebastian Thrun:
We are teaching this class at Stanford, and now are teaching it online for the entire world.
We are really excited about this.
Peter Norvig:
It's great to have you all here.
It's exciting to have such a record-breaking number of people.
We think we can deliver a good introduction to AI.
We hope you'll stick with it. It's going to be a lot of work,
but we think it's going to be very rewarding.
Sebastian Thrun:
The way that it is going to be organized is that every week there are going to be new videos, and with these videos, quizzes.
With these quizzes, you can test your knowledge about AI. We will also post homework assignments and exams for the advanced version of this class, on which you'll be quizzed. We are going to grade those to give you a final score to see if you can actually master AI the same way any good student at Stanford would do it.
Peter Norvig:
If you do that, then at the end of the class, we'll sign a letter of accomplishment, and let you know that you've achieved this and what your rank in the class was.
Sebastian Thrun:
So I hope you have fun. Watch us on videotape. We will teach you AI.
Participate in the discussion forum.
Ask your question, and help others answer questions.
I hope we have a fantastic time ahead of us in the next 10 weeks.
Peter Norvig:
Welcome to the class. We'll see you online.

2. Course Overview.

Sebastian Thrun:
Welcome to the first unit of Online Introduction to AI. I will be teaching you the very, very basics today. This is Unit 1 of AI.
Welcome.
The purpose of this class is twofold: Number 1, to teach you the very basics of AI so you'll be able to talk to people in the field and understand the basic tools of the trade; and also, very importantly, to excite you about the field.
I have been in the field of AI for about 20 years, and it's been truly rewarding. So I want you to participate in the beauty and the excitement of AI, so you can become a professional who gets the same reward and excitement out of this field as I do. The basic structure of this class involves videos in which Peter or I will teach you something new; then also quizzes, in which we will ask you about your ability to answer AI questions; and finally, answer videos in which we tell you what the right answer would have been for the quiz that you might have incorrectly answered before.
This will be reiterated, and every so often you get a homework assignment, also in the form of quizzes but without the answers. And then we also have exams. If you check our website, there are requirements on how you have to do assignments and exams. Please go to ai-class.org for the details of the class.
An AI program is called wetware, a formula, or an intelligent agent. Pick the one that fits best.

3. Intelligent Agents

Sebastian Thrun:
The correct answer is intelligent agent.
Let's talk about intelligent agents.
Here is my intelligent agent, and it gets to interact with an environment.
The agent can perceive the state of the environment through its sensors,
and it can affect the state of the environment through its actuators.
The big question of AI is the function that maps sensors to actuators.
This is called the control policy for the agent.
So all of this class will deal with how does an agent make decisions that it can carry out with its actuators based on past sensor data.
Those decisions take place many, many times, and the loop of environment feedback to sensors, agent decision, actuator interaction with the environment, and so on, is called the perception-action cycle.
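As a minimal sketch, the perception-action cycle can be written as a simple loop. Everything below - the toy counter environment and the "always increment" policy - is an invented stand-in for illustration, not anything from the class:

```python
class ToyEnvironment:
    """A hypothetical stand-in environment: a counter the agent can change."""
    def __init__(self):
        self.state = 0
    def sense(self):
        return self.state          # the sensor reads the environment's state
    def act(self, action):
        self.state += action       # the actuator affects the state

def run_agent(policy, environment, steps):
    """The perception-action cycle: sense, decide, act, repeat."""
    history = []                   # past sensor data the policy may use
    for _ in range(steps):
        percept = environment.sense()
        history.append(percept)
        action = policy(history)   # control policy: sensor data -> action
        environment.act(action)
    return history

env = ToyEnvironment()
history = run_agent(lambda hist: 1, env, steps=5)   # policy: always "add 1"
# history == [0, 1, 2, 3, 4]
```

The control policy here is the function mapping sensor history to actuator commands; all of this class is about how to choose that function well.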
So here is my very first quiz for you.
AI has successfully been used in finance, robotics, games, medicine, and the Web.
Check any or all of those that apply.
And if none of them applies, check the box down here that says none of them.


4. Applications of AI.

Sebastian Thrun:
So the correct answer is all of those - finance, robotics, games, medicine, the Web, and many more applications.
There is a huge number of applications of AI in finance, very often in the shape of making trading decisions - in which case, the agent is called a trading agent. And the environment might be things like the stock market or the bond market or the commodities market.
And our trading agent can sense the prices of certain things, like stocks or bonds or commodities.
It can also read the news online and follow certain events. And its decisions are usually things like buy or sell decisions - trades.
There's a huge history of AI finding methods to look at data over time and make predictions as to how prices develop over time - and then put in trades behind those. And very frequently, people using AI trading agents have made a good amount of money with superior trading decisions.
There's also a long history of AI in robotics. Here is my depiction of a robot. Of course, there are many different types of robots, and they all interact with their environments through their sensors, which include things like cameras, microphones, and tactile sensors for touch.
And the way they impact their environments is to move motors around, in particular, their wheels, their legs, their arms, their grippers.
They can also say things to people using voice.
Now there's a huge history of using AI in robotics. Pretty much, every robot that does something interesting today uses AI.
In fact, often AI has been studied together with robotics, as one discipline. But because robots are somewhat special in that they use physical actuators and deal with physical environments, they are a little bit different from just AI, as a whole.
When the Web came out, the early Web crawlers were called robots, and to block a robot from accessing your website, to the present day,
there's a file called robots.txt that allows you to deny any Web crawler access to the information on your website.
So historically, robotics played a huge role in AI and a good chunk of this class will be focusing on robotics.
AI has a huge history in games - to make games smarter or feel more natural. There are 2 ways in which AI has been used in games, as a game agent. One is to play against you, as a human user. So for example, if you play the game of chess, then you are the environment to the game agent. The game agent gets to observe your moves, and it generates its own moves with the purpose of defeating you in chess. So in adversarial games, where you play against an opponent and the opponent is a computer program, the game agent is built to play against you - against your own interests - and make you lose. And of course, your objective is to win. That's an AI-games-type situation.
The second thing is that game agents in AI are also used to make games feel more natural. So very often games have characters inside, and these characters act in some way. And it's important for you, as the player, to feel that these characters are believable.
There's an entire sub-field of AI to use AI to make characters in a game more believable - look smarter, so to speak -  so that you, as a player, think you're playing a better game.
AI has a long history in medicine as well.
The classic example is that of a diagnostic agent. So here you are - you might be sick, and you go to your doctor. And your doctor wishes to understand what the reason for your symptoms and your sickness is. The diagnostic agent will observe you through various measurements - for example, blood pressure and heart signals, and so on - and it'll come up with a hypothesis as to what you might be suffering from. But rather than intervene directly, in most cases the diagnosis of your disease is communicated to the doctor, who then takes on the intervention. This is called a diagnostic agent. There are many other versions of AI in medicine. AI is used in intensive care to understand whether there are situations that need immediate attention. It's been used in lifelong medicine to monitor signs over long periods of time. And as medicine becomes more personal, the role of AI will definitely increase.
We already mentioned AI on the Web. The most generic version of AI is to crawl the Web and understand the Web, and assist you in answering questions. So when you have this search box over here, and it says "Search" on the left and "I'm Feeling Lucky" on the right, what AI does for you is it understands what words you typed in and finds the most relevant pages. That is really core artificial intelligence. It's used by a number of companies, such as Microsoft and Google and Amazon, Yahoo, and many others. And the way this works is that there's a crawling agent that can go to the World Wide Web and retrieve pages, through just a computer program. It then sorts these pages into a big database inside the crawler and also analyzes the relevance of each page to any possible query. When you then come and issue a query, the AI system is able to give you a response - for example, a collection of the 10 best Web links.
In short, every time you build a piece of software that makes your computer smart, you will likely need AI. And in this class, Peter and I will teach you many of the basic tricks of the trade to make your software really smart.


5. Terminology

Sebastian Thrun:
It will be good to introduce some basic terminology that is commonly used in AI to distinguish different types of problems.
The very first words I will teach you are fully versus partially observable. An environment is called fully observable if what your agent can sense at any point in time is completely sufficient to make the optimal decision. So, for example, in many card games, when all the cards are on the table, the momentary sight of all those cards is really sufficient to make the optimal choice. That is in contrast to some other environments where you need memory on the side of the agent to make the best possible decision. For example, in the game of poker, the cards aren't openly on the table, and memorizing past moves will help you make a better decision.

To fully understand the difference, consider the interaction of an agent with the environment through its sensors and its actuators. This interaction takes place over many cycles, often called the perception-action cycle. For many environments, it's convenient to assume that the environment has some sort of internal state. For example, in a card game where the cards are not openly on the table, the state might pertain to the cards in your hand. An environment is fully observable if the sensors can always see the entire state of the environment. It's partially observable if the sensors can only see a fraction of the state, yet memorizing past measurements gives us additional information about the state that is not directly observable right now. So any game, for example, where past moves carry information about what might be in a person's hand - those games are partially observable, and they require different treatment. Very often, agents that deal with partially observable environments need internal memory to understand what the state of the environment is, and we'll talk extensively about how to structure such internal memory when we talk about hidden Markov models.
A second terminology for environments pertains to whether the environment is deterministic or stochastic. A deterministic environment is one where your agent's actions uniquely determine the outcome. So, for example, in chess, there's really no randomness when you move a piece. The effect of moving a piece is completely predetermined, and no matter where I'm going to move the same piece, the outcome is the same. That we call deterministic. Games with dice, for example, like backgammon, are stochastic. While you can still deterministically move your pieces, the outcome of an action also involves the throwing of the dice, and you can't predict those. There's a certain amount of randomness involved in the outcome of dice, and therefore, we call this stochastic.

A discrete environment is one where you have finitely many action choices and finitely many things you can sense. So, for example, in chess, again, there are finitely many board positions and finitely many things you can do. That is different from a continuous environment, where the space of possible actions or things you could sense may be infinite. So, for example, if you throw darts, there are infinitely many ways to angle the darts and to accelerate them.

Finally, we distinguish benign versus adversarial environments. A benign environment might be random. It might be stochastic, but it has no objective of its own that would contradict your own objective. So, for example, weather is benign. It might be random. It might affect the outcome of your actions. But it isn't really out there to get you. Contrast this with adversarial environments, such as many games like chess, where your opponent is really out there to get you. It turns out it's much harder to find good actions in adversarial environments, where the opponent actively observes you and counteracts what you're trying to achieve, relative to a benign environment, where the environment might merely be stochastic but isn't really interested in making your life worse.
So, let's see to what extent these expressions make sense to you by going to our next quiz. 
So here are the 4 concepts again: partially observable versus fully, stochastic versus deterministic, continuous versus discrete, adversarial versus benign. And let me ask about the game of checkers. Check one or all of those attributes that apply. So, if you think checkers is partially observable, check this one. Otherwise, just don't check it. If you think it's stochastic, check this one; continuous, check this one; adversarial, check this one. If you don't know about checkers, you can check the Web and Google it to find a little more information about checkers.


6. Poker Question
Male narrator:
The game of poker - is this partially observable, stochastic, continuous, or adversarial? Please check any of those that apply.

7. Robot Car Question
Male narrator:
Here is a favorite: a robotic car. I wish to know whether it is partially observable, stochastic, continuous, or adversarial. That is, is the problem of driving robotically - say, in a city - subject to any of those categories? Please check any or all that might apply.


8. AI and Uncertainty

I'm going to briefly talk about AI as something else: AI is the technique of uncertainty management in computer software. Put differently, AI is the discipline that you apply when you want to know what to do when you don't know what to do.

Now, there are many reasons why there might be uncertainty in a computer program. There could be a sensor limit - that is, your sensors are unable to tell you what exactly is the case outside the AI system. There could be adversaries who act in a way that makes it hard for you to understand what is the case. There could be stochastic environments. Every time you roll the dice in a dice game, the stochasticity of the dice will make it impossible for you to be absolutely certain of the situation. There could be laziness. So perhaps you can actually compute what the situation is, but your computer program is just too lazy to do it. And here's my favorite: ignorance, plain ignorance. Many people are just ignorant of what's going on. They could know it, but they just don't care. All of these things are causes for uncertainty. AI is the discipline that deals with uncertainty and manages it in decision making.


9. Machine Translation

Peter Norvig:

Now we've had an introduction to AI. We've heard about some of the properties of environments, and we've seen some possible architectures for agents. I'd like next to show you some examples of AI in practice. Sebastian and I have some personal experience in things we have done at Google, at NASA, and at Stanford, and I want to tell you a little bit about some of those.

One of the best successes of AI technology at Google has been the machine translation system. Here we see an example of an article in Italian automatically translated into English. Now, these systems are built for 50 different languages, and we can translate from any of those languages into any of the other languages. So that's over 2,500 different systems, and we've done this all using machine learning techniques - using AI techniques - rather than trying to build them by hand. And the way it works is that we go out and collect examples of text that's aligned between the 2 languages. So we find, say, a newspaper that publishes 2 editions, an Italian edition and an English edition, and now we have examples of translations. And if anybody ever asked us for exactly the translation of this one particular article, then we could just look it up and say, "We already know that." But of course, we aren't often going to be asked that. Rather, we're going to be asked about parts of this. Here are some words that we've seen before, and we have to figure out which words in this article correspond to which words in the translated article. We do that by examining many, many millions of words of text in the 2 languages and making the correspondence, and then we can put that all together. And then when we see a new example of text that we haven't seen before, we can just look up what we've seen in the past for the correspondence.

So, the task is really two parts. Off-line, before we see an example of text we want to translate, we first build our translation model.
We do that by examining all of the different examples and figuring out which part aligns to which. Then, when we're given a text to translate, we use that model, and we go through and find the most probable translation.
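As a toy sketch of this two-phase setup: off-line we store a phrase table learned from aligned examples; at translation time we pick the most probable translation for each phrase. The table entries below are invented probabilities, purely for illustration - not real learned values:

```python
# Hypothetical phrase table: source word -> {target word: probability}.
phrase_table = {
    "gatto": {"cat": 0.9, "feline": 0.1},
    "nero": {"black": 0.8, "dark": 0.2},
}

def translate(words):
    """For each source word, pick the most probable translation."""
    return [max(phrase_table[w], key=phrase_table[w].get) for w in words]

translation = translate(["gatto", "nero"])
# translation == ["cat", "black"]
```

A real system scores whole candidate sentences, not words independently, but the lookup-the-most-probable-correspondence idea is the same.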
So, what does it look like? Well, let's look at some example text. And rather than look at news articles, I'm going to look at something simpler. I'm going to switch from Italian to Chinese. Here's a bilingual text. Now, for large-scale machine translation, examples are found on the Web. This example was found in a Chinese restaurant by Adam Lopez. Now, it's a given, for a text of this form, that a line in Chinese corresponds to a line in English, and that's true for each of the individual lines. But to learn from the text, what we really want to discover is which individual words in Chinese correspond to individual words or small phrases in English. I've started that process by highlighting the word "wonton" in English. It appears 3 times throughout the text. Now, in each of those lines, there's a character that appears, and those are the only places in the Chinese text where that character appears. So it seems like there's a high probability that this character in Chinese corresponds to the word "wonton" in English. Let's see if we can go farther. My question for you is: what character or characters in Chinese correspond to the word "chicken" in English? Here we see "chicken" appears in these locations. Click on the character or characters in Chinese that correspond to "chicken".


10. Chinese Translation 1

The answer is that chicken appears here, here, here, and here. Now, I don't know for sure, 100%, that that is the character for chicken in Chinese, but I do know that there is a good correspondence. Every place the word chicken appears in English, this character appears in Chinese, and no other place. Let's go 1 step farther. Let's see if we can work out a phrase in Chinese and see if it corresponds to a phrase in English. Here's the phrase "corn cream". Click on the characters in Chinese that correspond to "corn cream".

11. Chinese Translation 2

The answer is: these 2 characters here appear in only 2 locations, corresponding to the words "corn cream", which appear only in these locations in the English text. Again, we're not 100% sure that's the right answer, but it looks like a strong correlation.
Now, 1 more question. Tell me what character or characters in Chinese correspond to the English word soup.

12. Chinese Translation 3

The answer is that soup occurs in most of these phrases but not 100% of them. It's missing in this phrase. Equivalently, on the Chinese side we see this character occurs in most of the phrases, but it's missing here. So we see that the correspondence doesn't have to be 100% to tell us that there is still a good chance of a correlation. When we're learning to do machine translation, we use these kinds of alignments to learn probability tables of the probability of one phrase in one language corresponding to a phrase in another language.
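The kind of alignment reasoning used in these quizzes can be sketched as co-occurrence counting over aligned lines: word pairs that appear together in many lines, like "wonton" and its character, are likely correspondences. The "menu" data below is invented (romanized stand-ins instead of real Chinese characters):

```python
from collections import defaultdict

def alignment_counts(pairs):
    """Count how often each source word co-occurs with each target word
    across aligned line pairs; frequent co-occurrence suggests a
    translation correspondence."""
    counts = defaultdict(int)
    for src, tgt in pairs:
        for s in src.split():
            for t in tgt.split():
                counts[(s, t)] += 1
    return counts

# Invented aligned "menu lines" (English line, romanized stand-in line).
pairs = [
    ("wonton soup", "hundun tang"),
    ("chicken soup", "jirou tang"),
    ("chicken wonton", "jirou hundun"),
]
counts = alignment_counts(pairs)
# ("soup", "tang") co-occur in both lines containing "soup":
# counts[("soup", "tang")] == 2, while e.g. counts[("wonton", "jirou")] == 1
```

From such counts one can estimate the probability tables mentioned above, for example by normalizing each source word's counts over its candidate targets.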


13. Summary

Sebastian Thrun:

So congratulations, you just finished Unit 1 of this class, where I told you about key applications of AI, I told you about the definition of an intelligent agent, I gave you 4 key attributes of intelligent agents (partial observability, stochasticity, continuous spaces, and adversarial natures), and I briefly mentioned the mathematical concept of rationality. Obviously, I only touched each of these issues superficially, but as this class goes on, you're going to dive into each of those and learn much more about what it takes to make a truly intelligent AI system. Thank you.


2. Problem Solving
1. Introduction
2. What is a Problem?
3. Example: Route Finding
4. Tree Search
5. Tree Search Continued
6. Graph Search
7. Breadth First Search 1
8. Breadth First Search 2
9. Breadth First Search 3
10. Breadth First Search 4
11. Breadth First Search 5
12. Uniform Cost Search
13. Uniform Cost Search 1
14. Uniform Cost Search 2
15. Uniform Cost Search 3
16. Uniform Cost Search 4
17. Uniform Cost Search 5
18. Search Comparison
19. Search Comparison 1
20. Search Comparison 2 
21. Search Comparison 3
22. More on Uniform Cost
23. A* Search
24. A* Search 1
25. A* Search 2
26. A* Search 3
27. A* Search 4
28. A* Search 5
29. Optimistic Heuristic
30. State Spaces
31. State Spaces 1
32. State Spaces 2
33. State Spaces 3
34. Sliding Blocks Puzzle
35. Sliding Blocks Puzzle 1
36. Sliding Blocks Puzzle 2
37. Problems with Search
38. A Note on Implementation


1. Introduction

In this unit we're going to talk about problem solving - the theory and technology of building agents that can plan ahead to solve problems. In particular, we're talking about problem solving where the complexity of the problem comes from the idea that there are many states, as in this problem here: a navigation problem, where there are many choices to start with, and the complexity comes from picking the right choice now, and picking the right choice at the next intersection, and the intersection after that - stringing together a sequence of actions. This is in contrast to the type of complexity shown in this picture, where the complexity comes from partial observability: we can't see through the fog where the possible paths are, we can't see the results of our actions, and even the actions themselves are not known. This type of complexity will be covered in a later unit.

Here's an example of a problem. This is a route-finding problem where we're given a start city, in this case Arad, and a destination, Bucharest, the capital of Romania, in this corner of the map. The problem then is to find a route from Arad to Bucharest. The actions that the agent can execute are driving from one city to the next along one of the roads shown on the map. The question is: is there a solution that the agent can come up with, given the knowledge shown here, to the problem of driving from Arad to Bucharest?

2. What is a Problem?

And the answer is no. There is no solution that the agent can come up with, because Bucharest doesn't appear on the map, and so the agent doesn't know any actions that can arrive there. So let's give the agent a better chance. Now we've given the agent the full map of Romania. To start, he's in Arad, and the destination - or goal - is Bucharest. And the agent is given the problem of coming up with a sequence of actions that will arrive at the destination. Now, is it possible for the agent to solve this problem? And the answer is yes. There are many routes or sequences of actions that will arrive at the destination. Here is one of them: starting out in Arad, taking this step first, then this one, then this one, then this one, and then this one to arrive at the destination. So that would count as a solution to the problem - a sequence of actions, chained together, that is guaranteed to get us to the goal.
Definition of a problem.
Now let's formally define what are problem looks like. A problem can be broken down into a number of components.
First, the initial state that the agent starts out with. In our route finding problem, the initial state was the agent being in the city of Arad. Next, a function - Actions - that takes a state as input and returns a set of possible actions that the agent can extcute when the agent is in this state. ACTIONS (s) {a,a2,a3...} In some problems, the agent will have the same actions available in all states and in other problems, he'll have different actions dependent on this state. In the route finding problem, the actions are dependent of the state. When we've in our city, we can take the routes to the neighboring cities - but we can't go to any other cities. Next we have a function called Result, which takes, as input, a state and an action and delivers, as its output, a new state. So, for example, if the agent is in the city Arad, and takes - that would be the state - and takes the action of driving along Route E-671 towards Timisoara, then the result of applying that action in that state would be the new state - then the result of applying that action in that state would be the new state - where the agent is in the city of Timisoara. Next, we need a function called Goal Test, which takes a state and returns a Boolean value - true or falce - telling us if this state is a goal or not. In a route-finding problem, the only goal would be being in the destination city - the city of Bucharest - and all the other states would return false for the Goal Test. And finally, we need one more thing which is a Path Cost function - which takes a path, a sequence of state/action transitions, and returns a number, which is the cost of that path. Now, for the most of the problem's we'll deal with, we'll make the Past Cost function be additive so that the cost of the path is just the sum of the costs of the individual steps. And so we'll implement this Path Cost function, in terms of a Step Cost function. 
The Step Cost function takes a state, an action, and the resulting state from that action, and returns a number - n - which is the cost of that action. In the route-finding example, the cost might be the number of miles traveled or maybe the number of minutes it takes to get to the destination.
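The components just listed can be sketched as a small class. The road map and distances below are a simplified, illustrative fragment, and the lambdas are hypothetical stand-ins for the functions described above:

```python
class Problem:
    """A problem: initial state, ACTIONS(s), RESULT(s, a),
    GOAL-TEST(s), and an additive path cost built from STEP-COST."""
    def __init__(self, initial, actions, result, goal_test, step_cost):
        self.initial = initial
        self.actions = actions        # actions(s) -> set of actions
        self.result = result          # result(s, a) -> new state
        self.goal_test = goal_test    # goal_test(s) -> True/False
        self.step_cost = step_cost    # step_cost(s, a, s2) -> number

    def path_cost(self, action_sequence):
        """Additive path cost: the sum of the step costs along the path."""
        total, state = 0, self.initial
        for a in action_sequence:
            s2 = self.result(state, a)
            total += self.step_cost(state, a, s2)
            state = s2
        return total

# A tiny route-finding instance (distances in km, illustrative only).
roads = {"Arad": {"Zerind": 75, "Sibiu": 140},
         "Zerind": {"Oradea": 71}, "Oradea": {}, "Sibiu": {}}
problem = Problem(
    initial="Arad",
    actions=lambda s: set(roads[s]),
    result=lambda s, a: a,            # driving to city a puts us in city a
    goal_test=lambda s: s == "Bucharest",
    step_cost=lambda s, a, s2: roads[s][a],
)
cost = problem.path_cost(["Zerind", "Oradea"])   # 75 + 71 = 146
```

Note how the path cost of Arad to Oradea via Zerind is just the sum of the two step costs, as in the additive definition above.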

3. Example: Route Finding

Now let's see how the definition of a problem maps onto the route-finding domain. First, the initial state was given - let's say we start off in Arad - and for the goal test, let's say that the state of being in Bucharest is the only state that counts as a goal, and all the other states are not goals. Now the set of all of the states here is known as the state space, and we navigate the state space by applying actions. The actions are specific to each city, so when we are in Arad, there are three possible actions: to follow this road, this one, or this one. And as we follow them, we build paths, or sequences of actions. So just being in Arad is the path of length zero, and now we could start exploring the space and add in this path of length one, this path of length one, and this path of length one. We could add in another path here of length two and another path here of length two. Here is another path of length two. Here is a path of length three. Another path of length two, and so on.

Now at every point, we want to separate the states out into three parts. First, the ends of the paths - the farthest paths that have been explored - we call the frontier. And so the frontier in this case consists of these states that are the farthest out we have explored. And then to the left of that in this diagram, we have the explored part of the state space. And then off to the right, we have the unexplored part. So let's write down those three components: we have the frontier, we have the unexplored region, and we have the explored region.

One more thing: in this diagram we have labeled the step cost of each action along the route. So the step cost of going from Neamt to Iasi would be 87, corresponding to a distance of 87 kilometers, and the path cost is just the sum of the step costs. So the cost of the path going from Arad to Oradea would be 71 plus 75.

4. Tree Search

Now let's define a function for solving problems. It's called Tree Search because it superimposes a search tree over the state space. Here's how it works: it starts off by initializing the frontier to be the path consisting of only the initial state, and then it goes into a loop in which it first checks to see whether we still have anything left in the frontier. If not, we fail - there can be no solution. If we do have something, then we make a choice. Tree Search is really a family of functions, not a single algorithm, which depends on how we make that choice, and we'll see some of the options later. We go ahead and choose one of the paths on the frontier and remove that path from the frontier. We find the state which is at the end of the path, and if that state is a goal, then we're done - we found a path to the goal. Otherwise, we do what's called expanding that path: we look at all the actions from that state, and we add to the path each action and the result of that action; so we get a new path that has the old path, the action, and the result of that action, and we stick all of those paths back onto the frontier.

Now Tree Search represents a whole family of algorithms, and where you get the family resemblance is that they're all looking at the frontier, popping items off and checking the goal test; but where you get the difference is right here, in the choice of how you're going to expand the next item on the frontier - which path do we look at first? And we'll go through different sets of algorithms that make different choices for which path to look at first.
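The Tree Search loop just described can be sketched as follows, with the family-defining choice passed in as a function. The map fragment is a simplified, invented stand-in for the Romania map:

```python
def tree_search(initial, actions, result, goal_test, choose):
    """Tree Search: keep a frontier of paths; `choose` picks which path
    to expand next and defines the family member."""
    frontier = [[initial]]                    # paths, each a list of states
    while frontier:
        path = choose(frontier)               # the family-defining choice
        frontier.remove(path)
        state = path[-1]                      # state at the end of the path
        if goal_test(state):
            return path                       # found a path to the goal
        for a in actions(state):              # expand the path
            frontier.append(path + [result(state, a)])
    return None                               # frontier empty: failure

# Breadth-first search: always choose a shortest path on the frontier.
shortest_first = lambda frontier: min(frontier, key=len)

# Simplified road map fragment (neighbor lists, illustrative only).
roads = {"Arad": ["Zerind", "Sibiu"], "Zerind": ["Arad", "Oradea"],
         "Sibiu": ["Arad", "Fagaras"], "Fagaras": ["Bucharest"],
         "Oradea": [], "Bucharest": []}
path = tree_search("Arad", lambda s: roads[s], lambda s, a: a,
                   lambda s: s == "Bucharest", shortest_first)
# path == ["Arad", "Sibiu", "Fagaras", "Bucharest"]
```

Notice that nothing stops this search from expanding the redundant path Arad, Zerind, Arad - exactly the backtracking issue discussed next.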
The first algorithm I want to consider is called Breadth-First Search. Now, it could be called shortest-first search, because what it does is always choose from the frontier one of the paths that hasn't been considered yet that's the shortest possible. So how does it work? Well, we start off with the path of length 0, starting in the start state, and that's the only path in the frontier, so it's the shortest one, so we pick it, and then we expand it, and we add in all the paths that result from applying all the possible actions. So now we've removed this path from the frontier, but we've added in 3 new paths: this one, this one, and this one. Now we're in a position where we have 3 paths on the frontier, and we have to pick the shortest one. In this case all 3 paths have the same length, length 1, so we break the tie at random or using some other technique, and let's suppose that in this case we choose this path from Arad to Sibiu. Now the question I want you to answer is: once we remove that from the frontier, what paths are we going to add next? So show me, by checking off the cities that end the paths, which paths are going to be added to the frontier.

5. Tree Search Continued

The answer is that in Sibiu, the action function gives us 4 actions corresponding to traveling along these 4 roads, so we have to add in paths for each of those actions. One of those paths goes here, another path continues from Arad and goes out here, the third path continues out here, and then the fourth path goes from here, from Arad to Sibiu, and then backtracks to Arad. Now, it may seem silly and redundant to have a path that starts in Arad, goes to Sibiu, and returns to Arad. How can that help us get to our destination in Bucharest? But we can see, if we're dealing with a tree search, why it's natural to have this type of formulation and why the tree search doesn't even notice that it's backtracked. What the tree search does is superimpose on top of the state space a tree of searches, and the tree looks like this. We start off in state A, and in state A there were 3 actions, so we get these paths going to Z, S, and T. And from S there were 4 actions, so that gave us paths going to O, F, R, and A, and then the tree would continue on from here. We'd take one of the next items, and we'd expand it and continue on. But notice that we returned to the A state in the state space; in the tree, it's just another item in the tree. Now, here's another representation of the search space, and what's happening is, as we start to explore the state space, we keep track of the frontier, which is the set of states that are at the ends of the paths that we haven't explored yet, and behind that frontier is the set of explored states, and ahead of the frontier is the set of unexplored states. Now, the reason we keep track of the explored states is that when we want to expand and we find a duplicate - so, say, when we expand from here, if we pointed back to state T - if we hadn't kept track of that, we would have to add in a new state for T down here.
But because we've already seen it and we know that this is actually a regressive step into the already explored states, we don't need to add it again.

6. Graph Search

Now let's see how to modify the Tree Search function to make it be a Graph Search function, to avoid those repeated paths. What we do is start off by initializing a set called the explored set, of states that we have already explored. Then, when we consider a new path, we add the new state to the set of already explored states, and when we are expanding the path and adding in new states to the end of it, we don't add a new state if we have already seen it in either the frontier or the explored set. Now, back to Breadth-First Search. Let's assume we are using the Graph Search, so that we have eliminated the duplicate paths. Arad is crossed off the list. The path that goes from Arad to Sibiu and back to Arad is removed, and we are left with these one, two, three, four, five possible paths. Given 5 paths, show me which ones are candidates to be expanded next by the Breadth-First Search algorithm.
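As a sketch (hypothetical helper names, same problem interface as before), the modification amounts to a few extra lines around the tree-search loop:

```python
from collections import deque

def graph_search(initial, actions, result, goal_test):
    frontier = deque([[initial]])
    frontier_states = {initial}     # states at the ends of frontier paths
    explored = set()                # the explored set described above
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        frontier_states.discard(state)
        explored.add(state)         # mark this state as explored
        if goal_test(state):
            return path
        for action in actions(state):
            nxt = result(state, action)
            # Skip states already seen in the frontier or the explored set.
            if nxt not in explored and nxt not in frontier_states:
                frontier.append(path + [nxt])
                frontier_states.add(nxt)
    return None

# A cyclic demo: tree search could loop A-B-A-B-..., graph search cannot.
demo = {'A': ['B'], 'B': ['A', 'C'], 'C': []}
path = graph_search('A', demo.get, lambda s, a: a, lambda s: s == 'C')
```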

7. Breadth First Search 1

And the answer is that Breadth-First Search always considers the shortest paths first, and in this case there's 2 paths of length 1: the paths from Arad to Zerind and from Arad to Timisoara, so those would be the 2 paths that would be considered. Now, let's suppose that the tie is broken in some way and we chose this path from Arad to Zerind. Now we want to expand that node. We remove it from the frontier and put it in the explored list, and now we say, "What paths are we going to add?" So check off, at the ends of the paths, the cities that we're going to add.

8. Breadth First Search 2

In this case, there's nothing to add because, of the 2 neighbors, 1 is in the explored list and 1 is in the frontier, and if we're using graph search, then we won't add either of those.

9. Breadth First Search 3

So we move on; we look for another shortest path. There's one path of length 1, so we look at that path, we expand it, add in this path, put that one on the explored list, and now we've got 3 paths of length 2. We choose 1 of them, and let's say we choose this one. Now, my question is: show me which states we add to the frontier, and tell me whether we're going to terminate the algorithm at this point because we've reached the goal, or whether we're going to continue.

10. Breadth First Search 4

The answer is that we add 1 more path, the path to Bucharest. We don't add the path going back because it's in the explored list, but we don't terminate yet. True, we have added a path that ends in Bucharest, but the goal test isn't applied when we add a path to the frontier. Rather, it's applied when we remove that path from the frontier, and we haven't done that yet.

11. Breadth First Search 5

Now, why doesn't the general tree search or graph search algorithm stop when it adds a goal node to the frontier? The reason is that it might not be the best path to the goal. Now, here we found a path of length 2, and we added a path of length 3 that reached the goal. The general graph search or tree search doesn't know that there couldn't be some other path we could expand that would have a distance of, say, 2.5. But there's an optimization that can be made: if we know we're doing Breadth-First Search, then we know there's no possibility of a path of length 2.5, and we can change the algorithm so that it checks states as soon as they're added to the frontier rather than waiting until they're expanded. In that case, we can write a specific Breadth-First Search routine that terminates early and gives us a result as soon as we add a goal state to the frontier. Breadth-First Search will find this path that ends up in Bucharest, and if we're looking for the shortest path in terms of number of steps, Breadth-First Search is guaranteed to find it. But if we're looking for the shortest path in terms of total cost, by adding up the step costs, then it turns out that this path is shorter than the path found by Breadth-First Search. So let's look at how we could find that path.
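A breadth-first-specific routine with that early goal test might look like this sketch (the road map below is a simplified fragment of the Romania example, with the city names and connections assumed from the lecture's map):

```python
from collections import deque

def breadth_first_search(start, neighbors, goal_test):
    if goal_test(start):
        return [start]
    frontier = deque([[start]])
    reached = {start}                  # frontier plus explored states
    while frontier:
        path = frontier.popleft()
        for nxt in neighbors(path[-1]):
            if nxt not in reached:
                if goal_test(nxt):     # test on add, not on removal:
                    return path + [nxt]  # safe only for breadth-first
                reached.add(nxt)
                frontier.append(path + [nxt])
    return None

roads = {'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
         'Zerind': ['Arad', 'Oradea'],
         'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
         'Timisoara': ['Arad'],
         'Oradea': ['Zerind', 'Sibiu'],
         'Fagaras': ['Sibiu', 'Bucharest'],
         'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
         'Pitesti': ['Rimnicu Vilcea', 'Bucharest'],
         'Bucharest': []}
path = breadth_first_search('Arad', roads.get, lambda c: c == 'Bucharest')
```

On this fragment it finds the fewest-steps path Arad, Sibiu, Fagaras, Bucharest, which, as noted above, need not be the cheapest one in step costs.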

12. Uniform Cost Search

An algorithm that has traditionally been called uniform-cost search, but could be called cheapest-first search, is guaranteed to find the path with the cheapest total cost. Let's see how it works. We start out as before in the start state, and we pop that empty path off, move it from the frontier to explored, and then add in the paths out of that state. As before, there will be 3 of those paths. And now, which path are we going to pick next in order to expand, according to the rules of cheapest-first?
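A sketch of cheapest-first search, using a priority queue ordered by total path cost (the step costs below are assumed from the lecture's Romania map, again simplified to a fragment):

```python
import heapq

def uniform_cost_search(start, successors, goal_test):
    frontier = [(0, [start])]            # priority queue of (cost, path)
    explored = set()
    while frontier:
        cost, path = heapq.heappop(frontier)  # cheapest path first
        state = path[-1]
        if state in explored:
            continue
        explored.add(state)
        if goal_test(state):             # goal test applied on removal
            return cost, path
        for nxt, step in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step, path + [nxt]))
    return None

costs = {'Arad': [('Zerind', 75), ('Sibiu', 140), ('Timisoara', 118)],
         'Zerind': [], 'Timisoara': [],
         'Sibiu': [('Fagaras', 99), ('Rimnicu Vilcea', 80)],
         'Fagaras': [('Bucharest', 211)],
         'Rimnicu Vilcea': [('Pitesti', 97)],
         'Pitesti': [('Bucharest', 101)],
         'Bucharest': []}
result = uniform_cost_search('Arad', costs.get, lambda c: c == 'Bucharest')
```

Here the cheapest route found goes through Rimnicu Vilcea and Pitesti with total cost 140 + 80 + 97 + 101 = 418, beating the fewer-steps route through Fagaras, which costs 140 + 99 + 211 = 450.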


Homework 1
1. Peg Solitaire
2. Loaded Coin
3. Maze
4. Search Tree
5. Search Tree 2
6. Search Network
7. A* Search

1. Peg Solitaire

This is a question about peg solitaire. In peg solitaire, a single player faces the following kind of board. Initially, all positions are occupied by pegs except for the center position. You can find more information on peg solitaire at the following URL.
[ --
I wish to know whether this game is partially observable, please say yes or no.
I wish to know whether it is stochastic. Please say yes if it's stochastic and no if it's deterministic.
Let me know if it's continuous, yes or no, and let me know if it's adversarial, yes or no.

2. Loaded Coin

I am going to ask you about the problem of learning about a loaded coin. A loaded coin is a coin that, if you flip it, might have a non-0.5 chance of coming up heads or tails. Fair coins always come up 50% heads or tails. Loaded coins might come up, for example, with a 0.9 chance of heads and a 0.1 chance of tails. Your task will be to understand, from coin flips, whether a coin is loaded, and if so, with what probability. I don't want you to solve the problem, but I want you to answer the following questions:
Is it partially observable? Yes or no. Is it stochastic? Yes or no. Is it continuous? Yes or no.

3. Maze

Let's talk about the problem of finding a path through a maze. Let me draw a maze. Suppose you wish to find the path from the start to your goal. I don't want you to solve this problem. Rather, I want you to tell me whether it's partially observable. Yes or no. Is it stochastic? Yes or no. Is it continuous? Yes or no.
And finally, is it adversarial? Yes or no.

4. Search Tree

This is a search question. Suppose we are given the following search tree. We are searching from the top, the start node, to the goal, which is over here. Assume we expand from left to right. Tell me how many nodes are expanded in Breadth-First Search if we expand from left to right, counting the start node and the goal node in your answer. And give me the same answer for Depth-First Search. Now, let's assume you're going to search from right to left. How many nodes would we now expand in Breadth-First Search, and how many do we expand in Depth-First Search?

5. Search Tree 2

Another search problem: consider the following search tree, where this is the start node. Now, assume we search from left to right. I would like you to tell me the number of nodes expanded for Breadth-First Search and Depth-First Search. Please do count the start and the goal node, and please give me the same numbers for right-to-left search, for Breadth-First and Depth-First.

6. Search Network

This is another search problem. Let's assume we have a search graph. It isn't quite a tree but looks like this. Obviously, in this structure we can reach nodes through multiple paths. So let's assume that our search never expands the same node twice. Let's also assume this start node is on top, we search down, and this over here is our goal node. So for left-to-right search, tell me how many nodes breadth-first would expand; do count the start and goal node in the final answer. Give me the same result for a depth-first search, again counting the start and the goal node in your answer. And again, give me your answers for breadth-first and for depth-first in the right-to-left search paradigm.

7. A* Search

Let's talk about A* search. Let's assume we have the following grid. The start state is right here, and the goal state is right here. And just for convenience, I will give each column here a little letter: A, B, C, D. Let me draw a heuristic function. Please take a look for a moment and tell me whether this heuristic function is admissible. Check here if yes and here if no. Which one is the first node A* would expand? B1 or A2? What's the second node to expand? B1, C1, A2, A3, or B2? And finally, what is the third node to expand? D1, C2, B3, or A4?


3. Probability in AI
1. Introduction
2. Probability/Coin Flip
3. Coin Flip 2
4. Coin Flip 3
5. Coin Flip 4
6. Coin Flip 5
7. Probability Summary
8. Dependence
9. What We Learned
10. Weather
11. Weather 2
12. Weather 3
13. Cancer
14. Cancer 2
15. Cancer 3
16. Cancer 4
17. Bayes Rule
18. Bayes Network
19. Computing Bayes Rule
20. Two Test Cancer
21. Two Test Cancer 2
22. Conditional Independence
23. Conditional Independence 2
24. Absolute and Conditional
25. Confounding Cause
26. Explaining Away
27. Explaining Away 2
28. Explaining Away 3
29. Conditional Dependence
30. General Bayes Net
31. General Bayes Net 2
32. General Bayes Net 3
33. Value of a Network
34. D-Separation
35. D-Separation 2
36. D-Separation 3
37. Congratulations!
4. Probabilistic Inference
Homework 2

3. Probability in AI

So the next units will be concerned with probabilities, and particularly with structured probabilities using Bayes networks. This is some of the most involved material in this class. And since this is a Stanford-level class, you will find out that some of the quizzes are actually really hard. So as you go through the material, I hope the hardness of the quizzes won't discourage you; it should really entice you to take a piece of paper and a pen and work them out. Let me give you a flavor of a Bayes network using an example. Suppose you find in the morning that your car won't start. There are many possible causes. One is that your battery is flat. Even for a flat battery there are multiple causes. One is that it's just plain dead, and one is that the battery is okay but it's not charging. The reason why a battery might not charge is that the alternator might be broken or the fan belt might be broken. If you look at this influence diagram, also called a Bayes network, you'll find there are many different ways to explain that the car won't start. And a natural question you might have is, "Can we diagnose the problem?" One diagnostic tool is a battery meter, which may increase or decrease your belief that the battery may be the cause of your car failure. You might also know your battery's age. Older batteries tend to go dead more often. And there are many other ways to look at reasons why the car might not start. You might inspect the lights, the oil light, the gas gauge. You might even dip into the engine to see what the oil level is with a dipstick. All of those relate to alternative reasons why the car might not be starting, like no oil, no gas, the fuel line might be blocked, or the starter may be broken. And all of these can influence your measurements, like the oil light or the gas gauge, in different ways. For example, a flat battery would have an effect on the lights. It might have an effect on the oil light and on the gas gauge, but it won't really affect the oil you measure with the dipstick.
That is affected by the actual oil level, which also affects the oil light. Gas will affect the gas gauge, and of course without gas the car doesn't start. So this is a complicated structure that really describes one way to understand how a car doesn't start. A car is a complex system. It has lots of variables you can't really measure immediately, and it has sensors which allow you to understand a little bit about the state of the car. What the Bayes network does is really assist you in reasoning from observable variables, like the car won't start and the value of the dipstick, to hidden causes, like is the fan belt broken or is the battery dead. What you have here is a Bayes network. A Bayes network is composed of nodes. These nodes correspond to events that you might or might not know, that are typically called random variables. These nodes are linked by arcs, and the arcs suggest that a child of an arc is influenced by its parent, but not in a deterministic way. It might be influenced in a probabilistic way, which means an older battery, for example, has a higher chance of causing the battery to be dead, but it's not clear that every old battery is dead. There is a total of 16 variables in this Bayes network. What the graph structure and associated probabilities specify is a huge probability distribution in the space of all of these 16 variables. If they are all binary, which we'll assume throughout this unit, they can take 2 to the 16th different values, which is a lot. The Bayes network, as we'll find out, is a compact representation of a distribution over this very, very large joint probability distribution of all of these variables. Further, once we specify the Bayes network, we can observe, for example, that the car won't start.
We can observe things like the oil light and the lights and the battery meter and then compute probabilities of hypotheses, like the alternator is broken or the fan belt is broken or the battery is dead. So in this class we're going to talk about how to construct this Bayes network, what the semantics are, and how to reason in this Bayes network to find out about variables we can't observe, like whether the fan belt is broken or not. That's an overview. Throughout this unit I am going to assume that every event is discrete - in fact, it's binary. We'll start with some consideration of basic probability, we'll work our way into some simple Bayes networks, we'll talk about concepts like conditional independence, and then we'll define Bayes networks more generally, move into concepts like D-separation, and start doing parameter counts. Later on, Peter will tell you about inference in Bayes networks, so we won't do this in this unit. I can't overemphasize how important this material is. Bayes networks are used extensively in almost all fields of smart computer systems, in diagnostics, for prediction, for machine learning, and in fields like finance, inside Google, in robotics. Bayes networks are also the building blocks of more advanced AI techniques such as particle filters, hidden Markov models, MDPs and POMDPs, Kalman filters, and many others. These are words that might not sound familiar quite yet, but as you go through the class, I can promise you that you will get to know what they mean. So let's start now at the very, very basics.

2. Probability/Coin Flip

Thrun: So let's talk about probabilities. Probabilities are the cornerstone of AI. They are used to express uncertainty, and the management of uncertainty is really key to many, many things in AI, such as machine learning and Bayes network inference and filtering and robotics and computer vision and so on. So I'm going to start with some very basic questions, and we're going to work our way up from there. Here is a coin. The coin can come up heads or tails, and my question is the following:
Suppose the probability for heads is 0.5. What's the probability for it coming up tails?

3. Coin Flip 2

Let me ask my next quiz. Suppose the probability of heads is a quarter, 0.25. What's the probability of tails?

4. Coin Flip 3

Here's another quiz. What's the probability that the coin comes up heads, heads, heads, three times in a row, assuming that each one of those has a probability of a half and that these coin flips are independent?

5. Coin Flip 4

Now let's flip the coin 4 times, and let's call Xi the result of the i-th coin flip. So each Xi is going to be drawn from heads or tails. What's the probability that all 4 of those flips give us the same result, no matter what it is, assuming that each one of those, identically and independently, has a probability of coming up heads of one half?

6. Coin Flip 5

So here's another one. What's the probability that within the set of X1, X2, X3, and X4 there are at least three heads?

7. Probability Summary

So we just learned a number of things. One is about complementary probability. If an event has a certain probability, p, the complementary event has the probability 1-p. We also learned about independence. If 2 random variables, X and Y, are independent, which you're going to write like this, that means the probability of the joint value that any 2 variables can assume is the product of the marginals. So rather than asking the question, "What is the probability for any combination that these 2 coins or maybe 5 coins could have taken?" we can now look at each coin individually, look at its probability, and just multiply them up.
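Both lessons can be checked with a couple of lines of Python (the numbers reuse the coin quizzes above):

```python
# Complementary probability: P(tails) = 1 - P(heads)
p_heads = 0.25
p_tails = 1 - p_heads          # 0.75

# Independence: the joint is the product of the marginals,
# e.g. three independent fair flips all coming up heads:
p_three_heads = 0.5 * 0.5 * 0.5   # 0.125
```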

8. Dependence

So let me ask you about dependence. Suppose we flip 2 coins. Our first coin is a fair coin, and we're going to denote the outcome by X1. So the chance of X1 coming up heads is half. But now we branch into picking a coin based on the first outcome. So if the first outcome was heads, you pick a coin whose probability of coming up heads is going to be 0.9. The way I word this is by conditional probability: the probability of the second coin flip coming up heads, provided that, or given that, X1, the first coin flip, was heads, is 0.9. The first coin flip might also come up tails, in which case I pick a very different coin. In this case I pick a coin which with 0.8 probability will once again give me tails, conditioned on the first coin flip coming up tails. So my question for you is, what's the probability of the second coin flip coming up heads?

9. What we learned

So, we actually just learned some interesting lessons. The probability of any random variable Y can be written as the probability of Y given that some other random variable X assumes value i, times the probability of X equals i, summed over all possible outcomes i for the (inaudible) variable X. This is called total probability. The second thing we learned has to do with negation of probabilities. We found that the probability of not X given Y is 1 minus the probability of X given Y. Now, you might be tempted to say, "What about the probability of X given not Y? Is this the same as 1 minus the probability of X given Y?" And the answer is absolutely no. That's not the case. If you condition on something that has a certain probability value, you can take the event you're looking at and negate this, but you can never negate your conditional variable and assume these values add up to 1.
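The total probability rule applied to the two-coin experiment from the previous quiz, as a small sketch:

```python
# Total probability: P(H2) = P(H2 | H1) P(H1) + P(H2 | T1) P(T1)
p_h1 = 0.5                # fair first coin
p_h2_given_h1 = 0.9       # second coin if the first came up heads
p_h2_given_t1 = 1 - 0.8   # tails-favoring coin: P(T2 | T1) = 0.8
p_h2 = p_h2_given_h1 * p_h1 + p_h2_given_t1 * (1 - p_h1)
```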

10. Weather

We assume there are sometimes sunny days and sometimes rainy days, and on day 1, which we're going to call D1, the probability of sunny is 0.9. And then let's assume that a sunny day follows a sunny day with 0.8 chance, and a rainy day follows a sunny day with - well -

11. Weather 2

A sunny day follows a rainy day with 0.6 chance, and a rainy day follows a rainy day - please give me your number.

12. Weather 3

So, what are the chances that D2 is sunny? Suppose the same dynamics apply from D2 to D3, so just replace the D1s over here with D2s over there. That means the transition probabilities from one day to the next remain the same. Tell me, what's the probability that D3 is sunny?
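The day-to-day computation is just the total probability rule applied repeatedly; a sketch (the 0.8 and 0.6 transition probabilities are the ones from the quiz above):

```python
def p_sunny_tomorrow(p_sunny_today, p_s_after_s=0.8, p_s_after_r=0.6):
    # Total probability over today's two possible weathers.
    return p_s_after_s * p_sunny_today + p_s_after_r * (1 - p_sunny_today)

p_d1 = 0.9                       # P(D1 = sunny)
p_d2 = p_sunny_tomorrow(p_d1)    # P(D2 = sunny)
p_d3 = p_sunny_tomorrow(p_d2)    # P(D3 = sunny)
```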


Homework 2

1. Bayes Rule

Given the following Bayes network with P of A equal to 0.5, P of B given A equal to 0.2, and P of B given not A equal to 0.8, calculate the following probability.

2. Simple Bayes Net

Consider a network of the following type: a variable, A, that is binary connects to three variables, X1, X2, and X3, that are also binary. The probability of A is 0.5, and for all variables Xi we have that the probability of Xi given A is 0.2, and the probability of Xi given not A equals 0.6. I would like to know from you the probability of A given that we observed X1, X2, and not X3. Notice that these variables over here are conditionally independent given A.

3. Simple Bayes Net 2

Let us consider the same network again. I would like to know the probability of X3 given that I observed X1.

4. Conditional Independence

In this next homework assignment I will be drawing you a Bayes network and will ask you some conditional independence questions.
Is B conditionally independent of C?  And say yes or no.
Is B conditionally independent of C given D?  And say yes or no.
Is B conditionally independent of C given A?  And say yes or no.
Is B conditionally independent of C given A and D?  And say yes or no.

5. Conditional Independence 2

Consider the following network. I would like to know whether the following statements are true or false. C is conditionally independent of E given A.
B is conditionally independent of D given C and E. A is conditionally independent of C given E. And A is conditionally independent of C given B. Please check yes or no for each of these questions.

6. Parameter Count

In my final question I'll look at the exact same network as before, but I would like to know the minimum number of numerical parameters (that is, the values that define probabilities and conditional probabilities) necessary to specify the joint distribution of all 5 variables.
