What is probability | Expected Values, Frequency Distribution, Complement

  • Hey, everyone!

  • Life is filled with uncertain events and often we must consider the possible outcomes before

  • deciding.

  • We ask ourselves questions like “What is the chance of success?” and “What is the

  • probability that we fail?” to determine whether the risk is worth taking.

  • Many CEOs need to make huge decisions when investing in their research and development

  • departments or contemplating buyouts or mergers.

  • By using probability and statistical data, they can predict how likely each outcome is

  • and make the right call for their firm.

  • Some of you might be wondering: “What is this probability we are talking about?”.

  • Essentially, probability is the chance of something happening.

  • A more academic definition for this would be “the likelihood of an event occurring”.

  • The word event has a specific meaning when talking about probabilities.

  • Simply put, an event is a specific outcome or a combination of several outcomes.

  • These outcomes can be pretty much anything - getting Heads when flipping a coin, rolling

  • a 4 on a six-sided die or running a mile in under 6 minutes.

  • Take flipping a coin for example.

  • There isn’t only one single probability involved since there are two possible outcomes:

  • getting heads or getting tails.

  • That means we have two possible events and need to assign probabilities to each one.

  • When dealing with uncertain events we are seldom satisfied by simply knowing whether

  • an event is likely or unlikely.

  • Ideally, we want to be able to measure and compare probabilities in order to know which

  • event is relatively more likely.

  • To do so, we express probabilities numerically.

  • Even though we can express probabilities as percentages or fractions, conventionally we

  • write them out using real numbers between 0 and 1.

  • So, instead of using 20% or one fifth, we prefer 0.2.

  • All right!

  • Now let us briefly talk about interpreting these probability values.

  • Having a probability of ‘1’ expresses absolute certainty of the event occurring

  • and a probability of ‘0’ expresses absolute certainty of the event NOT occurring.

  • You probably figured this out, but higher probability values indicate a higher likelihood.

  • Okay!

  • As you can imagine, most events we are interested in would have a probability other than 0 and

  • 1.

  • So, values like 0.2, 0.5, and 0.66 are what we generally expect to see.

  • Even without knowing any of this, you can tell some events are more likely than others.

  • For instance, your chance of winning the lottery isn’t as great as winning a coin toss.

  • That’s why you can think of probability as a field that is about quantifying exactly

  • how likely each of those events is on its own.

  • So how about we start right away?

  • Let’s get into it!

  • Generally, the probability of an event A occurring, denoted P of A, is equal to the number of

  • preferred outcomes over the total number of possible outcomes.

  • By preferred we mean outcomes that we want to see happen.

  • A different term people use for such outcomes is “favourable”.

  • Similarly, “sample space” is a term used to describe all possible outcomes.

  • Going forward, we shall use the respective terms interchangeably.

  • We will go through several examples to ensure you understand the notion well.
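
Before the examples, here is a minimal Python sketch of the ratio just described; the helper name `probability` and the use of `Fraction` are our own choices for illustration, not something from the lecture.

```python
from fractions import Fraction

def probability(preferred, total):
    """P(A) = number of preferred (favourable) outcomes / total number of possible outcomes."""
    return Fraction(preferred, total)

# Flipping a fair coin and wanting Heads: 1 preferred outcome out of 2 possible.
print(probability(1, 2))          # 1/2
print(float(probability(1, 2)))   # 0.5
```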

  • Say, event A is flipping a coin and getting Heads.

  • In this case, Heads is our only preferred outcome.

  • Assuming the coin doesn’t just somehow stay in the air indefinitely, there are only 2

  • possible outcomesheads or tails.

  • This means that our probability would be a half, so we write the following:

  • P of getting Heads, equals one half, which equals 0.5.

  • All right!

  • Now, imagine we have a standard six-sided die and we want to roll a 4.

  • Once again, we have a single preferred outcome, but this time we have a greater number of

  • total possible outcomes – 6.

  • Therefore, the probability of this event would look as follows:

  • P of rolling 4 equals: one sixth, or approximately 0 point one six seven.

  • Great!

  • Events can be simple, or a bit more complex.

  • For example, what if we wanted to roll a number divisible by 3?

  • That means we need to get either a 3 or a 6 so the number of preferred outcomes becomes

  • two.

  • However, the total number of possible outcomes stays the same since the die still has 6 sides.

  • Therefore, we conclude that the probability of rolling a number divisible by 3 equals:

  • 2 over 6, which is approximately 0.33.

  • So far so good!
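
As a quick check of the two die examples, added for reference, the snippet below counts the favourable faces directly from the sample space; the variable names are illustrative.

```python
# Sample space for one roll of a standard six-sided die.
sample_space = [1, 2, 3, 4, 5, 6]

# Event 1: rolling a 4 -> one favourable outcome.
favourable_four = [face for face in sample_space if face == 4]
print(len(favourable_four) / len(sample_space))   # 0.1666... i.e. roughly 0.167

# Event 2: rolling a number divisible by 3 -> faces 3 and 6.
favourable_div3 = [face for face in sample_space if face % 3 == 0]
print(len(favourable_div3) / len(sample_space))   # 0.3333... i.e. roughly 0.33
```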

  • Note that the probability of two independent events occurring at the same time, is equal

  • to the product of all probabilities of the individual events.

  • For instance, the likelihood of getting the Ace of Spades equals the probability of getting

  • an Ace, times the probability of getting a spade.
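
To illustrate the multiplication rule for independent events, here is a small sketch (our own construction, not from the lecture) that builds a 52-card deck and compares the direct count of the Ace of Spades with the product of P(Ace) and P(Spade).

```python
from fractions import Fraction

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = [(rank, suit) for rank in ranks for suit in suits]   # 52 cards

p_ace = Fraction(sum(1 for r, s in deck if r == "A"), len(deck))          # 4/52 = 1/13
p_spade = Fraction(sum(1 for r, s in deck if s == "spades"), len(deck))   # 13/52 = 1/4
p_ace_of_spades = Fraction(sum(1 for r, s in deck if r == "A" and s == "spades"), len(deck))

print(p_ace * p_spade)     # 1/52
print(p_ace_of_spades)     # 1/52 -- the product rule matches the direct count
```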

  • Expected values represent what we expect the outcome to be if we run an experiment many

  • times.

  • To fully grasp the concept, we must first explain what an experiment is.

  • Okay!

  • Imagine we don’t know the probability of getting heads when flipping a coin.

  • We are going to try to estimate it ourselves, so we toss a coin several times.

  • After doing one flip and recording the outcome we complete a trial.

  • By completing multiple trials, we are conducting an experiment.

  • For example, if we toss a coin 20 times and record the 20 outcomes, that entire process

  • is a single experiment with 20 trials.

  • All right!

  • The probabilities we get after conducting experiments are called experimental probabilities,

  • whereas the ones we introduced earlier were theoretical or true probabilities.

  • Generally, when we are uncertain what the true probabilities are or how to compute them,

  • we like conducting experiments.

  • The experimental probabilities we get are not always equal to the theoretical ones but

  • are a good approximation.

  • For instance, eight out of ten times I go to my local shop, I have to wait in line.

  • Based on my experience, 80% of the time there will be a queue and 20% of the time, there

  • won’t be one.

  • I can try to calculate the true probability, but it would include far too many factors.

  • The experimental probability, on the other hand, is easy to compute and very useful.

  • Okay!

  • The formula we use to calculate experimental probabilities is similar to the formula applied

  • for the theoretical ones earlier in the course.

  • It is simply the number of successful trials divided by the total number of trials.
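
As a hedged sketch of this formula, the snippet below simulates one 20-trial coin-flipping experiment with Python's random module; the experimental probability it prints will vary from run to run, which is exactly the point.

```python
import random

trials = 20
# One experiment: 20 trials, each trial being a single coin flip.
outcomes = [random.choice(["heads", "tails"]) for _ in range(trials)]

successful = outcomes.count("heads")
experimental_p = successful / trials   # number of successful trials / total number of trials
print(experimental_p)                  # often close to the true 0.5, but not guaranteed to equal it
```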

  • Now that we know what an experiment is, we are ready to dive into expected values!

  • The expected value of an event A, denoted E of A, is the outcome we expect to occur

  • when we run an experiment.

  • To clarify any confusion around the definition, let us examine the following example:

  • We want to know how many times we will get a spade if we draw a card 20 times.

  • We always record the value of the card and then return it to the deck before shuffling.

  • For an event with categorical outcomes, like suits, we calculate the expected value by

  • multiplying the theoretical probability of the event, P of A, by the number of trials

  • we carried out, n.

  • We’ve already seen how to compute the true probability of drawing a card from a specific

  • suit.

  • It is equal to one fourth or point twenty-five.

  • If we repeat this action 20 times, the expected value would equal 0.25, times 20, which equals

  • 5.

  • An expected value of 5 means we expect to get a spade 5 times if we run the experiment.

  • However, nothing guarantees us getting a spade EXACTLY 5 times.

  • Realistically, we could get a spade 4 times, 6 times or even 20 times.
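
A small sketch of both points, added for reference: it computes the expected value 0.25 × 20 and then simulates a few 20-draw experiments (drawing with replacement) to show that the actual count only fluctuates around 5.

```python
import random

p_spade = 0.25    # theoretical probability of drawing a spade
n_trials = 20

expected_value = p_spade * n_trials
print(expected_value)                 # 5.0

suits = ["spades", "hearts", "diamonds", "clubs"]
for _ in range(3):
    # Draw a card 20 times, returning it to the deck each time, so the draws stay independent.
    draws = [random.choice(suits) for _ in range(n_trials)]
    print(draws.count("spades"))      # e.g. 4, 6, 5 -- usually near 5, not necessarily exactly 5
```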

  • Now, for numerical outcomes we use a slightly different formula.

  • We take the value for every element in the sample space and multiply it by its probability.

  • Then, we add all of those up to get the expected value.

  • For instance, you are trying to hit a target with a bow and arrow.

  • The target has 3 layers, the outermost one is worth 10 points, the second one is worth

  • 20 points and the bullseye is worth 100.

  • You have practiced enough to always be able to hit the target, but not so much that you

  • hit the centre every time.

  • The probability of hitting each layer is as follows: 0.5 for the outermost, 0.4 for the

  • second and 0.1 for the centre.

  • The expected value for this example would be “0.5 times 10, plus, 0.4 times 20, plus,

  • 0.1 times 100”.

  • This is equal to “5, plus 8, plus 10, or 23”.

  • Wait!

  • We can never get 23 points with a single shot!
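
For reference, here is the archery arithmetic written as a sum of value-times-probability terms; the lists simply restate the numbers from the example.

```python
points = [10, 20, 100]           # outermost layer, second layer, bullseye
probabilities = [0.5, 0.4, 0.1]  # chance of hitting each layer

# Expected value for numerical outcomes: sum of value * probability over the sample space.
expected_points = sum(value * p for value, p in zip(points, probabilities))
print(expected_points)           # 23.0 -- even though no single shot can score 23
```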

  • So why is it important to know what the expected value of an event is?

  • We can use expected values to make predictions about the future based on past data.

  • We frequently make predictions using intervals instead of specific values due to the uncertainty

  • the future brings.

  • Meteorologists often use these when forecasting the weather.

  • They do not know exactly how much snow, rain or wind there is going to be, so they provide

  • us with likely intervals instead.

  • That is why we often hear statements like “Expect between 3 and 5 feet of snow tomorrow

  • morning.”

  • orTemperatures rising up to 90° on Wednesday.”.

  • So far, we have learned that the expected value is used when trying to predict future

  • events.

  • Sometimes the result of the expected value is confusing or doesn’t tell us much.

  • For instance, let us discuss a very famous examplethrowing 2 standard 6-sided dice

  • and adding up the numbers on top.

  • We have 6 options for what the result of the first one could be.

  • Regardless of the number we roll, we still have 6 different possibilities for what we

  • can roll on the second die.

  • That gives us a total of 6, times 6, equals 36 different outcomes for the two rolls.

  • For clarity, we can write out the results in a 6 by 6 table, where we write the sum

  • of the two dice.

  • You can clearly see that we have repeating entries along the secondary diagonal and all

  • diagonals parallel to it.

  • Notice how 7 occurs 6 times in the table.

  • This means we have 6 favourable outcomes.

  • As we already mentioned, there are 36 possible outcomes, so the chance of getting a “7”

  • equals: six, over 36, or just one sixth.
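
A short sketch, added for reference, that enumerates all 36 ordered outcomes of the two rolls and counts the ones summing to 7:

```python
from fractions import Fraction
from itertools import product

# All 36 ordered outcomes for rolling two six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))
print(len(outcomes))                               # 36

favourable = [pair for pair in outcomes if sum(pair) == 7]
print(len(favourable))                             # 6
print(Fraction(len(favourable), len(outcomes)))    # 1/6
```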

  • Let’s also compute the expected value for this event.

  • Since we are dealing with numerical data, we should apply the same formula we used for

  • the archery problem from the last lecture.

  • To do so, we must assign an appropriate probability to each unique entry in the table.

  • Just like with the sum being 7, we do that based on the number of times the number features

  • in the table.

  • If we do so, we are going to get the expected value which ends up being 7.
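
As a quick check of that claim (our own verification, not part of the lecture), the snippet below weighs every possible sum by its probability and adds up the products:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Frequency of each possible sum across the 36 outcomes.
sum_counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

# Expected value: each unique sum times its probability, added up.
expected_sum = sum(value * Fraction(count, 36) for value, count in sum_counts.items())
print(expected_sum)   # 7
```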

  • But how important is this value if the probability associated with it is only one sixth?

  • The sum being equal to 7 might be the most probable answer, but it is still very unlikely

  • to occur.

  • Thus, we cannot reasonably bet on getting a sum of exactly 7.

  • Moreover, even though we are suggesting 7 is the most probable sum, how can you be sure?

  • What we can do is to create a probability frequency distribution.

  • Simply put, a probability frequency distribution is a collection of the probabilities for each

  • possible outcomethat’s how I know that 7 was the most probable sum of two dice.

  • Usually it is expressed with a graph or a table.

  • To understand what a probability frequency distribution looks like, we are going to construct

  • one right now, using the sample space table we already constructed.

  • For each unique sum, we record the number of times it features in the table.

  • This value is known as the frequency of the outcome.

  • For example, getting a sum of 8 in 5 different cases, means that 8 has a frequency of 5.

  • Okay!

  • If we write out all the outcomes in ascending order and the frequency of each one, we construct

  • a frequency distribution table.

  • By examining this table, we can easily see how the frequency changes with the results.

  • Good job!

  • At this point, we have done most of the work!

  • The final step in getting the probability frequency distribution might be the most intuitive

  • one.

  • We need to transform the frequency of each outcome into a probability.

  • Knowing the size of the sample space, we can determine the true probabilities for each

  • outcome.

  • We simply divide the frequency for each possible outcome by the size of the sample space.

  • A collection of all the probabilities for the various outcomes is called a probability

  • frequency distribution.

  • As mentioned earlier, we can express this probability frequency distribution through

  • a table, or a graph.
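
The two steps just described, tallying frequencies and then dividing by the size of the sample space, can be sketched as follows; printing the result as a small table is simply one way to lay the distribution out.

```python
from collections import Counter
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))        # the 36-element sample space
frequencies = Counter(a + b for a, b in outcomes)      # frequency of each possible sum

print("sum  frequency  probability")
for value in sorted(frequencies):
    p = Fraction(frequencies[value], len(outcomes))    # frequency / size of sample space
    print(f"{value:>3}  {frequencies[value]:>9}  {p}")
```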

  • All right!

  • On the graph, we see the probability frequency distribution.

  • The X axis depicts the different possible number of spades we can get, and the Y axis

  • represents the probability of getting each outcome.

  • When making predictions, we generally want our interval to have the highest probability.

  • We can see that the individual outcomes with the highest probability are the ones with

  • the highest bars in the graph.

  • Usually, the highest bars will form around the expected value.

  • Thus, the values around it would also be the values with the highest probability.

  • This suggests that if we want the interval with the highest probability, we should construct

  • it around the expected value.
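
As an illustration of this point using the two-dice distribution constructed earlier (a sketch; the choice of a three-value-wide interval is ours), the interval from 6 to 8 around the expected value 7 carries more probability than a same-width interval further from the centre:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

frequencies = Counter(a + b for a, b in product(range(1, 7), repeat=2))

def interval_probability(low, high):
    """Probability that the sum of two dice falls between low and high, inclusive."""
    favourable = sum(count for value, count in frequencies.items() if low <= value <= high)
    return Fraction(favourable, 36)

print(interval_probability(6, 8))   # 4/9  (that is, 16/36) -- centred on the expected value 7
print(interval_probability(2, 4))   # 1/6  (that is, 6/36)  -- a same-width interval off-centre
```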

  • Before we move on to the next section, we need to talk about the opposite of an event.

  • The term we use in probability theory is the “complement” and we are going to explain

  • why it is so important in the next lecture.

  • Let’s talk about some of the characteristics of probabilities and events.

  • For starters, let’s define what a complement is.

  • Simply put, a complement of an event is everything the event is not.

  • As the name suggests, the complement helps complete the rest of the sample space.

  • To calculate the probability of the complement of an event, we need to set up a few things.

  • For starters, if we add the probabilities of different events, we get their sum of probabilities.

  • Now, if we add up the probabilities of all possible outcomes of an event, we should always get 1.

  • Remember that having a probability of 1 is the same as being 100% certain.

  • We are going to explain why this is true with several examples.

  • Okay!

  • Imagine you are tossing a coin.

  • When it falls, we are guaranteed to get either heads or tails.

  • Therefore, if we account for the sum of all probabilities of getting heads OR tails, we

  • have completely exhausted all possible outcomes.

  • We have accounted for the entire sample space, so we are 100% certain to get one of the two.

  • Since we are certain one of these will occur, the sum of their probabilities should be 1.
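
A tiny check of this statement for the coin and for a six-sided die, where the listed outcomes are exhaustive and do not overlap:

```python
from fractions import Fraction

coin = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
die = {face: Fraction(1, 6) for face in range(1, 7)}

print(sum(coin.values()))   # 1 -- heads or tails exhausts the sample space
print(sum(die.values()))    # 1 -- the six faces exhaust the sample space
```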

  • So, what would it mean if we have a sum of probabilities greater than 1?

  • Recall that probability of 1 expresses absolute certainty.

  • By definition, we cannot be any surer than being absolutely sure, so a probability of

  • 1.5 does not make intuitive sense.

  • Instances where we can get such a sum of probabilities are when some of the assumed outcomes can occur

  • simultaneously.

  • This means we are double-counting some of the actual possible outcomes.

  • Now, another peculiar case is if we end up with a sum of probabilities less than 1.

  • Then we have surely not accounted for one or several possible outcomes.

  • Probability expresses the likelihood of an event occurring, so any probability less than

  • one is not guaranteed to occur.

  • Therefore, there must be some part of the sample space we have not yet accounted for.

  • Great!

  • Before we move on, we want to tell you that all events have complements and we denote

  • them by adding an apostrophe.

  • For example, the complement of the event “A” is denoted as “A apostrophe”.

  • It is also worth noting that the complement of a complement is the event itself, so “A

  • apostrophe, apostrophe” would equal “A”!

  • Now imagine you are rolling a standard 6-sided die and want to roll an even number.

  • The opposite of that would be NOT rolling an even number, which is the same as wanting

  • to roll an odd number.

  • Complements are often used when the event we want to occur is satisfied by many outcomes.

  • For example, you want to know the probability of rolling a 1, 2, 4, 5 or 6.

  • That is the same as the probability of NOT rolling a 3.

  • This concept is extremely useful!

  • We already said that the sum of the probabilities of all possible outcomes equals 1, so you

  • can probably guess how we calculate complements.

  • The probability of the complement equals 1 minus the probability of the event itself.

  • To make sure you understand the notion well, we will look at the example we mentioned earlier.

  • The probability of getting one, two, four, five or six is equal to the sum of the

  • separate probabilities.

  • The likelihood of each outcome is equal to one sixth, so the sum of their probabilities

  • adds up to five sixths.

  • Now, another way of describing getting “one, two, four, five or six” is “not getting

  • a three”.

  • Let us calculate the probability of not getting a 3.

  • This is the complement of getting a 3, so we know the two should add up to 1.

  • Therefore, the probability of not getting a 3 equals 1 minus the probability of getting

  • a 3.

  • We know that P of 3 equals one sixth, so the probability of not getting three is equal

  • to 1 minus one sixth.

  • Therefore, the probability of not getting 3 is five sixths.

  • This shows that the probability of getting one, two, four, five or six is equal to the

  • probability of not getting a three.
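
Finally, a short sketch, added for reference, that checks the two computations above agree exactly:

```python
from fractions import Fraction

p_per_face = Fraction(1, 6)

# Adding the separate probabilities of rolling 1, 2, 4, 5 or 6.
p_not_three_by_sum = sum(p_per_face for _ in [1, 2, 4, 5, 6])

# Using the complement rule: P(not 3) = 1 - P(3).
p_not_three_by_complement = 1 - p_per_face

print(p_not_three_by_sum)          # 5/6
print(p_not_three_by_complement)   # 5/6
```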
