
  • This is the most practical of all the things that we've discussed so far.

  • You could consider it a component of industrial-organizational psychology, but it's a component that has a strong emphasis on individual differences, and so you could consider it, first, an exploration of the practical utility of psychological concepts in predicting real-world phenomena.

  • But you could also consider it part of the validation of the psychometric models of personality that we've described, because whether or not a construct, a hypothetical psychological phenomenon or entity, is real depends to some degree on what it's good for, right? Part of the aim of science is prediction and control.

  • And so it's useful to take a look at important life outcomes among human beings and to determine whether or not you can predict them.

  • Because that's one way of testing.

  • Whether or not the phenomena that you've derived, say, from your statistical analysis, both in the case of fluid intelligence (or intelligence more broadly) and in the case of the Big Five, actually have the capacity to manifest themselves in a meaningful way inside the lab.

  • That's one domain of potential validation, but then outside the lab as well.

  • And so what we're gonna do today is to look at the factors that determine performance in the real world.

  • And so you might think, well, what are some important real world outcomes on which performance might vary?

  • And we're going to stick to those that are rather obvious today.

  • We talked about creative achievement already, for example, but you could think of creative achievement and entrepreneurial achievement; those actually clump together.

  • They're part of the same phenomenon. You could think about academic achievement, and you could think about success in the workplace.

  • And then those could be subdivided, because success in the workplace, for example, needs to be analyzed in terms of success, say, at simple jobs and success at complex jobs; there are different ways of fractionating a phenomenon to increase your capacity to understand it, to measure it, and then to predict it.

  • But those are rough domains where there are self-evident individual differences and where the capacity to predict performance would at least be of some utility. Take performance in the workplace.

  • This is an interesting one.

  • Performance in the workplace is a tricky measure, a tricky issue.

  • I've been working on performance prediction tests for a long time. Because I'm a clinical psychologist and because I have a practical side, I'm often interested in what the utility of psychological measurements might be outside the lab.

  • It's interesting to consider that from an ethical perspective, too.

  • So if you're hiring people, you have two conundrums that lay themselves out in front of you.

  • One is the ethical necessity to give each person a fair chance, and the other is the ethical necessity to place the proper person in the proper position.

  • And you might say: well, you could do that randomly. There are countries in Europe, for example, that have a quasi-random approach to the selection of university students.

  • Holland is like that; they have a pretty open admissions policy now.

  • What that means is that everyone has a chance to go to college.

  • But the downside of it is that the failure rate in the first year, for example, is extraordinarily high. Now, you might say that's a perfectly reasonable price to pay for the open admission and for the opportunity to give everyone a chance.

  • But you could also say, Well, that's a hell of a waste of time for the people who go into the first year of college or university and fail.

  • It's not a pleasant experience for them.

  • It's very expensive in terms of time and resources.

  • And perhaps it would have been better for them and for broader society had they been able to determine beforehand whether they had the set of qualities that was necessary to increase the probability of success.

  • Because if I could tell you, well, you know, you have an 80% chance of success in this domain and only a 20% chance in that domain, you still might want to take the risk in the 20% domain.

  • But you might also think: well, I might as well go off and function where my particular combination of proclivities has the best opportunity to manifest itself, because why not position yourself for success rather than failure?

  • Now, I'm not saying that the ethical conundrum between those two alternatives is something that's easy to, what would you say, map your way through.

  • It's by no means easy, because there are strong arguments to be had on both sides of the equation.

  • But there's still the scientific question that remains.

  • Which is to what degree can you accurately predict people's performance?

  • And to what degree does that reflect positively or negatively on your psychological concepts?

  • And then there's the actual practical utility of potentially offering to schools, to universities, and to workplaces in general the ability to select people with an above-average chance of succeeding.

  • It's even more complicated, say, if you're selecting not so much entry-level employees in a company, because there maybe you lean more towards the possibility of bringing more people in and letting them fail or succeed on the job.

  • But let's say you're replacing a management team at a medium-to-large-sized corporation.

  • You know, if you bring in managers who are incompetent, not only are they going to fail, which is obviously not very good for them, and the failure rate among managers is very high.

  • It exceeds 65%, they figure.

  • I think it's 65% of managers, I believe that's correct, who are at zero or negative net value to their companies.

  • It's something like that; the failure rate in managerial positions is overwhelming.

  • And the problem with bringing someone who isn't competent into a managerial position, or, even worse, an executive position, is that they can wipe out the careers of everyone they're supervising, and in the case of, say, large companies, that can bring the whole damn company down.

  • And so the ethic that people deserve an equal chance, let's say, to fail and succeed isn't a very practical ethic when you're putting someone in a high-demand position, where the consequences of failure can be overwhelming not only for that person but for all the people whose destiny they happen to be involved in determining.

  • And so then you might say: if you could come up with a selection process that would increase the probability of hiring an above-average manager from 50/50 to 60/40 or 70/30, maybe you're actually ethically compelled to use it.

  • And in fact, by law, and this is particularly the case in the United States, you're required to use the most effective, valid, reliable, and non-discriminatory selection process that currently exists in order to select your employees.

  • And one of the things that's going to happen to employers in the next 10 years is that they're going to get a very nasty shock for using interviews, because the data on unstructured interviews indicate quite clearly that they're discriminatory, partly because, for example, if you're tall and assertive and good-looking and charming, then you're much more likely to do well in an interview.

  • But that doesn't necessarily have any bearing whatsoever on the probability that you succeed in the position.

  • And not only that, the predictive validity of unstructured interviews is very low.

  • It's about 0.12, something like that, which means that if you just picked people randomly, you'd have a 50/50 chance of predicting whether someone was going to be a success or a failure; it's just a coin toss.

  • And then, if you used an unstructured interview, you'd get that up to 56/44, which is slightly better, but it's not even close to the accuracy that you could get.

  • For example, if you just used a standard test of conscientiousness, that would give you a correlation of about 0.25 with, say, managerial productivity.

  • And this is an interesting thing to know.

  • It's called the binomial effect size display.

  • An effect size is the magnitude of an effect, right?

  • And it's not an easy thing to get a handle on, but you really need to if you're gonna be a psychologist, because in any study there's an effect size indicator: often a correlation r, or r squared, which is the correlation squared, or Cohen's d, which is the effect size expressed in standard deviations, or something like that.

  • But you kind of have to understand that at a basic level, to understand what statistics actually do.

  • And there's this phenomenon called the binomial effect size display that can help you understand, in an embodied sense, what the magnitude of a correlation means.

  • So here's how it works.

  • Imagine that you have a predictor of 0.20, so the correlation is r = 0.20 between phenomenon one, we'll say conscientiousness, and phenomenon two, workplace performance: a 0.20 correlation.

  • The question might be: how much would you improve your predictive capacity over chance levels if you applied that predictor?

  • And the answer is that r is the difference between the odds.

  • So let me explain that: 0.50/0.50; if you subtract one from the other, you get zero.

  • So the predictability of selection by chance is 0.50 minus 0.50, which equals zero.

  • That's the predictive validity of chance.

  • If you have a predictor of 0.20, which is roughly the low-end estimate for conscientiousness, that changes your odds from 0.50/0.50 (random) to 0.60/0.40, because 0.60 minus 0.40 is 0.20; the correlation coefficient turns out to be the difference between the odds, so it gives you a quick rule of thumb.

  • So, for example, a 0.20 predictor gives you 60/40.

  • A 0.30 predictor gives you 65/35, because 0.65 minus 0.35 is 0.30. And if you have a 0.60 predictor, which is really up on the high end, you really start to push the limits of statistical prediction validity.

  • At that point, that gives you 0.80/0.20, because 0.80 minus 0.20 is 0.60. And you could get a predictor with a correlation coefficient of 0.60, for example, if you took conscientiousness and combined it with a good test of IQ; for predicting performance in complex jobs, you might be able to get up to 0.6.

  • That moves your odds of selecting an above-average person for the position from 0.50/0.50 to 0.80/0.20.

  • So it cuts your failure rate by more than half, bringing it down from 0.50 to 0.20. That's a really good thing to know; it's called the binomial effect size display.

  • It's a really good thing to have in your mind.

  • It's very simple.

  • It's just subtraction, and it gives you some sense of the power of statistical prediction; there's a sketch of the arithmetic below.
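
As a rough illustration, here is a minimal sketch of that rule of thumb in Python. The numbers are the ones used in the lecture, and the mapping assumes the standard BESD setup, where both the predictor and the outcome are split at their medians (above average versus below average).

```python
# Minimal sketch of the binomial effect size display (BESD):
# a correlation r maps to a success/failure split of
# (0.5 + r/2) versus (0.5 - r/2).

def besd(r):
    """Return the (success, failure) proportions implied by correlation r."""
    return 0.5 + r / 2, 0.5 - r / 2

for r in (0.0, 0.20, 0.30, 0.60):
    success, failure = besd(r)
    print(f"r = {r:.2f} -> {success:.0%} / {failure:.0%}")

# r = 0.00 -> 50% / 50%
# r = 0.20 -> 60% / 40%
# r = 0.30 -> 65% / 35%
# r = 0.60 -> 80% / 20%
```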

  • Now the question might be: well, let's say you had a predictor of 0.20, conscientiousness, say.

  • Well, if you square the r, that gives you 4% of the variance.

  • Who the hell cares?

  • 4% of the variance; you've left 96% of the variability between people in terms of their performance unexplained, you might say.

  • Well, why even bother?

  • Well, the answer to that question depends on how much difference in productive output there is between people.

  • Because if there's a tremendous difference in productive output between people, then increasing your ability to predict someone's performance even by some relatively small increment might have massive economic utility.

  • You know, if, let's say, the top 10% of your people are 50 times as productive as the bottom 10%, then shifting your ability to predict upward, so that you have more of those extremely high-performing people and fewer of the extremely low-performing people, might more than pay for itself from an economic perspective, even though your predictor doesn't have that massive an amount of power. And that actually happens to be the case.

  • So back in 1968 there was a guy named Walter Mischel, a social psychologist.

  • He reviewed the personality literature up to that point and concluded that the typical personality measure only predicted the typical performance measure at about 0.2, and that's actually remained relatively stable.

  • I would say it's a little higher than that, probably 0.25, especially if you do things like correct for measurement error and so forth.

  • And what Mischel said was: because it's only 0.25, let's say, you square that and that's only about 6% of the variance.

  • You leave roughly 94% of the phenomenon unexplained.

  • You might as well not even bother measuring personality.

  • And so that actually killed the field of personality, from a psychometric perspective, for about 25 years, really until the early 1990s, when people woke up and thought: wait a minute, what are the typical effect sizes in other domains of prediction?

  • And then they found out that the 0.20 correlation that was typical of personality prediction was actually pretty damn good by social science or health science standards.

  • But it doesn't sound good when you just think about it as an absolute measure, because it leaves 96% of the phenomenon unexplained.

  • But when you compare it to other things that people consider of reasonable magnitude, then it turns out that personality psychologists are doing just fine.

  • And then, also in the 1990s, and I'll show you some of this, there were economic calculations done.

  • And so one of the calculations would be: well, imagine that you took 20 companies and you did a distribution of the productivity of their employees.

  • It's a hard thing to do, because you have to measure their productivity.

  • How the hell do you do that, right?

  • With salespeople, you can measure sales; that's pretty straightforward.

  • With lawyers, you can measure hours billed.

  • There are some occupations where the performance measure is sort of built into the job, but if you're a manager in the middle of a large corporation, how the hell can you tell how productive you are?

  • So there's a measurement problem on the productivity measurement end as well as on the performance prediction end.

  • And it's a very intractable problem.

  • And the way people often do that is by saying: well, let's say you're a mid-level manager in a corporation.

  • How do we determine how productive you are?

  • Well, we might ask you to compare your work productivity with your peers', maybe construct a questionnaire asking about your efficient use of time and so forth, and then we might get your peers to do the same thing to you.

  • We might get your supervisors to do the same thing to you, and your subordinates too, and then aggregate across all those measures and infer that the aggregate opinion actually constitutes a valid measure of productivity.

  • You actually don't know, right, because that assumes that what your peers and your supervisors and so forth are rating is actually related in some positive manner to the bottom line of the company, and you actually can't figure that out.

  • This is actually, I think, why large companies start to become unstable: if there are enough layers between the operations of the people in the tiers of the corporation and the real outcome measure, which is basically profit, because that's what we've got, then the relationship between your activity as a manager and the productivity of the company starts to become increasingly blurred.

  • And that might mean that you're working as hard as you can on something that's actually going to cost the company money.

  • So you would actually be much more productive from a profit perspective if you just didn't go to work at all.

  • And that happens a lot in large corporations, because you never know; especially if there are a lot of steps that have to be undertaken in a process before you can test the product in the market, you have no idea if you're wasting time and resources. You just can't tell.

  • So the performance measurement issue is a very, very complicated one.

  • We haven't talked about it that much, but I'll give you kind of a brief overview of it.

  • Now what you really want to do is have multiple sources of information about performance and aggregate across them.

  • And if you can use real-world measures that are tied to income generation, so much the better.

  • Because you have to use something as your gold standard, right?

  • You have to say at some point: we're going to define this as reality when it comes to performance.

  • And in a free-market economy, roughly what you do there is say that profit is the proxy for productivity, and that isn't the same thing as saying that profit is productivity.

  • That's not the same thing.

  • It's saying that at some point you have to decide what you're going to accept as a measure of productivity because otherwise there's no point in even talking about it.

  • And you can't just not talk about productivity if you're running an organization, because the organization doesn't exist unless it produces something that will keep it going, and generally that happens to be money.

  • So anyways, all of this is very complicated, and I was also curious, because I'm curious, I guess, to find out what would happen if I took a measure that was derived in the lab and then tried to launch it in the actual real-world environment, tried to market and sell it.

  • And that was very informative, because we developed tests, which I'll talk to you about.

  • They were actually pretty good at predicting performance, managerial performance, for example, administrative performance.

  • We got r's of upwards of 0.6, which is really bloody impressive.

  • So we could tell employers: look, if you use our tests, we can increase the probability that you'll hire an above-average employee from 50/50 to 80/20, and the economic benefit of that will be staggering. Staggering.

  • And I'll show you the calculations that enable that sort of prediction to be made. You might think, and this is what you do think if you're naive about producing something of value, that if you can produce something of self-evident economic value, selling it will be a snap. And that is so wrong.

  • You just cannot believe it.

  • So one of the things we found, which was really mind-boggling to me, was that you could make a case that the probability that a company will use a test that predicts performance is inversely related to the accuracy of the test, which basically means that less accurate tests are easier to sell.

  • You think: well, why the hell would that be?

  • How in the world could it possibly be that corporations would rather buy tests that don't work than tests that do work?

  • And that is what they do.

  • Because, really, what they do buy is the Myers-Briggs, right? It sells about a million units a year, and the Myers-Briggs has zero predictive utility with regard to performance prediction.

  • So why do people use it?

  • Here's one reason: it doesn't hurt anybody's feelings.

  • Everybody wins, right?

  • And so then you think, Well, do corporations really care whether or not everybody wins when they're being tested?

  • And the answer to that is yes, much more often than you would think.

  • So we hit all sorts of barriers. One of the problems with tests that work is that most of the people who take them don't do very well on them.

  • And then the other problem is that people aren't good at statistical reasoning at all.

  • They're really, really bad at it.

  • And so, for example, they don't know the difference between a percentage and a percentile.

  • A percentage is, you know, if you get 40% on a test, it means you got 40% of the questions right.

  • If you are at the 40th percentile in the distribution of test scores, it means that you performed better than 40% of the people.

  • That's actually not too bad, right?

  • But you'll think: no, that's not the 40th percentile.

  • That's 40%.

  • And then you'll think that you failed.
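
A small sketch of that distinction, with made-up scores, just to make the two numbers concrete:

```python
# Percentage vs. percentile, with hypothetical scores.

scores = [35, 42, 48, 51, 55, 58, 63, 67, 74, 88]  # percent correct for 10 test-takers
my_score = 55                                      # I answered 55% of the questions right

# Percentile rank: the share of test-takers who scored below me.
percentile = 100 * sum(s < my_score for s in scores) / len(scores)

print(f"percentage correct: {my_score}%")      # 55%: grades my answers
print(f"percentile rank: {percentile:.0f}th")  # 40th: locates me among people
```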

  • And so one of the things we found, for example, when we were marketing the tests to mid-level managers who had some say, at least, on whether or not they would be used, was that the first thing they would say is: I want to do the test.

  • And the thing you say about that is: no, you don't.

  • Because this is derived statistically; you can't validate the test on the basis of your opinion about its applicability in your case.

  • But you can't have that conversation; it isn't gonna go anywhere, because they say: well, I'd never give a test to my employees that I hadn't taken myself.

  • It's like, okay. So then you think: well, you're a typical manager, so you're going to score at the 50th percentile.

  • You are not gonna be happy about that, because you want to score at the 90th percentile, because you confuse percentiles and percentages.

  • And also because you don't notice that if you're doing better than 50% of the managers, that's actually pretty damn good. [Audience question.]

  • Yep, that's a good lead; I'll answer that as I go through this.

  • Okay, so that was off to a dead start right there.

  • And then the other thing we'd find, this was cool too; horrible, but cool.

  • So imagine a large corporation and you go in and you try to make a sales pitch to someone, and maybe they're in HR.

  • And you say to them: look, you could use these tests; if you use them for 100 people, it will increase your bottom line by $2 million a year.

  • You'd think people would be jumping up and down about that.

  • And they say, Well, how much does it cost?

  • And we say: well, it doesn't matter, because the cost-to-benefit ratio is what you should care about, not the cost. But they would want it for, say, $9 a test.

  • It was like: no; if you do this, and I'll show you the mathematics in a bit, each person that you select with this test will bring $20,000 of extra value to your company.

  • You're not getting that for $9.

  • And then they say, Well, wait a minute.

  • We have a budget for testing that's limited to $9 a test. And we say: well, it's gonna produce $30,000 a year in revenue.

  • They say, Well, that revenue goes to another branch of the company, and we won't get any credit for it.

  • We'll just get punished because we've exceeded our budget for selection.

  • I thought: oh yeah, never expected that to come up as an obstacle.

  • So you know.

  • So we had very well designed products.

  • It also took about 90 minutes, because we started with a full-scale neuropsychological assessment.

  • We took tests of dorsolateral prefrontal cortical ability, which had before that been administered by neuropsychologists, and we computerized them.

  • We did that back in 1993.

  • So it was among the first attempts to do this sort of thing.

  • And then we had a 90-minute test battery that also assessed the Big Five.

  • And then you could give that to employees.

  • And, you know, you could produce this 80/20 differential that I described.

  • And then we found out: well, that test was too long.

  • It's like, Well, what do you mean?

  • You're gonna get $30,000 a year per employee if you use this test. It's like: no, people don't want to be tested for that long.

  • Okay, so you want an accurate test that doesn't discriminate against anybody, that makes everyone feel good, that's dirt cheap, and that doesn't take any time at all.

  • It's like, Well, it took us about 15 years to build one of those after we had built the original thing that actually worked.

  • And so even then, the process of trying to introduce it into the workplace was almost impossible.

  • The other thing you find if you're trying to sell to large companies, and this is worth knowing, is that it's great if you can sell to a large company, 'cause that's a large company, man.

  • The potential for the sale is astronomical, but large companies are so slow you cannot believe it.

  • And so it might take you three years.

  • First of all, you have to find who you should talk to.

  • That's impossible, because all the people that you can talk to have no decision-making power whatsoever.

  • So all you do is waste your time talking to them.

  • They'll talk to you.

  • They'll have meetings with you.

  • But it won't matter because they don't have any decision making power.

  • And then you can't get access to the people who have the decision-making power, because they don't let you have access to them, and so that's a huge problem.

  • Even figuring out that those are the people you need to talk to is a huge problem; it might take you five years just to get it through your head that you're wasting your time 95% of the time talking to the wrong people.

  • And then the next thing that happens, this is really comical.

  • So let's say you do make some headway; it takes you three bloody years. You've made a relationship with the person, and they're ready to launch it.

  • Then they have to market it internally and that takes them a year because they have to convince everybody to get on board.

  • And then the corporation reshuffles and the person that you're dealing with disappears.

  • It's like that happened to us.

  • We were marketing to a very large company.

  • We got all the way up to the CEO; this was with the Future Authoring program, and we were ready to launch the damn product.

  • It took us a huge amount of time to get through all the layers, and just the week that we were going to launch it, the CEO resigned, the company's stock crashed, and that was the end of that. Bang. Large companies are like that; they're extremely slow.

  • And the typical duration that a person lasts in a given position, and this is something to consider with regard to your careers, is about four years, so you'll occupy the position you're in for about four years.

  • And then, well, you'll either fail out, move laterally, or move up.

  • And so it's a three year sales cycle minimum.

  • And that's interwoven with the possibility that whoever you're talking to is just going to disappear.

  • And if you sell to small companies, well, they're faster, but they don't have any money, so that's not very helpful.

  • So it's extraordinarily difficult.

  • Even if you produce something of high value, and you have proof that the thing exists, it's extraordinarily difficult to sell it.

  • And so here's another thing to think about.

  • You know, I think I mentioned this to you before.

  • If you write a book, which is virtually impossible, and then get a publisher, which is also virtually impossible, publishers probably have a rejection rate of 99%.

  • So then, if you do manage to publish the book with a reasonable publisher, which you won't, 'cause it's impossible, then you get 8% royalties. That's it.

  • So 92% of your labor goes to the sales, marketing, and distribution end of things.

  • So it's very, very difficult to generate capital as an entrepreneur.

  • And there are innumerable impediments. Here's something for those of you who might be entrepreneurially minded.

  • Another thing that you have to understand is that you can't really create a product and then launch it, because, first of all, you don't know how to do that.

  • You might know how to create the product, but you do not know how to market it or sell it.

  • You don't know how to advertise it.

  • You don't know how to communicate about it.

  • You don't know how to ship it like there's all sorts of things you just don't know.

  • But worse is you don't know if people will use it.

  • And so what the companies that roll out new products on a relatively continual basis do is: they don't develop the product and then go sell it.

  • They continually communicate with their customers about what product the customer is willing to buy next.

  • And then they develop that.

  • And so you have to have your market identified, and then you have to be in continual communication with the market while you develop the product.

  • It's not 'I built a better mousetrap, and the world comes marching to my door.'

  • That isn't how it works. You've got to find out if someone wants to buy your stupid thing.

  • And then the next thing is that even if you have a great thing, you're gonna go talk to people, and they have 50 great things that they might buy, and they're all great, but they're not gonna buy all 50.

  • They might buy one, maybe, and maybe they'll buy it this week, but probably they'll buy it in six months.

  • And they're not gonna buy any of them, no matter how great, unless they're on fire and you're selling water, because they're so busy already, so overwhelmed and preoccupied by their jobs.

  • If you come in and you say: well, here, this is going to increase your efficiency by 20%, and here's the three weeks you'd have to spend doing that, and the payoff would come over the next years,

  • they'll say: yeah, we'll do that as soon as I have time. And 'as soon as I have time' is never.

  • If they're on fire and you have water, you can sell it to them.

  • But that's about it, because otherwise it's a no-go.

  • So those are all things that are worthwhile knowing because they're very hard to learn.

  • And very comically, if you had told me when I first started developing these tests that one of the reasons corporations wouldn't buy them was because they worked, I would have just thought: really? Come on. Really? That's really the reason?

  • It's like: yeah, that's the reason. God. So very, very problematic.

  • And then they already have their processes in place and they're entrenched.

  • And then the next problem is, you know, you haven't got any customers.

  • I think I told you that before.

  • You don't have any customers, and you show them the statistics, but they can't understand the statistics, and they also have 200 other people selling them things, and 50 of them are crooks.

  • So they're just lying about what they're doing.

  • They can't tell the difference between the liars and the people who are telling the truth.

  • And they can't understand the statistics for themselves, and they can't verify them anyways.

  • So what they ask you is who else is using it?

  • And if the answer is, well, everybody, then you don't even have to do the sale because you're already rich.

  • And if the answer is nobody, then they're going to say: well, why is nobody using this?

  • You say: well, it's the hottest thing that's come along.

  • And they think: well, I'm not gonna be the sucker who gets painted red, you know, and fired because this failed. They do not think: this might make me succeed.

  • People do not care about whether or not they succeed.

  • They care about whether or not they fail.

  • And so that's another thing: if you're marketing something and you can say 'this will stop you from being identified as a failure,' then people might be happy about that.

  • But if you say 'this will really help you succeed,' it's like: yeah, yeah, no, I don't want to succeed, I just don't want to fail.

  • I don't want to succeed.

  • I just want to be invisible and be left alone.

  • And if you're looking for people's fundamental motivation, that's it.

  • They want to be invisible, and they want to be left alone.

  • Did I tell you the zebra story? Do any of you know that story?

  • Okay, I'll tell you the zebra story.

  • Stop me if I've told it to you before; this tells you everything you need to know about human beings.

  • So it's worth knowing.

  • Okay, so zebras have stripes and people say, Well, that's for camouflage.

  • And then you think about that for two seconds.

  • And you think: that's a really stupid theory, because lions are camouflaged, they're golden like the grass, and zebras are black and white, so you can see a zebra five miles away.

  • It's like there's a zebra.

  • It's black and white, so the whole camouflage thing just isn't working out so well as a hypothesis.

  • Okay, so biologists go and decide to take a look at some zebras, and so they're looking at a zebra in the herd, because there's no such thing as a lone zebra, right?

  • Just like there's no fish: there are schools of fish, and there are herds of zebras. There isn't a fish on its own.

  • This is why I think the cod aren't coming back.

  • There are no individual cod; there are massive 100-mile-long schools of cod, 10 stories deep, 20 million years old.

  • You wiped out the school.

  • You don't just get to throw a cod in the water and say: well, you know, off you go.

  • It's like, Well, where's my city?

  • It's like launching you in the middle of a field.

  • It's like: well, go out there and reproduce. No, that's not gonna happen, you know?

  • So without the school there's no cod, and you can't just introduce a whole school of cod, because you don't have a whole school of cod.

  • So, you know, maybe the cod are never coming back.

  • And zebras are the same thing.

  • There's not a zebra; there are zebras.

  • And so you're looking at the zebras, trying to study a zebra.

  • You look at the zebra and you make some notes and you look up and you think, Oh, Christ, which zebra was I looking at?

  • And the answer is: you don't know, because the camouflage of the black-and-white stripes is against the herd.

  • There's a variety of reasons for the stripes; flies also seem not to like them.

  • But, you know, things usually evolve for multiple reasons; anyways, it's very difficult to parse out a zebra against the herd.

  • You look down, you look up.

  • It's like: oh, all those damn zebras look the same.

  • Yes, the camouflage is effective, but it's against the herd.

  • All right, so then you think, Well, we better identify a zebra so we can see what he's up to.

  • So then you take your jeep and a can of red paint on a stick with a rag on the end of it, you drive up to the zebras, and you paint their haunch red a little bit, put a nice red dot on their haunch, or maybe clip their ear with a cattle clip, and then, you know, you stand back and think: hey, I'm pretty smart.

  • Now I'm gonna watch that zebra.

  • So what do you think happens to the zebra?

  • The lions kill it, right?

  • Right, right.

  • Because lions, they're smart, right?

  • Hunting animals are smart, but they have to identify a zebra before they can organize their hunt.

  • They can't just hunt the whole herd.

  • They have to pick out a zebra.

  • And so maybe it's a zebra that's got a sore hip or something, and so you think: well, nature's kind.

  • It just takes the weak.

  • It's not that. Lions like really healthy, delicious zebras, but those look like all the other healthy, delicious zebras, so they can't get a bead on them.

  • But if they're small and just born, or if they're limping, or there's something that identifies them, then the lions can pick them out, and they do pick them out.

  • And so the rule for human beings is keep your damn stripes on so the lions don't get you.

  • And I'm telling you, man, if you want to remember one thing from my class about human motivation, that's a good thing to learn.

  • People camouflage themselves against the herd, and they like to be in the middle of the herd, which is what fish do.

  • By the way, if you have a big school of fish, the smart, healthy, large fish are in the middle of the school, because you know what you call fish on the outside of the school: bait, right?

  • So that's what people are doing: they're trying to move into the middle of the herd all the time, and the herd moves around, or the school moves around, and people are going: well, I'm in the middle.

  • I'm staying in the middle here, so I've got this protective ring of people around me.

  • So the predators don't pick me out and do me in.

  • Okay, so that's part of the reason why I said you can't sell something to someone on the promise of success, because, you know, you're thinking: well, people are aiming at success.

  • Don't be thinking that; it's by no means necessarily true.

  • Trait neuroticism is potentially the most powerful motivator, and trait neuroticism says: let's not be too threatened or hurt, right?

  • That's the negative emotion system, and the negative emotion system is a killer source of motivation.

  • You know, you also see that there are scales of well-being that have been designed, mostly by social psychologists, which means they're very bad scales most of the time, because their psychometric standards are absurdly low, generally speaking.

  • So what you find with scales of well-being, sometimes they're even talked about as scales of happiness, is that people aren't after happiness.

  • They're after not hurting.

  • It's not that they want to be extroverted and enthusiastic, right, and bubbly and full of smiles and laughter.

  • That isn't what they mean by 'I want to be happy.'

  • What they mean is: I don't wanna be anxious or in pain. And so well-being scales tend to be something like neuroticism, sorry, emotional stability, plus extraversion.

  • But the big loading is on emotional stability, the reverse of neuroticism.

  • You don't wanna be happy; you want to avoid suffering.

  • And one way to avoid suffering is not to let the lions gnaw on you.

  • And one way of doing that is to stay in the middle of the damn herd.

  • And like, I'm not being a smart alec about this; I understand why people do that.

  • There's real danger in being visible.

  • Now, you might be successful if you're visible, but you also might be dead.

  • So okay, so now there's a number of things to consider.

  • If you're thinking about performance prediction, one of them is: to what degree do people vary in terms of their performance capacity?

  • And you might say, Well, there's very little performance variability.

  • Or you might say there's a tremendous amount of performance variability.

  • Or you might say there's an absurd amount of performance variability, and it turns out that the claim that there's an absurd amount of performance variability is the proper claim.

  • IQ is normally distributed.

  • So is conscientiousness, but productivity is distributed along a Pareto distribution, and I'll show you why.

  • And that follows a law called Price's law, from someone named Derek de Solla Price, who was studying scientific productivity in the early 1960s.

  • And what he showed was that a vanishingly small proportion of the scientists operating in a given scientific domain produce half the output.

  • And what's happening is that to do really well at a given productive task, which would also include generating money as a proxy for creative productivity, a bunch of things have to go your way simultaneously. So maybe you have to be really, really smart.

  • You have to be really, really lucky.

  • You have to be really healthy.

  • You have to be really energetic.

  • You have to be really conscientious.

  • You have to be in the right time at the right place.

  • And maybe you also have to have the right social network, right?

  • So it's a lot of things, and each of those is a small probability.

  • And then if you multiply the small probabilities together, you get an extraordinarily tiny probability, and you have to have all those things functioning before you're gonna end up on the extreme end of the productivity distribution.

  • But if you do end up there, then you produce almost all of everything.

  • So it's a tiny number of people that produce almost all of everything.

  • That's Price's law, and technically, as I mentioned to you before, it's a square-root law: the square root of the number of people in a productive domain produce half the output.

  • Right?

  • So if you have 10 employees, three of them produce half the output.

  • If you have 100, 10 of them produce half the output.

  • If you have 10,000, 100 of them produce half the productive output (there's a sketch of this below).
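
A minimal sketch of that square-root rule; the three cases are the ones just listed.

```python
# Price's law, roughly: about sqrt(N) of the N people in a productive
# domain account for half of the total output.
import math

for n in (10, 100, 10_000):
    top = math.isqrt(n)  # size of the hyper-productive group
    print(f"{n:>6} people -> ~{top} of them produce half the output")

#     10 people -> ~3 of them produce half the output
#    100 people -> ~10 of them produce half the output
#  10000 people -> ~100 of them produce half the output
```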

  • And what that also means is that because there's massive variability in performance, you don't have to shift your ability to predict performance up very much to gain substantially on the positive side, because there's so much difference in productivity. And that also happens to be a function of the complexity of the job.

  • If the job is simple, meaning the job has, say, 10 rules, a janitorial job, let's say, it takes a little while to learn it, but once you've learned it, you basically do the same thing all the time.

  • There's not a lot of performance variability in those jobs, and most of it would be predicted by conscientiousness, and also to some degree by neuroticism, because people higher in neuroticism would be more likely to miss work.

  • But general cognitive ability, for example, is not a good predictor.

  • It'll predict how fast you learn the task initially, but not how well you perform it once learned.

  • But if the tasks you're doing are shifting constantly, so your responsibilities change, or you're in a creative job where you're constantly solving new problems, those are kind of the same thing.

  • Then, as the complexity of the job increases, the predictive utility of IQ increases, which is only to say that smarter people can handle complex situations faster.

  • It's like that doesn't seem like a particularly radical claim.

  • So Price's law dictates that there are massive individual productivity differences between people, so increasing your capacity for predicting performance, even by small increments, has a huge economic consequence. That was established in the 1990s.

  • The equations were first developed in the 1990s, and that's part of the reason I started working on performance prediction tests: I read the economics and I thought, oh my God, you can produce a test that costs $30, times, say, 50 applicants for the position, so $1,500 to administer, and it'll produce an increment of something like 30% of salary, permanently, for the person that you put in the position.

  • So let's say you hire a $75,000 employee, and it increases their productivity by 30%.

  • So we'll say roughly $25,000.

  • You get a roughly $25,000 return on a $1,500 investment every single year that person occupies the position.

  • On average, that's four years.

  • That's a $100,000 payoff for your $1,500 investment; the arithmetic is written out below.
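
Here is that back-of-the-envelope calculation written out. Every figure is the lecture's illustrative assumption, not measured data; note that the lecture rounds $22,500 up to roughly $25,000, and the four-year total up to roughly $100,000.

```python
# The lecture's selection-payoff arithmetic, spelled out.

cost_per_test = 30        # dollars per test administration
applicants = 50           # applicants screened for the position
salary = 75_000           # salary of the person hired
productivity_gain = 0.30  # ~30% of salary per year from better selection
tenure_years = 4          # average time a person holds the position

selection_cost = cost_per_test * applicants  # $1,500
annual_return = salary * productivity_gain   # $22,500, ~ $25,000 as rounded above
total_payoff = annual_return * tenure_years  # $90,000, ~ $100,000 as rounded above

print(f"cost ${selection_cost:,}, payoff ${total_payoff:,.0f} over {tenure_years} years")
```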

  • I read that and I thought: oh, that'll be easy to sell.

  • It's like: wrong. Wrong.

  • Even though the economic payoff was so massive; I told you about the other impediments that emerged.

  • But the arithmetic, the capacity to produce these calculations, was established in the 1990s, and I'll show you the equations in a bit here.

  • Okay, so we already talked about what a normal distribution looks like.

  • That's the red line, and a normal distribution emerges as a consequence of chance processes.

  • So we'll take a look at those here.

  • Hopefully, this will work.

  • Look at that.

  • All right, so there's your corporate advertisement for the day; it's not often you see a heroic normal distribution, but there you go.

  • So what you see is that you get a patterned output as a consequence of chance processes.

  • And where you happen to be on the distribution for all of the traits that we've described is a consequence of innumerable chance genetic and cultural processes.

  • But a lot of them, a lot more than people like to admit, are genetic.

  • I mean, you could make people stupider with the environment, but it's not that easy to make them smarter.

  • That's one way to think about it, right: it's way easier to make something worse than it is to make something better.

  • And so even if you take everyone and you give them optimal access to information and, say, optimal nutrition, and pretty much everybody's in that position now, at least insofar as we can manage it, you're going to get massive genetic differences in such things as conscientiousness or extraversion or intelligence.

  • In fact, as you flatten out the sociocultural environment, say you take everybody and provide them with optimal nutrition and optimal access to information, which you've pretty much done, by the way, with a computer, right, because how are you going to give someone more access to information than to give them a web-enabled computer?

  • That's it right there; there's just no better.

  • It's an infinite library.

  • You can learn anything with it.

  • So we're done; we've equalized the education landscape, roughly speaking. And then nutritionally, well, you know.

  • Yeah, some people eat badly and some people eat better.

  • But the option to eat well is basically open to at least everybody in North America.

  • Roughly speaking, you've wiped out the sociocultural variation.

  • You might think: well, that equalizes people. That's wrong.

  • All it does is reduce the variability to the remaining biological differences.

  • You maximize the genetic variability by minimizing the sociocultural variability, right?

  • Very important thing to understand.

  • This is why the personality differences between men and women are largest in the Scandinavian countries.

  • Right?

  • Tens of thousands of people have been assessed along these dimensions.

  • We know that; the data that have come in are clear.

  • The biggest differences between men and women in the world, in terms of personality and in terms of interest, are in the Scandinavian countries.

  • Why? You've wiped out the sociocultural variation.

  • All you've got left is biological differences.

  • Well, you can draw your own conclusions from that.

  • It's unfortunate and fortunate at the same time.

  • All right, so here's a Pareto distribution.

  • This is the distribution, remember, that I showed you with the Creative Achievement Questionnaire: almost everybody stacked up at zero; most people have zero creative output.

  • The median person has zero lifetime creative output.

  • And then there's a tiny proportion that are way the hell out on the, you know, right hand end of the distribution, right?

  • Those are the people for whom everything is cycling forward.

  • And as they get more well known, of course they get more opportunities as well.

  • And they're very conscientious.

  • So I'll just run this simulation for you, okay?

  • And this shows you why the Pareto distribution emerges.

  • You have to watch this quickly because it's a fairly fast animation.

  • So here's what happens.

  • Everybody starts out with $10.

  • There are 1,000 people playing the game.

  • Everybody starts out with $10.

  • I have a dollar.

  • You have a dollar.

  • I flip a coin.

  • If I get heads, you give me a dollar.

  • If I get tails, I give you a dollar, and we go around and we trade with everyone.

  • Okay, so the first thing that happens when people start to trade is this normal distribution develops, right?

  • Because some people lose and some people win; it's just like the Galton board that I showed you.

  • Okay, so you keep playing.

  • People start to stack up at zero, watch, because they lose 10 times in a row.

  • Bang, they're done.

  • The bottom graph is a graph of the entropy of the distribution, which increases as the game continues, because at the beginning it's maximally ordered, right?

  • Everybody has exactly the same amount.

  • Now it's being distributed.

  • The same equations apply to the distribution of gas into a vacuum; a sketch of the game follows.
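
A minimal sketch of that game, under the rules just described; the pairing scheme and number of rounds here are arbitrary illustrative choices.

```python
# Trading game: 1,000 players start with $10 each; random pairs flip a
# coin and the loser pays the winner $1. Players at $0 are out. Run long
# enough, more and more players pile up at zero while a long right tail
# of wealth develops.
import random

players = [10] * 1000

for _ in range(100_000):                   # one coin-flip trade per step
    a, b = random.sample(range(len(players)), 2)
    if players[a] > 0 and players[b] > 0:  # broke players can't trade
        if random.random() < 0.5:
            players[a] += 1; players[b] -= 1
        else:
            players[a] -= 1; players[b] += 1

print("players at $0:", sum(w == 0 for w in players))
print("richest player: $", max(players))
```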

  • Well, what happens now?

  • You know, there are people out there at the $50 range, at the $60 range, at the $70 range.

  • You keep playing, and eventually, if you play it right to its conclusion, one person ends up with all the money.

  • So: to those who have everything, more will be given; from those who have nothing, everything will be taken.

  • That's the law of economic productivity.

  • It's called the Matthew principle, and it's actually an economic principle that was derived…
