  • Alright, welcome to category theory lectures

  • So, are we all programmers here?

  • Is everyone a programmer, or are there people who are not programmers?

  • If you're not a programmer, please raise your hand

  • Okay, that means, like, you don't write any programs? > I kind of learn as a hobby

  • Okay well that's good enough

  • okay and how many people here have some little bit of knowledge of Haskell, okay

  • Oh, lots! That's excellent, because I might be giving some examples, mostly

  • just like you know, declarations, functions or something like that

  • I'll explain everything, but it's good to have a little bit of

  • understanding. So I'm not really teaching a programming language, it will be about

  • category theory and category theory is like this most abstract, or well maybe

  • almost the most abstract part of mathematics, right

  • so the question is why are we here?

  • what does it have to do with programming, right?

  • and a lot of programmers they hate math

  • they finished learning math at college and now they

  • are really happy now "for the rest of my life I will not have to touch it"

  • "I hate math" and so on, and you guys here come for, I dunno, punishment?

  • Why do you think category theory might be important?

  • What do you think, do we do functional programming?

  • Is category theory only relevant to functional programming, or would other branches

  • of programming maybe profit from it too?

  • is there something more to offer?

  • Have you heard of functors, who's heard of functors?

  • yeah, and who knows what functors are?

  • Uh huh, a few. That's good, that means I can actually explain stuff and you won't be totally

  • repeating what you already know

  • so I came to category theory through a long long twisted road

  • ok when I started programming I started programming in assembly language

  • the lowest possible level, right, where you actually tell the computer exactly what to do

  • right down to "grab this thing from memory, put it into the register, use it as an

  • "address and then jump" and so on so this is very precisely telling the computer

  • what to do, right. This is a very imperative way of programming; we start

  • with this most imperative approach to programming and then sort of we drag

  • this approach to programming throughout our lives, right, and, like, we

  • have to unlearn at some point. And this approach to programming sort of in

  • computer science is related to Turing machines. A Turing machine is this kind of

  • very primitive machine, it just stamps stuff on a paper tape, right

  • there is no high-level programming there it's just like this is

  • the assembly language "read this number, put it back on tape, erase something from

  • "the tape" and so on. So this is one approach to programming

  • by the way, all these approaches to programming were invented before there were even

  • computers, right and then there's the other approach to programming this came

  • from mathematics the lambda calculus right, Alonzo Church and these guys

  • and that was like "what is possible to compute, right,

  • "thinking about mathematics in terms of

  • "how things can be actually executed in some way, transforming things", right

  • so these approaches to programming are not very practical

  • although people write programs in assembly language and it's possible but

  • they don't really scale, so this is why we came up with languages that offer higher levels of abstraction,

  • and so the next level of abstraction was procedural programming

  • what's characteristic of procedural programming is that you divide a big

  • problem into procedures and each procedure has its name, has a

  • certain number of arguments

  • maybe it returns a value sometimes right

  • not necessarily, maybe it's just for side effects and so on, but because you

  • chop up your work into smaller pieces, you can deal with bigger

  • problems, right, so this kind of abstracting of things really helped

  • in programming, right, and then people came up with this

  • idea of object-oriented programming right and that's even more abstract now you

  • have stuff that you are hiding inside objects and then you can compose these

  • objects right and once you program an object you don't have to look

  • inside the object, you can forget about the implementation, right, and

  • just look at the surface of the object, which is the interface, and then you can

  • combine these objects without looking inside and you know you have the bigger picture

  • and then you combine them into bigger objects and so you can see that there is

  • a certain idea there, right? and so it's a very important idea that if you

  • want to deal with more complex problems you have to be able to

  • chop the bigger problem into smaller problems, right,

  • solve them separately and then combine the solutions together

  • And there is a name for this, this is called composability, right.

  • So composability is something that really helps us in programming.
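The decompose-and-recombine idea described here is captured most directly by function composition. A minimal sketch in Haskell (the function names are illustrative, not from the lecture):

```haskell
import Data.Char (toUpper)

-- Two small problems, each solved independently...
trim :: String -> String
trim = dropWhile (== ' ')

shout :: String -> String
shout = map toUpper

-- ...and the solutions glued together with (.), without
-- reopening or modifying either piece.
cleanAndShout :: String -> String
cleanAndShout = shout . trim
```

The combined function is built entirely from the interfaces of the pieces; that is composability in miniature.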

  • What else helps us in programming? Abstraction, "abstraction" that comes from

  • a Greek word that means more or less the same as "subtraction", right, which

  • means "getting rid of details". If you want to hide details, you don't want them or you wanna say

  • "these things, they differ in some small details but for me they are the same"

  • "I don't care about the details". So, an object in object-oriented programming

  • is something that hides the details, abstracts over some details right, and there are even these

  • abstract data types that just expose the interface and you're not supposed to

  • know how they are implemented right?

  • so when I first learned object-oriented programming I thought "this is like the best thing since sliced bread"

  • and I was a big proponent of object-oriented programming and together with this idea

  • of abstracting things and and and composing things comes the idea of

  • reusability, right, so if I have these blocks that I have chopped up and

  • implemented, right, maybe I can rearrange them in different ways so once I

  • implemented something maybe I will use it in another problem to solve

  • another problem I will have these building blocks, I will have lots of building

  • blocks that hide their complexity and I will just juggle them and put them

  • in new constellations, right? so it seemed to me like this is really the promise of

  • object-oriented programming, I'm buying it! and I started programming object-oriented way

  • using C++ and I became pretty good at C++ I think, you know, I wrote a lot of C++ code

  • and well it turns out that there is something wrong with this

  • object-oriented approach and it became more and more painfully obvious when people started

  • writing concurrent code and parallel code, ok, so concurrency does not mix very well with

  • object-oriented programming. Why doesn't it? Because objects hide implementation

  • and they hide exactly the wrong thing, which makes them not composable, ok?

  • They hide two things that are very important:

  • They hide mutation: they mutate some state inside, right? And we don't know about it, they hide it

  • They don't say "I'm mutating something".

  • And sharing, these pointers, right: they share data and they often share data between

  • each other you know between themselves they share data

  • And mixing, sharing and mutation has a name

  • It's called a data race, ok?

  • So what the objects in object-oriented programming are abstracting over is data races

  • and you are not supposed to abstract over data races

  • because then when you start combining these objects you get data races for free, right.

  • and it turns out that for some reason we don't like data races, ok?

  • and so once you realize that you think

  • "okay, I know how to avoid data races, right, I'm going I'm going to use locks

  • "and I'm going to hide locks too because I want to abstract over it"

  • so, like, in Java, where every object has its own lock, right?

  • and unfortunately locks don't compose either, right.

  • That's the problem with locks. I'm not going to talk about this too much

  • because that is like a different course about concurrent programming but I'm

  • just mentioning that with this kind of raising of the level of abstraction

  • you have to be careful what you're abstracting over

  • what are the things that you are subtracting, throwing away and not exposing, right?

  • So the next level of abstraction that came after that, well actually it came before

  • that but people realised that, "Hey, maybe we have to dig it out

  • "and start using this functional programming" when you abstract things into functions

  • and especially in Haskell when, you know,

  • in a functional language you don't have mutations so you don't have this problem

  • of hiding data races, and then you also have ways of composing

  • data structures into bigger data structures and that's also because everything is

  • immutable so you can safely compose and decompose things.
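The point about immutability can be made concrete with a tiny sketch (the `Point` type and `moveRight` are illustrative names, not from the lecture): "updating" an immutable value builds a new one, so anything that shared the original is unaffected, and there is no hidden mutation to race on.

```haskell
-- An immutable record: "modifying" it produces a fresh value.
data Point = Point { px :: Int, py :: Int } deriving (Eq, Show)

-- Record-update syntax copies the value with one field changed;
-- the original Point is untouched and can be shared freely.
moveRight :: Point -> Point
moveRight p = p { px = px p + 1 }
```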

  • Now every time I learned a new language I wanted to find where the boundaries of this language are

  • like, "what are the hardest things to do in this language?", right?

  • So for instance in C++, right, what are the highest levels of abstraction that you can get?

  • Template metaprogramming, right? So I was really fascinated by template metaprogramming

  • and I started reading these books about template metaprogramming

  • and was like "Wow, I would have never come up with these ideas, they are so complex", right?

  • So it turns out that these are very simple ideas

  • it's just that the language is so awkward in expressing them

  • So I learned a little bit of Haskell and I said "Ok this huge template that was so complicated,

  • that's one line of code in Haskell", right? So there are languages in which

  • there was, like, a jump in the level of abstraction that made it much easier to

  • program at a higher level of abstraction, right.

  • And in every language you know there is this group of people who are writing

  • libraries right and they always stretch the language they always go to the highest

  • possible abstraction level and they hack, right? They hack at it

  • as much as possible. So I thought "Okay I don't like hacking, I just wanted to

  • "use a language that allows me to express myself at a high level of

  • "abstraction and that lets me express new ideas that are much more interesting"

  • you know, like, with template metaprogramming right you express this

  • idea that you might have lots of data structures that only differ by the type

  • that they hide. Like you can have a vector of integers and a vector of

  • booleans right? And there's just so much code to share, so if you abstract over the

  • data type that you are storing there, if you forget about it,

  • hide it, abstract over it, you can write code, abstract code, and in

  • C++ you do this with templates right. And you get something new, you

  • program at a higher level – a higher abstraction level because you

  • disregard some of the details, so that was great.
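The Haskell analogue of this kind of template is a type parameter: one definition abstracts over the stored type and is reused for integers, booleans, or anything else. A minimal sketch (`Stack` is an illustrative example, not from the lecture):

```haskell
-- One definition, parametrised by the element type 'a' --
-- the Haskell counterpart of a C++ template like vector<T>.
data Stack a = Stack [a] deriving (Eq, Show)

push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

-- pop may fail on an empty stack, so it returns Maybe.
pop :: Stack a -> Maybe (a, Stack a)
pop (Stack [])       = Nothing
pop (Stack (x : xs)) = Just (x, Stack xs)
```

The same `push` and `pop` work at `Stack Int`, `Stack Bool`, and so on; the element type is the detail being abstracted away.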

  • Now it turns out that once I learned Haskell

  • (I'm still learning Haskell to some extent)

  • I found out there are things in Haskell that are at these boundaries of abstraction

  • that, like, there are people who are working on this frontier of Haskell, right?

  • There are certain very abstract things that are unfortunately a little bit awkward to express in Haskell

  • and then there is this barrier to abstraction even in Haskell right?

  • I mean if you've seen some libraries that were written by Edward Kmett

  • you realise it's extremely hard to understand what the thought process was, right?

  • And the secret is very simple: Category Theory.

  • Edward Kmett knows Category Theory, and he just takes this stuff from

  • Category Theory, he reads these mathematical papers and he just

  • translates them into Haskell and when you translate stuff from Category

  • theory to Haskell you lose a lot of abstraction, you make it more concrete

  • Haskell has one category built-in and you are

  • restricting yourself to this particular category.

  • You can create other categories in Haskell and model them, but that becomes

  • awkward and that becomes sort of like template metaprogramming you know within

  • Haskell: not exactly the same mechanism, but the level of

  • difficulty in expressing these ideas in Haskell is as big as template

  • metaprogramming in C++.
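Modelling other categories in Haskell, as mentioned above, is typically done through the `Category` class from `Control.Category` in base, which generalises the built-in `id` and `(.)`. Here is a minimal sketch; the `Partial` type (essentially Kleisli composition over `Maybe`) and `safeDiv` are illustrative examples, not from the lecture:

```haskell
import Prelude hiding (id, (.))
import Control.Category

-- Arrows in this category are functions that may fail.
newtype Partial a b = Partial { runPartial :: a -> Maybe b }

instance Category Partial where
  -- The identity arrow always succeeds.
  id = Partial Just
  -- Composition threads the Maybe: if either step fails,
  -- the whole composite fails.
  Partial g . Partial f = Partial (\a -> f a >>= g)

-- Division by a fixed denominator, failing on zero.
safeDiv :: Int -> Partial Int Int
safeDiv d = Partial (\n -> if d == 0 then Nothing else Just (n `div` d))
```

The same `(.)` notation now composes fallible computations, which is a small taste of choosing a category other than the default one of types and total functions.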

  • So Category Theory is this higher-level language above Haskell, above

  • functional programming, above ML,

  • Haskell, Scala, and so on

  • C++, assembly, it's a higher-level language, okay? It's not a practical

  • language that we can write code in, but it's a language.

  • So just like people who write these horrible metaprogramming template libraries in C++

  • they can only do this because they learned a little bit of Haskell.

  • and they know what some constructs in Haskell are and how to do

  • things: how to implement algorithms on immutable data structures, right

  • they know how to do this because they learned it from Haskell

  • and they just translated into this horrible template programming language.

  • Fine right? And it works and people are using it, the same way that if you're

  • a functional programmer you can take these ideas from category theory,

  • these very very abstract ideas and translate it into this kind of awkward language called

  • Haskell, right? Now, looking from the categorical perspective, Haskell

  • becomes this ugly language just like looking from Haskell C++ becomes this

  • ugly language, right? But at least, you know, it's an executable language, it's a

  • language in which we can write programs. And of course these ideas when they

  • percolate from category theory down to Haskell they can also percolate then

  • down to C++ and even to assembly, PHP or whatever, JavaScript if you want to

  • because these are very general ideas

  • So we want to get to this highest possible level of abstraction to help us express ideas that later can be

  • translated into programs. So that for me is the main practical motivation

  • ok? But then I started also thinking about the theoretical motivation or more

  • philosophical motivation because once you start learning Category Theory you

  • say "Okay, Category Theory unifies lots of things".

  • I mean if you abstract enough, if you chop up all the unnecessary stuff and you are

  • you know, top of Mount Everest and you're looking down and suddenly

  • all these things start looking similar, like from that high-level all these

  • little programming languages look like little ants and they behave the same way

  • and it's like "Ok, they're all the same". At that level of abstraction all this

  • programming stuff, it's all the same, it looks the same.

  • But that's not all: suddenly at this high, high level other things look the same

  • and then mathematicians discovered this, that they've been developing separate

  • areas of mathematics, right? So first of all they developed geometry,

  • algebra, number theory, topology, what have you, set theory

  • these are all completely different separate theories, they look nothing like each other, right

  • and category theory found out similarities between all these things. So it turns out that at a certain level of

  • abstraction, the structure of all these theories is the same. So you can describe,

  • you know, you tweak the structure of a category and you suddenly get topology,

  • you tweak this structure of category, you suddenly get set theory,

  • you tweak something else and you get algebra.

  • So there is this unification, this grand unification, of, essentially, all

  • areas of mathematics in category theory. But then there is also programming, right?

  • Programming that deals with these computers with this memory and processor or registers, ok?

  • And this stuff can also be described, and then there's the programming language side:

  • this stuff can be described mathematically, yes, with the lambda calculus

  • Like all these languages that essentially have roots in lambda calculus

  • they try to get away from bits and bytes and gotos and jumps, right

  • and loops, and they want to abstract this stuff into stuff that's more

  • like lambda calculus, right and there are these data structures and these data

  • structures you know, we used to look at them like "here's a bunch of bytes,

  • "here's a bunch of bytes, here's a pointer from this bunch of bytes to this bunch

  • "of bytes" and so on, right?

  • The very very low level, like plumbers working with tubes, right?

  • And then computer scientists say "Well actually these things, they form types"

  • So there's type theory, type theory that can describe all these

  • categories of data structures, but there's also type theory in

  • mathematics, right, people invented types in math not to solve problems that

  • computer scientists have; they tried to solve problems like paradoxes, like Russell's paradox

  • that says that you cannot construct a set of all sets, things like this,

  • or maybe you know the Barber's Paradox: can the barber shave himself or not?

  • The barber shaves only the people who don't shave themselves, so can he shave himself or not?

  • This kind of paradox: it's a funny paradox, but they're serious

  • So this Barber's Paradox actually can be translated into Russell's Paradox

  • which is like "the set of all sets", or "the sets that don't contain themselves don't form a set", and so on, stuff like this, right,

  • and Russell came up with this theory of types to say that sets

  • form layers upon layers, that you cannot just take all sets and put them

  • in one big set, right, and then type theory evolved from this and there's

  • this very abstract Martin-Löf Type Theory that's very formal, right so that's

  • that's just a branch of mathematics that was created to deal with paradoxes and

  • then there is logic, and logic was created you know long, long time ago by the

  • ancient Greeks right, they used logic so there are laws of logic and people have

  • been studying logic for thousands of years, right. And at some point people

  • suddenly realized that all these distinct areas of mathematics are exactly the same

  • this is called the Curry-Howard-Lambek isomorphism which says that

  • whatever you do in logic can be directly translated into whatever you do in

  • type theory. It's not that they are merely similar: they are

  • exactly the same, there is a one-to-one correspondence, right? And the Lambek

  • part of this isomorphism says that in category theory, there's certain

  • types of categories, like the cartesian closed categories, that actually totally

  • describe the same thing. It's just like there are these three different theories, one

  • is about computing, one is about categories another is about types and

  • they are all the same, so, like, essentially all our human activity is

  • described by one theory
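The Curry–Howard part of this correspondence can be shown in a few lines of Haskell: read a type as a proposition and a program of that type as its proof. Implication is the function arrow and conjunction is a pair, so familiar logical laws become ordinary programs (the names below are illustrative):

```haskell
-- Modus ponens: from A -> B and A, conclude B.
-- The proof is just function application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- The logical equivalence (A /\ B -> C) <-> (A -> (B -> C))
-- is literally currying and uncurrying.
curryProof :: ((a, b) -> c) -> (a -> b -> c)
curryProof f a b = f (a, b)

uncurryProof :: (a -> b -> c) -> ((a, b) -> c)
uncurryProof f (a, b) = f a b
```

The Lambek part then identifies these same structures (products, function arrows) with objects and morphisms in a cartesian closed category.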

  • ok so this is like really mind-blowing and of course mathematicians you know

  • when they discover things like this they turn philosophical or

  • I wouldn't say religious but at least philosophical right like "Oh my god we are discovering

  • stuff!" it's like you're not really creating mathematics right you're

  • discovering some deep deep truth about the universe

  • okay, like, what do you think: is mathematics something that we invent, or

  • is it, like, built into the universe? Because for physicists this is clear, right, physicists

  • do experiments so – "We study stuff, we throw these atoms at each other, bang bang

  • and we observe something so we are discovering stuff that's around us"

  • Whereas mathematicians, no, they just sit down at the desk with a pencil

  • or walk around in a park and think.

  • What are they discovering? And now they are saying "Well we have independently discovered

  • that this branch of mathematics this guy discovered and this other guy in

  • ancient Greece he discovered logic and this guy in Sweden who discovered type

  • theory, you know, and they all discovered the same thing: there is some really, really

  • deep truth that we are discovering" so it's like there is some Platonic Ideal right

  • I don't know if you read [Neal] Stephenson's novels about,

  • there's this novel in which there are multiple universes and each of them is

  • more platonic than the other, and people move from one platonic universe to another but the

  • mathematicians think that this is really what they are doing, right?

  • And I was thinking about it and I thought "No, this is really not it, there

  • "has to be a simpler explanation"

  • And that goes back to the way we do mathematics or the way we discover the universe, right.

  • So what do we do? I mean we are human, right? We are these evolved

  • monkeys, right now we have evolved our brains so some parts of our brain

  • evolved, you know, over, I dunno, a billion years of evolution, like, the image

  • processing stuff for instance has been evolving for a very very long time right

  • so this is a really very good system to analyse visual

  • input. That's been evolving for a very long time, starting with bacteria

  • that would just see whether it's bright or dark and change their metabolism

  • and then they would start seeing this side is brighter than that side and so on and eventually, you know, the

  • most important thing "Where is the predator?" "Where's the food?" distinguishing

  • between a predator and food right this is like the most important

  • thing that makes you survive right so we got very good at this and we have

  • inherited this stuff from lower animals, and there's been a lot of

  • evolutionary pressure to evolve these right and now we are trying to imitate

  • the stuff with computers and we're finding it really hard

  • image recognition is a really hard problem right now: recognizing

  • who's your foe and who's your friend and recognizing where the food is right

  • and whether it's spoiled or not: very important things computers don't do very well

  • ok so this is a very hard problem but we've been working on it for a billion

  • years, so our brains know how to do this stuff. But then there are

  • things that evolved in the last hundred thousand years or so, I don't

  • know, how long is human evolution? 100 million years?

  • No it's not. Whatever it is it's a tiny number, it's just like the

  • last millisecond of evolution that we suddenly had these brains that

  • that can actually think abstractly: we can count, we can communicate, we

  • can organise stuff, and we can do math and we can do science and this is

  • like a really fresh ability, and and we've been doing science for

  • the last few thousand years, so that's nothing on the evolutionary scale

  • and we're doing it with these brains. Where do these brains come

  • from, they evolved to do things like hunt mammoths, or run away from

  • sabre-toothed tigers, they did not evolve to do programming, that's a

  • very recent thing! They didn't evolve even to do mathematics or logic

  • no, they evolved to do stuff like hunting, like finding food and

  • especially organizing ourselves into groups, right? Social activities

  • and communicating, so language right? So we have these language skills

  • it's very fresh thing, right?

  • So compared to the complexity of our visual cortex, this newly

  • acquired ability to actually think abstractly and use language

  • is, like, a really very, very fresh thing and it's very primitive, ok. It hasn't had

  • time to evolve. It's very simple and we are using this tool that we evolved for

  • throwing javelins, shooting arrows and communicating with other

  • guys like "Oh here, the mammoth is over there!"

  • "No don't run there!" or "Right behind you a sabre-toothed tiger!" or "A bear!" so we

  • are now using this tool to do mathematics

  • so there are certain things that we found useful in doing

  • abstract thinking, in doing mathematics and the major thing is we

  • come head-to-head with a very complex problem like how to provide food for our

  • tribe right and we solve it, how do we solve it, we divide it into smaller

  • problems that we can solve and then we combine the solutions

  • ok so this is the only way we know how to deal with complex situations by

  • decomposing these things into simpler things and then solving

  • them, the simplest problems, and combining the solutions into bigger solutions and this is everywhere.

  • You find that this permeates everything we do, so that we don't even notice it.

  • But this is how we work and because this is how we work this is how we do science as well.

  • So every branch of science, every branch of mathematics is

  • "We can only see these things that can be chopped into pieces and then put

  • together" so no wonder they look the same!

  • Because we can only see these problems that have this structure. If they

  • don't have this structure, we just don't see them, we just say "We cannot solve this problem"

  • "Let's do something else, maybe we can get a grant for solving that other

  • "problem because that seems choppable into smaller pieces

  • "this one doesn't, let's forget about it let's not even talk about it", right?

  • So one good thing,

  • ok, so maybe the whole universe is like this maybe everything in this universe

  • can be chopped into little pieces and then put together, maybe that's like the

  • property of this universe and our brains are just reflecting this.

  • And personally I don't think so. Maybe I'm wrong, hopefully I'm wrong, but I'm a

  • physicist also, I started as a physicist, so I saw what was happening in physics

  • and in physics we also wanted to chop things into little pieces, right. And

  • we were very successful at this, you know: we found out that matter is

  • built from atoms, right? So atoms are these things that we can

  • separate and then study, find their properties and then say that the

  • property of a rock or a piece of metal comes from combining the properties

  • of these atoms so we can decompose a piece of rock into atoms, study

  • them and then recompose it, and then we have an understanding of it.

  • But then we didn't stop at that because we wanted to see what's inside the

  • atom right? So there are these elementary particles

  • electrons, protons and so on. So at some level if we

  • want to decompose things into simpler things, these simpler things have

  • to have similar properties. For instance

  • what's the simplest thing that we can imagine for an elementary particle? It

  • has to be a point, right? It should be a point. I mean, if it's a ball right then maybe we can

  • cut it, chop it into smaller pieces, and then do the decomposition,

  • recomposition and so on. So at some level, some lowest possible level we

  • cannot chop it anymore and we should find a point, right? So an elementary

  • particle should be a point. That would be the end of this level of decomposition, right?

  • And we tried doing this, we have like the standard model of particles in

  • which we assume that the particles are points, which is sort of a cheat because it turns

  • out that we cannot really deal theoretically with point particles because

  • you get infinities, like when two point particles get very, very

  • close together, right.

  • The interaction goes to infinity and everything blows up so we came up with

  • this renormalization theory which is like a big hack you know to get rid of

  • infinities and so on, and of course physicists were not very happy with that

  • So they thought

  • "ok so maybe at this lowest level things are not as choppable as we thought, maybe

  • nature really does not follow this kind of divide and then combine" so

  • they came up with this idea that maybe elementary particles are strings. If you've heard of

  • string theory, right? Like, what a crazy theory this is! That this most elementary

  • thing is not a point but it actually is a string and you cannot chop it

  • It's like the elementary thing is not divisible, but it's not a point.

  • And quantum theory (and this is another non-choppable piece

  • of knowledge that we found out) says that if you have a bigger system, maybe you

  • can separate it into elementary particles and say "Hey I have a system of

  • 10 particles, I know properties of these 10 particles and I call this system

  • something bigger like an object", right, and I can find out the structure of this

  • object by looking at these particles and it turns out in quantum mechanics that the states that

  • it comes to: they don't add up!

  • A state that has two particles is not a sum or a product or a convolution of

  • states of a single particle; it's a new state which follows a different, you know,

  • differential equation, and so on, so we try to separate particles

  • And suddenly we cut the particles apart and it turns out that

  • something weird happens in between when you are cutting, right, you are actually

  • changing the system by separating things, ok? So maybe, maybe at the very bottom (or

  • maybe there is no bottom) maybe at the very bottom things are not separable,

  • maybe things are not composable. Maybe this composability that we love so much

  • is not a property of nature, that's what I'm saying, maybe it's not a property of nature.

  • Maybe this is just the property of our brains, maybe our brains are such that we have

  • to see structure everywhere and if we can't find the structure we just give up

  • So in this way category theory is not really about mathematics or physics

  • category theory is about our minds, how our brains work so it's more of

  • epistemology than ontology. Epistemology is how we can reason about stuff, how we can

  • learn about stuff, ontology is about what things are, right. Maybe we cannot learn what

  • things are. But we can learn how we study them, and that's what category theory tells us.

  • Let's take a break


Category Theory 1.1: Motivation and Philosophy
Published by 張嘉軒 on January 14, 2021