  • Alright, welcome to category theory lectures

  • So, are we all programmers here?

  • Is everyone a programmer, or are there people who are not programmers?

  • If you're not a programmer, please raise your hand

  • Okay, that means, like, you don't write any programs? > I kind of learn as a hobby

  • Okay well that's good enough

  • okay, and how many people here have a little bit of knowledge of Haskell, okay

  • Oh, lots, that's excellent, because I might be giving some examples, mostly

  • just like, you know, declarations, functions or something like that

  • I'll explain everything but it's good to have a little bit of

  • understanding. So I'm not really teaching a programming language, it will be about

  • category theory and category theory is like this most abstract, or well maybe

  • almost the most abstract part of mathematics, right

  • so the question is why are we here?

  • what does it have to do with programming, right?

  • and a lot of programmers hate math

  • they finished learning math at college and now they

  • are really happy now: "for the rest of my life I will not have to touch it"

  • "I hate math" and so on, and you guys here come for, I dunno, punishment?

  • Why do you think category theory might be important?

  • What do you think, do we do functional programming?

  • Is category theory only relevant to functional programming, or would other branches

  • of programming maybe profit from it too?

  • is there something more to offer?

  • Have you heard of functors, who's heard of functors?

  • yeah, and who knows what functors are?

  • Uh huh, a few. That's good, that means I can actually explain stuff and you won't be totally

  • repeating what you already know

  • so I came to category theory through a long long twisted road

  • ok when I started programming I started programming in assembly language

  • the lowest possible level, right, where you actually tell the computer exactly what to do

  • right down to "grab this thing from memory, put it into the register, use it as an

  • "address and then jump" and so on so this is very precisely telling the computer

  • what to do, right? This is a very imperative way of programming. We start

  • with this most imperative approach to programming and then sort of we drag

  • this approach to programming throughout our lives, right, and like we

  • have to unlearn it at some point. And this approach to programming sort of in

  • computer science is related to Turing machines. A Turing machine is this kind of

  • very primitive machine, it just stamps stuff on a paper tape, right

  • there is no high-level programming there it's just like this is

  • the assembly language "read this number, put it back on tape, erase something from

  • the tape" and so on so this is this one approach towards programming

  • by the way, all these approaches to programming were invented before there were even

  • computers, right, and then there's the other approach to programming. This came

  • from mathematics, the lambda calculus, right, Alonzo Church and these guys

  • and that was like "what is possible to compute, right,

  • "thinking about mathematics in terms of

  • "how things can be actually executed in some way, transforming things", right

  • so these approaches to programming are not very practical

  • although people write programs in assembly language and it's possible but

  • they don't really scale, so this is why we came up with languages that offer higher levels of abstraction,

  • and so the next level of abstraction was procedural programming

  • what's characteristic of procedural programming is that you divide a big

  • problem into procedures and each procedure has its name, has a

  • certain number of arguments

  • maybe it returns a value sometimes right

  • not necessarily, maybe it's just for side effects and so on, but because you

  • chop up your work into smaller pieces you can, like, deal with bigger

  • problems, right? So this kind of abstracting of things really helped in

  • programming, right, and then next people came up with this

  • idea of object-oriented programming right and that's even more abstract now you

  • have stuff that you are hiding inside objects and then you can compose these

  • objects, right, and once you program an object you don't have to look

  • inside the object, you can forget about the implementation, right, and

  • just look at the surface of the object which is the interface and then you can

  • combine these objects without looking inside and you know you have the bigger picture

  • and then you combine them into bigger objects and so you can see that there is

  • a certain idea there, right? And so it's a very important idea that if you

  • want to deal with more complex problems you have to be able to

  • chop the bigger problem into smaller problems, right,

  • solve them separately and then combine the solutions together

  • And there is a name for this, this is called composability, right.

  • So composability is something that really helps us in programming.
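
To make that concrete in Haskell (a sketch of my own, with made-up function names, not something from the lecture): combining solutions is literally function composition with the `(.)` operator.

```haskell
import Data.Char (toUpper)

-- Two small problems, solved independently...
trim :: String -> String
trim = dropWhile (== ' ')

shout :: String -> String
shout = map toUpper

-- ...and the two solutions combined into a solution of the bigger
-- problem, without looking inside either piece.
cleanAndShout :: String -> String
cleanAndShout = shout . trim
```

Each piece can be understood, tested, and reused on its own; the composite comes for free.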

  • What else helps us in programming? Abstraction. "Abstraction" comes from

  • a Greek word that means more or less the same as "subtraction", right, which

  • means "getting rid of details". If you want to hide details, you don't want them or you wanna say

  • "these things, they differ in some small details but for me they are the same"

  • "I don't care about the details". So, an object in object-oriented programming

  • is something that hides the details, abstracts over some details, right, and there are even these

  • abstract data types that just expose the interface and you're not supposed to

  • know how they are implemented right?
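
As a sketch of that idea in Haskell (hypothetical names, not from the lecture): a stack whose operations are the whole interface. In a real program the constructor would be hidden behind a module export list such as `module Stack (Stack, empty, push, pop) where`, so clients could never see the representation.

```haskell
-- An abstract data type: clients use only empty/push/pop.  The list
-- inside is an implementation detail (hidden by the module's export
-- list in a real program), so it could be swapped for another
-- representation without breaking any client.
newtype Stack a = Stack [a]

empty :: Stack a
empty = Stack []

push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

pop :: Stack a -> Maybe (a, Stack a)
pop (Stack [])       = Nothing
pop (Stack (x : xs)) = Just (x, Stack xs)
```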

  • so when I first learned object-oriented programming I thought "this is like the best thing since sliced bread"

  • and I was a big proponent of object-oriented programming and together with this idea

  • of abstracting things and and and composing things comes the idea of

  • reusability, right, so if I have these blocks that I have chopped up and

  • implemented, right, maybe I can rearrange them in different ways, so once I have

  • implemented something maybe I will use it to solve

  • another problem. I will have these building blocks, I will have lots of building

  • blocks that hide their complexity and I will just juggle them and put them

  • in new constellations, right? so it seemed to me like this is really the promise of

  • object-oriented programming, I'm buying it! And I started programming in an object-oriented way

  • using C++ and I became pretty good at C++ I think, you know, I wrote a lot of C++ code

  • and well it turns out that there is something wrong with this

  • object-oriented approach and it became more and more painfully obvious when people started

  • writing concurrent code and parallel code, ok, so concurrency does not mix very well with

  • object-oriented programming. Why doesn't it? Because objects hide implementation

  • and they hide exactly the wrong thing, which makes them not composable, ok?

  • They hide two things that are very important:

  • They hide mutation: they mutate some state inside, right? And we don't know about it, they hide it

  • They don't say "I'm mutating something".

  • And sharing: these pointers, right? They share data, and they often share data

  • with each other, you know, between themselves they share data

  • And the mix of sharing and mutation has a name

  • It's called a data race, ok?

  • So what the objects in object-oriented programming are abstracting over is data races

  • and you are not supposed to abstract over data races

  • because then when you start combining these objects you get data races for free, right.

  • and it turns out that for some reason we don't like data races, ok?
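
Even Haskell lets you opt back into sharing plus mutation with `IORef`, and then the race returns: plain `modifyIORef'` from several threads can lose updates. A sketch of my own (function names are made up, not from the lecture) using `atomicModifyIORef'`, which makes each read-modify-write indivisible:

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM_)
import Data.IORef (atomicModifyIORef', newIORef, readIORef)

-- nThreads threads each bump one *shared, mutable* counter nIncr times.
-- With plain modifyIORef' two threads can interleave their read and
-- write steps and lose increments: that interleaving is the data race.
-- atomicModifyIORef' serialises each update, so no update is lost.
sharedCount :: Int -> Int -> IO Int
sharedCount nThreads nIncr = do
  ref  <- newIORef 0
  done <- newEmptyMVar
  forM_ [1 .. nThreads] $ \_ -> forkIO $ do
    replicateM_ nIncr (atomicModifyIORef' ref (\n -> (n + 1, ())))
    putMVar done ()
  replicateM_ nThreads (takeMVar done)   -- wait for all workers
  readIORef ref
```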

  • and so once you realize that you think

  • "okay, I know how to avoid data races, right, I'm going I'm going to use locks

  • "and I'm going to hide locks too because I want to abstract over it"

  • so like in Java, where every object has its own lock, right?

  • and unfortunately locks don't compose either, right.

  • That's the problem with locks. I'm not going to talk about this too much

  • because that is like a different course about concurrent programming, but I'm

  • just mentioning that with this kind of raising of the level of abstraction

  • you have to be careful what you're abstracting over

  • what are the things that you are subtracting, throwing away and not exposing, right?

  • So the next level of abstraction that came after that, well actually it came before

  • that but people realised that, "Hey, maybe we have to dig it out

  • "and start using this functional programming" when you abstract things into functions

  • and especially in Haskell when, you know,

  • in a functional language you don't have mutations so you don't have this problem

  • of hiding data races, and then you also have ways of composing

  • data structures into bigger data structures and that's also because everything is

  • immutable so you can safely compose and decompose things.
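
A small Haskell illustration of why immutability makes this safe (my example, not the lecture's): consing onto a list reuses the old list as the tail, and since nothing can ever mutate it, any number of values can share it freely.

```haskell
base :: [Int]
base = [2, 3, 4]

-- Both lists share `base` as their tail: no copying is needed, and
-- because nothing is ever mutated, the sharing can never produce a
-- data race.
withOne, withZero :: [Int]
withOne  = 1 : base
withZero = 0 : base
```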

  • Now every time I learned a new language I wanted to find where the boundaries of this language are

  • like, "what are the hardest things to do in this language?", right?

  • So for instance in C++, right, what are the highest levels of abstraction that you can get?

  • Template metaprogramming, right? So I was really fascinated by template metaprogramming

  • and I started reading these books about template metaprogramming

  • and was like "Wow, I would have never come up with these ideas, they are so complex", right?

  • So it turns out that these are very simple ideas

  • it's just that the language is so awkward in expressing them

  • So I learned a little bit of Haskell and I said "Ok this huge template that was so complicated,

  • that's one line of code in Haskell", right? So there are languages in which

  • there was like a jump in the level of abstraction that made it much easier to

  • program at a higher level of abstraction, right.

  • And in every language you know there is this group of people who are writing

  • libraries right and they always stretch the language they always go to the highest

  • possible abstraction level and they hack, right? They hack at it

  • as much as possible. So I thought "Okay I don't like hacking, I just wanted to

  • "use a language that allows me to express myself at a high level of

  • "abstraction and that lets me express new ideas that are much more interesting"

  • you know, like, with template metaprogramming right you express this

  • idea that you might have lots of data structures that only differ by the type

  • that they hide. Like you can have a vector of integers and a vector of

  • booleans right? And there's just so much code to share, so if you abstract over the

  • data type that you are storing there, if you forget about it,

  • hide it, abstract over it, you can write code, abstract code, and in

  • C++ you do this with templates right. And you get something new, you

  • program at a higher level – a higher abstraction level because you

  • disregard some of the details, so that was great.
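
The C++ template idea described here corresponds to parametric polymorphism in Haskell. A sketch with hypothetical names: one definition, abstracted over the element type, serves integers and booleans alike.

```haskell
-- One container definition for every element type: the "vector of
-- ints vs vector of bools" abstraction, with no templates needed.
newtype Vec a = Vec [a]

-- Works uniformly for any element type `a`, precisely because the
-- code never looks at the elements it is abstracting over.
vlength :: Vec a -> Int
vlength (Vec xs) = length xs
```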

  • Now it turns out that once I learned Haskell

  • (I'm still learning Haskell to some extent)

  • I found out there are things in Haskell that are at these boundaries of abstraction

  • that, like, there are people who are working on this frontier of Haskell, right?

  • There are certain very abstract things that are unfortunately a little bit awkward to express in Haskell

  • and then there is this barrier to abstraction even in Haskell right?

  • I mean if you've seen some libraries that were written by Edward Kmett

  • you realise that it's extremely hard to understand what the thought process was, right?

  • And the secret is very simple: Category Theory.

  • Edward Kmett knows Category Theory, and he just takes this stuff from

  • Category Theory, he reads these mathematical papers and he just