We built computers to expand our brains. Originally, scientists built computers to solve arithmetic problems,
but they turned out to be incredibly useful for many other things as well: running the
entire internet, lifelike graphics, artificial brains or simulating the Universe, but amazingly
all of it boils down to just flipping zeros and ones.
Computers have become smaller and more powerful at an incredible rate. There is more computing
power in your cell phone than there was in the entire world in the mid-60s. And the entire
Apollo moon landing could have been run on a couple of Nintendos.
Computer science is the subject that studies what computers can do. It is a diverse and overlapping
field, but I'm going to split it into three parts: the fundamental theory of computer
science, computer engineering, and applications.
We'll start with the father of theoretical computer science: Alan Turing, who formalised
the concept of a Turing machine, which is a simple description of a general-purpose computer.
People came up with other designs for computing machines, but they are all equivalent to a
Turing machine, which makes it the foundation of computer science.
A Turing machine contains several parts: an infinitely long tape that is split into cells
containing symbols.
There is also a head that can read and write symbols to the tape, a state register that
stores the state of the head, and a list of possible instructions.
In today's computers the tape is like the working memory or RAM, the head is the central processing
unit, and the list of instructions is held in the computer's memory.
Even though a Turing machine is a simple set of rules, it's incredibly powerful, and
this is essentially what all computers do nowadays. Although our computers obviously
have a few more parts like permanent storage and all the other components.
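To make that concrete, here is a minimal sketch of a Turing machine simulator in Python. The rule table below, a machine that flips every bit on the tape, is just an invented example for illustration:

```python
# A minimal sketch of a Turing machine: tape, head, state register and rules.
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))      # tape cells indexed by position
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:
            break                      # no matching rule: the machine halts
        write, move, state = rules[(state, symbol)]
        cells[head] = write                        # write a symbol
        head += 1 if move == "R" else -1           # move the head
    return "".join(cells[i] for i in sorted(cells))

# (state, symbol read) -> (symbol to write, head move, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001
```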
Every problem that is computable by a Turing machine is computable using lambda calculus,
which is the basis of research in programming languages.
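As a small taste of the lambda calculus idea, here is a toy sketch using Python lambdas, where Church numerals represent numbers purely as functions; this is only an illustration, not a full lambda calculus implementation:

```python
# Church numerals: a number n is "apply a function f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(church):
    # Convert a Church numeral back to an ordinary integer by counting applications.
    return church(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5
```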
Computability Theory attempts to classify what is and isn't computable. There are
some problems that, due to their very nature, can never be solved by a computer. A famous
example is the halting problem, where you try to predict whether a program will stop running
or carry on forever. There are programs where this is impossible to answer, whether by a computer
or a human.
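The classic argument goes roughly like this sketch, where halts() is a hypothetical checker that cannot actually be built:

```python
# A thought experiment, not a working tool: suppose halts(program, data) could
# always correctly predict whether a program finishes. No such checker can exist.
def halts(program, data):
    raise NotImplementedError("no general-purpose halting checker can exist")

def paradox(program):
    # Do the opposite of whatever the checker predicts about running
    # `program` on its own source code.
    if halts(program, program):
        while True:       # checker said "halts", so loop forever
            pass
    return                # checker said "loops forever", so halt at once

# If halts() always answered correctly, then paradox(paradox) would have to
# both halt and run forever, which is a contradiction.
```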
Many problems are theoretically solvable but in practice would take too much memory or more steps
than could fit in the lifetime of the Universe, and computational complexity attempts to categorise
these problems according to how they scale. There are many different classes of complexity
and many classes of problem that fall into each type.
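As a rough illustration of how badly some problems can scale, here is a brute-force subset-sum search in Python; the numbers and target are made up, and the point is simply that the work roughly doubles each time one more number is added:

```python
from itertools import combinations
import time

# Brute force subset sum: check every subset, so 2^n subsets for n numbers.
def subset_sum_brute_force(numbers, target):
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

for n in (10, 15, 20):
    nums = list(range(1, n + 1))
    start = time.perf_counter()
    subset_sum_brute_force(nums, -1)   # impossible target forces a full search
    print(n, "numbers:", round(time.perf_counter() - start, 3), "seconds")
```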
There are a lot of real-world problems that fall into these impossible-to-solve categories,
but fortunately computer scientists have a bunch of sneaky tricks that fudge
things and get pretty good answers, though you'll never know if they are the best answers.
An algorithm is a set of instructions independent of the hardware or programming language designed
to solve a particular problem. It is kind of like a recipe for how to build a program
and a lot of work is put into developing algorithms to get the best out of computers. Different
algorithms can get to the same final result, like sorting a random set of numbers into
order, but some algorithms are much more efficient than others; their efficiency is described using big O notation.
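For example, here is a quick comparison of a simple O(n^2) selection sort with Python's built-in O(n log n) sort; both produce the same sorted list, but at very different speeds:

```python
import random
import time

# Selection sort: repeatedly find the smallest remaining value. Simple but O(n^2).
def selection_sort(values):
    values = list(values)
    for i in range(len(values)):
        smallest = min(range(i, len(values)), key=values.__getitem__)
        values[i], values[smallest] = values[smallest], values[i]
    return values

data = [random.random() for _ in range(3000)]

start = time.perf_counter()
slow = selection_sort(data)
print("selection sort:", round(time.perf_counter() - start, 3), "s")

start = time.perf_counter()
fast = sorted(data)                    # built-in O(n log n) sort
print("built-in sort: ", round(time.perf_counter() - start, 4), "s")

assert slow == fast                    # same result, very different running time
```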
Information theory studies the properties of information and how it can be measured,
stored and communicated. One application of this is how well you can compress data, making
it take up less memory while preserving all or most of the information, but there are lots
of other applications. Related to information theory is coding theory.
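A toy example of lossless compression is run-length encoding, sketched here in Python; it only pays off when the data contains long runs of repeated symbols, which is exactly the kind of redundancy compression exploits:

```python
from itertools import groupby

# Run-length encoding: replace each run of repeated symbols with (symbol, count).
def rle_encode(text):
    return [(char, len(list(run))) for char, run in groupby(text)]

def rle_decode(pairs):
    return "".join(char * count for char, count in pairs)

message = "aaaaabbbccccccccd"
encoded = rle_encode(message)
print(encoded)                       # [('a', 5), ('b', 3), ('c', 8), ('d', 1)]
assert rle_decode(encoded) == message  # lossless: we get the original back
```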
And cryptography is obviously very important for keeping information sent over the internet
secret. There are many different encryption schemes which scramble data and usually rely
on some very complex mathematical problem to keep the information locked up.
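As a purely illustrative sketch, nothing like a real secure scheme, here is a toy XOR cipher in Python that scrambles a message with a repeating key and unscrambles it with the same key:

```python
# Toy symmetric cipher: XOR each byte of the message with a repeating key.
# Applying the same key twice gets the original message back.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"not-a-real-key"               # illustrative key, not secure
ciphertext = xor_cipher(b"meet me at noon", secret_key)
print(ciphertext)                            # looks like gibberish
print(xor_cipher(ciphertext, secret_key))    # the same key decrypts it
```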
These are the main branches of theoretical computer science, although there are many
more I don't have time to go into: Logic, Graph Theory, Computational Geometry, Automata
Theory, Quantum Computation, Parallel Programming, Formal Methods and Data Structures. But let's
move on to computer engineering.
Designing computers is difficult because they have to do so many different things. Designers
need to try and make sure they are capable of solving many different kinds of problems
as optimally as possible.
Every single task that is run on the computer goes through the core of the computer: the
CPU. When you are doing lots of different things at the same time, the CPU needs to
switch back and forth between these jobs to make sure everything gets done in a reasonable
time. This is controlled by a scheduler, which chooses what to do when and tries to get through
the tasks in the most efficient way, which can be a very difficult problem.
Multiprocessing helps speed things up because the CPU has several cores that can execute
multiple jobs in parallel. But this makes the job of the scheduler even more complex.
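Here is a minimal sketch of one of the simplest scheduling policies, round robin, where each job gets a fixed slice of time and then goes to the back of the queue; the job names and times are invented for illustration:

```python
from collections import deque

# Round-robin scheduling: every job runs for at most `time_slice` units,
# then is requeued until it has no work left.
def round_robin(jobs, time_slice=2):
    queue = deque(jobs.items())          # (name, remaining time) pairs
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        ran = min(time_slice, remaining)
        clock += ran
        if remaining - ran > 0:
            queue.append((name, remaining - ran))   # not done: back of the queue
        else:
            print(f"{name} finished at time {clock}")

round_robin({"browser": 5, "music player": 2, "compiler": 7})
```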
Computer architecture is how a processor is designed to perform tasks and different architectures
are good at different things. CPUs are general purpose, GPUs are optimised for graphics and
FPGAs can be programmed to be very fast at a very narrow range of tasks.
On top of the raw hardware there are many layers of software, written by programmers
using many different programming languages. A programming language is how humans tell
a computer what to do, and they vary greatly depending on the job at hand, from low-level
languages like assembly through to high-level languages like Python or JavaScript for coding
websites and apps. In general, the closer a language is to the hardware, the more difficult
it is for humans to use.
At all stages of this hierarchy the code that programmers write needs to be turned into
raw CPU instructions and this is done by one or several programs called compilers.
Designing programming languages and compilers is a big deal, because they are the tool that
software engineers use to make everything and so they need to be as easy to use as possible
but also be versatile enough to allow the programmers to build their crazy ideas.
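As a tiny sketch of the idea, here is a toy "compiler" in Python that translates simple arithmetic expressions into instructions for a little stack machine and then runs them; real compilers do vastly more, but the overall shape is similar:

```python
# Toy compiler: turn "+" and "*" arithmetic into stack-machine instructions.
def compile_expression(source):
    tokens = source.split()
    instructions, pending = [], []
    precedence = {"+": 1, "*": 2}          # "*" binds tighter than "+"
    for token in tokens:
        if token in precedence:
            while pending and precedence[pending[-1]] >= precedence[token]:
                instructions.append(("APPLY", pending.pop()))
            pending.append(token)
        else:
            instructions.append(("PUSH", int(token)))
    while pending:
        instructions.append(("APPLY", pending.pop()))
    return instructions

def run(instructions):
    # A simple stack machine that executes the compiled instructions.
    stack = []
    for op, arg in instructions:
        if op == "PUSH":
            stack.append(arg)
        else:                               # APPLY: pop two values, combine them
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if arg == "+" else a * b)
    return stack[0]

program = compile_expression("2 + 3 * 4")
print(program)       # [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('APPLY', '*'), ('APPLY', '+')]
print(run(program))  # 14
```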
The operating system is the most important piece of software on the computer as it is
what we interact with and it controls how all of the other programs are run on the hardware,
and engineering a good operating system is a huge challenge.
This brings us to software engineering: writing bundles of instructions telling the computer
what to do. Building good software is an art form because you have to translate your creative
ideas into logical instructions in a specific language, make it as efficient as possible
to run and as free of errors as you can. So there are many best practices and design philosophies
that people follow.
Some other important areas are getting many computers to communicate and work
together to solve problems. Storing and retrieving large amounts of data. Determining how well
computer systems are performing at specific tasks, and creating highly detailed and realistic
graphics.
Now we get to a really cool part of computer science, getting computers to solve real world
problems. These technologies underlie a lot of the programs, apps and websites we use.
When you are going on vacation and you want to get the best trip for the money you are
solving an optimisation problem. Optimisation problems appear everywhere and finding the
best path or most efficient combination of parts can save businesses millions of dollars.
This is related to Boolean Satisfiability, where you attempt to work out whether a logic formula
can be satisfied or not. This was the first problem proved to be NP-complete, and so it was
widely considered intractable, but the amazing development of new SAT solvers
means that huge SAT problems are routinely solved today, especially in artificial intelligence.
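Real SAT solvers use much cleverer techniques, but here is a brute-force sketch in Python that simply tries every assignment; the clause format and the example formula are made up for illustration:

```python
from itertools import product

# Brute-force SAT: try every True/False assignment until one satisfies
# every clause. Each clause is a list of literals like "a" or "not b".
def brute_force_sat(variables, clauses):
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))

        def literal_true(lit):
            negated = lit.startswith("not ")
            name = lit[4:] if negated else lit
            return assignment[name] != negated

        if all(any(literal_true(lit) for lit in clause) for clause in clauses):
            return assignment
    return None   # no assignment works: the formula is unsatisfiable

# (a or b) and (not a or c) and (not b or not c)
formula = [["a", "b"], ["not a", "c"], ["not b", "not c"]]
print(brute_force_sat(["a", "b", "c"], formula))   # {'a': True, 'b': False, 'c': True}
```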
Computers extend our brains and multiply our cognitive abilities. The forefront of computer science
research is developing computer systems that can think for themselves: Artificial Intelligence.
There are many avenues that AI research takes, the most prominent of which is machine learning
which aims to develop algorithms and techniques to enable computers to learn from large amounts
of data and then use what they've learned to do something useful like make decisions
or classify things.
And there are many different types of machine learning.
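One of the simplest, a one-nearest-neighbour classifier, gives a flavour of the idea: it "learns" from labelled examples and classifies a new point by finding the closest one. The fruit measurements below are invented purely for illustration:

```python
# One-nearest-neighbour classification: label a new point with the label
# of the closest training example.
def classify(examples, new_point):
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda example: distance(example[0], new_point))
    return label

# (weight in grams, diameter in cm) -> fruit label (made-up training data)
training_data = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((110, 5.5), "orange"),
]
print(classify(training_data, (160, 7.2)))  # apple
print(classify(training_data, (115, 5.8)))  # orange
```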
Closely related are fields like computer vision, trying to make computers able to see objects
in images like we do, which uses image processing techniques.
Natural language processing aims to get computers to understand and communicate using human
language, or to process large amounts of data in the form of words for analysis.
This commonly uses another field called knowledge representation, where data is organised according
to its relationships; for example, words with similar meanings are clustered together.
Machine learning algorithms have improved because of the large amount of data we give
them. Big data looks at how to manage and analyse large amounts of data to get value
from it. And we'll get even more data from the Internet of Things, which adds data collection
and communications to everyday objects.
Hacking is not a traditional academic discipline but is definitely worth mentioning: trying to
find weaknesses in computer systems and take advantage of them without being noticed.
Computational Science uses computers to help answer scientific questions from fundamental
physics to neuroscience, and often makes use of Super Computing which throws the weight
of the world's most powerful computers at very large problems, often in the area of simulation.
Then there is Human Computer Interaction which looks at how to design computer systems to
be easy and intuitive to use. Virtual Reality, Augmented Reality and Telepresence enhance
or replace our experience of reality. And finally there is Robotics, which gives computers a physical
embodiment, from a Roomba to attempts at intelligent human-like machines.
So that is my Map of Computer Science, a field that is still developing as fast as it ever
has, despite the fact that the underlying hardware is hitting some hard limits as we
struggle to miniaturise transistors any further, so lots of people are working on other kinds
of computers to try and overcome this problem. Computers have had an absolutely huge impact
on human society and so it is going to be interesting to see where this technology goes
in the next one hundred years. Who knows, perhaps one day, we'll all be computers.
As per usual if you want to get hold of this map as a poster I have made it available so
check in the description below for some links, and also if you want to find out more about
computer science I recommend you check out the sponsor for this video brilliant dot org.
People often ask me how to go about learning more about the kinds of subjects I cover in
these videos, and as well as watching videos, a really great way is to get down and solve
some real problems. And Brilliant does an excellent job at this.
It's a really cool website and also an app which helps you learn by getting you to solve
interesting problems in science, mathematics and computer science. And each of the courses
starts off kind of easy and fun and then gets more and more challenging as you master the
concepts. If you want to learn specifically about computer science, they have got whole
courses built around topics in this video like logic, algorithms, machine learning,
artificial intelligence, so if you want to check that out, go to brilliant dot org slash dos,
or even better click the link in the description below because that lets them know that you
have come from here. So thanks again for watching, and I'll be back soon with a new video.