How We Know What We Know: Introduction
I’ve been reading a great deal over the last few years about a wide variety of subjects – not science as such, but how we do science and how we actually know what we know. I’ve written about some of these things before, in Sci-Ence! Justice Leak!, but there I was looking at them for their science-fictional or storytelling possibilities.
However, I want to write about this stuff seriously. Partly, that’s to help organise my own thoughts – I’m an autodidact, and I’ve read a VAST amount without trying to organise it except in an ad hoc manner. But also, it’s because I find this stuff absolutely fascinating. So I’ve come up with a through-line, and I’m going to try to do a post a week for the next twelve weeks. I’m going to try to be properly accurate, but still convert this all into vernacular English.
What I’m going to talk about is the scientific method – what it is, why it’s important, and how developments in computer science let us construct, from a very small set of assumptions, a mathematically rigorous formulation of it. Not only that, but we can use that formulation to prove what the optimal thing to do is in any circumstance (given enough computing power…).
There will be twelve parts to this series:
1 – Feedback
Explaining possibly the most important concept in human thought, and looking at the hypothesise-experiment-revise process in science.
2 – Occam’s Razor
The single most important tool in modern science, devised by a mediaeval friar.
3 – Proof By Contradiction
A mathematical technique, first formulated by Euclid, that’s the basis for much modern mathematics.
4 – Diagonal Proof
Georg Cantor’s proof and why it’s important.
5 – Turing and Gödel
On notions of computability, and what a computer program is.
6 – Kolmogorov Complexity
What’s the smallest computer program that could print out this essay?
7 – Bayes’ Theorem
An 18th century vicar shows us how to make decisions in the absence of information.
8 – Ashby’s Law
Cybernetics and attempting to control the uncontrollable.
9 – Thermodynamics and Shannon
What is information, and how is it related to chaos?
10 – Solomonoff Induction
How to predict the future.
11 – Hutter’s Algorithm
Universal artificial intelligence.
12 – Conclusion
In which we look at what we’ve learned.
This will be summarising stuff from many books and articles, but in particular The Fabric Of Reality by David Deutsch, Probability Theory: The Logic Of Science by E.T. Jaynes, Information Theory, Inference, and Learning Algorithms by David MacKay, some of the posts on the LessWrong group blog, the lectures in Scott Aaronson’s sidebar, and An Introduction To Cybernetics by W. Ross Ashby. Mistakes are, of course, mine, not theirs. Part 1 in this series will come next week.
(More generally, my plan at the moment is to have four big series of posts on the go – my Beach Boys reviews, my restarted Doctor Who reviews, this series, and a series of posts on Cerebus – all posting roughly weekly, with the other three days of the week left either for linkblogs or for rants on whatever comes to mind in comics or politics.)