The theory of computation grows more relevant as computation itself becomes more prevalent in society. The theory has always embraced a wide range of computational notions; in popular culture, however, computation is interpreted narrowly as either a computer or the neural network of the brain.

This course will give you a broader view of computation, and of how it relates to machines and language.

**Hilbert’s 10th Problem**

As this course opens, so does a major motion picture about Alan Turing, *The Imitation Game*. So we might as well begin our history here, with Alan Turing’s contribution to the theory of computation.

Alan Turing was very concerned with the nature of computation because it was a pressing, open problem in the history of mathematics. The nature of computation became fundamental to mathematics because, in the growing notion of thought as a mechanical process, all of mathematics must be a result of that mechanical process. If there were machines to build roads, it stood to reason that before long there should be machines to build mathematics.

At the 1900 International Congress of Mathematicians, David Hilbert gave a series of challenge problems. The tenth problem in the series asked for a general procedure to decide whether any multivariate polynomial with integer coefficients has integer roots. Such integer-restricted equations are called Diophantine. Eventually this problem was shown to be unsolvable: it is impossible to determine, in general, whether or not integers can satisfy a Diophantine equation.
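For any particular Diophantine equation, one can still search for solutions by brute force; the undecidability only rules out a general method that always answers. A minimal sketch in Python (the Pythagorean equation and the search bound are illustrative choices, not part of Hilbert’s statement):

```python
from itertools import product

def diophantine_search(f, bound):
    """Search for integer solutions of f(...) == 0 with every variable
    in [-bound, bound]. This can find solutions, but no finite bound
    ever proves none exist -- that general question is undecidable."""
    nvars = f.__code__.co_argcount
    rng = range(-bound, bound + 1)
    return [args for args in product(rng, repeat=nvars) if f(*args) == 0]

# The Pythagorean equation x^2 + y^2 - z^2 = 0 is Diophantine.
solutions = diophantine_search(lambda x, y, z: x*x + y*y - z*z, 5)
# (3, 4, 5) appears among the solutions found
```

The search is exhaustive over a box of side `2*bound + 1`, so it always terminates; what cannot terminate in general is a procedure that must also certify that no solution exists anywhere in the integers.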

The problem was that, just as the final conclusions were being drawn up across all of science and mathematics, the foundations crumbled.

The demand for increasing clarity in mathematics led people such as Friedrich Ludwig Gottlob Frege to set the foundations of mathematics on logic and set theory. By 1910 the philosophers Bertrand Russell and Alfred North Whitehead had gotten as far as a summary opus, *Principia Mathematica*, that was to completely describe the foundations of mathematics. But they could not finish, because the more precisely they thought about logic, the more paradoxes they discovered.

Bertrand Russell, in thinking about sets, asked himself about self-referential sets. His famous question is: *In the town where the barber shaves all who do not shave themselves, who shaves the barber?* It cannot be the barber, for he shaves only those who do not shave themselves; but if it is not the barber, then the barber does not shave himself, and so it must be the barber.

The careful language of *Principia Mathematica* made it possible to write down such a self-referential formula, and this led to the ability to define impossible objects. So rather than the age of reason, we find ourselves in the age of Alice in Wonderland, where we believe as many as six impossible things before breakfast.

In fact, it was not only mathematics that seemed to be crumbling under its own self-contradiction; all the tidy plans for end-of-the-century science were abruptly shredded as Max Planck suggested quantum physics (1900), Einstein the theory of relativity (1905), and Werner Heisenberg the uncertainty principle (1927).

For mathematics, this movement toward strange new limits, a total break from any classical understanding of reality, thought, and the mind, was realized in Kurt Gödel’s incompleteness theorems (1931). Turing recast Gödel’s problem as one of computation, and in 1936 published his solution to the Halting Problem, which states that no general method exists that will determine whether a given program will ever terminate its computation.
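The argument behind the Halting Problem is a Russell-style self-reference: assume a halting decider exists, then build a program that does the opposite of whatever the decider predicts about itself. A sketch, with `halts` standing in for the assumed (impossible) oracle:

```python
def halts(program, argument):
    """Hypothetical oracle: would return True iff program(argument)
    eventually halts. Turing's theorem says no such total, correct
    function can be written."""
    raise NotImplementedError("no such decider exists")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about
    the program applied to itself."""
    if halts(program, program):
        while True:      # oracle said we halt, so loop forever
            pass
    else:
        return           # oracle said we loop, so halt at once

# Consider paradox(paradox): if halts says it halts, it loops;
# if halts says it loops, it halts. Either answer is wrong,
# so a correct halts cannot exist.
```

This is the barber paradox rebuilt out of programs: the self-application `paradox(paradox)` plays the role of the barber shaving himself.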

The connection between Turing and Gödel, halting and truth, is the following: a proof was being considered as a computation. It is a logical computation, finite steps over a finite symbol set, with strictly enforced rules, just like a computer’s computation. If all computation halts, then proofs eventually terminate with an answer: the statement has been proven true, or proven false.

All statements, surely, are true or false, and proof lets us measure which. No: our measurement can fail to come to a conclusion.

**The Imitation Game**

Along with all these changes in intellectual viewpoint, humanity’s perceived place in the world was changing. The obvious ways in which a human is a machine began to impress themselves on human self-knowledge. In 1923 Sigmund Freud published *The Ego and the Id*, recasting our spiritual selves as the subject of scientific analysis and explanation: the emotive or subconscious mind is also a machine. Combined with the work on computation theory, the world began to wonder about artificial intelligence. Can a machine think?

The incompleteness theorem gave a tempting avenue toward the divine. Maybe machines cannot tell the difference between true and false, but somehow systems of logic are created in which the things that can be labeled true and false are all the useful things. Maybe there is another sort of intelligence that makes these over-arching rules, one that is not rational but is fed by something uniquely human; and from there the machines take over. Similar to how a bulldozer is the thing that lifts the earth, but it is the person who wills that the earth be lifted.

For this, Turing set out to make his Imitation Game, or Turing Test, which would be some sort of detector for this divine quality. However, it had to be played out through language, and language itself was under reconsideration, particularly by Ludwig Wittgenstein in the *Tractatus Logico-Philosophicus* (1921), who could claim that the problems of thought are always problems of language, and that the Turing Test therefore skirts the problem by confusing a thought with a discussion of a thought.

I think that code breaking influenced this direction. Prior to the modern age of code breaking, of which Turing was the central figure, cryptography was thought of in terms of linguistics. But set to the problem of code making and code breaking, the only obvious starting point was that of a machine obfuscating a symbol stream. Hence, the limitations of codes were the limitations of machine computation. This carries forward to us today in an even more refined manner: we contemplate codes unbreakable not because of a formal, mathematical unsolvability, but because the algorithm needed to break the code outstrips the available computational resources.

**Language and Computation**

In 1957 Noam Chomsky published *Syntactic Structures*, which proposed a mathematical theory for the grammar of languages. His approach, married with the general concerns of Wittgenstein, put computation within a language framework. You will see this in the course: there will be a continued interest in “languages” as a model of computation.

Noam Chomsky was looking for rule systems that could generate grammatically correct sentences. As with mathematical incompleteness, languages do not easily admit a perfect grammar: “All grammars leak” (Edward Sapir, 1921). However, Chomsky defined a sort of grammar that seems close to true, and in doing so created the Chomsky Hierarchy of languages, of increasing complexity, to explain syntax and its relationship to meaning.
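To make the generative idea concrete, here is a toy grammar in the style of Chomsky’s rewrite rules; the rules and vocabulary are invented for this example:

```python
import random

# Nonterminals map to lists of possible right-hand sides.
# Symbols absent from the table are terminals (words).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["barber"], ["machine"]],
    "V":  [["shaves"], ["imitates"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively choosing one production;
    terminals are emitted as-is."""
    if symbol not in GRAMMAR:
        return [symbol]
    rhs = random.choice(GRAMMAR[symbol])
    return [word for sym in rhs for word in generate(sym)]

print(" ".join(generate()))  # e.g. "the machine imitates the barber"
```

Every rule here has a single nonterminal on its left-hand side, which is what makes this grammar context-free, one level of the Chomsky Hierarchy.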

These things are important to computer scientists not only to define computation, but to guide the creation of computer languages, which should be of a lesser sort than human languages: they need only command, and need not, indeed should not, express ambiguity. In doing so, a hierarchy of computation is created in parallel, with each level in the hierarchy of computation associated with a level in the hierarchy of language. For example, regular languages correspond to finite automata, context-free languages to pushdown automata, and unrestricted grammars to Turing machines.