In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation, using an algorithm, and how efficiently they can be solved. The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".
In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation (see Church–Turing thesis). It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a finite amount of memory.
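To make the model concrete, here is a minimal sketch of a one-tape Turing machine simulator in Haskell. The zipper representation of the tape, the `Delta` transition table, and the bit-flipping example machine are all illustrative assumptions rather than a canonical formulation.

```haskell
-- A minimal one-tape Turing machine sketch. The tape representation
-- and the example rules below are illustrative only.
import qualified Data.Map as M

data Move = L | R

-- Tape as a zipper: cells left of the head (nearest first), the
-- scanned cell, and cells to the right. Blanks extend either end
-- on demand, giving the "potentially infinite" tape.
data Tape = Tape [Char] Char [Char]

blank :: Char
blank = '_'

move :: Move -> Tape -> Tape
move L (Tape (l:ls) c rs) = Tape ls l (c:rs)
move L (Tape []     c rs) = Tape [] blank (c:rs)
move R (Tape ls c (r:rs)) = Tape (c:ls) r rs
move R (Tape ls c [])     = Tape (c:ls) blank []

-- The transition function: (state, scanned symbol) maps to
-- (next state, symbol to write, head movement).
type Delta = M.Map (String, Char) (String, Char, Move)

-- Run until no rule applies (the machine halts). A machine that
-- never halts makes this loop forever, as a real Turing machine would.
run :: Delta -> String -> Tape -> Tape
run delta q tape@(Tape ls c rs) =
  case M.lookup (q, c) delta of
    Nothing          -> tape
    Just (q', w, mv) -> run delta q' (move mv (Tape ls w rs))

-- Example machine: flip every bit while moving right, halt on a blank.
flipper :: Delta
flipper = M.fromList
  [ (("s", '0'), ("s", '1', R))
  , (("s", '1'), ("s", '0', R)) ]

main :: IO ()
main = do
  let Tape ls c rs = run flipper "s" (Tape [] '1' "010")
  putStrLn (reverse ls ++ [c] ++ rs)  -- prints "0101_"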
Category theory formalizes mathematical structure and its concepts in terms of a collection of objects and of arrows (also called morphisms). A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object. Category theory can be used to formalize concepts of other high-level abstractions such as sets, rings, and groups.
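These two properties can be stated directly in code. The following sketch mirrors the `Category` type class from Haskell's base library (`Control.Category`): objects are types, arrows are values of type `cat a b`, and the laws appear as comments because the compiler cannot enforce them.

```haskell
import Prelude hiding (id, (.))

-- A category, in the style of base's Control.Category.
class Category cat where
  id  :: cat a a                        -- an identity arrow for each object
  (.) :: cat b c -> cat a b -> cat a c  -- composition of arrows

-- Laws every instance is expected to satisfy:
--   f . id      == f                   -- right identity
--   id . f      == f                   -- left identity
--   f . (g . h) == (f . g) . h         -- associativity
```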
Several terms used in category theory, including the term "morphism", are used differently from their uses in the rest of mathematics. In category theory, morphisms obey conditions specific to category theory itself.
Categories represent abstractions of other mathematical concepts. Many areas of mathematics can be formalized by category theory as categories. Hence category theory uses abstraction to make it possible to state and prove many intricate and subtle mathematical results in these fields in a much simpler way.
A basic example of a category is the category of sets, where the objects are sets and the arrows are functions from one set to another. However, the objects of a category need not be sets, and the arrows need not be functions. Any way of formalizing a mathematical concept such that it meets the basic conditions on the behaviour of objects and arrows is a valid category, and all the results of category theory apply to it.
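This first example has a direct analogue in Haskell: types play the role of sets and ordinary functions play the role of arrows. The `Category (->)` instance shipped in base's `Control.Category` provides `id` and `(.)`; the `double` and `describe` functions below are made-up examples used only to exercise composition and identity.

```haskell
import Prelude hiding (id, (.))
import Control.Category   -- provides: instance Category (->)

double :: Int -> Int
double n = n * 2

describe :: Int -> String
describe n = "got " ++ show n

main :: IO ()
main = do
  -- Composition of arrows: an Int -> String arrow built from two others.
  putStrLn ((describe . double) 21)          -- prints "got 42"
  -- Identity law: composing with id changes nothing.
  print ((describe . id) 7 == describe 7)    -- prints True
```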