What Gödel's Theorems Mean for Legal Systems

29 Mar 2019

Throughout history, the idea of a mathematical or logic-based legal system has caught the imagination of many thinkers. A quick literature review shows that it goes back at least as far as 1709, with Bernoulli's The Use of the Art of Conjecturing in Law. I do not have a reference, but it also feels like an idea the ancient Greeks may have played with extensively. The idea has since been termed "Computational Law" and was researched quite thoroughly in the 20th century. It has intrigued me ever since I became aware of it, and I want to take the time to explore it properly.

The appeal of a legal system defined in some symbolic representation is that it would be maximally fair, insofar as fairness is defined as the indiscriminate application of predefined and collectively agreed-upon rules. A mathematical legal system would leave no room for the bias of a judge, or for subtle influences on the day of a court ruling. It would also remove the massive dependency that legal cases currently have on slow and methodical human intervention. However, the idea still feels intuitively flawed to me, and I would like to explore why.

If you work in software or maths you may have heard of Kurt Gödel's first and second Incompleteness Theorems. Roughly stated, they describe the following two properties of any consistent formal system that is expressive enough to capture basic arithmetic:

1. There are statements in the system that can neither be proven nor disproven within it.
2. The system cannot prove its own consistency.

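Stated a little more formally (and glossing over the precise technical conditions), for any such consistent, effectively axiomatised theory $T$:

$$
\begin{aligned}
&(1)\quad \exists\, G_T \ \text{such that}\ \ T \nvdash G_T \ \text{ and }\ \ T \nvdash \lnot G_T \\
&(2)\quad T \nvdash \operatorname{Con}(T)
\end{aligned}
$$
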
Any legal system that we base on mathematics would need to be axiomatic, not only because it is rooted in maths (which is itself axiomatic), but because there are notions of right and wrong that we take to be true at face value: it is wrong to kill people, it is wrong to steal, and so on. From these principles an intelligent system could, in theory, reason its way up to a ruling on whether a certain company should be allowed to continue operating, or on any other legal proposition.

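To make that concrete, here is a minimal sketch of what such a reasoner could look like: a handful of hand-written axioms and rules, plus a forward-chaining loop that derives conclusions from them. The predicates and the company name are entirely hypothetical and only illustrate the shape of the idea.

```python
# Hypothetical facts: things the system takes as established.
facts = {("dumps_toxic_waste", "AcmeCorp")}

# Hypothetical rules: each maps a premise predicate to a conclusion predicate.
rules = [
    ("dumps_toxic_waste", "harms_public_health"),
    ("harms_public_health", "violates_environmental_law"),
    ("violates_environmental_law", "must_cease_operating"),
]

def forward_chain(facts, rules):
    """Apply the rules to the known facts until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

conclusions = forward_chain(facts, rules)
print(("must_cease_operating", "AcmeCorp") in conclusions)  # True
```
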
However, the repercussions of Gödel's theorems mean there would be rulings that a legal expert system could not prove. In concrete terms, I believe this would mean the software getting stuck in an infinite loop or failing with a stack overflow exception.

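As an illustration of that failure mode (not a faithful model of a Gödel sentence), consider a naive prover that establishes a statement by recursively proving whatever the statement depends on. A circular, self-justifying dependency sends it into unbounded recursion until Python's equivalent of a stack overflow, RecursionError, is raised. The statement names and the dependency graph are made up for the example.

```python
# Hypothetical dependency graph: each statement is proved by proving
# the statements it depends on; statements with no premises are axioms.
depends_on = {
    "ruling_is_valid": ["system_is_consistent"],
    "system_is_consistent": ["ruling_is_valid"],  # circular self-justification
}

def prove(statement):
    premises = depends_on.get(statement, [])
    # An axiom (no premises) is trivially proved; otherwise prove every premise.
    return all(prove(premise) for premise in premises)

try:
    prove("ruling_is_valid")
except RecursionError:
    print("The prover blew the stack without ever reaching a verdict.")
```
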
This means that the idea of a purely mathematical legal system is a non-starter. You do not even need to be aware of Gödel's theorems to sense that a legal system cannot prove its own validity when it is the very thing that defines what is legal in the first place.

It also means that existing legal systems cannot be formally proven to be consistent either, but this may be a good thing. Perhaps the slow mutation of societal norms cannot be captured in formal axioms, and we are better off relying on the elusive collective sense of human morality and circumstance. Perhaps imperfect systems sometimes work precisely because they are flawed.