Are things like "the output must be equal to the output of this other thing" easily expressed to a prover? If so this seems like it'd be awesome for optimisation as well.
It can definitely be done with Coq. The typical recommendation is to read through and do the exercises in the freely-available Software Foundations book [1], which will teach you how to do that in Coq.
That said, while it was educational and an extremely good introduction to interactive theorem proving, as I went through the book I became increasingly disappointed with how laborious and difficult Coq-style proofs were, as you often had to prove even the simplest things step by step. You also had to get through many chapters before you finally arrived at proving the correctness of optimizations.
I later found an alternative which I can't recommend highly enough. If you read through and do the exercises in the also freely-available Concrete Semantics book [2], you will also learn how to do exactly what you want, but you will learn it much sooner and in a much easier way.
In the latter book, you will be using the Isabelle/HOL prover instead of the Coq prover, though. The language looks slightly more idiosyncratic (I'm referring to the weird extra quotes - you will know what I mean), but despite that, to me it was worth it a hundred times over: Isabelle/HOL has the most amazing proof automation I've seen. Quite often it will find proofs all by itself, automatically using hundreds of theorems it already knows or that you have proved before, and sometimes it will even spit out a human-readable proof, which teaches you how the proof can be done (useful when you are stumped and have no clue how to move forward).
Isabelle/HOL can also be configured to automatically find errors in your specification and in your implementation - it has built-in automatic quickcheck / nitpick functionality, which can often tell you that you're trying to prove something that is not provable even before you attempt to prove it. IMHO, all of these things contribute to a significantly better user experience (as a newbie, at least).
If you are interested, give it a try. I highly recommend it!
Isabelle/HOL has much better proof automation than any prover based on Curry/Howard. That's primarily because HOL is a classical logic, while Curry/Howard systems are constructive (yes, I know we can do classical proofs inside constructive logic, e.g. with proof irrelevance). Almost all automated provers are also classical. Getting comparably powerful proof automation in Curry/Howard-based systems is an open research problem.
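For what it's worth, here's a minimal Lean 4 sketch (my own, just for illustration) of what "classical proofs inside constructive logic" looks like in practice: the excluded middle is something you opt into via axioms rather than something the core calculus provides.

    -- Double-negation elimination isn't provable in the core
    -- (constructive) calculus; it follows once the classical axioms
    -- (Classical.choice, and hence Classical.em) are assumed.
    theorem not_not_elim (p : Prop) (h : ¬¬p) : p :=
      (Classical.em p).elim id (fun hnp => absurd hnp h)

    #print axioms not_not_elim  -- reports the classical axiom dependency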
> weird extra quotes
That's because Isabelle is not a prover but a 'meta-prover' used for implementing provers, e.g. Isabelle/HOL. In practice, HOL is the only logic implemented in Isabelle with enough automation to be viable, so we don't really use Isabelle's 'meta-prover' facilities.
> Getting comparably powerful proof automation in Curry/Howard-based systems is an open research problem.
Isabelle/HOL is essentially isomorphic to a subset of Lean when we assume classical axioms, and any automation you can write for Isabelle, you can write for that subset of Lean. It is often easy to generalize automation techniques to support the rest of Lean (i.e. dependent types) as well, but sometimes doing so may introduce extra complexity or preclude the use of an indexing data structure. See https://arxiv.org/abs/1701.04391 for an example of generalizing a procedure (congruence closure) to support dependent types while introducing very little overhead.
There are older papers that describe Lean's metatheory at https://leanprover.github.io/publications/ (the first paper, and the one on elaboration). Unfortunately, they both describe the version present in Lean 2, which still had the HoTT support we removed in Lean 3.
At a high level our metatheory is very similar to Coq's, with a few important departures. A big difference is the removal of pattern matching and fixpoints from the kernel; instead we have primitive recursors/eliminators/recursion principles.
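To make the "recursors instead of pattern matching" point concrete, here is a small Lean 4 sketch (mine, not from the papers above): a function written directly against the primitive recursor, which is the shape the kernel actually checks; surface-level match definitions get elaborated down to this form.

    -- Addition on Nat via the primitive recursor Nat.rec rather than by
    -- pattern matching. The motive is constant because the result type
    -- doesn't depend on the Nat being recursed over. Marked noncomputable
    -- since the compiler generates code from the match form, not from
    -- bare recursors.
    noncomputable def add (m : Nat) : Nat → Nat :=
      Nat.rec (motive := fun _ => Nat) m (fun _ ih => Nat.succ ih)

    example : add 2 3 = 5 := rfl  -- the kernel can still reduce it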
The lead author of Lean (Leo de Moura) has built some of the world's most advanced automated reasoning tools, and his goal has always been to unify type theory and automation. Our goal is to have the benefits of dependent type theory coupled with the high level of automation present in SMT solvers and in tools like Isabelle.
De Moura is giving a talk called "Metaprogramming with Dependent Type Theory" tomorrow in Cambridge, at the workshop on computer-aided proofs. I wanted to go, but won't be able to attend. I guess the talk will mostly be about the content of the paper, so now I don't feel so bad for being unable to attend.
Lean is LCF style in the sense that there is a small kernel, and all proofs written by the user and generated by the automation are checked by this kernel. This kernel (i.e. type checker) is much smaller than that of Coq or Agda.
It is _not_ LCF style in the sense that there is only a "theorem" datatype with proof operations. Lean proofs are type-checked expressions.
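A tiny Lean 4 illustration of that last point (my own sketch): a proof is just an expression whose type is the statement being proved, and you can inspect both the elaborated term and the axioms it ultimately rests on.

    -- A proof is an ordinary term; the kernel type-checks it like any
    -- other expression.
    theorem one_add_one : 1 + 1 = 2 := rfl

    #print one_add_one          -- shows the underlying proof term
    #print axioms one_add_one   -- shows the axioms it depends on (none here)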
It is hard to find a terse description of the Calculus of Constructions or CIC. Lean's logic essentially consists of Martin-Löf type theory + (noncumulative) universes with impredicative Prop + inductive types (where nested and mutual induction are compiled down to simpler forms with only one eliminator) + quotient types + function and Prop extensionality, and optionally choice to gain a classical logic. Most other features, like Agda-style dependent pattern matching or mutual and nested inductive types, are compiled down to these more basic ones.
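For reference, most of the ingredients listed above are directly visible in Lean 4; the #check lines below just display the relevant built-in constants (a quick sketch, nothing more).

    -- Quotients, the extensionality principles, and the optional choice
    -- axiom that makes the logic classical.
    #check @Quot.mk            -- quotient types are built in
    #check @Quot.sound         -- r a b → Quot.mk r a = Quot.mk r b
    #check @propext            -- (a ↔ b) → a = b
    #check @funext             -- (∀ x, f x = g x) → f = g, derived via quotients
    #check @Classical.choice   -- Nonempty α → α, the optional classical axiom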
You mean that there is NOT only a "theorem" datatype?
In contrast to Curry/Howard provers, the LCF approach forgets proofs; it guarantees soundness by giving you access to proof rules only through the "theorem" datatype (which is the key trusted computing base). To be sure, the "theorem" datatype may internally maintain a proof object (e.g. for the purposes of program extraction), but that's not necessary.
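To make the contrast concrete, here is a toy sketch of the LCF idea (written in Lean 4 only because it's the language at hand; this is not how Lean itself works): theorems are values of an abstract datatype, the only way to build one is through functions implementing the inference rules, and no proof object needs to be kept around.

    -- A miniature object logic: atoms and conjunction.
    inductive Form where
      | atom : String → Form
      | conj : Form → Form → Form

    -- The "theorem" datatype. In a real LCF kernel its constructor would
    -- be hidden behind a module boundary, so the rule functions below are
    -- the only way to obtain a Thm. Note that a Thm stores no proof.
    structure Thm where
      concl : Form

    -- The inference rules of the toy logic: the trusted computing base.
    def axiomRule (name : String) : Thm :=
      { concl := Form.atom name }

    def conjIntro (t1 t2 : Thm) : Thm :=
      { concl := Form.conj t1.concl t2.concl }

    def conjElimLeft : Thm → Option Thm
      | { concl := Form.conj a _ } => some { concl := a }
      | _ => none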
Simplifying a bit, an algorithm is a function of type A -> B. If you have two implementations, f1 :: A -> B and f2 :: A -> B, say one slow and simple (f1), one fast and complicated (f2), then you can verify that f2 does what it is supposed to do in two steps:
1. Show that for all a in A we have f1 a == f2 a.
2. Show that f1 meets the specification.
If you trust that f1 does what it is supposed to do, then you can skip (2).
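Concretely, step 1 might look like this in Lean 4 (a small toy sketch of mine, using list reversal: a naive version as f1 and a tail-recursive one as f2):

    -- f1: slow but obviously correct.
    def revSlow {α : Type} : List α → List α
      | []      => []
      | x :: xs => revSlow xs ++ [x]

    -- f2: faster (tail-recursive with an accumulator), less obvious.
    def revFastAux {α : Type} : List α → List α → List α
      | [],      acc => acc
      | x :: xs, acc => revFastAux xs (x :: acc)

    def revFast {α : Type} (xs : List α) : List α :=
      revFastAux xs []

    -- Step 1: the two implementations agree on every input.
    theorem revFastAux_eq {α : Type} (xs acc : List α) :
        revFastAux xs acc = revSlow xs ++ acc := by
      induction xs generalizing acc with
      | nil => simp [revFastAux, revSlow]
      | cons x xs ih => simp [revFastAux, revSlow, ih, List.append_assoc]

    theorem rev_equiv {α : Type} (xs : List α) : revFast xs = revSlow xs := by
      simp [revFast, revFastAux_eq]

Step 2 would then be a separate proof that revSlow meets whatever specification of "reverse" you care about.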