Are things like "the output must be equal to the output of this other thing" easily expressed to a prover? If so this seems like it'd be awesome for optimisation as well.


Yes, this is easy to express in a prover. A naive implementation can always serve as a specification for a sophisticated one.
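For a concrete flavor, here is a minimal sketch in Lean 4 (my own illustrative names, not from any particular library): a naive quadratic list reversal serves as the specification for an accumulator-based linear one, and "the output must be equal to the output of this other thing" is a theorem proved by induction.

```lean
-- Naive reversal: quadratic, but obviously correct - the specification.
def rev : List α → List α
  | []      => []
  | x :: xs => rev xs ++ [x]

-- Accumulator-based reversal: linear - the sophisticated implementation.
def revAcc : List α → List α → List α
  | [],      acc => acc
  | x :: xs, acc => revAcc xs (x :: acc)

def fastRev (xs : List α) : List α := revAcc xs []

-- The generalized invariant, proved by induction on the list:
theorem revAcc_rev (xs acc : List α) : revAcc xs acc = rev xs ++ acc := by
  induction xs generalizing acc with
  | nil => rfl
  | cons x xs ih => simp [revAcc, rev, ih, List.append_assoc]

-- "The outputs must be equal":
theorem fastRev_eq_rev (xs : List α) : fastRev xs = rev xs := by
  simp [fastRev, revAcc_rev]
```

The usual wrinkle, visible above, is that you rarely prove the equality directly: you first generalize it (here, over the accumulator) so that the induction goes through.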


This sounds very interesting indeed. Can this be done with Coq? Do you have any tutorials or such on doing this?


It can definitely be done with Coq. The typical recommendation is to read through and do the exercises in the freely-available Software Foundations book [1], which will teach you how to do that in Coq.

That said, while it was educational and an extremely good introduction to interactive theorem proving, as I went through the book I became increasingly disappointed with how laborious Coq-style proofs were, as you often had to prove even the simplest things step by step. You also had to go through many chapters before finally arriving at proving the correctness of optimizations.

I later found an alternative which I can't recommend highly enough. If you read through and do the exercises in the also freely-available Concrete Semantics book [2], you will also learn how to do exactly what you want, but you will learn it much sooner and in a much easier way.

In the latter book, you will be using the Isabelle/HOL prover instead of the Coq prover, though. The language looks slightly more idiosyncratic (I'm referring to the weird extra quotes - you will know what I mean), but despite that, to me it was worth it a hundred times over: Isabelle/HOL has the most amazing proof automation I've seen. It will quite often find proofs all by itself, automatically drawing on hundreds of theorems it already knows or that you have proved before, and sometimes it will even spit out human-readable proofs, which teach you how the proof can be done (useful when you are stumped and have no clue how to move forward).

Isabelle/HOL can also be configured to automatically find errors in your specification and in your implementation - it has built-in automatic quickcheck / nitpick functionality, which can often tell you that you're trying to prove something that is not provable even before you attempt to prove it. IMHO, all of these things contribute to a significantly better user experience (as a newbie, at least).

If you are interested, give it a try. I highly recommend it!

[1] https://softwarefoundations.cis.upenn.edu/current/index.html

[2] http://concrete-semantics.org/


I second this.

Isabelle/HOL has much better proof automation than any prover based on Curry/Howard. That's primarily because HOL is a classical logic, while Curry/Howard is constructive (yes, I know we can do classical proofs inside constructive logic, e.g. with proof irrelevance). Almost all automated provers are also classical. Having comparably powerful proof automation to Curry/Howard based systems is an open research problem.
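To illustrate the parenthetical above: here is how classical reasoning is recovered inside a constructive Curry/Howard system, sketched in Lean 4, where the classical principles live in the Classical namespace and are derived from the choice axiom.

```lean
-- Lean's core logic is constructive; excluded middle is derived
-- from the choice axiom in the Classical namespace.
example (p : Prop) : p ∨ ¬p := Classical.em p

-- Double-negation elimination, another classical principle:
example (p : Prop) (h : ¬¬p) : p := Classical.byContradiction h
```

Running `#print axioms` on such a proof would show its dependency on `Classical.choice`, which is how these systems keep track of where constructivity is given up.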

   weird extra quotes
That's because Isabelle is not a prover but a 'meta-prover' used for implementing provers, e.g. Isabelle/HOL. In practice, HOL is the only logic implemented in Isabelle with enough automation to be viable, so Isabelle's 'meta-prover' facilities see little real use.


> Having comparably powerful proof automation to Curry/Howard based systems is an open research problem.

Isabelle/HOL is essentially isomorphic to a subset of Lean when we assume classical axioms, and any automation you can write for Isabelle, you can write for that subset of Lean. It is often easy to generalize automation techniques to support the rest of Lean (i.e. dependent types) as well, but sometimes doing so may introduce extra complexity or preclude the use of an indexing data structure. See https://arxiv.org/abs/1701.04391 for an example of generalizing a procedure (congruence closure) to support dependent types that introduces very little overhead.


This is interesting. It will take me some time to digest this paper.

As an aside: is there a terse description of Lean's type-theory? Also: Lean is Curry/Howard based, it's not an LCF-style architecture?


There are older papers that describe Lean's meta-theory: see https://leanprover.github.io/publications/, the first paper and the one on elaboration. Unfortunately, they both describe the version present in Lean 2, which still had the HoTT support we removed in Lean 3.

The new tactic paper, set to appear at ICFP '17, also briefly describes Lean's meta-theory. https://leanprover.github.io/papers/tactic.pdf

At a high level our meta-theory is very similar to Coq's, with a few important departures. A big difference is the removal of pattern matching and fixpoints from the kernel; instead we have primitive recursors/eliminators/recursion principles.
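As an illustration (a sketch in Lean, not code from any of the papers): a definition the user writes with pattern matching only ever reaches the kernel in recursor form, roughly like this.

```lean
-- What the user writes: structural recursion via pattern matching.
def add : Nat → Nat → Nat
  | m, 0     => m
  | m, n + 1 => add m n + 1

-- Roughly what the kernel sees: the same function via the primitive
-- recursor Nat.rec. (The compiler can't execute bare recursors, hence
-- noncomputable, but the kernel reduces them just fine.)
noncomputable def add' (m n : Nat) : Nat :=
  Nat.rec (motive := fun _ => Nat) m (fun _ ih => ih + 1) n

example : add' 2 3 = 5 := rfl
```

Keeping only the recursors in the kernel keeps the trusted type checker small; the pattern-matching and fixpoint conveniences are elaborated away outside it.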

The lead author of Lean (Leo de Moura) has built some of the world's most advanced automated reasoning tools, and his goal has always been to unify type theory and automation. Our goal is to have the benefits of dependent type theory coupled with the high level of automation present in SMT solvers and in tools like Isabelle.


Thanks for the link to the paper.

De Moura is giving a talk tomorrow in Cambridge, at a workshop on computer-aided proofs, called "Metaprogramming with Dependent Type Theory". I wanted to go, but won't be able to attend. I guess that talk will mostly be about the content of the paper. Now I don't feel so bad for being unable to attend.


Lean is LCF style in the sense that there is a small kernel, and all proofs written by the user and generated by the automation are checked by this kernel. This kernel (i.e. type checker) is much smaller than that of Coq or Agda.

It is _not_ LCF style in the sense that there is only a "theorem" datatype with proof operations. Lean proofs are type-checked expressions.

It is hard to find a terse description of the Calculus of Constructions or CIC. Lean's logic essentially consists of Martin-Löf type theory + (noncumulative) universes with impredicative Prop + inductive types (where nested and mutual induction is compiled down to a simpler form with only one eliminator) + quotient types + function and Prop extensionality, and optionally choice to gain a classical logic. Most other features, like Agda-style dependent pattern matching or mutual and nested inductive types, are compiled down to these more basic features.
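A few of these ingredients, sketched in Lean (the Parity example and its names are mine, purely illustrative):

```lean
-- Function and Prop extensionality:
example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g := funext h
example (p q : Prop) (h : p ↔ q) : p = q := propext h

-- A quotient type: natural numbers up to parity.
def parityRel (m n : Nat) : Prop := m % 2 = n % 2

def Parity : Type := Quot parityRel

def Parity.mk (n : Nat) : Parity := Quot.mk parityRel n

-- Related representatives give equal quotient elements:
example : Parity.mk 1 = Parity.mk 3 := Quot.sound rfl
```

Quotients being primitive (rather than encoded) is one of the notable differences from Coq's CIC.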


    there is only a "theorem" datatype 
You mean that there is NOT only a "theorem" datatype? In contrast to Curry/Howard provers, the LCF approach forgets proofs, it guarantees soundness by giving you access to proof rules only through the "theorem" datatype (which is the key trusted computing base). To be sure the "theorem" datatype may internally maintain a proof object (e.g. for the purposes of program extraction), but that's not necessary.


Sorry, I was unclear. I meant that most LCF style theorem provers only have one theorem datatype to carry proof information.


   Can this be done with Coq?
This can be done with any prover.

Simplifying a bit, an algorithm is a function of type A -> B. If you have two implementations, f1 :: A -> B and f2 :: A -> B, say one slow and simple (f1), one fast and complicated (f2), then you can verify that f2 does what it is supposed to do in two steps:

1. Show that for all a in A we have f1 a == f2 a.

2. Show that f1 meets the specification.

If you trust that f1 does what it is supposed to do, then you can skip (2).
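The two steps compose mechanically. In Lean-style notation (P, f1, f2 are placeholders, not anything standard): agreement plus a spec for f1 yields the spec for f2.

```lean
-- P a b : "output b is a correct answer for input a" - a hypothetical spec.
theorem transfer {A B : Type} (P : A → B → Prop) (f1 f2 : A → B)
    (step1 : ∀ a, f1 a = f2 a)   -- step 1: the implementations agree
    (step2 : ∀ a, P a (f1 a)) :  -- step 2: the naive one meets the spec
    ∀ a, P a (f2 a) := by
  intro a
  rw [← step1 a]
  exact step2 a
```

This is also why skipping (2) is sound if you trust f1: the equality proof transfers whatever correctness property f1 has, stated or not.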



