(* Copyright (c) 2008, Adam Chlipala
 * 
 * This work is licensed under a
 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
 * Unported License.
 * The license text is available at:
 *   http://creativecommons.org/licenses/by-nc-nd/3.0/
 *)

(* begin hide *)
Require Import List.

Require Import Tactics.

Set Implicit Arguments.
(* end hide *)


(** %\chapter{Inductive Predicates}% *)

(** The so-called "Curry-Howard Correspondence" states a formal connection between functional programs and mathematical proofs.  In the last chapter, we snuck in a first introduction to this subject in Coq.  Witness the close similarity between the types [unit] and [True] from the standard library: *)

Print unit.
(** [[

Inductive unit : Set :=  tt : unit
]] *)

Print True.
(** [[

Inductive True : Prop :=  I : True
]] *)

(** Recall that [unit] is the type with only one value, and [True] is the proposition that always holds.  Despite this superficial difference between the two concepts, in both cases we can use the same inductive definition mechanism.  The connection goes further than this.  We see that we arrive at the definition of [True] by replacing [unit] by [True], [tt] by [I], and [Set] by [Prop].  The first two of these differences are superficial changes of names, while the third difference is the crucial one for separating programs from proofs.  A term [T] of type [Set] is a type of programs, and a term of type [T] is a program.  A term [T] of type [Prop] is a logical proposition, and its proofs are of type [T].

[unit] has one value, [tt].  [True] has one proof, [I].  Why distinguish between these two types?  Many people who have read about Curry-Howard in an abstract context and not put it to use in proof engineering answer that the two types in fact %\textit{%#<i>#should not#</i>#%}% be distinguished.  There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving.  There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving.

The essence of the argument is roughly this: to an engineer, not all functions of type [A -> B] are created equal, but all proofs of a proposition [P -> Q] are.  This idea is known as %\textit{%#<i>#proof irrelevance#</i>#%}%, and its formalizations in logics prevent us from distinguishing between alternate proofs of the same proposition.  Proof irrelevance is compatible with, but not derivable in, Gallina.  Apart from this theoretical concern, I will argue that it is most effective to do engineering with Coq by employing different techniques for programs versus proofs.  Most of this book is organized around that distinction, describing how to program, by applying standard functional programming techniques in the presence of dependent types; and how to prove, by writing custom Ltac decision procedures.

With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions.  We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus "datatypes."  This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently-typed proof terms for us for now.  A future chapter introduces more manual application of dependent types. *)
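
(** As a brief aside added to this rendition of the text, the statement of proof irrelevance can be written down directly in Gallina.  The hypothesis below mirrors the axiom declared in the standard library module [Coq.Logic.ProofIrrelevance]; stating it inside a section lets us display it without committing to the axiom globally. *)

Section ProofIrrelevanceStatement.
  (* Any two proofs of the same proposition are equal.  This principle is
     consistent with Gallina but not provable within it. *)
  Hypothesis proof_irrelevance : forall (P : Prop) (p1 p2 : P), p1 = p2.
End ProofIrrelevanceStatement.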


(** * Propositional Logic *)

(** Let us begin with a brief tour through the definitions of the connectives for propositional logic.  We will work within a Coq section that provides us with a set of propositional variables.  In Coq parlance, these are just terms of type [Prop]. *)

Section Propositional.
  Variables P Q R : Prop.

  (** In Coq, the most basic propositional connective is implication, written [->], which we have already used in almost every proof.  Rather than being defined inductively, implication is built into Coq as the function type constructor.

We have also already seen the definition of [True].  For a demonstration of a lower-level way of establishing proofs of inductive predicates, we turn to this trivial theorem. *)
  
  Theorem obvious : True.
(* begin thide *)
    apply I.
(* end thide *)
  Qed.

  (** We may always use the [apply] tactic to take a proof step based on applying a particular constructor of the inductive predicate that we are trying to establish.  Sometimes there is only one constructor that could possibly apply, in which case a shortcut is available: *)

(* begin thide *)
  Theorem obvious' : True.
    constructor.
  Qed.

(* end thide *)

  (** There is also a predicate [False], which is the Curry-Howard mirror image of [Empty_set] from the last chapter. *)

  Print False.
  (** [[

Inductive False : Prop :=
]] *)

  (** We can conclude anything from [False], doing case analysis on a proof of [False] in the same way we might do case analysis on, say, a natural number.  Since there are no cases to consider, any such case analysis succeeds immediately in proving the goal. *)

  Theorem False_imp : False -> 2 + 2 = 5.
(* begin thide *)
    destruct 1.
(* end thide *)
  Qed.
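
  (** As a small addition to the original text, the same fact can be proved with an explicit term instead of tactics: case analysis on a [False] hypothesis is just a [match] with no branches.  (The name [False_imp'] is ours.) *)

  Definition False_imp' : False -> 2 + 2 = 5 :=
    fun H : False => match H return (2 + 2 = 5) with end.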

  (** In a consistent context, we can never build a proof of [False].  In inconsistent contexts that appear in the course of proofs, it is usually easiest to proceed by demonstrating the inconsistency with an explicit proof of [False]. *)

  Theorem arith_neq : 2 + 2 = 5 -> 9 + 9 = 835.
(* begin thide *)
    intro.

    (** At this point, we have an inconsistent hypothesis [2 + 2 = 5], so the specific conclusion is not important.  We use the [elimtype] tactic to state a proposition, telling Coq that we wish to construct a proof of the new proposition and then prove the original goal by case analysis on the structure of the new auxiliary proof.  Since [False] has no constructors, [elimtype False] simply leaves us with the obligation to prove [False]. *)

    elimtype False.
    (** [[

  H : 2 + 2 = 5
  ============================
   False
]] *)

    (** For now, we will leave the details of this proof about arithmetic to [crush]. *)

    crush.
(* end thide *)
  Qed.

  (** A related notion to [False] is logical negation. *)

  Print not.
  (** [[

not = fun A : Prop => A -> False
     : Prop -> Prop
     ]] *)

  (** We see that [not] is just shorthand for implication of [False].  We can use that fact explicitly in proofs.  The syntax [~P] expands to [not P]. *)

  Theorem arith_neq' : ~ (2 + 2 = 5).
(* begin thide *)
    unfold not.

    (** [[

  ============================
   2 + 2 = 5 -> False
   ]] *)

    crush.
(* end thide *)
  Qed.

  (** We also have conjunction, which we introduced in the last chapter. *)

  Print and.
  (** [[

Inductive and (A : Prop) (B : Prop) : Prop :=  conj : A -> B -> A /\ B
]] *)

  (** The interested reader can check that [and] has a Curry-Howard doppelganger called [prod], the type of pairs.  However, it is generally most convenient to reason about conjunction using tactics.  An explicit proof of commutativity of [and] illustrates the usual suspects for such tasks.  [/\] is an infix shorthand for [and]. *)
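
  (** Before that proof, a quick aside added to this rendition: printing [prod] makes the analogy concrete.  The output shown is abbreviated and may vary across Coq versions. *)

  Print prod.
  (** [[

Inductive prod (A : Type) (B : Type) : Type :=  pair : A -> B -> A * B
]] *)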

  Theorem and_comm : P /\ Q -> Q /\ P.
(* begin thide *)
    (** We start by case analysis on the proof of [P /\ Q]. *)

    destruct 1.
    (** [[

  H : P
  H0 : Q
  ============================
   Q /\ P
   ]] *)

    (** Every proof of a conjunction provides proofs for both conjuncts, so we get a single subgoal reflecting that.  We can proceed by splitting this subgoal into a case for each conjunct of [Q /\ P]. *)

    split.
    (** [[
2 subgoals
  
  H : P
  H0 : Q
  ============================
   Q

subgoal 2 is:
 P
 ]] *)

    (** In each case, the conclusion is among our hypotheses, so the [assumption] tactic finishes the process. *)

    assumption.
    assumption.
(* end thide *)
  Qed.

  (** Coq disjunction is called [or] and abbreviated with the infix operator [\/]. *)

  Print or.
  (** [[

Inductive or (A : Prop) (B : Prop) : Prop :=
    or_introl : A -> A \/ B | or_intror : B -> A \/ B
]] *)

  (** We see that there are two ways to prove a disjunction: prove the first disjunct or prove the second.  The Curry-Howard analogue of this is the Coq [sum] type.  We can demonstrate the main tactics here with another proof of commutativity. *)
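
  (** Before that proof, another aside added to this rendition: [sum] can be printed for comparison with [or].  Again, the output shown is approximate. *)

  Print sum.
  (** [[

Inductive sum (A : Type) (B : Type) : Type :=
    inl : A -> A + B | inr : B -> A + B
]] *)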

  Theorem or_comm : P \/ Q -> Q \/ P.

(* begin thide *)
    (** As in the proof for [and], we begin with case analysis, though this time we are met by two cases instead of one. *)
    destruct 1.
(** [[

2 subgoals
  
  H : P
  ============================
   Q \/ P

subgoal 2 is:
 Q \/ P
]] *)

    (** We can see that, in the first subgoal, we want to prove the disjunction by proving its second disjunct.  The [right] tactic telegraphs this intent. *)

    right; assumption.

    (** The second subgoal has a symmetric proof.

       [[

1 subgoal
  
  H : Q
  ============================
   Q \/ P
   ]] *)

    left; assumption.
(* end thide *)
  Qed.


(* begin hide *)
(* In-class exercises *)

  Theorem contra : P -> ~P -> R.
(* begin thide *)
    unfold not.
    intros.
    elimtype False.
    apply H0.
    assumption.
(* end thide *)
  Admitted.

  Theorem and_assoc : (P /\ Q) /\ R -> P /\ (Q /\ R).
(* begin thide *)
    intros.
    destruct H.
    destruct H.
    split.
    assumption.
    split.
    assumption.
    assumption.
(* end thide *)
  Admitted.

  Theorem or_assoc : (P \/ Q) \/ R -> P \/ (Q \/ R).
(* begin thide *)
    intros.
    destruct H.
    destruct H.
    left.
    assumption.
    right.
    left.
    assumption.
    right.
    right.
    assumption.
(* end thide *)
  Admitted.

(* end hide *)


  (** It would be a shame to have to plod manually through all proofs about propositional logic.  Luckily, there is no need.  One of the most basic Coq automation tactics is [tauto], which is a complete decision procedure for constructive propositional logic.  (More on what "constructive" means in the next section.)  We can use [tauto] to dispatch all of the purely propositional theorems we have proved so far. *)

  Theorem or_comm' : P \/ Q -> Q \/ P.
(* begin thide *)
    tauto.
(* end thide *)
  Qed.

  (** Sometimes propositional reasoning forms important plumbing for the proof of a theorem, but we still need to apply some other smarts about, say, arithmetic.  [intuition] is a generalization of [tauto] that proves everything it can using propositional reasoning.  When some goals remain, it uses propositional laws to simplify them as far as possible.  Consider this example, which uses the list concatenation operator [++] from the standard library. *)

  Theorem arith_comm : forall ls1 ls2 : list nat,
    length ls1 = length ls2 \/ length ls1 + length ls2 = 6
    -> length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2.
(* begin thide *)
    intuition.

    (** A lot of the proof structure has been generated for us by [intuition], but the final proof depends on a fact about lists.  The remaining subgoal hints at what cleverness we need to inject. *)

    (** [[

  ls1 : list nat
  ls2 : list nat
  H0 : length ls1 + length ls2 = 6
  ============================
   length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2
   ]] *)

    (** We can see that we need a theorem about lengths of concatenated lists, which we proved last chapter and is also in the standard library. *)

    rewrite app_length.
    (** [[

  ls1 : list nat
  ls2 : list nat
  H0 : length ls1 + length ls2 = 6
  ============================
   length ls1 + length ls2 = 6 \/ length ls1 = length ls2
   ]] *)

    (** Now the subgoal follows by purely propositional reasoning.  That is, we could replace [length ls1 + length ls2 = 6] with [P] and [length ls1 = length ls2] with [Q] and arrive at a tautology of propositional logic. *)

    tauto.
(* end thide *)
  Qed.

  (** [intuition] is one of the main bits of glue in the implementation of [crush], so, with a little help, we can get a short automated proof of the theorem. *)

(* begin thide *)
  Theorem arith_comm' : forall ls1 ls2 : list nat,
    length ls1 = length ls2 \/ length ls1 + length ls2 = 6
    -> length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2.
    Hint Rewrite app_length : cpdt.

    crush.
  Qed.
(* end thide *)

End Propositional.


(** * What Does It Mean to Be Constructive? *)

(** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop].  [bool] is a datatype whose two values are [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False].  Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth?

The answer comes from the fact that Coq implements %\textit{%#<i>#constructive#</i>#%}% or %\textit{%#<i>#intuitionistic#</i>#%}% logic, in contrast to the %\textit{%#<i>#classical#</i>#%}% logic that you may be more familiar with.  In constructive logic, classical tautologies like [~ ~P -> P] and [P \/ ~P] do not always hold.  In general, we can only prove these tautologies when [P] is %\textit{%#<i>#decidable#</i>#%}%, in the sense of computability theory.  The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~P] from any proof of [P \/ ~P].  Since our proofs are just functional programs which we can run, this would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like "this particular Turing machine halts."

Hence the distinction between [bool] and [Prop].  Programs of type [bool] are computational by construction; we can always run them to determine their results.  Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply "run a [Prop] to determine its truth."

Constructive logic lets us define all of the logical connectives in an aesthetically-appealing way, with orthogonal inductive definitions.  That is, each connective is defined independently using a simple, shared mechanism.  Constructivity also enables a trick called %\textit{%#<i>#program extraction#</i>#%}%, where we write programs by phrasing them as theorems to be proved.  Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs.

We will see more about Coq's program extraction facility in a later chapter.  However, I think it is worth interjecting another warning at this point, following up on the prior warning about taking the Curry-Howard correspondence too literally.  It is possible to write programs by theorem-proving methods in Coq, but hardly anyone does it.  It is almost always most useful to maintain the distinction between programs and proofs.  If you write a program by proving a theorem, you are likely to run into algorithmic inefficiencies that you introduced in your proof to make it easier to prove.  It is a shame to have to worry about such situations while proving tricky theorems, and it is a happy state of affairs that you almost certainly will not need to, with the ideal of extracting programs from proofs being confined mostly to theoretical studies. *)
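
(** As a concrete illustration added to this rendition: although excluded middle is not provable in general, it is easy to prove for a particular decidable proposition, such as a closed arithmetic equation.  (The theorem name below is ours.) *)

Theorem em_decidable_instance : 2 + 2 = 4 \/ ~ (2 + 2 = 4).
  (* The equation is settled by computation, so no classical axiom is needed. *)
  left; reflexivity.
Qed.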


(** * First-Order Logic *)

(** The [forall] connective of first-order logic, which we have seen in many examples so far, is built into Coq.  Getting ahead of ourselves a bit, we can see it as the dependent function type constructor.  In fact, implication and universal quantification are just different syntactic shorthands for the same Coq mechanism.  A formula [P -> Q] is equivalent to [forall x : P, Q], where [x] does not appear in [Q].  That is, the "real" type of the implication says "for every proof of [P], there exists a proof of [Q]."

Existential quantification is defined in the standard library. *)

Print ex.
(** [[

Inductive ex (A : Type) (P : A -> Prop) : Prop :=
    ex_intro : forall x : A, P x -> ex P
    ]] *)

(** [ex] is parameterized by the type [A] that we quantify over, and by a predicate [P] over [A]s.  We prove an existential by exhibiting some [x] of type [A], along with a proof of [P x].  As usual, there are tactics that save us from worrying about the low-level details most of the time. *)

Theorem exist1 : exists x : nat, x + 1 = 2.
(* begin thide *)
  (** remove printing exists *)
  (** We can start this proof with a tactic [exists], which should not be confused with the formula constructor shorthand of the same name.  (In the PDF version of this document, the reverse 'E' appears instead of the text "exists" in formulas.) *)
  exists 1.

  (** The conclusion is replaced with a version using the existential witness that we announced. *)

  (** [[

  ============================
   1 + 1 = 2
   ]] *)

  reflexivity.
(* end thide *)
Qed.

(** printing exists $\exists$ *)

(** We can also use tactics to reason about existential hypotheses. *)

Theorem exist2 : forall n m : nat, (exists x : nat, n + x = m) -> n <= m.
(* begin thide *)
  (** We start by case analysis on the proof of the existential fact. *)
  destruct 1.
  (** [[

  n : nat
  m : nat
  x : nat
  H : n + x = m
  ============================
   n <= m
   ]] *)

  (** The goal has been replaced by a form where there is a new free variable [x], and where we have a new hypothesis that the body of the existential holds with [x] substituted for the old bound variable.  From here, the proof is just about arithmetic and is easy to automate. *)

  crush.
(* end thide *)
Qed.


(* begin hide *)
(* In-class exercises *)

Theorem forall_exists_commute : forall (A B : Type) (P : A -> B -> Prop),
  (exists x : A, forall y : B, P x y) -> (forall y : B, exists x : A, P x y).
(* begin thide *)
  intros.
  destruct H.
  exists x.
  apply H.
(* end thide *)
Admitted.

(* end hide *)


(** The tactic [intuition] has a first-order cousin called [firstorder].  [firstorder] proves many formulas when only first-order reasoning is needed, and it tries to perform first-order simplifications in any case.  First-order reasoning is much harder than propositional reasoning, so [firstorder] is much more likely than [intuition] to get stuck in a way that makes it run for long enough to be useless. *)
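
(** As a small example added to this rendition (the theorem name is ours), [firstorder] makes short work of simple first-order facts like this one: *)

Theorem exists_or_distrib : forall (A : Type) (P Q : A -> Prop),
  (exists x : A, P x \/ Q x) -> (exists x : A, P x) \/ (exists x : A, Q x).
  firstorder.
Qed.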


(** * Predicates with Implicit Equality *)

(** We start our exploration of a more complicated class of predicates with a simple example: an alternative way of characterizing when a natural number is zero. *)

Inductive isZero : nat -> Prop :=
| IsZero : isZero 0.

Theorem isZero_zero : isZero 0.
(* begin thide *)
  constructor.
(* end thide *)
Qed.

(** We can call [isZero] a %\textit{%#<i>#judgment#</i>#%}%, in the sense often used in the semantics of programming languages.  Judgments are typically defined in the style of %\textit{%#<i>#natural deduction#</i>#%}%, where we write a number of %\textit{%#<i>#inference rules#</i>#%}% with premises appearing above a solid line and a conclusion appearing below the line.  In this example, the sole constructor [IsZero] of [isZero] can be thought of as the single inference rule for deducing [isZero], with nothing above the line and [isZero 0] below it.  The proof of [isZero_zero] demonstrates how we can apply an inference rule.

The definition of [isZero] differs in an important way from all of the other inductive definitions that we have seen in this and the previous chapter.  Instead of writing just [Set] or [Prop] after the colon, here we write [nat -> Prop].  We saw examples of parameterized types like [list], but there the parameters appeared with names %\textit{%#<i>#before#</i>#%}% the colon.  Every constructor of a parameterized inductive type must have a range type that uses the same parameter, whereas the form we use here enables us to use different arguments to the type for different constructors.

For instance, [isZero] forces its argument to be [0].  We can see that the concept of equality is somehow implicit in the inductive definition mechanism.  The way this is accomplished is similar to the way that logic variables are used in Prolog, and it is a very powerful mechanism that forms a foundation for formalizing all of mathematics.  In fact, though it is natural to think of inductive types as folding in the functionality of equality, in Coq, the true situation is reversed, with equality defined as just another inductive type! *)

Print eq.
(** [[

Inductive eq (A : Type) (x : A) : A -> Prop :=  refl_equal : x = x
]] *)
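
(** As a quick aside added to this rendition, the fact that [eq] is an ordinary inductive type means that we can prove basic facts about it, like symmetry, by ordinary case analysis on an equality proof. *)

Theorem eq_sym' : forall (A : Type) (x y : A), x = y -> y = x.
  (* Destructing the equality proof identifies [y] with [x]. *)
  destruct 1.
  reflexivity.
Qed.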

(** [eq] is the type we get behind the scenes when uses of infix [=] are expanded.  We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type.  It is crucial that the second argument is untied to the parameter in the type of [eq]; otherwise, we would have to prove that two values are equal even to be able to state the possibility of equality, which would more or less defeat the purpose of having an equality proposition.  However, examining the type of equality's sole constructor [refl_equal], we see that we can only %\textit{%#<i>#prove#</i>#%}% equality when its two arguments are syntactically equal.  This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented treating [eq] as just another inductive type with a well-chosen definition.

Returning to the example of [isZero], we can see how to make use of hypotheses that use this predicate. *)

Theorem isZero_plus : forall n m : nat, isZero m -> n + m = n.
(* begin thide *)
  (** We want to proceed by cases on the proof of the assumption about [isZero]. *)
  destruct 1.
  (** [[

  n : nat
  ============================
   n + 0 = n
   ]] *)

  (** Since [isZero] has only one constructor, we are presented with only one subgoal.  The argument [m] to [isZero] is replaced with that type's argument from the single constructor [IsZero].  From this point, the proof is trivial. *)

  crush.
(* end thide *)
Qed.

(** Another example seems at first like it should admit an analogous proof, but in fact provides a demonstration of one of the most basic gotchas of Coq proving. *)

Theorem isZero_contra : isZero 1 -> False.
(* begin thide *)
  (** Let us try a proof by cases on the assumption, as in the last proof. *)
  destruct 1.
  (** [[

  ============================
   False
   ]] *)

  (** It seems that case analysis has not helped us much at all!  Our sole hypothesis disappears, leaving us, if anything, worse off than we were before.  What went wrong?  We have met an important restriction in tactics like [destruct] and [induction] when applied to types with arguments.  If the arguments are not already free variables, they will be replaced by new free variables internally before doing the case analysis or induction.  Since the argument [1] to [isZero] is replaced by a fresh variable, we lose the crucial fact that it is not equal to [0].

     Why does Coq use this restriction?  We will discuss the issue in detail in a future chapter, when we see the dependently-typed programming techniques that would allow us to write this proof term manually.  For now, we just say that the algorithmic problem of "logically complete case analysis" is undecidable when phrased in Coq's logic.  A few tactics and design patterns that we will present in this chapter suffice in almost all cases.  For the current example, what we want is a tactic called [inversion], which corresponds to the concept of inversion that is frequently used with natural deduction proof systems. *)

  Undo.
  inversion 1.
(* end thide *)
Qed.

(** What does [inversion] do?  Think of it as a version of [destruct] that does its best to take advantage of the structure of arguments to inductive types.  In this case, [inversion] completed the proof immediately, because it was able to detect that we were using [isZero] with an impossible argument.

Sometimes using [destruct] when you should have used [inversion] can lead to confusing results.  To illustrate, consider an alternate proof attempt for the last theorem. *)

Theorem isZero_contra' : isZero 1 -> 2 + 2 = 5.
  destruct 1.
  (** [[

  ============================
   1 + 1 = 4
   ]] *)

  (** What on earth happened here?  Internally, [destruct] replaced [1] with a fresh variable, and, trying to be helpful, it also replaced the occurrence of [1] within the unary representation of each number in the goal.  This has the net effect of decrementing each of these numbers.  If you are doing a proof and encounter a strange transmutation like this, there is a good chance that you should go back and replace a use of [destruct] with [inversion]. *)
Abort.
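
(** As the text suggests, switching to [inversion] dispatches the goal immediately, since [isZero 1] has no possible derivation.  (This extra example is an addition to the original.) *)

Theorem isZero_contra'' : isZero 1 -> 2 + 2 = 5.
  inversion 1.
Qed.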


(* begin hide *)
(* In-class exercises *)

(* EX: Define an inductive type capturing when a list has exactly two elements.  Prove that your predicate does not hold of the empty list, and prove that, whenever it holds of a list, the length of that list is two. *)

(* begin thide *)
Section twoEls.
  Variable A : Type.

  Inductive twoEls : list A -> Prop :=
  | TwoEls : forall x y, twoEls (x :: y :: nil).

  Theorem twoEls_nil : twoEls nil -> False.
    inversion 1.
  Qed.

  Theorem twoEls_two : forall ls, twoEls ls -> length ls = 2.
    inversion 1.
    reflexivity.
  Qed.
End twoEls.
(* end thide *)

(* end hide *)


(** * Recursive Predicates *)

(** We have already seen all of the ingredients we need to build interesting recursive predicates, like this predicate capturing even-ness. *)

Inductive even : nat -> Prop :=
| EvenO : even O
| EvenSS : forall n, even n -> even (S (S n)).

(** Think of [even] as another judgment defined by natural deduction rules.  [EvenO] is a rule with nothing above the line and [even O] below the line, and [EvenSS] is a rule with [even n] above the line and [even (S (S n))] below.

The proof techniques of the last section are easily adapted. *)

Theorem even_0 : even 0.
(* begin thide *)
  constructor.
(* end thide *)
Qed.

Theorem even_4 : even 4.
(* begin thide *)
  constructor; constructor; constructor.
(* end thide *)
Qed.

(** It is not hard to see that sequences of constructor applications like the above can get tedious.  We can avoid them using Coq's hint facility. *)

(* begin thide *)
Hint Constructors even.

Theorem even_4' : even 4.
  auto.
Qed.
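
(** A small addition to this rendition: [auto] only searches to a fixed depth (5 by default), so proving even-ness of larger numbers may require an explicit depth bound. *)

Theorem even_10 : even 10.
  (* Depth 7 comfortably covers the 6 constructor applications needed. *)
  auto 7.
Qed.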

(* end thide *)

Theorem even_1_contra : even 1 -> False.
(* begin thide *)
  inversion 1.
(* end thide *)
Qed.

Theorem even_3_contra : even 3 -> False.
(* begin thide *)
  inversion 1.
  (** [[

  H : even 3
  n : nat
  H1 : even 1
  H0 : n = 1
  ============================
   False
   ]] *)

  (** [inversion] can be a little overzealous at times, as we can see here with the introduction of the unused variable [n] and an equality hypothesis about it.  For more complicated predicates, though, adding such assumptions is critical to dealing with the undecidability of general inversion. *)

  inversion H1.
(* end thide *)
Qed.

(** We can also do inductive proofs about [even]. *)

Theorem even_plus : forall n m, even n -> even m -> even (n + m).
(* begin thide *)
  (** It seems a reasonable first choice to proceed by induction on [n]. *)
  induction n; crush.
  (** [[

  n : nat
  IHn : forall m : nat, even n -> even m -> even (n + m)
  m : nat
  H : even (S n)
  H0 : even m
  ============================
   even (S (n + m))
   ]] *)

  (** We will need to use the hypotheses [H] and [H0] somehow.  The most natural choice is to invert [H]. *)

  inversion H.
  (** [[

  n : nat
  IHn : forall m : nat, even n -> even m -> even (n + m)
  m : nat
  H : even (S n)
  H0 : even m
  n0 : nat
  H2 : even n0
  H1 : S n0 = n
  ============================
   even (S (S n0 + m))
   ]] *)

  (** Simplifying the conclusion brings us to a point where we can apply a constructor. *)
  simpl.
  (** [[

  ============================
   even (S (S (n0 + m)))
   ]] *)

  constructor.
  (** [[

  ============================
   even (n0 + m)
   ]] *)

  (** At this point, we would like to apply the inductive hypothesis, which is: *)
  (** [[

  IHn : forall m : nat, even n -> even m -> even (n + m)
  ]] *)

  (** Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn].  We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy.  Instead of inducting on the structure of [n], we should induct %\textit{%#<i>#on the structure of one of the [even] proofs#</i>#%}%.  This technique is commonly called %\textit{%#<i>#rule induction#</i>#%}% in programming language semantics.  In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with "normal" induction is apparent. *)

Restart.

  induction 1.
(** [[

  m : nat
  ============================
   even m -> even (0 + m)

subgoal 2 is:
 even m -> even (S (S n) + m)
 ]] *)

  (** The first case is easily discharged by [crush], based on the hint we added earlier to try the constructors of [even]. *)

  crush.

  (** Now we focus on the second case: *)
  intro.

  (** [[

  m : nat
  n : nat
  H : even n
  IHeven : even m -> even (n + m)
  H0 : even m
  ============================
   even (S (S n) + m)
   ]] *)

  (** We simplify and apply a constructor, as in our last proof attempt. *)

  simpl; constructor.
  (** [[

  ============================
   even (n + m)
   ]] *)

  (** Now we have an exact match with our inductive hypothesis, and the remainder of the proof is trivial. *)

  apply IHeven; assumption.

  (** In fact, [crush] can handle all of the details of the proof once we declare the induction strategy. *)

Restart.
  induction 1; crush.
Adam Chlipala's avatar
Adam Chlipala committed
695
(* end thide *)
Adam Chlipala's avatar
Adam Chlipala committed
696 697 698 699 700
Qed.

(** Induction on recursive predicates has similar pitfalls to those we encountered with inversion in the last section. *)

Theorem even_contra : forall n, even (S (n + n)) -> False.
(* begin thide *)
  induction 1.
  (** [[
  
  n : nat
  ============================
   False

subgoal 2 is:
 False
 ]] *)

  (** We are already sunk trying to prove the first subgoal, since the argument to [even] was replaced by a fresh variable internally.  This time, we find it easiest to prove this theorem by way of a lemma.  Instead of trusting [induction] to replace expressions with fresh variables, we do it ourselves, explicitly adding the appropriate equalities as new assumptions. *)
Abort.

Lemma even_contra' : forall n', even n' -> forall n, n' = S (n + n) -> False.
  induction 1; crush.

  (** At this point, it is useful to consider all cases of [n] and [n0] being zero or nonzero.  Only one of these cases has any trickiness to it. *)
  destruct n; destruct n0; crush.

  (** [[

  n : nat
  H : even (S n)
  IHeven : forall n0 : nat, S n = S (n0 + n0) -> False
  n0 : nat
  H0 : S n = n0 + S n0
  ============================
   False
   ]] *)

  (** At this point it is useful to use a theorem from the standard library, which we also proved with a different name in the last chapter. *)
  Check plus_n_Sm.
  (** [[

plus_n_Sm
     : forall n m : nat, S (n + m) = n + S m
     ]] *)

  rewrite <- plus_n_Sm in H0.

  (** The induction hypothesis lets us complete the proof. *)
  apply IHeven with n0; assumption.

  (** As usual, we can rewrite the proof to avoid referencing any locally-generated names, which makes our proof script more readable and more robust to changes in the theorem statement. *)
Restart.
  Hint Rewrite <- plus_n_Sm : cpdt.

  induction 1; crush;
    match goal with
      | [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0
    end; crush; eauto.
Qed.

(** We write the proof in a way that avoids the use of local variable or hypothesis names, using the [match] tactic form to do pattern-matching on the goal.  We use unification variables prefixed by question marks in the pattern, and we take advantage of the possibility to mention a unification variable twice in one pattern, to enforce equality between occurrences.  The hint to rewrite with [plus_n_Sm] in a particular direction saves us from having to figure out the right place to apply that theorem, and we also take critical advantage of a new tactic, [eauto].

[crush] uses the tactic [intuition], which, when it runs out of tricks to try using only propositional logic, by default tries the tactic [auto], which we saw in an earlier example.  [auto] attempts Prolog-style logic programming, searching through all proof trees up to a certain depth that are built only out of hints that have been registered with [Hint] commands.  Compared to Prolog, [auto] places an important restriction: it never introduces new unification variables during search.  That is, every time a rule is applied during proof search, all of its arguments must be deducible by studying the form of the goal.  [eauto] relaxes this restriction, at the cost of possibly exponentially greater running time.  In this particular case, we know that [eauto] has only a small space of proofs to search, so it makes sense to run it.  It is common in effectively-automated Coq proofs to see a bag of standard tactics applied to pick off the "easy" subgoals, finishing with [eauto] to handle the tricky parts that can benefit from ad-hoc exhaustive search.

The original theorem now follows trivially from our lemma. *)

Theorem even_contra : forall n, even (S (n + n)) -> False.
  intros; eapply even_contra'; eauto.
Qed.

(** We use a variant [eapply] of [apply] which has the same relationship to [apply] as [eauto] has to [auto].  [apply] only succeeds if all arguments to the rule being used can be determined from the form of the goal, whereas [eapply] will introduce unification variables for undetermined arguments.  [eauto] is able to determine the right values for those unification variables.

By considering an alternate attempt at proving the lemma, we can see another common pitfall of inductive proofs in Coq.  Imagine that we had tried to prove [even_contra'] with all of the [forall] quantifiers moved to the front of the lemma statement. *)

Lemma even_contra'' : forall n' n, even n' -> n' = S (n + n) -> False.
  induction 1; crush;
    match goal with
      | [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0
    end; crush; eauto.

  (** One subgoal remains: *)

  (** [[
  
  n : nat
  H : even (S (n + n))
  IHeven : S (n + n) = S (S (S (n + n))) -> False
  ============================
   False
   ]] *)

  (** We are out of luck here.  The inductive hypothesis is trivially true, since its assumption is false.  In the version of this proof that succeeded, [IHeven] had an explicit quantification over [n].  This is because the quantification of [n] %\textit{%#<i>#appeared after the thing we are inducting on#</i>#%}% in the theorem statement.  In general, quantified variables and hypotheses that appear before the induction object in the theorem statement stay fixed throughout the inductive proof.  Variables and hypotheses that are quantified after the induction object may be varied explicitly in uses of inductive hypotheses.

     Why should Coq implement [induction] this way?  One answer is that it avoids burdening this basic tactic with additional heuristic smarts, but that is not the whole picture.  Imagine that [induction] analyzed dependencies among variables and reordered quantifiers to preserve as much freedom as possible in later uses of inductive hypotheses.  This could make the inductive hypotheses more complex, which could in turn cause particular automation machinery to fail when it would have succeeded before.  In general, we want to avoid quantifiers in our proofs whenever we can, and that goal is furthered by the refactoring that the [induction] tactic forces us to do. *)
(* end thide *)
Abort.
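
(** To round off this discussion, here is one more example added to this rendition: [eapply] and [eauto] also cooperate when an existential witness must be invented.  [eapply ex_intro] introduces a unification variable for the witness, and [eauto] fills it in using the [even] constructor hints registered earlier. *)

Theorem even_witness : exists n, even (S (S n)).
  eapply ex_intro.
  eauto.
Qed.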


(* begin hide *)
(* In-class exercises *)

(* EX: Define a type [prop] of simple boolean formulas made up only of truth, falsehood, binary conjunction, and binary disjunction.  Define an inductive predicate [holds] that captures when [prop]s are valid, and define a predicate [falseFree] that captures when a [prop] does not contain the "false" formula.  Prove that every false-free [prop] is valid. *)

(* begin thide *)
Inductive prop : Set :=
| Tru : prop
| Fals : prop
| And : prop -> prop -> prop
| Or : prop -> prop -> prop.

Inductive holds : prop -> Prop :=
| HTru : holds Tru
| HAnd : forall p1 p2, holds p1 -> holds p2 -> holds (And p1 p2)
| HOr1 : forall p1 p2, holds p1 -> holds (Or p1 p2)
| HOr2 : forall p1 p2, holds p2 -> holds (Or p1 p2).

Inductive falseFree : prop -> Prop :=
| FFTru : falseFree Tru
| FFAnd : forall p1 p2, falseFree p1 -> falseFree p2 -> falseFree (And p1 p2)
| FFNot : forall p1 p2, falseFree p1 -> falseFree p2 -> falseFree (Or p1 p2).

Hint Constructors holds.

Theorem falseFree_holds : forall p, falseFree p -> holds p.
  induction 1; crush.
Qed.
(* end thide *)


(* EX: Define an inductive type [prop'] that is the same as [prop] but omits the possibility for falsehood.  Define a proposition [holds'] for [prop'] that is analogous to [holds].  Define a function [propify] for translating [prop']s to [prop]s.  Prove that, for any [prop'] [p], if [propify p] is valid, then so is [p]. *)

(* begin thide *)
Inductive prop' : Set :=
| Tru' : prop'
| And' : prop' -> prop' -> prop'
| Or' : prop' -> prop' -> prop'.

Inductive holds' : prop' -> Prop :=
| HTru' : holds' Tru'
| HAnd' : forall p1 p2, holds' p1 -> holds' p2 -> holds' (And' p1 p2)
| HOr1' : forall p1 p2, holds' p1 -> holds' (Or' p1 p2)
| HOr2' : forall p1 p2, holds' p2 -> holds' (Or' p1 p2).

Fixpoint propify (p : prop') : prop :=
  match p with
    | Tru' => Tru
    | And' p1 p2 => And (propify p1) (propify p2)
    | Or' p1 p2 => Or (propify p1) (propify p2)
  end.

Hint Constructors holds'.

Lemma propify_holds' : forall p', holds p' -> forall p, p' = propify p -> holds' p.
  induction 1; crush; destruct p; crush.
Qed.

Theorem propify_holds : forall p, holds (propify p) -> holds' p.
  intros; eapply propify_holds'; eauto.
Qed.
(* end thide *)

(* end hide *)


(** * Exercises *)

(** %\begin{enumerate}%#<ol>#

%\item%#<li># Prove these tautologies of propositional logic, using only the tactics [apply], [assumption], [constructor], [destruct], [intro], [intros], [left], [right], [split], and [unfold].
  %\begin{enumerate}%#<ol>#
    %\item%#<li># [(True \/ False) /\ (False \/ True)]#</li>#
    %\item%#<li># [P -> ~ ~P]#</li>#
    %\item%#<li># [P /\ (Q \/ R) -> (P /\ Q) \/ (P /\ R)]#</li>#
  #</ol> </li>#%\end{enumerate}%

  %\item%#<li># Prove the following tautology of first-order logic, using only the tactics [apply], [assert], [assumption], [destruct], [eapply], [eassumption], and %\textit{%#<tt>#exists#</tt>#%}%.  You will probably find [assert] useful for stating and proving an intermediate lemma, enabling a kind of "forward reasoning," in contrast to the "backward reasoning" that is the default for Coq tactics.  [eassumption] is a version of [assumption] that will do matching of unification variables.  Let some variable [T] of type [Set] be the set of individuals.  [x] is a constant symbol, [p] is a unary predicate symbol, [q] is a binary predicate symbol, and [f] is a unary function symbol.
%\begin{enumerate}%#<ol>#
    %\item%#<li># [p x -> (forall x, p x -> exists y, q x y) -> (forall x y, q x y -> q y (f y)) -> exists z, q z (f z)]#</li>#
  #</ol> </li>#%\end{enumerate}%

%\item%#<li># Define an inductive predicate capturing when a natural number is an integer multiple of either 6 or 10.  Prove that 13 does not satisfy your predicate, and prove that any number satisfying the predicate is not odd.  It is probably easiest to prove the second theorem by characterizing "odd-ness" as equality to [2 * n + 1] for some [n].#</li>#

%\item%#<li># Define a simple programming language, its semantics, and its typing rules, and then prove that well-typed programs cannot go wrong.  Specifically:
  %\begin{enumerate}%#<ol>#
    %\item%#<li># Define [var] as a synonym for the natural numbers.#</li>#
    %\item%#<li># Define an inductive type [exp] of expressions, containing natural number constants, natural number addition, pairing of two other expressions, extraction of the first component of a pair, extraction of the second component of a pair, and variables (based on the [var] type you defined).#</li>#
    %\item%#<li># Define an inductive type [cmd] of commands, containing expressions and variable assignments.  A variable assignment node should contain the variable being assigned, the expression being assigned to it, and the command to run afterward.#</li>#
    %\item%#<li># Define an inductive type [val] of values, containing natural number constants and pairings of values.#</li>#
    %\item%#<li># Define a type of variable assignments, which assign a value to each variable.#</li>#
    %\item%#<li># Define a big-step evaluation relation [eval], capturing what it means for an expression to evaluate to a value under a particular variable assignment.  "Big step" means that the evaluation of every expression should be derivable with a single instance of the inductive predicate you will define.  For instance, "[1 + 1] evaluates to [2] under assignment [va]" should be derivable for any assignment [va].#</li>#
    %\item%#<li># Define a big-step evaluation relation [run], capturing what it means for a command to run to a value under a particular variable assignment.  The value of a command is the result of evaluating its final expression.#</li>#
    %\item%#<li># Define a type of variable typings, which are like variable assignments, but map variables to types instead of values.  You might use polymorphism to share some code with your variable assignments.#</li>#
    %\item%#<li># Define typing judgments for expressions, values, and commands.  The expression and command cases will be in terms of a typing assignment.#</li>#
    %\item%#<li># Define a predicate [varsType] to express when a variable assignment and a variable typing agree on the types of variables.#</li>#
    %\item%#<li># Prove that any expression that has type [t] under variable typing [vt] evaluates under variable assignment [va] to some value that also has type [t] in [vt], as long as [va] and [vt] agree.#</li>#
    %\item%#<li># Prove that any command that has type [t] under variable typing [vt] evaluates under variable assignment [va] to some value that also has type [t] in [vt], as long as [va] and [vt] agree.#</li>#
  #</ol> </li>#%\end{enumerate}%
  A few hints that may be helpful:
  %\begin{enumerate}%#<ol>#
    %\item%#<li># One easy way of defining variable assignments and typings is to define both as instances of a polymorphic map type.  The map type at parameter [T] can be defined to be the type of arbitrary functions from variables to [T].  A helpful function for implementing insertion into such a functional map is [eq_nat_dec], which you can make available with [Require Import Arith.].  [eq_nat_dec] has a dependent type that tells you that it makes accurate decisions on whether two natural numbers are equal, but you can use it as if it returned a boolean, e.g., [if eq_nat_dec n m then E1 else E2].#</li>#
    %\item%#<li># If you follow the last hint, you may find yourself writing a proof that involves an expression with [eq_nat_dec] that you would like to simplify.  Running [destruct] on the particular call to [eq_nat_dec] should do the trick.  You can automate this advice with a piece of Ltac: [[

match goal with
  | [ |- context[eq_nat_dec ?X ?Y] ] => destruct (eq_nat_dec X Y)
end
]] #</li>#
    %\item%#<li># You probably do not want to use an inductive definition for compatibility of variable assignments and typings.#</li>#
    %\item%#<li># The [Tactics] module from this book contains a variant [crush'] of [crush].  [crush'] takes two arguments.  The first argument is a list of lemmas and other functions to be tried automatically in "forward reasoning" style, where we add new facts without being sure yet that they link into a proof of the conclusion.  The second argument is a list of predicates on which inversion should be attempted automatically.  For instance, running [crush' (lemma1, lemma2) pred] will search for chances to apply [lemma1] and [lemma2] to hypotheses that are already available, adding the new concluded fact if suitable hypotheses can be found.  Inversion will be attempted on any hypothesis using [pred], but only those inversions that narrow the field of possibilities to one possible rule will be kept.  The format of the list arguments to [crush'] is that you can pass an empty list as [tt], a singleton list as the unadorned single element, and a multiple-element list as a tuple of the elements.#</li>#
    %\item%#<li># If you want [crush'] to apply polymorphic lemmas, you may have to do a little extra work, if the type parameter is not a free variable of your proof context (so that [crush'] does not know to try it).  For instance, if you define a polymorphic map insert function [assign] of some type [forall T : Set, ...], and you want particular applications of [assign] added automatically with type parameter [U], you would need to include [assign] in the lemma list as [assign U] (if you have implicit arguments off) or [assign (T := U)] or [@assign U] (if you have implicit arguments on).#</li>#
  #</ol> </li>#%\end{enumerate}%

#</li>#

#</ol>#%\end{enumerate}% *)