# HG changeset patch
# User nipkow
# Date 1412095441 -7200
# Node ID d37c712cc01bd5f996f69909d3cf032c60bfba3e
# Parent  fb6508a73261b93b8de05fb9030cc3adb208e2d3
tuned

diff -r fb6508a73261 -r d37c712cc01b src/Doc/Prog_Prove/Isar.thy
--- a/src/Doc/Prog_Prove/Isar.thy	Tue Sep 30 16:40:03 2014 +0200
+++ b/src/Doc/Prog_Prove/Isar.thy	Tue Sep 30 18:44:01 2014 +0200
@@ -546,7 +546,7 @@
 
 Sometimes one would like to prove some lemma locally within a proof, a
 lemma that shares the current context of assumptions but that
-has its own assumptions and is generalised over its locally fixed
+has its own assumptions and is generalized over its locally fixed
 variables at the end. This is what a \concept{raw proof block} does:
 \begin{quote}\index{$IMP053@@{text"{ ... }"} (proof block)}
 @{text"{"} \isacom{fix} @{text"x\<^sub>1 \<dots> x\<^sub>n"}\\
@@ -1031,7 +1031,7 @@
 \isacom{lemma} @{text[source]"I x y z \<Longrightarrow> x = r \<Longrightarrow> y = s \<Longrightarrow> z = t \<Longrightarrow> \<dots>"}
 \end{isabelle}
 Standard rule induction will work fine now, provided the free variables in
-@{text r}, @{text s}, @{text t} are generalised via @{text"arbitrary"}.
+@{text r}, @{text s}, @{text t} are generalized via @{text"arbitrary"}.
 However, induction can do the above transformation for us, behind the curtains,
 so we never need to see the expanded version of the lemma. This is what we need
 to write:
diff -r fb6508a73261 -r d37c712cc01b src/Doc/Prog_Prove/Logic.thy
--- a/src/Doc/Prog_Prove/Logic.thy	Tue Sep 30 16:40:03 2014 +0200
+++ b/src/Doc/Prog_Prove/Logic.thy	Tue Sep 30 18:44:01 2014 +0200
@@ -793,7 +793,7 @@
 \subsection*{Exercises}
 
 \begin{exercise}
-Formalise the following definition of palindromes
+Formalize the following definition of palindromes
 \begin{itemize}
 \item The empty list and a singleton list are palindromes.
 \item If @{text xs} is a palindrome, so is @{term "a # xs @ [a]"}.
diff -r fb6508a73261 -r d37c712cc01b src/Doc/Prog_Prove/Types_and_funs.thy
--- a/src/Doc/Prog_Prove/Types_and_funs.thy	Tue Sep 30 16:40:03 2014 +0200
+++ b/src/Doc/Prog_Prove/Types_and_funs.thy	Tue Sep 30 18:44:01 2014 +0200
@@ -249,7 +249,7 @@
 if the function is defined by recursion on argument number $i$.}
 \end{quote}
 The key heuristic, and the main point of this section, is to
-\emph{generalise the goal before induction}.
+\emph{generalize the goal before induction}.
 The reason is simple: if the goal is too specific,
 the induction hypothesis is too weak to allow the induction step to go
 through. Let us illustrate the idea with an example.
@@ -294,10 +294,10 @@
 argument,~@{term"[]"}, prevents it from rewriting the conclusion.
 This example suggests a heuristic:
 \begin{quote}
-\emph{Generalise goals for induction by replacing constants by variables.}
+\emph{Generalize goals for induction by replacing constants by variables.}
 \end{quote}
 Of course one cannot do this na\"{\i}vely: @{prop"itrev xs ys = rev xs"} is
-just not true. The correct generalisation is
+just not true. The correct generalization is
 *};
 (*<*)oops;(*>*)
 lemma "itrev xs ys = rev xs @ ys"
@@ -305,7 +305,7 @@
 txt{*
 If @{text ys} is replaced by @{term"[]"}, the right-hand side simplifies to
 @{term"rev xs"}, as required.
-In this instance it was easy to guess the right generalisation.
+In this instance it was easy to guess the right generalization.
 Other situations can require a good deal of creativity.
 
 Although we now have two variables, only @{text xs} is suitable for
@@ -313,12 +313,12 @@
 not there:
 @{subgoals[display,margin=65]}
 The induction hypothesis is still too weak, but this time it takes no
-intuition to generalise: the problem is that the @{text ys}
+intuition to generalize: the problem is that the @{text ys}
 in the induction hypothesis is fixed,
 but the induction hypothesis needs to be applied with
 @{term"a # ys"} instead of @{text ys}. Hence we prove the theorem
 for all @{text ys} instead of a fixed one. We can instruct induction
-to perform this generalisation for us by adding @{text "arbitrary: ys"}\index{arbitrary@@{text"arbitrary:"}}.
+to perform this generalization for us by adding @{text "arbitrary: ys"}\index{arbitrary@@{text"arbitrary:"}}.
 *}
 (*<*)oops
 lemma "itrev xs ys = rev xs @ ys"
@@ -334,12 +334,12 @@
 done
 
 text{*
-This leads to another heuristic for generalisation:
+This leads to another heuristic for generalization:
 \begin{quote}
-\emph{Generalise induction by generalising all free
+\emph{Generalize induction by generalizing all free
 variables\\ {\em(except the induction variable itself)}.}
 \end{quote}
-Generalisation is best performed with @{text"arbitrary: y\<^sub>1 \<dots> y\<^sub>k"}.
+Generalization is best performed with @{text"arbitrary: y\<^sub>1 \<dots> y\<^sub>k"}.
 This heuristic prevents trivial failures like the one above.
 However, it should not be applied blindly.
 It is not always required, and the additional quantifiers can complicate
diff -r fb6508a73261 -r d37c712cc01b src/Doc/Prog_Prove/document/intro-isabelle.tex
--- a/src/Doc/Prog_Prove/document/intro-isabelle.tex	Tue Sep 30 16:40:03 2014 +0200
+++ b/src/Doc/Prog_Prove/document/intro-isabelle.tex	Tue Sep 30 18:44:01 2014 +0200
@@ -1,5 +1,5 @@
 Isabelle is a generic system for
-implementing logical formalisms, and Isabelle/HOL is the specialisation
+implementing logical formalisms, and Isabelle/HOL is the specialization
 of Isabelle for HOL, which abbreviates Higher-Order Logic. We introduce
 HOL step by step following the equation
 \[ \mbox{HOL} = \mbox{Functional Programming} + \mbox{Logic}. \]
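The generalization heuristic discussed in the Types_and_funs.thy hunks above can be reproduced as a small standalone theory. This is a minimal sketch, not part of the changeset: the itrev equations and the generalized lemma statement follow the Prog_Prove text quoted in the diff, while the theory name, the lemma name, and the comments are illustrative only.

theory Itrev_Sketch
imports Main
begin

(* Tail-recursive list reversal, accumulating the result in ys *)
fun itrev :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
  "itrev []       ys = ys" |
  "itrev (x # xs) ys = itrev xs (x # ys)"

(* Too specific: for "itrev xs [] = rev xs" the induction hypothesis
   only speaks about the accumulator [], so the step case fails. *)

(* Generalized statement: "arbitrary: ys" quantifies ys in the
   induction hypothesis, and auto closes the step case by rewriting
   with associativity of @. *)
lemma itrev_rev: "itrev xs ys = rev xs @ ys"
  by (induction xs arbitrary: ys) auto

end

Instantiating ys with [] in the generalized lemma recovers the original goal itrev xs [] = rev xs, which is exactly the pattern described by the two heuristics in the revised text.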