--- a/src/Doc/Prog_Prove/Basics.thy Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/Basics.thy Sun May 18 20:29:04 2014 +0200
@@ -27,7 +27,7 @@
\item[function types,]
denoted by @{text"\<Rightarrow>"}.
\item[type variables,]
- denoted by @{typ 'a}, @{typ 'b} etc., just like in ML\@.
+ denoted by @{typ 'a}, @{typ 'b}, etc., just like in ML\@.
\end{description}
Note that @{typ"'a \<Rightarrow> 'b list"} means @{typ[source]"'a \<Rightarrow> ('b list)"},
not @{typ"('a \<Rightarrow> 'b) list"}: postfix type constructors have precedence
@@ -87,7 +87,7 @@
Right-arrows of all kinds always associate to the right. In particular,
the formula
@{text"A\<^sub>1 \<Longrightarrow> A\<^sub>2 \<Longrightarrow> A\<^sub>3"} means @{text "A\<^sub>1 \<Longrightarrow> (A\<^sub>2 \<Longrightarrow> A\<^sub>3)"}.
-The (Isabelle specific) notation \mbox{@{text"\<lbrakk> A\<^sub>1; \<dots>; A\<^sub>n \<rbrakk> \<Longrightarrow> A"}}
+The (Isabelle-specific) notation \mbox{@{text"\<lbrakk> A\<^sub>1; \<dots>; A\<^sub>n \<rbrakk> \<Longrightarrow> A"}}
is short for the iterated implication \mbox{@{text"A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> A"}}.
Sometimes we also employ inference rule notation:
\inferrule{\mbox{@{text "A\<^sub>1"}}\\ \mbox{@{text "\<dots>"}}\\ \mbox{@{text "A\<^sub>n"}}}
--- a/src/Doc/Prog_Prove/Bool_nat_list.thy Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/Bool_nat_list.thy Sun May 18 20:29:04 2014 +0200
@@ -18,7 +18,7 @@
@{datatype[display] bool}
with the two values \indexed{@{const True}}{True} and \indexed{@{const False}}{False} and
with many predefined functions: @{text "\<not>"}, @{text "\<and>"}, @{text "\<or>"}, @{text
-"\<longrightarrow>"} etc. Here is how conjunction could be defined by pattern matching:
+"\<longrightarrow>"}, etc. Here is how conjunction could be defined by pattern matching:
*}
fun conj :: "bool \<Rightarrow> bool \<Rightarrow> bool" where
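The defining clauses fall outside this hunk. A plausible completion, shown purely for illustration (not quoted from the theory):

"conj True True = True" |
"conj _ _ = False"

With \isacom{fun}, overlapping equations are tried in order, so the catch-all second clause covers the remaining three cases.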
@@ -34,7 +34,7 @@
@{datatype[display] nat}\index{Suc@@{const Suc}}
All values of type @{typ nat} are generated by the constructors
@{text 0} and @{const Suc}. Thus the values of type @{typ nat} are
-@{text 0}, @{term"Suc 0"}, @{term"Suc(Suc 0)"} etc.
+@{text 0}, @{term"Suc 0"}, @{term"Suc(Suc 0)"}, etc.
There are many predefined functions: @{text "+"}, @{text "*"}, @{text
"\<le>"}, etc. Here is how you could define your own addition:
*}
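A minimal sketch of such a definition (the name @{text add} and its equations are illustrative, not quoted from the hunk):

fun add :: "nat \<Rightarrow> nat \<Rightarrow> nat" where
"add 0 n = n" |
"add (Suc m) n = Suc(add m n)"

Recursion on the first argument mirrors how every value of type @{typ nat} is built from @{text 0} and @{const Suc}.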
@@ -80,7 +80,7 @@
txt{* which displays @{thm[show_question_marks,display] add_02} The free
variable @{text m} has been replaced by the \concept{unknown}
-@{text"?m"}. There is no logical difference between the two but an
+@{text"?m"}. There is no logical difference between the two but there is an
operational one: unknowns can be instantiated, which is what you want after
some lemma has been proved.
@@ -165,7 +165,7 @@
\item Type @{typ "'a list"} is the type of lists over elements of type @{typ 'a}. Because @{typ 'a} is a type variable, lists are in fact \concept{polymorphic}: the elements of a list can be of arbitrary type (but must all be of the same type).
\item Lists have two constructors: @{const Nil}, the empty list, and @{const Cons}, which puts an element (of type @{typ 'a}) in front of a list (of type @{typ "'a list"}).
Hence all lists are of the form @{const Nil}, or @{term"Cons x Nil"},
-or @{term"Cons x (Cons y Nil)"} etc.
+or @{term"Cons x (Cons y Nil)"}, etc.
\item \isacom{datatype} requires no quotation marks on the
left-hand side, but on the right-hand side each of the argument
types of a constructor needs to be enclosed in quotation marks, unless
@@ -196,12 +196,12 @@
\medskip
Figure~\ref{fig:MyList} shows the theory created so far.
-Because @{text list}, @{const Nil}, @{const Cons} etc are already predefined,
+Because @{text list}, @{const Nil}, @{const Cons}, etc.\ are already predefined,
Isabelle prints qualified (long) names when executing this theory, for example, @{text MyList.Nil}
instead of @{const Nil}.
To suppress the qualified names you can insert the command
\texttt{declare [[names\_short]]}.
- This is not recommended in general but just for this unusual example.
+ This is not recommended in general but is convenient for this unusual example.
% Notice where the
%quotations marks are needed that we mostly sweep under the carpet. In
%particular, notice that \isacom{datatype} requires no quotation marks on the
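For orientation, the MyList theory discussed above revolves around a declaration of roughly this shape (a sketch, not copied from Figure~\ref{fig:MyList}):

datatype 'a list = Nil | Cons 'a "'a list"

Only the compound argument type @{typ "'a list"} on the right-hand side needs quotation marks, as explained above, and declaring this next to the predefined @{text list} is what produces the qualified names @{text MyList.Nil} and @{text MyList.Cons}.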
--- a/src/Doc/Prog_Prove/Isar.thy Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/Isar.thy Sun May 18 20:29:04 2014 +0200
@@ -9,7 +9,7 @@
for larger proofs is \concept{Isar}. The two key features of Isar are:
\begin{itemize}
\item It is structured, not linear.
-\item It is readable without running it because
+\item It is readable without being run because
you need to state what you are proving at any given point.
\end{itemize}
Whereas apply-scripts are like assembly language programs, Isar proofs
@@ -57,7 +57,7 @@
\noindent A proof can either be an atomic \isacom{by} with a single proof
method which must finish off the statement being proved, for example @{text
-auto}. Or it can be a \isacom{proof}--\isacom{qed} block of multiple
+auto}, or it can be a \isacom{proof}--\isacom{qed} block of multiple
steps. Such a block can optionally begin with a proof method that indicates
how to start off the proof, e.g.\ \mbox{@{text"(induction xs)"}}.
@@ -65,10 +65,10 @@
together with its proof. The optional \isacom{from} clause
indicates which facts are to be used in the proof.
Intermediate propositions are stated with \isacom{have}, the overall goal
-with \isacom{show}. A step can also introduce new local variables with
+is stated with \isacom{show}. A step can also introduce new local variables with
\isacom{fix}. Logically, \isacom{fix} introduces @{text"\<And>"}-quantified
variables, \isacom{assume} introduces the assumption of an implication
-(@{text"\<Longrightarrow>"}) and \isacom{have}/\isacom{show} the conclusion.
+(@{text"\<Longrightarrow>"}) and \isacom{have}/\isacom{show} introduce the conclusion.
Propositions are optionally named formulas. These names can be referred to in
later \isacom{from} clauses. In the simplest case, a fact is such a name.
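A minimal, self-contained sketch that puts these ingredients together (the lemma itself is made up for illustration):

lemma "A \<longrightarrow> A \<or> B"
proof
  assume a: "A"
  from a have ab: "A \<or> B" by simp
  from ab show "A \<or> B" .
qed

Here \isacom{proof} applies the standard introduction rule for @{text"\<longrightarrow>"}, \isacom{assume} introduces its premise, and the named facts @{text a} and @{text ab} are picked up by the \isacom{from} clauses.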
@@ -81,7 +81,7 @@
Fact names can stand for whole lists of facts. For example, if @{text f} is
defined by command \isacom{fun}, @{text"f.simps"} refers to the whole list of
recursion equations defining @{text f}. Individual facts can be selected by
-writing @{text"f.simps(2)"}, whole sublists by @{text"f.simps(2-4)"}.
+writing @{text"f.simps(2)"}, whole sublists by writing @{text"f.simps(2-4)"}.
\section{Isar by Example}
@@ -100,7 +100,7 @@
qed
text{*
-The \isacom{proof} command lacks an explicit method how to perform
+The \isacom{proof} command lacks an explicit method by which to perform
the proof. In such cases Isabelle tries to use some standard introduction
rule, in the above case for @{text"\<not>"}:
\[
@@ -206,7 +206,7 @@
text{*
\begin{warn}
Note the hyphen after the \isacom{proof} command.
-It is the null method that does nothing to the goal. Leaving it out would ask
+It is the null method that does nothing to the goal. Leaving it out would be asking
Isabelle to try some suitable introduction rule on the goal @{const False}---but
there is no such rule and \isacom{proof} would fail.
\end{warn}
@@ -217,7 +217,7 @@
Stating a lemma with \isacom{assumes}-\isacom{shows} implicitly introduces the
name \indexed{@{text assms}}{assms} that stands for the list of all assumptions. You can refer
-to individual assumptions by @{text"assms(1)"}, @{text"assms(2)"} etc,
+to individual assumptions by @{text"assms(1)"}, @{text"assms(2)"}, etc.,
thus obviating the need to name them individually.
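A small sketch of this naming scheme (the lemma is made up for illustration):

lemma assumes "A" and "A \<longrightarrow> B" shows "B"
  using assms(1) assms(2) by blast

Writing @{text assms} on its own would pass the whole list of assumptions instead.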
\section{Proof Patterns}
@@ -489,7 +489,7 @@
$\vdots$\\
\isacom{from} @{text "x_gr_0"} \dots
\end{quote}
-The name is longer than the fact it stands for! Short facts do not need names,
+The name is longer than the fact it stands for! Short facts do not need names;
one can refer to them easily by quoting them:
\begin{quote}
\isacom{have} \ @{text"\"x > 0\""}\\
@@ -544,8 +544,8 @@
\subsection{Raw Proof Blocks}
-Sometimes one would like to prove some lemma locally within a proof.
-A lemma that shares the current context of assumptions but that
+Sometimes one would like to prove some lemma locally within a proof,
+a lemma that shares the current context of assumptions but that
has its own assumptions and is generalized over its locally fixed
variables at the end. This is what a \concept{raw proof block} does:
\begin{quote}\index{$IMP053@@{text"{ ... }"} (proof block)}
@@ -780,7 +780,7 @@
\isacom{let} @{text"?case = \"P(Suc n)\""}
\end{quote}
The list of assumptions @{text Suc} is actually subdivided
-into @{text"Suc.IH"}, the induction hypotheses (here @{text"A(n) \<Longrightarrow> P(n)"})
+into @{text"Suc.IH"}, the induction hypotheses (here @{text"A(n) \<Longrightarrow> P(n)"}),
and @{text"Suc.prems"}, the premises of the goal being proved
(here @{text"A(Suc n)"}).
@@ -986,7 +986,7 @@
Let us examine the assumptions available in each case. In case @{text ev0}
we have @{text"n = 0"} and in case @{text evSS} we have @{prop"n = Suc(Suc k)"}
and @{prop"ev k"}. In each case the assumptions are available under the name
-of the case; there is no fine grained naming schema like for induction.
+of the case; there is no fine-grained naming schema like there is for induction.
Sometimes some rules could not have been used to derive the given fact
because constructors clash. As an extreme example consider
@@ -1030,7 +1030,7 @@
\begin{isabelle}
\isacom{lemma} @{text[source]"I x y z \<Longrightarrow> x = r \<Longrightarrow> y = s \<Longrightarrow> z = t \<Longrightarrow> \<dots>"}
\end{isabelle}
-Standard rule induction will worke fine now, provided the free variables in
+Standard rule induction will work fine now, provided the free variables in
@{text r}, @{text s}, @{text t} are generalized via @{text"arbitrary"}.
However, induction can do the above transformation for us, behind the curtains, so we never
@@ -1077,7 +1077,8 @@
\begin{warn}
This advanced form of induction does not support the @{text IH}
naming schema explained in \autoref{sec:assm-naming}:
-the induction hypotheses are instead found under the name @{text hyps}, like for the simpler
+the induction hypotheses are instead found under the name @{text hyps},
+as they are for the simpler
@{text induct} method.
\end{warn}
\index{induction|)}
@@ -1109,7 +1110,7 @@
\begin{exercise}
Recall predicate @{text star} from \autoref{sec:star} and @{text iter}
from Exercise~\ref{exe:iter}. Prove @{prop "iter r n x y \<Longrightarrow> star r x y"}
-in a structured style, do not just sledgehammer each case of the
+in a structured style; do not just sledgehammer each case of the
required induction.
\end{exercise}
--- a/src/Doc/Prog_Prove/Logic.thy Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/Logic.thy Sun May 18 20:29:04 2014 +0200
@@ -26,7 +26,7 @@
\]
Terms are the ones we have seen all along, built from constants, variables,
function application and @{text"\<lambda>"}-abstraction, including all the syntactic
-sugar like infix symbols, @{text "if"}, @{text "case"} etc.
+sugar like infix symbols, @{text "if"}, @{text "case"}, etc.
\begin{warn}
Remember that formulas are simply terms of type @{text bool}. Hence
@{text "="} also works for formulas. Beware that @{text"="} has a higher
@@ -155,7 +155,7 @@
Define a function @{text "set ::"} @{typ "'a tree \<Rightarrow> 'a set"}
that returns the elements in a tree and a function
@{text "ord ::"} @{typ "int tree \<Rightarrow> bool"}
-the tests if an @{typ "int tree"} is ordered.
+that tests if an @{typ "int tree"} is ordered.
Define a function @{text ins} that inserts an element into an ordered @{typ "int tree"}
while maintaining the order of the tree. If the element is already in the tree, the
@@ -259,7 +259,7 @@
not the translations from Isabelle's logic to those tools!)
and insists on a proof that it can check. This is what \indexed{@{text metis}}{metis} does.
It is given a list of lemmas and tries to find a proof just using those lemmas
-(and pure logic). In contrast to @{text simp} and friends that know a lot of
+(and pure logic). In contrast to using @{text simp} and friends, which know a lot of
lemmas already, using @{text metis} manually is tedious because one has
to find all the relevant lemmas first. But that is precisely what
\isacom{sledgehammer} does for us.
@@ -284,7 +284,7 @@
connectives @{text"\<not>"}, @{text"\<and>"}, @{text"\<or>"}, @{text"\<longrightarrow>"},
@{text"\<longleftrightarrow>"}. Strictly speaking, this is known as \concept{linear arithmetic}
because it does not involve multiplication, although multiplication with
-numbers, e.g.\ @{text"2*n"} is allowed. Such formulas can be proved by
+numbers, e.g.\ @{text"2*n"}, is allowed. Such formulas can be proved by
\indexed{@{text arith}}{arith}:
*}
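For example, a goal of the following shape lies in this fragment (a sketch; the multiplications are only by numerals, so the formula is still linear):

lemma "\<lbrakk> (a::nat) \<le> x + b; 2*x < c \<rbrakk> \<Longrightarrow> 2*a + 1 \<le> 2*b + c"
  by arith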
@@ -588,9 +588,9 @@
from @{prop"ev 0"} because @{prop"0 - 2 = (0::nat)"} on type @{typ nat}. In
case @{thm[source]evSS} we have \mbox{@{prop"m = n+(2::nat)"}} and may assume
@{prop"ev n"}, which implies @{prop"ev (m - 2)"} because @{text"m - 2 = (n +
-2) - 2 = n"}. We did not need the induction hypothesis at all for this proof,
-it is just a case analysis of which rule was used, but having @{prop"ev
-n"} at our disposal in case @{thm[source]evSS} was essential.
+2) - 2 = n"}. We did not need the induction hypothesis at all for this proof (it
+is just a case analysis of which rule was used), but having @{prop"ev n"}
+at our disposal in case @{thm[source]evSS} was essential.
This case analysis of rules is also called ``rule inversion''
and is discussed in more detail in \autoref{ch:Isar}.
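For reference, an inductive definition consistent with the rule names @{thm[source]ev0} and @{thm[source]evSS} used above (a sketch of the standard even-ness predicate):

inductive ev :: "nat \<Rightarrow> bool" where
ev0: "ev 0" |
evSS: "ev n \<Longrightarrow> ev (Suc(Suc n))"

Case analysis on a fact @{text"ev m"} then yields exactly the two cases discussed in this paragraph.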
@@ -844,8 +844,8 @@
\]
as two inductive predicates.
If you think of @{text a} and @{text b} as ``@{text "("}'' and ``@{text ")"}'',
-the grammars defines strings of balanced parentheses.
-Prove @{prop"T w \<Longrightarrow> S w"} and @{prop "S w \<Longrightarrow> T w"} separately and conclude
+the grammar defines strings of balanced parentheses.
+Prove @{prop"T w \<Longrightarrow> S w"} and \mbox{@{prop "S w \<Longrightarrow> T w"}} separately and conclude
@{prop "S w = T w"}.
\end{exercise}
--- a/src/Doc/Prog_Prove/Types_and_funs.thy Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/Types_and_funs.thy Sun May 18 20:29:04 2014 +0200
@@ -99,7 +99,7 @@
\subsection{Definitions}
-Non recursive functions can be defined as in the following example:
+Non-recursive functions can be defined as in the following example:
\index{definition@\isacom{definition}}*}
definition sq :: "nat \<Rightarrow> nat" where
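The defining equation lies outside this hunk; a plausible completion, for illustration only:

"sq n = n * n"

Unlike \isacom{fun} equations, a \isacom{definition} is not used by the simplifier automatically; it is unfolded on demand, e.g.\ with @{text"(simp add: sq_def)"}, as discussed under ``Rewriting with Definitions'' below.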
@@ -133,7 +133,7 @@
\label{sec:recursive-funs}
Recursive functions are defined with \indexed{\isacom{fun}}{fun} by pattern matching
-over datatype constructors. The order of equations matters. Just as in
+over datatype constructors. The order of equations matters, as in
functional programming languages. However, all HOL functions must be
total. This simplifies the logic---terms are always defined---but means
that recursive functions must terminate. Otherwise one could define a
@@ -175,7 +175,7 @@
apply(induction n rule: div2.induct)
txt{* (where the infix @{text div} is the predefined division operation)
-yields the 3 subgoals
+yields the subgoals
@{subgoals[display,margin=65]}
An application of @{text auto} finishes the proof.
Had we used ordinary structural induction on @{text n},
@@ -271,7 +271,7 @@
its first argument by stacking its elements onto the second argument,
and it returns that second argument when the first one becomes
empty. Note that @{const itrev} is tail-recursive: it can be
-compiled into a loop, no stack is necessary for executing it.
+compiled into a loop; no stack is necessary for executing it.
Naturally, we would like to show that @{const itrev} does indeed reverse
its first argument provided the second one is empty:
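For orientation, a sketch of the accumulator-based definition the paragraph describes (the actual definition appears earlier in the theory, outside this hunk):

fun itrev :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
"itrev [] ys = ys" |
"itrev (x#xs) ys = itrev xs (x#ys)"

The desired property is then @{prop "itrev xs [] = rev xs"}, whose proof is the subject of the discussion that follows.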
@@ -487,7 +487,7 @@
subgoals. There is also @{text simp_all}, which applies @{text simp} to
all subgoals.
-\subsection{Rewriting With Definitions}
+\subsection{Rewriting with Definitions}
\label{sec:rewr-defs}
Definitions introduced by the command \isacom{definition}
@@ -557,7 +557,7 @@
Find an equation expressing the size of a tree after exploding it
(\noquotes{@{term [source] "nodes (explode n t)"}}) as a function
of @{term "nodes t"} and @{text n}. Prove your equation.
-You may use the usual arithmetic operators including the exponentiation
+You may use the usual arithmetic operators, including the exponentiation
operator ``@{text"^"}''. For example, \noquotes{@{prop [source] "2 ^ 2 = 4"}}.
Hint: simplifying with the list of theorems @{thm[source] algebra_simps}
--- a/src/Doc/Prog_Prove/document/intro-isabelle.tex Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/document/intro-isabelle.tex Sun May 18 20:29:04 2014 +0200
@@ -3,7 +3,7 @@
of Isabelle for HOL, which abbreviates Higher-Order Logic. We introduce
HOL step by step following the equation
\[ \mbox{HOL} = \mbox{Functional Programming} + \mbox{Logic}. \]
-We assume that the reader is used to logical and set theoretic notation
+We assume that the reader is used to logical and set-theoretic notation
and is familiar with the basic concepts of functional programming.
\ifsem
Open-minded readers have been known to pick up functional
@@ -20,8 +20,8 @@
optimization and compilation.
\fi
\autoref{ch:Logic} introduces the rest of HOL: the
-language of formulas beyond equality, automatic proof tools, single
-step proofs, and inductive definitions, an essential specification construct.
+language of formulas beyond equality, automatic proof tools, single-step
+proofs, and inductive definitions, an essential specification construct.
\autoref{ch:Isar} introduces Isar, Isabelle's language for writing structured
proofs.