author      nipkow
date        Sun May 18 20:29:04 2014 +0200 (2014-05-18)
changeset   56989   fafcf43ded4a
parent      56988   e8c0d894a205
child       56990   299b026cc5af

typos

--- a/src/Doc/Prog_Prove/Basics.thy	Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/Basics.thy	Sun May 18 20:29:04 2014 +0200
@@ -27,7 +27,7 @@
 \item[function types,]
 denoted by @{text"\<Rightarrow>"}.
 \item[type variables,]
- denoted by @{typ 'a}, @{typ 'b} etc., just like in ML\@.
+ denoted by @{typ 'a}, @{typ 'b}, etc., just like in ML\@.
 \end{description}
 Note that @{typ"'a \<Rightarrow> 'b list"} means @{typ[source]"'a \<Rightarrow> ('b list)"},
 not @{typ"('a \<Rightarrow> 'b) list"}: postfix type constructors have precedence
@@ -87,7 +87,7 @@
 Right-arrows of all kinds always associate to the right. In particular,
 the formula
 @{text"A\<^sub>1 \<Longrightarrow> A\<^sub>2 \<Longrightarrow> A\<^sub>3"} means @{text "A\<^sub>1 \<Longrightarrow> (A\<^sub>2 \<Longrightarrow> A\<^sub>3)"}.
-The (Isabelle specific) notation \mbox{@{text"\<lbrakk> A\<^sub>1; \<dots>; A\<^sub>n \<rbrakk> \<Longrightarrow> A"}}
+The (Isabelle-specific) notation \mbox{@{text"\<lbrakk> A\<^sub>1; \<dots>; A\<^sub>n \<rbrakk> \<Longrightarrow> A"}}
 is short for the iterated implication \mbox{@{text"A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> A"}}.
 Sometimes we also employ inference rule notation:
 \inferrule{\mbox{@{text "A\<^sub>1"}}\\ \mbox{@{text "\<dots>"}}\\ \mbox{@{text "A\<^sub>n"}}}

--- a/src/Doc/Prog_Prove/Bool_nat_list.thy	Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/Bool_nat_list.thy	Sun May 18 20:29:04 2014 +0200
@@ -18,7 +18,7 @@
 @{datatype[display] bool}
 with the two values \indexed{@{const True}}{True} and \indexed{@{const False}}{False} and
 with many predefined functions: @{text "\<not>"}, @{text "\<and>"}, @{text "\<or>"}, @{text
-"\<longrightarrow>"} etc. Here is how conjunction could be defined by pattern matching:
+"\<longrightarrow>"}, etc. Here is how conjunction could be defined by pattern matching:
 *}
 
 fun conj :: "bool \<Rightarrow> bool \<Rightarrow> bool" where
@@ -34,7 +34,7 @@
 @{datatype[display] nat}\index{Suc@@{const Suc}}
 All values of type @{typ nat} are generated by the constructors
 @{text 0} and @{const Suc}. Thus the values of type @{typ nat} are
-@{text 0}, @{term"Suc 0"}, @{term"Suc(Suc 0)"} etc.
+@{text 0}, @{term"Suc 0"}, @{term"Suc(Suc 0)"}, etc.
 There are many predefined functions: @{text "+"}, @{text "*"}, @{text
 "\<le>"}, etc. Here is how you could define your own addition:
 *}
@@ -80,7 +80,7 @@
 
 txt{* which displays @{thm[show_question_marks,display] add_02} The free
 variable @{text m} has been replaced by the \concept{unknown}
-@{text"?m"}. There is no logical difference between the two but an
+@{text"?m"}. There is no logical difference between the two but there is an
 operational one: unknowns can be instantiated, which is what you want after
 some lemma has been proved.
 
@@ -165,7 +165,7 @@
 \item Type @{typ "'a list"} is the type of lists over elements of type @{typ 'a}. Because @{typ 'a} is a type variable, lists are in fact \concept{polymorphic}: the elements of a list can be of arbitrary type (but must all be of the same type).
 \item Lists have two constructors: @{const Nil}, the empty list, and @{const Cons}, which puts an element (of type @{typ 'a}) in front of a list (of type @{typ "'a list"}).
 Hence all lists are of the form @{const Nil}, or @{term"Cons x Nil"},
-or @{term"Cons x (Cons y Nil)"} etc.
+or @{term"Cons x (Cons y Nil)"}, etc.
 \item \isacom{datatype} requires no quotation marks on the
 left-hand side, but on the right-hand side each of the argument
 types of a constructor needs to be enclosed in quotation marks, unless
@@ -196,12 +196,12 @@
 \medskip
 
 Figure~\ref{fig:MyList} shows the theory created so far.
-Because @{text list}, @{const Nil}, @{const Cons} etc are already predefined,
+Because @{text list}, @{const Nil}, @{const Cons}, etc.\ are already predefined,
 Isabelle prints qualified (long) names when executing this theory, for example, @{text MyList.Nil}
 instead of @{const Nil}.
 To suppress the qualified names you can insert the command
 \texttt{declare [[names\_short]]}.
- This is not recommended in general but just for this unusual example.
+ This is not recommended in general but is convenient for this unusual example.
 % Notice where the
 %quotations marks are needed that we mostly sweep under the carpet. In
 %particular, notice that \isacom{datatype} requires no quotation marks on the
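For orientation, the Bool_nat_list hunks above touch the chapter's running examples (@{text conj} and addition on @{typ nat}). As a sketch of what those definitions look like — the equations are reconstructed from the surrounding diff context and standard Prog_Prove usage, not quoted verbatim from the patched file:

```isabelle
(* reconstructed sketch; the headers appear in the diff context above,
   the equations are the usual Prog_Prove ones *)
fun conj :: "bool \<Rightarrow> bool \<Rightarrow> bool" where
"conj True True = True" |
"conj _ _ = False"

fun add :: "nat \<Rightarrow> nat \<Rightarrow> nat" where
"add 0 n = n" |
"add (Suc m) n = Suc (add m n)"
```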

--- a/src/Doc/Prog_Prove/Isar.thy	Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/Isar.thy	Sun May 18 20:29:04 2014 +0200
@@ -9,7 +9,7 @@
 for larger proofs is \concept{Isar}. The two key features of Isar are:
 \begin{itemize}
 \item It is structured, not linear.
-\item It is readable without running it because
+\item It is readable without its being run because
 you need to state what you are proving at any given point.
 \end{itemize}
 Whereas apply-scripts are like assembly language programs, Isar proofs
@@ -57,7 +57,7 @@
 
 \noindent A proof can either be an atomic \isacom{by} with a single proof
 method which must finish off the statement being proved, for example @{text
-auto}. Or it can be a \isacom{proof}--\isacom{qed} block of multiple
+auto}, or it can be a \isacom{proof}--\isacom{qed} block of multiple
 steps. Such a block can optionally begin with a proof method that indicates
 how to start off the proof, e.g.\ \mbox{@{text"(induction xs)"}}.
 
@@ -65,10 +65,10 @@
 together with its proof. The optional \isacom{from} clause
 indicates which facts are to be used in the proof.
 Intermediate propositions are stated with \isacom{have}, the overall goal
-with \isacom{show}. A step can also introduce new local variables with
+is stated with \isacom{show}. A step can also introduce new local variables with
 \isacom{fix}. Logically, \isacom{fix} introduces @{text"\<And>"}-quantified
 variables, \isacom{assume} introduces the assumption of an implication
-(@{text"\<Longrightarrow>"}) and \isacom{have}/\isacom{show} the conclusion.
+(@{text"\<Longrightarrow>"}) and \isacom{have}/\isacom{show} introduce the conclusion.
 
 Propositions are optionally named formulas. These names can be referred to in
 later \isacom{from} clauses. In the simplest case, a fact is such a name.
@@ -81,7 +81,7 @@
 Fact names can stand for whole lists of facts. For example, if @{text f} is
 defined by command \isacom{fun}, @{text"f.simps"} refers to the whole list of
 recursion equations defining @{text f}. Individual facts can be selected by
-writing @{text"f.simps(2)"}, whole sublists by @{text"f.simps(2-4)"}.
+writing @{text"f.simps(2)"}, whole sublists by writing @{text"f.simps(2-4)"}.
 
 
 \section{Isar by Example}
@@ -100,7 +100,7 @@
 qed
 
 text{*
-The \isacom{proof} command lacks an explicit method how to perform
+The \isacom{proof} command lacks an explicit method by which to perform
 the proof. In such cases Isabelle tries to use some standard introduction
 rule, in the above case for @{text"\<not>"}:
 \[
@@ -206,7 +206,7 @@
 text{*
 \begin{warn}
 Note the hyphen after the \isacom{proof} command.
-It is the null method that does nothing to the goal. Leaving it out would ask
+It is the null method that does nothing to the goal. Leaving it out would be asking
 Isabelle to try some suitable introduction rule on the goal @{const False}---but
 there is no such rule and \isacom{proof} would fail.
 \end{warn}
@@ -217,7 +217,7 @@
 
 Stating a lemma with \isacom{assumes}-\isacom{shows} implicitly introduces the
 name \indexed{@{text assms}}{assms} that stands for the list of all assumptions. You can refer
-to individual assumptions by @{text"assms(1)"}, @{text"assms(2)"} etc,
+to individual assumptions by @{text"assms(1)"}, @{text"assms(2)"}, etc.\
 thus obviating the need to name them individually.
 
 \section{Proof Patterns}
@@ -489,7 +489,7 @@
 $\vdots$\\
 \isacom{from} @{text "x_gr_0"} \dots
 \end{quote}
-The name is longer than the fact it stands for! Short facts do not need names,
+The name is longer than the fact it stands for! Short facts do not need names;
 one can refer to them easily by quoting them:
 \begin{quote}
 \isacom{have} \ @{text"\"x > 0\""}\\
@@ -544,8 +544,8 @@
 
 \subsection{Raw Proof Blocks}
 
-Sometimes one would like to prove some lemma locally within a proof.
-A lemma that shares the current context of assumptions but that
+Sometimes one would like to prove some lemma locally within a proof,
+a lemma that shares the current context of assumptions but that
 has its own assumptions and is generalized over its locally fixed
 variables at the end. This is what a \concept{raw proof block} does:
 \begin{quote}\index{$IMP053@@{text"{ ... }"} (proof block)}
@@ -780,7 +780,7 @@
 \isacom{let} @{text"?case = \"P(Suc n)\""}
 \end{quote}
 The list of assumptions @{text Suc} is actually subdivided
-into @{text"Suc.IH"}, the induction hypotheses (here @{text"A(n) \<Longrightarrow> P(n)"})
+into @{text"Suc.IH"}, the induction hypotheses (here @{text"A(n) \<Longrightarrow> P(n)"}),
 and @{text"Suc.prems"}, the premises of the goal being proved
 (here @{text"A(Suc n)"}).
 
@@ -986,7 +986,7 @@
 Let us examine the assumptions available in each case. In case @{text ev0}
 we have @{text"n = 0"} and in case @{text evSS} we have @{prop"n = Suc(Suc k)"}
 and @{prop"ev k"}. In each case the assumptions are available under the name
-of the case; there is no fine grained naming schema like for induction.
+of the case; there is no fine-grained naming schema like there is for induction.
 
 Sometimes some rules could not have been used to derive the given fact
 because constructors clash. As an extreme example consider
@@ -1030,7 +1030,7 @@
 \begin{isabelle}
 \isacom{lemma} @{text[source]"I x y z \<Longrightarrow> x = r \<Longrightarrow> y = s \<Longrightarrow> z = t \<Longrightarrow> \<dots>"}
 \end{isabelle}
-Standard rule induction will worke fine now, provided the free variables in
+Standard rule induction will work fine now, provided the free variables in
 @{text r}, @{text s}, @{text t} are generalized via @{text"arbitrary"}.
 
 However, induction can do the above transformation for us, behind the curtains, so we never
@@ -1077,7 +1077,8 @@
 \begin{warn}
 This advanced form of induction does not support the @{text IH}
 naming schema explained in \autoref{sec:assm-naming}:
-the induction hypotheses are instead found under the name @{text hyps}, like for the simpler
+the induction hypotheses are instead found under the name @{text hyps},
+as they are for the simpler
 @{text induct} method.
 \end{warn}
 \index{induction|)}
@@ -1109,7 +1110,7 @@
 \begin{exercise}
 Recall predicate @{text star} from \autoref{sec:star} and @{text iter}
 from Exercise~\ref{exe:iter}. Prove @{prop "iter r n x y \<Longrightarrow> star r x y"}
-in a structured style, do not just sledgehammer each case of the
+in a structured style; do not just sledgehammer each case of the
 required induction.
 \end{exercise}
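Several of the Isar hunks above refine the description of \isacom{fix}/\isacom{assume}/\isacom{show} and of the implicit introduction rule applied by a plain \isacom{proof}. A minimal structured proof illustrating that shape — a generic sketch, not taken from the patched theory:

```isabelle
lemma "A \<longrightarrow> A \<and> A"
proof  (* the default step applies the introduction rule for \<longrightarrow> *)
  assume a: "A"          (* \<Longrightarrow>-assumption introduced by the rule *)
  from a show "A \<and> A" by simp  (* show discharges the conclusion *)
qed
```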

--- a/src/Doc/Prog_Prove/Logic.thy	Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/Logic.thy	Sun May 18 20:29:04 2014 +0200
@@ -26,7 +26,7 @@
 \]
 Terms are the ones we have seen all along, built from constants, variables,
 function application and @{text"\<lambda>"}-abstraction, including all the syntactic
-sugar like infix symbols, @{text "if"}, @{text "case"} etc.
+sugar like infix symbols, @{text "if"}, @{text "case"}, etc.
 \begin{warn}
 Remember that formulas are simply terms of type @{text bool}. Hence
 @{text "="} also works for formulas. Beware that @{text"="} has a higher
@@ -155,7 +155,7 @@
 Define a function @{text "set ::"} @{typ "'a tree \<Rightarrow> 'a set"}
 that returns the elements in a tree and a function
 @{text "ord ::"} @{typ "int tree \<Rightarrow> bool"}
-the tests if an @{typ "int tree"} is ordered.
+that tests if an @{typ "int tree"} is ordered.
 
 Define a function @{text ins} that inserts an element into an ordered @{typ "int tree"}
 while maintaining the order of the tree. If the element is already in the tree, the
@@ -259,7 +259,7 @@
 not the translations from Isabelle's logic to those tools!)
 and insists on a proof that it can check. This is what \indexed{@{text metis}}{metis} does.
 It is given a list of lemmas and tries to find a proof just using those lemmas
-(and pure logic). In contrast to @{text simp} and friends that know a lot of
+(and pure logic). In contrast to using @{text simp} and friends who know a lot of
 lemmas already, using @{text metis} manually is tedious because one has
 to find all the relevant lemmas first. But that is precisely what
 \isacom{sledgehammer} does for us.
@@ -284,7 +284,7 @@
 connectives @{text"\<not>"}, @{text"\<and>"}, @{text"\<or>"}, @{text"\<longrightarrow>"},
 @{text"\<longleftrightarrow>"}. Strictly speaking, this is known as \concept{linear arithmetic}
 because it does not involve multiplication, although multiplication with
-numbers, e.g.\ @{text"2*n"} is allowed. Such formulas can be proved by
+numbers, e.g.\ @{text"2*n"}, is allowed. Such formulas can be proved by
 \indexed{@{text arith}}{arith}:
 *}
 
@@ -588,9 +588,9 @@
 from @{prop"ev 0"} because @{prop"0 - 2 = (0::nat)"} on type @{typ nat}. In
 case @{thm[source]evSS} we have \mbox{@{prop"m = n+(2::nat)"}} and may assume
 @{prop"ev n"}, which implies @{prop"ev (m - 2)"} because @{text"m - 2 = (n +
-2) - 2 = n"}. We did not need the induction hypothesis at all for this proof,
-it is just a case analysis of which rule was used, but having @{prop"ev
-n"} at our disposal in case @{thm[source]evSS} was essential.
+2) - 2 = n"}. We did not need the induction hypothesis at all for this proof (it
+is just a case analysis of which rule was used) but having @{prop"ev n"}
+at our disposal in case @{thm[source]evSS} was essential.
 This case analysis of rules is also called ``rule inversion''
 and is discussed in more detail in \autoref{ch:Isar}.
 
@@ -844,8 +844,8 @@
 \]
 as two inductive predicates.
 If you think of @{text a} and @{text b} as ``@{text "("}'' and ``@{text ")"}'',
-the grammars defines strings of balanced parentheses.
-Prove @{prop"T w \<Longrightarrow> S w"} and @{prop "S w \<Longrightarrow> T w"} separately and conclude
+the grammar defines strings of balanced parentheses.
+Prove @{prop"T w \<Longrightarrow> S w"} and \mbox{@{prop "S w \<Longrightarrow> T w"}} separately and conclude
 @{prop "S w = T w"}.
 \end{exercise}
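The rule-inversion hunk (@@ -588,9 +588,9 @@) refers to the inductive predicate @{text ev} with rules @{text ev0} and @{text evSS}. Its definition, reconstructed here from the rule names and conclusions mentioned in the context (the definition itself is not part of this patch):

```isabelle
(* reconstructed from the rule names ev0/evSS in the diff context *)
inductive ev :: "nat \<Rightarrow> bool" where
ev0:  "ev 0" |
evSS: "ev n \<Longrightarrow> ev (Suc (Suc n))"
```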

--- a/src/Doc/Prog_Prove/Types_and_funs.thy	Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/Types_and_funs.thy	Sun May 18 20:29:04 2014 +0200
@@ -99,7 +99,7 @@
 
 \subsection{Definitions}
 
-Non recursive functions can be defined as in the following example:
+Non-recursive functions can be defined as in the following example:
 \index{definition@\isacom{definition}}*}
 
 definition sq :: "nat \<Rightarrow> nat" where
@@ -133,7 +133,7 @@
 \label{sec:recursive-funs}
 
 Recursive functions are defined with \indexed{\isacom{fun}}{fun} by pattern matching
-over datatype constructors. The order of equations matters. Just as in
+over datatype constructors. The order of equations matters, as in
 functional programming languages. However, all HOL functions must be
 total. This simplifies the logic---terms are always defined---but means
 that recursive functions must terminate. Otherwise one could define a
@@ -175,7 +175,7 @@
 apply(induction n rule: div2.induct)
 
 txt{* (where the infix @{text div} is the predefined division operation)
-yields the 3 subgoals
+yields the subgoals
 @{subgoals[display,margin=65]}
 An application of @{text auto} finishes the proof.
 Had we used ordinary structural induction on @{text n},
@@ -271,7 +271,7 @@
 its first argument by stacking its elements onto the second argument,
 and it returns that second argument when the first one becomes
 empty. Note that @{const itrev} is tail-recursive: it can be
-compiled into a loop, no stack is necessary for executing it.
+compiled into a loop; no stack is necessary for executing it.
 
 Naturally, we would like to show that @{const itrev} does indeed reverse
 its first argument provided the second one is empty:
@@ -487,7 +487,7 @@
 subgoals. There is also @{text simp_all}, which applies @{text simp} to
 all subgoals.
 
-\subsection{Rewriting With Definitions}
+\subsection{Rewriting with Definitions}
 \label{sec:rewr-defs}
 
 Definitions introduced by the command \isacom{definition}
@@ -557,7 +557,7 @@
 Find an equation expressing the size of a tree after exploding it
 (\noquotes{@{term [source] "nodes (explode n t)"}}) as a function
 of @{term "nodes t"} and @{text n}. Prove your equation.
-You may use the usual arithmetic operators including the exponentiation
+You may use the usual arithmetic operators, including the exponentiation
 operator ``@{text"^"}''. For example, \noquotes{@{prop [source] "2 ^ 2 = 4"}}.
 
 Hint: simplifying with the list of theorems @{thm[source] algebra_simps}
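The tail-recursion hunk (@@ -271,7 +271,7 @@) concerns @{const itrev}, which reverses its first argument by stacking elements onto the second. A sketch matching that description — the standard equations, reconstructed rather than quoted from the patched theory:

```isabelle
(* reconstructed sketch of the tail-recursive reverse described above *)
fun itrev :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
"itrev []       ys = ys" |
"itrev (x # xs) ys = itrev xs (x # ys)"
```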

--- a/src/Doc/Prog_Prove/document/intro-isabelle.tex	Sun May 18 17:01:37 2014 +0200
+++ b/src/Doc/Prog_Prove/document/intro-isabelle.tex	Sun May 18 20:29:04 2014 +0200
@@ -3,7 +3,7 @@
 of Isabelle for HOL, which abbreviates Higher-Order Logic. We introduce
 HOL step by step following the equation
 \[ \mbox{HOL} = \mbox{Functional Programming} + \mbox{Logic}. \]
-We assume that the reader is used to logical and set theoretic notation
+We assume that the reader is used to logical and set-theoretic notation
 and is familiar with the basic concepts of functional programming.
 \ifsem
 Open-minded readers have been known to pick up functional
@@ -20,8 +20,8 @@
 optimization and compilation.
 \fi
 \autoref{ch:Logic} introduces the rest of HOL: the
-language of formulas beyond equality, automatic proof tools, single
-step proofs, and inductive definitions, an essential specification construct.
+language of formulas beyond equality, automatic proof tools, single-step
+proofs, and inductive definitions, an essential specification construct.
 \autoref{ch:Isar} introduces Isar, Isabelle's language for writing structured
 proofs.