author       nipkow
date         Mon Jun 10 16:04:34 2013 +0200 (2013-06-10)
changeset    52362 6b80ba92c4fe
parent       52360 ac7ac2b242a2
parent       52361 7d5ad23b8245
child        52363 41d7946e2595
merged
     1.1 --- a/src/Doc/ProgProve/Basics.thy	Mon Jun 10 06:08:17 2013 -0700
1.2 +++ b/src/Doc/ProgProve/Basics.thy	Mon Jun 10 16:04:34 2013 +0200
1.3 @@ -9,7 +9,7 @@
1.4
1.5  \section{Basics}
1.6
1.7 -\subsection{Types, Terms and Formulae}
1.8 +\subsection{Types, Terms and Formulas}
1.9  \label{sec:TypesTermsForms}
1.10
1.11  HOL is a typed logic whose type system resembles that of functional
1.12 @@ -58,7 +58,7 @@
1.13  Terms may also contain @{text "\<lambda>"}-abstractions. For example,
1.14  @{term "\<lambda>x. x"} is the identity function.
1.15
1.16 -\concept{Formulae} are terms of type @{text bool}.
1.17 +\concept{Formulas} are terms of type @{text bool}.
1.18  There are the basic constants @{term True} and @{term False} and
1.19  the usual logical connectives (in decreasing order of precedence):
1.20  @{text"\<not>"}, @{text"\<and>"}, @{text"\<or>"}, @{text"\<longrightarrow>"}.
1.21 @@ -133,7 +133,7 @@
1.22
1.23  The textual definition of a theory follows a fixed syntax with keywords like
1.24  \isacommand{begin} and \isacommand{datatype}.  Embedded in this syntax are
1.25 -the types and formulae of HOL.  To distinguish the two levels, everything
1.26 +the types and formulas of HOL.  To distinguish the two levels, everything
1.27  HOL-specific (terms and types) must be enclosed in quotation marks:
1.28  \texttt{"}\dots\texttt{"}. To lessen this burden, quotation marks around a
1.29  single identifier can be dropped.  When Isabelle prints a syntax error
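The quotation convention discussed in this hunk can be illustrated with a small sketch (the @{text add} function is the one defined earlier in the tutorial; the lemma name is illustrative and not part of the patch):

```isabelle
(* Outer syntax: keywords such as lemma, apply, done are unquoted.
   Inner syntax: HOL terms and types are enclosed in quotation marks. *)
lemma add_02: "add m 0 = m"
apply(induction m)
apply(auto)
done
```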

     2.1 --- a/src/Doc/ProgProve/Bool_nat_list.thy	Mon Jun 10 06:08:17 2013 -0700
2.2 +++ b/src/Doc/ProgProve/Bool_nat_list.thy	Mon Jun 10 16:04:34 2013 +0200
2.3 @@ -109,7 +109,7 @@
2.5  \end{warn}
2.6
2.7 -\subsubsection{An informal proof}
2.8 +\subsubsection{An Informal Proof}
2.9
2.10  Above we gave some terse informal explanation of the proof of
2.11  @{prop"add m 0 = m"}. A more detailed informal exposition of the lemma
2.12 @@ -335,7 +335,7 @@
2.13  Finally the proofs of @{thm[source] rev_app} and @{thm[source] rev_rev}
2.14  succeed, too.
2.15
2.16 -\subsubsection{Another informal proof}
2.17 +\subsubsection{Another Informal Proof}
2.18
2.19  Here is the informal proof of associativity of @{const app}
2.20  corresponding to the Isabelle proof above.
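For orientation, the Isabelle proof that the informal proof mirrors looks roughly as follows (a sketch assuming the tutorial's @{const app}, defined by recursion on its first argument):

```isabelle
(* Associativity of list concatenation, proved by structural
   induction on the first argument. *)
lemma app_assoc: "app (app xs ys) zs = app xs (app ys zs)"
apply(induction xs)
apply(auto)
done
```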
2.21 @@ -377,7 +377,7 @@
2.22  @{text s} is @{term"app (app Nil ys) zs"}, @{text t} is @{term"app Nil (app
2.23  ys zs)"}, and @{text u} is @{term"app ys zs"}.
2.24
2.25 -\subsection{Predefined lists}
2.26 +\subsection{Predefined Lists}
2.27  \label{sec:predeflists}
2.28
2.29  Isabelle's predefined lists are the same as the ones above, but with

     3.1 --- a/src/Doc/ProgProve/Isar.thy	Mon Jun 10 06:08:17 2013 -0700
3.2 +++ b/src/Doc/ProgProve/Isar.thy	Mon Jun 10 16:04:34 2013 +0200
3.3 @@ -84,7 +84,7 @@
3.4  writing @{text"f.simps(2)"}, whole sublists by @{text"f.simps(2-4)"}.
3.5
3.6
3.7 -\section{Isar by example}
3.8 +\section{Isar by Example}
3.9
3.10  We show a number of proofs of Cantor's theorem that a function from a set to
3.11  its powerset cannot be surjective, illustrating various features of Isar. The
3.12 @@ -175,7 +175,7 @@
3.13  \noindent The \isacom{using} idiom de-emphasizes the used facts by moving them
3.14  behind the proposition.
3.15
3.16 -\subsection{Structured lemma statements: \isacom{fixes}, \isacom{assumes}, \isacom{shows}}
3.17 +\subsection{Structured Lemma Statements: \isacom{fixes}, \isacom{assumes}, \isacom{shows}}
3.18
3.19  Lemmas can also be stated in a more structured fashion. To demonstrate this
3.20  feature with Cantor's theorem, we rephrase @{prop"\<not> surj f"}
3.21 @@ -221,7 +221,7 @@
3.22  to individual assumptions by @{text"assms(1)"}, @{text"assms(2)"}, etc.,
3.23  thus obviating the need to name them individually.
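The structured statement style looks roughly as follows for Cantor's theorem (a sketch in the spirit of the surrounding text; @{thm[source] surj_def} is HOL's characterization of surjectivity):

```isabelle
lemma
  fixes f :: "'a \<Rightarrow> 'a set"
  assumes s: "surj f"
  shows "False"
proof -
  (* the diagonal set {x. x \<notin> f x} must be in the range of f *)
  have "\<exists>a. {x. x \<notin> f x} = f a" using s
    by(auto simp: surj_def)
  thus "False" by blast
qed
```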
3.24
3.25 -\section{Proof patterns}
3.26 +\section{Proof Patterns}
3.27
3.28  We show a number of important basic proof patterns. Many of them arise from
3.29  the rules of natural deduction that are applied by \isacom{proof} by
3.30 @@ -428,9 +428,9 @@
3.31  \end{minipage}
3.32  \end{tabular}
3.33  \begin{isamarkuptext}%
3.34 -\section{Streamlining proofs}
3.35 +\section{Streamlining Proofs}
3.36
3.37 -\subsection{Pattern matching and quotations}
3.38 +\subsection{Pattern Matching and Quotations}
3.39
3.40  In the proof patterns shown above, formulas are often duplicated.
3.41  This can make the text harder to read, write and maintain. Pattern matching
3.42 @@ -544,7 +544,7 @@
3.43  The \isacom{moreover} version is no shorter but expresses the structure more
3.44  clearly and avoids new names.
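A minimal sketch of the \isacom{moreover}\dots\isacom{ultimately} pattern mentioned here (the propositions are placeholders chosen for illustration):

```isabelle
lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume ab: "A \<and> B"
  have "B" using ab by blast
  moreover have "A" using ab by blast
  (* ultimately collects both facts without naming them *)
  ultimately show "B \<and> A" by blast
qed
```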
3.45
3.46 -\subsection{Raw proof blocks}
3.47 +\subsection{Raw Proof Blocks}
3.48
3.49  Sometimes one would like to prove some lemma locally within a proof.
3.50  A lemma that shares the current context of assumptions but that
3.51 @@ -590,9 +590,9 @@
3.52  the fact just proved, in this case the preceding block. In general,
3.53  \isacom{note} introduces a new name for one or more facts.
3.54
3.55 -\section{Case analysis and induction}
3.56 +\section{Case Analysis and Induction}
3.57
3.58 -\subsection{Datatype case analysis}
3.59 +\subsection{Datatype Case Analysis}
3.60
3.61  We have seen case analysis on formulas. Now we want to distinguish
3.62  which form some term takes: is it @{text 0} or of the form @{term"Suc n"},
3.63 @@ -642,7 +642,7 @@
3.64  are not used because they are directly piped (via \isacom{thus})
3.65  into the proof of the claim.
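A compact sketch of datatype case analysis on a list, in which each case is piped into the claim via \isacom{thus} as described above (the lemma itself is illustrative):

```isabelle
lemma "length (tl xs) = length xs - 1"
proof (cases xs)
  case Nil thus ?thesis by simp
next
  case (Cons y ys) thus ?thesis by simp
qed
```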
3.66
3.67 -\subsection{Structural induction}
3.68 +\subsection{Structural Induction}
3.69
3.70  We illustrate structural induction with an example based on natural numbers:
3.71  the sum (@{text"\<Sum>"}) of the first @{text n} natural numbers
3.72 @@ -768,7 +768,7 @@
3.73  \item \isacom{let} @{text"?case = \"P(C x\<^isub>1 \<dots> x\<^isub>n)\""}
3.74  \end{enumerate}
3.75
3.76 -\subsection{Rule induction}
3.77 +\subsection{Rule Induction}
3.78
3.79  Recall the inductive and recursive definitions of even numbers in
3.80  \autoref{sec:inductive-defs}:
3.81 @@ -893,7 +893,7 @@
3.82  free variables in rule @{text i} to @{text"x\<^isub>1 \<dots> x\<^isub>k"},
3.83  going through rule @{text i} from left to right.
3.84
3.85 -\subsection{Assumption naming}
3.86 +\subsection{Assumption Naming}
3.87  \label{sec:assm-naming}
3.88
3.89  In any induction, \isacom{case}~@{text name} sets up a list of assumptions
3.90 @@ -919,7 +919,8 @@
3.91  This is where the indexing of fact lists comes in handy, e.g.\
3.92  @{text"name.IH(2)"} or @{text"name.prems(1-2)"}.
3.93
3.94 -\subsection{Rule inversion}
3.95 +\subsection{Rule Inversion}
3.96 +\label{sec:rule-inversion}
3.97
3.98  Rule inversion is case analysis of which rule could have been used to
3.99  derive some fact. The name \concept{rule inversion} emphasizes that we are
3.100 @@ -980,7 +981,7 @@
3.101  text{* Normally not all cases will be impossible. As a simple exercise,
3.102  prove that \mbox{@{prop"\<not> ev(Suc(Suc(Suc 0)))"}.}
3.103
3.107
3.108  So far, rule induction was always applied to goals of the form @{text"I x y z \<Longrightarrow> \<dots>"}

     4.1 --- a/src/Doc/ProgProve/Logic.thy	Mon Jun 10 06:08:17 2013 -0700
4.2 +++ b/src/Doc/ProgProve/Logic.thy	Mon Jun 10 16:04:34 2013 +0200
4.3 @@ -119,7 +119,7 @@
4.4  See \cite{Nipkow-Main} for the wealth of further predefined functions in theory
4.5  @{theory Main}.
4.6
4.7 -\section{Proof automation}
4.8 +\section{Proof Automation}
4.9
4.10  So far we have only seen @{text simp} and @{text auto}: Both perform
4.11  rewriting, both can also prove linear arithmetic facts (no multiplication),
4.12 @@ -258,7 +258,7 @@
4.13  but we will not enlarge on that here.
4.14
4.15
4.16 -\subsection{Trying them all}
4.17 +\subsection{Trying Them All}
4.18
4.19  If you want to try all of the above automatic proof methods you simply type
4.20  \begin{isabelle}
4.21 @@ -271,7 +271,7 @@
4.22  There is also a lightweight variant \isacom{try0} that does not call
4.23  sledgehammer.
4.24
4.25 -\section{Single step proofs}
4.26 +\section{Single Step Proofs}
4.27
4.28  Although automation is nice, it often fails, at least initially, and you need
4.29  to find out why. When @{text fastforce} or @{text blast} simply fail, you have
4.30 @@ -284,7 +284,7 @@
4.31  \]
4.32  to the proof state. We will now examine the details of this process.
4.33
4.34 -\subsection{Instantiating unknowns}
4.35 +\subsection{Instantiating Unknowns}
4.36
4.37  We had briefly mentioned earlier that after proving some theorem,
4.38  Isabelle replaces all free variables @{text x} by so-called \concept{unknowns}
4.39 @@ -312,7 +312,7 @@
4.40  @{text "conjI[where ?P = \"a=b\" and ?Q = \"False\"]"}.
4.41
4.42
4.43 -\subsection{Rule application}
4.44 +\subsection{Rule Application}
4.45
4.46  \concept{Rule application} means applying a rule backwards to a proof state.
4.47  For example, applying rule @{thm[source]conjI} to a proof state
4.48 @@ -338,7 +338,7 @@
4.49  \end{quote}
4.50  This is also called \concept{backchaining} with rule @{text xyz}.
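Backchaining can be sketched concretely like this (an illustrative lemma; @{thm[source] impI} and @{thm[source] conjI} are the standard HOL introduction rules):

```isabelle
lemma "A \<longrightarrow> B \<longrightarrow> A \<and> B"
apply(rule impI)    (* goal: A \<Longrightarrow> B \<longrightarrow> A \<and> B *)
apply(rule impI)    (* goal: A \<Longrightarrow> B \<Longrightarrow> A \<and> B *)
apply(rule conjI)   (* splits into the two subgoals A and B *)
apply(assumption)
apply(assumption)
done
```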
4.51
4.52 -\subsection{Introduction rules}
4.53 +\subsection{Introduction Rules}
4.54
4.55  Conjunction introduction (@{thm[source] conjI}) is one example of a whole
4.56  class of rules known as \concept{introduction rules}. They explain under which
4.57 @@ -388,7 +388,7 @@
4.58  text{*
4.59  Of course this is just an example and could be proved by @{text arith}, too.
4.60
4.61 -\subsection{Forward proof}
4.62 +\subsection{Forward Proof}
4.63  \label{sec:forward-proof}
4.64
4.65  Forward proof means deriving new theorems from old theorems. We have already
4.66 @@ -437,7 +437,7 @@
4.67  text{* In this particular example we could have backchained with
4.68  @{thm[source] Suc_leD}, too, but because the premise is more complicated than the conclusion, this can easily lead to nontermination.
4.69
4.70 -\subsection{Finding theorems}
4.71 +\subsection{Finding Theorems}
4.72
4.73  Command \isacom{find{\isacharunderscorekeyword}theorems} searches for specific theorems in the current
4.74  theory. Search criteria include pattern matching on terms and on names.
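Typical invocations look like this (schematic; the search patterns are illustrative):

```isabelle
find_theorems "_ # _"           (* statements matching a term pattern *)
find_theorems name: "assoc"     (* theorem names containing "assoc" *)
find_theorems "_ + _" name: "comm"  (* criteria can be combined *)
```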
4.75 @@ -450,7 +450,7 @@
4.76  \end{warn}
4.77
4.78
4.79 -\section{Inductive definitions}
4.80 +\section{Inductive Definitions}
4.81  \label{sec:inductive-defs}
4.82
4.83  Inductive definitions are the third important definition facility, after
4.84 @@ -460,7 +460,7 @@
4.85  definition of operational semantics in the second part of the book.
4.86  \endsem
4.87
4.88 -\subsection{An example: even numbers}
4.89 +\subsection{An Example: Even Numbers}
4.90  \label{sec:Logic:even}
4.91
4.92  Here is a simple example of an inductively defined predicate:
4.93 @@ -486,7 +486,7 @@
4.94  @{text "ev 0 \<Longrightarrow> ev (0 + 2) \<Longrightarrow> ev((0 + 2) + 2) = ev 4"}
4.95  \end{quote}
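The inductive definition this derivation unfolds is, in sketch form (as introduced at the start of this subsection of the tutorial):

```isabelle
inductive ev :: "nat \<Rightarrow> bool" where
ev0:  "ev 0" |
evSS: "ev n \<Longrightarrow> ev (n + 2)"
```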
4.96
4.97 -\subsubsection{Rule induction}
4.98 +\subsubsection{Rule Induction}
4.99
4.100  Showing that all even numbers have some property is more complicated.  For
4.101  example, let us prove that the inductive definition of even numbers agrees
4.102 @@ -617,7 +617,7 @@
4.103  default because, in contrast to recursive functions, there is no termination
4.104  requirement for inductive definitions.
4.105
4.106 -\subsubsection{Inductive versus recursive}
4.107 +\subsubsection{Inductive Versus Recursive}
4.108
4.109  We have seen two definitions of the notion of evenness, an inductive and a
4.110  recursive one. Which one is better? Much of the time, the recursive one is
4.111 @@ -642,7 +642,7 @@
4.112  definition, if we are only interested in the positive information, the
4.113  inductive definition may be much simpler.
4.114
4.115 -\subsection{The reflexive transitive closure}
4.116 +\subsection{The Reflexive Transitive Closure}
4.117  \label{sec:star}
4.118
4.119  Evenness is really more conveniently expressed recursively than inductively.
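The definition developed in this subsection can be sketched as follows (with the rule names used in the tutorial):

```isabelle
(* star r is the reflexive transitive closure of relation r *)
inductive star :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool" for r where
refl: "star r x x" |
step: "r x y \<Longrightarrow> star r y z \<Longrightarrow> star r x z"
```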
4.120 @@ -709,7 +709,7 @@
4.121
4.122  text{*
4.123
4.124 -\subsection{The general case}
4.125 +\subsection{The General Case}
4.126
4.127  Inductive definitions have approximately the following general form:
4.128  \begin{quote}

     5.1 --- a/src/Doc/ProgProve/Types_and_funs.thy	Mon Jun 10 06:08:17 2013 -0700
5.2 +++ b/src/Doc/ProgProve/Types_and_funs.thy	Mon Jun 10 16:04:34 2013 +0200
5.3 @@ -5,7 +5,7 @@
5.4  (*>*)
5.5  text{*
5.6  \vspace{-5ex}
5.7 -\section{Type and function definitions}
5.8 +\section{Type and Function Definitions}
5.9
5.10  Type synonyms are abbreviations for existing types, for example
5.11  *}
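A one-line sketch of such a synonym (the example the tutorial itself uses; this particular synonym is in fact already predefined in theory @{theory Main}):

```isabelle
type_synonym string = "char list"
```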
5.12 @@ -129,7 +129,7 @@
5.13
5.14  The ASCII representation of @{text"\<equiv>"} is \texttt{==} or \xsymbol{equiv}.
5.15
5.16 -\subsection{Recursive functions}
5.17 +\subsection{Recursive Functions}
5.18  \label{sec:recursive-funs}
5.19
5.20  Recursive functions are defined with \isacom{fun} by pattern matching
5.21 @@ -200,7 +200,7 @@
5.22  But note that the induction rule does not mention @{text f} at all,
5.23  except in its name, and is applicable independently of @{text f}.
5.24
5.25 -\section{Induction heuristics}
5.26 +\section{Induction Heuristics}
5.27
5.28  We have already noted that theorems about recursive functions are proved by
5.29  induction. In case the function has more than one argument, we have followed
5.30 @@ -345,7 +345,7 @@
5.31  Simplification is often also called \concept{rewriting}
5.32  and simplification rules \concept{rewrite rules}.
5.33
5.34 -\subsection{Simplification rules}
5.35 +\subsection{Simplification Rules}
5.36
5.37  The attribute @{text"simp"} declares theorems to be simplification rules,
5.38  which the simplifier will use automatically. In addition,
5.39 @@ -363,7 +363,7 @@
5.40  Distributivity laws, for example, alter the structure of terms
5.41  and can produce an exponential blow-up.
5.42
5.43 -\subsection{Conditional simplification rules}
5.44 +\subsection{Conditional Simplification Rules}
5.45
5.46  Simplification rules can be conditional.  Before applying such a rule, the
5.47  simplifier will first try to prove the preconditions, again by
5.48 @@ -401,7 +401,7 @@
5.49  leads to nontermination: when trying to rewrite @{prop"n<m"} to @{const True}
5.50  one first has to prove \mbox{@{prop"Suc n < m"}}, which can be rewritten to @{const True} provided @{prop"Suc(Suc n) < m"}, \emph{ad infinitum}.
5.51
5.52 -\subsection{The @{text simp} proof method}
5.53 +\subsection{The @{text simp} Proof Method}
5.54  \label{sec:simp}
5.55
5.56  So far we have only used the proof method @{text auto}.  Method @{text simp}
5.57 @@ -436,7 +436,7 @@
5.58  subgoals. There is also @{text simp_all}, which applies @{text simp} to
5.59  all subgoals.
5.60
5.61 -\subsection{Rewriting with definitions}
5.62 +\subsection{Rewriting With Definitions}
5.63  \label{sec:rewr-defs}
5.64
5.65  Definitions introduced by the command \isacom{definition}
5.66 @@ -457,7 +457,7 @@
5.67  In particular, let-expressions can be unfolded by
5.68  making @{thm[source] Let_def} a simplification rule.
5.69
5.70 -\subsection{Case splitting with @{text simp}}
5.71 +\subsection{Case Splitting With @{text simp}}
5.72
5.73  Goals containing if-expressions are automatically split into two cases by
5.74  @{text simp} using the rule