more standard document preparation within session context;
author wenzelm
Mon, 27 Aug 2012 22:00:04 +0200
changeset 48947 7eee8b2d2099
parent 48946 a9b8344f5196
child 48948 fa49f8890ef3
more standard document preparation within session context;
doc-src/ProgProve/Basics.thy
doc-src/ProgProve/Bool_nat_list.thy
doc-src/ProgProve/Isar.thy
doc-src/ProgProve/LaTeXsugar.thy
doc-src/ProgProve/Logic.thy
doc-src/ProgProve/Makefile
doc-src/ProgProve/MyList.thy
doc-src/ProgProve/Thys/Basics.thy
doc-src/ProgProve/Thys/Bool_nat_list.thy
doc-src/ProgProve/Thys/Isar.thy
doc-src/ProgProve/Thys/LaTeXsugar.thy
doc-src/ProgProve/Thys/Logic.thy
doc-src/ProgProve/Thys/MyList.thy
doc-src/ProgProve/Thys/Types_and_funs.thy
doc-src/ProgProve/Thys/document/Basics.tex
doc-src/ProgProve/Thys/document/Bool_nat_list.tex
doc-src/ProgProve/Thys/document/Isar.tex
doc-src/ProgProve/Thys/document/LaTeXsugar.tex
doc-src/ProgProve/Thys/document/Logic.tex
doc-src/ProgProve/Thys/document/Types_and_funs.tex
doc-src/ProgProve/Types_and_funs.thy
doc-src/ProgProve/bang.eps
doc-src/ProgProve/bang.pdf
doc-src/ProgProve/document/bang.eps
doc-src/ProgProve/document/bang.pdf
doc-src/ProgProve/document/build
doc-src/ProgProve/document/intro-isabelle.tex
doc-src/ProgProve/document/mathpartir.sty
doc-src/ProgProve/document/prelude.tex
doc-src/ProgProve/document/root.bib
doc-src/ProgProve/document/root.tex
doc-src/ProgProve/document/svmono.cls
doc-src/ProgProve/intro-isabelle.tex
doc-src/ProgProve/isabelle.sty
doc-src/ProgProve/isabellesym.sty
doc-src/ProgProve/mathpartir.sty
doc-src/ProgProve/prelude.tex
doc-src/ProgProve/prog-prove.bib
doc-src/ProgProve/prog-prove.tex
doc-src/ProgProve/svmono.cls
doc-src/ROOT
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/ProgProve/Basics.thy	Mon Aug 27 22:00:04 2012 +0200
@@ -0,0 +1,149 @@
+(*<*)
+theory Basics
+imports Main
+begin
+(*>*)
+text{*
+This chapter introduces HOL as a functional programming language and shows
+how to prove properties of functional programs by induction.
+
+\section{Basics}
+
+\subsection{Types, Terms and Formulae}
+\label{sec:TypesTermsForms}
+
+HOL is a typed logic whose type system resembles that of functional
+programming languages. Thus there are
+\begin{description}
+\item[base types,] 
+in particular @{typ bool}, the type of truth values,
+@{typ nat}, the type of natural numbers ($\mathbb{N}$), and @{typ int},
+the type of mathematical integers ($\mathbb{Z}$).
+\item[type constructors,]
+ in particular @{text list}, the type of
+lists, and @{text set}, the type of sets. Type constructors are written
+postfix, e.g.\ @{typ "nat list"} is the type of lists whose elements are
+natural numbers.
+\item[function types,]
+denoted by @{text"\<Rightarrow>"}.
+\item[type variables,]
+  denoted by @{typ 'a}, @{typ 'b} etc., just like in ML\@.
+\end{description}
+
+\concept{Terms} are formed as in functional programming by
+applying functions to arguments. If @{text f} is a function of type
+@{text"\<tau>\<^isub>1 \<Rightarrow> \<tau>\<^isub>2"} and @{text t} is a term of type
+@{text"\<tau>\<^isub>1"} then @{term"f t"} is a term of type @{text"\<tau>\<^isub>2"}. We write @{text "t :: \<tau>"} to mean that term @{text t} has type @{text \<tau>}.
+
+\begin{warn}
+There are many predefined infix symbols like @{text "+"} and @{text"\<le>"}.
+The name of the corresponding binary function is @{term"op +"},
+not just @{text"+"}. That is, @{term"x + y"} is syntactic sugar for
+\noquotes{@{term[source]"op + x y"}}.
+\end{warn}
+
+HOL also supports some basic constructs from functional programming:
+\begin{quote}
+@{text "(if b then t\<^isub>1 else t\<^isub>2)"}\\
+@{text "(let x = t in u)"}\\
+@{text "(case t of pat\<^isub>1 \<Rightarrow> t\<^isub>1 | \<dots> | pat\<^isub>n \<Rightarrow> t\<^isub>n)"}
+\end{quote}
+\begin{warn}
+The above three constructs must always be enclosed in parentheses
+if they occur inside other constructs.
+\end{warn}
+Terms may also contain @{text "\<lambda>"}-abstractions. For example,
+@{term "\<lambda>x. x"} is the identity function.
+
+\concept{Formulae} are terms of type @{text bool}.
+There are the basic constants @{term True} and @{term False} and
+the usual logical connectives (in decreasing order of precedence):
+@{text"\<not>"}, @{text"\<and>"}, @{text"\<or>"}, @{text"\<longrightarrow>"}.
+
+\concept{Equality} is available in the form of the infix function @{text "="}
+of type @{typ "'a \<Rightarrow> 'a \<Rightarrow> bool"}. It also works for formulas, where
+it means ``if and only if''.
+
+\concept{Quantifiers} are written @{prop"\<forall>x. P"} and @{prop"\<exists>x. P"}.
+
+Isabelle automatically computes the type of each variable in a term. This is
+called \concept{type inference}.  Despite type inference, it is sometimes
+necessary to attach explicit \concept{type constraints} (or \concept{type
+annotations}) to a variable or term.  The syntax is @{text "t :: \<tau>"} as in
+\mbox{\noquotes{@{prop[source] "m < (n::nat)"}}}. Type constraints may be
+needed to
+disambiguate terms involving overloaded functions such as @{text "+"}, @{text
+"*"} and @{text"\<le>"}.
+
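+For example (with @{text dbl} a purely illustrative name, not a predefined
+function), the equation @{text"dbl n = n + n"} leaves the type of @{text n}
+open, whereas @{text"dbl n = n + (n::nat)"} fixes @{text"+"} to be addition
+on natural numbers.
+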
+Finally there are the universal quantifier @{text"\<And>"} and the implication
+@{text"\<Longrightarrow>"}. They are part of the Isabelle framework, not the logic
+HOL. Logically, they agree with their HOL counterparts @{text"\<forall>"} and
+@{text"\<longrightarrow>"}, but operationally they behave differently. This will become
+clearer as we go along.
+\begin{warn}
+Right-arrows of all kinds always associate to the right. In particular,
+the formula
+@{text"A\<^isub>1 \<Longrightarrow> A\<^isub>2 \<Longrightarrow> A\<^isub>3"} means @{text "A\<^isub>1 \<Longrightarrow> (A\<^isub>2 \<Longrightarrow> A\<^isub>3)"}.
+The (Isabelle specific) notation \mbox{@{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"}}
+is short for the iterated implication \mbox{@{text"A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> A"}}.
+Sometimes we also employ inference rule notation:
+\inferrule{\mbox{@{text "A\<^isub>1"}}\\ \mbox{@{text "\<dots>"}}\\ \mbox{@{text "A\<^isub>n"}}}
+{\mbox{@{text A}}}
+\end{warn}
+
+
+\subsection{Theories}
+\label{sec:Basic:Theories}
+
+Roughly speaking, a \concept{theory} is a named collection of types,
+functions, and theorems, much like a module in a programming language.
+All the Isabelle text that you ever type needs to go into a theory.
+The general format of a theory @{text T} is
+\begin{quote}
+\isacom{theory} @{text T}\\
+\isacom{imports} @{text "T\<^isub>1 \<dots> T\<^isub>n"}\\
+\isacom{begin}\\
+\emph{definitions, theorems and proofs}\\
+\isacom{end}
+\end{quote}
+where @{text "T\<^isub>1 \<dots> T\<^isub>n"} are the names of existing
+theories that @{text T} is based on. The @{text "T\<^isub>i"} are the
+direct \concept{parent theories} of @{text T}.
+Everything defined in the parent theories (and their parents, recursively) is
+automatically visible. Each theory @{text T} must
+reside in a \concept{theory file} named @{text "T.thy"}.
+
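+For example, a small theory of our own (with purely illustrative names)
+could start like this:
+\begin{quote}
+\isacom{theory} @{text Numbers}\\
+\isacom{imports} @{text Main}\\
+\isacom{begin}\\
+\emph{\dots}\\
+\isacom{end}
+\end{quote}
+It must reside in a file named @{text "Numbers.thy"}.
+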
+\begin{warn}
+HOL contains a theory @{text Main}, the union of all the basic
+predefined theories like arithmetic, lists, sets, etc.
+Unless you know what you are doing, always include @{text Main}
+as a direct or indirect parent of all your theories.
+\end{warn}
+
+In addition to the theories that come with the Isabelle/HOL distribution
+(see \url{http://isabelle.in.tum.de/library/HOL/})
+there is also the \emph{Archive of Formal Proofs}
+at  \url{http://afp.sourceforge.net}, a growing collection of Isabelle theories
+that everybody can contribute to.
+
+\subsection{Quotation Marks}
+
+The textual definition of a theory follows a fixed syntax with keywords like
+\isacommand{begin} and \isacommand{datatype}.  Embedded in this syntax are
+the types and formulae of HOL.  To distinguish the two levels, everything
+HOL-specific (terms and types) must be enclosed in quotation marks:
+\texttt{"}\dots\texttt{"}. To lessen this burden, quotation marks around a
+single identifier can be dropped.  When Isabelle prints a syntax error
+message, it refers to the HOL syntax as the \concept{inner syntax} and the
+enclosing theory language as the \concept{outer syntax}.
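+For example, in
+\begin{quote}
+\isacom{lemma} @{text"\"x + 0 = (x::nat)\""}
+\end{quote}
+the keyword \isacom{lemma} belongs to the outer syntax whereas the quoted
+formula is inner syntax.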
+\sem
+\begin{warn}
+For reasons of readability, we almost never show the quotation marks in this
+book. Consult the accompanying theory files to see where they need to go.
+\end{warn}
+\endsem
+%
+*}
+(*<*)
+end
+(*>*)
\ No newline at end of file
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/ProgProve/Bool_nat_list.thy	Mon Aug 27 22:00:04 2012 +0200
@@ -0,0 +1,420 @@
+(*<*)
+theory Bool_nat_list
+imports Main
+begin
+(*>*)
+
+text{*
+\vspace{-4ex}
+\section{\texorpdfstring{Types @{typ bool}, @{typ nat} and @{text list}}{Types bool, nat and list}}
+
+These are the most important predefined types. We go through them one by one.
+Based on examples we learn how to define (possibly recursive) functions and
+prove theorems about them by induction and simplification.
+
+\subsection{Type @{typ bool}}
+
+The type of boolean values is a predefined datatype
+@{datatype[display] bool}
+with the two values @{const True} and @{const False} and
+with many predefined functions:  @{text "\<not>"}, @{text "\<and>"}, @{text "\<or>"}, @{text
+"\<longrightarrow>"} etc. Here is how conjunction could be defined by pattern matching:
+*}
+
+fun conj :: "bool \<Rightarrow> bool \<Rightarrow> bool" where
+"conj True True = True" |
+"conj _ _ = False"
+
+text{* Both the datatype and function definitions roughly follow the syntax
+of functional programming languages.
+
+\subsection{Type @{typ nat}}
+
+Natural numbers are another predefined datatype:
+@{datatype[display] nat}
+All values of type @{typ nat} are generated by the constructors
+@{text 0} and @{const Suc}. Thus the values of type @{typ nat} are
+@{text 0}, @{term"Suc 0"}, @{term"Suc(Suc 0)"} etc.
+There are many predefined functions: @{text "+"}, @{text "*"}, @{text
+"\<le>"}, etc. Here is how you could define your own addition:
+*}
+
+fun add :: "nat \<Rightarrow> nat \<Rightarrow> nat" where
+"add 0 n = n" |
+"add (Suc m) n = Suc(add m n)"
+
+text{* And here is a proof of the fact that @{prop"add m 0 = m"}: *}
+
+lemma add_02: "add m 0 = m"
+apply(induction m)
+apply(auto)
+done
+(*<*)
+lemma "add m 0 = m"
+apply(induction m)
+(*>*)
+txt{* The \isacom{lemma} command starts the proof and gives the lemma
+a name, @{text add_02}. Properties of recursively defined functions
+need to be established by induction in most cases.
+Command \isacom{apply}@{text"(induction m)"} instructs Isabelle to
+start a proof by induction on @{text m}. In response, it will show the
+following proof state:
+@{subgoals[display,indent=0]}
+The numbered lines are known as \emph{subgoals}.
+The first subgoal is the base case, the second one the induction step.
+The prefix @{text"\<And>m."} is Isabelle's way of saying ``for an arbitrary but fixed @{text m}''. The @{text"\<Longrightarrow>"} separates assumptions from the conclusion.
+The command \isacom{apply}@{text"(auto)"} instructs Isabelle to try
+and prove all subgoals automatically, essentially by simplifying them.
+Because both subgoals are easy, Isabelle can do it.
+The base case @{prop"add 0 0 = 0"} holds by definition of @{const add},
+and the induction step is almost as simple:
+@{text"add\<^raw:~>(Suc m) 0 = Suc(add m 0) = Suc m"}
+using first the definition of @{const add} and then the induction hypothesis.
+In summary, both subproofs rely on simplification with function definitions and
+the induction hypothesis.
+As a result of that final \isacom{done}, Isabelle associates the lemma
+just proved with its name. You can now inspect the lemma with the command
+*}
+
+thm add_02
+
+txt{* which displays @{thm[show_question_marks,display] add_02} The free
+variable @{text m} has been replaced by the \concept{unknown}
+@{text"?m"}. There is no logical difference between the two but an
+operational one: unknowns can be instantiated, which is what you want after
+some lemma has been proved.
+
+Note that there is also a proof method @{text induct}, which behaves almost
+like @{text induction}; the difference is explained in \autoref{ch:Isar}.
+
+\begin{warn}
+Terminology: We use \concept{lemma}, \concept{theorem} and \concept{rule}
+interchangeably for propositions that have been proved.
+\end{warn}
+\begin{warn}
+  Numerals (@{text 0}, @{text 1}, @{text 2}, \dots) and most of the standard
+  arithmetic operations (@{text "+"}, @{text "-"}, @{text "*"}, @{text"\<le>"},
+  @{text"<"} etc) are overloaded: they are available
+  not just for natural numbers but for other types as well.
+  For example, given the goal @{text"x + 0 = x"}, there is nothing to indicate
+  that you are talking about natural numbers. Hence Isabelle can only infer
+  that @{term x} is of some arbitrary type where @{text 0} and @{text"+"}
+  exist. As a consequence, you will be unable to prove the
+  goal. To alert you to such pitfalls, Isabelle flags numerals without a
+  fixed type in its output: @{prop"x+0 = x"}.  In this particular example,
+  you need to include
+  an explicit type constraint, for example @{text"x+0 = (x::nat)"}. If there
+  is enough contextual information this may not be necessary: @{prop"Suc x =
+  x"} automatically implies @{text"x::nat"} because @{term Suc} is not
+  overloaded.
+\end{warn}
+
+\subsubsection{An informal proof}
+
+Above we gave some terse informal explanation of the proof of
+@{prop"add m 0 = m"}. A more detailed informal exposition of the lemma
+might look like this:
+\bigskip
+
+\noindent
+\textbf{Lemma} @{prop"add m 0 = m"}
+
+\noindent
+\textbf{Proof} by induction on @{text m}.
+\begin{itemize}
+\item Case @{text 0} (the base case): @{prop"add 0 0 = 0"}
+  holds by definition of @{const add}.
+\item Case @{term"Suc m"} (the induction step):
+  We assume @{prop"add m 0 = m"}, the induction hypothesis (IH),
+  and we need to show @{text"add (Suc m) 0 = Suc m"}.
+  The proof is as follows:\smallskip
+
+  \begin{tabular}{@ {}rcl@ {\quad}l@ {}}
+  @{term "add (Suc m) 0"} &@{text"="}& @{term"Suc(add m 0)"}
+  & by definition of @{text add}\\
+              &@{text"="}& @{term "Suc m"} & by IH
+  \end{tabular}
+\end{itemize}
+Throughout this book, \concept{IH} will stand for ``induction hypothesis''.
+
+We have now seen three proofs of @{prop"add m 0 = m"}: the Isabelle one, the
+terse four lines explaining the base case and the induction step, and just now a
+model of a traditional inductive proof. The three proofs differ in the level
+of detail given and the intended reader: the Isabelle proof is for the
+machine, the informal proofs are for humans. Although this book concentrates
+on Isabelle proofs, it is important to be able to rephrase those proofs
+as informal text comprehensible to a reader familiar with traditional
+mathematical proofs. Later on we will introduce an Isabelle proof language
+that is closer to traditional informal mathematical language and is often
+directly readable.
+
+\subsection{Type @{text list}}
+
+Although lists are already predefined, we define our own copy just for
+demonstration purposes:
+*}
+(*<*)
+apply(auto)
+done 
+declare [[names_short]]
+(*>*)
+datatype 'a list = Nil | Cons 'a "'a list"
+
+text{*
+\begin{itemize}
+\item Type @{typ "'a list"} is the type of lists over elements of type @{typ 'a}. Because @{typ 'a} is a type variable, lists are in fact \concept{polymorphic}: the elements of a list can be of arbitrary type (but must all be of the same type).
+\item Lists have two constructors: @{const Nil}, the empty list, and @{const Cons}, which puts an element (of type @{typ 'a}) in front of a list (of type @{typ "'a list"}).
+Hence all lists are of the form @{const Nil}, or @{term"Cons x Nil"},
+or @{term"Cons x (Cons y Nil)"} etc.
+\item \isacom{datatype} requires no quotation marks on the
+left-hand side, but on the right-hand side each of the argument
+types of a constructor needs to be enclosed in quotation marks, unless
+it is just an identifier (e.g.\ @{typ nat} or @{typ 'a}).
+\end{itemize}
+We also define two standard functions, append and reverse: *}
+
+fun app :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
+"app Nil ys = ys" |
+"app (Cons x xs) ys = Cons x (app xs ys)"
+
+fun rev :: "'a list \<Rightarrow> 'a list" where
+"rev Nil = Nil" |
+"rev (Cons x xs) = app (rev xs) (Cons x Nil)"
+
+text{* By default, variables @{text xs}, @{text ys} and @{text zs} are of
+@{text list} type.
+
+Command \isacom{value} evaluates a term. For example, *}
+
+value "rev(Cons True (Cons False Nil))"
+
+text{* yields the result @{value "rev(Cons True (Cons False Nil))"}. This works symbolically, too: *}
+
+value "rev(Cons a (Cons b Nil))"
+
+text{* yields @{value "rev(Cons a (Cons b Nil))"}.
+\medskip
+
+Figure~\ref{fig:MyList} shows the theory created so far.
+Because @{text list}, @{const Nil}, @{const Cons} etc are already predefined,
+ Isabelle prints qualified (long) names when executing this theory, for example, @{text MyList.Nil}
+ instead of @{const Nil}.
+ To suppress the qualified names you can insert the command
+ \texttt{declare [[names\_short]]}.
+ This is not recommended in general; we use it only for this unusual example.
+% Notice where the
+%quotations marks are needed that we mostly sweep under the carpet.  In
+%particular, notice that \isacom{datatype} requires no quotation marks on the
+%left-hand side, but that on the right-hand side each of the argument
+%types of a constructor needs to be enclosed in quotation marks.
+
+\begin{figure}[htbp]
+\begin{alltt}
+\input{MyList.thy}\end{alltt}
+\caption{A Theory of Lists}
+\label{fig:MyList}
+\end{figure}
+
+\subsubsection{Structural Induction for Lists}
+
+Just as for natural numbers, there is a proof principle of induction for
+lists. Induction over a list is essentially induction over the length of
+the list, although the length remains implicit. To prove that some property
+@{text P} holds for all lists @{text xs}, i.e.\ \mbox{@{prop"P(xs)"}},
+you need to prove
+\begin{enumerate}
+\item the base case @{prop"P(Nil)"} and
+\item the inductive case @{prop"P(Cons x xs)"} under the assumption @{prop"P(xs)"}, for some arbitrary but fixed @{text x} and @{text xs}.
+\end{enumerate}
+This is often called \concept{structural induction}.
+
+\subsection{The Proof Process}
+
+We will now demonstrate the typical proof process, which involves
+the formulation and proof of auxiliary lemmas.
+Our goal is to show that reversing a list twice produces the original
+list. *}
+
+theorem rev_rev [simp]: "rev(rev xs) = xs"
+
+txt{* Commands \isacom{theorem} and \isacom{lemma} are
+interchangeable and merely indicate the importance we attach to a
+proposition. Via the bracketed attribute @{text simp} we also tell Isabelle
+to make the eventual theorem a \concept{simplification rule}: future proofs
+involving simplification will replace occurrences of @{term"rev(rev xs)"} by
+@{term"xs"}. The proof is by induction: *}
+
+apply(induction xs)
+
+txt{*
+As explained above, we obtain two subgoals, namely the base case (@{const Nil}) and the induction step (@{const Cons}):
+@{subgoals[display,indent=0,margin=65]}
+Let us try to solve both goals automatically:
+*}
+
+apply(auto)
+
+txt{*Subgoal~1 is proved, and disappears; the simplified version
+of subgoal~2 becomes the new subgoal~1:
+@{subgoals[display,indent=0,margin=70]}
+In order to simplify this subgoal further, a lemma suggests itself.
+
+\subsubsection{A First Lemma}
+
+We insert the following lemma in front of the main theorem:
+*}
+(*<*)
+oops
+(*>*)
+lemma rev_app [simp]: "rev(app xs ys) = app (rev ys) (rev xs)"
+
+txt{* There are two variables that we could induct on: @{text xs} and
+@{text ys}. Because @{const app} is defined by recursion on
+the first argument, @{text xs} is the correct one:
+*}
+
+apply(induction xs)
+
+txt{* This time not even the base case is solved automatically: *}
+apply(auto)
+txt{*
+\vspace{-5ex}
+@{subgoals[display,goals_limit=1]}
+Again, we need to abandon this proof attempt and prove another simple lemma
+first.
+
+\subsubsection{A Second Lemma}
+
+We again try the canonical proof procedure:
+*}
+(*<*)
+oops
+(*>*)
+lemma app_Nil2 [simp]: "app xs Nil = xs"
+apply(induction xs)
+apply(auto)
+done
+
+text{*
+Thankfully, this worked.
+Now we can continue with our stuck proof attempt of the first lemma:
+*}
+
+lemma rev_app [simp]: "rev(app xs ys) = app (rev ys) (rev xs)"
+apply(induction xs)
+apply(auto)
+
+txt{*
+We find that this time @{text"auto"} solves the base case, but the
+induction step merely simplifies to
+@{subgoals[display,indent=0,goals_limit=1]}
+The missing lemma is associativity of @{const app},
+which we insert in front of the failed lemma @{text rev_app}.
+
+\subsubsection{Associativity of @{const app}}
+
+The canonical proof procedure succeeds without further ado:
+*}
+(*<*)oops(*>*)
+lemma app_assoc [simp]: "app (app xs ys) zs = app xs (app ys zs)"
+apply(induction xs)
+apply(auto)
+done
+(*<*)
+lemma rev_app [simp]: "rev(app xs ys) = app (rev ys)(rev xs)"
+apply(induction xs)
+apply(auto)
+done
+
+theorem rev_rev [simp]: "rev(rev xs) = xs"
+apply(induction xs)
+apply(auto)
+done
+(*>*)
+text{*
+Finally the proofs of @{thm[source] rev_app} and @{thm[source] rev_rev}
+succeed, too.
+
+\subsubsection{Another informal proof}
+
+Here is the informal proof of associativity of @{const app}
+corresponding to the Isabelle proof above.
+\bigskip
+
+\noindent
+\textbf{Lemma} @{prop"app (app xs ys) zs = app xs (app ys zs)"}
+
+\noindent
+\textbf{Proof} by induction on @{text xs}.
+\begin{itemize}
+\item Case @{text Nil}: \ @{prop"app (app Nil ys) zs = app ys zs"} @{text"="}
+  \mbox{@{term"app Nil (app ys zs)"}} \ holds by definition of @{text app}.
+\item Case @{text"Cons x xs"}: We assume
+  \begin{center} \hfill @{term"app (app xs ys) zs"} @{text"="}
+  @{term"app xs (app ys zs)"} \hfill (IH) \end{center}
+  and we need to show
+  \begin{center} @{prop"app (app (Cons x xs) ys) zs = app (Cons x xs) (app ys zs)"}.\end{center}
+  The proof is as follows:\smallskip
+
+  \begin{tabular}{@ {}l@ {\quad}l@ {}}
+  @{term"app (app (Cons x xs) ys) zs"}\\
+  @{text"= app (Cons x (app xs ys)) zs"} & by definition of @{text app}\\
+  @{text"= Cons x (app (app xs ys) zs)"} & by definition of @{text app}\\
+  @{text"= Cons x (app xs (app ys zs))"} & by IH\\
+  @{text"= app (Cons x xs) (app ys zs)"} & by definition of @{text app}
+  \end{tabular}
+\end{itemize}
+\medskip
+
+\noindent Didn't we say earlier that all proofs are by simplification? But
+in both cases, going from left to right, the last equality step is not a
+simplification at all! In the base case it is @{prop"app ys zs = app Nil (app
+ys zs)"}. It appears almost mysterious because we suddenly complicate the
+term by appending @{text Nil} on the left. What is really going on is this:
+when proving some equality \mbox{@{prop"s = t"}}, both @{text s} and @{text t} are
+simplified to some common term @{text u}.  This heuristic for equality proofs
+works well for a functional programming context like ours. In the base case
+@{text s} is @{term"app (app Nil ys) zs"}, @{text t} is @{term"app Nil (app
+ys zs)"}, and @{text u} is @{term"app ys zs"}.
+
+\subsection{Predefined lists}
+\label{sec:predeflists}
+
+Isabelle's predefined lists are the same as the ones above, but with
+more syntactic sugar:
+\begin{itemize}
+\item @{text "[]"} is @{const Nil},
+\item @{term"x # xs"} is @{term"Cons x xs"},
+\item @{text"[x\<^isub>1, \<dots>, x\<^isub>n]"} is @{text"x\<^isub>1 # \<dots> # x\<^isub>n # []"}, and
+\item @{term "xs @ ys"} is @{term"app xs ys"}.
+\end{itemize}
+There is also a large library of predefined functions.
+The most important ones are the length function
+@{text"length :: 'a list \<Rightarrow> nat"} (with the obvious definition),
+and the map function that applies a function to all elements of a list:
+\begin{isabelle}
+\isacom{fun} @{const map} @{text"::"} @{typ[source] "('a \<Rightarrow> 'b) \<Rightarrow> 'a list \<Rightarrow> 'b list"}\\
+@{text"\""}@{thm map.simps(1)}@{text"\" |"}\\
+@{text"\""}@{thm map.simps(2)}@{text"\""}
+\end{isabelle}
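+As a quick check of this syntax (the term below is merely our own example),
+we can evaluate an application of @{const map} to a concrete list:
+*}
+
+value "map Suc [1, 2, 3 :: nat]"
+
+text{*
+This yields @{text"[2, 3, 4]"}.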
+\sem
+Also useful are the \concept{head} of a list, its first element,
+and the \concept{tail}, the rest of the list:
+\begin{isabelle}
+\isacom{fun} @{text"hd :: 'a list \<Rightarrow> 'a"}\\
+@{prop"hd(x#xs) = x"}
+\end{isabelle}
+\begin{isabelle}
+\isacom{fun} @{text"tl :: 'a list \<Rightarrow> 'a list"}\\
+@{prop"tl [] = []"} @{text"|"}\\
+@{prop"tl(x#xs) = xs"}
+\end{isabelle}
+Note that since HOL is a logic of total functions, @{term"hd []"} is defined,
+but we do not know what the result is. That is, @{term"hd []"} is not undefined
+but underdefined.
+\endsem
+%
+*}
+(*<*)
+end
+(*>*)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/ProgProve/Isar.thy	Mon Aug 27 22:00:04 2012 +0200
@@ -0,0 +1,997 @@
+(*<*)
+theory Isar
+imports LaTeXsugar
+begin
+ML{* quick_and_dirty := true *}
+(*>*)
+text{*
+Apply-scripts are unreadable and hard to maintain. The language of choice
+for larger proofs is \concept{Isar}. The two key features of Isar are:
+\begin{itemize}
+\item It is structured, not linear.
+\item It is readable without running it because
+you need to state what you are proving at any given point.
+\end{itemize}
+Whereas apply-scripts are like assembly language programs, Isar proofs
+are like structured programs with comments. A typical Isar proof looks like this:
+*}text{*
+\begin{tabular}{@ {}l}
+\isacom{proof}\\
+\quad\isacom{assume} @{text"\""}$\mathit{formula}_0$@{text"\""}\\
+\quad\isacom{have} @{text"\""}$\mathit{formula}_1$@{text"\""} \quad\isacom{by} @{text simp}\\
+\quad\vdots\\
+\quad\isacom{have} @{text"\""}$\mathit{formula}_n$@{text"\""} \quad\isacom{by} @{text blast}\\
+\quad\isacom{show} @{text"\""}$\mathit{formula}_{n+1}$@{text"\""} \quad\isacom{by} @{text \<dots>}\\
+\isacom{qed}
+\end{tabular}
+*}text{*
+It proves $\mathit{formula}_0 \Longrightarrow \mathit{formula}_{n+1}$
+(provided each proof step succeeds).
+The intermediate \isacom{have} statements are merely stepping stones
+on the way towards the \isacom{show} statement that proves the actual
+goal. In more detail, this is the Isar core syntax:
+\medskip
+
+\begin{tabular}{@ {}lcl@ {}}
+\textit{proof} &=& \isacom{by} \textit{method}\\
+      &$\mid$& \isacom{proof} [\textit{method}] \ \textit{step}$^*$ \ \isacom{qed}
+\end{tabular}
+\medskip
+
+\begin{tabular}{@ {}lcl@ {}}
+\textit{step} &=& \isacom{fix} \textit{variables} \\
+      &$\mid$& \isacom{assume} \textit{proposition} \\
+      &$\mid$& [\isacom{from} \textit{fact}$^+$] (\isacom{have} $\mid$ \isacom{show}) \ \textit{proposition} \ \textit{proof}
+\end{tabular}
+\medskip
+
+\begin{tabular}{@ {}lcl@ {}}
+\textit{proposition} &=& [\textit{name}:] @{text"\""}\textit{formula}@{text"\""}
+\end{tabular}
+\medskip
+
+\begin{tabular}{@ {}lcl@ {}}
+\textit{fact} &=& \textit{name} \ $\mid$ \ \dots
+\end{tabular}
+\medskip
+
+\noindent A proof can either be an atomic \isacom{by} with a single proof
+method which must finish off the statement being proved, for example @{text
+auto}.  Or it can be a \isacom{proof}--\isacom{qed} block of multiple
+steps. Such a block can optionally begin with a proof method that indicates
+how to start off the proof, e.g.\ \mbox{@{text"(induction xs)"}}.
+
+A step either assumes a proposition or states a proposition
+together with its proof. The optional \isacom{from} clause
+indicates which facts are to be used in the proof.
+Intermediate propositions are stated with \isacom{have}, the overall goal
+with \isacom{show}. A step can also introduce new local variables with
+\isacom{fix}. Logically, \isacom{fix} introduces @{text"\<And>"}-quantified
+variables, \isacom{assume} introduces the assumption of an implication
+(@{text"\<Longrightarrow>"}) and \isacom{have}/\isacom{show} the conclusion.
+
+Propositions are optionally named formulas. These names can be referred to in
+later \isacom{from} clauses. In the simplest case, a fact is such a name.
+But facts can also be composed with @{text OF} and @{text of} as shown in
+\S\ref{sec:forward-proof}---hence the \dots\ in the above grammar.  Note
+that assumptions, intermediate \isacom{have} statements and global lemmas all
+have the same status and are thus collectively referred to as
+\concept{facts}.
+
+Fact names can stand for whole lists of facts. For example, if @{text f} is
+defined by command \isacom{fun}, @{text"f.simps"} refers to the whole list of
+recursion equations defining @{text f}. Individual facts can be selected by
+writing @{text"f.simps(2)"}, whole sublists by @{text"f.simps(2-4)"}.
+
+
+\section{Isar by example}
+
+We show a number of proofs of Cantor's theorem that a function from a set to
+its powerset cannot be surjective, illustrating various features of Isar. The
+constant @{const surj} is predefined.
+*}
+
+lemma "\<not> surj(f :: 'a \<Rightarrow> 'a set)"
+proof
+  assume 0: "surj f"
+  from 0 have 1: "\<forall>A. \<exists>a. A = f a" by(simp add: surj_def)
+  from 1 have 2: "\<exists>a. {x. x \<notin> f x} = f a" by blast
+  from 2 show "False" by blast
+qed
+
+text{*
+The \isacom{proof} command lacks an explicit method specifying how to perform
+the proof. In such cases Isabelle tries to use some standard introduction
+rule, in the above case for @{text"\<not>"}:
+\[
+\inferrule{
+\mbox{@{thm (prem 1) notI}}}
+{\mbox{@{thm (concl) notI}}}
+\]
+In order to prove @{prop"~ P"}, assume @{text P} and show @{text False}.
+Thus we may assume @{prop"surj f"}. The proof shows that names of propositions
+may be (single!) digits---meaningful names are hard to invent and are often
+not necessary. Both \isacom{have} steps are obvious. The second one introduces
+the diagonal set @{term"{x. x \<notin> f x}"}, the key idea in the proof.
+If you wonder why @{text 2} directly implies @{text False}: from @{text 2}
+it follows that @{prop"a \<notin> f a \<longleftrightarrow> a \<in> f a"}.
+
+\subsection{@{text this}, @{text then}, @{text hence} and @{text thus}}
+
+Labels should be avoided. They interrupt the flow of the reader who has to
+scan the context for the point where the label was introduced. Ideally, the
+proof is a linear flow, where the output of one step becomes the input of the
+next step, piping the previously proved fact into the next proof, just like
+in a UNIX pipe. In such cases the predefined name @{text this} can be used
+to refer to the proposition proved in the previous step. This allows us to
+eliminate all labels from our proof (we suppress the \isacom{lemma} statement):
+*}
+(*<*)
+lemma "\<not> surj(f :: 'a \<Rightarrow> 'a set)"
+(*>*)
+proof
+  assume "surj f"
+  from this have "\<exists>a. {x. x \<notin> f x} = f a" by(auto simp: surj_def)
+  from this show "False" by blast
+qed
+
+text{* We have also taken the opportunity to compress the two \isacom{have}
+steps into one.
+
+To compact the text further, Isar has a few convenient abbreviations:
+\medskip
+
+\begin{tabular}{rcl}
+\isacom{then} &=& \isacom{from} @{text this}\\
+\isacom{thus} &=& \isacom{then} \isacom{show}\\
+\isacom{hence} &=& \isacom{then} \isacom{have}
+\end{tabular}
+\medskip
+
+\noindent
+With the help of these abbreviations the proof becomes
+*}
+(*<*)
+lemma "\<not> surj(f :: 'a \<Rightarrow> 'a set)"
+(*>*)
+proof
+  assume "surj f"
+  hence "\<exists>a. {x. x \<notin> f x} = f a" by(auto simp: surj_def)
+  thus "False" by blast
+qed
+text{*
+
+There are two further linguistic variations:
+\medskip
+
+\begin{tabular}{rcl}
+(\isacom{have}$\mid$\isacom{show}) \ \textit{prop} \ \isacom{using} \ \textit{facts}
+&=&
+\isacom{from} \ \textit{facts} \ (\isacom{have}$\mid$\isacom{show}) \ \textit{prop}\\
+\isacom{with} \ \textit{facts} &=& \isacom{from} \ \textit{facts} \isa{this}
+\end{tabular}
+\medskip
+
+\noindent The \isacom{using} idiom de-emphasizes the used facts by moving them
+behind the proposition.
+
+\subsection{Structured lemma statements: \isacom{fixes}, \isacom{assumes}, \isacom{shows}}
+
+Lemmas can also be stated in a more structured fashion. To demonstrate this
+feature with Cantor's theorem, we rephrase @{prop"\<not> surj f"}
+a little:
+*}
+
+lemma
+  fixes f :: "'a \<Rightarrow> 'a set"
+  assumes s: "surj f"
+  shows "False"
+
+txt{* The optional \isacom{fixes} part allows you to state the types of
+variables up front rather than by decorating one of their occurrences in the
+formula with a type constraint. The key advantage of the structured format is
+the \isacom{assumes} part that allows you to name each assumption; multiple
+assumptions can be separated by \isacom{and}. The
+\isacom{shows} part gives the goal. The actual theorem that will come out of
+the proof is @{prop"surj f \<Longrightarrow> False"}, but during the proof the assumption
+@{prop"surj f"} is available under the name @{text s} like any other fact.
+*}
+
+proof -
+  have "\<exists> a. {x. x \<notin> f x} = f a" using s
+    by(auto simp: surj_def)
+  thus "False" by blast
+qed
+
+text{* In the \isacom{have} step the assumption @{prop"surj f"} is now
+referenced by its name @{text s}. The duplication of @{prop"surj f"} in the
+above proofs (once in the statement of the lemma, once in its proof) has been
+eliminated.
+
+\begin{warn}
+Note the dash after the \isacom{proof}
+command.  It is the null method that does nothing to the goal. Leaving it out
+would ask Isabelle to try some suitable introduction rule on the goal @{const
+False}---but there is no suitable introduction rule and \isacom{proof}
+would fail.
+\end{warn}
+
+Stating a lemma with \isacom{assumes}-\isacom{shows} implicitly introduces the
+name @{text assms} that stands for the list of all assumptions. You can refer
+to individual assumptions by @{text"assms(1)"}, @{text"assms(2)"} etc,
+thus obviating the need to name them individually.
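+In the lemma above, for example, @{text"assms(1)"} refers to the same fact
+as @{text s}, and the proof could just as well have said
+\isacom{using}~@{text"assms(1)"}.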
+
+\section{Proof patterns}
+
+We show a number of important basic proof patterns. Many of them arise from
+the rules of natural deduction that are applied by \isacom{proof} by
+default. The patterns are phrased in terms of \isacom{show} but work for
+\isacom{have} and \isacom{lemma}, too.
+
+We start with two forms of \concept{case analysis}:
+starting from a formula @{text P} we have the two cases @{text P} and
+@{prop"~P"}, and starting from a fact @{prop"P \<or> Q"}
+we have the two cases @{text P} and @{text Q}:
+*}text_raw{*
+\begin{tabular}{@ {}ll@ {}}
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "R" proof-(*>*)
+show "R"
+proof cases
+  assume "P"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "R" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+next
+  assume "\<not> P"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "R" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+qed(*<*)oops(*>*)
+text_raw {* }
+\end{minipage}
+&
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "R" proof-(*>*)
+have "P \<or> Q" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+then show "R"
+proof
+  assume "P"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "R" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+next
+  assume "Q"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "R" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+qed(*<*)oops(*>*)
+
+text_raw {* }
+\end{minipage}
+\end{tabular}
+\medskip
+\begin{isamarkuptext}%
+How to prove a logical equivalence:
+\end{isamarkuptext}%
+\isa{%
+*}
+(*<*)lemma "P\<longleftrightarrow>Q" proof-(*>*)
+show "P \<longleftrightarrow> Q"
+proof
+  assume "P"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "Q" (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
+next
+  assume "Q"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "P" (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
+qed(*<*)qed(*>*)
+text_raw {* }
+\medskip
+\begin{isamarkuptext}%
+Proofs by contradiction:
+\end{isamarkuptext}%
+\begin{tabular}{@ {}ll@ {}}
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "\<not> P" proof-(*>*)
+show "\<not> P"
+proof
+  assume "P"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "False" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+qed(*<*)oops(*>*)
+
+text_raw {* }
+\end{minipage}
+&
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "P" proof-(*>*)
+show "P"
+proof (rule ccontr)
+  assume "\<not>P"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "False" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+qed(*<*)oops(*>*)
+
+text_raw {* }
+\end{minipage}
+\end{tabular}
+\medskip
+\begin{isamarkuptext}%
+The name @{thm[source] ccontr} stands for ``classical contradiction''.
+
+How to prove quantified formulas:
+\end{isamarkuptext}%
+\begin{tabular}{@ {}ll@ {}}
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "ALL x. P x" proof-(*>*)
+show "\<forall>x. P(x)"
+proof
+  fix x
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "P(x)" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+qed(*<*)oops(*>*)
+
+text_raw {* }
+\end{minipage}
+&
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "EX x. P(x)" proof-(*>*)
+show "\<exists>x. P(x)"
+proof
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "P(witness)" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+qed
+(*<*)oops(*>*)
+
+text_raw {* }
+\end{minipage}
+\end{tabular}
+\medskip
+\begin{isamarkuptext}%
+In the proof of \noquotes{@{prop[source]"\<forall>x. P(x)"}},
+the step \isacom{fix}~@{text x} introduces a locally fixed variable @{text x}
+into the subproof, the proverbial ``arbitrary but fixed value''.
+Instead of @{text x} we could have chosen any name in the subproof.
+In the proof of \noquotes{@{prop[source]"\<exists>x. P(x)"}},
+@{text witness} is some arbitrary
+term for which we can prove that it satisfies @{text P}.
+
+How to reason forward from \noquotes{@{prop[source] "\<exists>x. P(x)"}}:
+\end{isamarkuptext}%
+*}
+(*<*)lemma True proof- assume 1: "EX x. P x"(*>*)
+have "\<exists>x. P(x)" (*<*)by(rule 1)(*>*)txt_raw{*\ $\dots$\\*}
+then obtain x where p: "P(x)" by blast
+(*<*)oops(*>*)
+text{*
+After the \isacom{obtain} step, @{text x} (we could have chosen any name)
+is a fixed local
+variable, and @{text p} is the name of the fact
+\noquotes{@{prop[source] "P(x)"}}.
+This pattern works for one or more @{text x}.
+As an example of the \isacom{obtain} command, here is the proof of
+Cantor's theorem in more detail:
+*}
+
+lemma "\<not> surj(f :: 'a \<Rightarrow> 'a set)"
+proof
+  assume "surj f"
+  hence  "\<exists>a. {x. x \<notin> f x} = f a" by(auto simp: surj_def)
+  then obtain a where  "{x. x \<notin> f x} = f a"  by blast
+  hence  "a \<notin> f a \<longleftrightarrow> a \<in> f a"  by blast
+  thus "False" by blast
+qed
+
+text_raw{*
+\begin{isamarkuptext}%
+
+Finally, how to prove set equality and subset relationship:
+\end{isamarkuptext}%
+\begin{tabular}{@ {}ll@ {}}
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "A = (B::'a set)" proof-(*>*)
+show "A = B"
+proof
+  show "A \<subseteq> B" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+next
+  show "B \<subseteq> A" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+qed(*<*)qed(*>*)
+
+text_raw {* }
+\end{minipage}
+&
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "A <= (B::'a set)" proof-(*>*)
+show "A \<subseteq> B"
+proof
+  fix x
+  assume "x \<in> A"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "x \<in> B" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+qed(*<*)qed(*>*)
+
+text_raw {* }
+\end{minipage}
+\end{tabular}
+\begin{isamarkuptext}%
+\section{Streamlining proofs}
+
+\subsection{Pattern matching and quotations}
+
+In the proof patterns shown above, formulas are often duplicated.
+This can make the text harder to read, write and maintain. Pattern matching
+is an abbreviation mechanism to avoid such duplication. Writing
+\begin{quote}
+\isacom{show} \ \textit{formula} @{text"("}\isacom{is} \textit{pattern}@{text")"}
+\end{quote}
+matches the pattern against the formula, thus instantiating the unknowns in
+the pattern for later use. As an example, consider the proof pattern for
+@{text"\<longleftrightarrow>"}:
+\end{isamarkuptext}%
+*}
+(*<*)lemma "formula\<^isub>1 \<longleftrightarrow> formula\<^isub>2" proof-(*>*)
+show "formula\<^isub>1 \<longleftrightarrow> formula\<^isub>2" (is "?L \<longleftrightarrow> ?R")
+proof
+  assume "?L"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "?R" (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
+next
+  assume "?R"
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show "?L" (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
+qed(*<*)qed(*>*)
+
+text{* Instead of duplicating @{text"formula\<^isub>i"} in the text, we introduce
+the two abbreviations @{text"?L"} and @{text"?R"} by pattern matching.
+Pattern matching works wherever a formula is stated, in particular
+with \isacom{have} and \isacom{lemma}.
+
+The unknown @{text"?thesis"} is implicitly matched against any goal stated by
+\isacom{lemma} or \isacom{show}. Here is a typical example: *}
+
+lemma "formula"
+proof -
+  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
+  show ?thesis (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
+qed
+
+text{* 
+Unknowns can also be instantiated with \isacom{let} commands
+\begin{quote}
+\isacom{let} @{text"?t"} = @{text"\""}\textit{some-big-term}@{text"\""}
+\end{quote}
+Later proof steps can refer to @{text"?t"}:
+\begin{quote}
+\isacom{have} @{text"\""}\dots @{text"?t"} \dots@{text"\""}
+\end{quote}
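+For example (a hypothetical snippet, assuming @{text xs} is a list):
+\begin{quote}
+\isacom{let} @{text"?t = \"rev(rev xs)\""}\\
+\isacom{have} @{text"\"?t = xs\""} \quad\isacom{by} @{text simp}
+\end{quote}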
+\begin{warn}
+Names of facts are introduced with @{text"name:"} and refer to proved
+theorems. Unknowns @{text"?X"} refer to terms or formulas.
+\end{warn}
+
+Although abbreviations shorten the text, the reader needs to remember what
+they stand for. Similarly for names of facts. Names like @{text 1}, @{text 2}
+and @{text 3} are not helpful and should only be used in short proofs. For
+longer proofs, descriptive names are better. But look at this example:
+\begin{quote}
+\isacom{have} \ @{text"x_gr_0: \"x > 0\""}\\
+$\vdots$\\
+\isacom{from} @{text "x_gr_0"} \dots
+\end{quote}
+The name is longer than the fact it stands for! Short facts do not need names;
+one can refer to them easily by quoting them:
+\begin{quote}
+\isacom{have} \ @{text"\"x > 0\""}\\
+$\vdots$\\
+\isacom{from} @{text "`x>0`"} \dots
+\end{quote}
+Note that the quotes around @{text"x>0"} are \concept{back quotes}.
+They refer to the fact not by name but by value.
+
+\subsection{\isacom{moreover}}
+
+Sometimes one needs a number of facts to enable some deduction. Of course
+one can name these facts individually, as shown on the right,
+but one can also combine them with \isacom{moreover}, as shown on the left:
+*}text_raw{*
+\begin{tabular}{@ {}ll@ {}}
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "P" proof-(*>*)
+have "P\<^isub>1" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+moreover have "P\<^isub>2" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+moreover
+txt_raw{*\\$\vdots$\\\hspace{-1.4ex}*}(*<*)have "True" ..(*>*)
+moreover have "P\<^isub>n" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+ultimately have "P"  (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+(*<*)oops(*>*)
+
+text_raw {* }
+\end{minipage}
+&
+\qquad
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "P" proof-(*>*)
+have lab\<^isub>1: "P\<^isub>1" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+have lab\<^isub>2: "P\<^isub>2" (*<*)sorry(*>*)txt_raw{*\ $\dots$*}
+txt_raw{*\\$\vdots$\\\hspace{-1.4ex}*}
+have lab\<^isub>n: "P\<^isub>n" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+from lab\<^isub>1 lab\<^isub>2 txt_raw{*\ $\dots$\\*}
+have "P"  (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+(*<*)oops(*>*)
+
+text_raw {* }
+\end{minipage}
+\end{tabular}
+\begin{isamarkuptext}%
+The \isacom{moreover} version is no shorter but expresses the structure more
+clearly and avoids new names.
+
+\subsection{Raw proof blocks}
+
+Sometimes one would like to prove some lemma locally within a proof:
+a lemma that shares the current context of assumptions but
+has its own assumptions and is generalized over its locally fixed
+variables at the end. This is what a \concept{raw proof block} does:
+\begin{quote}
+@{text"{"} \isacom{fix} @{text"x\<^isub>1 \<dots> x\<^isub>n"}\\
+\mbox{}\ \ \ \isacom{assume} @{text"A\<^isub>1 \<dots> A\<^isub>m"}\\
+\mbox{}\ \ \ $\vdots$\\
+\mbox{}\ \ \ \isacom{have} @{text"B"}\\
+@{text"}"}
+\end{quote}
+proves @{text"\<lbrakk> A\<^isub>1; \<dots> ; A\<^isub>m \<rbrakk> \<Longrightarrow> B"}
+where all @{text"x\<^isub>i"} have been replaced by unknowns @{text"?x\<^isub>i"}.
+\begin{warn}
+The conclusion of a raw proof block is \emph{not} indicated by \isacom{show}
+but is simply the final \isacom{have}.
+\end{warn}
+
+As an example we prove a simple fact about divisibility on integers.
+The definition of @{text "dvd"} is @{thm dvd_def}.
+\end{isamarkuptext}%
+*}
+
+lemma fixes a b :: int assumes "b dvd (a+b)" shows "b dvd a"
+proof -
+  { fix k assume k: "a+b = b*k"
+    have "\<exists>k'. a = b*k'"
+    proof
+      show "a = b*(k - 1)" using k by(simp add: algebra_simps)
+    qed }
+  then show ?thesis using assms by(auto simp add: dvd_def)
+qed
+
+text{* Note that the result of a raw proof block has no name. In this example
+it was directly piped (via \isacom{then}) into the final proof, but it can
+also be named for later reference: you simply follow the block directly by a
+\isacom{note} command:
+\begin{quote}
+\isacom{note} \ @{text"name = this"}
+\end{quote}
+This introduces a new name @{text name} that refers to @{text this},
+the fact just proved, in this case the preceding block. In general,
+\isacom{note} introduces a new name for one or more facts.
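+For example, following the raw proof block in the lemma above with
+\begin{quote}
+\isacom{note} \ @{text"ex = this"}
+\end{quote}
+would make the result of the block available under the (arbitrarily chosen)
+name @{text ex}.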
+
+\section{Case analysis and induction}
+
+\subsection{Datatype case analysis}
+
+We have seen case analysis on formulas. Now we want to distinguish
+which form some term takes: is it @{text 0} or of the form @{term"Suc n"},
+is it @{term"[]"} or of the form @{term"x#xs"}, etc. Here is a typical example
+proof by case analysis on the form of @{text xs}:
+*}
+
+lemma "length(tl xs) = length xs - 1"
+proof (cases xs)
+  assume "xs = []"
+  thus ?thesis by simp
+next
+  fix y ys assume "xs = y#ys"
+  thus ?thesis by simp
+qed
+
+text{* Function @{text tl} (``tail'') is defined by @{thm tl.simps(1)} and
+@{thm tl.simps(2)}. Note that the result type of @{const length} is @{typ nat}
+and @{prop"0 - 1 = (0::nat)"}.
+
+This proof pattern works for any term @{text t} whose type is a datatype.
+The goal has to be proved for each constructor @{text C}:
+\begin{quote}
+\isacom{fix} \ @{text"x\<^isub>1 \<dots> x\<^isub>n"} \isacom{assume} @{text"\"t = C x\<^isub>1 \<dots> x\<^isub>n\""}
+\end{quote}
+Each case can be written in a more compact form by means of the \isacom{case}
+command:
+\begin{quote}
+\isacom{case} @{text "(C x\<^isub>1 \<dots> x\<^isub>n)"}
+\end{quote}
+This is equivalent to the explicit \isacom{fix}-\isacom{assume} line
+but also gives the assumption @{text"\"t = C x\<^isub>1 \<dots> x\<^isub>n\""} a name: @{text C},
+like the constructor.
+Here is the \isacom{case} version of the proof above:
+*}
+(*<*)lemma "length(tl xs) = length xs - 1"(*>*)
+proof (cases xs)
+  case Nil
+  thus ?thesis by simp
+next
+  case (Cons y ys)
+  thus ?thesis by simp
+qed
+
+text{* Remember that @{text Nil} and @{text Cons} are the alphanumeric names
+for @{text"[]"} and @{text"#"}. The names of the assumptions
+are not used because they are directly piped (via \isacom{thus})
+into the proof of the claim.
+
+\subsection{Structural induction}
+
+We illustrate structural induction with an example based on natural numbers:
+the sum (@{text"\<Sum>"}) of the first @{text n} natural numbers
+(@{text"{0..n::nat}"}) is equal to \mbox{@{term"n*(n+1) div 2::nat"}}.
+Never mind the details, just focus on the pattern:
+*}
+
+lemma "\<Sum>{0..n::nat} = n*(n+1) div 2"
+proof (induction n)
+  show "\<Sum>{0..0::nat} = 0*(0+1) div 2" by simp
+next
+  fix n assume "\<Sum>{0..n::nat} = n*(n+1) div 2"
+  thus "\<Sum>{0..Suc n} = Suc n*(Suc n+1) div 2" by simp
+qed
+
+text{* Except for the rewrite steps, everything is explicitly given. This
+makes the proof easily readable, but the duplication means it is tedious to
+write and maintain. Here is how pattern
+matching can completely avoid any duplication: *}
+
+lemma "\<Sum>{0..n::nat} = n*(n+1) div 2" (is "?P n")
+proof (induction n)
+  show "?P 0" by simp
+next
+  fix n assume "?P n"
+  thus "?P(Suc n)" by simp
+qed
+
+text{* The first line introduces an abbreviation @{text"?P n"} for the goal.
+Pattern matching @{text"?P n"} with the goal instantiates @{text"?P"} to the
+function @{term"\<lambda>n. \<Sum>{0..n::nat} = n*(n+1) div 2"}.  Now the proposition to
+be proved in the base case can be written as @{text"?P 0"}, the induction
+hypothesis as @{text"?P n"}, and the conclusion of the induction step as
+@{text"?P(Suc n)"}.
+
+Induction also provides the \isacom{case} idiom that abbreviates
+the \isacom{fix}-\isacom{assume} step. The above proof becomes
+*}
+(*<*)lemma "\<Sum>{0..n::nat} = n*(n+1) div 2"(*>*)
+proof (induction n)
+  case 0
+  show ?case by simp
+next
+  case (Suc n)
+  thus ?case by simp
+qed
+
+text{*
+The unknown @{text "?case"} is set in each case to the required
+claim, i.e.\ @{text"?P 0"} and \mbox{@{text"?P(Suc n)"}} in the above proof,
+without requiring the user to define a @{text "?P"}. The general
+pattern for induction over @{typ nat} is shown on the left-hand side:
+*}text_raw{*
+\begin{tabular}{@ {}ll@ {}}
+\begin{minipage}[t]{.4\textwidth}
+\isa{%
+*}
+(*<*)lemma "P(n::nat)" proof -(*>*)
+show "P(n)"
+proof (induction n)
+  case 0
+  txt_raw{*\\\mbox{}\ \ $\vdots$\\\mbox{}\hspace{-1ex}*}
+  show ?case (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
+next
+  case (Suc n)
+  txt_raw{*\\\mbox{}\ \ $\vdots$\\\mbox{}\hspace{-1ex}*}
+  show ?case (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
+qed(*<*)qed(*>*)
+
+text_raw {* }
+\end{minipage}
+&
+\begin{minipage}[t]{.4\textwidth}
+~\\
+~\\
+\isacom{let} @{text"?case = \"P(0)\""}\\
+~\\
+~\\
+~\\[1ex]
+\isacom{fix} @{text n} \isacom{assume} @{text"Suc: \"P(n)\""}\\
+\isacom{let} @{text"?case = \"P(Suc n)\""}\\
+\end{minipage}
+\end{tabular}
+\medskip
+*}
+text{*
+On the right side you can see what the \isacom{case} command
+on the left stands for.
+
+In case the goal is an implication, induction does one more thing: the
+proposition to be proved in each case is not the whole implication but only
+its conclusion; the premises of the implication are immediately made
+assumptions of that case. That is, if in the above proof we replace
+\isacom{show}~@{text"P(n)"} by
+\mbox{\isacom{show}~@{text"A(n) \<Longrightarrow> P(n)"}}
+then \isacom{case}~@{text 0} stands for
+\begin{quote}
+\isacom{assume} \ @{text"0: \"A(0)\""}\\
+\isacom{let} @{text"?case = \"P(0)\""}
+\end{quote}
+and \isacom{case}~@{text"(Suc n)"} stands for
+\begin{quote}
+\isacom{fix} @{text n}\\
+\isacom{assume} @{text"Suc:"}
+  \begin{tabular}[t]{l}@{text"\"A(n) \<Longrightarrow> P(n)\""}\\@{text"\"A(Suc n)\""}\end{tabular}\\
+\isacom{let} @{text"?case = \"P(Suc n)\""}
+\end{quote}
+The list of assumptions @{text Suc} is actually subdivided
+into @{text"Suc.IH"}, the induction hypotheses (here @{text"A(n) \<Longrightarrow> P(n)"})
+and @{text"Suc.prems"}, the premises of the goal being proved
+(here @{text"A(Suc n)"}).
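+As a concrete (if slightly contrived) instance of this pattern, in the
+following proof the premise @{text"n \<noteq> 0"} automatically becomes an
+assumption of each case and is essential in the @{text 0} case:
+*}
+
+lemma "n \<noteq> 0 \<Longrightarrow> Suc(n - 1) = n"
+proof (induction n)
+  case 0
+  thus ?case by simp
+next
+  case (Suc n)
+  thus ?case by simp
+qed
+
+text{*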
+
+Induction works for any datatype.
+Proving a goal @{text"\<lbrakk> A\<^isub>1(x); \<dots>; A\<^isub>k(x) \<rbrakk> \<Longrightarrow> P(x)"}
+by induction on @{text x} generates a proof obligation for each constructor
+@{text C} of the datatype. The command @{text"case (C x\<^isub>1 \<dots> x\<^isub>n)"}
+performs the following steps:
+\begin{enumerate}
+\item \isacom{fix} @{text"x\<^isub>1 \<dots> x\<^isub>n"}
+\item \isacom{assume} the induction hypotheses (calling them @{text C.IH})
+ and the premises \mbox{@{text"A\<^isub>i(C x\<^isub>1 \<dots> x\<^isub>n)"}} (calling them @{text"C.prems"})
+ and calling the whole list @{text C}
+\item \isacom{let} @{text"?case = \"P(C x\<^isub>1 \<dots> x\<^isub>n)\""}
+\end{enumerate}
+
+\subsection{Rule induction}
+
+Recall the inductive and recursive definitions of even numbers in
+\autoref{sec:inductive-defs}:
+*}
+
+inductive ev :: "nat \<Rightarrow> bool" where
+ev0: "ev 0" |
+evSS: "ev n \<Longrightarrow> ev(Suc(Suc n))"
+
+fun even :: "nat \<Rightarrow> bool" where
+"even 0 = True" |
+"even (Suc 0) = False" |
+"even (Suc(Suc n)) = even n"
+
+text{* We recast the proof of @{prop"ev n \<Longrightarrow> even n"} in Isar. The
+left column shows the actual proof text, the right column shows
+the implicit effect of the two \isacom{case} commands:*}text_raw{*
+\begin{tabular}{@ {}l@ {\qquad}l@ {}}
+\begin{minipage}[t]{.5\textwidth}
+\isa{%
+*}
+
+lemma "ev n \<Longrightarrow> even n"
+proof(induction rule: ev.induct)
+  case ev0
+  show ?case by simp
+next
+  case evSS
+
+
+
+  thus ?case by simp
+qed
+
+text_raw {* }
+\end{minipage}
+&
+\begin{minipage}[t]{.5\textwidth}
+~\\
+~\\
+\isacom{let} @{text"?case = \"even 0\""}\\
+~\\
+~\\
+\isacom{fix} @{text n}\\
+\isacom{assume} @{text"evSS:"}
+  \begin{tabular}[t]{l} @{text"\"ev n\""}\\@{text"\"even n\""}\end{tabular}\\
+\isacom{let} @{text"?case = \"even(Suc(Suc n))\""}\\
+\end{minipage}
+\end{tabular}
+\medskip
+*}
+text{*
+The proof resembles structural induction, but the induction rule is given
+explicitly and the names of the cases are the names of the rules in the
+inductive definition.
+Let us examine the two assumptions named @{thm[source]evSS}:
+@{prop "ev n"} is the premise of rule @{thm[source]evSS}, which we may assume
+because we are in the case where that rule was used; @{prop"even n"}
+is the induction hypothesis.
+\begin{warn}
+Because each \isacom{case} command introduces a list of assumptions
+named like the case name, which is the name of a rule of the inductive
+definition, those rules now need to be accessed with a qualified name, here
+@{thm[source] ev.ev0} and @{thm[source] ev.evSS}.
+\end{warn}
+
+In the case @{thm[source]evSS} of the proof above we have pretended that the
+system fixes a variable @{text n}.  But unless the user provides the name
+@{text n}, the system will just invent its own name that cannot be referred
+to.  In the above proof, we do not need to refer to it, hence we do not give
+it a specific name. In case one needs to refer to it one writes
+\begin{quote}
+\isacom{case} @{text"(evSS m)"}
+\end{quote}
+just like \isacom{case}~@{text"(Suc n)"} in earlier structural inductions.
+The name @{text m} is an arbitrary choice. As a result,
+case @{thm[source] evSS} is derived from a renamed version of
+rule @{thm[source] evSS}: @{text"ev m \<Longrightarrow> ev(Suc(Suc m))"}.
+Here is an example with a (contrived) intermediate step that refers to @{text m}:
+*}
+
+lemma "ev n \<Longrightarrow> even n"
+proof(induction rule: ev.induct)
+  case ev0 show ?case by simp
+next
+  case (evSS m)
+  have "even(Suc(Suc m)) = even m" by simp
+  thus ?case using `even m` by blast
+qed
+
+text{*
+\indent
+In general, let @{text I} be a (for simplicity unary) inductively defined
+predicate and let the rules in the definition of @{text I}
+be called @{text "rule\<^isub>1"}, \dots, @{text "rule\<^isub>n"}. A proof by rule
+induction follows this pattern:
+*}
+
+(*<*)
+inductive I where rule\<^isub>1: "I()" |  rule\<^isub>2: "I()" |  rule\<^isub>n: "I()"
+lemma "I x \<Longrightarrow> P x" proof-(*>*)
+show "I x \<Longrightarrow> P x"
+proof(induction rule: I.induct)
+  case rule\<^isub>1
+  txt_raw{*\\[-.4ex]\mbox{}\ \ $\vdots$\\[-.4ex]\mbox{}\hspace{-1ex}*}
+  show ?case (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+next
+  txt_raw{*\\[-.4ex]$\vdots$\\[-.4ex]\mbox{}\hspace{-1ex}*}
+(*<*)
+  case rule\<^isub>2
+  show ?case sorry
+(*>*)
+next
+  case rule\<^isub>n
+  txt_raw{*\\[-.4ex]\mbox{}\ \ $\vdots$\\[-.4ex]\mbox{}\hspace{-1ex}*}
+  show ?case (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
+qed(*<*)qed(*>*)
+
+text{*
+One can provide explicit variable names by writing
+\isacom{case}~@{text"(rule\<^isub>i x\<^isub>1 \<dots> x\<^isub>k)"}, thus renaming the first @{text k}
+free variables in rule @{text i} to @{text"x\<^isub>1 \<dots> x\<^isub>k"},
+going through rule @{text i} from left to right.
+
+\subsection{Assumption naming}
+
+In any induction, \isacom{case}~@{text name} sets up a list of assumptions
+also called @{text name}, which is subdivided into three parts:
+\begin{description}
+\item[@{text name.IH}] contains the induction hypotheses.
+\item[@{text name.hyps}] contains all the other hypotheses of this case in the
+induction rule. For rule inductions these are the hypotheses of rule
+@{text name}, for structural inductions these are empty.
+\item[@{text name.prems}] contains the (suitably instantiated) premises
+of the statement being proved, i.e. the @{text A\<^isub>i} when
+proving @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"}.
+\end{description}
+\begin{warn}
+Proof method @{text induct} differs from @{text induction}
+only in this naming policy: @{text induct} does not distinguish
+@{text IH} from @{text hyps} but subsumes @{text IH} under @{text hyps}.
+\end{warn}
+
+More complicated inductive proofs than the ones we have seen so far
+often need to refer to specific assumptions---just @{text name} or even
+@{text name.prems} and @{text name.IH} can be too unspecific.
+This is where the indexing of fact lists comes in handy, e.g.\
+@{text"name.IH(2)"} or @{text"name.prems(1-2)"}.
+
+\subsection{Rule inversion}
+
+Rule inversion is case analysis of which rule could have been used to
+derive some fact. The name \concept{rule inversion} emphasizes that we are
+reasoning backwards: by which rules could some given fact have been proved?
+For the inductive definition of @{const ev}, rule inversion can be summarized
+like this:
+@{prop[display]"ev n \<Longrightarrow> n = 0 \<or> (EX k. n = Suc(Suc k) \<and> ev k)"}
+The realisation in Isabelle is a case analysis.
+A simple example is the proof that @{prop"ev n \<Longrightarrow> ev (n - 2)"}. We
+already went through the details informally in \autoref{sec:Logic:even}. This
+is the Isar proof:
+*}
+(*<*)
+notepad
+begin fix n
+(*>*)
+  assume "ev n"
+  from this have "ev(n - 2)"
+  proof cases
+    case ev0 thus "ev(n - 2)" by (simp add: ev.ev0)
+  next
+    case (evSS k) thus "ev(n - 2)" by (simp add: ev.evSS)
+  qed
+(*<*)
+end
+(*>*)
+
+text{* The key point here is that a case analysis over some inductively
+defined predicate is triggered by piping the given fact
+(here: \isacom{from}~@{text this}) into a proof by @{text cases}.
+Let us examine the assumptions available in each case. In case @{text ev0}
+we have @{text"n = 0"} and in case @{text evSS} we have @{prop"n = Suc(Suc k)"}
+and @{prop"ev k"}. In each case the assumptions are available under the name
+of the case; there is no fine-grained naming scheme as there is for induction.
+
+Sometimes some rules could not have been used to derive the given fact
+because constructors clash. As an extreme example consider
+rule inversion applied to @{prop"ev(Suc 0)"}: neither rule @{text ev0} nor
+rule @{text evSS} can yield @{prop"ev(Suc 0)"} because @{text"Suc 0"} unifies
+neither with @{text 0} nor with @{term"Suc(Suc n)"}. Impossible cases do not
+have to be proved. Hence we can prove anything from @{prop"ev(Suc 0)"}:
+*}
+(*<*)
+notepad begin fix P
+(*>*)
+  assume "ev(Suc 0)" then have P by cases
+(*<*)
+end
+(*>*)
+
+text{* That is, @{prop"ev(Suc 0)"} is simply not provable: *}
+
+lemma "\<not> ev(Suc 0)"
+proof
+  assume "ev(Suc 0)" then show False by cases
+qed
+
+text{* Normally not all cases will be impossible. As a simple exercise,
+prove that \mbox{@{prop"\<not> ev(Suc(Suc(Suc 0)))"}.}
+*}
+
+(*
+lemma "\<not> ev(Suc(Suc(Suc 0)))"
+proof
+  assume "ev(Suc(Suc(Suc 0)))"
+  then show False
+  proof cases
+    case evSS
+    from `ev(Suc 0)` show False by cases
+  qed
+qed
+*)
+
+(*<*)
+end
+(*>*)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/ProgProve/LaTeXsugar.thy	Mon Aug 27 22:00:04 2012 +0200
@@ -0,0 +1,47 @@
+(*  Title:      HOL/Library/LaTeXsugar.thy
+    Author:     Gerwin Klein, Tobias Nipkow, Norbert Schirmer
+    Copyright   2005 NICTA and TUM
+*)
+
+(*<*)
+theory LaTeXsugar
+imports Main
+begin
+
+(* DUMMY *)
+consts DUMMY :: 'a ("\<^raw:\_>")
+
+(* THEOREMS *)
+notation (Rule output)
+  "==>"  ("\<^raw:\mbox{}\inferrule{\mbox{>_\<^raw:}}>\<^raw:{\mbox{>_\<^raw:}}>")
+
+syntax (Rule output)
+  "_bigimpl" :: "asms \<Rightarrow> prop \<Rightarrow> prop"
+  ("\<^raw:\mbox{}\inferrule{>_\<^raw:}>\<^raw:{\mbox{>_\<^raw:}}>")
+
+  "_asms" :: "prop \<Rightarrow> asms \<Rightarrow> asms" 
+  ("\<^raw:\mbox{>_\<^raw:}\\>/ _")
+
+  "_asm" :: "prop \<Rightarrow> asms" ("\<^raw:\mbox{>_\<^raw:}>")
+
+notation (Axiom output)
+  "Trueprop"  ("\<^raw:\mbox{}\inferrule{\mbox{}}{\mbox{>_\<^raw:}}>")
+
+notation (IfThen output)
+  "==>"  ("\<^raw:{\normalsize{}>If\<^raw:\,}> _/ \<^raw:{\normalsize \,>then\<^raw:\,}>/ _.")
+syntax (IfThen output)
+  "_bigimpl" :: "asms \<Rightarrow> prop \<Rightarrow> prop"
+  ("\<^raw:{\normalsize{}>If\<^raw:\,}> _ /\<^raw:{\normalsize \,>then\<^raw:\,}>/ _.")
+  "_asms" :: "prop \<Rightarrow> asms \<Rightarrow> asms" ("\<^raw:\mbox{>_\<^raw:}> /\<^raw:{\normalsize \,>and\<^raw:\,}>/ _")
+  "_asm" :: "prop \<Rightarrow> asms" ("\<^raw:\mbox{>_\<^raw:}>")
+
+notation (IfThenNoBox output)
+  "==>"  ("\<^raw:{\normalsize{}>If\<^raw:\,}> _/ \<^raw:{\normalsize \,>then\<^raw:\,}>/ _.")
+syntax (IfThenNoBox output)
+  "_bigimpl" :: "asms \<Rightarrow> prop \<Rightarrow> prop"
+  ("\<^raw:{\normalsize{}>If\<^raw:\,}> _ /\<^raw:{\normalsize \,>then\<^raw:\,}>/ _.")
+  "_asms" :: "prop \<Rightarrow> asms \<Rightarrow> asms" ("_ /\<^raw:{\normalsize \,>and\<^raw:\,}>/ _")
+  "_asm" :: "prop \<Rightarrow> asms" ("_")
+
+end
+(*>*)
\ No newline at end of file
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/ProgProve/Logic.thy	Mon Aug 27 22:00:04 2012 +0200
@@ -0,0 +1,734 @@
+(*<*)
+theory Logic
+imports LaTeXsugar
+begin
+(*>*)
+text{*
+\vspace{-5ex}
+\section{Logic and proof beyond equality}
+\label{sec:Logic}
+
+\subsection{Formulas}
+
+The core syntax of formulas (\textit{form} below)
+provides the standard logical constructs, in decreasing order of precedence:
+\[
+\begin{array}{rcl}
+
+\mathit{form} & ::= &
+  @{text"(form)"} ~\mid~
+  @{const True} ~\mid~
+  @{const False} ~\mid~
+  @{prop "term = term"}\\
+ &\mid& @{prop"\<not> form"} ~\mid~
+  @{prop "form \<and> form"} ~\mid~
+  @{prop "form \<or> form"} ~\mid~
+  @{prop "form \<longrightarrow> form"}\\
+ &\mid& @{prop"\<forall>x. form"} ~\mid~  @{prop"\<exists>x. form"}
+\end{array}
+\]
+Terms are the ones we have seen all along, built from constants, variables,
+function application and @{text"\<lambda>"}-abstraction, including all the syntactic
+sugar like infix symbols, @{text "if"}, @{text "case"} etc.
+\begin{warn}
+Remember that formulas are simply terms of type @{text bool}. Hence
+@{text "="} also works for formulas. Beware that @{text"="} has a higher
+precedence than the other logical operators. Hence @{prop"s = t \<and> A"} means
+@{text"(s = t) \<and> A"}, and @{prop"A\<and>B = B\<and>A"} means @{text"A \<and> (B = B) \<and> A"}.
+Logical equivalence can also be written with
+@{text "\<longleftrightarrow>"} instead of @{text"="}, where @{text"\<longleftrightarrow>"} has the same low
+precedence as @{text"\<longrightarrow>"}. Hence @{text"A \<and> B \<longleftrightarrow> B \<and> A"} really means
+@{text"(A \<and> B) \<longleftrightarrow> (B \<and> A)"}.
+\end{warn}
+\begin{warn}
+Quantifiers need to be enclosed in parentheses if they are nested within
+other constructs (just like @{text "if"}, @{text case} and @{text let}).
+\end{warn}
+The most frequent logical symbols have the following ASCII representations:
+\begin{center}
+\begin{tabular}{l@ {\qquad}l@ {\qquad}l}
+@{text "\<forall>"} & \xsymbol{forall} & \texttt{ALL}\\
+@{text "\<exists>"} & \xsymbol{exists} & \texttt{EX}\\
+@{text "\<lambda>"} & \xsymbol{lambda} & \texttt{\%}\\
+@{text "\<longrightarrow>"} & \texttt{-{}->}\\
+@{text "\<longleftrightarrow>"} & \texttt{<->}\\
+@{text "\<and>"} & \texttt{/\char`\\} & \texttt{\&}\\
+@{text "\<or>"} & \texttt{\char`\\/} & \texttt{|}\\
+@{text "\<not>"} & \xsymbol{not} & \texttt{\char`~}\\
+@{text "\<noteq>"} & \xsymbol{noteq} & \texttt{\char`~=}
+\end{tabular}
+\end{center}
+The first column shows the symbols, the second column shows ASCII
+representations that Isabelle interfaces convert into the corresponding
+symbol, and the third column shows ASCII representations that stay fixed.
+\begin{warn}
+The implication @{text"\<Longrightarrow>"} is part of the Isabelle framework. It structures
+theorems and proof states, separating assumptions from conclusions.
+The implication @{text"\<longrightarrow>"} is part of the logic HOL and can occur inside the
+formulas that make up the assumptions and conclusion.
+Theorems should be of the form @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"},
+not @{text"A\<^isub>1 \<and> \<dots> \<and> A\<^isub>n \<longrightarrow> A"}. Both are logically equivalent
+but the first one works better when using the theorem in further proofs.
+\end{warn}
+
+\subsection{Sets}
+
+Sets of elements of type @{typ 'a} have type @{typ"'a set"}.
+They can be finite or infinite. Sets come with the usual notation:
+\begin{itemize}
+\item @{term"{}"},\quad @{text"{e\<^isub>1,\<dots>,e\<^isub>n}"}
+\item @{prop"e \<in> A"},\quad @{prop"A \<subseteq> B"}
+\item @{term"A \<union> B"},\quad @{term"A \<inter> B"},\quad @{term"A - B"},\quad @{term"-A"}
+\end{itemize}
+and much more. @{const UNIV} is the set of all elements of some type.
+Set comprehension is written @{term"{x. P}"}
+rather than @{text"{x | P}"}, to emphasize the variable binding nature
+of the construct.
+\begin{warn}
+In @{term"{x. P}"} the @{text x} must be a variable. Set comprehension
+involving a proper term @{text t} must be written
+@{term[source]"{t | x y z. P}"},
+where @{text "x y z"} are the free variables in @{text t}.
+This is just a shorthand for @{term"{v. EX x y z. v = t \<and> P}"}, where
+@{text v} is a new variable.
+\end{warn}
+
+Here are the ASCII representations of the mathematical symbols:
+\begin{center}
+\begin{tabular}{l@ {\quad}l@ {\quad}l}
+@{text "\<in>"} & \texttt{\char`\\\char`\<in>} & \texttt{:}\\
+@{text "\<subseteq>"} & \texttt{\char`\\\char`\<subseteq>} & \texttt{<=}\\
+@{text "\<union>"} & \texttt{\char`\\\char`\<union>} & \texttt{Un}\\
+@{text "\<inter>"} & \texttt{\char`\\\char`\<inter>} & \texttt{Int}
+\end{tabular}
+\end{center}
+Sets also allow bounded quantifications @{prop"ALL x : A. P"} and
+@{prop"EX x : A. P"}.
+
+\subsection{Proof automation}
+
+So far we have only seen @{text simp} and @{text auto}: Both perform
+rewriting, both can also prove linear arithmetic facts (no multiplication),
+and @{text auto} is also able to prove simple logical or set-theoretic goals:
+*}
+
+lemma "\<forall>x. \<exists>y. x = y"
+by auto
+
+lemma "A \<subseteq> B \<inter> C \<Longrightarrow> A \<subseteq> B \<union> C"
+by auto
+
+text{* where
+\begin{quote}
+\isacom{by} \textit{proof-method}
+\end{quote}
+is short for
+\begin{quote}
+\isacom{apply} \textit{proof-method}\\
+\isacom{done}
+\end{quote}
+The key characteristics of both @{text simp} and @{text auto} are
+\begin{itemize}
+\item They show you where they got stuck, giving you an idea how to continue.
+\item They perform the obvious steps but are highly incomplete.
+\end{itemize}
+A proof method is \concept{complete} if it can prove all true formulas.
+There is no complete proof method for HOL, not even in theory.
+Hence all our proof methods only differ in how incomplete they are.
+
+A proof method that is still incomplete but tries harder than @{text auto} is
+@{text fastforce}.  It either succeeds or fails, it acts on the first
+subgoal only, and it can be modified just like @{text auto}, e.g.\
+with @{text "simp add"}. Here is a typical example of what @{text fastforce}
+can do:
+*}
+
+lemma "\<lbrakk> \<forall>xs \<in> A. \<exists>ys. xs = ys @ ys;  us \<in> A \<rbrakk>
+   \<Longrightarrow> \<exists>n. length us = n+n"
+by fastforce
+
+text{* This lemma is out of reach for @{text auto} because of the
+quantifiers.  Even @{text fastforce} fails when the quantifier structure
+becomes more complicated. In a few cases, its slow version @{text force}
+succeeds where @{text fastforce} fails.
+
+The method of choice for complex logical goals is @{text blast}. In the
+following example, @{text T} and @{text A} are two binary predicates. It
+is shown that if @{text T} is total, @{text A} is antisymmetric and @{text T} is
+a subset of @{text A}, then @{text A} is a subset of @{text T}:
+*}
+
+lemma
+  "\<lbrakk> \<forall>x y. T x y \<or> T y x;
+     \<forall>x y. A x y \<and> A y x \<longrightarrow> x = y;
+     \<forall>x y. T x y \<longrightarrow> A x y \<rbrakk>
+   \<Longrightarrow> \<forall>x y. A x y \<longrightarrow> T x y"
+by blast
+
+text{*
+We leave it to the reader to figure out why this lemma is true.
+Method @{text blast}
+\begin{itemize}
+\item is (in principle) a complete proof procedure for first-order formulas,
+  a fragment of HOL. In practice there is a search bound.
+\item does no rewriting and knows very little about equality.
+\item covers logic, sets and relations.
+\item either succeeds or fails.
+\end{itemize}
+Because of its strength in logic and sets and its weakness in equality reasoning, it complements the earlier proof methods.
+
+
+\subsubsection{Sledgehammer}
+
+Command \isacom{sledgehammer} calls a number of external automatic
+theorem provers (ATPs) that run for up to 30 seconds searching for a
+proof. Some of these ATPs are part of the Isabelle installation, others are
+queried over the internet. If successful, a proof command is generated and can
+be inserted into your proof.  The biggest win of \isacom{sledgehammer} is
+that it will take into account the whole lemma library and you do not need to
+feed in any lemma explicitly. For example,*}
+
+lemma "\<lbrakk> xs @ ys = ys @ xs;  length xs = length ys \<rbrakk> \<Longrightarrow> xs = ys"
+
+txt{* cannot be solved by any of the standard proof methods, but
+\isacom{sledgehammer} finds the following proof: *}
+
+by (metis append_eq_conv_conj)
+
+text{* We do not explain how the proof was found but what this command
+means. For a start, Isabelle does not trust external tools (and in particular
+not the translations from Isabelle's logic to those tools!)
+and insists on a proof that it can check. This is what @{text metis} does.
+It is given a list of lemmas and tries to find a proof just using those lemmas
+(and pure logic). In contrast to @{text simp} and friends that know a lot of
+lemmas already, using @{text metis} manually is tedious because one has
+to find all the relevant lemmas first. But that is precisely what
+\isacom{sledgehammer} does for us.
+In this case lemma @{thm[source]append_eq_conv_conj} alone suffices:
+@{thm[display] append_eq_conv_conj}
+We leave it to the reader to figure out why this lemma suffices to prove
+the above lemma, even without any knowledge of what the functions @{const take}
+and @{const drop} do. Keep in mind that the variables in the two lemmas
+are independent of each other, despite the same names, and that you can
+substitute arbitrary values for the free variables in a lemma.
+
+Just as for the other proof methods we have seen, there is no guarantee that
+\isacom{sledgehammer} will find a proof if it exists. Nor is
+\isacom{sledgehammer} superior to the other proof methods.  They are
+incomparable. Therefore it is recommended to apply @{text simp} or @{text
+auto} before invoking \isacom{sledgehammer} on what is left.
+
+\subsubsection{Arithmetic}
+
+By arithmetic formulas we mean formulas involving variables, numbers, @{text
+"+"}, @{text"-"}, @{text "="}, @{text "<"}, @{text "\<le>"} and the usual logical
+connectives @{text"\<not>"}, @{text"\<and>"}, @{text"\<or>"}, @{text"\<longrightarrow>"},
+@{text"\<longleftrightarrow>"}. Strictly speaking, this is known as \concept{linear arithmetic}
+because it does not involve multiplication, although multiplication with
+numbers, e.g.\ @{text"2*n"} is allowed. Such formulas can be proved by
+@{text arith}:
+*}
+
+lemma "\<lbrakk> (a::nat) \<le> x + b; 2*x < c \<rbrakk> \<Longrightarrow> 2*a + 1 \<le> 2*b + c"
+by arith
+
+text{* In fact, @{text auto} and @{text simp} can prove many linear
+arithmetic formulas already, like the one above, by calling a weak but fast
+version of @{text arith}. Hence it is usually not necessary to invoke
+@{text arith} explicitly.
+
+The above example involves natural numbers, but integers (type @{typ int})
+and real numbers (type @{text real}) are supported as well, as are a number
+of further operators like @{const min} and @{const max}. On @{typ nat} and
+@{typ int}, @{text arith} can even prove theorems with quantifiers in them,
+but we will not enlarge on that here.
+
+
+\subsubsection{Trying them all}
+
+If you want to try all of the above automatic proof methods you simply type
+\begin{isabelle}
+\isacom{try}
+\end{isabelle}
+You can also add specific simplification and introduction rules:
+\begin{isabelle}
+\isacom{try} @{text"simp: \<dots> intro: \<dots>"}
+\end{isabelle}
+There is also a lightweight variant \isacom{try0} that does not call
+sledgehammer.
+
+\subsection{Single step proofs}
+
+Although automation is nice, it often fails, at least initially, and you need
+to find out why. When @{text fastforce} or @{text blast} simply fail, you have
+no clue why. At this point, the stepwise
+application of proof rules may be necessary. For example, if @{text blast}
+fails on @{prop"A \<and> B"}, you want to attack the two
+conjuncts @{text A} and @{text B} separately. This can
+be achieved by applying \emph{conjunction introduction}
+\[ @{thm[mode=Rule,show_question_marks]conjI}\ @{text conjI}
+\]
+to the proof state. We will now examine the details of this process.
+
+\subsubsection{Instantiating unknowns}
+
+We had briefly mentioned earlier that after proving some theorem,
+Isabelle replaces all free variables @{text x} by so-called \concept{unknowns}
+@{text "?x"}. We can see this clearly in rule @{thm[source] conjI}.
+These unknowns can later be instantiated explicitly or implicitly:
+\begin{itemize}
+\item By hand, using @{text of}.
+The expression @{text"conjI[of \"a=b\" \"False\"]"}
+instantiates the unknowns in @{thm[source] conjI} from left to right with the
+two formulas @{text"a=b"} and @{text False}, yielding the rule
+@{thm[display,mode=Rule]conjI[of "a=b" False]}
+
+In general, @{text"th[of string\<^isub>1 \<dots> string\<^isub>n]"} instantiates
+the unknowns in the theorem @{text th} from left to right with the terms
+@{text string\<^isub>1} to @{text string\<^isub>n}.
+
+\item By unification. \concept{Unification} is the process of making two
+terms syntactically equal by suitable instantiations of unknowns. For example,
+unifying @{text"?P \<and> ?Q"} with \mbox{@{prop"a=b \<and> False"}} instantiates
+@{text "?P"} with @{prop "a=b"} and @{text "?Q"} with @{prop False}.
+\end{itemize}
+We need not instantiate all unknowns. If we want to skip a particular one we
+can just write @{text"_"} instead, for example @{text "conjI[of _ \"False\"]"}.
+Unknowns can also be instantiated by name, for example
+@{text "conjI[where ?P = \"a=b\" and ?Q = \"False\"]"}.
+
+
+\subsubsection{Rule application}
+
+\concept{Rule application} means applying a rule backwards to a proof state.
+For example, applying rule @{thm[source]conjI} to a proof state
+\begin{quote}
+@{text"1.  \<dots>  \<Longrightarrow> A \<and> B"}
+\end{quote}
+results in two subgoals, one for each premise of @{thm[source]conjI}:
+\begin{quote}
+@{text"1.  \<dots>  \<Longrightarrow> A"}\\
+@{text"2.  \<dots>  \<Longrightarrow> B"}
+\end{quote}
+In general, the application of a rule @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"}
+to a subgoal \mbox{@{text"\<dots> \<Longrightarrow> C"}} proceeds in two steps:
+\begin{enumerate}
+\item
+Unify @{text A} and @{text C}, thus instantiating the unknowns in the rule.
+\item
+Replace the subgoal @{text C} with @{text n} new subgoals @{text"A\<^isub>1"} to @{text"A\<^isub>n"}.
+\end{enumerate}
+This is the command to apply rule @{text xyz}:
+\begin{quote}
+\isacom{apply}@{text"(rule xyz)"}
+\end{quote}
+This is also called \concept{backchaining} with rule @{text xyz}.
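+For example, backchaining with @{thm[source] conjI} and then closing the two
+resulting subgoals by assumption proves the following little lemma (included
+only to illustrate the mechanics):
+*}
+
+lemma "A \<Longrightarrow> B \<Longrightarrow> A \<and> B"
+apply(rule conjI)
+apply(assumption)
+apply(assumption)
+done
+
+text{*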
+
+\subsubsection{Introduction rules}
+
+Conjunction introduction (@{thm[source] conjI}) is one example of a whole
+class of rules known as \concept{introduction rules}. They explain under which
+premises some logical construct can be introduced. Here are some further
+useful introduction rules:
+\[
+\inferrule*[right=\mbox{@{text impI}}]{\mbox{@{text"?P \<Longrightarrow> ?Q"}}}{\mbox{@{text"?P \<longrightarrow> ?Q"}}}
+\qquad
+\inferrule*[right=\mbox{@{text allI}}]{\mbox{@{text"\<And>x. ?P x"}}}{\mbox{@{text"\<forall>x. ?P x"}}}
+\]
+\[
+\inferrule*[right=\mbox{@{text iffI}}]{\mbox{@{text"?P \<Longrightarrow> ?Q"}} \\ \mbox{@{text"?Q \<Longrightarrow> ?P"}}}
+  {\mbox{@{text"?P = ?Q"}}}
+\]
+These rules are part of the logical system of \concept{natural deduction}
+(e.g.\ \cite{HuthRyan}). Although we intentionally de-emphasize the basic rules
+of logic in favour of automatic proof methods that allow you to take bigger
+steps, these rules are helpful in locating where and why automation fails.
+When applied backwards, these rules decompose the goal:
+\begin{itemize}
+\item @{thm[source] conjI} and @{thm[source]iffI} split the goal into two subgoals,
+\item @{thm[source] impI} moves the left-hand side of a HOL implication into the list of assumptions,
+\item and @{thm[source] allI} removes a @{text "\<forall>"} by turning the quantified variable into a fixed local variable of the subgoal.
+\end{itemize}
+Isabelle knows about these and a number of other introduction rules.
+The command
+\begin{quote}
+\isacom{apply} @{text rule}
+\end{quote}
+automatically selects the appropriate rule for the current subgoal.
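+For example (again just an illustration), the following apply-script
+decomposes two implications and a conjunction without naming
+@{thm[source] impI} or @{thm[source] conjI} explicitly:
+*}
+
+lemma "A \<longrightarrow> B \<longrightarrow> A \<and> B"
+apply(rule)
+apply(rule)
+apply(rule)
+apply(assumption)
+apply(assumption)
+done
+
+text{*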
+
+You can also turn your own theorems into introduction rules by giving them
+the @{text"intro"} attribute, analogous to the @{text simp} attribute.  In
+that case @{text blast}, @{text fastforce} and (to a limited extent) @{text
+auto} will automatically backchain with those theorems. The @{text intro}
+attribute should be used with care because it increases the search space and
+can lead to nontermination.  Sometimes it is better to use it only in
+specific calls of @{text blast} and friends. For example,
+@{thm[source] le_trans}, transitivity of @{text"\<le>"} on type @{typ nat},
+is not an introduction rule by default because of the disastrous effect
+on the search space, but can be useful in specific situations:
+*}
+
+lemma "\<lbrakk> (a::nat) \<le> b; b \<le> c; c \<le> d; d \<le> e \<rbrakk> \<Longrightarrow> a \<le> e"
+by(blast intro: le_trans)
+
+text{*
+Of course this is just an example and could be proved by @{text arith}, too.
+
+\subsubsection{Forward proof}
+\label{sec:forward-proof}
+
+Forward proof means deriving new theorems from old theorems. We have already
+seen a very simple form of forward proof: the @{text of} operator for
+instantiating unknowns in a theorem. The big brother of @{text of} is @{text
+OF} for applying one theorem to others. Given a theorem @{prop"A \<Longrightarrow> B"} called
+@{text r} and a theorem @{text A'} called @{text r'}, the theorem @{text
+"r[OF r']"} is the result of applying @{text r} to @{text r'}, where @{text
+r} should be viewed as a function taking a theorem @{text A} and returning
+@{text B}.  More precisely, @{text A} and @{text A'} are unified, thus
+instantiating the unknowns in @{text B}, and the result is the instantiated
+@{text B}. Of course, unification may also fail.
+\begin{warn}
+Application of rules to other rules operates in the forward direction: from
+the premises to the conclusion of the rule; application of rules to proof
+states operates in the backward direction, from the conclusion to the
+premises.
+\end{warn}
+
+In general @{text r} can be of the form @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"}
+and there can be multiple argument theorems @{text r\<^isub>1} to @{text r\<^isub>m}
+(with @{text"m \<le> n"}), in which case @{text "r[OF r\<^isub>1 \<dots> r\<^isub>m]"} is obtained
+by unifying and thus proving @{text "A\<^isub>i"} with @{text "r\<^isub>i"}, @{text"i = 1\<dots>m"}.
+Here is an example, where @{thm[source]refl} is the theorem
+@{thm[show_question_marks] refl}:
+*}
+
+thm conjI[OF refl[of "a"] refl[of "b"]]
+
+text{* yields the theorem @{thm conjI[OF refl[of "a"] refl[of "b"]]}.
+The command \isacom{thm} merely displays the result.
+
+Forward reasoning also makes sense in connection with proof states.
+Therefore @{text blast}, @{text fastforce} and @{text auto} support a modifier
+@{text dest} which instructs the proof method to use certain rules in a
+forward fashion. If @{text r} is of the form \mbox{@{text "A \<Longrightarrow> B"}}, the modifier
+\mbox{@{text"dest: r"}}
+allows proof search to reason forward with @{text r}, i.e.\
+to replace an assumption @{text A'}, where @{text A'} unifies with @{text A},
+with the correspondingly instantiated @{text B}. For example, @{thm[source,show_question_marks] Suc_leD} is the theorem \mbox{@{thm Suc_leD}}, which works well for forward reasoning:
+*}
+
+lemma "Suc(Suc(Suc a)) \<le> b \<Longrightarrow> a \<le> b"
+by(blast dest: Suc_leD)
+
+text{* In this particular example we could have backchained with
+@{thm[source] Suc_leD}, too, but because the premise is more complicated than the conclusion this can easily lead to nontermination.
+
+\subsubsection{Finding theorems}
+
+Command \isacom{find\_theorems} searches for specific theorems in the current
+theory. Search criteria include pattern matching on terms and on names.
+For details see the Isabelle/Isar Reference Manual~\cite{IsarRef}.
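+For example, the following query (shown here merely to illustrate the
+syntax) searches for theorems that mention list append and whose name
+contains ``assoc'':
+*}
+
+find_theorems "_ @ _" name: "assoc"
+
+text{*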
+\bigskip
+
+\begin{warn}
+To ease readability we will drop the question marks
+in front of unknowns from now on.
+\end{warn}
+
+
+\section{Inductive definitions}
+\label{sec:inductive-defs}
+
+Inductive definitions are the third important definition facility, after
+datatypes and recursive functions.
+\sem
+In fact, they are the key construct in the
+definition of operational semantics in the second part of the book.
+\endsem
+
+\subsection{An example: even numbers}
+\label{sec:Logic:even}
+
+Here is a simple example of an inductively defined predicate:
+\begin{itemize}
+\item 0 is even
+\item If $n$ is even, so is $n+2$.
+\end{itemize}
+The operative word ``inductive'' means that these are the only even numbers.
+In Isabelle we give the two rules the names @{text ev0} and @{text evSS}
+and write
+*}
+
+inductive ev :: "nat \<Rightarrow> bool" where
+ev0:    "ev 0" |
+evSS:  (*<*)"ev n \<Longrightarrow> ev (Suc(Suc n))"(*>*)
+text_raw{* @{prop[source]"ev n \<Longrightarrow> ev (n + 2)"} *}
+
+text{* To get used to inductive definitions, we will first prove a few
+properties of @{const ev} informally before we descend to the Isabelle level.
+
+How do we prove that some number is even, e.g.\ @{prop "ev 4"}? Simply by combining the defining rules for @{const ev}:
+\begin{quote}
+@{text "ev 0 \<Longrightarrow> ev (0 + 2) \<Longrightarrow> ev((0 + 2) + 2) = ev 4"}
+\end{quote}
+
+\subsubsection{Rule induction}
+
+Showing that all even numbers have some property is more complicated.  For
+example, let us prove that the inductive definition of even numbers agrees
+with the following recursive one:*}
+
+fun even :: "nat \<Rightarrow> bool" where
+"even 0 = True" |
+"even (Suc 0) = False" |
+"even (Suc(Suc n)) = even n"
+
+text{* We prove @{prop"ev m \<Longrightarrow> even m"}.  That is, we
+assume @{prop"ev m"} and by induction on the form of its derivation
+prove @{prop"even m"}. There are two cases corresponding to the two rules
+for @{const ev}:
+\begin{description}
+\item[Case @{thm[source]ev0}:]
+ @{prop"ev m"} was derived by rule @{prop "ev 0"}: \\
+ @{text"\<Longrightarrow>"} @{prop"m=(0::nat)"} @{text"\<Longrightarrow>"} @{text "even m = even 0 = True"}
+\item[Case @{thm[source]evSS}:]
+ @{prop"ev m"} was derived by rule @{prop "ev n \<Longrightarrow> ev(n+2)"}: \\
+@{text"\<Longrightarrow>"} @{prop"m=n+(2::nat)"} and by induction hypothesis @{prop"even n"}\\
+@{text"\<Longrightarrow>"} @{text"even m = even(n + 2) = even n = True"}
+\end{description}
+
+What we have just seen is a special case of \concept{rule induction}.
+Rule induction applies to propositions of this form
+\begin{quote}
+@{prop "ev n \<Longrightarrow> P n"}
+\end{quote}
+That is, we want to prove a property @{prop"P n"}
+for all even @{text n}. But if we assume @{prop"ev n"}, then there must be
+some derivation of this assumption using the two defining rules for
+@{const ev}. That is, we must prove
+\begin{description}
+\item[Case @{thm[source]ev0}:] @{prop"P(0::nat)"}
+\item[Case @{thm[source]evSS}:] @{prop"\<lbrakk> ev n; P n \<rbrakk> \<Longrightarrow> P(n + 2::nat)"}
+\end{description}
+The corresponding rule is called @{thm[source] ev.induct} and looks like this:
+\[
+\inferrule{
+\mbox{@{thm (prem 1) ev.induct[of "n"]}}\\
+\mbox{@{thm (prem 2) ev.induct}}\\
+\mbox{@{prop"!!n. \<lbrakk> ev n; P n \<rbrakk> \<Longrightarrow> P(n+2)"}}}
+{\mbox{@{thm (concl) ev.induct[of "n"]}}}
+\]
+The first premise @{prop"ev n"} enforces that this rule can only be applied
+in situations where we know that @{text n} is even.
+
+Note that in the induction step we may not just assume @{prop"P n"} but also
+\mbox{@{prop"ev n"}}, which is simply the premise of rule @{thm[source]
+evSS}.  Here is an example where the local assumption @{prop"ev n"} comes in
+handy: we prove @{prop"ev m \<Longrightarrow> ev(m - 2)"} by induction on @{prop"ev m"}.
+Case @{thm[source]ev0} requires us to prove @{prop"ev(0 - 2)"}, which follows
+from @{prop"ev 0"} because @{prop"0 - 2 = (0::nat)"} on type @{typ nat}. In
+case @{thm[source]evSS} we have \mbox{@{prop"m = n+(2::nat)"}} and may assume
+@{prop"ev n"}, which implies @{prop"ev (m - 2)"} because @{text"m - 2 = (n +
+2) - 2 = n"}. We did not need the induction hypothesis at all for this proof,
+it is just a case analysis of which rule was used, but having @{prop"ev
+n"} at our disposal in case @{thm[source]evSS} was essential.
+This case analysis of rules is also called ``rule inversion''
+and is discussed in more detail in \autoref{ch:Isar}.
+
+\subsubsection{In Isabelle}
+
+Let us now recast the above informal proofs in Isabelle. For a start,
+we use @{const Suc} terms instead of numerals in rule @{thm[source]evSS}:
+@{thm[display] evSS}
+This avoids the difficulty of unifying @{text"n+2"} with some numeral,
+which is not automatic.
+
+The simplest way to prove @{prop"ev(Suc(Suc(Suc(Suc 0))))"} is in a forward
+direction: @{text "evSS[OF evSS[OF ev0]]"} yields the theorem @{thm evSS[OF
+evSS[OF ev0]]}. Alternatively, you can prove it as a lemma in a backward
+fashion. Although this is more verbose, it allows us to demonstrate how each
+rule application changes the proof state: *}
+
+lemma "ev(Suc(Suc(Suc(Suc 0))))"
+txt{*
+@{subgoals[display,indent=0,goals_limit=1]}
+*}
+apply(rule evSS)
+txt{*
+@{subgoals[display,indent=0,goals_limit=1]}
+*}
+apply(rule evSS)
+txt{*
+@{subgoals[display,indent=0,goals_limit=1]}
+*}
+apply(rule ev0)
+done
+
+text{* \indent
+Rule induction is applied by giving the induction rule explicitly via the
+@{text"rule:"} modifier:*}
+
+lemma "ev m \<Longrightarrow> even m"
+apply(induction rule: ev.induct)
+by(simp_all)
+
+text{* Both cases are automatic. Note that if there are multiple assumptions
+of the form @{prop"ev t"}, method @{text induction} will induct on the leftmost
+one.
+
+As a bonus, we also prove the remaining direction of the equivalence of
+@{const ev} and @{const even}:
+*}
+
+lemma "even n \<Longrightarrow> ev n"
+apply(induction n rule: even.induct)
+
+txt{* This is a proof by computation induction on @{text n} (see
+\autoref{sec:recursive-funs}) that sets up three subgoals corresponding to
+the three equations for @{const even}:
+@{subgoals[display,indent=0]}
+The first and third subgoals follow with @{thm[source]ev0} and @{thm[source]evSS}, and the second subgoal is trivially true because @{prop"even(Suc 0)"} is @{const False}:
+*}
+
+by (simp_all add: ev0 evSS)
+
+text{* The rules for @{const ev} make perfect simplification and introduction
+rules because their premises are always smaller than the conclusion. It
+makes sense to turn them into simplification and introduction rules
+permanently, to enhance proof automation: *}
+
+declare ev.intros[simp,intro]
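+
+text{* For example (a quick sanity check), the earlier lemma about
+@{prop"ev(Suc(Suc(Suc(Suc 0))))"} is now proved by simplification alone: *}
+
+lemma "ev(Suc(Suc(Suc(Suc 0))))"
+by simp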
+
+text{* The rules of an inductive definition are not simplification rules by
+default because, in contrast to recursive functions, there is no termination
+requirement for inductive definitions.
+
+\subsubsection{Inductive versus recursive}
+
+We have seen two definitions of the notion of evenness, an inductive and a
+recursive one. Which one is better? Much of the time, the recursive one is
+more convenient: it allows us to do rewriting in the middle of terms, and it
+expresses both the positive information (which numbers are even) and the
+negative information (which numbers are not even) directly. An inductive
+definition only expresses the positive information directly. The negative
+information, for example, that @{text 1} is not even, has to be proved from
+it (by induction or rule inversion). On the other hand, rule induction is
+tailor-made for proving \mbox{@{prop"ev n \<Longrightarrow> P n"}} because it only asks you
+to prove the positive cases. In the proof of @{prop"even n \<Longrightarrow> P n"} by
+computation induction via @{thm[source]even.induct}, we are also presented
+with the trivial negative cases. If you want the convenience of both
+rewriting and rule induction, you can make two definitions and show their
+equivalence (as above) or make one definition and prove additional properties
+from it, for example rule induction from computation induction.
+
+But many concepts do not admit a recursive definition at all because there is
+no datatype for the recursion (for example, the transitive closure of a
+relation), or the recursion would not terminate (for example,
+an interpreter for a programming language). Even if there is a recursive
+definition, if we are only interested in the positive information, the
+inductive definition may be much simpler.
+
+\subsection{The reflexive transitive closure}
+\label{sec:star}
+
+Evenness is really more conveniently expressed recursively than inductively.
+As a second and very typical example of an inductive definition we define the
+reflexive transitive closure.
+\sem
+It will also be an important building block for
+some of the semantics considered in the second part of the book.
+\endsem
+
+The reflexive transitive closure, called @{text star} below, is a function
+that maps a binary predicate to another binary predicate: if @{text r} is of
+type @{text"\<tau> \<Rightarrow> \<tau> \<Rightarrow> bool"} then @{term "star r"} is again of type @{text"\<tau> \<Rightarrow>
+\<tau> \<Rightarrow> bool"}, and @{prop"star r x y"} means that @{text x} and @{text y} are in
+the relation @{term"star r"}. Think @{term"r^*"} when you see @{term"star
+r"}, because @{text"star r"} is meant to be the reflexive transitive closure.
+That is, @{prop"star r x y"} is meant to be true if from @{text x} we can
+reach @{text y} in finitely many @{text r} steps. This concept is naturally
+defined inductively: *}
+
+inductive star :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"  for r where
+refl:  "star r x x" |
+step:  "r x y \<Longrightarrow> star r y z \<Longrightarrow> star r x z"
+
+text{* The base case @{thm[source] refl} is reflexivity: @{term "x=y"}. The
+step case @{thm[source]step} combines an @{text r} step (from @{text x} to
+@{text y}) and a @{term"star r"} step (from @{text y} to @{text z}) into a
+@{term"star r"} step (from @{text x} to @{text z}).
+The ``\isacom{for}~@{text r}'' in the header is merely a hint to Isabelle
+that @{text r} is a fixed parameter of @{const star}, in contrast to the
+further parameters of @{const star}, which change. As a result, Isabelle
+generates a simpler induction rule.
+
+By definition @{term"star r"} is reflexive. It is also transitive, but we
+need rule induction to prove that: *}
+
+lemma star_trans: "star r x y \<Longrightarrow> star r y z \<Longrightarrow> star r x z"
+apply(induction rule: star.induct)
+(*<*)
+defer
+apply(rename_tac u x y)
+defer
+(*>*)
+txt{* The induction is over @{prop"star r x y"} and we try to prove
+\mbox{@{prop"star r y z \<Longrightarrow> star r x z"}},
+which we abbreviate by @{prop"P x y"}. These are our two subgoals:
+@{subgoals[display,indent=0]}
+The first one is @{prop"P x x"}, the result of case @{thm[source]refl},
+and it is trivial.
+*}
+apply(assumption)
+txt{* Let us examine subgoal @{text 2}, case @{thm[source] step}.
+Assumptions @{prop"r u x"} and \mbox{@{prop"star r x y"}}
+are the premises of rule @{thm[source]step}.
+Assumption @{prop"star r y z \<Longrightarrow> star r x z"} is \mbox{@{prop"P x y"}},
+the IH coming from @{prop"star r x y"}. We have to prove @{prop"P u y"},
+which we do by assuming @{prop"star r y z"} and proving @{prop"star r u z"}.
+The proof itself is straightforward: from \mbox{@{prop"star r y z"}} the IH
+leads to @{prop"star r x z"} which, together with @{prop"r u x"},
+leads to \mbox{@{prop"star r u z"}} via rule @{thm[source]step}:
+*}
+apply(metis step)
+done
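+
+text{* As a further small example (included only as an illustration), a
+single @{text r} step is also a @{term"star r"} step; this needs no
+induction, just the two introduction rules: *}
+
+(* illustrative lemma; the name star_step1 is chosen only for this example *)
+lemma star_step1: "r x y \<Longrightarrow> star r x y"
+by(blast intro: star.refl star.step)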
+
+text{*
+
+\subsection{The general case}
+
+Inductive definitions have approximately the following general form:
+\begin{quote}
+\isacom{inductive} @{text"I :: \"\<tau> \<Rightarrow> bool\""} \isacom{where}
+\end{quote}
+followed by a sequence of (possibly named) rules of the form
+\begin{quote}
+@{text "\<lbrakk> I a\<^isub>1; \<dots>; I a\<^isub>n \<rbrakk> \<Longrightarrow> I a"}
+\end{quote}
+separated by @{text"|"}. As usual, @{text n} can be 0.
+The corresponding rule induction principle
+@{text I.induct} applies to propositions of the form
+\begin{quote}
+@{prop "I x \<Longrightarrow> P x"}
+\end{quote}
+where @{text P} may itself be a chain of implications.
+\begin{warn}
+Rule induction is always on the leftmost premise of the goal.
+Hence @{text "I x"} must be the first premise.
+\end{warn}
+Proving @{prop "I x \<Longrightarrow> P x"} by rule induction means proving
+for every rule of @{text I} that @{text P} is invariant:
+\begin{quote}
+@{text "\<lbrakk> I a\<^isub>1; P a\<^isub>1; \<dots>; I a\<^isub>n; P a\<^isub>n \<rbrakk> \<Longrightarrow> P a"}
+\end{quote}
+
+The above format for inductive definitions is simplified in a number of
+respects. @{text I} can have any number of arguments and each rule can have
+additional premises not involving @{text I}, so-called \concept{side
+conditions}. In rule inductions, these side conditions appear as additional
+assumptions. The \isacom{for} clause seen in the definition of the reflexive
+transitive closure merely simplifies the form of the induction rule.
+*}
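+
+text{* For example, here is a minimal sketch of an inductive definition with
+a side condition: the premise @{prop"x \<le> y"} in the last rule does not
+involve the defined predicate and therefore shows up as an extra assumption
+in rule inductions. The name @{text sorted0} is chosen only to avoid a clash
+with the predefined @{text sorted} on lists. *}
+
+(* illustrative definition; not used elsewhere *)
+inductive sorted0 :: "nat list \<Rightarrow> bool" where
+"sorted0 []" |
+"sorted0 [x]" |
+"x \<le> y \<Longrightarrow> sorted0 (y # ys) \<Longrightarrow> sorted0 (x # y # ys)"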
+(*<*)
+end
+(*>*)
--- a/doc-src/ProgProve/Makefile	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,33 +0,0 @@
-## targets
-
-default: dvi
-
-
-## dependencies
-
-include ../Makefile.in
-
-NAME = prog-prove
-
-FILES = prog-prove.tex prog-prove.bib intro-isabelle.tex \
-  Thys/document/*.tex Thys/MyList.thy \
-  prelude.tex svmono.cls mathpartir.sty isabelle.sty isabellesym.sty
-
-dvi: $(NAME).dvi
-
-$(NAME).dvi: $(FILES) isabelle_hol.eps bang.eps
-	$(LATEX) $(NAME)
-	$(BIBTEX) $(NAME)
-	$(LATEX) $(NAME)
-	$(LATEX) $(NAME)
-	$(LATEX) $(NAME)
-
-pdf: $(NAME).pdf
-
-$(NAME).pdf: $(FILES) isabelle_hol.pdf bang.pdf
-	$(PDFLATEX) $(NAME)
-	$(BIBTEX) $(NAME)
-	$(PDFLATEX) $(NAME)
-	$(PDFLATEX) $(NAME)
-	$(FIXBOOKMARKS) $(NAME).out
-	$(PDFLATEX) $(NAME)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/ProgProve/MyList.thy	Mon Aug 27 22:00:04 2012 +0200
@@ -0,0 +1,17 @@
+theory MyList
+imports Main
+begin
+
+datatype 'a list = Nil | Cons 'a "'a list"
+
+fun app :: "'a list => 'a list => 'a list" where
+"app Nil ys = ys" |
+"app (Cons x xs) ys = Cons x (app xs ys)"
+
+fun rev :: "'a list => 'a list" where
+"rev Nil = Nil" |
+"rev (Cons x xs) = app (rev xs) (Cons x Nil)"
+
+value "rev(Cons True (Cons False Nil))"
+
+end
--- a/doc-src/ProgProve/Thys/Basics.thy	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,149 +0,0 @@
-(*<*)
-theory Basics
-imports Main
-begin
-(*>*)
-text{*
-This chapter introduces HOL as a functional programming language and shows
-how to prove properties of functional programs by induction.
-
-\section{Basics}
-
-\subsection{Types, Terms and Formulae}
-\label{sec:TypesTermsForms}
-
-HOL is a typed logic whose type system resembles that of functional
-programming languages. Thus there are
-\begin{description}
-\item[base types,] 
-in particular @{typ bool}, the type of truth values,
-@{typ nat}, the type of natural numbers ($\mathbb{N}$), and @{typ int},
-the type of mathematical integers ($\mathbb{Z}$).
-\item[type constructors,]
- in particular @{text list}, the type of
-lists, and @{text set}, the type of sets. Type constructors are written
-postfix, e.g.\ @{typ "nat list"} is the type of lists whose elements are
-natural numbers.
-\item[function types,]
-denoted by @{text"\<Rightarrow>"}.
-\item[type variables,]
-  denoted by @{typ 'a}, @{typ 'b} etc., just like in ML\@.
-\end{description}
-
-\concept{Terms} are formed as in functional programming by
-applying functions to arguments. If @{text f} is a function of type
-@{text"\<tau>\<^isub>1 \<Rightarrow> \<tau>\<^isub>2"} and @{text t} is a term of type
-@{text"\<tau>\<^isub>1"} then @{term"f t"} is a term of type @{text"\<tau>\<^isub>2"}. We write @{text "t :: \<tau>"} to mean that term @{text t} has type @{text \<tau>}.
-
-\begin{warn}
-There are many predefined infix symbols like @{text "+"} and @{text"\<le>"}.
-The name of the corresponding binary function is @{term"op +"},
-not just @{text"+"}. That is, @{term"x + y"} is syntactic sugar for
-\noquotes{@{term[source]"op + x y"}}.
-\end{warn}
-
-HOL also supports some basic constructs from functional programming:
-\begin{quote}
-@{text "(if b then t\<^isub>1 else t\<^isub>2)"}\\
-@{text "(let x = t in u)"}\\
-@{text "(case t of pat\<^isub>1 \<Rightarrow> t\<^isub>1 | \<dots> | pat\<^isub>n \<Rightarrow> t\<^isub>n)"}
-\end{quote}
-\begin{warn}
-The above three constructs must always be enclosed in parentheses
-if they occur inside other constructs.
-\end{warn}
-Terms may also contain @{text "\<lambda>"}-abstractions. For example,
-@{term "\<lambda>x. x"} is the identity function.
-
-\concept{Formulae} are terms of type @{text bool}.
-There are the basic constants @{term True} and @{term False} and
-the usual logical connectives (in decreasing order of precedence):
-@{text"\<not>"}, @{text"\<and>"}, @{text"\<or>"}, @{text"\<longrightarrow>"}.
-
-\concept{Equality} is available in the form of the infix function @{text "="}
-of type @{typ "'a \<Rightarrow> 'a \<Rightarrow> bool"}. It also works for formulas, where
-it means ``if and only if''.
-
-\concept{Quantifiers} are written @{prop"\<forall>x. P"} and @{prop"\<exists>x. P"}.
-
-Isabelle automatically computes the type of each variable in a term. This is
-called \concept{type inference}.  Despite type inference, it is sometimes
-necessary to attach explicit \concept{type constraints} (or \concept{type
-annotations}) to a variable or term.  The syntax is @{text "t :: \<tau>"} as in
-\mbox{\noquotes{@{prop[source] "m < (n::nat)"}}}. Type constraints may be
-needed to
-disambiguate terms involving overloaded functions such as @{text "+"}, @{text
-"*"} and @{text"\<le>"}.
-
-Finally there are the universal quantifier @{text"\<And>"} and the implication
-@{text"\<Longrightarrow>"}. They are part of the Isabelle framework, not the logic
-HOL. Logically, they agree with their HOL counterparts @{text"\<forall>"} and
-@{text"\<longrightarrow>"}, but operationally they behave differently. This will become
-clearer as we go along.
-\begin{warn}
-Right-arrows of all kinds always associate to the right. In particular,
-the formula
-@{text"A\<^isub>1 \<Longrightarrow> A\<^isub>2 \<Longrightarrow> A\<^isub>3"} means @{text "A\<^isub>1 \<Longrightarrow> (A\<^isub>2 \<Longrightarrow> A\<^isub>3)"}.
-The (Isabelle specific) notation \mbox{@{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"}}
-is short for the iterated implication \mbox{@{text"A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> A"}}.
-Sometimes we also employ inference rule notation:
-\inferrule{\mbox{@{text "A\<^isub>1"}}\\ \mbox{@{text "\<dots>"}}\\ \mbox{@{text "A\<^isub>n"}}}
-{\mbox{@{text A}}}
-\end{warn}
-
-
-\subsection{Theories}
-\label{sec:Basic:Theories}
-
-Roughly speaking, a \concept{theory} is a named collection of types,
-functions, and theorems, much like a module in a programming language.
-All the Isabelle text that you ever type needs to go into a theory.
-The general format of a theory @{text T} is
-\begin{quote}
-\isacom{theory} @{text T}\\
-\isacom{imports} @{text "T\<^isub>1 \<dots> T\<^isub>n"}\\
-\isacom{begin}\\
-\emph{definitions, theorems and proofs}\\
-\isacom{end}
-\end{quote}
-where @{text "T\<^isub>1 \<dots> T\<^isub>n"} are the names of existing
-theories that @{text T} is based on. The @{text "T\<^isub>i"} are the
-direct \concept{parent theories} of @{text T}.
-Everything defined in the parent theories (and their parents, recursively) is
-automatically visible. Each theory @{text T} must
-reside in a \concept{theory file} named @{text "T.thy"}.
-
-\begin{warn}
-HOL contains a theory @{text Main}, the union of all the basic
-predefined theories like arithmetic, lists, sets, etc.
-Unless you know what you are doing, always include @{text Main}
-as a direct or indirect parent of all your theories.
-\end{warn}
-
-In addition to the theories that come with the Isabelle/HOL distribution
-(see \url{http://isabelle.in.tum.de/library/HOL/})
-there is also the \emph{Archive of Formal Proofs}
-at  \url{http://afp.sourceforge.net}, a growing collection of Isabelle theories
-that everybody can contribute to.
-
-\subsection{Quotation Marks}
-
-The textual definition of a theory follows a fixed syntax with keywords like
-\isacommand{begin} and \isacommand{datatype}.  Embedded in this syntax are
-the types and formulae of HOL.  To distinguish the two levels, everything
-HOL-specific (terms and types) must be enclosed in quotation marks:
-\texttt{"}\dots\texttt{"}. To lessen this burden, quotation marks around a
-single identifier can be dropped.  When Isabelle prints a syntax error
-message, it refers to the HOL syntax as the \concept{inner syntax} and the
-enclosing theory language as the \concept{outer syntax}.
-\sem
-\begin{warn}
-For reasons of readability, we almost never show the quotation marks in this
-book. Consult the accompanying theory files to see where they need to go.
-\end{warn}
-\endsem
-%
-*}
-(*<*)
-end
-(*>*)
\ No newline at end of file
--- a/doc-src/ProgProve/Thys/Bool_nat_list.thy	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,420 +0,0 @@
-(*<*)
-theory Bool_nat_list
-imports Main
-begin
-(*>*)
-
-text{*
-\vspace{-4ex}
-\section{\texorpdfstring{Types @{typ bool}, @{typ nat} and @{text list}}{Types bool, nat and list}}
-
-These are the most important predefined types. We go through them one by one.
-Based on examples we learn how to define (possibly recursive) functions and
-prove theorems about them by induction and simplification.
-
-\subsection{Type @{typ bool}}
-
-The type of boolean values is a predefined datatype
-@{datatype[display] bool}
-with the two values @{const True} and @{const False} and
-with many predefined functions:  @{text "\<not>"}, @{text "\<and>"}, @{text "\<or>"}, @{text
-"\<longrightarrow>"} etc. Here is how conjunction could be defined by pattern matching:
-*}
-
-fun conj :: "bool \<Rightarrow> bool \<Rightarrow> bool" where
-"conj True True = True" |
-"conj _ _ = False"
-
-text{* Both the datatype and function definitions roughly follow the syntax
-of functional programming languages.
-
-\subsection{Type @{typ nat}}
-
-Natural numbers are another predefined datatype:
-@{datatype[display] nat}
-All values of type @{typ nat} are generated by the constructors
-@{text 0} and @{const Suc}. Thus the values of type @{typ nat} are
-@{text 0}, @{term"Suc 0"}, @{term"Suc(Suc 0)"} etc.
-There are many predefined functions: @{text "+"}, @{text "*"}, @{text
-"\<le>"}, etc. Here is how you could define your own addition:
-*}
-
-fun add :: "nat \<Rightarrow> nat \<Rightarrow> nat" where
-"add 0 n = n" |
-"add (Suc m) n = Suc(add m n)"
-
-text{* And here is a proof of the fact that @{prop"add m 0 = m"}: *}
-
-lemma add_02: "add m 0 = m"
-apply(induction m)
-apply(auto)
-done
-(*<*)
-lemma "add m 0 = m"
-apply(induction m)
-(*>*)
-txt{* The \isacom{lemma} command starts the proof and gives the lemma
-a name, @{text add_02}. Properties of recursively defined functions
-need to be established by induction in most cases.
-Command \isacom{apply}@{text"(induction m)"} instructs Isabelle to
-start a proof by induction on @{text m}. In response, it will show the
-following proof state:
-@{subgoals[display,indent=0]}
-The numbered lines are known as \emph{subgoals}.
-The first subgoal is the base case, the second one the induction step.
-The prefix @{text"\<And>m."} is Isabelle's way of saying ``for an arbitrary but fixed @{text m}''. The @{text"\<Longrightarrow>"} separates assumptions from the conclusion.
-The command \isacom{apply}@{text"(auto)"} instructs Isabelle to try
-and prove all subgoals automatically, essentially by simplifying them.
-Because both subgoals are easy, Isabelle can do it.
-The base case @{prop"add 0 0 = 0"} holds by definition of @{const add},
-and the induction step is almost as simple:
-@{text"add\<^raw:~>(Suc m) 0 = Suc(add m 0) = Suc m"}
-using first the definition of @{const add} and then the induction hypothesis.
-In summary, both subproofs rely on simplification with function definitions and
-the induction hypothesis.
-As a result of that final \isacom{done}, Isabelle associates the lemma
-just proved with its name. You can now inspect the lemma with the command
-*}
-
-thm add_02
-
-txt{* which displays @{thm[show_question_marks,display] add_02} The free
-variable @{text m} has been replaced by the \concept{unknown}
-@{text"?m"}. There is no logical difference between the two but an
-operational one: unknowns can be instantiated, which is what you want after
-some lemma has been proved.
-
-Note that there is also a proof method @{text induct}, which behaves almost
-like @{text induction}; the difference is explained in \autoref{ch:Isar}.
-
-\begin{warn}
-Terminology: We use \concept{lemma}, \concept{theorem} and \concept{rule}
-interchangeably for propositions that have been proved.
-\end{warn}
-\begin{warn}
-  Numerals (@{text 0}, @{text 1}, @{text 2}, \dots) and most of the standard
-  arithmetic operations (@{text "+"}, @{text "-"}, @{text "*"}, @{text"\<le>"},
-  @{text"<"} etc) are overloaded: they are available
-  not just for natural numbers but for other types as well.
-  For example, given the goal @{text"x + 0 = x"}, there is nothing to indicate
-  that you are talking about natural numbers. Hence Isabelle can only infer
-  that @{term x} is of some arbitrary type where @{text 0} and @{text"+"}
-  exist. As a consequence, you will be unable to prove the
-  goal. To alert you to such pitfalls, Isabelle flags numerals without a
-  fixed type in its output: @{prop"x+0 = x"}.  In this particular example,
-  you need to include
-  an explicit type constraint, for example @{text"x+0 = (x::nat)"}. If there
-  is enough contextual information this may not be necessary: @{prop"Suc x =
-  x"} automatically implies @{text"x::nat"} because @{term Suc} is not
-  overloaded.
-\end{warn}
-
-\subsubsection{An informal proof}
-
-Above we gave some terse informal explanation of the proof of
-@{prop"add m 0 = m"}. A more detailed informal exposition of the lemma
-might look like this:
-\bigskip
-
-\noindent
-\textbf{Lemma} @{prop"add m 0 = m"}
-
-\noindent
-\textbf{Proof} by induction on @{text m}.
-\begin{itemize}
-\item Case @{text 0} (the base case): @{prop"add 0 0 = 0"}
-  holds by definition of @{const add}.
-\item Case @{term"Suc m"} (the induction step):
-  We assume @{prop"add m 0 = m"}, the induction hypothesis (IH),
-  and we need to show @{text"add (Suc m) 0 = Suc m"}.
-  The proof is as follows:\smallskip
-
-  \begin{tabular}{@ {}rcl@ {\quad}l@ {}}
-  @{term "add (Suc m) 0"} &@{text"="}& @{term"Suc(add m 0)"}
-  & by definition of @{text add}\\
-              &@{text"="}& @{term "Suc m"} & by IH
-  \end{tabular}
-\end{itemize}
-Throughout this book, \concept{IH} will stand for ``induction hypothesis''.
-
-We have now seen three proofs of @{prop"add m 0 = 0"}: the Isabelle one, the
-terse four lines explaining the base case and the induction step, and just now a
-model of a traditional inductive proof. The three proofs differ in the level
-of detail given and the intended reader: the Isabelle proof is for the
-machine, the informal proofs are for humans. Although this book concentrates
-on Isabelle proofs, it is important to be able to rephrase those proofs
-as informal text comprehensible to a reader familiar with traditional
-mathematical proofs. Later on we will introduce an Isabelle proof language
-that is closer to traditional informal mathematical language and is often
-directly readable.
-
-\subsection{Type @{text list}}
-
-Although lists are already predefined, we define our own copy just for
-demonstration purposes:
-*}
-(*<*)
-apply(auto)
-done 
-declare [[names_short]]
-(*>*)
-datatype 'a list = Nil | Cons 'a "'a list"
-
-text{*
-\begin{itemize}
-\item Type @{typ "'a list"} is the type of lists over elements of type @{typ 'a}. Because @{typ 'a} is a type variable, lists are in fact \concept{polymorphic}: the elements of a list can be of arbitrary type (but must all be of the same type).
-\item Lists have two constructors: @{const Nil}, the empty list, and @{const Cons}, which puts an element (of type @{typ 'a}) in front of a list (of type @{typ "'a list"}).
-Hence all lists are of the form @{const Nil}, or @{term"Cons x Nil"},
-or @{term"Cons x (Cons y Nil)"} etc.
-\item \isacom{datatype} requires no quotation marks on the
-left-hand side, but on the right-hand side each of the argument
-types of a constructor needs to be enclosed in quotation marks, unless
-it is just an identifier (e.g.\ @{typ nat} or @{typ 'a}).
-\end{itemize}
-We also define two standard functions, append and reverse: *}
-
-fun app :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
-"app Nil ys = ys" |
-"app (Cons x xs) ys = Cons x (app xs ys)"
-
-fun rev :: "'a list \<Rightarrow> 'a list" where
-"rev Nil = Nil" |
-"rev (Cons x xs) = app (rev xs) (Cons x Nil)"
-
-text{* By default, variables @{text xs}, @{text ys} and @{text zs} are of
-@{text list} type.
-
-Command \isacom{value} evaluates a term. For example, *}
-
-value "rev(Cons True (Cons False Nil))"
-
-text{* yields the result @{value "rev(Cons True (Cons False Nil))"}. This works symbolically, too: *}
-
-value "rev(Cons a (Cons b Nil))"
-
-text{* yields @{value "rev(Cons a (Cons b Nil))"}.
-\medskip
-
-Figure~\ref{fig:MyList} shows the theory created so far.
-Because @{text list}, @{const Nil}, @{const Cons} etc are already predefined,
- Isabelle prints qualified (long) names when executing this theory, for example, @{text MyList.Nil}
- instead of @{const Nil}.
- To suppress the qualified names you can insert the command
- \texttt{declare [[names\_short]]}.
- This is not recommended in general but just for this unusual example.
-% Notice where the
-%quotations marks are needed that we mostly sweep under the carpet.  In
-%particular, notice that \isacom{datatype} requires no quotation marks on the
-%left-hand side, but that on the right-hand side each of the argument
-%types of a constructor needs to be enclosed in quotation marks.
-
-\begin{figure}[htbp]
-\begin{alltt}
-\input{Thys/MyList.thy}\end{alltt}
-\caption{A Theory of Lists}
-\label{fig:MyList}
-\end{figure}
-
-\subsubsection{Structural Induction for Lists}
-
-Just as for natural numbers, there is a proof principle of induction for
-lists. Induction over a list is essentially induction over the length of
-the list, although the length remains implicit. To prove that some property
-@{text P} holds for all lists @{text xs}, i.e.\ \mbox{@{prop"P(xs)"}},
-you need to prove
-\begin{enumerate}
-\item the base case @{prop"P(Nil)"} and
-\item the inductive case @{prop"P(Cons x xs)"} under the assumption @{prop"P(xs)"}, for some arbitrary but fixed @{text x} and @{text xs}.
-\end{enumerate}
-This is often called \concept{structural induction}.
-
-\subsection{The Proof Process}
-
-We will now demonstrate the typical proof process, which involves
-the formulation and proof of auxiliary lemmas.
-Our goal is to show that reversing a list twice produces the original
-list. *}
-
-theorem rev_rev [simp]: "rev(rev xs) = xs"
-
-txt{* Commands \isacom{theorem} and \isacom{lemma} are
-interchangeable and merely indicate the importance we attach to a
-proposition. Via the bracketed attribute @{text simp} we also tell Isabelle
-to make the eventual theorem a \concept{simplification rule}: future proofs
-involving simplification will replace occurrences of @{term"rev(rev xs)"} by
-@{term"xs"}. The proof is by induction: *}
-
-apply(induction xs)
-
-txt{*
-As explained above, we obtain two subgoals, namely the base case (@{const Nil}) and the induction step (@{const Cons}):
-@{subgoals[display,indent=0,margin=65]}
-Let us try to solve both goals automatically:
-*}
-
-apply(auto)
-
-txt{*Subgoal~1 is proved, and disappears; the simplified version
-of subgoal~2 becomes the new subgoal~1:
-@{subgoals[display,indent=0,margin=70]}
-In order to simplify this subgoal further, a lemma suggests itself.
-
-\subsubsection{A First Lemma}
-
-We insert the following lemma in front of the main theorem:
-*}
-(*<*)
-oops
-(*>*)
-lemma rev_app [simp]: "rev(app xs ys) = app (rev ys) (rev xs)"
-
-txt{* There are two variables that we could induct on: @{text xs} and
-@{text ys}. Because @{const app} is defined by recursion on
-the first argument, @{text xs} is the correct one:
-*}
-
-apply(induction xs)
-
-txt{* This time not even the base case is solved automatically: *}
-apply(auto)
-txt{*
-\vspace{-5ex}
-@{subgoals[display,goals_limit=1]}
-Again, we need to abandon this proof attempt and prove another simple lemma
-first.
-
-\subsubsection{A Second Lemma}
-
-We again try the canonical proof procedure:
-*}
-(*<*)
-oops
-(*>*)
-lemma app_Nil2 [simp]: "app xs Nil = xs"
-apply(induction xs)
-apply(auto)
-done
-
-text{*
-Thankfully, this worked.
-Now we can continue with our stuck proof attempt of the first lemma:
-*}
-
-lemma rev_app [simp]: "rev(app xs ys) = app (rev ys) (rev xs)"
-apply(induction xs)
-apply(auto)
-
-txt{*
-We find that this time @{text"auto"} solves the base case, but the
-induction step merely simplifies to
-@{subgoals[display,indent=0,goals_limit=1]}
-The missing lemma is associativity of @{const app},
-which we insert in front of the failed lemma @{text rev_app}.
-
-\subsubsection{Associativity of @{const app}}
-
-The canonical proof procedure succeeds without further ado:
-*}
-(*<*)oops(*>*)
-lemma app_assoc [simp]: "app (app xs ys) zs = app xs (app ys zs)"
-apply(induction xs)
-apply(auto)
-done
-(*<*)
-lemma rev_app [simp]: "rev(app xs ys) = app (rev ys)(rev xs)"
-apply(induction xs)
-apply(auto)
-done
-
-theorem rev_rev [simp]: "rev(rev xs) = xs"
-apply(induction xs)
-apply(auto)
-done
-(*>*)
-text{*
-Finally the proofs of @{thm[source] rev_app} and @{thm[source] rev_rev}
-succeed, too.
-
-\subsubsection{Another Informal Proof}
-
-Here is the informal proof of associativity of @{const app}
-corresponding to the Isabelle proof above.
-\bigskip
-
-\noindent
-\textbf{Lemma} @{prop"app (app xs ys) zs = app xs (app ys zs)"}
-
-\noindent
-\textbf{Proof} by induction on @{text xs}.
-\begin{itemize}
-\item Case @{text Nil}: \ @{prop"app (app Nil ys) zs = app ys zs"} @{text"="}
-  \mbox{@{term"app Nil (app ys zs)"}} \ holds by definition of @{text app}.
-\item Case @{text"Cons x xs"}: We assume
-  \begin{center} \hfill @{term"app (app xs ys) zs"} @{text"="}
-  @{term"app xs (app ys zs)"} \hfill (IH) \end{center}
-  and we need to show
-  \begin{center} @{prop"app (app (Cons x xs) ys) zs = app (Cons x xs) (app ys zs)"}.\end{center}
-  The proof is as follows:\smallskip
-
-  \begin{tabular}{@ {}l@ {\quad}l@ {}}
-  @{term"app (app (Cons x xs) ys) zs"}\\
-  @{text"= app (Cons x (app xs ys)) zs"} & by definition of @{text app}\\
-  @{text"= Cons x (app (app xs ys) zs)"} & by definition of @{text app}\\
-  @{text"= Cons x (app xs (app ys zs))"} & by IH\\
-  @{text"= app (Cons x xs) (app ys zs)"} & by definition of @{text app}
-  \end{tabular}
-\end{itemize}
-\medskip
-
-\noindent Didn't we say earlier that all proofs are by simplification? But
-in both cases, going from left to right, the last equality step is not a
-simplification at all! In the base case it is @{prop"app ys zs = app Nil (app
-ys zs)"}. It appears almost mysterious because we suddenly complicate the
-term by appending @{text Nil} on the left. What is really going on is this:
-when proving some equality \mbox{@{prop"s = t"}}, both @{text s} and @{text t} are
-simplified to some common term @{text u}.  This heuristic for equality proofs
-works well for a functional programming context like ours. In the base case
-@{text s} is @{term"app (app Nil ys) zs"}, @{text t} is @{term"app Nil (app
-ys zs)"}, and @{text u} is @{term"app ys zs"}.
-
-\subsection{Predefined lists}
-\label{sec:predeflists}
-
-Isabelle's predefined lists are the same as the ones above, but with
-more syntactic sugar:
-\begin{itemize}
-\item @{text "[]"} is @{const Nil},
-\item @{term"x # xs"} is @{term"Cons x xs"},
-\item @{text"[x\<^isub>1, \<dots>, x\<^isub>n]"} is @{text"x\<^isub>1 # \<dots> # x\<^isub>n # []"}, and
-\item @{term "xs @ ys"} is @{term"app xs ys"}.
-\end{itemize}
-There is also a large library of predefined functions.
-The most important ones are the length function
-@{text"length :: 'a list \<Rightarrow> nat"} (with the obvious definition),
-and the map function that applies a function to all elements of a list:
-\begin{isabelle}
-\isacom{fun} @{const map} @{text"::"} @{typ[source] "('a \<Rightarrow> 'b) \<Rightarrow> 'a list \<Rightarrow> 'b list"}\\
-@{text"\""}@{thm map.simps(1)}@{text"\" |"}\\
-@{text"\""}@{thm map.simps(2)}@{text"\""}
-\end{isabelle}
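-For example, @{text"map (\<lambda>n. n*n) [1, 2, 3]"} yields @{text"[1, 4, 9]"}.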
-\sem
-Also useful are the \concept{head} of a list, its first element,
-and the \concept{tail}, the rest of the list:
-\begin{isabelle}
-\isacom{fun} @{text"hd :: 'a list \<Rightarrow> 'a"}\\
-@{prop"hd(x#xs) = x"}
-\end{isabelle}
-\begin{isabelle}
-\isacom{fun} @{text"tl :: 'a list \<Rightarrow> 'a list"}\\
-@{prop"tl [] = []"} @{text"|"}\\
-@{prop"tl(x#xs) = xs"}
-\end{isabelle}
-Note that since HOL is a logic of total functions, @{term"hd []"} is defined,
-but we do not know what the result is. That is, @{term"hd []"} is not undefined
-but underdefined.
-\endsem
-%
-*}
-(*<*)
-end
-(*>*)
--- a/doc-src/ProgProve/Thys/Isar.thy	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,997 +0,0 @@
-(*<*)
-theory Isar
-imports LaTeXsugar
-begin
-ML{* quick_and_dirty := true *}
-(*>*)
-text{*
-Apply-scripts are unreadable and hard to maintain. The language of choice
-for larger proofs is \concept{Isar}. The two key features of Isar are:
-\begin{itemize}
-\item It is structured, not linear.
-\item It is readable without running it because
-you need to state what you are proving at any given point.
-\end{itemize}
-Whereas apply-scripts are like assembly language programs, Isar proofs
-are like structured programs with comments. A typical Isar proof looks like this:
-*}text{*
-\begin{tabular}{@ {}l}
-\isacom{proof}\\
-\quad\isacom{assume} @{text"\""}$\mathit{formula}_0$@{text"\""}\\
-\quad\isacom{have} @{text"\""}$\mathit{formula}_1$@{text"\""} \quad\isacom{by} @{text simp}\\
-\quad\vdots\\
-\quad\isacom{have} @{text"\""}$\mathit{formula}_n$@{text"\""} \quad\isacom{by} @{text blast}\\
-\quad\isacom{show} @{text"\""}$\mathit{formula}_{n+1}$@{text"\""} \quad\isacom{by} @{text \<dots>}\\
-\isacom{qed}
-\end{tabular}
-*}text{*
-It proves $\mathit{formula}_0 \Longrightarrow \mathit{formula}_{n+1}$
-(provided each proof step succeeds).
-The intermediate \isacom{have} statements are merely stepping stones
-on the way towards the \isacom{show} statement that proves the actual
-goal. In more detail, this is the Isar core syntax:
-\medskip
-
-\begin{tabular}{@ {}lcl@ {}}
-\textit{proof} &=& \isacom{by} \textit{method}\\
-      &$\mid$& \isacom{proof} [\textit{method}] \ \textit{step}$^*$ \ \isacom{qed}
-\end{tabular}
-\medskip
-
-\begin{tabular}{@ {}lcl@ {}}
-\textit{step} &=& \isacom{fix} \textit{variables} \\
-      &$\mid$& \isacom{assume} \textit{proposition} \\
-      &$\mid$& [\isacom{from} \textit{fact}$^+$] (\isacom{have} $\mid$ \isacom{show}) \ \textit{proposition} \ \textit{proof}
-\end{tabular}
-\medskip
-
-\begin{tabular}{@ {}lcl@ {}}
-\textit{proposition} &=& [\textit{name}:] @{text"\""}\textit{formula}@{text"\""}
-\end{tabular}
-\medskip
-
-\begin{tabular}{@ {}lcl@ {}}
-\textit{fact} &=& \textit{name} \ $\mid$ \ \dots
-\end{tabular}
-\medskip
-
-\noindent A proof can either be an atomic \isacom{by} with a single proof
-method which must finish off the statement being proved, for example @{text
-auto}.  Or it can be a \isacom{proof}--\isacom{qed} block of multiple
-steps. Such a block can optionally begin with a proof method that indicates
-how to start off the proof, e.g.\ \mbox{@{text"(induction xs)"}}.
-
-A step either assumes a proposition or states a proposition
-together with its proof. The optional \isacom{from} clause
-indicates which facts are to be used in the proof.
-Intermediate propositions are stated with \isacom{have}, the overall goal
-with \isacom{show}. A step can also introduce new local variables with
-\isacom{fix}. Logically, \isacom{fix} introduces @{text"\<And>"}-quantified
-variables, \isacom{assume} introduces the assumption of an implication
-(@{text"\<Longrightarrow>"}) and \isacom{have}/\isacom{show} the conclusion.
-
-Propositions are optionally named formulas. These names can be referred to in
-later \isacom{from} clauses. In the simplest case, a fact is such a name.
-But facts can also be composed with @{text OF} and @{text of} as shown in
-\S\ref{sec:forward-proof}---hence the \dots\ in the above grammar.  Note
-that assumptions, intermediate \isacom{have} statements and global lemmas all
-have the same status and are thus collectively referred to as
-\concept{facts}.
-
-Fact names can stand for whole lists of facts. For example, if @{text f} is
-defined by command \isacom{fun}, @{text"f.simps"} refers to the whole list of
-recursion equations defining @{text f}. Individual facts can be selected by
-writing @{text"f.simps(2)"}, whole sublists by @{text"f.simps(2-4)"}.
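-For example, the hypothetical proof step
-\begin{quote}
-\isacom{apply}@{text"(simp only: f.simps(2))"}
-\end{quote}
-rewrites with the second defining equation of @{text f} and nothing else.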
-
-
-\section{Isar by example}
-
-We show a number of proofs of Cantor's theorem that a function from a set to
-its powerset cannot be surjective, illustrating various features of Isar. The
-constant @{const surj} is predefined.
-*}
-
-lemma "\<not> surj(f :: 'a \<Rightarrow> 'a set)"
-proof
-  assume 0: "surj f"
-  from 0 have 1: "\<forall>A. \<exists>a. A = f a" by(simp add: surj_def)
-  from 1 have 2: "\<exists>a. {x. x \<notin> f x} = f a" by blast
-  from 2 show "False" by blast
-qed
-
-text{*
-The \isacom{proof} command lacks an explicit method specifying how to perform
-the proof. In such cases Isabelle tries to use some standard introduction
-rule, in the above case for @{text"\<not>"}:
-\[
-\inferrule{
-\mbox{@{thm (prem 1) notI}}}
-{\mbox{@{thm (concl) notI}}}
-\]
-In order to prove @{prop"~ P"}, assume @{text P} and show @{text False}.
-Thus we may assume @{prop"surj f"}. The proof shows that names of propositions
-may be (single!) digits---meaningful names are hard to invent and are often
-not necessary. Both \isacom{have} steps are obvious. The second one introduces
-the diagonal set @{term"{x. x \<notin> f x}"}, the key idea in the proof.
-If you wonder why @{text 2} directly implies @{text False}: from @{text 2}
-it follows that @{prop"a \<notin> f a \<longleftrightarrow> a \<in> f a"}.
-
-\subsection{@{text this}, @{text then}, @{text hence} and @{text thus}}
-
-Labels should be avoided. They interrupt the flow of the reader who has to
-scan the context for the point where the label was introduced. Ideally, the
-proof is a linear flow, where the output of one step becomes the input of the
-next step, piping the previously proved fact into the next proof, just like
-in a UNIX pipe. In such cases the predefined name @{text this} can be used
-to refer to the proposition proved in the previous step. This allows us to
-eliminate all labels from our proof (we suppress the \isacom{lemma} statement):
-*}
-(*<*)
-lemma "\<not> surj(f :: 'a \<Rightarrow> 'a set)"
-(*>*)
-proof
-  assume "surj f"
-  from this have "\<exists>a. {x. x \<notin> f x} = f a" by(auto simp: surj_def)
-  from this show "False" by blast
-qed
-
-text{* We have also taken the opportunity to compress the two \isacom{have}
-steps into one.
-
-To compact the text further, Isar has a few convenient abbreviations:
-\medskip
-
-\begin{tabular}{rcl}
-\isacom{then} &=& \isacom{from} @{text this}\\
-\isacom{thus} &=& \isacom{then} \isacom{show}\\
-\isacom{hence} &=& \isacom{then} \isacom{have}
-\end{tabular}
-\medskip
-
-\noindent
-With the help of these abbreviations the proof becomes
-*}
-(*<*)
-lemma "\<not> surj(f :: 'a \<Rightarrow> 'a set)"
-(*>*)
-proof
-  assume "surj f"
-  hence "\<exists>a. {x. x \<notin> f x} = f a" by(auto simp: surj_def)
-  thus "False" by blast
-qed
-text{*
-
-There are two further linguistic variations:
-\medskip
-
-\begin{tabular}{rcl}
-(\isacom{have}$\mid$\isacom{show}) \ \textit{prop} \ \isacom{using} \ \textit{facts}
-&=&
-\isacom{from} \ \textit{facts} \ (\isacom{have}$\mid$\isacom{show}) \ \textit{prop}\\
-\isacom{with} \ \textit{facts} &=& \isacom{from} \ \textit{facts} \isa{this}
-\end{tabular}
-\medskip
-
-\noindent The \isacom{using} idiom de-emphasizes the used facts by moving them
-behind the proposition.
-
-\subsection{Structured lemma statements: \isacom{fixes}, \isacom{assumes}, \isacom{shows}}
-
-Lemmas can also be stated in a more structured fashion. To demonstrate this
-feature with Cantor's theorem, we rephrase @{prop"\<not> surj f"}
-a little:
-*}
-
-lemma
-  fixes f :: "'a \<Rightarrow> 'a set"
-  assumes s: "surj f"
-  shows "False"
-
-txt{* The optional \isacom{fixes} part allows you to state the types of
-variables up front rather than by decorating one of their occurrences in the
-formula with a type constraint. The key advantage of the structured format is
-the \isacom{assumes} part that allows you to name each assumption; multiple
-assumptions can be separated by \isacom{and}. The
-\isacom{shows} part gives the goal. The actual theorem that will come out of
-the proof is @{prop"surj f \<Longrightarrow> False"}, but during the proof the assumption
-@{prop"surj f"} is available under the name @{text s} like any other fact.
-*}
-
-proof -
-  have "\<exists> a. {x. x \<notin> f x} = f a" using s
-    by(auto simp: surj_def)
-  thus "False" by blast
-qed
-
-text{* In the \isacom{have} step the assumption @{prop"surj f"} is now
-referenced by its name @{text s}. The duplication of @{prop"surj f"} in the
-above proofs (once in the statement of the lemma, once in its proof) has been
-eliminated.
-
-\begin{warn}
-Note the dash after the \isacom{proof}
-command.  It is the null method that does nothing to the goal. Leaving it out
-would ask Isabelle to try some suitable introduction rule on the goal @{const
-False}---but there is no suitable introduction rule and \isacom{proof}
-would fail.
-\end{warn}
-
-Stating a lemma with \isacom{assumes}-\isacom{shows} implicitly introduces the
-name @{text assms} that stands for the list of all assumptions. You can refer
-to individual assumptions by @{text"assms(1)"}, @{text"assms(2)"} etc.,
-thus obviating the need to name them individually.
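-For example, the following little lemma (a sketch, not needed later) feeds all
-assumptions into the proof at once:
-\begin{quote}
-\isacom{lemma} \isacom{assumes} @{text"\"xs = []\""} \isacom{and} @{text"\"ys = xs\""} \isacom{shows} @{text"\"ys = []\""}\\
-\isacom{using} @{text assms} \isacom{by} @{text simp}
-\end{quote}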
-
-\section{Proof patterns}
-
-We show a number of important basic proof patterns. Many of them arise from
-the rules of natural deduction that are applied by \isacom{proof} by
-default. The patterns are phrased in terms of \isacom{show} but work for
-\isacom{have} and \isacom{lemma}, too.
-
-We start with two forms of \concept{case analysis}:
-starting from a formula @{text P} we have the two cases @{text P} and
-@{prop"~P"}, and starting from a fact @{prop"P \<or> Q"}
-we have the two cases @{text P} and @{text Q}:
-*}text_raw{*
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "R" proof-(*>*)
-show "R"
-proof cases
-  assume "P"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "R" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-next
-  assume "\<not> P"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "R" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-qed(*<*)oops(*>*)
-text_raw {* }
-\end{minipage}
-&
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "R" proof-(*>*)
-have "P \<or> Q" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-then show "R"
-proof
-  assume "P"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "R" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-next
-  assume "Q"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "R" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-qed(*<*)oops(*>*)
-
-text_raw {* }
-\end{minipage}
-\end{tabular}
-\medskip
-\begin{isamarkuptext}%
-How to prove a logical equivalence:
-\end{isamarkuptext}%
-\isa{%
-*}
-(*<*)lemma "P\<longleftrightarrow>Q" proof-(*>*)
-show "P \<longleftrightarrow> Q"
-proof
-  assume "P"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "Q" (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
-next
-  assume "Q"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "P" (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
-qed(*<*)qed(*>*)
-text_raw {* }
-\medskip
-\begin{isamarkuptext}%
-Proofs by contradiction:
-\end{isamarkuptext}%
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "\<not> P" proof-(*>*)
-show "\<not> P"
-proof
-  assume "P"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "False" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-qed(*<*)oops(*>*)
-
-text_raw {* }
-\end{minipage}
-&
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "P" proof-(*>*)
-show "P"
-proof (rule ccontr)
-  assume "\<not>P"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "False" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-qed(*<*)oops(*>*)
-
-text_raw {* }
-\end{minipage}
-\end{tabular}
-\medskip
-\begin{isamarkuptext}%
-The name @{thm[source] ccontr} stands for ``classical contradiction''.
-
-How to prove quantified formulas:
-\end{isamarkuptext}%
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "ALL x. P x" proof-(*>*)
-show "\<forall>x. P(x)"
-proof
-  fix x
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "P(x)" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-qed(*<*)oops(*>*)
-
-text_raw {* }
-\end{minipage}
-&
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "EX x. P(x)" proof-(*>*)
-show "\<exists>x. P(x)"
-proof
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "P(witness)" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-qed
-(*<*)oops(*>*)
-
-text_raw {* }
-\end{minipage}
-\end{tabular}
-\medskip
-\begin{isamarkuptext}%
-In the proof of \noquotes{@{prop[source]"\<forall>x. P(x)"}},
-the step \isacom{fix}~@{text x} introduces a locally fixed variable @{text x}
-into the subproof, the proverbial ``arbitrary but fixed value''.
-Instead of @{text x} we could have chosen any name in the subproof.
-In the proof of \noquotes{@{prop[source]"\<exists>x. P(x)"}},
-@{text witness} is some arbitrary
-term for which we can prove that it satisfies @{text P}.
-
-How to reason forward from \noquotes{@{prop[source] "\<exists>x. P(x)"}}:
-\end{isamarkuptext}%
-*}
-(*<*)lemma True proof- assume 1: "EX x. P x"(*>*)
-have "\<exists>x. P(x)" (*<*)by(rule 1)(*>*)txt_raw{*\ $\dots$\\*}
-then obtain x where p: "P(x)" by blast
-(*<*)oops(*>*)
-text{*
-After the \isacom{obtain} step, @{text x} (we could have chosen any name)
-is a fixed local
-variable, and @{text p} is the name of the fact
-\noquotes{@{prop[source] "P(x)"}}.
-This pattern works for one or more @{text x}.
-As an example of the \isacom{obtain} command, here is the proof of
-Cantor's theorem in more detail:
-*}
-
-lemma "\<not> surj(f :: 'a \<Rightarrow> 'a set)"
-proof
-  assume "surj f"
-  hence  "\<exists>a. {x. x \<notin> f x} = f a" by(auto simp: surj_def)
-  then obtain a where  "{x. x \<notin> f x} = f a"  by blast
-  hence  "a \<notin> f a \<longleftrightarrow> a \<in> f a"  by blast
-  thus "False" by blast
-qed
-
-text_raw{*
-\begin{isamarkuptext}%
-
-Finally, how to prove set equality and subset relationship:
-\end{isamarkuptext}%
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "A = (B::'a set)" proof-(*>*)
-show "A = B"
-proof
-  show "A \<subseteq> B" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-next
-  show "B \<subseteq> A" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-qed(*<*)qed(*>*)
-
-text_raw {* }
-\end{minipage}
-&
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "A <= (B::'a set)" proof-(*>*)
-show "A \<subseteq> B"
-proof
-  fix x
-  assume "x \<in> A"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "x \<in> B" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-qed(*<*)qed(*>*)
-
-text_raw {* }
-\end{minipage}
-\end{tabular}
-\begin{isamarkuptext}%
-\section{Streamlining proofs}
-
-\subsection{Pattern matching and quotations}
-
-In the proof patterns shown above, formulas are often duplicated.
-This can make the text harder to read, write and maintain. Pattern matching
-is an abbreviation mechanism to avoid such duplication. Writing
-\begin{quote}
-\isacom{show} \ \textit{formula} @{text"("}\isacom{is} \textit{pattern}@{text")"}
-\end{quote}
-matches the pattern against the formula, thus instantiating the unknowns in
-the pattern for later use. As an example, consider the proof pattern for
-@{text"\<longleftrightarrow>"}:
-\end{isamarkuptext}%
-*}
-(*<*)lemma "formula\<^isub>1 \<longleftrightarrow> formula\<^isub>2" proof-(*>*)
-show "formula\<^isub>1 \<longleftrightarrow> formula\<^isub>2" (is "?L \<longleftrightarrow> ?R")
-proof
-  assume "?L"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "?R" (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
-next
-  assume "?R"
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show "?L" (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
-qed(*<*)qed(*>*)
-
-text{* Instead of duplicating @{text"formula\<^isub>i"} in the text, we introduce
-the two abbreviations @{text"?L"} and @{text"?R"} by pattern matching.
-Pattern matching works wherever a formula is stated, in particular
-with \isacom{have} and \isacom{lemma}.
-
-The unknown @{text"?thesis"} is implicitly matched against any goal stated by
-\isacom{lemma} or \isacom{show}. Here is a typical example: *}
-
-lemma "formula"
-proof -
-  txt_raw{*\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}*}
-  show ?thesis (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
-qed
-
-text{* 
-Unknowns can also be instantiated with \isacom{let} commands:
-\begin{quote}
-\isacom{let} @{text"?t"} = @{text"\""}\textit{some-big-term}@{text"\""}
-\end{quote}
-Later proof steps can refer to @{text"?t"}:
-\begin{quote}
-\isacom{have} @{text"\""}\dots @{text"?t"} \dots@{text"\""}
-\end{quote}
-\begin{warn}
-Names of facts are introduced with @{text"name:"} and refer to proved
-theorems. Unknowns @{text"?X"} refer to terms or formulas.
-\end{warn}
-
-Although abbreviations shorten the text, the reader needs to remember what
-they stand for. Similarly for names of facts. Names like @{text 1}, @{text 2}
-and @{text 3} are not helpful and should only be used in short proofs. For
-longer proofs, descriptive names are better. But look at this example:
-\begin{quote}
-\isacom{have} \ @{text"x_gr_0: \"x > 0\""}\\
-$\vdots$\\
-\isacom{from} @{text "x_gr_0"} \dots
-\end{quote}
-The name is longer than the fact it stands for! Short facts do not need names,
-one can refer to them easily by quoting them:
-\begin{quote}
-\isacom{have} \ @{text"\"x > 0\""}\\
-$\vdots$\\
-\isacom{from} @{text "`x>0`"} \dots
-\end{quote}
-Note that the quotes around @{text"x>0"} are \concept{back quotes}.
-They refer to the fact not by name but by value.
-
-\subsection{\isacom{moreover}}
-
-Sometimes one needs a number of facts to enable some deduction. Of course
-one can name these facts individually, as shown on the right,
-but one can also combine them with \isacom{moreover}, as shown on the left:
-*}text_raw{*
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "P" proof-(*>*)
-have "P\<^isub>1" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-moreover have "P\<^isub>2" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-moreover
-txt_raw{*\\$\vdots$\\\hspace{-1.4ex}*}(*<*)have "True" ..(*>*)
-moreover have "P\<^isub>n" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-ultimately have "P"  (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-(*<*)oops(*>*)
-
-text_raw {* }
-\end{minipage}
-&
-\qquad
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "P" proof-(*>*)
-have lab\<^isub>1: "P\<^isub>1" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-have lab\<^isub>2: "P\<^isub>2" (*<*)sorry(*>*)txt_raw{*\ $\dots$*}
-txt_raw{*\\$\vdots$\\\hspace{-1.4ex}*}
-have lab\<^isub>n: "P\<^isub>n" (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-from lab\<^isub>1 lab\<^isub>2 txt_raw{*\ $\dots$\\*}
-have "P"  (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-(*<*)oops(*>*)
-
-text_raw {* }
-\end{minipage}
-\end{tabular}
-\begin{isamarkuptext}%
-The \isacom{moreover} version is no shorter but expresses the structure more
-clearly and avoids new names.
-
-\subsection{Raw proof blocks}
-
-Sometimes one would like to prove some lemma locally within a proof,
-a lemma that shares the current context of assumptions but that
-has its own assumptions and is generalized over its locally fixed
-variables at the end. This is what a \concept{raw proof block} does:
-\begin{quote}
-@{text"{"} \isacom{fix} @{text"x\<^isub>1 \<dots> x\<^isub>n"}\\
-\mbox{}\ \ \ \isacom{assume} @{text"A\<^isub>1 \<dots> A\<^isub>m"}\\
-\mbox{}\ \ \ $\vdots$\\
-\mbox{}\ \ \ \isacom{have} @{text"B"}\\
-@{text"}"}
-\end{quote}
-proves @{text"\<lbrakk> A\<^isub>1; \<dots> ; A\<^isub>m \<rbrakk> \<Longrightarrow> B"}
-where all @{text"x\<^isub>i"} have been replaced by unknowns @{text"?x\<^isub>i"}.
-\begin{warn}
-The conclusion of a raw proof block is \emph{not} indicated by \isacom{show}
-but is simply the final \isacom{have}.
-\end{warn}
-
-As an example we prove a simple fact about divisibility on integers.
-The definition of @{text "dvd"} is @{thm dvd_def}.
-\end{isamarkuptext}%
-*}
-
-lemma fixes a b :: int assumes "b dvd (a+b)" shows "b dvd a"
-proof -
-  { fix k assume k: "a+b = b*k"
-    have "\<exists>k'. a = b*k'"
-    proof
-      show "a = b*(k - 1)" using k by(simp add: algebra_simps)
-    qed }
-  then show ?thesis using assms by(auto simp add: dvd_def)
-qed
-
-text{* Note that the result of a raw proof block has no name. In this example
-it was directly piped (via \isacom{then}) into the final proof, but it can
-also be named for later reference: you simply follow the block directly by a
-\isacom{note} command:
-\begin{quote}
-\isacom{note} \ @{text"name = this"}
-\end{quote}
-This introduces a new name @{text name} that refers to @{text this},
-the fact just proved, in this case the preceding block. In general,
-\isacom{note} introduces a new name for one or more facts.
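-For example (a sketch with a hypothetical name @{text add0}):
-\begin{quote}
-@{text"{"} \isacom{fix} @{text n} \isacom{have} @{text"\"n + 0 = (n::nat)\""} \isacom{by} @{text simp} @{text"}"}\\
-\isacom{note} \ @{text"add0 = this"}
-\end{quote}
-Afterwards @{text add0} refers to the generalized fact @{text"\"?n + 0 = ?n\""}.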
-
-\section{Case analysis and induction}
-
-\subsection{Datatype case analysis}
-
-We have seen case analysis on formulas. Now we want to distinguish
-which form some term takes: is it @{text 0} or of the form @{term"Suc n"},
-is it @{term"[]"} or of the form @{term"x#xs"}, etc. Here is a typical example
-proof by case analysis on the form of @{text xs}:
-*}
-
-lemma "length(tl xs) = length xs - 1"
-proof (cases xs)
-  assume "xs = []"
-  thus ?thesis by simp
-next
-  fix y ys assume "xs = y#ys"
-  thus ?thesis by simp
-qed
-
-text{* Function @{text tl} (``tail'') is defined by @{thm tl.simps(1)} and
-@{thm tl.simps(2)}. Note that the result type of @{const length} is @{typ nat}
-and @{prop"0 - 1 = (0::nat)"}.
-
-This proof pattern works for any term @{text t} whose type is a datatype.
-The goal has to be proved for each constructor @{text C}:
-\begin{quote}
-\isacom{fix} \ @{text"x\<^isub>1 \<dots> x\<^isub>n"} \isacom{assume} @{text"\"t = C x\<^isub>1 \<dots> x\<^isub>n\""}
-\end{quote}
-Each case can be written in a more compact form by means of the \isacom{case}
-command:
-\begin{quote}
-\isacom{case} @{text "(C x\<^isub>1 \<dots> x\<^isub>n)"}
-\end{quote}
-This is equivalent to the explicit \isacom{fix}-\isacom{assume} line
-but also gives the assumption @{text"\"t = C x\<^isub>1 \<dots> x\<^isub>n\""} a name: @{text C},
-like the constructor.
-Here is the \isacom{case} version of the proof above:
-*}
-(*<*)lemma "length(tl xs) = length xs - 1"(*>*)
-proof (cases xs)
-  case Nil
-  thus ?thesis by simp
-next
-  case (Cons y ys)
-  thus ?thesis by simp
-qed
-
-text{* Remember that @{text Nil} and @{text Cons} are the alphanumeric names
-for @{text"[]"} and @{text"#"}. The names of the assumptions
-are not used because they are directly piped (via \isacom{thus})
-into the proof of the claim.
-
-\subsection{Structural induction}
-
-We illustrate structural induction with an example based on natural numbers:
-the sum (@{text"\<Sum>"}) of the first @{text n} natural numbers
-(@{text"{0..n::nat}"}) is equal to \mbox{@{term"n*(n+1) div 2::nat"}}.
-Never mind the details, just focus on the pattern:
-*}
-
-lemma "\<Sum>{0..n::nat} = n*(n+1) div 2"
-proof (induction n)
-  show "\<Sum>{0..0::nat} = 0*(0+1) div 2" by simp
-next
-  fix n assume "\<Sum>{0..n::nat} = n*(n+1) div 2"
-  thus "\<Sum>{0..Suc n} = Suc n*(Suc n+1) div 2" by simp
-qed
-
-text{* Except for the rewrite steps, everything is explicitly given. This
-makes the proof easily readable, but the duplication means it is tedious to
-write and maintain. Here is how pattern
-matching can completely avoid any duplication: *}
-
-lemma "\<Sum>{0..n::nat} = n*(n+1) div 2" (is "?P n")
-proof (induction n)
-  show "?P 0" by simp
-next
-  fix n assume "?P n"
-  thus "?P(Suc n)" by simp
-qed
-
-text{* The first line introduces an abbreviation @{text"?P n"} for the goal.
-Pattern matching @{text"?P n"} with the goal instantiates @{text"?P"} to the
-function @{term"\<lambda>n. \<Sum>{0..n::nat} = n*(n+1) div 2"}.  Now the proposition to
-be proved in the base case can be written as @{text"?P 0"}, the induction
-hypothesis as @{text"?P n"}, and the conclusion of the induction step as
-@{text"?P(Suc n)"}.
-
-Induction also provides the \isacom{case} idiom that abbreviates
-the \isacom{fix}-\isacom{assume} step. The above proof becomes
-*}
-(*<*)lemma "\<Sum>{0..n::nat} = n*(n+1) div 2"(*>*)
-proof (induction n)
-  case 0
-  show ?case by simp
-next
-  case (Suc n)
-  thus ?case by simp
-qed
-
-text{*
-The unknown @{text "?case"} is set in each case to the required
-claim, i.e.\ @{text"?P 0"} and \mbox{@{text"?P(Suc n)"}} in the above proof,
-without requiring the user to define a @{text "?P"}. The general
-pattern for induction over @{typ nat} is shown on the left-hand side:
-*}text_raw{*
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-*}
-(*<*)lemma "P(n::nat)" proof -(*>*)
-show "P(n)"
-proof (induction n)
-  case 0
-  txt_raw{*\\\mbox{}\ \ $\vdots$\\\mbox{}\hspace{-1ex}*}
-  show ?case (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
-next
-  case (Suc n)
-  txt_raw{*\\\mbox{}\ \ $\vdots$\\\mbox{}\hspace{-1ex}*}
-  show ?case (*<*)sorry(*>*) txt_raw{*\ $\dots$\\*}
-qed(*<*)qed(*>*)
-
-text_raw {* }
-\end{minipage}
-&
-\begin{minipage}[t]{.4\textwidth}
-~\\
-~\\
-\isacom{let} @{text"?case = \"P(0)\""}\\
-~\\
-~\\
-~\\[1ex]
-\isacom{fix} @{text n} \isacom{assume} @{text"Suc: \"P(n)\""}\\
-\isacom{let} @{text"?case = \"P(Suc n)\""}\\
-\end{minipage}
-\end{tabular}
-\medskip
-*}
-text{*
-On the right side you can see what the \isacom{case} command
-on the left stands for.
-
-In case the goal is an implication, induction does one more thing: the
-proposition to be proved in each case is not the whole implication but only
-its conclusion; the premises of the implication are immediately made
-assumptions of that case. That is, if in the above proof we replace
-\isacom{show}~@{text"P(n)"} by
-\mbox{\isacom{show}~@{text"A(n) \<Longrightarrow> P(n)"}}
-then \isacom{case}~@{text 0} stands for
-\begin{quote}
-\isacom{assume} \ @{text"0: \"A(0)\""}\\
-\isacom{let} @{text"?case = \"P(0)\""}
-\end{quote}
-and \isacom{case}~@{text"(Suc n)"} stands for
-\begin{quote}
-\isacom{fix} @{text n}\\
-\isacom{assume} @{text"Suc:"}
-  \begin{tabular}[t]{l}@{text"\"A(n) \<Longrightarrow> P(n)\""}\\@{text"\"A(Suc n)\""}\end{tabular}\\
-\isacom{let} @{text"?case = \"P(Suc n)\""}
-\end{quote}
-The list of assumptions @{text Suc} is actually subdivided
-into @{text"Suc.IH"}, the induction hypotheses (here @{text"A(n) \<Longrightarrow> P(n)"})
-and @{text"Suc.prems"}, the premises of the goal being proved
-(here @{text"A(Suc n)"}).
-
-Induction works for any datatype.
-Proving a goal @{text"\<lbrakk> A\<^isub>1(x); \<dots>; A\<^isub>k(x) \<rbrakk> \<Longrightarrow> P(x)"}
-by induction on @{text x} generates a proof obligation for each constructor
-@{text C} of the datatype. The command @{text"case (C x\<^isub>1 \<dots> x\<^isub>n)"}
-performs the following steps:
-\begin{enumerate}
-\item \isacom{fix} @{text"x\<^isub>1 \<dots> x\<^isub>n"}
-\item \isacom{assume} the induction hypotheses (calling them @{text C.IH})
- and the premises \mbox{@{text"A\<^isub>i(C x\<^isub>1 \<dots> x\<^isub>n)"}} (calling them @{text"C.prems"})
- and calling the whole list @{text C}
-\item \isacom{let} @{text"?case = \"P(C x\<^isub>1 \<dots> x\<^isub>n)\""}
-\end{enumerate}
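-For example, a structural induction over @{typ "'a list"} follows this skeleton
-(a sketch; the variable names in the @{text Cons} case are arbitrary):
-\begin{quote}
-\isacom{proof} @{text"(induction xs)"}\\
-\quad\isacom{case} @{text Nil} \ \dots\\
-\isacom{next}\\
-\quad\isacom{case} @{text"(Cons y ys)"} \ \dots\\
-\isacom{qed}
-\end{quote}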
-
-\subsection{Rule induction}
-
-Recall the inductive and recursive definitions of even numbers in
-\autoref{sec:inductive-defs}:
-*}
-
-inductive ev :: "nat \<Rightarrow> bool" where
-ev0: "ev 0" |
-evSS: "ev n \<Longrightarrow> ev(Suc(Suc n))"
-
-fun even :: "nat \<Rightarrow> bool" where
-"even 0 = True" |
-"even (Suc 0) = False" |
-"even (Suc(Suc n)) = even n"
-
-text{* We recast the proof of @{prop"ev n \<Longrightarrow> even n"} in Isar. The
-left column shows the actual proof text, the right column shows
-the implicit effect of the two \isacom{case} commands:*}text_raw{*
-\begin{tabular}{@ {}l@ {\qquad}l@ {}}
-\begin{minipage}[t]{.5\textwidth}
-\isa{%
-*}
-
-lemma "ev n \<Longrightarrow> even n"
-proof(induction rule: ev.induct)
-  case ev0
-  show ?case by simp
-next
-  case evSS
-
-
-
-  thus ?case by simp
-qed
-
-text_raw {* }
-\end{minipage}
-&
-\begin{minipage}[t]{.5\textwidth}
-~\\
-~\\
-\isacom{let} @{text"?case = \"even 0\""}\\
-~\\
-~\\
-\isacom{fix} @{text n}\\
-\isacom{assume} @{text"evSS:"}
-  \begin{tabular}[t]{l} @{text"\"ev n\""}\\@{text"\"even n\""}\end{tabular}\\
-\isacom{let} @{text"?case = \"even(Suc(Suc n))\""}\\
-\end{minipage}
-\end{tabular}
-\medskip
-*}
-text{*
-The proof resembles structural induction, but the induction rule is given
-explicitly and the names of the cases are the names of the rules in the
-inductive definition.
-Let us examine the two assumptions named @{thm[source]evSS}:
-@{prop "ev n"} is the premise of rule @{thm[source]evSS}, which we may assume
-because we are in the case where that rule was used; @{prop"even n"}
-is the induction hypothesis.
-\begin{warn}
-Because each \isacom{case} command introduces a list of assumptions
-named like the case name, which is the name of a rule of the inductive
-definition, those rules now need to be accessed with a qualified name, here
-@{thm[source] ev.ev0} and @{thm[source] ev.evSS}.
-\end{warn}
-
-In the case @{thm[source]evSS} of the proof above we have pretended that the
-system fixes a variable @{text n}.  But unless the user provides the name
-@{text n}, the system will just invent its own name that cannot be referred
-to.  In the above proof, we do not need to refer to it, hence we do not give
-it a specific name. In case one needs to refer to it one writes
-\begin{quote}
-\isacom{case} @{text"(evSS m)"}
-\end{quote}
-just like \isacom{case}~@{text"(Suc n)"} in earlier structural inductions.
-The name @{text m} is an arbitrary choice. As a result,
-case @{thm[source] evSS} is derived from a renamed version of
-rule @{thm[source] evSS}: @{text"ev m \<Longrightarrow> ev(Suc(Suc m))"}.
-Here is an example with a (contrived) intermediate step that refers to @{text m}:
-*}
-
-lemma "ev n \<Longrightarrow> even n"
-proof(induction rule: ev.induct)
-  case ev0 show ?case by simp
-next
-  case (evSS m)
-  have "even(Suc(Suc m)) = even m" by simp
-  thus ?case using `even m` by blast
-qed
-
-text{*
-\indent
-In general, let @{text I} be a (for simplicity unary) inductively defined
-predicate and let the rules in the definition of @{text I}
-be called @{text "rule\<^isub>1"}, \dots, @{text "rule\<^isub>n"}. A proof by rule
-induction follows this pattern:
-*}
-
-(*<*)
-inductive I where rule\<^isub>1: "I()" |  rule\<^isub>2: "I()" |  rule\<^isub>n: "I()"
-lemma "I x \<Longrightarrow> P x" proof-(*>*)
-show "I x \<Longrightarrow> P x"
-proof(induction rule: I.induct)
-  case rule\<^isub>1
-  txt_raw{*\\[-.4ex]\mbox{}\ \ $\vdots$\\[-.4ex]\mbox{}\hspace{-1ex}*}
-  show ?case (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-next
-  txt_raw{*\\[-.4ex]$\vdots$\\[-.4ex]\mbox{}\hspace{-1ex}*}
-(*<*)
-  case rule\<^isub>2
-  show ?case sorry
-(*>*)
-next
-  case rule\<^isub>n
-  txt_raw{*\\[-.4ex]\mbox{}\ \ $\vdots$\\[-.4ex]\mbox{}\hspace{-1ex}*}
-  show ?case (*<*)sorry(*>*)txt_raw{*\ $\dots$\\*}
-qed(*<*)qed(*>*)
-
-text{*
-One can provide explicit variable names by writing
-\isacom{case}~@{text"(rule\<^isub>i x\<^isub>1 \<dots> x\<^isub>k)"}, thus renaming the first @{text k}
-free variables in rule @{text i} to @{text"x\<^isub>1 \<dots> x\<^isub>k"},
-going through rule @{text i} from left to right.
-
-\subsection{Assumption naming}
-
-In any induction, \isacom{case}~@{text name} sets up a list of assumptions
-also called @{text name}, which is subdivided into three parts:
-\begin{description}
-\item[@{text name.IH}] contains the induction hypotheses.
-\item[@{text name.hyps}] contains all the other hypotheses of this case in the
-induction rule. For rule inductions these are the hypotheses of rule
-@{text name}, for structural inductions these are empty.
-\item[@{text name.prems}] contains the (suitably instantiated) premises
-of the statement being proved, i.e. the @{text A\<^isub>i} when
-proving @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"}.
-\end{description}
-\begin{warn}
-Proof method @{text induct} differs from @{text induction}
-only in this naming policy: @{text induct} does not distinguish
-@{text IH} from @{text hyps} but subsumes @{text IH} under @{text hyps}.
-\end{warn}
-
-More complicated inductive proofs than the ones we have seen so far
-often need to refer to specific assumptions---just @{text name} or even
-@{text name.prems} and @{text name.IH} can be too unspecific.
-This is where the indexing of fact lists comes in handy, e.g.\
-@{text"name.IH(2)"} or @{text"name.prems(1-2)"}.
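-For instance, in a @{text Suc} case one might write the hypothetical step
-\begin{quote}
-\isacom{show} @{text"?case"} \isacom{using} @{text"Suc.IH"} @{text"Suc.prems(1)"} \isacom{by} @{text simp}
-\end{quote}
-to use the induction hypothesis together with just the first premise.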
-
-\subsection{Rule inversion}
-
-Rule inversion is case analysis of which rule could have been used to
-derive some fact. The name \concept{rule inversion} emphasizes that we are
-reasoning backwards: by which rules could some given fact have been proved?
-For the inductive definition of @{const ev}, rule inversion can be summarized
-like this:
-@{prop[display]"ev n \<Longrightarrow> n = 0 \<or> (EX k. n = Suc(Suc k) \<and> ev k)"}
-The realisation in Isabelle is a case analysis.
-A simple example is the proof that @{prop"ev n \<Longrightarrow> ev (n - 2)"}. We
-already went through the details informally in \autoref{sec:Logic:even}. This
-is the Isar proof:
-*}
-(*<*)
-notepad
-begin fix n
-(*>*)
-  assume "ev n"
-  from this have "ev(n - 2)"
-  proof cases
-    case ev0 thus "ev(n - 2)" by (simp add: ev.ev0)
-  next
-    case (evSS k) thus "ev(n - 2)" by (simp add: ev.evSS)
-  qed
-(*<*)
-end
-(*>*)
-
-text{* The key point here is that a case analysis over some inductively
-defined predicate is triggered by piping the given fact
-(here: \isacom{from}~@{text this}) into a proof by @{text cases}.
-Let us examine the assumptions available in each case. In case @{text ev0}
-we have @{text"n = 0"} and in case @{text evSS} we have @{prop"n = Suc(Suc k)"}
-and @{prop"ev k"}. In each case the assumptions are available under the name
-of the case; there is no fine-grained naming scheme as there is for induction.
-
-Sometimes some rules could not have been used to derive the given fact
-because constructors clash. As an extreme example consider
-rule inversion applied to @{prop"ev(Suc 0)"}: neither rule @{text ev0} nor
-rule @{text evSS} can yield @{prop"ev(Suc 0)"} because @{text"Suc 0"} unifies
-neither with @{text 0} nor with @{term"Suc(Suc n)"}. Impossible cases do not
-have to be proved. Hence we can prove anything from @{prop"ev(Suc 0)"}:
-*}
-(*<*)
-notepad begin fix P
-(*>*)
-  assume "ev(Suc 0)" then have P by cases
-(*<*)
-end
-(*>*)
-
-text{* That is, @{prop"ev(Suc 0)"} is simply not provable: *}
-
-lemma "\<not> ev(Suc 0)"
-proof
-  assume "ev(Suc 0)" then show False by cases
-qed
-
-text{* Normally not all cases will be impossible. As a simple exercise,
-prove that \mbox{@{prop"\<not> ev(Suc(Suc(Suc 0)))"}.}
-*}
-
-(*
-lemma "\<not> ev(Suc(Suc(Suc 0)))"
-proof
-  assume "ev(Suc(Suc(Suc 0)))"
-  then show False
-  proof cases
-    case evSS
-    from `ev(Suc 0)` show False by cases
-  qed
-qed
-*)
-
-(*<*)
-end
-(*>*)
--- a/doc-src/ProgProve/Thys/LaTeXsugar.thy	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,47 +0,0 @@
-(*  Title:      HOL/Library/LaTeXsugar.thy
-    Author:     Gerwin Klein, Tobias Nipkow, Norbert Schirmer
-    Copyright   2005 NICTA and TUM
-*)
-
-(*<*)
-theory LaTeXsugar
-imports Main
-begin
-
-(* DUMMY *)
-consts DUMMY :: 'a ("\<^raw:\_>")
-
-(* THEOREMS *)
-notation (Rule output)
-  "==>"  ("\<^raw:\mbox{}\inferrule{\mbox{>_\<^raw:}}>\<^raw:{\mbox{>_\<^raw:}}>")
-
-syntax (Rule output)
-  "_bigimpl" :: "asms \<Rightarrow> prop \<Rightarrow> prop"
-  ("\<^raw:\mbox{}\inferrule{>_\<^raw:}>\<^raw:{\mbox{>_\<^raw:}}>")
-
-  "_asms" :: "prop \<Rightarrow> asms \<Rightarrow> asms" 
-  ("\<^raw:\mbox{>_\<^raw:}\\>/ _")
-
-  "_asm" :: "prop \<Rightarrow> asms" ("\<^raw:\mbox{>_\<^raw:}>")
-
-notation (Axiom output)
-  "Trueprop"  ("\<^raw:\mbox{}\inferrule{\mbox{}}{\mbox{>_\<^raw:}}>")
-
-notation (IfThen output)
-  "==>"  ("\<^raw:{\normalsize{}>If\<^raw:\,}> _/ \<^raw:{\normalsize \,>then\<^raw:\,}>/ _.")
-syntax (IfThen output)
-  "_bigimpl" :: "asms \<Rightarrow> prop \<Rightarrow> prop"
-  ("\<^raw:{\normalsize{}>If\<^raw:\,}> _ /\<^raw:{\normalsize \,>then\<^raw:\,}>/ _.")
-  "_asms" :: "prop \<Rightarrow> asms \<Rightarrow> asms" ("\<^raw:\mbox{>_\<^raw:}> /\<^raw:{\normalsize \,>and\<^raw:\,}>/ _")
-  "_asm" :: "prop \<Rightarrow> asms" ("\<^raw:\mbox{>_\<^raw:}>")
-
-notation (IfThenNoBox output)
-  "==>"  ("\<^raw:{\normalsize{}>If\<^raw:\,}> _/ \<^raw:{\normalsize \,>then\<^raw:\,}>/ _.")
-syntax (IfThenNoBox output)
-  "_bigimpl" :: "asms \<Rightarrow> prop \<Rightarrow> prop"
-  ("\<^raw:{\normalsize{}>If\<^raw:\,}> _ /\<^raw:{\normalsize \,>then\<^raw:\,}>/ _.")
-  "_asms" :: "prop \<Rightarrow> asms \<Rightarrow> asms" ("_ /\<^raw:{\normalsize \,>and\<^raw:\,}>/ _")
-  "_asm" :: "prop \<Rightarrow> asms" ("_")
-
-end
-(*>*)
\ No newline at end of file
--- a/doc-src/ProgProve/Thys/Logic.thy	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,734 +0,0 @@
-(*<*)
-theory Logic
-imports LaTeXsugar
-begin
-(*>*)
-text{*
-\vspace{-5ex}
-\section{Logic and proof beyond equality}
-\label{sec:Logic}
-
-\subsection{Formulas}
-
-The core syntax of formulas (\textit{form} below)
-provides the standard logical constructs, in decreasing order of precedence:
-\[
-\begin{array}{rcl}
-
-\mathit{form} & ::= &
-  @{text"(form)"} ~\mid~
-  @{const True} ~\mid~
-  @{const False} ~\mid~
-  @{prop "term = term"}\\
- &\mid& @{prop"\<not> form"} ~\mid~
-  @{prop "form \<and> form"} ~\mid~
-  @{prop "form \<or> form"} ~\mid~
-  @{prop "form \<longrightarrow> form"}\\
- &\mid& @{prop"\<forall>x. form"} ~\mid~  @{prop"\<exists>x. form"}
-\end{array}
-\]
-Terms are the ones we have seen all along, built from constants, variables,
-function application and @{text"\<lambda>"}-abstraction, including all the syntactic
-sugar like infix symbols, @{text "if"}, @{text "case"} etc.
-\begin{warn}
-Remember that formulas are simply terms of type @{text bool}. Hence
-@{text "="} also works for formulas. Beware that @{text"="} has a higher
-precedence than the other logical operators. Hence @{prop"s = t \<and> A"} means
-@{text"(s = t) \<and> A"}, and @{prop"A\<and>B = B\<and>A"} means @{text"A \<and> (B = B) \<and> A"}.
-Logical equivalence can also be written with
-@{text "\<longleftrightarrow>"} instead of @{text"="}, where @{text"\<longleftrightarrow>"} has the same low
-precedence as @{text"\<longrightarrow>"}. Hence @{text"A \<and> B \<longleftrightarrow> B \<and> A"} really means
-@{text"(A \<and> B) \<longleftrightarrow> (B \<and> A)"}.
-\end{warn}
-\begin{warn}
-Quantifiers need to be enclosed in parentheses if they are nested within
-other constructs (just like @{text "if"}, @{text case} and @{text let}).
-\end{warn}
-The most frequent logical symbols have the following ASCII representations:
-\begin{center}
-\begin{tabular}{l@ {\qquad}l@ {\qquad}l}
-@{text "\<forall>"} & \xsymbol{forall} & \texttt{ALL}\\
-@{text "\<exists>"} & \xsymbol{exists} & \texttt{EX}\\
-@{text "\<lambda>"} & \xsymbol{lambda} & \texttt{\%}\\
-@{text "\<longrightarrow>"} & \texttt{-{}->}\\
-@{text "\<longleftrightarrow>"} & \texttt{<->}\\
-@{text "\<and>"} & \texttt{/\char`\\} & \texttt{\&}\\
-@{text "\<or>"} & \texttt{\char`\\/} & \texttt{|}\\
-@{text "\<not>"} & \xsymbol{not} & \texttt{\char`~}\\
-@{text "\<noteq>"} & \xsymbol{noteq} & \texttt{\char`~=}
-\end{tabular}
-\end{center}
-The first column shows the symbols, the second column ASCII representations
-that Isabelle interfaces convert into the corresponding symbol,
-and the third column shows ASCII representations that stay fixed.
-\begin{warn}
-The implication @{text"\<Longrightarrow>"} is part of the Isabelle framework. It structures
-theorems and proof states, separating assumptions from conclusions.
-The implication @{text"\<longrightarrow>"} is part of the logic HOL and can occur inside the
-formulas that make up the assumptions and conclusion.
-Theorems should be of the form @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"},
-not @{text"A\<^isub>1 \<and> \<dots> \<and> A\<^isub>n \<longrightarrow> A"}. Both are logically equivalent
-but the first one works better when using the theorem in further proofs.
-\end{warn}
-
-\subsection{Sets}
-
-Sets of elements of type @{typ 'a} have type @{typ"'a set"}.
-They can be finite or infinite. Sets come with the usual notation:
-\begin{itemize}
-\item @{term"{}"},\quad @{text"{e\<^isub>1,\<dots>,e\<^isub>n}"}
-\item @{prop"e \<in> A"},\quad @{prop"A \<subseteq> B"}
-\item @{term"A \<union> B"},\quad @{term"A \<inter> B"},\quad @{term"A - B"},\quad @{term"-A"}
-\end{itemize}
-and much more. @{const UNIV} is the set of all elements of some type.
-Set comprehension is written @{term"{x. P}"}
-rather than @{text"{x | P}"}, to emphasize the variable binding nature
-of the construct.
-\begin{warn}
-In @{term"{x. P}"} the @{text x} must be a variable. Set comprehension
-involving a proper term @{text t} must be written
-@{term[source]"{t | x y z. P}"},
-where @{text "x y z"} are the free variables in @{text t}.
-This is just a shorthand for @{term"{v. EX x y z. v = t \<and> P}"}, where
-@{text v} is a new variable.
-\end{warn}
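-For example, @{text"{n*n | n. n \<in> A}"} abbreviates
-@{text"{v. \<exists>n. v = n*n \<and> n \<in> A}"}.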
-
-Here are the ASCII representations of the mathematical symbols:
-\begin{center}
-\begin{tabular}{l@ {\quad}l@ {\quad}l}
-@{text "\<in>"} & \texttt{\char`\\\char`\<in>} & \texttt{:}\\
-@{text "\<subseteq>"} & \texttt{\char`\\\char`\<subseteq>} & \texttt{<=}\\
-@{text "\<union>"} & \texttt{\char`\\\char`\<union>} & \texttt{Un}\\
-@{text "\<inter>"} & \texttt{\char`\\\char`\<inter>} & \texttt{Int}
-\end{tabular}
-\end{center}
-Sets also allow bounded quantifications @{prop"ALL x : A. P"} and
-@{prop"EX x : A. P"}.
-
-\subsection{Proof automation}
-
-So far we have only seen @{text simp} and @{text auto}: Both perform
-rewriting, both can also prove linear arithmetic facts (no multiplication),
-and @{text auto} is also able to prove simple logical or set-theoretic goals:
-*}
-
-lemma "\<forall>x. \<exists>y. x = y"
-by auto
-
-lemma "A \<subseteq> B \<inter> C \<Longrightarrow> A \<subseteq> B \<union> C"
-by auto
-
-text{* where
-\begin{quote}
-\isacom{by} \textit{proof-method}
-\end{quote}
-is short for
-\begin{quote}
-\isacom{apply} \textit{proof-method}\\
-\isacom{done}
-\end{quote}
-The key characteristics of both @{text simp} and @{text auto} are
-\begin{itemize}
-\item They show you where they got stuck, giving you an idea of how to continue.
-\item They perform the obvious steps but are highly incomplete.
-\end{itemize}
-A proof method is \concept{complete} if it can prove all true formulas.
-There is no complete proof method for HOL, not even in theory.
-Hence all our proof methods only differ in how incomplete they are.
-
-A proof method that is still incomplete but tries harder than @{text auto} is
-@{text fastforce}.  It either succeeds or fails, it acts on the first
-subgoal only, and it can be modified just like @{text auto}, e.g.\
-with @{text "simp add"}. Here is a typical example of what @{text fastforce}
-can do:
-*}
-
-lemma "\<lbrakk> \<forall>xs \<in> A. \<exists>ys. xs = ys @ ys;  us \<in> A \<rbrakk>
-   \<Longrightarrow> \<exists>n. length us = n+n"
-by fastforce
-
-text{* This lemma is out of reach for @{text auto} because of the
-quantifiers.  Even @{text fastforce} fails when the quantifier structure
-becomes more complicated. In a few cases, its slow version @{text force}
-succeeds where @{text fastforce} fails.
-
-The method of choice for complex logical goals is @{text blast}. In the
-following example, @{text T} and @{text A} are two binary predicates. It
-is shown that if @{text T} is total, @{text A} is antisymmetric and @{text T} is
-a subset of @{text A}, then @{text A} is a subset of @{text T}:
-*}
-
-lemma
-  "\<lbrakk> \<forall>x y. T x y \<or> T y x;
-     \<forall>x y. A x y \<and> A y x \<longrightarrow> x = y;
-     \<forall>x y. T x y \<longrightarrow> A x y \<rbrakk>
-   \<Longrightarrow> \<forall>x y. A x y \<longrightarrow> T x y"
-by blast
-
-text{*
-We leave it to the reader to figure out why this lemma is true.
-Method @{text blast}
-\begin{itemize}
-\item is (in principle) a complete proof procedure for first-order formulas,
-  a fragment of HOL. In practice there is a search bound.
-\item does no rewriting and knows very little about equality.
-\item covers logic, sets and relations.
-\item either succeeds or fails.
-\end{itemize}
-Because of its strength in logic and sets and its weakness in equality reasoning, it complements the earlier proof methods.
-
-
-\subsubsection{Sledgehammer}
-
-Command \isacom{sledgehammer} calls a number of external automatic
-theorem provers (ATPs) that run for up to 30 seconds searching for a
-proof. Some of these ATPs are part of the Isabelle installation, others are
-queried over the internet. If successful, a proof command is generated and can
-be inserted into your proof.  The biggest win of \isacom{sledgehammer} is
-that it will take into account the whole lemma library and you do not need to
-feed in any lemma explicitly. For example,*}
-
-lemma "\<lbrakk> xs @ ys = ys @ xs;  length xs = length ys \<rbrakk> \<Longrightarrow> xs = ys"
-
-txt{* cannot be solved by any of the standard proof methods, but
-\isacom{sledgehammer} finds the following proof: *}
-
-by (metis append_eq_conv_conj)
-
-text{* We do not explain how the proof was found but what this command
-means. For a start, Isabelle does not trust external tools (and in particular
-not the translations from Isabelle's logic to those tools!)
-and insists on a proof that it can check. This is what @{text metis} does.
-It is given a list of lemmas and tries to find a proof just using those lemmas
-(and pure logic). In contrast to @{text simp} and friends that know a lot of
-lemmas already, using @{text metis} manually is tedious because one has
-to find all the relevant lemmas first. But that is precisely what
-\isacom{sledgehammer} does for us.
-In this case lemma @{thm[source]append_eq_conv_conj} alone suffices:
-@{thm[display] append_eq_conv_conj}
-We leave it to the reader to figure out why this lemma suffices to prove
-the above lemma, even without any knowledge of what the functions @{const take}
-and @{const drop} do. Keep in mind that the variables in the two lemmas
-are independent of each other, despite the same names, and that you can
-substitute arbitrary values for the free variables in a lemma.
-
-Just as for the other proof methods we have seen, there is no guarantee that
-\isacom{sledgehammer} will find a proof if it exists. Nor is
-\isacom{sledgehammer} superior to the other proof methods.  They are
-incomparable. Therefore it is recommended to apply @{text simp} or @{text
-auto} before invoking \isacom{sledgehammer} on what is left.
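-A typical interaction might therefore look like this (a sketch):
-\begin{quote}
-\isacom{apply} @{text auto}\\
-\isacom{sledgehammer}
-\end{quote}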
-
-\subsubsection{Arithmetic}
-
-By arithmetic formulas we mean formulas involving variables, numbers, @{text
-"+"}, @{text"-"}, @{text "="}, @{text "<"}, @{text "\<le>"} and the usual logical
-connectives @{text"\<not>"}, @{text"\<and>"}, @{text"\<or>"}, @{text"\<longrightarrow>"},
-@{text"\<longleftrightarrow>"}. Strictly speaking, this is known as \concept{linear arithmetic}
-because it does not involve multiplication, although multiplication with
-numbers, e.g.\ @{text"2*n"}, is allowed. Such formulas can be proved by
-@{text arith}:
-*}
-
-lemma "\<lbrakk> (a::nat) \<le> x + b; 2*x < c \<rbrakk> \<Longrightarrow> 2*a + 1 \<le> 2*b + c"
-by arith
-
-text{* In fact, @{text auto} and @{text simp} can prove many linear
-arithmetic formulas already, like the one above, by calling a weak but fast
-version of @{text arith}. Hence it is usually not necessary to invoke
-@{text arith} explicitly.
-
-The above example involves natural numbers, but integers (type @{typ int})
-and real numbers (type @{text real}) are supported as well. As are a number
-of further operators like @{const min} and @{const max}. On @{typ nat} and
-@{typ int}, @{text arith} can even prove theorems with quantifiers in them,
-but we will not enlarge on that here.
-
-
-\subsubsection{Trying them all}
-
-If you want to try all of the above automatic proof methods you simply type
-\begin{isabelle}
-\isacom{try}
-\end{isabelle}
-You can also add specific simplification and introduction rules:
-\begin{isabelle}
-\isacom{try} @{text"simp: \<dots> intro: \<dots>"}
-\end{isabelle}
-There is also a lightweight variant \isacom{try0} that does not call
-sledgehammer.
-
-\subsection{Single step proofs}
-
-Although automation is nice, it often fails, at least initially, and you need
-to find out why. When @{text fastforce} or @{text blast} simply fail, you have
-no clue why. At this point, the stepwise
-application of proof rules may be necessary. For example, if @{text blast}
-fails on @{prop"A \<and> B"}, you want to attack the two
-conjuncts @{text A} and @{text B} separately. This can
-be achieved by applying \emph{conjunction introduction}
-\[ @{thm[mode=Rule,show_question_marks]conjI}\ @{text conjI}
-\]
-to the proof state. We will now examine the details of this process.
-
-\subsubsection{Instantiating unknowns}
-
-We had briefly mentioned earlier that after proving some theorem,
-Isabelle replaces all free variables @{text x} by so-called \concept{unknowns}
-@{text "?x"}. We can see this clearly in rule @{thm[source] conjI}.
-These unknowns can later be instantiated explicitly or implicitly:
-\begin{itemize}
-\item By hand, using @{text of}.
-The expression @{text"conjI[of \"a=b\" \"False\"]"}
-instantiates the unknowns in @{thm[source] conjI} from left to right with the
-two formulas @{text"a=b"} and @{text False}, yielding the rule
-@{thm[display,mode=Rule]conjI[of "a=b" False]}
-
-In general, @{text"th[of string\<^isub>1 \<dots> string\<^isub>n]"} instantiates
-the unknowns in the theorem @{text th} from left to right with the terms
-@{text string\<^isub>1} to @{text string\<^isub>n}.
-
-\item By unification. \concept{Unification} is the process of making two
-terms syntactically equal by suitable instantiations of unknowns. For example,
-unifying @{text"?P \<and> ?Q"} with \mbox{@{prop"a=b \<and> False"}} instantiates
-@{text "?P"} with @{prop "a=b"} and @{text "?Q"} with @{prop False}.
-\end{itemize}
-We need not instantiate all unknowns. If we want to skip a particular one we
-can just write @{text"_"} instead, for example @{text "conjI[of _ \"False\"]"}.
-Unknowns can also be instantiated by name, for example
-@{text "conjI[where ?P = \"a=b\" and ?Q = \"False\"]"}.
-
-
-\subsubsection{Rule application}
-
-\concept{Rule application} means applying a rule backwards to a proof state.
-For example, applying rule @{thm[source]conjI} to a proof state
-\begin{quote}
-@{text"1.  \<dots>  \<Longrightarrow> A \<and> B"}
-\end{quote}
-results in two subgoals, one for each premise of @{thm[source]conjI}:
-\begin{quote}
-@{text"1.  \<dots>  \<Longrightarrow> A"}\\
-@{text"2.  \<dots>  \<Longrightarrow> B"}
-\end{quote}
-In general, the application of a rule @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"}
-to a subgoal \mbox{@{text"\<dots> \<Longrightarrow> C"}} proceeds in two steps:
-\begin{enumerate}
-\item
-Unify @{text A} and @{text C}, thus instantiating the unknowns in the rule.
-\item
-Replace the subgoal @{text C} with @{text n} new subgoals @{text"A\<^isub>1"} to @{text"A\<^isub>n"}.
-\end{enumerate}
-This is the command to apply rule @{text xyz}:
-\begin{quote}
-\isacom{apply}@{text"(rule xyz)"}
-\end{quote}
-This is also called \concept{backchaining} with rule @{text xyz}.
-
-\subsubsection{Introduction rules}
-
-Conjunction introduction (@{thm[source] conjI}) is one example of a whole
-class of rules known as \concept{introduction rules}. They explain under which
-premises some logical construct can be introduced. Here are some further
-useful introduction rules:
-\[
-\inferrule*[right=\mbox{@{text impI}}]{\mbox{@{text"?P \<Longrightarrow> ?Q"}}}{\mbox{@{text"?P \<longrightarrow> ?Q"}}}
-\qquad
-\inferrule*[right=\mbox{@{text allI}}]{\mbox{@{text"\<And>x. ?P x"}}}{\mbox{@{text"\<forall>x. ?P x"}}}
-\]
-\[
-\inferrule*[right=\mbox{@{text iffI}}]{\mbox{@{text"?P \<Longrightarrow> ?Q"}} \\ \mbox{@{text"?Q \<Longrightarrow> ?P"}}}
-  {\mbox{@{text"?P = ?Q"}}}
-\]
-These rules are part of the logical system of \concept{natural deduction}
-(e.g.\ \cite{HuthRyan}). Although we intentionally de-emphasize the basic rules
-of logic in favour of automatic proof methods that allow you to take bigger
-steps, these rules are helpful in locating where and why automation fails.
-When applied backwards, these rules decompose the goal:
-\begin{itemize}
-\item @{thm[source] conjI} and @{thm[source]iffI} split the goal into two subgoals,
-\item @{thm[source] impI} moves the left-hand side of a HOL implication into the list of assumptions,
-\item and @{thm[source] allI} removes a @{text "\<forall>"} by turning the quantified variable into a fixed local variable of the subgoal.
-\end{itemize}
-Isabelle knows about these and a number of other introduction rules.
-The command
-\begin{quote}
-\isacom{apply} @{text rule}
-\end{quote}
-automatically selects the appropriate rule for the current subgoal.
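-
-As a small additional illustration, the following goal can be proved by
-backward rule applications alone. The @{text assumption} method closes a
-subgoal whose conclusion matches one of its assumptions, and
-@{thm[source] TrueI} is the introduction rule for @{const True}:
-*}
-
-lemma "(P \<longrightarrow> P) \<and> True"
-apply(rule conjI)
-apply(rule impI)
-apply(assumption)
-apply(rule TrueI)
-done
-
-text{*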
-
-You can also turn your own theorems into introduction rules by giving them
-the @{text"intro"} attribute, analogous to the @{text simp} attribute.  In
-that case @{text blast}, @{text fastforce} and (to a limited extent) @{text
-auto} will automatically backchain with those theorems. The @{text intro}
-attribute should be used with care because it increases the search space and
-can lead to nontermination.  Sometimes it is better to use it only in
-specific calls of @{text blast} and friends. For example,
-@{thm[source] le_trans}, transitivity of @{text"\<le>"} on type @{typ nat},
-is not an introduction rule by default because of the disastrous effect
-on the search space, but can be useful in specific situations:
-*}
-
-lemma "\<lbrakk> (a::nat) \<le> b; b \<le> c; c \<le> d; d \<le> e \<rbrakk> \<Longrightarrow> a \<le> e"
-by(blast intro: le_trans)
-
-text{*
-Of course this is just an example and could be proved by @{text arith}, too.
-
-\subsubsection{Forward proof}
-\label{sec:forward-proof}
-
-Forward proof means deriving new theorems from old theorems. We have already
-seen a very simple form of forward proof: the @{text of} operator for
-instantiating unknowns in a theorem. The big brother of @{text of} is @{text
-OF} for applying one theorem to others. Given a theorem @{prop"A \<Longrightarrow> B"} called
-@{text r} and a theorem @{text A'} called @{text r'}, the theorem @{text
-"r[OF r']"} is the result of applying @{text r} to @{text r'}, where @{text
-r} should be viewed as a function taking a theorem @{text A} and returning
-@{text B}.  More precisely, @{text A} and @{text A'} are unified, thus
-instantiating the unknowns in @{text B}, and the result is the instantiated
-@{text B}. Of course, unification may also fail.
-\begin{warn}
-Application of rules to other rules operates in the forward direction: from
-the premises to the conclusion of the rule; application of rules to proof
-states operates in the backward direction, from the conclusion to the
-premises.
-\end{warn}
-
-In general @{text r} can be of the form @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"}
-and there can be multiple argument theorems @{text r\<^isub>1} to @{text r\<^isub>m}
-(with @{text"m \<le> n"}), in which case @{text "r[OF r\<^isub>1 \<dots> r\<^isub>m]"} is obtained
-by unifying and thus proving @{text "A\<^isub>i"} with @{text "r\<^isub>i"}, @{text"i = 1\<dots>m"}.
-Here is an example, where @{thm[source]refl} is the theorem
-@{thm[show_question_marks] refl}:
-*}
-
-thm conjI[OF refl[of "a"] refl[of "b"]]
-
-text{* yields the theorem @{thm conjI[OF refl[of "a"] refl[of "b"]]}.
-The command \isacom{thm} merely displays the result.
-
-Forward reasoning also makes sense in connection with proof states.
-Therefore @{text blast}, @{text fastforce} and @{text auto} support a modifier
-@{text dest} which instructs the proof method to use certain rules in a
-forward fashion. If @{text r} is of the form \mbox{@{text "A \<Longrightarrow> B"}}, the modifier
-\mbox{@{text"dest: r"}}
-allows proof search to reason forward with @{text r}, i.e.\
-to replace an assumption @{text A'}, where @{text A'} unifies with @{text A},
-with the correspondingly instantiated @{text B}. For example, @{thm[source,show_question_marks] Suc_leD} is the theorem \mbox{@{thm Suc_leD}}, which works well for forward reasoning:
-*}
-
-lemma "Suc(Suc(Suc a)) \<le> b \<Longrightarrow> a \<le> b"
-by(blast dest: Suc_leD)
-
-text{* In this particular example we could have backchained with
-@{thm[source] Suc_leD}, too, but because the premise is more complicated than the conclusion this can easily lead to nontermination.
-
-\subsubsection{Finding theorems}
-
-Command \isacom{find\_theorems} searches for specific theorems in the current
-theory. Search criteria include pattern matching on terms and on names.
-For details see the Isabelle/Isar Reference Manual~\cite{IsarRef}.
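-
-For example, the following (purely illustrative) queries search for theorems
-containing an equation between appended lists and for theorems whose name
-contains @{text conj}:
-*}
-
-find_theorems "_ @ _ = _ @ _"
-find_theorems name: "conj"
-
-text{*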
-\bigskip
-
-\begin{warn}
-To ease readability we will drop the question marks
-in front of unknowns from now on.
-\end{warn}
-
-
-\section{Inductive definitions}
-\label{sec:inductive-defs}
-
-Inductive definitions are the third important definition facility, after
-datatypes and recursive functions.
-\sem
-In fact, they are the key construct in the
-definition of operational semantics in the second part of the book.
-\endsem
-
-\subsection{An example: even numbers}
-\label{sec:Logic:even}
-
-Here is a simple example of an inductively defined predicate:
-\begin{itemize}
-\item 0 is even
-\item If $n$ is even, so is $n+2$.
-\end{itemize}
-The operative word ``inductive'' means that these are the only even numbers.
-In Isabelle we give the two rules the names @{text ev0} and @{text evSS}
-and write
-*}
-
-inductive ev :: "nat \<Rightarrow> bool" where
-ev0:    "ev 0" |
-evSS:  (*<*)"ev n \<Longrightarrow> ev (Suc(Suc n))"(*>*)
-text_raw{* @{prop[source]"ev n \<Longrightarrow> ev (n + 2)"} *}
-
-text{* To get used to inductive definitions, we will first prove a few
-properties of @{const ev} informally before we descend to the Isabelle level.
-
-How do we prove that some number is even, e.g.\ @{prop "ev 4"}? Simply by combining the defining rules for @{const ev}:
-\begin{quote}
-@{text "ev 0 \<Longrightarrow> ev (0 + 2) \<Longrightarrow> ev((0 + 2) + 2) = ev 4"}
-\end{quote}
-
-\subsubsection{Rule induction}
-
-Showing that all even numbers have some property is more complicated.  For
-example, let us prove that the inductive definition of even numbers agrees
-with the following recursive one:*}
-
-fun even :: "nat \<Rightarrow> bool" where
-"even 0 = True" |
-"even (Suc 0) = False" |
-"even (Suc(Suc n)) = even n"
-
-text{* We prove @{prop"ev m \<Longrightarrow> even m"}.  That is, we
-assume @{prop"ev m"} and by induction on the form of its derivation
-prove @{prop"even m"}. There are two cases corresponding to the two rules
-for @{const ev}:
-\begin{description}
-\item[Case @{thm[source]ev0}:]
- @{prop"ev m"} was derived by rule @{prop "ev 0"}: \\
- @{text"\<Longrightarrow>"} @{prop"m=(0::nat)"} @{text"\<Longrightarrow>"} @{text "even m = even 0 = True"}
-\item[Case @{thm[source]evSS}:]
- @{prop"ev m"} was derived by rule @{prop "ev n \<Longrightarrow> ev(n+2)"}: \\
-@{text"\<Longrightarrow>"} @{prop"m=n+(2::nat)"} and by induction hypothesis @{prop"even n"}\\
-@{text"\<Longrightarrow>"} @{text"even m = even(n + 2) = even n = True"}
-\end{description}
-
-What we have just seen is a special case of \concept{rule induction}.
-Rule induction applies to propositions of this form
-\begin{quote}
-@{prop "ev n \<Longrightarrow> P n"}
-\end{quote}
-That is, we want to prove a property @{prop"P n"}
-for all even @{text n}. But if we assume @{prop"ev n"}, then there must be
-some derivation of this assumption using the two defining rules for
-@{const ev}. That is, we must prove
-\begin{description}
-\item[Case @{thm[source]ev0}:] @{prop"P(0::nat)"}
-\item[Case @{thm[source]evSS}:] @{prop"\<lbrakk> ev n; P n \<rbrakk> \<Longrightarrow> P(n + 2::nat)"}
-\end{description}
-The corresponding rule is called @{thm[source] ev.induct} and looks like this:
-\[
-\inferrule{
-\mbox{@{thm (prem 1) ev.induct[of "n"]}}\\
-\mbox{@{thm (prem 2) ev.induct}}\\
-\mbox{@{prop"!!n. \<lbrakk> ev n; P n \<rbrakk> \<Longrightarrow> P(n+2)"}}}
-{\mbox{@{thm (concl) ev.induct[of "n"]}}}
-\]
-The first premise @{prop"ev n"} enforces that this rule can only be applied
-in situations where we know that @{text n} is even.
-
-Note that in the induction step we may not just assume @{prop"P n"} but also
-\mbox{@{prop"ev n"}}, which is simply the premise of rule @{thm[source]
-evSS}.  Here is an example where the local assumption @{prop"ev n"} comes in
-handy: we prove @{prop"ev m \<Longrightarrow> ev(m - 2)"} by induction on @{prop"ev m"}.
-Case @{thm[source]ev0} requires us to prove @{prop"ev(0 - 2)"}, which follows
-from @{prop"ev 0"} because @{prop"0 - 2 = (0::nat)"} on type @{typ nat}. In
-case @{thm[source]evSS} we have \mbox{@{prop"m = n+(2::nat)"}} and may assume
-@{prop"ev n"}, which implies @{prop"ev (m - 2)"} because @{text"m - 2 = (n +
-2) - 2 = n"}. We did not need the induction hypothesis at all for this proof,
-it is just a case analysis of which rule was used, but having @{prop"ev
-n"} at our disposal in case @{thm[source]evSS} was essential.
-This case analysis of rules is also called ``rule inversion''
-and is discussed in more detail in \autoref{ch:Isar}.
-
-\subsubsection{In Isabelle}
-
-Let us now recast the above informal proofs in Isabelle. For a start,
-we use @{const Suc} terms instead of numerals in rule @{thm[source]evSS}:
-@{thm[display] evSS}
-This avoids the difficulty of unifying @{text"n+2"} with some numeral,
-which is not automatic.
-
-The simplest way to prove @{prop"ev(Suc(Suc(Suc(Suc 0))))"} is in a forward
-direction: @{text "evSS[OF evSS[OF ev0]]"} yields the theorem @{thm evSS[OF
-evSS[OF ev0]]}. Alternatively, you can prove it as a lemma in a backward
-fashion. Although this is more verbose, it allows us to demonstrate how each
-rule application changes the proof state: *}
-
-lemma "ev(Suc(Suc(Suc(Suc 0))))"
-txt{*
-@{subgoals[display,indent=0,goals_limit=1]}
-*}
-apply(rule evSS)
-txt{*
-@{subgoals[display,indent=0,goals_limit=1]}
-*}
-apply(rule evSS)
-txt{*
-@{subgoals[display,indent=0,goals_limit=1]}
-*}
-apply(rule ev0)
-done
-
-text{* \indent
-Rule induction is applied by giving the induction rule explicitly via the
-@{text"rule:"} modifier:*}
-
-lemma "ev m \<Longrightarrow> even m"
-apply(induction rule: ev.induct)
-by(simp_all)
-
-text{* Both cases are automatic. Note that if there are multiple assumptions
-of the form @{prop"ev t"}, method @{text induction} will induct on the leftmost
-one.
-
-As a bonus, we also prove the remaining direction of the equivalence of
-@{const ev} and @{const even}:
-*}
-
-lemma "even n \<Longrightarrow> ev n"
-apply(induction n rule: even.induct)
-
-txt{* This is a proof by computation induction on @{text n} (see
-\autoref{sec:recursive-funs}) that sets up three subgoals corresponding to
-the three equations for @{const even}:
-@{subgoals[display,indent=0]}
-The first and third subgoals follow with @{thm[source]ev0} and @{thm[source]evSS}, and the second subgoal is trivially true because @{prop"even(Suc 0)"} is @{const False}:
-*}
-
-by (simp_all add: ev0 evSS)
-
-text{* The rules for @{const ev} make perfect simplification and introduction
-rules because their premises are always smaller than the conclusion. It
-makes sense to turn them into simplification and introduction rules
-permanently, to enhance proof automation: *}
-
-declare ev.intros[simp,intro]
-
-text{* The rules of an inductive definition are not simplification rules by
-default because, in contrast to recursive functions, there is no termination
-requirement for inductive definitions.
-
-\subsubsection{Inductive versus recursive}
-
-We have seen two definitions of the notion of evenness, an inductive and a
-recursive one. Which one is better? Much of the time, the recursive one is
-more convenient: it allows us to do rewriting in the middle of terms, and it
-expresses both the positive information (which numbers are even) and the
-negative information (which numbers are not even) directly. An inductive
-definition only expresses the positive information directly. The negative
-information, for example, that @{text 1} is not even, has to be proved from
-it (by induction or rule inversion). On the other hand, rule induction is
-tailor-made for proving \mbox{@{prop"ev n \<Longrightarrow> P n"}} because it only asks you
-to prove the positive cases. In the proof of @{prop"even n \<Longrightarrow> P n"} by
-computation induction via @{thm[source]even.induct}, we are also presented
-with the trivial negative cases. If you want the convenience of both
-rewriting and rule induction, you can make two definitions and show their
-equivalence (as above) or make one definition and prove additional properties
-from it, for example rule induction from computation induction.
-
-But many concepts do not admit a recursive definition at all because there is
-no datatype for the recursion (for example, the transitive closure of a
-relation), or the recursion would not terminate (for example,
-an interpreter for a programming language). Even if there is a recursive
-definition, if we are only interested in the positive information, the
-inductive definition may be much simpler.
-
-\subsection{The reflexive transitive closure}
-\label{sec:star}
-
-Evenness is really more conveniently expressed recursively than inductively.
-As a second and very typical example of an inductive definition we define the
-reflexive transitive closure.
-\sem
-It will also be an important building block for
-some of the semantics considered in the second part of the book.
-\endsem
-
-The reflexive transitive closure, called @{text star} below, is a function
-that maps a binary predicate to another binary predicate: if @{text r} is of
-type @{text"\<tau> \<Rightarrow> \<tau> \<Rightarrow> bool"} then @{term "star r"} is again of type @{text"\<tau> \<Rightarrow>
-\<tau> \<Rightarrow> bool"}, and @{prop"star r x y"} means that @{text x} and @{text y} are in
-the relation @{term"star r"}. Think @{term"r^*"} when you see @{term"star
-r"}, because @{text"star r"} is meant to be the reflexive transitive closure.
-That is, @{prop"star r x y"} is meant to be true if from @{text x} we can
-reach @{text y} in finitely many @{text r} steps. This concept is naturally
-defined inductively: *}
-
-inductive star :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"  for r where
-refl:  "star r x x" |
-step:  "r x y \<Longrightarrow> star r y z \<Longrightarrow> star r x z"
-
-text{* The base case @{thm[source] refl} is reflexivity: @{term "x=y"}. The
-step case @{thm[source]step} combines an @{text r} step (from @{text x} to
-@{text y}) and a @{term"star r"} step (from @{text y} to @{text z}) into a
-@{term"star r"} step (from @{text x} to @{text z}).
-The ``\isacom{for}~@{text r}'' in the header is merely a hint to Isabelle
-that @{text r} is a fixed parameter of @{const star}, in contrast to the
-further parameters of @{const star}, which change. As a result, Isabelle
-generates a simpler induction rule.
-
-By definition @{term"star r"} is reflexive. It is also transitive, but we
-need rule induction to prove that: *}
-
-lemma star_trans: "star r x y \<Longrightarrow> star r y z \<Longrightarrow> star r x z"
-apply(induction rule: star.induct)
-(*<*)
-defer
-apply(rename_tac u x y)
-defer
-(*>*)
-txt{* The induction is over @{prop"star r x y"} and we try to prove
-\mbox{@{prop"star r y z \<Longrightarrow> star r x z"}},
-which we abbreviate by @{prop"P x y"}. These are our two subgoals:
-@{subgoals[display,indent=0]}
-The first one is @{prop"P x x"}, the result of case @{thm[source]refl},
-and it is trivial.
-*}
-apply(assumption)
-txt{* Let us examine subgoal @{text 2}, case @{thm[source] step}.
-Assumptions @{prop"r u x"} and \mbox{@{prop"star r x y"}}
-are the premises of rule @{thm[source]step}.
-Assumption @{prop"star r y z \<Longrightarrow> star r x z"} is \mbox{@{prop"P x y"}},
-the IH coming from @{prop"star r x y"}. We have to prove @{prop"P u y"},
-which we do by assuming @{prop"star r y z"} and proving @{prop"star r u z"}.
-The proof itself is straightforward: from \mbox{@{prop"star r y z"}} the IH
-leads to @{prop"star r x z"} which, together with @{prop"r u x"},
-leads to \mbox{@{prop"star r u z"}} via rule @{thm[source]step}:
-*}
-apply(metis step)
-done
-
-text{*
-
-\subsection{The general case}
-
-Inductive definitions have approximately the following general form:
-\begin{quote}
-\isacom{inductive} @{text"I :: \"\<tau> \<Rightarrow> bool\""} \isacom{where}
-\end{quote}
-followed by a sequence of (possibly named) rules of the form
-\begin{quote}
-@{text "\<lbrakk> I a\<^isub>1; \<dots>; I a\<^isub>n \<rbrakk> \<Longrightarrow> I a"}
-\end{quote}
-separated by @{text"|"}. As usual, @{text n} can be 0.
-The corresponding rule induction principle
-@{text I.induct} applies to propositions of the form
-\begin{quote}
-@{prop "I x \<Longrightarrow> P x"}
-\end{quote}
-where @{text P} may itself be a chain of implications.
-\begin{warn}
-Rule induction is always on the leftmost premise of the goal.
-Hence @{text "I x"} must be the first premise.
-\end{warn}
-Proving @{prop "I x \<Longrightarrow> P x"} by rule induction means proving
-for every rule of @{text I} that @{text P} is invariant:
-\begin{quote}
-@{text "\<lbrakk> I a\<^isub>1; P a\<^isub>1; \<dots>; I a\<^isub>n; P a\<^isub>n \<rbrakk> \<Longrightarrow> P a"}
-\end{quote}
-
-The above format for inductive definitions is simplified in a number of
-respects. @{text I} can have any number of arguments and each rule can have
-additional premises not involving @{text I}, so-called \concept{side
-conditions}. In rule inductions, these side conditions appear as additional
-assumptions. The \isacom{for} clause seen in the definition of the reflexive
-transitive closure merely simplifies the form of the induction rule.
-*}
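-
-text{* As an additional illustration of this format, here is an inductive
-definition of palindromic lists, with two rules without premises and one
-rule with a single premise: *}
-
-inductive palindrome :: "'a list \<Rightarrow> bool" where
-"palindrome []" |
-"palindrome [x]" |
-"palindrome xs \<Longrightarrow> palindrome (x # xs @ [x])"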
-(*<*)
-end
-(*>*)
--- a/doc-src/ProgProve/Thys/MyList.thy	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,17 +0,0 @@
-theory MyList
-imports Main
-begin
-
-datatype 'a list = Nil | Cons 'a "'a list"
-
-fun app :: "'a list => 'a list => 'a list" where
-"app Nil ys = ys" |
-"app (Cons x xs) ys = Cons x (app xs ys)"
-
-fun rev :: "'a list => 'a list" where
-"rev Nil = Nil" |
-"rev (Cons x xs) = app (rev xs) (Cons x Nil)"
-
-value "rev(Cons True (Cons False Nil))"
-
-end
--- a/doc-src/ProgProve/Thys/Types_and_funs.thy	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,479 +0,0 @@
-(*<*)
-theory Types_and_funs
-imports Main
-begin
-(*>*)
-text{*
-\vspace{-5ex}
-\section{Type and function definitions}
-
-Type synonyms are abbreviations for existing types, for example
-*}
-
-type_synonym string = "char list"
-
-text{*
-Type synonyms are expanded after parsing and are not present in the internal representation and output. They are mere conveniences for the reader.
-
-\subsection{Datatypes}
-
-The general form of a datatype definition looks like this:
-\begin{quote}
-\begin{tabular}{@ {}rclcll}
-\isacom{datatype} @{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"}
-     & = & $C_1 \ @{text"\""}\tau_{1,1}@{text"\""} \dots @{text"\""}\tau_{1,n_1}@{text"\""}$ \\
-     & $|$ & \dots \\
-     & $|$ & $C_k \ @{text"\""}\tau_{k,1}@{text"\""} \dots @{text"\""}\tau_{k,n_k}@{text"\""}$
-\end{tabular}
-\end{quote}
-It introduces the constructors \
-$C_i :: \tau_{i,1}\Rightarrow \cdots \Rightarrow \tau_{i,n_i} \Rightarrow$~@{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"} \ and expresses that any value of this type is built from these constructors in a unique manner. Uniqueness is implied by the following
-properties of the constructors:
-\begin{itemize}
-\item \emph{Distinctness:} $C_i\ \ldots \neq C_j\ \dots$ \quad if $i \neq j$
-\item \emph{Injectivity:}
-\begin{tabular}[t]{l}
- $(C_i \ x_1 \dots x_{n_i} = C_i \ y_1 \dots y_{n_i}) =$\\
- $(x_1 = y_1 \land \dots \land x_{n_i} = y_{n_i})$
-\end{tabular}
-\end{itemize}
-The fact that any value of the datatype is built from the constructors implies
-the structural induction rule: to show
-$P~x$ for all $x$ of type @{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"},
-one needs to show $P(C_i\ x_1 \dots x_{n_i})$ (for each $i$) assuming
-$P(x_j)$ for all $j$ where $\tau_{i,j} =$~@{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"}.
-Distinctness and injectivity are applied automatically by @{text auto}
-and other proof methods. Induction must be applied explicitly.
-
-Datatype values can be taken apart with case-expressions, for example
-\begin{quote}
-\noquotes{@{term[source] "(case xs of [] \<Rightarrow> 0 | x # _ \<Rightarrow> Suc x)"}}
-\end{quote}
-just like in functional programming languages. Case expressions
-must be enclosed in parentheses.
-
-As an example, consider binary trees:
-*}
-
-datatype 'a tree = Tip | Node  "'a tree"  'a  "'a tree"
-
-text{* with a mirror function: *}
-
-fun mirror :: "'a tree \<Rightarrow> 'a tree" where
-"mirror Tip = Tip" |
-"mirror (Node l a r) = Node (mirror r) a (mirror l)"
-
-text{* The following lemma illustrates induction: *}
-
-lemma "mirror(mirror t) = t"
-apply(induction t)
-
-txt{* yields
-@{subgoals[display]}
-The induction step contains two induction hypotheses, one for each subtree.
-An application of @{text auto} finishes the proof.
-
-A very simple but also very useful datatype is the predefined
-@{datatype[display] option}
-Its sole purpose is to add a new element @{const None} to an existing
-type @{typ 'a}. To make sure that @{const None} is distinct from all the
-elements of @{typ 'a}, you wrap them up in @{const Some} and call
-the new type @{typ"'a option"}. A typical application is a lookup function
-on a list of key-value pairs, often called an association list:
-*}
-(*<*)
-apply auto
-done
-(*>*)
-fun lookup :: "('a * 'b) list \<Rightarrow> 'a \<Rightarrow> 'b option" where
-"lookup [] x = None" |
-"lookup ((a,b) # ps) x = (if a = x then Some b else lookup ps x)"
-
-text{*
-Note that @{text"\<tau>\<^isub>1 * \<tau>\<^isub>2"} is the type of pairs, also written @{text"\<tau>\<^isub>1 \<times> \<tau>\<^isub>2"}.
-
-\subsection{Definitions}
-
-Non-recursive functions can be defined as in the following example:
-*}
-
-definition sq :: "nat \<Rightarrow> nat" where
-"sq n = n * n"
-
-text{* Such definitions do not allow pattern matching but only
-@{text"f x\<^isub>1 \<dots> x\<^isub>n = t"}, where @{text f} does not occur in @{text t}.
-
-\subsection{Abbreviations}
-
-Abbreviations are similar to definitions:
-*}
-
-abbreviation sq' :: "nat \<Rightarrow> nat" where
-"sq' n == n * n"
-
-text{* The key difference is that @{const sq'} is only syntactic sugar:
-@{term"sq' t"} is replaced by \mbox{@{term"t*t"}} after parsing, and every
-occurrence of a term @{term"u*u"} is replaced by \mbox{@{term"sq' u"}} before
-printing.  Internally, @{const sq'} does not exist.  This is also the
-advantage of abbreviations over definitions: definitions need to be expanded
-explicitly (see \autoref{sec:rewr-defs}) whereas abbreviations are already
-expanded upon parsing. However, abbreviations should be introduced sparingly:
-if abused, they can lead to a confusing discrepancy between the internal and
-external view of a term.
-
-\subsection{Recursive functions}
-\label{sec:recursive-funs}
-
-Recursive functions are defined with \isacom{fun} by pattern matching
-over datatype constructors. The order of equations matters, just as in
-functional programming languages. However, all HOL functions must be
-total. This simplifies the logic---terms are always defined---but means
-that recursive functions must terminate. Otherwise one could define a
-function @{prop"f n = f n + (1::nat)"} and conclude \mbox{@{prop"(0::nat) = 1"}} by
-subtracting @{term"f n"} on both sides.
-
-Isabelle's automatic termination checker requires that the arguments of
-recursive calls on the right-hand side be strictly smaller than the
-arguments on the left-hand side. In the simplest case, this means that one
-fixed argument position decreases in size with each recursive call. The size
-is measured as the number of constructors (excluding 0-ary ones, e.g.\ @{text
-Nil}). Lexicographic combinations are also recognized. In more complicated
-situations, the user may have to prove termination by hand. For details
-see~\cite{Krauss}.
-
-Functions defined with \isacom{fun} come with their own induction schema
-that mirrors the recursion schema and is derived from the termination
-order. For example,
-*}
-
-fun div2 :: "nat \<Rightarrow> nat" where
-"div2 0 = 0" |
-"div2 (Suc 0) = Suc 0" |
-"div2 (Suc(Suc n)) = Suc(div2 n)"
-
-text{* does not just define @{const div2} but also proves a
-customized induction rule:
-\[
-\inferrule{
-\mbox{@{thm (prem 1) div2.induct}}\\
-\mbox{@{thm (prem 2) div2.induct}}\\
-\mbox{@{thm (prem 3) div2.induct}}}
-{\mbox{@{thm (concl) div2.induct[of _ "m"]}}}
-\]
-This customized induction rule can simplify inductive proofs. For example,
-*}
-
-lemma "div2(n+n) = n"
-apply(induction n rule: div2.induct)
-
-txt{* yields the 3 subgoals
-@{subgoals[display,margin=65]}
-An application of @{text auto} finishes the proof.
-Had we used ordinary structural induction on @{text n},
-the proof would have needed an additional
-case analysis in the induction step.
-
-The general case is often called \concept{computation induction},
-because the induction follows the (terminating!) computation.
-For every defining equation
-\begin{quote}
-@{text "f(e) = \<dots> f(r\<^isub>1) \<dots> f(r\<^isub>k) \<dots>"}
-\end{quote}
-where @{text"f(r\<^isub>i)"}, @{text"i=1\<dots>k"}, are all the recursive calls,
-the induction rule @{text"f.induct"} contains one premise of the form
-\begin{quote}
-@{text"P(r\<^isub>1) \<Longrightarrow> \<dots> \<Longrightarrow> P(r\<^isub>k) \<Longrightarrow> P(e)"}
-\end{quote}
-If @{text "f :: \<tau>\<^isub>1 \<Rightarrow> \<dots> \<Rightarrow> \<tau>\<^isub>n \<Rightarrow> \<tau>"} then @{text"f.induct"} is applied like this:
-\begin{quote}
-\isacom{apply}@{text"(induction x\<^isub>1 \<dots> x\<^isub>n rule: f.induct)"}
-\end{quote}
-where typically there is a call @{text"f x\<^isub>1 \<dots> x\<^isub>n"} in the goal.
-But note that the induction rule does not mention @{text f} at all,
-except in its name, and is applicable independently of @{text f}.
-
-\section{Induction heuristics}
-
-We have already noted that theorems about recursive functions are proved by
-induction. In case the function has more than one argument, we have used
-the following heuristic in the proofs about the append function:
-\begin{quote}
-\emph{Perform induction on argument number $i$\\
- if the function is defined by recursion on argument number $i$.}
-\end{quote}
-The key heuristic, and the main point of this section, is to
-\emph{generalize the goal before induction}.
-The reason is simple: if the goal is
-too specific, the induction hypothesis is too weak to allow the induction
-step to go through. Let us illustrate the idea with an example.
-
-Function @{const rev} has quadratic worst-case running time
-because it calls append for each element of the list and
-append is linear in its first argument.  A linear time version of
-@{const rev} requires an extra argument where the result is accumulated
-gradually, using only~@{text"#"}:
-*}
-(*<*)
-apply auto
-done
-(*>*)
-fun itrev :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
-"itrev []        ys = ys" |
-"itrev (x#xs) ys = itrev xs (x#ys)"
-
-text{* The behaviour of @{const itrev} is simple: it reverses
-its first argument by stacking its elements onto the second argument,
-and it returns that second argument when the first one becomes
-empty. Note that @{const itrev} is tail-recursive: it can be
-compiled into a loop; no stack is necessary for executing it.
-
-Naturally, we would like to show that @{const itrev} does indeed reverse
-its first argument provided the second one is empty:
-*}
-
-lemma "itrev xs [] = rev xs"
-
-txt{* There is no choice as to the induction variable:
-*}
-
-apply(induction xs)
-apply(auto)
-
-txt{*
-Unfortunately, this attempt does not prove
-the induction step:
-@{subgoals[display,margin=70]}
-The induction hypothesis is too weak.  The fixed
-argument,~@{term"[]"}, prevents it from rewriting the conclusion.  
-This example suggests a heuristic:
-\begin{quote}
-\emph{Generalize goals for induction by replacing constants by variables.}
-\end{quote}
-Of course one cannot do this na\"{\i}vely: @{prop"itrev xs ys = rev xs"} is
-just not true.  The correct generalization is
-*};
-(*<*)oops;(*>*)
-lemma "itrev xs ys = rev xs @ ys"
-(*<*)apply(induction xs, auto)(*>*)
-txt{*
-If @{text ys} is replaced by @{term"[]"}, the right-hand side simplifies to
-@{term"rev xs"}, as required.
-In this instance it was easy to guess the right generalization.
-Other situations can require a good deal of creativity.  
-
-Although we now have two variables, only @{text xs} is suitable for
-induction, and we repeat our proof attempt. Unfortunately, we are still
-not there:
-@{subgoals[display,margin=65]}
-The induction hypothesis is still too weak, but this time it takes no
-intuition to generalize: the problem is that the @{text ys}
-in the induction hypothesis is fixed,
-but the induction hypothesis needs to be applied with
-@{term"a # ys"} instead of @{text ys}. Hence we prove the theorem
-for all @{text ys} instead of a fixed one. We can instruct induction
-to perform this generalization for us by adding @{text "arbitrary: ys"}.
-*}
-(*<*)oops
-lemma "itrev xs ys = rev xs @ ys"
-(*>*)
-apply(induction xs arbitrary: ys)
-
-txt{* The induction hypothesis in the induction step is now universally quantified over @{text ys}:
-@{subgoals[display,margin=65]}
-Thus the proof succeeds:
-*}
-
-apply auto
-done
-
-text{*
-This leads to another heuristic for generalization:
-\begin{quote}
-\emph{Generalize induction by generalizing all free
-variables\\ {\em(except the induction variable itself)}.}
-\end{quote}
-Generalization is best performed with @{text"arbitrary: y\<^isub>1 \<dots> y\<^isub>k"}. 
-This heuristic prevents trivial failures like the one above.
-However, it should not be applied blindly.
-It is not always required, and the additional quantifiers can complicate
-matters in some cases. The variables that need to be quantified are typically
-those that change in recursive calls.
-
-\section{Simplification}
-
-So far we have talked a lot about simplifying terms without explaining the concept. \concept{Simplification} means
-\begin{itemize}
-\item using equations $l = r$ from left to right (only),
-\item as long as possible.
-\end{itemize}
-To emphasize the directionality, equations that have been given the
-@{text"simp"} attribute are called \concept{simplification}
-rules. Logically, they are still symmetric, but proofs by
-simplification use them only in the left-to-right direction.  The proof tool
-that performs simplifications is called the \concept{simplifier}. It is the
-basis of @{text auto} and other related proof methods.
-
-The idea of simplification is best explained by an example. Given the
-simplification rules
-\[
-\begin{array}{rcl@ {\quad}l}
-@{term"0 + n::nat"} &@{text"="}& @{text n} & (1) \\
-@{term"(Suc m) + n"} &@{text"="}& @{term"Suc (m + n)"} & (2) \\
-@{text"(Suc m \<le> Suc n)"} &@{text"="}& @{text"(m \<le> n)"} & (3)\\
-@{text"(0 \<le> m)"} &@{text"="}& @{const True} & (4)
-\end{array}
-\]
-the formula @{prop"0 + Suc 0 \<le> Suc 0 + x"} is simplified to @{const True}
-as follows:
-\[
-\begin{array}{r@ {}c@ {}l@ {\quad}l}
-@{text"(0 + Suc 0"} & \leq & @{text"Suc 0 + x)"}  & \stackrel{(1)}{=} \\
-@{text"(Suc 0"}     & \leq & @{text"Suc 0 + x)"}  & \stackrel{(2)}{=} \\
-@{text"(Suc 0"}     & \leq & @{text"Suc (0 + x)"} & \stackrel{(3)}{=} \\
-@{text"(0"}         & \leq & @{text"0 + x)"}      & \stackrel{(4)}{=} \\[1ex]
- & @{const True}
-\end{array}
-\]
-Simplification is often also called \concept{rewriting}
-and simplification rules \concept{rewrite rules}.
-
-\subsection{Simplification rules}
-
-The attribute @{text"simp"} declares theorems to be simplification rules,
-which the simplifier will use automatically. In addition,
-\isacom{datatype} and \isacom{fun} commands implicitly declare some
-simplification rules: \isacom{datatype} the distinctness and injectivity
-rules, \isacom{fun} the defining equations. Definitions are not declared
-as simplification rules automatically! Nearly any theorem can become a
-simplification rule. The simplifier will try to transform it into an
-equation. For example, the theorem @{prop"\<not> P"} is turned into @{prop"P = False"}.
-
-Only equations that really simplify, like @{prop"rev (rev xs) = xs"} and
-@{prop"xs @ [] = xs"}, should be declared as simplification
-rules. Equations that may be counterproductive as simplification rules
-should only be used in specific proof steps (see \S\ref{sec:simp} below).
-Distributivity laws, for example, alter the structure of terms
-and can produce an exponential blow-up.
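-
-As a small additional example, the following property of @{const mirror}
-rewrites its left-hand side to a simpler right-hand side, and we declare it
-a simplification rule via the @{text simp} attribute:
-*}
-
-lemma mirror_Tip[simp]: "(mirror t = Tip) = (t = Tip)"
-apply(induction t)
-apply(auto)
-done
-
-text{*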
-
-\subsection{Conditional simplification rules}
-
-Simplification rules can be conditional.  Before applying such a rule, the
-simplifier will first try to prove the preconditions, again by
-simplification. For example, given the simplification rules
-\begin{quote}
-@{prop"p(0::nat) = True"}\\
-@{prop"p(x) \<Longrightarrow> f(x) = g(x)"},
-\end{quote}
-the term @{term"f(0::nat)"} simplifies to @{term"g(0::nat)"}
-but @{term"f(1::nat)"} does not simplify because @{prop"p(1::nat)"}
-is not provable.
-
-\subsection{Termination}
-
-Simplification can run forever, for example if both @{prop"f x = g x"} and
-@{prop"g(x) = f(x)"} are simplification rules. It is the user's
-responsibility not to include simplification rules that can lead to
-nontermination, either on their own or in combination with other
-simplification rules. The right-hand side of a simplification rule should
-always be ``simpler'' than the left-hand side---in some sense. But since
-termination is undecidable, such a check cannot be automated completely
-and Isabelle makes little attempt to detect nontermination.
-
-When conditional simplification rules are applied, their preconditions are
-proved first. Hence all preconditions need to be
-simpler than the left-hand side of the conclusion. For example
-\begin{quote}
-@{prop "n < m \<Longrightarrow> (n < Suc m) = True"}
-\end{quote}
-is suitable as a simplification rule: both @{prop"n<m"} and @{const True}
-are simpler than \mbox{@{prop"n < Suc m"}}. But
-\begin{quote}
-@{prop "Suc n < m \<Longrightarrow> (n < m) = True"}
-\end{quote}
-leads to nontermination: when trying to rewrite @{prop"n<m"} to @{const True}
-one first has to prove \mbox{@{prop"Suc n < m"}}, which can be rewritten to @{const True} provided @{prop"Suc(Suc n) < m"}, \emph{ad infinitum}.
-
-\subsection{The @{text simp} proof method}
-\label{sec:simp}
-
-So far we have only used the proof method @{text auto}.  Method @{text simp}
-is the key component of @{text auto}, but @{text auto} can do much more. In
-some cases, @{text auto} is overeager and modifies the proof state too much.
-In such cases the more predictable @{text simp} method should be used.
-Given a goal
-\begin{quote}
-@{text"1. \<lbrakk> P\<^isub>1; \<dots>; P\<^isub>m \<rbrakk> \<Longrightarrow> C"}
-\end{quote}
-the command
-\begin{quote}
-\isacom{apply}@{text"(simp add: th\<^isub>1 \<dots> th\<^isub>n)"}
-\end{quote}
-simplifies the assumptions @{text "P\<^isub>i"} and the conclusion @{text C} using
-\begin{itemize}
-\item all simplification rules, including the ones coming from \isacom{datatype} and \isacom{fun},
-\item the additional lemmas @{text"th\<^isub>1 \<dots> th\<^isub>n"}, and
-\item the assumptions.
-\end{itemize}
-In addition to or instead of @{text add} there is also @{text del} for removing
-simplification rules temporarily. Both are optional. Method @{text auto}
-can be modified similarly:
-\begin{quote}
-\isacom{apply}@{text"(auto simp add: \<dots> simp del: \<dots>)"}
-\end{quote}
-Here the modifiers are @{text"simp add"} and @{text"simp del"}
-instead of just @{text add} and @{text del} because @{text auto}
-does not just perform simplification.
-
-Note that @{text simp} acts only on subgoal 1, whereas @{text auto} acts on all
-subgoals. There is also @{text simp_all}, which applies @{text simp} to
-all subgoals.
-
-\subsection{Rewriting with definitions}
-\label{sec:rewr-defs}
-
-Definitions introduced by the command \isacom{definition}
-can also be used as simplification rules,
-but by default they are not: the simplifier does not expand them
-automatically. Definitions are intended for introducing abstract concepts and
-not merely as abbreviations. Of course, we need to expand the definition
-initially, but once we have proved enough abstract properties of the new
-constant, we can forget its original definition. This style makes proofs more
-robust: if the definition has to be changed, only the proofs of the abstract
-properties will be affected.
-
-The definition of a function @{text f} is a theorem named @{text f_def}
-and can be added to a call of @{text simp} just like any other theorem:
-\begin{quote}
-\isacom{apply}@{text"(simp add: f_def)"}
-\end{quote}
-In particular, let-expressions can be unfolded by
-making @{thm[source] Let_def} a simplification rule.
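-
-For example, plain @{text simp} cannot prove @{prop"sq (Suc 0) = Suc 0"}
-because it does not unfold @{const sq}, but the proof succeeds once
-@{thm[source] sq_def} is added:
-*}
-
-lemma "sq (Suc 0) = Suc 0"
-by (simp add: sq_def)
-
-text{*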
-
-\subsection{Case splitting with @{text simp}}
-
-Goals containing if-expressions are automatically split into two cases by
-@{text simp} using the rule
-\begin{quote}
-@{prop"P(if A then s else t) = ((A \<longrightarrow> P(s)) \<and> (~A \<longrightarrow> P(t)))"}
-\end{quote}
-For example, @{text simp} can prove
-\begin{quote}
-@{prop"(A \<and> B) = (if A then B else False)"}
-\end{quote}
-because both @{prop"A \<longrightarrow> (A & B) = B"} and @{prop"~A \<longrightarrow> (A & B) = False"}
-simplify to @{const True}.
-
-We can split case-expressions similarly. For @{text nat} the rule looks like this:
-@{prop[display,margin=65,indent=4]"P(case e of 0 \<Rightarrow> a | Suc n \<Rightarrow> b n) = ((e = 0 \<longrightarrow> P a) & (ALL n. e = Suc n \<longrightarrow> P(b n)))"}
-Case expressions are not split automatically by @{text simp}, but @{text simp}
-can be instructed to do so:
-\begin{quote}
-\isacom{apply}@{text"(simp split: nat.split)"}
-\end{quote}
-splits all case-expressions over natural numbers. For an arbitrary
-datatype @{text t} it is @{text "t.split"} instead of @{thm[source] nat.split}.
-Method @{text auto} can be modified in exactly the same way.
-*}
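-
-text{* For example, the following trivial property of a case-expression over
-@{typ nat} is proved by splitting it: *}
-
-lemma "(case n of 0 \<Rightarrow> a | Suc m \<Rightarrow> a) = a"
-by (simp split: nat.split)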
-(*<*)
-end
-(*>*)
--- a/doc-src/ProgProve/Thys/document/Basics.tex	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,176 +0,0 @@
-%
-\begin{isabellebody}%
-\def\isabellecontext{Basics}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\isatagtheory
-%
-\endisatagtheory
-{\isafoldtheory}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\begin{isamarkuptext}%
-This chapter introduces HOL as a functional programming language and shows
-how to prove properties of functional programs by induction.
-
-\section{Basics}
-
-\subsection{Types, Terms and Formulae}
-\label{sec:TypesTermsForms}
-
-HOL is a typed logic whose type system resembles that of functional
-programming languages. Thus there are
-\begin{description}
-\item[base types,] 
-in particular \isa{bool}, the type of truth values,
-\isa{nat}, the type of natural numbers ($\mathbb{N}$), and \isa{int},
-the type of mathematical integers ($\mathbb{Z}$).
-\item[type constructors,]
- in particular \isa{list}, the type of
-lists, and \isa{set}, the type of sets. Type constructors are written
-postfix, e.g.\ \isa{nat\ list} is the type of lists whose elements are
-natural numbers.
-\item[function types,]
-denoted by \isa{{\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}}.
-\item[type variables,]
-  denoted by \isa{{\isaliteral{27}{\isacharprime}}a}, \isa{{\isaliteral{27}{\isacharprime}}b} etc., just like in ML\@.
-\end{description}
-
-\concept{Terms} are formed as in functional programming by
-applying functions to arguments. If \isa{f} is a function of type
-\isa{{\isaliteral{5C3C7461753E}{\isasymtau}}\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{5C3C7461753E}{\isasymtau}}\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{2}}} and \isa{t} is a term of type
-\isa{{\isaliteral{5C3C7461753E}{\isasymtau}}\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}} then \isa{f\ t} is a term of type \isa{{\isaliteral{5C3C7461753E}{\isasymtau}}\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{2}}}. We write \isa{t\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{5C3C7461753E}{\isasymtau}}} to mean that term \isa{t} has type \isa{{\isaliteral{5C3C7461753E}{\isasymtau}}}.
-
-\begin{warn}
-There are many predefined infix symbols like \isa{{\isaliteral{2B}{\isacharplus}}} and \isa{{\isaliteral{5C3C6C653E}{\isasymle}}}.
-The name of the corresponding binary function is \isa{op\ {\isaliteral{2B}{\isacharplus}}},
-not just \isa{{\isaliteral{2B}{\isacharplus}}}. That is, \isa{x\ {\isaliteral{2B}{\isacharplus}}\ y} is syntactic sugar for
-\noquotes{\isa{{\isaliteral{22}{\isachardoublequote}}op\ {\isaliteral{2B}{\isacharplus}}\ x\ y{\isaliteral{22}{\isachardoublequote}}}}.
-\end{warn}
-
-HOL also supports some basic constructs from functional programming:
-\begin{quote}
-\isa{{\isaliteral{28}{\isacharparenleft}}if\ b\ then\ t\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ else\ t\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}}\\
-\isa{{\isaliteral{28}{\isacharparenleft}}let\ x\ {\isaliteral{3D}{\isacharequal}}\ t\ in\ u{\isaliteral{29}{\isacharparenright}}}\\
-\isa{{\isaliteral{28}{\isacharparenleft}}case\ t\ of\ pat\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ t\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{7C}{\isacharbar}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ {\isaliteral{7C}{\isacharbar}}\ pat\isaliteral{5C3C5E697375623E}{}\isactrlisub n\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ t\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{29}{\isacharparenright}}}
-\end{quote}
-\begin{warn}
-The above three constructs must always be enclosed in parentheses
-if they occur inside other constructs.
-\end{warn}
-Terms may also contain \isa{{\isaliteral{5C3C6C616D6264613E}{\isasymlambda}}}-abstractions. For example,
-\isa{{\isaliteral{5C3C6C616D6264613E}{\isasymlambda}}x{\isaliteral{2E}{\isachardot}}\ x} is the identity function.
-
-\concept{Formulae} are terms of type \isa{bool}.
-There are the basic constants \isa{True} and \isa{False} and
-the usual logical connectives (in decreasing order of precedence):
-\isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}}, \isa{{\isaliteral{5C3C616E643E}{\isasymand}}}, \isa{{\isaliteral{5C3C6F723E}{\isasymor}}}, \isa{{\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}}.
-
-\concept{Equality} is available in the form of the infix function \isa{{\isaliteral{3D}{\isacharequal}}}
-of type \isa{{\isaliteral{27}{\isacharprime}}a\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ bool}. It also works for formulas, where
-it means ``if and only if''.
-
-\concept{Quantifiers} are written \isa{{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x{\isaliteral{2E}{\isachardot}}\ P} and \isa{{\isaliteral{5C3C6578697374733E}{\isasymexists}}x{\isaliteral{2E}{\isachardot}}\ P}.
-
-Isabelle automatically computes the type of each variable in a term. This is
-called \concept{type inference}.  Despite type inference, it is sometimes
-necessary to attach explicit \concept{type constraints} (or \concept{type
-annotations}) to a variable or term.  The syntax is \isa{t\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{5C3C7461753E}{\isasymtau}}} as in
-\mbox{\noquotes{\isa{{\isaliteral{22}{\isachardoublequote}}m\ {\isaliteral{3C}{\isacharless}}\ {\isaliteral{28}{\isacharparenleft}}n{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}}}. Type constraints may be
-needed to
-disambiguate terms involving overloaded functions such as \isa{{\isaliteral{2B}{\isacharplus}}}, \isa{{\isaliteral{2A}{\isacharasterisk}}} and \isa{{\isaliteral{5C3C6C653E}{\isasymle}}}.
-
-Finally there are the universal quantifier \isa{{\isaliteral{5C3C416E643E}{\isasymAnd}}} and the implication
-\isa{{\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}}. They are part of the Isabelle framework, not the logic
-HOL. Logically, they agree with their HOL counterparts \isa{{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}} and
-\isa{{\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}}, but operationally they behave differently. This will become
-clearer as we go along.
-\begin{warn}
-Right-arrows of all kinds always associate to the right. In particular,
-the formula
-\isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{2}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{3}}} means \isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ {\isaliteral{28}{\isacharparenleft}}A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{2}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{3}}{\isaliteral{29}{\isacharparenright}}}.
-The (Isabelle specific) notation \mbox{\isa{{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{3B}{\isacharsemicolon}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}{\isaliteral{3B}{\isacharsemicolon}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub n\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A}}
-is short for the iterated implication \mbox{\isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A}}.
-Sometimes we also employ inference rule notation:
-\inferrule{\mbox{\isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}}}\\ \mbox{\isa{{\isaliteral{5C3C646F74733E}{\isasymdots}}}}\\ \mbox{\isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub n}}}
-{\mbox{\isa{A}}}
-\end{warn}
-
-
-\subsection{Theories}
-\label{sec:Basic:Theories}
-
-Roughly speaking, a \concept{theory} is a named collection of types,
-functions, and theorems, much like a module in a programming language.
-All the Isabelle text that you ever type needs to go into a theory.
-The general format of a theory \isa{T} is
-\begin{quote}
-\isacom{theory} \isa{T}\\
-\isacom{imports} \isa{T\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ T\isaliteral{5C3C5E697375623E}{}\isactrlisub n}\\
-\isacom{begin}\\
-\emph{definitions, theorems and proofs}\\
-\isacom{end}
-\end{quote}
-where \isa{T\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ T\isaliteral{5C3C5E697375623E}{}\isactrlisub n} are the names of existing
-theories that \isa{T} is based on. The \isa{T\isaliteral{5C3C5E697375623E}{}\isactrlisub i} are the
-direct \concept{parent theories} of \isa{T}.
-Everything defined in the parent theories (and their parents, recursively) is
-automatically visible. Each theory \isa{T} must
-reside in a \concept{theory file} named \isa{T{\isaliteral{2E}{\isachardot}}thy}.
-
-\begin{warn}
-HOL contains a theory \isa{Main}, the union of all the basic
-predefined theories like arithmetic, lists, sets, etc.
-Unless you know what you are doing, always include \isa{Main}
-as a direct or indirect parent of all your theories.
-\end{warn}
-
-In addition to the theories that come with the Isabelle/HOL distribution
-(see \url{http://isabelle.in.tum.de/library/HOL/})
-there is also the \emph{Archive of Formal Proofs}
-at  \url{http://afp.sourceforge.net}, a growing collection of Isabelle theories
-that everybody can contribute to.
-
-\subsection{Quotation Marks}
-
-The textual definition of a theory follows a fixed syntax with keywords like
-\isacommand{begin} and \isacommand{datatype}.  Embedded in this syntax are
-the types and formulae of HOL.  To distinguish the two levels, everything
-HOL-specific (terms and types) must be enclosed in quotation marks:
-\texttt{"}\dots\texttt{"}. To lessen this burden, quotation marks around a
-single identifier can be dropped.  When Isabelle prints a syntax error
-message, it refers to the HOL syntax as the \concept{inner syntax} and the
-enclosing theory language as the \concept{outer syntax}.
-\sem
-\begin{warn}
-For reasons of readability, we almost never show the quotation marks in this
-book. Consult the accompanying theory files to see where they need to go.
-\end{warn}
-\endsem
-%%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\isatagtheory
-%
-\endisatagtheory
-{\isafoldtheory}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-\end{isabellebody}%
-%%% Local Variables:
-%%% mode: latex
-%%% TeX-master: "root"
-%%% End:
--- a/doc-src/ProgProve/Thys/document/Bool_nat_list.tex	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,606 +0,0 @@
-%
-\begin{isabellebody}%
-\def\isabellecontext{Bool{\isaliteral{5F}{\isacharunderscore}}nat{\isaliteral{5F}{\isacharunderscore}}list}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\isatagtheory
-%
-\endisatagtheory
-{\isafoldtheory}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\begin{isamarkuptext}%
-\vspace{-4ex}
-\section{\texorpdfstring{Types \isa{bool}, \isa{nat} and \isa{list}}{Types bool, nat and list}}
-
-These are the most important predefined types. We go through them one by one.
-Based on examples we learn how to define (possibly recursive) functions and
-prove theorems about them by induction and simplification.
-
-\subsection{Type \isa{bool}}
-
-The type of boolean values is a predefined datatype
-\begin{isabelle}%
-\isacommand{datatype}\ bool\ {\isaliteral{3D}{\isacharequal}}\ True\ {\isaliteral{7C}{\isacharbar}}\ False%
-\end{isabelle}
-with the two values \isa{True} and \isa{False} and
-with many predefined functions:  \isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}}, \isa{{\isaliteral{5C3C616E643E}{\isasymand}}}, \isa{{\isaliteral{5C3C6F723E}{\isasymor}}}, \isa{{\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}} etc. Here is how conjunction could be defined by pattern matching:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{fun}\isamarkupfalse%
-\ conj\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}bool\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ bool\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ bool{\isaliteral{22}{\isachardoublequoteclose}}\ \isakeyword{where}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}conj\ True\ True\ {\isaliteral{3D}{\isacharequal}}\ True{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}conj\ {\isaliteral{5F}{\isacharunderscore}}\ {\isaliteral{5F}{\isacharunderscore}}\ {\isaliteral{3D}{\isacharequal}}\ False{\isaliteral{22}{\isachardoublequoteclose}}%
-\begin{isamarkuptext}%
-Both the datatype and function definitions roughly follow the syntax
-of functional programming languages.
-
-\subsection{Type \isa{nat}}
-
-Natural numbers are another predefined datatype:
-\begin{isabelle}%
-\isacommand{datatype}\ nat\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}\ {\isaliteral{7C}{\isacharbar}}\ Suc\ nat%
-\end{isabelle}
-All values of type \isa{nat} are generated by the constructors
-\isa{{\isadigit{0}}} and \isa{Suc}. Thus the values of type \isa{nat} are
-\isa{{\isadigit{0}}}, \isa{Suc\ {\isadigit{0}}}, \isa{Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}} etc.
-There are many predefined functions: \isa{{\isaliteral{2B}{\isacharplus}}}, \isa{{\isaliteral{2A}{\isacharasterisk}}}, \isa{{\isaliteral{5C3C6C653E}{\isasymle}}}, etc. Here is how you could define your own addition:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{fun}\isamarkupfalse%
-\ add\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}nat\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ nat\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ nat{\isaliteral{22}{\isachardoublequoteclose}}\ \isakeyword{where}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}add\ {\isadigit{0}}\ n\ {\isaliteral{3D}{\isacharequal}}\ n{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}add\ {\isaliteral{28}{\isacharparenleft}}Suc\ m{\isaliteral{29}{\isacharparenright}}\ n\ {\isaliteral{3D}{\isacharequal}}\ Suc{\isaliteral{28}{\isacharparenleft}}add\ m\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}%
-\begin{isamarkuptext}%
-And here is a proof of the fact that \isa{add\ m\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ m}:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ add{\isaliteral{5F}{\isacharunderscore}}{\isadigit{0}}{\isadigit{2}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}add\ m\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ m{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}induction\ m{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{done}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-%
-\begin{isamarkuptxt}%
-The \isacom{lemma} command starts the proof and gives the lemma
-a name, \isa{add{\isaliteral{5F}{\isacharunderscore}}{\isadigit{0}}{\isadigit{2}}}. Properties of recursively defined functions
-need to be established by induction in most cases.
-Command \isacom{apply}\isa{{\isaliteral{28}{\isacharparenleft}}induction\ m{\isaliteral{29}{\isacharparenright}}} instructs Isabelle to
-start a proof by induction on \isa{m}. In response, it will show the
-following proof state:
-\begin{isabelle}%
-\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ add\ {\isadigit{0}}\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}\isanewline
-\ {\isadigit{2}}{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C416E643E}{\isasymAnd}}m{\isaliteral{2E}{\isachardot}}\ add\ m\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ m\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ add\ {\isaliteral{28}{\isacharparenleft}}Suc\ m{\isaliteral{29}{\isacharparenright}}\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ Suc\ m%
-\end{isabelle}
-The numbered lines are known as \emph{subgoals}.
-The first subgoal is the base case, the second one the induction step.
-The prefix \isa{{\isaliteral{5C3C416E643E}{\isasymAnd}}m{\isaliteral{2E}{\isachardot}}} is Isabelle's way of saying ``for an arbitrary but fixed \isa{m}''. The \isa{{\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}} separates assumptions from the conclusion.
-The command \isacom{apply}\isa{{\isaliteral{28}{\isacharparenleft}}auto{\isaliteral{29}{\isacharparenright}}} instructs Isabelle to try
-and prove all subgoals automatically, essentially by simplifying them.
-Because both subgoals are easy, Isabelle can do it.
-The base case \isa{add\ {\isadigit{0}}\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}} holds by definition of \isa{add},
-and the induction step is almost as simple:
-\isa{add~{\isaliteral{28}{\isacharparenleft}}Suc\ m{\isaliteral{29}{\isacharparenright}}\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ Suc{\isaliteral{28}{\isacharparenleft}}add\ m\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ Suc\ m}
-using first the definition of \isa{add} and then the induction hypothesis.
-In summary, both subproofs rely on simplification with function definitions and
-the induction hypothesis.
-As a result of that final \isacom{done}, Isabelle associates the lemma
-just proved with its name. You can now inspect the lemma with the command%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-\isacommand{thm}\isamarkupfalse%
-\ add{\isaliteral{5F}{\isacharunderscore}}{\isadigit{0}}{\isadigit{2}}%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-%
-\begin{isamarkuptxt}%
-which displays \begin{isabelle}%
-add\ {\isaliteral{3F}{\isacharquery}}m\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{3F}{\isacharquery}}m%
-\end{isabelle} The free
-variable \isa{m} has been replaced by the \concept{unknown}
-\isa{{\isaliteral{3F}{\isacharquery}}m}. There is no logical difference between the two but an
-operational one: unknowns can be instantiated, which is what you want after
-some lemma has been proved.
-
-Note that there is also a proof method \isa{induct}, which behaves almost
-like \isa{induction}; the difference is explained in \autoref{ch:Isar}.
-
-\begin{warn}
-Terminology: We use \concept{lemma}, \concept{theorem} and \concept{rule}
-interchangeably for propositions that have been proved.
-\end{warn}
-\begin{warn}
-  Numerals (\isa{{\isadigit{0}}}, \isa{{\isadigit{1}}}, \isa{{\isadigit{2}}}, \dots) and most of the standard
-  arithmetic operations (\isa{{\isaliteral{2B}{\isacharplus}}}, \isa{{\isaliteral{2D}{\isacharminus}}}, \isa{{\isaliteral{2A}{\isacharasterisk}}}, \isa{{\isaliteral{5C3C6C653E}{\isasymle}}},
-  \isa{{\isaliteral{3C}{\isacharless}}} etc) are overloaded: they are available
-  not just for natural numbers but for other types as well.
-  For example, given the goal \isa{x\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ x}, there is nothing to indicate
-  that you are talking about natural numbers. Hence Isabelle can only infer
-  that \isa{x} is of some arbitrary type where \isa{{\isadigit{0}}} and \isa{{\isaliteral{2B}{\isacharplus}}}
-  exist. As a consequence, you will be unable to prove the
-  goal. To alert you to such pitfalls, Isabelle flags numerals without a
-  fixed type in its output: \isa{x\ {\isaliteral{2B}{\isacharplus}}\ {\isaliteral{28}{\isacharparenleft}}{\isadigit{0}}{\isaliteral{5C3C436F6C6F6E3E}{\isasymColon}}{\isaliteral{27}{\isacharprime}}a{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ x}.  In this particular example,
-  you need to include
-  an explicit type constraint, for example \isa{x{\isaliteral{2B}{\isacharplus}}{\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{28}{\isacharparenleft}}x{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{29}{\isacharparenright}}} (see the sketch below). If there
-  is enough contextual information this may not be necessary: \isa{Suc\ x\ {\isaliteral{3D}{\isacharequal}}\ x} automatically implies \isa{x{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat} because \isa{Suc} is not
-  overloaded.
-\end{warn}
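-
-As a minimal sketch of the fix just mentioned (this lemma is not part of the
-original theory text and assumes only the standard arithmetic simp rules for
-\isa{nat}), the type-constrained goal goes through directly:
-\begin{isabelle}
-\isacom{lemma} \isa{"x + 0 = (x::nat)"}\\
-\isacom{apply}\isa{(auto)}\\
-\isacom{done}
-\end{isabelle}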
-
-\subsubsection{An informal proof}
-
-Above we gave some terse informal explanation of the proof of
-\isa{add\ m\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ m}. A more detailed informal exposition of the lemma
-might look like this:
-\bigskip
-
-\noindent
-\textbf{Lemma} \isa{add\ m\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ m}
-
-\noindent
-\textbf{Proof} by induction on \isa{m}.
-\begin{itemize}
-\item Case \isa{{\isadigit{0}}} (the base case): \isa{add\ {\isadigit{0}}\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}}
-  holds by definition of \isa{add}.
-\item Case \isa{Suc\ m} (the induction step):
-  We assume \isa{add\ m\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ m}, the induction hypothesis (IH),
-  and we need to show \isa{add\ {\isaliteral{28}{\isacharparenleft}}Suc\ m{\isaliteral{29}{\isacharparenright}}\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ Suc\ m}.
-  The proof is as follows:\smallskip
-
-  \begin{tabular}{@ {}rcl@ {\quad}l@ {}}
-  \isa{add\ {\isaliteral{28}{\isacharparenleft}}Suc\ m{\isaliteral{29}{\isacharparenright}}\ {\isadigit{0}}} &\isa{{\isaliteral{3D}{\isacharequal}}}& \isa{Suc\ {\isaliteral{28}{\isacharparenleft}}add\ m\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}}
-  & by definition of \isa{add}\\
-              &\isa{{\isaliteral{3D}{\isacharequal}}}& \isa{Suc\ m} & by IH
-  \end{tabular}
-\end{itemize}
-Throughout this book, \concept{IH} will stand for ``induction hypothesis''.
-
-We have now seen three proofs of \isa{add\ m\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ m}: the Isabelle one, the
-terse four lines explaining the base case and the induction step, and just now a
-model of a traditional inductive proof. The three proofs differ in the level
-of detail given and the intended reader: the Isabelle proof is for the
-machine, the informal proofs are for humans. Although this book concentrates
-on Isabelle proofs, it is important to be able to rephrase those proofs
-as informal text comprehensible to a reader familiar with traditional
-mathematical proofs. Later on we will introduce an Isabelle proof language
-that is closer to traditional informal mathematical language and is often
-directly readable.
-
-\subsection{Type \isa{list}}
-
-Although lists are already predefined, we define our own copy just for
-demonstration purposes:%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-\isacommand{datatype}\isamarkupfalse%
-\ {\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{3D}{\isacharequal}}\ Nil\ {\isaliteral{7C}{\isacharbar}}\ Cons\ {\isaliteral{27}{\isacharprime}}a\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{27}{\isacharprime}}a\ list{\isaliteral{22}{\isachardoublequoteclose}}%
-\begin{isamarkuptext}%
-\begin{itemize}
-\item Type \isa{{\isaliteral{27}{\isacharprime}}a\ list} is the type of lists over elements of type \isa{{\isaliteral{27}{\isacharprime}}a}. Because \isa{{\isaliteral{27}{\isacharprime}}a} is a type variable, lists are in fact \concept{polymorphic}: the elements of a list can be of arbitrary type (but must all be of the same type).
-\item Lists have two constructors: \isa{Nil}, the empty list, and \isa{Cons}, which puts an element (of type \isa{{\isaliteral{27}{\isacharprime}}a}) in front of a list (of type \isa{{\isaliteral{27}{\isacharprime}}a\ list}).
-Hence all lists are of the form \isa{Nil}, or \isa{Cons\ x\ Nil},
-or \isa{Cons\ x\ {\isaliteral{28}{\isacharparenleft}}Cons\ y\ Nil{\isaliteral{29}{\isacharparenright}}} etc.
-\item \isacom{datatype} requires no quotation marks on the
-left-hand side, but on the right-hand side each of the argument
-types of a constructor needs to be enclosed in quotation marks, unless
-it is just an identifier (e.g.\ \isa{nat} or \isa{{\isaliteral{27}{\isacharprime}}a}).
-\end{itemize}
-We also define two standard functions, append and reverse:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{fun}\isamarkupfalse%
-\ app\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ list{\isaliteral{22}{\isachardoublequoteclose}}\ \isakeyword{where}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}app\ Nil\ ys\ {\isaliteral{3D}{\isacharequal}}\ ys{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}app\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ xs{\isaliteral{29}{\isacharparenright}}\ ys\ {\isaliteral{3D}{\isacharequal}}\ Cons\ x\ {\isaliteral{28}{\isacharparenleft}}app\ xs\ ys{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isanewline
-\isacommand{fun}\isamarkupfalse%
-\ rev\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ list{\isaliteral{22}{\isachardoublequoteclose}}\ \isakeyword{where}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}rev\ Nil\ {\isaliteral{3D}{\isacharequal}}\ Nil{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}rev\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ app\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ Nil{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}%
-\begin{isamarkuptext}%
-By default, variables \isa{xs}, \isa{ys} and \isa{zs} are of
-\isa{list} type.
-
-Command \isacom{value} evaluates a term. For example,%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{value}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}rev{\isaliteral{28}{\isacharparenleft}}Cons\ True\ {\isaliteral{28}{\isacharparenleft}}Cons\ False\ Nil{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}%
-\begin{isamarkuptext}%
-yields the result \isa{Cons\ False\ {\isaliteral{28}{\isacharparenleft}}Cons\ True\ Nil{\isaliteral{29}{\isacharparenright}}}. This works symbolically, too:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{value}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}rev{\isaliteral{28}{\isacharparenleft}}Cons\ a\ {\isaliteral{28}{\isacharparenleft}}Cons\ b\ Nil{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}%
-\begin{isamarkuptext}%
-yields \isa{Cons\ b\ {\isaliteral{28}{\isacharparenleft}}Cons\ a\ Nil{\isaliteral{29}{\isacharparenright}}}.
-\medskip
-
-Figure~\ref{fig:MyList} shows the theory created so far.
-Because \isa{list}, \isa{Nil}, \isa{Cons} etc are already predefined,
- Isabelle prints qualified (long) names when executing this theory, for example, \isa{MyList{\isaliteral{2E}{\isachardot}}Nil}
- instead of \isa{Nil}.
- To suppress the qualified names you can insert the command
- \texttt{declare [[names\_short]]}.
- This is not recommended in general but just for this unusual example.
-% Notice where the
-%quotations marks are needed that we mostly sweep under the carpet.  In
-%particular, notice that \isacom{datatype} requires no quotation marks on the
-%left-hand side, but that on the right-hand side each of the argument
-%types of a constructor needs to be enclosed in quotation marks.
-
-\begin{figure}[htbp]
-\begin{alltt}
-\input{Thys/MyList.thy}\end{alltt}
-\caption{A Theory of Lists}
-\label{fig:MyList}
-\end{figure}
-
-\subsubsection{Structural Induction for Lists}
-
-Just as for natural numbers, there is a proof principle of induction for
-lists. Induction over a list is essentially induction over the length of
-the list, although the length remains implicit. To prove that some property
-\isa{P} holds for all lists \isa{xs}, i.e.\ \mbox{\isa{P\ xs}},
-you need to prove
-\begin{enumerate}
-\item the base case \isa{P\ Nil} and
-\item the inductive case \isa{P\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ xs{\isaliteral{29}{\isacharparenright}}} under the assumption \isa{P\ xs}, for some arbitrary but fixed \isa{x} and \isa{xs}.
-\end{enumerate}
-This is often called \concept{structural induction}.
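-For illustration only (a schematic sketch rather than a lemma from the
-theory: \isa{P} stands for an arbitrary property and the attempt is
-abandoned with \isacom{oops}), inducting on a variable of the list type
-just defined produces exactly these two subgoals:
-\begin{isabelle}
-\isacom{lemma} \isa{"P (xs::'a list)"}\\
-\isacom{apply}\isa{(induction xs)}\\
-\isa{(* 1. P Nil    2. !!x xs. P xs ==> P (Cons x xs) *)}\\
-\isacom{oops}
-\end{isabelle}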
-
-\subsection{The Proof Process}
-
-We will now demonstrate the typical proof process, which involves
-the formulation and proof of auxiliary lemmas.
-Our goal is to show that reversing a list twice produces the original
-list.%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{theorem}\isamarkupfalse%
-\ rev{\isaliteral{5F}{\isacharunderscore}}rev\ {\isaliteral{5B}{\isacharbrackleft}}simp{\isaliteral{5D}{\isacharbrackright}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}rev{\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ xs{\isaliteral{22}{\isachardoublequoteclose}}%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-%
-\begin{isamarkuptxt}%
-Commands \isacom{theorem} and \isacom{lemma} are
-interchangeable and merely indicate the importance we attach to a
-proposition. Via the bracketed attribute \isa{simp} we also tell Isabelle
-to make the eventual theorem a \concept{simplification rule}: future proofs
-involving simplification will replace occurrences of \isa{rev\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}} by
-\isa{xs}. The proof is by induction:%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}induction\ xs{\isaliteral{29}{\isacharparenright}}%
-\begin{isamarkuptxt}%
-As explained above, we obtain two subgoals, namely the base case (\isa{Nil}) and the induction step (\isa{Cons}):
-\begin{isabelle}%
-\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ rev\ {\isaliteral{28}{\isacharparenleft}}rev\ Nil{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ Nil\isanewline
-\ {\isadigit{2}}{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C416E643E}{\isasymAnd}}a\ xs{\isaliteral{2E}{\isachardot}}\ rev\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ xs\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ rev\ {\isaliteral{28}{\isacharparenleft}}rev\ {\isaliteral{28}{\isacharparenleft}}Cons\ a\ xs{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ Cons\ a\ xs%
-\end{isabelle}
-Let us try to solve both goals automatically:%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto{\isaliteral{29}{\isacharparenright}}%
-\begin{isamarkuptxt}%
-Subgoal~1 is proved and disappears; the simplified version
-of subgoal~2 becomes the new subgoal~1:
-\begin{isabelle}%
-\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C416E643E}{\isasymAnd}}a\ xs{\isaliteral{2E}{\isachardot}}\isanewline
-\isaindent{\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ \ \ \ }rev\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ xs\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\isanewline
-\isaindent{\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ \ \ \ }rev\ {\isaliteral{28}{\isacharparenleft}}app\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}Cons\ a\ Nil{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ Cons\ a\ xs%
-\end{isabelle}
-In order to simplify this subgoal further, a lemma suggests itself.
-
-\subsubsection{A First Lemma}
-
-We insert the following lemma in front of the main theorem:%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-\isacommand{lemma}\isamarkupfalse%
-\ rev{\isaliteral{5F}{\isacharunderscore}}app\ {\isaliteral{5B}{\isacharbrackleft}}simp{\isaliteral{5D}{\isacharbrackright}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}rev{\isaliteral{28}{\isacharparenleft}}app\ xs\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ app\ {\isaliteral{28}{\isacharparenleft}}rev\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-%
-\begin{isamarkuptxt}%
-There are two variables that we could induct on: \isa{xs} and
-\isa{ys}. Because \isa{app} is defined by recursion on
-the first argument, \isa{xs} is the correct one:%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}induction\ xs{\isaliteral{29}{\isacharparenright}}%
-\begin{isamarkuptxt}%
-This time not even the base case is solved automatically:%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto{\isaliteral{29}{\isacharparenright}}%
-\begin{isamarkuptxt}%
-\vspace{-5ex}
-\begin{isabelle}%
-\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ rev\ ys\ {\isaliteral{3D}{\isacharequal}}\ app\ {\isaliteral{28}{\isacharparenleft}}rev\ ys{\isaliteral{29}{\isacharparenright}}\ Nil%
-\end{isabelle}
-Again, we need to abandon this proof attempt and prove another simple lemma
-first.
-
-\subsubsection{A Second Lemma}
-
-We again try the canonical proof procedure:%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-\isacommand{lemma}\isamarkupfalse%
-\ app{\isaliteral{5F}{\isacharunderscore}}Nil{\isadigit{2}}\ {\isaliteral{5B}{\isacharbrackleft}}simp{\isaliteral{5D}{\isacharbrackright}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}app\ xs\ Nil\ {\isaliteral{3D}{\isacharequal}}\ xs{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}induction\ xs{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{done}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Thankfully, this worked.
-Now we can continue with our stuck proof attempt of the first lemma:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ rev{\isaliteral{5F}{\isacharunderscore}}app\ {\isaliteral{5B}{\isacharbrackleft}}simp{\isaliteral{5D}{\isacharbrackright}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}rev{\isaliteral{28}{\isacharparenleft}}app\ xs\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ app\ {\isaliteral{28}{\isacharparenleft}}rev\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}induction\ xs{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto{\isaliteral{29}{\isacharparenright}}%
-\begin{isamarkuptxt}%
-We find that this time \isa{auto} solves the base case, but the
-induction step merely simplifies to
-\begin{isabelle}%
-\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C416E643E}{\isasymAnd}}a\ xs{\isaliteral{2E}{\isachardot}}\isanewline
-\isaindent{\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ \ \ \ }rev\ {\isaliteral{28}{\isacharparenleft}}app\ xs\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ app\ {\isaliteral{28}{\isacharparenleft}}rev\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\isanewline
-\isaindent{\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ \ \ \ }app\ {\isaliteral{28}{\isacharparenleft}}app\ {\isaliteral{28}{\isacharparenleft}}rev\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}Cons\ a\ Nil{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\isanewline
-\isaindent{\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ \ \ \ }app\ {\isaliteral{28}{\isacharparenleft}}rev\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}app\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}Cons\ a\ Nil{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}%
-\end{isabelle}
-The missing lemma is associativity of \isa{app},
-which we insert in front of the failed lemma \isa{rev{\isaliteral{5F}{\isacharunderscore}}app}.
-
-\subsubsection{Associativity of \isa{app}}
-
-The canonical proof procedure succeeds without further ado:%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-\isacommand{lemma}\isamarkupfalse%
-\ app{\isaliteral{5F}{\isacharunderscore}}assoc\ {\isaliteral{5B}{\isacharbrackleft}}simp{\isaliteral{5D}{\isacharbrackright}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}app\ {\isaliteral{28}{\isacharparenleft}}app\ xs\ ys{\isaliteral{29}{\isacharparenright}}\ zs\ {\isaliteral{3D}{\isacharequal}}\ app\ xs\ {\isaliteral{28}{\isacharparenleft}}app\ ys\ zs{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}induction\ xs{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{done}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Finally the proofs of \isa{rev{\isaliteral{5F}{\isacharunderscore}}app} and \isa{rev{\isaliteral{5F}{\isacharunderscore}}rev}
-succeed, too.
-
-\subsubsection{Another informal proof}
-
-Here is the informal proof of associativity of \isa{app}
-corresponding to the Isabelle proof above.
-\bigskip
-
-\noindent
-\textbf{Lemma} \isa{app\ {\isaliteral{28}{\isacharparenleft}}app\ xs\ ys{\isaliteral{29}{\isacharparenright}}\ zs\ {\isaliteral{3D}{\isacharequal}}\ app\ xs\ {\isaliteral{28}{\isacharparenleft}}app\ ys\ zs{\isaliteral{29}{\isacharparenright}}}
-
-\noindent
-\textbf{Proof} by induction on \isa{xs}.
-\begin{itemize}
-\item Case \isa{Nil}: \ \isa{app\ {\isaliteral{28}{\isacharparenleft}}app\ Nil\ ys{\isaliteral{29}{\isacharparenright}}\ zs\ {\isaliteral{3D}{\isacharequal}}\ app\ ys\ zs} \isa{{\isaliteral{3D}{\isacharequal}}}
-  \mbox{\isa{app\ Nil\ {\isaliteral{28}{\isacharparenleft}}app\ ys\ zs{\isaliteral{29}{\isacharparenright}}}} \ holds by definition of \isa{app}.
-\item Case \isa{Cons\ x\ xs}: We assume
-  \begin{center} \hfill \isa{app\ {\isaliteral{28}{\isacharparenleft}}app\ xs\ ys{\isaliteral{29}{\isacharparenright}}\ zs} \isa{{\isaliteral{3D}{\isacharequal}}}
-  \isa{app\ xs\ {\isaliteral{28}{\isacharparenleft}}app\ ys\ zs{\isaliteral{29}{\isacharparenright}}} \hfill (IH) \end{center}
-  and we need to show
-  \begin{center} \isa{app\ {\isaliteral{28}{\isacharparenleft}}app\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ xs{\isaliteral{29}{\isacharparenright}}\ ys{\isaliteral{29}{\isacharparenright}}\ zs\ {\isaliteral{3D}{\isacharequal}}\ app\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}app\ ys\ zs{\isaliteral{29}{\isacharparenright}}}.\end{center}
-  The proof is as follows:\smallskip
-
-  \begin{tabular}{@ {}l@ {\quad}l@ {}}
-  \isa{app\ {\isaliteral{28}{\isacharparenleft}}app\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ xs{\isaliteral{29}{\isacharparenright}}\ ys{\isaliteral{29}{\isacharparenright}}\ zs}\\
-  \isa{{\isaliteral{3D}{\isacharequal}}\ app\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ {\isaliteral{28}{\isacharparenleft}}app\ xs\ ys{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}\ zs} & by definition of \isa{app}\\
-  \isa{{\isaliteral{3D}{\isacharequal}}\ Cons\ x\ {\isaliteral{28}{\isacharparenleft}}app\ {\isaliteral{28}{\isacharparenleft}}app\ xs\ ys{\isaliteral{29}{\isacharparenright}}\ zs{\isaliteral{29}{\isacharparenright}}} & by definition of \isa{app}\\
-  \isa{{\isaliteral{3D}{\isacharequal}}\ Cons\ x\ {\isaliteral{28}{\isacharparenleft}}app\ xs\ {\isaliteral{28}{\isacharparenleft}}app\ ys\ zs{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}} & by IH\\
-  \isa{{\isaliteral{3D}{\isacharequal}}\ app\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}app\ ys\ zs{\isaliteral{29}{\isacharparenright}}} & by definition of \isa{app}
-  \end{tabular}
-\end{itemize}
-\medskip
-
-\noindent Didn't we say earlier that all proofs are by simplification? But
-in both cases, going from left to right, the last equality step is not a
-simplification at all! In the base case it is \isa{app\ ys\ zs\ {\isaliteral{3D}{\isacharequal}}\ app\ Nil\ {\isaliteral{28}{\isacharparenleft}}app\ ys\ zs{\isaliteral{29}{\isacharparenright}}}. It appears almost mysterious because we suddenly complicate the
-term by appending \isa{Nil} on the left. What is really going on is this:
-when proving some equality \mbox{\isa{s\ {\isaliteral{3D}{\isacharequal}}\ t}}, both \isa{s} and \isa{t} are
-simplified to some common term \isa{u}.  This heuristic for equality proofs
-works well for a functional programming context like ours. In the base case
-\isa{s} is \isa{app\ {\isaliteral{28}{\isacharparenleft}}app\ Nil\ ys{\isaliteral{29}{\isacharparenright}}\ zs}, \isa{t} is \isa{app\ Nil\ {\isaliteral{28}{\isacharparenleft}}app\ ys\ zs{\isaliteral{29}{\isacharparenright}}}, and \isa{u} is \isa{app\ ys\ zs}.
-
-\subsection{Predefined lists}
-\label{sec:predeflists}
-
-Isabelle's predefined lists are the same as the ones above, but with
-more syntactic sugar:
-\begin{itemize}
-\item \isa{{\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}} is \isa{Nil},
-\item \isa{x\ {\isaliteral{23}{\isacharhash}}\ xs} is \isa{Cons\ x\ xs},
-\item \isa{{\isaliteral{5B}{\isacharbrackleft}}x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{2C}{\isacharcomma}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}{\isaliteral{2C}{\isacharcomma}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{5D}{\isacharbrackright}}} is \isa{x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{23}{\isacharhash}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ {\isaliteral{23}{\isacharhash}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n\ {\isaliteral{23}{\isacharhash}}\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}}, and
-\item \isa{xs\ {\isaliteral{40}{\isacharat}}\ ys} is \isa{app\ xs\ ys}.
-\end{itemize}
-There is also a large library of predefined functions.
-The most important ones are the length function
-\isa{length\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ nat} (with the obvious definition),
-and the map function that applies a function to all elements of a list:
-\begin{isabelle}
-\isacom{fun} \isa{map} \isa{{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}} \isa{{\isaliteral{22}{\isachardoublequote}}{\isaliteral{28}{\isacharparenleft}}{\isaliteral{27}{\isacharprime}}a\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}b{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}b\ list{\isaliteral{22}{\isachardoublequote}}}\\
-\isa{{\isaliteral{22}{\isachardoublequote}}}\isa{map\ f\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}}\isa{{\isaliteral{22}{\isachardoublequote}}\ {\isaliteral{7C}{\isacharbar}}}\\
-\isa{{\isaliteral{22}{\isachardoublequote}}}\isa{map\ f\ {\isaliteral{28}{\isacharparenleft}}x\ {\isaliteral{23}{\isacharhash}}\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ f\ x\ {\isaliteral{23}{\isacharhash}}\ map\ f\ xs}\isa{{\isaliteral{22}{\isachardoublequote}}}
-\end{isabelle}
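-As a quick check (a sketch that assumes the predefined list library just
-described), one can evaluate an application of \isa{map}:
-\begin{isabelle}
-\isacom{value} \isa{"map Suc [1, 2, 3]"}
-\end{isabelle}
-which should yield \isa{[2, 3, 4]}.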
-\sem
-Also useful are the \concept{head} of a list, its first element,
-and the \concept{tail}, the rest of the list:
-\begin{isabelle}
-\isacom{fun} \isa{hd\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a}\\
-\isa{hd\ {\isaliteral{28}{\isacharparenleft}}x\ {\isaliteral{23}{\isacharhash}}\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ x}
-\end{isabelle}
-\begin{isabelle}
-\isacom{fun} \isa{tl\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ list}\\
-\isa{tl\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}} \isa{{\isaliteral{7C}{\isacharbar}}}\\
-\isa{tl\ {\isaliteral{28}{\isacharparenleft}}x\ {\isaliteral{23}{\isacharhash}}\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ xs}
-\end{isabelle}
-Note that since HOL is a logic of total functions, \isa{hd\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}} is defined,
-but we do not know what the result is. That is, \isa{hd\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}} is not undefined
-but underdefined.
-\endsem
-%%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\isatagtheory
-%
-\endisatagtheory
-{\isafoldtheory}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-\end{isabellebody}%
-%%% Local Variables:
-%%% mode: latex
-%%% TeX-master: "root"
-%%% End:
--- a/doc-src/ProgProve/Thys/document/Isar.tex	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,1682 +0,0 @@
-%
-\begin{isabellebody}%
-\def\isabellecontext{Isar}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\isatagtheory
-%
-\endisatagtheory
-{\isafoldtheory}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\isadelimML
-%
-\endisadelimML
-%
-\isatagML
-%
-\endisatagML
-{\isafoldML}%
-%
-\isadelimML
-%
-\endisadelimML
-%
-\begin{isamarkuptext}%
-Apply-scripts are unreadable and hard to maintain. The language of choice
-for larger proofs is \concept{Isar}. The two key features of Isar are:
-\begin{itemize}
-\item It is structured, not linear.
-\item It is readable without running it because
-you need to state what you are proving at any given point.
-\end{itemize}
-Whereas apply-scripts are like assembly language programs, Isar proofs
-are like structured programs with comments. A typical Isar proof looks like this:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\begin{isamarkuptext}%
-\begin{tabular}{@ {}l}
-\isacom{proof}\\
-\quad\isacom{assume} \isa{{\isaliteral{22}{\isachardoublequote}}}$\mathit{formula}_0$\isa{{\isaliteral{22}{\isachardoublequote}}}\\
-\quad\isacom{have} \isa{{\isaliteral{22}{\isachardoublequote}}}$\mathit{formula}_1$\isa{{\isaliteral{22}{\isachardoublequote}}} \quad\isacom{by} \isa{simp}\\
-\quad\vdots\\
-\quad\isacom{have} \isa{{\isaliteral{22}{\isachardoublequote}}}$\mathit{formula}_n$\isa{{\isaliteral{22}{\isachardoublequote}}} \quad\isacom{by} \isa{blast}\\
-\quad\isacom{show} \isa{{\isaliteral{22}{\isachardoublequote}}}$\mathit{formula}_{n+1}$\isa{{\isaliteral{22}{\isachardoublequote}}} \quad\isacom{by} \isa{{\isaliteral{5C3C646F74733E}{\isasymdots}}}\\
-\isacom{qed}
-\end{tabular}%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\begin{isamarkuptext}%
-It proves $\mathit{formula}_0 \Longrightarrow \mathit{formula}_{n+1}$
-(provided each proof step succeeds).
-The intermediate \isacom{have} statements are merely stepping stones
-on the way towards the \isacom{show} statement that proves the actual
-goal. In more detail, this is the Isar core syntax:
-\medskip
-
-\begin{tabular}{@ {}lcl@ {}}
-\textit{proof} &=& \isacom{by} \textit{method}\\
-      &$\mid$& \isacom{proof} [\textit{method}] \ \textit{step}$^*$ \ \isacom{qed}
-\end{tabular}
-\medskip
-
-\begin{tabular}{@ {}lcl@ {}}
-\textit{step} &=& \isacom{fix} \textit{variables} \\
-      &$\mid$& \isacom{assume} \textit{proposition} \\
-      &$\mid$& [\isacom{from} \textit{fact}$^+$] (\isacom{have} $\mid$ \isacom{show}) \ \textit{proposition} \ \textit{proof}
-\end{tabular}
-\medskip
-
-\begin{tabular}{@ {}lcl@ {}}
-\textit{proposition} &=& [\textit{name}:] \isa{{\isaliteral{22}{\isachardoublequote}}}\textit{formula}\isa{{\isaliteral{22}{\isachardoublequote}}}
-\end{tabular}
-\medskip
-
-\begin{tabular}{@ {}lcl@ {}}
-\textit{fact} &=& \textit{name} \ $\mid$ \ \dots
-\end{tabular}
-\medskip
-
-\noindent A proof can either be an atomic \isacom{by} with a single proof
-method which must finish off the statement being proved, for example \isa{auto}.  Or it can be a \isacom{proof}--\isacom{qed} block of multiple
-steps. Such a block can optionally begin with a proof method that indicates
-how to start off the proof, e.g.\ \mbox{\isa{{\isaliteral{28}{\isacharparenleft}}induction\ xs{\isaliteral{29}{\isacharparenright}}}}.
-
-A step either assumes a proposition or states a proposition
-together with its proof. The optional \isacom{from} clause
-indicates which facts are to be used in the proof.
-Intermediate propositions are stated with \isacom{have}, the overall goal
-with \isacom{show}. A step can also introduce new local variables with
-\isacom{fix}. Logically, \isacom{fix} introduces \isa{{\isaliteral{5C3C416E643E}{\isasymAnd}}}-quantified
-variables, \isacom{assume} introduces the assumption of an implication
-(\isa{{\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}}) and \isacom{have}/\isacom{show} the conclusion.
-
-Propositions are optionally named formulas. These names can be referred to in
-later \isacom{from} clauses. In the simplest case, a fact is such a name.
-But facts can also be composed with \isa{OF} and \isa{of} as shown in
-\S\ref{sec:forward-proof}---hence the \dots\ in the above grammar.  Note
-that assumptions, intermediate \isacom{have} statements and global lemmas all
-have the same status and are thus collectively referred to as
-\concept{facts}.
-
-Fact names can stand for whole lists of facts. For example, if \isa{f} is
-defined by command \isacom{fun}, \isa{f{\isaliteral{2E}{\isachardot}}simps} refers to the whole list of
-recursion equations defining \isa{f}. Individual facts can be selected by
-writing \isa{f{\isaliteral{2E}{\isachardot}}simps{\isaliteral{28}{\isacharparenleft}}{\isadigit{2}}{\isaliteral{29}{\isacharparenright}}}, whole sublists by \isa{f{\isaliteral{2E}{\isachardot}}simps{\isaliteral{28}{\isacharparenleft}}{\isadigit{2}}{\isaliteral{2D}{\isacharminus}}{\isadigit{4}}{\isaliteral{29}{\isacharparenright}}}.
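-As a small sketch (assuming the function \isa{add} from the previous chapter
-is still in scope; this snippet is not part of the original text), selecting
-the second recursion equation looks like this:
-\begin{isabelle}
-\isacom{thm} \isa{add.simps(2)}
-\end{isabelle}
-which should display \isa{add (Suc ?m) ?n = Suc (add ?m ?n)}.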
-
-
-\section{Isar by example}
-
-We show a number of proofs of Cantor's theorem that a function from a set to
-its powerset cannot be surjective, illustrating various features of Isar. The
-constant \isa{surj} is predefined.%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ surj{\isaliteral{28}{\isacharparenleft}}f\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{27}{\isacharprime}}a\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ set{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isadigit{0}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}surj\ f{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isacommand{from}\isamarkupfalse%
-\ {\isadigit{0}}\ \isacommand{have}\isamarkupfalse%
-\ {\isadigit{1}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}A{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C6578697374733E}{\isasymexists}}a{\isaliteral{2E}{\isachardot}}\ A\ {\isaliteral{3D}{\isacharequal}}\ f\ a{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}simp\ add{\isaliteral{3A}{\isacharcolon}}\ surj{\isaliteral{5F}{\isacharunderscore}}def{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{from}\isamarkupfalse%
-\ {\isadigit{1}}\ \isacommand{have}\isamarkupfalse%
-\ {\isadigit{2}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}a{\isaliteral{2E}{\isachardot}}\ {\isaliteral{7B}{\isacharbraceleft}}x{\isaliteral{2E}{\isachardot}}\ x\ {\isaliteral{5C3C6E6F74696E3E}{\isasymnotin}}\ f\ x{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ f\ a{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ blast\isanewline
-\ \ \isacommand{from}\isamarkupfalse%
-\ {\isadigit{2}}\ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}False{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ blast\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-The \isacom{proof} command lacks an explicit method specifying how to perform
-the proof. In such cases Isabelle tries to use some standard introduction
-rule, in the above case for \isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}}:
-\[
-\inferrule{
-\mbox{\isa{P\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ False}}}
-{\mbox{\isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ P}}}
-\]
-In order to prove \isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ P}, assume \isa{P} and show \isa{False}.
-Thus we may assume \isa{surj\ f}. The proof shows that names of propositions
-may be (single!) digits---meaningful names are hard to invent and are often
-not necessary. Both \isacom{have} steps are obvious. The second one introduces
-the diagonal set \isa{{\isaliteral{7B}{\isacharbraceleft}}x{\isaliteral{2E}{\isachardot}}\ x\ {\isaliteral{5C3C6E6F74696E3E}{\isasymnotin}}\ f\ x{\isaliteral{7D}{\isacharbraceright}}}, the key idea in the proof.
-If you wonder why \isa{{\isadigit{2}}} directly implies \isa{False}: from \isa{{\isadigit{2}}}
-it follows that \isa{{\isaliteral{28}{\isacharparenleft}}a\ {\isaliteral{5C3C6E6F74696E3E}{\isasymnotin}}\ f\ a{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{28}{\isacharparenleft}}a\ {\isaliteral{5C3C696E3E}{\isasymin}}\ f\ a{\isaliteral{29}{\isacharparenright}}}, a contradiction.
-
-\subsection{\isa{this}, \isa{then}, \isa{hence} and \isa{thus}}
-
-Labels should be avoided. They interrupt the flow of the reader who has to
-scan the context for the point where the label was introduced. Ideally, the
-proof is a linear flow, where the output of one step becomes the input of the
-next step, piping the previously proved fact into the next proof, just like
-in a UNIX pipe. In such cases the predefined name \isa{this} can be used
-to refer to the proposition proved in the previous step. This allows us to
-eliminate all labels from our proof (we suppress the \isacom{lemma} statement):%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}surj\ f{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isacommand{from}\isamarkupfalse%
-\ this\ \isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}a{\isaliteral{2E}{\isachardot}}\ {\isaliteral{7B}{\isacharbraceleft}}x{\isaliteral{2E}{\isachardot}}\ x\ {\isaliteral{5C3C6E6F74696E3E}{\isasymnotin}}\ f\ x{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ f\ a{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto\ simp{\isaliteral{3A}{\isacharcolon}}\ surj{\isaliteral{5F}{\isacharunderscore}}def{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{from}\isamarkupfalse%
-\ this\ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}False{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ blast\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-We have also taken the opportunity to compress the two \isacom{have}
-steps into one.
-
-To compact the text further, Isar has a few convenient abbreviations:
-\medskip
-
-\begin{tabular}{rcl}
-\isacom{then} &=& \isacom{from} \isa{this}\\
-\isacom{thus} &=& \isacom{then} \isacom{show}\\
-\isacom{hence} &=& \isacom{then} \isacom{have}
-\end{tabular}
-\medskip
-
-\noindent
-With the help of these abbreviations the proof becomes%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}surj\ f{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isacommand{hence}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}a{\isaliteral{2E}{\isachardot}}\ {\isaliteral{7B}{\isacharbraceleft}}x{\isaliteral{2E}{\isachardot}}\ x\ {\isaliteral{5C3C6E6F74696E3E}{\isasymnotin}}\ f\ x{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ f\ a{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto\ simp{\isaliteral{3A}{\isacharcolon}}\ surj{\isaliteral{5F}{\isacharunderscore}}def{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}False{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ blast\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-There are two further linguistic variations:
-\medskip
-
-\begin{tabular}{rcl}
-(\isacom{have}$\mid$\isacom{show}) \ \textit{prop} \ \isacom{using} \ \textit{facts}
-&=&
-\isacom{from} \ \textit{facts} \ (\isacom{have}$\mid$\isacom{show}) \ \textit{prop}\\
-\isacom{with} \ \textit{facts} &=& \isacom{from} \ \textit{facts} \isa{this}
-\end{tabular}
-\medskip
-
-\noindent The \isacom{using} idiom de-emphasizes the used facts by moving them
-behind the proposition.
-
-\subsection{Structured lemma statements: \isacom{fixes}, \isacom{assumes}, \isacom{shows}}
-
-Lemmas can also be stated in a more structured fashion. To demonstrate this
-feature with Cantor's theorem, we rephrase \isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ surj\ f}
-a little:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\isanewline
-\ \ \isakeyword{fixes}\ f\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{27}{\isacharprime}}a\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ set{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isakeyword{assumes}\ s{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}surj\ f{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isakeyword{shows}\ {\isaliteral{22}{\isachardoublequoteopen}}False{\isaliteral{22}{\isachardoublequoteclose}}%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-%
-\begin{isamarkuptxt}%
-The optional \isacom{fixes} part allows you to state the types of
-variables up front rather than by decorating one of their occurrences in the
-formula with a type constraint. The key advantage of the structured format is
-the \isacom{assumes} part that allows you to name each assumption; multiple
-assumptions can be separated by \isacom{and}. The
-\isacom{shows} part gives the goal. The actual theorem that will come out of
-the proof is \isa{surj\ f\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ False}, but during the proof the assumption
-\isa{surj\ f} is available under the name \isa{s} like any other fact.%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-\isacommand{proof}\isamarkupfalse%
-\ {\isaliteral{2D}{\isacharminus}}\isanewline
-\ \ \isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}\ a{\isaliteral{2E}{\isachardot}}\ {\isaliteral{7B}{\isacharbraceleft}}x{\isaliteral{2E}{\isachardot}}\ x\ {\isaliteral{5C3C6E6F74696E3E}{\isasymnotin}}\ f\ x{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ f\ a{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{using}\isamarkupfalse%
-\ s\isanewline
-\ \ \ \ \isacommand{by}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto\ simp{\isaliteral{3A}{\isacharcolon}}\ surj{\isaliteral{5F}{\isacharunderscore}}def{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}False{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ blast\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-In the \isacom{have} step the assumption \isa{surj\ f} is now
-referenced by its name \isa{s}. The duplication of \isa{surj\ f} in the
-above proofs (once in the statement of the lemma, once in its proof) has been
-eliminated.
-
-\begin{warn}
-Note the dash after the \isacom{proof}
-command.  It is the null method that does nothing to the goal. Leaving it out
-would ask Isabelle to try some suitable introduction rule on the goal \isa{False}---but there is no suitable introduction rule and \isacom{proof}
-would fail.
-\end{warn}
-
-Stating a lemma with \isacom{assumes}-\isacom{shows} implicitly introduces the
-name \isa{assms} that stands for the list of all assumptions. You can refer
-to individual assumptions by \isa{assms{\isaliteral{28}{\isacharparenleft}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}}, \isa{assms{\isaliteral{28}{\isacharparenleft}}{\isadigit{2}}{\isaliteral{29}{\isacharparenright}}} etc,
-thus obviating the need to name them individually.
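-For instance (a hedged sketch that uses the predefined list syntax from the
-previous chapter and is not part of the original text), the assumption can be
-cited via \isa{assms} without ever naming it:
-\begin{isabelle}
-\isacom{lemma}\\
-\ \ \isakeyword{assumes} \isa{"xs = []"}\\
-\ \ \isakeyword{shows} \isa{"xs @ ys = ys"}\\
-\isacom{using} \isa{assms} \isacom{by} \isa{simp}
-\end{isabelle}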
-
-\section{Proof patterns}
-
-We show a number of important basic proof patterns. Many of them arise from
-the rules of natural deduction that are applied by \isacom{proof} by
-default. The patterns are phrased in terms of \isacom{show} but work for
-\isacom{have} and \isacom{lemma}, too.
-
-We start with two forms of \concept{case analysis}:
-starting from a formula \isa{P} we have the two cases \isa{P} and
-\isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ P}, and starting from a fact \isa{P\ {\isaliteral{5C3C6F723E}{\isasymor}}\ Q}
-we have the two cases \isa{P} and \isa{Q}:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}R{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-\ cases\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}R{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ P{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}R{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-&
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P\ {\isaliteral{5C3C6F723E}{\isasymor}}\ Q{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{then}\isamarkupfalse%
-\ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}R{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}R{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}Q{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}R{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-\end{tabular}
-\medskip
-\begin{isamarkuptext}%
-How to prove a logical equivalence:
-\end{isamarkuptext}%
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P\ {\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}\ Q{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}Q{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}Q{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\medskip
-\begin{isamarkuptext}%
-Proofs by contradiction:
-\end{isamarkuptext}%
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ P{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}False{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-&
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}rule\ ccontr{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6E6F743E}{\isasymnot}}P{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}False{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-\end{tabular}
-\medskip
-\begin{isamarkuptext}%
-The name \isa{ccontr} stands for ``classical contradiction''.
-
-How to prove quantified formulas:
-\end{isamarkuptext}%
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{fix}\isamarkupfalse%
-\ x%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-&
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}x{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{28}{\isacharparenleft}}witness{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-\end{tabular}
-\medskip
-\begin{isamarkuptext}%
-In the proof of \noquotes{\isa{{\isaliteral{22}{\isachardoublequote}}{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}},
-the step \isacom{fix}~\isa{x} introduces a locally fixed variable \isa{x}
-into the subproof, the proverbial ``arbitrary but fixed value''.
-Instead of \isa{x} we could have chosen any name in the subproof.
-In the proof of \noquotes{\isa{{\isaliteral{22}{\isachardoublequote}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}x{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}},
-\isa{witness} is some arbitrary
-term for which we can prove that it satisfies \isa{P}.
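-
-For instance, the following self-contained snippet (a sketch in plain Isar
-source rather than typeset Isabelle output; the concrete properties are
-chosen purely for illustration) instantiates both patterns:
-\begin{verbatim}
-lemma "\<forall>xs::'a list. length (rev xs) = length xs"
-proof
-  fix xs :: "'a list"            (* the arbitrary but fixed value *)
-  show "length (rev xs) = length xs" by simp
-qed
-
-lemma "\<exists>n::nat. n > 0"
-proof
-  show "(1::nat) > 0" by simp    (* 1 is the witness *)
-qed
-\end{verbatim}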
-
-How to reason forward from \noquotes{\isa{{\isaliteral{22}{\isachardoublequote}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}x{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}}:
-\end{isamarkuptext}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}x{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{then}\isamarkupfalse%
-\ \isacommand{obtain}\isamarkupfalse%
-\ x\ \isakeyword{where}\ p{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ blast%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-After the \isacom{obtain} step, \isa{x} (we could have chosen any name)
-is a fixed local
-variable, and \isa{p} is the name of the fact
-\noquotes{\isa{{\isaliteral{22}{\isachardoublequote}}P{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}}.
-This pattern works for one or more \isa{x}.
-As an example of the \isacom{obtain} command, here is the proof of
-Cantor's theorem in more detail:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ surj{\isaliteral{28}{\isacharparenleft}}f\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{27}{\isacharprime}}a\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ set{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}surj\ f{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isacommand{hence}\isamarkupfalse%
-\ \ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}a{\isaliteral{2E}{\isachardot}}\ {\isaliteral{7B}{\isacharbraceleft}}x{\isaliteral{2E}{\isachardot}}\ x\ {\isaliteral{5C3C6E6F74696E3E}{\isasymnotin}}\ f\ x{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ f\ a{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto\ simp{\isaliteral{3A}{\isacharcolon}}\ surj{\isaliteral{5F}{\isacharunderscore}}def{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{then}\isamarkupfalse%
-\ \isacommand{obtain}\isamarkupfalse%
-\ a\ \isakeyword{where}\ \ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{7B}{\isacharbraceleft}}x{\isaliteral{2E}{\isachardot}}\ x\ {\isaliteral{5C3C6E6F74696E3E}{\isasymnotin}}\ f\ x{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ f\ a{\isaliteral{22}{\isachardoublequoteclose}}\ \ \isacommand{by}\isamarkupfalse%
-\ blast\isanewline
-\ \ \isacommand{hence}\isamarkupfalse%
-\ \ {\isaliteral{22}{\isachardoublequoteopen}}a\ {\isaliteral{5C3C6E6F74696E3E}{\isasymnotin}}\ f\ a\ {\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}\ a\ {\isaliteral{5C3C696E3E}{\isasymin}}\ f\ a{\isaliteral{22}{\isachardoublequoteclose}}\ \ \isacommand{by}\isamarkupfalse%
-\ blast\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}False{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ blast\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-
-Finally, how to prove set equality and subset relationship:
-\end{isamarkuptext}%
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}A\ {\isaliteral{3D}{\isacharequal}}\ B{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}A\ {\isaliteral{5C3C73756273657465713E}{\isasymsubseteq}}\ B{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}B\ {\isaliteral{5C3C73756273657465713E}{\isasymsubseteq}}\ A{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-&
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}A\ {\isaliteral{5C3C73756273657465713E}{\isasymsubseteq}}\ B{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{fix}\isamarkupfalse%
-\ x\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}x\ {\isaliteral{5C3C696E3E}{\isasymin}}\ A{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}x\ {\isaliteral{5C3C696E3E}{\isasymin}}\ B{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-\end{tabular}
-\begin{isamarkuptext}%
-\section{Streamlining proofs}
-
-\subsection{Pattern matching and quotations}
-
-In the proof patterns shown above, formulas are often duplicated.
-This can make the text harder to read, write and maintain. Pattern matching
-is an abbreviation mechanism to avoid such duplication. Writing
-\begin{quote}
-\isacom{show} \ \textit{formula} \isa{{\isaliteral{28}{\isacharparenleft}}}\isacom{is} \textit{pattern}\isa{{\isaliteral{29}{\isacharparenright}}}
-\end{quote}
-matches the pattern against the formula, thus instantiating the unknowns in
-the pattern for later use. As an example, consider the proof pattern for
-\isa{{\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}}:
-\end{isamarkuptext}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}formula\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}\ formula\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{28}{\isacharparenleft}}\isakeyword{is}\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{3F}{\isacharquery}}L\ {\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}\ {\isaliteral{3F}{\isacharquery}}R{\isaliteral{22}{\isachardoublequoteclose}}{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{3F}{\isacharquery}}L{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{3F}{\isacharquery}}R{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{3F}{\isacharquery}}R{\isaliteral{22}{\isachardoublequoteclose}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{3F}{\isacharquery}}L{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Instead of duplicating \isa{formula\isaliteral{5C3C5E697375623E}{}\isactrlisub i} in the text, we introduce
-the two abbreviations \isa{{\isaliteral{3F}{\isacharquery}}L} and \isa{{\isaliteral{3F}{\isacharquery}}R} by pattern matching.
-Pattern matching works wherever a formula is stated, in particular
-with \isacom{have} and \isacom{lemma}.
-
-The unknown \isa{{\isaliteral{3F}{\isacharquery}}thesis} is implicitly matched against any goal stated by
-\isacom{lemma} or \isacom{show}. Here is a typical example:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}formula{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\ {\isaliteral{2D}{\isacharminus}}%
-\\\mbox{}\quad$\vdots$\\\mbox{}\hspace{-1.4ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}thesis\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Unknowns can also be instantiated with \isacom{let} commands
-\begin{quote}
-\isacom{let} \isa{{\isaliteral{3F}{\isacharquery}}t} = \isa{{\isaliteral{22}{\isachardoublequote}}}\textit{some-big-term}\isa{{\isaliteral{22}{\isachardoublequote}}}
-\end{quote}
-Later proof steps can refer to \isa{{\isaliteral{3F}{\isacharquery}}t}:
-\begin{quote}
-\isacom{have} \isa{{\isaliteral{22}{\isachardoublequote}}}\dots \isa{{\isaliteral{3F}{\isacharquery}}t} \dots\isa{{\isaliteral{22}{\isachardoublequote}}}
-\end{quote}
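-
-For instance (a minimal sketch, not part of the original proof patterns,
-relying only on the standard list simplification rules):
-\begin{verbatim}
-lemma "length (rev (xs @ ys)) = length xs + length ys"
-proof -
-  let ?n = "length xs + length ys"    (* abbreviate the right-hand side *)
-  have "length (rev (xs @ ys)) = ?n" by simp
-  thus ?thesis .
-qed
-\end{verbatim}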
-\begin{warn}
-Names of facts are introduced with \isa{name{\isaliteral{3A}{\isacharcolon}}} and refer to proved
-theorems. Unknowns \isa{{\isaliteral{3F}{\isacharquery}}X} refer to terms or formulas.
-\end{warn}
-
-Although abbreviations shorten the text, the reader needs to remember what
-they stand for. Similarly for names of facts. Names like \isa{{\isadigit{1}}}, \isa{{\isadigit{2}}}
-and \isa{{\isadigit{3}}} are not helpful and should only be used in short proofs. For
-longer proofs, descriptive names are better. But look at this example:
-\begin{quote}
-\isacom{have} \ \isa{x{\isaliteral{5F}{\isacharunderscore}}gr{\isaliteral{5F}{\isacharunderscore}}{\isadigit{0}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequote}}x\ {\isaliteral{3E}{\isachargreater}}\ {\isadigit{0}}{\isaliteral{22}{\isachardoublequote}}}\\
-$\vdots$\\
-\isacom{from} \isa{x{\isaliteral{5F}{\isacharunderscore}}gr{\isaliteral{5F}{\isacharunderscore}}{\isadigit{0}}} \dots
-\end{quote}
-The name is longer than the fact it stands for! Short facts do not need names;
-one can refer to them easily by quoting them:
-\begin{quote}
-\isacom{have} \ \isa{{\isaliteral{22}{\isachardoublequote}}x\ {\isaliteral{3E}{\isachargreater}}\ {\isadigit{0}}{\isaliteral{22}{\isachardoublequote}}}\\
-$\vdots$\\
-\isacom{from} \isa{{\isaliteral{60}{\isacharbackquote}}x{\isaliteral{3E}{\isachargreater}}{\isadigit{0}}{\isaliteral{60}{\isacharbackquote}}} \dots
-\end{quote}
-Note that the quotes around \isa{x{\isaliteral{3E}{\isachargreater}}{\isadigit{0}}} are \concept{back quotes}.
-They refer to the fact not by name but by value.
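-
-A complete (if artificial) sketch of this style, assuming only arithmetic
-facts that \isa{simp} knows about the natural numbers:
-\begin{verbatim}
-lemma assumes "(x::nat) > 0" shows "x * x > 0"
-proof -
-  have "x > 0" using assms by simp
-  from `x > 0` show ?thesis by simp   (* the fact is quoted, not named *)
-qed
-\end{verbatim}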
-
-\subsection{\isacom{moreover}}
-
-Sometimes one needs a number of facts to enable some deduction. Of course
-one can name these facts individually, as shown on the right,
-but one can also combine them with \isacom{moreover}, as shown on the left:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{moreover}\isamarkupfalse%
-\ \isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{moreover}\isamarkupfalse%
-%
-\\$\vdots$\\\hspace{-1.4ex}
-\isacommand{moreover}\isamarkupfalse%
-\ \isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{ultimately}\isamarkupfalse%
-\ \isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{22}{\isachardoublequoteclose}}\ \ %
-\ $\dots$\\
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-&
-\qquad
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{have}\isamarkupfalse%
-\ lab\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}P\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{have}\isamarkupfalse%
-\ lab\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{2}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}P\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$
-%
-\\$\vdots$\\\hspace{-1.4ex}
-\isacommand{have}\isamarkupfalse%
-\ lab\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}P\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{22}{\isachardoublequoteclose}}\ %
-\ $\dots$\\
-\isacommand{from}\isamarkupfalse%
-\ lab\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ lab\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{2}}%
-\ $\dots$\\
-\isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{22}{\isachardoublequoteclose}}\ \ %
-\ $\dots$\\
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-\end{tabular}
-\begin{isamarkuptext}%
-The \isacom{moreover} version is no shorter but expresses the structure more
-clearly and avoids new names.
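-
-For example (a small sketch with arbitrary propositions \isa{A}, \isa{B},
-\isa{C}, meant only to show the syntax in full):
-\begin{verbatim}
-lemma assumes a: "A" and b: "B" and c: "C" shows "A \<and> B \<and> C"
-proof -
-  have "A" by (rule a)
-  moreover have "B" by (rule b)
-  moreover have "C" by (rule c)
-  ultimately show ?thesis by blast    (* blast assembles the conjunction *)
-qed
-\end{verbatim}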
-
-\subsection{Raw proof blocks}
-
-Sometimes one would like to prove some lemma locally within a proof:
-a lemma that shares the current context of assumptions but that
-has its own assumptions and is generalized over its locally fixed
-variables at the end. This is what a \concept{raw proof block} does:
-\begin{quote}
-\isa{{\isaliteral{7B}{\isacharbraceleft}}} \isacom{fix} \isa{x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n}\\
-\mbox{}\ \ \ \isacom{assume} \isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub m}\\
-\mbox{}\ \ \ $\vdots$\\
-\mbox{}\ \ \ \isacom{have} \isa{B}\\
-\isa{{\isaliteral{7D}{\isacharbraceright}}}
-\end{quote}
-proves \isa{{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{3B}{\isacharsemicolon}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ {\isaliteral{3B}{\isacharsemicolon}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub m\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ B}
-where all \isa{x\isaliteral{5C3C5E697375623E}{}\isactrlisub i} have been replaced by unknowns \isa{{\isaliteral{3F}{\isacharquery}}x\isaliteral{5C3C5E697375623E}{}\isactrlisub i}.
-\begin{warn}
-The conclusion of a raw proof block is \emph{not} indicated by \isacom{show}
-but is simply the final \isacom{have}.
-\end{warn}
-
-As an example we prove a simple fact about divisibility on integers.
-The definition of \isa{dvd} is \isa{{\isaliteral{28}{\isacharparenleft}}b\ dvd\ a{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{28}{\isacharparenleft}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}k{\isaliteral{2E}{\isachardot}}\ a\ {\isaliteral{3D}{\isacharequal}}\ b\ {\isaliteral{2A}{\isacharasterisk}}\ k{\isaliteral{29}{\isacharparenright}}}.
-\end{isamarkuptext}%
-\isacommand{lemma}\isamarkupfalse%
-\ \isakeyword{fixes}\ a\ b\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ int\ \isakeyword{assumes}\ {\isaliteral{22}{\isachardoublequoteopen}}b\ dvd\ {\isaliteral{28}{\isacharparenleft}}a{\isaliteral{2B}{\isacharplus}}b{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isakeyword{shows}\ {\isaliteral{22}{\isachardoublequoteopen}}b\ dvd\ a{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\ {\isaliteral{2D}{\isacharminus}}\isanewline
-\ \ \isacommand{{\isaliteral{7B}{\isacharbraceleft}}}\isamarkupfalse%
-\ \isacommand{fix}\isamarkupfalse%
-\ k\ \isacommand{assume}\isamarkupfalse%
-\ k{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}a{\isaliteral{2B}{\isacharplus}}b\ {\isaliteral{3D}{\isacharequal}}\ b{\isaliteral{2A}{\isacharasterisk}}k{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \ \ \isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}k{\isaliteral{27}{\isacharprime}}{\isaliteral{2E}{\isachardot}}\ a\ {\isaliteral{3D}{\isacharequal}}\ b{\isaliteral{2A}{\isacharasterisk}}k{\isaliteral{27}{\isacharprime}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \ \ \isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \ \ \ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}a\ {\isaliteral{3D}{\isacharequal}}\ b{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}k\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{1}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{using}\isamarkupfalse%
-\ k\ \isacommand{by}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}simp\ add{\isaliteral{3A}{\isacharcolon}}\ algebra{\isaliteral{5F}{\isacharunderscore}}simps{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \ \ \isacommand{qed}\isamarkupfalse%
-\ \isacommand{{\isaliteral{7D}{\isacharbraceright}}}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{then}\isamarkupfalse%
-\ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}thesis\ \isacommand{using}\isamarkupfalse%
-\ assms\ \isacommand{by}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}auto\ simp\ add{\isaliteral{3A}{\isacharcolon}}\ dvd{\isaliteral{5F}{\isacharunderscore}}def{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Note that the result of a raw proof block has no name. In this example
-it was directly piped (via \isacom{then}) into the final proof, but it can
-also be named for later reference: you simply follow the block directly with a
-\isacom{note} command:
-\begin{quote}
-\isacom{note} \ \isa{name\ {\isaliteral{3D}{\isacharequal}}\ this}
-\end{quote}
-This introduces a new name \isa{name} that refers to \isa{this},
-the fact just proved, in this case the preceding block. In general,
-\isacom{note} introduces a new name for one or more facts.
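-
-A sketch of this naming idiom (the lemma itself is incidental; only the
-\isacom{note} step matters here):
-\begin{verbatim}
-lemma "rev (rev (xs @ ys)) = xs @ ys"
-proof -
-  { fix zs :: "'a list"
-    have "rev (rev zs) = zs" by simp }
-  note rev_rev = this          (* names the generalized result of the block *)
-  show ?thesis by (rule rev_rev)
-qed
-\end{verbatim}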
-
-\section{Case analysis and induction}
-
-\subsection{Datatype case analysis}
-
-We have seen case analysis on formulas. Now we want to distinguish
-which form some term takes: is it \isa{{\isadigit{0}}} or of the form \isa{Suc\ n},
-is it \isa{{\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}} or of the form \isa{x\ {\isaliteral{23}{\isacharhash}}\ xs}, etc. Here is a typical example
-of a proof by case analysis on the form of \isa{xs}:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}length{\isaliteral{28}{\isacharparenleft}}tl\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ length\ xs\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{1}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}cases\ xs{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}xs\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}thesis\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{fix}\isamarkupfalse%
-\ y\ ys\ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}xs\ {\isaliteral{3D}{\isacharequal}}\ y{\isaliteral{23}{\isacharhash}}ys{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}thesis\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Function \isa{tl} (``tail'') is defined by \isa{tl\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}} and
-\isa{tl\ {\isaliteral{28}{\isacharparenleft}}x\ {\isaliteral{23}{\isacharhash}}\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ xs}. Note that the result type of \isa{length} is \isa{nat}
-and \isa{{\isadigit{0}}\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{1}}\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}}.
-
-This proof pattern works for any term \isa{t} whose type is a datatype.
-The goal has to be proved for each constructor \isa{C}:
-\begin{quote}
-\isacom{fix} \ \isa{x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n} \isacom{assume} \isa{{\isaliteral{22}{\isachardoublequote}}t\ {\isaliteral{3D}{\isacharequal}}\ C\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{22}{\isachardoublequote}}}
-\end{quote}
-Each case can be written in a more compact form by means of the \isacom{case}
-command:
-\begin{quote}
-\isacom{case} \isa{{\isaliteral{28}{\isacharparenleft}}C\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{29}{\isacharparenright}}}
-\end{quote}
-This is equivalent to the explicit \isacom{fix}-\isacom{assume} line
-but also gives the assumption \isa{{\isaliteral{22}{\isachardoublequote}}t\ {\isaliteral{3D}{\isacharequal}}\ C\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{22}{\isachardoublequote}}} a name: \isa{C},
-like the constructor.
-Here is the \isacom{case} version of the proof above:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}cases\ xs{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ Nil\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}thesis\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}Cons\ y\ ys{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}thesis\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Remember that \isa{Nil} and \isa{Cons} are the alphanumeric names
-for \isa{{\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}} and \isa{{\isaliteral{23}{\isacharhash}}}. The names of the assumptions
-are not used because they are directly piped (via \isacom{thus})
-into the proof of the claim.
-
-\subsection{Structural induction}
-
-We illustrate structural induction with an example based on natural numbers:
-the sum (\isa{{\isaliteral{5C3C53756D3E}{\isasymSum}}}) of the first \isa{n} natural numbers
-(\isa{{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}n{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{7D}{\isacharbraceright}}}) is equal to \mbox{\isa{n\ {\isaliteral{2A}{\isacharasterisk}}\ {\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}}}.
-Never mind the details, just focus on the pattern:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}n{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ n{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{2B}{\isacharplus}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}induction\ n{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}{\isadigit{0}}{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}{\isadigit{0}}{\isaliteral{2B}{\isacharplus}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{fix}\isamarkupfalse%
-\ n\ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}n{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ n{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{2B}{\isacharplus}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}Suc\ n{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ Suc\ n{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{2B}{\isacharplus}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Except for the rewrite steps, everything is explicitly given. This
-makes the proof easily readable, but the duplication means it is tedious to
-write and maintain. Here is how pattern
-matching can completely avoid any duplication:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}n{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ n{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{2B}{\isacharplus}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{28}{\isacharparenleft}}\isakeyword{is}\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{3F}{\isacharquery}}P\ n{\isaliteral{22}{\isachardoublequoteclose}}{\isaliteral{29}{\isacharparenright}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}induction\ n{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{3F}{\isacharquery}}P\ {\isadigit{0}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{fix}\isamarkupfalse%
-\ n\ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{3F}{\isacharquery}}P\ n{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{3F}{\isacharquery}}P{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-The first line introduces an abbreviation \isa{{\isaliteral{3F}{\isacharquery}}P\ n} for the goal.
-Matching the pattern \isa{{\isaliteral{3F}{\isacharquery}}P\ n} against the goal instantiates \isa{{\isaliteral{3F}{\isacharquery}}P} to the
-function \isa{{\isaliteral{5C3C6C616D6264613E}{\isasymlambda}}n{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}n{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ n\ {\isaliteral{2A}{\isacharasterisk}}\ {\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}}.  Now the proposition to
-be proved in the base case can be written as \isa{{\isaliteral{3F}{\isacharquery}}P\ {\isadigit{0}}}, the induction
-hypothesis as \isa{{\isaliteral{3F}{\isacharquery}}P\ n}, and the conclusion of the induction step as
-\isa{{\isaliteral{3F}{\isacharquery}}P{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}}.
-
-Induction also provides the \isacom{case} idiom that abbreviates
-the \isacom{fix}-\isacom{assume} step. The above proof becomes%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}induction\ n{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ {\isadigit{0}}\isanewline
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}case\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}case\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-The unknown \isa{{\isaliteral{3F}{\isacharquery}}case} is set in each case to the required
-claim, i.e.\ \isa{{\isaliteral{3F}{\isacharquery}}P\ {\isadigit{0}}} and \mbox{\isa{{\isaliteral{3F}{\isacharquery}}P{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}}} in the above proof,
-without requiring the user to define a \isa{{\isaliteral{3F}{\isacharquery}}P}. The general
-pattern for induction over \isa{nat} is shown on the left-hand side:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\begin{tabular}{@ {}ll@ {}}
-\begin{minipage}[t]{.4\textwidth}
-\isa{%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}P{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}induction\ n{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ {\isadigit{0}}%
-\\\mbox{}\ \ $\vdots$\\\mbox{}\hspace{-1ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}case\ %
-\ $\dots$\\
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}%
-\\\mbox{}\ \ $\vdots$\\\mbox{}\hspace{-1ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}case\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-&
-\begin{minipage}[t]{.4\textwidth}
-~\\
-~\\
-\isacom{let} \isa{{\isaliteral{3F}{\isacharquery}}case\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{22}{\isachardoublequote}}P{\isaliteral{28}{\isacharparenleft}}{\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}\\
-~\\
-~\\
-~\\[1ex]
-\isacom{fix} \isa{n} \isacom{assume} \isa{Suc{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequote}}P{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}\\
-\isacom{let} \isa{{\isaliteral{3F}{\isacharquery}}case\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{22}{\isachardoublequote}}P{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}\\
-\end{minipage}
-\end{tabular}
-\medskip
-%
-\begin{isamarkuptext}%
-On the right side you can see what the \isacom{case} command
-on the left stands for.
-
-In case the goal is an implication, induction does one more thing: the
-proposition to be proved in each case is not the whole implication but only
-its conclusion; the premises of the implication are immediately made
-assumptions of that case. That is, if in the above proof we replace
-\isacom{show}~\isa{P{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{29}{\isacharparenright}}} by
-\mbox{\isacom{show}~\isa{A{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{29}{\isacharparenright}}}}
-then \isacom{case}~\isa{{\isadigit{0}}} stands for
-\begin{quote}
-\isacom{assume} \ \isa{{\isadigit{0}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequote}}A{\isaliteral{28}{\isacharparenleft}}{\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}\\
-\isacom{let} \isa{{\isaliteral{3F}{\isacharquery}}case\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{22}{\isachardoublequote}}P{\isaliteral{28}{\isacharparenleft}}{\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}
-\end{quote}
-and \isacom{case}~\isa{{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}} stands for
-\begin{quote}
-\isacom{fix} \isa{n}\\
-\isacom{assume} \isa{Suc{\isaliteral{3A}{\isacharcolon}}}
-  \begin{tabular}[t]{l}\isa{{\isaliteral{22}{\isachardoublequote}}A{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}\\\isa{{\isaliteral{22}{\isachardoublequote}}A{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}\end{tabular}\\
-\isacom{let} \isa{{\isaliteral{3F}{\isacharquery}}case\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{22}{\isachardoublequote}}P{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}
-\end{quote}
-The list of assumptions \isa{Suc} is actually subdivided
-into \isa{Suc{\isaliteral{2E}{\isachardot}}IH}, the induction hypotheses (here \isa{A{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{29}{\isacharparenright}}})
-and \isa{Suc{\isaliteral{2E}{\isachardot}}prems}, the premises of the goal being proved
-(here \isa{A{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}}).
-
-Induction works for any datatype.
-Proving a goal \isa{{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}{\isaliteral{3B}{\isacharsemicolon}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}{\isaliteral{3B}{\isacharsemicolon}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub k{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P{\isaliteral{28}{\isacharparenleft}}x{\isaliteral{29}{\isacharparenright}}}
-by induction on \isa{x} generates a proof obligation for each constructor
-\isa{C} of the datatype. The command \isa{case\ {\isaliteral{28}{\isacharparenleft}}C\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{29}{\isacharparenright}}}
-performs the following steps:
-\begin{enumerate}
-\item \isacom{fix} \isa{x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n}
-\item \isacom{assume} the induction hypotheses (calling them \isa{C{\isaliteral{2E}{\isachardot}}IH})
- and the premises \mbox{\isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub i{\isaliteral{28}{\isacharparenleft}}C\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{29}{\isacharparenright}}}} (calling them \isa{C{\isaliteral{2E}{\isachardot}}prems})
- and calling the whole list \isa{C}
-\item \isacom{let} \isa{{\isaliteral{3F}{\isacharquery}}case\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{22}{\isachardoublequote}}P{\isaliteral{28}{\isacharparenleft}}C\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}
-\end{enumerate}
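-
-For instance, the following sketch (with a small auxiliary function
-\isa{snoc}, defined here only for the sake of the example) goes through
-exactly these steps for the list datatype:
-\begin{verbatim}
-(* auxiliary function, introduced only for this example *)
-fun snoc :: "'a list \<Rightarrow> 'a \<Rightarrow> 'a list" where
-"snoc [] y = [y]" |
-"snoc (x # xs) y = x # snoc xs y"
-
-lemma "length (snoc xs y) = Suc (length xs)"
-proof (induction xs)
-  case Nil                 (* fixes nothing; Nil.IH and Nil.prems are empty *)
-  show ?case by simp
-next
-  case (Cons x xs)         (* fixes x and xs and assumes Cons.IH *)
-  thus ?case by simp       (* simp uses the chained induction hypothesis *)
-qed
-\end{verbatim}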
-
-\subsection{Rule induction}
-
-Recall the inductive and recursive definitions of even numbers in
-\autoref{sec:inductive-defs}:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{inductive}\isamarkupfalse%
-\ ev\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}nat\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ bool{\isaliteral{22}{\isachardoublequoteclose}}\ \isakeyword{where}\isanewline
-ev{\isadigit{0}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}ev\ {\isadigit{0}}{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
-evSS{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev{\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isanewline
-\isacommand{fun}\isamarkupfalse%
-\ even\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}nat\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ bool{\isaliteral{22}{\isachardoublequoteclose}}\ \isakeyword{where}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}even\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ True{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}even\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ False{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}even\ {\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ even\ n{\isaliteral{22}{\isachardoublequoteclose}}%
-\begin{isamarkuptext}%
-We recast the proof of \isa{ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ even\ n} in Isar. The
-left column shows the actual proof text, the right column shows
-the implicit effect of the two \isacom{case} commands:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\begin{tabular}{@ {}l@ {\qquad}l@ {}}
-\begin{minipage}[t]{.5\textwidth}
-\isa{%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ even\ n{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}induction\ rule{\isaliteral{3A}{\isacharcolon}}\ ev{\isaliteral{2E}{\isachardot}}induct{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ ev{\isadigit{0}}\isanewline
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}case\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ evSS\isanewline
-\isanewline
-\isanewline
-\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}case\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-}
-\end{minipage}
-&
-\begin{minipage}[t]{.5\textwidth}
-~\\
-~\\
-\isacom{let} \isa{{\isaliteral{3F}{\isacharquery}}case\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{22}{\isachardoublequote}}even\ {\isadigit{0}}{\isaliteral{22}{\isachardoublequote}}}\\
-~\\
-~\\
-\isacom{fix} \isa{n}\\
-\isacom{assume} \isa{evSS{\isaliteral{3A}{\isacharcolon}}}
-  \begin{tabular}[t]{l} \isa{{\isaliteral{22}{\isachardoublequote}}ev\ n{\isaliteral{22}{\isachardoublequote}}}\\\isa{{\isaliteral{22}{\isachardoublequote}}even\ n{\isaliteral{22}{\isachardoublequote}}}\end{tabular}\\
-\isacom{let} \isa{{\isaliteral{3F}{\isacharquery}}case\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{22}{\isachardoublequote}}even{\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}\\
-\end{minipage}
-\end{tabular}
-\medskip
-%
-\begin{isamarkuptext}%
-The proof resembles structural induction, but the induction rule is given
-explicitly and the names of the cases are the names of the rules in the
-inductive definition.
-Let us examine the two assumptions named \isa{evSS}:
-\isa{ev\ n} is the premise of rule \isa{evSS}, which we may assume
-because we are in the case where that rule was used; \isa{even\ n}
-is the induction hypothesis.
-\begin{warn}
-Because each \isacom{case} command introduces a list of assumptions
-named like the case name, which is the name of a rule of the inductive
-definition, those rules now need to be accessed with a qualified name, here
-\isa{ev{\isaliteral{2E}{\isachardot}}ev{\isadigit{0}}} and \isa{ev{\isaliteral{2E}{\isachardot}}evSS}.
-\end{warn}
-
-In the case \isa{evSS} of the proof above we have pretended that the
-system fixes a variable \isa{n}.  But unless the user provides the name
-\isa{n}, the system will just invent its own name that cannot be referred
-to.  In the above proof, we do not need to refer to it, hence we do not give
-it a specific name. If one needs to refer to it, one writes
-\begin{quote}
-\isacom{case} \isa{{\isaliteral{28}{\isacharparenleft}}evSS\ m{\isaliteral{29}{\isacharparenright}}}
-\end{quote}
-just like \isacom{case}~\isa{{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}} in earlier structural inductions.
-The name \isa{m} is an arbitrary choice. As a result,
-case \isa{evSS} is derived from a renamed version of
-rule \isa{evSS}: \isa{ev\ m\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev{\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc\ m{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}}.
-Here is an example with a (contrived) intermediate step that refers to \isa{m}:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ even\ n{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}induction\ rule{\isaliteral{3A}{\isacharcolon}}\ ev{\isaliteral{2E}{\isachardot}}induct{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ ev{\isadigit{0}}\ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}case\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}evSS\ m{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}even{\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc\ m{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ even\ m{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ simp\isanewline
-\ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}case\ \isacommand{using}\isamarkupfalse%
-\ {\isaliteral{60}{\isacharbackquoteopen}}even\ m{\isaliteral{60}{\isacharbackquoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ blast\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-\indent
-In general, let \isa{I} be a (for simplicity unary) inductively defined
-predicate and let the rules in the definition of \isa{I}
-be called \isa{rule\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}}, \dots, \isa{rule\isaliteral{5C3C5E697375623E}{}\isactrlisub n}. A proof by rule
-induction follows this pattern:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{show}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}I\ x\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P\ x{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\isacommand{proof}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}induction\ rule{\isaliteral{3A}{\isacharcolon}}\ I{\isaliteral{2E}{\isachardot}}induct{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ rule\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}%
-\\[-.4ex]\mbox{}\ \ $\vdots$\\[-.4ex]\mbox{}\hspace{-1ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}case\ %
-\ $\dots$\\
-\isacommand{next}\isamarkupfalse%
-%
-\\[-.4ex]$\vdots$\\[-.4ex]\mbox{}\hspace{-1ex}
-\isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{case}\isamarkupfalse%
-\ rule\isaliteral{5C3C5E697375623E}{}\isactrlisub n%
-\\[-.4ex]\mbox{}\ \ $\vdots$\\[-.4ex]\mbox{}\hspace{-1ex}
-\ \ \isacommand{show}\isamarkupfalse%
-\ {\isaliteral{3F}{\isacharquery}}case\ %
-\ $\dots$\\
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-One can provide explicit variable names by writing
-\isacom{case}~\isa{{\isaliteral{28}{\isacharparenleft}}rule\isaliteral{5C3C5E697375623E}{}\isactrlisub i\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub k{\isaliteral{29}{\isacharparenright}}}, thus renaming the first \isa{k}
-free variables in rule \isa{i} to \isa{x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub k},
-going through rule \isa{i} from left to right.
-
-\subsection{Assumption naming}
-
-In any induction, \isacom{case}~\isa{name} sets up a list of assumptions
-also called \isa{name}, which is subdivided into three parts:
-\begin{description}
-\item[\isa{name{\isaliteral{2E}{\isachardot}}IH}] contains the induction hypotheses.
-\item[\isa{name{\isaliteral{2E}{\isachardot}}hyps}] contains all the other hypotheses of this case in the
-induction rule. For rule inductions these are the hypotheses of rule
-\isa{name}; for structural inductions these are empty.
-\item[\isa{name{\isaliteral{2E}{\isachardot}}prems}] contains the (suitably instantiated) premises
-of the statement being proved, i.e. the \isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub i} when
-proving \isa{{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{3B}{\isacharsemicolon}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}{\isaliteral{3B}{\isacharsemicolon}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub n\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A}.
-\end{description}
-\begin{warn}
-Proof method \isa{induct} differs from \isa{induction}
-only in this naming policy: \isa{induct} does not distinguish
-\isa{IH} from \isa{hyps} but subsumes \isa{IH} under \isa{hyps}.
-\end{warn}
-
-More complicated inductive proofs than the ones we have seen so far
-often need to refer to specific assumptions---just \isa{name} or even
-\isa{name{\isaliteral{2E}{\isachardot}}prems} and \isa{name{\isaliteral{2E}{\isachardot}}IH} can be too unspecific.
-This is where the indexing of fact lists comes in handy, e.g.\
-\isa{name{\isaliteral{2E}{\isachardot}}IH{\isaliteral{28}{\isacharparenleft}}{\isadigit{2}}{\isaliteral{29}{\isacharparenright}}} or \isa{name{\isaliteral{2E}{\isachardot}}prems{\isaliteral{28}{\isacharparenleft}}{\isadigit{1}}{\isaliteral{2D}{\isacharminus}}{\isadigit{2}}{\isaliteral{29}{\isacharparenright}}}.
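-
-Here is a small sketch (the statement is chosen merely so that both an
-induction hypothesis and a premise are present) that refers to
-\isa{Cons{\isaliteral{2E}{\isachardot}}IH} and \isa{Cons{\isaliteral{2E}{\isachardot}}prems} by name:
-\begin{verbatim}
-lemma "ys \<noteq> [] \<Longrightarrow> length xs < length (xs @ ys)"
-proof (induction xs)
-  case Nil
-  thus ?case by simp
-next
-  case (Cons a xs)
-  have "length xs < length (xs @ ys)"
-    using Cons.prems by (rule Cons.IH)   (* the premise feeds the IH *)
-  thus ?case by simp
-qed
-\end{verbatim}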
-
-\subsection{Rule inversion}
-
-Rule inversion is case analysis of which rule could have been used to
-derive some fact. The name \concept{rule inversion} emphasizes that we are
-reasoning backwards: by which rules could some given fact have been proved?
-For the inductive definition of \isa{ev}, rule inversion can be summarized
-like this:
-\begin{isabelle}%
-ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ n\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}\ {\isaliteral{5C3C6F723E}{\isasymor}}\ {\isaliteral{28}{\isacharparenleft}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}k{\isaliteral{2E}{\isachardot}}\ n\ {\isaliteral{3D}{\isacharequal}}\ Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ k{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C616E643E}{\isasymand}}\ ev\ k{\isaliteral{29}{\isacharparenright}}%
-\end{isabelle}
-The realisation in Isabelle is a case analysis.
-A simple example is the proof that \isa{ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev\ {\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}}. We
-already went through the details informally in \autoref{sec:Logic:even}. This
-is the Isar proof:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}ev\ n{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isacommand{from}\isamarkupfalse%
-\ this\ \isacommand{have}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}ev{\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-\ \ \isacommand{proof}\isamarkupfalse%
-\ cases\isanewline
-\ \ \ \ \isacommand{case}\isamarkupfalse%
-\ ev{\isadigit{0}}\ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}ev{\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}simp\ add{\isaliteral{3A}{\isacharcolon}}\ ev{\isaliteral{2E}{\isachardot}}ev{\isadigit{0}}{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{next}\isamarkupfalse%
-\isanewline
-\ \ \ \ \isacommand{case}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}evSS\ k{\isaliteral{29}{\isacharparenright}}\ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}ev{\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}simp\ add{\isaliteral{3A}{\isacharcolon}}\ ev{\isaliteral{2E}{\isachardot}}evSS{\isaliteral{29}{\isacharparenright}}\isanewline
-\ \ \isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-The key point here is that a case analysis over some inductively
-defined predicate is triggered by piping the given fact
-(here: \isacom{from}~\isa{this}) into a proof by \isa{cases}.
-Let us examine the assumptions available in each case. In case \isa{ev{\isadigit{0}}}
-we have \isa{n\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}} and in case \isa{evSS} we have \isa{n\ {\isaliteral{3D}{\isacharequal}}\ Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ k{\isaliteral{29}{\isacharparenright}}}
-and \isa{ev\ k}. In each case the assumptions are available under the name
-of the case; there is no fine-grained naming scheme as for induction.
-
-Sometimes certain rules could not have been used to derive the given fact
-because constructors clash. As an extreme example consider
-rule inversion applied to \isa{ev\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}}: neither rule \isa{ev{\isadigit{0}}} nor
-rule \isa{evSS} can yield \isa{ev\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}} because \isa{Suc\ {\isadigit{0}}} unifies
-neither with \isa{{\isadigit{0}}} nor with \isa{Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}}. Impossible cases do not
-have to be proved. Hence we can prove anything from \isa{ev\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}}:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}ev{\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{then}\isamarkupfalse%
-\ \isacommand{have}\isamarkupfalse%
-\ P\ \isacommand{by}\isamarkupfalse%
-\ cases%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-That is, \isa{ev\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}} is simply not provable:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ ev{\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{proof}\isamarkupfalse%
-\isanewline
-\ \ \isacommand{assume}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}ev{\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{then}\isamarkupfalse%
-\ \isacommand{show}\isamarkupfalse%
-\ False\ \isacommand{by}\isamarkupfalse%
-\ cases\isanewline
-\isacommand{qed}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Normally not all cases will be impossible. As a simple exercise,
-prove that \mbox{\isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ ev\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}}.}%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\isatagtheory
-%
-\endisatagtheory
-{\isafoldtheory}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-\end{isabellebody}%
-%%% Local Variables:
-%%% mode: latex
-%%% TeX-master: "root"
-%%% End:
--- a/doc-src/ProgProve/Thys/document/LaTeXsugar.tex	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,35 +0,0 @@
-%
-\begin{isabellebody}%
-\def\isabellecontext{LaTeXsugar}%
-%
-\isadelimtheory
-\isanewline
-%
-\endisadelimtheory
-%
-\isatagtheory
-%
-\endisatagtheory
-{\isafoldtheory}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\isatagtheory
-%
-\endisatagtheory
-{\isafoldtheory}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-\end{isabellebody}%
-%%% Local Variables:
-%%% mode: latex
-%%% TeX-master: "root"
-%%% End:
--- a/doc-src/ProgProve/Thys/document/Logic.tex	Mon Aug 27 21:46:00 2012 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,971 +0,0 @@
-%
-\begin{isabellebody}%
-\def\isabellecontext{Logic}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\isatagtheory
-%
-\endisatagtheory
-{\isafoldtheory}%
-%
-\isadelimtheory
-%
-\endisadelimtheory
-%
-\begin{isamarkuptext}%
-\vspace{-5ex}
-\section{Logic and proof beyond equality}
-\label{sec:Logic}
-
-\subsection{Formulas}
-
-The core syntax of formulas (\textit{form} below)
-provides the standard logical constructs, in decreasing order of precedence:
-\[
-\begin{array}{rcl}
-
-\mathit{form} & ::= &
-  \isa{{\isaliteral{28}{\isacharparenleft}}form{\isaliteral{29}{\isacharparenright}}} ~\mid~
-  \isa{True} ~\mid~
-  \isa{False} ~\mid~
-  \isa{term\ {\isaliteral{3D}{\isacharequal}}\ term}\\
- &\mid& \isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ form} ~\mid~
-  \isa{form\ {\isaliteral{5C3C616E643E}{\isasymand}}\ form} ~\mid~
-  \isa{form\ {\isaliteral{5C3C6F723E}{\isasymor}}\ form} ~\mid~
-  \isa{form\ {\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}\ form}\\
- &\mid& \isa{{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x{\isaliteral{2E}{\isachardot}}\ form} ~\mid~  \isa{{\isaliteral{5C3C6578697374733E}{\isasymexists}}x{\isaliteral{2E}{\isachardot}}\ form}
-\end{array}
-\]
-Terms are the ones we have seen all along, built from constants, variables,
-function application and \isa{{\isaliteral{5C3C6C616D6264613E}{\isasymlambda}}}-abstraction, including all the syntactic
-sugar like infix symbols, \isa{if}, \isa{case} etc.
-\begin{warn}
-Remember that formulas are simply terms of type \isa{bool}. Hence
-\isa{{\isaliteral{3D}{\isacharequal}}} also works for formulas. Beware that \isa{{\isaliteral{3D}{\isacharequal}}} has a higher
-precedence than the other logical operators. Hence \isa{s\ {\isaliteral{3D}{\isacharequal}}\ t\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A} means
-\isa{{\isaliteral{28}{\isacharparenleft}}s\ {\isaliteral{3D}{\isacharequal}}\ t{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A}, and \isa{A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ B\ {\isaliteral{3D}{\isacharequal}}\ B\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A} means \isa{A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ {\isaliteral{28}{\isacharparenleft}}B\ {\isaliteral{3D}{\isacharequal}}\ B{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A}.
-Logical equivalence can also be written with
-\isa{{\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}} instead of \isa{{\isaliteral{3D}{\isacharequal}}}, where \isa{{\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}} has the same low
-precedence as \isa{{\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}}. Hence \isa{A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ B\ {\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}\ B\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A} really means
-\isa{{\isaliteral{28}{\isacharparenleft}}A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ B{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}\ {\isaliteral{28}{\isacharparenleft}}B\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A{\isaliteral{29}{\isacharparenright}}}.
-\end{warn}
-\begin{warn}
-Quantifiers need to be enclosed in parentheses if they are nested within
-other constructs (just like \isa{if}, \isa{case} and \isa{let}).
-\end{warn}
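-For example (an illustration of our own), one writes
-$A \wedge (\forall x.\ P\ x)$; without the parentheses around the quantified
-part, the formula will not be parsed as intended.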
-The most frequent logical symbols have the following ASCII representations:
-\begin{center}
-\begin{tabular}{l@ {\qquad}l@ {\qquad}l}
-\isa{{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}} & \xsymbol{forall} & \texttt{ALL}\\
-\isa{{\isaliteral{5C3C6578697374733E}{\isasymexists}}} & \xsymbol{exists} & \texttt{EX}\\
-\isa{{\isaliteral{5C3C6C616D6264613E}{\isasymlambda}}} & \xsymbol{lambda} & \texttt{\%}\\
-\isa{{\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}} & \texttt{-{}->}\\
-\isa{{\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}} & \texttt{<->}\\
-\isa{{\isaliteral{5C3C616E643E}{\isasymand}}} & \texttt{/\char`\\} & \texttt{\&}\\
-\isa{{\isaliteral{5C3C6F723E}{\isasymor}}} & \texttt{\char`\\/} & \texttt{|}\\
-\isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}} & \xsymbol{not} & \texttt{\char`~}\\
-\isa{{\isaliteral{5C3C6E6F7465713E}{\isasymnoteq}}} & \xsymbol{noteq} & \texttt{\char`~=}
-\end{tabular}
-\end{center}
-The first column shows the symbols, the second column ASCII representations
-that Isabelle interfaces convert into the corresponding symbol,
-and the third column shows ASCII representations that stay fixed.
-\begin{warn}
-The implication \isa{{\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}} is part of the Isabelle framework. It structures
-theorems and proof states, separating assumptions from conclusions.
-The implication \isa{{\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}} is part of the logic HOL and can occur inside the
-formulas that make up the assumptions and conclusion.
-Theorems should be of the form \isa{{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{3B}{\isacharsemicolon}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}{\isaliteral{3B}{\isacharsemicolon}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub n\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A},
-not \isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C616E643E}{\isasymand}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub n\ {\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}\ A}. Both are logically equivalent
-but the first one works better when using the theorem in further proofs.
-\end{warn}
-
-\subsection{Sets}
-
-Sets of elements of type \isa{{\isaliteral{27}{\isacharprime}}a} have type \isa{{\isaliteral{27}{\isacharprime}}a\ set}.
-They can be finite or infinite. Sets come with the usual notation:
-\begin{itemize}
-\item \isa{{\isaliteral{7B}{\isacharbraceleft}}{\isaliteral{7D}{\isacharbraceright}}},\quad \isa{{\isaliteral{7B}{\isacharbraceleft}}e\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{2C}{\isacharcomma}}{\isaliteral{5C3C646F74733E}{\isasymdots}}{\isaliteral{2C}{\isacharcomma}}e\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{7D}{\isacharbraceright}}}
-\item \isa{e\ {\isaliteral{5C3C696E3E}{\isasymin}}\ A},\quad \isa{A\ {\isaliteral{5C3C73756273657465713E}{\isasymsubseteq}}\ B}
-\item \isa{A\ {\isaliteral{5C3C756E696F6E3E}{\isasymunion}}\ B},\quad \isa{A\ {\isaliteral{5C3C696E7465723E}{\isasyminter}}\ B},\quad \isa{A\ {\isaliteral{2D}{\isacharminus}}\ B},\quad \isa{{\isaliteral{2D}{\isacharminus}}\ A}
-\end{itemize}
-and much more. \isa{UNIV} is the set of all elements of some type.
-Set comprehension is written \isa{{\isaliteral{7B}{\isacharbraceleft}}x{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{7D}{\isacharbraceright}}}
-rather than \isa{{\isaliteral{7B}{\isacharbraceleft}}x\ {\isaliteral{7C}{\isacharbar}}\ P{\isaliteral{7D}{\isacharbraceright}}}, to emphasize the variable binding nature
-of the construct.
-\begin{warn}
-In \isa{{\isaliteral{7B}{\isacharbraceleft}}x{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{7D}{\isacharbraceright}}} the \isa{x} must be a variable. Set comprehension
-involving a proper term \isa{t} must be written
-\isa{{\isaliteral{22}{\isachardoublequote}}{\isaliteral{7B}{\isacharbraceleft}}t\ {\isaliteral{7C}{\isacharbar}}\ x\ y\ z{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{7D}{\isacharbraceright}}{\isaliteral{22}{\isachardoublequote}}},
-where \isa{x\ y\ z} are the free variables in \isa{t}.
-This is just a shorthand for \isa{{\isaliteral{7B}{\isacharbraceleft}}v{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C6578697374733E}{\isasymexists}}x\ y\ z{\isaliteral{2E}{\isachardot}}\ v\ {\isaliteral{3D}{\isacharequal}}\ t\ {\isaliteral{5C3C616E643E}{\isasymand}}\ P{\isaliteral{7D}{\isacharbraceright}}}, where
-\isa{v} is a new variable.
-\end{warn}
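-For example (an illustration of our own), the set of doubled elements of a set
-$A$ can be written $\{n+n \mid n.\ n \in A\}$, which is shorthand for
-$\{v.\ \exists n.\ v = n+n \wedge n \in A\}$.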
-
-Here are the ASCII representations of the mathematical symbols:
-\begin{center}
-\begin{tabular}{l@ {\quad}l@ {\quad}l}
-\isa{{\isaliteral{5C3C696E3E}{\isasymin}}} & \texttt{\char`\\\char`\<in>} & \texttt{:}\\
-\isa{{\isaliteral{5C3C73756273657465713E}{\isasymsubseteq}}} & \texttt{\char`\\\char`\<subseteq>} & \texttt{<=}\\
-\isa{{\isaliteral{5C3C756E696F6E3E}{\isasymunion}}} & \texttt{\char`\\\char`\<union>} & \texttt{Un}\\
-\isa{{\isaliteral{5C3C696E7465723E}{\isasyminter}}} & \texttt{\char`\\\char`\<inter>} & \texttt{Int}
-\end{tabular}
-\end{center}
-Sets also allow bounded quantifications \isa{{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x{\isaliteral{5C3C696E3E}{\isasymin}}A{\isaliteral{2E}{\isachardot}}\ P} and
-\isa{{\isaliteral{5C3C6578697374733E}{\isasymexists}}x{\isaliteral{5C3C696E3E}{\isasymin}}A{\isaliteral{2E}{\isachardot}}\ P}.
-
-\subsection{Proof automation}
-
-So far we have only seen \isa{simp} and \isa{auto}: Both perform
-rewriting, both can also prove linear arithmetic facts (no multiplication),
-and \isa{auto} is also able to prove simple logical or set-theoretic goals:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C6578697374733E}{\isasymexists}}y{\isaliteral{2E}{\isachardot}}\ x\ {\isaliteral{3D}{\isacharequal}}\ y{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{by}\isamarkupfalse%
-\ auto%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-\isanewline
-%
-\endisadelimproof
-\isanewline
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}A\ {\isaliteral{5C3C73756273657465713E}{\isasymsubseteq}}\ B\ {\isaliteral{5C3C696E7465723E}{\isasyminter}}\ C\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A\ {\isaliteral{5C3C73756273657465713E}{\isasymsubseteq}}\ B\ {\isaliteral{5C3C756E696F6E3E}{\isasymunion}}\ C{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{by}\isamarkupfalse%
-\ auto%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-where
-\begin{quote}
-\isacom{by} \textit{proof-method}
-\end{quote}
-is short for
-\begin{quote}
-\isacom{apply} \textit{proof-method}\\
-\isacom{done}
-\end{quote}
-The key characteristics of both \isa{simp} and \isa{auto} are
-\begin{itemize}
-\item They show you where they got stuck, giving you an idea how to continue.
-\item They perform the obvious steps but are highly incomplete.
-\end{itemize}
-A proof method is \concept{complete} if it can prove all true formulas.
-There is no complete proof method for HOL, not even in theory.
-Hence all our proof methods only differ in how incomplete they are.
-
-A proof method that is still incomplete but tries harder than \isa{auto} is
-\isa{fastforce}.  It either succeeds or fails, it acts on the first
-subgoal only, and it can be modified just like \isa{auto}, e.g.\
-with \isa{simp\ add}. Here is a typical example of what \isa{fastforce}
-can do:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ {\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}xs\ {\isaliteral{5C3C696E3E}{\isasymin}}\ A{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C6578697374733E}{\isasymexists}}ys{\isaliteral{2E}{\isachardot}}\ xs\ {\isaliteral{3D}{\isacharequal}}\ ys\ {\isaliteral{40}{\isacharat}}\ ys{\isaliteral{3B}{\isacharsemicolon}}\ \ us\ {\isaliteral{5C3C696E3E}{\isasymin}}\ A\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\isanewline
-\ \ \ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ {\isaliteral{5C3C6578697374733E}{\isasymexists}}n{\isaliteral{2E}{\isachardot}}\ length\ us\ {\isaliteral{3D}{\isacharequal}}\ n{\isaliteral{2B}{\isacharplus}}n{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{by}\isamarkupfalse%
-\ fastforce%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-This lemma is out of reach for \isa{auto} because of the
-quantifiers.  Even \isa{fastforce} fails when the quantifier structure
-becomes more complicated. In a few cases, its slow version \isa{force}
-succeeds where \isa{fastforce} fails.
-
-The method of choice for complex logical goals is \isa{blast}. In the
-following example, \isa{T} and \isa{A} are two binary predicates. It
-is shown that if \isa{T} is total, \isa{A} is antisymmetric and \isa{T} is
-a subset of \isa{A}, then \isa{A} is a subset of \isa{T}:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\isanewline
-\ \ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ {\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x\ y{\isaliteral{2E}{\isachardot}}\ T\ x\ y\ {\isaliteral{5C3C6F723E}{\isasymor}}\ T\ y\ x{\isaliteral{3B}{\isacharsemicolon}}\isanewline
-\ \ \ \ \ {\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x\ y{\isaliteral{2E}{\isachardot}}\ A\ x\ y\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A\ y\ x\ {\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}\ x\ {\isaliteral{3D}{\isacharequal}}\ y{\isaliteral{3B}{\isacharsemicolon}}\isanewline
-\ \ \ \ \ {\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x\ y{\isaliteral{2E}{\isachardot}}\ T\ x\ y\ {\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}\ A\ x\ y\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\isanewline
-\ \ \ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ {\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x\ y{\isaliteral{2E}{\isachardot}}\ A\ x\ y\ {\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}\ T\ x\ y{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{by}\isamarkupfalse%
-\ blast%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-We leave it to the reader to figure out why this lemma is true.
-Method \isa{blast}
-\begin{itemize}
-\item is (in principle) a complete proof procedure for first-order formulas,
-  a fragment of HOL. In practice there is a search bound.
-\item does no rewriting and knows very little about equality.
-\item covers logic, sets and relations.
-\item either succeeds or fails.
-\end{itemize}
-Because of its strength in logic and sets and its weakness in equality reasoning, it complements the earlier proof methods.
-
-
-\subsubsection{Sledgehammer}
-
-Command \isacom{sledgehammer} calls a number of external automatic
-theorem provers (ATPs) that run for up to 30 seconds searching for a
-proof. Some of these ATPs are part of the Isabelle installation; others are
-queried over the internet. If successful, a proof command is generated and can
-be inserted into your proof.  The biggest win of \isacom{sledgehammer} is
-that it will take into account the whole lemma library and you do not need to
-feed in any lemma explicitly. For example,%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ xs\ {\isaliteral{40}{\isacharat}}\ ys\ {\isaliteral{3D}{\isacharequal}}\ ys\ {\isaliteral{40}{\isacharat}}\ xs{\isaliteral{3B}{\isacharsemicolon}}\ \ length\ xs\ {\isaliteral{3D}{\isacharequal}}\ length\ ys\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ xs\ {\isaliteral{3D}{\isacharequal}}\ ys{\isaliteral{22}{\isachardoublequoteclose}}%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-%
-\begin{isamarkuptxt}%
-cannot be solved by any of the standard proof methods, but
-\isacom{sledgehammer} finds the following proof:%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-\isacommand{by}\isamarkupfalse%
-\ {\isaliteral{28}{\isacharparenleft}}metis\ append{\isaliteral{5F}{\isacharunderscore}}eq{\isaliteral{5F}{\isacharunderscore}}conv{\isaliteral{5F}{\isacharunderscore}}conj{\isaliteral{29}{\isacharparenright}}%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-We do not explain how the proof was found but what this command
-means. For a start, Isabelle does not trust external tools (and in particular
-not the translations from Isabelle's logic to those tools!)
-and insists on a proof that it can check. This is what \isa{metis} does.
-It is given a list of lemmas and tries to find a proof just using those lemmas
-(and pure logic). In contrast to \isa{simp} and friends that know a lot of
-lemmas already, using \isa{metis} manually is tedious because one has
-to find all the relevant lemmas first. But that is precisely what
-\isacom{sledgehammer} does for us.
-In this case lemma \isa{append{\isaliteral{5F}{\isacharunderscore}}eq{\isaliteral{5F}{\isacharunderscore}}conv{\isaliteral{5F}{\isacharunderscore}}conj} alone suffices:
-\begin{isabelle}%
-{\isaliteral{28}{\isacharparenleft}}xs\ {\isaliteral{40}{\isacharat}}\ ys\ {\isaliteral{3D}{\isacharequal}}\ zs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{28}{\isacharparenleft}}xs\ {\isaliteral{3D}{\isacharequal}}\ take\ {\isaliteral{28}{\isacharparenleft}}length\ xs{\isaliteral{29}{\isacharparenright}}\ zs\ {\isaliteral{5C3C616E643E}{\isasymand}}\ ys\ {\isaliteral{3D}{\isacharequal}}\ drop\ {\isaliteral{28}{\isacharparenleft}}length\ xs{\isaliteral{29}{\isacharparenright}}\ zs{\isaliteral{29}{\isacharparenright}}%
-\end{isabelle}
-We leave it to the reader to figure out why this lemma suffices to prove
-the above lemma, even without any knowledge of what the functions \isa{take}
-and \isa{drop} do. Keep in mind that the variables in the two lemmas
-are independent of each other, despite the same names, and that you can
-substitute arbitrary values for the free variables in a lemma.
-
-Just as for the other proof methods we have seen, there is no guarantee that
-\isacom{sledgehammer} will find a proof if it exists. Nor is
-\isacom{sledgehammer} superior to the other proof methods.  They are
-incomparable. Therefore it is recommended to apply \isa{simp} or \isa{auto} before invoking \isacom{sledgehammer} on what is left.
-
-\subsubsection{Arithmetic}
-
-By arithmetic formulas we mean formulas involving variables, numbers, \isa{{\isaliteral{2B}{\isacharplus}}}, \isa{{\isaliteral{2D}{\isacharminus}}}, \isa{{\isaliteral{3D}{\isacharequal}}}, \isa{{\isaliteral{3C}{\isacharless}}}, \isa{{\isaliteral{5C3C6C653E}{\isasymle}}} and the usual logical
-connectives \isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}}, \isa{{\isaliteral{5C3C616E643E}{\isasymand}}}, \isa{{\isaliteral{5C3C6F723E}{\isasymor}}}, \isa{{\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}},
-\isa{{\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}}. Strictly speaking, this is known as \concept{linear arithmetic}
-because it does not involve multiplication, although multiplication with
-numbers, e.g.\ \isa{{\isadigit{2}}{\isaliteral{2A}{\isacharasterisk}}n}, is allowed. Such formulas can be proved by
-\isa{arith}:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ {\isaliteral{28}{\isacharparenleft}}a{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C6C653E}{\isasymle}}\ x\ {\isaliteral{2B}{\isacharplus}}\ b{\isaliteral{3B}{\isacharsemicolon}}\ {\isadigit{2}}{\isaliteral{2A}{\isacharasterisk}}x\ {\isaliteral{3C}{\isacharless}}\ c\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ {\isadigit{2}}{\isaliteral{2A}{\isacharasterisk}}a\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{1}}\ {\isaliteral{5C3C6C653E}{\isasymle}}\ {\isadigit{2}}{\isaliteral{2A}{\isacharasterisk}}b\ {\isaliteral{2B}{\isacharplus}}\ c{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{by}\isamarkupfalse%
-\ arith%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-In fact, \isa{auto} and \isa{simp} can prove many linear
-arithmetic formulas already, like the one above, by calling a weak but fast
-version of \isa{arith}. Hence it is usually not necessary to invoke
-\isa{arith} explicitly.
-
-The above example involves natural numbers, but integers (type \isa{int})
-and real numbers (type \isa{real}) are supported as well. As are a number
-of further operators like \isa{min} and \isa{max}. On \isa{nat} and
-\isa{int}, \isa{arith} can even prove theorems with quantifiers in them,
-but we will not enlarge on that here.
-
-
-\subsubsection{Trying them all}
-
-If you want to try all of the above automatic proof methods, you simply type
-\begin{isabelle}
-\isacom{try}
-\end{isabelle}
-You can also add specific simplification and introduction rules:
-\begin{isabelle}
-\isacom{try} \isa{simp{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ intro{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}}
-\end{isabelle}
-There is also a lightweight variant \isacom{try0} that does not call
-sledgehammer.
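-For example (both the lemma and the supplied rule are of our own choosing),
-\begin{quote}
-\isacom{lemma} \texttt{"[| (a::nat) <= b; b <= c |] ==> a <= c"}\\
-\isacom{try0} \texttt{intro: le\_trans}
-\end{quote}
-makes \isacom{try0} search for a proof while taking \texttt{le\_trans} into
-account as an additional introduction rule, and it reports which of the
-standard proof methods succeed.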
-
-\subsection{Single step proofs}
-
-Although automation is nice, it often fails, at least initially, and you need
-to find out why. When \isa{fastforce} or \isa{blast} simply fail, you have
-no clue why. At this point, the stepwise
-application of proof rules may be necessary. For example, if \isa{blast}
-fails on \isa{A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ B}, you want to attack the two
-conjuncts \isa{A} and \isa{B} separately. This can
-be achieved by applying \emph{conjunction introduction}
-\[ \isa{\mbox{}\inferrule{\mbox{{\isaliteral{3F}{\isacharquery}}P}\\\ \mbox{{\isaliteral{3F}{\isacharquery}}Q}}{\mbox{{\isaliteral{3F}{\isacharquery}}P\ {\isaliteral{5C3C616E643E}{\isasymand}}\ {\isaliteral{3F}{\isacharquery}}Q}}}\ \isa{conjI}
-\]
-to the proof state. We will now examine the details of this process.
-
-\subsubsection{Instantiating unknowns}
-
-We had briefly mentioned earlier that after proving some theorem,
-Isabelle replaces all free variables \isa{x} by so-called \concept{unknowns}
-\isa{{\isaliteral{3F}{\isacharquery}}x}. We can see this clearly in rule \isa{conjI}.
-These unknowns can later be instantiated explicitly or implicitly:
-\begin{itemize}
-\item By hand, using \isa{of}.
-The expression \isa{conjI{\isaliteral{5B}{\isacharbrackleft}}of\ {\isaliteral{22}{\isachardoublequote}}a{\isaliteral{3D}{\isacharequal}}b{\isaliteral{22}{\isachardoublequote}}\ {\isaliteral{22}{\isachardoublequote}}False{\isaliteral{22}{\isachardoublequote}}{\isaliteral{5D}{\isacharbrackright}}}
-instantiates the unknowns in \isa{conjI} from left to right with the
-two formulas \isa{a{\isaliteral{3D}{\isacharequal}}b} and \isa{False}, yielding the rule
-\begin{isabelle}%
-\mbox{}\inferrule{\mbox{a\ {\isaliteral{3D}{\isacharequal}}\ b}\\\ \mbox{False}}{\mbox{a\ {\isaliteral{3D}{\isacharequal}}\ b\ {\isaliteral{5C3C616E643E}{\isasymand}}\ False}}%
-\end{isabelle}
-
-In general, \isa{th{\isaliteral{5B}{\isacharbrackleft}}of\ string\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ string\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{5D}{\isacharbrackright}}} instantiates
-the unknowns in the theorem \isa{th} from left to right with the terms
-\isa{string\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}} to \isa{string\isaliteral{5C3C5E697375623E}{}\isactrlisub n}.
-
-\item By unification. \concept{Unification} is the process of making two
-terms syntactically equal by suitable instantiations of unknowns. For example,
-unifying \isa{{\isaliteral{3F}{\isacharquery}}P\ {\isaliteral{5C3C616E643E}{\isasymand}}\ {\isaliteral{3F}{\isacharquery}}Q} with \mbox{\isa{a\ {\isaliteral{3D}{\isacharequal}}\ b\ {\isaliteral{5C3C616E643E}{\isasymand}}\ False}} instantiates
-\isa{{\isaliteral{3F}{\isacharquery}}P} with \isa{a\ {\isaliteral{3D}{\isacharequal}}\ b} and \isa{{\isaliteral{3F}{\isacharquery}}Q} with \isa{False}.
-\end{itemize}
-We need not instantiate all unknowns. If we want to skip a particular one we
-can just write \isa{{\isaliteral{5F}{\isacharunderscore}}} instead, for example \isa{conjI{\isaliteral{5B}{\isacharbrackleft}}of\ {\isaliteral{5F}{\isacharunderscore}}\ {\isaliteral{22}{\isachardoublequote}}False{\isaliteral{22}{\isachardoublequote}}{\isaliteral{5D}{\isacharbrackright}}}.
-Unknowns can also be instantiated by name, for example
-\isa{conjI{\isaliteral{5B}{\isacharbrackleft}}where\ {\isaliteral{3F}{\isacharquery}}P\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{22}{\isachardoublequote}}a{\isaliteral{3D}{\isacharequal}}b{\isaliteral{22}{\isachardoublequote}}\ and\ {\isaliteral{3F}{\isacharquery}}Q\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{22}{\isachardoublequote}}False{\isaliteral{22}{\isachardoublequote}}{\isaliteral{5D}{\isacharbrackright}}}.
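-To inspect the result of such an instantiation you can display it with the
-\isacom{thm} command (which merely prints a theorem). For example (the two
-formulas below are an arbitrary choice of ours),
-\begin{quote}
-\isacom{thm} \texttt{conjI[where ?P = "x = (0::nat)" and ?Q = "True"]}
-\end{quote}
-prints the rule with \texttt{?P} and \texttt{?Q} instantiated accordingly.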
-
-
-\subsubsection{Rule application}
-
-\concept{Rule application} means applying a rule backwards to a proof state.
-For example, applying rule \isa{conjI} to a proof state
-\begin{quote}
-\isa{{\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ \ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ \ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ B}
-\end{quote}
-results in two subgoals, one for each premise of \isa{conjI}:
-\begin{quote}
-\isa{{\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ \ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ \ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A}\\
-\isa{{\isadigit{2}}{\isaliteral{2E}{\isachardot}}\ \ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ \ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ B}
-\end{quote}
-In general, the application of a rule \isa{{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{3B}{\isacharsemicolon}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}{\isaliteral{3B}{\isacharsemicolon}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub n\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A}
-to a subgoal \mbox{\isa{{\isaliteral{5C3C646F74733E}{\isasymdots}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ C}} proceeds in two steps:
-\begin{enumerate}
-\item
-Unify \isa{A} and \isa{C}, thus instantiating the unknowns in the rule.
-\item
-Replace the subgoal \isa{C} with \isa{n} new subgoals \isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}} to \isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub n}.
-\end{enumerate}
-This is the command to apply rule \isa{xyz}:
-\begin{quote}
-\isacom{apply}\isa{{\isaliteral{28}{\isacharparenleft}}rule\ xyz{\isaliteral{29}{\isacharparenright}}}
-\end{quote}
-This is also called \concept{backchaining} with rule \isa{xyz}.
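-For example (a throwaway lemma of our own), backchaining with \isa{conjI} in
-\begin{quote}
-\isacom{lemma} \texttt{"rev [] = [] \& (0::nat) <= 1"}\\
-\isacom{apply}\texttt{(rule conjI)}
-\end{quote}
-turns the single conjunctive goal into the two subgoals \texttt{rev [] = []}
-and \texttt{(0::nat) <= 1}, each of which \isa{simp} can finish.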
-
-\subsubsection{Introduction rules}
-
-Conjunction introduction (\isa{conjI}) is one example of a whole
-class of rules known as \concept{introduction rules}. They explain under which
-premises some logical construct can be introduced. Here are some further
-useful introduction rules:
-\[
-\inferrule*[right=\mbox{\isa{impI}}]{\mbox{\isa{{\isaliteral{3F}{\isacharquery}}P\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ {\isaliteral{3F}{\isacharquery}}Q}}}{\mbox{\isa{{\isaliteral{3F}{\isacharquery}}P\ {\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}\ {\isaliteral{3F}{\isacharquery}}Q}}}
-\qquad
-\inferrule*[right=\mbox{\isa{allI}}]{\mbox{\isa{{\isaliteral{5C3C416E643E}{\isasymAnd}}x{\isaliteral{2E}{\isachardot}}\ {\isaliteral{3F}{\isacharquery}}P\ x}}}{\mbox{\isa{{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x{\isaliteral{2E}{\isachardot}}\ {\isaliteral{3F}{\isacharquery}}P\ x}}}
-\]
-\[
-\inferrule*[right=\mbox{\isa{iffI}}]{\mbox{\isa{{\isaliteral{3F}{\isacharquery}}P\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ {\isaliteral{3F}{\isacharquery}}Q}} \\ \mbox{\isa{{\isaliteral{3F}{\isacharquery}}Q\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ {\isaliteral{3F}{\isacharquery}}P}}}
-  {\mbox{\isa{{\isaliteral{3F}{\isacharquery}}P\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{3F}{\isacharquery}}Q}}}
-\]
-These rules are part of the logical system of \concept{natural deduction}
-(e.g.\ \cite{HuthRyan}). Although we intentionally de-emphasize the basic rules
-of logic in favour of automatic proof methods that allow you to take bigger
-steps, these rules are helpful in locating where and why automation fails.
-When applied backwards, these rules decompose the goal:
-\begin{itemize}
-\item \isa{conjI} and \isa{iffI} split the goal into two subgoals,
-\item \isa{impI} moves the left-hand side of a HOL implication into the list of assumptions,
-\item and \isa{allI} removes a \isa{{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}} by turning the quantified variable into a fixed local variable of the subgoal.
-\end{itemize}
-Isabelle knows about these and a number of other introduction rules.
-The command
-\begin{quote}
-\isacom{apply} \isa{rule}
-\end{quote}
-automatically selects the appropriate rule for the current subgoal.
-
-You can also turn your own theorems into introduction rules by giving them
-the \isa{intro} attribute, analogous to the \isa{simp} attribute.  In
-that case \isa{blast}, \isa{fastforce} and (to a limited extent) \isa{auto} will automatically backchain with those theorems. The \isa{intro}
-attribute should be used with care because it increases the search space and
-can lead to nontermination.  Sometimes it is better to use it only in
-specific calls of \isa{blast} and friends. For example,
-\isa{le{\isaliteral{5F}{\isacharunderscore}}trans}, transitivity of \isa{{\isaliteral{5C3C6C653E}{\isasymle}}} on type \isa{nat},
-is not an introduction rule by default because of the disastrous effect
-on the search space, but can be useful in specific situations:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ {\isaliteral{28}{\isacharparenleft}}a{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C6C653E}{\isasymle}}\ b{\isaliteral{3B}{\isacharsemicolon}}\ b\ {\isaliteral{5C3C6C653E}{\isasymle}}\ c{\isaliteral{3B}{\isacharsemicolon}}\ c\ {\isaliteral{5C3C6C653E}{\isasymle}}\ d{\isaliteral{3B}{\isacharsemicolon}}\ d\ {\isaliteral{5C3C6C653E}{\isasymle}}\ e\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ a\ {\isaliteral{5C3C6C653E}{\isasymle}}\ e{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{by}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}blast\ intro{\isaliteral{3A}{\isacharcolon}}\ le{\isaliteral{5F}{\isacharunderscore}}trans{\isaliteral{29}{\isacharparenright}}%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Of course this is just an example and could be proved by \isa{arith}, too.
-
-\subsubsection{Forward proof}
-\label{sec:forward-proof}
-
-Forward proof means deriving new theorems from old theorems. We have already
-seen a very simple form of forward proof: the \isa{of} operator for
-instantiating unknowns in a theorem. The big brother of \isa{of} is \isa{OF} for applying one theorem to others. Given a theorem \isa{A\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ B} called
-\isa{r} and a theorem \isa{A{\isaliteral{27}{\isacharprime}}} called \isa{r{\isaliteral{27}{\isacharprime}}}, the theorem \isa{r{\isaliteral{5B}{\isacharbrackleft}}OF\ r{\isaliteral{27}{\isacharprime}}{\isaliteral{5D}{\isacharbrackright}}} is the result of applying \isa{r} to \isa{r{\isaliteral{27}{\isacharprime}}}, where \isa{r} should be viewed as a function taking a theorem \isa{A} and returning
-\isa{B}.  More precisely, \isa{A} and \isa{A{\isaliteral{27}{\isacharprime}}} are unified, thus
-instantiating the unknowns in \isa{B}, and the result is the instantiated
-\isa{B}. Of course, unification may also fail.
-\begin{warn}
-Application of rules to other rules operates in the forward direction: from
-the premises to the conclusion of the rule; application of rules to proof
-states operates in the backward direction, from the conclusion to the
-premises.
-\end{warn}
-
-In general \isa{r} can be of the form \isa{{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{3B}{\isacharsemicolon}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}{\isaliteral{3B}{\isacharsemicolon}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub n\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A}
-and there can be multiple argument theorems \isa{r\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}} to \isa{r\isaliteral{5C3C5E697375623E}{}\isactrlisub m}
-(with \isa{m\ {\isaliteral{5C3C6C653E}{\isasymle}}\ n}), in which case \isa{r{\isaliteral{5B}{\isacharbrackleft}}OF\ r\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ r\isaliteral{5C3C5E697375623E}{}\isactrlisub m{\isaliteral{5D}{\isacharbrackright}}} is obtained
-by unifying and thus proving \isa{A\isaliteral{5C3C5E697375623E}{}\isactrlisub i} with \isa{r\isaliteral{5C3C5E697375623E}{}\isactrlisub i}, \isa{i\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{1}}{\isaliteral{5C3C646F74733E}{\isasymdots}}m}.
-Here is an example, where \isa{refl} is the theorem
-\isa{{\isaliteral{3F}{\isacharquery}}t\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{3F}{\isacharquery}}t}:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{thm}\isamarkupfalse%
-\ conjI{\isaliteral{5B}{\isacharbrackleft}}OF\ refl{\isaliteral{5B}{\isacharbrackleft}}of\ {\isaliteral{22}{\isachardoublequoteopen}}a{\isaliteral{22}{\isachardoublequoteclose}}{\isaliteral{5D}{\isacharbrackright}}\ refl{\isaliteral{5B}{\isacharbrackleft}}of\ {\isaliteral{22}{\isachardoublequoteopen}}b{\isaliteral{22}{\isachardoublequoteclose}}{\isaliteral{5D}{\isacharbrackright}}{\isaliteral{5D}{\isacharbrackright}}%
-\begin{isamarkuptext}%
-yields the theorem \isa{a\ {\isaliteral{3D}{\isacharequal}}\ a\ {\isaliteral{5C3C616E643E}{\isasymand}}\ b\ {\isaliteral{3D}{\isacharequal}}\ b}.
-The command \isacom{thm} merely displays the result.
-
-Forward reasoning also makes sense in connection with proof states.
-Therefore \isa{blast}, \isa{fastforce} and \isa{auto} support a modifier
-\isa{dest} which instructs the proof method to use certain rules in a
-forward fashion. If \isa{r} is of the form \mbox{\isa{A\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ B}}, the modifier
-\mbox{\isa{dest{\isaliteral{3A}{\isacharcolon}}\ r}}
-allows proof search to reason forward with \isa{r}, i.e.\
-to replace an assumption \isa{A{\isaliteral{27}{\isacharprime}}}, where \isa{A{\isaliteral{27}{\isacharprime}}} unifies with \isa{A},
-with the correspondingly instantiated \isa{B}. For example, \isa{Suc{\isaliteral{5F}{\isacharunderscore}}leD} is the theorem \mbox{\isa{Suc\ m\ {\isaliteral{5C3C6C653E}{\isasymle}}\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ m\ {\isaliteral{5C3C6C653E}{\isasymle}}\ n}}, which works well for forward reasoning:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}Suc{\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc\ a{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C6C653E}{\isasymle}}\ b\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ a\ {\isaliteral{5C3C6C653E}{\isasymle}}\ b{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{by}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}blast\ dest{\isaliteral{3A}{\isacharcolon}}\ Suc{\isaliteral{5F}{\isacharunderscore}}leD{\isaliteral{29}{\isacharparenright}}%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-In this particular example we could have backchained with
-\isa{Suc{\isaliteral{5F}{\isacharunderscore}}leD}, too, but because the premise is more complicated than the conclusion this can easily lead to nontermination.
-
-\subsubsection{Finding theorems}
-
-Command \isacom{find\_theorems} searches for specific theorems in the current
-theory. Search criteria include pattern matching on terms and on names.
-For details see the Isabelle/Isar Reference Manual~\cite{IsarRef}.
-\bigskip
-
-\begin{warn}
-To ease readability we will drop the question marks
-in front of unknowns from now on.
-\end{warn}
-
-
-\section{Inductive definitions}
-\label{sec:inductive-defs}
-
-Inductive definitions are the third important definition facility, after
-datatypes and recursive functions.
-\sem
-In fact, they are the key construct in the
-definition of operational semantics in the second part of the book.
-\endsem
-
-\subsection{An example: even numbers}
-\label{sec:Logic:even}
-
-Here is a simple example of an inductively defined predicate:
-\begin{itemize}
-\item 0 is even
-\item If $n$ is even, so is $n+2$.
-\end{itemize}
-The operative word ``inductive'' means that these are the only even numbers.
-In Isabelle we give the two rules the names \isa{ev{\isadigit{0}}} and \isa{evSS}
-and write%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{inductive}\isamarkupfalse%
-\ ev\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}nat\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ bool{\isaliteral{22}{\isachardoublequoteclose}}\ \isakeyword{where}\isanewline
-ev{\isadigit{0}}{\isaliteral{3A}{\isacharcolon}}\ \ \ \ {\isaliteral{22}{\isachardoublequoteopen}}ev\ {\isadigit{0}}{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
-evSS{\isaliteral{3A}{\isacharcolon}}\ \ %
-\isa{{\isaliteral{22}{\isachardoublequote}}ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev\ {\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequote}}}
-%
-\begin{isamarkuptext}%
-To get used to inductive definitions, we will first prove a few
-properties of \isa{ev} informally before we descend to the Isabelle level.
-
-How do we prove that some number is even, e.g.\ \isa{ev\ {\isadigit{4}}}? Simply by combining the defining rules for \isa{ev}:
-\begin{quote}
-\isa{ev\ {\isadigit{0}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev\ {\isaliteral{28}{\isacharparenleft}}{\isadigit{0}}\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev{\isaliteral{28}{\isacharparenleft}}{\isaliteral{28}{\isacharparenleft}}{\isadigit{0}}\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ ev\ {\isadigit{4}}}
-\end{quote}
-
-\subsubsection{Rule induction}
-
-Showing that all even numbers have some property is more complicated.  For
-example, let us prove that the inductive definition of even numbers agrees
-with the following recursive one:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{fun}\isamarkupfalse%
-\ even\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}nat\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ bool{\isaliteral{22}{\isachardoublequoteclose}}\ \isakeyword{where}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}even\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ True{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}even\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ False{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}even\ {\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ even\ n{\isaliteral{22}{\isachardoublequoteclose}}%
-\begin{isamarkuptext}%
-We prove \isa{ev\ m\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ even\ m}.  That is, we
-assume \isa{ev\ m} and by induction on the form of its derivation
-prove \isa{even\ m}. There are two cases corresponding to the two rules
-for \isa{ev}:
-\begin{description}
-\item[Case \isa{ev{\isadigit{0}}}:]
- \isa{ev\ m} was derived by rule \isa{ev\ {\isadigit{0}}}: \\
- \isa{{\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}} \isa{m\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}} \isa{{\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}} \isa{even\ m\ {\isaliteral{3D}{\isacharequal}}\ even\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ True}
-\item[Case \isa{evSS}:]
- \isa{ev\ m} was derived by rule \isa{ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev\ {\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}}: \\
-\isa{{\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}} \isa{m\ {\isaliteral{3D}{\isacharequal}}\ n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}} and by induction hypothesis \isa{even\ n}\\
-\isa{{\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}} \isa{even\ m\ {\isaliteral{3D}{\isacharequal}}\ even{\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ even\ n\ {\isaliteral{3D}{\isacharequal}}\ True}
-\end{description}
-
-What we have just seen is a special case of \concept{rule induction}.
-Rule induction applies to propositions of this form
-\begin{quote}
-\isa{ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P\ n}
-\end{quote}
-That is, we want to prove a property \isa{P\ n}
-for all even \isa{n}. But if we assume \isa{ev\ n}, then there must be
-some derivation of this assumption using the two defining rules for
-\isa{ev}. That is, we must prove
-\begin{description}
-\item[Case \isa{ev{\isadigit{0}}}:] \isa{P\ {\isadigit{0}}}
-\item[Case \isa{evSS}:] \isa{{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}ev\ n{\isaliteral{3B}{\isacharsemicolon}}\ P\ n{\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P\ {\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}}
-\end{description}
-The corresponding rule is called \isa{ev{\isaliteral{2E}{\isachardot}}induct} and looks like this:
-\[
-\inferrule{
-\mbox{\isa{ev\ n}}\\
-\mbox{\isa{P\ {\isadigit{0}}}}\\
-\mbox{\isa{{\isaliteral{5C3C416E643E}{\isasymAnd}}n{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}ev\ n{\isaliteral{3B}{\isacharsemicolon}}\ P\ n{\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P\ {\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}}}}
-{\mbox{\isa{P\ n}}}
-\]
-The first premise \isa{ev\ n} enforces that this rule can only be applied
-in situations where we know that \isa{n} is even.
-
-Note that in the induction step we may not just assume \isa{P\ n} but also
-\mbox{\isa{ev\ n}}, which is simply the premise of rule \isa{evSS}.  Here is an example where the local assumption \isa{ev\ n} comes in
-handy: we prove \isa{ev\ m\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev\ {\isaliteral{28}{\isacharparenleft}}m\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}} by induction on \isa{ev\ m}.
-Case \isa{ev{\isadigit{0}}} requires us to prove \isa{ev\ {\isaliteral{28}{\isacharparenleft}}{\isadigit{0}}\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}}, which follows
-from \isa{ev\ {\isadigit{0}}} because \isa{{\isadigit{0}}\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}} on type \isa{nat}. In
-case \isa{evSS} we have \mbox{\isa{m\ {\isaliteral{3D}{\isacharequal}}\ n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}}} and may assume
-\isa{ev\ n}, which implies \isa{ev\ {\isaliteral{28}{\isacharparenleft}}m\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}} because \isa{m\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}\ {\isaliteral{3D}{\isacharequal}}\ n}. We did not need the induction hypothesis at all for this proof;
-it is just a case analysis of which rule was used, but having \isa{ev\ n} at our disposal in case \isa{evSS} was essential.
-This case analysis of rules is also called ``rule inversion''
-and is discussed in more detail in \autoref{ch:Isar}.
-
-\subsubsection{In Isabelle}
-
-Let us now recast the above informal proofs in Isabelle. For a start,
-we use \isa{Suc} terms instead of numerals in rule \isa{evSS}:
-\begin{isabelle}%
-ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}%
-\end{isabelle}
-This avoids the difficulty of unifying \isa{n{\isaliteral{2B}{\isacharplus}}{\isadigit{2}}} with some numeral,
-which is not automatic.
-
-The simplest way to prove \isa{ev\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}} is in a forward
-direction: \isa{evSS{\isaliteral{5B}{\isacharbrackleft}}OF\ evSS{\isaliteral{5B}{\isacharbrackleft}}OF\ ev{\isadigit{0}}{\isaliteral{5D}{\isacharbrackright}}{\isaliteral{5D}{\isacharbrackright}}} yields the theorem \isa{ev\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}}. Alternatively, you can also prove it as a lemma in backwards
-fashion. Although this is more verbose, it allows us to demonstrate how each
-rule application changes the proof state:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}ev{\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-%
-\begin{isamarkuptxt}%
-\begin{isabelle}%
-\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ ev\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}%
-\end{isabelle}%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}rule\ evSS{\isaliteral{29}{\isacharparenright}}%
-\begin{isamarkuptxt}%
-\begin{isabelle}%
-\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ ev\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isadigit{0}}{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}%
-\end{isabelle}%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}rule\ evSS{\isaliteral{29}{\isacharparenright}}%
-\begin{isamarkuptxt}%
-\begin{isabelle}%
-\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ ev\ {\isadigit{0}}%
-\end{isabelle}%
-\end{isamarkuptxt}%
-\isamarkuptrue%
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}rule\ ev{\isadigit{0}}{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{done}\isamarkupfalse%
-%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-\indent
-Rule induction is applied by giving the induction rule explicitly via the
-\isa{rule{\isaliteral{3A}{\isacharcolon}}} modifier:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}ev\ m\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ even\ m{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\isatagproof
-\isacommand{apply}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}induction\ rule{\isaliteral{3A}{\isacharcolon}}\ ev{\isaliteral{2E}{\isachardot}}induct{\isaliteral{29}{\isacharparenright}}\isanewline
-\isacommand{by}\isamarkupfalse%
-{\isaliteral{28}{\isacharparenleft}}simp{\isaliteral{5F}{\isacharunderscore}}all{\isaliteral{29}{\isacharparenright}}%
-\endisatagproof
-{\isafoldproof}%
-%
-\isadelimproof
-%
-\endisadelimproof
-%
-\begin{isamarkuptext}%
-Both cases are automatic. Note that if there are multiple assumptions
-of the form \isa{ev\ t}, method \isa{induction} will induct on the leftmost
-one.
-
-As a bonus, we also prove the remaining direction of the equivalence of
-\isa{ev} and \isa{even}:%
-\end{isamarkuptext}%
-\isamarkuptrue%
-\isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}even\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev\ n{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
-%
-\isadelimproof
-%