--- a/src/Doc/Tutorial/Misc/simp.thy Thu Jan 11 13:48:17 2018 +0100
+++ b/src/Doc/Tutorial/Misc/simp.thy Fri Jan 12 14:08:53 2018 +0100
@@ -2,9 +2,9 @@
theory simp imports Main begin
(*>*)
-subsection{*Simplification Rules*}
+subsection\<open>Simplification Rules\<close>
-text{*\index{simplification rules}
+text\<open>\index{simplification rules}
To facilitate simplification,
the attribute @{text"[simp]"}\index{*simp (attribute)}
declares theorems to be simplification rules, which the simplifier
@@ -49,11 +49,11 @@
different path) on $A$, it is not defined what the simplification attribute
of that theorem will be in $C$: it could be either.
\end{warn}
-*}
+\<close>
-subsection{*The {\tt\slshape simp} Method*}
+subsection\<open>The {\tt\slshape simp} Method\<close>
-text{*\index{*simp (method)|bold}
+text\<open>\index{*simp (method)|bold}
The general format of the simplification method is
\begin{quote}
@{text simp} \textit{list of modifiers}
@@ -65,11 +65,11 @@
only the first subgoal and may thus need to be repeated --- use
\methdx{simp_all} to simplify all subgoals.
If nothing changes, @{text simp} fails.
-*}
+\<close>
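text\<open>
As a minimal illustration (a toy goal, not one of the tutorial's running
examples): after splitting a conjunction with @{thm[source]conjI},
@{text simp} would simplify only the first of the two subgoals, whereas
@{text simp_all} disposes of both:
\<close>

lemma "rev [] = [] \<and> xs @ [] = xs"
apply(rule conjI)
apply(simp_all)
done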
-subsection{*Adding and Deleting Simplification Rules*}
+subsection\<open>Adding and Deleting Simplification Rules\<close>
-text{*
+text\<open>
\index{simplification rules!adding and deleting}%
If a theorem is needed in only a few proofs by simplification,
we do not need to make it a global simplification rule. Instead we can modify
@@ -88,41 +88,41 @@
\begin{quote}
\isacommand{apply}@{text"(simp add: mod_mult_distrib add_mult_distrib)"}
\end{quote}
-*}
+\<close>
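text\<open>
Deleting rules uses the same syntax. For instance, @{thm[source]rev_rev_ident},
the list lemma @{term"rev(rev xs) = xs"}, is a default simplification rule;
to keep the simplifier from using it in a single step one can write
\begin{quote}
\isacommand{apply}@{text"(simp del: rev_rev_ident)"}
\end{quote}
\<close>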
-subsection{*Assumptions*}
+subsection\<open>Assumptions\<close>
-text{*\index{simplification!with/of assumptions}
+text\<open>\index{simplification!with/of assumptions}
By default, assumptions are part of the simplification process: they are used
as simplification rules and are simplified themselves. For example:
-*}
+\<close>
lemma "\<lbrakk> xs @ zs = ys @ xs; [] @ xs = [] @ [] \<rbrakk> \<Longrightarrow> ys = zs"
apply simp
done
-text{*\noindent
+text\<open>\noindent
The second assumption simplifies to @{term"xs = []"}, which in turn
simplifies the first assumption to @{term"zs = ys"}, thus reducing the
conclusion to @{term"ys = ys"} and hence to @{term"True"}.
In some cases, using the assumptions can lead to nontermination:
-*}
+\<close>
lemma "\<forall>x. f x = g (f (g x)) \<Longrightarrow> f [] = f [] @ []"
-txt{*\noindent
+txt\<open>\noindent
An unmodified application of @{text"simp"} loops. The culprit is the
simplification rule @{term"f x = g (f (g x))"}, which is extracted from
the assumption. (Isabelle notices certain simple forms of
nontermination but not this one.) The problem can be circumvented by
telling the simplifier to ignore the assumptions:
-*}
+\<close>
apply(simp (no_asm))
done
-text{*\noindent
+text\<open>\noindent
Three modifiers influence the treatment of assumptions:
\begin{description}
\item[@{text"(no_asm)"}]\index{*no_asm (modifier)}
@@ -145,11 +145,11 @@
%positive, and from left to right, if $n$ is negative.
%Beware that such rotations make proofs quite brittle.
%\end{warn}
-*}
+\<close>
-subsection{*Rewriting with Definitions*}
+subsection\<open>Rewriting with Definitions\<close>
-text{*\label{sec:Simp-with-Defs}\index{simplification!with definitions}
+text\<open>\label{sec:Simp-with-Defs}\index{simplification!with definitions}
Constant definitions (\S\ref{sec:ConstDefinitions}) can be used as
simplification rules, but by default they are not: the simplifier does not
expand them automatically. Definitions are intended for introducing abstract
@@ -159,32 +159,32 @@
proofs more robust: if the definition has to be changed,
only the proofs of the abstract properties will be affected.
-For example, given *}
+For example, given\<close>
definition xor :: "bool \<Rightarrow> bool \<Rightarrow> bool" where
"xor A B \<equiv> (A \<and> \<not>B) \<or> (\<not>A \<and> B)"
-text{*\noindent
+text\<open>\noindent
we may want to prove
-*}
+\<close>
lemma "xor A (\<not>A)"
-txt{*\noindent
+txt\<open>\noindent
Typically, we begin by unfolding some definitions:
\indexbold{definitions!unfolding}
-*}
+\<close>
apply(simp only: xor_def)
-txt{*\noindent
+txt\<open>\noindent
In this particular case, the resulting goal
@{subgoals[display,indent=0]}
can be proved by simplification. Thus we could have proved the lemma outright by
-*}(*<*)oops lemma "xor A (\<not>A)"(*>*)
+\<close>(*<*)oops lemma "xor A (\<not>A)"(*>*)
apply(simp add: xor_def)
(*<*)done(*>*)
-text{*\noindent
+text\<open>\noindent
Of course we can also unfold definitions in the middle of a proof.
\begin{warn}
@@ -199,78 +199,78 @@
one or several definitions, as in \isacommand{apply}\isa{(unfold xor_def)}.
This can be useful in situations where \isa{simp} does too much.
Warning: \isa{unfold} acts on all subgoals!
-*}
+\<close>
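text\<open>
As a small sketch of @{text unfold} in action, commutativity of @{text xor}
can be proved by first unfolding the definition and then letting a
propositional prover finish the job:
\<close>

lemma "xor A B = xor B A"
apply(unfold xor_def)
apply(blast)
done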
-subsection{*Simplifying {\tt\slshape let}-Expressions*}
+subsection\<open>Simplifying {\tt\slshape let}-Expressions\<close>
-text{*\index{simplification!of \isa{let}-expressions}\index{*let expressions}%
+text\<open>\index{simplification!of \isa{let}-expressions}\index{*let expressions}%
Proving a goal containing \isa{let}-expressions almost invariably requires the
@{text"let"}-con\-structs to be expanded at some point. Since
@{text"let"}\ldots\isa{=}\ldots@{text"in"}{\ldots} is just syntactic sugar for
the predefined constant @{term"Let"}, expanding @{text"let"}-constructs
-means rewriting with \tdx{Let_def}: *}
+means rewriting with \tdx{Let_def}:\<close>
lemma "(let xs = [] in xs@ys@xs) = ys"
apply(simp add: Let_def)
done
-text{*
+text\<open>
If, in a particular context, there is no danger of a combinatorial explosion
of nested @{text"let"}s, you could even simplify with @{thm[source]Let_def} by
default:
-*}
+\<close>
declare Let_def [simp]
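text\<open>\noindent
With this declaration in place, the @{text let}-goal above no longer requires
the explicit @{text"add: Let_def"}; a plain @{text simp} suffices:
\<close>

lemma "(let xs = [] in xs@ys@xs) = ys"
apply(simp)
done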
-subsection{*Conditional Simplification Rules*}
+subsection\<open>Conditional Simplification Rules\<close>
-text{*
+text\<open>
\index{conditional simplification rules}%
So far, all our examples of rewrite rules were unconditional equations. The simplifier also
accepts \emph{conditional} equations, for example
-*}
+\<close>
lemma hd_Cons_tl[simp]: "xs \<noteq> [] \<Longrightarrow> hd xs # tl xs = xs"
apply(case_tac xs, simp, simp)
done
-text{*\noindent
+text\<open>\noindent
Note the use of ``\ttindexboldpos{,}{$Isar}'' to string together a
sequence of methods. Assuming that the simplification rule
@{term"(rev xs = []) = (xs = [])"}
is present as well,
the lemma below is proved by plain simplification:
-*}
+\<close>
lemma "xs \<noteq> [] \<Longrightarrow> hd(rev xs) # tl(rev xs) = rev xs"
(*<*)
by(simp)
(*>*)
-text{*\noindent
+text\<open>\noindent
The conditional equation @{thm[source]hd_Cons_tl} above
can simplify @{term"hd(rev xs) # tl(rev xs)"} to @{term"rev xs"}
because the corresponding precondition @{term"rev xs ~= []"}
simplifies to @{term"xs ~= []"}, which is exactly the local
assumption of the subgoal.
-*}
+\<close>
-subsection{*Automatic Case Splits*}
+subsection\<open>Automatic Case Splits\<close>
-text{*\label{sec:AutoCaseSplits}\indexbold{case splits}%
+text\<open>\label{sec:AutoCaseSplits}\indexbold{case splits}%
Goals containing @{text"if"}-expressions\index{*if expressions!splitting of}
are usually proved by case
distinction on the boolean condition. Here is an example:
-*}
+\<close>
lemma "\<forall>xs. if xs = [] then rev xs = [] else rev xs \<noteq> []"
-txt{*\noindent
+txt\<open>\noindent
The goal can be split by a special method, \methdx{split}:
-*}
+\<close>
apply(split if_split)
-txt{*\noindent
+txt\<open>\noindent
@{subgoals[display,indent=0]}
where \tdx{if_split} is a theorem that expresses splitting of
@{text"if"}s. Because
@@ -280,11 +280,11 @@
This splitting idea generalizes from @{text"if"} to \sdx{case}.
Let us simplify a case analysis over lists:\index{*list.split (theorem)}
-*}(*<*)by simp(*>*)
+\<close>(*<*)by simp(*>*)
lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs"
apply(split list.split)
-txt{*
+txt\<open>
@{subgoals[display,indent=0]}
The simplifier does not split
@{text"case"}-expressions, as it does @{text"if"}-expressions,
@@ -293,26 +293,26 @@
@{text split}\index{*split (modifier)}
for adding splitting rules explicitly. The
lemma above can be proved in one step by
-*}
+\<close>
(*<*)oops
lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs"
(*>*)
apply(simp split: list.split)
(*<*)done(*>*)
-text{*\noindent
+text\<open>\noindent
whereas \isacommand{apply}@{text"(simp)"} alone will not succeed.
Every datatype $t$ comes with a theorem
$t$@{text".split"} which can be declared to be a \bfindex{split rule} either
locally as above, or by giving it the \attrdx{split} attribute globally:
-*}
+\<close>
declare list.split [split]
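text\<open>\noindent
From now on such goals are split automatically; for example, the
@{text case}-lemma from above would now be provable by a plain
\isacommand{apply}@{text"(simp)"}, without any @{text"split:"} modifier:
\<close>

lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs"
apply(simp)
done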
-text{*\noindent
+text\<open>\noindent
The @{text"split"} attribute can be removed with the @{text"del"} modifier,
either locally
-*}
+\<close>
(*<*)
lemma "dummy=dummy"
(*>*)
@@ -320,12 +320,12 @@
(*<*)
oops
(*>*)
-text{*\noindent
+text\<open>\noindent
or globally:
-*}
+\<close>
declare list.split [split del]
-text{*
+text\<open>
Polished proofs typically perform splitting within @{text simp} rather than
invoking the @{text split} method. However, if a goal contains
several @{text"if"} and @{text"case"} expressions,
@@ -335,12 +335,12 @@
The split rules shown above are intended to affect only the subgoal's
conclusion. If you want to split an @{text"if"} or @{text"case"}-expression
in the assumptions, you have to apply \tdx{if_split_asm} or
-$t$@{text".split_asm"}: *}
+$t$@{text".split_asm"}:\<close>
lemma "if xs = [] then ys \<noteq> [] else ys = [] \<Longrightarrow> xs @ ys \<noteq> []"
apply(split if_split_asm)
-txt{*\noindent
+txt\<open>\noindent
Unlike splitting the conclusion, this step creates two
separate subgoals, which here can be solved by @{text"simp_all"}:
@{subgoals[display,indent=0]}
@@ -357,22 +357,22 @@
simplified at first, until either the expression reduces to one of the
cases or it is split.
\end{warn}
-*}
+\<close>
(*<*)
by(simp_all)
(*>*)
-subsection{*Tracing*}
-text{*\indexbold{tracing the simplifier}
+subsection\<open>Tracing\<close>
+text\<open>\indexbold{tracing the simplifier}
Using the simplifier effectively may take a bit of experimentation. Set the
Proof General flag \pgmenu{Isabelle} $>$ \pgmenu{Settings} $>$ \pgmenu{Trace Simplifier} to get a better idea of what is going on:
-*}
+\<close>
lemma "rev [a] = []"
apply(simp)
(*<*)oops(*>*)
-text{*\noindent
+text\<open>\noindent
produces the following trace in Proof General's \pgmenu{Trace} buffer:
\begin{ttbox}\makeatother
@@ -418,7 +418,7 @@
obtained the desired trace.
Since this is easily forgotten (and may have the unpleasant effect of
swamping the interface with trace information), here is how you can switch
-the trace on locally in a proof: *}
+the trace on locally in a proof:\<close>
(*<*)lemma "x=x"
(*>*)
@@ -426,14 +426,14 @@
apply simp
(*<*)oops(*>*)
-text{* \noindent
+text\<open>\noindent
Within the current proof, all simplifications in subsequent proof steps
will be traced, but the text reminds you to remove the \isa{using} clause
-after it has done its job. *}
+after it has done its job.\<close>
-subsection{*Finding Theorems\label{sec:find}*}
+subsection\<open>Finding Theorems\label{sec:find}\<close>
-text{*\indexbold{finding theorems}\indexbold{searching theorems}
+text\<open>\indexbold{finding theorems}\indexbold{searching theorems}
Isabelle's large database of proved theorems
offers a powerful search engine. Its chief limitation is
its restriction to the theories currently loaded.
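text\<open>
One way of searching from within the theory text itself is the
\isacommand{find_theorems} command; for example,
\isacommand{find_theorems}~\isa{"rev (rev _)"}
lists all loaded theorems whose statement contains an instance of that pattern.
\<close>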
@@ -512,7 +512,7 @@
through previous searches and just modify them. This saves you having
to type in lengthy expressions again and again.
\end{pgnote}
-*}
+\<close>
(*<*)
end
(*>*)