--- a/doc-src/TutorialI/Advanced/Partial.thy Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Advanced/Partial.thy Tue May 01 22:26:55 2001 +0200
@@ -1,14 +1,27 @@
(*<*)theory Partial = While_Combinator:(*>*)
-text{*\noindent
-Throughout the tutorial we have emphasized the fact that all functions
-in HOL are total. Hence we cannot hope to define truly partial
-functions. The best we can do are functions that are
-\emph{underdefined}\index{underdefined function}:
-for certain arguments we only know that a result
-exists, but we do not know what it is. When defining functions that are
-normally considered partial, underdefinedness turns out to be a very
-reasonable alternative.
+text{*\noindent Throughout the tutorial we have emphasized the fact
+that all functions in HOL are total. Hence we cannot hope to define
+truly partial functions but must totalize them. A straightforward
+totalization is to lift the result type of the function from $\tau$ to
+$\tau$~@{text option} (see \S\ref{sec:option}), where @{term None} is
+returned if the function is applied to an argument not in its
+domain. Function @{term assoc} in \S\ref{sec:Trie} is a simple example.
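+As a further illustration, a hypothetical division function @{text divo}
+that should be undefined for divisor @{text "0"} could be totalized as
+@{text[display]"divo m n \<equiv> if n = 0 then None else Some(m div n)"}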
+We do not pursue this scheme further because it should be clear
+how it works. Its main drawback is that the result of such a lifted
+function has to be unpacked before it can be processed further.
+Its main advantage is that you can distinguish whether the
+function was applied to an argument in its domain or not. If you do
+not need to make this distinction, for example because the function is
+never used outside its domain, it is easier to work with
+\emph{underdefined}\index{underdefined function} functions: for
+certain arguments we only know that a result exists, but we do not
+know what it is. When defining functions that are normally considered
+partial, underdefinedness turns out to be a very reasonable
+alternative.
We have already seen an instance of underdefinedness by means of
non-exhaustive pattern matching: the definition of @{term last} in
@@ -61,7 +71,7 @@
standard mathematical division function.
As a more substantial example we consider the problem of searching a graph.
-For simplicity our graph is given by a function (@{term f}) of
+For simplicity our graph is given by a function @{term f} of
type @{typ"'a \<Rightarrow> 'a"} which
maps each node to its successor, i.e.\ the graph is really a tree.
The task is to find the end of a chain, modelled by a node pointing to
@@ -90,9 +100,7 @@
text{*\noindent
The recursion equation itself should be clear enough: it is our aborted
first attempt augmented with a check that there are no non-trivial loops.
-
-What complicates the termination proof is that the argument of @{term find}
-is a pair. To express the required well-founded relation we employ the
+To express the required well-founded relation we employ the
predefined combinator @{term same_fst} of type
@{text[display]"('a \<Rightarrow> bool) \<Rightarrow> ('a \<Rightarrow> ('b\<times>'b)set) \<Rightarrow> (('a\<times>'b) \<times> ('a\<times>'b))set"}
defined as
@@ -177,8 +185,8 @@
by @{text auto} but falls to @{text simp}:
*}
-lemma lem: "\<lbrakk> wf(step1 f); x' = f x \<rbrakk> \<Longrightarrow>
- \<exists>y. while (\<lambda>(x,x'). x' \<noteq> x) (\<lambda>(x,x'). (x',f x')) (x,x') = (y,y) \<and>
+lemma lem: "wf(step1 f) \<Longrightarrow>
+ \<exists>y. while (\<lambda>(x,x'). x' \<noteq> x) (\<lambda>(x,x'). (x',f x')) (x,f x) = (y,y) \<and>
f y = y"
apply(rule_tac P = "\<lambda>(x,x'). x' = f x" and
r = "inv_image (step1 f) fst" in while_rule);
--- a/doc-src/TutorialI/Advanced/document/Partial.tex Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Advanced/document/Partial.tex Tue May 01 22:26:55 2001 +0200
@@ -3,15 +3,25 @@
\def\isabellecontext{Partial}%
%
\begin{isamarkuptext}%
-\noindent
-Throughout the tutorial we have emphasized the fact that all functions
-in HOL are total. Hence we cannot hope to define truly partial
-functions. The best we can do are functions that are
-\emph{underdefined}\index{underdefined function}:
-for certain arguments we only know that a result
-exists, but we do not know what it is. When defining functions that are
-normally considered partial, underdefinedness turns out to be a very
-reasonable alternative.
+\noindent Throughout the tutorial we have emphasized the fact
+that all functions in HOL are total. Hence we cannot hope to define
+truly partial functions but must totalize them. A straightforward
+totalization is to lift the result type of the function from $\tau$ to
+$\tau$~\isa{option} (see \S\ref{sec:option}), where \isa{None} is
+returned if the function is applied to an argument not in its
+domain. Function \isa{assoc} in \S\ref{sec:Trie} is a simple example.
+We do not pursue this scheme further because it should be clear
+how it works. Its main drawback is that the result of such a lifted
+function has to be unpacked before it can be processed further.
+Its main advantage is that you can distinguish whether the
+function was applied to an argument in its domain or not. If you do
+not need to make this distinction, for example because the function is
+never used outside its domain, it is easier to work with
+\emph{underdefined}\index{underdefined function} functions: for
+certain arguments we only know that a result exists, but we do not
+know what it is. When defining functions that are normally considered
+partial, underdefinedness turns out to be a very reasonable
+alternative.
We have already seen an instance of underdefinedness by means of
non-exhaustive pattern matching: the definition of \isa{last} in
@@ -64,7 +74,7 @@
standard mathematical division function.
As a more substantial example we consider the problem of searching a graph.
-For simplicity our graph is given by a function (\isa{f}) of
+For simplicity our graph is given by a function \isa{f} of
type \isa{{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a} which
maps each node to its successor, i.e.\ the graph is really a tree.
The task is to find the end of a chain, modelled by a node pointing to
@@ -93,9 +103,7 @@
\noindent
The recursion equation itself should be clear enough: it is our aborted
first attempt augmented with a check that there are no non-trivial loops.
-
-What complicates the termination proof is that the argument of \isa{find}
-is a pair. To express the required well-founded relation we employ the
+To express the required well-founded relation we employ the
predefined combinator \isa{same{\isacharunderscore}fst} of type
\begin{isabelle}%
\ \ \ \ \ {\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ bool{\isacharparenright}\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}b{\isasymtimes}{\isacharprime}b{\isacharparenright}set{\isacharparenright}\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharparenleft}{\isacharprime}a{\isasymtimes}{\isacharprime}b{\isacharparenright}\ {\isasymtimes}\ {\isacharparenleft}{\isacharprime}a{\isasymtimes}{\isacharprime}b{\isacharparenright}{\isacharparenright}set%
@@ -188,8 +196,8 @@
Only the final premise of \isa{while{\isacharunderscore}rule} is left unproved
by \isa{auto} but falls to \isa{simp}:%
\end{isamarkuptext}%
-\isacommand{lemma}\ lem{\isacharcolon}\ {\isachardoublequote}{\isasymlbrakk}\ wf{\isacharparenleft}step{\isadigit{1}}\ f{\isacharparenright}{\isacharsemicolon}\ x{\isacharprime}\ {\isacharequal}\ f\ x\ {\isasymrbrakk}\ {\isasymLongrightarrow}\ \isanewline
-\ \ \ {\isasymexists}y{\isachardot}\ while\ {\isacharparenleft}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}{\isachardot}\ x{\isacharprime}\ {\isasymnoteq}\ x{\isacharparenright}\ {\isacharparenleft}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}{\isachardot}\ {\isacharparenleft}x{\isacharprime}{\isacharcomma}f\ x{\isacharprime}{\isacharparenright}{\isacharparenright}\ {\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}y{\isacharcomma}y{\isacharparenright}\ {\isasymand}\isanewline
+\isacommand{lemma}\ lem{\isacharcolon}\ {\isachardoublequote}wf{\isacharparenleft}step{\isadigit{1}}\ f{\isacharparenright}\ {\isasymLongrightarrow}\isanewline
+\ \ {\isasymexists}y{\isachardot}\ while\ {\isacharparenleft}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}{\isachardot}\ x{\isacharprime}\ {\isasymnoteq}\ x{\isacharparenright}\ {\isacharparenleft}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}{\isachardot}\ {\isacharparenleft}x{\isacharprime}{\isacharcomma}f\ x{\isacharprime}{\isacharparenright}{\isacharparenright}\ {\isacharparenleft}x{\isacharcomma}f\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}y{\isacharcomma}y{\isacharparenright}\ {\isasymand}\isanewline
\ \ \ \ \ \ \ f\ y\ {\isacharequal}\ y{\isachardoublequote}\isanewline
\isacommand{apply}{\isacharparenleft}rule{\isacharunderscore}tac\ P\ {\isacharequal}\ {\isachardoublequote}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}{\isachardot}\ x{\isacharprime}\ {\isacharequal}\ f\ x{\isachardoublequote}\ \isakeyword{and}\isanewline
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ r\ {\isacharequal}\ {\isachardoublequote}inv{\isacharunderscore}image\ {\isacharparenleft}step{\isadigit{1}}\ f{\isacharparenright}\ fst{\isachardoublequote}\ \isakeyword{in}\ while{\isacharunderscore}rule{\isacharparenright}\isanewline
--- a/doc-src/TutorialI/CTL/CTLind.thy Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/CTL/CTLind.thy Tue May 01 22:26:55 2001 +0200
@@ -59,9 +59,9 @@
and that the instantiated induction hypothesis implies the conclusion.
Now we come to the key lemma. Assuming that no infinite @{term A}-avoiding
-path starts from @{term s}, we want to show @{prop"s \<in> lfp(af A)"}. This
-can be generalized by proving that every point @{term t} ``between''
-@{term s} and @{term A}, i.e.\ all of @{term"Avoid s A"},
+path starts from @{term s}, we want to show @{prop"s \<in> lfp(af A)"}. For the
+inductive proof this must be generalized to the statement that every point @{term t}
+``between'' @{term s} and @{term A}, i.e.\ all of @{term"Avoid s A"},
is contained in @{term"lfp(af A)"}:
*}
@@ -79,7 +79,7 @@
The formal counterpart of this proof sketch is a well-founded induction
w.r.t.\ @{term M} restricted to (roughly speaking) @{term"Avoid s A - A"}:
@{term[display]"{(y,x). (x,y) \<in> M \<and> x \<in> Avoid s A \<and> x \<notin> A}"}
-As we shall see in a moment, the absence of infinite @{term A}-avoiding paths
+As we shall see presently, the absence of infinite @{term A}-avoiding paths
starting from @{term s} implies well-foundedness of this relation. For the
moment we assume this and proceed with the induction:
*}
--- a/doc-src/TutorialI/CTL/document/CTLind.tex Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/CTL/document/CTLind.tex Tue May 01 22:26:55 2001 +0200
@@ -59,9 +59,9 @@
and that the instantiated induction hypothesis implies the conclusion.
Now we come to the key lemma. Assuming that no infinite \isa{A}-avoiding
-path starts from \isa{s}, we want to show \isa{s\ {\isasymin}\ lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. This
-can be generalized by proving that every point \isa{t} ``between''
-\isa{s} and \isa{A}, i.e.\ all of \isa{Avoid\ s\ A},
+path starts from \isa{s}, we want to show \isa{s\ {\isasymin}\ lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. For the
+inductive proof this must be generalized to the statement that every point \isa{t}
+``between'' \isa{s} and \isa{A}, i.e.\ all of \isa{Avoid\ s\ A},
is contained in \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}:%
\end{isamarkuptext}%
\isacommand{lemma}\ Avoid{\isacharunderscore}in{\isacharunderscore}lfp{\isacharbrackleft}rule{\isacharunderscore}format{\isacharparenleft}no{\isacharunderscore}asm{\isacharparenright}{\isacharbrackright}{\isacharcolon}\isanewline
@@ -79,7 +79,7 @@
\begin{isabelle}%
\ \ \ \ \ {\isacharbraceleft}{\isacharparenleft}y{\isacharcomma}\ x{\isacharparenright}{\isachardot}\ {\isacharparenleft}x{\isacharcomma}\ y{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\ x\ {\isasymin}\ Avoid\ s\ A\ {\isasymand}\ x\ {\isasymnotin}\ A{\isacharbraceright}%
\end{isabelle}
-As we shall see in a moment, the absence of infinite \isa{A}-avoiding paths
+As we shall see presently, the absence of infinite \isa{A}-avoiding paths
starting from \isa{s} implies well-foundedness of this relation. For the
moment we assume this and proceed with the induction:%
\end{isamarkuptxt}%
--- a/doc-src/TutorialI/Misc/AdvancedInd.thy Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Misc/AdvancedInd.thy Tue May 01 22:26:55 2001 +0200
@@ -86,7 +86,9 @@
you want to induct on a whole term, rather than an individual variable. In
general, when inducting on some term $t$ you must rephrase the conclusion $C$
as
-\[ \forall y@1 \dots y@n.~ x = t \longrightarrow C \]
+\begin{equation}\label{eqn:ind-over-term}
+\forall y@1 \dots y@n.~ x = t \longrightarrow C
+\end{equation}
where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is new, and
perform induction on $x$ afterwards. An example appears in
\S\ref{sec:complete-ind} below.
@@ -101,7 +103,24 @@
Of course, all premises that share free variables with $t$ need to be pulled into
the conclusion as well, under the @{text"\<forall>"}, again using @{text"\<longrightarrow>"} as shown above.
-*};
+
+Readers who are puzzled by the form of statement
+(\ref{eqn:ind-over-term}) above should remember that the
+transformation is only performed to permit induction. Once induction
+has been applied, the statement can be transformed back into something quite
+intuitive. For example, applying well-founded induction on $x$ (w.r.t.\
+$\prec$) to (\ref{eqn:ind-over-term}) and transforming the result a
+little leads to the goal
+\[ \bigwedge\vec{y}.~\forall \vec{z}.~ t(\vec{z}) \prec t(\vec{y}) \longrightarrow C(\vec{z})
+ \Longrightarrow C(\vec{y}) \]
+where the dependence of $t$ and $C$ on their free variables has been made explicit.
+Unfortunately, this induction schema cannot be expressed as a single theorem because it depends
+on the number of free variables in $t$ --- the notation $\vec{y}$ is merely an informal device.
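+In the unary case $t(y) = f\,y$, for example, the schema reads
+\[ \bigwedge y.~\forall z.~ f\,z \prec f\,y \longrightarrow C(z)
+ \Longrightarrow C(y) \]
+which is just well-founded induction on the value $f\,y$.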
+*}
subsection{*Beyond Structural and Recursion Induction*};
--- a/doc-src/TutorialI/Misc/document/AdvancedInd.tex Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Misc/document/AdvancedInd.tex Tue May 01 22:26:55 2001 +0200
@@ -79,7 +79,9 @@
you want to induct on a whole term, rather than an individual variable. In
general, when inducting on some term $t$ you must rephrase the conclusion $C$
as
-\[ \forall y@1 \dots y@n.~ x = t \longrightarrow C \]
+\begin{equation}\label{eqn:ind-over-term}
+\forall y@1 \dots y@n.~ x = t \longrightarrow C
+\end{equation}
where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is new, and
perform induction on $x$ afterwards. An example appears in
\S\ref{sec:complete-ind} below.
@@ -93,7 +95,20 @@
For an example see \S\ref{sec:CTL-revisited} below.
Of course, all premises that share free variables with $t$ need to be pulled into
-the conclusion as well, under the \isa{{\isasymforall}}, again using \isa{{\isasymlongrightarrow}} as shown above.%
+the conclusion as well, under the \isa{{\isasymforall}}, again using \isa{{\isasymlongrightarrow}} as shown above.
+
+Readers who are puzzled by the form of statement
+(\ref{eqn:ind-over-term}) above should remember that the
+transformation is only performed to permit induction. Once induction
+has been applied, the statement can be transformed back into something quite
+intuitive. For example, applying well-founded induction on $x$ (w.r.t.\
+$\prec$) to (\ref{eqn:ind-over-term}) and transforming the result a
+little leads to the goal
+\[ \bigwedge\vec{y}.~\forall \vec{z}.~ t(\vec{z}) \prec t(\vec{y}) \longrightarrow C(\vec{z})
+ \Longrightarrow C(\vec{y}) \]
+where the dependence of $t$ and $C$ on their free variables has been made explicit.
+Unfortunately, this induction schema cannot be expressed as a single theorem because it depends
+on the number of free variables in $t$ --- the notation $\vec{y}$ is merely an informal device.%
\end{isamarkuptext}%
%
\isamarkupsubsection{Beyond Structural and Recursion Induction%
--- a/doc-src/TutorialI/Recdef/Nested1.thy Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Recdef/Nested1.thy Tue May 01 22:26:55 2001 +0200
@@ -3,7 +3,7 @@
(*>*)
text{*\noindent
-Although the definition of @{term trev} is quite natural, we will have
+Although the definition of @{term trev} below is quite natural, we will have
to overcome a minor difficulty in convincing Isabelle of its termination.
It is precisely this difficulty that is the \textit{raison d'\^etre} of
this subsection.
--- a/doc-src/TutorialI/Recdef/Nested2.thy Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Recdef/Nested2.thy Tue May 01 22:26:55 2001 +0200
@@ -2,7 +2,7 @@
theory Nested2 = Nested0:
(*>*)
-text{*The termintion condition is easily proved by induction:*}
+text{*The termination condition is easily proved by induction:*}
lemma [simp]: "t \<in> set ts \<longrightarrow> size t < Suc(term_list_size ts)"
by(induct_tac ts, auto)
@@ -61,12 +61,12 @@
\isacommand{recdef} has been supplied with the congruence theorem
@{thm[source]map_cong}:
@{thm[display,margin=50]"map_cong"[no_vars]}
-Its second premise expresses (indirectly) that the second argument of
-@{term"map"} is only applied to elements of its third argument. Congruence
+Its second premise expresses (indirectly) that the first argument of
+@{term"map"} is only applied to elements of its second argument. Congruence
rules for other higher-order functions on lists look very similar. If you get
into a situation where you need to supply \isacommand{recdef} with new
-congruence rules, you can either append a hint locally
-to the specific occurrence of \isacommand{recdef}
+congruence rules, you can either append a hint after the end of
+the recursion equations
*}
(*<*)
consts dummy :: "nat => nat"
--- a/doc-src/TutorialI/Recdef/document/Nested1.tex Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Recdef/document/Nested1.tex Tue May 01 22:26:55 2001 +0200
@@ -4,7 +4,7 @@
%
\begin{isamarkuptext}%
\noindent
-Although the definition of \isa{trev} is quite natural, we will have
+Although the definition of \isa{trev} below is quite natural, we will have
to overcome a minor difficulty in convincing Isabelle of its termination.
It is precisely this difficulty that is the \textit{raison d'\^etre} of
this subsection.
--- a/doc-src/TutorialI/Recdef/document/Nested2.tex Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Recdef/document/Nested2.tex Tue May 01 22:26:55 2001 +0200
@@ -3,7 +3,7 @@
\def\isabellecontext{Nested{\isadigit{2}}}%
%
\begin{isamarkuptext}%
-The termintion condition is easily proved by induction:%
+The termination condition is easily proved by induction:%
\end{isamarkuptext}%
\isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}t\ {\isasymin}\ set\ ts\ {\isasymlongrightarrow}\ size\ t\ {\isacharless}\ Suc{\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}{\isachardoublequote}\isanewline
\isacommand{by}{\isacharparenleft}induct{\isacharunderscore}tac\ ts{\isacharcomma}\ auto{\isacharparenright}%
@@ -63,12 +63,12 @@
\ \ \ \ \ {\isasymlbrakk}xs\ {\isacharequal}\ ys{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ x\ {\isasymin}\ set\ ys\ {\isasymLongrightarrow}\ f\ x\ {\isacharequal}\ g\ x{\isasymrbrakk}\isanewline
\isaindent{\ \ \ \ \ }{\isasymLongrightarrow}\ map\ f\ xs\ {\isacharequal}\ map\ g\ ys%
\end{isabelle}
-Its second premise expresses (indirectly) that the second argument of
-\isa{map} is only applied to elements of its third argument. Congruence
+Its second premise expresses (indirectly) that the first argument of
+\isa{map} is only applied to elements of its second argument. Congruence
rules for other higher-order functions on lists look very similar. If you get
into a situation where you need to supply \isacommand{recdef} with new
-congruence rules, you can either append a hint locally
-to the specific occurrence of \isacommand{recdef}%
+congruence rules, you can either append a hint after the end of
+the recursion equations%
\end{isamarkuptext}%
{\isacharparenleft}\isakeyword{hints}\ recdef{\isacharunderscore}cong{\isacharcolon}\ map{\isacharunderscore}cong{\isacharparenright}%
\begin{isamarkuptext}%
--- a/doc-src/TutorialI/Types/Overloading1.thy Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Types/Overloading1.thy Tue May 01 22:26:55 2001 +0200
@@ -4,8 +4,8 @@
text{*
We now start with the theory of ordering relations, which we want to phrase
-in terms of the two binary symbols @{text"<<"} and @{text"<<="}: they have
-been chosen to avoid clashes with @{text"<"} and @{text"\<le>"} in theory @{text
+in terms of the two binary symbols @{text"<<"} and @{text"<<="}
+to avoid clashes with @{text"<"} and @{text"\<le>"} in theory @{text
Main}. To restrict the application of @{text"<<"} and @{text"<<="} we
introduce the class @{text ordrel}:
*}
@@ -15,7 +15,7 @@
text{*\noindent
This introduces a new class @{text ordrel} and makes it a subclass of
the predefined class @{text term}\footnote{The quotes around @{text term}
-simply avoid the clash with the command \isacommand{term}.}; @{text term}
+simply avoid the clash with the command \isacommand{term}.} --- @{text term}
is the class of all HOL types, like ``Object'' in Java.
This is a degenerate form of axiomatic type class without any axioms.
Its sole purpose is to restrict the use of overloaded constants to meaningful
--- a/doc-src/TutorialI/Types/Overloading2.thy Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Types/Overloading2.thy Tue May 01 22:26:55 2001 +0200
@@ -34,6 +34,6 @@
\end{tabular}
\end{center}
And analogously for @{text"<"} instead of @{text"\<le>"}.
-The form on the left is translated into the one on the right upon input but it is not
-translated back upon output.
+The form on the left is translated into the one on the right upon input.
+For technical reasons, it is not translated back upon output.
*}(*<*)end(*>*)
--- a/doc-src/TutorialI/Types/Pairs.thy Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Types/Pairs.thy Tue May 01 22:26:55 2001 +0200
@@ -109,9 +109,9 @@
txt{*
@{subgoals[display,indent=0]}
Again, simplification produces a term suitable for @{thm[source]split_split}
-as above. If you are worried about the funny form of the premise:
-@{term"split (op =)"} is the same as @{text"\<lambda>(x,y). x=y"}.
-The same procedure works for
+as above. If you are worried about the strange form of the premise:
+@{term"split (op =)"} is short for @{text"\<lambda>(x,y). x=y"}.
+The same proof procedure works for
*}
(*<*)by(simp split:split_split)(*>*)
--- a/doc-src/TutorialI/Types/Typedef.thy Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Types/Typedef.thy Tue May 01 22:26:55 2001 +0200
@@ -26,10 +26,9 @@
This does not define @{typ my_new_type} at all but merely introduces its
name. Thus we know nothing about this type, except that it is
non-empty. Such declarations without definitions are
-useful only if that type can be viewed as a parameter of a theory and we do
-not intend to impose any restrictions on it. A typical example is given in
-\S\ref{sec:VMC}, where we define transition relations over an arbitrary type
-of states without any internal structure.
+useful if that type can be viewed as a parameter of the theory.
+A typical example is given in \S\ref{sec:VMC}, where we define a transition
+relation over an arbitrary type of states.
In principle we can always get rid of such type declarations by making those
types parameters of every other type, thus keeping the theory generic. In
--- a/doc-src/TutorialI/Types/document/Overloading1.tex Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Overloading1.tex Tue May 01 22:26:55 2001 +0200
@@ -7,8 +7,8 @@
%
\begin{isamarkuptext}%
We now start with the theory of ordering relations, which we want to phrase
-in terms of the two binary symbols \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}}: they have
-been chosen to avoid clashes with \isa{{\isacharless}} and \isa{{\isasymle}} in theory \isa{Main}. To restrict the application of \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} we
+in terms of the two binary symbols \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}}
+to avoid clashes with \isa{{\isacharless}} and \isa{{\isasymle}} in theory \isa{Main}. To restrict the application of \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} we
introduce the class \isa{ordrel}:%
\end{isamarkuptext}%
\isacommand{axclass}\ ordrel\ {\isacharless}\ {\isachardoublequote}term{\isachardoublequote}%
@@ -16,7 +16,7 @@
\noindent
This introduces a new class \isa{ordrel} and makes it a subclass of
the predefined class \isa{term}\footnote{The quotes around \isa{term}
-simply avoid the clash with the command \isacommand{term}.}; \isa{term}
+simply avoid the clash with the command \isacommand{term}.} --- \isa{term}
is the class of all HOL types, like ``Object'' in Java.
This is a degenerate form of axiomatic type class without any axioms.
Its sole purpose is to restrict the use of overloaded constants to meaningful
--- a/doc-src/TutorialI/Types/document/Overloading2.tex Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Overloading2.tex Tue May 01 22:26:55 2001 +0200
@@ -36,8 +36,8 @@
\end{tabular}
\end{center}
And analogously for \isa{{\isacharless}} instead of \isa{{\isasymle}}.
-The form on the left is translated into the one on the right upon input but it is not
-translated back upon output.%
+The form on the left is translated into the one on the right upon input.
+For technical reasons, it is not translated back upon output.%
\end{isamarkuptext}%
\end{isabellebody}%
%%% Local Variables:
--- a/doc-src/TutorialI/Types/document/Pairs.tex Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Pairs.tex Tue May 01 22:26:55 2001 +0200
@@ -107,9 +107,9 @@
\ {\isadigit{1}}{\isachardot}\ split\ op\ {\isacharequal}\ p\ {\isasymlongrightarrow}\ fst\ p\ {\isacharequal}\ snd\ p%
\end{isabelle}
Again, simplification produces a term suitable for \isa{split{\isacharunderscore}split}
-as above. If you are worried about the funny form of the premise:
-\isa{split\ op\ {\isacharequal}} is the same as \isa{{\isasymlambda}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}\ x{\isacharequal}y}.
-The same procedure works for%
+as above. If you are worried about the strange form of the premise:
+\isa{split\ op\ {\isacharequal}} is short for \isa{{\isasymlambda}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}\ x{\isacharequal}y}.
+The same proof procedure works for%
\end{isamarkuptxt}%
\isacommand{lemma}\ {\isachardoublequote}p\ {\isasymin}\ {\isacharbraceleft}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}\ x{\isacharequal}y{\isacharbraceright}\ {\isasymLongrightarrow}\ fst\ p\ {\isacharequal}\ snd\ p{\isachardoublequote}%
\begin{isamarkuptxt}%
--- a/doc-src/TutorialI/Types/document/Typedef.tex Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Typedef.tex Tue May 01 22:26:55 2001 +0200
@@ -32,10 +32,9 @@
This does not define \isa{my{\isacharunderscore}new{\isacharunderscore}type} at all but merely introduces its
name. Thus we know nothing about this type, except that it is
non-empty. Such declarations without definitions are
-useful only if that type can be viewed as a parameter of a theory and we do
-not intend to impose any restrictions on it. A typical example is given in
-\S\ref{sec:VMC}, where we define transition relations over an arbitrary type
-of states without any internal structure.
+useful if that type can be viewed as a parameter of the theory.
+A typical example is given in \S\ref{sec:VMC}, where we define a transition
+relation over an arbitrary type of states.
In principle we can always get rid of such type declarations by making those
types parameters of every other type, thus keeping the theory generic. In
--- a/doc-src/TutorialI/Types/types.tex Tue May 01 17:16:32 2001 +0200
+++ b/doc-src/TutorialI/Types/types.tex Tue May 01 22:26:55 2001 +0200
@@ -2,7 +2,7 @@
\label{ch:more-types}
So far we have learned about a few basic types (for example \isa{bool} and
-\isa{nat}), type abbreviations (\isacommand{types}) and recursive datatpes
+\isa{nat}), type abbreviations (\isacommand{types}) and recursive datatypes
(\isacommand{datatype}). This chapter will introduce more
advanced material:
\begin{itemize}
@@ -33,13 +33,15 @@
\section{Records}
\label{sec:records}
-\section{Axiomatic type classes}
+\section{Axiomatic Type Classes}
\label{sec:axclass}
\index{axiomatic type class|(}
\index{*axclass|(}
The programming language Haskell has popularized the notion of type classes.
+In its simplest form, a type class is a set of types with a common interface:
+all types in that class must provide the functions in the interface.
Isabelle offers the related concept of an \textbf{axiomatic type class}.
Roughly speaking, an axiomatic type class is a type class with axioms, i.e.\
an axiomatic specification of a class of types. Thus we can talk about a type