--- a/src/HOL/Isar_Examples/Basic_Logic.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Basic_Logic.thy Mon Nov 02 13:58:19 2015 +0100
@@ -13,10 +13,9 @@
subsection \<open>Pure backward reasoning\<close>
-text \<open>In order to get a first idea of how Isabelle/Isar proof
- documents may look like, we consider the propositions @{text I},
- @{text K}, and @{text S}. The following (rather explicit) proofs
- should require little extra explanations.\<close>
+text \<open>In order to get a first idea of what Isabelle/Isar proof documents may
+  look like, we consider the propositions \<open>I\<close>, \<open>K\<close>, and \<open>S\<close>. The following
+  (rather explicit) proofs should require little extra explanation.\<close>
lemma I: "A \<longrightarrow> A"
proof
@@ -51,14 +50,12 @@
qed
qed
-text \<open>Isar provides several ways to fine-tune the reasoning,
- avoiding excessive detail. Several abbreviated language elements
- are available, enabling the writer to express proofs in a more
- concise way, even without referring to any automated proof tools
- yet.
+text \<open>Isar provides several ways to fine-tune the reasoning, avoiding
+ excessive detail. Several abbreviated language elements are available,
+ enabling the writer to express proofs in a more concise way, even without
+ referring to any automated proof tools yet.
- First of all, proof by assumption may be abbreviated as a single
- dot.\<close>
+ First of all, proof by assumption may be abbreviated as a single dot.\<close>
lemma "A \<longrightarrow> A"
proof
@@ -66,21 +63,21 @@
show A by fact+
qed
-text \<open>In fact, concluding any (sub-)proof already involves solving
- any remaining goals by assumption\footnote{This is not a completely
- trivial operation, as proof by assumption may involve full
- higher-order unification.}. Thus we may skip the rather vacuous
- body of the above proof as well.\<close>
+text \<open>In fact, concluding any (sub-)proof already involves solving any
+ remaining goals by assumption\footnote{This is not a completely trivial
+ operation, as proof by assumption may involve full higher-order
+ unification.}. Thus we may skip the rather vacuous body of the above proof
+ as well.\<close>
lemma "A \<longrightarrow> A"
proof
qed
-text \<open>Note that the \isacommand{proof} command refers to the @{text
- rule} method (without arguments) by default. Thus it implicitly
- applies a single rule, as determined from the syntactic form of the
- statements involved. The \isacommand{by} command abbreviates any
- proof with empty body, so the proof may be further pruned.\<close>
+text \<open>Note that the \isacommand{proof} command refers to the \<open>rule\<close> method
+ (without arguments) by default. Thus it implicitly applies a single rule,
+ as determined from the syntactic form of the statements involved. The
+ \isacommand{by} command abbreviates any proof with empty body, so the
+ proof may be further pruned.\<close>
lemma "A \<longrightarrow> A"
by rule
@@ -89,19 +86,18 @@
lemma "A \<longrightarrow> A" ..
-text \<open>Thus we have arrived at an adequate representation of the
- proof of a tautology that holds by a single standard
- rule.\footnote{Apparently, the rule here is implication
- introduction.}\<close>
+text \<open>Thus we have arrived at an adequate representation of the proof of a
+ tautology that holds by a single standard rule.\footnote{Apparently, the
+ rule here is implication introduction.}
-text \<open>Let us also reconsider @{text K}. Its statement is composed
- of iterated connectives. Basic decomposition is by a single rule at
- a time, which is why our first version above was by nesting two
- proofs.
+ \<^medskip>
+ Let us also reconsider \<open>K\<close>. Its statement is composed of iterated
+ connectives. Basic decomposition is by a single rule at a time, which is
+ why our first version above was by nesting two proofs.
- The @{text intro} proof method repeatedly decomposes a goal's
- conclusion.\footnote{The dual method is @{text elim}, acting on a
- goal's premises.}\<close>
+ The \<open>intro\<close> proof method repeatedly decomposes a goal's
+ conclusion.\footnote{The dual method is \<open>elim\<close>, acting on a goal's
+ premises.}\<close>
lemma "A \<longrightarrow> B \<longrightarrow> A"
proof (intro impI)
@@ -114,29 +110,27 @@
lemma "A \<longrightarrow> B \<longrightarrow> A"
by (intro impI)
-text \<open>Just like @{text rule}, the @{text intro} and @{text elim}
- proof methods pick standard structural rules, in case no explicit
- arguments are given. While implicit rules are usually just fine for
- single rule application, this may go too far with iteration. Thus
- in practice, @{text intro} and @{text elim} would be typically
- restricted to certain structures by giving a few rules only, e.g.\
- \isacommand{proof}~@{text "(intro impI allI)"} to strip implications
- and universal quantifiers.
+text \<open>Just like \<open>rule\<close>, the \<open>intro\<close> and \<open>elim\<close> proof methods pick standard
+ structural rules, in case no explicit arguments are given. While implicit
+ rules are usually just fine for single rule application, this may go too
+  far with iteration. Thus in practice, \<open>intro\<close> and \<open>elim\<close> would
+  typically be restricted to certain structures by giving a few rules only,
+ e.g.\ \isacommand{proof}~\<open>(intro impI allI)\<close> to strip implications and
+ universal quantifiers.
- Such well-tuned iterated decomposition of certain structures is the
- prime application of @{text intro} and @{text elim}. In contrast,
- terminal steps that solve a goal completely are usually performed by
- actual automated proof methods (such as \isacommand{by}~@{text
- blast}.\<close>
+ Such well-tuned iterated decomposition of certain structures is the prime
+ application of \<open>intro\<close> and \<open>elim\<close>. In contrast, terminal steps that solve
+ a goal completely are usually performed by actual automated proof methods
+  (such as \isacommand{by}~\<open>blast\<close>).\<close>
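+
+text \<open>For illustration only --- the following lemma is a hypothetical
+  example, not part of the original development: restricting \<open>intro\<close> to
+  \<open>impI\<close> and \<open>allI\<close> strips exactly the outer implications and universal
+  quantifiers, and \isacommand{by} closes the remaining goal by assumption.\<close>
+
+lemma "A \<longrightarrow> (\<forall>x. B x \<longrightarrow> A)"
+  by (intro impI allI)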
subsection \<open>Variations of backward vs.\ forward reasoning\<close>
-text \<open>Certainly, any proof may be performed in backward-style only.
- On the other hand, small steps of reasoning are often more naturally
- expressed in forward-style. Isar supports both backward and forward
- reasoning as a first-class concept. In order to demonstrate the
- difference, we consider several proofs of @{text "A \<and> B \<longrightarrow> B \<and> A"}.
+text \<open>Certainly, any proof may be performed in backward-style only. On the
+ other hand, small steps of reasoning are often more naturally expressed in
+ forward-style. Isar supports both backward and forward reasoning as a
+ first-class concept. In order to demonstrate the difference, we consider
+ several proofs of \<open>A \<and> B \<longrightarrow> B \<and> A\<close>.
The first version is purely backward.\<close>
@@ -150,13 +144,12 @@
qed
qed
-text \<open>Above, the @{text "conjunct_1/2"} projection rules had to be
- named explicitly, since the goals @{text B} and @{text A} did not
- provide any structural clue. This may be avoided using
- \isacommand{from} to focus on the @{text "A \<and> B"} assumption as the
- current facts, enabling the use of double-dot proofs. Note that
- \isacommand{from} already does forward-chaining, involving the
- @{text conjE} rule here.\<close>
+text \<open>Above, the projection rules \<open>conjunct1\<close> / \<open>conjunct2\<close> had to be named
+ explicitly, since the goals \<open>B\<close> and \<open>A\<close> did not provide any structural
+ clue. This may be avoided using \isacommand{from} to focus on the \<open>A \<and> B\<close>
+ assumption as the current facts, enabling the use of double-dot proofs.
+ Note that \isacommand{from} already does forward-chaining, involving the
+ \<open>conjE\<close> rule here.\<close>
lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
@@ -168,27 +161,26 @@
qed
qed
-text \<open>In the next version, we move the forward step one level
- upwards. Forward-chaining from the most recent facts is indicated
- by the \isacommand{then} command. Thus the proof of @{text "B \<and> A"}
- from @{text "A \<and> B"} actually becomes an elimination, rather than an
- introduction. The resulting proof structure directly corresponds to
- that of the @{text conjE} rule, including the repeated goal
- proposition that is abbreviated as @{text ?thesis} below.\<close>
+text \<open>In the next version, we move the forward step one level upwards.
+ Forward-chaining from the most recent facts is indicated by the
+ \isacommand{then} command. Thus the proof of \<open>B \<and> A\<close> from \<open>A \<and> B\<close> actually
+ becomes an elimination, rather than an introduction. The resulting proof
+ structure directly corresponds to that of the \<open>conjE\<close> rule, including the
+ repeated goal proposition that is abbreviated as \<open>?thesis\<close> below.\<close>
lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
assume "A \<and> B"
then show "B \<and> A"
- proof -- \<open>rule @{text conjE} of @{text "A \<and> B"}\<close>
+ proof -- \<open>rule \<open>conjE\<close> of \<open>A \<and> B\<close>\<close>
assume B A
- then show ?thesis .. -- \<open>rule @{text conjI} of @{text "B \<and> A"}\<close>
+ then show ?thesis .. -- \<open>rule \<open>conjI\<close> of \<open>B \<and> A\<close>\<close>
qed
qed
-text \<open>In the subsequent version we flatten the structure of the main
- body by doing forward reasoning all the time. Only the outermost
- decomposition step is left as backward.\<close>
+text \<open>In the subsequent version we flatten the structure of the main body by
+ doing forward reasoning all the time. Only the outermost decomposition
+ step is left as backward.\<close>
lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
@@ -198,9 +190,9 @@
from \<open>B\<close> \<open>A\<close> show "B \<and> A" ..
qed
-text \<open>We can still push forward-reasoning a bit further, even at the
- risk of getting ridiculous. Note that we force the initial proof
- step to do nothing here, by referring to the ``-'' proof method.\<close>
+text \<open>We can still push forward-reasoning a bit further, even at the risk of
+ getting ridiculous. Note that we force the initial proof step to do
+ nothing here, by referring to the \<open>-\<close> proof method.\<close>
lemma "A \<and> B \<longrightarrow> B \<and> A"
proof -
@@ -210,27 +202,28 @@
from \<open>A \<and> B\<close> have B ..
from \<open>B\<close> \<open>A\<close> have "B \<and> A" ..
}
- then show ?thesis .. -- \<open>rule @{text impI}\<close>
+ then show ?thesis .. -- \<open>rule \<open>impI\<close>\<close>
qed
-text \<open>\medskip With these examples we have shifted through a whole
- range from purely backward to purely forward reasoning. Apparently,
- in the extreme ends we get slightly ill-structured proofs, which
- also require much explicit naming of either rules (backward) or
- local facts (forward).
+text \<open>
+ \<^medskip>
+ With these examples we have shifted through a whole range from purely
+ backward to purely forward reasoning. Apparently, in the extreme ends we
+ get slightly ill-structured proofs, which also require much explicit
+ naming of either rules (backward) or local facts (forward).
- The general lesson learned here is that good proof style would
- achieve just the \emph{right} balance of top-down backward
- decomposition, and bottom-up forward composition. In general, there
- is no single best way to arrange some pieces of formal reasoning, of
- course. Depending on the actual applications, the intended audience
- etc., rules (and methods) on the one hand vs.\ facts on the other
- hand have to be emphasized in an appropriate way. This requires the
- proof writer to develop good taste, and some practice, of course.\<close>
+ The general lesson learned here is that good proof style would achieve
+ just the \<^emph>\<open>right\<close> balance of top-down backward decomposition, and
+ bottom-up forward composition. In general, there is no single best way to
+ arrange some pieces of formal reasoning, of course. Depending on the
+ actual applications, the intended audience etc., rules (and methods) on
+ the one hand vs.\ facts on the other hand have to be emphasized in an
+ appropriate way. This requires the proof writer to develop good taste, and
+ some practice, of course.
-text \<open>For our example the most appropriate way of reasoning is
- probably the middle one, with conjunction introduction done after
- elimination.\<close>
+ \<^medskip>
+ For our example the most appropriate way of reasoning is probably the
+ middle one, with conjunction introduction done after elimination.\<close>
lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
@@ -246,22 +239,22 @@
subsection \<open>A few examples from ``Introduction to Isabelle''\<close>
-text \<open>We rephrase some of the basic reasoning examples of
- @{cite "isabelle-intro"}, using HOL rather than FOL.\<close>
+text \<open>We rephrase some of the basic reasoning examples of @{cite
+ "isabelle-intro"}, using HOL rather than FOL.\<close>
subsubsection \<open>A propositional proof\<close>
-text \<open>We consider the proposition @{text "P \<or> P \<longrightarrow> P"}. The proof
- below involves forward-chaining from @{text "P \<or> P"}, followed by an
- explicit case-analysis on the two \emph{identical} cases.\<close>
+text \<open>We consider the proposition \<open>P \<or> P \<longrightarrow> P\<close>. The proof below involves
+ forward-chaining from \<open>P \<or> P\<close>, followed by an explicit case-analysis on
+ the two \<^emph>\<open>identical\<close> cases.\<close>
lemma "P \<or> P \<longrightarrow> P"
proof
assume "P \<or> P"
then show P
proof -- \<open>
- rule @{text disjE}: \smash{$\infer{C}{A \disj B & \infer*{C}{[A]} & \infer*{C}{[B]}}$}
+ rule \<open>disjE\<close>: \smash{$\infer{C}{A \disj B & \infer*{C}{[A]} & \infer*{C}{[B]}}$}
\<close>
assume P show P by fact
next
@@ -269,27 +262,27 @@
qed
qed
-text \<open>Case splits are \emph{not} hardwired into the Isar language as
- a special feature. The \isacommand{next} command used to separate
- the cases above is just a short form of managing block structure.
+text \<open>Case splits are \<^emph>\<open>not\<close> hardwired into the Isar language as a
+ special feature. The \isacommand{next} command used to separate the cases
+ above is just a short form of managing block structure.
- \medskip In general, applying proof methods may split up a goal into
- separate ``cases'', i.e.\ new subgoals with individual local
- assumptions. The corresponding proof text typically mimics this by
- establishing results in appropriate contexts, separated by blocks.
+ \<^medskip>
+ In general, applying proof methods may split up a goal into separate
+ ``cases'', i.e.\ new subgoals with individual local assumptions. The
+ corresponding proof text typically mimics this by establishing results in
+ appropriate contexts, separated by blocks.
In order to avoid too much explicit parentheses, the Isar system
implicitly opens an additional block for any new goal, the
- \isacommand{next} statement then closes one block level, opening a
- new one. The resulting behavior is what one would expect from
- separating cases, only that it is more flexible. E.g.\ an induction
- base case (which does not introduce local assumptions) would
- \emph{not} require \isacommand{next} to separate the subsequent step
- case.
+ \isacommand{next} statement then closes one block level, opening a new
+ one. The resulting behaviour is what one would expect from separating
+ cases, only that it is more flexible. E.g.\ an induction base case (which
+ does not introduce local assumptions) would \<^emph>\<open>not\<close> require
+ \isacommand{next} to separate the subsequent step case.
- \medskip In our example the situation is even simpler, since the two
- cases actually coincide. Consequently the proof may be rephrased as
- follows.\<close>
+ \<^medskip>
+ In our example the situation is even simpler, since the two cases actually
+ coincide. Consequently the proof may be rephrased as follows.\<close>
lemma "P \<or> P \<longrightarrow> P"
proof
@@ -316,37 +309,34 @@
subsubsection \<open>A quantifier proof\<close>
-text \<open>To illustrate quantifier reasoning, let us prove @{text
- "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"}. Informally, this holds because any
- @{text a} with @{text "P (f a)"} may be taken as a witness for the
- second existential statement.
+text \<open>To illustrate quantifier reasoning, let us prove
+ \<open>(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)\<close>. Informally, this holds because any \<open>a\<close> with
+ \<open>P (f a)\<close> may be taken as a witness for the second existential statement.
- The first proof is rather verbose, exhibiting quite a lot of
- (redundant) detail. It gives explicit rules, even with some
- instantiation. Furthermore, we encounter two new language elements:
- the \isacommand{fix} command augments the context by some new
- ``arbitrary, but fixed'' element; the \isacommand{is} annotation
- binds term abbreviations by higher-order pattern matching.\<close>
+ The first proof is rather verbose, exhibiting quite a lot of (redundant)
+ detail. It gives explicit rules, even with some instantiation.
+ Furthermore, we encounter two new language elements: the \isacommand{fix}
+ command augments the context by some new ``arbitrary, but fixed'' element;
+ the \isacommand{is} annotation binds term abbreviations by higher-order
+ pattern matching.\<close>
lemma "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"
proof
assume "\<exists>x. P (f x)"
then show "\<exists>y. P y"
- proof (rule exE) -- \<open>
- rule @{text exE}: \smash{$\infer{B}{\ex x A(x) & \infer*{B}{[A(x)]_x}}$}
-\<close>
+ proof (rule exE) --
+ \<open>rule \<open>exE\<close>: \smash{$\infer{B}{\ex x A(x) & \infer*{B}{[A(x)]_x}}$}\<close>
fix a
assume "P (f a)" (is "P ?witness")
then show ?thesis by (rule exI [of P ?witness])
qed
qed
-text \<open>While explicit rule instantiation may occasionally improve
- readability of certain aspects of reasoning, it is usually quite
- redundant. Above, the basic proof outline gives already enough
- structural clues for the system to infer both the rules and their
- instances (by higher-order unification). Thus we may as well prune
- the text as follows.\<close>
+text \<open>While explicit rule instantiation may occasionally improve readability
+ of certain aspects of reasoning, it is usually quite redundant. Above, the
+  basic proof outline already gives enough structural clues for the system
+ to infer both the rules and their instances (by higher-order unification).
+ Thus we may as well prune the text as follows.\<close>
lemma "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"
proof
@@ -359,10 +349,9 @@
qed
qed
-text \<open>Explicit @{text \<exists>}-elimination as seen above can become quite
- cumbersome in practice. The derived Isar language element
- ``\isakeyword{obtain}'' provides a more handsome way to do
- generalized existence reasoning.\<close>
+text \<open>Explicit \<open>\<exists>\<close>-elimination as seen above can become quite cumbersome in
+ practice. The derived Isar language element ``\isakeyword{obtain}''
+ provides a more handsome way to do generalized existence reasoning.\<close>
lemma "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"
proof
@@ -371,21 +360,19 @@
then show "\<exists>y. P y" ..
qed
-text \<open>Technically, \isakeyword{obtain} is similar to
- \isakeyword{fix} and \isakeyword{assume} together with a soundness
- proof of the elimination involved. Thus it behaves similar to any
- other forward proof element. Also note that due to the nature of
- general existence reasoning involved here, any result exported from
- the context of an \isakeyword{obtain} statement may \emph{not} refer
- to the parameters introduced there.\<close>
+text \<open>Technically, \isakeyword{obtain} is similar to \isakeyword{fix} and
+ \isakeyword{assume} together with a soundness proof of the elimination
+  involved. Thus it behaves similarly to any other forward proof element. Also
+ note that due to the nature of general existence reasoning involved here,
+ any result exported from the context of an \isakeyword{obtain} statement
+ may \<^emph>\<open>not\<close> refer to the parameters introduced there.\<close>
subsubsection \<open>Deriving rules in Isabelle\<close>
-text \<open>We derive the conjunction elimination rule from the
- corresponding projections. The proof is quite straight-forward,
- since Isabelle/Isar supports non-atomic goals and assumptions fully
- transparently.\<close>
+text \<open>We derive the conjunction elimination rule from the corresponding
+  projections. The proof is quite straightforward, since Isabelle/Isar
+ supports non-atomic goals and assumptions fully transparently.\<close>
theorem conjE: "A \<and> B \<Longrightarrow> (A \<Longrightarrow> B \<Longrightarrow> C) \<Longrightarrow> C"
proof -
--- a/src/HOL/Isar_Examples/Cantor.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Cantor.thy Mon Nov 02 13:58:19 2015 +0100
@@ -12,17 +12,17 @@
the Isabelle/HOL manual @{cite "isabelle-HOL"}.}\<close>
text \<open>Cantor's Theorem states that every set has more subsets than
- it has elements. It has become a favorite basic example in pure
- higher-order logic since it is so easily expressed: \[\all{f::\alpha
- \To \alpha \To \idt{bool}} \ex{S::\alpha \To \idt{bool}}
- \all{x::\alpha} f \ap x \not= S\]
+ it has elements. It has become a favourite basic example in pure
+ higher-order logic since it is so easily expressed:
+
+ @{text [display]
+ \<open>\<forall>f::\<alpha> \<Rightarrow> \<alpha> \<Rightarrow> bool. \<exists>S::\<alpha> \<Rightarrow> bool. \<forall>x::\<alpha>. f x \<noteq> S\<close>}
- Viewing types as sets, $\alpha \To \idt{bool}$ represents the
- powerset of $\alpha$. This version of the theorem states that for
- every function from $\alpha$ to its powerset, some subset is outside
- its range. The Isabelle/Isar proofs below uses HOL's set theory,
- with the type $\alpha \ap \idt{set}$ and the operator
- $\idt{range}::(\alpha \To \beta) \To \beta \ap \idt{set}$.\<close>
+ Viewing types as sets, \<open>\<alpha> \<Rightarrow> bool\<close> represents the powerset of \<open>\<alpha>\<close>. This
+ version of the theorem states that for every function from \<open>\<alpha>\<close> to its
+ powerset, some subset is outside its range. The Isabelle/Isar proofs below
+  use HOL's set theory, with the type \<open>\<alpha> set\<close> and the operator
+ \<open>range :: (\<alpha> \<Rightarrow> \<beta>) \<Rightarrow> \<beta> set\<close>.\<close>
theorem "\<exists>S. S \<notin> range (f :: 'a \<Rightarrow> 'a set)"
proof
@@ -46,20 +46,19 @@
qed
qed
-text \<open>How much creativity is required? As it happens, Isabelle can
- prove this theorem automatically using best-first search.
- Depth-first search would diverge, but best-first search successfully
- navigates through the large search space. The context of Isabelle's
- classical prover contains rules for the relevant constructs of HOL's
- set theory.\<close>
+text \<open>How much creativity is required? As it happens, Isabelle can prove
+ this theorem automatically using best-first search. Depth-first search
+ would diverge, but best-first search successfully navigates through the
+ large search space. The context of Isabelle's classical prover contains
+ rules for the relevant constructs of HOL's set theory.\<close>
theorem "\<exists>S. S \<notin> range (f :: 'a \<Rightarrow> 'a set)"
by best
-text \<open>While this establishes the same theorem internally, we do not
- get any idea of how the proof actually works. There is currently no
- way to transform internal system-level representations of Isabelle
- proofs back into Isar text. Writing intelligible proof documents
- really is a creative process, after all.\<close>
+text \<open>While this establishes the same theorem internally, we do not get any
+ idea of how the proof actually works. There is currently no way to
+ transform internal system-level representations of Isabelle proofs back
+ into Isar text. Writing intelligible proof documents really is a creative
+ process, after all.\<close>
end
--- a/src/HOL/Isar_Examples/Drinker.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Drinker.thy Mon Nov 02 13:58:19 2015 +0100
@@ -9,8 +9,8 @@
begin
text \<open>Here is another example of classical reasoning: the Drinker's
- Principle says that for some person, if he is drunk, everybody else
- is drunk!
+ Principle says that for some person, if he is drunk, everybody else is
+ drunk!
We first prove a classical part of de-Morgan's law.\<close>
--- a/src/HOL/Isar_Examples/Expr_Compiler.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Expr_Compiler.thy Mon Nov 02 13:58:19 2015 +0100
@@ -10,17 +10,16 @@
imports Main
begin
-text \<open>This is a (rather trivial) example of program verification.
- We model a compiler for translating expressions to stack machine
- instructions, and prove its correctness wrt.\ some evaluation
- semantics.\<close>
+text \<open>This is a (rather trivial) example of program verification. We model a
+ compiler for translating expressions to stack machine instructions, and
+ prove its correctness wrt.\ some evaluation semantics.\<close>
subsection \<open>Binary operations\<close>
-text \<open>Binary operations are just functions over some type of values.
- This is both for abstract syntax and semantics, i.e.\ we use a
- ``shallow embedding'' here.\<close>
+text \<open>Binary operations are just functions over some type of values. This is
+ both for abstract syntax and semantics, i.e.\ we use a ``shallow
+ embedding'' here.\<close>
type_synonym 'val binop = "'val \<Rightarrow> 'val \<Rightarrow> 'val"
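+
+text \<open>For instance (a hypothetical illustration, not part of the original
+  theory), addition on natural numbers is such a binary operation:\<close>
+
+term "op + :: nat binop"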
@@ -28,16 +27,15 @@
subsection \<open>Expressions\<close>
text \<open>The language of expressions is defined as an inductive type,
- consisting of variables, constants, and binary operations on
- expressions.\<close>
+ consisting of variables, constants, and binary operations on expressions.\<close>
datatype (dead 'adr, dead 'val) expr =
Variable 'adr
| Constant 'val
| Binop "'val binop" "('adr, 'val) expr" "('adr, 'val) expr"
-text \<open>Evaluation (wrt.\ some environment of variable assignments) is
- defined by primitive recursion over the structure of expressions.\<close>
+text \<open>Evaluation (wrt.\ some environment of variable assignments) is defined
+ by primitive recursion over the structure of expressions.\<close>
primrec eval :: "('adr, 'val) expr \<Rightarrow> ('adr \<Rightarrow> 'val) \<Rightarrow> 'val"
where
@@ -48,16 +46,15 @@
subsection \<open>Machine\<close>
-text \<open>Next we model a simple stack machine, with three
- instructions.\<close>
+text \<open>Next we model a simple stack machine, with three instructions.\<close>
datatype (dead 'adr, dead 'val) instr =
Const 'val
| Load 'adr
| Apply "'val binop"
-text \<open>Execution of a list of stack machine instructions is easily
- defined as follows.\<close>
+text \<open>Execution of a list of stack machine instructions is easily defined as
+ follows.\<close>
primrec exec :: "(('adr, 'val) instr) list \<Rightarrow> 'val list \<Rightarrow> ('adr \<Rightarrow> 'val) \<Rightarrow> 'val list"
where
@@ -75,8 +72,8 @@
subsection \<open>Compiler\<close>
-text \<open>We are ready to define the compilation function of expressions
- to lists of stack machine instructions.\<close>
+text \<open>We are ready to define the compilation function of expressions to
+ lists of stack machine instructions.\<close>
primrec compile :: "('adr, 'val) expr \<Rightarrow> (('adr, 'val) instr) list"
where
@@ -85,9 +82,8 @@
| "compile (Binop f e1 e2) = compile e2 @ compile e1 @ [Apply f]"
-text \<open>The main result of this development is the correctness theorem
- for @{text compile}. We first establish a lemma about @{text exec}
- and list append.\<close>
+text \<open>The main result of this development is the correctness theorem for
+ \<open>compile\<close>. We first establish a lemma about \<open>exec\<close> and list append.\<close>
lemma exec_append:
"exec (xs @ ys) stack env =
@@ -127,11 +123,14 @@
qed
-text \<open>\bigskip In the proofs above, the @{text simp} method does
- quite a lot of work behind the scenes (mostly ``functional program
- execution''). Subsequently, the same reasoning is elaborated in
- detail --- at most one recursive function definition is used at a
- time. Thus we get a better idea of what is actually going on.\<close>
+text \<open>
+ \<^bigskip>
+ In the proofs above, the \<open>simp\<close> method does quite a lot of work behind the
+ scenes (mostly ``functional program execution''). Subsequently, the same
+ reasoning is elaborated in detail --- at most one recursive function
+ definition is used at a time. Thus we get a better idea of what is
+ actually going on.
+\<close>
lemma exec_append':
"exec (xs @ ys) stack env = exec ys (exec xs stack env) env"
--- a/src/HOL/Isar_Examples/Group.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Group.thy Mon Nov 02 13:58:19 2015 +0100
@@ -10,18 +10,17 @@
subsection \<open>Groups and calculational reasoning\<close>
-text \<open>Groups over signature $({\times} :: \alpha \To \alpha \To
- \alpha, \idt{one} :: \alpha, \idt{inverse} :: \alpha \To \alpha)$
- are defined as an axiomatic type class as follows. Note that the
- parent class $\idt{times}$ is provided by the basic HOL theory.\<close>
+text \<open>Groups over signature \<open>(\<times> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> \<alpha>, one :: \<alpha>, inverse :: \<alpha> \<Rightarrow> \<alpha>)\<close>
+ are defined as an axiomatic type class as follows. Note that the parent
+  class \<open>times\<close> is provided by the basic HOL theory.\<close>
class group = times + one + inverse +
assumes group_assoc: "(x * y) * z = x * (y * z)"
and group_left_one: "1 * x = x"
and group_left_inverse: "inverse x * x = 1"
-text \<open>The group axioms only state the properties of left one and
- inverse, the right versions may be derived as follows.\<close>
+text \<open>The group axioms only state the properties of left one and inverse;
+  the right versions may be derived as follows.\<close>
theorem (in group) group_right_inverse: "x * inverse x = 1"
proof -
@@ -44,9 +43,8 @@
finally show ?thesis .
qed
-text \<open>With \name{group-right-inverse} already available,
- \name{group-right-one}\label{thm:group-right-one} is now established
- much easier.\<close>
+text \<open>With \<open>group_right_inverse\<close> already available, \<open>group_right_one\<close>
+  \label{thm:group-right-one} is now established much more easily.\<close>
theorem (in group) group_right_one: "x * 1 = x"
proof -
@@ -61,27 +59,27 @@
finally show ?thesis .
qed
-text \<open>\medskip The calculational proof style above follows typical
- presentations given in any introductory course on algebra. The
- basic technique is to form a transitive chain of equations, which in
- turn are established by simplifying with appropriate rules. The
- low-level logical details of equational reasoning are left implicit.
+text \<open>
+ \<^medskip>
+ The calculational proof style above follows typical presentations given in
+ any introductory course on algebra. The basic technique is to form a
+ transitive chain of equations, which in turn are established by
+ simplifying with appropriate rules. The low-level logical details of
+ equational reasoning are left implicit.
- Note that ``$\dots$'' is just a special term variable that is bound
- automatically to the argument\footnote{The argument of a curried
- infix expression happens to be its right-hand side.} of the last
- fact achieved by any local assumption or proven statement. In
- contrast to $\var{thesis}$, the ``$\dots$'' variable is bound
- \emph{after} the proof is finished, though.
+ Note that ``\<open>\<dots>\<close>'' is just a special term variable that is bound
+ automatically to the argument\footnote{The argument of a curried infix
+ expression happens to be its right-hand side.} of the last fact achieved
+ by any local assumption or proven statement. In contrast to \<open>?thesis\<close>, the
+ ``\<open>\<dots>\<close>'' variable is bound \<^emph>\<open>after\<close> the proof is finished.
There are only two separate Isar language elements for calculational
- proofs: ``\isakeyword{also}'' for initial or intermediate
- calculational steps, and ``\isakeyword{finally}'' for exhibiting the
- result of a calculation. These constructs are not hardwired into
- Isabelle/Isar, but defined on top of the basic Isar/VM interpreter.
- Expanding the \isakeyword{also} and \isakeyword{finally} derived
- language elements, calculations may be simulated by hand as
- demonstrated below.\<close>
+ proofs: ``\isakeyword{also}'' for initial or intermediate calculational
+ steps, and ``\isakeyword{finally}'' for exhibiting the result of a
+ calculation. These constructs are not hardwired into Isabelle/Isar, but
+ defined on top of the basic Isar/VM interpreter. Expanding the
+ \isakeyword{also} and \isakeyword{finally} derived language elements,
+ calculations may be simulated by hand as demonstrated below.\<close>
theorem (in group) "x * 1 = x"
proof -
@@ -114,51 +112,49 @@
show ?thesis .
qed
-text \<open>Note that this scheme of calculations is not restricted to
- plain transitivity. Rules like anti-symmetry, or even forward and
- backward substitution work as well. For the actual implementation
- of \isacommand{also} and \isacommand{finally}, Isabelle/Isar
- maintains separate context information of ``transitivity'' rules.
- Rule selection takes place automatically by higher-order
- unification.\<close>
+text \<open>Note that this scheme of calculations is not restricted to plain
+ transitivity. Rules like anti-symmetry, or even forward and backward
+ substitution work as well. For the actual implementation of
+ \isacommand{also} and \isacommand{finally}, Isabelle/Isar maintains
+ separate context information of ``transitivity'' rules. Rule selection
+ takes place automatically by higher-order unification.\<close>
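+
+text \<open>As a hypothetical sketch (not part of the original theory), mixing an
+  equation into an inequality chain turns a calculational step into
+  substitution rather than plain transitivity; the ``\<open>\<dots>\<close>'' abbreviation
+  works as before.\<close>
+
+lemma
+  fixes a b c :: nat
+  assumes "a = b" and "b \<le> c"
+  shows "a \<le> c"
+proof -
+  have "a = b" by fact
+  also have "\<dots> \<le> c" by fact
+  finally show ?thesis .
+qed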
subsection \<open>Groups as monoids\<close>
-text \<open>Monoids over signature $({\times} :: \alpha \To \alpha \To
- \alpha, \idt{one} :: \alpha)$ are defined like this.\<close>
+text \<open>Monoids over signature \<open>(\<times> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> \<alpha>, one :: \<alpha>)\<close> are defined like
+ this.\<close>
class monoid = times + one +
assumes monoid_assoc: "(x * y) * z = x * (y * z)"
and monoid_left_one: "1 * x = x"
and monoid_right_one: "x * 1 = x"
-text \<open>Groups are \emph{not} yet monoids directly from the
- definition. For monoids, \name{right-one} had to be included as an
- axiom, but for groups both \name{right-one} and \name{right-inverse}
- are derivable from the other axioms. With \name{group-right-one}
- derived as a theorem of group theory (see
+text \<open>Groups are \<^emph>\<open>not\<close> yet monoids directly from the definition. For
+ monoids, \<open>right_one\<close> had to be included as an axiom, but for groups both
+ \<open>right_one\<close> and \<open>right_inverse\<close> are derivable from the other axioms. With
+ \<open>group_right_one\<close> derived as a theorem of group theory (see
page~\pageref{thm:group-right-one}), we may still instantiate
- $\idt{group} \subseteq \idt{monoid}$ properly as follows.\<close>
+ \<open>group \<subseteq> monoid\<close> properly as follows.\<close>
-instance group < monoid
+instance group \<subseteq> monoid
by intro_classes
(rule group_assoc,
rule group_left_one,
rule group_right_one)
text \<open>The \isacommand{instance} command actually is a version of
- \isacommand{theorem}, setting up a goal that reflects the intended
- class relation (or type constructor arity). Thus any Isar proof
- language element may be involved to establish this statement. When
- concluding the proof, the result is transformed into the intended
- type signature extension behind the scenes.\<close>
+ \isacommand{theorem}, setting up a goal that reflects the intended class
+ relation (or type constructor arity). Thus any Isar proof language element
+ may be involved to establish this statement. When concluding the proof,
+ the result is transformed into the intended type signature extension
+ behind the scenes.\<close>
subsection \<open>More theorems of group theory\<close>
text \<open>The one element is already uniquely determined by preserving
- an \emph{arbitrary} group element.\<close>
+ an \<^emph>\<open>arbitrary\<close> group element.\<close>
theorem (in group) group_one_equality:
assumes eq: "e * x = x"
--- a/src/HOL/Isar_Examples/Hoare.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Hoare.thy Mon Nov 02 13:58:19 2015 +0100
@@ -12,11 +12,10 @@
subsection \<open>Abstract syntax and semantics\<close>
-text \<open>The following abstract syntax and semantics of Hoare Logic
- over \texttt{WHILE} programs closely follows the existing tradition
- in Isabelle/HOL of formalizing the presentation given in
- @{cite \<open>\S6\<close> "Winskel:1993"}. See also @{file "~~/src/HOL/Hoare"} and
- @{cite "Nipkow:1998:Winskel"}.\<close>
+text \<open>The following abstract syntax and semantics of Hoare Logic over
+ \<^verbatim>\<open>WHILE\<close> programs closely follows the existing tradition in Isabelle/HOL
+ of formalizing the presentation given in @{cite \<open>\S6\<close> "Winskel:1993"}. See
+ also @{file "~~/src/HOL/Hoare"} and @{cite "Nipkow:1998:Winskel"}.\<close>
type_synonym 'a bexp = "'a set"
type_synonym 'a assn = "'a set"
@@ -60,14 +59,15 @@
subsection \<open>Primitive Hoare rules\<close>
-text \<open>From the semantics defined above, we derive the standard set
- of primitive Hoare rules; e.g.\ see @{cite \<open>\S6\<close> "Winskel:1993"}.
- Usually, variant forms of these rules are applied in actual proof,
- see also \S\ref{sec:hoare-isar} and \S\ref{sec:hoare-vcg}.
+text \<open>From the semantics defined above, we derive the standard set of
+ primitive Hoare rules; e.g.\ see @{cite \<open>\S6\<close> "Winskel:1993"}. Usually,
+  variant forms of these rules are applied in actual proof; see also
+ \S\ref{sec:hoare-isar} and \S\ref{sec:hoare-vcg}.
- \medskip The \name{basic} rule represents any kind of atomic access
- to the state space. This subsumes the common rules of \name{skip}
- and \name{assign}, as formulated in \S\ref{sec:hoare-isar}.\<close>
+ \<^medskip>
+ The \<open>basic\<close> rule represents any kind of atomic access to the state space.
+ This subsumes the common rules of \<open>skip\<close> and \<open>assign\<close>, as formulated in
+ \S\ref{sec:hoare-isar}.\<close>
theorem basic: "\<turnstile> {s. f s \<in> P} (Basic f) P"
proof
@@ -79,7 +79,7 @@
qed
text \<open>The rules for sequential commands and semantic consequences are
- established in a straight forward manner as follows.\<close>
+  established in a straightforward manner as follows.\<close>
theorem seq: "\<turnstile> P c1 Q \<Longrightarrow> \<turnstile> Q c2 R \<Longrightarrow> \<turnstile> P (c1; c2) R"
proof
@@ -105,8 +105,8 @@
qed
text \<open>The rule for conditional commands is directly reflected by the
- corresponding semantics; in the proof we just have to look closely
- which cases apply.\<close>
+ corresponding semantics; in the proof we just have to look closely which
+ cases apply.\<close>
theorem cond:
assumes case_b: "\<turnstile> (P \<inter> b) c1 Q"
@@ -134,12 +134,11 @@
qed
qed
-text \<open>The @{text while} rule is slightly less trivial --- it is the
- only one based on recursion, which is expressed in the semantics by
- a Kleene-style least fixed-point construction. The auxiliary
- statement below, which is by induction on the number of iterations
- is the main point to be proven; the rest is by routine application
- of the semantics of \texttt{WHILE}.\<close>
+text \<open>The \<open>while\<close> rule is slightly less trivial --- it is the only one based
+ on recursion, which is expressed in the semantics by a Kleene-style least
+ fixed-point construction. The auxiliary statement below, which is by
+ induction on the number of iterations is the main point to be proven; the
+ rest is by routine application of the semantics of \<^verbatim>\<open>WHILE\<close>.\<close>
theorem while:
assumes body: "\<turnstile> (P \<inter> b) c P"
@@ -167,24 +166,23 @@
text \<open>We now introduce concrete syntax for describing commands (with
embedded expressions) and assertions. The basic technique is that of
- semantic ``quote-antiquote''. A \emph{quotation} is a syntactic
- entity delimited by an implicit abstraction, say over the state
- space. An \emph{antiquotation} is a marked expression within a
- quotation that refers the implicit argument; a typical antiquotation
- would select (or even update) components from the state.
+ semantic ``quote-antiquote''. A \<^emph>\<open>quotation\<close> is a syntactic entity
+ delimited by an implicit abstraction, say over the state space. An
+  \<^emph>\<open>antiquotation\<close> is a marked expression within a quotation that refers to
+  the implicit argument; a typical antiquotation would select (or even
+  update) components from the state.
- We will see some examples later in the concrete rules and
- applications.\<close>
+ We will see some examples later in the concrete rules and applications.
-text \<open>The following specification of syntax and translations is for
- Isabelle experts only; feel free to ignore it.
+ \<^medskip>
+ The following specification of syntax and translations is for Isabelle
+ experts only; feel free to ignore it.
- While the first part is still a somewhat intelligible specification
- of the concrete syntactic representation of our Hoare language, the
- actual ``ML drivers'' is quite involved. Just note that the we
- re-use the basic quote/antiquote translations as already defined in
- Isabelle/Pure (see @{ML Syntax_Trans.quote_tr}, and
- @{ML Syntax_Trans.quote_tr'},).\<close>
+ While the first part is still a somewhat intelligible specification of the
+ concrete syntactic representation of our Hoare language, the actual ``ML
+  drivers'' are quite involved. Just note that we re-use the basic
+  quote/antiquote translations as already defined in Isabelle/Pure (see @{ML
+  Syntax_Trans.quote_tr} and @{ML Syntax_Trans.quote_tr'}).\<close>
syntax
"_quote" :: "'b \<Rightarrow> ('a \<Rightarrow> 'b)"
@@ -213,10 +211,9 @@
in [(@{syntax_const "_quote"}, K quote_tr)] end
\<close>
-text \<open>As usual in Isabelle syntax translations, the part for
- printing is more complicated --- we cannot express parts as macro
- rules as above. Don't look here, unless you have to do similar
- things for yourself.\<close>
+text \<open>As usual in Isabelle syntax translations, the part for printing is
+ more complicated --- we cannot express parts as macro rules as above.
+ Don't look here, unless you have to do similar things for yourself.\<close>
print_translation \<open>
let
@@ -245,13 +242,14 @@
subsection \<open>Rules for single-step proof \label{sec:hoare-isar}\<close>
-text \<open>We are now ready to introduce a set of Hoare rules to be used
- in single-step structured proofs in Isabelle/Isar. We refer to the
- concrete syntax introduce above.
+text \<open>We are now ready to introduce a set of Hoare rules to be used in
+ single-step structured proofs in Isabelle/Isar. We refer to the concrete
+  syntax introduced above.
- \medskip Assertions of Hoare Logic may be manipulated in
- calculational proofs, with the inclusion expressed in terms of sets
- or predicates. Reversed order is supported as well.\<close>
+ \<^medskip>
+ Assertions of Hoare Logic may be manipulated in calculational proofs, with
+ the inclusion expressed in terms of sets or predicates. Reversed order is
+ supported as well.\<close>
lemma [trans]: "\<turnstile> P c Q \<Longrightarrow> P' \<subseteq> P \<Longrightarrow> \<turnstile> P' c Q"
by (unfold Valid_def) blast
@@ -278,10 +276,10 @@
by (simp add: Valid_def)
-text \<open>Identity and basic assignments.\footnote{The $\idt{hoare}$
- method introduced in \S\ref{sec:hoare-vcg} is able to provide proper
- instances for any number of basic assignments, without producing
- additional verification conditions.}\<close>
+text \<open>Identity and basic assignments.\footnote{The \<open>hoare\<close> method introduced
+ in \S\ref{sec:hoare-vcg} is able to provide proper instances for any
+ number of basic assignments, without producing additional verification
+ conditions.}\<close>
lemma skip [intro?]: "\<turnstile> P SKIP P"
proof -
@@ -293,19 +291,19 @@
by (rule basic)
text \<open>Note that above formulation of assignment corresponds to our
- preferred way to model state spaces, using (extensible) record types
- in HOL @{cite "Naraschewski-Wenzel:1998:HOOL"}. For any record field
- $x$, Isabelle/HOL provides a functions $x$ (selector) and
- $\idt{x{\dsh}update}$ (update). Above, there is only a place-holder
- appearing for the latter kind of function: due to concrete syntax
- \isa{\'x := \'a} also contains \isa{x\_update}.\footnote{Note that
- due to the external nature of HOL record fields, we could not even
- state a general theorem relating selector and update functions (if
- this were required here); this would only work for any particular
- instance of record fields introduced so far.}\<close>
+ preferred way to model state spaces, using (extensible) record types in
+ HOL @{cite "Naraschewski-Wenzel:1998:HOOL"}. For any record field \<open>x\<close>,
+  Isabelle/HOL provides functions \<open>x\<close> (selector) and \<open>x_update\<close> (update).
+  Above, there is only a place-holder appearing for the latter kind of
+  function: due to the concrete syntax, \<open>\<acute>x := \<acute>a\<close> also contains
+ \<open>x_update\<close>.\footnote{Note that due to the external nature of HOL record
+ fields, we could not even state a general theorem relating selector and
+ update functions (if this were required here); this would only work for
+ any particular instance of record fields introduced so far.}
-text \<open>Sequential composition --- normalizing with associativity
- achieves proper of chunks of code verified separately.\<close>
+ \<^medskip>
+ Sequential composition --- normalizing with associativity achieves proper
+  composition of chunks of code verified separately.\<close>
lemmas [trans, intro?] = seq
@@ -344,16 +342,15 @@
subsection \<open>Verification conditions \label{sec:hoare-vcg}\<close>
-text \<open>We now load the \emph{original} ML file for proof scripts and
- tactic definition for the Hoare Verification Condition Generator
- (see @{file "~~/src/HOL/Hoare/"}). As far as we
- are concerned here, the result is a proof method \name{hoare}, which
- may be applied to a Hoare Logic assertion to extract purely logical
- verification conditions. It is important to note that the method
- requires \texttt{WHILE} loops to be fully annotated with invariants
- beforehand. Furthermore, only \emph{concrete} pieces of code are
- handled --- the underlying tactic fails ungracefully if supplied
- with meta-variables or parameters, for example.\<close>
+text \<open>We now load the \<^emph>\<open>original\<close> ML file for proof scripts and tactic
+ definition for the Hoare Verification Condition Generator (see @{file
+ "~~/src/HOL/Hoare/"}). As far as we are concerned here, the result is a
+ proof method \<open>hoare\<close>, which may be applied to a Hoare Logic assertion to
+ extract purely logical verification conditions. It is important to note
+ that the method requires \<^verbatim>\<open>WHILE\<close> loops to be fully annotated with
+ invariants beforehand. Furthermore, only \<^emph>\<open>concrete\<close> pieces of code are
+ handled --- the underlying tactic fails ungracefully if supplied with
+ meta-variables or parameters, for example.\<close>
lemma SkipRule: "p \<subseteq> q \<Longrightarrow> Valid p (Basic id) q"
by (auto simp add: Valid_def)
--- a/src/HOL/Isar_Examples/Hoare_Ex.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Hoare_Ex.thy Mon Nov 02 13:58:19 2015 +0100
@@ -6,9 +6,9 @@
subsection \<open>State spaces\<close>
-text \<open>First of all we provide a store of program variables that
- occur in any of the programs considered later. Slightly unexpected
- things may happen when attempting to work with undeclared variables.\<close>
+text \<open>First of all we provide a store of program variables that occur in any
+ of the programs considered later. Slightly unexpected things may happen
+ when attempting to work with undeclared variables.\<close>
record vars =
I :: nat
@@ -16,29 +16,28 @@
N :: nat
S :: nat
-text \<open>While all of our variables happen to have the same type,
- nothing would prevent us from working with many-sorted programs as
- well, or even polymorphic ones. Also note that Isabelle/HOL's
- extensible record types even provides simple means to extend the
- state space later.\<close>
+text \<open>While all of our variables happen to have the same type, nothing would
+ prevent us from working with many-sorted programs as well, or even
+ polymorphic ones. Also note that Isabelle/HOL's extensible record types
+  even provide simple means to extend the state space later.\<close>
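+
+text \<open>For instance (a hypothetical sketch, not part of the original theory),
+  the state space may be extended with a further field later on:\<close>
+
+record vars' = vars +
+  T :: nat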
subsection \<open>Basic examples\<close>
-text \<open>We look at few trivialities involving assignment and
- sequential composition, in order to get an idea of how to work with
- our formulation of Hoare Logic.\<close>
+text \<open>We look at a few trivialities involving assignment and sequential
+ composition, in order to get an idea of how to work with our formulation
+ of Hoare Logic.\<close>
-text \<open>Using the basic @{text assign} rule directly is a bit
+text \<open>Using the basic \<open>assign\<close> rule directly is a bit
cumbersome.\<close>
lemma "\<turnstile> \<lbrace>\<acute>(N_update (\<lambda>_. (2 * \<acute>N))) \<in> \<lbrace>\<acute>N = 10\<rbrace>\<rbrace> \<acute>N := 2 * \<acute>N \<lbrace>\<acute>N = 10\<rbrace>"
by (rule assign)
-text \<open>Certainly we want the state modification already done, e.g.\
- by simplification. The \name{hoare} method performs the basic state
- update for us; we may apply the Simplifier afterwards to achieve
- ``obvious'' consequences as well.\<close>
+text \<open>Certainly we want the state modification already done, e.g.\ by
+ simplification. The \<open>hoare\<close> method performs the basic state update for us;
+ we may apply the Simplifier afterwards to achieve ``obvious'' consequences
+ as well.\<close>
lemma "\<turnstile> \<lbrace>True\<rbrace> \<acute>N := 10 \<lbrace>\<acute>N = 10\<rbrace>"
by hoare
@@ -67,8 +66,8 @@
\<lbrace>\<acute>M = b \<and> \<acute>N = a\<rbrace>"
by hoare simp
-text \<open>It is important to note that statements like the following one
- can only be proven for each individual program variable. Due to the
+text \<open>It is important to note that statements like the following one can
+ only be proven for each individual program variable. Due to the
extra-logical nature of record fields, we cannot formulate a theorem
relating record selectors and updates schematically.\<close>
@@ -84,9 +83,9 @@
oops
-text \<open>In the following assignments we make use of the consequence
- rule in order to achieve the intended precondition. Certainly, the
- \name{hoare} method is able to handle this case, too.\<close>
+text \<open>In the following assignments we make use of the consequence rule in
+ order to achieve the intended precondition. Certainly, the \<open>hoare\<close> method
+ is able to handle this case, too.\<close>
lemma "\<turnstile> \<lbrace>\<acute>M = \<acute>N\<rbrace> \<acute>M := \<acute>M + 1 \<lbrace>\<acute>M \<noteq> \<acute>N\<rbrace>"
proof -
@@ -114,10 +113,10 @@
subsection \<open>Multiplication by addition\<close>
-text \<open>We now do some basic examples of actual \texttt{WHILE}
- programs. This one is a loop for calculating the product of two
- natural numbers, by iterated addition. We first give detailed
- structured proof based on single-step Hoare rules.\<close>
+text \<open>We now do some basic examples of actual \<^verbatim>\<open>WHILE\<close> programs. This one is
+ a loop for calculating the product of two natural numbers, by iterated
+  addition. We first give a detailed structured proof based on single-step
+ Hoare rules.\<close>
lemma
"\<turnstile> \<lbrace>\<acute>M = 0 \<and> \<acute>S = 0\<rbrace>
@@ -141,10 +140,10 @@
finally show ?thesis .
qed
-text \<open>The subsequent version of the proof applies the @{text hoare}
- method to reduce the Hoare statement to a purely logical problem
- that can be solved fully automatically. Note that we have to
- specify the \texttt{WHILE} loop invariant in the original statement.\<close>
+text \<open>The subsequent version of the proof applies the \<open>hoare\<close> method to
+ reduce the Hoare statement to a purely logical problem that can be solved
+ fully automatically. Note that we have to specify the \<^verbatim>\<open>WHILE\<close> loop
+ invariant in the original statement.\<close>
lemma
"\<turnstile> \<lbrace>\<acute>M = 0 \<and> \<acute>S = 0\<rbrace>
@@ -157,15 +156,15 @@
subsection \<open>Summing natural numbers\<close>
-text \<open>We verify an imperative program to sum natural numbers up to a
- given limit. First some functional definition for proper
- specification of the problem.\<close>
+text \<open>We verify an imperative program to sum natural numbers up to a given
+ limit. First some functional definition for proper specification of the
+ problem.
-text \<open>The following proof is quite explicit in the individual steps
- taken, with the \name{hoare} method only applied locally to take
- care of assignment and sequential composition. Note that we express
- intermediate proof obligation in pure logic, without referring to
- the state space.\<close>
+ \<^medskip>
+ The following proof is quite explicit in the individual steps taken, with
+ the \<open>hoare\<close> method only applied locally to take care of assignment and
+  sequential composition. Note that we express intermediate proof obligations
+ in pure logic, without referring to the state space.\<close>
theorem
"\<turnstile> \<lbrace>True\<rbrace>
@@ -203,9 +202,8 @@
finally show ?thesis .
qed
-text \<open>The next version uses the @{text hoare} method, while still
- explaining the resulting proof obligations in an abstract,
- structured manner.\<close>
+text \<open>The next version uses the \<open>hoare\<close> method, while still explaining the
+ resulting proof obligations in an abstract, structured manner.\<close>
theorem
"\<turnstile> \<lbrace>True\<rbrace>
@@ -230,8 +228,8 @@
qed
qed
-text \<open>Certainly, this proof may be done fully automatic as well,
- provided that the invariant is given beforehand.\<close>
+text \<open>Certainly, this proof may be done fully automatically as well, provided
+ that the invariant is given beforehand.\<close>
theorem
"\<turnstile> \<lbrace>True\<rbrace>
@@ -248,8 +246,8 @@
subsection \<open>Time\<close>
-text \<open>A simple embedding of time in Hoare logic: function @{text
- timeit} inserts an extra variable to keep track of the elapsed time.\<close>
+text \<open>A simple embedding of time in Hoare logic: function \<open>timeit\<close> inserts
+ an extra variable to keep track of the elapsed time.\<close>
record tstate = time :: nat
--- a/src/HOL/Isar_Examples/Knaster_Tarski.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Knaster_Tarski.thy Mon Nov 02 13:58:19 2015 +0100
@@ -14,30 +14,27 @@
subsection \<open>Prose version\<close>
text \<open>According to the textbook @{cite \<open>pages 93--94\<close> "davey-priestley"},
- the Knaster-Tarski fixpoint theorem is as
- follows.\footnote{We have dualized the argument, and tuned the
- notation a little bit.}
-
- \textbf{The Knaster-Tarski Fixpoint Theorem.} Let @{text L} be a
- complete lattice and @{text "f: L \<rightarrow> L"} an order-preserving map.
- Then @{text "\<Sqinter>{x \<in> L | f(x) \<le> x}"} is a fixpoint of @{text f}.
+ the Knaster-Tarski fixpoint theorem is as follows.\footnote{We have
+ dualized the argument, and tuned the notation a little bit.}
- \textbf{Proof.} Let @{text "H = {x \<in> L | f(x) \<le> x}"} and @{text "a =
- \<Sqinter>H"}. For all @{text "x \<in> H"} we have @{text "a \<le> x"}, so @{text
- "f(a) \<le> f(x) \<le> x"}. Thus @{text "f(a)"} is a lower bound of @{text
- H}, whence @{text "f(a) \<le> a"}. We now use this inequality to prove
- the reverse one (!) and thereby complete the proof that @{text a} is
- a fixpoint. Since @{text f} is order-preserving, @{text "f(f(a)) \<le>
- f(a)"}. This says @{text "f(a) \<in> H"}, so @{text "a \<le> f(a)"}.\<close>
+ \<^bold>\<open>The Knaster-Tarski Fixpoint Theorem.\<close> Let \<open>L\<close> be a complete lattice and
+ \<open>f: L \<rightarrow> L\<close> an order-preserving map. Then \<open>\<Sqinter>{x \<in> L | f(x) \<le> x}\<close> is a
+ fixpoint of \<open>f\<close>.
+
+ \<^bold>\<open>Proof.\<close> Let \<open>H = {x \<in> L | f(x) \<le> x}\<close> and \<open>a = \<Sqinter>H\<close>. For all \<open>x \<in> H\<close> we
+ have \<open>a \<le> x\<close>, so \<open>f(a) \<le> f(x) \<le> x\<close>. Thus \<open>f(a)\<close> is a lower bound of \<open>H\<close>,
+ whence \<open>f(a) \<le> a\<close>. We now use this inequality to prove the reverse one (!)
+ and thereby complete the proof that \<open>a\<close> is a fixpoint. Since \<open>f\<close> is
+ order-preserving, \<open>f(f(a)) \<le> f(a)\<close>. This says \<open>f(a) \<in> H\<close>, so \<open>a \<le> f(a)\<close>.\<close>
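+
+text \<open>In Isabelle/HOL the two complete-lattice facts driving this argument
+  are available as the standard rules \<open>Inf_lower\<close> and \<open>Inf_greatest\<close>. As a
+  first orientation, here is a minimal sketch of the initial half of the
+  argument, with monotonicity stated explicitly; this is an illustration
+  only, the polished formal versions follow in the next subsection.\<close>
+
+(* Sketch for orientation only; cf. the formal versions below. *)
+lemma
+  fixes f :: "'a::complete_lattice \<Rightarrow> 'a"
+  assumes mono: "\<And>x y. x \<le> y \<Longrightarrow> f x \<le> f y"
+  shows "f (\<Sqinter>{x. f x \<le> x}) \<le> \<Sqinter>{x. f x \<le> x}"
+proof (rule Inf_greatest)
+  fix x
+  assume x: "x \<in> {x. f x \<le> x}"
+  then have "\<Sqinter>{x. f x \<le> x} \<le> x" by (rule Inf_lower)
+  then have "f (\<Sqinter>{x. f x \<le> x}) \<le> f x" by (rule mono)
+  also from x have "f x \<le> x" by simp
+  finally show "f (\<Sqinter>{x. f x \<le> x}) \<le> x" .
+qed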
subsection \<open>Formal versions\<close>
-text \<open>The Isar proof below closely follows the original
- presentation. Virtually all of the prose narration has been
- rephrased in terms of formal Isar language elements. Just as many
- textbook-style proofs, there is a strong bias towards forward proof,
- and several bends in the course of reasoning.\<close>
+text \<open>The Isar proof below closely follows the original presentation.
+ Virtually all of the prose narration has been rephrased in terms of formal
+ Isar language elements. As in many textbook-style proofs, there is a
+ strong bias towards forward proof, and several bends in the course of
+ reasoning.\<close>
theorem Knaster_Tarski:
fixes f :: "'a::complete_lattice \<Rightarrow> 'a"
@@ -67,15 +64,15 @@
qed
qed
-text \<open>Above we have used several advanced Isar language elements,
- such as explicit block structure and weak assumptions. Thus we have
- mimicked the particular way of reasoning of the original text.
+text \<open>Above we have used several advanced Isar language elements, such as
+ explicit block structure and weak assumptions. Thus we have mimicked the
+ particular way of reasoning of the original text.
- In the subsequent version the order of reasoning is changed to
- achieve structured top-down decomposition of the problem at the
- outer level, while only the inner steps of reasoning are done in a
- forward manner. We are certainly more at ease here, requiring only
- the most basic features of the Isar language.\<close>
+ In the subsequent version the order of reasoning is changed to achieve
+ structured top-down decomposition of the problem at the outer level, while
+ only the inner steps of reasoning are done in a forward manner. We are
+ certainly more at ease here, requiring only the most basic features of the
+ Isar language.\<close>
theorem Knaster_Tarski':
fixes f :: "'a::complete_lattice \<Rightarrow> 'a"
--- a/src/HOL/Isar_Examples/Mutilated_Checkerboard.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Mutilated_Checkerboard.thy Mon Nov 02 13:58:19 2015 +0100
@@ -9,8 +9,8 @@
imports Main
begin
-text \<open>The Mutilated Checker Board Problem, formalized inductively.
- See @{cite "paulson-mutilated-board"} for the original tactic script version.\<close>
+text \<open>The Mutilated Checker Board Problem, formalized inductively. See @{cite
+ "paulson-mutilated-board"} for the original tactic script version.\<close>
subsection \<open>Tilings\<close>
--- a/src/HOL/Isar_Examples/Nested_Datatype.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Nested_Datatype.thy Mon Nov 02 13:58:19 2015 +0100
@@ -20,7 +20,7 @@
lemmas subst_simps = subst_term.simps subst_term_list.simps
-text \<open>\medskip A simple lemma about composition of substitutions.\<close>
+text \<open>\<^medskip> A simple lemma about composition of substitutions.\<close>
lemma
"subst_term (subst_term f1 \<circ> f2) t =
--- a/src/HOL/Isar_Examples/Peirce.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Peirce.thy Mon Nov 02 13:58:19 2015 +0100
@@ -8,16 +8,15 @@
imports Main
begin
-text \<open>We consider Peirce's Law: $((A \impl B) \impl A) \impl A$.
- This is an inherently non-intuitionistic statement, so its proof
- will certainly involve some form of classical contradiction.
+text \<open>We consider Peirce's Law: \<open>((A \<longrightarrow> B) \<longrightarrow> A) \<longrightarrow> A\<close>. This is an inherently
+ non-intuitionistic statement, so its proof will certainly involve some
+ form of classical contradiction.
- The first proof is again a well-balanced combination of plain
- backward and forward reasoning. The actual classical step is where
- the negated goal may be introduced as additional assumption. This
- eventually leads to a contradiction.\footnote{The rule involved
- there is negation elimination; it holds in intuitionistic logic as
- well.}\<close>
+ The first proof is again a well-balanced combination of plain backward and
+ forward reasoning. The actual classical step is where the negated goal may
+ be introduced as an additional assumption. This eventually leads to a
+ contradiction.\footnote{The rule involved there is negation elimination;
+ it holds in intuitionistic logic as well.}\<close>
theorem "((A \<longrightarrow> B) \<longrightarrow> A) \<longrightarrow> A"
proof
@@ -34,19 +33,17 @@
qed
qed
-text \<open>In the subsequent version the reasoning is rearranged by means
- of ``weak assumptions'' (as introduced by \isacommand{presume}).
- Before assuming the negated goal $\neg A$, its intended consequence
- $A \impl B$ is put into place in order to solve the main problem.
- Nevertheless, we do not get anything for free, but have to establish
- $A \impl B$ later on. The overall effect is that of a logical
- \emph{cut}.
+text \<open>In the subsequent version the reasoning is rearranged by means of
+ ``weak assumptions'' (as introduced by \isacommand{presume}). Before
+ assuming the negated goal \<open>\<not> A\<close>, its intended consequence \<open>A \<longrightarrow> B\<close> is put
+ into place in order to solve the main problem. Nevertheless, we do not get
+ anything for free, but have to establish \<open>A \<longrightarrow> B\<close> later on. The overall
+ effect is that of a logical \<^emph>\<open>cut\<close>.
- Technically speaking, whenever some goal is solved by
- \isacommand{show} in the context of weak assumptions then the latter
- give rise to new subgoals, which may be established separately. In
- contrast, strong assumptions (as introduced by \isacommand{assume})
- are solved immediately.\<close>
+ Technically speaking, whenever some goal is solved by \isacommand{show} in
+ the context of weak assumptions, then the latter give rise to new subgoals,
+ which may be established separately. In contrast, strong assumptions (as
+ introduced by \isacommand{assume}) are solved immediately.\<close>
theorem "((A \<longrightarrow> B) \<longrightarrow> A) \<longrightarrow> A"
proof
@@ -65,21 +62,20 @@
qed
qed
-text \<open>Note that the goals stemming from weak assumptions may be even
- left until qed time, where they get eventually solved ``by
- assumption'' as well. In that case there is really no fundamental
- difference between the two kinds of assumptions, apart from the
- order of reducing the individual parts of the proof configuration.
+text \<open>Note that the goals stemming from weak assumptions may even be left
+ until qed time, where they are eventually solved ``by assumption'' as
+ well. In that case there is really no fundamental difference between the
+ two kinds of assumptions, apart from the order of reducing the individual
+ parts of the proof configuration.
- Nevertheless, the ``strong'' mode of plain assumptions is quite
- important in practice to achieve robustness of proof text
- interpretation. By forcing both the conclusion \emph{and} the
- assumptions to unify with the pending goal to be solved, goal
- selection becomes quite deterministic. For example, decomposition
- with rules of the ``case-analysis'' type usually gives rise to
- several goals that only differ in there local contexts. With strong
- assumptions these may be still solved in any order in a predictable
- way, while weak ones would quickly lead to great confusion,
- eventually demanding even some backtracking.\<close>
+ Nevertheless, the ``strong'' mode of plain assumptions is quite important
+ in practice to achieve robustness of proof text interpretation. By forcing
+ both the conclusion \<^emph>\<open>and\<close> the assumptions to unify with the pending goal
+ to be solved, goal selection becomes quite deterministic. For example,
+ decomposition with rules of the ``case-analysis'' type usually gives rise
+ to several goals that only differ in their local contexts. With strong
+ assumptions these may still be solved in any order in a predictable way,
+ while weak ones would quickly lead to great confusion, eventually
+ demanding even some backtracking.\<close>
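+
+text \<open>A minimal demonstration of this behaviour (an illustration, not part
+  of the main development): the weak assumption below is left untouched by
+  the proof body and is only discharged ``by assumption'' at qed time.\<close>
+
+(* Illustration only: a weak assumption resolved at qed time. *)
+lemma "A \<longrightarrow> A"
+proof
+  presume A
+  then show A .
+qed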
end
--- a/src/HOL/Isar_Examples/Puzzle.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Puzzle.thy Mon Nov 02 13:58:19 2015 +0100
@@ -8,9 +8,8 @@
Original pen-and-paper proof due to Herbert Ehler; Isabelle tactic
script by Tobias Nipkow.}\<close>
-text \<open>\textbf{Problem.} Given some function $f\colon \Nat \to \Nat$
- such that $f \ap (f \ap n) < f \ap (\idt{Suc} \ap n)$ for all $n$.
- Demonstrate that $f$ is the identity.\<close>
+text \<open>\<^bold>\<open>Problem.\<close> Given some function \<open>f: \<nat> \<rightarrow> \<nat>\<close> such that
+ \<open>f (f n) < f (Suc n)\<close> for all \<open>n\<close>. Demonstrate that \<open>f\<close> is the identity.\<close>
theorem
assumes f_ax: "\<And>n. f (f n) < f (Suc n)"
--- a/src/HOL/Isar_Examples/Summation.thy Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/Summation.thy Mon Nov 02 13:58:19 2015 +0100
@@ -9,16 +9,16 @@
begin
text \<open>Subsequently, we prove some summation laws of natural numbers
- (including odds, squares, and cubes). These examples demonstrate
- how plain natural deduction (including induction) may be combined
- with calculational proof.\<close>
+ (including odds, squares, and cubes). These examples demonstrate how plain
+ natural deduction (including induction) may be combined with calculational
+ proof.\<close>
subsection \<open>Summation laws\<close>
-text \<open>The sum of natural numbers $0 + \cdots + n$ equals $n \times
- (n + 1)/2$. Avoiding formal reasoning about division we prove this
- equation multiplied by $2$.\<close>
+text \<open>The sum of natural numbers \<open>0 + \<cdots> + n\<close> equals \<open>n \<times> (n + 1)/2\<close>.
+ Avoiding formal reasoning about division, we prove this equation multiplied
+ by \<open>2\<close>.\<close>
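+
+text \<open>As a quick sanity check (illustration only): for \<open>n = 3\<close> the doubled
+  equation reads \<open>2 * (0 + 1 + 2 + 3) = 12 = 3 * (3 + 1)\<close>.\<close>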
theorem sum_of_naturals:
"2 * (\<Sum>i::nat=0..n. i) = n * (n + 1)"
@@ -35,47 +35,39 @@
by simp
qed
-text \<open>The above proof is a typical instance of mathematical
- induction. The main statement is viewed as some $\var{P} \ap n$
- that is split by the induction method into base case $\var{P} \ap
- 0$, and step case $\var{P} \ap n \Impl \var{P} \ap (\idt{Suc} \ap
- n)$ for arbitrary $n$.
+text \<open>The above proof is a typical instance of mathematical induction. The
+ main statement is viewed as some \<open>?P n\<close> that is split by the induction
+ method into base case \<open>?P 0\<close>, and step case \<open>?P n \<Longrightarrow> ?P (Suc n)\<close> for
+ arbitrary \<open>n\<close>.
- The step case is established by a short calculation in forward
- manner. Starting from the left-hand side $\var{S} \ap (n + 1)$ of
- the thesis, the final result is achieved by transformations
- involving basic arithmetic reasoning (using the Simplifier). The
- main point is where the induction hypothesis $\var{S} \ap n = n
- \times (n + 1)$ is introduced in order to replace a certain subterm.
- So the ``transitivity'' rule involved here is actual
- \emph{substitution}. Also note how the occurrence of ``\dots'' in
- the subsequent step documents the position where the right-hand side
- of the hypothesis got filled in.
+ The step case is established by a short calculation in forward manner.
+ Starting from the left-hand side \<open>?S (n + 1)\<close> of the thesis, the final
+ result is achieved by transformations involving basic arithmetic reasoning
+ (using the Simplifier). The main point is where the induction hypothesis
+ \<open>?S n = n \<times> (n + 1)\<close> is introduced in order to replace a certain subterm.
+ So the ``transitivity'' rule involved here is actual \<^emph>\<open>substitution\<close>. Also
+ note how the occurrence of ``\dots'' in the subsequent step documents the
+ position where the right-hand side of the hypothesis got filled in.
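+
+  Schematically, the chain of the step case has the shape
+  \<open>?S (n + 1) = ?S n + 2 * (n + 1) = n * (n + 1) + 2 * (n + 1) =
+  (n + 1) * ((n + 1) + 1)\<close>, where the middle equation is exactly the
+  substitution by the induction hypothesis.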
- \medskip A further notable point here is integration of calculations
- with plain natural deduction. This works so well in Isar for two
- reasons.
- \begin{enumerate}
-
- \item Facts involved in \isakeyword{also}~/ \isakeyword{finally}
- calculational chains may be just anything. There is nothing special
- about \isakeyword{have}, so the natural deduction element
- \isakeyword{assume} works just as well.
+ \<^medskip>
+ A further notable point here is integration of calculations with plain
+ natural deduction. This works so well in Isar for two reasons.
- \item There are two \emph{separate} primitives for building natural
- deduction contexts: \isakeyword{fix}~$x$ and
- \isakeyword{assume}~$A$. Thus it is possible to start reasoning
- with some new ``arbitrary, but fixed'' elements before bringing in
- the actual assumption. In contrast, natural deduction is
- occasionally formalized with basic context elements of the form
- $x:A$ instead.
+ \<^enum> Facts involved in \isakeyword{also}~/ \isakeyword{finally}
+ calculational chains may be just anything. There is nothing special
+ about \isakeyword{have}, so the natural deduction element
+ \isakeyword{assume} works just as well.
+
+ \<^enum> There are two \<^emph>\<open>separate\<close> primitives for building natural deduction
+ contexts: \isakeyword{fix}~\<open>x\<close> and \isakeyword{assume}~\<open>A\<close>. Thus it is
+ possible to start reasoning with some new ``arbitrary, but fixed''
+ elements before bringing in the actual assumption. In contrast, natural
+ deduction is occasionally formalized with basic context elements of the
+ form \<open>x:A\<close> instead.
- \end{enumerate}
-\<close>
-
-text \<open>\medskip We derive further summation laws for odds, squares,
- and cubes as follows. The basic technique of induction plus
- calculation is the same as before.\<close>
+ \<^medskip>
+ We derive further summation laws for odds, squares, and cubes as follows.
+ The basic technique of induction plus calculation is the same as before.\<close>
theorem sum_of_odds:
"(\<Sum>i::nat=0..<n. 2 * i + 1) = n^Suc (Suc 0)"
@@ -93,9 +85,8 @@
by simp
qed
-text \<open>Subsequently we require some additional tweaking of Isabelle
- built-in arithmetic simplifications, such as bringing in
- distributivity by hand.\<close>
+text \<open>Subsequently we require some additional tweaking of Isabelle's built-in
+ arithmetic simplifications, such as bringing in distributivity by hand.\<close>
lemmas distrib = add_mult_distrib add_mult_distrib2
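+
+text \<open>These rules can then be supplied explicitly where needed, e.g.\ in
+  steps of the form \<open>by (simp add: distrib)\<close> (an illustrative usage, not a
+  quotation of the proofs below).\<close>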
@@ -134,15 +125,13 @@
text \<open>Note that in contrast to older traditions of tactical proof
scripts, the structured proof applies induction on the original,
- unsimplified statement. This allows to state the induction cases
- robustly and conveniently. Simplification (or other automated)
- methods are then applied in terminal position to solve certain
- sub-problems completely.
+ unsimplified statement. This makes it possible to state the induction
+ cases robustly and conveniently. Simplification (or other automated)
+ methods are then applied in terminal position to solve certain
+ sub-problems completely.
- As a general rule of good proof style, automatic methods such as
- $\idt{simp}$ or $\idt{auto}$ should normally be never used as
- initial proof methods with a nested sub-proof to address the
- automatically produced situation, but only as terminal ones to solve
- sub-problems.\<close>
+ As a general rule of good proof style, automatic methods such as \<open>simp\<close> or
+ \<open>auto\<close> should normally never be used as initial proof methods with a
+ nested sub-proof to address the automatically produced situation, but only
+ as terminal ones to solve sub-problems.\<close>
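+
+text \<open>A minimal illustration of this rule of thumb (not taken from the
+  theories above): induction provides the initial, structural step, while
+  \<open>simp\<close> appears only in terminal position to close the subgoals.\<close>
+
+(* Illustration only: automation used terminally, not initially. *)
+lemma "n + 0 = (n::nat)"
+proof (induct n)
+  show "(0::nat) + 0 = 0" by simp
+next
+  fix n :: nat
+  assume "n + 0 = n"
+  then show "Suc n + 0 = Suc n" by simp
+qed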
end
--- a/src/HOL/Isar_Examples/document/root.tex Mon Nov 02 11:43:02 2015 +0100
+++ b/src/HOL/Isar_Examples/document/root.tex Mon Nov 02 13:58:19 2015 +0100
@@ -5,7 +5,7 @@
\begin{document}
\title{Miscellaneous Isabelle/Isar examples for Higher-Order Logic}
-\author{Markus Wenzel \\ \url{http://www.in.tum.de/~wenzelm/} \\[2ex]
+\author{Markus Wenzel \\[2ex]
With contributions by Gertrud Bauer and Tobias Nipkow}
\maketitle