(*<*)
theory Logic
imports LaTeXsugar
begin
(*>*)
text{*
\vspace{-5ex}
\section{Logic and proof beyond equality}
\label{sec:Logic}

\subsection{Formulas}

The core syntax of formulas (\textit{form} below)
provides the standard logical constructs, in decreasing order of precedence:
\[
\begin{array}{rcl}

\mathit{form} & ::= &
@{text"(form)"} ~\mid~
@{const True} ~\mid~
@{const False} ~\mid~
@{prop "term = term"}\\
&\mid& @{prop"\<not> form"} ~\mid~
@{prop "form \<and> form"} ~\mid~
@{prop "form \<or> form"} ~\mid~
@{prop "form \<longrightarrow> form"}\\
&\mid& @{prop"\<forall>x. form"} ~\mid~ @{prop"\<exists>x. form"}
\end{array}
\]
Terms are the ones we have seen all along, built from constants, variables,
function application and @{text"\<lambda>"}-abstraction, including all the syntactic
sugar like infix symbols, @{text "if"}, @{text "case"} etc.
\begin{warn}
Remember that formulas are simply terms of type @{text bool}. Hence
@{text "="} also works for formulas. Beware that @{text"="} has a higher
precedence than the other logical operators. Hence @{prop"s = t \<and> A"} means
@{text"(s = t) \<and> A"}, and @{prop"A\<and>B = B\<and>A"} means @{text"A \<and> (B = B) \<and> A"}.
Logical equivalence can also be written with
@{text "\<longleftrightarrow>"} instead of @{text"="}, where @{text"\<longleftrightarrow>"} has the same low
precedence as @{text"\<longrightarrow>"}. Hence @{text"A \<and> B \<longleftrightarrow> B \<and> A"} really means
@{text"(A \<and> B) \<longleftrightarrow> (B \<and> A)"}.
\end{warn}
\begin{warn}
Quantifiers need to be enclosed in parentheses if they are nested within
other constructs (just like @{text "if"}, @{text case} and @{text let}).
\end{warn}
The most frequent logical symbols have the following ASCII representations:
\begin{center}
\begin{tabular}{l@ {\qquad}l@ {\qquad}l}
@{text "\<forall>"} & \xsymbol{forall} & \texttt{ALL}\\
@{text "\<exists>"} & \xsymbol{exists} & \texttt{EX}\\
@{text "\<lambda>"} & \xsymbol{lambda} & \texttt{\%}\\
@{text "\<longrightarrow>"} & \texttt{-{}->}\\
@{text "\<longleftrightarrow>"} & \texttt{<->}\\
@{text "\<and>"} & \texttt{/\char`\\} & \texttt{\&}\\
@{text "\<or>"} & \texttt{\char`\\/} & \texttt{|}\\
@{text "\<not>"} & \xsymbol{not} & \texttt{\char`~}\\
@{text "\<noteq>"} & \xsymbol{noteq} & \texttt{\char`~=}
\end{tabular}
\end{center}
The first column shows the symbols, the second column ASCII representations
that Isabelle interfaces convert into the corresponding symbol,
and the third column shows ASCII representations that stay fixed.
\begin{warn}
The implication @{text"\<Longrightarrow>"} is part of the Isabelle framework. It structures
theorems and proof states, separating assumptions from conclusions.
The implication @{text"\<longrightarrow>"} is part of the logic HOL and can occur inside the
formulas that make up the assumptions and conclusion.
Theorems should be of the form @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"},
not @{text"A\<^isub>1 \<and> \<dots> \<and> A\<^isub>n \<longrightarrow> A"}. Both are logically equivalent
but the first one works better when using the theorem in further proofs.
\end{warn}

\subsection{Sets}

Sets of elements of type @{typ 'a} have type @{typ"'a set"}.
They can be finite or infinite. Sets come with the usual notation:
\begin{itemize}
\item @{term"{}"},\quad @{text"{e\<^isub>1,\<dots>,e\<^isub>n}"}
\item @{prop"e \<in> A"},\quad @{prop"A \<subseteq> B"}
\item @{term"A \<union> B"},\quad @{term"A \<inter> B"},\quad @{term"A - B"},\quad @{term"-A"}
\end{itemize}
and much more. @{const UNIV} is the set of all elements of some type.
Set comprehension is written @{term"{x. P}"}
rather than @{text"{x | P}"}, to emphasize the variable binding nature
of the construct.
\begin{warn}
In @{term"{x. P}"} the @{text x} must be a variable. Set comprehension
involving a proper term @{text t} must be written
\noquotes{@{term[source] "{t | x y. P}"}},
where @{text "x y"} are those free variables in @{text t}
that occur in @{text P}.
This is just a shorthand for @{term"{v. EX x y. v = t \<and> P}"}, where
@{text v} is a new variable. For example, @{term"{x+y|x. x \<in> A}"}
is short for \noquotes{@{term[source]"{v. \<exists>x. v = x+y \<and> x \<in> A}"}}.
\end{warn}

Here are the ASCII representations of the mathematical symbols:
\begin{center}
\begin{tabular}{l@ {\quad}l@ {\quad}l}
@{text "\<in>"} & \texttt{\char`\\\char`\<in>} & \texttt{:}\\
@{text "\<subseteq>"} & \texttt{\char`\\\char`\<subseteq>} & \texttt{<=}\\
@{text "\<union>"} & \texttt{\char`\\\char`\<union>} & \texttt{Un}\\
@{text "\<inter>"} & \texttt{\char`\\\char`\<inter>} & \texttt{Int}
\end{tabular}
\end{center}
Sets also allow bounded quantifications @{prop"ALL x : A. P"} and
@{prop"EX x : A. P"}.

\subsection{Proof automation}

So far we have only seen @{text simp} and @{text auto}: Both perform
rewriting, both can also prove linear arithmetic facts (no multiplication),
and @{text auto} is also able to prove simple logical or set-theoretic goals:
*}

lemma "\<forall>x. \<exists>y. x = y"
by auto

lemma "A \<subseteq> B \<inter> C \<Longrightarrow> A \<subseteq> B \<union> C"
by auto
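
text{* As one more small illustration (an extra example in the same spirit),
@{text auto} also copes with a goal that mixes intersection and union:
*}

lemma "A \<inter> B \<subseteq> A \<union> B"
by auto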

text{* where
\begin{quote}
\isacom{by} \textit{proof-method}
\end{quote}
is short for
\begin{quote}
\isacom{apply} \textit{proof-method}\\
\isacom{done}
\end{quote}
The key characteristics of both @{text simp} and @{text auto} are
\begin{itemize}
\item They show you where they got stuck, giving you an idea how to continue.
\item They perform the obvious steps but are highly incomplete.
\end{itemize}
A proof method is \concept{complete} if it can prove all true formulas.
There is no complete proof method for HOL, not even in theory.
Hence all our proof methods only differ in how incomplete they are.

A proof method that is still incomplete but tries harder than @{text auto} is
@{text fastforce}. It either succeeds or fails, it acts on the first
subgoal only, and it can be modified just like @{text auto}, e.g.\
with @{text "simp add"}. Here is a typical example of what @{text fastforce}
can do:
*}

lemma "\<lbrakk> \<forall>xs \<in> A. \<exists>ys. xs = ys @ ys; us \<in> A \<rbrakk>
   \<Longrightarrow> \<exists>n. length us = n+n"
by fastforce

text{* This lemma is out of reach for @{text auto} because of the
quantifiers. Even @{text fastforce} fails when the quantifier structure
becomes more complicated. In a few cases, its slow version @{text force}
succeeds where @{text fastforce} fails.

The method of choice for complex logical goals is @{text blast}. In the
following example, @{text T} and @{text A} are two binary predicates. It
is shown that if @{text T} is total, @{text A} is antisymmetric and @{text T} is
a subset of @{text A}, then @{text A} is a subset of @{text T}:
*}

lemma
  "\<lbrakk> \<forall>x y. T x y \<or> T y x;
     \<forall>x y. A x y \<and> A y x \<longrightarrow> x = y;
     \<forall>x y. T x y \<longrightarrow> A x y \<rbrakk>
   \<Longrightarrow> \<forall>x y. A x y \<longrightarrow> T x y"
by blast

text{*
We leave it to the reader to figure out why this lemma is true.
Method @{text blast}
\begin{itemize}
\item is (in principle) a complete proof procedure for first-order formulas,
a fragment of HOL. In practice there is a search bound.
\item does no rewriting and knows very little about equality.
\item covers logic, sets and relations.
\item either succeeds or fails.
\end{itemize}
Because of its strength in logic and sets and its weakness in equality
reasoning, it complements the earlier proof methods.


\subsubsection{Sledgehammer}

Command \isacom{sledgehammer} calls a number of external automatic
theorem provers (ATPs) that run for up to 30 seconds searching for a
proof. Some of these ATPs are part of the Isabelle installation, others are
queried over the internet. If successful, a proof command is generated and can
be inserted into your proof. The biggest win of \isacom{sledgehammer} is
that it will take into account the whole lemma library and you do not need to
feed in any lemma explicitly. For example,*}

lemma "\<lbrakk> xs @ ys = ys @ xs; length xs = length ys \<rbrakk> \<Longrightarrow> xs = ys"

txt{* cannot be solved by any of the standard proof methods, but
\isacom{sledgehammer} finds the following proof: *}

by (metis append_eq_conv_conj)
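
text{* As an additional aside, the same kind of @{text metis} call can also be
written by hand, supplying a suitable library lemma oneself. For instance, the
library lemma @{thm[source] append_Nil2}, which states @{text"xs @ [] = xs"},
alone suffices for the following:
*}

lemma "(xs @ []) @ (ys @ []) = xs @ ys"
by (metis append_Nil2)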

text{* We do not explain how the proof was found but what this command
means. For a start, Isabelle does not trust external tools (and in particular
not the translations from Isabelle's logic to those tools!)
and insists on a proof that it can check. This is what @{text metis} does.
It is given a list of lemmas and tries to find a proof just using those lemmas
(and pure logic). In contrast to @{text simp} and friends that know a lot of
lemmas already, using @{text metis} manually is tedious because one has
to find all the relevant lemmas first. But that is precisely what
\isacom{sledgehammer} does for us.
In this case lemma @{thm[source]append_eq_conv_conj} alone suffices:
@{thm[display] append_eq_conv_conj}
We leave it to the reader to figure out why this lemma suffices to prove
the above lemma, even without any knowledge of what the functions @{const take}
and @{const drop} do. Keep in mind that the variables in the two lemmas
are independent of each other, despite the same names, and that you can
substitute arbitrary values for the free variables in a lemma.

Just as for the other proof methods we have seen, there is no guarantee that
\isacom{sledgehammer} will find a proof if it exists. Nor is
\isacom{sledgehammer} superior to the other proof methods. They are
incomparable. Therefore it is recommended to apply @{text simp} or @{text
auto} before invoking \isacom{sledgehammer} on what is left.

\subsubsection{Arithmetic}

By arithmetic formulas we mean formulas involving variables, numbers, @{text
"+"}, @{text"-"}, @{text "="}, @{text "<"}, @{text "\<le>"} and the usual logical
connectives @{text"\<not>"}, @{text"\<and>"}, @{text"\<or>"}, @{text"\<longrightarrow>"},
@{text"\<longleftrightarrow>"}. Strictly speaking, this is known as \concept{linear arithmetic}
because it does not involve multiplication, although multiplication with
numbers, e.g.\ @{text"2*n"}, is allowed. Such formulas can be proved by
@{text arith}:
*}

lemma "\<lbrakk> (a::nat) \<le> x + b; 2*x < c \<rbrakk> \<Longrightarrow> 2*a + 1 \<le> 2*b + c"
by arith
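
text{* Another example in the same vein, this time chaining two strict
inequalities on natural numbers:
*}

lemma "\<lbrakk> (a::nat) < b; b < c \<rbrakk> \<Longrightarrow> a + 2 \<le> c"
by arith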

text{* In fact, @{text auto} and @{text simp} can prove many linear
arithmetic formulas already, like the one above, by calling a weak but fast
version of @{text arith}. Hence it is usually not necessary to invoke
@{text arith} explicitly.

The above example involves natural numbers, but integers (type @{typ int})
and real numbers (type @{text real}) are supported as well, as are a number
of further operators like @{const min} and @{const max}. On @{typ nat} and
@{typ int}, @{text arith} can even prove theorems with quantifiers in them,
but we will not enlarge on that here.


\subsubsection{Trying them all}

If you want to try all of the above automatic proof methods you simply type
\begin{isabelle}
\isacom{try}
\end{isabelle}
You can also add specific simplification and introduction rules:
\begin{isabelle}
\isacom{try} @{text"simp: \<dots> intro: \<dots>"}
\end{isabelle}
There is also a lightweight variant \isacom{try0} that does not call
\isacom{sledgehammer}.

\subsection{Single step proofs}

Although automation is nice, it often fails, at least initially, and you need
to find out why. When @{text fastforce} or @{text blast} simply fail, you have
no clue why. At this point, the stepwise
application of proof rules may be necessary. For example, if @{text blast}
fails on @{prop"A \<and> B"}, you want to attack the two
conjuncts @{text A} and @{text B} separately. This can
be achieved by applying \emph{conjunction introduction}
\[ @{thm[mode=Rule,show_question_marks]conjI}\ @{text conjI}
\]
to the proof state. We will now examine the details of this process.

\subsubsection{Instantiating unknowns}

We had briefly mentioned earlier that after proving some theorem,
Isabelle replaces all free variables @{text x} by so-called \concept{unknowns}
@{text "?x"}. We can see this clearly in rule @{thm[source] conjI}.
These unknowns can later be instantiated explicitly or implicitly:
\begin{itemize}
\item By hand, using @{text of}.
The expression @{text"conjI[of \"a=b\" \"False\"]"}
instantiates the unknowns in @{thm[source] conjI} from left to right with the
two formulas @{text"a=b"} and @{text False}, yielding the rule
@{thm[display,mode=Rule]conjI[of "a=b" False]}

In general, @{text"th[of string\<^isub>1 \<dots> string\<^isub>n]"} instantiates
the unknowns in the theorem @{text th} from left to right with the terms
@{text string\<^isub>1} to @{text string\<^isub>n}.

\item By unification. \concept{Unification} is the process of making two
terms syntactically equal by suitable instantiations of unknowns. For example,
unifying @{text"?P \<and> ?Q"} with \mbox{@{prop"a=b \<and> False"}} instantiates
@{text "?P"} with @{prop "a=b"} and @{text "?Q"} with @{prop False}.
\end{itemize}
We need not instantiate all unknowns. If we want to skip a particular one we
can just write @{text"_"} instead, for example @{text "conjI[of _ \"False\"]"}.
Unknowns can also be instantiated by name, for example
@{text "conjI[where ?P = \"a=b\" and ?Q = \"False\"]"}.


\subsubsection{Rule application}

\concept{Rule application} means applying a rule backwards to a proof state.
For example, applying rule @{thm[source]conjI} to a proof state
\begin{quote}
@{text"1. \<dots> \<Longrightarrow> A \<and> B"}
\end{quote}
results in two subgoals, one for each premise of @{thm[source]conjI}:
\begin{quote}
@{text"1. \<dots> \<Longrightarrow> A"}\\
@{text"2. \<dots> \<Longrightarrow> B"}
\end{quote}
In general, the application of a rule @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"}
to a subgoal \mbox{@{text"\<dots> \<Longrightarrow> C"}} proceeds in two steps:
\begin{enumerate}
\item
Unify @{text A} and @{text C}, thus instantiating the unknowns in the rule.
\item
Replace the subgoal @{text C} with @{text n} new subgoals @{text"A\<^isub>1"} to @{text"A\<^isub>n"}.
\end{enumerate}
This is the command to apply rule @{text xyz}:
\begin{quote}
\isacom{apply}@{text"(rule xyz)"}
\end{quote}
This is also called \concept{backchaining} with rule @{text xyz}.

\subsubsection{Introduction rules}

Conjunction introduction (@{thm[source] conjI}) is one example of a whole
class of rules known as \concept{introduction rules}. They explain under which
premises some logical construct can be introduced. Here are some further
useful introduction rules:
\[
\inferrule*[right=\mbox{@{text impI}}]{\mbox{@{text"?P \<Longrightarrow> ?Q"}}}{\mbox{@{text"?P \<longrightarrow> ?Q"}}}
\qquad
\inferrule*[right=\mbox{@{text allI}}]{\mbox{@{text"\<And>x. ?P x"}}}{\mbox{@{text"\<forall>x. ?P x"}}}
\]
\[
\inferrule*[right=\mbox{@{text iffI}}]{\mbox{@{text"?P \<Longrightarrow> ?Q"}} \\ \mbox{@{text"?Q \<Longrightarrow> ?P"}}}
{\mbox{@{text"?P = ?Q"}}}
\]
These rules are part of the logical system of \concept{natural deduction}
(e.g.\ \cite{HuthRyan}). Although we intentionally de-emphasize the basic rules
of logic in favour of automatic proof methods that allow you to take bigger
steps, these rules are helpful in locating where and why automation fails.
When applied backwards, these rules decompose the goal:
\begin{itemize}
\item @{thm[source] conjI} and @{thm[source]iffI} split the goal into two subgoals,
\item @{thm[source] impI} moves the left-hand side of a HOL implication into the list of assumptions,
\item and @{thm[source] allI} removes a @{text "\<forall>"} by turning the quantified variable into a fixed local variable of the subgoal.
\end{itemize}
Isabelle knows about these and a number of other introduction rules.
The command
\begin{quote}
\isacom{apply} @{text rule}
\end{quote}
automatically selects the appropriate rule for the current subgoal.

You can also turn your own theorems into introduction rules by giving them
the @{text"intro"} attribute, analogous to the @{text simp} attribute. In
that case @{text blast}, @{text fastforce} and (to a limited extent) @{text
auto} will automatically backchain with those theorems. The @{text intro}
attribute should be used with care because it increases the search space and
can lead to nontermination. Sometimes it is better to use it only in
specific calls of @{text blast} and friends. For example,
@{thm[source] le_trans}, transitivity of @{text"\<le>"} on type @{typ nat},
is not an introduction rule by default because of the disastrous effect
on the search space, but can be useful in specific situations:
*}

lemma "\<lbrakk> (a::nat) \<le> b; b \<le> c; c \<le> d; d \<le> e \<rbrakk> \<Longrightarrow> a \<le> e"
by(blast intro: le_trans)
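
text{* The same technique works with other transitivity rules, for example
with the library lemma @{thm[source] less_trans} for @{text"<"} (an extra
example in the same spirit):
*}

lemma "\<lbrakk> (a::nat) < b; b < c; c < d \<rbrakk> \<Longrightarrow> a < d"
by(blast intro: less_trans)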

text{*
Of course this is just an example and could be proved by @{text arith}, too.

\subsubsection{Forward proof}
\label{sec:forward-proof}

Forward proof means deriving new theorems from old theorems. We have already
seen a very simple form of forward proof: the @{text of} operator for
instantiating unknowns in a theorem. The big brother of @{text of} is @{text
OF} for applying one theorem to others. Given a theorem @{prop"A \<Longrightarrow> B"} called
@{text r} and a theorem @{text A'} called @{text r'}, the theorem @{text
"r[OF r']"} is the result of applying @{text r} to @{text r'}, where @{text
r} should be viewed as a function taking a theorem @{text A} and returning
@{text B}. More precisely, @{text A} and @{text A'} are unified, thus
instantiating the unknowns in @{text B}, and the result is the instantiated
@{text B}. Of course, unification may also fail.
\begin{warn}
Application of rules to other rules operates in the forward direction: from
the premises to the conclusion of the rule; application of rules to proof
states operates in the backward direction, from the conclusion to the
premises.
\end{warn}

In general @{text r} can be of the form @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"}
and there can be multiple argument theorems @{text r\<^isub>1} to @{text r\<^isub>m}
(with @{text"m \<le> n"}), in which case @{text "r[OF r\<^isub>1 \<dots> r\<^isub>m]"} is obtained
by unifying and thus proving @{text "A\<^isub>i"} with @{text "r\<^isub>i"}, @{text"i = 1\<dots>m"}.
Here is an example, where @{thm[source]refl} is the theorem
@{thm[show_question_marks] refl}:
*}

thm conjI[OF refl[of "a"] refl[of "b"]]

text{* yields the theorem @{thm conjI[OF refl[of "a"] refl[of "b"]]}.
The command \isacom{thm} merely displays the result.

Forward reasoning also makes sense in connection with proof states.
Therefore @{text blast}, @{text fastforce} and @{text auto} support a modifier
@{text dest} which instructs the proof method to use certain rules in a
forward fashion. If @{text r} is of the form \mbox{@{text "A \<Longrightarrow> B"}}, the modifier
\mbox{@{text"dest: r"}}
allows proof search to reason forward with @{text r}, i.e.\
to replace an assumption @{text A'}, where @{text A'} unifies with @{text A},
with the correspondingly instantiated @{text B}. For example, @{thm[source,show_question_marks] Suc_leD} is the theorem \mbox{@{thm Suc_leD}}, which works well for forward reasoning:
*}

lemma "Suc(Suc(Suc a)) \<le> b \<Longrightarrow> a \<le> b"
by(blast dest: Suc_leD)

text{* In this particular example we could have backchained with
@{thm[source] Suc_leD}, too, but because the premise is more complicated
than the conclusion this can easily lead to nontermination.

\subsubsection{Finding theorems}

Command \isacom{find\_theorems} searches for specific theorems in the current
theory. Search criteria include pattern matching on terms and on names.
For details see the Isabelle/Isar Reference Manual~\cite{IsarRef}.
\bigskip

\begin{warn}
To ease readability we will drop the question marks
in front of unknowns from now on.
\end{warn}


\section{Inductive definitions}
\label{sec:inductive-defs}

Inductive definitions are the third important definition facility, after
datatypes and recursive functions.
\sem
In fact, they are the key construct in the
definition of operational semantics in the second part of the book.
\endsem

\subsection{An example: even numbers}
\label{sec:Logic:even}

Here is a simple example of an inductively defined predicate:
\begin{itemize}
\item 0 is even
\item If $n$ is even, so is $n+2$.
\end{itemize}
The operative word ``inductive'' means that these are the only even numbers.
In Isabelle we give the two rules the names @{text ev0} and @{text evSS}
and write
*}

inductive ev :: "nat \<Rightarrow> bool" where
ev0: "ev 0" |
evSS: (*<*)"ev n \<Longrightarrow> ev (Suc(Suc n))"(*>*)
text_raw{* @{prop[source]"ev n \<Longrightarrow> ev (n + 2)"} *}

text{* To get used to inductive definitions, we will first prove a few
properties of @{const ev} informally before we descend to the Isabelle level.

How do we prove that some number is even, e.g.\ @{prop "ev 4"}? Simply by combining the defining rules for @{const ev}:
\begin{quote}
@{text "ev 0 \<Longrightarrow> ev (0 + 2) \<Longrightarrow> ev((0 + 2) + 2) = ev 4"}
\end{quote}

\subsubsection{Rule induction}

Showing that all even numbers have some property is more complicated. For
example, let us prove that the inductive definition of even numbers agrees
with the following recursive one:*}

fun even :: "nat \<Rightarrow> bool" where
"even 0 = True" |
"even (Suc 0) = False" |
"even (Suc(Suc n)) = even n"

text{* We prove @{prop"ev m \<Longrightarrow> even m"}. That is, we
assume @{prop"ev m"} and by induction on the form of its derivation
prove @{prop"even m"}. There are two cases corresponding to the two rules
for @{const ev}:
\begin{description}
\item[Case @{thm[source]ev0}:]
@{prop"ev m"} was derived by rule @{prop "ev 0"}: \\
@{text"\<Longrightarrow>"} @{prop"m=(0::nat)"} @{text"\<Longrightarrow>"} @{text "even m = even 0 = True"}
\item[Case @{thm[source]evSS}:]
@{prop"ev m"} was derived by rule @{prop "ev n \<Longrightarrow> ev(n+2)"}: \\
@{text"\<Longrightarrow>"} @{prop"m=n+(2::nat)"} and by induction hypothesis @{prop"even n"}\\
@{text"\<Longrightarrow>"} @{text"even m = even(n + 2) = even n = True"}
\end{description}

What we have just seen is a special case of \concept{rule induction}.
Rule induction applies to propositions of this form
\begin{quote}
@{prop "ev n \<Longrightarrow> P n"}
\end{quote}
That is, we want to prove a property @{prop"P n"}
for all even @{text n}. But if we assume @{prop"ev n"}, then there must be
some derivation of this assumption using the two defining rules for
@{const ev}. That is, we must prove
\begin{description}
\item[Case @{thm[source]ev0}:] @{prop"P(0::nat)"}
\item[Case @{thm[source]evSS}:] @{prop"\<lbrakk> ev n; P n \<rbrakk> \<Longrightarrow> P(n + 2::nat)"}
\end{description}
The corresponding rule is called @{thm[source] ev.induct} and looks like this:
\[
\inferrule{
\mbox{@{thm (prem 1) ev.induct[of "n"]}}\\
\mbox{@{thm (prem 2) ev.induct}}\\
\mbox{@{prop"!!n. \<lbrakk> ev n; P n \<rbrakk> \<Longrightarrow> P(n+2)"}}}
{\mbox{@{thm (concl) ev.induct[of "n"]}}}
\]
The first premise @{prop"ev n"} enforces that this rule can only be applied
in situations where we know that @{text n} is even.

Note that in the induction step we may not just assume @{prop"P n"} but also
\mbox{@{prop"ev n"}}, which is simply the premise of rule @{thm[source]
evSS}. Here is an example where the local assumption @{prop"ev n"} comes in
handy: we prove @{prop"ev m \<Longrightarrow> ev(m - 2)"} by induction on @{prop"ev m"}.
Case @{thm[source]ev0} requires us to prove @{prop"ev(0 - 2)"}, which follows
from @{prop"ev 0"} because @{prop"0 - 2 = (0::nat)"} on type @{typ nat}. In
case @{thm[source]evSS} we have \mbox{@{prop"m = n+(2::nat)"}} and may assume
@{prop"ev n"}, which implies @{prop"ev (m - 2)"} because @{text"m - 2 = (n +
2) - 2 = n"}. We did not need the induction hypothesis at all for this proof;
it is just a case analysis of which rule was used, but having @{prop"ev
n"} at our disposal in case @{thm[source]evSS} was essential.
This case analysis of rules is also called ``rule inversion''
and is discussed in more detail in \autoref{ch:Isar}.

\subsubsection{In Isabelle}

Let us now recast the above informal proofs in Isabelle. For a start,
we use @{const Suc} terms instead of numerals in rule @{thm[source]evSS}:
@{thm[display] evSS}
This avoids the difficulty of unifying @{text"n+2"} with some numeral,
which is not automatic.

The simplest way to prove @{prop"ev(Suc(Suc(Suc(Suc 0))))"} is in a forward
direction: @{text "evSS[OF evSS[OF ev0]]"} yields the theorem @{thm evSS[OF
evSS[OF ev0]]}. Alternatively, you can also prove it as a lemma in a backwards
fashion. Although this is more verbose, it allows us to demonstrate how each
rule application changes the proof state: *}

lemma "ev(Suc(Suc(Suc(Suc 0))))"
txt{*
@{subgoals[display,indent=0,goals_limit=1]}
*}
apply(rule evSS)
txt{*
@{subgoals[display,indent=0,goals_limit=1]}
*}
apply(rule evSS)
txt{*
@{subgoals[display,indent=0,goals_limit=1]}
*}
apply(rule ev0)
done
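
text{* The forward combination with @{text OF} scales to any concrete even
number; as an extra example, one more @{thm[source] evSS} step establishes
evenness of @{text 6}: *}

thm evSS[OF evSS[OF evSS[OF ev0]]]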

text{* \indent
Rule induction is applied by giving the induction rule explicitly via the
@{text"rule:"} modifier: *}

lemma "ev m \<Longrightarrow> even m"
apply(induction rule: ev.induct)
by(simp_all)
|
|
576 |
|
|
577 |
text{* Both cases are automatic. Note that if there are multiple assumptions
|
|
578 |
of the form @{prop"ev t"}, method @{text induction} will induct on the leftmost
|
|
579 |
one.
|
|
580 |
|
|
581 |
As a bonus, we also prove the remaining direction of the equivalence of
|
|
582 |
@{const ev} and @{const even}:
|
|
583 |
*}
|
|
584 |
|
|
585 |
lemma "even n \<Longrightarrow> ev n"
|
|
586 |
apply(induction n rule: even.induct)
|
|
587 |
|
|
588 |
txt{* This is a proof by computation induction on @{text n} (see
|
|
589 |
\autoref{sec:recursive-funs}) that sets up three subgoals corresponding to
|
|
590 |
the three equations for @{const even}:
|
|
591 |
@{subgoals[display,indent=0]}
|
|
592 |
The first and third subgoals follow with @{thm[source]ev0} and @{thm[source]evSS}, and the second subgoal is trivially true because @{prop"even(Suc 0)"} is @{const False}:
|
|
593 |
*}
|
|
594 |
|
|
595 |
by (simp_all add: ev0 evSS)
|
|
596 |
|
|
597 |
text{* The rules for @{const ev} make perfect simplification and introduction
|
|
598 |
rules because their premises are always smaller than the conclusion. It
|
|
599 |
makes sense to turn them into simplification and introduction rules
|
|
600 |
permanently, to enhance proof automation: *}
|
|
601 |
|
|
602 |
declare ev.intros[simp,intro]
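
text{* With these declarations in place, goals like the one proved step by
step above are discharged automatically; the following one-line check is our
own addition: *}

lemma "ev(Suc(Suc(Suc(Suc 0))))"
by simp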

text{* The rules of an inductive definition are not simplification rules by
default because, in contrast to recursive functions, there is no termination
requirement for inductive definitions.

\subsubsection{Inductive versus recursive}

We have seen two definitions of the notion of evenness, an inductive and a
recursive one. Which one is better? Much of the time, the recursive one is
more convenient: it allows us to do rewriting in the middle of terms, and it
expresses both the positive information (which numbers are even) and the
negative information (which numbers are not even) directly. An inductive
definition only expresses the positive information directly. The negative
information, for example, that @{text 1} is not even, has to be proved from
it (by induction or rule inversion). On the other hand, rule induction is
tailor-made for proving \mbox{@{prop"ev n \<Longrightarrow> P n"}} because it only asks you
to prove the positive cases. In the proof of @{prop"even n \<Longrightarrow> P n"} by
computation induction via @{thm[source]even.induct}, we are also presented
with the trivial negative cases. If you want the convenience of both
rewriting and rule induction, you can make two definitions and show their
equivalence (as above) or make one definition and prove additional properties
from it, for example rule induction from computation induction.
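To illustrate rule inversion, the fact that @{text 1} is not even can be
proved by case analysis with the rule @{thm[source] ev.cases} that
\isacom{inductive} generates; this one-line proof is our own illustration:
*}

lemma "\<not> ev(Suc 0)"
by(auto elim: ev.cases)

text{*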

But many concepts do not admit a recursive definition at all because there is
no datatype for the recursion (for example, the transitive closure of a
relation), or the recursion would not terminate (for example,
an interpreter for a programming language). Even if there is a recursive
definition, if we are only interested in the positive information, the
inductive definition may be much simpler.

\subsection{The reflexive transitive closure}
\label{sec:star}

Evenness is really more conveniently expressed recursively than inductively.
As a second and very typical example of an inductive definition we define the
reflexive transitive closure.
\sem
It will also be an important building block for
some of the semantics considered in the second part of the book.
\endsem

The reflexive transitive closure, called @{text star} below, is a function
that maps a binary predicate to another binary predicate: if @{text r} is of
type @{text"\<tau> \<Rightarrow> \<tau> \<Rightarrow> bool"} then @{term "star r"} is again of type @{text"\<tau> \<Rightarrow>
\<tau> \<Rightarrow> bool"}, and @{prop"star r x y"} means that @{text x} and @{text y} are in
the relation @{term"star r"}. Think @{term"r^*"} when you see @{term"star
r"}, because @{text"star r"} is meant to be the reflexive transitive closure.
That is, @{prop"star r x y"} is meant to be true if from @{text x} we can
reach @{text y} in finitely many @{text r} steps. This concept is naturally
defined inductively: *}

inductive star :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool" for r where
refl: "star r x x" |
step: "r x y \<Longrightarrow> star r y z \<Longrightarrow> star r x z"

text{* The base case @{thm[source] refl} is reflexivity: @{term "x=y"}. The
step case @{thm[source]step} combines an @{text r} step (from @{text x} to
@{text y}) and a @{term"star r"} step (from @{text y} to @{text z}) into a
@{term"star r"} step (from @{text x} to @{text z}).
The ``\isacom{for}~@{text r}'' in the header is merely a hint to Isabelle
that @{text r} is a fixed parameter of @{const star}, in contrast to the
further parameters of @{const star}, which change. As a result, Isabelle
generates a simpler induction rule.

By definition @{term"star r"} is reflexive. It is also transitive, but we
need rule induction to prove that: *}

lemma star_trans: "star r x y \<Longrightarrow> star r y z \<Longrightarrow> star r x z"
apply(induction rule: star.induct)
(*<*)
defer
apply(rename_tac u x y)
defer
(*>*)
txt{* The induction is over @{prop"star r x y"} and we try to prove
\mbox{@{prop"star r y z \<Longrightarrow> star r x z"}},
which we abbreviate by @{prop"P x y"}. These are our two subgoals:
@{subgoals[display,indent=0]}
The first one is @{prop"P x x"}, the result of case @{thm[source]refl},
and it is trivial.
*}
apply(assumption)
txt{* Let us examine subgoal @{text 2}, case @{thm[source] step}.
Assumptions @{prop"r u x"} and \mbox{@{prop"star r x y"}}
are the premises of rule @{thm[source]step}.
Assumption @{prop"star r y z \<Longrightarrow> star r x z"} is \mbox{@{prop"P x y"}},
the IH coming from @{prop"star r x y"}. We have to prove @{prop"P u y"},
which we do by assuming @{prop"star r y z"} and proving @{prop"star r u z"}.
The proof itself is straightforward: from \mbox{@{prop"star r y z"}} the IH
leads to @{prop"star r x z"} which, together with @{prop"r u x"},
leads to \mbox{@{prop"star r u z"}} via rule @{thm[source]step}:
*}
apply(metis step)
done
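
text{* As a small corollary of @{thm[source] star_trans} (an illustrative
lemma of our own, not part of the original development), a single @{text r}
step can also be appended at the end of a @{term"star r"} chain: *}

lemma star_step1: "star r x y \<Longrightarrow> r y z \<Longrightarrow> star r x z"
by(metis star_trans step refl)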

text{*

\subsection{The general case}

Inductive definitions have approximately the following general form:
\begin{quote}
\isacom{inductive} @{text"I :: \"\<tau> \<Rightarrow> bool\""} \isacom{where}
\end{quote}
followed by a sequence of (possibly named) rules of the form
\begin{quote}
@{text "\<lbrakk> I a\<^isub>1; \<dots>; I a\<^isub>n \<rbrakk> \<Longrightarrow> I a"}
\end{quote}
separated by @{text"|"}. As usual, @{text n} can be 0.
The corresponding rule induction principle
@{text I.induct} applies to propositions of the form
\begin{quote}
@{prop "I x \<Longrightarrow> P x"}
\end{quote}
where @{text P} may itself be a chain of implications.
\begin{warn}
Rule induction is always on the leftmost premise of the goal.
Hence @{text "I x"} must be the first premise.
\end{warn}
Proving @{prop "I x \<Longrightarrow> P x"} by rule induction means proving
for every rule of @{text I} that @{text P} is invariant:
\begin{quote}
@{text "\<lbrakk> I a\<^isub>1; P a\<^isub>1; \<dots>; I a\<^isub>n; P a\<^isub>n \<rbrakk> \<Longrightarrow> P a"}
\end{quote}
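
For example, instantiating this scheme for @{const ev} yields the proof
obligation @{text "P 0"} for rule @{thm[source]ev0} and
\begin{quote}
@{text "\<lbrakk> ev n; P n \<rbrakk> \<Longrightarrow> P(Suc(Suc n))"}
\end{quote}
for rule @{thm[source]evSS}; these are exactly the two cases generated by
@{text ev.induct} in the proofs above.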

The above format for inductive definitions is simplified in a number of
respects. @{text I} can have any number of arguments and each rule can have
additional premises not involving @{text I}, so-called \concept{side
conditions}. In rule inductions, these side conditions appear as additional
assumptions. The \isacom{for} clause seen in the definition of the reflexive
transitive closure merely simplifies the form of the induction rule.
*}
(*<*)
end
(*>*)
|