@{prop "form \<longrightarrow> form"}\index{$HOL2@\isasymlongrightarrow}\\
&\mid& @{prop"\<forall>x. form"}\index{$HOL6@\isasymforall} ~\mid~ @{prop"\<exists>x. form"}\index{$HOL7@\isasymexists}
\end{array}
\]
Terms are the ones we have seen all along, built from constants, variables,
function application and \<open>\<lambda>\<close>-abstraction, including all the syntactic
sugar like infix symbols, \<open>if\<close>, \<open>case\<close>, etc.
\begin{warn}
Remember that formulas are simply terms of type \<open>bool\<close>. Hence
\<open>=\<close> also works for formulas. Beware that \<open>=\<close> has a higher
precedence than the other logical operators. Hence @{prop"s = t \<and> A"} means
\<open>(s = t) \<and> A\<close>, and @{prop"A\<and>B = B\<and>A"} means \<open>A \<and> (B = B) \<and> A\<close>.
Logical equivalence can also be written with
\<open>\<longleftrightarrow>\<close> instead of \<open>=\<close>, where \<open>\<longleftrightarrow>\<close> has the same low
precedence as \<open>\<longrightarrow>\<close>. Hence \<open>A \<and> B \<longleftrightarrow> B \<and> A\<close> really means
\<open>(A \<and> B) \<longleftrightarrow> (B \<and> A)\<close>.
\end{warn}
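For example, that reading of \<open>A \<and> B \<longleftrightarrow> B \<and> A\<close> is indeed a provable
equivalence (a minimal illustrative lemma, discharged by \<open>auto\<close>):
\<close>

lemma "A \<and> B \<longleftrightarrow> B \<and> A"
by auto

text\<open>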
\begin{warn}
Quantifiers need to be enclosed in parentheses if they are nested within
other constructs (just like \<open>if\<close>, \<open>case\<close> and \<open>let\<close>).
\end{warn}
The most frequent logical symbols and their ASCII representations are shown
in Fig.~\ref{fig:log-symbols}.
\begin{figure}
\begin{center}
\begin{tabular}{l@ {\qquad}l@ {\qquad}l}
\<open>\<forall>\<close> & \xsymbol{forall} & \texttt{ALL}\\
\<open>\<exists>\<close> & \xsymbol{exists} & \texttt{EX}\\
\<open>\<lambda>\<close> & \xsymbol{lambda} & \texttt{\%}\\
\<open>\<longrightarrow>\<close> & \texttt{-{}->}\\
\<open>\<longleftrightarrow>\<close> & \texttt{<->}\\
\<open>\<and>\<close> & \texttt{/\char`\\} & \texttt{\&}\\
\<open>\<or>\<close> & \texttt{\char`\\/} & \texttt{|}\\
\<open>\<not>\<close> & \xsymbol{not} & \texttt{\char`~}\\
\<open>\<noteq>\<close> & \xsymbol{noteq} & \texttt{\char`~=}
\end{tabular}
\end{center}
\caption{Logical symbols and their ASCII forms}
\label{fig:log-symbols}
\end{figure}
depends on the interface. The ASCII forms \texttt{/\char`\\} and
\texttt{\char`\\/}
are special in that they are merely keyboard shortcuts for the interface and
not logical symbols by themselves.
\begin{warn}
The implication \<open>\<Longrightarrow>\<close> is part of the Isabelle framework. It structures
theorems and proof states, separating assumptions from conclusions.
The implication \<open>\<longrightarrow>\<close> is part of the logic HOL and can occur inside the
formulas that make up the assumptions and conclusion.
Theorems should be of the form \<open>\<lbrakk> A\<^sub>1; \<dots>; A\<^sub>n \<rbrakk> \<Longrightarrow> A\<close>,
not \<open>A\<^sub>1 \<and> \<dots> \<and> A\<^sub>n \<longrightarrow> A\<close>. Both are logically equivalent
but the first one works better when using the theorem in further proofs.
\end{warn}

\section{Sets}
\label{sec:Sets}

Sets of elements of type @{typ 'a} have type @{typ"'a set"}\index{set@\<open>set\<close>}.
They can be finite or infinite. Sets come with the usual notation:
\begin{itemize}
\item \indexed{@{term"{}"}}{$IMP042},\quad \<open>{e\<^sub>1,\<dots>,e\<^sub>n}\<close>
\item @{prop"e \<in> A"}\index{$HOLSet0@\isasymin},\quad @{prop"A \<subseteq> B"}\index{$HOLSet2@\isasymsubseteq}
\item @{term"A \<union> B"}\index{$HOLSet4@\isasymunion},\quad @{term"A \<inter> B"}\index{$HOLSet5@\isasyminter},\quad @{term"A - B"},\quad @{term"-A"}
\end{itemize}
(where @{term"A-B"} and \<open>-A\<close> are set difference and complement)
and much more. @{const UNIV} is the set of all elements of some type.
Set comprehension\index{set comprehension} is written
@{term"{x. P}"}\index{$IMP042@@{term"{x. P}"}} rather than \<open>{x | P}\<close>.
\begin{warn}
In @{term"{x. P}"} the \<open>x\<close> must be a variable. Set comprehension
involving a proper term \<open>t\<close> must be written
\noquotes{@{term[source] "{t | x y. P}"}}\index{$IMP042@\<open>{t |x. P}\<close>},
where \<open>x y\<close> are those free variables in \<open>t\<close>
that occur in \<open>P\<close>.
This is just a shorthand for @{term"{v. \<exists>x y. v = t \<and> P}"}, where
\<open>v\<close> is a new variable. For example, @{term"{x+y|x. x \<in> A}"}
is short for \noquotes{@{term[source]"{v. \<exists>x. v = x+y \<and> x \<in> A}"}}.
\end{warn}
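For example, a comprehension can be related to the set operations introduced
above (a minimal illustrative lemma, provable by \<open>auto\<close>):
\<close>

lemma "{x. x \<in> A \<and> x \<in> B} = A \<inter> B"
by auto

text\<open>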

Here are the ASCII representations of the mathematical symbols:
\begin{center}
\begin{tabular}{l@ {\quad}l@ {\quad}l}
\<open>\<in>\<close> & \texttt{\char`\\\char`\<in>} & \texttt{:}\\
\<open>\<subseteq>\<close> & \texttt{\char`\\\char`\<subseteq>} & \texttt{<=}\\
\<open>\<union>\<close> & \texttt{\char`\\\char`\<union>} & \texttt{Un}\\
\<open>\<inter>\<close> & \texttt{\char`\\\char`\<inter>} & \texttt{Int}
\end{tabular}
\end{center}
Sets also allow bounded quantifications @{prop"\<forall>x \<in> A. P"} and
@{prop"\<exists>x \<in> A. P"}.
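For example, a bounded quantification over a small finite set can be checked
directly (a minimal sketch, provable by \<open>auto\<close>):
\<close>

lemma "\<forall>x \<in> {0::nat, 1}. x \<le> 1"
by auto

text\<open>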

For the more ambitious, there are also \<open>\<Union>\<close>\index{$HOLSet6@\isasymUnion}
and \<open>\<Inter>\<close>\index{$HOLSet7@\isasymInter}:
\begin{center}
@{thm Union_eq} \qquad @{thm Inter_eq}
\end{center}
The ASCII forms of \<open>\<Union>\<close> are \texttt{\char`\\\char`\<Union>} and \texttt{Union},
those of \<open>\<Inter>\<close> are \texttt{\char`\\\char`\<Inter>} and \texttt{Inter}.
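For example, on a small finite set of sets, \<open>\<Union>\<close> behaves as expected
(a minimal illustrative lemma, provable by \<open>auto\<close>):
\<close>

lemma "\<Union> {A, B} = A \<union> B"
by auto

text\<open>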
There are also indexed unions and intersections:
\begin{center}
@{thm[eta_contract=false] UNION_eq} \\ @{thm[eta_contract=false] INTER_eq}
\end{center}
The ASCII forms are \ \texttt{UN x:A.~B} \ and \ \texttt{INT x:A.~B} \
substitute arbitrary values for the free variables in a lemma.

Just as for the other proof methods we have seen, there is no guarantee that
\isacom{sledgehammer} will find a proof if it exists. Nor is
\isacom{sledgehammer} superior to the other proof methods. They are
incomparable. Therefore it is recommended to apply \<open>simp\<close> or
\<open>auto\<close> before invoking \isacom{sledgehammer} on what is left.

\subsection{Arithmetic}

By arithmetic formulas we mean formulas involving variables, numbers,
\<open>+\<close>, \<open>-\<close>, \<open>=\<close>, \<open><\<close>, \<open>\<le>\<close> and the usual logical
connectives \<open>\<not>\<close>, \<open>\<and>\<close>, \<open>\<or>\<close>, \<open>\<longrightarrow>\<close>,
\<open>\<longleftrightarrow>\<close>. Strictly speaking, this is known as \concept{linear arithmetic}
because it does not involve multiplication, although multiplication with
numbers, e.g., \<open>2*n\<close>, is allowed. Such formulas can be proved by
\indexed{\<open>arith\<close>}{arith}:
\<close>

lemma "\<lbrakk> (a::nat) \<le> x + b; 2*x < c \<rbrakk> \<Longrightarrow> 2*a + 1 \<le> 2*b + c"
by arith

text\<open>In fact, \<open>auto\<close> and \<open>simp\<close> can prove many linear
arithmetic formulas already, like the one above, by calling a weak but fast
version of \<open>arith\<close>. Hence it is usually not necessary to invoke
\<open>arith\<close> explicitly.

The above example involves natural numbers, but integers (type @{typ int})
and real numbers (type \<open>real\<close>) are supported as well. As are a number
of further operators like @{const min} and @{const max}. On @{typ nat} and
@{typ int}, \<open>arith\<close> can even prove theorems with quantifiers in them,
but we will not enlarge on that here.


\subsection{Trying Them All}

\end{isabelle}
There is also a lightweight variant \isacom{try0} that does not call
sledgehammer. If desired, specific simplification and introduction rules
can be added:
\begin{isabelle}
\isacom{try0} \<open>simp: \<dots> intro: \<dots>\<close>
\end{isabelle}

\section{Single Step Proofs}

Although automation is nice, it often fails, at least initially, and you need
to find out why. When \<open>fastforce\<close> or \<open>blast\<close> simply fail, you have
no clue why. At this point, the stepwise
application of proof rules may be necessary. For example, if \<open>blast\<close>
fails on @{prop"A \<and> B"}, you want to attack the two
conjuncts \<open>A\<close> and \<open>B\<close> separately. This can
be achieved by applying \emph{conjunction introduction}
\[ @{thm[mode=Rule,show_question_marks]conjI}\ \<open>conjI\<close>
\]
to the proof state. We will now examine the details of this process.

\subsection{Instantiating Unknowns}

We had briefly mentioned earlier that after proving some theorem,
Isabelle replaces all free variables \<open>x\<close> by so-called \conceptidx{unknowns}{unknown}
\<open>?x\<close>. We can see this clearly in rule @{thm[source] conjI}.
These unknowns can later be instantiated explicitly or implicitly:
\begin{itemize}
\item By hand, using \indexed{\<open>of\<close>}{of}.
The expression \<open>conjI[of "a=b" "False"]\<close>
instantiates the unknowns in @{thm[source] conjI} from left to right with the
two formulas \<open>a=b\<close> and \<open>False\<close>, yielding the rule
@{thm[display,mode=Rule,margin=100]conjI[of "a=b" False]}

In general, \<open>th[of string\<^sub>1 \<dots> string\<^sub>n]\<close> instantiates
the unknowns in the theorem \<open>th\<close> from left to right with the terms
\<open>string\<^sub>1\<close> to \<open>string\<^sub>n\<close>.

\item By unification. \conceptidx{Unification}{unification} is the process of making two
terms syntactically equal by suitable instantiations of unknowns. For example,
unifying \<open>?P \<and> ?Q\<close> with \mbox{@{prop"a=b \<and> False"}} instantiates
\<open>?P\<close> with @{prop "a=b"} and \<open>?Q\<close> with @{prop False}.
\end{itemize}
We need not instantiate all unknowns. If we want to skip a particular one we
can write \<open>_\<close> instead, for example \<open>conjI[of _ "False"]\<close>.
Unknowns can also be instantiated by name using \indexed{\<open>where\<close>}{where}, for example
\<open>conjI[where ?P = "a=b"\<close> \isacom{and} \<open>?Q = "False"]\<close>.
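For example, both styles of instantiation can be inspected with \isacom{thm}
(where \<open>a\<close> and \<open>b\<close> are just free variables):
\<close>

thm conjI[of "a=b" "False"]
thm conjI[where ?Q = "False"]

text\<open>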


\subsection{Rule Application}

\conceptidx{Rule application}{rule application} means applying a rule backwards to a proof state.
For example, applying rule @{thm[source]conjI} to a proof state
\begin{quote}
\<open>1. \<dots> \<Longrightarrow> A \<and> B\<close>
\end{quote}
results in two subgoals, one for each premise of @{thm[source]conjI}:
\begin{quote}
\<open>1. \<dots> \<Longrightarrow> A\<close>\\
\<open>2. \<dots> \<Longrightarrow> B\<close>
\end{quote}
In general, the application of a rule \<open>\<lbrakk> A\<^sub>1; \<dots>; A\<^sub>n \<rbrakk> \<Longrightarrow> A\<close>
to a subgoal \mbox{\<open>\<dots> \<Longrightarrow> C\<close>} proceeds in two steps:
\begin{enumerate}
\item
Unify \<open>A\<close> and \<open>C\<close>, thus instantiating the unknowns in the rule.
\item
Replace the subgoal \<open>C\<close> with \<open>n\<close> new subgoals \<open>A\<^sub>1\<close> to \<open>A\<^sub>n\<close>.
\end{enumerate}
This is the command to apply rule \<open>xyz\<close>:
\begin{quote}
\isacom{apply}\<open>(rule xyz)\<close>\index{rule@\<open>rule\<close>}
\end{quote}
This is also called \concept{backchaining} with rule \<open>xyz\<close>.
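For example, backchaining with @{thm[source] conjI} and closing each of the
resulting subgoals with the \<open>assumption\<close> method gives this minimal
apply-script sketch:
\<close>

lemma "\<lbrakk> A; B \<rbrakk> \<Longrightarrow> A \<and> B"
apply(rule conjI)
 apply assumption
apply assumption
done

text\<open>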

\subsection{Introduction Rules}

Conjunction introduction (@{thm[source] conjI}) is one example of a whole
class of rules known as \conceptidx{introduction rules}{introduction rule}. They explain under which
premises some logical construct can be introduced. Here are some further
useful introduction rules:
\[
\inferrule*[right=\mbox{\<open>impI\<close>}]{\mbox{\<open>?P \<Longrightarrow> ?Q\<close>}}{\mbox{\<open>?P \<longrightarrow> ?Q\<close>}}
\qquad
\inferrule*[right=\mbox{\<open>allI\<close>}]{\mbox{\<open>\<And>x. ?P x\<close>}}{\mbox{\<open>\<forall>x. ?P x\<close>}}
\]
\[
\inferrule*[right=\mbox{\<open>iffI\<close>}]{\mbox{\<open>?P \<Longrightarrow> ?Q\<close>} \\ \mbox{\<open>?Q \<Longrightarrow> ?P\<close>}}
{\mbox{\<open>?P = ?Q\<close>}}
\]
These rules are part of the logical system of \concept{natural deduction}
(e.g., @{cite HuthRyan}). Although we intentionally de-emphasize the basic rules
of logic in favour of automatic proof methods that allow you to take bigger
steps, these rules are helpful in locating where and why automation fails.
When applied backwards, these rules decompose the goal:
\begin{itemize}
\item @{thm[source] conjI} and @{thm[source]iffI} split the goal into two subgoals,
\item @{thm[source] impI} moves the left-hand side of a HOL implication into the list of assumptions,
\item and @{thm[source] allI} removes a \<open>\<forall>\<close> by turning the quantified variable into a fixed local variable of the subgoal.
\end{itemize}
Isabelle knows about these and a number of other introduction rules.
The command
\begin{quote}
\isacom{apply} \<open>rule\<close>\index{rule@\<open>rule\<close>}
\end{quote}
automatically selects the appropriate rule for the current subgoal.
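For example, on an implication, \isacom{apply} \<open>rule\<close> selects
@{thm[source] impI} (a minimal sketch):
\<close>

lemma "A \<longrightarrow> A"
apply rule
apply assumption
done

text\<open>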
417 |
415 |
418 You can also turn your own theorems into introduction rules by giving them |
416 You can also turn your own theorems into introduction rules by giving them |
419 the \indexed{@{text"intro"}}{intro} attribute, analogous to the @{text simp} attribute. In |
417 the \indexed{\<open>intro\<close>}{intro} attribute, analogous to the \<open>simp\<close> attribute. In |
420 that case @{text blast}, @{text fastforce} and (to a limited extent) @{text |
418 that case \<open>blast\<close>, \<open>fastforce\<close> and (to a limited extent) \<open>auto\<close> will automatically backchain with those theorems. The \<open>intro\<close> |
421 auto} will automatically backchain with those theorems. The @{text intro} |
|
422 attribute should be used with care because it increases the search space and |
419 attribute should be used with care because it increases the search space and |
423 can lead to nontermination. Sometimes it is better to use it only in |
420 can lead to nontermination. Sometimes it is better to use it only in |
424 specific calls of @{text blast} and friends. For example, |
421 specific calls of \<open>blast\<close> and friends. For example, |
425 @{thm[source] le_trans}, transitivity of @{text"\<le>"} on type @{typ nat}, |
422 @{thm[source] le_trans}, transitivity of \<open>\<le>\<close> on type @{typ nat}, |
426 is not an introduction rule by default because of the disastrous effect |
423 is not an introduction rule by default because of the disastrous effect |
427 on the search space, but can be useful in specific situations: |
424 on the search space, but can be useful in specific situations: |
428 \<close> |
425 \<close> |
429 |
426 |
430 lemma "\<lbrakk> (a::nat) \<le> b; b \<le> c; c \<le> d; d \<le> e \<rbrakk> \<Longrightarrow> a \<le> e" |
427 lemma "\<lbrakk> (a::nat) \<le> b; b \<le> c; c \<le> d; d \<le> e \<rbrakk> \<Longrightarrow> a \<le> e" |
431 by(blast intro: le_trans) |
428 by(blast intro: le_trans) |
432 |
429 |
433 text\<open> |
430 text\<open> |
434 Of course this is just an example and could be proved by @{text arith}, too. |
431 Of course this is just an example and could be proved by \<open>arith\<close>, too. |
435 |
432 |
436 \subsection{Forward Proof} |
433 \subsection{Forward Proof} |
437 \label{sec:forward-proof} |
434 \label{sec:forward-proof} |
438 |
435 |
439 Forward proof means deriving new theorems from old theorems. We have already |
436 Forward proof means deriving new theorems from old theorems. We have already |
440 seen a very simple form of forward proof: the @{text of} operator for |
437 seen a very simple form of forward proof: the \<open>of\<close> operator for |
441 instantiating unknowns in a theorem. The big brother of @{text of} is |
438 instantiating unknowns in a theorem. The big brother of \<open>of\<close> is |
442 \indexed{@{text OF}}{OF} for applying one theorem to others. Given a theorem @{prop"A \<Longrightarrow> B"} called |
439 \indexed{\<open>OF\<close>}{OF} for applying one theorem to others. Given a theorem @{prop"A \<Longrightarrow> B"} called |
443 @{text r} and a theorem @{text A'} called @{text r'}, the theorem @{text |
440 \<open>r\<close> and a theorem \<open>A'\<close> called \<open>r'\<close>, the theorem \<open>r[OF r']\<close> is the result of applying \<open>r\<close> to \<open>r'\<close>, where \<open>r\<close> should be viewed as a function taking a theorem \<open>A\<close> and returning |
444 "r[OF r']"} is the result of applying @{text r} to @{text r'}, where @{text |
441 \<open>B\<close>. More precisely, \<open>A\<close> and \<open>A'\<close> are unified, thus |
445 r} should be viewed as a function taking a theorem @{text A} and returning |
442 instantiating the unknowns in \<open>B\<close>, and the result is the instantiated |
446 @{text B}. More precisely, @{text A} and @{text A'} are unified, thus |
443 \<open>B\<close>. Of course, unification may also fail. |
447 instantiating the unknowns in @{text B}, and the result is the instantiated |
|
448 @{text B}. Of course, unification may also fail. |
|
\begin{warn}
Application of rules to other rules operates in the forward direction: from
the premises to the conclusion of the rule; application of rules to proof
states operates in the backward direction, from the conclusion to the
premises.
\end{warn}

In general \<open>r\<close> can be of the form \<open>\<lbrakk> A\<^sub>1; \<dots>; A\<^sub>n \<rbrakk> \<Longrightarrow> A\<close>
and there can be multiple argument theorems \<open>r\<^sub>1\<close> to \<open>r\<^sub>m\<close>
(with \<open>m \<le> n\<close>), in which case \<open>r[OF r\<^sub>1 \<dots> r\<^sub>m]\<close> is obtained
by unifying and thus proving \<open>A\<^sub>i\<close> with \<open>r\<^sub>i\<close>, \<open>i = 1\<dots>m\<close>.
Here is an example, where @{thm[source]refl} is the theorem
@{thm[show_question_marks] refl}:
\<close>

thm conjI[OF refl[of "a"] refl[of "b"]]

text\<open>yields the theorem @{thm conjI[OF refl[of "a"] refl[of "b"]]}.
The command \isacom{thm} merely displays the result.

Forward reasoning also makes sense in connection with proof states.
Therefore \<open>blast\<close>, \<open>fastforce\<close> and \<open>auto\<close> support a modifier
\<open>dest\<close> which instructs the proof method to use certain rules in a
forward fashion. If \<open>r\<close> is of the form \mbox{\<open>A \<Longrightarrow> B\<close>}, the modifier
\mbox{\<open>dest: r\<close>}\index{dest@\<open>dest:\<close>}
allows proof search to reason forward with \<open>r\<close>, i.e.,
to replace an assumption \<open>A'\<close>, where \<open>A'\<close> unifies with \<open>A\<close>,
with the correspondingly instantiated \<open>B\<close>. For example,
@{thm[source,show_question_marks] Suc_leD} is the theorem \mbox{@{thm Suc_leD}},
which works well for forward reasoning:
\<close>

lemma "Suc(Suc(Suc a)) \<le> b \<Longrightarrow> a \<le> b"
by(blast dest: Suc_leD)
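
text\<open>Since \<open>blast\<close> can apply a \<open>dest\<close> rule repeatedly, the same
one-line proof should also work when more \<open>Suc\<close>s are stacked up, e.g.\
(a sketch in the same spirit as the example above):\<close>

lemma "Suc(Suc(Suc(Suc(Suc a)))) \<le> b \<Longrightarrow> a \<le> b"
by(blast dest: Suc_leD)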

\mbox{@{thm (prem 2) ev.induct}}\\
\mbox{@{prop"!!n. \<lbrakk> ev n; P n \<rbrakk> \<Longrightarrow> P(n+2)"}}}
{\mbox{@{thm (concl) ev.induct[of "n"]}}}
\]
The first premise @{prop"ev n"} enforces that this rule can only be applied
in situations where we know that \<open>n\<close> is even.

Note that in the induction step we may assume not just @{prop"P n"} but also
\mbox{@{prop"ev n"}}, which is simply the premise of rule @{thm[source]
evSS}. Here is an example where the local assumption @{prop"ev n"} comes in
handy: we prove @{prop"ev m \<Longrightarrow> ev(m - 2)"} by induction on @{prop"ev m"}.
Case @{thm[source]ev0} requires us to prove @{prop"ev(0 - 2)"}, which follows
from @{prop"ev 0"} because @{prop"0 - 2 = (0::nat)"} on type @{typ nat}. In
case @{thm[source]evSS} we have \mbox{@{prop"m = n+(2::nat)"}} and may assume
@{prop"ev n"}, which implies @{prop"ev (m - 2)"} because \<open>m - 2 = (n +
2) - 2 = n\<close>. We did not need the induction hypothesis at all for this proof (it
is just a case analysis of which rule was used), but having @{prop"ev n"}
at our disposal in case @{thm[source]evSS} was essential.
This case analysis of rules is also called ``rule inversion''
and is discussed in more detail in \autoref{ch:Isar}.

\subsubsection{In Isabelle}

Let us now recast the above informal proofs in Isabelle. For a start,
we use @{const Suc} terms instead of numerals in rule @{thm[source]evSS}:
@{thm[display] evSS}
This avoids the difficulty of unifying \<open>n+2\<close> with some numeral,
which is not automatic.

The simplest way to prove @{prop"ev(Suc(Suc(Suc(Suc 0))))"} is in a forward
direction: \<open>evSS[OF evSS[OF ev0]]\<close> yields the theorem
@{thm evSS[OF evSS[OF ev0]]}. Alternatively, you can prove it as a lemma in
a backward fashion. Although this is more verbose, it allows us to
demonstrate how each rule application changes the proof state:\<close>

lemma "ev(Suc(Suc(Suc(Suc 0))))"

inductive star' :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool" for r where
refl': "star' r x x" |
step': "star' r x y \<Longrightarrow> r y z \<Longrightarrow> star' r x z"

text\<open>
The single \<open>r\<close> step is performed after rather than before the \<open>star'\<close>
steps. Prove @{prop "star' r x y \<Longrightarrow> star r x y"} and
@{prop "star r x y \<Longrightarrow> star' r x y"}. You may need lemmas.
Note that rule induction fails
if the assumption about the inductive predicate is not the first assumption.
\endexercise

\begin{exercise}\label{exe:iter}
Analogous to @{const star}, give an inductive definition of the \<open>n\<close>-fold iteration
of a relation \<open>r\<close>: @{term "iter r n x y"} should hold if there are \<open>x\<^sub>0\<close>, \dots, \<open>x\<^sub>n\<close>
such that @{prop"x = x\<^sub>0"}, @{prop"x\<^sub>n = y"} and \<open>r x\<^bsub>i\<^esub> x\<^bsub>i+1\<^esub>\<close> for
all @{prop"i < n"}. Correct and prove the following claim:
@{prop"star r x y \<Longrightarrow> iter r n x y"}.
\end{exercise}
828 |
823 |
829 \begin{exercise}\label{exe:cfg} |
824 \begin{exercise}\label{exe:cfg} |
830 A context-free grammar can be seen as an inductive definition where each |
825 A context-free grammar can be seen as an inductive definition where each |
831 nonterminal $A$ is an inductively defined predicate on lists of terminal |
826 nonterminal $A$ is an inductively defined predicate on lists of terminal |
832 symbols: $A(w)$ means that $w$ is in the language generated by $A$. |
827 symbols: $A(w)$ means that $w$ is in the language generated by $A$. |
833 For example, the production $S \to a S b$ can be viewed as the implication |
828 For example, the production $S \to a S b$ can be viewed as the implication |
834 @{prop"S w \<Longrightarrow> S (a # w @ [b])"} where @{text a} and @{text b} are terminal symbols, |
829 @{prop"S w \<Longrightarrow> S (a # w @ [b])"} where \<open>a\<close> and \<open>b\<close> are terminal symbols, |
835 i.e., elements of some alphabet. The alphabet can be defined like this: |
830 i.e., elements of some alphabet. The alphabet can be defined like this: |
836 \isacom{datatype} @{text"alpha = a | b | \<dots>"} |
831 \isacom{datatype} \<open>alpha = a | b | \<dots>\<close> |
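
To illustrate the encoding (a sketch with a different grammar, not part of
the exercise solution): the one-production grammar
$A \to \varepsilon \mid aAa$ would become\\
\isacom{inductive} \<open>A\<close> \isacom{where} \quad \<open>A []\<close> ~$\mid$~
\<open>A w \<Longrightarrow> A (a # w @ [a])\<close>.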

Define the two grammars (where $\varepsilon$ is the empty word)
\[
\begin{array}{r@ {\quad}c@ {\quad}l}
S &\to& \varepsilon \quad\mid\quad aSb \quad\mid\quad SS \\
T &\to& \varepsilon \quad\mid\quad TaTb
\end{array}
\]
as two inductive predicates.
If you think of \<open>a\<close> and \<open>b\<close> as ``\<open>(\<close>'' and ``\<open>)\<close>'',
the grammar defines strings of balanced parentheses.
Prove @{prop"T w \<Longrightarrow> S w"} and \mbox{@{prop "S w \<Longrightarrow> T w"}} separately and conclude
@{prop "S w = T w"}.
\end{exercise}

\ifsem
\begin{exercise}
In \autoref{sec:AExp} we defined a recursive evaluation function
\<open>aval :: aexp \<Rightarrow> state \<Rightarrow> val\<close>.
Define an inductive evaluation predicate
\<open>aval_rel :: aexp \<Rightarrow> state \<Rightarrow> val \<Rightarrow> bool\<close>
and prove that it agrees with the recursive function:
@{prop "aval_rel a s v \<Longrightarrow> aval a s = v"},
@{prop "aval a s = v \<Longrightarrow> aval_rel a s v"} and thus
\noquotes{@{prop [source] "aval_rel a s v \<longleftrightarrow> aval a s = v"}}.
\end{exercise}