doc-src/TutorialI/Inductive/AB.thy
(*<*)theory AB = Main:(*>*)

section{*Case study: A context-free grammar*}

text{*\label{sec:CFG}
Grammars are nothing but shorthands for inductive definitions of nonterminals
which represent sets of strings. For example, the production
$A \to B c$ is short for
\[ w \in B \Longrightarrow wc \in A \]
This section demonstrates this idea with a standard example
\cite[p.\ 81]{HopcroftUllman}, a grammar for generating all words with an
equal number of $a$'s and $b$'s:
\begin{eqnarray}
S &\to& \epsilon \mid b A \mid a B \nonumber\\
A &\to& a S \mid b A A \nonumber\\
B &\to& b S \mid a B B \nonumber
\end{eqnarray}
At the end we say a few words about the relationship between the formalization
and the text in the book~\cite[p.\ 81]{HopcroftUllman}.

We start by fixing the alphabet, which consists only of @{term a}'s
and @{term b}'s:
*}

datatype alfa = a | b;

text{*\noindent
For convenience we include the following easy lemmas as simplification rules:
*}

lemma [simp]: "(x \<noteq> a) = (x = b) \<and> (x \<noteq> b) = (x = a)";
apply(case_tac x);
by(auto);

text{*\noindent
Words over this alphabet are of type @{typ"alfa list"}, and
the three nonterminals are declared as sets of such words:
*}

consts S :: "alfa list set"
       A :: "alfa list set"
       B :: "alfa list set";

text{*\noindent
The above productions are recast as a \emph{simultaneous} inductive
definition\index{inductive definition!simultaneous}
of @{term S}, @{term A} and @{term B}:
*}

inductive S A B
intros
  "[] \<in> S"
  "w \<in> A \<Longrightarrow> b#w \<in> S"
  "w \<in> B \<Longrightarrow> a#w \<in> S"

  "w \<in> S        \<Longrightarrow> a#w   \<in> A"
  "\<lbrakk> v\<in>A; w\<in>A \<rbrakk> \<Longrightarrow> b#v@w \<in> A"

  "w \<in> S            \<Longrightarrow> b#w   \<in> B"
  "\<lbrakk> v \<in> B; w \<in> B \<rbrakk> \<Longrightarrow> a#v@w \<in> B";

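text{*\noindent
For example, the word @{term"[a,b]"} is generated by the derivation
$S \to aB \to abS \to ab$. As a quick illustrative check (this lemma is not
needed in the sequel), the derivation can be replayed with the introduction
rules of the definition, which are available as the rule list
@{thm[source]S_A_B.intros}:
*}

lemma "[a,b] \<in> S";
by(blast intro: S_A_B.intros);
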
text{*\noindent
First we show that all words in @{term S} contain the same number of @{term
a}'s and @{term b}'s. Since the definition of @{term S} is by simultaneous
induction, so is this proof: we show at the same time that all words in
@{term A} contain one more @{term a} than @{term b} and all words in @{term
B} contain one more @{term b} than @{term a}.
*}

lemma correctness:
  "(w \<in> S \<longrightarrow> size[x\<in>w. x=a] = size[x\<in>w. x=b])     \<and>
   (w \<in> A \<longrightarrow> size[x\<in>w. x=a] = size[x\<in>w. x=b] + 1) \<and>
   (w \<in> B \<longrightarrow> size[x\<in>w. x=b] = size[x\<in>w. x=a] + 1)"

txt{*\noindent
These propositions are expressed with the help of the predefined @{term
filter} function on lists, which has the convenient syntax @{text"[x\<in>xs. P
x]"}, the list of all elements @{term x} in @{term xs} such that @{prop"P x"}
holds; for example, @{text"[x\<in>[a,b,a]. x=a] = [a,a]"}.
Remember that on lists @{term size} and @{term length} are synonymous.

The proof itself is by rule induction and afterwards automatic:
*}

apply(rule S_A_B.induct);
by(auto);

text{*\noindent
This may seem surprising at first, and is indeed an indication of the power
of inductive definitions. But it is also quite straightforward. For example,
consider the production $A \to b A A$: if $v,w \in A$ and the elements of $A$
contain one more $a$ than $b$, then $bvw$ must again contain one more $a$
than $b$.
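Concretely: if $v$ contains $m+1$ $a$'s and $m$ $b$'s, and $w$ contains
$n+1$ $a$'s and $n$ $b$'s, then $bvw$ contains $(m+1)+(n+1) = m+n+2$ $a$'s
and $1+m+n$ $b$'s, i.e.\ exactly one more $a$ than $b$.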

As usual, the correctness of syntactic descriptions is easy, but completeness
is hard: does @{term S} contain \emph{all} words with an equal number of
@{term a}'s and @{term b}'s? It turns out that this proof requires the
following little lemma: every string with two more @{term a}'s than @{term
b}'s can be cut somewhere such that each half has one more @{term a} than
@{term b}. This is best seen by imagining counting the difference between the
number of @{term a}'s and @{term b}'s starting at the left end of the
word. We start with 0 and end (at the right end) with 2. Since each move to the
right increases or decreases the difference by 1, we must have passed through
1 on our way from 0 to 2. For instance, in the word @{term"[a,a,b,a]"} the
difference takes the successive values $0,1,2,1,2$, and we can cut after the
first letter. Formally, we appeal to the following discrete
intermediate value theorem @{thm[source]nat0_intermed_int_val}
@{thm[display]nat0_intermed_int_val[no_vars]}
where @{term f} is of type @{typ"nat \<Rightarrow> int"}, @{typ int} is the type of
integers, @{text"\<bar>.\<bar>"} is the absolute value function, and @{term"#1::int"} is the
integer 1 (see \S\ref{sec:numbers}).

First we show that our specific function, the difference between the
numbers of @{term a}'s and @{term b}'s, does indeed only change by 1 in every
move to the right. At this point we also start generalizing from @{term a}'s
and @{term b}'s to an arbitrary property @{term P}. Otherwise we would have
to prove the desired lemma twice, once as stated above and once with the
roles of @{term a}'s and @{term b}'s interchanged.
*}

lemma step1: "\<forall>i < size w.
  \<bar>(int(size[x\<in>take (i+1) w. P x])-int(size[x\<in>take (i+1) w. \<not>P x]))
   - (int(size[x\<in>take i w. P x])-int(size[x\<in>take i w. \<not>P x]))\<bar> \<le> #1"

txt{*\noindent
The lemma is a bit hard to read because of the coercion function
@{term[source]"int :: nat \<Rightarrow> int"}. It is required because @{term size} returns
a natural number, but subtraction on type @{typ nat} will do the wrong thing.
Function @{term take} is predefined and @{term"take i xs"} is the prefix of
length @{term i} of @{term xs}; below we also need @{term"drop i xs"}, which
is what remains after that prefix has been dropped from @{term xs}.
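For example, @{text"take 2 [a,b,a] = [a,b]"} and @{text"drop 2 [a,b,a] = [a]"}.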

The proof is by induction on @{term w}, with a trivial base case, and a not
so trivial induction step. Since it is essentially just arithmetic, we do not
discuss it.
*}

apply(induct w);
 apply(simp);
by(force simp add:zabs_def take_Cons split:nat.split if_splits);

text{*
Finally we come to the above-mentioned lemma about cutting a word with two
more elements of one sort than of the other into two halves:
*}

lemma part1:
 "size[x\<in>w. P x] = size[x\<in>w. \<not>P x]+2 \<Longrightarrow>
  \<exists>i\<le>size w. size[x\<in>take i w. P x] = size[x\<in>take i w. \<not>P x]+1";

txt{*\noindent
This is proved by force with the help of the intermediate value theorem,
instantiated appropriately and with its first premise disposed of by lemma
@{thm[source]step1}:
*}

apply(insert nat0_intermed_int_val[OF step1, of "P" "w" "#1"]);
by force;

text{*\noindent
Lemma @{thm[source]part1} tells us only about the prefix @{term"take i w"}.
The suffix @{term"drop i w"} is dealt with in the following easy lemma:
*}

lemma part2:
  "\<lbrakk>size[x\<in>take i w @ drop i w. P x] =
    size[x\<in>take i w @ drop i w. \<not>P x]+2;
    size[x\<in>take i w. P x] = size[x\<in>take i w. \<not>P x]+1\<rbrakk>
   \<Longrightarrow> size[x\<in>drop i w. P x] = size[x\<in>drop i w. \<not>P x]+1";
by(simp del:append_take_drop_id);

text{*\noindent
Lemma @{thm[source]append_take_drop_id}, @{thm append_take_drop_id[no_vars]},
which is generally useful, needs to be disabled for once.

To dispose of trivial cases automatically, the rules of the inductive
definition are declared simplification rules:
*}

declare S_A_B.intros[simp];

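text{*\noindent
For instance, the membership checked with @{text blast} above is now, purely
as an illustration, found by simplification alone:
*}

lemma "[a,b] \<in> S";
by(simp);
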
text{*\noindent
Declaring the introduction rules as simplification rules could have been done
earlier but was not needed so far.

The completeness theorem tells us that if a word has the same number of
@{term a}'s and @{term b}'s, then it is in @{term S}, and similarly and
simultaneously for @{term A} and @{term B}:
*}

theorem completeness:
  "(size[x\<in>w. x=a] = size[x\<in>w. x=b]     \<longrightarrow> w \<in> S) \<and>
   (size[x\<in>w. x=a] = size[x\<in>w. x=b] + 1 \<longrightarrow> w \<in> A) \<and>
   (size[x\<in>w. x=b] = size[x\<in>w. x=a] + 1 \<longrightarrow> w \<in> B)";

txt{*\noindent
The proof is by induction on @{term w}. Structural induction would fail here
because, as we can see from the grammar, we need to make bigger steps than
merely appending a single letter at the front. Hence we induct on the length
of @{term w}, using the induction rule @{thm[source]length_induct}:
*}

apply(induct_tac w rule: length_induct);
(*<*)apply(rename_tac w)(*>*)

txt{*\noindent
The @{text rule} parameter tells @{text induct_tac} explicitly which induction
rule to use. For details see \S\ref{sec:complete-ind} below.
In this case the result is that we may assume the lemma already
holds for all words shorter than @{term w}.
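Spelled out, @{thm[source]length_induct} reads
@{thm[display]length_induct[no_vars]}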

The proof continues with a case distinction on @{term w},
i.e.\ whether @{term w} is empty or not.
*}

apply(case_tac w);
 apply(simp_all);
(*<*)apply(rename_tac x v)(*>*)

txt{*\noindent
Simplification disposes of the base case and leaves only two step
cases to be proved:
if @{prop"w = b#v"} and @{prop"size[x\<in>v. x=a] = size[x\<in>v. x=b]+2"} then
@{prop"b#v \<in> A"}, and similarly for @{prop"w = a#v"}.
We only consider the first case in detail.

After breaking the conjunction up into two cases, we can apply
@{thm[source]part1} to the assumption that @{term v} contains two more @{term
a}'s than @{term b}'s.
*}

apply(rule conjI);
 apply(clarify);
 apply(frule part1[of "\<lambda>x. x=a", simplified]);
 apply(erule exE);
 apply(erule conjE);

txt{*\noindent
This yields an index @{prop"i \<le> length v"} such that
@{prop"length [x\<in>take i v . x = a] = length [x\<in>take i v . x = b] + 1"}.
With the help of @{thm[source]part2} it follows that
@{prop"length [x\<in>drop i v . x = a] = length [x\<in>drop i v . x = b] + 1"}.
*}

 apply(drule part2[of "\<lambda>x. x=a", simplified]);
  apply(assumption);

txt{*\noindent
Now it is time to decompose @{term v} in the conclusion @{prop"b#v \<in> A"}
into @{term"take i v @ drop i v"},
after which the appropriate rule of the grammar reduces the goal
to the two subgoals @{prop"take i v \<in> A"} and @{prop"drop i v \<in> A"}:
*}

 apply(rule_tac n1=i and t=v in subst[OF append_take_drop_id]);
 apply(rule S_A_B.intros);

txt{*\noindent
Both subgoals follow from the induction hypothesis because both @{term"take i
v"} and @{term"drop i v"} are shorter than @{term w}:
*}

  apply(force simp add: min_less_iff_disj);
 apply(force split add: nat_diff_split);

txt{*\noindent
Note that the variables @{term n1} and @{term t} referred to in the
substitution step above come from the derived theorem @{text"subst[OF
append_take_drop_id]"}.

The case @{prop"w = a#v"} is proved completely analogously:
*}

apply(clarify);
apply(frule part1[of "\<lambda>x. x=b", simplified]);
apply(erule exE);
apply(erule conjE);
apply(drule part2[of "\<lambda>x. x=b", simplified]);
 apply(assumption);
apply(rule_tac n1=i and t=v in subst[OF append_take_drop_id]);
apply(rule S_A_B.intros);
 apply(force simp add:min_less_iff_disj);
by(force simp add:min_less_iff_disj split add: nat_diff_split);

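text{*\noindent
Together, @{thm[source]correctness} and @{thm[source]completeness}
characterize @{term S} exactly. As a small illustrative corollary (stated
here only for @{term S}), the two theorems combine into an equivalence:
*}

lemma "(w \<in> S) = (size[x\<in>w. x=a] = size[x\<in>w. x=b])";
by(insert correctness[of w] completeness[of w], blast);
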
text{*
We conclude this section with a comparison of the above proof and the one
in the textbook \cite[p.\ 81]{HopcroftUllman}. For a start, the textbook
grammar, for no good reason, excludes the empty word, which complicates
matters just a little bit: their grammar has 8 productions instead of our 7.

More importantly, the proof itself is different: rather than separating the
two directions, they perform one induction on the length of a word. This
deprives them of the beauty of rule induction and in the easy direction
(correctness) their reasoning is more detailed than our @{text auto}. For the
hard part (completeness), they consider just one of the cases that our @{text
simp_all} disposes of automatically. Then they conclude the proof by saying
about the remaining cases: ``We do this in a manner similar to our method of
proof for part (1); this part is left to the reader''. But this is precisely
the part that requires the intermediate value theorem and thus is not at all
similar to the other cases (which are automatic in Isabelle). We conclude
that the authors are at least cavalier about this point and may even have
overlooked the slight difficulty lurking in the omitted cases. This is not
atypical for pen-and-paper proofs, once analysed in detail.
*}

(*<*)end(*>*)