
src/Doc/IsarRef/Generic.thy

author: wenzelm
date: Fri May 17 20:53:28 2013 +0200 (2013-05-17)
changeset: 52060 179236c82c2a
parent: 52037 837211662fb8
child: 55029 61a6bf7d4b02
permissions: -rw-r--r--

renamed 'print_configs' to 'print_options';

theory Generic
imports Base Main
begin

chapter {* Generic tools and packages \label{ch:gen-tools} *}

section {* Configuration options \label{sec:config} *}

text {* Isabelle/Pure maintains a record of named configuration
  options within the theory or proof context, with values of type
  @{ML_type bool}, @{ML_type int}, @{ML_type real}, or @{ML_type
  string}.  Tools may declare options in ML, and then refer to these
  values (relative to the context).  Thus global reference variables
  are easily avoided.  The user may change the value of a
  configuration option by means of an associated attribute of the same
  name.  This form of context declaration works particularly well with
  commands such as @{command "declare"} or @{command "using"} like
  this:
*}

declare [[show_main_goal = false]]

notepad
begin
  note [[show_main_goal = true]]
end

text {* For historical reasons, some tools cannot take the full proof
  context into account and merely refer to the background theory.
  This is accommodated by configuration options being declared as
  ``global'', which may not be changed within a local context.

  \begin{matharray}{rcll}
    @{command_def "print_options"} & : & @{text "context \<rightarrow>"} \\
  \end{matharray}

  @{rail "
    @{syntax name} ('=' ('true' | 'false' | @{syntax int} | @{syntax float} | @{syntax name}))?
  "}

  \begin{description}

  \item @{command "print_options"} prints the available configuration
  options, with names, types, and current values.

  \item @{text "name = value"} as an attribute expression modifies the
  named option, with the syntax of the value depending on the option's
  type.  For @{ML_type bool} the default value is @{text true}.  Any
  attempt to change a global option in a local context is ignored.

  \end{description}
*}
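text {* For a @{ML_type bool} option the value may be omitted, which
  abbreviates @{text "name = true"} (a minimal sketch, reusing the
  @{attribute show_main_goal} option from above): *}

declare [[show_main_goal]]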

section {* Basic proof tools *}

subsection {* Miscellaneous methods and attributes \label{sec:misc-meth-att} *}

text {*
  \begin{matharray}{rcl}
    @{method_def unfold} & : & @{text method} \\
    @{method_def fold} & : & @{text method} \\
    @{method_def insert} & : & @{text method} \\[0.5ex]
    @{method_def erule}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def drule}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def frule}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def intro} & : & @{text method} \\
    @{method_def elim} & : & @{text method} \\
    @{method_def succeed} & : & @{text method} \\
    @{method_def fail} & : & @{text method} \\
  \end{matharray}

  @{rail "
    (@@{method fold} | @@{method unfold} | @@{method insert}) @{syntax thmrefs}
    ;
    (@@{method erule} | @@{method drule} | @@{method frule})
      ('(' @{syntax nat} ')')? @{syntax thmrefs}
    ;
    (@@{method intro} | @@{method elim}) @{syntax thmrefs}?
  "}

  \begin{description}

  \item @{method unfold}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} and @{method fold}~@{text
  "a\<^sub>1 \<dots> a\<^sub>n"} expand (or fold back) the given definitions throughout
  all goals; any chained facts provided are inserted into the goal and
  subject to rewriting as well.

  \item @{method insert}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} inserts theorems as facts
  into all goals of the proof state.  Note that current facts
  indicated for forward chaining are ignored.

  \item @{method erule}~@{text "a\<^sub>1 \<dots> a\<^sub>n"}, @{method
  drule}~@{text "a\<^sub>1 \<dots> a\<^sub>n"}, and @{method frule}~@{text
  "a\<^sub>1 \<dots> a\<^sub>n"} are similar to the basic @{method rule}
  method (see \secref{sec:pure-meth-att}), but apply rules by
  elim-resolution, destruct-resolution, and forward-resolution,
  respectively \cite{isabelle-implementation}.  The optional natural
  number argument (default 0) specifies additional assumption steps to
  be performed here.

  Note that these methods are improper ones, mainly serving for
  experimentation and tactic script emulation.  Different modes of
  basic rule application are usually expressed in Isar at the proof
  language level, rather than via implicit proof state manipulations.
  For example, a proper single-step elimination would be done using
  the plain @{method rule} method, with forward chaining of current
  facts.

  \item @{method intro} and @{method elim} repeatedly refine some goal
  by intro- or elim-resolution, after having inserted any chained
  facts.  Exactly the rules given as arguments are taken into account;
  this allows fine-tuned decomposition of a proof problem, in contrast
  to common automated tools.

  \item @{method succeed} yields a single (unchanged) result; it is
  the identity of the ``@{text ","}'' method combinator (cf.\
  \secref{sec:proof-meth}).

  \item @{method fail} yields an empty result sequence; it is the
  identity of the ``@{text "|"}'' method combinator (cf.\
  \secref{sec:proof-meth}).

  \end{description}
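  For instance, single-step elim-resolution with @{method erule} looks
  like this in a tactic script (a minimal sketch, using the standard
  Isabelle/HOL rules @{text conjE} and @{text conjI}):
*}

lemma "A \<and> B \<Longrightarrow> B \<and> A"
  apply (erule conjE)
  apply (rule conjI)
  apply assumption
  apply assumption
  done

text {*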

  \begin{matharray}{rcl}
    @{attribute_def tagged} & : & @{text attribute} \\
    @{attribute_def untagged} & : & @{text attribute} \\[0.5ex]
    @{attribute_def THEN} & : & @{text attribute} \\
    @{attribute_def unfolded} & : & @{text attribute} \\
    @{attribute_def folded} & : & @{text attribute} \\
    @{attribute_def abs_def} & : & @{text attribute} \\[0.5ex]
    @{attribute_def rotated} & : & @{text attribute} \\
    @{attribute_def (Pure) elim_format} & : & @{text attribute} \\
    @{attribute_def standard}@{text "\<^sup>*"} & : & @{text attribute} \\
    @{attribute_def no_vars}@{text "\<^sup>*"} & : & @{text attribute} \\
  \end{matharray}

  @{rail "
    @@{attribute tagged} @{syntax name} @{syntax name}
    ;
    @@{attribute untagged} @{syntax name}
    ;
    @@{attribute THEN} ('[' @{syntax nat} ']')? @{syntax thmref}
    ;
    (@@{attribute unfolded} | @@{attribute folded}) @{syntax thmrefs}
    ;
    @@{attribute rotated} @{syntax int}?
  "}

  \begin{description}

  \item @{attribute tagged}~@{text "name value"} and @{attribute
  untagged}~@{text name} add and remove \emph{tags} of some theorem.
  Tags may be any list of string pairs that serve as formal comment.
  The first string is considered the tag name, the second its value.
  Note that @{attribute untagged} removes any tags of the same name.

  \item @{attribute THEN}~@{text a} composes rules by resolution; it
  resolves with the first premise of @{text a} (an alternative

  position may also be specified).  See also @{ML_op "RS"} in
  \cite{isabelle-implementation}.

  \item @{attribute unfolded}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} and @{attribute
  folded}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} expand and fold back again the given
  definitions throughout a rule.

  \item @{attribute abs_def} turns an equation of the form @{prop "f x
  y \<equiv> t"} into @{prop "f \<equiv> \<lambda>x y. t"}, which ensures that @{method
  simp} or @{method unfold} steps always expand it.  This also works
  for object-logic equality.

  \item @{attribute rotated}~@{text n} rotates the premises of a
  theorem by @{text n} (default 1).

  \item @{attribute (Pure) elim_format} turns a destruction rule into
  elimination rule format, by resolving with the rule @{prop "PROP A \<Longrightarrow>
  (PROP A \<Longrightarrow> PROP B) \<Longrightarrow> PROP B"}.

  Note that the Classical Reasoner (\secref{sec:classical}) provides
  its own version of this operation.

  \item @{attribute standard} puts a theorem into the standard form of
  object-rules at the outermost theory level.  Note that this
  operation violates the local proof context (including active
  locales).

  \item @{attribute no_vars} replaces schematic variables by free
  ones; this is mainly for tuning output of pretty printed theorems.

  \end{description}
*}
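text {* For instance, the effect of @{attribute rotated} can be
  inspected directly on a rule (a minimal sketch; @{text conjI} as in
  the standard Isabelle/HOL library): *}

thm conjI
thm conjI [rotated]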

subsection {* Low-level equational reasoning *}

text {*
  \begin{matharray}{rcl}
    @{method_def subst} & : & @{text method} \\
    @{method_def hypsubst} & : & @{text method} \\
    @{method_def split} & : & @{text method} \\
  \end{matharray}

  @{rail "
    @@{method subst} ('(' 'asm' ')')? \\ ('(' (@{syntax nat}+) ')')? @{syntax thmref}
    ;
    @@{method split} @{syntax thmrefs}
  "}

  These methods provide low-level facilities for equational reasoning
  that are intended for specialized applications only.  Normally,
  single step calculations would be performed in a structured text
  (see also \secref{sec:calculation}), while the Simplifier methods
  provide the canonical way for automated normalization (see
  \secref{sec:simplifier}).

  \begin{description}

  \item @{method subst}~@{text eq} performs a single substitution step
  using rule @{text eq}, which may be either a meta or object
  equality.

  \item @{method subst}~@{text "(asm) eq"} substitutes in an
  assumption.

  \item @{method subst}~@{text "(i \<dots> j) eq"} performs several
  substitutions in the conclusion.  The numbers @{text i} to @{text j}
  indicate the positions to substitute at.  Positions are ordered from
  the top of the term tree moving down from left to right.  For
  example, in @{text "(a + b) + (c + d)"} there are three positions
  where commutativity of @{text "+"} is applicable: 1 refers to @{text
  "a + b"}, 2 to the whole term, and 3 to @{text "c + d"}.

  If the positions in the list @{text "(i \<dots> j)"} are non-overlapping
  (e.g.\ @{text "(2 3)"} in @{text "(a + b) + (c + d)"}) you may
  assume all substitutions are performed simultaneously.  Otherwise
  the behaviour of @{text subst} is not specified.

  \item @{method subst}~@{text "(asm) (i \<dots> j) eq"} performs the
  substitutions in the assumptions.  The positions refer to the
  assumptions in order from left to right.  For example, given in a
  goal of the form @{text "P (a + b) \<Longrightarrow> P (c + d) \<Longrightarrow> \<dots>"}, position 1 of
  commutativity of @{text "+"} is the subterm @{text "a + b"} and
  position 2 is the subterm @{text "c + d"}.

  \item @{method hypsubst} performs substitution using some
  assumption; this only works for equations of the form @{text "x =
  t"} where @{text x} is a free or bound variable.

  \item @{method split}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} performs single-step case
  splitting using the given rules.  Splitting is performed in the
  conclusion or some assumption of the subgoal, depending on the
  structure of the rule.

  Note that the @{method simp} method already involves repeated
  application of split rules as declared in the current context, using
  @{attribute split}, for example.

  \end{description}
*}
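text {* For example, a single @{method subst} step with a
  commutativity rule looks like this (a minimal sketch for
  Isabelle/HOL; the lemma name @{text add_commute} is assumed from the
  library of this version): *}

lemma fixes a b :: nat shows "a + b = b + a"
  apply (subst add_commute)
  apply (rule refl)
  done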

subsection {* Further tactic emulations \label{sec:tactics} *}

text {*
  The following improper proof methods emulate traditional tactics.
  These admit direct access to the goal state, which is normally
  considered harmful!  In particular, this may involve both numbered
  goal addressing (default 1), and dynamic instantiation within the
  scope of some subgoal.

  \begin{warn}
    Dynamic instantiations refer to universally quantified parameters
    of a subgoal (the dynamic context) rather than fixed variables and
    term abbreviations of a (static) Isar context.
  \end{warn}

  Tactic emulation methods, unlike their ML counterparts, admit
  simultaneous instantiation from both dynamic and static contexts.
  If names occur in both contexts goal parameters hide locally fixed
  variables.  Likewise, schematic variables refer to term
  abbreviations, if present in the static context.  Otherwise the
  schematic variable is interpreted as a schematic variable and left
  to be solved by unification with certain parts of the subgoal.

  Note that the tactic emulation proof methods in Isabelle/Isar are
  consistently named @{text foo_tac}.  Note also that variable names
  occurring on left hand sides of instantiations must be preceded by a
  question mark if they coincide with a keyword or contain dots.  This
  is consistent with the attribute @{attribute "where"} (see
  \secref{sec:pure-meth-att}).

  \begin{matharray}{rcl}
    @{method_def rule_tac}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def erule_tac}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def drule_tac}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def frule_tac}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def cut_tac}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def thin_tac}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def subgoal_tac}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def rename_tac}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def rotate_tac}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def tactic}@{text "\<^sup>*"} & : & @{text method} \\
    @{method_def raw_tactic}@{text "\<^sup>*"} & : & @{text method} \\
  \end{matharray}

  @{rail "
    (@@{method rule_tac} | @@{method erule_tac} | @@{method drule_tac} |
      @@{method frule_tac} | @@{method cut_tac} | @@{method thin_tac}) @{syntax goal_spec}? \\
    ( dynamic_insts @'in' @{syntax thmref} | @{syntax thmrefs} )
    ;
    @@{method subgoal_tac} @{syntax goal_spec}? (@{syntax prop} +)
    ;
    @@{method rename_tac} @{syntax goal_spec}? (@{syntax name} +)
    ;
    @@{method rotate_tac} @{syntax goal_spec}? @{syntax int}?
    ;
    (@@{method tactic} | @@{method raw_tactic}) @{syntax text}
    ;
    dynamic_insts: ((@{syntax name} '=' @{syntax term}) + @'and')
  "}

  \begin{description}

  \item @{method rule_tac} etc. do resolution of rules with explicit
  instantiation.  This works the same way as the ML tactics @{ML

  res_inst_tac} etc. (see \cite{isabelle-implementation}).

  Multiple rules may only be given if there is no instantiation; then
  @{method rule_tac} is the same as @{ML resolve_tac} in ML (see
  \cite{isabelle-implementation}).

  \item @{method cut_tac} inserts facts into the proof state as
  assumption of a subgoal; instantiations may be given as well.  Note
  that the scope of schematic variables is spread over the main goal
  statement and rule premises are turned into new subgoals.  This is
  in contrast to the regular method @{method insert} which inserts
  closed rule statements.

  \item @{method thin_tac}~@{text \<phi>} deletes the specified premise
  from a subgoal.  Note that @{text \<phi>} may contain schematic
  variables, to abbreviate the intended proposition; the first
  matching subgoal premise will be deleted.  Removing useless premises
  from a subgoal increases its readability and can make search tactics
  run faster.

  \item @{method subgoal_tac}~@{text "\<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"} adds the propositions
  @{text "\<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"} as local premises to a subgoal, and poses the same
  as new subgoals (in the original context).

  \item @{method rename_tac}~@{text "x\<^sub>1 \<dots> x\<^sub>n"} renames parameters of a
  goal according to the list @{text "x\<^sub>1, \<dots>, x\<^sub>n"}, which refers to the
  \emph{suffix} of variables.

  \item @{method rotate_tac}~@{text n} rotates the premises of a
  subgoal by @{text n} positions: from right to left if @{text n} is
  positive, and from left to right if @{text n} is negative; the
  default value is 1.

  \item @{method tactic}~@{text "text"} produces a proof method from
  any ML text of type @{ML_type tactic}.  Apart from the usual ML
  environment and the current proof context, the ML code may refer to
  the locally bound values @{ML_text facts}, which indicates any
  current facts used for forward-chaining.

  \item @{method raw_tactic} is similar to @{method tactic}, but
  presents the goal state in its raw internal form, where simultaneous
  subgoals appear as conjunction of the logical framework instead of

  the usual split into several subgoals.  While this is useful for
  debugging of complex method definitions, it should never appear in
  production theories.

  \end{description}
*}
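text {* For example, @{method rule_tac} with an explicit instantiation
  can be used in a tactic script like this (a minimal sketch in
  Isabelle/HOL, using the standard rule @{text exI}): *}

lemma "\<exists>x. x = (0::nat)"
  apply (rule_tac x = "0::nat" in exI)
  apply (rule refl)
  done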

section {* The Simplifier \label{sec:simplifier} *}

text {* The Simplifier performs conditional and unconditional
  rewriting and uses contextual information: rule declarations in the
  background theory or local proof context are taken into account, as
  well as chained facts and subgoal premises (``local assumptions'').
  There are several general hooks that allow one to modify the
  simplification strategy, or incorporate other proof tools that solve
  sub-problems, produce rewrite rules on demand etc.

  The rewriting strategy is always strictly bottom up, except for
  congruence rules, which are applied while descending into a term.
  Conditions in conditional rewrite rules are solved recursively
  before the rewrite rule is applied.

  The default Simplifier setup of major object logics (HOL, HOLCF,
  FOL, ZF) makes the Simplifier ready for immediate use, without
  engaging its internal structures.  Thus it serves as a
  general-purpose proof tool with the main focus on equational
  reasoning, and a bit more than that.
*}

subsection {* Simplification methods \label{sec:simp-meth} *}

text {*
  \begin{matharray}{rcl}
    @{method_def simp} & : & @{text method} \\
    @{method_def simp_all} & : & @{text method} \\
  \end{matharray}

  @{rail "
    (@@{method simp} | @@{method simp_all}) opt? (@{syntax simpmod} * )
    ;
    opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use' | 'asm_lr' ) ')'
    ;
    @{syntax_def simpmod}: ('add' | 'del' | 'only' | 'split' (() | 'add' | 'del') |
      'cong' (() | 'add' | 'del')) ':' @{syntax thmrefs}
  "}

  \begin{description}

  \item @{method simp} invokes the Simplifier on the first subgoal,
  after inserting chained facts as additional goal premises; further
  rule declarations may be included via @{text "(simp add: facts)"}.
  The proof method fails if the subgoal remains unchanged after
  simplification.

  Note that the original goal premises and chained facts are subject
  to simplification themselves, while declarations via @{text
  "add"}/@{text "del"} merely follow the policies of the object-logic
  to extract rewrite rules from theorems, without further
  simplification.  This may lead to slightly different behavior in
  either case, which might be required precisely like that in some
  boundary situations to perform the intended simplification step!

  \medskip The @{text only} modifier first removes all other rewrite
  rules, looper tactics (including split rules), congruence rules, and
  then behaves like @{text add}.  Implicit solvers remain, which means
  that trivial rules like reflexivity or introduction of @{text
  "True"} are available to solve the simplified subgoals, but also
  non-trivial tools like linear arithmetic in HOL.  The latter may
  lead to some surprise of the meaning of ``only'' in Isabelle/HOL
  compared to English!

  \medskip The @{text split} modifiers add or delete rules for the
  Splitter (see also \secref{sec:simp-strategies} on the looper).
  This works only if the Simplifier method has been properly set up to
  include the Splitter (all major object logics such as HOL, HOLCF,
  FOL, ZF do this already).

  There is also a separate @{method_ref split} method available for
  single-step case splitting.  The effect of repeatedly applying
  @{text "(split thms)"} can be imitated by ``@{text "(simp only:
  split: thms)"}''.

  \medskip The @{text cong} modifiers add or delete Simplifier
  congruence rules (see also \secref{sec:simp-rules}); the default is
  to add.

  \item @{method simp_all} is similar to @{method simp}, but acts on
  all goals, working backwards from the last to the first one as usual
  in Isabelle.\footnote{The order is irrelevant for goals without
  schematic variables, so simplification might actually be performed
  in parallel here.}

  Chained facts are inserted into all subgoals, before the
  simplification process starts.  Further rule declarations are the
  same as for @{method simp}.

  The proof method fails if all subgoals remain unchanged after
  simplification.

  \end{description}
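  For instance, the @{text only} modifier restricts simplification to
  exactly the given rules (a minimal sketch for Isabelle/HOL; the
  lemma name @{text add_0_right} is assumed from the library):
*}

lemma fixes n :: nat shows "n + 0 = n"
  by (simp only: add_0_right)

text {*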

  By default the Simplifier methods above take local assumptions fully
  into account, using equational assumptions in the subsequent
  normalization process, or simplifying assumptions themselves.
  Further options allow fine-tuning of the behavior of the Simplifier
  in this respect, corresponding to a variety of ML tactics as
  follows.\footnote{Unlike the corresponding Isar proof methods, the
  ML tactics do not insist on changing the goal state.}

  \begin{center}
  \small
  \begin{supertabular}{|l|l|p{0.3\textwidth}|}
  \hline
  Isar method & ML tactic & behavior \\\hline

  @{text "(simp (no_asm))"} & @{ML simp_tac} & assumptions are ignored
  completely \\\hline

  @{text "(simp (no_asm_simp))"} & @{ML asm_simp_tac} & assumptions
  are used in the simplification of the conclusion but are not
  themselves simplified \\\hline

  @{text "(simp (no_asm_use))"} & @{ML full_simp_tac} & assumptions
  are simplified but are not used in the simplification of each other
  or the conclusion \\\hline

  @{text "(simp)"} & @{ML asm_full_simp_tac} & assumptions are used in
  the simplification of the conclusion and to simplify other
  assumptions \\\hline

  @{text "(simp (asm_lr))"} & @{ML asm_lr_simp_tac} & compatibility
  mode: an assumption is only used for simplifying assumptions which
  are to the right of it \\\hline

  \end{supertabular}
  \end{center}
*}

subsubsection {* Examples *}

text {* We consider basic algebraic simplifications in Isabelle/HOL.
  The rather trivial goal @{prop "0 + (x + 0) = x + 0 + 0"} looks like
  a good candidate to be solved by a single call of @{method simp}:
*}

lemma "0 + (x + 0) = x + 0 + 0" apply simp? oops

text {* The above attempt \emph{fails}, because @{term "0"} and @{term
  "op +"} in the HOL library are declared as generic type class
  operations, without stating any algebraic laws yet.  More specific
  types are required to get access to certain standard simplifications
  of the theory context, e.g.\ like this: *}

lemma fixes x :: nat shows "0 + (x + 0) = x + 0 + 0" by simp
lemma fixes x :: int shows "0 + (x + 0) = x + 0 + 0" by simp
lemma fixes x :: "'a :: monoid_add" shows "0 + (x + 0) = x + 0 + 0" by simp

text {*
  \medskip In many cases, assumptions of a subgoal are also needed in
  the simplification process.  For example:
*}

lemma fixes x :: nat shows "x = 0 \<Longrightarrow> x + x = 0" by simp
lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" apply simp oops
lemma fixes x :: nat assumes "x = 0" shows "x + x = 0" using assms by simp

text {* As seen above, local assumptions that shall contribute to
  simplification need to be part of the subgoal already, or indicated
  explicitly for use by the subsequent method invocation.  Both too
  little and too much information can make simplification fail, for
  different reasons.

  In the next example the malicious assumption @{prop "\<And>x::nat. f x =
  g (f (g x))"} does not contribute to solve the problem, but makes
  the default @{method simp} method loop: the rewrite rule @{text "f
  ?x \<equiv> g (f (g ?x))"} extracted from the assumption does not
  terminate.  The Simplifier notices certain simple forms of
  nontermination, but not this one.  The problem can be solved
  nonetheless, by ignoring assumptions via special options as
  explained before:
*}

lemma "(\<And>x::nat. f x = g (f (g x))) \<Longrightarrow> f 0 = f 0 + 0"
  by (simp (no_asm))

text {* The latter form is typical for long unstructured proof
  scripts, where the control over the goal content is limited.  In
  structured proofs it is usually better to avoid pushing too many
  facts into the goal state in the first place.  Assumptions in the
  Isar proof context do not intrude the reasoning if not used
  explicitly.  This is illustrated for a toplevel statement and a
  local proof body as follows:
*}

lemma
  assumes "\<And>x::nat. f x = g (f (g x))"
  shows "f 0 = f 0 + 0" by simp

notepad
begin
  assume "\<And>x::nat. f x = g (f (g x))"
  have "f 0 = f 0 + 0" by simp
end

text {* \medskip Because assumptions may simplify each other, there
  can be very subtle cases of nontermination.  For example, the regular
  @{method simp} method applied to @{prop "P (f x) \<Longrightarrow> y = x \<Longrightarrow> f x = f y
  \<Longrightarrow> Q"} gives rise to the infinite reduction sequence
  \[
  @{text "P (f x)"} \stackrel{@{text "f x \<equiv> f y"}}{\longmapsto}
  @{text "P (f y)"} \stackrel{@{text "y \<equiv> x"}}{\longmapsto}
  @{text "P (f x)"} \stackrel{@{text "f x \<equiv> f y"}}{\longmapsto} \cdots
  \]
  whereas applying the same to @{prop "y = x \<Longrightarrow> f x = f y \<Longrightarrow> P (f x) \<Longrightarrow>
  Q"} terminates (without solving the goal):
*}

lemma "y = x \<Longrightarrow> f x = f y \<Longrightarrow> P (f x) \<Longrightarrow> Q"
  apply simp
  oops

text {* See also \secref{sec:simp-config} for options to enable
  Simplifier trace mode, which often helps to diagnose problems with
  rewrite systems.
*}
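text {* For instance, the trace may be enabled locally like this (a
  minimal sketch; the @{attribute simp_trace} option as provided by
  the Simplifier): *}

lemma fixes x :: nat shows "x + 0 = x"
  using [[simp_trace]] by simp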

subsection {* Declaring rules \label{sec:simp-rules} *}

text {*
  \begin{matharray}{rcl}
    @{attribute_def simp} & : & @{text attribute} \\
    @{attribute_def split} & : & @{text attribute} \\
    @{attribute_def cong} & : & @{text attribute} \\
    @{command_def "print_simpset"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
  \end{matharray}

  @{rail "
    (@@{attribute simp} | @@{attribute split} | @@{attribute cong})
    (() | 'add' | 'del')
  "}

  \begin{description}

  \item @{attribute simp} declares rewrite rules, by adding or
  deleting them from the simpset within the theory or proof context.
  Rewrite rules are theorems expressing some form of equality, for
  example:

  @{text "Suc ?m + ?n = ?m + Suc ?n"} \\
  @{text "?P \<and> ?P \<longleftrightarrow> ?P"} \\
  @{text "?A \<union> ?B \<equiv> {x. x \<in> ?A \<or> x \<in> ?B}"}

  \smallskip
  Conditional rewrites such as @{text "?m < ?n \<Longrightarrow> ?m div ?n = 0"} are
  also permitted; the conditions can be arbitrary formulas.

  \medskip Internally, all rewrite rules are translated into Pure
  equalities, theorems with conclusion @{text "lhs \<equiv> rhs"}.  The
  simpset contains a function for extracting equalities from arbitrary
  theorems, which is usually installed when the object-logic is
  configured initially.  For example, @{text "\<not> ?x \<in> {}"} could be
  turned into @{text "?x \<in> {} \<equiv> False"}.  Theorems that are declared as
  @{attribute simp} and local assumptions within a goal are treated
  uniformly in this respect.

  The Simplifier accepts the following formats for the @{text "lhs"}
  term:

  \begin{enumerate}

  \item First-order patterns, considering the sublanguage of
  application of constant operators to variable operands, without
  @{text "\<lambda>"}-abstractions or functional variables.
  For example:

  @{text "(?x + ?y) + ?z \<equiv> ?x + (?y + ?z)"} \\
  @{text "f (f ?x ?y) ?z \<equiv> f ?x (f ?y ?z)"}

  \item Higher-order patterns in the sense of \cite{nipkow-patterns}.
  These are terms in @{text "\<beta>"}-normal form (this will always be the
  case unless you have done something strange) where each occurrence
  of an unknown is of the form @{text "?F x\<^sub>1 \<dots> x\<^sub>n"}, where the
  @{text "x\<^sub>i"} are distinct bound variables.

  For example, @{text "(\<forall>x. ?P x \<and> ?Q x) \<equiv> (\<forall>x. ?P x) \<and> (\<forall>x. ?Q x)"}
  or its symmetric form, since the @{text "rhs"} is also a
  higher-order pattern.

  \item Physical first-order patterns over raw @{text "\<lambda>"}-term
  structure without @{text "\<alpha>\<beta>\<eta>"}-equality; abstractions and bound
  variables are treated like quasi-constant term material.

  For example, the rule @{text "?f ?x \<in> range ?f = True"} rewrites the
  term @{text "g a \<in> range g"} to @{text "True"}, but will fail to
  match @{text "g (h b) \<in> range (\<lambda>x. g (h x))"}.  However, offending
  subterms (in our case @{text "?f ?x"}, which is not a pattern) can
  be replaced by adding new variables and conditions like this: @{text
  "?y = ?f ?x \<Longrightarrow> ?y \<in> range ?f = True"} is acceptable as a conditional
  rewrite rule of the second category since conditions can be
  arbitrary terms.

  \end{enumerate}

  \item @{attribute split} declares case split rules.

  \item @{attribute cong} declares congruence rules to the Simplifier
  context.

  Congruence rules are equalities of the form @{text [display]
  "\<dots> \<Longrightarrow> f ?x\<^sub>1 \<dots> ?x\<^sub>n = f ?y\<^sub>1 \<dots> ?y\<^sub>n"}

  This controls the simplification of the arguments of @{text f}.  For
  example, some arguments can be simplified under additional
  assumptions: @{text [display] "?P\<^sub>1 \<longleftrightarrow> ?Q\<^sub>1 \<Longrightarrow> (?Q\<^sub>1 \<Longrightarrow> ?P\<^sub>2 \<longleftrightarrow> ?Q\<^sub>2) \<Longrightarrow>
  (?P\<^sub>1 \<longrightarrow> ?P\<^sub>2) \<longleftrightarrow> (?Q\<^sub>1 \<longrightarrow> ?Q\<^sub>2)"}

  Given this rule, the simplifier assumes @{text "?Q\<^sub>1"} and extracts
  rewrite rules from it when simplifying @{text "?P\<^sub>2"}.  Such local
  assumptions are effective for rewriting formulae such as @{text "x =
  0 \<longrightarrow> y + x = y"}.

  %FIXME
  %The local assumptions are also provided as theorems to the solver;
  %see \secref{sec:simp-solver} below.

  \medskip The following congruence rule for bounded quantifiers also
  supplies contextual information --- about the bound variable:
  @{text [display] "(?A = ?B) \<Longrightarrow> (\<And>x. x \<in> ?B \<Longrightarrow> ?P x \<longleftrightarrow> ?Q x) \<Longrightarrow>
  (\<forall>x \<in> ?A. ?P x) \<longleftrightarrow> (\<forall>x \<in> ?B. ?Q x)"}

  \medskip This congruence rule for conditional expressions can
  supply contextual information for simplifying the arms:
  @{text [display] "?p = ?q \<Longrightarrow> (?q \<Longrightarrow> ?a = ?c) \<Longrightarrow> (\<not> ?q \<Longrightarrow> ?b = ?d) \<Longrightarrow>
  (if ?p then ?a else ?b) = (if ?q then ?c else ?d)"}

  A congruence rule can also \emph{prevent} simplification of some
  arguments.  Here is an alternative congruence rule for conditional
  expressions that conforms to non-strict functional evaluation:
  @{text [display] "?p = ?q \<Longrightarrow> (if ?p then ?a else ?b) = (if ?q then ?a else ?b)"}

  Only the first argument is simplified; the others remain unchanged.
  This can make simplification much faster, but may require an extra
  case split over the condition @{text "?q"} to prove the goal.

  \item @{command "print_simpset"} prints the collection of rules
  declared to the Simplifier, which is also known as ``simpset''
  internally.

  For historical reasons, simpsets may occur independently from the
  current context, but are conceptually dependent on it.  When the
  Simplifier is invoked via one of its main entry points in the Isar
  source language (as proof method \secref{sec:simp-meth} or rule
  attribute \secref{sec:simp-meth}), its simpset is derived from the
  current proof context, and carries a back-reference to that for
  other tools that might get invoked internally (e.g.\ simplification
  procedures \secref{sec:simproc}).  A mismatch of the context of the
  simpset and the context of the problem being simplified may lead to
  unexpected results.

  \end{description}

736 The implicit simpset of the theory context is propagated

737 monotonically through the theory hierarchy: forming a new theory,

738 the union of the simpsets of its imports are taken as starting

739 point. Also note that definitional packages like @{command

740 "datatype"}, @{command "primrec"}, @{command "fun"} routinely

741 declare Simplifier rules to the target context, while plain

742 @{command "definition"} is an exception in \emph{not} declaring

743 anything.

745 \medskip It is up the user to manipulate the current simpset further

746 by explicitly adding or deleting theorems as simplification rules,

747 or installing other tools via simplification procedures

748 (\secref{sec:simproc}). Good simpsets are hard to design. Rules

749 that obviously simplify, like @{text "?n + 0 \<equiv> ?n"} are good

750 candidates for the implicit simpset, unless a special

751 non-normalizing behavior of certain operations is intended. More

752 specific rules (such as distributive laws, which duplicate subterms)

753 should be added only for specific proof steps. Conversely,

754 sometimes a rule needs to be deleted just for some part of a proof.

755 The need of frequent additions or deletions may indicate a poorly

756 designed simpset.

758 \begin{warn}

759 The union of simpsets from theory imports (as described above) is

760 not always a good starting point for the new theory. If some

761 ancestors have deleted simplification rules because they are no

762 longer wanted, while others have left those rules in, then the union

763 will contain the unwanted rules, and thus have to be deleted again

764 in the theory body.

765 \end{warn}

766 *}
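
text {* The contextual rewriting described above can be observed
  directly.  In the following sketch (using Isabelle/HOL arithmetic),
  the local assumption @{text "x = 0"} extracted by the congruence
  rule for @{text "\<longrightarrow>"} is used to rewrite the conclusion: *}

notepad
begin
  fix x y :: nat
  have "x = 0 \<longrightarrow> y + x = y" by simp
end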

subsection {* Ordered rewriting with permutative rules *}

text {* A rewrite rule is \emph{permutative} if the left-hand side and
  right-hand side are equal up to renaming of variables.  The most
  common permutative rule is commutativity: @{text "?x + ?y = ?y +
  ?x"}.  Other examples include @{text "(?x - ?y) - ?z = (?x - ?z) -
  ?y"} in arithmetic and @{text "insert ?x (insert ?y ?A) = insert ?y
  (insert ?x ?A)"} for sets.  Such rules are common enough to merit
  special attention.

  Because ordinary rewriting loops given such rules, the Simplifier
  employs a special strategy, called \emph{ordered rewriting}.
  Permutative rules are detected and only applied if the rewriting
  step decreases the redex wrt.\ a given term ordering.  For example,
  commutativity rewrites @{text "b + a"} to @{text "a + b"}, but then
  stops, because the redex cannot be decreased further in the sense of
  the term ordering.

  The default is lexicographic ordering of term structure, but this
  can also be changed locally for special applications via
  @{index_ML Simplifier.set_termless} in Isabelle/ML.

  \medskip Permutative rewrite rules are declared to the Simplifier
  just like other rewrite rules.  Their special status is recognized
  automatically, and their application is guarded by the term ordering
  accordingly. *}
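
text {* For example (a minimal sketch using the Isabelle/HOL library
  rule @{thm [source] add_commute}): ordered rewriting applies the
  permutative rule to the redex @{text "b + a"}, but leaves the
  already-normalized right-hand side alone, so the proof terminates: *}

notepad
begin
  fix a b :: nat
  have "b + a = a + b" by (simp only: add_commute)
end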

subsubsection {* Rewriting with AC operators *}

text {* Ordered rewriting is particularly effective in the case of
  associative-commutative operators.  (Associativity by itself is not
  permutative.)  When dealing with an AC-operator @{text "f"}, keep
  the following points in mind:

  \begin{itemize}

  \item The associative law must always be oriented from left to
  right, namely @{text "f (f x y) z = f x (f y z)"}.  The opposite
  orientation, if used with commutativity, leads to looping in
  conjunction with the standard term order.

  \item To complete your set of rewrite rules, you must add not just
  associativity (A) and commutativity (C) but also a derived rule
  \emph{left-commutativity} (LC): @{text "f x (f y z) = f y (f x z)"}.

  \end{itemize}

  Ordered rewriting with the combination of A, C, and LC sorts a term
  lexicographically --- the rewriting engine imitates bubble-sort.
*}

locale AC_example =
  fixes f :: "'a \<Rightarrow> 'a \<Rightarrow> 'a"  (infix "\<bullet>" 60)
  assumes assoc: "(x \<bullet> y) \<bullet> z = x \<bullet> (y \<bullet> z)"
  assumes commute: "x \<bullet> y = y \<bullet> x"
begin

lemma left_commute: "x \<bullet> (y \<bullet> z) = y \<bullet> (x \<bullet> z)"
proof -
  have "(x \<bullet> y) \<bullet> z = (y \<bullet> x) \<bullet> z" by (simp only: commute)
  then show ?thesis by (simp only: assoc)
qed

lemmas AC_rules = assoc commute left_commute

text {* Thus the Simplifier is able to establish equalities with
  arbitrary permutations of subterms, by normalizing to a common
  standard form.  For example: *}

lemma "(b \<bullet> c) \<bullet> a = xxx"
  apply (simp only: AC_rules)
  txt {* @{subgoals} *}
  oops

lemma "(b \<bullet> c) \<bullet> a = a \<bullet> (b \<bullet> c)" by (simp only: AC_rules)
lemma "(b \<bullet> c) \<bullet> a = c \<bullet> (b \<bullet> a)" by (simp only: AC_rules)
lemma "(b \<bullet> c) \<bullet> a = (c \<bullet> b) \<bullet> a" by (simp only: AC_rules)

end

text {* Martin and Nipkow \cite{martin-nipkow} discuss the theory and
  give many examples; other algebraic structures are amenable to
  ordered rewriting, such as boolean rings.  The Boyer-Moore theorem
  prover \cite{bm88book} also employs ordered rewriting.
*}

subsubsection {* Re-orienting equalities *}

text {* Another application of ordered rewriting uses the derived rule
  @{thm [source] eq_commute}: @{thm [source = false] eq_commute} to
  reverse equations.

  This is occasionally useful to re-orient local assumptions according
  to the term ordering, when other built-in mechanisms of
  reorientation and mutual simplification fail to apply.  *}

subsection {* Configuration options \label{sec:simp-config} *}

text {*
  \begin{tabular}{rcll}
  @{attribute_def simp_depth_limit} & : & @{text attribute} & default @{text 100} \\
  @{attribute_def simp_trace} & : & @{text attribute} & default @{text false} \\
  @{attribute_def simp_trace_depth_limit} & : & @{text attribute} & default @{text 1} \\
  @{attribute_def simp_debug} & : & @{text attribute} & default @{text false} \\
  \end{tabular}
  \medskip

  These configuration options control further aspects of the
  Simplifier.  See also \secref{sec:config}.

  \begin{description}

  \item @{attribute simp_depth_limit} limits the number of recursive
  invocations of the Simplifier during conditional rewriting.

  \item @{attribute simp_trace} makes the Simplifier output internal
  operations.  This includes rewrite steps, but also bookkeeping like
  modifications of the simpset.

  \item @{attribute simp_trace_depth_limit} limits the effect of
  @{attribute simp_trace} to the given depth of recursive Simplifier
  invocations (when solving conditions of rewrite rules).

  \item @{attribute simp_debug} makes the Simplifier output some extra
  information about internal operations.  This includes any attempted
  invocation of simplification procedures.

  \end{description}
*}
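
text {* For example, Simplifier tracing can be enabled locally, in the
  manner of \secref{sec:config} (a minimal sketch; subsequent
  Simplifier invocations in this context are traced up to depth 2): *}

notepad
begin
  note [[simp_trace = true, simp_trace_depth_limit = 2]]
  fix x :: nat
  have "x + 0 = x" by simp
end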

subsection {* Simplification procedures \label{sec:simproc} *}

text {* Simplification procedures are ML functions that produce proven
  rewrite rules on demand.  They are associated with higher-order
  patterns that approximate the left-hand sides of equations.  The
  Simplifier first matches the current redex against one of the LHS
  patterns; if this succeeds, the corresponding ML function is
  invoked, passing the Simplifier context and redex term.  Thus rules
  may be specifically fashioned for particular situations, resulting
  in a more powerful mechanism than term rewriting by a fixed set of
  rules.

  Any successful result needs to be a (possibly conditional) rewrite
  rule @{text "t \<equiv> u"} that is applicable to the current redex.  The
  rule will be applied just as any ordinary rewrite rule.  It is
  expected to be already in \emph{internal form}, bypassing the
  automatic preprocessing of object-level equivalences.

  \begin{matharray}{rcl}
  @{command_def "simproc_setup"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
  simproc & : & @{text attribute} \\
  \end{matharray}

  @{rail "
  @@{command simproc_setup} @{syntax name} '(' (@{syntax term} + '|') ')' '='
  @{syntax text} \\ (@'identifier' (@{syntax nameref}+))?
  ;

  @@{attribute simproc} (('add' ':')? | 'del' ':') (@{syntax name}+)
  "}

  \begin{description}

  \item @{command "simproc_setup"} defines a named simplification
  procedure that is invoked by the Simplifier whenever any of the
  given term patterns match the current redex.  The implementation,
  which is provided as ML source text, needs to be of type @{ML_type
  "morphism -> simpset -> cterm -> thm option"}, where the @{ML_type
  cterm} represents the current redex @{text r} and the result is
  supposed to be some proven rewrite rule @{text "r \<equiv> r'"} (or a
  generalized version), or @{ML NONE} to indicate failure.  The
  @{ML_type simpset} argument holds the full context of the current
  Simplifier invocation, including the actual Isar proof context.  The
  @{ML_type morphism} informs about the difference of the original
  compilation context wrt.\ the one of the actual application later
  on.  The optional @{keyword "identifier"} specifies theorems that
  represent the logical content of the abstract theory of this
  simproc.

  Morphisms and identifiers are only relevant for simprocs that are
  defined within a local target context, e.g.\ in a locale.

  \item @{text "simproc add: name"} and @{text "simproc del: name"}
  add or delete named simprocs in the current Simplifier context.  The
  default is to add a simproc.  Note that @{command "simproc_setup"}
  already adds the new simproc to the subsequent context.

  \end{description}
*}

subsubsection {* Example *}

text {* The following simplification procedure for @{thm
  [source = false, show_types] unit_eq} in HOL performs fine-grained
  control over rule application, beyond higher-order pattern matching.
  Declaring @{thm unit_eq} as @{attribute simp} directly would make
  the Simplifier loop!  Note that a version of this simplification
  procedure is already active in Isabelle/HOL.  *}

simproc_setup unit ("x::unit") = {*
  fn _ => fn _ => fn ct =>
    if HOLogic.is_unit (term_of ct) then NONE
    else SOME (mk_meta_eq @{thm unit_eq})
*}

text {* Since the Simplifier applies simplification procedures
  frequently, it is important to make the failure check in ML
  reasonably fast. *}

subsection {* Configurable Simplifier strategies \label{sec:simp-strategies} *}

text {* The core term-rewriting engine of the Simplifier is normally
  used in combination with some add-on components that modify the
  strategy and allow the integration of other non-Simplifier proof
  tools.  These may be reconfigured in ML as explained below.  Even if
  the default strategies of object-logics like Isabelle/HOL are used
  unchanged, it helps to understand how the standard Simplifier
  strategies work. *}

subsubsection {* The subgoaler *}

text {*
  \begin{mldecls}
  @{index_ML Simplifier.set_subgoaler: "(Proof.context -> int -> tactic) ->
  Proof.context -> Proof.context"} \\
  @{index_ML Simplifier.prems_of: "Proof.context -> thm list"} \\
  \end{mldecls}

  The subgoaler is the tactic used to solve subgoals arising out of
  conditional rewrite rules or congruence rules.  The default should
  be simplification itself.  In rare situations, this strategy may
  need to be changed.  For example, if the premise of a conditional
  rule is an instance of its conclusion, as in @{text "Suc ?m < ?n \<Longrightarrow>
  ?m < ?n"}, the default strategy could loop.  % FIXME !??

  \begin{description}

  \item @{ML Simplifier.set_subgoaler}~@{text "tac ctxt"} sets the
  subgoaler of the context to @{text "tac"}.  The tactic will
  be applied to the context of the running Simplifier instance.

  \item @{ML Simplifier.prems_of}~@{text "ctxt"} retrieves the current
  set of premises from the context.  This may be non-empty only if
  the Simplifier has been told to utilize local assumptions in the
  first place (cf.\ the options in \secref{sec:simp-meth}).

  \end{description}

  As an example, consider the following alternative subgoaler:
*}

ML {*
  fun subgoaler_tac ctxt =
    assume_tac ORELSE'
    resolve_tac (Simplifier.prems_of ctxt) ORELSE'
    asm_simp_tac ctxt
*}

text {* This tactic first tries to solve the subgoal by assumption or
  by resolving with one of the premises, calling simplification
  only if that fails. *}
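
text {* To take effect, such a subgoaler needs to be installed in a
  Simplifier context.  A minimal sketch, using the @{text
  "subgoaler_tac"} defined above: *}

ML {*
  (*install the alternative subgoaler in the current context*)
  val ctxt' = Simplifier.set_subgoaler subgoaler_tac @{context};
*}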

subsubsection {* The solver *}

text {*
  \begin{mldecls}
  @{index_ML_type solver} \\
  @{index_ML Simplifier.mk_solver: "string ->
  (Proof.context -> int -> tactic) -> solver"} \\
  @{index_ML_op setSolver: "Proof.context * solver -> Proof.context"} \\
  @{index_ML_op addSolver: "Proof.context * solver -> Proof.context"} \\
  @{index_ML_op setSSolver: "Proof.context * solver -> Proof.context"} \\
  @{index_ML_op addSSolver: "Proof.context * solver -> Proof.context"} \\
  \end{mldecls}

  A solver is a tactic that attempts to solve a subgoal after
  simplification.  Its core functionality is to prove trivial subgoals
  such as @{prop "True"} and @{text "t = t"}, but object-logics might
  be more ambitious.  For example, Isabelle/HOL performs a restricted
  version of linear arithmetic here.

  Solvers are packaged up in abstract type @{ML_type solver}, with
  @{ML Simplifier.mk_solver} as the only operation to create a solver.

  \medskip Rewriting does not instantiate unknowns.  For example,
  rewriting alone cannot prove @{text "a \<in> ?A"} since this requires
  instantiating @{text "?A"}.  The solver, however, is an arbitrary
  tactic and may instantiate unknowns as it pleases.  This is the only
  way the Simplifier can handle a conditional rewrite rule whose
  condition contains extra variables.  When a simplification tactic is
  to be combined with other provers, especially with the Classical
  Reasoner, it is important whether it can be considered safe or not.
  For this reason a simpset contains two solvers: safe and unsafe.

  The standard simplification strategy solely uses the unsafe solver,
  which is appropriate in most cases.  For special applications where
  the simplification process is not allowed to instantiate unknowns
  within the goal, simplification starts with the safe solver, but may
  still apply the ordinary unsafe one in nested simplifications for
  conditional rules or congruences.  Note that in this way the overall
  tactic is not totally safe: it may instantiate unknowns that appear
  also in other subgoals.

  \begin{description}

  \item @{ML Simplifier.mk_solver}~@{text "name tac"} turns @{text
  "tac"} into a solver; the @{text "name"} is only attached as a
  comment and has no further significance.

  \item @{text "ctxt setSSolver solver"} installs @{text "solver"} as
  the safe solver of @{text "ctxt"}.

  \item @{text "ctxt addSSolver solver"} adds @{text "solver"} as an
  additional safe solver; it will be tried after the solvers which had
  already been present in @{text "ctxt"}.

  \item @{text "ctxt setSolver solver"} installs @{text "solver"} as the
  unsafe solver of @{text "ctxt"}.

  \item @{text "ctxt addSolver solver"} adds @{text "solver"} as an
  additional unsafe solver; it will be tried after the solvers which
  had already been present in @{text "ctxt"}.

  \end{description}

  \medskip The solver tactic is invoked with the context of the
  running Simplifier.  Further operations may be used to retrieve
  relevant information, such as the list of local Simplifier premises
  via @{ML Simplifier.prems_of} --- this list may be non-empty only if
  the Simplifier runs in a mode that utilizes local assumptions (see
  also \secref{sec:simp-meth}).  The solver is also presented with the
  full goal, including its assumptions, in any case.  Thus it can use
  these (e.g.\ by calling @{ML assume_tac}), even if the Simplifier
  proper happens to ignore local premises at the moment.

  \medskip As explained before, the subgoaler is also used to solve
  the premises of congruence rules.  These are usually of the form
  @{text "s = ?x"}, where @{text "s"} needs to be simplified and
  @{text "?x"} needs to be instantiated with the result.  Typically,
  the subgoaler will invoke the Simplifier at some point, which will
  eventually call the solver.  For this reason, solver tactics must be
  prepared to solve goals of the form @{text "t = ?x"}, usually by
  reflexivity.  In particular, reflexivity should be tried before any
  of the fancy automated proof tools.

  It may even happen that due to simplification the subgoal is no
  longer an equality.  For example, @{text "False \<longleftrightarrow> ?Q"} could be
  rewritten to @{text "\<not> ?Q"}.  To cover this case, the solver could
  try resolving with the theorem @{text "\<not> False"} of the
  object-logic.

  \medskip

  \begin{warn}
  If a premise of a congruence rule cannot be proved, then the
  congruence is ignored.  This should only happen if the rule is
  \emph{conditional} --- that is, contains premises not of the form
  @{text "t = ?x"}.  Otherwise it indicates that some congruence rule,
  or possibly the subgoaler or solver, is faulty.
  \end{warn}
*}
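
text {* For illustration, here is how a new unsafe solver could be
  built and installed (a minimal sketch; following the advice above,
  this hypothetical solver tries reflexivity before assumption): *}

ML {*
  (*hypothetical solver: reflexivity first, then assumption*)
  val trivial_solver =
    Simplifier.mk_solver "trivial"
      (fn _ => rtac @{thm refl} ORELSE' assume_tac);

  (*install it as the unsafe solver of the current context*)
  val ctxt' = @{context} setSolver trivial_solver;
*}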

subsubsection {* The looper *}

text {*
  \begin{mldecls}
  @{index_ML_op setloop: "Proof.context *
  (Proof.context -> int -> tactic) -> Proof.context"} \\
  @{index_ML_op addloop: "Proof.context *
  (string * (Proof.context -> int -> tactic))
  -> Proof.context"} \\
  @{index_ML_op delloop: "Proof.context * string -> Proof.context"} \\
  @{index_ML Splitter.add_split: "thm -> Proof.context -> Proof.context"} \\
  @{index_ML Splitter.del_split: "thm -> Proof.context -> Proof.context"} \\
  \end{mldecls}

  The looper is a list of tactics that are applied after
  simplification, in case the solver failed to solve the simplified
  goal.  If the looper succeeds, the simplification process is started
  all over again.  Each of the subgoals generated by the looper is
  attacked in turn, in reverse order.

  A typical looper is \emph{case splitting}: the expansion of a
  conditional.  Another possibility is to apply an elimination rule on
  the assumptions.  More adventurous loopers could start an induction.

  \begin{description}

  \item @{text "ctxt setloop tac"} installs @{text "tac"} as the only
  looper tactic of @{text "ctxt"}.

  \item @{text "ctxt addloop (name, tac)"} adds @{text "tac"} as an
  additional looper tactic with name @{text "name"}, which is
  significant for managing the collection of loopers.  The tactic will
  be tried after the looper tactics that had already been present in
  @{text "ctxt"}.

  \item @{text "ctxt delloop name"} deletes the looper tactic that was
  associated with @{text "name"} from @{text "ctxt"}.

  \item @{ML Splitter.add_split}~@{text "thm ctxt"} adds split tactics
  for @{text "thm"} as additional looper tactics of @{text "ctxt"}.

  \item @{ML Splitter.del_split}~@{text "thm ctxt"} deletes the split
  tactic corresponding to @{text thm} from the looper tactics of
  @{text "ctxt"}.

  \end{description}

  The splitter replaces applications of a given function; the
  right-hand side of the replacement can be anything.  For example,
  here is a splitting rule for conditional expressions:

  @{text [display] "?P (if ?Q then ?x else ?y) \<longleftrightarrow> (?Q \<longrightarrow> ?P ?x) \<and> (\<not> ?Q \<longrightarrow> ?P ?y)"}

  Another example is the elimination operator for Cartesian products
  (which happens to be called @{text split} in Isabelle/HOL):

  @{text [display] "?P (split ?f ?p) \<longleftrightarrow> (\<forall>a b. ?p = (a, b) \<longrightarrow> ?P (?f a b))"}

  For technical reasons, there is a distinction between case splitting
  in the conclusion and in the premises of a subgoal.  The former is
  done by @{ML Splitter.split_tac} with rules like @{thm [source]
  split_if} or @{thm [source] option.split}, which do not split the
  subgoal, while the latter is done by @{ML Splitter.split_asm_tac}
  with rules like @{thm [source] split_if_asm} or @{thm [source]
  option.split_asm}, which split the subgoal.  The function @{ML
  Splitter.add_split} automatically takes care of which tactic to
  call, analyzing the form of the rules given as argument; it is the
  same operation behind the @{text "split"} attribute or method
  modifier syntax in the Isar source language.

  Case splits should be allowed only when necessary; they are
  expensive and hard to control.  Case-splitting on if-expressions in
  the conclusion is usually beneficial, so it is enabled by default in
  Isabelle/HOL and Isabelle/FOL/ZF.

  \begin{warn}
  With @{ML Splitter.split_asm_tac} as looper component, the
  Simplifier may split subgoals!  This might cause unexpected problems
  in tactic expressions that silently assume 0 or 1 subgoals after
  simplification.
  \end{warn}
*}
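
text {* Case splitting in the conclusion can also be requested
  explicitly via the @{text split} method modifier (a minimal sketch
  in Isabelle/HOL; the rule @{thm [source] split_if} is in fact
  already part of the default setup, so the modifier is redundant
  here): *}

notepad
begin
  fix b :: bool and x y :: 'a and P :: "'a \<Rightarrow> bool"
  have "P (if b then x else y) \<longleftrightarrow> (b \<longrightarrow> P x) \<and> (\<not> b \<longrightarrow> P y)"
    by (simp split: split_if)
end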

subsection {* Forward simplification \label{sec:simp-forward} *}

text {*
  \begin{matharray}{rcl}
  @{attribute_def simplified} & : & @{text attribute} \\
  \end{matharray}

  @{rail "
  @@{attribute simplified} opt? @{syntax thmrefs}?
  ;

  opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use') ')'
  "}

  \begin{description}

  \item @{attribute simplified}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} causes a theorem to
  be simplified, either by exactly the specified rules @{text "a\<^sub>1, \<dots>,
  a\<^sub>n"}, or the implicit Simplifier context if no arguments are given.
  The result is fully simplified by default, including assumptions and
  conclusion; the options @{text no_asm} etc.\ tune the Simplifier in
  the same way as for the @{text simp} method.

  Note that forward simplification restricts the Simplifier to its
  most basic operation of term rewriting; solver and looper tactics
  (\secref{sec:simp-strategies}) are \emph{not} involved here.  The
  @{attribute simplified} attribute should be required only rarely
  under normal circumstances.

  \end{description}
*}
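
text {* For example (a minimal sketch): the attribute simplifies a
  given fact in a forward manner, here reducing @{text "True \<and> P"} to
  @{text "P"}: *}

notepad
begin
  fix P :: bool
  assume a: "True \<and> P"
  note a [simplified]
end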

section {* The Classical Reasoner \label{sec:classical} *}

subsection {* Basic concepts *}

text {* Although Isabelle is generic, many users will be working in
  some extension of classical first-order logic.  Isabelle/ZF is built
  upon theory FOL, while Isabelle/HOL conceptually contains
  first-order logic as a fragment.  Theorem-proving in predicate logic
  is undecidable, but many automated strategies have been developed to
  assist in this task.

  Isabelle's classical reasoner is a generic package that accepts
  certain information about a logic and delivers a suite of automatic
  proof tools, based on rules that are classified and declared in the
  context.  These proof procedures are slow and simplistic compared
  with high-end automated theorem provers, but they can save
  considerable time and effort in practice.  They can prove theorems
  such as Pelletier's \cite{pelletier86} problems 40 and 41 in a few
  milliseconds (including full proof reconstruction): *}

lemma "(\<exists>y. \<forall>x. F x y \<longleftrightarrow> F x x) \<longrightarrow> \<not> (\<forall>x. \<exists>y. \<forall>z. F z y \<longleftrightarrow> \<not> F z x)"
  by blast

lemma "(\<forall>z. \<exists>y. \<forall>x. f x y \<longleftrightarrow> f x z \<and> \<not> f x x) \<longrightarrow> \<not> (\<exists>z. \<forall>x. f x z)"
  by blast

text {* The proof tools are generic.  They are not restricted to
  first-order logic, and have been heavily used in the development of
  the Isabelle/HOL library and applications.  The tactics can be
  traced, and their components can be called directly; in this manner,
  any proof can be viewed interactively. *}

subsubsection {* The sequent calculus *}

text {* Isabelle supports natural deduction, which is easy to use for
  interactive proof.  But natural deduction does not easily lend
  itself to automation, and has a bias towards intuitionism.  For
  certain proofs in classical logic, it cannot be called natural.
  The \emph{sequent calculus}, a generalization of natural deduction,
  is easier to automate.

  A \textbf{sequent} has the form @{text "\<Gamma> \<turnstile> \<Delta>"}, where @{text "\<Gamma>"}
  and @{text "\<Delta>"} are sets of formulae.\footnote{For first-order
  logic, sequents can equivalently be made from lists or multisets of
  formulae.}  The sequent @{text "P\<^sub>1, \<dots>, P\<^sub>m \<turnstile> Q\<^sub>1, \<dots>, Q\<^sub>n"} is
  \textbf{valid} if @{text "P\<^sub>1 \<and> \<dots> \<and> P\<^sub>m"} implies @{text "Q\<^sub>1 \<or> \<dots> \<or>
  Q\<^sub>n"}.  Thus @{text "P\<^sub>1, \<dots>, P\<^sub>m"} represent assumptions, each of which
  is true, while @{text "Q\<^sub>1, \<dots>, Q\<^sub>n"} represent alternative goals.  A
  sequent is \textbf{basic} if its left and right sides have a common
  formula, as in @{text "P, Q \<turnstile> Q, R"}; basic sequents are trivially
  valid.

  Sequent rules are classified as \textbf{right} or \textbf{left},
  indicating which side of the @{text "\<turnstile>"} symbol they operate on.
  Rules that operate on the right side are analogous to natural
  deduction's introduction rules, and left rules are analogous to
  elimination rules.  The sequent calculus analogue of @{text "(\<longrightarrow>I)"}
  is the rule
  \[
  \infer[@{text "(\<longrightarrow>R)"}]{@{text "\<Gamma> \<turnstile> \<Delta>, P \<longrightarrow> Q"}}{@{text "P, \<Gamma> \<turnstile> \<Delta>, Q"}}
  \]
  Applying the rule backwards, this breaks down some implication on
  the right side of a sequent; @{text "\<Gamma>"} and @{text "\<Delta>"} stand for
  the sets of formulae that are unaffected by the inference.  The
  analogue of the pair @{text "(\<or>I1)"} and @{text "(\<or>I2)"} is the
  single rule
  \[
  \infer[@{text "(\<or>R)"}]{@{text "\<Gamma> \<turnstile> \<Delta>, P \<or> Q"}}{@{text "\<Gamma> \<turnstile> \<Delta>, P, Q"}}
  \]
  This breaks down some disjunction on the right side, replacing it by
  both disjuncts.  Thus, the sequent calculus is a kind of
  multiple-conclusion logic.

  To illustrate the use of multiple formulae on the right, let us
  prove the classical theorem @{text "(P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)"}.  Working
  backwards, we reduce this formula to a basic sequent:
  \[
  \infer[@{text "(\<or>R)"}]{@{text "\<turnstile> (P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)"}}
    {\infer[@{text "(\<longrightarrow>R)"}]{@{text "\<turnstile> (P \<longrightarrow> Q), (Q \<longrightarrow> P)"}}
      {\infer[@{text "(\<longrightarrow>R)"}]{@{text "P \<turnstile> Q, (Q \<longrightarrow> P)"}}
        {@{text "P, Q \<turnstile> Q, P"}}}}
  \]

  This example is typical of the sequent calculus: start with the
  desired theorem and apply rules backwards in a fairly arbitrary
  manner.  This yields a surprisingly effective proof procedure.
  Quantifiers add only a few complications, since Isabelle handles
  parameters and schematic variables.  See \cite[Chapter
  10]{paulson-ml2} for further discussion. *}

subsubsection {* Simulating sequents by natural deduction *}

text {* Isabelle can represent sequents directly, as in the
  object-logic LK.  But natural deduction is easier to work with, and
  most object-logics employ it.  Fortunately, we can simulate the
  sequent @{text "P\<^sub>1, \<dots>, P\<^sub>m \<turnstile> Q\<^sub>1, \<dots>, Q\<^sub>n"} by the Isabelle formula
  @{text "P\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> P\<^sub>m \<Longrightarrow> \<not> Q\<^sub>2 \<Longrightarrow> \<dots> \<Longrightarrow> \<not> Q\<^sub>n \<Longrightarrow> Q\<^sub>1"}, where the order
  of the assumptions and the choice of @{text "Q\<^sub>1"} are arbitrary.
  Elim-resolution plays a key role in simulating sequent proofs.

  We can easily handle reasoning on the left.  Elim-resolution with
  the rules @{text "(\<or>E)"}, @{text "(\<bottom>E)"} and @{text "(\<exists>E)"} achieves
  a similar effect as the corresponding sequent rules.  For the other
  connectives, we use sequent-style elimination rules instead of
  destruction rules such as @{text "(\<and>E1, 2)"} and @{text "(\<forall>E)"}.
  But note that the rule @{text "(\<not>L)"} has no effect under our
  representation of sequents!
  \[
  \infer[@{text "(\<not>L)"}]{@{text "\<not> P, \<Gamma> \<turnstile> \<Delta>"}}{@{text "\<Gamma> \<turnstile> \<Delta>, P"}}
  \]

  What about reasoning on the right?  Introduction rules can only
  affect the formula in the conclusion, namely @{text "Q\<^sub>1"}.  The
  other right-side formulae are represented as negated assumptions,
  @{text "\<not> Q\<^sub>2, \<dots>, \<not> Q\<^sub>n"}.  In order to operate on one of these, it
  must first be exchanged with @{text "Q\<^sub>1"}.  Elim-resolution with the
  @{text swap} rule has this effect: @{text "\<not> P \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> R"}

  To ensure that swaps occur only when necessary, each introduction
  rule is converted into a swapped form: it is resolved with the
  second premise of @{text "(swap)"}.  The swapped form of @{text
  "(\<and>I)"}, which might be called @{text "(\<not>\<and>E)"}, is
  @{text [display] "\<not> (P \<and> Q) \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> (\<not> R \<Longrightarrow> Q) \<Longrightarrow> R"}

  Similarly, the swapped form of @{text "(\<longrightarrow>I)"} is
  @{text [display] "\<not> (P \<longrightarrow> Q) \<Longrightarrow> (\<not> R \<Longrightarrow> P \<Longrightarrow> Q) \<Longrightarrow> R"}

  Swapped introduction rules are applied using elim-resolution, which
  deletes the negated formula.  Our representation of sequents also
  requires the use of ordinary introduction rules.  If we had no
  regard for readability of intermediate goal states, we could treat
  the right side more uniformly by representing sequents as @{text
  [display] "P\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> P\<^sub>m \<Longrightarrow> \<not> Q\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> \<not> Q\<^sub>n \<Longrightarrow> \<bottom>"}
*}

subsubsection {* Extra rules for the sequent calculus *}

text {* As mentioned, destruction rules such as @{text "(\<and>E1, 2)"} and
  @{text "(\<forall>E)"} must be replaced by sequent-style elimination rules.
  In addition, we need rules to embody the classical equivalence
  between @{text "P \<longrightarrow> Q"} and @{text "\<not> P \<or> Q"}. The introduction
  rules @{text "(\<or>I1, 2)"} are replaced by a rule that simulates
  @{text "(\<or>R)"}: @{text [display] "(\<not> Q \<Longrightarrow> P) \<Longrightarrow> P \<or> Q"}

  The destruction rule @{text "(\<longrightarrow>E)"} is replaced by @{text [display]
  "(P \<longrightarrow> Q) \<Longrightarrow> (\<not> P \<Longrightarrow> R) \<Longrightarrow> (Q \<Longrightarrow> R) \<Longrightarrow> R"}

  Quantifier replication also requires special rules. In classical
  logic, @{text "\<exists>x. P x"} is equivalent to @{text "\<not> (\<forall>x. \<not> P x)"};
  the rules @{text "(\<exists>R)"} and @{text "(\<forall>L)"} are dual:
  \[
  \infer[@{text "(\<exists>R)"}]{@{text "\<Gamma> \<turnstile> \<Delta>, \<exists>x. P x"}}{@{text "\<Gamma> \<turnstile> \<Delta>, \<exists>x. P x, P t"}}
  \qquad
  \infer[@{text "(\<forall>L)"}]{@{text "\<forall>x. P x, \<Gamma> \<turnstile> \<Delta>"}}{@{text "P t, \<forall>x. P x, \<Gamma> \<turnstile> \<Delta>"}}
  \]
  Thus both kinds of quantifier may be replicated. Theorems requiring
  multiple uses of a universal formula are easy to invent; consider
  @{text [display] "(\<forall>x. P x \<longrightarrow> P (f x)) \<and> P a \<longrightarrow> P (f\<^sup>n a)"} for any
  @{text "n > 1"}. Natural examples of the multiple use of an
  existential formula are rare; a standard one is @{text "\<exists>x. \<forall>y. P x
  \<longrightarrow> P y"}.

  Forgoing quantifier replication loses completeness, but gains
  decidability, since the search space becomes finite. Many useful
  theorems can be proved without replication, and the search generally
  delivers its verdict in a reasonable time. To adopt this approach,
  represent the sequent rules @{text "(\<exists>R)"}, @{text "(\<exists>L)"} and
  @{text "(\<forall>R)"} by @{text "(\<exists>I)"}, @{text "(\<exists>E)"} and @{text "(\<forall>I)"},
  respectively, and put @{text "(\<forall>E)"} into elimination form: @{text
  [display] "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> Q) \<Longrightarrow> Q"}

  Elim-resolution with this rule will delete the universal formula
  after a single use. To replicate universal quantifiers, replace the
  rule by @{text [display] "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> \<forall>x. P x \<Longrightarrow> Q) \<Longrightarrow> Q"}

  To replicate existential quantifiers, replace @{text "(\<exists>I)"} by
  @{text [display] "(\<not> (\<exists>x. P x) \<Longrightarrow> P t) \<Longrightarrow> \<exists>x. P x"}

  All introduction rules mentioned above are also useful in swapped
  form.

  Replication makes the search space infinite; we must apply the rules
  with care. The classical reasoner distinguishes between safe and
  unsafe rules, applying the latter only when there is no alternative.
  Depth-first search may well go down a blind alley; best-first search
  is better behaved in an infinite search space. However, quantifier
  replication is too expensive to prove any but the simplest theorems.
*}
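text {* Both kinds of replication can be exercised directly. The
  following checks (an illustrative sketch, assuming the standard
  classical setup of Isabelle/HOL) state the two examples from above,
  which require multiple uses of a universal and an existential
  formula, respectively: *}

notepad
begin
  have "(\<forall>x. P x \<longrightarrow> P (f x)) \<and> P a \<longrightarrow> P (f (f a))" by blast
  have "\<exists>x. \<forall>y. P x \<longrightarrow> P y" by blast
end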

subsection {* Rule declarations *}

text {* The proof tools of the Classical Reasoner depend on
  collections of rules declared in the context, which are classified
  as introduction, elimination or destruction and as \emph{safe} or
  \emph{unsafe}. In general, safe rules can be attempted blindly,
  while unsafe rules must be used with care. A safe rule must never
  reduce a provable goal to an unprovable set of subgoals.

  The rule @{text "P \<Longrightarrow> P \<or> Q"} is unsafe because it reduces @{text "P
  \<or> Q"} to @{text "P"}, which might turn out to be a premature choice
  of an unprovable subgoal. Any rule whose premises contain new
  unknowns is unsafe. The elimination rule @{text "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> Q) \<Longrightarrow> Q"} is
  unsafe, since it is applied via elim-resolution, which discards the
  assumption @{text "\<forall>x. P x"} and replaces it by the weaker
  assumption @{text "P t"}. The rule @{text "P t \<Longrightarrow> \<exists>x. P x"} is
  unsafe for similar reasons. The quantifier duplication rule @{text
  "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> \<forall>x. P x \<Longrightarrow> Q) \<Longrightarrow> Q"} is unsafe in a different sense:
  since it keeps the assumption @{text "\<forall>x. P x"}, it is prone to
  looping. In classical first-order logic, all rules are safe except
  those mentioned above.

  The safe~/ unsafe distinction is vague, and may be regarded merely
  as a way of giving some rules priority over others. One could argue
  that @{text "(\<or>E)"} is unsafe, because repeated application of it
  could generate exponentially many subgoals. Induction rules are
  unsafe because inductive proofs are difficult to set up
  automatically. Any inference that instantiates an unknown in the
  proof state is unsafe --- thus matching must be used, rather than
  unification. Even proof by assumption is unsafe if it instantiates
  unknowns shared with other subgoals.

  \begin{matharray}{rcl}
    @{command_def "print_claset"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
    @{attribute_def intro} & : & @{text attribute} \\
    @{attribute_def elim} & : & @{text attribute} \\
    @{attribute_def dest} & : & @{text attribute} \\
    @{attribute_def rule} & : & @{text attribute} \\
    @{attribute_def iff} & : & @{text attribute} \\
    @{attribute_def swapped} & : & @{text attribute} \\
  \end{matharray}

  @{rail "
    (@@{attribute intro} | @@{attribute elim} | @@{attribute dest}) ('!' | () | '?') @{syntax nat}?
    ;
    @@{attribute rule} 'del'
    ;
    @@{attribute iff} (((() | 'add') '?'?) | 'del')
  "}

  \begin{description}

  \item @{command "print_claset"} prints the collection of rules
  declared to the Classical Reasoner, i.e.\ the @{ML_type claset}
  within the context.

  \item @{attribute intro}, @{attribute elim}, and @{attribute dest}
  declare introduction, elimination, and destruction rules,
  respectively. By default, rules are considered as \emph{unsafe}
  (i.e.\ not applied blindly without backtracking), while ``@{text
  "!"}'' classifies as \emph{safe}. Rule declarations marked by
  ``@{text "?"}'' coincide with those of Isabelle/Pure, cf.\
  \secref{sec:pure-meth-att} (i.e.\ are only applied in single steps
  of the @{method rule} method). The optional natural number
  specifies an explicit weight argument, which is ignored by the
  automated reasoning tools, but determines the search order of single
  rule steps.

  Introduction rules are those that can be applied using ordinary
  resolution. Their swapped forms are generated internally, which
  will be applied using elim-resolution. Elimination rules are
  applied using elim-resolution. Rules are sorted by the number of
  new subgoals they will yield; rules that generate the fewest
  subgoals will be tried first. Otherwise, later declarations take
  precedence over earlier ones.

  Rules already present in the context with the same classification
  are ignored. A warning is printed if the rule has already been
  added with some other classification, but the rule is added anyway
  as requested.

  \item @{attribute rule}~@{text del} deletes all occurrences of a
  rule from the classical context, regardless of its classification as
  introduction~/ elimination~/ destruction and safe~/ unsafe.

  \item @{attribute iff} declares logical equivalences to the
  Simplifier and the Classical Reasoner at the same time.
  Non-conditional rules result in a safe introduction and elimination
  pair; conditional ones are considered unsafe. Rules with negative
  conclusion are automatically inverted (using @{text "\<not>"}-elimination
  internally).

  The ``@{text "?"}'' version of @{attribute iff} declares rules to
  the Isabelle/Pure context only, and omits the Simplifier
  declaration.

  \item @{attribute swapped} turns an introduction rule into an
  elimination rule, by resolving with the classical swap principle
  @{text "\<not> P \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> R"} in the second position. This is mainly
  for illustrative purposes: the Classical Reasoner already swaps
  rules internally as explained above.

  \end{description}
*}
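text {* For example, the classical rules for conjunction could be
  declared as follows (an illustrative snippet; in Isabelle/HOL these
  declarations are already part of the standard setup, so repeating
  them here has no effect): *}

declare conjI [intro!] conjE [elim!]

text {* The resulting rule collection may be inspected with @{command
  print_claset}. *}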

subsection {* Structured methods *}

text {*
  \begin{matharray}{rcl}
    @{method_def rule} & : & @{text method} \\
    @{method_def contradiction} & : & @{text method} \\
  \end{matharray}

  @{rail "
    @@{method rule} @{syntax thmrefs}?
  "}

  \begin{description}

  \item @{method rule} as offered by the Classical Reasoner is a
  refinement over the Pure one (see \secref{sec:pure-meth-att}). Both
  versions work the same, but the classical version observes the
  classical rule context in addition to that of Isabelle/Pure.

  Common object logics (HOL, ZF, etc.) declare a rich collection of
  classical rules (even if these would qualify as intuitionistic
  ones), but only a few declarations to the rule context of
  Isabelle/Pure (\secref{sec:pure-meth-att}).

  \item @{method contradiction} solves some goal by contradiction,
  deriving any result from both @{text "\<not> A"} and @{text A}. Chained
  facts, which are guaranteed to participate, may appear in either
  order.

  \end{description}
*}
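text {* For example, @{method contradiction} may be used like this (a
  minimal sketch): *}

notepad
begin
  assume a: A and b: "\<not> A"
  from b a have B by contradiction
end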

subsection {* Fully automated methods *}

text {*
  \begin{matharray}{rcl}
    @{method_def blast} & : & @{text method} \\
    @{method_def auto} & : & @{text method} \\
    @{method_def force} & : & @{text method} \\
    @{method_def fast} & : & @{text method} \\
    @{method_def slow} & : & @{text method} \\
    @{method_def best} & : & @{text method} \\
    @{method_def fastforce} & : & @{text method} \\
    @{method_def slowsimp} & : & @{text method} \\
    @{method_def bestsimp} & : & @{text method} \\
    @{method_def deepen} & : & @{text method} \\
  \end{matharray}

  @{rail "
    @@{method blast} @{syntax nat}? (@{syntax clamod} * )
    ;
    @@{method auto} (@{syntax nat} @{syntax nat})? (@{syntax clasimpmod} * )
    ;
    @@{method force} (@{syntax clasimpmod} * )
    ;
    (@@{method fast} | @@{method slow} | @@{method best}) (@{syntax clamod} * )
    ;
    (@@{method fastforce} | @@{method slowsimp} | @@{method bestsimp})
      (@{syntax clasimpmod} * )
    ;
    @@{method deepen} (@{syntax nat} ?) (@{syntax clamod} * )
    ;
    @{syntax_def clamod}:
      (('intro' | 'elim' | 'dest') ('!' | () | '?') | 'del') ':' @{syntax thmrefs}
    ;
    @{syntax_def clasimpmod}: ('simp' (() | 'add' | 'del' | 'only') |
      ('cong' | 'split') (() | 'add' | 'del') |
      'iff' (((() | 'add') '?'?) | 'del') |
      (('intro' | 'elim' | 'dest') ('!' | () | '?') | 'del')) ':' @{syntax thmrefs}
  "}

  \begin{description}

  \item @{method blast} is a separate classical tableau prover that
  uses the same classical rule declarations as explained before.

  Proof search is coded directly in ML using special data structures.
  A successful proof is then reconstructed using regular Isabelle
  inferences. It is faster and more powerful than the other classical
  reasoning tools, but has major limitations too.

  \begin{itemize}

  \item It does not use the classical wrapper tacticals, such as the
  integration with the Simplifier of @{method fastforce}.

  \item It does not perform higher-order unification, as needed by the
  rule @{thm [source=false] rangeI} in HOL. There are often
  alternatives to such rules, for example @{thm [source=false]
  range_eqI}.

  \item Function variables may only be applied to parameters of the
  subgoal. (This restriction arises because the prover does not use
  higher-order unification.) If other function variables are present
  then the prover will fail with the message \texttt{Function Var's
  argument not a bound variable}.

  \item Its proof strategy is more general than @{method fast} but can
  be slower. If @{method blast} fails or seems to be running forever,
  try @{method fast} and the other proof tools described below.

  \end{itemize}

  The optional integer argument specifies a bound for the number of
  unsafe steps used in a proof. By default, @{method blast} starts
  with a bound of 0 and increases it successively to 20. In contrast,
  @{text "(blast lim)"} tries to prove the goal using a search bound
  of @{text "lim"}. Sometimes a slow proof using @{method blast} can
  be made much faster by supplying the successful search bound to this
  proof method instead.

  \item @{method auto} combines classical reasoning with
  simplification. It is intended for situations where there are a lot
  of mostly trivial subgoals; it proves all the easy ones, leaving the
  ones it cannot prove. Occasionally, attempting to prove the hard
  ones may take a long time.

  The optional depth arguments in @{text "(auto m n)"} refer to its
  builtin classical reasoning procedures: @{text m} (default 4) is for
  @{method blast}, which is tried first, and @{text n} (default 2) is
  for a slower but more general alternative that also takes wrappers
  into account.

  \item @{method force} is intended to prove the first subgoal
  completely, using many fancy proof tools and performing a rather
  exhaustive search. As a result, proof attempts may take rather long
  or diverge easily.

  \item @{method fast}, @{method best}, @{method slow} attempt to
  prove the first subgoal using sequent-style reasoning as explained
  before. Unlike @{method blast}, they construct proofs directly in
  Isabelle.

  There is a difference in search strategy and back-tracking: @{method
  fast} uses depth-first search and @{method best} uses best-first
  search (guided by a heuristic function: normally the total size of
  the proof state).

  Method @{method slow} is like @{method fast}, but conducts a broader
  search: it may, when backtracking from a failed proof attempt, undo
  even the step of proving a subgoal by assumption.

  \item @{method fastforce}, @{method slowsimp}, @{method bestsimp}
  are like @{method fast}, @{method slow}, @{method best},
  respectively, but use the Simplifier as additional wrapper. The name
  @{method fastforce} reflects the behaviour of this popular method
  better without requiring an understanding of its implementation.

  \item @{method deepen} works by exhaustive search up to a certain
  depth. The start depth is 4 (unless specified explicitly), and the
  depth is increased iteratively up to 10. Unsafe rules are modified
  to preserve the formula they act on, so that it can be used
  repeatedly. This method can prove more goals than @{method fast},
  but is much slower, for example if the assumptions have many
  universal quantifiers.

  \end{description}

  All of the above methods support additional modifiers of the context
  of classical (and simplifier) rules, but the ones related to the
  Simplifier are explicitly prefixed by @{text simp} here. The
  semantics of these ad-hoc rule declarations is analogous to the
  attributes given before. Facts provided by forward chaining are
  inserted into the goal before commencing proof search.
*}
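text {* For example, the following classical statement is solved
  immediately by @{method blast}; the second variant supplies an
  explicit search bound (a minimal sketch, assuming the standard
  Isabelle/HOL setup): *}

notepad
begin
  have "(\<exists>y. \<forall>x. P x y) \<longrightarrow> (\<forall>x. \<exists>y. P x y)" by blast
  have "(\<exists>y. \<forall>x. P x y) \<longrightarrow> (\<forall>x. \<exists>y. P x y)" by (blast 4)
end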

subsection {* Partially automated methods *}

text {* These proof methods may help in situations when the
  fully-automated tools fail. The result is a simpler subgoal that
  can be tackled by other means, such as by manual instantiation of
  quantifiers.

  \begin{matharray}{rcl}
    @{method_def safe} & : & @{text method} \\
    @{method_def clarify} & : & @{text method} \\
    @{method_def clarsimp} & : & @{text method} \\
  \end{matharray}

  @{rail "
    (@@{method safe} | @@{method clarify}) (@{syntax clamod} * )
    ;
    @@{method clarsimp} (@{syntax clasimpmod} * )
  "}

  \begin{description}

  \item @{method safe} repeatedly performs safe steps on all subgoals.
  It is deterministic, with at most one outcome.

  \item @{method clarify} performs a series of safe steps without
  splitting subgoals; see also @{method clarify_step}.

  \item @{method clarsimp} acts like @{method clarify}, but also does
  simplification. Note that if the Simplifier context includes a
  splitter for the premises, the subgoal may still be split.

  \end{description}
*}
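text {* For example (a minimal sketch, assuming the standard
  Isabelle/HOL setup), @{method clarify} decomposes the goal as far as
  safe non-splitting steps permit, before an automated method finishes
  the proof: *}

notepad
begin
  have "A \<and> B \<longrightarrow> B \<and> A"
    apply clarify -- "safe steps without splitting subgoals"
    apply auto
    done
end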

subsection {* Single-step tactics *}

text {*
  \begin{matharray}{rcl}
    @{method_def safe_step} & : & @{text method} \\
    @{method_def inst_step} & : & @{text method} \\
    @{method_def step} & : & @{text method} \\
    @{method_def slow_step} & : & @{text method} \\
    @{method_def clarify_step} & : & @{text method} \\
  \end{matharray}

  These are the primitive tactics behind the automated proof methods
  of the Classical Reasoner. By calling them yourself, you can
  execute these procedures one step at a time.

  \begin{description}

  \item @{method safe_step} performs a safe step on the first subgoal.
  The safe wrapper tacticals are applied to a tactic that may include
  proof by assumption or Modus Ponens (taking care not to instantiate
  unknowns), or substitution.

  \item @{method inst_step} is like @{method safe_step}, but allows
  unknowns to be instantiated.

  \item @{method step} is the basic step of the proof procedure; it
  operates on the first subgoal. The unsafe wrapper tacticals are
  applied to a tactic that tries @{method safe}, @{method inst_step},
  or applies an unsafe rule from the context.

  \item @{method slow_step} resembles @{method step}, but allows
  backtracking between using safe rules with instantiation (@{method
  inst_step}) and using unsafe rules. The resulting search space is
  larger.

  \item @{method clarify_step} performs a safe step on the first
  subgoal; no splitting step is applied. For example, the subgoal
  @{text "A \<and> B"} is left as a conjunction. Proof by assumption,
  Modus Ponens, etc., may be performed provided they do not
  instantiate unknowns. Assumptions of the form @{text "x = t"} may
  be eliminated. The safe wrapper tactical is applied.

  \end{description}
*}

subsection {* Modifying the search step *}

text {*
  \begin{mldecls}
    @{index_ML_type wrapper: "(int -> tactic) -> (int -> tactic)"} \\[0.5ex]
    @{index_ML_op addSWrapper: "Proof.context *
  (string * (Proof.context -> wrapper)) -> Proof.context"} \\
    @{index_ML_op addSbefore: "Proof.context *
  (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\
    @{index_ML_op addSafter: "Proof.context *
  (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\
    @{index_ML_op delSWrapper: "Proof.context * string -> Proof.context"} \\[0.5ex]
    @{index_ML_op addWrapper: "Proof.context *
  (string * (Proof.context -> wrapper)) -> Proof.context"} \\
    @{index_ML_op addbefore: "Proof.context *
  (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\
    @{index_ML_op addafter: "Proof.context *
  (string * (Proof.context -> int -> tactic)) -> Proof.context"} \\
    @{index_ML_op delWrapper: "Proof.context * string -> Proof.context"} \\[0.5ex]
    @{index_ML addSss: "Proof.context -> Proof.context"} \\
    @{index_ML addss: "Proof.context -> Proof.context"} \\
  \end{mldecls}

  The proof strategy of the Classical Reasoner is simple. Perform as
  many safe inferences as possible; or else, apply certain safe rules,
  allowing instantiation of unknowns; or else, apply an unsafe rule.
  The tactics also eliminate assumptions of the form @{text "x = t"}
  by substitution if they have been set up to do so. They may perform
  a form of Modus Ponens: if there are assumptions @{text "P \<longrightarrow> Q"} and
  @{text "P"}, then replace @{text "P \<longrightarrow> Q"} by @{text "Q"}.

  The classical reasoning tools --- except @{method blast} --- allow
  modification of this basic proof strategy by applying two lists of
  arbitrary \emph{wrapper tacticals} to it. The first wrapper list,
  which is considered to contain safe wrappers only, affects @{method
  safe_step} and all the tactics that call it. The second one, which
  may contain unsafe wrappers, affects the unsafe parts of @{method
  step}, @{method slow_step}, and the tactics that call them. A
  wrapper transforms each step of the search, for example by
  attempting other tactics before or after the original step tactic.
  All members of a wrapper list are applied in turn to the respective
  step tactic.

  Initially the two wrapper lists are empty, which means no
  modification of the step tactics. Safe and unsafe wrappers are added
  to a claset with the functions given below, supplying them with
  wrapper names. These names may be used to selectively delete
  wrappers.

  \begin{description}

  \item @{text "ctxt addSWrapper (name, wrapper)"} adds a new wrapper,
  which should yield a safe tactic, to modify the existing safe step
  tactic.

  \item @{text "ctxt addSbefore (name, tac)"} adds the given tactic as a
  safe wrapper, such that it is tried \emph{before} each safe step of
  the search.

  \item @{text "ctxt addSafter (name, tac)"} adds the given tactic as a
  safe wrapper, such that it is tried when a safe step of the search
  would fail.

  \item @{text "ctxt delSWrapper name"} deletes the safe wrapper with
  the given name.

  \item @{text "ctxt addWrapper (name, wrapper)"} adds a new wrapper to
  modify the existing (unsafe) step tactic.

  \item @{text "ctxt addbefore (name, tac)"} adds the given tactic as an
  unsafe wrapper, such that its result is concatenated \emph{before}
  the result of each unsafe step.

  \item @{text "ctxt addafter (name, tac)"} adds the given tactic as an
  unsafe wrapper, such that its result is concatenated \emph{after}
  the result of each unsafe step.

  \item @{text "ctxt delWrapper name"} deletes the unsafe wrapper with
  the given name.

  \item @{text "addSss"} adds the simpset of the context to its
  classical set. The assumptions and goal will be simplified, in a
  rather safe way, after each safe step of the search.

  \item @{text "addss"} adds the simpset of the context to its
  classical set. The assumptions and goal will be simplified, before
  each unsafe step of the search.

  \end{description}
*}

section {* Object-logic setup \label{sec:object-logic} *}

text {*
  \begin{matharray}{rcl}
    @{command_def "judgment"} & : & @{text "theory \<rightarrow> theory"} \\
    @{method_def atomize} & : & @{text method} \\
    @{attribute_def atomize} & : & @{text attribute} \\
    @{attribute_def rule_format} & : & @{text attribute} \\
    @{attribute_def rulify} & : & @{text attribute} \\
  \end{matharray}

  The very starting point for any Isabelle object-logic is a ``truth
  judgment'' that links object-level statements to the meta-logic
  (with its minimal language of @{text prop} that covers universal
  quantification @{text "\<And>"} and implication @{text "\<Longrightarrow>"}).

  Common object-logics are sufficiently expressive to internalize rule
  statements over @{text "\<And>"} and @{text "\<Longrightarrow>"} within their own
  language. This is useful in certain situations where a rule needs
  to be viewed as an atomic statement from the meta-level perspective,
  e.g.\ @{text "\<And>x. x \<in> A \<Longrightarrow> P x"} versus @{text "\<forall>x \<in> A. P x"}.

  From the following language elements, only the @{method atomize}
  method and @{attribute rule_format} attribute are occasionally
  required by end-users; the rest is for those who need to set up
  their own object-logic. In the latter case existing formulations of
  Isabelle/FOL or Isabelle/HOL may be taken as realistic examples.

  Generic tools may refer to the information provided by object-logic
  declarations internally.

  @{rail "
    @@{command judgment} @{syntax name} '::' @{syntax type} @{syntax mixfix}?
    ;
    @@{attribute atomize} ('(' 'full' ')')?
    ;
    @@{attribute rule_format} ('(' 'noasm' ')')?
  "}

  \begin{description}

  \item @{command "judgment"}~@{text "c :: \<sigma> (mx)"} declares constant
  @{text c} as the truth judgment of the current object-logic. Its
  type @{text \<sigma>} should specify a coercion of the category of
  object-level propositions to @{text prop} of the Pure meta-logic;
  the mixfix annotation @{text "(mx)"} would typically just link the
  object language (internally of syntactic category @{text logic})
  with that of @{text prop}. Only one @{command "judgment"}
  declaration may be given in any theory development.

  \item @{method atomize} (as a method) rewrites any non-atomic
  premises of a sub-goal, using the meta-level equations declared via
  @{attribute atomize} (as an attribute) beforehand. As a result,
  heavily nested goals become amenable to fundamental operations such
  as resolution (cf.\ the @{method (Pure) rule} method). Giving the
  ``@{text "(full)"}'' option here means to turn the whole subgoal
  into an object-statement (if possible), including the outermost
  parameters and assumptions as well.

  A typical collection of @{attribute atomize} rules for a particular
  object-logic would provide an internalization for each of the
  connectives @{text "\<And>"}, @{text "\<Longrightarrow>"}, and @{text "\<equiv>"}.
  Meta-level conjunction should be covered as well (this is
  particularly important for locales, see \secref{sec:locale}).

  \item @{attribute rule_format} rewrites a theorem by the equalities
  declared as @{attribute rulify} rules in the current object-logic.
  By default, the result is fully normalized, including assumptions
  and conclusions at any depth. The @{text "(noasm)"} option
  restricts the transformation to the conclusion of a rule.

  In common object-logics (HOL, FOL, ZF), the effect of @{attribute
  rule_format} is to replace (bounded) universal quantification
  (@{text "\<forall>"}) and implication (@{text "\<longrightarrow>"}) by the corresponding
  rule statements over @{text "\<And>"} and @{text "\<Longrightarrow>"}.

  \end{description}
*}
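text {* For example, Isabelle/HOL declares its truth judgment
  essentially like this (quoted here for illustration only, since at
  most one @{command "judgment"} may be given per theory development):
  @{text [display] "judgment Trueprop :: \"bool \<Rightarrow> prop\"  (\"(_)\" 5)"}
  The mixfix annotation makes the coercion invisible in the concrete
  syntax, so plain @{text bool} expressions may be written wherever
  Pure expects a @{text prop}. *}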

section {* Tracing higher-order unification *}

text {*
  \begin{tabular}{rcll}
    @{attribute_def unify_trace_simp} & : & @{text "attribute"} & default @{text "false"} \\
    @{attribute_def unify_trace_types} & : & @{text "attribute"} & default @{text "false"} \\
    @{attribute_def unify_trace_bound} & : & @{text "attribute"} & default @{text "50"} \\
    @{attribute_def unify_search_bound} & : & @{text "attribute"} & default @{text "60"} \\
  \end{tabular}
  \medskip

  Higher-order unification works well in most practical situations,
  but sometimes needs extra care to identify problems. These tracing
  options may help.

  \begin{description}

  \item @{attribute unify_trace_simp} controls tracing of the
  simplification phase of higher-order unification.

  \item @{attribute unify_trace_types} controls warnings of
  incompleteness, when unification is not considering all possible
  instantiations of schematic type variables.

  \item @{attribute unify_trace_bound} determines the depth at which
  unification starts to print tracing information; a value of 0
  enables full tracing. At the default value, tracing information is
  almost never printed in practice.

  \item @{attribute unify_search_bound} prevents unification from
  searching past the given depth. Because of this bound, higher-order
  unification cannot return an infinite sequence, though it can return
  an exponentially long one. The search rarely approaches the default
  value in practice. If the search is cut off, unification prints a
  warning ``Unification bound exceeded''.

  \end{description}

  \begin{warn}
  Options for unification cannot be modified in a local context. Only
  the global theory content is taken into account.
  \end{warn}
*}
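text {* For example, tracing may be enabled for an entire theory like
  this (a minimal sketch; as noted above, such declarations only take
  effect at the level of the background theory): *}

declare [[unify_trace_bound = 0, unify_trace_types = true]]

text {* Afterwards, every unification attempt prints its trace, which
  can be rather verbose; restoring the default values switches the
  tracing off again. *}

declare [[unify_trace_bound = 50, unify_trace_types = false]]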

end