(*:maxLineLen=78:*)

theory Manual
imports Base "HOL-Eisbach.Eisbach_Tools"
begin

chapter ‹The method command›

text ‹
  The @{command_def method} command provides the ability to write proof
  methods by combining existing ones with their usual syntax. Specifically it
  allows compound proof methods to be named, and to extend the name space of
  basic methods accordingly. Method definitions may abstract over parameters:
  terms, facts, or other methods.

  ┉
  The syntax diagram below refers to some syntactic categories that are
  further defined in @{cite "isabelle-isar-ref"}.

  @{rail ‹
    @@{command method} name args @'=' method
    ;
    args: term_args? method_args? ⏎ fact_args? decl_args?
    ;
    term_args: @'for' @{syntax "fixes"}
    ;
    method_args: @'methods' (name+)
    ;
    fact_args: @'uses' (name+)
    ;
    decl_args: @'declares' (name+)
  ›}
›


section ‹Basic method definitions›

text ‹
  Consider the following proof that makes use of usual Isar method
  combinators.
›

    lemma "P ∧ Q ⟶ P"
      by ((rule impI, (erule conjE)?) | assumption)+

text ‹
  It is clear that this compound method will be applicable in more cases than
  this proof alone. With the @{command method} command we can define a proof
  method that makes the above functionality available generally.
›

    method prop_solver1 =
      ((rule impI, (erule conjE)?) | assumption)+

    lemma "P ∧ Q ∧ R ⟶ P"
      by prop_solver1

text ‹
  In this example, the facts ‹impI› and ‹conjE› are static. They are evaluated
  once when the method is defined and cannot be changed later. This makes the
  method stable in the sense of ∗‹static scoping›: naming another fact ‹impI›
  in a later context won't affect the behaviour of ‹prop_solver1›.
›
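
text ‹
  As a small illustration of this behaviour, the following sketch locally
  rebinds the name ‹impI› to an unrelated fact inside a proof. Since
  ‹prop_solver1› captured the original rule when it was defined, it is
  unaffected by the local change.
›

    notepad
    begin
      fix A :: bool
      note impI = conjI ― ‹shadow the name ‹impI› locally, for illustration only›
      have "A ⟶ A"
        by prop_solver1 ― ‹still applies the original introduction rule›
    end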


section ‹Term abstraction›

text ‹
  Methods can also abstract over terms using the @{keyword_def "for"} keyword,
  optionally providing type constraints. For instance, the following proof
  method ‹intro_ex› takes a term @{term y} of any type, which it uses to
  instantiate the @{term x}-variable of ‹exI› (existential introduction)
  before applying the result as a rule. The instantiation is performed here by
  Isar's @{attribute_ref "where"} attribute. If the current subgoal is to find
  a witness for the given predicate @{term Q}, then this has the effect of
  committing to @{term y}.
›

    method intro_ex for Q :: "'a ⇒ bool" and y :: 'a =
      (rule exI ["where" P = Q and x = y])


text ‹
  The term parameters @{term y} and @{term Q} can be used arbitrarily inside
  the method body, as part of attribute applications or arguments to other
  methods. The expression is type-checked as far as possible when the method
  is defined, however dynamic type errors can still occur when it is invoked
  (e.g.\ when terms are instantiated in a parameterized fact). Actual term
  arguments are supplied positionally, in the same order as in the method
  definition.
›

    lemma "P a ⟹ ∃x. P x"
      by (intro_ex P a)


section ‹Fact abstraction›

subsection ‹Named theorems›

text ‹
  A ∗‹named theorem› is a fact whose contents are produced dynamically within
  the current proof context. The Isar command @{command_ref "named_theorems"}
  provides simple access to this concept: it declares a dynamic fact with
  corresponding ∗‹attribute› for managing this particular data slot in the
  context.
›

    named_theorems intros

text ‹
  So far ‹intros› refers to the empty fact. Using the Isar command
  @{command_ref "declare"} we may apply declaration attributes to the context.
  Below we declare both ‹conjI› and ‹impI› as ‹intros›, adding them to the
  named theorem slot.
›

    declare conjI [intros] and impI [intros]

text ‹
  We can refer to named theorems as dynamic facts within a particular proof
  context, which are evaluated whenever the method is invoked. Instead of
  having facts hard-coded into the method, as in ‹prop_solver1›, we can
  instead refer to these named theorems.
›

    named_theorems elims
    declare conjE [elims]

    method prop_solver3 =
      ((rule intros, (erule elims)?) | assumption)+

    lemma "P ∧ Q ⟶ P"
      by prop_solver3

text ‹
  Often these named theorems need to be augmented on the spot, when a method
  is invoked. The @{keyword_def "declares"} keyword in the signature of
  @{command method} adds the common method syntax ‹method decl: facts› for
  each named theorem ‹decl›.
›

    method prop_solver4 declares intros elims =
      ((rule intros, (erule elims)?) | assumption)+

    lemma "P ∧ (P ⟶ Q) ⟶ Q ∧ P"
      by (prop_solver4 elims: impE intros: conjI)


subsection ‹Simple fact abstraction›

text ‹
  The @{keyword "declares"} keyword requires that a corresponding dynamic fact
  has been declared with @{command_ref named_theorems}. This is useful for
  managing collections of facts which are to be augmented with declarations,
  but is overkill if we simply want to pass a fact to a method.

  We may use the @{keyword_def "uses"} keyword in the method header to provide
  a simple fact parameter. In contrast to @{keyword "declares"}, these facts
  are always implicitly empty unless augmented when the method is invoked.
›

    method rule_twice uses my_rule =
      (rule my_rule, rule my_rule)

    lemma "P ⟹ Q ⟹ (P ∧ Q) ∧ Q"
      by (rule_twice my_rule: conjI)


section ‹Higher-order methods›

text ‹
  The ∗‹structured concatenation› combinator ``‹method1 ; method2›'' was
  introduced in Isabelle2015, motivated by development of Eisbach. It is
  similar to ``‹method1, method2›'', but ‹method2› is invoked on ∗‹all›
  subgoals that have newly emerged from ‹method1›. This is useful to handle
  cases where the number of subgoals produced by a method is determined
  dynamically at run-time.
›

    method conj_with uses rule =
      (intro conjI ; intro rule)

    lemma
      assumes A: "P"
      shows "P ∧ P ∧ P"
      by (conj_with rule: A)

text ‹
  Method definitions may take other methods as arguments, and thus implement
  method combinators with prefix syntax. For example, to more usefully exploit
  Isabelle's backtracking, the explicit requirement that a method solve all
  produced subgoals is frequently useful. This can easily be written as a
  ∗‹higher-order method› using ``‹;›''. The @{keyword "methods"} keyword
  denotes method parameters that are other proof methods to be invoked by the
  method being defined.
›

    method solve methods m = (m ; fail)

text ‹
  Given some method-argument ‹m›, ‹solve ‹m›› applies the method ‹m› and then
  fails whenever ‹m› produces any new unsolved subgoals --- i.e. when ‹m›
  fails to completely discharge the goal it was applied to.
›
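
text ‹
  As a brief additional sketch, ‹solve› succeeds below because the inner
  method leaves no subgoals; with just ‹rule conjI› as the inner method,
  ‹solve› would fail, since two subgoals would remain.
›

    lemma "A ⟹ A ∧ A"
      by (solve ‹rule conjI, assumption+›)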


section ‹Example›

text ‹
  With these simple features we are ready to write our first non-trivial proof
  method. Returning to the first-order logic example, the following method
  definition applies various rules with their canonical methods.
›

    named_theorems subst

    method prop_solver declares intros elims subst =
      (assumption |
        (rule intros) | erule elims |
        subst subst | subst (asm) subst |
        (erule notE ; solve ‹prop_solver›))+

text ‹
  The only non-trivial part above is the final alternative ‹(erule notE ;
  solve ‹prop_solver›)›. Here, in the case that all other alternatives fail,
  the method takes one of the assumptions @{term "¬ P"} of the current goal
  and eliminates it with the rule ‹notE›, so that the goal to be proved
  becomes @{term P}. The method then recursively invokes itself on the
  remaining goals. The job of the recursive call is to demonstrate that there
  is a contradiction in the original assumptions (i.e.\ that @{term P} can be
  derived from them). Note this recursive invocation is applied with the
  @{method solve} method combinator to ensure that a contradiction will indeed
  be shown. In the case where a contradiction cannot be found, backtracking
  will occur and a different assumption @{term "¬ Q"} will be chosen for
  elimination.

  Note that the recursive call to @{method prop_solver} does not have any
  parameters passed to it. Recall that fact parameters, e.g.\ ‹intros›,
  ‹elims›, and ‹subst›, are managed by declarations in the current proof
  context. They will therefore be passed to any recursive call to @{method
  prop_solver} and, more generally, any invocation of a method which declares
  these named theorems.

  ┉
  After declaring some standard rules to the context, the @{method
  prop_solver} becomes capable of solving non-trivial propositional
  tautologies.
›

    lemmas [intros] =
      conjI  ― ‹@{thm conjI}›
      impI  ― ‹@{thm impI}›
      disjCI  ― ‹@{thm disjCI}›
      iffI  ― ‹@{thm iffI}›
      notI  ― ‹@{thm notI}›

    lemmas [elims] =
      impCE  ― ‹@{thm impCE}›
      conjE  ― ‹@{thm conjE}›
      disjE  ― ‹@{thm disjE}›

    lemma "(A ∨ B) ∧ (A ⟶ C) ∧ (B ⟶ C) ⟶ C"
      by prop_solver


chapter ‹The match method \label{s:matching}›

text ‹
  So far we have seen methods defined as simple combinations of other methods.
  Some familiar programming language concepts have been introduced (i.e.\
  abstraction and recursion). The only control flow has been implicitly the
  result of backtracking. When designing more sophisticated proof methods this
  proves too restrictive and difficult to manage conceptually.

  To address this, we introduce the @{method_def match} method, which provides
  more direct access to the higher-order matching facility at the core of
  Isabelle. It is implemented as a separate proof method (in Isabelle/ML), and
  thus can be directly applied to proofs, however it is most useful when
  applied in the context of writing Eisbach method definitions.

  ┉
  The syntax diagram below refers to some syntactic categories that are
  further defined in @{cite "isabelle-isar-ref"}.

  @{rail ‹
    @@{method match} kind @'in' (pattern '⇒' @{syntax text} + '¦')
    ;
    kind:
      (@'conclusion' | @'premises' ('(' 'local' ')')? |
       '(' term ')' | @{syntax thms})
    ;
    pattern: fact_name? term args? ⏎ (@'for' fixes)?
    ;
    fact_name: @{syntax name} @{syntax attributes}? ':'
    ;
    args: '(' (('multi' | 'cut' nat?) + ',') ')'
  ›}

  Matching allows methods to introspect the goal state, and to implement more
  explicit control flow. In the basic case, a term or fact ‹ts› is given to
  match against as a ∗‹match target›, along with a collection of
  pattern-method pairs ‹(p, m)›: roughly speaking, when the pattern ‹p›
  matches any member of ‹ts›, the ∗‹inner› method ‹m› will be executed.
›

    lemma
      assumes X:
        "Q ⟶ P"
        "Q"
      shows P
        by (match X in I: "Q ⟶ P" and I': "Q" ⇒ ‹insert mp [OF I I']›)

text ‹
  In this example we have a structured Isar proof, with the named assumption
  ‹X› and a conclusion @{term "P"}. With the match method we can find the
  local facts @{term "Q ⟶ P"} and @{term "Q"}, binding them separately as
  ‹I› and ‹I'›. We then specialize the modus-ponens rule @{thm mp [of Q P]} to
  these facts to solve the goal.
›


section ‹Subgoal focus›

text ‹
  In the previous example we were able to match against an assumption out of
  the Isar proof state. In general, however, proof subgoals can be
  ∗‹unstructured›, with goal parameters and premises arising from rule
  application. To address this, @{method match} uses ∗‹subgoal focusing› to
  produce structured goals out of unstructured ones. In place of fact or term,
  we may give the keyword @{keyword_def "premises"} as the match target. This
  causes a subgoal focus on the first subgoal, lifting local goal parameters
  to fixed term variables and premises into hypothetical theorems. The match
  is performed against these theorems, naming them and binding them as
  appropriate. Similarly giving the keyword @{keyword_def "conclusion"}
  matches against the conclusion of the first subgoal.

  An unstructured version of the previous example can then be similarly solved
  through focusing.
›

    lemma "Q ⟶ P ⟹ Q ⟹ P"
      by (match premises in
                I: "Q ⟶ P" and I': "Q"  ‹insert mp [OF I I']›)

text ‹
  Match variables may be specified by giving a list of @{keyword_ref
  "for"}-fixes after the pattern description. This marks those terms as bound
  variables, which may be used in the method body.
›

    lemma "Q ⟶ P ⟹ Q ⟹ P"
      by (match premises in I: "Q ⟶ A" and I': "Q" for A 
            ‹match conclusion in A ⇒ ‹insert mp [OF I I']››)

text ‹
  In this example @{term A} is a match variable which is bound to @{term P}
  upon a successful match. The inner @{method match} then matches the
  now-bound @{term A} (bound to @{term P}) against the conclusion (also @{term
  P}), finally applying the specialized rule to solve the goal.

  Schematic terms like ‹?P› may also be used to specify match variables, but
  the result of the match is not bound, and thus cannot be used in the inner
  method body.

  ┉
  In the following example we extract the predicate of an existentially
  quantified conclusion in the current subgoal and search the current premises
  for a matching fact. If both matches are successful, we then instantiate the
  existential introduction rule with both the witness and predicate, solving
  with the matched premise.
›

    method solve_ex =
      (match conclusion in "∃x. Q x" for Q 
        ‹match premises in U: "Q y" for y ⇒
          ‹rule exI [where P = Q and x = y, OF U]››)

text ‹
  The first @{method match} matches the pattern @{term "∃x. Q x"} against the
  current conclusion, binding the term @{term "Q"} in the inner match. Next
  the pattern ‹Q y› is matched against all premises of the current subgoal. In
  this case @{term "Q"} is fixed and @{term "y"} may be instantiated. Once a
  match is found, the local fact ‹U› is bound to the matching premise and the
  variable @{term "y"} is bound to the matching witness. The existential
  introduction rule ‹exI:›~@{thm exI} is then instantiated with @{term "y"} as
  the witness and @{term "Q"} as the predicate, with its proof obligation
  solved by the local fact U (using the Isar attribute @{attribute OF}). The
  following example is a trivial use of this method.
›

    lemma "halts p ⟹ ∃x. halts x"
      by solve_ex


subsection ‹Operating within a focus›

text ‹
  Subgoal focusing provides a structured form of a subgoal, allowing for more
  expressive introspection of the goal state. This requires some consideration
  in order to be used effectively. When the keyword @{keyword "premises"} is
  given as the match target, the premises of the subgoal are lifted into
  hypothetical theorems, which can be found and named via match patterns.
  Additionally these premises are stripped from the subgoal, leaving only the
  conclusion. This renders them inaccessible to standard proof methods which
  operate on the premises, such as @{method frule} or @{method erule}. Naive
  usage of these methods within a match will most likely not function as the
  method author intended.
›

    method my_allE_bad for y :: 'a =
      (match premises in I: "∀x :: 'a. ?Q x" 
        ‹erule allE [where x = y]›)

text ‹
  Here we take a single parameter @{term y} and specialize the universal
  elimination rule (@{thm allE}) to it, then attempt to apply this specialized
  rule with @{method erule}. The method @{method erule} will attempt to unify
  with a universal quantifier in the premises that matches the type of @{term
  y}. Since @{keyword "premises"} causes a focus, however, there are no
  subgoal premises to be found and thus @{method my_allE_bad} will always
  fail. If focusing instead left the premises in place, using methods like
  @{method erule} would lead to unintended behaviour, specifically during
  backtracking. In our example, @{method erule} could choose an alternate
  premise while backtracking, while leaving ‹I› bound to the original match.
  In the case of more complex inner methods, where either ‹I› or bound terms
  are used, this would almost certainly not be the intended behaviour.

  An alternative implementation would be to specialize the elimination rule to
  the bound term and apply it directly.
›

    method my_allE_almost for y :: 'a =
      (match premises in I: "∀x :: 'a. ?Q x" 
        ‹rule allE [where x = y, OF I]›)

    lemma "∀x. P x ⟹ P y"
      by (my_allE_almost y)

text ‹
  This method will insert a specialized duplicate of a universally quantified
  premise. Although this will successfully apply in the presence of such a
  premise, it is not likely the intended behaviour. Repeated application of
  this method will produce an infinite stream of duplicate specialized
  premises, due to the original premise never being removed. To address this,
  matched premises may be declared with the @{attribute thin} attribute. This
  will hide the premise from subsequent inner matches, and remove it from the
  list of premises when the inner method has finished and the subgoal is
  unfocused. It can be considered analogous to the existing ‹thin_tac›.

  To complete our example, the correct implementation of the method will
  @{attribute thin} the premise from the match and then apply it to the
  specialized elimination rule.›

    method my_allE for y :: 'a =
      (match premises in I [thin]: "∀x :: 'a. ?Q x" ⇒
         ‹rule allE [where x = y, OF I]›)

    lemma "∀x. P x ⟹ ∀x. Q x ⟹ P y ∧ Q y"
      by (my_allE y)+ (rule conjI)


subsubsection ‹Inner focusing›

text ‹
  Premises are ∗‹accumulated› for the purposes of subgoal focusing. In
  contrast to using standard methods like @{method frule} within a focused
  match, a nested @{method match} will have access to all the premises of the
  outer focus.
›

    lemma "A ⟹ B ⟹ A ∧ B"
      by (match premises in H: A ⇒ ‹intro conjI, rule H,
            match premises in H': B ⇒ ‹rule H'››)

text ‹
  In this example, the inner @{method match} can find the focused premise
  @{term B}. In contrast, the @{method assumption} method would fail here due
  to @{term B} not being logically accessible.
›

    lemma "A ⟹ A ∧ (B ⟶ B)"
      by (match premises in H: A ⇒ ‹intro conjI, rule H, rule impI,
            match premises (local) in A ⇒ ‹fail›
                                 ¦ H': B ⇒ ‹rule H'››)

text ‹
  In this example, the only premise that exists in the first focus is @{term
  "A"}. Prior to the inner match, the rule ‹impI› changes the goal @{term "B ⟶
  B"} into @{term "B ⟹ B"}. A standard premise match would also include @{term
  A} as an original premise of the outer match. The ‹local› argument limits
  the match to newly focused premises.
›


section ‹Attributes›

text ‹
  Attributes may throw errors when applied to a given fact. For example, rule
  instantiation will fail if there is a type mismatch or if a given variable
  doesn't exist. Within a match or a method definition, it isn't generally
  possible to guarantee that applied attributes won't fail. For example, in
  the following method there is no guarantee that the two provided facts will
  necessarily compose.
›

    method my_compose uses rule1 rule2 =
      (rule rule1 [OF rule2])

text ‹
  Some attributes (like @{attribute OF}) have been made partially
  Eisbach-aware. This means that they are able to form a closure despite not
  necessarily always being applicable. In the case of @{attribute OF}, it is
  up to the proof author to guard attribute application with an appropriate
  @{method match}, but there are still no static guarantees.

  In contrast to @{attribute OF}, the @{attribute "where"} and @{attribute of}
  attributes attempt to provide static guarantees that they will apply
  whenever possible.

  Within a match pattern for a fact, each outermost quantifier specifies the
  requirement that a matching fact must have a schematic variable at that
  point. This gives a corresponding name to this ``slot'' for the purposes of
  forming a static closure, allowing the @{attribute "where"} attribute to
  perform an instantiation at run-time.
›

    lemma
      assumes A: "Q ⟹ False"
      shows "¬ Q"
      by (match intros in X: "⋀P. (P ⟹ False) ⟹ ¬ P" ⇒
            ‹rule X [where P = Q, OF A]›)

text ‹
  Subgoal focusing converts the outermost quantifiers of premises into
  schematics when lifting them to hypothetical facts. This allows us to
  instantiate them with @{attribute "where"} when using an appropriate match
  pattern.
›

    lemma "(⋀x :: 'a. A x ⟹ B x) ⟹ A y ⟹ B y"
      by (match premises in I: "⋀x :: 'a. ?P x ⟹ ?Q x" 
            ‹rule I [where x = y]›)

text ‹
  The @{attribute of} attribute behaves similarly. It is worth noting,
  however, that the positional instantiation of @{attribute of} occurs against
  the position of the variables as they are declared ∗‹in the match pattern›.
›

    lemma
      fixes A B and x :: 'a and y :: 'b
      assumes asm: "(⋀x y. A y x ⟹ B x y )"
      shows "A y x ⟹ B x y"
      by (match asm in I: "⋀(x :: 'a) (y :: 'b). ?P x y ⟹ ?Q x y" 
            ‹rule I [of x y]›)

text ‹
  In this example, the order of schematics in ‹asm› is actually ‹?y ?x›, but
  we instantiate our matched rule in the opposite order. This is because the
  effective rule @{term I} was bound from the match, which declared the @{typ
  'a} slot first and the @{typ 'b} slot second.

  To get the dynamic behaviour of @{attribute of} we can choose to invoke it
  ∗‹unchecked›. This avoids trying to do any type inference for the provided
  parameters, instead storing them as their most general type and doing type
  matching at run-time. This, like @{attribute OF}, will throw errors if the
  expected slots don't exist or there is a type mismatch.
›

    lemma
      fixes A B and x :: 'a and y :: 'b
      assumes asm: "⋀x y. A y x ⟹ B x y"
      shows "A y x ⟹ B x y"
      by (match asm in I: "PROP ?P"  ‹rule I [of (unchecked) y x]›)

text ‹
  Attributes may be applied to matched facts directly as they are matched. Any
  declarations will therefore be applied in the context of the inner method,
  as well as any transformations to the rule.
›

    lemma "(⋀x :: 'a. A x ⟹ B x) ⟹ A y ⟶ B y"
      by (match premises in I [of y, intros]: "⋀x :: 'a. ?P x ⟹ ?Q x" ⇒
            ‹prop_solver›)

text ‹
  In this example, the pattern ‹⋀x :: 'a. ?P x ⟹ ?Q x› matches against the
  only premise, giving an appropriately typed slot for @{term y}. After the
  match, the resulting rule is instantiated to @{term y} and then declared as
  an @{attribute intros} rule. This is then picked up by @{method prop_solver}
  to solve the goal.
›


section ‹Multi-match \label{sec:multi}›

text ‹
  In all previous examples, @{method match} was only ever searching for a
  single rule or premise. Each local fact would therefore always have a length
  of exactly one. We may, however, wish to find ∗‹all› matching results. To
  achieve this, we can simply mark a given pattern with the ‹(multi)›
  argument.
›

    lemma
      assumes asms: "A ⟹ B"  "A ⟹ D"
      shows "(A ⟶ B) ∧ (A ⟶ D)"
      apply (match asms in I [intros]: "?P ⟹ ?Q"   ‹solves ‹prop_solver››)?
      apply (match asms in I [intros]: "?P ⟹ ?Q" (multi)  ‹prop_solver›)
      done

text ‹
  In the first @{method match}, without the ‹(multi)› argument, @{term I} is
  only ever bound to one of the members of ‹asms›. This backtracks over
  both possibilities (see next section); however, neither assumption in
  isolation is sufficient to solve the goal. The use of the @{method solves}
  combinator ensures that @{method prop_solver} has no effect on the goal when
  it doesn't solve it, and so the first match leaves the goal unchanged. In
  the second @{method match}, ‹I› is bound to all of ‹asms›, declaring both
  results as ‹intros›. With these rules @{method prop_solver} is capable of
  solving the goal.

  Using for-fixed variables in patterns imposes additional constraints on the
  results. In all previous examples, the choice of using ‹?P› or a for-fixed
  @{term P} only depended on whether or not @{term P} was mentioned in another
  pattern or the inner method. When using a multi-match, however, all
  for-fixed terms must agree in the results.
›

    lemma
      assumes asms: "A ⟹ B"  "A ⟹ D"  "D ⟹ B"
      shows "(A ⟶ B) ∧ (A ⟶ D)"
      apply (match asms in I [intros]: "?P ⟹ Q" (multi) for Q 
              ‹solves ‹prop_solver››)?
      apply (match asms in I [intros]: "P ⟹ ?Q" (multi) for P 
              ‹prop_solver›)
      done

text ‹
  Here we have two seemingly-equivalent applications of @{method match},
  however only the second one is capable of solving the goal. The first
  @{method match} selects the first and third members of ‹asms› (those that
  agree on their conclusion), which is not sufficient. The second @{method
  match} selects the first and second members of ‹asms› (those that agree on
  their assumption), which is enough for @{method prop_solver} to solve the
  goal.
›


section ‹Dummy patterns›

text ‹
  Dummy patterns may be given as placeholders for unique schematics in
  patterns. They implicitly receive all currently bound variables as
  arguments, and are coerced into the @{typ prop} type whenever possible. For
  example, the trivial dummy pattern ‹_› will match any proposition. In
  contrast, by default the pattern ‹?P› is considered to have type @{typ
  bool}. It will not bind anything with meta-logical connectives (e.g. ‹_ ⟹ _›
  or ‹_ &&& _›).
›

    lemma
      assumes asms: "A &&& B ⟹ D"
      shows "(A ∧ B ⟶ D)"
      by (match asms in I: _ ⇒ ‹prop_solver intros: I conjunctionI›)


section ‹Backtracking›

text ‹
  Patterns are considered top-down, executing the inner method ‹m› of the
  first pattern which is satisfied by the current match target. By default,
  matching performs extensive backtracking by attempting all valid variable
  and fact bindings according to the given pattern. In particular, all
  unifiers for a given pattern will be explored, as well as each matching
  fact. The inner method ‹m› will be re-executed for each different
  variable/fact binding during backtracking. A successful match is considered
  a cut-point for backtracking. Specifically, once a match is made no other
  pattern-method pairs will be considered.

  The method ‹foo› below fails for all goals that are conjunctions. Any such
  goal will match the first pattern, causing the second pattern (that would
  otherwise match all goals) to never be considered.
›

    method foo =
      (match conclusion in "?P ∧ ?Q"  ‹fail› ¦ "?R"  ‹prop_solver›)

text ‹
  The failure of an inner method that is executed after a successful match
  will cause the entire match to fail. This distinction is important due to
  the pervasive use of backtracking. When a method is used in a combinator
  chain, its failure becomes significant because it signals previously applied
  methods to move to the next result. Therefore, it is necessary for @{method
  match} to not mask such failure. One can always rewrite a match using the
  combinators ``‹?›'' and ``‹|›'' to try subsequent patterns in the case of an
  inner-method failure. The following proof method, for example, always
  invokes @{method prop_solver} for all goals because its first alternative
  either never matches or (if it does match) always fails.
›

    method foo1 =
      (match conclusion in "?P ∧ ?Q"  ‹fail›) |
      (match conclusion in "?R"  ‹prop_solver›)


subsection ‹Cut›

text ‹
  Backtracking may be controlled more precisely by marking individual patterns
  as ‹cut›. This causes backtracking to not progress beyond this pattern: once
  a match is found no others will be considered.
›

    method foo2 =
      (match premises in I: "P ∧ Q" (cut) and I': "P ⟶ ?U" for P Q 
        ‹rule mp [OF I' I [THEN conjunct1]]›)

text ‹
  In this example, once a conjunction is found (@{term "P ∧ Q"}), all possible
  implications of @{term "P"} in the premises are considered, evaluating the
  inner @{method rule} with each consequent. No other conjunctions will be
  considered, with method failure occurring once all implications of the form
  ‹P ⟶ ?U› have been explored. Here the left-right processing of individual
  patterns is important, as all patterns after the cut will maintain their
  usual backtracking behaviour.
›

    lemma "A ∧ B ⟹ A ⟶ D ⟹ A ⟶ C ⟹ C"
      by foo2

    lemma "C ∧ D ⟹ A ∧ B ⟹ A ⟶ C  ⟹ C"
      by (foo2 | prop_solver)

text ‹
  In this example, the first lemma is solved by ‹foo2›, by first picking
  @{term "A ⟶ D"} for ‹I'›, then backtracking and ultimately succeeding after
  picking @{term "A ⟶ C"}. In the second lemma, however, @{term "C ∧ D"} is
  matched first, the second pattern in the match cannot be found and so the
  method fails, falling through to @{method prop_solver}.

  More precise control is also possible by giving a positive number ‹n› as an
  argument to ‹cut›. This will limit the number of backtracking results of
  that match to be at most ‹n›. The match argument ‹(cut 1)› is the same as
  simply ‹(cut)›.
›
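
text ‹
  As an illustrative sketch (with a new method name ‹foo3›), the following
  variant of ‹foo2› uses ‹(cut 2)›. Unlike plain ‹(cut)›, it may still
  backtrack to a second conjunction among the premises, so the goal below is
  provable even though the first conjunction found is of no use.
›

    method foo3 =
      (match premises in I: "P ∧ Q" (cut 2) and I': "P ⟶ ?U" for P Q ⇒
        ‹rule mp [OF I' I [THEN conjunct1]]›)

    lemma "C ∧ D ⟹ A ∧ B ⟹ A ⟶ E ⟹ E"
      by foo3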


subsection ‹Multi-match revisited›

text ‹
  A multi-match will produce a sequence of potential bindings for for-fixed
  variables, where each binding environment is the result of matching against
  at least one element from the match target. For each environment, the match
  result will be all elements of the match target which agree with the pattern
  under that environment. This can result in unexpected behaviour when giving
  very general patterns.
›

    lemma
      assumes asms: "⋀x. A x ∧ B x"  "⋀y. A y ∧ C y"  "⋀z. B z ∧ C z"
      shows "A x ∧ C x"
      by (match asms in I: "⋀x. P x ∧ ?Q x" (multi) for P 
         ‹match (P) in "A" ⇒ ‹fail›
                       ¦ _ ⇒ ‹match I in "⋀x. A x ∧ B x" ⇒ ‹fail›
                                                      ¦ _ ⇒ ‹rule I›››)

text ‹
  Intuitively it seems like this proof should fail to check. The first match
  result, which binds @{term I} to the first two members of ‹asms›, fails the
  second inner match due to binding @{term P} to @{term A}. Backtracking then
  attempts to bind @{term I} to the third member of ‹asms›. This passes all
  inner matches, but fails when @{method rule} cannot successfully apply this
  to the current goal. After this, a valid match that is produced by the
  unifier is one which binds @{term P} to simply ‹λa. A ?x›. The first inner
  match succeeds because ‹λa. A ?x› does not match @{term A}. The next inner
  match succeeds because @{term I} has only been bound to the first member of
  ‹asms›. This is due to @{method match} considering ‹λa. A ?x› and ‹λa. A ?y›
  as distinct terms.

  The simplest way to address this is to explicitly disallow term bindings
  which we would consider invalid.
›

    method abs_used for P =
      (match (P) in "λa. ?P"  ‹fail› ¦ _  ‹-›)

text ‹
  This method has no effect on the goal state, but instead serves as a filter
  on the environment produced from match.
›


section ‹Uncurrying›

text ‹
  The @{method match} method is not aware of the logical content of match
  targets. Each pattern is simply matched against the shallow structure of a
  fact or term. Most facts are in ∗‹normal form›, which curries premises via
  meta-implication ‹_ ⟹ _›.
›

    lemma
      assumes asms: "D ⟹ B ⟹ C"  "D ⟹ A"
      shows "D ⟹ B ⟹ C ∧ A"
      by (match asms in H: "D ⟹ _" (multi)  ‹prop_solver elims: H›)

text ‹
  For the first member of ‹asms› the dummy pattern successfully matches
  against @{term "B ⟹ C"} and so the proof is successful.
›

    lemma
      assumes asms: "A ⟹ B ⟹ C"  "D ⟹ C"
      shows "D ∨ (A ∧ B) ⟹ C"
      apply (match asms in H: "_ ⟹ C" (multi)  ‹prop_solver elims: H›)(*<*)?
      apply (prop_solver elims: asms)
      done(*>*)

text ‹
  This proof will fail to solve the goal. Our match pattern will only match
  rules which have a single premise, and conclusion @{term C}, so the first
  member of ‹asms› is not bound and thus the proof fails. Matching a pattern
  of the form @{term "P ⟹ Q"} against this fact will bind @{term "P"} to
  @{term "A"} and @{term Q} to @{term "B ⟹ C"}. Our pattern, with a concrete
  @{term "C"} in the conclusion, will fail to match this fact.

  To express our desired match, we may ∗‹uncurry› our rules before matching
  against them. This forms a meta-conjunction of all premises in a fact, so
  that only one implication remains. For example the uncurried version of
  @{term "A ⟹ B ⟹ C"} is @{term "A &&& B ⟹ C"}. This will now match our
  desired pattern ‹_ ⟹ C›, and can be ∗‹curried› after the match to put it
  back into normal form.
›

    lemma
      assumes asms: "A ⟹ B ⟹ C"  "D ⟹ C"
      shows "D ∨ (A ∧ B) ⟹ C"
      by (match asms [uncurry] in H [curry]: "_ ⟹ C" (multi) ⇒
          ‹prop_solver elims: H›)


section ‹Reverse matching›

text ‹
  The @{method match} method only attempts to perform matching of the pattern
  against the match target. Specifically this means that it will not
  instantiate schematic terms in the match target.
›

    lemma
      assumes asms: "⋀x :: 'a. A x"
      shows "A y"
      apply (match asms in H: "A y"  ‹rule H›)?
      apply (match asms in H: P for P 
          ‹match ("A y") in P ⇒ ‹rule H››)
      done

text ‹
  In the first @{method match} we attempt to find a member of ‹asms› which
  matches our goal precisely. This fails due to no such member existing. The
  second match reverses the role of the fact in the match, by first giving a
  general pattern @{term P}. This bound pattern is then matched against @{term
  "A y"}. In this case, @{term P} is bound to ‹A ?x› and so it successfully
  matches.
›


section ‹Type matching›

text ‹
  The rule instantiation attributes @{attribute "where"} and @{attribute "of"}
  attempt to guarantee type-correctness wherever possible. This can require
  additional invocations of @{method match} in order to statically ensure that
  instantiation will succeed.
›

    lemma
      assumes asms: "⋀x :: 'a. A x"
      shows "A y"
      by (match asms in H: "⋀z :: 'b. P z" for P 
          ‹match (y) in "y :: 'b" for y ⇒ ‹rule H [where z = y]››)

text ‹
  In this example the type ‹'b› is matched to ‹'a›, however statically they
  are formally distinct types. The first match binds ‹'b› while the inner
  match serves to coerce @{term y} into having the type ‹'b›. This allows the
  rule instantiation to successfully apply.
›


chapter ‹Method development›

section ‹Tracing methods›

text ‹
  Method tracing is supported by auxiliary print methods provided by
  @{theory "HOL-Eisbach.Eisbach_Tools"}. These include @{method print_fact},
  @{method print_term} and
  @{method print_type}. Whenever a print method is evaluated it leaves the
  goal unchanged and writes its argument as tracing output.

  Print methods can be combined with the @{method fail} method to investigate
  the backtracking behaviour of a method.
›

    lemma
      assumes asms: A B C D
      shows D
      apply (match asms in H: _ ⇒ ‹print_fact H, fail›)(*<*)?
      apply (simp add: asms)
      done(*>*)

text ‹
  This proof will fail, but the tracing output will show the order in which
  the assumptions are attempted.
›
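
text ‹
  The other print methods can be used in the same manner. As a small sketch,
  the following proof traces the term bound by a match before solving the
  goal.
›

    lemma "P y ⟹ P y"
      by (match conclusion in "P z" for z ⇒ ‹print_term z, assumption›)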


section ‹Integrating with Isabelle/ML›

subsubsection ‹Attributes›

text ‹
  A custom rule attribute is a simple way to extend the functionality of
  Eisbach methods. The dummy rule attribute notation (‹[[ _ ]]›) invokes the
  given attribute against a dummy fact and evaluates to the result of that
  attribute. When used as a match target, this can serve as an effective
  auxiliary function.
›

    attribute_setup get_split_rule =
      ‹Args.term >> (fn t =>
        Thm.rule_attribute [] (fn context => fn _ =>
          (case get_split_rule (Context.proof_of context) t of
            SOME thm => thm
          | NONE => Drule.dummy_thm)))›

text ‹
  In this example, the new attribute @{attribute get_split_rule} lifts the ML
  function of the same name into an attribute. When applied to a case
  distinction over a datatype, it retrieves its corresponding split rule.

  We can then integrate this into a method that applies the split rule, first
  matching to ensure that fetching the rule was successful.
›
(*<*)declare TrueI [intros](*>*)
    method splits =
      (match conclusion in "?P f" for f 
        ‹match [[get_split_rule f]] in U: "(_ :: bool) = _" ⇒
          ‹rule U [THEN iffD2]››)

    lemma "L ≠ [] ⟹ case L of [] ⇒ False | _ ⇒ True"
      apply splits
      apply (prop_solver intros: allI)
      done

text ‹
  Here the new @{method splits} method transforms the goal to use only logical
  connectives: @{term "L = [] ⟶ False ∧ (∀x y. L = x # y ⟶ True)"}. This goal
  is then in a form solvable by @{method prop_solver} when given the universal
  quantifier introduction rule ‹allI›.
›

end