isabelle update_cartouches -t;
author wenzelm
date Tue, 20 Oct 2015 23:53:40 +0200
changeset 61493 0debd22f0c0e
parent 61492 3480725c71d2
child 61494 63b18f758874
src/Doc/Eisbach/Manual.thy
src/Doc/Implementation/Eq.thy
src/Doc/Implementation/Integration.thy
src/Doc/Implementation/Isar.thy
src/Doc/Implementation/Local_Theory.thy
src/Doc/Implementation/Logic.thy
src/Doc/Implementation/ML.thy
src/Doc/Implementation/Prelim.thy
src/Doc/Implementation/Proof.thy
src/Doc/Implementation/Syntax.thy
src/Doc/Implementation/Tactic.thy
src/Doc/Isar_Ref/Document_Preparation.thy
src/Doc/Isar_Ref/First_Order_Logic.thy
src/Doc/Isar_Ref/Framework.thy
src/Doc/Isar_Ref/Generic.thy
src/Doc/Isar_Ref/HOL_Specific.thy
src/Doc/Isar_Ref/Inner_Syntax.thy
src/Doc/Isar_Ref/Outer_Syntax.thy
src/Doc/Isar_Ref/Preface.thy
src/Doc/Isar_Ref/Proof.thy
src/Doc/Isar_Ref/Proof_Script.thy
src/Doc/Isar_Ref/Quick_Reference.thy
src/Doc/Isar_Ref/Spec.thy
src/Doc/Isar_Ref/Symbols.thy
src/Doc/Isar_Ref/Synopsis.thy
src/Doc/JEdit/JEdit.thy
src/Doc/System/Basics.thy
src/Doc/System/Misc.thy
src/Doc/System/Presentation.thy
src/Doc/System/Sessions.thy
--- a/src/Doc/Eisbach/Manual.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Eisbach/Manual.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -56,11 +56,11 @@
       by prop_solver\<^sub>1
 
 text \<open>
-  In this example, the facts @{text impI} and @{text conjE} are static. They
+  In this example, the facts \<open>impI\<close> and \<open>conjE\<close> are static. They
   are evaluated once when the method is defined and cannot be changed later.
   This makes the method stable in the sense of \<^emph>\<open>static scoping\<close>: naming
-  another fact @{text impI} in a later context won't affect the behaviour of
-  @{text "prop_solver\<^sub>1"}.
+  another fact \<open>impI\<close> in a later context won't affect the behaviour of
+  \<open>prop_solver\<^sub>1\<close>.
 \<close>
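
For illustration, a self-contained sketch in the same spirit (the method name is hypothetical; the manual's own prop_solver\<^sub>1 definition falls outside this hunk). The facts named in the body are resolved once, when the method is defined:

    method static_solver = ((rule impI | erule conjE | assumption)+)

    lemma "P \<and> Q \<longrightarrow> P"
      by static_solver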
 
 
@@ -69,8 +69,8 @@
 text \<open>
   Methods can also abstract over terms using the @{keyword_def "for"} keyword,
   optionally providing type constraints. For instance, the following proof
-  method @{text intro_ex} takes a term @{term y} of any type, which it uses to
-  instantiate the @{term x}-variable of @{text exI} (existential introduction)
+  method \<open>intro_ex\<close> takes a term @{term y} of any type, which it uses to
+  instantiate the @{term x}-variable of \<open>exI\<close> (existential introduction)
   before applying the result as a rule. The instantiation is performed here by
   Isar's @{attribute_ref "where"} attribute. If the current subgoal is to find
   a witness for the given predicate @{term Q}, then this has the effect of
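
The definition being described is elided here; a rough sketch of it, assuming plain HOL (note that the "where" keyword must be quoted inside an Eisbach method body):

    method intro_ex for y :: 'a =
      (rule exI ["where" x = y])

    lemma "\<exists>x. x = (2 :: nat)"
      by (intro_ex "2 :: nat") simp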
@@ -100,7 +100,7 @@
 subsection \<open>Named theorems\<close>
 
 text \<open>
-  A @{text "named theorem"} is a fact whose contents are produced dynamically
+  A \<open>named theorem\<close> is a fact whose contents are produced dynamically
   within the current proof context. The Isar command @{command_ref
   "named_theorems"} provides simple access to this concept: it declares a
   dynamic fact with corresponding \<^emph>\<open>attribute\<close> for managing
@@ -110,10 +110,9 @@
     named_theorems intros
 
 text \<open>
-  So far @{text "intros"} refers to the empty fact. Using the Isar command
+  So far \<open>intros\<close> refers to the empty fact. Using the Isar command
   @{command_ref "declare"} we may apply declaration attributes to the context.
-  Below we declare both @{text "conjI"} and @{text "impI"} as @{text
-  "intros"}, adding them to the named theorem slot.
+  Below we declare both \<open>conjI\<close> and \<open>impI\<close> as \<open>intros\<close>, adding them to the named theorem slot.
 \<close>
 
     declare conjI [intros] and impI [intros]
@@ -121,7 +120,7 @@
 text \<open>
   We can refer to named theorems as dynamic facts within a particular proof
   context, which are evaluated whenever the method is invoked. Instead of
-  having facts hard-coded into the method, as in @{text prop_solver\<^sub>1}, we can
+  having facts hard-coded into the method, as in \<open>prop_solver\<^sub>1\<close>, we can
   instead refer to these named theorems.
 \<close>
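
A sketch of such a dynamic method under a hypothetical name (the manual's own prop_solver\<^sub>2 is elided here); since \<open>intros\<close> is evaluated at each invocation, the declarations above are picked up:

    method dyn_solver = ((rule intros | assumption)+)

    lemma "P \<longrightarrow> P"
      by dyn_solver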
 
@@ -137,8 +136,8 @@
 text \<open>
   Often these named theorems need to be augmented on the spot, when a method
   is invoked. The @{keyword_def "declares"} keyword in the signature of
-  @{command method} adds the common method syntax @{text "method decl: facts"}
-  for each named theorem @{text decl}.
+  @{command method} adds the common method syntax \<open>method decl: facts\<close>
+  for each named theorem \<open>decl\<close>.
 \<close>
 
     method prop_solver\<^sub>4 declares intros elims =
@@ -171,11 +170,10 @@
 section \<open>Higher-order methods\<close>
 
 text \<open>
-  The \<^emph>\<open>structured concatenation\<close> combinator ``@{text "method\<^sub>1 ;
-  method\<^sub>2"}'' was introduced in Isabelle2015, motivated by development of
-  Eisbach. It is similar to ``@{text "method\<^sub>1, method\<^sub>2"}'', but @{text
-  method\<^sub>2} is invoked on on \<^emph>\<open>all\<close> subgoals that have newly emerged from
-  @{text method\<^sub>1}. This is useful to handle cases where the number of
+  The \<^emph>\<open>structured concatenation\<close> combinator ``\<open>method\<^sub>1 ;
+  method\<^sub>2\<close>'' was introduced in Isabelle2015, motivated by development of
+  Eisbach. It is similar to ``\<open>method\<^sub>1, method\<^sub>2\<close>'', but \<open>method\<^sub>2\<close> is invoked on \<^emph>\<open>all\<close> subgoals that have newly emerged from
+  \<open>method\<^sub>1\<close>. This is useful to handle cases where the number of
   subgoals produced by a method is determined dynamically at run-time.
 \<close>
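
A minimal example of ``\<open>;\<close>'', assuming plain HOL: each step is applied to all subgoals newly produced by the previous one, so the final \<open>assumption\<close> closes both conjuncts at once.

    lemma "A \<and> B \<longrightarrow> B \<and> A"
      by (rule impI ; erule conjE ; rule conjI ; assumption)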
 
@@ -192,7 +190,7 @@
   method combinators with prefix syntax. For example, to more usefully exploit
   Isabelle's backtracking, the explicit requirement that a method solve all
   produced subgoals is frequently useful. This can easily be written as a
-  \<^emph>\<open>higher-order method\<close> using ``@{text ";"}''. The @{keyword "methods"}
+  \<^emph>\<open>higher-order method\<close> using ``\<open>;\<close>''. The @{keyword "methods"}
   keyword denotes method parameters that are other proof methods to be invoked
   by the method being defined.
 \<close>
@@ -200,9 +198,9 @@
     method solve methods m = (m ; fail)
 
 text \<open>
-  Given some method-argument @{text m}, @{text "solve \<open>m\<close>"} applies the
-  method @{text m} and then fails whenever @{text m} produces any new unsolved
-  subgoals --- i.e. when @{text m} fails to completely discharge the goal it
+  Given some method-argument \<open>m\<close>, \<open>solve \<open>m\<close>\<close> applies the
+  method \<open>m\<close> and then fails whenever \<open>m\<close> produces any new unsolved
+  subgoals --- i.e. when \<open>m\<close> fails to completely discharge the goal it
   was applied to.
 \<close>
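
A usage sketch: on a goal that \<open>assumption\<close> discharges completely, \<open>solve\<close> succeeds, because no new subgoals emerge for \<open>fail\<close> to act on.

    lemma "A \<Longrightarrow> A"
      by (solve \<open>assumption\<close>)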
 
@@ -224,10 +222,10 @@
         (erule notE ; solve \<open>prop_solver\<close>))+
 
 text \<open>
-  The only non-trivial part above is the final alternative @{text "(erule notE
-  ; solve \<open>prop_solver\<close>)"}. Here, in the case that all other alternatives
+  The only non-trivial part above is the final alternative \<open>(erule notE
+  ; solve \<open>prop_solver\<close>)\<close>. Here, in the case that all other alternatives
   fail, the method takes one of the assumptions @{term "\<not> P"} of the current
-  goal and eliminates it with the rule @{text notE}, causing the goal to be
+  goal and eliminates it with the rule \<open>notE\<close>, causing the goal to be
   proved to become @{term P}. The method then recursively invokes itself on
   the remaining goals. The job of the recursive call is to demonstrate that
   there is a contradiction in the original assumptions (i.e.\ that @{term P}
@@ -238,8 +236,7 @@
   chosen for elimination.
 
   Note that the recursive call to @{method prop_solver} does not have any
-  parameters passed to it. Recall that fact parameters, e.g.\ @{text
-  "intros"}, @{text "elims"}, and @{text "subst"}, are managed by declarations
+  parameters passed to it. Recall that fact parameters, e.g.\ \<open>intros\<close>, \<open>elims\<close>, and \<open>subst\<close>, are managed by declarations
   in the current proof context. They will therefore be passed to any recursive
   call to @{method prop_solver} and, more generally, any invocation of a
   method which declares these named theorems.
@@ -299,11 +296,10 @@
   \<close>}
 
   Matching allows methods to introspect the goal state, and to implement more
-  explicit control flow. In the basic case, a term or fact @{text ts} is given
+  explicit control flow. In the basic case, a term or fact \<open>ts\<close> is given
   to match against as a \<^emph>\<open>match target\<close>, along with a collection of
-  pattern-method pairs @{text "(p, m)"}: roughly speaking, when the pattern
-  @{text p} matches any member of @{text ts}, the \<^emph>\<open>inner\<close> method @{text
-  m} will be executed.
+  pattern-method pairs \<open>(p, m)\<close>: roughly speaking, when the pattern
+  \<open>p\<close> matches any member of \<open>ts\<close>, the \<^emph>\<open>inner\<close> method \<open>m\<close> will be executed.
 \<close>
 
     lemma
@@ -315,9 +311,9 @@
 
 text \<open>
   In this example we have a structured Isar proof, with the named
-  assumption @{text "X"} and a conclusion @{term "P"}. With the match method
+  assumption \<open>X\<close> and a conclusion @{term "P"}. With the match method
   we can find the local facts @{term "Q \<longrightarrow> P"} and @{term "Q"}, binding them
-  separately as @{text "I"} and @{text "I'"}. We then specialize the
+  separately as \<open>I\<close> and \<open>I'\<close>. We then specialize the
   modus-ponens rule @{thm mp [of Q P]} to these facts to solve the goal.
 \<close>
 
@@ -362,7 +358,7 @@
   now-bound @{term A} (bound to @{term P}) against the conclusion (also @{term
   P}), finally applying the specialized rule to solve the goal.
 
-  Schematic terms like @{text "?P"} may also be used to specify match
+  Schematic terms like \<open>?P\<close> may also be used to specify match
   variables, but the result of the match is not bound, and thus cannot be used
   in the inner method body.
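
A minimal sketch: the schematic \<open>?H\<close> below matches the conclusion, but the inner method receives no binding for it and must stand on its own.

    lemma "A \<Longrightarrow> A"
      by (match conclusion in "?H" \<Rightarrow> \<open>assumption\<close>)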
 
@@ -382,11 +378,11 @@
 text \<open>
   The first @{method match} matches the pattern @{term "\<exists>x. Q x"} against the
   current conclusion, binding the term @{term "Q"} in the inner match. Next
-  the pattern @{text "Q y"} is matched against all premises of the current
+  the pattern \<open>Q y\<close> is matched against all premises of the current
   subgoal. In this case @{term "Q"} is fixed and @{term "y"} may be
-  instantiated. Once a match is found, the local fact @{text U} is bound to
+  instantiated. Once a match is found, the local fact \<open>U\<close> is bound to
   the matching premise and the variable @{term "y"} is bound to the matching
-  witness. The existential introduction rule @{text "exI:"}~@{thm exI} is then
+  witness. The existential introduction rule \<open>exI:\<close>~@{thm exI} is then
   instantiated with @{term "y"} as the witness and @{term "Q"} as the
   predicate, with its proof obligation solved by the local fact U (using the
   Isar attribute @{attribute OF}). The following example is a trivial use of
@@ -426,8 +422,8 @@
   fail. If focusing instead left the premises in place, using methods
   like @{method erule} would lead to unintended behaviour, specifically during
   backtracking. In our example, @{method erule} could choose an alternate
-  premise while backtracking, while leaving @{text I} bound to the original
-  match. In the case of more complex inner methods, where either @{text I} or
+  premise while backtracking, while leaving \<open>I\<close> bound to the original
+  match. In the case of more complex inner methods, where either \<open>I\<close> or
   bound terms are used, this would almost certainly not be the intended
   behaviour.
 
@@ -451,7 +447,7 @@
   matched premises may be declared with the @{attribute "thin"} attribute.
   This will hide the premise from subsequent inner matches, and remove it from
   the list of premises when the inner method has finished and the subgoal is
-  unfocused. It can be considered analogous to the existing @{text thin_tac}.
+  unfocused. It can be considered analogous to the existing \<open>thin_tac\<close>.
 
   To complete our example, the correct implementation of the method
   will @{attribute "thin"} the premise from the match and then apply it to the
@@ -491,10 +487,10 @@
 
 text \<open>
   In this example, the only premise that exists in the first focus is
-  @{term "A"}. Prior to the inner match, the rule @{text impI} changes
+  @{term "A"}. Prior to the inner match, the rule \<open>impI\<close> changes
   the goal @{term "B \<longrightarrow> B"} into @{term "B \<Longrightarrow> B"}. A standard premise
   match would also include @{term A} as an original premise of the outer
-  match. The @{text local} argument limits the match to
+  match. The \<open>local\<close> argument limits the match to
   newly focused premises.
 
 \<close>
@@ -563,8 +559,7 @@
             \<open>rule I [of x y]\<close>)
 
 text \<open>
-  In this example, the order of schematics in @{text asm} is actually @{text
-  "?y ?x"}, but we instantiate our matched rule in the opposite order. This is
+  In this example, the order of schematics in \<open>asm\<close> is actually \<open>?y ?x\<close>, but we instantiate our matched rule in the opposite order. This is
   because the effective rule @{term I} was bound from the match, which
   declared the @{typ 'a} slot first and the @{typ 'b} slot second.
 
@@ -592,7 +587,7 @@
             \<open>prop_solver\<close>)
 
 text \<open>
-  In this example, the pattern @{text "\<And>x :: 'a. ?P x \<Longrightarrow> ?Q x"} matches against
+  In this example, the pattern \<open>\<And>x :: 'a. ?P x \<Longrightarrow> ?Q x\<close> matches against
   the only premise, giving an appropriately typed slot for @{term y}. After
   the match, the resulting rule is instantiated to @{term y} and then declared
   as an @{attribute intros} rule. This is then picked up by @{method
@@ -606,8 +601,7 @@
   In all previous examples, @{method match} was only ever searching for a
   single rule or premise. Each local fact would therefore always have a length
   of exactly one. We may, however, wish to find \<^emph>\<open>all\<close> matching results.
-  To achieve this, we can simply mark a given pattern with the @{text
-  "(multi)"} argument.
+  To achieve this, we can simply mark a given pattern with the \<open>(multi)\<close> argument.
 \<close>
 
     lemma
@@ -618,18 +612,18 @@
       done
 
 text \<open>
-  In the first @{method match}, without the @{text "(multi)"} argument, @{term
-  I} is only ever be bound to one of the members of @{text asms}. This
+  In the first @{method match}, without the \<open>(multi)\<close> argument, @{term
+  I} is only ever bound to one of the members of \<open>asms\<close>. This
   backtracks over both possibilities (see next section); however, neither
   assumption in isolation is sufficient to solve the goal. The use of the
   @{method solves} combinator ensures that @{method prop_solver} has no effect
   on the goal when it doesn't solve it, and so the first match leaves the goal
-  unchanged. In the second @{method match}, @{text I} is bound to all of
-  @{text asms}, declaring both results as @{text intros}. With these rules
+  unchanged. In the second @{method match}, \<open>I\<close> is bound to all of
+  \<open>asms\<close>, declaring both results as \<open>intros\<close>. With these rules
   @{method prop_solver} is capable of solving the goal.
 
   Using for-fixed variables in patterns imposes additional constraints on the
-  results. In all previous examples, the choice of using @{text ?P} or a
+  results. In all previous examples, the choice of using \<open>?P\<close> or a
   for-fixed @{term P} only depended on whether or not @{term P} was mentioned
   in another pattern or the inner method. When using a multi-match, however,
   all for-fixed terms must agree in the results.
@@ -647,9 +641,9 @@
 text \<open>
   Here we have two seemingly-equivalent applications of @{method match};
   however, only the second one is capable of solving the goal. The first
-  @{method match} selects the first and third members of @{text asms} (those
+  @{method match} selects the first and third members of \<open>asms\<close> (those
   that agree on their conclusion), which is not sufficient. The second
-  @{method match} selects the first and second members of @{text asms} (those
+  @{method match} selects the first and second members of \<open>asms\<close> (those
   that agree on their assumption), which is enough for @{method prop_solver}
   to solve the goal.
 \<close>
@@ -661,10 +655,10 @@
   Dummy patterns may be given as placeholders for unique schematics in
   patterns. They implicitly receive all currently bound variables as
   arguments, and are coerced into the @{typ prop} type whenever possible. For
-  example, the trivial dummy pattern @{text "_"} will match any proposition.
-  In contrast, by default the pattern @{text "?P"} is considered to have type
+  example, the trivial dummy pattern \<open>_\<close> will match any proposition.
+  In contrast, by default the pattern \<open>?P\<close> is considered to have type
   @{typ bool}. It will not bind anything with meta-logical connectives (e.g.
-  @{text "_ \<Longrightarrow> _"} or @{text "_ &&& _"}).
+  \<open>_ \<Longrightarrow> _\<close> or \<open>_ &&& _\<close>).
 \<close>
 
     lemma
@@ -676,17 +670,17 @@
 section \<open>Backtracking\<close>
 
 text \<open>
-  Patterns are considered top-down, executing the inner method @{text m} of
+  Patterns are considered top-down, executing the inner method \<open>m\<close> of
   the first pattern which is satisfied by the current match target. By
   default, matching performs extensive backtracking by attempting all valid
   variable and fact bindings according to the given pattern. In particular,
   all unifiers for a given pattern will be explored, as well as each matching
-  fact. The inner method @{text m} will be re-executed for each different
+  fact. The inner method \<open>m\<close> will be re-executed for each different
   variable/fact binding during backtracking. A successful match is considered
   a cut-point for backtracking. Specifically, once a match is made no other
   pattern-method pairs will be considered.
 
-  The method @{text foo} below fails for all goals that are conjunctions. Any
+  The method \<open>foo\<close> below fails for all goals that are conjunctions. Any
   such goal will match the first pattern, causing the second pattern (that
   would otherwise match all goals) to never be considered.
 \<close>
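
The definition in question is elided by this hunk; as a rough reconstruction:

    method foo =
      (match conclusion in "?P \<and> ?Q" \<Rightarrow> \<open>fail\<close>
                        \<bar> "?R" \<Rightarrow> \<open>prop_solver\<close>)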
@@ -701,8 +695,7 @@
   combinator chain, its failure
   becomes significant because it signals previously applied methods to move to
   the next result. Therefore, it is necessary for @{method match} to not mask
-  such failure. One can always rewrite a match using the combinators ``@{text
-  "?"}'' and ``@{text "|"}'' to try subsequent patterns in the case of an
+  such failure. One can always rewrite a match using the combinators ``\<open>?\<close>'' and ``\<open>|\<close>'' to try subsequent patterns in the case of an
   inner-method failure. The following proof method, for example, always
   invokes @{method prop_solver} for all goals because its first alternative
   either never matches or (if it does match) always fails.
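
A reconstruction sketch of that rewritten method, under the same assumptions:

    method foo\<^sub>1 =
      (match conclusion in "?P \<and> ?Q" \<Rightarrow> \<open>fail\<close>) |
      (match conclusion in "?R" \<Rightarrow> \<open>prop_solver\<close>)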
@@ -717,7 +710,7 @@
 
 text \<open>
   Backtracking may be controlled more precisely by marking individual patterns
-  as @{text cut}. This causes backtracking to not progress beyond this pattern:
+  as \<open>cut\<close>. This causes backtracking to not progress beyond this pattern:
   once a match is found no others will be considered.
 \<close>
 
@@ -730,7 +723,7 @@
   implications of @{term "P"} in the premises are considered, evaluating the
   inner @{method rule} with each consequent. No other conjunctions will be
   considered, with method failure occurring once all implications of the
-  form @{text "P \<longrightarrow> ?U"} have been explored. Here the left-right processing of
+  form \<open>P \<longrightarrow> ?U\<close> have been explored. Here the left-right processing of
   individual patterns is important, as all patterns after the cut will
   maintain their usual backtracking behaviour.
 \<close>
@@ -742,16 +735,16 @@
       by (foo\<^sub>2 | prop_solver)
 
 text \<open>
-  In this example, the first lemma is solved by @{text foo\<^sub>2}, by first
-  picking @{term "A \<longrightarrow> D"} for @{text I'}, then backtracking and ultimately
+  In this example, the first lemma is solved by \<open>foo\<^sub>2\<close>, by first
+  picking @{term "A \<longrightarrow> D"} for \<open>I'\<close>, then backtracking and ultimately
   succeeding after picking @{term "A \<longrightarrow> C"}. In the second lemma, however,
   @{term "C \<and> D"} is matched first, the second pattern in the match cannot be
   found and so the method fails, falling through to @{method prop_solver}.
 
   More precise control is also possible by giving a positive
-  number @{text n} as an argument to @{text cut}. This will limit the number
-  of backtracking results of that match to be at most @{text n}.
-  The match argument @{text "(cut 1)"} is the same as simply @{text "(cut)"}.
+  number \<open>n\<close> as an argument to \<open>cut\<close>. This will limit the number
+  of backtracking results of that match to be at most \<open>n\<close>.
+  The match argument \<open>(cut 1)\<close> is the same as simply \<open>(cut)\<close>.
 \<close>
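
A sketch of a bounded cut (the method name and rule are illustrative): at most two premise bindings are tried before the match gives up, and each leaves the antecedent as a new subgoal.

    method mp_cut\<^sub>2 =
      (match premises in I: "?P \<longrightarrow> ?Q" (cut 2) \<Rightarrow> \<open>rule mp [OF I]\<close>)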
 
 
@@ -776,16 +769,14 @@
 
 text \<open>
   Intuitively it seems like this proof should fail to check. The first match
-  result, which binds @{term I} to the first two members of @{text asms},
+  result, which binds @{term I} to the first two members of \<open>asms\<close>,
   fails the second inner match due to binding @{term P} to @{term A}.
-  Backtracking then attempts to bind @{term I} to the third member of @{text
-  asms}. This passes all inner matches, but fails when @{method rule} cannot
+  Backtracking then attempts to bind @{term I} to the third member of \<open>asms\<close>. This passes all inner matches, but fails when @{method rule} cannot
   successfully apply this to the current goal. After this, a valid match that
-  is produced by the unifier is one which binds @{term P} to simply @{text
-  "\<lambda>a. A ?x"}. The first inner match succeeds because @{text "\<lambda>a. A ?x"} does
+  is produced by the unifier is one which binds @{term P} to simply \<open>\<lambda>a. A ?x\<close>. The first inner match succeeds because \<open>\<lambda>a. A ?x\<close> does
   not match @{term A}. The next inner match succeeds because @{term I} has
-  only been bound to the first member of @{text asms}. This is due to @{method
-  match} considering @{text "\<lambda>a. A ?x"} and @{text "\<lambda>a. A ?y"} as distinct
+  only been bound to the first member of \<open>asms\<close>. This is due to @{method
+  match} considering \<open>\<lambda>a. A ?x\<close> and \<open>\<lambda>a. A ?y\<close> as distinct
   terms.
 
   The simplest way to address this is to explicitly disallow term bindings
@@ -807,7 +798,7 @@
   The @{method match} method is not aware of the logical content of match
   targets. Each pattern is simply matched against the shallow structure of a
   fact or term. Most facts are in \<^emph>\<open>normal form\<close>, which curries premises
-  via meta-implication @{text "_ \<Longrightarrow> _"}.
+  via meta-implication \<open>_ \<Longrightarrow> _\<close>.
 \<close>
 
     lemma
@@ -816,7 +807,7 @@
       by (match asms in H: "D \<Longrightarrow> _" (multi) \<Rightarrow> \<open>prop_solver elims: H\<close>)
 
 text \<open>
-  For the first member of @{text asms} the dummy pattern successfully matches
+  For the first member of \<open>asms\<close> the dummy pattern successfully matches
   against @{term "B \<Longrightarrow> C"} and so the proof is successful.
 \<close>
 
@@ -830,7 +821,7 @@
 text \<open>
   This proof will fail to solve the goal. Our match pattern will only match
   rules which have a single premise, and conclusion @{term C}, so the first
-  member of @{text asms} is not bound and thus the proof fails. Matching a
+  member of \<open>asms\<close> is not bound and thus the proof fails. Matching a
   pattern of the form @{term "P \<Longrightarrow> Q"} against this fact will bind @{term "P"}
   to @{term "A"} and @{term Q} to @{term "B \<Longrightarrow> C"}. Our pattern, with a
   concrete @{term "C"} in the conclusion, will fail to match this fact.
@@ -839,7 +830,7 @@
   matching against them. This forms a meta-conjunction of all premises in a
   fact, so that only one implication remains. For example the uncurried
   version of @{term "A \<Longrightarrow> B \<Longrightarrow> C"} is @{term "A &&& B \<Longrightarrow> C"}. This will now match
-  our desired pattern @{text "_ \<Longrightarrow> C"}, and can be \<^emph>\<open>curried\<close> after the
+  our desired pattern \<open>_ \<Longrightarrow> C\<close>, and can be \<^emph>\<open>curried\<close> after the
   match to put it back into normal form.
 \<close>
 
@@ -867,11 +858,11 @@
       done
 
 text \<open>
-  In the first @{method match} we attempt to find a member of @{text asms}
+  In the first @{method match} we attempt to find a member of \<open>asms\<close>
   which matches our goal precisely. This fails because no such member exists.
   The second match reverses the role of the fact in the match, by first giving
   a general pattern @{term P}. This bound pattern is then matched against
-  @{term "A y"}. In this case, @{term P} is bound to @{text "A ?x"} and so it
+  @{term "A y"}. In this case, @{term P} is bound to \<open>A ?x\<close> and so it
   successfully matches.
 \<close>
 
@@ -892,10 +883,9 @@
           \<open>match (y) in "y :: 'b" for y \<Rightarrow> \<open>rule H [where z = y]\<close>\<close>)
 
 text \<open>
-  In this example the type @{text 'b} is matched to @{text 'a}, however
-  statically they are formally distinct types. The first match binds @{text
-  'b} while the inner match serves to coerce @{term y} into having the type
-  @{text 'b}. This allows the rule instantiation to successfully apply.
+  In this example the type \<open>'b\<close> is matched to \<open>'a\<close>, however
+  statically they are formally distinct types. The first match binds \<open>'b\<close> while the inner match serves to coerce @{term y} into having the type
+  \<open>'b\<close>. This allows the rule instantiation to successfully apply.
 \<close>
 
 
@@ -932,7 +922,7 @@
 
 text \<open>
   A custom rule attribute is a simple way to extend the functionality of
-  Eisbach methods. The dummy rule attribute notation (@{text "[[ _ ]]"})
+  Eisbach methods. The dummy rule attribute notation (\<open>[[ _ ]]\<close>)
   invokes the given attribute against a dummy fact and evaluates to the result
   of that attribute. When used as a match target, this can serve as an
   effective auxiliary function.
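
A schematic sketch, with a hypothetical attribute name \<open>my_attr\<close>: the attribute is applied to a dummy fact and its result becomes the match target.

    method use_attr =
      (match [[my_attr]] in U: "_" \<Rightarrow> \<open>rule U\<close>)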
@@ -968,7 +958,7 @@
   Here the new @{method splits} method transforms the goal to use only logical
   connectives: @{term "L = [] \<longrightarrow> False \<and> (\<forall>x y. L = x # y \<longrightarrow> True)"}. This goal
   is then in a form solvable by @{method prop_solver} when given the universal
-  quantifier introduction rule @{text allI}.
+  quantifier introduction rule \<open>allI\<close>.
 \<close>
 
 end
--- a/src/Doc/Implementation/Eq.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Eq.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -6,7 +6,7 @@
 
 text \<open>Equality is one of the most fundamental concepts of
   mathematics.  The Isabelle/Pure logic (\chref{ch:logic}) provides a
-  builtin relation @{text "\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop"} that expresses equality
+  builtin relation \<open>\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop\<close> that expresses equality
   of arbitrary terms (or propositions) at the framework level, as
   expressed by certain basic inference rules (\secref{sec:eq-rules}).
 
@@ -18,7 +18,7 @@
 
   Higher-order matching is able to provide suitable instantiations for
   given equality rules, which leads to the versatile concept of
-  @{text "\<lambda>"}-term rewriting (\secref{sec:rewriting}).  Internally
+  \<open>\<lambda>\<close>-term rewriting (\secref{sec:rewriting}).  Internally
   this is based on the general-purpose Simplifier engine of Isabelle,
   which is more specific and more efficient than plain conversions.
 
@@ -31,10 +31,10 @@
 
 section \<open>Basic equality rules \label{sec:eq-rules}\<close>
 
-text \<open>Isabelle/Pure uses @{text "\<equiv>"} for equality of arbitrary
+text \<open>Isabelle/Pure uses \<open>\<equiv>\<close> for equality of arbitrary
   terms, which includes equivalence of propositions of the logical
-  framework.  The conceptual axiomatization of the constant @{text "\<equiv>
-  :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop"} is given in \figref{fig:pure-equality}.  The
+  framework.  The conceptual axiomatization of the constant \<open>\<equiv>
+  :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop\<close> is given in \figref{fig:pure-equality}.  The
   inference kernel presents slightly different equality rules, which
   may be understood as derived rules from this minimal axiomatization.
   The Pure theory also provides some theorems that express the same
@@ -42,15 +42,14 @@
   rules as explained in \secref{sec:obj-rules}.
 
   For example, @{ML Thm.symmetric} as Pure inference is an ML function
-  that maps a theorem @{text "th"} stating @{text "t \<equiv> u"} to one
-  stating @{text "u \<equiv> t"}.  In contrast, @{thm [source]
+  that maps a theorem \<open>th\<close> stating \<open>t \<equiv> u\<close> to one
+  stating \<open>u \<equiv> t\<close>.  In contrast, @{thm [source]
   Pure.symmetric} as Pure theorem expresses the same reasoning in
-  declarative form.  If used like @{text "th [THEN Pure.symmetric]"}
+  declarative form.  If used like \<open>th [THEN Pure.symmetric]\<close>
   in Isar source notation, it achieves a similar effect as the ML
   inference function, although the rule attribute @{attribute THEN} or
   ML operator @{ML "op RS"} involves the full machinery of higher-order
-  unification (modulo @{text "\<beta>\<eta>"}-conversion) and lifting of @{text
-  "\<And>/\<Longrightarrow>"} contexts.\<close>
+  unification (modulo \<open>\<beta>\<eta>\<close>-conversion) and lifting of \<open>\<And>/\<Longrightarrow>\<close> contexts.\<close>
 
 text %mlref \<open>
   \begin{mldecls}
@@ -64,8 +63,8 @@
   \end{mldecls}
 
   See also @{file "~~/src/Pure/thm.ML" } for further description of
-  these inference rules, and a few more for primitive @{text "\<beta>"} and
-  @{text "\<eta>"} conversions.  Note that @{text "\<alpha>"} conversion is
+  these inference rules, and a few more for primitive \<open>\<beta>\<close> and
+  \<open>\<eta>\<close> conversions.  Note that \<open>\<alpha>\<close> conversion is
   implicit due to the representation of terms with de-Bruijn indices
   (\secref{sec:terms}).\<close>
 
@@ -83,10 +82,10 @@
 section \<open>Rewriting \label{sec:rewriting}\<close>
 
 text \<open>Rewriting normalizes a given term (theorem or goal) by
-  replacing instances of given equalities @{text "t \<equiv> u"} in subterms.
+  replacing instances of given equalities \<open>t \<equiv> u\<close> in subterms.
   Rewriting continues until no rewrites are applicable to any subterm.
-  This may be used to unfold simple definitions of the form @{text "f
-  x\<^sub>1 \<dots> x\<^sub>n \<equiv> u"}, but is slightly more general than that.
+  This may be used to unfold simple definitions of the form \<open>f
+  x\<^sub>1 \<dots> x\<^sub>n \<equiv> u\<close>, but is slightly more general than that.
 \<close>
 
 text %mlref \<open>
@@ -98,23 +97,22 @@
   @{index_ML fold_goals_tac: "Proof.context -> thm list -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML rewrite_rule}~@{text "ctxt rules thm"} rewrites the whole
+  \<^descr> @{ML rewrite_rule}~\<open>ctxt rules thm\<close> rewrites the whole
   theorem by the given rules.
 
-  \<^descr> @{ML rewrite_goals_rule}~@{text "ctxt rules thm"} rewrites the
+  \<^descr> @{ML rewrite_goals_rule}~\<open>ctxt rules thm\<close> rewrites the
   outer premises of the given theorem.  Interpreting the same as a
   goal state (\secref{sec:tactical-goals}) it means to rewrite all
   subgoals (in the same manner as @{ML rewrite_goals_tac}).
 
-  \<^descr> @{ML rewrite_goal_tac}~@{text "ctxt rules i"} rewrites subgoal
-  @{text "i"} by the given rewrite rules.
+  \<^descr> @{ML rewrite_goal_tac}~\<open>ctxt rules i\<close> rewrites subgoal
+  \<open>i\<close> by the given rewrite rules.
 
-  \<^descr> @{ML rewrite_goals_tac}~@{text "ctxt rules"} rewrites all subgoals
+  \<^descr> @{ML rewrite_goals_tac}~\<open>ctxt rules\<close> rewrites all subgoals
   by the given rewrite rules.
 
-  \<^descr> @{ML fold_goals_tac}~@{text "ctxt rules"} essentially uses @{ML
-  rewrite_goals_tac} with the symmetric form of each member of @{text
-  "rules"}, re-ordered to fold longer expression first.  This supports
+  \<^descr> @{ML fold_goals_tac}~\<open>ctxt rules\<close> essentially uses @{ML
+  rewrite_goals_tac} with the symmetric form of each member of \<open>rules\<close>, re-ordered to fold longer expressions first.  This supports
   the idea of folding primitive definitions that appear in expanded form
   in the proof state.
 \<close>
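
A small ML usage sketch, assuming some meta-equality \<open>foo_def\<close> in scope (the name is illustrative):

ML \<open>
  (*unfold foo_def in all subgoals of the goal state*)
  fun unfold_foo_tac ctxt = rewrite_goals_tac ctxt @{thms foo_def};

  (*fold it back again*)
  fun fold_foo_tac ctxt = fold_goals_tac ctxt @{thms foo_def};
\<close>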
--- a/src/Doc/Implementation/Integration.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Integration.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -23,8 +23,7 @@
 subsection \<open>Toplevel state\<close>
 
 text \<open>
-  The toplevel state is a disjoint sum of empty @{text toplevel}, or @{text
-  theory}, or @{text proof}. The initial toplevel is empty; a theory is
+  The toplevel state is a disjoint sum of empty \<open>toplevel\<close>, or \<open>theory\<close>, or \<open>proof\<close>. The initial toplevel is empty; a theory is
   commenced by a @{command theory} header; within a theory we may use theory
   commands such as @{command definition}, or state a @{command theorem} to be
   proven. A proof state accepts a rich collection of Isar proof commands for
@@ -52,23 +51,23 @@
   operations.  Many operations work only partially for certain cases,
   since @{ML_type Toplevel.state} is a sum type.
 
-  \<^descr> @{ML Toplevel.is_toplevel}~@{text "state"} checks for an empty
+  \<^descr> @{ML Toplevel.is_toplevel}~\<open>state\<close> checks for an empty
   toplevel state.
 
-  \<^descr> @{ML Toplevel.theory_of}~@{text "state"} selects the
-  background theory of @{text "state"}, it raises @{ML Toplevel.UNDEF}
+  \<^descr> @{ML Toplevel.theory_of}~\<open>state\<close> selects the
+  background theory of \<open>state\<close>; it raises @{ML Toplevel.UNDEF}
   for an empty toplevel state.
 
-  \<^descr> @{ML Toplevel.proof_of}~@{text "state"} selects the Isar proof
+  \<^descr> @{ML Toplevel.proof_of}~\<open>state\<close> selects the Isar proof
   state if available, otherwise it raises an error.
 \<close>
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "Isar.state"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "Isar.state"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
-  \<^descr> @{text "@{Isar.state}"} refers to Isar toplevel state at that
+  \<^descr> \<open>@{Isar.state}\<close> refers to Isar toplevel state at that
   point --- as abstract value.
 
   This only works for diagnostic ML commands, such as @{command
@@ -113,27 +112,27 @@
   Toplevel.transition -> Toplevel.transition"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Toplevel.keep}~@{text "tr"} adjoins a diagnostic
+  \<^descr> @{ML Toplevel.keep}~\<open>tr\<close> adjoins a diagnostic
   function.
 
-  \<^descr> @{ML Toplevel.theory}~@{text "tr"} adjoins a theory
+  \<^descr> @{ML Toplevel.theory}~\<open>tr\<close> adjoins a theory
   transformer.
 
-  \<^descr> @{ML Toplevel.theory_to_proof}~@{text "tr"} adjoins a global
+  \<^descr> @{ML Toplevel.theory_to_proof}~\<open>tr\<close> adjoins a global
   goal function, which turns a theory into a proof state.  The theory
   may be changed before entering the proof; the generic Isar goal
   setup includes an @{verbatim after_qed} argument that specifies how to
   apply the proven result to the enclosing context, when the proof
   is finished.
 
-  \<^descr> @{ML Toplevel.proof}~@{text "tr"} adjoins a deterministic
+  \<^descr> @{ML Toplevel.proof}~\<open>tr\<close> adjoins a deterministic
   proof command, with a singleton result.
 
-  \<^descr> @{ML Toplevel.proofs}~@{text "tr"} adjoins a general proof
+  \<^descr> @{ML Toplevel.proofs}~\<open>tr\<close> adjoins a general proof
   command, with zero or more result states (represented as a lazy
   list).
 
-  \<^descr> @{ML Toplevel.end_proof}~@{text "tr"} adjoins a concluding
+  \<^descr> @{ML Toplevel.end_proof}~\<open>tr\<close> adjoins a concluding
   proof command, that returns the resulting theory, after applying the
   resulting facts to the target context.
 \<close>
@@ -163,7 +162,7 @@
   @{index_ML Thy_Info.register_thy: "theory -> unit"} \\
   \end{mldecls}
 
-  \<^descr> @{ML use_thy}~@{text A} ensures that theory @{text A} is fully
+  \<^descr> @{ML use_thy}~\<open>A\<close> ensures that theory \<open>A\<close> is fully
   up-to-date wrt.\ the external file store; outdated ancestors are reloaded on
   demand.
 
@@ -175,14 +174,14 @@
 
   This variant is used by default in @{tool build} @{cite "isabelle-system"}.
 
-  \<^descr> @{ML Thy_Info.get_theory}~@{text A} retrieves the theory value
-  presently associated with name @{text A}. Note that the result might be
+  \<^descr> @{ML Thy_Info.get_theory}~\<open>A\<close> retrieves the theory value
+  presently associated with name \<open>A\<close>. Note that the result might be
   outdated wrt.\ the file-system content.
 
-  \<^descr> @{ML Thy_Info.remove_thy}~@{text A} deletes theory @{text A} and all
+  \<^descr> @{ML Thy_Info.remove_thy}~\<open>A\<close> deletes theory \<open>A\<close> and all
   descendants from the theory database.
 
-  \<^descr> @{ML Thy_Info.register_thy}~@{text "text thy"} registers an existing
+  \<^descr> @{ML Thy_Info.register_thy}~\<open>text thy\<close> registers an existing
   theory value with the theory loader database and updates source version
   information according to the file store.
 \<close>
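
A brief ML sketch of these operations (the theory name and path are illustrative):

ML \<open>
  (*ensure the theory is loaded, then retrieve its value*)
  val () = use_thy "~~/src/HOL/Library/Multiset";
  val thy = Thy_Info.get_theory "Multiset";
\<close>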
--- a/src/Doc/Implementation/Isar.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Isar.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -78,7 +78,7 @@
   \<^descr> Type @{ML_type Proof.state} represents Isar proof states.
   This is a block-structured configuration with proof context,
   linguistic mode, and optional goal.  The latter consists of goal
-  context, goal facts (``@{text "using"}''), and tactical goal state
+  context, goal facts (``\<open>using\<close>''), and tactical goal state
   (see \secref{sec:tactical-goals}).
 
   The general idea is that the facts shall contribute to the
@@ -87,39 +87,37 @@
 
   \<^descr> @{ML Proof.assert_forward}, @{ML Proof.assert_chain}, @{ML
   Proof.assert_backward} are partial identity functions that fail
-  unless a certain linguistic mode is active, namely ``@{text
-  "proof(state)"}'', ``@{text "proof(chain)"}'', ``@{text
-  "proof(prove)"}'', respectively (using the terminology of
+  unless a certain linguistic mode is active, namely ``\<open>proof(state)\<close>'', ``\<open>proof(chain)\<close>'', ``\<open>proof(prove)\<close>'', respectively (using the terminology of
   @{cite "isabelle-isar-ref"}).
 
   It is advisable to study the implementations of existing proof commands
   for suitable modes to be asserted.
 
-  \<^descr> @{ML Proof.simple_goal}~@{text "state"} returns the structured
+  \<^descr> @{ML Proof.simple_goal}~\<open>state\<close> returns the structured
   Isar goal (if available) in the form seen by ``simple'' methods
   (like @{method simp} or @{method blast}).  The Isar goal facts are
   already inserted as premises into the subgoals, which are presented
   individually as in @{ML Proof.goal}.
 
-  \<^descr> @{ML Proof.goal}~@{text "state"} returns the structured Isar
+  \<^descr> @{ML Proof.goal}~\<open>state\<close> returns the structured Isar
   goal (if available) in the form seen by regular methods (like
   @{method rule}).  The auxiliary internal encoding of Pure
   conjunctions is split into individual subgoals as usual.
 
-  \<^descr> @{ML Proof.raw_goal}~@{text "state"} returns the structured
+  \<^descr> @{ML Proof.raw_goal}~\<open>state\<close> returns the structured
   Isar goal (if available) in the raw internal form seen by ``raw''
   methods (like @{method induct}).  This form is rarely appropriate
   for diagnostic tools; @{ML Proof.simple_goal} or @{ML Proof.goal}
   should be used in most situations.
 
-  \<^descr> @{ML Proof.theorem}~@{text "before_qed after_qed statement ctxt"}
+  \<^descr> @{ML Proof.theorem}~\<open>before_qed after_qed statement ctxt\<close>
   initializes a toplevel Isar proof state within a given context.
 
-  The optional @{text "before_qed"} method is applied at the end of
+  The optional \<open>before_qed\<close> method is applied at the end of
   the proof, just before extracting the result (this feature is rarely
   used).
 
-  The @{text "after_qed"} continuation receives the extracted result
+  The \<open>after_qed\<close> continuation receives the extracted result
   in order to apply it to the final context in a suitable way (e.g.\
   storing named facts).  Note that at this generic level the target
   context is specified as @{ML_type Proof.context}, but the usual
@@ -127,20 +125,20 @@
   @{ML_type local_theory} here (\chref{ch:local-theory}).  This
   affects the way how results are stored.
 
-  The @{text "statement"} is given as a nested list of terms, each
+  The \<open>statement\<close> is given as a nested list of terms, each
   associated with optional @{keyword "is"} patterns as usual in the
   Isar source language.  The original nested list structure over terms
-  is turned into one over theorems when @{text "after_qed"} is
+  is turned into one over theorems when \<open>after_qed\<close> is
   invoked.
 \<close>
 
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "Isar.goal"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "Isar.goal"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
-  \<^descr> @{text "@{Isar.goal}"} refers to the regular goal state (if
+  \<^descr> \<open>@{Isar.goal}\<close> refers to the regular goal state (if
   available) of the current proof state managed by the Isar toplevel
   --- as abstract value.
 
@@ -165,8 +163,8 @@
 
 section \<open>Proof methods\<close>
 
-text \<open>A @{text "method"} is a function @{text "context \<rightarrow> thm\<^sup>* \<rightarrow> goal
-  \<rightarrow> (cases \<times> goal)\<^sup>*\<^sup>*"} that operates on the full Isar goal
+text \<open>A \<open>method\<close> is a function \<open>context \<rightarrow> thm\<^sup>* \<rightarrow> goal
+  \<rightarrow> (cases \<times> goal)\<^sup>*\<^sup>*\<close> that operates on the full Isar goal
   configuration with context, goal facts, and tactical goal state and
   enumerates possible follow-up goal states, with the potential
   addition of named extensions of the proof context (\<^emph>\<open>cases\<close>).
@@ -186,9 +184,9 @@
 
   \<^item> A non-trivial method always needs to make progress: an
   identical follow-up goal state has to be avoided.\footnote{This
-  enables the user to write method expressions like @{text "meth\<^sup>+"}
+  enables the user to write method expressions like \<open>meth\<^sup>+\<close>
   without looping, while the trivial do-nothing case can be recovered
-  via @{text "meth\<^sup>?"}.}
+  via \<open>meth\<^sup>?\<close>.}
 
   Exception: trivial stuttering steps, such as ``@{method -}'' or
   @{method succeed}.
@@ -211,19 +209,17 @@
 
   \<^medskip>
   \begin{tabular}{l}
-  @{command from}~@{text "facts\<^sub>1"}~@{command have}~@{text "props"}~@{command using}~@{text "facts\<^sub>2"} \\
-  @{command proof}~@{text "(initial_method)"} \\
-  \quad@{text "body"} \\
-  @{command qed}~@{text "(terminal_method)"} \\
+  @{command from}~\<open>facts\<^sub>1\<close>~@{command have}~\<open>props\<close>~@{command using}~\<open>facts\<^sub>2\<close> \\
+  @{command proof}~\<open>(initial_method)\<close> \\
+  \quad\<open>body\<close> \\
+  @{command qed}~\<open>(terminal_method)\<close> \\
   \end{tabular}
   \<^medskip>
 
-  The goal configuration consists of @{text "facts\<^sub>1"} and
-  @{text "facts\<^sub>2"} appended in that order, and various @{text
-  "props"} being claimed.  The @{text "initial_method"} is invoked
+  The goal configuration consists of \<open>facts\<^sub>1\<close> and
+  \<open>facts\<^sub>2\<close> appended in that order, and various \<open>props\<close> being claimed.  The \<open>initial_method\<close> is invoked
   with facts and goals together and refines the problem to something
-  that is handled recursively in the proof @{text "body"}.  The @{text
-  "terminal_method"} has another chance to finish any remaining
+  that is handled recursively in the proof \<open>body\<close>.  The \<open>terminal_method\<close> has another chance to finish any remaining
   subgoals, but it does not see the facts of the initial step.
 
   \<^medskip>
@@ -231,20 +227,18 @@
 
   \<^medskip>
   \begin{tabular}{l}
-  @{command have}~@{text "props"} \\
-  \quad@{command using}~@{text "facts\<^sub>1"}~@{command apply}~@{text "method\<^sub>1"} \\
-  \quad@{command apply}~@{text "method\<^sub>2"} \\
-  \quad@{command using}~@{text "facts\<^sub>3"}~@{command apply}~@{text "method\<^sub>3"} \\
+  @{command have}~\<open>props\<close> \\
+  \quad@{command using}~\<open>facts\<^sub>1\<close>~@{command apply}~\<open>method\<^sub>1\<close> \\
+  \quad@{command apply}~\<open>method\<^sub>2\<close> \\
+  \quad@{command using}~\<open>facts\<^sub>3\<close>~@{command apply}~\<open>method\<^sub>3\<close> \\
   \quad@{command done} \\
   \end{tabular}
   \<^medskip>
 
-  The @{text "method\<^sub>1"} operates on the original claim while
-  using @{text "facts\<^sub>1"}.  Since the @{command apply} command
-  structurally resets the facts, the @{text "method\<^sub>2"} will
-  operate on the remaining goal state without facts.  The @{text
-  "method\<^sub>3"} will see again a collection of @{text
-  "facts\<^sub>3"} that has been inserted into the script explicitly.
+  The \<open>method\<^sub>1\<close> operates on the original claim while
+  using \<open>facts\<^sub>1\<close>.  Since the @{command apply} command
+  structurally resets the facts, the \<open>method\<^sub>2\<close> will
+  operate on the remaining goal state without facts.  The \<open>method\<^sub>3\<close> will see again a collection of \<open>facts\<^sub>3\<close> that has been inserted into the script explicitly.
 
   \<^medskip>
   Empirically, any Isar proof method can be categorized as
@@ -272,7 +266,7 @@
   \<^enum> \<^emph>\<open>Old-style tactic emulation\<close> with detailed numeric goal
   addressing and explicit references to entities of the internal goal
   state (which are otherwise invisible from proper Isar proof text).
-  The naming convention @{text "foo_tac"} makes this special
+  The naming convention \<open>foo_tac\<close> makes this special
   non-standard status clear.
 
   Example: @{method "rule_tac"}.
@@ -303,30 +297,27 @@
   \<^descr> Type @{ML_type Proof.method} represents proof methods as
   abstract type.
 
-  \<^descr> @{ML METHOD_CASES}~@{text "(fn facts => cases_tactic)"} wraps
-  @{text cases_tactic} depending on goal facts as proof method with
+  \<^descr> @{ML METHOD_CASES}~\<open>(fn facts => cases_tactic)\<close> wraps
+  \<open>cases_tactic\<close> depending on goal facts as proof method with
   cases; the goal context is passed via method syntax.
 
-  \<^descr> @{ML METHOD}~@{text "(fn facts => tactic)"} wraps @{text
-  tactic} depending on goal facts as regular proof method; the goal
+  \<^descr> @{ML METHOD}~\<open>(fn facts => tactic)\<close> wraps \<open>tactic\<close> depending on goal facts as regular proof method; the goal
   context is passed via method syntax.
 
-  \<^descr> @{ML SIMPLE_METHOD}~@{text "tactic"} wraps a tactic that
+  \<^descr> @{ML SIMPLE_METHOD}~\<open>tactic\<close> wraps a tactic that
   addresses all subgoals uniformly as simple proof method.  Goal facts
-  are already inserted into all subgoals before @{text "tactic"} is
+  are already inserted into all subgoals before \<open>tactic\<close> is
   applied.
 
-  \<^descr> @{ML SIMPLE_METHOD'}~@{text "tactic"} wraps a tactic that
+  \<^descr> @{ML SIMPLE_METHOD'}~\<open>tactic\<close> wraps a tactic that
   addresses a specific subgoal as simple proof method that operates on
-  subgoal 1.  Goal facts are inserted into the subgoal then the @{text
-  "tactic"} is applied.
+  subgoal 1.  Goal facts are inserted into the subgoal, then the \<open>tactic\<close> is applied.
 
-  \<^descr> @{ML Method.insert_tac}~@{text "facts i"} inserts @{text
-  "facts"} into subgoal @{text "i"}.  This is convenient to reproduce
+  \<^descr> @{ML Method.insert_tac}~\<open>facts i\<close> inserts \<open>facts\<close> into subgoal \<open>i\<close>.  This is convenient to reproduce
   part of the @{ML SIMPLE_METHOD} or @{ML SIMPLE_METHOD'} wrapping
   within regular @{ML METHOD}, for example.
 
-  \<^descr> @{ML Method.setup}~@{text "name parser description"} provides
+  \<^descr> @{ML Method.setup}~\<open>name parser description\<close> provides
   the functionality of the Isar command @{command method_setup} as ML
   function.
 \<close>
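
For comparison, the same functionality is available via the Isar command itself; a minimal sketch with a hypothetical method name:

    method_setup assm =
      \<open>Scan.succeed (fn ctxt => SIMPLE_METHOD' (assume_tac ctxt))\<close>
      "proof by assumption (sketch)"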
@@ -470,8 +461,8 @@
   method arguments obtained via concrete syntax or the context does
   not meet the requirement of ``strong emphasis on facts'' of regular
   proof methods, because rewrite rules as used above can be easily
-  ignored.  A proof text ``@{command using}~@{text "foo"}~@{command
-  "by"}~@{text "my_simp"}'' where @{text "foo"} is not used would
+  ignored.  A proof text ``@{command using}~\<open>foo\<close>~@{command
+  "by"}~\<open>my_simp\<close>'' where \<open>foo\<close> is not used would
   deceive the reader.
 
   \<^medskip>
@@ -495,12 +486,12 @@
 
 section \<open>Attributes \label{sec:attributes}\<close>
 
-text \<open>An \<^emph>\<open>attribute\<close> is a function @{text "context \<times> thm \<rightarrow>
-  context \<times> thm"}, which means both a (generic) context and a theorem
+text \<open>An \<^emph>\<open>attribute\<close> is a function \<open>context \<times> thm \<rightarrow>
+  context \<times> thm\<close>, which means both a (generic) context and a theorem
   can be modified simultaneously.  In practice this mixed form is very
   rare, instead attributes are presented either as \<^emph>\<open>declaration
-  attribute:\<close> @{text "thm \<rightarrow> context \<rightarrow> context"} or \<^emph>\<open>rule
-  attribute:\<close> @{text "context \<rightarrow> thm \<rightarrow> thm"}.
+  attribute:\<close> \<open>thm \<rightarrow> context \<rightarrow> context\<close> or \<^emph>\<open>rule
+  attribute:\<close> \<open>context \<rightarrow> thm \<rightarrow> thm\<close>.
 
   Attributes can have additional arguments via concrete syntax.  There
   is a collection of context-sensitive parsers for various logical
@@ -527,28 +518,28 @@
   \<^descr> Type @{ML_type attribute} represents attributes as concrete
   type alias.
 
-  \<^descr> @{ML Thm.rule_attribute}~@{text "(fn context => rule)"} wraps
+  \<^descr> @{ML Thm.rule_attribute}~\<open>(fn context => rule)\<close> wraps
   a context-dependent rule (mapping on @{ML_type thm}) as attribute.
 
-  \<^descr> @{ML Thm.declaration_attribute}~@{text "(fn thm => decl)"}
+  \<^descr> @{ML Thm.declaration_attribute}~\<open>(fn thm => decl)\<close>
   wraps a theorem-dependent declaration (mapping on @{ML_type
   Context.generic}) as attribute.
 
-  \<^descr> @{ML Attrib.setup}~@{text "name parser description"} provides
+  \<^descr> @{ML Attrib.setup}~\<open>name parser description\<close> provides
   the functionality of the Isar command @{command attribute_setup} as
   ML function.
 \<close>
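
A minimal sketch at the Isar level, assuming HOL's \<open>sym\<close> rule (the attribute name is hypothetical):

    attribute_setup symmetric_rule =
      \<open>Scan.succeed (Thm.rule_attribute (fn _ => fn th => th RS @{thm sym}))\<close>
      "apply symmetry (sketch)"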
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def attributes} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def attributes} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
   @@{ML_antiquotation attributes} attributes
   \<close>}
 
-  \<^descr> @{text "@{attributes [\<dots>]}"} embeds attribute source
+  \<^descr> \<open>@{attributes [\<dots>]}\<close> embeds attribute source
   representation into the ML text, which is particularly useful with
   declarations like @{ML Local_Theory.note}.  Attribute names are
   internalized at compile time, but the source is unevaluated.  This
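
A small ML sketch of that use (the binding and theorem are assumed to be given):

ML \<open>
  (*note a fact under binding b, declaring it as a simp rule*)
  fun note_simp b th lthy =
    Local_Theory.note ((b, @{attributes [simp]}), [th]) lthy;
\<close>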
--- a/src/Doc/Implementation/Local_Theory.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Local_Theory.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -13,11 +13,11 @@
   context\<close>.
 
   The target is usually derived from the background theory by adding
-  local @{text "\<FIX>"} and @{text "\<ASSUME>"} elements, plus
+  local \<open>\<FIX>\<close> and \<open>\<ASSUME>\<close> elements, plus
   suitable modifications of non-logical context data (e.g.\ a special
   type-checking discipline).  Once initialized, the target is ready to
-  absorb definitional primitives: @{text "\<DEFINE>"} for terms and
-  @{text "\<NOTE>"} for theorems.  Such definitions may get
+  absorb definitional primitives: \<open>\<DEFINE>\<close> for terms and
+  \<open>\<NOTE>\<close> for theorems.  Such definitions may get
   transformed in a target-specific way, but the programming interface
   hides such details.
 
@@ -39,13 +39,13 @@
 section \<open>Definitional elements\<close>
 
 text \<open>
-  There are separate elements @{text "\<DEFINE> c \<equiv> t"} for terms, and
-  @{text "\<NOTE> b = thm"} for theorems.  Types are treated
+  There are separate elements \<open>\<DEFINE> c \<equiv> t\<close> for terms, and
+  \<open>\<NOTE> b = thm\<close> for theorems.  Types are treated
   implicitly, according to Hindley-Milner discipline (cf.\
   \secref{sec:variables}).  These definitional primitives essentially
-  act like @{text "let"}-bindings within a local context that may
-  already contain earlier @{text "let"}-bindings and some initial
-  @{text "\<lambda>"}-bindings.  Thus we gain \<^emph>\<open>dependent definitions\<close>
+  act like \<open>let\<close>-bindings within a local context that may
+  already contain earlier \<open>let\<close>-bindings and some initial
+  \<open>\<lambda>\<close>-bindings.  Thus we gain \<^emph>\<open>dependent definitions\<close>
   that are relative to an initial axiomatic context.  The following
   diagram illustrates this idea of axiomatic elements versus
   definitional elements:
@@ -53,30 +53,29 @@
   \begin{center}
   \begin{tabular}{|l|l|l|}
   \hline
-  & @{text "\<lambda>"}-binding & @{text "let"}-binding \\
+  & \<open>\<lambda>\<close>-binding & \<open>let\<close>-binding \\
   \hline
-  types & fixed @{text "\<alpha>"} & arbitrary @{text "\<beta>"} \\
-  terms & @{text "\<FIX> x :: \<tau>"} & @{text "\<DEFINE> c \<equiv> t"} \\
-  theorems & @{text "\<ASSUME> a: A"} & @{text "\<NOTE> b = \<^BG>B\<^EN>"} \\
+  types & fixed \<open>\<alpha>\<close> & arbitrary \<open>\<beta>\<close> \\
+  terms & \<open>\<FIX> x :: \<tau>\<close> & \<open>\<DEFINE> c \<equiv> t\<close> \\
+  theorems & \<open>\<ASSUME> a: A\<close> & \<open>\<NOTE> b = \<^BG>B\<^EN>\<close> \\
   \hline
   \end{tabular}
   \end{center}
 
-  A user package merely needs to produce suitable @{text "\<DEFINE>"}
-  and @{text "\<NOTE>"} elements according to the application.  For
-  example, a package for inductive definitions might first @{text
-  "\<DEFINE>"} a certain predicate as some fixed-point construction,
-  then @{text "\<NOTE>"} a proven result about monotonicity of the
+  A user package merely needs to produce suitable \<open>\<DEFINE>\<close>
+  and \<open>\<NOTE>\<close> elements according to the application.  For
+  example, a package for inductive definitions might first \<open>\<DEFINE>\<close> a certain predicate as some fixed-point construction,
+  then \<open>\<NOTE>\<close> a proven result about monotonicity of the
   functor involved here, and then produce further derived concepts via
-  additional @{text "\<DEFINE>"} and @{text "\<NOTE>"} elements.
+  additional \<open>\<DEFINE>\<close> and \<open>\<NOTE>\<close> elements.
 
-  The cumulative sequence of @{text "\<DEFINE>"} and @{text "\<NOTE>"}
+  The cumulative sequence of \<open>\<DEFINE>\<close> and \<open>\<NOTE>\<close>
   produced at package runtime is managed by the local theory
   infrastructure by means of an \<^emph>\<open>auxiliary context\<close>.  Thus the
   system holds up the impression of working within a fully abstract
-  situation with hypothetical entities: @{text "\<DEFINE> c \<equiv> t"}
-  always results in a literal fact @{text "\<^BG>c \<equiv> t\<^EN>"}, where
-  @{text "c"} is a fixed variable @{text "c"}.  The details about
+  situation with hypothetical entities: \<open>\<DEFINE> c \<equiv> t\<close>
+  always results in a literal fact \<open>\<^BG>c \<equiv> t\<^EN>\<close>, where
+  \<open>c\<close> is a fixed variable.  The details about
   global constants, name spaces etc. are handled internally.
 
   So the general structure of a local theory is a sandwich of three
@@ -88,8 +87,7 @@
 
   When a definitional package is finished, the auxiliary context is
   reset to the target context.  The target now holds definitions for
-  terms and theorems that stem from the hypothetical @{text
-  "\<DEFINE>"} and @{text "\<NOTE>"} elements, transformed by the
+  terms and theorems that stem from the hypothetical \<open>\<DEFINE>\<close> and \<open>\<NOTE>\<close> elements, transformed by the
   particular target policy (see @{cite \<open>\S4--5\<close> "Haftmann-Wenzel:2009"}
   for details).\<close>
 
@@ -107,11 +105,11 @@
   Although this is merely an alias for @{ML_type Proof.context}, it is
   semantically a subtype of the same: a @{ML_type local_theory} holds
   target information as special context data.  Subtyping means that
-  any value @{text "lthy:"}~@{ML_type local_theory} can be also used
-  with operations on expecting a regular @{text "ctxt:"}~@{ML_type
+  any value \<open>lthy:\<close>~@{ML_type local_theory} can also be used
+  with operations expecting a regular \<open>ctxt:\<close>~@{ML_type
   Proof.context}.
 
-  \<^descr> @{ML Named_Target.init}~@{text "before_exit name thy"}
+  \<^descr> @{ML Named_Target.init}~\<open>before_exit name thy\<close>
   initializes a local theory derived from the given background theory.
   An empty name refers to a \<^emph>\<open>global theory\<close> context, and a
   non-empty name refers to a @{command locale} or @{command class}
@@ -119,9 +117,9 @@
   useful for experimentation --- normally the Isar toplevel already
   takes care to initialize the local theory context.
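+
+  For experimentation in Isabelle/ML, a minimal sketch (the locale
+  name \<open>foo\<close> and the trivial \<open>before_exit\<close> function \<open>I\<close> are merely
+  illustrative assumptions):
+
+  \<open>val lthy = Named_Target.init I "foo" thy;
+   \<dots>
+   val thy' = Local_Theory.exit_global lthy\<close>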
 
-  \<^descr> @{ML Local_Theory.define}~@{text "((b, mx), (a, rhs))
-  lthy"} defines a local entity according to the specification that is
-  given relatively to the current @{text "lthy"} context.  In
+  \<^descr> @{ML Local_Theory.define}~\<open>((b, mx), (a, rhs))
+  lthy\<close> defines a local entity according to the specification that is
+  given relative to the current \<open>lthy\<close> context.  In
   particular the term of the RHS may refer to earlier local entities
   from the auxiliary context, or hypothetical parameters from the
   target context.  The result is the newly defined term (which is
@@ -130,7 +128,7 @@
   definition as a hypothetical fact.
 
   Unless an explicit name binding is given for the RHS, the resulting
-  fact will be called @{text "b_def"}.  Any given attributes are
+  fact will be called \<open>b_def\<close>.  Any given attributes are
   applied to that same fact --- immediately in the auxiliary context
   \<^emph>\<open>and\<close> in any transformed versions stemming from target-specific
   policies or any later interpretations of results from the target
@@ -139,7 +137,7 @@
   declarations such as @{attribute simp}, while non-trivial rules like
   @{attribute simplified} are better avoided.
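+
+  For example, a package step might be sketched as follows (the
+  constant \<open>double\<close> and its right-hand side are hypothetical,
+  assuming a HOL context):
+
+  \<open>val ((t, (_, def_thm)), lthy') =
+     Local_Theory.define ((@{binding double}, NoSyn),
+       ((Binding.empty, []), @{term "\<lambda>n::nat. n + n"})) lthy\<close>
+
+  The empty binding for the RHS fact yields the default name
+  \<open>double_def\<close> here.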
 
-  \<^descr> @{ML Local_Theory.note}~@{text "(a, ths) lthy"} is
+  \<^descr> @{ML Local_Theory.note}~\<open>(a, ths) lthy\<close> is
   analogous to @{ML Local_Theory.define}, but defines facts instead of
   terms.  There is also a slightly more general variant @{ML
   Local_Theory.notes} that defines several facts (with attribute
--- a/src/Doc/Implementation/Logic.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Logic.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -7,23 +7,20 @@
 text \<open>
   The logical foundations of Isabelle/Isar are that of the Pure logic,
   which has been introduced as a Natural Deduction framework in
-  @{cite paulson700}.  This is essentially the same logic as ``@{text
-  "\<lambda>HOL"}'' in the more abstract setting of Pure Type Systems (PTS)
+  @{cite paulson700}.  This is essentially the same logic as
+  ``\<open>\<lambda>HOL\<close>'' in the more abstract setting of Pure Type Systems (PTS)
   @{cite "Barendregt-Geuvers:2001"}, although there are some key
   differences in the specific treatment of simple types in
   Isabelle/Pure.
 
   Following type-theoretic parlance, the Pure logic consists of three
-  levels of @{text "\<lambda>"}-calculus with corresponding arrows, @{text
-  "\<Rightarrow>"} for syntactic function space (terms depending on terms), @{text
-  "\<And>"} for universal quantification (proofs depending on terms), and
-  @{text "\<Longrightarrow>"} for implication (proofs depending on proofs).
+  levels of \<open>\<lambda>\<close>-calculus with corresponding arrows, \<open>\<Rightarrow>\<close> for
+  syntactic function space (terms depending on terms), \<open>\<And>\<close> for
+  universal quantification (proofs depending on terms), and
+  \<open>\<Longrightarrow>\<close> for implication (proofs depending on proofs).
 
   Derivations are relative to a logical theory, which declares type
   constructors, constants, and axioms.  Theory declarations support
   schematic polymorphism, which is strictly speaking outside the
  logic.\footnote{This is the deeper logical reason why the theory
-  context @{text "\<Theta>"} is separate from the proof context @{text "\<Gamma>"}
+  context \<open>\<Theta>\<close> is separate from the proof context \<open>\<Gamma>\<close>
   of the core calculus: type constructors, term constants, and facts
   (proof constants) may involve arbitrary type schemes, but the type
   of a locally fixed term parameter is also fixed!}
@@ -38,31 +35,29 @@
 
   \<^medskip>
   A \<^emph>\<open>type class\<close> is an abstract syntactic entity
-  declared in the theory context.  The \<^emph>\<open>subclass relation\<close> @{text
-  "c\<^sub>1 \<subseteq> c\<^sub>2"} is specified by stating an acyclic
+  declared in the theory context.  The \<^emph>\<open>subclass relation\<close> \<open>c\<^sub>1 \<subseteq> c\<^sub>2\<close> is specified by stating an acyclic
   generating relation; the transitive closure is maintained
   internally.  The resulting relation is an ordering: reflexive,
   transitive, and antisymmetric.
 
-  A \<^emph>\<open>sort\<close> is a list of type classes written as @{text "s = {c\<^sub>1,
-  \<dots>, c\<^sub>m}"}, it represents symbolic intersection.  Notationally, the
+  A \<^emph>\<open>sort\<close> is a list of type classes written as \<open>s = {c\<^sub>1,
+  \<dots>, c\<^sub>m}\<close>; it represents symbolic intersection.  Notationally, the
   curly braces are omitted for singleton intersections, i.e.\ any
-  class @{text "c"} may be read as a sort @{text "{c}"}.  The ordering
+  class \<open>c\<close> may be read as a sort \<open>{c}\<close>.  The ordering
   on type classes is extended to sorts according to the meaning of
-  intersections: @{text "{c\<^sub>1, \<dots> c\<^sub>m} \<subseteq> {d\<^sub>1, \<dots>, d\<^sub>n}"} iff @{text
-  "\<forall>j. \<exists>i. c\<^sub>i \<subseteq> d\<^sub>j"}.  The empty intersection @{text "{}"} refers to
+  intersections: \<open>{c\<^sub>1, \<dots>, c\<^sub>m} \<subseteq> {d\<^sub>1, \<dots>, d\<^sub>n}\<close> iff \<open>\<forall>j. \<exists>i. c\<^sub>i \<subseteq> d\<^sub>j\<close>.  The empty intersection \<open>{}\<close> refers to
   the universal sort, which is the largest element wrt.\ the sort
-  order.  Thus @{text "{}"} represents the ``full sort'', not the
+  order.  Thus \<open>{}\<close> represents the ``full sort'', not the
   empty one!  The intersection of all (finitely many) classes declared
   in the current theory is the least element wrt.\ the sort ordering.
 
   \<^medskip>
   A \<^emph>\<open>fixed type variable\<close> is a pair of a basic name
-  (starting with a @{text "'"} character) and a sort constraint, e.g.\
-  @{text "('a, s)"} which is usually printed as @{text "\<alpha>\<^sub>s"}.
+  (starting with a \<open>'\<close> character) and a sort constraint, e.g.\
+  \<open>('a, s)\<close> which is usually printed as \<open>\<alpha>\<^sub>s\<close>.
   A \<^emph>\<open>schematic type variable\<close> is a pair of an indexname and a
-  sort constraint, e.g.\ @{text "(('a, 0), s)"} which is usually
-  printed as @{text "?\<alpha>\<^sub>s"}.
+  sort constraint, e.g.\ \<open>(('a, 0), s)\<close> which is usually
+  printed as \<open>?\<alpha>\<^sub>s\<close>.
 
   Note that \<^emph>\<open>all\<close> syntactic components contribute to the identity
   of type variables: basic name, index, and sort constraint.  The core
@@ -70,49 +65,47 @@
   as different, although the type-inference layer (which is outside
   the core) rejects anything like that.
 
-  A \<^emph>\<open>type constructor\<close> @{text "\<kappa>"} is a @{text "k"}-ary operator
+  A \<^emph>\<open>type constructor\<close> \<open>\<kappa>\<close> is a \<open>k\<close>-ary operator
   on types declared in the theory.  Type constructor application is
-  written postfix as @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>k)\<kappa>"}.  For
-  @{text "k = 0"} the argument tuple is omitted, e.g.\ @{text "prop"}
-  instead of @{text "()prop"}.  For @{text "k = 1"} the parentheses
-  are omitted, e.g.\ @{text "\<alpha> list"} instead of @{text "(\<alpha>)list"}.
+  written postfix as \<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>k)\<kappa>\<close>.  For
+  \<open>k = 0\<close> the argument tuple is omitted, e.g.\ \<open>prop\<close>
+  instead of \<open>()prop\<close>.  For \<open>k = 1\<close> the parentheses
+  are omitted, e.g.\ \<open>\<alpha> list\<close> instead of \<open>(\<alpha>)list\<close>.
   Further notation is provided for specific constructors, notably the
-  right-associative infix @{text "\<alpha> \<Rightarrow> \<beta>"} instead of @{text "(\<alpha>,
-  \<beta>)fun"}.
+  right-associative infix \<open>\<alpha> \<Rightarrow> \<beta>\<close> instead of \<open>(\<alpha>,
+  \<beta>)fun\<close>.
   
   The logical category \<^emph>\<open>type\<close> is defined inductively over type
-  variables and type constructors as follows: @{text "\<tau> = \<alpha>\<^sub>s | ?\<alpha>\<^sub>s |
-  (\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>k)\<kappa>"}.
+  variables and type constructors as follows: \<open>\<tau> = \<alpha>\<^sub>s | ?\<alpha>\<^sub>s |
+  (\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>k)\<kappa>\<close>.
 
-  A \<^emph>\<open>type abbreviation\<close> is a syntactic definition @{text
-  "(\<^vec>\<alpha>)\<kappa> = \<tau>"} of an arbitrary type expression @{text "\<tau>"} over
-  variables @{text "\<^vec>\<alpha>"}.  Type abbreviations appear as type
+  A \<^emph>\<open>type abbreviation\<close> is a syntactic definition \<open>(\<^vec>\<alpha>)\<kappa> = \<tau>\<close> of an arbitrary type expression \<open>\<tau>\<close> over
+  variables \<open>\<^vec>\<alpha>\<close>.  Type abbreviations appear as type
   constructors in the syntax, but are expanded before entering the
   logical core.
 
   A \<^emph>\<open>type arity\<close> declares the image behavior of a type
-  constructor wrt.\ the algebra of sorts: @{text "\<kappa> :: (s\<^sub>1, \<dots>,
-  s\<^sub>k)s"} means that @{text "(\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>k)\<kappa>"} is
-  of sort @{text "s"} if every argument type @{text "\<tau>\<^sub>i"} is
-  of sort @{text "s\<^sub>i"}.  Arity declarations are implicitly
-  completed, i.e.\ @{text "\<kappa> :: (\<^vec>s)c"} entails @{text "\<kappa> ::
-  (\<^vec>s)c'"} for any @{text "c' \<supseteq> c"}.
+  constructor wrt.\ the algebra of sorts: \<open>\<kappa> :: (s\<^sub>1, \<dots>,
+  s\<^sub>k)s\<close> means that \<open>(\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>k)\<kappa>\<close> is
+  of sort \<open>s\<close> if every argument type \<open>\<tau>\<^sub>i\<close> is
+  of sort \<open>s\<^sub>i\<close>.  Arity declarations are implicitly
+  completed, i.e.\ \<open>\<kappa> :: (\<^vec>s)c\<close> entails \<open>\<kappa> ::
+  (\<^vec>s)c'\<close> for any \<open>c' \<supseteq> c\<close>.
 
   \<^medskip>
   The sort algebra is always maintained as \<^emph>\<open>coregular\<close>,
   which means that type arities are consistent with the subclass
-  relation: for any type constructor @{text "\<kappa>"}, and classes @{text
-  "c\<^sub>1 \<subseteq> c\<^sub>2"}, and arities @{text "\<kappa> ::
-  (\<^vec>s\<^sub>1)c\<^sub>1"} and @{text "\<kappa> ::
-  (\<^vec>s\<^sub>2)c\<^sub>2"} holds @{text "\<^vec>s\<^sub>1 \<subseteq>
-  \<^vec>s\<^sub>2"} component-wise.
+  relation: for any type constructor \<open>\<kappa>\<close>, and classes \<open>c\<^sub>1 \<subseteq> c\<^sub>2\<close>, and arities \<open>\<kappa> ::
+  (\<^vec>s\<^sub>1)c\<^sub>1\<close> and \<open>\<kappa> ::
+  (\<^vec>s\<^sub>2)c\<^sub>2\<close> holds \<open>\<^vec>s\<^sub>1 \<subseteq>
+  \<^vec>s\<^sub>2\<close> component-wise.
 
   The key property of a coregular order-sorted algebra is that sort
   constraints can be solved in a most general fashion: for each type
-  constructor @{text "\<kappa>"} and sort @{text "s"} there is a most general
-  vector of argument sorts @{text "(s\<^sub>1, \<dots>, s\<^sub>k)"} such
-  that a type scheme @{text "(\<alpha>\<^bsub>s\<^sub>1\<^esub>, \<dots>,
-  \<alpha>\<^bsub>s\<^sub>k\<^esub>)\<kappa>"} is of sort @{text "s"}.
+  constructor \<open>\<kappa>\<close> and sort \<open>s\<close> there is a most general
+  vector of argument sorts \<open>(s\<^sub>1, \<dots>, s\<^sub>k)\<close> such
+  that a type scheme \<open>(\<alpha>\<^bsub>s\<^sub>1\<^esub>, \<dots>,
+  \<alpha>\<^bsub>s\<^sub>k\<^esub>)\<kappa>\<close> is of sort \<open>s\<close>.
   Consequently, type unification has most general solutions (modulo
   equivalence of sorts), so type-inference produces primary types as
   expected @{cite "nipkow-prehofer"}.
@@ -145,54 +138,53 @@
   the empty class intersection, i.e.\ the ``full sort''.
 
   \<^descr> Type @{ML_type arity} represents type arities.  A triple
-  @{text "(\<kappa>, \<^vec>s, s) : arity"} represents @{text "\<kappa> ::
-  (\<^vec>s)s"} as described above.
+  \<open>(\<kappa>, \<^vec>s, s) : arity\<close> represents \<open>\<kappa> ::
+  (\<^vec>s)s\<close> as described above.
 
   \<^descr> Type @{ML_type typ} represents types; this is a datatype with
   constructors @{ML TFree}, @{ML TVar}, @{ML Type}.
 
-  \<^descr> @{ML Term.map_atyps}~@{text "f \<tau>"} applies the mapping @{text
-  "f"} to all atomic types (@{ML TFree}, @{ML TVar}) occurring in
-  @{text "\<tau>"}.
+  \<^descr> @{ML Term.map_atyps}~\<open>f \<tau>\<close> applies the mapping \<open>f\<close> to all atomic types (@{ML TFree}, @{ML TVar}) occurring in
+  \<open>\<tau>\<close>.
 
-  \<^descr> @{ML Term.fold_atyps}~@{text "f \<tau>"} iterates the operation
-  @{text "f"} over all occurrences of atomic types (@{ML TFree}, @{ML
-  TVar}) in @{text "\<tau>"}; the type structure is traversed from left to
+  \<^descr> @{ML Term.fold_atyps}~\<open>f \<tau>\<close> iterates the operation
+  \<open>f\<close> over all occurrences of atomic types (@{ML TFree}, @{ML
+  TVar}) in \<open>\<tau>\<close>; the type structure is traversed from left to
   right.
 
-  \<^descr> @{ML Sign.subsort}~@{text "thy (s\<^sub>1, s\<^sub>2)"}
-  tests the subsort relation @{text "s\<^sub>1 \<subseteq> s\<^sub>2"}.
+  \<^descr> @{ML Sign.subsort}~\<open>thy (s\<^sub>1, s\<^sub>2)\<close>
+  tests the subsort relation \<open>s\<^sub>1 \<subseteq> s\<^sub>2\<close>.
 
-  \<^descr> @{ML Sign.of_sort}~@{text "thy (\<tau>, s)"} tests whether type
-  @{text "\<tau>"} is of sort @{text "s"}.
+  \<^descr> @{ML Sign.of_sort}~\<open>thy (\<tau>, s)\<close> tests whether type
+  \<open>\<tau>\<close> is of sort \<open>s\<close>.
 
-  \<^descr> @{ML Sign.add_type}~@{text "ctxt (\<kappa>, k, mx)"} declares a
-  new type constructors @{text "\<kappa>"} with @{text "k"} arguments and
+  \<^descr> @{ML Sign.add_type}~\<open>ctxt (\<kappa>, k, mx)\<close> declares a
+  new type constructor \<open>\<kappa>\<close> with \<open>k\<close> arguments and
   optional mixfix syntax.
 
-  \<^descr> @{ML Sign.add_type_abbrev}~@{text "ctxt (\<kappa>, \<^vec>\<alpha>, \<tau>)"}
-  defines a new type abbreviation @{text "(\<^vec>\<alpha>)\<kappa> = \<tau>"}.
+  \<^descr> @{ML Sign.add_type_abbrev}~\<open>ctxt (\<kappa>, \<^vec>\<alpha>, \<tau>)\<close>
+  defines a new type abbreviation \<open>(\<^vec>\<alpha>)\<kappa> = \<tau>\<close>.
 
-  \<^descr> @{ML Sign.primitive_class}~@{text "(c, [c\<^sub>1, \<dots>,
-  c\<^sub>n])"} declares a new class @{text "c"}, together with class
-  relations @{text "c \<subseteq> c\<^sub>i"}, for @{text "i = 1, \<dots>, n"}.
+  \<^descr> @{ML Sign.primitive_class}~\<open>(c, [c\<^sub>1, \<dots>,
+  c\<^sub>n])\<close> declares a new class \<open>c\<close>, together with class
+  relations \<open>c \<subseteq> c\<^sub>i\<close>, for \<open>i = 1, \<dots>, n\<close>.
 
-  \<^descr> @{ML Sign.primitive_classrel}~@{text "(c\<^sub>1,
-  c\<^sub>2)"} declares the class relation @{text "c\<^sub>1 \<subseteq>
-  c\<^sub>2"}.
+  \<^descr> @{ML Sign.primitive_classrel}~\<open>(c\<^sub>1,
+  c\<^sub>2)\<close> declares the class relation \<open>c\<^sub>1 \<subseteq>
+  c\<^sub>2\<close>.
 
-  \<^descr> @{ML Sign.primitive_arity}~@{text "(\<kappa>, \<^vec>s, s)"} declares
-  the arity @{text "\<kappa> :: (\<^vec>s)s"}.
+  \<^descr> @{ML Sign.primitive_arity}~\<open>(\<kappa>, \<^vec>s, s)\<close> declares
+  the arity \<open>\<kappa> :: (\<^vec>s)s\<close>.
 \<close>
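+
+text \<open>
+  A small illustration of the sort operations --- a sketch, assuming
+  the usual HOL background theory of this manual:
+\<close>
+
+ML \<open>
+  val thy = @{theory};
+  val b1 = Sign.subsort thy (@{sort linorder}, @{sort ord});  (*true*)
+  val b2 = Sign.of_sort thy (@{typ nat}, @{sort linorder});   (*true*)
+\<close>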
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "class"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "sort"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "type_name"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "type_abbrev"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "nonterminal"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "typ"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "class"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "sort"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "type_name"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "type_abbrev"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "nonterminal"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "typ"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -207,23 +199,22 @@
   @@{ML_antiquotation typ} type
   \<close>}
 
-  \<^descr> @{text "@{class c}"} inlines the internalized class @{text
-  "c"} --- as @{ML_type string} literal.
+  \<^descr> \<open>@{class c}\<close> inlines the internalized class \<open>c\<close> --- as @{ML_type string} literal.
 
-  \<^descr> @{text "@{sort s}"} inlines the internalized sort @{text "s"}
+  \<^descr> \<open>@{sort s}\<close> inlines the internalized sort \<open>s\<close>
   --- as @{ML_type "string list"} literal.
 
-  \<^descr> @{text "@{type_name c}"} inlines the internalized type
-  constructor @{text "c"} --- as @{ML_type string} literal.
+  \<^descr> \<open>@{type_name c}\<close> inlines the internalized type
+  constructor \<open>c\<close> --- as @{ML_type string} literal.
 
-  \<^descr> @{text "@{type_abbrev c}"} inlines the internalized type
-  abbreviation @{text "c"} --- as @{ML_type string} literal.
+  \<^descr> \<open>@{type_abbrev c}\<close> inlines the internalized type
+  abbreviation \<open>c\<close> --- as @{ML_type string} literal.
 
-  \<^descr> @{text "@{nonterminal c}"} inlines the internalized syntactic
-  type~/ grammar nonterminal @{text "c"} --- as @{ML_type string}
+  \<^descr> \<open>@{nonterminal c}\<close> inlines the internalized syntactic
+  type~/ grammar nonterminal \<open>c\<close> --- as @{ML_type string}
   literal.
 
-  \<^descr> @{text "@{typ \<tau>}"} inlines the internalized type @{text "\<tau>"}
+  \<^descr> \<open>@{typ \<tau>}\<close> inlines the internalized type \<open>\<tau>\<close>
   --- as constructor term for datatype @{ML_type typ}.
 \<close>
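+
+text \<open>
+  For example (a sketch, assuming HOL), the following values are
+  inlined at compile time of the enclosing ML text:
+\<close>
+
+ML \<open>
+  val s = @{sort linorder};  (*list of internalized class names*)
+  val T = @{typ "'a list"};  (*Type ("List.list", [TFree ("'a", \<dots>)])*)
+\<close>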
 
@@ -231,18 +222,18 @@
 section \<open>Terms \label{sec:terms}\<close>
 
 text \<open>
-  The language of terms is that of simply-typed @{text "\<lambda>"}-calculus
+  The language of terms is that of simply-typed \<open>\<lambda>\<close>-calculus
   with de-Bruijn indices for bound variables (cf.\ @{cite debruijn72}
   or @{cite "paulson-ml2"}), with the types being determined by the
   corresponding binders.  In contrast, free variables and constants
   have an explicit name and type in each occurrence.
 
   \<^medskip>
-  A \<^emph>\<open>bound variable\<close> is a natural number @{text "b"},
+  A \<^emph>\<open>bound variable\<close> is a natural number \<open>b\<close>,
   which accounts for the number of intermediate binders between the
   variable occurrence in the body and its binding position.  For
-  example, the de-Bruijn term @{text "\<lambda>\<^bsub>bool\<^esub>. \<lambda>\<^bsub>bool\<^esub>. 1 \<and> 0"} would
-  correspond to @{text "\<lambda>x\<^bsub>bool\<^esub>. \<lambda>y\<^bsub>bool\<^esub>. x \<and> y"} in a named
+  example, the de-Bruijn term \<open>\<lambda>\<^bsub>bool\<^esub>. \<lambda>\<^bsub>bool\<^esub>. 1 \<and> 0\<close> would
+  correspond to \<open>\<lambda>x\<^bsub>bool\<^esub>. \<lambda>y\<^bsub>bool\<^esub>. x \<and> y\<close> in a named
   representation.  Note that a bound variable may be represented by
   different de-Bruijn indices at different occurrences, depending on
   the nesting of abstractions.
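+
+  In the ML datatype notation of the interface below --- a sketch,
+  using the conjunction constant of HOL, with the bound variable
+  names kept only as comments --- this term reads:
+
+  \<open>Abs ("x", bool, Abs ("y", bool,
+    Const ("HOL.conj", bool \<Rightarrow> bool \<Rightarrow> bool) $ Bound 1 $ Bound 0))\<close>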
@@ -254,31 +245,29 @@
   without any loose variables.
 
   A \<^emph>\<open>fixed variable\<close> is a pair of a basic name and a type, e.g.\
-  @{text "(x, \<tau>)"} which is usually printed @{text "x\<^sub>\<tau>"} here.  A
+  \<open>(x, \<tau>)\<close> which is usually printed \<open>x\<^sub>\<tau>\<close> here.  A
   \<^emph>\<open>schematic variable\<close> is a pair of an indexname and a type,
-  e.g.\ @{text "((x, 0), \<tau>)"} which is likewise printed as @{text
-  "?x\<^sub>\<tau>"}.
+  e.g.\ \<open>((x, 0), \<tau>)\<close> which is likewise printed as \<open>?x\<^sub>\<tau>\<close>.
 
   \<^medskip>
   A \<^emph>\<open>constant\<close> is a pair of a basic name and a type,
-  e.g.\ @{text "(c, \<tau>)"} which is usually printed as @{text "c\<^sub>\<tau>"}
+  e.g.\ \<open>(c, \<tau>)\<close> which is usually printed as \<open>c\<^sub>\<tau>\<close>
   here.  Constants are declared in the context as polymorphic families
-  @{text "c :: \<sigma>"}, meaning that all substitution instances @{text
-  "c\<^sub>\<tau>"} for @{text "\<tau> = \<sigma>\<vartheta>"} are valid.
+  \<open>c :: \<sigma>\<close>, meaning that all substitution instances \<open>c\<^sub>\<tau>\<close> for \<open>\<tau> = \<sigma>\<vartheta>\<close> are valid.
 
-  The vector of \<^emph>\<open>type arguments\<close> of constant @{text "c\<^sub>\<tau>"} wrt.\
-  the declaration @{text "c :: \<sigma>"} is defined as the codomain of the
-  matcher @{text "\<vartheta> = {?\<alpha>\<^sub>1 \<mapsto> \<tau>\<^sub>1, \<dots>, ?\<alpha>\<^sub>n \<mapsto> \<tau>\<^sub>n}"} presented in
-  canonical order @{text "(\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>n)"}, corresponding to the
-  left-to-right occurrences of the @{text "\<alpha>\<^sub>i"} in @{text "\<sigma>"}.
+  The vector of \<^emph>\<open>type arguments\<close> of constant \<open>c\<^sub>\<tau>\<close> wrt.\
+  the declaration \<open>c :: \<sigma>\<close> is defined as the codomain of the
+  matcher \<open>\<vartheta> = {?\<alpha>\<^sub>1 \<mapsto> \<tau>\<^sub>1, \<dots>, ?\<alpha>\<^sub>n \<mapsto> \<tau>\<^sub>n}\<close> presented in
+  canonical order \<open>(\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>n)\<close>, corresponding to the
+  left-to-right occurrences of the \<open>\<alpha>\<^sub>i\<close> in \<open>\<sigma>\<close>.
   Within a given theory context, there is a one-to-one correspondence
-  between any constant @{text "c\<^sub>\<tau>"} and the application @{text "c(\<tau>\<^sub>1,
-  \<dots>, \<tau>\<^sub>n)"} of its type arguments.  For example, with @{text "plus :: \<alpha>
-  \<Rightarrow> \<alpha> \<Rightarrow> \<alpha>"}, the instance @{text "plus\<^bsub>nat \<Rightarrow> nat \<Rightarrow> nat\<^esub>"} corresponds to
-  @{text "plus(nat)"}.
+  between any constant \<open>c\<^sub>\<tau>\<close> and the application \<open>c(\<tau>\<^sub>1,
+  \<dots>, \<tau>\<^sub>n)\<close> of its type arguments.  For example, with \<open>plus :: \<alpha>
+  \<Rightarrow> \<alpha> \<Rightarrow> \<alpha>\<close>, the instance \<open>plus\<^bsub>nat \<Rightarrow> nat \<Rightarrow> nat\<^esub>\<close> corresponds to
+  \<open>plus(nat)\<close>.
 
-  Constant declarations @{text "c :: \<sigma>"} may contain sort constraints
-  for type variables in @{text "\<sigma>"}.  These are observed by
+  Constant declarations \<open>c :: \<sigma>\<close> may contain sort constraints
+  for type variables in \<open>\<sigma>\<close>.  These are observed by
   type-inference as expected, but \<^emph>\<open>ignored\<close> by the core logic.
   This means the primitive logic is able to reason with instances of
   polymorphic constants that the user-level type-checker would reject
@@ -287,21 +276,21 @@
   \<^medskip>
   An \<^emph>\<open>atomic term\<close> is either a variable or constant.
   The logical category \<^emph>\<open>term\<close> is defined inductively over atomic
-  terms, with abstraction and application as follows: @{text "t = b |
-  x\<^sub>\<tau> | ?x\<^sub>\<tau> | c\<^sub>\<tau> | \<lambda>\<^sub>\<tau>. t | t\<^sub>1 t\<^sub>2"}.  Parsing and printing takes care of
+  terms, with abstraction and application as follows: \<open>t = b |
+  x\<^sub>\<tau> | ?x\<^sub>\<tau> | c\<^sub>\<tau> | \<lambda>\<^sub>\<tau>. t | t\<^sub>1 t\<^sub>2\<close>.  Parsing and printing takes care of
  converting to and from an external representation with named bound
  variables.  Subsequently, we shall use the named notation instead
  of the internal de-Bruijn representation.
 
-  The inductive relation @{text "t :: \<tau>"} assigns a (unique) type to a
+  The inductive relation \<open>t :: \<tau>\<close> assigns a (unique) type to a
   term according to the structure of atomic terms, abstractions, and
   applications:
   \[
-  \infer{@{text "a\<^sub>\<tau> :: \<tau>"}}{}
+  \infer{\<open>a\<^sub>\<tau> :: \<tau>\<close>}{}
   \qquad
-  \infer{@{text "(\<lambda>x\<^sub>\<tau>. t) :: \<tau> \<Rightarrow> \<sigma>"}}{@{text "t :: \<sigma>"}}
+  \infer{\<open>(\<lambda>x\<^sub>\<tau>. t) :: \<tau> \<Rightarrow> \<sigma>\<close>}{\<open>t :: \<sigma>\<close>}
   \qquad
-  \infer{@{text "t u :: \<sigma>"}}{@{text "t :: \<tau> \<Rightarrow> \<sigma>"} & @{text "u :: \<tau>"}}
+  \infer{\<open>t u :: \<sigma>\<close>}{\<open>t :: \<tau> \<Rightarrow> \<sigma>\<close> & \<open>u :: \<tau>\<close>}
   \]
   A \<^emph>\<open>well-typed term\<close> is a term that can be typed according to these rules.
 
@@ -312,43 +301,38 @@
   variables, and declarations for polymorphic constants.
 
   The identity of atomic terms consists both of the name and the type
-  component.  This means that different variables @{text
-  "x\<^bsub>\<tau>\<^sub>1\<^esub>"} and @{text "x\<^bsub>\<tau>\<^sub>2\<^esub>"} may become the same after
+  component.  This means that different variables \<open>x\<^bsub>\<tau>\<^sub>1\<^esub>\<close> and \<open>x\<^bsub>\<tau>\<^sub>2\<^esub>\<close> may become the same after
   type instantiation.  Type-inference rejects variables of the same
   name, but different types.  In contrast, mixed instances of
   polymorphic constants occur routinely.
 
   \<^medskip>
-  The \<^emph>\<open>hidden polymorphism\<close> of a term @{text "t :: \<sigma>"}
-  is the set of type variables occurring in @{text "t"}, but not in
-  its type @{text "\<sigma>"}.  This means that the term implicitly depends
+  The \<^emph>\<open>hidden polymorphism\<close> of a term \<open>t :: \<sigma>\<close>
+  is the set of type variables occurring in \<open>t\<close>, but not in
+  its type \<open>\<sigma>\<close>.  This means that the term implicitly depends
   on type arguments that are not accounted in the result type, i.e.\
-  there are different type instances @{text "t\<vartheta> :: \<sigma>"} and
-  @{text "t\<vartheta>' :: \<sigma>"} with the same type.  This slightly
+  there are different type instances \<open>t\<vartheta> :: \<sigma>\<close> and
+  \<open>t\<vartheta>' :: \<sigma>\<close> with the same type.  This slightly
   pathological situation notoriously demands additional care.
 
   \<^medskip>
-  A \<^emph>\<open>term abbreviation\<close> is a syntactic definition @{text
-  "c\<^sub>\<sigma> \<equiv> t"} of a closed term @{text "t"} of type @{text "\<sigma>"},
+  A \<^emph>\<open>term abbreviation\<close> is a syntactic definition \<open>c\<^sub>\<sigma> \<equiv> t\<close> of a closed term \<open>t\<close> of type \<open>\<sigma>\<close>,
   without any hidden polymorphism.  A term abbreviation looks like a
   constant in the syntax, but is expanded before entering the logical
   core.  Abbreviations are usually reverted when printing terms, using
-  @{text "t \<rightarrow> c\<^sub>\<sigma>"} as rules for higher-order rewriting.
+  \<open>t \<rightarrow> c\<^sub>\<sigma>\<close> as rules for higher-order rewriting.
 
   \<^medskip>
-  Canonical operations on @{text "\<lambda>"}-terms include @{text
-  "\<alpha>\<beta>\<eta>"}-conversion: @{text "\<alpha>"}-conversion refers to capture-free
-  renaming of bound variables; @{text "\<beta>"}-conversion contracts an
+  Canonical operations on \<open>\<lambda>\<close>-terms include \<open>\<alpha>\<beta>\<eta>\<close>-conversion: \<open>\<alpha>\<close>-conversion refers to capture-free
+  renaming of bound variables; \<open>\<beta>\<close>-conversion contracts an
   abstraction applied to an argument term, substituting the argument
-  in the body: @{text "(\<lambda>x. b)a"} becomes @{text "b[a/x]"}; @{text
-  "\<eta>"}-conversion contracts vacuous application-abstraction: @{text
-  "\<lambda>x. f x"} becomes @{text "f"}, provided that the bound variable
-  does not occur in @{text "f"}.
+  in the body: \<open>(\<lambda>x. b)a\<close> becomes \<open>b[a/x]\<close>; \<open>\<eta>\<close>-conversion contracts vacuous application-abstraction: \<open>\<lambda>x. f x\<close> becomes \<open>f\<close>, provided that the bound variable
+  does not occur in \<open>f\<close>.
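+
+  For example (a sketch): \<open>(\<lambda>x. f x x) a\<close> \<open>\<beta>\<close>-contracts to \<open>f a a\<close>,
+  and \<open>\<lambda>x. g x\<close> \<open>\<eta>\<close>-contracts to \<open>g\<close>; in contrast, \<open>\<lambda>x. g x x\<close>
+  admits no \<open>\<eta>\<close>-contraction, because the bound variable also occurs
+  in argument position.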
 
-  Terms are normally treated modulo @{text "\<alpha>"}-conversion, which is
+  Terms are normally treated modulo \<open>\<alpha>\<close>-conversion, which is
   implicit in the de-Bruijn representation.  Names for bound variables
   in abstractions are maintained separately as (meaningless) comments,
-  mostly for parsing and printing.  Full @{text "\<alpha>\<beta>\<eta>"}-conversion is
+  mostly for parsing and printing.  Full \<open>\<alpha>\<beta>\<eta>\<close>-conversion is
   commonplace in various standard operations (\secref{sec:obj-rules})
   that are based on higher-order unification and matching.
 \<close>
@@ -381,64 +365,59 @@
   Free}, @{index_ML Var}, @{index_ML Const}, @{index_ML Abs},
   @{index_ML_op "$"}.
 
-  \<^descr> @{text "t"}~@{ML_text aconv}~@{text "u"} checks @{text
-  "\<alpha>"}-equivalence of two terms.  This is the basic equality relation
+  \<^descr> \<open>t\<close>~@{ML_text aconv}~\<open>u\<close> checks \<open>\<alpha>\<close>-equivalence of two terms.  This is the basic equality relation
   on type @{ML_type term}; raw datatype equality should only be used
   for operations related to parsing or printing!
 
-  \<^descr> @{ML Term.map_types}~@{text "f t"} applies the mapping @{text
-  "f"} to all types occurring in @{text "t"}.
+  \<^descr> @{ML Term.map_types}~\<open>f t\<close> applies the mapping \<open>f\<close> to all types occurring in \<open>t\<close>.
 
-  \<^descr> @{ML Term.fold_types}~@{text "f t"} iterates the operation
-  @{text "f"} over all occurrences of types in @{text "t"}; the term
+  \<^descr> @{ML Term.fold_types}~\<open>f t\<close> iterates the operation
+  \<open>f\<close> over all occurrences of types in \<open>t\<close>; the term
   structure is traversed from left to right.
 
-  \<^descr> @{ML Term.map_aterms}~@{text "f t"} applies the mapping @{text
-  "f"} to all atomic terms (@{ML Bound}, @{ML Free}, @{ML Var}, @{ML
-  Const}) occurring in @{text "t"}.
+  \<^descr> @{ML Term.map_aterms}~\<open>f t\<close> applies the mapping \<open>f\<close> to all atomic terms (@{ML Bound}, @{ML Free}, @{ML Var}, @{ML
+  Const}) occurring in \<open>t\<close>.
 
-  \<^descr> @{ML Term.fold_aterms}~@{text "f t"} iterates the operation
-  @{text "f"} over all occurrences of atomic terms (@{ML Bound}, @{ML
-  Free}, @{ML Var}, @{ML Const}) in @{text "t"}; the term structure is
+  \<^descr> @{ML Term.fold_aterms}~\<open>f t\<close> iterates the operation
+  \<open>f\<close> over all occurrences of atomic terms (@{ML Bound}, @{ML
+  Free}, @{ML Var}, @{ML Const}) in \<open>t\<close>; the term structure is
   traversed from left to right.
 
-  \<^descr> @{ML fastype_of}~@{text "t"} determines the type of a
+  \<^descr> @{ML fastype_of}~\<open>t\<close> determines the type of a
   well-typed term.  This operation is relatively slow, despite the
   omission of any sanity checks.
 
-  \<^descr> @{ML lambda}~@{text "a b"} produces an abstraction @{text
-  "\<lambda>a. b"}, where occurrences of the atomic term @{text "a"} in the
-  body @{text "b"} are replaced by bound variables.
+  \<^descr> @{ML lambda}~\<open>a b\<close> produces an abstraction \<open>\<lambda>a. b\<close>, where occurrences of the atomic term \<open>a\<close> in the
+  body \<open>b\<close> are replaced by bound variables.
 
-  \<^descr> @{ML betapply}~@{text "(t, u)"} produces an application @{text
-  "t u"}, with topmost @{text "\<beta>"}-conversion if @{text "t"} is an
+  \<^descr> @{ML betapply}~\<open>(t, u)\<close> produces an application \<open>t u\<close>, with topmost \<open>\<beta>\<close>-conversion if \<open>t\<close> is an
   abstraction.
 
-  \<^descr> @{ML incr_boundvars}~@{text "j"} increments a term's dangling
-  bound variables by the offset @{text "j"}.  This is required when
+  \<^descr> @{ML incr_boundvars}~\<open>j\<close> increments a term's dangling
+  bound variables by the offset \<open>j\<close>.  This is required when
   moving a subterm into a context where it is enclosed by a different
   number of abstractions.  Bound variables with a matching abstraction
   are unaffected.
 
-  \<^descr> @{ML Sign.declare_const}~@{text "ctxt ((c, \<sigma>), mx)"} declares
-  a new constant @{text "c :: \<sigma>"} with optional mixfix syntax.
+  \<^descr> @{ML Sign.declare_const}~\<open>ctxt ((c, \<sigma>), mx)\<close> declares
+  a new constant \<open>c :: \<sigma>\<close> with optional mixfix syntax.
 
-  \<^descr> @{ML Sign.add_abbrev}~@{text "print_mode (c, t)"}
-  introduces a new term abbreviation @{text "c \<equiv> t"}.
+  \<^descr> @{ML Sign.add_abbrev}~\<open>print_mode (c, t)\<close>
+  introduces a new term abbreviation \<open>c \<equiv> t\<close>.
 
-  \<^descr> @{ML Sign.const_typargs}~@{text "thy (c, \<tau>)"} and @{ML
-  Sign.const_instance}~@{text "thy (c, [\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>n])"}
+  \<^descr> @{ML Sign.const_typargs}~\<open>thy (c, \<tau>)\<close> and @{ML
+  Sign.const_instance}~\<open>thy (c, [\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>n])\<close>
   convert between two representations of polymorphic constants: full
   type instance vs.\ compact type arguments form.
 \<close>
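+
+text \<open>
+  A small sketch of these operations (assuming HOL):
+\<close>
+
+ML \<open>
+  val eq = @{term "\<lambda>x::nat. x"} aconv @{term "\<lambda>y::nat. y"};  (*true*)
+  val t = betapply (@{term "\<lambda>n::nat. n + n"}, @{term "1::nat"});
+  val x = Free ("x", @{typ nat});
+  val a = lambda x (HOLogic.mk_eq (x, x));  (*\<lambda>x. x = x*)
+\<close>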
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "const_name"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "const_abbrev"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "const"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "term"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "prop"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "const_name"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "const_abbrev"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "const"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "term"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "prop"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -452,45 +431,44 @@
   @@{ML_antiquotation prop} prop
   \<close>}
 
-  \<^descr> @{text "@{const_name c}"} inlines the internalized logical
-  constant name @{text "c"} --- as @{ML_type string} literal.
+  \<^descr> \<open>@{const_name c}\<close> inlines the internalized logical
+  constant name \<open>c\<close> --- as @{ML_type string} literal.
 
-  \<^descr> @{text "@{const_abbrev c}"} inlines the internalized
-  abbreviated constant name @{text "c"} --- as @{ML_type string}
+  \<^descr> \<open>@{const_abbrev c}\<close> inlines the internalized
+  abbreviated constant name \<open>c\<close> --- as @{ML_type string}
   literal.
 
-  \<^descr> @{text "@{const c(\<^vec>\<tau>)}"} inlines the internalized
-  constant @{text "c"} with precise type instantiation in the sense of
+  \<^descr> \<open>@{const c(\<^vec>\<tau>)}\<close> inlines the internalized
+  constant \<open>c\<close> with precise type instantiation in the sense of
   @{ML Sign.const_instance} --- as @{ML Const} constructor term for
   datatype @{ML_type term}.
 
-  \<^descr> @{text "@{term t}"} inlines the internalized term @{text "t"}
+  \<^descr> \<open>@{term t}\<close> inlines the internalized term \<open>t\<close>
   --- as constructor term for datatype @{ML_type term}.
 
-  \<^descr> @{text "@{prop \<phi>}"} inlines the internalized proposition
-  @{text "\<phi>"} --- as constructor term for datatype @{ML_type term}.
+  \<^descr> \<open>@{prop \<phi>}\<close> inlines the internalized proposition
+  \<open>\<phi>\<close> --- as constructor term for datatype @{ML_type term}.
 \<close>
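+
+text \<open>
+  For example (a sketch, assuming HOL):
+\<close>
+
+ML \<open>
+  val t = @{term "x + y::nat"};      (*with free variables x, y*)
+  val phi = @{prop "x = (x::nat)"};  (*of type prop, via HOL.Trueprop*)
+\<close>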
 
 
 section \<open>Theorems \label{sec:thms}\<close>
 
 text \<open>
-  A \<^emph>\<open>proposition\<close> is a well-typed term of type @{text "prop"}, a
+  A \<^emph>\<open>proposition\<close> is a well-typed term of type \<open>prop\<close>, a
   \<^emph>\<open>theorem\<close> is a proven proposition (depending on a context of
   hypotheses and the background theory).  Primitive inferences include
-  plain Natural Deduction rules for the primary connectives @{text
-  "\<And>"} and @{text "\<Longrightarrow>"} of the framework.  There is also a builtin
-  notion of equality/equivalence @{text "\<equiv>"}.
+  plain Natural Deduction rules for the primary connectives \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> of the framework.  There is also a builtin
+  notion of equality/equivalence \<open>\<equiv>\<close>.
 \<close>
 
 
 subsection \<open>Primitive connectives and rules \label{sec:prim-rules}\<close>
 
 text \<open>
-  The theory @{text "Pure"} contains constant declarations for the
-  primitive connectives @{text "\<And>"}, @{text "\<Longrightarrow>"}, and @{text "\<equiv>"} of
+  The theory \<open>Pure\<close> contains constant declarations for the
+  primitive connectives \<open>\<And>\<close>, \<open>\<Longrightarrow>\<close>, and \<open>\<equiv>\<close> of
   the logical framework, see \figref{fig:pure-connectives}.  The
-  derivability judgment @{text "A\<^sub>1, \<dots>, A\<^sub>n \<turnstile> B"} is
+  derivability judgment \<open>A\<^sub>1, \<dots>, A\<^sub>n \<turnstile> B\<close> is
   defined inductively by the primitive inferences given in
   \figref{fig:prim-rules}, with the global restriction that the
   hypotheses must \<^emph>\<open>not\<close> contain any schematic variables.  The
@@ -501,9 +479,9 @@
   \begin{figure}[htb]
   \begin{center}
   \begin{tabular}{ll}
-  @{text "all :: (\<alpha> \<Rightarrow> prop) \<Rightarrow> prop"} & universal quantification (binder @{text "\<And>"}) \\
-  @{text "\<Longrightarrow> :: prop \<Rightarrow> prop \<Rightarrow> prop"} & implication (right associative infix) \\
-  @{text "\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop"} & equality relation (infix) \\
+  \<open>all :: (\<alpha> \<Rightarrow> prop) \<Rightarrow> prop\<close> & universal quantification (binder \<open>\<And>\<close>) \\
+  \<open>\<Longrightarrow> :: prop \<Rightarrow> prop \<Rightarrow> prop\<close> & implication (right associative infix) \\
+  \<open>\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop\<close> & equality relation (infix) \\
   \end{tabular}
   \caption{Primitive connectives of Pure}\label{fig:pure-connectives}
   \end{center}
@@ -512,19 +490,19 @@
   \begin{figure}[htb]
   \begin{center}
   \[
-  \infer[@{text "(axiom)"}]{@{text "\<turnstile> A"}}{@{text "A \<in> \<Theta>"}}
+  \infer[\<open>(axiom)\<close>]{\<open>\<turnstile> A\<close>}{\<open>A \<in> \<Theta>\<close>}
   \qquad
-  \infer[@{text "(assume)"}]{@{text "A \<turnstile> A"}}{}
+  \infer[\<open>(assume)\<close>]{\<open>A \<turnstile> A\<close>}{}
   \]
   \[
-  \infer[@{text "(\<And>\<hyphen>intro)"}]{@{text "\<Gamma> \<turnstile> \<And>x. B[x]"}}{@{text "\<Gamma> \<turnstile> B[x]"} & @{text "x \<notin> \<Gamma>"}}
+  \infer[\<open>(\<And>\<hyphen>intro)\<close>]{\<open>\<Gamma> \<turnstile> \<And>x. B[x]\<close>}{\<open>\<Gamma> \<turnstile> B[x]\<close> & \<open>x \<notin> \<Gamma>\<close>}
   \qquad
-  \infer[@{text "(\<And>\<hyphen>elim)"}]{@{text "\<Gamma> \<turnstile> B[a]"}}{@{text "\<Gamma> \<turnstile> \<And>x. B[x]"}}
+  \infer[\<open>(\<And>\<hyphen>elim)\<close>]{\<open>\<Gamma> \<turnstile> B[a]\<close>}{\<open>\<Gamma> \<turnstile> \<And>x. B[x]\<close>}
   \]
   \[
-  \infer[@{text "(\<Longrightarrow>\<hyphen>intro)"}]{@{text "\<Gamma> - A \<turnstile> A \<Longrightarrow> B"}}{@{text "\<Gamma> \<turnstile> B"}}
+  \infer[\<open>(\<Longrightarrow>\<hyphen>intro)\<close>]{\<open>\<Gamma> - A \<turnstile> A \<Longrightarrow> B\<close>}{\<open>\<Gamma> \<turnstile> B\<close>}
   \qquad
-  \infer[@{text "(\<Longrightarrow>\<hyphen>elim)"}]{@{text "\<Gamma>\<^sub>1 \<union> \<Gamma>\<^sub>2 \<turnstile> B"}}{@{text "\<Gamma>\<^sub>1 \<turnstile> A \<Longrightarrow> B"} & @{text "\<Gamma>\<^sub>2 \<turnstile> A"}}
+  \infer[\<open>(\<Longrightarrow>\<hyphen>elim)\<close>]{\<open>\<Gamma>\<^sub>1 \<union> \<Gamma>\<^sub>2 \<turnstile> B\<close>}{\<open>\<Gamma>\<^sub>1 \<turnstile> A \<Longrightarrow> B\<close> & \<open>\<Gamma>\<^sub>2 \<turnstile> A\<close>}
   \]
   \caption{Primitive inferences of Pure}\label{fig:prim-rules}
   \end{center}
@@ -533,72 +511,66 @@
   \begin{figure}[htb]
   \begin{center}
   \begin{tabular}{ll}
-  @{text "\<turnstile> (\<lambda>x. b[x]) a \<equiv> b[a]"} & @{text "\<beta>"}-conversion \\
-  @{text "\<turnstile> x \<equiv> x"} & reflexivity \\
-  @{text "\<turnstile> x \<equiv> y \<Longrightarrow> P x \<Longrightarrow> P y"} & substitution \\
-  @{text "\<turnstile> (\<And>x. f x \<equiv> g x) \<Longrightarrow> f \<equiv> g"} & extensionality \\
-  @{text "\<turnstile> (A \<Longrightarrow> B) \<Longrightarrow> (B \<Longrightarrow> A) \<Longrightarrow> A \<equiv> B"} & logical equivalence \\
+  \<open>\<turnstile> (\<lambda>x. b[x]) a \<equiv> b[a]\<close> & \<open>\<beta>\<close>-conversion \\
+  \<open>\<turnstile> x \<equiv> x\<close> & reflexivity \\
+  \<open>\<turnstile> x \<equiv> y \<Longrightarrow> P x \<Longrightarrow> P y\<close> & substitution \\
+  \<open>\<turnstile> (\<And>x. f x \<equiv> g x) \<Longrightarrow> f \<equiv> g\<close> & extensionality \\
+  \<open>\<turnstile> (A \<Longrightarrow> B) \<Longrightarrow> (B \<Longrightarrow> A) \<Longrightarrow> A \<equiv> B\<close> & logical equivalence \\
   \end{tabular}
   \caption{Conceptual axiomatization of Pure equality}\label{fig:pure-equality}
   \end{center}
   \end{figure}
 
-  The introduction and elimination rules for @{text "\<And>"} and @{text
-  "\<Longrightarrow>"} are analogous to formation of dependently typed @{text
-  "\<lambda>"}-terms representing the underlying proof objects.  Proof terms
+  The introduction and elimination rules for \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> are analogous to formation of dependently typed \<open>\<lambda>\<close>-terms representing the underlying proof objects.  Proof terms
   are irrelevant in the Pure logic, though; they cannot occur within
   propositions.  The system provides a runtime option to record
   explicit proof terms for primitive inferences, see also
-  \secref{sec:proof-terms}.  Thus all three levels of @{text
-  "\<lambda>"}-calculus become explicit: @{text "\<Rightarrow>"} for terms, and @{text
-  "\<And>/\<Longrightarrow>"} for proofs (cf.\ @{cite "Berghofer-Nipkow:2000:TPHOL"}).
+  \secref{sec:proof-terms}.  Thus all three levels of \<open>\<lambda>\<close>-calculus become explicit: \<open>\<Rightarrow>\<close> for terms, and \<open>\<And>/\<Longrightarrow>\<close> for proofs (cf.\ @{cite "Berghofer-Nipkow:2000:TPHOL"}).
 
-  Observe that locally fixed parameters (as in @{text
-  "\<And>\<hyphen>intro"}) need not be recorded in the hypotheses, because
+  Observe that locally fixed parameters (as in \<open>\<And>\<hyphen>intro\<close>) need not be recorded in the hypotheses, because
   the simple syntactic types of Pure are always inhabitable.
-  ``Assumptions'' @{text "x :: \<tau>"} for type-membership are only
-  present as long as some @{text "x\<^sub>\<tau>"} occurs in the statement
-  body.\footnote{This is the key difference to ``@{text "\<lambda>HOL"}'' in
+  ``Assumptions'' \<open>x :: \<tau>\<close> for type-membership are only
+  present as long as some \<open>x\<^sub>\<tau>\<close> occurs in the statement
+  body.\footnote{This is the key difference to ``\<open>\<lambda>HOL\<close>'' in
   the PTS framework @{cite "Barendregt-Geuvers:2001"}, where hypotheses
-  @{text "x : A"} are treated uniformly for propositions and types.}
+  \<open>x : A\<close> are treated uniformly for propositions and types.}
 
   \<^medskip>
   The axiomatization of a theory is implicitly closed by
-  forming all instances of type and term variables: @{text "\<turnstile>
-  A\<vartheta>"} holds for any substitution instance of an axiom
-  @{text "\<turnstile> A"}.  By pushing substitutions through derivations
-  inductively, we also get admissible @{text "generalize"} and @{text
-  "instantiate"} rules as shown in \figref{fig:subst-rules}.
+  forming all instances of type and term variables: \<open>\<turnstile>
+  A\<vartheta>\<close> holds for any substitution instance of an axiom
+  \<open>\<turnstile> A\<close>.  By pushing substitutions through derivations
+  inductively, we also get admissible \<open>generalize\<close> and \<open>instantiate\<close> rules as shown in \figref{fig:subst-rules}.
 
   \begin{figure}[htb]
   \begin{center}
   \[
-  \infer{@{text "\<Gamma> \<turnstile> B[?\<alpha>]"}}{@{text "\<Gamma> \<turnstile> B[\<alpha>]"} & @{text "\<alpha> \<notin> \<Gamma>"}}
+  \infer{\<open>\<Gamma> \<turnstile> B[?\<alpha>]\<close>}{\<open>\<Gamma> \<turnstile> B[\<alpha>]\<close> & \<open>\<alpha> \<notin> \<Gamma>\<close>}
   \quad
-  \infer[\quad@{text "(generalize)"}]{@{text "\<Gamma> \<turnstile> B[?x]"}}{@{text "\<Gamma> \<turnstile> B[x]"} & @{text "x \<notin> \<Gamma>"}}
+  \infer[\quad\<open>(generalize)\<close>]{\<open>\<Gamma> \<turnstile> B[?x]\<close>}{\<open>\<Gamma> \<turnstile> B[x]\<close> & \<open>x \<notin> \<Gamma>\<close>}
   \]
   \[
-  \infer{@{text "\<Gamma> \<turnstile> B[\<tau>]"}}{@{text "\<Gamma> \<turnstile> B[?\<alpha>]"}}
+  \infer{\<open>\<Gamma> \<turnstile> B[\<tau>]\<close>}{\<open>\<Gamma> \<turnstile> B[?\<alpha>]\<close>}
   \quad
-  \infer[\quad@{text "(instantiate)"}]{@{text "\<Gamma> \<turnstile> B[t]"}}{@{text "\<Gamma> \<turnstile> B[?x]"}}
+  \infer[\quad\<open>(instantiate)\<close>]{\<open>\<Gamma> \<turnstile> B[t]\<close>}{\<open>\<Gamma> \<turnstile> B[?x]\<close>}
   \]
   \caption{Admissible substitution rules}\label{fig:subst-rules}
   \end{center}
   \end{figure}
 
-  Note that @{text "instantiate"} does not require an explicit
-  side-condition, because @{text "\<Gamma>"} may never contain schematic
+  Note that \<open>instantiate\<close> does not require an explicit
+  side-condition, because \<open>\<Gamma>\<close> may never contain schematic
   variables.
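+
+  For example (a sketch): from \<open>\<turnstile> x \<equiv> x\<close> the \<open>generalize\<close> rule yields
+  \<open>\<turnstile> ?x \<equiv> ?x\<close>, and a subsequent \<open>instantiate\<close> step gives \<open>\<turnstile> t \<equiv> t\<close>
+  for an arbitrary well-typed term \<open>t\<close>.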
 
   In principle, variables could be substituted in hypotheses as well,
   but this would disrupt the monotonicity of reasoning: deriving
-  @{text "\<Gamma>\<vartheta> \<turnstile> B\<vartheta>"} from @{text "\<Gamma> \<turnstile> B"} is
-  correct, but @{text "\<Gamma>\<vartheta> \<supseteq> \<Gamma>"} does not necessarily hold:
+  \<open>\<Gamma>\<vartheta> \<turnstile> B\<vartheta>\<close> from \<open>\<Gamma> \<turnstile> B\<close> is
+  correct, but \<open>\<Gamma>\<vartheta> \<supseteq> \<Gamma>\<close> does not necessarily hold:
   the result belongs to a different proof context.
 
   \<^medskip>
   An \<^emph>\<open>oracle\<close> is a function that produces axioms on the
-  fly.  Logically, this is an instance of the @{text "axiom"} rule
+  fly.  Logically, this is an instance of the \<open>axiom\<close> rule
   (\figref{fig:prim-rules}), but there is an operational difference.
   The system always records oracle invocations within derivations of
   theorems by a unique tag.
@@ -608,20 +580,16 @@
   Later on, theories are usually developed in a strictly definitional
   fashion, by stating only certain equalities over new constants.
 
-  A \<^emph>\<open>simple definition\<close> consists of a constant declaration @{text
-  "c :: \<sigma>"} together with an axiom @{text "\<turnstile> c \<equiv> t"}, where @{text "t
-  :: \<sigma>"} is a closed term without any hidden polymorphism.  The RHS
-  may depend on further defined constants, but not @{text "c"} itself.
-  Definitions of functions may be presented as @{text "c \<^vec>x \<equiv>
-  t"} instead of the puristic @{text "c \<equiv> \<lambda>\<^vec>x. t"}.
+  A \<^emph>\<open>simple definition\<close> consists of a constant declaration \<open>c :: \<sigma>\<close> together with an axiom \<open>\<turnstile> c \<equiv> t\<close>, where \<open>t
+  :: \<sigma>\<close> is a closed term without any hidden polymorphism.  The RHS
+  may depend on further defined constants, but not \<open>c\<close> itself.
+  Definitions of functions may be presented as \<open>c \<^vec>x \<equiv>
+  t\<close> instead of the puristic \<open>c \<equiv> \<lambda>\<^vec>x. t\<close>.
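+
+  For example (a sketch): \<open>double :: nat \<Rightarrow> nat\<close> with defining axiom
+  \<open>\<turnstile> double x \<equiv> x + x\<close>, which stands for
+  \<open>\<turnstile> double \<equiv> \<lambda>x. x + x\<close>.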
 
   An \<^emph>\<open>overloaded definition\<close> consists of a collection of axioms
-  for the same constant, with zero or one equations @{text
-  "c((\<^vec>\<alpha>)\<kappa>) \<equiv> t"} for each type constructor @{text "\<kappa>"} (for
-  distinct variables @{text "\<^vec>\<alpha>"}).  The RHS may mention
-  previously defined constants as above, or arbitrary constants @{text
-  "d(\<alpha>\<^sub>i)"} for some @{text "\<alpha>\<^sub>i"} projected from @{text
-  "\<^vec>\<alpha>"}.  Thus overloaded definitions essentially work by
+  for the same constant, with zero or one equations \<open>c((\<^vec>\<alpha>)\<kappa>) \<equiv> t\<close> for each type constructor \<open>\<kappa>\<close> (for
+  distinct variables \<open>\<^vec>\<alpha>\<close>).  The RHS may mention
+  previously defined constants as above, or arbitrary constants \<open>d(\<alpha>\<^sub>i)\<close> for some \<open>\<alpha>\<^sub>i\<close> projected from \<open>\<^vec>\<alpha>\<close>.  Thus overloaded definitions essentially work by
   primitive recursion over the syntactic structure of a single type
   argument.  See also @{cite \<open>\S4.3\<close> "Haftmann-Wenzel:2006:classes"}.
 \<close>
@@ -665,7 +633,7 @@
   Defs.entry -> Defs.entry list -> theory -> theory"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Thm.peek_status}~@{text "thm"} informs about the current
+  \<^descr> @{ML Thm.peek_status}~\<open>thm\<close> informs about the current
   status of the derivation object behind the given theorem.  This is a
   snapshot of a potentially ongoing (parallel) evaluation of proofs.
   The three Boolean values indicate the following: @{verbatim oracle}
@@ -674,13 +642,13 @@
   failed} if some future proof has failed, rendering the theorem
   invalid!
 
-  \<^descr> @{ML Logic.all}~@{text "a B"} produces a Pure quantification
-  @{text "\<And>a. B"}, where occurrences of the atomic term @{text "a"} in
-  the body proposition @{text "B"} are replaced by bound variables.
+  \<^descr> @{ML Logic.all}~\<open>a B\<close> produces a Pure quantification
+  \<open>\<And>a. B\<close>, where occurrences of the atomic term \<open>a\<close> in
+  the body proposition \<open>B\<close> are replaced by bound variables.
   (See also @{ML lambda} on terms.)
 
-  \<^descr> @{ML Logic.mk_implies}~@{text "(A, B)"} produces a Pure
-  implication @{text "A \<Longrightarrow> B"}.
+  \<^descr> @{ML Logic.mk_implies}~\<open>(A, B)\<close> produces a Pure
+  implication \<open>A \<Longrightarrow> B\<close>.
 
   \<^descr> Types @{ML_type ctyp} and @{ML_type cterm} represent certified
   types and terms, respectively.  These are abstract datatypes that
@@ -693,8 +661,8 @@
   are located in the @{ML_structure Thm} module, even though theorems are
   not yet involved at that stage.
 
-  \<^descr> @{ML Thm.ctyp_of}~@{text "ctxt \<tau>"} and @{ML
-  Thm.cterm_of}~@{text "ctxt t"} explicitly check types and terms,
+  \<^descr> @{ML Thm.ctyp_of}~\<open>ctxt \<tau>\<close> and @{ML
+  Thm.cterm_of}~\<open>ctxt t\<close> explicitly check types and terms,
  respectively.  This also involves some basic normalizations, such as
   expansion of type and term abbreviations from the underlying
   theory context.
@@ -716,7 +684,7 @@
  Every @{ML_type thm} value refers to its background theory,
   cf.\ \secref{sec:context-theory}.
 
-  \<^descr> @{ML Thm.transfer}~@{text "thy thm"} transfers the given
+  \<^descr> @{ML Thm.transfer}~\<open>thy thm\<close> transfers the given
   theorem to a \<^emph>\<open>larger\<close> theory, see also \secref{sec:context}.
   This formal adjustment of the background context has no logical
   significance, but is occasionally required for formal reasons, e.g.\
@@ -727,50 +695,48 @@
   Thm.forall_elim}, @{ML Thm.implies_intr}, and @{ML Thm.implies_elim}
   correspond to the primitive inferences of \figref{fig:prim-rules}.
 
-  \<^descr> @{ML Thm.generalize}~@{text "(\<^vec>\<alpha>, \<^vec>x)"}
-  corresponds to the @{text "generalize"} rules of
+  \<^descr> @{ML Thm.generalize}~\<open>(\<^vec>\<alpha>, \<^vec>x)\<close>
+  corresponds to the \<open>generalize\<close> rules of
   \figref{fig:subst-rules}.  Here collections of type and term
   variables are generalized simultaneously, specified by the given
   basic names.
 
-  \<^descr> @{ML Thm.instantiate}~@{text "(\<^vec>\<alpha>\<^sub>s,
-  \<^vec>x\<^sub>\<tau>)"} corresponds to the @{text "instantiate"} rules
+  \<^descr> @{ML Thm.instantiate}~\<open>(\<^vec>\<alpha>\<^sub>s,
+  \<^vec>x\<^sub>\<tau>)\<close> corresponds to the \<open>instantiate\<close> rules
   of \figref{fig:subst-rules}.  Type variables are substituted before
-  term variables.  Note that the types in @{text "\<^vec>x\<^sub>\<tau>"}
+  term variables.  Note that the types in \<open>\<^vec>x\<^sub>\<tau>\<close>
   refer to the instantiated versions.
 
-  \<^descr> @{ML Thm.add_axiom}~@{text "ctxt (name, A)"} declares an
+  \<^descr> @{ML Thm.add_axiom}~\<open>ctxt (name, A)\<close> declares an
   arbitrary proposition as axiom, and retrieves it as a theorem from
-  the resulting theory, cf.\ @{text "axiom"} in
+  the resulting theory, cf.\ \<open>axiom\<close> in
   \figref{fig:prim-rules}.  Note that the low-level representation in
   the axiom table may differ slightly from the returned theorem.
 
-  \<^descr> @{ML Thm.add_oracle}~@{text "(binding, oracle)"} produces a named
+  \<^descr> @{ML Thm.add_oracle}~\<open>(binding, oracle)\<close> produces a named
   oracle rule, essentially generating arbitrary axioms on the fly,
-  cf.\ @{text "axiom"} in \figref{fig:prim-rules}.
+  cf.\ \<open>axiom\<close> in \figref{fig:prim-rules}.
 
-  \<^descr> @{ML Thm.add_def}~@{text "ctxt unchecked overloaded (name, c
-  \<^vec>x \<equiv> t)"} states a definitional axiom for an existing constant
-  @{text "c"}.  Dependencies are recorded via @{ML Theory.add_deps},
-  unless the @{text "unchecked"} option is set.  Note that the
+  \<^descr> @{ML Thm.add_def}~\<open>ctxt unchecked overloaded (name, c
+  \<^vec>x \<equiv> t)\<close> states a definitional axiom for an existing constant
+  \<open>c\<close>.  Dependencies are recorded via @{ML Theory.add_deps},
+  unless the \<open>unchecked\<close> option is set.  Note that the
   low-level representation in the axiom table may differ slightly from
   the returned theorem.
 
-  \<^descr> @{ML Theory.add_deps}~@{text "ctxt name c\<^sub>\<tau> \<^vec>d\<^sub>\<sigma>"}
-  declares dependencies of a named specification for constant @{text
-  "c\<^sub>\<tau>"}, relative to existing specifications for constants @{text
-  "\<^vec>d\<^sub>\<sigma>"}.  This also works for type constructors.
+  \<^descr> @{ML Theory.add_deps}~\<open>ctxt name c\<^sub>\<tau> \<^vec>d\<^sub>\<sigma>\<close>
+  declares dependencies of a named specification for constant \<open>c\<^sub>\<tau>\<close>,
+  relative to existing specifications for constants \<open>\<^vec>d\<^sub>\<sigma>\<close>.
+  This also works for type constructors.
 \<close>
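+
+text \<open>
+  For illustration, here is a minimal sketch in ML: certify a
+  proposition, assume it, and discharge the assumption again, which
+  derives the trivial theorem \<open>A \<Longrightarrow> A\<close> from the primitive rules.
+\<close>
+
+ML \<open>
+  val A = @{cprop "PROP A"};                      (*certified proposition*)
+  val triv = Thm.implies_intr A (Thm.assume A);   (*\<turnstile> PROP A \<Longrightarrow> PROP A*)
+\<close>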
 
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "ctyp"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "cterm"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "cprop"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "thm"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "thms"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "lemma"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "ctyp"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "cterm"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "cprop"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "thm"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "thms"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "lemma"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -788,28 +754,28 @@
     @'by' method method?
   \<close>}
 
-  \<^descr> @{text "@{ctyp \<tau>}"} produces a certified type wrt.\ the
+  \<^descr> \<open>@{ctyp \<tau>}\<close> produces a certified type wrt.\ the
   current background theory --- as abstract value of type @{ML_type
   ctyp}.
 
-  \<^descr> @{text "@{cterm t}"} and @{text "@{cprop \<phi>}"} produce a
+  \<^descr> \<open>@{cterm t}\<close> and \<open>@{cprop \<phi>}\<close> produce a
   certified term wrt.\ the current background theory --- as abstract
   value of type @{ML_type cterm}.
 
-  \<^descr> @{text "@{thm a}"} produces a singleton fact --- as abstract
+  \<^descr> \<open>@{thm a}\<close> produces a singleton fact --- as abstract
   value of type @{ML_type thm}.
 
-  \<^descr> @{text "@{thms a}"} produces a general fact --- as abstract
+  \<^descr> \<open>@{thms a}\<close> produces a general fact --- as abstract
   value of type @{ML_type "thm list"}.
 
-  \<^descr> @{text "@{lemma \<phi> by meth}"} produces a fact that is proven on
+  \<^descr> \<open>@{lemma \<phi> by meth}\<close> produces a fact that is proven on
   the spot according to the minimal proof, which imitates a terminal
   Isar proof.  The result is an abstract value of type @{ML_type thm}
   or @{ML_type "thm list"}, depending on the number of propositions
   given here.
 
   The internal derivation object lacks a proper theorem name, but it
-  is formally closed, unless the @{text "(open)"} option is specified
+  is formally closed, unless the \<open>(open)\<close> option is specified
   (this may impact performance of applications with proof terms).
 
   Since ML antiquotations are always evaluated at compile-time, there
@@ -823,7 +789,7 @@
 
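+text \<open>
+  A small sketch of these antiquotations as compile-time values,
+  assuming the usual HOL facts \<open>refl\<close>, \<open>conjI\<close>, \<open>conjE\<close> are available:
+\<close>
+
+ML \<open>
+  val ct = @{cterm "x + y::nat"};              (*certified term*)
+  val th = @{thm refl};                        (*singleton fact*)
+  val ths = @{thms conjI conjE};               (*general fact: thm list*)
+  val lem = @{lemma "x = x" by (rule refl)};   (*fact proven on the spot*)
+\<close>
+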
 subsection \<open>Auxiliary connectives \label{sec:logic-aux}\<close>
 
-text \<open>Theory @{text "Pure"} provides a few auxiliary connectives
+text \<open>Theory \<open>Pure\<close> provides a few auxiliary connectives
   that are defined on top of the primitive ones, see
   \figref{fig:pure-aux}.  These special constants are useful in
   certain internal encodings, and are normally not directly exposed to
@@ -832,51 +798,49 @@
   \begin{figure}[htb]
   \begin{center}
   \begin{tabular}{ll}
-  @{text "conjunction :: prop \<Rightarrow> prop \<Rightarrow> prop"} & (infix @{text "&&&"}) \\
-  @{text "\<turnstile> A &&& B \<equiv> (\<And>C. (A \<Longrightarrow> B \<Longrightarrow> C) \<Longrightarrow> C)"} \\[1ex]
-  @{text "prop :: prop \<Rightarrow> prop"} & (prefix @{text "#"}, suppressed) \\
-  @{text "#A \<equiv> A"} \\[1ex]
-  @{text "term :: \<alpha> \<Rightarrow> prop"} & (prefix @{text "TERM"}) \\
-  @{text "term x \<equiv> (\<And>A. A \<Longrightarrow> A)"} \\[1ex]
-  @{text "type :: \<alpha> itself"} & (prefix @{text "TYPE"}) \\
-  @{text "(unspecified)"} \\
+  \<open>conjunction :: prop \<Rightarrow> prop \<Rightarrow> prop\<close> & (infix \<open>&&&\<close>) \\
+  \<open>\<turnstile> A &&& B \<equiv> (\<And>C. (A \<Longrightarrow> B \<Longrightarrow> C) \<Longrightarrow> C)\<close> \\[1ex]
+  \<open>prop :: prop \<Rightarrow> prop\<close> & (prefix \<open>#\<close>, suppressed) \\
+  \<open>#A \<equiv> A\<close> \\[1ex]
+  \<open>term :: \<alpha> \<Rightarrow> prop\<close> & (prefix \<open>TERM\<close>) \\
+  \<open>term x \<equiv> (\<And>A. A \<Longrightarrow> A)\<close> \\[1ex]
+  \<open>type :: \<alpha> itself\<close> & (prefix \<open>TYPE\<close>) \\
+  \<open>(unspecified)\<close> \\
   \end{tabular}
   \caption{Definitions of auxiliary connectives}\label{fig:pure-aux}
   \end{center}
   \end{figure}
 
-  The introduction @{text "A \<Longrightarrow> B \<Longrightarrow> A &&& B"}, and eliminations
-  (projections) @{text "A &&& B \<Longrightarrow> A"} and @{text "A &&& B \<Longrightarrow> B"} are
+  The introduction \<open>A \<Longrightarrow> B \<Longrightarrow> A &&& B\<close>, and eliminations
+  (projections) \<open>A &&& B \<Longrightarrow> A\<close> and \<open>A &&& B \<Longrightarrow> B\<close> are
   available as derived rules.  Conjunction allows treating
   simultaneous assumptions and conclusions uniformly, e.g.\ consider
-  @{text "A \<Longrightarrow> B \<Longrightarrow> C &&& D"}.  In particular, the goal mechanism
+  \<open>A \<Longrightarrow> B \<Longrightarrow> C &&& D\<close>.  In particular, the goal mechanism
   represents multiple claims as explicit conjunction internally, but
   this is refined (via backwards introduction) into separate sub-goals
   before the user commences the proof; the final result is projected
   into a list of theorems using eliminations (cf.\
   \secref{sec:tactical-goals}).
 
-  The @{text "prop"} marker (@{text "#"}) makes arbitrarily complex
-  propositions appear as atomic, without changing the meaning: @{text
-  "\<Gamma> \<turnstile> A"} and @{text "\<Gamma> \<turnstile> #A"} are interchangeable.  See
+  The \<open>prop\<close> marker (\<open>#\<close>) makes arbitrarily complex
+  propositions appear as atomic, without changing the meaning:
+  \<open>\<Gamma> \<turnstile> A\<close> and \<open>\<Gamma> \<turnstile> #A\<close> are interchangeable.  See
   \secref{sec:tactical-goals} for specific operations.
 
-  The @{text "term"} marker turns any well-typed term into a derivable
-  proposition: @{text "\<turnstile> TERM t"} holds unconditionally.  Although
+  The \<open>term\<close> marker turns any well-typed term into a derivable
+  proposition: \<open>\<turnstile> TERM t\<close> holds unconditionally.  Although
   this is logically vacuous, it allows treating terms and proofs
   uniformly, similar to a type-theoretic framework.
 
-  The @{text "TYPE"} constructor is the canonical representative of
-  the unspecified type @{text "\<alpha> itself"}; it essentially injects the
+  The \<open>TYPE\<close> constructor is the canonical representative of
+  the unspecified type \<open>\<alpha> itself\<close>; it essentially injects the
   language of types into that of terms.  There is specific notation
-  @{text "TYPE(\<tau>)"} for @{text "TYPE\<^bsub>\<tau> itself\<^esub>"}.
-  Although being devoid of any particular meaning, the term @{text
-  "TYPE(\<tau>)"} accounts for the type @{text "\<tau>"} within the term
-  language.  In particular, @{text "TYPE(\<alpha>)"} may be used as formal
+  \<open>TYPE(\<tau>)\<close> for \<open>TYPE\<^bsub>\<tau> itself\<^esub>\<close>.
+  Although devoid of any particular meaning, the term \<open>TYPE(\<tau>)\<close>
+  accounts for the type \<open>\<tau>\<close> within the term
+  language.  In particular, \<open>TYPE(\<alpha>)\<close> may be used as formal
   argument in primitive definitions, in order to circumvent hidden
-  polymorphism (cf.\ \secref{sec:terms}).  For example, @{text "c
-  TYPE(\<alpha>) \<equiv> A[\<alpha>]"} defines @{text "c :: \<alpha> itself \<Rightarrow> prop"} in terms of
-  a proposition @{text "A"} that depends on an additional type
+  polymorphism (cf.\ \secref{sec:terms}).  For example, \<open>c
+  TYPE(\<alpha>) \<equiv> A[\<alpha>]\<close> defines \<open>c :: \<alpha> itself \<Rightarrow> prop\<close> in terms of
+  a proposition \<open>A\<close> that depends on an additional type
   argument, which is essentially a predicate on types.
 \<close>
 
@@ -890,22 +854,19 @@
   @{index_ML Logic.dest_type: "term -> typ"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Conjunction.intr} derives @{text "A &&& B"} from @{text
-  "A"} and @{text "B"}.
+  \<^descr> @{ML Conjunction.intr} derives \<open>A &&& B\<close> from \<open>A\<close> and \<open>B\<close>.
 
-  \<^descr> @{ML Conjunction.elim} derives @{text "A"} and @{text "B"}
-  from @{text "A &&& B"}.
+  \<^descr> @{ML Conjunction.elim} derives \<open>A\<close> and \<open>B\<close>
+  from \<open>A &&& B\<close>.
 
-  \<^descr> @{ML Drule.mk_term} derives @{text "TERM t"}.
+  \<^descr> @{ML Drule.mk_term} derives \<open>TERM t\<close>.
 
-  \<^descr> @{ML Drule.dest_term} recovers term @{text "t"} from @{text
-  "TERM t"}.
+  \<^descr> @{ML Drule.dest_term} recovers term \<open>t\<close> from \<open>TERM t\<close>.
 
-  \<^descr> @{ML Logic.mk_type}~@{text "\<tau>"} produces the term @{text
-  "TYPE(\<tau>)"}.
+  \<^descr> @{ML Logic.mk_type}~\<open>\<tau>\<close> produces the term \<open>TYPE(\<tau>)\<close>.
 
-  \<^descr> @{ML Logic.dest_type}~@{text "TYPE(\<tau>)"} recovers the type
-  @{text "\<tau>"}.
+  \<^descr> @{ML Logic.dest_type}~\<open>TYPE(\<tau>)\<close> recovers the type
+  \<open>\<tau>\<close>.
 \<close>
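+
+text \<open>
+  A brief sketch of these operations, using the standard HOL facts
+  \<open>refl\<close> and \<open>TrueI\<close> purely for illustration:
+\<close>
+
+ML \<open>
+  val ab = Conjunction.intr @{thm refl} @{thm TrueI};   (*\<turnstile> ?t = ?t &&& True*)
+  val (a, b) = Conjunction.elim ab;                     (*the two projections*)
+  val tm = Drule.mk_term @{cterm "x::nat"};             (*\<turnstile> TERM x*)
+  val T = Logic.dest_type (Logic.mk_type @{typ nat});   (*round trip on TYPE(nat)*)
+\<close>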
 
 
@@ -913,29 +874,26 @@
 
 text \<open>Type variables are decorated with sorts, as explained in
   \secref{sec:types}.  This constrains type instantiation to certain
-  ranges of types: variable @{text "\<alpha>\<^sub>s"} may only be assigned to types
-  @{text "\<tau>"} that belong to sort @{text "s"}.  Within the logic, sort
-  constraints act like implicit preconditions on the result @{text
-  "\<lparr>\<alpha>\<^sub>1 : s\<^sub>1\<rparr>, \<dots>, \<lparr>\<alpha>\<^sub>n : s\<^sub>n\<rparr>, \<Gamma> \<turnstile> \<phi>"} where the type variables @{text
-  "\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n"} cover the propositions @{text "\<Gamma>"}, @{text "\<phi>"}, as
-  well as the proof of @{text "\<Gamma> \<turnstile> \<phi>"}.
+  ranges of types: variable \<open>\<alpha>\<^sub>s\<close> may only be assigned to types
+  \<open>\<tau>\<close> that belong to sort \<open>s\<close>.  Within the logic, sort
+  constraints act like implicit preconditions on the result
+  \<open>\<lparr>\<alpha>\<^sub>1 : s\<^sub>1\<rparr>, \<dots>, \<lparr>\<alpha>\<^sub>n : s\<^sub>n\<rparr>, \<Gamma> \<turnstile> \<phi>\<close> where the type
+  variables \<open>\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n\<close> cover the propositions \<open>\<Gamma>\<close>, \<open>\<phi>\<close>, as
+  well as the proof of \<open>\<Gamma> \<turnstile> \<phi>\<close>.
 
   These \<^emph>\<open>sort hypotheses\<close> of a theorem are passed monotonically
   through further derivations.  They are redundant, as long as the
   statement of a theorem still contains the type variables that are
   accounted here.  The logical significance of sort hypotheses is
   limited to the boundary case where type variables disappear from the
-  proposition, e.g.\ @{text "\<lparr>\<alpha>\<^sub>s : s\<rparr> \<turnstile> \<phi>"}.  Since such dangling type
+  proposition, e.g.\ \<open>\<lparr>\<alpha>\<^sub>s : s\<rparr> \<turnstile> \<phi>\<close>.  Since such dangling type
   variables can be renamed arbitrarily without changing the
-  proposition @{text "\<phi>"}, the inference kernel maintains sort
-  hypotheses in anonymous form @{text "s \<turnstile> \<phi>"}.
+  proposition \<open>\<phi>\<close>, the inference kernel maintains sort
+  hypotheses in anonymous form \<open>s \<turnstile> \<phi>\<close>.
 
   In most practical situations, such extra sort hypotheses may be
   stripped in a final bookkeeping step, e.g.\ at the end of a proof:
   they are typically left over from intermediate reasoning with type
-  classes that can be satisfied by some concrete type @{text "\<tau>"} of
-  sort @{text "s"} to replace the hypothetical type variable @{text
-  "\<alpha>\<^sub>s"}.\<close>
+  classes that can be satisfied by some concrete type \<open>\<tau>\<close> of
+  sort \<open>s\<close> to replace the hypothetical type variable \<open>\<alpha>\<^sub>s\<close>.\<close>
 
 text %mlref \<open>
   \begin{mldecls}
@@ -943,11 +901,11 @@
   @{index_ML Thm.strip_shyps: "thm -> thm"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Thm.extra_shyps}~@{text "thm"} determines the extraneous
+  \<^descr> @{ML Thm.extra_shyps}~\<open>thm\<close> determines the extraneous
   sort hypotheses of the given theorem, i.e.\ the sorts that are not
   present within type variables of the statement.
 
-  \<^descr> @{ML Thm.strip_shyps}~@{text "thm"} removes any extraneous
+  \<^descr> @{ML Thm.strip_shyps}~\<open>thm\<close> removes any extraneous
   sort hypotheses that can be witnessed from the type signature.
 \<close>
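+
+text \<open>
+  A minimal sketch: for a theorem whose statement still mentions all of
+  its type variables, the extraneous sort hypotheses are presumably empty.
+\<close>
+
+ML \<open>
+  val shyps = Thm.extra_shyps @{thm refl};   (*presumably []*)
+  val th' = Thm.strip_shyps @{thm refl};     (*unchanged in this case*)
+\<close>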
 
@@ -976,9 +934,8 @@
   purposes.  User-level reasoning usually works via object-level rules
   that are represented as theorems of Pure.  Composition of rules
   involves \<^emph>\<open>backchaining\<close>, \<^emph>\<open>higher-order unification\<close> modulo
-  @{text "\<alpha>\<beta>\<eta>"}-conversion of @{text "\<lambda>"}-terms, and so-called
-  \<^emph>\<open>lifting\<close> of rules into a context of @{text "\<And>"} and @{text
-  "\<Longrightarrow>"} connectives.  Thus the full power of higher-order Natural
+  \<open>\<alpha>\<beta>\<eta>\<close>-conversion of \<open>\<lambda>\<close>-terms, and so-called
+  \<^emph>\<open>lifting\<close> of rules into a context of \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close>
+  connectives.  Thus the full power of higher-order Natural
   Deduction in Isabelle/Pure becomes readily available.
 \<close>
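+
+text \<open>
+  As a first impression, rule composition is directly available in ML
+  via the resolution combinators documented further below; a small
+  sketch with standard HOL rules:
+\<close>
+
+ML \<open>
+  (*conclusion of conjI resolved with the premise of disjI1*)
+  val rule = @{thm conjI} RS @{thm disjI1};
+    (*roughly: \<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> (?P \<and> ?Q) \<or> ?R*)
+\<close>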
 
@@ -991,24 +948,24 @@
   arbitrary nesting similar to @{cite extensions91}.  The most basic
   rule format is that of a \<^emph>\<open>Horn Clause\<close>:
   \[
-  \infer{@{text "A"}}{@{text "A\<^sub>1"} & @{text "\<dots>"} & @{text "A\<^sub>n"}}
+  \infer{\<open>A\<close>}{\<open>A\<^sub>1\<close> & \<open>\<dots>\<close> & \<open>A\<^sub>n\<close>}
   \]
-  where @{text "A, A\<^sub>1, \<dots>, A\<^sub>n"} are atomic propositions
-  of the framework, usually of the form @{text "Trueprop B"}, where
-  @{text "B"} is a (compound) object-level statement.  This
+  where \<open>A, A\<^sub>1, \<dots>, A\<^sub>n\<close> are atomic propositions
+  of the framework, usually of the form \<open>Trueprop B\<close>, where
+  \<open>B\<close> is a (compound) object-level statement.  This
   object-level inference corresponds to an iterated implication in
   Pure like this:
   \[
-  @{text "A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> A"}
+  \<open>A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> A\<close>
   \]
-  As an example consider conjunction introduction: @{text "A \<Longrightarrow> B \<Longrightarrow> A \<and>
-  B"}.  Any parameters occurring in such rule statements are
+  As an example consider conjunction introduction: \<open>A \<Longrightarrow> B \<Longrightarrow> A \<and>
+  B\<close>.  Any parameters occurring in such rule statements are
   conceptually treated as arbitrary:
   \[
-  @{text "\<And>x\<^sub>1 \<dots> x\<^sub>m. A\<^sub>1 x\<^sub>1 \<dots> x\<^sub>m \<Longrightarrow> \<dots> A\<^sub>n x\<^sub>1 \<dots> x\<^sub>m \<Longrightarrow> A x\<^sub>1 \<dots> x\<^sub>m"}
+  \<open>\<And>x\<^sub>1 \<dots> x\<^sub>m. A\<^sub>1 x\<^sub>1 \<dots> x\<^sub>m \<Longrightarrow> \<dots> A\<^sub>n x\<^sub>1 \<dots> x\<^sub>m \<Longrightarrow> A x\<^sub>1 \<dots> x\<^sub>m\<close>
   \]
 
-  Nesting of rules means that the positions of @{text "A\<^sub>i"} may
+  Nesting of rules means that the positions of \<open>A\<^sub>i\<close> may
   again hold compound rules, not just atomic propositions.
   Propositions of this format are called \<^emph>\<open>Hereditary Harrop
   Formulae\<close> in the literature @{cite "Miller:1991"}.  Here we give an
@@ -1016,19 +973,18 @@
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "\<^bold>x"} & set of variables \\
-  @{text "\<^bold>A"} & set of atomic propositions \\
-  @{text "\<^bold>H  =  \<And>\<^bold>x\<^sup>*. \<^bold>H\<^sup>* \<Longrightarrow> \<^bold>A"} & set of Hereditary Harrop Formulas \\
+  \<open>\<^bold>x\<close> & set of variables \\
+  \<open>\<^bold>A\<close> & set of atomic propositions \\
+  \<open>\<^bold>H  =  \<And>\<^bold>x\<^sup>*. \<^bold>H\<^sup>* \<Longrightarrow> \<^bold>A\<close> & set of Hereditary Harrop Formulas \\
   \end{tabular}
   \<^medskip>
 
   Thus we essentially impose nesting levels on propositions formed
-  from @{text "\<And>"} and @{text "\<Longrightarrow>"}.  At each level there is a prefix
+  from \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close>.  At each level there is a prefix
   of parameters and compound premises, concluding an atomic
-  proposition.  Typical examples are @{text "\<longrightarrow>"}-introduction @{text
-  "(A \<Longrightarrow> B) \<Longrightarrow> A \<longrightarrow> B"} or mathematical induction @{text "P 0 \<Longrightarrow> (\<And>n. P n
-  \<Longrightarrow> P (Suc n)) \<Longrightarrow> P n"}.  Even deeper nesting occurs in well-founded
-  induction @{text "(\<And>x. (\<And>y. y \<prec> x \<Longrightarrow> P y) \<Longrightarrow> P x) \<Longrightarrow> P x"}, but this
+  proposition.  Typical examples are \<open>\<longrightarrow>\<close>-introduction
+  \<open>(A \<Longrightarrow> B) \<Longrightarrow> A \<longrightarrow> B\<close> or mathematical induction \<open>P 0 \<Longrightarrow> (\<And>n. P n
+  \<Longrightarrow> P (Suc n)) \<Longrightarrow> P n\<close>.  Even deeper nesting occurs in well-founded
+  induction \<open>(\<And>x. (\<And>y. y \<prec> x \<Longrightarrow> P y) \<Longrightarrow> P x) \<Longrightarrow> P x\<close>, but this
   already marks the limit of rule complexity that is usually seen in
   practice.
 
@@ -1036,14 +992,14 @@
   Regular user-level inferences in Isabelle/Pure always
   maintain the following canonical form of results:
 
-  \<^item> Normalization by @{text "(A \<Longrightarrow> (\<And>x. B x)) \<equiv> (\<And>x. A \<Longrightarrow> B x)"},
+  \<^item> Normalization by \<open>(A \<Longrightarrow> (\<And>x. B x)) \<equiv> (\<And>x. A \<Longrightarrow> B x)\<close>,
   which is a theorem of Pure, means that quantifiers are pushed in
   front of implication at each level of nesting.  The normal form is a
   Hereditary Harrop Formula.
 
   \<^item> The outermost prefix of parameters is represented via
-  schematic variables: instead of @{text "\<And>\<^vec>x. \<^vec>H \<^vec>x
-  \<Longrightarrow> A \<^vec>x"} we have @{text "\<^vec>H ?\<^vec>x \<Longrightarrow> A ?\<^vec>x"}.
+  schematic variables: instead of \<open>\<And>\<^vec>x. \<^vec>H \<^vec>x
+  \<Longrightarrow> A \<^vec>x\<close> we have \<open>\<^vec>H ?\<^vec>x \<Longrightarrow> A ?\<^vec>x\<close>.
   Note that this representation loses information about the order of
   parameters, and vacuous quantifiers vanish automatically.
 \<close>
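+
+text \<open>
+  For instance, the HOL rule \<open>allI\<close> is already kept in this canonical
+  form, with the outermost parameter represented as schematic \<open>?P\<close>;
+  normalization (see below) presumably leaves it unchanged.
+\<close>
+
+ML \<open>
+  val rule = @{thm allI};                            (*(\<And>x. ?P x) \<Longrightarrow> \<forall>x. ?P x*)
+  val rule' = Simplifier.norm_hhf @{context} rule;   (*already in normal form*)
+\<close>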
@@ -1053,7 +1009,7 @@
   @{index_ML Simplifier.norm_hhf: "Proof.context -> thm -> thm"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Simplifier.norm_hhf}~@{text "ctxt thm"} normalizes the given
+  \<^descr> @{ML Simplifier.norm_hhf}~\<open>ctxt thm\<close> normalizes the given
   theorem according to the canonical form specified above.  This is
   occasionally helpful to repair some low-level tools that do not
   handle Hereditary Harrop Formulae properly.
@@ -1071,41 +1027,40 @@
 
   To understand the all-important @{inference resolution} principle,
   we first consider raw @{inference_def composition} (modulo
-  higher-order unification with substitution @{text "\<vartheta>"}):
+  higher-order unification with substitution \<open>\<vartheta>\<close>):
   \[
-  \infer[(@{inference_def composition})]{@{text "\<^vec>A\<vartheta> \<Longrightarrow> C\<vartheta>"}}
-  {@{text "\<^vec>A \<Longrightarrow> B"} & @{text "B' \<Longrightarrow> C"} & @{text "B\<vartheta> = B'\<vartheta>"}}
+  \infer[(@{inference_def composition})]{\<open>\<^vec>A\<vartheta> \<Longrightarrow> C\<vartheta>\<close>}
+  {\<open>\<^vec>A \<Longrightarrow> B\<close> & \<open>B' \<Longrightarrow> C\<close> & \<open>B\<vartheta> = B'\<vartheta>\<close>}
   \]
   Here the conclusion of the first rule is unified with the premise of
   the second; the resulting rule instance inherits the premises of the
-  first and conclusion of the second.  Note that @{text "C"} can again
+  first and conclusion of the second.  Note that \<open>C\<close> can again
   consist of iterated implications.  We can also permute the premises
-  of the second rule back-and-forth in order to compose with @{text
-  "B'"} in any position (subsequently we shall always refer to
+  of the second rule back-and-forth in order to compose with \<open>B'\<close>
+  in any position (subsequently we shall always refer to
   position 1 w.l.o.g.).
 
   In @{inference composition} the internal structure of the common
-  part @{text "B"} and @{text "B'"} is not taken into account.  For
-  proper @{inference resolution} we require @{text "B"} to be atomic,
-  and explicitly observe the structure @{text "\<And>\<^vec>x. \<^vec>H
-  \<^vec>x \<Longrightarrow> B' \<^vec>x"} of the premise of the second rule.  The
+  part \<open>B\<close> and \<open>B'\<close> is not taken into account.  For
+  proper @{inference resolution} we require \<open>B\<close> to be atomic,
+  and explicitly observe the structure \<open>\<And>\<^vec>x. \<^vec>H
+  \<^vec>x \<Longrightarrow> B' \<^vec>x\<close> of the premise of the second rule.  The
   idea is to adapt the first rule by ``lifting'' it into this context,
   by means of iterated application of the following inferences:
   \[
-  \infer[(@{inference_def imp_lift})]{@{text "(\<^vec>H \<Longrightarrow> \<^vec>A) \<Longrightarrow> (\<^vec>H \<Longrightarrow> B)"}}{@{text "\<^vec>A \<Longrightarrow> B"}}
+  \infer[(@{inference_def imp_lift})]{\<open>(\<^vec>H \<Longrightarrow> \<^vec>A) \<Longrightarrow> (\<^vec>H \<Longrightarrow> B)\<close>}{\<open>\<^vec>A \<Longrightarrow> B\<close>}
   \]
   \[
-  \infer[(@{inference_def all_lift})]{@{text "(\<And>\<^vec>x. \<^vec>A (?\<^vec>a \<^vec>x)) \<Longrightarrow> (\<And>\<^vec>x. B (?\<^vec>a \<^vec>x))"}}{@{text "\<^vec>A ?\<^vec>a \<Longrightarrow> B ?\<^vec>a"}}
+  \infer[(@{inference_def all_lift})]{\<open>(\<And>\<^vec>x. \<^vec>A (?\<^vec>a \<^vec>x)) \<Longrightarrow> (\<And>\<^vec>x. B (?\<^vec>a \<^vec>x))\<close>}{\<open>\<^vec>A ?\<^vec>a \<Longrightarrow> B ?\<^vec>a\<close>}
   \]
   By combining raw composition with lifting, we get full @{inference
   resolution} as follows:
   \[
   \infer[(@{inference_def resolution})]
-  {@{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>A (?\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>"}}
+  {\<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>A (?\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>\<close>}
   {\begin{tabular}{l}
-    @{text "\<^vec>A ?\<^vec>a \<Longrightarrow> B ?\<^vec>a"} \\
-    @{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C"} \\
-    @{text "(\<lambda>\<^vec>x. B (?\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>"} \\
+    \<open>\<^vec>A ?\<^vec>a \<Longrightarrow> B ?\<^vec>a\<close> \\
+    \<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C\<close> \\
+    \<open>(\<lambda>\<^vec>x. B (?\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>\<close> \\
    \end{tabular}}
   \]
 
@@ -1114,8 +1069,8 @@
   a rule of 0 premises, or by producing a ``short-circuit'' within a
   solved situation (again modulo unification):
   \[
-  \infer[(@{inference_def assumption})]{@{text "C\<vartheta>"}}
-  {@{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> A \<^vec>x) \<Longrightarrow> C"} & @{text "A\<vartheta> = H\<^sub>i\<vartheta>"}~~\text{(for some~@{text i})}}
+  \infer[(@{inference_def assumption})]{\<open>C\<vartheta>\<close>}
+  {\<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> A \<^vec>x) \<Longrightarrow> C\<close> & \<open>A\<vartheta> = H\<^sub>i\<vartheta>\<close>~~\text{(for some~\<open>i\<close>)}}
   \]
 
   %FIXME @{inference_def elim_resolution}, @{inference_def dest_resolution}
@@ -1133,8 +1088,8 @@
   @{index_ML_op "OF": "thm * thm list -> thm"} \\
   \end{mldecls}
 
-  \<^descr> @{text "rule\<^sub>1 RSN (i, rule\<^sub>2)"} resolves the conclusion of
-  @{text "rule\<^sub>1"} with the @{text i}-th premise of @{text "rule\<^sub>2"},
+  \<^descr> \<open>rule\<^sub>1 RSN (i, rule\<^sub>2)\<close> resolves the conclusion of
+  \<open>rule\<^sub>1\<close> with the \<open>i\<close>-th premise of \<open>rule\<^sub>2\<close>,
   according to the @{inference resolution} principle explained above.
   Unless there is precisely one resolvent it raises exception @{ML
   THM}.
@@ -1142,28 +1097,27 @@
   This corresponds to the rule attribute @{attribute THEN} in Isar
   source language.
 
-  \<^descr> @{text "rule\<^sub>1 RS rule\<^sub>2"} abbreviates @{text "rule\<^sub>1 RSN (1,
-  rule\<^sub>2)"}.
+  \<^descr> \<open>rule\<^sub>1 RS rule\<^sub>2\<close> abbreviates \<open>rule\<^sub>1 RSN (1,
+  rule\<^sub>2)\<close>.
 
-  \<^descr> @{text "rules\<^sub>1 RLN (i, rules\<^sub>2)"} joins lists of rules.  For
-  every @{text "rule\<^sub>1"} in @{text "rules\<^sub>1"} and @{text "rule\<^sub>2"} in
-  @{text "rules\<^sub>2"}, it resolves the conclusion of @{text "rule\<^sub>1"} with
-  the @{text "i"}-th premise of @{text "rule\<^sub>2"}, accumulating multiple
+  \<^descr> \<open>rules\<^sub>1 RLN (i, rules\<^sub>2)\<close> joins lists of rules.  For
+  every \<open>rule\<^sub>1\<close> in \<open>rules\<^sub>1\<close> and \<open>rule\<^sub>2\<close> in
+  \<open>rules\<^sub>2\<close>, it resolves the conclusion of \<open>rule\<^sub>1\<close> with
+  the \<open>i\<close>-th premise of \<open>rule\<^sub>2\<close>, accumulating multiple
   results in one big list.  Note that such strict enumerations of
   higher-order unifications can be inefficient compared to the lazy
   variant seen in elementary tactics like @{ML resolve_tac}.
 
-  \<^descr> @{text "rules\<^sub>1 RL rules\<^sub>2"} abbreviates @{text "rules\<^sub>1 RLN (1,
-  rules\<^sub>2)"}.
+  \<^descr> \<open>rules\<^sub>1 RL rules\<^sub>2\<close> abbreviates \<open>rules\<^sub>1 RLN (1,
+  rules\<^sub>2)\<close>.
 
-  \<^descr> @{text "[rule\<^sub>1, \<dots>, rule\<^sub>n] MRS rule"} resolves @{text "rule\<^sub>i"}
-  against premise @{text "i"} of @{text "rule"}, for @{text "i = n, \<dots>,
-  1"}.  By working from right to left, newly emerging premises are
+  \<^descr> \<open>[rule\<^sub>1, \<dots>, rule\<^sub>n] MRS rule\<close> resolves \<open>rule\<^sub>i\<close>
+  against premise \<open>i\<close> of \<open>rule\<close>, for \<open>i = n, \<dots>,
+  1\<close>.  By working from right to left, newly emerging premises are
   concatenated in the result, without interfering.
 
-  \<^descr> @{text "rule OF rules"} is an alternative notation for @{text
-  "rules MRS rule"}, which makes rule composition look more like
-  function application.  Note that the argument @{text "rules"} need
+  \<^descr> \<open>rule OF rules\<close> is an alternative notation for
+  \<open>rules MRS rule\<close>, which makes rule composition look more like
+  function application.  Note that the argument \<open>rules\<close> need
   not be atomic.
 
   This corresponds to the rule attribute @{attribute OF} in Isar
@@ -1181,55 +1135,50 @@
   proof-checker, for example.
 
   According to the well-known \<^emph>\<open>Curry-Howard isomorphism\<close>, a proof
-  can be viewed as a @{text "\<lambda>"}-term. Following this idea, proofs in
+  can be viewed as a \<open>\<lambda>\<close>-term. Following this idea, proofs in
   Isabelle are internally represented by a datatype similar to the one
   for terms described in \secref{sec:terms}.  On top of these
-  syntactic terms, two more layers of @{text "\<lambda>"}-calculus are added,
-  which correspond to @{text "\<And>x :: \<alpha>. B x"} and @{text "A \<Longrightarrow> B"}
+  syntactic terms, two more layers of \<open>\<lambda>\<close>-calculus are added,
+  which correspond to \<open>\<And>x :: \<alpha>. B x\<close> and \<open>A \<Longrightarrow> B\<close>
   according to the propositions-as-types principle.  The resulting
-  3-level @{text "\<lambda>"}-calculus resembles ``@{text "\<lambda>HOL"}'' in the
+  3-level \<open>\<lambda>\<close>-calculus resembles ``\<open>\<lambda>HOL\<close>'' in the
   more abstract setting of Pure Type Systems (PTS)
   @{cite "Barendregt-Geuvers:2001"}, if some fine points like schematic
   polymorphism and type classes are ignored.
 
   \<^medskip>
-  \<^emph>\<open>Proof abstractions\<close> of the form @{text "\<^bold>\<lambda>x :: \<alpha>. prf"}
-  or @{text "\<^bold>\<lambda>p : A. prf"} correspond to introduction of @{text
-  "\<And>"}/@{text "\<Longrightarrow>"}, and \<^emph>\<open>proof applications\<close> of the form @{text
-  "p \<cdot> t"} or @{text "p \<bullet> q"} correspond to elimination of @{text
-  "\<And>"}/@{text "\<Longrightarrow>"}.  Actual types @{text "\<alpha>"}, propositions @{text
-  "A"}, and terms @{text "t"} might be suppressed and reconstructed
+  \<^emph>\<open>Proof abstractions\<close> of the form \<open>\<^bold>\<lambda>x :: \<alpha>. prf\<close>
+  or \<open>\<^bold>\<lambda>p : A. prf\<close> correspond to introduction of \<open>\<And>\<close>/\<open>\<Longrightarrow>\<close>,
+  and \<^emph>\<open>proof applications\<close> of the form \<open>p \<cdot> t\<close> or \<open>p \<bullet> q\<close>
+  correspond to elimination of \<open>\<And>\<close>/\<open>\<Longrightarrow>\<close>.  Actual types \<open>\<alpha>\<close>,
+  propositions \<open>A\<close>, and terms \<open>t\<close> might be suppressed and reconstructed
   from the overall proof term.
 
   \<^medskip>
   Various atomic proofs indicate special situations within
   the proof construction as follows.
 
-  A \<^emph>\<open>bound proof variable\<close> is a natural number @{text "b"} that
+  A \<^emph>\<open>bound proof variable\<close> is a natural number \<open>b\<close> that
   acts as a de-Bruijn index for proof term abstractions.
 
-  A \<^emph>\<open>minimal proof\<close> ``@{text "?"}'' is a dummy proof term.  This
+  A \<^emph>\<open>minimal proof\<close> ``\<open>?\<close>'' is a dummy proof term.  This
   indicates some unrecorded part of the proof.
 
-  @{text "Hyp A"} refers to some pending hypothesis by giving its
+  \<open>Hyp A\<close> refers to some pending hypothesis by giving its
   proposition.  This indicates an open context of implicit hypotheses,
   similar to loose bound variables or free variables within a term
   (\secref{sec:terms}).
 
-  An \<^emph>\<open>axiom\<close> or \<^emph>\<open>oracle\<close> @{text "a : A[\<^vec>\<tau>]"} refers
-  some postulated @{text "proof constant"}, which is subject to
+  An \<^emph>\<open>axiom\<close> or \<^emph>\<open>oracle\<close> \<open>a : A[\<^vec>\<tau>]\<close> refers to
+  some postulated \<open>proof constant\<close>, which is subject to
   schematic polymorphism of theory content, and the particular type
-  instantiation may be given explicitly.  The vector of types @{text
-  "\<^vec>\<tau>"} refers to the schematic type variables in the generic
-  proposition @{text "A"} in canonical order.
+  instantiation may be given explicitly.  The vector of types \<open>\<^vec>\<tau>\<close>
+  refers to the schematic type variables in the generic
+  proposition \<open>A\<close> in canonical order.
 
-  A \<^emph>\<open>proof promise\<close> @{text "a : A[\<^vec>\<tau>]"} is a placeholder
-  for some proof of polymorphic proposition @{text "A"}, with explicit
-  type instantiation as given by the vector @{text "\<^vec>\<tau>"}, as
+  A \<^emph>\<open>proof promise\<close> \<open>a : A[\<^vec>\<tau>]\<close> is a placeholder
+  for some proof of polymorphic proposition \<open>A\<close>, with explicit
+  type instantiation as given by the vector \<open>\<^vec>\<tau>\<close>, as
   above.  Unlike axioms or oracles, proof promises may be
-  \<^emph>\<open>fulfilled\<close> eventually, by substituting @{text "a"} by some
-  particular proof @{text "q"} at the corresponding type instance.
-  This acts like Hindley-Milner @{text "let"}-polymorphism: a generic
+  \<^emph>\<open>fulfilled\<close> eventually, by substituting \<open>a\<close> by some
+  particular proof \<open>q\<close> at the corresponding type instance.
+  This acts like Hindley-Milner \<open>let\<close>-polymorphism: a generic
   local proof definition may get used at different type instances, and
   is replaced by the concrete instance eventually.
 
@@ -1249,7 +1198,7 @@
   Therefore, the Isabelle/Pure inference kernel records only
   \<^emph>\<open>implicit\<close> proof terms, by omitting all typing information in
   terms, all term and type labels of proof abstractions, and some
-  argument terms of applications @{text "p \<cdot> t"} (if possible).
+  argument terms of applications \<open>p \<cdot> t\<close> (if possible).
 
   There are separate operations to reconstruct the full proof term
   later on, using \<^emph>\<open>higher-order pattern unification\<close>
@@ -1270,29 +1219,28 @@
   \begin{center}
   \begin{supertabular}{rclr}
 
-  @{syntax_def (inner) proof} & = & @{verbatim Lam} @{text params} @{verbatim "."} @{text proof} \\
-    & @{text "|"} & @{text "\<^bold>\<lambda>"} @{text "params"} @{verbatim "."} @{text proof} \\
-    & @{text "|"} & @{text proof} @{verbatim "%"} @{text any} \\
-    & @{text "|"} & @{text proof} @{text "\<cdot>"} @{text any} \\
-    & @{text "|"} & @{text proof} @{verbatim "%%"} @{text proof} \\
-    & @{text "|"} & @{text proof} @{text "\<bullet>"} @{text proof} \\
-    & @{text "|"} & @{text "id  |  longid"} \\
+  @{syntax_def (inner) proof} & = & @{verbatim Lam} \<open>params\<close> @{verbatim "."} \<open>proof\<close> \\
+    & \<open>|\<close> & \<open>\<^bold>\<lambda>\<close> \<open>params\<close> @{verbatim "."} \<open>proof\<close> \\
+    & \<open>|\<close> & \<open>proof\<close> @{verbatim "%"} \<open>any\<close> \\
+    & \<open>|\<close> & \<open>proof\<close> \<open>\<cdot>\<close> \<open>any\<close> \\
+    & \<open>|\<close> & \<open>proof\<close> @{verbatim "%%"} \<open>proof\<close> \\
+    & \<open>|\<close> & \<open>proof\<close> \<open>\<bullet>\<close> \<open>proof\<close> \\
+    & \<open>|\<close> & \<open>id  |  longid\<close> \\
   \\
 
-  @{text param} & = & @{text idt} \\
-    & @{text "|"} & @{text idt} @{verbatim ":"} @{text prop} \\
-    & @{text "|"} & @{verbatim "("} @{text param} @{verbatim ")"} \\
+  \<open>param\<close> & = & \<open>idt\<close> \\
+    & \<open>|\<close> & \<open>idt\<close> @{verbatim ":"} \<open>prop\<close> \\
+    & \<open>|\<close> & @{verbatim "("} \<open>param\<close> @{verbatim ")"} \\
   \\
 
-  @{text params} & = & @{text param} \\
-    & @{text "|"} & @{text param} @{text params} \\
+  \<open>params\<close> & = & \<open>param\<close> \\
+    & \<open>|\<close> & \<open>param\<close> \<open>params\<close> \\
 
   \end{supertabular}
   \end{center}
 
-  Implicit term arguments in partial proofs are indicated by ``@{text
-  "_"}''.  Type arguments for theorems and axioms may be specified
-  using @{text "p \<cdot> TYPE(type)"} (they must appear before any other
+  Implicit term arguments in partial proofs are indicated by ``\<open>_\<close>''.
+  Type arguments for theorems and axioms may be specified
+  using \<open>p \<cdot> TYPE(type)\<close> (they must appear before any other
   term argument of a theorem or axiom, but may be omitted altogether).
 
   \<^medskip>
@@ -1328,8 +1276,8 @@
   information, the implicit graph of nested theorems needs to be
   traversed (e.g.\ using @{ML Proofterm.fold_body_thms}).
 
-  \<^descr> @{ML Thm.proof_of}~@{text "thm"} and @{ML
-  Thm.proof_body_of}~@{text "thm"} produce the proof term or proof
+  \<^descr> @{ML Thm.proof_of}~\<open>thm\<close> and @{ML
+  Thm.proof_body_of}~\<open>thm\<close> produce the proof term or proof
   body (with digest of oracles and theorems) from a given theorem.
   Note that this involves a full join of internal futures that fulfill
   pending proof promises, and thus disrupts the natural bottom-up
@@ -1344,31 +1292,30 @@
   Officially named theorems that contribute to a result are recorded
   in any case.
 
-  \<^descr> @{ML Reconstruct.reconstruct_proof}~@{text "thy prop prf"}
-  turns the implicit proof term @{text "prf"} into a full proof of the
+  \<^descr> @{ML Reconstruct.reconstruct_proof}~\<open>thy prop prf\<close>
+  turns the implicit proof term \<open>prf\<close> into a full proof of the
   given proposition.
 
-  Reconstruction may fail if @{text "prf"} is not a proof of @{text
-  "prop"}, or if it does not contain sufficient information for
+  Reconstruction may fail if \<open>prf\<close> is not a proof of \<open>prop\<close>,
+  or if it does not contain sufficient information for
   reconstruction.  Failure may only happen for proofs that are
   constructed manually, but not for those produced automatically by
   the inference kernel.
 
-  \<^descr> @{ML Reconstruct.expand_proof}~@{text "thy [thm\<^sub>1, \<dots>, thm\<^sub>n]
-  prf"} expands and reconstructs the proofs of all specified theorems,
+  \<^descr> @{ML Reconstruct.expand_proof}~\<open>thy [thm\<^sub>1, \<dots>, thm\<^sub>n]
+  prf\<close> expands and reconstructs the proofs of all specified theorems,
   with the given (full) proof.  Theorems that are not uniquely specified
   via their name may be disambiguated by giving their proposition.
 
-  \<^descr> @{ML Proof_Checker.thm_of_proof}~@{text "thy prf"} turns the
+  \<^descr> @{ML Proof_Checker.thm_of_proof}~\<open>thy prf\<close> turns the
   given (full) proof into a theorem, by replaying it using only
   primitive rules of the inference kernel.
 
-  \<^descr> @{ML Proof_Syntax.read_proof}~@{text "thy b\<^sub>1 b\<^sub>2 s"} reads in a
+  \<^descr> @{ML Proof_Syntax.read_proof}~\<open>thy b\<^sub>1 b\<^sub>2 s\<close> reads in a
   proof term. The Boolean flags indicate the use of sort and type
   information.  Usually, typing information is left implicit and is
   inferred during proof reconstruction.  %FIXME eliminate flags!?
 
-  \<^descr> @{ML Proof_Syntax.pretty_proof}~@{text "ctxt prf"}
+  \<^descr> @{ML Proof_Syntax.pretty_proof}~\<open>ctxt prf\<close>
   pretty-prints the given proof term.
 \<close>
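+
+text \<open>
+  A hedged sketch of inspecting proof terms; the amount of detail that
+  is recorded depends on the proof-term mode of the running session.
+\<close>
+
+ML \<open>
+  val th = @{thm refl};
+  val prf = Thm.proof_of th;   (*implicit proof term*)
+  val _ = Pretty.writeln (Proof_Syntax.pretty_proof @{context} prf);
+\<close>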
 
--- a/src/Doc/Implementation/ML.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/ML.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -315,7 +315,7 @@
   Some special infixes (e.g.\ @{ML_text "|>"}) work better at the
   start of the line, but punctuation is always at the end.
 
-  Function application follows the tradition of @{text "\<lambda>"}-calculus,
+  Function application follows the tradition of \<open>\<lambda>\<close>-calculus,
   not informal mathematics.  For example: @{ML_text "f a b"} for a
   curried function, or @{ML_text "g (a, b)"} for a tupled function.
   Note that the space between @{ML_text g} and the pair @{ML_text
@@ -627,10 +627,10 @@
   correctly.  Recall that evaluation of a function body is delayed
   until actual run-time.
 
-  \<^descr> @{ML "Context.>>"}~@{text f} applies context transformation
-  @{text f} to the implicit context of the ML toplevel.
-
-  \<^descr> @{ML ML_Thms.bind_thms}~@{text "(name, thms)"} stores a list of
+  \<^descr> @{ML "Context.>>"}~\<open>f\<close> applies context transformation
+  \<open>f\<close> to the implicit context of the ML toplevel.
+
+  \<^descr> @{ML ML_Thms.bind_thms}~\<open>(name, thms)\<close> stores a list of
   theorems produced in ML both in the (global) theory context and the
   ML toplevel, associating it with the provided name.
 
@@ -660,11 +660,11 @@
   defined in @{cite "isabelle-isar-ref"}.
 
   \<^medskip>
-  A regular antiquotation @{text "@{name args}"} processes
+  A regular antiquotation \<open>@{name args}\<close> processes
   its arguments by the usual means of the Isar source language, and
   produces corresponding ML source text, either as literal
-  \<^emph>\<open>inline\<close> text (e.g.\ @{text "@{term t}"}) or abstract
-  \<^emph>\<open>value\<close> (e.g. @{text "@{thm th}"}).  This pre-compilation
+  \<^emph>\<open>inline\<close> text (e.g.\ \<open>@{term t}\<close>) or abstract
+  \<^emph>\<open>value\<close> (e.g. \<open>@{thm th}\<close>).  This pre-compilation
   scheme allows referring to formal entities in a robust manner, with
   proper static scoping and with some degree of logical checking of
   small portions of the code.
@@ -687,8 +687,8 @@
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "make_string"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "print"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "make_string"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "print"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -697,13 +697,13 @@
   @@{ML_antiquotation print} @{syntax name}?
   \<close>}
 
-  \<^descr> @{text "@{make_string}"} inlines a function to print arbitrary values
+  \<^descr> \<open>@{make_string}\<close> inlines a function to print arbitrary values
   similar to the ML toplevel. The result is compiler dependent and may fall
   back on "?" in certain situations. The value of configuration option
   @{attribute_ref ML_print_depth} determines further details of output.
 
-  \<^descr> @{text "@{print f}"} uses the ML function @{text "f: string ->
-  unit"} to output the result of @{text "@{make_string}"} above,
+  \<^descr> \<open>@{print f}\<close> uses the ML function \<open>f: string ->
+  unit\<close> to output the result of \<open>@{make_string}\<close> above,
   together with the source position of the antiquotation.  The default
   output function is @{ML writeln}.
 \<close>
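+
+text \<open>
+  A small sketch of both antiquotations, for diagnostic ML code only:
+\<close>
+
+ML \<open>
+  val s = @{make_string} (1, "two");    (*toplevel-style string representation*)
+  val x = @{print} {a = 1, b = true};   (*prints the value, with source position*)
+\<close>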
@@ -724,25 +724,24 @@
 
 section \<open>Canonical argument order \label{sec:canonical-argument-order}\<close>
 
-text \<open>Standard ML is a language in the tradition of @{text
-  "\<lambda>"}-calculus and \<^emph>\<open>higher-order functional programming\<close>,
+text \<open>Standard ML is a language in the tradition of \<open>\<lambda>\<close>-calculus
+  and \<^emph>\<open>higher-order functional programming\<close>,
   similar to OCaml, Haskell, or Isabelle/Pure and HOL as logical
   languages.  Getting acquainted with the native style of representing
   functions in that setting can save a lot of boiler-plate:
   redundant shuffling of arguments, auxiliary abstractions etc.
 
   Functions are usually \<^emph>\<open>curried\<close>: the idea of turning arguments
-  of type @{text "\<tau>\<^sub>i"} (for @{text "i \<in> {1, \<dots> n}"}) into a result of
-  type @{text "\<tau>"} is represented by the iterated function space
-  @{text "\<tau>\<^sub>1 \<rightarrow> \<dots> \<rightarrow> \<tau>\<^sub>n \<rightarrow> \<tau>"}.  This is isomorphic to the well-known
-  encoding via tuples @{text "\<tau>\<^sub>1 \<times> \<dots> \<times> \<tau>\<^sub>n \<rightarrow> \<tau>"}, but the curried
+  of type \<open>\<tau>\<^sub>i\<close> (for \<open>i \<in> {1, \<dots>, n}\<close>) into a result of
+  type \<open>\<tau>\<close> is represented by the iterated function space
+  \<open>\<tau>\<^sub>1 \<rightarrow> \<dots> \<rightarrow> \<tau>\<^sub>n \<rightarrow> \<tau>\<close>.  This is isomorphic to the well-known
+  encoding via tuples \<open>\<tau>\<^sub>1 \<times> \<dots> \<times> \<tau>\<^sub>n \<rightarrow> \<tau>\<close>, but the curried
   version fits more smoothly into the basic calculus.\footnote{The
   difference is even more significant in HOL, because the redundant
   tuple structure needs to be accommodated by extraneous proof steps.}
 
   Currying gives some flexibility due to \<^emph>\<open>partial application\<close>.  A
-  function @{text "f: \<tau>\<^sub>1 \<rightarrow> \<tau>\<^sub>2 \<rightarrow> \<tau>"} can be applied to @{text "x: \<tau>\<^sub>1"}
-  and the remaining @{text "(f x): \<tau>\<^sub>2 \<rightarrow> \<tau>"} passed to another function
+  function \<open>f: \<tau>\<^sub>1 \<rightarrow> \<tau>\<^sub>2 \<rightarrow> \<tau>\<close> can be applied to \<open>x: \<tau>\<^sub>1\<close>
+  and the remaining \<open>(f x): \<tau>\<^sub>2 \<rightarrow> \<tau>\<close> passed to another function
   etc.  How well this works in practice depends on the order of
   arguments.  In the worst case, arguments are arranged erratically,
   and using a function in a certain situation always requires some
@@ -752,8 +751,8 @@
   This can be avoided by \<^emph>\<open>canonical argument order\<close>, which
   observes certain standard patterns and minimizes adhoc permutations
   in their application.  In Isabelle/ML, large portions of text can be
-  written without auxiliary operations like @{text "swap: \<alpha> \<times> \<beta> \<rightarrow> \<beta> \<times>
-  \<alpha>"} or @{text "C: (\<alpha> \<rightarrow> \<beta> \<rightarrow> \<gamma>) \<rightarrow> (\<beta> \<rightarrow> \<alpha> \<rightarrow> \<gamma>)"} (the latter is not
+  written without auxiliary operations like \<open>swap: \<alpha> \<times> \<beta> \<rightarrow> \<beta> \<times>
+  \<alpha>\<close> or \<open>C: (\<alpha> \<rightarrow> \<beta> \<rightarrow> \<gamma>) \<rightarrow> (\<beta> \<rightarrow> \<alpha> \<rightarrow> \<gamma>)\<close> (the latter is not
   present in the Isabelle/ML library).
 
   \<^medskip>
@@ -763,32 +762,31 @@
   \<^emph>\<open>updates\<close>.
 
   The subsequent scheme is based on a hypothetical set-like container
-  of type @{text "\<beta>"} that manages elements of type @{text "\<alpha>"}.  Both
+  of type \<open>\<beta>\<close> that manages elements of type \<open>\<alpha>\<close>.  Both
   the names and types of the associated operations are canonical for
   Isabelle/ML.
 
   \begin{center}
   \begin{tabular}{ll}
   kind & canonical name and type \\\hline
-  selector & @{text "member: \<beta> \<rightarrow> \<alpha> \<rightarrow> bool"} \\
-  update & @{text "insert: \<alpha> \<rightarrow> \<beta> \<rightarrow> \<beta>"} \\
+  selector & \<open>member: \<beta> \<rightarrow> \<alpha> \<rightarrow> bool\<close> \\
+  update & \<open>insert: \<alpha> \<rightarrow> \<beta> \<rightarrow> \<beta>\<close> \\
   \end{tabular}
   \end{center}
 
-  Given a container @{text "B: \<beta>"}, the partially applied @{text
-  "member B"} is a predicate over elements @{text "\<alpha> \<rightarrow> bool"}, and
+  Given a container \<open>B: \<beta>\<close>, the partially applied \<open>member B\<close>
+  is a predicate over elements \<open>\<alpha> \<rightarrow> bool\<close>, and
   thus represents the intended denotation directly.  It is customary
   to pass the abstract predicate to further operations, not the
   concrete container.  The argument order makes it easy to use other
-  combinators: @{text "forall (member B) list"} will check a list of
-  elements for membership in @{text "B"} etc. Often the explicit
-  @{text "list"} is pointless and can be contracted to @{text "forall
-  (member B)"} to get directly a predicate again.
+  combinators: \<open>forall (member B) list\<close> will check a list of
+  elements for membership in \<open>B\<close> etc. Often the explicit
+  \<open>list\<close> is pointless and can be contracted to \<open>forall
+  (member B)\<close> to directly obtain a predicate again.
 
   In contrast, an update operation varies the container, so it moves
-  to the right: @{text "insert a"} is a function @{text "\<beta> \<rightarrow> \<beta>"} to
-  insert a value @{text "a"}.  These can be composed naturally as
-  @{text "insert c \<circ> insert b \<circ> insert a"}.  The slightly awkward
+  to the right: \<open>insert a\<close> is a function \<open>\<beta> \<rightarrow> \<beta>\<close> to
+  insert a value \<open>a\<close>.  These can be composed naturally as
+  \<open>insert c \<circ> insert b \<circ> insert a\<close>.  The slightly awkward
   inversion of the composition order is due to conventional
   mathematical notation, which can be easily amended as explained
   below.
@@ -798,9 +796,9 @@
 subsection \<open>Forward application and composition\<close>
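+text \<open>
+  A sketch of this scheme with plain lists as container, using the
+  Isabelle/ML list operations \<open>member\<close> and \<open>insert\<close>:
+\<close>
+
+ML \<open>
+  val B = [1, 2, 3];
+  val pred = member (op =) B;                       (*partially applied: int -> bool*)
+  val ok = forall pred [1, 2];                      (*true*)
+  val B' = (insert (op =) 5 o insert (op =) 4) B;   (*composed updates*)
+\<close>
+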
 
 text \<open>Regular function application and infix notation works best for
-  relatively deeply structured expressions, e.g.\ @{text "h (f x y + g
-  z)"}.  The important special case of \<^emph>\<open>linear transformation\<close>
-  applies a cascade of functions @{text "f\<^sub>n (\<dots> (f\<^sub>1 x))"}.  This
+  relatively deeply structured expressions, e.g.\ \<open>h (f x y + g
+  z)\<close>.  The important special case of \<^emph>\<open>linear transformation\<close>
+  applies a cascade of functions \<open>f\<^sub>n (\<dots> (f\<^sub>1 x))\<close>.  This
   becomes hard to read and maintain if the functions are themselves
   given as complex expressions.  The notation can be significantly
   improved by introducing \<^emph>\<open>forward\<close> versions of application and
@@ -808,14 +806,13 @@
 
   \<^medskip>
   \begin{tabular}{lll}
-  @{text "x |> f"} & @{text "\<equiv>"} & @{text "f x"} \\
-  @{text "(f #> g) x"} & @{text "\<equiv>"} & @{text "x |> f |> g"} \\
+  \<open>x |> f\<close> & \<open>\<equiv>\<close> & \<open>f x\<close> \\
+  \<open>(f #> g) x\<close> & \<open>\<equiv>\<close> & \<open>x |> f |> g\<close> \\
   \end{tabular}
   \<^medskip>
 
-  This enables to write conveniently @{text "x |> f\<^sub>1 |> \<dots> |> f\<^sub>n"} or
-  @{text "f\<^sub>1 #> \<dots> #> f\<^sub>n"} for its functional abstraction over @{text
-  "x"}.
+  This makes it convenient to write \<open>x |> f\<^sub>1 |> \<dots> |> f\<^sub>n\<close> or
+  \<open>f\<^sub>1 #> \<dots> #> f\<^sub>n\<close> for its functional abstraction over \<open>x\<close>.
 
   \<^medskip>
   There is an additional set of combinators to accommodate
@@ -824,8 +821,8 @@
 
   \<^medskip>
   \begin{tabular}{lll}
-  @{text "(x, y) |-> f"} & @{text "\<equiv>"} & @{text "f x y"} \\
-  @{text "(f #-> g) x"} & @{text "\<equiv>"} & @{text "x |> f |-> g"} \\
+  \<open>(x, y) |-> f\<close> & \<open>\<equiv>\<close> & \<open>f x y\<close> \\
+  \<open>(f #-> g) x\<close> & \<open>\<equiv>\<close> & \<open>x |> f |-> g\<close> \\
   \end{tabular}
   \<^medskip>
 \<close>
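+
+text \<open>
+  A sketch of these combinators on trivial functions:
+\<close>
+
+ML \<open>
+  val a = 1 |> (fn x => x + 1) |> (fn x => x * 2);   (*4*)
+  val f = (fn x => x + 1) #> (fn x => x * 2);        (*the same pipeline as a function*)
+  val b = (1, 2) |-> (fn x => fn y => x + y);        (*3*)
+\<close>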
@@ -842,29 +839,25 @@
 
 subsection \<open>Canonical iteration\<close>
 
-text \<open>As explained above, a function @{text "f: \<alpha> \<rightarrow> \<beta> \<rightarrow> \<beta>"} can be
-  understood as update on a configuration of type @{text "\<beta>"},
-  parameterized by an argument of type @{text "\<alpha>"}.  Given @{text "a: \<alpha>"}
-  the partial application @{text "(f a): \<beta> \<rightarrow> \<beta>"} operates
-  homogeneously on @{text "\<beta>"}.  This can be iterated naturally over a
-  list of parameters @{text "[a\<^sub>1, \<dots>, a\<^sub>n]"} as @{text "f a\<^sub>1 #> \<dots> #> f a\<^sub>n"}.
-  The latter expression is again a function @{text "\<beta> \<rightarrow> \<beta>"}.
-  It can be applied to an initial configuration @{text "b: \<beta>"} to
-  start the iteration over the given list of arguments: each @{text
-  "a"} in @{text "a\<^sub>1, \<dots>, a\<^sub>n"} is applied consecutively by updating a
+text \<open>As explained above, a function \<open>f: \<alpha> \<rightarrow> \<beta> \<rightarrow> \<beta>\<close> can be
+  understood as update on a configuration of type \<open>\<beta>\<close>,
+  parameterized by an argument of type \<open>\<alpha>\<close>.  Given \<open>a: \<alpha>\<close>
+  the partial application \<open>(f a): \<beta> \<rightarrow> \<beta>\<close> operates
+  homogeneously on \<open>\<beta>\<close>.  This can be iterated naturally over a
+  list of parameters \<open>[a\<^sub>1, \<dots>, a\<^sub>n]\<close> as \<open>f a\<^sub>1 #> \<dots> #> f a\<^sub>n\<close>.
+  The latter expression is again a function \<open>\<beta> \<rightarrow> \<beta>\<close>.
+  It can be applied to an initial configuration \<open>b: \<beta>\<close> to
+  start the iteration over the given list of arguments: each \<open>a\<close>
+  in \<open>a\<^sub>1, \<dots>, a\<^sub>n\<close> is applied consecutively by updating a
   cumulative configuration.
 
-  The @{text fold} combinator in Isabelle/ML lifts a function @{text
-  "f"} as above to its iterated version over a list of arguments.
-  Lifting can be repeated, e.g.\ @{text "(fold \<circ> fold) f"} iterates
+  The \<open>fold\<close> combinator in Isabelle/ML lifts a function \<open>f\<close>
+  as above to its iterated version over a list of arguments.
+  Lifting can be repeated, e.g.\ \<open>(fold \<circ> fold) f\<close> iterates
   over a list of lists as expected.
 
-  The variant @{text "fold_rev"} works inside-out over the list of
-  arguments, such that @{text "fold_rev f \<equiv> fold f \<circ> rev"} holds.
-
-  The @{text "fold_map"} combinator essentially performs @{text
-  "fold"} and @{text "map"} simultaneously: each application of @{text
-  "f"} produces an updated configuration together with a side-result;
+  The variant \<open>fold_rev\<close> works inside-out over the list of
+  arguments, such that \<open>fold_rev f \<equiv> fold f \<circ> rev\<close> holds.
+
+  The \<open>fold_map\<close> combinator essentially performs \<open>fold\<close> and
+  \<open>map\<close> simultaneously: each application of \<open>f\<close> produces an
+  updated configuration together with a side-result;
   the iteration collects all such side-results as a separate list.
 \<close>
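+
+text \<open>
+  A small sketch of the three combinators on lists of integers:
+\<close>
+
+ML \<open>
+  val sum = fold (fn a => fn b => a + b) [1, 2, 3] 0;   (*6*)
+  val xs = fold_rev cons [1, 2, 3] [];                  (*[1, 2, 3]*)
+  val (ys, total) =
+    fold_map (fn a => fn b => (2 * a, a + b)) [1, 2, 3] 0;   (*([2, 4, 6], 6)*)
+\<close>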
 
@@ -875,20 +868,19 @@
   @{index_ML fold_map: "('a -> 'b -> 'c * 'b) -> 'a list -> 'b -> 'c list * 'b"} \\
   \end{mldecls}
 
-  \<^descr> @{ML fold}~@{text f} lifts the parametrized update function
-  @{text "f"} to a list of parameters.
-
-  \<^descr> @{ML fold_rev}~@{text "f"} is similar to @{ML fold}~@{text
-  "f"}, but works inside-out, as if the list would be reversed.
-
-  \<^descr> @{ML fold_map}~@{text "f"} lifts the parametrized update
-  function @{text "f"} (with side-result) to a list of parameters and
+  \<^descr> @{ML fold}~\<open>f\<close> lifts the parametrized update function
+  \<open>f\<close> to a list of parameters.
+
+  \<^descr> @{ML fold_rev}~\<open>f\<close> is similar to @{ML fold}~\<open>f\<close>,
+  but works inside-out, as if the list were reversed.
+
+  \<^descr> @{ML fold_map}~\<open>f\<close> lifts the parametrized update
+  function \<open>f\<close> (with side-result) to a list of parameters and
   cumulative side-results.
 
 
   \begin{warn}
   The literature on functional programming provides a confusing multitude of
-  combinators called @{text "foldl"}, @{text "foldr"} etc. SML97 provides its
+  combinators called \<open>foldl\<close>, \<open>foldr\<close> etc. SML97 provides its
   own variations as @{ML List.foldl} and @{ML List.foldr}, while the classic
   Isabelle library also has the historic @{ML Library.foldl} and @{ML
   Library.foldr}. To avoid unnecessary complication, all these historical
@@ -998,11 +990,11 @@
   @{index_ML error: "string -> 'a"} % FIXME Output.error_message (!?) \\
   \end{mldecls}
 
-  \<^descr> @{ML writeln}~@{text "text"} outputs @{text "text"} as regular
+  \<^descr> @{ML writeln}~\<open>text\<close> outputs \<open>text\<close> as regular
   message.  This is the primary message output operation of Isabelle
   and should be used by default.
 
-  \<^descr> @{ML tracing}~@{text "text"} outputs @{text "text"} as special
+  \<^descr> @{ML tracing}~\<open>text\<close> outputs \<open>text\<close> as special
   tracing message, indicating potential high-volume output to the
   front-end (hundreds or thousands of messages issued by a single
   command).  The idea is to allow the user-interface to downgrade the
@@ -1012,17 +1004,16 @@
   output, e.g.\ switch to a different output window.  So this channel
   should not be used for regular output.
 
-  \<^descr> @{ML warning}~@{text "text"} outputs @{text "text"} as
+  \<^descr> @{ML warning}~\<open>text\<close> outputs \<open>text\<close> as a
   warning, which typically means some extra emphasis on the front-end
   side (color highlighting, icons, etc.).
 
-  \<^descr> @{ML error}~@{text "text"} raises exception @{ML ERROR}~@{text
-  "text"} and thus lets the Isar toplevel print @{text "text"} on the
+  \<^descr> @{ML error}~\<open>text\<close> raises exception @{ML ERROR}~\<open>text\<close> and thus lets the Isar toplevel print \<open>text\<close> on the
   error channel, which typically means some extra emphasis on the
   front-end side (color highlighting, icons, etc.).
 
   This assumes that the exception is not handled before the command
-  terminates.  Handling exception @{ML ERROR}~@{text "text"} is a
+  terminates.  Handling exception @{ML ERROR}~\<open>text\<close> is a
   perfectly legal alternative: it means that the error is absorbed
   without any message output.
 
@@ -1107,7 +1098,7 @@
 
   It is considered bad style to refer to internal function names or
   values in ML source notation in user error messages.  Do not use
-  @{text "@{make_string}"} nor @{text "@{here}"}!
+  \<open>@{make_string}\<close> nor \<open>@{here}\<close>!
 
   Grammatical correctness of error messages can be improved by
   \<^emph>\<open>omitting\<close> final punctuation: messages are often concatenated
@@ -1177,31 +1168,30 @@
   @{index_ML Runtime.exn_trace: "(unit -> 'a) -> 'a"} \\
   \end{mldecls}
 
-  \<^descr> @{ML try}~@{text "f x"} makes the partiality of evaluating
-  @{text "f x"} explicit via the option datatype.  Interrupts are
+  \<^descr> @{ML try}~\<open>f x\<close> makes the partiality of evaluating
+  \<open>f x\<close> explicit via the option datatype.  Interrupts are
   \<^emph>\<open>not\<close> handled here, i.e.\ this form serves as safe replacement
-  for the \<^emph>\<open>unsafe\<close> version @{ML_text "(SOME"}~@{text "f
-  x"}~@{ML_text "handle _ => NONE)"} that is occasionally seen in
+  for the \<^emph>\<open>unsafe\<close> version @{ML_text "(SOME"}~\<open>f
+  x\<close>~@{ML_text "handle _ => NONE)"} that is occasionally seen in
   books about SML97, but not in Isabelle/ML.
 
   \<^descr> @{ML can} is similar to @{ML try} with more abstract result.
 
-  \<^descr> @{ML ERROR}~@{text "msg"} represents user errors; this
+  \<^descr> @{ML ERROR}~\<open>msg\<close> represents user errors; this
   exception is normally raised indirectly via the @{ML error} function
   (see \secref{sec:message-channels}).
 
-  \<^descr> @{ML Fail}~@{text "msg"} represents general program failures.
+  \<^descr> @{ML Fail}~\<open>msg\<close> represents general program failures.
 
   \<^descr> @{ML Exn.is_interrupt} identifies interrupts robustly, without
   mentioning concrete exception constructors in user code.  Handled
   interrupts need to be re-raised promptly!
 
-  \<^descr> @{ML reraise}~@{text "exn"} raises exception @{text "exn"}
+  \<^descr> @{ML reraise}~\<open>exn\<close> raises exception \<open>exn\<close>
   while preserving its implicit position information (if possible,
   depending on the ML platform).
 
-  \<^descr> @{ML Runtime.exn_trace}~@{ML_text "(fn () =>"}~@{text
-  "e"}@{ML_text ")"} evaluates expression @{text "e"} while printing
+  \<^descr> @{ML Runtime.exn_trace}~@{ML_text "(fn () =>"}~\<open>e\<close>@{ML_text ")"} evaluates expression \<open>e\<close> while printing
   a full trace of its stack of nested exceptions (if possible,
   depending on the ML platform).
 
@@ -1211,10 +1201,10 @@
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "assert"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "assert"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
-  \<^descr> @{text "@{assert}"} inlines a function
+  \<^descr> \<open>@{assert}\<close> inlines a function
   @{ML_type "bool -> unit"} that raises @{ML Fail} if the argument is
   @{ML false}.  Due to inlining the source position of failed
   assertions is included in the error output.
@@ -1230,30 +1220,26 @@
   in itself a small string, which has either one of the following
   forms:
 
-  \<^enum> a single ASCII character ``@{text "c"}'', for example
+  \<^enum> a single ASCII character ``\<open>c\<close>'', for example
   ``@{verbatim a}'',
 
   \<^enum> a codepoint according to UTF-8 (non-ASCII byte sequence),
 
-  \<^enum> a regular symbol ``@{verbatim \<open>\\<close>}@{verbatim "<"}@{text
-  "ident"}@{verbatim ">"}'', for example ``@{verbatim "\<alpha>"}'',
-
-  \<^enum> a control symbol ``@{verbatim \<open>\\<close>}@{verbatim "<^"}@{text
-  "ident"}@{verbatim ">"}'', for example ``@{verbatim "\<^bold>"}'',
-
-  \<^enum> a raw symbol ``@{verbatim \<open>\\<close>}@{verbatim "<^raw:"}@{text
-  text}@{verbatim ">"}'' where @{text text} consists of printable characters
+  \<^enum> a regular symbol ``@{verbatim \<open>\\<close>}@{verbatim "<"}\<open>ident\<close>@{verbatim ">"}'', for example ``@{verbatim "\<alpha>"}'',
+
+  \<^enum> a control symbol ``@{verbatim \<open>\\<close>}@{verbatim "<^"}\<open>ident\<close>@{verbatim ">"}'', for example ``@{verbatim "\<^bold>"}'',
+
+  \<^enum> a raw symbol ``@{verbatim \<open>\\<close>}@{verbatim "<^raw:"}\<open>text\<close>@{verbatim ">"}'' where \<open>text\<close> consists of printable characters
   excluding ``@{verbatim "."}'' and ``@{verbatim ">"}'', for example
   ``@{verbatim "\<^raw:$\sum_{i = 1}^n$>"}'',
 
   \<^enum> a numbered raw control symbol ``@{verbatim \<open>\\<close>}@{verbatim
-  "<^raw"}@{text n}@{verbatim ">"}, where @{text n} consists of digits, for
+  "<^raw"}\<open>n\<close>@{verbatim ">"}, where \<open>n\<close> consists of digits, for
   example ``@{verbatim "\<^raw42>"}''.
 
 
-  The @{text "ident"} syntax for symbol names is @{text "letter
-  (letter | digit)\<^sup>*"}, where @{text "letter = A..Za..z"} and @{text
-  "digit = 0..9"}.  There are infinitely many regular symbols and
+  The \<open>ident\<close> syntax for symbol names is \<open>letter
+  (letter | digit)\<^sup>*\<close>, where \<open>letter = A..Za..z\<close> and \<open>digit = 0..9\<close>.  There are infinitely many regular symbols and
   control symbols, but a fixed collection of standard symbols is
   treated specifically.  For example, ``@{verbatim "\<alpha>"}'' is
   classified as a letter, which means it may occur within regular
@@ -1269,8 +1255,8 @@
   \<^medskip>
   Output of Isabelle symbols depends on the print mode. For example,
   the standard {\LaTeX} setup of the Isabelle document preparation system
-  would present ``@{verbatim "\<alpha>"}'' as @{text "\<alpha>"}, and ``@{verbatim
-  "\<^bold>\<alpha>"}'' as @{text "\<^bold>\<alpha>"}. On-screen rendering usually works by mapping a
+  would present ``@{verbatim "\<alpha>"}'' as \<open>\<alpha>\<close>, and ``@{verbatim
+  "\<^bold>\<alpha>"}'' as \<open>\<^bold>\<alpha>\<close>. On-screen rendering usually works by mapping a
   finite subset of Isabelle symbols to suitable Unicode characters.
 \<close>
 
@@ -1291,7 +1277,7 @@
   \<^descr> Type @{ML_type "Symbol.symbol"} represents individual Isabelle
   symbols.
 
-  \<^descr> @{ML "Symbol.explode"}~@{text "str"} produces a symbol list
+  \<^descr> @{ML "Symbol.explode"}~\<open>str\<close> produces a symbol list
   from the packed form.  This function supersedes @{ML
   "String.explode"} for virtually all purposes of manipulating text in
   Isabelle!\footnote{The runtime overhead for exploded strings is
@@ -1392,7 +1378,7 @@
   @{assert} (size s = 4);
 \<close>
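+
+text %mlex \<open>In contrast to the raw character count above, @{ML
+  Symbol.explode} recovers the single Isabelle symbol from the same
+  packed string (a small sketch):\<close>
+
+ML \<open>
+  @{assert} (Symbol.explode "\<A>" = ["\<A>"]);
+  @{assert} (length (Symbol.explode "\<A>x") = 2);
+\<close>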
 
-text \<open>Note that in Unicode renderings of the symbol @{text "\<A>"},
+text \<open>Note that in Unicode renderings of the symbol \<open>\<A>\<close>,
   variations of encodings like UTF-8 or UTF-16 pose delicate questions
   about the multi-byte representations of its codepoint, which is outside
   of the 16-bit address space of the original Unicode standard from
@@ -1435,8 +1421,7 @@
   to the SML97 basis library definition.  This is adequate for
   internal ML operations, but awkward in concrete time specifications.
 
-  \<^descr> @{ML seconds}~@{text "s"} turns the concrete scalar @{text
-  "s"} (measured in seconds) into an abstract time value.  Floating
+  \<^descr> @{ML seconds}~\<open>s\<close> turns the concrete scalar \<open>s\<close> (measured in seconds) into an abstract time value.  Floating
   point numbers are easy to use as configuration options in the
   context (see \secref{sec:config-options}) or system options that
   are maintained externally.
@@ -1479,7 +1464,7 @@
   @{index_ML update: "('a * 'a -> bool) -> 'a -> 'a list -> 'a list"} \\
   \end{mldecls}
 
-  \<^descr> @{ML cons}~@{text "x xs"} evaluates to @{text "x :: xs"}.
+  \<^descr> @{ML cons}~\<open>x xs\<close> evaluates to \<open>x :: xs\<close>.
 
   Tupled infix operators are a historical accident in Standard ML.
   The curried @{ML cons} amends this, but it should be only used when
@@ -1559,9 +1544,9 @@
   Note that a function called @{verbatim lookup} is obliged to express its
   partiality via an explicit option element.  There is no choice to
   raise an exception, without changing the name to something like
-  @{text "the_element"} or @{text "get"}.
-
-  The @{text "defined"} operation is essentially a contraction of @{ML
+  \<open>the_element\<close> or \<open>get\<close>.
+
+  The \<open>defined\<close> operation is essentially a contraction of @{ML
   is_some} and @{verbatim "lookup"}, but this is sufficiently frequent to
   justify its independent existence.  This also gives the
   implementation some opportunity for peep-hole optimization.
@@ -1717,7 +1702,7 @@
   synchronization, as long as each invocation gets its own copy and the
   tool itself is single-threaded.
 
-  \<^item> Avoid raw output on @{text "stdout"} or @{text "stderr"}.  The
+  \<^item> Avoid raw output on \<open>stdout\<close> or \<open>stderr\<close>.  The
   Poly/ML library is thread-safe for each individual output operation,
   but the ordering of parallel invocations is arbitrary.  This means
   raw output will appear on some system console with unpredictable
@@ -1748,11 +1733,11 @@
   @{index_ML serial_string: "unit -> string"} \\
   \end{mldecls}
 
-  \<^descr> @{ML File.tmp_path}~@{text "path"} relocates the base
-  component of @{text "path"} into the unique temporary directory of
+  \<^descr> @{ML File.tmp_path}~\<open>path\<close> relocates the base
+  component of \<open>path\<close> into the unique temporary directory of
   the running Isabelle/ML process.
 
-  \<^descr> @{ML serial_string}~@{text "()"} creates a new serial number
+  \<^descr> @{ML serial_string}~\<open>()\<close> creates a new serial number
   that is unique over the runtime of the Isabelle/ML process.
 \<close>
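+
+text %mlex \<open>A small sketch: consecutive invocations always produce
+  distinct serial strings.\<close>
+
+ML \<open>
+  @{assert} (serial_string () <> serial_string ());
+\<close>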
 
@@ -1794,19 +1779,18 @@
   \<^descr> Type @{ML_type "'a Synchronized.var"} represents synchronized
   variables with state of type @{ML_type 'a}.
 
-  \<^descr> @{ML Synchronized.var}~@{text "name x"} creates a synchronized
-  variable that is initialized with value @{text "x"}.  The @{text
-  "name"} is used for tracing.
-
-  \<^descr> @{ML Synchronized.guarded_access}~@{text "var f"} lets the
-  function @{text "f"} operate within a critical section on the state
-  @{text "x"} as follows: if @{text "f x"} produces @{ML NONE}, it
+  \<^descr> @{ML Synchronized.var}~\<open>name x\<close> creates a synchronized
+  variable that is initialized with value \<open>x\<close>.  The \<open>name\<close> is used for tracing.
+
+  \<^descr> @{ML Synchronized.guarded_access}~\<open>var f\<close> lets the
+  function \<open>f\<close> operate within a critical section on the state
+  \<open>x\<close> as follows: if \<open>f x\<close> produces @{ML NONE}, it
   continues to wait on the internal condition variable, expecting that
   some other thread will eventually change the content in a suitable
-  manner; if @{text "f x"} produces @{ML SOME}~@{text "(y, x')"} it is
-  satisfied and assigns the new state value @{text "x'"}, broadcasts a
+  manner; if \<open>f x\<close> produces @{ML SOME}~\<open>(y, x')\<close> it is
+  satisfied and assigns the new state value \<open>x'\<close>, broadcasts a
   signal to all waiting threads on the associated condition variable,
-  and returns the result @{text "y"}.
+  and returns the result \<open>y\<close>.
 
 
   There are some further variants of the @{ML
@@ -1864,10 +1848,10 @@
   occur routinely, and special care is required to tell them apart ---
   the static type-system of SML is only of limited help here.
 
-  The first form is more intuitive: some combinator @{text "(unit ->
-  'a) -> 'a"} applies the given function to @{text "()"} to initiate
+  The first form is more intuitive: some combinator \<open>(unit ->
+  'a) -> 'a\<close> applies the given function to \<open>()\<close> to initiate
   the postponed evaluation process.  The second form is more flexible:
-  some combinator @{text "('a -> 'b) -> 'a -> 'b"} acts like a
+  some combinator \<open>('a -> 'b) -> 'a -> 'b\<close> acts like a
   modified form of function application; several such combinators may
   be cascaded to modify a given function, before it is ultimately
   applied to some argument.
@@ -1875,7 +1859,7 @@
   \<^medskip>
   \<^emph>\<open>Reified results\<close> make the disjoint sum of regular
   values versus exceptional situations explicit as ML datatype:
-  @{text "'a result = Res of 'a | Exn of exn"}.  This is typically
+  \<open>'a result = Res of 'a | Exn of exn\<close>.  This is typically
   used for administrative purposes, to store the overall outcome of an
   evaluation process.
 
@@ -1905,8 +1889,8 @@
   ML results explicitly, with constructor @{ML Exn.Res} for regular
   values and @{ML "Exn.Exn"} for exceptions.
 
-  \<^descr> @{ML Exn.capture}~@{text "f x"} manages the evaluation of
-  @{text "f x"} such that exceptions are made explicit as @{ML
+  \<^descr> @{ML Exn.capture}~\<open>f x\<close> manages the evaluation of
+  \<open>f x\<close> such that exceptions are made explicit as @{ML
   "Exn.Exn"}.  Note that this includes physical interrupts (see also
   \secref{sec:exceptions}), so the same precautions apply to user
   code: interrupts must not be absorbed accidentally!
@@ -1915,11 +1899,11 @@
   Exn.capture}, but interrupts are immediately re-raised as required
   for user code.
 
-  \<^descr> @{ML Exn.release}~@{text "result"} releases the original
+  \<^descr> @{ML Exn.release}~\<open>result\<close> releases the original
   runtime result, exposing its regular value or raising the reified
   exception.
 
-  \<^descr> @{ML Par_Exn.release_all}~@{text "results"} combines results
+  \<^descr> @{ML Par_Exn.release_all}~\<open>results\<close> combines results
   that were produced independently (e.g.\ by parallel evaluation).  If
   all results are regular values, that list is returned.  Otherwise,
   the collection of all exceptions is raised, wrapped-up as collective
@@ -1938,8 +1922,8 @@
 
 text \<open>
   Algorithmic skeletons are combinators that operate on lists in
-  parallel, in the manner of well-known @{text map}, @{text exists},
-  @{text forall} etc.  Management of futures (\secref{sec:futures})
+  parallel, in the manner of well-known \<open>map\<close>, \<open>exists\<close>,
+  \<open>forall\<close> etc.  Management of futures (\secref{sec:futures})
   and their results as reified exceptions is wrapped up into simple
   programming interfaces that resemble the sequential versions.
 
@@ -1958,19 +1942,19 @@
   @{index_ML Par_List.get_some: "('a -> 'b option) -> 'a list -> 'b option"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Par_List.map}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"} is like @{ML
-  "map"}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"}, but the evaluation of @{text "f x\<^sub>i"}
-  for @{text "i = 1, \<dots>, n"} is performed in parallel.
-
-  An exception in any @{text "f x\<^sub>i"} cancels the overall evaluation
+  \<^descr> @{ML Par_List.map}~\<open>f [x\<^sub>1, \<dots>, x\<^sub>n]\<close> is like @{ML
+  "map"}~\<open>f [x\<^sub>1, \<dots>, x\<^sub>n]\<close>, but the evaluation of \<open>f x\<^sub>i\<close>
+  for \<open>i = 1, \<dots>, n\<close> is performed in parallel.
+
+  An exception in any \<open>f x\<^sub>i\<close> cancels the overall evaluation
   process.  The final result is produced via @{ML
   Par_Exn.release_first} as explained above, which means the first
   program exception that happened to occur in the parallel evaluation
   is propagated, and all other failures are ignored.
 
-  \<^descr> @{ML Par_List.get_some}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"} produces some
-  @{text "f x\<^sub>i"} that is of the form @{text "SOME y\<^sub>i"}, if that
-  exists, otherwise @{text "NONE"}.  Thus it is similar to @{ML
+  \<^descr> @{ML Par_List.get_some}~\<open>f [x\<^sub>1, \<dots>, x\<^sub>n]\<close> produces some
+  \<open>f x\<^sub>i\<close> that is of the form \<open>SOME y\<^sub>i\<close>, if that
+  exists, otherwise \<open>NONE\<close>.  Thus it is similar to @{ML
   Library.get_first}, but subject to a non-deterministic parallel
   choice process.  The first successful result cancels the overall
   evaluation process; other exceptions are propagated as for @{ML
@@ -1997,16 +1981,15 @@
 subsection \<open>Lazy evaluation\<close>
 
 text \<open>
-  Classic lazy evaluation works via the @{text lazy}~/ @{text force} pair of
-  operations: @{text lazy} to wrap an unevaluated expression, and @{text
-  force} to evaluate it once and store its result persistently. Later
-  invocations of @{text force} retrieve the stored result without another
+  Classic lazy evaluation works via the \<open>lazy\<close>~/ \<open>force\<close> pair of
+  operations: \<open>lazy\<close> to wrap an unevaluated expression, and \<open>force\<close> to evaluate it once and store its result persistently. Later
+  invocations of \<open>force\<close> retrieve the stored result without another
   evaluation. Isabelle/ML refines this idea to accommodate the aspects of
   multi-threading, synchronous program exceptions and asynchronous interrupts.
 
-  The first thread that invokes @{text force} on an unfinished lazy value
+  The first thread that invokes \<open>force\<close> on an unfinished lazy value
   changes its state into a \<^emph>\<open>promise\<close> of the eventual result and starts
-  evaluating it. Any other threads that @{text force} the same lazy value in
+  evaluating it. Any other threads that \<open>force\<close> the same lazy value in
   the meantime need to wait for it to finish, by producing a regular result or
   program exception. If the evaluation attempt is interrupted, this event is
   propagated to all waiting threads and the lazy value is reset to its
@@ -2030,16 +2013,16 @@
   \<^descr> Type @{ML_type "'a lazy"} represents lazy values over type @{verbatim
   "'a"}.
 
-  \<^descr> @{ML Lazy.lazy}~@{text "(fn () => e)"} wraps the unevaluated
-  expression @{text e} as unfinished lazy value.
-
-  \<^descr> @{ML Lazy.value}~@{text a} wraps the value @{text a} as finished lazy
-  value.  When forced, it returns @{text a} without any further evaluation.
+  \<^descr> @{ML Lazy.lazy}~\<open>(fn () => e)\<close> wraps the unevaluated
+  expression \<open>e\<close> as unfinished lazy value.
+
+  \<^descr> @{ML Lazy.value}~\<open>a\<close> wraps the value \<open>a\<close> as finished lazy
+  value.  When forced, it returns \<open>a\<close> without any further evaluation.
 
   There is very low overhead for this proforma wrapping of strict values as
   lazy values.
 
-  \<^descr> @{ML Lazy.force}~@{text x} produces the result of the lazy value in a
+  \<^descr> @{ML Lazy.force}~\<open>x\<close> produces the result of the lazy value in a
   thread-safe manner as explained above. Thus it may cause the current thread
   to wait on a pending evaluation attempt by another thread.
 \<close>
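+
+text %mlex \<open>A small sketch of the lazy~/ force discipline: the
+  suspended expression is evaluated at most once. The unsynchronized
+  reference is for demonstration purposes only.\<close>
+
+ML \<open>
+  val counter = Unsynchronized.ref 0;
+  val x = Lazy.lazy (fn () => (counter := ! counter + 1; ! counter));
+  @{assert} (Lazy.force x = 1);
+  @{assert} (Lazy.force x = 1);  (*stored result, no re-evaluation*)
+  @{assert} (! counter = 1);
+\<close>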
@@ -2049,7 +2032,7 @@
 
 text \<open>
   Futures help to organize parallel execution in a value-oriented manner, with
-  @{text fork}~/ @{text join} as the main pair of operations, and some further
+  \<open>fork\<close>~/ \<open>join\<close> as the main pair of operations, and some further
   variants; see also @{cite "Wenzel:2009" and "Wenzel:2013:ITP"}. Unlike lazy
   values, futures are evaluated strictly and spontaneously on separate worker
   threads. Futures may be canceled, which leads to interrupts on running
@@ -2119,28 +2102,28 @@
   \<^descr> Type @{ML_type "'a future"} represents future values over type
   @{verbatim "'a"}.
 
-  \<^descr> @{ML Future.fork}~@{text "(fn () => e)"} registers the unevaluated
-  expression @{text e} as unfinished future value, to be evaluated eventually
+  \<^descr> @{ML Future.fork}~\<open>(fn () => e)\<close> registers the unevaluated
+  expression \<open>e\<close> as unfinished future value, to be evaluated eventually
   on the parallel worker-thread farm. This is a shorthand for @{ML
   Future.forks} below, with default parameters and a single expression.
 
-  \<^descr> @{ML Future.forks}~@{text "params exprs"} is the general interface to
-  fork several futures simultaneously. The @{text params} consist of the
+  \<^descr> @{ML Future.forks}~\<open>params exprs\<close> is the general interface to
+  fork several futures simultaneously. The \<open>params\<close> consist of the
   following fields:
 
-    \<^item> @{text "name : string"} (default @{ML "\"\""}) specifies a common name
+    \<^item> \<open>name : string\<close> (default @{ML "\"\""}) specifies a common name
     for the tasks of the forked futures, which serves diagnostic purposes.
 
-    \<^item> @{text "group : Future.group option"} (default @{ML NONE}) specifies
+    \<^item> \<open>group : Future.group option\<close> (default @{ML NONE}) specifies
     an optional task group for the forked futures. @{ML NONE} means that a new
     sub-group of the current worker-thread task context is created. If this is
     not a worker thread, the group will be a new root in the group hierarchy.
 
-    \<^item> @{text "deps : Future.task list"} (default @{ML "[]"}) specifies
+    \<^item> \<open>deps : Future.task list\<close> (default @{ML "[]"}) specifies
     dependencies on other future tasks, i.e.\ the adjacency relation in the
     global task queue. Dependencies on already finished tasks are ignored.
 
-    \<^item> @{text "pri : int"} (default @{ML 0}) specifies a priority within the
+    \<^item> \<open>pri : int\<close> (default @{ML 0}) specifies a priority within the
     task queue.
 
     Typically there is only little deviation from the default priority @{ML 0}.
@@ -2153,7 +2136,7 @@
     priority tasks that are queued later need to wait until this (or another)
     worker thread becomes free again.
 
-    \<^item> @{text "interrupts : bool"} (default @{ML true}) tells whether the
+    \<^item> \<open>interrupts : bool\<close> (default @{ML true}) tells whether the
     worker thread that processes the corresponding task is initially put into
     interruptible state. This state may change again while running, by modifying
     the thread attributes.
@@ -2162,7 +2145,7 @@
     the responsibility of the programmer that this special state is retained
     only briefly.
 
-  \<^descr> @{ML Future.join}~@{text x} retrieves the value of an already finished
+  \<^descr> @{ML Future.join}~\<open>x\<close> retrieves the value of an already finished
   future, which may lead to an exception, according to the result of its
   previous evaluation.
 
@@ -2181,12 +2164,11 @@
   some timeout.
 
   Whenever possible, static dependencies of futures should be specified
-  explicitly when forked (see @{text deps} above). Thus the evaluation can
+  explicitly when forked (see \<open>deps\<close> above). Thus the evaluation can
   work from the bottom up, without join conflicts and wait states.
 
-  \<^descr> @{ML Future.joins}~@{text xs} joins the given list of futures
-  simultaneously, which is more efficient than @{ML "map Future.join"}~@{text
-  xs}.
+  \<^descr> @{ML Future.joins}~\<open>xs\<close> joins the given list of futures
+  simultaneously, which is more efficient than @{ML "map Future.join"}~\<open>xs\<close>.
 
   Based on the dependency graph of tasks, the current thread takes over the
   responsibility to evaluate future expressions that are required for the main
@@ -2194,23 +2176,23 @@
   presently evaluated on other threads only happens as last resort, when no
   other unfinished futures are left over.
 
-  \<^descr> @{ML Future.value}~@{text a} wraps the value @{text a} as finished
+  \<^descr> @{ML Future.value}~\<open>a\<close> wraps the value \<open>a\<close> as finished
   future value, bypassing the worker-thread farm. When joined, it returns
-  @{text a} without any further evaluation.
+  \<open>a\<close> without any further evaluation.
 
   There is very low overhead for this proforma wrapping of strict values as
   futures.
 
-  \<^descr> @{ML Future.map}~@{text "f x"} is a fast-path implementation of @{ML
-  Future.fork}~@{text "(fn () => f ("}@{ML Future.join}~@{text "x))"}, which
+  \<^descr> @{ML Future.map}~\<open>f x\<close> is a fast-path implementation of @{ML
+  Future.fork}~\<open>(fn () => f (\<close>@{ML Future.join}~\<open>x))\<close>, which
   avoids the full overhead of the task queue and worker-thread farm as far as
-  possible. The function @{text f} is supposed to be some trivial
+  possible. The function \<open>f\<close> is supposed to be some trivial
   post-processing or projection of the future result.
 
-  \<^descr> @{ML Future.cancel}~@{text "x"} cancels the task group of the given
+  \<^descr> @{ML Future.cancel}~\<open>x\<close> cancels the task group of the given
   future, using @{ML Future.cancel_group} below.
 
-  \<^descr> @{ML Future.cancel_group}~@{text "group"} cancels all tasks of the
+  \<^descr> @{ML Future.cancel_group}~\<open>group\<close> cancels all tasks of the
   given task group for all time. Threads that are presently processing a task
   of the given group are interrupted: it may take some time until they are
   actually terminated. Tasks that are queued but not yet processed are
@@ -2218,12 +2200,11 @@
   invalidated, any further attempt to fork a future that belongs to it will
   yield a canceled result as well.
 
-  \<^descr> @{ML Future.promise}~@{text abort} registers a passive future with the
-  given @{text abort} operation: it is invoked when the future task group is
+  \<^descr> @{ML Future.promise}~\<open>abort\<close> registers a passive future with the
+  given \<open>abort\<close> operation: it is invoked when the future task group is
   canceled.
 
-  \<^descr> @{ML Future.fulfill}~@{text "x a"} finishes the passive future @{text
-  x} by the given value @{text a}. If the promise has already been canceled,
+  \<^descr> @{ML Future.fulfill}~\<open>x a\<close> finishes the passive future \<open>x\<close> by the given value \<open>a\<close>. If the promise has already been canceled,
   the attempt to fulfill it causes an exception.
 \<close>
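+
+text %mlex \<open>A small sketch of fork~/ map~/ join on a trivial
+  expression:\<close>
+
+ML \<open>
+  val x = Future.fork (fn () => 6 * 7);
+  val y = Future.map (fn a => a + 1) x;
+  @{assert} (Future.join y = 43);
+\<close>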
 
--- a/src/Doc/Implementation/Prelim.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Prelim.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -13,13 +13,13 @@
   results etc.).
 
   For example, derivations within the Isabelle/Pure logic can be
-  described as a judgment @{text "\<Gamma> \<turnstile>\<^sub>\<Theta> \<phi>"}, which means that a
-  proposition @{text "\<phi>"} is derivable from hypotheses @{text "\<Gamma>"}
-  within the theory @{text "\<Theta>"}.  There are logical reasons for
-  keeping @{text "\<Theta>"} and @{text "\<Gamma>"} separate: theories can be
+  described as a judgment \<open>\<Gamma> \<turnstile>\<^sub>\<Theta> \<phi>\<close>, which means that a
+  proposition \<open>\<phi>\<close> is derivable from hypotheses \<open>\<Gamma>\<close>
+  within the theory \<open>\<Theta>\<close>.  There are logical reasons for
+  keeping \<open>\<Theta>\<close> and \<open>\<Gamma>\<close> separate: theories can be
   liberal about supporting type constructors and schematic
   polymorphism of constants and axioms, while the inner calculus of
-  @{text "\<Gamma> \<turnstile> \<phi>"} is strictly limited to Simple Type Theory (with
+  \<open>\<Gamma> \<turnstile> \<phi>\<close> is strictly limited to Simple Type Theory (with
   fixed type variables in the assumptions).
 
   \<^medskip>
@@ -27,20 +27,20 @@
   principles:
 
   \<^item> Transfer: monotonicity of derivations admits results to be
-  transferred into a \<^emph>\<open>larger\<close> context, i.e.\ @{text "\<Gamma> \<turnstile>\<^sub>\<Theta>
-  \<phi>"} implies @{text "\<Gamma>' \<turnstile>\<^sub>\<Theta>\<^sub>' \<phi>"} for contexts @{text "\<Theta>'
-  \<supseteq> \<Theta>"} and @{text "\<Gamma>' \<supseteq> \<Gamma>"}.
+  transferred into a \<^emph>\<open>larger\<close> context, i.e.\ \<open>\<Gamma> \<turnstile>\<^sub>\<Theta>
+  \<phi>\<close> implies \<open>\<Gamma>' \<turnstile>\<^sub>\<Theta>\<^sub>' \<phi>\<close> for contexts \<open>\<Theta>'
+  \<supseteq> \<Theta>\<close> and \<open>\<Gamma>' \<supseteq> \<Gamma>\<close>.
 
   \<^item> Export: discharge of hypotheses admits results to be exported
-  into a \<^emph>\<open>smaller\<close> context, i.e.\ @{text "\<Gamma>' \<turnstile>\<^sub>\<Theta> \<phi>"}
-  implies @{text "\<Gamma> \<turnstile>\<^sub>\<Theta> \<Delta> \<Longrightarrow> \<phi>"} where @{text "\<Gamma>' \<supseteq> \<Gamma>"} and
-  @{text "\<Delta> = \<Gamma>' - \<Gamma>"}.  Note that @{text "\<Theta>"} remains unchanged here,
-  only the @{text "\<Gamma>"} part is affected.
+  into a \<^emph>\<open>smaller\<close> context, i.e.\ \<open>\<Gamma>' \<turnstile>\<^sub>\<Theta> \<phi>\<close>
+  implies \<open>\<Gamma> \<turnstile>\<^sub>\<Theta> \<Delta> \<Longrightarrow> \<phi>\<close> where \<open>\<Gamma>' \<supseteq> \<Gamma>\<close> and
+  \<open>\<Delta> = \<Gamma>' - \<Gamma>\<close>.  Note that \<open>\<Theta>\<close> remains unchanged here,
+  only the \<open>\<Gamma>\<close> part is affected.
 
 
   \<^medskip>
   By modeling the main characteristics of the primitive
-  @{text "\<Theta>"} and @{text "\<Gamma>"} above, and abstracting over any
+  \<open>\<Theta>\<close> and \<open>\<Gamma>\<close> above, and abstracting over any
   particular logical content, we arrive at the fundamental notions of
   \<^emph>\<open>theory context\<close> and \<^emph>\<open>proof context\<close> in Isabelle/Isar.
   These implement a certain policy to manage arbitrary \<^emph>\<open>context
@@ -48,15 +48,14 @@
   data at compile time.
 
   The internal bootstrap process of Isabelle/Pure eventually reaches a
-  stage where certain data slots provide the logical content of @{text
-  "\<Theta>"} and @{text "\<Gamma>"} sketched above, but this does not stop there!
+  stage where certain data slots provide the logical content of \<open>\<Theta>\<close> and \<open>\<Gamma>\<close> sketched above, but this does not stop there!
   Various additional data slots support all kinds of mechanisms that
   are not necessarily part of the core logic.
 
   For example, there would be data for canonical introduction and
   elimination rules for arbitrary operators (depending on the
   object-logic and application), which enables users to perform
-  standard proof steps implicitly (cf.\ the @{text "rule"} method
+  standard proof steps implicitly (cf.\ the \<open>rule\<close> method
   @{cite "isabelle-isar-ref"}).
 
   \<^medskip>
@@ -80,28 +79,27 @@
   ancestor theories.  To this end, the system maintains a set of
   symbolic ``identification stamps'' within each theory.
 
-  The @{text "begin"} operation starts a new theory by importing several
+  The \<open>begin\<close> operation starts a new theory by importing several
   parent theories (with merged contents) and entering a special mode of
-  nameless incremental updates, until the final @{text "end"} operation is
+  nameless incremental updates, until the final \<open>end\<close> operation is
   performed.
 
   \<^medskip>
   The example in \figref{fig:ex-theory} below shows a theory
-  graph derived from @{text "Pure"}, with theory @{text "Length"}
-  importing @{text "Nat"} and @{text "List"}.  The body of @{text
-  "Length"} consists of a sequence of updates, resulting in locally a
+  graph derived from \<open>Pure\<close>, with theory \<open>Length\<close>
+  importing \<open>Nat\<close> and \<open>List\<close>.  The body of \<open>Length\<close> consists of a sequence of updates, resulting in locally a
   linear sub-theory relation for each intermediate step.
 
   \begin{figure}[htb]
   \begin{center}
   \begin{tabular}{rcccl}
-        &            & @{text "Pure"} \\
-        &            & @{text "\<down>"} \\
-        &            & @{text "FOL"} \\
+        &            & \<open>Pure\<close> \\
+        &            & \<open>\<down>\<close> \\
+        &            & \<open>FOL\<close> \\
         & $\swarrow$ &              & $\searrow$ & \\
-  @{text "Nat"} &    &              &            & @{text "List"} \\
+  \<open>Nat\<close> &    &              &            & \<open>List\<close> \\
         & $\searrow$ &              & $\swarrow$ \\
-        &            & @{text "Length"} \\
+        &            & \<open>Length\<close> \\
         &            & \multicolumn{3}{l}{~~@{keyword "begin"}} \\
         &            & $\vdots$~~ \\
         &            & \multicolumn{3}{l}{~~@{command "end"}} \\
@@ -128,30 +126,30 @@
 
   \<^descr> Type @{ML_type theory} represents theory contexts.
 
-  \<^descr> @{ML "Context.eq_thy"}~@{text "(thy\<^sub>1, thy\<^sub>2)"} check strict
+  \<^descr> @{ML "Context.eq_thy"}~\<open>(thy\<^sub>1, thy\<^sub>2)\<close> check strict
   identity of two theories.
 
-  \<^descr> @{ML "Context.subthy"}~@{text "(thy\<^sub>1, thy\<^sub>2)"} compares theories
+  \<^descr> @{ML "Context.subthy"}~\<open>(thy\<^sub>1, thy\<^sub>2)\<close> compares theories
   according to the intrinsic graph structure of the construction.
   This sub-theory relation is a nominal approximation of inclusion
-  (@{text "\<subseteq>"}) of the corresponding content (according to the
+  (\<open>\<subseteq>\<close>) of the corresponding content (according to the
   semantics of the ML modules that implement the data).
 
-  \<^descr> @{ML "Theory.begin_theory"}~@{text "name parents"} constructs
+  \<^descr> @{ML "Theory.begin_theory"}~\<open>name parents\<close> constructs
   a new theory based on the given parents.  This ML function is
   normally not invoked directly.
 
-  \<^descr> @{ML "Theory.parents_of"}~@{text "thy"} returns the direct
-  ancestors of @{text thy}.
+  \<^descr> @{ML "Theory.parents_of"}~\<open>thy\<close> returns the direct
+  ancestors of \<open>thy\<close>.
 
-  \<^descr> @{ML "Theory.ancestors_of"}~@{text "thy"} returns all
-  ancestors of @{text thy} (not including @{text thy} itself).
+  \<^descr> @{ML "Theory.ancestors_of"}~\<open>thy\<close> returns all
+  ancestors of \<open>thy\<close> (not including \<open>thy\<close> itself).
 \<close>
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "theory"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "theory_context"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "theory"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "theory_context"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -160,15 +158,15 @@
   @@{ML_antiquotation theory_context} nameref
   \<close>}
 
-  \<^descr> @{text "@{theory}"} refers to the background theory of the
+  \<^descr> \<open>@{theory}\<close> refers to the background theory of the
   current context --- as abstract value.
 
-  \<^descr> @{text "@{theory A}"} refers to an explicitly named ancestor
-  theory @{text "A"} of the background theory of the current context
+  \<^descr> \<open>@{theory A}\<close> refers to an explicitly named ancestor
+  theory \<open>A\<close> of the background theory of the current context
   --- as abstract value.
 
-  \<^descr> @{text "@{theory_context A}"} is similar to @{text "@{theory
-  A}"}, but presents the result as initial @{ML_type Proof.context}
+  \<^descr> \<open>@{theory_context A}\<close> is similar to \<open>@{theory
+  A}\<close>, but presents the result as initial @{ML_type Proof.context}
   (see also @{ML Proof_Context.init_global}).
 \<close>
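+
+text %mlex \<open>A small sketch that combines these antiquotations with the
+  operations above: \<open>Pure\<close> is an ancestor of the background
+  theory of the current context.\<close>
+
+ML \<open>
+  @{assert} (Context.subthy (@{theory Pure}, @{theory}));
+\<close>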
 
@@ -176,16 +174,16 @@
 subsection \<open>Proof context \label{sec:context-proof}\<close>
 
 text \<open>A proof context is a container for pure data that refers to
-  the theory from which it is derived. The @{text "init"} operation
+  the theory from which it is derived. The \<open>init\<close> operation
   creates a proof context from a given theory. There is an explicit
-  @{text "transfer"} operation to force resynchronization with updates
+  \<open>transfer\<close> operation to force resynchronization with updates
   to the background theory -- this is rarely required in practice.
 
   Entities derived in a proof context need to record logical
   requirements explicitly, since there is no separate context
   identification or symbolic inclusion as for theories.  For example,
   hypotheses used in primitive derivations (cf.\ \secref{sec:thms})
-  are recorded separately within the sequent @{text "\<Gamma> \<turnstile> \<phi>"}, just to
+  are recorded separately within the sequent \<open>\<Gamma> \<turnstile> \<phi>\<close>, just to
   make double sure.  Results could still leak into an alien proof
   context due to programming errors, but Isabelle/Isar includes some
   extra validity checks in critical positions, notably at the end of a
@@ -211,23 +209,22 @@
 
   \<^descr> Type @{ML_type Proof.context} represents proof contexts.
 
-  \<^descr> @{ML Proof_Context.init_global}~@{text "thy"} produces a proof
-  context derived from @{text "thy"}, initializing all data.
+  \<^descr> @{ML Proof_Context.init_global}~\<open>thy\<close> produces a proof
+  context derived from \<open>thy\<close>, initializing all data.
 
-  \<^descr> @{ML Proof_Context.theory_of}~@{text "ctxt"} selects the
-  background theory from @{text "ctxt"}.
+  \<^descr> @{ML Proof_Context.theory_of}~\<open>ctxt\<close> selects the
+  background theory from \<open>ctxt\<close>.
 
-  \<^descr> @{ML Proof_Context.transfer}~@{text "thy ctxt"} promotes the
-  background theory of @{text "ctxt"} to the super theory @{text
-  "thy"}.
+  \<^descr> @{ML Proof_Context.transfer}~\<open>thy ctxt\<close> promotes the
+  background theory of \<open>ctxt\<close> to the super theory \<open>thy\<close>.
 \<close>
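+
+text %mlex \<open>A small sketch of the round trip through these
+  operations:\<close>
+
+ML \<open>
+  val thy = @{theory};
+  val ctxt = Proof_Context.init_global thy;
+  @{assert} (Context.eq_thy (Proof_Context.theory_of ctxt, thy));
+\<close>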
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "context"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "context"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
-  \<^descr> @{text "@{context}"} refers to \<^emph>\<open>the\<close> context at
+  \<^descr> \<open>@{context}\<close> refers to \<^emph>\<open>the\<close> context at
   compile-time --- as abstract value.  Independently of (local) theory
   or proof mode, this always produces a meaningful result.
 
@@ -246,10 +243,9 @@
   and combinators for lifting operations on either component of the
   disjoint sum.
 
-  Moreover, there are total operations @{text "theory_of"} and @{text
-  "proof_of"} to convert a generic context into either kind: a theory
+  Moreover, there are total operations \<open>theory_of\<close> and \<open>proof_of\<close> to convert a generic context into either kind: a theory
   can always be selected from the sum, while a proof context might
-  have to be constructed by an ad-hoc @{text "init"} operation, which
+  have to be constructed by an ad-hoc \<open>init\<close> operation, which
   incurs a small runtime overhead.
 \<close>
 
@@ -264,12 +260,12 @@
   "theory"} and @{ML_type "Proof.context"}, with the datatype
   constructors @{ML "Context.Theory"} and @{ML "Context.Proof"}.
 
-  \<^descr> @{ML Context.theory_of}~@{text "context"} always produces a
-  theory from the generic @{text "context"}, using @{ML
+  \<^descr> @{ML Context.theory_of}~\<open>context\<close> always produces a
+  theory from the generic \<open>context\<close>, using @{ML
   "Proof_Context.theory_of"} as required.
 
-  \<^descr> @{ML Context.proof_of}~@{text "context"} always produces a
-  proof context from the generic @{text "context"}, using @{ML
+  \<^descr> @{ML Context.proof_of}~\<open>context\<close> always produces a
+  proof context from the generic \<open>context\<close>, using @{ML
   "Proof_Context.init_global"} as required (note that this re-initializes the
   context data with each invocation).
 \<close>
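+
+text %mlex \<open>A small sketch of the injections and projections between
+  the context types:\<close>
+
+ML \<open>
+  val ctxt = @{context};
+  @{assert} (Context.eq_thy
+    (Context.theory_of (Context.Proof ctxt), Proof_Context.theory_of ctxt));
+\<close>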
@@ -287,20 +283,19 @@
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "\<type> T"} & representing type \\
-  @{text "\<val> empty: T"} & empty default value \\
-  @{text "\<val> extend: T \<rightarrow> T"} & re-initialize on import \\
-  @{text "\<val> merge: T \<times> T \<rightarrow> T"} & join on import \\
+  \<open>\<type> T\<close> & representing type \\
+  \<open>\<val> empty: T\<close> & empty default value \\
+  \<open>\<val> extend: T \<rightarrow> T\<close> & re-initialize on import \\
+  \<open>\<val> merge: T \<times> T \<rightarrow> T\<close> & join on import \\
   \end{tabular}
   \<^medskip>
 
-  The @{text "empty"} value acts as initial default for \<^emph>\<open>any\<close>
-  theory that does not declare actual data content; @{text "extend"}
-  is acts like a unitary version of @{text "merge"}.
+  The \<open>empty\<close> value acts as initial default for \<^emph>\<open>any\<close>
+  theory that does not declare actual data content; \<open>extend\<close>
+  acts like a unitary version of \<open>merge\<close>.
 
-  Implementing @{text "merge"} can be tricky.  The general idea is
-  that @{text "merge (data\<^sub>1, data\<^sub>2)"} inserts those parts of @{text
-  "data\<^sub>2"} into @{text "data\<^sub>1"} that are not yet present, while
+  Implementing \<open>merge\<close> can be tricky.  The general idea is
+  that \<open>merge (data\<^sub>1, data\<^sub>2)\<close> inserts those parts of \<open>data\<^sub>2\<close> into \<open>data\<^sub>1\<close> that are not yet present, while
   keeping the general order of things.  The @{ML Library.merge}
   function on plain lists may serve as canonical template.
 
@@ -313,15 +308,15 @@
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "\<type> T"} & representing type \\
-  @{text "\<val> init: theory \<rightarrow> T"} & produce initial value \\
+  \<open>\<type> T\<close> & representing type \\
+  \<open>\<val> init: theory \<rightarrow> T\<close> & produce initial value \\
   \end{tabular}
   \<^medskip>
 
-  The @{text "init"} operation is supposed to produce a pure value
+  The \<open>init\<close> operation is supposed to produce a pure value
   from the given background theory and should be somehow
   ``immediate''.  Whenever a proof context is initialized, which
-  happens frequently, the the system invokes the @{text "init"}
+  happens frequently, the system invokes the \<open>init\<close>
   operation of \<^emph>\<open>all\<close> theory data slots ever declared.  This also
   means that one needs to be economic about the total number of proof
   data declarations in the system, i.e.\ each ML module should declare
@@ -330,19 +325,19 @@
   avoided!
 
   \paragraph{Generic data} provides a hybrid interface for both theory
-  and proof data.  The @{text "init"} operation for proof contexts is
+  and proof data.  The \<open>init\<close> operation for proof contexts is
   predefined to select the current data value from the background
   theory.
 
   \<^bigskip>
-  Any of the above data declarations over type @{text "T"}
+  Any of the above data declarations over type \<open>T\<close>
   result in an ML structure with the following signature:
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "get: context \<rightarrow> T"} \\
-  @{text "put: T \<rightarrow> context \<rightarrow> context"} \\
-  @{text "map: (T \<rightarrow> T) \<rightarrow> context \<rightarrow> context"} \\
+  \<open>get: context \<rightarrow> T\<close> \\
+  \<open>put: T \<rightarrow> context \<rightarrow> context\<close> \\
+  \<open>map: (T \<rightarrow> T) \<rightarrow> context \<rightarrow> context\<close> \\
   \end{tabular}
   \<^medskip>
 
@@ -360,15 +355,15 @@
   @{index_ML_functor Generic_Data} \\
   \end{mldecls}
 
-  \<^descr> @{ML_functor Theory_Data}@{text "(spec)"} declares data for
+  \<^descr> @{ML_functor Theory_Data}\<open>(spec)\<close> declares data for
   type @{ML_type theory} according to the specification provided as
   argument structure.  The resulting structure provides data init and
   access operations as described above.
 
-  \<^descr> @{ML_functor Proof_Data}@{text "(spec)"} is analogous to
+  \<^descr> @{ML_functor Proof_Data}\<open>(spec)\<close> is analogous to
   @{ML_functor Theory_Data} for type @{ML_type Proof.context}.
 
-  \<^descr> @{ML_functor Generic_Data}@{text "(spec)"} is analogous to
+  \<^descr> @{ML_functor Generic_Data}\<open>(spec)\<close> is analogous to
   @{ML_functor Theory_Data} for type @{ML_type Context.generic}.
 \<close>
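+
+text %mlex \<open>A minimal sketch of a theory data declaration; the
+  structure name and content are purely illustrative.\<close>
+
+ML \<open>
+  structure Demo_Data = Theory_Data
+  (
+    type T = string list;
+    val empty = [];
+    val extend = I;
+    val merge = Library.merge (op =);
+  );
+
+  (*update and inspect the slot within a (temporary) theory value*)
+  val thy' = Demo_Data.map (cons "demo") @{theory};
+  @{assert} (Demo_Data.get thy' = ["demo"]);
+\<close>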
 
@@ -515,16 +510,16 @@
   string Config.T"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Config.get}~@{text "ctxt config"} gets the value of
-  @{text "config"} in the given context.
+  \<^descr> @{ML Config.get}~\<open>ctxt config\<close> gets the value of
+  \<open>config\<close> in the given context.
 
-  \<^descr> @{ML Config.map}~@{text "config f ctxt"} updates the context
-  by updating the value of @{text "config"}.
+  \<^descr> @{ML Config.map}~\<open>config f ctxt\<close> updates the context
+  by updating the value of \<open>config\<close>.
 
-  \<^descr> @{text "config ="}~@{ML Attrib.setup_config_bool}~@{text "name
-  default"} creates a named configuration option of type @{ML_type
-  bool}, with the given @{text "default"} depending on the application
-  context.  The resulting @{text "config"} can be used to get/map its
+  \<^descr> \<open>config =\<close>~@{ML Attrib.setup_config_bool}~\<open>name
+  default\<close> creates a named configuration option of type @{ML_type
+  bool}, with the given \<open>default\<close> depending on the application
+  context.  The resulting \<open>config\<close> can be used to get/map its
   value in a given context.  There is an implicit update of the
   background theory that registers the option as attribute with some
   concrete syntax.
@@ -535,7 +530,7 @@
 \<close>
 
 text %mlex \<open>The following example shows how to declare and use a
-  Boolean configuration option called @{text "my_flag"} with constant
+  Boolean configuration option called \<open>my_flag\<close> with constant
   default value @{ML false}.\<close>
 
 ML \<open>
@@ -578,8 +573,8 @@
 
 text \<open>In principle, a name is just a string, but there are various
   conventions for representing additional structure.  For example,
-  ``@{text "Foo.bar.baz"}'' is considered as a long name consisting of
-  qualifier @{text "Foo.bar"} and base name @{text "baz"}.  The
+  ``\<open>Foo.bar.baz\<close>'' is considered as a long name consisting of
+  qualifier \<open>Foo.bar\<close> and base name \<open>baz\<close>.  The
   individual constituents of a name may have further substructure,
   e.g.\ the string ``@{verbatim \<alpha>}'' encodes as a single
   symbol (\secref{sec:symbols}).
@@ -610,31 +605,27 @@
   \<^emph>\<open>internal name\<close>, two underscores means \<^emph>\<open>Skolem name\<close>,
   three underscores means \<^emph>\<open>internal Skolem name\<close>.
 
-  For example, the basic name @{text "foo"} has the internal version
-  @{text "foo_"}, with Skolem versions @{text "foo__"} and @{text
-  "foo___"}, respectively.
+  For example, the basic name \<open>foo\<close> has the internal version
+  \<open>foo_\<close>, with Skolem versions \<open>foo__\<close> and \<open>foo___\<close>, respectively.
 
   These special versions provide copies of the basic name space, apart
   from anything that normally appears in the user text.  For example,
   system generated variables in Isar proof contexts are usually marked
-  as internal, which prevents mysterious names like @{text "xaa"} to
+  as internal, which prevents mysterious names like \<open>xaa\<close> to
   appear in human-readable text.
 
   \<^medskip>
   Manipulating binding scopes often requires on-the-fly
   renamings.  A \<^emph>\<open>name context\<close> contains a collection of already
-  used names.  The @{text "declare"} operation adds names to the
+  used names.  The \<open>declare\<close> operation adds names to the
   context.
 
-  The @{text "invents"} operation derives a number of fresh names from
+  The \<open>invents\<close> operation derives a number of fresh names from
   a given starting point.  For example, the first three names derived
-  from @{text "a"} are @{text "a"}, @{text "b"}, @{text "c"}.
+  from \<open>a\<close> are \<open>a\<close>, \<open>b\<close>, \<open>c\<close>.
 
-  The @{text "variants"} operation produces fresh names by
-  incrementing tentative names as base-26 numbers (with digits @{text
-  "a..z"}) until all clashes are resolved.  For example, name @{text
-  "foo"} results in variants @{text "fooa"}, @{text "foob"}, @{text
-  "fooc"}, \dots, @{text "fooaa"}, @{text "fooab"} etc.; each renaming
+  The \<open>variants\<close> operation produces fresh names by
+  incrementing tentative names as base-26 numbers (with digits \<open>a..z\<close>) until all clashes are resolved.  For example, name \<open>foo\<close> results in variants \<open>fooa\<close>, \<open>foob\<close>, \<open>fooc\<close>, \dots, \<open>fooaa\<close>, \<open>fooab\<close> etc.; each renaming
   step picks the next unused variant from this sequence.
 \<close>
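+
+text %mlex \<open>A small sketch of these operations; the concrete names
+  follow the description above.\<close>
+
+ML \<open>
+  @{assert} (Name.invent Name.context "a" 3 = ["a", "b", "c"]);
+
+  (*"foo" is already taken, so the first variant is "fooa"*)
+  val names = Name.declare "foo" Name.context;
+  @{assert} (#1 (Name.variant "foo" names) = "fooa");
+\<close>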
 
@@ -654,25 +645,24 @@
   @{index_ML Variable.names_of: "Proof.context -> Name.context"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Name.internal}~@{text "name"} produces an internal name
+  \<^descr> @{ML Name.internal}~\<open>name\<close> produces an internal name
   by adding one underscore.
 
-  \<^descr> @{ML Name.skolem}~@{text "name"} produces a Skolem name by
+  \<^descr> @{ML Name.skolem}~\<open>name\<close> produces a Skolem name by
   adding two underscores.
 
   \<^descr> Type @{ML_type Name.context} represents the context of already
   used names; the initial value is @{ML "Name.context"}.
 
-  \<^descr> @{ML Name.declare}~@{text "name"} enters a used name into the
+  \<^descr> @{ML Name.declare}~\<open>name\<close> enters a used name into the
   context.
 
-  \<^descr> @{ML Name.invent}~@{text "context name n"} produces @{text
-  "n"} fresh names derived from @{text "name"}.
+  \<^descr> @{ML Name.invent}~\<open>context name n\<close> produces \<open>n\<close> fresh names derived from \<open>name\<close>.
 
-  \<^descr> @{ML Name.variant}~@{text "name context"} produces a fresh
-  variant of @{text "name"}; the result is declared to the context.
+  \<^descr> @{ML Name.variant}~\<open>name context\<close> produces a fresh
+  variant of \<open>name\<close>; the result is declared to the context.
 
-  \<^descr> @{ML Variable.names_of}~@{text "ctxt"} retrieves the context
+  \<^descr> @{ML Variable.names_of}~\<open>ctxt\<close> retrieves the context
   of declared type and term variable names.  Projecting a proof
   context down to a primitive name context is occasionally useful when
   invoking lower-level operations.  Regular management of ``fresh
@@ -718,36 +708,36 @@
 subsection \<open>Indexed names \label{sec:indexname}\<close>
 
 text \<open>
-  An \<^emph>\<open>indexed name\<close> (or @{text "indexname"}) is a pair of a basic
+  An \<^emph>\<open>indexed name\<close> (or \<open>indexname\<close>) is a pair of a basic
   name and a natural number.  This representation allows efficient
   renaming by incrementing the second component only.  The canonical
   way to rename two collections of indexnames apart from each other is
-  this: determine the maximum index @{text "maxidx"} of the first
+  this: determine the maximum index \<open>maxidx\<close> of the first
   collection, then increment all indexes of the second collection by
-  @{text "maxidx + 1"}; the maximum index of an empty collection is
-  @{text "-1"}.
+  \<open>maxidx + 1\<close>; the maximum index of an empty collection is
+  \<open>-1\<close>.
 
   Occasionally, basic names are injected into the same pair type of
-  indexed names: then @{text "(x, -1)"} is used to encode the basic
-  name @{text "x"}.
+  indexed names: then \<open>(x, -1)\<close> is used to encode the basic
+  name \<open>x\<close>.
 
   \<^medskip>
   Isabelle syntax observes the following rules for
-  representing an indexname @{text "(x, i)"} as a packed string:
+  representing an indexname \<open>(x, i)\<close> as a packed string:
 
-  \<^item> @{text "?x"} if @{text "x"} does not end with a digit and @{text "i = 0"},
+  \<^item> \<open>?x\<close> if \<open>x\<close> does not end with a digit and \<open>i = 0\<close>,
 
-  \<^item> @{text "?xi"} if @{text "x"} does not end with a digit,
+  \<^item> \<open>?xi\<close> if \<open>x\<close> does not end with a digit,
 
-  \<^item> @{text "?x.i"} otherwise.
+  \<^item> \<open>?x.i\<close> otherwise.
 
 
   Indexnames may acquire large index numbers after several maxidx
   shifts have been applied.  Results are usually normalized towards
-  @{text "0"} at certain checkpoints, notably at the end of a proof.
+  \<open>0\<close> at certain checkpoints, notably at the end of a proof.
   This works by producing variants of the corresponding basic name
-  components.  For example, the collection @{text "?x1, ?x7, ?x42"}
-  becomes @{text "?x, ?xa, ?xb"}.
+  components.  For example, the collection \<open>?x1, ?x7, ?x42\<close>
+  becomes \<open>?x, ?xa, ?xb\<close>.
 \<close>
 
 text %mlref \<open>
@@ -757,8 +747,8 @@
 
   \<^descr> Type @{ML_type indexname} represents indexed names.  This is
   an abbreviation for @{ML_type "string * int"}.  The second component
-  is usually non-negative, except for situations where @{text "(x,
-  -1)"} is used to inject basic names into this type.  Other negative
+  is usually non-negative, except for situations where \<open>(x,
+  -1)\<close> is used to inject basic names into this type.  Other negative
   indexes should not be used.
 \<close>
 
@@ -767,16 +757,16 @@
 
 text \<open>A \<^emph>\<open>long name\<close> consists of a sequence of non-empty name
   components.  The packed representation uses a dot as separator, as
-  in ``@{text "A.b.c"}''.  The last component is called \<^emph>\<open>base
+  in ``\<open>A.b.c\<close>''.  The last component is called \<^emph>\<open>base
   name\<close>, the remaining prefix is called \<^emph>\<open>qualifier\<close> (which may be
   empty).  The qualifier can be understood as the access path to the
   named entity while passing through some nested block-structure,
   although our free-form long names do not really enforce any strict
   discipline.
 
-  For example, an item named ``@{text "A.b.c"}'' may be understood as
-  a local entity @{text "c"}, within a local structure @{text "b"},
-  within a global structure @{text "A"}.  In practice, long names
+  For example, an item named ``\<open>A.b.c\<close>'' may be understood as
+  a local entity \<open>c\<close>, within a local structure \<open>b\<close>,
+  within a global structure \<open>A\<close>.  In practice, long names
   usually represent 1--3 levels of qualification.  User ML code should
   not make any assumptions about the particular structure of long
   names!
@@ -796,42 +786,41 @@
   @{index_ML Long_Name.explode: "string -> string list"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Long_Name.base_name}~@{text "name"} returns the base name
+  \<^descr> @{ML Long_Name.base_name}~\<open>name\<close> returns the base name
   of a long name.
 
-  \<^descr> @{ML Long_Name.qualifier}~@{text "name"} returns the qualifier
+  \<^descr> @{ML Long_Name.qualifier}~\<open>name\<close> returns the qualifier
   of a long name.
 
-  \<^descr> @{ML Long_Name.append}~@{text "name\<^sub>1 name\<^sub>2"} appends two long
+  \<^descr> @{ML Long_Name.append}~\<open>name\<^sub>1 name\<^sub>2\<close> appends two long
   names.
 
-  \<^descr> @{ML Long_Name.implode}~@{text "names"} and @{ML
-  Long_Name.explode}~@{text "name"} convert between the packed string
+  \<^descr> @{ML Long_Name.implode}~\<open>names\<close> and @{ML
+  Long_Name.explode}~\<open>name\<close> convert between the packed string
   representation and the explicit list form of long names.
 \<close>
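+
+text %mlex \<open>A small sketch of the packed representation:\<close>
+
+ML \<open>
+  @{assert} (Long_Name.base_name "A.b.c" = "c");
+  @{assert} (Long_Name.qualifier "A.b.c" = "A.b");
+  @{assert} (Long_Name.append "A.b" "c" = "A.b.c");
+  @{assert} (Long_Name.explode "A.b.c" = ["A", "b", "c"]);
+\<close>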
 
 
 subsection \<open>Name spaces \label{sec:name-space}\<close>
 
-text \<open>A @{text "name space"} manages a collection of long names,
+text \<open>A \<open>name space\<close> manages a collection of long names,
   together with a mapping between partially qualified external names
   and fully qualified internal names (in both directions).  Note that
-  the corresponding @{text "intern"} and @{text "extern"} operations
-  are mostly used for parsing and printing only!  The @{text
-  "declare"} operation augments a name space according to the accesses
+  the corresponding \<open>intern\<close> and \<open>extern\<close> operations
+  are mostly used for parsing and printing only!  The \<open>declare\<close> operation augments a name space according to the accesses
   determined by a given binding, and a naming policy from the context.
 
   \<^medskip>
-  A @{text "binding"} specifies details about the prospective
+  A \<open>binding\<close> specifies details about the prospective
   long name of a newly introduced formal entity.  It consists of a
   base name, prefixes for qualification (separate ones for system
   infrastructure and user-space mechanisms), a slot for the original
   source position, and some additional flags.
 
   \<^medskip>
-  A @{text "naming"} provides some additional details for
+  A \<open>naming\<close> provides some additional details for
   producing a long name from a binding.  Normally, the naming is
-  implicit in the theory or proof context.  The @{text "full"}
+  implicit in the theory or proof context.  The \<open>full\<close>
   operation (and its variants for different context types) produces a
   fully qualified internal name to be entered into a name space.  The
   main equation of this ``chemical reaction'' when binding new
@@ -839,7 +828,7 @@
 
   \<^medskip>
   \begin{tabular}{l}
-  @{text "binding + naming \<longrightarrow> long name + name space accesses"}
+  \<open>binding + naming \<longrightarrow> long name + name space accesses\<close>
   \end{tabular}
 
   \<^bigskip>
@@ -847,13 +836,13 @@
   each kind of formal entity, e.g.\ fact, logical constant, type
   constructor, type class.  It is usually clear from the occurrence in
   concrete syntax (or from the scope) which kind of entity a name
-  refers to.  For example, the very same name @{text "c"} may be used
+  refers to.  For example, the very same name \<open>c\<close> may be used
   uniformly for a constant, type constructor, and type class.
 
   There are common schemes to name derived entities systematically
   according to the name of the main logical entity involved, e.g.\
-  fact @{text "c.intro"} for a canonical introduction rule related to
-  constant @{text "c"}.  This technique of mapping names from one
+  fact \<open>c.intro\<close> for a canonical introduction rule related to
+  constant \<open>c\<close>.  This technique of mapping names from one
   space into another requires some care in order to avoid conflicts.
   In particular, theorem names derived from a type constructor or type
   class should get an additional suffix in addition to the usual
@@ -863,9 +852,9 @@
   \<^medskip>
   \begin{tabular}{ll}
   logical entity & fact name \\\hline
-  constant @{text "c"} & @{text "c.intro"} \\
-  type @{text "c"} & @{text "c_type.intro"} \\
-  class @{text "c"} & @{text "c_class.intro"} \\
+  constant \<open>c\<close> & \<open>c.intro\<close> \\
+  type \<open>c\<close> & \<open>c_type.intro\<close> \\
+  class \<open>c\<close> & \<open>c_class.intro\<close> \\
   \end{tabular}
 \<close>
 
@@ -901,15 +890,14 @@
 
   \<^descr> @{ML Binding.empty} is the empty binding.
 
-  \<^descr> @{ML Binding.name}~@{text "name"} produces a binding with base
-  name @{text "name"}.  Note that this lacks proper source position
+  \<^descr> @{ML Binding.name}~\<open>name\<close> produces a binding with base
+  name \<open>name\<close>.  Note that this lacks proper source position
   information; see also the ML antiquotation @{ML_antiquotation
   binding}.
 
-  \<^descr> @{ML Binding.qualify}~@{text "mandatory name binding"}
-  prefixes qualifier @{text "name"} to @{text "binding"}.  The @{text
-  "mandatory"} flag tells if this name component always needs to be
-  given in name space accesses --- this is mostly @{text "false"} in
+  \<^descr> @{ML Binding.qualify}~\<open>mandatory name binding\<close>
+  prefixes qualifier \<open>name\<close> to \<open>binding\<close>.  The \<open>mandatory\<close> flag tells if this name component always needs to be
+  given in name space accesses --- this is mostly \<open>false\<close> in
   practice.  Note that this part of qualification is typically used in
   derived specification mechanisms.
 
@@ -918,14 +906,14 @@
   typically used in the infrastructure for modular specifications,
   notably ``local theory targets'' (see also \chref{ch:local-theory}).
 
-  \<^descr> @{ML Binding.concealed}~@{text "binding"} indicates that the
+  \<^descr> @{ML Binding.concealed}~\<open>binding\<close> indicates that the
   binding shall refer to an entity that serves foundational purposes
   only.  This flag helps to mark implementation details of
   specification mechanisms etc.  Other tools should not depend on the
   particulars of concealed entities (cf.\ @{ML
   Name_Space.is_concealed}).
 
-  \<^descr> @{ML Binding.print}~@{text "binding"} produces a string
+  \<^descr> @{ML Binding.print}~\<open>binding\<close> produces a string
   representation for human-readable output, together with some formal
   markup that might get used in GUI front-ends, for example.
 
@@ -936,26 +924,25 @@
   global and lacks any path prefix.  In a regular theory context this is
   augmented by a path prefix consisting of the theory name.
 
-  \<^descr> @{ML Name_Space.add_path}~@{text "path naming"} augments the
+  \<^descr> @{ML Name_Space.add_path}~\<open>path naming\<close> augments the
   naming policy by extending its path component.
 
-  \<^descr> @{ML Name_Space.full_name}~@{text "naming binding"} turns a
+  \<^descr> @{ML Name_Space.full_name}~\<open>naming binding\<close> turns a
   name binding (usually a basic name) into the fully qualified
   internal name, according to the given naming policy.
 
   \<^descr> Type @{ML_type Name_Space.T} represents name spaces.
 
-  \<^descr> @{ML Name_Space.empty}~@{text "kind"} and @{ML Name_Space.merge}~@{text
-  "(space\<^sub>1, space\<^sub>2)"} are the canonical operations for
+  \<^descr> @{ML Name_Space.empty}~\<open>kind\<close> and @{ML Name_Space.merge}~\<open>(space\<^sub>1, space\<^sub>2)\<close> are the canonical operations for
   maintaining name spaces according to theory data management
-  (\secref{sec:context-data}); @{text "kind"} is a formal comment
+  (\secref{sec:context-data}); \<open>kind\<close> is a formal comment
   to characterize the purpose of a name space.
 
-  \<^descr> @{ML Name_Space.declare}~@{text "context strict binding
-  space"} enters a name binding as fully qualified internal name into
+  \<^descr> @{ML Name_Space.declare}~\<open>context strict binding
+  space\<close> enters a name binding as fully qualified internal name into
   the name space, using the naming of the context.
 
-  \<^descr> @{ML Name_Space.intern}~@{text "space name"} internalizes a
+  \<^descr> @{ML Name_Space.intern}~\<open>space name\<close> internalizes a
   (partially qualified) external name.
 
   This operation is mostly for parsing!  Note that fully qualified
@@ -964,28 +951,28 @@
   (or their derivatives for @{ML_type theory} and
   @{ML_type Proof.context}).
 
-  \<^descr> @{ML Name_Space.extern}~@{text "ctxt space name"} externalizes a
+  \<^descr> @{ML Name_Space.extern}~\<open>ctxt space name\<close> externalizes a
   (fully qualified) internal name.
 
   This operation is mostly for printing!  User code should not rely on
   the precise result too much.
 
-  \<^descr> @{ML Name_Space.is_concealed}~@{text "space name"} indicates
-  whether @{text "name"} refers to a strictly private entity that
+  \<^descr> @{ML Name_Space.is_concealed}~\<open>space name\<close> indicates
+  whether \<open>name\<close> refers to a strictly private entity that
   other tools are supposed to ignore!
 \<close>
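 
 text %mlex \<open>A minimal sketch of the ``chemical reaction'' above; the
   base name \<open>foo\<close> and qualifier \<open>bar\<close> are hypothetical:\<close>
 
 ML \<open>
   val binding = Binding.qualify false "bar" (Binding.name "foo");
   val naming = Name_Space.naming_of (Context.Theory @{theory});
   val long_name = Name_Space.full_name naming binding;
     (*e.g. "Thy.bar.foo" within a theory called Thy*)
 \<close>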
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "binding"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "binding"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
   @@{ML_antiquotation binding} name
   \<close>}
 
-  \<^descr> @{text "@{binding name}"} produces a binding with base name
-  @{text "name"} and the source position taken from the concrete
+  \<^descr> \<open>@{binding name}\<close> produces a binding with base name
+  \<open>name\<close> and the source position taken from the concrete
   syntax of this antiquotation.  In many situations this is more
   appropriate than the more basic @{ML Binding.name} function.
 \<close>
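 
 text %mlex \<open>For example, with the arbitrary base name \<open>foo\<close>:\<close>
 
 ML \<open>
   val b = @{binding foo};   (*like Binding.name "foo", but with position*)
   val s = Binding.print b;
 \<close>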
--- a/src/Doc/Implementation/Proof.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Proof.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -7,27 +7,24 @@
 section \<open>Variables \label{sec:variables}\<close>
 
 text \<open>
-  Any variable that is not explicitly bound by @{text "\<lambda>"}-abstraction
+  Any variable that is not explicitly bound by \<open>\<lambda>\<close>-abstraction
   is considered as ``free''.  Logically, free variables act like
-  outermost universal quantification at the sequent level: @{text
-  "A\<^sub>1(x), \<dots>, A\<^sub>n(x) \<turnstile> B(x)"} means that the result
-  holds \<^emph>\<open>for all\<close> values of @{text "x"}.  Free variables for
-  terms (not types) can be fully internalized into the logic: @{text
-  "\<turnstile> B(x)"} and @{text "\<turnstile> \<And>x. B(x)"} are interchangeable, provided
-  that @{text "x"} does not occur elsewhere in the context.
-  Inspecting @{text "\<turnstile> \<And>x. B(x)"} more closely, we see that inside the
-  quantifier, @{text "x"} is essentially ``arbitrary, but fixed'',
+  outermost universal quantification at the sequent level: \<open>A\<^sub>1(x), \<dots>, A\<^sub>n(x) \<turnstile> B(x)\<close> means that the result
+  holds \<^emph>\<open>for all\<close> values of \<open>x\<close>.  Free variables for
+  terms (not types) can be fully internalized into the logic: \<open>\<turnstile> B(x)\<close> and \<open>\<turnstile> \<And>x. B(x)\<close> are interchangeable, provided
+  that \<open>x\<close> does not occur elsewhere in the context.
+  Inspecting \<open>\<turnstile> \<And>x. B(x)\<close> more closely, we see that inside the
+  quantifier, \<open>x\<close> is essentially ``arbitrary, but fixed'',
   while from outside it appears as a place-holder for instantiation
-  (thanks to @{text "\<And>"} elimination).
+  (thanks to \<open>\<And>\<close> elimination).
 
   The Pure logic represents the idea of variables being either inside
   or outside the current scope by providing separate syntactic
-  categories for \<^emph>\<open>fixed variables\<close> (e.g.\ @{text "x"}) vs.\
-  \<^emph>\<open>schematic variables\<close> (e.g.\ @{text "?x"}).  Incidently, a
-  universal result @{text "\<turnstile> \<And>x. B(x)"} has the HHF normal form @{text
-  "\<turnstile> B(?x)"}, which represents its generality without requiring an
+  categories for \<^emph>\<open>fixed variables\<close> (e.g.\ \<open>x\<close>) vs.\
+  \<^emph>\<open>schematic variables\<close> (e.g.\ \<open>?x\<close>).  Incidentally, a
+  universal result \<open>\<turnstile> \<And>x. B(x)\<close> has the HHF normal form \<open>\<turnstile> B(?x)\<close>, which represents its generality without requiring an
   explicit quantifier.  The same principle works for type variables:
-  @{text "\<turnstile> B(?\<alpha>)"} represents the idea of ``@{text "\<turnstile> \<forall>\<alpha>. B(\<alpha>)"}''
+  \<open>\<turnstile> B(?\<alpha>)\<close> represents the idea of ``\<open>\<turnstile> \<forall>\<alpha>. B(\<alpha>)\<close>''
   without demanding a truly polymorphic framework.
 
   \<^medskip>
@@ -36,66 +33,64 @@
   depend on type variables, which means that type variables would have
   to be declared first.  For example, a raw type-theoretic framework
   would demand the context to be constructed in stages as follows:
-  @{text "\<Gamma> = \<alpha>: type, x: \<alpha>, a: A(x\<^sub>\<alpha>)"}.
+  \<open>\<Gamma> = \<alpha>: type, x: \<alpha>, a: A(x\<^sub>\<alpha>)\<close>.
 
   We allow a slightly less formalistic mode of operation: term
-  variables @{text "x"} are fixed without specifying a type yet
+  variables \<open>x\<close> are fixed without specifying a type yet
   (essentially \<^emph>\<open>all\<close> potential occurrences of some instance
-  @{text "x\<^sub>\<tau>"} are fixed); the first occurrence of @{text "x"}
+  \<open>x\<^sub>\<tau>\<close> are fixed); the first occurrence of \<open>x\<close>
   within a specific term assigns its most general type, which is then
   maintained consistently in the context.  The above example becomes
-  @{text "\<Gamma> = x: term, \<alpha>: type, A(x\<^sub>\<alpha>)"}, where type @{text
-  "\<alpha>"} is fixed \<^emph>\<open>after\<close> term @{text "x"}, and the constraint
-  @{text "x :: \<alpha>"} is an implicit consequence of the occurrence of
-  @{text "x\<^sub>\<alpha>"} in the subsequent proposition.
+  \<open>\<Gamma> = x: term, \<alpha>: type, A(x\<^sub>\<alpha>)\<close>, where type \<open>\<alpha>\<close> is fixed \<^emph>\<open>after\<close> term \<open>x\<close>, and the constraint
+  \<open>x :: \<alpha>\<close> is an implicit consequence of the occurrence of
+  \<open>x\<^sub>\<alpha>\<close> in the subsequent proposition.
 
   This twist of dependencies is also accommodated by the reverse
   operation of exporting results from a context: a type variable
-  @{text "\<alpha>"} is considered fixed as long as it occurs in some fixed
-  term variable of the context.  For example, exporting @{text "x:
-  term, \<alpha>: type \<turnstile> x\<^sub>\<alpha> \<equiv> x\<^sub>\<alpha>"} produces in the first step @{text "x: term
-  \<turnstile> x\<^sub>\<alpha> \<equiv> x\<^sub>\<alpha>"} for fixed @{text "\<alpha>"}, and only in the second step
-  @{text "\<turnstile> ?x\<^sub>?\<^sub>\<alpha> \<equiv> ?x\<^sub>?\<^sub>\<alpha>"} for schematic @{text "?x"} and @{text "?\<alpha>"}.
+  \<open>\<alpha>\<close> is considered fixed as long as it occurs in some fixed
+  term variable of the context.  For example, exporting \<open>x:
+  term, \<alpha>: type \<turnstile> x\<^sub>\<alpha> \<equiv> x\<^sub>\<alpha>\<close> produces in the first step \<open>x: term
+  \<turnstile> x\<^sub>\<alpha> \<equiv> x\<^sub>\<alpha>\<close> for fixed \<open>\<alpha>\<close>, and only in the second step
+  \<open>\<turnstile> ?x\<^sub>?\<^sub>\<alpha> \<equiv> ?x\<^sub>?\<^sub>\<alpha>\<close> for schematic \<open>?x\<close> and \<open>?\<alpha>\<close>.
   The following Isar source text illustrates this scenario.
 \<close>
 
 notepad
 begin
   {
-    fix x  -- \<open>all potential occurrences of some @{text "x::\<tau>"} are fixed\<close>
+    fix x  -- \<open>all potential occurrences of some \<open>x::\<tau>\<close> are fixed\<close>
     {
       have "x::'a \<equiv> x"  -- \<open>implicit type assignment by concrete occurrence\<close>
         by (rule reflexive)
     }
-    thm this  -- \<open>result still with fixed type @{text "'a"}\<close>
+    thm this  -- \<open>result still with fixed type \<open>'a\<close>\<close>
   }
-  thm this  -- \<open>fully general result for arbitrary @{text "?x::?'a"}\<close>
+  thm this  -- \<open>fully general result for arbitrary \<open>?x::?'a\<close>\<close>
 end
 
 text \<open>The Isabelle/Isar proof context manages the details of term
   vs.\ type variables, with high-level principles for moving the
   frontier between fixed and schematic variables.
 
-  The @{text "add_fixes"} operation explicitly declares fixed
-  variables; the @{text "declare_term"} operation absorbs a term into
+  The \<open>add_fixes\<close> operation explicitly declares fixed
+  variables; the \<open>declare_term\<close> operation absorbs a term into
   a context by fixing new type variables and adding syntactic
   constraints.
 
-  The @{text "export"} operation is able to perform the main work of
+  The \<open>export\<close> operation is able to perform the main work of
   generalizing term and type variables as sketched above, assuming
   that fixing variables and terms have been declared properly.
 
-  There @{text "import"} operation makes a generalized fact a genuine
+  The \<open>import\<close> operation makes a generalized fact a genuine
   part of the context, by inventing fixed variables for the schematic
-  ones.  The effect can be reversed by using @{text "export"} later,
+  ones.  The effect can be reversed by using \<open>export\<close> later,
   potentially with an extended context; the result is equivalent to
   the original modulo renaming of schematic variables.
 
-  The @{text "focus"} operation provides a variant of @{text "import"}
-  for nested propositions (with explicit quantification): @{text
-  "\<And>x\<^sub>1 \<dots> x\<^sub>n. B(x\<^sub>1, \<dots>, x\<^sub>n)"} is
-  decomposed by inventing fixed variables @{text "x\<^sub>1, \<dots>,
-  x\<^sub>n"} for the body.
+  The \<open>focus\<close> operation provides a variant of \<open>import\<close>
+  for nested propositions (with explicit quantification): \<open>\<And>x\<^sub>1 \<dots> x\<^sub>n. B(x\<^sub>1, \<dots>, x\<^sub>n)\<close> is
+  decomposed by inventing fixed variables \<open>x\<^sub>1, \<dots>,
+  x\<^sub>n\<close> for the body.
 \<close>
 
 text %mlref \<open>
@@ -114,8 +109,8 @@
   ((string * (string * typ)) list * term) * Proof.context"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Variable.add_fixes}~@{text "xs ctxt"} fixes term
-  variables @{text "xs"}, returning the resulting internal names.  By
+  \<^descr> @{ML Variable.add_fixes}~\<open>xs ctxt\<close> fixes term
+  variables \<open>xs\<close>, returning the resulting internal names.  By
   default, the internal representation coincides with the external
   one, which also means that the given variables must not be fixed
   already.  There is a different policy within a local proof body: the
@@ -125,35 +120,33 @@
   Variable.add_fixes}, but always produces fresh variants of the given
   names.
 
-  \<^descr> @{ML Variable.declare_term}~@{text "t ctxt"} declares term
-  @{text "t"} to belong to the context.  This automatically fixes new
+  \<^descr> @{ML Variable.declare_term}~\<open>t ctxt\<close> declares term
+  \<open>t\<close> to belong to the context.  This automatically fixes new
   type variables, but not term variables.  Syntactic constraints for
   type and term variables are declared uniformly, though.
 
-  \<^descr> @{ML Variable.declare_constraints}~@{text "t ctxt"} declares
-  syntactic constraints from term @{text "t"}, without making it part
+  \<^descr> @{ML Variable.declare_constraints}~\<open>t ctxt\<close> declares
+  syntactic constraints from term \<open>t\<close>, without making it part
   of the context yet.
 
-  \<^descr> @{ML Variable.export}~@{text "inner outer thms"} generalizes
-  fixed type and term variables in @{text "thms"} according to the
-  difference of the @{text "inner"} and @{text "outer"} context,
+  \<^descr> @{ML Variable.export}~\<open>inner outer thms\<close> generalizes
+  fixed type and term variables in \<open>thms\<close> according to the
+  difference of the \<open>inner\<close> and \<open>outer\<close> context,
   following the principles sketched above.
 
-  \<^descr> @{ML Variable.polymorphic}~@{text "ctxt ts"} generalizes type
-  variables in @{text "ts"} as far as possible, even those occurring
+  \<^descr> @{ML Variable.polymorphic}~\<open>ctxt ts\<close> generalizes type
+  variables in \<open>ts\<close> as far as possible, even those occurring
   in fixed term variables.  The default policy of type-inference is to
   fix newly introduced type variables, which is essentially reversed
   with @{ML Variable.polymorphic}: here the given terms are detached
   from the context as far as possible.
 
-  \<^descr> @{ML Variable.import}~@{text "open thms ctxt"} invents fixed
-  type and term variables for the schematic ones occurring in @{text
-  "thms"}.  The @{text "open"} flag indicates whether the fixed names
+  \<^descr> @{ML Variable.import}~\<open>open thms ctxt\<close> invents fixed
+  type and term variables for the schematic ones occurring in \<open>thms\<close>.  The \<open>open\<close> flag indicates whether the fixed names
   should be accessible to the user, otherwise newly introduced names
   are marked as ``internal'' (\secref{sec:names}).
 
-  \<^descr> @{ML Variable.focus}~@{text "bindings B"} decomposes the outermost @{text
-  "\<And>"} prefix of proposition @{text "B"}, using the given name bindings.
+  \<^descr> @{ML Variable.focus}~\<open>bindings B\<close> decomposes the outermost \<open>\<And>\<close> prefix of proposition \<open>B\<close>, using the given name bindings.
 \<close>
 
 text %mlex \<open>The following example shows how to work with fixed term
@@ -180,7 +173,7 @@
 
 text \<open>In the above example, the starting context is derived from the
   toplevel theory, which means that fixed variables are internalized
-  literally: @{text "x"} is mapped again to @{text "x"}, and
+  literally: \<open>x\<close> is mapped again to \<open>x\<close>, and
   attempting to fix it again in the subsequent context is an error.
   Alternatively, fixed parameters can be renamed explicitly as
   follows:\<close>
@@ -192,7 +185,7 @@
 \<close>
 
 text \<open>The following ML code can now work with the invented names of
-  @{text x1}, @{text x2}, @{text x3}, without depending on
+  \<open>x1\<close>, \<open>x2\<close>, \<open>x3\<close>, without depending on
   the details on the system policy for introducing these variants.
   Recall that within a proof body the system always invents fresh
   ``Skolem constants'', e.g.\ as follows:\<close>
@@ -227,19 +220,18 @@
   Assumptions are restricted to fixed non-schematic statements, i.e.\
   all generality needs to be expressed by explicit quantifiers.
   Nevertheless, the result will be in HHF normal form with outermost
-  quantifiers stripped.  For example, by assuming @{text "\<And>x :: \<alpha>. P
-  x"} we get @{text "\<And>x :: \<alpha>. P x \<turnstile> P ?x"} for schematic @{text "?x"}
-  of fixed type @{text "\<alpha>"}.  Local derivations accumulate more and
-  more explicit references to hypotheses: @{text "A\<^sub>1, \<dots>,
-  A\<^sub>n \<turnstile> B"} where @{text "A\<^sub>1, \<dots>, A\<^sub>n"} needs to
+  quantifiers stripped.  For example, by assuming \<open>\<And>x :: \<alpha>. P
+  x\<close> we get \<open>\<And>x :: \<alpha>. P x \<turnstile> P ?x\<close> for schematic \<open>?x\<close>
+  of fixed type \<open>\<alpha>\<close>.  Local derivations accumulate more and
+  more explicit references to hypotheses: \<open>A\<^sub>1, \<dots>,
+  A\<^sub>n \<turnstile> B\<close> where \<open>A\<^sub>1, \<dots>, A\<^sub>n\<close> needs to
   be covered by the assumptions of the current context.
 
   \<^medskip>
-  The @{text "add_assms"} operation augments the context by
-  local assumptions, which are parameterized by an arbitrary @{text
-  "export"} rule (see below).
+  The \<open>add_assms\<close> operation augments the context by
+  local assumptions, which are parameterized by an arbitrary \<open>export\<close> rule (see below).
 
-  The @{text "export"} operation moves facts from a (larger) inner
+  The \<open>export\<close> operation moves facts from a (larger) inner
   context into a (smaller) outer context, by discharging the
   difference of the assumptions as specified by the associated export
   rules.  Note that the discharged portion is determined by the
@@ -249,30 +241,30 @@
 
   \<^medskip>
   The most basic export rule discharges assumptions directly
-  by means of the @{text "\<Longrightarrow>"} introduction rule:
+  by means of the \<open>\<Longrightarrow>\<close> introduction rule:
   \[
-  \infer[(@{text "\<Longrightarrow>\<hyphen>intro"})]{@{text "\<Gamma> - A \<turnstile> A \<Longrightarrow> B"}}{@{text "\<Gamma> \<turnstile> B"}}
+  \infer[(\<open>\<Longrightarrow>\<hyphen>intro\<close>)]{\<open>\<Gamma> - A \<turnstile> A \<Longrightarrow> B\<close>}{\<open>\<Gamma> \<turnstile> B\<close>}
   \]
 
   The variant for goal refinements marks the newly introduced
   premises, which causes the canonical Isar goal refinement scheme to
   enforce unification with local premises within the goal:
   \[
-  \infer[(@{text "#\<Longrightarrow>\<hyphen>intro"})]{@{text "\<Gamma> - A \<turnstile> #A \<Longrightarrow> B"}}{@{text "\<Gamma> \<turnstile> B"}}
+  \infer[(\<open>#\<Longrightarrow>\<hyphen>intro\<close>)]{\<open>\<Gamma> - A \<turnstile> #A \<Longrightarrow> B\<close>}{\<open>\<Gamma> \<turnstile> B\<close>}
   \]
 
   \<^medskip>
   Alternative versions of assumptions may perform arbitrary
   transformations on export, as long as the corresponding portion of
   hypotheses is removed from the given facts.  For example, a local
-  definition works by fixing @{text "x"} and assuming @{text "x \<equiv> t"},
+  definition works by fixing \<open>x\<close> and assuming \<open>x \<equiv> t\<close>,
   with the following export rule to reverse the effect:
   \[
-  \infer[(@{text "\<equiv>\<hyphen>expand"})]{@{text "\<Gamma> - (x \<equiv> t) \<turnstile> B t"}}{@{text "\<Gamma> \<turnstile> B x"}}
+  \infer[(\<open>\<equiv>\<hyphen>expand\<close>)]{\<open>\<Gamma> - (x \<equiv> t) \<turnstile> B t\<close>}{\<open>\<Gamma> \<turnstile> B x\<close>}
   \]
-  This works, because the assumption @{text "x \<equiv> t"} was introduced in
-  a context with @{text "x"} being fresh, so @{text "x"} does not
-  occur in @{text "\<Gamma>"} here.
+  This works because the assumption \<open>x \<equiv> t\<close> was introduced in
+  a context with \<open>x\<close> being fresh, so \<open>x\<close> does not
+  occur in \<open>\<Gamma>\<close> here.
 \<close>
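 
 text %mlex \<open>A minimal sketch of assuming and exporting; the
   proposition \<open>PROP A\<close> is an arbitrary example:\<close>
 
 ML \<open>
   val ctxt0 = @{context};
   val ([th], ctxt1) =
     Assumption.add_assumes [@{cprop "PROP A"}] ctxt0;
       (*hypothetical theorem PROP A \<turnstile> PROP A*)
   val th' = Assumption.export false ctxt1 ctxt0 th;
       (*discharged via \<Longrightarrow>-intro: \<turnstile> PROP A \<Longrightarrow> PROP A*)
 \<close>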
 
 text %mlref \<open>
@@ -293,23 +285,21 @@
   and the @{ML_type "cterm list"} the collection of assumptions to be
   discharged simultaneously.
 
-  \<^descr> @{ML Assumption.assume}~@{text "ctxt A"} turns proposition @{text
-  "A"} into a primitive assumption @{text "A \<turnstile> A'"}, where the
-  conclusion @{text "A'"} is in HHF normal form.
+  \<^descr> @{ML Assumption.assume}~\<open>ctxt A\<close> turns proposition \<open>A\<close> into a primitive assumption \<open>A \<turnstile> A'\<close>, where the
+  conclusion \<open>A'\<close> is in HHF normal form.
 
-  \<^descr> @{ML Assumption.add_assms}~@{text "r As"} augments the context
-  by assumptions @{text "As"} with export rule @{text "r"}.  The
+  \<^descr> @{ML Assumption.add_assms}~\<open>r As\<close> augments the context
+  by assumptions \<open>As\<close> with export rule \<open>r\<close>.  The
   resulting facts are hypothetical theorems as produced by the raw
   @{ML Assumption.assume}.
 
-  \<^descr> @{ML Assumption.add_assumes}~@{text "As"} is a special case of
-  @{ML Assumption.add_assms} where the export rule performs @{text
-  "\<Longrightarrow>\<hyphen>intro"} or @{text "#\<Longrightarrow>\<hyphen>intro"}, depending on goal
+  \<^descr> @{ML Assumption.add_assumes}~\<open>As\<close> is a special case of
+  @{ML Assumption.add_assms} where the export rule performs \<open>\<Longrightarrow>\<hyphen>intro\<close> or \<open>#\<Longrightarrow>\<hyphen>intro\<close>, depending on goal
   mode.
 
-  \<^descr> @{ML Assumption.export}~@{text "is_goal inner outer thm"}
-  exports result @{text "thm"} from the the @{text "inner"} context
-  back into the @{text "outer"} one; @{text "is_goal = true"} means
+  \<^descr> @{ML Assumption.export}~\<open>is_goal inner outer thm\<close>
+  exports result \<open>thm\<close> from the \<open>inner\<close> context
+  back into the \<open>outer\<close> one; \<open>is_goal = true\<close> means
   this is a goal context.  The result is in HHF normal form.  Note
   that @{ML "Proof_Context.export"} combines @{ML "Variable.export"}
   and @{ML "Assumption.export"} in the canonical way.
@@ -344,25 +334,24 @@
 text \<open>
   Local results are established by monotonic reasoning from facts
   within a context.  This allows common combinations of theorems,
-  e.g.\ via @{text "\<And>/\<Longrightarrow>"} elimination, resolution rules, or equational
+  e.g.\ via \<open>\<And>/\<Longrightarrow>\<close> elimination, resolution rules, or equational
   reasoning, see \secref{sec:thms}.  Unaccounted context manipulations
-  should be avoided, notably raw @{text "\<And>/\<Longrightarrow>"} introduction or ad-hoc
+  should be avoided, notably raw \<open>\<And>/\<Longrightarrow>\<close> introduction or ad-hoc
   references to free variables or assumptions not present in the proof
   context.
 
   \<^medskip>
-  The @{text "SUBPROOF"} combinator allows to structure a
+  The \<open>SUBPROOF\<close> combinator allows structuring a
   tactical proof recursively by decomposing a selected sub-goal:
-  @{text "(\<And>x. A(x) \<Longrightarrow> B(x)) \<Longrightarrow> \<dots>"} is turned into @{text "B(x) \<Longrightarrow> \<dots>"}
-  after fixing @{text "x"} and assuming @{text "A(x)"}.  This means
+  \<open>(\<And>x. A(x) \<Longrightarrow> B(x)) \<Longrightarrow> \<dots>\<close> is turned into \<open>B(x) \<Longrightarrow> \<dots>\<close>
+  after fixing \<open>x\<close> and assuming \<open>A(x)\<close>.  This means
   the tactic needs to solve the conclusion, but may use the premise as
   a local fact, for locally fixed variables.
 
-  The family of @{text "FOCUS"} combinators is similar to @{text
-  "SUBPROOF"}, but allows to retain schematic variables and pending
+  The family of \<open>FOCUS\<close> combinators is similar to \<open>SUBPROOF\<close>, but allows retaining schematic variables and pending
   subgoals in the resulting goal state.
 
-  The @{text "prove"} operation provides an interface for structured
+  The \<open>prove\<close> operation provides an interface for structured
   backwards reasoning under program control, with some explicit sanity
   checks of the result.  The goal context can be augmented by
   additional fixed variables (cf.\ \secref{sec:variables}) and
@@ -371,7 +360,7 @@
   the result.  Type and term variables are generalized as usual,
   according to the context.
 
-  The @{text "obtain"} operation produces results by eliminating
+  The \<open>obtain\<close> operation produces results by eliminating
   existing facts by means of a given tactic.  This acts like a dual
   conclusion: the proof demonstrates that the context may be augmented
   by parameters and assumptions, without affecting any conclusions
@@ -411,7 +400,7 @@
   Proof.context -> ((string * cterm) list * thm list) * Proof.context"} \\
   \end{mldecls}
 
-  \<^descr> @{ML SUBPROOF}~@{text "tac ctxt i"} decomposes the structure
+  \<^descr> @{ML SUBPROOF}~\<open>tac ctxt i\<close> decomposes the structure
   of the specified sub-goal, producing an extended context and a
   reduced goal, which needs to be solved by the given tactic.  All
   schematic parameters of the goal are imported into the context as
@@ -429,13 +418,12 @@
   occasionally useful to experiment without writing actual tactics
   yet.
 
-  \<^descr> @{ML Goal.prove}~@{text "ctxt xs As C tac"} states goal @{text
-  "C"} in the context augmented by fixed variables @{text "xs"} and
-  assumptions @{text "As"}, and applies tactic @{text "tac"} to solve
+  \<^descr> @{ML Goal.prove}~\<open>ctxt xs As C tac\<close> states goal \<open>C\<close> in the context augmented by fixed variables \<open>xs\<close> and
+  assumptions \<open>As\<close>, and applies tactic \<open>tac\<close> to solve
   it.  The latter may depend on the local assumptions being presented
   as facts.  The result is in HHF normal form.
 
-  \<^descr> @{ML Goal.prove_common}~@{text "ctxt fork_pri"} is the common form
+  \<^descr> @{ML Goal.prove_common}~\<open>ctxt fork_pri\<close> is the common form
   to state and prove a simultaneous goal statement, where @{ML Goal.prove}
   is a convenient shorthand that is most frequently used in applications.
 
@@ -452,7 +440,7 @@
   transaction. Thus the system is able to expose error messages ultimately
   to the end-user, even though the subsequent ML code misses them.
 
-  \<^descr> @{ML Obtain.result}~@{text "tac thms ctxt"} eliminates the
+  \<^descr> @{ML Obtain.result}~\<open>tac thms ctxt\<close> eliminates the
   given facts using a tactic, which results in additional fixed
   variables and assumptions in the context.  Final results need to be
   exported explicitly.
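 
 text %mlex \<open>A minimal sketch of @{ML Goal.prove} under program
   control; the statement \<open>x = x\<close> is a trivial example:\<close>
 
 ML \<open>
   val th =
     Goal.prove @{context} ["x"] [] @{prop "x = x"}
       (fn {context = goal_ctxt, ...} =>
         resolve_tac goal_ctxt @{thms refl} 1);
 \<close>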
--- a/src/Doc/Implementation/Syntax.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Syntax.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -6,7 +6,7 @@
 
 chapter \<open>Concrete syntax and type-checking\<close>
 
-text \<open>Pure @{text "\<lambda>"}-calculus as introduced in \chref{ch:logic} is
+text \<open>Pure \<open>\<lambda>\<close>-calculus as introduced in \chref{ch:logic} is
   an adequate foundation for logical languages --- in the tradition of
   \<^emph>\<open>higher-order abstract syntax\<close> --- but end-users require
   additional means for reading and printing of terms and types.  This
@@ -15,12 +15,11 @@
   the theory and proof language @{cite "isabelle-isar-ref"}.
 
   For example, according to @{cite church40} quantifiers are represented as
-  higher-order constants @{text "All :: ('a \<Rightarrow> bool) \<Rightarrow> bool"} such that @{text
-  "All (\<lambda>x::'a. B x)"} faithfully represents the idea that is displayed in
-  Isabelle as @{text "\<forall>x::'a. B x"} via @{keyword "binder"} notation.
+  higher-order constants \<open>All :: ('a \<Rightarrow> bool) \<Rightarrow> bool\<close> such that \<open>All (\<lambda>x::'a. B x)\<close> faithfully represents the idea that is displayed in
+  Isabelle as \<open>\<forall>x::'a. B x\<close> via @{keyword "binder"} notation.
   Moreover, type-inference in the style of Hindley-Milner @{cite hindleymilner}
-  (and extensions) enables users to write @{text "\<forall>x. B x"} concisely, when
-  the type @{text "'a"} is already clear from the
+  (and extensions) enables users to write \<open>\<forall>x. B x\<close> concisely, when
+  the type \<open>'a\<close> is already clear from the
   context.\footnote{Type-inference taken to the extreme can easily confuse
   users. Beginners often stumble over unexpectedly general types inferred by
   the system.}
@@ -36,22 +35,21 @@
   \secref{sec:term-check}, respectively.  This results in the
   following decomposition of the main operations:
 
-  \<^item> @{text "read = parse; check"}
+  \<^item> \<open>read = parse; check\<close>
 
-  \<^item> @{text "pretty = uncheck; unparse"}
+  \<^item> \<open>pretty = uncheck; unparse\<close>
 
 
   For example, some specification package might thus intercept syntax
-  processing at a well-defined stage after @{text "parse"}, to a augment the
-  resulting pre-term before full type-reconstruction is performed by @{text
-  "check"}. Note that the formal status of bound variables, versus free
+  processing at a well-defined stage after \<open>parse\<close>, to augment the
+  resulting pre-term before full type-reconstruction is performed by \<open>check\<close>. Note that the formal status of bound variables, versus free
   variables, versus constants must not be changed between these phases.
 
   \<^medskip>
-  In general, @{text check} and @{text uncheck} operate
+  In general, \<open>check\<close> and \<open>uncheck\<close> operate
   simultaneously on a list of terms. This is particularly important for
   type-checking, to reconstruct types for several terms of the same context
-  and scope. In contrast, @{text parse} and @{text unparse} operate separately
+  and scope. In contrast, \<open>parse\<close> and \<open>unparse\<close> operate separately
   on single terms.
 
   There are analogous operations to read and print types, with the same
@@ -63,11 +61,10 @@
 
 text \<open>
   Read and print operations are roughly dual to each other, such that for the
-  user @{text "s' = pretty (read s)"} looks similar to the original source
-  text @{text "s"}, but the details depend on many side-conditions. There are
+  user \<open>s' = pretty (read s)\<close> looks similar to the original source
+  text \<open>s\<close>, but the details depend on many side-conditions. There are
   also explicit options to control the removal of type information in the
-  output. The default configuration routinely looses information, so @{text
-  "t' = read (pretty t)"} might fail, or produce a differently typed term, or
+  output. The default configuration routinely loses information, so \<open>t' = read (pretty t)\<close> might fail, or produce a differently typed term, or
   a completely different term in the face of syntactic overloading.
 \<close>
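 
 text %mlex \<open>A minimal sketch of the round trip; the exact output
   depends on the context and print options:\<close>
 
 ML \<open>
   val t = Syntax.read_term @{context} "\<lambda>x. x";
   val s = Syntax.string_of_term @{context} t;  (*typically "\<lambda>x. x" again*)
 \<close>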
 
@@ -85,10 +82,10 @@
   @{index_ML Syntax.string_of_term: "Proof.context -> term -> string"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Syntax.read_typs}~@{text "ctxt strs"} parses and checks a
+  \<^descr> @{ML Syntax.read_typs}~\<open>ctxt strs\<close> parses and checks a
   simultaneous list of source strings as types of the logic.
 
-  \<^descr> @{ML Syntax.read_terms}~@{text "ctxt strs"} parses and checks a
+  \<^descr> @{ML Syntax.read_terms}~\<open>ctxt strs\<close> parses and checks a
   simultaneous list of source strings as terms of the logic.
   Type-reconstruction puts all parsed terms into the same scope: types of
   free variables ultimately need to coincide.
@@ -98,7 +95,7 @@
   is possible to use @{ML Type.constraint} on the intermediate pre-terms
   (\secref{sec:term-check}).
 
-  \<^descr> @{ML Syntax.read_props}~@{text "ctxt strs"} parses and checks a
+  \<^descr> @{ML Syntax.read_props}~\<open>ctxt strs\<close> parses and checks a
   simultaneous list of source strings as terms of the logic, with an implicit
   type-constraint for each argument to enforce type @{typ prop}; this also
   affects the inner syntax for parsing. The remaining type-reconstruction
@@ -110,8 +107,8 @@
   its own scope is processed. Do not use @{ML "map o Syntax.read_term"} where
   @{ML Syntax.read_terms} is actually intended!
 
-  \<^descr> @{ML Syntax.pretty_typ}~@{text "ctxt T"} and @{ML
-  Syntax.pretty_term}~@{text "ctxt t"} uncheck and pretty-print the given type
+  \<^descr> @{ML Syntax.pretty_typ}~\<open>ctxt T\<close> and @{ML
+  Syntax.pretty_term}~\<open>ctxt t\<close> uncheck and pretty-print the given type
   or term, respectively. Although the uncheck phase acts on a simultaneous
   list as well, this is rarely used in practice, so only the singleton case is
   provided as combined pretty operation. There is no distinction of term vs.\
@@ -173,22 +170,22 @@
   @{index_ML Syntax.unparse_term: "Proof.context -> term -> Pretty.T"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Syntax.parse_typ}~@{text "ctxt str"} parses a source string as
+  \<^descr> @{ML Syntax.parse_typ}~\<open>ctxt str\<close> parses a source string as
   pre-type that is ready to be used with subsequent check operations.
 
-  \<^descr> @{ML Syntax.parse_term}~@{text "ctxt str"} parses a source string as
+  \<^descr> @{ML Syntax.parse_term}~\<open>ctxt str\<close> parses a source string as
   pre-term that is ready to be used with subsequent check operations.
 
-  \<^descr> @{ML Syntax.parse_prop}~@{text "ctxt str"} parses a source string as
+  \<^descr> @{ML Syntax.parse_prop}~\<open>ctxt str\<close> parses a source string as
   pre-term that is ready to be used with subsequent check operations. The
   inner syntax category is @{typ prop} and a suitable type-constraint is
   included to ensure that this information is observed in subsequent type
   reconstruction.
 
-  \<^descr> @{ML Syntax.unparse_typ}~@{text "ctxt T"} unparses a type after
+  \<^descr> @{ML Syntax.unparse_typ}~\<open>ctxt T\<close> unparses a type after
   uncheck operations, to turn it into a pretty tree.
 
-  \<^descr> @{ML Syntax.unparse_term}~@{text "ctxt T"} unparses a term after
+  \<^descr> @{ML Syntax.unparse_term}~\<open>ctxt t\<close> unparses a term after
   uncheck operations, to turn it into a pretty tree. There is no distinction
   for propositions here.
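 
 text %mlex \<open>A minimal sketch of the two separate stages, together
   roughly equivalent to @{ML Syntax.read_term}:\<close>
 
 ML \<open>
   val pre = Syntax.parse_term @{context} "x = x";  (*pre-term, types unresolved*)
   val t = Syntax.check_term @{context} pre;        (*type-reconstruction applied*)
 \<close>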
 
@@ -212,9 +209,8 @@
 
   A typical add-on for the check/uncheck syntax layer is the @{command
   abbreviation} mechanism @{cite "isabelle-isar-ref"}. Here the user specifies
-  syntactic definitions that are managed by the system as polymorphic @{text
-  "let"} bindings. These are expanded during the @{text "check"} phase, and
-  contracted during the @{text "uncheck"} phase, without affecting the
+  syntactic definitions that are managed by the system as polymorphic \<open>let\<close> bindings. These are expanded during the \<open>check\<close> phase, and
+  contracted during the \<open>uncheck\<close> phase, without affecting the
   type-assignment of the given terms.
 
   \<^medskip>
@@ -222,7 +218,7 @@
   additional check/uncheck modules might be defined in user space.
 
   For example, the @{command class} command defines a context where
-  @{text "check"} treats certain type instances of overloaded
+  \<open>check\<close> treats certain type instances of overloaded
   constants according to the ``dictionary construction'' of its
   logical foundation.  This involves ``type improvement''
   (specialization of slightly too general types) and replacement by
@@ -238,11 +234,11 @@
   @{index_ML Syntax.uncheck_terms: "Proof.context -> term list -> term list"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Syntax.check_typs}~@{text "ctxt Ts"} checks a simultaneous list
+  \<^descr> @{ML Syntax.check_typs}~\<open>ctxt Ts\<close> checks a simultaneous list
   of pre-types as types of the logic.  Typically, this involves normalization
   of type synonyms.
 
-  \<^descr> @{ML Syntax.check_terms}~@{text "ctxt ts"} checks a simultaneous list
+  \<^descr> @{ML Syntax.check_terms}~\<open>ctxt ts\<close> checks a simultaneous list
   of pre-terms as terms of the logic. Typically, this involves type-inference
   and normalization of term abbreviations. The types within the given terms are
   treated in the same way as for @{ML Syntax.check_typs}.
@@ -253,15 +249,15 @@
   is checked; afterwards the type arguments are recovered with @{ML
   Logic.dest_type}.
 
-  \<^descr> @{ML Syntax.check_props}~@{text "ctxt ts"} checks a simultaneous list
+  \<^descr> @{ML Syntax.check_props}~\<open>ctxt ts\<close> checks a simultaneous list
   of pre-terms as terms of the logic, such that all terms are constrained by
   type @{typ prop}. The remaining check operation works as @{ML
   Syntax.check_terms} above.
 
-  \<^descr> @{ML Syntax.uncheck_typs}~@{text "ctxt Ts"} unchecks a simultaneous
+  \<^descr> @{ML Syntax.uncheck_typs}~\<open>ctxt Ts\<close> unchecks a simultaneous
   list of types of the logic, in preparation of pretty printing.
 
-  \<^descr> @{ML Syntax.uncheck_terms}~@{text "ctxt ts"} unchecks a simultaneous
+  \<^descr> @{ML Syntax.uncheck_terms}~\<open>ctxt ts\<close> unchecks a simultaneous
   list of terms of the logic, in preparation of pretty printing. There is no
   distinction for propositions here.
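 
 text %mlex \<open>A minimal sketch of simultaneous checking: both
   occurrences of the free variable \<open>x\<close> receive the same type:\<close>
 
 ML \<open>
   val ts =
     Syntax.check_terms @{context}
       (map (Syntax.parse_term @{context}) ["x = x", "x"]);
 \<close>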
 
--- a/src/Doc/Implementation/Tactic.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Tactic.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -5,40 +5,36 @@
 chapter \<open>Tactical reasoning\<close>
 
 text \<open>Tactical reasoning works by refining an initial claim in a
-  backwards fashion, until a solved form is reached.  A @{text "goal"}
+  backwards fashion, until a solved form is reached.  A \<open>goal\<close>
   consists of several subgoals that need to be solved in order to
   achieve the main statement; zero subgoals means that the proof may
-  be finished.  A @{text "tactic"} is a refinement operation that maps
-  a goal to a lazy sequence of potential successors.  A @{text
-  "tactical"} is a combinator for composing tactics.\<close>
+  be finished.  A \<open>tactic\<close> is a refinement operation that maps
+  a goal to a lazy sequence of potential successors.  A \<open>tactical\<close> is a combinator for composing tactics.\<close>
 
 
 section \<open>Goals \label{sec:tactical-goals}\<close>
 
 text \<open>
   Isabelle/Pure represents a goal as a theorem stating that the
-  subgoals imply the main goal: @{text "A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow>
-  C"}.  The outermost goal structure is that of a Horn Clause: i.e.\
+  subgoals imply the main goal: \<open>A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow>
+  C\<close>.  The outermost goal structure is that of a Horn Clause: i.e.\
   an iterated implication without any quantifiers\footnote{Recall that
-  outermost @{text "\<And>x. \<phi>[x]"} is always represented via schematic
-  variables in the body: @{text "\<phi>[?x]"}.  These variables may get
-  instantiated during the course of reasoning.}.  For @{text "n = 0"}
+  outermost \<open>\<And>x. \<phi>[x]\<close> is always represented via schematic
+  variables in the body: \<open>\<phi>[?x]\<close>.  These variables may get
+  instantiated during the course of reasoning.}.  For \<open>n = 0\<close>
   a goal is called ``solved''.
 
-  The structure of each subgoal @{text "A\<^sub>i"} is that of a
-  general Hereditary Harrop Formula @{text "\<And>x\<^sub>1 \<dots>
-  \<And>x\<^sub>k. H\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> H\<^sub>m \<Longrightarrow> B"}.  Here @{text
-  "x\<^sub>1, \<dots>, x\<^sub>k"} are goal parameters, i.e.\
-  arbitrary-but-fixed entities of certain types, and @{text
-  "H\<^sub>1, \<dots>, H\<^sub>m"} are goal hypotheses, i.e.\ facts that may
+  The structure of each subgoal \<open>A\<^sub>i\<close> is that of a
+  general Hereditary Harrop Formula \<open>\<And>x\<^sub>1 \<dots>
+  \<And>x\<^sub>k. H\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> H\<^sub>m \<Longrightarrow> B\<close>.  Here \<open>x\<^sub>1, \<dots>, x\<^sub>k\<close> are goal parameters, i.e.\
+  arbitrary-but-fixed entities of certain types, and \<open>H\<^sub>1, \<dots>, H\<^sub>m\<close> are goal hypotheses, i.e.\ facts that may
   be assumed locally.  Together, this forms the goal context of the
-  conclusion @{text B} to be established.  The goal hypotheses may be
+  conclusion \<open>B\<close> to be established.  The goal hypotheses may be
   again arbitrary Hereditary Harrop Formulas, although the level of
   nesting rarely exceeds 1--2 in practice.
 
-  The main conclusion @{text C} is internally marked as a protected
-  proposition, which is represented explicitly by the notation @{text
-  "#C"} here.  This ensures that the decomposition into subgoals and
+  The main conclusion \<open>C\<close> is internally marked as a protected
+  proposition, which is represented explicitly by the notation \<open>#C\<close> here.  This ensures that the decomposition into subgoals and
   main conclusion is well-defined for arbitrarily structured claims.
 
   \<^medskip>
@@ -46,8 +42,8 @@
   Isabelle/Pure rules:
 
   \[
-  \infer[@{text "(init)"}]{@{text "C \<Longrightarrow> #C"}}{} \qquad
-  \infer[@{text "(finish)"}]{@{text "C"}}{@{text "#C"}}
+  \infer[\<open>(init)\<close>]{\<open>C \<Longrightarrow> #C\<close>}{} \qquad
+  \infer[\<open>(finish)\<close>]{\<open>C\<close>}{\<open>#C\<close>}
   \]
 
   \<^medskip>
@@ -55,10 +51,10 @@
   with protected propositions:
 
   \[
-  \infer[@{text "(protect n)"}]{@{text "A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> #C"}}{@{text "A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> C"}}
+  \infer[\<open>(protect n)\<close>]{\<open>A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> #C\<close>}{\<open>A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> C\<close>}
   \]
   \[
-  \infer[@{text "(conclude)"}]{@{text "A \<Longrightarrow> \<dots> \<Longrightarrow> C"}}{@{text "A \<Longrightarrow> \<dots> \<Longrightarrow> #C"}}
+  \infer[\<open>(conclude)\<close>]{\<open>A \<Longrightarrow> \<dots> \<Longrightarrow> C\<close>}{\<open>A \<Longrightarrow> \<dots> \<Longrightarrow> #C\<close>}
   \]
 \<close>
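 
 text %mlex \<open>A minimal sketch of the \<open>init\<close> rule on a trivial
   statement:\<close>
 
 ML \<open>
   val goal = Goal.init @{cprop "x = x"};  (*x = x \<Longrightarrow> #(x = x)*)
 \<close>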
 
@@ -70,26 +66,26 @@
   @{index_ML Goal.conclude: "thm -> thm"} \\
   \end{mldecls}
 
-  \<^descr> @{ML "Goal.init"}~@{text C} initializes a tactical goal from
-  the well-formed proposition @{text C}.
+  \<^descr> @{ML "Goal.init"}~\<open>C\<close> initializes a tactical goal from
+  the well-formed proposition \<open>C\<close>.
 
-  \<^descr> @{ML "Goal.finish"}~@{text "ctxt thm"} checks whether theorem
-  @{text "thm"} is a solved goal (no subgoals), and concludes the
+  \<^descr> @{ML "Goal.finish"}~\<open>ctxt thm\<close> checks whether theorem
+  \<open>thm\<close> is a solved goal (no subgoals), and concludes the
   result by removing the goal protection.  The context is only
   required for printing error messages.
 
-  \<^descr> @{ML "Goal.protect"}~@{text "n thm"} protects the statement
-  of theorem @{text "thm"}.  The parameter @{text n} indicates the
+  \<^descr> @{ML "Goal.protect"}~\<open>n thm\<close> protects the statement
+  of theorem \<open>thm\<close>.  The parameter \<open>n\<close> indicates the
   number of premises to be retained.
 
-  \<^descr> @{ML "Goal.conclude"}~@{text "thm"} removes the goal
+  \<^descr> @{ML "Goal.conclude"}~\<open>thm\<close> removes the goal
   protection, even if there are pending subgoals.
 \<close>
 
 
 section \<open>Tactics\label{sec:tactics}\<close>
 
-text \<open>A @{text "tactic"} is a function @{text "goal \<rightarrow> goal\<^sup>*\<^sup>*"} that
+text \<open>A \<open>tactic\<close> is a function \<open>goal \<rightarrow> goal\<^sup>*\<^sup>*\<close> that
   maps a given goal state (represented as a theorem, cf.\
   \secref{sec:tactical-goals}) to a lazy sequence of potential
   successor states.  The underlying sequence implementation is lazy
@@ -121,7 +117,7 @@
   schematic goal variables).
 
   Tactics with explicit \<^emph>\<open>subgoal addressing\<close> are of the form
-  @{text "int \<rightarrow> tactic"} and may be applied to a particular subgoal
+  \<open>int \<rightarrow> tactic\<close> and may be applied to a particular subgoal
   (counting from 1).  If the subgoal number is out of range, the
   tactic should fail with an empty result sequence, but must not raise
   an exception!
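 
 text %mlex \<open>A minimal sketch of a tactic with explicit subgoal
   addressing; the name \<open>intro_tac\<close> is hypothetical, and @{ML
   resolve_tac} already fails gracefully on out-of-range subgoals:\<close>
 
 ML \<open>
   fun intro_tac ctxt : int -> tactic =
     fn i => resolve_tac ctxt @{thms impI conjI} i;
 \<close>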
@@ -139,7 +135,7 @@
   very common error when implementing tactics!
 
   Tactics with internal subgoal addressing should expose the subgoal
-  index as @{text "int"} argument in full generality; a hardwired
+  index as \<open>int\<close> argument in full generality; a hardwired
   subgoal 1 is not acceptable.
   
   \<^medskip>
@@ -195,16 +191,16 @@
   \<^descr> @{ML all_tac} is a tactic that always succeeds, returning a
   singleton sequence with unchanged goal state.
 
-  \<^descr> @{ML print_tac}~@{text "ctxt message"} is like @{ML all_tac}, but
+  \<^descr> @{ML print_tac}~\<open>ctxt message\<close> is like @{ML all_tac}, but
   prints a message together with the goal state on the tracing
   channel.
 
-  \<^descr> @{ML PRIMITIVE}~@{text rule} turns a primitive inference rule
+  \<^descr> @{ML PRIMITIVE}~\<open>rule\<close> turns a primitive inference rule
   into a tactic with unique result.  Exception @{ML THM} is considered
   a regular tactic failure and produces an empty result; other
   exceptions are passed through.
 
-  \<^descr> @{ML SUBGOAL}~@{text "(fn (subgoal, i) => tactic)"} is the
+  \<^descr> @{ML SUBGOAL}~\<open>(fn (subgoal, i) => tactic)\<close> is the
   most basic form to produce a tactic with subgoal addressing.  The
   given abstraction over the subgoal term and subgoal number allows one to
   peek at the relevant information of the full goal state.  The
@@ -215,14 +211,14 @@
   avoids expensive re-certification in situations where the subgoal is
   used directly for primitive inferences.
 
-  \<^descr> @{ML SELECT_GOAL}~@{text "tac i"} confines a tactic to the
-  specified subgoal @{text "i"}.  This rearranges subgoals and the
+  \<^descr> @{ML SELECT_GOAL}~\<open>tac i\<close> confines a tactic to the
+  specified subgoal \<open>i\<close>.  This rearranges subgoals and the
   main goal protection (\secref{sec:tactical-goals}), while retaining
   the syntactic context of the overall goal state (concerning
   schematic variables etc.).
 
-  \<^descr> @{ML PREFER_GOAL}~@{text "tac i"} rearranges subgoals to put
-  @{text "i"} in front.  This is similar to @{ML SELECT_GOAL}, but
+  \<^descr> @{ML PREFER_GOAL}~\<open>tac i\<close> rearranges subgoals to put
+  \<open>i\<close> in front.  This is similar to @{ML SELECT_GOAL}, but
   without changing the main goal protection.
 \<close>
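 
 text %mlex \<open>A minimal sketch of @{ML SUBGOAL}: peek at the subgoal
   term before acting on it; the name \<open>trace_assume_tac\<close> is
   hypothetical:\<close>
 
 ML \<open>
   fun trace_assume_tac ctxt =
     SUBGOAL (fn (goal, i) =>
       (tracing (Syntax.string_of_term ctxt goal); assume_tac ctxt i));
 \<close>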
 
@@ -237,7 +233,7 @@
   \<^emph>\<open>Destruct-resolution\<close> is like elim-resolution, but the given
   destruction rules are first turned into canonical elimination
   format.  \<^emph>\<open>Forward-resolution\<close> is like destruct-resolution, but
-  without deleting the selected assumption.  The @{text "r/e/d/f"}
+  without deleting the selected assumption.  The \<open>r/e/d/f\<close>
   naming convention is maintained for several different kinds of
   resolution rules and tactics.
 
@@ -281,20 +277,19 @@
   @{index_ML bimatch_tac: "Proof.context -> (bool * thm) list -> int -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML resolve_tac}~@{text "ctxt thms i"} refines the goal state
+  \<^descr> @{ML resolve_tac}~\<open>ctxt thms i\<close> refines the goal state
   using the given theorems, which should normally be introduction
-  rules.  The tactic resolves a rule's conclusion with subgoal @{text
-  i}, replacing it by the corresponding versions of the rule's
+  rules.  The tactic resolves a rule's conclusion with subgoal \<open>i\<close>, replacing it by the corresponding versions of the rule's
   premises.
 
-  \<^descr> @{ML eresolve_tac}~@{text "ctxt thms i"} performs elim-resolution
+  \<^descr> @{ML eresolve_tac}~\<open>ctxt thms i\<close> performs elim-resolution
   with the given theorems, which should normally be elimination rules.
 
   Note that @{ML_text "eresolve_tac ctxt [asm_rl]"} is equivalent to @{ML_text
   "assume_tac ctxt"}, which facilitates mixing of assumption steps with
   genuine eliminations.
 
-  \<^descr> @{ML dresolve_tac}~@{text "ctxt thms i"} performs
+  \<^descr> @{ML dresolve_tac}~\<open>ctxt thms i\<close> performs
   destruct-resolution with the given theorems, which should normally
   be destruction rules.  This replaces an assumption by the result of
   applying one of the rules.
@@ -303,21 +298,20 @@
   selected assumption is not deleted.  It applies a rule to an
   assumption, adding the result as a new assumption.
 
-  \<^descr> @{ML biresolve_tac}~@{text "ctxt brls i"} refines the proof state
+  \<^descr> @{ML biresolve_tac}~\<open>ctxt brls i\<close> refines the proof state
   by resolution or elim-resolution on each rule, as indicated by its
-  flag.  It affects subgoal @{text "i"} of the proof state.
+  flag.  It affects subgoal \<open>i\<close> of the proof state.
 
-  For each pair @{text "(flag, rule)"}, it applies resolution if the
-  flag is @{text "false"} and elim-resolution if the flag is @{text
-  "true"}.  A single tactic call handles a mixture of introduction and
+  For each pair \<open>(flag, rule)\<close>, it applies resolution if the
+  flag is \<open>false\<close> and elim-resolution if the flag is \<open>true\<close>.  A single tactic call handles a mixture of introduction and
   elimination rules, which is useful to organize the search process
   systematically in proof tools.
 
-  \<^descr> @{ML assume_tac}~@{text "ctxt i"} attempts to solve subgoal @{text i}
+  \<^descr> @{ML assume_tac}~\<open>ctxt i\<close> attempts to solve subgoal \<open>i\<close>
   by assumption (modulo higher-order unification).
 
   \<^descr> @{ML eq_assume_tac} is similar to @{ML assume_tac}, but checks
-  only for immediate @{text "\<alpha>"}-convertibility instead of using
+  only for immediate \<open>\<alpha>\<close>-convertibility instead of using
   unification.  It succeeds (with a unique next state) if one of the
   assumptions is equal to the subgoal's conclusion.  Since it does not
   instantiate variables, it cannot make other subgoals unprovable.
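 
 text %mlex \<open>A minimal sketch of a single search step that mixes
   assumption steps with introduction and elimination rules; the name
   \<open>step_tac\<close> is hypothetical:\<close>
 
 ML \<open>
   fun step_tac ctxt i =
     assume_tac ctxt i
     ORELSE resolve_tac ctxt @{thms conjI impI} i
     ORELSE eresolve_tac ctxt @{thms conjE} i;
 \<close>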
@@ -346,25 +340,22 @@
   higher-order unification is not so useful.  This typically involves
   rules like universal elimination, existential introduction, or
   equational substitution.  Here the unification problem involves
-  fully flexible @{text "?P ?x"} schemes, which are hard to manage
+  fully flexible \<open>?P ?x\<close> schemes, which are hard to manage
   without further hints.
 
-  By providing a (small) rigid term for @{text "?x"} explicitly, the
-  remaining unification problem is to assign a (large) term to @{text
-  "?P"}, according to the shape of the given subgoal.  This is
+  By providing a (small) rigid term for \<open>?x\<close> explicitly, the
+  remaining unification problem is to assign a (large) term to \<open>?P\<close>, according to the shape of the given subgoal.  This is
   sufficiently well-behaved in most practical situations.
 
   \<^medskip>
-  Isabelle provides separate versions of the standard @{text
-  "r/e/d/f"} resolution tactics that allow to provide explicit
+  Isabelle provides separate versions of the standard \<open>r/e/d/f\<close> resolution tactics that allow providing explicit
   instantiations of unknowns of the given rule, wrt.\ terms that refer
   to the implicit context of the selected subgoal.
 
-  An instantiation consists of a list of pairs of the form @{text
-  "(?x, t)"}, where @{text ?x} is a schematic variable occurring in
-  the given rule, and @{text t} is a term from the current proof
+  An instantiation consists of a list of pairs of the form \<open>(?x, t)\<close>, where \<open>?x\<close> is a schematic variable occurring in
+  the given rule, and \<open>t\<close> is a term from the current proof
   context, augmented by the local goal parameters of the selected
-  subgoal; cf.\ the @{text "focus"} operation described in
+  subgoal; cf.\ the \<open>focus\<close> operation described in
   \secref{sec:variables}.
 
   Entering the syntactic context of a subgoal is a brittle operation,
@@ -373,8 +364,7 @@
   global names.  Explicit renaming of subgoal parameters prior to
   explicit instantiation might help to achieve a bit more robustness.
 
-  Type instantiations may be given as well, via pairs like @{text
-  "(?'a, \<tau>)"}.  Type instantiations are distinguished from term
+  Type instantiations may be given as well, via pairs like \<open>(?'a, \<tau>)\<close>.  Type instantiations are distinguished from term
   instantiations by the syntactic form of the schematic variable.
   Types are instantiated before terms are.  Since term instantiation
   already performs simple type-inference, explicit type
@@ -402,9 +392,9 @@
   @{index_ML rename_tac: "string list -> int -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Rule_Insts.res_inst_tac}~@{text "ctxt insts thm i"} instantiates the
-  rule @{text thm} with the instantiations @{text insts}, as described
-  above, and then performs resolution on subgoal @{text i}.
+  \<^descr> @{ML Rule_Insts.res_inst_tac}~\<open>ctxt insts thm i\<close> instantiates the
+  rule \<open>thm\<close> with the instantiations \<open>insts\<close>, as described
+  above, and then performs resolution on subgoal \<open>i\<close>.
   
   \<^descr> @{ML Rule_Insts.eres_inst_tac} is like @{ML Rule_Insts.res_inst_tac},
   but performs elim-resolution.
@@ -415,20 +405,19 @@
   \<^descr> @{ML Rule_Insts.forw_inst_tac} is like @{ML Rule_Insts.dres_inst_tac}
   except that the selected assumption is not deleted.
 
-  \<^descr> @{ML Rule_Insts.subgoal_tac}~@{text "ctxt \<phi> i"} adds the proposition
-  @{text "\<phi>"} as local premise to subgoal @{text "i"}, and poses the
-  same as a new subgoal @{text "i + 1"} (in the original context).
+  \<^descr> @{ML Rule_Insts.subgoal_tac}~\<open>ctxt \<phi> i\<close> adds the proposition
+  \<open>\<phi>\<close> as local premise to subgoal \<open>i\<close>, and poses the
+  same as a new subgoal \<open>i + 1\<close> (in the original context).
 
-  \<^descr> @{ML Rule_Insts.thin_tac}~@{text "ctxt \<phi> i"} deletes the specified
-  premise from subgoal @{text i}.  Note that @{text \<phi>} may contain
+  \<^descr> @{ML Rule_Insts.thin_tac}~\<open>ctxt \<phi> i\<close> deletes the specified
+  premise from subgoal \<open>i\<close>.  Note that \<open>\<phi>\<close> may contain
   schematic variables, to abbreviate the intended proposition; the
   first matching subgoal premise will be deleted.  Removing useless
   premises from a subgoal increases its readability and can make
   search tactics run faster.
 
-  \<^descr> @{ML rename_tac}~@{text "names i"} renames the innermost
-  parameters of subgoal @{text i} according to the provided @{text
-  names} (which need to be distinct identifiers).
+  \<^descr> @{ML rename_tac}~\<open>names i\<close> renames the innermost
+  parameters of subgoal \<open>i\<close> according to the provided \<open>names\<close> (which need to be distinct identifiers).
 
 
   For historical reasons, the above instantiation tactics take
@@ -453,9 +442,9 @@
   @{index_ML flexflex_tac: "Proof.context -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML rotate_tac}~@{text "n i"} rotates the premises of subgoal
-  @{text i} by @{text n} positions: from right to left if @{text n} is
-  positive, and from left to right if @{text n} is negative.
+  \<^descr> @{ML rotate_tac}~\<open>n i\<close> rotates the premises of subgoal
+  \<open>i\<close> by \<open>n\<close> positions: from right to left if \<open>n\<close> is
+  positive, and from left to right if \<open>n\<close> is negative.
 
   \<^descr> @{ML distinct_subgoals_tac} removes duplicate subgoals from a
   proof state.  This is potentially inefficient.
@@ -489,24 +478,21 @@
   @{index_ML_op COMP: "thm * thm -> thm"} \\
   \end{mldecls}
 
-  \<^descr> @{ML compose_tac}~@{text "ctxt (flag, rule, m) i"} refines subgoal
-  @{text "i"} using @{text "rule"}, without lifting.  The @{text
-  "rule"} is taken to have the form @{text "\<psi>\<^sub>1 \<Longrightarrow> \<dots> \<psi>\<^sub>m \<Longrightarrow> \<psi>"}, where
-  @{text "\<psi>"} need not be atomic; thus @{text "m"} determines the
-  number of new subgoals.  If @{text "flag"} is @{text "true"} then it
-  performs elim-resolution --- it solves the first premise of @{text
-  "rule"} by assumption and deletes that assumption.
+  \<^descr> @{ML compose_tac}~\<open>ctxt (flag, rule, m) i\<close> refines subgoal
+  \<open>i\<close> using \<open>rule\<close>, without lifting.  The \<open>rule\<close> is taken to have the form \<open>\<psi>\<^sub>1 \<Longrightarrow> \<dots> \<psi>\<^sub>m \<Longrightarrow> \<psi>\<close>, where
+  \<open>\<psi>\<close> need not be atomic; thus \<open>m\<close> determines the
+  number of new subgoals.  If \<open>flag\<close> is \<open>true\<close> then it
+  performs elim-resolution --- it solves the first premise of \<open>rule\<close> by assumption and deletes that assumption.
 
-  \<^descr> @{ML Drule.compose}~@{text "(thm\<^sub>1, i, thm\<^sub>2)"} uses @{text "thm\<^sub>1"},
-  regarded as an atomic formula, to solve premise @{text "i"} of
-  @{text "thm\<^sub>2"}.  Let @{text "thm\<^sub>1"} and @{text "thm\<^sub>2"} be @{text
-  "\<psi>"} and @{text "\<phi>\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<phi>"}.  The unique @{text "s"} that
-  unifies @{text "\<psi>"} and @{text "\<phi>\<^sub>i"} yields the theorem @{text "(\<phi>\<^sub>1 \<Longrightarrow>
-  \<dots> \<phi>\<^sub>i\<^sub>-\<^sub>1 \<Longrightarrow> \<phi>\<^sub>i\<^sub>+\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<phi>)s"}.  Multiple results are considered as
+  \<^descr> @{ML Drule.compose}~\<open>(thm\<^sub>1, i, thm\<^sub>2)\<close> uses \<open>thm\<^sub>1\<close>,
+  regarded as an atomic formula, to solve premise \<open>i\<close> of
+  \<open>thm\<^sub>2\<close>.  Let \<open>thm\<^sub>1\<close> and \<open>thm\<^sub>2\<close> be \<open>\<psi>\<close> and \<open>\<phi>\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<phi>\<close>.  The unique \<open>s\<close> that
+  unifies \<open>\<psi>\<close> and \<open>\<phi>\<^sub>i\<close> yields the theorem \<open>(\<phi>\<^sub>1 \<Longrightarrow>
+  \<dots> \<phi>\<^sub>i\<^sub>-\<^sub>1 \<Longrightarrow> \<phi>\<^sub>i\<^sub>+\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<phi>)s\<close>.  Multiple results are considered an
   error (exception @{ML THM}).
 
-  \<^descr> @{text "thm\<^sub>1 COMP thm\<^sub>2"} is the same as @{text "Drule.compose
-  (thm\<^sub>1, 1, thm\<^sub>2)"}.
+  \<^descr> \<open>thm\<^sub>1 COMP thm\<^sub>2\<close> is the same as \<open>Drule.compose
+  (thm\<^sub>1, 1, thm\<^sub>2)\<close>.
 
 
   \begin{warn}
@@ -554,42 +540,35 @@
   @{index_ML "FIRST'": "('a -> tactic) list -> 'a -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{text "tac\<^sub>1"}~@{ML_op THEN}~@{text "tac\<^sub>2"} is the sequential
-  composition of @{text "tac\<^sub>1"} and @{text "tac\<^sub>2"}.  Applied to a goal
+  \<^descr> \<open>tac\<^sub>1\<close>~@{ML_op THEN}~\<open>tac\<^sub>2\<close> is the sequential
+  composition of \<open>tac\<^sub>1\<close> and \<open>tac\<^sub>2\<close>.  Applied to a goal
   state, it returns all states reachable in two steps by applying
-  @{text "tac\<^sub>1"} followed by @{text "tac\<^sub>2"}.  First, it applies @{text
-  "tac\<^sub>1"} to the goal state, getting a sequence of possible next
-  states; then, it applies @{text "tac\<^sub>2"} to each of these and
+  \<open>tac\<^sub>1\<close> followed by \<open>tac\<^sub>2\<close>.  First, it applies \<open>tac\<^sub>1\<close> to the goal state, getting a sequence of possible next
+  states; then, it applies \<open>tac\<^sub>2\<close> to each of these and
   concatenates the results to produce again one flat sequence of
   states.
 
-  \<^descr> @{text "tac\<^sub>1"}~@{ML_op ORELSE}~@{text "tac\<^sub>2"} makes a choice
-  between @{text "tac\<^sub>1"} and @{text "tac\<^sub>2"}.  Applied to a state, it
-  tries @{text "tac\<^sub>1"} and returns the result if non-empty; if @{text
-  "tac\<^sub>1"} fails then it uses @{text "tac\<^sub>2"}.  This is a deterministic
-  choice: if @{text "tac\<^sub>1"} succeeds then @{text "tac\<^sub>2"} is excluded
+  \<^descr> \<open>tac\<^sub>1\<close>~@{ML_op ORELSE}~\<open>tac\<^sub>2\<close> makes a choice
+  between \<open>tac\<^sub>1\<close> and \<open>tac\<^sub>2\<close>.  Applied to a state, it
+  tries \<open>tac\<^sub>1\<close> and returns the result if non-empty; if \<open>tac\<^sub>1\<close> fails then it uses \<open>tac\<^sub>2\<close>.  This is a deterministic
+  choice: if \<open>tac\<^sub>1\<close> succeeds then \<open>tac\<^sub>2\<close> is excluded
   from the result.
 
-  \<^descr> @{text "tac\<^sub>1"}~@{ML_op APPEND}~@{text "tac\<^sub>2"} concatenates the
-  possible results of @{text "tac\<^sub>1"} and @{text "tac\<^sub>2"}.  Unlike
+  \<^descr> \<open>tac\<^sub>1\<close>~@{ML_op APPEND}~\<open>tac\<^sub>2\<close> concatenates the
+  possible results of \<open>tac\<^sub>1\<close> and \<open>tac\<^sub>2\<close>.  Unlike
   @{ML_op "ORELSE"} there is \<^emph>\<open>no commitment\<close> to either tactic, so
   @{ML_op "APPEND"} helps to avoid incompleteness during search, at
   the cost of potential inefficiencies.
 
-  \<^descr> @{ML EVERY}~@{text "[tac\<^sub>1, \<dots>, tac\<^sub>n]"} abbreviates @{text
-  "tac\<^sub>1"}~@{ML_op THEN}~@{text "\<dots>"}~@{ML_op THEN}~@{text "tac\<^sub>n"}.
+  \<^descr> @{ML EVERY}~\<open>[tac\<^sub>1, \<dots>, tac\<^sub>n]\<close> abbreviates \<open>tac\<^sub>1\<close>~@{ML_op THEN}~\<open>\<dots>\<close>~@{ML_op THEN}~\<open>tac\<^sub>n\<close>.
   Note that @{ML "EVERY []"} is the same as @{ML all_tac}: it always
   succeeds.
 
-  \<^descr> @{ML FIRST}~@{text "[tac\<^sub>1, \<dots>, tac\<^sub>n]"} abbreviates @{text
-  "tac\<^sub>1"}~@{ML_op ORELSE}~@{text "\<dots>"}~@{ML_op "ORELSE"}~@{text
-  "tac\<^sub>n"}.  Note that @{ML "FIRST []"} is the same as @{ML no_tac}: it
+  \<^descr> @{ML FIRST}~\<open>[tac\<^sub>1, \<dots>, tac\<^sub>n]\<close> abbreviates \<open>tac\<^sub>1\<close>~@{ML_op ORELSE}~\<open>\<dots>\<close>~@{ML_op "ORELSE"}~\<open>tac\<^sub>n\<close>.  Note that @{ML "FIRST []"} is the same as @{ML no_tac}: it
   always fails.
 
   \<^descr> @{ML_op "THEN'"} is the lifted version of @{ML_op "THEN"}, for
-  tactics with explicit subgoal addressing.  So @{text
-  "(tac\<^sub>1"}~@{ML_op THEN'}~@{text "tac\<^sub>2) i"} is the same as @{text
-  "(tac\<^sub>1 i"}~@{ML_op THEN}~@{text "tac\<^sub>2 i)"}.
+  tactics with explicit subgoal addressing.  So \<open>(tac\<^sub>1\<close>~@{ML_op THEN'}~\<open>tac\<^sub>2) i\<close> is the same as \<open>(tac\<^sub>1 i\<close>~@{ML_op THEN}~\<open>tac\<^sub>2 i)\<close>.
 
   The other primed tacticals work analogously.
 \<close>
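+
+text \<open>
+  A small sketch of these combinators in action (the tactic names are
+  hypothetical, assuming the HOL rule \<open>conjI\<close>): a rule application
+  followed by two assumption steps, composed once with @{ML_op THEN} and
+  once in the lifted form with @{ML_op THEN'}, where the subgoal number
+  is supplied only at the end.
+\<close>
+
+ML \<open>
+  (*split a conjunction and close both parts by assumption*)
+  fun conj_by_assumption_tac ctxt =
+    resolve_tac ctxt @{thms conjI} 1
+    THEN assume_tac ctxt 1
+    THEN assume_tac ctxt 1;
+
+  (*the same, with explicit subgoal addressing factored out*)
+  fun conj_by_assumption_tac' ctxt =
+    resolve_tac ctxt @{thms conjI}
+    THEN' assume_tac ctxt
+    THEN' assume_tac ctxt;
+\<close>
+
+lemma "A \<Longrightarrow> B \<Longrightarrow> A \<and> B"
+  by (tactic \<open>conj_by_assumption_tac' @{context} 1\<close>)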
@@ -610,35 +589,33 @@
   @{index_ML "REPEAT_DETERM_N": "int -> tactic -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML TRY}~@{text "tac"} applies @{text "tac"} to the goal
+  \<^descr> @{ML TRY}~\<open>tac\<close> applies \<open>tac\<close> to the goal
   state and returns the resulting sequence, if non-empty; otherwise it
-  returns the original state.  Thus, it applies @{text "tac"} at most
+  returns the original state.  Thus, it applies \<open>tac\<close> at most
   once.
 
   Note that for tactics with subgoal addressing, the combinator can be
-  applied via functional composition: @{ML "TRY"}~@{ML_op o}~@{text
-  "tac"}.  There is no need for @{verbatim TRY'}.
+  applied via functional composition: @{ML "TRY"}~@{ML_op o}~\<open>tac\<close>.  There is no need for @{verbatim TRY'}.
 
-  \<^descr> @{ML REPEAT}~@{text "tac"} applies @{text "tac"} to the goal
+  \<^descr> @{ML REPEAT}~\<open>tac\<close> applies \<open>tac\<close> to the goal
   state and, recursively, to each element of the resulting sequence.
-  The resulting sequence consists of those states that make @{text
-  "tac"} fail.  Thus, it applies @{text "tac"} as many times as
+  The resulting sequence consists of those states that make \<open>tac\<close> fail.  Thus, it applies \<open>tac\<close> as many times as
   possible (including zero times), and allows backtracking over each
-  invocation of @{text "tac"}.  @{ML REPEAT} is more general than @{ML
+  invocation of \<open>tac\<close>.  @{ML REPEAT} is more general than @{ML
   REPEAT_DETERM}, but requires more space.
 
-  \<^descr> @{ML REPEAT1}~@{text "tac"} is like @{ML REPEAT}~@{text "tac"}
-  but it always applies @{text "tac"} at least once, failing if this
+  \<^descr> @{ML REPEAT1}~\<open>tac\<close> is like @{ML REPEAT}~\<open>tac\<close>
+  but it always applies \<open>tac\<close> at least once, failing if this
   is impossible.
 
-  \<^descr> @{ML REPEAT_DETERM}~@{text "tac"} applies @{text "tac"} to the
+  \<^descr> @{ML REPEAT_DETERM}~\<open>tac\<close> applies \<open>tac\<close> to the
   goal state and, recursively, to the head of the resulting sequence.
-  It returns the first state to make @{text "tac"} fail.  It is
+  It returns the first state to make \<open>tac\<close> fail.  It is
   deterministic, discarding alternative outcomes.
 
-  \<^descr> @{ML REPEAT_DETERM_N}~@{text "n tac"} is like @{ML
-  REPEAT_DETERM}~@{text "tac"} but the number of repetitions is bound
-  by @{text "n"} (where @{ML "~1"} means @{text "\<infinity>"}).
+  \<^descr> @{ML REPEAT_DETERM_N}~\<open>n tac\<close> is like @{ML
+  REPEAT_DETERM}~\<open>tac\<close> but the number of repetitions is bound
+  by \<open>n\<close> (where @{ML "~1"} means \<open>\<infinity>\<close>).
 \<close>
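+
+text \<open>
+  For illustration, a hypothetical tactic that splits arbitrarily nested
+  conjunctions in the conclusion and closes the emerging subgoals by
+  assumption; @{ML REPEAT_DETERM} iterates the step as often as possible,
+  discarding alternative outcomes.
+\<close>
+
+ML \<open>
+  fun split_conjs_tac ctxt =
+    REPEAT_DETERM (resolve_tac ctxt @{thms conjI} 1 ORELSE assume_tac ctxt 1);
+\<close>
+
+lemma "A \<Longrightarrow> B \<Longrightarrow> C \<Longrightarrow> A \<and> (B \<and> C)"
+  by (tactic \<open>split_conjs_tac @{context}\<close>)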
 
 text %mlex \<open>The basic tactics and tacticals considered above follow
@@ -649,7 +626,7 @@
 
   \<^item> @{ML no_tac} is the identity element of @{ML_op "ORELSE"} and
   @{ML_op "APPEND"}.  Also, it is a zero element for @{ML_op "THEN"},
-  which means that @{text "tac"}~@{ML_op THEN}~@{ML no_tac} is
+  which means that \<open>tac\<close>~@{ML_op THEN}~@{ML no_tac} is
   equivalent to @{ML no_tac}.
 
   \<^item> @{ML TRY} and @{ML REPEAT} can be expressed as (recursive)
@@ -662,9 +639,9 @@
   fun REPEAT tac st = ((tac THEN REPEAT tac) ORELSE all_tac) st;
 \<close>
 
-text \<open>If @{text "tac"} can return multiple outcomes then so can @{ML
-  REPEAT}~@{text "tac"}.  @{ML REPEAT} uses @{ML_op "ORELSE"} and not
-  @{ML_op "APPEND"}, it applies @{text "tac"} as many times as
+text \<open>If \<open>tac\<close> can return multiple outcomes then so can @{ML
+  REPEAT}~\<open>tac\<close>.  Since @{ML REPEAT} uses @{ML_op "ORELSE"} and not
+  @{ML_op "APPEND"}, it applies \<open>tac\<close> as many times as
   possible in each outcome.
 
   \begin{warn}
@@ -672,7 +649,7 @@
   definition of @{ML REPEAT}.  Recursive tacticals must be coded in
   this awkward fashion to avoid infinite recursion of eager functional
   evaluation in Standard ML.  The following attempt would make @{ML
-  REPEAT}~@{text "tac"} loop:
+  REPEAT}~\<open>tac\<close> loop:
   \end{warn}
 \<close>
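+
+ML \<open>
+  (*BAD -- a sketch of such an attempt: the recursive call is evaluated
+    eagerly as soon as REPEAT is applied to a tactic, before any goal
+    state is supplied, so the invocation never terminates*)
+  fun REPEAT tac = (tac THEN REPEAT tac) ORELSE all_tac;
+\<close>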
 
@@ -690,9 +667,9 @@
   tactic a certain range of subgoals is covered.  Thus the body tactic
   is applied to \<^emph>\<open>all\<close> subgoals, \<^emph>\<open>some\<close> subgoal etc.
 
-  Suppose that the goal state has @{text "n \<ge> 0"} subgoals.  Many of
+  Suppose that the goal state has \<open>n \<ge> 0\<close> subgoals.  Many of
   these tacticals address subgoal ranges counting downwards from
-  @{text "n"} towards @{text "1"}.  This has the fortunate effect that
+  \<open>n\<close> towards \<open>1\<close>.  This has the fortunate effect that
   newly emerging subgoals are concatenated in the result, without
   interfering with each other.  Nonetheless, there might be situations
   where a different order is desired.\<close>
@@ -708,30 +685,30 @@
   @{index_ML RANGE: "(int -> tactic) list -> int -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML ALLGOALS}~@{text "tac"} is equivalent to @{text "tac
-  n"}~@{ML_op THEN}~@{text "\<dots>"}~@{ML_op THEN}~@{text "tac 1"}.  It
-  applies the @{text tac} to all the subgoals, counting downwards.
+  \<^descr> @{ML ALLGOALS}~\<open>tac\<close> is equivalent to \<open>tac
+  n\<close>~@{ML_op THEN}~\<open>\<dots>\<close>~@{ML_op THEN}~\<open>tac 1\<close>.  It
+  applies \<open>tac\<close> to all the subgoals, counting downwards.
 
-  \<^descr> @{ML SOMEGOAL}~@{text "tac"} is equivalent to @{text "tac
-  n"}~@{ML_op ORELSE}~@{text "\<dots>"}~@{ML_op ORELSE}~@{text "tac 1"}.  It
-  applies @{text "tac"} to one subgoal, counting downwards.
+  \<^descr> @{ML SOMEGOAL}~\<open>tac\<close> is equivalent to \<open>tac
+  n\<close>~@{ML_op ORELSE}~\<open>\<dots>\<close>~@{ML_op ORELSE}~\<open>tac 1\<close>.  It
+  applies \<open>tac\<close> to one subgoal, counting downwards.
 
-  \<^descr> @{ML FIRSTGOAL}~@{text "tac"} is equivalent to @{text "tac
-  1"}~@{ML_op ORELSE}~@{text "\<dots>"}~@{ML_op ORELSE}~@{text "tac n"}.  It
-  applies @{text "tac"} to one subgoal, counting upwards.
+  \<^descr> @{ML FIRSTGOAL}~\<open>tac\<close> is equivalent to \<open>tac
+  1\<close>~@{ML_op ORELSE}~\<open>\<dots>\<close>~@{ML_op ORELSE}~\<open>tac n\<close>.  It
+  applies \<open>tac\<close> to one subgoal, counting upwards.
 
-  \<^descr> @{ML HEADGOAL}~@{text "tac"} is equivalent to @{text "tac 1"}.
-  It applies @{text "tac"} unconditionally to the first subgoal.
+  \<^descr> @{ML HEADGOAL}~\<open>tac\<close> is equivalent to \<open>tac 1\<close>.
+  It applies \<open>tac\<close> unconditionally to the first subgoal.
 
-  \<^descr> @{ML REPEAT_SOME}~@{text "tac"} applies @{text "tac"} once or
+  \<^descr> @{ML REPEAT_SOME}~\<open>tac\<close> applies \<open>tac\<close> once or
   more to a subgoal, counting downwards.
 
-  \<^descr> @{ML REPEAT_FIRST}~@{text "tac"} applies @{text "tac"} once or
+  \<^descr> @{ML REPEAT_FIRST}~\<open>tac\<close> applies \<open>tac\<close> once or
   more to a subgoal, counting upwards.
 
-  \<^descr> @{ML RANGE}~@{text "[tac\<^sub>1, \<dots>, tac\<^sub>k] i"} is equivalent to
-  @{text "tac\<^sub>k (i + k - 1)"}~@{ML_op THEN}~@{text "\<dots>"}~@{ML_op
-  THEN}~@{text "tac\<^sub>1 i"}.  It applies the given list of tactics to the
+  \<^descr> @{ML RANGE}~\<open>[tac\<^sub>1, \<dots>, tac\<^sub>k] i\<close> is equivalent to
+  \<open>tac\<^sub>k (i + k - 1)\<close>~@{ML_op THEN}~\<open>\<dots>\<close>~@{ML_op
+  THEN}~\<open>tac\<^sub>1 i\<close>.  It applies the given list of tactics to the
   corresponding range of subgoals, counting downwards.
 \<close>
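+
+text \<open>
+  A small sketch (hypothetical tactic name, assuming the HOL rule
+  \<open>conjI\<close>): split the conclusion via @{ML HEADGOAL}, then close all
+  newly emerging subgoals by assumption in one stroke via @{ML ALLGOALS}.
+\<close>
+
+ML \<open>
+  fun split_and_assume_tac ctxt =
+    HEADGOAL (resolve_tac ctxt @{thms conjI})
+    THEN ALLGOALS (assume_tac ctxt);
+\<close>
+
+lemma "A \<Longrightarrow> B \<Longrightarrow> A \<and> B"
+  by (tactic \<open>split_and_assume_tac @{context}\<close>)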
 
@@ -757,14 +734,14 @@
   @{index_ML CHANGED: "tactic -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML FILTER}~@{text "sat tac"} applies @{text "tac"} to the
+  \<^descr> @{ML FILTER}~\<open>sat tac\<close> applies \<open>tac\<close> to the
   goal state and returns a sequence consisting of those result goal
-  states that are satisfactory in the sense of @{text "sat"}.
+  states that are satisfactory in the sense of \<open>sat\<close>.
 
-  \<^descr> @{ML CHANGED}~@{text "tac"} applies @{text "tac"} to the goal
+  \<^descr> @{ML CHANGED}~\<open>tac\<close> applies \<open>tac\<close> to the goal
   state and returns precisely those states that differ from the
   original state (according to @{ML Thm.eq_thm}).  Thus @{ML
-  CHANGED}~@{text "tac"} always has some effect on the state.
+  CHANGED}~\<open>tac\<close> always has some effect on the state.
 \<close>
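+
+text \<open>
+  For example (a hypothetical guard): @{ML CHANGED} turns the
+  always-succeeding @{ML rotate_tac} into a tactic that fails on subgoals
+  with at most one premise, where rotation cannot have any visible effect.
+\<close>
+
+ML \<open>
+  fun rotate_once_tac i = CHANGED (rotate_tac 1 i);
+\<close>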
 
 
@@ -777,17 +754,17 @@
   @{index_ML DEPTH_SOLVE_1: "tactic -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML DEPTH_FIRST}~@{text "sat tac"} returns the goal state if
-  @{text "sat"} returns true.  Otherwise it applies @{text "tac"},
+  \<^descr> @{ML DEPTH_FIRST}~\<open>sat tac\<close> returns the goal state if
+  \<open>sat\<close> returns true.  Otherwise it applies \<open>tac\<close>,
   then recursively searches from each element of the resulting
   sequence.  The code uses a stack for efficiency, in effect applying
-  @{text "tac"}~@{ML_op THEN}~@{ML DEPTH_FIRST}~@{text "sat tac"} to
+  \<open>tac\<close>~@{ML_op THEN}~@{ML DEPTH_FIRST}~\<open>sat tac\<close> to
   the state.
 
-  \<^descr> @{ML DEPTH_SOLVE}@{text "tac"} uses @{ML DEPTH_FIRST} to
+  \<^descr> @{ML DEPTH_SOLVE}~\<open>tac\<close> uses @{ML DEPTH_FIRST} to
   search for states having no subgoals.
 
-  \<^descr> @{ML DEPTH_SOLVE_1}~@{text "tac"} uses @{ML DEPTH_FIRST} to
+  \<^descr> @{ML DEPTH_SOLVE_1}~\<open>tac\<close> uses @{ML DEPTH_FIRST} to
   search for states having fewer subgoals than the given state.  Thus,
   it insists upon solving at least one subgoal.
 \<close>
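+
+text \<open>
+  A hypothetical search tactic as a sketch: depth-first application of
+  given introduction rules and assumption steps, stopping as soon as no
+  subgoals remain --- essentially a hand-rolled variant of @{ML DEPTH_SOLVE}.
+\<close>
+
+ML \<open>
+  fun intro_search_tac ctxt rules =
+    DEPTH_FIRST (has_fewer_prems 1)
+      (resolve_tac ctxt rules 1 ORELSE assume_tac ctxt 1);
+\<close>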
@@ -804,16 +781,16 @@
 
   These search strategies will find a solution if one exists.
   However, they do not enumerate all solutions; they terminate after
-  the first satisfactory result from @{text "tac"}.
+  the first satisfactory result from \<open>tac\<close>.
 
-  \<^descr> @{ML BREADTH_FIRST}~@{text "sat tac"} uses breadth-first
-  search to find states for which @{text "sat"} is true.  For most
+  \<^descr> @{ML BREADTH_FIRST}~\<open>sat tac\<close> uses breadth-first
+  search to find states for which \<open>sat\<close> is true.  For most
   applications, it is too slow.
 
-  \<^descr> @{ML BEST_FIRST}~@{text "(sat, dist) tac"} does a heuristic
-  search, using @{text "dist"} to estimate the distance from a
-  satisfactory state (in the sense of @{text "sat"}).  It maintains a
-  list of states ordered by distance.  It applies @{text "tac"} to the
+  \<^descr> @{ML BEST_FIRST}~\<open>(sat, dist) tac\<close> does a heuristic
+  search, using \<open>dist\<close> to estimate the distance from a
+  satisfactory state (in the sense of \<open>sat\<close>).  It maintains a
+  list of states ordered by distance.  It applies \<open>tac\<close> to the
   head of this list; if the result contains any satisfactory states,
   then it returns them.  Otherwise, @{ML BEST_FIRST} adds the new
   states to the list, and continues.
@@ -822,9 +799,9 @@
   the size of the state.  The smaller the state, the fewer and simpler
   subgoals it has.
 
-  \<^descr> @{ML THEN_BEST_FIRST}~@{text "tac\<^sub>0 (sat, dist) tac"} is like
+  \<^descr> @{ML THEN_BEST_FIRST}~\<open>tac\<^sub>0 (sat, dist) tac\<close> is like
   @{ML BEST_FIRST}, except that the priority queue initially contains
-  the result of applying @{text "tac\<^sub>0"} to the goal state.  This
+  the result of applying \<open>tac\<^sub>0\<close> to the goal state.  This
   tactical permits separate tactics for starting the search and
   continuing the search.
 \<close>
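+
+text \<open>
+  The same hypothetical search in best-first form: @{ML size_of_thm} (see
+  below) serves as the distance estimate, preferring smaller goal states.
+\<close>
+
+ML \<open>
+  fun best_intro_tac ctxt rules =
+    BEST_FIRST (has_fewer_prems 1, size_of_thm)
+      (resolve_tac ctxt rules 1 ORELSE assume_tac ctxt 1);
+\<close>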
@@ -840,22 +817,22 @@
   @{index_ML DETERM: "tactic -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML COND}~@{text "sat tac\<^sub>1 tac\<^sub>2"} applies @{text "tac\<^sub>1"} to
-  the goal state if it satisfies predicate @{text "sat"}, and applies
-  @{text "tac\<^sub>2"}.  It is a conditional tactical in that only one of
-  @{text "tac\<^sub>1"} and @{text "tac\<^sub>2"} is applied to a goal state.
-  However, both @{text "tac\<^sub>1"} and @{text "tac\<^sub>2"} are evaluated
+  \<^descr> @{ML COND}~\<open>sat tac\<^sub>1 tac\<^sub>2\<close> applies \<open>tac\<^sub>1\<close> to
+  the goal state if it satisfies predicate \<open>sat\<close>, and applies
+  \<open>tac\<^sub>2\<close> otherwise.  It is a conditional tactical in that only one of
+  \<open>tac\<^sub>1\<close> and \<open>tac\<^sub>2\<close> is applied to a goal state.
+  However, both \<open>tac\<^sub>1\<close> and \<open>tac\<^sub>2\<close> are evaluated
   because ML uses eager evaluation.
 
-  \<^descr> @{ML IF_UNSOLVED}~@{text "tac"} applies @{text "tac"} to the
+  \<^descr> @{ML IF_UNSOLVED}~\<open>tac\<close> applies \<open>tac\<close> to the
   goal state if it has any subgoals, and simply returns the goal state
   otherwise.  Many common tactics, such as @{ML resolve_tac}, fail if
   applied to a goal state that has no subgoals.
 
-  \<^descr> @{ML SOLVE}~@{text "tac"} applies @{text "tac"} to the goal
+  \<^descr> @{ML SOLVE}~\<open>tac\<close> applies \<open>tac\<close> to the goal
   state and then fails iff there are subgoals left.
 
-  \<^descr> @{ML DETERM}~@{text "tac"} applies @{text "tac"} to the goal
+  \<^descr> @{ML DETERM}~\<open>tac\<close> applies \<open>tac\<close> to the goal
   state and returns the head of the resulting sequence.  @{ML DETERM}
   limits the search space by making its argument deterministic.
 \<close>
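+
+text \<open>
+  A brief sketch of combining these: commit to the first result of a
+  backtracking tactic, but only if that result closes all subgoals.
+\<close>
+
+ML \<open>
+  fun commit_if_solved tac = DETERM (SOLVE tac);
+\<close>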
@@ -871,20 +848,17 @@
   @{index_ML size_of_thm: "thm -> int"} \\
   \end{mldecls}
 
-  \<^descr> @{ML has_fewer_prems}~@{text "n thm"} reports whether @{text
-  "thm"} has fewer than @{text "n"} premises.
+  \<^descr> @{ML has_fewer_prems}~\<open>n thm\<close> reports whether \<open>thm\<close> has fewer than \<open>n\<close> premises.
 
-  \<^descr> @{ML Thm.eq_thm}~@{text "(thm\<^sub>1, thm\<^sub>2)"} reports whether @{text
-  "thm\<^sub>1"} and @{text "thm\<^sub>2"} are equal.  Both theorems must have the
+  \<^descr> @{ML Thm.eq_thm}~\<open>(thm\<^sub>1, thm\<^sub>2)\<close> reports whether \<open>thm\<^sub>1\<close> and \<open>thm\<^sub>2\<close> are equal.  Both theorems must have the
   same conclusions, the same set of hypotheses, and the same set of sort
   hypotheses.  Names of bound variables are ignored as usual.
 
-  \<^descr> @{ML Thm.eq_thm_prop}~@{text "(thm\<^sub>1, thm\<^sub>2)"} reports whether
-  the propositions of @{text "thm\<^sub>1"} and @{text "thm\<^sub>2"} are equal.
+  \<^descr> @{ML Thm.eq_thm_prop}~\<open>(thm\<^sub>1, thm\<^sub>2)\<close> reports whether
+  the propositions of \<open>thm\<^sub>1\<close> and \<open>thm\<^sub>2\<close> are equal.
   Names of bound variables are ignored.
 
-  \<^descr> @{ML size_of_thm}~@{text "thm"} computes the size of @{text
-  "thm"}, namely the number of variables, constants and abstractions
+  \<^descr> @{ML size_of_thm}~\<open>thm\<close> computes the size of \<open>thm\<close>, namely the number of variables, constants and abstractions
   in its conclusion.  It may serve as a distance function for
   @{ML BEST_FIRST}.
 \<close>
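+
+text \<open>
+  A hypothetical use of these predicates together with @{ML COND}: run the
+  given tactic only while the goal state is still small, and fail otherwise.
+\<close>
+
+ML \<open>
+  fun if_small_tac n tac = COND (has_fewer_prems n) tac no_tac;
+\<close>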
--- a/src/Doc/Isar_Ref/Document_Preparation.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Document_Preparation.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -22,15 +22,15 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "chapter"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "section"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "subsection"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "subsubsection"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "paragraph"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "subparagraph"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "text"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "txt"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "text_raw"} & : & @{text "any \<rightarrow> any"} \\
+    @{command_def "chapter"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "section"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "subsection"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "subsubsection"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "paragraph"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "subparagraph"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "text"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "txt"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "text_raw"} & : & \<open>any \<rightarrow> any\<close> \\
   \end{matharray}
 
   Markup commands provide a structured way to insert text into the
@@ -58,7 +58,7 @@
 
   \<^descr> @{command text} and @{command txt} specify paragraphs of plain text.
   This corresponds to a {\LaTeX} environment @{verbatim
-  \<open>\begin{isamarkuptext}\<close>} @{text "\<dots>"} @{verbatim \<open>\end{isamarkuptext}\<close>}
+  \<open>\begin{isamarkuptext}\<close>} \<open>\<dots>\<close> @{verbatim \<open>\end{isamarkuptext}\<close>}
   etc.
 
   \<^descr> @{command text_raw} is similar to @{command text}, but without
@@ -81,35 +81,35 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{antiquotation_def "theory"} & : & @{text antiquotation} \\
-    @{antiquotation_def "thm"} & : & @{text antiquotation} \\
-    @{antiquotation_def "lemma"} & : & @{text antiquotation} \\
-    @{antiquotation_def "prop"} & : & @{text antiquotation} \\
-    @{antiquotation_def "term"} & : & @{text antiquotation} \\
-    @{antiquotation_def term_type} & : & @{text antiquotation} \\
-    @{antiquotation_def typeof} & : & @{text antiquotation} \\
-    @{antiquotation_def const} & : & @{text antiquotation} \\
-    @{antiquotation_def abbrev} & : & @{text antiquotation} \\
-    @{antiquotation_def typ} & : & @{text antiquotation} \\
-    @{antiquotation_def type} & : & @{text antiquotation} \\
-    @{antiquotation_def class} & : & @{text antiquotation} \\
-    @{antiquotation_def "text"} & : & @{text antiquotation} \\
-    @{antiquotation_def goals} & : & @{text antiquotation} \\
-    @{antiquotation_def subgoals} & : & @{text antiquotation} \\
-    @{antiquotation_def prf} & : & @{text antiquotation} \\
-    @{antiquotation_def full_prf} & : & @{text antiquotation} \\
-    @{antiquotation_def ML} & : & @{text antiquotation} \\
-    @{antiquotation_def ML_op} & : & @{text antiquotation} \\
-    @{antiquotation_def ML_type} & : & @{text antiquotation} \\
-    @{antiquotation_def ML_structure} & : & @{text antiquotation} \\
-    @{antiquotation_def ML_functor} & : & @{text antiquotation} \\
-    @{antiquotation_def emph} & : & @{text antiquotation} \\
-    @{antiquotation_def bold} & : & @{text antiquotation} \\
-    @{antiquotation_def verbatim} & : & @{text antiquotation} \\
-    @{antiquotation_def "file"} & : & @{text antiquotation} \\
-    @{antiquotation_def "url"} & : & @{text antiquotation} \\
-    @{antiquotation_def "cite"} & : & @{text antiquotation} \\
-    @{command_def "print_antiquotations"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow> "} \\
+    @{antiquotation_def "theory"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "thm"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "lemma"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "prop"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "term"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def term_type} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def typeof} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def const} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def abbrev} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def typ} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def type} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def class} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "text"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def goals} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def subgoals} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def prf} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def full_prf} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def ML} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def ML_op} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def ML_type} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def ML_structure} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def ML_functor} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def emph} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def bold} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def verbatim} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "file"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "url"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "cite"} & : & \<open>antiquotation\<close> \\
+    @{command_def "print_antiquotations"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow> \<close> \\
   \end{matharray}
 
   The overall content of an Isabelle/Isar theory may alternate between
@@ -131,17 +131,15 @@
   antiquotations are checked within the current theory or proof
   context.
 
-  \<^medskip> Antiquotations are in general written as @{verbatim "@{"}@{text
-  "name"}~@{verbatim "["}@{text options}@{verbatim "]"}~@{text
-  "arguments"}@{verbatim "}"}. The short form @{verbatim \<open>\\<close>}@{verbatim
-  "<^"}@{text name}@{verbatim ">"}@{text "\<open>argument_content\<close>"} (without
-  surrounding @{verbatim "@{"}@{text "\<dots>"}@{verbatim "}"}) works for a single
+  \<^medskip> Antiquotations are in general written as @{verbatim "@{"}\<open>name\<close>~@{verbatim "["}\<open>options\<close>@{verbatim "]"}~\<open>arguments\<close>@{verbatim "}"}. The short form @{verbatim \<open>\\<close>}@{verbatim
+  "<^"}\<open>name\<close>@{verbatim ">"}\<open>\<open>argument_content\<close>\<close> (without
+  surrounding @{verbatim "@{"}\<open>\<dots>\<close>@{verbatim "}"}) works for a single
   argument that is a cartouche.
 
   Omitting the control symbol is also possible: a cartouche without special
   decoration is equivalent to @{verbatim \<open>\\<close>}@{verbatim
-  "<^cartouche>"}@{text "\<open>argument_content\<close>"}, which is equivalent to
-  @{verbatim "@{cartouche"}~@{text "\<open>argument_content\<close>"}@{verbatim "}"}. The
+  "<^cartouche>"}\<open>\<open>argument_content\<close>\<close>, which is equivalent to
+  @{verbatim "@{cartouche"}~\<open>\<open>argument_content\<close>\<close>@{verbatim "}"}. The
   special name @{antiquotation_def cartouche} is defined in the context:
   Isabelle/Pure introduces that as an alias to @{antiquotation_ref text}
   (see below). Consequently, \<open>\<open>foo_bar + baz \<le> bazar\<close>\<close> prints literal
@@ -163,8 +161,8 @@
   \endgroup
 
   Note that the syntax of antiquotations may \<^emph>\<open>not\<close> include source
-  comments @{verbatim "(*"}~@{text "\<dots>"}~@{verbatim "*)"} nor verbatim
-  text @{verbatim "{*"}~@{text "\<dots>"}~@{verbatim "*}"}.
+  comments @{verbatim "(*"}~\<open>\<dots>\<close>~@{verbatim "*)"} nor verbatim
+  text @{verbatim "{*"}~\<open>\<dots>\<close>~@{verbatim "*}"}.
 
   %% FIXME less monolithic presentation, move to individual sections!?
   @{rail \<open>
@@ -209,8 +207,7 @@
     @@{command print_antiquotations} ('!'?)
   \<close>}
 
-  \<^descr> @{text "@{text s}"} prints uninterpreted source text @{text
-  s}.  This is particularly useful to print portions of text according
+  \<^descr> \<open>@{text s}\<close> prints uninterpreted source text \<open>s\<close>.  This is particularly useful to print portions of text according
   to the Isabelle document style, without demanding well-formedness,
   e.g.\ small pieces of terms that should not be parsed or
   type-checked yet.
@@ -218,45 +215,44 @@
   It is also possible to write this in the short form \<open>\<open>s\<close>\<close> without any
   further decoration.
 
-  \<^descr> @{text "@{theory A}"} prints the name @{text "A"}, which is
+  \<^descr> \<open>@{theory A}\<close> prints the name \<open>A\<close>, which is
   guaranteed to refer to a valid ancestor theory in the current
   context.
 
-  \<^descr> @{text "@{thm a\<^sub>1 \<dots> a\<^sub>n}"} prints theorems @{text "a\<^sub>1 \<dots> a\<^sub>n"}.
+  \<^descr> \<open>@{thm a\<^sub>1 \<dots> a\<^sub>n}\<close> prints theorems \<open>a\<^sub>1 \<dots> a\<^sub>n\<close>.
   Full fact expressions are allowed here, including attributes
   (\secref{sec:syn-att}).
 
-  \<^descr> @{text "@{prop \<phi>}"} prints a well-typed proposition @{text
-  "\<phi>"}.
+  \<^descr> \<open>@{prop \<phi>}\<close> prints a well-typed proposition \<open>\<phi>\<close>.
 
-  \<^descr> @{text "@{lemma \<phi> by m}"} proves a well-typed proposition
-  @{text "\<phi>"} by method @{text m} and prints the original @{text "\<phi>"}.
+  \<^descr> \<open>@{lemma \<phi> by m}\<close> proves a well-typed proposition
+  \<open>\<phi>\<close> by method \<open>m\<close> and prints the original \<open>\<phi>\<close>.
 
-  \<^descr> @{text "@{term t}"} prints a well-typed term @{text "t"}.
+  \<^descr> \<open>@{term t}\<close> prints a well-typed term \<open>t\<close>.
   
-  \<^descr> @{text "@{value t}"} evaluates a term @{text "t"} and prints
+  \<^descr> \<open>@{value t}\<close> evaluates a term \<open>t\<close> and prints
   its result, see also @{command_ref (HOL) value}.
 
-  \<^descr> @{text "@{term_type t}"} prints a well-typed term @{text "t"}
+  \<^descr> \<open>@{term_type t}\<close> prints a well-typed term \<open>t\<close>
   annotated with its type.
 
-  \<^descr> @{text "@{typeof t}"} prints the type of a well-typed term
-  @{text "t"}.
+  \<^descr> \<open>@{typeof t}\<close> prints the type of a well-typed term
+  \<open>t\<close>.
 
-  \<^descr> @{text "@{const c}"} prints a logical or syntactic constant
-  @{text "c"}.
+  \<^descr> \<open>@{const c}\<close> prints a logical or syntactic constant
+  \<open>c\<close>.
   
-  \<^descr> @{text "@{abbrev c x\<^sub>1 \<dots> x\<^sub>n}"} prints a constant abbreviation
-  @{text "c x\<^sub>1 \<dots> x\<^sub>n \<equiv> rhs"} as defined in the current context.
+  \<^descr> \<open>@{abbrev c x\<^sub>1 \<dots> x\<^sub>n}\<close> prints a constant abbreviation
+  \<open>c x\<^sub>1 \<dots> x\<^sub>n \<equiv> rhs\<close> as defined in the current context.
 
-  \<^descr> @{text "@{typ \<tau>}"} prints a well-formed type @{text "\<tau>"}.
+  \<^descr> \<open>@{typ \<tau>}\<close> prints a well-formed type \<open>\<tau>\<close>.
 
-  \<^descr> @{text "@{type \<kappa>}"} prints a (logical or syntactic) type
-    constructor @{text "\<kappa>"}.
+  \<^descr> \<open>@{type \<kappa>}\<close> prints a (logical or syntactic) type
+    constructor \<open>\<kappa>\<close>.
 
-  \<^descr> @{text "@{class c}"} prints a class @{text c}.
+  \<^descr> \<open>@{class c}\<close> prints a class \<open>c\<close>.
 
-  \<^descr> @{text "@{goals}"} prints the current \<^emph>\<open>dynamic\<close> goal
+  \<^descr> \<open>@{goals}\<close> prints the current \<^emph>\<open>dynamic\<close> goal
   state.  This is mainly for support of tactic-emulation scripts
   within Isar.  Presentation of goal states does not conform to the
   idea of human-readable proof documents!
@@ -265,84 +261,83 @@
   the reasoning via proper Isar proof commands, instead of peeking at
   the internal machine configuration.
   
-  \<^descr> @{text "@{subgoals}"} is similar to @{text "@{goals}"}, but
+  \<^descr> \<open>@{subgoals}\<close> is similar to \<open>@{goals}\<close>, but
   does not print the main goal.
   
-  \<^descr> @{text "@{prf a\<^sub>1 \<dots> a\<^sub>n}"} prints the (compact) proof terms
-  corresponding to the theorems @{text "a\<^sub>1 \<dots> a\<^sub>n"}. Note that this
+  \<^descr> \<open>@{prf a\<^sub>1 \<dots> a\<^sub>n}\<close> prints the (compact) proof terms
+  corresponding to the theorems \<open>a\<^sub>1 \<dots> a\<^sub>n\<close>. Note that this
   requires proof terms to be switched on for the current logic
   session.
   
-  \<^descr> @{text "@{full_prf a\<^sub>1 \<dots> a\<^sub>n}"} is like @{text "@{prf a\<^sub>1 \<dots>
-  a\<^sub>n}"}, but prints the full proof terms, i.e.\ also displays
+  \<^descr> \<open>@{full_prf a\<^sub>1 \<dots> a\<^sub>n}\<close> is like \<open>@{prf a\<^sub>1 \<dots>
+  a\<^sub>n}\<close>, but prints the full proof terms, i.e.\ also displays
   information omitted in the compact proof term, which is denoted by
-  ``@{text _}'' placeholders there.
+  ``\<open>_\<close>'' placeholders there.
   
-  \<^descr> @{text "@{ML s}"}, @{text "@{ML_op s}"}, @{text "@{ML_type
-  s}"}, @{text "@{ML_structure s}"}, and @{text "@{ML_functor s}"}
-  check text @{text s} as ML value, infix operator, type, structure,
+  \<^descr> \<open>@{ML s}\<close>, \<open>@{ML_op s}\<close>, \<open>@{ML_type
+  s}\<close>, \<open>@{ML_structure s}\<close>, and \<open>@{ML_functor s}\<close>
+  check text \<open>s\<close> as ML value, infix operator, type, structure,
   and functor respectively.  The source is printed verbatim.
 
-  \<^descr> @{text "@{emph s}"} prints document source recursively, with {\LaTeX}
-  markup @{verbatim \<open>\emph{\<close>}@{text "\<dots>"}@{verbatim \<open>}\<close>}.
+  \<^descr> \<open>@{emph s}\<close> prints document source recursively, with {\LaTeX}
+  markup @{verbatim \<open>\emph{\<close>}\<open>\<dots>\<close>@{verbatim \<open>}\<close>}.
 
-  \<^descr> @{text "@{bold s}"} prints document source recursively, with {\LaTeX}
-  markup @{verbatim \<open>\textbf{\<close>}@{text "\<dots>"}@{verbatim \<open>}\<close>}.
+  \<^descr> \<open>@{bold s}\<close> prints document source recursively, with {\LaTeX}
+  markup @{verbatim \<open>\textbf{\<close>}\<open>\<dots>\<close>@{verbatim \<open>}\<close>}.
 
-  \<^descr> @{text "@{verbatim s}"} prints uninterpreted source text literally
+  \<^descr> \<open>@{verbatim s}\<close> prints uninterpreted source text literally
   as ASCII characters, using some typewriter font style.
 
-  \<^descr> @{text "@{file path}"} checks that @{text "path"} refers to a
+  \<^descr> \<open>@{file path}\<close> checks that \<open>path\<close> refers to a
   file (or directory) and prints it verbatim.
 
-  \<^descr> @{text "@{file_unchecked path}"} is like @{text "@{file
-  path}"}, but does not check the existence of the @{text "path"}
+  \<^descr> \<open>@{file_unchecked path}\<close> is like \<open>@{file
+  path}\<close>, but does not check the existence of the \<open>path\<close>
   within the file-system.
 
-  \<^descr> @{text "@{url name}"} produces markup for the given URL, which
+  \<^descr> \<open>@{url name}\<close> produces markup for the given URL, which
   results in an active hyperlink within the text.
 
-  \<^descr> @{text "@{cite name}"} produces a citation @{verbatim
+  \<^descr> \<open>@{cite name}\<close> produces a citation @{verbatim
   \<open>\cite{name}\<close>} in {\LaTeX}, where the name refers to some Bib{\TeX}
   database entry.
 
-  The variant @{text "@{cite \<open>opt\<close> name}"} produces @{verbatim
+  The variant \<open>@{cite \<open>opt\<close> name}\<close> produces @{verbatim
   \<open>\cite[opt]{name}\<close>} with some free-form optional argument. Multiple names
-  are output with commas, e.g. @{text "@{cite foo \<AND> bar}"} becomes
+  are output with commas, e.g. \<open>@{cite foo \<AND> bar}\<close> becomes
   @{verbatim \<open>\cite{foo,bar}\<close>}.
 
   The {\LaTeX} macro name is determined by the antiquotation option
   @{antiquotation_option_def cite_macro}, or the configuration option
-  @{attribute cite_macro} in the context. For example, @{text "@{cite
-  [cite_macro = nocite] foobar}"} produces @{verbatim \<open>\nocite{foobar}\<close>}.
+  @{attribute cite_macro} in the context. For example, \<open>@{cite
+  [cite_macro = nocite] foobar}\<close> produces @{verbatim \<open>\nocite{foobar}\<close>}.
 
   \<^descr> @{command "print_antiquotations"} prints all document antiquotations
-  that are defined in the current context; the ``@{text "!"}'' option
+  that are defined in the current context; the ``\<open>!\<close>'' option
   indicates extra verbosity.
 \<close>
 
 
 subsection \<open>Styled antiquotations\<close>
 
-text \<open>The antiquotations @{text thm}, @{text prop} and @{text
-  term} admit an extra \<^emph>\<open>style\<close> specification to modify the
+text \<open>The antiquotations \<open>thm\<close>, \<open>prop\<close> and \<open>term\<close> admit an extra \<^emph>\<open>style\<close> specification to modify the
   printed result.  A style is specified by a name with a possibly
   empty list of arguments; multiple styles can be sequenced with
   commas.  The following standard styles are available:
 
-  \<^descr> @{text lhs} extracts the first argument of any application
+  \<^descr> \<open>lhs\<close> extracts the first argument of any application
   form with at least two arguments --- typically meta-level or
   object-level equality, or any other binary relation.
   
-  \<^descr> @{text rhs} is like @{text lhs}, but extracts the second
+  \<^descr> \<open>rhs\<close> is like \<open>lhs\<close>, but extracts the second
   argument.
   
-  \<^descr> @{text "concl"} extracts the conclusion @{text C} from a rule
-  in Horn-clause normal form @{text "A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> C"}.
+  \<^descr> \<open>concl\<close> extracts the conclusion \<open>C\<close> from a rule
+  in Horn-clause normal form \<open>A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> C\<close>.
   
-  \<^descr> @{text "prem"} @{text n} extract premise number
-  @{text "n"} from from a rule in Horn-clause
-  normal form @{text "A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> C"}
+  \<^descr> \<open>prem\<close> \<open>n\<close> extracts premise number
+  \<open>n\<close> from a rule in Horn-clause
+  normal form \<open>A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> C\<close>.
 \<close>
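+
+text \<open>
+  For example, given a hypothetical fact \<open>rev_rev\<close> stating
+  \<open>rev (rev xs) = xs\<close>, the antiquotation \<open>@{thm (lhs) rev_rev}\<close> would
+  print \<open>rev (rev xs)\<close>, \<open>@{thm (rhs) rev_rev}\<close> would print \<open>xs\<close>, and
+  \<open>@{thm (prem 1) spec}\<close> would print the first premise of the rule
+  \<open>spec\<close>.
+\<close>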
 
 
@@ -352,34 +347,34 @@
   of antiquotations.  Note that many of these coincide with system and
   configuration options of the same names.
 
-  \<^descr> @{antiquotation_option_def show_types}~@{text "= bool"} and
-  @{antiquotation_option_def show_sorts}~@{text "= bool"} control
+  \<^descr> @{antiquotation_option_def show_types}~\<open>= bool\<close> and
+  @{antiquotation_option_def show_sorts}~\<open>= bool\<close> control
   printing of explicit type and sort constraints.
 
-  \<^descr> @{antiquotation_option_def show_structs}~@{text "= bool"}
+  \<^descr> @{antiquotation_option_def show_structs}~\<open>= bool\<close>
   controls printing of implicit structures.
 
-  \<^descr> @{antiquotation_option_def show_abbrevs}~@{text "= bool"}
+  \<^descr> @{antiquotation_option_def show_abbrevs}~\<open>= bool\<close>
   controls folding of abbreviations.
 
-  \<^descr> @{antiquotation_option_def names_long}~@{text "= bool"} forces
+  \<^descr> @{antiquotation_option_def names_long}~\<open>= bool\<close> forces
   names of types and constants etc.\ to be printed in their fully
   qualified internal form.
 
-  \<^descr> @{antiquotation_option_def names_short}~@{text "= bool"}
+  \<^descr> @{antiquotation_option_def names_short}~\<open>= bool\<close>
   forces names of types and constants etc.\ to be printed unqualified.
   Note that internalizing the output again in the current context may
   well yield a different result.
 
-  \<^descr> @{antiquotation_option_def names_unique}~@{text "= bool"}
+  \<^descr> @{antiquotation_option_def names_unique}~\<open>= bool\<close>
   determines whether the printed version of qualified names should be
   made sufficiently long to avoid overlap with names declared further
-  back.  Set to @{text false} for more concise output.
+  back.  Set to \<open>false\<close> for more concise output.
 
-  \<^descr> @{antiquotation_option_def eta_contract}~@{text "= bool"}
-  prints terms in @{text \<eta>}-contracted form.
+  \<^descr> @{antiquotation_option_def eta_contract}~\<open>= bool\<close>
+  prints terms in \<open>\<eta>\<close>-contracted form.
 
-  \<^descr> @{antiquotation_option_def display}~@{text "= bool"} indicates
+  \<^descr> @{antiquotation_option_def display}~\<open>= bool\<close> indicates
   if the text is to be output as multi-line ``display material'',
   rather than a small piece of text without line breaks (which is the
   default).
@@ -387,47 +382,45 @@
   In this mode the embedded entities are printed in the same style as
   the main theory text.
 
-  \<^descr> @{antiquotation_option_def break}~@{text "= bool"} controls
+  \<^descr> @{antiquotation_option_def break}~\<open>= bool\<close> controls
   line breaks in non-display material.
 
-  \<^descr> @{antiquotation_option_def quotes}~@{text "= bool"} indicates
+  \<^descr> @{antiquotation_option_def quotes}~\<open>= bool\<close> indicates
   if the output should be enclosed in double quotes.
 
-  \<^descr> @{antiquotation_option_def mode}~@{text "= name"} adds @{text
-  name} to the print mode to be used for presentation.  Note that the
+  \<^descr> @{antiquotation_option_def mode}~\<open>= name\<close> adds \<open>name\<close> to the print mode to be used for presentation.  Note that the
   standard setup for {\LaTeX} output is already present by default,
-  including the modes @{text latex} and @{text xsymbols}.
+  including the modes \<open>latex\<close> and \<open>xsymbols\<close>.
 
-  \<^descr> @{antiquotation_option_def margin}~@{text "= nat"} and
-  @{antiquotation_option_def indent}~@{text "= nat"} change the margin
+  \<^descr> @{antiquotation_option_def margin}~\<open>= nat\<close> and
+  @{antiquotation_option_def indent}~\<open>= nat\<close> change the margin
   or indentation for pretty printing of display material.
 
-  \<^descr> @{antiquotation_option_def goals_limit}~@{text "= nat"}
+  \<^descr> @{antiquotation_option_def goals_limit}~\<open>= nat\<close>
   determines the maximum number of subgoals to be printed (for goal-based
   antiquotation).
 
-  \<^descr> @{antiquotation_option_def source}~@{text "= bool"} prints the
+  \<^descr> @{antiquotation_option_def source}~\<open>= bool\<close> prints the
   original source text of the antiquotation arguments, rather than its
   internal representation.  Note that formal checking of
   @{antiquotation "thm"}, @{antiquotation "term"}, etc. is still
   enabled; use the @{antiquotation "text"} antiquotation for unchecked
   output.
 
-  Regular @{text "term"} and @{text "typ"} antiquotations with @{text
-  "source = false"} involve a full round-trip from the original source
+  Regular \<open>term\<close> and \<open>typ\<close> antiquotations with \<open>source = false\<close> involve a full round-trip from the original source
   to an internalized logical entity back to a source form, according
   to the syntax of the current context.  Thus the printed output is
   not under direct control of the author, it may even fluctuate a bit
   as the underlying theory is changed later on.
 
-  In contrast, @{antiquotation_option source}~@{text "= true"}
+  In contrast, @{antiquotation_option source}~\<open>= true\<close>
   admits direct printing of the given source text, with the desirable
   well-formedness check in the background, but without modification of
   the printed text.
 
 
-  For Boolean flags, ``@{text "name = true"}'' may be abbreviated as
-  ``@{text name}''.  All of the above flags are disabled by default,
+  For Boolean flags, ``\<open>name = true\<close>'' may be abbreviated as
+  ``\<open>name\<close>''.  All of the above flags are disabled by default,
   unless changed specifically for a logic session in the corresponding
   @{verbatim "ROOT"} file.
 \<close>
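+
+text \<open>
+  For example (with a hypothetical fact name \<open>foo\<close>),
+  \<open>@{thm [display, names_short] foo}\<close> prints \<open>foo\<close> as multi-line display
+  material with unqualified names, while \<open>@{term [show_types] t}\<close> prints
+  the term \<open>t\<close> with explicit type constraints.
+\<close>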
@@ -450,9 +443,9 @@
 
   \<^medskip>
   \begin{tabular}{ll}
-    @{text "theory"} & theory begin/end \\
-    @{text "proof"} & all proof commands \\
-    @{text "ML"} & all commands involving ML code \\
+    \<open>theory\<close> & theory begin/end \\
+    \<open>proof\<close> & all proof commands \\
+    \<open>ML\<close> & all commands involving ML code \\
   \end{tabular}
   \<^medskip>
 
@@ -461,17 +454,17 @@
   specifically, e.g.\ to fold proof texts, or drop parts of the text
   completely.
 
-  For example ``@{command "by"}~@{text "%invisible auto"}'' causes
-  that piece of proof to be treated as @{text invisible} instead of
-  @{text "proof"} (the default), which may be shown or hidden
+  For example ``@{command "by"}~\<open>%invisible auto\<close>'' causes
+  that piece of proof to be treated as \<open>invisible\<close> instead of
+  \<open>proof\<close> (the default), which may be shown or hidden
   depending on the document setup.  In contrast, ``@{command
-  "by"}~@{text "%visible auto"}'' forces this text to be shown
+  "by"}~\<open>%visible auto\<close>'' forces this text to be shown
   invariably.
 
   Explicit tag specifications within a proof apply to all subsequent
   commands of the same level of nesting.  For example, ``@{command
-  "proof"}~@{text "%visible \<dots>"}~@{command "qed"}'' forces the whole
-  sub-proof to be typeset as @{text visible} (unless some of its parts
+  "proof"}~\<open>%visible \<dots>\<close>~@{command "qed"}'' forces the whole
+  sub-proof to be typeset as \<open>visible\<close> (unless some of its parts
   are tagged differently).
 
   \<^medskip>
@@ -491,7 +484,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{antiquotation_def "rail"} & : & @{text antiquotation} \\
+    @{antiquotation_def "rail"} & : & \<open>antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -525,12 +518,12 @@
   \<close>}
   \endgroup
 
-  The lexical syntax of @{text "identifier"} coincides with that of
-  @{syntax ident} in regular Isabelle syntax, but @{text string} uses
+  The lexical syntax of \<open>identifier\<close> coincides with that of
+  @{syntax ident} in regular Isabelle syntax, but \<open>string\<close> uses
   single quotes instead of double quotes of the standard @{syntax
   string} category.
 
-  Each @{text rule} defines a formal language (with optional name),
+  Each \<open>rule\<close> defines a formal language (with optional name),
   using a notation that is similar to EBNF or regular expressions with
   recursion.  The meaning and visual appearance of these rail language
   elements are illustrated by the following representative examples.
@@ -600,14 +593,14 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "display_drafts"}@{text "\<^sup>*"} & : & @{text "any \<rightarrow>"} \\
+    @{command_def "display_drafts"}\<open>\<^sup>*\<close> & : & \<open>any \<rightarrow>\<close> \\
   \end{matharray}
 
   @{rail \<open>
     @@{command display_drafts} (@{syntax name} +)
   \<close>}
 
-  \<^descr> @{command "display_drafts"}~@{text paths} performs simple output of a
+  \<^descr> @{command "display_drafts"}~\<open>paths\<close> performs simple output of a
   given list of raw source files. Only those symbols that do not require
   additional {\LaTeX} packages are displayed properly; everything else is left
   verbatim.
--- a/src/Doc/Isar_Ref/First_Order_Logic.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/First_Order_Logic.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -7,8 +7,8 @@
 
 text \<open>
   In order to commence a new object-logic within
-  Isabelle/Pure we introduce abstract syntactic categories @{text "i"}
-  for individuals and @{text "o"} for object-propositions.  The latter
+  Isabelle/Pure we introduce abstract syntactic categories \<open>i\<close>
+  for individuals and \<open>o\<close> for object-propositions.  The latter
   is embedded into the language of Pure propositions by means of a
   separate judgment.
 \<close>
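+
+text \<open>
+  As a sketch, the declarations may be given as follows: two uninterpreted
+  type declarations, and a judgment form that coerces object-propositions
+  into Pure propositions.
+\<close>
+
+typedecl i  -- \<open>individuals\<close>
+typedecl o  -- \<open>object-propositions\<close>
+
+judgment
+  Trueprop :: "o \<Rightarrow> prop"    ("_" 5)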
@@ -126,8 +126,7 @@
 text \<open>
   Reasoning from basic axioms is often tedious.  Our proofs
   work by producing various instances of the given rules (potentially
-  the symmetric form) using the pattern ``@{command have}~@{text
-  eq}~@{command "by"}~@{text "(rule r)"}'' and composing the chain of
+  the symmetric form) using the pattern ``@{command have}~\<open>eq\<close>~@{command "by"}~\<open>(rule r)\<close>'' and composing the chain of
   results via @{command also}/@{command finally}.  These steps may
   involve any of the transitivity rules declared in
   \secref{sec:framework-ex-equal}, namely @{thm trans} in combining
@@ -162,8 +161,7 @@
   realistic object-logic would include proper setup for the Simplifier
   (\secref{sec:simplifier}), the main automated tool for equational
   reasoning in Isabelle.  Then ``@{command unfolding}~@{thm
-  left_inv}~@{command ".."}'' would become ``@{command "by"}~@{text
-  "(simp only: left_inv)"}'' etc.
+  left_inv}~@{command ".."}'' would become ``@{command "by"}~\<open>(simp only: left_inv)\<close>'' etc.
 \<close>
 
 end
@@ -228,7 +226,7 @@
 
 text \<open>
   Note that the analogous elimination rule for disjunction
-  ``@{text "\<ASSUMES> A \<or> B \<OBTAINS> A \<BBAR> B"}'' coincides with
+  ``\<open>\<ASSUMES> A \<or> B \<OBTAINS> A \<BBAR> B\<close>'' coincides with
   the original axiomatization of @{thm disjE}.
 
   \<^medskip>
@@ -320,9 +318,9 @@
   unchanged.  Note that @{prop "(\<not> C \<Longrightarrow> C) \<Longrightarrow> C"} fits again into our
   format for eliminations, despite the additional twist that the
   context refers to the main conclusion.  So we may write @{thm
-  classical} as the Isar statement ``@{text "\<OBTAINS> \<not> thesis"}''.
+  classical} as the Isar statement ``\<open>\<OBTAINS> \<not> thesis\<close>''.
   This also explains nicely how classical reasoning really works:
-  whatever the main @{text thesis} might be, we may always assume its
+  whatever the main \<open>thesis\<close> might be, we may always assume its
   negation!
 \<close>
 
@@ -335,9 +333,9 @@
   Representing quantifiers is easy, thanks to the higher-order nature
   of the underlying framework.  According to the well-known technique
   introduced by Church @{cite "church40"}, quantifiers are operators on
-  predicates, which are syntactically represented as @{text "\<lambda>"}-terms
-  of type @{typ "i \<Rightarrow> o"}.  Binder notation turns @{text "All (\<lambda>x. B
-  x)"} into @{text "\<forall>x. B x"} etc.
+  predicates, which are syntactically represented as \<open>\<lambda>\<close>-terms
+  of type @{typ "i \<Rightarrow> o"}.  Binder notation turns \<open>All (\<lambda>x. B
+  x)\<close> into \<open>\<forall>x. B x\<close> etc.
 \<close>
 
 axiomatization
@@ -351,8 +349,7 @@
   exE [elim]: "(\<exists>x. B x) \<Longrightarrow> (\<And>x. B x \<Longrightarrow> C) \<Longrightarrow> C"
 
 text \<open>
-  The statement of @{thm exE} corresponds to ``@{text
-  "\<ASSUMES> \<exists>x. B x \<OBTAINS> x \<WHERE> B x"}'' in Isar.  In the
+  The statement of @{thm exE} corresponds to ``\<open>\<ASSUMES> \<exists>x. B x \<OBTAINS> x \<WHERE> B x\<close>'' in Isar.  In the
   subsequent example we illustrate quantifier reasoning involving all
   four rules:
 \<close>
@@ -360,10 +357,10 @@
 theorem
   assumes "\<exists>x. \<forall>y. R x y"
   shows "\<forall>y. \<exists>x. R x y"
-proof    -- \<open>@{text "\<forall>"} introduction\<close>
-  obtain x where "\<forall>y. R x y" using \<open>\<exists>x. \<forall>y. R x y\<close> ..    -- \<open>@{text "\<exists>"} elimination\<close>
-  fix y have "R x y" using \<open>\<forall>y. R x y\<close> ..    -- \<open>@{text "\<forall>"} destruction\<close>
-  then show "\<exists>x. R x y" ..    -- \<open>@{text "\<exists>"} introduction\<close>
+proof    -- \<open>\<open>\<forall>\<close> introduction\<close>
+  obtain x where "\<forall>y. R x y" using \<open>\<exists>x. \<forall>y. R x y\<close> ..    -- \<open>\<open>\<exists>\<close> elimination\<close>
+  fix y have "R x y" using \<open>\<forall>y. R x y\<close> ..    -- \<open>\<open>\<forall>\<close> destruction\<close>
+  then show "\<exists>x. R x y" ..    -- \<open>\<open>\<exists>\<close> introduction\<close>
 qed
 
 
@@ -377,27 +374,27 @@
 
   \<^medskip>
   \begin{tabular}{l}
-  @{text "impI: \<ASSUMES> A \<Longrightarrow> B \<SHOWS> A \<longrightarrow> B"} \\
-  @{text "impD: \<ASSUMES> A \<longrightarrow> B \<AND> A \<SHOWS> B"} \\[1ex]
+  \<open>impI: \<ASSUMES> A \<Longrightarrow> B \<SHOWS> A \<longrightarrow> B\<close> \\
+  \<open>impD: \<ASSUMES> A \<longrightarrow> B \<AND> A \<SHOWS> B\<close> \\[1ex]
 
-  @{text "disjI\<^sub>1: \<ASSUMES> A \<SHOWS> A \<or> B"} \\
-  @{text "disjI\<^sub>2: \<ASSUMES> B \<SHOWS> A \<or> B"} \\
-  @{text "disjE: \<ASSUMES> A \<or> B \<OBTAINS> A \<BBAR> B"} \\[1ex]
+  \<open>disjI\<^sub>1: \<ASSUMES> A \<SHOWS> A \<or> B\<close> \\
+  \<open>disjI\<^sub>2: \<ASSUMES> B \<SHOWS> A \<or> B\<close> \\
+  \<open>disjE: \<ASSUMES> A \<or> B \<OBTAINS> A \<BBAR> B\<close> \\[1ex]
 
-  @{text "conjI: \<ASSUMES> A \<AND> B \<SHOWS> A \<and> B"} \\
-  @{text "conjE: \<ASSUMES> A \<and> B \<OBTAINS> A \<AND> B"} \\[1ex]
+  \<open>conjI: \<ASSUMES> A \<AND> B \<SHOWS> A \<and> B\<close> \\
+  \<open>conjE: \<ASSUMES> A \<and> B \<OBTAINS> A \<AND> B\<close> \\[1ex]
 
-  @{text "falseE: \<ASSUMES> \<bottom> \<SHOWS> A"} \\
-  @{text "trueI: \<SHOWS> \<top>"} \\[1ex]
+  \<open>falseE: \<ASSUMES> \<bottom> \<SHOWS> A\<close> \\
+  \<open>trueI: \<SHOWS> \<top>\<close> \\[1ex]
 
-  @{text "notI: \<ASSUMES> A \<Longrightarrow> \<bottom> \<SHOWS> \<not> A"} \\
-  @{text "notE: \<ASSUMES> \<not> A \<AND> A \<SHOWS> B"} \\[1ex]
+  \<open>notI: \<ASSUMES> A \<Longrightarrow> \<bottom> \<SHOWS> \<not> A\<close> \\
+  \<open>notE: \<ASSUMES> \<not> A \<AND> A \<SHOWS> B\<close> \\[1ex]
 
-  @{text "allI: \<ASSUMES> \<And>x. B x \<SHOWS> \<forall>x. B x"} \\
-  @{text "allE: \<ASSUMES> \<forall>x. B x \<SHOWS> B a"} \\[1ex]
+  \<open>allI: \<ASSUMES> \<And>x. B x \<SHOWS> \<forall>x. B x\<close> \\
+  \<open>allE: \<ASSUMES> \<forall>x. B x \<SHOWS> B a\<close> \\[1ex]
 
-  @{text "exI: \<ASSUMES> B a \<SHOWS> \<exists>x. B x"} \\
-  @{text "exE: \<ASSUMES> \<exists>x. B x \<OBTAINS> a \<WHERE> B a"}
+  \<open>exI: \<ASSUMES> B a \<SHOWS> \<exists>x. B x\<close> \\
+  \<open>exE: \<ASSUMES> \<exists>x. B x \<OBTAINS> a \<WHERE> B a\<close>
   \end{tabular}
   \<^medskip>
 
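
  As a small worked example, assuming the \<open>[intro]\<close> and \<open>[elim]\<close>
  declarations that accompany these rules in the present theory:

    theorem "A \<and> B \<longrightarrow> B \<and> A"
    proof    -- \<open>\<open>\<longrightarrow>\<close> introduction\<close>
      assume "A \<and> B"
      then obtain A and B ..    -- \<open>\<open>\<and>\<close> elimination\<close>
      from \<open>B\<close> and \<open>A\<close> show "B \<and> A" ..    -- \<open>\<open>\<and>\<close> introduction\<close>
    qed
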
--- a/src/Doc/Isar_Ref/Framework.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Framework.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -27,7 +27,7 @@
   So Isar challenges the traditional way of recording informal proofs
   in mathematical prose, as well as the common tendency to see fully
   formal proofs directly as objects of some logical calculus (e.g.\
-  @{text "\<lambda>"}-terms in a version of type theory).  In fact, Isar is
+  \<open>\<lambda>\<close>-terms in a version of type theory).  In fact, Isar is
   better understood as an interpreter of a simple block-structured
   language for describing the data flow of local facts and goals,
   interspersed with occasional invocations of proof methods.
@@ -58,11 +58,10 @@
   includes common notions of predicate logic, naive set-theory etc.\
   using fairly standard mathematical notation.  From the perspective
   of generic natural deduction there is nothing special about the
-  logical connectives of HOL (@{text "\<and>"}, @{text "\<or>"}, @{text "\<forall>"},
-  @{text "\<exists>"}, etc.), only the resulting reasoning principles are
+  logical connectives of HOL (\<open>\<and>\<close>, \<open>\<or>\<close>, \<open>\<forall>\<close>,
+  \<open>\<exists>\<close>, etc.), only the resulting reasoning principles are
   relevant to the user.  There are similar rules available for
-  set-theory operators (@{text "\<inter>"}, @{text "\<union>"}, @{text "\<Inter>"}, @{text
-  "\<Union>"}, etc.), or any other theory developed in the library (lattice
+  set-theory operators (\<open>\<inter>\<close>, \<open>\<union>\<close>, \<open>\<Inter>\<close>, \<open>\<Union>\<close>, etc.), or any other theory developed in the library (lattice
   theory, topology etc.).
 
   Subsequently we briefly review fragments of Isar proof texts
@@ -71,7 +70,7 @@
   understanding connectives of predicate logic as something special.
 
   \<^medskip>
-  The following deduction performs @{text "\<inter>"}-introduction,
+  The following deduction performs \<open>\<inter>\<close>-introduction,
   working forwards from assumptions towards the conclusion.  We give
   both the Isar text, and depict the primitive rule involved, as
   determined by unification of the problem against rules that are
@@ -120,7 +119,7 @@
 (*>*)
 
 text \<open>
-  The format of the @{text "\<inter>"}-introduction rule represents
+  The format of the \<open>\<inter>\<close>-introduction rule represents
   the most basic inference, which proceeds from given premises to a
   conclusion, without any nested proof context involved.
 
@@ -149,7 +148,7 @@
 text_raw \<open>\end{minipage}\begin{minipage}{0.4\textwidth}\<close>
 
 text \<open>
-  \infer{@{prop "x \<in> \<Inter>\<A>"}}{\infer*{@{prop "x \<in> A"}}{@{text "[A][A \<in> \<A>]"}}}
+  \infer{@{prop "x \<in> \<Inter>\<A>"}}{\infer*{@{prop "x \<in> A"}}{\<open>[A][A \<in> \<A>]\<close>}}
 \<close>
 
 text_raw \<open>\end{minipage}\<close>
@@ -159,7 +158,7 @@
   This Isar reasoning pattern again refers to the
   primitive rule depicted above.  The system determines it in the
   ``@{command proof}'' step, which could have been spelled out more
-  explicitly as ``@{command proof}~@{text "(rule InterI)"}''.  Note
+  explicitly as ``@{command proof}~\<open>(rule InterI)\<close>''.  Note
   that the rule involves both a local parameter @{term "A"} and an
   assumption @{prop "A \<in> \<A>"} in the nested reasoning.  This kind of
   compound rule typically demands a genuine sub-proof in Isar, working
@@ -174,7 +173,7 @@
   The next example involves @{term "\<Union>\<A>"}, which can be
   characterized as the set of all @{term "x"} such that @{prop "\<exists>A. x
   \<in> A \<and> A \<in> \<A>"}.  The elimination rule for @{prop "x \<in> \<Union>\<A>"} does
-  not mention @{text "\<exists>"} and @{text "\<and>"} at all, but admits to obtain
+  not mention \<open>\<exists>\<close> and \<open>\<and>\<close> at all, but allows us to obtain
   directly a local @{term "A"} such that @{prop "x \<in> A"} and @{prop "A
   \<in> \<A>"} hold.  This corresponds to the following Isar proof and
   inference rule, respectively:
@@ -200,7 +199,7 @@
 text_raw \<open>\end{minipage}\begin{minipage}{0.4\textwidth}\<close>
 
 text \<open>
-  \infer{@{prop "C"}}{@{prop "x \<in> \<Union>\<A>"} & \infer*{@{prop "C"}~}{@{text "[A][x \<in> A, A \<in> \<A>]"}}}
+  \infer{@{prop "C"}}{@{prop "x \<in> \<Union>\<A>"} & \infer*{@{prop "C"}~}{\<open>[A][x \<in> A, A \<in> \<A>]\<close>}}
 \<close>
 
 text_raw \<open>\end{minipage}\<close>
@@ -239,20 +238,19 @@
 text \<open>
   The Pure logic @{cite "paulson-found" and "paulson700"} is an intuitionistic
   fragment of higher-order logic @{cite "church40"}.  In type-theoretic
-  parlance, there are three levels of @{text "\<lambda>"}-calculus with
-  corresponding arrows @{text "\<Rightarrow>"}/@{text "\<And>"}/@{text "\<Longrightarrow>"}:
+  parlance, there are three levels of \<open>\<lambda>\<close>-calculus with
+  corresponding arrows \<open>\<Rightarrow>\<close>/\<open>\<And>\<close>/\<open>\<Longrightarrow>\<close>:
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "\<alpha> \<Rightarrow> \<beta>"} & syntactic function space (terms depending on terms) \\
-  @{text "\<And>x. B(x)"} & universal quantification (proofs depending on terms) \\
-  @{text "A \<Longrightarrow> B"} & implication (proofs depending on proofs) \\
+  \<open>\<alpha> \<Rightarrow> \<beta>\<close> & syntactic function space (terms depending on terms) \\
+  \<open>\<And>x. B(x)\<close> & universal quantification (proofs depending on terms) \\
+  \<open>A \<Longrightarrow> B\<close> & implication (proofs depending on proofs) \\
   \end{tabular}
   \<^medskip>
 
   Here only the types of syntactic terms, and the
-  propositions of proof terms have been shown.  The @{text
-  "\<lambda>"}-structure of proofs can be recorded as an optional feature of
+  propositions of proof terms have been shown.  The \<open>\<lambda>\<close>-structure of proofs can be recorded as an optional feature of
   the Pure inference kernel @{cite "Berghofer-Nipkow:2000:TPHOL"}, but
   the formal system can never depend on them due to \<^emph>\<open>proof
   irrelevance\<close>.
@@ -260,7 +258,7 @@
   On top of this most primitive layer of proofs, Pure implements a
   generic calculus for nested natural deduction rules, similar to
   @{cite "Schroeder-Heister:1984"}.  Here object-logic inferences are
-  internalized as formulae over @{text "\<And>"} and @{text "\<Longrightarrow>"}.
+  internalized as formulae over \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close>.
   Combining such rule statements may involve higher-order unification
   @{cite "paulson-natural"}.
 \<close>
@@ -269,49 +267,49 @@
 subsection \<open>Primitive inferences\<close>
 
 text \<open>
-  Term syntax provides explicit notation for abstraction @{text "\<lambda>x ::
-  \<alpha>. b(x)"} and application @{text "b a"}, while types are usually
-  implicit thanks to type-inference; terms of type @{text "prop"} are
-  called propositions.  Logical statements are composed via @{text "\<And>x
-  :: \<alpha>. B(x)"} and @{text "A \<Longrightarrow> B"}.  Primitive reasoning operates on
-  judgments of the form @{text "\<Gamma> \<turnstile> \<phi>"}, with standard introduction
-  and elimination rules for @{text "\<And>"} and @{text "\<Longrightarrow>"} that refer to
-  fixed parameters @{text "x\<^sub>1, \<dots>, x\<^sub>m"} and hypotheses
-  @{text "A\<^sub>1, \<dots>, A\<^sub>n"} from the context @{text "\<Gamma>"};
+  Term syntax provides explicit notation for abstraction \<open>\<lambda>x ::
+  \<alpha>. b(x)\<close> and application \<open>b a\<close>, while types are usually
+  implicit thanks to type-inference; terms of type \<open>prop\<close> are
+  called propositions.  Logical statements are composed via \<open>\<And>x
+  :: \<alpha>. B(x)\<close> and \<open>A \<Longrightarrow> B\<close>.  Primitive reasoning operates on
+  judgments of the form \<open>\<Gamma> \<turnstile> \<phi>\<close>, with standard introduction
+  and elimination rules for \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> that refer to
+  fixed parameters \<open>x\<^sub>1, \<dots>, x\<^sub>m\<close> and hypotheses
+  \<open>A\<^sub>1, \<dots>, A\<^sub>n\<close> from the context \<open>\<Gamma>\<close>;
   the corresponding proof terms are left implicit.  The subsequent
-  inference rules define @{text "\<Gamma> \<turnstile> \<phi>"} inductively, relative to a
+  inference rules define \<open>\<Gamma> \<turnstile> \<phi>\<close> inductively, relative to a
   collection of axioms:
 
   \[
-  \infer{@{text "\<turnstile> A"}}{(@{text "A"} \text{~axiom})}
+  \infer{\<open>\<turnstile> A\<close>}{(\<open>A\<close> \text{~axiom})}
   \qquad
-  \infer{@{text "A \<turnstile> A"}}{}
+  \infer{\<open>A \<turnstile> A\<close>}{}
   \]
 
   \[
-  \infer{@{text "\<Gamma> \<turnstile> \<And>x. B(x)"}}{@{text "\<Gamma> \<turnstile> B(x)"} & @{text "x \<notin> \<Gamma>"}}
+  \infer{\<open>\<Gamma> \<turnstile> \<And>x. B(x)\<close>}{\<open>\<Gamma> \<turnstile> B(x)\<close> & \<open>x \<notin> \<Gamma>\<close>}
   \qquad
-  \infer{@{text "\<Gamma> \<turnstile> B(a)"}}{@{text "\<Gamma> \<turnstile> \<And>x. B(x)"}}
+  \infer{\<open>\<Gamma> \<turnstile> B(a)\<close>}{\<open>\<Gamma> \<turnstile> \<And>x. B(x)\<close>}
   \]
 
   \[
-  \infer{@{text "\<Gamma> - A \<turnstile> A \<Longrightarrow> B"}}{@{text "\<Gamma> \<turnstile> B"}}
+  \infer{\<open>\<Gamma> - A \<turnstile> A \<Longrightarrow> B\<close>}{\<open>\<Gamma> \<turnstile> B\<close>}
   \qquad
-  \infer{@{text "\<Gamma>\<^sub>1 \<union> \<Gamma>\<^sub>2 \<turnstile> B"}}{@{text "\<Gamma>\<^sub>1 \<turnstile> A \<Longrightarrow> B"} & @{text "\<Gamma>\<^sub>2 \<turnstile> A"}}
+  \infer{\<open>\<Gamma>\<^sub>1 \<union> \<Gamma>\<^sub>2 \<turnstile> B\<close>}{\<open>\<Gamma>\<^sub>1 \<turnstile> A \<Longrightarrow> B\<close> & \<open>\<Gamma>\<^sub>2 \<turnstile> A\<close>}
   \]
 
-  Furthermore, Pure provides a built-in equality @{text "\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow>
-  prop"} with axioms for reflexivity, substitution, extensionality,
-  and @{text "\<alpha>\<beta>\<eta>"}-conversion on @{text "\<lambda>"}-terms.
+  Furthermore, Pure provides a built-in equality \<open>\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow>
+  prop\<close> with axioms for reflexivity, substitution, extensionality,
+  and \<open>\<alpha>\<beta>\<eta>\<close>-conversion on \<open>\<lambda>\<close>-terms.
 
   \<^medskip>
   An object-logic introduces another layer on top of Pure,
-  e.g.\ with types @{text "i"} for individuals and @{text "o"} for
-  propositions, term constants @{text "Trueprop :: o \<Rightarrow> prop"} as
-  (implicit) derivability judgment and connectives like @{text "\<and> :: o
-  \<Rightarrow> o \<Rightarrow> o"} or @{text "\<forall> :: (i \<Rightarrow> o) \<Rightarrow> o"}, and axioms for object-level
-  rules such as @{text "conjI: A \<Longrightarrow> B \<Longrightarrow> A \<and> B"} or @{text "allI: (\<And>x. B
-  x) \<Longrightarrow> \<forall>x. B x"}.  Derived object rules are represented as theorems of
+  e.g.\ with types \<open>i\<close> for individuals and \<open>o\<close> for
+  propositions, term constants \<open>Trueprop :: o \<Rightarrow> prop\<close> as
+  (implicit) derivability judgment and connectives like \<open>\<and> :: o
+  \<Rightarrow> o \<Rightarrow> o\<close> or \<open>\<forall> :: (i \<Rightarrow> o) \<Rightarrow> o\<close>, and axioms for object-level
+  rules such as \<open>conjI: A \<Longrightarrow> B \<Longrightarrow> A \<and> B\<close> or \<open>allI: (\<And>x. B
+  x) \<Longrightarrow> \<forall>x. B x\<close>.  Derived object rules are represented as theorems of
   Pure.  After the initial object-logic setup, further axiomatizations
   are usually avoided; plain definitions and derived principles are
   used exclusively.
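
  A minimal sketch of such an object-logic setup, using the types and
  rule names mentioned in the text (the mixfix annotations are
  assumptions of this sketch):

    typedecl i
    typedecl o

    judgment Trueprop :: "o \<Rightarrow> prop"    ("_" 5)

    axiomatization conj :: "o \<Rightarrow> o \<Rightarrow> o"    (infixr "\<and>" 35)
      where conjI: "A \<Longrightarrow> B \<Longrightarrow> A \<and> B"
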
@@ -323,51 +321,50 @@
 text \<open>
   Primitive inferences mostly serve foundational purposes.  The main
   reasoning mechanisms of Pure operate on nested natural deduction
-  rules expressed as formulae, using @{text "\<And>"} to bind local
-  parameters and @{text "\<Longrightarrow>"} to express entailment.  Multiple
+  rules expressed as formulae, using \<open>\<And>\<close> to bind local
+  parameters and \<open>\<Longrightarrow>\<close> to express entailment.  Multiple
   parameters and premises are represented by repeating these
   connectives in a right-associative manner.
 
-  Since @{text "\<And>"} and @{text "\<Longrightarrow>"} commute thanks to the theorem
+  Since \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> commute thanks to the theorem
   @{prop "(A \<Longrightarrow> (\<And>x. B x)) \<equiv> (\<And>x. A \<Longrightarrow> B x)"}, we may assume w.l.o.g.\
   that rule statements always observe the normal form where
   quantifiers are pulled in front of implications at each level of
   nesting.  This means that any Pure proposition may be presented as a
   \<^emph>\<open>Hereditary Harrop Formula\<close> @{cite "Miller:1991"} which is of the
-  form @{text "\<And>x\<^sub>1 \<dots> x\<^sub>m. H\<^sub>1 \<Longrightarrow> \<dots> H\<^sub>n \<Longrightarrow>
-  A"} for @{text "m, n \<ge> 0"}, and @{text "A"} atomic, and @{text
-  "H\<^sub>1, \<dots>, H\<^sub>n"} being recursively of the same format.
+  form \<open>\<And>x\<^sub>1 \<dots> x\<^sub>m. H\<^sub>1 \<Longrightarrow> \<dots> H\<^sub>n \<Longrightarrow>
+  A\<close> for \<open>m, n \<ge> 0\<close>, with \<open>A\<close> atomic and \<open>H\<^sub>1, \<dots>, H\<^sub>n\<close> recursively of the same format.
   Following the convention that outermost quantifiers are implicit,
-  Horn clauses @{text "A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> A"} are a special
+  Horn clauses \<open>A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> A\<close> are a special
   case of this.
 
-  For example, @{text "\<inter>"}-introduction rule encountered before is
+  For example, the \<open>\<inter>\<close>-introduction rule encountered before is
   represented as a Pure theorem as follows:
   \[
-  @{text "IntI:"}~@{prop "x \<in> A \<Longrightarrow> x \<in> B \<Longrightarrow> x \<in> A \<inter> B"}
+  \<open>IntI:\<close>~@{prop "x \<in> A \<Longrightarrow> x \<in> B \<Longrightarrow> x \<in> A \<inter> B"}
   \]
 
   This is a plain Horn clause, since no further nesting on
-  the left is involved.  The general @{text "\<Inter>"}-introduction
+  the left is involved.  The general \<open>\<Inter>\<close>-introduction
   corresponds to a Hereditary Harrop Formula with one additional level
   of nesting:
   \[
-  @{text "InterI:"}~@{prop "(\<And>A. A \<in> \<A> \<Longrightarrow> x \<in> A) \<Longrightarrow> x \<in> \<Inter>\<A>"}
+  \<open>InterI:\<close>~@{prop "(\<And>A. A \<in> \<A> \<Longrightarrow> x \<in> A) \<Longrightarrow> x \<in> \<Inter>\<A>"}
   \]
 
   \<^medskip>
-  Goals are also represented as rules: @{text "A\<^sub>1 \<Longrightarrow>
-  \<dots> A\<^sub>n \<Longrightarrow> C"} states that the sub-goals @{text "A\<^sub>1, \<dots>,
-  A\<^sub>n"} entail the result @{text "C"}; for @{text "n = 0"} the
-  goal is finished.  To allow @{text "C"} being a rule statement
-  itself, we introduce the protective marker @{text "# :: prop \<Rightarrow>
-  prop"}, which is defined as identity and hidden from the user.  We
+  Goals are also represented as rules: \<open>A\<^sub>1 \<Longrightarrow>
+  \<dots> A\<^sub>n \<Longrightarrow> C\<close> states that the sub-goals \<open>A\<^sub>1, \<dots>,
+  A\<^sub>n\<close> entail the result \<open>C\<close>; for \<open>n = 0\<close> the
+  goal is finished.  To allow \<open>C\<close> being a rule statement
+  itself, we introduce the protective marker \<open># :: prop \<Rightarrow>
+  prop\<close>, which is defined as identity and hidden from the user.  We
   initialize and finish goal states as follows:
 
   \[
   \begin{array}{c@ {\qquad}c}
-  \infer[(@{inference_def init})]{@{text "C \<Longrightarrow> #C"}}{} &
-  \infer[(@{inference_def finish})]{@{text C}}{@{text "#C"}}
+  \infer[(@{inference_def init})]{\<open>C \<Longrightarrow> #C\<close>}{} &
+  \infer[(@{inference_def finish})]{\<open>C\<close>}{\<open>#C\<close>}
   \end{array}
   \]
 
@@ -376,30 +373,29 @@
   are @{inference resolution}, for back-chaining a rule against a
   sub-goal (replacing it by zero or more sub-goals), and @{inference
   assumption}, for solving a sub-goal (finding a short-circuit with
-  local assumptions).  Below @{text "\<^vec>x"} stands for @{text
-  "x\<^sub>1, \<dots>, x\<^sub>n"} (@{text "n \<ge> 0"}).
+  local assumptions).  Below \<open>\<^vec>x\<close> stands for \<open>x\<^sub>1, \<dots>, x\<^sub>n\<close> (\<open>n \<ge> 0\<close>).
 
   \[
   \infer[(@{inference_def resolution})]
-  {@{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>A (\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>"}}
+  {\<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>A (\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>\<close>}
   {\begin{tabular}{rl}
-    @{text "rule:"} &
-    @{text "\<^vec>A \<^vec>a \<Longrightarrow> B \<^vec>a"} \\
-    @{text "goal:"} &
-    @{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C"} \\
-    @{text "goal unifier:"} &
-    @{text "(\<lambda>\<^vec>x. B (\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>"} \\
+    \<open>rule:\<close> &
+    \<open>\<^vec>A \<^vec>a \<Longrightarrow> B \<^vec>a\<close> \\
+    \<open>goal:\<close> &
+    \<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C\<close> \\
+    \<open>goal unifier:\<close> &
+    \<open>(\<lambda>\<^vec>x. B (\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>\<close> \\
    \end{tabular}}
   \]
 
   \<^medskip>
 
   \[
-  \infer[(@{inference_def assumption})]{@{text "C\<vartheta>"}}
+  \infer[(@{inference_def assumption})]{\<open>C\<vartheta>\<close>}
   {\begin{tabular}{rl}
-    @{text "goal:"} &
-    @{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> A \<^vec>x) \<Longrightarrow> C"} \\
-    @{text "assm unifier:"} & @{text "A\<vartheta> = H\<^sub>i\<vartheta>"}~~\text{(for some~@{text "H\<^sub>i"})} \\
+    \<open>goal:\<close> &
+    \<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> A \<^vec>x) \<Longrightarrow> C\<close> \\
+    \<open>assm unifier:\<close> & \<open>A\<vartheta> = H\<^sub>i\<vartheta>\<close>~~\text{(for some~\<open>H\<^sub>i\<close>)} \\
    \end{tabular}}
   \]
 
@@ -409,13 +405,13 @@
   {\footnotesize
   \<^medskip>
   \begin{tabular}{r@ {\quad}l}
-  @{text "(A \<and> B \<Longrightarrow> B \<and> A) \<Longrightarrow> #(A \<and> B \<Longrightarrow> B \<and> A)"} & @{text "(init)"} \\
-  @{text "(A \<and> B \<Longrightarrow> B) \<Longrightarrow> (A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>"} & @{text "(resolution B \<Longrightarrow> A \<Longrightarrow> B \<and> A)"} \\
-  @{text "(A \<and> B \<Longrightarrow> A \<and> B) \<Longrightarrow> (A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>"} & @{text "(resolution A \<and> B \<Longrightarrow> B)"} \\
-  @{text "(A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>"} & @{text "(assumption)"} \\
-  @{text "(A \<and> B \<Longrightarrow> A \<and> B) \<Longrightarrow> #\<dots>"} & @{text "(resolution A \<and> B \<Longrightarrow> A)"} \\
-  @{text "#\<dots>"} & @{text "(assumption)"} \\
-  @{text "A \<and> B \<Longrightarrow> B \<and> A"} & @{text "(finish)"} \\
+  \<open>(A \<and> B \<Longrightarrow> B \<and> A) \<Longrightarrow> #(A \<and> B \<Longrightarrow> B \<and> A)\<close> & \<open>(init)\<close> \\
+  \<open>(A \<and> B \<Longrightarrow> B) \<Longrightarrow> (A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>\<close> & \<open>(resolution B \<Longrightarrow> A \<Longrightarrow> B \<and> A)\<close> \\
+  \<open>(A \<and> B \<Longrightarrow> A \<and> B) \<Longrightarrow> (A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>\<close> & \<open>(resolution A \<and> B \<Longrightarrow> B)\<close> \\
+  \<open>(A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>\<close> & \<open>(assumption)\<close> \\
+  \<open>(A \<and> B \<Longrightarrow> A \<and> B) \<Longrightarrow> #\<dots>\<close> & \<open>(resolution A \<and> B \<Longrightarrow> A)\<close> \\
+  \<open>#\<dots>\<close> & \<open>(assumption)\<close> \\
+  \<open>A \<and> B \<Longrightarrow> B \<and> A\<close> & \<open>(finish)\<close> \\
   \end{tabular}
   \<^medskip>
   }
@@ -426,28 +422,28 @@
   @{inference_def elim_resolution} principle.  In contrast, Isar uses
   a slightly more refined combination, where the assumptions to be
   closed are marked explicitly, using again the protective marker
-  @{text "#"}:
+  \<open>#\<close>:
 
   \[
   \infer[(@{inference refinement})]
-  {@{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>G' (\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>"}}
+  {\<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>G' (\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>\<close>}
   {\begin{tabular}{rl}
-    @{text "sub\<hyphen>proof:"} &
-    @{text "\<^vec>G \<^vec>a \<Longrightarrow> B \<^vec>a"} \\
-    @{text "goal:"} &
-    @{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C"} \\
-    @{text "goal unifier:"} &
-    @{text "(\<lambda>\<^vec>x. B (\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>"} \\
-    @{text "assm unifiers:"} &
-    @{text "(\<lambda>\<^vec>x. G\<^sub>j (\<^vec>a \<^vec>x))\<vartheta> = #H\<^sub>i\<vartheta>"} \\
-    & \quad (for each marked @{text "G\<^sub>j"} some @{text "#H\<^sub>i"}) \\
+    \<open>sub\<hyphen>proof:\<close> &
+    \<open>\<^vec>G \<^vec>a \<Longrightarrow> B \<^vec>a\<close> \\
+    \<open>goal:\<close> &
+    \<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C\<close> \\
+    \<open>goal unifier:\<close> &
+    \<open>(\<lambda>\<^vec>x. B (\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>\<close> \\
+    \<open>assm unifiers:\<close> &
+    \<open>(\<lambda>\<^vec>x. G\<^sub>j (\<^vec>a \<^vec>x))\<vartheta> = #H\<^sub>i\<vartheta>\<close> \\
+    & \quad (for each marked \<open>G\<^sub>j\<close> some \<open>#H\<^sub>i\<close>) \\
    \end{tabular}}
   \]
 
-  Here the @{text "sub\<hyphen>proof"} rule stems from the
+  Here the \<open>sub\<hyphen>proof\<close> rule stems from the
   main @{command fix}-@{command assume}-@{command show} outline of
   Isar (cf.\ \secref{sec:framework-subproof}): each assumption
-  indicated in the text results in a marked premise @{text "G"} above.
+  indicated in the text results in a marked premise \<open>G\<close> above.
   The marking enforces resolution against one of the sub-goal's
   premises.  Consequently, @{command fix}-@{command assume}-@{command
   show} makes it possible to fit the result of a sub-proof quite robustly into a
@@ -467,29 +463,28 @@
   Isar is an exercise in sound minimalism.  Approximately half of the
   language is introduced as primitive, the rest defined as derived
   concepts.  The following grammar describes the core language
-  (category @{text "proof"}), which is embedded into theory
+  (category \<open>proof\<close>), which is embedded into theory
   specification elements such as @{command theorem}; see also
-  \secref{sec:framework-stmt} for the separate category @{text
-  "statement"}.
+  \secref{sec:framework-stmt} for the separate category \<open>statement\<close>.
 
   \<^medskip>
   \begin{tabular}{rcl}
-    @{text "theory\<hyphen>stmt"} & = & @{command "theorem"}~@{text "statement proof  |"}~~@{command "definition"}~@{text "\<dots>  |  \<dots>"} \\[1ex]
+    \<open>theory\<hyphen>stmt\<close> & = & @{command "theorem"}~\<open>statement proof  |\<close>~~@{command "definition"}~\<open>\<dots>  |  \<dots>\<close> \\[1ex]
 
-    @{text "proof"} & = & @{text "prfx\<^sup>*"}~@{command "proof"}~@{text "method\<^sup>? stmt\<^sup>*"}~@{command "qed"}~@{text "method\<^sup>?"} \\[1ex]
+    \<open>proof\<close> & = & \<open>prfx\<^sup>*\<close>~@{command "proof"}~\<open>method\<^sup>? stmt\<^sup>*\<close>~@{command "qed"}~\<open>method\<^sup>?\<close> \\[1ex]
 
-    @{text prfx} & = & @{command "using"}~@{text "facts"} \\
-    & @{text "|"} & @{command "unfolding"}~@{text "facts"} \\
+    \<open>prfx\<close> & = & @{command "using"}~\<open>facts\<close> \\
+    & \<open>|\<close> & @{command "unfolding"}~\<open>facts\<close> \\
 
-    @{text stmt} & = & @{command "{"}~@{text "stmt\<^sup>*"}~@{command "}"} \\
-    & @{text "|"} & @{command "next"} \\
-    & @{text "|"} & @{command "note"}~@{text "name = facts"} \\
-    & @{text "|"} & @{command "let"}~@{text "term = term"} \\
-    & @{text "|"} & @{command "fix"}~@{text "var\<^sup>+"} \\
-    & @{text "|"} & @{command assume}~@{text "\<guillemotleft>inference\<guillemotright> name: props"} \\
-    & @{text "|"} & @{command "then"}@{text "\<^sup>?"}~@{text goal} \\
-    @{text goal} & = & @{command "have"}~@{text "name: props proof"} \\
-    & @{text "|"} & @{command "show"}~@{text "name: props proof"} \\
+    \<open>stmt\<close> & = & @{command "{"}~\<open>stmt\<^sup>*\<close>~@{command "}"} \\
+    & \<open>|\<close> & @{command "next"} \\
+    & \<open>|\<close> & @{command "note"}~\<open>name = facts\<close> \\
+    & \<open>|\<close> & @{command "let"}~\<open>term = term\<close> \\
+    & \<open>|\<close> & @{command "fix"}~\<open>var\<^sup>+\<close> \\
+    & \<open>|\<close> & @{command assume}~\<open>\<guillemotleft>inference\<guillemotright> name: props\<close> \\
+    & \<open>|\<close> & @{command "then"}\<open>\<^sup>?\<close>~\<open>goal\<close> \\
+    \<open>goal\<close> & = & @{command "have"}~\<open>name: props proof\<close> \\
+    & \<open>|\<close> & @{command "show"}~\<open>name: props proof\<close> \\
   \end{tabular}
 
   \<^medskip>
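
  A tiny proof in Isabelle/HOL that uses only elements of this core
  grammar:

    theorem "A \<longrightarrow> A \<and> A"
    proof
      assume a: A
      from a and a show "A \<and> A" ..
    qed
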
@@ -498,20 +493,19 @@
 
   \<^medskip>
   The syntax for terms and propositions is inherited from
-  Pure (and the object-logic).  A @{text "pattern"} is a @{text
-  "term"} with schematic variables, to be bound by higher-order
+  Pure (and the object-logic).  A \<open>pattern\<close> is a \<open>term\<close> with schematic variables, to be bound by higher-order
   matching.
 
   \<^medskip>
   Facts may be referenced by name or proposition.  For
-  example, the result of ``@{command have}~@{text "a: A \<langle>proof\<rangle>"}''
-  becomes available both as @{text "a"} and
-  \isacharbackquoteopen@{text "A"}\isacharbackquoteclose.  Moreover,
+  example, the result of ``@{command have}~\<open>a: A \<langle>proof\<rangle>\<close>''
+  becomes available both as \<open>a\<close> and
+  \isacharbackquoteopen\<open>A\<close>\isacharbackquoteclose.  Moreover,
   fact expressions may involve attributes that modify either the
   theorem or the background context.  For example, the expression
-  ``@{text "a [OF b]"}'' refers to the composition of two facts
+  ``\<open>a [OF b]\<close>'' refers to the composition of two facts
   according to the @{inference resolution} inference of
-  \secref{sec:framework-resolution}, while ``@{text "a [intro]"}''
+  \secref{sec:framework-resolution}, while ``\<open>a [intro]\<close>''
   declares a fact as introduction rule in the context.
 
   The special fact called ``@{fact this}'' always refers to the last
@@ -522,12 +516,12 @@
 
   \<^medskip>
   \begin{tabular}{rcl}
-    @{command from}~@{text a} & @{text "\<equiv>"} & @{command note}~@{text a}~@{command then} \\
-    @{command with}~@{text a} & @{text "\<equiv>"} & @{command from}~@{text "a \<AND> this"} \\
+    @{command from}~\<open>a\<close> & \<open>\<equiv>\<close> & @{command note}~\<open>a\<close>~@{command then} \\
+    @{command with}~\<open>a\<close> & \<open>\<equiv>\<close> & @{command from}~\<open>a \<AND> this\<close> \\
   \end{tabular}
   \<^medskip>
 
-  The @{text "method"} category is essentially a parameter and may be
+  The \<open>method\<close> category is essentially a parameter and may be
   populated later.  Methods use the facts indicated by @{command
   "then"} or @{command using}, and then operate on the goal state.
   Some basic methods are predefined: ``@{method "-"}'' leaves the goal
@@ -536,8 +530,8 @@
   result to the goal (both ``@{method this}'' and ``@{method (Pure) rule}''
   refer to @{inference resolution} of
   \secref{sec:framework-resolution}).  The secondary arguments to
-  ``@{method (Pure) rule}'' may be specified explicitly as in ``@{text "(rule
-  a)"}'', or picked from the context.  In the latter case, the system
+  ``@{method (Pure) rule}'' may be specified explicitly as in ``\<open>(rule
+  a)\<close>'', or picked from the context.  In the latter case, the system
   first tries rules declared as @{attribute (Pure) elim} or
   @{attribute (Pure) dest}, followed by those declared as @{attribute
   (Pure) intro}.
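
  For example, in Isabelle/HOL, with the rule given explicitly (a
  sketch using the notepad scratch environment):

    notepad
    begin
      assume ab: "A \<and> B"
      from ab have B by (rule conjunct2)
    end
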
@@ -545,16 +539,15 @@
   The default method for @{command proof} is ``@{method standard}''
   (arguments picked from the context), for @{command qed} it is
   ``@{method "succeed"}''.  Further abbreviations for terminal proof steps
-  are ``@{command "by"}~@{text "method\<^sub>1 method\<^sub>2"}'' for
-  ``@{command proof}~@{text "method\<^sub>1"}~@{command qed}~@{text
-  "method\<^sub>2"}'', and ``@{command ".."}'' for ``@{command
+  are ``@{command "by"}~\<open>method\<^sub>1 method\<^sub>2\<close>'' for
+  ``@{command proof}~\<open>method\<^sub>1\<close>~@{command qed}~\<open>method\<^sub>2\<close>'', and ``@{command ".."}'' for ``@{command
   "by"}~@{method standard}, and ``@{command "."}'' for ``@{command
   "by"}~@{method this}''.  The @{command unfolding} element operates
   directly on the current facts and goal by applying equalities.
 
   \<^medskip>
   Block structure can be indicated explicitly by ``@{command
-  "{"}~@{text "\<dots>"}~@{command "}"}'', although the body of a sub-proof
+  "{"}~\<open>\<dots>\<close>~@{command "}"}'', although the body of a sub-proof
   already involves implicit nesting.  In any case, @{command next}
   jumps into the next section of a block, i.e.\ it acts like closing
   an implicit block scope and opening another one; there is no direct
@@ -572,20 +565,18 @@
 subsection \<open>Context elements \label{sec:framework-context}\<close>
 
 text \<open>
-  In judgments @{text "\<Gamma> \<turnstile> \<phi>"} of the primitive framework, @{text "\<Gamma>"}
+  In judgments \<open>\<Gamma> \<turnstile> \<phi>\<close> of the primitive framework, \<open>\<Gamma>\<close>
   essentially acts like a proof context.  Isar elaborates this idea
   towards a higher-level notion, with additional information for
   type-inference, term abbreviations, local facts, hypotheses etc.
 
-  The element @{command fix}~@{text "x :: \<alpha>"} declares a local
+  The element @{command fix}~\<open>x :: \<alpha>\<close> declares a local
   parameter, i.e.\ an arbitrary-but-fixed entity of a given type; in
-  results exported from the context, @{text "x"} may become anything.
-  The @{command assume}~@{text "\<guillemotleft>inference\<guillemotright>"} element provides a
-  general interface to hypotheses: ``@{command assume}~@{text
-  "\<guillemotleft>inference\<guillemotright> A"}'' produces @{text "A \<turnstile> A"} locally, while the
-  included inference tells how to discharge @{text A} from results
-  @{text "A \<turnstile> B"} later on.  There is no user-syntax for @{text
-  "\<guillemotleft>inference\<guillemotright>"}, i.e.\ it may only occur internally when derived
+  results exported from the context, \<open>x\<close> may become anything.
+  The @{command assume}~\<open>\<guillemotleft>inference\<guillemotright>\<close> element provides a
+  general interface to hypotheses: ``@{command assume}~\<open>\<guillemotleft>inference\<guillemotright> A\<close>'' produces \<open>A \<turnstile> A\<close> locally, while the
+  included inference tells how to discharge \<open>A\<close> from results
+  \<open>A \<turnstile> B\<close> later on.  There is no user-syntax for \<open>\<guillemotleft>inference\<guillemotright>\<close>, i.e.\ it may only occur internally when derived
   commands are defined in ML.
 
   At the user-level, the default inference for @{command assume} is
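
  For illustration, a minimal sketch in Isabelle/HOL (using the notepad
  scratch environment):

    notepad
    begin
      fix x
      assume a: "B x"
      have "B x" by (rule a)
    end

  Results exported from such a context discharge the assumption and
  generalize over \<open>x\<close>, as explained above.
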
@@ -594,19 +585,19 @@
 
   \<^medskip>
   \begin{tabular}{rcl}
-    @{command presume}~@{text A} & @{text "\<equiv>"} & @{command assume}~@{text "\<guillemotleft>weak\<hyphen>discharge\<guillemotright> A"} \\
-    @{command def}~@{text "x \<equiv> a"} & @{text "\<equiv>"} & @{command fix}~@{text x}~@{command assume}~@{text "\<guillemotleft>expansion\<guillemotright> x \<equiv> a"} \\
+    @{command presume}~\<open>A\<close> & \<open>\<equiv>\<close> & @{command assume}~\<open>\<guillemotleft>weak\<hyphen>discharge\<guillemotright> A\<close> \\
+    @{command def}~\<open>x \<equiv> a\<close> & \<open>\<equiv>\<close> & @{command fix}~\<open>x\<close>~@{command assume}~\<open>\<guillemotleft>expansion\<guillemotright> x \<equiv> a\<close> \\
   \end{tabular}
   \<^medskip>
 
   \[
-  \infer[(@{inference_def discharge})]{@{text "\<strut>\<Gamma> - A \<turnstile> #A \<Longrightarrow> B"}}{@{text "\<strut>\<Gamma> \<turnstile> B"}}
+  \infer[(@{inference_def discharge})]{\<open>\<strut>\<Gamma> - A \<turnstile> #A \<Longrightarrow> B\<close>}{\<open>\<strut>\<Gamma> \<turnstile> B\<close>}
   \]
   \[
-  \infer[(@{inference_def "weak\<hyphen>discharge"})]{@{text "\<strut>\<Gamma> - A \<turnstile> A \<Longrightarrow> B"}}{@{text "\<strut>\<Gamma> \<turnstile> B"}}
+  \infer[(@{inference_def "weak\<hyphen>discharge"})]{\<open>\<strut>\<Gamma> - A \<turnstile> A \<Longrightarrow> B\<close>}{\<open>\<strut>\<Gamma> \<turnstile> B\<close>}
   \]
   \[
-  \infer[(@{inference_def expansion})]{@{text "\<strut>\<Gamma> - (x \<equiv> a) \<turnstile> B a"}}{@{text "\<strut>\<Gamma> \<turnstile> B x"}}
+  \infer[(@{inference_def expansion})]{\<open>\<strut>\<Gamma> - (x \<equiv> a) \<turnstile> B a\<close>}{\<open>\<strut>\<Gamma> \<turnstile> B x\<close>}
   \]
 
   \<^medskip>
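
  For illustration, a sketch of @{command def} and the expansion
  discipline in Isabelle/HOL (the fact name \<open>x_def\<close> is generated
  automatically):

    notepad
    begin
      fix a :: 'a
      def x \<equiv> "a"
      have "x = a" unfolding x_def ..
    end
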
@@ -619,42 +610,42 @@
   The most interesting derived context element in Isar is @{command
   obtain} @{cite \<open>\S5.3\<close> "Wenzel-PhD"}, which supports generalized
   elimination steps in a purely forward manner.  The @{command obtain}
-  command takes a specification of parameters @{text "\<^vec>x"} and
-  assumptions @{text "\<^vec>A"} to be added to the context, together
+  command takes a specification of parameters \<open>\<^vec>x\<close> and
+  assumptions \<open>\<^vec>A\<close> to be added to the context, together
   with a proof of a case rule stating that this extension is
   conservative (i.e.\ may be removed from closed results later on):
 
   \<^medskip>
   \begin{tabular}{l}
-  @{text "\<langle>facts\<rangle>"}~~@{command obtain}~@{text "\<^vec>x \<WHERE> \<^vec>A \<^vec>x  \<langle>proof\<rangle> \<equiv>"} \\[0.5ex]
-  \quad @{command have}~@{text "case: \<And>thesis. (\<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis) \<Longrightarrow> thesis\<rangle>"} \\
+  \<open>\<langle>facts\<rangle>\<close>~~@{command obtain}~\<open>\<^vec>x \<WHERE> \<^vec>A \<^vec>x  \<langle>proof\<rangle> \<equiv>\<close> \\[0.5ex]
+  \quad @{command have}~\<open>case: \<And>thesis. (\<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close> \\
   \quad @{command proof}~@{method "-"} \\
-  \qquad @{command fix}~@{text thesis} \\
-  \qquad @{command assume}~@{text "[intro]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis"} \\
-  \qquad @{command show}~@{text thesis}~@{command using}~@{text "\<langle>facts\<rangle> \<langle>proof\<rangle>"} \\
+  \qquad @{command fix}~\<open>thesis\<close> \\
+  \qquad @{command assume}~\<open>[intro]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis\<close> \\
+  \qquad @{command show}~\<open>thesis\<close>~@{command using}~\<open>\<langle>facts\<rangle> \<langle>proof\<rangle>\<close> \\
   \quad @{command qed} \\
-  \quad @{command fix}~@{text "\<^vec>x"}~@{command assume}~@{text "\<guillemotleft>elimination case\<guillemotright> \<^vec>A \<^vec>x"} \\
+  \quad @{command fix}~\<open>\<^vec>x\<close>~@{command assume}~\<open>\<guillemotleft>elimination case\<guillemotright> \<^vec>A \<^vec>x\<close> \\
   \end{tabular}
   \<^medskip>
 
   \[
-  \infer[(@{inference elimination})]{@{text "\<Gamma> \<turnstile> B"}}{
+  \infer[(@{inference elimination})]{\<open>\<Gamma> \<turnstile> B\<close>}{
     \begin{tabular}{rl}
-    @{text "case:"} &
-    @{text "\<Gamma> \<turnstile> \<And>thesis. (\<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis) \<Longrightarrow> thesis"} \\[0.2ex]
-    @{text "result:"} &
-    @{text "\<Gamma> \<union> \<^vec>A \<^vec>y \<turnstile> B"} \\[0.2ex]
+    \<open>case:\<close> &
+    \<open>\<Gamma> \<turnstile> \<And>thesis. (\<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close> \\[0.2ex]
+    \<open>result:\<close> &
+    \<open>\<Gamma> \<union> \<^vec>A \<^vec>y \<turnstile> B\<close> \\[0.2ex]
     \end{tabular}}
   \]
 
-  Here the name ``@{text thesis}'' is a specific convention
+  Here the name ``\<open>thesis\<close>'' is a specific convention
   for an arbitrary-but-fixed proposition; in the primitive natural
-  deduction rules shown before we have occasionally used @{text C}.
-  The whole statement of ``@{command obtain}~@{text x}~@{keyword
-  "where"}~@{text "A x"}'' may be read as a claim that @{text "A x"}
-  may be assumed for some arbitrary-but-fixed @{text "x"}.  Also note
-  that ``@{command obtain}~@{text "A \<AND> B"}'' without parameters
-  is similar to ``@{command have}~@{text "A \<AND> B"}'', but the
+  deduction rules shown before we have occasionally used \<open>C\<close>.
+  The whole statement of ``@{command obtain}~\<open>x\<close>~@{keyword
+  "where"}~\<open>A x\<close>'' may be read as a claim that \<open>A x\<close>
+  may be assumed for some arbitrary-but-fixed \<open>x\<close>.  Also note
+  that ``@{command obtain}~\<open>A \<AND> B\<close>'' without parameters
+  is similar to ``@{command have}~\<open>A \<AND> B\<close>'', but the
   latter involves multiple sub-goals.
 
   \<^medskip>
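
  A canonical application of @{command obtain} in Isabelle/HOL:

    notepad
    begin
      assume "\<exists>x. B x"
      then obtain x where "B x" ..
    end
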
@@ -707,50 +698,49 @@
 subsection \<open>Structured statements \label{sec:framework-stmt}\<close>
 
 text \<open>
-  The category @{text "statement"} of top-level theorem specifications
+  The category \<open>statement\<close> of top-level theorem specifications
   is defined as follows:
 
   \<^medskip>
   \begin{tabular}{rcl}
-  @{text "statement"} & @{text "\<equiv>"} & @{text "name: props \<AND> \<dots>"} \\
-  & @{text "|"} & @{text "context\<^sup>* conclusion"} \\[0.5ex]
+  \<open>statement\<close> & \<open>\<equiv>\<close> & \<open>name: props \<AND> \<dots>\<close> \\
+  & \<open>|\<close> & \<open>context\<^sup>* conclusion\<close> \\[0.5ex]
 
-  @{text "context"} & @{text "\<equiv>"} & @{text "\<FIXES> vars \<AND> \<dots>"} \\
-  & @{text "|"} & @{text "\<ASSUMES> name: props \<AND> \<dots>"} \\
+  \<open>context\<close> & \<open>\<equiv>\<close> & \<open>\<FIXES> vars \<AND> \<dots>\<close> \\
+  & \<open>|\<close> & \<open>\<ASSUMES> name: props \<AND> \<dots>\<close> \\
 
-  @{text "conclusion"} & @{text "\<equiv>"} & @{text "\<SHOWS> name: props \<AND> \<dots>"} \\
-  & @{text "|"} & @{text "\<OBTAINS> vars \<AND> \<dots> \<WHERE> name: props \<AND> \<dots>"} \\
-  & & \quad @{text "\<BBAR> \<dots>"} \\
+  \<open>conclusion\<close> & \<open>\<equiv>\<close> & \<open>\<SHOWS> name: props \<AND> \<dots>\<close> \\
+  & \<open>|\<close> & \<open>\<OBTAINS> vars \<AND> \<dots> \<WHERE> name: props \<AND> \<dots>\<close> \\
+  & & \quad \<open>\<BBAR> \<dots>\<close> \\
   \end{tabular}
 
   \<^medskip>
-  A simple @{text "statement"} consists of named
+  A simple \<open>statement\<close> consists of named
   propositions.  The full form admits local context elements followed
-  by the actual conclusions, such as ``@{keyword "fixes"}~@{text
-  x}~@{keyword "assumes"}~@{text "A x"}~@{keyword "shows"}~@{text "B
-  x"}''.  The final result emerges as a Pure rule after discharging
+  by the actual conclusions, such as ``@{keyword "fixes"}~\<open>x\<close>~@{keyword "assumes"}~\<open>A x\<close>~@{keyword "shows"}~\<open>B
+  x\<close>''.  The final result emerges as a Pure rule after discharging
   the context: @{prop "\<And>x. A x \<Longrightarrow> B x"}.
 
   The @{keyword "obtains"} variant is another abbreviation defined
   below; unlike @{command obtain} (cf.\
   \secref{sec:framework-context}) there may be several ``cases''
-  separated by ``@{text "\<BBAR>"}'', each consisting of several
-  parameters (@{text "vars"}) and several premises (@{text "props"}).
+  separated by ``\<open>\<BBAR>\<close>'', each consisting of several
+  parameters (\<open>vars\<close>) and several premises (\<open>props\<close>).
   This specifies multi-branch elimination rules.
 
   \<^medskip>
   \begin{tabular}{l}
-  @{text "\<OBTAINS> \<^vec>x \<WHERE> \<^vec>A \<^vec>x   \<BBAR>   \<dots>   \<equiv>"} \\[0.5ex]
-  \quad @{text "\<FIXES> thesis"} \\
-  \quad @{text "\<ASSUMES> [intro]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis  \<AND>  \<dots>"} \\
-  \quad @{text "\<SHOWS> thesis"} \\
+  \<open>\<OBTAINS> \<^vec>x \<WHERE> \<^vec>A \<^vec>x   \<BBAR>   \<dots>   \<equiv>\<close> \\[0.5ex]
+  \quad \<open>\<FIXES> thesis\<close> \\
+  \quad \<open>\<ASSUMES> [intro]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis  \<AND>  \<dots>\<close> \\
+  \quad \<open>\<SHOWS> thesis\<close> \\
   \end{tabular}
   \<^medskip>
 
   Presenting structured statements in such an ``open'' format usually
   simplifies the subsequent proof, because the outer structure of the
   problem is already laid out directly.  E.g.\ consider the following
-  canonical patterns for @{text "\<SHOWS>"} and @{text "\<OBTAINS>"},
+  canonical patterns for \<open>\<SHOWS>\<close> and \<open>\<OBTAINS>\<close>,
   respectively:
 \<close>
 
@@ -779,14 +769,12 @@
 
 text \<open>
   \<^medskip>
-  Here local facts \isacharbackquoteopen@{text "A
-  x"}\isacharbackquoteclose\ and \isacharbackquoteopen@{text "B
-  y"}\isacharbackquoteclose\ are referenced immediately; there is no
+  Here local facts \isacharbackquoteopen\<open>A
+  x\<close>\isacharbackquoteclose\ and \isacharbackquoteopen\<open>B
+  y\<close>\isacharbackquoteclose\ are referenced immediately; there is no
   need to decompose the logical rule structure again.  In the second
-  proof the final ``@{command then}~@{command show}~@{text
-  thesis}~@{command ".."}''  involves the local rule case @{text "\<And>x
-  y. A x \<Longrightarrow> B y \<Longrightarrow> thesis"} for the particular instance of terms @{text
-  "a"} and @{text "b"} produced in the body.
+  proof the final ``@{command then}~@{command show}~\<open>thesis\<close>~@{command ".."}''  involves the local rule case \<open>\<And>x
+  y. A x \<Longrightarrow> B y \<Longrightarrow> thesis\<close> for the particular instance of terms \<open>a\<close> and \<open>b\<close> produced in the body.
 \<close>
 
 
@@ -802,8 +790,8 @@
   configurations for debugging.
 
   The basic idea is analogous to evaluating algebraic expressions on a
-  stack machine: @{text "(a + b) \<cdot> c"} then corresponds to a sequence
-  of single transitions for each symbol @{text "(, a, +, b, ), \<cdot>, c"}.
+  stack machine: \<open>(a + b) \<cdot> c\<close> then corresponds to a sequence
+  of single transitions for each symbol \<open>(, a, +, b, ), \<cdot>, c\<close>.
   In Isar the algebraic values are facts or goals, and the operations
   are inferences.
 
@@ -822,15 +810,15 @@
   \caption{Isar/VM modes}\label{fig:isar-vm}
   \end{figure}
 
-  For example, in @{text "state"} mode Isar acts like a mathematical
+  For example, in \<open>state\<close> mode Isar acts like a mathematical
   scratch-pad, accepting declarations like @{command fix}, @{command
   assume}, and claims like @{command have}, @{command show}.  A goal
-  statement changes the mode to @{text "prove"}, which means that we
+  statement changes the mode to \<open>prove\<close>, which means that we
   may now refine the problem via @{command unfolding} or @{command
-  proof}.  Then we are again in @{text "state"} mode of a proof body,
+  proof}.  Then we are again in \<open>state\<close> mode of a proof body,
   which may issue @{command show} statements to solve pending
   sub-goals.  A concluding @{command qed} will return to the original
-  @{text "state"} mode one level upwards.  The subsequent Isar/VM
+  \<open>state\<close> mode one level upwards.  The subsequent Isar/VM
   trace indicates block structure, linguistic mode, goal state, and
   inferences:
 \<close>
@@ -847,34 +835,34 @@
   qed
   text_raw \<open>\end{minipage}\quad
 \begin{minipage}[t]{0.06\textwidth}
-@{text "begin"} \\
+\<open>begin\<close> \\
 \\
 \\
-@{text "begin"} \\
-@{text "end"} \\
-@{text "end"} \\
+\<open>begin\<close> \\
+\<open>end\<close> \\
+\<open>end\<close> \\
 \end{minipage}
 \begin{minipage}[t]{0.08\textwidth}
-@{text "prove"} \\
-@{text "state"} \\
-@{text "state"} \\
-@{text "prove"} \\
-@{text "state"} \\
-@{text "state"} \\
+\<open>prove\<close> \\
+\<open>state\<close> \\
+\<open>state\<close> \\
+\<open>prove\<close> \\
+\<open>state\<close> \\
+\<open>state\<close> \\
 \end{minipage}\begin{minipage}[t]{0.35\textwidth}
-@{text "(A \<longrightarrow> B) \<Longrightarrow> #(A \<longrightarrow> B)"} \\
-@{text "(A \<Longrightarrow> B) \<Longrightarrow> #(A \<longrightarrow> B)"} \\
+\<open>(A \<longrightarrow> B) \<Longrightarrow> #(A \<longrightarrow> B)\<close> \\
+\<open>(A \<Longrightarrow> B) \<Longrightarrow> #(A \<longrightarrow> B)\<close> \\
 \\
 \\
-@{text "#(A \<longrightarrow> B)"} \\
-@{text "A \<longrightarrow> B"} \\
+\<open>#(A \<longrightarrow> B)\<close> \\
+\<open>A \<longrightarrow> B\<close> \\
 \end{minipage}\begin{minipage}[t]{0.4\textwidth}
-@{text "(init)"} \\
-@{text "(resolution impI)"} \\
+\<open>(init)\<close> \\
+\<open>(resolution impI)\<close> \\
 \\
 \\
-@{text "(refinement #A \<Longrightarrow> B)"} \\
-@{text "(finish)"} \\
+\<open>(refinement #A \<Longrightarrow> B)\<close> \\
+\<open>(finish)\<close> \\
 \end{minipage}\<close>
 (*<*)
 end
@@ -970,15 +958,15 @@
   concept.  The generic proof elements introduced below depend on
   rules declared as @{attribute trans} in the context.  It is left to
   the object-logic to provide a suitable rule collection for mixed
-  relations of @{text "="}, @{text "<"}, @{text "\<le>"}, @{text "\<subset>"},
-  @{text "\<subseteq>"} etc.  Due to the flexibility of rule composition
+  relations of \<open>=\<close>, \<open><\<close>, \<open>\<le>\<close>, \<open>\<subset>\<close>,
+  \<open>\<subseteq>\<close> etc.  Due to the flexibility of rule composition
   (\secref{sec:framework-resolution}), substitution of equals by
   equals is covered as well, even substitution of inequalities
   involving monotonicity conditions; see also @{cite \<open>\S6\<close> "Wenzel-PhD"}
   and @{cite "Bauer-Wenzel:2001"}.
 
   The generic calculational mechanism is based on the observation that
-  rules such as @{text "trans:"}~@{prop "x = y \<Longrightarrow> y = z \<Longrightarrow> x = z"}
+  rules such as \<open>trans:\<close>~@{prop "x = y \<Longrightarrow> y = z \<Longrightarrow> x = z"}
   proceed from the premises towards the conclusion in a deterministic
   fashion.  Thus we may reason in forward mode, feeding intermediate
   results into rules selected from the context.  The course of
@@ -987,12 +975,12 @@
   already provided by the Isar primitives.  In the definitions below,
   @{attribute OF} refers to @{inference resolution}
   (\secref{sec:framework-resolution}) with multiple rule arguments,
-  and @{text "trans"} represents to a suitable rule from the context:
+  and \<open>trans\<close> refers to a suitable rule from the context:
 
   \begin{matharray}{rcl}
-    @{command "also"}@{text "\<^sub>0"} & \equiv & @{command "note"}~@{text "calculation = this"} \\
-    @{command "also"}@{text "\<^sub>n\<^sub>+\<^sub>1"} & \equiv & @{command "note"}~@{text "calculation = trans [OF calculation this]"} \\[0.5ex]
-    @{command "finally"} & \equiv & @{command "also"}~@{command "from"}~@{text calculation} \\
+    @{command "also"}\<open>\<^sub>0\<close> & \equiv & @{command "note"}~\<open>calculation = this\<close> \\
+    @{command "also"}\<open>\<^sub>n\<^sub>+\<^sub>1\<close> & \equiv & @{command "note"}~\<open>calculation = trans [OF calculation this]\<close> \\[0.5ex]
+    @{command "finally"} & \equiv & @{command "also"}~@{command "from"}~\<open>calculation\<close> \\
   \end{matharray}
 
   The start of a calculation is determined implicitly in the
@@ -1019,7 +1007,7 @@
 (*>*)
 
 text \<open>
-  The term ``@{text "\<dots>"}'' above is a special abbreviation
+  The term ``\<open>\<dots>\<close>'' above is a special abbreviation
   provided by the Isabelle/Isar syntax layer: it statically refers to
   the right-hand side argument of the previous statement given in the
   text.  Thus it happens to coincide with relevant sub-expressions in
@@ -1030,9 +1018,9 @@
   Symmetry rules such as @{prop "x = y \<Longrightarrow> y = x"} are like
   transitivities with only one premise.  Isar maintains a separate
   rule collection declared via the @{attribute sym} attribute, to be
-  used in fact expressions ``@{text "a [symmetric]"}'', or single-step
-  proofs ``@{command assume}~@{text "x = y"}~@{command then}~@{command
-  have}~@{text "y = x"}~@{command ".."}''.
+  used in fact expressions ``\<open>a [symmetric]\<close>'', or single-step
+  proofs ``@{command assume}~\<open>x = y\<close>~@{command then}~@{command
+  have}~\<open>y = x\<close>~@{command ".."}''.
 \<close>
 
 end
\ No newline at end of file
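
  For illustration, a calculation in Isabelle/HOL that combines these
  elements (a sketch using the notepad scratch environment):

    notepad
    begin
      fix a b c d :: 'a
      assume "a = b" and "b = c" and "c = d"
      have "a = b" by fact
      also have "\<dots> = c" by fact
      also have "\<dots> = d" by fact
      finally have "a = d" .
      then have "d = a" ..    -- \<open>via the @{attribute sym} collection\<close>
    end
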
--- a/src/Doc/Isar_Ref/Generic.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Generic.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -29,7 +29,7 @@
 
 text \<open>
   \begin{matharray}{rcll}
-    @{command_def "print_options"} & : & @{text "context \<rightarrow>"} \\
+    @{command_def "print_options"} & : & \<open>context \<rightarrow>\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -39,12 +39,12 @@
   \<close>}
 
   \<^descr> @{command "print_options"} prints the available configuration
-  options, with names, types, and current values; the ``@{text "!"}'' option
+  options, with names, types, and current values; the ``\<open>!\<close>'' option
   indicates extra verbosity.
   
-  \<^descr> @{text "name = value"} as an attribute expression modifies the
+  \<^descr> \<open>name = value\<close> as an attribute expression modifies the
   named option, with the syntax of the value depending on the option's
-  type.  For @{ML_type bool} the default value is @{text true}.  Any
+  type.  For @{ML_type bool} the default value is \<open>true\<close>.  Any
   attempt to change a global option in a local context is ignored.
 \<close>
 
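
  For example, using the standard option \<open>show_types\<close> of Isabelle/Pure:

    declare [[show_types]]            -- \<open>Boolean value defaults to \<open>true\<close>\<close>
    declare [[show_types = false]]    -- \<open>explicit value\<close>
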
@@ -55,17 +55,17 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def unfold} & : & @{text method} \\
-    @{method_def fold} & : & @{text method} \\
-    @{method_def insert} & : & @{text method} \\[0.5ex]
-    @{method_def erule}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def drule}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def frule}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def intro} & : & @{text method} \\
-    @{method_def elim} & : & @{text method} \\
-    @{method_def fail} & : & @{text method} \\
-    @{method_def succeed} & : & @{text method} \\
-    @{method_def sleep} & : & @{text method} \\
+    @{method_def unfold} & : & \<open>method\<close> \\
+    @{method_def fold} & : & \<open>method\<close> \\
+    @{method_def insert} & : & \<open>method\<close> \\[0.5ex]
+    @{method_def erule}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def drule}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def frule}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def intro} & : & \<open>method\<close> \\
+    @{method_def elim} & : & \<open>method\<close> \\
+    @{method_def fail} & : & \<open>method\<close> \\
+    @{method_def succeed} & : & \<open>method\<close> \\
+    @{method_def sleep} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -79,18 +79,16 @@
     @@{method sleep} @{syntax real}
   \<close>}
 
-  \<^descr> @{method unfold}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} and @{method fold}~@{text
-  "a\<^sub>1 \<dots> a\<^sub>n"} expand (or fold back) the given definitions throughout
+  \<^descr> @{method unfold}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> and @{method fold}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> expand (or fold back) the given definitions throughout
   all goals; any chained facts provided are inserted into the goal and
   subject to rewriting as well.
 
-  \<^descr> @{method insert}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} inserts theorems as facts
+  \<^descr> @{method insert}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> inserts theorems as facts
   into all goals of the proof state.  Note that current facts
   indicated for forward chaining are ignored.
 
-  \<^descr> @{method erule}~@{text "a\<^sub>1 \<dots> a\<^sub>n"}, @{method
-  drule}~@{text "a\<^sub>1 \<dots> a\<^sub>n"}, and @{method frule}~@{text
-  "a\<^sub>1 \<dots> a\<^sub>n"} are similar to the basic @{method rule}
+  \<^descr> @{method erule}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close>, @{method
+  drule}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close>, and @{method frule}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> are similar to the basic @{method rule}
   method (see \secref{sec:pure-meth-att}), but apply rules by
   elim-resolution, destruct-resolution, and forward-resolution,
   respectively @{cite "isabelle-implementation"}.  The optional natural
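  A small sketch of elim-resolution via @{method erule}, using the
  standard HOL rules \<open>conjE\<close> and \<open>conjI\<close>:

    lemma "A \<and> B \<Longrightarrow> B \<and> A"
      apply (erule conjE)
      apply (rule conjI)
       apply assumption
      apply assumption
      done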
@@ -112,28 +110,27 @@
   to common automated tools.
 
   \<^descr> @{method fail} yields an empty result sequence; it is the
-  identity of the ``@{text "|"}'' method combinator (cf.\
+  identity of the ``\<open>|\<close>'' method combinator (cf.\
   \secref{sec:proof-meth}).
 
   \<^descr> @{method succeed} yields a single (unchanged) result; it is
-  the identity of the ``@{text ","}'' method combinator (cf.\
+  the identity of the ``\<open>,\<close>'' method combinator (cf.\
   \secref{sec:proof-meth}).
 
-  \<^descr> @{method sleep}~@{text s} succeeds after a real-time delay of @{text
-  s} seconds. This is occasionally useful for demonstration and testing
+  \<^descr> @{method sleep}~\<open>s\<close> succeeds after a real-time delay of \<open>s\<close> seconds. This is occasionally useful for demonstration and testing
   purposes.
 
 
   \begin{matharray}{rcl}
-    @{attribute_def tagged} & : & @{text attribute} \\
-    @{attribute_def untagged} & : & @{text attribute} \\[0.5ex]
-    @{attribute_def THEN} & : & @{text attribute} \\
-    @{attribute_def unfolded} & : & @{text attribute} \\
-    @{attribute_def folded} & : & @{text attribute} \\
-    @{attribute_def abs_def} & : & @{text attribute} \\[0.5ex]
-    @{attribute_def rotated} & : & @{text attribute} \\
-    @{attribute_def (Pure) elim_format} & : & @{text attribute} \\
-    @{attribute_def no_vars}@{text "\<^sup>*"} & : & @{text attribute} \\
+    @{attribute_def tagged} & : & \<open>attribute\<close> \\
+    @{attribute_def untagged} & : & \<open>attribute\<close> \\[0.5ex]
+    @{attribute_def THEN} & : & \<open>attribute\<close> \\
+    @{attribute_def unfolded} & : & \<open>attribute\<close> \\
+    @{attribute_def folded} & : & \<open>attribute\<close> \\
+    @{attribute_def abs_def} & : & \<open>attribute\<close> \\[0.5ex]
+    @{attribute_def rotated} & : & \<open>attribute\<close> \\
+    @{attribute_def (Pure) elim_format} & : & \<open>attribute\<close> \\
+    @{attribute_def no_vars}\<open>\<^sup>*\<close> & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -148,19 +145,19 @@
     @@{attribute rotated} @{syntax int}?
   \<close>}
 
-  \<^descr> @{attribute tagged}~@{text "name value"} and @{attribute
-  untagged}~@{text name} add and remove \<^emph>\<open>tags\<close> of some theorem.
+  \<^descr> @{attribute tagged}~\<open>name value\<close> and @{attribute
+  untagged}~\<open>name\<close> add and remove \<^emph>\<open>tags\<close> of some theorem.
   Tags may be any list of string pairs that serve as formal comment.
   The first string is considered the tag name, the second its value.
   Note that @{attribute untagged} removes any tags of the same name.
 
-  \<^descr> @{attribute THEN}~@{text a} composes rules by resolution; it
-  resolves with the first premise of @{text a} (an alternative
+  \<^descr> @{attribute THEN}~\<open>a\<close> composes rules by resolution; it
+  resolves with the first premise of \<open>a\<close> (an alternative
   position may be also specified).  See also @{ML_op "RS"} in
   @{cite "isabelle-implementation"}.
   
-  \<^descr> @{attribute unfolded}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} and @{attribute
-  folded}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} expand and fold back again the given
+  \<^descr> @{attribute unfolded}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> and @{attribute
+  folded}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> expand and fold back again the given
   definitions throughout a rule.
 
   \<^descr> @{attribute abs_def} turns an equation of the form @{prop "f x
@@ -168,8 +165,8 @@
   simp} or @{method unfold} steps always expand it.  This also works
   for object-logic equality.
 
-  \<^descr> @{attribute rotated}~@{text n} rotate the premises of a
-  theorem by @{text n} (default 1).
+  \<^descr> @{attribute rotated}~\<open>n\<close> rotates the premises of a
+  theorem by \<open>n\<close> (default 1).
 
   \<^descr> @{attribute (Pure) elim_format} turns a destruction rule into
   elimination rule format, by resolving with the rule @{prop "PROP A \<Longrightarrow>
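  For illustration, a hypothetical constant \<open>double\<close> shows
  @{attribute unfolded} and @{attribute rotated} at work (assuming the
  standard HOL fact \<open>mult_2\<close>: \<open>2 * z = z + z\<close>):

    definition double :: "nat \<Rightarrow> nat" where "double n = n + n"

    lemma double_eq: "double n = 2 * n"
      by (simp add: double_def mult_2)

    thm double_eq [unfolded double_def]   (* ?n + ?n = 2 * ?n *)
    thm conjI [rotated]                   (* ?Q \<Longrightarrow> ?P \<Longrightarrow> ?P \<and> ?Q *)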
@@ -187,9 +184,9 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def subst} & : & @{text method} \\
-    @{method_def hypsubst} & : & @{text method} \\
-    @{method_def split} & : & @{text method} \\
+    @{method_def subst} & : & \<open>method\<close> \\
+    @{method_def hypsubst} & : & \<open>method\<close> \\
+    @{method_def split} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -205,38 +202,37 @@
   provide the canonical way for automated normalization (see
   \secref{sec:simplifier}).
 
-  \<^descr> @{method subst}~@{text eq} performs a single substitution step
-  using rule @{text eq}, which may be either a meta or object
+  \<^descr> @{method subst}~\<open>eq\<close> performs a single substitution step
+  using rule \<open>eq\<close>, which may be either a meta or object
   equality.
 
-  \<^descr> @{method subst}~@{text "(asm) eq"} substitutes in an
+  \<^descr> @{method subst}~\<open>(asm) eq\<close> substitutes in an
   assumption.
 
-  \<^descr> @{method subst}~@{text "(i \<dots> j) eq"} performs several
-  substitutions in the conclusion. The numbers @{text i} to @{text j}
+  \<^descr> @{method subst}~\<open>(i \<dots> j) eq\<close> performs several
+  substitutions in the conclusion. The numbers \<open>i\<close> to \<open>j\<close>
   indicate the positions to substitute at.  Positions are ordered from
   the top of the term tree moving down from left to right. For
-  example, in @{text "(a + b) + (c + d)"} there are three positions
-  where commutativity of @{text "+"} is applicable: 1 refers to @{text
-  "a + b"}, 2 to the whole term, and 3 to @{text "c + d"}.
+  example, in \<open>(a + b) + (c + d)\<close> there are three positions
+  where commutativity of \<open>+\<close> is applicable: 1 refers to \<open>a + b\<close>, 2 to the whole term, and 3 to \<open>c + d\<close>.
 
-  If the positions in the list @{text "(i \<dots> j)"} are non-overlapping
-  (e.g.\ @{text "(2 3)"} in @{text "(a + b) + (c + d)"}) you may
+  If the positions in the list \<open>(i \<dots> j)\<close> are non-overlapping
+  (e.g.\ \<open>(2 3)\<close> in \<open>(a + b) + (c + d)\<close>) you may
   assume all substitutions are performed simultaneously.  Otherwise
-  the behaviour of @{text subst} is not specified.
+  the behaviour of \<open>subst\<close> is not specified.
 
-  \<^descr> @{method subst}~@{text "(asm) (i \<dots> j) eq"} performs the
+  \<^descr> @{method subst}~\<open>(asm) (i \<dots> j) eq\<close> performs the
   substitutions in the assumptions. The positions refer to the
   assumptions in order from left to right.  For example, given in a
-  goal of the form @{text "P (a + b) \<Longrightarrow> P (c + d) \<Longrightarrow> \<dots>"}, position 1 of
-  commutativity of @{text "+"} is the subterm @{text "a + b"} and
-  position 2 is the subterm @{text "c + d"}.
+  goal of the form \<open>P (a + b) \<Longrightarrow> P (c + d) \<Longrightarrow> \<dots>\<close>, position 1 of
+  commutativity of \<open>+\<close> is the subterm \<open>a + b\<close> and
+  position 2 is the subterm \<open>c + d\<close>.
 
   \<^descr> @{method hypsubst} performs substitution using some
-  assumption; this only works for equations of the form @{text "x =
-  t"} where @{text x} is a free or bound variable.
+  assumption; this only works for equations of the form \<open>x =
+  t\<close> where \<open>x\<close> is a free or bound variable.
 
-  \<^descr> @{method split}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} performs single-step case
+  \<^descr> @{method split}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> performs single-step case
   splitting using the given rules.  Splitting is performed in the
   conclusion or some assumption of the subgoal, depending on the
   structure of the rule.
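  A one-step sketch of @{method subst} with the standard commutativity
  fact \<open>add.commute\<close>:

    lemma "a + b = b + (a::nat)"
      by (subst add.commute) (rule refl)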
@@ -274,9 +270,9 @@
 
 text \<open>
   \begin{tabular}{rcll}
-    @{method_def simp} & : & @{text method} \\
-    @{method_def simp_all} & : & @{text method} \\
-    @{attribute_def simp_depth_limit} & : & @{text attribute} & default @{text 100} \\
+    @{method_def simp} & : & \<open>method\<close> \\
+    @{method_def simp_all} & : & \<open>method\<close> \\
+    @{attribute_def simp_depth_limit} & : & \<open>attribute\<close> & default \<open>100\<close> \\
   \end{tabular}
   \<^medskip>
 
@@ -292,30 +288,28 @@
 
   \<^descr> @{method simp} invokes the Simplifier on the first subgoal,
   after inserting chained facts as additional goal premises; further
-  rule declarations may be included via @{text "(simp add: facts)"}.
+  rule declarations may be included via \<open>(simp add: facts)\<close>.
   The proof method fails if the subgoal remains unchanged after
   simplification.
 
   Note that the original goal premises and chained facts are subject
-  to simplification themselves, while declarations via @{text
-  "add"}/@{text "del"} merely follow the policies of the object-logic
+  to simplification themselves, while declarations via \<open>add\<close>/\<open>del\<close> merely follow the policies of the object-logic
   to extract rewrite rules from theorems, without further
   simplification.  This may lead to slightly different behavior in
   either case, which in some boundary situations may be exactly what
   is required to perform the intended simplification step!
 
   \<^medskip>
-  The @{text only} modifier first removes all other rewrite
+  The \<open>only\<close> modifier first removes all other rewrite
   rules, looper tactics (including split rules), congruence rules, and
-  then behaves like @{text add}.  Implicit solvers remain, which means
-  that trivial rules like reflexivity or introduction of @{text
-  "True"} are available to solve the simplified subgoals, but also
+  then behaves like \<open>add\<close>.  Implicit solvers remain, which means
+  that trivial rules like reflexivity or introduction of \<open>True\<close> are available to solve the simplified subgoals, but also
   non-trivial tools like linear arithmetic in HOL.  The latter may
   lead to some surprise about the meaning of ``only'' in Isabelle/HOL
   compared to English!
 
   \<^medskip>
-  The @{text split} modifiers add or delete rules for the
+  The \<open>split\<close> modifiers add or delete rules for the
   Splitter (see also \secref{sec:simp-strategies} on the looper).
   This works only if the Simplifier method has been properly setup to
   include the Splitter (all major object logics such as HOL, HOLCF, FOL,
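  For example, \<open>only\<close> restricted to a single rule (HOL's
  \<open>if_cancel\<close>):

    lemma "(if c then x else x) = x"
      by (simp only: if_cancel)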
@@ -323,11 +317,11 @@
 
   There is also a separate @{method_ref split} method available for
   single-step case splitting.  The effect of repeatedly applying
-  @{text "(split thms)"} can be imitated by ``@{text "(simp only:
-  split: thms)"}''.
+  \<open>(split thms)\<close> can be imitated by ``\<open>(simp only:
+  split: thms)\<close>''.
 
   \<^medskip>
-  The @{text cong} modifiers add or delete Simplifier
+  The \<open>cong\<close> modifiers add or delete Simplifier
   congruence rules (see also \secref{sec:simp-rules}); the default is
   to add.
 
@@ -362,22 +356,22 @@
   \hline
   Isar method & ML tactic & behavior \\\hline
 
-  @{text "(simp (no_asm))"} & @{ML simp_tac} & assumptions are ignored
+  \<open>(simp (no_asm))\<close> & @{ML simp_tac} & assumptions are ignored
   completely \\\hline
 
-  @{text "(simp (no_asm_simp))"} & @{ML asm_simp_tac} & assumptions
+  \<open>(simp (no_asm_simp))\<close> & @{ML asm_simp_tac} & assumptions
   are used in the simplification of the conclusion but are not
   themselves simplified \\\hline
 
-  @{text "(simp (no_asm_use))"} & @{ML full_simp_tac} & assumptions
+  \<open>(simp (no_asm_use))\<close> & @{ML full_simp_tac} & assumptions
   are simplified but are not used in the simplification of each other
   or the conclusion \\\hline
 
-  @{text "(simp)"} & @{ML asm_full_simp_tac} & assumptions are used in
+  \<open>(simp)\<close> & @{ML asm_full_simp_tac} & assumptions are used in
   the simplification of the conclusion and to simplify other
   assumptions \\\hline
 
-  @{text "(simp (asm_lr))"} & @{ML asm_lr_simp_tac} & compatibility
+  \<open>(simp (asm_lr))\<close> & @{ML asm_lr_simp_tac} & compatibility
   mode: an assumption is only used for simplifying assumptions which
   are to the right of it \\\hline
 
@@ -423,8 +417,8 @@
 
   In the next example the malicious assumption @{prop "\<And>x::nat. f x =
   g (f (g x))"} does not contribute to solve the problem, but makes
-  the default @{method simp} method loop: the rewrite rule @{text "f
-  ?x \<equiv> g (f (g ?x))"} extracted from the assumption does not
+  the default @{method simp} method loop: the rewrite rule \<open>f
+  ?x \<equiv> g (f (g ?x))\<close> extracted from the assumption does not
   terminate.  The Simplifier notices certain simple forms of
   nontermination, but not this one.  The problem can be solved
   nonetheless, by ignoring assumptions via special options as
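  The kind of example meant here is a sketch along these lines, where
  \<open>(no_asm)\<close> makes @{method simp} ignore the looping assumption:

    lemma "\<And>x. f x = g (f (g x)) \<Longrightarrow> f 0 = f 0 + (0::nat)"
      by (simp (no_asm))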
@@ -460,9 +454,9 @@
   @{method simp} method applied to @{prop "P (f x) \<Longrightarrow> y = x \<Longrightarrow> f x = f y
   \<Longrightarrow> Q"} gives rise to the infinite reduction sequence
   \[
-  @{text "P (f x)"} \stackrel{@{text "f x \<equiv> f y"}}{\longmapsto}
-  @{text "P (f y)"} \stackrel{@{text "y \<equiv> x"}}{\longmapsto}
-  @{text "P (f x)"} \stackrel{@{text "f x \<equiv> f y"}}{\longmapsto} \cdots
+  \<open>P (f x)\<close> \stackrel{\<open>f x \<equiv> f y\<close>}{\longmapsto}
+  \<open>P (f y)\<close> \stackrel{\<open>y \<equiv> x\<close>}{\longmapsto}
+  \<open>P (f x)\<close> \stackrel{\<open>f x \<equiv> f y\<close>}{\longmapsto} \cdots
   \]
   whereas applying the same to @{prop "y = x \<Longrightarrow> f x = f y \<Longrightarrow> P (f x) \<Longrightarrow>
   Q"} terminates (without solving the goal):
@@ -482,10 +476,10 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{attribute_def simp} & : & @{text attribute} \\
-    @{attribute_def split} & : & @{text attribute} \\
-    @{attribute_def cong} & : & @{text attribute} \\
-    @{command_def "print_simpset"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
+    @{attribute_def simp} & : & \<open>attribute\<close> \\
+    @{attribute_def split} & : & \<open>attribute\<close> \\
+    @{attribute_def cong} & : & \<open>attribute\<close> \\
+    @{command_def "print_simpset"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -500,55 +494,54 @@
   Rewrite rules are theorems expressing some form of equality, for
   example:
 
-  @{text "Suc ?m + ?n = ?m + Suc ?n"} \\
-  @{text "?P \<and> ?P \<longleftrightarrow> ?P"} \\
-  @{text "?A \<union> ?B \<equiv> {x. x \<in> ?A \<or> x \<in> ?B}"}
+  \<open>Suc ?m + ?n = ?m + Suc ?n\<close> \\
+  \<open>?P \<and> ?P \<longleftrightarrow> ?P\<close> \\
+  \<open>?A \<union> ?B \<equiv> {x. x \<in> ?A \<or> x \<in> ?B}\<close>
 
   \<^medskip>
-  Conditional rewrites such as @{text "?m < ?n \<Longrightarrow> ?m div ?n = 0"} are
+  Conditional rewrites such as \<open>?m < ?n \<Longrightarrow> ?m div ?n = 0\<close> are
   also permitted; the conditions can be arbitrary formulas.
 
   \<^medskip>
   Internally, all rewrite rules are translated into Pure
-  equalities, theorems with conclusion @{text "lhs \<equiv> rhs"}. The
+  equalities, theorems with conclusion \<open>lhs \<equiv> rhs\<close>. The
   simpset contains a function for extracting equalities from arbitrary
   theorems, which is usually installed when the object-logic is
-  configured initially. For example, @{text "\<not> ?x \<in> {}"} could be
-  turned into @{text "?x \<in> {} \<equiv> False"}. Theorems that are declared as
+  configured initially. For example, \<open>\<not> ?x \<in> {}\<close> could be
+  turned into \<open>?x \<in> {} \<equiv> False\<close>. Theorems that are declared as
   @{attribute simp} and local assumptions within a goal are treated
   uniformly in this respect.
 
-  The Simplifier accepts the following formats for the @{text "lhs"}
+  The Simplifier accepts the following formats for the \<open>lhs\<close>
   term:
 
     \<^enum> First-order patterns, considering the sublanguage of
     application of constant operators to variable operands, without
-    @{text "\<lambda>"}-abstractions or functional variables.
+    \<open>\<lambda>\<close>-abstractions or functional variables.
     For example:
 
-    @{text "(?x + ?y) + ?z \<equiv> ?x + (?y + ?z)"} \\
-    @{text "f (f ?x ?y) ?z \<equiv> f ?x (f ?y ?z)"}
+    \<open>(?x + ?y) + ?z \<equiv> ?x + (?y + ?z)\<close> \\
+    \<open>f (f ?x ?y) ?z \<equiv> f ?x (f ?y ?z)\<close>
 
     \<^enum> Higher-order patterns in the sense of @{cite "nipkow-patterns"}.
-    These are terms in @{text "\<beta>"}-normal form (this will always be the
+    These are terms in \<open>\<beta>\<close>-normal form (this will always be the
     case unless you have done something strange) where each occurrence
-    of an unknown is of the form @{text "?F x\<^sub>1 \<dots> x\<^sub>n"}, where the
-    @{text "x\<^sub>i"} are distinct bound variables.
+    of an unknown is of the form \<open>?F x\<^sub>1 \<dots> x\<^sub>n\<close>, where the
+    \<open>x\<^sub>i\<close> are distinct bound variables.
 
-    For example, @{text "(\<forall>x. ?P x \<and> ?Q x) \<equiv> (\<forall>x. ?P x) \<and> (\<forall>x. ?Q x)"}
-    or its symmetric form, since the @{text "rhs"} is also a
+    For example, \<open>(\<forall>x. ?P x \<and> ?Q x) \<equiv> (\<forall>x. ?P x) \<and> (\<forall>x. ?Q x)\<close>
+    or its symmetric form, since the \<open>rhs\<close> is also a
     higher-order pattern.
 
-    \<^enum> Physical first-order patterns over raw @{text "\<lambda>"}-term
-    structure without @{text "\<alpha>\<beta>\<eta>"}-equality; abstractions and bound
+    \<^enum> Physical first-order patterns over raw \<open>\<lambda>\<close>-term
+    structure without \<open>\<alpha>\<beta>\<eta>\<close>-equality; abstractions and bound
     variables are treated like quasi-constant term material.
 
-    For example, the rule @{text "?f ?x \<in> range ?f = True"} rewrites the
-    term @{text "g a \<in> range g"} to @{text "True"}, but will fail to
-    match @{text "g (h b) \<in> range (\<lambda>x. g (h x))"}. However, offending
-    subterms (in our case @{text "?f ?x"}, which is not a pattern) can
-    be replaced by adding new variables and conditions like this: @{text
-    "?y = ?f ?x \<Longrightarrow> ?y \<in> range ?f = True"} is acceptable as a conditional
+    For example, the rule \<open>?f ?x \<in> range ?f = True\<close> rewrites the
+    term \<open>g a \<in> range g\<close> to \<open>True\<close>, but will fail to
+    match \<open>g (h b) \<in> range (\<lambda>x. g (h x))\<close>. However, offending
+    subterms (in our case \<open>?f ?x\<close>, which is not a pattern) can
+    be replaced by adding new variables and conditions like this: \<open>?y = ?f ?x \<Longrightarrow> ?y \<in> range ?f = True\<close> is acceptable as a conditional
     rewrite rule of the second category since conditions can be
     arbitrary terms.
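  The conditional variant just described can be stated and proved
  directly (a sketch; the proof by @{method auto} is an assumption of
  this illustration):

    lemma "y = f x \<Longrightarrow> (y \<in> range f) = True"
      by auto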
 
@@ -560,15 +553,15 @@
   Congruence rules are equalities of the form @{text [display]
   "\<dots> \<Longrightarrow> f ?x\<^sub>1 \<dots> ?x\<^sub>n = f ?y\<^sub>1 \<dots> ?y\<^sub>n"}
 
-  This controls the simplification of the arguments of @{text f}.  For
+  This controls the simplification of the arguments of \<open>f\<close>.  For
   example, some arguments can be simplified under additional
   assumptions: @{text [display] "?P\<^sub>1 \<longleftrightarrow> ?Q\<^sub>1 \<Longrightarrow> (?Q\<^sub>1 \<Longrightarrow> ?P\<^sub>2 \<longleftrightarrow> ?Q\<^sub>2) \<Longrightarrow>
   (?P\<^sub>1 \<longrightarrow> ?P\<^sub>2) \<longleftrightarrow> (?Q\<^sub>1 \<longrightarrow> ?Q\<^sub>2)"}
 
-  Given this rule, the Simplifier assumes @{text "?Q\<^sub>1"} and extracts
-  rewrite rules from it when simplifying @{text "?P\<^sub>2"}.  Such local
-  assumptions are effective for rewriting formulae such as @{text "x =
-  0 \<longrightarrow> y + x = y"}.
+  Given this rule, the Simplifier assumes \<open>?Q\<^sub>1\<close> and extracts
+  rewrite rules from it when simplifying \<open>?P\<^sub>2\<close>.  Such local
+  assumptions are effective for rewriting formulae such as \<open>x =
+  0 \<longrightarrow> y + x = y\<close>.
 
   %FIXME
   %The local assumptions are also provided as theorems to the solver;
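  Indeed the formula just mentioned simplifies outright in
  Isabelle/HOL:

    lemma "x = 0 \<longrightarrow> y + x = (y::nat)"
      by simp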
@@ -593,11 +586,11 @@
 
   Only the first argument is simplified; the others remain unchanged.
   This can make simplification much faster, but may require an extra
-  case split over the condition @{text "?q"} to prove the goal.
+  case split over the condition \<open>?q\<close> to prove the goal.
 
   \<^descr> @{command "print_simpset"} prints the collection of rules declared
   to the Simplifier, which is also known as ``simpset'' internally; the
-  ``@{text "!"}'' option indicates extra verbosity.
+  ``\<open>!\<close>'' option indicates extra verbosity.
 
   For historical reasons, simpsets may occur independently from the
   current context, but are conceptually dependent on it.  When the
@@ -625,7 +618,7 @@
   by explicitly adding or deleting theorems as simplification rules,
   or installing other tools via simplification procedures
   (\secref{sec:simproc}).  Good simpsets are hard to design.  Rules
-  that obviously simplify, like @{text "?n + 0 \<equiv> ?n"} are good
+  that obviously simplify, like \<open>?n + 0 \<equiv> ?n\<close> are good
   candidates for the implicit simpset, unless a special
   non-normalizing behavior of certain operations is intended.  More
   specific rules (such as distributive laws, which duplicate subterms)
@@ -649,17 +642,17 @@
 
 text \<open>A rewrite rule is \<^emph>\<open>permutative\<close> if the left-hand side and
   right-hand side are equal up to renaming of variables.  The most
-  common permutative rule is commutativity: @{text "?x + ?y = ?y +
-  ?x"}.  Other examples include @{text "(?x - ?y) - ?z = (?x - ?z) -
-  ?y"} in arithmetic and @{text "insert ?x (insert ?y ?A) = insert ?y
-  (insert ?x ?A)"} for sets.  Such rules are common enough to merit
+  common permutative rule is commutativity: \<open>?x + ?y = ?y +
+  ?x\<close>.  Other examples include \<open>(?x - ?y) - ?z = (?x - ?z) -
+  ?y\<close> in arithmetic and \<open>insert ?x (insert ?y ?A) = insert ?y
+  (insert ?x ?A)\<close> for sets.  Such rules are common enough to merit
   special attention.
 
   Because ordinary rewriting loops given such rules, the Simplifier
   employs a special strategy, called \<^emph>\<open>ordered rewriting\<close>.
   Permutative rules are detected and only applied if the rewriting
   step decreases the redex wrt.\ a given term ordering.  For example,
-  commutativity rewrites @{text "b + a"} to @{text "a + b"}, but then
+  commutativity rewrites \<open>b + a\<close> to \<open>a + b\<close>, but then
   stops, because the redex cannot be decreased further in the sense of
   the term ordering.
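  For instance, the Simplifier terminates on plain commutativity
  thanks to ordered rewriting:

    lemma "b + a = a + (b::nat)"
      by (simp add: add.commute)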
 
@@ -678,17 +671,17 @@
 
 text \<open>Ordered rewriting is particularly effective in the case of
   associative-commutative operators.  (Associativity by itself is not
-  permutative.)  When dealing with an AC-operator @{text "f"}, keep
+  permutative.)  When dealing with an AC-operator \<open>f\<close>, keep
   the following points in mind:
 
   \<^item> The associative law must always be oriented from left to
-  right, namely @{text "f (f x y) z = f x (f y z)"}.  The opposite
+  right, namely \<open>f (f x y) z = f x (f y z)\<close>.  The opposite
   orientation, if used with commutativity, leads to looping in
   conjunction with the standard term order.
 
   \<^item> To complete your set of rewrite rules, you must add not just
   associativity (A) and commutativity (C) but also a derived rule
-  \<^emph>\<open>left-commutativity\<close> (LC): @{text "f x (f y z) = f y (f x z)"}.
+  \<^emph>\<open>left-commutativity\<close> (LC): \<open>f x (f y z) = f y (f x z)\<close>.
 
 
   Ordered rewriting with the combination of A, C, and LC sorts a term
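  In Isabelle/HOL the A, C, LC rules for \<open>+\<close> are collected in the
  fact collection \<open>ac_simps\<close>, so a sketch of AC normalization reads:

    lemma "b + (c + a) = a + (b + (c::nat))"
      by (simp add: ac_simps)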
@@ -746,11 +739,11 @@
 
 text \<open>
   \begin{tabular}{rcll}
-    @{attribute_def simp_trace} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def simp_trace_depth_limit} & : & @{text attribute} & default @{text 1} \\
-    @{attribute_def simp_debug} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def simp_trace_new} & : & @{text attribute} \\
-    @{attribute_def simp_break} & : & @{text attribute} \\
+    @{attribute_def simp_trace} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def simp_trace_depth_limit} & : & \<open>attribute\<close> & default \<open>1\<close> \\
+    @{attribute_def simp_debug} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def simp_trace_new} & : & \<open>attribute\<close> \\
+    @{attribute_def simp_break} & : & \<open>attribute\<close> \\
   \end{tabular}
   \<^medskip>
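  A typical way to enable tracing locally, for a single proof only:

    lemma "0 + n = (n::nat)"
      using [[simp_trace, simp_trace_depth_limit = 2]]
      by simp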
 
@@ -816,14 +809,14 @@
   rules.
 
   Any successful result needs to be a (possibly conditional) rewrite
-  rule @{text "t \<equiv> u"} that is applicable to the current redex.  The
+  rule \<open>t \<equiv> u\<close> that is applicable to the current redex.  The
   rule will be applied just as any ordinary rewrite rule.  It is
   expected to be already in \<^emph>\<open>internal form\<close>, bypassing the
   automatic preprocessing of object-level equivalences.
 
   \begin{matharray}{rcl}
-    @{command_def "simproc_setup"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    simproc & : & @{text attribute} \\
+    @{command_def "simproc_setup"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    simproc & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -839,8 +832,8 @@
   given term patterns match the current redex.  The implementation,
   which is provided as ML source text, needs to be of type @{ML_type
   "morphism -> simpset -> cterm -> thm option"}, where the @{ML_type
-  cterm} represents the current redex @{text r} and the result is
-  supposed to be some proven rewrite rule @{text "r \<equiv> r'"} (or a
+  cterm} represents the current redex \<open>r\<close> and the result is
+  supposed to be some proven rewrite rule \<open>r \<equiv> r'\<close> (or a
   generalized version), or @{ML NONE} to indicate failure.  The
   @{ML_type simpset} argument holds the full context of the current
   Simplifier invocation, including the actual Isar proof context.  The
@@ -853,7 +846,7 @@
   Morphisms and identifiers are only relevant for simprocs that are
   defined within a local target context, e.g.\ in a locale.
 
-  \<^descr> @{text "simproc add: name"} and @{text "simproc del: name"}
+  \<^descr> \<open>simproc add: name\<close> and \<open>simproc del: name\<close>
   add or delete named simprocs to the current Simplifier context.  The
   default is to add a simproc.  Note that @{command "simproc_setup"}
   already adds the new simproc to the subsequent context.
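  A minimal sketch in the style of the canonical \<open>unit\<close> simproc of
  Isabelle/HOL, which rewrites any \<open>x :: unit\<close> to \<open>()\<close> (modulo
  the exact ML signature expected here):

    simproc_setup unit ("x::unit") = \<open>
      fn _ => fn _ => fn ct =>
        if HOLogic.is_unit (Thm.term_of ct) then NONE
        else SOME (mk_meta_eq @{thm unit_eq})
    \<close>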
@@ -905,14 +898,14 @@
   conditional rewrite rules or congruence rules.  The default should
   be simplification itself.  In rare situations, this strategy may
   need to be changed.  For example, if the premise of a conditional
-  rule is an instance of its conclusion, as in @{text "Suc ?m < ?n \<Longrightarrow>
-  ?m < ?n"}, the default strategy could loop.  % FIXME !??
+  rule is an instance of its conclusion, as in \<open>Suc ?m < ?n \<Longrightarrow>
+  ?m < ?n\<close>, the default strategy could loop.  % FIXME !??
 
-  \<^descr> @{ML Simplifier.set_subgoaler}~@{text "tac ctxt"} sets the
-  subgoaler of the context to @{text "tac"}.  The tactic will
+  \<^descr> @{ML Simplifier.set_subgoaler}~\<open>tac ctxt\<close> sets the
+  subgoaler of the context to \<open>tac\<close>.  The tactic will
   be applied to the context of the running Simplifier instance.
 
-  \<^descr> @{ML Simplifier.prems_of}~@{text "ctxt"} retrieves the current
+  \<^descr> @{ML Simplifier.prems_of}~\<open>ctxt\<close> retrieves the current
   set of premises from the context.  This may be non-empty only if
   the Simplifier has been told to utilize local assumptions in the
   first place (cf.\ the options in \secref{sec:simp-meth}).
@@ -948,7 +941,7 @@
 
   A solver is a tactic that attempts to solve a subgoal after
   simplification.  Its core functionality is to prove trivial subgoals
-  such as @{prop "True"} and @{text "t = t"}, but object-logics might
+  such as @{prop "True"} and \<open>t = t\<close>, but object-logics might
   be more ambitious.  For example, Isabelle/HOL performs a restricted
   version of linear arithmetic here.
 
@@ -957,8 +950,8 @@
 
   \<^medskip>
   Rewriting does not instantiate unknowns.  For example,
-  rewriting alone cannot prove @{text "a \<in> ?A"} since this requires
-  instantiating @{text "?A"}.  The solver, however, is an arbitrary
+  rewriting alone cannot prove \<open>a \<in> ?A\<close> since this requires
+  instantiating \<open>?A\<close>.  The solver, however, is an arbitrary
   tactic and may instantiate unknowns as it pleases.  This is the only
   way the Simplifier can handle a conditional rewrite rule whose
   condition contains extra variables.  When a simplification tactic is
@@ -975,23 +968,22 @@
   tactic is not totally safe: it may instantiate unknowns that appear
   also in other subgoals.
 
-  \<^descr> @{ML Simplifier.mk_solver}~@{text "name tac"} turns @{text
-  "tac"} into a solver; the @{text "name"} is only attached as a
+  \<^descr> @{ML Simplifier.mk_solver}~\<open>name tac\<close> turns \<open>tac\<close> into a solver; the \<open>name\<close> is only attached as a
   comment and has no further significance.
 
-  \<^descr> @{text "ctxt setSSolver solver"} installs @{text "solver"} as
-  the safe solver of @{text "ctxt"}.
+  \<^descr> \<open>ctxt setSSolver solver\<close> installs \<open>solver\<close> as
+  the safe solver of \<open>ctxt\<close>.
 
-  \<^descr> @{text "ctxt addSSolver solver"} adds @{text "solver"} as an
+  \<^descr> \<open>ctxt addSSolver solver\<close> adds \<open>solver\<close> as an
   additional safe solver; it will be tried after the solvers which had
-  already been present in @{text "ctxt"}.
+  already been present in \<open>ctxt\<close>.
 
-  \<^descr> @{text "ctxt setSolver solver"} installs @{text "solver"} as the
-  unsafe solver of @{text "ctxt"}.
+  \<^descr> \<open>ctxt setSolver solver\<close> installs \<open>solver\<close> as the
+  unsafe solver of \<open>ctxt\<close>.
 
-  \<^descr> @{text "ctxt addSolver solver"} adds @{text "solver"} as an
+  \<^descr> \<open>ctxt addSolver solver\<close> adds \<open>solver\<close> as an
   additional unsafe solver; it will be tried after the solvers which
-  had already been present in @{text "ctxt"}.
+  had already been present in \<open>ctxt\<close>.
 
 
   \<^medskip>
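  A sketch of adding an unsafe solver that merely tries reflexivity;
  the name \<open>refl_solver\<close> is made up, and the contemporary
  @{ML Simplifier.mk_solver} and \<open>map_theory_simpset\<close> interfaces
  are assumed:

    ML \<open>
      val refl_solver =
        Simplifier.mk_solver "try reflexivity"
          (fn ctxt => resolve_tac ctxt [@{thm refl}]);
    \<close>

    setup \<open>map_theory_simpset (fn ctxt => ctxt addSolver refl_solver)\<close>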
@@ -1009,18 +1001,18 @@
   \<^medskip>
   As explained before, the subgoaler is also used to solve
   the premises of congruence rules.  These are usually of the form
-  @{text "s = ?x"}, where @{text "s"} needs to be simplified and
-  @{text "?x"} needs to be instantiated with the result.  Typically,
+  \<open>s = ?x\<close>, where \<open>s\<close> needs to be simplified and
+  \<open>?x\<close> needs to be instantiated with the result.  Typically,
   the subgoaler will invoke the Simplifier at some point, which will
   eventually call the solver.  For this reason, solver tactics must be
-  prepared to solve goals of the form @{text "t = ?x"}, usually by
+  prepared to solve goals of the form \<open>t = ?x\<close>, usually by
   reflexivity.  In particular, reflexivity should be tried before any
   of the fancy automated proof tools.
 
   It may even happen that due to simplification the subgoal is no
-  longer an equality.  For example, @{text "False \<longleftrightarrow> ?Q"} could be
-  rewritten to @{text "\<not> ?Q"}.  To cover this case, the solver could
-  try resolving with the theorem @{text "\<not> False"} of the
+  longer an equality.  For example, \<open>False \<longleftrightarrow> ?Q\<close> could be
+  rewritten to \<open>\<not> ?Q\<close>.  To cover this case, the solver could
+  try resolving with the theorem \<open>\<not> False\<close> of the
   object-logic.
 
   \<^medskip>
@@ -1028,7 +1020,7 @@
   If a premise of a congruence rule cannot be proved, then the
   congruence is ignored.  This should only happen if the rule is
   \<^emph>\<open>conditional\<close> --- that is, contains premises not of the form
-  @{text "t = ?x"}.  Otherwise it indicates that some congruence rule,
+  \<open>t = ?x\<close>.  Otherwise it indicates that some congruence rule,
   or possibly the subgoaler or solver, is faulty.
   \end{warn}
 \<close>
@@ -1058,24 +1050,24 @@
   conditional.  Another possibility is to apply an elimination rule on
   the assumptions.  More adventurous loopers could start an induction.
 
-  \<^descr> @{text "ctxt setloop tac"} installs @{text "tac"} as the only
-  looper tactic of @{text "ctxt"}.
+  \<^descr> \<open>ctxt setloop tac\<close> installs \<open>tac\<close> as the only
+  looper tactic of \<open>ctxt\<close>.
 
-  \<^descr> @{text "ctxt addloop (name, tac)"} adds @{text "tac"} as an
-  additional looper tactic with name @{text "name"}, which is
+  \<^descr> \<open>ctxt addloop (name, tac)\<close> adds \<open>tac\<close> as an
+  additional looper tactic with name \<open>name\<close>, which is
   significant for managing the collection of loopers.  The tactic will
   be tried after the looper tactics that had already been present in
-  @{text "ctxt"}.
+  \<open>ctxt\<close>.
 
-  \<^descr> @{text "ctxt delloop name"} deletes the looper tactic that was
-  associated with @{text "name"} from @{text "ctxt"}.
+  \<^descr> \<open>ctxt delloop name\<close> deletes the looper tactic that was
+  associated with \<open>name\<close> from \<open>ctxt\<close>.
 
-  \<^descr> @{ML Splitter.add_split}~@{text "thm ctxt"} adds split tactics
-  for @{text "thm"} as additional looper tactics of @{text "ctxt"}.
+  \<^descr> @{ML Splitter.add_split}~\<open>thm ctxt\<close> adds split tactics
+  for \<open>thm\<close> as additional looper tactics of \<open>ctxt\<close>.
 
-  \<^descr> @{ML Splitter.del_split}~@{text "thm ctxt"} deletes the split
-  tactic corresponding to @{text thm} from the looper tactics of
-  @{text "ctxt"}.
+  \<^descr> @{ML Splitter.del_split}~\<open>thm ctxt\<close> deletes the split
+  tactic corresponding to \<open>thm\<close> from the looper tactics of
+  \<open>ctxt\<close>.
 
 
   The splitter replaces applications of a given function; the
@@ -1098,7 +1090,7 @@
   option.split_asm}, which split the subgoal.  The function @{ML
   Splitter.add_split} automatically takes care of which tactic to
   call, analyzing the form of the rules given as argument; it is the
-  same operation behind @{text "split"} attribute or method modifier
+  same operation behind \<open>split\<close> attribute or method modifier
   syntax in the Isar source language.
 
   Case splits should be allowed only when necessary; they are
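  For example, the @{thm [source] option.split} rule just mentioned in
  action:

    lemma "P (case opt of None \<Rightarrow> a | Some y \<Rightarrow> b y) \<longleftrightarrow>
        (opt = None \<longrightarrow> P a) \<and> (\<forall>y. opt = Some y \<longrightarrow> P (b y))"
      by (simp split: option.split)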
@@ -1119,7 +1111,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{attribute_def simplified} & : & @{text attribute} \\
+    @{attribute_def simplified} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1129,12 +1121,12 @@
     opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use') ')'
   \<close>}
 
-  \<^descr> @{attribute simplified}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} causes a theorem to
-  be simplified, either by exactly the specified rules @{text "a\<^sub>1, \<dots>,
-  a\<^sub>n"}, or the implicit Simplifier context if no arguments are given.
+  \<^descr> @{attribute simplified}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> causes a theorem to
+  be simplified, either by exactly the specified rules \<open>a\<^sub>1, \<dots>,
+  a\<^sub>n\<close>, or the implicit Simplifier context if no arguments are given.
   The result is fully simplified by default, including assumptions and
-  conclusion; the options @{text no_asm} etc.\ tune the Simplifier in
-  the same way as the for the @{text simp} method.
+  conclusion; the options \<open>no_asm\<close> etc.\ tune the Simplifier in
+  the same way as for the \<open>simp\<close> method.
 
   Note that forward simplification restricts the Simplifier to its
   most basic operation of term rewriting; solver and looper tactics
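  A tiny sketch inside a @{command notepad}:

    notepad
    begin
      assume a: "P \<and> True"
      note a [simplified]   (* results in P *)
    end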
@@ -1186,46 +1178,46 @@
   The \<^emph>\<open>sequent calculus\<close>, a generalization of natural deduction,
   is easier to automate.
 
-  A \<^bold>\<open>sequent\<close> has the form @{text "\<Gamma> \<turnstile> \<Delta>"}, where @{text "\<Gamma>"}
-  and @{text "\<Delta>"} are sets of formulae.\footnote{For first-order
+  A \<^bold>\<open>sequent\<close> has the form \<open>\<Gamma> \<turnstile> \<Delta>\<close>, where \<open>\<Gamma>\<close>
+  and \<open>\<Delta>\<close> are sets of formulae.\footnote{For first-order
   logic, sequents can equivalently be made from lists or multisets of
-  formulae.} The sequent @{text "P\<^sub>1, \<dots>, P\<^sub>m \<turnstile> Q\<^sub>1, \<dots>, Q\<^sub>n"} is
-  \<^bold>\<open>valid\<close> if @{text "P\<^sub>1 \<and> \<dots> \<and> P\<^sub>m"} implies @{text "Q\<^sub>1 \<or> \<dots> \<or>
-  Q\<^sub>n"}.  Thus @{text "P\<^sub>1, \<dots>, P\<^sub>m"} represent assumptions, each of which
-  is true, while @{text "Q\<^sub>1, \<dots>, Q\<^sub>n"} represent alternative goals.  A
+  formulae.} The sequent \<open>P\<^sub>1, \<dots>, P\<^sub>m \<turnstile> Q\<^sub>1, \<dots>, Q\<^sub>n\<close> is
+  \<^bold>\<open>valid\<close> if \<open>P\<^sub>1 \<and> \<dots> \<and> P\<^sub>m\<close> implies \<open>Q\<^sub>1 \<or> \<dots> \<or>
+  Q\<^sub>n\<close>.  Thus \<open>P\<^sub>1, \<dots>, P\<^sub>m\<close> represent assumptions, each of which
+  is true, while \<open>Q\<^sub>1, \<dots>, Q\<^sub>n\<close> represent alternative goals.  A
   sequent is \<^bold>\<open>basic\<close> if its left and right sides have a common
-  formula, as in @{text "P, Q \<turnstile> Q, R"}; basic sequents are trivially
+  formula, as in \<open>P, Q \<turnstile> Q, R\<close>; basic sequents are trivially
   valid.
 
   Sequent rules are classified as \<^bold>\<open>right\<close> or \<^bold>\<open>left\<close>,
-  indicating which side of the @{text "\<turnstile>"} symbol they operate on.
+  indicating which side of the \<open>\<turnstile>\<close> symbol they operate on.
   Rules that operate on the right side are analogous to natural
   deduction's introduction rules, and left rules are analogous to
-  elimination rules.  The sequent calculus analogue of @{text "(\<longrightarrow>I)"}
+  elimination rules.  The sequent calculus analogue of \<open>(\<longrightarrow>I)\<close>
   is the rule
   \[
-  \infer[@{text "(\<longrightarrow>R)"}]{@{text "\<Gamma> \<turnstile> \<Delta>, P \<longrightarrow> Q"}}{@{text "P, \<Gamma> \<turnstile> \<Delta>, Q"}}
+  \infer[\<open>(\<longrightarrow>R)\<close>]{\<open>\<Gamma> \<turnstile> \<Delta>, P \<longrightarrow> Q\<close>}{\<open>P, \<Gamma> \<turnstile> \<Delta>, Q\<close>}
   \]
   Applying the rule backwards, this breaks down some implication on
-  the right side of a sequent; @{text "\<Gamma>"} and @{text "\<Delta>"} stand for
+  the right side of a sequent; \<open>\<Gamma>\<close> and \<open>\<Delta>\<close> stand for
   the sets of formulae that are unaffected by the inference.  The
-  analogue of the pair @{text "(\<or>I1)"} and @{text "(\<or>I2)"} is the
+  analogue of the pair \<open>(\<or>I1)\<close> and \<open>(\<or>I2)\<close> is the
   single rule
   \[
-  \infer[@{text "(\<or>R)"}]{@{text "\<Gamma> \<turnstile> \<Delta>, P \<or> Q"}}{@{text "\<Gamma> \<turnstile> \<Delta>, P, Q"}}
+  \infer[\<open>(\<or>R)\<close>]{\<open>\<Gamma> \<turnstile> \<Delta>, P \<or> Q\<close>}{\<open>\<Gamma> \<turnstile> \<Delta>, P, Q\<close>}
   \]
   This breaks down some disjunction on the right side, replacing it by
   both disjuncts.  Thus, the sequent calculus is a kind of
   multiple-conclusion logic.
 
   To illustrate the use of multiple formulae on the right, let us
-  prove the classical theorem @{text "(P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)"}.  Working
+  prove the classical theorem \<open>(P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)\<close>.  Working
   backwards, we reduce this formula to a basic sequent:
   \[
-  \infer[@{text "(\<or>R)"}]{@{text "\<turnstile> (P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)"}}
-    {\infer[@{text "(\<longrightarrow>R)"}]{@{text "\<turnstile> (P \<longrightarrow> Q), (Q \<longrightarrow> P)"}}
-      {\infer[@{text "(\<longrightarrow>R)"}]{@{text "P \<turnstile> Q, (Q \<longrightarrow> P)"}}
-        {@{text "P, Q \<turnstile> Q, P"}}}}
+  \infer[\<open>(\<or>R)\<close>]{\<open>\<turnstile> (P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)\<close>}
+    {\infer[\<open>(\<longrightarrow>R)\<close>]{\<open>\<turnstile> (P \<longrightarrow> Q), (Q \<longrightarrow> P)\<close>}
+      {\infer[\<open>(\<longrightarrow>R)\<close>]{\<open>P \<turnstile> Q, (Q \<longrightarrow> P)\<close>}
+        {\<open>P, Q \<turnstile> Q, P\<close>}}}
   \]
 
   This example is typical of the sequent calculus: start with the
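  In Isabelle/HOL, this classical theorem is settled in one step by
  the Classical Reasoner:

    lemma "(P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)"
      by blast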
@@ -1241,36 +1233,35 @@
 text \<open>Isabelle can represent sequents directly, as in the
   object-logic LK.  But natural deduction is easier to work with, and
   most object-logics employ it.  Fortunately, we can simulate the
-  sequent @{text "P\<^sub>1, \<dots>, P\<^sub>m \<turnstile> Q\<^sub>1, \<dots>, Q\<^sub>n"} by the Isabelle formula
-  @{text "P\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> P\<^sub>m \<Longrightarrow> \<not> Q\<^sub>2 \<Longrightarrow> ... \<Longrightarrow> \<not> Q\<^sub>n \<Longrightarrow> Q\<^sub>1"} where the order of
-  the assumptions and the choice of @{text "Q\<^sub>1"} are arbitrary.
+  sequent \<open>P\<^sub>1, \<dots>, P\<^sub>m \<turnstile> Q\<^sub>1, \<dots>, Q\<^sub>n\<close> by the Isabelle formula
+  \<open>P\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> P\<^sub>m \<Longrightarrow> \<not> Q\<^sub>2 \<Longrightarrow> \<dots> \<Longrightarrow> \<not> Q\<^sub>n \<Longrightarrow> Q\<^sub>1\<close> where the order of
+  the assumptions and the choice of \<open>Q\<^sub>1\<close> are arbitrary.
   Elim-resolution plays a key role in simulating sequent proofs.
 
   We can easily handle reasoning on the left.  Elim-resolution with
-  the rules @{text "(\<or>E)"}, @{text "(\<bottom>E)"} and @{text "(\<exists>E)"} achieves
+  the rules \<open>(\<or>E)\<close>, \<open>(\<bottom>E)\<close> and \<open>(\<exists>E)\<close> achieves
   a similar effect as the corresponding sequent rules.  For the other
   connectives, we use sequent-style elimination rules instead of
-  destruction rules such as @{text "(\<and>E1, 2)"} and @{text "(\<forall>E)"}.
-  But note that the rule @{text "(\<not>L)"} has no effect under our
+  destruction rules such as \<open>(\<and>E1, 2)\<close> and \<open>(\<forall>E)\<close>.
+  But note that the rule \<open>(\<not>L)\<close> has no effect under our
   representation of sequents!
   \[
-  \infer[@{text "(\<not>L)"}]{@{text "\<not> P, \<Gamma> \<turnstile> \<Delta>"}}{@{text "\<Gamma> \<turnstile> \<Delta>, P"}}
+  \infer[\<open>(\<not>L)\<close>]{\<open>\<not> P, \<Gamma> \<turnstile> \<Delta>\<close>}{\<open>\<Gamma> \<turnstile> \<Delta>, P\<close>}
   \]
 
   What about reasoning on the right?  Introduction rules can only
-  affect the formula in the conclusion, namely @{text "Q\<^sub>1"}.  The
+  affect the formula in the conclusion, namely \<open>Q\<^sub>1\<close>.  The
   other right-side formulae are represented as negated assumptions,
-  @{text "\<not> Q\<^sub>2, \<dots>, \<not> Q\<^sub>n"}.  In order to operate on one of these, it
-  must first be exchanged with @{text "Q\<^sub>1"}.  Elim-resolution with the
-  @{text swap} rule has this effect: @{text "\<not> P \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> R"}
+  \<open>\<not> Q\<^sub>2, \<dots>, \<not> Q\<^sub>n\<close>.  In order to operate on one of these, it
+  must first be exchanged with \<open>Q\<^sub>1\<close>.  Elim-resolution with the
+  \<open>swap\<close> rule has this effect: \<open>\<not> P \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> R\<close>
 
   To ensure that swaps occur only when necessary, each introduction
   rule is converted into a swapped form: it is resolved with the
-  second premise of @{text "(swap)"}.  The swapped form of @{text
-  "(\<and>I)"}, which might be called @{text "(\<not>\<and>E)"}, is
+  second premise of \<open>(swap)\<close>.  The swapped form of \<open>(\<and>I)\<close>, which might be called \<open>(\<not>\<and>E)\<close>, is
   @{text [display] "\<not> (P \<and> Q) \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> (\<not> R \<Longrightarrow> Q) \<Longrightarrow> R"}
 
-  Similarly, the swapped form of @{text "(\<longrightarrow>I)"} is
+  Similarly, the swapped form of \<open>(\<longrightarrow>I)\<close> is
   @{text [display] "\<not> (P \<longrightarrow> Q) \<Longrightarrow> (\<not> R \<Longrightarrow> P \<Longrightarrow> Q) \<Longrightarrow> R"}
 
   Swapped introduction rules are applied using elim-resolution, which
@@ -1284,45 +1275,45 @@
 
 subsubsection \<open>Extra rules for the sequent calculus\<close>
 
-text \<open>As mentioned, destruction rules such as @{text "(\<and>E1, 2)"} and
-  @{text "(\<forall>E)"} must be replaced by sequent-style elimination rules.
+text \<open>As mentioned, destruction rules such as \<open>(\<and>E1, 2)\<close> and
+  \<open>(\<forall>E)\<close> must be replaced by sequent-style elimination rules.
   In addition, we need rules to embody the classical equivalence
-  between @{text "P \<longrightarrow> Q"} and @{text "\<not> P \<or> Q"}.  The introduction
-  rules @{text "(\<or>I1, 2)"} are replaced by a rule that simulates
-  @{text "(\<or>R)"}: @{text [display] "(\<not> Q \<Longrightarrow> P) \<Longrightarrow> P \<or> Q"}
+  between \<open>P \<longrightarrow> Q\<close> and \<open>\<not> P \<or> Q\<close>.  The introduction
+  rules \<open>(\<or>I1, 2)\<close> are replaced by a rule that simulates
+  \<open>(\<or>R)\<close>: @{text [display] "(\<not> Q \<Longrightarrow> P) \<Longrightarrow> P \<or> Q"}
 
-  The destruction rule @{text "(\<longrightarrow>E)"} is replaced by @{text [display]
+  The destruction rule \<open>(\<longrightarrow>E)\<close> is replaced by @{text [display]
   "(P \<longrightarrow> Q) \<Longrightarrow> (\<not> P \<Longrightarrow> R) \<Longrightarrow> (Q \<Longrightarrow> R) \<Longrightarrow> R"}
 
   Quantifier replication also requires special rules.  In classical
-  logic, @{text "\<exists>x. P x"} is equivalent to @{text "\<not> (\<forall>x. \<not> P x)"};
-  the rules @{text "(\<exists>R)"} and @{text "(\<forall>L)"} are dual:
+  logic, \<open>\<exists>x. P x\<close> is equivalent to \<open>\<not> (\<forall>x. \<not> P x)\<close>;
+  the rules \<open>(\<exists>R)\<close> and \<open>(\<forall>L)\<close> are dual:
   \[
-  \infer[@{text "(\<exists>R)"}]{@{text "\<Gamma> \<turnstile> \<Delta>, \<exists>x. P x"}}{@{text "\<Gamma> \<turnstile> \<Delta>, \<exists>x. P x, P t"}}
+  \infer[\<open>(\<exists>R)\<close>]{\<open>\<Gamma> \<turnstile> \<Delta>, \<exists>x. P x\<close>}{\<open>\<Gamma> \<turnstile> \<Delta>, \<exists>x. P x, P t\<close>}
   \qquad
-  \infer[@{text "(\<forall>L)"}]{@{text "\<forall>x. P x, \<Gamma> \<turnstile> \<Delta>"}}{@{text "P t, \<forall>x. P x, \<Gamma> \<turnstile> \<Delta>"}}
+  \infer[\<open>(\<forall>L)\<close>]{\<open>\<forall>x. P x, \<Gamma> \<turnstile> \<Delta>\<close>}{\<open>P t, \<forall>x. P x, \<Gamma> \<turnstile> \<Delta>\<close>}
   \]
   Thus both kinds of quantifier may be replicated.  Theorems requiring
   multiple uses of a universal formula are easy to invent; consider
   @{text [display] "(\<forall>x. P x \<longrightarrow> P (f x)) \<and> P a \<longrightarrow> P (f\<^sup>n a)"} for any
-  @{text "n > 1"}.  Natural examples of the multiple use of an
-  existential formula are rare; a standard one is @{text "\<exists>x. \<forall>y. P x
-  \<longrightarrow> P y"}.
+  \<open>n > 1\<close>.  Natural examples of the multiple use of an
+  existential formula are rare; a standard one is \<open>\<exists>x. \<forall>y. P x
+  \<longrightarrow> P y\<close>.
 
   Forgoing quantifier replication loses completeness, but gains
   decidability, since the search space becomes finite.  Many useful
   theorems can be proved without replication, and the search generally
   delivers its verdict in a reasonable time.  To adopt this approach,
-  represent the sequent rules @{text "(\<exists>R)"}, @{text "(\<exists>L)"} and
-  @{text "(\<forall>R)"} by @{text "(\<exists>I)"}, @{text "(\<exists>E)"} and @{text "(\<forall>I)"},
-  respectively, and put @{text "(\<forall>E)"} into elimination form: @{text
+  represent the sequent rules \<open>(\<exists>R)\<close>, \<open>(\<exists>L)\<close> and
+  \<open>(\<forall>R)\<close> by \<open>(\<exists>I)\<close>, \<open>(\<exists>E)\<close> and \<open>(\<forall>I)\<close>,
+  respectively, and put \<open>(\<forall>E)\<close> into elimination form: @{text
   [display] "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> Q) \<Longrightarrow> Q"}
 
   Elim-resolution with this rule will delete the universal formula
   after a single use.  To replicate universal quantifiers, replace the
   rule by @{text [display] "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> \<forall>x. P x \<Longrightarrow> Q) \<Longrightarrow> Q"}
 
-  To replicate existential quantifiers, replace @{text "(\<exists>I)"} by
+  To replicate existential quantifiers, replace \<open>(\<exists>I)\<close> by
   @{text [display] "(\<not> (\<exists>x. P x) \<Longrightarrow> P t) \<Longrightarrow> \<exists>x. P x"}
 
   All introduction rules mentioned above are also useful in swapped
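  Both replication examples mentioned above go through automatically,
  e.g.\ the standard multiple-use existential:

    lemma "\<exists>x. \<forall>y. P x \<longrightarrow> P y"
      by blast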
@@ -1346,22 +1337,21 @@
   while unsafe rules must be used with care.  A safe rule must never
   reduce a provable goal to an unprovable set of subgoals.
 
-  The rule @{text "P \<Longrightarrow> P \<or> Q"} is unsafe because it reduces @{text "P
-  \<or> Q"} to @{text "P"}, which might turn out as premature choice of an
+  The rule \<open>P \<Longrightarrow> P \<or> Q\<close> is unsafe because it reduces \<open>P
+  \<or> Q\<close> to \<open>P\<close>, which might turn out to be a premature choice of an
   unprovable subgoal.  Any rule is unsafe whose premises contain new
-  unknowns.  The elimination rule @{text "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> Q) \<Longrightarrow> Q"} is
+  unknowns.  The elimination rule \<open>\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> Q) \<Longrightarrow> Q\<close> is
   unsafe, since it is applied via elim-resolution, which discards the
-  assumption @{text "\<forall>x. P x"} and replaces it by the weaker
-  assumption @{text "P t"}.  The rule @{text "P t \<Longrightarrow> \<exists>x. P x"} is
-  unsafe for similar reasons.  The quantifier duplication rule @{text
-  "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> \<forall>x. P x \<Longrightarrow> Q) \<Longrightarrow> Q"} is unsafe in a different sense:
-  since it keeps the assumption @{text "\<forall>x. P x"}, it is prone to
+  assumption \<open>\<forall>x. P x\<close> and replaces it by the weaker
+  assumption \<open>P t\<close>.  The rule \<open>P t \<Longrightarrow> \<exists>x. P x\<close> is
+  unsafe for similar reasons.  The quantifier duplication rule \<open>\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> \<forall>x. P x \<Longrightarrow> Q) \<Longrightarrow> Q\<close> is unsafe in a different sense:
+  since it keeps the assumption \<open>\<forall>x. P x\<close>, it is prone to
   looping.  In classical first-order logic, all rules are safe except
   those mentioned above.
 
   The safe~/ unsafe distinction is vague, and may be regarded merely
   as a way of giving some rules priority over others.  One could argue
-  that @{text "(\<or>E)"} is unsafe, because repeated application of it
+  that \<open>(\<or>E)\<close> is unsafe, because repeated application of it
   could generate exponentially many subgoals.  Induction rules are
   unsafe because inductive proofs are difficult to set up
   automatically.  Any inference is unsafe that instantiates an unknown
@@ -1370,13 +1360,13 @@
   unknowns shared with other subgoals.
 
   \begin{matharray}{rcl}
-    @{command_def "print_claset"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{attribute_def intro} & : & @{text attribute} \\
-    @{attribute_def elim} & : & @{text attribute} \\
-    @{attribute_def dest} & : & @{text attribute} \\
-    @{attribute_def rule} & : & @{text attribute} \\
-    @{attribute_def iff} & : & @{text attribute} \\
-    @{attribute_def swapped} & : & @{text attribute} \\
+    @{command_def "print_claset"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{attribute_def intro} & : & \<open>attribute\<close> \\
+    @{attribute_def elim} & : & \<open>attribute\<close> \\
+    @{attribute_def dest} & : & \<open>attribute\<close> \\
+    @{attribute_def rule} & : & \<open>attribute\<close> \\
+    @{attribute_def iff} & : & \<open>attribute\<close> \\
+    @{attribute_def swapped} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1394,9 +1384,8 @@
   \<^descr> @{attribute intro}, @{attribute elim}, and @{attribute dest}
   declare introduction, elimination, and destruction rules,
   respectively.  By default, rules are considered as \<^emph>\<open>unsafe\<close>
-  (i.e.\ not applied blindly without backtracking), while ``@{text
-  "!"}'' classifies as \<^emph>\<open>safe\<close>.  Rule declarations marked by
-  ``@{text "?"}'' coincide with those of Isabelle/Pure, cf.\
+  (i.e.\ not applied blindly without backtracking), while ``\<open>!\<close>'' classifies as \<^emph>\<open>safe\<close>.  Rule declarations marked by
+  ``\<open>?\<close>'' coincide with those of Isabelle/Pure, cf.\
   \secref{sec:pure-meth-att} (i.e.\ are only applied in single steps
   of the @{method rule} method).  The optional natural number
   specifies an explicit weight argument, which is ignored by the
@@ -1416,7 +1405,7 @@
   added with some other classification, but the rule is added anyway
   as requested.  (A worked set of such declarations follows this list.)
 
-  \<^descr> @{attribute rule}~@{text del} deletes all occurrences of a
+  \<^descr> @{attribute rule}~\<open>del\<close> deletes all occurrences of a
   rule from the classical context, regardless of its classification as
   introduction~/ elimination~/ destruction and safe~/ unsafe.
 
@@ -1424,16 +1413,15 @@
   Simplifier and the Classical reasoner at the same time.
   Non-conditional rules result in a safe introduction and elimination
   pair; conditional ones are considered unsafe.  Rules with negative
-  conclusion are automatically inverted (using @{text "\<not>"}-elimination
+  conclusion are automatically inverted (using \<open>\<not>\<close>-elimination
   internally).
 
-  The ``@{text "?"}'' version of @{attribute iff} declares rules to
+  The ``\<open>?\<close>'' version of @{attribute iff} declares rules to
   the Isabelle/Pure context only, and omits the Simplifier
   declaration.
 
   \<^descr> @{attribute swapped} turns an introduction rule into an
-  elimination, by resolving with the classical swap principle @{text
-  "\<not> P \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> R"} in the second position.  This is mainly for
+  elimination, by resolving with the classical swap principle
+  \<open>\<not> P \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> R\<close> in the second position.  This is mainly for
   illustrative purposes: the Classical Reasoner already swaps rules
   internally as explained above.
 \<close>
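+
+text \<open>
+  As a small worked example (a sketch only: in Isabelle/HOL all of the
+  rules used below are already declared appropriately by the standard
+  setup), such declarations could read as follows.
+\<close>
+
+    declare conjI [intro!] and conjE [elim!]
+      -- \<open>safe: applied without backtracking\<close>
+    declare exI [intro] and allE [elim]
+      -- \<open>unsafe: both instantiate unknowns\<close>
+    declare de_Morgan_conj [iff]
+      -- \<open>simultaneous Simplifier and Classical declaration\<close>
+    declare allE [rule del]
+      -- \<open>removed again, regardless of classification\<close>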
@@ -1443,8 +1431,8 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def rule} & : & @{text method} \\
-    @{method_def contradiction} & : & @{text method} \\
+    @{method_def rule} & : & \<open>method\<close> \\
+    @{method_def contradiction} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1462,7 +1450,7 @@
   Isabelle/Pure (\secref{sec:pure-meth-att}).
 
   \<^descr> @{method contradiction} solves some goal by contradiction,
-  deriving any result from both @{text "\<not> A"} and @{text A}.  Chained
+  deriving any result from both \<open>\<not> A\<close> and \<open>A\<close>.  Chained
   facts, which are guaranteed to participate, may appear in either
   order.
 \<close>
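+
+text \<open>
+  For example, the following (deliberately trivial) lemma is solved by
+  @{method contradiction} alone, with the two contradictory facts
+  chained in either order:
+\<close>
+
+    lemma assumes "\<not> A" and "A" shows "B"
+      using assms by contradiction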
@@ -1472,16 +1460,16 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def blast} & : & @{text method} \\
-    @{method_def auto} & : & @{text method} \\
-    @{method_def force} & : & @{text method} \\
-    @{method_def fast} & : & @{text method} \\
-    @{method_def slow} & : & @{text method} \\
-    @{method_def best} & : & @{text method} \\
-    @{method_def fastforce} & : & @{text method} \\
-    @{method_def slowsimp} & : & @{text method} \\
-    @{method_def bestsimp} & : & @{text method} \\
-    @{method_def deepen} & : & @{text method} \\
+    @{method_def blast} & : & \<open>method\<close> \\
+    @{method_def auto} & : & \<open>method\<close> \\
+    @{method_def force} & : & \<open>method\<close> \\
+    @{method_def fast} & : & \<open>method\<close> \\
+    @{method_def slow} & : & \<open>method\<close> \\
+    @{method_def best} & : & \<open>method\<close> \\
+    @{method_def fastforce} & : & \<open>method\<close> \\
+    @{method_def slowsimp} & : & \<open>method\<close> \\
+    @{method_def bestsimp} & : & \<open>method\<close> \\
+    @{method_def deepen} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1536,8 +1524,8 @@
   The optional integer argument specifies a bound for the number of
   unsafe steps used in a proof.  By default, @{method blast} starts
   with a bound of 0 and increases it successively to 20.  In contrast,
-  @{text "(blast lim)"} tries to prove the goal using a search bound
-  of @{text "lim"}.  Sometimes a slow proof using @{method blast} can
+  \<open>(blast lim)\<close> tries to prove the goal using a search bound
+  of \<open>lim\<close>.  Sometimes a slow proof using @{method blast} can
   be made much faster by supplying the successful search bound to this
   proof method instead.
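+
+  For example, the classic quantifier shuffle
+  \<open>(\<exists>x. \<forall>y. P x y) \<Longrightarrow> (\<forall>y. \<exists>x. P x y)\<close> needs only a couple of unsafe
+  steps (instantiating \<open>exI\<close> and \<open>allE\<close>), so a small bound such as in
+  \<open>(blast 2)\<close> suffices and avoids stepping through the lower bounds
+  first.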
 
@@ -1547,9 +1535,9 @@
   ones it cannot prove.  Occasionally, attempting to prove the hard
   ones may take a long time.
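+
+  For example, applied to the hypothetical compound goal
+  \<open>A \<and> (A \<longrightarrow> B) \<longrightarrow> B \<and> C\<close>, such a method would discharge the
+  propositional part and leave just the subgoal \<open>C\<close> (under the
+  assumptions \<open>A\<close> and \<open>B\<close>) for further treatment.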