changeset   61493:0debd22f0c0e
author      wenzelm
date        Tue, 20 Oct 2015 23:53:40 +0200
parent      61492:3480725c71d2
child       61494:63b18f758874
description:
isabelle update_cartouches -t;
files:
src/Doc/Eisbach/Manual.thy
src/Doc/Implementation/Eq.thy
src/Doc/Implementation/Integration.thy
src/Doc/Implementation/Isar.thy
src/Doc/Implementation/Local_Theory.thy
src/Doc/Implementation/Logic.thy
src/Doc/Implementation/ML.thy
src/Doc/Implementation/Prelim.thy
src/Doc/Implementation/Proof.thy
src/Doc/Implementation/Syntax.thy
src/Doc/Implementation/Tactic.thy
src/Doc/Isar_Ref/Document_Preparation.thy
src/Doc/Isar_Ref/First_Order_Logic.thy
src/Doc/Isar_Ref/Framework.thy
src/Doc/Isar_Ref/Generic.thy
src/Doc/Isar_Ref/HOL_Specific.thy
src/Doc/Isar_Ref/Inner_Syntax.thy
src/Doc/Isar_Ref/Outer_Syntax.thy
src/Doc/Isar_Ref/Preface.thy
src/Doc/Isar_Ref/Proof.thy
src/Doc/Isar_Ref/Proof_Script.thy
src/Doc/Isar_Ref/Quick_Reference.thy
src/Doc/Isar_Ref/Spec.thy
src/Doc/Isar_Ref/Symbols.thy
src/Doc/Isar_Ref/Synopsis.thy
src/Doc/JEdit/JEdit.thy
src/Doc/System/Basics.thy
src/Doc/System/Misc.thy
src/Doc/System/Presentation.thy
src/Doc/System/Sessions.thy
--- a/src/Doc/Eisbach/Manual.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Eisbach/Manual.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -56,11 +56,11 @@
       by prop_solver\<^sub>1
 
 text \<open>
-  In this example, the facts @{text impI} and @{text conjE} are static. They
+  In this example, the facts \<open>impI\<close> and \<open>conjE\<close> are static. They
   are evaluated once when the method is defined and cannot be changed later.
   This makes the method stable in the sense of \<^emph>\<open>static scoping\<close>: naming
-  another fact @{text impI} in a later context won't affect the behaviour of
-  @{text "prop_solver\<^sub>1"}.
+  another fact \<open>impI\<close> in a later context won't affect the behaviour of
+  \<open>prop_solver\<^sub>1\<close>.
 \<close>
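 
 (For context, the definition of \<open>prop_solver\<^sub>1\<close> sits just above this hunk. A plausible sketch of such a statically-scoped method — the body is an assumption, not taken from this diff:)
 
     method prop_solver\<^sub>1 =
       ((rule impI, (erule conjE)?) | assumption)+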
 
 
@@ -69,8 +69,8 @@
 text \<open>
   Methods can also abstract over terms using the @{keyword_def "for"} keyword,
   optionally providing type constraints. For instance, the following proof
-  method @{text intro_ex} takes a term @{term y} of any type, which it uses to
-  instantiate the @{term x}-variable of @{text exI} (existential introduction)
+  method \<open>intro_ex\<close> takes a term @{term y} of any type, which it uses to
+  instantiate the @{term x}-variable of \<open>exI\<close> (existential introduction)
   before applying the result as a rule. The instantiation is performed here by
   Isar's @{attribute_ref "where"} attribute. If the current subgoal is to find
   a witness for the given predicate @{term Q}, then this has the effect of
@@ -100,7 +100,7 @@
 subsection \<open>Named theorems\<close>
 
 text \<open>
-  A @{text "named theorem"} is a fact whose contents are produced dynamically
+  A \<open>named theorem\<close> is a fact whose contents are produced dynamically
   within the current proof context. The Isar command @{command_ref
   "named_theorems"} provides simple access to this concept: it declares a
   dynamic fact with corresponding \<^emph>\<open>attribute\<close> for managing
@@ -110,10 +110,9 @@
     named_theorems intros
 
 text \<open>
-  So far @{text "intros"} refers to the empty fact. Using the Isar command
+  So far \<open>intros\<close> refers to the empty fact. Using the Isar command
   @{command_ref "declare"} we may apply declaration attributes to the context.
-  Below we declare both @{text "conjI"} and @{text "impI"} as @{text
-  "intros"}, adding them to the named theorem slot.
+  Below we declare both \<open>conjI\<close> and \<open>impI\<close> as \<open>intros\<close>, adding them to the named theorem slot.
 \<close>
 
     declare conjI [intros] and impI [intros]
@@ -121,7 +120,7 @@
 text \<open>
   We can refer to named theorems as dynamic facts within a particular proof
   context, which are evaluated whenever the method is invoked. Instead of
-  having facts hard-coded into the method, as in @{text prop_solver\<^sub>1}, we can
+  having facts hard-coded into the method, as in \<open>prop_solver\<^sub>1\<close>, we can
   instead refer to these named theorems.
 \<close>
 
@@ -137,8 +136,8 @@
 text \<open>
   Often these named theorems need to be augmented on the spot, when a method
   is invoked. The @{keyword_def "declares"} keyword in the signature of
-  @{command method} adds the common method syntax @{text "method decl: facts"}
-  for each named theorem @{text decl}.
+  @{command method} adds the common method syntax \<open>method decl: facts\<close>
+  for each named theorem \<open>decl\<close>.
 \<close>
 
     method prop_solver\<^sub>4 declares intros elims =
@@ -171,11 +170,10 @@
 section \<open>Higher-order methods\<close>
 
 text \<open>
-  The \<^emph>\<open>structured concatenation\<close> combinator ``@{text "method\<^sub>1 ;
-  method\<^sub>2"}'' was introduced in Isabelle2015, motivated by development of
-  Eisbach. It is similar to ``@{text "method\<^sub>1, method\<^sub>2"}'', but @{text
-  method\<^sub>2} is invoked on on \<^emph>\<open>all\<close> subgoals that have newly emerged from
-  @{text method\<^sub>1}. This is useful to handle cases where the number of
+  The \<^emph>\<open>structured concatenation\<close> combinator ``\<open>method\<^sub>1 ;
+  method\<^sub>2\<close>'' was introduced in Isabelle2015, motivated by development of
+  Eisbach. It is similar to ``\<open>method\<^sub>1, method\<^sub>2\<close>'', but \<open>method\<^sub>2\<close> is invoked on \<^emph>\<open>all\<close> subgoals that have newly emerged from
+  \<open>method\<^sub>1\<close>. This is useful to handle cases where the number of
   subgoals produced by a method is determined dynamically at run-time.
 \<close>
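 
 (A hedged illustration of the combinator; the toy lemma is ours, not part of the manual text shown here. \<open>rule conjI\<close> produces two new subgoals, and ``\<open>;\<close>'' runs the second method on both:)
 
     lemma "P \<and> Q \<Longrightarrow> Q \<and> P"
       by (rule conjI ; (erule conjE, assumption))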
 
@@ -192,7 +190,7 @@
   method combinators with prefix syntax. For example, to more usefully exploit
   Isabelle's backtracking, the explicit requirement that a method solve all
   produced subgoals is frequently useful. This can easily be written as a
-  \<^emph>\<open>higher-order method\<close> using ``@{text ";"}''. The @{keyword "methods"}
+  \<^emph>\<open>higher-order method\<close> using ``\<open>;\<close>''. The @{keyword "methods"}
   keyword denotes method parameters that are other proof methods to be invoked
   by the method being defined.
 \<close>
@@ -200,9 +198,9 @@
     method solve methods m = (m ; fail)
 
 text \<open>
-  Given some method-argument @{text m}, @{text "solve \<open>m\<close>"} applies the
-  method @{text m} and then fails whenever @{text m} produces any new unsolved
-  subgoals --- i.e. when @{text m} fails to completely discharge the goal it
+  Given some method-argument \<open>m\<close>, \<open>solve \<open>m\<close>\<close> applies the
+  method \<open>m\<close> and then fails whenever \<open>m\<close> produces any new unsolved
+  subgoals --- i.e. when \<open>m\<close> fails to completely discharge the goal it
   was applied to.
 \<close>
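 
 (A hedged usage sketch with a toy lemma of our own: \<open>solve\<close> succeeds only because the inner method leaves no subgoals behind.)
 
     lemma "P \<Longrightarrow> P \<and> P"
       by (solve \<open>rule conjI, assumption+\<close>)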
 
@@ -224,10 +222,10 @@
         (erule notE ; solve \<open>prop_solver\<close>))+
 
 text \<open>
-  The only non-trivial part above is the final alternative @{text "(erule notE
-  ; solve \<open>prop_solver\<close>)"}. Here, in the case that all other alternatives
+  The only non-trivial part above is the final alternative \<open>(erule notE
+  ; solve \<open>prop_solver\<close>)\<close>. Here, in the case that all other alternatives
   fail, the method takes one of the assumptions @{term "\<not> P"} of the current
-  goal and eliminates it with the rule @{text notE}, causing the goal to be
+  goal and eliminates it with the rule \<open>notE\<close>, causing the goal to
   become @{term P}. The method then recursively invokes itself on
   the remaining goals. The job of the recursive call is to demonstrate that
   there is a contradiction in the original assumptions (i.e.\ that @{term P}
@@ -238,8 +236,7 @@
   chosen for elimination.
 
   Note that the recursive call to @{method prop_solver} does not have any
-  parameters passed to it. Recall that fact parameters, e.g.\ @{text
-  "intros"}, @{text "elims"}, and @{text "subst"}, are managed by declarations
+  parameters passed to it. Recall that fact parameters, e.g.\ \<open>intros\<close>, \<open>elims\<close>, and \<open>subst\<close>, are managed by declarations
   in the current proof context. They will therefore be passed to any recursive
   call to @{method prop_solver} and, more generally, any invocation of a
   method which declares these named theorems.
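 
 (For instance — a hedged sketch, assuming the usual body of \<open>prop_solver\<^sub>4\<close>, which is not shown in this hunk — facts passed via the \<open>declares\<close> syntax travel through the context rather than through explicit method arguments:)
 
     lemma "P \<and> Q \<longrightarrow> P"
       by (prop_solver\<^sub>4 intros: impI elims: conjE)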
@@ -299,11 +296,10 @@
   \<close>}
 
   Matching allows methods to introspect the goal state, and to implement more
-  explicit control flow. In the basic case, a term or fact @{text ts} is given
+  explicit control flow. In the basic case, a term or fact \<open>ts\<close> is given
   to match against as a \<^emph>\<open>match target\<close>, along with a collection of
-  pattern-method pairs @{text "(p, m)"}: roughly speaking, when the pattern
-  @{text p} matches any member of @{text ts}, the \<^emph>\<open>inner\<close> method @{text
-  m} will be executed.
+  pattern-method pairs \<open>(p, m)\<close>: roughly speaking, when the pattern
+  \<open>p\<close> matches any member of \<open>ts\<close>, the \<^emph>\<open>inner\<close> method \<open>m\<close> will be executed.
 \<close>
 
     lemma
@@ -315,9 +311,9 @@
 
 text \<open>
   In this example we have a structured Isar proof, with the named
-  assumption @{text "X"} and a conclusion @{term "P"}. With the match method
+  assumption \<open>X\<close> and a conclusion @{term "P"}. With the match method
   we can find the local facts @{term "Q \<longrightarrow> P"} and @{term "Q"}, binding them
-  separately as @{text "I"} and @{text "I'"}. We then specialize the
+  separately as \<open>I\<close> and \<open>I'\<close>. We then specialize the
   modus-ponens rule @{thm mp [of Q P]} to these facts to solve the goal.
 \<close>
 
@@ -362,7 +358,7 @@
   now-bound @{term A} (bound to @{term P}) against the conclusion (also @{term
   P}), finally applying the specialized rule to solve the goal.
 
-  Schematic terms like @{text "?P"} may also be used to specify match
+  Schematic terms like \<open>?P\<close> may also be used to specify match
   variables, but the result of the match is not bound, and thus cannot be used
   in the inner method body.
 
@@ -382,11 +378,11 @@
 text \<open>
   The first @{method match} matches the pattern @{term "\<exists>x. Q x"} against the
   current conclusion, binding the term @{term "Q"} in the inner match. Next
-  the pattern @{text "Q y"} is matched against all premises of the current
+  the pattern \<open>Q y\<close> is matched against all premises of the current
   subgoal. In this case @{term "Q"} is fixed and @{term "y"} may be
-  instantiated. Once a match is found, the local fact @{text U} is bound to
+  instantiated. Once a match is found, the local fact \<open>U\<close> is bound to
   the matching premise and the variable @{term "y"} is bound to the matching
-  witness. The existential introduction rule @{text "exI:"}~@{thm exI} is then
+  witness. The existential introduction rule \<open>exI:\<close>~@{thm exI} is then
   instantiated with @{term "y"} as the witness and @{term "Q"} as the
   predicate, with its proof obligation solved by the local fact U (using the
   Isar attribute @{attribute OF}). The following example is a trivial use of
@@ -426,8 +422,8 @@
   fail. If focusing instead left the premises in place, using methods
   like @{method erule} would lead to unintended behaviour, specifically during
   backtracking. In our example, @{method erule} could choose an alternate
-  premise while backtracking, while leaving @{text I} bound to the original
-  match. In the case of more complex inner methods, where either @{text I} or
+  premise while backtracking, while leaving \<open>I\<close> bound to the original
+  match. In the case of more complex inner methods, where either \<open>I\<close> or
   bound terms are used, this would almost certainly not be the intended
   behaviour.
 
@@ -451,7 +447,7 @@
   matched premises may be declared with the @{attribute "thin"} attribute.
   This will hide the premise from subsequent inner matches, and remove it from
   the list of premises when the inner method has finished and the subgoal is
-  unfocused. It can be considered analogous to the existing @{text thin_tac}.
+  unfocused. It can be considered analogous to the existing \<open>thin_tac\<close>.
 
   To complete our example, the correct implementation of the method
   will @{attribute "thin"} the premise from the match and then apply it to the
@@ -491,10 +487,10 @@
 
 text \<open>
   In this example, the only premise that exists in the first focus is
-  @{term "A"}. Prior to the inner match, the rule @{text impI} changes
+  @{term "A"}. Prior to the inner match, the rule \<open>impI\<close> changes
   the goal @{term "B \<longrightarrow> B"} into @{term "B \<Longrightarrow> B"}. A standard premise
   match would also include @{term A} as an original premise of the outer
-  match. The @{text local} argument limits the match to
+  match. The \<open>local\<close> argument limits the match to
   newly focused premises.
 
 \<close>
@@ -563,8 +559,7 @@
             \<open>rule I [of x y]\<close>)
 
 text \<open>
-  In this example, the order of schematics in @{text asm} is actually @{text
-  "?y ?x"}, but we instantiate our matched rule in the opposite order. This is
+  In this example, the order of schematics in \<open>asm\<close> is actually \<open>?y ?x\<close>, but we instantiate our matched rule in the opposite order. This is
   because the effective rule @{term I} was bound from the match, which
   declared the @{typ 'a} slot first and the @{typ 'b} slot second.
 
@@ -592,7 +587,7 @@
             \<open>prop_solver\<close>)
 
 text \<open>
-  In this example, the pattern @{text "\<And>x :: 'a. ?P x \<Longrightarrow> ?Q x"} matches against
+  In this example, the pattern \<open>\<And>x :: 'a. ?P x \<Longrightarrow> ?Q x\<close> matches against
   the only premise, giving an appropriately typed slot for @{term y}. After
   the match, the resulting rule is instantiated to @{term y} and then declared
   as an @{attribute intros} rule. This is then picked up by @{method
@@ -606,8 +601,7 @@
   In all previous examples, @{method match} was only ever searching for a
   single rule or premise. Each local fact would therefore always have a length
   of exactly one. We may, however, wish to find \<^emph>\<open>all\<close> matching results.
-  To achieve this, we can simply mark a given pattern with the @{text
-  "(multi)"} argument.
+  To achieve this, we can simply mark a given pattern with the \<open>(multi)\<close> argument.
 \<close>
 
     lemma
@@ -618,18 +612,18 @@
       done
 
 text \<open>
-  In the first @{method match}, without the @{text "(multi)"} argument, @{term
-  I} is only ever be bound to one of the members of @{text asms}. This
+  In the first @{method match}, without the \<open>(multi)\<close> argument, @{term
+  I} is only ever bound to one of the members of \<open>asms\<close>. This
   backtracks over both possibilities (see next section); however, neither
   assumption in isolation is sufficient to solve the goal. The use of the
   @{method solves} combinator ensures that @{method prop_solver} has no effect
   on the goal when it doesn't solve it, and so the first match leaves the goal
-  unchanged. In the second @{method match}, @{text I} is bound to all of
-  @{text asms}, declaring both results as @{text intros}. With these rules
+  unchanged. In the second @{method match}, \<open>I\<close> is bound to all of
+  \<open>asms\<close>, declaring both results as \<open>intros\<close>. With these rules
   @{method prop_solver} is capable of solving the goal.
 
   Using for-fixed variables in patterns imposes additional constraints on the
-  results. In all previous examples, the choice of using @{text ?P} or a
+  results. In all previous examples, the choice of using \<open>?P\<close> or a
   for-fixed @{term P} only depended on whether or not @{term P} was mentioned
   in another pattern or the inner method. When using a multi-match, however,
   all for-fixed terms must agree in the results.
@@ -647,9 +641,9 @@
 text \<open>
   Here we have two seemingly-equivalent applications of @{method match},
   however only the second one is capable of solving the goal. The first
-  @{method match} selects the first and third members of @{text asms} (those
+  @{method match} selects the first and third members of \<open>asms\<close> (those
   that agree on their conclusion), which is not sufficient. The second
-  @{method match} selects the first and second members of @{text asms} (those
+  @{method match} selects the first and second members of \<open>asms\<close> (those
   that agree on their assumption), which is enough for @{method prop_solver}
   to solve the goal.
 \<close>
@@ -661,10 +655,10 @@
   Dummy patterns may be given as placeholders for unique schematics in
   patterns. They implicitly receive all currently bound variables as
   arguments, and are coerced into the @{typ prop} type whenever possible. For
-  example, the trivial dummy pattern @{text "_"} will match any proposition.
-  In contrast, by default the pattern @{text "?P"} is considered to have type
+  example, the trivial dummy pattern \<open>_\<close> will match any proposition.
+  In contrast, by default the pattern \<open>?P\<close> is considered to have type
   @{typ bool}. It will not bind anything with meta-logical connectives (e.g.
-  @{text "_ \<Longrightarrow> _"} or @{text "_ &&& _"}).
+  \<open>_ \<Longrightarrow> _\<close> or \<open>_ &&& _\<close>).
 \<close>
 
     lemma
@@ -676,17 +670,17 @@
 section \<open>Backtracking\<close>
 
 text \<open>
-  Patterns are considered top-down, executing the inner method @{text m} of
+  Patterns are considered top-down, executing the inner method \<open>m\<close> of
   the first pattern which is satisfied by the current match target. By
   default, matching performs extensive backtracking by attempting all valid
   variable and fact bindings according to the given pattern. In particular,
   all unifiers for a given pattern will be explored, as well as each matching
-  fact. The inner method @{text m} will be re-executed for each different
+  fact. The inner method \<open>m\<close> will be re-executed for each different
   variable/fact binding during backtracking. A successful match is considered
   a cut-point for backtracking. Specifically, once a match is made no other
   pattern-method pairs will be considered.
 
-  The method @{text foo} below fails for all goals that are conjunctions. Any
+  The method \<open>foo\<close> below fails for all goals that are conjunctions. Any
   such goal will match the first pattern, causing the second pattern (that
   would otherwise match all goals) to never be considered.
 \<close>
@@ -701,8 +695,7 @@
   combinator chain, its failure
   becomes significant because it signals previously applied methods to move to
   the next result. Therefore, it is necessary for @{method match} to not mask
-  such failure. One can always rewrite a match using the combinators ``@{text
-  "?"}'' and ``@{text "|"}'' to try subsequent patterns in the case of an
+  such failure. One can always rewrite a match using the combinators ``\<open>?\<close>'' and ``\<open>|\<close>'' to try subsequent patterns in the case of an
   inner-method failure. The following proof method, for example, always
   invokes @{method prop_solver} for all goals because its first alternative
   either never matches or (if it does match) always fails.
@@ -717,7 +710,7 @@
 
 text \<open>
   Backtracking may be controlled more precisely by marking individual patterns
-  as @{text cut}. This causes backtracking to not progress beyond this pattern:
+  as \<open>cut\<close>. This causes backtracking to not progress beyond this pattern:
   once a match is found no others will be considered.
 \<close>
 
@@ -730,7 +723,7 @@
   implications of @{term "P"} in the premises are considered, evaluating the
   inner @{method rule} with each consequent. No other conjunctions will be
   considered, with method failure occurring once all implications of the
-  form @{text "P \<longrightarrow> ?U"} have been explored. Here the left-right processing of
+  form \<open>P \<longrightarrow> ?U\<close> have been explored. Here the left-right processing of
   individual patterns is important, as all patterns after the cut will
   maintain their usual backtracking behaviour.
 \<close>
@@ -742,16 +735,16 @@
       by (foo\<^sub>2 | prop_solver)
 
 text \<open>
-  In this example, the first lemma is solved by @{text foo\<^sub>2}, by first
-  picking @{term "A \<longrightarrow> D"} for @{text I'}, then backtracking and ultimately
+  In this example, the first lemma is solved by \<open>foo\<^sub>2\<close>, by first
+  picking @{term "A \<longrightarrow> D"} for \<open>I'\<close>, then backtracking and ultimately
   succeeding after picking @{term "A \<longrightarrow> C"}. In the second lemma, however,
   @{term "C \<and> D"} is matched first, the second pattern in the match cannot be
   found and so the method fails, falling through to @{method prop_solver}.
 
   More precise control is also possible by giving a positive
-  number @{text n} as an argument to @{text cut}. This will limit the number
-  of backtracking results of that match to be at most @{text n}.
-  The match argument @{text "(cut 1)"} is the same as simply @{text "(cut)"}.
+  number \<open>n\<close> as an argument to \<open>cut\<close>. This will limit the number
+  of backtracking results of that match to be at most \<open>n\<close>.
+  The match argument \<open>(cut 1)\<close> is the same as simply \<open>(cut)\<close>.
 \<close>
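 
 (A hedged sketch of the argument syntax; the method name and body are ours, illustrating only where \<open>(cut n)\<close> is written:)
 
     method foo\<^sub>3 =
       (match premises in I: "P \<longrightarrow> ?U" (cut 2) for P \<Rightarrow> \<open>rule mp [OF I]\<close>)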
 
 
@@ -776,16 +769,14 @@
 
 text \<open>
   Intuitively it seems like this proof should fail to check. The first match
-  result, which binds @{term I} to the first two members of @{text asms},
+  result, which binds @{term I} to the first two members of \<open>asms\<close>,
   fails the second inner match due to binding @{term P} to @{term A}.
-  Backtracking then attempts to bind @{term I} to the third member of @{text
-  asms}. This passes all inner matches, but fails when @{method rule} cannot
+  Backtracking then attempts to bind @{term I} to the third member of \<open>asms\<close>. This passes all inner matches, but fails when @{method rule} cannot
   successfully apply this to the current goal. After this, a valid match that
-  is produced by the unifier is one which binds @{term P} to simply @{text
-  "\<lambda>a. A ?x"}. The first inner match succeeds because @{text "\<lambda>a. A ?x"} does
+  is produced by the unifier is one which binds @{term P} to simply \<open>\<lambda>a. A ?x\<close>. The first inner match succeeds because \<open>\<lambda>a. A ?x\<close> does
   not match @{term A}. The next inner match succeeds because @{term I} has
-  only been bound to the first member of @{text asms}. This is due to @{method
-  match} considering @{text "\<lambda>a. A ?x"} and @{text "\<lambda>a. A ?y"} as distinct
+  only been bound to the first member of \<open>asms\<close>. This is due to @{method
+  match} considering \<open>\<lambda>a. A ?x\<close> and \<open>\<lambda>a. A ?y\<close> as distinct
   terms.
 
   The simplest way to address this is to explicitly disallow term bindings
@@ -807,7 +798,7 @@
   The @{method match} method is not aware of the logical content of match
   targets. Each pattern is simply matched against the shallow structure of a
   fact or term. Most facts are in \<^emph>\<open>normal form\<close>, which curries premises
-  via meta-implication @{text "_ \<Longrightarrow> _"}.
+  via meta-implication \<open>_ \<Longrightarrow> _\<close>.
 \<close>
 
     lemma
@@ -816,7 +807,7 @@
       by (match asms in H: "D \<Longrightarrow> _" (multi) \<Rightarrow> \<open>prop_solver elims: H\<close>)
 
 text \<open>
-  For the first member of @{text asms} the dummy pattern successfully matches
+  For the first member of \<open>asms\<close> the dummy pattern successfully matches
   against @{term "B \<Longrightarrow> C"} and so the proof is successful.
 \<close>
 
@@ -830,7 +821,7 @@
 text \<open>
   This proof will fail to solve the goal. Our match pattern will only match
   rules which have a single premise, and conclusion @{term C}, so the first
-  member of @{text asms} is not bound and thus the proof fails. Matching a
+  member of \<open>asms\<close> is not bound and thus the proof fails. Matching a
   pattern of the form @{term "P \<Longrightarrow> Q"} against this fact will bind @{term "P"}
   to @{term "A"} and @{term Q} to @{term "B \<Longrightarrow> C"}. Our pattern, with a
   concrete @{term "C"} in the conclusion, will fail to match this fact.
@@ -839,7 +830,7 @@
   matching against them. This forms a meta-conjunction of all premises in a
   fact, so that only one implication remains. For example the uncurried
   version of @{term "A \<Longrightarrow> B \<Longrightarrow> C"} is @{term "A &&& B \<Longrightarrow> C"}. This will now match
-  our desired pattern @{text "_ \<Longrightarrow> C"}, and can be \<^emph>\<open>curried\<close> after the
+  our desired pattern \<open>_ \<Longrightarrow> C\<close>, and can be \<^emph>\<open>curried\<close> after the
   match to put it back into normal form.
 \<close>
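 
 (A hedged sketch, assuming the \<open>uncurry\<close>/\<open>curry\<close> attribute names discussed in the surrounding text:)
 
     lemma assumes asms: "A \<Longrightarrow> B \<Longrightarrow> C" "D \<Longrightarrow> C"
       shows "D \<Longrightarrow> C"
       by (match asms [uncurry] in H [curry]: "_ \<Longrightarrow> C" (multi) \<Rightarrow>
             \<open>prop_solver elims: H\<close>)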
 
@@ -867,11 +858,11 @@
       done
 
 text \<open>
-  In the first @{method match} we attempt to find a member of @{text asms}
+  In the first @{method match} we attempt to find a member of \<open>asms\<close>
   which matches our goal precisely. This fails due to no such member existing.
   The second match reverses the role of the fact in the match, by first giving
   a general pattern @{term P}. This bound pattern is then matched against
-  @{term "A y"}. In this case, @{term P} is bound to @{text "A ?x"} and so it
+  @{term "A y"}. In this case, @{term P} is bound to \<open>A ?x\<close> and so it
   successfully matches.
 \<close>
 
@@ -892,10 +883,9 @@
           \<open>match (y) in "y :: 'b" for y \<Rightarrow> \<open>rule H [where z = y]\<close>\<close>)
 
 text \<open>
-  In this example the type @{text 'b} is matched to @{text 'a}, however
-  statically they are formally distinct types. The first match binds @{text
-  'b} while the inner match serves to coerce @{term y} into having the type
-  @{text 'b}. This allows the rule instantiation to successfully apply.
+  In this example the type \<open>'b\<close> is matched to \<open>'a\<close>; however,
+  statically they are formally distinct types. The first match binds \<open>'b\<close> while the inner match serves to coerce @{term y} into having the type
+  \<open>'b\<close>. This allows the rule instantiation to successfully apply.
 \<close>
 
 
@@ -932,7 +922,7 @@
 
 text \<open>
   A custom rule attribute is a simple way to extend the functionality of
-  Eisbach methods. The dummy rule attribute notation (@{text "[[ _ ]]"})
+  Eisbach methods. The dummy rule attribute notation (\<open>[[ _ ]]\<close>)
   invokes the given attribute against a dummy fact and evaluates to the result
   of that attribute. When used as a match target, this can serve as an
   effective auxiliary function.
@@ -968,7 +958,7 @@
   Here the new @{method splits} method transforms the goal to use only logical
   connectives: @{term "L = [] \<longrightarrow> False \<and> (\<forall>x y. L = x # y \<longrightarrow> True)"}. This goal
   is then in a form solvable by @{method prop_solver} when given the universal
-  quantifier introduction rule @{text allI}.
+  quantifier introduction rule \<open>allI\<close>.
 \<close>
 
 end
--- a/src/Doc/Implementation/Eq.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Eq.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -6,7 +6,7 @@
 
 text \<open>Equality is one of the most fundamental concepts of
   mathematics.  The Isabelle/Pure logic (\chref{ch:logic}) provides a
-  builtin relation @{text "\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop"} that expresses equality
+  builtin relation \<open>\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop\<close> that expresses equality
   of arbitrary terms (or propositions) at the framework level, as
   expressed by certain basic inference rules (\secref{sec:eq-rules}).
 
@@ -18,7 +18,7 @@
 
   Higher-order matching is able to provide suitable instantiations for
   given equality rules, which leads to the versatile concept of
-  @{text "\<lambda>"}-term rewriting (\secref{sec:rewriting}).  Internally
+  \<open>\<lambda>\<close>-term rewriting (\secref{sec:rewriting}).  Internally
   this is based on the general-purpose Simplifier engine of Isabelle,
   which is more specific and more efficient than plain conversions.
 
@@ -31,10 +31,10 @@
 
 section \<open>Basic equality rules \label{sec:eq-rules}\<close>
 
-text \<open>Isabelle/Pure uses @{text "\<equiv>"} for equality of arbitrary
+text \<open>Isabelle/Pure uses \<open>\<equiv>\<close> for equality of arbitrary
   terms, which includes equivalence of propositions of the logical
-  framework.  The conceptual axiomatization of the constant @{text "\<equiv>
-  :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop"} is given in \figref{fig:pure-equality}.  The
+  framework.  The conceptual axiomatization of the constant \<open>\<equiv>
+  :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop\<close> is given in \figref{fig:pure-equality}.  The
   inference kernel presents slightly different equality rules, which
   may be understood as derived rules from this minimal axiomatization.
   The Pure theory also provides some theorems that express the same
@@ -42,15 +42,14 @@
   rules as explained in \secref{sec:obj-rules}.
 
   For example, @{ML Thm.symmetric} as Pure inference is an ML function
-  that maps a theorem @{text "th"} stating @{text "t \<equiv> u"} to one
-  stating @{text "u \<equiv> t"}.  In contrast, @{thm [source]
+  that maps a theorem \<open>th\<close> stating \<open>t \<equiv> u\<close> to one
+  stating \<open>u \<equiv> t\<close>.  In contrast, @{thm [source]
   Pure.symmetric} as Pure theorem expresses the same reasoning in
-  declarative form.  If used like @{text "th [THEN Pure.symmetric]"}
+  declarative form.  If used like \<open>th [THEN Pure.symmetric]\<close>
   in Isar source notation, it achieves a similar effect as the ML
   inference function, although the rule attribute @{attribute THEN} or
   ML operator @{ML "op RS"} involve the full machinery of higher-order
-  unification (modulo @{text "\<beta>\<eta>"}-conversion) and lifting of @{text
-  "\<And>/\<Longrightarrow>"} contexts.\<close>
+  unification (modulo \<open>\<beta>\<eta>\<close>-conversion) and lifting of \<open>\<And>/\<Longrightarrow>\<close> contexts.\<close>
 
 text %mlref \<open>
   \begin{mldecls}
@@ -64,8 +63,8 @@
   \end{mldecls}
 
   See also @{file "~~/src/Pure/thm.ML" } for further description of
-  these inference rules, and a few more for primitive @{text "\<beta>"} and
-  @{text "\<eta>"} conversions.  Note that @{text "\<alpha>"} conversion is
+  these inference rules, and a few more for primitive \<open>\<beta>\<close> and
+  \<open>\<eta>\<close> conversions.  Note that \<open>\<alpha>\<close> conversion is
   implicit due to the representation of terms with de-Bruijn indices
   (\secref{sec:terms}).\<close>
 
@@ -83,10 +82,10 @@
 section \<open>Rewriting \label{sec:rewriting}\<close>
 
 text \<open>Rewriting normalizes a given term (theorem or goal) by
-  replacing instances of given equalities @{text "t \<equiv> u"} in subterms.
+  replacing instances of given equalities \<open>t \<equiv> u\<close> in subterms.
   Rewriting continues until no rewrites are applicable to any subterm.
-  This may be used to unfold simple definitions of the form @{text "f
-  x\<^sub>1 \<dots> x\<^sub>n \<equiv> u"}, but is slightly more general than that.
+  This may be used to unfold simple definitions of the form \<open>f
+  x\<^sub>1 \<dots> x\<^sub>n \<equiv> u\<close>, but is slightly more general than that.
 \<close>
 
 text %mlref \<open>
@@ -98,23 +97,22 @@
   @{index_ML fold_goals_tac: "Proof.context -> thm list -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML rewrite_rule}~@{text "ctxt rules thm"} rewrites the whole
+  \<^descr> @{ML rewrite_rule}~\<open>ctxt rules thm\<close> rewrites the whole
   theorem by the given rules.
 
-  \<^descr> @{ML rewrite_goals_rule}~@{text "ctxt rules thm"} rewrites the
+  \<^descr> @{ML rewrite_goals_rule}~\<open>ctxt rules thm\<close> rewrites the
   outer premises of the given theorem.  Interpreting the same as a
   goal state (\secref{sec:tactical-goals}) it means to rewrite all
   subgoals (in the same manner as @{ML rewrite_goals_tac}).
 
-  \<^descr> @{ML rewrite_goal_tac}~@{text "ctxt rules i"} rewrites subgoal
-  @{text "i"} by the given rewrite rules.
+  \<^descr> @{ML rewrite_goal_tac}~\<open>ctxt rules i\<close> rewrites subgoal
+  \<open>i\<close> by the given rewrite rules.
 
-  \<^descr> @{ML rewrite_goals_tac}~@{text "ctxt rules"} rewrites all subgoals
+  \<^descr> @{ML rewrite_goals_tac}~\<open>ctxt rules\<close> rewrites all subgoals
   by the given rewrite rules.
 
-  \<^descr> @{ML fold_goals_tac}~@{text "ctxt rules"} essentially uses @{ML
-  rewrite_goals_tac} with the symmetric form of each member of @{text
-  "rules"}, re-ordered to fold longer expression first.  This supports
+  \<^descr> @{ML fold_goals_tac}~\<open>ctxt rules\<close> essentially uses @{ML
+  rewrite_goals_tac} with the symmetric form of each member of \<open>rules\<close>, re-ordered to fold longer expressions first.  This supports
   the idea of folding primitive definitions that appear in expanded form
   in the proof state.
 \<close>
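 
 (A hedged ML-level sketch of @{ML rewrite_rule}; the constant \<open>double\<close> is hypothetical, introduced only for this example:)
 
     definition "double (x :: nat) = x + x"
 
     ML_val \<open>
       (*unfold the hypothetical double_def inside a trivial instance of refl*)
       rewrite_rule @{context} @{thms double_def} @{thm refl [of "double y"]}
     \<close>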
--- a/src/Doc/Implementation/Integration.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Integration.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -23,8 +23,7 @@
 subsection \<open>Toplevel state\<close>
 
 text \<open>
-  The toplevel state is a disjoint sum of empty @{text toplevel}, or @{text
-  theory}, or @{text proof}. The initial toplevel is empty; a theory is
+  The toplevel state is a disjoint sum of empty \<open>toplevel\<close>, or \<open>theory\<close>, or \<open>proof\<close>. The initial toplevel is empty; a theory is
   commenced by a @{command theory} header; within a theory we may use theory
   commands such as @{command definition}, or state a @{command theorem} to be
   proven. A proof state accepts a rich collection of Isar proof commands for
@@ -52,23 +51,23 @@
   operations.  Many operations work only partially for certain cases,
   since @{ML_type Toplevel.state} is a sum type.
 
-  \<^descr> @{ML Toplevel.is_toplevel}~@{text "state"} checks for an empty
+  \<^descr> @{ML Toplevel.is_toplevel}~\<open>state\<close> checks for an empty
   toplevel state.
 
-  \<^descr> @{ML Toplevel.theory_of}~@{text "state"} selects the
-  background theory of @{text "state"}, it raises @{ML Toplevel.UNDEF}
+  \<^descr> @{ML Toplevel.theory_of}~\<open>state\<close> selects the
+  background theory of \<open>state\<close>, it raises @{ML Toplevel.UNDEF}
   for an empty toplevel state.
 
-  \<^descr> @{ML Toplevel.proof_of}~@{text "state"} selects the Isar proof
+  \<^descr> @{ML Toplevel.proof_of}~\<open>state\<close> selects the Isar proof
   state if available, otherwise it raises an error.
 \<close>
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "Isar.state"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "Isar.state"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
-  \<^descr> @{text "@{Isar.state}"} refers to Isar toplevel state at that
+  \<^descr> \<open>@{Isar.state}\<close> refers to Isar toplevel state at that
   point --- as abstract value.
 
   This only works for diagnostic ML commands, such as @{command
@@ -113,27 +112,27 @@
   Toplevel.transition -> Toplevel.transition"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Toplevel.keep}~@{text "tr"} adjoins a diagnostic
+  \<^descr> @{ML Toplevel.keep}~\<open>tr\<close> adjoins a diagnostic
   function.
 
-  \<^descr> @{ML Toplevel.theory}~@{text "tr"} adjoins a theory
+  \<^descr> @{ML Toplevel.theory}~\<open>tr\<close> adjoins a theory
   transformer.
 
-  \<^descr> @{ML Toplevel.theory_to_proof}~@{text "tr"} adjoins a global
+  \<^descr> @{ML Toplevel.theory_to_proof}~\<open>tr\<close> adjoins a global
   goal function, which turns a theory into a proof state.  The theory
   may be changed before entering the proof; the generic Isar goal
   setup includes an @{verbatim after_qed} argument that specifies how to
   apply the proven result to the enclosing context, when the proof
   is finished.
 
-  \<^descr> @{ML Toplevel.proof}~@{text "tr"} adjoins a deterministic
+  \<^descr> @{ML Toplevel.proof}~\<open>tr\<close> adjoins a deterministic
   proof command, with a singleton result.
 
-  \<^descr> @{ML Toplevel.proofs}~@{text "tr"} adjoins a general proof
+  \<^descr> @{ML Toplevel.proofs}~\<open>tr\<close> adjoins a general proof
   command, with zero or more result states (represented as a lazy
   list).
 
-  \<^descr> @{ML Toplevel.end_proof}~@{text "tr"} adjoins a concluding
+  \<^descr> @{ML Toplevel.end_proof}~\<open>tr\<close> adjoins a concluding
   proof command, that returns the resulting theory, after applying the
   resulting facts to the target context.
 \<close>
@@ -163,7 +162,7 @@
   @{index_ML Thy_Info.register_thy: "theory -> unit"} \\
   \end{mldecls}
 
-  \<^descr> @{ML use_thy}~@{text A} ensures that theory @{text A} is fully
+  \<^descr> @{ML use_thy}~\<open>A\<close> ensures that theory \<open>A\<close> is fully
   up-to-date wrt.\ the external file store; outdated ancestors are reloaded on
   demand.
 
@@ -175,14 +174,14 @@
 
   This variant is used by default in @{tool build} @{cite "isabelle-system"}.
 
-  \<^descr> @{ML Thy_Info.get_theory}~@{text A} retrieves the theory value
-  presently associated with name @{text A}. Note that the result might be
+  \<^descr> @{ML Thy_Info.get_theory}~\<open>A\<close> retrieves the theory value
+  presently associated with name \<open>A\<close>. Note that the result might be
   outdated wrt.\ the file-system content.
 
-  \<^descr> @{ML Thy_Info.remove_thy}~@{text A} deletes theory @{text A} and all
+  \<^descr> @{ML Thy_Info.remove_thy}~\<open>A\<close> deletes theory \<open>A\<close> and all
   descendants from the theory database.
 
-  \<^descr> @{ML Thy_Info.register_thy}~@{text "text thy"} registers an existing
+  \<^descr> @{ML Thy_Info.register_thy}~\<open>text thy\<close> registers an existing
   theory value with the theory loader database and updates source version
   information according to the file store.
 \<close>
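 
 (A hedged usage sketch of the theory loader interface:)
 
     ML_val \<open>
       (*look up the loaded theory "Main" and report its name*)
       Context.theory_name (Thy_Info.get_theory "Main")
     \<close>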
--- a/src/Doc/Implementation/Isar.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Isar.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -78,7 +78,7 @@
   \<^descr> Type @{ML_type Proof.state} represents Isar proof states.
   This is a block-structured configuration with proof context,
   linguistic mode, and optional goal.  The latter consists of goal
-  context, goal facts (``@{text "using"}''), and tactical goal state
+  context, goal facts (``\<open>using\<close>''), and tactical goal state
   (see \secref{sec:tactical-goals}).
 
   The general idea is that the facts shall contribute to the
@@ -87,39 +87,37 @@
 
   \<^descr> @{ML Proof.assert_forward}, @{ML Proof.assert_chain}, @{ML
   Proof.assert_backward} are partial identity functions that fail
-  unless a certain linguistic mode is active, namely ``@{text
-  "proof(state)"}'', ``@{text "proof(chain)"}'', ``@{text
-  "proof(prove)"}'', respectively (using the terminology of
+  unless a certain linguistic mode is active, namely ``\<open>proof(state)\<close>'', ``\<open>proof(chain)\<close>'', ``\<open>proof(prove)\<close>'', respectively (using the terminology of
   @{cite "isabelle-isar-ref"}).
 
   It is advisable to study the implementations of existing proof commands
   for suitable modes to be asserted.
 
-  \<^descr> @{ML Proof.simple_goal}~@{text "state"} returns the structured
+  \<^descr> @{ML Proof.simple_goal}~\<open>state\<close> returns the structured
   Isar goal (if available) in the form seen by ``simple'' methods
   (like @{method simp} or @{method blast}).  The Isar goal facts are
   already inserted as premises into the subgoals, which are presented
   individually as in @{ML Proof.goal}.
 
-  \<^descr> @{ML Proof.goal}~@{text "state"} returns the structured Isar
+  \<^descr> @{ML Proof.goal}~\<open>state\<close> returns the structured Isar
   goal (if available) in the form seen by regular methods (like
   @{method rule}).  The auxiliary internal encoding of Pure
   conjunctions is split into individual subgoals as usual.
 
-  \<^descr> @{ML Proof.raw_goal}~@{text "state"} returns the structured
+  \<^descr> @{ML Proof.raw_goal}~\<open>state\<close> returns the structured
   Isar goal (if available) in the raw internal form seen by ``raw''
   methods (like @{method induct}).  This form is rarely appropriate
   for diagnostic tools; @{ML Proof.simple_goal} or @{ML Proof.goal}
   should be used in most situations.
 
-  \<^descr> @{ML Proof.theorem}~@{text "before_qed after_qed statement ctxt"}
+  \<^descr> @{ML Proof.theorem}~\<open>before_qed after_qed statement ctxt\<close>
   initializes a toplevel Isar proof state within a given context.
 
-  The optional @{text "before_qed"} method is applied at the end of
+  The optional \<open>before_qed\<close> method is applied at the end of
   the proof, just before extracting the result (this feature is rarely
   used).
 
-  The @{text "after_qed"} continuation receives the extracted result
+  The \<open>after_qed\<close> continuation receives the extracted result
   in order to apply it to the final context in a suitable way (e.g.\
   storing named facts).  Note that at this generic level the target
   context is specified as @{ML_type Proof.context}, but the usual
@@ -127,20 +125,20 @@
   @{ML_type local_theory} here (\chref{ch:local-theory}).  This
   affects the way how results are stored.
 
-  The @{text "statement"} is given as a nested list of terms, each
+  The \<open>statement\<close> is given as a nested list of terms, each
   associated with optional @{keyword "is"} patterns as usual in the
   Isar source language.  The original nested list structure over terms
-  is turned into one over theorems when @{text "after_qed"} is
+  is turned into one over theorems when \<open>after_qed\<close> is
   invoked.
 \<close>
 
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "Isar.goal"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "Isar.goal"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
-  \<^descr> @{text "@{Isar.goal}"} refers to the regular goal state (if
+  \<^descr> \<open>@{Isar.goal}\<close> refers to the regular goal state (if
   available) of the current proof state managed by the Isar toplevel
   --- as abstract value.
 
@@ -165,8 +163,8 @@
 
 section \<open>Proof methods\<close>
 
-text \<open>A @{text "method"} is a function @{text "context \<rightarrow> thm\<^sup>* \<rightarrow> goal
-  \<rightarrow> (cases \<times> goal)\<^sup>*\<^sup>*"} that operates on the full Isar goal
+text \<open>A \<open>method\<close> is a function \<open>context \<rightarrow> thm\<^sup>* \<rightarrow> goal
+  \<rightarrow> (cases \<times> goal)\<^sup>*\<^sup>*\<close> that operates on the full Isar goal
   configuration with context, goal facts, and tactical goal state and
   enumerates possible follow-up goal states, with the potential
   addition of named extensions of the proof context (\<^emph>\<open>cases\<close>).
@@ -186,9 +184,9 @@
 
   \<^item> A non-trivial method always needs to make progress: an
   identical follow-up goal state has to be avoided.\footnote{This
-  enables the user to write method expressions like @{text "meth\<^sup>+"}
+  enables the user to write method expressions like \<open>meth\<^sup>+\<close>
   without looping, while the trivial do-nothing case can be recovered
-  via @{text "meth\<^sup>?"}.}
+  via \<open>meth\<^sup>?\<close>.}
 
   Exception: trivial stuttering steps, such as ``@{method -}'' or
   @{method succeed}.
@@ -211,19 +209,17 @@
 
   \<^medskip>
   \begin{tabular}{l}
-  @{command from}~@{text "facts\<^sub>1"}~@{command have}~@{text "props"}~@{command using}~@{text "facts\<^sub>2"} \\
-  @{command proof}~@{text "(initial_method)"} \\
-  \quad@{text "body"} \\
-  @{command qed}~@{text "(terminal_method)"} \\
+  @{command from}~\<open>facts\<^sub>1\<close>~@{command have}~\<open>props\<close>~@{command using}~\<open>facts\<^sub>2\<close> \\
+  @{command proof}~\<open>(initial_method)\<close> \\
+  \quad\<open>body\<close> \\
+  @{command qed}~\<open>(terminal_method)\<close> \\
   \end{tabular}
   \<^medskip>
 
-  The goal configuration consists of @{text "facts\<^sub>1"} and
-  @{text "facts\<^sub>2"} appended in that order, and various @{text
-  "props"} being claimed.  The @{text "initial_method"} is invoked
+  The goal configuration consists of \<open>facts\<^sub>1\<close> and
+  \<open>facts\<^sub>2\<close> appended in that order, and various \<open>props\<close> being claimed.  The \<open>initial_method\<close> is invoked
   with facts and goals together and refines the problem to something
-  that is handled recursively in the proof @{text "body"}.  The @{text
-  "terminal_method"} has another chance to finish any remaining
+  that is handled recursively in the proof \<open>body\<close>.  The \<open>terminal_method\<close> has another chance to finish any remaining
   subgoals, but it does not see the facts of the initial step.
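 
 (A hedged concrete instance of this scheme, using a toy lemma of our own:)
 
     lemma "A \<longrightarrow> A \<and> A"
     proof (rule impI)
       assume a: "A"
       from a a show "A \<and> A" by (rule conjI)
     qed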
 
   \<^medskip>
@@ -231,20 +227,18 @@
 
   \<^medskip>
   \begin{tabular}{l}
-  @{command have}~@{text "props"} \\
-  \quad@{command using}~@{text "facts\<^sub>1"}~@{command apply}~@{text "method\<^sub>1"} \\
-  \quad@{command apply}~@{text "method\<^sub>2"} \\
-  \quad@{command using}~@{text "facts\<^sub>3"}~@{command apply}~@{text "method\<^sub>3"} \\
+  @{command have}~\<open>props\<close> \\
+  \quad@{command using}~\<open>facts\<^sub>1\<close>~@{command apply}~\<open>method\<^sub>1\<close> \\
+  \quad@{command apply}~\<open>method\<^sub>2\<close> \\
+  \quad@{command using}~\<open>facts\<^sub>3\<close>~@{command apply}~\<open>method\<^sub>3\<close> \\
   \quad@{command done} \\
   \end{tabular}
   \<^medskip>
 
-  The @{text "method\<^sub>1"} operates on the original claim while
-  using @{text "facts\<^sub>1"}.  Since the @{command apply} command
-  structurally resets the facts, the @{text "method\<^sub>2"} will
-  operate on the remaining goal state without facts.  The @{text
-  "method\<^sub>3"} will see again a collection of @{text
-  "facts\<^sub>3"} that has been inserted into the script explicitly.
+  The \<open>method\<^sub>1\<close> operates on the original claim while
+  using \<open>facts\<^sub>1\<close>.  Since the @{command apply} command
+  structurally resets the facts, the \<open>method\<^sub>2\<close> will
+  operate on the remaining goal state without facts.  The \<open>method\<^sub>3\<close> will see again a collection of \<open>facts\<^sub>3\<close> that has been inserted into the script explicitly.
 
   \<^medskip>
   Empirically, any Isar proof method can be categorized as
@@ -272,7 +266,7 @@
   \<^enum> \<^emph>\<open>Old-style tactic emulation\<close> with detailed numeric goal
   addressing and explicit references to entities of the internal goal
   state (which are otherwise invisible from proper Isar proof text).
-  The naming convention @{text "foo_tac"} makes this special
+  The naming convention \<open>foo_tac\<close> makes this special
   non-standard status clear.
 
   Example: @{method "rule_tac"}.
@@ -303,30 +297,27 @@
   \<^descr> Type @{ML_type Proof.method} represents proof methods as
   abstract type.
 
-  \<^descr> @{ML METHOD_CASES}~@{text "(fn facts => cases_tactic)"} wraps
-  @{text cases_tactic} depending on goal facts as proof method with
+  \<^descr> @{ML METHOD_CASES}~\<open>(fn facts => cases_tactic)\<close> wraps
+  \<open>cases_tactic\<close> depending on goal facts as proof method with
   cases; the goal context is passed via method syntax.
 
-  \<^descr> @{ML METHOD}~@{text "(fn facts => tactic)"} wraps @{text
-  tactic} depending on goal facts as regular proof method; the goal
+  \<^descr> @{ML METHOD}~\<open>(fn facts => tactic)\<close> wraps \<open>tactic\<close> depending on goal facts as regular proof method; the goal
   context is passed via method syntax.
 
-  \<^descr> @{ML SIMPLE_METHOD}~@{text "tactic"} wraps a tactic that
+  \<^descr> @{ML SIMPLE_METHOD}~\<open>tactic\<close> wraps a tactic that
   addresses all subgoals uniformly as simple proof method.  Goal facts
-  are already inserted into all subgoals before @{text "tactic"} is
+  are already inserted into all subgoals before \<open>tactic\<close> is
   applied.
 
-  \<^descr> @{ML SIMPLE_METHOD'}~@{text "tactic"} wraps a tactic that
+  \<^descr> @{ML SIMPLE_METHOD'}~\<open>tactic\<close> wraps a tactic that
   addresses a specific subgoal as simple proof method that operates on
-  subgoal 1.  Goal facts are inserted into the subgoal then the @{text
-  "tactic"} is applied.
+  subgoal 1.  Goal facts are inserted into the subgoal, then the \<open>tactic\<close> is applied.
 
-  \<^descr> @{ML Method.insert_tac}~@{text "facts i"} inserts @{text
-  "facts"} into subgoal @{text "i"}.  This is convenient to reproduce
+  \<^descr> @{ML Method.insert_tac}~\<open>facts i\<close> inserts \<open>facts\<close> into subgoal \<open>i\<close>.  This is convenient to reproduce
   part of the @{ML SIMPLE_METHOD} or @{ML SIMPLE_METHOD'} wrapping
   within regular @{ML METHOD}, for example.
 
-  \<^descr> @{ML Method.setup}~@{text "name parser description"} provides
+  \<^descr> @{ML Method.setup}~\<open>name parser description\<close> provides
   the functionality of the Isar command @{command method_setup} as ML
   function.
 \<close>
@@ -470,8 +461,8 @@
   method arguments obtained via concrete syntax or the context does
   not meet the requirement of ``strong emphasis on facts'' of regular
   proof methods, because rewrite rules as used above can be easily
-  ignored.  A proof text ``@{command using}~@{text "foo"}~@{command
-  "by"}~@{text "my_simp"}'' where @{text "foo"} is not used would
+  ignored.  A proof text ``@{command using}~\<open>foo\<close>~@{command
+  "by"}~\<open>my_simp\<close>'' where \<open>foo\<close> is not used would
   deceive the reader.
 
   \<^medskip>
@@ -495,12 +486,12 @@
 
 section \<open>Attributes \label{sec:attributes}\<close>
 
-text \<open>An \<^emph>\<open>attribute\<close> is a function @{text "context \<times> thm \<rightarrow>
-  context \<times> thm"}, which means both a (generic) context and a theorem
+text \<open>An \<^emph>\<open>attribute\<close> is a function \<open>context \<times> thm \<rightarrow>
+  context \<times> thm\<close>, which means both a (generic) context and a theorem
   can be modified simultaneously.  In practice this mixed form is very
   rare, instead attributes are presented either as \<^emph>\<open>declaration
-  attribute:\<close> @{text "thm \<rightarrow> context \<rightarrow> context"} or \<^emph>\<open>rule
-  attribute:\<close> @{text "context \<rightarrow> thm \<rightarrow> thm"}.
+  attribute:\<close> \<open>thm \<rightarrow> context \<rightarrow> context\<close> or \<^emph>\<open>rule
+  attribute:\<close> \<open>context \<rightarrow> thm \<rightarrow> thm\<close>.
 
   Attributes can have additional arguments via concrete syntax.  There
   is a collection of context-sensitive parsers for various logical
@@ -527,28 +518,28 @@
   \<^descr> Type @{ML_type attribute} represents attributes as concrete
   type alias.
 
-  \<^descr> @{ML Thm.rule_attribute}~@{text "(fn context => rule)"} wraps
+  \<^descr> @{ML Thm.rule_attribute}~\<open>(fn context => rule)\<close> wraps
   a context-dependent rule (mapping on @{ML_type thm}) as attribute.
 
-  \<^descr> @{ML Thm.declaration_attribute}~@{text "(fn thm => decl)"}
+  \<^descr> @{ML Thm.declaration_attribute}~\<open>(fn thm => decl)\<close>
   wraps a theorem-dependent declaration (mapping on @{ML_type
   Context.generic}) as attribute.
 
-  \<^descr> @{ML Attrib.setup}~@{text "name parser description"} provides
+  \<^descr> @{ML Attrib.setup}~\<open>name parser description\<close> provides
   the functionality of the Isar command @{command attribute_setup} as
   ML function.
 \<close>
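 
 (A hedged sketch mirroring the classic symmetry example; the attribute name \<open>my_sym\<close> is ours:)
 
     attribute_setup my_sym =
       \<open>Scan.succeed (Thm.rule_attribute (fn _ => fn th => th RS @{thm sym}))\<close>
       "resolution with the symmetry rule"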
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def attributes} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def attributes} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
   @@{ML_antiquotation attributes} attributes
   \<close>}
 
-  \<^descr> @{text "@{attributes [\<dots>]}"} embeds attribute source
+  \<^descr> \<open>@{attributes [\<dots>]}\<close> embeds attribute source
   representation into the ML text, which is particularly useful with
   declarations like @{ML Local_Theory.note}.  Attribute names are
   internalized at compile time, but the source is unevaluated.  This
--- a/src/Doc/Implementation/Local_Theory.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Local_Theory.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -13,11 +13,11 @@
   context\<close>.
 
   The target is usually derived from the background theory by adding
-  local @{text "\<FIX>"} and @{text "\<ASSUME>"} elements, plus
+  local \<open>\<FIX>\<close> and \<open>\<ASSUME>\<close> elements, plus
   suitable modifications of non-logical context data (e.g.\ a special
   type-checking discipline).  Once initialized, the target is ready to
-  absorb definitional primitives: @{text "\<DEFINE>"} for terms and
-  @{text "\<NOTE>"} for theorems.  Such definitions may get
+  absorb definitional primitives: \<open>\<DEFINE>\<close> for terms and
+  \<open>\<NOTE>\<close> for theorems.  Such definitions may get
   transformed in a target-specific way, but the programming interface
   hides such details.
 
@@ -39,13 +39,13 @@
 section \<open>Definitional elements\<close>
 
 text \<open>
-  There are separate elements @{text "\<DEFINE> c \<equiv> t"} for terms, and
-  @{text "\<NOTE> b = thm"} for theorems.  Types are treated
+  There are separate elements \<open>\<DEFINE> c \<equiv> t\<close> for terms, and
+  \<open>\<NOTE> b = thm\<close> for theorems.  Types are treated
   implicitly, according to Hindley-Milner discipline (cf.\
   \secref{sec:variables}).  These definitional primitives essentially
-  act like @{text "let"}-bindings within a local context that may
-  already contain earlier @{text "let"}-bindings and some initial
-  @{text "\<lambda>"}-bindings.  Thus we gain \<^emph>\<open>dependent definitions\<close>
+  act like \<open>let\<close>-bindings within a local context that may
+  already contain earlier \<open>let\<close>-bindings and some initial
+  \<open>\<lambda>\<close>-bindings.  Thus we gain \<^emph>\<open>dependent definitions\<close>
   that are relative to an initial axiomatic context.  The following
   diagram illustrates this idea of axiomatic elements versus
   definitional elements:
@@ -53,30 +53,29 @@
   \begin{center}
   \begin{tabular}{|l|l|l|}
   \hline
-  & @{text "\<lambda>"}-binding & @{text "let"}-binding \\
+  & \<open>\<lambda>\<close>-binding & \<open>let\<close>-binding \\
   \hline
-  types & fixed @{text "\<alpha>"} & arbitrary @{text "\<beta>"} \\
-  terms & @{text "\<FIX> x :: \<tau>"} & @{text "\<DEFINE> c \<equiv> t"} \\
-  theorems & @{text "\<ASSUME> a: A"} & @{text "\<NOTE> b = \<^BG>B\<^EN>"} \\
+  types & fixed \<open>\<alpha>\<close> & arbitrary \<open>\<beta>\<close> \\
+  terms & \<open>\<FIX> x :: \<tau>\<close> & \<open>\<DEFINE> c \<equiv> t\<close> \\
+  theorems & \<open>\<ASSUME> a: A\<close> & \<open>\<NOTE> b = \<^BG>B\<^EN>\<close> \\
   \hline
   \end{tabular}
   \end{center}
 
-  A user package merely needs to produce suitable @{text "\<DEFINE>"}
-  and @{text "\<NOTE>"} elements according to the application.  For
-  example, a package for inductive definitions might first @{text
-  "\<DEFINE>"} a certain predicate as some fixed-point construction,
-  then @{text "\<NOTE>"} a proven result about monotonicity of the
+  A user package merely needs to produce suitable \<open>\<DEFINE>\<close>
+  and \<open>\<NOTE>\<close> elements according to the application.  For
+  example, a package for inductive definitions might first \<open>\<DEFINE>\<close> a certain predicate as some fixed-point construction,
+  then \<open>\<NOTE>\<close> a proven result about monotonicity of the
   functor involved here, and then produce further derived concepts via
-  additional @{text "\<DEFINE>"} and @{text "\<NOTE>"} elements.
+  additional \<open>\<DEFINE>\<close> and \<open>\<NOTE>\<close> elements.
 
-  The cumulative sequence of @{text "\<DEFINE>"} and @{text "\<NOTE>"}
+  The cumulative sequence of \<open>\<DEFINE>\<close> and \<open>\<NOTE>\<close>
   produced at package runtime is managed by the local theory
   infrastructure by means of an \<^emph>\<open>auxiliary context\<close>.  Thus the
   system holds up the impression of working within a fully abstract
-  situation with hypothetical entities: @{text "\<DEFINE> c \<equiv> t"}
-  always results in a literal fact @{text "\<^BG>c \<equiv> t\<^EN>"}, where
-  @{text "c"} is a fixed variable @{text "c"}.  The details about
+  situation with hypothetical entities: \<open>\<DEFINE> c \<equiv> t\<close>
+  always results in a literal fact \<open>\<^BG>c \<equiv> t\<^EN>\<close>, where
+  \<open>c\<close> is a fixed variable.  The details about
   global constants, name spaces etc. are handled internally.
 
   So the general structure of a local theory is a sandwich of three
@@ -88,8 +87,7 @@
 
   When a definitional package is finished, the auxiliary context is
   reset to the target context.  The target now holds definitions for
-  terms and theorems that stem from the hypothetical @{text
-  "\<DEFINE>"} and @{text "\<NOTE>"} elements, transformed by the
+  terms and theorems that stem from the hypothetical \<open>\<DEFINE>\<close> and \<open>\<NOTE>\<close> elements, transformed by the
   particular target policy (see @{cite \<open>\S4--5\<close> "Haftmann-Wenzel:2009"}
   for details).\<close>
 
@@ -107,11 +105,11 @@
   Although this is merely an alias for @{ML_type Proof.context}, it is
   semantically a subtype of the same: a @{ML_type local_theory} holds
   target information as special context data.  Subtyping means that
-  any value @{text "lthy:"}~@{ML_type local_theory} can be also used
-  with operations on expecting a regular @{text "ctxt:"}~@{ML_type
+  any value \<open>lthy:\<close>~@{ML_type local_theory} can also be used
+  with operations expecting a regular \<open>ctxt:\<close>~@{ML_type
   Proof.context}.
 
-  \<^descr> @{ML Named_Target.init}~@{text "before_exit name thy"}
+  \<^descr> @{ML Named_Target.init}~\<open>before_exit name thy\<close>
   initializes a local theory derived from the given background theory.
   An empty name refers to a \<^emph>\<open>global theory\<close> context, and a
   non-empty name refers to a @{command locale} or @{command class}
@@ -119,9 +117,9 @@
   useful for experimentation --- normally the Isar toplevel already
   takes care to initialize the local theory context.
 
-  \<^descr> @{ML Local_Theory.define}~@{text "((b, mx), (a, rhs))
-  lthy"} defines a local entity according to the specification that is
-  given relatively to the current @{text "lthy"} context.  In
+  \<^descr> @{ML Local_Theory.define}~\<open>((b, mx), (a, rhs))
+  lthy\<close> defines a local entity according to the specification that is
+  given relative to the current \<open>lthy\<close> context.  In
   particular the term of the RHS may refer to earlier local entities
   from the auxiliary context, or hypothetical parameters from the
   target context.  The result is the newly defined term (which is
@@ -130,7 +128,7 @@
   definition as a hypothetical fact.
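+
+  For example (a minimal sketch; the binding and right-hand side are
+  hypothetical): @{ML \<open>Local_Theory.define
+    ((Binding.name "c", NoSyn), ((Binding.empty, []), @{term True}))\<close>}
+  maps a local theory to the defined term together with the defining
+  fact \<open>c_def\<close>.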
 
   Unless an explicit name binding is given for the RHS, the resulting
-  fact will be called @{text "b_def"}.  Any given attributes are
+  fact will be called \<open>b_def\<close>.  Any given attributes are
   applied to that same fact --- immediately in the auxiliary context
   \<^emph>\<open>and\<close> in any transformed versions stemming from target-specific
   policies or any later interpretations of results from the target
@@ -139,7 +137,7 @@
   declarations such as @{attribute simp}, while non-trivial rules like
   @{attribute simplified} are better avoided.
 
-  \<^descr> @{ML Local_Theory.note}~@{text "(a, ths) lthy"} is
+  \<^descr> @{ML Local_Theory.note}~\<open>(a, ths) lthy\<close> is
   analogous to @{ML Local_Theory.define}, but defines facts instead of
   terms.  There is also a slightly more general variant @{ML
   Local_Theory.notes} that defines several facts (with attribute
--- a/src/Doc/Implementation/Logic.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Logic.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -7,23 +7,20 @@
 text \<open>
   The logical foundations of Isabelle/Isar are those of the Pure logic,
   which has been introduced as a Natural Deduction framework in
-  @{cite paulson700}.  This is essentially the same logic as ``@{text
-  "\<lambda>HOL"}'' in the more abstract setting of Pure Type Systems (PTS)
+  @{cite paulson700}.  This is essentially the same logic as ``\<open>\<lambda>HOL\<close>'' in the more abstract setting of Pure Type Systems (PTS)
   @{cite "Barendregt-Geuvers:2001"}, although there are some key
   differences in the specific treatment of simple types in
   Isabelle/Pure.
 
   Following type-theoretic parlance, the Pure logic consists of three
-  levels of @{text "\<lambda>"}-calculus with corresponding arrows, @{text
-  "\<Rightarrow>"} for syntactic function space (terms depending on terms), @{text
-  "\<And>"} for universal quantification (proofs depending on terms), and
-  @{text "\<Longrightarrow>"} for implication (proofs depending on proofs).
+  levels of \<open>\<lambda>\<close>-calculus with corresponding arrows, \<open>\<Rightarrow>\<close> for syntactic function space (terms depending on terms), \<open>\<And>\<close> for universal quantification (proofs depending on terms), and
+  \<open>\<Longrightarrow>\<close> for implication (proofs depending on proofs).
 
   Derivations are relative to a logical theory, which declares type
   constructors, constants, and axioms.  Theory declarations support
   schematic polymorphism, which is strictly speaking outside the
   logic.\footnote{This is the deeper logical reason why the theory
-  context @{text "\<Theta>"} is separate from the proof context @{text "\<Gamma>"}
+  context \<open>\<Theta>\<close> is separate from the proof context \<open>\<Gamma>\<close>
   of the core calculus: type constructors, term constants, and facts
   (proof constants) may involve arbitrary type schemes, but the type
   of a locally fixed term parameter is also fixed!}
@@ -38,31 +35,29 @@
 
   \<^medskip>
   A \<^emph>\<open>type class\<close> is an abstract syntactic entity
-  declared in the theory context.  The \<^emph>\<open>subclass relation\<close> @{text
-  "c\<^sub>1 \<subseteq> c\<^sub>2"} is specified by stating an acyclic
+  declared in the theory context.  The \<^emph>\<open>subclass relation\<close> \<open>c\<^sub>1 \<subseteq> c\<^sub>2\<close> is specified by stating an acyclic
   generating relation; the transitive closure is maintained
   internally.  The resulting relation is an ordering: reflexive,
   transitive, and antisymmetric.
 
-  A \<^emph>\<open>sort\<close> is a list of type classes written as @{text "s = {c\<^sub>1,
-  \<dots>, c\<^sub>m}"}, it represents symbolic intersection.  Notationally, the
+  A \<^emph>\<open>sort\<close> is a list of type classes written as \<open>s = {c\<^sub>1,
+  \<dots>, c\<^sub>m}\<close>; it represents symbolic intersection.  Notationally, the
   curly braces are omitted for singleton intersections, i.e.\ any
-  class @{text "c"} may be read as a sort @{text "{c}"}.  The ordering
+  class \<open>c\<close> may be read as a sort \<open>{c}\<close>.  The ordering
   on type classes is extended to sorts according to the meaning of
-  intersections: @{text "{c\<^sub>1, \<dots> c\<^sub>m} \<subseteq> {d\<^sub>1, \<dots>, d\<^sub>n}"} iff @{text
-  "\<forall>j. \<exists>i. c\<^sub>i \<subseteq> d\<^sub>j"}.  The empty intersection @{text "{}"} refers to
+  intersections: \<open>{c\<^sub>1, \<dots>, c\<^sub>m} \<subseteq> {d\<^sub>1, \<dots>, d\<^sub>n}\<close> iff \<open>\<forall>j. \<exists>i. c\<^sub>i \<subseteq> d\<^sub>j\<close>.  The empty intersection \<open>{}\<close> refers to
   the universal sort, which is the largest element wrt.\ the sort
-  order.  Thus @{text "{}"} represents the ``full sort'', not the
+  order.  Thus \<open>{}\<close> represents the ``full sort'', not the
   empty one!  The intersection of all (finitely many) classes declared
   in the current theory is the least element wrt.\ the sort ordering.
 
   \<^medskip>
   A \<^emph>\<open>fixed type variable\<close> is a pair of a basic name
-  (starting with a @{text "'"} character) and a sort constraint, e.g.\
-  @{text "('a, s)"} which is usually printed as @{text "\<alpha>\<^sub>s"}.
+  (starting with a \<open>'\<close> character) and a sort constraint, e.g.\
+  \<open>('a, s)\<close> which is usually printed as \<open>\<alpha>\<^sub>s\<close>.
   A \<^emph>\<open>schematic type variable\<close> is a pair of an indexname and a
-  sort constraint, e.g.\ @{text "(('a, 0), s)"} which is usually
-  printed as @{text "?\<alpha>\<^sub>s"}.
+  sort constraint, e.g.\ \<open>(('a, 0), s)\<close> which is usually
+  printed as \<open>?\<alpha>\<^sub>s\<close>.
 
   Note that \<^emph>\<open>all\<close> syntactic components contribute to the identity
   of type variables: basic name, index, and sort constraint.  The core
@@ -70,49 +65,47 @@
   as different, although the type-inference layer (which is outside
   the core) rejects anything like that.
 
-  A \<^emph>\<open>type constructor\<close> @{text "\<kappa>"} is a @{text "k"}-ary operator
+  A \<^emph>\<open>type constructor\<close> \<open>\<kappa>\<close> is a \<open>k\<close>-ary operator
   on types declared in the theory.  Type constructor application is
-  written postfix as @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>k)\<kappa>"}.  For
-  @{text "k = 0"} the argument tuple is omitted, e.g.\ @{text "prop"}
-  instead of @{text "()prop"}.  For @{text "k = 1"} the parentheses
-  are omitted, e.g.\ @{text "\<alpha> list"} instead of @{text "(\<alpha>)list"}.
+  written postfix as \<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>k)\<kappa>\<close>.  For
+  \<open>k = 0\<close> the argument tuple is omitted, e.g.\ \<open>prop\<close>
+  instead of \<open>()prop\<close>.  For \<open>k = 1\<close> the parentheses
+  are omitted, e.g.\ \<open>\<alpha> list\<close> instead of \<open>(\<alpha>)list\<close>.
   Further notation is provided for specific constructors, notably the
-  right-associative infix @{text "\<alpha> \<Rightarrow> \<beta>"} instead of @{text "(\<alpha>,
-  \<beta>)fun"}.
+  right-associative infix \<open>\<alpha> \<Rightarrow> \<beta>\<close> instead of \<open>(\<alpha>,
+  \<beta>)fun\<close>.
   
   The logical category \<^emph>\<open>type\<close> is defined inductively over type
-  variables and type constructors as follows: @{text "\<tau> = \<alpha>\<^sub>s | ?\<alpha>\<^sub>s |
-  (\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>k)\<kappa>"}.
+  variables and type constructors as follows: \<open>\<tau> = \<alpha>\<^sub>s | ?\<alpha>\<^sub>s |
+  (\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>k)\<kappa>\<close>.
 
-  A \<^emph>\<open>type abbreviation\<close> is a syntactic definition @{text
-  "(\<^vec>\<alpha>)\<kappa> = \<tau>"} of an arbitrary type expression @{text "\<tau>"} over
-  variables @{text "\<^vec>\<alpha>"}.  Type abbreviations appear as type
+  A \<^emph>\<open>type abbreviation\<close> is a syntactic definition \<open>(\<^vec>\<alpha>)\<kappa> = \<tau>\<close> of an arbitrary type expression \<open>\<tau>\<close> over
+  variables \<open>\<^vec>\<alpha>\<close>.  Type abbreviations appear as type
   constructors in the syntax, but are expanded before entering the
   logical core.
 
   A \<^emph>\<open>type arity\<close> declares the image behavior of a type
-  constructor wrt.\ the algebra of sorts: @{text "\<kappa> :: (s\<^sub>1, \<dots>,
-  s\<^sub>k)s"} means that @{text "(\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>k)\<kappa>"} is
-  of sort @{text "s"} if every argument type @{text "\<tau>\<^sub>i"} is
-  of sort @{text "s\<^sub>i"}.  Arity declarations are implicitly
-  completed, i.e.\ @{text "\<kappa> :: (\<^vec>s)c"} entails @{text "\<kappa> ::
-  (\<^vec>s)c'"} for any @{text "c' \<supseteq> c"}.
+  constructor wrt.\ the algebra of sorts: \<open>\<kappa> :: (s\<^sub>1, \<dots>,
+  s\<^sub>k)s\<close> means that \<open>(\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>k)\<kappa>\<close> is
+  of sort \<open>s\<close> if every argument type \<open>\<tau>\<^sub>i\<close> is
+  of sort \<open>s\<^sub>i\<close>.  Arity declarations are implicitly
+  completed, i.e.\ \<open>\<kappa> :: (\<^vec>s)c\<close> entails \<open>\<kappa> ::
+  (\<^vec>s)c'\<close> for any \<open>c' \<supseteq> c\<close>.
 
   \<^medskip>
   The sort algebra is always maintained as \<^emph>\<open>coregular\<close>,
   which means that type arities are consistent with the subclass
-  relation: for any type constructor @{text "\<kappa>"}, and classes @{text
-  "c\<^sub>1 \<subseteq> c\<^sub>2"}, and arities @{text "\<kappa> ::
-  (\<^vec>s\<^sub>1)c\<^sub>1"} and @{text "\<kappa> ::
-  (\<^vec>s\<^sub>2)c\<^sub>2"} holds @{text "\<^vec>s\<^sub>1 \<subseteq>
-  \<^vec>s\<^sub>2"} component-wise.
+  relation: for any type constructor \<open>\<kappa>\<close>, classes \<open>c\<^sub>1 \<subseteq> c\<^sub>2\<close>, and arities \<open>\<kappa> ::
+  (\<^vec>s\<^sub>1)c\<^sub>1\<close> and \<open>\<kappa> ::
+  (\<^vec>s\<^sub>2)c\<^sub>2\<close>, the inclusion \<open>\<^vec>s\<^sub>1 \<subseteq>
+  \<^vec>s\<^sub>2\<close> holds component-wise.
 
   The key property of a coregular order-sorted algebra is that sort
   constraints can be solved in a most general fashion: for each type
-  constructor @{text "\<kappa>"} and sort @{text "s"} there is a most general
-  vector of argument sorts @{text "(s\<^sub>1, \<dots>, s\<^sub>k)"} such
-  that a type scheme @{text "(\<alpha>\<^bsub>s\<^sub>1\<^esub>, \<dots>,
-  \<alpha>\<^bsub>s\<^sub>k\<^esub>)\<kappa>"} is of sort @{text "s"}.
+  constructor \<open>\<kappa>\<close> and sort \<open>s\<close> there is a most general
+  vector of argument sorts \<open>(s\<^sub>1, \<dots>, s\<^sub>k)\<close> such
+  that a type scheme \<open>(\<alpha>\<^bsub>s\<^sub>1\<^esub>, \<dots>,
+  \<alpha>\<^bsub>s\<^sub>k\<^esub>)\<kappa>\<close> is of sort \<open>s\<close>.
   Consequently, type unification has most general solutions (modulo
   equivalence of sorts), so type-inference produces primary types as
   expected @{cite "nipkow-prehofer"}.
@@ -145,54 +138,53 @@
   the empty class intersection, i.e.\ the ``full sort''.
 
   \<^descr> Type @{ML_type arity} represents type arities.  A triple
-  @{text "(\<kappa>, \<^vec>s, s) : arity"} represents @{text "\<kappa> ::
-  (\<^vec>s)s"} as described above.
+  \<open>(\<kappa>, \<^vec>s, s) : arity\<close> represents \<open>\<kappa> ::
+  (\<^vec>s)s\<close> as described above.
 
   \<^descr> Type @{ML_type typ} represents types; this is a datatype with
   constructors @{ML TFree}, @{ML TVar}, @{ML Type}.
 
-  \<^descr> @{ML Term.map_atyps}~@{text "f \<tau>"} applies the mapping @{text
-  "f"} to all atomic types (@{ML TFree}, @{ML TVar}) occurring in
-  @{text "\<tau>"}.
+  \<^descr> @{ML Term.map_atyps}~\<open>f \<tau>\<close> applies the mapping \<open>f\<close> to all atomic types (@{ML TFree}, @{ML TVar}) occurring in
+  \<open>\<tau>\<close>.
 
-  \<^descr> @{ML Term.fold_atyps}~@{text "f \<tau>"} iterates the operation
-  @{text "f"} over all occurrences of atomic types (@{ML TFree}, @{ML
-  TVar}) in @{text "\<tau>"}; the type structure is traversed from left to
+  \<^descr> @{ML Term.fold_atyps}~\<open>f \<tau>\<close> iterates the operation
+  \<open>f\<close> over all occurrences of atomic types (@{ML TFree}, @{ML
+  TVar}) in \<open>\<tau>\<close>; the type structure is traversed from left to
   right.
 
-  \<^descr> @{ML Sign.subsort}~@{text "thy (s\<^sub>1, s\<^sub>2)"}
-  tests the subsort relation @{text "s\<^sub>1 \<subseteq> s\<^sub>2"}.
+  \<^descr> @{ML Sign.subsort}~\<open>thy (s\<^sub>1, s\<^sub>2)\<close>
+  tests the subsort relation \<open>s\<^sub>1 \<subseteq> s\<^sub>2\<close>.
 
-  \<^descr> @{ML Sign.of_sort}~@{text "thy (\<tau>, s)"} tests whether type
-  @{text "\<tau>"} is of sort @{text "s"}.
+  \<^descr> @{ML Sign.of_sort}~\<open>thy (\<tau>, s)\<close> tests whether type
+  \<open>\<tau>\<close> is of sort \<open>s\<close>.
 
-  \<^descr> @{ML Sign.add_type}~@{text "ctxt (\<kappa>, k, mx)"} declares a
-  new type constructors @{text "\<kappa>"} with @{text "k"} arguments and
+  \<^descr> @{ML Sign.add_type}~\<open>ctxt (\<kappa>, k, mx)\<close> declares a
+  new type constructor \<open>\<kappa>\<close> with \<open>k\<close> arguments and
   optional mixfix syntax.
 
-  \<^descr> @{ML Sign.add_type_abbrev}~@{text "ctxt (\<kappa>, \<^vec>\<alpha>, \<tau>)"}
-  defines a new type abbreviation @{text "(\<^vec>\<alpha>)\<kappa> = \<tau>"}.
+  \<^descr> @{ML Sign.add_type_abbrev}~\<open>ctxt (\<kappa>, \<^vec>\<alpha>, \<tau>)\<close>
+  defines a new type abbreviation \<open>(\<^vec>\<alpha>)\<kappa> = \<tau>\<close>.
 
-  \<^descr> @{ML Sign.primitive_class}~@{text "(c, [c\<^sub>1, \<dots>,
-  c\<^sub>n])"} declares a new class @{text "c"}, together with class
-  relations @{text "c \<subseteq> c\<^sub>i"}, for @{text "i = 1, \<dots>, n"}.
+  \<^descr> @{ML Sign.primitive_class}~\<open>(c, [c\<^sub>1, \<dots>,
+  c\<^sub>n])\<close> declares a new class \<open>c\<close>, together with class
+  relations \<open>c \<subseteq> c\<^sub>i\<close>, for \<open>i = 1, \<dots>, n\<close>.
 
-  \<^descr> @{ML Sign.primitive_classrel}~@{text "(c\<^sub>1,
-  c\<^sub>2)"} declares the class relation @{text "c\<^sub>1 \<subseteq>
-  c\<^sub>2"}.
+  \<^descr> @{ML Sign.primitive_classrel}~\<open>(c\<^sub>1,
+  c\<^sub>2)\<close> declares the class relation \<open>c\<^sub>1 \<subseteq>
+  c\<^sub>2\<close>.
 
-  \<^descr> @{ML Sign.primitive_arity}~@{text "(\<kappa>, \<^vec>s, s)"} declares
-  the arity @{text "\<kappa> :: (\<^vec>s)s"}.
+  \<^descr> @{ML Sign.primitive_arity}~\<open>(\<kappa>, \<^vec>s, s)\<close> declares
+  the arity \<open>\<kappa> :: (\<^vec>s)s\<close>.
 \<close>
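+
+text \<open>
+  A small sketch of these operations (assuming a HOL-based context;
+  the values are purely illustrative):
+\<close>
+
+ML_val \<open>
+  (*replace atomic types --- @{ML TFree}, @{ML TVar} --- by nat*)
+  val T = Term.map_atyps (fn _ => @{typ nat}) @{typ "'a list \<Rightarrow> bool"};
+
+  (*collect the names of all fixed type variables*)
+  val vs = Term.fold_atyps (fn TFree (a, _) => cons a | _ => I)
+    @{typ "'a \<Rightarrow> 'b"} [];
+
+  (*sort relations in the current theory*)
+  val b1 = Sign.subsort @{theory} (@{sort type}, @{sort type});
+  val b2 = Sign.of_sort @{theory} (@{typ nat}, @{sort type});
+\<close>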
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "class"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "sort"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "type_name"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "type_abbrev"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "nonterminal"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "typ"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "class"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "sort"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "type_name"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "type_abbrev"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "nonterminal"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "typ"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -207,23 +199,22 @@
   @@{ML_antiquotation typ} type
   \<close>}
 
-  \<^descr> @{text "@{class c}"} inlines the internalized class @{text
-  "c"} --- as @{ML_type string} literal.
+  \<^descr> \<open>@{class c}\<close> inlines the internalized class \<open>c\<close> --- as @{ML_type string} literal.
 
-  \<^descr> @{text "@{sort s}"} inlines the internalized sort @{text "s"}
+  \<^descr> \<open>@{sort s}\<close> inlines the internalized sort \<open>s\<close>
   --- as @{ML_type "string list"} literal.
 
-  \<^descr> @{text "@{type_name c}"} inlines the internalized type
-  constructor @{text "c"} --- as @{ML_type string} literal.
+  \<^descr> \<open>@{type_name c}\<close> inlines the internalized type
+  constructor \<open>c\<close> --- as @{ML_type string} literal.
 
-  \<^descr> @{text "@{type_abbrev c}"} inlines the internalized type
-  abbreviation @{text "c"} --- as @{ML_type string} literal.
+  \<^descr> \<open>@{type_abbrev c}\<close> inlines the internalized type
+  abbreviation \<open>c\<close> --- as @{ML_type string} literal.
 
-  \<^descr> @{text "@{nonterminal c}"} inlines the internalized syntactic
-  type~/ grammar nonterminal @{text "c"} --- as @{ML_type string}
+  \<^descr> \<open>@{nonterminal c}\<close> inlines the internalized syntactic
+  type~/ grammar nonterminal \<open>c\<close> --- as @{ML_type string}
   literal.
 
-  \<^descr> @{text "@{typ \<tau>}"} inlines the internalized type @{text "\<tau>"}
+  \<^descr> \<open>@{typ \<tau>}\<close> inlines the internalized type \<open>\<tau>\<close>
   --- as constructor term for datatype @{ML_type typ}.
 \<close>
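+
+text \<open>
+  For example (a small sketch, assuming a HOL-based context):
+\<close>
+
+ML_val \<open>
+  val c = @{type_name list};         (*internalized constructor name*)
+  val s = @{sort type};              (*the default sort, as string list*)
+  val T = @{typ "'a list \<Rightarrow> bool"};  (*constructor term of type typ*)
+\<close>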
 
@@ -231,18 +222,18 @@
 section \<open>Terms \label{sec:terms}\<close>
 
 text \<open>
-  The language of terms is that of simply-typed @{text "\<lambda>"}-calculus
+  The language of terms is that of simply-typed \<open>\<lambda>\<close>-calculus
   with de-Bruijn indices for bound variables (cf.\ @{cite debruijn72}
   or @{cite "paulson-ml2"}), with the types being determined by the
   corresponding binders.  In contrast, free variables and constants
   have an explicit name and type in each occurrence.
 
   \<^medskip>
-  A \<^emph>\<open>bound variable\<close> is a natural number @{text "b"},
+  A \<^emph>\<open>bound variable\<close> is a natural number \<open>b\<close>,
   which accounts for the number of intermediate binders between the
   variable occurrence in the body and its binding position.  For
-  example, the de-Bruijn term @{text "\<lambda>\<^bsub>bool\<^esub>. \<lambda>\<^bsub>bool\<^esub>. 1 \<and> 0"} would
-  correspond to @{text "\<lambda>x\<^bsub>bool\<^esub>. \<lambda>y\<^bsub>bool\<^esub>. x \<and> y"} in a named
+  example, the de-Bruijn term \<open>\<lambda>\<^bsub>bool\<^esub>. \<lambda>\<^bsub>bool\<^esub>. 1 \<and> 0\<close> would
+  correspond to \<open>\<lambda>x\<^bsub>bool\<^esub>. \<lambda>y\<^bsub>bool\<^esub>. x \<and> y\<close> in a named
   representation.  Note that a bound variable may be represented by
   different de-Bruijn indices at different occurrences, depending on
   the nesting of abstractions.
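+
+  For example (a small sketch in a HOL context), the named term
+  \<open>\<lambda>x\<^bsub>bool\<^esub>. \<lambda>y\<^bsub>bool\<^esub>. x \<and> y\<close> corresponds to the internal value
+  @{ML \<open>Abs ("x", @{typ bool},
+    Abs ("y", @{typ bool}, HOLogic.mk_conj (Bound 1, Bound 0)))\<close>}.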
@@ -254,31 +245,29 @@
   without any loose variables.
 
   A \<^emph>\<open>fixed variable\<close> is a pair of a basic name and a type, e.g.\
-  @{text "(x, \<tau>)"} which is usually printed @{text "x\<^sub>\<tau>"} here.  A
+  \<open>(x, \<tau>)\<close> which is usually printed \<open>x\<^sub>\<tau>\<close> here.  A
   \<^emph>\<open>schematic variable\<close> is a pair of an indexname and a type,
-  e.g.\ @{text "((x, 0), \<tau>)"} which is likewise printed as @{text
-  "?x\<^sub>\<tau>"}.
+  e.g.\ \<open>((x, 0), \<tau>)\<close> which is likewise printed as \<open>?x\<^sub>\<tau>\<close>.
 
   \<^medskip>
   A \<^emph>\<open>constant\<close> is a pair of a basic name and a type,
-  e.g.\ @{text "(c, \<tau>)"} which is usually printed as @{text "c\<^sub>\<tau>"}
+  e.g.\ \<open>(c, \<tau>)\<close> which is usually printed as \<open>c\<^sub>\<tau>\<close>
   here.  Constants are declared in the context as polymorphic families
-  @{text "c :: \<sigma>"}, meaning that all substitution instances @{text
-  "c\<^sub>\<tau>"} for @{text "\<tau> = \<sigma>\<vartheta>"} are valid.
+  \<open>c :: \<sigma>\<close>, meaning that all substitution instances \<open>c\<^sub>\<tau>\<close> for \<open>\<tau> = \<sigma>\<vartheta>\<close> are valid.
 
-  The vector of \<^emph>\<open>type arguments\<close> of constant @{text "c\<^sub>\<tau>"} wrt.\
-  the declaration @{text "c :: \<sigma>"} is defined as the codomain of the
-  matcher @{text "\<vartheta> = {?\<alpha>\<^sub>1 \<mapsto> \<tau>\<^sub>1, \<dots>, ?\<alpha>\<^sub>n \<mapsto> \<tau>\<^sub>n}"} presented in
-  canonical order @{text "(\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>n)"}, corresponding to the
-  left-to-right occurrences of the @{text "\<alpha>\<^sub>i"} in @{text "\<sigma>"}.
+  The vector of \<^emph>\<open>type arguments\<close> of constant \<open>c\<^sub>\<tau>\<close> wrt.\
+  the declaration \<open>c :: \<sigma>\<close> is defined as the codomain of the
+  matcher \<open>\<vartheta> = {?\<alpha>\<^sub>1 \<mapsto> \<tau>\<^sub>1, \<dots>, ?\<alpha>\<^sub>n \<mapsto> \<tau>\<^sub>n}\<close> presented in
+  canonical order \<open>(\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>n)\<close>, corresponding to the
+  left-to-right occurrences of the \<open>\<alpha>\<^sub>i\<close> in \<open>\<sigma>\<close>.
   Within a given theory context, there is a one-to-one correspondence
-  between any constant @{text "c\<^sub>\<tau>"} and the application @{text "c(\<tau>\<^sub>1,
-  \<dots>, \<tau>\<^sub>n)"} of its type arguments.  For example, with @{text "plus :: \<alpha>
-  \<Rightarrow> \<alpha> \<Rightarrow> \<alpha>"}, the instance @{text "plus\<^bsub>nat \<Rightarrow> nat \<Rightarrow> nat\<^esub>"} corresponds to
-  @{text "plus(nat)"}.
+  between any constant \<open>c\<^sub>\<tau>\<close> and the application \<open>c(\<tau>\<^sub>1,
+  \<dots>, \<tau>\<^sub>n)\<close> of its type arguments.  For example, with \<open>plus :: \<alpha>
+  \<Rightarrow> \<alpha> \<Rightarrow> \<alpha>\<close>, the instance \<open>plus\<^bsub>nat \<Rightarrow> nat \<Rightarrow> nat\<^esub>\<close> corresponds to
+  \<open>plus(nat)\<close>.
 
-  Constant declarations @{text "c :: \<sigma>"} may contain sort constraints
-  for type variables in @{text "\<sigma>"}.  These are observed by
+  Constant declarations \<open>c :: \<sigma>\<close> may contain sort constraints
+  for type variables in \<open>\<sigma>\<close>.  These are observed by
   type-inference as expected, but \<^emph>\<open>ignored\<close> by the core logic.
   This means the primitive logic is able to reason with instances of
   polymorphic constants that the user-level type-checker would reject
@@ -287,21 +276,21 @@
   \<^medskip>
   An \<^emph>\<open>atomic term\<close> is either a variable or constant.
   The logical category \<^emph>\<open>term\<close> is defined inductively over atomic
-  terms, with abstraction and application as follows: @{text "t = b |
-  x\<^sub>\<tau> | ?x\<^sub>\<tau> | c\<^sub>\<tau> | \<lambda>\<^sub>\<tau>. t | t\<^sub>1 t\<^sub>2"}.  Parsing and printing takes care of
+  terms, with abstraction and application as follows: \<open>t = b |
+  x\<^sub>\<tau> | ?x\<^sub>\<tau> | c\<^sub>\<tau> | \<lambda>\<^sub>\<tau>. t | t\<^sub>1 t\<^sub>2\<close>.  Parsing and printing takes care of
   converting to and from an external representation with named bound
   variables.  Subsequently, we shall use the latter notation instead
   of the internal de-Bruijn representation.
 
-  The inductive relation @{text "t :: \<tau>"} assigns a (unique) type to a
+  The inductive relation \<open>t :: \<tau>\<close> assigns a (unique) type to a
   term according to the structure of atomic terms, abstractions, and
   applications:
   \[
-  \infer{@{text "a\<^sub>\<tau> :: \<tau>"}}{}
+  \infer{\<open>a\<^sub>\<tau> :: \<tau>\<close>}{}
   \qquad
-  \infer{@{text "(\<lambda>x\<^sub>\<tau>. t) :: \<tau> \<Rightarrow> \<sigma>"}}{@{text "t :: \<sigma>"}}
+  \infer{\<open>(\<lambda>x\<^sub>\<tau>. t) :: \<tau> \<Rightarrow> \<sigma>\<close>}{\<open>t :: \<sigma>\<close>}
   \qquad
-  \infer{@{text "t u :: \<sigma>"}}{@{text "t :: \<tau> \<Rightarrow> \<sigma>"} & @{text "u :: \<tau>"}}
+  \infer{\<open>t u :: \<sigma>\<close>}{\<open>t :: \<tau> \<Rightarrow> \<sigma>\<close> & \<open>u :: \<tau>\<close>}
   \]
   A \<^emph>\<open>well-typed term\<close> is a term that can be typed according to these rules.
 
@@ -312,43 +301,38 @@
   variables, and declarations for polymorphic constants.
 
   The identity of atomic terms consists of both the name and the type
-  component.  This means that different variables @{text
-  "x\<^bsub>\<tau>\<^sub>1\<^esub>"} and @{text "x\<^bsub>\<tau>\<^sub>2\<^esub>"} may become the same after
+  component.  This means that different variables \<open>x\<^bsub>\<tau>\<^sub>1\<^esub>\<close> and \<open>x\<^bsub>\<tau>\<^sub>2\<^esub>\<close> may become the same after
   type instantiation.  Type-inference rejects variables of the same
   name, but different types.  In contrast, mixed instances of
   polymorphic constants occur routinely.
 
   \<^medskip>
-  The \<^emph>\<open>hidden polymorphism\<close> of a term @{text "t :: \<sigma>"}
-  is the set of type variables occurring in @{text "t"}, but not in
-  its type @{text "\<sigma>"}.  This means that the term implicitly depends
+  The \<^emph>\<open>hidden polymorphism\<close> of a term \<open>t :: \<sigma>\<close>
+  is the set of type variables occurring in \<open>t\<close>, but not in
+  its type \<open>\<sigma>\<close>.  This means that the term implicitly depends
   on type arguments that are not accounted for in the result type, i.e.\
-  there are different type instances @{text "t\<vartheta> :: \<sigma>"} and
-  @{text "t\<vartheta>' :: \<sigma>"} with the same type.  This slightly
+  there are different type instances \<open>t\<vartheta> :: \<sigma>\<close> and
+  \<open>t\<vartheta>' :: \<sigma>\<close> with the same type.  This slightly
   pathological situation notoriously demands additional care.
 
   \<^medskip>
-  A \<^emph>\<open>term abbreviation\<close> is a syntactic definition @{text
-  "c\<^sub>\<sigma> \<equiv> t"} of a closed term @{text "t"} of type @{text "\<sigma>"},
+  A \<^emph>\<open>term abbreviation\<close> is a syntactic definition \<open>c\<^sub>\<sigma> \<equiv> t\<close> of a closed term \<open>t\<close> of type \<open>\<sigma>\<close>,
   without any hidden polymorphism.  A term abbreviation looks like a
   constant in the syntax, but is expanded before entering the logical
   core.  Abbreviations are usually reverted when printing terms, using
-  @{text "t \<rightarrow> c\<^sub>\<sigma>"} as rules for higher-order rewriting.
+  \<open>t \<rightarrow> c\<^sub>\<sigma>\<close> as rules for higher-order rewriting.
 
   \<^medskip>
-  Canonical operations on @{text "\<lambda>"}-terms include @{text
-  "\<alpha>\<beta>\<eta>"}-conversion: @{text "\<alpha>"}-conversion refers to capture-free
-  renaming of bound variables; @{text "\<beta>"}-conversion contracts an
+  Canonical operations on \<open>\<lambda>\<close>-terms include \<open>\<alpha>\<beta>\<eta>\<close>-conversion: \<open>\<alpha>\<close>-conversion refers to capture-free
+  renaming of bound variables; \<open>\<beta>\<close>-conversion contracts an
   abstraction applied to an argument term, substituting the argument
-  in the body: @{text "(\<lambda>x. b)a"} becomes @{text "b[a/x]"}; @{text
-  "\<eta>"}-conversion contracts vacuous application-abstraction: @{text
-  "\<lambda>x. f x"} becomes @{text "f"}, provided that the bound variable
-  does not occur in @{text "f"}.
+  in the body: \<open>(\<lambda>x. b)a\<close> becomes \<open>b[a/x]\<close>; \<open>\<eta>\<close>-conversion contracts vacuous application-abstraction: \<open>\<lambda>x. f x\<close> becomes \<open>f\<close>, provided that the bound variable
+  does not occur in \<open>f\<close>.
 
-  Terms are normally treated modulo @{text "\<alpha>"}-conversion, which is
+  Terms are normally treated modulo \<open>\<alpha>\<close>-conversion, which is
   implicit in the de-Bruijn representation.  Names for bound variables
   in abstractions are maintained separately as (meaningless) comments,
-  mostly for parsing and printing.  Full @{text "\<alpha>\<beta>\<eta>"}-conversion is
+  mostly for parsing and printing.  Full \<open>\<alpha>\<beta>\<eta>\<close>-conversion is
   commonplace in various standard operations (\secref{sec:obj-rules})
   that are based on higher-order unification and matching.
 \<close>
@@ -381,64 +365,59 @@
   Free}, @{index_ML Var}, @{index_ML Const}, @{index_ML Abs},
   @{index_ML_op "$"}.
 
-  \<^descr> @{text "t"}~@{ML_text aconv}~@{text "u"} checks @{text
-  "\<alpha>"}-equivalence of two terms.  This is the basic equality relation
+  \<^descr> \<open>t\<close>~@{ML_text aconv}~\<open>u\<close> checks \<open>\<alpha>\<close>-equivalence of two terms.  This is the basic equality relation
   on type @{ML_type term}; raw datatype equality should only be used
   for operations related to parsing or printing!
 
-  \<^descr> @{ML Term.map_types}~@{text "f t"} applies the mapping @{text
-  "f"} to all types occurring in @{text "t"}.
+  \<^descr> @{ML Term.map_types}~\<open>f t\<close> applies the mapping \<open>f\<close> to all types occurring in \<open>t\<close>.
 
-  \<^descr> @{ML Term.fold_types}~@{text "f t"} iterates the operation
-  @{text "f"} over all occurrences of types in @{text "t"}; the term
+  \<^descr> @{ML Term.fold_types}~\<open>f t\<close> iterates the operation
+  \<open>f\<close> over all occurrences of types in \<open>t\<close>; the term
   structure is traversed from left to right.
 
-  \<^descr> @{ML Term.map_aterms}~@{text "f t"} applies the mapping @{text
-  "f"} to all atomic terms (@{ML Bound}, @{ML Free}, @{ML Var}, @{ML
-  Const}) occurring in @{text "t"}.
+  \<^descr> @{ML Term.map_aterms}~\<open>f t\<close> applies the mapping \<open>f\<close> to all atomic terms (@{ML Bound}, @{ML Free}, @{ML Var}, @{ML
+  Const}) occurring in \<open>t\<close>.
 
-  \<^descr> @{ML Term.fold_aterms}~@{text "f t"} iterates the operation
-  @{text "f"} over all occurrences of atomic terms (@{ML Bound}, @{ML
-  Free}, @{ML Var}, @{ML Const}) in @{text "t"}; the term structure is
+  \<^descr> @{ML Term.fold_aterms}~\<open>f t\<close> iterates the operation
+  \<open>f\<close> over all occurrences of atomic terms (@{ML Bound}, @{ML
+  Free}, @{ML Var}, @{ML Const}) in \<open>t\<close>; the term structure is
   traversed from left to right.
 
-  \<^descr> @{ML fastype_of}~@{text "t"} determines the type of a
+  \<^descr> @{ML fastype_of}~\<open>t\<close> determines the type of a
   well-typed term.  This operation is relatively slow, despite the
   omission of any sanity checks.
 
-  \<^descr> @{ML lambda}~@{text "a b"} produces an abstraction @{text
-  "\<lambda>a. b"}, where occurrences of the atomic term @{text "a"} in the
-  body @{text "b"} are replaced by bound variables.
+  \<^descr> @{ML lambda}~\<open>a b\<close> produces an abstraction \<open>\<lambda>a. b\<close>, where occurrences of the atomic term \<open>a\<close> in the
+  body \<open>b\<close> are replaced by bound variables.
 
-  \<^descr> @{ML betapply}~@{text "(t, u)"} produces an application @{text
-  "t u"}, with topmost @{text "\<beta>"}-conversion if @{text "t"} is an
+  \<^descr> @{ML betapply}~\<open>(t, u)\<close> produces an application \<open>t u\<close>, with topmost \<open>\<beta>\<close>-conversion if \<open>t\<close> is an
   abstraction.
 
-  \<^descr> @{ML incr_boundvars}~@{text "j"} increments a term's dangling
-  bound variables by the offset @{text "j"}.  This is required when
+  \<^descr> @{ML incr_boundvars}~\<open>j\<close> increments a term's dangling
+  bound variables by the offset \<open>j\<close>.  This is required when
   moving a subterm into a context where it is enclosed by a different
   number of abstractions.  Bound variables with a matching abstraction
   are unaffected.
 
-  \<^descr> @{ML Sign.declare_const}~@{text "ctxt ((c, \<sigma>), mx)"} declares
-  a new constant @{text "c :: \<sigma>"} with optional mixfix syntax.
+  \<^descr> @{ML Sign.declare_const}~\<open>ctxt ((c, \<sigma>), mx)\<close> declares
+  a new constant \<open>c :: \<sigma>\<close> with optional mixfix syntax.
 
-  \<^descr> @{ML Sign.add_abbrev}~@{text "print_mode (c, t)"}
-  introduces a new term abbreviation @{text "c \<equiv> t"}.
+  \<^descr> @{ML Sign.add_abbrev}~\<open>print_mode (c, t)\<close>
+  introduces a new term abbreviation \<open>c \<equiv> t\<close>.
 
-  \<^descr> @{ML Sign.const_typargs}~@{text "thy (c, \<tau>)"} and @{ML
-  Sign.const_instance}~@{text "thy (c, [\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>n])"}
+  \<^descr> @{ML Sign.const_typargs}~\<open>thy (c, \<tau>)\<close> and @{ML
+  Sign.const_instance}~\<open>thy (c, [\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>n])\<close>
   convert between two representations of polymorphic constants: full
   type instance vs.\ compact type arguments form.
 \<close>
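+
+text \<open>
+  A small sketch of these term operations (assuming a HOL-based
+  context; the names are illustrative):
+\<close>
+
+ML_val \<open>
+  val x = Free ("x", @{typ nat});
+  val t = lambda x (HOLogic.mk_eq (x, x));   (*\<lambda>x. x = x*)
+  val u = betapply (t, @{term "0 :: nat"});  (*0 = 0 after \<beta>-conversion*)
+  val b = t aconv t;                         (*\<alpha>-equivalence check*)
+\<close>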
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "const_name"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "const_abbrev"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "const"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "term"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "prop"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "const_name"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "const_abbrev"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "const"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "term"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "prop"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -452,45 +431,44 @@
   @@{ML_antiquotation prop} prop
   \<close>}
 
-  \<^descr> @{text "@{const_name c}"} inlines the internalized logical
-  constant name @{text "c"} --- as @{ML_type string} literal.
+  \<^descr> \<open>@{const_name c}\<close> inlines the internalized logical
+  constant name \<open>c\<close> --- as @{ML_type string} literal.
 
-  \<^descr> @{text "@{const_abbrev c}"} inlines the internalized
-  abbreviated constant name @{text "c"} --- as @{ML_type string}
+  \<^descr> \<open>@{const_abbrev c}\<close> inlines the internalized
+  abbreviated constant name \<open>c\<close> --- as @{ML_type string}
   literal.
 
-  \<^descr> @{text "@{const c(\<^vec>\<tau>)}"} inlines the internalized
-  constant @{text "c"} with precise type instantiation in the sense of
+  \<^descr> \<open>@{const c(\<^vec>\<tau>)}\<close> inlines the internalized
+  constant \<open>c\<close> with precise type instantiation in the sense of
   @{ML Sign.const_instance} --- as @{ML Const} constructor term for
   datatype @{ML_type term}.
 
-  \<^descr> @{text "@{term t}"} inlines the internalized term @{text "t"}
+  \<^descr> \<open>@{term t}\<close> inlines the internalized term \<open>t\<close>
   --- as constructor term for datatype @{ML_type term}.
 
-  \<^descr> @{text "@{prop \<phi>}"} inlines the internalized proposition
-  @{text "\<phi>"} --- as constructor term for datatype @{ML_type term}.
+  \<^descr> \<open>@{prop \<phi>}\<close> inlines the internalized proposition
+  \<open>\<phi>\<close> --- as constructor term for datatype @{ML_type term}.
 \<close>
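+
+text \<open>
+  For example (a small sketch, in a HOL-based context):
+\<close>
+
+ML_val \<open>
+  val t = @{term "x + y"};            (*constructor term of type term*)
+  val phi = @{prop "x + y = y + x"};  (*proposition of type prop*)
+  val c = @{const_name plus};         (*fully qualified constant name*)
+\<close>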
 
 
 section \<open>Theorems \label{sec:thms}\<close>
 
 text \<open>
-  A \<^emph>\<open>proposition\<close> is a well-typed term of type @{text "prop"}, a
+  A \<^emph>\<open>proposition\<close> is a well-typed term of type \<open>prop\<close>, a
   \<^emph>\<open>theorem\<close> is a proven proposition (depending on a context of
   hypotheses and the background theory).  Primitive inferences include
-  plain Natural Deduction rules for the primary connectives @{text
-  "\<And>"} and @{text "\<Longrightarrow>"} of the framework.  There is also a builtin
-  notion of equality/equivalence @{text "\<equiv>"}.
+  plain Natural Deduction rules for the primary connectives \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> of the framework.  There is also a built-in
+  notion of equality/equivalence \<open>\<equiv>\<close>.
 \<close>
 
 
 subsection \<open>Primitive connectives and rules \label{sec:prim-rules}\<close>
 
 text \<open>
-  The theory @{text "Pure"} contains constant declarations for the
-  primitive connectives @{text "\<And>"}, @{text "\<Longrightarrow>"}, and @{text "\<equiv>"} of
+  The theory \<open>Pure\<close> contains constant declarations for the
+  primitive connectives \<open>\<And>\<close>, \<open>\<Longrightarrow>\<close>, and \<open>\<equiv>\<close> of
   the logical framework, see \figref{fig:pure-connectives}.  The
-  derivability judgment @{text "A\<^sub>1, \<dots>, A\<^sub>n \<turnstile> B"} is
+  derivability judgment \<open>A\<^sub>1, \<dots>, A\<^sub>n \<turnstile> B\<close> is
   defined inductively by the primitive inferences given in
   \figref{fig:prim-rules}, with the global restriction that the
   hypotheses must \<^emph>\<open>not\<close> contain any schematic variables.  The
@@ -501,9 +479,9 @@
   \begin{figure}[htb]
   \begin{center}
   \begin{tabular}{ll}
-  @{text "all :: (\<alpha> \<Rightarrow> prop) \<Rightarrow> prop"} & universal quantification (binder @{text "\<And>"}) \\
-  @{text "\<Longrightarrow> :: prop \<Rightarrow> prop \<Rightarrow> prop"} & implication (right associative infix) \\
-  @{text "\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop"} & equality relation (infix) \\
+  \<open>all :: (\<alpha> \<Rightarrow> prop) \<Rightarrow> prop\<close> & universal quantification (binder \<open>\<And>\<close>) \\
+  \<open>\<Longrightarrow> :: prop \<Rightarrow> prop \<Rightarrow> prop\<close> & implication (right associative infix) \\
+  \<open>\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow> prop\<close> & equality relation (infix) \\
   \end{tabular}
   \caption{Primitive connectives of Pure}\label{fig:pure-connectives}
   \end{center}
@@ -512,19 +490,19 @@
   \begin{figure}[htb]
   \begin{center}
   \[
-  \infer[@{text "(axiom)"}]{@{text "\<turnstile> A"}}{@{text "A \<in> \<Theta>"}}
+  \infer[\<open>(axiom)\<close>]{\<open>\<turnstile> A\<close>}{\<open>A \<in> \<Theta>\<close>}
   \qquad
-  \infer[@{text "(assume)"}]{@{text "A \<turnstile> A"}}{}
+  \infer[\<open>(assume)\<close>]{\<open>A \<turnstile> A\<close>}{}
   \]
   \[
-  \infer[@{text "(\<And>\<hyphen>intro)"}]{@{text "\<Gamma> \<turnstile> \<And>x. B[x]"}}{@{text "\<Gamma> \<turnstile> B[x]"} & @{text "x \<notin> \<Gamma>"}}
+  \infer[\<open>(\<And>\<hyphen>intro)\<close>]{\<open>\<Gamma> \<turnstile> \<And>x. B[x]\<close>}{\<open>\<Gamma> \<turnstile> B[x]\<close> & \<open>x \<notin> \<Gamma>\<close>}
   \qquad
-  \infer[@{text "(\<And>\<hyphen>elim)"}]{@{text "\<Gamma> \<turnstile> B[a]"}}{@{text "\<Gamma> \<turnstile> \<And>x. B[x]"}}
+  \infer[\<open>(\<And>\<hyphen>elim)\<close>]{\<open>\<Gamma> \<turnstile> B[a]\<close>}{\<open>\<Gamma> \<turnstile> \<And>x. B[x]\<close>}
   \]
   \[
-  \infer[@{text "(\<Longrightarrow>\<hyphen>intro)"}]{@{text "\<Gamma> - A \<turnstile> A \<Longrightarrow> B"}}{@{text "\<Gamma> \<turnstile> B"}}
+  \infer[\<open>(\<Longrightarrow>\<hyphen>intro)\<close>]{\<open>\<Gamma> - A \<turnstile> A \<Longrightarrow> B\<close>}{\<open>\<Gamma> \<turnstile> B\<close>}
   \qquad
-  \infer[@{text "(\<Longrightarrow>\<hyphen>elim)"}]{@{text "\<Gamma>\<^sub>1 \<union> \<Gamma>\<^sub>2 \<turnstile> B"}}{@{text "\<Gamma>\<^sub>1 \<turnstile> A \<Longrightarrow> B"} & @{text "\<Gamma>\<^sub>2 \<turnstile> A"}}
+  \infer[\<open>(\<Longrightarrow>\<hyphen>elim)\<close>]{\<open>\<Gamma>\<^sub>1 \<union> \<Gamma>\<^sub>2 \<turnstile> B\<close>}{\<open>\<Gamma>\<^sub>1 \<turnstile> A \<Longrightarrow> B\<close> & \<open>\<Gamma>\<^sub>2 \<turnstile> A\<close>}
   \]
   \caption{Primitive inferences of Pure}\label{fig:prim-rules}
   \end{center}
@@ -533,72 +511,66 @@
   \begin{figure}[htb]
   \begin{center}
   \begin{tabular}{ll}
-  @{text "\<turnstile> (\<lambda>x. b[x]) a \<equiv> b[a]"} & @{text "\<beta>"}-conversion \\
-  @{text "\<turnstile> x \<equiv> x"} & reflexivity \\
-  @{text "\<turnstile> x \<equiv> y \<Longrightarrow> P x \<Longrightarrow> P y"} & substitution \\
-  @{text "\<turnstile> (\<And>x. f x \<equiv> g x) \<Longrightarrow> f \<equiv> g"} & extensionality \\
-  @{text "\<turnstile> (A \<Longrightarrow> B) \<Longrightarrow> (B \<Longrightarrow> A) \<Longrightarrow> A \<equiv> B"} & logical equivalence \\
+  \<open>\<turnstile> (\<lambda>x. b[x]) a \<equiv> b[a]\<close> & \<open>\<beta>\<close>-conversion \\
+  \<open>\<turnstile> x \<equiv> x\<close> & reflexivity \\
+  \<open>\<turnstile> x \<equiv> y \<Longrightarrow> P x \<Longrightarrow> P y\<close> & substitution \\
+  \<open>\<turnstile> (\<And>x. f x \<equiv> g x) \<Longrightarrow> f \<equiv> g\<close> & extensionality \\
+  \<open>\<turnstile> (A \<Longrightarrow> B) \<Longrightarrow> (B \<Longrightarrow> A) \<Longrightarrow> A \<equiv> B\<close> & logical equivalence \\
   \end{tabular}
   \caption{Conceptual axiomatization of Pure equality}\label{fig:pure-equality}
   \end{center}
   \end{figure}
 
-  The introduction and elimination rules for @{text "\<And>"} and @{text
-  "\<Longrightarrow>"} are analogous to formation of dependently typed @{text
-  "\<lambda>"}-terms representing the underlying proof objects.  Proof terms
+  The introduction and elimination rules for \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> are analogous to formation of dependently typed \<open>\<lambda>\<close>-terms representing the underlying proof objects.  Proof terms
   are irrelevant in the Pure logic, though; they cannot occur within
   propositions.  The system provides a runtime option to record
   explicit proof terms for primitive inferences, see also
-  \secref{sec:proof-terms}.  Thus all three levels of @{text
-  "\<lambda>"}-calculus become explicit: @{text "\<Rightarrow>"} for terms, and @{text
-  "\<And>/\<Longrightarrow>"} for proofs (cf.\ @{cite "Berghofer-Nipkow:2000:TPHOL"}).
+  \secref{sec:proof-terms}.  Thus all three levels of \<open>\<lambda>\<close>-calculus become explicit: \<open>\<Rightarrow>\<close> for terms, and \<open>\<And>/\<Longrightarrow>\<close> for proofs (cf.\ @{cite "Berghofer-Nipkow:2000:TPHOL"}).
 
-  Observe that locally fixed parameters (as in @{text
-  "\<And>\<hyphen>intro"}) need not be recorded in the hypotheses, because
+  Observe that locally fixed parameters (as in \<open>\<And>\<hyphen>intro\<close>) need not be recorded in the hypotheses, because
   the simple syntactic types of Pure are always inhabitable.
-  ``Assumptions'' @{text "x :: \<tau>"} for type-membership are only
-  present as long as some @{text "x\<^sub>\<tau>"} occurs in the statement
-  body.\footnote{This is the key difference to ``@{text "\<lambda>HOL"}'' in
+  ``Assumptions'' \<open>x :: \<tau>\<close> for type-membership are only
+  present as long as some \<open>x\<^sub>\<tau>\<close> occurs in the statement
+  body.\footnote{This is the key difference to ``\<open>\<lambda>HOL\<close>'' in
   the PTS framework @{cite "Barendregt-Geuvers:2001"}, where hypotheses
-  @{text "x : A"} are treated uniformly for propositions and types.}
+  \<open>x : A\<close> are treated uniformly for propositions and types.}
 
   \<^medskip>
   The axiomatization of a theory is implicitly closed by
-  forming all instances of type and term variables: @{text "\<turnstile>
-  A\<vartheta>"} holds for any substitution instance of an axiom
-  @{text "\<turnstile> A"}.  By pushing substitutions through derivations
-  inductively, we also get admissible @{text "generalize"} and @{text
-  "instantiate"} rules as shown in \figref{fig:subst-rules}.
+  forming all instances of type and term variables: \<open>\<turnstile>
+  A\<vartheta>\<close> holds for any substitution instance of an axiom
+  \<open>\<turnstile> A\<close>.  By pushing substitutions through derivations
+  inductively, we also get admissible \<open>generalize\<close> and \<open>instantiate\<close> rules as shown in \figref{fig:subst-rules}.
 
   \begin{figure}[htb]
   \begin{center}
   \[
-  \infer{@{text "\<Gamma> \<turnstile> B[?\<alpha>]"}}{@{text "\<Gamma> \<turnstile> B[\<alpha>]"} & @{text "\<alpha> \<notin> \<Gamma>"}}
+  \infer{\<open>\<Gamma> \<turnstile> B[?\<alpha>]\<close>}{\<open>\<Gamma> \<turnstile> B[\<alpha>]\<close> & \<open>\<alpha> \<notin> \<Gamma>\<close>}
   \quad
-  \infer[\quad@{text "(generalize)"}]{@{text "\<Gamma> \<turnstile> B[?x]"}}{@{text "\<Gamma> \<turnstile> B[x]"} & @{text "x \<notin> \<Gamma>"}}
+  \infer[\quad\<open>(generalize)\<close>]{\<open>\<Gamma> \<turnstile> B[?x]\<close>}{\<open>\<Gamma> \<turnstile> B[x]\<close> & \<open>x \<notin> \<Gamma>\<close>}
   \]
   \[
-  \infer{@{text "\<Gamma> \<turnstile> B[\<tau>]"}}{@{text "\<Gamma> \<turnstile> B[?\<alpha>]"}}
+  \infer{\<open>\<Gamma> \<turnstile> B[\<tau>]\<close>}{\<open>\<Gamma> \<turnstile> B[?\<alpha>]\<close>}
   \quad
-  \infer[\quad@{text "(instantiate)"}]{@{text "\<Gamma> \<turnstile> B[t]"}}{@{text "\<Gamma> \<turnstile> B[?x]"}}
+  \infer[\quad\<open>(instantiate)\<close>]{\<open>\<Gamma> \<turnstile> B[t]\<close>}{\<open>\<Gamma> \<turnstile> B[?x]\<close>}
   \]
   \caption{Admissible substitution rules}\label{fig:subst-rules}
   \end{center}
   \end{figure}
 
-  Note that @{text "instantiate"} does not require an explicit
-  side-condition, because @{text "\<Gamma>"} may never contain schematic
+  Note that \<open>instantiate\<close> does not require an explicit
+  side-condition, because \<open>\<Gamma>\<close> may never contain schematic
   variables.
 
   In principle, variables could be substituted in hypotheses as well,
   but this would disrupt the monotonicity of reasoning: deriving
-  @{text "\<Gamma>\<vartheta> \<turnstile> B\<vartheta>"} from @{text "\<Gamma> \<turnstile> B"} is
-  correct, but @{text "\<Gamma>\<vartheta> \<supseteq> \<Gamma>"} does not necessarily hold:
+  \<open>\<Gamma>\<vartheta> \<turnstile> B\<vartheta>\<close> from \<open>\<Gamma> \<turnstile> B\<close> is
+  correct, but \<open>\<Gamma>\<vartheta> \<supseteq> \<Gamma>\<close> does not necessarily hold:
   the result belongs to a different proof context.
 
   \<^medskip>
   An \<^emph>\<open>oracle\<close> is a function that produces axioms on the
-  fly.  Logically, this is an instance of the @{text "axiom"} rule
+  fly.  Logically, this is an instance of the \<open>axiom\<close> rule
   (\figref{fig:prim-rules}), but there is an operational difference.
   The system always records oracle invocations within derivations of
   theorems by a unique tag.
@@ -608,20 +580,16 @@
   Later on, theories are usually developed in a strictly definitional
   fashion, by stating only certain equalities over new constants.
 
-  A \<^emph>\<open>simple definition\<close> consists of a constant declaration @{text
-  "c :: \<sigma>"} together with an axiom @{text "\<turnstile> c \<equiv> t"}, where @{text "t
-  :: \<sigma>"} is a closed term without any hidden polymorphism.  The RHS
-  may depend on further defined constants, but not @{text "c"} itself.
-  Definitions of functions may be presented as @{text "c \<^vec>x \<equiv>
-  t"} instead of the puristic @{text "c \<equiv> \<lambda>\<^vec>x. t"}.
+  A \<^emph>\<open>simple definition\<close> consists of a constant declaration \<open>c :: \<sigma>\<close> together with an axiom \<open>\<turnstile> c \<equiv> t\<close>, where \<open>t
+  :: \<sigma>\<close> is a closed term without any hidden polymorphism.  The RHS
+  may depend on further defined constants, but not \<open>c\<close> itself.
+  Definitions of functions may be presented as \<open>c \<^vec>x \<equiv>
+  t\<close> instead of the puristic \<open>c \<equiv> \<lambda>\<^vec>x. t\<close>.
 
   An \<^emph>\<open>overloaded definition\<close> consists of a collection of axioms
-  for the same constant, with zero or one equations @{text
-  "c((\<^vec>\<alpha>)\<kappa>) \<equiv> t"} for each type constructor @{text "\<kappa>"} (for
-  distinct variables @{text "\<^vec>\<alpha>"}).  The RHS may mention
-  previously defined constants as above, or arbitrary constants @{text
-  "d(\<alpha>\<^sub>i)"} for some @{text "\<alpha>\<^sub>i"} projected from @{text
-  "\<^vec>\<alpha>"}.  Thus overloaded definitions essentially work by
+  for the same constant, with at most one equation \<open>c((\<^vec>\<alpha>)\<kappa>) \<equiv> t\<close> for each type constructor \<open>\<kappa>\<close> (for
+  distinct variables \<open>\<^vec>\<alpha>\<close>).  The RHS may mention
+  previously defined constants as above, or arbitrary constants \<open>d(\<alpha>\<^sub>i)\<close> for some \<open>\<alpha>\<^sub>i\<close> projected from \<open>\<^vec>\<alpha>\<close>.  Thus overloaded definitions essentially work by
   primitive recursion over the syntactic structure of a single type
   argument.  See also @{cite \<open>\S4.3\<close> "Haftmann-Wenzel:2006:classes"}.
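+
+  For example, a hypothetical overloaded constant \<open>f :: \<alpha> \<Rightarrow> \<alpha>\<close> might be
+  specified by separate equations \<open>f(nat) \<equiv> \<dots>\<close> and \<open>f((\<alpha>)list) \<equiv> \<dots>\<close>,
+  where the latter right-hand side may recurse via \<open>f(\<alpha>)\<close> on the
+  element type.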
 \<close>
@@ -665,7 +633,7 @@
   Defs.entry -> Defs.entry list -> theory -> theory"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Thm.peek_status}~@{text "thm"} informs about the current
+  \<^descr> @{ML Thm.peek_status}~\<open>thm\<close> informs about the current
   status of the derivation object behind the given theorem.  This is a
   snapshot of a potentially ongoing (parallel) evaluation of proofs.
   The three Boolean values indicate the following: @{verbatim oracle}
@@ -674,13 +642,13 @@
   failed} if some future proof has failed, rendering the theorem
   invalid!
 
-  \<^descr> @{ML Logic.all}~@{text "a B"} produces a Pure quantification
-  @{text "\<And>a. B"}, where occurrences of the atomic term @{text "a"} in
-  the body proposition @{text "B"} are replaced by bound variables.
+  \<^descr> @{ML Logic.all}~\<open>a B\<close> produces a Pure quantification
+  \<open>\<And>a. B\<close>, where occurrences of the atomic term \<open>a\<close> in
+  the body proposition \<open>B\<close> are replaced by bound variables.
   (See also @{ML lambda} on terms.)
 
-  \<^descr> @{ML Logic.mk_implies}~@{text "(A, B)"} produces a Pure
-  implication @{text "A \<Longrightarrow> B"}.
+  \<^descr> @{ML Logic.mk_implies}~\<open>(A, B)\<close> produces a Pure
+  implication \<open>A \<Longrightarrow> B\<close>.
 
   \<^descr> Types @{ML_type ctyp} and @{ML_type cterm} represent certified
   types and terms, respectively.  These are abstract datatypes that
@@ -693,8 +661,8 @@
   are located in the @{ML_structure Thm} module, even though theorems are
   not yet involved at that stage.
 
-  \<^descr> @{ML Thm.ctyp_of}~@{text "ctxt \<tau>"} and @{ML
-  Thm.cterm_of}~@{text "ctxt t"} explicitly check types and terms,
+  \<^descr> @{ML Thm.ctyp_of}~\<open>ctxt \<tau>\<close> and @{ML
+  Thm.cterm_of}~\<open>ctxt t\<close> explicitly check types and terms,
   respectively.  This also involves some basic normalizations, such
   as expansion of type and term abbreviations from the underlying
   theory context.
@@ -716,7 +684,7 @@
   Every @{ML_type thm} value refers to its background theory,
   cf.\ \secref{sec:context-theory}.
 
-  \<^descr> @{ML Thm.transfer}~@{text "thy thm"} transfers the given
+  \<^descr> @{ML Thm.transfer}~\<open>thy thm\<close> transfers the given
   theorem to a \<^emph>\<open>larger\<close> theory, see also \secref{sec:context}.
   This formal adjustment of the background context has no logical
   significance, but is occasionally required for formal reasons, e.g.\
@@ -727,50 +695,48 @@
   Thm.forall_elim}, @{ML Thm.implies_intr}, and @{ML Thm.implies_elim}
   correspond to the primitive inferences of \figref{fig:prim-rules}.
 
-  \<^descr> @{ML Thm.generalize}~@{text "(\<^vec>\<alpha>, \<^vec>x)"}
-  corresponds to the @{text "generalize"} rules of
+  \<^descr> @{ML Thm.generalize}~\<open>(\<^vec>\<alpha>, \<^vec>x)\<close>
+  corresponds to the \<open>generalize\<close> rules of
   \figref{fig:subst-rules}.  Here collections of type and term
   variables are generalized simultaneously, specified by the given
   basic names.
 
-  \<^descr> @{ML Thm.instantiate}~@{text "(\<^vec>\<alpha>\<^sub>s,
-  \<^vec>x\<^sub>\<tau>)"} corresponds to the @{text "instantiate"} rules
+  \<^descr> @{ML Thm.instantiate}~\<open>(\<^vec>\<alpha>\<^sub>s,
+  \<^vec>x\<^sub>\<tau>)\<close> corresponds to the \<open>instantiate\<close> rules
   of \figref{fig:subst-rules}.  Type variables are substituted before
-  term variables.  Note that the types in @{text "\<^vec>x\<^sub>\<tau>"}
+  term variables.  Note that the types in \<open>\<^vec>x\<^sub>\<tau>\<close>
   refer to the instantiated versions.
 
-  \<^descr> @{ML Thm.add_axiom}~@{text "ctxt (name, A)"} declares an
+  \<^descr> @{ML Thm.add_axiom}~\<open>ctxt (name, A)\<close> declares an
   arbitrary proposition as axiom, and retrieves it as a theorem from
-  the resulting theory, cf.\ @{text "axiom"} in
+  the resulting theory, cf.\ \<open>axiom\<close> in
   \figref{fig:prim-rules}.  Note that the low-level representation in
   the axiom table may differ slightly from the returned theorem.
 
-  \<^descr> @{ML Thm.add_oracle}~@{text "(binding, oracle)"} produces a named
+  \<^descr> @{ML Thm.add_oracle}~\<open>(binding, oracle)\<close> produces a named
   oracle rule, essentially generating arbitrary axioms on the fly,
-  cf.\ @{text "axiom"} in \figref{fig:prim-rules}.
+  cf.\ \<open>axiom\<close> in \figref{fig:prim-rules}.
 
-  \<^descr> @{ML Thm.add_def}~@{text "ctxt unchecked overloaded (name, c
-  \<^vec>x \<equiv> t)"} states a definitional axiom for an existing constant
-  @{text "c"}.  Dependencies are recorded via @{ML Theory.add_deps},
-  unless the @{text "unchecked"} option is set.  Note that the
+  \<^descr> @{ML Thm.add_def}~\<open>ctxt unchecked overloaded (name, c
+  \<^vec>x \<equiv> t)\<close> states a definitional axiom for an existing constant
+  \<open>c\<close>.  Dependencies are recorded via @{ML Theory.add_deps},
+  unless the \<open>unchecked\<close> option is set.  Note that the
   low-level representation in the axiom table may differ slightly from
   the returned theorem.
 
-  \<^descr> @{ML Theory.add_deps}~@{text "ctxt name c\<^sub>\<tau> \<^vec>d\<^sub>\<sigma>"}
-  declares dependencies of a named specification for constant @{text
-  "c\<^sub>\<tau>"}, relative to existing specifications for constants @{text
-  "\<^vec>d\<^sub>\<sigma>"}.  This also works for type constructors.
+  \<^descr> @{ML Theory.add_deps}~\<open>ctxt name c\<^sub>\<tau> \<^vec>d\<^sub>\<sigma>\<close>
+  declares dependencies of a named specification for constant \<open>c\<^sub>\<tau>\<close>, relative to existing specifications for constants \<open>\<^vec>d\<^sub>\<sigma>\<close>.  This also works for type constructors.
 \<close>
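
text \<open>
  For example, a minimal sketch of explicit certification in the
  current context (the \<open>\<lambda>\<close>-term and type are arbitrary illustrations):
\<close>

ML_val \<open>
  val ctxt = @{context};
  val ct = Thm.cterm_of ctxt @{term "\<lambda>x::'a. x"};   (*certified term*)
  val cT = Thm.ctyp_of ctxt @{typ "'a \<Rightarrow> 'a"};       (*certified type*)
\<close>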
 
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "ctyp"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "cterm"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "cprop"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "thm"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "thms"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "lemma"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "ctyp"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "cterm"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "cprop"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "thm"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "thms"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "lemma"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -788,28 +754,28 @@
     @'by' method method?
   \<close>}
 
-  \<^descr> @{text "@{ctyp \<tau>}"} produces a certified type wrt.\ the
+  \<^descr> \<open>@{ctyp \<tau>}\<close> produces a certified type wrt.\ the
   current background theory --- as abstract value of type @{ML_type
   ctyp}.
 
-  \<^descr> @{text "@{cterm t}"} and @{text "@{cprop \<phi>}"} produce a
+  \<^descr> \<open>@{cterm t}\<close> and \<open>@{cprop \<phi>}\<close> produce a
   certified term wrt.\ the current background theory --- as abstract
   value of type @{ML_type cterm}.
 
-  \<^descr> @{text "@{thm a}"} produces a singleton fact --- as abstract
+  \<^descr> \<open>@{thm a}\<close> produces a singleton fact --- as abstract
   value of type @{ML_type thm}.
 
-  \<^descr> @{text "@{thms a}"} produces a general fact --- as abstract
+  \<^descr> \<open>@{thms a}\<close> produces a general fact --- as abstract
   value of type @{ML_type "thm list"}.
 
-  \<^descr> @{text "@{lemma \<phi> by meth}"} produces a fact that is proven on
+  \<^descr> \<open>@{lemma \<phi> by meth}\<close> produces a fact that is proven on
   the spot according to the minimal proof, which imitates a terminal
   Isar proof.  The result is an abstract value of type @{ML_type thm}
   or @{ML_type "thm list"}, depending on the number of propositions
   given here.
 
   The internal derivation object lacks a proper theorem name, but it
-  is formally closed, unless the @{text "(open)"} option is specified
+  is formally closed, unless the \<open>(open)\<close> option is specified
   (this may impact performance of applications with proof terms).
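
  For example, a minimal sketch of these antiquotations (assuming an
  Isabelle/HOL context, where the facts \<open>refl\<close>, \<open>conjI\<close>, \<open>conjE\<close> are
  available):

  ML_val \<open>
    val ct = @{cterm "\<lambda>x::'a. x"};
    val th = @{thm refl};
    val ths = @{thms conjI conjE};
    val lem = @{lemma "x = x" by (rule refl)};
  \<close>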
 
   Since ML antiquotations are always evaluated at compile-time, there
@@ -823,7 +789,7 @@
 
 subsection \<open>Auxiliary connectives \label{sec:logic-aux}\<close>
 
-text \<open>Theory @{text "Pure"} provides a few auxiliary connectives
+text \<open>Theory \<open>Pure\<close> provides a few auxiliary connectives
   that are defined on top of the primitive ones, see
   \figref{fig:pure-aux}.  These special constants are useful in
   certain internal encodings, and are normally not directly exposed to
@@ -832,51 +798,49 @@
   \begin{figure}[htb]
   \begin{center}
   \begin{tabular}{ll}
-  @{text "conjunction :: prop \<Rightarrow> prop \<Rightarrow> prop"} & (infix @{text "&&&"}) \\
-  @{text "\<turnstile> A &&& B \<equiv> (\<And>C. (A \<Longrightarrow> B \<Longrightarrow> C) \<Longrightarrow> C)"} \\[1ex]
-  @{text "prop :: prop \<Rightarrow> prop"} & (prefix @{text "#"}, suppressed) \\
-  @{text "#A \<equiv> A"} \\[1ex]
-  @{text "term :: \<alpha> \<Rightarrow> prop"} & (prefix @{text "TERM"}) \\
-  @{text "term x \<equiv> (\<And>A. A \<Longrightarrow> A)"} \\[1ex]
-  @{text "type :: \<alpha> itself"} & (prefix @{text "TYPE"}) \\
-  @{text "(unspecified)"} \\
+  \<open>conjunction :: prop \<Rightarrow> prop \<Rightarrow> prop\<close> & (infix \<open>&&&\<close>) \\
+  \<open>\<turnstile> A &&& B \<equiv> (\<And>C. (A \<Longrightarrow> B \<Longrightarrow> C) \<Longrightarrow> C)\<close> \\[1ex]
+  \<open>prop :: prop \<Rightarrow> prop\<close> & (prefix \<open>#\<close>, suppressed) \\
+  \<open>#A \<equiv> A\<close> \\[1ex]
+  \<open>term :: \<alpha> \<Rightarrow> prop\<close> & (prefix \<open>TERM\<close>) \\
+  \<open>term x \<equiv> (\<And>A. A \<Longrightarrow> A)\<close> \\[1ex]
+  \<open>type :: \<alpha> itself\<close> & (prefix \<open>TYPE\<close>) \\
+  \<open>(unspecified)\<close> \\
   \end{tabular}
   \caption{Definitions of auxiliary connectives}\label{fig:pure-aux}
   \end{center}
   \end{figure}
 
-  The introduction @{text "A \<Longrightarrow> B \<Longrightarrow> A &&& B"}, and eliminations
-  (projections) @{text "A &&& B \<Longrightarrow> A"} and @{text "A &&& B \<Longrightarrow> B"} are
+  The introduction \<open>A \<Longrightarrow> B \<Longrightarrow> A &&& B\<close>, and eliminations
+  (projections) \<open>A &&& B \<Longrightarrow> A\<close> and \<open>A &&& B \<Longrightarrow> B\<close> are
  available as derived rules.  Conjunction allows one to treat
   simultaneous assumptions and conclusions uniformly, e.g.\ consider
-  @{text "A \<Longrightarrow> B \<Longrightarrow> C &&& D"}.  In particular, the goal mechanism
+  \<open>A \<Longrightarrow> B \<Longrightarrow> C &&& D\<close>.  In particular, the goal mechanism
   represents multiple claims as explicit conjunction internally, but
   this is refined (via backwards introduction) into separate sub-goals
   before the user commences the proof; the final result is projected
   into a list of theorems using eliminations (cf.\
   \secref{sec:tactical-goals}).
 
-  The @{text "prop"} marker (@{text "#"}) makes arbitrarily complex
-  propositions appear as atomic, without changing the meaning: @{text
-  "\<Gamma> \<turnstile> A"} and @{text "\<Gamma> \<turnstile> #A"} are interchangeable.  See
+  The \<open>prop\<close> marker (\<open>#\<close>) makes arbitrarily complex
+  propositions appear as atomic, without changing the meaning: \<open>\<Gamma> \<turnstile> A\<close> and \<open>\<Gamma> \<turnstile> #A\<close> are interchangeable.  See
   \secref{sec:tactical-goals} for specific operations.
 
-  The @{text "term"} marker turns any well-typed term into a derivable
-  proposition: @{text "\<turnstile> TERM t"} holds unconditionally.  Although
+  The \<open>term\<close> marker turns any well-typed term into a derivable
+  proposition: \<open>\<turnstile> TERM t\<close> holds unconditionally.  Although
  this is logically vacuous, it allows one to treat terms and proofs
   uniformly, similar to a type-theoretic framework.
 
-  The @{text "TYPE"} constructor is the canonical representative of
-  the unspecified type @{text "\<alpha> itself"}; it essentially injects the
+  The \<open>TYPE\<close> constructor is the canonical representative of
+  the unspecified type \<open>\<alpha> itself\<close>; it essentially injects the
   language of types into that of terms.  There is specific notation
-  @{text "TYPE(\<tau>)"} for @{text "TYPE\<^bsub>\<tau> itself\<^esub>"}.
-  Although being devoid of any particular meaning, the term @{text
-  "TYPE(\<tau>)"} accounts for the type @{text "\<tau>"} within the term
-  language.  In particular, @{text "TYPE(\<alpha>)"} may be used as formal
+  \<open>TYPE(\<tau>)\<close> for \<open>TYPE\<^bsub>\<tau> itself\<^esub>\<close>.
+  Although devoid of any particular meaning, the term \<open>TYPE(\<tau>)\<close> accounts for the type \<open>\<tau>\<close> within the term
+  language.  In particular, \<open>TYPE(\<alpha>)\<close> may be used as formal
   argument in primitive definitions, in order to circumvent hidden
-  polymorphism (cf.\ \secref{sec:terms}).  For example, @{text "c
-  TYPE(\<alpha>) \<equiv> A[\<alpha>]"} defines @{text "c :: \<alpha> itself \<Rightarrow> prop"} in terms of
-  a proposition @{text "A"} that depends on an additional type
+  polymorphism (cf.\ \secref{sec:terms}).  For example, \<open>c
+  TYPE(\<alpha>) \<equiv> A[\<alpha>]\<close> defines \<open>c :: \<alpha> itself \<Rightarrow> prop\<close> in terms of
+  a proposition \<open>A\<close> that depends on an additional type
   argument, which is essentially a predicate on types.
 \<close>
 
@@ -890,22 +854,19 @@
   @{index_ML Logic.dest_type: "term -> typ"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Conjunction.intr} derives @{text "A &&& B"} from @{text
-  "A"} and @{text "B"}.
+  \<^descr> @{ML Conjunction.intr} derives \<open>A &&& B\<close> from \<open>A\<close> and \<open>B\<close>.
 
-  \<^descr> @{ML Conjunction.elim} derives @{text "A"} and @{text "B"}
-  from @{text "A &&& B"}.
+  \<^descr> @{ML Conjunction.elim} derives \<open>A\<close> and \<open>B\<close>
+  from \<open>A &&& B\<close>.
 
-  \<^descr> @{ML Drule.mk_term} derives @{text "TERM t"}.
+  \<^descr> @{ML Drule.mk_term} derives \<open>TERM t\<close>.
 
-  \<^descr> @{ML Drule.dest_term} recovers term @{text "t"} from @{text
-  "TERM t"}.
+  \<^descr> @{ML Drule.dest_term} recovers term \<open>t\<close> from \<open>TERM t\<close>.
 
-  \<^descr> @{ML Logic.mk_type}~@{text "\<tau>"} produces the term @{text
-  "TYPE(\<tau>)"}.
+  \<^descr> @{ML Logic.mk_type}~\<open>\<tau>\<close> produces the term \<open>TYPE(\<tau>)\<close>.
 
-  \<^descr> @{ML Logic.dest_type}~@{text "TYPE(\<tau>)"} recovers the type
-  @{text "\<tau>"}.
+  \<^descr> @{ML Logic.dest_type}~\<open>TYPE(\<tau>)\<close> recovers the type
+  \<open>\<tau>\<close>.
 \<close>
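
text \<open>
  For example, a minimal sketch of these operations (the variable
  \<open>x\<close> is an arbitrary illustration):
\<close>

ML_val \<open>
  val t = Logic.mk_type @{typ "'a"};        (*the term TYPE('a)*)
  val T = Logic.dest_type t;                (*recovers the type 'a*)
  val th = Drule.mk_term @{cterm "x::'a"};  (*derives TERM x*)
  val ct = Drule.dest_term th;              (*recovers the certified term*)
\<close>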
 
 
@@ -913,29 +874,26 @@
 
 text \<open>Type variables are decorated with sorts, as explained in
   \secref{sec:types}.  This constrains type instantiation to certain
-  ranges of types: variable @{text "\<alpha>\<^sub>s"} may only be assigned to types
-  @{text "\<tau>"} that belong to sort @{text "s"}.  Within the logic, sort
-  constraints act like implicit preconditions on the result @{text
-  "\<lparr>\<alpha>\<^sub>1 : s\<^sub>1\<rparr>, \<dots>, \<lparr>\<alpha>\<^sub>n : s\<^sub>n\<rparr>, \<Gamma> \<turnstile> \<phi>"} where the type variables @{text
-  "\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n"} cover the propositions @{text "\<Gamma>"}, @{text "\<phi>"}, as
-  well as the proof of @{text "\<Gamma> \<turnstile> \<phi>"}.
+  ranges of types: variable \<open>\<alpha>\<^sub>s\<close> may only be assigned to types
+  \<open>\<tau>\<close> that belong to sort \<open>s\<close>.  Within the logic, sort
+  constraints act like implicit preconditions on the result \<open>\<lparr>\<alpha>\<^sub>1 : s\<^sub>1\<rparr>, \<dots>, \<lparr>\<alpha>\<^sub>n : s\<^sub>n\<rparr>, \<Gamma> \<turnstile> \<phi>\<close> where the type variables \<open>\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n\<close> cover the propositions \<open>\<Gamma>\<close>, \<open>\<phi>\<close>, as
+  well as the proof of \<open>\<Gamma> \<turnstile> \<phi>\<close>.
 
  These \<^emph>\<open>sort hypotheses\<close> of a theorem are passed monotonically
   through further derivations.  They are redundant, as long as the
   statement of a theorem still contains the type variables that are
   accounted here.  The logical significance of sort hypotheses is
   limited to the boundary case where type variables disappear from the
-  proposition, e.g.\ @{text "\<lparr>\<alpha>\<^sub>s : s\<rparr> \<turnstile> \<phi>"}.  Since such dangling type
+  proposition, e.g.\ \<open>\<lparr>\<alpha>\<^sub>s : s\<rparr> \<turnstile> \<phi>\<close>.  Since such dangling type
   variables can be renamed arbitrarily without changing the
-  proposition @{text "\<phi>"}, the inference kernel maintains sort
-  hypotheses in anonymous form @{text "s \<turnstile> \<phi>"}.
+  proposition \<open>\<phi>\<close>, the inference kernel maintains sort
+  hypotheses in anonymous form \<open>s \<turnstile> \<phi>\<close>.
 
   In most practical situations, such extra sort hypotheses may be
   stripped in a final bookkeeping step, e.g.\ at the end of a proof:
   they are typically left over from intermediate reasoning with type
-  classes that can be satisfied by some concrete type @{text "\<tau>"} of
-  sort @{text "s"} to replace the hypothetical type variable @{text
-  "\<alpha>\<^sub>s"}.\<close>
+  classes that can be satisfied by some concrete type \<open>\<tau>\<close> of
+  sort \<open>s\<close> to replace the hypothetical type variable \<open>\<alpha>\<^sub>s\<close>.\<close>
 
 text %mlref \<open>
   \begin{mldecls}
@@ -943,11 +901,11 @@
   @{index_ML Thm.strip_shyps: "thm -> thm"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Thm.extra_shyps}~@{text "thm"} determines the extraneous
+  \<^descr> @{ML Thm.extra_shyps}~\<open>thm\<close> determines the extraneous
   sort hypotheses of the given theorem, i.e.\ the sorts that are not
   present within type variables of the statement.
 
-  \<^descr> @{ML Thm.strip_shyps}~@{text "thm"} removes any extraneous
+  \<^descr> @{ML Thm.strip_shyps}~\<open>thm\<close> removes any extraneous
   sort hypotheses that can be witnessed from the type signature.
 \<close>
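
text \<open>
  For example, a minimal sketch (assuming an Isabelle/HOL context; the
  statement of \<open>refl\<close> retains its type variable, so no extraneous sort
  hypotheses are expected):
\<close>

ML_val \<open>
  val th = @{thm refl};
  val extra = Thm.extra_shyps th;   (*expected: []*)
  val th' = Thm.strip_shyps th;
\<close>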
 
@@ -976,9 +934,8 @@
   purposes.  User-level reasoning usually works via object-level rules
   that are represented as theorems of Pure.  Composition of rules
   involves \<^emph>\<open>backchaining\<close>, \<^emph>\<open>higher-order unification\<close> modulo
-  @{text "\<alpha>\<beta>\<eta>"}-conversion of @{text "\<lambda>"}-terms, and so-called
-  \<^emph>\<open>lifting\<close> of rules into a context of @{text "\<And>"} and @{text
-  "\<Longrightarrow>"} connectives.  Thus the full power of higher-order Natural
+  \<open>\<alpha>\<beta>\<eta>\<close>-conversion of \<open>\<lambda>\<close>-terms, and so-called
+  \<^emph>\<open>lifting\<close> of rules into a context of \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> connectives.  Thus the full power of higher-order Natural
   Deduction in Isabelle/Pure becomes readily available.
 \<close>
 
@@ -991,24 +948,24 @@
   arbitrary nesting similar to @{cite extensions91}.  The most basic
   rule format is that of a \<^emph>\<open>Horn Clause\<close>:
   \[
-  \infer{@{text "A"}}{@{text "A\<^sub>1"} & @{text "\<dots>"} & @{text "A\<^sub>n"}}
+  \infer{\<open>A\<close>}{\<open>A\<^sub>1\<close> & \<open>\<dots>\<close> & \<open>A\<^sub>n\<close>}
   \]
-  where @{text "A, A\<^sub>1, \<dots>, A\<^sub>n"} are atomic propositions
-  of the framework, usually of the form @{text "Trueprop B"}, where
-  @{text "B"} is a (compound) object-level statement.  This
+  where \<open>A, A\<^sub>1, \<dots>, A\<^sub>n\<close> are atomic propositions
+  of the framework, usually of the form \<open>Trueprop B\<close>, where
+  \<open>B\<close> is a (compound) object-level statement.  This
   object-level inference corresponds to an iterated implication in
   Pure like this:
   \[
-  @{text "A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> A"}
+  \<open>A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> A\<close>
   \]
-  As an example consider conjunction introduction: @{text "A \<Longrightarrow> B \<Longrightarrow> A \<and>
-  B"}.  Any parameters occurring in such rule statements are
+  As an example consider conjunction introduction: \<open>A \<Longrightarrow> B \<Longrightarrow> A \<and>
+  B\<close>.  Any parameters occurring in such rule statements are
  conceptually treated as arbitrary:
   \[
-  @{text "\<And>x\<^sub>1 \<dots> x\<^sub>m. A\<^sub>1 x\<^sub>1 \<dots> x\<^sub>m \<Longrightarrow> \<dots> A\<^sub>n x\<^sub>1 \<dots> x\<^sub>m \<Longrightarrow> A x\<^sub>1 \<dots> x\<^sub>m"}
+  \<open>\<And>x\<^sub>1 \<dots> x\<^sub>m. A\<^sub>1 x\<^sub>1 \<dots> x\<^sub>m \<Longrightarrow> \<dots> A\<^sub>n x\<^sub>1 \<dots> x\<^sub>m \<Longrightarrow> A x\<^sub>1 \<dots> x\<^sub>m\<close>
   \]
 
-  Nesting of rules means that the positions of @{text "A\<^sub>i"} may
+  Nesting of rules means that the positions of \<open>A\<^sub>i\<close> may
   again hold compound rules, not just atomic propositions.
   Propositions of this format are called \<^emph>\<open>Hereditary Harrop
   Formulae\<close> in the literature @{cite "Miller:1991"}.  Here we give an
@@ -1016,19 +973,18 @@
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "\<^bold>x"} & set of variables \\
-  @{text "\<^bold>A"} & set of atomic propositions \\
-  @{text "\<^bold>H  =  \<And>\<^bold>x\<^sup>*. \<^bold>H\<^sup>* \<Longrightarrow> \<^bold>A"} & set of Hereditary Harrop Formulas \\
+  \<open>\<^bold>x\<close> & set of variables \\
+  \<open>\<^bold>A\<close> & set of atomic propositions \\
+  \<open>\<^bold>H  =  \<And>\<^bold>x\<^sup>*. \<^bold>H\<^sup>* \<Longrightarrow> \<^bold>A\<close> & set of Hereditary Harrop Formulas \\
   \end{tabular}
   \<^medskip>
 
   Thus we essentially impose nesting levels on propositions formed
-  from @{text "\<And>"} and @{text "\<Longrightarrow>"}.  At each level there is a prefix
+  from \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close>.  At each level there is a prefix
   of parameters and compound premises, concluding an atomic
-  proposition.  Typical examples are @{text "\<longrightarrow>"}-introduction @{text
-  "(A \<Longrightarrow> B) \<Longrightarrow> A \<longrightarrow> B"} or mathematical induction @{text "P 0 \<Longrightarrow> (\<And>n. P n
-  \<Longrightarrow> P (Suc n)) \<Longrightarrow> P n"}.  Even deeper nesting occurs in well-founded
-  induction @{text "(\<And>x. (\<And>y. y \<prec> x \<Longrightarrow> P y) \<Longrightarrow> P x) \<Longrightarrow> P x"}, but this
+  proposition.  Typical examples are \<open>\<longrightarrow>\<close>-introduction \<open>(A \<Longrightarrow> B) \<Longrightarrow> A \<longrightarrow> B\<close> or mathematical induction \<open>P 0 \<Longrightarrow> (\<And>n. P n
+  \<Longrightarrow> P (Suc n)) \<Longrightarrow> P n\<close>.  Even deeper nesting occurs in well-founded
+  induction \<open>(\<And>x. (\<And>y. y \<prec> x \<Longrightarrow> P y) \<Longrightarrow> P x) \<Longrightarrow> P x\<close>, but this
   already marks the limit of rule complexity that is usually seen in
   practice.
 
@@ -1036,14 +992,14 @@
   Regular user-level inferences in Isabelle/Pure always
   maintain the following canonical form of results:
 
-  \<^item> Normalization by @{text "(A \<Longrightarrow> (\<And>x. B x)) \<equiv> (\<And>x. A \<Longrightarrow> B x)"},
+  \<^item> Normalization by \<open>(A \<Longrightarrow> (\<And>x. B x)) \<equiv> (\<And>x. A \<Longrightarrow> B x)\<close>,
   which is a theorem of Pure, means that quantifiers are pushed in
   front of implication at each level of nesting.  The normal form is a
   Hereditary Harrop Formula.
 
   \<^item> The outermost prefix of parameters is represented via
-  schematic variables: instead of @{text "\<And>\<^vec>x. \<^vec>H \<^vec>x
-  \<Longrightarrow> A \<^vec>x"} we have @{text "\<^vec>H ?\<^vec>x \<Longrightarrow> A ?\<^vec>x"}.
+  schematic variables: instead of \<open>\<And>\<^vec>x. \<^vec>H \<^vec>x
+  \<Longrightarrow> A \<^vec>x\<close> we have \<open>\<^vec>H ?\<^vec>x \<Longrightarrow> A ?\<^vec>x\<close>.
  Note that this representation loses information about the order of
   parameters, and vacuous quantifiers vanish automatically.
 \<close>
@@ -1053,7 +1009,7 @@
   @{index_ML Simplifier.norm_hhf: "Proof.context -> thm -> thm"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Simplifier.norm_hhf}~@{text "ctxt thm"} normalizes the given
+  \<^descr> @{ML Simplifier.norm_hhf}~\<open>ctxt thm\<close> normalizes the given
   theorem according to the canonical form specified above.  This is
   occasionally helpful to repair some low-level tools that do not
   handle Hereditary Harrop Formulae properly.
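
  For example, a minimal sketch (assuming an Isabelle/HOL context; the
  rule \<open>conjI\<close> is already in canonical form, so the result is
  unchanged):

  ML_val \<open>
    val th = Simplifier.norm_hhf @{context} @{thm conjI};
  \<close>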
@@ -1071,41 +1027,40 @@
 
   To understand the all-important @{inference resolution} principle,
   we first consider raw @{inference_def composition} (modulo
-  higher-order unification with substitution @{text "\<vartheta>"}):
+  higher-order unification with substitution \<open>\<vartheta>\<close>):
   \[
-  \infer[(@{inference_def composition})]{@{text "\<^vec>A\<vartheta> \<Longrightarrow> C\<vartheta>"}}
-  {@{text "\<^vec>A \<Longrightarrow> B"} & @{text "B' \<Longrightarrow> C"} & @{text "B\<vartheta> = B'\<vartheta>"}}
+  \infer[(@{inference_def composition})]{\<open>\<^vec>A\<vartheta> \<Longrightarrow> C\<vartheta>\<close>}
+  {\<open>\<^vec>A \<Longrightarrow> B\<close> & \<open>B' \<Longrightarrow> C\<close> & \<open>B\<vartheta> = B'\<vartheta>\<close>}
   \]
   Here the conclusion of the first rule is unified with the premise of
   the second; the resulting rule instance inherits the premises of the
-  first and conclusion of the second.  Note that @{text "C"} can again
+  first and conclusion of the second.  Note that \<open>C\<close> can again
   consist of iterated implications.  We can also permute the premises
-  of the second rule back-and-forth in order to compose with @{text
-  "B'"} in any position (subsequently we shall always refer to
+  of the second rule back-and-forth in order to compose with \<open>B'\<close> in any position (subsequently we shall always refer to
   position 1 w.l.o.g.).
 
   In @{inference composition} the internal structure of the common
-  part @{text "B"} and @{text "B'"} is not taken into account.  For
-  proper @{inference resolution} we require @{text "B"} to be atomic,
-  and explicitly observe the structure @{text "\<And>\<^vec>x. \<^vec>H
-  \<^vec>x \<Longrightarrow> B' \<^vec>x"} of the premise of the second rule.  The
+  part \<open>B\<close> and \<open>B'\<close> is not taken into account.  For
+  proper @{inference resolution} we require \<open>B\<close> to be atomic,
+  and explicitly observe the structure \<open>\<And>\<^vec>x. \<^vec>H
+  \<^vec>x \<Longrightarrow> B' \<^vec>x\<close> of the premise of the second rule.  The
   idea is to adapt the first rule by ``lifting'' it into this context,
   by means of iterated application of the following inferences:
   \[
-  \infer[(@{inference_def imp_lift})]{@{text "(\<^vec>H \<Longrightarrow> \<^vec>A) \<Longrightarrow> (\<^vec>H \<Longrightarrow> B)"}}{@{text "\<^vec>A \<Longrightarrow> B"}}
+  \infer[(@{inference_def imp_lift})]{\<open>(\<^vec>H \<Longrightarrow> \<^vec>A) \<Longrightarrow> (\<^vec>H \<Longrightarrow> B)\<close>}{\<open>\<^vec>A \<Longrightarrow> B\<close>}
   \]
   \[
-  \infer[(@{inference_def all_lift})]{@{text "(\<And>\<^vec>x. \<^vec>A (?\<^vec>a \<^vec>x)) \<Longrightarrow> (\<And>\<^vec>x. B (?\<^vec>a \<^vec>x))"}}{@{text "\<^vec>A ?\<^vec>a \<Longrightarrow> B ?\<^vec>a"}}
+  \infer[(@{inference_def all_lift})]{\<open>(\<And>\<^vec>x. \<^vec>A (?\<^vec>a \<^vec>x)) \<Longrightarrow> (\<And>\<^vec>x. B (?\<^vec>a \<^vec>x))\<close>}{\<open>\<^vec>A ?\<^vec>a \<Longrightarrow> B ?\<^vec>a\<close>}
   \]
   By combining raw composition with lifting, we get full @{inference
   resolution} as follows:
   \[
   \infer[(@{inference_def resolution})]
-  {@{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>A (?\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>"}}
+  {\<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>A (?\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>\<close>}
   {\begin{tabular}{l}
-    @{text "\<^vec>A ?\<^vec>a \<Longrightarrow> B ?\<^vec>a"} \\
-    @{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C"} \\
-    @{text "(\<lambda>\<^vec>x. B (?\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>"} \\
+    \<open>\<^vec>A ?\<^vec>a \<Longrightarrow> B ?\<^vec>a\<close> \\
+    \<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C\<close> \\
+    \<open>(\<lambda>\<^vec>x. B (?\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>\<close> \\
    \end{tabular}}
   \]
 
@@ -1114,8 +1069,8 @@
   a rule of 0 premises, or by producing a ``short-circuit'' within a
   solved situation (again modulo unification):
   \[
-  \infer[(@{inference_def assumption})]{@{text "C\<vartheta>"}}
-  {@{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> A \<^vec>x) \<Longrightarrow> C"} & @{text "A\<vartheta> = H\<^sub>i\<vartheta>"}~~\text{(for some~@{text i})}}
+  \infer[(@{inference_def assumption})]{\<open>C\<vartheta>\<close>}
+  {\<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> A \<^vec>x) \<Longrightarrow> C\<close> & \<open>A\<vartheta> = H\<^sub>i\<vartheta>\<close>~~\text{(for some~\<open>i\<close>)}}
   \]
 
   %FIXME @{inference_def elim_resolution}, @{inference_def dest_resolution}
@@ -1133,8 +1088,8 @@
   @{index_ML_op "OF": "thm * thm list -> thm"} \\
   \end{mldecls}
 
-  \<^descr> @{text "rule\<^sub>1 RSN (i, rule\<^sub>2)"} resolves the conclusion of
-  @{text "rule\<^sub>1"} with the @{text i}-th premise of @{text "rule\<^sub>2"},
+  \<^descr> \<open>rule\<^sub>1 RSN (i, rule\<^sub>2)\<close> resolves the conclusion of
+  \<open>rule\<^sub>1\<close> with the \<open>i\<close>-th premise of \<open>rule\<^sub>2\<close>,
   according to the @{inference resolution} principle explained above.
   Unless there is precisely one resolvent it raises exception @{ML
   THM}.
@@ -1142,28 +1097,27 @@
   This corresponds to the rule attribute @{attribute THEN} in Isar
   source language.
 
-  \<^descr> @{text "rule\<^sub>1 RS rule\<^sub>2"} abbreviates @{text "rule\<^sub>1 RSN (1,
-  rule\<^sub>2)"}.
+  \<^descr> \<open>rule\<^sub>1 RS rule\<^sub>2\<close> abbreviates \<open>rule\<^sub>1 RSN (1,
+  rule\<^sub>2)\<close>.
 
-  \<^descr> @{text "rules\<^sub>1 RLN (i, rules\<^sub>2)"} joins lists of rules.  For
-  every @{text "rule\<^sub>1"} in @{text "rules\<^sub>1"} and @{text "rule\<^sub>2"} in
-  @{text "rules\<^sub>2"}, it resolves the conclusion of @{text "rule\<^sub>1"} with
-  the @{text "i"}-th premise of @{text "rule\<^sub>2"}, accumulating multiple
+  \<^descr> \<open>rules\<^sub>1 RLN (i, rules\<^sub>2)\<close> joins lists of rules.  For
+  every \<open>rule\<^sub>1\<close> in \<open>rules\<^sub>1\<close> and \<open>rule\<^sub>2\<close> in
+  \<open>rules\<^sub>2\<close>, it resolves the conclusion of \<open>rule\<^sub>1\<close> with
+  the \<open>i\<close>-th premise of \<open>rule\<^sub>2\<close>, accumulating multiple
   results in one big list.  Note that such strict enumerations of
   higher-order unifications can be inefficient compared to the lazy
   variant seen in elementary tactics like @{ML resolve_tac}.
 
-  \<^descr> @{text "rules\<^sub>1 RL rules\<^sub>2"} abbreviates @{text "rules\<^sub>1 RLN (1,
-  rules\<^sub>2)"}.
+  \<^descr> \<open>rules\<^sub>1 RL rules\<^sub>2\<close> abbreviates \<open>rules\<^sub>1 RLN (1,
+  rules\<^sub>2)\<close>.
 
-  \<^descr> @{text "[rule\<^sub>1, \<dots>, rule\<^sub>n] MRS rule"} resolves @{text "rule\<^sub>i"}
-  against premise @{text "i"} of @{text "rule"}, for @{text "i = n, \<dots>,
-  1"}.  By working from right to left, newly emerging premises are
+  \<^descr> \<open>[rule\<^sub>1, \<dots>, rule\<^sub>n] MRS rule\<close> resolves \<open>rule\<^sub>i\<close>
+  against premise \<open>i\<close> of \<open>rule\<close>, for \<open>i = n, \<dots>,
+  1\<close>.  By working from right to left, newly emerging premises are
   concatenated in the result, without interfering.
 
-  \<^descr> @{text "rule OF rules"} is an alternative notation for @{text
-  "rules MRS rule"}, which makes rule composition look more like
-  function application.  Note that the argument @{text "rules"} need
+  \<^descr> \<open>rule OF rules\<close> is an alternative notation for \<open>rules MRS rule\<close>, which makes rule composition look more like
+  function application.  Note that the argument \<open>rules\<close> need
   not be atomic.
 
   This corresponds to the rule attribute @{attribute OF} in Isar
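
  For example, a minimal sketch of rule composition (assuming an
  Isabelle/HOL context with the usual facts \<open>sym\<close>, \<open>conjI\<close>, \<open>refl\<close>):

  ML_val \<open>
    val th1 = @{thm sym} RS @{thm sym};   (*roughly ?s = ?t ==> ?s = ?t*)
    val th2 = @{thm conjI} OF [@{thm refl}, @{thm refl}];
  \<close>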
@@ -1181,55 +1135,50 @@
   proof-checker, for example.
 
   According to the well-known \<^emph>\<open>Curry-Howard isomorphism\<close>, a proof
-  can be viewed as a @{text "\<lambda>"}-term. Following this idea, proofs in
+  can be viewed as a \<open>\<lambda>\<close>-term. Following this idea, proofs in
   Isabelle are internally represented by a datatype similar to the one
   for terms described in \secref{sec:terms}.  On top of these
-  syntactic terms, two more layers of @{text "\<lambda>"}-calculus are added,
-  which correspond to @{text "\<And>x :: \<alpha>. B x"} and @{text "A \<Longrightarrow> B"}
+  syntactic terms, two more layers of \<open>\<lambda>\<close>-calculus are added,
+  which correspond to \<open>\<And>x :: \<alpha>. B x\<close> and \<open>A \<Longrightarrow> B\<close>
   according to the propositions-as-types principle.  The resulting
-  3-level @{text "\<lambda>"}-calculus resembles ``@{text "\<lambda>HOL"}'' in the
+  3-level \<open>\<lambda>\<close>-calculus resembles ``\<open>\<lambda>HOL\<close>'' in the
   more abstract setting of Pure Type Systems (PTS)
   @{cite "Barendregt-Geuvers:2001"}, if some fine points like schematic
   polymorphism and type classes are ignored.
 
   \<^medskip>
-  \<^emph>\<open>Proof abstractions\<close> of the form @{text "\<^bold>\<lambda>x :: \<alpha>. prf"}
-  or @{text "\<^bold>\<lambda>p : A. prf"} correspond to introduction of @{text
-  "\<And>"}/@{text "\<Longrightarrow>"}, and \<^emph>\<open>proof applications\<close> of the form @{text
-  "p \<cdot> t"} or @{text "p \<bullet> q"} correspond to elimination of @{text
-  "\<And>"}/@{text "\<Longrightarrow>"}.  Actual types @{text "\<alpha>"}, propositions @{text
-  "A"}, and terms @{text "t"} might be suppressed and reconstructed
+  \<^emph>\<open>Proof abstractions\<close> of the form \<open>\<^bold>\<lambda>x :: \<alpha>. prf\<close>
+  or \<open>\<^bold>\<lambda>p : A. prf\<close> correspond to introduction of \<open>\<And>\<close>/\<open>\<Longrightarrow>\<close>, and \<^emph>\<open>proof applications\<close> of the form \<open>p \<cdot> t\<close> or \<open>p \<bullet> q\<close> correspond to elimination of \<open>\<And>\<close>/\<open>\<Longrightarrow>\<close>.  Actual types \<open>\<alpha>\<close>, propositions \<open>A\<close>, and terms \<open>t\<close> might be suppressed and reconstructed
   from the overall proof term.
 
   \<^medskip>
   Various atomic proofs indicate special situations within
   the proof construction as follows.
 
-  A \<^emph>\<open>bound proof variable\<close> is a natural number @{text "b"} that
+  A \<^emph>\<open>bound proof variable\<close> is a natural number \<open>b\<close> that
  acts as a de Bruijn index for proof term abstractions.
 
-  A \<^emph>\<open>minimal proof\<close> ``@{text "?"}'' is a dummy proof term.  This
+  A \<^emph>\<open>minimal proof\<close> ``\<open>?\<close>'' is a dummy proof term.  This
   indicates some unrecorded part of the proof.
 
-  @{text "Hyp A"} refers to some pending hypothesis by giving its
+  \<open>Hyp A\<close> refers to some pending hypothesis by giving its
   proposition.  This indicates an open context of implicit hypotheses,
   similar to loose bound variables or free variables within a term
   (\secref{sec:terms}).
 
-  An \<^emph>\<open>axiom\<close> or \<^emph>\<open>oracle\<close> @{text "a : A[\<^vec>\<tau>]"} refers
-  some postulated @{text "proof constant"}, which is subject to
+  An \<^emph>\<open>axiom\<close> or \<^emph>\<open>oracle\<close> \<open>a : A[\<^vec>\<tau>]\<close> refers to
+  some postulated \<open>proof constant\<close>, which is subject to
   schematic polymorphism of theory content, and the particular type
-  instantiation may be given explicitly.  The vector of types @{text
-  "\<^vec>\<tau>"} refers to the schematic type variables in the generic
-  proposition @{text "A"} in canonical order.
+  instantiation may be given explicitly.  The vector of types \<open>\<^vec>\<tau>\<close> refers to the schematic type variables in the generic
+  proposition \<open>A\<close> in canonical order.
 
-  A \<^emph>\<open>proof promise\<close> @{text "a : A[\<^vec>\<tau>]"} is a placeholder
-  for some proof of polymorphic proposition @{text "A"}, with explicit
-  type instantiation as given by the vector @{text "\<^vec>\<tau>"}, as
+  A \<^emph>\<open>proof promise\<close> \<open>a : A[\<^vec>\<tau>]\<close> is a placeholder
+  for some proof of polymorphic proposition \<open>A\<close>, with explicit
+  type instantiation as given by the vector \<open>\<^vec>\<tau>\<close>, as
   above.  Unlike axioms or oracles, proof promises may be
-  \<^emph>\<open>fulfilled\<close> eventually, by substituting @{text "a"} by some
-  particular proof @{text "q"} at the corresponding type instance.
-  This acts like Hindley-Milner @{text "let"}-polymorphism: a generic
+  \<^emph>\<open>fulfilled\<close> eventually, by replacing \<open>a\<close> by some
+  particular proof \<open>q\<close> at the corresponding type instance.
+  This acts like Hindley-Milner \<open>let\<close>-polymorphism: a generic
   local proof definition may get used at different type instances, and
   is replaced by the concrete instance eventually.
 
@@ -1249,7 +1198,7 @@
   Therefore, the Isabelle/Pure inference kernel records only
   \<^emph>\<open>implicit\<close> proof terms, by omitting all typing information in
   terms, all term and type labels of proof abstractions, and some
-  argument terms of applications @{text "p \<cdot> t"} (if possible).
+  argument terms of applications \<open>p \<cdot> t\<close> (if possible).
 
   There are separate operations to reconstruct the full proof term
   later on, using \<^emph>\<open>higher-order pattern unification\<close>
@@ -1270,29 +1219,28 @@
   \begin{center}
   \begin{supertabular}{rclr}
 
-  @{syntax_def (inner) proof} & = & @{verbatim Lam} @{text params} @{verbatim "."} @{text proof} \\
-    & @{text "|"} & @{text "\<^bold>\<lambda>"} @{text "params"} @{verbatim "."} @{text proof} \\
-    & @{text "|"} & @{text proof} @{verbatim "%"} @{text any} \\
-    & @{text "|"} & @{text proof} @{text "\<cdot>"} @{text any} \\
-    & @{text "|"} & @{text proof} @{verbatim "%%"} @{text proof} \\
-    & @{text "|"} & @{text proof} @{text "\<bullet>"} @{text proof} \\
-    & @{text "|"} & @{text "id  |  longid"} \\
+  @{syntax_def (inner) proof} & = & @{verbatim Lam} \<open>params\<close> @{verbatim "."} \<open>proof\<close> \\
+    & \<open>|\<close> & \<open>\<^bold>\<lambda>\<close> \<open>params\<close> @{verbatim "."} \<open>proof\<close> \\
+    & \<open>|\<close> & \<open>proof\<close> @{verbatim "%"} \<open>any\<close> \\
+    & \<open>|\<close> & \<open>proof\<close> \<open>\<cdot>\<close> \<open>any\<close> \\
+    & \<open>|\<close> & \<open>proof\<close> @{verbatim "%%"} \<open>proof\<close> \\
+    & \<open>|\<close> & \<open>proof\<close> \<open>\<bullet>\<close> \<open>proof\<close> \\
+    & \<open>|\<close> & \<open>id  |  longid\<close> \\
   \\
 
-  @{text param} & = & @{text idt} \\
-    & @{text "|"} & @{text idt} @{verbatim ":"} @{text prop} \\
-    & @{text "|"} & @{verbatim "("} @{text param} @{verbatim ")"} \\
+  \<open>param\<close> & = & \<open>idt\<close> \\
+    & \<open>|\<close> & \<open>idt\<close> @{verbatim ":"} \<open>prop\<close> \\
+    & \<open>|\<close> & @{verbatim "("} \<open>param\<close> @{verbatim ")"} \\
   \\
 
-  @{text params} & = & @{text param} \\
-    & @{text "|"} & @{text param} @{text params} \\
+  \<open>params\<close> & = & \<open>param\<close> \\
+    & \<open>|\<close> & \<open>param\<close> \<open>params\<close> \\
 
   \end{supertabular}
   \end{center}
 
-  Implicit term arguments in partial proofs are indicated by ``@{text
-  "_"}''.  Type arguments for theorems and axioms may be specified
-  using @{text "p \<cdot> TYPE(type)"} (they must appear before any other
+  Implicit term arguments in partial proofs are indicated by ``\<open>_\<close>''.  Type arguments for theorems and axioms may be specified
+  using \<open>p \<cdot> TYPE(type)\<close> (they must appear before any other
   term argument of a theorem or axiom, but may be omitted altogether).
 
   \<^medskip>
@@ -1328,8 +1276,8 @@
   information, the implicit graph of nested theorems needs to be
   traversed (e.g.\ using @{ML Proofterm.fold_body_thms}).
 
-  \<^descr> @{ML Thm.proof_of}~@{text "thm"} and @{ML
-  Thm.proof_body_of}~@{text "thm"} produce the proof term or proof
+  \<^descr> @{ML Thm.proof_of}~\<open>thm\<close> and @{ML
+  Thm.proof_body_of}~\<open>thm\<close> produce the proof term or proof
   body (with digest of oracles and theorems) from a given theorem.
   Note that this involves a full join of internal futures that fulfill
   pending proof promises, and thus disrupts the natural bottom-up
@@ -1344,31 +1292,30 @@
   Officially named theorems that contribute to a result are recorded
   in any case.
 
-  \<^descr> @{ML Reconstruct.reconstruct_proof}~@{text "thy prop prf"}
-  turns the implicit proof term @{text "prf"} into a full proof of the
+  \<^descr> @{ML Reconstruct.reconstruct_proof}~\<open>thy prop prf\<close>
+  turns the implicit proof term \<open>prf\<close> into a full proof of the
   given proposition.
 
-  Reconstruction may fail if @{text "prf"} is not a proof of @{text
-  "prop"}, or if it does not contain sufficient information for
+  Reconstruction may fail if \<open>prf\<close> is not a proof of \<open>prop\<close>, or if it does not contain sufficient information for
   reconstruction.  Failure may only happen for proofs that are
   constructed manually, but not for those produced automatically by
   the inference kernel.
 
-  \<^descr> @{ML Reconstruct.expand_proof}~@{text "thy [thm\<^sub>1, \<dots>, thm\<^sub>n]
-  prf"} expands and reconstructs the proofs of all specified theorems,
+  \<^descr> @{ML Reconstruct.expand_proof}~\<open>thy [thm\<^sub>1, \<dots>, thm\<^sub>n]
+  prf\<close> expands and reconstructs the proofs of all specified theorems,
  with the given (full) proof.  Theorems that are not uniquely specified
   via their name may be disambiguated by giving their proposition.
 
-  \<^descr> @{ML Proof_Checker.thm_of_proof}~@{text "thy prf"} turns the
+  \<^descr> @{ML Proof_Checker.thm_of_proof}~\<open>thy prf\<close> turns the
   given (full) proof into a theorem, by replaying it using only
   primitive rules of the inference kernel.
 
-  \<^descr> @{ML Proof_Syntax.read_proof}~@{text "thy b\<^sub>1 b\<^sub>2 s"} reads in a
+  \<^descr> @{ML Proof_Syntax.read_proof}~\<open>thy b\<^sub>1 b\<^sub>2 s\<close> reads in a
   proof term. The Boolean flags indicate the use of sort and type
   information.  Usually, typing information is left implicit and is
   inferred during proof reconstruction.  %FIXME eliminate flags!?
 
-  \<^descr> @{ML Proof_Syntax.pretty_proof}~@{text "ctxt prf"}
+  \<^descr> @{ML Proof_Syntax.pretty_proof}~\<open>ctxt prf\<close>
   pretty-prints the given proof term.
 \<close>
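
text \<open>
  For example, a minimal sketch of inspecting proofs (assuming an
  Isabelle/HOL context; the level of detail depends on how much proof
  information the session records):
\<close>

ML_val \<open>
  val thm = @{thm conjI};
  val prf = Thm.proof_of thm;
  val body = Thm.proof_body_of thm;   (*joins pending promises*)
\<close>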
 
--- a/src/Doc/Implementation/ML.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/ML.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -315,7 +315,7 @@
   Some special infixes (e.g.\ @{ML_text "|>"}) work better at the
   start of the line, but punctuation is always at the end.
 
-  Function application follows the tradition of @{text "\<lambda>"}-calculus,
+  Function application follows the tradition of \<open>\<lambda>\<close>-calculus,
   not informal mathematics.  For example: @{ML_text "f a b"} for a
   curried function, or @{ML_text "g (a, b)"} for a tupled function.
   Note that the space between @{ML_text g} and the pair @{ML_text
@@ -627,10 +627,10 @@
   correctly.  Recall that evaluation of a function body is delayed
   until actual run-time.
 
-  \<^descr> @{ML "Context.>>"}~@{text f} applies context transformation
-  @{text f} to the implicit context of the ML toplevel.
-
-  \<^descr> @{ML ML_Thms.bind_thms}~@{text "(name, thms)"} stores a list of
+  \<^descr> @{ML "Context.>>"}~\<open>f\<close> applies context transformation
+  \<open>f\<close> to the implicit context of the ML toplevel.
+
+  \<^descr> @{ML ML_Thms.bind_thms}~\<open>(name, thms)\<close> stores a list of
   theorems produced in ML both in the (global) theory context and the
   ML toplevel, associating it with the provided name.
 
@@ -660,11 +660,11 @@
   defined in @{cite "isabelle-isar-ref"}.
 
   \<^medskip>
-  A regular antiquotation @{text "@{name args}"} processes
+  A regular antiquotation \<open>@{name args}\<close> processes
   its arguments by the usual means of the Isar source language, and
   produces corresponding ML source text, either as literal
-  \<^emph>\<open>inline\<close> text (e.g.\ @{text "@{term t}"}) or abstract
-  \<^emph>\<open>value\<close> (e.g. @{text "@{thm th}"}).  This pre-compilation
+  \<^emph>\<open>inline\<close> text (e.g.\ \<open>@{term t}\<close>) or abstract
+  \<^emph>\<open>value\<close> (e.g.\ \<open>@{thm th}\<close>).  This pre-compilation
  scheme allows one to refer to formal entities in a robust manner, with
   proper static scoping and with some degree of logical checking of
   small portions of the code.
@@ -687,8 +687,8 @@
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "make_string"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "print"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "make_string"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "print"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -697,13 +697,13 @@
   @@{ML_antiquotation print} @{syntax name}?
   \<close>}
 
-  \<^descr> @{text "@{make_string}"} inlines a function to print arbitrary values
+  \<^descr> \<open>@{make_string}\<close> inlines a function to print arbitrary values
   similar to the ML toplevel. The result is compiler dependent and may fall
   back on "?" in certain situations. The value of configuration option
   @{attribute_ref ML_print_depth} determines further details of output.
 
-  \<^descr> @{text "@{print f}"} uses the ML function @{text "f: string ->
-  unit"} to output the result of @{text "@{make_string}"} above,
+  \<^descr> \<open>@{print f}\<close> uses the ML function \<open>f: string ->
+  unit\<close> to output the result of \<open>@{make_string}\<close> above,
   together with the source position of the antiquotation.  The default
   output function is @{ML writeln}.
 \<close>
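
text \<open>
  For example, a minimal sketch of both antiquotations for diagnostic
  purposes (the values are arbitrary illustrations):
\<close>

ML_val \<open>
  val s = @{make_string} (1, "a");   (*compiler-dependent rendering*)
  val x = @{print} [1, 2, 3];        (*prints the value and returns it*)
\<close>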
@@ -724,25 +724,24 @@
 
 section \<open>Canonical argument order \label{sec:canonical-argument-order}\<close>
 
-text \<open>Standard ML is a language in the tradition of @{text
-  "\<lambda>"}-calculus and \<^emph>\<open>higher-order functional programming\<close>,
+text \<open>Standard ML is a language in the tradition of \<open>\<lambda>\<close>-calculus and \<^emph>\<open>higher-order functional programming\<close>,
   similar to OCaml, Haskell, or Isabelle/Pure and HOL as logical
   languages.  Getting acquainted with the native style of representing
  functions in that setting can save a lot of extra boilerplate:
  redundant shuffling of arguments, auxiliary abstractions etc.
 
   Functions are usually \<^emph>\<open>curried\<close>: the idea of turning arguments
-  of type @{text "\<tau>\<^sub>i"} (for @{text "i \<in> {1, \<dots> n}"}) into a result of
-  type @{text "\<tau>"} is represented by the iterated function space
-  @{text "\<tau>\<^sub>1 \<rightarrow> \<dots> \<rightarrow> \<tau>\<^sub>n \<rightarrow> \<tau>"}.  This is isomorphic to the well-known
-  encoding via tuples @{text "\<tau>\<^sub>1 \<times> \<dots> \<times> \<tau>\<^sub>n \<rightarrow> \<tau>"}, but the curried
+  of type \<open>\<tau>\<^sub>i\<close> (for \<open>i \<in> {1, \<dots> n}\<close>) into a result of
+  type \<open>\<tau>\<close> is represented by the iterated function space
+  \<open>\<tau>\<^sub>1 \<rightarrow> \<dots> \<rightarrow> \<tau>\<^sub>n \<rightarrow> \<tau>\<close>.  This is isomorphic to the well-known
+  encoding via tuples \<open>\<tau>\<^sub>1 \<times> \<dots> \<times> \<tau>\<^sub>n \<rightarrow> \<tau>\<close>, but the curried
   version fits more smoothly into the basic calculus.\footnote{The
   difference is even more significant in HOL, because the redundant
  tuple structure needs to be accommodated by extraneous proof steps.}
 
   Currying gives some flexibility due to \<^emph>\<open>partial application\<close>.  A
-  function @{text "f: \<tau>\<^sub>1 \<rightarrow> \<tau>\<^sub>2 \<rightarrow> \<tau>"} can be applied to @{text "x: \<tau>\<^sub>1"}
-  and the remaining @{text "(f x): \<tau>\<^sub>2 \<rightarrow> \<tau>"} passed to another function
+  function \<open>f: \<tau>\<^sub>1 \<rightarrow> \<tau>\<^sub>2 \<rightarrow> \<tau>\<close> can be applied to \<open>x: \<tau>\<^sub>1\<close>
+  and the remaining \<open>(f x): \<tau>\<^sub>2 \<rightarrow> \<tau>\<close> passed to another function
   etc.  How well this works in practice depends on the order of
   arguments.  In the worst case, arguments are arranged erratically,
   and using a function in a certain situation always requires some
@@ -752,8 +751,8 @@
   This can be avoided by \<^emph>\<open>canonical argument order\<close>, which
   observes certain standard patterns and minimizes adhoc permutations
   in their application.  In Isabelle/ML, large portions of text can be
-  written without auxiliary operations like @{text "swap: \<alpha> \<times> \<beta> \<rightarrow> \<beta> \<times>
-  \<alpha>"} or @{text "C: (\<alpha> \<rightarrow> \<beta> \<rightarrow> \<gamma>) \<rightarrow> (\<beta> \<rightarrow> \<alpha> \<rightarrow> \<gamma>)"} (the latter is not
+  written without auxiliary operations like \<open>swap: \<alpha> \<times> \<beta> \<rightarrow> \<beta> \<times>
+  \<alpha>\<close> or \<open>C: (\<alpha> \<rightarrow> \<beta> \<rightarrow> \<gamma>) \<rightarrow> (\<beta> \<rightarrow> \<alpha> \<rightarrow> \<gamma>)\<close> (the latter is not
   present in the Isabelle/ML library).
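
  For example, a minimal sketch of currying and partial application
  (the function \<open>add\<close> is hypothetical):

  ML_val \<open>
    fun add (m: int) n = m + n;    (*curried: int -> int -> int*)
    val inc = add 1;               (*partial application*)
    val xs = map inc [1, 2, 3];    (*[2, 3, 4]*)
  \<close>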
 
   \<^medskip>
@@ -763,32 +762,31 @@
   \<^emph>\<open>updates\<close>.
 
   The subsequent scheme is based on a hypothetical set-like container
-  of type @{text "\<beta>"} that manages elements of type @{text "\<alpha>"}.  Both
+  of type \<open>\<beta>\<close> that manages elements of type \<open>\<alpha>\<close>.  Both
   the names and types of the associated operations are canonical for
   Isabelle/ML.
 
   \begin{center}
   \begin{tabular}{ll}
   kind & canonical name and type \\\hline
-  selector & @{text "member: \<beta> \<rightarrow> \<alpha> \<rightarrow> bool"} \\
-  update & @{text "insert: \<alpha> \<rightarrow> \<beta> \<rightarrow> \<beta>"} \\
+  selector & \<open>member: \<beta> \<rightarrow> \<alpha> \<rightarrow> bool\<close> \\
+  update & \<open>insert: \<alpha> \<rightarrow> \<beta> \<rightarrow> \<beta>\<close> \\
   \end{tabular}
   \end{center}
 
-  Given a container @{text "B: \<beta>"}, the partially applied @{text
-  "member B"} is a predicate over elements @{text "\<alpha> \<rightarrow> bool"}, and
+  Given a container \<open>B: \<beta>\<close>, the partially applied \<open>member B\<close> is a predicate over elements \<open>\<alpha> \<rightarrow> bool\<close>, and
   thus represents the intended denotation directly.  It is customary
   to pass the abstract predicate to further operations, not the
   concrete container.  The argument order makes it easy to use other
-  combinators: @{text "forall (member B) list"} will check a list of
-  elements for membership in @{text "B"} etc. Often the explicit
-  @{text "list"} is pointless and can be contracted to @{text "forall
-  (member B)"} to get directly a predicate again.
+  combinators: \<open>forall (member B) list\<close> will check a list of
+  elements for membership in \<open>B\<close> etc. Often the explicit
+  \<open>list\<close> is pointless and can be contracted to \<open>forall
+  (member B)\<close> to directly obtain a predicate again.
 
   In contrast, an update operation varies the container, so it moves
-  to the right: @{text "insert a"} is a function @{text "\<beta> \<rightarrow> \<beta>"} to
-  insert a value @{text "a"}.  These can be composed naturally as
-  @{text "insert c \<circ> insert b \<circ> insert a"}.  The slightly awkward
+  to the right: \<open>insert a\<close> is a function \<open>\<beta> \<rightarrow> \<beta>\<close> to
+  insert a value \<open>a\<close>.  These can be composed naturally as
+  \<open>insert c \<circ> insert b \<circ> insert a\<close>.  The slightly awkward
   inversion of the composition order is due to conventional
   mathematical notation, which can be easily amended as explained
   below.
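
  For example, a minimal sketch of this scheme, with plain lists as a
  hypothetical set-like container:

  ML_val \<open>
    fun member B x = List.exists (fn y => y = x) B;
    fun insert x B = if member B x then B else x :: B;

    val B = insert 3 (insert 2 (insert 1 []));
    val ok = forall (member B) [1, 2];   (*true*)
  \<close>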
@@ -798,9 +796,9 @@
 subsection \<open>Forward application and composition\<close>
 
 text \<open>Regular function application and infix notation works best for
-  relatively deeply structured expressions, e.g.\ @{text "h (f x y + g
-  z)"}.  The important special case of \<^emph>\<open>linear transformation\<close>
-  applies a cascade of functions @{text "f\<^sub>n (\<dots> (f\<^sub>1 x))"}.  This
+  relatively deeply structured expressions, e.g.\ \<open>h (f x y + g
+  z)\<close>.  The important special case of \<^emph>\<open>linear transformation\<close>
+  applies a cascade of functions \<open>f\<^sub>n (\<dots> (f\<^sub>1 x))\<close>.  This
   becomes hard to read and maintain if the functions are themselves
   given as complex expressions.  The notation can be significantly
   improved by introducing \<^emph>\<open>forward\<close> versions of application and
@@ -808,14 +806,13 @@
 
   \<^medskip>
   \begin{tabular}{lll}
-  @{text "x |> f"} & @{text "\<equiv>"} & @{text "f x"} \\
-  @{text "(f #> g) x"} & @{text "\<equiv>"} & @{text "x |> f |> g"} \\
+  \<open>x |> f\<close> & \<open>\<equiv>\<close> & \<open>f x\<close> \\
+  \<open>(f #> g) x\<close> & \<open>\<equiv>\<close> & \<open>x |> f |> g\<close> \\
   \end{tabular}
   \<^medskip>
 
-  This enables to write conveniently @{text "x |> f\<^sub>1 |> \<dots> |> f\<^sub>n"} or
-  @{text "f\<^sub>1 #> \<dots> #> f\<^sub>n"} for its functional abstraction over @{text
-  "x"}.
+  This makes it convenient to write \<open>x |> f\<^sub>1 |> \<dots> |> f\<^sub>n\<close> or
+  \<open>f\<^sub>1 #> \<dots> #> f\<^sub>n\<close> for its functional abstraction over \<open>x\<close>.
 
   \<^medskip>
   There is an additional set of combinators to accommodate
@@ -824,8 +821,8 @@
 
   \<^medskip>
   \begin{tabular}{lll}
-  @{text "(x, y) |-> f"} & @{text "\<equiv>"} & @{text "f x y"} \\
-  @{text "(f #-> g) x"} & @{text "\<equiv>"} & @{text "x |> f |-> g"} \\
+  \<open>(x, y) |-> f\<close> & \<open>\<equiv>\<close> & \<open>f x y\<close> \\
+  \<open>(f #-> g) x\<close> & \<open>\<equiv>\<close> & \<open>x |> f |-> g\<close> \\
   \end{tabular}
   \<^medskip>
 \<close>
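
text \<open>
  For example, a minimal sketch of forward application and composition
  (the functions are arbitrary illustrations):
\<close>

ML_val \<open>
  val x = 1 |> (fn n => n + 1) |> (fn n => n * 2);   (*4*)
  val f = (fn n => n + 1) #> (fn n => n * 2);        (*the same, as a function*)
  val y = (1, 2) |-> (fn a => fn b => a + b);        (*3*)
\<close>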
@@ -842,29 +839,25 @@
 
 subsection \<open>Canonical iteration\<close>
 
-text \<open>As explained above, a function @{text "f: \<alpha> \<rightarrow> \<beta> \<rightarrow> \<beta>"} can be
-  understood as update on a configuration of type @{text "\<beta>"},
-  parameterized by an argument of type @{text "\<alpha>"}.  Given @{text "a: \<alpha>"}
-  the partial application @{text "(f a): \<beta> \<rightarrow> \<beta>"} operates
-  homogeneously on @{text "\<beta>"}.  This can be iterated naturally over a
-  list of parameters @{text "[a\<^sub>1, \<dots>, a\<^sub>n]"} as @{text "f a\<^sub>1 #> \<dots> #> f a\<^sub>n"}.
-  The latter expression is again a function @{text "\<beta> \<rightarrow> \<beta>"}.
-  It can be applied to an initial configuration @{text "b: \<beta>"} to
-  start the iteration over the given list of arguments: each @{text
-  "a"} in @{text "a\<^sub>1, \<dots>, a\<^sub>n"} is applied consecutively by updating a
+text \<open>As explained above, a function \<open>f: \<alpha> \<rightarrow> \<beta> \<rightarrow> \<beta>\<close> can be
+  understood as update on a configuration of type \<open>\<beta>\<close>,
+  parameterized by an argument of type \<open>\<alpha>\<close>.  Given \<open>a: \<alpha>\<close>
+  the partial application \<open>(f a): \<beta> \<rightarrow> \<beta>\<close> operates
+  homogeneously on \<open>\<beta>\<close>.  This can be iterated naturally over a
+  list of parameters \<open>[a\<^sub>1, \<dots>, a\<^sub>n]\<close> as \<open>f a\<^sub>1 #> \<dots> #> f a\<^sub>n\<close>.
+  The latter expression is again a function \<open>\<beta> \<rightarrow> \<beta>\<close>.
+  It can be applied to an initial configuration \<open>b: \<beta>\<close> to
+  start the iteration over the given list of arguments: each \<open>a\<close> in \<open>a\<^sub>1, \<dots>, a\<^sub>n\<close> is applied consecutively by updating a
   cumulative configuration.
 
-  The @{text fold} combinator in Isabelle/ML lifts a function @{text
-  "f"} as above to its iterated version over a list of arguments.
-  Lifting can be repeated, e.g.\ @{text "(fold \<circ> fold) f"} iterates
+  The \<open>fold\<close> combinator in Isabelle/ML lifts a function \<open>f\<close> as above to its iterated version over a list of arguments.
+  Lifting can be repeated, e.g.\ \<open>(fold \<circ> fold) f\<close> iterates
   over a list of lists as expected.
 
-  The variant @{text "fold_rev"} works inside-out over the list of
-  arguments, such that @{text "fold_rev f \<equiv> fold f \<circ> rev"} holds.
-
-  The @{text "fold_map"} combinator essentially performs @{text
-  "fold"} and @{text "map"} simultaneously: each application of @{text
-  "f"} produces an updated configuration together with a side-result;
+  The variant \<open>fold_rev\<close> works inside-out over the list of
+  arguments, such that \<open>fold_rev f \<equiv> fold f \<circ> rev\<close> holds.
+
+  The \<open>fold_map\<close> combinator essentially performs \<open>fold\<close> and \<open>map\<close> simultaneously: each application of \<open>f\<close> produces an updated configuration together with a side-result;
   the iteration collects all such side-results as a separate list.
 \<close>
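
text \<open>
  For example, a minimal sketch of the three combinators on lists of
  integers:
\<close>

ML_val \<open>
  val sum = fold (fn a => fn b => a + b) [1, 2, 3, 4] 0;      (*10*)
  val xs = fold_rev cons [1, 2, 3] [];                        (*[1, 2, 3]*)
  val (squares, total) =
    fold_map (fn a => fn b => (a * a, a + b)) [1, 2, 3] 0;    (*([1, 4, 9], 6)*)
\<close>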
 
@@ -875,20 +868,19 @@
   @{index_ML fold_map: "('a -> 'b -> 'c * 'b) -> 'a list -> 'b -> 'c list * 'b"} \\
   \end{mldecls}
 
-  \<^descr> @{ML fold}~@{text f} lifts the parametrized update function
-  @{text "f"} to a list of parameters.
-
-  \<^descr> @{ML fold_rev}~@{text "f"} is similar to @{ML fold}~@{text
-  "f"}, but works inside-out, as if the list would be reversed.
-
-  \<^descr> @{ML fold_map}~@{text "f"} lifts the parametrized update
-  function @{text "f"} (with side-result) to a list of parameters and
+  \<^descr> @{ML fold}~\<open>f\<close> lifts the parametrized update function
+  \<open>f\<close> to a list of parameters.
+
+  \<^descr> @{ML fold_rev}~\<open>f\<close> is similar to @{ML fold}~\<open>f\<close>, but works inside-out, as if the list were reversed.
+
+  \<^descr> @{ML fold_map}~\<open>f\<close> lifts the parametrized update
+  function \<open>f\<close> (with side-result) to a list of parameters and
   cumulative side-results.
 
 
   \begin{warn}
   The literature on functional programming provides a confusing multitude of
-  combinators called @{text "foldl"}, @{text "foldr"} etc. SML97 provides its
+  combinators called \<open>foldl\<close>, \<open>foldr\<close> etc. SML97 provides its
   own variations as @{ML List.foldl} and @{ML List.foldr}, while the classic
   Isabelle library also has the historic @{ML Library.foldl} and @{ML
   Library.foldr}. To avoid unnecessary complication, all these historical
@@ -998,11 +990,11 @@
   @{index_ML error: "string -> 'a"} % FIXME Output.error_message (!?) \\
   \end{mldecls}
 
-  \<^descr> @{ML writeln}~@{text "text"} outputs @{text "text"} as regular
+  \<^descr> @{ML writeln}~\<open>text\<close> outputs \<open>text\<close> as a regular
   message.  This is the primary message output operation of Isabelle
   and should be used by default.
 
-  \<^descr> @{ML tracing}~@{text "text"} outputs @{text "text"} as special
+  \<^descr> @{ML tracing}~\<open>text\<close> outputs \<open>text\<close> as a special
   tracing message, indicating potential high-volume output to the
   front-end (hundreds or thousands of messages issued by a single
   command).  The idea is to allow the user-interface to downgrade the
@@ -1012,17 +1004,16 @@
   output, e.g.\ switch to a different output window.  So this channel
   should not be used for regular output.
 
-  \<^descr> @{ML warning}~@{text "text"} outputs @{text "text"} as
+  \<^descr> @{ML warning}~\<open>text\<close> outputs \<open>text\<close> as
   warning, which typically means some extra emphasis on the front-end
   side (color highlighting, icons, etc.).
 
-  \<^descr> @{ML error}~@{text "text"} raises exception @{ML ERROR}~@{text
-  "text"} and thus lets the Isar toplevel print @{text "text"} on the
+  \<^descr> @{ML error}~\<open>text\<close> raises exception @{ML ERROR}~\<open>text\<close> and thus lets the Isar toplevel print \<open>text\<close> on the
   error channel, which typically means some extra emphasis on the
   front-end side (color highlighting, icons, etc.).
 
   This assumes that the exception is not handled before the command
-  terminates.  Handling exception @{ML ERROR}~@{text "text"} is a
+  terminates.  Handling exception @{ML ERROR}~\<open>text\<close> is a
   perfectly legal alternative: it means that the error is absorbed
   without any message output.
 
@@ -1107,7 +1098,7 @@
 
   It is considered bad style to refer to internal function names or
   values in ML source notation in user error messages.  Do not use
-  @{text "@{make_string}"} nor @{text "@{here}"}!
+  \<open>@{make_string}\<close> nor \<open>@{here}\<close>!
 
   Grammatical correctness of error messages can be improved by
   \<^emph>\<open>omitting\<close> final punctuation: messages are often concatenated
@@ -1177,31 +1168,30 @@
   @{index_ML Runtime.exn_trace: "(unit -> 'a) -> 'a"} \\
   \end{mldecls}
 
-  \<^descr> @{ML try}~@{text "f x"} makes the partiality of evaluating
-  @{text "f x"} explicit via the option datatype.  Interrupts are
+  \<^descr> @{ML try}~\<open>f x\<close> makes the partiality of evaluating
+  \<open>f x\<close> explicit via the option datatype.  Interrupts are
   \<^emph>\<open>not\<close> handled here, i.e.\ this form serves as safe replacement
-  for the \<^emph>\<open>unsafe\<close> version @{ML_text "(SOME"}~@{text "f
-  x"}~@{ML_text "handle _ => NONE)"} that is occasionally seen in
+  for the \<^emph>\<open>unsafe\<close> version @{ML_text "(SOME"}~\<open>f
+  x\<close>~@{ML_text "handle _ => NONE)"} that is occasionally seen in
   books about SML97, but not in Isabelle/ML.
 
   \<^descr> @{ML can} is similar to @{ML try} with more abstract result.
 
-  \<^descr> @{ML ERROR}~@{text "msg"} represents user errors; this
+  \<^descr> @{ML ERROR}~\<open>msg\<close> represents user errors; this
   exception is normally raised indirectly via the @{ML error} function
   (see \secref{sec:message-channels}).
 
-  \<^descr> @{ML Fail}~@{text "msg"} represents general program failures.
+  \<^descr> @{ML Fail}~\<open>msg\<close> represents general program failures.
 
   \<^descr> @{ML Exn.is_interrupt} identifies interrupts robustly, without
   mentioning concrete exception constructors in user code.  Handled
   interrupts need to be re-raised promptly!
 
-  \<^descr> @{ML reraise}~@{text "exn"} raises exception @{text "exn"}
+  \<^descr> @{ML reraise}~\<open>exn\<close> raises exception \<open>exn\<close>
   while preserving its implicit position information (if possible,
   depending on the ML platform).
 
-  \<^descr> @{ML Runtime.exn_trace}~@{ML_text "(fn () =>"}~@{text
-  "e"}@{ML_text ")"} evaluates expression @{text "e"} while printing
+  \<^descr> @{ML Runtime.exn_trace}~@{ML_text "(fn () =>"}~\<open>e\<close>@{ML_text ")"} evaluates expression \<open>e\<close> while printing
   a full trace of its stack of nested exceptions (if possible,
   depending on the ML platform).
 
@@ -1211,10 +1201,10 @@
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "assert"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "assert"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
-  \<^descr> @{text "@{assert}"} inlines a function
+  \<^descr> \<open>@{assert}\<close> inlines a function
   @{ML_type "bool -> unit"} that raises @{ML Fail} if the argument is
   @{ML false}.  Due to inlining the source position of failed
   assertions is included in the error output.
@@ -1230,30 +1220,26 @@
   in itself a small string, which has either one of the following
   forms:
 
-  \<^enum> a single ASCII character ``@{text "c"}'', for example
+  \<^enum> a single ASCII character ``\<open>c\<close>'', for example
   ``@{verbatim a}'',
 
   \<^enum> a codepoint according to UTF-8 (non-ASCII byte sequence),
 
-  \<^enum> a regular symbol ``@{verbatim \<open>\\<close>}@{verbatim "<"}@{text
-  "ident"}@{verbatim ">"}'', for example ``@{verbatim "\<alpha>"}'',
-
-  \<^enum> a control symbol ``@{verbatim \<open>\\<close>}@{verbatim "<^"}@{text
-  "ident"}@{verbatim ">"}'', for example ``@{verbatim "\<^bold>"}'',
-
-  \<^enum> a raw symbol ``@{verbatim \<open>\\<close>}@{verbatim "<^raw:"}@{text
-  text}@{verbatim ">"}'' where @{text text} consists of printable characters
+  \<^enum> a regular symbol ``@{verbatim \<open>\\<close>}@{verbatim "<"}\<open>ident\<close>@{verbatim ">"}'', for example ``@{verbatim "\<alpha>"}'',
+
+  \<^enum> a control symbol ``@{verbatim \<open>\\<close>}@{verbatim "<^"}\<open>ident\<close>@{verbatim ">"}'', for example ``@{verbatim "\<^bold>"}'',
+
+  \<^enum> a raw symbol ``@{verbatim \<open>\\<close>}@{verbatim "<^raw:"}\<open>text\<close>@{verbatim ">"}'' where \<open>text\<close> consists of printable characters
   excluding ``@{verbatim "."}'' and ``@{verbatim ">"}'', for example
   ``@{verbatim "\<^raw:$\sum_{i = 1}^n$>"}'',
 
   \<^enum> a numbered raw control symbol ``@{verbatim \<open>\\<close>}@{verbatim
-  "<^raw"}@{text n}@{verbatim ">"}, where @{text n} consists of digits, for
+  "<^raw"}\<open>n\<close>@{verbatim ">"}, where \<open>n\<close> consists of digits, for
   example ``@{verbatim "\<^raw42>"}''.
 
 
-  The @{text "ident"} syntax for symbol names is @{text "letter
-  (letter | digit)\<^sup>*"}, where @{text "letter = A..Za..z"} and @{text
-  "digit = 0..9"}.  There are infinitely many regular symbols and
+  The \<open>ident\<close> syntax for symbol names is \<open>letter
+  (letter | digit)\<^sup>*\<close>, where \<open>letter = A..Za..z\<close> and \<open>digit = 0..9\<close>.  There are infinitely many regular symbols and
   control symbols, but a fixed collection of standard symbols is
   treated specifically.  For example, ``@{verbatim "\<alpha>"}'' is
   classified as a letter, which means it may occur within regular
@@ -1269,8 +1255,8 @@
   \<^medskip>
   Output of Isabelle symbols depends on the print mode. For example,
   the standard {\LaTeX} setup of the Isabelle document preparation system
-  would present ``@{verbatim "\<alpha>"}'' as @{text "\<alpha>"}, and ``@{verbatim
-  "\<^bold>\<alpha>"}'' as @{text "\<^bold>\<alpha>"}. On-screen rendering usually works by mapping a
+  would present ``@{verbatim "\<alpha>"}'' as \<open>\<alpha>\<close>, and ``@{verbatim
+  "\<^bold>\<alpha>"}'' as \<open>\<^bold>\<alpha>\<close>. On-screen rendering usually works by mapping a
   finite subset of Isabelle symbols to suitable Unicode characters.
 \<close>
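+
+text \<open>
+  As an editorial sketch: @{ML "Symbol.explode"} (described below) yields
+  one list element per Isabelle symbol, while the packed string form of a
+  regular symbol occupies several characters:
+\<close>
+
+ML \<open>
+  @{assert} (length (Symbol.explode "a\<alpha>") = 2);
+  @{assert} (length (String.explode "a\<alpha>") = 9);
+\<close>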
 
@@ -1291,7 +1277,7 @@
   \<^descr> Type @{ML_type "Symbol.symbol"} represents individual Isabelle
   symbols.
 
-  \<^descr> @{ML "Symbol.explode"}~@{text "str"} produces a symbol list
+  \<^descr> @{ML "Symbol.explode"}~\<open>str\<close> produces a symbol list
   from the packed form.  This function supersedes @{ML
   "String.explode"} for virtually all purposes of manipulating text in
   Isabelle!\footnote{The runtime overhead for exploded strings is
@@ -1392,7 +1378,7 @@
   @{assert} (size s = 4);
 \<close>
 
-text \<open>Note that in Unicode renderings of the symbol @{text "\<A>"},
+text \<open>Note that in Unicode renderings of the symbol \<open>\<A>\<close>,
   variations of encodings like UTF-8 or UTF-16 pose delicate questions
   about the multi-byte representations of its codepoint, which is outside
   of the 16-bit address space of the original Unicode standard from
@@ -1435,8 +1421,7 @@
   to the SML97 basis library definition.  This is adequate for
   internal ML operations, but awkward in concrete time specifications.
 
-  \<^descr> @{ML seconds}~@{text "s"} turns the concrete scalar @{text
-  "s"} (measured in seconds) into an abstract time value.  Floating
+  \<^descr> @{ML seconds}~\<open>s\<close> turns the concrete scalar \<open>s\<close> (measured in seconds) into an abstract time value.  Floating
   point numbers are easy to use as configuration options in the
   context (see \secref{sec:config-options}) or system options that
   are maintained externally.
@@ -1479,7 +1464,7 @@
   @{index_ML update: "('a * 'a -> bool) -> 'a -> 'a list -> 'a list"} \\
   \end{mldecls}
 
-  \<^descr> @{ML cons}~@{text "x xs"} evaluates to @{text "x :: xs"}.
+  \<^descr> @{ML cons}~\<open>x xs\<close> evaluates to \<open>x :: xs\<close>.
 
   Tupled infix operators are a historical accident in Standard ML.
   The curried @{ML cons} amends this, but it should only be used when
@@ -1559,9 +1544,9 @@
   Note that a function called @{verbatim lookup} is obliged to express its
   partiality via an explicit option element.  It must not raise an
   exception without changing its name to something like
-  @{text "the_element"} or @{text "get"}.
-
-  The @{text "defined"} operation is essentially a contraction of @{ML
+  \<open>the_element\<close> or \<open>get\<close>.
+
+  The \<open>defined\<close> operation is essentially a contraction of @{ML
   is_some} and @{verbatim "lookup"}, but this is sufficiently frequent to
   justify its independent existence.  This also gives the
   implementation some opportunity for peep-hole optimization.
@@ -1717,7 +1702,7 @@
   synchronization, as long as each invocation gets its own copy and the
   tool itself is single-threaded.
 
-  \<^item> Avoid raw output on @{text "stdout"} or @{text "stderr"}.  The
+  \<^item> Avoid raw output on \<open>stdout\<close> or \<open>stderr\<close>.  The
   Poly/ML library is thread-safe for each individual output operation,
   but the ordering of parallel invocations is arbitrary.  This means
   raw output will appear on some system console with unpredictable
@@ -1748,11 +1733,11 @@
   @{index_ML serial_string: "unit -> string"} \\
   \end{mldecls}
 
-  \<^descr> @{ML File.tmp_path}~@{text "path"} relocates the base
-  component of @{text "path"} into the unique temporary directory of
+  \<^descr> @{ML File.tmp_path}~\<open>path\<close> relocates the base
+  component of \<open>path\<close> into the unique temporary directory of
   the running Isabelle/ML process.
 
-  \<^descr> @{ML serial_string}~@{text "()"} creates a new serial number
+  \<^descr> @{ML serial_string}~\<open>()\<close> creates a new serial number
   that is unique over the runtime of the Isabelle/ML process.
 \<close>
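+
+text \<open>
+  An editorial sketch that combines both operations to obtain a fresh
+  process-private file name (the base name \<open>example\<close> is made up):
+\<close>
+
+ML \<open>
+  val a = serial_string ();
+  val b = serial_string ();
+  @{assert} (a <> b);
+  val tmp = File.tmp_path (Path.basic ("example" ^ a));
+\<close>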
 
@@ -1794,19 +1779,18 @@
   \<^descr> Type @{ML_type "'a Synchronized.var"} represents synchronized
   variables with state of type @{ML_type 'a}.
 
-  \<^descr> @{ML Synchronized.var}~@{text "name x"} creates a synchronized
-  variable that is initialized with value @{text "x"}.  The @{text
-  "name"} is used for tracing.
-
-  \<^descr> @{ML Synchronized.guarded_access}~@{text "var f"} lets the
-  function @{text "f"} operate within a critical section on the state
-  @{text "x"} as follows: if @{text "f x"} produces @{ML NONE}, it
+  \<^descr> @{ML Synchronized.var}~\<open>name x\<close> creates a synchronized
+  variable that is initialized with value \<open>x\<close>.  The \<open>name\<close> is used for tracing.
+
+  \<^descr> @{ML Synchronized.guarded_access}~\<open>var f\<close> lets the
+  function \<open>f\<close> operate within a critical section on the state
+  \<open>x\<close> as follows: if \<open>f x\<close> produces @{ML NONE}, it
   continues to wait on the internal condition variable, expecting that
   some other thread will eventually change the content in a suitable
-  manner; if @{text "f x"} produces @{ML SOME}~@{text "(y, x')"} it is
-  satisfied and assigns the new state value @{text "x'"}, broadcasts a
+  manner; if \<open>f x\<close> produces @{ML SOME}~\<open>(y, x')\<close> it is
+  satisfied and assigns the new state value \<open>x'\<close>, broadcasts a
   signal to all waiting threads on the associated condition variable,
-  and returns the result @{text "y"}.
+  and returns the result \<open>y\<close>.
 
 
   There are some further variants of the @{ML
@@ -1864,10 +1848,10 @@
   occur routinely, and special care is required to tell them apart ---
   the static type-system of SML is only of limited help here.
 
-  The first form is more intuitive: some combinator @{text "(unit ->
-  'a) -> 'a"} applies the given function to @{text "()"} to initiate
+  The first form is more intuitive: some combinator \<open>(unit ->
+  'a) -> 'a\<close> applies the given function to \<open>()\<close> to initiate
   the postponed evaluation process.  The second form is more flexible:
-  some combinator @{text "('a -> 'b) -> 'a -> 'b"} acts like a
+  some combinator \<open>('a -> 'b) -> 'a -> 'b\<close> acts like a
   modified form of function application; several such combinators may
   be cascaded to modify a given function, before it is ultimately
   applied to some argument.
@@ -1875,7 +1859,7 @@
   \<^medskip>
   \<^emph>\<open>Reified results\<close> make the disjoint sum of regular
   values versus exceptional situations explicit as ML datatype:
-  @{text "'a result = Res of 'a | Exn of exn"}.  This is typically
+  \<open>'a result = Res of 'a | Exn of exn\<close>.  This is typically
   used for administrative purposes, to store the overall outcome of an
   evaluation process.
 
@@ -1905,8 +1889,8 @@
   ML results explicitly, with constructor @{ML Exn.Res} for regular
   values and @{ML "Exn.Exn"} for exceptions.
 
-  \<^descr> @{ML Exn.capture}~@{text "f x"} manages the evaluation of
-  @{text "f x"} such that exceptions are made explicit as @{ML
+  \<^descr> @{ML Exn.capture}~\<open>f x\<close> manages the evaluation of
+  \<open>f x\<close> such that exceptions are made explicit as @{ML
   "Exn.Exn"}.  Note that this includes physical interrupts (see also
   \secref{sec:exceptions}), so the same precautions apply to user
   code: interrupts must not be absorbed accidentally!
@@ -1915,11 +1899,11 @@
   Exn.capture}, but interrupts are immediately re-raised as required
   for user code.
 
-  \<^descr> @{ML Exn.release}~@{text "result"} releases the original
+  \<^descr> @{ML Exn.release}~\<open>result\<close> releases the original
   runtime result, exposing its regular value or raising the reified
   exception.
 
-  \<^descr> @{ML Par_Exn.release_all}~@{text "results"} combines results
+  \<^descr> @{ML Par_Exn.release_all}~\<open>results\<close> combines results
   that were produced independently (e.g.\ by parallel evaluation).  If
   all results are regular values, that list is returned.  Otherwise,
   the collection of all exceptions is raised, wrapped-up as collective
@@ -1938,8 +1922,8 @@
 
 text \<open>
   Algorithmic skeletons are combinators that operate on lists in
-  parallel, in the manner of well-known @{text map}, @{text exists},
-  @{text forall} etc.  Management of futures (\secref{sec:futures})
+  parallel, in the manner of well-known \<open>map\<close>, \<open>exists\<close>,
+  \<open>forall\<close> etc.  Management of futures (\secref{sec:futures})
   and their results as reified exceptions is wrapped up into simple
   programming interfaces that resemble the sequential versions.
 
@@ -1958,19 +1942,19 @@
   @{index_ML Par_List.get_some: "('a -> 'b option) -> 'a list -> 'b option"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Par_List.map}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"} is like @{ML
-  "map"}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"}, but the evaluation of @{text "f x\<^sub>i"}
-  for @{text "i = 1, \<dots>, n"} is performed in parallel.
-
-  An exception in any @{text "f x\<^sub>i"} cancels the overall evaluation
+  \<^descr> @{ML Par_List.map}~\<open>f [x\<^sub>1, \<dots>, x\<^sub>n]\<close> is like @{ML
+  "map"}~\<open>f [x\<^sub>1, \<dots>, x\<^sub>n]\<close>, but the evaluation of \<open>f x\<^sub>i\<close>
+  for \<open>i = 1, \<dots>, n\<close> is performed in parallel.
+
+  An exception in any \<open>f x\<^sub>i\<close> cancels the overall evaluation
   process.  The final result is produced via @{ML
   Par_Exn.release_first} as explained above, which means the first
   program exception that happened to occur in the parallel evaluation
   is propagated, and all other failures are ignored.
 
-  \<^descr> @{ML Par_List.get_some}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"} produces some
-  @{text "f x\<^sub>i"} that is of the form @{text "SOME y\<^sub>i"}, if that
-  exists, otherwise @{text "NONE"}.  Thus it is similar to @{ML
+  \<^descr> @{ML Par_List.get_some}~\<open>f [x\<^sub>1, \<dots>, x\<^sub>n]\<close> produces some
+  \<open>f x\<^sub>i\<close> that is of the form \<open>SOME y\<^sub>i\<close>, if that
+  exists, otherwise \<open>NONE\<close>.  Thus it is similar to @{ML
   Library.get_first}, but subject to a non-deterministic parallel
   choice process.  The first successful result cancels the overall
   evaluation process; other exceptions are propagated as for @{ML
@@ -1997,16 +1981,15 @@
 subsection \<open>Lazy evaluation\<close>
 
 text \<open>
-  Classic lazy evaluation works via the @{text lazy}~/ @{text force} pair of
-  operations: @{text lazy} to wrap an unevaluated expression, and @{text
-  force} to evaluate it once and store its result persistently. Later
-  invocations of @{text force} retrieve the stored result without another
+  Classic lazy evaluation works via the \<open>lazy\<close>~/ \<open>force\<close> pair of
+  operations: \<open>lazy\<close> to wrap an unevaluated expression, and \<open>force\<close> to evaluate it once and store its result persistently. Later
+  invocations of \<open>force\<close> retrieve the stored result without another
   evaluation. Isabelle/ML refines this idea to accommodate the aspects of
   multi-threading, synchronous program exceptions and asynchronous interrupts.
 
-  The first thread that invokes @{text force} on an unfinished lazy value
+  The first thread that invokes \<open>force\<close> on an unfinished lazy value
   changes its state into a \<^emph>\<open>promise\<close> of the eventual result and starts
-  evaluating it. Any other threads that @{text force} the same lazy value in
+  evaluating it. Any other threads that \<open>force\<close> the same lazy value in
   the meantime need to wait for it to finish, by producing a regular result or
   program exception. If the evaluation attempt is interrupted, this event is
   propagated to all waiting threads and the lazy value is reset to its
@@ -2030,16 +2013,16 @@
   \<^descr> Type @{ML_type "'a lazy"} represents lazy values over type @{verbatim
   "'a"}.
 
-  \<^descr> @{ML Lazy.lazy}~@{text "(fn () => e)"} wraps the unevaluated
-  expression @{text e} as unfinished lazy value.
-
-  \<^descr> @{ML Lazy.value}~@{text a} wraps the value @{text a} as finished lazy
-  value.  When forced, it returns @{text a} without any further evaluation.
+  \<^descr> @{ML Lazy.lazy}~\<open>(fn () => e)\<close> wraps the unevaluated
+  expression \<open>e\<close> as unfinished lazy value.
+
+  \<^descr> @{ML Lazy.value}~\<open>a\<close> wraps the value \<open>a\<close> as finished lazy
+  value.  When forced, it returns \<open>a\<close> without any further evaluation.
 
   There is very low overhead for this proforma wrapping of strict values as
   lazy values.
 
-  \<^descr> @{ML Lazy.force}~@{text x} produces the result of the lazy value in a
+  \<^descr> @{ML Lazy.force}~\<open>x\<close> produces the result of the lazy value in a
   thread-safe manner as explained above. Thus it may cause the current thread
   to wait on a pending evaluation attempt by another thread.
 \<close>
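+
+text \<open>
+  An editorial sketch: the wrapped expression is evaluated at most once,
+  no matter how often the lazy value is forced (the mutable reference
+  merely counts evaluations for the sake of the example):
+\<close>
+
+ML \<open>
+  val count = Unsynchronized.ref 0;
+  val x = Lazy.lazy (fn () => (count := !count + 1; !count));
+  @{assert} (Lazy.force x = 1);
+  @{assert} (Lazy.force x = 1);
+  @{assert} (!count = 1);
+\<close>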
@@ -2049,7 +2032,7 @@
 
 text \<open>
   Futures help to organize parallel execution in a value-oriented manner, with
-  @{text fork}~/ @{text join} as the main pair of operations, and some further
+  \<open>fork\<close>~/ \<open>join\<close> as the main pair of operations, and some further
   variants; see also @{cite "Wenzel:2009" and "Wenzel:2013:ITP"}. Unlike lazy
   values, futures are evaluated strictly and spontaneously on separate worker
   threads. Futures may be canceled, which leads to interrupts on running
@@ -2119,28 +2102,28 @@
   \<^descr> Type @{ML_type "'a future"} represents future values over type
   @{verbatim "'a"}.
 
-  \<^descr> @{ML Future.fork}~@{text "(fn () => e)"} registers the unevaluated
-  expression @{text e} as unfinished future value, to be evaluated eventually
+  \<^descr> @{ML Future.fork}~\<open>(fn () => e)\<close> registers the unevaluated
+  expression \<open>e\<close> as unfinished future value, to be evaluated eventually
   on the parallel worker-thread farm. This is a shorthand for @{ML
   Future.forks} below, with default parameters and a single expression.
 
-  \<^descr> @{ML Future.forks}~@{text "params exprs"} is the general interface to
-  fork several futures simultaneously. The @{text params} consist of the
+  \<^descr> @{ML Future.forks}~\<open>params exprs\<close> is the general interface to
+  fork several futures simultaneously. The \<open>params\<close> consist of the
   following fields:
 
-    \<^item> @{text "name : string"} (default @{ML "\"\""}) specifies a common name
+    \<^item> \<open>name : string\<close> (default @{ML "\"\""}) specifies a common name
     for the tasks of the forked futures, which serves diagnostic purposes.
 
-    \<^item> @{text "group : Future.group option"} (default @{ML NONE}) specifies
+    \<^item> \<open>group : Future.group option\<close> (default @{ML NONE}) specifies
     an optional task group for the forked futures. @{ML NONE} means that a new
     sub-group of the current worker-thread task context is created. If this is
     not a worker thread, the group will be a new root in the group hierarchy.
 
-    \<^item> @{text "deps : Future.task list"} (default @{ML "[]"}) specifies
+    \<^item> \<open>deps : Future.task list\<close> (default @{ML "[]"}) specifies
     dependencies on other future tasks, i.e.\ the adjacency relation in the
     global task queue. Dependencies on already finished tasks are ignored.
 
-    \<^item> @{text "pri : int"} (default @{ML 0}) specifies a priority within the
+    \<^item> \<open>pri : int\<close> (default @{ML 0}) specifies a priority within the
     task queue.
 
     Typically there is little deviation from the default priority @{ML 0}.
@@ -2153,7 +2136,7 @@
     priority tasks that are queued later need to wait until this (or another)
     worker thread becomes free again.
 
-    \<^item> @{text "interrupts : bool"} (default @{ML true}) tells whether the
+    \<^item> \<open>interrupts : bool\<close> (default @{ML true}) tells whether the
     worker thread that processes the corresponding task is initially put into
     interruptible state. This state may change again while running, by modifying
     the thread attributes.
@@ -2162,7 +2145,7 @@
     the responsibility of the programmer that this special state is retained
     only briefly.
 
-  \<^descr> @{ML Future.join}~@{text x} retrieves the value of an already finished
+  \<^descr> @{ML Future.join}~\<open>x\<close> retrieves the value of an already finished
   future, which may lead to an exception, according to the result of its
   previous evaluation.
 
@@ -2181,12 +2164,11 @@
   some timeout.
 
   Whenever possible, static dependencies of futures should be specified
-  explicitly when forked (see @{text deps} above). Thus the evaluation can
+  explicitly when forked (see \<open>deps\<close> above). Thus the evaluation can
   work from the bottom up, without join conflicts and wait states.
 
-  \<^descr> @{ML Future.joins}~@{text xs} joins the given list of futures
-  simultaneously, which is more efficient than @{ML "map Future.join"}~@{text
-  xs}.
+  \<^descr> @{ML Future.joins}~\<open>xs\<close> joins the given list of futures
+  simultaneously, which is more efficient than @{ML "map Future.join"}~\<open>xs\<close>.
 
   Based on the dependency graph of tasks, the current thread takes over the
   responsibility to evaluate future expressions that are required for the main
@@ -2194,23 +2176,23 @@
   presently evaluated on other threads only happens as last resort, when no
   other unfinished futures are left over.
 
-  \<^descr> @{ML Future.value}~@{text a} wraps the value @{text a} as finished
+  \<^descr> @{ML Future.value}~\<open>a\<close> wraps the value \<open>a\<close> as finished
   future value, bypassing the worker-thread farm. When joined, it returns
-  @{text a} without any further evaluation.
+  \<open>a\<close> without any further evaluation.
 
   There is very low overhead for this proforma wrapping of strict values as
   futures.
 
-  \<^descr> @{ML Future.map}~@{text "f x"} is a fast-path implementation of @{ML
-  Future.fork}~@{text "(fn () => f ("}@{ML Future.join}~@{text "x))"}, which
+  \<^descr> @{ML Future.map}~\<open>f x\<close> is a fast-path implementation of @{ML
+  Future.fork}~\<open>(fn () => f (\<close>@{ML Future.join}~\<open>x))\<close>, which
   avoids the full overhead of the task queue and worker-thread farm as far as
-  possible. The function @{text f} is supposed to be some trivial
+  possible. The function \<open>f\<close> is supposed to be some trivial
   post-processing or projection of the future result.
 
-  \<^descr> @{ML Future.cancel}~@{text "x"} cancels the task group of the given
+  \<^descr> @{ML Future.cancel}~\<open>x\<close> cancels the task group of the given
   future, using @{ML Future.cancel_group} below.
 
-  \<^descr> @{ML Future.cancel_group}~@{text "group"} cancels all tasks of the
+  \<^descr> @{ML Future.cancel_group}~\<open>group\<close> cancels all tasks of the
   given task group for all time. Threads that are presently processing a task
   of the given group are interrupted: it may take some time until they are
   actually terminated. Tasks that are queued but not yet processed are
@@ -2218,12 +2200,11 @@
   invalidated, any further attempt to fork a future that belongs to it will
   yield a canceled result as well.
 
-  \<^descr> @{ML Future.promise}~@{text abort} registers a passive future with the
-  given @{text abort} operation: it is invoked when the future task group is
+  \<^descr> @{ML Future.promise}~\<open>abort\<close> registers a passive future with the
+  given \<open>abort\<close> operation: it is invoked when the future task group is
   canceled.
 
-  \<^descr> @{ML Future.fulfill}~@{text "x a"} finishes the passive future @{text
-  x} by the given value @{text a}. If the promise has already been canceled,
+  \<^descr> @{ML Future.fulfill}~\<open>x a\<close> finishes the passive future \<open>x\<close> by the given value \<open>a\<close>. If the promise has already been canceled,
   the attempt to fulfill it causes an exception.
 \<close>
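+
+text \<open>
+  An editorial sketch: forking independent tasks, joining them
+  simultaneously, and using the fast-path variants @{ML Future.value} and
+  @{ML Future.map}:
+\<close>
+
+ML \<open>
+  val xs = map (fn i => Future.fork (fn () => i * i)) [1, 2, 3];
+  @{assert} (Future.joins xs = [1, 4, 9]);
+  @{assert} (Future.join (Future.map (fn n => n + 1) (Future.value 41)) = 42);
+\<close>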
 
--- a/src/Doc/Implementation/Prelim.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Prelim.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -13,13 +13,13 @@
   results etc.).
 
   For example, derivations within the Isabelle/Pure logic can be
-  described as a judgment @{text "\<Gamma> \<turnstile>\<^sub>\<Theta> \<phi>"}, which means that a
-  proposition @{text "\<phi>"} is derivable from hypotheses @{text "\<Gamma>"}
-  within the theory @{text "\<Theta>"}.  There are logical reasons for
-  keeping @{text "\<Theta>"} and @{text "\<Gamma>"} separate: theories can be
+  described as a judgment \<open>\<Gamma> \<turnstile>\<^sub>\<Theta> \<phi>\<close>, which means that a
+  proposition \<open>\<phi>\<close> is derivable from hypotheses \<open>\<Gamma>\<close>
+  within the theory \<open>\<Theta>\<close>.  There are logical reasons for
+  keeping \<open>\<Theta>\<close> and \<open>\<Gamma>\<close> separate: theories can be
   liberal about supporting type constructors and schematic
   polymorphism of constants and axioms, while the inner calculus of
-  @{text "\<Gamma> \<turnstile> \<phi>"} is strictly limited to Simple Type Theory (with
+  \<open>\<Gamma> \<turnstile> \<phi>\<close> is strictly limited to Simple Type Theory (with
   fixed type variables in the assumptions).
 
   \<^medskip>
@@ -27,20 +27,20 @@
   principles:
 
   \<^item> Transfer: monotonicity of derivations admits results to be
-  transferred into a \<^emph>\<open>larger\<close> context, i.e.\ @{text "\<Gamma> \<turnstile>\<^sub>\<Theta>
-  \<phi>"} implies @{text "\<Gamma>' \<turnstile>\<^sub>\<Theta>\<^sub>' \<phi>"} for contexts @{text "\<Theta>'
-  \<supseteq> \<Theta>"} and @{text "\<Gamma>' \<supseteq> \<Gamma>"}.
+  transferred into a \<^emph>\<open>larger\<close> context, i.e.\ \<open>\<Gamma> \<turnstile>\<^sub>\<Theta>
+  \<phi>\<close> implies \<open>\<Gamma>' \<turnstile>\<^sub>\<Theta>\<^sub>' \<phi>\<close> for contexts \<open>\<Theta>'
+  \<supseteq> \<Theta>\<close> and \<open>\<Gamma>' \<supseteq> \<Gamma>\<close>.
 
   \<^item> Export: discharge of hypotheses admits results to be exported
-  into a \<^emph>\<open>smaller\<close> context, i.e.\ @{text "\<Gamma>' \<turnstile>\<^sub>\<Theta> \<phi>"}
-  implies @{text "\<Gamma> \<turnstile>\<^sub>\<Theta> \<Delta> \<Longrightarrow> \<phi>"} where @{text "\<Gamma>' \<supseteq> \<Gamma>"} and
-  @{text "\<Delta> = \<Gamma>' - \<Gamma>"}.  Note that @{text "\<Theta>"} remains unchanged here,
-  only the @{text "\<Gamma>"} part is affected.
+  into a \<^emph>\<open>smaller\<close> context, i.e.\ \<open>\<Gamma>' \<turnstile>\<^sub>\<Theta> \<phi>\<close>
+  implies \<open>\<Gamma> \<turnstile>\<^sub>\<Theta> \<Delta> \<Longrightarrow> \<phi>\<close> where \<open>\<Gamma>' \<supseteq> \<Gamma>\<close> and
+  \<open>\<Delta> = \<Gamma>' - \<Gamma>\<close>.  Note that \<open>\<Theta>\<close> remains unchanged here,
+  only the \<open>\<Gamma>\<close> part is affected.
 
 
   \<^medskip>
   By modeling the main characteristics of the primitive
-  @{text "\<Theta>"} and @{text "\<Gamma>"} above, and abstracting over any
+  \<open>\<Theta>\<close> and \<open>\<Gamma>\<close> above, and abstracting over any
   particular logical content, we arrive at the fundamental notions of
   \<^emph>\<open>theory context\<close> and \<^emph>\<open>proof context\<close> in Isabelle/Isar.
   These implement a certain policy to manage arbitrary \<^emph>\<open>context
@@ -48,15 +48,14 @@
   data at compile time.
 
   The internal bootstrap process of Isabelle/Pure eventually reaches a
-  stage where certain data slots provide the logical content of @{text
-  "\<Theta>"} and @{text "\<Gamma>"} sketched above, but this does not stop there!
+  stage where certain data slots provide the logical content of \<open>\<Theta>\<close> and \<open>\<Gamma>\<close> sketched above, but this does not stop there!
   Various additional data slots support all kinds of mechanisms that
   are not necessarily part of the core logic.
 
   For example, there would be data for canonical introduction and
   elimination rules for arbitrary operators (depending on the
   object-logic and application), which enables users to perform
-  standard proof steps implicitly (cf.\ the @{text "rule"} method
+  standard proof steps implicitly (cf.\ the \<open>rule\<close> method
   @{cite "isabelle-isar-ref"}).
 
   \<^medskip>
@@ -80,28 +79,27 @@
   ancestor theories.  To this end, the system maintains a set of
   symbolic ``identification stamps'' within each theory.
 
-  The @{text "begin"} operation starts a new theory by importing several
+  The \<open>begin\<close> operation starts a new theory by importing several
   parent theories (with merged contents) and entering a special mode of
-  nameless incremental updates, until the final @{text "end"} operation is
+  nameless incremental updates, until the final \<open>end\<close> operation is
   performed.
 
   \<^medskip>
   The example in \figref{fig:ex-theory} below shows a theory
-  graph derived from @{text "Pure"}, with theory @{text "Length"}
-  importing @{text "Nat"} and @{text "List"}.  The body of @{text
-  "Length"} consists of a sequence of updates, resulting in locally a
+  graph derived from \<open>Pure\<close>, with theory \<open>Length\<close>
+  importing \<open>Nat\<close> and \<open>List\<close>.  The body of \<open>Length\<close> consists of a sequence of updates, resulting locally in a
   linear sub-theory relation for each intermediate step.
 
   \begin{figure}[htb]
   \begin{center}
   \begin{tabular}{rcccl}
-        &            & @{text "Pure"} \\
-        &            & @{text "\<down>"} \\
-        &            & @{text "FOL"} \\
+        &            & \<open>Pure\<close> \\
+        &            & \<open>\<down>\<close> \\
+        &            & \<open>FOL\<close> \\
         & $\swarrow$ &              & $\searrow$ & \\
-  @{text "Nat"} &    &              &            & @{text "List"} \\
+  \<open>Nat\<close> &    &              &            & \<open>List\<close> \\
         & $\searrow$ &              & $\swarrow$ \\
-        &            & @{text "Length"} \\
+        &            & \<open>Length\<close> \\
         &            & \multicolumn{3}{l}{~~@{keyword "begin"}} \\
         &            & $\vdots$~~ \\
         &            & \multicolumn{3}{l}{~~@{command "end"}} \\
@@ -128,30 +126,30 @@
 
   \<^descr> Type @{ML_type theory} represents theory contexts.
 
-  \<^descr> @{ML "Context.eq_thy"}~@{text "(thy\<^sub>1, thy\<^sub>2)"} check strict
+  \<^descr> @{ML "Context.eq_thy"}~\<open>(thy\<^sub>1, thy\<^sub>2)\<close> check strict
   identity of two theories.
 
-  \<^descr> @{ML "Context.subthy"}~@{text "(thy\<^sub>1, thy\<^sub>2)"} compares theories
+  \<^descr> @{ML "Context.subthy"}~\<open>(thy\<^sub>1, thy\<^sub>2)\<close> compares theories
   according to the intrinsic graph structure of the construction.
   This sub-theory relation is a nominal approximation of inclusion
-  (@{text "\<subseteq>"}) of the corresponding content (according to the
+  (\<open>\<subseteq>\<close>) of the corresponding content (according to the
   semantics of the ML modules that implement the data).
 
-  \<^descr> @{ML "Theory.begin_theory"}~@{text "name parents"} constructs
+  \<^descr> @{ML "Theory.begin_theory"}~\<open>name parents\<close> constructs
   a new theory based on the given parents.  This ML function is
   normally not invoked directly.
 
-  \<^descr> @{ML "Theory.parents_of"}~@{text "thy"} returns the direct
-  ancestors of @{text thy}.
+  \<^descr> @{ML "Theory.parents_of"}~\<open>thy\<close> returns the direct
+  ancestors of \<open>thy\<close>.
 
-  \<^descr> @{ML "Theory.ancestors_of"}~@{text "thy"} returns all
-  ancestors of @{text thy} (not including @{text thy} itself).
+  \<^descr> @{ML "Theory.ancestors_of"}~\<open>thy\<close> returns all
+  ancestors of \<open>thy\<close> (not including \<open>thy\<close> itself).
 \<close>
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "theory"} & : & @{text ML_antiquotation} \\
-  @{ML_antiquotation_def "theory_context"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "theory"} & : & \<open>ML_antiquotation\<close> \\
+  @{ML_antiquotation_def "theory_context"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -160,15 +158,15 @@
   @@{ML_antiquotation theory_context} nameref
   \<close>}
 
-  \<^descr> @{text "@{theory}"} refers to the background theory of the
+  \<^descr> \<open>@{theory}\<close> refers to the background theory of the
   current context --- as abstract value.
 
-  \<^descr> @{text "@{theory A}"} refers to an explicitly named ancestor
-  theory @{text "A"} of the background theory of the current context
+  \<^descr> \<open>@{theory A}\<close> refers to an explicitly named ancestor
+  theory \<open>A\<close> of the background theory of the current context
   --- as abstract value.
 
-  \<^descr> @{text "@{theory_context A}"} is similar to @{text "@{theory
-  A}"}, but presents the result as initial @{ML_type Proof.context}
+  \<^descr> \<open>@{theory_context A}\<close> is similar to \<open>@{theory
+  A}\<close>, but presents the result as initial @{ML_type Proof.context}
   (see also @{ML Proof_Context.init_global}).
 \<close>
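+
+text \<open>
+  An editorial sketch, using the antiquotations above: the sub-theory
+  relation is reflexive, while the ancestors exclude the theory itself:
+\<close>
+
+ML \<open>
+  val thy = @{theory};
+  @{assert} (Context.subthy (thy, thy));
+  @{assert} (not (member Context.eq_thy (Theory.ancestors_of thy) thy));
+\<close>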
 
@@ -176,16 +174,16 @@
 subsection \<open>Proof context \label{sec:context-proof}\<close>
 
 text \<open>A proof context is a container for pure data that refers to
-  the theory from which it is derived. The @{text "init"} operation
+  the theory from which it is derived. The \<open>init\<close> operation
   creates a proof context from a given theory. There is an explicit
-  @{text "transfer"} operation to force resynchronization with updates
+  \<open>transfer\<close> operation to force resynchronization with updates
   to the background theory -- this is rarely required in practice.
 
   Entities derived in a proof context need to record logical
   requirements explicitly, since there is no separate context
   identification or symbolic inclusion as for theories.  For example,
   hypotheses used in primitive derivations (cf.\ \secref{sec:thms})
-  are recorded separately within the sequent @{text "\<Gamma> \<turnstile> \<phi>"}, just to
+  are recorded separately within the sequent \<open>\<Gamma> \<turnstile> \<phi>\<close>, just to
   make double sure.  Results could still leak into an alien proof
   context due to programming errors, but Isabelle/Isar includes some
   extra validity checks in critical positions, notably at the end of a
@@ -211,23 +209,22 @@
 
   \<^descr> Type @{ML_type Proof.context} represents proof contexts.
 
-  \<^descr> @{ML Proof_Context.init_global}~@{text "thy"} produces a proof
-  context derived from @{text "thy"}, initializing all data.
+  \<^descr> @{ML Proof_Context.init_global}~\<open>thy\<close> produces a proof
+  context derived from \<open>thy\<close>, initializing all data.
 
-  \<^descr> @{ML Proof_Context.theory_of}~@{text "ctxt"} selects the
-  background theory from @{text "ctxt"}.
+  \<^descr> @{ML Proof_Context.theory_of}~\<open>ctxt\<close> selects the
+  background theory from \<open>ctxt\<close>.
 
-  \<^descr> @{ML Proof_Context.transfer}~@{text "thy ctxt"} promotes the
-  background theory of @{text "ctxt"} to the super theory @{text
-  "thy"}.
+  \<^descr> @{ML Proof_Context.transfer}~\<open>thy ctxt\<close> promotes the
+  background theory of \<open>ctxt\<close> to the super theory \<open>thy\<close>.
 \<close>
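+
+text \<open>
+  An editorial sketch: initializing a proof context and projecting back
+  recovers the background theory:
+\<close>
+
+ML \<open>
+  val thy = @{theory};
+  val ctxt = Proof_Context.init_global thy;
+  @{assert} (Context.eq_thy (Proof_Context.theory_of ctxt, thy));
+\<close>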
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "context"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "context"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
-  \<^descr> @{text "@{context}"} refers to \<^emph>\<open>the\<close> context at
+  \<^descr> \<open>@{context}\<close> refers to \<^emph>\<open>the\<close> context at
   compile-time --- as abstract value.  Independently of (local) theory
   or proof mode, this always produces a meaningful result.
 
@@ -246,10 +243,9 @@
   and combinators for lifting operations on either component of the
   disjoint sum.
 
-  Moreover, there are total operations @{text "theory_of"} and @{text
-  "proof_of"} to convert a generic context into either kind: a theory
+  Moreover, there are total operations \<open>theory_of\<close> and \<open>proof_of\<close> to convert a generic context into either kind: a theory
   can always be selected from the sum, while a proof context might
-  have to be constructed by an ad-hoc @{text "init"} operation, which
+  have to be constructed by an ad-hoc \<open>init\<close> operation, which
   incurs a small runtime overhead.
 \<close>
 
@@ -264,12 +260,12 @@
   "theory"} and @{ML_type "Proof.context"}, with the datatype
   constructors @{ML "Context.Theory"} and @{ML "Context.Proof"}.
 
-  \<^descr> @{ML Context.theory_of}~@{text "context"} always produces a
-  theory from the generic @{text "context"}, using @{ML
+  \<^descr> @{ML Context.theory_of}~\<open>context\<close> always produces a
+  theory from the generic \<open>context\<close>, using @{ML
   "Proof_Context.theory_of"} as required.
 
-  \<^descr> @{ML Context.proof_of}~@{text "context"} always produces a
-  proof context from the generic @{text "context"}, using @{ML
+  \<^descr> @{ML Context.proof_of}~\<open>context\<close> always produces a
+  proof context from the generic \<open>context\<close>, using @{ML
   "Proof_Context.init_global"} as required (note that this re-initializes the
   context data with each invocation).
 \<close>
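+
+text \<open>
+  An editorial sketch: wrapping a proof context as generic context and
+  selecting the background theory again:
+\<close>
+
+ML \<open>
+  val ctxt = @{context};
+  val gen = Context.Proof ctxt;
+  @{assert} (Context.eq_thy (Context.theory_of gen, Proof_Context.theory_of ctxt));
+\<close>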
@@ -287,20 +283,19 @@
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "\<type> T"} & representing type \\
-  @{text "\<val> empty: T"} & empty default value \\
-  @{text "\<val> extend: T \<rightarrow> T"} & re-initialize on import \\
-  @{text "\<val> merge: T \<times> T \<rightarrow> T"} & join on import \\
+  \<open>\<type> T\<close> & representing type \\
+  \<open>\<val> empty: T\<close> & empty default value \\
+  \<open>\<val> extend: T \<rightarrow> T\<close> & re-initialize on import \\
+  \<open>\<val> merge: T \<times> T \<rightarrow> T\<close> & join on import \\
   \end{tabular}
   \<^medskip>
 
-  The @{text "empty"} value acts as initial default for \<^emph>\<open>any\<close>
-  theory that does not declare actual data content; @{text "extend"}
-  is acts like a unitary version of @{text "merge"}.
+  The \<open>empty\<close> value acts as initial default for \<^emph>\<open>any\<close>
+  theory that does not declare actual data content; \<open>extend\<close>
+  acts like a unitary version of \<open>merge\<close>.
 
-  Implementing @{text "merge"} can be tricky.  The general idea is
-  that @{text "merge (data\<^sub>1, data\<^sub>2)"} inserts those parts of @{text
-  "data\<^sub>2"} into @{text "data\<^sub>1"} that are not yet present, while
+  Implementing \<open>merge\<close> can be tricky.  The general idea is
+  that \<open>merge (data\<^sub>1, data\<^sub>2)\<close> inserts those parts of \<open>data\<^sub>2\<close> into \<open>data\<^sub>1\<close> that are not yet present, while
   keeping the general order of things.  The @{ML Library.merge}
   function on plain lists may serve as canonical template.
 
@@ -313,15 +308,15 @@
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "\<type> T"} & representing type \\
-  @{text "\<val> init: theory \<rightarrow> T"} & produce initial value \\
+  \<open>\<type> T\<close> & representing type \\
+  \<open>\<val> init: theory \<rightarrow> T\<close> & produce initial value \\
   \end{tabular}
   \<^medskip>
 
-  The @{text "init"} operation is supposed to produce a pure value
+  The \<open>init\<close> operation is supposed to produce a pure value
   from the given background theory and should be somehow
   ``immediate''.  Whenever a proof context is initialized, which
-  happens frequently, the the system invokes the @{text "init"}
+  happens frequently, the system invokes the \<open>init\<close>
   operation of \<^emph>\<open>all\<close> theory data slots ever declared.  This also
   means that one needs to be economical about the total number of proof
   data declarations in the system, i.e.\ each ML module should declare
@@ -330,19 +325,19 @@
   avoided!
 
   \paragraph{Generic data} provides a hybrid interface for both theory
-  and proof data.  The @{text "init"} operation for proof contexts is
+  and proof data.  The \<open>init\<close> operation for proof contexts is
   predefined to select the current data value from the background
   theory.
 
   \<^bigskip>
-  Any of the above data declarations over type @{text "T"}
+  Any of the above data declarations over type \<open>T\<close>
   result in an ML structure with the following signature:
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "get: context \<rightarrow> T"} \\
-  @{text "put: T \<rightarrow> context \<rightarrow> context"} \\
-  @{text "map: (T \<rightarrow> T) \<rightarrow> context \<rightarrow> context"} \\
+  \<open>get: context \<rightarrow> T\<close> \\
+  \<open>put: T \<rightarrow> context \<rightarrow> context\<close> \\
+  \<open>map: (T \<rightarrow> T) \<rightarrow> context \<rightarrow> context\<close> \\
   \end{tabular}
   \<^medskip>
 
@@ -360,15 +355,15 @@
   @{index_ML_functor Generic_Data} \\
   \end{mldecls}
 
-  \<^descr> @{ML_functor Theory_Data}@{text "(spec)"} declares data for
+  \<^descr> @{ML_functor Theory_Data}\<open>(spec)\<close> declares data for
   type @{ML_type theory} according to the specification provided as
   argument structure.  The resulting structure provides data init and
   access operations as described above.
 
-  \<^descr> @{ML_functor Proof_Data}@{text "(spec)"} is analogous to
+  \<^descr> @{ML_functor Proof_Data}\<open>(spec)\<close> is analogous to
   @{ML_functor Theory_Data} for type @{ML_type Proof.context}.
 
-  \<^descr> @{ML_functor Generic_Data}@{text "(spec)"} is analogous to
+  \<^descr> @{ML_functor Generic_Data}\<open>(spec)\<close> is analogous to
   @{ML_functor Theory_Data} for type @{ML_type Context.generic}.
 \<close>
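+
+text \<open>
+  An editorial sketch of such a declaration (the structure name
+  \<open>My_Data\<close> is made up; @{ML Library.merge} serves as canonical
+  merge on plain lists):
+\<close>
+
+ML \<open>
+  structure My_Data = Theory_Data
+  (
+    type T = string list;
+    val empty = [];
+    val extend = I;
+    val merge = Library.merge (op =);
+  );
+  val thy' = My_Data.map (cons "a") @{theory};
+  @{assert} (My_Data.get thy' = ["a"]);
+\<close>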
 
@@ -515,16 +510,16 @@
   string Config.T"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Config.get}~@{text "ctxt config"} gets the value of
-  @{text "config"} in the given context.
+  \<^descr> @{ML Config.get}~\<open>ctxt config\<close> gets the value of
+  \<open>config\<close> in the given context.
 
-  \<^descr> @{ML Config.map}~@{text "config f ctxt"} updates the context
-  by updating the value of @{text "config"}.
+  \<^descr> @{ML Config.map}~\<open>config f ctxt\<close> updates the context
+  by updating the value of \<open>config\<close>.
 
-  \<^descr> @{text "config ="}~@{ML Attrib.setup_config_bool}~@{text "name
-  default"} creates a named configuration option of type @{ML_type
-  bool}, with the given @{text "default"} depending on the application
-  context.  The resulting @{text "config"} can be used to get/map its
+  \<^descr> \<open>config =\<close>~@{ML Attrib.setup_config_bool}~\<open>name
+  default\<close> creates a named configuration option of type @{ML_type
+  bool}, with the given \<open>default\<close> depending on the application
+  context.  The resulting \<open>config\<close> can be used to get/map its
   value in a given context.  There is an implicit update of the
   background theory that registers the option as attribute with some
   concrete syntax.
@@ -535,7 +530,7 @@
 \<close>
 
 text %mlex \<open>The following example shows how to declare and use a
-  Boolean configuration option called @{text "my_flag"} with constant
+  Boolean configuration option called \<open>my_flag\<close> with constant
   default value @{ML false}.\<close>
 
 ML \<open>
@@ -578,8 +573,8 @@
 
 text \<open>In principle, a name is just a string, but there are various
   conventions for representing additional structure.  For example,
-  ``@{text "Foo.bar.baz"}'' is considered as a long name consisting of
-  qualifier @{text "Foo.bar"} and base name @{text "baz"}.  The
+  ``\<open>Foo.bar.baz\<close>'' is considered as a long name consisting of
+  qualifier \<open>Foo.bar\<close> and base name \<open>baz\<close>.  The
   individual constituents of a name may have further substructure,
   e.g.\ the string ``@{verbatim \<alpha>}'' encodes as a single
   symbol (\secref{sec:symbols}).
@@ -610,31 +605,27 @@
   \<^emph>\<open>internal name\<close>, two underscores means \<^emph>\<open>Skolem name\<close>,
   three underscores means \<^emph>\<open>internal Skolem name\<close>.
 
-  For example, the basic name @{text "foo"} has the internal version
-  @{text "foo_"}, with Skolem versions @{text "foo__"} and @{text
-  "foo___"}, respectively.
+  For example, the basic name \<open>foo\<close> has the internal version
+  \<open>foo_\<close>, with Skolem versions \<open>foo__\<close> and \<open>foo___\<close>, respectively.
 
   These special versions provide copies of the basic name space, apart
   from anything that normally appears in the user text.  For example,
   system generated variables in Isar proof contexts are usually marked
-  as internal, which prevents mysterious names like @{text "xaa"} to
-  appear in human-readable text.
+  as internal, which prevents mysterious names like \<open>xaa\<close> from
+  appearing in human-readable text.
 
   \<^medskip>
   Manipulating binding scopes often requires on-the-fly
   renamings.  A \<^emph>\<open>name context\<close> contains a collection of already
-  used names.  The @{text "declare"} operation adds names to the
+  used names.  The \<open>declare\<close> operation adds names to the
   context.
 
-  The @{text "invents"} operation derives a number of fresh names from
+  The \<open>invents\<close> operation derives a number of fresh names from
   a given starting point.  For example, the first three names derived
-  from @{text "a"} are @{text "a"}, @{text "b"}, @{text "c"}.
+  from \<open>a\<close> are \<open>a\<close>, \<open>b\<close>, \<open>c\<close>.
 
-  The @{text "variants"} operation produces fresh names by
-  incrementing tentative names as base-26 numbers (with digits @{text
-  "a..z"}) until all clashes are resolved.  For example, name @{text
-  "foo"} results in variants @{text "fooa"}, @{text "foob"}, @{text
-  "fooc"}, \dots, @{text "fooaa"}, @{text "fooab"} etc.; each renaming
+  The \<open>variants\<close> operation produces fresh names by
+  incrementing tentative names as base-26 numbers (with digits \<open>a..z\<close>) until all clashes are resolved.  For example, name \<open>foo\<close> results in variants \<open>fooa\<close>, \<open>foob\<close>, \<open>fooc\<close>, \dots, \<open>fooaa\<close>, \<open>fooab\<close> etc.; each renaming
   step picks the next unused variant from this sequence.
 \<close>
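+
+text \<open>
+  An editorial sketch of these operations:
+\<close>
+
+ML \<open>
+  @{assert} (Name.internal "foo" = "foo_");
+  @{assert} (Name.invent Name.context "a" 3 = ["a", "b", "c"]);
+  val (foo', _) = Name.variant "foo" (Name.declare "foo" Name.context);
+  @{assert} (foo' = "fooa");
+\<close>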
 
@@ -654,25 +645,24 @@
   @{index_ML Variable.names_of: "Proof.context -> Name.context"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Name.internal}~@{text "name"} produces an internal name
+  \<^descr> @{ML Name.internal}~\<open>name\<close> produces an internal name
   by adding one underscore.
 
-  \<^descr> @{ML Name.skolem}~@{text "name"} produces a Skolem name by
+  \<^descr> @{ML Name.skolem}~\<open>name\<close> produces a Skolem name by
   adding two underscores.
 
   \<^descr> Type @{ML_type Name.context} represents the context of already
   used names; the initial value is @{ML "Name.context"}.
 
-  \<^descr> @{ML Name.declare}~@{text "name"} enters a used name into the
+  \<^descr> @{ML Name.declare}~\<open>name\<close> enters a used name into the
   context.
 
-  \<^descr> @{ML Name.invent}~@{text "context name n"} produces @{text
-  "n"} fresh names derived from @{text "name"}.
+  \<^descr> @{ML Name.invent}~\<open>context name n\<close> produces \<open>n\<close> fresh names derived from \<open>name\<close>.
 
-  \<^descr> @{ML Name.variant}~@{text "name context"} produces a fresh
-  variant of @{text "name"}; the result is declared to the context.
+  \<^descr> @{ML Name.variant}~\<open>name context\<close> produces a fresh
+  variant of \<open>name\<close>; the result is declared to the context.
 
-  \<^descr> @{ML Variable.names_of}~@{text "ctxt"} retrieves the context
+  \<^descr> @{ML Variable.names_of}~\<open>ctxt\<close> retrieves the context
   of declared type and term variable names.  Projecting a proof
   context down to a primitive name context is occasionally useful when
   invoking lower-level operations.  Regular management of ``fresh
@@ -718,36 +708,36 @@
 subsection \<open>Indexed names \label{sec:indexname}\<close>
 
 text \<open>
-  An \<^emph>\<open>indexed name\<close> (or @{text "indexname"}) is a pair of a basic
+  An \<^emph>\<open>indexed name\<close> (or \<open>indexname\<close>) is a pair of a basic
   name and a natural number.  This representation allows efficient
   renaming by incrementing the second component only.  The canonical
   way to rename two collections of indexnames apart from each other is
-  this: determine the maximum index @{text "maxidx"} of the first
+  this: determine the maximum index \<open>maxidx\<close> of the first
   collection, then increment all indexes of the second collection by
-  @{text "maxidx + 1"}; the maximum index of an empty collection is
-  @{text "-1"}.
+  \<open>maxidx + 1\<close>; the maximum index of an empty collection is
+  \<open>-1\<close>.
 
   Occasionally, basic names are injected into the same pair type of
-  indexed names: then @{text "(x, -1)"} is used to encode the basic
-  name @{text "x"}.
+  indexed names: then \<open>(x, -1)\<close> is used to encode the basic
+  name \<open>x\<close>.
 
   \<^medskip>
   Isabelle syntax observes the following rules for
-  representing an indexname @{text "(x, i)"} as a packed string:
+  representing an indexname \<open>(x, i)\<close> as a packed string:
 
-  \<^item> @{text "?x"} if @{text "x"} does not end with a digit and @{text "i = 0"},
+  \<^item> \<open>?x\<close> if \<open>x\<close> does not end with a digit and \<open>i = 0\<close>,
 
-  \<^item> @{text "?xi"} if @{text "x"} does not end with a digit,
+  \<^item> \<open>?xi\<close> if \<open>x\<close> does not end with a digit,
 
-  \<^item> @{text "?x.i"} otherwise.
+  \<^item> \<open>?x.i\<close> otherwise.
 
 
   Indexnames may acquire large index numbers after several maxidx
   shifts have been applied.  Results are usually normalized towards
-  @{text "0"} at certain checkpoints, notably at the end of a proof.
+  \<open>0\<close> at certain checkpoints, notably at the end of a proof.
   This works by producing variants of the corresponding basic name
-  components.  For example, the collection @{text "?x1, ?x7, ?x42"}
-  becomes @{text "?x, ?xa, ?xb"}.
+  components.  For example, the collection \<open>?x1, ?x7, ?x42\<close>
+  becomes \<open>?x, ?xa, ?xb\<close>.
 \<close>
 
 text %mlref \<open>
@@ -757,8 +747,8 @@
 
   \<^descr> Type @{ML_type indexname} represents indexed names.  This is
   an abbreviation for @{ML_type "string * int"}.  The second component
-  is usually non-negative, except for situations where @{text "(x,
-  -1)"} is used to inject basic names into this type.  Other negative
+  is usually non-negative, except for situations where \<open>(x,
+  -1)\<close> is used to inject basic names into this type.  Other negative
   indexes should not be used.
 \<close>
 
@@ -767,16 +757,16 @@
 
 text \<open>A \<^emph>\<open>long name\<close> consists of a sequence of non-empty name
   components.  The packed representation uses a dot as separator, as
-  in ``@{text "A.b.c"}''.  The last component is called \<^emph>\<open>base
+  in ``\<open>A.b.c\<close>''.  The last component is called \<^emph>\<open>base
   name\<close>, the remaining prefix is called \<^emph>\<open>qualifier\<close> (which may be
   empty).  The qualifier can be understood as the access path to the
   named entity while passing through some nested block-structure,
   although our free-form long names do not really enforce any strict
   discipline.
 
-  For example, an item named ``@{text "A.b.c"}'' may be understood as
-  a local entity @{text "c"}, within a local structure @{text "b"},
-  within a global structure @{text "A"}.  In practice, long names
+  For example, an item named ``\<open>A.b.c\<close>'' may be understood as
+  a local entity \<open>c\<close>, within a local structure \<open>b\<close>,
+  within a global structure \<open>A\<close>.  In practice, long names
   usually represent 1--3 levels of qualification.  User ML code should
   not make any assumptions about the particular structure of long
   names!
@@ -796,42 +786,41 @@
   @{index_ML Long_Name.explode: "string -> string list"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Long_Name.base_name}~@{text "name"} returns the base name
+  \<^descr> @{ML Long_Name.base_name}~\<open>name\<close> returns the base name
   of a long name.
 
-  \<^descr> @{ML Long_Name.qualifier}~@{text "name"} returns the qualifier
+  \<^descr> @{ML Long_Name.qualifier}~\<open>name\<close> returns the qualifier
   of a long name.
 
-  \<^descr> @{ML Long_Name.append}~@{text "name\<^sub>1 name\<^sub>2"} appends two long
+  \<^descr> @{ML Long_Name.append}~\<open>name\<^sub>1 name\<^sub>2\<close> appends two long
   names.
 
-  \<^descr> @{ML Long_Name.implode}~@{text "names"} and @{ML
-  Long_Name.explode}~@{text "name"} convert between the packed string
+  \<^descr> @{ML Long_Name.implode}~\<open>names\<close> and @{ML
+  Long_Name.explode}~\<open>name\<close> convert between the packed string
   representation and the explicit list form of long names.
 \<close>
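+
+text \<open>
+  An editorial sketch of these operations on a packed long name:
+\<close>
+
+ML \<open>
+  @{assert} (Long_Name.base_name "A.b.c" = "c");
+  @{assert} (Long_Name.qualifier "A.b.c" = "A.b");
+  @{assert} (Long_Name.append "A.b" "c" = "A.b.c");
+  @{assert} (Long_Name.explode "A.b.c" = ["A", "b", "c"]);
+\<close>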
 
 
 subsection \<open>Name spaces \label{sec:name-space}\<close>
 
-text \<open>A @{text "name space"} manages a collection of long names,
+text \<open>A \<open>name space\<close> manages a collection of long names,
   together with a mapping between partially qualified external names
   and fully qualified internal names (in both directions).  Note that
-  the corresponding @{text "intern"} and @{text "extern"} operations
-  are mostly used for parsing and printing only!  The @{text
-  "declare"} operation augments a name space according to the accesses
+  the corresponding \<open>intern\<close> and \<open>extern\<close> operations
+  are mostly used for parsing and printing only!  The \<open>declare\<close> operation augments a name space according to the accesses
   determined by a given binding, and a naming policy from the context.
 
   \<^medskip>
-  A @{text "binding"} specifies details about the prospective
+  A \<open>binding\<close> specifies details about the prospective
   long name of a newly introduced formal entity.  It consists of a
   base name, prefixes for qualification (separate ones for system
   infrastructure and user-space mechanisms), a slot for the original
   source position, and some additional flags.
 
   \<^medskip>
-  A @{text "naming"} provides some additional details for
+  A \<open>naming\<close> provides some additional details for
   producing a long name from a binding.  Normally, the naming is
-  implicit in the theory or proof context.  The @{text "full"}
+  implicit in the theory or proof context.  The \<open>full\<close>
   operation (and its variants for different context types) produces a
   fully qualified internal name to be entered into a name space.  The
   main equation of this ``chemical reaction'' when binding new
@@ -839,7 +828,7 @@
 
   \<^medskip>
   \begin{tabular}{l}
-  @{text "binding + naming \<longrightarrow> long name + name space accesses"}
+  \<open>binding + naming \<longrightarrow> long name + name space accesses\<close>
   \end{tabular}
 
   \<^bigskip>
@@ -847,13 +836,13 @@
   each kind of formal entity, e.g.\ fact, logical constant, type
   constructor, type class.  It is usually clear from the occurrence in
   concrete syntax (or from the scope) which kind of entity a name
-  refers to.  For example, the very same name @{text "c"} may be used
+  refers to.  For example, the very same name \<open>c\<close> may be used
   uniformly for a constant, type constructor, and type class.
 
   There are common schemes to name derived entities systematically
   according to the name of the main logical entity involved, e.g.\
-  fact @{text "c.intro"} for a canonical introduction rule related to
-  constant @{text "c"}.  This technique of mapping names from one
+  fact \<open>c.intro\<close> for a canonical introduction rule related to
+  constant \<open>c\<close>.  This technique of mapping names from one
   space into another requires some care in order to avoid conflicts.
   In particular, theorem names derived from a type constructor or type
   class should get an extra suffix in addition to the usual
@@ -863,9 +852,9 @@
   \<^medskip>
   \begin{tabular}{ll}
   logical entity & fact name \\\hline
-  constant @{text "c"} & @{text "c.intro"} \\
-  type @{text "c"} & @{text "c_type.intro"} \\
-  class @{text "c"} & @{text "c_class.intro"} \\
+  constant \<open>c\<close> & \<open>c.intro\<close> \\
+  type \<open>c\<close> & \<open>c_type.intro\<close> \\
+  class \<open>c\<close> & \<open>c_class.intro\<close> \\
   \end{tabular}
 \<close>
 
@@ -901,15 +890,14 @@
 
   \<^descr> @{ML Binding.empty} is the empty binding.
 
-  \<^descr> @{ML Binding.name}~@{text "name"} produces a binding with base
-  name @{text "name"}.  Note that this lacks proper source position
+  \<^descr> @{ML Binding.name}~\<open>name\<close> produces a binding with base
+  name \<open>name\<close>.  Note that this lacks proper source position
   information; see also the ML antiquotation @{ML_antiquotation
   binding}.
 
-  \<^descr> @{ML Binding.qualify}~@{text "mandatory name binding"}
-  prefixes qualifier @{text "name"} to @{text "binding"}.  The @{text
-  "mandatory"} flag tells if this name component always needs to be
-  given in name space accesses --- this is mostly @{text "false"} in
+  \<^descr> @{ML Binding.qualify}~\<open>mandatory name binding\<close>
+  prefixes qualifier \<open>name\<close> to \<open>binding\<close>.  The \<open>mandatory\<close> flag tells if this name component always needs to be
+  given in name space accesses --- this is mostly \<open>false\<close> in
   practice.  Note that this part of qualification is typically used in
   derived specification mechanisms.
 
@@ -918,14 +906,14 @@
   typically used in the infrastructure for modular specifications,
   notably ``local theory targets'' (see also \chref{ch:local-theory}).
 
-  \<^descr> @{ML Binding.concealed}~@{text "binding"} indicates that the
+  \<^descr> @{ML Binding.concealed}~\<open>binding\<close> indicates that the
   binding shall refer to an entity that serves foundational purposes
   only.  This flag helps to mark implementation details of
   specification mechanisms etc.  Other tools should not depend on the
   particulars of concealed entities (cf.\ @{ML
   Name_Space.is_concealed}).
 
-  \<^descr> @{ML Binding.print}~@{text "binding"} produces a string
+  \<^descr> @{ML Binding.print}~\<open>binding\<close> produces a string
   representation for human-readable output, together with some formal
   markup that might get used in GUI front-ends, for example.
 
@@ -936,26 +924,25 @@
   global and lacks any path prefix.  In a regular theory context this is
   augmented by a path prefix consisting of the theory name.
 
-  \<^descr> @{ML Name_Space.add_path}~@{text "path naming"} augments the
+  \<^descr> @{ML Name_Space.add_path}~\<open>path naming\<close> augments the
   naming policy by extending its path component.
 
-  \<^descr> @{ML Name_Space.full_name}~@{text "naming binding"} turns a
+  \<^descr> @{ML Name_Space.full_name}~\<open>naming binding\<close> turns a
   name binding (usually a basic name) into the fully qualified
   internal name, according to the given naming policy.
 
   \<^descr> Type @{ML_type Name_Space.T} represents name spaces.
 
-  \<^descr> @{ML Name_Space.empty}~@{text "kind"} and @{ML Name_Space.merge}~@{text
-  "(space\<^sub>1, space\<^sub>2)"} are the canonical operations for
+  \<^descr> @{ML Name_Space.empty}~\<open>kind\<close> and @{ML Name_Space.merge}~\<open>(space\<^sub>1, space\<^sub>2)\<close> are the canonical operations for
   maintaining name spaces according to theory data management
-  (\secref{sec:context-data}); @{text "kind"} is a formal comment
+  (\secref{sec:context-data}); \<open>kind\<close> is a formal comment
   to characterize the purpose of a name space.
 
-  \<^descr> @{ML Name_Space.declare}~@{text "context strict binding
-  space"} enters a name binding as fully qualified internal name into
+  \<^descr> @{ML Name_Space.declare}~\<open>context strict binding
+  space\<close> enters a name binding as fully qualified internal name into
   the name space, using the naming of the context.
 
-  \<^descr> @{ML Name_Space.intern}~@{text "space name"} internalizes a
+  \<^descr> @{ML Name_Space.intern}~\<open>space name\<close> internalizes a
   (partially qualified) external name.
 
   This operation is mostly for parsing!  Note that fully qualified
@@ -964,28 +951,28 @@
   (or their derivatives for @{ML_type theory} and
   @{ML_type Proof.context}).
 
-  \<^descr> @{ML Name_Space.extern}~@{text "ctxt space name"} externalizes a
+  \<^descr> @{ML Name_Space.extern}~\<open>ctxt space name\<close> externalizes a
   (fully qualified) internal name.
 
   This operation is mostly for printing!  User code should not rely on
   the precise result too much.
 
-  \<^descr> @{ML Name_Space.is_concealed}~@{text "space name"} indicates
-  whether @{text "name"} refers to a strictly private entity that
+  \<^descr> @{ML Name_Space.is_concealed}~\<open>space name\<close> indicates
+  whether \<open>name\<close> refers to a strictly private entity that
   other tools are supposed to ignore!
 \<close>
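+
+text %mlex \<open>The following sketch replays the ``chemical reaction'' above
+  in the small, with a hypothetical path \<open>Test\<close> and qualifier \<open>b\<close>:\<close>
+
+ML_val \<open>
+  (*binding with base name "c", qualified by "b"*)
+  val binding = Binding.qualify false "b" (Binding.name "c");
+  (*naming with path prefix "Test", as for a theory of that name*)
+  val naming = Name_Space.add_path "Test" Name_Space.default_naming;
+  (*binding + naming \<longrightarrow> long name*)
+  @{assert} (Name_Space.full_name naming binding = "Test.b.c");
+\<close>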
 
 text %mlantiq \<open>
   \begin{matharray}{rcl}
-  @{ML_antiquotation_def "binding"} & : & @{text ML_antiquotation} \\
+  @{ML_antiquotation_def "binding"} & : & \<open>ML_antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
   @@{ML_antiquotation binding} name
   \<close>}
 
-  \<^descr> @{text "@{binding name}"} produces a binding with base name
-  @{text "name"} and the source position taken from the concrete
+  \<^descr> \<open>@{binding name}\<close> produces a binding with base name
+  \<open>name\<close> and the source position taken from the concrete
   syntax of this antiquotation.  In many situations this is more
   appropriate than the more basic @{ML Binding.name} function.
 \<close>
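+
+text %mlex \<open>For example, with a hypothetical base name \<open>foo\<close>:\<close>
+
+ML_val \<open>
+  val b = @{binding foo};
+  (*the base name is recovered literally; the position comes for free*)
+  @{assert} (Binding.name_of b = "foo");
+\<close>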
--- a/src/Doc/Implementation/Proof.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Proof.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -7,27 +7,24 @@
 section \<open>Variables \label{sec:variables}\<close>
 
 text \<open>
-  Any variable that is not explicitly bound by @{text "\<lambda>"}-abstraction
+  Any variable that is not explicitly bound by \<open>\<lambda>\<close>-abstraction
   is considered as ``free''.  Logically, free variables act like
-  outermost universal quantification at the sequent level: @{text
-  "A\<^sub>1(x), \<dots>, A\<^sub>n(x) \<turnstile> B(x)"} means that the result
-  holds \<^emph>\<open>for all\<close> values of @{text "x"}.  Free variables for
-  terms (not types) can be fully internalized into the logic: @{text
-  "\<turnstile> B(x)"} and @{text "\<turnstile> \<And>x. B(x)"} are interchangeable, provided
-  that @{text "x"} does not occur elsewhere in the context.
-  Inspecting @{text "\<turnstile> \<And>x. B(x)"} more closely, we see that inside the
-  quantifier, @{text "x"} is essentially ``arbitrary, but fixed'',
+  outermost universal quantification at the sequent level: \<open>A\<^sub>1(x), \<dots>, A\<^sub>n(x) \<turnstile> B(x)\<close> means that the result
+  holds \<^emph>\<open>for all\<close> values of \<open>x\<close>.  Free variables for
+  terms (not types) can be fully internalized into the logic: \<open>\<turnstile> B(x)\<close> and \<open>\<turnstile> \<And>x. B(x)\<close> are interchangeable, provided
+  that \<open>x\<close> does not occur elsewhere in the context.
+  Inspecting \<open>\<turnstile> \<And>x. B(x)\<close> more closely, we see that inside the
+  quantifier, \<open>x\<close> is essentially ``arbitrary, but fixed'',
   while from outside it appears as a place-holder for instantiation
-  (thanks to @{text "\<And>"} elimination).
+  (thanks to \<open>\<And>\<close> elimination).
 
   The Pure logic represents the idea of variables being either inside
   or outside the current scope by providing separate syntactic
-  categories for \<^emph>\<open>fixed variables\<close> (e.g.\ @{text "x"}) vs.\
-  \<^emph>\<open>schematic variables\<close> (e.g.\ @{text "?x"}).  Incidently, a
-  universal result @{text "\<turnstile> \<And>x. B(x)"} has the HHF normal form @{text
-  "\<turnstile> B(?x)"}, which represents its generality without requiring an
+  categories for \<^emph>\<open>fixed variables\<close> (e.g.\ \<open>x\<close>) vs.\
+  \<^emph>\<open>schematic variables\<close> (e.g.\ \<open>?x\<close>).  Incidentally, a
+  universal result \<open>\<turnstile> \<And>x. B(x)\<close> has the HHF normal form \<open>\<turnstile> B(?x)\<close>, which represents its generality without requiring an
   explicit quantifier.  The same principle works for type variables:
-  @{text "\<turnstile> B(?\<alpha>)"} represents the idea of ``@{text "\<turnstile> \<forall>\<alpha>. B(\<alpha>)"}''
+  \<open>\<turnstile> B(?\<alpha>)\<close> represents the idea of ``\<open>\<turnstile> \<forall>\<alpha>. B(\<alpha>)\<close>''
   without demanding a truly polymorphic framework.
 
   \<^medskip>
@@ -36,66 +33,64 @@
   depend on type variables, which means that type variables would have
   to be declared first.  For example, a raw type-theoretic framework
   would demand the context to be constructed in stages as follows:
-  @{text "\<Gamma> = \<alpha>: type, x: \<alpha>, a: A(x\<^sub>\<alpha>)"}.
+  \<open>\<Gamma> = \<alpha>: type, x: \<alpha>, a: A(x\<^sub>\<alpha>)\<close>.
 
   We allow a slightly less formalistic mode of operation: term
-  variables @{text "x"} are fixed without specifying a type yet
+  variables \<open>x\<close> are fixed without specifying a type yet
   (essentially \<^emph>\<open>all\<close> potential occurrences of some instance
-  @{text "x\<^sub>\<tau>"} are fixed); the first occurrence of @{text "x"}
+  \<open>x\<^sub>\<tau>\<close> are fixed); the first occurrence of \<open>x\<close>
   within a specific term assigns its most general type, which is then
   maintained consistently in the context.  The above example becomes
-  @{text "\<Gamma> = x: term, \<alpha>: type, A(x\<^sub>\<alpha>)"}, where type @{text
-  "\<alpha>"} is fixed \<^emph>\<open>after\<close> term @{text "x"}, and the constraint
-  @{text "x :: \<alpha>"} is an implicit consequence of the occurrence of
-  @{text "x\<^sub>\<alpha>"} in the subsequent proposition.
+  \<open>\<Gamma> = x: term, \<alpha>: type, A(x\<^sub>\<alpha>)\<close>, where type \<open>\<alpha>\<close> is fixed \<^emph>\<open>after\<close> term \<open>x\<close>, and the constraint
+  \<open>x :: \<alpha>\<close> is an implicit consequence of the occurrence of
+  \<open>x\<^sub>\<alpha>\<close> in the subsequent proposition.
 
   This twist of dependencies is also accommodated by the reverse
   operation of exporting results from a context: a type variable
-  @{text "\<alpha>"} is considered fixed as long as it occurs in some fixed
-  term variable of the context.  For example, exporting @{text "x:
-  term, \<alpha>: type \<turnstile> x\<^sub>\<alpha> \<equiv> x\<^sub>\<alpha>"} produces in the first step @{text "x: term
-  \<turnstile> x\<^sub>\<alpha> \<equiv> x\<^sub>\<alpha>"} for fixed @{text "\<alpha>"}, and only in the second step
-  @{text "\<turnstile> ?x\<^sub>?\<^sub>\<alpha> \<equiv> ?x\<^sub>?\<^sub>\<alpha>"} for schematic @{text "?x"} and @{text "?\<alpha>"}.
+  \<open>\<alpha>\<close> is considered fixed as long as it occurs in some fixed
+  term variable of the context.  For example, exporting \<open>x:
+  term, \<alpha>: type \<turnstile> x\<^sub>\<alpha> \<equiv> x\<^sub>\<alpha>\<close> produces in the first step \<open>x: term
+  \<turnstile> x\<^sub>\<alpha> \<equiv> x\<^sub>\<alpha>\<close> for fixed \<open>\<alpha>\<close>, and only in the second step
+  \<open>\<turnstile> ?x\<^sub>?\<^sub>\<alpha> \<equiv> ?x\<^sub>?\<^sub>\<alpha>\<close> for schematic \<open>?x\<close> and \<open>?\<alpha>\<close>.
   The following Isar source text illustrates this scenario.
 \<close>
 
 notepad
 begin
   {
-    fix x  -- \<open>all potential occurrences of some @{text "x::\<tau>"} are fixed\<close>
+    fix x  -- \<open>all potential occurrences of some \<open>x::\<tau>\<close> are fixed\<close>
     {
       have "x::'a \<equiv> x"  -- \<open>implicit type assignment by concrete occurrence\<close>
         by (rule reflexive)
     }
-    thm this  -- \<open>result still with fixed type @{text "'a"}\<close>
+    thm this  -- \<open>result still with fixed type \<open>'a\<close>\<close>
   }
-  thm this  -- \<open>fully general result for arbitrary @{text "?x::?'a"}\<close>
+  thm this  -- \<open>fully general result for arbitrary \<open>?x::?'a\<close>\<close>
 end
 
 text \<open>The Isabelle/Isar proof context manages the details of term
   vs.\ type variables, with high-level principles for moving the
   frontier between fixed and schematic variables.
 
-  The @{text "add_fixes"} operation explicitly declares fixed
-  variables; the @{text "declare_term"} operation absorbs a term into
+  The \<open>add_fixes\<close> operation explicitly declares fixed
+  variables; the \<open>declare_term\<close> operation absorbs a term into
   a context by fixing new type variables and adding syntactic
   constraints.
 
-  The @{text "export"} operation is able to perform the main work of
+  The \<open>export\<close> operation is able to perform the main work of
   generalizing term and type variables as sketched above, assuming
   that fixing variables and terms have been declared properly.
 
-  There @{text "import"} operation makes a generalized fact a genuine
+  The \<open>import\<close> operation makes a generalized fact a genuine
   part of the context, by inventing fixed variables for the schematic
-  ones.  The effect can be reversed by using @{text "export"} later,
+  ones.  The effect can be reversed by using \<open>export\<close> later,
   potentially with an extended context; the result is equivalent to
   the original modulo renaming of schematic variables.
 
-  The @{text "focus"} operation provides a variant of @{text "import"}
-  for nested propositions (with explicit quantification): @{text
-  "\<And>x\<^sub>1 \<dots> x\<^sub>n. B(x\<^sub>1, \<dots>, x\<^sub>n)"} is
-  decomposed by inventing fixed variables @{text "x\<^sub>1, \<dots>,
-  x\<^sub>n"} for the body.
+  The \<open>focus\<close> operation provides a variant of \<open>import\<close>
+  for nested propositions (with explicit quantification): \<open>\<And>x\<^sub>1 \<dots> x\<^sub>n. B(x\<^sub>1, \<dots>, x\<^sub>n)\<close> is
+  decomposed by inventing fixed variables \<open>x\<^sub>1, \<dots>,
+  x\<^sub>n\<close> for the body.
 \<close>
 
 text %mlref \<open>
@@ -114,8 +109,8 @@
   ((string * (string * typ)) list * term) * Proof.context"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Variable.add_fixes}~@{text "xs ctxt"} fixes term
-  variables @{text "xs"}, returning the resulting internal names.  By
+  \<^descr> @{ML Variable.add_fixes}~\<open>xs ctxt\<close> fixes term
+  variables \<open>xs\<close>, returning the resulting internal names.  By
   default, the internal representation coincides with the external
   one, which also means that the given variables must not be fixed
   already.  There is a different policy within a local proof body: the
@@ -125,35 +120,33 @@
   Variable.add_fixes}, but always produces fresh variants of the given
   names.
 
-  \<^descr> @{ML Variable.declare_term}~@{text "t ctxt"} declares term
-  @{text "t"} to belong to the context.  This automatically fixes new
+  \<^descr> @{ML Variable.declare_term}~\<open>t ctxt\<close> declares term
+  \<open>t\<close> to belong to the context.  This automatically fixes new
   type variables, but not term variables.  Syntactic constraints for
   type and term variables are declared uniformly, though.
 
-  \<^descr> @{ML Variable.declare_constraints}~@{text "t ctxt"} declares
-  syntactic constraints from term @{text "t"}, without making it part
+  \<^descr> @{ML Variable.declare_constraints}~\<open>t ctxt\<close> declares
+  syntactic constraints from term \<open>t\<close>, without making it part
   of the context yet.
 
-  \<^descr> @{ML Variable.export}~@{text "inner outer thms"} generalizes
-  fixed type and term variables in @{text "thms"} according to the
-  difference of the @{text "inner"} and @{text "outer"} context,
+  \<^descr> @{ML Variable.export}~\<open>inner outer thms\<close> generalizes
+  fixed type and term variables in \<open>thms\<close> according to the
+  difference of the \<open>inner\<close> and \<open>outer\<close> context,
   following the principles sketched above.
 
-  \<^descr> @{ML Variable.polymorphic}~@{text "ctxt ts"} generalizes type
-  variables in @{text "ts"} as far as possible, even those occurring
+  \<^descr> @{ML Variable.polymorphic}~\<open>ctxt ts\<close> generalizes type
+  variables in \<open>ts\<close> as far as possible, even those occurring
   in fixed term variables.  The default policy of type-inference is to
   fix newly introduced type variables, which is essentially reversed
   with @{ML Variable.polymorphic}: here the given terms are detached
   from the context as far as possible.
 
-  \<^descr> @{ML Variable.import}~@{text "open thms ctxt"} invents fixed
-  type and term variables for the schematic ones occurring in @{text
-  "thms"}.  The @{text "open"} flag indicates whether the fixed names
+  \<^descr> @{ML Variable.import}~\<open>open thms ctxt\<close> invents fixed
+  type and term variables for the schematic ones occurring in \<open>thms\<close>.  The \<open>open\<close> flag indicates whether the fixed names
   should be accessible to the user, otherwise newly introduced names
   are marked as ``internal'' (\secref{sec:names}).
 
-  \<^descr> @{ML Variable.focus}~@{text "bindings B"} decomposes the outermost @{text
-  "\<And>"} prefix of proposition @{text "B"}, using the given name bindings.
+  \<^descr> @{ML Variable.focus}~\<open>bindings B\<close> decomposes the outermost \<open>\<And>\<close> prefix of proposition \<open>B\<close>, using the given name bindings.
 \<close>
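+
+text %mlex \<open>As a small sketch of @{ML Variable.focus}, consider the
+  hypothetical proposition \<open>\<And>x y. x \<equiv> y\<close>:\<close>
+
+ML_val \<open>
+  val B = @{prop "\<And>x y. x \<equiv> y"};
+  (*invent fixes for the two outermost parameters*)
+  val ((params, B'), ctxt') = Variable.focus NONE B @{context};
+  (*the body B' refers to the invented fixes instead of bound variables*)
+  @{assert} (length params = 2);
+\<close>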
 
 text %mlex \<open>The following example shows how to work with fixed term
@@ -180,7 +173,7 @@
 
 text \<open>In the above example, the starting context is derived from the
   toplevel theory, which means that fixed variables are internalized
-  literally: @{text "x"} is mapped again to @{text "x"}, and
+  literally: \<open>x\<close> is mapped again to \<open>x\<close>, and
   attempting to fix it again in the subsequent context is an error.
   Alternatively, fixed parameters can be renamed explicitly as
   follows:\<close>
@@ -192,7 +185,7 @@
 \<close>
 
 text \<open>The following ML code can now work with the invented names of
-  @{text x1}, @{text x2}, @{text x3}, without depending on
+  \<open>x1\<close>, \<open>x2\<close>, \<open>x3\<close>, without depending on
   the details on the system policy for introducing these variants.
   Recall that within a proof body the system always invents fresh
   ``Skolem constants'', e.g.\ as follows:\<close>
@@ -227,19 +220,18 @@
   Assumptions are restricted to fixed non-schematic statements, i.e.\
   all generality needs to be expressed by explicit quantifiers.
   Nevertheless, the result will be in HHF normal form with outermost
-  quantifiers stripped.  For example, by assuming @{text "\<And>x :: \<alpha>. P
-  x"} we get @{text "\<And>x :: \<alpha>. P x \<turnstile> P ?x"} for schematic @{text "?x"}
-  of fixed type @{text "\<alpha>"}.  Local derivations accumulate more and
-  more explicit references to hypotheses: @{text "A\<^sub>1, \<dots>,
-  A\<^sub>n \<turnstile> B"} where @{text "A\<^sub>1, \<dots>, A\<^sub>n"} needs to
+  quantifiers stripped.  For example, by assuming \<open>\<And>x :: \<alpha>. P
+  x\<close> we get \<open>\<And>x :: \<alpha>. P x \<turnstile> P ?x\<close> for schematic \<open>?x\<close>
+  of fixed type \<open>\<alpha>\<close>.  Local derivations accumulate more and
+  more explicit references to hypotheses: \<open>A\<^sub>1, \<dots>,
+  A\<^sub>n \<turnstile> B\<close> where \<open>A\<^sub>1, \<dots>, A\<^sub>n\<close> need to
   be covered by the assumptions of the current context.
 
   \<^medskip>
-  The @{text "add_assms"} operation augments the context by
-  local assumptions, which are parameterized by an arbitrary @{text
-  "export"} rule (see below).
+  The \<open>add_assms\<close> operation augments the context by
+  local assumptions, which are parameterized by an arbitrary \<open>export\<close> rule (see below).
 
-  The @{text "export"} operation moves facts from a (larger) inner
+  The \<open>export\<close> operation moves facts from a (larger) inner
   context into a (smaller) outer context, by discharging the
   difference of the assumptions as specified by the associated export
   rules.  Note that the discharged portion is determined by the
@@ -249,30 +241,30 @@
 
   \<^medskip>
   The most basic export rule discharges assumptions directly
-  by means of the @{text "\<Longrightarrow>"} introduction rule:
+  by means of the \<open>\<Longrightarrow>\<close> introduction rule:
   \[
-  \infer[(@{text "\<Longrightarrow>\<hyphen>intro"})]{@{text "\<Gamma> - A \<turnstile> A \<Longrightarrow> B"}}{@{text "\<Gamma> \<turnstile> B"}}
+  \infer[(\<open>\<Longrightarrow>\<hyphen>intro\<close>)]{\<open>\<Gamma> - A \<turnstile> A \<Longrightarrow> B\<close>}{\<open>\<Gamma> \<turnstile> B\<close>}
   \]
 
   The variant for goal refinements marks the newly introduced
   premises, which causes the canonical Isar goal refinement scheme to
   enforce unification with local premises within the goal:
   \[
-  \infer[(@{text "#\<Longrightarrow>\<hyphen>intro"})]{@{text "\<Gamma> - A \<turnstile> #A \<Longrightarrow> B"}}{@{text "\<Gamma> \<turnstile> B"}}
+  \infer[(\<open>#\<Longrightarrow>\<hyphen>intro\<close>)]{\<open>\<Gamma> - A \<turnstile> #A \<Longrightarrow> B\<close>}{\<open>\<Gamma> \<turnstile> B\<close>}
   \]
 
   \<^medskip>
   Alternative versions of assumptions may perform arbitrary
   transformations on export, as long as the corresponding portion of
   hypotheses is removed from the given facts.  For example, a local
-  definition works by fixing @{text "x"} and assuming @{text "x \<equiv> t"},
+  definition works by fixing \<open>x\<close> and assuming \<open>x \<equiv> t\<close>,
   with the following export rule to reverse the effect:
   \[
-  \infer[(@{text "\<equiv>\<hyphen>expand"})]{@{text "\<Gamma> - (x \<equiv> t) \<turnstile> B t"}}{@{text "\<Gamma> \<turnstile> B x"}}
+  \infer[(\<open>\<equiv>\<hyphen>expand\<close>)]{\<open>\<Gamma> - (x \<equiv> t) \<turnstile> B t\<close>}{\<open>\<Gamma> \<turnstile> B x\<close>}
   \]
-  This works, because the assumption @{text "x \<equiv> t"} was introduced in
-  a context with @{text "x"} being fresh, so @{text "x"} does not
-  occur in @{text "\<Gamma>"} here.
+  This works, because the assumption \<open>x \<equiv> t\<close> was introduced in
+  a context with \<open>x\<close> being fresh, so \<open>x\<close> does not
+  occur in \<open>\<Gamma>\<close> here.
 \<close>
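+
+text %mlex \<open>A minimal sketch of the \<open>\<Longrightarrow>\<hyphen>intro\<close> export rule in action,
+  using a hypothetical proposition \<open>PROP A\<close>:\<close>
+
+ML_val \<open>
+  val ctxt0 = @{context};
+  (*assume PROP A, yielding the hypothetical theorem  PROP A \<turnstile> PROP A*)
+  val ([th], ctxt1) = Assumption.add_assumes [@{cprop "PROP A"}] ctxt0;
+  (*exporting discharges the assumption:  \<turnstile> PROP A \<Longrightarrow> PROP A*)
+  val th' = Assumption.export false ctxt1 ctxt0 th;
+  @{assert} (Thm.prop_of th' aconv @{prop "PROP A \<Longrightarrow> PROP A"});
+\<close>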
 
 text %mlref \<open>
@@ -293,23 +285,21 @@
   and the @{ML_type "cterm list"} the collection of assumptions to be
   discharged simultaneously.
 
-  \<^descr> @{ML Assumption.assume}~@{text "ctxt A"} turns proposition @{text
-  "A"} into a primitive assumption @{text "A \<turnstile> A'"}, where the
-  conclusion @{text "A'"} is in HHF normal form.
+  \<^descr> @{ML Assumption.assume}~\<open>ctxt A\<close> turns proposition \<open>A\<close> into a primitive assumption \<open>A \<turnstile> A'\<close>, where the
+  conclusion \<open>A'\<close> is in HHF normal form.
 
-  \<^descr> @{ML Assumption.add_assms}~@{text "r As"} augments the context
-  by assumptions @{text "As"} with export rule @{text "r"}.  The
+  \<^descr> @{ML Assumption.add_assms}~\<open>r As\<close> augments the context
+  by assumptions \<open>As\<close> with export rule \<open>r\<close>.  The
   resulting facts are hypothetical theorems as produced by the raw
   @{ML Assumption.assume}.
 
-  \<^descr> @{ML Assumption.add_assumes}~@{text "As"} is a special case of
-  @{ML Assumption.add_assms} where the export rule performs @{text
-  "\<Longrightarrow>\<hyphen>intro"} or @{text "#\<Longrightarrow>\<hyphen>intro"}, depending on goal
+  \<^descr> @{ML Assumption.add_assumes}~\<open>As\<close> is a special case of
+  @{ML Assumption.add_assms} where the export rule performs \<open>\<Longrightarrow>\<hyphen>intro\<close> or \<open>#\<Longrightarrow>\<hyphen>intro\<close>, depending on goal
   mode.
 
-  \<^descr> @{ML Assumption.export}~@{text "is_goal inner outer thm"}
-  exports result @{text "thm"} from the the @{text "inner"} context
-  back into the @{text "outer"} one; @{text "is_goal = true"} means
+  \<^descr> @{ML Assumption.export}~\<open>is_goal inner outer thm\<close>
+  exports result \<open>thm\<close> from the \<open>inner\<close> context
+  back into the \<open>outer\<close> one; \<open>is_goal = true\<close> means
   this is a goal context.  The result is in HHF normal form.  Note
   that @{ML "Proof_Context.export"} combines @{ML "Variable.export"}
   and @{ML "Assumption.export"} in the canonical way.
@@ -344,25 +334,24 @@
 text \<open>
   Local results are established by monotonic reasoning from facts
   within a context.  This allows common combinations of theorems,
-  e.g.\ via @{text "\<And>/\<Longrightarrow>"} elimination, resolution rules, or equational
+  e.g.\ via \<open>\<And>/\<Longrightarrow>\<close> elimination, resolution rules, or equational
   reasoning, see \secref{sec:thms}.  Unaccounted context manipulations
-  should be avoided, notably raw @{text "\<And>/\<Longrightarrow>"} introduction or ad-hoc
+  should be avoided, notably raw \<open>\<And>/\<Longrightarrow>\<close> introduction or ad-hoc
   references to free variables or assumptions not present in the proof
   context.
 
   \<^medskip>
-  The @{text "SUBPROOF"} combinator allows to structure a
+  The \<open>SUBPROOF\<close> combinator allows one to structure a
   tactical proof recursively by decomposing a selected sub-goal:
-  @{text "(\<And>x. A(x) \<Longrightarrow> B(x)) \<Longrightarrow> \<dots>"} is turned into @{text "B(x) \<Longrightarrow> \<dots>"}
-  after fixing @{text "x"} and assuming @{text "A(x)"}.  This means
+  \<open>(\<And>x. A(x) \<Longrightarrow> B(x)) \<Longrightarrow> \<dots>\<close> is turned into \<open>B(x) \<Longrightarrow> \<dots>\<close>
+  after fixing \<open>x\<close> and assuming \<open>A(x)\<close>.  This means
   the tactic needs to solve the conclusion, but may use the premise as
   a local fact, for locally fixed variables.
 
-  The family of @{text "FOCUS"} combinators is similar to @{text
-  "SUBPROOF"}, but allows to retain schematic variables and pending
+  The family of \<open>FOCUS\<close> combinators is similar to \<open>SUBPROOF\<close>, but allows one to retain schematic variables and pending
   subgoals in the resulting goal state.
 
-  The @{text "prove"} operation provides an interface for structured
+  The \<open>prove\<close> operation provides an interface for structured
   backwards reasoning under program control, with some explicit sanity
   checks of the result.  The goal context can be augmented by
   additional fixed variables (cf.\ \secref{sec:variables}) and
@@ -371,7 +360,7 @@
   the result.  Type and term variables are generalized as usual,
   according to the context.
 
-  The @{text "obtain"} operation produces results by eliminating
+  The \<open>obtain\<close> operation produces results by eliminating
   existing facts by means of a given tactic.  This acts like a dual
   conclusion: the proof demonstrates that the context may be augmented
   by parameters and assumptions, without affecting any conclusions
@@ -411,7 +400,7 @@
   Proof.context -> ((string * cterm) list * thm list) * Proof.context"} \\
   \end{mldecls}
 
-  \<^descr> @{ML SUBPROOF}~@{text "tac ctxt i"} decomposes the structure
+  \<^descr> @{ML SUBPROOF}~\<open>tac ctxt i\<close> decomposes the structure
   of the specified sub-goal, producing an extended context and a
   reduced goal, which needs to be solved by the given tactic.  All
   schematic parameters of the goal are imported into the context as
@@ -429,13 +418,12 @@
   occasionally useful to experiment without writing actual tactics
   yet.
 
-  \<^descr> @{ML Goal.prove}~@{text "ctxt xs As C tac"} states goal @{text
-  "C"} in the context augmented by fixed variables @{text "xs"} and
-  assumptions @{text "As"}, and applies tactic @{text "tac"} to solve
+  \<^descr> @{ML Goal.prove}~\<open>ctxt xs As C tac\<close> states goal \<open>C\<close> in the context augmented by fixed variables \<open>xs\<close> and
+  assumptions \<open>As\<close>, and applies tactic \<open>tac\<close> to solve
   it.  The latter may depend on the local assumptions being presented
   as facts.  The result is in HHF normal form.
 
-  \<^descr> @{ML Goal.prove_common}~@{text "ctxt fork_pri"} is the common form
+  \<^descr> @{ML Goal.prove_common}~\<open>ctxt fork_pri\<close> is the common form
   to state and prove a simultaneous goal statement, where @{ML Goal.prove}
   is a convenient shorthand that is most frequently used in applications.
 
@@ -452,7 +440,7 @@
   transaction. Thus the system is able to expose error messages ultimately
   to the end-user, even though the subsequent ML code misses them.
 
-  \<^descr> @{ML Obtain.result}~@{text "tac thms ctxt"} eliminates the
+  \<^descr> @{ML Obtain.result}~\<open>tac thms ctxt\<close> eliminates the
   given facts using a tactic, which results in additional fixed
   variables and assumptions in the context.  Final results need to be
   exported explicitly.
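+
+text %mlex \<open>A minimal use of @{ML Goal.prove}: the trivial claim
+  \<open>x \<equiv> x\<close> over a hypothetical fixed variable \<open>x\<close> is stated and solved
+  by reflexivity:\<close>
+
+ML_val \<open>
+  val th =
+    Goal.prove @{context} ["x"] [] @{prop "x \<equiv> x"}
+      (fn {context = goal_ctxt, ...} =>
+        resolve_tac goal_ctxt [@{thm reflexive}] 1);
+  (*the exported result is fully general and has no premises*)
+  @{assert} (Thm.nprems_of th = 0);
+\<close>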
--- a/src/Doc/Implementation/Syntax.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Syntax.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -6,7 +6,7 @@
 
 chapter \<open>Concrete syntax and type-checking\<close>
 
-text \<open>Pure @{text "\<lambda>"}-calculus as introduced in \chref{ch:logic} is
+text \<open>Pure \<open>\<lambda>\<close>-calculus as introduced in \chref{ch:logic} is
   an adequate foundation for logical languages --- in the tradition of
   \<^emph>\<open>higher-order abstract syntax\<close> --- but end-users require
   additional means for reading and printing of terms and types.  This
@@ -15,12 +15,11 @@
   the theory and proof language @{cite "isabelle-isar-ref"}.
 
   For example, according to @{cite church40} quantifiers are represented as
-  higher-order constants @{text "All :: ('a \<Rightarrow> bool) \<Rightarrow> bool"} such that @{text
-  "All (\<lambda>x::'a. B x)"} faithfully represents the idea that is displayed in
-  Isabelle as @{text "\<forall>x::'a. B x"} via @{keyword "binder"} notation.
+  higher-order constants \<open>All :: ('a \<Rightarrow> bool) \<Rightarrow> bool\<close> such that \<open>All (\<lambda>x::'a. B x)\<close> faithfully represents the idea that is displayed in
+  Isabelle as \<open>\<forall>x::'a. B x\<close> via @{keyword "binder"} notation.
   Moreover, type-inference in the style of Hindley-Milner @{cite hindleymilner}
-  (and extensions) enables users to write @{text "\<forall>x. B x"} concisely, when
-  the type @{text "'a"} is already clear from the
+  (and extensions) enables users to write \<open>\<forall>x. B x\<close> concisely, when
+  the type \<open>'a\<close> is already clear from the
   context.\footnote{Type-inference taken to the extreme can easily confuse
   users. Beginners often stumble over unexpectedly general types inferred by
   the system.}
@@ -36,22 +35,21 @@
   \secref{sec:term-check}, respectively.  This results in the
   following decomposition of the main operations:
 
-  \<^item> @{text "read = parse; check"}
+  \<^item> \<open>read = parse; check\<close>
 
-  \<^item> @{text "pretty = uncheck; unparse"}
+  \<^item> \<open>pretty = uncheck; unparse\<close>
 
 
   For example, some specification package might thus intercept syntax
-  processing at a well-defined stage after @{text "parse"}, to a augment the
-  resulting pre-term before full type-reconstruction is performed by @{text
-  "check"}. Note that the formal status of bound variables, versus free
+  processing at a well-defined stage after \<open>parse\<close>, to augment the
+  resulting pre-term before full type-reconstruction is performed by \<open>check\<close>. Note that the formal status of bound variables, versus free
   variables, versus constants must not be changed between these phases.
 
   \<^medskip>
-  In general, @{text check} and @{text uncheck} operate
+  In general, \<open>check\<close> and \<open>uncheck\<close> operate
   simultaneously on a list of terms. This is particularly important for
   type-checking, to reconstruct types for several terms of the same context
-  and scope. In contrast, @{text parse} and @{text unparse} operate separately
+  and scope. In contrast, \<open>parse\<close> and \<open>unparse\<close> operate separately
   on single terms.
 
   There are analogous operations to read and print types, with the same
@@ -63,11 +61,10 @@
 
 text \<open>
   Read and print operations are roughly dual to each other, such that for the
-  user @{text "s' = pretty (read s)"} looks similar to the original source
-  text @{text "s"}, but the details depend on many side-conditions. There are
+  user \<open>s' = pretty (read s)\<close> looks similar to the original source
+  text \<open>s\<close>, but the details depend on many side-conditions. There are
   also explicit options to control the removal of type information in the
-  output. The default configuration routinely looses information, so @{text
-  "t' = read (pretty t)"} might fail, or produce a differently typed term, or
+  output. The default configuration routinely loses information, so \<open>t' = read (pretty t)\<close> might fail, or produce a differently typed term, or
   a completely different term in the face of syntactic overloading.
 \<close>
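+
+text %mlex \<open>A small round trip through read and print, for a
+  hypothetical source string:\<close>
+
+ML_val \<open>
+  val ctxt = @{context};
+  val t = Syntax.read_term ctxt "\<lambda>x::nat. x";
+  (*the pretty output looks similar to the input, but details may differ*)
+  val s' = Syntax.string_of_term ctxt t;
+\<close>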
 
@@ -85,10 +82,10 @@
   @{index_ML Syntax.string_of_term: "Proof.context -> term -> string"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Syntax.read_typs}~@{text "ctxt strs"} parses and checks a
+  \<^descr> @{ML Syntax.read_typs}~\<open>ctxt strs\<close> parses and checks a
   simultaneous list of source strings as types of the logic.
 
-  \<^descr> @{ML Syntax.read_terms}~@{text "ctxt strs"} parses and checks a
+  \<^descr> @{ML Syntax.read_terms}~\<open>ctxt strs\<close> parses and checks a
   simultaneous list of source strings as terms of the logic.
   Type-reconstruction puts all parsed terms into the same scope: types of
   free variables ultimately need to coincide.
@@ -98,7 +95,7 @@
   is possible to use @{ML Type.constraint} on the intermediate pre-terms
   (\secref{sec:term-check}).
 
-  \<^descr> @{ML Syntax.read_props}~@{text "ctxt strs"} parses and checks a
+  \<^descr> @{ML Syntax.read_props}~\<open>ctxt strs\<close> parses and checks a
   simultaneous list of source strings as terms of the logic, with an implicit
   type-constraint for each argument to enforce type @{typ prop}; this also
   affects the inner syntax for parsing. The remaining type-reconstruction
@@ -110,8 +107,8 @@
   its own scope is processed. Do not use @{ML "map o Syntax.read_term"} where
   @{ML Syntax.read_terms} is actually intended!
 
-  \<^descr> @{ML Syntax.pretty_typ}~@{text "ctxt T"} and @{ML
-  Syntax.pretty_term}~@{text "ctxt t"} uncheck and pretty-print the given type
+  \<^descr> @{ML Syntax.pretty_typ}~\<open>ctxt T\<close> and @{ML
+  Syntax.pretty_term}~\<open>ctxt t\<close> uncheck and pretty-print the given type
   or term, respectively. Although the uncheck phase acts on a simultaneous
   list as well, this is rarely used in practice, so only the singleton case is
   provided as combined pretty operation. There is no distinction of term vs.\
@@ -173,22 +170,22 @@
   @{index_ML Syntax.unparse_term: "Proof.context -> term -> Pretty.T"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Syntax.parse_typ}~@{text "ctxt str"} parses a source string as
+  \<^descr> @{ML Syntax.parse_typ}~\<open>ctxt str\<close> parses a source string as
   pre-type that is ready to be used with subsequent check operations.
 
-  \<^descr> @{ML Syntax.parse_term}~@{text "ctxt str"} parses a source string as
+  \<^descr> @{ML Syntax.parse_term}~\<open>ctxt str\<close> parses a source string as
   pre-term that is ready to be used with subsequent check operations.
 
-  \<^descr> @{ML Syntax.parse_prop}~@{text "ctxt str"} parses a source string as
+  \<^descr> @{ML Syntax.parse_prop}~\<open>ctxt str\<close> parses a source string as
   pre-term that is ready to be used with subsequent check operations. The
   inner syntax category is @{typ prop} and a suitable type-constraint is
   included to ensure that this information is observed in subsequent type
   reconstruction.
 
-  \<^descr> @{ML Syntax.unparse_typ}~@{text "ctxt T"} unparses a type after
+  \<^descr> @{ML Syntax.unparse_typ}~\<open>ctxt T\<close> unparses a type after
   uncheck operations, to turn it into a pretty tree.
 
-  \<^descr> @{ML Syntax.unparse_term}~@{text "ctxt T"} unparses a term after
+  \<^descr> @{ML Syntax.unparse_term}~\<open>ctxt T\<close> unparses a term after
   uncheck operations, to turn it into a pretty tree. There is no distinction
   for propositions here.
 
@@ -212,9 +209,8 @@
 
   A typical add-on for the check/uncheck syntax layer is the @{command
   abbreviation} mechanism @{cite "isabelle-isar-ref"}. Here the user specifies
-  syntactic definitions that are managed by the system as polymorphic @{text
-  "let"} bindings. These are expanded during the @{text "check"} phase, and
-  contracted during the @{text "uncheck"} phase, without affecting the
+  syntactic definitions that are managed by the system as polymorphic \<open>let\<close> bindings. These are expanded during the \<open>check\<close> phase, and
+  contracted during the \<open>uncheck\<close> phase, without affecting the
   type-assignment of the given terms.
 
   \<^medskip>
@@ -222,7 +218,7 @@
   additional check/uncheck modules might be defined in user space.
 
   For example, the @{command class} command defines a context where
-  @{text "check"} treats certain type instances of overloaded
+  \<open>check\<close> treats certain type instances of overloaded
   constants according to the ``dictionary construction'' of its
   logical foundation.  This involves ``type improvement''
   (specialization of slightly too general types) and replacement by
@@ -238,11 +234,11 @@
   @{index_ML Syntax.uncheck_terms: "Proof.context -> term list -> term list"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Syntax.check_typs}~@{text "ctxt Ts"} checks a simultaneous list
+  \<^descr> @{ML Syntax.check_typs}~\<open>ctxt Ts\<close> checks a simultaneous list
   of pre-types as types of the logic.  Typically, this involves normalization
   of type synonyms.
 
-  \<^descr> @{ML Syntax.check_terms}~@{text "ctxt ts"} checks a simultaneous list
+  \<^descr> @{ML Syntax.check_terms}~\<open>ctxt ts\<close> checks a simultaneous list
   of pre-terms as terms of the logic. Typically, this involves type-inference
   and normalization of term abbreviations. The types within the given terms are
   treated in the same way as for @{ML Syntax.check_typs}.
@@ -253,15 +249,15 @@
   is checked; afterwards the type arguments are recovered with @{ML
   Logic.dest_type}.
 
-  \<^descr> @{ML Syntax.check_props}~@{text "ctxt ts"} checks a simultaneous list
+  \<^descr> @{ML Syntax.check_props}~\<open>ctxt ts\<close> checks a simultaneous list
   of pre-terms as terms of the logic, such that all terms are constrained by
   type @{typ prop}. The remaining check operation works as @{ML
   Syntax.check_terms} above.
 
-  \<^descr> @{ML Syntax.uncheck_typs}~@{text "ctxt Ts"} unchecks a simultaneous
+  \<^descr> @{ML Syntax.uncheck_typs}~\<open>ctxt Ts\<close> unchecks a simultaneous
   list of types of the logic, in preparation for pretty printing.
 
-  \<^descr> @{ML Syntax.uncheck_terms}~@{text "ctxt ts"} unchecks a simultaneous
+  \<^descr> @{ML Syntax.uncheck_terms}~\<open>ctxt ts\<close> unchecks a simultaneous
   list of terms of the logic, in preparation of pretty printing. There is no
   distinction for propositions here.
 
--- a/src/Doc/Implementation/Tactic.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Implementation/Tactic.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -5,40 +5,36 @@
 chapter \<open>Tactical reasoning\<close>
 
 text \<open>Tactical reasoning works by refining an initial claim in a
-  backwards fashion, until a solved form is reached.  A @{text "goal"}
+  backwards fashion, until a solved form is reached.  A \<open>goal\<close>
   consists of several subgoals that need to be solved in order to
   achieve the main statement; zero subgoals means that the proof may
-  be finished.  A @{text "tactic"} is a refinement operation that maps
-  a goal to a lazy sequence of potential successors.  A @{text
-  "tactical"} is a combinator for composing tactics.\<close>
+  be finished.  A \<open>tactic\<close> is a refinement operation that maps
+  a goal to a lazy sequence of potential successors.  A \<open>tactical\<close> is a combinator for composing tactics.\<close>
 
 
 section \<open>Goals \label{sec:tactical-goals}\<close>
 
 text \<open>
   Isabelle/Pure represents a goal as a theorem stating that the
-  subgoals imply the main goal: @{text "A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow>
-  C"}.  The outermost goal structure is that of a Horn Clause: i.e.\
+  subgoals imply the main goal: \<open>A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow>
+  C\<close>.  The outermost goal structure is that of a Horn Clause: i.e.\
   an iterated implication without any quantifiers\footnote{Recall that
-  outermost @{text "\<And>x. \<phi>[x]"} is always represented via schematic
-  variables in the body: @{text "\<phi>[?x]"}.  These variables may get
-  instantiated during the course of reasoning.}.  For @{text "n = 0"}
+  outermost \<open>\<And>x. \<phi>[x]\<close> is always represented via schematic
+  variables in the body: \<open>\<phi>[?x]\<close>.  These variables may get
+  instantiated during the course of reasoning.}.  For \<open>n = 0\<close>
   a goal is called ``solved''.
 
-  The structure of each subgoal @{text "A\<^sub>i"} is that of a
-  general Hereditary Harrop Formula @{text "\<And>x\<^sub>1 \<dots>
-  \<And>x\<^sub>k. H\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> H\<^sub>m \<Longrightarrow> B"}.  Here @{text
-  "x\<^sub>1, \<dots>, x\<^sub>k"} are goal parameters, i.e.\
-  arbitrary-but-fixed entities of certain types, and @{text
-  "H\<^sub>1, \<dots>, H\<^sub>m"} are goal hypotheses, i.e.\ facts that may
+  The structure of each subgoal \<open>A\<^sub>i\<close> is that of a
+  general Hereditary Harrop Formula \<open>\<And>x\<^sub>1 \<dots>
+  \<And>x\<^sub>k. H\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> H\<^sub>m \<Longrightarrow> B\<close>.  Here \<open>x\<^sub>1, \<dots>, x\<^sub>k\<close> are goal parameters, i.e.\
+  arbitrary-but-fixed entities of certain types, and \<open>H\<^sub>1, \<dots>, H\<^sub>m\<close> are goal hypotheses, i.e.\ facts that may
   be assumed locally.  Together, this forms the goal context of the
-  conclusion @{text B} to be established.  The goal hypotheses may be
+  conclusion \<open>B\<close> to be established.  The goal hypotheses may be
   again arbitrary Hereditary Harrop Formulas, although the level of
   nesting rarely exceeds 1--2 in practice.
 
-  The main conclusion @{text C} is internally marked as a protected
-  proposition, which is represented explicitly by the notation @{text
-  "#C"} here.  This ensures that the decomposition into subgoals and
+  The main conclusion \<open>C\<close> is internally marked as a protected
+  proposition, which is represented explicitly by the notation \<open>#C\<close> here.  This ensures that the decomposition into subgoals and
   main conclusion is well-defined for arbitrarily structured claims.
 
   \<^medskip>
@@ -46,8 +42,8 @@
   Isabelle/Pure rules:
 
   \[
-  \infer[@{text "(init)"}]{@{text "C \<Longrightarrow> #C"}}{} \qquad
-  \infer[@{text "(finish)"}]{@{text "C"}}{@{text "#C"}}
+  \infer[\<open>(init)\<close>]{\<open>C \<Longrightarrow> #C\<close>}{} \qquad
+  \infer[\<open>(finish)\<close>]{\<open>C\<close>}{\<open>#C\<close>}
   \]
 
   \<^medskip>
@@ -55,10 +51,10 @@
   with protected propositions:
 
   \[
-  \infer[@{text "(protect n)"}]{@{text "A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> #C"}}{@{text "A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> C"}}
+  \infer[\<open>(protect n)\<close>]{\<open>A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> #C\<close>}{\<open>A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> C\<close>}
   \]
   \[
-  \infer[@{text "(conclude)"}]{@{text "A \<Longrightarrow> \<dots> \<Longrightarrow> C"}}{@{text "A \<Longrightarrow> \<dots> \<Longrightarrow> #C"}}
+  \infer[\<open>(conclude)\<close>]{\<open>A \<Longrightarrow> \<dots> \<Longrightarrow> C\<close>}{\<open>A \<Longrightarrow> \<dots> \<Longrightarrow> #C\<close>}
   \]
 \<close>
 
@@ -70,26 +66,26 @@
   @{index_ML Goal.conclude: "thm -> thm"} \\
   \end{mldecls}
 
-  \<^descr> @{ML "Goal.init"}~@{text C} initializes a tactical goal from
-  the well-formed proposition @{text C}.
+  \<^descr> @{ML "Goal.init"}~\<open>C\<close> initializes a tactical goal from
+  the well-formed proposition \<open>C\<close>.
 
-  \<^descr> @{ML "Goal.finish"}~@{text "ctxt thm"} checks whether theorem
-  @{text "thm"} is a solved goal (no subgoals), and concludes the
+  \<^descr> @{ML "Goal.finish"}~\<open>ctxt thm\<close> checks whether theorem
+  \<open>thm\<close> is a solved goal (no subgoals), and concludes the
   result by removing the goal protection.  The context is only
   required for printing error messages.
 
-  \<^descr> @{ML "Goal.protect"}~@{text "n thm"} protects the statement
-  of theorem @{text "thm"}.  The parameter @{text n} indicates the
+  \<^descr> @{ML "Goal.protect"}~\<open>n thm\<close> protects the statement
+  of theorem \<open>thm\<close>.  The parameter \<open>n\<close> indicates the
   number of premises to be retained.
 
-  \<^descr> @{ML "Goal.conclude"}~@{text "thm"} removes the goal
+  \<^descr> @{ML "Goal.conclude"}~\<open>thm\<close> removes the goal
   protection, even if there are pending subgoals.
 \<close>
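+
+text %mlex \<open>A minimal goal round trip, for the hypothetical claim
+  \<open>PROP A \<Longrightarrow> PROP A\<close>:\<close>
+
+ML_val \<open>
+  (*init: C \<Longrightarrow> #C, with a single subgoal*)
+  val st0 = Goal.init @{cprop "PROP A \<Longrightarrow> PROP A"};
+  (*solve the subgoal by assumption; take the first result*)
+  val st1 =
+    (case Seq.pull (assume_tac @{context} 1 st0) of
+      SOME (st, _) => st
+    | NONE => error "tactic failed");
+  (*finish: remove the goal protection*)
+  val th = Goal.finish @{context} st1;
+\<close>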
 
 
 section \<open>Tactics\label{sec:tactics}\<close>
 
-text \<open>A @{text "tactic"} is a function @{text "goal \<rightarrow> goal\<^sup>*\<^sup>*"} that
+text \<open>A \<open>tactic\<close> is a function \<open>goal \<rightarrow> goal\<^sup>*\<^sup>*\<close> that
   maps a given goal state (represented as a theorem, cf.\
   \secref{sec:tactical-goals}) to a lazy sequence of potential
   successor states.  The underlying sequence implementation is lazy
@@ -121,7 +117,7 @@
   schematic goal variables).
 
   Tactics with explicit \<^emph>\<open>subgoal addressing\<close> are of the form
-  @{text "int \<rightarrow> tactic"} and may be applied to a particular subgoal
+  \<open>int \<rightarrow> tactic\<close> and may be applied to a particular subgoal
   (counting from 1).  If the subgoal number is out of range, the
   tactic should fail with an empty result sequence, but must not raise
   an exception!
@@ -139,7 +135,7 @@
   very common error when implementing tactics!
 
   Tactics with internal subgoal addressing should expose the subgoal
-  index as @{text "int"} argument in full generality; a hardwired
+  index as \<open>int\<close> argument in full generality; a hardwired
   subgoal 1 is not acceptable.
   
   \<^medskip>
@@ -195,16 +191,16 @@
   \<^descr> @{ML all_tac} is a tactic that always succeeds, returning a
   singleton sequence with unchanged goal state.
 
-  \<^descr> @{ML print_tac}~@{text "ctxt message"} is like @{ML all_tac}, but
+  \<^descr> @{ML print_tac}~\<open>ctxt message\<close> is like @{ML all_tac}, but
   prints a message together with the goal state on the tracing
   channel.
 
-  \<^descr> @{ML PRIMITIVE}~@{text rule} turns a primitive inference rule
+  \<^descr> @{ML PRIMITIVE}~\<open>rule\<close> turns a primitive inference rule
   into a tactic with unique result.  Exception @{ML THM} is considered
   a regular tactic failure and produces an empty result; other
   exceptions are passed through.
 
-  \<^descr> @{ML SUBGOAL}~@{text "(fn (subgoal, i) => tactic)"} is the
+  \<^descr> @{ML SUBGOAL}~\<open>(fn (subgoal, i) => tactic)\<close> is the
   most basic form to produce a tactic with subgoal addressing.  The
   given abstraction over the subgoal term and subgoal number allows one to
   peek at the relevant information of the full goal state.  The
@@ -215,14 +211,14 @@
   avoids expensive re-certification in situations where the subgoal is
   used directly for primitive inferences.
 
-  \<^descr> @{ML SELECT_GOAL}~@{text "tac i"} confines a tactic to the
-  specified subgoal @{text "i"}.  This rearranges subgoals and the
+  \<^descr> @{ML SELECT_GOAL}~\<open>tac i\<close> confines a tactic to the
+  specified subgoal \<open>i\<close>.  This rearranges subgoals and the
   main goal protection (\secref{sec:tactical-goals}), while retaining
   the syntactic context of the overall goal state (concerning
   schematic variables etc.).
 
-  \<^descr> @{ML PREFER_GOAL}~@{text "tac i"} rearranges subgoals to put
-  @{text "i"} in front.  This is similar to @{ML SELECT_GOAL}, but
+  \<^descr> @{ML PREFER_GOAL}~\<open>tac i\<close> rearranges subgoals to put
+  \<open>i\<close> in front.  This is similar to @{ML SELECT_GOAL}, but
   without changing the main goal protection.
 \<close>
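+
+text %mlex \<open>A hypothetical tactic with subgoal addressing, sketched via
+  @{ML SUBGOAL}: it resolves with rule \<open>reflexive\<close> whenever the
+  conclusion of the addressed subgoal is a meta-equality, and fails
+  otherwise:\<close>
+
+ML \<open>
+  fun refl_tac ctxt =
+    SUBGOAL (fn (goal, i) =>
+      (case Logic.strip_assums_concl goal of
+        Const ("Pure.eq", _) $ _ $ _ =>
+          resolve_tac ctxt [@{thm reflexive}] i
+      | _ => no_tac))
+\<close>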
 
@@ -237,7 +233,7 @@
   \<^emph>\<open>Destruct-resolution\<close> is like elim-resolution, but the given
   destruction rules are first turned into canonical elimination
   format.  \<^emph>\<open>Forward-resolution\<close> is like destruct-resolution, but
-  without deleting the selected assumption.  The @{text "r/e/d/f"}
+  without deleting the selected assumption.  The \<open>r/e/d/f\<close>
   naming convention is maintained for several different kinds of
   resolution rules and tactics.
 
@@ -281,20 +277,19 @@
   @{index_ML bimatch_tac: "Proof.context -> (bool * thm) list -> int -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML resolve_tac}~@{text "ctxt thms i"} refines the goal state
+  \<^descr> @{ML resolve_tac}~\<open>ctxt thms i\<close> refines the goal state
   using the given theorems, which should normally be introduction
-  rules.  The tactic resolves a rule's conclusion with subgoal @{text
-  i}, replacing it by the corresponding versions of the rule's
+  rules.  The tactic resolves a rule's conclusion with subgoal \<open>i\<close>, replacing it by the corresponding versions of the rule's
   premises.
 
-  \<^descr> @{ML eresolve_tac}~@{text "ctxt thms i"} performs elim-resolution
+  \<^descr> @{ML eresolve_tac}~\<open>ctxt thms i\<close> performs elim-resolution
   with the given theorems, which should normally be elimination rules.
 
   Note that @{ML_text "eresolve_tac ctxt [asm_rl]"} is equivalent to @{ML_text
   "assume_tac ctxt"}, which facilitates mixing of assumption steps with
   genuine eliminations.
 
-  \<^descr> @{ML dresolve_tac}~@{text "ctxt thms i"} performs
+  \<^descr> @{ML dresolve_tac}~\<open>ctxt thms i\<close> performs
   destruct-resolution with the given theorems, which should normally
   be destruction rules.  This replaces an assumption by the result of
   applying one of the rules.
@@ -303,21 +298,20 @@
   selected assumption is not deleted.  It applies a rule to an
   assumption, adding the result as a new assumption.
 
-  \<^descr> @{ML biresolve_tac}~@{text "ctxt brls i"} refines the proof state
+  \<^descr> @{ML biresolve_tac}~\<open>ctxt brls i\<close> refines the proof state
   by resolution or elim-resolution on each rule, as indicated by its
-  flag.  It affects subgoal @{text "i"} of the proof state.
+  flag.  It affects subgoal \<open>i\<close> of the proof state.
 
-  For each pair @{text "(flag, rule)"}, it applies resolution if the
-  flag is @{text "false"} and elim-resolution if the flag is @{text
-  "true"}.  A single tactic call handles a mixture of introduction and
+  For each pair \<open>(flag, rule)\<close>, it applies resolution if the
+  flag is \<open>false\<close> and elim-resolution if the flag is \<open>true\<close>.  A single tactic call handles a mixture of introduction and
   elimination rules, which is useful to organize the search process
   systematically in proof tools.
 
-  \<^descr> @{ML assume_tac}~@{text "ctxt i"} attempts to solve subgoal @{text i}
+  \<^descr> @{ML assume_tac}~\<open>ctxt i\<close> attempts to solve subgoal \<open>i\<close>
   by assumption (modulo higher-order unification).
 
   \<^descr> @{ML eq_assume_tac} is similar to @{ML assume_tac}, but checks
-  only for immediate @{text "\<alpha>"}-convertibility instead of using
+  only for immediate \<open>\<alpha>\<close>-convertibility instead of using
   unification.  It succeeds (with a unique next state) if one of the
   assumptions is equal to the subgoal's conclusion.  Since it does not
   instantiate variables, it cannot make other subgoals unprovable.
@@ -346,25 +340,22 @@
   higher-order unification is not so useful.  This typically involves
   rules like universal elimination, existential introduction, or
   equational substitution.  Here the unification problem involves
-  fully flexible @{text "?P ?x"} schemes, which are hard to manage
+  fully flexible \<open>?P ?x\<close> schemes, which are hard to manage
   without further hints.
 
-  By providing a (small) rigid term for @{text "?x"} explicitly, the
-  remaining unification problem is to assign a (large) term to @{text
-  "?P"}, according to the shape of the given subgoal.  This is
+  By providing a (small) rigid term for \<open>?x\<close> explicitly, the
+  remaining unification problem is to assign a (large) term to \<open>?P\<close>, according to the shape of the given subgoal.  This is
   sufficiently well-behaved in most practical situations.
 
   \<^medskip>
-  Isabelle provides separate versions of the standard @{text
-  "r/e/d/f"} resolution tactics that allow to provide explicit
+  Isabelle provides separate versions of the standard \<open>r/e/d/f\<close> resolution tactics that allow explicit
   instantiations of unknowns of the given rule, wrt.\ terms that refer
   to the implicit context of the selected subgoal.
 
-  An instantiation consists of a list of pairs of the form @{text
-  "(?x, t)"}, where @{text ?x} is a schematic variable occurring in
-  the given rule, and @{text t} is a term from the current proof
+  An instantiation consists of a list of pairs of the form \<open>(?x, t)\<close>, where \<open>?x\<close> is a schematic variable occurring in
+  the given rule, and \<open>t\<close> is a term from the current proof
   context, augmented by the local goal parameters of the selected
-  subgoal; cf.\ the @{text "focus"} operation described in
+  subgoal; cf.\ the \<open>focus\<close> operation described in
   \secref{sec:variables}.
 
   Entering the syntactic context of a subgoal is a brittle operation,
@@ -373,8 +364,7 @@
   global names.  Explicit renaming of subgoal parameters prior to
   explicit instantiation might help to achieve a bit more robustness.
 
-  Type instantiations may be given as well, via pairs like @{text
-  "(?'a, \<tau>)"}.  Type instantiations are distinguished from term
+  Type instantiations may be given as well, via pairs like \<open>(?'a, \<tau>)\<close>.  Type instantiations are distinguished from term
   instantiations by the syntactic form of the schematic variable.
   Types are instantiated before terms are.  Term instantiation
   already performs simple type-inference, so explicit type
@@ -402,9 +392,9 @@
   @{index_ML rename_tac: "string list -> int -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML Rule_Insts.res_inst_tac}~@{text "ctxt insts thm i"} instantiates the
-  rule @{text thm} with the instantiations @{text insts}, as described
-  above, and then performs resolution on subgoal @{text i}.
+  \<^descr> @{ML Rule_Insts.res_inst_tac}~\<open>ctxt insts thm i\<close> instantiates the
+  rule \<open>thm\<close> with the instantiations \<open>insts\<close>, as described
+  above, and then performs resolution on subgoal \<open>i\<close>.
   
   \<^descr> @{ML Rule_Insts.eres_inst_tac} is like @{ML Rule_Insts.res_inst_tac},
   but performs elim-resolution.
@@ -415,20 +405,19 @@
   \<^descr> @{ML Rule_Insts.forw_inst_tac} is like @{ML Rule_Insts.dres_inst_tac}
   except that the selected assumption is not deleted.
 
-  \<^descr> @{ML Rule_Insts.subgoal_tac}~@{text "ctxt \<phi> i"} adds the proposition
-  @{text "\<phi>"} as local premise to subgoal @{text "i"}, and poses the
-  same as a new subgoal @{text "i + 1"} (in the original context).
+  \<^descr> @{ML Rule_Insts.subgoal_tac}~\<open>ctxt \<phi> i\<close> adds the proposition
+  \<open>\<phi>\<close> as local premise to subgoal \<open>i\<close>, and poses the
+  same as a new subgoal \<open>i + 1\<close> (in the original context).
 
-  \<^descr> @{ML Rule_Insts.thin_tac}~@{text "ctxt \<phi> i"} deletes the specified
-  premise from subgoal @{text i}.  Note that @{text \<phi>} may contain
+  \<^descr> @{ML Rule_Insts.thin_tac}~\<open>ctxt \<phi> i\<close> deletes the specified
+  premise from subgoal \<open>i\<close>.  Note that \<open>\<phi>\<close> may contain
   schematic variables, to abbreviate the intended proposition; the
   first matching subgoal premise will be deleted.  Removing useless
   premises from a subgoal increases its readability and can make
   search tactics run faster.
 
-  \<^descr> @{ML rename_tac}~@{text "names i"} renames the innermost
-  parameters of subgoal @{text i} according to the provided @{text
-  names} (which need to be distinct identifiers).
+  \<^descr> @{ML rename_tac}~\<open>names i\<close> renames the innermost
+  parameters of subgoal \<open>i\<close> according to the provided \<open>names\<close> (which need to be distinct identifiers).
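+
+  For example (a sketch), \<open>rename_tac ["x", "y"] 1\<close> renames the two
+  innermost parameters of the first subgoal to \<open>x\<close> and \<open>y\<close>.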
 
 
   For historical reasons, the above instantiation tactics take
@@ -453,9 +442,9 @@
   @{index_ML flexflex_tac: "Proof.context -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML rotate_tac}~@{text "n i"} rotates the premises of subgoal
-  @{text i} by @{text n} positions: from right to left if @{text n} is
-  positive, and from left to right if @{text n} is negative.
+  \<^descr> @{ML rotate_tac}~\<open>n i\<close> rotates the premises of subgoal
+  \<open>i\<close> by \<open>n\<close> positions: from right to left if \<open>n\<close> is
+  positive, and from left to right if \<open>n\<close> is negative.
 
   \<^descr> @{ML distinct_subgoals_tac} removes duplicate subgoals from a
   proof state.  This is potentially inefficient.
@@ -489,24 +478,21 @@
   @{index_ML_op COMP: "thm * thm -> thm"} \\
   \end{mldecls}
 
-  \<^descr> @{ML compose_tac}~@{text "ctxt (flag, rule, m) i"} refines subgoal
-  @{text "i"} using @{text "rule"}, without lifting.  The @{text
-  "rule"} is taken to have the form @{text "\<psi>\<^sub>1 \<Longrightarrow> \<dots> \<psi>\<^sub>m \<Longrightarrow> \<psi>"}, where
-  @{text "\<psi>"} need not be atomic; thus @{text "m"} determines the
-  number of new subgoals.  If @{text "flag"} is @{text "true"} then it
-  performs elim-resolution --- it solves the first premise of @{text
-  "rule"} by assumption and deletes that assumption.
+  \<^descr> @{ML compose_tac}~\<open>ctxt (flag, rule, m) i\<close> refines subgoal
+  \<open>i\<close> using \<open>rule\<close>, without lifting.  The \<open>rule\<close> is taken to have the form \<open>\<psi>\<^sub>1 \<Longrightarrow> \<dots> \<psi>\<^sub>m \<Longrightarrow> \<psi>\<close>, where
+  \<open>\<psi>\<close> need not be atomic; thus \<open>m\<close> determines the
+  number of new subgoals.  If \<open>flag\<close> is \<open>true\<close> then it
+  performs elim-resolution --- it solves the first premise of \<open>rule\<close> by assumption and deletes that assumption.
 
-  \<^descr> @{ML Drule.compose}~@{text "(thm\<^sub>1, i, thm\<^sub>2)"} uses @{text "thm\<^sub>1"},
-  regarded as an atomic formula, to solve premise @{text "i"} of
-  @{text "thm\<^sub>2"}.  Let @{text "thm\<^sub>1"} and @{text "thm\<^sub>2"} be @{text
-  "\<psi>"} and @{text "\<phi>\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<phi>"}.  The unique @{text "s"} that
-  unifies @{text "\<psi>"} and @{text "\<phi>\<^sub>i"} yields the theorem @{text "(\<phi>\<^sub>1 \<Longrightarrow>
-  \<dots> \<phi>\<^sub>i\<^sub>-\<^sub>1 \<Longrightarrow> \<phi>\<^sub>i\<^sub>+\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<phi>)s"}.  Multiple results are considered as
+  \<^descr> @{ML Drule.compose}~\<open>(thm\<^sub>1, i, thm\<^sub>2)\<close> uses \<open>thm\<^sub>1\<close>,
+  regarded as an atomic formula, to solve premise \<open>i\<close> of
+  \<open>thm\<^sub>2\<close>.  Let \<open>thm\<^sub>1\<close> and \<open>thm\<^sub>2\<close> be \<open>\<psi>\<close> and \<open>\<phi>\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<phi>\<close>.  The unique \<open>s\<close> that
+  unifies \<open>\<psi>\<close> and \<open>\<phi>\<^sub>i\<close> yields the theorem \<open>(\<phi>\<^sub>1 \<Longrightarrow>
+  \<dots> \<phi>\<^sub>i\<^sub>-\<^sub>1 \<Longrightarrow> \<phi>\<^sub>i\<^sub>+\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<phi>)s\<close>.  Multiple results are considered as
   error (exception @{ML THM}).
 
-  \<^descr> @{text "thm\<^sub>1 COMP thm\<^sub>2"} is the same as @{text "Drule.compose
-  (thm\<^sub>1, 1, thm\<^sub>2)"}.
+  \<^descr> \<open>thm\<^sub>1 COMP thm\<^sub>2\<close> is the same as \<open>Drule.compose
+  (thm\<^sub>1, 1, thm\<^sub>2)\<close>.
 
 
   \begin{warn}
@@ -554,42 +540,35 @@
   @{index_ML "FIRST'": "('a -> tactic) list -> 'a -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{text "tac\<^sub>1"}~@{ML_op THEN}~@{text "tac\<^sub>2"} is the sequential
-  composition of @{text "tac\<^sub>1"} and @{text "tac\<^sub>2"}.  Applied to a goal
+  \<^descr> \<open>tac\<^sub>1\<close>~@{ML_op THEN}~\<open>tac\<^sub>2\<close> is the sequential
+  composition of \<open>tac\<^sub>1\<close> and \<open>tac\<^sub>2\<close>.  Applied to a goal
   state, it returns all states reachable in two steps by applying
-  @{text "tac\<^sub>1"} followed by @{text "tac\<^sub>2"}.  First, it applies @{text
-  "tac\<^sub>1"} to the goal state, getting a sequence of possible next
-  states; then, it applies @{text "tac\<^sub>2"} to each of these and
+  \<open>tac\<^sub>1\<close> followed by \<open>tac\<^sub>2\<close>.  First, it applies \<open>tac\<^sub>1\<close> to the goal state, getting a sequence of possible next
+  states; then, it applies \<open>tac\<^sub>2\<close> to each of these and
   concatenates the results to produce again one flat sequence of
   states.
 
-  \<^descr> @{text "tac\<^sub>1"}~@{ML_op ORELSE}~@{text "tac\<^sub>2"} makes a choice
-  between @{text "tac\<^sub>1"} and @{text "tac\<^sub>2"}.  Applied to a state, it
-  tries @{text "tac\<^sub>1"} and returns the result if non-empty; if @{text
-  "tac\<^sub>1"} fails then it uses @{text "tac\<^sub>2"}.  This is a deterministic
-  choice: if @{text "tac\<^sub>1"} succeeds then @{text "tac\<^sub>2"} is excluded
+  \<^descr> \<open>tac\<^sub>1\<close>~@{ML_op ORELSE}~\<open>tac\<^sub>2\<close> makes a choice
+  between \<open>tac\<^sub>1\<close> and \<open>tac\<^sub>2\<close>.  Applied to a state, it
+  tries \<open>tac\<^sub>1\<close> and returns the result if non-empty; if \<open>tac\<^sub>1\<close> fails then it uses \<open>tac\<^sub>2\<close>.  This is a deterministic
+  choice: if \<open>tac\<^sub>1\<close> succeeds then \<open>tac\<^sub>2\<close> is excluded
   from the result.
 
-  \<^descr> @{text "tac\<^sub>1"}~@{ML_op APPEND}~@{text "tac\<^sub>2"} concatenates the
-  possible results of @{text "tac\<^sub>1"} and @{text "tac\<^sub>2"}.  Unlike
+  \<^descr> \<open>tac\<^sub>1\<close>~@{ML_op APPEND}~\<open>tac\<^sub>2\<close> concatenates the
+  possible results of \<open>tac\<^sub>1\<close> and \<open>tac\<^sub>2\<close>.  Unlike
   @{ML_op "ORELSE"} there is \<^emph>\<open>no commitment\<close> to either tactic, so
   @{ML_op "APPEND"} helps to avoid incompleteness during search, at
   the cost of potential inefficiencies.
 
-  \<^descr> @{ML EVERY}~@{text "[tac\<^sub>1, \<dots>, tac\<^sub>n]"} abbreviates @{text
-  "tac\<^sub>1"}~@{ML_op THEN}~@{text "\<dots>"}~@{ML_op THEN}~@{text "tac\<^sub>n"}.
+  \<^descr> @{ML EVERY}~\<open>[tac\<^sub>1, \<dots>, tac\<^sub>n]\<close> abbreviates \<open>tac\<^sub>1\<close>~@{ML_op THEN}~\<open>\<dots>\<close>~@{ML_op THEN}~\<open>tac\<^sub>n\<close>.
   Note that @{ML "EVERY []"} is the same as @{ML all_tac}: it always
   succeeds.
 
-  \<^descr> @{ML FIRST}~@{text "[tac\<^sub>1, \<dots>, tac\<^sub>n]"} abbreviates @{text
-  "tac\<^sub>1"}~@{ML_op ORELSE}~@{text "\<dots>"}~@{ML_op "ORELSE"}~@{text
-  "tac\<^sub>n"}.  Note that @{ML "FIRST []"} is the same as @{ML no_tac}: it
+  \<^descr> @{ML FIRST}~\<open>[tac\<^sub>1, \<dots>, tac\<^sub>n]\<close> abbreviates \<open>tac\<^sub>1\<close>~@{ML_op ORELSE}~\<open>\<dots>\<close>~@{ML_op "ORELSE"}~\<open>tac\<^sub>n\<close>.  Note that @{ML "FIRST []"} is the same as @{ML no_tac}: it
   always fails.
 
   \<^descr> @{ML_op "THEN'"} is the lifted version of @{ML_op "THEN"}, for
-  tactics with explicit subgoal addressing.  So @{text
-  "(tac\<^sub>1"}~@{ML_op THEN'}~@{text "tac\<^sub>2) i"} is the same as @{text
-  "(tac\<^sub>1 i"}~@{ML_op THEN}~@{text "tac\<^sub>2 i)"}.
+  tactics with explicit subgoal addressing.  So \<open>(tac\<^sub>1\<close>~@{ML_op THEN'}~\<open>tac\<^sub>2) i\<close> is the same as \<open>(tac\<^sub>1 i\<close>~@{ML_op THEN}~\<open>tac\<^sub>2 i)\<close>.
 
   The other primed tacticals work analogously.
 \<close>
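+
+text %mlex \<open>For illustration, a minimal sketch (assuming a HOL context
+  with rules \<open>impI\<close> and \<open>conjI\<close> available) of composing addressed
+  tactics via @{ML_op THEN'}:\<close>
+
+ML \<open>
+  (*a sketch: one introduction step on a subgoal, then solve the
+    resulting subgoal by assumption*)
+  fun intro_step ctxt =
+    resolve_tac ctxt @{thms impI conjI} THEN' assume_tac ctxt;
+\<close>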
@@ -610,35 +589,33 @@
   @{index_ML "REPEAT_DETERM_N": "int -> tactic -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML TRY}~@{text "tac"} applies @{text "tac"} to the goal
+  \<^descr> @{ML TRY}~\<open>tac\<close> applies \<open>tac\<close> to the goal
   state and returns the resulting sequence, if non-empty; otherwise it
-  returns the original state.  Thus, it applies @{text "tac"} at most
+  returns the original state.  Thus, it applies \<open>tac\<close> at most
   once.
 
   Note that for tactics with subgoal addressing, the combinator can be
-  applied via functional composition: @{ML "TRY"}~@{ML_op o}~@{text
-  "tac"}.  There is no need for @{verbatim TRY'}.
+  applied via functional composition: @{ML "TRY"}~@{ML_op o}~\<open>tac\<close>.  There is no need for @{verbatim TRY'}.
 
-  \<^descr> @{ML REPEAT}~@{text "tac"} applies @{text "tac"} to the goal
+  \<^descr> @{ML REPEAT}~\<open>tac\<close> applies \<open>tac\<close> to the goal
   state and, recursively, to each element of the resulting sequence.
-  The resulting sequence consists of those states that make @{text
-  "tac"} fail.  Thus, it applies @{text "tac"} as many times as
+  The resulting sequence consists of those states that make \<open>tac\<close> fail.  Thus, it applies \<open>tac\<close> as many times as
   possible (including zero times), and allows backtracking over each
-  invocation of @{text "tac"}.  @{ML REPEAT} is more general than @{ML
+  invocation of \<open>tac\<close>.  @{ML REPEAT} is more general than @{ML
   REPEAT_DETERM}, but requires more space.
 
-  \<^descr> @{ML REPEAT1}~@{text "tac"} is like @{ML REPEAT}~@{text "tac"}
-  but it always applies @{text "tac"} at least once, failing if this
+  \<^descr> @{ML REPEAT1}~\<open>tac\<close> is like @{ML REPEAT}~\<open>tac\<close>
+  but it always applies \<open>tac\<close> at least once, failing if this
   is impossible.
 
-  \<^descr> @{ML REPEAT_DETERM}~@{text "tac"} applies @{text "tac"} to the
+  \<^descr> @{ML REPEAT_DETERM}~\<open>tac\<close> applies \<open>tac\<close> to the
   goal state and, recursively, to the head of the resulting sequence.
-  It returns the first state to make @{text "tac"} fail.  It is
+  It returns the first state to make \<open>tac\<close> fail.  It is
   deterministic, discarding alternative outcomes.
 
-  \<^descr> @{ML REPEAT_DETERM_N}~@{text "n tac"} is like @{ML
-  REPEAT_DETERM}~@{text "tac"} but the number of repetitions is bound
-  by @{text "n"} (where @{ML "~1"} means @{text "\<infinity>"}).
+  \<^descr> @{ML REPEAT_DETERM_N}~\<open>n tac\<close> is like @{ML
+  REPEAT_DETERM}~\<open>tac\<close> but the number of repetitions is bound
+  by \<open>n\<close> (where @{ML "~1"} means \<open>\<infinity>\<close>).
 \<close>
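+
+text %mlex \<open>For example (a sketch, assuming HOL's elimination rule
+  \<open>conjE\<close>), the following tactic repeatedly decomposes conjunctive
+  assumptions of the first subgoal, with backtracking over each step:\<close>
+
+ML \<open>
+  (*a sketch: apply conjunction elimination as often as possible*)
+  fun elim_conjs ctxt = REPEAT (eresolve_tac ctxt @{thms conjE} 1);
+\<close>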
 
 text %mlex \<open>The basic tactics and tacticals considered above follow
@@ -649,7 +626,7 @@
 
   \<^item> @{ML no_tac} is the identity element of @{ML_op "ORELSE"} and
   @{ML_op "APPEND"}.  Also, it is a zero element for @{ML_op "THEN"},
-  which means that @{text "tac"}~@{ML_op THEN}~@{ML no_tac} is
+  which means that \<open>tac\<close>~@{ML_op THEN}~@{ML no_tac} is
   equivalent to @{ML no_tac}.
 
   \<^item> @{ML TRY} and @{ML REPEAT} can be expressed as (recursive)
@@ -662,9 +639,9 @@
   fun REPEAT tac st = ((tac THEN REPEAT tac) ORELSE all_tac) st;
 \<close>
 
-text \<open>If @{text "tac"} can return multiple outcomes then so can @{ML
-  REPEAT}~@{text "tac"}.  @{ML REPEAT} uses @{ML_op "ORELSE"} and not
-  @{ML_op "APPEND"}, it applies @{text "tac"} as many times as
+text \<open>If \<open>tac\<close> can return multiple outcomes then so can @{ML
+  REPEAT}~\<open>tac\<close>.  @{ML REPEAT} uses @{ML_op "ORELSE"} and not
+  @{ML_op "APPEND"}, it applies \<open>tac\<close> as many times as
   possible in each outcome.
 
   \begin{warn}
@@ -672,7 +649,7 @@
   definition of @{ML REPEAT}.  Recursive tacticals must be coded in
   this awkward fashion to avoid infinite recursion of eager functional
   evaluation in Standard ML.  The following attempt would make @{ML
-  REPEAT}~@{text "tac"} loop:
+  REPEAT}~\<open>tac\<close> loop:
   \end{warn}
 \<close>
 
@@ -690,9 +667,9 @@
   tactic a certain range of subgoals is covered.  Thus the body tactic
   is applied to \<^emph>\<open>all\<close> subgoals, \<^emph>\<open>some\<close> subgoal etc.
 
-  Suppose that the goal state has @{text "n \<ge> 0"} subgoals.  Many of
+  Suppose that the goal state has \<open>n \<ge> 0\<close> subgoals.  Many of
   these tacticals address subgoal ranges counting downwards from
-  @{text "n"} towards @{text "1"}.  This has the fortunate effect that
+  \<open>n\<close> towards \<open>1\<close>.  This has the fortunate effect that
   newly emerging subgoals are concatenated in the result, without
   interfering with each other.  Nonetheless, there might be situations
   where a different order is desired.\<close>
@@ -708,30 +685,30 @@
   @{index_ML RANGE: "(int -> tactic) list -> int -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML ALLGOALS}~@{text "tac"} is equivalent to @{text "tac
-  n"}~@{ML_op THEN}~@{text "\<dots>"}~@{ML_op THEN}~@{text "tac 1"}.  It
-  applies the @{text tac} to all the subgoals, counting downwards.
+  \<^descr> @{ML ALLGOALS}~\<open>tac\<close> is equivalent to \<open>tac
+  n\<close>~@{ML_op THEN}~\<open>\<dots>\<close>~@{ML_op THEN}~\<open>tac 1\<close>.  It
+  applies \<open>tac\<close> to all the subgoals, counting downwards.
 
-  \<^descr> @{ML SOMEGOAL}~@{text "tac"} is equivalent to @{text "tac
-  n"}~@{ML_op ORELSE}~@{text "\<dots>"}~@{ML_op ORELSE}~@{text "tac 1"}.  It
-  applies @{text "tac"} to one subgoal, counting downwards.
+  \<^descr> @{ML SOMEGOAL}~\<open>tac\<close> is equivalent to \<open>tac
+  n\<close>~@{ML_op ORELSE}~\<open>\<dots>\<close>~@{ML_op ORELSE}~\<open>tac 1\<close>.  It
+  applies \<open>tac\<close> to one subgoal, counting downwards.
 
-  \<^descr> @{ML FIRSTGOAL}~@{text "tac"} is equivalent to @{text "tac
-  1"}~@{ML_op ORELSE}~@{text "\<dots>"}~@{ML_op ORELSE}~@{text "tac n"}.  It
-  applies @{text "tac"} to one subgoal, counting upwards.
+  \<^descr> @{ML FIRSTGOAL}~\<open>tac\<close> is equivalent to \<open>tac
+  1\<close>~@{ML_op ORELSE}~\<open>\<dots>\<close>~@{ML_op ORELSE}~\<open>tac n\<close>.  It
+  applies \<open>tac\<close> to one subgoal, counting upwards.
 
-  \<^descr> @{ML HEADGOAL}~@{text "tac"} is equivalent to @{text "tac 1"}.
-  It applies @{text "tac"} unconditionally to the first subgoal.
+  \<^descr> @{ML HEADGOAL}~\<open>tac\<close> is equivalent to \<open>tac 1\<close>.
+  It applies \<open>tac\<close> unconditionally to the first subgoal.
 
-  \<^descr> @{ML REPEAT_SOME}~@{text "tac"} applies @{text "tac"} once or
+  \<^descr> @{ML REPEAT_SOME}~\<open>tac\<close> applies \<open>tac\<close> once or
   more to a subgoal, counting downwards.
 
-  \<^descr> @{ML REPEAT_FIRST}~@{text "tac"} applies @{text "tac"} once or
+  \<^descr> @{ML REPEAT_FIRST}~\<open>tac\<close> applies \<open>tac\<close> once or
   more to a subgoal, counting upwards.
 
-  \<^descr> @{ML RANGE}~@{text "[tac\<^sub>1, \<dots>, tac\<^sub>k] i"} is equivalent to
-  @{text "tac\<^sub>k (i + k - 1)"}~@{ML_op THEN}~@{text "\<dots>"}~@{ML_op
-  THEN}~@{text "tac\<^sub>1 i"}.  It applies the given list of tactics to the
+  \<^descr> @{ML RANGE}~\<open>[tac\<^sub>1, \<dots>, tac\<^sub>k] i\<close> is equivalent to
+  \<open>tac\<^sub>k (i + k - 1)\<close>~@{ML_op THEN}~\<open>\<dots>\<close>~@{ML_op
+  THEN}~\<open>tac\<^sub>1 i\<close>.  It applies the given list of tactics to the
   corresponding range of subgoals, counting downwards.
 \<close>
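+
+text %mlex \<open>For example (a sketch), a finishing step that insists on
+  closing every remaining subgoal by assumption:\<close>
+
+ML \<open>
+  (*a sketch: fails unless each subgoal is solvable by assumption*)
+  fun assume_all ctxt = ALLGOALS (assume_tac ctxt);
+\<close>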
 
@@ -757,14 +734,14 @@
   @{index_ML CHANGED: "tactic -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML FILTER}~@{text "sat tac"} applies @{text "tac"} to the
+  \<^descr> @{ML FILTER}~\<open>sat tac\<close> applies \<open>tac\<close> to the
   goal state and returns a sequence consisting of those result goal
-  states that are satisfactory in the sense of @{text "sat"}.
+  states that are satisfactory in the sense of \<open>sat\<close>.
 
-  \<^descr> @{ML CHANGED}~@{text "tac"} applies @{text "tac"} to the goal
+  \<^descr> @{ML CHANGED}~\<open>tac\<close> applies \<open>tac\<close> to the goal
   state and returns precisely those states that differ from the
   original state (according to @{ML Thm.eq_thm}).  Thus @{ML
-  CHANGED}~@{text "tac"} always has some effect on the state.
+  CHANGED}~\<open>tac\<close> always has some effect on the state.
 \<close>
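+
+text %mlex \<open>For example (a sketch, assuming the Simplifier setup of
+  Isabelle/HOL), a simplification step that is required to make
+  progress:\<close>
+
+ML \<open>
+  (*a sketch: fail if simplification leaves subgoal 1 unchanged*)
+  fun progress_simp_tac ctxt = CHANGED (asm_full_simp_tac ctxt 1);
+\<close>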
 
 
@@ -777,17 +754,17 @@
   @{index_ML DEPTH_SOLVE_1: "tactic -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML DEPTH_FIRST}~@{text "sat tac"} returns the goal state if
-  @{text "sat"} returns true.  Otherwise it applies @{text "tac"},
+  \<^descr> @{ML DEPTH_FIRST}~\<open>sat tac\<close> returns the goal state if
+  \<open>sat\<close> returns true.  Otherwise it applies \<open>tac\<close>,
   then recursively searches from each element of the resulting
   sequence.  The code uses a stack for efficiency, in effect applying
-  @{text "tac"}~@{ML_op THEN}~@{ML DEPTH_FIRST}~@{text "sat tac"} to
+  \<open>tac\<close>~@{ML_op THEN}~@{ML DEPTH_FIRST}~\<open>sat tac\<close> to
   the state.
 
-  \<^descr> @{ML DEPTH_SOLVE}@{text "tac"} uses @{ML DEPTH_FIRST} to
+  \<^descr> @{ML DEPTH_SOLVE}~\<open>tac\<close> uses @{ML DEPTH_FIRST} to
   search for states having no subgoals.
 
-  \<^descr> @{ML DEPTH_SOLVE_1}~@{text "tac"} uses @{ML DEPTH_FIRST} to
+  \<^descr> @{ML DEPTH_SOLVE_1}~\<open>tac\<close> uses @{ML DEPTH_FIRST} to
   search for states having fewer subgoals than the given state.  Thus,
   it insists upon solving at least one subgoal.
 \<close>
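+
+text %mlex \<open>For instance, @{ML DEPTH_SOLVE} may be understood as the
+  following sketch in terms of @{ML DEPTH_FIRST} and @{ML
+  has_fewer_prems}:\<close>
+
+ML \<open>
+  (*a sketch: search depth-first for a state without subgoals*)
+  fun depth_solve tac = DEPTH_FIRST (has_fewer_prems 1) tac;
+\<close>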
@@ -804,16 +781,16 @@
 
   These search strategies will find a solution if one exists.
   However, they do not enumerate all solutions; they terminate after
-  the first satisfactory result from @{text "tac"}.
+  the first satisfactory result from \<open>tac\<close>.
 
-  \<^descr> @{ML BREADTH_FIRST}~@{text "sat tac"} uses breadth-first
-  search to find states for which @{text "sat"} is true.  For most
+  \<^descr> @{ML BREADTH_FIRST}~\<open>sat tac\<close> uses breadth-first
+  search to find states for which \<open>sat\<close> is true.  For most
   applications, it is too slow.
 
-  \<^descr> @{ML BEST_FIRST}~@{text "(sat, dist) tac"} does a heuristic
-  search, using @{text "dist"} to estimate the distance from a
-  satisfactory state (in the sense of @{text "sat"}).  It maintains a
-  list of states ordered by distance.  It applies @{text "tac"} to the
+  \<^descr> @{ML BEST_FIRST}~\<open>(sat, dist) tac\<close> does a heuristic
+  search, using \<open>dist\<close> to estimate the distance from a
+  satisfactory state (in the sense of \<open>sat\<close>).  It maintains a
+  list of states ordered by distance.  It applies \<open>tac\<close> to the
   head of this list; if the result contains any satisfactory states,
   then it returns them.  Otherwise, @{ML BEST_FIRST} adds the new
   states to the list, and continues.
@@ -822,9 +799,9 @@
   the size of the state.  The smaller the state, the fewer and simpler
   subgoals it has.
 
-  \<^descr> @{ML THEN_BEST_FIRST}~@{text "tac\<^sub>0 (sat, dist) tac"} is like
+  \<^descr> @{ML THEN_BEST_FIRST}~\<open>tac\<^sub>0 (sat, dist) tac\<close> is like
   @{ML BEST_FIRST}, except that the priority queue initially contains
-  the result of applying @{text "tac\<^sub>0"} to the goal state.  This
+  the result of applying \<open>tac\<^sub>0\<close> to the goal state.  This
   tactical permits separate tactics for starting the search and
   continuing the search.
 \<close>
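+
+text %mlex \<open>For example (a sketch), best-first search for a solved
+  state, using @{ML size_of_thm} as the distance heuristic:\<close>
+
+ML \<open>
+  (*a sketch: prefer smaller goal states while searching for a state
+    without subgoals*)
+  fun best_solve tac = BEST_FIRST (has_fewer_prems 1, size_of_thm) tac;
+\<close>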
@@ -840,22 +817,22 @@
   @{index_ML DETERM: "tactic -> tactic"} \\
   \end{mldecls}
 
-  \<^descr> @{ML COND}~@{text "sat tac\<^sub>1 tac\<^sub>2"} applies @{text "tac\<^sub>1"} to
-  the goal state if it satisfies predicate @{text "sat"}, and applies
-  @{text "tac\<^sub>2"}.  It is a conditional tactical in that only one of
-  @{text "tac\<^sub>1"} and @{text "tac\<^sub>2"} is applied to a goal state.
-  However, both @{text "tac\<^sub>1"} and @{text "tac\<^sub>2"} are evaluated
+  \<^descr> @{ML COND}~\<open>sat tac\<^sub>1 tac\<^sub>2\<close> applies \<open>tac\<^sub>1\<close> to
+  the goal state if it satisfies predicate \<open>sat\<close>, and applies
+  \<open>tac\<^sub>2\<close> otherwise.  It is a conditional tactical in that only one of
+  \<open>tac\<^sub>1\<close> and \<open>tac\<^sub>2\<close> is applied to a goal state.
+  However, both \<open>tac\<^sub>1\<close> and \<open>tac\<^sub>2\<close> are evaluated
   because ML uses eager evaluation.
 
-  \<^descr> @{ML IF_UNSOLVED}~@{text "tac"} applies @{text "tac"} to the
+  \<^descr> @{ML IF_UNSOLVED}~\<open>tac\<close> applies \<open>tac\<close> to the
   goal state if it has any subgoals, and simply returns the goal state
   otherwise.  Many common tactics, such as @{ML resolve_tac}, fail if
   applied to a goal state that has no subgoals.
 
-  \<^descr> @{ML SOLVE}~@{text "tac"} applies @{text "tac"} to the goal
+  \<^descr> @{ML SOLVE}~\<open>tac\<close> applies \<open>tac\<close> to the goal
   state and then fails iff there are subgoals left.
 
-  \<^descr> @{ML DETERM}~@{text "tac"} applies @{text "tac"} to the goal
+  \<^descr> @{ML DETERM}~\<open>tac\<close> applies \<open>tac\<close> to the goal
   state and returns the head of the resulting sequence.  @{ML DETERM}
   limits the search space by making its argument deterministic.
 \<close>
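+
+text %mlex \<open>For example (a sketch), a deterministic wrapper that
+  commits to the first result of a tactic and fails unless all
+  subgoals are solved:\<close>
+
+ML \<open>
+  (*a sketch: deterministic version of a solving tactic*)
+  val solve_det = DETERM o SOLVE;
+\<close>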
@@ -871,20 +848,17 @@
   @{index_ML size_of_thm: "thm -> int"} \\
   \end{mldecls}
 
-  \<^descr> @{ML has_fewer_prems}~@{text "n thm"} reports whether @{text
-  "thm"} has fewer than @{text "n"} premises.
+  \<^descr> @{ML has_fewer_prems}~\<open>n thm\<close> reports whether \<open>thm\<close> has fewer than \<open>n\<close> premises.
 
-  \<^descr> @{ML Thm.eq_thm}~@{text "(thm\<^sub>1, thm\<^sub>2)"} reports whether @{text
-  "thm\<^sub>1"} and @{text "thm\<^sub>2"} are equal.  Both theorems must have the
+  \<^descr> @{ML Thm.eq_thm}~\<open>(thm\<^sub>1, thm\<^sub>2)\<close> reports whether \<open>thm\<^sub>1\<close> and \<open>thm\<^sub>2\<close> are equal.  Both theorems must have the
   same conclusions, the same set of hypotheses, and the same set of sort
   hypotheses.  Names of bound variables are ignored as usual.
 
-  \<^descr> @{ML Thm.eq_thm_prop}~@{text "(thm\<^sub>1, thm\<^sub>2)"} reports whether
-  the propositions of @{text "thm\<^sub>1"} and @{text "thm\<^sub>2"} are equal.
+  \<^descr> @{ML Thm.eq_thm_prop}~\<open>(thm\<^sub>1, thm\<^sub>2)\<close> reports whether
+  the propositions of \<open>thm\<^sub>1\<close> and \<open>thm\<^sub>2\<close> are equal.
   Names of bound variables are ignored.
 
-  \<^descr> @{ML size_of_thm}~@{text "thm"} computes the size of @{text
-  "thm"}, namely the number of variables, constants and abstractions
+  \<^descr> @{ML size_of_thm}~\<open>thm\<close> computes the size of \<open>thm\<close>, namely the number of variables, constants and abstractions
   in its conclusion.  It may serve as a distance function for
   @{ML BEST_FIRST}.
 \<close>
--- a/src/Doc/Isar_Ref/Document_Preparation.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Document_Preparation.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -22,15 +22,15 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "chapter"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "section"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "subsection"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "subsubsection"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "paragraph"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "subparagraph"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "text"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "txt"} & : & @{text "any \<rightarrow> any"} \\
-    @{command_def "text_raw"} & : & @{text "any \<rightarrow> any"} \\
+    @{command_def "chapter"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "section"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "subsection"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "subsubsection"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "paragraph"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "subparagraph"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "text"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "txt"} & : & \<open>any \<rightarrow> any\<close> \\
+    @{command_def "text_raw"} & : & \<open>any \<rightarrow> any\<close> \\
   \end{matharray}
 
   Markup commands provide a structured way to insert text into the
@@ -58,7 +58,7 @@
 
   \<^descr> @{command text} and @{command txt} specify paragraphs of plain text.
   This corresponds to a {\LaTeX} environment @{verbatim
-  \<open>\begin{isamarkuptext}\<close>} @{text "\<dots>"} @{verbatim \<open>\end{isamarkuptext}\<close>}
+  \<open>\begin{isamarkuptext}\<close>} \<open>\<dots>\<close> @{verbatim \<open>\end{isamarkuptext}\<close>}
   etc.
 
   \<^descr> @{command text_raw} is similar to @{command text}, but without
@@ -81,35 +81,35 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{antiquotation_def "theory"} & : & @{text antiquotation} \\
-    @{antiquotation_def "thm"} & : & @{text antiquotation} \\
-    @{antiquotation_def "lemma"} & : & @{text antiquotation} \\
-    @{antiquotation_def "prop"} & : & @{text antiquotation} \\
-    @{antiquotation_def "term"} & : & @{text antiquotation} \\
-    @{antiquotation_def term_type} & : & @{text antiquotation} \\
-    @{antiquotation_def typeof} & : & @{text antiquotation} \\
-    @{antiquotation_def const} & : & @{text antiquotation} \\
-    @{antiquotation_def abbrev} & : & @{text antiquotation} \\
-    @{antiquotation_def typ} & : & @{text antiquotation} \\
-    @{antiquotation_def type} & : & @{text antiquotation} \\
-    @{antiquotation_def class} & : & @{text antiquotation} \\
-    @{antiquotation_def "text"} & : & @{text antiquotation} \\
-    @{antiquotation_def goals} & : & @{text antiquotation} \\
-    @{antiquotation_def subgoals} & : & @{text antiquotation} \\
-    @{antiquotation_def prf} & : & @{text antiquotation} \\
-    @{antiquotation_def full_prf} & : & @{text antiquotation} \\
-    @{antiquotation_def ML} & : & @{text antiquotation} \\
-    @{antiquotation_def ML_op} & : & @{text antiquotation} \\
-    @{antiquotation_def ML_type} & : & @{text antiquotation} \\
-    @{antiquotation_def ML_structure} & : & @{text antiquotation} \\
-    @{antiquotation_def ML_functor} & : & @{text antiquotation} \\
-    @{antiquotation_def emph} & : & @{text antiquotation} \\
-    @{antiquotation_def bold} & : & @{text antiquotation} \\
-    @{antiquotation_def verbatim} & : & @{text antiquotation} \\
-    @{antiquotation_def "file"} & : & @{text antiquotation} \\
-    @{antiquotation_def "url"} & : & @{text antiquotation} \\
-    @{antiquotation_def "cite"} & : & @{text antiquotation} \\
-    @{command_def "print_antiquotations"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow> "} \\
+    @{antiquotation_def "theory"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "thm"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "lemma"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "prop"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "term"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def term_type} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def typeof} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def const} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def abbrev} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def typ} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def type} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def class} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "text"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def goals} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def subgoals} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def prf} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def full_prf} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def ML} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def ML_op} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def ML_type} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def ML_structure} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def ML_functor} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def emph} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def bold} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def verbatim} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "file"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "url"} & : & \<open>antiquotation\<close> \\
+    @{antiquotation_def "cite"} & : & \<open>antiquotation\<close> \\
+    @{command_def "print_antiquotations"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow> \<close> \\
   \end{matharray}
 
   The overall content of an Isabelle/Isar theory may alternate between
@@ -131,17 +131,15 @@
   antiquotations are checked within the current theory or proof
   context.
 
-  \<^medskip> Antiquotations are in general written as @{verbatim "@{"}@{text
-  "name"}~@{verbatim "["}@{text options}@{verbatim "]"}~@{text
-  "arguments"}@{verbatim "}"}. The short form @{verbatim \<open>\\<close>}@{verbatim
-  "<^"}@{text name}@{verbatim ">"}@{text "\<open>argument_content\<close>"} (without
-  surrounding @{verbatim "@{"}@{text "\<dots>"}@{verbatim "}"}) works for a single
+  \<^medskip> Antiquotations are in general written as @{verbatim "@{"}\<open>name\<close>~@{verbatim "["}\<open>options\<close>@{verbatim "]"}~\<open>arguments\<close>@{verbatim "}"}. The short form @{verbatim \<open>\\<close>}@{verbatim
+  "<^"}\<open>name\<close>@{verbatim ">"}\<open>\<open>argument_content\<close>\<close> (without
+  surrounding @{verbatim "@{"}\<open>\<dots>\<close>@{verbatim "}"}) works for a single
   argument that is a cartouche.
 
   Omitting the control symbol is also possible: a cartouche without special
   decoration is equivalent to @{verbatim \<open>\\<close>}@{verbatim
-  "<^cartouche>"}@{text "\<open>argument_content\<close>"}, which is equivalent to
-  @{verbatim "@{cartouche"}~@{text "\<open>argument_content\<close>"}@{verbatim "}"}. The
+  "<^cartouche>"}\<open>\<open>argument_content\<close>\<close>, which is equivalent to
+  @{verbatim "@{cartouche"}~\<open>\<open>argument_content\<close>\<close>@{verbatim "}"}. The
   special name @{antiquotation_def cartouche} is defined in the context:
   Isabelle/Pure introduces that as an alias to @{antiquotation_ref text}
   (see below). Consequently, \<open>\<open>foo_bar + baz \<le> bazar\<close>\<close> prints literal
@@ -163,8 +161,8 @@
   \endgroup
 
   Note that the syntax of antiquotations may \<^emph>\<open>not\<close> include source
-  comments @{verbatim "(*"}~@{text "\<dots>"}~@{verbatim "*)"} nor verbatim
-  text @{verbatim "{*"}~@{text "\<dots>"}~@{verbatim "*}"}.
+  comments @{verbatim "(*"}~\<open>\<dots>\<close>~@{verbatim "*)"} nor verbatim
+  text @{verbatim "{*"}~\<open>\<dots>\<close>~@{verbatim "*}"}.
 
   %% FIXME less monolithic presentation, move to individual sections!?
   @{rail \<open>
@@ -209,8 +207,7 @@
     @@{command print_antiquotations} ('!'?)
   \<close>}
 
-  \<^descr> @{text "@{text s}"} prints uninterpreted source text @{text
-  s}.  This is particularly useful to print portions of text according
+  \<^descr> \<open>@{text s}\<close> prints uninterpreted source text \<open>s\<close>.  This is particularly useful to print portions of text according
   to the Isabelle document style, without demanding well-formedness,
   e.g.\ small pieces of terms that should not be parsed or
   type-checked yet.
@@ -218,45 +215,44 @@
   It is also possible to write this in the short form \<open>\<open>s\<close>\<close> without any
   further decoration.
 
-  \<^descr> @{text "@{theory A}"} prints the name @{text "A"}, which is
+  \<^descr> \<open>@{theory A}\<close> prints the name \<open>A\<close>, which is
   guaranteed to refer to a valid ancestor theory in the current
   context.
 
-  \<^descr> @{text "@{thm a\<^sub>1 \<dots> a\<^sub>n}"} prints theorems @{text "a\<^sub>1 \<dots> a\<^sub>n"}.
+  \<^descr> \<open>@{thm a\<^sub>1 \<dots> a\<^sub>n}\<close> prints theorems \<open>a\<^sub>1 \<dots> a\<^sub>n\<close>.
   Full fact expressions are allowed here, including attributes
   (\secref{sec:syn-att}).
 
-  \<^descr> @{text "@{prop \<phi>}"} prints a well-typed proposition @{text
-  "\<phi>"}.
+  \<^descr> \<open>@{prop \<phi>}\<close> prints a well-typed proposition \<open>\<phi>\<close>.
 
-  \<^descr> @{text "@{lemma \<phi> by m}"} proves a well-typed proposition
-  @{text "\<phi>"} by method @{text m} and prints the original @{text "\<phi>"}.
+  \<^descr> \<open>@{lemma \<phi> by m}\<close> proves a well-typed proposition
+  \<open>\<phi>\<close> by method \<open>m\<close> and prints the original \<open>\<phi>\<close>.
 
-  \<^descr> @{text "@{term t}"} prints a well-typed term @{text "t"}.
+  \<^descr> \<open>@{term t}\<close> prints a well-typed term \<open>t\<close>.
   
-  \<^descr> @{text "@{value t}"} evaluates a term @{text "t"} and prints
+  \<^descr> \<open>@{value t}\<close> evaluates a term \<open>t\<close> and prints
   its result, see also @{command_ref (HOL) value}.
 
-  \<^descr> @{text "@{term_type t}"} prints a well-typed term @{text "t"}
+  \<^descr> \<open>@{term_type t}\<close> prints a well-typed term \<open>t\<close>
   annotated with its type.
 
-  \<^descr> @{text "@{typeof t}"} prints the type of a well-typed term
-  @{text "t"}.
+  \<^descr> \<open>@{typeof t}\<close> prints the type of a well-typed term
+  \<open>t\<close>.
 
-  \<^descr> @{text "@{const c}"} prints a logical or syntactic constant
-  @{text "c"}.
+  \<^descr> \<open>@{const c}\<close> prints a logical or syntactic constant
+  \<open>c\<close>.
   
-  \<^descr> @{text "@{abbrev c x\<^sub>1 \<dots> x\<^sub>n}"} prints a constant abbreviation
-  @{text "c x\<^sub>1 \<dots> x\<^sub>n \<equiv> rhs"} as defined in the current context.
+  \<^descr> \<open>@{abbrev c x\<^sub>1 \<dots> x\<^sub>n}\<close> prints a constant abbreviation
+  \<open>c x\<^sub>1 \<dots> x\<^sub>n \<equiv> rhs\<close> as defined in the current context.
 
-  \<^descr> @{text "@{typ \<tau>}"} prints a well-formed type @{text "\<tau>"}.
+  \<^descr> \<open>@{typ \<tau>}\<close> prints a well-formed type \<open>\<tau>\<close>.
 
-  \<^descr> @{text "@{type \<kappa>}"} prints a (logical or syntactic) type
-    constructor @{text "\<kappa>"}.
+  \<^descr> \<open>@{type \<kappa>}\<close> prints a (logical or syntactic) type
+    constructor \<open>\<kappa>\<close>.
 
-  \<^descr> @{text "@{class c}"} prints a class @{text c}.
+  \<^descr> \<open>@{class c}\<close> prints a class \<open>c\<close>.
 
-  \<^descr> @{text "@{goals}"} prints the current \<^emph>\<open>dynamic\<close> goal
+  \<^descr> \<open>@{goals}\<close> prints the current \<^emph>\<open>dynamic\<close> goal
   state.  This is mainly for support of tactic-emulation scripts
   within Isar.  Presentation of goal states does not conform to the
   idea of human-readable proof documents!
@@ -265,84 +261,83 @@
   the reasoning via proper Isar proof commands, instead of peeking at
   the internal machine configuration.
   
-  \<^descr> @{text "@{subgoals}"} is similar to @{text "@{goals}"}, but
+  \<^descr> \<open>@{subgoals}\<close> is similar to \<open>@{goals}\<close>, but
   does not print the main goal.
   
-  \<^descr> @{text "@{prf a\<^sub>1 \<dots> a\<^sub>n}"} prints the (compact) proof terms
-  corresponding to the theorems @{text "a\<^sub>1 \<dots> a\<^sub>n"}. Note that this
+  \<^descr> \<open>@{prf a\<^sub>1 \<dots> a\<^sub>n}\<close> prints the (compact) proof terms
+  corresponding to the theorems \<open>a\<^sub>1 \<dots> a\<^sub>n\<close>. Note that this
   requires proof terms to be switched on for the current logic
   session.
   
-  \<^descr> @{text "@{full_prf a\<^sub>1 \<dots> a\<^sub>n}"} is like @{text "@{prf a\<^sub>1 \<dots>
-  a\<^sub>n}"}, but prints the full proof terms, i.e.\ also displays
+  \<^descr> \<open>@{full_prf a\<^sub>1 \<dots> a\<^sub>n}\<close> is like \<open>@{prf a\<^sub>1 \<dots>
+  a\<^sub>n}\<close>, but prints the full proof terms, i.e.\ also displays
   information omitted in the compact proof term, which is denoted by
-  ``@{text _}'' placeholders there.
+  ``\<open>_\<close>'' placeholders there.
   
-  \<^descr> @{text "@{ML s}"}, @{text "@{ML_op s}"}, @{text "@{ML_type
-  s}"}, @{text "@{ML_structure s}"}, and @{text "@{ML_functor s}"}
-  check text @{text s} as ML value, infix operator, type, structure,
+  \<^descr> \<open>@{ML s}\<close>, \<open>@{ML_op s}\<close>, \<open>@{ML_type
+  s}\<close>, \<open>@{ML_structure s}\<close>, and \<open>@{ML_functor s}\<close>
+  check text \<open>s\<close> as ML value, infix operator, type, structure,
   and functor respectively.  The source is printed verbatim.
 
-  \<^descr> @{text "@{emph s}"} prints document source recursively, with {\LaTeX}
-  markup @{verbatim \<open>\emph{\<close>}@{text "\<dots>"}@{verbatim \<open>}\<close>}.
+  \<^descr> \<open>@{emph s}\<close> prints document source recursively, with {\LaTeX}
+  markup @{verbatim \<open>\emph{\<close>}\<open>\<dots>\<close>@{verbatim \<open>}\<close>}.
 
-  \<^descr> @{text "@{bold s}"} prints document source recursively, with {\LaTeX}
-  markup @{verbatim \<open>\textbf{\<close>}@{text "\<dots>"}@{verbatim \<open>}\<close>}.
+  \<^descr> \<open>@{bold s}\<close> prints document source recursively, with {\LaTeX}
+  markup @{verbatim \<open>\textbf{\<close>}\<open>\<dots>\<close>@{verbatim \<open>}\<close>}.
 
-  \<^descr> @{text "@{verbatim s}"} prints uninterpreted source text literally
+  \<^descr> \<open>@{verbatim s}\<close> prints uninterpreted source text literally
   as ASCII characters, using some type-writer font style.
 
-  \<^descr> @{text "@{file path}"} checks that @{text "path"} refers to a
+  \<^descr> \<open>@{file path}\<close> checks that \<open>path\<close> refers to a
   file (or directory) and prints it verbatim.
 
-  \<^descr> @{text "@{file_unchecked path}"} is like @{text "@{file
-  path}"}, but does not check the existence of the @{text "path"}
+  \<^descr> \<open>@{file_unchecked path}\<close> is like \<open>@{file
+  path}\<close>, but does not check the existence of the \<open>path\<close>
   within the file-system.
 
-  \<^descr> @{text "@{url name}"} produces markup for the given URL, which
+  \<^descr> \<open>@{url name}\<close> produces markup for the given URL, which
   results in an active hyperlink within the text.
 
-  \<^descr> @{text "@{cite name}"} produces a citation @{verbatim
+  \<^descr> \<open>@{cite name}\<close> produces a citation @{verbatim
   \<open>\cite{name}\<close>} in {\LaTeX}, where the name refers to some Bib{\TeX}
   database entry.
 
-  The variant @{text "@{cite \<open>opt\<close> name}"} produces @{verbatim
+  The variant \<open>@{cite \<open>opt\<close> name}\<close> produces @{verbatim
   \<open>\cite[opt]{name}\<close>} with some free-form optional argument. Multiple names
-  are output with commas, e.g. @{text "@{cite foo \<AND> bar}"} becomes
+  are output with commas, e.g. \<open>@{cite foo \<AND> bar}\<close> becomes
   @{verbatim \<open>\cite{foo,bar}\<close>}.
 
   The {\LaTeX} macro name is determined by the antiquotation option
   @{antiquotation_option_def cite_macro}, or the configuration option
-  @{attribute cite_macro} in the context. For example, @{text "@{cite
-  [cite_macro = nocite] foobar}"} produces @{verbatim \<open>\nocite{foobar}\<close>}.
+  @{attribute cite_macro} in the context. For example, \<open>@{cite
+  [cite_macro = nocite] foobar}\<close> produces @{verbatim \<open>\nocite{foobar}\<close>}.
 
   \<^descr> @{command "print_antiquotations"} prints all document antiquotations
-  that are defined in the current context; the ``@{text "!"}'' option
+  that are defined in the current context; the ``\<open>!\<close>'' option
   indicates extra verbosity.
 \<close>
 
 
 subsection \<open>Styled antiquotations\<close>
 
-text \<open>The antiquotations @{text thm}, @{text prop} and @{text
-  term} admit an extra \<^emph>\<open>style\<close> specification to modify the
+text \<open>The antiquotations \<open>thm\<close>, \<open>prop\<close> and \<open>term\<close> admit an extra \<^emph>\<open>style\<close> specification to modify the
   printed result.  A style is specified by a name with a possibly
   empty list of arguments;  multiple styles can be sequenced with
   commas.  The following standard styles are available:
 
-  \<^descr> @{text lhs} extracts the first argument of any application
+  \<^descr> \<open>lhs\<close> extracts the first argument of any application
   form with at least two arguments --- typically meta-level or
   object-level equality, or any other binary relation.
   
-  \<^descr> @{text rhs} is like @{text lhs}, but extracts the second
+  \<^descr> \<open>rhs\<close> is like \<open>lhs\<close>, but extracts the second
   argument.
   
-  \<^descr> @{text "concl"} extracts the conclusion @{text C} from a rule
-  in Horn-clause normal form @{text "A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> C"}.
+  \<^descr> \<open>concl\<close> extracts the conclusion \<open>C\<close> from a rule
+  in Horn-clause normal form \<open>A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> C\<close>.
   
-  \<^descr> @{text "prem"} @{text n} extract premise number
-  @{text "n"} from from a rule in Horn-clause
-  normal form @{text "A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> C"}
+  \<^descr> \<open>prem\<close> \<open>n\<close> extracts premise number
+  \<open>n\<close> from a rule in Horn-clause
+  normal form \<open>A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> C\<close>.
 \<close>
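+
+text \<open>
+  For example (a sketch, assuming HOL's \<open>conjI\<close>), \<open>@{thm (concl) conjI}\<close>
+  prints just the conclusion \<open>?P \<and> ?Q\<close>, while \<open>@{thm (prem 2) conjI}\<close>
+  prints the second premise \<open>?Q\<close>.
+\<close>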
 
 
@@ -352,34 +347,34 @@
   of antiquotations.  Note that many of these coincide with system and
   configuration options of the same names.
 
-  \<^descr> @{antiquotation_option_def show_types}~@{text "= bool"} and
-  @{antiquotation_option_def show_sorts}~@{text "= bool"} control
+  \<^descr> @{antiquotation_option_def show_types}~\<open>= bool\<close> and
+  @{antiquotation_option_def show_sorts}~\<open>= bool\<close> control
   printing of explicit type and sort constraints.
 
-  \<^descr> @{antiquotation_option_def show_structs}~@{text "= bool"}
+  \<^descr> @{antiquotation_option_def show_structs}~\<open>= bool\<close>
   controls printing of implicit structures.
 
-  \<^descr> @{antiquotation_option_def show_abbrevs}~@{text "= bool"}
+  \<^descr> @{antiquotation_option_def show_abbrevs}~\<open>= bool\<close>
   controls folding of abbreviations.
 
-  \<^descr> @{antiquotation_option_def names_long}~@{text "= bool"} forces
+  \<^descr> @{antiquotation_option_def names_long}~\<open>= bool\<close> forces
   names of types and constants etc.\ to be printed in their fully
   qualified internal form.
 
-  \<^descr> @{antiquotation_option_def names_short}~@{text "= bool"}
+  \<^descr> @{antiquotation_option_def names_short}~\<open>= bool\<close>
   forces names of types and constants etc.\ to be printed unqualified.
   Note that internalizing the output again in the current context may
   well yield a different result.
 
-  \<^descr> @{antiquotation_option_def names_unique}~@{text "= bool"}
+  \<^descr> @{antiquotation_option_def names_unique}~\<open>= bool\<close>
   determines whether the printed version of qualified names should be
   made sufficiently long to avoid overlap with names declared further
-  back.  Set to @{text false} for more concise output.
+  back.  Set to \<open>false\<close> for more concise output.
 
-  \<^descr> @{antiquotation_option_def eta_contract}~@{text "= bool"}
-  prints terms in @{text \<eta>}-contracted form.
+  \<^descr> @{antiquotation_option_def eta_contract}~\<open>= bool\<close>
+  prints terms in \<open>\<eta>\<close>-contracted form.
 
-  \<^descr> @{antiquotation_option_def display}~@{text "= bool"} indicates
+  \<^descr> @{antiquotation_option_def display}~\<open>= bool\<close> indicates
   if the text is to be output as multi-line ``display material'',
   rather than a small piece of text without line breaks (which is the
   default).
@@ -387,47 +382,45 @@
   In this mode the embedded entities are printed in the same style as
   the main theory text.
 
-  \<^descr> @{antiquotation_option_def break}~@{text "= bool"} controls
+  \<^descr> @{antiquotation_option_def break}~\<open>= bool\<close> controls
   line breaks in non-display material.
 
-  \<^descr> @{antiquotation_option_def quotes}~@{text "= bool"} indicates
+  \<^descr> @{antiquotation_option_def quotes}~\<open>= bool\<close> indicates
   if the output should be enclosed in double quotes.
 
-  \<^descr> @{antiquotation_option_def mode}~@{text "= name"} adds @{text
-  name} to the print mode to be used for presentation.  Note that the
+  \<^descr> @{antiquotation_option_def mode}~\<open>= name\<close> adds \<open>name\<close> to the print mode to be used for presentation.  Note that the
   standard setup for {\LaTeX} output is already present by default,
-  including the modes @{text latex} and @{text xsymbols}.
+  including the modes \<open>latex\<close> and \<open>xsymbols\<close>.
 
-  \<^descr> @{antiquotation_option_def margin}~@{text "= nat"} and
-  @{antiquotation_option_def indent}~@{text "= nat"} change the margin
+  \<^descr> @{antiquotation_option_def margin}~\<open>= nat\<close> and
+  @{antiquotation_option_def indent}~\<open>= nat\<close> change the margin
   or indentation for pretty printing of display material.
 
-  \<^descr> @{antiquotation_option_def goals_limit}~@{text "= nat"}
+  \<^descr> @{antiquotation_option_def goals_limit}~\<open>= nat\<close>
   determines the maximum number of subgoals to be printed (for goal-based
   antiquotation).
 
-  \<^descr> @{antiquotation_option_def source}~@{text "= bool"} prints the
+  \<^descr> @{antiquotation_option_def source}~\<open>= bool\<close> prints the
   original source text of the antiquotation arguments, rather than their
   internal representation.  Note that formal checking of
   @{antiquotation "thm"}, @{antiquotation "term"}, etc. is still
   enabled; use the @{antiquotation "text"} antiquotation for unchecked
   output.
 
-  Regular @{text "term"} and @{text "typ"} antiquotations with @{text
-  "source = false"} involve a full round-trip from the original source
+  Regular \<open>term\<close> and \<open>typ\<close> antiquotations with \<open>source = false\<close> involve a full round-trip from the original source
   to an internalized logical entity and back to a source form, according
   to the syntax of the current context.  Thus the printed output is
   not under direct control of the author; it may even fluctuate a bit
   as the underlying theory is changed later on.
 
-  In contrast, @{antiquotation_option source}~@{text "= true"}
+  In contrast, @{antiquotation_option source}~\<open>= true\<close>
   admits direct printing of the given source text, with the desirable
   well-formedness check in the background, but without modification of
   the printed text.
 
 
-  For Boolean flags, ``@{text "name = true"}'' may be abbreviated as
-  ``@{text name}''.  All of the above flags are disabled by default,
+  For Boolean flags, ``\<open>name = true\<close>'' may be abbreviated as
+  ``\<open>name\<close>''.  All of the above flags are disabled by default,
   unless changed specifically for a logic session in the corresponding
   @{verbatim "ROOT"} file.
 \<close>
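+
+text \<open>
+  For example (a sketch), \<open>@{term [show_types] "map f xs"}\<close> prints the
+  term with explicit type constraints, while \<open>@{thm [display] conjI}\<close>
+  typesets the theorem as multi-line display material.
+\<close>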
@@ -450,9 +443,9 @@
 
   \<^medskip>
   \begin{tabular}{ll}
-    @{text "theory"} & theory begin/end \\
-    @{text "proof"} & all proof commands \\
-    @{text "ML"} & all commands involving ML code \\
+    \<open>theory\<close> & theory begin/end \\
+    \<open>proof\<close> & all proof commands \\
+    \<open>ML\<close> & all commands involving ML code \\
   \end{tabular}
   \<^medskip>
 
@@ -461,17 +454,17 @@
   specifically, e.g.\ to fold proof texts, or drop parts of the text
   completely.
 
-  For example ``@{command "by"}~@{text "%invisible auto"}'' causes
-  that piece of proof to be treated as @{text invisible} instead of
-  @{text "proof"} (the default), which may be shown or hidden
+  For example ``@{command "by"}~\<open>%invisible auto\<close>'' causes
+  that piece of proof to be treated as \<open>invisible\<close> instead of
+  \<open>proof\<close> (the default), which may be shown or hidden
   depending on the document setup.  In contrast, ``@{command
-  "by"}~@{text "%visible auto"}'' forces this text to be shown
+  "by"}~\<open>%visible auto\<close>'' forces this text to be shown
   invariably.
 
   Explicit tag specifications within a proof apply to all subsequent
   commands of the same level of nesting.  For example, ``@{command
-  "proof"}~@{text "%visible \<dots>"}~@{command "qed"}'' forces the whole
-  sub-proof to be typeset as @{text visible} (unless some of its parts
+  "proof"}~\<open>%visible \<dots>\<close>~@{command "qed"}'' forces the whole
+  sub-proof to be typeset as \<open>visible\<close> (unless some of its parts
   are tagged differently).
 
   \<^medskip>
@@ -491,7 +484,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{antiquotation_def "rail"} & : & @{text antiquotation} \\
+    @{antiquotation_def "rail"} & : & \<open>antiquotation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -525,12 +518,12 @@
   \<close>}
   \endgroup
 
-  The lexical syntax of @{text "identifier"} coincides with that of
-  @{syntax ident} in regular Isabelle syntax, but @{text string} uses
+  The lexical syntax of \<open>identifier\<close> coincides with that of
+  @{syntax ident} in regular Isabelle syntax, but \<open>string\<close> uses
   single quotes instead of double quotes of the standard @{syntax
   string} category.
 
-  Each @{text rule} defines a formal language (with optional name),
+  Each \<open>rule\<close> defines a formal language (with optional name),
   using a notation that is similar to EBNF or regular expressions with
   recursion.  The meaning and visual appearance of these rail language
   elements is illustrated by the following representative examples.
@@ -600,14 +593,14 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "display_drafts"}@{text "\<^sup>*"} & : & @{text "any \<rightarrow>"} \\
+    @{command_def "display_drafts"}\<open>\<^sup>*\<close> & : & \<open>any \<rightarrow>\<close> \\
   \end{matharray}
 
   @{rail \<open>
     @@{command display_drafts} (@{syntax name} +)
   \<close>}
 
-  \<^descr> @{command "display_drafts"}~@{text paths} performs simple output of a
+  \<^descr> @{command "display_drafts"}~\<open>paths\<close> performs simple output of a
   given list of raw source files. Only those symbols that do not require
   additional {\LaTeX} packages are displayed properly, everything else is left
   verbatim.
--- a/src/Doc/Isar_Ref/First_Order_Logic.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/First_Order_Logic.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -7,8 +7,8 @@
 
 text \<open>
   In order to commence a new object-logic within
-  Isabelle/Pure we introduce abstract syntactic categories @{text "i"}
-  for individuals and @{text "o"} for object-propositions.  The latter
+  Isabelle/Pure we introduce abstract syntactic categories \<open>i\<close>
+  for individuals and \<open>o\<close> for object-propositions.  The latter
   is embedded into the language of Pure propositions by means of a
   separate judgment.
 \<close>
@@ -126,8 +126,7 @@
 text \<open>
   Reasoning from basic axioms is often tedious.  Our proofs
   work by producing various instances of the given rules (potentially
-  the symmetric form) using the pattern ``@{command have}~@{text
-  eq}~@{command "by"}~@{text "(rule r)"}'' and composing the chain of
+  the symmetric form) using the pattern ``@{command have}~\<open>eq\<close>~@{command "by"}~\<open>(rule r)\<close>'' and composing the chain of
   results via @{command also}/@{command finally}.  These steps may
   involve any of the transitivity rules declared in
   \secref{sec:framework-ex-equal}, namely @{thm trans} in combining
@@ -162,8 +161,7 @@
   realistic object-logic would include proper setup for the Simplifier
   (\secref{sec:simplifier}), the main automated tool for equational
   reasoning in Isabelle.  Then ``@{command unfolding}~@{thm
-  left_inv}~@{command ".."}'' would become ``@{command "by"}~@{text
-  "(simp only: left_inv)"}'' etc.
+  left_inv}~@{command ".."}'' would become ``@{command "by"}~\<open>(simp only: left_inv)\<close>'' etc.
 \<close>
 
 end
@@ -228,7 +226,7 @@
 
 text \<open>
   Note that the analogous elimination rule for disjunction
-  ``@{text "\<ASSUMES> A \<or> B \<OBTAINS> A \<BBAR> B"}'' coincides with
+  ``\<open>\<ASSUMES> A \<or> B \<OBTAINS> A \<BBAR> B\<close>'' coincides with
   the original axiomatization of @{thm disjE}.
 
   \<^medskip>
@@ -320,9 +318,9 @@
   unchanged.  Note that @{prop "(\<not> C \<Longrightarrow> C) \<Longrightarrow> C"} fits again into our
   format for eliminations, despite the additional twist that the
   context refers to the main conclusion.  So we may write @{thm
-  classical} as the Isar statement ``@{text "\<OBTAINS> \<not> thesis"}''.
+  classical} as the Isar statement ``\<open>\<OBTAINS> \<not> thesis\<close>''.
   This also explains nicely how classical reasoning really works:
-  whatever the main @{text thesis} might be, we may always assume its
+  whatever the main \<open>thesis\<close> might be, we may always assume its
   negation!
 \<close>
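
The same reading can be rehearsed in Isabelle/HOL, where @{thm classical} is
available as a rule; a minimal sketch (assuming \<open>Main\<close>):

  lemma
    assumes nn: "\<not> \<not> A"
    shows A
  proof (rule classical)
    assume "\<not> A"
    with nn show A by contradiction
  qed

Here the proof literally assumes the negation of the main \<open>thesis\<close> and
derives a contradiction.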
 
@@ -335,9 +333,9 @@
   Representing quantifiers is easy, thanks to the higher-order nature
   of the underlying framework.  According to the well-known technique
   introduced by Church @{cite "church40"}, quantifiers are operators on
-  predicates, which are syntactically represented as @{text "\<lambda>"}-terms
-  of type @{typ "i \<Rightarrow> o"}.  Binder notation turns @{text "All (\<lambda>x. B
-  x)"} into @{text "\<forall>x. B x"} etc.
+  predicates, which are syntactically represented as \<open>\<lambda>\<close>-terms
+  of type @{typ "i \<Rightarrow> o"}.  Binder notation turns \<open>All (\<lambda>x. B
+  x)\<close> into \<open>\<forall>x. B x\<close> etc.
 \<close>
 
 axiomatization
@@ -351,8 +349,7 @@
   exE [elim]: "(\<exists>x. B x) \<Longrightarrow> (\<And>x. B x \<Longrightarrow> C) \<Longrightarrow> C"
 
 text \<open>
-  The statement of @{thm exE} corresponds to ``@{text
-  "\<ASSUMES> \<exists>x. B x \<OBTAINS> x \<WHERE> B x"}'' in Isar.  In the
+  The statement of @{thm exE} corresponds to ``\<open>\<ASSUMES> \<exists>x. B x \<OBTAINS> x \<WHERE> B x\<close>'' in Isar.  In the
   subsequent example we illustrate quantifier reasoning involving all
   four rules:
 \<close>
@@ -360,10 +357,10 @@
 theorem
   assumes "\<exists>x. \<forall>y. R x y"
   shows "\<forall>y. \<exists>x. R x y"
-proof    -- \<open>@{text "\<forall>"} introduction\<close>
-  obtain x where "\<forall>y. R x y" using \<open>\<exists>x. \<forall>y. R x y\<close> ..    -- \<open>@{text "\<exists>"} elimination\<close>
-  fix y have "R x y" using \<open>\<forall>y. R x y\<close> ..    -- \<open>@{text "\<forall>"} destruction\<close>
-  then show "\<exists>x. R x y" ..    -- \<open>@{text "\<exists>"} introduction\<close>
+proof    -- \<open>\<open>\<forall>\<close> introduction\<close>
+  obtain x where "\<forall>y. R x y" using \<open>\<exists>x. \<forall>y. R x y\<close> ..    -- \<open>\<open>\<exists>\<close> elimination\<close>
+  fix y have "R x y" using \<open>\<forall>y. R x y\<close> ..    -- \<open>\<open>\<forall>\<close> destruction\<close>
+  then show "\<exists>x. R x y" ..    -- \<open>\<open>\<exists>\<close> introduction\<close>
 qed
 
 
@@ -377,27 +374,27 @@
 
   \<^medskip>
   \begin{tabular}{l}
-  @{text "impI: \<ASSUMES> A \<Longrightarrow> B \<SHOWS> A \<longrightarrow> B"} \\
-  @{text "impD: \<ASSUMES> A \<longrightarrow> B \<AND> A \<SHOWS> B"} \\[1ex]
+  \<open>impI: \<ASSUMES> A \<Longrightarrow> B \<SHOWS> A \<longrightarrow> B\<close> \\
+  \<open>impD: \<ASSUMES> A \<longrightarrow> B \<AND> A \<SHOWS> B\<close> \\[1ex]
 
-  @{text "disjI\<^sub>1: \<ASSUMES> A \<SHOWS> A \<or> B"} \\
-  @{text "disjI\<^sub>2: \<ASSUMES> B \<SHOWS> A \<or> B"} \\
-  @{text "disjE: \<ASSUMES> A \<or> B \<OBTAINS> A \<BBAR> B"} \\[1ex]
+  \<open>disjI\<^sub>1: \<ASSUMES> A \<SHOWS> A \<or> B\<close> \\
+  \<open>disjI\<^sub>2: \<ASSUMES> B \<SHOWS> A \<or> B\<close> \\
+  \<open>disjE: \<ASSUMES> A \<or> B \<OBTAINS> A \<BBAR> B\<close> \\[1ex]
 
-  @{text "conjI: \<ASSUMES> A \<AND> B \<SHOWS> A \<and> B"} \\
-  @{text "conjE: \<ASSUMES> A \<and> B \<OBTAINS> A \<AND> B"} \\[1ex]
+  \<open>conjI: \<ASSUMES> A \<AND> B \<SHOWS> A \<and> B\<close> \\
+  \<open>conjE: \<ASSUMES> A \<and> B \<OBTAINS> A \<AND> B\<close> \\[1ex]
 
-  @{text "falseE: \<ASSUMES> \<bottom> \<SHOWS> A"} \\
-  @{text "trueI: \<SHOWS> \<top>"} \\[1ex]
+  \<open>falseE: \<ASSUMES> \<bottom> \<SHOWS> A\<close> \\
+  \<open>trueI: \<SHOWS> \<top>\<close> \\[1ex]
 
-  @{text "notI: \<ASSUMES> A \<Longrightarrow> \<bottom> \<SHOWS> \<not> A"} \\
-  @{text "notE: \<ASSUMES> \<not> A \<AND> A \<SHOWS> B"} \\[1ex]
+  \<open>notI: \<ASSUMES> A \<Longrightarrow> \<bottom> \<SHOWS> \<not> A\<close> \\
+  \<open>notE: \<ASSUMES> \<not> A \<AND> A \<SHOWS> B\<close> \\[1ex]
 
-  @{text "allI: \<ASSUMES> \<And>x. B x \<SHOWS> \<forall>x. B x"} \\
-  @{text "allE: \<ASSUMES> \<forall>x. B x \<SHOWS> B a"} \\[1ex]
+  \<open>allI: \<ASSUMES> \<And>x. B x \<SHOWS> \<forall>x. B x\<close> \\
+  \<open>allE: \<ASSUMES> \<forall>x. B x \<SHOWS> B a\<close> \\[1ex]
 
-  @{text "exI: \<ASSUMES> B a \<SHOWS> \<exists>x. B x"} \\
-  @{text "exE: \<ASSUMES> \<exists>x. B x \<OBTAINS> a \<WHERE> B a"}
+  \<open>exI: \<ASSUMES> B a \<SHOWS> \<exists>x. B x\<close> \\
+  \<open>exE: \<ASSUMES> \<exists>x. B x \<OBTAINS> a \<WHERE> B a\<close>
   \end{tabular}
   \<^medskip>
 
--- a/src/Doc/Isar_Ref/Framework.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Framework.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -27,7 +27,7 @@
   So Isar challenges the traditional way of recording informal proofs
   in mathematical prose, as well as the common tendency to see fully
   formal proofs directly as objects of some logical calculus (e.g.\
-  @{text "\<lambda>"}-terms in a version of type theory).  In fact, Isar is
+  \<open>\<lambda>\<close>-terms in a version of type theory).  In fact, Isar is
   better understood as an interpreter of a simple block-structured
   language for describing the data flow of local facts and goals,
   interspersed with occasional invocations of proof methods.
@@ -58,11 +58,10 @@
   includes common notions of predicate logic, naive set-theory etc.\
   using fairly standard mathematical notation.  From the perspective
   of generic natural deduction there is nothing special about the
-  logical connectives of HOL (@{text "\<and>"}, @{text "\<or>"}, @{text "\<forall>"},
-  @{text "\<exists>"}, etc.), only the resulting reasoning principles are
+  logical connectives of HOL (\<open>\<and>\<close>, \<open>\<or>\<close>, \<open>\<forall>\<close>,
+  \<open>\<exists>\<close>, etc.), only the resulting reasoning principles are
   relevant to the user.  There are similar rules available for
-  set-theory operators (@{text "\<inter>"}, @{text "\<union>"}, @{text "\<Inter>"}, @{text
-  "\<Union>"}, etc.), or any other theory developed in the library (lattice
+  set-theory operators (\<open>\<inter>\<close>, \<open>\<union>\<close>, \<open>\<Inter>\<close>, \<open>\<Union>\<close>, etc.), or any other theory developed in the library (lattice
   theory, topology etc.).
 
   Subsequently we briefly review fragments of Isar proof texts
@@ -71,7 +70,7 @@
   understanding connectives of predicate logic as something special.
 
   \<^medskip>
-  The following deduction performs @{text "\<inter>"}-introduction,
+  The following deduction performs \<open>\<inter>\<close>-introduction,
   working forwards from assumptions towards the conclusion.  We give
   both the Isar text, and depict the primitive rule involved, as
   determined by unification of the problem against rules that are
@@ -120,7 +119,7 @@
 (*>*)
 
 text \<open>
-  The format of the @{text "\<inter>"}-introduction rule represents
+  The format of the \<open>\<inter>\<close>-introduction rule represents
   the most basic inference, which proceeds from given premises to a
   conclusion, without any nested proof context involved.
 
@@ -149,7 +148,7 @@
 text_raw \<open>\end{minipage}\begin{minipage}{0.4\textwidth}\<close>
 
 text \<open>
-  \infer{@{prop "x \<in> \<Inter>\<A>"}}{\infer*{@{prop "x \<in> A"}}{@{text "[A][A \<in> \<A>]"}}}
+  \infer{@{prop "x \<in> \<Inter>\<A>"}}{\infer*{@{prop "x \<in> A"}}{\<open>[A][A \<in> \<A>]\<close>}}
 \<close>
 
 text_raw \<open>\end{minipage}\<close>
@@ -159,7 +158,7 @@
   This Isar reasoning pattern again refers to the
   primitive rule depicted above.  The system determines it in the
   ``@{command proof}'' step, which could have been spelled out more
-  explicitly as ``@{command proof}~@{text "(rule InterI)"}''.  Note
+  explicitly as ``@{command proof}~\<open>(rule InterI)\<close>''.  Note
   that the rule involves both a local parameter @{term "A"} and an
   assumption @{prop "A \<in> \<A>"} in the nested reasoning.  This kind of
   compound rule typically demands a genuine sub-proof in Isar, working
@@ -174,7 +173,7 @@
   The next example involves @{term "\<Union>\<A>"}, which can be
   characterized as the set of all @{term "x"} such that @{prop "\<exists>A. x
   \<in> A \<and> A \<in> \<A>"}.  The elimination rule for @{prop "x \<in> \<Union>\<A>"} does
-  not mention @{text "\<exists>"} and @{text "\<and>"} at all, but admits to obtain
+  not mention \<open>\<exists>\<close> and \<open>\<and>\<close> at all, but allows us to obtain
   directly a local @{term "A"} such that @{prop "x \<in> A"} and @{prop "A
   \<in> \<A>"} hold.  This corresponds to the following Isar proof and
   inference rule, respectively:
@@ -200,7 +199,7 @@
 text_raw \<open>\end{minipage}\begin{minipage}{0.4\textwidth}\<close>
 
 text \<open>
-  \infer{@{prop "C"}}{@{prop "x \<in> \<Union>\<A>"} & \infer*{@{prop "C"}~}{@{text "[A][x \<in> A, A \<in> \<A>]"}}}
+  \infer{@{prop "C"}}{@{prop "x \<in> \<Union>\<A>"} & \infer*{@{prop "C"}~}{\<open>[A][x \<in> A, A \<in> \<A>]\<close>}}
 \<close>
 
 text_raw \<open>\end{minipage}\<close>
@@ -239,20 +238,19 @@
 text \<open>
   The Pure logic @{cite "paulson-found" and "paulson700"} is an intuitionistic
   fragment of higher-order logic @{cite "church40"}.  In type-theoretic
-  parlance, there are three levels of @{text "\<lambda>"}-calculus with
-  corresponding arrows @{text "\<Rightarrow>"}/@{text "\<And>"}/@{text "\<Longrightarrow>"}:
+  parlance, there are three levels of \<open>\<lambda>\<close>-calculus with
+  corresponding arrows \<open>\<Rightarrow>\<close>/\<open>\<And>\<close>/\<open>\<Longrightarrow>\<close>:
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "\<alpha> \<Rightarrow> \<beta>"} & syntactic function space (terms depending on terms) \\
-  @{text "\<And>x. B(x)"} & universal quantification (proofs depending on terms) \\
-  @{text "A \<Longrightarrow> B"} & implication (proofs depending on proofs) \\
+  \<open>\<alpha> \<Rightarrow> \<beta>\<close> & syntactic function space (terms depending on terms) \\
+  \<open>\<And>x. B(x)\<close> & universal quantification (proofs depending on terms) \\
+  \<open>A \<Longrightarrow> B\<close> & implication (proofs depending on proofs) \\
   \end{tabular}
   \<^medskip>
 
   Here only the types of syntactic terms, and the
-  propositions of proof terms have been shown.  The @{text
-  "\<lambda>"}-structure of proofs can be recorded as an optional feature of
+  propositions of proof terms have been shown.  The \<open>\<lambda>\<close>-structure of proofs can be recorded as an optional feature of
   the Pure inference kernel @{cite "Berghofer-Nipkow:2000:TPHOL"}, but
   the formal system can never depend on them due to \<^emph>\<open>proof
   irrelevance\<close>.
@@ -260,7 +258,7 @@
   On top of this most primitive layer of proofs, Pure implements a
   generic calculus for nested natural deduction rules, similar to
   @{cite "Schroeder-Heister:1984"}.  Here object-logic inferences are
-  internalized as formulae over @{text "\<And>"} and @{text "\<Longrightarrow>"}.
+  internalized as formulae over \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close>.
   Combining such rule statements may involve higher-order unification
   @{cite "paulson-natural"}.
 \<close>
@@ -269,49 +267,49 @@
 subsection \<open>Primitive inferences\<close>
 
 text \<open>
-  Term syntax provides explicit notation for abstraction @{text "\<lambda>x ::
-  \<alpha>. b(x)"} and application @{text "b a"}, while types are usually
-  implicit thanks to type-inference; terms of type @{text "prop"} are
-  called propositions.  Logical statements are composed via @{text "\<And>x
-  :: \<alpha>. B(x)"} and @{text "A \<Longrightarrow> B"}.  Primitive reasoning operates on
-  judgments of the form @{text "\<Gamma> \<turnstile> \<phi>"}, with standard introduction
-  and elimination rules for @{text "\<And>"} and @{text "\<Longrightarrow>"} that refer to
-  fixed parameters @{text "x\<^sub>1, \<dots>, x\<^sub>m"} and hypotheses
-  @{text "A\<^sub>1, \<dots>, A\<^sub>n"} from the context @{text "\<Gamma>"};
+  Term syntax provides explicit notation for abstraction \<open>\<lambda>x ::
+  \<alpha>. b(x)\<close> and application \<open>b a\<close>, while types are usually
+  implicit thanks to type-inference; terms of type \<open>prop\<close> are
+  called propositions.  Logical statements are composed via \<open>\<And>x
+  :: \<alpha>. B(x)\<close> and \<open>A \<Longrightarrow> B\<close>.  Primitive reasoning operates on
+  judgments of the form \<open>\<Gamma> \<turnstile> \<phi>\<close>, with standard introduction
+  and elimination rules for \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> that refer to
+  fixed parameters \<open>x\<^sub>1, \<dots>, x\<^sub>m\<close> and hypotheses
+  \<open>A\<^sub>1, \<dots>, A\<^sub>n\<close> from the context \<open>\<Gamma>\<close>;
   the corresponding proof terms are left implicit.  The subsequent
-  inference rules define @{text "\<Gamma> \<turnstile> \<phi>"} inductively, relative to a
+  inference rules define \<open>\<Gamma> \<turnstile> \<phi>\<close> inductively, relative to a
   collection of axioms:
 
   \[
-  \infer{@{text "\<turnstile> A"}}{(@{text "A"} \text{~axiom})}
+  \infer{\<open>\<turnstile> A\<close>}{(\<open>A\<close> \text{~axiom})}
   \qquad
-  \infer{@{text "A \<turnstile> A"}}{}
+  \infer{\<open>A \<turnstile> A\<close>}{}
   \]
 
   \[
-  \infer{@{text "\<Gamma> \<turnstile> \<And>x. B(x)"}}{@{text "\<Gamma> \<turnstile> B(x)"} & @{text "x \<notin> \<Gamma>"}}
+  \infer{\<open>\<Gamma> \<turnstile> \<And>x. B(x)\<close>}{\<open>\<Gamma> \<turnstile> B(x)\<close> & \<open>x \<notin> \<Gamma>\<close>}
   \qquad
-  \infer{@{text "\<Gamma> \<turnstile> B(a)"}}{@{text "\<Gamma> \<turnstile> \<And>x. B(x)"}}
+  \infer{\<open>\<Gamma> \<turnstile> B(a)\<close>}{\<open>\<Gamma> \<turnstile> \<And>x. B(x)\<close>}
   \]
 
   \[
-  \infer{@{text "\<Gamma> - A \<turnstile> A \<Longrightarrow> B"}}{@{text "\<Gamma> \<turnstile> B"}}
+  \infer{\<open>\<Gamma> - A \<turnstile> A \<Longrightarrow> B\<close>}{\<open>\<Gamma> \<turnstile> B\<close>}
   \qquad
-  \infer{@{text "\<Gamma>\<^sub>1 \<union> \<Gamma>\<^sub>2 \<turnstile> B"}}{@{text "\<Gamma>\<^sub>1 \<turnstile> A \<Longrightarrow> B"} & @{text "\<Gamma>\<^sub>2 \<turnstile> A"}}
+  \infer{\<open>\<Gamma>\<^sub>1 \<union> \<Gamma>\<^sub>2 \<turnstile> B\<close>}{\<open>\<Gamma>\<^sub>1 \<turnstile> A \<Longrightarrow> B\<close> & \<open>\<Gamma>\<^sub>2 \<turnstile> A\<close>}
   \]
 
-  Furthermore, Pure provides a built-in equality @{text "\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow>
-  prop"} with axioms for reflexivity, substitution, extensionality,
-  and @{text "\<alpha>\<beta>\<eta>"}-conversion on @{text "\<lambda>"}-terms.
+  Furthermore, Pure provides a built-in equality \<open>\<equiv> :: \<alpha> \<Rightarrow> \<alpha> \<Rightarrow>
+  prop\<close> with axioms for reflexivity, substitution, extensionality,
+  and \<open>\<alpha>\<beta>\<eta>\<close>-conversion on \<open>\<lambda>\<close>-terms.
 
   \<^medskip>
   An object-logic introduces another layer on top of Pure,
-  e.g.\ with types @{text "i"} for individuals and @{text "o"} for
-  propositions, term constants @{text "Trueprop :: o \<Rightarrow> prop"} as
-  (implicit) derivability judgment and connectives like @{text "\<and> :: o
-  \<Rightarrow> o \<Rightarrow> o"} or @{text "\<forall> :: (i \<Rightarrow> o) \<Rightarrow> o"}, and axioms for object-level
-  rules such as @{text "conjI: A \<Longrightarrow> B \<Longrightarrow> A \<and> B"} or @{text "allI: (\<And>x. B
-  x) \<Longrightarrow> \<forall>x. B x"}.  Derived object rules are represented as theorems of
+  e.g.\ with types \<open>i\<close> for individuals and \<open>o\<close> for
+  propositions, term constants \<open>Trueprop :: o \<Rightarrow> prop\<close> as
+  (implicit) derivability judgment and connectives like \<open>\<and> :: o
+  \<Rightarrow> o \<Rightarrow> o\<close> or \<open>\<forall> :: (i \<Rightarrow> o) \<Rightarrow> o\<close>, and axioms for object-level
+  rules such as \<open>conjI: A \<Longrightarrow> B \<Longrightarrow> A \<and> B\<close> or \<open>allI: (\<And>x. B
+  x) \<Longrightarrow> \<forall>x. B x\<close>.  Derived object rules are represented as theorems of
   Pure.  After the initial object-logic setup, further axiomatizations
   are usually avoided; plain definitions and derived principles are
   used exclusively.
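
A condensed sketch of such a setup as a bare Isabelle/Pure theory (the theory
name, notation, and priorities are illustrative, loosely following FOL):

  theory Mini_Logic
  imports Pure
  begin

  typedecl i  -- \<open>individuals\<close>
  typedecl o  -- \<open>object-propositions\<close>

  judgment
    Trueprop :: "o \<Rightarrow> prop"  ("_" 5)

  axiomatization
    conj :: "o \<Rightarrow> o \<Rightarrow> o"  (infixr "\<and>" 35)
    where conjI: "A \<Longrightarrow> B \<Longrightarrow> A \<and> B"

  end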
@@ -323,51 +321,50 @@
 text \<open>
   Primitive inferences mostly serve foundational purposes.  The main
   reasoning mechanisms of Pure operate on nested natural deduction
-  rules expressed as formulae, using @{text "\<And>"} to bind local
-  parameters and @{text "\<Longrightarrow>"} to express entailment.  Multiple
+  rules expressed as formulae, using \<open>\<And>\<close> to bind local
+  parameters and \<open>\<Longrightarrow>\<close> to express entailment.  Multiple
   parameters and premises are represented by repeating these
   connectives in a right-associative manner.
 
-  Since @{text "\<And>"} and @{text "\<Longrightarrow>"} commute thanks to the theorem
+  Since \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> commute thanks to the theorem
   @{prop "(A \<Longrightarrow> (\<And>x. B x)) \<equiv> (\<And>x. A \<Longrightarrow> B x)"}, we may assume w.l.o.g.\
   that rule statements always observe the normal form where
   quantifiers are pulled in front of implications at each level of
   nesting.  This means that any Pure proposition may be presented as a
   \<^emph>\<open>Hereditary Harrop Formula\<close> @{cite "Miller:1991"} which is of the
-  form @{text "\<And>x\<^sub>1 \<dots> x\<^sub>m. H\<^sub>1 \<Longrightarrow> \<dots> H\<^sub>n \<Longrightarrow>
-  A"} for @{text "m, n \<ge> 0"}, and @{text "A"} atomic, and @{text
-  "H\<^sub>1, \<dots>, H\<^sub>n"} being recursively of the same format.
+  form \<open>\<And>x\<^sub>1 \<dots> x\<^sub>m. H\<^sub>1 \<Longrightarrow> \<dots> H\<^sub>n \<Longrightarrow>
+  A\<close> for \<open>m, n \<ge> 0\<close>, and \<open>A\<close> atomic, and \<open>H\<^sub>1, \<dots>, H\<^sub>n\<close> being recursively of the same format.
   Following the convention that outermost quantifiers are implicit,
-  Horn clauses @{text "A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> A"} are a special
+  Horn clauses \<open>A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> A\<close> are a special
   case of this.
 
-  For example, @{text "\<inter>"}-introduction rule encountered before is
+  For example, the \<open>\<inter>\<close>-introduction rule encountered before is
   represented as a Pure theorem as follows:
   \[
-  @{text "IntI:"}~@{prop "x \<in> A \<Longrightarrow> x \<in> B \<Longrightarrow> x \<in> A \<inter> B"}
+  \<open>IntI:\<close>~@{prop "x \<in> A \<Longrightarrow> x \<in> B \<Longrightarrow> x \<in> A \<inter> B"}
   \]
 
   This is a plain Horn clause, since no further nesting on
-  the left is involved.  The general @{text "\<Inter>"}-introduction
+  the left is involved.  The general \<open>\<Inter>\<close>-introduction
   corresponds to a Hereditary Harrop Formula with one additional level
   of nesting:
   \[
-  @{text "InterI:"}~@{prop "(\<And>A. A \<in> \<A> \<Longrightarrow> x \<in> A) \<Longrightarrow> x \<in> \<Inter>\<A>"}
+  \<open>InterI:\<close>~@{prop "(\<And>A. A \<in> \<A> \<Longrightarrow> x \<in> A) \<Longrightarrow> x \<in> \<Inter>\<A>"}
   \]
 
   \<^medskip>
-  Goals are also represented as rules: @{text "A\<^sub>1 \<Longrightarrow>
-  \<dots> A\<^sub>n \<Longrightarrow> C"} states that the sub-goals @{text "A\<^sub>1, \<dots>,
-  A\<^sub>n"} entail the result @{text "C"}; for @{text "n = 0"} the
-  goal is finished.  To allow @{text "C"} being a rule statement
-  itself, we introduce the protective marker @{text "# :: prop \<Rightarrow>
-  prop"}, which is defined as identity and hidden from the user.  We
+  Goals are also represented as rules: \<open>A\<^sub>1 \<Longrightarrow>
+  \<dots> A\<^sub>n \<Longrightarrow> C\<close> states that the sub-goals \<open>A\<^sub>1, \<dots>,
+  A\<^sub>n\<close> entail the result \<open>C\<close>; for \<open>n = 0\<close> the
+  goal is finished.  To allow \<open>C\<close> to be a rule statement
+  itself, we introduce the protective marker \<open># :: prop \<Rightarrow>
+  prop\<close>, which is defined as identity and hidden from the user.  We
   initialize and finish goal states as follows:
 
   \[
   \begin{array}{c@ {\qquad}c}
-  \infer[(@{inference_def init})]{@{text "C \<Longrightarrow> #C"}}{} &
-  \infer[(@{inference_def finish})]{@{text C}}{@{text "#C"}}
+  \infer[(@{inference_def init})]{\<open>C \<Longrightarrow> #C\<close>}{} &
+  \infer[(@{inference_def finish})]{\<open>C\<close>}{\<open>#C\<close>}
   \end{array}
   \]
 
@@ -376,30 +373,29 @@
   are @{inference resolution}, for back-chaining a rule against a
   sub-goal (replacing it by zero or more sub-goals), and @{inference
   assumption}, for solving a sub-goal (finding a short-circuit with
-  local assumptions).  Below @{text "\<^vec>x"} stands for @{text
-  "x\<^sub>1, \<dots>, x\<^sub>n"} (@{text "n \<ge> 0"}).
+  local assumptions).  Below \<open>\<^vec>x\<close> stands for \<open>x\<^sub>1, \<dots>, x\<^sub>n\<close> (\<open>n \<ge> 0\<close>).
 
   \[
   \infer[(@{inference_def resolution})]
-  {@{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>A (\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>"}}
+  {\<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>A (\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>\<close>}
   {\begin{tabular}{rl}
-    @{text "rule:"} &
-    @{text "\<^vec>A \<^vec>a \<Longrightarrow> B \<^vec>a"} \\
-    @{text "goal:"} &
-    @{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C"} \\
-    @{text "goal unifier:"} &
-    @{text "(\<lambda>\<^vec>x. B (\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>"} \\
+    \<open>rule:\<close> &
+    \<open>\<^vec>A \<^vec>a \<Longrightarrow> B \<^vec>a\<close> \\
+    \<open>goal:\<close> &
+    \<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C\<close> \\
+    \<open>goal unifier:\<close> &
+    \<open>(\<lambda>\<^vec>x. B (\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>\<close> \\
    \end{tabular}}
   \]
 
   \<^medskip>
 
   \[
-  \infer[(@{inference_def assumption})]{@{text "C\<vartheta>"}}
+  \infer[(@{inference_def assumption})]{\<open>C\<vartheta>\<close>}
   {\begin{tabular}{rl}
-    @{text "goal:"} &
-    @{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> A \<^vec>x) \<Longrightarrow> C"} \\
-    @{text "assm unifier:"} & @{text "A\<vartheta> = H\<^sub>i\<vartheta>"}~~\text{(for some~@{text "H\<^sub>i"})} \\
+    \<open>goal:\<close> &
+    \<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> A \<^vec>x) \<Longrightarrow> C\<close> \\
+    \<open>assm unifier:\<close> & \<open>A\<vartheta> = H\<^sub>i\<vartheta>\<close>~~\text{(for some~\<open>H\<^sub>i\<close>)} \\
    \end{tabular}}
   \]
 
@@ -409,13 +405,13 @@
   {\footnotesize
   \<^medskip>
   \begin{tabular}{r@ {\quad}l}
-  @{text "(A \<and> B \<Longrightarrow> B \<and> A) \<Longrightarrow> #(A \<and> B \<Longrightarrow> B \<and> A)"} & @{text "(init)"} \\
-  @{text "(A \<and> B \<Longrightarrow> B) \<Longrightarrow> (A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>"} & @{text "(resolution B \<Longrightarrow> A \<Longrightarrow> B \<and> A)"} \\
-  @{text "(A \<and> B \<Longrightarrow> A \<and> B) \<Longrightarrow> (A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>"} & @{text "(resolution A \<and> B \<Longrightarrow> B)"} \\
-  @{text "(A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>"} & @{text "(assumption)"} \\
-  @{text "(A \<and> B \<Longrightarrow> A \<and> B) \<Longrightarrow> #\<dots>"} & @{text "(resolution A \<and> B \<Longrightarrow> A)"} \\
-  @{text "#\<dots>"} & @{text "(assumption)"} \\
-  @{text "A \<and> B \<Longrightarrow> B \<and> A"} & @{text "(finish)"} \\
+  \<open>(A \<and> B \<Longrightarrow> B \<and> A) \<Longrightarrow> #(A \<and> B \<Longrightarrow> B \<and> A)\<close> & \<open>(init)\<close> \\
+  \<open>(A \<and> B \<Longrightarrow> B) \<Longrightarrow> (A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>\<close> & \<open>(resolution B \<Longrightarrow> A \<Longrightarrow> B \<and> A)\<close> \\
+  \<open>(A \<and> B \<Longrightarrow> A \<and> B) \<Longrightarrow> (A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>\<close> & \<open>(resolution A \<and> B \<Longrightarrow> B)\<close> \\
+  \<open>(A \<and> B \<Longrightarrow> A) \<Longrightarrow> #\<dots>\<close> & \<open>(assumption)\<close> \\
+  \<open>(A \<and> B \<Longrightarrow> A \<and> B) \<Longrightarrow> #\<dots>\<close> & \<open>(resolution A \<and> B \<Longrightarrow> A)\<close> \\
+  \<open>#\<dots>\<close> & \<open>(assumption)\<close> \\
+  \<open>A \<and> B \<Longrightarrow> B \<and> A\<close> & \<open>(finish)\<close> \\
   \end{tabular}
   \<^medskip>
   }
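
The trace corresponds to the following apply-style sketch in Isabelle/HOL,
with \<open>conjunct1\<close>/\<open>conjunct2\<close> in the roles of \<open>A \<and> B \<Longrightarrow> A\<close> and \<open>A \<and> B \<Longrightarrow> B\<close>:

  lemma "A \<and> B \<Longrightarrow> B \<and> A"
    apply (rule conjI)        -- \<open>resolution with \<open>conjI\<close>\<close>
     apply (rule conjunct2)   -- \<open>resolution with \<open>A \<and> B \<Longrightarrow> B\<close>\<close>
     apply assumption
    apply (rule conjunct1)    -- \<open>resolution with \<open>A \<and> B \<Longrightarrow> A\<close>\<close>
    apply assumption
    done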
@@ -426,28 +422,28 @@
   @{inference_def elim_resolution} principle.  In contrast, Isar uses
   a slightly more refined combination, where the assumptions to be
   closed are marked explicitly, using again the protective marker
-  @{text "#"}:
+  \<open>#\<close>:
 
   \[
   \infer[(@{inference refinement})]
-  {@{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>G' (\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>"}}
+  {\<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>G' (\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>\<close>}
   {\begin{tabular}{rl}
-    @{text "sub\<hyphen>proof:"} &
-    @{text "\<^vec>G \<^vec>a \<Longrightarrow> B \<^vec>a"} \\
-    @{text "goal:"} &
-    @{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C"} \\
-    @{text "goal unifier:"} &
-    @{text "(\<lambda>\<^vec>x. B (\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>"} \\
-    @{text "assm unifiers:"} &
-    @{text "(\<lambda>\<^vec>x. G\<^sub>j (\<^vec>a \<^vec>x))\<vartheta> = #H\<^sub>i\<vartheta>"} \\
-    & \quad (for each marked @{text "G\<^sub>j"} some @{text "#H\<^sub>i"}) \\
+    \<open>sub\<hyphen>proof:\<close> &
+    \<open>\<^vec>G \<^vec>a \<Longrightarrow> B \<^vec>a\<close> \\
+    \<open>goal:\<close> &
+    \<open>(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> B' \<^vec>x) \<Longrightarrow> C\<close> \\
+    \<open>goal unifier:\<close> &
+    \<open>(\<lambda>\<^vec>x. B (\<^vec>a \<^vec>x))\<vartheta> = B'\<vartheta>\<close> \\
+    \<open>assm unifiers:\<close> &
+    \<open>(\<lambda>\<^vec>x. G\<^sub>j (\<^vec>a \<^vec>x))\<vartheta> = #H\<^sub>i\<vartheta>\<close> \\
+    & \quad (for each marked \<open>G\<^sub>j\<close> some \<open>#H\<^sub>i\<close>) \\
    \end{tabular}}
   \]
 
-  Here the @{text "sub\<hyphen>proof"} rule stems from the
+  Here the \<open>sub\<hyphen>proof\<close> rule stems from the
   main @{command fix}-@{command assume}-@{command show} outline of
   Isar (cf.\ \secref{sec:framework-subproof}): each assumption
-  indicated in the text results in a marked premise @{text "G"} above.
+  indicated in the text results in a marked premise \<open>G\<close> above.
   The marking enforces resolution against one of the sub-goal's
   premises.  Consequently, @{command fix}-@{command assume}-@{command
   show} makes it possible to fit the result of a sub-proof quite robustly into a
@@ -467,29 +463,28 @@
   Isar is an exercise in sound minimalism.  Approximately half of the
   language is introduced as primitive, the rest defined as derived
   concepts.  The following grammar describes the core language
-  (category @{text "proof"}), which is embedded into theory
+  (category \<open>proof\<close>), which is embedded into theory
   specification elements such as @{command theorem}; see also
-  \secref{sec:framework-stmt} for the separate category @{text
-  "statement"}.
+  \secref{sec:framework-stmt} for the separate category \<open>statement\<close>.
 
   \<^medskip>
   \begin{tabular}{rcl}
-    @{text "theory\<hyphen>stmt"} & = & @{command "theorem"}~@{text "statement proof  |"}~~@{command "definition"}~@{text "\<dots>  |  \<dots>"} \\[1ex]
+    \<open>theory\<hyphen>stmt\<close> & = & @{command "theorem"}~\<open>statement proof  |\<close>~~@{command "definition"}~\<open>\<dots>  |  \<dots>\<close> \\[1ex]
 
-    @{text "proof"} & = & @{text "prfx\<^sup>*"}~@{command "proof"}~@{text "method\<^sup>? stmt\<^sup>*"}~@{command "qed"}~@{text "method\<^sup>?"} \\[1ex]
+    \<open>proof\<close> & = & \<open>prfx\<^sup>*\<close>~@{command "proof"}~\<open>method\<^sup>? stmt\<^sup>*\<close>~@{command "qed"}~\<open>method\<^sup>?\<close> \\[1ex]
 
-    @{text prfx} & = & @{command "using"}~@{text "facts"} \\
-    & @{text "|"} & @{command "unfolding"}~@{text "facts"} \\
+    \<open>prfx\<close> & = & @{command "using"}~\<open>facts\<close> \\
+    & \<open>|\<close> & @{command "unfolding"}~\<open>facts\<close> \\
 
-    @{text stmt} & = & @{command "{"}~@{text "stmt\<^sup>*"}~@{command "}"} \\
-    & @{text "|"} & @{command "next"} \\
-    & @{text "|"} & @{command "note"}~@{text "name = facts"} \\
-    & @{text "|"} & @{command "let"}~@{text "term = term"} \\
-    & @{text "|"} & @{command "fix"}~@{text "var\<^sup>+"} \\
-    & @{text "|"} & @{command assume}~@{text "\<guillemotleft>inference\<guillemotright> name: props"} \\
-    & @{text "|"} & @{command "then"}@{text "\<^sup>?"}~@{text goal} \\
-    @{text goal} & = & @{command "have"}~@{text "name: props proof"} \\
-    & @{text "|"} & @{command "show"}~@{text "name: props proof"} \\
+    \<open>stmt\<close> & = & @{command "{"}~\<open>stmt\<^sup>*\<close>~@{command "}"} \\
+    & \<open>|\<close> & @{command "next"} \\
+    & \<open>|\<close> & @{command "note"}~\<open>name = facts\<close> \\
+    & \<open>|\<close> & @{command "let"}~\<open>term = term\<close> \\
+    & \<open>|\<close> & @{command "fix"}~\<open>var\<^sup>+\<close> \\
+    & \<open>|\<close> & @{command assume}~\<open>\<guillemotleft>inference\<guillemotright> name: props\<close> \\
+    & \<open>|\<close> & @{command "then"}\<open>\<^sup>?\<close>~\<open>goal\<close> \\
+    \<open>goal\<close> & = & @{command "have"}~\<open>name: props proof\<close> \\
+    & \<open>|\<close> & @{command "show"}~\<open>name: props proof\<close> \\
   \end{tabular}
 
   \<^medskip>
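
A small sketch exercising several of these elements at once (Isabelle/HOL,
assuming \<open>Main\<close>; the statement itself is arbitrary):

  notepad
  begin
    fix A B :: bool
    assume ab: "A \<and> B"
    have "B \<and> A"
    proof    -- \<open>implicit rule step: \<open>conjI\<close>\<close>
      from ab show B by (rule conjunct2)
      from ab show A by (rule conjunct1)
    qed
  end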
@@ -498,20 +493,19 @@
 
   \<^medskip>
   The syntax for terms and propositions is inherited from
-  Pure (and the object-logic).  A @{text "pattern"} is a @{text
-  "term"} with schematic variables, to be bound by higher-order
+  Pure (and the object-logic).  A \<open>pattern\<close> is a \<open>term\<close> with schematic variables, to be bound by higher-order
   matching.
 
   \<^medskip>
   Facts may be referenced by name or proposition.  For
-  example, the result of ``@{command have}~@{text "a: A \<langle>proof\<rangle>"}''
-  becomes available both as @{text "a"} and
-  \isacharbackquoteopen@{text "A"}\isacharbackquoteclose.  Moreover,
+  example, the result of ``@{command have}~\<open>a: A \<langle>proof\<rangle>\<close>''
+  becomes available both as \<open>a\<close> and
+  \isacharbackquoteopen\<open>A\<close>\isacharbackquoteclose.  Moreover,
   fact expressions may involve attributes that modify either the
   theorem or the background context.  For example, the expression
-  ``@{text "a [OF b]"}'' refers to the composition of two facts
+  ``\<open>a [OF b]\<close>'' refers to the composition of two facts
   according to the @{inference resolution} inference of
-  \secref{sec:framework-resolution}, while ``@{text "a [intro]"}''
+  \secref{sec:framework-resolution}, while ``\<open>a [intro]\<close>''
   declares a fact as introduction rule in the context.
 
   The special fact called ``@{fact this}'' always refers to the last
@@ -522,12 +516,12 @@
 
   \<^medskip>
   \begin{tabular}{rcl}
-    @{command from}~@{text a} & @{text "\<equiv>"} & @{command note}~@{text a}~@{command then} \\
-    @{command with}~@{text a} & @{text "\<equiv>"} & @{command from}~@{text "a \<AND> this"} \\
+    @{command from}~\<open>a\<close> & \<open>\<equiv>\<close> & @{command note}~\<open>a\<close>~@{command then} \\
+    @{command with}~\<open>a\<close> & \<open>\<equiv>\<close> & @{command from}~\<open>a \<AND> this\<close> \\
   \end{tabular}
   \<^medskip>
 
-  The @{text "method"} category is essentially a parameter and may be
+  The \<open>method\<close> category is essentially a parameter and may be
   populated later.  Methods use the facts indicated by @{command
   "then"} or @{command using}, and then operate on the goal state.
   Some basic methods are predefined: ``@{method "-"}'' leaves the goal
@@ -536,8 +530,8 @@
   result to the goal (both ``@{method this}'' and ``@{method (Pure) rule}''
   refer to @{inference resolution} of
   \secref{sec:framework-resolution}).  The secondary arguments to
-  ``@{method (Pure) rule}'' may be specified explicitly as in ``@{text "(rule
-  a)"}'', or picked from the context.  In the latter case, the system
+  ``@{method (Pure) rule}'' may be specified explicitly as in ``\<open>(rule
+  a)\<close>'', or picked from the context.  In the latter case, the system
   first tries rules declared as @{attribute (Pure) elim} or
   @{attribute (Pure) dest}, followed by those declared as @{attribute
   (Pure) intro}.
@@ -545,16 +539,15 @@
   The default method for @{command proof} is ``@{method standard}''
   (arguments picked from the context), for @{command qed} it is
   ``@{method "succeed"}''.  Further abbreviations for terminal proof steps
-  are ``@{command "by"}~@{text "method\<^sub>1 method\<^sub>2"}'' for
-  ``@{command proof}~@{text "method\<^sub>1"}~@{command qed}~@{text
-  "method\<^sub>2"}'', and ``@{command ".."}'' for ``@{command
+  are ``@{command "by"}~\<open>method\<^sub>1 method\<^sub>2\<close>'' for
+  ``@{command proof}~\<open>method\<^sub>1\<close>~@{command qed}~\<open>method\<^sub>2\<close>'', and ``@{command ".."}'' for ``@{command
   "by"}~@{method standard}, and ``@{command "."}'' for ``@{command
   "by"}~@{method this}''.  The @{command unfolding} element operates
   directly on the current facts and goal by applying equalities.
 
   \<^medskip>
   Block structure can be indicated explicitly by ``@{command
-  "{"}~@{text "\<dots>"}~@{command "}"}'', although the body of a sub-proof
+  "{"}~\<open>\<dots>\<close>~@{command "}"}'', although the body of a sub-proof
   already involves implicit nesting.  In any case, @{command next}
   jumps into the next section of a block, i.e.\ it acts like closing
   an implicit block scope and opening another one; there is no direct
@@ -572,20 +565,18 @@
 subsection \<open>Context elements \label{sec:framework-context}\<close>
 
 text \<open>
-  In judgments @{text "\<Gamma> \<turnstile> \<phi>"} of the primitive framework, @{text "\<Gamma>"}
+  In judgments \<open>\<Gamma> \<turnstile> \<phi>\<close> of the primitive framework, \<open>\<Gamma>\<close>
   essentially acts like a proof context.  Isar elaborates this idea
   towards a higher-level notion, with additional information for
   type-inference, term abbreviations, local facts, hypotheses etc.
 
-  The element @{command fix}~@{text "x :: \<alpha>"} declares a local
+  The element @{command fix}~\<open>x :: \<alpha>\<close> declares a local
   parameter, i.e.\ an arbitrary-but-fixed entity of a given type; in
-  results exported from the context, @{text "x"} may become anything.
-  The @{command assume}~@{text "\<guillemotleft>inference\<guillemotright>"} element provides a
-  general interface to hypotheses: ``@{command assume}~@{text
-  "\<guillemotleft>inference\<guillemotright> A"}'' produces @{text "A \<turnstile> A"} locally, while the
-  included inference tells how to discharge @{text A} from results
-  @{text "A \<turnstile> B"} later on.  There is no user-syntax for @{text
-  "\<guillemotleft>inference\<guillemotright>"}, i.e.\ it may only occur internally when derived
+  results exported from the context, \<open>x\<close> may become anything.
+  The @{command assume}~\<open>\<guillemotleft>inference\<guillemotright>\<close> element provides a
+  general interface to hypotheses: ``@{command assume}~\<open>\<guillemotleft>inference\<guillemotright> A\<close>'' produces \<open>A \<turnstile> A\<close> locally, while the
+  included inference tells how to discharge \<open>A\<close> from results
+  \<open>A \<turnstile> B\<close> later on.  There is no user-syntax for \<open>\<guillemotleft>inference\<guillemotright>\<close>, i.e.\ it may only occur internally when derived
   commands are defined in ML.
 
   At the user-level, the default inference for @{command assume} is
@@ -594,19 +585,19 @@
 
   \<^medskip>
   \begin{tabular}{rcl}
-    @{command presume}~@{text A} & @{text "\<equiv>"} & @{command assume}~@{text "\<guillemotleft>weak\<hyphen>discharge\<guillemotright> A"} \\
-    @{command def}~@{text "x \<equiv> a"} & @{text "\<equiv>"} & @{command fix}~@{text x}~@{command assume}~@{text "\<guillemotleft>expansion\<guillemotright> x \<equiv> a"} \\
+    @{command presume}~\<open>A\<close> & \<open>\<equiv>\<close> & @{command assume}~\<open>\<guillemotleft>weak\<hyphen>discharge\<guillemotright> A\<close> \\
+    @{command def}~\<open>x \<equiv> a\<close> & \<open>\<equiv>\<close> & @{command fix}~\<open>x\<close>~@{command assume}~\<open>\<guillemotleft>expansion\<guillemotright> x \<equiv> a\<close> \\
   \end{tabular}
   \<^medskip>
 
   \[
-  \infer[(@{inference_def discharge})]{@{text "\<strut>\<Gamma> - A \<turnstile> #A \<Longrightarrow> B"}}{@{text "\<strut>\<Gamma> \<turnstile> B"}}
+  \infer[(@{inference_def discharge})]{\<open>\<strut>\<Gamma> - A \<turnstile> #A \<Longrightarrow> B\<close>}{\<open>\<strut>\<Gamma> \<turnstile> B\<close>}
   \]
   \[
-  \infer[(@{inference_def "weak\<hyphen>discharge"})]{@{text "\<strut>\<Gamma> - A \<turnstile> A \<Longrightarrow> B"}}{@{text "\<strut>\<Gamma> \<turnstile> B"}}
+  \infer[(@{inference_def "weak\<hyphen>discharge"})]{\<open>\<strut>\<Gamma> - A \<turnstile> A \<Longrightarrow> B\<close>}{\<open>\<strut>\<Gamma> \<turnstile> B\<close>}
   \]
   \[
-  \infer[(@{inference_def expansion})]{@{text "\<strut>\<Gamma> - (x \<equiv> a) \<turnstile> B a"}}{@{text "\<strut>\<Gamma> \<turnstile> B x"}}
+  \infer[(@{inference_def expansion})]{\<open>\<strut>\<Gamma> - (x \<equiv> a) \<turnstile> B a\<close>}{\<open>\<strut>\<Gamma> \<turnstile> B x\<close>}
   \]
 
   \<^medskip>
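
A small sketch of the \<open>def\<close> element in Isabelle/HOL; the generated fact
\<open>x_def\<close> records the local expansion that is discharged on export:

  notepad
  begin
    fix a :: nat
    def x \<equiv> "a + 1"
    have "a < x" unfolding x_def by simp
  end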
@@ -619,42 +610,42 @@
   The most interesting derived context element in Isar is @{command
   obtain} @{cite \<open>\S5.3\<close> "Wenzel-PhD"}, which supports generalized
   elimination steps in a purely forward manner.  The @{command obtain}
-  command takes a specification of parameters @{text "\<^vec>x"} and
-  assumptions @{text "\<^vec>A"} to be added to the context, together
+  command takes a specification of parameters \<open>\<^vec>x\<close> and
+  assumptions \<open>\<^vec>A\<close> to be added to the context, together
   with a proof of a case rule stating that this extension is
   conservative (i.e.\ may be removed from closed results later on):
 
   \<^medskip>
   \begin{tabular}{l}
-  @{text "\<langle>facts\<rangle>"}~~@{command obtain}~@{text "\<^vec>x \<WHERE> \<^vec>A \<^vec>x  \<langle>proof\<rangle> \<equiv>"} \\[0.5ex]
-  \quad @{command have}~@{text "case: \<And>thesis. (\<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis) \<Longrightarrow> thesis\<rangle>"} \\
+  \<open>\<langle>facts\<rangle>\<close>~~@{command obtain}~\<open>\<^vec>x \<WHERE> \<^vec>A \<^vec>x  \<langle>proof\<rangle> \<equiv>\<close> \\[0.5ex]
+  \quad @{command have}~\<open>case: \<And>thesis. (\<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close> \\
   \quad @{command proof}~@{method "-"} \\
-  \qquad @{command fix}~@{text thesis} \\
-  \qquad @{command assume}~@{text "[intro]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis"} \\
-  \qquad @{command show}~@{text thesis}~@{command using}~@{text "\<langle>facts\<rangle> \<langle>proof\<rangle>"} \\
+  \qquad @{command fix}~\<open>thesis\<close> \\
+  \qquad @{command assume}~\<open>[intro]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis\<close> \\
+  \qquad @{command show}~\<open>thesis\<close>~@{command using}~\<open>\<langle>facts\<rangle> \<langle>proof\<rangle>\<close> \\
   \quad @{command qed} \\
-  \quad @{command fix}~@{text "\<^vec>x"}~@{command assume}~@{text "\<guillemotleft>elimination case\<guillemotright> \<^vec>A \<^vec>x"} \\
+  \quad @{command fix}~\<open>\<^vec>x\<close>~@{command assume}~\<open>\<guillemotleft>elimination case\<guillemotright> \<^vec>A \<^vec>x\<close> \\
   \end{tabular}
   \<^medskip>
 
   \[
-  \infer[(@{inference elimination})]{@{text "\<Gamma> \<turnstile> B"}}{
+  \infer[(@{inference elimination})]{\<open>\<Gamma> \<turnstile> B\<close>}{
     \begin{tabular}{rl}
-    @{text "case:"} &
-    @{text "\<Gamma> \<turnstile> \<And>thesis. (\<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis) \<Longrightarrow> thesis"} \\[0.2ex]
-    @{text "result:"} &
-    @{text "\<Gamma> \<union> \<^vec>A \<^vec>y \<turnstile> B"} \\[0.2ex]
+    \<open>case:\<close> &
+    \<open>\<Gamma> \<turnstile> \<And>thesis. (\<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close> \\[0.2ex]
+    \<open>result:\<close> &
+    \<open>\<Gamma> \<union> \<^vec>A \<^vec>y \<turnstile> B\<close> \\[0.2ex]
     \end{tabular}}
   \]
 
-  Here the name ``@{text thesis}'' is a specific convention
+  Here the name ``\<open>thesis\<close>'' is a specific convention
   for an arbitrary-but-fixed proposition; in the primitive natural
-  deduction rules shown before we have occasionally used @{text C}.
-  The whole statement of ``@{command obtain}~@{text x}~@{keyword
-  "where"}~@{text "A x"}'' may be read as a claim that @{text "A x"}
-  may be assumed for some arbitrary-but-fixed @{text "x"}.  Also note
-  that ``@{command obtain}~@{text "A \<AND> B"}'' without parameters
-  is similar to ``@{command have}~@{text "A \<AND> B"}'', but the
+  deduction rules shown before we have occasionally used \<open>C\<close>.
+  The whole statement of ``@{command obtain}~\<open>x\<close>~@{keyword
+  "where"}~\<open>A x\<close>'' may be read as a claim that \<open>A x\<close>
+  may be assumed for some arbitrary-but-fixed \<open>x\<close>.  Also note
+  that ``@{command obtain}~\<open>A \<AND> B\<close>'' without parameters
+  is similar to ``@{command have}~\<open>A \<AND> B\<close>'', but the
   latter involves multiple sub-goals.
 
   \<^medskip>
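
A minimal sketch of @{command obtain} in Isabelle/HOL (assuming \<open>Main\<close>),
where the terminal ``..'' discharges the case rule via \<open>exE\<close>:

  notepad
  begin
    fix P :: "'a \<Rightarrow> bool"
    assume "\<exists>x. P x"
    then obtain a where "P a" ..
  end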
@@ -707,50 +698,49 @@
 subsection \<open>Structured statements \label{sec:framework-stmt}\<close>
 
 text \<open>
-  The category @{text "statement"} of top-level theorem specifications
+  The category \<open>statement\<close> of top-level theorem specifications
   is defined as follows:
 
   \<^medskip>
   \begin{tabular}{rcl}
-  @{text "statement"} & @{text "\<equiv>"} & @{text "name: props \<AND> \<dots>"} \\
-  & @{text "|"} & @{text "context\<^sup>* conclusion"} \\[0.5ex]
+  \<open>statement\<close> & \<open>\<equiv>\<close> & \<open>name: props \<AND> \<dots>\<close> \\
+  & \<open>|\<close> & \<open>context\<^sup>* conclusion\<close> \\[0.5ex]
 
-  @{text "context"} & @{text "\<equiv>"} & @{text "\<FIXES> vars \<AND> \<dots>"} \\
-  & @{text "|"} & @{text "\<ASSUMES> name: props \<AND> \<dots>"} \\
+  \<open>context\<close> & \<open>\<equiv>\<close> & \<open>\<FIXES> vars \<AND> \<dots>\<close> \\
+  & \<open>|\<close> & \<open>\<ASSUMES> name: props \<AND> \<dots>\<close> \\
 
-  @{text "conclusion"} & @{text "\<equiv>"} & @{text "\<SHOWS> name: props \<AND> \<dots>"} \\
-  & @{text "|"} & @{text "\<OBTAINS> vars \<AND> \<dots> \<WHERE> name: props \<AND> \<dots>"} \\
-  & & \quad @{text "\<BBAR> \<dots>"} \\
+  \<open>conclusion\<close> & \<open>\<equiv>\<close> & \<open>\<SHOWS> name: props \<AND> \<dots>\<close> \\
+  & \<open>|\<close> & \<open>\<OBTAINS> vars \<AND> \<dots> \<WHERE> name: props \<AND> \<dots>\<close> \\
+  & & \quad \<open>\<BBAR> \<dots>\<close> \\
   \end{tabular}
 
   \<^medskip>
-  A simple @{text "statement"} consists of named
+  A simple \<open>statement\<close> consists of named
   propositions.  The full form admits local context elements followed
-  by the actual conclusions, such as ``@{keyword "fixes"}~@{text
-  x}~@{keyword "assumes"}~@{text "A x"}~@{keyword "shows"}~@{text "B
-  x"}''.  The final result emerges as a Pure rule after discharging
+  by the actual conclusions, such as ``@{keyword "fixes"}~\<open>x\<close>~@{keyword "assumes"}~\<open>A x\<close>~@{keyword "shows"}~\<open>B
+  x\<close>''.  The final result emerges as a Pure rule after discharging
   the context: @{prop "\<And>x. A x \<Longrightarrow> B x"}.
 
   The @{keyword "obtains"} variant is another abbreviation defined
   below; unlike @{command obtain} (cf.\
   \secref{sec:framework-context}) there may be several ``cases''
-  separated by ``@{text "\<BBAR>"}'', each consisting of several
-  parameters (@{text "vars"}) and several premises (@{text "props"}).
+  separated by ``\<open>\<BBAR>\<close>'', each consisting of several
+  parameters (\<open>vars\<close>) and several premises (\<open>props\<close>).
   This specifies multi-branch elimination rules.
 
   \<^medskip>
   \begin{tabular}{l}
-  @{text "\<OBTAINS> \<^vec>x \<WHERE> \<^vec>A \<^vec>x   \<BBAR>   \<dots>   \<equiv>"} \\[0.5ex]
-  \quad @{text "\<FIXES> thesis"} \\
-  \quad @{text "\<ASSUMES> [intro]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis  \<AND>  \<dots>"} \\
-  \quad @{text "\<SHOWS> thesis"} \\
+  \<open>\<OBTAINS> \<^vec>x \<WHERE> \<^vec>A \<^vec>x   \<BBAR>   \<dots>   \<equiv>\<close> \\[0.5ex]
+  \quad \<open>\<FIXES> thesis\<close> \\
+  \quad \<open>\<ASSUMES> [intro]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis  \<AND>  \<dots>\<close> \\
+  \quad \<open>\<SHOWS> thesis\<close> \\
   \end{tabular}
   \<^medskip>
 
   Presenting structured statements in such an ``open'' format usually
   simplifies the subsequent proof, because the outer structure of the
   problem is already laid out directly.  E.g.\ consider the following
-  canonical patterns for @{text "\<SHOWS>"} and @{text "\<OBTAINS>"},
+  canonical patterns for \<open>\<SHOWS>\<close> and \<open>\<OBTAINS>\<close>,
   respectively:
 \<close>
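
Two small sketches of these statement forms in Isabelle/HOL (assuming
\<open>Main\<close>; the concluding methods are merely one plausible choice):

  lemma
    fixes f :: "nat \<Rightarrow> nat"
    assumes mono: "\<And>m n. m \<le> n \<Longrightarrow> f m \<le> f n"
    shows "f 0 \<le> f 1"
    by (rule mono) simp

  lemma
    fixes n :: nat
    assumes "0 < n"
    obtains m where "n = Suc m"
    using assms by (cases n) auto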
 
@@ -779,14 +769,12 @@
 
 text \<open>
   \<^medskip>
-  Here local facts \isacharbackquoteopen@{text "A
-  x"}\isacharbackquoteclose\ and \isacharbackquoteopen@{text "B
-  y"}\isacharbackquoteclose\ are referenced immediately; there is no
+  Here local facts \isacharbackquoteopen\<open>A
+  x\<close>\isacharbackquoteclose\ and \isacharbackquoteopen\<open>B
+  y\<close>\isacharbackquoteclose\ are referenced immediately; there is no
   need to decompose the logical rule structure again.  In the second
-  proof the final ``@{command then}~@{command show}~@{text
-  thesis}~@{command ".."}''  involves the local rule case @{text "\<And>x
-  y. A x \<Longrightarrow> B y \<Longrightarrow> thesis"} for the particular instance of terms @{text
-  "a"} and @{text "b"} produced in the body.
+  proof the final ``@{command then}~@{command show}~\<open>thesis\<close>~@{command ".."}''  involves the local rule case \<open>\<And>x
+  y. A x \<Longrightarrow> B y \<Longrightarrow> thesis\<close> for the particular instance of terms \<open>a\<close> and \<open>b\<close> produced in the body.
 \<close>
 
 
@@ -802,8 +790,8 @@
   configurations for debugging.
 
   The basic idea is analogous to evaluating algebraic expressions on a
-  stack machine: @{text "(a + b) \<cdot> c"} then corresponds to a sequence
-  of single transitions for each symbol @{text "(, a, +, b, ), \<cdot>, c"}.
+  stack machine: \<open>(a + b) \<cdot> c\<close> then corresponds to a sequence
+  of single transitions for each symbol \<open>(, a, +, b, ), \<cdot>, c\<close>.
   In Isar the algebraic values are facts or goals, and the operations
   are inferences.
 
@@ -822,15 +810,15 @@
   \caption{Isar/VM modes}\label{fig:isar-vm}
   \end{figure}
 
-  For example, in @{text "state"} mode Isar acts like a mathematical
+  For example, in \<open>state\<close> mode Isar acts like a mathematical
   scratch-pad, accepting declarations like @{command fix}, @{command
   assume}, and claims like @{command have}, @{command show}.  A goal
-  statement changes the mode to @{text "prove"}, which means that we
+  statement changes the mode to \<open>prove\<close>, which means that we
   may now refine the problem via @{command unfolding} or @{command
-  proof}.  Then we are again in @{text "state"} mode of a proof body,
+  proof}.  Then we are again in \<open>state\<close> mode of a proof body,
   which may issue @{command show} statements to solve pending
   sub-goals.  A concluding @{command qed} will return to the original
-  @{text "state"} mode one level upwards.  The subsequent Isar/VM
+  \<open>state\<close> mode one level upwards.  The subsequent Isar/VM
   trace indicates block structure, linguistic mode, goal state, and
   inferences:
 \<close>
@@ -847,34 +835,34 @@
   qed
   text_raw \<open>\end{minipage}\quad
 \begin{minipage}[t]{0.06\textwidth}
-@{text "begin"} \\
+\<open>begin\<close> \\
 \\
 \\
-@{text "begin"} \\
-@{text "end"} \\
-@{text "end"} \\
+\<open>begin\<close> \\
+\<open>end\<close> \\
+\<open>end\<close> \\
 \end{minipage}
 \begin{minipage}[t]{0.08\textwidth}
-@{text "prove"} \\
-@{text "state"} \\
-@{text "state"} \\
-@{text "prove"} \\
-@{text "state"} \\
-@{text "state"} \\
+\<open>prove\<close> \\
+\<open>state\<close> \\
+\<open>state\<close> \\
+\<open>prove\<close> \\
+\<open>state\<close> \\
+\<open>state\<close> \\
 \end{minipage}\begin{minipage}[t]{0.35\textwidth}
-@{text "(A \<longrightarrow> B) \<Longrightarrow> #(A \<longrightarrow> B)"} \\
-@{text "(A \<Longrightarrow> B) \<Longrightarrow> #(A \<longrightarrow> B)"} \\
+\<open>(A \<longrightarrow> B) \<Longrightarrow> #(A \<longrightarrow> B)\<close> \\
+\<open>(A \<Longrightarrow> B) \<Longrightarrow> #(A \<longrightarrow> B)\<close> \\
 \\
 \\
-@{text "#(A \<longrightarrow> B)"} \\
-@{text "A \<longrightarrow> B"} \\
+\<open>#(A \<longrightarrow> B)\<close> \\
+\<open>A \<longrightarrow> B\<close> \\
 \end{minipage}\begin{minipage}[t]{0.4\textwidth}
-@{text "(init)"} \\
-@{text "(resolution impI)"} \\
+\<open>(init)\<close> \\
+\<open>(resolution impI)\<close> \\
 \\
 \\
-@{text "(refinement #A \<Longrightarrow> B)"} \\
-@{text "(finish)"} \\
+\<open>(refinement #A \<Longrightarrow> B)\<close> \\
+\<open>(finish)\<close> \\
 \end{minipage}\<close>
 (*<*)
 end
@@ -970,15 +958,15 @@
   concept.  The generic proof elements introduced below depend on
   rules declared as @{attribute trans} in the context.  It is left to
   the object-logic to provide a suitable rule collection for mixed
-  relations of @{text "="}, @{text "<"}, @{text "\<le>"}, @{text "\<subset>"},
-  @{text "\<subseteq>"} etc.  Due to the flexibility of rule composition
+  relations of \<open>=\<close>, \<open><\<close>, \<open>\<le>\<close>, \<open>\<subset>\<close>,
+  \<open>\<subseteq>\<close> etc.  Due to the flexibility of rule composition
   (\secref{sec:framework-resolution}), substitution of equals by
   equals is covered as well, even substitution of inequalities
   involving monotonicity conditions; see also @{cite \<open>\S6\<close> "Wenzel-PhD"}
   and @{cite "Bauer-Wenzel:2001"}.
 
   The generic calculational mechanism is based on the observation that
-  rules such as @{text "trans:"}~@{prop "x = y \<Longrightarrow> y = z \<Longrightarrow> x = z"}
+  rules such as \<open>trans:\<close>~@{prop "x = y \<Longrightarrow> y = z \<Longrightarrow> x = z"}
   proceed from the premises towards the conclusion in a deterministic
   fashion.  Thus we may reason in forward mode, feeding intermediate
   results into rules selected from the context.  The course of
@@ -987,12 +975,12 @@
   already provided by the Isar primitives.  In the definitions below,
   @{attribute OF} refers to @{inference resolution}
   (\secref{sec:framework-resolution}) with multiple rule arguments,
-  and @{text "trans"} represents to a suitable rule from the context:
+  and \<open>trans\<close> stands for a suitable rule from the context:
 
   \begin{matharray}{rcl}
-    @{command "also"}@{text "\<^sub>0"} & \equiv & @{command "note"}~@{text "calculation = this"} \\
-    @{command "also"}@{text "\<^sub>n\<^sub>+\<^sub>1"} & \equiv & @{command "note"}~@{text "calculation = trans [OF calculation this]"} \\[0.5ex]
-    @{command "finally"} & \equiv & @{command "also"}~@{command "from"}~@{text calculation} \\
+    @{command "also"}\<open>\<^sub>0\<close> & \equiv & @{command "note"}~\<open>calculation = this\<close> \\
+    @{command "also"}\<open>\<^sub>n\<^sub>+\<^sub>1\<close> & \equiv & @{command "note"}~\<open>calculation = trans [OF calculation this]\<close> \\[0.5ex]
+    @{command "finally"} & \equiv & @{command "also"}~@{command "from"}~\<open>calculation\<close> \\
   \end{matharray}
 
   The start of a calculation is determined implicitly in the
@@ -1019,7 +1007,7 @@
 (*>*)
 
 text \<open>
-  The term ``@{text "\<dots>"}'' above is a special abbreviation
+  The term ``\<open>\<dots>\<close>'' above is a special abbreviation
   provided by the Isabelle/Isar syntax layer: it statically refers to
   the right-hand side argument of the previous statement given in the
   text.  Thus it happens to coincide with relevant sub-expressions in
@@ -1030,9 +1018,9 @@
   Symmetry rules such as @{prop "x = y \<Longrightarrow> y = x"} are like
   transitivities with only one premise.  Isar maintains a separate
   rule collection declared via the @{attribute sym} attribute, to be
-  used in fact expressions ``@{text "a [symmetric]"}'', or single-step
-  proofs ``@{command assume}~@{text "x = y"}~@{command then}~@{command
-  have}~@{text "y = x"}~@{command ".."}''.
+  used in fact expressions ``\<open>a [symmetric]\<close>'', or single-step
+  proofs ``@{command assume}~\<open>x = y\<close>~@{command then}~@{command
+  have}~\<open>y = x\<close>~@{command ".."}''.
 \<close>
 
 end
\ No newline at end of file
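
The mechanism can be exercised with a short Isabelle/HOL sketch (assuming
\<open>Main\<close>); the first block shows a transitivity chain, the second the
\<open>sym\<close>-based single step quoted above:

  notepad
  begin
    fix a b c d :: nat
    assume "a = b" and "b = c" and "c = d"
    have "a = b" by fact
    also have "\<dots> = c" by fact
    also have "\<dots> = d" by fact
    finally have "a = d" .
  next
    fix x y :: nat
    assume "x = y"
    then have "y = x" ..
  end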
--- a/src/Doc/Isar_Ref/Generic.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Generic.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -29,7 +29,7 @@
 
 text \<open>
   \begin{matharray}{rcll}
-    @{command_def "print_options"} & : & @{text "context \<rightarrow>"} \\
+    @{command_def "print_options"} & : & \<open>context \<rightarrow>\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -39,12 +39,12 @@
   \<close>}
 
   \<^descr> @{command "print_options"} prints the available configuration
-  options, with names, types, and current values; the ``@{text "!"}'' option
+  options, with names, types, and current values; the ``\<open>!\<close>'' option
   indicates extra verbosity.
   
-  \<^descr> @{text "name = value"} as an attribute expression modifies the
+  \<^descr> \<open>name = value\<close> as an attribute expression modifies the
   named option, with the syntax of the value depending on the option's
-  type.  For @{ML_type bool} the default value is @{text true}.  Any
+  type.  For @{ML_type bool} the default value is \<open>true\<close>.  Any
   attempt to change a global option in a local context is ignored.
 \<close>
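
A short usage sketch: \<open>show_types\<close> is a standard option of the pretty
printer, and \<open>declare\<close> applies the attribute expression to the theory
context:

  print_options

  declare [[show_types = true]]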
 
@@ -55,17 +55,17 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def unfold} & : & @{text method} \\
-    @{method_def fold} & : & @{text method} \\
-    @{method_def insert} & : & @{text method} \\[0.5ex]
-    @{method_def erule}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def drule}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def frule}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def intro} & : & @{text method} \\
-    @{method_def elim} & : & @{text method} \\
-    @{method_def fail} & : & @{text method} \\
-    @{method_def succeed} & : & @{text method} \\
-    @{method_def sleep} & : & @{text method} \\
+    @{method_def unfold} & : & \<open>method\<close> \\
+    @{method_def fold} & : & \<open>method\<close> \\
+    @{method_def insert} & : & \<open>method\<close> \\[0.5ex]
+    @{method_def erule}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def drule}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def frule}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def intro} & : & \<open>method\<close> \\
+    @{method_def elim} & : & \<open>method\<close> \\
+    @{method_def fail} & : & \<open>method\<close> \\
+    @{method_def succeed} & : & \<open>method\<close> \\
+    @{method_def sleep} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -79,18 +79,16 @@
     @@{method sleep} @{syntax real}
   \<close>}
 
-  \<^descr> @{method unfold}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} and @{method fold}~@{text
-  "a\<^sub>1 \<dots> a\<^sub>n"} expand (or fold back) the given definitions throughout
+  \<^descr> @{method unfold}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> and @{method fold}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> expand (or fold back) the given definitions throughout
   all goals; any chained facts provided are inserted into the goal and
   subject to rewriting as well.
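+
+  For example (a minimal sketch): after \<open>definition "double x = x + (x::nat)"\<close>,
+  the method \<open>(unfold double_def)\<close> replaces every \<open>double t\<close> by \<open>t + t\<close>
+  in all goals, and \<open>(fold double_def)\<close> reverses this.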
 
-  \<^descr> @{method insert}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} inserts theorems as facts
+  \<^descr> @{method insert}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> inserts theorems as facts
   into all goals of the proof state.  Note that current facts
   indicated for forward chaining are ignored.
 
-  \<^descr> @{method erule}~@{text "a\<^sub>1 \<dots> a\<^sub>n"}, @{method
-  drule}~@{text "a\<^sub>1 \<dots> a\<^sub>n"}, and @{method frule}~@{text
-  "a\<^sub>1 \<dots> a\<^sub>n"} are similar to the basic @{method rule}
+  \<^descr> @{method erule}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close>, @{method
+  drule}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close>, and @{method frule}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> are similar to the basic @{method rule}
   method (see \secref{sec:pure-meth-att}), but apply rules by
   elim-resolution, destruct-resolution, and forward-resolution,
   respectively @{cite "isabelle-implementation"}.  The optional natural
@@ -112,28 +110,27 @@
   to common automated tools.
 
   \<^descr> @{method fail} yields an empty result sequence; it is the
-  identity of the ``@{text "|"}'' method combinator (cf.\
+  identity of the ``\<open>|\<close>'' method combinator (cf.\
   \secref{sec:proof-meth}).
 
   \<^descr> @{method succeed} yields a single (unchanged) result; it is
-  the identity of the ``@{text ","}'' method combinator (cf.\
+  the identity of the ``\<open>,\<close>'' method combinator (cf.\
   \secref{sec:proof-meth}).
 
-  \<^descr> @{method sleep}~@{text s} succeeds after a real-time delay of @{text
-  s} seconds. This is occasionally useful for demonstration and testing
+  \<^descr> @{method sleep}~\<open>s\<close> succeeds after a real-time delay of \<open>s\<close> seconds. This is occasionally useful for demonstration and testing
   purposes.
 
 
   \begin{matharray}{rcl}
-    @{attribute_def tagged} & : & @{text attribute} \\
-    @{attribute_def untagged} & : & @{text attribute} \\[0.5ex]
-    @{attribute_def THEN} & : & @{text attribute} \\
-    @{attribute_def unfolded} & : & @{text attribute} \\
-    @{attribute_def folded} & : & @{text attribute} \\
-    @{attribute_def abs_def} & : & @{text attribute} \\[0.5ex]
-    @{attribute_def rotated} & : & @{text attribute} \\
-    @{attribute_def (Pure) elim_format} & : & @{text attribute} \\
-    @{attribute_def no_vars}@{text "\<^sup>*"} & : & @{text attribute} \\
+    @{attribute_def tagged} & : & \<open>attribute\<close> \\
+    @{attribute_def untagged} & : & \<open>attribute\<close> \\[0.5ex]
+    @{attribute_def THEN} & : & \<open>attribute\<close> \\
+    @{attribute_def unfolded} & : & \<open>attribute\<close> \\
+    @{attribute_def folded} & : & \<open>attribute\<close> \\
+    @{attribute_def abs_def} & : & \<open>attribute\<close> \\[0.5ex]
+    @{attribute_def rotated} & : & \<open>attribute\<close> \\
+    @{attribute_def (Pure) elim_format} & : & \<open>attribute\<close> \\
+    @{attribute_def no_vars}\<open>\<^sup>*\<close> & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -148,19 +145,19 @@
     @@{attribute rotated} @{syntax int}?
   \<close>}
 
-  \<^descr> @{attribute tagged}~@{text "name value"} and @{attribute
-  untagged}~@{text name} add and remove \<^emph>\<open>tags\<close> of some theorem.
+  \<^descr> @{attribute tagged}~\<open>name value\<close> and @{attribute
+  untagged}~\<open>name\<close> add and remove \<^emph>\<open>tags\<close> of some theorem.
   Tags may be any list of string pairs that serve as formal comment.
   The first string is considered the tag name, the second its value.
   Note that @{attribute untagged} removes any tags of the same name.
 
-  \<^descr> @{attribute THEN}~@{text a} composes rules by resolution; it
-  resolves with the first premise of @{text a} (an alternative
+  \<^descr> @{attribute THEN}~\<open>a\<close> composes rules by resolution; it
+  resolves with the first premise of \<open>a\<close> (an alternative
   position may be also specified).  See also @{ML_op "RS"} in
   @{cite "isabelle-implementation"}.
   
-  \<^descr> @{attribute unfolded}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} and @{attribute
-  folded}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} expand and fold back again the given
+  \<^descr> @{attribute unfolded}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> and @{attribute
+  folded}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> expand and fold back again the given
   definitions throughout a rule.
 
   \<^descr> @{attribute abs_def} turns an equation of the form @{prop "f x
@@ -168,8 +165,8 @@
   simp} or @{method unfold} steps always expand it.  This also works
   for object-logic equality.
 
-  \<^descr> @{attribute rotated}~@{text n} rotate the premises of a
-  theorem by @{text n} (default 1).
+  \<^descr> @{attribute rotated}~\<open>n\<close> rotates the premises of a
+  theorem by \<open>n\<close> (default 1).
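+
+  For example (a sketch, assuming Isabelle/HOL's \<open>mp\<close>): the variant
+  \<open>mp [rotated]\<close> turns \<open>?P \<longrightarrow> ?Q \<Longrightarrow> ?P \<Longrightarrow> ?Q\<close> into
+  \<open>?P \<Longrightarrow> (?P \<longrightarrow> ?Q) \<Longrightarrow> ?Q\<close>.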
 
   \<^descr> @{attribute (Pure) elim_format} turns a destruction rule into
   elimination rule format, by resolving with the rule @{prop "PROP A \<Longrightarrow>
@@ -187,9 +184,9 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def subst} & : & @{text method} \\
-    @{method_def hypsubst} & : & @{text method} \\
-    @{method_def split} & : & @{text method} \\
+    @{method_def subst} & : & \<open>method\<close> \\
+    @{method_def hypsubst} & : & \<open>method\<close> \\
+    @{method_def split} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -205,38 +202,37 @@
   provide the canonical way for automated normalization (see
   \secref{sec:simplifier}).
 
-  \<^descr> @{method subst}~@{text eq} performs a single substitution step
-  using rule @{text eq}, which may be either a meta or object
+  \<^descr> @{method subst}~\<open>eq\<close> performs a single substitution step
+  using rule \<open>eq\<close>, which may be either a meta or object
   equality.
 
-  \<^descr> @{method subst}~@{text "(asm) eq"} substitutes in an
+  \<^descr> @{method subst}~\<open>(asm) eq\<close> substitutes in an
   assumption.
 
-  \<^descr> @{method subst}~@{text "(i \<dots> j) eq"} performs several
-  substitutions in the conclusion. The numbers @{text i} to @{text j}
+  \<^descr> @{method subst}~\<open>(i \<dots> j) eq\<close> performs several
+  substitutions in the conclusion. The numbers \<open>i\<close> to \<open>j\<close>
   indicate the positions to substitute at.  Positions are ordered from
   the top of the term tree moving down from left to right. For
-  example, in @{text "(a + b) + (c + d)"} there are three positions
-  where commutativity of @{text "+"} is applicable: 1 refers to @{text
-  "a + b"}, 2 to the whole term, and 3 to @{text "c + d"}.
+  example, in \<open>(a + b) + (c + d)\<close> there are three positions
+  where commutativity of \<open>+\<close> is applicable: 1 refers to \<open>a + b\<close>, 2 to the whole term, and 3 to \<open>c + d\<close>.
 
-  If the positions in the list @{text "(i \<dots> j)"} are non-overlapping
-  (e.g.\ @{text "(2 3)"} in @{text "(a + b) + (c + d)"}) you may
+  If the positions in the list \<open>(i \<dots> j)\<close> are non-overlapping
+  (e.g.\ \<open>(2 3)\<close> in \<open>(a + b) + (c + d)\<close>) you may
   assume all substitutions are performed simultaneously.  Otherwise
-  the behaviour of @{text subst} is not specified.
+  the behaviour of \<open>subst\<close> is not specified.
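+
+  A minimal sketch (assuming Isabelle/HOL's \<open>add.commute\<close>; the
+  occurrences of \<open>?a + ?b\<close> in the conclusion are numbered from left
+  to right):
+
+  \<open>lemma "f (a + b) (c + d) = f (b + a) (d + (c::nat))"
+    by (subst (1 2) add.commute) (rule refl)\<close>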
 
-  \<^descr> @{method subst}~@{text "(asm) (i \<dots> j) eq"} performs the
+  \<^descr> @{method subst}~\<open>(asm) (i \<dots> j) eq\<close> performs the
   substitutions in the assumptions. The positions refer to the
   assumptions in order from left to right.  For example, given in a
-  goal of the form @{text "P (a + b) \<Longrightarrow> P (c + d) \<Longrightarrow> \<dots>"}, position 1 of
-  commutativity of @{text "+"} is the subterm @{text "a + b"} and
-  position 2 is the subterm @{text "c + d"}.
+  goal of the form \<open>P (a + b) \<Longrightarrow> P (c + d) \<Longrightarrow> \<dots>\<close>, position 1 of
+  commutativity of \<open>+\<close> is the subterm \<open>a + b\<close> and
+  position 2 is the subterm \<open>c + d\<close>.
 
   \<^descr> @{method hypsubst} performs substitution using some
-  assumption; this only works for equations of the form @{text "x =
-  t"} where @{text x} is a free or bound variable.
+  assumption; this only works for equations of the form \<open>x =
+  t\<close> where \<open>x\<close> is a free or bound variable.
 
-  \<^descr> @{method split}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} performs single-step case
+  \<^descr> @{method split}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> performs single-step case
   splitting using the given rules.  Splitting is performed in the
  conclusion or some assumption of the subgoal, depending on the
   structure of the rule.
@@ -274,9 +270,9 @@
 
 text \<open>
   \begin{tabular}{rcll}
-    @{method_def simp} & : & @{text method} \\
-    @{method_def simp_all} & : & @{text method} \\
-    @{attribute_def simp_depth_limit} & : & @{text attribute} & default @{text 100} \\
+    @{method_def simp} & : & \<open>method\<close> \\
+    @{method_def simp_all} & : & \<open>method\<close> \\
+    @{attribute_def simp_depth_limit} & : & \<open>attribute\<close> & default \<open>100\<close> \\
   \end{tabular}
   \<^medskip>
 
@@ -292,30 +288,28 @@
 
   \<^descr> @{method simp} invokes the Simplifier on the first subgoal,
   after inserting chained facts as additional goal premises; further
-  rule declarations may be included via @{text "(simp add: facts)"}.
+  rule declarations may be included via \<open>(simp add: facts)\<close>.
   The proof method fails if the subgoal remains unchanged after
   simplification.
 
   Note that the original goal premises and chained facts are subject
-  to simplification themselves, while declarations via @{text
-  "add"}/@{text "del"} merely follow the policies of the object-logic
+  to simplification themselves, while declarations via \<open>add\<close>/\<open>del\<close> merely follow the policies of the object-logic
   to extract rewrite rules from theorems, without further
   simplification.  This may lead to slightly different behavior in
  either case; occasionally this difference is precisely what is needed
  to perform the intended simplification step in boundary situations!
 
   \<^medskip>
-  The @{text only} modifier first removes all other rewrite
+  The \<open>only\<close> modifier first removes all other rewrite
   rules, looper tactics (including split rules), congruence rules, and
-  then behaves like @{text add}.  Implicit solvers remain, which means
-  that trivial rules like reflexivity or introduction of @{text
-  "True"} are available to solve the simplified subgoals, but also
+  then behaves like \<open>add\<close>.  Implicit solvers remain, which means
+  that trivial rules like reflexivity or introduction of \<open>True\<close> are available to solve the simplified subgoals, but also
   non-trivial tools like linear arithmetic in HOL.  The latter may
  lead to some surprise concerning the meaning of ``only'' in Isabelle/HOL
   compared to English!
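+
+  For example (a minimal sketch, assuming Isabelle/HOL):
+
+  \<open>lemma "n + 0 = (n::nat)" by (simp only: add_0_right)\<close>
+
+  Here the single rule \<open>?a + 0 = ?a\<close> reduces the goal to \<open>n = n\<close>,
+  which the implicit reflexivity solver closes.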
 
   \<^medskip>
-  The @{text split} modifiers add or delete rules for the
+  The \<open>split\<close> modifiers add or delete rules for the
   Splitter (see also \secref{sec:simp-strategies} on the looper).
   This works only if the Simplifier method has been properly setup to
  include the Splitter (all major object logics such as HOL, HOLCF, FOL,
@@ -323,11 +317,11 @@
 
   There is also a separate @{method_ref split} method available for
   single-step case splitting.  The effect of repeatedly applying
-  @{text "(split thms)"} can be imitated by ``@{text "(simp only:
-  split: thms)"}''.
+  \<open>(split thms)\<close> can be imitated by ``\<open>(simp only:
+  split: thms)\<close>''.
 
   \<^medskip>
-  The @{text cong} modifiers add or delete Simplifier
+  The \<open>cong\<close> modifiers add or delete Simplifier
   congruence rules (see also \secref{sec:simp-rules}); the default is
   to add.
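+
+  For example (a sketch, assuming Isabelle/HOL): \<open>(simp cong: if_cong)\<close>
+  activates the strong congruence rule for \<open>if\<close>, so that each branch is
+  simplified under the assumption resp.\ negation of the condition.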
 
@@ -362,22 +356,22 @@
   \hline
   Isar method & ML tactic & behavior \\\hline
 
-  @{text "(simp (no_asm))"} & @{ML simp_tac} & assumptions are ignored
+  \<open>(simp (no_asm))\<close> & @{ML simp_tac} & assumptions are ignored
   completely \\\hline
 
-  @{text "(simp (no_asm_simp))"} & @{ML asm_simp_tac} & assumptions
+  \<open>(simp (no_asm_simp))\<close> & @{ML asm_simp_tac} & assumptions
   are used in the simplification of the conclusion but are not
   themselves simplified \\\hline
 
-  @{text "(simp (no_asm_use))"} & @{ML full_simp_tac} & assumptions
+  \<open>(simp (no_asm_use))\<close> & @{ML full_simp_tac} & assumptions
   are simplified but are not used in the simplification of each other
   or the conclusion \\\hline
 
-  @{text "(simp)"} & @{ML asm_full_simp_tac} & assumptions are used in
+  \<open>(simp)\<close> & @{ML asm_full_simp_tac} & assumptions are used in
   the simplification of the conclusion and to simplify other
   assumptions \\\hline
 
-  @{text "(simp (asm_lr))"} & @{ML asm_lr_simp_tac} & compatibility
+  \<open>(simp (asm_lr))\<close> & @{ML asm_lr_simp_tac} & compatibility
   mode: an assumption is only used for simplifying assumptions which
   are to the right of it \\\hline
 
@@ -423,8 +417,8 @@
 
   In the next example the malicious assumption @{prop "\<And>x::nat. f x =
  g (f (g x))"} does not contribute to solving the problem, but makes
-  the default @{method simp} method loop: the rewrite rule @{text "f
-  ?x \<equiv> g (f (g ?x))"} extracted from the assumption does not
+  the default @{method simp} method loop: the rewrite rule \<open>f
+  ?x \<equiv> g (f (g ?x))\<close> extracted from the assumption does not
   terminate.  The Simplifier notices certain simple forms of
   nontermination, but not this one.  The problem can be solved
   nonetheless, by ignoring assumptions via special options as
@@ -460,9 +454,9 @@
   @{method simp} method applied to @{prop "P (f x) \<Longrightarrow> y = x \<Longrightarrow> f x = f y
   \<Longrightarrow> Q"} gives rise to the infinite reduction sequence
   \[
-  @{text "P (f x)"} \stackrel{@{text "f x \<equiv> f y"}}{\longmapsto}
-  @{text "P (f y)"} \stackrel{@{text "y \<equiv> x"}}{\longmapsto}
-  @{text "P (f x)"} \stackrel{@{text "f x \<equiv> f y"}}{\longmapsto} \cdots
+  \<open>P (f x)\<close> \stackrel{\<open>f x \<equiv> f y\<close>}{\longmapsto}
+  \<open>P (f y)\<close> \stackrel{\<open>y \<equiv> x\<close>}{\longmapsto}
+  \<open>P (f x)\<close> \stackrel{\<open>f x \<equiv> f y\<close>}{\longmapsto} \cdots
   \]
   whereas applying the same to @{prop "y = x \<Longrightarrow> f x = f y \<Longrightarrow> P (f x) \<Longrightarrow>
   Q"} terminates (without solving the goal):
@@ -482,10 +476,10 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{attribute_def simp} & : & @{text attribute} \\
-    @{attribute_def split} & : & @{text attribute} \\
-    @{attribute_def cong} & : & @{text attribute} \\
-    @{command_def "print_simpset"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
+    @{attribute_def simp} & : & \<open>attribute\<close> \\
+    @{attribute_def split} & : & \<open>attribute\<close> \\
+    @{attribute_def cong} & : & \<open>attribute\<close> \\
+    @{command_def "print_simpset"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -500,55 +494,54 @@
   Rewrite rules are theorems expressing some form of equality, for
   example:
 
-  @{text "Suc ?m + ?n = ?m + Suc ?n"} \\
-  @{text "?P \<and> ?P \<longleftrightarrow> ?P"} \\
-  @{text "?A \<union> ?B \<equiv> {x. x \<in> ?A \<or> x \<in> ?B}"}
+  \<open>Suc ?m + ?n = ?m + Suc ?n\<close> \\
+  \<open>?P \<and> ?P \<longleftrightarrow> ?P\<close> \\
+  \<open>?A \<union> ?B \<equiv> {x. x \<in> ?A \<or> x \<in> ?B}\<close>
 
   \<^medskip>
-  Conditional rewrites such as @{text "?m < ?n \<Longrightarrow> ?m div ?n = 0"} are
+  Conditional rewrites such as \<open>?m < ?n \<Longrightarrow> ?m div ?n = 0\<close> are
   also permitted; the conditions can be arbitrary formulas.
 
   \<^medskip>
   Internally, all rewrite rules are translated into Pure
-  equalities, theorems with conclusion @{text "lhs \<equiv> rhs"}. The
+  equalities, theorems with conclusion \<open>lhs \<equiv> rhs\<close>. The
   simpset contains a function for extracting equalities from arbitrary
   theorems, which is usually installed when the object-logic is
-  configured initially. For example, @{text "\<not> ?x \<in> {}"} could be
-  turned into @{text "?x \<in> {} \<equiv> False"}. Theorems that are declared as
+  configured initially. For example, \<open>\<not> ?x \<in> {}\<close> could be
+  turned into \<open>?x \<in> {} \<equiv> False\<close>. Theorems that are declared as
   @{attribute simp} and local assumptions within a goal are treated
   uniformly in this respect.
 
-  The Simplifier accepts the following formats for the @{text "lhs"}
+  The Simplifier accepts the following formats for the \<open>lhs\<close>
   term:
 
     \<^enum> First-order patterns, considering the sublanguage of
     application of constant operators to variable operands, without
-    @{text "\<lambda>"}-abstractions or functional variables.
+    \<open>\<lambda>\<close>-abstractions or functional variables.
     For example:
 
-    @{text "(?x + ?y) + ?z \<equiv> ?x + (?y + ?z)"} \\
-    @{text "f (f ?x ?y) ?z \<equiv> f ?x (f ?y ?z)"}
+    \<open>(?x + ?y) + ?z \<equiv> ?x + (?y + ?z)\<close> \\
+    \<open>f (f ?x ?y) ?z \<equiv> f ?x (f ?y ?z)\<close>
 
     \<^enum> Higher-order patterns in the sense of @{cite "nipkow-patterns"}.
-    These are terms in @{text "\<beta>"}-normal form (this will always be the
+    These are terms in \<open>\<beta>\<close>-normal form (this will always be the
     case unless you have done something strange) where each occurrence
-    of an unknown is of the form @{text "?F x\<^sub>1 \<dots> x\<^sub>n"}, where the
-    @{text "x\<^sub>i"} are distinct bound variables.
+    of an unknown is of the form \<open>?F x\<^sub>1 \<dots> x\<^sub>n\<close>, where the
+    \<open>x\<^sub>i\<close> are distinct bound variables.
 
-    For example, @{text "(\<forall>x. ?P x \<and> ?Q x) \<equiv> (\<forall>x. ?P x) \<and> (\<forall>x. ?Q x)"}
-    or its symmetric form, since the @{text "rhs"} is also a
+    For example, \<open>(\<forall>x. ?P x \<and> ?Q x) \<equiv> (\<forall>x. ?P x) \<and> (\<forall>x. ?Q x)\<close>
+    or its symmetric form, since the \<open>rhs\<close> is also a
     higher-order pattern.
 
-    \<^enum> Physical first-order patterns over raw @{text "\<lambda>"}-term
-    structure without @{text "\<alpha>\<beta>\<eta>"}-equality; abstractions and bound
+    \<^enum> Physical first-order patterns over raw \<open>\<lambda>\<close>-term
+    structure without \<open>\<alpha>\<beta>\<eta>\<close>-equality; abstractions and bound
     variables are treated like quasi-constant term material.
 
-    For example, the rule @{text "?f ?x \<in> range ?f = True"} rewrites the
-    term @{text "g a \<in> range g"} to @{text "True"}, but will fail to
-    match @{text "g (h b) \<in> range (\<lambda>x. g (h x))"}. However, offending
-    subterms (in our case @{text "?f ?x"}, which is not a pattern) can
-    be replaced by adding new variables and conditions like this: @{text
-    "?y = ?f ?x \<Longrightarrow> ?y \<in> range ?f = True"} is acceptable as a conditional
+    For example, the rule \<open>?f ?x \<in> range ?f = True\<close> rewrites the
+    term \<open>g a \<in> range g\<close> to \<open>True\<close>, but will fail to
+    match \<open>g (h b) \<in> range (\<lambda>x. g (h x))\<close>. However, offending
+    subterms (in our case \<open>?f ?x\<close>, which is not a pattern) can
+    be replaced by adding new variables and conditions like this: \<open>?y = ?f ?x \<Longrightarrow> ?y \<in> range ?f = True\<close> is acceptable as a conditional
     rewrite rule of the second category since conditions can be
     arbitrary terms.
 
@@ -560,15 +553,15 @@
   Congruence rules are equalities of the form @{text [display]
   "\<dots> \<Longrightarrow> f ?x\<^sub>1 \<dots> ?x\<^sub>n = f ?y\<^sub>1 \<dots> ?y\<^sub>n"}
 
-  This controls the simplification of the arguments of @{text f}.  For
+  This controls the simplification of the arguments of \<open>f\<close>.  For
   example, some arguments can be simplified under additional
   assumptions: @{text [display] "?P\<^sub>1 \<longleftrightarrow> ?Q\<^sub>1 \<Longrightarrow> (?Q\<^sub>1 \<Longrightarrow> ?P\<^sub>2 \<longleftrightarrow> ?Q\<^sub>2) \<Longrightarrow>
   (?P\<^sub>1 \<longrightarrow> ?P\<^sub>2) \<longleftrightarrow> (?Q\<^sub>1 \<longrightarrow> ?Q\<^sub>2)"}
 
-  Given this rule, the Simplifier assumes @{text "?Q\<^sub>1"} and extracts
-  rewrite rules from it when simplifying @{text "?P\<^sub>2"}.  Such local
-  assumptions are effective for rewriting formulae such as @{text "x =
-  0 \<longrightarrow> y + x = y"}.
+  Given this rule, the Simplifier assumes \<open>?Q\<^sub>1\<close> and extracts
+  rewrite rules from it when simplifying \<open>?P\<^sub>2\<close>.  Such local
+  assumptions are effective for rewriting formulae such as \<open>x =
+  0 \<longrightarrow> y + x = y\<close>.
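+
+  A sketch of this situation (assuming Isabelle/HOL, where the above
+  congruence is available as \<open>imp_cong\<close>):
+
+  \<open>lemma "x = 0 \<longrightarrow> y + x = (y::nat)" by (simp cong: imp_cong)\<close>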
 
   %FIXME
   %The local assumptions are also provided as theorems to the solver;
@@ -593,11 +586,11 @@
 
   Only the first argument is simplified; the others remain unchanged.
   This can make simplification much faster, but may require an extra
-  case split over the condition @{text "?q"} to prove the goal.
+  case split over the condition \<open>?q\<close> to prove the goal.
 
   \<^descr> @{command "print_simpset"} prints the collection of rules declared
   to the Simplifier, which is also known as ``simpset'' internally; the
-  ``@{text "!"}'' option indicates extra verbosity.
+  ``\<open>!\<close>'' option indicates extra verbosity.
 
   For historical reasons, simpsets may occur independently from the
   current context, but are conceptually dependent on it.  When the
@@ -625,7 +618,7 @@
   by explicitly adding or deleting theorems as simplification rules,
   or installing other tools via simplification procedures
   (\secref{sec:simproc}).  Good simpsets are hard to design.  Rules
-  that obviously simplify, like @{text "?n + 0 \<equiv> ?n"} are good
+  that obviously simplify, like \<open>?n + 0 \<equiv> ?n\<close> are good
   candidates for the implicit simpset, unless a special
   non-normalizing behavior of certain operations is intended.  More
   specific rules (such as distributive laws, which duplicate subterms)
@@ -649,17 +642,17 @@
 
 text \<open>A rewrite rule is \<^emph>\<open>permutative\<close> if the left-hand side and
  right-hand side are equal up to renaming of variables.  The most
-  common permutative rule is commutativity: @{text "?x + ?y = ?y +
-  ?x"}.  Other examples include @{text "(?x - ?y) - ?z = (?x - ?z) -
-  ?y"} in arithmetic and @{text "insert ?x (insert ?y ?A) = insert ?y
-  (insert ?x ?A)"} for sets.  Such rules are common enough to merit
+  common permutative rule is commutativity: \<open>?x + ?y = ?y +
+  ?x\<close>.  Other examples include \<open>(?x - ?y) - ?z = (?x - ?z) -
+  ?y\<close> in arithmetic and \<open>insert ?x (insert ?y ?A) = insert ?y
+  (insert ?x ?A)\<close> for sets.  Such rules are common enough to merit
   special attention.
 
   Because ordinary rewriting loops given such rules, the Simplifier
   employs a special strategy, called \<^emph>\<open>ordered rewriting\<close>.
   Permutative rules are detected and only applied if the rewriting
   step decreases the redex wrt.\ a given term ordering.  For example,
-  commutativity rewrites @{text "b + a"} to @{text "a + b"}, but then
+  commutativity rewrites \<open>b + a\<close> to \<open>a + b\<close>, but then
   stops, because the redex cannot be decreased further in the sense of
   the term ordering.
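+
+  For example (a sketch, assuming Isabelle/HOL's \<open>ac_simps\<close> collection
+  of permutative rules):
+
+  \<open>lemma "d + c + b + a = a + b + c + (d::nat)" by (simp add: ac_simps)\<close>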
 
@@ -678,17 +671,17 @@
 
 text \<open>Ordered rewriting is particularly effective in the case of
   associative-commutative operators.  (Associativity by itself is not
-  permutative.)  When dealing with an AC-operator @{text "f"}, keep
+  permutative.)  When dealing with an AC-operator \<open>f\<close>, keep
   the following points in mind:
 
   \<^item> The associative law must always be oriented from left to
-  right, namely @{text "f (f x y) z = f x (f y z)"}.  The opposite
+  right, namely \<open>f (f x y) z = f x (f y z)\<close>.  The opposite
   orientation, if used with commutativity, leads to looping in
   conjunction with the standard term order.
 
   \<^item> To complete your set of rewrite rules, you must add not just
   associativity (A) and commutativity (C) but also a derived rule
-  \<^emph>\<open>left-commutativity\<close> (LC): @{text "f x (f y z) = f y (f x z)"}.
+  \<^emph>\<open>left-commutativity\<close> (LC): \<open>f x (f y z) = f y (f x z)\<close>.
 
 
   Ordered rewriting with the combination of A, C, and LC sorts a term
@@ -746,11 +739,11 @@
 
 text \<open>
   \begin{tabular}{rcll}
-    @{attribute_def simp_trace} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def simp_trace_depth_limit} & : & @{text attribute} & default @{text 1} \\
-    @{attribute_def simp_debug} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def simp_trace_new} & : & @{text attribute} \\
-    @{attribute_def simp_break} & : & @{text attribute} \\
+    @{attribute_def simp_trace} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def simp_trace_depth_limit} & : & \<open>attribute\<close> & default \<open>1\<close> \\
+    @{attribute_def simp_debug} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def simp_trace_new} & : & \<open>attribute\<close> \\
+    @{attribute_def simp_break} & : & \<open>attribute\<close> \\
   \end{tabular}
   \<^medskip>
 
@@ -816,14 +809,14 @@
   rules.
 
   Any successful result needs to be a (possibly conditional) rewrite
-  rule @{text "t \<equiv> u"} that is applicable to the current redex.  The
+  rule \<open>t \<equiv> u\<close> that is applicable to the current redex.  The
   rule will be applied just as any ordinary rewrite rule.  It is
   expected to be already in \<^emph>\<open>internal form\<close>, bypassing the
   automatic preprocessing of object-level equivalences.
 
   \begin{matharray}{rcl}
-    @{command_def "simproc_setup"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    simproc & : & @{text attribute} \\
+    @{command_def "simproc_setup"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    simproc & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -839,8 +832,8 @@
   given term patterns match the current redex.  The implementation,
   which is provided as ML source text, needs to be of type @{ML_type
   "morphism -> simpset -> cterm -> thm option"}, where the @{ML_type
-  cterm} represents the current redex @{text r} and the result is
-  supposed to be some proven rewrite rule @{text "r \<equiv> r'"} (or a
+  cterm} represents the current redex \<open>r\<close> and the result is
+  supposed to be some proven rewrite rule \<open>r \<equiv> r'\<close> (or a
   generalized version), or @{ML NONE} to indicate failure.  The
   @{ML_type simpset} argument holds the full context of the current
   Simplifier invocation, including the actual Isar proof context.  The
@@ -853,7 +846,7 @@
   Morphisms and identifiers are only relevant for simprocs that are
   defined within a local target context, e.g.\ in a locale.
 
-  \<^descr> @{text "simproc add: name"} and @{text "simproc del: name"}
+  \<^descr> \<open>simproc add: name\<close> and \<open>simproc del: name\<close>
   add or delete named simprocs to the current Simplifier context.  The
   default is to add a simproc.  Note that @{command "simproc_setup"}
   already adds the new simproc to the subsequent context.
@@ -905,14 +898,14 @@
   conditional rewrite rules or congruence rules.  The default should
   be simplification itself.  In rare situations, this strategy may
   need to be changed.  For example, if the premise of a conditional
-  rule is an instance of its conclusion, as in @{text "Suc ?m < ?n \<Longrightarrow>
-  ?m < ?n"}, the default strategy could loop.  % FIXME !??
+  rule is an instance of its conclusion, as in \<open>Suc ?m < ?n \<Longrightarrow>
+  ?m < ?n\<close>, the default strategy could loop.  % FIXME !??
 
-  \<^descr> @{ML Simplifier.set_subgoaler}~@{text "tac ctxt"} sets the
-  subgoaler of the context to @{text "tac"}.  The tactic will
+  \<^descr> @{ML Simplifier.set_subgoaler}~\<open>tac ctxt\<close> sets the
+  subgoaler of the context to \<open>tac\<close>.  The tactic will
   be applied to the context of the running Simplifier instance.
 
-  \<^descr> @{ML Simplifier.prems_of}~@{text "ctxt"} retrieves the current
+  \<^descr> @{ML Simplifier.prems_of}~\<open>ctxt\<close> retrieves the current
   set of premises from the context.  This may be non-empty only if
   the Simplifier has been told to utilize local assumptions in the
   first place (cf.\ the options in \secref{sec:simp-meth}).
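+
+  A minimal sketch of using @{ML Simplifier.set_subgoaler}, re-installing
+  behaviour close to the default, purely for illustration:
+
+  \<open>ML \<open>fun simp_subgoaler ctxt = Simplifier.set_subgoaler asm_simp_tac ctxt\<close>\<close>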
@@ -948,7 +941,7 @@
 
   A solver is a tactic that attempts to solve a subgoal after
   simplification.  Its core functionality is to prove trivial subgoals
-  such as @{prop "True"} and @{text "t = t"}, but object-logics might
+  such as @{prop "True"} and \<open>t = t\<close>, but object-logics might
   be more ambitious.  For example, Isabelle/HOL performs a restricted
   version of linear arithmetic here.
 
@@ -957,8 +950,8 @@
 
   \<^medskip>
   Rewriting does not instantiate unknowns.  For example,
-  rewriting alone cannot prove @{text "a \<in> ?A"} since this requires
-  instantiating @{text "?A"}.  The solver, however, is an arbitrary
+  rewriting alone cannot prove \<open>a \<in> ?A\<close> since this requires
+  instantiating \<open>?A\<close>.  The solver, however, is an arbitrary
   tactic and may instantiate unknowns as it pleases.  This is the only
   way the Simplifier can handle a conditional rewrite rule whose
   condition contains extra variables.  When a simplification tactic is
@@ -975,23 +968,22 @@
   tactic is not totally safe: it may instantiate unknowns that appear
   also in other subgoals.
 
-  \<^descr> @{ML Simplifier.mk_solver}~@{text "name tac"} turns @{text
-  "tac"} into a solver; the @{text "name"} is only attached as a
+  \<^descr> @{ML Simplifier.mk_solver}~\<open>name tac\<close> turns \<open>tac\<close> into a solver; the \<open>name\<close> is only attached as a
   comment and has no further significance.
 
-  \<^descr> @{text "ctxt setSSolver solver"} installs @{text "solver"} as
-  the safe solver of @{text "ctxt"}.
+  \<^descr> \<open>ctxt setSSolver solver\<close> installs \<open>solver\<close> as
+  the safe solver of \<open>ctxt\<close>.
 
-  \<^descr> @{text "ctxt addSSolver solver"} adds @{text "solver"} as an
+  \<^descr> \<open>ctxt addSSolver solver\<close> adds \<open>solver\<close> as an
   additional safe solver; it will be tried after the solvers which had
-  already been present in @{text "ctxt"}.
+  already been present in \<open>ctxt\<close>.
 
-  \<^descr> @{text "ctxt setSolver solver"} installs @{text "solver"} as the
-  unsafe solver of @{text "ctxt"}.
+  \<^descr> \<open>ctxt setSolver solver\<close> installs \<open>solver\<close> as the
+  unsafe solver of \<open>ctxt\<close>.
 
-  \<^descr> @{text "ctxt addSolver solver"} adds @{text "solver"} as an
+  \<^descr> \<open>ctxt addSolver solver\<close> adds \<open>solver\<close> as an
   additional unsafe solver; it will be tried after the solvers which
-  had already been present in @{text "ctxt"}.
+  had already been present in \<open>ctxt\<close>.
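+
+  A minimal sketch of a solver that merely attempts reflexivity,
+  assuming Isabelle/HOL's \<open>refl\<close> (the name is only a comment):
+
+  \<open>ML \<open>
+    val refl_solver =
+      Simplifier.mk_solver "refl only" (fn ctxt => resolve_tac ctxt [@{thm refl}]);
+    fun install ctxt = ctxt addSolver refl_solver;
+  \<close>\<close>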
 
 
   \<^medskip>
@@ -1009,18 +1001,18 @@
   \<^medskip>
   As explained before, the subgoaler is also used to solve
   the premises of congruence rules.  These are usually of the form
-  @{text "s = ?x"}, where @{text "s"} needs to be simplified and
-  @{text "?x"} needs to be instantiated with the result.  Typically,
+  \<open>s = ?x\<close>, where \<open>s\<close> needs to be simplified and
+  \<open>?x\<close> needs to be instantiated with the result.  Typically,
   the subgoaler will invoke the Simplifier at some point, which will
   eventually call the solver.  For this reason, solver tactics must be
-  prepared to solve goals of the form @{text "t = ?x"}, usually by
+  prepared to solve goals of the form \<open>t = ?x\<close>, usually by
   reflexivity.  In particular, reflexivity should be tried before any
   of the fancy automated proof tools.
 
   It may even happen that due to simplification the subgoal is no
-  longer an equality.  For example, @{text "False \<longleftrightarrow> ?Q"} could be
-  rewritten to @{text "\<not> ?Q"}.  To cover this case, the solver could
-  try resolving with the theorem @{text "\<not> False"} of the
+  longer an equality.  For example, \<open>False \<longleftrightarrow> ?Q\<close> could be
+  rewritten to \<open>\<not> ?Q\<close>.  To cover this case, the solver could
+  try resolving with the theorem \<open>\<not> False\<close> of the
   object-logic.
 
   \<^medskip>
@@ -1028,7 +1020,7 @@
   If a premise of a congruence rule cannot be proved, then the
   congruence is ignored.  This should only happen if the rule is
   \<^emph>\<open>conditional\<close> --- that is, contains premises not of the form
-  @{text "t = ?x"}.  Otherwise it indicates that some congruence rule,
+  \<open>t = ?x\<close>.  Otherwise it indicates that some congruence rule,
   or possibly the subgoaler or solver, is faulty.
   \end{warn}
 \<close>
@@ -1058,24 +1050,24 @@
   conditional.  Another possibility is to apply an elimination rule on
   the assumptions.  More adventurous loopers could start an induction.
 
-  \<^descr> @{text "ctxt setloop tac"} installs @{text "tac"} as the only
-  looper tactic of @{text "ctxt"}.
+  \<^descr> \<open>ctxt setloop tac\<close> installs \<open>tac\<close> as the only
+  looper tactic of \<open>ctxt\<close>.
 
-  \<^descr> @{text "ctxt addloop (name, tac)"} adds @{text "tac"} as an
-  additional looper tactic with name @{text "name"}, which is
+  \<^descr> \<open>ctxt addloop (name, tac)\<close> adds \<open>tac\<close> as an
+  additional looper tactic with name \<open>name\<close>, which is
   significant for managing the collection of loopers.  The tactic will
   be tried after the looper tactics that had already been present in
-  @{text "ctxt"}.
+  \<open>ctxt\<close>.
 
-  \<^descr> @{text "ctxt delloop name"} deletes the looper tactic that was
-  associated with @{text "name"} from @{text "ctxt"}.
+  \<^descr> \<open>ctxt delloop name\<close> deletes the looper tactic that was
+  associated with \<open>name\<close> from \<open>ctxt\<close>.
 
-  \<^descr> @{ML Splitter.add_split}~@{text "thm ctxt"} adds split tactics
-  for @{text "thm"} as additional looper tactics of @{text "ctxt"}.
+  \<^descr> @{ML Splitter.add_split}~\<open>thm ctxt\<close> adds split tactics
+  for \<open>thm\<close> as additional looper tactics of \<open>ctxt\<close>.
 
-  \<^descr> @{ML Splitter.del_split}~@{text "thm ctxt"} deletes the split
-  tactic corresponding to @{text thm} from the looper tactics of
-  @{text "ctxt"}.
+  \<^descr> @{ML Splitter.del_split}~\<open>thm ctxt\<close> deletes the split
+  tactic corresponding to \<open>thm\<close> from the looper tactics of
+  \<open>ctxt\<close>.
 
 
   The splitter replaces applications of a given function; the
@@ -1098,7 +1090,7 @@
   option.split_asm}, which split the subgoal.  The function @{ML
   Splitter.add_split} automatically takes care of which tactic to
   call, analyzing the form of the rules given as argument; it is the
-  same operation behind @{text "split"} attribute or method modifier
+  same operation behind the \<open>split\<close> attribute or method modifier
   syntax in the Isar source language.
 
   Case splits should be allowed only when necessary; they are
@@ -1119,7 +1111,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{attribute_def simplified} & : & @{text attribute} \\
+    @{attribute_def simplified} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1129,12 +1121,12 @@
     opt: '(' ('no_asm' | 'no_asm_simp' | 'no_asm_use') ')'
   \<close>}
 
-  \<^descr> @{attribute simplified}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} causes a theorem to
-  be simplified, either by exactly the specified rules @{text "a\<^sub>1, \<dots>,
-  a\<^sub>n"}, or the implicit Simplifier context if no arguments are given.
+  \<^descr> @{attribute simplified}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> causes a theorem to
+  be simplified, either by exactly the specified rules \<open>a\<^sub>1, \<dots>,
+  a\<^sub>n\<close>, or the implicit Simplifier context if no arguments are given.
   The result is fully simplified by default, including assumptions and
-  conclusion; the options @{text no_asm} etc.\ tune the Simplifier in
-  the same way as the for the @{text simp} method.
+  conclusion; the options \<open>no_asm\<close> etc.\ tune the Simplifier in
+  the same way as for the \<open>simp\<close> method.
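+
+  For example (a sketch, assuming Isabelle/HOL): given a fact
+  \<open>f: "x + 0 = y"\<close> on type \<open>nat\<close>, both \<open>f [simplified]\<close> and
+  \<open>f [simplified add_0_right]\<close> yield \<open>x = y\<close>, the latter using exactly
+  the one given rule.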
 
   Note that forward simplification restricts the Simplifier to its
   most basic operation of term rewriting; solver and looper tactics
@@ -1186,46 +1178,46 @@
   The \<^emph>\<open>sequent calculus\<close>, a generalization of natural deduction,
   is easier to automate.
 
-  A \<^bold>\<open>sequent\<close> has the form @{text "\<Gamma> \<turnstile> \<Delta>"}, where @{text "\<Gamma>"}
-  and @{text "\<Delta>"} are sets of formulae.\footnote{For first-order
+  A \<^bold>\<open>sequent\<close> has the form \<open>\<Gamma> \<turnstile> \<Delta>\<close>, where \<open>\<Gamma>\<close>
+  and \<open>\<Delta>\<close> are sets of formulae.\footnote{For first-order
   logic, sequents can equivalently be made from lists or multisets of
-  formulae.} The sequent @{text "P\<^sub>1, \<dots>, P\<^sub>m \<turnstile> Q\<^sub>1, \<dots>, Q\<^sub>n"} is
-  \<^bold>\<open>valid\<close> if @{text "P\<^sub>1 \<and> \<dots> \<and> P\<^sub>m"} implies @{text "Q\<^sub>1 \<or> \<dots> \<or>
-  Q\<^sub>n"}.  Thus @{text "P\<^sub>1, \<dots>, P\<^sub>m"} represent assumptions, each of which
-  is true, while @{text "Q\<^sub>1, \<dots>, Q\<^sub>n"} represent alternative goals.  A
+  formulae.} The sequent \<open>P\<^sub>1, \<dots>, P\<^sub>m \<turnstile> Q\<^sub>1, \<dots>, Q\<^sub>n\<close> is
+  \<^bold>\<open>valid\<close> if \<open>P\<^sub>1 \<and> \<dots> \<and> P\<^sub>m\<close> implies \<open>Q\<^sub>1 \<or> \<dots> \<or>
+  Q\<^sub>n\<close>.  Thus \<open>P\<^sub>1, \<dots>, P\<^sub>m\<close> represent assumptions, each of which
+  is true, while \<open>Q\<^sub>1, \<dots>, Q\<^sub>n\<close> represent alternative goals.  A
   sequent is \<^bold>\<open>basic\<close> if its left and right sides have a common
-  formula, as in @{text "P, Q \<turnstile> Q, R"}; basic sequents are trivially
+  formula, as in \<open>P, Q \<turnstile> Q, R\<close>; basic sequents are trivially
   valid.
 
   Sequent rules are classified as \<^bold>\<open>right\<close> or \<^bold>\<open>left\<close>,
-  indicating which side of the @{text "\<turnstile>"} symbol they operate on.
+  indicating which side of the \<open>\<turnstile>\<close> symbol they operate on.
   Rules that operate on the right side are analogous to natural
   deduction's introduction rules, and left rules are analogous to
-  elimination rules.  The sequent calculus analogue of @{text "(\<longrightarrow>I)"}
+  elimination rules.  The sequent calculus analogue of \<open>(\<longrightarrow>I)\<close>
   is the rule
   \[
-  \infer[@{text "(\<longrightarrow>R)"}]{@{text "\<Gamma> \<turnstile> \<Delta>, P \<longrightarrow> Q"}}{@{text "P, \<Gamma> \<turnstile> \<Delta>, Q"}}
+  \infer[\<open>(\<longrightarrow>R)\<close>]{\<open>\<Gamma> \<turnstile> \<Delta>, P \<longrightarrow> Q\<close>}{\<open>P, \<Gamma> \<turnstile> \<Delta>, Q\<close>}
   \]
   Applying the rule backwards, this breaks down some implication on
-  the right side of a sequent; @{text "\<Gamma>"} and @{text "\<Delta>"} stand for
+  the right side of a sequent; \<open>\<Gamma>\<close> and \<open>\<Delta>\<close> stand for
   the sets of formulae that are unaffected by the inference.  The
-  analogue of the pair @{text "(\<or>I1)"} and @{text "(\<or>I2)"} is the
+  analogue of the pair \<open>(\<or>I1)\<close> and \<open>(\<or>I2)\<close> is the
   single rule
   \[
-  \infer[@{text "(\<or>R)"}]{@{text "\<Gamma> \<turnstile> \<Delta>, P \<or> Q"}}{@{text "\<Gamma> \<turnstile> \<Delta>, P, Q"}}
+  \infer[\<open>(\<or>R)\<close>]{\<open>\<Gamma> \<turnstile> \<Delta>, P \<or> Q\<close>}{\<open>\<Gamma> \<turnstile> \<Delta>, P, Q\<close>}
   \]
   This breaks down some disjunction on the right side, replacing it by
   both disjuncts.  Thus, the sequent calculus is a kind of
   multiple-conclusion logic.
 
   To illustrate the use of multiple formulae on the right, let us
-  prove the classical theorem @{text "(P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)"}.  Working
+  prove the classical theorem \<open>(P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)\<close>.  Working
   backwards, we reduce this formula to a basic sequent:
   \[
-  \infer[@{text "(\<or>R)"}]{@{text "\<turnstile> (P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)"}}
-    {\infer[@{text "(\<longrightarrow>R)"}]{@{text "\<turnstile> (P \<longrightarrow> Q), (Q \<longrightarrow> P)"}}
-      {\infer[@{text "(\<longrightarrow>R)"}]{@{text "P \<turnstile> Q, (Q \<longrightarrow> P)"}}
-        {@{text "P, Q \<turnstile> Q, P"}}}}
+  \infer[\<open>(\<or>R)\<close>]{\<open>\<turnstile> (P \<longrightarrow> Q) \<or> (Q \<longrightarrow> P)\<close>}
+    {\infer[\<open>(\<longrightarrow>R)\<close>]{\<open>\<turnstile> (P \<longrightarrow> Q), (Q \<longrightarrow> P)\<close>}
+      {\infer[\<open>(\<longrightarrow>R)\<close>]{\<open>P \<turnstile> Q, (Q \<longrightarrow> P)\<close>}
+        {\<open>P, Q \<turnstile> Q, P\<close>}}}
   \]
 
   This example is typical of the sequent calculus: start with the
@@ -1241,36 +1233,35 @@
 text \<open>Isabelle can represent sequents directly, as in the
   object-logic LK.  But natural deduction is easier to work with, and
   most object-logics employ it.  Fortunately, we can simulate the
-  sequent @{text "P\<^sub>1, \<dots>, P\<^sub>m \<turnstile> Q\<^sub>1, \<dots>, Q\<^sub>n"} by the Isabelle formula
-  @{text "P\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> P\<^sub>m \<Longrightarrow> \<not> Q\<^sub>2 \<Longrightarrow> ... \<Longrightarrow> \<not> Q\<^sub>n \<Longrightarrow> Q\<^sub>1"} where the order of
-  the assumptions and the choice of @{text "Q\<^sub>1"} are arbitrary.
+  sequent \<open>P\<^sub>1, \<dots>, P\<^sub>m \<turnstile> Q\<^sub>1, \<dots>, Q\<^sub>n\<close> by the Isabelle formula
+  \<open>P\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> P\<^sub>m \<Longrightarrow> \<not> Q\<^sub>2 \<Longrightarrow> ... \<Longrightarrow> \<not> Q\<^sub>n \<Longrightarrow> Q\<^sub>1\<close> where the order of
+  the assumptions and the choice of \<open>Q\<^sub>1\<close> are arbitrary.
   Elim-resolution plays a key role in simulating sequent proofs.
 
   We can easily handle reasoning on the left.  Elim-resolution with
-  the rules @{text "(\<or>E)"}, @{text "(\<bottom>E)"} and @{text "(\<exists>E)"} achieves
+  the rules \<open>(\<or>E)\<close>, \<open>(\<bottom>E)\<close> and \<open>(\<exists>E)\<close> achieves
   a similar effect as the corresponding sequent rules.  For the other
   connectives, we use sequent-style elimination rules instead of
-  destruction rules such as @{text "(\<and>E1, 2)"} and @{text "(\<forall>E)"}.
-  But note that the rule @{text "(\<not>L)"} has no effect under our
+  destruction rules such as \<open>(\<and>E1, 2)\<close> and \<open>(\<forall>E)\<close>.
+  But note that the rule \<open>(\<not>L)\<close> has no effect under our
   representation of sequents!
   \[
-  \infer[@{text "(\<not>L)"}]{@{text "\<not> P, \<Gamma> \<turnstile> \<Delta>"}}{@{text "\<Gamma> \<turnstile> \<Delta>, P"}}
+  \infer[\<open>(\<not>L)\<close>]{\<open>\<not> P, \<Gamma> \<turnstile> \<Delta>\<close>}{\<open>\<Gamma> \<turnstile> \<Delta>, P\<close>}
   \]
 
   What about reasoning on the right?  Introduction rules can only
-  affect the formula in the conclusion, namely @{text "Q\<^sub>1"}.  The
+  affect the formula in the conclusion, namely \<open>Q\<^sub>1\<close>.  The
   other right-side formulae are represented as negated assumptions,
-  @{text "\<not> Q\<^sub>2, \<dots>, \<not> Q\<^sub>n"}.  In order to operate on one of these, it
-  must first be exchanged with @{text "Q\<^sub>1"}.  Elim-resolution with the
-  @{text swap} rule has this effect: @{text "\<not> P \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> R"}
+  \<open>\<not> Q\<^sub>2, \<dots>, \<not> Q\<^sub>n\<close>.  In order to operate on one of these, it
+  must first be exchanged with \<open>Q\<^sub>1\<close>.  Elim-resolution with the
+  \<open>swap\<close> rule has this effect: \<open>\<not> P \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> R\<close>
 
   To ensure that swaps occur only when necessary, each introduction
   rule is converted into a swapped form: it is resolved with the
-  second premise of @{text "(swap)"}.  The swapped form of @{text
-  "(\<and>I)"}, which might be called @{text "(\<not>\<and>E)"}, is
+  second premise of \<open>(swap)\<close>.  The swapped form of \<open>(\<and>I)\<close>, which might be called \<open>(\<not>\<and>E)\<close>, is
   @{text [display] "\<not> (P \<and> Q) \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> (\<not> R \<Longrightarrow> Q) \<Longrightarrow> R"}
 
-  Similarly, the swapped form of @{text "(\<longrightarrow>I)"} is
+  Similarly, the swapped form of \<open>(\<longrightarrow>I)\<close> is
   @{text [display] "\<not> (P \<longrightarrow> Q) \<Longrightarrow> (\<not> R \<Longrightarrow> P \<Longrightarrow> Q) \<Longrightarrow> R"}
 
   Swapped introduction rules are applied using elim-resolution, which
@@ -1284,45 +1275,45 @@
 
 subsubsection \<open>Extra rules for the sequent calculus\<close>
 
-text \<open>As mentioned, destruction rules such as @{text "(\<and>E1, 2)"} and
-  @{text "(\<forall>E)"} must be replaced by sequent-style elimination rules.
+text \<open>As mentioned, destruction rules such as \<open>(\<and>E1, 2)\<close> and
+  \<open>(\<forall>E)\<close> must be replaced by sequent-style elimination rules.
   In addition, we need rules to embody the classical equivalence
-  between @{text "P \<longrightarrow> Q"} and @{text "\<not> P \<or> Q"}.  The introduction
-  rules @{text "(\<or>I1, 2)"} are replaced by a rule that simulates
-  @{text "(\<or>R)"}: @{text [display] "(\<not> Q \<Longrightarrow> P) \<Longrightarrow> P \<or> Q"}
+  between \<open>P \<longrightarrow> Q\<close> and \<open>\<not> P \<or> Q\<close>.  The introduction
+  rules \<open>(\<or>I1, 2)\<close> are replaced by a rule that simulates
+  \<open>(\<or>R)\<close>: @{text [display] "(\<not> Q \<Longrightarrow> P) \<Longrightarrow> P \<or> Q"}
 
-  The destruction rule @{text "(\<longrightarrow>E)"} is replaced by @{text [display]
+  The destruction rule \<open>(\<longrightarrow>E)\<close> is replaced by @{text [display]
   "(P \<longrightarrow> Q) \<Longrightarrow> (\<not> P \<Longrightarrow> R) \<Longrightarrow> (Q \<Longrightarrow> R) \<Longrightarrow> R"}
 
   Quantifier replication also requires special rules.  In classical
-  logic, @{text "\<exists>x. P x"} is equivalent to @{text "\<not> (\<forall>x. \<not> P x)"};
-  the rules @{text "(\<exists>R)"} and @{text "(\<forall>L)"} are dual:
+  logic, \<open>\<exists>x. P x\<close> is equivalent to \<open>\<not> (\<forall>x. \<not> P x)\<close>;
+  the rules \<open>(\<exists>R)\<close> and \<open>(\<forall>L)\<close> are dual:
   \[
-  \infer[@{text "(\<exists>R)"}]{@{text "\<Gamma> \<turnstile> \<Delta>, \<exists>x. P x"}}{@{text "\<Gamma> \<turnstile> \<Delta>, \<exists>x. P x, P t"}}
+  \infer[\<open>(\<exists>R)\<close>]{\<open>\<Gamma> \<turnstile> \<Delta>, \<exists>x. P x\<close>}{\<open>\<Gamma> \<turnstile> \<Delta>, \<exists>x. P x, P t\<close>}
   \qquad
-  \infer[@{text "(\<forall>L)"}]{@{text "\<forall>x. P x, \<Gamma> \<turnstile> \<Delta>"}}{@{text "P t, \<forall>x. P x, \<Gamma> \<turnstile> \<Delta>"}}
+  \infer[\<open>(\<forall>L)\<close>]{\<open>\<forall>x. P x, \<Gamma> \<turnstile> \<Delta>\<close>}{\<open>P t, \<forall>x. P x, \<Gamma> \<turnstile> \<Delta>\<close>}
   \]
   Thus both kinds of quantifier may be replicated.  Theorems requiring
   multiple uses of a universal formula are easy to invent; consider
   @{text [display] "(\<forall>x. P x \<longrightarrow> P (f x)) \<and> P a \<longrightarrow> P (f\<^sup>n a)"} for any
-  @{text "n > 1"}.  Natural examples of the multiple use of an
-  existential formula are rare; a standard one is @{text "\<exists>x. \<forall>y. P x
-  \<longrightarrow> P y"}.
+  \<open>n > 1\<close>.  Natural examples of the multiple use of an
+  existential formula are rare; a standard one is \<open>\<exists>x. \<forall>y. P x
+  \<longrightarrow> P y\<close>.
 
   Forgoing quantifier replication loses completeness, but gains
   decidability, since the search space becomes finite.  Many useful
   theorems can be proved without replication, and the search generally
   delivers its verdict in a reasonable time.  To adopt this approach,
-  represent the sequent rules @{text "(\<exists>R)"}, @{text "(\<exists>L)"} and
-  @{text "(\<forall>R)"} by @{text "(\<exists>I)"}, @{text "(\<exists>E)"} and @{text "(\<forall>I)"},
-  respectively, and put @{text "(\<forall>E)"} into elimination form: @{text
+  represent the sequent rules \<open>(\<exists>R)\<close>, \<open>(\<exists>L)\<close> and
+  \<open>(\<forall>R)\<close> by \<open>(\<exists>I)\<close>, \<open>(\<exists>E)\<close> and \<open>(\<forall>I)\<close>,
+  respectively, and put \<open>(\<forall>E)\<close> into elimination form: @{text
   [display] "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> Q) \<Longrightarrow> Q"}
 
   Elim-resolution with this rule will delete the universal formula
   after a single use.  To replicate universal quantifiers, replace the
   rule by @{text [display] "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> \<forall>x. P x \<Longrightarrow> Q) \<Longrightarrow> Q"}
 
-  To replicate existential quantifiers, replace @{text "(\<exists>I)"} by
+  To replicate existential quantifiers, replace \<open>(\<exists>I)\<close> by
   @{text [display] "(\<not> (\<exists>x. P x) \<Longrightarrow> P t) \<Longrightarrow> \<exists>x. P x"}
 
   All introduction rules mentioned above are also useful in swapped
@@ -1346,22 +1337,21 @@
   while unsafe rules must be used with care.  A safe rule must never
   reduce a provable goal to an unprovable set of subgoals.
 
-  The rule @{text "P \<Longrightarrow> P \<or> Q"} is unsafe because it reduces @{text "P
-  \<or> Q"} to @{text "P"}, which might turn out as premature choice of an
+  The rule \<open>P \<Longrightarrow> P \<or> Q\<close> is unsafe because it reduces \<open>P
+  \<or> Q\<close> to \<open>P\<close>, which might turn out to be a premature choice of an
   unprovable subgoal.  Any rule is unsafe whose premises contain new
-  unknowns.  The elimination rule @{text "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> Q) \<Longrightarrow> Q"} is
+  unknowns.  The elimination rule \<open>\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> Q) \<Longrightarrow> Q\<close> is
   unsafe, since it is applied via elim-resolution, which discards the
-  assumption @{text "\<forall>x. P x"} and replaces it by the weaker
-  assumption @{text "P t"}.  The rule @{text "P t \<Longrightarrow> \<exists>x. P x"} is
-  unsafe for similar reasons.  The quantifier duplication rule @{text
-  "\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> \<forall>x. P x \<Longrightarrow> Q) \<Longrightarrow> Q"} is unsafe in a different sense:
-  since it keeps the assumption @{text "\<forall>x. P x"}, it is prone to
+  assumption \<open>\<forall>x. P x\<close> and replaces it by the weaker
+  assumption \<open>P t\<close>.  The rule \<open>P t \<Longrightarrow> \<exists>x. P x\<close> is
+  unsafe for similar reasons.  The quantifier duplication rule \<open>\<forall>x. P x \<Longrightarrow> (P t \<Longrightarrow> \<forall>x. P x \<Longrightarrow> Q) \<Longrightarrow> Q\<close> is unsafe in a different sense:
+  since it keeps the assumption \<open>\<forall>x. P x\<close>, it is prone to
   looping.  In classical first-order logic, all rules are safe except
   those mentioned above.
 
   The safe~/ unsafe distinction is vague, and may be regarded merely
   as a way of giving some rules priority over others.  One could argue
-  that @{text "(\<or>E)"} is unsafe, because repeated application of it
+  that \<open>(\<or>E)\<close> is unsafe, because repeated application of it
   could generate exponentially many subgoals.  Induction rules are
   unsafe because inductive proofs are difficult to set up
   automatically.  Any inference is unsafe that instantiates an unknown
@@ -1370,13 +1360,13 @@
   unknowns shared with other subgoals.
 
   \begin{matharray}{rcl}
-    @{command_def "print_claset"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{attribute_def intro} & : & @{text attribute} \\
-    @{attribute_def elim} & : & @{text attribute} \\
-    @{attribute_def dest} & : & @{text attribute} \\
-    @{attribute_def rule} & : & @{text attribute} \\
-    @{attribute_def iff} & : & @{text attribute} \\
-    @{attribute_def swapped} & : & @{text attribute} \\
+    @{command_def "print_claset"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{attribute_def intro} & : & \<open>attribute\<close> \\
+    @{attribute_def elim} & : & \<open>attribute\<close> \\
+    @{attribute_def dest} & : & \<open>attribute\<close> \\
+    @{attribute_def rule} & : & \<open>attribute\<close> \\
+    @{attribute_def iff} & : & \<open>attribute\<close> \\
+    @{attribute_def swapped} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1394,9 +1384,8 @@
   \<^descr> @{attribute intro}, @{attribute elim}, and @{attribute dest}
   declare introduction, elimination, and destruction rules,
   respectively.  By default, rules are considered as \<^emph>\<open>unsafe\<close>
-  (i.e.\ not applied blindly without backtracking), while ``@{text
-  "!"}'' classifies as \<^emph>\<open>safe\<close>.  Rule declarations marked by
-  ``@{text "?"}'' coincide with those of Isabelle/Pure, cf.\
+  (i.e.\ not applied blindly without backtracking), while ``\<open>!\<close>'' classifies as \<^emph>\<open>safe\<close>.  Rule declarations marked by
+  ``\<open>?\<close>'' coincide with those of Isabelle/Pure, cf.\
   \secref{sec:pure-meth-att} (i.e.\ are only applied in single steps
   of the @{method rule} method).  The optional natural number
   specifies an explicit weight argument, which is ignored by the
@@ -1416,7 +1405,7 @@
   added with some other classification, but the rule is added anyway
   as requested.
 
-  \<^descr> @{attribute rule}~@{text del} deletes all occurrences of a
+  \<^descr> @{attribute rule}~\<open>del\<close> deletes all occurrences of a
   rule from the classical context, regardless of its classification as
   introduction~/ elimination~/ destruction and safe~/ unsafe.
 
@@ -1424,16 +1413,15 @@
   Simplifier and the Classical reasoner at the same time.
   Non-conditional rules result in a safe introduction and elimination
   pair; conditional ones are considered unsafe.  Rules with negative
-  conclusion are automatically inverted (using @{text "\<not>"}-elimination
+  conclusion are automatically inverted (using \<open>\<not>\<close>-elimination
   internally).
 
-  The ``@{text "?"}'' version of @{attribute iff} declares rules to
+  The ``\<open>?\<close>'' version of @{attribute iff} declares rules to
   the Isabelle/Pure context only, and omits the Simplifier
   declaration.
 
   \<^descr> @{attribute swapped} turns an introduction rule into an
-  elimination, by resolving with the classical swap principle @{text
-  "\<not> P \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> R"} in the second position.  This is mainly for
+  elimination, by resolving with the classical swap principle \<open>\<not> P \<Longrightarrow> (\<not> R \<Longrightarrow> P) \<Longrightarrow> R\<close> in the second position.  This is mainly for
   illustrative purposes: the Classical Reasoner already swaps rules
   internally as explained above.
 \<close>
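 
 text \<open>
   For example, hypothetical declarations in this style could look as
   follows; in Isabelle/HOL both are in fact already part of the default
   setup:
 \<close>
 
 declare conjI [intro!]
 declare allE [elim]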
@@ -1443,8 +1431,8 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def rule} & : & @{text method} \\
-    @{method_def contradiction} & : & @{text method} \\
+    @{method_def rule} & : & \<open>method\<close> \\
+    @{method_def contradiction} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1462,7 +1450,7 @@
   Isabelle/Pure (\secref{sec:pure-meth-att}).
 
   \<^descr> @{method contradiction} solves some goal by contradiction,
-  deriving any result from both @{text "\<not> A"} and @{text A}.  Chained
+  deriving any result from both \<open>\<not> A\<close> and \<open>A\<close>.  Chained
   facts, which are guaranteed to participate, may appear in either
   order.
 \<close>
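 
 text \<open>
   A minimal sketch of @{method contradiction}; the chained facts provide
   \<open>\<not> A\<close> and \<open>A\<close>, in either order:
 \<close>
 
 lemma assumes "\<not> A" and "A" shows B
   using assms by contradiction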
@@ -1472,16 +1460,16 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def blast} & : & @{text method} \\
-    @{method_def auto} & : & @{text method} \\
-    @{method_def force} & : & @{text method} \\
-    @{method_def fast} & : & @{text method} \\
-    @{method_def slow} & : & @{text method} \\
-    @{method_def best} & : & @{text method} \\
-    @{method_def fastforce} & : & @{text method} \\
-    @{method_def slowsimp} & : & @{text method} \\
-    @{method_def bestsimp} & : & @{text method} \\
-    @{method_def deepen} & : & @{text method} \\
+    @{method_def blast} & : & \<open>method\<close> \\
+    @{method_def auto} & : & \<open>method\<close> \\
+    @{method_def force} & : & \<open>method\<close> \\
+    @{method_def fast} & : & \<open>method\<close> \\
+    @{method_def slow} & : & \<open>method\<close> \\
+    @{method_def best} & : & \<open>method\<close> \\
+    @{method_def fastforce} & : & \<open>method\<close> \\
+    @{method_def slowsimp} & : & \<open>method\<close> \\
+    @{method_def bestsimp} & : & \<open>method\<close> \\
+    @{method_def deepen} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1536,8 +1524,8 @@
   The optional integer argument specifies a bound for the number of
   unsafe steps used in a proof.  By default, @{method blast} starts
   with a bound of 0 and increases it successively to 20.  In contrast,
-  @{text "(blast lim)"} tries to prove the goal using a search bound
-  of @{text "lim"}.  Sometimes a slow proof using @{method blast} can
+  \<open>(blast lim)\<close> tries to prove the goal using a search bound
+  of \<open>lim\<close>.  Sometimes a slow proof using @{method blast} can
   be made much faster by supplying the successful search bound to this
   proof method instead.
 
@@ -1547,9 +1535,9 @@
   ones it cannot prove.  Occasionally, attempting to prove the hard
   ones may take a long time.
 
-  The optional depth arguments in @{text "(auto m n)"} refer to its
-  builtin classical reasoning procedures: @{text m} (default 4) is for
-  @{method blast}, which is tried first, and @{text n} (default 2) is
+  The optional depth arguments in \<open>(auto m n)\<close> refer to its
+  builtin classical reasoning procedures: \<open>m\<close> (default 4) is for
+  @{method blast}, which is tried first, and \<open>n\<close> (default 2) is
   for a slower but more general alternative that also takes wrappers
   into account.
 
@@ -1589,7 +1577,7 @@
 
   Any of the above methods support additional modifiers of the context
   of classical (and simplifier) rules, but the ones related to the
-  Simplifier are explicitly prefixed by @{text simp} here.  The
+  Simplifier are explicitly prefixed by \<open>simp\<close> here.  The
   semantics of these ad-hoc rule declarations is analogous to the
   attributes given before.  Facts provided by forward chaining are
   inserted into the goal before commencing proof search.
@@ -1604,9 +1592,9 @@
   quantifiers.
 
   \begin{matharray}{rcl}
-    @{method_def safe} & : & @{text method} \\
-    @{method_def clarify} & : & @{text method} \\
-    @{method_def clarsimp} & : & @{text method} \\
+    @{method_def safe} & : & \<open>method\<close> \\
+    @{method_def clarify} & : & \<open>method\<close> \\
+    @{method_def clarsimp} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1631,11 +1619,11 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def safe_step} & : & @{text method} \\
-    @{method_def inst_step} & : & @{text method} \\
-    @{method_def step} & : & @{text method} \\
-    @{method_def slow_step} & : & @{text method} \\
-    @{method_def clarify_step} & : &  @{text method} \\
+    @{method_def safe_step} & : & \<open>method\<close> \\
+    @{method_def inst_step} & : & \<open>method\<close> \\
+    @{method_def step} & : & \<open>method\<close> \\
+    @{method_def slow_step} & : & \<open>method\<close> \\
+    @{method_def clarify_step} & : &  \<open>method\<close> \\
   \end{matharray}
 
   These are the primitive tactics behind the automated proof methods
@@ -1662,9 +1650,9 @@
 
   \<^descr> @{method clarify_step} performs a safe step on the first
   subgoal; no splitting step is applied.  For example, the subgoal
-  @{text "A \<and> B"} is left as a conjunction.  Proof by assumption,
+  \<open>A \<and> B\<close> is left as a conjunction.  Proof by assumption,
   Modus Ponens, etc., may be performed provided they do not
-  instantiate unknowns.  Assumptions of the form @{text "x = t"} may
+  instantiate unknowns.  Assumptions of the form \<open>x = t\<close> may
   be eliminated.  The safe wrapper tactical is applied.
 \<close>
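 
 text \<open>
   The following hypothetical examples illustrate typical invocations:
   @{method blast} with an explicit search bound, @{method auto} with
   explicit depth arguments (shown here with their default values), and
   @{method clarify} followed by explicit single steps:
 \<close>
 
 lemma "(\<exists>x. P x \<and> Q x) \<longrightarrow> (\<exists>x. P x) \<and> (\<exists>x. Q x)"
   by (blast 4)
 
 lemma "(\<exists>x. P x \<and> Q x) \<longrightarrow> (\<exists>x. P x) \<and> (\<exists>x. Q x)"
   by (auto 4 2)
 
 lemma "(\<exists>x. P x \<and> Q x) \<longrightarrow> (\<exists>x. P x)"
   apply clarify
   apply (rule exI)
   apply assumption
   done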
 
@@ -1695,10 +1683,10 @@
   The proof strategy of the Classical Reasoner is simple.  Perform as
   many safe inferences as possible; or else, apply certain safe rules,
   allowing instantiation of unknowns; or else, apply an unsafe rule.
-  The tactics also eliminate assumptions of the form @{text "x = t"}
+  The tactics also eliminate assumptions of the form \<open>x = t\<close>
   by substitution if they have been set up to do so.  They may perform
-  a form of Modus Ponens: if there are assumptions @{text "P \<longrightarrow> Q"} and
-  @{text "P"}, then replace @{text "P \<longrightarrow> Q"} by @{text "Q"}.
+  a form of Modus Ponens: if there are assumptions \<open>P \<longrightarrow> Q\<close> and
+  \<open>P\<close>, then replace \<open>P \<longrightarrow> Q\<close> by \<open>Q\<close>.
 
   The classical reasoning tools --- except @{method blast} --- allow
   one to modify this basic proof strategy by applying two lists of
@@ -1718,40 +1706,40 @@
   wrapper names.  These names may be used to selectively delete
   wrappers.
 
-  \<^descr> @{text "ctxt addSWrapper (name, wrapper)"} adds a new wrapper,
+  \<^descr> \<open>ctxt addSWrapper (name, wrapper)\<close> adds a new wrapper,
   which should yield a safe tactic, to modify the existing safe step
   tactic.
 
-  \<^descr> @{text "ctxt addSbefore (name, tac)"} adds the given tactic as a
+  \<^descr> \<open>ctxt addSbefore (name, tac)\<close> adds the given tactic as a
   safe wrapper, such that it is tried \<^emph>\<open>before\<close> each safe step of
   the search.
 
-  \<^descr> @{text "ctxt addSafter (name, tac)"} adds the given tactic as a
+  \<^descr> \<open>ctxt addSafter (name, tac)\<close> adds the given tactic as a
   safe wrapper, such that it is tried when a safe step of the search
   would fail.
 
-  \<^descr> @{text "ctxt delSWrapper name"} deletes the safe wrapper with
+  \<^descr> \<open>ctxt delSWrapper name\<close> deletes the safe wrapper with
   the given name.
 
-  \<^descr> @{text "ctxt addWrapper (name, wrapper)"} adds a new wrapper to
+  \<^descr> \<open>ctxt addWrapper (name, wrapper)\<close> adds a new wrapper to
   modify the existing (unsafe) step tactic.
 
-  \<^descr> @{text "ctxt addbefore (name, tac)"} adds the given tactic as an
+  \<^descr> \<open>ctxt addbefore (name, tac)\<close> adds the given tactic as an
   unsafe wrapper, such that its result is concatenated
   \<^emph>\<open>before\<close> the result of each unsafe step.
 
-  \<^descr> @{text "ctxt addafter (name, tac)"} adds the given tactic as an
+  \<^descr> \<open>ctxt addafter (name, tac)\<close> adds the given tactic as an
   unsafe wrapper, such that its result is concatenated \<^emph>\<open>after\<close>
   the result of each unsafe step.
 
-  \<^descr> @{text "ctxt delWrapper name"} deletes the unsafe wrapper with
+  \<^descr> \<open>ctxt delWrapper name\<close> deletes the unsafe wrapper with
   the given name.
 
-  \<^descr> @{text "addSss"} adds the simpset of the context to its
+  \<^descr> \<open>addSss\<close> adds the simpset of the context to its
   classical set. The assumptions and goal will be simplified, in a
   rather safe way, after each safe step of the search.
 
-  \<^descr> @{text "addss"} adds the simpset of the context to its
+  \<^descr> \<open>addss\<close> adds the simpset of the context to its
   classical set. The assumptions and goal will be simplified before
   each unsafe step of the search.
 \<close>
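 
 text \<open>
   A hypothetical wrapper declaration in Isabelle/ML, following the
   pattern above.  This sketch assumes that the wrapper tactic has type
   @{ML_type "Proof.context -> int -> tactic"}; the name
   \<open>assumption_first\<close> is arbitrary and merely enables later
   deletion via \<open>delSWrapper\<close>:
 \<close>
 
 ML \<open>
   (*sketch: try proof by assumption before each safe step of the search*)
   fun add_assumption_first ctxt =
     ctxt addSbefore ("assumption_first", fn ctxt' => assume_tac ctxt')
 \<close>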
@@ -1761,23 +1749,23 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "judgment"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{method_def atomize} & : & @{text method} \\
-    @{attribute_def atomize} & : & @{text attribute} \\
-    @{attribute_def rule_format} & : & @{text attribute} \\
-    @{attribute_def rulify} & : & @{text attribute} \\
+    @{command_def "judgment"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{method_def atomize} & : & \<open>method\<close> \\
+    @{attribute_def atomize} & : & \<open>attribute\<close> \\
+    @{attribute_def rule_format} & : & \<open>attribute\<close> \\
+    @{attribute_def rulify} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   The very starting point for any Isabelle object-logic is a ``truth
   judgment'' that links object-level statements to the meta-logic
-  (with its minimal language of @{text prop} that covers universal
-  quantification @{text "\<And>"} and implication @{text "\<Longrightarrow>"}).
+  (with its minimal language of \<open>prop\<close> that covers universal
+  quantification \<open>\<And>\<close> and implication \<open>\<Longrightarrow>\<close>).
 
   Common object-logics are sufficiently expressive to internalize rule
-  statements over @{text "\<And>"} and @{text "\<Longrightarrow>"} within their own
+  statements over \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> within their own
   language.  This is useful in certain situations where a rule needs
   to be viewed as an atomic statement from the meta-level perspective,
-  e.g.\ @{text "\<And>x. x \<in> A \<Longrightarrow> P x"} versus @{text "\<forall>x \<in> A. P x"}.
+  e.g.\ \<open>\<And>x. x \<in> A \<Longrightarrow> P x\<close> versus \<open>\<forall>x \<in> A. P x\<close>.
 
   From the following language elements, only the @{method atomize}
   method and @{attribute rule_format} attribute are occasionally
@@ -1796,40 +1784,39 @@
     @@{attribute rule_format} ('(' 'noasm' ')')?
   \<close>}
 
-  \<^descr> @{command "judgment"}~@{text "c :: \<sigma> (mx)"} declares constant
-  @{text c} as the truth judgment of the current object-logic.  Its
-  type @{text \<sigma>} should specify a coercion of the category of
-  object-level propositions to @{text prop} of the Pure meta-logic;
-  the mixfix annotation @{text "(mx)"} would typically just link the
-  object language (internally of syntactic category @{text logic})
-  with that of @{text prop}.  Only one @{command "judgment"}
+  \<^descr> @{command "judgment"}~\<open>c :: \<sigma> (mx)\<close> declares constant
+  \<open>c\<close> as the truth judgment of the current object-logic.  Its
+  type \<open>\<sigma>\<close> should specify a coercion of the category of
+  object-level propositions to \<open>prop\<close> of the Pure meta-logic;
+  the mixfix annotation \<open>(mx)\<close> would typically just link the
+  object language (internally of syntactic category \<open>logic\<close>)
+  with that of \<open>prop\<close>.  Only one @{command "judgment"}
   declaration may be given in any theory development.
   
   \<^descr> @{method atomize} (as a method) rewrites any non-atomic
   premises of a sub-goal, using the meta-level equations declared via
   @{attribute atomize} (as an attribute) beforehand.  As a result,
   heavily nested goals become amenable to fundamental operations such
-  as resolution (cf.\ the @{method (Pure) rule} method).  Giving the ``@{text
-  "(full)"}'' option here means to turn the whole subgoal into an
+  as resolution (cf.\ the @{method (Pure) rule} method).  Giving the ``\<open>(full)\<close>'' option here means to turn the whole subgoal into an
   object-statement (if possible), including the outermost parameters
   and assumptions as well.
 
   A typical collection of @{attribute atomize} rules for a particular
   object-logic would provide an internalization for each of the
-  connectives of @{text "\<And>"}, @{text "\<Longrightarrow>"}, and @{text "\<equiv>"}.
+  connectives of \<open>\<And>\<close>, \<open>\<Longrightarrow>\<close>, and \<open>\<equiv>\<close>.
   Meta-level conjunction should be covered as well (this is
   particularly important for locales, see \secref{sec:locale}).
 
   \<^descr> @{attribute rule_format} rewrites a theorem by the equalities
   declared as @{attribute rulify} rules in the current object-logic.
   By default, the result is fully normalized, including assumptions
-  and conclusions at any depth.  The @{text "(no_asm)"} option
+  and conclusions at any depth.  The \<open>(no_asm)\<close> option
   restricts the transformation to the conclusion of a rule.
 
   In common object-logics (HOL, FOL, ZF), the effect of @{attribute
   rule_format} is to replace (bounded) universal quantification
-  (@{text "\<forall>"}) and implication (@{text "\<longrightarrow>"}) by the corresponding
-  rule statements over @{text "\<And>"} and @{text "\<Longrightarrow>"}.
+  (\<open>\<forall>\<close>) and implication (\<open>\<longrightarrow>\<close>) by the corresponding
+  rule statements over \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close>.
 \<close>
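 
 text \<open>
   For instance, the following hypothetical lemma is stated and proven in
   object-level form; the @{attribute rule_format} declaration stores it
   as the rule \<open>?x \<in> ?A \<Longrightarrow> ?x \<in> ?A \<union> ?B\<close>:
 \<close>
 
 lemma Un_rule [rule_format]: "\<forall>x. x \<in> A \<longrightarrow> x \<in> A \<union> B"
   by blast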
 
 
@@ -1837,10 +1824,10 @@
 
 text \<open>
   \begin{tabular}{rcll}
-    @{attribute_def unify_trace_simp} & : & @{text "attribute"} & default @{text "false"} \\
-    @{attribute_def unify_trace_types} & : & @{text "attribute"} & default @{text "false"} \\
-    @{attribute_def unify_trace_bound} & : & @{text "attribute"} & default @{text "50"} \\
-    @{attribute_def unify_search_bound} & : & @{text "attribute"} & default @{text "60"} \\
+    @{attribute_def unify_trace_simp} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def unify_trace_types} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def unify_trace_bound} & : & \<open>attribute\<close> & default \<open>50\<close> \\
+    @{attribute_def unify_search_bound} & : & \<open>attribute\<close> & default \<open>60\<close> \\
   \end{tabular}
   \<^medskip>
 
--- a/src/Doc/Isar_Ref/HOL_Specific.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/HOL_Specific.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -36,15 +36,15 @@
   prefer ML to Lisp, you will probably prefer HOL to ZF.
 
   \<^medskip>
-  The syntax of HOL follows @{text "\<lambda>"}-calculus and
+  The syntax of HOL follows \<open>\<lambda>\<close>-calculus and
   functional programming.  Function application is curried.  To apply
-  the function @{text f} of type @{text "\<tau>\<^sub>1 \<Rightarrow> \<tau>\<^sub>2 \<Rightarrow> \<tau>\<^sub>3"} to the
-  arguments @{text a} and @{text b} in HOL, you simply write @{text "f
-  a b"} (as in ML or Haskell).  There is no ``apply'' operator; the
-  existing application of the Pure @{text "\<lambda>"}-calculus is re-used.
-  Note that in HOL @{text "f (a, b)"} means ``@{text "f"} applied to
-  the pair @{text "(a, b)"} (which is notation for @{text "Pair a
-  b"}).  The latter typically introduces extra formal efforts that can
+  the function \<open>f\<close> of type \<open>\<tau>\<^sub>1 \<Rightarrow> \<tau>\<^sub>2 \<Rightarrow> \<tau>\<^sub>3\<close> to the
+  arguments \<open>a\<close> and \<open>b\<close> in HOL, you simply write \<open>f
+  a b\<close> (as in ML or Haskell).  There is no ``apply'' operator; the
+  existing application of the Pure \<open>\<lambda>\<close>-calculus is re-used.
+  Note that in HOL \<open>f (a, b)\<close> means ``\<open>f\<close> applied to
+  the pair \<open>(a, b)\<close>'' (which is notation for \<open>Pair a
+  b\<close>).  The latter typically introduces extra formal efforts that can
   be avoided by currying functions by default.  Explicit tuples are as
   infrequent in HOL formalizations as in good ML or Haskell programs.
 
@@ -69,24 +69,24 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "inductive"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def (HOL) "inductive_set"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def (HOL) "coinductive"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def (HOL) "coinductive_set"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "print_inductives"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{attribute_def (HOL) mono} & : & @{text attribute} \\
+    @{command_def (HOL) "inductive"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def (HOL) "inductive_set"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def (HOL) "coinductive"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def (HOL) "coinductive_set"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "print_inductives"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{attribute_def (HOL) mono} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   An \<^emph>\<open>inductive definition\<close> specifies the least predicate or set
-  @{text R} closed under given rules: applying a rule to elements of
-  @{text R} yields a result within @{text R}.  For example, a
+  \<open>R\<close> closed under given rules: applying a rule to elements of
+  \<open>R\<close> yields a result within \<open>R\<close>.  For example, a
   structural operational semantics is an inductive definition of an
   evaluation relation.
 
   Dually, a \<^emph>\<open>coinductive definition\<close> specifies the greatest
-  predicate or set @{text R} that is consistent with given rules:
-  every element of @{text R} can be seen as arising by applying a rule
-  to elements of @{text R}.  An important example is using
+  predicate or set \<open>R\<close> that is consistent with given rules:
+  every element of \<open>R\<close> can be seen as arising by applying a rule
+  to elements of \<open>R\<close>.  An important example is using
   bisimulation relations to formalise equivalence of processes and
   infinite data structures.
 
@@ -99,7 +99,7 @@
   works by pushing inclusion through logical connectives and any other
   operator that might be wrapped around recursive occurrences of the
   defined relation: there must be a monotonicity theorem of the form
-  @{text "A \<le> B \<Longrightarrow> \<M> A \<le> \<M> B"}, for each premise @{text "\<M> R t"} in an
+  \<open>A \<le> B \<Longrightarrow> \<M> A \<le> \<M> B\<close>, for each premise \<open>\<M> R t\<close> in an
   introduction rule.  The default rule declarations of Isabelle/HOL
   already take care of most common situations.
 
@@ -120,9 +120,9 @@
   "coinductive"} define (co)inductive predicates from the introduction
   rules.
 
-  The propositions given as @{text "clauses"} in the @{keyword
-  "where"} part are either rules of the usual @{text "\<And>/\<Longrightarrow>"} format
-  (with arbitrary nesting), or equalities using @{text "\<equiv>"}.  The
+  The propositions given as \<open>clauses\<close> in the @{keyword
+  "where"} part are either rules of the usual \<open>\<And>/\<Longrightarrow>\<close> format
+  (with arbitrary nesting), or equalities using \<open>\<equiv>\<close>.  The
   latter specifies extra-logical abbreviations in the sense of
   @{command_ref abbreviation}.  Introducing abstract syntax
   simultaneously with the actual introduction rules is occasionally
@@ -131,7 +131,7 @@
   The optional @{keyword "for"} part contains a list of parameters of
   the (co)inductive predicates that remain fixed throughout the
   definition, in contrast to arguments of the relation that may vary
-  in each occurrence within the given @{text "clauses"}.
+  in each occurrence within the given \<open>clauses\<close>.
 
   The optional @{keyword "monos"} declaration contains additional
   \<^emph>\<open>monotonicity theorems\<close>, which are required for each operator
@@ -143,7 +143,7 @@
   where multiple arguments are simulated via tuples.
 
   \<^descr> @{command "print_inductives"} prints (co)inductive definitions and
-  monotonicity rules; the ``@{text "!"}'' option indicates extra verbosity.
+  monotonicity rules; the ``\<open>!\<close>'' option indicates extra verbosity.
 
   \<^descr> @{attribute (HOL) mono} declares monotonicity rules in the
   context.  These rules are involved in the automated monotonicity
@@ -153,27 +153,27 @@
 
 subsection \<open>Derived rules\<close>
 
-text \<open>A (co)inductive definition of @{text R} provides the following
+text \<open>A (co)inductive definition of \<open>R\<close> provides the following
   main theorems:
 
-  \<^descr> @{text R.intros} is the list of introduction rules as proven
+  \<^descr> \<open>R.intros\<close> is the list of introduction rules as proven
   theorems, for the recursive predicates (or sets).  The rules are
   also available individually, using the names given them in the
   theory file;
 
-  \<^descr> @{text R.cases} is the case analysis (or elimination) rule;
-
-  \<^descr> @{text R.induct} or @{text R.coinduct} is the (co)induction
+  \<^descr> \<open>R.cases\<close> is the case analysis (or elimination) rule;
+
+  \<^descr> \<open>R.induct\<close> or \<open>R.coinduct\<close> is the (co)induction
   rule;
 
-  \<^descr> @{text R.simps} is the equation unrolling the fixpoint of the
+  \<^descr> \<open>R.simps\<close> is the equation unrolling the fixpoint of the
   predicate one step.
 
 
-  When several predicates @{text "R\<^sub>1, \<dots>, R\<^sub>n"} are defined simultaneously,
-  the list of introduction rules is called @{text "R\<^sub>1_\<dots>_R\<^sub>n.intros"}, the
-  case analysis rules are called @{text "R\<^sub>1.cases, \<dots>, R\<^sub>n.cases"}, and the
-  list of mutual induction rules is called @{text "R\<^sub>1_\<dots>_R\<^sub>n.inducts"}.
+  When several predicates \<open>R\<^sub>1, \<dots>, R\<^sub>n\<close> are defined simultaneously,
+  the list of introduction rules is called \<open>R\<^sub>1_\<dots>_R\<^sub>n.intros\<close>, the
+  case analysis rules are called \<open>R\<^sub>1.cases, \<dots>, R\<^sub>n.cases\<close>, and the
+  list of mutual induction rules is called \<open>R\<^sub>1_\<dots>_R\<^sub>n.inducts\<close>.
 \<close>
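 
 text \<open>
   A minimal sketch (the predicate name is hypothetical): the following
   definition yields, among others, the facts \<open>ev.intros\<close>,
   \<open>ev.cases\<close>, \<open>ev.induct\<close>, and \<open>ev.simps\<close>:
 \<close>
 
 inductive ev :: "nat \<Rightarrow> bool"
 where
   ev0: "ev 0"
 | evSS: "ev n \<Longrightarrow> ev (Suc (Suc n))"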
 
 
@@ -185,15 +185,15 @@
   sources for some examples.  The general format of such monotonicity
   theorems is as follows:
 
-  \<^item> Theorems of the form @{text "A \<le> B \<Longrightarrow> \<M> A \<le> \<M> B"}, for proving
+  \<^item> Theorems of the form \<open>A \<le> B \<Longrightarrow> \<M> A \<le> \<M> B\<close>, for proving
   monotonicity of inductive definitions whose introduction rules have
-  premises involving terms such as @{text "\<M> R t"}.
+  premises involving terms such as \<open>\<M> R t\<close>.
 
   \<^item> Monotonicity theorems for logical operators, which are of the
-  general form @{text "(\<dots> \<longrightarrow> \<dots>) \<Longrightarrow> \<dots> (\<dots> \<longrightarrow> \<dots>) \<Longrightarrow> \<dots> \<longrightarrow> \<dots>"}.  For example, in
-  the case of the operator @{text "\<or>"}, the corresponding theorem is
+  general form \<open>(\<dots> \<longrightarrow> \<dots>) \<Longrightarrow> \<dots> (\<dots> \<longrightarrow> \<dots>) \<Longrightarrow> \<dots> \<longrightarrow> \<dots>\<close>.  For example, in
+  the case of the operator \<open>\<or>\<close>, the corresponding theorem is
   \[
-  \infer{@{text "P\<^sub>1 \<or> P\<^sub>2 \<longrightarrow> Q\<^sub>1 \<or> Q\<^sub>2"}}{@{text "P\<^sub>1 \<longrightarrow> Q\<^sub>1"} & @{text "P\<^sub>2 \<longrightarrow> Q\<^sub>2"}}
+  \infer{\<open>P\<^sub>1 \<or> P\<^sub>2 \<longrightarrow> Q\<^sub>1 \<or> Q\<^sub>2\<close>}{\<open>P\<^sub>1 \<longrightarrow> Q\<^sub>1\<close> & \<open>P\<^sub>2 \<longrightarrow> Q\<^sub>2\<close>}
   \]
 
   \<^item> De Morgan style equations for reasoning about the ``polarity''
@@ -244,7 +244,7 @@
 where "B a \<Longrightarrow> EXISTS B"
 (*<*)end(*>*)
 
-text \<open>Here the @{text "cases"} or @{text "induct"} rules produced by the
+text \<open>Here the \<open>cases\<close> or \<open>induct\<close> rules produced by the
   @{command inductive} package coincide with the expected elimination rules
   for Natural Deduction. Already in the original article by Gerhard Gentzen
   @{cite "Gentzen:1935"} there is a hint that each connective can be
@@ -256,11 +256,11 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "primrec"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def (HOL) "fun"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def (HOL) "function"} & : & @{text "local_theory \<rightarrow> proof(prove)"} \\
-    @{command_def (HOL) "termination"} & : & @{text "local_theory \<rightarrow> proof(prove)"} \\
-    @{command_def (HOL) "fun_cases"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
+    @{command_def (HOL) "primrec"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def (HOL) "fun"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def (HOL) "function"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close> \\
+    @{command_def (HOL) "termination"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close> \\
+    @{command_def (HOL) "fun_cases"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -280,8 +280,7 @@
   \<close>}
 
   \<^descr> @{command (HOL) "primrec"} defines primitive recursive functions
-  over datatypes (see also @{command_ref (HOL) datatype}). The given @{text
-  equations} specify reduction rules that are produced by instantiating the
+  over datatypes (see also @{command_ref (HOL) datatype}). The given \<open>equations\<close> specify reduction rules that are produced by instantiating the
   generic combinator for primitive recursion that is available for each
   datatype.
 
@@ -289,17 +288,17 @@
 
   @{text [display] "f x\<^sub>1 \<dots> x\<^sub>m (C y\<^sub>1 \<dots> y\<^sub>k) z\<^sub>1 \<dots> z\<^sub>n = rhs"}
 
-  such that @{text C} is a datatype constructor, @{text rhs} contains only
+  such that \<open>C\<close> is a datatype constructor, \<open>rhs\<close> contains only
   the free variables on the left-hand side (or from the context), and all
-  recursive occurrences of @{text "f"} in @{text "rhs"} are of the form
-  @{text "f \<dots> y\<^sub>i \<dots>"} for some @{text i}. At most one reduction rule for
+  recursive occurrences of \<open>f\<close> in \<open>rhs\<close> are of the form
+  \<open>f \<dots> y\<^sub>i \<dots>\<close> for some \<open>i\<close>. At most one reduction rule for
   each constructor can be given. The order does not matter. For missing
   constructors, the function is defined to return a default value, but this
   equation is made difficult for users to access.
 
   The reduction rules are declared as @{attribute simp} by default, which
   enables standard proof methods like @{method simp} and @{method auto} to
-  normalize expressions of @{text "f"} applied to datatype constructions, by
+  normalize expressions of \<open>f\<close> applied to datatype constructions, by
   simulating symbolic computation via rewriting.
 
   \<^descr> @{command (HOL) "function"} defines functions by general wellfounded
@@ -310,18 +309,17 @@
   compatibility of patterns.
 
   The defined function is considered partial, and the resulting
-  simplification rules (named @{text "f.psimps"}) and induction rule (named
-  @{text "f.pinduct"}) are guarded by a generated domain predicate @{text
-  "f_dom"}. The @{command (HOL) "termination"} command can then be used to
+  simplification rules (named \<open>f.psimps\<close>) and induction rule (named
+  \<open>f.pinduct\<close>) are guarded by a generated domain predicate \<open>f_dom\<close>. The @{command (HOL) "termination"} command can then be used to
   establish that the function is total.
 
   \<^descr> @{command (HOL) "fun"} is a shorthand notation for ``@{command (HOL)
-  "function"}~@{text "(sequential)"}'', followed by automated proof attempts
+  "function"}~\<open>(sequential)\<close>'', followed by automated proof attempts
   regarding pattern matching and termination. See @{cite
   "isabelle-function"} for further details.
 
-  \<^descr> @{command (HOL) "termination"}~@{text f} commences a termination
-  proof for the previously defined function @{text f}. If this is omitted,
+  \<^descr> @{command (HOL) "termination"}~\<open>f\<close> commences a termination
+  proof for the previously defined function \<open>f\<close>. If this is omitted,
   the command refers to the most recent function definition. After the proof
   is closed, the recursive equations and the induction principle are
   established.
@@ -334,19 +332,19 @@
 
   Recursive definitions introduced by the @{command (HOL) "function"}
   command accommodate reasoning by induction (cf.\ @{method induct}): rule
-  @{text "f.induct"} refers to a specific induction rule, with parameters
+  \<open>f.induct\<close> refers to a specific induction rule, with parameters
   named according to the user-specified equations. Cases are numbered
   starting from 1. For @{command (HOL) "primrec"}, the induction principle
   coincides with structural recursion on the datatype where the recursion is
   carried out.
 
   The equations provided by these packages may be referred to later as theorem
-  list @{text "f.simps"}, where @{text f} is the (collective) name of the
+  list \<open>f.simps\<close>, where \<open>f\<close> is the (collective) name of the
   functions defined. Individual equations may be named explicitly as well.
 
   The @{command (HOL) "function"} command accepts the following options.
 
-  \<^descr> @{text sequential} enables a preprocessor which disambiguates
+  \<^descr> \<open>sequential\<close> enables a preprocessor which disambiguates
   overlapping patterns by making them mutually disjoint. Earlier equations
   take precedence over later ones. This allows the specification to be given
   in a format very similar to functional programming. Note that the resulting
@@ -355,7 +353,7 @@
   equation given by the user may result in several theorems. Also note that
   this automatic transformation only works for ML-style datatype patterns.
 
-  \<^descr> @{text domintros} enables the automated generation of introduction
+  \<^descr> \<open>domintros\<close> enables the automated generation of introduction
   rules for the domain predicate. While mostly not needed, they can be
   helpful in some proofs about partial functions.
 \<close>
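 
 text \<open>
   Minimal sketches of the three packages, with hypothetical function
   names: primitive recursion over a datatype, pattern-matching recursion
   with automated termination, and general recursion with an explicit
   termination proof via the @{method (HOL) relation} method described
   below:
 \<close>
 
 primrec app :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list"
 where
   "app [] ys = ys"
 | "app (x # xs) ys = x # app xs ys"
 
 fun fib :: "nat \<Rightarrow> nat"
 where
   "fib 0 = 0"
 | "fib (Suc 0) = 1"
 | "fib (Suc (Suc n)) = fib (Suc n) + fib n"
 
 function sum_upto :: "nat \<Rightarrow> nat \<Rightarrow> nat"
 where
   "sum_upto i n = (if n < i then 0 else i + sum_upto (Suc i) n)"
   by pat_completeness auto
 
 termination sum_upto
   by (relation "measure (\<lambda>(i, n). Suc n - i)") auto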
@@ -400,7 +398,7 @@
 
   \<^medskip>
   Substitution on expressions can be defined similarly. The mapping
-  @{text f} of type @{typ "'a \<Rightarrow> 'a aexp"} given as a parameter is lifted
+  \<open>f\<close> of type @{typ "'a \<Rightarrow> 'a aexp"} given as a parameter is lifted
   canonically on the types @{typ "'a aexp"} and @{typ "'a bexp"},
   respectively.
 \<close>
@@ -481,7 +479,7 @@
   "map_tree f (Atom a) = Atom (f a)"
 | "map_tree f (Branch ts) = Branch (\<lambda>x. map_tree f (ts x))"
 
-text \<open>Note that all occurrences of functions such as @{text ts} above must
+text \<open>Note that all occurrences of functions such as \<open>ts\<close> above must
   be applied to an argument. In particular, @{term "map_tree f \<circ> ts"} is not
   allowed here.
 
@@ -498,11 +496,11 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def (HOL) pat_completeness} & : & @{text method} \\
-    @{method_def (HOL) relation} & : & @{text method} \\
-    @{method_def (HOL) lexicographic_order} & : & @{text method} \\
-    @{method_def (HOL) size_change} & : & @{text method} \\
-    @{method_def (HOL) induction_schema} & : & @{text method} \\
+    @{method_def (HOL) pat_completeness} & : & \<open>method\<close> \\
+    @{method_def (HOL) relation} & : & \<open>method\<close> \\
+    @{method_def (HOL) lexicographic_order} & : & \<open>method\<close> \\
+    @{method_def (HOL) size_change} & : & \<open>method\<close> \\
+    @{method_def (HOL) induction_schema} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -522,10 +520,10 @@
   required by the @{command (HOL) "function"} package (cf.\
   @{cite "isabelle-function"}).
 
-  \<^descr> @{method (HOL) relation}~@{text R} introduces a termination
-  proof using the relation @{text R}.  The resulting proof state will
-  contain goals expressing that @{text R} is wellfounded, and that the
-  arguments of recursive calls decrease with respect to @{text R}.
+  \<^descr> @{method (HOL) relation}~\<open>R\<close> introduces a termination
+  proof using the relation \<open>R\<close>.  The resulting proof state will
+  contain goals expressing that \<open>R\<close> is wellfounded, and that the
+  arguments of recursive calls decrease with respect to \<open>R\<close>.
   Usually, this method is used as the initial proof step of manual
   termination proofs.
 
@@ -542,9 +540,9 @@
   \<^descr> @{method (HOL) "size_change"} also works on termination goals,
   using a variation of the size-change principle, together with a
   graph decomposition technique (see @{cite krauss_phd} for details).
-  Three kinds of orders are used internally: @{text max}, @{text min},
-  and @{text ms} (multiset), which is only available when the theory
-  @{text Multiset} is loaded. When no order kinds are given, they are
+  Three kinds of orders are used internally: \<open>max\<close>, \<open>min\<close>,
+  and \<open>ms\<close> (multiset), which is only available when the theory
+  \<open>Multiset\<close> is loaded. When no order kinds are given, they are
   tried in order. The search for a termination proof uses SAT solving
   internally.
 
@@ -563,8 +561,8 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "partial_function"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{attribute_def (HOL) "partial_function_mono"} & : & @{text attribute} \\
+    @{command_def (HOL) "partial_function"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{attribute_def (HOL) "partial_function_mono"} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -572,7 +570,7 @@
       @'where' @{syntax thmdecl}? @{syntax prop}
   \<close>}
 
-  \<^descr> @{command (HOL) "partial_function"}~@{text "(mode)"} defines
+  \<^descr> @{command (HOL) "partial_function"}~\<open>(mode)\<close> defines
   recursive functions based on fixpoints in complete partial
   orders. No termination proof is required from the user or
   constructed internally. Instead, the possibility of non-termination
@@ -590,19 +588,19 @@
   can be influenced by declaring hints using the
   @{attribute (HOL) partial_function_mono} attribute.
 
-  The mandatory @{text mode} argument specifies the mode of operation
+  The mandatory \<open>mode\<close> argument specifies the mode of operation
   of the command, which directly corresponds to a complete partial
   order on the result type. By default, the following modes are
   defined:
 
-    \<^descr> @{text option} defines functions that map into the @{type
+    \<^descr> \<open>option\<close> defines functions that map into the @{type
     option} type. Here, the value @{term None} is used to model a
     non-terminating computation. Monotonicity requires that if @{term
     None} is returned by a recursive call, then the overall result must
     also be @{term None}. This is best achieved through the use of the
     monadic operator @{const "Option.bind"}.
 
-    \<^descr> @{text tailrec} defines functions with an arbitrary result
+    \<^descr> \<open>tailrec\<close> defines functions with an arbitrary result
     type and uses the slightly degenerated partial order where @{term
     "undefined"} is the bottom element.  Now, monotonicity requires that
     if @{term undefined} is returned by a recursive call, then the
@@ -624,7 +622,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "recdef"} & : & @{text "theory \<rightarrow> theory)"} \\
+    @{command_def (HOL) "recdef"} & : & \<open>theory \<rightarrow> theory\<close> \\
   \end{matharray}
 
   The old TFL command @{command (HOL) "recdef"} for defining recursive functions is
@@ -643,10 +641,9 @@
 
   \<^descr> @{command (HOL) "recdef"} defines general well-founded
   recursive functions (using the TFL package), see also
-  @{cite "isabelle-HOL"}.  The ``@{text "(permissive)"}'' option tells
+  @{cite "isabelle-HOL"}.  The ``\<open>(permissive)\<close>'' option tells
   TFL to recover from failed proof attempts, returning unfinished
-  results.  The @{text recdef_simp}, @{text recdef_cong}, and @{text
-  recdef_wf} hints refer to auxiliary rules to be used in the internal
+  results.  The \<open>recdef_simp\<close>, \<open>recdef_cong\<close>, and \<open>recdef_wf\<close> hints refer to auxiliary rules to be used in the internal
   automated proof process of TFL.  Additional @{syntax clasimpmod}
   declarations may be given to tune the context of the Simplifier
   (cf.\ \secref{sec:simplifier}) and Classical reasoner (cf.\
@@ -658,9 +655,9 @@
   globally, using the following attributes.
 
   \begin{matharray}{rcl}
-    @{attribute_def (HOL) recdef_simp} & : & @{text attribute} \\
-    @{attribute_def (HOL) recdef_cong} & : & @{text attribute} \\
-    @{attribute_def (HOL) recdef_wf} & : & @{text attribute} \\
+    @{attribute_def (HOL) recdef_simp} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) recdef_cong} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) recdef_wf} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -674,9 +671,9 @@
 
 text \<open>
   \begin{tabular}{rcll}
-  @{command_def "adhoc_overloading"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-  @{command_def "no_adhoc_overloading"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-  @{attribute_def "show_variants"} & : & @{text "attribute"} & default @{text false} \\
+  @{command_def "adhoc_overloading"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+  @{command_def "no_adhoc_overloading"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+  @{attribute_def "show_variants"} & : & \<open>attribute\<close> & default \<open>false\<close> \\
   \end{tabular}
 
   \<^medskip>
@@ -692,7 +689,7 @@
       (@{syntax nameref} (@{syntax term} + ) + @'and')
   \<close>}
 
-  \<^descr> @{command "adhoc_overloading"}~@{text "c v\<^sub>1 ... v\<^sub>n"}
+  \<^descr> @{command "adhoc_overloading"}~\<open>c v\<^sub>1 ... v\<^sub>n\<close>
   associates variants with an existing constant.
 
   \<^descr> @{command "no_adhoc_overloading"} is similar to
@@ -711,7 +708,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "specification"} & : & @{text "theory \<rightarrow> proof(prove)"} \\
+    @{command_def (HOL) "specification"} & : & \<open>theory \<rightarrow> proof(prove)\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -721,16 +718,16 @@
     decl: (@{syntax name} ':')? @{syntax term} ('(' @'overloaded' ')')?
   \<close>}
 
-  \<^descr> @{command (HOL) "specification"}~@{text "decls \<phi>"} sets up a
+  \<^descr> @{command (HOL) "specification"}~\<open>decls \<phi>\<close> sets up a
   goal stating the existence of terms with the properties specified to
-  hold for the constants given in @{text decls}.  After finishing the
+  hold for the constants given in \<open>decls\<close>.  After finishing the
   proof, the theory will be augmented with definitions for the given
   constants, as well as with theorems stating the properties for these
   constants.
 
-  @{text decl} declares a constant to be defined by the
-  specification given.  The definition for the constant @{text c} is
-  bound to the name @{text c_def} unless a theorem name is given in
+  \<open>decl\<close> declares a constant to be defined by the
+  specification given.  The definition for the constant \<open>c\<close> is
+  bound to the name \<open>c_def\<close> unless a theorem name is given in
   the declaration.  Overloaded constants should be declared as such.
 \<close>
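 
 text \<open>
   A hypothetical sketch: a constant \<open>inv_nat\<close> is specified to be
   involutive and afterwards defined from a concrete witness; the stated
   property becomes available as \<open>inv_nat_invol\<close>:
 \<close>
 
 consts inv_nat :: "nat \<Rightarrow> nat"
 
 specification (inv_nat)
   inv_nat_invol: "inv_nat (inv_nat n) = n"
   by (rule exI [of _ "\<lambda>n. n"]) simp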
 
@@ -739,8 +736,8 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "old_datatype"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def (HOL) "old_rep_datatype"} & : & @{text "theory \<rightarrow> proof(prove)"} \\
+    @{command_def (HOL) "old_datatype"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def (HOL) "old_rep_datatype"} & : & \<open>theory \<rightarrow> proof(prove)\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -829,34 +826,33 @@
   \begin{center}
   \begin{tabular}{l|l|l}
     & record terms & record types \\ \hline
-    fixed & @{text "\<lparr>x = a, y = b\<rparr>"} & @{text "\<lparr>x :: A, y :: B\<rparr>"} \\
-    schematic & @{text "\<lparr>x = a, y = b, \<dots> = m\<rparr>"} &
-      @{text "\<lparr>x :: A, y :: B, \<dots> :: M\<rparr>"} \\
+    fixed & \<open>\<lparr>x = a, y = b\<rparr>\<close> & \<open>\<lparr>x :: A, y :: B\<rparr>\<close> \\
+    schematic & \<open>\<lparr>x = a, y = b, \<dots> = m\<rparr>\<close> &
+      \<open>\<lparr>x :: A, y :: B, \<dots> :: M\<rparr>\<close> \\
   \end{tabular}
   \end{center}
 
-  The ASCII representation of @{text "\<lparr>x = a\<rparr>"} is @{text
-  "(| x = a |)"}.
-
-  A fixed record @{text "\<lparr>x = a, y = b\<rparr>"} has field @{text x} of value
-  @{text a} and field @{text y} of value @{text b}.  The corresponding
-  type is @{text "\<lparr>x :: A, y :: B\<rparr>"}, assuming that @{text "a :: A"}
-  and @{text "b :: B"}.
-
-  A record scheme like @{text "\<lparr>x = a, y = b, \<dots> = m\<rparr>"} contains fields
-  @{text x} and @{text y} as before, but also possibly further fields
-  as indicated by the ``@{text "\<dots>"}'' notation (which is actually part
-  of the syntax).  The improper field ``@{text "\<dots>"}'' of a record
+  The ASCII representation of \<open>\<lparr>x = a\<rparr>\<close> is \<open>(| x = a |)\<close>.
+
+  A fixed record \<open>\<lparr>x = a, y = b\<rparr>\<close> has field \<open>x\<close> of value
+  \<open>a\<close> and field \<open>y\<close> of value \<open>b\<close>.  The corresponding
+  type is \<open>\<lparr>x :: A, y :: B\<rparr>\<close>, assuming that \<open>a :: A\<close>
+  and \<open>b :: B\<close>.
+
+  A record scheme like \<open>\<lparr>x = a, y = b, \<dots> = m\<rparr>\<close> contains fields
+  \<open>x\<close> and \<open>y\<close> as before, but also possibly further fields
+  as indicated by the ``\<open>\<dots>\<close>'' notation (which is actually part
+  of the syntax).  The improper field ``\<open>\<dots>\<close>'' of a record
   scheme is called the \<^emph>\<open>more part\<close>.  Logically it is just a free
   variable, which is occasionally referred to as a ``row variable'' in
   the literature.  The more part of a record scheme may be
   instantiated by zero or more further components.  For example, the
-  previous scheme may get instantiated to @{text "\<lparr>x = a, y = b, z =
-  c, \<dots> = m'\<rparr>"}, where @{text m'} refers to a different more part.
+  previous scheme may get instantiated to \<open>\<lparr>x = a, y = b, z =
+  c, \<dots> = m'\<rparr>\<close>, where \<open>m'\<close> refers to a different more part.
   Fixed records are special instances of record schemes, where
-  ``@{text "\<dots>"}'' is properly terminated by the @{text "() :: unit"}
-  element.  In fact, @{text "\<lparr>x = a, y = b\<rparr>"} is just an abbreviation
-  for @{text "\<lparr>x = a, y = b, \<dots> = ()\<rparr>"}.
+  ``\<open>\<dots>\<close>'' is properly terminated by the \<open>() :: unit\<close>
+  element.  In fact, \<open>\<lparr>x = a, y = b\<rparr>\<close> is just an abbreviation
+  for \<open>\<lparr>x = a, y = b, \<dots> = ()\<rparr>\<close>.
 
   \<^medskip>
   Two key observations make extensible records in a simply
@@ -886,7 +882,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "record"} & : & @{text "theory \<rightarrow> theory"} \\
+    @{command_def (HOL) "record"} & : & \<open>theory \<rightarrow> theory\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -896,32 +892,30 @@
     constdecl: @{syntax name} '::' @{syntax type} @{syntax mixfix}?
   \<close>}
 
-  \<^descr> @{command (HOL) "record"}~@{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m) t = \<tau> + c\<^sub>1 :: \<sigma>\<^sub>1
-  \<dots> c\<^sub>n :: \<sigma>\<^sub>n"} defines extensible record type @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m) t"},
-  derived from the optional parent record @{text "\<tau>"} by adding new
-  field components @{text "c\<^sub>i :: \<sigma>\<^sub>i"} etc.
-
-  The type variables of @{text "\<tau>"} and @{text "\<sigma>\<^sub>i"} need to be
-  covered by the (distinct) parameters @{text "\<alpha>\<^sub>1, \<dots>,
-  \<alpha>\<^sub>m"}.  Type constructor @{text t} has to be new, while @{text
-  \<tau>} needs to specify an instance of an existing record type.  At
-  least one new field @{text "c\<^sub>i"} has to be specified.
+  \<^descr> @{command (HOL) "record"}~\<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m) t = \<tau> + c\<^sub>1 :: \<sigma>\<^sub>1
+  \<dots> c\<^sub>n :: \<sigma>\<^sub>n\<close> defines extensible record type \<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m) t\<close>,
+  derived from the optional parent record \<open>\<tau>\<close> by adding new
+  field components \<open>c\<^sub>i :: \<sigma>\<^sub>i\<close> etc.
+
+  The type variables of \<open>\<tau>\<close> and \<open>\<sigma>\<^sub>i\<close> need to be
+  covered by the (distinct) parameters \<open>\<alpha>\<^sub>1, \<dots>,
+  \<alpha>\<^sub>m\<close>.  Type constructor \<open>t\<close> has to be new, while \<open>\<tau>\<close> needs to specify an instance of an existing record type.  At
+  least one new field \<open>c\<^sub>i\<close> has to be specified.
   Basically, field names need to belong to a unique record.  This is
   not a real restriction in practice, since fields are qualified by
   the record name internally.
 
-  The parent record specification @{text \<tau>} is optional; if omitted
-  @{text t} becomes a root record.  The hierarchy of all records
+  The parent record specification \<open>\<tau>\<close> is optional; if omitted
+  \<open>t\<close> becomes a root record.  The hierarchy of all records
   declared within a theory context forms a forest structure, i.e.\ a
   set of trees starting with a root record each.  There is no way to
   merge multiple parent records!
 
-  For convenience, @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m) t"} is made a
-  type abbreviation for the fixed record type @{text "\<lparr>c\<^sub>1 ::
-  \<sigma>\<^sub>1, \<dots>, c\<^sub>n :: \<sigma>\<^sub>n\<rparr>"}, likewise is @{text
-  "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m, \<zeta>) t_scheme"} made an abbreviation for
-  @{text "\<lparr>c\<^sub>1 :: \<sigma>\<^sub>1, \<dots>, c\<^sub>n :: \<sigma>\<^sub>n, \<dots> ::
-  \<zeta>\<rparr>"}.
+  For convenience, \<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m) t\<close> is made a
+  type abbreviation for the fixed record type \<open>\<lparr>c\<^sub>1 ::
+  \<sigma>\<^sub>1, \<dots>, c\<^sub>n :: \<sigma>\<^sub>n\<rparr>\<close>, likewise is \<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m, \<zeta>) t_scheme\<close> made an abbreviation for
+  \<open>\<lparr>c\<^sub>1 :: \<sigma>\<^sub>1, \<dots>, c\<^sub>n :: \<sigma>\<^sub>n, \<dots> ::
+  \<zeta>\<rparr>\<close>.
 \<close>
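 
 text \<open>
   A hypothetical root record with a derived extension, in the spirit of
   the classic point example:
 \<close>
 
 record point =
   x :: int
   y :: int
 
 record cpoint = point +
   colour :: string
 
 text \<open>
   Selectors and updates then behave as described below; the proof is by
   the standard record simplifications:
 \<close>
 
 definition p :: point
 where "p = \<lparr>x = 1, y = 2\<rparr>"
 
 lemma "y (p\<lparr>x := 3\<rparr>) = 2"
   by (simp add: p_def)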
 
 
@@ -929,27 +923,27 @@
 
 text \<open>Any record definition of the form presented above produces certain
   standard operations. Selectors and updates are provided for any field,
-  including the improper one ``@{text more}''. There are also cumulative
+  including the improper one ``\<open>more\<close>''. There are also cumulative
   record constructor functions. To simplify the presentation below, we
-  assume for now that @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m) t"} is a root record with fields
-  @{text "c\<^sub>1 :: \<sigma>\<^sub>1, \<dots>, c\<^sub>n :: \<sigma>\<^sub>n"}.
+  assume for now that \<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m) t\<close> is a root record with fields
+  \<open>c\<^sub>1 :: \<sigma>\<^sub>1, \<dots>, c\<^sub>n :: \<sigma>\<^sub>n\<close>.
 
   \<^medskip>
   \<^bold>\<open>Selectors\<close> and \<^bold>\<open>updates\<close> are available for any
-  field (including ``@{text more}''):
+  field (including ``\<open>more\<close>''):
 
   \begin{matharray}{lll}
-    @{text "c\<^sub>i"} & @{text "::"} & @{text "\<lparr>\<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr> \<Rightarrow> \<sigma>\<^sub>i"} \\
-    @{text "c\<^sub>i_update"} & @{text "::"} & @{text "\<sigma>\<^sub>i \<Rightarrow> \<lparr>\<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr> \<Rightarrow> \<lparr>\<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr>"} \\
+    \<open>c\<^sub>i\<close> & \<open>::\<close> & \<open>\<lparr>\<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr> \<Rightarrow> \<sigma>\<^sub>i\<close> \\
+    \<open>c\<^sub>i_update\<close> & \<open>::\<close> & \<open>\<sigma>\<^sub>i \<Rightarrow> \<lparr>\<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr> \<Rightarrow> \<lparr>\<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr>\<close> \\
   \end{matharray}
 
-  There is special syntax for application of updates: @{text "r\<lparr>x := a\<rparr>"}
-  abbreviates term @{text "x_update a r"}. Further notation for repeated
-  updates is also available: @{text "r\<lparr>x := a\<rparr>\<lparr>y := b\<rparr>\<lparr>z := c\<rparr>"} may be
-  written @{text "r\<lparr>x := a, y := b, z := c\<rparr>"}. Note that because of postfix
+  There is special syntax for application of updates: \<open>r\<lparr>x := a\<rparr>\<close>
+  abbreviates term \<open>x_update a r\<close>. Further notation for repeated
+  updates is also available: \<open>r\<lparr>x := a\<rparr>\<lparr>y := b\<rparr>\<lparr>z := c\<rparr>\<close> may be
+  written \<open>r\<lparr>x := a, y := b, z := c\<rparr>\<close>. Note that because of postfix
   notation the order of fields shown here is the reverse of that in the
   actual term. Since repeated updates are just function applications, fields may be
-  freely permuted in @{text "\<lparr>x := a, y := b, z := c\<rparr>"}, as far as logical
+  freely permuted in \<open>\<lparr>x := a, y := b, z := c\<rparr>\<close>, as far as logical
   equality is concerned. Thus commutativity of independent updates can be
   proven within the logic for any two fields, but not as a general theorem.
 
@@ -958,7 +952,7 @@
   constructor function:
 
   \begin{matharray}{lll}
-    @{text "t.make"} & @{text "::"} & @{text "\<sigma>\<^sub>1 \<Rightarrow> \<dots> \<sigma>\<^sub>n \<Rightarrow> \<lparr>\<^vec>c :: \<^vec>\<sigma>\<rparr>"} \\
+    \<open>t.make\<close> & \<open>::\<close> & \<open>\<sigma>\<^sub>1 \<Rightarrow> \<dots> \<sigma>\<^sub>n \<Rightarrow> \<lparr>\<^vec>c :: \<^vec>\<sigma>\<rparr>\<close> \\
   \end{matharray}
 
   \<^medskip>
@@ -967,38 +961,38 @@
   well, resulting in a list of \<^emph>\<open>ancestor records\<close>. Appending the lists
   of fields of all ancestors results in a certain field prefix. The record
   package automatically takes care of this by lifting operations over this
-  context of ancestor fields. Assuming that @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m) t"} has
-  ancestor fields @{text "b\<^sub>1 :: \<rho>\<^sub>1, \<dots>, b\<^sub>k :: \<rho>\<^sub>k"}, the above record
+  context of ancestor fields. Assuming that \<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>m) t\<close> has
+  ancestor fields \<open>b\<^sub>1 :: \<rho>\<^sub>1, \<dots>, b\<^sub>k :: \<rho>\<^sub>k\<close>, the above record
   operations will get the following types:
 
   \<^medskip>
   \begin{tabular}{lll}
-    @{text "c\<^sub>i"} & @{text "::"} & @{text "\<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr> \<Rightarrow> \<sigma>\<^sub>i"} \\
-    @{text "c\<^sub>i_update"} & @{text "::"} & @{text "\<sigma>\<^sub>i \<Rightarrow>
+    \<open>c\<^sub>i\<close> & \<open>::\<close> & \<open>\<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr> \<Rightarrow> \<sigma>\<^sub>i\<close> \\
+    \<open>c\<^sub>i_update\<close> & \<open>::\<close> & \<open>\<sigma>\<^sub>i \<Rightarrow>
       \<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr> \<Rightarrow>
-      \<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr>"} \\
-    @{text "t.make"} & @{text "::"} & @{text "\<rho>\<^sub>1 \<Rightarrow> \<dots> \<rho>\<^sub>k \<Rightarrow> \<sigma>\<^sub>1 \<Rightarrow> \<dots> \<sigma>\<^sub>n \<Rightarrow>
-      \<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>\<rparr>"} \\
+      \<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr>\<close> \\
+    \<open>t.make\<close> & \<open>::\<close> & \<open>\<rho>\<^sub>1 \<Rightarrow> \<dots> \<rho>\<^sub>k \<Rightarrow> \<sigma>\<^sub>1 \<Rightarrow> \<dots> \<sigma>\<^sub>n \<Rightarrow>
+      \<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>\<rparr>\<close> \\
   \end{tabular}
   \<^medskip>
 
   Some further operations address the extension aspect of a
-  derived record scheme specifically: @{text "t.fields"} produces a record
+  derived record scheme specifically: \<open>t.fields\<close> produces a record
   fragment consisting of exactly the new fields introduced here (the result
-  may serve as a more part elsewhere); @{text "t.extend"} takes a fixed
-  record and adds a given more part; @{text "t.truncate"} restricts a record
+  may serve as a more part elsewhere); \<open>t.extend\<close> takes a fixed
+  record and adds a given more part; \<open>t.truncate\<close> restricts a record
   scheme to a fixed record.
 
   \<^medskip>
   \begin{tabular}{lll}
-    @{text "t.fields"} & @{text "::"} & @{text "\<sigma>\<^sub>1 \<Rightarrow> \<dots> \<sigma>\<^sub>n \<Rightarrow> \<lparr>\<^vec>c :: \<^vec>\<sigma>\<rparr>"} \\
-    @{text "t.extend"} & @{text "::"} & @{text "\<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>\<rparr> \<Rightarrow>
-      \<zeta> \<Rightarrow> \<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr>"} \\
-    @{text "t.truncate"} & @{text "::"} & @{text "\<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr> \<Rightarrow> \<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>\<rparr>"} \\
+    \<open>t.fields\<close> & \<open>::\<close> & \<open>\<sigma>\<^sub>1 \<Rightarrow> \<dots> \<sigma>\<^sub>n \<Rightarrow> \<lparr>\<^vec>c :: \<^vec>\<sigma>\<rparr>\<close> \\
+    \<open>t.extend\<close> & \<open>::\<close> & \<open>\<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>\<rparr> \<Rightarrow>
+      \<zeta> \<Rightarrow> \<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr>\<close> \\
+    \<open>t.truncate\<close> & \<open>::\<close> & \<open>\<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>, \<dots> :: \<zeta>\<rparr> \<Rightarrow> \<lparr>\<^vec>b :: \<^vec>\<rho>, \<^vec>c :: \<^vec>\<sigma>\<rparr>\<close> \\
   \end{tabular}
   \<^medskip>
 
-  Note that @{text "t.make"} and @{text "t.fields"} coincide for
+  Note that \<open>t.make\<close> and \<open>t.fields\<close> coincide for
   root records.
 \<close>
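+
+text \<open>
+  As a minimal sketch (the record \<open>point\<close> and its fields are
+  hypothetical, not part of the library), a root record illustrates
+  these derived operations; here \<open>point.make\<close> and \<open>point.fields\<close>
+  coincide, and the collective fact \<open>point.defs\<close> unfolds them:
+\<close>
+
+record point =
+  xc :: int
+  yc :: int
+
+lemma "point.make a b = \<lparr>xc = a, yc = b\<rparr>"
+  by (simp add: point.defs)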
 
@@ -1009,13 +1003,12 @@
   The record package proves several results internally, declaring
   these facts to appropriate proof tools.  This enables users to
   reason about record structures quite conveniently.  Assume that
-  @{text t} is a record type as specified above.
+  \<open>t\<close> is a record type as specified above.
 
   \<^enum> Standard conversions for selectors or updates applied to record
   constructor terms are made part of the default Simplifier context; thus
   proofs by reduction of basic operations merely require the @{method simp}
-  method without further arguments. These rules are available as @{text
-  "t.simps"}, too.
+  method without further arguments. These rules are available as \<open>t.simps\<close>, too.
 
   \<^enum> Selectors applied to updated records are automatically reduced by an
   internal simplification procedure, which is also part of the standard
@@ -1023,12 +1016,12 @@
 
   \<^enum> Inject equations of a form analogous to @{prop "(x, y) = (x', y') \<equiv>
   x = x' \<and> y = y'"} are declared to the Simplifier and Classical Reasoner as
-  @{attribute iff} rules. These rules are available as @{text "t.iffs"}.
-
-  \<^enum> The introduction rule for record equality analogous to @{text "x r =
-  x r' \<Longrightarrow> y r = y r' \<dots> \<Longrightarrow> r = r'"} is declared to the Simplifier, and as the
-  basic rule context as ``@{attribute intro}@{text "?"}''. The rule is
-  called @{text "t.equality"}.
+  @{attribute iff} rules. These rules are available as \<open>t.iffs\<close>.
+
+  \<^enum> The introduction rule for record equality analogous to \<open>x r =
+  x r' \<Longrightarrow> y r = y r' \<dots> \<Longrightarrow> r = r'\<close> is declared to the Simplifier, and to the
+  basic rule context as ``@{attribute intro}\<open>?\<close>''. The rule is
+  called \<open>t.equality\<close>.
 
   \<^enum> Representations of arbitrary record expressions as canonical
   constructor terms are provided both in @{method cases} and @{method
@@ -1038,13 +1031,13 @@
 
   The generic proof methods are sufficiently smart to pick the most sensible
   rule according to the type of the indicated record expression: users just
-  need to apply something like ``@{text "(cases r)"}'' to a certain proof
+  need to apply something like ``\<open>(cases r)\<close>'' to a certain proof
   problem.
 
-  \<^enum> The derived record operations @{text "t.make"}, @{text "t.fields"},
-  @{text "t.extend"}, @{text "t.truncate"} are \<^emph>\<open>not\<close> treated
+  \<^enum> The derived record operations \<open>t.make\<close>, \<open>t.fields\<close>,
+  \<open>t.extend\<close>, \<open>t.truncate\<close> are \<^emph>\<open>not\<close> treated
   automatically, but usually need to be expanded by hand, using the
-  collective fact @{text "t.defs"}.
+  collective fact \<open>t.defs\<close>.
 \<close>
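+
+text \<open>
+  For instance (reusing the hypothetical \<open>point\<close> record from above),
+  reduction of selectors and updates, and representation as a canonical
+  constructor term, are automatic:
+\<close>
+
+lemma "xc (r\<lparr>xc := a\<rparr>) = a"
+  by simp
+
+lemma "r = \<lparr>xc = xc r, yc = yc r\<rparr>"
+  by (cases r) simp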
 
 
@@ -1057,18 +1050,18 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "typedef"} & : & @{text "local_theory \<rightarrow> proof(prove)"} \\
+    @{command_def (HOL) "typedef"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close> \\
   \end{matharray}
 
   A type definition identifies a new type with a non-empty subset of an
   existing type. More precisely, the new type is defined by exhibiting an
-  existing type @{text \<tau>}, a set @{text "A :: \<tau> set"}, and proving @{prop
-  "\<exists>x. x \<in> A"}. Thus @{text A} is a non-empty subset of @{text \<tau>}, and the
+  existing type \<open>\<tau>\<close>, a set \<open>A :: \<tau> set\<close>, and proving @{prop
+  "\<exists>x. x \<in> A"}. Thus \<open>A\<close> is a non-empty subset of \<open>\<tau>\<close>, and the
   new type denotes this subset. New functions are postulated that establish
   an isomorphism between the new type and the subset. In general, the type
-  @{text \<tau>} may involve type variables @{text "\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n"} which means
-  that the type definition produces a type constructor @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n)
-  t"} depending on those type arguments.
+  \<open>\<tau>\<close> may involve type variables \<open>\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n\<close>, which means
+  that the type definition produces a type constructor \<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n)
+  t\<close> depending on those type arguments.
 
   @{rail \<open>
     @@{command (HOL) typedef} @{syntax "overloaded"}? abs_type '=' rep_set
@@ -1122,14 +1115,14 @@
   dependent type: the meaning relies on the operations provided by different
   type-class instances.
 
-  \<^descr> @{command (HOL) "typedef"}~@{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t = A"} defines a
-  new type @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t"} from the set @{text A} over an existing
-  type. The set @{text A} may contain type variables @{text "\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n"}
-  as specified on the LHS, but no term variables. Non-emptiness of @{text A}
+  \<^descr> @{command (HOL) "typedef"}~\<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t = A\<close> defines a
+  new type \<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t\<close> from the set \<open>A\<close> over an existing
+  type. The set \<open>A\<close> may contain type variables \<open>\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n\<close>
+  as specified on the LHS, but no term variables. Non-emptiness of \<open>A\<close>
   needs to be proven on the spot, in order to turn the internal conditional
   characterization into usable theorems.
 
-  The ``@{text "(overloaded)"}'' option allows the @{command
+  The ``\<open>(overloaded)\<close>'' option allows the @{command
   "typedef"} specification to depend on constants that are not (yet)
   specified and thus left open as parameters, e.g.\ type-class parameters.
 
@@ -1138,11 +1131,10 @@
   syntactically impossible in HOL. The non-emptiness proof may formally
   depend on local assumptions, but this has little practical relevance.
 
-  For @{command (HOL) "typedef"}~@{text "t = A"} the newly introduced
-  type @{text t} is accompanied by a pair of morphisms to relate it to
+  For @{command (HOL) "typedef"}~\<open>t = A\<close> the newly introduced
+  type \<open>t\<close> is accompanied by a pair of morphisms to relate it to
   the representing set over the old type.  By default, the injection
-  from type to set is called @{text Rep_t} and its inverse @{text
-  Abs_t}: An explicit @{keyword (HOL) "morphisms"} specification
+  from type to set is called \<open>Rep_t\<close> and its inverse \<open>Abs_t\<close>; an explicit @{keyword (HOL) "morphisms"} specification
  allows alternative names to be provided.
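+
+  For example (the type \<open>pos\<close> and its morphism names are hypothetical):
+
+    \<open>typedef pos = "{n::nat. 0 < n}" morphisms nat_of_pos pos_of_nat
+      by (rule exI [of _ 1]) simp\<close>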
 
   The logical characterization of @{command typedef} uses the predicate of
@@ -1150,7 +1142,7 @@
   basic consequences of that are instantiated accordingly, re-using the
   locale facts with names derived from the new type constructor. Thus the
   generic theorem @{thm type_definition.Rep} is turned into the specific
-  @{text "Rep_t"}, for example.
+  \<open>Rep_t\<close>, for example.
 
   Theorems @{thm type_definition.Rep}, @{thm
   type_definition.Rep_inverse}, and @{thm type_definition.Abs_inverse}
@@ -1206,14 +1198,14 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "functor"} & : & @{text "local_theory \<rightarrow> proof(prove)"}
+    @{command_def (HOL) "functor"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close>
   \end{matharray}
 
   @{rail \<open>
     @@{command (HOL) functor} (@{syntax name} ':')? @{syntax term}
   \<close>}
 
-  \<^descr> @{command (HOL) "functor"}~@{text "prefix: m"} allows to prove and
+  \<^descr> @{command (HOL) "functor"}~\<open>prefix: m\<close> allows to prove and
   register properties about the functorial structure of type constructors.
   These properties then can be used by other packages to deal with those
   type constructors in certain type constructions. Characteristic theorems
@@ -1221,20 +1213,18 @@
   the base name of the type constructor, an explicit prefix can be given
   alternatively.
 
-  The given term @{text "m"} is considered as \<^emph>\<open>mapper\<close> for the
+  The given term \<open>m\<close> is considered as \<^emph>\<open>mapper\<close> for the
   corresponding type constructor and must conform to the following type
   pattern:
 
   \begin{matharray}{lll}
-    @{text "m"} & @{text "::"} &
-      @{text "\<sigma>\<^sub>1 \<Rightarrow> \<dots> \<sigma>\<^sub>k \<Rightarrow> (\<^vec>\<alpha>\<^sub>n) t \<Rightarrow> (\<^vec>\<beta>\<^sub>n) t"} \\
+    \<open>m\<close> & \<open>::\<close> &
+      \<open>\<sigma>\<^sub>1 \<Rightarrow> \<dots> \<sigma>\<^sub>k \<Rightarrow> (\<^vec>\<alpha>\<^sub>n) t \<Rightarrow> (\<^vec>\<beta>\<^sub>n) t\<close> \\
   \end{matharray}
 
-  where @{text t} is the type constructor, @{text "\<^vec>\<alpha>\<^sub>n"}
-  and @{text "\<^vec>\<beta>\<^sub>n"} are distinct type variables free in the local
-  theory and @{text "\<sigma>\<^sub>1"}, \ldots, @{text "\<sigma>\<^sub>k"} is a subsequence of @{text
-  "\<alpha>\<^sub>1 \<Rightarrow> \<beta>\<^sub>1"}, @{text "\<beta>\<^sub>1 \<Rightarrow> \<alpha>\<^sub>1"}, \ldots, @{text "\<alpha>\<^sub>n \<Rightarrow> \<beta>\<^sub>n"}, @{text
-  "\<beta>\<^sub>n \<Rightarrow> \<alpha>\<^sub>n"}.
+  where \<open>t\<close> is the type constructor, \<open>\<^vec>\<alpha>\<^sub>n\<close>
+  and \<open>\<^vec>\<beta>\<^sub>n\<close> are distinct type variables free in the local
+  theory and \<open>\<sigma>\<^sub>1\<close>, \ldots, \<open>\<sigma>\<^sub>k\<close> is a subsequence of \<open>\<alpha>\<^sub>1 \<Rightarrow> \<beta>\<^sub>1\<close>, \<open>\<beta>\<^sub>1 \<Rightarrow> \<alpha>\<^sub>1\<close>, \ldots, \<open>\<alpha>\<^sub>n \<Rightarrow> \<beta>\<^sub>n\<close>, \<open>\<beta>\<^sub>n \<Rightarrow> \<alpha>\<^sub>n\<close>.
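+
+  For example, the list mapper \<open>map\<close> fits this pattern; it is
+  registered along these lines in theory \<open>List\<close> of the sources, with
+  the composition and identity laws as proof obligations:
+
+    \<open>functor map: map
+      by (simp_all add: id_def)\<close>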
 \<close>
 
 
@@ -1252,7 +1242,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "quotient_type"} & : & @{text "local_theory \<rightarrow> proof(prove)"}\\
+    @{command_def (HOL) "quotient_type"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close>\\
   \end{matharray}
 
   @{rail \<open>
@@ -1267,14 +1257,11 @@
     quot_parametric: @'parametric' @{syntax thmref}
   \<close>}
 
-  \<^descr> @{command (HOL) "quotient_type"} defines a new quotient type @{text
-  \<tau>}. The injection from a quotient type to a raw type is called @{text
-  rep_\<tau>}, its inverse @{text abs_\<tau>} unless explicit @{keyword (HOL)
+  \<^descr> @{command (HOL) "quotient_type"} defines a new quotient type \<open>\<tau>\<close>. The injection from a quotient type to a raw type is called \<open>rep_\<tau>\<close>, its inverse \<open>abs_\<tau>\<close> unless explicit @{keyword (HOL)
   "morphisms"} specification provides alternative names. @{command (HOL)
   "quotient_type"} requires the user to prove that the relation is an
-  equivalence relation (predicate @{text equivp}), unless the user specifies
-  explicitly @{text partial} in which case the obligation is @{text
-  part_equivp}. A quotient defined with @{text partial} is weaker in the
+  equivalence relation (predicate \<open>equivp\<close>), unless the user explicitly
+  specifies \<open>partial\<close>, in which case the obligation is \<open>part_equivp\<close>. A quotient defined with \<open>partial\<close> is weaker in the
  sense that fewer facts can be proved automatically.
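+
+  For example, the integers are constructed in theory \<open>Int\<close> along
+  these lines, as a quotient of pairs of naturals under the relation
+  \<open>intrel\<close> (equal difference):
+
+    \<open>quotient_type int = "nat \<times> nat" / "intrel"
+      by (rule equivp_intrel)\<close>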
 
   The command internally proves a Quotient theorem and sets up the Lifting
@@ -1308,18 +1295,18 @@
   "Huffman-Kuncar:2013:lifting_transfer"}.
 
   \begin{matharray}{rcl}
-    @{command_def (HOL) "setup_lifting"} & : & @{text "local_theory \<rightarrow> local_theory"}\\
-    @{command_def (HOL) "lift_definition"} & : & @{text "local_theory \<rightarrow> proof(prove)"}\\
-    @{command_def (HOL) "lifting_forget"} & : & @{text "local_theory \<rightarrow> local_theory"}\\
-    @{command_def (HOL) "lifting_update"} & : & @{text "local_theory \<rightarrow> local_theory"}\\
-    @{command_def (HOL) "print_quot_maps"} & : & @{text "context \<rightarrow>"}\\
-    @{command_def (HOL) "print_quotients"} & : & @{text "context \<rightarrow>"}\\
-    @{attribute_def (HOL) "quot_map"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "relator_eq_onp"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "relator_mono"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "relator_distr"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "quot_del"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "lifting_restore"} & : & @{text attribute} \\
+    @{command_def (HOL) "setup_lifting"} & : & \<open>local_theory \<rightarrow> local_theory\<close>\\
+    @{command_def (HOL) "lift_definition"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close>\\
+    @{command_def (HOL) "lifting_forget"} & : & \<open>local_theory \<rightarrow> local_theory\<close>\\
+    @{command_def (HOL) "lifting_update"} & : & \<open>local_theory \<rightarrow> local_theory\<close>\\
+    @{command_def (HOL) "print_quot_maps"} & : & \<open>context \<rightarrow>\<close>\\
+    @{command_def (HOL) "print_quotients"} & : & \<open>context \<rightarrow>\<close>\\
+    @{attribute_def (HOL) "quot_map"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "relator_eq_onp"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "relator_mono"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "relator_distr"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "quot_del"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "lifting_restore"} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1352,7 +1339,7 @@
     for @{term R} can be provided as a third argument. This allows the package
     to generate a stronger transfer rule for equality.
 
-    Users generally will not prove the @{text Quotient} theorem manually for
+    Users generally will not prove the \<open>Quotient\<close> theorem manually for
     new types, as special commands exist to automate the process.
 
     \<^enum> When a new subtype is defined by @{command (HOL) typedef}, @{command
@@ -1373,61 +1360,60 @@
   by @{command (HOL) "lift_definition"}, the Lifting package proves and
   registers a code equation (if there is one) for the new constant.
 
-  \<^descr> @{command (HOL) "lift_definition"} @{text "f :: \<tau>"} @{keyword (HOL)
-  "is"} @{text t} Defines a new function @{text f} with an abstract type
-  @{text \<tau>} in terms of a corresponding operation @{text t} on a
-  representation type. More formally, if @{text "t :: \<sigma>"}, then the command
-  builds a term @{text "F"} as a corresponding combination of abstraction
-  and representation functions such that @{text "F :: \<sigma> \<Rightarrow> \<tau>" } and defines
-  @{text "f \<equiv> F t"}. The term @{text t} does not have to be necessarily a
+  \<^descr> @{command (HOL) "lift_definition"} \<open>f :: \<tau>\<close> @{keyword (HOL)
+  "is"} \<open>t\<close> Defines a new function \<open>f\<close> with an abstract type
+  \<open>\<tau>\<close> in terms of a corresponding operation \<open>t\<close> on a
+  representation type. More formally, if \<open>t :: \<sigma>\<close>, then the command
+  builds a term \<open>F\<close> as a corresponding combination of abstraction
+  and representation functions such that \<open>F :: \<sigma> \<Rightarrow> \<tau>\<close> and defines
+  \<open>f \<equiv> F t\<close>. The term \<open>t\<close> does not have to be necessarily a
   constant but it can be any term.
 
   The command opens a proof and the user must discharge a respectfulness
-  proof obligation. For a type copy, i.e.\ a typedef with @{text UNIV}, the
+  proof obligation. For a type copy, i.e.\ a typedef with \<open>UNIV\<close>, the
   obligation is discharged automatically. The proof goal is presented in a
   user-friendly, readable form. A respectfulness theorem in the standard
-  format @{text f.rsp} and a transfer rule @{text f.transfer} for the
+  format \<open>f.rsp\<close> and a transfer rule \<open>f.transfer\<close> for the
   Transfer package are generated by the package.
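+
+  For example, assuming the hypothetical type \<open>pos\<close> from above has
+  been registered via \<open>setup_lifting type_definition_pos\<close>, a successor
+  operation on \<open>pos\<close> may be lifted from @{const Suc}:
+
+    \<open>lift_definition succ_pos :: "pos \<Rightarrow> pos" is Suc
+      by simp\<close>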
 
-  The user can specify a parametricity theorems for @{text t} after the
+  The user can specify parametricity theorems for \<open>t\<close> after the
   keyword @{keyword "parametric"}, which allows the command to generate
-  parametric transfer rules for @{text f}.
+  parametric transfer rules for \<open>f\<close>.
 
   For each constant defined through trivial quotients (type copies or
-  subtypes) @{text f.rep_eq} is generated. The equation is a code
-  certificate that defines @{text f} using the representation function.
-
-  For each constant @{text f.abs_eq} is generated. The equation is
-  unconditional for total quotients. The equation defines @{text f} using
+  subtypes), \<open>f.rep_eq\<close> is generated. The equation is a code
+  certificate that defines \<open>f\<close> using the representation function.
+
+  For each constant, \<open>f.abs_eq\<close> is generated. The equation is
+  unconditional for total quotients. The equation defines \<open>f\<close> using
   the abstraction function.
 
   \<^medskip>
   Integration with [@{attribute code} abstract]: For subtypes
   (e.g.\ corresponding to a datatype invariant, such as @{typ "'a dlist"}),
-  @{command (HOL) "lift_definition"} uses a code certificate theorem @{text
-  f.rep_eq} as a code equation. Because of the limitation of the code
-  generator, @{text f.rep_eq} cannot be used as a code equation if the
+  @{command (HOL) "lift_definition"} uses a code certificate theorem \<open>f.rep_eq\<close> as a code equation. Because of the limitation of the code
+  generator, \<open>f.rep_eq\<close> cannot be used as a code equation if the
   subtype occurs inside the result type rather than at the top level (e.g.\
   function returning @{typ "'a dlist option"} vs. @{typ "'a dlist"}).
 
   In this case, an extension of @{command (HOL) "lift_definition"} can be
-  invoked by specifying the flag @{text "code_dt"}. This extension enables
+  invoked by specifying the flag \<open>code_dt\<close>. This extension enables
  code execution through a series of internal type and lifting definitions if
-  the return type @{text "\<tau>"} meets the following inductive conditions:
-
-    \<^descr> @{text "\<tau>"} is a type variable
-
-    \<^descr> @{text "\<tau> = \<tau>\<^sub>1 \<dots> \<tau>\<^sub>n \<kappa>"},
-    where @{text "\<kappa>"} is an abstract type constructor and @{text "\<tau>\<^sub>1 \<dots> \<tau>\<^sub>n"}
+  the return type \<open>\<tau>\<close> meets the following inductive conditions:
+
+    \<^descr> \<open>\<tau>\<close> is a type variable
+
+    \<^descr> \<open>\<tau> = \<tau>\<^sub>1 \<dots> \<tau>\<^sub>n \<kappa>\<close>,
+    where \<open>\<kappa>\<close> is an abstract type constructor and \<open>\<tau>\<^sub>1 \<dots> \<tau>\<^sub>n\<close>
     do not contain abstract types (i.e.\ @{typ "int dlist"} is allowed whereas
     @{typ "int dlist dlist"} not)
 
-    \<^descr> @{text "\<tau> = \<tau>\<^sub>1 \<dots> \<tau>\<^sub>n \<kappa>"}, @{text "\<kappa>"} is a type constructor that
+    \<^descr> \<open>\<tau> = \<tau>\<^sub>1 \<dots> \<tau>\<^sub>n \<kappa>\<close>, \<open>\<kappa>\<close> is a type constructor that
     was defined as a (co)datatype whose constructor argument types do not
     contain either non-free datatypes or the function type.
 
   Integration with [@{attribute code} equation]: For total quotients,
-  @{command (HOL) "lift_definition"} uses @{text f.abs_eq} as a code
+  @{command (HOL) "lift_definition"} uses \<open>f.abs_eq\<close> as a code
   equation.
 
   \<^descr> @{command (HOL) lifting_forget} and @{command (HOL) lifting_update}
@@ -1437,17 +1423,16 @@
   when the construction is finished but it still allows additions to this
   construction when this is later necessary.
 
-  Whenever the Lifting package is set up with a new abstract type @{text
-  "\<tau>"} by @{command_def (HOL) "lift_definition"}, the package defines a new
-  bundle that is called @{text "\<tau>.lifting"}. This bundle already includes
+  Whenever the Lifting package is set up with a new abstract type \<open>\<tau>\<close> by @{command_def (HOL) "lift_definition"}, the package defines a new
+  bundle that is called \<open>\<tau>.lifting\<close>. This bundle already includes
   set-up for the Lifting package. The new transfer rules introduced by
   @{command (HOL) "lift_definition"} can be stored in the bundle by the
-  command @{command (HOL) "lifting_update"} @{text "\<tau>.lifting"}.
-
-  The command @{command (HOL) "lifting_forget"} @{text "\<tau>.lifting"} deletes
-  set-up of the Lifting package for @{text \<tau>} and deletes all the transfer
+  command @{command (HOL) "lifting_update"} \<open>\<tau>.lifting\<close>.
+
+  The command @{command (HOL) "lifting_forget"} \<open>\<tau>.lifting\<close> deletes
  the set-up of the Lifting package for \<open>\<tau>\<close> and deletes all the transfer
   rules that were introduced by @{command (HOL) "lift_definition"} using
-  @{text \<tau>} as an abstract type.
+  \<open>\<tau>\<close> as an abstract type.
 
   The stored set-up in a bundle can be reintroduced by the Isar commands for
   including a bundle (@{command "include"}, @{keyword "includes"} and
@@ -1469,7 +1454,7 @@
   that a relator applied to an equality restricted by a predicate @{term P}
   (i.e.\ @{term "eq_onp P"}) is equal to a predicator applied to the @{term
   P}. The combinator @{const eq_onp} is used for internal encoding of proper
-  subtypes. Such theorems allows the package to hide @{text eq_onp} from a
+  subtypes. Such theorems allow the package to hide \<open>eq_onp\<close> from the
  user in the user-readable form of a respectfulness theorem. For examples see
   @{file "~~/src/HOL/Lifting_Set.thy"} or @{file "~~/src/HOL/Lifting.thy"}.
   This property is proved automatically if the involved type is BNF without
@@ -1485,8 +1470,7 @@
   involved type is BNF without dead variables.
 
   \<^descr> @{attribute (HOL) "relator_distr"} registers a property describing a
-  distributivity of the relation composition and a relator. E.g.\ @{text
-  "rel_set R \<circ>\<circ> rel_set S = rel_set (R \<circ>\<circ> S)"}. This property is needed for
+  distributivity law for relation composition and a relator, e.g.\ \<open>rel_set R \<circ>\<circ> rel_set S = rel_set (R \<circ>\<circ> S)\<close>. This property is needed for
   proving a stronger transfer rule in @{command_def (HOL) "lift_definition"}
   when a parametricity theorem for the raw term is specified. When this
   equality does not hold unconditionally (e.g.\ for the function type), the
@@ -1505,16 +1489,16 @@
  mainly used for low-level manipulation of the set-up of the Lifting package,
  because @{command (HOL) lifting_forget} is preferred for normal usage.
 
-  \<^descr> @{attribute (HOL) lifting_restore} @{text "Quotient_thm pcr_def
-  pcr_cr_eq_thm"} registers the Quotient theorem @{text Quotient_thm} in the
+  \<^descr> @{attribute (HOL) lifting_restore} \<open>Quotient_thm pcr_def
+  pcr_cr_eq_thm\<close> registers the Quotient theorem \<open>Quotient_thm\<close> in the
   Lifting infrastructure and thus sets up lifting for an abstract type
-  @{text \<tau>} (that is defined by @{text Quotient_thm}). Optional theorems
-  @{text pcr_def} and @{text pcr_cr_eq_thm} can be specified to register the
-  parametrized correspondence relation for @{text \<tau>}. E.g.\ for @{typ "'a
-  dlist"}, @{text pcr_def} is @{text "pcr_dlist A \<equiv> list_all2 A \<circ>\<circ>
-  cr_dlist"} and @{text pcr_cr_eq_thm} is @{text "pcr_dlist (op =) = (op
-  =)"}. This attribute is rather used for low-level manipulation with set-up
-  of the Lifting package because using of the bundle @{text \<tau>.lifting}
+  \<open>\<tau>\<close> (that is defined by \<open>Quotient_thm\<close>). Optional theorems
+  \<open>pcr_def\<close> and \<open>pcr_cr_eq_thm\<close> can be specified to register the
+  parametrized correspondence relation for \<open>\<tau>\<close>. E.g.\ for @{typ "'a
+  dlist"}, \<open>pcr_def\<close> is \<open>pcr_dlist A \<equiv> list_all2 A \<circ>\<circ>
+  cr_dlist\<close> and \<open>pcr_cr_eq_thm\<close> is \<open>pcr_dlist (op =) = (op
+  =)\<close>. This attribute is rather used for low-level manipulation with set-up
+  of the Lifting package because using of the bundle \<open>\<tau>.lifting\<close>
   together with the commands @{command (HOL) lifting_forget} and @{command
   (HOL) lifting_update} is preferred for normal usage.
 
@@ -1533,20 +1517,20 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def (HOL) "transfer"} & : & @{text method} \\
-    @{method_def (HOL) "transfer'"} & : & @{text method} \\
-    @{method_def (HOL) "transfer_prover"} & : & @{text method} \\
-    @{attribute_def (HOL) "Transfer.transferred"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "untransferred"} & : & @{text attribute} \\
-    @{method_def (HOL) "transfer_start"} & : & @{text method} \\
-    @{method_def (HOL) "transfer_prover_start"} & : & @{text method} \\
-    @{method_def (HOL) "transfer_step"} & : & @{text method} \\
-    @{method_def (HOL) "transfer_end"} & : & @{text method} \\
-    @{method_def (HOL) "transfer_prover_end"} & : & @{text method} \\
-    @{attribute_def (HOL) "transfer_rule"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "transfer_domain_rule"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "relator_eq"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "relator_domain"} & : & @{text attribute} \\
+    @{method_def (HOL) "transfer"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "transfer'"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "transfer_prover"} & : & \<open>method\<close> \\
+    @{attribute_def (HOL) "Transfer.transferred"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "untransferred"} & : & \<open>attribute\<close> \\
+    @{method_def (HOL) "transfer_start"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "transfer_prover_start"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "transfer_step"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "transfer_end"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "transfer_prover_end"} & : & \<open>method\<close> \\
+    @{attribute_def (HOL) "transfer_rule"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "transfer_domain_rule"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "relator_eq"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "relator_domain"} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   \<^descr> @{method (HOL) "transfer"} method replaces the current subgoal with
@@ -1554,7 +1538,7 @@
   replacement of types and constants is guided by the database of transfer
   rules. Goals are generalized over all free variables by default; this is
   necessary for variables whose types change, but can be overridden for
-  specific variables with e.g. @{text "transfer fixing: x y z"}.
+  specific variables with e.g. \<open>transfer fixing: x y z\<close>.
 
   \<^descr> @{method (HOL) "transfer'"} is a variant of @{method (HOL) transfer}
   that allows replacing a subgoal with one that is logically stronger
@@ -1583,9 +1567,9 @@
 
   \<^descr> @{attribute (HOL) Transfer.transferred} works in the opposite
   direction than @{method (HOL) "transfer'"}. E.g.\ given the transfer
-  relation @{text "ZN x n \<equiv> (x = int n)"}, corresponding transfer rules and
-  the theorem @{text "\<forall>x::int \<in> {0..}. x < x + 1"}, the attribute would
-  prove @{text "\<forall>n::nat. n < n + 1"}. The attribute is still in experimental
+  relation \<open>ZN x n \<equiv> (x = int n)\<close>, corresponding transfer rules and
+  the theorem \<open>\<forall>x::int \<in> {0..}. x < x + 1\<close>, the attribute would
+  prove \<open>\<forall>n::nat. n < n + 1\<close>. The attribute is still in an experimental
  phase of development.
 
   \<^descr> @{attribute (HOL) "transfer_rule"} attribute maintains a collection
@@ -1594,25 +1578,25 @@
   constant, or they may relate an operation on a raw type to a corresponding
   operation on an abstract type (quotient or subtype). For example:
 
-    @{text "((A ===> B) ===> list_all2 A ===> list_all2 B) map map"} \\
-    @{text "(cr_int ===> cr_int ===> cr_int) (\<lambda>(x,y) (u,v). (x+u, y+v)) plus"}
+    \<open>((A ===> B) ===> list_all2 A ===> list_all2 B) map map\<close> \\
+    \<open>(cr_int ===> cr_int ===> cr_int) (\<lambda>(x,y) (u,v). (x+u, y+v)) plus\<close>
 
   Lemmas involving predicates on relations can also be registered using the
   same attribute. For example:
 
-    @{text "bi_unique A \<Longrightarrow> (list_all2 A ===> op =) distinct distinct"} \\
-    @{text "\<lbrakk>bi_unique A; bi_unique B\<rbrakk> \<Longrightarrow> bi_unique (rel_prod A B)"}
-
-  Preservation of predicates on relations (@{text "bi_unique, bi_total,
-  right_unique, right_total, left_unique, left_total"}) with the respect to
+    \<open>bi_unique A \<Longrightarrow> (list_all2 A ===> op =) distinct distinct\<close> \\
+    \<open>\<lbrakk>bi_unique A; bi_unique B\<rbrakk> \<Longrightarrow> bi_unique (rel_prod A B)\<close>
+
+  Preservation of predicates on relations (\<open>bi_unique, bi_total,
+  right_unique, right_total, left_unique, left_total\<close>) with respect to
   a relator is proved automatically if the involved type is BNF @{cite
   "isabelle-datatypes"} without dead variables.
 
   \<^descr> @{attribute (HOL) "transfer_domain_rule"} attribute maintains a
   collection of rules, which specify a domain of a transfer relation by a
-  predicate. E.g.\ given the transfer relation @{text "ZN x n \<equiv> (x = int
-  n)"}, one can register the following transfer domain rule: @{text "Domainp
-  ZN = (\<lambda>x. x \<ge> 0)"}. The rules allow the package to produce more readable
+  predicate. E.g.\ given the transfer relation \<open>ZN x n \<equiv> (x = int
+  n)\<close>, one can register the following transfer domain rule: \<open>Domainp
+  ZN = (\<lambda>x. x \<ge> 0)\<close>. The rules allow the package to produce more readable
   transferred goals, e.g.\ when quantifiers are transferred.
 
   \<^descr> @{attribute (HOL) relator_eq} attribute collects identity laws for
@@ -1641,23 +1625,23 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def (HOL) "quotient_definition"} & : & @{text "local_theory \<rightarrow> proof(prove)"}\\
-    @{command_def (HOL) "print_quotmapsQ3"} & : & @{text "context \<rightarrow>"}\\
-    @{command_def (HOL) "print_quotientsQ3"} & : & @{text "context \<rightarrow>"}\\
-    @{command_def (HOL) "print_quotconsts"} & : & @{text "context \<rightarrow>"}\\
-    @{method_def (HOL) "lifting"} & : & @{text method} \\
-    @{method_def (HOL) "lifting_setup"} & : & @{text method} \\
-    @{method_def (HOL) "descending"} & : & @{text method} \\
-    @{method_def (HOL) "descending_setup"} & : & @{text method} \\
-    @{method_def (HOL) "partiality_descending"} & : & @{text method} \\
-    @{method_def (HOL) "partiality_descending_setup"} & : & @{text method} \\
-    @{method_def (HOL) "regularize"} & : & @{text method} \\
-    @{method_def (HOL) "injection"} & : & @{text method} \\
-    @{method_def (HOL) "cleaning"} & : & @{text method} \\
-    @{attribute_def (HOL) "quot_thm"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "quot_lifted"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "quot_respect"} & : & @{text attribute} \\
-    @{attribute_def (HOL) "quot_preserve"} & : & @{text attribute} \\
+    @{command_def (HOL) "quotient_definition"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close>\\
+    @{command_def (HOL) "print_quotmapsQ3"} & : & \<open>context \<rightarrow>\<close>\\
+    @{command_def (HOL) "print_quotientsQ3"} & : & \<open>context \<rightarrow>\<close>\\
+    @{command_def (HOL) "print_quotconsts"} & : & \<open>context \<rightarrow>\<close>\\
+    @{method_def (HOL) "lifting"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "lifting_setup"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "descending"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "descending_setup"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "partiality_descending"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "partiality_descending_setup"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "regularize"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "injection"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "cleaning"} & : & \<open>method\<close> \\
+    @{attribute_def (HOL) "quot_thm"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "quot_lifted"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "quot_respect"} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) "quot_preserve"} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1744,11 +1728,11 @@
   These tools are available via the following commands.
 
   \begin{matharray}{rcl}
-    @{command_def (HOL) "solve_direct"}@{text "\<^sup>*"} & : & @{text "proof \<rightarrow>"} \\
-    @{command_def (HOL) "try"}@{text "\<^sup>*"} & : & @{text "proof \<rightarrow>"} \\
-    @{command_def (HOL) "try0"}@{text "\<^sup>*"} & : & @{text "proof \<rightarrow>"} \\
-    @{command_def (HOL) "sledgehammer"}@{text "\<^sup>*"} & : & @{text "proof \<rightarrow>"} \\
-    @{command_def (HOL) "sledgehammer_params"} & : & @{text "theory \<rightarrow> theory"}
+    @{command_def (HOL) "solve_direct"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow>\<close> \\
+    @{command_def (HOL) "try"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow>\<close> \\
+    @{command_def (HOL) "try0"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow>\<close> \\
+    @{command_def (HOL) "sledgehammer"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow>\<close> \\
+    @{command_def (HOL) "sledgehammer_params"} & : & \<open>theory \<rightarrow> theory\<close>
   \end{matharray}
 
   @{rail \<open>
@@ -1776,8 +1760,7 @@
   \<^descr> @{command (HOL) "try0"} attempts to prove a subgoal
   using a combination of standard proof methods (@{method auto},
   @{method simp}, @{method blast}, etc.).  Additional facts supplied
-  via @{text "simp:"}, @{text "intro:"}, @{text "elim:"}, and @{text
-  "dest:"} are passed to the appropriate proof methods.
+  via \<open>simp:\<close>, \<open>intro:\<close>, \<open>elim:\<close>, and \<open>dest:\<close> are passed to the appropriate proof methods.
 
   \<^descr> @{command (HOL) "try"} attempts to prove or disprove a subgoal
   using a combination of provers and disprovers (@{command (HOL)
@@ -1803,14 +1786,14 @@
   is supported by the following commands.
 
   \begin{matharray}{rcl}
-    @{command_def (HOL) "value"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def (HOL) "values"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def (HOL) "quickcheck"}@{text "\<^sup>*"} & : & @{text "proof \<rightarrow>"} \\
-    @{command_def (HOL) "nitpick"}@{text "\<^sup>*"} & : & @{text "proof \<rightarrow>"} \\
-    @{command_def (HOL) "quickcheck_params"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def (HOL) "nitpick_params"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def (HOL) "quickcheck_generator"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def (HOL) "find_unused_assms"} & : & @{text "context \<rightarrow>"}
+    @{command_def (HOL) "value"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def (HOL) "values"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def (HOL) "quickcheck"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow>\<close> \\
+    @{command_def (HOL) "nitpick"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow>\<close> \\
+    @{command_def (HOL) "quickcheck_params"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def (HOL) "nitpick_params"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def (HOL) "quickcheck_generator"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def (HOL) "find_unused_assms"} & : & \<open>context \<rightarrow>\<close>
   \end{matharray}
 
   @{rail \<open>
@@ -1839,20 +1822,20 @@
     args: ( @{syntax name} '=' value + ',' )
   \<close>} % FIXME check "value"
 
-  \<^descr> @{command (HOL) "value"}~@{text t} evaluates and prints a
-  term; optionally @{text modes} can be specified, which are appended
+  \<^descr> @{command (HOL) "value"}~\<open>t\<close> evaluates and prints a
+  term; optionally \<open>modes\<close> can be specified, which are appended
   to the current print mode; see \secref{sec:print-modes}.
   Evaluation is tried first using ML, falling
   back to normalization by evaluation if this fails.
   Alternatively a specific evaluator can be selected using square
   brackets; typical evaluators use the current set of code equations
-  to normalize and include @{text simp} for fully symbolic evaluation
-  using the simplifier, @{text nbe} for \<^emph>\<open>normalization by
+  to normalize and include \<open>simp\<close> for fully symbolic evaluation
+  using the simplifier, \<open>nbe\<close> for \<^emph>\<open>normalization by
   evaluation\<close> and \<^emph>\<open>code\<close> for code generation in SML.
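+
+  For instance (the second form evaluates a term with free variables
+  symbolically):
+
+    \<open>value "rev [1 :: nat, 2, 3]"\<close> \\
+    \<open>value [simp] "rev [a, b, c]"\<close>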
 
-  \<^descr> @{command (HOL) "values"}~@{text t} enumerates a set
+  \<^descr> @{command (HOL) "values"}~\<open>t\<close> enumerates a set
   comprehension by evaluation and prints its values up to the given
-  number of solutions; optionally @{text modes} can be specified,
+  number of solutions; optionally \<open>modes\<close> can be specified,
   which are appended to the current print mode; see
   \secref{sec:print-modes}.
 
@@ -1866,10 +1849,10 @@
   quickcheck uses exhaustive testing.  A number of configuration
   options are supported for @{command (HOL) "quickcheck"}, notably:
 
-    \<^descr>[@{text tester}] specifies which testing approach to apply.
-    There are three testers, @{text exhaustive}, @{text random}, and
-    @{text narrowing}.  An unknown configuration option is treated as
-    an argument to tester, making @{text "tester ="} optional.  When
+    \<^descr>[\<open>tester\<close>] specifies which testing approach to apply.
+    There are three testers, \<open>exhaustive\<close>, \<open>random\<close>, and
+    \<open>narrowing\<close>.  An unknown configuration option is treated as
+    an argument to tester, making \<open>tester =\<close> optional.  When
     multiple testers are given, these are applied in parallel.  If no
     tester is specified, quickcheck uses the testers that are set
     active, i.e.\ configurations @{attribute
@@ -1877,67 +1860,66 @@
     quickcheck_random_active}, @{attribute
     quickcheck_narrowing_active} are set to true.
 
-    \<^descr>[@{text size}] specifies the maximum size of the search space
+    \<^descr>[\<open>size\<close>] specifies the maximum size of the search space
     for assignment values.
 
-    \<^descr>[@{text genuine_only}] sets quickcheck only to return genuine
+    \<^descr>[\<open>genuine_only\<close>] sets quickcheck to return only genuine
    counterexamples, but not potentially spurious counterexamples due
     to underspecified functions.
 
-    \<^descr>[@{text abort_potential}] sets quickcheck to abort once it
+    \<^descr>[\<open>abort_potential\<close>] sets quickcheck to abort once it
    finds a potentially spurious counterexample and not to continue
    searching for a further genuine counterexample.
-    For this option to be effective, the @{text genuine_only} option
+    For this option to be effective, the \<open>genuine_only\<close> option
     must be set to false.
 
-    \<^descr>[@{text eval}] takes a term or a list of terms and evaluates
+    \<^descr>[\<open>eval\<close>] takes a term or a list of terms and evaluates
     these terms under the variable assignment found by quickcheck.
     This option is currently only supported by the default
     (exhaustive) tester.
 
-    \<^descr>[@{text iterations}] sets how many sets of assignments are
+    \<^descr>[\<open>iterations\<close>] sets how many sets of assignments are
     generated for each particular size.
 
-    \<^descr>[@{text no_assms}] specifies whether assumptions in
+    \<^descr>[\<open>no_assms\<close>] specifies whether assumptions in
     structured proofs should be ignored.
 
-    \<^descr>[@{text locale}] specifies how to process conjectures in
+    \<^descr>[\<open>locale\<close>] specifies how to process conjectures in
     a locale context, i.e.\ they can be interpreted or expanded.
     The option is a whitespace-separated list of the two words
-    @{text interpret} and @{text expand}. The list determines the
+    \<open>interpret\<close> and \<open>expand\<close>. The list determines the
    order in which they are employed. The default setting is to first use
     interpretations and then test the expanded conjecture.
     The option is only provided as attribute declaration, but not
     as parameter to the command.
 
-    \<^descr>[@{text timeout}] sets the time limit in seconds.
-
-    \<^descr>[@{text default_type}] sets the type(s) generally used to
+    \<^descr>[\<open>timeout\<close>] sets the time limit in seconds.
+
+    \<^descr>[\<open>default_type\<close>] sets the type(s) generally used to
     instantiate type variables.
 
-    \<^descr>[@{text report}] if set quickcheck reports how many tests
+    \<^descr>[\<open>report\<close>] if set, quickcheck reports how many tests
     fulfilled the preconditions.
 
-    \<^descr>[@{text use_subtype}] if set quickcheck automatically lifts
+    \<^descr>[\<open>use_subtype\<close>] if set, quickcheck automatically lifts
     conjectures to registered subtypes if possible, and tests the
     lifted conjecture.
 
-    \<^descr>[@{text quiet}] if set quickcheck does not output anything
+    \<^descr>[\<open>quiet\<close>] if set, quickcheck does not output anything
     while testing.
 
-    \<^descr>[@{text verbose}] if set quickcheck informs about the current
+    \<^descr>[\<open>verbose\<close>] if set, quickcheck informs about the current
     size and cardinality while testing.
 
-    \<^descr>[@{text expect}] can be used to check if the user's
-    expectation was met (@{text no_expectation}, @{text
-    no_counterexample}, or @{text counterexample}).
+    \<^descr>[\<open>expect\<close>] can be used to check if the user's
+    expectation was met (\<open>no_expectation\<close>, \<open>no_counterexample\<close>, or \<open>counterexample\<close>).
 
  These options can be given within square brackets.
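+
+  For example, a plausible invocation on a false conjecture, which is
+  then abandoned, is:
+
+    \<open>lemma "rev xs = xs" quickcheck [tester = exhaustive, size = 4] oops\<close>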
 
   Using the following type classes, the testers generate values and convert
   them back into Isabelle terms for displaying counterexamples.
 
-    \<^descr>[@{text exhaustive}] The parameters of the type classes @{class exhaustive}
+    \<^descr>[\<open>exhaustive\<close>] The parameters of the type classes @{class exhaustive}
     and @{class full_exhaustive} implement the testing. They take a
     testing function as a parameter, which takes a value of type @{typ "'a"}
     and optionally produces a counterexample, and a size parameter for the test values.
@@ -1945,17 +1927,17 @@
     expects a lazy term reconstruction in the type @{typ Code_Evaluation.term}
     of the tested value.
 
-    The canonical implementation for @{text exhaustive} testers calls the given
+    The canonical implementation for \<open>exhaustive\<close> testers calls the given
     testing function on all values up to the given size and stops as soon
     as a counterexample is found.
 
-    \<^descr>[@{text random}] The operation @{const Quickcheck_Random.random}
+    \<^descr>[\<open>random\<close>] The operation @{const Quickcheck_Random.random}
     of the type class @{class random} generates a pseudo-random
     value of the given size and a lazy term reconstruction of the value
     in the type @{typ Code_Evaluation.term}. A pseudo-randomness generator
     is defined in theory @{theory Random}.
 
-    \<^descr>[@{text narrowing}] implements Haskell's Lazy Smallcheck @{cite "runciman-naylor-lindblad"}
+    \<^descr>[\<open>narrowing\<close>] implements Haskell's Lazy Smallcheck @{cite "runciman-naylor-lindblad"}
     using the type classes @{class narrowing} and @{class partial_term_of}.
     Variables in the current goal are initially represented as symbolic variables.
     If the execution of the goal tries to evaluate one of them, the test engine
@@ -1978,7 +1960,7 @@
     refined if needed.
 
     To reconstruct counterexamples, the operation @{const partial_term_of} transforms
-    @{text narrowing}'s deep representation of terms to the type @{typ Code_Evaluation.term}.
+    \<open>narrowing\<close>'s deep representation of terms to the type @{typ Code_Evaluation.term}.
     The deep representation models symbolic variables as
     @{const Quickcheck_Narrowing.Narrowing_variable}, which are normally converted to
     @{const Code_Evaluation.Free}, and refined values as
@@ -2019,11 +2001,11 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{attribute_def (HOL) coercion} & : & @{text attribute} \\
-    @{attribute_def (HOL) coercion_delete} & : & @{text attribute} \\
-    @{attribute_def (HOL) coercion_enabled} & : & @{text attribute} \\
-    @{attribute_def (HOL) coercion_map} & : & @{text attribute} \\
-    @{attribute_def (HOL) coercion_args} & : & @{text attribute} \\
+    @{attribute_def (HOL) coercion} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) coercion_delete} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) coercion_enabled} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) coercion_map} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) coercion_args} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   Coercive subtyping allows the user to omit explicit type
@@ -2041,41 +2023,41 @@
     @@{attribute (HOL) coercion_args} (@{syntax const}) (('+' | '0' | '-')+)
   \<close>}
 
-  \<^descr> @{attribute (HOL) "coercion"}~@{text "f"} registers a new
-  coercion function @{text "f :: \<sigma>\<^sub>1 \<Rightarrow> \<sigma>\<^sub>2"} where @{text "\<sigma>\<^sub>1"} and
-  @{text "\<sigma>\<^sub>2"} are type constructors without arguments.  Coercions are
+  \<^descr> @{attribute (HOL) "coercion"}~\<open>f\<close> registers a new
+  coercion function \<open>f :: \<sigma>\<^sub>1 \<Rightarrow> \<sigma>\<^sub>2\<close> where \<open>\<sigma>\<^sub>1\<close> and
+  \<open>\<sigma>\<^sub>2\<close> are type constructors without arguments.  Coercions are
   composed by the inference algorithm if needed.  Note that the type
   inference algorithm is complete only if the registered coercions
   form a lattice.
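+
+  For example, the embedding of natural numbers into the integers may
+  be registered along these lines:
+
+    \<open>declare [[coercion_enabled]]\<close> \\
+    \<open>declare [[coercion "int :: nat \<Rightarrow> int"]]\<close>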
 
-  \<^descr> @{attribute (HOL) "coercion_delete"}~@{text "f"} deletes a
+  \<^descr> @{attribute (HOL) "coercion_delete"}~\<open>f\<close> deletes a
   preceding declaration (using @{attribute (HOL) "coercion"}) of the
-  function @{text "f :: \<sigma>\<^sub>1 \<Rightarrow> \<sigma>\<^sub>2"} as a coercion.
-
-  \<^descr> @{attribute (HOL) "coercion_map"}~@{text "map"} registers a
+  function \<open>f :: \<sigma>\<^sub>1 \<Rightarrow> \<sigma>\<^sub>2\<close> as a coercion.
+
+  \<^descr> @{attribute (HOL) "coercion_map"}~\<open>map\<close> registers a
   new map function to lift coercions through type constructors. The
-  function @{text "map"} must conform to the following type pattern
+  function \<open>map\<close> must conform to the following type pattern
 
   \begin{matharray}{lll}
-    @{text "map"} & @{text "::"} &
-      @{text "f\<^sub>1 \<Rightarrow> \<dots> \<Rightarrow> f\<^sub>n \<Rightarrow> (\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t \<Rightarrow> (\<beta>\<^sub>1, \<dots>, \<beta>\<^sub>n) t"} \\
+    \<open>map\<close> & \<open>::\<close> &
+      \<open>f\<^sub>1 \<Rightarrow> \<dots> \<Rightarrow> f\<^sub>n \<Rightarrow> (\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t \<Rightarrow> (\<beta>\<^sub>1, \<dots>, \<beta>\<^sub>n) t\<close> \\
   \end{matharray}
 
-  where @{text "t"} is a type constructor and @{text "f\<^sub>i"} is of type
-  @{text "\<alpha>\<^sub>i \<Rightarrow> \<beta>\<^sub>i"} or @{text "\<beta>\<^sub>i \<Rightarrow> \<alpha>\<^sub>i"}.  Registering a map function
+  where \<open>t\<close> is a type constructor and \<open>f\<^sub>i\<close> is of type
+  \<open>\<alpha>\<^sub>i \<Rightarrow> \<beta>\<^sub>i\<close> or \<open>\<beta>\<^sub>i \<Rightarrow> \<alpha>\<^sub>i\<close>.  Registering a map function
   overwrites any existing map function for this particular type
   constructor.
 
   \<^descr> @{attribute (HOL) "coercion_args"} can be used to disallow
   coercions to be inserted in certain positions in a term. For example,
-  given the constant @{text "c :: \<sigma>\<^sub>1 \<Rightarrow> \<sigma>\<^sub>2 \<Rightarrow> \<sigma>\<^sub>3 \<Rightarrow> \<sigma>\<^sub>4"} and the list
-  of policies @{text "- + 0"} as arguments, coercions will not be
-  inserted in the first argument of @{text "c"} (policy @{text "-"});
-  they may be inserted in the second argument (policy @{text "+"})
-  even if the constant @{text "c"} itself is in a position where
+  given the constant \<open>c :: \<sigma>\<^sub>1 \<Rightarrow> \<sigma>\<^sub>2 \<Rightarrow> \<sigma>\<^sub>3 \<Rightarrow> \<sigma>\<^sub>4\<close> and the list
+  of policies \<open>- + 0\<close> as arguments, coercions will not be
+  inserted in the first argument of \<open>c\<close> (policy \<open>-\<close>);
+  they may be inserted in the second argument (policy \<open>+\<close>)
+  even if the constant \<open>c\<close> itself is in a position where
   coercions are disallowed; the third argument inherits the allowance
-  of coercsion insertion from the position of the constant @{text "c"}
-  (policy @{text "0"}). The standard usage of policies is the definition
+  of coercion insertion from the position of the constant \<open>c\<close>
+  (policy \<open>0\<close>). The standard usage of policies is the definition
  of syntactic constructs (usually extralogical, i.e., processed and
  stripped during type inference) that should not be destroyed by the
   insertion of coercions (see, for example, the setup for the case syntax
@@ -2090,13 +2072,13 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def (HOL) arith} & : & @{text method} \\
-    @{attribute_def (HOL) arith} & : & @{text attribute} \\
-    @{attribute_def (HOL) arith_split} & : & @{text attribute} \\
+    @{method_def (HOL) arith} & : & \<open>method\<close> \\
+    @{attribute_def (HOL) arith} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) arith_split} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   \<^descr> @{method (HOL) arith} decides linear arithmetic problems (on
-  types @{text nat}, @{text int}, @{text real}).  Any current facts
+  types \<open>nat\<close>, \<open>int\<close>, \<open>real\<close>).  Any current facts
   are inserted into the goal before running the procedure.
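+
+  For example, a goal in linear arithmetic over \<open>nat\<close>:
+
+    \<open>lemma "(n :: nat) \<le> m \<Longrightarrow> n < m + 1" by arith\<close>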
 
   \<^descr> @{attribute (HOL) arith} declares facts that are supplied to
@@ -2115,7 +2097,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def (HOL) iprover} & : & @{text method} \\
+    @{method_def (HOL) iprover} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -2129,9 +2111,9 @@
 
   Rules need to be classified as @{attribute (Pure) intro},
   @{attribute (Pure) elim}, or @{attribute (Pure) dest}; here the
-  ``@{text "!"}'' indicator refers to ``safe'' rules, which may be
+  ``\<open>!\<close>'' indicator refers to ``safe'' rules, which may be
   applied aggressively (without considering back-tracking later).
-  Rules declared with ``@{text "?"}'' are ignored in proof search (the
+  Rules declared with ``\<open>?\<close>'' are ignored in proof search (the
   single-step @{method (Pure) rule} method still observes these).  An
   explicit weight annotation may be given as well; otherwise the
   number of rule premises will be taken into account here.
@@ -2142,8 +2124,8 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def (HOL) "meson"} & : & @{text method} \\
-    @{method_def (HOL) "metis"} & : & @{text method} \\
+    @{method_def (HOL) "meson"} & : & \<open>method\<close> \\
+    @{method_def (HOL) "metis"} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -2171,8 +2153,8 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def (HOL) "algebra"} & : & @{text method} \\
-    @{attribute_def (HOL) algebra} & : & @{text attribute} \\
+    @{method_def (HOL) "algebra"} & : & \<open>method\<close> \\
+    @{attribute_def (HOL) algebra} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -2195,8 +2177,8 @@
     the underlying method is based on Hilbert's Nullstellensatz, where
     the equivalence only holds for algebraically closed fields.
 
-    The problems can contain equations @{text "p = 0"} or inequations
-    @{text "q \<noteq> 0"} anywhere within a universal problem statement.
+    The problems can contain equations \<open>p = 0\<close> or inequations
+    \<open>q \<noteq> 0\<close> anywhere within a universal problem statement.
 
     \<^enum> All-exists problems of the following restricted (but useful)
     form:
@@ -2208,11 +2190,11 @@
         \<dots> \<and>
         p\<^sub>t\<^sub>1(x\<^sub>1, \<dots>, x\<^sub>n) * y\<^sub>1 + \<dots> + p\<^sub>t\<^sub>k(x\<^sub>1, \<dots>, x\<^sub>n) * y\<^sub>k = 0)"}
 
-    Here @{text "e\<^sub>1, \<dots>, e\<^sub>n"} and the @{text "p\<^sub>i\<^sub>j"} are multivariate
+    Here \<open>e\<^sub>1, \<dots>, e\<^sub>n\<close> and the \<open>p\<^sub>i\<^sub>j\<close> are multivariate
     polynomials only in the variables mentioned as arguments.
 
   The proof method is preceded by a simplification step, which may be
-  modified by using the form @{text "(algebra add: ths\<^sub>1 del: ths\<^sub>2)"}.
+  modified by using the form \<open>(algebra add: ths\<^sub>1 del: ths\<^sub>2)\<close>.
   This acts like declarations for the Simplifier
   (\secref{sec:simplifier}) on a private simpset for this tool.
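+
+  For example, a universally quantified ring identity of the first
+  kind can be settled directly:
+
+    \<open>lemma "(x :: int) * (x + y) = x * x + x * y" by algebra\<close>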
 
@@ -2249,7 +2231,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def (HOL) "coherent"} & : & @{text method} \\
+    @{method_def (HOL) "coherent"} & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -2271,10 +2253,10 @@
   \secref{sec:cases-induct} for proper Isar versions of similar ideas.
 
   \begin{matharray}{rcl}
-    @{method_def (HOL) case_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def (HOL) induct_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def (HOL) ind_cases}@{text "\<^sup>*"} & : & @{text method} \\
-    @{command_def (HOL) "inductive_cases"}@{text "\<^sup>*"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
+    @{method_def (HOL) case_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def (HOL) induct_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def (HOL) ind_cases}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{command_def (HOL) "inductive_cases"}\<open>\<^sup>*\<close> & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -2321,14 +2303,14 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{attribute_def (HOL) split_format}@{text "\<^sup>*"} & : & @{text attribute} \\
+    @{attribute_def (HOL) split_format}\<open>\<^sup>*\<close> & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
     @@{attribute (HOL) split_format} ('(' 'complete' ')')?
   \<close>}
 
-  \<^descr> @{attribute (HOL) split_format}\ @{text "(complete)"} causes
+  \<^descr> @{attribute (HOL) split_format}\ \<open>(complete)\<close> causes
   arguments in function applications to be represented canonically
   according to their tuple type structure.
 
@@ -2359,22 +2341,22 @@
   within HOL. See @{cite "isabelle-codegen"} for an introduction.
 
   \begin{matharray}{rcl}
-    @{command_def (HOL) "export_code"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{attribute_def (HOL) code} & : & @{text attribute} \\
-    @{command_def (HOL) "code_datatype"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def (HOL) "print_codesetup"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{attribute_def (HOL) code_unfold} & : & @{text attribute} \\
-    @{attribute_def (HOL) code_post} & : & @{text attribute} \\
-    @{attribute_def (HOL) code_abbrev} & : & @{text attribute} \\
-    @{command_def (HOL) "print_codeproc"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def (HOL) "code_thms"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def (HOL) "code_deps"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def (HOL) "code_reserved"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def (HOL) "code_printing"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def (HOL) "code_identifier"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def (HOL) "code_monad"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def (HOL) "code_reflect"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def (HOL) "code_pred"} & : & @{text "theory \<rightarrow> proof(prove)"}
+    @{command_def (HOL) "export_code"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{attribute_def (HOL) code} & : & \<open>attribute\<close> \\
+    @{command_def (HOL) "code_datatype"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def (HOL) "print_codesetup"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{attribute_def (HOL) code_unfold} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) code_post} & : & \<open>attribute\<close> \\
+    @{attribute_def (HOL) code_abbrev} & : & \<open>attribute\<close> \\
+    @{command_def (HOL) "print_codeproc"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def (HOL) "code_thms"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def (HOL) "code_deps"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def (HOL) "code_reserved"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def (HOL) "code_printing"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def (HOL) "code_identifier"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def (HOL) "code_monad"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def (HOL) "code_reflect"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def (HOL) "code_pred"} & : & \<open>theory \<rightarrow> proof(prove)\<close>
   \end{matharray}
 
   @{rail \<open>
@@ -2469,9 +2451,9 @@
   instruction is given, only abstract code is generated internally.
 
   Constants may be specified by giving them literally, referring to all
-  executable constants within a certain theory by giving @{text "name._"},
+  executable constants within a certain theory by giving \<open>name._\<close>,
   or referring to \<^emph>\<open>all\<close> executable constants currently available by
-  giving @{text "_"}.
+  giving \<open>_\<close>.
 
   By default, exported identifiers are minimized per module. This can be
   suppressed by prepending @{keyword "open"} before the list of constants.
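+
+  A hedged usage sketch (constant selection and file name are illustrative
+  only): \<open>export_code rev map in SML module_name Example file "example.ML"\<close>
+  exports the list operations \<open>rev\<close> and \<open>map\<close> as an SML module.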
@@ -2487,21 +2469,20 @@
   hierarchy. Omitting the file specification denotes standard output.
 
   Serializers take an optional list of arguments in parentheses. For
-  \<^emph>\<open>Haskell\<close> a module name prefix may be given using the ``@{text
-  "root:"}'' argument; ``@{text string_classes}'' adds a ``@{verbatim
+  \<^emph>\<open>Haskell\<close> a module name prefix may be given using the ``\<open>root:\<close>'' argument; ``\<open>string_classes\<close>'' adds a ``@{verbatim
   "deriving (Read, Show)"}'' clause to each appropriate datatype
   declaration.
 
   \<^descr> @{attribute (HOL) code} declares code equations for code generation.
-  Variant @{text "code equation"} declares a conventional equation as code
-  equation. Variants @{text "code abstype"} and @{text "code abstract"}
+  Variant \<open>code equation\<close> declares a conventional equation as code
+  equation. Variants \<open>code abstype\<close> and \<open>code abstract\<close>
   declare abstract datatype certificates or code equations on abstract
-  datatype representations respectively. Vanilla @{text "code"} falls back
-  to @{text "code equation"} or @{text "code abstype"} depending on the
-  syntactic shape of the underlying equation. Variant @{text "code del"}
+  datatype representations respectively. Vanilla \<open>code\<close> falls back
+  to \<open>code equation\<close> or \<open>code abstype\<close> depending on the
+  syntactic shape of the underlying equation. Variant \<open>code del\<close>
   deselects a code equation for code generation.
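+
+  For instance (a hedged sketch around a made-up constant \<open>double\<close>), after
+  \<open>definition double :: "nat \<Rightarrow> nat" where "double n = 2 * n"\<close> the
+  alternative equation
+  \<open>lemma [code]: "double n = n + n" by (simp add: double_def mult_2)\<close>
+  makes the code generator implement \<open>double\<close> by addition.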
 
-  Variants @{text "code drop:"} and @{text "code abort:"} take a list of
+  Variants \<open>code drop:\<close> and \<open>code abort:\<close> take a list of
   constants as arguments and drop all code equations declared for them. In
   the case of \<open>abort\<close>, these constants then are not required to
   have a definition by means of code equations; if needed these are
@@ -2516,16 +2497,13 @@
   \<^descr> @{command (HOL) "print_codesetup"} gives an overview on selected
   code equations and code generator datatypes.
 
-  \<^descr> @{attribute (HOL) code_unfold} declares (or with option ``@{text
-  "del"}'' removes) theorems which during preprocessing are applied as
+  \<^descr> @{attribute (HOL) code_unfold} declares (or with option ``\<open>del\<close>'' removes) theorems which during preprocessing are applied as
   rewrite rules to any code equation or evaluation input.
 
-  \<^descr> @{attribute (HOL) code_post} declares (or with option ``@{text
-  "del"}'' removes) theorems which are applied as rewrite rules to any
+  \<^descr> @{attribute (HOL) code_post} declares (or with option ``\<open>del\<close>'' removes) theorems which are applied as rewrite rules to any
   result of an evaluation.
 
-  \<^descr> @{attribute (HOL) code_abbrev} declares (or with option ``@{text
-  "del"}'' removes) equations which are applied as rewrite rules to any
+  \<^descr> @{attribute (HOL) code_abbrev} declares (or with option ``\<open>del\<close>'' removes) equations which are applied as rewrite rules to any
   result of an evaluation and symmetrically during preprocessing to any code
   equation or evaluation input.
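+
+  A hedged preprocessing sketch: declaring
+  \<open>lemma [code_unfold]: "xs \<noteq> [] \<longleftrightarrow> (case xs of [] \<Rightarrow> False | _ \<Rightarrow> True)
+    by (cases xs) simp_all\<close>
+  rewrites occurrences of \<open>xs \<noteq> []\<close> in code equations into the explicit
+  case distinction before translation.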
 
@@ -2561,12 +2539,10 @@
   identifiers in compound statements like type classes or datatypes are
   still the same.
 
-  \<^descr> @{command (HOL) "code_reflect"} without a ``@{text "file"}''
+  \<^descr> @{command (HOL) "code_reflect"} without a ``\<open>file\<close>''
   argument compiles code into the system runtime environment and modifies
   the code generator setup that future invocations of system runtime code
-  generation referring to one of the ``@{text "datatypes"}'' or ``@{text
-  "functions"}'' entities use these precompiled entities. With a ``@{text
-  "file"}'' argument, the corresponding code is generated into that
+  generation referring to one of the ``\<open>datatypes\<close>'' or ``\<open>functions\<close>'' entities use these precompiled entities. With a ``\<open>file\<close>'' argument, the corresponding code is generated into that
   specified file without modifying the code generator setup.
 
   \<^descr> @{command (HOL) "code_pred"} creates code equations for a predicate
--- a/src/Doc/Isar_Ref/Inner_Syntax.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Inner_Syntax.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -5,13 +5,12 @@
 chapter \<open>Inner syntax --- the term language \label{ch:inner-syntax}\<close>
 
 text \<open>The inner syntax of Isabelle provides concrete notation for
-  the main entities of the logical framework, notably @{text
-  "\<lambda>"}-terms with types and type classes.  Applications may either
+  the main entities of the logical framework, notably \<open>\<lambda>\<close>-terms with types and type classes.  Applications may either
   extend existing syntactic categories by additional notation, or
   define new sub-languages that are linked to the standard term
   language via some explicit markers.  For example @{verbatim
-  FOO}~@{text "foo"} could embed the syntax corresponding for some
-  user-defined nonterminal @{text "foo"} --- within the bounds of the
+  FOO}~\<open>foo\<close> could embed the syntax corresponding to some
+  user-defined nonterminal \<open>foo\<close> --- within the bounds of the
   given lexical syntax of Isabelle/Pure.
 
   The most basic way to specify concrete syntax for logical entities
@@ -34,13 +33,13 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "typ"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "term"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "prop"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "thm"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "prf"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "full_prf"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "print_state"}@{text "\<^sup>*"} & : & @{text "any \<rightarrow>"} \\
+    @{command_def "typ"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "term"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "prop"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "thm"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "prf"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "full_prf"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "print_state"}\<open>\<^sup>*\<close> & : & \<open>any \<rightarrow>\<close> \\
   \end{matharray}
 
   These diagnostic commands assist interactive development by printing
@@ -62,28 +61,28 @@
     @{syntax_def modes}: '(' (@{syntax name} + ) ')'
   \<close>}
 
-  \<^descr> @{command "typ"}~@{text \<tau>} reads and prints a type expression
+  \<^descr> @{command "typ"}~\<open>\<tau>\<close> reads and prints a type expression
   according to the current context.
 
-  \<^descr> @{command "typ"}~@{text "\<tau> :: s"} uses type-inference to
-  determine the most general way to make @{text "\<tau>"} conform to sort
-  @{text "s"}.  For concrete @{text "\<tau>"} this checks if the type
-  belongs to that sort.  Dummy type parameters ``@{text "_"}''
+  \<^descr> @{command "typ"}~\<open>\<tau> :: s\<close> uses type-inference to
+  determine the most general way to make \<open>\<tau>\<close> conform to sort
+  \<open>s\<close>.  For concrete \<open>\<tau>\<close> this checks if the type
+  belongs to that sort.  Dummy type parameters ``\<open>_\<close>''
   (underscore) are assigned to fresh type variables with most general
   sorts, according to the principles of type-inference.
 
-  \<^descr> @{command "term"}~@{text t} and @{command "prop"}~@{text \<phi>}
+  \<^descr> @{command "term"}~\<open>t\<close> and @{command "prop"}~\<open>\<phi>\<close>
   read, type-check and print terms or propositions according to the
-  current theory or proof context; the inferred type of @{text t} is
+  current theory or proof context; the inferred type of \<open>t\<close> is
   output as well.  Note that these commands are also useful in
   inspecting the current environment of term abbreviations.
 
-  \<^descr> @{command "thm"}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} retrieves
+  \<^descr> @{command "thm"}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> retrieves
   theorems from the current theory or proof context.  Note that any
   attributes included in the theorem specifications are applied to a
   temporary context derived from the current theory or proof; the
-  result is discarded, i.e.\ attributes involved in @{text "a\<^sub>1,
-  \<dots>, a\<^sub>n"} do not have any permanent effect.
+  result is discarded, i.e.\ attributes involved in \<open>a\<^sub>1,
+  \<dots>, a\<^sub>n\<close> do not have any permanent effect.
 
   \<^descr> @{command "prf"} displays the (compact) proof term of the
   current proof state (if present), or of the given theorems. Note
@@ -93,18 +92,18 @@
 
   \<^descr> @{command "full_prf"} is like @{command "prf"}, but displays
   the full proof term, i.e.\ also displays information omitted in the
-  compact proof term, which is denoted by ``@{text _}'' placeholders
+  compact proof term, which is denoted by ``\<open>_\<close>'' placeholders
   there.
 
   \<^descr> @{command "print_state"} prints the current proof state (if
   present), including current facts and goals.
 
 
-  All of the diagnostic commands above admit a list of @{text modes}
+  All of the diagnostic commands above admit a list of \<open>modes\<close>
   to be specified, which is appended to the current print mode; see
   also \secref{sec:print-modes}.  Thus the output behavior may be
   modified according to particular print mode features.  For example,
-  @{command "print_state"}~@{text "(latex xsymbols)"} prints the
+  @{command "print_state"}~\<open>(latex xsymbols)\<close> prints the
   current proof state with mathematical symbols and special characters
   represented in {\LaTeX} source, according to the Isabelle style
   @{cite "isabelle-system"}.
@@ -119,21 +118,21 @@
 
 text \<open>
   \begin{tabular}{rcll}
-    @{attribute_def show_markup} & : & @{text attribute} \\
-    @{attribute_def show_types} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def show_sorts} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def show_consts} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def show_abbrevs} & : & @{text attribute} & default @{text true} \\
-    @{attribute_def show_brackets} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def names_long} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def names_short} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def names_unique} & : & @{text attribute} & default @{text true} \\
-    @{attribute_def eta_contract} & : & @{text attribute} & default @{text true} \\
-    @{attribute_def goals_limit} & : & @{text attribute} & default @{text 10} \\
-    @{attribute_def show_main_goal} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def show_hyps} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def show_tags} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def show_question_marks} & : & @{text attribute} & default @{text true} \\
+    @{attribute_def show_markup} & : & \<open>attribute\<close> \\
+    @{attribute_def show_types} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def show_sorts} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def show_consts} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def show_abbrevs} & : & \<open>attribute\<close> & default \<open>true\<close> \\
+    @{attribute_def show_brackets} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def names_long} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def names_short} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def names_unique} & : & \<open>attribute\<close> & default \<open>true\<close> \\
+    @{attribute_def eta_contract} & : & \<open>attribute\<close> & default \<open>true\<close> \\
+    @{attribute_def goals_limit} & : & \<open>attribute\<close> & default \<open>10\<close> \\
+    @{attribute_def show_main_goal} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def show_hyps} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def show_tags} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def show_question_marks} & : & \<open>attribute\<close> & default \<open>true\<close> \\
   \end{tabular}
   \<^medskip>
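+
+  For example (a hedged sketch of common usage),
+  \<open>declare [[show_types = true]]\<close> enables type annotations in printed
+  output for the rest of a theory, while \<open>using [[show_types]]\<close> does so
+  locally within a single proof.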
 
@@ -182,24 +181,22 @@
   \secref{sec:antiq} for the document antiquotation options of the
   same names.
 
-  \<^descr> @{attribute eta_contract} controls @{text "\<eta>"}-contracted
+  \<^descr> @{attribute eta_contract} controls \<open>\<eta>\<close>-contracted
   printing of terms.
 
-  The @{text \<eta>}-contraction law asserts @{prop "(\<lambda>x. f x) \<equiv> f"},
-  provided @{text x} is not free in @{text f}.  It asserts
+  The \<open>\<eta>\<close>-contraction law asserts @{prop "(\<lambda>x. f x) \<equiv> f"},
+  provided \<open>x\<close> is not free in \<open>f\<close>.  It asserts
   \<^emph>\<open>extensionality\<close> of functions: @{prop "f \<equiv> g"} if @{prop "f x \<equiv>
-  g x"} for all @{text x}.  Higher-order unification frequently puts
-  terms into a fully @{text \<eta>}-expanded form.  For example, if @{text
-  F} has type @{text "(\<tau> \<Rightarrow> \<tau>) \<Rightarrow> \<tau>"} then its expanded form is @{term
+  g x"} for all \<open>x\<close>.  Higher-order unification frequently puts
+  terms into a fully \<open>\<eta>\<close>-expanded form.  For example, if \<open>F\<close> has type \<open>(\<tau> \<Rightarrow> \<tau>) \<Rightarrow> \<tau>\<close> then its expanded form is @{term
   "\<lambda>h. F (\<lambda>x. h x)"}.
 
-  Enabling @{attribute eta_contract} makes Isabelle perform @{text
-  \<eta>}-contractions before printing, so that @{term "\<lambda>h. F (\<lambda>x. h x)"}
-  appears simply as @{text F}.
+  Enabling @{attribute eta_contract} makes Isabelle perform \<open>\<eta>\<close>-contractions before printing, so that @{term "\<lambda>h. F (\<lambda>x. h x)"}
+  appears simply as \<open>F\<close>.
 
-  Note that the distinction between a term and its @{text \<eta>}-expanded
+  Note that the distinction between a term and its \<open>\<eta>\<close>-expanded
   form occasionally matters.  While higher-order resolution and
-  rewriting operate modulo @{text "\<alpha>\<beta>\<eta>"}-conversion, some other tools
+  rewriting operate modulo \<open>\<alpha>\<beta>\<eta>\<close>-conversion, some other tools
   might look at terms more discretely.
 
   \<^descr> @{attribute goals_limit} controls the maximum number of
@@ -229,7 +226,7 @@
   associated with a theorem.
 
   \<^descr> @{attribute show_question_marks} controls printing of question
-  marks for schematic variables, such as @{text ?x}.  Only the leading
+  marks for schematic variables, such as \<open>?x\<close>.  Only the leading
   question mark is affected, the remaining text is unchanged
   (including proper markup for schematic variables that might be
   relevant for user interfaces).
@@ -255,9 +252,9 @@
   representation of certain individual features for printing (with
   precedence from left to right).
 
-  \<^descr> @{ML Print_Mode.with_modes}~@{text "modes f x"} evaluates
-  @{text "f x"} in an execution context where the print mode is
-  prepended by the given @{text "modes"}.  This provides a thread-safe
+  \<^descr> @{ML Print_Mode.with_modes}~\<open>modes f x\<close> evaluates
+  \<open>f x\<close> in an execution context where the print mode is
+  prepended by the given \<open>modes\<close>.  This provides a thread-safe
   way to augment print modes.  It is also monotonic in the set of mode
   names: it retains the default print mode that certain
   user-interfaces might have installed for their proper functioning!
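+
+  A hedged ML sketch (the printed term is arbitrary):
+  \<open>ML \<open>Print_Mode.with_modes ["xsymbols"]
+    (writeln o Syntax.string_of_term @{context}) @{term "\<lambda>x. x"}\<close>\<close>
+  prints the term with \<open>xsymbols\<close> temporarily added to the print mode.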
@@ -319,9 +316,9 @@
     prios: '[' (@{syntax nat} + ',') ']'
   \<close>}
 
-  The string given as @{text template} may include literal text,
-  spacing, blocks, and arguments (denoted by ``@{text _}''); the
-  special symbol ``@{verbatim "\<index>"}'' (printed as ``@{text "\<index>"}'')
+  The string given as \<open>template\<close> may include literal text,
+  spacing, blocks, and arguments (denoted by ``\<open>_\<close>''); the
+  special symbol ``@{verbatim "\<index>"}'' (printed as ``\<open>\<index>\<close>'')
   represents an index argument that specifies an implicit @{keyword
   "structure"} reference (see also \secref{sec:locale}).  Only locally
   fixed variables may be declared as @{keyword "structure"}.
@@ -335,25 +332,24 @@
 subsection \<open>The general mixfix form\<close>
 
 text \<open>In full generality, mixfix declarations work as follows.
-  Suppose a constant @{text "c :: \<tau>\<^sub>1 \<Rightarrow> \<dots> \<tau>\<^sub>n \<Rightarrow> \<tau>"} is annotated by
-  @{text "(mixfix [p\<^sub>1, \<dots>, p\<^sub>n] p)"}, where @{text "mixfix"} is a string
-  @{text "d\<^sub>0 _ d\<^sub>1 _ \<dots> _ d\<^sub>n"} consisting of delimiters that surround
+  Suppose a constant \<open>c :: \<tau>\<^sub>1 \<Rightarrow> \<dots> \<Rightarrow> \<tau>\<^sub>n \<Rightarrow> \<tau>\<close> is annotated by
+  \<open>(mixfix [p\<^sub>1, \<dots>, p\<^sub>n] p)\<close>, where \<open>mixfix\<close> is a string
+  \<open>d\<^sub>0 _ d\<^sub>1 _ \<dots> _ d\<^sub>n\<close> consisting of delimiters that surround
   argument positions as indicated by underscores.
 
   Altogether this determines a production for a context-free priority
-  grammar, where for each argument @{text "i"} the syntactic category
-  is determined by @{text "\<tau>\<^sub>i"} (with priority @{text "p\<^sub>i"}), and the
-  result category is determined from @{text "\<tau>"} (with priority @{text
-  "p"}).  Priority specifications are optional, with default 0 for
+  grammar, where for each argument \<open>i\<close> the syntactic category
+  is determined by \<open>\<tau>\<^sub>i\<close> (with priority \<open>p\<^sub>i\<close>), and the
+  result category is determined from \<open>\<tau>\<close> (with priority \<open>p\<close>).  Priority specifications are optional, with default 0 for
   arguments and 1000 for the result.\footnote{Omitting priorities is
   prone to syntactic ambiguities unless the delimiter tokens determine
-  fully bracketed notation, as in @{text "if _ then _ else _ fi"}.}
+  fully bracketed notation, as in \<open>if _ then _ else _ fi\<close>.}
 
-  Since @{text "\<tau>"} may be again a function type, the constant
+  Since \<open>\<tau>\<close> may again be a function type, the constant
   type scheme may have more argument positions than the mixfix
-  pattern.  Printing a nested application @{text "c t\<^sub>1 \<dots> t\<^sub>m"} for
-  @{text "m > n"} works by attaching concrete notation only to the
-  innermost part, essentially by printing @{text "(c t\<^sub>1 \<dots> t\<^sub>n) \<dots> t\<^sub>m"}
+  pattern.  Printing a nested application \<open>c t\<^sub>1 \<dots> t\<^sub>m\<close> for
+  \<open>m > n\<close> works by attaching concrete notation only to the
+  innermost part, essentially by printing \<open>(c t\<^sub>1 \<dots> t\<^sub>n) \<dots> t\<^sub>m\<close>
   instead.  If a term has fewer arguments than specified in the mixfix
   template, the concrete syntax is ignored.
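+
+  A hedged example with fully bracketed delimiters (constant and notation
+  are made up):
+  \<open>definition ifte :: "bool \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> 'a"  ("IF _ THEN _ ELSE _ FI")
+    where "ifte b x y = (if b then x else y)"\<close>
+  introduces notation \<open>IF b THEN x ELSE y FI\<close> that needs no priority
+  annotations at all.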
 
@@ -363,14 +359,14 @@
   general template format is a sequence over any of the following
   entities.
 
-  \<^descr> @{text "d"} is a delimiter, namely a non-empty sequence of
+  \<^descr> \<open>d\<close> is a delimiter, namely a non-empty sequence of
   characters other than the following special characters:
 
   \<^medskip>
   \begin{tabular}{ll}
     @{verbatim "'"} & single quote \\
     @{verbatim "_"} & underscore \\
-    @{text "\<index>"} & index symbol \\
+    \<open>\<index>\<close> & index symbol \\
     @{verbatim "("} & open parenthesis \\
     @{verbatim ")"} & close parenthesis \\
     @{verbatim "/"} & slash \\
@@ -388,13 +384,13 @@
   \<^descr> @{verbatim "_"} is an argument position, which stands for a
   certain syntactic category in the underlying grammar.
 
-  \<^descr> @{text "\<index>"} is an indexed argument position; this is the place
+  \<^descr> \<open>\<index>\<close> is an indexed argument position; this is the place
   where implicit structure arguments can be attached.
 
-  \<^descr> @{text "s"} is a non-empty sequence of spaces for printing.
+  \<^descr> \<open>s\<close> is a non-empty sequence of spaces for printing.
   This and the following specifications do not affect parsing at all.
 
-  \<^descr> @{verbatim "("}@{text n} opens a pretty printing block.  The
+  \<^descr> @{verbatim "("}\<open>n\<close> opens a pretty printing block.  The
   optional number specifies how much indentation to add when a line
   break occurs within the block.  If the parenthesis is not followed
   by digits, the indentation defaults to 0.  A block specified via
@@ -404,7 +400,7 @@
 
   \<^descr> @{verbatim "//"} forces a line break.
 
-  \<^descr> @{verbatim "/"}@{text s} allows a line break.  Here @{text s}
+  \<^descr> @{verbatim "/"}\<open>s\<close> allows a line break.  Here \<open>s\<close>
   stands for the string of spaces (zero or more) right after the
   slash.  These spaces are printed if the break is \<^emph>\<open>not\<close> taken.
 
@@ -422,25 +418,25 @@
   \begin{center}
   \begin{tabular}{lll}
 
-  @{verbatim "("}@{keyword_def "infix"}~@{verbatim \<open>"\<close>}@{text sy}@{verbatim \<open>"\<close>} @{text "p"}@{verbatim ")"}
-  & @{text "\<mapsto>"} &
-  @{verbatim \<open>("(_\<close>}~@{text sy}@{verbatim \<open>/ _)" [\<close>}@{text "p + 1"}@{verbatim ","}~@{text "p + 1"}@{verbatim "]"}~@{text "p"}@{verbatim ")"} \\
-  @{verbatim "("}@{keyword_def "infixl"}~@{verbatim \<open>"\<close>}@{text sy}@{verbatim \<open>"\<close>} @{text "p"}@{verbatim ")"}
-  & @{text "\<mapsto>"} &
-  @{verbatim \<open>("(_\<close>}~@{text sy}@{verbatim \<open>/ _)" [\<close>}@{text "p"}@{verbatim ","}~@{text "p + 1"}@{verbatim "]"}~@{text "p"}@{verbatim ")"} \\
-  @{verbatim "("}@{keyword_def "infixr"}~@{verbatim \<open>"\<close>}@{text sy}@{verbatim \<open>"\<close>}~@{text "p"}@{verbatim ")"}
-  & @{text "\<mapsto>"} &
-  @{verbatim \<open>("(_\<close>}~@{text sy}@{verbatim \<open>/ _)" [\<close>}@{text "p + 1"}@{verbatim ","}~@{text "p"}@{verbatim "]"}~@{text "p"}@{verbatim ")"} \\
+  @{verbatim "("}@{keyword_def "infix"}~@{verbatim \<open>"\<close>}\<open>sy\<close>@{verbatim \<open>"\<close>} \<open>p\<close>@{verbatim ")"}
+  & \<open>\<mapsto>\<close> &
+  @{verbatim \<open>("(_\<close>}~\<open>sy\<close>@{verbatim \<open>/ _)" [\<close>}\<open>p + 1\<close>@{verbatim ","}~\<open>p + 1\<close>@{verbatim "]"}~\<open>p\<close>@{verbatim ")"} \\
+  @{verbatim "("}@{keyword_def "infixl"}~@{verbatim \<open>"\<close>}\<open>sy\<close>@{verbatim \<open>"\<close>} \<open>p\<close>@{verbatim ")"}
+  & \<open>\<mapsto>\<close> &
+  @{verbatim \<open>("(_\<close>}~\<open>sy\<close>@{verbatim \<open>/ _)" [\<close>}\<open>p\<close>@{verbatim ","}~\<open>p + 1\<close>@{verbatim "]"}~\<open>p\<close>@{verbatim ")"} \\
+  @{verbatim "("}@{keyword_def "infixr"}~@{verbatim \<open>"\<close>}\<open>sy\<close>@{verbatim \<open>"\<close>}~\<open>p\<close>@{verbatim ")"}
+  & \<open>\<mapsto>\<close> &
+  @{verbatim \<open>("(_\<close>}~\<open>sy\<close>@{verbatim \<open>/ _)" [\<close>}\<open>p + 1\<close>@{verbatim ","}~\<open>p\<close>@{verbatim "]"}~\<open>p\<close>@{verbatim ")"} \\
 
   \end{tabular}
   \end{center}
 
-  The mixfix template @{verbatim \<open>"(_\<close>}~@{text sy}@{verbatim \<open>/ _)"\<close>}
+  The mixfix template @{verbatim \<open>"(_\<close>}~\<open>sy\<close>@{verbatim \<open>/ _)"\<close>}
   specifies two argument positions; the delimiter is preceded by a
   space and followed by a space or line break; the entire phrase is a
   pretty printing block.
 
-  The alternative notation @{verbatim "op"}~@{text sy} is introduced
+  The alternative notation @{verbatim "op"}~\<open>sy\<close> is introduced
   in addition.  Thus any infix operator may be written in prefix form
   (as in ML), independently of the number of arguments in the term.
 \<close>
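+
+text \<open>For example, a hedged sketch (the constant \<open>xor\<close> and its syntax
+  \<open>[+]\<close> are made up for illustration):\<close>
+
+definition xor :: "bool \<Rightarrow> bool \<Rightarrow> bool"  (infixr "[+]" 60)
+  where "a [+] b \<longleftrightarrow> (a \<and> \<not> b) \<or> (\<not> a \<and> b)"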
@@ -449,41 +445,39 @@
 subsection \<open>Binders\<close>
 
 text \<open>A \<^emph>\<open>binder\<close> is a variable-binding construct such as a
-  quantifier.  The idea to formalize @{text "\<forall>x. b"} as @{text "All
-  (\<lambda>x. b)"} for @{text "All :: ('a \<Rightarrow> bool) \<Rightarrow> bool"} already goes back
+  quantifier.  The idea to formalize \<open>\<forall>x. b\<close> as \<open>All
+  (\<lambda>x. b)\<close> for \<open>All :: ('a \<Rightarrow> bool) \<Rightarrow> bool\<close> already goes back
   to @{cite church40}.  Isabelle declarations of certain higher-order
   operators may be annotated with @{keyword_def "binder"} annotations
   as follows:
 
   \begin{center}
-  @{text "c :: "}@{verbatim \<open>"\<close>}@{text "(\<tau>\<^sub>1 \<Rightarrow> \<tau>\<^sub>2) \<Rightarrow> \<tau>\<^sub>3"}@{verbatim \<open>"  (\<close>}@{keyword "binder"}~@{verbatim \<open>"\<close>}@{text "sy"}@{verbatim \<open>" [\<close>}@{text "p"}@{verbatim "]"}~@{text "q"}@{verbatim ")"}
+  \<open>c :: \<close>@{verbatim \<open>"\<close>}\<open>(\<tau>\<^sub>1 \<Rightarrow> \<tau>\<^sub>2) \<Rightarrow> \<tau>\<^sub>3\<close>@{verbatim \<open>"  (\<close>}@{keyword "binder"}~@{verbatim \<open>"\<close>}\<open>sy\<close>@{verbatim \<open>" [\<close>}\<open>p\<close>@{verbatim "]"}~\<open>q\<close>@{verbatim ")"}
   \end{center}
 
-  This introduces concrete binder syntax @{text "sy x. b"}, where
-  @{text x} is a bound variable of type @{text "\<tau>\<^sub>1"}, the body @{text
-  b} has type @{text "\<tau>\<^sub>2"} and the whole term has type @{text "\<tau>\<^sub>3"}.
-  The optional integer @{text p} specifies the syntactic priority of
-  the body; the default is @{text "q"}, which is also the priority of
+  This introduces concrete binder syntax \<open>sy x. b\<close>, where
+  \<open>x\<close> is a bound variable of type \<open>\<tau>\<^sub>1\<close>, the body \<open>b\<close> has type \<open>\<tau>\<^sub>2\<close> and the whole term has type \<open>\<tau>\<^sub>3\<close>.
+  The optional integer \<open>p\<close> specifies the syntactic priority of
+  the body; the default is \<open>q\<close>, which is also the priority of
   the whole construct.
 
   Internally, the binder syntax is expanded to something like this:
   \begin{center}
-  @{text "c_binder :: "}@{verbatim \<open>"\<close>}@{text "idts \<Rightarrow> \<tau>\<^sub>2 \<Rightarrow> \<tau>\<^sub>3"}@{verbatim \<open>"  ("(3\<close>}@{text sy}@{verbatim \<open>_./ _)" [0,\<close>}~@{text "p"}@{verbatim "]"}~@{text "q"}@{verbatim ")"}
+  \<open>c_binder :: \<close>@{verbatim \<open>"\<close>}\<open>idts \<Rightarrow> \<tau>\<^sub>2 \<Rightarrow> \<tau>\<^sub>3\<close>@{verbatim \<open>"  ("(3\<close>}\<open>sy\<close>@{verbatim \<open>_./ _)" [0,\<close>}~\<open>p\<close>@{verbatim "]"}~\<open>q\<close>@{verbatim ")"}
   \end{center}
 
   Here @{syntax (inner) idts} is the nonterminal symbol for a list of
   identifiers with optional type constraints (see also
   \secref{sec:pure-grammar}).  The mixfix template @{verbatim
-  \<open>"(3\<close>}@{text sy}@{verbatim \<open>_./ _)"\<close>} defines argument positions
+  \<open>"(3\<close>}\<open>sy\<close>@{verbatim \<open>_./ _)"\<close>} defines argument positions
   for the bound identifiers and the body, separated by a dot with
   optional line break; the entire phrase is a pretty printing block of
-  indentation level 3.  Note that there is no extra space after @{text
-  "sy"}, so it needs to be included user specification if the binder
+  indentation level 3.  Note that there is no extra space after \<open>sy\<close>, so it needs to be included in the user specification if the binder
   syntax ends with a token that may be continued by an identifier
   token at the start of @{syntax (inner) idts}.
 
-  Furthermore, a syntax translation to transforms @{text "c_binder x\<^sub>1
-  \<dots> x\<^sub>n b"} into iterated application @{text "c (\<lambda>x\<^sub>1. \<dots> c (\<lambda>x\<^sub>n. b)\<dots>)"}.
+  Furthermore, a syntax translation transforms \<open>c_binder x\<^sub>1
+  \<dots> x\<^sub>n b\<close> into iterated application \<open>c (\<lambda>x\<^sub>1. \<dots> c (\<lambda>x\<^sub>n. b)\<dots>)\<close>.
   This works in both directions, for parsing and printing.\<close>
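+
+text \<open>A hedged sketch of a user-defined binder (constant and notation are
+  made up); note the trailing space inside the binder string, as explained
+  above:\<close>
+
+definition exu :: "('a \<Rightarrow> bool) \<Rightarrow> bool"  (binder "EXU " 10)
+  where "exu P \<longleftrightarrow> (\<exists>!x. P x)"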
 
 
@@ -491,11 +485,11 @@
 
 text \<open>
   \begin{matharray}{rcll}
-    @{command_def "type_notation"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "no_type_notation"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "notation"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "no_notation"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "write"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
+    @{command_def "type_notation"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "no_type_notation"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "notation"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "no_notation"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "write"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
   \end{matharray}
 
   Commands that introduce new logical entities (terms or types)
@@ -515,7 +509,7 @@
     @@{command write} @{syntax mode}? (@{syntax nameref} @{syntax mixfix} + @'and')
   \<close>}
 
-  \<^descr> @{command "type_notation"}~@{text "c (mx)"} associates mixfix
+  \<^descr> @{command "type_notation"}~\<open>c (mx)\<close> associates mixfix
   syntax with an existing type constructor.  The arity of the
   constructor is retrieved from the context.
 
@@ -523,7 +517,7 @@
   "type_notation"}, but removes the specified syntax annotation from
   the present context.
 
-  \<^descr> @{command "notation"}~@{text "c (mx)"} associates mixfix
+  \<^descr> @{command "notation"}~\<open>c (mx)\<close> associates mixfix
   syntax with an existing constant or fixed variable.  The type
   declaration of the given entity is retrieved from the context.
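+
+  For example (a hedged sketch): \<open>notation times (infixl "\<otimes>" 70)\<close> attaches
+  alternative infix syntax to the existing multiplication constant, and
+  \<open>no_notation times (infixl "\<otimes>" 70)\<close> removes it again.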
 
@@ -570,9 +564,9 @@
     @{syntax_def (inner) tvar} & = & @{syntax_ref typevar} \\
     @{syntax_def (inner) num_token} & = & @{syntax_ref nat} \\
     @{syntax_def (inner) float_token} & = & @{syntax_ref nat}@{verbatim "."}@{syntax_ref nat} \\
-    @{syntax_def (inner) str_token} & = & @{verbatim "''"} @{text "\<dots>"} @{verbatim "''"} \\
-    @{syntax_def (inner) string_token} & = & @{verbatim \<open>"\<close>} @{text "\<dots>"} @{verbatim \<open>"\<close>} \\
-    @{syntax_def (inner) cartouche} & = & @{verbatim "\<open>"} @{text "\<dots>"} @{verbatim "\<close>"} \\
+    @{syntax_def (inner) str_token} & = & @{verbatim "''"} \<open>\<dots>\<close> @{verbatim "''"} \\
+    @{syntax_def (inner) string_token} & = & @{verbatim \<open>"\<close>} \<open>\<dots>\<close> @{verbatim \<open>"\<close>} \\
+    @{syntax_def (inner) cartouche} & = & @{verbatim "\<open>"} \<open>\<dots>\<close> @{verbatim "\<close>"} \\
   \end{supertabular}
   \end{center}
 
@@ -593,8 +587,8 @@
 
 text \<open>A context-free grammar consists of a set of \<^emph>\<open>terminal
   symbols\<close>, a set of \<^emph>\<open>nonterminal symbols\<close> and a set of
-  \<^emph>\<open>productions\<close>.  Productions have the form @{text "A = \<gamma>"},
-  where @{text A} is a nonterminal and @{text \<gamma>} is a string of
+  \<^emph>\<open>productions\<close>.  Productions have the form \<open>A = \<gamma>\<close>,
+  where \<open>A\<close> is a nonterminal and \<open>\<gamma>\<close> is a string of
   terminals and nonterminals.  One designated nonterminal is called
   the \<^emph>\<open>root symbol\<close>.  The language defined by the grammar
   consists of all strings of terminals that can be derived from the
@@ -602,17 +596,16 @@
 
   The standard Isabelle parser for inner syntax uses a \<^emph>\<open>priority
   grammar\<close>.  Each nonterminal is decorated by an integer priority:
-  @{text "A\<^sup>(\<^sup>p\<^sup>)"}.  In a derivation, @{text "A\<^sup>(\<^sup>p\<^sup>)"} may be rewritten
-  using a production @{text "A\<^sup>(\<^sup>q\<^sup>) = \<gamma>"} only if @{text "p \<le> q"}.  Any
+  \<open>A\<^sup>(\<^sup>p\<^sup>)\<close>.  In a derivation, \<open>A\<^sup>(\<^sup>p\<^sup>)\<close> may be rewritten
+  using a production \<open>A\<^sup>(\<^sup>q\<^sup>) = \<gamma>\<close> only if \<open>p \<le> q\<close>.  Any
   priority grammar can be translated into a normal context-free
   grammar by introducing new nonterminals and productions.
 
   \<^medskip>
-  Formally, a set of context free productions @{text G}
-  induces a derivation relation @{text "\<longrightarrow>\<^sub>G"} as follows.  Let @{text
-  \<alpha>} and @{text \<beta>} denote strings of terminal or nonterminal symbols.
-  Then @{text "\<alpha> A\<^sup>(\<^sup>p\<^sup>) \<beta> \<longrightarrow>\<^sub>G \<alpha> \<gamma> \<beta>"} holds if and only if @{text G}
-  contains some production @{text "A\<^sup>(\<^sup>q\<^sup>) = \<gamma>"} for @{text "p \<le> q"}.
+  Formally, a set of context free productions \<open>G\<close>
+  induces a derivation relation \<open>\<longrightarrow>\<^sub>G\<close> as follows.  Let \<open>\<alpha>\<close> and \<open>\<beta>\<close> denote strings of terminal or nonterminal symbols.
+  Then \<open>\<alpha> A\<^sup>(\<^sup>p\<^sup>) \<beta> \<longrightarrow>\<^sub>G \<alpha> \<gamma> \<beta>\<close> holds if and only if \<open>G\<close>
+  contains some production \<open>A\<^sup>(\<^sup>q\<^sup>) = \<gamma>\<close> for \<open>p \<le> q\<close>.
 
   \<^medskip>
   The following grammar for arithmetic expressions
@@ -621,11 +614,11 @@
 
   \begin{center}
   \begin{tabular}{rclr}
-  @{text "A\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)"} & @{text "="} & @{verbatim "("} @{text "A\<^sup>(\<^sup>0\<^sup>)"} @{verbatim ")"} \\
-  @{text "A\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)"} & @{text "="} & @{verbatim 0} \\
-  @{text "A\<^sup>(\<^sup>0\<^sup>)"} & @{text "="} & @{text "A\<^sup>(\<^sup>0\<^sup>)"} @{verbatim "+"} @{text "A\<^sup>(\<^sup>1\<^sup>)"} \\
-  @{text "A\<^sup>(\<^sup>2\<^sup>)"} & @{text "="} & @{text "A\<^sup>(\<^sup>3\<^sup>)"} @{verbatim "*"} @{text "A\<^sup>(\<^sup>2\<^sup>)"} \\
-  @{text "A\<^sup>(\<^sup>3\<^sup>)"} & @{text "="} & @{verbatim "-"} @{text "A\<^sup>(\<^sup>3\<^sup>)"} \\
+  \<open>A\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)\<close> & \<open>=\<close> & @{verbatim "("} \<open>A\<^sup>(\<^sup>0\<^sup>)\<close> @{verbatim ")"} \\
+  \<open>A\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)\<close> & \<open>=\<close> & @{verbatim 0} \\
+  \<open>A\<^sup>(\<^sup>0\<^sup>)\<close> & \<open>=\<close> & \<open>A\<^sup>(\<^sup>0\<^sup>)\<close> @{verbatim "+"} \<open>A\<^sup>(\<^sup>1\<^sup>)\<close> \\
+  \<open>A\<^sup>(\<^sup>2\<^sup>)\<close> & \<open>=\<close> & \<open>A\<^sup>(\<^sup>3\<^sup>)\<close> @{verbatim "*"} \<open>A\<^sup>(\<^sup>2\<^sup>)\<close> \\
+  \<open>A\<^sup>(\<^sup>3\<^sup>)\<close> & \<open>=\<close> & @{verbatim "-"} \<open>A\<^sup>(\<^sup>3\<^sup>)\<close> \\
   \end{tabular}
   \end{center}
   The choice of priorities determines that @{verbatim "-"} binds
@@ -641,13 +634,13 @@
   \<^item> Priority 0 on the right-hand side and priority 1000 on the
   left-hand side may be omitted.
 
-  \<^item> The production @{text "A\<^sup>(\<^sup>p\<^sup>) = \<alpha>"} is written as @{text "A = \<alpha>
-  (p)"}, i.e.\ the priority of the left-hand side actually appears in
+  \<^item> The production \<open>A\<^sup>(\<^sup>p\<^sup>) = \<alpha>\<close> is written as \<open>A = \<alpha>
+  (p)\<close>, i.e.\ the priority of the left-hand side actually appears in
   a column on the far right.
 
-  \<^item> Alternatives are separated by @{text "|"}.
+  \<^item> Alternatives are separated by \<open>|\<close>.
 
-  \<^item> Repetition is indicated by dots @{text "(\<dots>)"} in an informal
+  \<^item> Repetition is indicated by dots \<open>(\<dots>)\<close> in an informal
   but obvious way.
 
 
@@ -655,11 +648,11 @@
   takes the form:
   \begin{center}
   \begin{tabular}{rclc}
-    @{text A} & @{text "="} & @{verbatim "("} @{text A} @{verbatim ")"} \\
-              & @{text "|"} & @{verbatim 0} & \qquad\qquad \\
-              & @{text "|"} & @{text A} @{verbatim "+"} @{text "A\<^sup>(\<^sup>1\<^sup>)"} & @{text "(0)"} \\
-              & @{text "|"} & @{text "A\<^sup>(\<^sup>3\<^sup>)"} @{verbatim "*"} @{text "A\<^sup>(\<^sup>2\<^sup>)"} & @{text "(2)"} \\
-              & @{text "|"} & @{verbatim "-"} @{text "A\<^sup>(\<^sup>3\<^sup>)"} & @{text "(3)"} \\
+    \<open>A\<close> & \<open>=\<close> & @{verbatim "("} \<open>A\<close> @{verbatim ")"} \\
+              & \<open>|\<close> & @{verbatim 0} & \qquad\qquad \\
+              & \<open>|\<close> & \<open>A\<close> @{verbatim "+"} \<open>A\<^sup>(\<^sup>1\<^sup>)\<close> & \<open>(0)\<close> \\
+              & \<open>|\<close> & \<open>A\<^sup>(\<^sup>3\<^sup>)\<close> @{verbatim "*"} \<open>A\<^sup>(\<^sup>2\<^sup>)\<close> & \<open>(2)\<close> \\
+              & \<open>|\<close> & @{verbatim "-"} \<open>A\<^sup>(\<^sup>3\<^sup>)\<close> & \<open>(3)\<close> \\
   \end{tabular}
   \end{center}
 \<close>
@@ -667,75 +660,75 @@
 
 subsection \<open>The Pure grammar \label{sec:pure-grammar}\<close>
 
-text \<open>The priority grammar of the @{text "Pure"} theory is defined
+text \<open>The priority grammar of the \<open>Pure\<close> theory is defined
   approximately like this:
 
   \begin{center}
   \begin{supertabular}{rclr}
 
-  @{syntax_def (inner) any} & = & @{text "prop  |  logic"} \\\\
+  @{syntax_def (inner) any} & = & \<open>prop  |  logic\<close> \\\\
 
-  @{syntax_def (inner) prop} & = & @{verbatim "("} @{text prop} @{verbatim ")"} \\
-    & @{text "|"} & @{text "prop\<^sup>(\<^sup>4\<^sup>)"} @{verbatim "::"} @{text type} & @{text "(3)"} \\
-    & @{text "|"} & @{text "any\<^sup>(\<^sup>3\<^sup>)"} @{verbatim "=="} @{text "any\<^sup>(\<^sup>3\<^sup>)"} & @{text "(2)"} \\
-    & @{text "|"} & @{text "any\<^sup>(\<^sup>3\<^sup>)"} @{text "\<equiv>"} @{text "any\<^sup>(\<^sup>3\<^sup>)"} & @{text "(2)"} \\
-    & @{text "|"} & @{text "prop\<^sup>(\<^sup>3\<^sup>)"} @{verbatim "&&&"} @{text "prop\<^sup>(\<^sup>2\<^sup>)"} & @{text "(2)"} \\
-    & @{text "|"} & @{text "prop\<^sup>(\<^sup>2\<^sup>)"} @{verbatim "==>"} @{text "prop\<^sup>(\<^sup>1\<^sup>)"} & @{text "(1)"} \\
-    & @{text "|"} & @{text "prop\<^sup>(\<^sup>2\<^sup>)"} @{text "\<Longrightarrow>"} @{text "prop\<^sup>(\<^sup>1\<^sup>)"} & @{text "(1)"} \\
-    & @{text "|"} & @{verbatim "[|"} @{text prop} @{verbatim ";"} @{text "\<dots>"} @{verbatim ";"} @{text prop} @{verbatim "|]"} @{verbatim "==>"} @{text "prop\<^sup>(\<^sup>1\<^sup>)"} & @{text "(1)"} \\
-    & @{text "|"} & @{text "\<lbrakk>"} @{text prop} @{verbatim ";"} @{text "\<dots>"} @{verbatim ";"} @{text prop} @{text "\<rbrakk>"} @{text "\<Longrightarrow>"} @{text "prop\<^sup>(\<^sup>1\<^sup>)"} & @{text "(1)"} \\
-    & @{text "|"} & @{verbatim "!!"} @{text idts} @{verbatim "."} @{text prop} & @{text "(0)"} \\
-    & @{text "|"} & @{text "\<And>"} @{text idts} @{verbatim "."} @{text prop} & @{text "(0)"} \\
-    & @{text "|"} & @{verbatim OFCLASS} @{verbatim "("} @{text type} @{verbatim ","} @{text logic} @{verbatim ")"} \\
-    & @{text "|"} & @{verbatim SORT_CONSTRAINT} @{verbatim "("} @{text type} @{verbatim ")"} \\
-    & @{text "|"} & @{verbatim TERM} @{text logic} \\
-    & @{text "|"} & @{verbatim PROP} @{text aprop} \\\\
+  @{syntax_def (inner) prop} & = & @{verbatim "("} \<open>prop\<close> @{verbatim ")"} \\
+    & \<open>|\<close> & \<open>prop\<^sup>(\<^sup>4\<^sup>)\<close> @{verbatim "::"} \<open>type\<close> & \<open>(3)\<close> \\
+    & \<open>|\<close> & \<open>any\<^sup>(\<^sup>3\<^sup>)\<close> @{verbatim "=="} \<open>any\<^sup>(\<^sup>3\<^sup>)\<close> & \<open>(2)\<close> \\
+    & \<open>|\<close> & \<open>any\<^sup>(\<^sup>3\<^sup>)\<close> \<open>\<equiv>\<close> \<open>any\<^sup>(\<^sup>3\<^sup>)\<close> & \<open>(2)\<close> \\
+    & \<open>|\<close> & \<open>prop\<^sup>(\<^sup>3\<^sup>)\<close> @{verbatim "&&&"} \<open>prop\<^sup>(\<^sup>2\<^sup>)\<close> & \<open>(2)\<close> \\
+    & \<open>|\<close> & \<open>prop\<^sup>(\<^sup>2\<^sup>)\<close> @{verbatim "==>"} \<open>prop\<^sup>(\<^sup>1\<^sup>)\<close> & \<open>(1)\<close> \\
+    & \<open>|\<close> & \<open>prop\<^sup>(\<^sup>2\<^sup>)\<close> \<open>\<Longrightarrow>\<close> \<open>prop\<^sup>(\<^sup>1\<^sup>)\<close> & \<open>(1)\<close> \\
+    & \<open>|\<close> & @{verbatim "[|"} \<open>prop\<close> @{verbatim ";"} \<open>\<dots>\<close> @{verbatim ";"} \<open>prop\<close> @{verbatim "|]"} @{verbatim "==>"} \<open>prop\<^sup>(\<^sup>1\<^sup>)\<close> & \<open>(1)\<close> \\
+    & \<open>|\<close> & \<open>\<lbrakk>\<close> \<open>prop\<close> @{verbatim ";"} \<open>\<dots>\<close> @{verbatim ";"} \<open>prop\<close> \<open>\<rbrakk>\<close> \<open>\<Longrightarrow>\<close> \<open>prop\<^sup>(\<^sup>1\<^sup>)\<close> & \<open>(1)\<close> \\
+    & \<open>|\<close> & @{verbatim "!!"} \<open>idts\<close> @{verbatim "."} \<open>prop\<close> & \<open>(0)\<close> \\
+    & \<open>|\<close> & \<open>\<And>\<close> \<open>idts\<close> @{verbatim "."} \<open>prop\<close> & \<open>(0)\<close> \\
+    & \<open>|\<close> & @{verbatim OFCLASS} @{verbatim "("} \<open>type\<close> @{verbatim ","} \<open>logic\<close> @{verbatim ")"} \\
+    & \<open>|\<close> & @{verbatim SORT_CONSTRAINT} @{verbatim "("} \<open>type\<close> @{verbatim ")"} \\
+    & \<open>|\<close> & @{verbatim TERM} \<open>logic\<close> \\
+    & \<open>|\<close> & @{verbatim PROP} \<open>aprop\<close> \\\\
 
-  @{syntax_def (inner) aprop} & = & @{verbatim "("} @{text aprop} @{verbatim ")"} \\
-    & @{text "|"} & @{text "id  |  longid  |  var  |  "}@{verbatim "_"}@{text "  |  "}@{verbatim "..."} \\
-    & @{text "|"} & @{verbatim CONST} @{text "id  |  "}@{verbatim CONST} @{text "longid"} \\
-    & @{text "|"} & @{verbatim XCONST} @{text "id  |  "}@{verbatim XCONST} @{text "longid"} \\
-    & @{text "|"} & @{text "logic\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)  any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) \<dots> any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)"} & @{text "(999)"} \\\\
+  @{syntax_def (inner) aprop} & = & @{verbatim "("} \<open>aprop\<close> @{verbatim ")"} \\
+    & \<open>|\<close> & \<open>id  |  longid  |  var  |  \<close>@{verbatim "_"}\<open>  |  \<close>@{verbatim "..."} \\
+    & \<open>|\<close> & @{verbatim CONST} \<open>id  |  \<close>@{verbatim CONST} \<open>longid\<close> \\
+    & \<open>|\<close> & @{verbatim XCONST} \<open>id  |  \<close>@{verbatim XCONST} \<open>longid\<close> \\
+    & \<open>|\<close> & \<open>logic\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)  any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) \<dots> any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)\<close> & \<open>(999)\<close> \\\\
 
-  @{syntax_def (inner) logic} & = & @{verbatim "("} @{text logic} @{verbatim ")"} \\
-    & @{text "|"} & @{text "logic\<^sup>(\<^sup>4\<^sup>)"} @{verbatim "::"} @{text type} & @{text "(3)"} \\
-    & @{text "|"} & @{text "id  |  longid  |  var  |  "}@{verbatim "_"}@{text "  |  "}@{verbatim "..."} \\
-    & @{text "|"} & @{verbatim CONST} @{text "id  |  "}@{verbatim CONST} @{text "longid"} \\
-    & @{text "|"} & @{verbatim XCONST} @{text "id  |  "}@{verbatim XCONST} @{text "longid"} \\
-    & @{text "|"} & @{text "logic\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)  any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) \<dots> any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)"} & @{text "(999)"} \\
-    & @{text "|"} & @{text "\<struct> index\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)"} \\
-    & @{text "|"} & @{verbatim "%"} @{text pttrns} @{verbatim "."} @{text "any\<^sup>(\<^sup>3\<^sup>)"} & @{text "(3)"} \\
-    & @{text "|"} & @{text \<lambda>} @{text pttrns} @{verbatim "."} @{text "any\<^sup>(\<^sup>3\<^sup>)"} & @{text "(3)"} \\
-    & @{text "|"} & @{verbatim op} @{verbatim "=="}@{text "  |  "}@{verbatim op} @{text "\<equiv>"}@{text "  |  "}@{verbatim op} @{verbatim "&&&"} \\
-    & @{text "|"} & @{verbatim op} @{verbatim "==>"}@{text "  |  "}@{verbatim op} @{text "\<Longrightarrow>"} \\
-    & @{text "|"} & @{verbatim TYPE} @{verbatim "("} @{text type} @{verbatim ")"} \\\\
+  @{syntax_def (inner) logic} & = & @{verbatim "("} \<open>logic\<close> @{verbatim ")"} \\
+    & \<open>|\<close> & \<open>logic\<^sup>(\<^sup>4\<^sup>)\<close> @{verbatim "::"} \<open>type\<close> & \<open>(3)\<close> \\
+    & \<open>|\<close> & \<open>id  |  longid  |  var  |  \<close>@{verbatim "_"}\<open>  |  \<close>@{verbatim "..."} \\
+    & \<open>|\<close> & @{verbatim CONST} \<open>id  |  \<close>@{verbatim CONST} \<open>longid\<close> \\
+    & \<open>|\<close> & @{verbatim XCONST} \<open>id  |  \<close>@{verbatim XCONST} \<open>longid\<close> \\
+    & \<open>|\<close> & \<open>logic\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)  any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) \<dots> any\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)\<close> & \<open>(999)\<close> \\
+    & \<open>|\<close> & \<open>\<struct> index\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>)\<close> \\
+    & \<open>|\<close> & @{verbatim "%"} \<open>pttrns\<close> @{verbatim "."} \<open>any\<^sup>(\<^sup>3\<^sup>)\<close> & \<open>(3)\<close> \\
+    & \<open>|\<close> & \<open>\<lambda>\<close> \<open>pttrns\<close> @{verbatim "."} \<open>any\<^sup>(\<^sup>3\<^sup>)\<close> & \<open>(3)\<close> \\
+    & \<open>|\<close> & @{verbatim op} @{verbatim "=="}\<open>  |  \<close>@{verbatim op} \<open>\<equiv>\<close>\<open>  |  \<close>@{verbatim op} @{verbatim "&&&"} \\
+    & \<open>|\<close> & @{verbatim op} @{verbatim "==>"}\<open>  |  \<close>@{verbatim op} \<open>\<Longrightarrow>\<close> \\
+    & \<open>|\<close> & @{verbatim TYPE} @{verbatim "("} \<open>type\<close> @{verbatim ")"} \\\\
 
-  @{syntax_def (inner) idt} & = & @{verbatim "("} @{text idt} @{verbatim ")"}@{text "  |  id  |  "}@{verbatim "_"} \\
-    & @{text "|"} & @{text id} @{verbatim "::"} @{text type} & @{text "(0)"} \\
-    & @{text "|"} & @{verbatim "_"} @{verbatim "::"} @{text type} & @{text "(0)"} \\\\
+  @{syntax_def (inner) idt} & = & @{verbatim "("} \<open>idt\<close> @{verbatim ")"}\<open>  |  id  |  \<close>@{verbatim "_"} \\
+    & \<open>|\<close> & \<open>id\<close> @{verbatim "::"} \<open>type\<close> & \<open>(0)\<close> \\
+    & \<open>|\<close> & @{verbatim "_"} @{verbatim "::"} \<open>type\<close> & \<open>(0)\<close> \\\\
 
-  @{syntax_def (inner) index} & = & @{verbatim "\<^bsub>"} @{text "logic\<^sup>(\<^sup>0\<^sup>)"} @{verbatim "\<^esub>"}@{text "  |  |  \<index>"} \\\\
+  @{syntax_def (inner) index} & = & @{verbatim "\<^bsub>"} \<open>logic\<^sup>(\<^sup>0\<^sup>)\<close> @{verbatim "\<^esub>"}\<open>  |  |  \<index>\<close> \\\\
 
-  @{syntax_def (inner) idts} & = & @{text "idt  |  idt\<^sup>(\<^sup>1\<^sup>) idts"} & @{text "(0)"} \\\\
+  @{syntax_def (inner) idts} & = & \<open>idt  |  idt\<^sup>(\<^sup>1\<^sup>) idts\<close> & \<open>(0)\<close> \\\\
 
-  @{syntax_def (inner) pttrn} & = & @{text idt} \\\\
+  @{syntax_def (inner) pttrn} & = & \<open>idt\<close> \\\\
 
-  @{syntax_def (inner) pttrns} & = & @{text "pttrn  |  pttrn\<^sup>(\<^sup>1\<^sup>) pttrns"} & @{text "(0)"} \\\\
+  @{syntax_def (inner) pttrns} & = & \<open>pttrn  |  pttrn\<^sup>(\<^sup>1\<^sup>) pttrns\<close> & \<open>(0)\<close> \\\\
 
-  @{syntax_def (inner) type} & = & @{verbatim "("} @{text type} @{verbatim ")"} \\
-    & @{text "|"} & @{text "tid  |  tvar  |  "}@{verbatim "_"} \\
-    & @{text "|"} & @{text "tid"} @{verbatim "::"} @{text "sort  |  tvar  "}@{verbatim "::"} @{text "sort  |  "}@{verbatim "_"} @{verbatim "::"} @{text "sort"} \\
-    & @{text "|"} & @{text "type_name  |  type\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) type_name"} \\
-    & @{text "|"} & @{verbatim "("} @{text type} @{verbatim ","} @{text "\<dots>"} @{verbatim ","} @{text type} @{verbatim ")"} @{text type_name} \\
-    & @{text "|"} & @{text "type\<^sup>(\<^sup>1\<^sup>)"} @{verbatim "=>"} @{text type} & @{text "(0)"} \\
-    & @{text "|"} & @{text "type\<^sup>(\<^sup>1\<^sup>)"} @{text "\<Rightarrow>"} @{text type} & @{text "(0)"} \\
-    & @{text "|"} & @{verbatim "["} @{text type} @{verbatim ","} @{text "\<dots>"} @{verbatim ","} @{text type} @{verbatim "]"} @{verbatim "=>"} @{text type} & @{text "(0)"} \\
-    & @{text "|"} & @{verbatim "["} @{text type} @{verbatim ","} @{text "\<dots>"} @{verbatim ","} @{text type} @{verbatim "]"} @{text "\<Rightarrow>"} @{text type} & @{text "(0)"} \\
-  @{syntax_def (inner) type_name} & = & @{text "id  |  longid"} \\\\
+  @{syntax_def (inner) type} & = & @{verbatim "("} \<open>type\<close> @{verbatim ")"} \\
+    & \<open>|\<close> & \<open>tid  |  tvar  |  \<close>@{verbatim "_"} \\
+    & \<open>|\<close> & \<open>tid\<close> @{verbatim "::"} \<open>sort  |  tvar  \<close>@{verbatim "::"} \<open>sort  |  \<close>@{verbatim "_"} @{verbatim "::"} \<open>sort\<close> \\
+    & \<open>|\<close> & \<open>type_name  |  type\<^sup>(\<^sup>1\<^sup>0\<^sup>0\<^sup>0\<^sup>) type_name\<close> \\
+    & \<open>|\<close> & @{verbatim "("} \<open>type\<close> @{verbatim ","} \<open>\<dots>\<close> @{verbatim ","} \<open>type\<close> @{verbatim ")"} \<open>type_name\<close> \\
+    & \<open>|\<close> & \<open>type\<^sup>(\<^sup>1\<^sup>)\<close> @{verbatim "=>"} \<open>type\<close> & \<open>(0)\<close> \\
+    & \<open>|\<close> & \<open>type\<^sup>(\<^sup>1\<^sup>)\<close> \<open>\<Rightarrow>\<close> \<open>type\<close> & \<open>(0)\<close> \\
+    & \<open>|\<close> & @{verbatim "["} \<open>type\<close> @{verbatim ","} \<open>\<dots>\<close> @{verbatim ","} \<open>type\<close> @{verbatim "]"} @{verbatim "=>"} \<open>type\<close> & \<open>(0)\<close> \\
+    & \<open>|\<close> & @{verbatim "["} \<open>type\<close> @{verbatim ","} \<open>\<dots>\<close> @{verbatim ","} \<open>type\<close> @{verbatim "]"} \<open>\<Rightarrow>\<close> \<open>type\<close> & \<open>(0)\<close> \\
+  @{syntax_def (inner) type_name} & = & \<open>id  |  longid\<close> \\\\
 
-  @{syntax_def (inner) sort} & = & @{syntax class_name}~@{text "  |  "}@{verbatim "{}"} \\
-    & @{text "|"} & @{verbatim "{"} @{syntax class_name} @{verbatim ","} @{text "\<dots>"} @{verbatim ","} @{syntax class_name} @{verbatim "}"} \\
-  @{syntax_def (inner) class_name} & = & @{text "id  |  longid"} \\
+  @{syntax_def (inner) sort} & = & @{syntax class_name}~\<open>  |  \<close>@{verbatim "{}"} \\
+    & \<open>|\<close> & @{verbatim "{"} @{syntax class_name} @{verbatim ","} \<open>\<dots>\<close> @{verbatim ","} @{syntax class_name} @{verbatim "}"} \\
+  @{syntax_def (inner) class_name} & = & \<open>id  |  longid\<close> \\
   \end{supertabular}
   \end{center}
 
@@ -750,7 +743,7 @@
   \<^descr> @{syntax_ref (inner) prop} denotes meta-level propositions,
   which are terms of type @{typ prop}.  The syntax of such formulae of
   the meta-logic is carefully distinguished from usual conventions for
-  object-logics.  In particular, plain @{text "\<lambda>"}-term notation is
+  object-logics.  In particular, plain \<open>\<lambda>\<close>-term notation is
   \<^emph>\<open>not\<close> recognized as @{syntax (inner) prop}.
 
   \<^descr> @{syntax_ref (inner) aprop} denotes atomic propositions, which
@@ -765,8 +758,7 @@
 
   \<^descr> @{syntax_ref (inner) logic} denotes arbitrary terms of a
   logical type, excluding type @{typ prop}.  This is the main
-  syntactic category of object-logic entities, covering plain @{text
-  \<lambda>}-term notation (variables, abstraction, application), plus
+  syntactic category of object-logic entities, covering plain \<open>\<lambda>\<close>-term notation (variables, abstraction, application), plus
   anything defined by the user.
 
   When specifying notation for logical entities, all logical types
@@ -775,8 +767,7 @@
 
   \<^descr> @{syntax_ref (inner) index} denotes an optional index term for
   indexed syntax.  If omitted, it refers to the first @{keyword_ref
-  "structure"} variable in the context.  The special dummy ``@{text
-  "\<index>"}'' serves as pattern variable in mixfix annotations that
+  "structure"} variable in the context.  The special dummy ``\<open>\<index>\<close>'' serves as pattern variable in mixfix annotations that
   introduce indexed notation.
 
   \<^descr> @{syntax_ref (inner) idt} denotes identifiers, possibly
@@ -784,7 +775,7 @@
 
   \<^descr> @{syntax_ref (inner) idts} denotes a sequence of @{syntax_ref
   (inner) idt}.  This is the most basic category for variables in
-  iterated binders, such as @{text "\<lambda>"} or @{text "\<And>"}.
+  iterated binders, such as \<open>\<lambda>\<close> or \<open>\<And>\<close>.
 
   \<^descr> @{syntax_ref (inner) pttrn} and @{syntax_ref (inner) pttrns}
   denote patterns for abstraction, cases bindings etc.  In Pure, these
@@ -799,43 +790,42 @@
 
   Here are some further explanations of certain syntax features.
 
-  \<^item> In @{syntax (inner) idts}, note that @{text "x :: nat y"} is
-  parsed as @{text "x :: (nat y)"}, treating @{text y} like a type
-  constructor applied to @{text nat}.  To avoid this interpretation,
-  write @{text "(x :: nat) y"} with explicit parentheses.
+  \<^item> In @{syntax (inner) idts}, note that \<open>x :: nat y\<close> is
+  parsed as \<open>x :: (nat y)\<close>, treating \<open>y\<close> like a type
+  constructor applied to \<open>nat\<close>.  To avoid this interpretation,
+  write \<open>(x :: nat) y\<close> with explicit parentheses.
 
-  \<^item> Similarly, @{text "x :: nat y :: nat"} is parsed as @{text "x ::
-  (nat y :: nat)"}.  The correct form is @{text "(x :: nat) (y ::
-  nat)"}, or @{text "(x :: nat) y :: nat"} if @{text y} is last in the
+  \<^item> Similarly, \<open>x :: nat y :: nat\<close> is parsed as \<open>x ::
+  (nat y :: nat)\<close>.  The correct form is \<open>(x :: nat) (y ::
+  nat)\<close>, or \<open>(x :: nat) y :: nat\<close> if \<open>y\<close> is last in the
   sequence of identifiers.
 
   \<^item> Type constraints for terms bind very weakly.  For example,
-  @{text "x < y :: nat"} is normally parsed as @{text "(x < y) ::
-  nat"}, unless @{text "<"} has a very low priority, in which case the
-  input is likely to be ambiguous.  The correct form is @{text "x < (y
-  :: nat)"}.
+  \<open>x < y :: nat\<close> is normally parsed as \<open>(x < y) ::
+  nat\<close>, unless \<open><\<close> has a very low priority, in which case the
+  input is likely to be ambiguous.  The correct form is \<open>x < (y
+  :: nat)\<close>.
 
   \<^item> Dummy variables (written as underscore) may occur in different
   roles.
 
-    \<^descr> A type ``@{text "_"}'' or ``@{text "_ :: sort"}'' acts like an
+    \<^descr> A type ``\<open>_\<close>'' or ``\<open>_ :: sort\<close>'' acts like an
     anonymous inference parameter, which is filled in according to the
     most general type produced by the type-checking phase.
 
-    \<^descr> A bound ``@{text "_"}'' refers to a vacuous abstraction, where
+    \<^descr> A bound ``\<open>_\<close>'' refers to a vacuous abstraction, where
     the body does not refer to the binding introduced here, as in the
-    term @{term "\<lambda>x _. x"}, which is @{text "\<alpha>"}-equivalent to @{text
-    "\<lambda>x y. x"}.
+    term @{term "\<lambda>x _. x"}, which is \<open>\<alpha>\<close>-equivalent to \<open>\<lambda>x y. x\<close>.
 
-    \<^descr> A free ``@{text "_"}'' refers to an implicit outer binding.
-    Higher definitional packages usually allow forms like @{text "f x _
-    = x"}.
+    \<^descr> A free ``\<open>_\<close>'' refers to an implicit outer binding.
+    Higher-level definitional packages usually allow forms like \<open>f x _
+    = x\<close>.
 
-    \<^descr> A schematic ``@{text "_"}'' (within a term pattern, see
+    \<^descr> A schematic ``\<open>_\<close>'' (within a term pattern, see
     \secref{sec:term-decls}) refers to an anonymous variable that is
     implicitly abstracted over its context of locally bound variables.
-    For example, this allows pattern matching of @{text "{x. f x = g
-    x}"} against @{text "{x. _ = _}"}, or even @{text "{_. _ = _}"} by
+    For example, this allows pattern matching of \<open>{x. f x = g
+    x}\<close> against \<open>{x. _ = _}\<close>, or even \<open>{_. _ = _}\<close> by
     using both bound and schematic dummies.
 
  \<^descr> The three literal dots ``@{verbatim "..."}'' may also be
@@ -859,22 +849,21 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "print_syntax"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
+    @{command_def "print_syntax"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
   \end{matharray}
 
   \<^descr> @{command "print_syntax"} prints the inner syntax of the
   current context.  The output can be quite large; the most important
   sections are explained below.
 
-    \<^descr> @{text "lexicon"} lists the delimiters of the inner token
+    \<^descr> \<open>lexicon\<close> lists the delimiters of the inner token
     language; see \secref{sec:inner-lex}.
 
-    \<^descr> @{text "prods"} lists the productions of the underlying
+    \<^descr> \<open>prods\<close> lists the productions of the underlying
     priority grammar; see \secref{sec:priority-grammar}.
 
-    The nonterminal @{text "A\<^sup>(\<^sup>p\<^sup>)"} is rendered in plain text as @{text
-    "A[p]"}; delimiters are quoted.  Many productions have an extra
-    @{text "\<dots> => name"}.  These names later become the heads of parse
+    The nonterminal \<open>A\<^sup>(\<^sup>p\<^sup>)\<close> is rendered in plain text as \<open>A[p]\<close>; delimiters are quoted.  Many productions have an extra
+    \<open>\<dots> => name\<close>.  These names later become the heads of parse
     trees; they also guide the pretty printer.
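+
+    For example, a production for function types could be rendered as
+    @{verbatim \<open>type = type[1] "=>" type[0] => "\<^type>fun" (0)\<close>}
+    (an illustrative reading only: two argument slots with priorities 1
+    and 0, a quoted delimiter, the parse tree name, and the result
+    priority; the actual output depends on the current context).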
 
     Productions without such parse tree names are called \<^emph>\<open>copy
@@ -888,20 +877,19 @@
     production\<close>.  Chain productions act as abbreviations: conceptually,
     they are removed from the grammar by adding new productions.
     Priority information attached to chain productions is ignored; only
-    the dummy value @{text "-1"} is displayed.
+    the dummy value \<open>-1\<close> is displayed.
 
-    \<^descr> @{text "print modes"} lists the alternative print modes
+    \<^descr> \<open>print modes\<close> lists the alternative print modes
     provided by this grammar; see \secref{sec:print-modes}.
 
-    \<^descr> @{text "parse_rules"} and @{text "print_rules"} relate to
+    \<^descr> \<open>parse_rules\<close> and \<open>print_rules\<close> relate to
     syntax translations (macros); see \secref{sec:syn-trans}.
 
-    \<^descr> @{text "parse_ast_translation"} and @{text
-    "print_ast_translation"} list sets of constants that invoke
+    \<^descr> \<open>parse_ast_translation\<close> and \<open>print_ast_translation\<close> list sets of constants that invoke
     translation functions for abstract syntax trees, which are only
     required in very special situations; see \secref{sec:tr-funs}.
 
-    \<^descr> @{text "parse_translation"} and @{text "print_translation"}
+    \<^descr> \<open>parse_translation\<close> and \<open>print_translation\<close>
     list the sets of constants that invoke regular translation
     functions; see \secref{sec:tr-funs}.
 \<close>
@@ -911,8 +899,8 @@
 
 text \<open>
   \begin{tabular}{rcll}
-    @{attribute_def syntax_ambiguity_warning} & : & @{text attribute} & default @{text true} \\
-    @{attribute_def syntax_ambiguity_limit} & : & @{text attribute} & default @{text 10} \\
+    @{attribute_def syntax_ambiguity_warning} & : & \<open>attribute\<close> & default \<open>true\<close> \\
+    @{attribute_def syntax_ambiguity_limit} & : & \<open>attribute\<close> & default \<open>10\<close> \\
   \end{tabular}
 
   Depending on the grammar and the given input, parsing may be
@@ -943,7 +931,7 @@
 text \<open>The inner syntax engine of Isabelle provides separate
   mechanisms to transform parse trees either via rewrite systems on
   first-order ASTs (\secref{sec:syn-trans}), or ML functions on ASTs
-  or syntactic @{text "\<lambda>"}-terms (\secref{sec:tr-funs}).  This works
+  or syntactic \<open>\<lambda>\<close>-terms (\secref{sec:tr-funs}).  This works
   both for parsing and printing, as outlined in
   \figref{fig:parse-print}.
 
@@ -951,19 +939,19 @@
   \begin{center}
   \begin{tabular}{cl}
   string          & \\
-  @{text "\<down>"}     & lexer + parser \\
+  \<open>\<down>\<close>     & lexer + parser \\
   parse tree      & \\
-  @{text "\<down>"}     & parse AST translation \\
+  \<open>\<down>\<close>     & parse AST translation \\
   AST             & \\
-  @{text "\<down>"}     & AST rewriting (macros) \\
+  \<open>\<down>\<close>     & AST rewriting (macros) \\
   AST             & \\
-  @{text "\<down>"}     & parse translation \\
+  \<open>\<down>\<close>     & parse translation \\
   --- pre-term ---    & \\
-  @{text "\<down>"}     & print translation \\
+  \<open>\<down>\<close>     & print translation \\
   AST             & \\
-  @{text "\<down>"}     & AST rewriting (macros) \\
+  \<open>\<down>\<close>     & AST rewriting (macros) \\
   AST             & \\
-  @{text "\<down>"}     & print AST translation \\
+  \<open>\<down>\<close>     & print AST translation \\
   string          &
   \end{tabular}
   \end{center}
@@ -1034,8 +1022,8 @@
   Ast.Constant} versus @{ML Ast.Variable} serves slightly different
   purposes.
 
-  Input syntax of a term such as @{text "f a b = c"} does not yet
-  indicate the scopes of atomic entities @{text "f, a, b, c"}: they
+  Input syntax of a term such as \<open>f a b = c\<close> does not yet
+  indicate the scopes of atomic entities \<open>f, a, b, c\<close>: they
   could be global constants or local variables, even bound ones
   depending on the context of the term.  @{ML Ast.Variable} leaves
   this choice still open: later syntax layers (or translation
@@ -1047,13 +1035,13 @@
 
   Output syntax turns term constants into @{ML Ast.Constant} and
   variables (free or schematic) into @{ML Ast.Variable}.  This
-  information is precise when printing fully formal @{text "\<lambda>"}-terms.
+  information is precise when printing fully formal \<open>\<lambda>\<close>-terms.
 
   \<^medskip>
   AST translation patterns (\secref{sec:syn-trans}) that
   represent terms cannot distinguish constants and variables
-  syntactically.  Explicit indication of @{text "CONST c"} inside the
-  term language is required, unless @{text "c"} is known as special
+  syntactically.  Explicit indication of \<open>CONST c\<close> inside the
+  term language is required, unless \<open>c\<close> is known as a special
   \<^emph>\<open>syntax constant\<close> (see also @{command syntax}).  It is also
   possible to use @{command syntax} declarations (without mixfix
   annotation) to enforce that certain unqualified names are always
@@ -1061,9 +1049,9 @@
 
   The situation is simpler for ASTs that represent types or sorts,
   since the concrete syntax already distinguishes type variables from
-  type constants (constructors).  So @{text "('a, 'b) foo"}
-  corresponds to an AST application of some constant for @{text foo}
-  and variable arguments for @{text "'a"} and @{text "'b"}.  Note that
+  type constants (constructors).  So \<open>('a, 'b) foo\<close>
+  corresponds to an AST application of some constant for \<open>foo\<close>
+  and variable arguments for \<open>'a\<close> and \<open>'b\<close>.  Note that
   the postfix application is merely a feature of the concrete syntax,
   while in the AST the constructor occurs in head position.\<close>
 
@@ -1078,8 +1066,7 @@
 
   \<^emph>\<open>Authentic syntax names\<close> for @{ML Ast.Constant} avoid this
   problem: the fully-qualified constant name with a special prefix for
-  its formal category (@{text "class"}, @{text "type"}, @{text
-  "const"}, @{text "fixed"}) represents the information faithfully
+  its formal category (\<open>class\<close>, \<open>type\<close>, \<open>const\<close>, \<open>fixed\<close>) represents the information faithfully
   within the untyped AST format.  Accidental overlap with free or
   bound variables is excluded as well.  Authentic syntax names work
   implicitly in the following situations:
@@ -1108,13 +1095,13 @@
 
 text \<open>
   \begin{tabular}{rcll}
-    @{command_def "nonterminal"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "syntax"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "no_syntax"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "translations"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "no_translations"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{attribute_def syntax_ast_trace} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def syntax_ast_stats} & : & @{text attribute} & default @{text false} \\
+    @{command_def "nonterminal"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "syntax"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "no_syntax"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "translations"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "no_translations"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{attribute_def syntax_ast_trace} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def syntax_ast_stats} & : & \<open>attribute\<close> & default \<open>false\<close> \\
   \end{tabular}
   \<^medskip>
 
@@ -1143,75 +1130,71 @@
     transpat: ('(' @{syntax nameref} ')')? @{syntax string}
   \<close>}
 
-  \<^descr> @{command "nonterminal"}~@{text c} declares a type
-  constructor @{text c} (without arguments) to act as purely syntactic
+  \<^descr> @{command "nonterminal"}~\<open>c\<close> declares a type
+  constructor \<open>c\<close> (without arguments) to act as a purely syntactic
   type: a nonterminal symbol of the inner syntax.
 
-  \<^descr> @{command "syntax"}~@{text "(mode) c :: \<sigma> (mx)"} augments the
+  \<^descr> @{command "syntax"}~\<open>(mode) c :: \<sigma> (mx)\<close> augments the
   priority grammar and the pretty printer table for the given print
   mode (default @{verbatim \<open>""\<close>}). An optional keyword @{keyword_ref
   "output"} means that only the pretty printer table is affected.
 
-  Following \secref{sec:mixfix}, the mixfix annotation @{text "mx =
-  template ps q"} together with type @{text "\<sigma> = \<tau>\<^sub>1 \<Rightarrow> \<dots> \<tau>\<^sub>n \<Rightarrow> \<tau>"} and
-  specify a grammar production.  The @{text template} contains
-  delimiter tokens that surround @{text "n"} argument positions
+  Following \secref{sec:mixfix}, the mixfix annotation \<open>mx =
+  template ps q\<close> together with type \<open>\<sigma> = \<tau>\<^sub>1 \<Rightarrow> \<dots> \<tau>\<^sub>n \<Rightarrow> \<tau>\<close>
+  specifies a grammar production.  The \<open>template\<close> contains
+  delimiter tokens that surround \<open>n\<close> argument positions
   (@{verbatim "_"}).  The latter correspond to nonterminal symbols
-  @{text "A\<^sub>i"} derived from the argument types @{text "\<tau>\<^sub>i"} as
+  \<open>A\<^sub>i\<close> derived from the argument types \<open>\<tau>\<^sub>i\<close> as
   follows:
 
-    \<^item> @{text "prop"} if @{text "\<tau>\<^sub>i = prop"}
+    \<^item> \<open>prop\<close> if \<open>\<tau>\<^sub>i = prop\<close>
 
-    \<^item> @{text "logic"} if @{text "\<tau>\<^sub>i = (\<dots>)\<kappa>"} for logical type
-    constructor @{text "\<kappa> \<noteq> prop"}
+    \<^item> \<open>logic\<close> if \<open>\<tau>\<^sub>i = (\<dots>)\<kappa>\<close> for logical type
+    constructor \<open>\<kappa> \<noteq> prop\<close>
 
-    \<^item> @{text any} if @{text "\<tau>\<^sub>i = \<alpha>"} for type variables
+    \<^item> \<open>any\<close> if \<open>\<tau>\<^sub>i = \<alpha>\<close> for type variables
 
-    \<^item> @{text "\<kappa>"} if @{text "\<tau>\<^sub>i = \<kappa>"} for nonterminal @{text "\<kappa>"}
+    \<^item> \<open>\<kappa>\<close> if \<open>\<tau>\<^sub>i = \<kappa>\<close> for nonterminal \<open>\<kappa>\<close>
     (syntactic type constructor)
 
-  Each @{text "A\<^sub>i"} is decorated by priority @{text "p\<^sub>i"} from the
-  given list @{text "ps"}; missing priorities default to 0.
+  Each \<open>A\<^sub>i\<close> is decorated by priority \<open>p\<^sub>i\<close> from the
+  given list \<open>ps\<close>; missing priorities default to 0.
 
   The resulting nonterminal of the production is determined similarly
-  from type @{text "\<tau>"}, with priority @{text "q"} and default 1000.
+  from type \<open>\<tau>\<close>, with priority \<open>q\<close> and default 1000.
 
   \<^medskip>
-  Parsing via this production produces parse trees @{text
-  "t\<^sub>1, \<dots>, t\<^sub>n"} for the argument slots.  The resulting parse tree is
-  composed as @{text "c t\<^sub>1 \<dots> t\<^sub>n"}, by using the syntax constant @{text
-  "c"} of the syntax declaration.
+  Parsing via this production produces parse trees \<open>t\<^sub>1, \<dots>, t\<^sub>n\<close> for the argument slots.  The resulting parse tree is
+  composed as \<open>c t\<^sub>1 \<dots> t\<^sub>n\<close>, by using the syntax constant \<open>c\<close> of the syntax declaration.
 
   Such syntactic constants are invented on the spot, without formal
   check wrt.\ existing declarations.  It is conventional to use plain
-  identifiers prefixed by a single underscore (e.g.\ @{text
-  "_foobar"}).  Names should be chosen with care, to avoid clashes
+  identifiers prefixed by a single underscore (e.g.\ \<open>_foobar\<close>).  Names should be chosen with care, to avoid clashes
   with other syntax declarations.
 
   \<^medskip>
-  The special case of copy production is specified by @{text
-  "c = "}@{verbatim \<open>""\<close>} (empty string).  It means that the
-  resulting parse tree @{text "t"} is copied directly, without any
+  The special case of copy production is specified by \<open>c = \<close>@{verbatim \<open>""\<close>} (empty string).  It means that the
+  resulting parse tree \<open>t\<close> is copied directly, without any
   further decoration.
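+
+  For example, the following declaration (an illustrative sketch, not
+  part of Pure) provides the template @{verbatim \<open>"_ \<oplus> _"\<close>} with
+  priorities \<open>ps = [31, 30]\<close> and \<open>q = 30\<close>:
+
+  @{verbatim [display] \<open>syntax
+  "_xor" :: "logic \<Rightarrow> logic \<Rightarrow> logic"  ("_ \<oplus> _" [31, 30] 30)\<close>}
+
+  Both argument types yield the nonterminal \<open>logic\<close>, so this declares
+  the production \<open>logic = logic\<^sup>(\<^sup>3\<^sup>1\<^sup>) \<oplus> logic\<^sup>(\<^sup>3\<^sup>0\<^sup>)\<close> with result priority
+  30 and parse tree head \<open>_xor\<close>.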
 
-  \<^descr> @{command "no_syntax"}~@{text "(mode) decls"} removes grammar
-  declarations (and translations) resulting from @{text decls}, which
+  \<^descr> @{command "no_syntax"}~\<open>(mode) decls\<close> removes grammar
+  declarations (and translations) resulting from \<open>decls\<close>, which
   are interpreted in the same manner as for @{command "syntax"} above.
 
-  \<^descr> @{command "translations"}~@{text rules} specifies syntactic
+  \<^descr> @{command "translations"}~\<open>rules\<close> specifies syntactic
   translation rules (i.e.\ macros) as first-order rewrite rules on
   ASTs (\secref{sec:ast}).  The theory context maintains two
  independent lists of translation rules: parse rules (@{verbatim "=>"}
-  or @{text "\<rightharpoonup>"}) and print rules (@{verbatim "<="} or @{text "\<leftharpoondown>"}).
+  or \<open>\<rightharpoonup>\<close>) and print rules (@{verbatim "<="} or \<open>\<leftharpoondown>\<close>).
   For convenience, both can be specified simultaneously as parse~/
-  print rules (@{verbatim "=="} or @{text "\<rightleftharpoons>"}).
+  print rules (@{verbatim "=="} or \<open>\<rightleftharpoons>\<close>).
 
   Translation patterns may be prefixed by the syntactic category to be
-  used for parsing; the default is @{text logic} which means that
+  used for parsing; the default is \<open>logic\<close> which means that
   regular term syntax is used.  Both sides of the syntax translation
   rule undergo parsing and parse AST translations
  (\secref{sec:tr-funs}), in order to perform some fundamental
-  normalization like @{text "\<lambda>x y. b \<leadsto> \<lambda>x. \<lambda>y. b"}, but other AST
+  normalization like \<open>\<lambda>x y. b \<leadsto> \<lambda>x. \<lambda>y. b\<close>, but other AST
   translation rules are \<^emph>\<open>not\<close> applied recursively here.
 
   When processing AST patterns, the inner syntax lexer runs in a
@@ -1225,23 +1208,22 @@
   Ast.Variable} as follows: a qualified name or syntax constant
   declared via @{command syntax}, or parse tree head of concrete
   notation becomes @{ML Ast.Constant}, anything else @{ML
-  Ast.Variable}.  Note that @{text CONST} and @{text XCONST} within
+  Ast.Variable}.  Note that \<open>CONST\<close> and \<open>XCONST\<close> within
  the term language (\secref{sec:pure-grammar}) make it possible to enforce
   treatment as constants.
 
-  AST rewrite rules @{text "(lhs, rhs)"} need to obey the following
+  AST rewrite rules \<open>(lhs, rhs)\<close> need to obey the following
   side-conditions:
 
-    \<^item> Rules must be left linear: @{text "lhs"} must not contain
+    \<^item> Rules must be left-linear: \<open>lhs\<close> must not contain
     repeated variables.\footnote{The deeper reason for this is that AST
     equality is not well-defined: different occurrences of the ``same''
     AST could be decorated differently by accidental type-constraints or
     source position information, for example.}
 
-    \<^item> Every variable in @{text "rhs"} must also occur in @{text
-    "lhs"}.
+    \<^item> Every variable in \<open>rhs\<close> must also occur in \<open>lhs\<close>.
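+
+  For example, bounded quantifier notation in Isabelle/HOL is set up
+  along these lines (an illustrative sketch of the actual declarations):
+
+  @{verbatim [display] \<open>syntax
+  "_Ball" :: "pttrn \<Rightarrow> 'a set \<Rightarrow> bool \<Rightarrow> bool"  ("(3\<forall>_\<in>_./ _)" [0, 0, 10] 10)
+translations
+  "\<forall>x\<in>A. P" \<rightleftharpoons> "CONST Ball A (\<lambda>x. P)"\<close>}
+
+  The rule is left-linear, every variable of the right-hand side also
+  occurs on the left, and @{verbatim CONST} ensures that \<open>Ball\<close> is
+  treated as a constant rather than a place-holder variable.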
 
-  \<^descr> @{command "no_translations"}~@{text rules} removes syntactic
+  \<^descr> @{command "no_translations"}~\<open>rules\<close> removes syntactic
   translation rules, which are interpreted in the same manner as for
   @{command "translations"} above.
 
@@ -1278,38 +1260,36 @@
   first-order term rewriting system.  We first examine how a single
   rule is applied.
 
-  Let @{text "t"} be the abstract syntax tree to be normalized and
-  @{text "(lhs, rhs)"} some translation rule.  A subtree @{text "u"}
-  of @{text "t"} is called \<^emph>\<open>redex\<close> if it is an instance of @{text
-  "lhs"}; in this case the pattern @{text "lhs"} is said to match the
-  object @{text "u"}.  A redex matched by @{text "lhs"} may be
-  replaced by the corresponding instance of @{text "rhs"}, thus
-  \<^emph>\<open>rewriting\<close> the AST @{text "t"}.  Matching requires some notion
+  Let \<open>t\<close> be the abstract syntax tree to be normalized and
+  \<open>(lhs, rhs)\<close> some translation rule.  A subtree \<open>u\<close>
+  of \<open>t\<close> is called a \<^emph>\<open>redex\<close> if it is an instance of \<open>lhs\<close>; in this case the pattern \<open>lhs\<close> is said to match the
+  object \<open>u\<close>.  A redex matched by \<open>lhs\<close> may be
+  replaced by the corresponding instance of \<open>rhs\<close>, thus
+  \<^emph>\<open>rewriting\<close> the AST \<open>t\<close>.  Matching requires some notion
   of \<^emph>\<open>place-holders\<close> in rule patterns: @{ML Ast.Variable} serves
   this purpose.
 
-  More precisely, the matching of the object @{text "u"} against the
-  pattern @{text "lhs"} is performed as follows:
+  More precisely, the matching of the object \<open>u\<close> against the
+  pattern \<open>lhs\<close> is performed as follows:
 
-  \<^item> Objects of the form @{ML Ast.Variable}~@{text "x"} or @{ML
-  Ast.Constant}~@{text "x"} are matched by pattern @{ML
-  Ast.Constant}~@{text "x"}.  Thus all atomic ASTs in the object are
+  \<^item> Objects of the form @{ML Ast.Variable}~\<open>x\<close> or @{ML
+  Ast.Constant}~\<open>x\<close> are matched by pattern @{ML
+  Ast.Constant}~\<open>x\<close>.  Thus all atomic ASTs in the object are
   treated as (potential) constants, and a successful match makes them
   actual constants even before name space resolution (see also
   \secref{sec:ast}).
 
-  \<^item> Object @{text "u"} is matched by pattern @{ML
-  Ast.Variable}~@{text "x"}, binding @{text "x"} to @{text "u"}.
+  \<^item> Object \<open>u\<close> is matched by pattern @{ML
+  Ast.Variable}~\<open>x\<close>, binding \<open>x\<close> to \<open>u\<close>.
 
-  \<^item> Object @{ML Ast.Appl}~@{text "us"} is matched by @{ML
-  Ast.Appl}~@{text "ts"} if @{text "us"} and @{text "ts"} have the
+  \<^item> Object @{ML Ast.Appl}~\<open>us\<close> is matched by @{ML
+  Ast.Appl}~\<open>ts\<close> if \<open>us\<close> and \<open>ts\<close> have the
   same length and each corresponding subtree matches.
 
   \<^item> In every other case, matching fails.
 
 
-  A successful match yields a substitution that is applied to @{text
-  "rhs"}, generating the instance that replaces @{text "u"}.
+  A successful match yields a substitution that is applied to \<open>rhs\<close>, generating the instance that replaces \<open>u\<close>.
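+
+  For example (an illustrative sketch in ML notation), consider a rule
+  with pattern @{verbatim \<open>Ast.Appl [Ast.Constant "_foo", Ast.Variable "x"]\<close>}
+  and replacement @{verbatim \<open>Ast.Appl [Ast.Constant "bar", Ast.Variable "x"]\<close>}.
+  The object @{verbatim \<open>Ast.Appl [Ast.Constant "_foo", Ast.Constant "c"]\<close>}
+  is a redex: matching binds \<open>x\<close> to @{verbatim \<open>Ast.Constant "c"\<close>}, and
+  rewriting replaces it by @{verbatim \<open>Ast.Appl [Ast.Constant "bar", Ast.Constant "c"]\<close>}.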
 
   Normalizing an AST involves repeatedly applying translation rules
   until none are applicable.  This works yoyo-like: top-down,
@@ -1329,11 +1309,11 @@
   \end{warn}
 
   \begin{warn}
-  If @{attribute_ref eta_contract} is set to @{text "true"}, terms
-  will be @{text "\<eta>"}-contracted \<^emph>\<open>before\<close> the AST rewriter sees
+  If @{attribute_ref eta_contract} is set to \<open>true\<close>, terms
+  will be \<open>\<eta>\<close>-contracted \<^emph>\<open>before\<close> the AST rewriter sees
   them.  Thus some abstraction nodes needed for print rules to match
-  may vanish.  For example, @{text "Ball A (\<lambda>x. P x)"} would contract
-  to @{text "Ball A P"} and the standard print rule would fail to
+  may vanish.  For example, \<open>Ball A (\<lambda>x. P x)\<close> would contract
+  to \<open>Ball A P\<close> and the standard print rule would fail to
   apply.  This problem can be avoided by hand-written ML translation
   functions (see also \secref{sec:tr-funs}), which is in fact the same
   mechanism used in built-in @{keyword "binder"} declarations.
@@ -1345,15 +1325,15 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "parse_ast_translation"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "parse_translation"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "print_translation"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "typed_print_translation"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "print_ast_translation"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{ML_antiquotation_def "class_syntax"} & : & @{text "ML antiquotation"} \\
-    @{ML_antiquotation_def "type_syntax"} & : & @{text "ML antiquotation"} \\
-    @{ML_antiquotation_def "const_syntax"} & : & @{text "ML antiquotation"} \\
-    @{ML_antiquotation_def "syntax_const"} & : & @{text "ML antiquotation"} \\
+    @{command_def "parse_ast_translation"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "parse_translation"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "print_translation"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "typed_print_translation"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "print_ast_translation"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{ML_antiquotation_def "class_syntax"} & : & \<open>ML antiquotation\<close> \\
+    @{ML_antiquotation_def "type_syntax"} & : & \<open>ML antiquotation\<close> \\
+    @{ML_antiquotation_def "const_syntax"} & : & \<open>ML antiquotation\<close> \\
+    @{ML_antiquotation_def "syntax_const"} & : & \<open>ML antiquotation\<close> \\
   \end{matharray}
 
   Syntax translation functions written in ML admit almost arbitrary
@@ -1392,25 +1372,22 @@
   \end{tabular}}
   \<^medskip>
 
-  The argument list consists of @{text "(c, tr)"} pairs, where @{text
-  "c"} is the syntax name of the formal entity involved, and @{text
-  "tr"} a function that translates a syntax form @{text "c args"} into
-  @{text "tr ctxt args"} (depending on the context).  The Isabelle/ML
-  naming convention for parse translations is @{text "c_tr"} and for
-  print translations @{text "c_tr'"}.
+  The argument list consists of \<open>(c, tr)\<close> pairs, where \<open>c\<close> is the syntax name of the formal entity involved, and \<open>tr\<close> a function that translates a syntax form \<open>c args\<close> into
+  \<open>tr ctxt args\<close> (depending on the context).  The Isabelle/ML
+  naming convention for parse translations is \<open>c_tr\<close> and for
+  print translations \<open>c_tr'\<close>.
 
   The @{command_ref print_syntax} command displays the sets of names
-  associated with the translation functions of a theory under @{text
-  "parse_ast_translation"} etc.
+  associated with the translation functions of a theory under \<open>parse_ast_translation\<close> etc.
 
-  \<^descr> @{text "@{class_syntax c}"}, @{text "@{type_syntax c}"},
-  @{text "@{const_syntax c}"} inline the authentic syntax name of the
+  \<^descr> \<open>@{class_syntax c}\<close>, \<open>@{type_syntax c}\<close>,
+  \<open>@{const_syntax c}\<close> inline the authentic syntax name of the
   given formal entities into the ML source.  This is the
   fully-qualified logical name prefixed by a special marker to
   indicate its kind: thus different logical name spaces are properly
   distinguished within parse trees.
 
-  \<^descr> @{text "@{const_syntax c}"} inlines the name @{text "c"} of
+  \<^descr> \<open>@{syntax_const c}\<close> inlines the name \<open>c\<close> of
   the given syntax constant, having checked that it has been declared
   via some @{command syntax} commands within the theory context.  Note
   that the usual naming convention makes syntax constants start with
@@ -1424,19 +1401,19 @@
 text \<open>The different kinds of translation functions are invoked during
   the transformations between parse trees, ASTs and syntactic terms
   (cf.\ \figref{fig:parse-print}).  Whenever a combination of the form
-  @{text "c x\<^sub>1 \<dots> x\<^sub>n"} is encountered, and a translation function
-  @{text "f"} of appropriate kind is declared for @{text "c"}, the
-  result is produced by evaluation of @{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"} in ML.
+  \<open>c x\<^sub>1 \<dots> x\<^sub>n\<close> is encountered, and a translation function
+  \<open>f\<close> of appropriate kind is declared for \<open>c\<close>, the
+  result is produced by evaluation of \<open>f [x\<^sub>1, \<dots>, x\<^sub>n]\<close> in ML.
 
-  For AST translations, the arguments @{text "x\<^sub>1, \<dots>, x\<^sub>n"} are ASTs.  A
-  combination has the form @{ML "Ast.Constant"}~@{text "c"} or @{ML
-  "Ast.Appl"}~@{text "["}@{ML Ast.Constant}~@{text "c, x\<^sub>1, \<dots>, x\<^sub>n]"}.
+  For AST translations, the arguments \<open>x\<^sub>1, \<dots>, x\<^sub>n\<close> are ASTs.  A
+  combination has the form @{ML "Ast.Constant"}~\<open>c\<close> or @{ML
+  "Ast.Appl"}~\<open>[\<close>@{ML Ast.Constant}~\<open>c, x\<^sub>1, \<dots>, x\<^sub>n]\<close>.
   For term translations, the arguments are terms and a combination has
-  the form @{ML Const}~@{text "(c, \<tau>)"} or @{ML Const}~@{text "(c, \<tau>)
-  $ x\<^sub>1 $ \<dots> $ x\<^sub>n"}.  Terms allow more sophisticated transformations
+  the form @{ML Const}~\<open>(c, \<tau>)\<close> or @{ML Const}~\<open>(c, \<tau>)
+  $ x\<^sub>1 $ \<dots> $ x\<^sub>n\<close>.  Terms allow more sophisticated transformations
   than ASTs do, typically involving abstractions and bound
   variables. \<^emph>\<open>Typed\<close> print translations may even peek at the type
-  @{text "\<tau>"} of the constant they are invoked on, although some
+  \<open>\<tau>\<close> of the constant they are invoked on, although some
   information might have been suppressed for term output already.
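+
+  For example, a parse translation for a hypothetical syntax constant
+  \<open>_frac\<close> (all names invented for illustration) could be declared as
+  follows:
+
+  @{verbatim [display] \<open>parse_translation \<open>
+  [(@{syntax_const "_frac"}, fn ctxt => fn [t, u] =>
+      Syntax.const @{const_syntax divide} $ t $ u)]
+\<close>\<close>}
+
+  The two arguments of the combination \<open>_frac t u\<close> are re-assembled
+  into an application of the authentic constant name inlined by the
+  \<open>@{const_syntax}\<close> antiquotation.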
 
   Regardless of whether they act on ASTs or terms, translation
@@ -1493,7 +1470,7 @@
   source position information from input tokens.
 
   The Pure syntax provides predefined AST translations to make the
-  basic @{text "\<lambda>"}-term structure more apparent within the
+  basic \<open>\<lambda>\<close>-term structure more apparent within the
   (first-order) AST representation, and thus facilitate the use of
   @{command translations} (see also \secref{sec:syn-trans}).  This
   covers ordinary term application, type application, nested
@@ -1505,13 +1482,13 @@
   \begin{tabular}{ll}
   input source & AST \\
   \hline
-  @{text "f x y z"} & @{verbatim "(f x y z)"} \\
-  @{text "'a ty"} & @{verbatim "(ty 'a)"} \\
-  @{text "('a, 'b)ty"} & @{verbatim "(ty 'a 'b)"} \\
-  @{text "\<lambda>x y z. t"} & @{verbatim \<open>("_abs" x ("_abs" y ("_abs" z t)))\<close>} \\
-  @{text "\<lambda>x :: 'a. t"} & @{verbatim \<open>("_abs" ("_constrain" x 'a) t)\<close>} \\
-  @{text "\<lbrakk>P; Q; R\<rbrakk> \<Longrightarrow> S"} & @{verbatim \<open>("Pure.imp" P ("Pure.imp" Q ("Pure.imp" R S)))\<close>} \\
-   @{text "['a, 'b, 'c] \<Rightarrow> 'd"} & @{verbatim \<open>("fun" 'a ("fun" 'b ("fun" 'c 'd)))\<close>} \\
+  \<open>f x y z\<close> & @{verbatim "(f x y z)"} \\
+  \<open>'a ty\<close> & @{verbatim "(ty 'a)"} \\
+  \<open>('a, 'b)ty\<close> & @{verbatim "(ty 'a 'b)"} \\
+  \<open>\<lambda>x y z. t\<close> & @{verbatim \<open>("_abs" x ("_abs" y ("_abs" z t)))\<close>} \\
+  \<open>\<lambda>x :: 'a. t\<close> & @{verbatim \<open>("_abs" ("_constrain" x 'a) t)\<close>} \\
+  \<open>\<lbrakk>P; Q; R\<rbrakk> \<Longrightarrow> S\<close> & @{verbatim \<open>("Pure.imp" P ("Pure.imp" Q ("Pure.imp" R S)))\<close>} \\
+  \<open>['a, 'b, 'c] \<Rightarrow> 'd\<close> & @{verbatim \<open>("fun" 'a ("fun" 'b ("fun" 'c 'd)))\<close>} \\
   \end{tabular}
   \end{center}
 
@@ -1568,28 +1545,28 @@
   used as templates for pretty printing, with argument slots stemming
   from nonterminals, and syntactic sugar stemming from literal tokens.
 
-  Each AST application with constant head @{text "c"} and arguments
-  @{text "t\<^sub>1"}, \dots, @{text "t\<^sub>n"} (for @{text "n = 0"} the AST is
-  just the constant @{text "c"} itself) is printed according to the
-  first grammar production of result name @{text "c"}.  The required
+  Each AST application with constant head \<open>c\<close> and arguments
+  \<open>t\<^sub>1\<close>, \dots, \<open>t\<^sub>n\<close> (for \<open>n = 0\<close> the AST is
+  just the constant \<open>c\<close> itself) is printed according to the
+  first grammar production of result name \<open>c\<close>.  The required
   syntax priority of the argument slot is given by its nonterminal
-  @{text "A\<^sup>(\<^sup>p\<^sup>)"}.  The argument @{text "t\<^sub>i"} that corresponds to the
-  position of @{text "A\<^sup>(\<^sup>p\<^sup>)"} is printed recursively, and then put in
-  parentheses \<^emph>\<open>if\<close> its priority @{text "p"} requires this.  The
+  \<open>A\<^sup>(\<^sup>p\<^sup>)\<close>.  The argument \<open>t\<^sub>i\<close> that corresponds to the
+  position of \<open>A\<^sup>(\<^sup>p\<^sup>)\<close> is printed recursively, and then put in
+  parentheses \<^emph>\<open>if\<close> its priority \<open>p\<close> requires this.  The
   resulting output is concatenated with the syntactic sugar according
   to the grammar production.
 
-  If an AST application @{text "(c x\<^sub>1 \<dots> x\<^sub>m)"} has more arguments than
-  the corresponding production, it is first split into @{text "((c x\<^sub>1
-  \<dots> x\<^sub>n) x\<^sub>n\<^sub>+\<^sub>1 \<dots> x\<^sub>m)"} and then printed recursively as above.
+  If an AST application \<open>(c x\<^sub>1 \<dots> x\<^sub>m)\<close> has more arguments than
+  the corresponding production, it is first split into \<open>((c x\<^sub>1
+  \<dots> x\<^sub>n) x\<^sub>n\<^sub>+\<^sub>1 \<dots> x\<^sub>m)\<close> and then printed recursively as above.
 
   Applications with too few arguments or with non-constant head or
   without a corresponding production are printed in prefix-form like
-  @{text "f t\<^sub>1 \<dots> t\<^sub>n"} for terms.
+  \<open>f t\<^sub>1 \<dots> t\<^sub>n\<close> for terms.
 
-  Multiple productions associated with some name @{text "c"} are tried
+  Multiple productions associated with some name \<open>c\<close> are tried
   in order of appearance within the grammar.  An occurrence of some
-  AST variable @{text "x"} is printed as @{text "x"} outright.
+  AST variable \<open>x\<close> is printed as \<open>x\<close> outright.
 
   \<^medskip>
   White space is \<^emph>\<open>not\<close> inserted automatically.  If
--- a/src/Doc/Isar_Ref/Outer_Syntax.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Outer_Syntax.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -39,8 +39,8 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "print_commands"}@{text "\<^sup>*"} & : & @{text "any \<rightarrow>"} \\
-    @{command_def "help"}@{text "\<^sup>*"} & : & @{text "any \<rightarrow>"} \\
+    @{command_def "print_commands"}\<open>\<^sup>*\<close> & : & \<open>any \<rightarrow>\<close> \\
+    @{command_def "help"}\<open>\<^sup>*\<close> & : & \<open>any \<rightarrow>\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -50,7 +50,7 @@
   \<^descr> @{command "print_commands"} prints all outer syntax keywords
   and commands.
 
-  \<^descr> @{command "help"}~@{text "pats"} retrieves outer syntax
+  \<^descr> @{command "help"}~\<open>pats\<close> retrieves outer syntax
   commands according to the specified name patterns.
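+
+  For example, @{verbatim \<open>help "print"\<close>} lists the commands whose name
+  matches the given pattern (an illustrative usage).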
 \<close>
 
@@ -101,50 +101,49 @@
 
   \begin{center}
   \begin{supertabular}{rcl}
-    @{syntax_def ident} & = & @{text "letter (subscript\<^sup>? quasiletter)\<^sup>*"} \\
-    @{syntax_def longident} & = & @{text "ident("}@{verbatim "."}@{text "ident)\<^sup>+"} \\
-    @{syntax_def symident} & = & @{text "sym\<^sup>+  |  "}@{verbatim \<open>\\<close>}@{verbatim "<"}@{text ident}@{verbatim ">"} \\
-    @{syntax_def nat} & = & @{text "digit\<^sup>+"} \\
-    @{syntax_def float} & = & @{syntax_ref nat}@{verbatim "."}@{syntax_ref nat}@{text "  |  "}@{verbatim "-"}@{syntax_ref nat}@{verbatim "."}@{syntax_ref nat} \\
-    @{syntax_def var} & = & @{verbatim "?"}@{text "ident  |  "}@{verbatim "?"}@{text ident}@{verbatim "."}@{text nat} \\
-    @{syntax_def typefree} & = & @{verbatim "'"}@{text ident} \\
-    @{syntax_def typevar} & = & @{verbatim "?"}@{text "typefree  |  "}@{verbatim "?"}@{text typefree}@{verbatim "."}@{text nat} \\
-    @{syntax_def string} & = & @{verbatim \<open>"\<close>} @{text "\<dots>"} @{verbatim \<open>"\<close>} \\
-    @{syntax_def altstring} & = & @{verbatim "`"} @{text "\<dots>"} @{verbatim "`"} \\
-    @{syntax_def cartouche} & = & @{verbatim "\<open>"} @{text "\<dots>"} @{verbatim "\<close>"} \\
-    @{syntax_def verbatim} & = & @{verbatim "{*"} @{text "\<dots>"} @{verbatim "*}"} \\[1ex]
+    @{syntax_def ident} & = & \<open>letter (subscript\<^sup>? quasiletter)\<^sup>*\<close> \\
+    @{syntax_def longident} & = & \<open>ident(\<close>@{verbatim "."}\<open>ident)\<^sup>+\<close> \\
+    @{syntax_def symident} & = & \<open>sym\<^sup>+  |  \<close>@{verbatim \<open>\\<close>}@{verbatim "<"}\<open>ident\<close>@{verbatim ">"} \\
+    @{syntax_def nat} & = & \<open>digit\<^sup>+\<close> \\
+    @{syntax_def float} & = & @{syntax_ref nat}@{verbatim "."}@{syntax_ref nat}\<open>  |  \<close>@{verbatim "-"}@{syntax_ref nat}@{verbatim "."}@{syntax_ref nat} \\
+    @{syntax_def var} & = & @{verbatim "?"}\<open>ident  |  \<close>@{verbatim "?"}\<open>ident\<close>@{verbatim "."}\<open>nat\<close> \\
+    @{syntax_def typefree} & = & @{verbatim "'"}\<open>ident\<close> \\
+    @{syntax_def typevar} & = & @{verbatim "?"}\<open>typefree  |  \<close>@{verbatim "?"}\<open>typefree\<close>@{verbatim "."}\<open>nat\<close> \\
+    @{syntax_def string} & = & @{verbatim \<open>"\<close>} \<open>\<dots>\<close> @{verbatim \<open>"\<close>} \\
+    @{syntax_def altstring} & = & @{verbatim "`"} \<open>\<dots>\<close> @{verbatim "`"} \\
+    @{syntax_def cartouche} & = & @{verbatim "\<open>"} \<open>\<dots>\<close> @{verbatim "\<close>"} \\
+    @{syntax_def verbatim} & = & @{verbatim "{*"} \<open>\<dots>\<close> @{verbatim "*}"} \\[1ex]
 
-    @{text letter} & = & @{text "latin  |  "}@{verbatim \<open>\\<close>}@{verbatim "<"}@{text latin}@{verbatim ">"}@{text "  |  "}@{verbatim \<open>\\<close>}@{verbatim "<"}@{text "latin latin"}@{verbatim ">"}@{text "  |  greek  |"} \\
-    @{text subscript} & = & @{verbatim "\<^sub>"} \\
-    @{text quasiletter} & = & @{text "letter  |  digit  |  "}@{verbatim "_"}@{text "  |  "}@{verbatim "'"} \\
-    @{text latin} & = & @{verbatim a}@{text "  | \<dots> |  "}@{verbatim z}@{text "  |  "}@{verbatim A}@{text "  |  \<dots> |  "}@{verbatim Z} \\
-    @{text digit} & = & @{verbatim "0"}@{text "  |  \<dots> |  "}@{verbatim "9"} \\
-    @{text sym} & = & @{verbatim "!"}@{text "  |  "}@{verbatim "#"}@{text "  |  "}@{verbatim "$"}@{text "  |  "}@{verbatim "%"}@{text "  |  "}@{verbatim "&"}@{text "  |  "}@{verbatim "*"}@{text "  |  "}@{verbatim "+"}@{text "  |  "}@{verbatim "-"}@{text "  |  "}@{verbatim "/"}@{text "  |"} \\
-    & & @{verbatim "<"}@{text "  |  "}@{verbatim "="}@{text "  |  "}@{verbatim ">"}@{text "  |  "}@{verbatim "?"}@{text "  |  "}@{verbatim "@"}@{text "  |  "}@{verbatim "^"}@{text "  |  "}@{verbatim "_"}@{text "  |  "}@{verbatim "|"}@{text "  |  "}@{verbatim "~"} \\
-    @{text greek} & = & @{verbatim "\<alpha>"}@{text "  |  "}@{verbatim "\<beta>"}@{text "  |  "}@{verbatim "\<gamma>"}@{text "  |  "}@{verbatim "\<delta>"}@{text "  |"} \\
-          &   & @{verbatim "\<epsilon>"}@{text "  |  "}@{verbatim "\<zeta>"}@{text "  |  "}@{verbatim "\<eta>"}@{text "  |  "}@{verbatim "\<theta>"}@{text "  |"} \\
-          &   & @{verbatim "\<iota>"}@{text "  |  "}@{verbatim "\<kappa>"}@{text "  |  "}@{verbatim "\<mu>"}@{text "  |  "}@{verbatim "\<nu>"}@{text "  |"} \\
-          &   & @{verbatim "\<xi>"}@{text "  |  "}@{verbatim "\<pi>"}@{text "  |  "}@{verbatim "\<rho>"}@{text "  |  "}@{verbatim "\<sigma>"}@{text "  |  "}@{verbatim "\<tau>"}@{text "  |"} \\
-          &   & @{verbatim "\<upsilon>"}@{text "  |  "}@{verbatim "\<phi>"}@{text "  |  "}@{verbatim "\<chi>"}@{text "  |  "}@{verbatim "\<psi>"}@{text "  |"} \\
-          &   & @{verbatim "\<omega>"}@{text "  |  "}@{verbatim "\<Gamma>"}@{text "  |  "}@{verbatim "\<Delta>"}@{text "  |  "}@{verbatim "\<Theta>"}@{text "  |"} \\
-          &   & @{verbatim "\<Lambda>"}@{text "  |  "}@{verbatim "\<Xi>"}@{text "  |  "}@{verbatim "\<Pi>"}@{text "  |  "}@{verbatim "\<Sigma>"}@{text "  |"} \\
-          &   & @{verbatim "\<Upsilon>"}@{text "  |  "}@{verbatim "\<Phi>"}@{text "  |  "}@{verbatim "\<Psi>"}@{text "  |  "}@{verbatim "\<Omega>"} \\
+    \<open>letter\<close> & = & \<open>latin  |  \<close>@{verbatim \<open>\\<close>}@{verbatim "<"}\<open>latin\<close>@{verbatim ">"}\<open>  |  \<close>@{verbatim \<open>\\<close>}@{verbatim "<"}\<open>latin latin\<close>@{verbatim ">"}\<open>  |  greek\<close> \\
+    \<open>subscript\<close> & = & @{verbatim "\<^sub>"} \\
+    \<open>quasiletter\<close> & = & \<open>letter  |  digit  |  \<close>@{verbatim "_"}\<open>  |  \<close>@{verbatim "'"} \\
+    \<open>latin\<close> & = & @{verbatim a}\<open>  | \<dots> |  \<close>@{verbatim z}\<open>  |  \<close>@{verbatim A}\<open>  |  \<dots> |  \<close>@{verbatim Z} \\
+    \<open>digit\<close> & = & @{verbatim "0"}\<open>  |  \<dots> |  \<close>@{verbatim "9"} \\
+    \<open>sym\<close> & = & @{verbatim "!"}\<open>  |  \<close>@{verbatim "#"}\<open>  |  \<close>@{verbatim "$"}\<open>  |  \<close>@{verbatim "%"}\<open>  |  \<close>@{verbatim "&"}\<open>  |  \<close>@{verbatim "*"}\<open>  |  \<close>@{verbatim "+"}\<open>  |  \<close>@{verbatim "-"}\<open>  |  \<close>@{verbatim "/"}\<open>  |\<close> \\
+    & & @{verbatim "<"}\<open>  |  \<close>@{verbatim "="}\<open>  |  \<close>@{verbatim ">"}\<open>  |  \<close>@{verbatim "?"}\<open>  |  \<close>@{verbatim "@"}\<open>  |  \<close>@{verbatim "^"}\<open>  |  \<close>@{verbatim "_"}\<open>  |  \<close>@{verbatim "|"}\<open>  |  \<close>@{verbatim "~"} \\
+    \<open>greek\<close> & = & @{verbatim "\<alpha>"}\<open>  |  \<close>@{verbatim "\<beta>"}\<open>  |  \<close>@{verbatim "\<gamma>"}\<open>  |  \<close>@{verbatim "\<delta>"}\<open>  |\<close> \\
+          &   & @{verbatim "\<epsilon>"}\<open>  |  \<close>@{verbatim "\<zeta>"}\<open>  |  \<close>@{verbatim "\<eta>"}\<open>  |  \<close>@{verbatim "\<theta>"}\<open>  |\<close> \\
+          &   & @{verbatim "\<iota>"}\<open>  |  \<close>@{verbatim "\<kappa>"}\<open>  |  \<close>@{verbatim "\<mu>"}\<open>  |  \<close>@{verbatim "\<nu>"}\<open>  |\<close> \\
+          &   & @{verbatim "\<xi>"}\<open>  |  \<close>@{verbatim "\<pi>"}\<open>  |  \<close>@{verbatim "\<rho>"}\<open>  |  \<close>@{verbatim "\<sigma>"}\<open>  |  \<close>@{verbatim "\<tau>"}\<open>  |\<close> \\
+          &   & @{verbatim "\<upsilon>"}\<open>  |  \<close>@{verbatim "\<phi>"}\<open>  |  \<close>@{verbatim "\<chi>"}\<open>  |  \<close>@{verbatim "\<psi>"}\<open>  |\<close> \\
+          &   & @{verbatim "\<omega>"}\<open>  |  \<close>@{verbatim "\<Gamma>"}\<open>  |  \<close>@{verbatim "\<Delta>"}\<open>  |  \<close>@{verbatim "\<Theta>"}\<open>  |\<close> \\
+          &   & @{verbatim "\<Lambda>"}\<open>  |  \<close>@{verbatim "\<Xi>"}\<open>  |  \<close>@{verbatim "\<Pi>"}\<open>  |  \<close>@{verbatim "\<Sigma>"}\<open>  |\<close> \\
+          &   & @{verbatim "\<Upsilon>"}\<open>  |  \<close>@{verbatim "\<Phi>"}\<open>  |  \<close>@{verbatim "\<Psi>"}\<open>  |  \<close>@{verbatim "\<Omega>"} \\
   \end{supertabular}
   \end{center}
 
   A @{syntax_ref var} or @{syntax_ref typevar} describes an unknown,
   which is internally a pair of base name and index (ML type @{ML_type
   indexname}).  These components are either separated by a dot as in
-  @{text "?x.1"} or @{text "?x7.3"} or run together as in @{text
-  "?x1"}.  The latter form is possible if the base name does not end
+  \<open>?x.1\<close> or \<open>?x7.3\<close> or run together as in \<open>?x1\<close>.  The latter form is possible if the base name does not end
   with digits.  If the index is 0, it may be dropped altogether:
-  @{text "?x"} and @{text "?x0"} and @{text "?x.0"} all refer to the
-  same unknown, with basename @{text "x"} and index 0.
+  \<open>?x\<close> and \<open>?x0\<close> and \<open>?x.0\<close> all refer to the
+  same unknown, with basename \<open>x\<close> and index 0.
 
   The syntax of @{syntax_ref string} admits any characters, including
   newlines; ``@{verbatim \<open>"\<close>}'' (double-quote) and ``@{verbatim \<open>\\<close>}''
   (backslash) need to be escaped by a backslash; arbitrary
-  character codes may be specified as ``@{verbatim \<open>\\<close>}@{text ddd}'',
+  character codes may be specified as ``@{verbatim \<open>\\<close>}\<open>ddd\<close>'',
   with three decimal digits.  Alternative strings according to
   @{syntax_ref altstring} are analogous, using single back-quotes
   instead.
@@ -155,24 +154,23 @@
   do not have this limitation.
 
   A @{syntax_ref cartouche} consists of arbitrary text, with properly
-  balanced blocks of ``@{verbatim "\<open>"}~@{text "\<dots>"}~@{verbatim
+  balanced blocks of ``@{verbatim "\<open>"}~\<open>\<dots>\<close>~@{verbatim
   "\<close>"}''.  Note that the rendering of cartouche delimiters is
-  usually like this: ``@{text "\<open> \<dots> \<close>"}''.
+  usually like this: ``\<open>\<open> \<dots> \<close>\<close>''.
 
-  Source comments take the form @{verbatim "(*"}~@{text
-  "\<dots>"}~@{verbatim "*)"} and may be nested, although the user-interface
+  Source comments take the form @{verbatim "(*"}~\<open>\<dots>\<close>~@{verbatim "*)"} and may be nested, although the user interface
   might prevent this.  Note that this form indicates source comments
   only, which are stripped after lexical analysis of the input.  The
   Isar syntax also provides proper \<^emph>\<open>document comments\<close> that are
   considered as part of the text (see \secref{sec:comments}).
 
-  Common mathematical symbols such as @{text \<forall>} are represented in
+  Common mathematical symbols such as \<open>\<forall>\<close> are represented in
   Isabelle as @{verbatim \<forall>}.  There are infinitely many Isabelle
   symbols like this, although proper presentation is left to front-end
   tools such as {\LaTeX} or Isabelle/jEdit.  A list of
   predefined Isabelle symbols that work well with these tools is given
   in \appref{app:symbols}.  Note that @{verbatim "\<lambda>"} does not belong
-  to the @{text letter} category, since it is already used differently
+  to the \<open>letter\<close> category, since it is already used differently
   in the Pure term language.\<close>
 
 
@@ -226,8 +224,8 @@
 subsection \<open>Comments \label{sec:comments}\<close>
 
 text \<open>Large chunks of plain @{syntax text} are usually given @{syntax
-  verbatim}, i.e.\ enclosed in @{verbatim "{*"}~@{text "\<dots>"}~@{verbatim "*}"},
-  or as @{syntax cartouche} @{text "\<open>\<dots>\<close>"}. For convenience, any of the
+  verbatim}, i.e.\ enclosed in @{verbatim "{*"}~\<open>\<dots>\<close>~@{verbatim "*}"},
+  or as @{syntax cartouche} \<open>\<open>\<dots>\<close>\<close>. For convenience, any of the
   smaller text units conforming to @{syntax nameref} are admitted as well. A
   marginal @{syntax comment} is of the form @{verbatim "--"}~@{syntax text}.
   Any number of these may occur within Isabelle/Isar commands.
@@ -244,8 +242,8 @@
 
 text \<open>
   Classes are specified by plain names.  Sorts have a very simple
-  inner syntax, which is either a single class name @{text c} or a
-  list @{text "{c\<^sub>1, \<dots>, c\<^sub>n}"} referring to the
+  inner syntax, which is either a single class name \<open>c\<close> or a
+  list \<open>{c\<^sub>1, \<dots>, c\<^sub>n}\<close> referring to the
   intersection of these classes.  The syntax of type arities is given
   directly at the outer level.
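+
+  For example (an illustrative sketch), a sort may be the single class
+  \<open>order\<close> or the intersection \<open>{order, finite}\<close>; the empty
+  intersection \<open>{}\<close> denotes the universal sort.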
 
@@ -270,8 +268,7 @@
   liberal convention is adopted: quotes may be omitted for any type or
   term that is already atomic at the outer level.  For example, one
   may just write @{verbatim x} instead of quoted @{verbatim \<open>"x"\<close>}.
-  Note that symbolic identifiers (e.g.\ @{verbatim "++"} or @{text
-  "\<forall>"} are available as well, provided these have not been superseded
+  Note that symbolic identifiers (e.g.\ @{verbatim "++"} or \<open>\<forall>\<close>) are available as well, provided these have not been superseded
   by commands or other keywords already (such as @{verbatim "="} or
   @{verbatim "+"}).
 
@@ -285,7 +282,7 @@
   \<close>}
 
   Positional instantiations are specified as a sequence of terms, or the
-  placeholder ``@{text _}'' (underscore), which means to skip a position.
+  placeholder ``\<open>_\<close>'' (underscore), which means to skip a position.
 
   @{rail \<open>
     @{syntax_def inst}: '_' | @{syntax term}
@@ -293,8 +290,8 @@
     @{syntax_def insts}: (@{syntax inst} *)
   \<close>}
 
-  Named instantiations are specified as pairs of assignments @{text "v =
-  t"}, which refer to schematic variables in some theorem that is
+  Named instantiations are specified as pairs of assignments \<open>v =
+  t\<close>, which refer to schematic variables in some theorem that is
  instantiated. Both type and term instantiations are admitted, and
   distinguished by the usual syntax of variable names.
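+
+  For example (a sketch of typical usage), \<open>exI[of _ a]\<close> is a positional
+  instantiation that skips the first slot of the rule, while
+  \<open>exI[where x = a]\<close> refers to the schematic variable \<open>?x\<close> by name.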
 
@@ -326,7 +323,7 @@
 
 text \<open>Wherever explicit propositions (or term fragments) occur in a
  proof text, casual binding of schematic term variables may be
-  specified via patterns of the form ``@{text "(\<IS> p\<^sub>1 \<dots> p\<^sub>n)"}''.
+  specified via patterns of the form ``\<open>(\<IS> p\<^sub>1 \<dots> p\<^sub>n)\<close>''.
   This works both for @{syntax term} and @{syntax prop}.
 
   @{rail \<open>
@@ -336,8 +333,8 @@
   \<close>}
 
   \<^medskip>
-  Declarations of local variables @{text "x :: \<tau>"} and
-  logical propositions @{text "a : \<phi>"} represent different views on
+  Declarations of local variables \<open>x :: \<tau>\<close> and
+  logical propositions \<open>a : \<phi>\<close> represent different views on
   the same principle of introducing a local scope.  In practice, one
   may usually omit the typing of @{syntax vars} (due to
   type-inference), and the naming of propositions (due to implicit
@@ -352,8 +349,8 @@
 
   The treatment of multiple declarations corresponds to the
   complementary focus of @{syntax vars} versus @{syntax props}.  In
-  ``@{text "x\<^sub>1 \<dots> x\<^sub>n :: \<tau>"}'' the typing refers to all variables, while
-  in @{text "a: \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"} the naming refers to all propositions
+  ``\<open>x\<^sub>1 \<dots> x\<^sub>n :: \<tau>\<close>'' the typing refers to all variables, while
+  in \<open>a: \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n\<close> the naming refers to all propositions
   collectively.  Isar language elements that refer to @{syntax vars}
   or @{syntax props} typically admit separate typings or namings via
   another level of iteration, with explicit @{keyword_ref "and"}
@@ -369,7 +366,7 @@
 
   The category @{syntax "fixes"} is a richer variant of @{syntax vars}: it
   admits specification of mixfix syntax for the entities that are introduced
-  into the context. An optional suffix ``@{keyword "for"}~@{text "fixes"}''
+  into the context. An optional suffix ``@{keyword "for"}~\<open>fixes\<close>''
   is admitted in many situations to indicate a so-called ``eigen-context''
   of a formal element: the result will be exported and thus generalized over
   the given variables.\<close>
@@ -406,13 +403,13 @@
 
   There are three forms of theorem references:
 
-  \<^enum> named facts @{text "a"},
+  \<^enum> named facts \<open>a\<close>,
 
-  \<^enum> selections from named facts @{text "a(i)"} or @{text "a(j - k)"},
+  \<^enum> selections from named facts \<open>a(i)\<close> or \<open>a(j - k)\<close>,
 
   \<^enum> literal fact propositions using token syntax @{syntax_ref altstring}
-  @{verbatim "`"}@{text "\<phi>"}@{verbatim "`"} or @{syntax_ref cartouche}
-  @{text "\<open>\<phi>\<close>"} (see also method @{method_ref fact}).
+  @{verbatim "`"}\<open>\<phi>\<close>@{verbatim "`"} or @{syntax_ref cartouche}
+  \<open>\<open>\<phi>\<close>\<close> (see also method @{method_ref fact}).
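+
+  For example (illustrative): \<open>conjI\<close> is a plain named fact, \<open>assms(2)\<close>
+  selects the second element of the fact list \<open>assms\<close>, and a cartouche
+  such as \<open>\<open>x = x\<close>\<close> refers to a fact by literally stating its
+  proposition.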
 
 
   Any kind of theorem specification may include lists of attributes
@@ -421,8 +418,7 @@
   not stored within the theorem database of the theory or proof
   context, but any given attributes are applied nonetheless.
 
-  An extra pair of brackets around attributes (like ``@{text
-  "[[simproc a]]"}'') abbreviates a theorem reference involving an
+  An extra pair of brackets around attributes (like ``\<open>[[simproc a]]\<close>'') abbreviates a theorem reference involving an
   internal dummy fact, which will be ignored later on.  So only the
   effect of the attribute on the background context will persist.
   This form of in-place declarations is particularly useful with
@@ -454,17 +450,17 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "print_theory"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "print_definitions"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "print_methods"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "print_attributes"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "print_theorems"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "find_theorems"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "find_consts"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "thm_deps"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "unused_thms"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "print_facts"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "print_term_bindings"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
+    @{command_def "print_theory"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "print_definitions"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "print_methods"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "print_attributes"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "print_theorems"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "find_theorems"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "find_consts"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "thm_deps"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "unused_thms"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "print_facts"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "print_term_bindings"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -495,73 +491,72 @@
   of rules declared for simplifications.
 
   \<^descr> @{command "print_theory"} prints the main logical content of the
-  background theory; the ``@{text "!"}'' option indicates extra verbosity.
+  background theory; the ``\<open>!\<close>'' option indicates extra verbosity.
 
   \<^descr> @{command "print_definitions"} prints dependencies of definitional
   specifications within the background theory, which may be constants
   \secref{sec:consts} or types (\secref{sec:types-pure},
-  \secref{sec:hol-typedef}); the ``@{text "!"}'' option indicates extra
+  \secref{sec:hol-typedef}); the ``\<open>!\<close>'' option indicates extra
   verbosity.
 
   \<^descr> @{command "print_methods"} prints all proof methods available in the
-  current theory context; the ``@{text "!"}'' option indicates extra
+  current theory context; the ``\<open>!\<close>'' option indicates extra
   verbosity.
 
   \<^descr> @{command "print_attributes"} prints all attributes available in the
-  current theory context; the ``@{text "!"}'' option indicates extra
+  current theory context; the ``\<open>!\<close>'' option indicates extra
   verbosity.
 
   \<^descr> @{command "print_theorems"} prints theorems of the background theory
-  resulting from the last command; the ``@{text "!"}'' option indicates
+  resulting from the last command; the ``\<open>!\<close>'' option indicates
   extra verbosity.
 
   \<^descr> @{command "print_facts"} prints all local facts of the current
-  context, both named and unnamed ones; the ``@{text "!"}'' option indicates
+  context, both named and unnamed ones; the ``\<open>!\<close>'' option indicates
   extra verbosity.
 
   \<^descr> @{command "print_term_bindings"} prints all term bindings that
   are present in the context.
 
-  \<^descr> @{command "find_theorems"}~@{text criteria} retrieves facts
+  \<^descr> @{command "find_theorems"}~\<open>criteria\<close> retrieves facts
   from the theory or proof context matching all of the given search
-  criteria.  The criterion @{text "name: p"} selects all theorems
-  whose fully qualified name matches pattern @{text p}, which may
-  contain ``@{text "*"}'' wildcards.  The criteria @{text intro},
-  @{text elim}, and @{text dest} select theorems that match the
+  criteria.  The criterion \<open>name: p\<close> selects all theorems
+  whose fully qualified name matches pattern \<open>p\<close>, which may
+  contain ``\<open>*\<close>'' wildcards.  The criteria \<open>intro\<close>,
+  \<open>elim\<close>, and \<open>dest\<close> select theorems that match the
   current goal as introduction, elimination or destruction rules,
-  respectively.  The criterion @{text "solves"} returns all rules
+  respectively.  The criterion \<open>solves\<close> returns all rules
   that would directly solve the current goal.  The criterion
-  @{text "simp: t"} selects all rewrite rules whose left-hand side
-  matches the given term.  The criterion term @{text t} selects all
-  theorems that contain the pattern @{text t} -- as usual, patterns
-  may contain occurrences of the dummy ``@{text _}'', schematic
+  \<open>simp: t\<close> selects all rewrite rules whose left-hand side
+  matches the given term.  The criterion term \<open>t\<close> selects all
+  theorems that contain the pattern \<open>t\<close> -- as usual, patterns
+  may contain occurrences of the dummy ``\<open>_\<close>'', schematic
   variables, and type constraints.
 
-  Criteria can be preceded by ``@{text "-"}'' to select theorems that
+  Criteria can be preceded by ``\<open>-\<close>'' to select theorems that
   do \<^emph>\<open>not\<close> match. Note that giving the empty list of criteria
   yields \<^emph>\<open>all\<close> currently known facts.  An optional limit for the
   number of printed facts may be given; the default is 40.  By
   default, duplicates are removed from the search result. Use
-  @{text with_dups} to display duplicates.
+  \<open>with_dups\<close> to display duplicates.
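+
+  For example (an illustrative query, assuming the usual HOL library
+  context), ``@{command "find_theorems"}~\<open>"_ + _ = _ + _" name: comm\<close>''
+  retrieves commutation-style lemmas about \<open>+\<close> whose name contains
+  \<open>comm\<close>.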
 
-  \<^descr> @{command "find_consts"}~@{text criteria} prints all constants
-  whose type meets all of the given criteria. The criterion @{text
-  "strict: ty"} is met by any type that matches the type pattern
-  @{text ty}.  Patterns may contain both the dummy type ``@{text _}''
-  and sort constraints. The criterion @{text ty} is similar, but it
-  also matches against subtypes. The criterion @{text "name: p"} and
-  the prefix ``@{text "-"}'' function as described for @{command
+  \<^descr> @{command "find_consts"}~\<open>criteria\<close> prints all constants
+  whose type meets all of the given criteria. The criterion \<open>strict: ty\<close> is met by any type that matches the type pattern
+  \<open>ty\<close>.  Patterns may contain both the dummy type ``\<open>_\<close>''
+  and sort constraints. The criterion \<open>ty\<close> is similar, but it
+  also matches against subtypes. The criterion \<open>name: p\<close> and
+  the prefix ``\<open>-\<close>'' function as described for @{command
   "find_theorems"}.
 
-  \<^descr> @{command "thm_deps"}~@{text "a\<^sub>1 \<dots> a\<^sub>n"}
+  \<^descr> @{command "thm_deps"}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close>
   visualizes dependencies of facts, using Isabelle's graph browser
   tool (see also @{cite "isabelle-system"}).
 
-  \<^descr> @{command "unused_thms"}~@{text "A\<^sub>1 \<dots> A\<^sub>m - B\<^sub>1 \<dots> B\<^sub>n"}
-  displays all theorems that are proved in theories @{text "B\<^sub>1 \<dots> B\<^sub>n"}
-  or their parents but not in @{text "A\<^sub>1 \<dots> A\<^sub>m"} or their parents and
+  \<^descr> @{command "unused_thms"}~\<open>A\<^sub>1 \<dots> A\<^sub>m - B\<^sub>1 \<dots> B\<^sub>n\<close>
+  displays all theorems that are proved in theories \<open>B\<^sub>1 \<dots> B\<^sub>n\<close>
+  or their parents but not in \<open>A\<^sub>1 \<dots> A\<^sub>m\<close> or their parents and
   that are never used.
-  If @{text n} is @{text 0}, the end of the range of theories
+  If \<open>n\<close> is \<open>0\<close>, the end of the range of theories
   defaults to the current theory. If no range is specified,
   only the unused theorems in the current theory are displayed.
 \<close>
--- a/src/Doc/Isar_Ref/Preface.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Preface.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -64,8 +64,7 @@
   Isar commands may be either \<^emph>\<open>proper\<close> document
   constructors, or \<^emph>\<open>improper commands\<close>.  Some proof methods and
   attributes introduced later are classified as improper as well.
-  Improper Isar language elements, which are marked by ``@{text
-  "\<^sup>*"}'' in the subsequent chapters; they are often helpful
+  Improper Isar language elements are marked by ``\<open>\<^sup>*\<close>'' in the subsequent chapters; they are often helpful
   when developing proof documents, but their use is discouraged for
   the final human-readable outcome.  Typical examples are diagnostic
   commands that print terms or theorems according to the current
--- a/src/Doc/Isar_Ref/Proof.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Proof.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -11,17 +11,16 @@
   facts, and open goals.  Isar/VM transitions are typed according to
   the following three different modes of operation:
 
-  \<^descr> @{text "proof(prove)"} means that a new goal has just been
+  \<^descr> \<open>proof(prove)\<close> means that a new goal has just been
   stated that is now to be \<^emph>\<open>proven\<close>; the next command may refine
   it by some proof method, and enter a sub-proof to establish the
   actual result.
 
-  \<^descr> @{text "proof(state)"} is like a nested theory mode: the
+  \<^descr> \<open>proof(state)\<close> is like a nested theory mode: the
   context may be augmented by \<^emph>\<open>stating\<close> additional assumptions,
   intermediate results etc.
 
-  \<^descr> @{text "proof(chain)"} is intermediate between @{text
-  "proof(state)"} and @{text "proof(prove)"}: existing facts (i.e.\ the
+  \<^descr> \<open>proof(chain)\<close> is intermediate between \<open>proof(state)\<close> and \<open>proof(prove)\<close>: existing facts (i.e.\ the
   contents of the special @{fact_ref this} register) have been just picked
   up in order to be used when refining the goal claimed next.
 
@@ -46,7 +45,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "notepad"} & : & @{text "local_theory \<rightarrow> proof(state)"} \\
+    @{command_def "notepad"} & : & \<open>local_theory \<rightarrow> proof(state)\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -65,9 +64,9 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "next"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "{"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "}"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
+    @{command_def "next"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "{"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "}"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
   \end{matharray}
 
   While Isar is inherently block-structured, opening and closing
@@ -103,7 +102,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "oops"} & : & @{text "proof \<rightarrow> local_theory | theory"} \\
+    @{command_def "oops"} & : & \<open>proof \<rightarrow> local_theory | theory\<close> \\
   \end{matharray}
 
   The @{command "oops"} command discontinues the current proof
@@ -121,7 +120,7 @@
   preparation tools of Isabelle described in \chref{ch:document-prep}.
   Thus partial or even wrong proof attempts can be discussed in a
   logically sound manner.  Note that the Isabelle {\LaTeX} macros can
-  be easily adapted to print something like ``@{text "\<dots>"}'' instead of
+  be easily adapted to print something like ``\<open>\<dots>\<close>'' instead of
   the keyword ``@{command "oops"}''.
 \<close>
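+
+text \<open>
+  A hypothetical sketch: the unproven claim below is discarded together
+  with its abandoned proof.
+\<close>
+
+lemma "x = y"
+  oops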
 
@@ -132,28 +131,28 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "fix"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "assume"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "presume"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "def"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
+    @{command_def "fix"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "assume"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "presume"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "def"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
   \end{matharray}
 
   The logical proof context consists of fixed variables and
   assumptions.  The former closely correspond to Skolem constants, or
   meta-level universal quantification as provided by the Isabelle/Pure
   logical framework.  Introducing some \<^emph>\<open>arbitrary, but fixed\<close>
-  variable via ``@{command "fix"}~@{text x}'' results in a local value
+  variable via ``@{command "fix"}~\<open>x\<close>'' results in a local value
   that may be used in the subsequent proof as any other variable or
-  constant.  Furthermore, any result @{text "\<turnstile> \<phi>[x]"} exported from
-  the context will be universally closed wrt.\ @{text x} at the
-  outermost level: @{text "\<turnstile> \<And>x. \<phi>[x]"} (this is expressed in normal
+  constant.  Furthermore, any result \<open>\<turnstile> \<phi>[x]\<close> exported from
+  the context will be universally closed wrt.\ \<open>x\<close> at the
+  outermost level: \<open>\<turnstile> \<And>x. \<phi>[x]\<close> (this is expressed in normal
   form using Isabelle's meta-variables).
 
-  Similarly, introducing some assumption @{text \<chi>} has two effects.
+  Similarly, introducing some assumption \<open>\<chi>\<close> has two effects.
   On the one hand, a local theorem is created that may be used as a
   fact in subsequent proof steps.  On the other hand, any result
-  @{text "\<chi> \<turnstile> \<phi>"} exported from the context becomes conditional wrt.\
-  the assumption: @{text "\<turnstile> \<chi> \<Longrightarrow> \<phi>"}.  Thus, solving an enclosing goal
+  \<open>\<chi> \<turnstile> \<phi>\<close> exported from the context becomes conditional wrt.\
+  the assumption: \<open>\<turnstile> \<chi> \<Longrightarrow> \<phi>\<close>.  Thus, solving an enclosing goal
   using such a result would basically introduce a new subgoal stemming
   from the assumption.  How this situation is handled depends on the
   version of the assumption command used: while @{command "assume"}
@@ -161,12 +160,12 @@
   the goal, @{command "presume"} leaves the subgoal unchanged in order
   to be proved later by the user.
 
-  Local definitions, introduced by ``@{command "def"}~@{text "x \<equiv>
-  t"}'', are achieved by combining ``@{command "fix"}~@{text x}'' with
+  Local definitions, introduced by ``@{command "def"}~\<open>x \<equiv>
+  t\<close>'', are achieved by combining ``@{command "fix"}~\<open>x\<close>'' with
   another version of assumption that causes any hypothetical equation
-  @{text "x \<equiv> t"} to be eliminated by the reflexivity rule.  Thus,
-  exporting some result @{text "x \<equiv> t \<turnstile> \<phi>[x]"} yields @{text "\<turnstile>
-  \<phi>[t]"}.
+  \<open>x \<equiv> t\<close> to be eliminated by the reflexivity rule.  Thus,
+  exporting some result \<open>x \<equiv> t \<turnstile> \<phi>[x]\<close> yields \<open>\<turnstile>
+  \<phi>[t]\<close>.
 
   @{rail \<open>
     @@{command fix} @{syntax "fixes"}
@@ -179,28 +178,26 @@
       @{syntax name} ('==' | '\<equiv>') @{syntax term} @{syntax term_pat}?
   \<close>}
 
-  \<^descr> @{command "fix"}~@{text x} introduces a local variable @{text
-  x} that is \<^emph>\<open>arbitrary, but fixed\<close>.
+  \<^descr> @{command "fix"}~\<open>x\<close> introduces a local variable \<open>x\<close> that is \<^emph>\<open>arbitrary, but fixed\<close>.
 
-  \<^descr> @{command "assume"}~@{text "a: \<phi>"} and @{command
-  "presume"}~@{text "a: \<phi>"} introduce a local fact @{text "\<phi> \<turnstile> \<phi>"} by
+  \<^descr> @{command "assume"}~\<open>a: \<phi>\<close> and @{command
+  "presume"}~\<open>a: \<phi>\<close> introduce a local fact \<open>\<phi> \<turnstile> \<phi>\<close> by
   assumption.  Subsequent results applied to an enclosing goal (e.g.\
   by @{command_ref "show"}) are handled as follows: @{command
   "assume"} expects to be able to unify with existing premises in the
-  goal, while @{command "presume"} leaves @{text \<phi>} as new subgoals.
+  goal, while @{command "presume"} leaves \<open>\<phi>\<close> as new subgoals.
 
   Several lists of assumptions may be given (separated by
   @{keyword_ref "and"}); the resulting list of current facts consists
   of all of these concatenated.
 
-  \<^descr> @{command "def"}~@{text "x \<equiv> t"} introduces a local
+  \<^descr> @{command "def"}~\<open>x \<equiv> t\<close> introduces a local
   (non-polymorphic) definition.  In results exported from the context,
-  @{text x} is replaced by @{text t}.  Basically, ``@{command
-  "def"}~@{text "x \<equiv> t"}'' abbreviates ``@{command "fix"}~@{text
-  x}~@{command "assume"}~@{text "x \<equiv> t"}'', with the resulting
+  \<open>x\<close> is replaced by \<open>t\<close>.  Basically, ``@{command
+  "def"}~\<open>x \<equiv> t\<close>'' abbreviates ``@{command "fix"}~\<open>x\<close>~@{command "assume"}~\<open>x \<equiv> t\<close>'', with the resulting
   hypothetical equation solved by reflexivity.
 
-  The default name for the definitional equation is @{text x_def}.
+  The default name for the definitional equation is \<open>x_def\<close>.
   Several simultaneous definitions may be given at the same time.
 \<close>
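+
+text \<open>
+  A minimal sketch of these context elements in action (assuming a HOL
+  context):
+\<close>
+
+notepad
+begin
+  fix x :: nat
+  assume a: "x = 0"
+  def y \<equiv> "x + x"
+  from a have "y = 0" unfolding y_def by simp
+end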
 
@@ -209,16 +206,16 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "let"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
+    @{command_def "let"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
     @{keyword_def "is"} & : & syntax \\
   \end{matharray}
 
   Abbreviations may be either bound by explicit @{command
-  "let"}~@{text "p \<equiv> t"} statements, or by annotating assumptions or
-  goal statements with a list of patterns ``@{text "(\<IS> p\<^sub>1 \<dots>
-  p\<^sub>n)"}''.  In both cases, higher-order matching is invoked to
+  "let"}~\<open>p \<equiv> t\<close> statements, or by annotating assumptions or
+  goal statements with a list of patterns ``\<open>(\<IS> p\<^sub>1 \<dots>
+  p\<^sub>n)\<close>''.  In both cases, higher-order matching is invoked to
   bind extra-logical term variables, which may be either named
-  schematic variables of the form @{text ?x}, or nameless dummies
+  schematic variables of the form \<open>?x\<close>, or nameless dummies
   ``@{variable _}'' (underscore). Note that in the @{command "let"}
   form the patterns occur on the left-hand side, while the @{keyword
   "is"} patterns are in postfix position.
@@ -247,12 +244,12 @@
   The syntax of @{keyword "is"} patterns follows @{syntax term_pat} or
   @{syntax prop_pat} (see \secref{sec:term-decls}).
 
-  \<^descr> @{command "let"}~@{text "p\<^sub>1 = t\<^sub>1 \<AND> \<dots> p\<^sub>n = t\<^sub>n"} binds any
-  text variables in patterns @{text "p\<^sub>1, \<dots>, p\<^sub>n"} by simultaneous
-  higher-order matching against terms @{text "t\<^sub>1, \<dots>, t\<^sub>n"}.
+  \<^descr> @{command "let"}~\<open>p\<^sub>1 = t\<^sub>1 \<AND> \<dots> p\<^sub>n = t\<^sub>n\<close> binds any
+  text variables in patterns \<open>p\<^sub>1, \<dots>, p\<^sub>n\<close> by simultaneous
+  higher-order matching against terms \<open>t\<^sub>1, \<dots>, t\<^sub>n\<close>.
 
-  \<^descr> @{text "(\<IS> p\<^sub>1 \<dots> p\<^sub>n)"} resembles @{command "let"}, but
-  matches @{text "p\<^sub>1, \<dots>, p\<^sub>n"} against the preceding statement.  Also
+  \<^descr> \<open>(\<IS> p\<^sub>1 \<dots> p\<^sub>n)\<close> resembles @{command "let"}, but
+  matches \<open>p\<^sub>1, \<dots>, p\<^sub>n\<close> against the preceding statement.  Also
   note that @{keyword "is"} is not a separate command, but part of
   others (such as @{command "assume"}, @{command "have"} etc.).
 
@@ -263,8 +260,8 @@
   abstracted over any meta-level parameters (if present).  Likewise,
   @{variable_ref this} is bound for fact statements resulting from
   assumptions or finished goals.  In case @{variable this} refers to
-  an object-logic statement that is an application @{text "f t"}, then
-  @{text t} is bound to the special text variable ``@{variable "\<dots>"}''
+  an object-logic statement that is an application \<open>f t\<close>, then
+  \<open>t\<close> is bound to the special text variable ``@{variable "\<dots>"}''
   (three dots).  The canonical application of this convenience is
   calculational proofs (see \secref{sec:calculation}).
 \<close>
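+
+text \<open>
+  A small sketch of term abbreviations (assuming a HOL context):
+\<close>
+
+notepad
+begin
+  fix n :: nat
+  let ?t = "\<lambda>m. m + n"
+  have "?t 0 = n" by simp
+  have "n + 0 = n" (is "?lhs = _") by simp
+end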
@@ -274,12 +271,12 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "note"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "then"} & : & @{text "proof(state) \<rightarrow> proof(chain)"} \\
-    @{command_def "from"} & : & @{text "proof(state) \<rightarrow> proof(chain)"} \\
-    @{command_def "with"} & : & @{text "proof(state) \<rightarrow> proof(chain)"} \\
-    @{command_def "using"} & : & @{text "proof(prove) \<rightarrow> proof(prove)"} \\
-    @{command_def "unfolding"} & : & @{text "proof(prove) \<rightarrow> proof(prove)"} \\
+    @{command_def "note"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "then"} & : & \<open>proof(state) \<rightarrow> proof(chain)\<close> \\
+    @{command_def "from"} & : & \<open>proof(state) \<rightarrow> proof(chain)\<close> \\
+    @{command_def "with"} & : & \<open>proof(state) \<rightarrow> proof(chain)\<close> \\
+    @{command_def "using"} & : & \<open>proof(prove) \<rightarrow> proof(prove)\<close> \\
+    @{command_def "unfolding"} & : & \<open>proof(prove) \<rightarrow> proof(prove)\<close> \\
   \end{matharray}
 
   New facts are established either by assumption or proof of local
@@ -300,8 +297,8 @@
       (@{syntax thmrefs} + @'and')
   \<close>}
 
-  \<^descr> @{command "note"}~@{text "a = b\<^sub>1 \<dots> b\<^sub>n"} recalls existing facts
-  @{text "b\<^sub>1, \<dots>, b\<^sub>n"}, binding the result as @{text a}.  Note that
+  \<^descr> @{command "note"}~\<open>a = b\<^sub>1 \<dots> b\<^sub>n\<close> recalls existing facts
+  \<open>b\<^sub>1, \<dots>, b\<^sub>n\<close>, binding the result as \<open>a\<close>.  Note that
   attributes may be involved as well, both on the left and right hand
   sides.
 
@@ -315,34 +312,34 @@
   facts into the goal state before operation.  This provides a simple
   scheme to control relevance of facts in automated proof search.
 
-  \<^descr> @{command "from"}~@{text b} abbreviates ``@{command
-  "note"}~@{text b}~@{command "then"}''; thus @{command "then"} is
-  equivalent to ``@{command "from"}~@{text this}''.
+  \<^descr> @{command "from"}~\<open>b\<close> abbreviates ``@{command
+  "note"}~\<open>b\<close>~@{command "then"}''; thus @{command "then"} is
+  equivalent to ``@{command "from"}~\<open>this\<close>''.
 
-  \<^descr> @{command "with"}~@{text "b\<^sub>1 \<dots> b\<^sub>n"} abbreviates ``@{command
-  "from"}~@{text "b\<^sub>1 \<dots> b\<^sub>n \<AND> this"}''; thus the forward chaining
+  \<^descr> @{command "with"}~\<open>b\<^sub>1 \<dots> b\<^sub>n\<close> abbreviates ``@{command
+  "from"}~\<open>b\<^sub>1 \<dots> b\<^sub>n \<AND> this\<close>''; thus the forward chaining
   is from earlier facts together with the current ones.
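+
+  For example (a minimal sketch): after ``@{command "assume"}~\<open>a: A\<close>~@{keyword
+  "and"}~\<open>b: B\<close>'', the step ``@{command "from"}~\<open>a b\<close>~@{command
+  "have"}~\<open>A \<and> B\<close>~@{command ".."}'' forward-chains both facts into the
+  standard introduction step.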
 
-  \<^descr> @{command "using"}~@{text "b\<^sub>1 \<dots> b\<^sub>n"} augments the facts being
+  \<^descr> @{command "using"}~\<open>b\<^sub>1 \<dots> b\<^sub>n\<close> augments the facts being
   currently indicated for use by a subsequent refinement step (such as
   @{command_ref "apply"} or @{command_ref "proof"}).
 
-  \<^descr> @{command "unfolding"}~@{text "b\<^sub>1 \<dots> b\<^sub>n"} is structurally
+  \<^descr> @{command "unfolding"}~\<open>b\<^sub>1 \<dots> b\<^sub>n\<close> is structurally
   similar to @{command "using"}, but unfolds definitional equations
-  @{text "b\<^sub>1, \<dots> b\<^sub>n"} throughout the goal state and facts.
+  \<open>b\<^sub>1, \<dots> b\<^sub>n\<close> throughout the goal state and facts.
 
 
   Forward chaining with an empty list of theorems is the same as not
-  chaining at all.  Thus ``@{command "from"}~@{text nothing}'' has no
-  effect apart from entering @{text "prove(chain)"} mode, since
+  chaining at all.  Thus ``@{command "from"}~\<open>nothing\<close>'' has no
+  effect apart from entering \<open>proof(chain)\<close> mode, since
   @{fact_ref nothing} is bound to the empty list of theorems.
 
   Basic proof methods (such as @{method_ref (Pure) rule}) expect multiple
   facts to be given in their proper order, corresponding to a prefix
   of the premises of the rule involved.  Note that positions may be
-  easily skipped using something like @{command "from"}~@{text "_
-  \<AND> a \<AND> b"}, for example.  This involves the trivial rule
-  @{text "PROP \<psi> \<Longrightarrow> PROP \<psi>"}, which is bound in Isabelle/Pure as
+  easily skipped using something like @{command "from"}~\<open>_
+  \<AND> a \<AND> b\<close>, for example.  This involves the trivial rule
+  \<open>PROP \<psi> \<Longrightarrow> PROP \<psi>\<close>, which is bound in Isabelle/Pure as
   ``@{fact_ref "_"}'' (underscore).
 
   Automated methods (such as @{method simp} or @{method auto}) just
@@ -356,16 +353,16 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "lemma"} & : & @{text "local_theory \<rightarrow> proof(prove)"} \\
-    @{command_def "theorem"} & : & @{text "local_theory \<rightarrow> proof(prove)"} \\
-    @{command_def "corollary"} & : & @{text "local_theory \<rightarrow> proof(prove)"} \\
-    @{command_def "proposition"} & : & @{text "local_theory \<rightarrow> proof(prove)"} \\
-    @{command_def "schematic_goal"} & : & @{text "local_theory \<rightarrow> proof(prove)"} \\
-    @{command_def "have"} & : & @{text "proof(state) | proof(chain) \<rightarrow> proof(prove)"} \\
-    @{command_def "show"} & : & @{text "proof(state) | proof(chain) \<rightarrow> proof(prove)"} \\
-    @{command_def "hence"} & : & @{text "proof(state) \<rightarrow> proof(prove)"} \\
-    @{command_def "thus"} & : & @{text "proof(state) \<rightarrow> proof(prove)"} \\
-    @{command_def "print_statement"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
+    @{command_def "lemma"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close> \\
+    @{command_def "theorem"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close> \\
+    @{command_def "corollary"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close> \\
+    @{command_def "proposition"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close> \\
+    @{command_def "schematic_goal"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close> \\
+    @{command_def "have"} & : & \<open>proof(state) | proof(chain) \<rightarrow> proof(prove)\<close> \\
+    @{command_def "show"} & : & \<open>proof(state) | proof(chain) \<rightarrow> proof(prove)\<close> \\
+    @{command_def "hence"} & : & \<open>proof(state) \<rightarrow> proof(prove)\<close> \\
+    @{command_def "thus"} & : & \<open>proof(state) \<rightarrow> proof(prove)\<close> \\
+    @{command_def "print_statement"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
   \end{matharray}
 
   From a theory context, proof mode is entered by an initial goal command
@@ -375,7 +372,7 @@
 
   Goals may consist of multiple statements, resulting in a list of
   facts eventually.  A pending multi-goal is internally represented as
-  a meta-level conjunction (@{text "&&&"}), which is usually
+  a meta-level conjunction (\<open>&&&\<close>), which is usually
   split into the corresponding number of sub-goals prior to an initial
   method application, via @{command_ref "proof"}
   (\secref{sec:proof-steps}) or @{command_ref "apply"}
@@ -422,9 +419,9 @@
       (@{syntax thmdecl}? (@{syntax prop}+) + @'and')
   \<close>}
 
-  \<^descr> @{command "lemma"}~@{text "a: \<phi>"} enters proof mode with
-  @{text \<phi>} as main goal, eventually resulting in some fact @{text "\<turnstile>
-  \<phi>"} to be put back into the target context.  An additional @{syntax
+  \<^descr> @{command "lemma"}~\<open>a: \<phi>\<close> enters proof mode with
+  \<open>\<phi>\<close> as main goal, eventually resulting in some fact \<open>\<turnstile>
+  \<phi>\<close> to be put back into the target context.  An additional @{syntax
   context} specification may build up an initial proof context for the
   subsequent claim; this includes local definitions and syntax as
   well, see also @{syntax "includes"} in \secref{sec:bundle} and
@@ -444,15 +441,15 @@
   proofs is lost, which also impacts performance, because proof
   checking is forced into sequential mode.
 
-  \<^descr> @{command "have"}~@{text "a: \<phi>"} claims a local goal,
+  \<^descr> @{command "have"}~\<open>a: \<phi>\<close> claims a local goal,
   eventually resulting in a fact within the current logical context.
   This operation is completely independent of any pending sub-goals of
   an enclosing goal statement, so @{command "have"} may be freely
   used for experimental exploration of potential results within a
   proof body.
 
-  \<^descr> @{command "show"}~@{text "a: \<phi>"} is like @{command
-  "have"}~@{text "a: \<phi>"} plus a second stage to refine some pending
+  \<^descr> @{command "show"}~\<open>a: \<phi>\<close> is like @{command
+  "have"}~\<open>a: \<phi>\<close> plus a second stage to refine some pending
   sub-goal for each one of the finished results, after having been
   exported into the corresponding context (at the head of the
   sub-proof of this @{command "show"} command).
@@ -467,13 +464,13 @@
   \<^descr> @{command "hence"} abbreviates ``@{command "then"}~@{command
   "have"}'', i.e.\ claims a local goal to be proven by forward
   chaining the current facts.  Note that @{command "hence"} is also
-  equivalent to ``@{command "from"}~@{text this}~@{command "have"}''.
+  equivalent to ``@{command "from"}~\<open>this\<close>~@{command "have"}''.
 
   \<^descr> @{command "thus"} abbreviates ``@{command "then"}~@{command
   "show"}''.  Note that @{command "thus"} is also equivalent to
-  ``@{command "from"}~@{text this}~@{command "show"}''.
+  ``@{command "from"}~\<open>this\<close>~@{command "show"}''.
 
-  \<^descr> @{command "print_statement"}~@{text a} prints facts from the
+  \<^descr> @{command "print_statement"}~\<open>a\<close> prints facts from the
   current theory or proof context in long statement form, according to
   the syntax for @{command "lemma"} given above.
 
@@ -493,19 +490,19 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "also"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "finally"} & : & @{text "proof(state) \<rightarrow> proof(chain)"} \\
-    @{command_def "moreover"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "ultimately"} & : & @{text "proof(state) \<rightarrow> proof(chain)"} \\
-    @{command_def "print_trans_rules"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{attribute trans} & : & @{text attribute} \\
-    @{attribute sym} & : & @{text attribute} \\
-    @{attribute symmetric} & : & @{text attribute} \\
+    @{command_def "also"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "finally"} & : & \<open>proof(state) \<rightarrow> proof(chain)\<close> \\
+    @{command_def "moreover"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "ultimately"} & : & \<open>proof(state) \<rightarrow> proof(chain)\<close> \\
+    @{command_def "print_trans_rules"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{attribute trans} & : & \<open>attribute\<close> \\
+    @{attribute sym} & : & \<open>attribute\<close> \\
+    @{attribute symmetric} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   Calculational proof is forward reasoning with implicit application
-  of transitivity rules (such those of @{text "="}, @{text "\<le>"},
-  @{text "<"}).  Isabelle/Isar maintains an auxiliary fact register
+  of transitivity rules (such as those of \<open>=\<close>, \<open>\<le>\<close>,
+  \<open><\<close>).  Isabelle/Isar maintains an auxiliary fact register
   @{fact_ref calculation} for accumulating results obtained by
   transitivity composed with the current result.  Command @{command
   "also"} updates @{fact calculation} involving @{fact this}, while
@@ -519,7 +516,7 @@
   but only collect further results in @{fact calculation} without
   applying any rules yet.
 
-  Also note that the implicit term abbreviation ``@{text "\<dots>"}'' has
+  Also note that the implicit term abbreviation ``\<open>\<dots>\<close>'' has
   its canonical application with calculational proofs.  It refers to
   the argument of the preceding statement. (The argument of a curried
   infix expression happens to be its right-hand side.)
@@ -537,11 +534,11 @@
   handling of block-structure.}
 
   \begin{matharray}{rcl}
-    @{command "also"}@{text "\<^sub>0"} & \equiv & @{command "note"}~@{text "calculation = this"} \\
-    @{command "also"}@{text "\<^sub>n+1"} & \equiv & @{command "note"}~@{text "calculation = trans [OF calculation this]"} \\[0.5ex]
-    @{command "finally"} & \equiv & @{command "also"}~@{command "from"}~@{text calculation} \\[0.5ex]
-    @{command "moreover"} & \equiv & @{command "note"}~@{text "calculation = calculation this"} \\
-    @{command "ultimately"} & \equiv & @{command "moreover"}~@{command "from"}~@{text calculation} \\
+    @{command "also"}\<open>\<^sub>0\<close> & \equiv & @{command "note"}~\<open>calculation = this\<close> \\
+    @{command "also"}\<open>\<^sub>n+1\<close> & \equiv & @{command "note"}~\<open>calculation = trans [OF calculation this]\<close> \\[0.5ex]
+    @{command "finally"} & \equiv & @{command "also"}~@{command "from"}~\<open>calculation\<close> \\[0.5ex]
+    @{command "moreover"} & \equiv & @{command "note"}~\<open>calculation = calculation this\<close> \\
+    @{command "ultimately"} & \equiv & @{command "moreover"}~@{command "from"}~\<open>calculation\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -550,7 +547,7 @@
     @@{attribute trans} (() | 'add' | 'del')
   \<close>}
 
-  \<^descr> @{command "also"}~@{text "(a\<^sub>1 \<dots> a\<^sub>n)"} maintains the auxiliary
+  \<^descr> @{command "also"}~\<open>(a\<^sub>1 \<dots> a\<^sub>n)\<close> maintains the auxiliary
   @{fact calculation} register as follows.  The first occurrence of
   @{command "also"} in some calculational thread initializes @{fact
   calculation} by @{fact this}. Any subsequent @{command "also"} on
@@ -560,15 +557,15 @@
   current context, unless alternative rules are given as explicit
   arguments.
 
-  \<^descr> @{command "finally"}~@{text "(a\<^sub>1 \<dots> a\<^sub>n)"} maintaining @{fact
+  \<^descr> @{command "finally"}~\<open>(a\<^sub>1 \<dots> a\<^sub>n)\<close> maintains @{fact
   calculation} in the same way as @{command "also"}, and concludes the
   current calculational thread.  The final result is exhibited as fact
   for forward chaining towards the next goal. Basically, @{command
   "finally"} just abbreviates @{command "also"}~@{command
   "from"}~@{fact calculation}.  Typical idioms for concluding
   calculational proofs are ``@{command "finally"}~@{command
-  "show"}~@{text ?thesis}~@{command "."}'' and ``@{command
-  "finally"}~@{command "have"}~@{text \<phi>}~@{command "."}''.
+  "show"}~\<open>?thesis\<close>~@{command "."}'' and ``@{command
+  "finally"}~@{command "have"}~\<open>\<phi>\<close>~@{command "."}''.
 
   \<^descr> @{command "moreover"} and @{command "ultimately"} are
   analogous to @{command "also"} and @{command "finally"}, but collect
@@ -583,17 +580,16 @@
   \<^descr> @{attribute trans} declares theorems as transitivity rules.
 
   \<^descr> @{attribute sym} declares symmetry rules, as well as
-  @{attribute "Pure.elim"}@{text "?"} rules.
+  @{attribute "Pure.elim"}\<open>?\<close> rules.
 
   \<^descr> @{attribute symmetric} resolves a theorem with some rule
   declared as @{attribute sym} in the current context.  For example,
-  ``@{command "assume"}~@{text "[symmetric]: x = y"}'' produces a
+  ``@{command "assume"}~\<open>[symmetric]: x = y\<close>'' produces a
   swapped fact derived from that assumption.
 
   In structured proof texts it is often more appropriate to use an
   explicit single-step elimination proof, such as ``@{command
-  "assume"}~@{text "x = y"}~@{command "then"}~@{command "have"}~@{text
-  "y = x"}~@{command ".."}''.
+  "assume"}~\<open>x = y\<close>~@{command "then"}~@{command "have"}~\<open>y = x\<close>~@{command ".."}''.
 \<close>
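+
+text \<open>
+  A small calculational sketch (assuming HOL arithmetic and the standard
+  transitivity rules):
+\<close>
+
+notepad
+begin
+  fix a b c :: nat
+  assume ab: "a \<le> b" and bc: "b < c"
+  have "a \<le> b" by (fact ab)
+  also have "\<dots> < c" by (fact bc)
+  finally have "a < c" .
+end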
 
 
@@ -605,8 +601,8 @@
   methods via ``@{verbatim ","}'' (sequential composition), ``@{verbatim
   ";"}'' (structural composition), ``@{verbatim "|"}'' (alternative
   choices), ``@{verbatim "?"}'' (try), ``@{verbatim "+"}'' (repeat at least
-  once), ``@{verbatim "["}@{text n}@{verbatim "]"}'' (restriction to first
-  @{text n} subgoals). In practice, proof methods are usually just a comma
+  once), ``@{verbatim "["}\<open>n\<close>@{verbatim "]"}'' (restriction to first
+  \<open>n\<close> subgoals). In practice, proof methods are usually just a comma
   separated list of @{syntax nameref}~@{syntax args} specifications. Note
   that parentheses may be dropped for single method specifications (with no
   arguments). The syntactic precedence of method combinators is @{verbatim
@@ -631,26 +627,24 @@
   elsewhere. Thus a proof method has no other chance than to operate on the
   subgoals that are presently exposed.
 
-  Structural composition ``@{text m\<^sub>1}@{verbatim ";"}~@{text m\<^sub>2}'' means
-  that method @{text m\<^sub>1} is applied with restriction to the first subgoal,
-  then @{text m\<^sub>2} is applied consecutively with restriction to each subgoal
-  that has newly emerged due to @{text m\<^sub>1}. This is analogous to the tactic
+  Structural composition ``\<open>m\<^sub>1\<close>@{verbatim ";"}~\<open>m\<^sub>2\<close>'' means
+  that method \<open>m\<^sub>1\<close> is applied with restriction to the first subgoal,
+  then \<open>m\<^sub>2\<close> is applied consecutively with restriction to each subgoal
+  that has newly emerged due to \<open>m\<^sub>1\<close>. This is analogous to the tactic
   combinator @{ML_op THEN_ALL_NEW} in Isabelle/ML, see also @{cite
-  "isabelle-implementation"}. For example, @{text "(rule r; blast)"} applies
-  rule @{text "r"} and then solves all new subgoals by @{text blast}.
+  "isabelle-implementation"}. For example, \<open>(rule r; blast)\<close> applies
+  rule \<open>r\<close> and then solves all new subgoals by \<open>blast\<close>.
 
-  Moreover, the explicit goal restriction operator ``@{text "[n]"}'' exposes
-  only the first @{text n} subgoals (which need to exist), with default
-  @{text "n = 1"}. For example, the method expression ``@{text
-  "simp_all[3]"}'' simplifies the first three subgoals, while ``@{text
-  "(rule r, simp_all)[]"}'' simplifies all new goals that emerge from
-  applying rule @{text "r"} to the originally first one.
+  Moreover, the explicit goal restriction operator ``\<open>[n]\<close>'' exposes
+  only the first \<open>n\<close> subgoals (which need to exist), with default
+  \<open>n = 1\<close>. For example, the method expression ``\<open>simp_all[3]\<close>'' simplifies the first three subgoals, while ``\<open>(rule r, simp_all)[]\<close>'' simplifies all new goals that emerge from
+  applying rule \<open>r\<close> to the originally first one.
 
   \<^medskip>
   Improper methods, notably tactic emulations, offer low-level goal
   addressing as explicit argument to the individual tactic being involved.
-  Here ``@{text "[!]"}'' refers to all goals, and ``@{text "[n-]"}'' to all
-  goals starting from @{text "n"}.
+  Here ``\<open>[!]\<close>'' refers to all goals, and ``\<open>[n-]\<close>'' to all
+  goals starting from \<open>n\<close>.
 
   @{rail \<open>
     @{syntax_def goal_spec}:
@@ -663,13 +657,13 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "proof"} & : & @{text "proof(prove) \<rightarrow> proof(state)"} \\
-    @{command_def "qed"} & : & @{text "proof(state) \<rightarrow> proof(state) | local_theory | theory"} \\
-    @{command_def "by"} & : & @{text "proof(prove) \<rightarrow> proof(state) | local_theory | theory"} \\
-    @{command_def ".."} & : & @{text "proof(prove) \<rightarrow> proof(state) | local_theory | theory"} \\
-    @{command_def "."} & : & @{text "proof(prove) \<rightarrow> proof(state) | local_theory | theory"} \\
-    @{command_def "sorry"} & : & @{text "proof(prove) \<rightarrow> proof(state) | local_theory | theory"} \\
-    @{method_def standard} & : & @{text method} \\
+    @{command_def "proof"} & : & \<open>proof(prove) \<rightarrow> proof(state)\<close> \\
+    @{command_def "qed"} & : & \<open>proof(state) \<rightarrow> proof(state) | local_theory | theory\<close> \\
+    @{command_def "by"} & : & \<open>proof(prove) \<rightarrow> proof(state) | local_theory | theory\<close> \\
+    @{command_def ".."} & : & \<open>proof(prove) \<rightarrow> proof(state) | local_theory | theory\<close> \\
+    @{command_def "."} & : & \<open>proof(prove) \<rightarrow> proof(state) | local_theory | theory\<close> \\
+    @{command_def "sorry"} & : & \<open>proof(prove) \<rightarrow> proof(state) | local_theory | theory\<close> \\
+    @{method_def standard} & : & \<open>method\<close> \\
   \end{matharray}
 
   Arbitrary goal refinement via tactics is considered harmful.
@@ -677,14 +671,12 @@
   invoked in two places only.
 
   \<^enum> An \<^emph>\<open>initial\<close> refinement step @{command_ref
-  "proof"}~@{text "m\<^sub>1"} reduces a newly stated goal to a number
+  "proof"}~\<open>m\<^sub>1\<close> reduces a newly stated goal to a number
   of sub-goals that are to be solved later.  Facts are passed to
-  @{text "m\<^sub>1"} for forward chaining, if so indicated by @{text
-  "proof(chain)"} mode.
+  \<open>m\<^sub>1\<close> for forward chaining, if so indicated by \<open>proof(chain)\<close> mode.
 
-  \<^enum> A \<^emph>\<open>terminal\<close> conclusion step @{command_ref "qed"}~@{text
-  "m\<^sub>2"} is intended to solve remaining goals.  No facts are
-  passed to @{text "m\<^sub>2"}.
+  \<^enum> A \<^emph>\<open>terminal\<close> conclusion step @{command_ref "qed"}~\<open>m\<^sub>2\<close> is intended to solve remaining goals.  No facts are
+  passed to \<open>m\<^sub>2\<close>.
 
 
   The only other (proper) way to affect pending goals in a proof body
@@ -718,16 +710,15 @@
     (@@{command "."} | @@{command ".."} | @@{command sorry})
   \<close>}
 
-  \<^descr> @{command "proof"}~@{text "m\<^sub>1"} refines the goal by proof
-  method @{text "m\<^sub>1"}; facts for forward chaining are passed if so
-  indicated by @{text "proof(chain)"} mode.
+  \<^descr> @{command "proof"}~\<open>m\<^sub>1\<close> refines the goal by proof
+  method \<open>m\<^sub>1\<close>; facts for forward chaining are passed if so
+  indicated by \<open>proof(chain)\<close> mode.
 
-  \<^descr> @{command "qed"}~@{text "m\<^sub>2"} refines any remaining goals by
-  proof method @{text "m\<^sub>2"} and concludes the sub-proof by assumption.
-  If the goal had been @{text "show"} (or @{text "thus"}), some
+  \<^descr> @{command "qed"}~\<open>m\<^sub>2\<close> refines any remaining goals by
+  proof method \<open>m\<^sub>2\<close> and concludes the sub-proof by assumption.
+  If the goal had been \<open>show\<close> (or \<open>thus\<close>), some
   pending sub-goal is solved as well by the rule resulting from the
-  result \<^emph>\<open>exported\<close> into the enclosing goal context.  Thus @{text
-  "qed"} may fail for two reasons: either @{text "m\<^sub>2"} fails, or the
+  result \<^emph>\<open>exported\<close> into the enclosing goal context.  Thus \<open>qed\<close> may fail for two reasons: either \<open>m\<^sub>2\<close> fails, or the
   resulting rule does not fit any pending goal\footnote{This
   includes any additional ``strong'' assumptions as introduced by
   @{command "assume"}.} of the enclosing context.  Debugging such a
@@ -735,22 +726,20 @@
   @{command "have"}, or weakening the local context by replacing
   occurrences of @{command "assume"} by @{command "presume"}.
 
-  \<^descr> @{command "by"}~@{text "m\<^sub>1 m\<^sub>2"} is a \<^emph>\<open>terminal
+  \<^descr> @{command "by"}~\<open>m\<^sub>1 m\<^sub>2\<close> is a \<^emph>\<open>terminal
   proof\<close>\index{proof!terminal}; it abbreviates @{command
-  "proof"}~@{text "m\<^sub>1"}~@{command "qed"}~@{text "m\<^sub>2"}, but with
+  "proof"}~\<open>m\<^sub>1\<close>~@{command "qed"}~\<open>m\<^sub>2\<close>, but with
   backtracking across both methods.  Debugging an unsuccessful
-  @{command "by"}~@{text "m\<^sub>1 m\<^sub>2"} command can be done by expanding its
-  definition; in many cases @{command "proof"}~@{text "m\<^sub>1"} (or even
-  @{text "apply"}~@{text "m\<^sub>1"}) is already sufficient to see the
+  @{command "by"}~\<open>m\<^sub>1 m\<^sub>2\<close> command can be done by expanding its
+  definition; in many cases @{command "proof"}~\<open>m\<^sub>1\<close> (or even
+  \<open>apply\<close>~\<open>m\<^sub>1\<close>) is already sufficient to see the
   problem.
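+
+  For example (a minimal sketch in a HOL context), ``@{command
+  "have"}~\<open>A \<longrightarrow> A\<close>~@{command "by"}~\<open>(rule impI)\<close>'' succeeds: the terminal
+  step closes the remaining subgoal \<open>A \<Longrightarrow> A\<close> by assumption.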
 
   \<^descr> ``@{command ".."}'' is a \<^emph>\<open>standard
-  proof\<close>\index{proof!standard}; it abbreviates @{command "by"}~@{text
-  "standard"}.
+  proof\<close>\index{proof!standard}; it abbreviates @{command "by"}~\<open>standard\<close>.
 
   \<^descr> ``@{command "."}'' is a \<^emph>\<open>trivial
-  proof\<close>\index{proof!trivial}; it abbreviates @{command "by"}~@{text
-  "this"}.
+  proof\<close>\index{proof!trivial}; it abbreviates @{command "by"}~\<open>this\<close>.
 
   \<^descr> @{command "sorry"} is a \<^emph>\<open>fake proof\<close>\index{proof!fake}
   pretending to solve the pending claim without further ado.  This
@@ -787,20 +776,20 @@
   \chref{ch:gen-tools} and \partref{part:hol}).
 
   \begin{matharray}{rcl}
-    @{command_def "print_rules"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\[0.5ex]
-    @{method_def "-"} & : & @{text method} \\
-    @{method_def "goal_cases"} & : & @{text method} \\
-    @{method_def "fact"} & : & @{text method} \\
-    @{method_def "assumption"} & : & @{text method} \\
-    @{method_def "this"} & : & @{text method} \\
-    @{method_def (Pure) "rule"} & : & @{text method} \\
-    @{attribute_def (Pure) "intro"} & : & @{text attribute} \\
-    @{attribute_def (Pure) "elim"} & : & @{text attribute} \\
-    @{attribute_def (Pure) "dest"} & : & @{text attribute} \\
-    @{attribute_def (Pure) "rule"} & : & @{text attribute} \\[0.5ex]
-    @{attribute_def "OF"} & : & @{text attribute} \\
-    @{attribute_def "of"} & : & @{text attribute} \\
-    @{attribute_def "where"} & : & @{text attribute} \\
+    @{command_def "print_rules"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\[0.5ex]
+    @{method_def "-"} & : & \<open>method\<close> \\
+    @{method_def "goal_cases"} & : & \<open>method\<close> \\
+    @{method_def "fact"} & : & \<open>method\<close> \\
+    @{method_def "assumption"} & : & \<open>method\<close> \\
+    @{method_def "this"} & : & \<open>method\<close> \\
+    @{method_def (Pure) "rule"} & : & \<open>method\<close> \\
+    @{attribute_def (Pure) "intro"} & : & \<open>attribute\<close> \\
+    @{attribute_def (Pure) "elim"} & : & \<open>attribute\<close> \\
+    @{attribute_def (Pure) "dest"} & : & \<open>attribute\<close> \\
+    @{attribute_def (Pure) "rule"} & : & \<open>attribute\<close> \\[0.5ex]
+    @{attribute_def "OF"} & : & \<open>attribute\<close> \\
+    @{attribute_def "of"} & : & \<open>attribute\<close> \\
+    @{attribute_def "where"} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -839,9 +828,9 @@
   Note that command @{command_ref "proof"} without any method actually
   performs a single reduction step using the @{method_ref (Pure) rule}
   method; thus a plain \<^emph>\<open>do-nothing\<close> proof step would be ``@{command
-  "proof"}~@{text "-"}'' rather than @{command "proof"} alone.
+  "proof"}~\<open>-\<close>'' rather than @{command "proof"} alone.
 
-  \<^descr> @{method "goal_cases"}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} turns the current subgoals
+  \<^descr> @{method "goal_cases"}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> turns the current subgoals
   into cases within the context (see also \secref{sec:cases-induct}). The
   specified case names are used if present; otherwise cases are numbered
   starting from 1.
@@ -851,15 +840,14 @@
   premises, and @{command let} variable @{variable_ref ?case} refer to the
   conclusion.
 
-  \<^descr> @{method "fact"}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} composes some fact from
-  @{text "a\<^sub>1, \<dots>, a\<^sub>n"} (or implicitly from the current proof context)
+  \<^descr> @{method "fact"}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> composes some fact from
+  \<open>a\<^sub>1, \<dots>, a\<^sub>n\<close> (or implicitly from the current proof context)
   modulo unification of schematic type and term variables.  The rule
   structure is not taken into account, i.e.\ meta-level implication is
   considered atomic.  This is the same principle underlying literal
-  facts (cf.\ \secref{sec:syn-att}): ``@{command "have"}~@{text
-  "\<phi>"}~@{command "by"}~@{text fact}'' is equivalent to ``@{command
-  "note"}~@{verbatim "`"}@{text \<phi>}@{verbatim "`"}'' provided that
-  @{text "\<turnstile> \<phi>"} is an instance of some known @{text "\<turnstile> \<phi>"} in the
+  facts (cf.\ \secref{sec:syn-att}): ``@{command "have"}~\<open>\<phi>\<close>~@{command "by"}~\<open>fact\<close>'' is equivalent to ``@{command
+  "note"}~@{verbatim "`"}\<open>\<phi>\<close>@{verbatim "`"}'' provided that
+  \<open>\<turnstile> \<phi>\<close> is an instance of some known \<open>\<turnstile> \<phi>\<close> in the
   proof context.
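+
+  For example (a minimal sketch): after ``@{command "assume"}~\<open>a: A \<and> B\<close>'',
+  the claim ``@{command "have"}~\<open>A \<and> B\<close>~@{command "by"}~\<open>fact\<close>'' succeeds
+  without naming \<open>a\<close> explicitly.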
 
   \<^descr> @{method assumption} solves some goal by a single assumption
@@ -872,9 +860,9 @@
 
   \<^descr> @{method this} applies all of the current facts directly as
   rules.  Recall that ``@{command "."}'' (dot) abbreviates ``@{command
-  "by"}~@{text this}''.
+  "by"}~\<open>this\<close>''.
 
-  \<^descr> @{method (Pure) rule}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} applies some rule given as
+  \<^descr> @{method (Pure) rule}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> applies some rule given as
   argument in backward manner; facts are used to reduce the rule
   before applying it to the goal.  Thus @{method (Pure) rule} without facts
   is plain introduction, while with facts it becomes elimination.
@@ -890,48 +878,48 @@
   @{attribute (Pure) dest} declare introduction, elimination, and
   destruct rules, to be used with method @{method (Pure) rule}, and similar
   tools.  Note that the latter will ignore rules declared with
-  ``@{text "?"}'', while ``@{text "!"}''  are used most aggressively.
+  ``\<open>?\<close>'', while rules declared with ``\<open>!\<close>'' are used most aggressively.
 
   The classical reasoner (see \secref{sec:classical}) introduces its
   own variants of these attributes; use qualified names to access the
   present versions of Isabelle/Pure, i.e.\ @{attribute (Pure)
   "Pure.intro"}.
 
-  \<^descr> @{attribute (Pure) rule}~@{text del} undeclares introduction,
+  \<^descr> @{attribute (Pure) rule}~\<open>del\<close> undeclares introduction,
   elimination, or destruct rules.
 
-  \<^descr> @{attribute OF}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} applies some theorem to all
-  of the given rules @{text "a\<^sub>1, \<dots>, a\<^sub>n"} in canonical right-to-left
-  order, which means that premises stemming from the @{text "a\<^sub>i"}
+  \<^descr> @{attribute OF}~\<open>a\<^sub>1 \<dots> a\<^sub>n\<close> applies some theorem to all
+  of the given rules \<open>a\<^sub>1, \<dots>, a\<^sub>n\<close> in canonical right-to-left
+  order, which means that premises stemming from the \<open>a\<^sub>i\<close>
   emerge in parallel in the result, without interfering with each
-  other.  In many practical situations, the @{text "a\<^sub>i"} do not have
-  premises themselves, so @{text "rule [OF a\<^sub>1 \<dots> a\<^sub>n]"} can be actually
+  other.  In many practical situations, the \<open>a\<^sub>i\<close> do not have
+  premises themselves, so \<open>rule [OF a\<^sub>1 \<dots> a\<^sub>n]\<close> can be actually
   read as functional application (modulo unification).
 
-  Argument positions may be effectively skipped by using ``@{text _}''
+  Argument positions may be effectively skipped by using ``\<open>_\<close>''
   (underscore), which refers to the propositional identity rule in the
   Pure theory.
 
-  \<^descr> @{attribute of}~@{text "t\<^sub>1 \<dots> t\<^sub>n"} performs positional
-  instantiation of term variables.  The terms @{text "t\<^sub>1, \<dots>, t\<^sub>n"} are
+  \<^descr> @{attribute of}~\<open>t\<^sub>1 \<dots> t\<^sub>n\<close> performs positional
+  instantiation of term variables.  The terms \<open>t\<^sub>1, \<dots>, t\<^sub>n\<close> are
   substituted for any schematic variables occurring in a theorem from
-  left to right; ``@{text _}'' (underscore) indicates to skip a
-  position.  Arguments following a ``@{text "concl:"}'' specification
+  left to right; ``\<open>_\<close>'' (underscore) indicates to skip a
+  position.  Arguments following a ``\<open>concl:\<close>'' specification
   refer to positions of the conclusion of a rule.
 
-  An optional context of local variables @{text "\<FOR> x\<^sub>1 \<dots> x\<^sub>m"} may
+  An optional context of local variables \<open>\<FOR> x\<^sub>1 \<dots> x\<^sub>m\<close> may
   be specified: the instantiated theorem is exported, and these
   variables become schematic (usually with some shifting of indices).
 
-  \<^descr> @{attribute "where"}~@{text "x\<^sub>1 = t\<^sub>1 \<AND> \<dots> x\<^sub>n = t\<^sub>n"}
+  \<^descr> @{attribute "where"}~\<open>x\<^sub>1 = t\<^sub>1 \<AND> \<dots> x\<^sub>n = t\<^sub>n\<close>
   performs named instantiation of schematic type and term variables
   occurring in a theorem.  Schematic variables have to be specified on
-  the left-hand side (e.g.\ @{text "?x1.3"}).  The question mark may
+  the left-hand side (e.g.\ \<open>?x1.3\<close>).  The question mark may
   be omitted if the variable name is a plain identifier without index.
   As type instantiations are inferred from term instantiations,
   explicit type instantiations are seldom necessary.
 
-  An optional context of local variables @{text "\<FOR> x\<^sub>1 \<dots> x\<^sub>m"} may
+  An optional context of local variables \<open>\<FOR> x\<^sub>1 \<dots> x\<^sub>m\<close> may
   be specified as for @{attribute "of"} above.
 \<close>
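+
+text \<open>
+  Illustrative instantiations of the HOL rule \<open>conjI\<close> (a hypothetical
+  sketch; \<open>A\<close> and \<open>B\<close> are arbitrary free variables and the resulting
+  theorems are merely printed):
+\<close>
+
+thm conjI [OF TrueI TrueI]
+thm conjI [of A B]
+thm conjI [where Q = B]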
 
@@ -940,16 +928,15 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "method_setup"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
+    @{command_def "method_setup"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
   \end{matharray}
 
   @{rail \<open>
     @@{command method_setup} @{syntax name} '=' @{syntax text} @{syntax text}?
   \<close>}
 
-  \<^descr> @{command "method_setup"}~@{text "name = text description"}
-  defines a proof method in the current context.  The given @{text
-  "text"} has to be an ML expression of type
+  \<^descr> @{command "method_setup"}~\<open>name = text description\<close>
+  defines a proof method in the current context.  The given \<open>text\<close> has to be an ML expression of type
   @{ML_type "(Proof.context -> Proof.method) context_parser"}, cf.\
   basic parsers defined in structure @{ML_structure Args} and @{ML_structure
   Attrib}.  There are also combinators like @{ML METHOD} and @{ML
@@ -983,12 +970,12 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "case"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "print_cases"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{attribute_def case_names} & : & @{text attribute} \\
-    @{attribute_def case_conclusion} & : & @{text attribute} \\
-    @{attribute_def params} & : & @{text attribute} \\
-    @{attribute_def consumes} & : & @{text attribute} \\
+    @{command_def "case"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "print_cases"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{attribute_def case_names} & : & \<open>attribute\<close> \\
+    @{attribute_def case_conclusion} & : & \<open>attribute\<close> \\
+    @{attribute_def params} & : & \<open>attribute\<close> \\
+    @{attribute_def consumes} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   The puristic way to build up Isar proof contexts is by explicit
@@ -1000,27 +987,26 @@
 
   The @{command "case"} command provides a shorthand to refer to a
   local context symbolically: certain proof methods provide an
-  environment of named ``cases'' of the form @{text "c: x\<^sub>1, \<dots>,
-  x\<^sub>m, \<phi>\<^sub>1, \<dots>, \<phi>\<^sub>n"}; the effect of ``@{command
-  "case"}~@{text c}'' is then equivalent to ``@{command "fix"}~@{text
-  "x\<^sub>1 \<dots> x\<^sub>m"}~@{command "assume"}~@{text "c: \<phi>\<^sub>1 \<dots>
-  \<phi>\<^sub>n"}''.  Term bindings may be covered as well, notably
+  environment of named ``cases'' of the form \<open>c: x\<^sub>1, \<dots>,
+  x\<^sub>m, \<phi>\<^sub>1, \<dots>, \<phi>\<^sub>n\<close>; the effect of ``@{command
+  "case"}~\<open>c\<close>'' is then equivalent to ``@{command "fix"}~\<open>x\<^sub>1 \<dots> x\<^sub>m\<close>~@{command "assume"}~\<open>c: \<phi>\<^sub>1 \<dots>
+  \<phi>\<^sub>n\<close>''.  Term bindings may be covered as well, notably
   @{variable ?case} for the main conclusion.
 
-  By default, the ``terminology'' @{text "x\<^sub>1, \<dots>, x\<^sub>m"} of
+  By default, the ``terminology'' \<open>x\<^sub>1, \<dots>, x\<^sub>m\<close> of
   a case value is marked as hidden, i.e.\ there is no way to refer to
   such parameters in the subsequent proof text.  After all, original
   rule parameters stem from somewhere outside of the current proof
-  text.  By using the explicit form ``@{command "case"}~@{text "(c
-  y\<^sub>1 \<dots> y\<^sub>m)"}'' instead, the proof author is able to
+  text.  By using the explicit form ``@{command "case"}~\<open>(c
+  y\<^sub>1 \<dots> y\<^sub>m)\<close>'' instead, the proof author is able to
   choose local names that fit nicely into the current context.
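+
+  For illustration, a hypothetical fragment, assuming a preceding
+  proof method that provides a case \<open>c\<close> with a single
+  parameter:
+
+    case (c y)
+    then show ?thesis sorry
+
+  Here \<open>y\<close> is a freely chosen name for the otherwise hidden
+  case parameter.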
 
   \<^medskip>
   It is important to note that proper use of @{command
   "case"} does not provide means to peek at the current goal state,
   which is not directly observable in Isar!  Nonetheless, goal
-  refinement commands do provide named cases @{text "goal\<^sub>i"}
-  for each subgoal @{text "i = 1, \<dots>, n"} of the resulting goal state.
+  refinement commands do provide named cases \<open>goal\<^sub>i\<close>
+  for each subgoal \<open>i = 1, \<dots>, n\<close> of the resulting goal state.
   Using this extra feature requires great care, because some bits of
   the internal tactical machinery intrude into the proof text.  In
   particular, parameter names stemming from the leftovers of automated
@@ -1051,57 +1037,56 @@
     @@{attribute consumes} @{syntax int}?
   \<close>}
 
-  \<^descr> @{command "case"}~@{text "a: (c x\<^sub>1 \<dots> x\<^sub>m)"} invokes a named local
-  context @{text "c: x\<^sub>1, \<dots>, x\<^sub>m, \<phi>\<^sub>1, \<dots>, \<phi>\<^sub>m"}, as provided by an
+  \<^descr> @{command "case"}~\<open>a: (c x\<^sub>1 \<dots> x\<^sub>m)\<close> invokes a named local
+  context \<open>c: x\<^sub>1, \<dots>, x\<^sub>m, \<phi>\<^sub>1, \<dots>, \<phi>\<^sub>m\<close>, as provided by an
   appropriate proof method (such as @{method_ref cases} and @{method_ref
-  induct}). The command ``@{command "case"}~@{text "a: (c x\<^sub>1 \<dots> x\<^sub>m)"}''
-  abbreviates ``@{command "fix"}~@{text "x\<^sub>1 \<dots> x\<^sub>m"}~@{command
-  "assume"}~@{text "a.c: \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"}''. Each local fact is qualified by the
-  prefix @{text "a"}, and all such facts are collectively bound to the name
-  @{text a}.
+  induct}). The command ``@{command "case"}~\<open>a: (c x\<^sub>1 \<dots> x\<^sub>m)\<close>''
+  abbreviates ``@{command "fix"}~\<open>x\<^sub>1 \<dots> x\<^sub>m\<close>~@{command
+  "assume"}~\<open>a.c: \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n\<close>''. Each local fact is qualified by the
+  prefix \<open>a\<close>, and all such facts are collectively bound to the name
+  \<open>a\<close>.
 
-  The fact name is specification @{text a} is optional, the default is to
-  re-use @{text c}. So @{command "case"}~@{text "(c x\<^sub>1 \<dots> x\<^sub>m)"} is the same
-  as @{command "case"}~@{text "c: (c x\<^sub>1 \<dots> x\<^sub>m)"}.
+  The fact name specification \<open>a\<close> is optional; the default is to
+  re-use \<open>c\<close>. So @{command "case"}~\<open>(c x\<^sub>1 \<dots> x\<^sub>m)\<close> is the same
+  as @{command "case"}~\<open>c: (c x\<^sub>1 \<dots> x\<^sub>m)\<close>.
 
   \<^descr> @{command "print_cases"} prints all local contexts of the
   current state, using Isar proof language notation.
 
-  \<^descr> @{attribute case_names}~@{text "c\<^sub>1 \<dots> c\<^sub>k"} declares names for
-  the local contexts of premises of a theorem; @{text "c\<^sub>1, \<dots>, c\<^sub>k"}
+  \<^descr> @{attribute case_names}~\<open>c\<^sub>1 \<dots> c\<^sub>k\<close> declares names for
+  the local contexts of premises of a theorem; \<open>c\<^sub>1, \<dots>, c\<^sub>k\<close>
   refers to the \<^emph>\<open>prefix\<close> of the list of premises. Each of the
-  cases @{text "c\<^sub>i"} can be of the form @{text "c[h\<^sub>1 \<dots> h\<^sub>n]"} where
-  the @{text "h\<^sub>1 \<dots> h\<^sub>n"} are the names of the hypotheses in case @{text "c\<^sub>i"}
+  cases \<open>c\<^sub>i\<close> can be of the form \<open>c[h\<^sub>1 \<dots> h\<^sub>n]\<close> where
+  the \<open>h\<^sub>1 \<dots> h\<^sub>n\<close> are the names of the hypotheses in case \<open>c\<^sub>i\<close>
   from left to right.
 
-  \<^descr> @{attribute case_conclusion}~@{text "c d\<^sub>1 \<dots> d\<^sub>k"} declares
-  names for the conclusions of a named premise @{text c}; here @{text
-  "d\<^sub>1, \<dots>, d\<^sub>k"} refers to the prefix of arguments of a logical formula
-  built by nesting a binary connective (e.g.\ @{text "\<or>"}).
+  \<^descr> @{attribute case_conclusion}~\<open>c d\<^sub>1 \<dots> d\<^sub>k\<close> declares
+  names for the conclusions of a named premise \<open>c\<close>; here \<open>d\<^sub>1, \<dots>, d\<^sub>k\<close> refers to the prefix of arguments of a logical formula
+  built by nesting a binary connective (e.g.\ \<open>\<or>\<close>).
 
   Note that proof methods such as @{method induct} and @{method
   coinduct} already provide a default name for the conclusion as a
   whole.  The need to name subformulas only arises with cases that
   split into several sub-cases, as in common co-induction rules.
 
-  \<^descr> @{attribute params}~@{text "p\<^sub>1 \<dots> p\<^sub>m \<AND> \<dots> q\<^sub>1 \<dots> q\<^sub>n"} renames
-  the innermost parameters of premises @{text "1, \<dots>, n"} of some
+  \<^descr> @{attribute params}~\<open>p\<^sub>1 \<dots> p\<^sub>m \<AND> \<dots> q\<^sub>1 \<dots> q\<^sub>n\<close> renames
+  the innermost parameters of premises \<open>1, \<dots>, n\<close> of some
   theorem.  An empty list of names may be given to skip positions,
   leaving the present parameters unchanged.
 
   Note that the default usage of case rules does \<^emph>\<open>not\<close> directly
   expose parameters to the proof context.
 
-  \<^descr> @{attribute consumes}~@{text n} declares the number of ``major
+  \<^descr> @{attribute consumes}~\<open>n\<close> declares the number of ``major
   premises'' of a rule, i.e.\ the number of facts to be consumed when
   it is applied by an appropriate proof method.  The default value of
-  @{attribute consumes} is @{text "n = 1"}, which is appropriate for
+  @{attribute consumes} is \<open>n = 1\<close>, which is appropriate for
   the usual kind of cases and induction rules for inductive sets (cf.\
   \secref{sec:hol-inductive}).  Rules without any @{attribute
   consumes} declaration given are treated as if @{attribute
-  consumes}~@{text 0} had been specified.
+  consumes}~\<open>0\<close> had been specified.
 
-  A negative @{text n} is interpreted relatively to the total number
+  A negative \<open>n\<close> is interpreted relative to the total number
   of premises of the rule in the target context.  Thus its absolute
   value specifies the remaining number of premises, after subtracting
   the prefix of major premises as indicated above. This form of
@@ -1120,10 +1105,10 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{method_def cases} & : & @{text method} \\
-    @{method_def induct} & : & @{text method} \\
-    @{method_def induction} & : & @{text method} \\
-    @{method_def coinduct} & : & @{text method} \\
+    @{method_def cases} & : & \<open>method\<close> \\
+    @{method_def induct} & : & \<open>method\<close> \\
+    @{method_def induction} & : & \<open>method\<close> \\
+    @{method_def coinduct} & : & \<open>method\<close> \\
   \end{matharray}
 
   The @{method cases}, @{method induct}, @{method induction},
@@ -1167,9 +1152,9 @@
     taking: 'taking' ':' @{syntax insts}
   \<close>}
 
-  \<^descr> @{method cases}~@{text "insts R"} applies method @{method
+  \<^descr> @{method cases}~\<open>insts R\<close> applies method @{method
   rule} with an appropriate case distinction theorem, instantiated to
-  the subjects @{text insts}.  Symbolic case names are bound according
+  the subjects \<open>insts\<close>.  Symbolic case names are bound according
   to the rule's local contexts.
 
   The rule is determined as follows, according to the facts and
@@ -1178,11 +1163,11 @@
   \<^medskip>
   \begin{tabular}{llll}
     facts           &                 & arguments   & rule \\\hline
-    @{text "\<turnstile> R"}   & @{method cases} &             & implicit rule @{text R} \\
+    \<open>\<turnstile> R\<close>   & @{method cases} &             & implicit rule \<open>R\<close> \\
                     & @{method cases} &             & classical case split \\
-                    & @{method cases} & @{text t}   & datatype exhaustion (type of @{text t}) \\
-    @{text "\<turnstile> A t"} & @{method cases} & @{text "\<dots>"} & inductive predicate/set elimination (of @{text A}) \\
-    @{text "\<dots>"}     & @{method cases} & @{text "\<dots> rule: R"} & explicit rule @{text R} \\
+                    & @{method cases} & \<open>t\<close>   & datatype exhaustion (type of \<open>t\<close>) \\
+    \<open>\<turnstile> A t\<close> & @{method cases} & \<open>\<dots>\<close> & inductive predicate/set elimination (of \<open>A\<close>) \\
+    \<open>\<dots>\<close>     & @{method cases} & \<open>\<dots> rule: R\<close> & explicit rule \<open>R\<close> \\
   \end{tabular}
   \<^medskip>
 
@@ -1190,91 +1175,89 @@
   of premises of the case rule; within each premise, the \<^emph>\<open>prefix\<close>
   of variables is instantiated.  In most situations, only a single
   term needs to be specified; this refers to the first variable of the
-  last premise (it is usually the same for all cases).  The @{text
-  "(no_simp)"} option can be used to disable pre-simplification of
+  last premise (it is usually the same for all cases).  The \<open>(no_simp)\<close> option can be used to disable pre-simplification of
   cases (see the description of @{method induct} below for details).
 
-  \<^descr> @{method induct}~@{text "insts R"} and
-  @{method induction}~@{text "insts R"} are analogous to the
+  \<^descr> @{method induct}~\<open>insts R\<close> and
+  @{method induction}~\<open>insts R\<close> are analogous to the
   @{method cases} method, but refer to induction rules, which are
   determined as follows:
 
   \<^medskip>
   \begin{tabular}{llll}
     facts           &                  & arguments            & rule \\\hline
-                    & @{method induct} & @{text "P x"}        & datatype induction (type of @{text x}) \\
-    @{text "\<turnstile> A x"} & @{method induct} & @{text "\<dots>"}          & predicate/set induction (of @{text A}) \\
-    @{text "\<dots>"}     & @{method induct} & @{text "\<dots> rule: R"} & explicit rule @{text R} \\
+                    & @{method induct} & \<open>P x\<close>        & datatype induction (type of \<open>x\<close>) \\
+    \<open>\<turnstile> A x\<close> & @{method induct} & \<open>\<dots>\<close>          & predicate/set induction (of \<open>A\<close>) \\
+    \<open>\<dots>\<close>     & @{method induct} & \<open>\<dots> rule: R\<close> & explicit rule \<open>R\<close> \\
   \end{tabular}
   \<^medskip>
 
   Several instantiations may be given, each referring to some part of
   a mutual inductive definition or datatype --- only related partial
   induction rules may be used together, though.  Any of the lists of
-  terms @{text "P, x, \<dots>"} refers to the \<^emph>\<open>suffix\<close> of variables
+  terms \<open>P, x, \<dots>\<close> refers to the \<^emph>\<open>suffix\<close> of variables
   present in the induction rule.  This enables the writer to specify
   only induction variables, or both predicates and variables, for
   example.
 
-  Instantiations may be definitional: equations @{text "x \<equiv> t"}
+  Instantiations may be definitional: equations \<open>x \<equiv> t\<close>
   introduce local definitions, which are inserted into the claim and
   discharged after applying the induction rule.  Equalities reappear
   in the inductive cases, but have been transformed according to the
   induction principle involved here.  In order to achieve
   practically useful induction hypotheses, some variables occurring in
-  @{text t} need to be fixed (see below).  Instantiations of the form
-  @{text t}, where @{text t} is not a variable, are taken as a
-  shorthand for \mbox{@{text "x \<equiv> t"}}, where @{text x} is a fresh
-  variable. If this is not intended, @{text t} has to be enclosed in
+  \<open>t\<close> need to be fixed (see below).  Instantiations of the form
+  \<open>t\<close>, where \<open>t\<close> is not a variable, are taken as a
+  shorthand for \mbox{\<open>x \<equiv> t\<close>}, where \<open>x\<close> is a fresh
+  variable. If this is not intended, \<open>t\<close> has to be enclosed in
   parentheses.  By default, the equalities generated by definitional
   instantiations are pre-simplified using a specific set of rules,
   usually consisting of distinctness and injectivity theorems for
   datatypes. This pre-simplification may cause some of the parameters
   of an inductive case to disappear, or may even completely delete
   some of the inductive cases, if one of the equalities occurring in
-  their premises can be simplified to @{text False}.  The @{text
-  "(no_simp)"} option can be used to disable pre-simplification.
+  their premises can be simplified to \<open>False\<close>.  The \<open>(no_simp)\<close> option can be used to disable pre-simplification.
   Additional rules to be used in pre-simplification can be declared
   using the @{attribute_def induct_simp} attribute.
 
-  The optional ``@{text "arbitrary: x\<^sub>1 \<dots> x\<^sub>m"}''
-  specification generalizes variables @{text "x\<^sub>1, \<dots>,
-  x\<^sub>m"} of the original goal before applying induction.  One can
-  separate variables by ``@{text "and"}'' to generalize them in other
+  The optional ``\<open>arbitrary: x\<^sub>1 \<dots> x\<^sub>m\<close>''
+  specification generalizes variables \<open>x\<^sub>1, \<dots>,
+  x\<^sub>m\<close> of the original goal before applying induction.  One can
+  separate variables by ``\<open>and\<close>'' to generalize them in other
   goals than the first. Thus induction hypotheses may become
   sufficiently general to get the proof through.  Together with
   definitional instantiations, one may effectively perform induction
   over expressions of a certain structure.
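+
+  A classic sketch, assuming an accumulator-based reversal function
+  \<open>itrev\<close> with equations \<open>itrev [] ys = ys\<close> and
+  \<open>itrev (x # xs) ys = itrev xs (x # ys)\<close>:
+
+    lemma "itrev xs ys = rev xs @ ys"
+      by (induct xs arbitrary: ys) simp_all
+
+  Without ``\<open>arbitrary: ys\<close>'' the induction hypothesis would be
+  too weak to finish the proof.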
 
-  The optional ``@{text "taking: t\<^sub>1 \<dots> t\<^sub>n"}''
+  The optional ``\<open>taking: t\<^sub>1 \<dots> t\<^sub>n\<close>''
   specification provides additional instantiations of a prefix of
   pending variables in the rule.  Such schematic induction rules
   rarely occur in practice, though.
 
-  \<^descr> @{method coinduct}~@{text "inst R"} is analogous to the
+  \<^descr> @{method coinduct}~\<open>inst R\<close> is analogous to the
   @{method induct} method, but refers to coinduction rules, which are
   determined as follows:
 
   \<^medskip>
   \begin{tabular}{llll}
     goal          &                    & arguments & rule \\\hline
-                  & @{method coinduct} & @{text x} & type coinduction (type of @{text x}) \\
-    @{text "A x"} & @{method coinduct} & @{text "\<dots>"} & predicate/set coinduction (of @{text A}) \\
-    @{text "\<dots>"}   & @{method coinduct} & @{text "\<dots> rule: R"} & explicit rule @{text R} \\
+                  & @{method coinduct} & \<open>x\<close> & type coinduction (type of \<open>x\<close>) \\
+    \<open>A x\<close> & @{method coinduct} & \<open>\<dots>\<close> & predicate/set coinduction (of \<open>A\<close>) \\
+    \<open>\<dots>\<close>   & @{method coinduct} & \<open>\<dots> rule: R\<close> & explicit rule \<open>R\<close> \\
   \end{tabular}
   \<^medskip>
 
   Coinduction is the dual of induction.  Induction essentially
-  eliminates @{text "A x"} towards a generic result @{text "P x"},
-  while coinduction introduces @{text "A x"} starting with @{text "B
-  x"}, for a suitable ``bisimulation'' @{text B}.  The cases of a
+  eliminates \<open>A x\<close> towards a generic result \<open>P x\<close>,
+  while coinduction introduces \<open>A x\<close> starting with \<open>B
+  x\<close>, for a suitable ``bisimulation'' \<open>B\<close>.  The cases of a
   coinduct rule are typically named after the predicates or sets being
   covered, while the conclusions consist of several alternatives being
   named after the individual destructor patterns.
 
   The given instantiation refers to the \<^emph>\<open>suffix\<close> of variables
   occurring in the rule's major premise, or conclusion if unavailable.
-  An additional ``@{text "taking: t\<^sub>1 \<dots> t\<^sub>n"}''
+  An additional ``\<open>taking: t\<^sub>1 \<dots> t\<^sub>n\<close>''
   specification may be required in order to specify the bisimulation
   to be used in the coinduction step.
 
@@ -1295,8 +1278,7 @@
   Despite the additional infrastructure, both @{method cases}
   and @{method coinduct} merely apply a certain rule, after
   instantiation, while conforming to the usual way of monotonic
-  natural deduction: the context of a structured statement @{text
-  "\<And>x\<^sub>1 \<dots> x\<^sub>m. \<phi>\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<dots>"}
+  natural deduction: the context of a structured statement \<open>\<And>x\<^sub>1 \<dots> x\<^sub>m. \<phi>\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<dots>\<close>
   reappears unchanged after the case split.
 
   The @{method induct} method is fundamentally different in this
@@ -1309,17 +1291,16 @@
   of the original statement.
 
   In @{method induct} proofs, local assumptions introduced by cases are split
-  into two different kinds: @{text hyps} stemming from the rule and
-  @{text prems} from the goal statement.  This is reflected in the
-  extracted cases accordingly, so invoking ``@{command "case"}~@{text
-  c}'' will provide separate facts @{text c.hyps} and @{text c.prems},
-  as well as fact @{text c} to hold the all-inclusive list.
+  into two different kinds: \<open>hyps\<close> stemming from the rule and
+  \<open>prems\<close> from the goal statement.  This is reflected in the
+  extracted cases accordingly, so invoking ``@{command "case"}~\<open>c\<close>'' will provide separate facts \<open>c.hyps\<close> and \<open>c.prems\<close>,
+  as well as fact \<open>c\<close> to hold the all-inclusive list.
 
   In @{method induction} proofs, local assumptions introduced by cases are
-  split into three different kinds: @{text IH}, the induction hypotheses,
-  @{text hyps}, the remaining hypotheses stemming from the rule, and
-  @{text prems}, the assumptions from the goal statement. The names are
-  @{text c.IH}, @{text c.hyps} and @{text c.prems}, as above.
+  split into three different kinds: \<open>IH\<close>, the induction hypotheses,
+  \<open>hyps\<close>, the remaining hypotheses stemming from the rule, and
+  \<open>prems\<close>, the assumptions from the goal statement. The names are
+  \<open>c.IH\<close>, \<open>c.hyps\<close> and \<open>c.prems\<close>, as above.
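+
+  For instance, a schematic fragment of an induction over a natural
+  number (the enclosing goal is left open):
+
+    proof (induction n)
+      case 0
+      show ?case sorry
+    next
+      case (Suc n)
+      from Suc.IH and Suc.prems show ?case sorry
+    qed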
 
 
   \<^medskip>
@@ -1328,7 +1309,7 @@
   usually 0 for plain cases and induction rules of datatypes etc.\ and
   1 for rules of inductive predicates or sets and the like.  The
   remaining facts are inserted into the goal verbatim before the
-  actual @{text cases}, @{text induct}, or @{text coinduct} rule is
+  actual \<open>cases\<close>, \<open>induct\<close>, or \<open>coinduct\<close> rule is
   applied.
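+
+  For example (hypothetical inductive predicate \<open>even\<close> and
+  property \<open>P\<close>), the chained fact is consumed as the major
+  premise:
+
+    assume a: "even n"
+    then have "P n" by (induct) auto
+
+  where \<open>auto\<close> merely stands for whatever closes the emerging
+  cases.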
 \<close>
 
@@ -1337,10 +1318,10 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "print_induct_rules"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{attribute_def cases} & : & @{text attribute} \\
-    @{attribute_def induct} & : & @{text attribute} \\
-    @{attribute_def coinduct} & : & @{text attribute} \\
+    @{command_def "print_induct_rules"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{attribute_def cases} & : & \<open>attribute\<close> \\
+    @{attribute_def induct} & : & \<open>attribute\<close> \\
+    @{attribute_def coinduct} & : & \<open>attribute\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1364,18 +1345,18 @@
   packages of object-logics usually declare emerging cases and
   induction rules as expected, so users rarely need to intervene.
 
-  Rules may be deleted via the @{text "del"} specification, which
-  covers all of the @{text "type"}/@{text "pred"}/@{text "set"}
+  Rules may be deleted via the \<open>del\<close> specification, which
+  covers all of the \<open>type\<close>/\<open>pred\<close>/\<open>set\<close>
   sub-categories simultaneously.  For example, @{attribute
-  cases}~@{text del} removes any @{attribute cases} rules declared for
+  cases}~\<open>del\<close> removes any @{attribute cases} rules declared for
   some type, predicate, or set.
 
   Manual rule declarations usually refer to the @{attribute
   case_names} and @{attribute params} attributes to adjust names of
   cases and parameters of a rule; the @{attribute consumes}
   declaration is taken care of automatically: @{attribute
-  consumes}~@{text 0} is specified for ``type'' rules and @{attribute
-  consumes}~@{text 1} for ``predicate'' / ``set'' rules.
+  consumes}~\<open>0\<close> is specified for ``type'' rules and @{attribute
+  consumes}~\<open>1\<close> for ``predicate'' / ``set'' rules.
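+
+  For example, hypothetical declarations (all names are placeholders)
+  might read:
+
+    declare my_exhaust [cases type: my_type]
+    declare my_induct [induct pred: my_pred]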
 \<close>
 
 
@@ -1383,9 +1364,9 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "consider"} & : & @{text "proof(state) | proof(chain) \<rightarrow> proof(prove)"} \\
-    @{command_def "obtain"} & : & @{text "proof(state) | proof(chain) \<rightarrow> proof(prove)"} \\
-    @{command_def "guess"}@{text "\<^sup>*"} & : & @{text "proof(state) | proof(chain) \<rightarrow> proof(prove)"} \\
+    @{command_def "consider"} & : & \<open>proof(state) | proof(chain) \<rightarrow> proof(prove)\<close> \\
+    @{command_def "obtain"} & : & \<open>proof(state) | proof(chain) \<rightarrow> proof(prove)\<close> \\
+    @{command_def "guess"}\<open>\<^sup>*\<close> & : & \<open>proof(state) | proof(chain) \<rightarrow> proof(prove)\<close> \\
   \end{matharray}
 
   Generalized elimination means that hypothetical parameters and premises
@@ -1396,9 +1377,9 @@
 
   \<^medskip>
   \begin{tabular}{ll}
-  @{text "\<exists>x. B x \<Longrightarrow> (\<And>x. B x \<Longrightarrow> thesis) \<Longrightarrow> thesis"} \\
-  @{text "A \<and> B \<Longrightarrow> (A \<Longrightarrow> B \<Longrightarrow> thesis) \<Longrightarrow> thesis"} \\
-  @{text "A \<or> B \<Longrightarrow> (A \<Longrightarrow> thesis) \<Longrightarrow> (B \<Longrightarrow> thesis) \<Longrightarrow> thesis"} \\
+  \<open>\<exists>x. B x \<Longrightarrow> (\<And>x. B x \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close> \\
+  \<open>A \<and> B \<Longrightarrow> (A \<Longrightarrow> B \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close> \\
+  \<open>A \<or> B \<Longrightarrow> (A \<Longrightarrow> thesis) \<Longrightarrow> (B \<Longrightarrow> thesis) \<Longrightarrow> thesis\<close> \\
   \end{tabular}
   \<^medskip>
 
@@ -1416,8 +1397,8 @@
     @@{command guess} (@{syntax "fixes"} + @'and')
   \<close>}
 
-  \<^descr> @{command consider}~@{text "(a) \<^vec>x \<WHERE> \<^vec>A \<^vec>x
-  | (b) \<^vec>y \<WHERE> \<^vec>B \<^vec>y | \<dots> "} states a rule for case
+  \<^descr> @{command consider}~\<open>(a) \<^vec>x \<WHERE> \<^vec>A \<^vec>x
+  | (b) \<^vec>y \<WHERE> \<^vec>B \<^vec>y | \<dots> \<close> states a rule for case
   splitting into separate subgoals, such that each case involves new
   parameters and premises. After the proof is finished, the resulting rule
   may be used directly with the @{method cases} proof method
@@ -1433,24 +1414,24 @@
   Isar language element as follows:
 
   \begin{matharray}{l}
-    @{command "consider"}~@{text "(a) \<^vec>x \<WHERE> \<^vec>A \<^vec>x | (b) \<^vec>y \<WHERE> \<^vec>B \<^vec>y | \<dots> \<equiv>"} \\[1ex]
-    \quad @{command "have"}~@{text "[case_names a b \<dots>]: thesis"} \\
-    \qquad @{text "\<IF> a [Pure.intro?]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis"} \\
-    \qquad @{text "\<AND> b [Pure.intro?]: \<And>\<^vec>y. \<^vec>B \<^vec>y \<Longrightarrow> thesis"} \\
-    \qquad @{text "\<AND> \<dots>"} \\
-    \qquad @{text "\<FOR> thesis"} \\
-    \qquad @{command "apply"}~@{text "(insert a b \<dots>)"} \\
+    @{command "consider"}~\<open>(a) \<^vec>x \<WHERE> \<^vec>A \<^vec>x | (b) \<^vec>y \<WHERE> \<^vec>B \<^vec>y | \<dots> \<equiv>\<close> \\[1ex]
+    \quad @{command "have"}~\<open>[case_names a b \<dots>]: thesis\<close> \\
+    \qquad \<open>\<IF> a [Pure.intro?]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis\<close> \\
+    \qquad \<open>\<AND> b [Pure.intro?]: \<And>\<^vec>y. \<^vec>B \<^vec>y \<Longrightarrow> thesis\<close> \\
+    \qquad \<open>\<AND> \<dots>\<close> \\
+    \qquad \<open>\<FOR> thesis\<close> \\
+    \qquad @{command "apply"}~\<open>(insert a b \<dots>)\<close> \\
   \end{matharray}
 
   See also \secref{sec:goals} for @{keyword "obtains"} in toplevel goal
   statements, as well as @{command print_statement} to print existing rules
   in a similar format.
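+
+  A small hypothetical sketch (with placeholder propositions \<open>P\<close>,
+  \<open>Q\<close>, \<open>R\<close>):
+
+    assume pq: "P \<or> Q"
+    then consider (a) "P" | (b) "Q" by blast
+    then have "R"
+    proof cases
+      case a
+      then show ?thesis sorry
+    next
+      case b
+      then show ?thesis sorry
+    qed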
 
-  \<^descr> @{command obtain}~@{text "\<^vec>x \<WHERE> \<^vec>A \<^vec>x"}
+  \<^descr> @{command obtain}~\<open>\<^vec>x \<WHERE> \<^vec>A \<^vec>x\<close>
   states a generalized elimination rule with exactly one case. After the
   proof is finished, it is activated for the subsequent proof text: the
-  context is augmented via @{command fix}~@{text "\<^vec>x"} @{command
-  assume}~@{text "\<^vec>A \<^vec>x"}, with special provisions to export
+  context is augmented via @{command fix}~\<open>\<^vec>x\<close> @{command
+  assume}~\<open>\<^vec>A \<^vec>x\<close>, with special provisions to export
   later results by discharging these assumptions again.
 
   Note that according to the parameter scopes within the elimination rule,
@@ -1464,13 +1445,13 @@
   @{command assume}:
 
   \begin{matharray}{l}
-    @{command "obtain"}~@{text "\<^vec>x \<WHERE> a: \<^vec>A \<^vec>x  \<langle>proof\<rangle> \<equiv>"} \\[1ex]
-    \quad @{command "have"}~@{text "thesis"} \\
-    \qquad @{text "\<IF> that [Pure.intro?]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis"} \\
-    \qquad @{text "\<FOR> thesis"} \\
-    \qquad @{command "apply"}~@{text "(insert that)"} \\
-    \qquad @{text "\<langle>proof\<rangle>"} \\
-    \quad @{command "fix"}~@{text "\<^vec>x"}~@{command "assume"}@{text "\<^sup>* a: \<^vec>A \<^vec>x"} \\
+    @{command "obtain"}~\<open>\<^vec>x \<WHERE> a: \<^vec>A \<^vec>x  \<langle>proof\<rangle> \<equiv>\<close> \\[1ex]
+    \quad @{command "have"}~\<open>thesis\<close> \\
+    \qquad \<open>\<IF> that [Pure.intro?]: \<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis\<close> \\
+    \qquad \<open>\<FOR> thesis\<close> \\
+    \qquad @{command "apply"}~\<open>(insert that)\<close> \\
+    \qquad \<open>\<langle>proof\<rangle>\<close> \\
+    \quad @{command "fix"}~\<open>\<^vec>x\<close>~@{command "assume"}\<open>\<^sup>* a: \<^vec>A \<^vec>x\<close> \\
   \end{matharray}
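+
+  A typical instance, for a hypothetical predicate \<open>P\<close>:
+
+    assume "\<exists>x. P x"
+    then obtain x where "P x" ..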
 
   \<^descr> @{command guess} is similar to @{command obtain}, but it derives the
@@ -1478,11 +1459,10 @@
   proof. Thus it can considerably obscure the proof: it is classified as
   \<^emph>\<open>improper\<close>.
 
-  A proof with @{command guess} starts with a fixed goal @{text thesis}. The
-  subsequent refinement steps may turn this to anything of the form @{text
-  "\<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis"}, but without splitting into new
+  A proof with @{command guess} starts with a fixed goal \<open>thesis\<close>. The
+  subsequent refinement steps may turn this into anything of the form \<open>\<And>\<^vec>x. \<^vec>A \<^vec>x \<Longrightarrow> thesis\<close>, but without splitting into new
   subgoals. The final goal state is then used as a reduction rule for the
-  obtain pattern described above. Obtained parameters @{text "\<^vec>x"} are
+  obtain pattern described above. Obtained parameters \<open>\<^vec>x\<close> are
   marked as internal by default, and thus inaccessible in the proof text.
   The variable names and type constraints given as arguments for @{command
   "guess"} specify a prefix of accessible parameters.
--- a/src/Doc/Isar_Ref/Proof_Script.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Proof_Script.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -22,13 +22,13 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "supply"}@{text "\<^sup>*"} & : & @{text "proof(prove) \<rightarrow> proof(prove)"} \\
-    @{command_def "apply"}@{text "\<^sup>*"} & : & @{text "proof(prove) \<rightarrow> proof(prove)"} \\
-    @{command_def "apply_end"}@{text "\<^sup>*"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "done"}@{text "\<^sup>*"} & : & @{text "proof(prove) \<rightarrow> proof(state) | local_theory | theory"} \\
-    @{command_def "defer"}@{text "\<^sup>*"} & : & @{text "proof \<rightarrow> proof"} \\
-    @{command_def "prefer"}@{text "\<^sup>*"} & : & @{text "proof \<rightarrow> proof"} \\
-    @{command_def "back"}@{text "\<^sup>*"} & : & @{text "proof \<rightarrow> proof"} \\
+    @{command_def "supply"}\<open>\<^sup>*\<close> & : & \<open>proof(prove) \<rightarrow> proof(prove)\<close> \\
+    @{command_def "apply"}\<open>\<^sup>*\<close> & : & \<open>proof(prove) \<rightarrow> proof(prove)\<close> \\
+    @{command_def "apply_end"}\<open>\<^sup>*\<close> & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "done"}\<open>\<^sup>*\<close> & : & \<open>proof(prove) \<rightarrow> proof(state) | local_theory | theory\<close> \\
+    @{command_def "defer"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow> proof\<close> \\
+    @{command_def "prefer"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow> proof\<close> \\
+    @{command_def "back"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow> proof\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -45,22 +45,20 @@
   refinement: it is similar to @{command "note"}, but it operates in
   backwards mode and does not have any impact on chained facts.
 
-  \<^descr> @{command "apply"}~@{text m} applies proof method @{text m} in
-  initial position, but unlike @{command "proof"} it retains ``@{text
-  "proof(prove)"}'' mode.  Thus consecutive method applications may be
+  \<^descr> @{command "apply"}~\<open>m\<close> applies proof method \<open>m\<close> in
+  initial position, but unlike @{command "proof"} it retains ``\<open>proof(prove)\<close>'' mode.  Thus consecutive method applications may be
   given just as in tactic scripts.
 
-  Facts are passed to @{text m} as indicated by the goal's
+  Facts are passed to \<open>m\<close> as indicated by the goal's
   forward-chain mode, and are \<^emph>\<open>consumed\<close> afterwards.  Thus any
   further @{command "apply"} command would always work in a purely
   backward manner.
 
-  \<^descr> @{command "apply_end"}~@{text "m"} applies proof method @{text
-  m} as if in terminal position.  Basically, this simulates a
+  \<^descr> @{command "apply_end"}~\<open>m\<close> applies proof method \<open>m\<close> as if in terminal position.  Basically, this simulates a
   multi-step tactic script for @{command "qed"}, but may be given
   anywhere within the proof body.
 
-  No facts are passed to @{text m} here.  Furthermore, the static
+  No facts are passed to \<open>m\<close> here.  Furthermore, the static
   context is that of the enclosing goal (as for actual @{command
   "qed"}).  Thus the proof method may not refer to any assumptions
   introduced in the current body, for example.
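+
+  For illustration, a short script using @{command "apply"} and
+  @{command "done"}:
+
+    lemma "A \<and> B \<longrightarrow> B \<and> A"
+      apply (rule impI)
+      apply (erule conjE)
+      apply (rule conjI)
+       apply assumption
+      apply assumption
+      done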
@@ -70,10 +68,10 @@
   structured proof commands (e.g.\ ``@{command "."}'' or @{command
   "sorry"}) may be used to conclude proof scripts as well.
 
-  \<^descr> @{command "defer"}~@{text n} and @{command "prefer"}~@{text n}
+  \<^descr> @{command "defer"}~\<open>n\<close> and @{command "prefer"}~\<open>n\<close>
   shuffle the list of pending goals: @{command "defer"} puts off
-  sub-goal @{text n} to the end of the list (@{text "n = 1"} by
-  default), while @{command "prefer"} brings sub-goal @{text n} to the
+  sub-goal \<open>n\<close> to the end of the list (\<open>n = 1\<close> by
+  default), while @{command "prefer"} brings sub-goal \<open>n\<close> to the
   front.
 
   \<^descr> @{command "back"} does back-tracking over the result sequence
@@ -88,7 +86,7 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "subgoal"}@{text "\<^sup>*"} & : & @{text "proof \<rightarrow> proof"} \\
+    @{command_def "subgoal"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow> proof\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -110,20 +108,20 @@
   is not given explicitly.
 
   Goal parameters may be specified separately, in order to allow referring
-  to them in the proof body: ``@{command subgoal}~@{keyword "for"}~@{text "x
-  y z"}'' names a \<^emph>\<open>prefix\<close>, and ``@{command subgoal}~@{keyword
-  "for"}~@{text "\<dots> x y z"}'' names a \<^emph>\<open>suffix\<close> of goal parameters. The
+  to them in the proof body: ``@{command subgoal}~@{keyword "for"}~\<open>x
+  y z\<close>'' names a \<^emph>\<open>prefix\<close>, and ``@{command subgoal}~@{keyword
+  "for"}~\<open>\<dots> x y z\<close>'' names a \<^emph>\<open>suffix\<close> of goal parameters. The
   latter uses a literal @{verbatim "\<dots>"} symbol as notation. Parameter
   positions may be skipped via dummies (underscore). Unspecified names
   remain internal, and thus inaccessible in the proof text.
 
-  ``@{command subgoal}~@{keyword "premises"}~@{text prems}'' indicates that
+  ``@{command subgoal}~@{keyword "premises"}~\<open>prems\<close>'' indicates that
   goal premises should be turned into assumptions of the context (otherwise
   the remaining conclusion is a Pure implication). The fact name and
-  attributes are optional; the particular name ``@{text prems}'' is a common
+  attributes are optional; the particular name ``\<open>prems\<close>'' is a common
   convention for the premises of an arbitrary goal context in proof scripts.
 
-  ``@{command subgoal}~@{text result}'' indicates a fact name for the result
+  ``@{command subgoal}~\<open>result\<close>'' indicates a fact name for the result
   of a proven subgoal. Thus it may be re-used in further reasoning, similar
   to the result of @{command show} in structured Isar proofs.
 
@@ -202,24 +200,24 @@
   to be solved by unification with certain parts of the subgoal.
 
   Note that the tactic emulation proof methods in Isabelle/Isar are
-  consistently named @{text foo_tac}.  Note also that variable names
+  consistently named \<open>foo_tac\<close>.  Note also that variable names
   occurring on left hand sides of instantiations must be preceded by a
   question mark if they coincide with a keyword or contain dots.  This
   is consistent with the attribute @{attribute "where"} (see
   \secref{sec:pure-meth-att}).
 
   \begin{matharray}{rcl}
-    @{method_def rule_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def erule_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def drule_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def frule_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def cut_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def thin_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def subgoal_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def rename_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def rotate_tac}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def tactic}@{text "\<^sup>*"} & : & @{text method} \\
-    @{method_def raw_tactic}@{text "\<^sup>*"} & : & @{text method} \\
+    @{method_def rule_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def erule_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def drule_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def frule_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def cut_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def thin_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def subgoal_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def rename_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def rotate_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def tactic}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
+    @{method_def raw_tactic}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -253,27 +251,27 @@
   in contrast to the regular method @{method insert} which inserts
   closed rule statements.
 
-  \<^descr> @{method thin_tac}~@{text \<phi>} deletes the specified premise
-  from a subgoal.  Note that @{text \<phi>} may contain schematic
+  \<^descr> @{method thin_tac}~\<open>\<phi>\<close> deletes the specified premise
+  from a subgoal.  Note that \<open>\<phi>\<close> may contain schematic
   variables, to abbreviate the intended proposition; the first
   matching subgoal premise will be deleted.  Removing useless premises
   from a subgoal increases its readability and can make search tactics
   run faster.
 
-  \<^descr> @{method subgoal_tac}~@{text "\<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"} adds the propositions
-  @{text "\<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"} as local premises to a subgoal, and poses the same
+  \<^descr> @{method subgoal_tac}~\<open>\<phi>\<^sub>1 \<dots> \<phi>\<^sub>n\<close> adds the propositions
+  \<open>\<phi>\<^sub>1 \<dots> \<phi>\<^sub>n\<close> as local premises to a subgoal, and poses the same
   as new subgoals (in the original context).
 
-  \<^descr> @{method rename_tac}~@{text "x\<^sub>1 \<dots> x\<^sub>n"} renames parameters of a
-  goal according to the list @{text "x\<^sub>1, \<dots>, x\<^sub>n"}, which refers to the
+  \<^descr> @{method rename_tac}~\<open>x\<^sub>1 \<dots> x\<^sub>n\<close> renames parameters of a
+  goal according to the list \<open>x\<^sub>1, \<dots>, x\<^sub>n\<close>, which refers to the
   \<^emph>\<open>suffix\<close> of variables.
 
-  \<^descr> @{method rotate_tac}~@{text n} rotates the premises of a
-  subgoal by @{text n} positions: from right to left if @{text n} is
-  positive, and from left to right if @{text n} is negative; the
+  \<^descr> @{method rotate_tac}~\<open>n\<close> rotates the premises of a
+  subgoal by \<open>n\<close> positions: from right to left if \<open>n\<close> is
+  positive, and from left to right if \<open>n\<close> is negative; the
   default value is 1.
 
-  \<^descr> @{method tactic}~@{text "text"} produces a proof method from
+  \<^descr> @{method tactic}~\<open>text\<close> produces a proof method from
   any ML text of type @{ML_type tactic}.  Apart from the usual ML
   environment and the current proof context, the ML code may refer to
   the locally bound values @{ML_text facts}, which indicates any
--- a/src/Doc/Isar_Ref/Quick_Reference.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Quick_Reference.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -10,39 +10,39 @@
 
 text \<open>
   \begin{tabular}{ll}
-    @{command "fix"}~@{text x} & augment context by @{text "\<And>x. \<box>"} \\
-    @{command "assume"}~@{text "a: \<phi>"} & augment context by @{text "\<phi> \<Longrightarrow> \<box>"} \\
+    @{command "fix"}~\<open>x\<close> & augment context by \<open>\<And>x. \<box>\<close> \\
+    @{command "assume"}~\<open>a: \<phi>\<close> & augment context by \<open>\<phi> \<Longrightarrow> \<box>\<close> \\
     @{command "then"} & indicate forward chaining of facts \\
-    @{command "have"}~@{text "a: \<phi>"} & prove local result \\
-    @{command "show"}~@{text "a: \<phi>"} & prove local result, refining some goal \\
-    @{command "using"}~@{text a} & indicate use of additional facts \\
-    @{command "unfolding"}~@{text a} & unfold definitional equations \\
-    @{command "proof"}~@{text "m\<^sub>1"}~\dots~@{command "qed"}~@{text "m\<^sub>2"} & indicate proof structure and refinements \\
-    @{command "{"}~@{text "\<dots>"}~@{command "}"} & indicate explicit blocks \\
+    @{command "have"}~\<open>a: \<phi>\<close> & prove local result \\
+    @{command "show"}~\<open>a: \<phi>\<close> & prove local result, refining some goal \\
+    @{command "using"}~\<open>a\<close> & indicate use of additional facts \\
+    @{command "unfolding"}~\<open>a\<close> & unfold definitional equations \\
+    @{command "proof"}~\<open>m\<^sub>1\<close>~\dots~@{command "qed"}~\<open>m\<^sub>2\<close> & indicate proof structure and refinements \\
+    @{command "{"}~\<open>\<dots>\<close>~@{command "}"} & indicate explicit blocks \\
     @{command "next"} & switch blocks \\
-    @{command "note"}~@{text "a = b"} & reconsider facts \\
-    @{command "let"}~@{text "p = t"} & abbreviate terms by higher-order matching \\
-    @{command "write"}~@{text "c  (mx)"} & declare local mixfix syntax \\
+    @{command "note"}~\<open>a = b\<close> & reconsider facts \\
+    @{command "let"}~\<open>p = t\<close> & abbreviate terms by higher-order matching \\
+    @{command "write"}~\<open>c  (mx)\<close> & declare local mixfix syntax \\
   \end{tabular}
 
   \<^medskip>
 
   \begin{tabular}{rcl}
-    @{text "proof"} & = & @{text "prfx\<^sup>*"}~@{command "proof"}~@{text "method\<^sup>? stmt\<^sup>*"}~@{command "qed"}~@{text "method\<^sup>?"} \\
-    & @{text "|"} & @{text "prfx\<^sup>*"}~@{command "done"} \\
-    @{text prfx} & = & @{command "apply"}~@{text method} \\
-    & @{text "|"} & @{command "using"}~@{text "facts"} \\
-    & @{text "|"} & @{command "unfolding"}~@{text "facts"} \\
-    @{text stmt} & = & @{command "{"}~@{text "stmt\<^sup>*"}~@{command "}"} \\
-    & @{text "|"} & @{command "next"} \\
-    & @{text "|"} & @{command "note"}~@{text "name = facts"} \\
-    & @{text "|"} & @{command "let"}~@{text "term = term"} \\
-    & @{text "|"} & @{command "write"}~@{text "name (mixfix)"} \\
-    & @{text "|"} & @{command "fix"}~@{text "var\<^sup>+"} \\
-    & @{text "|"} & @{command "assume"}~@{text "name: props"} \\
-    & @{text "|"} & @{command "then"}@{text "\<^sup>?"}~@{text goal} \\
-    @{text goal} & = & @{command "have"}~@{text "name: props proof"} \\
-    & @{text "|"} & @{command "show"}~@{text "name: props proof"} \\
+    \<open>proof\<close> & = & \<open>prfx\<^sup>*\<close>~@{command "proof"}~\<open>method\<^sup>? stmt\<^sup>*\<close>~@{command "qed"}~\<open>method\<^sup>?\<close> \\
+    & \<open>|\<close> & \<open>prfx\<^sup>*\<close>~@{command "done"} \\
+    \<open>prfx\<close> & = & @{command "apply"}~\<open>method\<close> \\
+    & \<open>|\<close> & @{command "using"}~\<open>facts\<close> \\
+    & \<open>|\<close> & @{command "unfolding"}~\<open>facts\<close> \\
+    \<open>stmt\<close> & = & @{command "{"}~\<open>stmt\<^sup>*\<close>~@{command "}"} \\
+    & \<open>|\<close> & @{command "next"} \\
+    & \<open>|\<close> & @{command "note"}~\<open>name = facts\<close> \\
+    & \<open>|\<close> & @{command "let"}~\<open>term = term\<close> \\
+    & \<open>|\<close> & @{command "write"}~\<open>name (mixfix)\<close> \\
+    & \<open>|\<close> & @{command "fix"}~\<open>var\<^sup>+\<close> \\
+    & \<open>|\<close> & @{command "assume"}~\<open>name: props\<close> \\
+    & \<open>|\<close> & @{command "then"}\<open>\<^sup>?\<close>~\<open>goal\<close> \\
+    \<open>goal\<close> & = & @{command "have"}~\<open>name: props proof\<close> \\
+    & \<open>|\<close> & @{command "show"}~\<open>name: props proof\<close> \\
   \end{tabular}
 \<close>
 
@@ -51,17 +51,17 @@
 
 text \<open>
   \begin{tabular}{rcl}
-    @{command "by"}~@{text "m\<^sub>1 m\<^sub>2"} & @{text "\<equiv>"} &
-      @{command "proof"}~@{text "m\<^sub>1"}~@{command "qed"}~@{text "m\<^sub>2"} \\
-    @{command ".."} & @{text "\<equiv>"} & @{command "by"}~@{text standard} \\
-    @{command "."} & @{text "\<equiv>"} & @{command "by"}~@{text this} \\
-    @{command "hence"} & @{text "\<equiv>"} & @{command "then"}~@{command "have"} \\
-    @{command "thus"} & @{text "\<equiv>"} & @{command "then"}~@{command "show"} \\
-    @{command "from"}~@{text a} & @{text "\<equiv>"} & @{command "note"}~@{text a}~@{command "then"} \\
-    @{command "with"}~@{text a} & @{text "\<equiv>"} & @{command "from"}~@{text "a \<AND> this"} \\
-    @{command "from"}~@{text this} & @{text "\<equiv>"} & @{command "then"} \\
-    @{command "from"}~@{text this}~@{command "have"} & @{text "\<equiv>"} & @{command "hence"} \\
-    @{command "from"}~@{text this}~@{command "show"} & @{text "\<equiv>"} & @{command "thus"} \\
+    @{command "by"}~\<open>m\<^sub>1 m\<^sub>2\<close> & \<open>\<equiv>\<close> &
+      @{command "proof"}~\<open>m\<^sub>1\<close>~@{command "qed"}~\<open>m\<^sub>2\<close> \\
+    @{command ".."} & \<open>\<equiv>\<close> & @{command "by"}~\<open>standard\<close> \\
+    @{command "."} & \<open>\<equiv>\<close> & @{command "by"}~\<open>this\<close> \\
+    @{command "hence"} & \<open>\<equiv>\<close> & @{command "then"}~@{command "have"} \\
+    @{command "thus"} & \<open>\<equiv>\<close> & @{command "then"}~@{command "show"} \\
+    @{command "from"}~\<open>a\<close> & \<open>\<equiv>\<close> & @{command "note"}~\<open>a\<close>~@{command "then"} \\
+    @{command "with"}~\<open>a\<close> & \<open>\<equiv>\<close> & @{command "from"}~\<open>a \<AND> this\<close> \\
+    @{command "from"}~\<open>this\<close> & \<open>\<equiv>\<close> & @{command "then"} \\
+    @{command "from"}~\<open>this\<close>~@{command "have"} & \<open>\<equiv>\<close> & @{command "hence"} \\
+    @{command "from"}~\<open>this\<close>~@{command "show"} & \<open>\<equiv>\<close> & @{command "thus"} \\
   \end{tabular}
 \<close>
 
@@ -70,26 +70,26 @@
 
 text \<open>
   \begin{tabular}{rcl}
-    @{command "also"}@{text "\<^sub>0"} & @{text "\<approx>"} &
-      @{command "note"}~@{text "calculation = this"} \\
-    @{command "also"}@{text "\<^sub>n\<^sub>+\<^sub>1"} & @{text "\<approx>"} &
-      @{command "note"}~@{text "calculation = trans [OF calculation this]"} \\
-    @{command "finally"} & @{text "\<approx>"} &
-      @{command "also"}~@{command "from"}~@{text calculation} \\[0.5ex]
-    @{command "moreover"} & @{text "\<approx>"} &
-      @{command "note"}~@{text "calculation = calculation this"} \\
-    @{command "ultimately"} & @{text "\<approx>"} &
-      @{command "moreover"}~@{command "from"}~@{text calculation} \\[0.5ex]
-    @{command "presume"}~@{text "a: \<phi>"} & @{text "\<approx>"} &
-      @{command "assume"}~@{text "a: \<phi>"} \\
-    @{command "def"}~@{text "a: x \<equiv> t"} & @{text "\<approx>"} &
-      @{command "fix"}~@{text x}~@{command "assume"}~@{text "a: x \<equiv> t"} \\
-    @{command "obtain"}~@{text "x \<WHERE> a: \<phi>"} & @{text "\<approx>"} &
-      @{text "\<dots>"}~@{command "fix"}~@{text x}~@{command "assume"}~@{text "a: \<phi>"} \\
-    @{command "case"}~@{text c} & @{text "\<approx>"} &
-      @{command "fix"}~@{text x}~@{command "assume"}~@{text "c: \<phi>"} \\
-    @{command "sorry"} & @{text "\<approx>"} &
-      @{command "by"}~@{text cheating} \\
+    @{command "also"}\<open>\<^sub>0\<close> & \<open>\<approx>\<close> &
+      @{command "note"}~\<open>calculation = this\<close> \\
+    @{command "also"}\<open>\<^sub>n\<^sub>+\<^sub>1\<close> & \<open>\<approx>\<close> &
+      @{command "note"}~\<open>calculation = trans [OF calculation this]\<close> \\
+    @{command "finally"} & \<open>\<approx>\<close> &
+      @{command "also"}~@{command "from"}~\<open>calculation\<close> \\[0.5ex]
+    @{command "moreover"} & \<open>\<approx>\<close> &
+      @{command "note"}~\<open>calculation = calculation this\<close> \\
+    @{command "ultimately"} & \<open>\<approx>\<close> &
+      @{command "moreover"}~@{command "from"}~\<open>calculation\<close> \\[0.5ex]
+    @{command "presume"}~\<open>a: \<phi>\<close> & \<open>\<approx>\<close> &
+      @{command "assume"}~\<open>a: \<phi>\<close> \\
+    @{command "def"}~\<open>a: x \<equiv> t\<close> & \<open>\<approx>\<close> &
+      @{command "fix"}~\<open>x\<close>~@{command "assume"}~\<open>a: x \<equiv> t\<close> \\
+    @{command "obtain"}~\<open>x \<WHERE> a: \<phi>\<close> & \<open>\<approx>\<close> &
+      \<open>\<dots>\<close>~@{command "fix"}~\<open>x\<close>~@{command "assume"}~\<open>a: \<phi>\<close> \\
+    @{command "case"}~\<open>c\<close> & \<open>\<approx>\<close> &
+      @{command "fix"}~\<open>x\<close>~@{command "assume"}~\<open>c: \<phi>\<close> \\
+    @{command "sorry"} & \<open>\<approx>\<close> &
+      @{command "by"}~\<open>cheating\<close> \\
   \end{tabular}
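+
+  For example (facts are hypothetical placeholders):
+
+    have "a = b" sorry
+    also have "b = c" sorry
+    finally have "a = c" .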
 \<close>
 
@@ -100,10 +100,10 @@
   \begin{tabular}{ll}
     @{command "print_state"} & print proof state \\
     @{command "print_statement"} & print fact in long statement form \\
-    @{command "thm"}~@{text a} & print fact \\
-    @{command "prop"}~@{text \<phi>} & print proposition \\
-    @{command "term"}~@{text t} & print term \\
-    @{command "typ"}~@{text \<tau>} & print type \\
+    @{command "thm"}~\<open>a\<close> & print fact \\
+    @{command "prop"}~\<open>\<phi>\<close> & print proposition \\
+    @{command "term"}~\<open>t\<close> & print term \\
+    @{command "typ"}~\<open>\<tau>\<close> & print type \\
   \end{tabular}
 \<close>
 
@@ -115,18 +115,18 @@
     \multicolumn{2}{l}{\<^bold>\<open>Single steps (forward-chaining facts)\<close>} \\[0.5ex]
     @{method assumption} & apply some assumption \\
     @{method this} & apply current facts \\
-    @{method rule}~@{text a} & apply some rule  \\
+    @{method rule}~\<open>a\<close> & apply some rule  \\
     @{method standard} & apply standard rule (default for @{command "proof"}) \\
-    @{method contradiction} & apply @{text "\<not>"} elimination rule (any order) \\
-    @{method cases}~@{text t} & case analysis (provides cases) \\
-    @{method induct}~@{text x} & proof by induction (provides cases) \\[2ex]
+    @{method contradiction} & apply \<open>\<not>\<close> elimination rule (any order) \\
+    @{method cases}~\<open>t\<close> & case analysis (provides cases) \\
+    @{method induct}~\<open>x\<close> & proof by induction (provides cases) \\[2ex]
 
     \multicolumn{2}{l}{\<^bold>\<open>Repeated steps (inserting facts)\<close>} \\[0.5ex]
     @{method "-"} & no rules \\
-    @{method intro}~@{text a} & introduction rules \\
+    @{method intro}~\<open>a\<close> & introduction rules \\
     @{method intro_classes} & class introduction rules \\
-    @{method elim}~@{text a} & elimination rules \\
-    @{method unfold}~@{text a} & definitional rewrite rules \\[2ex]
+    @{method elim}~\<open>a\<close> & elimination rules \\
+    @{method unfold}~\<open>a\<close> & definitional rewrite rules \\[2ex]
 
     \multicolumn{2}{l}{\<^bold>\<open>Automated proof tools (inserting facts)\<close>} \\[0.5ex]
     @{method iprover} & intuitionistic proof search \\
@@ -143,11 +143,11 @@
 text \<open>
   \begin{tabular}{ll}
     \multicolumn{2}{l}{\<^bold>\<open>Rules\<close>} \\[0.5ex]
-    @{attribute OF}~@{text a} & rule resolved with facts (skipping ``@{text _}'') \\
-    @{attribute of}~@{text t} & rule instantiated with terms (skipping ``@{text _}'') \\
-    @{attribute "where"}~@{text "x = t"} & rule instantiated with terms, by variable name \\
+    @{attribute OF}~\<open>a\<close> & rule resolved with facts (skipping ``\<open>_\<close>'') \\
+    @{attribute of}~\<open>t\<close> & rule instantiated with terms (skipping ``\<open>_\<close>'') \\
+    @{attribute "where"}~\<open>x = t\<close> & rule instantiated with terms, by variable name \\
     @{attribute symmetric} & resolution with symmetry rule \\
-    @{attribute THEN}~@{text b} & resolution with another rule \\
+    @{attribute THEN}~\<open>b\<close> & resolution with another rule \\
     @{attribute rule_format} & result put into standard rule format \\
     @{attribute elim_format} & destruct rule turned into elimination rule format \\[1ex]
 
@@ -169,26 +169,26 @@
       & @{method rule} & @{method iprover} & @{method blast} & @{method simp} & @{method auto} \\
       &                &                   & @{method fast} & @{method simp_all} & @{method force} \\
     \hline
-    @{attribute Pure.elim}@{text "!"} @{attribute Pure.intro}@{text "!"}
-      & @{text "\<times>"}    & @{text "\<times>"} \\
+    @{attribute Pure.elim}\<open>!\<close> @{attribute Pure.intro}\<open>!\<close>
+      & \<open>\<times>\<close>    & \<open>\<times>\<close> \\
     @{attribute Pure.elim} @{attribute Pure.intro}
-      & @{text "\<times>"}    & @{text "\<times>"} \\
-    @{attribute elim}@{text "!"} @{attribute intro}@{text "!"}
-      & @{text "\<times>"}    &                    & @{text "\<times>"}          &                     & @{text "\<times>"} \\
+      & \<open>\<times>\<close>    & \<open>\<times>\<close> \\
+    @{attribute elim}\<open>!\<close> @{attribute intro}\<open>!\<close>
+      & \<open>\<times>\<close>    &                    & \<open>\<times>\<close>          &                     & \<open>\<times>\<close> \\
     @{attribute elim} @{attribute intro}
-      & @{text "\<times>"}    &                    & @{text "\<times>"}          &                     & @{text "\<times>"} \\
+      & \<open>\<times>\<close>    &                    & \<open>\<times>\<close>          &                     & \<open>\<times>\<close> \\
     @{attribute iff}
-      & @{text "\<times>"}    &                    & @{text "\<times>"}          & @{text "\<times>"}         & @{text "\<times>"} \\
-    @{attribute iff}@{text "?"}
-      & @{text "\<times>"} \\
-    @{attribute elim}@{text "?"} @{attribute intro}@{text "?"}
-      & @{text "\<times>"} \\
+      & \<open>\<times>\<close>    &                    & \<open>\<times>\<close>          & \<open>\<times>\<close>         & \<open>\<times>\<close> \\
+    @{attribute iff}\<open>?\<close>
+      & \<open>\<times>\<close> \\
+    @{attribute elim}\<open>?\<close> @{attribute intro}\<open>?\<close>
+      & \<open>\<times>\<close> \\
     @{attribute simp}
-      &                &                    &                      & @{text "\<times>"}         & @{text "\<times>"} \\
+      &                &                    &                      & \<open>\<times>\<close>         & \<open>\<times>\<close> \\
     @{attribute cong}
-      &                &                    &                      & @{text "\<times>"}         & @{text "\<times>"} \\
+      &                &                    &                      & \<open>\<times>\<close>         & \<open>\<times>\<close> \\
     @{attribute split}
-      &                &                    &                      & @{text "\<times>"}         & @{text "\<times>"} \\
+      &                &                    &                      & \<open>\<times>\<close>         & \<open>\<times>\<close> \\
   \end{tabular}
 \<close>
 
@@ -199,11 +199,11 @@
 
 text \<open>
   \begin{tabular}{ll}
-    @{command "apply"}~@{text m} & apply proof method at initial position \\
-    @{command "apply_end"}~@{text m} & apply proof method near terminal position \\
+    @{command "apply"}~\<open>m\<close> & apply proof method at initial position \\
+    @{command "apply_end"}~\<open>m\<close> & apply proof method near terminal position \\
     @{command "done"} & complete proof \\
-    @{command "defer"}~@{text n} & move subgoal to end \\
-    @{command "prefer"}~@{text n} & move subgoal to beginning \\
+    @{command "defer"}~\<open>n\<close> & move subgoal to end \\
+    @{command "prefer"}~\<open>n\<close> & move subgoal to beginning \\
     @{command "back"} & backtrack last command \\
   \end{tabular}
 \<close>
@@ -213,19 +213,19 @@
 
 text \<open>
   \begin{tabular}{ll}
-    @{method rule_tac}~@{text insts} & resolution (with instantiation) \\
-    @{method erule_tac}~@{text insts} & elim-resolution (with instantiation) \\
-    @{method drule_tac}~@{text insts} & destruct-resolution (with instantiation) \\
-    @{method frule_tac}~@{text insts} & forward-resolution (with instantiation) \\
-    @{method cut_tac}~@{text insts} & insert facts (with instantiation) \\
-    @{method thin_tac}~@{text \<phi>} & delete assumptions \\
-    @{method subgoal_tac}~@{text \<phi>} & new claims \\
-    @{method rename_tac}~@{text x} & rename innermost goal parameters \\
-    @{method rotate_tac}~@{text n} & rotate assumptions of goal \\
-    @{method tactic}~@{text "text"} & arbitrary ML tactic \\
-    @{method case_tac}~@{text t} & exhaustion (datatypes) \\
-    @{method induct_tac}~@{text x} & induction (datatypes) \\
-    @{method ind_cases}~@{text t} & exhaustion + simplification (inductive predicates) \\
+    @{method rule_tac}~\<open>insts\<close> & resolution (with instantiation) \\
+    @{method erule_tac}~\<open>insts\<close> & elim-resolution (with instantiation) \\
+    @{method drule_tac}~\<open>insts\<close> & destruct-resolution (with instantiation) \\
+    @{method frule_tac}~\<open>insts\<close> & forward-resolution (with instantiation) \\
+    @{method cut_tac}~\<open>insts\<close> & insert facts (with instantiation) \\
+    @{method thin_tac}~\<open>\<phi>\<close> & delete assumptions \\
+    @{method subgoal_tac}~\<open>\<phi>\<close> & new claims \\
+    @{method rename_tac}~\<open>x\<close> & rename innermost goal parameters \\
+    @{method rotate_tac}~\<open>n\<close> & rotate assumptions of goal \\
+    @{method tactic}~\<open>text\<close> & arbitrary ML tactic \\
+    @{method case_tac}~\<open>t\<close> & exhaustion (datatypes) \\
+    @{method induct_tac}~\<open>x\<close> & induction (datatypes) \\
+    @{method ind_cases}~\<open>t\<close> & exhaustion + simplification (inductive predicates) \\
   \end{tabular}
 \<close>
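
As a small illustration of these emulations (a sketch, assuming Isabelle/HOL):

    lemma "\<exists>x::nat. x + 1 = 2"
      apply (rule_tac x = "1" in exI)
      apply simp
      done
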
 
--- a/src/Doc/Isar_Ref/Spec.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Spec.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -23,9 +23,9 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "theory"} & : & @{text "toplevel \<rightarrow> theory"} \\
-    @{command_def (global) "end"} & : & @{text "theory \<rightarrow> toplevel"} \\
-    @{command_def "thy_deps"}@{text "\<^sup>*"} & : & @{text "theory \<rightarrow>"} \\
+    @{command_def "theory"} & : & \<open>toplevel \<rightarrow> theory\<close> \\
+    @{command_def (global) "end"} & : & \<open>theory \<rightarrow> toplevel\<close> \\
+    @{command_def "thy_deps"}\<open>\<^sup>*\<close> & : & \<open>theory \<rightarrow>\<close> \\
   \end{matharray}
 
   Isabelle/Isar theories are defined via theory files, which consist of an
@@ -72,9 +72,9 @@
     thy_bounds: @{syntax name} | '(' (@{syntax name} + @'|') ')'
   \<close>}
 
-  \<^descr> @{command "theory"}~@{text "A \<IMPORTS> B\<^sub>1 \<dots> B\<^sub>n \<BEGIN>"}
-  starts a new theory @{text A} based on the merge of existing
-  theories @{text "B\<^sub>1 \<dots> B\<^sub>n"}.  Due to the possibility to import more
+  \<^descr> @{command "theory"}~\<open>A \<IMPORTS> B\<^sub>1 \<dots> B\<^sub>n \<BEGIN>\<close>
+  starts a new theory \<open>A\<close> based on the merge of existing
+  theories \<open>B\<^sub>1 \<dots> B\<^sub>n\<close>.  Since it is possible to import more
   than one ancestor, the resulting theory structure of an Isabelle
   session forms a directed acyclic graph (DAG).  Isabelle takes care
   that sources contributing to the development graph are always
@@ -94,13 +94,13 @@
   keywords need to be classified according to their structural role in
   the formal text.  Examples may be seen in Isabelle/HOL sources
  themselves, such as @{keyword "keywords"}~@{verbatim \<open>"typedef"\<close>}
-  @{text ":: thy_goal"} or @{keyword "keywords"}~@{verbatim
-  \<open>"datatype"\<close>} @{text ":: thy_decl"} for theory-level declarations
+  \<open>:: thy_goal\<close> or @{keyword "keywords"}~@{verbatim
+  \<open>"datatype"\<close>} \<open>:: thy_decl\<close> for theory-level declarations
   with and without proof, respectively.  Additional @{syntax tags}
   provide defaults for document preparation (\secref{sec:tags}).
 
   It is possible to specify an alternative completion via @{verbatim
-  "=="}~@{text "text"}, while the default is the corresponding keyword name.
+  "=="}~\<open>text\<close>, while the default is the corresponding keyword name.
   
   \<^descr> @{command (global) "end"} concludes the current theory
   definition.  Note that some other commands, e.g.\ local theory
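
A minimal theory header following this scheme (all names hypothetical):

    theory A
      imports B1 B2
    begin

    end
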
@@ -120,8 +120,8 @@
 
 text \<open>
   \begin{matharray}{rcll}
-    @{command_def "context"} & : & @{text "theory \<rightarrow> local_theory"} \\
-    @{command_def (local) "end"} & : & @{text "local_theory \<rightarrow> theory"} \\
+    @{command_def "context"} & : & \<open>theory \<rightarrow> local_theory\<close> \\
+    @{command_def (local) "end"} & : & \<open>local_theory \<rightarrow> theory\<close> \\
     @{keyword_def "private"} \\
     @{keyword_def "qualified"} \\
   \end{matharray}
@@ -132,7 +132,7 @@
   depending on the context may be added incrementally later on.
 
   \<^emph>\<open>Named contexts\<close> refer to locales (cf.\ \secref{sec:locale}) or
-  type classes (cf.\ \secref{sec:class}); the name ``@{text "-"}''
+  type classes (cf.\ \secref{sec:class}); the name ``\<open>-\<close>''
   signifies the global theory context.
 
   \<^emph>\<open>Unnamed contexts\<close> may introduce additional parameters and
@@ -149,16 +149,16 @@
     @{syntax_def target}: '(' @'in' @{syntax nameref} ')'
   \<close>}
 
-  \<^descr> @{command "context"}~@{text "c \<BEGIN>"} opens a named
-  context, by recommencing an existing locale or class @{text c}.
+  \<^descr> @{command "context"}~\<open>c \<BEGIN>\<close> opens a named
+  context, by recommencing an existing locale or class \<open>c\<close>.
  Note that locale and class definitions also allow the
  @{keyword "begin"} keyword to be included, in order to continue the local
   theory immediately after the initial specification.
 
-  \<^descr> @{command "context"}~@{text "bundles elements \<BEGIN>"} opens
+  \<^descr> @{command "context"}~\<open>bundles elements \<BEGIN>\<close> opens
   an unnamed context, by extending the enclosing global or local
   theory target by the given declaration bundles (\secref{sec:bundle})
-  and context elements (@{text "\<FIXES>"}, @{text "\<ASSUMES>"}
+  and context elements (\<open>\<FIXES>\<close>, \<open>\<ASSUMES>\<close>
   etc.).  This means any results stemming from definitions and proofs
   in the extended context will be exported into the enclosing target
   by lifting over extra parameters and premises.
@@ -179,38 +179,36 @@
   a local scope by itself: an extra unnamed context is required to use
   @{keyword "private"} or @{keyword "qualified"} here.
 
-  \<^descr> @{text "("}@{keyword_def "in"}~@{text "c)"} given after any local
+  \<^descr> \<open>(\<close>@{keyword_def "in"}~\<open>c)\<close> given after any local
   theory command specifies an immediate target, e.g.\ ``@{command
-  "definition"}~@{text "(\<IN> c)"}'' or ``@{command "theorem"}~@{text
-  "(\<IN> c)"}''. This works both in a local or global theory context; the
+  "definition"}~\<open>(\<IN> c)\<close>'' or ``@{command "theorem"}~\<open>(\<IN> c)\<close>''. This works both in a local or global theory context; the
   current target context will be suspended for this command only. Note that
-  ``@{text "(\<IN> -)"}'' will always produce a global result independently
+  ``\<open>(\<IN> -)\<close>'' will always produce a global result independently
   of the current target context.
 
 
-  Any specification element that operates on @{text local_theory} according
-  to this manual implicitly allows the above target syntax @{text
-  "("}@{keyword "in"}~@{text "c)"}, but individual syntax diagrams omit that
+  Any specification element that operates on \<open>local_theory\<close> according
+  to this manual implicitly allows the above target syntax \<open>(\<close>@{keyword "in"}~\<open>c)\<close>, but individual syntax diagrams omit that
   aspect for clarity.
 
   \<^medskip>
   The exact meaning of results produced within a local theory
   context depends on the underlying target infrastructure (locale, type
   class etc.). The general idea is as follows, considering a context named
-  @{text c} with parameter @{text x} and assumption @{text "A[x]"}.
+  \<open>c\<close> with parameter \<open>x\<close> and assumption \<open>A[x]\<close>.
   
   Definitions are exported by introducing a global version with
   additional arguments; a syntactic abbreviation links the long form
   with the abstract version of the target context.  For example,
-  @{text "a \<equiv> t[x]"} becomes @{text "c.a ?x \<equiv> t[?x]"} at the theory
-  level (for arbitrary @{text "?x"}), together with a local
-  abbreviation @{text "c \<equiv> c.a x"} in the target context (for the
-  fixed parameter @{text x}).
+  \<open>a \<equiv> t[x]\<close> becomes \<open>c.a ?x \<equiv> t[?x]\<close> at the theory
+  level (for arbitrary \<open>?x\<close>), together with a local
+  abbreviation \<open>c \<equiv> c.a x\<close> in the target context (for the
+  fixed parameter \<open>x\<close>).
 
   Theorems are exported by discharging the assumptions and
-  generalizing the parameters of the context.  For example, @{text "a:
-  B[x]"} becomes @{text "c.a: A[?x] \<Longrightarrow> B[?x]"}, again for arbitrary
-  @{text "?x"}.
+  generalizing the parameters of the context.  For example, \<open>a:
+  B[x]\<close> becomes \<open>c.a: A[?x] \<Longrightarrow> B[?x]\<close>, again for arbitrary
+  \<open>?x\<close>.
 \<close>
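
A small sketch of this export behaviour, for a hypothetical locale \<open>c\<close>:

    locale c =
      fixes x :: nat
      assumes A: "x > 0"
    begin

    definition a where "a \<equiv> x + x"

    end

After the context is closed, the theory-level constant \<open>c.a\<close> takes the parameter explicitly, as in \<open>c.a ?x \<equiv> ?x + ?x\<close>, while \<open>a \<equiv> c.a x\<close> remains available as an abbreviation inside \<open>c\<close>.
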
 
 
@@ -218,20 +216,20 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "bundle"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "print_bundles"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow> "} \\
-    @{command_def "include"} & : & @{text "proof(state) \<rightarrow> proof(state)"} \\
-    @{command_def "including"} & : & @{text "proof(prove) \<rightarrow> proof(prove)"} \\
+    @{command_def "bundle"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "print_bundles"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow> \<close> \\
+    @{command_def "include"} & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
+    @{command_def "including"} & : & \<open>proof(prove) \<rightarrow> proof(prove)\<close> \\
     @{keyword_def "includes"} & : & syntax \\
   \end{matharray}
 
   The outer syntax of fact expressions (\secref{sec:syn-att}) involves
   theorems and attributes, which are evaluated in the context and
   applied to it.  Attributes may declare theorems to the context, as
-  in @{text "this_rule [intro] that_rule [elim]"} for example.
+  in \<open>this_rule [intro] that_rule [elim]\<close> for example.
   Configuration options (\secref{sec:config}) are special declaration
   attributes that operate on the context without a theorem, as in
-  @{text "[[show_types = false]]"} for example.
+  \<open>[[show_types = false]]\<close> for example.
 
   Expressions of this form may be defined as \<^emph>\<open>bundled
   declarations\<close> in the context, and included in other situations later
@@ -249,7 +247,7 @@
     @{syntax_def "includes"}: @'includes' (@{syntax nameref}+)
   \<close>}
 
-  \<^descr> @{command bundle}~@{text "b = decls"} defines a bundle of
+  \<^descr> @{command bundle}~\<open>b = decls\<close> defines a bundle of
  declarations in the current context.  The RHS is similar to that
   of the @{command declare} command.  Bundles defined in local theory
   targets are subject to transformations via morphisms, when moved
@@ -257,10 +255,10 @@
   other local theory specification.
 
   \<^descr> @{command print_bundles} prints the named bundles that are available
-  in the current context; the ``@{text "!"}'' option indicates extra
+  in the current context; the ``\<open>!\<close>'' option indicates extra
   verbosity.
 
-  \<^descr> @{command include}~@{text "b\<^sub>1 \<dots> b\<^sub>n"} includes the declarations
+  \<^descr> @{command include}~\<open>b\<^sub>1 \<dots> b\<^sub>n\<close> includes the declarations
   from the given bundles into the current proof body context.  This is
   analogous to @{command "note"} (\secref{sec:proof-facts}) with the
   expanded bundles.
@@ -270,7 +268,7 @@
   @{command "using"} (\secref{sec:proof-facts}) with the expanded
   bundles.
 
-  \<^descr> @{keyword "includes"}~@{text "b\<^sub>1 \<dots> b\<^sub>n"} is similar to
+  \<^descr> @{keyword "includes"}~\<open>b\<^sub>1 \<dots> b\<^sub>n\<close> is similar to
   @{command include}, but works in situations where a specification
   context is constructed, notably for @{command context} and long
   statements of @{command theorem} etc.
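
A compact sketch of the bundle mechanism (the bundle name is hypothetical; \<open>show_types\<close> is an existing configuration option):

    bundle typed_output = [[show_types = true]]

    lemma "x = x"
      including typed_output
      by (rule refl)
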
@@ -292,11 +290,11 @@
 
 text \<open>
   \begin{matharray}{rcll}
-    @{command_def "definition"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{attribute_def "defn"} & : & @{text attribute} \\
-    @{command_def "print_defn_rules"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow> "} \\
-    @{command_def "abbreviation"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "print_abbrevs"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow> "} \\
+    @{command_def "definition"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{attribute_def "defn"} & : & \<open>attribute\<close> \\
+    @{command_def "print_defn_rules"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow> \<close> \\
+    @{command_def "abbreviation"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "print_abbrevs"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow> \<close> \\
   \end{matharray}
 
   Term definitions may either happen within the logic (as equational axioms
@@ -315,26 +313,26 @@
     @@{command print_abbrevs} ('!'?)
   \<close>}
 
-  \<^descr> @{command "definition"}~@{text "c \<WHERE> eq"} produces an
-  internal definition @{text "c \<equiv> t"} according to the specification
-  given as @{text eq}, which is then turned into a proven fact.  The
+  \<^descr> @{command "definition"}~\<open>c \<WHERE> eq\<close> produces an
+  internal definition \<open>c \<equiv> t\<close> according to the specification
+  given as \<open>eq\<close>, which is then turned into a proven fact.  The
   given proposition may deviate from internal meta-level equality
   according to the rewrite rules declared as @{attribute defn} by the
-  object-logic.  This usually covers object-level equality @{text "x =
-  y"} and equivalence @{text "A \<leftrightarrow> B"}.  End-users normally need not
+  object-logic.  This usually covers object-level equality \<open>x =
+  y\<close> and equivalence \<open>A \<leftrightarrow> B\<close>.  End-users normally need not
   change the @{attribute defn} setup.
   
   Definitions may be presented with explicit arguments on the LHS, as
-  well as additional conditions, e.g.\ @{text "f x y = t"} instead of
-  @{text "f \<equiv> \<lambda>x y. t"} and @{text "y \<noteq> 0 \<Longrightarrow> g x y = u"} instead of an
-  unrestricted @{text "g \<equiv> \<lambda>x y. u"}.
+  well as additional conditions, e.g.\ \<open>f x y = t\<close> instead of
+  \<open>f \<equiv> \<lambda>x y. t\<close> and \<open>y \<noteq> 0 \<Longrightarrow> g x y = u\<close> instead of an
+  unrestricted \<open>g \<equiv> \<lambda>x y. u\<close>.
 
   \<^descr> @{command "print_defn_rules"} prints the definitional rewrite rules
   declared via @{attribute defn} in the current context.
 
-  \<^descr> @{command "abbreviation"}~@{text "c \<WHERE> eq"} introduces a
+  \<^descr> @{command "abbreviation"}~\<open>c \<WHERE> eq\<close> introduces a
   syntactic constant which is associated with a certain term according
-  to the meta-level equality @{text eq}.
+  to the meta-level equality \<open>eq\<close>.
   
   Abbreviations participate in the usual type-inference process, but
   are expanded before the logic ever sees them.  Pretty printing of
@@ -342,15 +340,15 @@
   reverted abbreviations.  This needs some care to avoid overlapping
   or looping syntactic replacements!
   
-  The optional @{text mode} specification restricts output to a
-  particular print mode; using ``@{text input}'' here achieves the
+  The optional \<open>mode\<close> specification restricts output to a
+  particular print mode; using ``\<open>input\<close>'' here achieves the
   effect of one-way abbreviations.  The mode may also include an
   ``@{keyword "output"}'' qualifier that affects the concrete syntax
   declared for abbreviations, cf.\ @{command "syntax"} in
   \secref{sec:syn-trans}.
   
   \<^descr> @{command "print_abbrevs"} prints all constant abbreviations of the
-  current context; the ``@{text "!"}'' option indicates extra verbosity.
+  current context; the ``\<open>!\<close>'' option indicates extra verbosity.
 \<close>
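
For example (an Isabelle/HOL sketch with hypothetical names):

    definition square :: "nat \<Rightarrow> nat"
      where "square n = n * n"

    abbreviation (input) sq :: "nat \<Rightarrow> nat"
      where "sq \<equiv> square"
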
 
 
@@ -358,7 +356,7 @@
 
 text \<open>
   \begin{matharray}{rcll}
-    @{command_def "axiomatization"} & : & @{text "theory \<rightarrow> theory"} & (axiomatic!) \\
+    @{command_def "axiomatization"} & : & \<open>theory \<rightarrow> theory\<close> & (axiomatic!) \\
   \end{matharray}
 
   @{rail \<open>
@@ -367,7 +365,7 @@
     specs: (@{syntax thmdecl}? @{syntax props} + @'and')
   \<close>}
 
-  \<^descr> @{command "axiomatization"}~@{text "c\<^sub>1 \<dots> c\<^sub>m \<WHERE> \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"}
+  \<^descr> @{command "axiomatization"}~\<open>c\<^sub>1 \<dots> c\<^sub>m \<WHERE> \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n\<close>
   introduces several constants simultaneously and states axiomatic
   properties for these. The constants are marked as being specified once and
   for all, which prevents additional specifications for the same constants
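
For example, the following hypothetical axiomatization introduces a constant together with a stated property; as emphasized above, such postulates require due care:

    axiomatization f :: "nat \<Rightarrow> nat"
      where f_mono: "m \<le> n \<Longrightarrow> f m \<le> f n"
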
@@ -392,9 +390,9 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "declaration"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "syntax_declaration"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "declare"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
+    @{command_def "declaration"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "syntax_declaration"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "declare"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
   \end{matharray}
 
  Arbitrary operations on the background context may be wrapped up as
@@ -413,13 +411,13 @@
     @@{command declare} (@{syntax thmrefs} + @'and')
   \<close>}
 
-  \<^descr> @{command "declaration"}~@{text d} adds the declaration
-  function @{text d} of ML type @{ML_type declaration}, to the current
+  \<^descr> @{command "declaration"}~\<open>d\<close> adds the declaration
+  function \<open>d\<close> of ML type @{ML_type declaration} to the current
   local theory under construction.  In later application contexts, the
   function is transformed according to the morphisms being involved in
   the interpretation hierarchy.
 
-  If the @{text "(pervasive)"} option is given, the corresponding
+  If the \<open>(pervasive)\<close> option is given, the corresponding
   declaration is applied to all possible contexts involved, including
   the global background theory.
 
@@ -427,7 +425,7 @@
   "declaration"}, but is meant to affect only ``syntactic'' tools by
   convention (such as notation and type-checking information).
 
-  \<^descr> @{command "declare"}~@{text thms} declares theorems to the
+  \<^descr> @{command "declare"}~\<open>thms\<close> declares theorems to the
   current local theory context.  No theorem binding is involved here,
   unlike @{command "lemmas"} (cf.\ \secref{sec:theorems}), so
   @{command "declare"} only has the effect of applying attributes as
@@ -484,7 +482,7 @@
   A locale instance consists of a reference to a locale and either
   positional or named parameter instantiations.  Identical
   instantiations (that is, those that instantiate a parameter by itself)
-  may be omitted.  The notation `@{text "_"}' enables to omit the
+  may be omitted.  The notation `\<open>_\<close>' makes it possible to omit the
   instantiation for a parameter inside a positional instantiation.
 
   Terms in instantiations are from the context the locale expressions
@@ -511,13 +509,13 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "locale"} & : & @{text "theory \<rightarrow> local_theory"} \\
-    @{command_def "experiment"} & : & @{text "theory \<rightarrow> local_theory"} \\
-    @{command_def "print_locale"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "print_locales"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "locale_deps"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{method_def intro_locales} & : & @{text method} \\
-    @{method_def unfold_locales} & : & @{text method} \\
+    @{command_def "locale"} & : & \<open>theory \<rightarrow> local_theory\<close> \\
+    @{command_def "experiment"} & : & \<open>theory \<rightarrow> local_theory\<close> \\
+    @{command_def "print_locale"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "print_locales"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "locale_deps"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{method_def intro_locales} & : & \<open>method\<close> \\
+    @{method_def unfold_locales} & : & \<open>method\<close> \\
   \end{matharray}
 
   \indexisarelem{fixes}\indexisarelem{constrains}\indexisarelem{assumes}
@@ -542,19 +540,19 @@
       @'notes' (@{syntax thmdef}? @{syntax thmrefs} + @'and')
   \<close>}
 
-  \<^descr> @{command "locale"}~@{text "loc = import + body"} defines a
-  new locale @{text loc} as a context consisting of a certain view of
-  existing locales (@{text import}) plus some additional elements
-  (@{text body}).  Both @{text import} and @{text body} are optional;
-  the degenerate form @{command "locale"}~@{text loc} defines an empty
+  \<^descr> @{command "locale"}~\<open>loc = import + body\<close> defines a
+  new locale \<open>loc\<close> as a context consisting of a certain view of
+  existing locales (\<open>import\<close>) plus some additional elements
+  (\<open>body\<close>).  Both \<open>import\<close> and \<open>body\<close> are optional;
+  the degenerate form @{command "locale"}~\<open>loc\<close> defines an empty
   locale, which may still be useful to collect declarations of facts
   later on.  Type-inference on locale expressions automatically takes
   care of the most general typing that the combined context elements
   may acquire.
 
-  The @{text import} consists of a locale expression; see
+  The \<open>import\<close> consists of a locale expression; see
   \secref{sec:proof-context} above.  Its @{keyword "for"} clause defines
-  the parameters of @{text import}.  These are parameters of
+  the parameters of \<open>import\<close>.  These are parameters of
   the defined locale.  Locale parameters whose instantiation is
   omitted automatically extend the (possibly empty) @{keyword "for"}
   clause: they are inserted at its beginning.  This means that these
@@ -563,31 +561,30 @@
   convenience for the inheritance of parameters in locale
   declarations.
 
-  The @{text body} consists of context elements.
+  The \<open>body\<close> consists of context elements.
 
-    \<^descr> @{element "fixes"}~@{text "x :: \<tau> (mx)"} declares a local
-    parameter of type @{text \<tau>} and mixfix annotation @{text mx} (both
-    are optional).  The special syntax declaration ``@{text
-    "("}@{keyword_ref "structure"}@{text ")"}'' means that @{text x} may
+    \<^descr> @{element "fixes"}~\<open>x :: \<tau> (mx)\<close> declares a local
+    parameter of type \<open>\<tau>\<close> and mixfix annotation \<open>mx\<close> (both
+    are optional).  The special syntax declaration ``\<open>(\<close>@{keyword_ref "structure"}\<open>)\<close>'' means that \<open>x\<close> may
     be referenced implicitly in this context.
 
-    \<^descr> @{element "constrains"}~@{text "x :: \<tau>"} introduces a type
-    constraint @{text \<tau>} on the local parameter @{text x}.  This
+    \<^descr> @{element "constrains"}~\<open>x :: \<tau>\<close> introduces a type
+    constraint \<open>\<tau>\<close> on the local parameter \<open>x\<close>.  This
     element is deprecated.  The type constraint should be introduced in
     the @{keyword "for"} clause or the relevant @{element "fixes"} element.
 
-    \<^descr> @{element "assumes"}~@{text "a: \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"}
+    \<^descr> @{element "assumes"}~\<open>a: \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n\<close>
     introduces local premises, similar to @{command "assume"} within a
     proof (cf.\ \secref{sec:proof-context}).
 
-    \<^descr> @{element "defines"}~@{text "a: x \<equiv> t"} defines a previously
+    \<^descr> @{element "defines"}~\<open>a: x \<equiv> t\<close> defines a previously
     declared parameter.  This is similar to @{command "def"} within a
     proof (cf.\ \secref{sec:proof-context}), but @{element "defines"}
     takes an equational proposition instead of a variable-term pair.  The
     left-hand side of the equation may have additional arguments, e.g.\
-    ``@{element "defines"}~@{text "f x\<^sub>1 \<dots> x\<^sub>n \<equiv> t"}''.
+    ``@{element "defines"}~\<open>f x\<^sub>1 \<dots> x\<^sub>n \<equiv> t\<close>''.
 
-    \<^descr> @{element "notes"}~@{text "a = b\<^sub>1 \<dots> b\<^sub>n"}
+    \<^descr> @{element "notes"}~\<open>a = b\<^sub>1 \<dots> b\<^sub>n\<close>
     reconsiders facts within a local context.  Most notably, this may
     include arbitrary declarations in any attribute specifications
     included here, e.g.\ a local @{attribute simp} rule.
@@ -597,7 +594,7 @@
   derived from the parameters, @{command "definition"}
   (\secref{sec:term-definitions}) is usually more appropriate.
   
-  Note that ``@{text "(\<IS> p\<^sub>1 \<dots> p\<^sub>n)"}'' patterns given
+  Note that ``\<open>(\<IS> p\<^sub>1 \<dots> p\<^sub>n)\<close>'' patterns given
   in the syntax of @{element "assumes"} and @{element "defines"} above
   are illegal in locale definitions.  In the long goal format of
   \secref{sec:goals}, term bindings may be included as expected,
@@ -605,35 +602,33 @@
   
   \<^medskip>
   Locale specifications are ``closed up'' by
-  turning the given text into a predicate definition @{text
-  loc_axioms} and deriving the original assumptions as local lemmas
+  turning the given text into a predicate definition \<open>loc_axioms\<close> and deriving the original assumptions as local lemmas
   (modulo local definitions).  The predicate statement covers only the
   newly specified assumptions, omitting the content of included locale
   expressions.  The full cumulative view is only provided on export,
-  involving another predicate @{text loc} that refers to the complete
+  involving another predicate \<open>loc\<close> that refers to the complete
   specification text.
   
   In any case, the predicate arguments are those locale parameters
   that actually occur in the respective piece of text.  Also these
   predicates operate at the meta-level in theory, but the locale
  package attempts to internalize statements according to the
-  object-logic setup (e.g.\ replacing @{text \<And>} by @{text \<forall>}, and
-  @{text "\<Longrightarrow>"} by @{text "\<longrightarrow>"} in HOL; see also
-  \secref{sec:object-logic}).  Separate introduction rules @{text
-  loc_axioms.intro} and @{text loc.intro} are provided as well.
+  object-logic setup (e.g.\ replacing \<open>\<And>\<close> by \<open>\<forall>\<close>, and
+  \<open>\<Longrightarrow>\<close> by \<open>\<longrightarrow>\<close> in HOL; see also
+  \secref{sec:object-logic}).  Separate introduction rules \<open>loc_axioms.intro\<close> and \<open>loc.intro\<close> are provided as well.
 
-  \<^descr> @{command experiment}~@{text exprs}~@{keyword "begin"} opens an
+  \<^descr> @{command experiment}~\<open>exprs\<close>~@{keyword "begin"} opens an
   anonymous locale context with private naming policy. Specifications in its
  body are inaccessible from outside. This is useful for performing
  experiments without polluting the name space.
 
-  \<^descr> @{command "print_locale"}~@{text "locale"} prints the
+  \<^descr> @{command "print_locale"}~\<open>locale\<close> prints the
   contents of the named locale.  The command omits @{element "notes"}
-  elements by default.  Use @{command "print_locale"}@{text "!"} to
+  elements by default.  Use @{command "print_locale"}\<open>!\<close> to
   have them included.
 
   \<^descr> @{command "print_locales"} prints the names of all locales of the
-  current theory; the ``@{text "!"}'' option indicates extra verbosity.
+  current theory; the ``\<open>!\<close>'' option indicates extra verbosity.
 
   \<^descr> @{command "locale_deps"} visualizes all locales and their
   relations as a Hasse diagram. This includes locales defined as type
@@ -642,10 +637,9 @@
 
   \<^descr> @{method intro_locales} and @{method unfold_locales}
   repeatedly expand all introduction rules of locale predicates of the
-  theory.  While @{method intro_locales} only applies the @{text
-  loc.intro} introduction rules and therefore does not descend to
+  theory.  While @{method intro_locales} only applies the \<open>loc.intro\<close> introduction rules and therefore does not descend to
   assumptions, @{method unfold_locales} is more aggressive and applies
-  @{text loc_axioms.intro} as well.  Both methods are aware of locale
+  \<open>loc_axioms.intro\<close> as well.  Both methods are aware of locale
   specifications entailed by the context, both from target statements,
   and from interpretations (see below).  New goals that are entailed
   by the current context are discharged automatically.
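
These elements combine as in the following semigroup-style sketch (hypothetical, in the spirit of the locales tutorial):

    locale semi =
      fixes prod :: "'a \<Rightarrow> 'a \<Rightarrow> 'a"  (infixl "\<cdot>" 70)
      assumes assoc: "(x \<cdot> y) \<cdot> z = x \<cdot> (y \<cdot> z)"
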
@@ -656,11 +650,11 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "interpretation"} & : & @{text "theory | local_theory \<rightarrow> proof(prove)"} \\
-    @{command_def "interpret"} & : & @{text "proof(state) | proof(chain) \<rightarrow> proof(prove)"} \\
-    @{command_def "sublocale"} & : & @{text "theory | local_theory \<rightarrow> proof(prove)"} \\
-    @{command_def "print_dependencies"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "print_interps"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
+    @{command_def "interpretation"} & : & \<open>theory | local_theory \<rightarrow> proof(prove)\<close> \\
+    @{command_def "interpret"} & : & \<open>proof(state) | proof(chain) \<rightarrow> proof(prove)\<close> \\
+    @{command_def "sublocale"} & : & \<open>theory | local_theory \<rightarrow> proof(prove)\<close> \\
+    @{command_def "print_dependencies"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "print_interps"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
   \end{matharray}
 
   Locales may be instantiated, and the resulting instantiated
@@ -687,8 +681,8 @@
     equations: @'where' (@{syntax thmdecl}? @{syntax prop} + @'and')
   \<close>}
 
-  \<^descr> @{command "interpretation"}~@{text "expr \<WHERE> eqns"}
-  interprets @{text expr} in a global or local theory.  The command
+  \<^descr> @{command "interpretation"}~\<open>expr \<WHERE> eqns\<close>
+  interprets \<open>expr\<close> in a global or local theory.  The command
   generates proof obligations for the instantiated specifications.
   Once these are discharged by the user, instantiated declarations (in
   particular, facts) are added to the theory in a post-processing
@@ -713,8 +707,8 @@
   This enables establishing facts based on interpretations without
   creating permanent links to the interpreted locale instances, as
   would be the case with @{command sublocale}.
-  While @{command "interpretation"}~@{text "(\<IN> c)
-  \<dots>"} is technically possible, it is not useful since its result is
+  While @{command "interpretation"}~\<open>(\<IN> c)
+  \<dots>\<close> is technically possible, it is not useful since its result is
   discarded immediately.
 
   Free variables in the interpreted expression are allowed.  They are
@@ -723,43 +717,43 @@
   context --- for example, because a constant of that name exists ---
   add it to the @{keyword "for"} clause.
 
-  The equations @{text eqns} yield \<^emph>\<open>rewrite morphisms\<close>, which are
+  The equations \<open>eqns\<close> yield \<^emph>\<open>rewrite morphisms\<close>, which are
   unfolded during post-processing and are useful for interpreting
   concepts introduced through definitions.  The equations must be
   proved.
 
-  \<^descr> @{command "interpret"}~@{text "expr \<WHERE> eqns"} interprets
-  @{text expr} in the proof context and is otherwise similar to
+  \<^descr> @{command "interpret"}~\<open>expr \<WHERE> eqns\<close> interprets
+  \<open>expr\<close> in the proof context and is otherwise similar to
   interpretation in local theories.  Note that for @{command
-  "interpret"} the @{text eqns} should be
+  "interpret"} the \<open>eqns\<close> should be
   explicitly universally quantified.
 
-  \<^descr> @{command "sublocale"}~@{text "name \<subseteq> expr \<WHERE>
-  eqns"}
-  interprets @{text expr} in the locale @{text name}.  A proof that
-  the specification of @{text name} implies the specification of
-  @{text expr} is required.  As in the localized version of the
-  theorem command, the proof is in the context of @{text name}.  After
+  \<^descr> @{command "sublocale"}~\<open>name \<subseteq> expr \<WHERE>
+  eqns\<close>
+  interprets \<open>expr\<close> in the locale \<open>name\<close>.  A proof that
+  the specification of \<open>name\<close> implies the specification of
+  \<open>expr\<close> is required.  As in the localized version of the
+  theorem command, the proof is in the context of \<open>name\<close>.  After
   the proof obligation has been discharged, the locale hierarchy is
-  changed as if @{text name} imported @{text expr} (hence the name
-  @{command "sublocale"}).  When the context of @{text name} is
+  changed as if \<open>name\<close> imported \<open>expr\<close> (hence the name
+  @{command "sublocale"}).  When the context of \<open>name\<close> is
   subsequently entered, traversing the locale hierarchy will involve
-  the locale instances of @{text expr}, and their declarations will be
+  the locale instances of \<open>expr\<close>, and their declarations will be
   added to the context.  This makes @{command "sublocale"}
-  dynamic: extensions of a locale that is instantiated in @{text expr}
+  dynamic: extensions of a locale that is instantiated in \<open>expr\<close>
   may take place after the @{command "sublocale"} declaration and
   still become available in the context.  Circular @{command
   "sublocale"} declarations are allowed as long as they do not lead to
   infinite chains.
 
-  If interpretations of @{text name} exist in the current global
-  theory, the command adds interpretations for @{text expr} as well,
-  with the same qualifier, although only for fragments of @{text expr}
+  If interpretations of \<open>name\<close> exist in the current global
+  theory, the command adds interpretations for \<open>expr\<close> as well,
+  with the same qualifier, although only for fragments of \<open>expr\<close>
   that are not interpreted in the theory already.
 
-  The equations @{text eqns} amend the morphism through
-  which @{text expr} is interpreted.  This enables mapping definitions
-  from the interpreted locales to entities of @{text name} and can
+  The equations \<open>eqns\<close> amend the morphism through
+  which \<open>expr\<close> is interpreted.  This enables mapping definitions
+  from the interpreted locales to entities of \<open>name\<close> and can
   help break infinite chains induced by circular @{command
   "sublocale"} declarations.
 
@@ -767,18 +761,18 @@
   be used, but the locale argument must be omitted.  The command then
   refers to the locale (or class) target of the context block.
 
-  \<^descr> @{command "print_dependencies"}~@{text "expr"} is useful for
-  understanding the effect of an interpretation of @{text "expr"} in
+  \<^descr> @{command "print_dependencies"}~\<open>expr\<close> is useful for
+  understanding the effect of an interpretation of \<open>expr\<close> in
   the current context.  It lists all locale instances for which
   interpretations would be added to the current context.  Variant
-  @{command "print_dependencies"}@{text "!"} does not generalize
+  @{command "print_dependencies"}\<open>!\<close> does not generalize
   parameters and assumes an empty context --- that is, it prints all
   locale instances that would be considered for interpretation.  The
   latter is useful for understanding the dependencies of a locale
   expression.
 
-  \<^descr> @{command "print_interps"}~@{text "locale"} lists all
-  interpretations of @{text "locale"} in the current theory or proof
+  \<^descr> @{command "print_interps"}~\<open>locale\<close> lists all
+  interpretations of \<open>locale\<close> in the current theory or proof
   context, including those due to a combination of an @{command
   "interpretation"} or @{command "interpret"} and one or several
   @{command "sublocale"} declarations.
@@ -825,7 +819,7 @@
 
   
   \begin{matharray}{rcl}
-    @{command_def "permanent_interpretation"} & : & @{text "local_theory \<rightarrow> proof(prove)"}
+    @{command_def "permanent_interpretation"} & : & \<open>local_theory \<rightarrow> proof(prove)\<close>
   \end{matharray}
 
   @{rail \<open>
@@ -837,8 +831,8 @@
     equations: @'where' (@{syntax thmdecl}? @{syntax prop} + @'and')
   \<close>}
 
-  \<^descr> @{command "permanent_interpretation"}~@{text "expr \<DEFINING> defs \<WHERE> eqns"}
-  interprets @{text expr} in the current local theory.  The command
+  \<^descr> @{command "permanent_interpretation"}~\<open>expr \<DEFINING> defs \<WHERE> eqns\<close>
+  interprets \<open>expr\<close> in the current local theory.  The command
   generates proof obligations for the instantiated specifications.
   Instantiated declarations (in particular, facts) are added to the
   local theory's underlying target in a post-processing phase.  When
@@ -857,7 +851,7 @@
   (see \secref{sec:target}) are not admissible since they are
   non-permanent due to their very nature.  
 
-  In additions to \<^emph>\<open>rewrite morphisms\<close> specified by @{text eqns},
+  In addition to \<^emph>\<open>rewrite morphisms\<close> specified by \<open>eqns\<close>,
   also \<^emph>\<open>rewrite definitions\<close> may be specified.  Semantically, a
   rewrite definition
   
@@ -885,18 +879,18 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "class"} & : & @{text "theory \<rightarrow> local_theory"} \\
-    @{command_def "instantiation"} & : & @{text "theory \<rightarrow> local_theory"} \\
-    @{command_def "instance"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command "instance"} & : & @{text "theory \<rightarrow> proof(prove)"} \\
-    @{command_def "subclass"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "print_classes"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{command_def "class_deps"}@{text "\<^sup>*"} & : & @{text "context \<rightarrow>"} \\
-    @{method_def intro_classes} & : & @{text method} \\
+    @{command_def "class"} & : & \<open>theory \<rightarrow> local_theory\<close> \\
+    @{command_def "instantiation"} & : & \<open>theory \<rightarrow> local_theory\<close> \\
+    @{command_def "instance"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command "instance"} & : & \<open>theory \<rightarrow> proof(prove)\<close> \\
+    @{command_def "subclass"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "print_classes"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{command_def "class_deps"}\<open>\<^sup>*\<close> & : & \<open>context \<rightarrow>\<close> \\
+    @{method_def intro_classes} & : & \<open>method\<close> \\
   \end{matharray}
 
   A class is a particular locale with \<^emph>\<open>exactly one\<close> type variable
-  @{text \<alpha>}.  Beyond the underlying locale, a corresponding type class
+  \<open>\<alpha>\<close>.  Beyond the underlying locale, a corresponding type class
  is established which is interpreted logically as an axiomatic type
  class @{cite "Wenzel:1997:TPHOL"} whose logical content consists of the
   assumptions of the locale.  Thus, classes provide the full
@@ -923,29 +917,27 @@
     class_bounds: @{syntax sort} | '(' (@{syntax sort} + @'|') ')'
   \<close>}
 
-  \<^descr> @{command "class"}~@{text "c = superclasses + body"} defines
-  a new class @{text c}, inheriting from @{text superclasses}.  This
-  introduces a locale @{text c} with import of all locales @{text
-  superclasses}.
+  \<^descr> @{command "class"}~\<open>c = superclasses + body\<close> defines
+  a new class \<open>c\<close>, inheriting from \<open>superclasses\<close>.  This
+  introduces a locale \<open>c\<close> with import of all locales \<open>superclasses\<close>.
 
-  Any @{element "fixes"} in @{text body} are lifted to the global
-  theory level (\<^emph>\<open>class operations\<close> @{text "f\<^sub>1, \<dots>,
-  f\<^sub>n"} of class @{text c}), mapping the local type parameter
-  @{text \<alpha>} to a schematic type variable @{text "?\<alpha> :: c"}.
+  Any @{element "fixes"} in \<open>body\<close> are lifted to the global
+  theory level (\<^emph>\<open>class operations\<close> \<open>f\<^sub>1, \<dots>,
+  f\<^sub>n\<close> of class \<open>c\<close>), mapping the local type parameter
+  \<open>\<alpha>\<close> to a schematic type variable \<open>?\<alpha> :: c\<close>.
 
-  Likewise, @{element "assumes"} in @{text body} are also lifted,
-  mapping each local parameter @{text "f :: \<tau>[\<alpha>]"} to its
-  corresponding global constant @{text "f :: \<tau>[?\<alpha> :: c]"}.  The
-  corresponding introduction rule is provided as @{text
-  c_class_axioms.intro}.  This rule should be rarely needed directly
+  Likewise, @{element "assumes"} in \<open>body\<close> are also lifted,
+  mapping each local parameter \<open>f :: \<tau>[\<alpha>]\<close> to its
+  corresponding global constant \<open>f :: \<tau>[?\<alpha> :: c]\<close>.  The
+  corresponding introduction rule is provided as \<open>c_class_axioms.intro\<close>.  This rule should rarely be needed directly
   --- the @{method intro_classes} method takes care of the details of
   class membership proofs.
 
-  \<^descr> @{command "instantiation"}~@{text "t :: (s\<^sub>1, \<dots>, s\<^sub>n)s
-  \<BEGIN>"} opens a target (cf.\ \secref{sec:target}) which
-  allows to specify class operations @{text "f\<^sub>1, \<dots>, f\<^sub>n"} corresponding
-  to sort @{text s} at the particular type instance @{text "(\<alpha>\<^sub>1 :: s\<^sub>1,
-  \<dots>, \<alpha>\<^sub>n :: s\<^sub>n) t"}.  A plain @{command "instance"} command in the
+  \<^descr> @{command "instantiation"}~\<open>t :: (s\<^sub>1, \<dots>, s\<^sub>n)s
+  \<BEGIN>\<close> opens a target (cf.\ \secref{sec:target}) which
+  allows specifying class operations \<open>f\<^sub>1, \<dots>, f\<^sub>n\<close> corresponding
+  to sort \<open>s\<close> at the particular type instance \<open>(\<alpha>\<^sub>1 :: s\<^sub>1,
+  \<dots>, \<alpha>\<^sub>n :: s\<^sub>n) t\<close>.  A plain @{command "instance"} command in the
   target body poses a goal stating these type arities.  The target is
   concluded by an @{command_ref (local) "end"} command.
 
@@ -960,20 +952,19 @@
   the type classes involved.  After finishing the proof, the
   background theory will be augmented by the proven type arities.
 
-  On the theory level, @{command "instance"}~@{text "t :: (s\<^sub>1, \<dots>,
-  s\<^sub>n)s"} provides a convenient way to instantiate a type class with no
+  On the theory level, @{command "instance"}~\<open>t :: (s\<^sub>1, \<dots>,
+  s\<^sub>n)s\<close> provides a convenient way to instantiate a type class with no
   need to specify operations: one can continue with the
   instantiation proof immediately.
 
-  \<^descr> @{command "subclass"}~@{text c} in a class context for class
-  @{text d} sets up a goal stating that class @{text c} is logically
-  contained in class @{text d}.  After finishing the proof, class
-  @{text d} is proven to be subclass @{text c} and the locale @{text
-  c} is interpreted into @{text d} simultaneously.
+  \<^descr> @{command "subclass"}~\<open>c\<close> in a class context for class
+  \<open>d\<close> sets up a goal stating that class \<open>c\<close> is logically
+  contained in class \<open>d\<close>.  After finishing the proof, class
+  \<open>d\<close> is proven to be a subclass of \<open>c\<close> and the locale \<open>c\<close> is interpreted into \<open>d\<close> simultaneously.
 
   A weakened form of this is available through a further variant of
-  @{command instance}:  @{command instance}~@{text "c\<^sub>1 \<subseteq> c\<^sub>2"} opens
-  a proof that class @{text "c\<^sub>2"} implies @{text "c\<^sub>1"} without reference
+  @{command instance}:  @{command instance}~\<open>c\<^sub>1 \<subseteq> c\<^sub>2\<close> opens
+  a proof that class \<open>c\<^sub>2\<close> implies \<open>c\<^sub>1\<close> without reference
   to the underlying locales;  this is useful if the properties to prove
   the logical connection are not sufficient on the locale level but on
   the theory level.
@@ -984,10 +975,10 @@
   \<^descr> @{command "class_deps"} visualizes classes and their subclass
   relations as a directed acyclic graph. By default, all classes from the
  current theory context are shown. This may be restricted by optional bounds
-  as follows: @{command "class_deps"}~@{text upper} or @{command
-  "class_deps"}~@{text "upper lower"}. A class is visualized, iff it is a
-  subclass of some sort from @{text upper} and a superclass of some sort
-  from @{text lower}.
+  as follows: @{command "class_deps"}~\<open>upper\<close> or @{command
+  "class_deps"}~\<open>upper lower\<close>. A class is visualized, iff it is a
+  subclass of some sort from \<open>upper\<close> and a superclass of some sort
+  from \<open>lower\<close>.
 
   \<^descr> @{method intro_classes} repeatedly expands all class
   introduction rules of this theory.  Note that this method usually
@@ -1004,18 +995,18 @@
   %FIXME check
 
   A named context may refer to a locale (cf.\ \secref{sec:target}).
-  If this locale is also a class @{text c}, apart from the common
+  If this locale is also a class \<open>c\<close>, apart from the common
   locale target behaviour the following happens.
 
-  \<^item> Local constant declarations @{text "g[\<alpha>]"} referring to the
-  local type parameter @{text \<alpha>} and local parameters @{text "f[\<alpha>]"}
-  are accompanied by theory-level constants @{text "g[?\<alpha> :: c]"}
-  referring to theory-level class operations @{text "f[?\<alpha> :: c]"}.
+  \<^item> Local constant declarations \<open>g[\<alpha>]\<close> referring to the
+  local type parameter \<open>\<alpha>\<close> and local parameters \<open>f[\<alpha>]\<close>
+  are accompanied by theory-level constants \<open>g[?\<alpha> :: c]\<close>
+  referring to theory-level class operations \<open>f[?\<alpha> :: c]\<close>.
 
   \<^item> Local theorem bindings are lifted as are assumptions.
 
-  \<^item> Local syntax refers to local operations @{text "g[\<alpha>]"} and
-  global operations @{text "g[?\<alpha> :: c]"} uniformly.  Type inference
+  \<^item> Local syntax refers to local operations \<open>g[\<alpha>]\<close> and
+  global operations \<open>g[?\<alpha> :: c]\<close> uniformly.  Type inference
   resolves ambiguities.  In rare cases, manual type annotations are
   needed.
 \<close>
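
A compact sketch of the class/instantiation round trip (all names hypothetical):

    class dflt =
      fixes dflt :: "'a"

    instantiation nat :: dflt
    begin

    definition "dflt = (0 :: nat)"

    instance ..

    end
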
@@ -1030,15 +1021,15 @@
   \<^medskip>
   For the subsequent formulation of co-regularity we assume
   that the class relation is closed by transitivity and reflexivity.
-  Moreover the collection of arities @{text "t :: (\<^vec>s)c"} is
-  completed such that @{text "t :: (\<^vec>s)c"} and @{text "c \<subseteq> c'"}
-  implies @{text "t :: (\<^vec>s)c'"} for all such declarations.
+  Moreover the collection of arities \<open>t :: (\<^vec>s)c\<close> is
+  completed such that \<open>t :: (\<^vec>s)c\<close> and \<open>c \<subseteq> c'\<close>
+  implies \<open>t :: (\<^vec>s)c'\<close> for all such declarations.
 
   Treating sorts as finite sets of classes (meaning the intersection),
-  the class relation @{text "c\<^sub>1 \<subseteq> c\<^sub>2"} is extended to sorts as
+  the class relation \<open>c\<^sub>1 \<subseteq> c\<^sub>2\<close> is extended to sorts as
   follows:
   \[
-    @{text "s\<^sub>1 \<subseteq> s\<^sub>2 \<equiv> \<forall>c\<^sub>2 \<in> s\<^sub>2. \<exists>c\<^sub>1 \<in> s\<^sub>1. c\<^sub>1 \<subseteq> c\<^sub>2"}
+    \<open>s\<^sub>1 \<subseteq> s\<^sub>2 \<equiv> \<forall>c\<^sub>2 \<in> s\<^sub>2. \<exists>c\<^sub>1 \<in> s\<^sub>1. c\<^sub>1 \<subseteq> c\<^sub>2\<close>
   \]
 
   This relation on sorts is further extended to tuples of sorts (of
@@ -1048,7 +1039,7 @@
   Co-regularity of the class relation together with the
   arities relation means:
   \[
-    @{text "t :: (\<^vec>s\<^sub>1)c\<^sub>1 \<Longrightarrow> t :: (\<^vec>s\<^sub>2)c\<^sub>2 \<Longrightarrow> c\<^sub>1 \<subseteq> c\<^sub>2 \<Longrightarrow> \<^vec>s\<^sub>1 \<subseteq> \<^vec>s\<^sub>2"}
+    \<open>t :: (\<^vec>s\<^sub>1)c\<^sub>1 \<Longrightarrow> t :: (\<^vec>s\<^sub>2)c\<^sub>2 \<Longrightarrow> c\<^sub>1 \<subseteq> c\<^sub>2 \<Longrightarrow> \<^vec>s\<^sub>1 \<subseteq> \<^vec>s\<^sub>2\<close>
   \]
   for all such arities.  In other words, whenever the result
   classes of some type-constructor arities are related, then the
@@ -1065,15 +1056,15 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "overloading"} & : & @{text "theory \<rightarrow> local_theory"} \\
+    @{command_def "overloading"} & : & \<open>theory \<rightarrow> local_theory\<close> \\
   \end{matharray}
 
   Isabelle/Pure's definitional schemes support certain forms of
   overloading (see \secref{sec:consts}).  Overloading means that a
-  constant being declared as @{text "c :: \<alpha> decl"} may be
+  constant being declared as \<open>c :: \<alpha> decl\<close> may be
   defined separately on type instances
-  @{text "c :: (\<beta>\<^sub>1, \<dots>, \<beta>\<^sub>n) t decl"}
-  for each type constructor @{text t}.  At most occasions
+  \<open>c :: (\<beta>\<^sub>1, \<dots>, \<beta>\<^sub>n) t decl\<close>
+  for each type constructor \<open>t\<close>.  On most occasions
   overloading will be used in a Haskell-like fashion together with
   type classes by means of @{command "instantiation"} (see
   \secref{sec:class}).  Sometimes low-level overloading is desirable.
@@ -1086,17 +1077,17 @@
     spec: @{syntax name} ( '==' | '\<equiv>' ) @{syntax term} ( '(' @'unchecked' ')' )?
   \<close>}
 
-  \<^descr> @{command "overloading"}~@{text "x\<^sub>1 \<equiv> c\<^sub>1 :: \<tau>\<^sub>1 \<AND> \<dots> x\<^sub>n \<equiv> c\<^sub>n :: \<tau>\<^sub>n \<BEGIN>"}
+  \<^descr> @{command "overloading"}~\<open>x\<^sub>1 \<equiv> c\<^sub>1 :: \<tau>\<^sub>1 \<AND> \<dots> x\<^sub>n \<equiv> c\<^sub>n :: \<tau>\<^sub>n \<BEGIN>\<close>
  opens a theory target (cf.\ \secref{sec:target}) which allows one to
   specify constants with overloaded definitions.  These are identified
-  by an explicitly given mapping from variable names @{text "x\<^sub>i"} to
-  constants @{text "c\<^sub>i"} at particular type instances.  The
+  by an explicitly given mapping from variable names \<open>x\<^sub>i\<close> to
+  constants \<open>c\<^sub>i\<close> at particular type instances.  The
   definitions themselves are established using common specification
-  tools, using the names @{text "x\<^sub>i"} as reference to the
+  tools, using the names \<open>x\<^sub>i\<close> as reference to the
   corresponding constants.  The target is concluded by @{command
   (local) "end"}.
 
-  A @{text "(unchecked)"} option disables global dependency checks for
+  A \<open>(unchecked)\<close> option disables global dependency checks for
   the corresponding definition, which is occasionally useful for
   exotic overloading (see \secref{sec:consts} for a precise description).
   It is at the discretion of the user to avoid
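
A sketch of low-level overloading for a hypothetical constant \<open>len\<close>:

    consts len :: "'a \<Rightarrow> nat"

    overloading
      len_list \<equiv> "len :: 'a list \<Rightarrow> nat"
    begin

    definition "len_list xs = length xs"

    end
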
@@ -1108,20 +1099,20 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "SML_file"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "ML_file"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "ML"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "ML_prf"} & : & @{text "proof \<rightarrow> proof"} \\
-    @{command_def "ML_val"} & : & @{text "any \<rightarrow>"} \\
-    @{command_def "ML_command"} & : & @{text "any \<rightarrow>"} \\
-    @{command_def "setup"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "local_setup"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "attribute_setup"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
+    @{command_def "SML_file"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "ML_file"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "ML"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "ML_prf"} & : & \<open>proof \<rightarrow> proof\<close> \\
+    @{command_def "ML_val"} & : & \<open>any \<rightarrow>\<close> \\
+    @{command_def "ML_command"} & : & \<open>any \<rightarrow>\<close> \\
+    @{command_def "setup"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "local_setup"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "attribute_setup"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
   \end{matharray}
   \begin{tabular}{rcll}
-    @{attribute_def ML_print_depth} & : & @{text attribute} & default 10 \\
-    @{attribute_def ML_source_trace} & : & @{text attribute} & default @{text false} \\
-    @{attribute_def ML_exception_trace} & : & @{text attribute} & default @{text false} \\
+    @{attribute_def ML_print_depth} & : & \<open>attribute\<close> & default 10 \\
+    @{attribute_def ML_source_trace} & : & \<open>attribute\<close> & default \<open>false\<close> \\
+    @{attribute_def ML_exception_trace} & : & \<open>attribute\<close> & default \<open>false\<close> \\
   \end{tabular}
 
   @{rail \<open>
@@ -1133,7 +1124,7 @@
     @@{command attribute_setup} @{syntax name} '=' @{syntax text} @{syntax text}?
   \<close>}
 
-  \<^descr> @{command "SML_file"}~@{text "name"} reads and evaluates the
+  \<^descr> @{command "SML_file"}~\<open>name\<close> reads and evaluates the
   given Standard ML file.  Top-level SML bindings are stored within
   the theory context; the initial environment is restricted to the
   Standard ML implementation of Poly/ML, without the many add-ons of
@@ -1141,14 +1132,14 @@
   build larger Standard ML projects, independently of the regular
   Isabelle/ML environment.
 
-  \<^descr> @{command "ML_file"}~@{text "name"} reads and evaluates the
+  \<^descr> @{command "ML_file"}~\<open>name\<close> reads and evaluates the
   given ML file.  The current theory context is passed down to the ML
   toplevel and may be modified, using @{ML "Context.>>"} or derived ML
   commands.  Top-level ML bindings are stored within the (global or
   local) theory context.
   
-  \<^descr> @{command "ML"}~@{text "text"} is similar to @{command
-  "ML_file"}, but evaluates directly the given @{text "text"}.
+  \<^descr> @{command "ML"}~\<open>text\<close> is similar to @{command
+  "ML_file"}, but evaluates directly the given \<open>text\<close>.
   Top-level ML bindings are stored within the (global or local) theory
   context.
 
@@ -1163,8 +1154,8 @@
  updated.  @{command "ML_val"} echoes the bindings produced at the ML
   toplevel, but @{command "ML_command"} is silent.
   
-  \<^descr> @{command "setup"}~@{text "text"} changes the current theory
-  context by applying @{text "text"}, which refers to an ML expression
+  \<^descr> @{command "setup"}~\<open>text\<close> changes the current theory
+  context by applying \<open>text\<close>, which refers to an ML expression
  of type @{ML_type "theory -> theory"}.  This makes it possible to initialize
   any object-logic specific tools and packages written in ML, for
   example.
@@ -1175,9 +1166,8 @@
   invoke local theory specification packages without going through
   concrete outer syntax, for example.
 
-  \<^descr> @{command "attribute_setup"}~@{text "name = text description"}
-  defines an attribute in the current context.  The given @{text
-  "text"} has to be an ML expression of type
+  \<^descr> @{command "attribute_setup"}~\<open>name = text description\<close>
+  defines an attribute in the current context.  The given \<open>text\<close> has to be an ML expression of type
   @{ML_type "attribute context_parser"}, cf.\ basic parsers defined in
   structure @{ML_structure Args} and @{ML_structure Attrib}.
 
@@ -1232,14 +1222,14 @@
 
 text \<open>
   \begin{matharray}{rcll}
-    @{command_def "default_sort"} & : & @{text "local_theory \<rightarrow> local_theory"}
+    @{command_def "default_sort"} & : & \<open>local_theory \<rightarrow> local_theory\<close>
   \end{matharray}
 
   @{rail \<open>
     @@{command default_sort} @{syntax sort}
   \<close>}
 
-  \<^descr> @{command "default_sort"}~@{text s} makes sort @{text s} the
+  \<^descr> @{command "default_sort"}~\<open>s\<close> makes sort \<open>s\<close> the
   new default sort for any type variable that is given explicitly in
   the text, but lacks a sort constraint (wrt.\ the current context).
   Type variables generated by type inference are not affected.
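
  For instance, in an object-logic whose base sort is \<open>type\<close>
  (as in Isabelle/HOL), a hypothetical declaration might read:

    default_sort type
    typedecl 'a container  (* 'a implicitly receives sort "type" *)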
@@ -1258,8 +1248,8 @@
 
 text \<open>
   \begin{matharray}{rcll}
-    @{command_def "type_synonym"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "typedecl"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
+    @{command_def "type_synonym"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "typedecl"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1268,16 +1258,15 @@
     @@{command typedecl} @{syntax typespec} @{syntax mixfix}?
   \<close>}
 
-  \<^descr> @{command "type_synonym"}~@{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t = \<tau>"} introduces a
-  \<^emph>\<open>type synonym\<close> @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t"} for the existing type @{text
-  "\<tau>"}. Unlike the semantic type definitions in Isabelle/HOL, type synonyms
+  \<^descr> @{command "type_synonym"}~\<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t = \<tau>\<close> introduces a
+  \<^emph>\<open>type synonym\<close> \<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t\<close> for the existing type \<open>\<tau>\<close>. Unlike the semantic type definitions in Isabelle/HOL, type synonyms
   are merely syntactic abbreviations without any logical significance.
   Internally, type synonyms are fully expanded.
   
-  \<^descr> @{command "typedecl"}~@{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t"} declares a new
-  type constructor @{text t}.  If the object-logic defines a base sort
-  @{text s}, then the constructor is declared to operate on that, via
-  the axiomatic type-class instance @{text "t :: (s, \<dots>, s)s"}.
+  \<^descr> @{command "typedecl"}~\<open>(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t\<close> declares a new
+  type constructor \<open>t\<close>.  If the object-logic defines a base sort
+  \<open>s\<close>, then the constructor is declared to operate on that, via
+  the axiomatic type-class instance \<open>t :: (s, \<dots>, s)s\<close>.
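
  Two concrete sketches (names hypothetical):

    type_synonym 'a rel = "'a \<Rightarrow> 'a \<Rightarrow> bool"

    typedecl loc  (* opaque type constructor, no definition given *)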
 
 
   \begin{warn}
@@ -1298,18 +1287,17 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "consts"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "defs"} & : & @{text "theory \<rightarrow> theory"} \\
+    @{command_def "consts"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "defs"} & : & \<open>theory \<rightarrow> theory\<close> \\
   \end{matharray}
 
   Definitions essentially express abbreviations within the logic.  The
-  simplest form of a definition is @{text "c :: \<sigma> \<equiv> t"}, where @{text
-  c} is a newly declared constant.  Isabelle also allows derived forms
-  where the arguments of @{text c} appear on the left, abbreviating a
-  prefix of @{text \<lambda>}-abstractions, e.g.\ @{text "c \<equiv> \<lambda>x y. t"} may be
-  written more conveniently as @{text "c x y \<equiv> t"}.  Moreover,
+  simplest form of a definition is \<open>c :: \<sigma> \<equiv> t\<close>, where \<open>c\<close> is a newly declared constant.  Isabelle also allows derived forms
+  where the arguments of \<open>c\<close> appear on the left, abbreviating a
+  prefix of \<open>\<lambda>\<close>-abstractions, e.g.\ \<open>c \<equiv> \<lambda>x y. t\<close> may be
+  written more conveniently as \<open>c x y \<equiv> t\<close>.  Moreover,
   definitions may be weakened by adding arbitrary pre-conditions:
-  @{text "A \<Longrightarrow> c x y \<equiv> t"}.
+  \<open>A \<Longrightarrow> c x y \<equiv> t\<close>.
 
   \<^medskip>
   The built-in well-formedness conditions for definitional
@@ -1321,8 +1309,8 @@
   left-hand side.
 
   \<^item> All type variables on the right-hand side must also appear on
-  the left-hand side; this prohibits @{text "0 :: nat \<equiv> length ([] ::
-  \<alpha> list)"} for example.
+  the left-hand side; this prohibits \<open>0 :: nat \<equiv> length ([] ::
+  \<alpha> list)\<close> for example.
 
   \<^item> The definition must not be recursive.  Most object-logics
   provide definitional principles that can be used to express
@@ -1331,11 +1319,11 @@
 
   The right-hand side of overloaded definitions may mention overloaded constants
   recursively at type instances corresponding to the immediate
-  argument types @{text "\<beta>\<^sub>1, \<dots>, \<beta>\<^sub>n"}.  Incomplete
+  argument types \<open>\<beta>\<^sub>1, \<dots>, \<beta>\<^sub>n\<close>.  Incomplete
   specification patterns impose global constraints on all occurrences,
-  e.g.\ @{text "d :: \<alpha> \<times> \<alpha>"} on the left-hand side means that all
+  e.g.\ \<open>d :: \<alpha> \<times> \<alpha>\<close> on the left-hand side means that all
   corresponding occurrences on some right-hand side need to be an
-  instance of this, general @{text "d :: \<alpha> \<times> \<beta>"} will be disallowed.
+  instance of this; the general \<open>d :: \<alpha> \<times> \<beta>\<close> will be disallowed.
 
   @{rail \<open>
     @@{command consts} ((@{syntax name} '::' @{syntax type} @{syntax mixfix}?) +)
@@ -1345,20 +1333,19 @@
     opt: '(' @'unchecked'? @'overloaded'? ')'
   \<close>}
 
-  \<^descr> @{command "consts"}~@{text "c :: \<sigma>"} declares constant @{text
-  c} to have any instance of type scheme @{text \<sigma>}.  The optional
+  \<^descr> @{command "consts"}~\<open>c :: \<sigma>\<close> declares constant \<open>c\<close> to have any instance of type scheme \<open>\<sigma>\<close>.  The optional
   mixfix annotations may attach concrete syntax to the constants
   declared.
   
-  \<^descr> @{command "defs"}~@{text "name: eqn"} introduces @{text eqn}
+  \<^descr> @{command "defs"}~\<open>name: eqn\<close> introduces \<open>eqn\<close>
   as a definitional axiom for some existing constant.
   
-  The @{text "(unchecked)"} option disables global dependency checks
+  The \<open>(unchecked)\<close> option disables global dependency checks
   for this definition, which is occasionally useful for exotic
   overloading.  It is at the discretion of the user to avoid malformed
   theory specifications!
   
-  The @{text "(overloaded)"} option declares definitions to be
+  The \<open>(overloaded)\<close> option declares definitions to be
   potentially overloaded.  Unless this option is given, a warning
   message is issued for any definitional equation with a more
   special type than that of the corresponding constant declaration.
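
  A sketch of this (legacy) combination, with the argument of \<open>c\<close>
  appearing on the left as described above:

    consts double :: "nat \<Rightarrow> nat"
    defs double_def: "double n \<equiv> n + n"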
@@ -1369,8 +1356,8 @@
 
 text \<open>
   \begin{matharray}{rcll}
-    @{command_def "lemmas"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
-    @{command_def "named_theorems"} & : & @{text "local_theory \<rightarrow> local_theory"} \\
+    @{command_def "lemmas"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
+    @{command_def "named_theorems"} & : & \<open>local_theory \<rightarrow> local_theory\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1380,15 +1367,15 @@
     @@{command named_theorems} (@{syntax name} @{syntax text}? + @'and')
   \<close>}
 
-  \<^descr> @{command "lemmas"}~@{text "a = b\<^sub>1 \<dots> b\<^sub>n"}~@{keyword_def
-  "for"}~@{text "x\<^sub>1 \<dots> x\<^sub>m"} evaluates given facts (with attributes) in
+  \<^descr> @{command "lemmas"}~\<open>a = b\<^sub>1 \<dots> b\<^sub>n\<close>~@{keyword_def
+  "for"}~\<open>x\<^sub>1 \<dots> x\<^sub>m\<close> evaluates given facts (with attributes) in
   the current context, which may be augmented by local variables.
   Results are standardized before being stored, i.e.\ schematic
-  variables are renamed to enforce index @{text "0"} uniformly.
+  variables are renamed to enforce index \<open>0\<close> uniformly.
 
-  \<^descr> @{command "named_theorems"}~@{text "name description"} declares a
-  dynamic fact within the context. The same @{text name} is used to define
-  an attribute with the usual @{text add}/@{text del} syntax (e.g.\ see
+  \<^descr> @{command "named_theorems"}~\<open>name description\<close> declares a
+  dynamic fact within the context. The same \<open>name\<close> is used to define
+  an attribute with the usual \<open>add\<close>/\<open>del\<close> syntax (e.g.\ see
   \secref{sec:simp-rules}) to maintain the content incrementally, in
   canonical declaration order of the text structure.
 \<close>
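
  For example (fact and attribute names hypothetical):

    lemmas my_mp = mp [THEN mp]

    named_theorems my_simps "extra simplification rules"
    declare refl [my_simps]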
@@ -1398,7 +1385,7 @@
 
 text \<open>
   \begin{matharray}{rcll}
-    @{command_def "oracle"} & : & @{text "theory \<rightarrow> theory"} & (axiomatic!) \\
+    @{command_def "oracle"} & : & \<open>theory \<rightarrow> theory\<close> & (axiomatic!) \\
   \end{matharray}
 
   Oracles allow Isabelle to take advantage of external reasoners such
@@ -1420,8 +1407,8 @@
     @@{command oracle} @{syntax name} '=' @{syntax text}
   \<close>}
 
-  \<^descr> @{command "oracle"}~@{text "name = text"} turns the given ML
-  expression @{text "text"} of type @{ML_text "'a -> cterm"} into an
+  \<^descr> @{command "oracle"}~\<open>name = text\<close> turns the given ML
+  expression \<open>text\<close> of type @{ML_text "'a -> cterm"} into an
   ML function of type @{ML_text "'a -> thm"}, which is bound to the
   global identifier @{ML_text name}.  This acts like an infinitary
   specification of axioms!  Invoking the oracle only works within the
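
  A deliberately dangerous sketch: a hypothetical oracle that asserts
  any given proposition, instantiating 'a -> cterm with 'a = cterm:

    oracle trust_me = \<open>fn ct => ct\<close>  (* trust_me: cterm -> thm *)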
@@ -1438,10 +1425,10 @@
 
 text \<open>
   \begin{matharray}{rcl}
-    @{command_def "hide_class"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "hide_type"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "hide_const"} & : & @{text "theory \<rightarrow> theory"} \\
-    @{command_def "hide_fact"} & : & @{text "theory \<rightarrow> theory"} \\
+    @{command_def "hide_class"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "hide_type"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "hide_const"} & : & \<open>theory \<rightarrow> theory\<close> \\
+    @{command_def "hide_fact"} & : & \<open>theory \<rightarrow> theory\<close> \\
   \end{matharray}
 
   @{rail \<open>
@@ -1455,14 +1442,14 @@
   name spaces by hand, yet the following commands provide some way to
   do so.
 
-  \<^descr> @{command "hide_class"}~@{text names} fully removes class
-  declarations from a given name space; with the @{text "(open)"}
+  \<^descr> @{command "hide_class"}~\<open>names\<close> fully removes class
+  declarations from a given name space; with the \<open>(open)\<close>
   option, only the unqualified base name is hidden.
 
   Note that hiding name space accesses has no impact on logical
   declarations --- they remain valid internally.  Entities that are no
   longer accessible to the user are printed with the special qualifier
-  ``@{text "??"}'' prefixed to the full internal name.
+  ``\<open>??\<close>'' prefixed to the full internal name.
 
   \<^descr> @{command "hide_type"}, @{command "hide_const"}, and @{command
   "hide_fact"} are similar to @{command "hide_class"}, but hide types,
--- a/src/Doc/Isar_Ref/Symbols.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Symbols.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -6,26 +6,24 @@
 
 text \<open>
   Isabelle supports an infinite number of non-ASCII symbols, which are
-  represented in source text as @{verbatim \<open>\\<close>}@{verbatim "<"}@{text
-  name}@{verbatim ">"} (where @{text name} may be any identifier).  It
+  represented in source text as @{verbatim \<open>\\<close>}@{verbatim "<"}\<open>name\<close>@{verbatim ">"} (where \<open>name\<close> may be any identifier).  It
   is left to front-end tools how to present these symbols to the user.
   The collection of predefined standard symbols given below is
   available by default for Isabelle document output, due to
-  appropriate definitions of @{verbatim \<open>\isasym\<close>}@{text
-  name} for each @{verbatim \<open>\\<close>}@{verbatim "<"}@{text name}@{verbatim
+  appropriate definitions of @{verbatim \<open>\isasym\<close>}\<open>name\<close> for each @{verbatim \<open>\\<close>}@{verbatim "<"}\<open>name\<close>@{verbatim
   ">"} in the @{verbatim isabellesym.sty} file.  Most of these symbols
   are displayed properly in Isabelle/jEdit and {\LaTeX} generated from Isabelle.
 
   Moreover, any single symbol (or ASCII character) may be prefixed by
   @{verbatim "\<^sup>"} for superscript and @{verbatim "\<^sub>"} for subscript,
-  such as @{verbatim "A\<^sup>\<star>"} for @{text "A\<^sup>\<star>"} and @{verbatim "A\<^sub>1"} for
-  @{text "A\<^sub>1"}.  Sub- and superscripts that span a region of text can
-  be marked up with @{verbatim "\<^bsub>"}@{text "\<dots>"}@{verbatim
-  "\<^esub>"} and @{verbatim "\<^bsup>"}@{text "\<dots>"}@{verbatim "\<^esup>"}
+  such as @{verbatim "A\<^sup>\<star>"} for \<open>A\<^sup>\<star>\<close> and @{verbatim "A\<^sub>1"} for
+  \<open>A\<^sub>1\<close>.  Sub- and superscripts that span a region of text can
+  be marked up with @{verbatim "\<^bsub>"}\<open>\<dots>\<close>@{verbatim
+  "\<^esub>"} and @{verbatim "\<^bsup>"}\<open>\<dots>\<close>@{verbatim "\<^esup>"}
   respectively, but note that there are limitations in the typographic
   rendering quality of this form.  Furthermore, all ASCII characters
   and most other symbols may be printed in bold by prefixing
-  @{verbatim "\<^bold>"} such as @{verbatim "\<^bold>\<alpha>"} for @{text "\<^bold>\<alpha>"}.  Note that
+  @{verbatim "\<^bold>"} such as @{verbatim "\<^bold>\<alpha>"} for \<open>\<^bold>\<alpha>\<close>.  Note that
   @{verbatim "\<^sup>"}, @{verbatim "\<^sub>"}, @{verbatim "\<^bold>"} cannot be combined.
 
   Further details of Isabelle document preparation are covered in
--- a/src/Doc/Isar_Ref/Synopsis.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/Isar_Ref/Synopsis.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -138,10 +138,9 @@
   misleading and hard to maintain.
 
   \<^item> Natural numbers can be used as ``meaningless'' names (more
-  appropriate than @{text "a1"}, @{text "a2"} etc.)
+  appropriate than \<open>a1\<close>, \<open>a2\<close> etc.)
 
-  \<^item> Symbolic identifiers are supported (e.g. @{text "*"}, @{text
-  "**"}, @{text "***"}).
+  \<^item> Symbolic identifiers are supported (e.g. \<open>*\<close>, \<open>**\<close>, \<open>***\<close>).
 \<close>
 
 
@@ -220,13 +219,13 @@
 subsection \<open>Special names in Isar proofs\<close>
 
 text \<open>
-  \<^item> term @{text "?thesis"} --- the main conclusion of the
+  \<^item> term \<open>?thesis\<close> --- the main conclusion of the
   innermost pending claim
 
-  \<^item> term @{text "\<dots>"} --- the argument of the last explicitly
+  \<^item> term \<open>\<dots>\<close> --- the argument of the last explicitly
   stated result (for infix application this is the right-hand side)
 
-  \<^item> fact @{text "this"} --- the last result produced in the text
+  \<^item> fact \<open>this\<close> --- the last result produced in the text
 \<close>
 
 notepad
@@ -242,15 +241,14 @@
 end
 
 text \<open>Calculational reasoning maintains the special fact called
-  ``@{text calculation}'' in the background.  Certain language
-  elements combine primary @{text this} with secondary @{text
-  calculation}.\<close>
+  ``\<open>calculation\<close>'' in the background.  Certain language
+  elements combine primary \<open>this\<close> with secondary \<open>calculation\<close>.\<close>
 
 
 subsection \<open>Transitive chains\<close>
 
-text \<open>The Idea is to combine @{text this} and @{text calculation}
-  via typical @{text trans} rules (see also @{command
+text \<open>The idea is to combine \<open>this\<close> and \<open>calculation\<close>
+  via typical \<open>trans\<close> rules (see also @{command
   print_trans_rules}):\<close>
 
 thm trans
@@ -268,7 +266,7 @@
   finally
   have "a = d" .
 
-  txt \<open>Variant using the @{text "\<dots>"} abbreviation:\<close>
+  txt \<open>Variant using the \<open>\<dots>\<close> abbreviation:\<close>
   have "a = b" sorry
   also
   have "\<dots> = c" sorry
@@ -305,7 +303,7 @@
 subsubsection \<open>Notes\<close>
 
 text \<open>
-  \<^item> The notion of @{text trans} rule is very general due to the
+  \<^item> The notion of \<open>trans\<close> rule is very general due to the
   flexibility of Isabelle/Pure rule composition.
 
   \<^item> User applications may declare their own rules, with some care
@@ -315,7 +313,7 @@
 
 subsection \<open>Degenerate calculations and bigstep reasoning\<close>
 
-text \<open>The Idea is to append @{text this} to @{text calculation},
+text \<open>The idea is to append \<open>this\<close> to \<open>calculation\<close>,
   without rule composition.\<close>
 
 notepad
@@ -443,7 +441,7 @@
   rule.  This achieves convenient proof patterns, thanks to some
   internal trickery in the @{method induct} method.
 
-  Important: Using compact HOL formulae with @{text "\<forall>/\<longrightarrow>"} is a
+  Important: Using compact HOL formulae with \<open>\<forall>/\<longrightarrow>\<close> is a
   well-known anti-pattern! It would produce useless formal noise.
 \<close>
 
@@ -551,7 +549,7 @@
   Isabelle/Pure ``theorems'' are always natural deduction rules,
   which sometimes happen to consist of a conclusion only.
 
-  The framework connectives @{text "\<And>"} and @{text "\<Longrightarrow>"} indicate the
+  The framework connectives \<open>\<And>\<close> and \<open>\<Longrightarrow>\<close> indicate the
   rule structure declaratively.  For example:\<close>
 
 thm conjI
@@ -669,8 +667,8 @@
   \<^item> closing of branches by @{inference assumption}
 
 
-  Both principles involve higher-order unification of @{text \<lambda>}-terms
-  modulo @{text "\<alpha>\<beta>\<eta>"}-equivalence (cf.\ Huet and Miller).
+  Both principles involve higher-order unification of \<open>\<lambda>\<close>-terms
+  modulo \<open>\<alpha>\<beta>\<eta>\<close>-equivalence (cf.\ Huet and Miller).
 \<close>
 
 notepad
@@ -885,8 +883,7 @@
 
 subsubsection \<open>Example: set-theoretic operators\<close>
 
-text \<open>There is nothing special about logical connectives (@{text
-  "\<and>"}, @{text "\<or>"}, @{text "\<forall>"}, @{text "\<exists>"} etc.).  Operators from
+text \<open>There is nothing special about logical connectives (\<open>\<and>\<close>, \<open>\<or>\<close>, \<open>\<forall>\<close>, \<open>\<exists>\<close> etc.).  Operators from
   set-theory or lattice-theory work analogously.  It is only a matter
   of rule declarations in the library; rules can be also specified
   explicitly.
@@ -979,7 +976,7 @@
   qed
 end
 
-text \<open>Here @{text "?thesis"} is used to refer to the unchanged goal
+text \<open>Here \<open>?thesis\<close> is used to refer to the unchanged goal
   statement.\<close>
 
 
--- a/src/Doc/JEdit/JEdit.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/JEdit/JEdit.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -448,7 +448,7 @@
   grouped according to some simple rules, e.g.\ as plain ``@{verbatim a}'' or
   symbolic ``@{verbatim "\<alpha>"}''. For the editor front-end, a certain subset of
   symbols is rendered physically via Unicode glyphs, in order to show
-  ``@{verbatim "\<alpha>"}'' as ``@{text "\<alpha>"}'', for example. This symbol
+  ``@{verbatim "\<alpha>"}'' as ``\<open>\<alpha>\<close>'', for example. This symbol
   interpretation is specified by the Isabelle system distribution in @{file
   "$ISABELLE_HOME/etc/symbols"} and may be augmented by the user in
   @{file_unchecked "$ISABELLE_HOME_USER/etc/symbols"}.
@@ -470,7 +470,7 @@
   accidentally, or malformed UTF-8 sequences in the text force jEdit to fall
   back on a different encoding like @{verbatim "ISO-8859-15"}. In that case,
   verbatim ``@{verbatim "\<alpha>"}'' will be shown in the text buffer instead of its
-  Unicode rendering ``@{text "\<alpha>"}''. The jEdit menu operation \<^emph>\<open>File~/
+  Unicode rendering ``\<open>\<alpha>\<close>''. The jEdit menu operation \<^emph>\<open>File~/
   Reload with Encoding~/ UTF-8-Isabelle\<close> helps to resolve such problems (after
   repairing malformed parts of the text).
 
@@ -536,20 +536,20 @@
   \<^medskip>
   \begin{tabular}{lll}
     \<^bold>\<open>symbol\<close> & \<^bold>\<open>name with backslash\<close> & \<^bold>\<open>abbreviation\<close> \\\hline
-    @{text "\<lambda>"} & @{verbatim "\\lambda"} & @{verbatim "%"} \\
-    @{text "\<Rightarrow>"} & @{verbatim "\\Rightarrow"} & @{verbatim "=>"} \\
-    @{text "\<Longrightarrow>"} & @{verbatim "\\Longrightarrow"} & @{verbatim "==>"} \\[0.5ex]
-    @{text "\<And>"} & @{verbatim "\\And"} & @{verbatim "!!"} \\
-    @{text "\<equiv>"} & @{verbatim "\\equiv"} & @{verbatim "=="} \\[0.5ex]
-    @{text "\<forall>"} & @{verbatim "\\forall"} & @{verbatim "!"} \\
-    @{text "\<exists>"} & @{verbatim "\\exists"} & @{verbatim "?"} \\
-    @{text "\<longrightarrow>"} & @{verbatim "\\longrightarrow"} & @{verbatim "-->"} \\
-    @{text "\<and>"} & @{verbatim "\\and"} & @{verbatim "&"} \\
-    @{text "\<or>"} & @{verbatim "\\or"} & @{verbatim "|"} \\
-    @{text "\<not>"} & @{verbatim "\\not"} & @{verbatim "~"} \\
-    @{text "\<noteq>"} & @{verbatim "\\noteq"} & @{verbatim "~="} \\
-    @{text "\<in>"} & @{verbatim "\\in"} & @{verbatim ":"} \\
-    @{text "\<notin>"} & @{verbatim "\\notin"} & @{verbatim "~:"} \\
+    \<open>\<lambda>\<close> & @{verbatim "\\lambda"} & @{verbatim "%"} \\
+    \<open>\<Rightarrow>\<close> & @{verbatim "\\Rightarrow"} & @{verbatim "=>"} \\
+    \<open>\<Longrightarrow>\<close> & @{verbatim "\\Longrightarrow"} & @{verbatim "==>"} \\[0.5ex]
+    \<open>\<And>\<close> & @{verbatim "\\And"} & @{verbatim "!!"} \\
+    \<open>\<equiv>\<close> & @{verbatim "\\equiv"} & @{verbatim "=="} \\[0.5ex]
+    \<open>\<forall>\<close> & @{verbatim "\\forall"} & @{verbatim "!"} \\
+    \<open>\<exists>\<close> & @{verbatim "\\exists"} & @{verbatim "?"} \\
+    \<open>\<longrightarrow>\<close> & @{verbatim "\\longrightarrow"} & @{verbatim "-->"} \\
+    \<open>\<and>\<close> & @{verbatim "\\and"} & @{verbatim "&"} \\
+    \<open>\<or>\<close> & @{verbatim "\\or"} & @{verbatim "|"} \\
+    \<open>\<not>\<close> & @{verbatim "\\not"} & @{verbatim "~"} \\
+    \<open>\<noteq>\<close> & @{verbatim "\\noteq"} & @{verbatim "~="} \\
+    \<open>\<in>\<close> & @{verbatim "\\in"} & @{verbatim ":"} \\
+    \<open>\<notin>\<close> & @{verbatim "\\notin"} & @{verbatim "~:"} \\
   \end{tabular}
   \<^medskip>
  
@@ -1142,11 +1142,11 @@
   \<^descr> @{verbatim "`"} (single ASCII back-quote) supports \<^emph>\<open>quotations\<close>
   via text cartouches. There are three selections, which are always presented
   in the same order and do not depend on any context information. The default
-  choice produces a template ``@{text "\<open>\<box>\<close>"}'', where the box indicates the
+  choice produces a template ``\<open>\<open>\<box>\<close>\<close>'', where the box indicates the
   cursor position after insertion; the other choices help to repair the block
   structure of unbalanced text cartouches.
 
-  \<^descr> @{verbatim "@{"} is completed to the template ``@{text "@{\<box>}"}'',
+  \<^descr> @{verbatim "@{"} is completed to the template ``\<open>@{\<box>}\<close>'',
   where the box indicates the cursor position after insertion. Here it is
   convenient to use the wildcard ``@{verbatim __}'' or a more specific name
   prefix to let semantic completion of name-space entries propose
@@ -1176,7 +1176,7 @@
   specify an alternative replacement string to be inserted instead of the
   keyword itself. An empty string means to suppress the keyword altogether,
   which is occasionally useful to avoid confusion, e.g.\ the rare keyword
-  @{command simproc_setup} vs.\ the frequent name-space entry @{text simp}.
+  @{command simproc_setup} vs.\ the frequent name-space entry \<open>simp\<close>.
 \<close>
 
 
@@ -1198,7 +1198,7 @@
   \<^medskip>
 
   When inserted into the text, the above examples all produce the same Unicode
-  rendering @{text "\<forall>"} of the underlying symbol @{verbatim "\<forall>"}.
+  rendering \<open>\<forall>\<close> of the underlying symbol @{verbatim "\<forall>"}.
 
   A symbol abbreviation that is a plain word, like @{verbatim "ALL"}, is
   treated like a syntax keyword. Non-word abbreviations like @{verbatim "-->"}
@@ -1306,7 +1306,7 @@
   The prover may produce \<^emph>\<open>no completion\<close> markup in exceptional
   situations, to tell that some language keywords should be excluded from
   further completion attempts. For example, @{verbatim ":"} within accepted
-  Isar syntax looses its meaning as abbreviation for symbol @{text "\<in>"}.
+  Isar syntax loses its meaning as abbreviation for symbol \<open>\<in>\<close>.
 
   \<^medskip>
   The completion context is \<^emph>\<open>ignored\<close> for built-in templates and
@@ -1425,7 +1425,7 @@
   \<^emph>\<open>HyperSearch\<close>: clicking on one of the items in the \<^emph>\<open>HyperSearch
   Results\<close> window makes jEdit select all its occurrences in the corresponding
   line of text. Then explicit completion can be invoked via @{verbatim "C+b"},
-  e.g.\ to replace occurrences of @{verbatim "-->"} by @{text "\<longrightarrow>"}.
+  e.g.\ to replace occurrences of @{verbatim "-->"} by \<open>\<longrightarrow>\<close>.
 
   \<^medskip>
   Insertion works by removing and inserting pieces of text from the
@@ -1662,7 +1662,7 @@
   "isabelle-system"} are smart enough to assemble the result, based on the
   session directory layout.
 
-  The document antiquotation @{text "@{cite}"} is described in @{cite
+  The document antiquotation \<open>@{cite}\<close> is described in @{cite
   "isabelle-isar-ref"}. Within the Prover IDE it provides semantic markup for
   tooltips, hyperlinks, and completion for Bib{\TeX} database entries.
   Isabelle/jEdit does \<^emph>\<open>not\<close> know about the actual Bib{\TeX} environment
--- a/src/Doc/System/Basics.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/System/Basics.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -136,9 +136,9 @@
   This is a reference of common Isabelle settings variables. Note that
   the list is somewhat open-ended. Third-party utilities or interfaces
   may add their own selection. Variables that are special in some
-  sense are marked with @{text "\<^sup>*"}.
+  sense are marked with \<open>\<^sup>*\<close>.
 
-  \<^descr>[@{setting_def USER_HOME}@{text "\<^sup>*"}] Is the cross-platform
+  \<^descr>[@{setting_def USER_HOME}\<open>\<^sup>*\<close>] is the cross-platform
   user home directory.  On Unix systems this is usually the same as
   @{setting HOME}, but on Windows it is the regular home directory of
   the user, not the one within the Cygwin root
@@ -146,7 +146,7 @@
   its HOME should point to the @{file_unchecked "/home"} directory tree or the
   Windows user home.}
 
-  \<^descr>[@{setting_def ISABELLE_HOME}@{text "\<^sup>*"}] is the location of the
+  \<^descr>[@{setting_def ISABELLE_HOME}\<open>\<^sup>*\<close>] is the location of the
   top-level Isabelle distribution directory. This is automatically
   determined from the Isabelle executable that has been invoked.  Do
   not attempt to set @{setting ISABELLE_HOME} yourself from the shell!
@@ -160,21 +160,21 @@
   defaults may be overridden by a private @{verbatim
   "$ISABELLE_HOME_USER/etc/settings"}.
 
-  \<^descr>[@{setting_def ISABELLE_PLATFORM_FAMILY}@{text "\<^sup>*"}] is
+  \<^descr>[@{setting_def ISABELLE_PLATFORM_FAMILY}\<open>\<^sup>*\<close>] is
   automatically set to the general platform family: @{verbatim linux},
   @{verbatim macos}, @{verbatim windows}.  Note that
   platform-dependent tools usually need to refer to the more specific
   identification according to @{setting ISABELLE_PLATFORM}, @{setting
   ISABELLE_PLATFORM32}, @{setting ISABELLE_PLATFORM64}.
 
-  \<^descr>[@{setting_def ISABELLE_PLATFORM}@{text "\<^sup>*"}] is automatically
+  \<^descr>[@{setting_def ISABELLE_PLATFORM}\<open>\<^sup>*\<close>] is automatically
   set to a symbolic identifier for the underlying hardware and
   operating system.  The Isabelle platform identification always
   refers to the 32 bit variant, even if this is a 64 bit machine.  Note
   that the ML or Java runtime may have a different idea, depending on
   which binaries are actually run.
 
-  \<^descr>[@{setting_def ISABELLE_PLATFORM64}@{text "\<^sup>*"}] is similar to
+  \<^descr>[@{setting_def ISABELLE_PLATFORM64}\<open>\<^sup>*\<close>] is similar to
   @{setting ISABELLE_PLATFORM} but refers to the proper 64 bit variant
   on a platform that supports this; the value is empty for 32 bit.
   Note that the following bash expression (including the quotes)
@@ -182,20 +182,20 @@
 
   @{verbatim [display] \<open>"${ISABELLE_PLATFORM64:-$ISABELLE_PLATFORM}"\<close>}
 
-  \<^descr>[@{setting_def ISABELLE_PROCESS}@{text "\<^sup>*"}, @{setting
-  ISABELLE_TOOL}@{text "\<^sup>*"}] are automatically set to the full path
+  \<^descr>[@{setting_def ISABELLE_PROCESS}\<open>\<^sup>*\<close>, @{setting
+  ISABELLE_TOOL}\<open>\<^sup>*\<close>] are automatically set to the full path
   names of the @{executable "isabelle_process"} and @{executable
   isabelle} executables, respectively.  Thus other tools and scripts
   need not assume that the @{file "$ISABELLE_HOME/bin"} directory is
   on the current search path of the shell.
   
-  \<^descr>[@{setting_def ISABELLE_IDENTIFIER}@{text "\<^sup>*"}] refers
+  \<^descr>[@{setting_def ISABELLE_IDENTIFIER}\<open>\<^sup>*\<close>] refers
   to the name of this Isabelle distribution, e.g.\ ``@{verbatim
   Isabelle2012}''.
 
   \<^descr>[@{setting_def ML_SYSTEM}, @{setting_def ML_HOME},
   @{setting_def ML_OPTIONS}, @{setting_def ML_PLATFORM}, @{setting_def
-  ML_IDENTIFIER}@{text "\<^sup>*"}] specify the underlying ML system
+  ML_IDENTIFIER}\<open>\<^sup>*\<close>] specify the underlying ML system
   to be used for Isabelle.  There is only a fixed set of admissible
   @{setting ML_SYSTEM} names (see the @{file
   "$ISABELLE_HOME/etc/settings"} file of the distribution).
@@ -208,7 +208,7 @@
   automatically obtained by composing the values of @{setting
   ML_SYSTEM}, @{setting ML_PLATFORM} and the Isabelle version values.
 
-  \<^descr>[@{setting_def ML_SYSTEM_POLYML}@{text "\<^sup>*"}] is @{verbatim true}
+  \<^descr>[@{setting_def ML_SYSTEM_POLYML}\<open>\<^sup>*\<close>] is @{verbatim true}
   for @{setting ML_SYSTEM} values derived from Poly/ML, as opposed to
   SML/NJ where it is empty.  This is particularly useful with the
   build option @{system_option condition}
@@ -227,7 +227,7 @@
   looking up heap files, the value of @{setting ML_IDENTIFIER} is
   appended to each component internally.
   
-  \<^descr>[@{setting_def ISABELLE_OUTPUT}@{text "\<^sup>*"}] is a
+  \<^descr>[@{setting_def ISABELLE_OUTPUT}\<open>\<^sup>*\<close>] is a
   directory where output heap files should be stored by default. The
   ML system and Isabelle version identifier is appended here, too.
   
@@ -261,7 +261,7 @@
   \<^descr>[@{setting_def DVI_VIEWER}] specifies the program to be used
   for displaying @{verbatim dvi} files.
   
-  \<^descr>[@{setting_def ISABELLE_TMP_PREFIX}@{text "\<^sup>*"}] is the
+  \<^descr>[@{setting_def ISABELLE_TMP_PREFIX}\<open>\<^sup>*\<close>] is the
   prefix from which any running @{executable "isabelle_process"}
   derives an individual directory for temporary files.
 \<close>
--- a/src/Doc/System/Misc.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/System/Misc.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -141,25 +141,24 @@
 
   \begin{center}\small
   \begin{tabular}{rcl}
-    @{text graph} & @{text "="} & @{text "{ vertex"}~@{verbatim ";"}~@{text "}+"} \\
-    @{text vertex} & @{text "="} & @{text "vertex_name vertex_ID dir_name ["}~@{verbatim "+"}~@{text "] path ["}~@{verbatim "<"}~@{text "|"}~@{verbatim ">"}~@{text "] { vertex_ID }*"}
+    \<open>graph\<close> & \<open>=\<close> & \<open>{ vertex\<close>~@{verbatim ";"}~\<open>}+\<close> \\
+    \<open>vertex\<close> & \<open>=\<close> & \<open>vertex_name vertex_ID dir_name [\<close>~@{verbatim "+"}~\<open>] path [\<close>~@{verbatim "<"}~\<open>|\<close>~@{verbatim ">"}~\<open>] { vertex_ID }*\<close>
   \end{tabular}
   \end{center}
 
   The meaning of the items in a vertex description is as follows:
 
-  \<^descr>[@{text vertex_name}] The name of the vertex.
+  \<^descr>[\<open>vertex_name\<close>] The name of the vertex.
 
-  \<^descr>[@{text vertex_ID}] The vertex identifier. Note that there may
+  \<^descr>[\<open>vertex_ID\<close>] The vertex identifier. Note that there may
   be several vertices with equal names, whereas identifiers must be
   unique.
 
-  \<^descr>[@{text dir_name}] The name of the ``directory'' the vertex
-  should be placed in.  A ``@{verbatim "+"}'' sign after @{text
-  dir_name} indicates that the nodes in the directory are initially
+  \<^descr>[\<open>dir_name\<close>] The name of the ``directory'' the vertex
+  should be placed in.  A ``@{verbatim "+"}'' sign after \<open>dir_name\<close> indicates that the nodes in the directory are initially
   visible. Directories are initially invisible by default.
 
-  \<^descr>[@{text path}] The path of the corresponding theory file. This
+  \<^descr>[\<open>path\<close>] The path of the corresponding theory file. This
   is specified relative to the path of the graph definition file.
 
   \<^descr>[List of successor/predecessor nodes] A ``@{verbatim "<"}''
@@ -330,8 +329,7 @@
   With the @{verbatim "-a"} option, one may inspect the full process
   environment that Isabelle-related programs are run in. This usually
   contains many more variables than are actually Isabelle settings.
-  Normally, output is a list of lines of the form @{text
-  name}@{verbatim "="}@{text value}. The @{verbatim "-b"} option
+  Normally, output is a list of lines of the form \<open>name\<close>@{verbatim "="}\<open>value\<close>. The @{verbatim "-b"} option
   causes only the values to be printed.
 
   Option @{verbatim "-d"} produces a dump of the complete environment
@@ -373,7 +371,7 @@
   The @{verbatim "-d"} option overrides the current Isabelle
   distribution directory as determined by @{setting ISABELLE_HOME}.
 
-  The @{text BINDIR} argument tells where executable wrapper scripts
+  The \<open>BINDIR\<close> argument tells where executable wrapper scripts
   for @{executable "isabelle_process"} and @{executable isabelle}
   should be placed, which is typically a directory in the shell's
   @{setting PATH}, such as @{verbatim "$HOME/bin"}.
@@ -398,7 +396,7 @@
     -q           quiet mode\<close>}
 
   Option @{verbatim "-n"} specifies an alternative (base) name for the
-  generated files.  The default is @{verbatim "isabelle_"}@{text xyz}
+  generated files.  The default is @{verbatim "isabelle_"}\<open>xyz\<close>
   in lower-case.
 
   Option @{verbatim "-q"} omits printing of the result file name.
@@ -444,31 +442,31 @@
   comments).
 
   \<^enum> Markup elements are represented via ASCII control characters
-  @{text "\<^bold>X = 5"} and @{text "\<^bold>Y = 6"} as follows:
+  \<open>\<^bold>X = 5\<close> and \<open>\<^bold>Y = 6\<close> as follows:
 
   \begin{tabular}{ll}
     XML & YXML \\\hline
-    @{verbatim "<"}@{text "name attribute"}@{verbatim "="}@{text "value \<dots>"}@{verbatim ">"} &
-    @{text "\<^bold>X\<^bold>Yname\<^bold>Yattribute"}@{verbatim "="}@{text "value\<dots>\<^bold>X"} \\
-    @{verbatim "</"}@{text name}@{verbatim ">"} & @{text "\<^bold>X\<^bold>Y\<^bold>X"} \\
+    @{verbatim "<"}\<open>name attribute\<close>@{verbatim "="}\<open>value \<dots>\<close>@{verbatim ">"} &
+    \<open>\<^bold>X\<^bold>Yname\<^bold>Yattribute\<close>@{verbatim "="}\<open>value\<dots>\<^bold>X\<close> \\
+    @{verbatim "</"}\<open>name\<close>@{verbatim ">"} & \<open>\<^bold>X\<^bold>Y\<^bold>X\<close> \\
   \end{tabular}
 
   There is no special case for empty body text, i.e.\ @{verbatim
   "<foo/>"} is treated like @{verbatim "<foo></foo>"}.  Also note that
-  @{text "\<^bold>X"} and @{text "\<^bold>Y"} may never occur in
+  \<open>\<^bold>X\<close> and \<open>\<^bold>Y\<close> may never occur in
   well-formed XML documents.
 
 
   Parsing YXML is pretty straightforward: split the text into chunks
-  separated by @{text "\<^bold>X"}, then split each chunk into
-  sub-chunks separated by @{text "\<^bold>Y"}.  Markup chunks start
+  separated by \<open>\<^bold>X\<close>, then split each chunk into
+  sub-chunks separated by \<open>\<^bold>Y\<close>.  Markup chunks start
   with an empty sub-chunk, and a second empty sub-chunk indicates
   close of an element.  Any other non-empty chunk consists of plain
   text.  For example, see @{file "~~/src/Pure/PIDE/yxml.ML"} or
   @{file "~~/src/Pure/PIDE/yxml.scala"}.
 
   YXML documents may be detected quickly by checking that the first
-  two characters are @{text "\<^bold>X\<^bold>Y"}.
+  two characters are \<open>\<^bold>X\<^bold>Y\<close>.
 \<close>
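
  A minimal Standard ML sketch of this chunking scheme, using basis
  library functions only (not the distribution's implementation):

    ML \<open>
      (* chunks at X (code 5); String.fields keeps empty fields, so a
         markup chunk is recognized by its leading empty sub-chunk *)
      val chunks = String.tokens (fn c => c = #"\005");
      val sub_chunks = String.fields (fn c => c = #"\006");
    \<close>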
 
 end
\ No newline at end of file
--- a/src/Doc/System/Presentation.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/System/Presentation.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -115,7 +115,7 @@
 
   Prepare session root DIR (default: current directory).\<close>}
 
-  The results are placed in the given directory @{text dir}, which
+  The results are placed in the given directory \<open>dir\<close>, which
   refers to the current directory by default.  The @{tool mkroot} tool
   is conservative in the sense that it does not overwrite existing
   files or directories.  Earlier attempts to generate a session root
@@ -123,7 +123,7 @@
 
   \<^medskip>
   Option @{verbatim "-d"} indicates that the session shall be
-  accompanied by a formal document, with @{text DIR}@{verbatim
+  accompanied by a formal document, with \<open>DIR\<close>@{verbatim
   "/document/root.tex"} as its {\LaTeX} entry point (see also
   \chref{ch:present}).
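
  For example, to prepare a new session root with document setup
  (directory name hypothetical):

    isabelle mkroot -d MySession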
 
@@ -188,10 +188,9 @@
   \<^medskip>
   The @{verbatim "-t"} option tells {\LaTeX} how to interpret
   tagged Isabelle command regions.  Tags are specified as a comma
-  separated list of modifier/name pairs: ``@{verbatim "+"}@{text
-  foo}'' (or just ``@{text foo}'') means to keep, ``@{verbatim
-  "-"}@{text foo}'' to drop, and ``@{verbatim "/"}@{text foo}'' to
-  fold text tagged as @{text foo}.  The builtin default is equivalent
+  separated list of modifier/name pairs: ``@{verbatim "+"}\<open>foo\<close>'' (or just ``\<open>foo\<close>'') means to keep, ``@{verbatim
+  "-"}\<open>foo\<close>'' to drop, and ``@{verbatim "/"}\<open>foo\<close>'' to
+  fold text tagged as \<open>foo\<close>.  The builtin default is equivalent
   to the tag specification ``@{verbatim
   "+theory,+proof,+ML,+visible,-invisible"}''; see also the {\LaTeX}
   macros @{verbatim "\\isakeeptag"}, @{verbatim "\\isadroptag"}, and
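
  For instance, a hypothetical variant that keeps theory text but
  folds proofs and ML:

    -t "+theory,/proof,/ML,+visible,-invisible"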
@@ -225,9 +224,8 @@
   When running the session, Isabelle copies the content of
   the original @{verbatim document} directory into its proper place
   within @{setting ISABELLE_BROWSER_INFO}, according to the session
-  path and document variant.  Then, for any processed theory @{text A}
-  some {\LaTeX} source is generated and put there as @{text
-  A}@{verbatim ".tex"}.  Furthermore, a list of all generated theory
+  path and document variant.  Then, for any processed theory \<open>A\<close>
+  some {\LaTeX} source is generated and put there as \<open>A\<close>@{verbatim ".tex"}.  Furthermore, a list of all generated theory
   files is put into @{verbatim session.tex}.  Typically, the root
   {\LaTeX} file provided by the user would include @{verbatim
   session.tex} to get a document containing all the theories.
@@ -242,8 +240,8 @@
   @{verbatim "\\"}@{verbatim "<forall>"}) then @{verbatim
   "isabellesym.sty"} should be included as well.  This package
   contains a standard set of {\LaTeX} macro definitions @{verbatim
-  "\\isasym"}@{text foo} corresponding to @{verbatim "\\"}@{verbatim
-  "<"}@{text foo}@{verbatim ">"}, see @{cite "isabelle-implementation"} for a
+  "\\isasym"}\<open>foo\<close> corresponding to @{verbatim "\\"}@{verbatim
+  "<"}\<open>foo\<close>@{verbatim ">"}, see @{cite "isabelle-implementation"} for a
   complete list of predefined Isabelle symbols.  Users may invent
   further symbols as well, just by providing {\LaTeX} macros in a
   similar fashion as in @{file "~~/lib/texinputs/isabellesym.sty"} of
--- a/src/Doc/System/Sessions.thy	Tue Oct 20 23:03:46 2015 +0200
+++ b/src/Doc/System/Sessions.thy	Tue Oct 20 23:53:40 2015 +0200
@@ -17,7 +17,7 @@
   Application sessions are built on a given parent session, which may
   be built recursively on other parents.  Following this path in the
   hierarchy eventually leads to some major object-logic session like
-  @{text "HOL"}, which itself is based on @{text "Pure"} as the common
+  \<open>HOL\<close>, which itself is based on \<open>Pure\<close> as the common
   root of all sessions.
 
   Processing sessions may take considerable time.  Isabelle build
@@ -42,7 +42,7 @@
   session_entry} is given as syntax diagram below; each ROOT file may
   contain multiple specifications like this.  Chapters help to
   organize browser info (\secref{sec:info}), but have no formal
-  meaning.  The default chapter is ``@{text "Unsorted"}''.
+  meaning.  The default chapter is ``\<open>Unsorted\<close>''.
 
   Isabelle/jEdit @{cite "isabelle-jedit"} includes a simple editing
   mode @{verbatim "isabelle-root"} for session ROOT files, which is
@@ -77,26 +77,26 @@
     document_files: @'document_files' ('(' dir ')')? (@{syntax name}+)
   \<close>}
 
-  \<^descr> \isakeyword{session}~@{text "A = B + body"} defines a new
-  session @{text "A"} based on parent session @{text "B"}, with its
-  content given in @{text body} (theories and auxiliary source files).
-  Note that a parent (like @{text "HOL"}) is mandatory in practical
+  \<^descr> \isakeyword{session}~\<open>A = B + body\<close> defines a new
+  session \<open>A\<close> based on parent session \<open>B\<close>, with its
+  content given in \<open>body\<close> (theories and auxiliary source files).
+  Note that a parent (like \<open>HOL\<close>) is mandatory in practical
   applications: only Isabelle/Pure can bootstrap itself from nothing.
 
   All such session specifications together describe a hierarchy (tree)
   of sessions, with globally unique names.  The new session name
-  @{text "A"} should be sufficiently long and descriptive to stand on
+  \<open>A\<close> should be sufficiently long and descriptive to stand on
   its own in a potentially large library.
 
-  \<^descr> \isakeyword{session}~@{text "A (groups)"} indicates a
+  \<^descr> \isakeyword{session}~\<open>A (groups)\<close> indicates a
   collection of groups where the new session is a member.  Group names
   are uninterpreted and merely follow certain conventions.  For
   example, the Isabelle distribution tags some important sessions by
-  the group name called ``@{text "main"}''.  Other projects may invent
+  the group name called ``\<open>main\<close>''.  Other projects may invent
   their own conventions, but this requires some care to avoid clashes
   within this unchecked name space.
 
-  \<^descr> \isakeyword{session}~@{text "A"}~\isakeyword{in}~@{text "dir"}
+  \<^descr> \isakeyword{session}~\<open>A\<close>~\isakeyword{in}~\<open>dir\<close>
   specifies an explicit directory for this session; by default this is
   the current directory of the @{verbatim ROOT} file.
 
@@ -104,41 +104,39 @@
   the session directory.  The prover process is run with this same
   directory as its current working directory.
 
-  \<^descr> \isakeyword{description}~@{text "text"} is a free-form
+  \<^descr> \isakeyword{description}~\<open>text\<close> is a free-form
   annotation for this session.
 
-  \<^descr> \isakeyword{options}~@{text "[x = a, y = b, z]"} defines
+  \<^descr> \isakeyword{options}~\<open>[x = a, y = b, z]\<close> defines
   separate options (\secref{sec:system-options}) that are used when
   processing this session, but \<^emph>\<open>without\<close> propagation to child
-  sessions.  Note that @{text "z"} abbreviates @{text "z = true"} for
+  sessions.  Note that \<open>z\<close> abbreviates \<open>z = true\<close> for
   Boolean options.
 
-  \<^descr> \isakeyword{theories}~@{text "options names"} specifies a
+  \<^descr> \isakeyword{theories}~\<open>options names\<close> specifies a
   block of theories that are processed within an environment that is
   augmented by the given options, in addition to the global session
   options given before.  Any number of blocks of \isakeyword{theories}
   may be given.  Options are only active for each
   \isakeyword{theories} block separately.
 
-  \<^descr> \isakeyword{files}~@{text "files"} lists additional source
+  \<^descr> \isakeyword{files}~\<open>files\<close> lists additional source
   files that are involved in the processing of this session.  This
   should cover anything outside the formal content of the theory
   sources.  In contrast, files that are loaded formally
   within a theory, e.g.\ via @{command "ML_file"}, need not be
   declared again.
 
-  \<^descr> \isakeyword{document_files}~@{text "("}\isakeyword{in}~@{text
-  "base_dir) files"} lists source files for document preparation,
+  \<^descr> \isakeyword{document_files}~\<open>(\<close>\isakeyword{in}~\<open>base_dir) files\<close> lists source files for document preparation,
   typically @{verbatim ".tex"} and @{verbatim ".sty"} for {\LaTeX}.
   Only these explicitly given files are copied from the base directory
   to the document output directory, before formal document processing
   is started (see also \secref{sec:tool-document}).  The local path
-  structure of the @{text files} is preserved, which allows to
-  reconstruct the original directory hierarchy of @{text "base_dir"}.
+  structure of the \<open>files\<close> is preserved, which allows one to
+  reconstruct the original directory hierarchy of \<open>base_dir\<close>.
 
-  \<^descr> \isakeyword{document_files}~@{text "files"} abbreviates
-  \isakeyword{document_files}~@{text "("}\isakeyword{in}~@{text
-  "document) files"}, i.e.\ document sources are taken from the base
+  \<^descr> \isakeyword{document_files}~\<open>files\<close> abbreviates
+  \isakeyword{document_files}~\<open>(\<close>\isakeyword{in}~\<open>document) files\<close>, i.e.\ document sources are taken from the base
   directory @{verbatim document} within the session root directory.
 \<close>
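
  Putting these elements together, a hypothetical ROOT entry might
  read:

    session Example (main) in "Example" = HOL +
      description "A hypothetical example session."
      options [document = pdf]
      theories [quick_and_dirty]
        Theory_A
        Theory_B
      document_files (in "document")
        "root.tex"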
 
@@ -147,8 +145,8 @@
 
 text \<open>See @{file "~~/src/HOL/ROOT"} for a diversity of practically
   relevant situations, although it uses relatively complex
-  quasi-hierarchic naming conventions like @{text "HOL\<dash>SPARK"},
-  @{text "HOL\<dash>SPARK\<dash>Examples"}.  An alternative is to use
+  quasi-hierarchic naming conventions like \<open>HOL\<dash>SPARK\<close>,
+  \<open>HOL\<dash>SPARK\<dash>Examples\<close>.  An alternative is to use
   unqualified names that are relatively long and descriptive, as in
   the Archive of Formal Proofs (@{url "http://afp.sourceforge.net"}), for
   example.\<close>
@@ -182,7 +180,7 @@
   the document.
 
   \<^item> @{system_option_def "document_variants"} specifies document
-  variants as a colon-separated list of @{text "name=tags"} entries,
+  variants as a colon-separated list of \<open>name=tags\<close> entries,
   corresponding to @{tool document} options @{verbatim "-n"} and
   @{verbatim "-t"}.
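
  For instance, a full document plus an ``outline'' variant with
  folded proofs and ML:

    document_variants = "document:outline=/proof,/ML"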
 
@@ -199,7 +197,7 @@
 
   \<^item> @{system_option_def "threads"} determines the number of worker
   threads for parallel checking of theories and proofs.  The default
-  @{text "0"} means that a sensible maximum value is determined by the
+  \<open>0\<close> means that a sensible maximum value is determined by the
   underlying hardware.  For machines with many cores or with
   hyperthreading, this often requires manual adjustment (on the
   command-line or within personal settings or preferences, not within
@@ -239,7 +237,7 @@
   arguments NAME=VAL or NAME.\<close>}
 
   The command line arguments provide additional system options of the
-  form @{text "name"}@{verbatim "="}@{text "value"} or @{text name}
+  form \<open>name\<close>@{verbatim "="}\<open>value\<close> or \<open>name\<close>
   for Boolean options.
 
   Option @{verbatim "-b"} augments the implicit environment of system
@@ -298,7 +296,7 @@
   described in (\secref{sec:session-root}).  The totality of sessions
   is determined by collecting such specifications from all Isabelle
   component directories (\secref{sec:components}), augmented by more
-  directories given via options @{verbatim "-d"}~@{text "DIR"} on the
+  directories given via options @{verbatim "-d"}~\<open>DIR\<close> on the
   command line.  Each such directory may contain a session
   @{verbatim ROOT} file with several session specifications.
 
@@ -311,14 +309,14 @@
 
   \<^medskip>
   The subset of sessions to be managed is determined via
-  individual @{text "SESSIONS"} given as command-line arguments, or
+  individual \<open>SESSIONS\<close> given as command-line arguments, or
   session groups that are given via one or more options @{verbatim
-  "-g"}~@{text "NAME"}.  Option @{verbatim "-a"} selects all sessions.
+  "-g"}~\<open>NAME\<close>.  Option @{verbatim "-a"} selects all sessions.
   The build tool takes session dependencies into account: the set of
   selected sessions is completed by including all ancestors.
 
   \<^medskip>
-  One or more options @{verbatim "-x"}~@{text NAME} specify
+  One or more options @{verbatim "-x"}~\<open>NAME\<close> specify
   sessions to be excluded. All descendants of excluded sessions are removed
   from the selection as specified above. Option @{verbatim "-X"} is
   analogous to this, but excluded sessions are specified by session group
@@ -342,9 +340,9 @@
   ISABELLE_BUILD_OPTIONS} allows one to provide additional defaults, e.g.\
   @{verbatim \<open>ISABELLE_BUILD_OPTIONS="document=pdf threads=4"\<close>}. Moreover,
   the environment of system build options may be augmented on the
-  command line via @{verbatim "-o"}~@{text "name"}@{verbatim
-  "="}@{text "value"} or @{verbatim "-o"}~@{text "name"}, which
-  abbreviates @{verbatim "-o"}~@{text "name"}@{verbatim"=true"} for
+  command line via @{verbatim "-o"}~\<open>name\<close>@{verbatim
+  "="}\<open>value\<close> or @{verbatim "-o"}~\<open>name\<close>, which
+  abbreviates @{verbatim "-o"}~\<open>name\<close>@{verbatim"=true"} for
   Boolean options.  Multiple occurrences of @{verbatim "-o"} on the
   command-line are applied in the given order.
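
  For example, a hypothetical invocation combining these mechanisms
  (both name=value and bare Boolean options):

    isabelle build -d ~/Sessions -g main -o threads=4 -o browser_info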