--- a/src/Doc/Isar_Ref/Document_Preparation.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/Document_Preparation.thy Sun Oct 05 22:24:07 2014 +0200
@@ -11,10 +11,10 @@
{\LaTeX} output is generated while processing a \emph{session} in
batch mode, as explained in the \emph{The Isabelle System Manual}
- \cite{isabelle-sys}. The main Isabelle tools to get started with
+ @{cite "isabelle-sys"}. The main Isabelle tools to get started with
document preparation are @{tool_ref mkroot} and @{tool_ref build}.
- The classic Isabelle/HOL tutorial \cite{isabelle-hol-book} also
+ The classic Isabelle/HOL tutorial @{cite "isabelle-hol-book"} also
explains some aspects of theory presentation. *}
@@ -423,7 +423,7 @@
\end{tabular}
\medskip The Isabelle document preparation system
- \cite{isabelle-sys} allows tagged command regions to be presented
+ @{cite "isabelle-sys"} allows tagged command regions to be presented
specifically, e.g.\ to fold proof texts, or drop parts of the text
completely.
@@ -448,7 +448,7 @@
arbitrary tags to ``keep'', ``drop'', or ``fold'' the corresponding
parts of the text. Logic sessions may also specify ``document
versions'', where given tags are interpreted in some particular way.
- Again see \cite{isabelle-sys} for further details.
+ Again see @{cite "isabelle-sys"} for further details.
*}
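
For orientation, here is a minimal sketch of the antiquotation forms used throughout this changeset, as a hypothetical text block (it reuses only citation keys that already occur above; the location "\S2" is an arbitrary placeholder):

  text {* Document preparation is explained in the system manual
    @{cite "isabelle-sys"}; several keys may be cited at once, as in
    @{cite "isabelle-sys" and "isabelle-hol-book"}, and an optional
    location is given in a cartouche, e.g.\ @{cite \<open>\S2\<close> "isabelle-sys"}. *}

In contrast to raw \cite commands, these antiquotations belong to the formal theory text, so the citation keys are processed together with the sources rather than passed through as opaque {\LaTeX}.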
--- a/src/Doc/Isar_Ref/First_Order_Logic.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/First_Order_Logic.thy Sun Oct 05 22:24:07 2014 +0200
@@ -174,7 +174,7 @@
text {*
We axiomatize basic connectives of propositional logic: implication,
disjunction, and conjunction. The associated rules are modeled
- after Gentzen's system of Natural Deduction \cite{Gentzen:1935}.
+ after Gentzen's system of Natural Deduction @{cite "Gentzen:1935"}.
*}
axiomatization
@@ -333,7 +333,7 @@
text {*
Representing quantifiers is easy, thanks to the higher-order nature
of the underlying framework. According to the well-known technique
- introduced by Church \cite{church40}, quantifiers are operators on
+ introduced by Church @{cite "church40"}, quantifiers are operators on
predicates, which are syntactically represented as @{text "\<lambda>"}-terms
of type @{typ "i \<Rightarrow> o"}. Binder notation turns @{text "All (\<lambda>x. B
x)"} into @{text "\<forall>x. B x"} etc.
--- a/src/Doc/Isar_Ref/Framework.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/Framework.thy Sun Oct 05 22:24:07 2014 +0200
@@ -5,8 +5,8 @@
chapter {* The Isabelle/Isar Framework \label{ch:isar-framework} *}
text {*
- Isabelle/Isar
- \cite{Wenzel:1999:TPHOL,Wenzel-PhD,Nipkow-TYPES02,Wenzel-Paulson:2006,Wenzel:2006:Festschrift}
+ Isabelle/Isar @{cite "Wenzel:1999:TPHOL" and "Wenzel-PhD" and
+ "Nipkow-TYPES02" and "Wenzel-Paulson:2006" and "Wenzel:2006:Festschrift"}
is intended as a generic framework for developing formal
mathematical documents with full proof checking. Definitions and
proofs are organized as theories. An assembly of theory sources may
@@ -18,10 +18,10 @@
format'' in Isar terminology. Such a primary proof language is
somewhere in the middle between the extremes of primitive proof
objects and actual natural language. In this respect, Isar is a bit
- more formalistic than Mizar
- \cite{Trybulec:1993:MizarFeatures,Rudnicki:1992:MizarOverview,Wiedijk:1999:Mizar},
+ more formalistic than Mizar @{cite "Trybulec:1993:MizarFeatures" and
+ "Rudnicki:1992:MizarOverview" and "Wiedijk:1999:Mizar"},
using logical symbols for certain reasoning schemes where Mizar
- would prefer English words; see \cite{Wenzel-Wiedijk:2002} for
+ would prefer English words; see @{cite "Wenzel-Wiedijk:2002"} for
further comparisons of these systems.
So Isar challenges the traditional way of recording informal proofs
@@ -40,16 +40,16 @@
The Isar proof language has emerged from careful analysis of some
inherent virtues of the existing logical framework of Isabelle/Pure
- \cite{paulson-found,paulson700}, notably composition of higher-order
+ @{cite "paulson-found" and "paulson700"}, notably composition of higher-order
natural deduction rules, which is a generalization of Gentzen's
- original calculus \cite{Gentzen:1935}. The approach of generic
+ original calculus @{cite "Gentzen:1935"}. The approach of generic
inference systems in Pure is continued by Isar towards actual proof
texts.
Concrete applications require another intermediate layer: an
- object-logic. Isabelle/HOL \cite{isa-tutorial} (simply-typed
+ object-logic. Isabelle/HOL @{cite "isa-tutorial"} (simply-typed
set-theory) is being used most of the time; Isabelle/ZF
- \cite{isabelle-ZF} is less extensively developed, although it would
+ @{cite "isabelle-ZF"} is less extensively developed, although it would
probably fit better for classical mathematics.
\medskip In order to illustrate natural deduction in Isar, we shall
@@ -231,8 +231,8 @@
section {* The Pure framework \label{sec:framework-pure} *}
text {*
- The Pure logic \cite{paulson-found,paulson700} is an intuitionistic
- fragment of higher-order logic \cite{church40}. In type-theoretic
+ The Pure logic @{cite "paulson-found" and "paulson700"} is an intuitionistic
+ fragment of higher-order logic @{cite "church40"}. In type-theoretic
parlance, there are three levels of @{text "\<lambda>"}-calculus with
corresponding arrows @{text "\<Rightarrow>"}/@{text "\<And>"}/@{text "\<Longrightarrow>"}:
@@ -247,16 +247,16 @@
\noindent Here only the types of syntactic terms, and the
propositions of proof terms have been shown. The @{text
"\<lambda>"}-structure of proofs can be recorded as an optional feature of
- the Pure inference kernel \cite{Berghofer-Nipkow:2000:TPHOL}, but
+ the Pure inference kernel @{cite "Berghofer-Nipkow:2000:TPHOL"}, but
the formal system can never depend on them due to \emph{proof
irrelevance}.
On top of this most primitive layer of proofs, Pure implements a
generic calculus for nested natural deduction rules, similar to
- \cite{Schroeder-Heister:1984}. Here object-logic inferences are
+ @{cite "Schroeder-Heister:1984"}. Here object-logic inferences are
internalized as formulae over @{text "\<And>"} and @{text "\<Longrightarrow>"}.
Combining such rule statements may involve higher-order unification
- \cite{paulson-natural}.
+ @{cite "paulson-natural"}.
*}
@@ -326,7 +326,7 @@
that rule statements always observe the normal form where
quantifiers are pulled in front of implications at each level of
nesting. This means that any Pure proposition may be presented as a
- \emph{Hereditary Harrop Formula} \cite{Miller:1991} which is of the
+ \emph{Hereditary Harrop Formula} @{cite "Miller:1991"} which is of the
form @{text "\<And>x\<^sub>1 \<dots> x\<^sub>m. H\<^sub>1 \<Longrightarrow> \<dots> H\<^sub>n \<Longrightarrow>
A"} for @{text "m, n \<ge> 0"}, and @{text "A"} atomic, and @{text
"H\<^sub>1, \<dots>, H\<^sub>n"} being recursively of the same format.
@@ -604,7 +604,7 @@
cf.\ \secref{sec:framework-subproof}.
The most interesting derived context element in Isar is @{command
- obtain} \cite[\S5.3]{Wenzel-PhD}, which supports generalized
+ obtain} @{cite \<open>\S5.3\<close> "Wenzel-PhD"}, which supports generalized
elimination steps in a purely forward manner. The @{command obtain}
command takes a specification of parameters @{text "\<^vec>x"} and
assumptions @{text "\<^vec>A"} to be added to the context, together
@@ -953,8 +953,8 @@
@{text "\<subseteq>"} etc. Due to the flexibility of rule composition
(\secref{sec:framework-resolution}), substitution of equals by
equals is covered as well, even substitution of inequalities
- involving monotonicity conditions; see also \cite[\S6]{Wenzel-PhD}
- and \cite{Bauer-Wenzel:2001}.
+ involving monotonicity conditions; see also @{cite \<open>\S6\<close> "Wenzel-PhD"}
+ and @{cite "Bauer-Wenzel:2001"}.
The generic calculational mechanism is based on the observation that
rules such as @{text "trans:"}~@{prop "x = y \<Longrightarrow> y = z \<Longrightarrow> x = z"}
--- a/src/Doc/Isar_Ref/Generic.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/Generic.thy Sun Oct 05 22:24:07 2014 +0200
@@ -95,7 +95,7 @@
"a\<^sub>1 \<dots> a\<^sub>n"} are similar to the basic @{method rule}
method (see \secref{sec:pure-meth-att}), but apply rules by
elim-resolution, destruct-resolution, and forward-resolution,
- respectively \cite{isabelle-implementation}. The optional natural
+ respectively @{cite "isabelle-implementation"}. The optional natural
number argument (default 0) specifies additional assumption steps to
be performed here.
@@ -158,7 +158,7 @@
\item @{attribute THEN}~@{text a} composes rules by resolution; it
resolves with the first premise of @{text a} (an alternative
position may be also specified). See also @{ML_op "RS"} in
- \cite{isabelle-implementation}.
+ @{cite "isabelle-implementation"}.
\item @{attribute unfolded}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} and @{attribute
folded}~@{text "a\<^sub>1 \<dots> a\<^sub>n"} expand and fold back again the given
@@ -319,11 +319,11 @@
\item @{method rule_tac} etc. do resolution of rules with explicit
instantiation. This works the same way as the ML tactics @{ML
- res_inst_tac} etc. (see \cite{isabelle-implementation})
+ res_inst_tac} etc.\ (see @{cite "isabelle-implementation"}).
Multiple rules may be only given if there is no instantiation; then
@{method rule_tac} is the same as @{ML resolve_tac} in ML (see
- \cite{isabelle-implementation}).
+ @{cite "isabelle-implementation"}).
\item @{method cut_tac} inserts facts into the proof state as
assumption of a subgoal; instantiations may be given as well. Note
@@ -649,7 +649,7 @@
@{text "(?x + ?y) + ?z \<equiv> ?x + (?y + ?z)"} \\
@{text "f (f ?x ?y) ?z \<equiv> f ?x (f ?y ?z)"}
- \item Higher-order patterns in the sense of \cite{nipkow-patterns}.
+ \item Higher-order patterns in the sense of @{cite "nipkow-patterns"}.
These are terms in @{text "\<beta>"}-normal form (this will always be the
case unless you have done something strange) where each occurrence
of an unknown is of the form @{text "?F x\<^sub>1 \<dots> x\<^sub>n"}, where the
@@ -846,10 +846,10 @@
end
-text {* Martin and Nipkow \cite{martin-nipkow} discuss the theory and
+text {* Martin and Nipkow @{cite "martin-nipkow"} discuss the theory and
give many examples; other algebraic structures are amenable to
ordered rewriting, such as Boolean rings. The Boyer-Moore theorem
- prover \cite{bm88book} also employs ordered rewriting.
+ prover @{cite bm88book} also employs ordered rewriting.
*}
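
To connect this to practice, here is a minimal sketch of ordered rewriting with a permutative rule set in Isabelle/HOL (a hypothetical example, assuming the standard facts add.assoc, add.commute, add.left_commute from the Main library):

  lemma "a + (b + c) = b + (c + (a::nat))"
    by (simp only: add.assoc add.commute add.left_commute)

The Simplifier applies the commutation rules only where they decrease the term ordering, so both sides normalize to the same right-associated, lexicographically sorted form.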
@@ -903,7 +903,7 @@
invocation of simplification procedures.
\item @{attribute simp_trace_new} controls Simplifier tracing within
- Isabelle/PIDE applications, notably Isabelle/jEdit \cite{isabelle-jedit}.
+ Isabelle/PIDE applications, notably Isabelle/jEdit @{cite "isabelle-jedit"}.
This provides a hierarchical representation of the rewriting steps
performed by the Simplifier.
@@ -1299,7 +1299,7 @@
context. These proof procedures are slow and simplistic compared
with high-end automated theorem provers, but they can save
considerable time and effort in practice. They can prove theorems
- such as Pelletier's \cite{pelletier86} problems 40 and 41 in a few
+ such as Pelletier's @{cite pelletier86} problems 40 and 41 in a few
milliseconds (including full proof reconstruction): *}
lemma "(\<exists>y. \<forall>x. F x y \<longleftrightarrow> F x x) \<longrightarrow> \<not> (\<forall>x. \<exists>y. \<forall>z. F z y \<longleftrightarrow> \<not> F z x)"
@@ -1370,8 +1370,8 @@
desired theorem and apply rules backwards in a fairly arbitrary
manner. This yields a surprisingly effective proof procedure.
Quantifiers add only few complications, since Isabelle handles
- parameters and schematic variables. See \cite[Chapter
- 10]{paulson-ml2} for further discussion. *}
+ parameters and schematic variables. See @{cite \<open>Chapter 10\<close>
+ "paulson-ml2"} for further discussion. *}
subsubsection {* Simulating sequents by natural deduction *}
--- a/src/Doc/Isar_Ref/HOL_Specific.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/HOL_Specific.thy Sun Oct 05 22:24:07 2014 +0200
@@ -9,16 +9,16 @@
version of Church's Simple Theory of Types. HOL can be best
understood as a simply-typed version of classical set theory. The
logic was first implemented in Gordon's HOL system
- \cite{mgordon-hol}. It extends Church's original logic
- \cite{church40} by explicit type variables (naive polymorphism) and
+ @{cite "mgordon-hol"}. It extends Church's original logic
+ @{cite "church40"} by explicit type variables (naive polymorphism) and
a sound axiomatization scheme for new types based on subsets of
existing types.
- Andrews's book \cite{andrews86} is a full description of the
+ Andrews's book @{cite andrews86} is a full description of the
original Church-style higher-order logic, with proofs of correctness
and completeness wrt.\ certain set-theoretic interpretations. The
particular extensions of Gordon-style HOL are explained semantically
- in two chapters of the 1993 HOL book \cite{pitts93}.
+ in two chapters of the 1993 HOL book @{cite pitts93}.
Experience with HOL over decades has demonstrated that higher-order
logic is widely applicable in many areas of mathematics and computer
@@ -249,7 +249,7 @@
text {* Here the @{text "cases"} or @{text "induct"} rules produced by
the @{command inductive} package coincide with the expected
elimination rules for Natural Deduction. Already in the original
- article by Gerhard Gentzen \cite{Gentzen:1935} there is a hint that
+ article by Gerhard Gentzen @{cite "Gentzen:1935"} there is a hint that
each connective can be characterized by its introductions, and the
elimination can be constructed systematically. *}
@@ -310,7 +310,7 @@
\item @{command (HOL) "function"} defines functions by general
wellfounded recursion. A detailed description with examples can be
- found in \cite{isabelle-function}. The function is specified by a
+ found in @{cite "isabelle-function"}. The function is specified by a
set of (possibly conditional) recursive equations with arbitrary
pattern matching. The command generates proof obligations for the
completeness and the compatibility of patterns.
@@ -324,7 +324,7 @@
\item @{command (HOL) "fun"} is a shorthand notation for ``@{command
(HOL) "function"}~@{text "(sequential)"}, followed by automated
proof attempts regarding pattern matching and termination. See
- \cite{isabelle-function} for further details.
+ @{cite "isabelle-function"} for further details.
\item @{command (HOL) "termination"}~@{text f} commences a
termination proof for the previously defined function @{text f}. If
@@ -525,7 +525,7 @@
\item @{method (HOL) pat_completeness} is a specialized method to
solve goals regarding the completeness of pattern matching, as
required by the @{command (HOL) "function"} package (cf.\
- \cite{isabelle-function}).
+ @{cite "isabelle-function"}).
\item @{method (HOL) relation}~@{text R} introduces a termination
proof using the relation @{text R}. The resulting proof state will
@@ -542,11 +542,11 @@
clasimpmod} modifiers are accepted (as for @{method auto}).
In case of failure, extensive information is printed, which can help
- to analyse the situation (cf.\ \cite{isabelle-function}).
+ to analyse the situation (cf.\ @{cite "isabelle-function"}).
\item @{method (HOL) "size_change"} also works on termination goals,
using a variation of the size-change principle, together with a
- graph decomposition technique (see \cite{krauss_phd} for details).
+ graph decomposition technique (see @{cite krauss_phd} for details).
Three kinds of orders are used internally: @{text max}, @{text min},
and @{text ms} (multiset), which is only available when the theory
@{text Multiset} is loaded. When no order kinds are given, they are
@@ -667,7 +667,7 @@
\item @{command (HOL) "recdef"} defines general well-founded
recursive functions (using the TFL package), see also
- \cite{isabelle-HOL}. The ``@{text "(permissive)"}'' option tells
+ @{cite "isabelle-HOL"}. The ``@{text "(permissive)"}'' option tells
TFL to recover from failed proof attempts, returning unfinished
results. The @{text recdef_simp}, @{text recdef_cong}, and @{text
recdef_wf} hints refer to auxiliary rules to be used in the internal
@@ -734,7 +734,7 @@
These commands are mostly obsolete; @{command (HOL) "datatype"}
should be used instead.
- See \cite{isabelle-HOL} for more details on datatypes, but beware of
+ See @{cite "isabelle-HOL"} for more details on datatypes, but beware of
the old-style theory syntax being used there! Apart from proper
proof methods for case-analysis and induction, there are also
emulations of ML tactics @{method (HOL) case_tac} and @{method (HOL)
@@ -784,7 +784,7 @@
advanced, though, supporting truly extensible record schemes. This
admits operations that are polymorphic with respect to record
extension, yielding ``object-oriented'' effects like (single)
- inheritance. See also \cite{NaraschewskiW-TPHOLs98} for more
+ inheritance. See also @{cite "NaraschewskiW-TPHOLs98"} for more
details on object-oriented verification and record subtyping in HOL.
*}
@@ -847,7 +847,7 @@
The record package provides several standard operations like
selectors and updates. The common setup for various generic proof
tools enables succinct reasoning patterns. See also the Isabelle/HOL
- tutorial \cite{isabelle-hol-book} for further instructions on using
+ tutorial @{cite "isabelle-hol-book"} for further instructions on using
records in practice.
*}
@@ -1052,7 +1052,7 @@
those type arguments.
The axiomatization can be considered a ``definition'' in the sense of the
- particular set-theoretic interpretation of HOL \cite{pitts93}, where the
+ particular set-theoretic interpretation of HOL @{cite pitts93}, where the
universe of types is required to be downwards-closed wrt.\ arbitrary
non-empty subsets. Thus genuinely new types introduced by @{command
"typedef"} stay within the range of HOL models by construction.
@@ -1530,7 +1530,8 @@
Preservation of predicates on relations (@{text "bi_unique, bi_total,
right_unique, right_total, left_unique, left_total"}) with respect to a relator
- is proved automatically if the involved type is BNF\cite{isabelle-datatypes} without dead variables.
+ is proved automatically if the involved type is a BNF
+ @{cite "isabelle-datatypes"} without dead variables.
\item @{attribute (HOL) "transfer_domain_rule"} attribute maintains a collection
of rules, which specify a domain of a transfer relation by a predicate.
@@ -1556,7 +1557,7 @@
\end{description}
- Theoretical background can be found in \cite{Huffman-Kuncar:2013:lifting_transfer}.
+ Theoretical background can be found in @{cite "Huffman-Kuncar:2013:lifting_transfer"}.
*}
@@ -1573,7 +1574,7 @@
The Lifting package works with all four kinds of type abstraction: type copies, subtypes,
total quotients and partial quotients.
- Theoretical background can be found in \cite{Huffman-Kuncar:2013:lifting_transfer}.
+ Theoretical background can be found in @{cite "Huffman-Kuncar:2013:lifting_transfer"}.
\begin{matharray}{rcl}
@{command_def (HOL) "setup_lifting"} & : & @{text "local_theory \<rightarrow> local_theory"}\\
@@ -1771,7 +1772,7 @@
together with the commands @{command (HOL) lifting_forget} and @{command (HOL) lifting_update} is
preferred for normal usage.
- \item Integration with the BNF package\cite{isabelle-datatypes}:
+ \item Integration with the BNF package @{cite "isabelle-datatypes"}:
As already mentioned, the theorems that are registered
by the following attributes are proved and registered automatically if the involved type
is BNF without dead variables: @{attribute (HOL) quot_map}, @{attribute (HOL) relator_eq_onp},
@@ -1795,7 +1796,7 @@
Coercive subtyping allows the user to omit explicit type
conversions, also called \emph{coercions}. Type inference will add
them as necessary when parsing a term. See
- \cite{traytel-berghofer-nipkow-2011} for details.
+ @{cite "traytel-berghofer-nipkow-2011"} for details.
@{rail \<open>
@@{attribute (HOL) coercion} (@{syntax term})?
@@ -1911,13 +1912,13 @@
\begin{description}
\item @{method (HOL) meson} implements Loveland's model elimination
- procedure \cite{loveland-78}. See @{file
+ procedure @{cite "loveland-78"}. See @{file
"~~/src/HOL/ex/Meson_Test.thy"} for examples.
\item @{method (HOL) metis} combines ordered resolution and ordered
paramodulation to find first-order (or mildly higher-order) proofs.
The first optional argument specifies a type encoding; see the
- Sledgehammer manual \cite{isabelle-sledgehammer} for details. The
+ Sledgehammer manual @{cite "isabelle-sledgehammer"} for details. The
directory @{file "~~/src/HOL/Metis_Examples"} contains several small
theories developed to a large extent using @{method (HOL) metis}.
@@ -1944,8 +1945,8 @@
\begin{description}
\item @{method (HOL) algebra} performs algebraic reasoning via
- Gr\"obner bases, see also \cite{Chaieb-Wenzel:2007} and
- \cite[\S3.2]{Chaieb-thesis}. The method handles deals with two main
+ Gr\"obner bases, see also @{cite "Chaieb-Wenzel:2007"} and
+ @{cite \<open>\S3.2\<close> "Chaieb-thesis"}. The method deals with two main
classes of problems:
\begin{enumerate}
@@ -2023,7 +2024,7 @@
\begin{description}
\item @{method (HOL) coherent} solves problems of \emph{Coherent
- Logic} \cite{Bezem-Coquand:2005}, which covers applications in
+ Logic} @{cite "Bezem-Coquand:2005"}, which covers applications in
confluence theory, lattice theory and projective geometry. See
@{file "~~/src/HOL/ex/Coherent.thy"} for some examples.
@@ -2084,7 +2085,7 @@
\item @{command (HOL) "sledgehammer"} attempts to prove a subgoal
using external automatic provers (resolution provers and SMT
- solvers). See the Sledgehammer manual \cite{isabelle-sledgehammer}
+ solvers). See the Sledgehammer manual @{cite "isabelle-sledgehammer"}
for details.
\item @{command (HOL) "sledgehammer_params"} changes @{command (HOL)
@@ -2260,7 +2261,7 @@
in the type @{typ Code_Evaluation.term}. A pseudo-randomness generator
is defined in theory @{theory Random}.
- \item[@{text narrowing}] implements Haskell's Lazy Smallcheck~\cite{runciman-naylor-lindblad}
+ \item[@{text narrowing}] implements Haskell's Lazy Smallcheck @{cite "runciman-naylor-lindblad"}
using the type classes @{class narrowing} and @{class partial_term_of}.
Variables in the current goal are initially represented as symbolic variables.
If the execution of the goal tries to evaluate one of them, the test engine
@@ -2307,7 +2308,7 @@
\item @{command (HOL) "nitpick"} tests the current goal for
counterexamples using a reduction to first-order relational
- logic. See the Nitpick manual \cite{isabelle-nitpick} for details.
+ logic. See the Nitpick manual @{cite "isabelle-nitpick"} for details.
\item @{command (HOL) "nitpick_params"} changes @{command (HOL)
"nitpick"} configuration options persistently.
@@ -2394,14 +2395,14 @@
from executable specifications. Isabelle/HOL instantiates these
mechanisms in a way that is amenable to end-user applications. Code
can be generated for functional programs (including overloading
- using type classes) targeting SML \cite{SML}, OCaml \cite{OCaml},
- Haskell \cite{haskell-revised-report} and Scala
- \cite{scala-overview-tech-report}. Conceptually, code generation is
+ using type classes) targeting SML @{cite SML}, OCaml @{cite OCaml},
+ Haskell @{cite "haskell-revised-report"} and Scala
+ @{cite "scala-overview-tech-report"}. Conceptually, code generation is
split up in three steps: \emph{selection} of code theorems,
\emph{translation} into an abstract executable view and
\emph{serialization} to a specific \emph{target language}.
Inductive specifications can be executed using the predicate
- compiler which operates within HOL. See \cite{isabelle-codegen} for
+ compiler which operates within HOL. See @{cite "isabelle-codegen"} for
an introduction.
\begin{matharray}{rcl}
--- a/src/Doc/Isar_Ref/Inner_Syntax.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/Inner_Syntax.thy Sun Oct 05 22:24:07 2014 +0200
@@ -110,7 +110,7 @@
@{command "print_state"}~@{text "(latex xsymbols)"} prints the
current proof state with mathematical symbols and special characters
represented in {\LaTeX} source, according to the Isabelle style
- \cite{isabelle-sys}.
+ @{cite "isabelle-sys"}.
Note that antiquotations (cf.\ \secref{sec:antiq}) provide a more
systematic way to include formal items into the printed text
@@ -325,7 +325,7 @@
syntax, and the pretty printing. Special case annotations provide a
simple means of specifying infix operators and binders.
- Isabelle mixfix syntax is inspired by {\OBJ} \cite{OBJ}. It allows
+ Isabelle mixfix syntax is inspired by {\OBJ} @{cite OBJ}. It allows one
to specify any context-free priority grammar, which is more general
than the fixity declarations of ML and Prolog.
@@ -435,7 +435,7 @@
\end{description}
The general idea of pretty printing with blocks and breaks is also
- described in \cite{paulson-ml2}; it goes back to \cite{Oppen:1980}.
+ described in @{cite "paulson-ml2"}; it goes back to @{cite "Oppen:1980"}.
*}
@@ -476,7 +476,7 @@
text {* A \emph{binder} is a variable-binding construct such as a
quantifier. The idea to formalize @{text "\<forall>x. b"} as @{text "All
(\<lambda>x. b)"} for @{text "All :: ('a \<Rightarrow> bool) \<Rightarrow> bool"} already goes back
- to \cite{church40}. Isabelle declarations of certain higher-order
+ to @{cite church40}. Isabelle declarations of certain higher-order
operators may be annotated with @{keyword_def "binder"} annotations
as follows:
@@ -1034,7 +1034,7 @@
Pre-terms are further processed by the so-called \emph{check} and
\emph{uncheck} phases that are intertwined with type-inference (see
- also \cite{isabelle-implementation}). The latter allows to operate
+ also @{cite "isabelle-implementation"}). The latter allows to operate
on higher-order abstract syntax with proper binding and type
information already available.
--- a/src/Doc/Isar_Ref/ML_Tactic.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/ML_Tactic.thy Sun Oct 05 22:24:07 2014 +0200
@@ -153,14 +153,14 @@
\end{tabular}
\medskip
- \medskip @{ML CHANGED} (see \cite{isabelle-implementation}) is
+ \medskip @{ML CHANGED} (see @{cite "isabelle-implementation"}) is
usually not required in Isar, since most basic proof methods already
fail unless there is an actual change in the goal state.
Nevertheless, ``@{text "?"}'' (try) may be used to accept
\emph{unchanged} results as well.
\medskip @{ML ALLGOALS}, @{ML SOMEGOAL} etc.\ (see
- \cite{isabelle-implementation}) are not available in Isar, since
+ @{cite "isabelle-implementation"}) are not available in Isar, since
there is no direct goal addressing. Nevertheless, some basic
methods address all goals internally, notably @{method simp_all}
(see \secref{sec:simplifier}). Also note that @{ML ALLGOALS} can be
--- a/src/Doc/Isar_Ref/Misc.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/Misc.thy Sun Oct 05 22:24:07 2014 +0200
@@ -99,7 +99,7 @@
\item @{command "thm_deps"}~@{text "a\<^sub>1 \<dots> a\<^sub>n"}
visualizes dependencies of facts, using Isabelle's graph browser
- tool (see also \cite{isabelle-sys}).
+ tool (see also @{cite "isabelle-sys"}).
\item @{command "unused_thms"}~@{text "A\<^sub>1 \<dots> A\<^sub>m - B\<^sub>1 \<dots> B\<^sub>n"}
displays all theorems that are proved in theories @{text "B\<^sub>1 \<dots> B\<^sub>n"}
--- a/src/Doc/Isar_Ref/Outer_Syntax.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/Outer_Syntax.thy Sun Oct 05 22:24:07 2014 +0200
@@ -28,7 +28,7 @@
Printed theory documents usually omit quotes to gain readability
(this is a matter of {\LaTeX} macro setup, say via @{verbatim
- "\\isabellestyle"}, see also \cite{isabelle-sys}). Experienced
+ "\\isabellestyle"}, see also @{cite "isabelle-sys"}). Experienced
users of Isabelle/Isar may easily reconstruct the lost technical
information, while mere readers need not care about quotes at all.
@@ -41,8 +41,8 @@
clearly recognized from the input syntax, e.g.\ encounter of the
next command keyword.
- More advanced interfaces such as Isabelle/jEdit \cite{Wenzel:2012}
- and Proof~General \cite{proofgeneral} do not require explicit
+ More advanced interfaces such as Isabelle/jEdit @{cite "Wenzel:2012"}
+ and Proof~General @{cite proofgeneral} do not require explicit
semicolons: command spans are determined by inspecting the content
of the editor buffer. In the printed presentation of Isabelle/Isar
documents semicolons are omitted altogether for readability.
--- a/src/Doc/Isar_Ref/Preface.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/Preface.thy Sun Oct 05 22:24:07 2014 +0200
@@ -20,18 +20,18 @@
transactions with unlimited undo etc.
In its pioneering times, the Isabelle/Isar version of the
- \emph{Proof~General} user interface
- \cite{proofgeneral,Aspinall:TACAS:2000} has contributed to the
+ \emph{Proof~General} user interface @{cite proofgeneral and
+ "Aspinall:TACAS:2000"} has contributed to the
success of interactive theory and proof development in this
advanced theorem proving environment, even though it was somewhat
biased towards old-style proof scripts. The more recent
- Isabelle/jEdit Prover IDE \cite{Wenzel:2012} emphasizes the
+ Isabelle/jEdit Prover IDE @{cite "Wenzel:2012"} emphasizes the
document-oriented approach of Isabelle/Isar again more explicitly.
\medskip Apart from the technical advances over bare-bones ML
programming, the main purpose of the Isar language is to provide a
conceptually different view on machine-checked proofs
- \cite{Wenzel:1999:TPHOL,Wenzel-PhD}. \emph{Isar} stands for
+ @{cite "Wenzel:1999:TPHOL" and "Wenzel-PhD"}. \emph{Isar} stands for
\emph{Intelligible semi-automated reasoning}. Drawing from both the
traditions of informal mathematical proof texts and high-level
programming languages, Isar offers a versatile environment for
--- a/src/Doc/Isar_Ref/Proof.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/Proof.thy Sun Oct 05 22:24:07 2014 +0200
@@ -673,7 +673,7 @@
quick_and_dirty} is enabled. Facts emerging from fake
proofs are not the real thing. Internally, the derivation object is
tainted by an oracle invocation, which may be inspected via the
- theorem status \cite{isabelle-implementation}.
+ theorem status @{cite "isabelle-implementation"}.
The most important application of @{command "sorry"} is to support
experimentation and top-down proof development.
--- a/src/Doc/Isar_Ref/Spec.thy Sun Oct 05 22:22:40 2014 +0200
+++ b/src/Doc/Isar_Ref/Spec.thy Sun Oct 05 22:24:07 2014 +0200
@@ -442,7 +442,7 @@
redundant locale instances are omitted. A locale instance is
redundant if it is subsumed by an instance encountered earlier. A
more detailed description of this process is available elsewhere
- \cite{Ballarin2014}.
+ @{cite Ballarin2014}.
*}
@@ -813,10 +813,10 @@
A class is a particular locale with \emph{exactly one} type variable
@{text \<alpha>}. Beyond the underlying locale, a corresponding type class
is established which is interpreted logically as axiomatic type
- class \cite{Wenzel:1997:TPHOL} whose logical content are the
+ class @{cite "Wenzel:1997:TPHOL"} whose logical content are the
assumptions of the locale. Thus, classes provide the full
generality of locales combined with the commodity of type classes
- (notably type-inference). See \cite{isabelle-classes} for a short
+ (notably type-inference). See @{cite "isabelle-classes"} for a short
tutorial.
@{rail \<open>
@@ -972,7 +972,7 @@
\medskip Co-regularity is a very fundamental property of the
order-sorted algebra of types. For example, it entails principal
- types and most general unifiers, e.g.\ see \cite{nipkow-prehofer}.
+ types and most general unifiers, e.g.\ see @{cite "nipkow-prehofer"}.
*}
@@ -1144,8 +1144,8 @@
The boundary for the exception trace is the current Isar command
transactions. It is occasionally better to insert the combinator @{ML
- Runtime.exn_trace} into ML code for debugging
- \cite{isabelle-implementation}, closer to the point where it actually
+ Runtime.exn_trace} into ML code for debugging @{cite
+ "isabelle-implementation"}, closer to the point where it actually
happens.
\end{description}