--- a/src/Doc/Implementation/Syntax.thy Thu Jul 03 11:31:25 2014 +0200
+++ b/src/Doc/Implementation/Syntax.thy Thu Jul 03 12:17:55 2014 +0200
@@ -12,18 +12,18 @@
additional means for reading and printing of terms and types. This
important add-on outside the logical core is called \emph{inner
syntax} in Isabelle jargon, as opposed to the \emph{outer syntax} of
- the theory and proof language (cf.\ \cite{isabelle-isar-ref}).
+ the theory and proof language \cite{isabelle-isar-ref}.
- For example, according to \cite{church40} quantifiers are
- represented as higher-order constants @{text "All :: ('a \<Rightarrow> bool) \<Rightarrow>
- bool"} such that @{text "All (\<lambda>x::'a. B x)"} faithfully represents
- the idea that is displayed as @{text "\<forall>x::'a. B x"} via @{keyword
- "binder"} notation. Moreover, type-inference in the style of
- Hindley-Milner \cite{hindleymilner} (and extensions) enables users
- to write @{text "\<forall>x. B x"} concisely, when the type @{text "'a"} is
- already clear from the context.\footnote{Type-inference taken to the
- extreme can easily confuse users, though. Beginners often stumble
- over unexpectedly general types inferred by the system.}
+ For example, according to \cite{church40} quantifiers are represented as
+ higher-order constants @{text "All :: ('a \<Rightarrow> bool) \<Rightarrow> bool"} such that @{text
+ "All (\<lambda>x::'a. B x)"} faithfully represents the idea that is displayed in
+ Isabelle as @{text "\<forall>x::'a. B x"} via @{keyword "binder"} notation.
+ Moreover, type-inference in the style of Hindley-Milner \cite{hindleymilner}
+ (and extensions) enables users to write @{text "\<forall>x. B x"} concisely, when
+ the type @{text "'a"} is already clear from the
+ context.\footnote{Type-inference taken to the extreme can easily confuse
+ users. Beginners often stumble over unexpectedly general types inferred by
+ the system.}
\medskip The main inner syntax operations are \emph{read} for
parsing together with type-checking, and \emph{pretty} for formatted
@@ -43,18 +43,17 @@
\end{itemize}
- Some specification package might thus intercept syntax processing at
- a well-defined stage after @{text "parse"}, to a augment the
- resulting pre-term before full type-reconstruction is performed by
- @{text "check"}, for example. Note that the formal status of bound
- variables, versus free variables, versus constants must not be
- changed between these phases!
+ For example, a specification package might intercept syntax processing at a
+ well-defined stage after @{text "parse"}, to augment the resulting pre-term
+ before full type-reconstruction is performed by @{text "check"}. Note that
+ the formal status of bound variables, versus free variables, versus
+ constants must not be changed between these phases.
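+
+ A minimal sketch of that pattern (the @{ML Type.constraint} argument below is
+ an arbitrary placeholder): @{ML "fn ctxt => fn str =>
+   Syntax.parse_term ctxt str
+   |> Type.constraint propT
+   |> singleton (Syntax.check_terms ctxt)"}; any package-specific augmentation
+ of the pre-term would happen between the parse and the check step.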
\medskip In general, @{text check} and @{text uncheck} operate
- simultaneously on a list of terms. This is particular important for
+ simultaneously on a list of terms. This is particularly important for
type-checking, to reconstruct types for several terms of the same context
and scope. In contrast, @{text parse} and @{text unparse} operate separately
- in single terms.
+ on single terms.
There are analogous operations to read and print types, with the same
sub-division into phases.
@@ -63,14 +62,15 @@
section {* Reading and pretty printing \label{sec:read-print} *}
-text {* Read and print operations are roughly dual to each other, such
- that for the user @{text "s' = pretty (read s)"} looks similar to
- the original source text @{text "s"}, but the details depend on many
- side-conditions. There are also explicit options to control
- suppressing of type information in the output. The default
- configuration routinely looses information, so @{text "t' = read
- (pretty t)"} might fail, or produce a differently typed term, or a
- completely different term in the face of syntactic overloading! *}
+text {*
+ Read and print operations are roughly dual to each other, such that for the
+ user @{text "s' = pretty (read s)"} looks similar to the original source
+ text @{text "s"}, but the details depend on many side-conditions. There are
+ also explicit options to control the removal of type information in the
+ output. The default configuration routinely loses information, so @{text
+ "t' = read (pretty t)"} might fail, or produce a differently typed term, or
+ a completely different term in the face of syntactic overloading.
+*}
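+
+text {*
+  For example, a minimal round trip (the input string is an arbitrary example):
+*}
+
+ML {*
+  val ctxt = @{context};
+
+  (*read: parsing together with type-checking*)
+  val t = Syntax.read_term ctxt "f x";
+
+  (*pretty: uncheck together with formatted output, back to a source string*)
+  val s' = Syntax.string_of_term ctxt t;
+*}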
text %mlref {*
\begin{mldecls}
@@ -98,26 +98,27 @@
If particular type-constraints are required for some of the arguments, the
- read operations needs to be split into its parse and check phases. Then it
+ read operation needs to be split into its parse and check phases. Then it
- is possible to use @{ML Type.constraint} on the intermediate pre-terms.
+ is possible to use @{ML Type.constraint} on the intermediate pre-terms
+ (\secref{sec:term-check}).
\item @{ML Syntax.read_props}~@{text "ctxt strs"} parses and checks a
simultaneous list of source strings as terms of the logic, with an implicit
type-constraint for each argument to enforce type @{typ prop}; this also
- affects the inner syntax for parsing. The remaining type-reconstructions
- works as for @{ML Syntax.read_terms} above.
+ affects the inner syntax for parsing. The remaining type-reconstruction
+ works as for @{ML Syntax.read_terms}.
\item @{ML Syntax.read_typ}, @{ML Syntax.read_term}, @{ML Syntax.read_prop}
- are like the simultaneous versions above, but operate on a single argument
- only. This convenient shorthand is adequate in situations where a single
- item in its own scope is processed. Do not use @{ML "map o
- Syntax.read_term"} where @{ML Syntax.read_terms} is actually intended!
+ are like the simultaneous versions, but operate on a single argument only.
+ This convenient shorthand is adequate in situations where a single item in
+ its own scope is processed. Do not use @{ML "map o Syntax.read_term"} where
+ @{ML Syntax.read_terms} is actually intended!
\item @{ML Syntax.pretty_typ}~@{text "ctxt T"} and @{ML
Syntax.pretty_term}~@{text "ctxt t"} uncheck and pretty-print the given type
or term, respectively. Although the uncheck phase acts on a simultaneous
- list as well, this is rarely relevant in practice, so only the singleton
- case is provided as combined pretty operation. There is no distinction of
- term vs.\ proposition.
+ list as well, this is rarely used in practice, so only the singleton case is
+ provided as combined pretty operation. There is no distinction of term vs.\
+ proposition.
\item @{ML Syntax.string_of_typ} and @{ML Syntax.string_of_term} are
convenient compositions of @{ML Syntax.pretty_typ} and @{ML
@@ -130,11 +131,11 @@
@{ML Syntax.read_term}, @{ML Syntax.read_prop}, and @{ML
Syntax.string_of_term} are the most important operations in practice.
- \medskip Note that the string values that are passed in and out here are
+ \medskip Note that the string values that are passed in and out are
annotated by the system, to carry further markup that is relevant for the
Prover IDE \cite{isabelle-jedit}. User code should neither compose its own
input strings, nor try to analyze the output strings. Conceptually this is
- an abstract datatype, encoded into a concrete string for historical reasons.
+ an abstract datatype, encoded as a concrete string for historical reasons.
The standard way to provide the required position markup for input works via
the outer syntax parser wrapper @{ML Parse.inner_syntax}, which is already
@@ -147,47 +148,46 @@
section {* Parsing and unparsing \label{sec:parse-unparse} *}
-text {* Parsing and unparsing converts between actual source text and
- a certain \emph{pre-term} format, where all bindings and scopes are
- resolved faithfully. Thus the names of free variables or constants
- are already determined in the sense of the logical context, but type
- information might be still missing. Pre-terms support an explicit
- language of \emph{type constraints} that may be augmented by user
- code to guide the later \emph{check} phase.
+text {*
+ Parsing and unparsing convert between actual source text and a certain
+ \emph{pre-term} format, where all bindings and scopes are already resolved
+ faithfully. Thus the names of free variables or constants are determined in
+ the sense of the logical context, but type information might still be
+ missing. Pre-terms support an explicit language of \emph{type constraints}
+ that may be augmented by user code to guide the later \emph{check} phase.
- Actual parsing is based on traditional lexical analysis and Earley
- parsing for arbitrary context-free grammars. The user can specify
- the grammar via mixfix annotations. Moreover, there are \emph{syntax
- translations} that can be augmented by the user, either
- declaratively via @{command translations} or programmatically via
- @{command parse_translation}, @{command print_translation} etc. The
- final scope-resolution is performed by the system, according to name
- spaces for types, term variables and constants etc.\ determined by
- the context.
+ Actual parsing is based on traditional lexical analysis and Earley parsing
+ for arbitrary context-free grammars. The user can specify the grammar
+ declaratively via mixfix annotations. Moreover, there are \emph{syntax
+ translations} that can be augmented by the user, either declaratively via
+ @{command translations} or programmatically via @{command
+ parse_translation}, @{command print_translation} \cite{isabelle-isar-ref}.
+ The final scope-resolution is performed by the system, according to name
+ spaces for types, term variables and constants determined by the context.
*}
text %mlref {*
\begin{mldecls}
@{index_ML Syntax.parse_typ: "Proof.context -> string -> typ"} \\
@{index_ML Syntax.parse_term: "Proof.context -> string -> term"} \\
- @{index_ML Syntax.parse_prop: "Proof.context -> string -> term"} \\
+ @{index_ML Syntax.parse_prop: "Proof.context -> string -> term"} \\[0.5ex]
@{index_ML Syntax.unparse_typ: "Proof.context -> typ -> Pretty.T"} \\
@{index_ML Syntax.unparse_term: "Proof.context -> term -> Pretty.T"} \\
\end{mldecls}
\begin{description}
- \item @{ML Syntax.parse_typ}~@{text "ctxt str"} parses a source strings as
+ \item @{ML Syntax.parse_typ}~@{text "ctxt str"} parses a source string as a
pre-type that is ready to be used with subsequent check operations.
- \item @{ML Syntax.parse_term}~@{text "ctxt str"} parses a source strings as
+ \item @{ML Syntax.parse_term}~@{text "ctxt str"} parses a source string as a
pre-term that is ready to be used with subsequent check operations.
- \item @{ML Syntax.parse_prop}~@{text "ctxt str"} parses a source strings as
+ \item @{ML Syntax.parse_prop}~@{text "ctxt str"} parses a source string as a
pre-term that is ready to be used with subsequent check operations. The
inner syntax category is @{typ prop} and a suitable type-constraint is
- included to ensure that this information is preserved during the check
- phase.
+ included to ensure that this information is observed in subsequent type
+ reconstruction.
\item @{ML Syntax.unparse_typ}~@{text "ctxt T"} unparses a type after
uncheck operations, to turn it into a pretty tree.
@@ -198,8 +198,8 @@
\end{description}
- These operations always operate on single items; use the combinator @{ML
- map} to apply them to a list of items.
+ These operations always operate on a single item; use the combinator @{ML
+ map} to apply them to a list.
*}
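+
+text {*
+  For example, a minimal sketch (the input strings are arbitrary): parse each
+  string separately, check the resulting pre-terms simultaneously, and unparse
+  one term at a time.
+*}
+
+ML {*
+  val ctxt = @{context};
+
+  (*parse: one pre-term per source string*)
+  val pre_ts = map (Syntax.parse_term ctxt) ["f x", "g y"];
+
+  (*check: simultaneous type-reconstruction in a common scope*)
+  val ts = Syntax.check_terms ctxt pre_ts;
+
+  (*unparse: back to a pretty tree, again one term at a time*)
+  val prts = map (Syntax.unparse_term ctxt) ts;
+*}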
@@ -216,14 +216,14 @@
before pretty printing.
A typical add-on for the check/uncheck syntax layer is the @{command
- abbreviation} mechanism. Here the user specifies syntactic
- definitions that are managed by the system as polymorphic @{text
- "let"} bindings. These are expanded during the @{text "check"}
- phase, and contracted during the @{text "uncheck"} phase, without
- affecting the type-assignment of the given terms.
+ abbreviation} mechanism \cite{isabelle-isar-ref}. Here the user specifies
+ syntactic definitions that are managed by the system as polymorphic @{text
+ "let"} bindings. These are expanded during the @{text "check"} phase, and
+ contracted during the @{text "uncheck"} phase, without affecting the
+ type-assignment of the given terms.
- \medskip The precise meaning of type checking depends on the context
- --- additional check/uncheck plugins might be defined in user space.
+ \medskip The precise meaning of type checking depends on the context ---
+ additional check/uncheck modules might be defined in user space.
For example, the @{command class} command defines a context where
@{text "check"} treats certain type instances of overloaded
@@ -237,7 +237,7 @@
\begin{mldecls}
@{index_ML Syntax.check_typs: "Proof.context -> typ list -> typ list"} \\
@{index_ML Syntax.check_terms: "Proof.context -> term list -> term list"} \\
- @{index_ML Syntax.check_props: "Proof.context -> term list -> term list"} \\
+ @{index_ML Syntax.check_props: "Proof.context -> term list -> term list"} \\[0.5ex]
@{index_ML Syntax.uncheck_typs: "Proof.context -> typ list -> typ list"} \\
@{index_ML Syntax.uncheck_terms: "Proof.context -> term list -> term list"} \\
\end{mldecls}
@@ -256,7 +256,7 @@
Applications sometimes need to check several types and terms together. The
standard approach uses @{ML Logic.mk_type} to embed the language of types
into that of terms; all arguments are appended into one list of terms that
- is checked; afterwards the original type arguments are recovered with @{ML
- Logic.dest_type}.
+ is checked; afterwards the type arguments are recovered with @{ML
+ Logic.dest_type}, as in the sketch at the end of this section.
\item @{ML Syntax.check_props}~@{text "ctxt ts"} checks a simultaneous list
@@ -273,8 +273,8 @@
\end{description}
- These operations always operate simultaneously on multiple items; use the
- combinator @{ML singleton} to apply them to a single item.
+ These operations always operate simultaneously on a list; use the combinator
+ @{ML singleton} to apply them to a single item.
*}
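+
+text {*
+  A minimal sketch (all inputs are arbitrary examples): a single pre-term is
+  checked via the list-based operation, and a type is checked together with a
+  term by means of the @{ML Logic.mk_type} embedding.
+*}
+
+ML {*
+  val ctxt = @{context};
+
+  (*a single pre-term, checked via the simultaneous operation*)
+  val t = singleton (Syntax.check_terms ctxt) (Syntax.parse_term ctxt "f x");
+
+  (*a type and a term checked together: embed the type into the term language*)
+  val ts =
+    Syntax.check_terms ctxt
+      [Logic.mk_type (Syntax.parse_typ ctxt "'a"),
+       Syntax.parse_term ctxt "g y"];
+
+  (*recover the type argument afterwards*)
+  val A = Logic.dest_type (hd ts);
+*}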
end