diff -r 151f894984d8 -r 2bf52eec4e8a src/Doc/Implementation/ML.thy
--- a/src/Doc/Implementation/ML.thy	Wed Oct 14 15:06:42 2015 +0200
+++ b/src/Doc/Implementation/ML.thy	Wed Oct 14 15:10:32 2015 +0200
@@ -260,7 +260,7 @@
   @{ML_text lhs} (not @{ML_text lhs_tm}). Or a term that is known to be a
   variable can be called @{ML_text v} or @{ML_text x}.

-  \item Tactics (\secref{sec:tactics}) are sufficiently important to
+  \<^item> Tactics (\secref{sec:tactics}) are sufficiently important to
   have specific naming conventions. The name of a basic tactic
   definition always has a @{ML_text "_tac"} suffix, the subgoal index
   (if applicable) is always called @{ML_text i}, and the goal state
@@ -648,20 +648,20 @@
   \begin{description}

-  \item @{ML "ML_Context.the_generic_context ()"} refers to the theory
+  \<^descr> @{ML "ML_Context.the_generic_context ()"} refers to the theory
   context of the ML toplevel --- at compile time. ML code needs to take
   care to refer to @{ML "ML_Context.the_generic_context ()"} correctly.
   Recall that evaluation of a function body is delayed until actual
   run-time.

-  \item @{ML "Context.>>"}~@{text f} applies context transformation
+  \<^descr> @{ML "Context.>>"}~@{text f} applies context transformation
   @{text f} to the implicit context of the ML toplevel.

-  \item @{ML ML_Thms.bind_thms}~@{text "(name, thms)"} stores a list of
+  \<^descr> @{ML ML_Thms.bind_thms}~@{text "(name, thms)"} stores a list of
   theorems produced in ML both in the (global) theory context and the
   ML toplevel, associating it with the provided name.

-  \item @{ML ML_Thms.bind_thm} is similar to @{ML ML_Thms.bind_thms} but
+  \<^descr> @{ML ML_Thms.bind_thm} is similar to @{ML ML_Thms.bind_thms} but
   refers to a singleton fact.

   \end{description}
@@ -727,12 +727,12 @@
   \begin{description}

-  \item @{text "@{make_string}"} inlines a function to print arbitrary values
+  \<^descr> @{text "@{make_string}"} inlines a function to print arbitrary values
   similar to the ML toplevel. The result is compiler dependent and may
   fall back on "?" in certain situations. The value of configuration
   option @{attribute_ref ML_print_depth} determines further details of
   output.

-  \item @{text "@{print f}"} uses the ML function @{text "f: string ->
+  \<^descr> @{text "@{print f}"} uses the ML function @{text "f: string ->
   unit"} to output the result of @{text "@{make_string}"} above,
   together with the source position of the antiquotation. The default
   output function is @{ML writeln}.
@@ -909,13 +909,13 @@
   \begin{description}

-  \item @{ML fold}~@{text f} lifts the parametrized update function
+  \<^descr> @{ML fold}~@{text f} lifts the parametrized update function
   @{text "f"} to a list of parameters.

-  \item @{ML fold_rev}~@{text "f"} is similar to @{ML fold}~@{text
+  \<^descr> @{ML fold_rev}~@{text "f"} is similar to @{ML fold}~@{text
   "f"}, but works inside-out, as if the list would be reversed.

-  \item @{ML fold_map}~@{text "f"} lifts the parametrized update
+  \<^descr> @{ML fold_map}~@{text "f"} lifts the parametrized update
   function @{text "f"} (with side-result) to a list of parameters and
   cumulative side-results.
@@ -1035,11 +1035,11 @@
   \begin{description}

-  \item @{ML writeln}~@{text "text"} outputs @{text "text"} as regular
+  \<^descr> @{ML writeln}~@{text "text"} outputs @{text "text"} as regular
   message. This is the primary message output operation of Isabelle and
   should be used by default.
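For illustration, the combinators above and the regular message channel can be exercised together in a small diagnostic snippet along the following lines (a minimal sketch only: the numbers are invented for the example, and @{ML string_of_int} is the usual Isabelle/ML conversion to text):

  ML_val \<open>
    (*fold sums the list from the left: 1 + 2 + 3 + 4 = 10*)
    val sum = fold (fn x => fn acc => x + acc) [1, 2, 3, 4] 0;

    (*fold_map additionally threads a cumulative side-result:
      here the running totals [1, 3, 6, 10] together with the final 10*)
    val (totals, final) =
      fold_map (fn x => fn acc => (x + acc, x + acc)) [1, 2, 3, 4] 0;

    (*writeln is the regular message channel*)
    val _ = writeln ("sum = " ^ string_of_int sum);
  \<close>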
-  \item @{ML tracing}~@{text "text"} outputs @{text "text"} as special
+  \<^descr> @{ML tracing}~@{text "text"} outputs @{text "text"} as special
   tracing message, indicating potential high-volume output to the
   front-end (hundreds or thousands of messages issued by a single
   command). The idea is to allow the user-interface to downgrade the
@@ -1049,11 +1049,11 @@
   output, e.g.\ switch to a different output window. So this channel
   should not be used for regular output.

-  \item @{ML warning}~@{text "text"} outputs @{text "text"} as
+  \<^descr> @{ML warning}~@{text "text"} outputs @{text "text"} as
   warning, which typically means some extra emphasis on the front-end
   side (color highlighting, icons, etc.).

-  \item @{ML error}~@{text "text"} raises exception @{ML ERROR}~@{text
+  \<^descr> @{ML error}~@{text "text"} raises exception @{ML ERROR}~@{text
   "text"} and thus lets the Isar toplevel print @{text "text"} on the
   error channel, which typically means some extra emphasis on the
   front-end side (color highlighting, icons, etc.).
@@ -1217,30 +1217,30 @@
   \begin{description}

-  \item @{ML try}~@{text "f x"} makes the partiality of evaluating
+  \<^descr> @{ML try}~@{text "f x"} makes the partiality of evaluating
   @{text "f x"} explicit via the option datatype. Interrupts are
   \emph{not} handled here, i.e.\ this form serves as safe replacement
   for the \emph{unsafe} version @{ML_text "(SOME"}~@{text "f x"}~@{ML_text "handle _ => NONE)"}
   that is occasionally seen in books about SML97, but not in Isabelle/ML.

-  \item @{ML can} is similar to @{ML try} with more abstract result.
-
-  \item @{ML ERROR}~@{text "msg"} represents user errors; this
+  \<^descr> @{ML can} is similar to @{ML try} with more abstract result.
+
+  \<^descr> @{ML ERROR}~@{text "msg"} represents user errors; this
   exception is normally raised indirectly via the @{ML error} function
   (see \secref{sec:message-channels}).

-  \item @{ML Fail}~@{text "msg"} represents general program failures.
-
-  \item @{ML Exn.is_interrupt} identifies interrupts robustly, without
+  \<^descr> @{ML Fail}~@{text "msg"} represents general program failures.
+
+  \<^descr> @{ML Exn.is_interrupt} identifies interrupts robustly, without
   mentioning concrete exception constructors in user code. Handled
   interrupts need to be re-raised promptly!

-  \item @{ML reraise}~@{text "exn"} raises exception @{text "exn"}
+  \<^descr> @{ML reraise}~@{text "exn"} raises exception @{text "exn"}
   while preserving its implicit position information (if possible,
   depending on the ML platform).

-  \item @{ML Runtime.exn_trace}~@{ML_text "(fn () =>"}~@{text
+  \<^descr> @{ML Runtime.exn_trace}~@{ML_text "(fn () =>"}~@{text
   "e"}@{ML_text ")"} evaluates expression @{text "e"} while printing
   a full trace of its stack of nested exceptions (if possible,
   depending on the ML platform).
@@ -1258,7 +1258,7 @@
   \begin{description}

-  \item @{text "@{assert}"} inlines a function
+  \<^descr> @{text "@{assert}"} inlines a function
   @{ML_type "bool -> unit"} that raises @{ML Fail} if the argument is
   @{ML false}. Due to inlining the source position of failed
   assertions is included in the error output.
@@ -1339,28 +1339,28 @@
   \begin{description}

-  \item Type @{ML_type "Symbol.symbol"} represents individual Isabelle
+  \<^descr> Type @{ML_type "Symbol.symbol"} represents individual Isabelle
   symbols.

-  \item @{ML "Symbol.explode"}~@{text "str"} produces a symbol list
+  \<^descr> @{ML "Symbol.explode"}~@{text "str"} produces a symbol list
   from the packed form.
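For example, a minimal sketch (the particular string is made up for illustration):

  ML_val \<open>
    (*the packed string is scanned into Isabelle symbols;
      "\<alpha>" counts as a single symbol, not as several characters*)
    val syms = Symbol.explode "x\<alpha>y";   (*["x", "\<alpha>", "y"]*)
  \<close>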
   This function supersedes @{ML "String.explode"} for virtually all
   purposes of manipulating text in Isabelle!\footnote{The runtime
   overhead for exploded strings is mainly that of the list structure:
   individual symbols that happen to be a singleton string do not
   require extra memory in Poly/ML.}

-  \item @{ML "Symbol.is_letter"}, @{ML "Symbol.is_digit"}, @{ML
+  \<^descr> @{ML "Symbol.is_letter"}, @{ML "Symbol.is_digit"}, @{ML
   "Symbol.is_quasi"}, @{ML "Symbol.is_blank"} classify standard
   symbols according to fixed syntactic conventions of Isabelle, cf.\
   @{cite "isabelle-isar-ref"}.

-  \item Type @{ML_type "Symbol.sym"} is a concrete datatype that
+  \<^descr> Type @{ML_type "Symbol.sym"} is a concrete datatype that
   represents the different kinds of symbols explicitly, with
   constructors @{ML "Symbol.Char"}, @{ML "Symbol.UTF8"}, @{ML "Symbol.Sym"},
   @{ML "Symbol.Ctrl"}, @{ML "Symbol.Raw"}, @{ML "Symbol.Malformed"}.

-  \item @{ML "Symbol.decode"} converts the string representation of a
+  \<^descr> @{ML "Symbol.decode"} converts the string representation of a
   symbol into the datatype version.

   \end{description}
@@ -1401,7 +1401,7 @@
   \begin{description}

-  \item Type @{ML_type char} is \emph{not} used. The smallest textual
+  \<^descr> Type @{ML_type char} is \emph{not} used. The smallest textual
   unit in Isabelle is represented as a ``symbol'' (see
   \secref{sec:symbols}).
@@ -1418,7 +1418,7 @@
   \begin{description}

-  \item Type @{ML_type string} represents immutable vectors of 8-bit
+  \<^descr> Type @{ML_type string} represents immutable vectors of 8-bit
   characters. There are operations in SML to convert back and forth to
   actual byte vectors, which are seldom used.
@@ -1473,7 +1473,7 @@
   \begin{description}

-  \item Type @{ML_type int} represents regular mathematical integers, which
+  \<^descr> Type @{ML_type int} represents regular mathematical integers, which
   are \emph{unbounded}. Overflow is treated properly, but should never happen
   in practice.\footnote{The size limit for integer bit patterns in memory is
   64\,MB for 32-bit Poly/ML, and much higher for 64-bit systems.} This works
@@ -1501,11 +1501,11 @@
   \begin{description}

-  \item Type @{ML_type Time.time} represents time abstractly according
+  \<^descr> Type @{ML_type Time.time} represents time abstractly according
   to the SML97 basis library definition. This is adequate for internal
   ML operations, but awkward in concrete time specifications.

-  \item @{ML seconds}~@{text "s"} turns the concrete scalar @{text
+  \<^descr> @{ML seconds}~@{text "s"} turns the concrete scalar @{text
   "s"} (measured in seconds) into an abstract time value. Floating
   point numbers are easy to use as configuration options in the
   context (see \secref{sec:config-options}) or system options that
@@ -1553,13 +1553,13 @@
   \begin{description}

-  \item @{ML cons}~@{text "x xs"} evaluates to @{text "x :: xs"}.
+  \<^descr> @{ML cons}~@{text "x xs"} evaluates to @{text "x :: xs"}.
   Tupled infix operators are a historical accident in Standard ML. The
   curried @{ML cons} amends this, but it should be only used when
   partial application is required.

-  \item @{ML member}, @{ML insert}, @{ML remove}, @{ML update} treat
+  \<^descr> @{ML member}, @{ML insert}, @{ML remove}, @{ML update} treat
   lists as a set-like container that maintains the order of elements.
   See @{file "~~/src/Pure/library.ML"} for the full specifications
   (written in ML).
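A minimal sketch of this set-like usage (the values are invented for illustration, with @{verbatim "op ="} as the explicit equality test):

  ML_val \<open>
    val xs = insert (op =) 1 [2, 3];    (*[1, 2, 3]*)
    val xs' = insert (op =) 2 xs;       (*unchanged: 2 is already present*)
    val b = member (op =) xs' 3;        (*true*)
    val ys = remove (op =) 2 xs';       (*[1, 3]*)
  \<close>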
   There are some further derived operations like
@@ -1629,7 +1629,7 @@
   \begin{description}

-  \item @{ML AList.lookup}, @{ML AList.defined}, @{ML AList.update}
+  \<^descr> @{ML AList.lookup}, @{ML AList.defined}, @{ML AList.update}
   implement the main ``framework operations'' for mappings in
   Isabelle/ML, following standard conventions for their names and
   types.
@@ -1836,11 +1836,11 @@
   \begin{description}

-  \item @{ML File.tmp_path}~@{text "path"} relocates the base
+  \<^descr> @{ML File.tmp_path}~@{text "path"} relocates the base
   component of @{text "path"} into the unique temporary directory of
   the running Isabelle/ML process.

-  \item @{ML serial_string}~@{text "()"} creates a new serial number
+  \<^descr> @{ML serial_string}~@{text "()"} creates a new serial number
   that is unique over the runtime of the Isabelle/ML process.

   \end{description}
@@ -1883,14 +1883,14 @@
   \begin{description}

-  \item Type @{ML_type "'a Synchronized.var"} represents synchronized
+  \<^descr> Type @{ML_type "'a Synchronized.var"} represents synchronized
   variables with state of type @{ML_type 'a}.

-  \item @{ML Synchronized.var}~@{text "name x"} creates a synchronized
+  \<^descr> @{ML Synchronized.var}~@{text "name x"} creates a synchronized
   variable that is initialized with value @{text "x"}. The
   @{text "name"} is used for tracing.

-  \item @{ML Synchronized.guarded_access}~@{text "var f"} lets the
+  \<^descr> @{ML Synchronized.guarded_access}~@{text "var f"} lets the
   function @{text "f"} operate within a critical section on the state
   @{text "x"} as follows: if @{text "f x"} produces @{ML NONE}, it
   continues to wait on the internal condition variable, expecting that
@@ -1996,32 +1996,32 @@
   \begin{description}

-  \item Type @{ML_type "'a Exn.result"} represents the disjoint sum of
+  \<^descr> Type @{ML_type "'a Exn.result"} represents the disjoint sum of
   ML results explicitly, with constructor @{ML Exn.Res} for regular
   values and @{ML "Exn.Exn"} for exceptions.

-  \item @{ML Exn.capture}~@{text "f x"} manages the evaluation of
+  \<^descr> @{ML Exn.capture}~@{text "f x"} manages the evaluation of
   @{text "f x"} such that exceptions are made explicit as
   @{ML "Exn.Exn"}. Note that this includes physical interrupts (see
   also \secref{sec:exceptions}), so the same precautions apply to user
   code: interrupts must not be absorbed accidentally!

-  \item @{ML Exn.interruptible_capture} is similar to @{ML
+  \<^descr> @{ML Exn.interruptible_capture} is similar to @{ML
   Exn.capture}, but interrupts are immediately re-raised as required
   for user code.

-  \item @{ML Exn.release}~@{text "result"} releases the original
+  \<^descr> @{ML Exn.release}~@{text "result"} releases the original
   runtime result, exposing its regular value or raising the reified
   exception.

-  \item @{ML Par_Exn.release_all}~@{text "results"} combines results
+  \<^descr> @{ML Par_Exn.release_all}~@{text "results"} combines results
   that were produced independently (e.g.\ by parallel evaluation). If
   all results are regular values, that list is returned. Otherwise,
   the collection of all exceptions is raised, wrapped-up as collective
   parallel exception. Note that the latter prevents access to
   individual exceptions by conventional @{verbatim "handle"} of ML.

-  \item @{ML Par_Exn.release_first} is similar to @{ML
+  \<^descr> @{ML Par_Exn.release_first} is similar to @{ML
   Par_Exn.release_all}, but only the first (meaningful) exception that
   has occurred in the original evaluation process is raised again, the
   others are ignored.
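To see the basic @{ML Exn.capture} / @{ML Exn.release} round trip in isolation, a minimal sketch (integer division merely serves as a placeholder computation that may fail):

  ML_val \<open>
    val ok = Exn.capture (fn n => 10 div n) 5;    (*Exn.Res 2*)
    val bad = Exn.capture (fn n => 10 div n) 0;   (*Exn.Exn Div*)

    (*release returns the regular value or re-raises the stored exception*)
    val two = Exn.release ok;
  \<close>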
   That single exception may get handled by conventional
@@ -2057,7 +2057,7 @@
   \begin{description}

-  \item @{ML Par_List.map}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"} is like @{ML
+  \<^descr> @{ML Par_List.map}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"} is like @{ML
   "map"}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"}, but the evaluation of
   @{text "f x\<^sub>i"} for @{text "i = 1, \<dots>, n"} is performed in parallel.
@@ -2067,7 +2067,7 @@
   program exception that happened to occur in the parallel evaluation
   is propagated, and all other failures are ignored.

-  \item @{ML Par_List.get_some}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"} produces some
+  \<^descr> @{ML Par_List.get_some}~@{text "f [x\<^sub>1, \<dots>, x\<^sub>n]"} produces some
   @{text "f x\<^sub>i"} that is of the form @{text "SOME y\<^sub>i"}, if that
   exists, otherwise @{text "NONE"}. Thus it is similar to
   @{ML Library.get_first}, but subject to a non-deterministic parallel
@@ -2130,19 +2130,19 @@
   \begin{description}

-  \item Type @{ML_type "'a lazy"} represents lazy values over type @{verbatim
+  \<^descr> Type @{ML_type "'a lazy"} represents lazy values over type @{verbatim
   "'a"}.

-  \item @{ML Lazy.lazy}~@{text "(fn () => e)"} wraps the unevaluated
+  \<^descr> @{ML Lazy.lazy}~@{text "(fn () => e)"} wraps the unevaluated
   expression @{text e} as unfinished lazy value.

-  \item @{ML Lazy.value}~@{text a} wraps the value @{text a} as finished lazy
+  \<^descr> @{ML Lazy.value}~@{text a} wraps the value @{text a} as finished lazy
   value. When forced, it returns @{text a} without any further evaluation.
   There is very low overhead for this proforma wrapping of strict values as
   lazy values.

-  \item @{ML Lazy.force}~@{text x} produces the result of the lazy value in a
+  \<^descr> @{ML Lazy.force}~@{text x} produces the result of the lazy value in a
   thread-safe manner as explained above. Thus it may cause the current thread
   to wait on a pending evaluation attempt by another thread.
@@ -2223,15 +2223,15 @@
   \begin{description}

-  \item Type @{ML_type "'a future"} represents future values over type
+  \<^descr> Type @{ML_type "'a future"} represents future values over type
   @{verbatim "'a"}.

-  \item @{ML Future.fork}~@{text "(fn () => e)"} registers the unevaluated
+  \<^descr> @{ML Future.fork}~@{text "(fn () => e)"} registers the unevaluated
   expression @{text e} as unfinished future value, to be evaluated
   eventually on the parallel worker-thread farm. This is a shorthand
   for @{ML Future.forks} below, with default parameters and a single
   expression.

-  \item @{ML Future.forks}~@{text "params exprs"} is the general interface to
+  \<^descr> @{ML Future.forks}~@{text "params exprs"} is the general interface to
   fork several futures simultaneously. The @{text params} consist of the
   following fields:
@@ -2273,7 +2273,7 @@
   \end{itemize}

-  \item @{ML Future.join}~@{text x} retrieves the value of an already finished
+  \<^descr> @{ML Future.join}~@{text x} retrieves the value of an already finished
   future, which may lead to an exception, according to the result of its
   previous evaluation.
@@ -2295,7 +2295,7 @@
   explicitly when forked (see @{text deps} above). Thus the evaluation
   can work from the bottom up, without join conflicts and wait states.

-  \item @{ML Future.joins}~@{text xs} joins the given list of futures
+  \<^descr> @{ML Future.joins}~@{text xs} joins the given list of futures
   simultaneously, which is more efficient than @{ML "map Future.join"}~@{text xs}.
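As an illustration of forking and joining futures, a minimal sketch (squaring numbers merely stands in for real work):

  ML_val \<open>
    (*fork independent evaluations onto the worker-thread farm and
      join them simultaneously*)
    val futures = map (fn n => Future.fork (fn () => n * n)) [1, 2, 3, 4];
    val squares = Future.joins futures;   (*[1, 4, 9, 16]*)
  \<close>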
@@ -2305,23 +2305,23 @@
   presently evaluated on other threads only happens as last resort,
   when no other unfinished futures are left over.

-  \item @{ML Future.value}~@{text a} wraps the value @{text a} as finished
+  \<^descr> @{ML Future.value}~@{text a} wraps the value @{text a} as finished
   future value, bypassing the worker-thread farm. When joined, it returns
   @{text a} without any further evaluation. There is very low overhead for
   this proforma wrapping of strict values as futures.

-  \item @{ML Future.map}~@{text "f x"} is a fast-path implementation of @{ML
+  \<^descr> @{ML Future.map}~@{text "f x"} is a fast-path implementation of @{ML
   Future.fork}~@{text "(fn () => f ("}@{ML Future.join}~@{text "x))"}, which
   avoids the full overhead of the task queue and worker-thread farm as far as
   possible. The function @{text f} is supposed to be some trivial
   post-processing or projection of the future result.

-  \item @{ML Future.cancel}~@{text "x"} cancels the task group of the given
+  \<^descr> @{ML Future.cancel}~@{text "x"} cancels the task group of the given
   future, using @{ML Future.cancel_group} below.

-  \item @{ML Future.cancel_group}~@{text "group"} cancels all tasks of the
+  \<^descr> @{ML Future.cancel_group}~@{text "group"} cancels all tasks of the
   given task group for all time. Threads that are presently processing a task
   of the given group are interrupted: it may take some time until they are
   actually terminated. Tasks that are queued but not yet processed are
@@ -2329,11 +2329,11 @@
   invalidated, any further attempt to fork a future that belongs to it will
   yield a canceled result as well.

-  \item @{ML Future.promise}~@{text abort} registers a passive future with the
+  \<^descr> @{ML Future.promise}~@{text abort} registers a passive future with the
   given @{text abort} operation: it is invoked when the future task group is
   canceled.

-  \item @{ML Future.fulfill}~@{text "x a"} finishes the passive future @{text
+  \<^descr> @{ML Future.fulfill}~@{text "x a"} finishes the passive future @{text
   x} by the given value @{text a}. If the promise has already been canceled,
   the attempt to fulfill it causes an exception.
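Finally, a minimal sketch of the fast-path operations @{ML Future.value} and @{ML Future.map} (the arithmetic is only a placeholder; @{ML string_of_int} is the usual Isabelle/ML conversion to text):

  ML_val \<open>
    val x = Future.value 41;                (*finished future, no worker thread involved*)
    val y = Future.map (fn i => i + 1) x;   (*cheap post-processing of the result*)
    val _ = writeln (string_of_int (Future.join y));   (*prints 42*)
  \<close>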