doc-src/IsarImplementation/Thy/ML_old.thy
changeset 39867 a8363532cd4d
parent 39859 381e16bb5f46
child 39874 bbac63bbcffe
--- a/doc-src/IsarImplementation/Thy/ML_old.thy	Mon Oct 18 19:06:07 2010 +0100
+++ b/doc-src/IsarImplementation/Thy/ML_old.thy	Mon Oct 18 21:37:26 2010 +0100
@@ -104,184 +104,6 @@
 *}
 
 
-section {* Thread-safe programming *}
-
-text {*
-  Recent versions of Poly/ML (5.2.1 or later) support robust
-  multithreaded execution, based on native operating system threads of
-  the underlying platform.  Thus threads will actually be executed in
-  parallel on multi-core systems.  A speedup factor of approximately
-  1.5--3 can be expected on a regular 4-core machine.\footnote{There
-  is some inherent limitation of the speedup factor due to garbage
-  collection, which is still sequential.  It helps to provide initial
-  heap space generously, using the \texttt{-H} option of Poly/ML.}
-  Threads also help to organize advanced operations of the system,
-  with explicit communication between sub-components, real-time
-  conditions, time-outs etc.
-
-  Threads lack the memory protection of separate processes, and
-  operate concurrently on shared heap memory.  This has the advantage
-  that results of independent computations are immediately available
-  to other threads, without requiring untyped character streams,
-  awkward serialization etc.
-
-  On the other hand, some programming guidelines need to be observed
-  in order to make unprotected parallelism work out smoothly.  While
-  the ML system implementation is responsible for maintaining the
-  basic integrity of the representation of ML values in memory, the
-  application programmer needs to ensure that multithreaded execution
-  does not break the intended semantics.
-
-  \medskip \paragraph{Critical shared resources.} Only those parts of
-  a program that operate outside the purely functional world of ML are
-  actually critical.  In particular, this covers
-
-  \begin{itemize}
-
-  \item global references (or arrays), i.e.\ those that persist over
-  several invocations of associated operations,\footnote{This is
-  independent of the visibility of such mutable values in the toplevel
-  scope.}
-
-  \item direct I/O on shared channels, notably @{text "stdin"}, @{text
-  "stdout"}, @{text "stderr"}.
-
-  \end{itemize}
-
-  The majority of tools implemented within the Isabelle/Isar framework
-  will not require any of these critical elements: nothing special
-  needs to be observed when staying in the purely functional fragment
-  of ML.  Note that output via the official Isabelle channels does not
-  count as direct I/O, so the operations @{ML "writeln"}, @{ML
-  "warning"}, @{ML "tracing"} etc.\ are safe.
-
-  Moreover, ML bindings within the toplevel environment (@{verbatim
-  "type"}, @{verbatim val}, @{verbatim "structure"} etc.) that result
-  from run-time invocation of the compiler are also safe, because
-  Isabelle/Isar manages them as part of the theory or proof context.
-
-  \paragraph{Multithreading in Isabelle/Isar.}  The theory loader
-  automatically exploits the overall parallelism of independent nodes
-  in the development graph, as well as the inherent irrelevance of
-  proofs for goals that are fully specified in advance.  This means
-  that checking of individual Isar proofs is parallelized by default.
-  Beyond that, very sophisticated proof tools may use local
-  parallelism internally, via the general programming model of
-  ``future values'' (see also @{"file"
-  "~~/src/Pure/Concurrent/future.ML"}).
-
-  Any ML code that works relative to the present background theory
-  is already safe.  Contextual data may be easily stored within the
-  theory or proof context, thanks to the generic data concept of
-  Isabelle/Isar (see \secref{sec:context-data}).  This greatly
-  diminishes the demand for global state information in the first
-  place.
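-
-  \medskip For example, a minimal sketch of theory data, here a
-  hypothetical table of strings maintained as part of the theory,
-  might look roughly like this (all names are hypothetical):
-
-  \begin{verbatim}
-  structure Data = Theory_Data
-  (
-    type T = string list;            (*representation of the data*)
-    val empty = [];                   (*initial value*)
-    val extend = I;
-    fun merge (xs, ys) = xs @ ys;     (*naive merge of both sides*)
-  );
-
-  fun add s = Data.map (cons s);      (*functional update: theory -> theory*)
-  val get = Data.get;                 (*lookup: theory -> string list*)
-  \end{verbatim}
-
-  Since the data is part of the theory value itself, no additional
-  synchronization is required for concurrent access.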
-
-  \medskip In rare situations where actual mutable content needs to be
-  manipulated, Isabelle provides a single \emph{critical section} that
-  may be entered while preventing any other thread from doing the
-  same.  Entering the critical section without contention is very
-  fast, and several basic system operations do so frequently.  This
-  also means that each thread should leave the critical section
-  quickly; otherwise parallel execution performance may degrade
-  significantly.
-
-  Despite this potential bottleneck, centralized locking is
-  convenient, because it prevents deadlocks without demanding special
-  precautions.  Explicit communication demands other means, though.
-  The high-level abstraction of synchronized variables @{"file"
-  "~~/src/Pure/Concurrent/synchronized.ML"} enables parallel
-  components to communicate via shared state; see also @{"file"
-  "~~/src/Pure/Concurrent/mailbox.ML"} as canonical example.
-
-  \paragraph{Good conduct of impure programs.} The following
-  guidelines enable non-functional programs to participate in
-  multithreading.
-
-  \begin{itemize}
-
-  \item Minimize global state information.  Using proper theory and
-  proof context data amounts to functional update of values, without
-  any special precautions for multithreading.  Apart from the fully
-  general functors for theory and proof data (see
-  \secref{sec:context-data}) there are drop-in replacements that
-  emulate primitive references for common cases of \emph{configuration
-  options} for type @{ML_type "bool"}/@{ML_type "int"}/@{ML_type
-  "string"} (see structure @{ML_struct Config} and @{ML
-  Attrib.config_bool} etc.), and lists of theorems (see functor
-  @{ML_functor Named_Thms}).  A small sketch of such a configuration
-  option is given after this list.
-
-  \item Keep components with local state information
-  \emph{re-entrant}.  Instead of poking initial values into (private)
-  global references, create a new state record on each invocation, and
-  pass that through any auxiliary functions of the component.  The
-  state record may well contain mutable references, without requiring
-  any special synchronization, as long as each invocation sees its
-  own copy.  Occasionally, one might even return to plain functional
-  updates on non-mutable record values here.  A small sketch of this
-  pattern is given after this list as well.
-
-  \item Isolate process configuration flags.  The main legitimate
-  application of global references is to configure the whole process
-  in a certain way, essentially affecting all threads.  A typical
-  example is the @{ML show_types} flag, which tells the pretty printer
-  to output explicit type information for terms.  Such flags usually
-  do not affect the functionality of the core system, but only the
-  view being presented to the user.
-
-  Occasionally, such global process flags are treated like implicit
-  arguments to certain operations, by using the @{ML setmp_CRITICAL}
-  combinator for safe temporary assignment.  Its traditional purpose
-  was to ensure proper recovery of the original value when exceptions
-  are raised in the body; the functionality has since been extended to
-  enter the \emph{critical section} (with its usual potential of
-  degrading parallelism).
-
-  Note that recovery of plain value-passing semantics via @{ML
-  setmp_CRITICAL}~@{text "ref value"} assumes that this @{text "ref"} is
-  exclusively manipulated within the critical section.  In particular,
-  any persistent global assignment of @{text "ref := value"} needs to
-  be marked critical as well, to prevent intruding on another thread's
-  local view, and to avoid lost updates in the global scope.
-
-  \end{itemize}
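-
-  \medskip The following sketch illustrates the first guideline: a
-  configuration option replacing a global boolean reference.  All
-  names are hypothetical, and the exact interface of @{ML
-  Attrib.config_bool} may differ between Isabelle versions.
-
-  \begin{verbatim}
-  (*declare option "my_verbose" with default value false;
-    the setup function is applied to the theory elsewhere*)
-  val (my_verbose, my_verbose_setup) =
-    Attrib.config_bool "my_verbose" false;
-
-  (*read the current value from a proof context*)
-  fun trace ctxt msg =
-    if Config.get ctxt my_verbose then tracing msg else ();
-  \end{verbatim}
-
-  Since the option value resides in the context, concurrent threads
-  working on different contexts do not affect each other.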
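-
-  \medskip The second guideline is illustrated by the following sketch
-  of a re-entrant counter (again with hypothetical names): instead of
-  one global reference, each invocation creates its own mutable state
-  and passes it through the auxiliary functions.
-
-  \begin{verbatim}
-  fun fresh_state () = Unsynchronized.ref 0;  (*private to one invocation*)
-
-  fun step state = (state := ! state + 1; ! state);
-
-  fun run () =
-    let val state = fresh_state ()   (*a new copy for this invocation*)
-    in (step state, step state) end;
-  \end{verbatim}
-
-  Since every call of \texttt{run} operates on its own reference,
-  parallel invocations cannot interfere.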
-
-  Recall that in an open ``LCF-style'' system like Isabelle/Isar, the
-  user participates in constructing the overall environment.  This
-  means that state-based facilities offered by one component will
-  require special caution later on.  So minimizing critical elements,
-  by staying within the plain value-oriented view relative to theory
-  or proof contexts most of the time, will also reduce the chance of
-  mishaps occurring to end-users.
-*}
-
-text %mlref {*
-  \begin{mldecls}
-  @{index_ML NAMED_CRITICAL: "string -> (unit -> 'a) -> 'a"} \\
-  @{index_ML CRITICAL: "(unit -> 'a) -> 'a"} \\
-  @{index_ML setmp_CRITICAL: "'a Unsynchronized.ref -> 'a -> ('b -> 'c) -> 'b -> 'c"} \\
-  \end{mldecls}
-
-  \begin{description}
-
-  \item @{ML NAMED_CRITICAL}~@{text "name f"} evaluates @{text "f ()"}
-  while staying within the critical section of Isabelle/Isar.  No
-  other thread may do so at the same time, but non-critical parallel
-  execution will continue.  The @{text "name"} argument serves for
-  diagnostic purposes and might help to spot sources of congestion.
-
-  \item @{ML CRITICAL} is the same as @{ML NAMED_CRITICAL} with empty
-  name argument.
-
-  \item @{ML setmp_CRITICAL}~@{text "ref value f x"} evaluates @{text "f x"}
-  while staying within the critical section and having @{text "ref :=
-  value"} assigned temporarily.  This recovers a value-passing
-  semantics involving global references, regardless of exceptions or
-  concurrency.
-
-  \end{description}
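-
-  \medskip For example, a temporary change of the hypothetical global
-  flag @{verbatim "verbose_flag"} for the duration of a single
-  function call might be written as follows:
-
-  \begin{verbatim}
-  val verbose_flag = Unsynchronized.ref false;
-
-  (*run f x with verbose_flag set to true, restoring it afterwards*)
-  fun verbosely f x = setmp_CRITICAL verbose_flag true f x;
-  \end{verbatim}
-
-  As explained above, any other access to @{verbatim "verbose_flag"}
-  needs to be wrapped into @{ML CRITICAL} or @{ML NAMED_CRITICAL} as
-  well.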
-*}
-
-
 chapter {* Basic library functions *}
 
 text {*