--- a/CONTRIBUTORS Sun Nov 15 13:06:24 2020 +0000
+++ b/CONTRIBUTORS Sun Nov 15 13:06:41 2020 +0000
@@ -6,6 +6,9 @@
Contributions to this Isabelle version
--------------------------------------
+* November 2020: Florian Haftmann
+ Bundle mixins for locale and class expressions.
+
* October 2020: Jasmin Blanchette, Martin Desharnais, Mathias Fleury
Use veriT in proof preplay in Sledgehammer.
@@ -15,6 +18,10 @@
* October 2020: Jasmin Blanchette, Martin Desharnais
Integration of E 2.5 for Sledgehammer.
+* September 2020: Florian Haftmann
+ Substantial reworking and modularization of the Word library, with
+ generic type conversions.
+
* August 2020: Makarius Wenzel
Improved monitoring of runtime statistics: ML GC progress and Java.
@@ -24,6 +31,10 @@
* June 2020: Makarius Wenzel
Batch-builds via "isabelle build" allow invoking Scala from ML.
+* June 2020: Florian Haftmann
+ Simproc defined_all for more aggressive substitution with variables
+ from assumptions.
+
* May 2020: Makarius Wenzel
Antiquotations for Isabelle systems programming, notably @{scala_function}
and @{scala} to invoke Scala from ML.
@@ -31,14 +42,6 @@
* May 2020: Florian Haftmann
Generic algebraically founded bit operations NOT, AND, OR, XOR.
-* Sept. 2020: Florian Haftmann
- Substantial reworking and modularization of Word library, with
- generic type conversions.
-
-* June 2020: Florian Haftmann
- Simproc defined_all for more aggressive substitution with variables
- from assumptions.
-
Contributions to Isabelle2020
-----------------------------
--- a/NEWS Sun Nov 15 13:06:24 2020 +0000
+++ b/NEWS Sun Nov 15 13:06:41 2020 +0000
@@ -32,6 +32,11 @@
*** Document preparation ***
+* Keyword 'document_theories' within ROOT specifies theories from other
+sessions that should be included in the generated document source
+directory. This does not affect the generated session.tex: \input{...}
+needs to be used separately.
+
* The standard LaTeX engine is now lualatex, according to settings
variable ISABELLE_PDFLATEX. This is mostly upwards compatible with old
pdflatex, but text encoding needs to conform strictly to utf8. Rare
@@ -171,6 +176,12 @@
- Use veriT in proof preplay.
- Take advantage of more cores in proof preplay.
+* Syntax for state monad combinators fcomp and scomp is organized in
+bundle state_combinator_syntax. Minor INCOMPATIBILITY.
+
+* Syntax for reflected terms is organized in bundle term_syntax,
+discontinuing previous locale term_syntax. Minor INCOMPATIBILITY.
+
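+Adaptation sketch for the two entries above (a minimal example; the
+lemma shown is hypothetical): the infix syntax can be re-enabled
+locally by including the bundles, e.g.
+
+  context
+    includes state_combinator_syntax term_syntax
+  begin
+
+  lemma "(f \<circ>> g) x = g (f x)"
+    by (simp add: fcomp_apply)
+
+  end
+
+or globally via 'unbundle state_combinator_syntax'.
+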
*** FOL ***
@@ -231,6 +242,11 @@
* The command-line tool "isabelle console" now supports interrupts
properly (on Linux and macOS).
+* The command-line tool "isabelle document" generates theory documents
+from the underlying session build database (using its exported {\LaTeX}
+sources). INCOMPATIBILITY, the former "isabelle document" tool was
+rather different and has been discontinued.
+
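+Example (the same invocation appears in the system manual), producing
+the document of session FOL with a copy in the current directory:
+
+  isabelle document -v -V -O. FOL
+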
* The command-line tool "isabelle sessions" explores the structure of
Isabelle sessions and prints result names in topological order (on
stdout).
--- a/etc/options Sun Nov 15 13:06:24 2020 +0000
+++ b/etc/options Sun Nov 15 13:06:41 2020 +0000
@@ -8,7 +8,9 @@
option document : string = ""
-- "build document in given format: pdf, dvi, false"
option document_output : string = ""
- -- "document output directory (default within $ISABELLE_BROWSER_INFO tree)"
+ -- "document output directory"
+option document_output_sources : bool = true
+ -- "copy generated sources into document output directory"
option document_variants : string = "document"
-- "alternative document variants (separated by colons)"
option document_tags : string = ""
--- a/src/Doc/Sledgehammer/document/root.tex Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Doc/Sledgehammer/document/root.tex Sun Nov 15 13:06:41 2020 +0000
@@ -47,6 +47,8 @@
\renewcommand\_{\hbox{\textunderscore\kern-.05ex}}
+\hyphenation{Isa-belle super-posi-tion zipper-posi-tion}
+
\begin{document}
%%% TYPESETTING
@@ -103,13 +105,13 @@
historical.}
%
The supported ATPs include agsyHOL \cite{agsyHOL}, Alt-Ergo \cite{alt-ergo}, E
-\cite{schulz-2002}, iProver \cite{korovin-2009}, LEO-II \cite{leo2}, Leo-III
+\cite{schulz-2019}, iProver \cite{korovin-2009}, LEO-II \cite{leo2}, Leo-III
\cite{leo3}, Satallax \cite{satallax}, SPASS \cite{weidenbach-et-al-2009},
Vampire \cite{riazanov-voronkov-2002}, Waldmeister \cite{waldmeister}, and
Zipperposition \cite{cruanes-2014}. The ATPs are run either locally or remotely
via the System\-On\-TPTP web service \cite{sutcliffe-2000}. The supported SMT
solvers are CVC3 \cite{cvc3}, CVC4 \cite{cvc4}, veriT \cite{bouton-et-al-2009},
-and Z3 \cite{z3}. These are always run locally.
+and Z3 \cite{de-moura-2008}. These are always run locally.
The problem passed to the external provers (or solvers) consists of your current
goal together with a heuristic selection of hundreds of facts (theorems) from the
@@ -129,15 +131,15 @@
\newbox\boxA
\setbox\boxA=\hbox{\texttt{NOSPAM}}
-\newcommand\authoremail{\texttt{blan{\color{white}NOSPAM}\kern-\wd\boxA{}chette@\allowbreak
-in.\allowbreak tum.\allowbreak de}}
+\newcommand\authoremail{\texttt{jasmin.blan{\color{white}NOSPAM}\kern-\wd\boxA{}chette@\allowbreak
+google.\allowbreak com}}
To run Sledgehammer, you must make sure that the theory \textit{Sledgehammer} is
imported---this is rarely a problem in practice since it is part of
-\textit{Main}. Examples of Sledgehammer use can be found in Isabelle's
-\texttt{src/HOL/Metis\_Examples} directory.
-Comments and bug reports concerning Sledgehammer or this manual should be
-directed to the author at \authoremail.
+\textit{Main}. Examples of Sledgehammer use can be found in the
+\texttt{src/HOL/Metis\_Examples} directory. Comments and bug reports
+concerning Sledgehammer or this manual should be directed to the author at
+\authoremail.
\section{Installation}
@@ -158,21 +160,21 @@
\begin{sloppy}
\begin{enum}
\item[\labelitemi] If you installed an official Isabelle package, it should
-already include properly setup executables for CVC4, E, SPASS, Vampire, and Z3,
-ready to use. To use Vampire, you must confirm that you are a noncommercial
-user, as indicated by the message that is displayed when Sledgehammer is
-invoked the first time.
+already include properly set up executables for CVC4, E, SPASS, Vampire, veriT,
+and Z3, ready to use. To use Vampire, you must confirm that you are a
+noncommercial user, as indicated by the message that is displayed when
+Sledgehammer is invoked the first time.
\item[\labelitemi] Alternatively, you can download the Isabelle-aware CVC3,
-CVC4, E, SPASS, Vampire, and Z3 binary packages from \download. Extract the
-archives, then add a line to your \texttt{\$ISABELLE\_HOME\_USER\slash etc\slash
-components}%
+CVC4, E, SPASS, Vampire, veriT, and Z3 binary packages from \download. Extract
+the archives, then add a line to your \texttt{\$ISABELLE\_HOME\_USER\slash
+etc\slash components}%
\footnote{The variable \texttt{\$ISABELLE\_HOME\_USER} is set by Isabelle at
startup. Its value can be retrieved by executing \texttt{isabelle}
\texttt{getenv} \texttt{ISABELLE\_HOME\_USER} on the command line.}
-file with the absolute path to CVC3, CVC4, E, SPASS, Vampire, or Z3. For
-example, if the \texttt{components} file does not exist yet and you extracted
-SPASS to \texttt{/usr/local/spass-3.8ds}, create it with the single line
+file with the absolute path to the prover. For example, if the
+\texttt{components} file does not exist yet and you extracted SPASS to
+\texttt{/usr/local/spass-3.8ds}, create it with the single line
\prew
\texttt{/usr/local/spass-3.8ds}
@@ -201,7 +203,7 @@
variable \texttt{CVC3\_\allowbreak SOLVER}, \texttt{CVC4\_\allowbreak SOLVER},
\texttt{VERIT\_\allowbreak SOLVER}, or \texttt{Z3\_SOLVER} to the complete path
of the executable, \emph{including the file name}. Sledgehammer has been tested
-with CVC3 2.2 and 2.4.1, CVC4 1.5-prerelease, veriT smtcomp2019, and Z3 4.3.2.
+with CVC3 2.2 and 2.4.1, CVC4 1.5-prerelease, veriT 2020.10-rmx, and Z3 4.3.2.
Since Z3's output format is somewhat unstable, other versions of the solver
might not work well with Sledgehammer. Ideally, also set
\texttt{CVC3\_VERSION}, \texttt{CVC4\_VERSION}, \texttt{VERIT\_VERSION}, or
@@ -219,7 +221,7 @@
(\texttt{libwww-perl}) installed. If you must use a proxy server to access the
Internet, set the \texttt{http\_proxy} environment variable to the proxy, either
in the environment in which Isabelle is launched or in your
-\texttt{\$ISABELLE\_HOME\_USER/etc/settings} file. Here are a few
+\texttt{\$ISABELLE\_HOME\_USER/etc/\allowbreak settings} file. Here are a few
examples:
\prew
@@ -245,8 +247,8 @@
\postw
Instead of issuing the \textbf{sledgehammer} command, you can also use the
-Sledgehammer panel in Isabelle/jEdit. Sledgehammer produces the following output
-after a few seconds:
+Sledgehammer panel in Isabelle/jEdit. Sledgehammer might produce something like
+the following output after a few seconds:
\prew
\slshape
@@ -265,9 +267,9 @@
provers are installed and how many processor cores are available, some of the
provers might be missing or present with a \textit{remote\_} prefix.
-For each successful prover, Sledgehammer gives a one-line \textit{metis} or
-\textit{smt} method call. Rough timings are shown in parentheses, indicating how
-fast the call is. You can click the proof to insert it into the theory text.
+For each successful prover, Sledgehammer gives a one-line Isabelle proof. Rough
+timings are shown in parentheses, indicating how fast the call is. You can
+click the proof to insert it into the theory text.
In addition, you can ask Sledgehammer for an Isar text proof by enabling the
\textit{isar\_proofs} option (\S\ref{output-format}):
@@ -277,8 +279,8 @@
\postw
When Isar proof construction is successful, it can yield proofs that are more
-readable and also faster than the \textit{metis} or \textit{smt} one-line
-proofs. This feature is experimental and is only available for ATPs.
+readable and also faster than \textit{metis} or \textit{smt} one-line
+proofs. This feature is experimental.
\section{Hints}
@@ -360,16 +362,16 @@
\begin{enum}
\item[\labelitemi]
-The traditional relevance filter, called \emph{MePo}
-(\underline{Me}ng--\underline{Pau}lson), assigns a score to every available fact
-(lemma, theorem, definition, or axiom) based upon how many constants that fact
-shares with the conjecture. This process iterates to include facts relevant to
-those just accepted. The constants are weighted to give unusual ones greater
-significance. MePo copes best when the conjecture contains some unusual
-constants; if all the constants are common, it is unable to discriminate among
-the hundreds of facts that are picked up. The filter is also memoryless: It has
-no information about how many times a particular fact has been used in a proof,
-and it cannot learn.
+The traditional relevance filter, \emph{MePo}
+(\underline{Me}ng--\underline{Pau}lson), assigns a score to every available
+fact (lemma, theorem, definition, or axiom) based upon how many constants that
+fact shares with the conjecture. This process iterates to include facts
+relevant to those just accepted. The constants are weighted to give unusual
+ones greater significance. MePo copes best when the conjecture contains some
+unusual constants; if all the constants are common, it is unable to
+discriminate among the hundreds of facts that are picked up. The filter is also
+memoryless: It has no information about how many times a particular fact has
+been used in a proof, and it cannot learn.
\item[\labelitemi]
An alternative to MePo is \emph{MaSh} (\underline{Ma}chine Learner for
@@ -438,32 +440,7 @@
strongly encouraged to report this to the author at \authoremail.
-\point{How can I tell whether a suggested proof is sound?}
-
-Earlier versions of Sledgehammer often suggested unsound proofs---either proofs
-of nontheorems or simply proofs that rely on type-unsound inferences. This
-is a thing of the past, unless you explicitly specify an unsound encoding
-using \textit{type\_enc} (\S\ref{problem-encoding}).
-%
-Officially, the only form of ``unsoundness'' that lurks in the sound
-encodings is related to missing characteristic theorems of datatypes. For
-example,
-
-\prew
-\textbf{lemma}~``$\exists \mathit{xs}.\; \mathit{xs} \neq []$'' \\
-\textbf{sledgehammer} ()
-\postw
-
-suggests an argumentless \textit{metis} call that fails. However, the conjecture
-does actually hold, and the \textit{metis} call can be repaired by adding
-\textit{list.distinct}.
-%
-We hope to address this problem in a future version of Isabelle. In the
-meantime, you can avoid it by passing the \textit{strict} option
-(\S\ref{problem-encoding}).
-
-
-\point{What are the \textit{full\_types}, \textit{no\_types}, and
+\point{What are the \textit{full\_types}, \textit{no\_types}, and \\
\textit{mono\_tags} arguments to Metis?}
The \textit{metis}~(\textit{full\_types}) proof method
@@ -471,15 +448,10 @@
versions of Metis. It is somewhat slower than \textit{metis}, but the proof
search is fully typed, and it also includes more powerful rules such as the
axiom ``$x = \const{True} \mathrel{\lor} x = \const{False}$'' for reasoning in
-higher-order places (e.g., in set comprehensions). The method is automatically
-tried as a fallback when \textit{metis} fails, and it is sometimes
-generated by Sledgehammer instead of \textit{metis} if the proof obviously
-requires type information or if \textit{metis} failed when Sledgehammer
-preplayed the proof. (By default, Sledgehammer tries to run \textit{metis} with
-various sets of option for up to 1~second each time to ensure that the generated
-one-line proofs actually work and to display timing information. This can be
-configured using the \textit{preplay\_timeout} and \textit{dont\_preplay}
-options (\S\ref{timeouts}).)
+higher-order places (e.g., in set comprehensions). The method is tried as a
+fallback when \textit{metis} fails, and it is sometimes generated by
+Sledgehammer instead of \textit{metis} if the proof obviously requires type
+information or if \textit{metis} failed when Sledgehammer preplayed the proof.
%
At the other end of the soundness spectrum, \textit{metis} (\textit{no\_types})
uses no type information at all during the proof search, which is more efficient
@@ -499,29 +471,25 @@
\textit{full\_types} option to \textit{metis} directly.
-\point{And what are the \textit{lifting} and \textit{hide\_lams} arguments
+\point{And what are the \textit{lifting} and \textit{hide\_lams} \\ arguments
to Metis?}
Orthogonally to the encoding of types, it is important to choose an appropriate
-translation of $\lambda$-abstractions. Metis supports three translation schemes,
-in decreasing order of power: Curry combinators (the default),
+translation of $\lambda$-abstractions. Metis supports three translation
+schemes, in decreasing order of power: Curry combinators (the default),
$\lambda$-lifting, and a ``hiding'' scheme that disables all reasoning under
$\lambda$-abstractions. The more powerful schemes also give the automatic
-provers more rope to hang themselves. See the \textit{lam\_trans} option (\S\ref{problem-encoding}) for details.
+provers more rope to hang themselves. See the \textit{lam\_trans} option
+(\S\ref{problem-encoding}) for details.
\point{Are the generated proofs minimal?}
Automatic provers frequently use many more facts than are necessary.
-Sledgehammer includes a minimization tool that takes a set of facts returned by
+Sledgehammer includes a proof minimization tool that takes a set of facts returned by
a given prover and repeatedly calls a prover or proof method with subsets of
those facts to find a minimal set. Reducing the number of facts typically helps
-reconstruction, while also removing superfluous clutter from the proof scripts.
-
-In earlier versions of Sledgehammer, generated proofs were systematically
-accompanied by a suggestion to invoke the minimization tool. This step is now
-performed by default but can be disabled using the \textit{minimize} option
-(\S\ref{mode-of-operation}).
+reconstruction, while decluttering the proof scripts.
\point{A strange error occurred---what should I do?}
@@ -541,9 +509,9 @@
\point{Why are there so many options?}
-Sledgehammer's philosophy should work out of the box, without user guidance.
-Many of the options are meant to be used mostly by the Sledgehammer developers
-for experiments. Of course, feel free to try them out if you are so inclined.
+Sledgehammer's philosophy is that it should work out of the box, without user
+guidance. Most of the options are meant to be used by the Sledgehammer
+developers for experiments.
\section{Command Syntax}
@@ -758,7 +726,7 @@
1.5-prerelease.
\item[\labelitemi] \textbf{\textit{e}:} E is a first-order resolution prover
-developed by Stephan Schulz \cite{schulz-2002}. To use E, set the environment
+developed by Stephan Schulz \cite{schulz-2019}. To use E, set the environment
variable \texttt{E\_HOME} to the directory that contains the \texttt{eproof}
executable and \texttt{E\_VERSION} to the version number (e.g., ``1.8''), or
install the prebuilt E package from \download. Sledgehammer has been tested with
@@ -775,27 +743,27 @@
higher-order prover developed by Christoph Benzm\"uller et al.\ \cite{leo2},
with support for the TPTP typed higher-order syntax (TH0). To use LEO-II, set
the environment variable \texttt{LEO2\_HOME} to the directory that contains the
-\texttt{leo} executable. Sledgehammer requires version 1.3.4 or above.
+\texttt{leo} executable. Sledgehammer has been tested with version 1.3.4.
\item[\labelitemi] \textbf{\textit{leo3}:} Leo-III is an automatic
higher-order prover developed by Alexander Steen, Max Wisniewski, Christoph
Benzm\"uller et al.\ \cite{leo3},
with support for the TPTP typed higher-order syntax (TH0). To use Leo-III, set
the environment variable \texttt{LEO3\_HOME} to the directory that contains the
-\texttt{leo3} executable. Sledgehammer requires version 1.1 or above.
+\texttt{leo3} executable. Sledgehammer has been tested with version 1.1.
\item[\labelitemi] \textbf{\textit{satallax}:} Satallax is an automatic
higher-order prover developed by Chad Brown et al.\ \cite{satallax}, with
support for the TPTP typed higher-order syntax (TH0). To use Satallax, set the
environment variable \texttt{SATALLAX\_HOME} to the directory that contains the
-\texttt{satallax} executable. Sledgehammer requires version 2.2 or above.
+\texttt{satallax} executable. Sledgehammer has been tested with version 2.2.
\item[\labelitemi] \textbf{\textit{spass}:} SPASS is a first-order resolution
prover developed by Christoph Weidenbach et al.\ \cite{weidenbach-et-al-2009}.
To use SPASS, set the environment variable \texttt{SPASS\_HOME} to the directory
that contains the \texttt{SPASS} executable and \texttt{SPASS\_VERSION} to the
version number (e.g., ``3.8ds''), or install the prebuilt SPASS package from
-\download. Sledgehammer requires version 3.8ds or above.
+\download. Sledgehammer has been tested with version 3.8ds.
\item[\labelitemi] \textbf{\textit{vampire}:} Vampire is a first-order
resolution prover developed by Andrei Voronkov and his colleagues
@@ -807,29 +775,29 @@
\item[\labelitemi] \textbf{\textit{verit}:} veriT \cite{bouton-et-al-2009} is an
SMT solver developed by David D\'eharbe, Pascal Fontaine, and their colleagues.
-It is specifically designed to produce detailed proofs for reconstruction in
-proof assistants. To use veriT, set the environment variable
-\texttt{VERIT\_SOLVER} to the complete path of the executable, including the
-file name. Sledgehammer has been tested with version smtcomp2019.
+It is designed to produce detailed proofs for reconstruction in proof
+assistants. To use veriT, set the environment variable \texttt{VERIT\_SOLVER}
+to the complete path of the executable, including the file name. Sledgehammer
+has been tested with version 2020.10-rmx.
\item[\labelitemi] \textbf{\textit{z3}:} Z3 is an SMT solver developed at
-Microsoft Research \cite{z3}. To use Z3, set the environment variable
+Microsoft Research \cite{de-moura-2008}. To use Z3, set the environment variable
\texttt{Z3\_SOLVER} to the complete path of the executable, including the
file name. Sledgehammer has been tested with a pre-release version of 4.4.0.
\item[\labelitemi] \textbf{\textit{z3\_tptp}:} This version of Z3 pretends to
-be an ATP, exploiting Z3's support for the TPTP typed first-order format
-(TF0). It is included for experimental purposes. It requires version 4.3.1 of
-Z3 or above. To use it, set the environment variable \texttt{Z3\_TPTP\_HOME}
+be an ATP, exploiting Z3's support for the TPTP typed first-order format (TF0).
+It is included for experimental purposes. Sledgehammer has been tested with
+version 4.3.1. To use it, set the environment variable \texttt{Z3\_TPTP\_HOME}
to the directory that contains the \texttt{z3\_tptp} executable.
\item[\labelitemi] \textbf{\textit{zipperposition}:} Zipperposition
\cite{cruanes-2014} is a higher-order superposition prover developed by Simon
-Cruanes and colleagues. To use Zipperposition, set the environment variable
-\texttt{ZIPPERPOSITION\_HOME} to the directory that contains the
-\texttt{zipperposition} executable and \texttt{ZIPPERPOSITION\_VERSION} to the
-version number (e.g., ``2.0.1''). Sledgehammer has been tested with version
-2.0.1.
+Cruanes, Petar Vukmirovi\'c, and colleagues. To use Zipperposition, set the
+environment variable \texttt{ZIPPERPOSITION\_HOME} to the directory that
+contains the \texttt{zipperposition} executable and
+\texttt{ZIPPERPOSITION\_VERSION} to the version number (e.g., ``2.0.1'').
+Sledgehammer has been tested with version 2.0.1.
\end{enum}
\end{sloppy}
@@ -892,8 +860,8 @@
{\small See also \textit{verbose} (\S\ref{output-format}).}
\optrue{minimize}{dont\_minimize}
-Specifies whether the minimization tool should be invoked automatically after
-proof search.
+Specifies whether the proof minimization tool should be invoked automatically
+after proof search.
\nopagebreak
{\small See also \textit{preplay\_timeout} (\S\ref{timeouts})
@@ -982,7 +950,7 @@
are relevant and 1 only theorems that refer to previously seen constants.
\optrue{learn}{dont\_learn}
-Specifies whether MaSh should be run automatically by Sledgehammer to learn the
+Specifies whether Sledgehammer invocations should run MaSh to learn the
available theories (and hence provide more accurate results). Learning takes
place only if MaSh is enabled.
@@ -1098,10 +1066,10 @@
\item[\labelitemi]
\textbf{%
\textit{mono\_guards}, \textit{mono\_tags} (sound);
-\textit{mono\_args} (unsound):} \\
+\textit{mono\_args} \\ (unsound):} \\
Similar to
\textit{raw\_mono\_guards}, \textit{raw\_mono\_tags}, and
-\textit{raw\_mono\_args}, respectively but types are mangled in constant names
+\textit{raw\_mono\_\allowbreak args}, respectively, but types are mangled in constant names
instead of being supplied as ground term arguments. The binary predicate
$\const{g}(\tau, t)$ becomes a unary predicate
$\const{g\_}\tau(t)$, and the binary function
@@ -1112,14 +1080,23 @@
first-order types if the prover supports the TF0, TF1, TH0, or TH1 syntax;
otherwise, falls back on \textit{mono\_guards}. The problem is monomorphized.
-\item[\labelitemi] \textbf{\textit{mono\_native\_higher} (sound):} Exploits
-native higher-order types if the prover supports the TH0 syntax; otherwise,
-falls back on \textit{mono\_native} or \textit{mono\_guards}. The problem is
-monomorphized.
+\item[\labelitemi] \textbf{\textit{mono\_native\_fool} (sound):} Exploits native
+first-order types, including Booleans, if the prover supports the TFX0, TFX1,
+TH0, or TH1 syntax; otherwise, falls back on \textit{mono\_native}. The problem
+is monomorphized.
-\item[\labelitemi] \textbf{\textit{poly\_native} (sound):} Exploits native
-first-order polymorphic types if the prover supports the TF1 or TH1 syntax;
-otherwise, falls back on \textit{mono\_native}.
+\item[\labelitemi] \textbf{\textit{mono\_native\_higher},
+\textit{mono\_native\_higher\_fool} \\ (sound):} Exploits native higher-order
+types (including Booleans for the variant ending with ``\textit{\_fool}'') if
+the prover supports the TH0 syntax; otherwise, falls back on
+\textit{mono\_native} or \textit{mono\_native\_fool}. The problem is
+monomorphized.
+
+\item[\labelitemi] \textbf{\textit{poly\_native}, \textit{poly\_native\_fool},
+\textit{poly\_native\_higher}, \\ \textit{poly\_native\_higher\_fool} (sound):}
+Exploits native polymorphic types if the prover supports the TF1,
+TFX1, or TH1 syntax; otherwise, falls back on \textit{mono\_native},
+\textit{mono\_native\_fool}, \textit{mono\_native\_higher}, or
+\textit{mono\_native\_higher\_fool}.
\item[\labelitemi]
\textbf{%
@@ -1326,9 +1303,8 @@
sledgehammer[prover=e,prover_timeout=10] Huffman.thy
\end{verbatim}
-This command specifies Sledgehammer as the action, using E as the prover with a
-timeout of 10 seconds. The results are written to the file
-\texttt{output/Huffman.log}.
+This command specifies Sledgehammer as the action, using the E prover with a
+timeout of 10 seconds. The results are written to \texttt{output/Huffman.log}.
\subsection{Example of Benchmarking Another Tool}
@@ -1337,7 +1313,7 @@
\end{verbatim}
This command specifies the \textbf{try0} command as the action, with a timeout
-of 10 seconds. The results are written to the file \texttt{output/Huffman.log}.
+of 10 seconds. The results are written to \texttt{output/Huffman.log}.
\subsection{Example of Generating TPTP Files}
--- a/src/Doc/System/Presentation.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Doc/System/Presentation.thy Sun Nov 15 13:06:41 2020 +0000
@@ -15,37 +15,15 @@
HOL, and HOL from Pure), and application sessions further on in the
hierarchy.
- The tools @{tool_ref mkroot} and @{tool_ref build} provide the primary means
- for managing Isabelle sessions, including proper setup for presentation;
- @{tool build} tells the Isabelle process to run any additional stages
- required for document preparation, notably the @{tool_ref document} and
- @{tool_ref latex}. The complete tool chain for managing batch-mode Isabelle
- sessions is illustrated in \figref{fig:session-tools}.
-
- \begin{figure}[htbp]
- \begin{center}
- \begin{tabular}{lp{0.6\textwidth}}
-
- @{tool_ref mkroot} & invoked once by the user to initialize the
- session \<^verbatim>\<open>ROOT\<close> with optional \<^verbatim>\<open>document\<close>
- directory; \\
+ The command-line tools @{tool_ref mkroot} and @{tool_ref build} provide the
+ primary means for managing Isabelle sessions, including options for
+ presentation: ``\<^verbatim>\<open>document=pdf\<close>'' generates PDF output from the theory
+ session, and ``\<^verbatim>\<open>document_output=dir\<close>'' emits a copy of the document sources
+ with the PDF into the given directory (relative to the session directory).
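+
+ For example, a batch build with document preparation might read as
+ follows (a sketch; the session name FOL stands for any session):
+
+ @{verbatim [display] \<open>isabelle build -o document=pdf -o document_output=output FOL\<close>}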
- @{tool_ref build} & invoked repeatedly by the user to keep
- session output up-to-date (HTML, documents etc.); \\
-
- @{tool_ref process} & run through @{tool_ref build}; \\
-
- @{tool_ref document} & run by the Isabelle process if document
- preparation is enabled; \\
-
- @{tool_ref latex} & universal {\LaTeX} tool wrapper invoked
- multiple times by @{tool_ref document}; also useful for manual
- experiments; \\
-
- \end{tabular}
- \caption{The tool chain of Isabelle session presentation} \label{fig:session-tools}
- \end{center}
- \end{figure}
+ Alternatively, @{tool_ref document} may be used to turn the generated
+ {\LaTeX} sources of a session (exports from its build database) into PDF,
+ using suitable invocations of @{tool_ref latex}.
\<close>
@@ -145,101 +123,56 @@
section \<open>Preparing Isabelle session documents \label{sec:tool-document}\<close>
text \<open>
- The @{tool_def document} tool prepares logic session documents, processing
- the sources as provided by the user and generated by Isabelle. Its usage is:
+ The @{tool_def document} tool prepares logic session documents. Its usage
+ is:
@{verbatim [display]
-\<open>Usage: isabelle document [OPTIONS]
+\<open>Usage: isabelle document [OPTIONS] SESSION
Options are:
- -d DIR document output directory (default "output/document")
- -n NAME document name (default "document")
- -t TAGS markup for tagged regions
-
- Prepare the theory session document in document directory, producing the
- specified output format.
-\<close>}
+ -O set option "document_output", relative to current directory
+ -V verbose latex
+ -d DIR include session directory
+ -n no build of session
+ -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME)
+ -v verbose build
- This tool is usually run automatically as part of the Isabelle build
- process, provided document preparation has been enabled via suitable
- options. It may be manually invoked on the generated browser information
- document output as well, e.g.\ in case of errors encountered in the batch
- run.
-
- \<^medskip>
- Option \<^verbatim>\<open>-d\<close> specifies an alternative document output directory. The default
- is \<^verbatim>\<open>output/document\<close> (derived from the document name). Note that the result
- will appear in the parent of this directory.
+ Prepare the theory document of a session.\<close>}
- \<^medskip>
- Option \<^verbatim>\<open>-n\<close> specifies the resulting document file name, the default is
- ``\<^verbatim>\<open>document\<close>'' (leading to ``\<^verbatim>\<open>document.pdf\<close>'').
+ Generated {\LaTeX} sources are taken from the session build database:
+ @{tool_ref build} is invoked beforehand to ensure that it is up-to-date, but
+ option \<^verbatim>\<open>-n\<close> suppresses that. Further files are generated on the spot,
+ notably essential Isabelle style files, and \<^verbatim>\<open>session.tex\<close> to input all
+ theory sources from the session (excluding imports from other sessions).
- \<^medskip>
- The \<^verbatim>\<open>-t\<close> option tells {\LaTeX} how to interpret tagged Isabelle command
- regions. Tags are specified as a comma separated list of modifier/name
- pairs: ``\<^verbatim>\<open>+\<close>\<open>foo\<close>'' (or just ``\<open>foo\<close>'') means to keep, ``\<^verbatim>\<open>-\<close>\<open>foo\<close>'' to
- drop, and ``\<^verbatim>\<open>/\<close>\<open>foo\<close>'' to fold text tagged as \<open>foo\<close>. The builtin default is
- equivalent to the tag specification
- ``\<^verbatim>\<open>+document,+theory,+proof,+ML,+visible,-invisible,+important,+unimportant\<close>'';
- see also the {\LaTeX} macros \<^verbatim>\<open>\isakeeptag\<close>, \<^verbatim>\<open>\isadroptag\<close>, and
- \<^verbatim>\<open>\isafoldtag\<close>, in \<^file>\<open>~~/lib/texinputs/isabelle.sty\<close>.
+ \<^medskip> Options \<^verbatim>\<open>-d\<close>, \<^verbatim>\<open>-o\<close>, \<^verbatim>\<open>-v\<close> have the same meaning as for @{tool build}.
- \<^medskip>
- Document preparation requires a \<^verbatim>\<open>document\<close> directory within the session
- sources. This directory is supposed to contain all the files needed to
- produce the final document --- apart from the actual theories which are
- generated by Isabelle.
+ \<^medskip> Option \<^verbatim>\<open>-V\<close> prints full output of {\LaTeX} tools.
- \<^medskip>
- For most practical purposes, @{tool document} is smart enough to create any
- of the specified output formats, taking \<^verbatim>\<open>root.tex\<close> supplied by the user as
- a starting point. This even includes multiple runs of {\LaTeX} to
- accommodate references and bibliographies (the latter assumes \<^verbatim>\<open>root.bib\<close>
- within the same directory).
+ \<^medskip> Option \<^verbatim>\<open>-O\<close>~\<open>dir\<close> specifies the @{system_option document_output} option
+ relative to the current directory, while \<^verbatim>\<open>-o document_output=\<close>\<open>dir\<close> is
+ relative to the session directory.
- In more complex situations, a separate \<^verbatim>\<open>build\<close> script for the document
- sources may be given. It is invoked with command-line arguments for the
- document format and the document variant name. The script needs to produce
- corresponding output files, e.g.\ \<^verbatim>\<open>root.pdf\<close> for target format \<^verbatim>\<open>pdf\<close> (and
- default variants). The main work can be again delegated to @{tool latex},
- but it is also possible to harvest generated {\LaTeX} sources and copy them
- elsewhere.
-
- \<^medskip>
- When running the session, Isabelle copies the content of the original
- \<^verbatim>\<open>document\<close> directory into its proper place within @{setting
- ISABELLE_BROWSER_INFO}, according to the session path and document variant.
- Then, for any processed theory \<open>A\<close> some {\LaTeX} source is generated and put
- there as \<open>A\<close>\<^verbatim>\<open>.tex\<close>. Furthermore, a list of all generated theory files is
- put into \<^verbatim>\<open>session.tex\<close>. Typically, the root {\LaTeX} file provided by the
- user would include \<^verbatim>\<open>session.tex\<close> to get a document containing all the
- theories.
+ For example, for output directory ``\<^verbatim>\<open>output\<close>'' and the default document
+ variant ``\<^verbatim>\<open>document\<close>'', the generated document sources are placed into the
+ subdirectory \<^verbatim>\<open>output/document/\<close> and the resulting PDF into
+ \<^verbatim>\<open>output/document.pdf\<close>.
- The {\LaTeX} versions of the theories require some macros defined in
- \<^file>\<open>~~/lib/texinputs/isabelle.sty\<close>. Doing \<^verbatim>\<open>\usepackage{isabelle}\<close> in
- \<^verbatim>\<open>root.tex\<close> should be fine; the underlying @{tool latex} already includes an
- appropriate path specification for {\TeX} inputs.
-
- If the text contains any references to Isabelle symbols (such as \<^verbatim>\<open>\<forall>\<close>) then
- \<^verbatim>\<open>isabellesym.sty\<close> should be included as well. This package contains a
- standard set of {\LaTeX} macro definitions \<^verbatim>\<open>\isasym\<close>\<open>foo\<close> corresponding to
- \<^verbatim>\<open>\\<close>\<^verbatim>\<open><\<close>\<open>foo\<close>\<^verbatim>\<open>>\<close>, see @{cite "isabelle-implementation"} for a complete list
- of predefined Isabelle symbols. Users may invent further symbols as well,
- just by providing {\LaTeX} macros in a similar fashion as in
- \<^file>\<open>~~/lib/texinputs/isabellesym.sty\<close> of the Isabelle distribution.
+ \<^medskip> Isabelle is usually smart enough to create the PDF from the given
+ \<^verbatim>\<open>root.tex\<close> and optional \<^verbatim>\<open>root.bib\<close> (bibliography) and \<^verbatim>\<open>root.idx\<close> (index)
+ using standard {\LaTeX} tools. Alternatively, \isakeyword{document\_files}
+ in the session \<^verbatim>\<open>ROOT\<close> may include an executable \<^verbatim>\<open>build\<close> script to take
+ care of that. It is invoked with command-line arguments for the document
+ format (\<^verbatim>\<open>pdf\<close>) and the document variant name. The script needs to produce
+ corresponding output files, e.g.\ \<^verbatim>\<open>root.pdf\<close> for default document variants
+ (the main work can be delegated to @{tool latex}). \<close>
- For proper setup of PDF documents (with hyperlinks and bookmarks),
- we recommend to include \<^file>\<open>~~/lib/texinputs/pdfsetup.sty\<close> as well.
+subsubsection \<open>Examples\<close>
- \<^medskip>
- As a final step of Isabelle document preparation, @{tool document} is run on
- the generated \<^verbatim>\<open>output/document\<close> directory. Thus the actual output document
- is built and installed in its proper place. The generated sources are
- deleted after successful run of {\LaTeX} and friends.
+text \<open>
+ Produce the document from session \<^verbatim>\<open>FOL\<close> with full verbosity, and a copy in
+ the current directory (subdirectory \<^verbatim>\<open>document\<close> and file \<^verbatim>\<open>document.pdf\<close>):
- Some care is needed if the document output location is configured
- differently, say within a directory whose content is still required
- afterwards!
+ @{verbatim [display] \<open>isabelle document -v -V -O. FOL\<close>}
\<close>
--- a/src/Doc/System/Sessions.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Doc/System/Sessions.thy Sun Nov 15 13:06:41 2020 +0000
@@ -55,7 +55,8 @@
@{syntax_def session_entry}: @'session' @{syntax system_name} groups? dir? '=' \<newline>
(@{syntax system_name} '+')? description? options? \<newline>
sessions? directories? (theories*) \<newline>
- (document_files*) (export_files*)
+ (document_theories?) (document_files*) \<newline>
+ (export_files*)
;
groups: '(' (@{syntax name} +) ')'
;
@@ -77,6 +78,8 @@
;
theory_entry: @{syntax system_name} ('(' @'global' ')')?
;
+ document_theories: @'document_theories' (@{syntax name}+)
+ ;
document_files: @'document_files' ('(' dir ')')? (@{syntax embedded}+)
;
export_files: @'export_files' ('(' dir ')')? ('[' nat ']')? \<newline>
@@ -145,6 +148,13 @@
\<open>HOLCF\<close>, \<open>IFOL\<close>, \<open>FOL\<close>, \<open>ZF\<close>, \<open>ZFC\<close> etc. Regular Isabelle applications
should not claim any global theory names.
+ \<^descr> \isakeyword{document_theories}~\<open>names\<close> specifies theories from other
+ sessions that should be included in the generated document source directory.
+ These theories need to be explicit imports in the current session, or
+ implicit imports from the underlying hierarchy of parent sessions. The
+ generated \<^verbatim>\<open>session.tex\<close> file is not affected: the session's {\LaTeX} setup
+ needs to \<^verbatim>\<open>\input{\<close>\<open>\<dots>\<close>\<^verbatim>\<open>}\<close> generated \<^verbatim>\<open>.tex\<close> files separately.
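+
+ A minimal \<^verbatim>\<open>ROOT\<close> sketch (session and theory names are hypothetical):
+
+ @{verbatim [display]
+\<open>session Example = HOL +
+  theories A
+  document_theories HOL.Set
+  document_files "root.tex"\<close>}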
+
\<^descr> \isakeyword{document_files}~\<open>(\<close>\isakeyword{in}~\<open>base_dir) files\<close> lists
source files for document preparation, typically \<^verbatim>\<open>.tex\<close> and \<^verbatim>\<open>.sty\<close> for
{\LaTeX}. Only these explicitly given files are copied from the base
@@ -190,10 +200,9 @@
\<^item> @{system_option_def "browser_info"} controls output of HTML browser
info, see also \secref{sec:info}.
- \<^item> @{system_option_def "document"} specifies the document output format,
- see @{tool document} option \<^verbatim>\<open>-o\<close> in \secref{sec:tool-document}. In
- practice, the most relevant values are \<^verbatim>\<open>document=false\<close> or
- \<^verbatim>\<open>document=pdf\<close>.
+ \<^item> @{system_option_def "document"} controls document output for a
+ particular session or theory; \<^verbatim>\<open>document=pdf\<close> means enabled,
+ \<^verbatim>\<open>document=false\<close> means disabled (especially for particular theories).
\<^item> @{system_option_def "document_output"} specifies an alternative
directory for generated output of the document preparation system; the
@@ -203,10 +212,19 @@
document.
\<^item> @{system_option_def "document_variants"} specifies document variants as
- a colon-separated list of \<open>name=tags\<close> entries, corresponding to @{tool
- document} options \<^verbatim>\<open>-n\<close> and \<^verbatim>\<open>-t\<close>.
+ a colon-separated list of \<open>name=tags\<close> entries. The default name is
+ \<^verbatim>\<open>document\<close>, without additional tags.
- For example, \<^verbatim>\<open>document_variants=document:outline=/proof,/ML\<close> indicates
+ Tags are specified as a comma separated list of modifier/name pairs and
+ tell {\LaTeX} how to interpret certain Isabelle command regions:
+ ``\<^verbatim>\<open>+\<close>\<open>foo\<close>'' (or just ``\<open>foo\<close>'') means to keep, ``\<^verbatim>\<open>-\<close>\<open>foo\<close>'' to drop,
+ and ``\<^verbatim>\<open>/\<close>\<open>foo\<close>'' to fold text tagged as \<open>foo\<close>. The builtin default is
+ equivalent to the tag specification
+ ``\<^verbatim>\<open>+document,+theory,+proof,+ML,+visible,-invisible,+important,+unimportant\<close>'';
+ see also the {\LaTeX} macros \<^verbatim>\<open>\isakeeptag\<close>, \<^verbatim>\<open>\isadroptag\<close>, and
+ \<^verbatim>\<open>\isafoldtag\<close>, in \<^file>\<open>~~/lib/texinputs/isabelle.sty\<close>.
+
+ In contrast, \<^verbatim>\<open>document_variants=document:outline=/proof,/ML\<close> indicates
two documents: the one called \<^verbatim>\<open>document\<close> with default tags, and the other
called \<^verbatim>\<open>outline\<close> where proofs and ML sections are folded.
--- a/src/Doc/manual.bib Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Doc/manual.bib Sun Nov 15 13:06:41 2020 +0000
@@ -587,6 +587,25 @@
publisher = CUP,
year = 1990}
+@inproceedings{de-moura-2008,
+ author = {Leonardo de Moura and
+ Nikolaj Bj{\o}rner},
+ editor = {C. R. Ramakrishnan and
+ Jakob Rehof},
+ title = {{Z3}: An Efficient {SMT} Solver},
+ booktitle = {Tools and Algorithms for the Construction and Analysis of Systems --- TACAS 2008},
+ series = {Lecture Notes in Computer Science},
+ volume = {4963},
+ pages = {337--340},
+ publisher = {Springer},
+ year = {2008},
+ url = {https://doi.org/10.1007/978-3-540-78800-3\_24},
+ doi = {10.1007/978-3-540-78800-3\_24},
+ timestamp = {Tue, 14 May 2019 10:00:53 +0200},
+ biburl = {https://dblp.org/rec/conf/tacas/MouraB08.bib},
+ bibsource = {dblp computer science bibliography, https://dblp.org}
+}
+
@Book{devlin79,
author = {Keith J. Devlin},
title = {Fundamentals of Contemporary Set Theory},
@@ -1107,10 +1126,23 @@
title = {The Objective Caml system -- Documentation and user's manual},
note = {\url{http://caml.inria.fr/pub/docs/manual-ocaml/}}}
-@misc{agsyHOL,
- author = "Fredrik Lindblad",
- title = "{agsyHOL}",
- note = "\url{https://github.com/frelindb/agsyHOL}"}
+@inproceedings{agsyHOL,
+ author = {Fredrik Lindblad},
+ editor = {St{\'{e}}phane Demri and
+ Deepak Kapur and
+ Christoph Weidenbach},
+ title = {A Focused Sequent Calculus for Higher-Order Logic},
+ booktitle = {Automated Reasoning --- IJCAR 2014},
+ series = {Lecture Notes in Computer Science},
+ volume = {8562},
+ pages = {61--75},
+ publisher = {Springer},
+ year = {2014},
+ url = {https://doi.org/10.1007/978-3-319-08587-6\_5},
+ doi = {10.1007/978-3-319-08587-6\_5},
+ timestamp = {Tue, 14 May 2019 10:00:39 +0200},
+ biburl = {https://dblp.org/rec/conf/cade/Lindblad14.bib},
+ bibsource = {dblp computer science bibliography, https://dblp.org}}
@incollection{lochbihler-2010,
title = "Coinductive",
@@ -1819,14 +1851,23 @@
number = 4
}
-@article{schulz-2002,
- author = "Stephan Schulz",
- title = "E---A Brainiac Theorem Prover",
- journal = "Journal of AI Communications",
- year = 2002,
- volume = 15,
- number ="2/3",
- pages = "111--126"}
+@inproceedings{schulz-2019,
+ author = {Stephan Schulz and
+ Simon Cruanes and
+ Petar Vukmirovi\'c},
+ editor = {Pascal Fontaine},
+ title = {Faster, Higher, Stronger: {E} 2.3},
+ booktitle = {Automated Deduction --- {CADE}-27},
+ series = {Lecture Notes in Computer Science},
+ volume = {11716},
+ pages = {495--507},
+ publisher = {Springer},
+ year = {2019},
+ url = {https://doi.org/10.1007/978-3-030-29436-6\_29},
+ doi = {10.1007/978-3-030-29436-6\_29},
+ timestamp = {Sat, 19 Oct 2019 20:28:03 +0200},
+ biburl = {https://dblp.org/rec/conf/cade/0001CV19.bib},
+ bibsource = {dblp computer science bibliography, https://dblp.org}}
@inproceedings{slind-tfl,
author = {Konrad Slind},
@@ -2019,10 +2060,28 @@
note = {\url{http://x-symbol.sourceforge.net}}
}
-@misc{weidenbach-et-al-2009,
- author = "Christoph Weidenbach and Dilyana Dimova and Arnaud Fietzke and Rohit Kumar and Martin Suda and Patrick Wischnewski",
- title = "{SPASS} Version 3.5",
- note = {\url{http://www.spass-prover.org/publications/spass.pdf}}}
+@inproceedings{weidenbach-et-al-2009,
+ author = {Christoph Weidenbach and
+ Dilyana Dimova and
+ Arnaud Fietzke and
+ Rohit Kumar and
+ Martin Suda and
+ Patrick Wischnewski},
+ editor = {Renate A. Schmidt},
+ title = {{SPASS} Version 3.5},
+ booktitle = {Automated Deduction --- {CADE}-22},
+ series = {Lecture Notes in Computer Science},
+ volume = {5663},
+ pages = {140--145},
+ publisher = {Springer},
+ year = {2009},
+ url = {https://doi.org/10.1007/978-3-642-02959-2\_10},
+ doi = {10.1007/978-3-642-02959-2\_10},
+ timestamp = {Tue, 14 May 2019 10:00:39 +0200},
+ biburl = {https://dblp.org/rec/conf/cade/WeidenbachDFKSW09.bib},
+ bibsource = {dblp computer science bibliography, https://dblp.org}
+}
@manual{isabelle-system,
author = {Makarius Wenzel},
@@ -2255,13 +2314,6 @@
title = {On the Implementation of an Extensible Declarative Proof Language},
crossref = {tphols99}}
-%Z
-
-@misc{z3,
- key = "Z3",
- title = "Z3: An Efficient {SMT} Solver",
- note = "\url{http://research.microsoft.com/en-us/um/redmond/projects/z3/}"}
-
% CROSS REFERENCES
--- a/src/HOL/Code_Evaluation.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Code_Evaluation.thy Sun Nov 15 13:06:41 2020 +0000
@@ -52,7 +52,7 @@
abbreviation valtermify :: "'a \<Rightarrow> 'a \<times> (unit \<Rightarrow> term)" where
"valtermify x \<equiv> (x, \<lambda>u. termify x)"
-locale term_syntax
+bundle term_syntax
begin
notation App (infixl "<\<cdot>>" 70)
@@ -60,11 +60,6 @@
end
-interpretation term_syntax .
-
-no_notation App (infixl "<\<cdot>>" 70)
- and valapp (infixl "{\<cdot>}" 70)
-
subsection \<open>Tools setup and evaluation\<close>
--- a/src/HOL/Data_Structures/Interval_Tree.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Data_Structures/Interval_Tree.thy Sun Nov 15 13:06:41 2020 +0000
@@ -78,7 +78,7 @@
fun inv_max_hi :: "('a::{linorder,order_bot}) ivl_tree \<Rightarrow> bool" where
"inv_max_hi Leaf \<longleftrightarrow> True" |
-"inv_max_hi (Node l (a, m) r) \<longleftrightarrow> (inv_max_hi l \<and> inv_max_hi r \<and> m = max3 a (max_hi l) (max_hi r))"
+"inv_max_hi (Node l (a, m) r) \<longleftrightarrow> (m = max3 a (max_hi l) (max_hi r) \<and> inv_max_hi l \<and> inv_max_hi r)"
lemma max_hi_is_max:
"inv_max_hi t \<Longrightarrow> a \<in> set_tree t \<Longrightarrow> high a \<le> max_hi t"
--- a/src/HOL/Data_Structures/RBT_Set.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Data_Structures/RBT_Set.thy Sun Nov 15 13:06:41 2020 +0000
@@ -108,7 +108,7 @@
fun invc :: "'a rbt \<Rightarrow> bool" where
"invc Leaf = True" |
"invc (Node l (a,c) r) =
- (invc l \<and> invc r \<and> (c = Red \<longrightarrow> color l = Black \<and> color r = Black))"
+ ((c = Red \<longrightarrow> color l = Black \<and> color r = Black) \<and> invc l \<and> invc r)"
text \<open>Weaker version:\<close>
abbreviation invc2 :: "'a rbt \<Rightarrow> bool" where
@@ -116,7 +116,7 @@
fun invh :: "'a rbt \<Rightarrow> bool" where
"invh Leaf = True" |
-"invh (Node l (x, c) r) = (invh l \<and> invh r \<and> bheight l = bheight r)"
+"invh (Node l (x, c) r) = (bheight l = bheight r \<and> invh l \<and> invh r)"
lemma invc2I: "invc t \<Longrightarrow> invc2 t"
by (cases t rule: tree2_cases) simp+
--- a/src/HOL/Data_Structures/Tree2.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Data_Structures/Tree2.thy Sun Nov 15 13:06:41 2020 +0000
@@ -21,11 +21,11 @@
fun set_tree :: "('a*'b) tree \<Rightarrow> 'a set" where
"set_tree Leaf = {}" |
-"set_tree (Node l (a,_) r) = Set.insert a (set_tree l \<union> set_tree r)"
+"set_tree (Node l (a,_) r) = {a} \<union> set_tree l \<union> set_tree r"
fun bst :: "('a::linorder*'b) tree \<Rightarrow> bool" where
"bst Leaf = True" |
-"bst (Node l (a, _) r) = (bst l \<and> bst r \<and> (\<forall>x \<in> set_tree l. x < a) \<and> (\<forall>x \<in> set_tree r. a < x))"
+"bst (Node l (a, _) r) = ((\<forall>x \<in> set_tree l. x < a) \<and> (\<forall>x \<in> set_tree r. a < x) \<and> bst l \<and> bst r)"
lemma finite_set_tree[simp]: "finite(set_tree t)"
by(induction t) auto
--- a/src/HOL/Data_Structures/Tree23.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Data_Structures/Tree23.thy Sun Nov 15 13:06:41 2020 +0000
@@ -36,9 +36,9 @@
fun complete :: "'a tree23 \<Rightarrow> bool" where
"complete Leaf = True" |
-"complete (Node2 l _ r) = (complete l & complete r & height l = height r)" |
+"complete (Node2 l _ r) = (height l = height r \<and> complete l & complete r)" |
"complete (Node3 l _ m _ r) =
- (complete l & complete m & complete r & height l = height m & height m = height r)"
+ (height l = height m & height m = height r & complete l & complete m & complete r)"
lemma ht_sz_if_complete: "complete t \<Longrightarrow> 2 ^ height t \<le> size t + 1"
by (induction t) auto
--- a/src/HOL/Library/DAList.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Library/DAList.thy Sun Nov 15 13:06:41 2020 +0000
@@ -128,21 +128,22 @@
subsection \<open>Quickcheck generators\<close>
-notation fcomp (infixl "\<circ>>" 60)
-notation scomp (infixl "\<circ>\<rightarrow>" 60)
+context
+ includes state_combinator_syntax term_syntax
+begin
-definition (in term_syntax)
+definition
valterm_empty :: "('key :: typerep, 'value :: typerep) alist \<times> (unit \<Rightarrow> Code_Evaluation.term)"
where "valterm_empty = Code_Evaluation.valtermify empty"
-definition (in term_syntax)
+definition
valterm_update :: "'key :: typerep \<times> (unit \<Rightarrow> Code_Evaluation.term) \<Rightarrow>
'value :: typerep \<times> (unit \<Rightarrow> Code_Evaluation.term) \<Rightarrow>
('key, 'value) alist \<times> (unit \<Rightarrow> Code_Evaluation.term) \<Rightarrow>
('key, 'value) alist \<times> (unit \<Rightarrow> Code_Evaluation.term)" where
[code_unfold]: "valterm_update k v a = Code_Evaluation.valtermify update {\<cdot>} k {\<cdot>} v {\<cdot>}a"
-fun (in term_syntax) random_aux_alist
+fun random_aux_alist
where
"random_aux_alist i j =
(if i = 0 then Pair valterm_empty
@@ -152,6 +153,8 @@
(\<lambda>v. random_aux_alist (i - 1) j \<circ>\<rightarrow> (\<lambda>a. Pair (valterm_update k v a))))),
(1, Pair valterm_empty)]))"
+end
+
instantiation alist :: (random, random) random
begin
@@ -163,9 +166,6 @@
end
-no_notation fcomp (infixl "\<circ>>" 60)
-no_notation scomp (infixl "\<circ>\<rightarrow>" 60)
-
instantiation alist :: (exhaustive, exhaustive) exhaustive
begin
--- a/src/HOL/Library/FSet.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Library/FSet.thy Sun Nov 15 13:06:41 2020 +0000
@@ -1314,12 +1314,18 @@
notation Quickcheck_Exhaustive.orelse (infixr "orelse" 55)
-definition (in term_syntax) [code_unfold]:
+context
+ includes term_syntax
+begin
+
+definition [code_unfold]:
"valterm_femptyset = Code_Evaluation.valtermify ({||} :: ('a :: typerep) fset)"
-definition (in term_syntax) [code_unfold]:
+definition [code_unfold]:
"valtermify_finsert x s = Code_Evaluation.valtermify finsert {\<cdot>} (x :: ('a :: typerep * _)) {\<cdot>} s"
+end
+
instantiation fset :: (exhaustive) exhaustive
begin
@@ -1342,9 +1348,11 @@
no_notation Quickcheck_Exhaustive.orelse (infixr "orelse" 55)
-notation scomp (infixl "\<circ>\<rightarrow>" 60)
+instantiation fset :: (random) random
+begin
-instantiation fset :: (random) random
+context
+ includes state_combinator_syntax
begin
fun random_aux_fset :: "natural \<Rightarrow> natural \<Rightarrow> natural \<times> natural \<Rightarrow> ('a fset \<times> (unit \<Rightarrow> term)) \<times> natural \<times> natural" where
@@ -1373,6 +1381,6 @@
end
-no_notation scomp (infixl "\<circ>\<rightarrow>" 60)
+end
end
--- a/src/HOL/Library/Float.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Library/Float.thy Sun Nov 15 13:06:41 2020 +0000
@@ -318,9 +318,15 @@
end
-definition (in term_syntax) [code_unfold]:
+context
+ includes term_syntax
+begin
+
+definition [code_unfold]:
"valtermify_float x y = Code_Evaluation.valtermify Float {\<cdot>} x {\<cdot>} y"
+end
+
instantiation float :: full_exhaustive
begin
--- a/src/HOL/Library/IArray.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Library/IArray.thy Sun Nov 15 13:06:41 2020 +0000
@@ -89,7 +89,8 @@
"IArray.all p = Not \<circ> IArray.exists (Not \<circ> p)"
by (simp add: fun_eq_iff)
-context term_syntax
+context
+ includes term_syntax
begin
lemma [code]:
--- a/src/HOL/Library/Interval.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Library/Interval.thy Sun Nov 15 13:06:41 2020 +0000
@@ -816,9 +816,15 @@
end
-definition (in term_syntax) [code_unfold]:
+context
+ includes term_syntax
+begin
+
+definition [code_unfold]:
"valtermify_interval x y = Code_Evaluation.valtermify (Ivl::'a::{preorder,typerep}\<Rightarrow>_) {\<cdot>} x {\<cdot>} y"
+end
+
instantiation interval :: ("{full_exhaustive,preorder,typerep}") full_exhaustive
begin
--- a/src/HOL/Library/Multiset.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Library/Multiset.thy Sun Nov 15 13:06:41 2020 +0000
@@ -3534,17 +3534,24 @@
text \<open>Quickcheck generators\<close>
-definition (in term_syntax)
+context
+ includes term_syntax
+begin
+
+definition
msetify :: "'a::typerep list \<times> (unit \<Rightarrow> Code_Evaluation.term)
\<Rightarrow> 'a multiset \<times> (unit \<Rightarrow> Code_Evaluation.term)" where
[code_unfold]: "msetify xs = Code_Evaluation.valtermify mset {\<cdot>} xs"
-notation fcomp (infixl "\<circ>>" 60)
-notation scomp (infixl "\<circ>\<rightarrow>" 60)
+end
instantiation multiset :: (random) random
begin
+context
+ includes state_combinator_syntax
+begin
+
definition
"Quickcheck_Random.random i = Quickcheck_Random.random i \<circ>\<rightarrow> (\<lambda>xs. Pair (msetify xs))"
@@ -3552,8 +3559,7 @@
end
-no_notation fcomp (infixl "\<circ>>" 60)
-no_notation scomp (infixl "\<circ>\<rightarrow>" 60)
+end
instantiation multiset :: (full_exhaustive) full_exhaustive
begin
--- a/src/HOL/Library/Open_State_Syntax.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Library/Open_State_Syntax.thy Sun Nov 15 13:06:41 2020 +0000
@@ -8,6 +8,10 @@
imports Main
begin
+context
+ includes state_combinator_syntax
+begin
+
subsection \<open>Motivation\<close>
text \<open>
@@ -55,9 +59,6 @@
again. To achieve this, we use a set of monad combinators:
\<close>
-notation fcomp (infixl "\<circ>>" 60)
-notation scomp (infixl "\<circ>\<rightarrow>" 60)
-
text \<open>
Given two transformations \<^term>\<open>f\<close> and \<^term>\<open>g\<close>, they may be
directly composed using the \<^term>\<open>(\<circ>>)\<close> combinator, forming a
@@ -111,6 +112,8 @@
lemmas monad_collapse = monad_simp fcomp_apply scomp_apply split_beta
+end
+
subsection \<open>Do-syntax\<close>
--- a/src/HOL/Library/Tree.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Library/Tree.thy Sun Nov 15 13:06:41 2020 +0000
@@ -28,7 +28,7 @@
fun subtrees :: "'a tree \<Rightarrow> 'a tree set" where
"subtrees \<langle>\<rangle> = {\<langle>\<rangle>}" |
-"subtrees (\<langle>l, a, r\<rangle>) = insert \<langle>l, a, r\<rangle> (subtrees l \<union> subtrees r)"
+"subtrees (\<langle>l, a, r\<rangle>) = {\<langle>l, a, r\<rangle>} \<union> subtrees l \<union> subtrees r"
fun mirror :: "'a tree \<Rightarrow> 'a tree" where
"mirror \<langle>\<rangle> = Leaf" |
@@ -53,7 +53,7 @@
fun complete :: "'a tree \<Rightarrow> bool" where
"complete Leaf = True" |
-"complete (Node l x r) = (complete l \<and> complete r \<and> height l = height r)"
+"complete (Node l x r) = (height l = height r \<and> complete l \<and> complete r)"
text \<open>Almost complete:\<close>
definition acomplete :: "'a tree \<Rightarrow> bool" where
@@ -90,7 +90,7 @@
fun bst_wrt :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a tree \<Rightarrow> bool" where
"bst_wrt P \<langle>\<rangle> \<longleftrightarrow> True" |
"bst_wrt P \<langle>l, a, r\<rangle> \<longleftrightarrow>
- bst_wrt P l \<and> bst_wrt P r \<and> (\<forall>x\<in>set_tree l. P x a) \<and> (\<forall>x\<in>set_tree r. P a x)"
+ (\<forall>x\<in>set_tree l. P x a) \<and> (\<forall>x\<in>set_tree r. P a x) \<and> bst_wrt P l \<and> bst_wrt P r"
abbreviation bst :: "('a::linorder) tree \<Rightarrow> bool" where
"bst \<equiv> bst_wrt (<)"
@@ -98,7 +98,7 @@
fun (in linorder) heap :: "'a tree \<Rightarrow> bool" where
"heap Leaf = True" |
"heap (Node l m r) =
- (heap l \<and> heap r \<and> (\<forall>x \<in> set_tree l \<union> set_tree r. m \<le> x))"
+ ((\<forall>x \<in> set_tree l \<union> set_tree r. m \<le> x) \<and> heap l \<and> heap r)"
subsection \<open>\<^const>\<open>map_tree\<close>\<close>
--- a/src/HOL/Probability/PMF_Impl.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Probability/PMF_Impl.thy Sun Nov 15 13:06:41 2020 +0000
@@ -530,22 +530,22 @@
definition single :: "'a \<Rightarrow> 'a multiset" where
"single s = {#s#}"
-definition (in term_syntax)
- pmfify :: "('a::typerep multiset \<times> (unit \<Rightarrow> Code_Evaluation.term)) \<Rightarrow>
- 'a \<times> (unit \<Rightarrow> Code_Evaluation.term) \<Rightarrow>
- 'a pmf \<times> (unit \<Rightarrow> Code_Evaluation.term)" where
+instantiation pmf :: (random) random
+begin
+
+context
+ includes state_combinator_syntax term_syntax
+begin
+
+definition
+ pmfify :: "('b::typerep multiset \<times> (unit \<Rightarrow> Code_Evaluation.term)) \<Rightarrow>
+ 'b \<times> (unit \<Rightarrow> Code_Evaluation.term) \<Rightarrow>
+ 'b pmf \<times> (unit \<Rightarrow> Code_Evaluation.term)" where
[code_unfold]: "pmfify A x =
Code_Evaluation.valtermify pmf_of_multiset {\<cdot>}
(Code_Evaluation.valtermify (+) {\<cdot>} A {\<cdot>}
(Code_Evaluation.valtermify single {\<cdot>} x))"
-
-notation fcomp (infixl "\<circ>>" 60)
-notation scomp (infixl "\<circ>\<rightarrow>" 60)
-
-instantiation pmf :: (random) random
-begin
-
definition
"Quickcheck_Random.random i =
Quickcheck_Random.random i \<circ>\<rightarrow> (\<lambda>A.
@@ -555,8 +555,7 @@
end
-no_notation fcomp (infixl "\<circ>>" 60)
-no_notation scomp (infixl "\<circ>\<rightarrow>" 60)
+end
instantiation pmf :: (full_exhaustive) full_exhaustive
begin
--- a/src/HOL/Product_Type.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Product_Type.thy Sun Nov 15 13:06:41 2020 +0000
@@ -771,12 +771,24 @@
text \<open>The composition-uncurry combinator.\<close>
-notation fcomp (infixl "\<circ>>" 60)
-
definition scomp :: "('a \<Rightarrow> 'b \<times> 'c) \<Rightarrow> ('b \<Rightarrow> 'c \<Rightarrow> 'd) \<Rightarrow> 'a \<Rightarrow> 'd" (infixl "\<circ>\<rightarrow>" 60)
where "f \<circ>\<rightarrow> g = (\<lambda>x. case_prod g (f x))"
-lemma scomp_unfold: "scomp = (\<lambda>f g x. g (fst (f x)) (snd (f x)))"
+no_notation scomp (infixl "\<circ>\<rightarrow>" 60)
+
+bundle state_combinator_syntax
+begin
+
+notation fcomp (infixl "\<circ>>" 60)
+notation scomp (infixl "\<circ>\<rightarrow>" 60)
+
+end
+
+context
+ includes state_combinator_syntax
+begin
+
+lemma scomp_unfold: "(\<circ>\<rightarrow>) = (\<lambda>f g x. g (fst (f x)) (snd (f x)))"
by (simp add: fun_eq_iff scomp_def case_prod_unfold)
lemma scomp_apply [simp]: "(f \<circ>\<rightarrow> g) x = case_prod g (f x)"
@@ -797,6 +809,8 @@
lemma fcomp_scomp: "(f \<circ>> g) \<circ>\<rightarrow> h = f \<circ>> (g \<circ>\<rightarrow> h)"
by (simp add: fun_eq_iff scomp_unfold)
+end
+
code_printing
constant scomp \<rightharpoonup> (Eval) infixl 3 "#->"
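
The bundle declared in the Product_Type.thy hunk above replaces the paired notation/no_notation toggles that the rest of this changeset removes, letting each context opt into the combinator syntax locally. A minimal sketch of the resulting usage pattern (the theory name Scratch is illustrative; fcomp_apply is the library fact already referenced in the monad_collapse line above):

    theory Scratch
      imports Main
    begin

    context
      includes state_combinator_syntax
    begin

    (* \<circ>> and \<circ>\<rightarrow> parse only inside this block *)
    lemma "(f \<circ>> g) x = g (f x)"
      by (simp add: fcomp_apply)

    end  (* the infix syntax goes out of scope here *)

    end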
--- a/src/HOL/Quickcheck_Exhaustive.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Quickcheck_Exhaustive.thy Sun Nov 15 13:06:41 2020 +0000
@@ -180,10 +180,16 @@
end
-definition (in term_syntax)
+context
+ includes term_syntax
+begin
+
+definition
[code_unfold]: "valtermify_pair x y =
Code_Evaluation.valtermify (Pair :: 'a::typerep \<Rightarrow> 'b::typerep \<Rightarrow> 'a \<times> 'b) {\<cdot>} x {\<cdot>} y"
+end
+
instantiation prod :: (full_exhaustive, full_exhaustive) full_exhaustive
begin
@@ -253,11 +259,17 @@
(\<lambda>_::'a. v,
\<lambda>u::unit. Code_Evaluation.Abs (STR ''x'') (Typerep.typerep TYPE('a::typerep)) (t ())))"
-definition (in term_syntax)
+context
+ includes term_syntax
+begin
+
+definition
[code_unfold]: "valtermify_fun_upd g a b =
Code_Evaluation.valtermify
(fun_upd :: ('a::typerep \<Rightarrow> 'b::typerep) \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> 'a \<Rightarrow> 'b) {\<cdot>} g {\<cdot>} a {\<cdot>} b"
+end
+
instantiation "fun" :: ("{equal,full_exhaustive}", full_exhaustive) full_exhaustive
begin
@@ -296,11 +308,17 @@
else check_all (\<lambda>(x, xt).
check_all_n_lists (\<lambda>(xs, xst). f ((x # xs), (\<lambda>_. (xt () # xst ())))) (n - 1)))"
-definition (in term_syntax)
+context
+ includes term_syntax
+begin
+
+definition
[code_unfold]: "termify_fun_upd g a b =
(Code_Evaluation.termify
(fun_upd :: ('a::typerep \<Rightarrow> 'b::typerep) \<Rightarrow> 'a \<Rightarrow> 'b \<Rightarrow> 'a \<Rightarrow> 'b) <\<cdot>> g <\<cdot>> a <\<cdot>> b)"
+end
+
definition mk_map_term ::
"(unit \<Rightarrow> typerep) \<Rightarrow> (unit \<Rightarrow> typerep) \<Rightarrow>
(unit \<Rightarrow> term list) \<Rightarrow> (unit \<Rightarrow> term list) \<Rightarrow> unit \<Rightarrow> term"
@@ -357,7 +375,11 @@
end
-fun (in term_syntax) check_all_subsets ::
+context
+ includes term_syntax
+begin
+
+fun check_all_subsets ::
"(('a::typerep) set \<times> (unit \<Rightarrow> term) \<Rightarrow> (bool \<times> term list) option) \<Rightarrow>
('a \<times> (unit \<Rightarrow> term)) list \<Rightarrow> (bool \<times> term list) option"
where
@@ -365,18 +387,19 @@
| "check_all_subsets f (x # xs) =
check_all_subsets (\<lambda>s. case f s of Some ts \<Rightarrow> Some ts | None \<Rightarrow> f (valtermify_insert x s)) xs"
-
-definition (in term_syntax)
+definition
[code_unfold]: "term_emptyset = Code_Evaluation.termify ({} :: ('a::typerep) set)"
-definition (in term_syntax)
+definition
[code_unfold]: "termify_insert x s =
Code_Evaluation.termify (insert :: ('a::typerep) \<Rightarrow> 'a set \<Rightarrow> 'a set) <\<cdot>> x <\<cdot>> s"
-definition (in term_syntax) setify :: "('a::typerep) itself \<Rightarrow> term list \<Rightarrow> term"
+definition setify :: "('a::typerep) itself \<Rightarrow> term list \<Rightarrow> term"
where
"setify T ts = foldr (termify_insert T) ts (term_emptyset T)"
+end
+
instantiation set :: (check_all) check_all
begin
@@ -423,10 +446,16 @@
end
-definition (in term_syntax) [code_unfold]:
+context
+ includes term_syntax
+begin
+
+definition [code_unfold]:
"termify_pair x y =
Code_Evaluation.termify (Pair :: 'a::typerep \<Rightarrow> 'b :: typerep \<Rightarrow> 'a * 'b) <\<cdot>> x <\<cdot>> y"
+end
+
instantiation prod :: (check_all, check_all) check_all
begin
@@ -442,14 +471,20 @@
end
-definition (in term_syntax)
+context
+ includes term_syntax
+begin
+
+definition
[code_unfold]: "valtermify_Inl x =
Code_Evaluation.valtermify (Inl :: 'a::typerep \<Rightarrow> 'a + 'b :: typerep) {\<cdot>} x"
-definition (in term_syntax)
+definition
[code_unfold]: "valtermify_Inr x =
Code_Evaluation.valtermify (Inr :: 'b::typerep \<Rightarrow> 'a::typerep + 'b) {\<cdot>} x"
+end
+
instantiation sum :: (check_all, check_all) check_all
begin
--- a/src/HOL/Quickcheck_Random.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Quickcheck_Random.thy Sun Nov 15 13:06:41 2020 +0000
@@ -8,9 +8,6 @@
imports Random Code_Evaluation Enum
begin
-notation fcomp (infixl "\<circ>>" 60)
-notation scomp (infixl "\<circ>\<rightarrow>" 60)
-
setup \<open>Code_Target.add_derived_target ("Quickcheck", [(Code_Runtime.target, I)])\<close>
subsection \<open>Catching Match exceptions\<close>
@@ -33,6 +30,10 @@
instantiation bool :: random
begin
+context
+ includes state_combinator_syntax
+begin
+
definition
"random i = Random.range 2 \<circ>\<rightarrow>
(\<lambda>k. Pair (if k = 0 then Code_Evaluation.valtermify False else Code_Evaluation.valtermify True))"
@@ -41,6 +42,8 @@
end
+end
+
instantiation itself :: (typerep) random
begin
@@ -55,6 +58,10 @@
instantiation char :: random
begin
+context
+ includes state_combinator_syntax
+begin
+
definition
"random _ = Random.select (Enum.enum :: char list) \<circ>\<rightarrow> (\<lambda>c. Pair (c, \<lambda>u. Code_Evaluation.term_of c))"
@@ -62,6 +69,8 @@
end
+end
+
instantiation String.literal :: random
begin
@@ -75,6 +84,10 @@
instantiation nat :: random
begin
+context
+ includes state_combinator_syntax
+begin
+
definition random_nat :: "natural \<Rightarrow> Random.seed
\<Rightarrow> (nat \<times> (unit \<Rightarrow> Code_Evaluation.term)) \<times> Random.seed"
where
@@ -86,9 +99,15 @@
end
+end
+
instantiation int :: random
begin
+context
+ includes state_combinator_syntax
+begin
+
definition
"random i = Random.range (2 * i + 1) \<circ>\<rightarrow> (\<lambda>k. Pair (
let j = (if k \<ge> i then int (nat_of_natural (k - i)) else - (int (nat_of_natural (i - k))))
@@ -98,9 +117,15 @@
end
+end
+
instantiation natural :: random
begin
+context
+ includes state_combinator_syntax
+begin
+
definition random_natural :: "natural \<Rightarrow> Random.seed
\<Rightarrow> (natural \<times> (unit \<Rightarrow> Code_Evaluation.term)) \<times> Random.seed"
where
@@ -110,9 +135,15 @@
end
+end
+
instantiation integer :: random
begin
+context
+ includes state_combinator_syntax
+begin
+
definition random_integer :: "natural \<Rightarrow> Random.seed
\<Rightarrow> (integer \<times> (unit \<Rightarrow> Code_Evaluation.term)) \<times> Random.seed"
where
@@ -124,6 +155,8 @@
end
+end
+
subsection \<open>Complex generators\<close>
@@ -153,25 +186,40 @@
text \<open>Towards type copies and datatypes\<close>
+context
+ includes state_combinator_syntax
+begin
+
definition collapse :: "('a \<Rightarrow> ('a \<Rightarrow> 'b \<times> 'a) \<times> 'a) \<Rightarrow> 'a \<Rightarrow> 'b \<times> 'a"
where "collapse f = (f \<circ>\<rightarrow> id)"
+end
+
definition beyond :: "natural \<Rightarrow> natural \<Rightarrow> natural"
where "beyond k l = (if l > k then l else 0)"
lemma beyond_zero: "beyond k 0 = 0"
by (simp add: beyond_def)
+context
+ includes term_syntax
+begin
+
-definition (in term_syntax) [code_unfold]:
+definition [code_unfold]:
"valterm_emptyset = Code_Evaluation.valtermify ({} :: ('a :: typerep) set)"
-definition (in term_syntax) [code_unfold]:
+definition [code_unfold]:
"valtermify_insert x s = Code_Evaluation.valtermify insert {\<cdot>} (x :: ('a :: typerep * _)) {\<cdot>} s"
+end
+
instantiation set :: (random) random
begin
+context
+ includes state_combinator_syntax
+begin
+
fun random_aux_set
where
"random_aux_set 0 j = collapse (Random.select_weight [(1, Pair valterm_emptyset)])"
@@ -200,6 +248,8 @@
end
+end
+
lemma random_aux_rec:
fixes random_aux :: "natural \<Rightarrow> 'a"
assumes "random_aux 0 = rhs 0"
@@ -223,9 +273,6 @@
code_reserved Quickcheck Random_Generators
-no_notation fcomp (infixl "\<circ>>" 60)
-no_notation scomp (infixl "\<circ>\<rightarrow>" 60)
-
hide_const (open) catch_match random collapse beyond random_fun_aux random_fun_lift
hide_fact (open) collapse_def beyond_def random_fun_lift_def
--- a/src/HOL/ROOT Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/ROOT Sun Nov 15 13:06:41 2020 +0000
@@ -959,6 +959,8 @@
VC_Principles
Reference
Complex_Types
+ document_theories
+ "HOL-SPARK-Examples.Greatest_Common_Divisor"
document_files
"complex_types.ads"
"complex_types_app.adb"
--- a/src/HOL/Random.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Random.thy Sun Nov 15 13:06:41 2020 +0000
@@ -6,10 +6,6 @@
imports List Groups_List
begin
-notation fcomp (infixl "\<circ>>" 60)
-notation scomp (infixl "\<circ>\<rightarrow>" 60)
-
-
subsection \<open>Auxiliary functions\<close>
fun log :: "natural \<Rightarrow> natural \<Rightarrow> natural" where
@@ -46,6 +42,10 @@
subsection \<open>Base selectors\<close>
+context
+ includes state_combinator_syntax
+begin
+
fun iterate :: "natural \<Rightarrow> ('b \<Rightarrow> 'a \<Rightarrow> 'b \<times> 'a) \<Rightarrow> 'b \<Rightarrow> 'a \<Rightarrow> 'b \<times> 'a" where
"iterate k f x = (if k = 0 then Pair x else f x \<circ>\<rightarrow> iterate (k - 1) f)"
@@ -134,6 +134,8 @@
fun_eq_iff pick_same [symmetric] less_natural_def)
qed
+end
+
subsection \<open>\<open>ML\<close> interface\<close>
@@ -183,7 +185,4 @@
iterate range select pick select_weight
hide_fact (open) range_def
-no_notation fcomp (infixl "\<circ>>" 60)
-no_notation scomp (infixl "\<circ>\<rightarrow>" 60)
-
end
--- a/src/HOL/Rat.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Rat.thy Sun Nov 15 13:06:41 2020 +0000
@@ -998,18 +998,25 @@
text \<open>Quickcheck\<close>
-definition (in term_syntax)
+context
+ includes term_syntax
+begin
+
+definition
valterm_fract :: "int \<times> (unit \<Rightarrow> Code_Evaluation.term) \<Rightarrow>
int \<times> (unit \<Rightarrow> Code_Evaluation.term) \<Rightarrow>
rat \<times> (unit \<Rightarrow> Code_Evaluation.term)"
where [code_unfold]: "valterm_fract k l = Code_Evaluation.valtermify Fract {\<cdot>} k {\<cdot>} l"
-notation fcomp (infixl "\<circ>>" 60)
-notation scomp (infixl "\<circ>\<rightarrow>" 60)
+end
instantiation rat :: random
begin
+context
+ includes state_combinator_syntax
+begin
+
definition
"Quickcheck_Random.random i =
Quickcheck_Random.random i \<circ>\<rightarrow> (\<lambda>num. Random.range i \<circ>\<rightarrow> (\<lambda>denom. Pair
@@ -1020,8 +1027,7 @@
end
-no_notation fcomp (infixl "\<circ>>" 60)
-no_notation scomp (infixl "\<circ>\<rightarrow>" 60)
+end
instantiation rat :: exhaustive
begin
--- a/src/HOL/Real.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Real.thy Sun Nov 15 13:06:41 2020 +0000
@@ -1634,16 +1634,23 @@
text \<open>Quickcheck\<close>
-definition (in term_syntax)
+context
+ includes term_syntax
+begin
+
+definition
valterm_ratreal :: "rat \<times> (unit \<Rightarrow> Code_Evaluation.term) \<Rightarrow> real \<times> (unit \<Rightarrow> Code_Evaluation.term)"
where [code_unfold]: "valterm_ratreal k = Code_Evaluation.valtermify Ratreal {\<cdot>} k"
-notation fcomp (infixl "\<circ>>" 60)
-notation scomp (infixl "\<circ>\<rightarrow>" 60)
+end
instantiation real :: random
begin
+context
+ includes state_combinator_syntax
+begin
+
definition
"Quickcheck_Random.random i = Quickcheck_Random.random i \<circ>\<rightarrow> (\<lambda>r. Pair (valterm_ratreal r))"
@@ -1651,8 +1658,7 @@
end
-no_notation fcomp (infixl "\<circ>>" 60)
-no_notation scomp (infixl "\<circ>\<rightarrow>" 60)
+end
instantiation real :: exhaustive
begin
--- a/src/HOL/TPTP/ATP_Theory_Export.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/TPTP/ATP_Theory_Export.thy Sun Nov 15 13:06:41 2020 +0000
@@ -25,8 +25,8 @@
ML \<open>
if do_it then
"/tmp/isa_filter"
- |> generate_casc_lbt_isa_files_for_theory ctxt thy (THF (Polymorphic, THF_Without_Choice))
- infer_policy "poly_native_higher"
+ |> generate_casc_lbt_isa_files_for_theory ctxt thy
+ (THF (Without_FOOL, Polymorphic, THF_Without_Choice)) infer_policy "poly_native_higher"
else
()
\<close>
--- a/src/HOL/TPTP/atp_problem_import.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/TPTP/atp_problem_import.ML Sun Nov 15 13:06:41 2020 +0000
@@ -298,8 +298,9 @@
val (format, type_enc, lam_trans) =
(case format_str of
"FOF" => (FOF, "mono_guards??", liftingN)
- | "TF0" => (TFF Monomorphic, "mono_native", liftingN)
- | "TH0" => (THF (Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN)
+ | "TF0" => (TFF (Without_FOOL, Monomorphic), "mono_native", liftingN)
+ | "TH0" => (THF (Without_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher",
+ keep_lamsN)
| "DFG" => (DFG Monomorphic, "mono_native", liftingN)
| _ => error ("Unknown format " ^ quote format_str ^
" (expected \"FOF\", \"TF0\", \"TH0\", or \"DFG\")"))
--- a/src/HOL/TPTP/atp_theory_export.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/TPTP/atp_theory_export.ML Sun Nov 15 13:06:41 2020 +0000
@@ -36,12 +36,12 @@
val prefix = Library.prefix
val fact_name_of = prefix fact_prefix o ascii_of
-fun atp_of_format (THF (Polymorphic, _)) = leo3N
- | atp_of_format (THF (Monomorphic, _)) = satallaxN
+fun atp_of_format (THF (_, Polymorphic, _)) = leo3N
+ | atp_of_format (THF (_, Monomorphic, _)) = satallaxN
| atp_of_format (DFG Polymorphic) = pirateN
| atp_of_format (DFG Monomorphic) = spassN
- | atp_of_format (TFF Polymorphic) = alt_ergoN
- | atp_of_format (TFF Monomorphic) = vampireN
+ | atp_of_format (TFF (_, Polymorphic)) = alt_ergoN
+ | atp_of_format (TFF (_, Monomorphic)) = vampireN
| atp_of_format FOF = eN (* FIXME? *)
| atp_of_format CNF_UEQ = waldmeisterN
| atp_of_format CNF = eN (* FIXME? *)
--- a/src/HOL/Tools/ATP/atp_problem.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Tools/ATP/atp_problem.ML Sun Nov 15 13:06:41 2020 +0000
@@ -31,15 +31,16 @@
gen_prec : bool,
gen_simp : bool}
+ datatype fool = Without_FOOL | With_FOOL
datatype polymorphism = Monomorphic | Polymorphic
- datatype thf_choice = THF_Predicate_Free | THF_Without_Choice | THF_With_Choice
+ datatype thf_choice = THF_Without_Choice | THF_With_Choice
datatype atp_format =
CNF |
CNF_UEQ |
FOF |
- TFF of polymorphism |
- THF of polymorphism * thf_choice |
+ TFF of fool * polymorphism |
+ THF of fool * polymorphism * thf_choice |
DFG of polymorphism
datatype atp_formula_role =
@@ -188,15 +189,16 @@
gen_prec : bool,
gen_simp : bool}
+datatype fool = Without_FOOL | With_FOOL
datatype polymorphism = Monomorphic | Polymorphic
-datatype thf_choice = THF_Predicate_Free | THF_Without_Choice | THF_With_Choice
+datatype thf_choice = THF_Without_Choice | THF_With_Choice
datatype atp_format =
CNF |
CNF_UEQ |
FOF |
- TFF of polymorphism |
- THF of polymorphism * thf_choice |
+ TFF of fool * polymorphism |
+ THF of fool * polymorphism * thf_choice |
DFG of polymorphism
datatype atp_formula_role =
@@ -915,9 +917,6 @@
end
in add 1 |> apsnd SOME end)
-fun avoid_clash_with_alt_ergo_type_vars s =
- if is_tptp_variable s then s else s ^ "_"
-
fun avoid_clash_with_dfg_keywords s =
let val n = String.size s in
if n < 2 orelse (n = 2 andalso String.sub (s, 0) = String.sub (s, 1)) orelse
@@ -936,8 +935,7 @@
if readable_names then SOME (Symtab.empty, Symtab.empty) else NONE
val avoid_clash =
(case format of
- TFF Polymorphic => avoid_clash_with_alt_ergo_type_vars
- | DFG _ => avoid_clash_with_dfg_keywords
+ DFG _ => avoid_clash_with_dfg_keywords
| _ => I)
val nice_name = nice_name avoid_clash
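
Downstream code that builds atp_format values must be adapted to the extended constructors introduced above. A sketch of the mechanical rewrite in Isabelle/ML (illustrative value names; the rewrites themselves are exactly the updates carried out in the hunks that follow):

    (* before this changeset one wrote:
         val fmt_tff = TFF Monomorphic
         val fmt_thf = THF (Polymorphic, THF_Without_Choice) *)

    (* after it, every format carries an explicit fool component: *)
    val fmt_tff = TFF (Without_FOOL, Monomorphic)
    val fmt_thf = THF (Without_FOOL, Polymorphic, THF_Without_Choice)

    (* and TFF with first-class Booleans (FOOL) becomes expressible: *)
    val fmt_tfx = TFF (With_FOOL, Polymorphic)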
--- a/src/HOL/Tools/ATP/atp_problem_generate.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Tools/ATP/atp_problem_generate.ML Sun Nov 15 13:06:41 2020 +0000
@@ -153,7 +153,7 @@
No_Types
datatype type_enc =
- Native of order * polymorphism * type_level |
+ Native of order * fool * polymorphism * type_level |
Guards of polymorphism * type_level |
Tags of polymorphism * type_level
@@ -162,13 +162,12 @@
fun is_type_enc_native (Native _) = true
| is_type_enc_native _ = false
-fun is_type_enc_full_higher_order (Native (Higher_Order THF_Predicate_Free, _, _)) = false
- | is_type_enc_full_higher_order (Native (Higher_Order _, _, _)) = true
- | is_type_enc_full_higher_order _ = false
-fun is_type_enc_higher_order (Native (Higher_Order _, _, _)) = true
+fun is_type_enc_fool (Native (_, With_FOOL, _, _)) = true
+ | is_type_enc_fool _ = false
+fun is_type_enc_higher_order (Native (Higher_Order _, _, _, _)) = true
| is_type_enc_higher_order _ = false
-fun polymorphism_of_type_enc (Native (_, poly, _)) = poly
+fun polymorphism_of_type_enc (Native (_, _, poly, _)) = poly
| polymorphism_of_type_enc (Guards (poly, _)) = poly
| polymorphism_of_type_enc (Tags (poly, _)) = poly
@@ -181,7 +180,7 @@
fun is_type_enc_mangling type_enc =
polymorphism_of_type_enc type_enc = Mangled_Monomorphic
-fun level_of_type_enc (Native (_, _, level)) = level
+fun level_of_type_enc (Native (_, _, _, level)) = level
| level_of_type_enc (Guards (_, level)) = level
| level_of_type_enc (Tags (_, level)) = level
@@ -381,7 +380,7 @@
fun default c = const_prefix ^ lookup_const c
in
fun make_fixed_const _ \<^const_name>\<open>HOL.eq\<close> = tptp_old_equal
- | make_fixed_const (SOME (Native (Higher_Order THF_With_Choice, _, _))) c =
+ | make_fixed_const (SOME (Native (Higher_Order THF_With_Choice, _, _, _))) c =
if c = choice_const then tptp_choice else default c
| make_fixed_const _ c = default c
end
@@ -600,102 +599,125 @@
fold (fn s' => fn NONE => try (unsuffix s') s | some => some) ss NONE
fun type_enc_of_string strictness s =
- (case try (unprefix "tc_") s of
- SOME s => (SOME Type_Class_Polymorphic, s)
- | NONE =>
- (case try (unprefix "poly_") s of
- SOME s => (SOME (Raw_Polymorphic With_Phantom_Type_Vars), s)
- | NONE =>
- (case try (unprefix "ml_poly_") s of
- SOME s => (SOME (Raw_Polymorphic Without_Phantom_Type_Vars), s)
+ let
+ val (poly, s) =
+ (case try (unprefix "tc_") s of
+ SOME s => (SOME Type_Class_Polymorphic, s)
| NONE =>
- (case try (unprefix "raw_mono_") s of
- SOME s => (SOME Raw_Monomorphic, s)
+ (case try (unprefix "poly_") s of
+ SOME s => (SOME (Raw_Polymorphic With_Phantom_Type_Vars), s)
| NONE =>
- (case try (unprefix "mono_") s of
- SOME s => (SOME Mangled_Monomorphic, s)
- | NONE => (NONE, s))))))
- ||> (fn s =>
- (case try_unsuffixes queries s of
- SOME s =>
- (case try_unsuffixes queries s of
- SOME s => (Nonmono_Types (strictness, Non_Uniform), s)
- | NONE => (Nonmono_Types (strictness, Uniform), s))
- | NONE =>
- (case try_unsuffixes ats s of
- SOME s => (Undercover_Types, s)
- | NONE => (All_Types, s))))
- |> (fn (poly, (level, core)) =>
- (case (core, (poly, level)) of
- ("native", (SOME poly, _)) =>
+ (case try (unprefix "ml_poly_") s of
+ SOME s => (SOME (Raw_Polymorphic Without_Phantom_Type_Vars), s)
+ | NONE =>
+ (case try (unprefix "raw_mono_") s of
+ SOME s => (SOME Raw_Monomorphic, s)
+ | NONE =>
+ (case try (unprefix "mono_") s of
+ SOME s => (SOME Mangled_Monomorphic, s)
+ | NONE => (NONE, s))))))
+
+ val (level, s) =
+ case try_unsuffixes queries s of
+ SOME s =>
+ (case try_unsuffixes queries s of
+ SOME s => (Nonmono_Types (strictness, Non_Uniform), s)
+ | NONE => (Nonmono_Types (strictness, Uniform), s))
+ | NONE =>
+ (case try_unsuffixes ats s of
+ SOME s => (Undercover_Types, s)
+ | NONE => (All_Types, s))
+
+ fun native_of_string s =
+ let
+ val (fool, core) =
+ (case try (unsuffix "_fool") s of
+ SOME s => (With_FOOL, s)
+ | NONE => (Without_FOOL, s))
+ in
+ (case (core, poly) of
+ ("native", SOME poly) =>
(case (poly, level) of
(Mangled_Monomorphic, _) =>
- if is_type_level_uniform level then Native (First_Order, Mangled_Monomorphic, level)
- else raise Same.SAME
+ if is_type_level_uniform level then
+ Native (First_Order, fool, Mangled_Monomorphic, level)
+ else
+ raise Same.SAME
| (Raw_Monomorphic, _) => raise Same.SAME
- | (poly, All_Types) => Native (First_Order, poly, All_Types))
- | ("native_higher", (SOME poly, _)) =>
+ | (poly, All_Types) => Native (First_Order, fool, poly, All_Types))
+ | ("native_higher", SOME poly) =>
(case (poly, level) of
(_, Nonmono_Types _) => raise Same.SAME
| (_, Undercover_Types) => raise Same.SAME
| (Mangled_Monomorphic, _) =>
if is_type_level_uniform level then
- Native (Higher_Order THF_With_Choice, Mangled_Monomorphic, level)
+ Native (Higher_Order THF_With_Choice, Without_FOOL, Mangled_Monomorphic, level)
else
raise Same.SAME
| (poly as Raw_Polymorphic _, All_Types) =>
- Native (Higher_Order THF_With_Choice, poly, All_Types)
- | _ => raise Same.SAME)
- | ("guards", (SOME poly, _)) =>
- if (poly = Mangled_Monomorphic andalso level = Undercover_Types) orelse
- poly = Type_Class_Polymorphic then
- raise Same.SAME
- else
- Guards (poly, level)
- | ("tags", (SOME poly, _)) =>
- if (poly = Mangled_Monomorphic andalso level = Undercover_Types) orelse
- poly = Type_Class_Polymorphic then
- raise Same.SAME
- else
- Tags (poly, level)
- | ("args", (SOME poly, All_Types (* naja *))) =>
- if poly = Type_Class_Polymorphic then raise Same.SAME
- else Guards (poly, Const_Types Without_Ctr_Optim)
- | ("args", (SOME poly, Nonmono_Types (_, Uniform) (* naja *))) =>
- if poly = Mangled_Monomorphic orelse poly = Type_Class_Polymorphic then
- raise Same.SAME
- else
- Guards (poly, Const_Types With_Ctr_Optim)
- | ("erased", (NONE, All_Types (* naja *))) =>
- Guards (Raw_Polymorphic With_Phantom_Type_Vars, No_Types)
- | _ => raise Same.SAME))
+ Native (Higher_Order THF_With_Choice, Without_FOOL, poly, All_Types)
+ | _ => raise Same.SAME))
+ end
+
+ fun nonnative_of_string core =
+ (case (core, poly, level) of
+ ("guards", SOME poly, _) =>
+ if (poly = Mangled_Monomorphic andalso level = Undercover_Types) orelse
+ poly = Type_Class_Polymorphic then
+ raise Same.SAME
+ else
+ Guards (poly, level)
+ | ("tags", SOME poly, _) =>
+ if (poly = Mangled_Monomorphic andalso level = Undercover_Types) orelse
+ poly = Type_Class_Polymorphic then
+ raise Same.SAME
+ else
+ Tags (poly, level)
+ | ("args", SOME poly, All_Types (* naja *)) =>
+ if poly = Type_Class_Polymorphic then raise Same.SAME
+ else Guards (poly, Const_Types Without_Ctr_Optim)
+ | ("args", SOME poly, Nonmono_Types (_, Uniform) (* naja *)) =>
+ if poly = Mangled_Monomorphic orelse poly = Type_Class_Polymorphic then
+ raise Same.SAME
+ else
+ Guards (poly, Const_Types With_Ctr_Optim)
+ | ("erased", NONE, All_Types (* naja *)) =>
+ Guards (Raw_Polymorphic With_Phantom_Type_Vars, No_Types)
+ | _ => raise Same.SAME)
+ in
+ if String.isPrefix "native" s then
+ native_of_string s
+ else
+ nonnative_of_string s
+ end
handle Same.SAME => error ("Unknown type encoding: " ^ quote s)
-fun min_hologic THF_Predicate_Free _ = THF_Predicate_Free
- | min_hologic _ THF_Predicate_Free = THF_Predicate_Free
- | min_hologic THF_Without_Choice _ = THF_Without_Choice
+fun min_hologic THF_Without_Choice _ = THF_Without_Choice
| min_hologic _ THF_Without_Choice = THF_Without_Choice
| min_hologic _ _ = THF_With_Choice
fun adjust_hologic hologic (Higher_Order hologic') = Higher_Order (min_hologic hologic hologic')
| adjust_hologic _ type_enc = type_enc
+fun adjust_fool Without_FOOL _ = Without_FOOL
+ | adjust_fool _ fool = fool
+
fun no_type_classes Type_Class_Polymorphic = Raw_Polymorphic With_Phantom_Type_Vars
| no_type_classes poly = poly
-fun adjust_type_enc (THF (Polymorphic, hologic)) (Native (order, poly, level)) =
- Native (adjust_hologic hologic order, no_type_classes poly, level)
- | adjust_type_enc (THF (Monomorphic, hologic)) (Native (order, _, level)) =
- Native (adjust_hologic hologic order, Mangled_Monomorphic, level)
- | adjust_type_enc (TFF Monomorphic) (Native (_, _, level)) =
- Native (First_Order, Mangled_Monomorphic, level)
- | adjust_type_enc (DFG Polymorphic) (Native (_, poly, level)) =
- Native (First_Order, poly, level)
- | adjust_type_enc (DFG Monomorphic) (Native (_, _, level)) =
- Native (First_Order, Mangled_Monomorphic, level)
- | adjust_type_enc (TFF _) (Native (_, poly, level)) =
- Native (First_Order, no_type_classes poly, level)
- | adjust_type_enc format (Native (_, poly, level)) =
+fun adjust_type_enc (THF (fool, Polymorphic, hologic)) (Native (order, fool', poly, level)) =
+ Native (adjust_hologic hologic order, adjust_fool fool fool', no_type_classes poly, level)
+ | adjust_type_enc (THF (fool, Monomorphic, hologic)) (Native (order, fool', _, level)) =
+ Native (adjust_hologic hologic order, adjust_fool fool fool', Mangled_Monomorphic, level)
+ | adjust_type_enc (TFF (fool, Monomorphic)) (Native (_, fool', _, level)) =
+ Native (First_Order, adjust_fool fool fool', Mangled_Monomorphic, level)
+ | adjust_type_enc (DFG Polymorphic) (Native (_, _, poly, level)) =
+ Native (First_Order, Without_FOOL, poly, level)
+ | adjust_type_enc (DFG Monomorphic) (Native (_, _, _, level)) =
+ Native (First_Order, Without_FOOL, Mangled_Monomorphic, level)
+ | adjust_type_enc (TFF (fool, _)) (Native (_, fool', poly, level)) =
+ Native (First_Order, adjust_fool fool fool', no_type_classes poly, level)
+ | adjust_type_enc format (Native (_, _, poly, level)) =
adjust_type_enc format (Guards (no_type_classes poly, level))
| adjust_type_enc CNF_UEQ (type_enc as Guards stuff) =
(if is_type_enc_sound type_enc then Tags else Guards) stuff
@@ -838,8 +860,8 @@
T_args
else
(case type_enc of
- Native (_, Raw_Polymorphic _, _) => T_args
- | Native (_, Type_Class_Polymorphic, _) => T_args
+ Native (_, _, Raw_Polymorphic _, _) => T_args
+ | Native (_, _, Type_Class_Polymorphic, _) => T_args
| _ =>
let
fun gen_type_args _ _ [] = []
@@ -889,7 +911,8 @@
AType
((if s = \<^type_name>\<open>fun\<close> andalso is_type_enc_higher_order type_enc then
`I tptp_fun_type
- else if s = \<^type_name>\<open>bool\<close> andalso is_type_enc_full_higher_order type_enc then
+ else if s = \<^type_name>\<open>bool\<close> andalso
+ (is_type_enc_higher_order type_enc orelse is_type_enc_fool type_enc) then
`I tptp_bool_type
else
`make_fixed_type_const s, []), map term Ts)
@@ -934,17 +957,11 @@
fun to_ho (ty as AType (((s, _), _), tys)) =
if s = tptp_fun_type then to_afun to_ho to_ho tys else to_atype ty
| to_ho _ = raise Fail "unexpected type"
- fun to_lfho (ty as AType (((s, _), _), tys)) =
- if s = tptp_fun_type then to_afun to_ho to_lfho tys
- else if pred_sym then bool_atype
- else to_atype ty
- | to_lfho _ = raise Fail "unexpected type"
fun to_fo 0 ty = if pred_sym then bool_atype else to_atype ty
| to_fo ary (AType (_, tys)) = to_afun to_atype (to_fo (ary - 1)) tys
| to_fo _ _ = raise Fail "unexpected type"
in
- if is_type_enc_full_higher_order type_enc then to_ho
- else if is_type_enc_higher_order type_enc then to_lfho
+ if is_type_enc_higher_order type_enc then to_ho
else to_fo ary
end
@@ -974,7 +991,7 @@
val tm_args =
tm_args @
(case type_enc of
- Native (_, Raw_Polymorphic Without_Phantom_Type_Vars, _) =>
+ Native (_, _, Raw_Polymorphic Without_Phantom_Type_Vars, _) =>
[ATerm ((TYPE_name, ty_args), [])]
| _ => [])
in AAtom (ATerm ((cl, ty_args), tm_args)) end
@@ -1078,8 +1095,7 @@
fun introduce_proxies_in_iterm type_enc =
let
fun tweak_ho_quant ho_quant T [IAbs _] = IConst (`I ho_quant, T, [])
- | tweak_ho_quant ho_quant (T as Type (_, [p_T as Type (_, [x_T, _]), _]))
- _ =
+ | tweak_ho_quant ho_quant (T as Type (_, [p_T as Type (_, [x_T, _]), _])) _ =
(* Eta-expand "!!" and "??", to work around LEO-II 1.2.8 parser
limitation. This works in conjunction with special code in
"ATP_Problem" that uses the syntactic sugar "!" and "?" whenever
@@ -1093,38 +1109,49 @@
fun intro top_level args (IApp (tm1, tm2)) =
IApp (intro top_level (tm2 :: args) tm1, intro false [] tm2)
| intro top_level args (IConst (name as (s, _), T, T_args)) =
+ (* is_type_enc_fool *)
(case proxify_const s of
SOME proxy_base =>
- if top_level orelse is_type_enc_full_higher_order type_enc then
- (case (top_level, s) of
- (_, "c_False") => IConst (`I tptp_false, T, [])
- | (_, "c_True") => IConst (`I tptp_true, T, [])
- | (false, "c_Not") => IConst (`I tptp_not, T, [])
- | (false, "c_conj") => IConst (`I tptp_and, T, [])
- | (false, "c_disj") => IConst (`I tptp_or, T, [])
- | (false, "c_implies") => IConst (`I tptp_implies, T, [])
- | (false, "c_All") => tweak_ho_quant tptp_ho_forall T args
- | (false, "c_Ex") => tweak_ho_quant tptp_ho_exists T args
- | (false, s) =>
- if is_tptp_equal s then
- if length args = 2 then
- IConst (`I tptp_equal, T, [])
+ let
+ val argc = length args
+ val plain_const = IConst (name, T, [])
+ fun proxy_const () = IConst (proxy_base |>> prefix const_prefix, T, T_args)
+ fun if_card_matches card x = if card = argc then x else plain_const
+ val is_fool = is_type_enc_fool type_enc
+ in
+ if top_level orelse is_fool orelse is_type_enc_higher_order type_enc then
+ (case (top_level, s) of
+ (_, "c_False") => IConst (`I tptp_false, T, [])
+ | (_, "c_True") => IConst (`I tptp_true, T, [])
+ | (false, "c_Not") => if_card_matches 1 (IConst (`I tptp_not, T, []))
+ | (false, "c_conj") => if_card_matches 2 (IConst (`I tptp_and, T, []))
+ | (false, "c_disj") => if_card_matches 2 (IConst (`I tptp_or, T, []))
+ | (false, "c_implies") => if_card_matches 2 (IConst (`I tptp_implies, T, []))
+ | (false, "c_All") => if_card_matches 1 (tweak_ho_quant tptp_ho_forall T args)
+ | (false, "c_Ex") => if_card_matches 1 (tweak_ho_quant tptp_ho_exists T args)
+ | (false, s) =>
+ if is_tptp_equal s then
+ if argc = 2 then
+ IConst (`I tptp_equal, T, [])
+ else if is_fool then
+ proxy_const ()
+ else
+ (* Eta-expand partially applied THF equality, because the
+ LEO-II and Satallax parsers complain about not being able to
+ infer the type of "=". *)
+ let val i_T = domain_type T in
+ IAbs ((`I "Y", i_T),
+ IAbs ((`I "Z", i_T),
+ IApp (IApp (IConst (`I tptp_equal, T, []),
+ IConst (`I "Y", i_T, [])),
+ IConst (`I "Z", i_T, []))))
+ end
else
- (* Eta-expand partially applied THF equality, because the
- LEO-II and Satallax parsers complain about not being able to
- infer the type of "=". *)
- let val i_T = domain_type T in
- IAbs ((`I "Y", i_T),
- IAbs ((`I "Z", i_T),
- IApp (IApp (IConst (`I tptp_equal, T, []),
- IConst (`I "Y", i_T, [])),
- IConst (`I "Z", i_T, []))))
- end
- else
- IConst (name, T, [])
- | _ => IConst (name, T, []))
- else
- IConst (proxy_base |>> prefix const_prefix, T, T_args)
+ plain_const
+ | _ => plain_const)
+ else
+ IConst (proxy_base |>> prefix const_prefix, T, T_args)
+ end
| NONE => if s = tptp_choice then tweak_ho_quant tptp_choice T args
else IConst (name, T, T_args))
| intro _ _ (IAbs (bound, tm)) = IAbs (bound, intro false [] tm)
@@ -1251,7 +1278,7 @@
|> transform_elim_prop
|> Object_Logic.atomize_term ctxt
val need_trueprop = (fastype_of t = \<^typ>\<open>bool\<close>)
- val is_ho = is_type_enc_full_higher_order type_enc
+ val is_ho = is_type_enc_higher_order type_enc
in
t |> need_trueprop ? HOLogic.mk_Trueprop
|> (if is_ho then unextensionalize_def
@@ -1264,7 +1291,7 @@
(* Satallax prefers "=" to "<=>" (for definitions) and Metis (CNF) requires "=" for technical
reasons. *)
fun should_use_iff_for_eq CNF _ = false
- | should_use_iff_for_eq (THF _) format = not (is_type_enc_full_higher_order format)
+ | should_use_iff_for_eq (THF _) format = not (is_type_enc_higher_order format)
| should_use_iff_for_eq _ _ = true
fun make_formula ctxt format type_enc iff_for_eq name stature role t =
@@ -1397,7 +1424,7 @@
|> map (rpair {pred_sym = true, min_ary = 0, max_ary = 0, types = [], in_conj = false})) @
([tptp_equal, tptp_old_equal]
|> map (rpair {pred_sym = true, min_ary = 2, max_ary = 2, types = [], in_conj = false}))
- |> not (is_type_enc_full_higher_order type_enc)
+ |> not (is_type_enc_fool type_enc orelse is_type_enc_higher_order type_enc)
? cons (prefixed_predicator_name,
{pred_sym = true, min_ary = 1, max_ary = 1, types = [], in_conj = false})
@@ -1596,8 +1623,9 @@
if is_pred_sym sym_tab s then tm else predicatify completish tm
| _ => predicatify completish tm)
val do_iterm =
- (not (is_type_enc_higher_order type_enc) ? introduce_app_ops)
- #> (not (is_type_enc_full_higher_order type_enc) ? introduce_predicators)
+ (not (is_type_enc_higher_order type_enc) ?
+ (introduce_app_ops #>
+ not (is_type_enc_fool type_enc) ? introduce_predicators))
#> filter_type_args_in_iterm thy ctrss type_enc
in update_iformula (formula_map do_iterm) end
@@ -1679,10 +1707,10 @@
|> class_membs_of_types type_enc add_sorts_on_tvar
|> map (class_atom type_enc)))
#> (case type_enc of
- Native (_, Type_Class_Polymorphic, _) =>
+ Native (_, _, Type_Class_Polymorphic, _) =>
mk_atyquant AForall (map (fn TVar (z as (_, S)) =>
(AType ((tvar_name z, []), []), map (`make_class) (normalize_classes S) )) Ts)
- | Native (_, Raw_Polymorphic _, _) =>
+ | Native (_, _, Raw_Polymorphic _, _) =>
mk_atyquant AForall (map (fn TVar (z as _) => (AType ((tvar_name z, []), []), [])) Ts)
| _ => mk_aquant AForall (map (fn TVar z => (tvar_name z, NONE)) Ts)))
@@ -2101,7 +2129,7 @@
fun decl_lines_of_classes type_enc =
(case type_enc of
- Native (_, Raw_Polymorphic phantoms, _) => map (decl_line_of_class phantoms)
+ Native (_, _, Raw_Polymorphic phantoms, _) => map (decl_line_of_class phantoms)
| _ => K [])
fun sym_decl_table_of_facts thy type_enc sym_tab (conjs, facts, extra_tms) =
@@ -2161,7 +2189,7 @@
? (fold (fold add_fact_syms) [conjs, facts]
#> fold add_iterm_syms extra_tms
#> (case type_enc of
- Native (_, Raw_Polymorphic phantoms, _) =>
+ Native (_, _, Raw_Polymorphic phantoms, _) =>
phantoms = Without_Phantom_Type_Vars ? add_TYPE_const ()
| Native _ => I
| _ => fold add_undefined_const (var_types ())))
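
As a sanity check on the restructured string parser in the atp_problem_generate.ML hunks above, two worked evaluations (not part of the changeset; both encoding names occur in the Sledgehammer configuration below, and the results follow by unfolding native_of_string, for any strictness value):

    type_enc_of_string strictness "mono_native_fool"
      = Native (First_Order, With_FOOL, Mangled_Monomorphic, All_Types)
    type_enc_of_string strictness "mono_native_higher"
      = Native (Higher_Order THF_With_Choice, Without_FOOL, Mangled_Monomorphic, All_Types)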
--- a/src/HOL/Tools/ATP/atp_proof.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Tools/ATP/atp_proof.ML Sun Nov 15 13:06:41 2020 +0000
@@ -58,6 +58,7 @@
val z3_tptpN : string
val zipperpositionN : string
val remote_prefix : string
+ val dummy_tfxN : string
val agsyhol_core_rule : string
val spass_input_rule : string
@@ -120,6 +121,7 @@
val z3_tptpN = "z3_tptp"
val zipperpositionN = "zipperposition"
val remote_prefix = "remote_"
+val dummy_tfxN = "dummy_tfx"
val agsyhol_core_rule = "__agsyhol_core" (* arbitrary *)
val spass_input_rule = "Inp"
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/src/HOL/Tools/ATP/scripts/dummy_atp Sun Nov 15 13:06:41 2020 +0000
@@ -0,0 +1,3 @@
+#!/bin/sh
+
+echo "SZS status Unknown"
--- a/src/HOL/Tools/Sledgehammer/sledgehammer.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer.ML Sun Nov 15 13:06:41 2020 +0000
@@ -83,7 +83,15 @@
| try_methss ress (meths :: methss) =
let
fun mk_step fact_names meths =
- Prove ([], [], ("", 0), goal_t, [], ([], fact_names), meths, "")
+ Prove {
+ qualifiers = [],
+ obtains = [],
+ label = ("", 0),
+ goal = goal_t,
+ subproofs = [],
+ facts = ([], fact_names),
+ proof_methods = meths,
+ comment = ""}
in
(case preplay_isar_step ctxt [] timeout [] (mk_step fact_names meths) of
(res as (meth, Played time)) :: _ =>
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_atp_systems.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_atp_systems.ML Sun Nov 15 13:06:41 2020 +0000
@@ -173,7 +173,7 @@
prem_role = Hypothesis,
best_slices =
(* FUDGE *)
- K [(1.0, (((60, ""), THF (Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), ""))],
+ K [(1.0, (((60, ""), THF (Without_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), ""))],
best_max_mono_iters = default_max_mono_iters - 1 (* FUDGE *),
best_max_new_mono_instances = default_max_new_mono_instances}
@@ -195,7 +195,7 @@
prem_role = Hypothesis,
best_slices = fn _ =>
(* FUDGE *)
- [(1.0, (((100, ""), TFF Polymorphic, "poly_native", liftingN, false), ""))],
+ [(1.0, (((100, ""), TFF (Without_FOOL, Polymorphic), "poly_native", liftingN, false), ""))],
best_max_mono_iters = default_max_mono_iters,
best_max_new_mono_instances = default_max_new_mono_instances}
@@ -293,10 +293,12 @@
best_slices = fn ctxt =>
let
val heuristic = Config.get ctxt e_selection_heuristic
- val ehoh = string_ord (getenv "E_VERSION", "2.3") <> LESS
+ val modern = string_ord (getenv "E_VERSION", "2.3") <> LESS
val (format, enc) =
- if ehoh then (THF (Monomorphic, THF_Predicate_Free), "mono_native_higher")
- else (TFF Monomorphic, "mono_native")
+ if modern then
+ (THF (With_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher_fool")
+ else
+ (TFF (Without_FOOL, Monomorphic), "mono_native")
in
(* FUDGE *)
if heuristic = e_smartN then
@@ -355,7 +357,7 @@
prem_role = Hypothesis,
best_slices =
(* FUDGE *)
- K [(1.0, (((40, ""), THF (Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), ""))],
+ K [(1.0, (((40, ""), THF (Without_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), ""))],
best_max_mono_iters = default_max_mono_iters - 1 (* FUDGE *),
best_max_new_mono_instances = default_max_new_mono_instances}
@@ -376,7 +378,7 @@
prem_role = Hypothesis,
best_slices =
(* FUDGE *)
- K [(1.0, (((150, ""), THF (Polymorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), ""))],
+ K [(1.0, (((150, ""), THF (Without_FOOL, Polymorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), ""))],
best_max_mono_iters = default_max_mono_iters - 1 (* FUDGE *),
best_max_new_mono_instances = default_max_new_mono_instances}
@@ -399,7 +401,7 @@
prem_role = Hypothesis,
best_slices =
(* FUDGE *)
- K [(1.0, (((150, ""), THF (Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), ""))],
+ K [(1.0, (((150, ""), THF (Without_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), ""))],
best_max_mono_iters = default_max_mono_iters - 1 (* FUDGE *),
best_max_new_mono_instances = default_max_new_mono_instances}
@@ -504,9 +506,9 @@
prem_role = Hypothesis,
best_slices = fn ctxt =>
(* FUDGE *)
- [(0.333, (((500, meshN), TFF Monomorphic, "mono_native", combs_or_liftingN, false), sosN)),
- (0.333, (((150, meshN), TFF Monomorphic, "poly_tags??", combs_or_liftingN, false), sosN)),
- (0.334, (((50, meshN), TFF Monomorphic, "mono_native", combs_or_liftingN, false), no_sosN))]
+ [(0.333, (((500, meshN), TFF (Without_FOOL, Monomorphic), "mono_native", combs_or_liftingN, false), sosN)),
+ (0.333, (((150, meshN), TFF (Without_FOOL, Monomorphic), "poly_tags??", combs_or_liftingN, false), sosN)),
+ (0.334, (((50, meshN), TFF (Without_FOOL, Monomorphic), "mono_native", combs_or_liftingN, false), no_sosN))]
|> Config.get ctxt force_sos ? (hd #> apfst (K 1.0) #> single),
best_max_mono_iters = default_max_mono_iters,
best_max_new_mono_instances = 2 * default_max_new_mono_instances (* FUDGE *)}
@@ -525,10 +527,10 @@
prem_role = Hypothesis,
best_slices =
(* FUDGE *)
- K [(0.5, (((250, meshN), TFF Monomorphic, "mono_native", combsN, false), "")),
- (0.25, (((125, mepoN), TFF Monomorphic, "mono_native", combsN, false), "")),
- (0.125, (((62, mashN), TFF Monomorphic, "mono_native", combsN, false), "")),
- (0.125, (((31, meshN), TFF Monomorphic, "mono_native", combsN, false), ""))],
+ K [(0.5, (((250, meshN), TFF (Without_FOOL, Monomorphic), "mono_native", combsN, false), "")),
+ (0.25, (((125, mepoN), TFF (Without_FOOL, Monomorphic), "mono_native", combsN, false), "")),
+ (0.125, (((62, mashN), TFF (Without_FOOL, Monomorphic), "mono_native", combsN, false), "")),
+ (0.125, (((31, meshN), TFF (Without_FOOL, Monomorphic), "mono_native", combsN, false), ""))],
best_max_mono_iters = default_max_mono_iters,
best_max_new_mono_instances = 2 * default_max_new_mono_instances (* FUDGE *)}
@@ -551,9 +553,9 @@
prem_role = Conjecture,
best_slices = fn _ =>
(* FUDGE *)
- [(0.333, (((128, "meshN"), THF (Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), zipperposition_blsimp)),
- (0.333, (((32, "meshN"), THF (Polymorphic, THF_Without_Choice), "poly_native_higher", keep_lamsN, false), zipperposition_s6)),
- (0.334, (((512, "meshN"), THF (Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), zipperposition_cdots))],
+ [(0.333, (((128, "meshN"), THF (Without_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), zipperposition_blsimp)),
+ (0.333, (((32, "meshN"), THF (Without_FOOL, Polymorphic, THF_Without_Choice), "poly_native_higher", keep_lamsN, false), zipperposition_s6)),
+ (0.334, (((512, "meshN"), THF (Without_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), zipperposition_cdots))],
best_max_mono_iters = default_max_mono_iters,
best_max_new_mono_instances = default_max_new_mono_instances}
@@ -636,29 +638,50 @@
val remote_agsyhol =
remotify_atp agsyhol "agsyHOL" ["1.0", "1"]
- (K (((60, ""), THF (Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), "") (* FUDGE *))
+ (K (((60, ""), THF (Without_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), "") (* FUDGE *))
val remote_alt_ergo =
remotify_atp alt_ergo "Alt-Ergo" ["0.95.2"]
- (K (((250, ""), TFF Polymorphic, "poly_native", keep_lamsN, false), "") (* FUDGE *))
+ (K (((250, ""), TFF (Without_FOOL, Polymorphic), "poly_native", keep_lamsN, false), "") (* FUDGE *))
val remote_e =
remotify_atp e "E" ["2.0", "1.9.1", "1.8"]
- (K (((750, ""), TFF Monomorphic, "mono_native", combsN, false), "") (* FUDGE *))
+ (K (((750, ""), TFF (Without_FOOL, Monomorphic), "mono_native", combsN, false), "") (* FUDGE *))
val remote_iprover =
remotify_atp iprover "iProver" ["0.99"]
(K (((150, ""), FOF, "mono_guards??", liftingN, false), "") (* FUDGE *))
val remote_leo2 =
remotify_atp leo2 "LEO-II" ["1.5.0", "1.4", "1.3", "1.2", "1"]
- (K (((40, ""), THF (Monomorphic, THF_Without_Choice), "mono_native_higher", liftingN, false), "") (* FUDGE *))
+ (K (((40, ""), THF (Without_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher", liftingN, false), "") (* FUDGE *))
val remote_leo3 =
remotify_atp leo3 "Leo-III" ["1.1"]
- (K (((150, ""), THF (Polymorphic, THF_Without_Choice), "poly_native_higher", keep_lamsN, false), "") (* FUDGE *))
+ (K (((150, ""), THF (Without_FOOL, Polymorphic, THF_Without_Choice), "poly_native_higher", keep_lamsN, false), "") (* FUDGE *))
val remote_vampire =
remotify_atp vampire "Vampire" ["THF-4.4"]
- (K (((400, ""), THF (Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), remote_vampire_command) (* FUDGE *))
+ (K (((400, ""), THF (Without_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), remote_vampire_command) (* FUDGE *))
val remote_waldmeister = gen_remote_waldmeister waldmeisterN "raw_mono_tags??"
val remote_zipperposition =
remotify_atp zipperposition "Zipperpin" ["2.0"]
- (K (((512, ""), THF (Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), "") (* FUDGE *))
+ (K (((512, ""), THF (Without_FOOL, Monomorphic, THF_Without_Choice), "mono_native_higher", keep_lamsN, false), "") (* FUDGE *))
+
+
+(* Dummy prover *)
+
+fun dummy_config prem_role format type_enc uncurried_aliases : atp_config =
+ {exec = K (["ISABELLE_ATP"], ["scripts/dummy_atp"]),
+ arguments = K (K (K (K (K (K ""))))),
+ proof_delims = [],
+ known_failures = known_szs_status_failures,
+ prem_role = prem_role,
+ best_slices =
+ K [(1.0, (((200, ""), format, type_enc,
+ if is_format_higher_order format then keep_lamsN
+ else combsN, uncurried_aliases), ""))],
+ best_max_mono_iters = default_max_mono_iters,
+ best_max_new_mono_instances = default_max_new_mono_instances}
+
+val dummy_tfx_format = TFF (With_FOOL, Polymorphic)
+
+val dummy_tfx_config = dummy_config Hypothesis dummy_tfx_format "mono_native_fool" false
+val dummy_tfx = (dummy_tfxN, fn () => dummy_tfx_config)
(* Setup *)
@@ -696,7 +719,7 @@
val atps =
[agsyhol, alt_ergo, e, iprover, leo2, leo3, satallax, spass, vampire, z3_tptp, zipperposition,
remote_agsyhol, remote_alt_ergo, remote_e, remote_iprover, remote_leo2, remote_leo3,
- remote_vampire, remote_waldmeister, remote_zipperposition]
+ remote_vampire, remote_waldmeister, remote_zipperposition, dummy_tfx]
val _ = Theory.setup (fold add_atp atps)
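
The dummy_tfx entry registered just above always reports an inconclusive SZS status (via the dummy_atp script), so a run with it stops after the problem has been generated and translated to TFF with FOOL. A hypothetical way to exercise it from a theory; provers and overlord are pre-existing Sledgehammer options, and using overlord to retain the generated TPTP file is an assumed debugging workflow, not something this changeset documents:

    lemma "rev (rev xs) = xs"
      sledgehammer [provers = dummy_tfx, overlord]
      oops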
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_isar.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_isar.ML Sun Nov 15 13:06:41 2020 +0000
@@ -198,16 +198,24 @@
atp_proof
val assms = map_filter (Option.map fst o get_role (curry (op =) Hypothesis)) atp_proof
- fun add_lemma ((l, t), rule) ctxt =
+ fun add_lemma ((label, goal), rule) ctxt =
let
- val (skos, meths) =
- (if is_skolemize_rule rule then (skolems_of ctxt t, skolem_methods)
+ val (skos, proof_methods) =
+ (if is_skolemize_rule rule then (skolems_of ctxt goal, skolem_methods)
else if is_arith_rule rule then ([], arith_methods)
else ([], rewrite_methods))
||> massage_methods
+ val prove = Prove {
+ qualifiers = [],
+ obtains = skos,
+ label = label,
+ goal = goal,
+ subproofs = [],
+ facts = ([], []),
+ proof_methods = proof_methods,
+ comment = ""}
in
- (Prove ([], skos, l, t, [], ([], []), meths, ""),
- ctxt |> not (null skos) ? (Variable.add_fixes (map fst skos) #> snd))
+ (prove, ctxt |> not (null skos) ? (Variable.add_fixes (map fst skos) #> snd))
end
val (lems, _) =
@@ -239,25 +247,30 @@
atp_proof
fun is_referenced_in_step _ (Let _) = false
- | is_referenced_in_step l (Prove (_, _, _, _, subs, (ls, _), _, _)) =
- member (op =) ls l orelse exists (is_referenced_in_proof l) subs
- and is_referenced_in_proof l (Proof (_, _, steps)) =
+ | is_referenced_in_step l (Prove {subproofs, facts = (ls, _), ...}) =
+ member (op =) ls l orelse exists (is_referenced_in_proof l) subproofs
+ and is_referenced_in_proof l (Proof {steps, ...}) =
exists (is_referenced_in_step l) steps
fun insert_lemma_in_step lem
- (step as Prove (qs, fix, l, t, subs, (ls, gs), meths, comment)) =
+ (step as Prove {qualifiers, obtains, label, goal, subproofs, facts = (ls, gs),
+ proof_methods, comment}) =
let val l' = the (label_of_isar_step lem) in
if member (op =) ls l' then
[lem, step]
else
- let val refs = map (is_referenced_in_proof l') subs in
+ let val refs = map (is_referenced_in_proof l') subproofs in
if length (filter I refs) = 1 then
- let
- val subs' = map2 (fn false => I | true => insert_lemma_in_proof lem) refs
- subs
- in
- [Prove (qs, fix, l, t, subs', (ls, gs), meths, comment)]
- end
+ [Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = label,
+ goal = goal,
+ subproofs =
+ map2 (fn false => I | true => insert_lemma_in_proof lem) refs subproofs,
+ facts = (ls, gs),
+ proof_methods = proof_methods,
+ comment = comment}]
else
[lem, step]
end
@@ -268,8 +281,8 @@
insert_lemma_in_step lem step @ steps
else
step :: insert_lemma_in_steps lem steps
- and insert_lemma_in_proof lem (Proof (fix, assms, steps)) =
- Proof (fix, assms, insert_lemma_in_steps lem steps)
+ and insert_lemma_in_proof lem (proof as Proof {steps, ...}) =
+ isar_proof_with_steps proof (insert_lemma_in_steps lem steps)
val rule_of_clause_id = fst o the o Symtab.lookup steps o fst
@@ -295,9 +308,15 @@
accum
|> (if tainted = [] then
(* e.g., trivial, empty proof by Z3 *)
- cons (Prove (if outer then [Show] else [], [], no_label, concl_t, [],
- sort_facts (the_list predecessor, []), massage_methods systematic_methods',
- ""))
+ cons (Prove {
+ qualifiers = if outer then [Show] else [],
+ obtains = [],
+ label = no_label,
+ goal = concl_t,
+ subproofs = [],
+ facts = sort_facts (the_list predecessor, []),
+ proof_methods = massage_methods systematic_methods',
+ comment = ""})
else
I)
|> rev
@@ -318,7 +337,15 @@
else systematic_methods')
|> massage_methods
- fun prove sub facts = Prove (maybe_show outer c, [], l, t, sub, facts, meths, "")
+ fun prove subproofs facts = Prove {
+ qualifiers = maybe_show outer c,
+ obtains = [],
+ label = l,
+ goal = t,
+ subproofs = subproofs,
+ facts = facts,
+ proof_methods = meths,
+ comment = ""}
fun steps_of_rest step = isar_steps outer (SOME l) (step :: accum) infs
in
if is_clause_tainted c then
@@ -327,7 +354,7 @@
if skolem andalso is_clause_tainted g then
let
val skos = skolems_of ctxt (prop_of_clause g)
- val subproof = Proof (skos, [], rev accum)
+ val subproof = Proof {fixes = skos, assumptions = [], steps = rev accum}
in
isar_steps outer (SOME l) [prove [subproof] ([], [])] infs
end
@@ -339,7 +366,15 @@
(if skolem then
(case skolems_of ctxt t of
[] => prove [] deps
- | skos => Prove ([], skos, l, t, [], deps, meths, ""))
+ | skos => Prove {
+ qualifiers = [],
+ obtains = skos,
+ label = l,
+ goal = t,
+ subproofs = [],
+ facts = deps,
+ proof_methods = meths,
+ comment = ""})
else
prove [] deps)
end
@@ -351,16 +386,22 @@
val l = label_of_clause c
val t = prop_of_clause c
val step =
- Prove (maybe_show outer c, [], l, t,
- map isar_case (filter_out (null o snd) cases),
- sort_facts (the_list predecessor, []), massage_methods systematic_methods',
- "")
+ Prove {
+ qualifiers = maybe_show outer c,
+ obtains = [],
+ label = l,
+ goal = t,
+ subproofs = map isar_case (filter_out (null o snd) cases),
+ facts = sort_facts (the_list predecessor, []),
+ proof_methods = massage_methods systematic_methods',
+ comment = ""}
in
isar_steps outer (SOME l) (step :: accum) infs
end
- and isar_proof outer fix assms lems infs =
- Proof (fix, assms,
- fold_rev insert_lemma_in_steps lems (isar_steps outer NONE [] infs))
+ and isar_proof outer fixes assumptions lems infs =
+ let val steps = fold_rev insert_lemma_in_steps lems (isar_steps outer NONE [] infs) in
+ Proof {fixes = fixes, assumptions = assumptions, steps = steps}
+ end
val canonical_isar_proof =
refute_graph
@@ -423,7 +464,8 @@
one_line_proof_text ctxt 0
(if is_less (play_outcome_ord (play_outcome, one_line_play)) then
(case isar_proof of
- Proof (_, _, [Prove (_, _, _, _, _, (_, gfs), meth :: _, _)]) =>
+ Proof {steps = [Prove {facts = (_, gfs), proof_methods = meth :: _, ...}],
+ ...} =>
let
val used_facts' =
map_filter (fn s =>
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_isar_compress.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_isar_compress.ML Sun Nov 15 13:06:41 2020 +0000
@@ -28,10 +28,10 @@
fun collect_steps _ (accum as ([], _)) = accum
| collect_steps [] accum = accum
| collect_steps (step :: steps) accum = collect_steps steps (collect_step step accum)
- and collect_step (step as Prove (_, _, l, _, subproofs, _, _, _)) x =
+ and collect_step (step as Prove {label, subproofs, ...}) x =
(case collect_subproofs subproofs x of
- (accum as ([], _)) => accum
- | accum as (l' :: lbls', accu) => if l = l' then (lbls', step :: accu) else accum)
+ accum as ([], _) => accum
+ | accum as (l' :: lbls', accu) => if label = l' then (lbls', step :: accu) else accum)
| collect_step _ accum = accum
and collect_subproofs [] accum = accum
| collect_subproofs (proof :: subproofs) accum =
@@ -48,15 +48,32 @@
| update_steps steps [] = (steps, [])
| update_steps (step :: steps) updates = update_step step (update_steps steps updates)
and update_step step (steps, []) = (step :: steps, [])
- | update_step (Prove (qs, xs, l, t, subproofs, facts, meths, comment))
- (steps,
- updates as Prove (qs', xs', l', t', subproofs', facts', meths', comment') :: updates') =
+ | update_step (Prove {qualifiers = qs, obtains = xs, label = l, goal = t, subproofs, facts,
+ proof_methods = meths, comment}) (steps, updates as Prove {qualifiers = qs',
+ obtains = xs', label = l', goal = t', subproofs = subproofs', facts = facts',
+ proof_methods = meths', comment = comment'} :: updates') =
(if l = l' then
update_subproofs subproofs' updates'
- |>> (fn subproofs'' => Prove (qs', xs', l', t', subproofs'', facts', meths', comment'))
+ |>> (fn subproofs'' => Prove {
+ qualifiers = qs',
+ obtains = xs',
+ label = l',
+ goal = t',
+ subproofs = subproofs'',
+ facts = facts',
+ proof_methods = meths',
+ comment = comment'})
else
update_subproofs subproofs updates
- |>> (fn subproofs' => Prove (qs, xs, l, t, subproofs', facts, meths, comment)))
+ |>> (fn subproofs' => Prove {
+ qualifiers = qs,
+ obtains = xs,
+ label = l,
+ goal = t,
+ subproofs = subproofs',
+ facts = facts,
+ proof_methods = meths,
+ comment = comment}))
|>> (fn step => step :: steps)
| update_step step (steps, updates) = (step :: steps, updates)
and update_subproofs [] updates = ([], updates)
@@ -64,9 +81,9 @@
| update_subproofs (proof :: subproofs) updates =
update_proof proof (update_subproofs subproofs updates)
and update_proof proof (proofs, []) = (proof :: proofs, [])
- | update_proof (Proof (xs, assms, steps)) (proofs, updates) =
+ | update_proof (proof as Proof {steps, ...}) (proofs, updates) =
let val (steps', updates') = update_steps steps updates in
- (Proof (xs, assms, steps') :: proofs, updates')
+ (isar_proof_with_steps proof steps' :: proofs, updates')
end
in
fst (update_steps steps (rev updates))
@@ -87,16 +104,25 @@
(hopeful @ hopeless, hopeless)
end
-fun merge_steps preplay_data (Prove ([], xs1, l1, _, subproofs1, (lfs1, gfs1), meths1, comment1))
- (Prove (qs2, xs2, l2, t, subproofs2, (lfs2, gfs2), meths2, comment2)) =
+fun merge_steps preplay_data
+ (Prove (p1 as {qualifiers = [], label = l1, goal = _, facts = (lfs1, gfs1), ...}))
+ (Prove (p2 as {qualifiers = qs2, label = l2, goal = t, facts = (lfs2, gfs2), ...})) =
let
- val (meths, hopeless) = merge_methods preplay_data (l1, meths1) (l2, meths2)
+ val (meths, hopeless) =
+ merge_methods preplay_data (l1, #proof_methods p1) (l2, #proof_methods p2)
val lfs = union (op =) lfs1 (remove (op =) l1 lfs2)
val gfs = union (op =) gfs1 gfs2
+ val prove = Prove {
+ qualifiers = qs2,
+ obtains = inter (op =) (Term.add_frees t []) (#obtains p1 @ #obtains p2),
+ label = l2,
+ goal = t,
+ subproofs = #subproofs p1 @ #subproofs p2,
+ facts = sort_facts (lfs, gfs),
+ proof_methods = meths,
+ comment = #comment p1 ^ #comment p2}
in
- (Prove (qs2, inter (op =) (Term.add_frees t []) (xs1 @ xs2), l2, t,
- subproofs1 @ subproofs2, sort_facts (lfs, gfs), meths, comment1 ^ comment2),
- hopeless)
+ (prove, hopeless)
end
val merge_slack_time = seconds 0.01
@@ -126,8 +152,8 @@
val (get_successors, replace_successor) =
let
- fun add_refs (Prove (_, _, l, _, _, (lfs, _), _, _)) =
- fold (fn key => Canonical_Label_Tab.cons_list (key, l)) lfs
+ fun add_refs (Prove {label, facts = (lfs, _), ...}) =
+ fold (fn key => Canonical_Label_Tab.cons_list (key, label)) lfs
| add_refs _ = I
val tab =
@@ -154,39 +180,58 @@
| _ => preplay_timeout)
(* elimination of trivial, one-step subproofs *)
- fun elim_one_subproof time (step as Prove (qs, xs, l, t, _, (lfs, gfs), meths, comment)) subs
+ fun elim_one_subproof time (step as Prove {qualifiers, obtains, label, goal,
+ facts = (lfs, gfs), proof_methods, comment, ...}) subs
nontriv_subs =
if null subs orelse not (compress_further ()) then
- Prove (qs, xs, l, t, List.revAppend (nontriv_subs, subs), (lfs, gfs), meths, comment)
+ Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = label,
+ goal = goal,
+ subproofs = List.revAppend (nontriv_subs, subs),
+ facts = (lfs, gfs),
+ proof_methods = proof_methods,
+ comment = comment}
else
(case subs of
- (sub as Proof (_, assms, [Prove (_, _, l', _, [], (lfs', gfs'), meths', _)])) :: subs =>
+ (sub as Proof {assumptions,
+ steps = [Prove {label = label', subproofs = [], facts = (lfs', gfs'),
+ proof_methods = proof_methods', ...}], ...}) :: subs =>
let
(* merge steps *)
val subs'' = subs @ nontriv_subs
- val lfs'' = union (op =) lfs (subtract (op =) (map fst assms) lfs')
+ val lfs'' = union (op =) lfs (subtract (op =) (map fst assumptions) lfs')
val gfs'' = union (op =) gfs' gfs
- val (meths'' as _ :: _, hopeless) =
- merge_methods (!preplay_data) (l', meths') (l, meths)
- val step'' = Prove (qs, xs, l, t, subs'', (lfs'', gfs''), meths'', comment)
+ val (proof_methods'' as _ :: _, hopeless) =
+ merge_methods (!preplay_data) (label', proof_methods') (label, proof_methods)
+ val step'' = Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = label,
+ goal = goal,
+ subproofs = subs'',
+ facts = (lfs'', gfs''),
+ proof_methods = proof_methods'',
+ comment = comment}
(* check if the modified step can be preplayed fast enough *)
- val timeout = adjust_merge_timeout preplay_timeout (time + reference_time l')
+ val timeout = adjust_merge_timeout preplay_timeout (time + reference_time label')
in
(case preplay_isar_step ctxt [] timeout hopeless step'' of
meths_outcomes as (_, Played time'') :: _ =>
(* "l'" successfully eliminated *)
(decrement_step_count ();
set_preplay_outcomes_of_isar_step ctxt time'' preplay_data step'' meths_outcomes;
- map (replace_successor l' [l]) lfs';
+ map (replace_successor label' [label]) lfs';
elim_one_subproof time'' step'' subs nontriv_subs)
| _ => elim_one_subproof time step subs (sub :: nontriv_subs))
end
| sub :: subs => elim_one_subproof time step subs (sub :: nontriv_subs))
- fun elim_subproofs (step as Prove (_, _, l, _, subproofs, _, _, _)) =
+ fun elim_subproofs (step as Prove {label, subproofs, ...}) =
if exists (null o tl o steps_of_isar_proof) subproofs then
- elim_one_subproof (reference_time l) step subproofs []
+ elim_one_subproof (reference_time label) step subproofs []
else
step
| elim_subproofs step = step
@@ -204,7 +249,7 @@
fun try_eliminate i l labels steps =
let
- val (steps_before, (cand as Prove (_, _, _, _, _, (lfs, _), _, _)) :: steps_after) =
+ val (steps_before, (cand as Prove {facts = (lfs, _), ...}) :: steps_after) =
chop i steps
val succs = collect_successors steps_after labels
val (succs', hopelesses) = split_list (map (merge_steps (!preplay_data) cand) succs)
@@ -257,7 +302,8 @@
|> compression_loop candidates'
end))
- fun add_cand (Prove (_, xs, l, t, _, _, _, _)) = cons (l, (length xs, size_of_term t))
+ fun add_cand (Prove {obtains, label, goal, ...}) =
+ cons (label, (length obtains, size_of_term goal))
| add_cand _ = I
(* the very last step is not a candidate *)
@@ -272,16 +318,28 @@
can be eliminated. In the best case, this once again leads to a proof whose proof steps do
not have subproofs. Applying this approach recursively will result in a flat proof in the
best case. *)
- fun compress_proof (proof as (Proof (xs, assms, steps))) =
- if compress_further () then Proof (xs, assms, compress_steps steps) else proof
+ fun compress_proof (proof as (Proof {steps, ...})) =
+ if compress_further () then
+ isar_proof_with_steps proof (compress_steps steps)
+ else
+ proof
and compress_steps steps =
(* bottom-up: compress innermost proofs first *)
steps
|> map (fn step => step |> compress_further () ? compress_sub_levels)
|> compress_further () ? compress_top_level
- and compress_sub_levels (Prove (qs, xs, l, t, subproofs, facts, meths, comment)) =
+ and compress_sub_levels (Prove {qualifiers, obtains, label, goal, subproofs, facts,
+ proof_methods, comment}) =
(* compress subproofs *)
- Prove (qs, xs, l, t, map compress_proof subproofs, facts, meths, comment)
+ Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = label,
+ goal = goal,
+ subproofs = map compress_proof subproofs,
+ facts = facts,
+ proof_methods = proof_methods,
+ comment = comment}
(* eliminate trivial subproofs *)
|> compress_further () ? elim_subproofs
| compress_sub_levels step = step
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_isar_minimize.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_isar_minimize.ML Sun Nov 15 13:06:41 2020 +0000
@@ -30,19 +30,36 @@
open Sledgehammer_Isar_Preplay
fun keep_fastest_method_of_isar_step preplay_data
- (Prove (qs, xs, l, t, subproofs, facts, meths, comment)) =
- Prove (qs, xs, l, t, subproofs, facts,
- meths |> List.partition (curry (op =) (fastest_method_of_isar_step preplay_data l)) |> op @,
- comment)
+ (Prove {qualifiers, obtains, label, goal, subproofs, facts, proof_methods, comment}) =
+ Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = label,
+ goal = goal,
+ subproofs = subproofs,
+ facts = facts,
+ proof_methods = proof_methods
+ |> List.partition (curry (op =) (fastest_method_of_isar_step preplay_data label))
+ |> op @,
+ comment = comment}
| keep_fastest_method_of_isar_step _ step = step
val slack = seconds 0.025
fun minimized_isar_step ctxt chained time
- (Prove (qs, xs, l, t, subproofs, (lfs0, gfs0), meths as meth :: _, comment)) =
+ (Prove {qualifiers, obtains, label, goal, subproofs, facts = (lfs0, gfs0),
+ proof_methods as meth :: _, comment}) =
let
fun mk_step_lfs_gfs lfs gfs =
- Prove (qs, xs, l, t, subproofs, sort_facts (lfs, gfs), meths, comment)
+ Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = label,
+ goal = goal,
+ subproofs = subproofs,
+ facts = sort_facts (lfs, gfs),
+ proof_methods = proof_methods,
+ comment = comment}
fun minimize_half _ min_facts [] time = (min_facts, time)
| minimize_half mk_step min_facts (fact :: facts) time =
@@ -58,7 +75,7 @@
end
fun minimize_isar_step_dependencies ctxt preplay_data
- (step as Prove (_, _, l, _, _, _, meth :: meths, _)) =
+ (step as Prove {label = l, proof_methods = meth :: meths, ...}) =
(case Lazy.force (preplay_outcome_of_isar_step_for_method (!preplay_data) l meth) of
Played time =>
let
@@ -74,22 +91,32 @@
| _ => step (* don't touch steps that time out or fail *))
| minimize_isar_step_dependencies _ _ step = step
-fun postprocess_isar_proof_remove_show_stuttering (Proof (fix, assms, steps)) =
+fun postprocess_isar_proof_remove_show_stuttering (proof as Proof {steps, ...}) =
let
fun process_steps [] = []
- | process_steps (steps as [Prove ([], [], _, t1, subs1, facts1, meths1, comment1),
- Prove ([Show], [], l2, t2, _, _, _, comment2)]) =
- if t1 aconv t2 then [Prove ([Show], [], l2, t2, subs1, facts1, meths1, comment1 ^ comment2)]
+ | process_steps (steps as [
+ Prove (p1 as {qualifiers = [], obtains = [], goal = t1, ...}),
+ Prove (p2 as {qualifiers = [Show], obtains = [], goal = t2, ...})]) =
+ if t1 aconv t2 then
+ [Prove {
+ qualifiers = [Show],
+ obtains = [],
+ label = #label p2,
+ goal = t2,
+ subproofs = #subproofs p1,
+ facts = #facts p1,
+ proof_methods = #proof_methods p1,
+ comment = #comment p1 ^ #comment p2}]
else steps
| process_steps (step :: steps) = step :: process_steps steps
in
- Proof (fix, assms, process_steps steps)
+ isar_proof_with_steps proof (process_steps steps)
end
fun postprocess_isar_proof_remove_unreferenced_steps postproc_step =
let
- fun process_proof (Proof (fix, assms, steps)) =
- process_steps steps ||> (fn steps => Proof (fix, assms, steps))
+ fun process_proof (proof as Proof {steps, ...}) =
+ process_steps steps ||> isar_proof_with_steps proof
and process_steps [] = ([], [])
| process_steps steps =
(* the last step is always implicitly referenced *)
@@ -105,15 +132,25 @@
else
(used, accu))
and process_used_step step = process_used_step_subproofs (postproc_step step)
- and process_used_step_subproofs (Prove (qs, xs, l, t, subproofs, (lfs, gfs), meths, comment)) =
+ and process_used_step_subproofs (Prove {qualifiers, obtains, label, goal, subproofs,
+ facts = (lfs, gfs), proof_methods, comment}) =
let
- val (used, subproofs) =
+ val (used, subproofs') =
map process_proof subproofs
|> split_list
|>> Ord_List.unions label_ord
|>> fold (Ord_List.insert label_ord) lfs
+ val prove = Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = label,
+ goal = goal,
+ subproofs = subproofs',
+ facts = (lfs, gfs),
+ proof_methods = proof_methods,
+ comment = comment}
in
- (used, Prove (qs, xs, l, t, subproofs, (lfs, gfs), meths, comment))
+ (used, prove)
end
in
snd o process_proof
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_isar_preplay.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_isar_preplay.ML Sun Nov 15 13:06:41 2020 +0000
@@ -77,10 +77,10 @@
val enrich_with_assms = fold (uncurry enrich_with_fact)
- fun enrich_with_proof (Proof (_, assms, isar_steps)) =
- enrich_with_assms assms #> fold enrich_with_step isar_steps
- and enrich_with_step (Prove (_, _, l, t, subproofs, _, _, _)) =
- enrich_with_fact l t #> fold enrich_with_proof subproofs
+ fun enrich_with_proof (Proof {assumptions, steps = isar_steps, ...}) =
+ enrich_with_assms assumptions #> fold enrich_with_step isar_steps
+ and enrich_with_step (Prove {label, goal, subproofs, ...}) =
+ enrich_with_fact label goal #> fold enrich_with_proof subproofs
| enrich_with_step _ = I
in
enrich_with_proof proof ctxt
@@ -109,31 +109,31 @@
|> apply2 (maps (thms_of_name ctxt)))
handle ERROR msg => error ("preplay error: " ^ msg)
-fun thm_of_proof ctxt (Proof (fixed_frees, assms, steps)) =
+fun thm_of_proof ctxt (Proof {fixes, assumptions, steps}) =
let
val thy = Proof_Context.theory_of ctxt
val concl =
(case try List.last steps of
- SOME (Prove (_, [], _, t, _, _, _, _)) => t
+ SOME (Prove {obtains = [], goal, ...}) => goal
| _ => raise Fail "preplay error: malformed subproof")
val var_idx = maxidx_of_term concl + 1
fun var_of_free (x, T) = Var ((x, var_idx), T)
- val subst = map (`var_of_free #> swap #> apfst Free) fixed_frees
+ val subst = map (`var_of_free #> swap #> apfst Free) fixes
in
- Logic.list_implies (assms |> map snd, concl)
+ Logic.list_implies (assumptions |> map snd, concl)
|> subst_free subst
|> Skip_Proof.make_thm thy
end
(* may throw exceptions *)
fun raw_preplay_step_for_method ctxt chained timeout meth
- (Prove (_, xs, _, t, subproofs, facts, _, _)) =
+ (Prove {obtains = xs, goal, subproofs, facts, ...}) =
let
val goal =
(case xs of
- [] => t
+ [] => goal
| _ =>
(* proof obligation: !!thesis. (!!x1...xn. t ==> thesis) ==> thesis
(cf. "~~/src/Pure/Isar/obtain.ML") *)
@@ -144,7 +144,7 @@
val thesis_prop = HOLogic.mk_Trueprop thesis
(* !!x1...xn. t ==> thesis *)
- val inner_prop = fold_rev Logic.all frees (Logic.mk_implies (t, thesis_prop))
+ val inner_prop = fold_rev Logic.all frees (Logic.mk_implies (goal, thesis_prop))
in
(* !!thesis. (!!x1...xn. t ==> thesis) ==> thesis *)
Logic.all thesis (Logic.mk_implies (inner_prop, thesis_prop))
@@ -195,13 +195,13 @@
Play_Timed_Out (Time.+ (apply2 time_of_play (play1, play2)))
fun set_preplay_outcomes_of_isar_step ctxt timeout preplay_data
- (step as Prove (_, _, l, _, _, _, meths, _)) meths_outcomes0 =
+ (step as Prove {label = l, proof_methods, ...}) meths_outcomes0 =
let
fun lazy_preplay meth =
Lazy.lazy (fn () => preplay_isar_step_for_method ctxt [] timeout meth step)
val meths_outcomes = meths_outcomes0
|> map (apsnd Lazy.value)
- |> fold (fn meth => AList.default (op =) (meth, lazy_preplay meth)) meths
+ |> fold (fn meth => AList.default (op =) (meth, lazy_preplay meth)) proof_methods
in
preplay_data := Canonical_Label_Tab.update (l, fold (AList.update (op =)) meths_outcomes [])
(!preplay_data)
@@ -240,11 +240,11 @@
#> get_best_method_outcome Lazy.force
#> fst
-fun forced_outcome_of_step preplay_data (Prove (_, _, l, _, _, _, meths, _)) =
- Lazy.force (preplay_outcome_of_isar_step_for_method preplay_data l (hd meths))
+fun forced_outcome_of_step preplay_data (Prove {label, proof_methods, ...}) =
+ Lazy.force (preplay_outcome_of_isar_step_for_method preplay_data label (hd proof_methods))
| forced_outcome_of_step _ _ = Played Time.zeroTime
-fun preplay_outcome_of_isar_proof preplay_data (Proof (_, _, steps)) =
+fun preplay_outcome_of_isar_proof preplay_data (Proof {steps, ...}) =
fold_isar_steps (add_preplay_outcomes o forced_outcome_of_step preplay_data) steps
(Played Time.zeroTime)
--- a/src/HOL/Tools/Sledgehammer/sledgehammer_isar_proof.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/HOL/Tools/Sledgehammer/sledgehammer_isar_proof.ML Sun Nov 15 13:06:41 2020 +0000
@@ -16,11 +16,26 @@
datatype isar_qualifier = Show | Then
datatype isar_proof =
- Proof of (string * typ) list * (label * term) list * isar_step list
+ Proof of {
+ fixes : (string * typ) list,
+ assumptions : (label * term) list,
+ steps : isar_step list
+ }
and isar_step =
- Let of term * term |
- Prove of isar_qualifier list * (string * typ) list * label * term * isar_proof list
- * facts * proof_method list * string
+ Let of {
+ lhs : term,
+ rhs : term
+ } |
+ Prove of {
+ qualifiers : isar_qualifier list,
+ obtains : (string * typ) list,
+ label : label,
+ goal : term,
+ subproofs : isar_proof list,
+ facts : facts,
+ proof_methods : proof_method list,
+ comment : string
+ }
val no_label : label
@@ -29,6 +44,8 @@
val sort_facts : facts -> facts
val steps_of_isar_proof : isar_proof -> isar_step list
+ val isar_proof_with_steps : isar_proof -> isar_step list -> isar_proof
+
val label_of_isar_step : isar_step -> label option
val facts_of_isar_step : isar_step -> facts
val proof_methods_of_isar_step : isar_step -> proof_method list
@@ -68,11 +85,26 @@
datatype isar_qualifier = Show | Then
datatype isar_proof =
- Proof of (string * typ) list * (label * term) list * isar_step list
+ Proof of {
+ fixes : (string * typ) list,
+ assumptions : (label * term) list,
+ steps : isar_step list
+ }
and isar_step =
- Let of term * term |
- Prove of isar_qualifier list * (string * typ) list * label * term * isar_proof list
- * facts * proof_method list * string
+ Let of {
+ lhs : term,
+ rhs : term
+ } |
+ Prove of {
+ qualifiers : isar_qualifier list,
+ obtains : (string * typ) list,
+ label : label,
+ goal : term,
+ subproofs : isar_proof list,
+ facts : facts,
+ proof_methods : proof_method list,
+ comment : string
+ }
val no_label = ("", ~1)
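
The core of this refactoring: the positional 8-tuple behind Prove becomes a
record, so pattern matches can bind just the fields they use and stay valid when
fields are added. A rough Scala analogue of the same move (all names below are
invented for illustration, not Isabelle API):

    sealed trait IsarStep
    // before: positional fields, every match site must spell out all of them
    // after: named fields, match sites bind only what they need
    case class Prove(
      qualifiers: List[String],
      label: String,
      goal: String,
      comment: String) extends IsarStep

    def labelOf(step: IsarStep): Option[String] =
      step match {
        case p: Prove => Some(p.label)  // unaffected if Prove gains fields
        case _ => None
      }
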
@@ -94,18 +126,20 @@
(Some preplaying proof methods, e.g. "blast", react poorly to fact reorderings.) *)
fun sort_facts (lfs, gfs) = (sort (label_ord o swap) lfs, sort string_ord gfs)
-fun steps_of_isar_proof (Proof (_, _, steps)) = steps
+fun steps_of_isar_proof (Proof {steps, ...}) = steps
+fun isar_proof_with_steps (Proof {fixes, assumptions, ...}) steps =
+ Proof {fixes = fixes, assumptions = assumptions, steps = steps}
-fun label_of_isar_step (Prove (_, _, l, _, _, _, _, _)) = SOME l
+fun label_of_isar_step (Prove {label, ...}) = SOME label
| label_of_isar_step _ = NONE
-fun subproofs_of_isar_step (Prove (_, _, _, _, subs, _, _, _)) = subs
+fun subproofs_of_isar_step (Prove {subproofs, ...}) = subproofs
| subproofs_of_isar_step _ = []
-fun facts_of_isar_step (Prove (_, _, _, _, _, facts, _, _)) = facts
+fun facts_of_isar_step (Prove {facts, ...}) = facts
| facts_of_isar_step _ = ([], [])
-fun proof_methods_of_isar_step (Prove (_, _, _, _, _, _, meths, _)) = meths
+fun proof_methods_of_isar_step (Prove {proof_methods, ...}) = proof_methods
| proof_methods_of_isar_step _ = []
fun fold_isar_step f step =
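
isar_proof_with_steps is a functional-update helper: it rebuilds a Proof with
new steps while carrying fixes and assumptions over unchanged, which is what
almost all of the rewritten call sites need. In Scala terms this is plain
case-class copy (a sketch with hypothetical types):

    case class Proof(
      fixes: List[String],
      assumptions: List[(String, String)],
      steps: List[String])

    def withSteps(proof: Proof, steps: List[String]): Proof =
      proof.copy(steps = steps)  // fixes and assumptions carried over
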
@@ -114,10 +148,19 @@
fun map_isar_steps f =
let
- fun map_proof (Proof (fix, assms, steps)) = Proof (fix, assms, map map_step steps)
+ fun map_proof (proof as Proof {steps, ...}) = isar_proof_with_steps proof (map map_step steps)
and map_step (step as Let _) = f step
- | map_step (Prove (qs, xs, l, t, subs, facts, meths, comment)) =
- f (Prove (qs, xs, l, t, map map_proof subs, facts, meths, comment))
+ | map_step (Prove {qualifiers, obtains, label, goal, subproofs, facts, proof_methods,
+ comment}) =
+ f (Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = label,
+ goal = goal,
+ subproofs = map map_proof subproofs,
+ facts = facts,
+ proof_methods = proof_methods,
+ comment = comment})
in map_proof end
val add_isar_steps = fold_isar_steps (fn Prove _ => Integer.add 1 | _ => I)
@@ -129,8 +172,17 @@
type key = label
val ord = canonical_label_ord)
-fun comment_isar_step comment_of (Prove (qs, xs, l, t, subs, facts, meths, _)) =
- Prove (qs, xs, l, t, subs, facts, meths, comment_of l meths)
+fun comment_isar_step comment_of (Prove {qualifiers, obtains, label, goal, subproofs, facts,
+ proof_methods, ...}) =
+ Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = label,
+ goal = goal,
+ subproofs = subproofs,
+ facts = facts,
+ proof_methods = proof_methods,
+ comment = comment_of label proof_methods}
| comment_isar_step _ step = step
fun comment_isar_proof comment_of = map_isar_steps (comment_isar_step comment_of)
@@ -138,16 +190,25 @@
| chain_qs_lfs (SOME l0) lfs =
if member (op =) lfs l0 then ([Then], remove (op =) l0 lfs) else ([], lfs)
-fun chain_isar_step lbl (Prove (qs, xs, l, t, subs, (lfs, gfs), meths, comment)) =
+fun chain_isar_step lbl (Prove {qualifiers, obtains, label, goal, subproofs,
+ facts = (lfs, gfs), proof_methods, comment}) =
let val (qs', lfs) = chain_qs_lfs lbl lfs in
- Prove (qs' @ qs, xs, l, t, map chain_isar_proof subs, (lfs, gfs), meths, comment)
+ Prove {
+ qualifiers = qs' @ qualifiers,
+ obtains = obtains,
+ label = label,
+ goal = goal,
+ subproofs = map chain_isar_proof subproofs,
+ facts = (lfs, gfs),
+ proof_methods = proof_methods,
+ comment = comment}
end
| chain_isar_step _ step = step
and chain_isar_steps _ [] = []
| chain_isar_steps prev (i :: is) =
chain_isar_step prev i :: chain_isar_steps (label_of_isar_step i) is
-and chain_isar_proof (Proof (fix, assms, steps)) =
- Proof (fix, assms, chain_isar_steps (try (List.last #> fst) assms) steps)
+and chain_isar_proof (proof as Proof {assumptions, steps, ...}) =
+ isar_proof_with_steps proof (chain_isar_steps (try (List.last #> fst) assumptions) steps)
fun kill_useless_labels_in_isar_proof proof =
let
@@ -156,11 +217,25 @@
fun kill_label l = if member (op =) used_ls l then l else no_label
- fun kill_step (Prove (qs, xs, l, t, subs, facts, meths, comment)) =
- Prove (qs, xs, kill_label l, t, map kill_proof subs, facts, meths, comment)
+ fun kill_step (Prove {qualifiers, obtains, label, goal, subproofs, facts, proof_methods,
+ comment}) =
+ Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = kill_label label,
+ goal = goal,
+ subproofs = map kill_proof subproofs,
+ facts = facts,
+ proof_methods = proof_methods,
+ comment = comment}
| kill_step step = step
- and kill_proof (Proof (fix, assms, steps)) =
- Proof (fix, map (apfst kill_label) assms, map kill_step steps)
+ and kill_proof (Proof {fixes, assumptions, steps}) =
+ let
+ val assumptions' = map (apfst kill_label) assumptions
+ val steps' = map kill_step steps
+ in
+ Proof {fixes = fixes, assumptions = assumptions', steps = steps'}
+ end
in
kill_proof proof
end
@@ -170,21 +245,31 @@
fun next_label l (next, subst) =
let val l' = ("", next) in (l', (next + 1, (l, l') :: subst)) end
- fun relabel_step (Prove (qs, fix, l, t, subs, (lfs, gfs), meths, comment))
- (accum as (_, subst)) =
+ fun relabel_step (Prove {qualifiers, obtains, label, goal, subproofs, facts = (lfs, gfs),
+ proof_methods, comment}) (accum as (_, subst)) =
let
val lfs' = maps (the_list o AList.lookup (op =) subst) lfs
val ((subs', l'), accum') = accum
- |> fold_map relabel_proof subs
- ||>> next_label l
+ |> fold_map relabel_proof subproofs
+ ||>> next_label label
+ val prove = Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = l',
+ goal = goal,
+ subproofs = subs',
+ facts = (lfs', gfs),
+ proof_methods = proof_methods,
+ comment = comment}
in
- (Prove (qs, fix, l', t, subs', (lfs', gfs), meths, comment), accum')
+ (prove, accum')
end
| relabel_step step accum = (step, accum)
- and relabel_proof (Proof (fix, assms, steps)) =
- fold_map (fn (l, t) => next_label l #> apfst (rpair t)) assms
+ and relabel_proof (Proof {fixes, assumptions, steps}) =
+ fold_map (fn (l, t) => next_label l #> apfst (rpair t)) assumptions
##>> fold_map relabel_step steps
- #>> (fn (assms, steps) => Proof (fix, assms, steps))
+ #>> (fn (assumptions', steps') =>
+ Proof {fixes = fixes, assumptions = assumptions', steps = steps'})
in
fst (relabel_proof proof (0, []))
end
@@ -199,21 +284,29 @@
(l', (next + 1, (l, l') :: subst))
end
- fun relabel_step depth (Prove (qs, xs, l, t, subs, (lfs, gfs), meths, comment))
- (accum as (_, subst)) =
+ fun relabel_step depth (Prove {qualifiers, obtains, label, goal,
+ subproofs, facts = (lfs, gfs), proof_methods, comment}) (accum as (_, subst)) =
let
- val lfs' = maps (the_list o AList.lookup (op =) subst) lfs
- val (l', accum' as (_, subst')) = next_label depth have_prefix l accum
- val subs' = map (relabel_proof subst' (depth + 1)) subs
+ val (l', accum' as (_, subst')) = next_label depth have_prefix label accum
+ val prove = Prove {
+ qualifiers = qualifiers,
+ obtains = obtains,
+ label = l',
+ goal = goal,
+ subproofs = map (relabel_proof subst' (depth + 1)) subproofs,
+ facts = (maps (the_list o AList.lookup (op =) subst) lfs, gfs),
+ proof_methods = proof_methods,
+ comment = comment}
in
- (Prove (qs, xs, l', t, subs', (lfs', gfs), meths, comment), accum')
+ (prove, accum')
end
| relabel_step _ step accum = (step, accum)
- and relabel_proof subst depth (Proof (fix, assms, steps)) =
+ and relabel_proof subst depth (Proof {fixes, assumptions, steps}) =
(1, subst)
- |> fold_map (fn (l, t) => next_label depth assume_prefix l #> apfst (rpair t)) assms
+ |> fold_map (fn (l, t) => next_label depth assume_prefix l #> apfst (rpair t)) assumptions
||>> fold_map (relabel_step depth) steps
- |> (fn ((assms, steps), _) => Proof (fix, assms, steps))
+ |> (fn ((assumptions', steps'), _) =>
+ Proof {fixes = fixes, assumptions = assumptions', steps = steps'})
in
relabel_proof [] 0
end
@@ -232,25 +325,34 @@
(ys, ((map Free xs ~~ map Free ys) @ subst, ctxt'))
end
- fun rationalize_step (Prove (qs, xs, l, t, subs, facts, meths, comment)) subst_ctxt =
+ fun rationalize_step (Prove {qualifiers, obtains, label, goal, subproofs, facts, proof_methods,
+ comment}) subst_ctxt =
let
- val (xs', subst_ctxt' as (subst', _)) = rename_obtains xs subst_ctxt
- val t' = subst_atomic subst' t
- val subs' = map (rationalize_proof false subst_ctxt') subs
+ val (obtains', subst_ctxt' as (subst', _)) = rename_obtains obtains subst_ctxt
+ val prove = Prove {
+ qualifiers = qualifiers,
+ obtains = obtains',
+ label = label,
+ goal = subst_atomic subst' goal,
+ subproofs = map (rationalize_proof false subst_ctxt') subproofs,
+ facts = facts,
+ proof_methods = proof_methods,
+ comment = comment}
in
- (Prove (qs, xs', l, t', subs', facts, meths, comment), subst_ctxt')
+ (prove, subst_ctxt')
end
- and rationalize_proof outer (subst_ctxt as (subst, ctxt)) (Proof (fix, assms, steps)) =
+ and rationalize_proof outer (subst_ctxt as (subst, ctxt)) (Proof {fixes, assumptions, steps}) =
let
- val (fix', subst_ctxt' as (subst', _)) =
+ val (fixes', subst_ctxt' as (subst', _)) =
if outer then
(* last call: eliminate any remaining skolem names (from SMT proofs) *)
- (fix, (subst @ map (fn x => (Free (apfst Name.skolem x), Free x)) fix, ctxt))
+ (fixes, (subst @ map (fn x => (Free (apfst Name.skolem x), Free x)) fixes, ctxt))
else
- rename_obtains fix subst_ctxt
+ rename_obtains fixes subst_ctxt
+ val assumptions' = map (apsnd (subst_atomic subst')) assumptions
+ val steps' = fst (fold_map rationalize_step steps subst_ctxt')
in
- Proof (fix', map (apsnd (subst_atomic subst')) assms,
- fst (fold_map rationalize_step steps subst_ctxt'))
+ Proof {fixes = fixes', assumptions = assumptions', steps = steps'}
end
in
rationalize_proof true ([], ctxt)
@@ -348,26 +450,29 @@
space_implode (of_moreover ind) (map (of_subproof ind ctxt) subs)
and add_step_pre ind qs subs (s, ctxt) =
(s ^ of_subproofs ind ctxt qs subs ^ of_indent ind, ctxt)
- and add_step ind (Let (t1, t2)) =
+ and add_step ind (Let {lhs = t1, rhs = t2}) =
add_str (of_indent ind ^ "let ") #> add_term t1 #> add_str " = " #> add_term t2
#> add_str "\n"
- | add_step ind (Prove (qs, xs, l, t, subs, (ls, ss), meth :: _, comment)) =
- add_step_pre ind qs subs
- #> (case xs of
- [] => add_str (of_have qs (length subs) ^ " ")
- | _ =>
- add_str (of_obtain qs (length subs) ^ " ")
- #> add_frees xs
- #> add_str (" where\n" ^ of_indent (ind + 1)))
- #> add_str (of_label l)
- #> add_term (if is_thesis ctxt t then HOLogic.mk_Trueprop (Var thesis_var) else t)
- #> add_str ("\n" ^ of_indent (ind + 1) ^ of_method ls ss meth ^
- (if comment = "" then "" else " (* " ^ comment ^ " *)") ^ "\n")
+ | add_step ind (Prove {qualifiers, obtains, label, goal, subproofs, facts = (ls, ss),
+ proof_methods = meth :: _, comment}) =
+ let val num_subproofs = length subproofs in
+ add_step_pre ind qualifiers subproofs
+ #> (case obtains of
+ [] => add_str (of_have qualifiers num_subproofs ^ " ")
+ | _ =>
+ add_str (of_obtain qualifiers num_subproofs ^ " ")
+ #> add_frees obtains
+ #> add_str (" where\n" ^ of_indent (ind + 1)))
+ #> add_str (of_label label)
+ #> add_term (if is_thesis ctxt goal then HOLogic.mk_Trueprop (Var thesis_var) else goal)
+ #> add_str ("\n" ^ of_indent (ind + 1) ^ of_method ls ss meth ^
+ (if comment = "" then "" else " (* " ^ comment ^ " *)") ^ "\n")
+ end
and add_steps ind = fold (add_step ind)
- and of_proof ind ctxt (Proof (xs, assms, steps)) =
+ and of_proof ind ctxt (Proof {fixes, assumptions, steps}) =
("", ctxt)
- |> add_fix ind xs
- |> fold (add_assm ind) assms
+ |> add_fix ind fixes
+ |> fold (add_assm ind) assumptions
|> add_steps ind steps
|> fst
in
--- a/src/Pure/Admin/build_doc.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Admin/build_doc.scala Sun Nov 15 13:06:41 2020 +0000
@@ -7,9 +7,6 @@
package isabelle
-import java.io.{File => JFile}
-
-
object Build_Doc
{
/* build_doc */
@@ -19,53 +16,50 @@
progress: Progress = new Progress,
all_docs: Boolean = false,
max_jobs: Int = 1,
- docs: List[String] = Nil): Int =
+ docs: List[String] = Nil)
{
+ val store = Sessions.store(options)
+
val sessions_structure = Sessions.load_structure(options)
- val selection =
+ val selected =
for {
- name <- sessions_structure.build_topological_order
- info = sessions_structure(name)
+ session <- sessions_structure.build_topological_order
+ info = sessions_structure(session)
if info.groups.contains("doc")
doc = info.options.string("document_variants")
if all_docs || docs.contains(doc)
- } yield (doc, name)
+ } yield (doc, session)
- val selected_docs = selection.map(_._1)
- val sessions = selection.map(_._2)
+ val documents = selected.map(_._1)
+ val selection = Sessions.Selection(sessions = selected.map(_._2))
- docs.filter(doc => !selected_docs.contains(doc)) match {
+ docs.filter(doc => !documents.contains(doc)) match {
case Nil =>
case bad => error("No documentation session for " + commas_quote(bad))
}
- progress.echo("Build started for documentation " + commas_quote(selected_docs))
+ progress.echo("Build started for sessions " + commas_quote(selection.sessions))
+ Build.build(options, selection = selection, progress = progress, max_jobs = max_jobs).ok ||
+ error("Build failed")
+
+ progress.echo("Build started for documentation " + commas_quote(documents))
+ val doc_options =
+ options + "document=pdf" + "document_output=~~/doc" + "document_output_sources=false"
+ val deps = Sessions.load_structure(doc_options).selection_deps(selection)
- val res1 =
- Build.build(options,
- selection = Sessions.Selection(requirements = true, sessions = sessions),
- progress = progress, build_heap = true, max_jobs = max_jobs)
- if (res1.ok) {
- Isabelle_System.with_tmp_dir("document_output")(output =>
- {
- val res2 =
- Build.build(
- options.bool.update("browser_info", false).
- string.update("document", "pdf").
- string.update("document_output", output.implode),
- selection = Sessions.Selection(sessions = sessions),
- progress = progress, clean_build = true, max_jobs = max_jobs)
- if (res2.ok) {
- val doc_dir = Path.explode("~~/doc")
- for (doc <- selected_docs) {
- val name = Path.explode(doc + ".pdf")
- File.copy(output + name, doc_dir + name)
- }
- }
- res2.rc
- })
- }
- else res1.rc
+ val errs =
+ Par_List.map((doc_session: (String, String)) =>
+ try {
+ Present.build_documents(doc_session._2, deps, store, progress = progress)
+ None
+ }
+ catch {
+ case Exn.Interrupt.ERROR(msg) =>
+ val sep = if (msg.contains('\n')) "\n" else " "
+ Some("Documentation " + doc_session._1 + " failed:" + sep + msg)
+ }, selected).flatten
+
+ if (errs.nonEmpty) error(cat_lines(errs))
}
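
build_doc now builds the selected sessions once up front and then formats each
document in parallel, collecting failures instead of aborting on the first one.
The collection idiom, reduced to a sketch (jobs is a placeholder list of
thunks):

    val errs =
      Par_List.map((job: () => Unit) =>
        try { job(); None }
        catch { case Exn.Interrupt.ERROR(msg) => Some(msg) }, jobs).flatten
    if (errs.nonEmpty) error(cat_lines(errs))
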
@@ -99,10 +93,9 @@
if (!all_docs && docs.isEmpty) getopts.usage()
val progress = new Console_Progress()
- val rc =
- progress.interrupt_handler {
- build_doc(options, progress, all_docs, max_jobs, docs)
- }
- sys.exit(rc)
+
+ progress.interrupt_handler {
+ build_doc(options, progress, all_docs, max_jobs, docs)
+ }
})
}
--- a/src/Pure/Admin/build_history.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Admin/build_history.scala Sun Nov 15 13:06:41 2020 +0000
@@ -358,8 +358,8 @@
else None
})
- build_out_progress.echo("Writing log file " + log_path.ext("xz") + " ...")
- File.write_xz(log_path.ext("xz"),
+ build_out_progress.echo("Writing log file " + log_path.xz + " ...")
+ File.write_xz(log_path.xz,
terminate_lines(
Protocol.Meta_Info_Marker(meta_info) :: build_result.out_lines :::
session_build_info :::
@@ -377,7 +377,7 @@
first_build = false
- (build_result, log_path.ext("xz"))
+ (build_result, log_path.xz)
}
}
--- a/src/Pure/Admin/jenkins.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Admin/jenkins.scala Sun Nov 15 13:06:41 2020 +0000
@@ -99,7 +99,7 @@
def download_log(store: Sessions.Store, dir: Path, progress: Progress = new Progress)
{
val log_dir = dir + Build_Log.log_subdir(date)
- val log_path = log_dir + log_filename.ext("xz")
+ val log_path = log_dir + log_filename.xz
if (!log_path.is_file) {
progress.echo(log_path.expand.implode)
--- a/src/Pure/General/exn.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/General/exn.scala Sun Nov 15 13:06:41 2020 +0000
@@ -93,6 +93,12 @@
object Interrupt
{
+ object ERROR
+ {
+ def unapply(exn: Throwable): Option[String] =
+ if (is_interrupt(exn)) Some(message(exn)) else user_message(exn)
+ }
+
def apply(): Throwable = new InterruptedException("Interrupt")
def unapply(exn: Throwable): Boolean = is_interrupt(exn)
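
The new Exn.Interrupt.ERROR extractor matches both genuine interrupts and
ERROR-style user exceptions and yields the message in either case; it is what
lets build_doc.scala above handle the two failure modes in a single catch arm.
Minimal usage sketch (run_job and report are placeholders):

    try { run_job() }
    catch { case Exn.Interrupt.ERROR(msg) => report("job failed: " + msg) }
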
--- a/src/Pure/General/file.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/General/file.scala Sun Nov 15 13:06:41 2020 +0000
@@ -323,6 +323,14 @@
def copy(path1: Path, path2: Path): Unit = copy(path1.file, path2.file)
+ def copy_base(base_dir: Path, src0: Path, target_dir: Path)
+ {
+ val src = src0.expand
+ val src_dir = src.dir
+ if (!src.starts_basic) error("Illegal path specification " + src + " beyond base directory")
+ copy(base_dir + src, Isabelle_System.make_directory(target_dir + src_dir))
+ }
+
/* move */
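
copy_base copies base_dir + src into target_dir while preserving the relative
directory of src, and rejects specifications that would escape the base
directory (checked via the new Path.starts_basic below). A sketch with
hypothetical directories:

    // copies base/document/root.tex to target/document/root.tex
    File.copy_base(Path.explode("base"), Path.explode("document/root.tex"),
      Path.explode("target"))
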
--- a/src/Pure/General/path.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/General/path.scala Sun Nov 15 13:06:41 2020 +0000
@@ -156,6 +156,7 @@
def is_absolute: Boolean = elems.nonEmpty && elems.last.isInstanceOf[Path.Root]
def is_root: Boolean = elems match { case List(Path.Root(_)) => true case _ => false }
def is_basic: Boolean = elems match { case List(Path.Basic(_)) => true case _ => false }
+ def starts_basic: Boolean = elems.nonEmpty && elems.last.isInstanceOf[Path.Basic]
def +(other: Path): Path = new Path((other.elems :\ elems)(Path.apply_elem))
@@ -192,6 +193,10 @@
prfx + Path.basic(s + "." + e)
}
+ def xz: Path = ext("xz")
+ def tex: Path = ext("tex")
+ def pdf: Path = ext("pdf")
+
def backup: Path =
{
val (prfx, s) = split_path
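
The xz/tex/pdf shorthands are plain delegations to ext, so log_path.xz above is
the same path as the former log_path.ext("xz"); the build_history.scala and
jenkins.scala hunks are mechanical rewrites to the new form. Sketch:

    val root = Path.basic("root")   // hypothetical path
    val out = root.pdf              // same as root.ext("pdf"), i.e. "root.pdf"
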
--- a/src/Pure/Isar/bundle.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Isar/bundle.ML Sun Nov 15 13:06:41 2020 +0000
@@ -6,8 +6,9 @@
signature BUNDLE =
sig
+ type name = string
val check: Proof.context -> xstring * Position.T -> string
- val get: Proof.context -> string -> Attrib.thms
+ val get: Proof.context -> name -> Attrib.thms
val read: Proof.context -> xstring * Position.T -> Attrib.thms
val print_bundles: bool -> Proof.context -> unit
val bundle: binding * Attrib.thms ->
@@ -15,13 +16,13 @@
val bundle_cmd: binding * (Facts.ref * Token.src list) list ->
(binding * string option * mixfix) list -> local_theory -> local_theory
val init: binding -> theory -> local_theory
- val unbundle: string list -> local_theory -> local_theory
+ val unbundle: name list -> local_theory -> local_theory
val unbundle_cmd: (xstring * Position.T) list -> local_theory -> local_theory
- val includes: string list -> Proof.context -> Proof.context
+ val includes: name list -> Proof.context -> Proof.context
val includes_cmd: (xstring * Position.T) list -> Proof.context -> Proof.context
- val include_: string list -> Proof.state -> Proof.state
+ val include_: name list -> Proof.state -> Proof.state
val include_cmd: (xstring * Position.T) list -> Proof.state -> Proof.state
- val including: string list -> Proof.state -> Proof.state
+ val including: name list -> Proof.state -> Proof.state
val including_cmd: (xstring * Position.T) list -> Proof.state -> Proof.state
end;
@@ -42,6 +43,8 @@
(* bundles *)
+type name = string
+
val get_all_generic = #1 o Data.get;
val get_all = get_all_generic o Context.Proof;
--- a/src/Pure/Isar/class_declaration.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Isar/class_declaration.ML Sun Nov 15 13:06:41 2020 +0000
@@ -6,7 +6,7 @@
signature CLASS_DECLARATION =
sig
- val class: binding -> string list -> class list ->
+ val class: binding -> Bundle.name list -> class list ->
Element.context_i list -> theory -> string * local_theory
val class_cmd: binding -> (xstring * Position.T) list -> xstring list ->
Element.context list -> theory -> string * local_theory
--- a/src/Pure/Isar/expression.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Isar/expression.ML Sun Nov 15 13:06:41 2020 +0000
@@ -32,7 +32,7 @@
val read_declaration: expression -> (Proof.context -> Proof.context) -> Element.context list ->
Proof.context -> (((string * typ) * mixfix) list * (string * morphism) list
* Element.context_i list * Proof.context) * ((string * typ) list * Proof.context)
- val add_locale: binding -> binding -> string list ->
+ val add_locale: binding -> binding -> Bundle.name list ->
expression_i -> Element.context_i list -> theory -> string * local_theory
val add_locale_cmd: binding -> binding -> (xstring * Position.T) list ->
expression -> Element.context list -> theory -> string * local_theory
--- a/src/Pure/Isar/named_target.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Isar/named_target.ML Sun Nov 15 13:06:41 2020 +0000
@@ -11,7 +11,7 @@
val locale_of: local_theory -> string option
val bottom_locale_of: local_theory -> string option
val class_of: local_theory -> string option
- val init: string list -> string -> theory -> local_theory
+ val init: Bundle.name list -> string -> theory -> local_theory
val theory_init: theory -> local_theory
val theory_map: (local_theory -> local_theory) -> theory -> theory
val theory_map_result: (morphism -> 'a -> 'b) -> (local_theory -> 'a * local_theory) ->
--- a/src/Pure/ML/ml_console.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/ML/ml_console.scala Sun Nov 15 13:06:41 2020 +0000
@@ -42,7 +42,7 @@
"i:" -> (arg => include_sessions = include_sessions ::: List(arg)),
"l:" -> (arg => logic = arg),
"m:" -> (arg => modes = arg :: modes),
- "n" -> (arg => no_build = true),
+ "n" -> (_ => no_build = true),
"o:" -> (arg => options = options + arg),
"r" -> (_ => raw_ml_system = true))
--- a/src/Pure/ML/ml_process.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/ML/ml_process.scala Sun Nov 15 13:06:41 2020 +0000
@@ -32,7 +32,7 @@
val heaps: List[String] =
if (raw_ml_system) Nil
else {
- sessions_structure.selection(Sessions.Selection.session(logic_name)).
+ sessions_structure.selection(logic_name).
build_requirements(List(logic_name)).
map(a => File.platform_path(store.the_heap(a)))
}
--- a/src/Pure/PIDE/session.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/PIDE/session.ML Sun Nov 15 13:06:41 2020 +0000
@@ -9,8 +9,7 @@
val get_name: unit -> string
val welcome: unit -> string
val get_keywords: unit -> Keyword.keywords
- val init: HTML.symbols -> bool -> Path.T -> string -> string -> (string * string) list ->
- (Path.T * Path.T) list -> Path.T -> string -> string * string -> bool -> unit
+ val init: HTML.symbols -> bool -> Path.T -> string list -> string -> string * string -> bool -> unit
val shutdown: unit -> unit
val finish: unit -> unit
end;
@@ -46,18 +45,12 @@
(* init *)
-val document =
- fn "" => false | "false" => false | "pdf" => true
- | doc => error ("Bad document specification " ^ quote doc);
-
-fun init symbols info info_path doc doc_output doc_variants doc_files graph_file
- parent (chapter, name) verbose =
+fun init symbols info info_path documents parent (chapter, name) verbose =
(Synchronized.change session (fn ({name = parent_name, ...}, parent_finished) =>
if parent_name <> parent orelse not parent_finished then
error ("Unfinished parent session " ^ quote parent ^ " for " ^ quote name)
else ({chapter = chapter, name = name}, false));
- Present.init symbols info info_path (document doc)
- doc_output doc_variants doc_files graph_file (chapter, name) verbose);
+ Present.init symbols info info_path documents (chapter, name) verbose);
(* finish *)
--- a/src/Pure/Sessions.thy Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Sessions.thy Sun Nov 15 13:06:41 2020 +0000
@@ -8,7 +8,7 @@
imports Pure
keywords "session" :: thy_decl
and "description" "directories" "options" "sessions" "theories"
- "document_files" "export_files" :: quasi_command
+ "document_theories" "document_files" "export_files" :: quasi_command
and "global"
begin
--- a/src/Pure/System/bash.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/System/bash.scala Sun Nov 15 13:06:41 2020 +0000
@@ -45,6 +45,8 @@
/* process and result */
+ type Watchdog = (Time, Process => Boolean)
+
def process(script: String,
cwd: JFile = null,
env: Map[String, String] = Isabelle_System.settings(),
@@ -168,6 +170,7 @@
def result(
progress_stdout: String => Unit = (_: String) => (),
progress_stderr: String => Unit = (_: String) => (),
+ watchdog: Option[Watchdog] = None,
strict: Boolean = true): Process_Result =
{
stdin.close
@@ -177,9 +180,23 @@
val err_lines =
Future.thread("bash_stderr") { File.read_lines(stderr, progress_stderr) }
+ val watchdog_thread =
+ for ((time, check) <- watchdog)
+ yield {
+ Future.thread("bash_watchdog") {
+ while (proc.isAlive) {
+ time.sleep
+ if (check(this)) interrupt()
+ }
+ }
+ }
+
val rc =
try { join }
catch { case Exn.Interrupt() => terminate(); Exn.Interrupt.return_code }
+
+ watchdog_thread.foreach(_.cancel)
+
if (strict && rc == Exn.Interrupt.return_code) throw Exn.Interrupt()
Process_Result(rc, out_lines.join, err_lines.join, false, get_timing)
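
A Bash.Watchdog is a polling interval paired with a check: while the process is
alive, a watchdog thread sleeps for the interval and interrupts the process
whenever the check fires; the cancel after join stops the poller once the
process has terminated. Usage sketch (stop_requested is a placeholder):

    Bash.process("sleep 100").result(
      watchdog = Some((Time.seconds(0.5), _ => stop_requested())),
      strict = false)
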
--- a/src/Pure/System/isabelle_system.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/System/isabelle_system.scala Sun Nov 15 13:06:41 2020 +0000
@@ -347,11 +347,13 @@
redirect: Boolean = false,
progress_stdout: String => Unit = (_: String) => (),
progress_stderr: String => Unit = (_: String) => (),
+ watchdog: Option[Bash.Watchdog] = None,
strict: Boolean = true,
cleanup: () => Unit = () => ()): Process_Result =
{
Bash.process(script, cwd = cwd, env = env, redirect = redirect, cleanup = cleanup).
- result(progress_stdout, progress_stderr, strict)
+ result(progress_stdout = progress_stdout, progress_stderr = progress_stderr,
+ watchdog = watchdog, strict = strict)
}
def jconsole(): Process_Result =
--- a/src/Pure/System/progress.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/System/progress.scala Sun Nov 15 13:06:41 2020 +0000
@@ -32,6 +32,7 @@
def echo_warning(msg: String) { echo(Output.warning_text(msg)) }
def echo_error_message(msg: String) { echo(Output.error_message_text(msg)) }
+ def echo_document(path: Path) { echo("Document at " + path.absolute) }
def timeit[A](message: String = "", enabled: Boolean = true)(e: => A): A =
Timing.timeit(message, enabled, echo)(e)
@@ -53,12 +54,16 @@
env: Map[String, String] = Isabelle_System.settings(),
redirect: Boolean = false,
echo: Boolean = false,
+ watchdog: Time = Time.zero,
strict: Boolean = true): Process_Result =
{
- Isabelle_System.bash(script, cwd = cwd, env = env, redirect = redirect,
- progress_stdout = echo_if(echo, _),
- progress_stderr = echo_if(echo, _),
- strict = strict)
+ val result =
+ Isabelle_System.bash(script, cwd = cwd, env = env, redirect = redirect,
+ progress_stdout = echo_if(echo, _),
+ progress_stderr = echo_if(echo, _),
+ watchdog = if (watchdog.is_zero) None else Some((watchdog, _ => stopped)),
+ strict = strict)
+ if (strict && stopped) throw Exn.Interrupt() else result
}
}
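
Progress.bash folds the watchdog into a single Time value: Time.zero (the
default) disables it, any other value polls the stopped flag at that rate and
interrupts the job once the progress object is stopped; under strict, a stopped
run then surfaces as Exn.Interrupt. This is how the LaTeX runs in present.scala
below stay interruptible. Sketch (doc_dir is a placeholder):

    progress.bash("isabelle latex -o pdf root.tex", cwd = doc_dir.file,
      echo = true, watchdog = Time.seconds(0.5))
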
--- a/src/Pure/Thy/present.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Thy/present.ML Sun Nov 15 13:06:41 2020 +0000
@@ -9,12 +9,8 @@
val tex_path: string -> Path.T
val get_bibtex_entries: theory -> string list
val theory_qualifier: theory -> string
- val document_option: Options.T -> {enabled: bool, disabled: bool}
- val document_variants: Options.T -> (string * string) list
- val init: HTML.symbols -> bool -> Path.T -> bool -> string -> (string * string) list ->
- (Path.T * Path.T) list -> Path.T -> string * string -> bool -> unit
+ val init: HTML.symbols -> bool -> Path.T -> string list -> string * string -> bool -> unit
val finish: unit -> unit
- val theory_output: theory -> string list -> unit
val begin_theory: string list -> int -> (unit -> HTML.text) -> theory -> theory
end;
@@ -30,7 +26,6 @@
val html_path = html_ext o Path.basic;
val index_path = Path.basic "index.html";
val readme_html_path = Path.basic "README.html";
-val doc_indexN = "session";
val session_graph_path = Path.basic "session_graph.pdf";
val document_path = Path.ext "pdf" o Path.basic;
@@ -66,31 +61,19 @@
(** global browser info state **)
-(* type theory_info *)
-
-type theory_info = {tex_source: string list, html_source: string};
-
-fun make_theory_info (tex_source, html_source) =
- {tex_source = tex_source, html_source = html_source}: theory_info;
-
-fun map_theory_info f {tex_source, html_source} =
- make_theory_info (f (tex_source, html_source));
-
-
(* type browser_info *)
type browser_info =
- {theories: theory_info Symtab.table,
- tex_index: (int * string) list,
+ {html_theories: string Symtab.table,
html_index: (int * string) list};
-fun make_browser_info (theories, tex_index, html_index) : browser_info =
- {theories = theories, tex_index = tex_index, html_index = html_index};
+fun make_browser_info (html_theories, html_index) : browser_info =
+ {html_theories = html_theories, html_index = html_index};
-val empty_browser_info = make_browser_info (Symtab.empty, [], []);
+val empty_browser_info = make_browser_info (Symtab.empty, []);
-fun map_browser_info f {theories, tex_index, html_index} =
- make_browser_info (f (theories, tex_index, html_index));
+fun map_browser_info f {html_theories, html_index} =
+ make_browser_info (f (html_theories, html_index));
(* state *)
@@ -98,24 +81,9 @@
val browser_info = Synchronized.var "browser_info" empty_browser_info;
fun change_browser_info f = Synchronized.change browser_info (map_browser_info f);
-fun init_theory_info name info =
- change_browser_info (fn (theories, tex_index, html_index) =>
- (Symtab.update (name, info) theories, tex_index, html_index));
+fun update_html name html = change_browser_info (apfst (Symtab.update (name, html)));
-fun change_theory_info name f =
- change_browser_info (fn (theories, tex_index, html_index) =>
- (case Symtab.lookup theories name of
- NONE => error ("Browser info: cannot access theory document " ^ quote name)
- | SOME info => (Symtab.update (name, map_theory_info f info) theories, tex_index, html_index)));
-
-
-fun add_tex_index txt =
- change_browser_info (fn (theories, tex_index, html_index) =>
- (theories, txt :: tex_index, html_index));
-
-fun add_html_index txt =
- change_browser_info (fn (theories, tex_index, html_index) =>
- (theories, tex_index, txt :: html_index));
+fun add_html_index txt = change_browser_info (apsnd (cons txt));
@@ -125,15 +93,12 @@
type session_info =
{symbols: HTML.symbols, name: string, chapter: string, info_path: Path.T, info: bool,
- document: bool, doc_output: Path.T option, doc_files: (Path.T * Path.T) list,
- graph_file: Path.T, documents: (string * string) list, verbose: bool, readme: Path.T option};
+ verbose: bool, readme: Path.T option};
fun make_session_info
- (symbols, name, chapter, info_path, info, document, doc_output, doc_files,
- graph_file, documents, verbose, readme) =
+ (symbols, name, chapter, info_path, info, verbose, readme) =
{symbols = symbols, name = name, chapter = chapter, info_path = info_path, info = info,
- document = document, doc_output = doc_output, doc_files = doc_files, graph_file = graph_file,
- documents = documents, verbose = verbose, readme = readme}: session_info;
+ verbose = verbose, readme = readme}: session_info;
(* state *)
@@ -152,89 +117,35 @@
(** document preparation **)
-(* options *)
-
-fun document_option options =
- (case Options.string options "document" of
- "" => {enabled = false, disabled = false}
- | "false" => {enabled = false, disabled = true}
- | _ => {enabled = true, disabled = false});
-
-fun document_variants options =
- let
- val variants =
- space_explode ":" (Options.string options "document_variants") |> map (fn s =>
- (case space_explode "=" s of
- [name] => (name, "")
- | [name, tags] => (name, tags)
- | _ => error ("Malformed document variant specification: " ^ quote s)));
- val _ =
- (case duplicates (op =) (map #1 variants) of
- [] => ()
- | dups => error ("Duplicate document variants: " ^ commas_quote dups));
- in variants end;
-
-
(* init session *)
-fun init symbols info info_path document document_output doc_variants doc_files graph_file
- (chapter, name) verbose =
+fun init symbols info info_path documents (chapter, name) verbose =
let
- val doc_output =
- if document_output = "" then NONE else SOME (Path.explode document_output);
-
- val documents = if not document orelse null doc_files then [] else doc_variants;
val readme = if File.exists readme_html_path then SOME readme_html_path else NONE;
val docs =
(case readme of NONE => [] | SOME p => [(Url.File p, "README")]) @
- map (fn (name, _) => (Url.File (document_path name), name)) documents;
+ map (fn name => (Url.File (document_path name), name)) documents;
in
session_info :=
- SOME (make_session_info (symbols, name, chapter, info_path, info, document,
- doc_output, doc_files, graph_file, documents, verbose, readme));
+ SOME (make_session_info (symbols, name, chapter, info_path, info, verbose, readme));
Synchronized.change browser_info (K empty_browser_info);
add_html_index (0, HTML.begin_session_index symbols name (Url.File session_graph_path) docs)
end;
-(* isabelle tool wrappers *)
-
-fun isabelle_document {verbose} doc_name tags dir =
- let
- val script =
- "isabelle document -d " ^ File.bash_path dir ^ " -n " ^ Bash.string doc_name ^
- (if tags = "" then "" else " -t " ^ Bash.string tags);
- val doc_path = dir + Path.parent + document_path doc_name;
- val _ = if verbose then writeln script else ();
- val {out, err, rc, ...} = Bash.process script;
- val _ = if verbose then writeln (trim_line (normalize_lines out)) else ();
- val _ = if not (File.exists doc_path) orelse rc <> 0 then error (trim_line err) else ();
- in doc_path end;
-
-
(* finish session -- output all generated text *)
fun sorted_index index = map snd (sort (int_ord o apply2 fst) (rev index));
fun index_buffer index = Buffer.add (implode (sorted_index index)) Buffer.empty;
-fun write_tex src name path =
- File.write_buffer (path + tex_path name) src;
-
-fun write_tex_index tex_index path =
- write_tex (index_buffer tex_index) doc_indexN path;
-
-fun finish () =
- with_session_info () (fn {name, chapter, info, info_path, doc_output, doc_files, graph_file,
- documents, verbose, readme, ...} =>
+fun finish () = with_session_info () (fn {name, chapter, info, info_path, verbose, readme, ...} =>
let
- val {theories, tex_index, html_index} = Synchronized.value browser_info;
- val thys = Symtab.dest theories;
+ val {html_theories, html_index} = Synchronized.value browser_info;
val session_prefix = info_path + Path.basic chapter + Path.basic name;
- fun finish_html (a, {html_source, ...}: theory_info) =
- File.write (session_prefix + html_path a) html_source;
+ fun finish_html (a, html) = File.write (session_prefix + html_path a) html;
val _ =
if info then
@@ -242,45 +153,11 @@
File.write_buffer (session_prefix + index_path)
(index_buffer html_index |> Buffer.add HTML.end_document);
(case readme of NONE => () | SOME path => Isabelle_System.copy_file path session_prefix);
- List.app finish_html thys;
+ Symtab.fold (K o finish_html) html_theories ();
if verbose
then Output.physical_stderr ("Browser info at " ^ show_path session_prefix ^ "\n")
else ())
else ();
-
- fun document_job doc_prefix backdrop (doc_name, tags) =
- let
- val doc_dir = doc_prefix + Path.basic doc_name;
- fun purge () = if backdrop then Isabelle_System.rm_tree doc_dir else ();
- val _ = purge ();
- val _ = Isabelle_System.make_directory doc_dir;
- val _ =
- Isabelle_System.bash ("isabelle latex -o sty " ^
- File.bash_path (doc_dir + Path.basic "root.tex"));
- val _ = List.app (fn file => Isabelle_System.copy_file_base file doc_dir) doc_files;
- val _ = Isabelle_System.copy_file graph_file (doc_dir + session_graph_path);
- val _ = write_tex_index tex_index doc_dir;
- val _ =
- List.app (fn (a, {tex_source, ...}) =>
- write_tex (fold Buffer.add tex_source Buffer.empty) a doc_dir) thys;
- in
- fn () =>
- (isabelle_document {verbose = true} doc_name tags doc_dir before purge (),
- fn doc =>
- if verbose orelse not backdrop then
- Output.physical_stderr ("Document at " ^ show_path doc ^ "\n")
- else ())
- end;
-
- val jobs =
- (if info orelse is_none doc_output then
- map (document_job session_prefix true) documents
- else []) @
- (case doc_output of
- NONE => []
- | SOME path => map (document_job path false) documents);
-
- val _ = jobs |> Par_List.map (fn job => job ()) |> List.app (op |>);
in
Synchronized.change browser_info (K empty_browser_info);
session_info := NONE
@@ -289,12 +166,6 @@
(* theory elements *)
-fun theory_output thy output =
- with_session_info () (fn _ =>
- if is_session_theory thy then
- (change_theory_info (Context.theory_name thy) o apfst) (K output)
- else ());
-
fun theory_link (curr_chapter, curr_session) thy =
let
val {chapter, name = session, ...} = Browser_Info.get thy;
@@ -316,13 +187,12 @@
Theory.parents_of thy |> map (fn parent =>
(Option.map Url.File (theory_link (chapter, session_name) parent),
(Context.theory_name parent)));
- val html_source = HTML.theory symbols name parent_specs (mk_text ());
- val _ = init_theory_info name (make_theory_info ([], html_source));
+
+ val _ = update_html name (HTML.theory symbols name parent_specs (mk_text ()));
val bibtex_entries' =
if is_session_theory thy then
(add_html_index (update_time, HTML.theory_entry symbols (Url.File (html_path name), name));
- add_tex_index (update_time, Latex.theory_entry name);
bibtex_entries)
else [];
in
--- a/src/Pure/Thy/present.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Thy/present.scala Sun Nov 15 13:06:41 2020 +0000
@@ -77,9 +77,8 @@
/* finish session */
def finish(
- progress: Progress,
browser_info: Path,
- graph_file: JFile,
+ graph_pdf: Bytes,
info: Sessions.Info,
name: String)
{
@@ -89,9 +88,7 @@
if (info.options.bool("browser_info")) {
Isabelle_System.make_directory(session_fonts)
- val session_graph = session_prefix + Path.basic("session_graph.pdf")
- File.copy(graph_file, session_graph.file)
- Isabelle_System.chmod("a+r", session_graph)
+ Bytes.write(session_prefix + session_graph_path, graph_pdf)
HTML.write_isabelle_css(session_prefix)
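
The session graph no longer goes through a temporary file: it is rendered to
Bytes in memory and written out directly, both here and in build_documents
below. Sketch of the flow (options and base as in build_documents;
session_prefix is a placeholder):

    val graph_pdf: Bytes = graphview.Graph_File.make_pdf(options, base.session_graph_display)
    Bytes.write(session_prefix + Path.explode("session_graph.pdf"), graph_pdf)
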
@@ -193,131 +190,212 @@
- /** make document source **/
+ /** build documents **/
- def write_tex_index(dir: Path, names: List[Document.Node.Name])
- {
- val path = dir + Path.explode("session.tex")
- val text = names.map(name => "\\input{" + name.theory_base_name + ".tex}\n\n").mkString
- File.write(path, text)
- }
-
-
+ val session_tex_path = Path.explode("session.tex")
+ val session_graph_path = Path.explode("session_graph.pdf")
- /** build document **/
-
- val document_format = "pdf"
+ def tex_name(name: Document.Node.Name): String = name.theory_base_name + ".tex"
+ def document_tex_name(name: Document.Node.Name): String = "document/" + tex_name(name)
- val default_document_name = "document"
- def default_document_dir(name: String): Path = Path.explode("output") + Path.basic(name)
-
- def document_tags(tags: List[String]): String =
- {
- cat_lines(
+ def isabelletags(tags: List[String]): String =
+ Library.terminate_lines(
tags.map(tag =>
tag.toList match {
case '/' :: cs => "\\isafoldtag{" + cs.mkString + "}"
case '-' :: cs => "\\isadroptag{" + cs.mkString + "}"
case '+' :: cs => "\\isakeeptag{" + cs.mkString + "}"
case cs => "\\isakeeptag{" + cs.mkString + "}"
- })) + "\n"
- }
+ }))
- def build_document(
- document_name: String = default_document_name,
- document_dir: Option[Path] = None,
- tags: List[String] = Nil)
+ def build_documents(
+ session: String,
+ deps: Sessions.Deps,
+ store: Sessions.Store,
+ progress: Progress = new Progress,
+ verbose: Boolean = false,
+ verbose_latex: Boolean = false): List[(String, Bytes)] =
{
- val document_target = Path.parent + Path.explode(document_name).ext(document_format)
+ /* session info */
+
+ val info = deps.sessions_structure(session)
+ val options = info.options
+ val base = deps(session)
+ val graph_pdf = graphview.Graph_File.make_pdf(options, base.session_graph_display)
- val dir = document_dir getOrElse default_document_dir(document_name)
- if (!dir.is_dir) error("Bad document output directory " + dir)
+ def find_tex(name: Document.Node.Name): Option[Bytes] =
+ deps.sessions_structure.build_requirements(List(session)).reverse.view
+ .map(session_name =>
+ using(store.open_database(session_name))(db =>
+ Export.read_entry(db, session_name, name.theory, document_tex_name(name)).
+ map(_.uncompressed(cache = store.xz_cache))))
+ .collectFirst({ case Some(x) => x })
+
+
+ /* prepare document directory */
- val root_name =
+ lazy val tex_files =
+ for (name <- base.session_theories ::: base.document_theories)
+ yield Path.basic(tex_name(name)) -> find_tex(name).getOrElse(Bytes.empty)
+
+ def prepare_dir(dir: Path, doc_name: String, doc_tags: List[String]): (Path, String) =
{
- val root1 = "root_" + document_name
- if ((dir + Path.explode(root1).ext("tex")).is_file) root1 else "root"
+ val doc_dir = dir + Path.basic(doc_name)
+ Isabelle_System.make_directory(doc_dir)
+
+ Isabelle_System.bash("isabelle latex -o sty", cwd = doc_dir.file).check
+ File.write(doc_dir + Path.explode("isabelletags.sty"), isabelletags(doc_tags))
+ for ((base_dir, src) <- info.document_files) File.copy_base(info.dir + base_dir, src, doc_dir)
+ Bytes.write(doc_dir + session_graph_path, graph_pdf)
+
+ File.write(doc_dir + session_tex_path,
+ Library.terminate_lines(
+ base.session_theories.map(name => "\\input{" + tex_name(name) + "}")))
+
+ for ((path, tex) <- tex_files) Bytes.write(doc_dir + path, tex)
+
+ val root1 = "root_" + doc_name
+ val root = if ((doc_dir + Path.explode(root1).tex).is_file) root1 else "root"
+
+ (doc_dir, root)
}
-
-  /* bash scripts */
-
-  def root_bash(ext: String): String = Bash.string(root_name + "." + ext)
-
-  def latex_bash(fmt: String, ext: String = "tex"): String =
-    "isabelle latex -o " + Bash.string(fmt) + " " + Bash.string(root_name + "." + ext)
-
-  def bash(script: String): Process_Result =
-    Isabelle_System.bash(script, cwd = dir.file)
-
-
-  /* prepare document */
-
-  File.write(dir + Path.explode("isabelletags.sty"), document_tags(tags))
-
-  List("log", "blg").foreach(ext => (dir + Path.explode(root_name).ext(ext)).file.delete)
-
-  val result =
-    if ((dir + Path.explode("build")).is_file) {
-      bash("./build " + Bash.string(document_format) + " " + Bash.string(document_name))
-    }
-    else {
-      bash(
-        List(
-          latex_bash("sty"),
-          latex_bash(document_format),
-          "{ [ ! -f " + root_bash("bib") + " ] || " + latex_bash("bbl") + "; }",
-          "{ [ ! -f " + root_bash("idx") + " ] || " + latex_bash("idx") + "; }",
-          latex_bash(document_format),
-          latex_bash(document_format)).mkString(" && "))
-    }
-
-
-  /* result */
-
-  if (!result.ok) {
-    cat_error(
-      Library.trim_line(result.err),
-      cat_lines(Latex.latex_errors(dir, root_name) ::: Bibtex.bibtex_errors(dir, root_name)),
-      "Failed to build document in " + File.path(dir.absolute_file))
-  }
-
-  bash("if [ -f " + root_bash(document_format) + " ]; then cp -f " +
-    root_bash(document_format) + " " + File.bash_path(document_target) + "; fi").check
+
+  /* produce documents */
+
+  val doc_output = info.document_output
+
+  val docs =
+    for ((doc_name, doc_tags) <- info.documents)
+    yield {
+      Isabelle_System.with_tmp_dir("document")(tmp_dir =>
+      {
+        val (doc_dir, root) = prepare_dir(tmp_dir, doc_name, doc_tags)
+        doc_output.foreach(prepare_dir(_, doc_name, doc_tags))
+
+
+        // bash scripts
+
+        def root_bash(ext: String): String = Bash.string(root + "." + ext)
+
+        def latex_bash(fmt: String = "pdf", ext: String = "tex"): String =
+          "isabelle latex -o " + Bash.string(fmt) + " " + Bash.string(root + "." + ext)
+
+        def bash(items: String*): Process_Result =
+          progress.bash(items.mkString(" && "), cwd = doc_dir.file,
+            echo = verbose_latex, watchdog = Time.seconds(0.5))
+
+
+        // prepare document
+
+        val result =
+          if ((doc_dir + Path.explode("build")).is_file) {
+            bash("./build pdf " + Bash.string(doc_name))
+          }
+          else {
+            bash(
+              latex_bash(),
+              "{ [ ! -f " + root_bash("bib") + " ] || " + latex_bash("bbl") + "; }",
+              "{ [ ! -f " + root_bash("idx") + " ] || " + latex_bash("idx") + "; }",
+              latex_bash(),
+              latex_bash())
+          }
+
+
+        // result
+
+        val root_pdf = Path.basic(root).pdf
+        val result_path = doc_dir + root_pdf
+
+        if (!result.ok) {
+          cat_error(
+            Library.trim_line(result.err),
+            cat_lines(Latex.latex_errors(doc_dir, root) ::: Bibtex.bibtex_errors(doc_dir, root)),
+            "Failed to build document " + quote(doc_name))
+        }
+        else if (!result_path.is_file) {
+          error("Bad document result: expected to find " + root_pdf)
+        }
+        else doc_name -> Bytes.read(result_path)
+      })
+    }
+
+  def output(dir: Path)
+  {
+    Isabelle_System.make_directory(dir)
+    for ((name, pdf) <- docs) {
+      val path = dir + Path.basic(name).pdf
+      Bytes.write(path, pdf)
+      progress.echo_document(path)
+    }
+  }
+
+  if (info.options.bool("browser_info") || doc_output.isEmpty) {
+    output(store.browser_info + Path.basic(info.chapter) + Path.basic(session))
+  }
+
+  doc_output.foreach(output)
+
+  docs
}

  /* Isabelle tool wrapper */

  val isabelle_tool =
- Isabelle_Tool("document", "prepare theory session document", args =>
- {
- var document_dir: Option[Path] = None
- var document_name = default_document_name
- var tags: List[String] = Nil
+ Isabelle_Tool("document", "prepare session theory document", args =>
+ {
+ var verbose_latex = false
+ var dirs: List[Path] = Nil
+ var no_build = false
+ var options = Options.init()
+ var verbose_build = false
- val getopts = Getopts("""
-Usage: isabelle document [OPTIONS]
+ val getopts = Getopts(
+ """
+Usage: isabelle document [OPTIONS] SESSION
Options are:
- -d DIR document output directory (default """ +
- default_document_dir(default_document_name) + """)
- -n NAME document name (default """ + quote(default_document_name) + """)
- -t TAGS markup for tagged regions
+ -O set option "document_output", relative to current directory
+ -V verbose latex
+ -d DIR include session directory
+ -n no build of session
+ -o OPTION override Isabelle system OPTION (via NAME=VAL or NAME)
+ -v verbose build
- Prepare the theory session document in document directory, producing the
- specified output format.
+ Prepare the theory document of a session.
""",
- "d:" -> (arg => document_dir = Some(Path.explode(arg))),
- "n:" -> (arg => document_name = arg),
- "t:" -> (arg => tags = space_explode(',', arg)))
+ "O:" -> (arg => options += ("document_output=" + Path.explode(arg).absolute.implode)),
+ "V" -> (_ => verbose_latex = true),
+ "d:" -> (arg => dirs = dirs ::: List(Path.explode(arg))),
+ "n" -> (_ => no_build = true),
+ "o:" -> (arg => options = options + arg),
+ "v" -> (_ => verbose_build = true))
- val more_args = getopts(args)
- if (more_args.nonEmpty) getopts.usage()
+ val more_args = getopts(args)
+ val session =
+ more_args match {
+ case List(a) => a
+ case _ => getopts.usage()
+ }
+
+ val progress = new Console_Progress(verbose = verbose_build)
+ val store = Sessions.store(options)
- try {
- build_document(document_dir = document_dir, document_name = document_name, tags = tags)
- }
- catch { case ERROR(msg) => Output.writeln(msg); sys.exit(2) }
- })
+ progress.interrupt_handler {
+ if (!no_build) {
+ val res =
+ Build.build(options, selection = Sessions.Selection.session(session),
+ dirs = dirs, progress = progress, verbose = verbose_build)
+ if (!res.ok) error("Failed to build session " + quote(session))
+ }
+
+ val deps =
+ Sessions.load_structure(options + "document=pdf", dirs = dirs).
+ selection_deps(Sessions.Selection.session(session))
+
+ build_documents(session, deps, store, progress = progress,
+ verbose = true, verbose_latex = verbose_latex)
+ }
+ })
}
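
Note: the fallback branch above joins several LaTeX runs into a single Bash command. A minimal standalone sketch of that composition (the object name and the crude quoting helper are hypothetical, not Isabelle API):

  // Standalone sketch of the fallback LaTeX chain above; LatexChain and
  // the naive bash_string quoting are made-up stand-ins.
  object LatexChain {
    def bash_string(s: String): String = "'" + s + "'"   // crude stand-in for Bash.string

    def latex_bash(fmt: String = "pdf", ext: String = "tex", root: String = "root"): String =
      "isabelle latex -o " + bash_string(fmt) + " " + bash_string(root + "." + ext)

    def root_bash(ext: String, root: String = "root"): String = bash_string(root + "." + ext)

    def main(args: Array[String]): Unit = {
      val script =
        List(
          latex_bash(),
          "{ [ ! -f " + root_bash("bib") + " ] || " + latex_bash("bbl") + "; }",
          "{ [ ! -f " + root_bash("idx") + " ] || " + latex_bash("idx") + "; }",
          latex_bash(),
          latex_bash()).mkString(" && ")
      println(script)   // one Bash command: stops at the first failing step
    }
  }

The guarded steps run bibtex/index processing only when a root .bib or .idx file exists, and " && " aborts the chain at the first failure.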
--- a/src/Pure/Thy/sessions.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Thy/sessions.ML Sun Nov 15 13:06:41 2020 +0000
@@ -27,6 +27,9 @@
val in_path =
Parse.$$$ "(" |-- Parse.!!! (Parse.$$$ "in" |-- Parse.position Parse.path --| Parse.$$$ ")");
+val document_theories =
+ Parse.$$$ "document_theories" |-- Scan.repeat1 (Parse.position Parse.name);
+
val document_files =
Parse.$$$ "document_files" |--
Parse.!!! (Scan.optional in_path ("document", Position.none)
@@ -55,11 +58,12 @@
Scan.optional
(Parse.$$$ "directories" |-- Parse.!!! (Scan.repeat1 (Parse.position Parse.path))) [] --
Scan.repeat theories --
+ Scan.optional document_theories [] --
Scan.repeat document_files --
Scan.repeat export_files))
- >> (fn ((((session, _), _), dir),
- ((((((((parent, descr), options), sessions), directories), theories),
- document_files), export_files))) =>
+ >> (fn (((((session, _), _), dir),
+ (((((((((parent, descr), options), sessions), directories), theories),
+ document_theories), document_files), export_files)))) =>
Toplevel.keep (fn state =>
let
val ctxt = Toplevel.context_of state;
@@ -80,6 +84,10 @@
ignore (Completion.check_option_value ctxt x y (Options.default ()))
handle ERROR msg => Output.error_message msg);
+ fun check_thy (path, pos) =
+ ignore (Resources.check_file ctxt (SOME Path.current) (Path.implode path, pos))
+ handle ERROR msg => Output.error_message msg;
+
val _ =
maps #2 theories |> List.app (fn (s, pos) =>
let
@@ -87,13 +95,18 @@
Resources.import_name session session_dir s
handle ERROR msg => error (msg ^ Position.here pos);
val theory_path = the_default node_name (Resources.find_theory_file theory_name);
- val _ = Resources.check_file ctxt (SOME Path.current) (Path.implode theory_path, pos);
- in () end);
+ in check_thy (theory_path, pos) end);
val _ =
directories |> List.app (ignore o Resources.check_dir ctxt (SOME session_dir));
val _ =
+ document_theories |> List.app (fn (thy, pos) =>
+ (case Resources.find_theory_file thy of
+ NONE => Output.error_message ("Unknown theory " ^ quote thy ^ Position.here pos)
+ | SOME path => check_thy (path, pos)));
+
+ val _ =
document_files |> List.app (fn (doc_dir, doc_files) =>
let
val dir = Resources.check_dir ctxt (SOME session_dir) doc_dir;
--- a/src/Pure/Thy/sessions.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Thy/sessions.scala Sun Nov 15 13:06:41 2020 +0000
@@ -53,6 +53,7 @@
session_directories: Map[JFile, String] = Map.empty,
global_theories: Map[String, String] = Map.empty,
session_theories: List[Document.Node.Name] = Nil,
+ document_theories: List[Document.Node.Name] = Nil,
loaded_theories: Graph[String, Outer_Syntax] = Graph.string,
used_theories: List[(Document.Node.Name, Options)] = Nil,
known_theories: Map[String, Document.Node.Entry] = Map.empty,
@@ -249,6 +250,30 @@
": need to include sessions " + quote(qualifier) + " in ROOT"
}
+ val document_errors =
+ info.document_theories.flatMap(
+ {
+ case (thy, pos) =>
+ val parent_sessions = sessions_structure.build_requirements(List(session_name))
+
+ def err(msg: String): Option[String] =
+ Some(msg + " " + quote(thy) + Position.here(pos))
+
+ known_theories.get(thy).map(_.name) match {
+ case None => err("Unknown document theory")
+ case Some(name) =>
+ val qualifier = deps_base.theory_qualifier(name)
+ if (session_theories.contains(name)) {
+ err("Redundant document theory from this session:")
+ }
+ else if (parent_sessions.contains(qualifier)) None
+ else if (dependencies.theories.contains(name)) None
+ else err("Document theory from other session not imported properly:")
+ }
+ })
+
+ val document_theories =
+ info.document_theories.flatMap({ case (thy, _) => known_theories.get(thy).map(_.name) })
+
val dir_errors =
{
val ok = info.dirs.map(_.canonical_file).toSet
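
For orientation, a standalone sketch of the three-way check above, with the types simplified to plain string sets (all names here are mine, not Isabelle's): a document theory must be known, must not duplicate a session theory, and must come from a required parent session or an explicit import.

  // Standalone sketch of the document_theories validation above.
  object Document_Theory_Check {
    def check(
        thy: String,
        known_theories: Set[String],
        session_theories: Set[String],
        parent_sessions: Set[String],
        imported_theories: Set[String],
        qualifier: String => String): Option[String] =
    {
      if (!known_theories(thy)) Some("Unknown document theory " + thy)
      else if (session_theories(thy)) Some("Redundant document theory from this session: " + thy)
      else if (parent_sessions(qualifier(thy))) None
      else if (imported_theories(thy)) None
      else Some("Document theory from other session not imported properly: " + thy)
    }

    def main(args: Array[String]): Unit =
      // known, not redundant, qualifier "HOL" is a required parent => None (no error)
      println(check("HOL.Word", Set("HOL.Word"), Set(), Set("HOL"), Set(), _.split('.').head))
  }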
@@ -297,6 +322,7 @@
session_directories = sessions_structure.session_directories,
global_theories = sessions_structure.global_theories,
session_theories = session_theories,
+ document_theories = document_theories,
loaded_theories = dependencies.loaded_theories,
used_theories = dependencies.theories_adjunct,
known_theories = known_theories,
@@ -306,7 +332,8 @@
sources = check_sources(session_files),
session_graph_display = session_graph_display,
errors = dependencies.errors ::: loaded_files_errors ::: import_errors :::
- dir_errors ::: sources_errors ::: path_errors ::: bibtex_errors)
+ document_errors ::: dir_errors ::: sources_errors ::: path_errors :::
+ bibtex_errors)
session_bases + (info.name -> base)
}
@@ -391,6 +418,7 @@
imports = info.deps,
directories = Nil,
theories = List((Nil, required_theories.map(thy => ((thy, Position.none), false)))),
+ document_theories = Nil,
document_files = Nil,
export_files = Nil))))
}
@@ -426,6 +454,7 @@
imports: List[String],
theories: List[(Options, List[(String, Position.T)])],
global_theories: List[String],
+ document_theories: List[(String, Position.T)],
document_files: List[(Path, Path)],
export_files: List[(Path, Int, List[String])],
meta_digest: SHA1.Digest)
@@ -452,6 +481,36 @@
def timeout: Time = Time.seconds(options.real("timeout") * options.real("timeout_scale"))
+ def document_enabled: Boolean =
+ options.string("document") match {
+ case "" | "false" => false
+ case "pdf" => true
+ case doc => error("Bad document specification " + quote(doc))
+ }
+
+ def documents: List[(String, List[String])] =
+ {
+ val variants =
+ for (s <- Library.space_explode(':', options.string("document_variants")))
+ yield {
+ Library.space_explode('=', s) match {
+ case List(name) => (name, Nil)
+ case List(name, tags) => (name, Library.space_explode(',', tags))
+ case _ => error("Malformed document_variants: " + quote(s))
+ }
+ }
+ val dups = Library.duplicates(variants.map(_._1))
+ if (dups.nonEmpty) error("Duplicate document variants: " + commas_quote(dups))
+
+ if (!document_enabled || document_files.isEmpty) Nil else variants
+ }
+
+ def document_output: Option[Path] =
+ options.string("document_output") match {
+ case "" => None
+ case s => Some(dir + Path.explode(s))
+ }
+
def bibtex_entries: List[Text.Info[String]] =
(for {
(document_dir, file) <- document_files.iterator
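
A standalone sketch of the document_variants parsing above, using plain String.split instead of Library.space_explode (which differs on empty fields); the object name is mine. A value such as "document:outline=/proof,/ML" yields one untagged variant plus one variant with two tags.

  // Standalone sketch of the Info.documents variant parsing above.
  object Document_Variants {
    def parse(spec: String): List[(String, List[String])] = {
      val variants =
        for (s <- spec.split(':').toList)
        yield s.split('=').toList match {
          case List(name) => (name, Nil)
          case List(name, tags) => (name, tags.split(',').toList)
          case _ => sys.error("Malformed document_variants: " + s)
        }
      val dups = variants.map(_._1).diff(variants.map(_._1).distinct)
      if (dups.nonEmpty) sys.error("Duplicate document variants: " + dups.distinct.mkString(", "))
      variants
    }

    def main(args: Array[String]): Unit =
      println(parse("document:outline=/proof,/ML"))
      // List((document,List()), (outline,List(/proof, /ML)))
  }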
@@ -509,12 +568,15 @@
entry.export_files.map({ case (dir, prune, pats) => (Path.explode(dir), prune, pats) })
val meta_digest =
- SHA1.digest((name, chapter, entry.parent, entry.directories, entry.options, entry.imports,
- entry.theories_no_position, conditions, entry.document_files).toString)
+ SHA1.digest(
+ (name, chapter, entry.parent, entry.directories, entry.options, entry.imports,
+ entry.theories_no_position, conditions, entry.document_theories, entry.document_files)
+ .toString)
Info(name, chapter, dir_selected, entry.pos, entry.groups, session_path,
entry.parent, entry.description, directories, session_options,
- entry.imports, theories, global_theories, document_files, export_files, meta_digest)
+ entry.imports, theories, global_theories, entry.document_theories, document_files,
+ export_files, meta_digest)
}
catch {
case ERROR(msg) =>
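
The point of extending meta_digest is cache invalidation: any change to document_theories changes the digest, so the session counts as out of date. A standalone sketch of the idea with java.security.MessageDigest in place of Isabelle's SHA1 wrapper (names are mine):

  // Standalone sketch: hash the printed form of the session parameters,
  // so adding a document_theories entry changes the digest.
  import java.security.MessageDigest

  object Meta_Digest {
    def digest(parts: Any): String =
      MessageDigest.getInstance("SHA-1")
        .digest(parts.toString.getBytes("UTF-8"))
        .map(b => f"$b%02x").mkString

    def main(args: Array[String]): Unit = {
      val before = ("Session", "Parent", List("Thy_A"), Nil)
      val after = ("Session", "Parent", List("Thy_A"), List("Doc_Thy"))
      println(digest(before) == digest(after))   // false: rebuild required
    }
  }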
@@ -711,8 +773,9 @@
restrict(build_graph), restrict(imports_graph))
}
+ def selection(session: String): Structure = selection(Selection.session(session))
+
def selection_deps(
- options: Options,
selection: Selection,
progress: Progress = new Progress,
loading_sessions: Boolean = false,
@@ -760,6 +823,7 @@
private val SESSIONS = "sessions"
private val THEORIES = "theories"
private val GLOBAL = "global"
+ private val DOCUMENT_THEORIES = "document_theories"
private val DOCUMENT_FILES = "document_files"
private val EXPORT_FILES = "export_files"
@@ -773,6 +837,7 @@
(OPTIONS, Keyword.QUASI_COMMAND) +
(SESSIONS, Keyword.QUASI_COMMAND) +
(THEORIES, Keyword.QUASI_COMMAND) +
+ (DOCUMENT_THEORIES, Keyword.QUASI_COMMAND) +
(DOCUMENT_FILES, Keyword.QUASI_COMMAND) +
(EXPORT_FILES, Keyword.QUASI_COMMAND)
@@ -789,6 +854,7 @@
imports: List[String],
directories: List[String],
theories: List[(List[Options.Spec], List[((String, Position.T), Boolean)])],
+ document_theories: List[(String, Position.T)],
document_files: List[(String, String)],
export_files: List[(String, Int, List[String])]) extends Entry
{
@@ -822,6 +888,9 @@
val in_path = $$$("(") ~! ($$$(IN) ~ path ~ $$$(")")) ^^ { case _ ~ (_ ~ x ~ _) => x }
+ val document_theories =
+ $$$(DOCUMENT_THEORIES) ~! rep1(position(name)) ^^ { case _ ~ x => x }
+
val document_files =
$$$(DOCUMENT_FILES) ~!
((in_path | success("document")) ~ rep1(path)) ^^ { case _ ~ (x ~ y) => y.map((x, _)) }
@@ -843,10 +912,11 @@
(($$$(SESSIONS) ~! rep1(session_name) ^^ { case _ ~ x => x }) | success(Nil)) ~
(($$$(DIRECTORIES) ~! rep1(path) ^^ { case _ ~ x => x }) | success(Nil)) ~
rep(theories) ~
-          (rep(document_files) ^^ (x => x.flatten)) ~
-          (rep(export_files))))) ^^
-        { case _ ~ ((a, pos) ~ b ~ c ~ (_ ~ (d ~ e ~ f ~ g ~ h ~ i ~ j ~ k))) =>
-            Session_Entry(pos, a, b, c, d, e, f, g, h, i, j, k) }
+          (opt(document_theories) ^^ (x => x.getOrElse(Nil))) ~
+          (rep(document_files) ^^ (x => x.flatten)) ~
+          rep(export_files)))) ^^
+        { case _ ~ ((a, pos) ~ b ~ c ~ (_ ~ (d ~ e ~ f ~ g ~ h ~ i ~ j ~ k ~ l))) =>
+            Session_Entry(pos, a, b, c, d, e, f, g, h, i, j, k, l) }
}
def parse_root(path: Path): List[Entry] =
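
A reduced sketch of how the optional clause composes, written against scala-parser-combinators (assumed on the classpath); the grammar is cut down to just the new clause, and the theory names are made up.

  // Sketch of the ROOT parser extension above: opt(...) ^^ (_.getOrElse(Nil))
  // keeps the document_theories clause optional, as in the diff.
  import scala.util.parsing.combinator.RegexParsers

  object Document_Theories_Parser extends RegexParsers {
    val name: Parser[String] = """[A-Za-z0-9_.\-]+""".r

    val document_theories: Parser[List[String]] =
      "document_theories" ~! rep1(name) ^^ { case _ ~ x => x }

    val clause: Parser[List[String]] =
      opt(document_theories) ^^ (_.getOrElse(Nil))

    def main(args: Array[String]): Unit = {
      println(parseAll(clause, "document_theories HOL.Word Main").get)  // List(HOL.Word, Main)
      println(parseAll(clause, "").get)  // clause absent: List()
    }
  }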
--- a/src/Pure/Thy/thy_info.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Thy/thy_info.ML Sun Nov 15 13:06:41 2020 +0000
@@ -61,9 +61,8 @@
else
let
val body = Thy_Output.present_thy options thy segments;
- val option = Present.document_option options;
in
- if #disabled option then ()
+ if Options.string options "document" = "false" then ()
else
let
val thy_name = Context.theory_name thy;
@@ -71,9 +70,7 @@
val latex = Latex.isabelle_body thy_name body;
val output = [Latex.output_text latex, Latex.output_positions file_pos latex];
- val _ = Export.export thy (Path.binding0 document_path) (XML.blob output);
- val _ = if #enabled option then Present.theory_output thy output else ();
- in () end
+ in Export.export thy (Path.binding0 document_path) (XML.blob output) end
end));
--- a/src/Pure/Tools/build.ML Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Tools/build.ML Sun Nov 15 13:06:41 2020 +0000
@@ -89,9 +89,9 @@
(fn [args_yxml] =>
let
val (symbol_codes, (command_timings, (verbose, (browser_info,
- (document_files, (graph_file, (parent_name, (chapter, (session_name, (master_dir,
+ (documents, (parent_name, (chapter, (session_name, (master_dir,
(theories, (session_positions, (session_directories, (doc_names, (global_theories,
- (loaded_theories, bibtex_entries)))))))))))))))) =
+ (loaded_theories, bibtex_entries))))))))))))))) =
YXML.parse_body args_yxml |>
let
open XML.Decode;
@@ -99,12 +99,12 @@
val path = Path.explode o string;
in
pair (list (pair string int)) (pair (list properties) (pair bool (pair path
- (pair (list (pair path path)) (pair path (pair string (pair string (pair string
+ (pair (list string) (pair string (pair string (pair string
(pair path
(pair (((list (pair Options.decode (list (pair string position))))))
(pair (list (pair string properties))
(pair (list (pair string string)) (pair (list string)
- (pair (list (pair string string)) (pair (list string) (list string))))))))))))))))
+ (pair (list (pair string string)) (pair (list string) (list string)))))))))))))))
end;
val symbols = HTML.make_symbols symbol_codes;
@@ -124,11 +124,7 @@
symbols
(Options.default_bool "browser_info")
browser_info
- (Options.default_string "document")
- (Options.default_string "document_output")
- (Present.document_variants (Options.default ()))
- document_files
- graph_file
+ documents
parent_name
(chapter, session_name)
verbose;
--- a/src/Pure/Tools/build.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Pure/Tools/build.scala Sun Nov 15 13:06:41 2020 +0000
@@ -165,12 +165,7 @@
val options: Options = NUMA.policy_options(info.options, numa_node)
private val sessions_structure = deps.sessions_structure
-
- private val graph_file = Isabelle_System.tmp_file("session_graph", "pdf")
- graphview.Graph_File.write(options, graph_file, deps(session_name).session_graph_display)
-
- private val export_consumer =
- Export.consumer(store.open_database(session_name, output = true), cache = store.xz_cache)
+ private val documents = info.documents.map(_._1)
private val future_result: Future[Process_Result] =
Future.thread("build", uninterruptible = true) {
@@ -180,22 +175,20 @@
YXML.string_of_body(
{
import XML.Encode._
- pair(list(pair(string, int)), pair(list(properties), pair(bool,
- pair(Path.encode, pair(list(pair(Path.encode, Path.encode)), pair(string,
- pair(string, pair(string, pair(string, pair(Path.encode,
+ pair(list(pair(string, int)), pair(list(properties), pair(bool, pair(Path.encode,
+ pair(list(string), pair(string, pair(string, pair(string, pair(Path.encode,
pair(list(pair(Options.encode, list(pair(string, properties)))),
pair(list(pair(string, properties)),
pair(list(pair(string, string)),
pair(list(string), pair(list(pair(string, string)),
- pair(list(string), list(string)))))))))))))))))(
- (Symbol.codes, (command_timings0, (verbose,
- (store.browser_info, (info.document_files, (File.standard_path(graph_file),
- (parent, (info.chapter, (session_name, (Path.current,
+ pair(list(string), list(string))))))))))))))))(
+ (Symbol.codes, (command_timings0, (verbose, (store.browser_info,
+ (documents, (parent, (info.chapter, (session_name, (Path.current,
(info.theories,
(sessions_structure.session_positions,
(sessions_structure.dest_session_directories,
(base.doc_names, (base.global_theories.toList,
- (base.loaded_theories.keys, info.bibtex_entries.map(_.info))))))))))))))))))
+ (base.loaded_theories.keys, info.bibtex_entries.map(_.info)))))))))))))))))
})
val env =
@@ -234,6 +227,9 @@
}
}
+ val export_consumer =
+ Export.consumer(store.open_database(session_name, output = true), cache = store.xz_cache)
+
val stdout = new StringBuilder(1000)
val stderr = new StringBuilder(1000)
val messages = new mutable.ListBuffer[XML.Elem]
@@ -242,6 +238,7 @@
val session_timings = new mutable.ListBuffer[Properties.T]
val runtime_statistics = new mutable.ListBuffer[Properties.T]
val task_statistics = new mutable.ListBuffer[Properties.T]
+ val document_output = new mutable.ListBuffer[String]
def fun(
name: String,
@@ -355,7 +352,7 @@
use_prelude = use_prelude, eval_main = eval_main,
cwd = info.dir.file, env = env)
- val errors =
+ val build_errors =
Isabelle_Thread.interrupt_handler(_ => process.terminate) {
Exn.capture { process.await_startup } match {
case Exn.Res(_) =>
@@ -367,25 +364,58 @@
val process_result =
Isabelle_Thread.interrupt_handler(_ => process.terminate) { process.await_shutdown }
- val process_output =
- stdout.toString ::
- messages.toList.map(message =>
- Symbol.encode(Protocol.message_text(List(message), metric = Symbol.Metric))) :::
- command_timings.toList.map(Protocol.Command_Timing_Marker.apply) :::
- theory_timings.toList.map(Protocol.Theory_Timing_Marker.apply) :::
- session_timings.toList.map(Protocol.Session_Timing_Marker.apply) :::
- runtime_statistics.toList.map(Protocol.ML_Statistics_Marker.apply) :::
- task_statistics.toList.map(Protocol.Task_Statistics_Marker.apply)
+
+ val export_errors =
+ export_consumer.shutdown(close = true).map(Output.error_message_text)
+
+ val document_errors =
+ try {
+ if (build_errors.isInstanceOf[Exn.Res[_]] && process_result.ok && documents.nonEmpty) {
+ val document_progress =
+ new Progress {
+ override def echo(msg: String): Unit =
+ document_output.synchronized { document_output += msg }
+ override def echo_document(path: Path): Unit =
+ progress.echo_document(path)
+ }
+ Present.build_documents(session_name, deps, store, verbose = verbose,
+ verbose_latex = true, progress = document_progress)
+ }
+ val graph_pdf =
+ graphview.Graph_File.make_pdf(options, deps(session_name).session_graph_display)
+ Present.finish(store.browser_info, graph_pdf, info, session_name)
+ Nil
+ }
+ catch { case Exn.Interrupt.ERROR(msg) => List(msg) }
val result =
- process_result.output(process_output).error(Library.trim_line(stderr.toString))
+ {
+ val more_output =
+ Library.trim_line(stdout.toString) ::
+ messages.toList.map(message =>
+ Symbol.encode(Protocol.message_text(List(message), metric = Symbol.Metric))) :::
+ command_timings.toList.map(Protocol.Command_Timing_Marker.apply) :::
+ theory_timings.toList.map(Protocol.Theory_Timing_Marker.apply) :::
+ session_timings.toList.map(Protocol.Session_Timing_Marker.apply) :::
+ runtime_statistics.toList.map(Protocol.ML_Statistics_Marker.apply) :::
+ task_statistics.toList.map(Protocol.Task_Statistics_Marker.apply) :::
+ document_output.toList
- errors match {
- case Exn.Res(Nil) => result
- case Exn.Res(errs) =>
- result.error_rc.output(
- errs.flatMap(s => split_lines(Output.error_message_text(s))) :::
- errs.map(Protocol.Error_Message_Marker.apply))
+ val more_errors =
+ Library.trim_line(stderr.toString) :: export_errors ::: document_errors
+
+ process_result.output(more_output).errors(more_errors)
+ }
+
+ build_errors match {
+ case Exn.Res(build_errs) =>
+ val errs = build_errs ::: document_errors
+ if (errs.isEmpty) result
+ else {
+ result.error_rc.output(
+ errs.flatMap(s => split_lines(Output.error_message_text(s))) :::
+ errs.map(Protocol.Error_Message_Marker.apply))
+ }
case Exn.Exn(Exn.Interrupt()) =>
if (result.ok) result.copy(rc = Exn.Interrupt.return_code) else result
case Exn.Exn(exn) => throw exn
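
The document_progress above funnels LaTeX output into the session log rather than the console. A standalone sketch of the same buffering pattern, with the Progress API reduced to the one method used (names are mine):

  // Standalone sketch of the buffered document_progress above; messages are
  // appended thread-safely and later drained into the build log.
  import scala.collection.mutable

  object Buffered_Progress_Demo {
    trait Progress { def echo(msg: String): Unit }

    def main(args: Array[String]): Unit = {
      val document_output = new mutable.ListBuffer[String]
      val document_progress = new Progress {
        override def echo(msg: String): Unit =
          document_output.synchronized { document_output += msg }
      }
      document_progress.echo("latex: run 1")
      document_progress.echo("latex: run 2")
      document_output.synchronized(document_output.toList).foreach(println)
    }
  }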
@@ -404,23 +434,7 @@
def join: (Process_Result, Option[String]) =
{
- val result0 = future_result.join
- val result1 =
- export_consumer.shutdown(close = true).map(Output.error_message_text) match {
- case Nil => result0
- case errs => result0.errors(errs).error_rc
- }
-
- if (result1.ok) {
- for (document_output <- proper_string(options.string("document_output"))) {
- val document_output_dir =
- Isabelle_System.make_directory(info.dir + Path.explode(document_output))
- Present.write_tex_index(document_output_dir, deps(session_name).session_theories)
- }
- Present.finish(progress, store.browser_info, graph_file, info, session_name)
- }
-
- graph_file.delete
+ val result1 = future_result.join
val was_timeout =
timeout_request match {
--- a/src/Tools/Graphview/graph_file.scala Sun Nov 15 13:06:24 2020 +0000
+++ b/src/Tools/Graphview/graph_file.scala Sun Nov 15 13:06:41 2020 +0000
@@ -44,4 +44,11 @@
write(file, graphview)
}
+
+ def make_pdf(options: Options, graph: Graph_Display.Graph): Bytes =
+ Isabelle_System.with_tmp_file("graph", ext = "pdf")(graph_file =>
+ {
+ write(options, graph_file.file, graph)
+ Bytes.read(graph_file)
+ })
}
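
make_pdf follows the with_tmp_file bracket pattern: allocate a temporary file, run the body, read the result back as bytes, and clean up regardless of outcome. A standalone sketch with java.nio in place of Isabelle_System.with_tmp_file (names are mine; the writer callback stands in for Graph_File.write):

  // Standalone sketch of the make_pdf bracket pattern above.
  import java.nio.file.{Files, Path}

  object Make_Pdf_Demo {
    def with_tmp_file[A](name: String, ext: String)(body: Path => A): A = {
      val tmp = Files.createTempFile(name, "." + ext)
      try body(tmp) finally Files.deleteIfExists(tmp)   // always clean up
    }

    def make_pdf(write: Path => Unit): Array[Byte] =
      with_tmp_file("graph", "pdf") { file =>
        write(file)                   // render into the temporary file
        Files.readAllBytes(file)      // return the PDF as in-memory bytes
      }

    def main(args: Array[String]): Unit =
      println(make_pdf(file => Files.write(file, "%PDF-1.4 stub".getBytes)).length)
  }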