%% $Id$

\newcommand{\rmindex}[1]{{#1}\index{#1}\@}
\newcommand{\mtt}[1]{\mbox{\tt #1}}
\newcommand{\ttfct}[1]{\mathop{\mtt{#1}}\nolimits}
\newcommand{\ttrel}[1]{\mathrel{\mtt{#1}}}
\newcommand{\Constant}{\ttfct{Constant}}
\newcommand{\Variable}{\ttfct{Variable}}
\newcommand{\Appl}[1]{\ttfct{Appl}\mathopen{\mtt[}#1\mathclose{\mtt]}}


\chapter{Defining Logics} \label{Defining-Logics}

This chapter is intended for Isabelle experts. It explains how to define new
logical systems, Isabelle's {\em raison d'\^etre}. Isabelle logics are
hierarchies of theories. A number of simple examples are contained in {\em
Introduction to Isabelle}; the full syntax of theory definition files ({\tt
.thy} files) is shown in {\em The Isabelle Reference Manual}. This chapter's
chief purpose is a thorough description of all syntax-related matters
concerning theories. The most important sections are \S\ref{sec:mixfix} about
mixfix declarations and \S\ref{sec:macros} describing the macro system. The
concluding examples of \S\ref{sec:min_logics} are more concerned with the
logical aspects of the definition of theories. Sections marked with * can be
skipped on the first reading.

%% FIXME move to Refman
% \section{Classes and types *}
% \index{*arities!context conditions}
%
% Type declarations are subject to the following two well-formedness
% conditions:
% \begin{itemize}
% \item There are no two declarations $ty :: (\vec{r})c$ and $ty :: (\vec{s})c$
% with $\vec{r} \neq \vec{s}$. For example
% \begin{ttbox}
% types ty 1
% arities ty :: ({\ttlbrace}logic{\ttrbrace}) logic
%         ty :: ({\ttlbrace}{\ttrbrace})logic
% \end{ttbox}
% leads to an error message and fails.
% \item If there are two declarations $ty :: (s@1,\dots,s@n)c$ and $ty ::
% (s@1',\dots,s@n')c'$ such that $c' < c$ then $s@i' \preceq s@i$ must hold
% for $i=1,\dots,n$. The relationship $\preceq$, defined as
% \[ s' \preceq s \iff \forall c\in s. \exists c'\in s'.~ c'\le c, \]
% expresses that the set of types represented by $s'$ is a subset of the set of
% types represented by $s$. For example
% \begin{ttbox}
% classes term < logic
% types ty 1
% arities ty :: ({\ttlbrace}logic{\ttrbrace})logic
%         ty :: ({\ttlbrace}{\ttrbrace})term
% \end{ttbox}
% leads to an error message and fails.
% \end{itemize}
% These conditions guarantee principal types~\cite{nipkow-prehofer}.


\section{Precedence grammars} \label{sec:precedence_grammars}

The precise syntax of a logic is best defined by a \rmindex{context-free
grammar}. In order to simplify the description of mathematical languages, we
introduce an extended format which permits {\bf
precedences}\indexbold{precedence}. This scheme generalizes precedence
declarations in \ML\ and {\sc prolog}. In this extended grammar format,
nonterminals are decorated by integers, their precedence. In the sequel,
precedences are shown as subscripts. A nonterminal $A@p$ on the right-hand
side of a production may only be replaced using a production $A@q = \gamma$
where $p \le q$.

Formally, a set of context-free productions $G$ induces a derivation
relation $\rew@G$ on strings as follows:
\[ \alpha A@p \beta ~\rew@G~ \alpha\gamma\beta ~~~\mbox{iff}~~~
   \exists (A@q=\gamma) \in G.~q \ge p
\]
Any extended grammar of this kind can be translated into a normal
context-free grammar. However, this translation may require the introduction
of a large number of new nonterminals and productions.

\begin{example} \label{ex:precedence}
The following simple grammar for arithmetic expressions demonstrates how
binding power and associativity of operators can be enforced by precedences.
\begin{center}
\begin{tabular}{rclr}
$A@9$ & = & {\tt0} \\
$A@9$ & = & {\tt(} $A@0$ {\tt)} \\
$A@0$ & = & $A@0$ {\tt+} $A@1$ \\
$A@2$ & = & $A@3$ {\tt*} $A@2$ \\
$A@3$ & = & {\tt-} $A@3$
\end{tabular}
\end{center}
The choice of precedences determines that {\tt -} binds tighter than {\tt *},
which binds tighter than {\tt +}, and that {\tt +} associates to the left and
{\tt *} to the right.
\end{example}
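The precedence discipline of this example can be realized directly by a
recursive-descent parser that threads the current precedence $p$ through and
applies only productions $A@q = \gamma$ with $q \ge p$. The following sketch
(plain Python, not Isabelle code; the token and tree representations are
invented for illustration) parses exactly the grammar above, assuming
$max_pri = 9$:

```python
# Sketch of a parser for the grammar of Example "precedence".
# parse(toks, p) parses a phrase for nonterminal A_p, so only productions
# A_q = ... with q >= p may be applied.

def parse(toks, p=0):
    """Return (ast, remaining tokens); asts are nested tuples."""
    if toks and toks[0] == '-' and p <= 3:     # A_3 = - A_3
        body, rest = parse(toks[1:], 3)
        left = ('-', body)
    elif toks and toks[0] == '(':              # A_9 = ( A_0 )
        left, rest = parse(toks[1:], 0)
        if not rest or rest[0] != ')':
            raise SyntaxError('missing )')
        rest = rest[1:]
    elif toks and toks[0] == '0':              # A_9 = 0
        left, rest = '0', toks[1:]
    else:
        raise SyntaxError('unexpected input')
    while rest:                                # infix productions
        if rest[0] == '*' and p <= 2:          # A_2 = A_3 * A_2  (right)
            right, rest = parse(rest[1:], 2)
            left = ('*', left, right)
        elif rest[0] == '+' and p <= 0:        # A_0 = A_0 + A_1  (left)
            right, rest = parse(rest[1:], 1)
            left = ('+', left, right)
        else:
            break
    return left, rest
```

Running it on {\tt 0+0*0} yields the tree for {\tt 0+(0*0)}, while
{\tt 0+0+0} associates to the left, as claimed above.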

To minimize the number of subscripts, we adopt the following conventions:
\begin{itemize}
\item All precedences $p$ must be in the range $0 \leq p \leq max_pri$ for
some fixed $max_pri$.
\item Precedence $0$ on the right-hand side and precedence $max_pri$ on the
left-hand side may be omitted.
\end{itemize}
In addition, we write the production $A@p = \alpha$ as $A = \alpha~(p)$,
i.e.\ the precedence of the left-hand side actually appears at the far
right. Finally, alternatives may be separated by $|$, and repetition
indicated by \dots.

Using these conventions and assuming $max_pri=9$, the grammar in
Example~\ref{ex:precedence} becomes
\begin{center}
\begin{tabular}{rclc}
$A$ & = & {\tt0} & \hspace*{4em} \\
 & $|$ & {\tt(} $A$ {\tt)} \\
 & $|$ & $A$ {\tt+} $A@1$ & (0) \\
 & $|$ & $A@3$ {\tt*} $A@2$ & (2) \\
 & $|$ & {\tt-} $A@3$ & (3)
\end{tabular}
\end{center}



\section{Basic syntax} \label{sec:basic_syntax}

The basis of all extensions by object-logics is the \rmindex{Pure theory},
bound to the \ML-identifier \ttindex{Pure.thy}. It contains, among many other
things, the \rmindex{Pure syntax}. An informal account of this basic syntax
(meta-logic, types etc.) may be found in {\em Introduction to Isabelle}. A
more precise description using a precedence grammar is shown in
Figure~\ref{fig:pure_gram}.

\begin{figure}[htb]
\begin{center}
\begin{tabular}{rclc}
$prop$ &=& \ttindex{PROP} $aprop$ ~~$|$~~ {\tt(} $prop$ {\tt)} \\
 &$|$& $logic@3$ \ttindex{==} $logic@2$ & (2) \\
 &$|$& $logic@3$ \ttindex{=?=} $logic@2$ & (2) \\
 &$|$& $prop@2$ \ttindex{==>} $prop@1$ & (1) \\
 &$|$& {\tt[|} $prop$ {\tt;} \dots {\tt;} $prop$ {\tt|]} {\tt==>} $prop@1$ & (1) \\
 &$|$& {\tt!!} $idts$ {\tt.} $prop$ & (0) \\\\
$logic$ &=& $prop$ ~~$|$~~ $fun$ \\\\
$aprop$ &=& $id$ ~~$|$~~ $var$
  ~~$|$~~ $fun@{max_pri}$ {\tt(} $logic$ {\tt,} \dots {\tt,} $logic$ {\tt)} \\\\
$fun$ &=& $id$ ~~$|$~~ $var$ ~~$|$~~ {\tt(} $fun$ {\tt)} \\
 &$|$& $fun@{max_pri}$ {\tt(} $logic$ {\tt,} \dots {\tt,} $logic$ {\tt)} \\
 &$|$& $fun@{max_pri}$ {\tt::} $type$ \\
 &$|$& \ttindex{\%} $idts$ {\tt.} $logic$ & (0) \\\\
$idts$ &=& $idt$ ~~$|$~~ $idt@1$ $idts$ \\\\
$idt$ &=& $id$ ~~$|$~~ {\tt(} $idt$ {\tt)} \\
 &$|$& $id$ \ttindex{::} $type$ & (0) \\\\
$type$ &=& $tfree$ ~~$|$~~ $tvar$ ~~$|$~~ $tfree$ {\tt::} $sort$
  ~~$|$~~ $tvar$ {\tt::} $sort$ \\
 &$|$& $id$ ~~$|$~~ $type@{max_pri}$ $id$
  ~~$|$~~ {\tt(} $type$ {\tt,} \dots {\tt,} $type$ {\tt)} $id$ \\
 &$|$& $type@1$ \ttindex{=>} $type$ & (0) \\
 &$|$& {\tt[} $type$ {\tt,} \dots {\tt,} $type$ {\tt]} {\tt=>} $type$&(0)\\
 &$|$& {\tt(} $type$ {\tt)} \\\\
$sort$ &=& $id$ ~~$|$~~ {\tt\ttlbrace\ttrbrace}
  ~~$|$~~ {\tt\ttlbrace} $id$ {\tt,} \dots {\tt,} $id$ {\tt\ttrbrace}
\end{tabular}\index{*"!"!}\index{*"["|}\index{*"|"]}
\indexbold{type@$type$} \indexbold{sort@$sort$} \indexbold{idt@$idt$}
\indexbold{idts@$idts$} \indexbold{logic@$logic$} \indexbold{prop@$prop$}
\indexbold{fun@$fun$}
\end{center}
\caption{Meta-Logic Syntax}
\label{fig:pure_gram}
\end{figure}

The following main categories are defined:
\begin{description}
\item[$prop$] Terms of type $prop$, i.e.\ formulae of the meta-logic.

\item[$aprop$] Atomic propositions.

\item[$logic$] Terms of types in class $logic$. Initially, $logic$ contains
merely $prop$. As the syntax is extended by new object-logics, more
productions for $logic$ are added automatically (see below).

\item[$fun$] Terms potentially of function type.

\item[$type$] Meta-types.

\item[$idts$] A list of identifiers, possibly constrained by types. Note
that \verb|x :: nat y| is parsed as \verb|x :: (nat y)|, i.e.\ {\tt y}
would be treated like a type constructor applied to {\tt nat}.
\end{description}

\subsection{Logical types and default syntax}

Isabelle is concerned with mathematical languages which have a certain
minimal vocabulary: identifiers, variables, parentheses, and the lambda
calculus. Logical types, i.e.\ those of class $logic$, are automatically
equipped with this basic syntax. More precisely, for any type constructor
$ty$ with arity $(\vec{s})c$, where $c$ is a subclass of $logic$, the
following productions are added:
\begin{center}
\begin{tabular}{rclc}
$ty$ &=& $id$ ~~$|$~~ $var$ ~~$|$~~ {\tt(} $ty$ {\tt)} \\
 &$|$& $fun@{max_pri}$ {\tt(} $logic$ {\tt,} \dots {\tt,} $logic$ {\tt)}\\
 &$|$& $ty@{max_pri}$ {\tt::} $type$\\\\
$logic$ &=& $ty$
\end{tabular}
\end{center}

\subsection{Lexical matters *}

The parser does not process input strings directly; rather, it operates on
token lists provided by Isabelle's \rmindex{lexical analyzer} (the
\bfindex{lexer}). There are two different kinds of tokens: {\bf
literals}\indexbold{literal token}\indexbold{token!literal} and {\bf valued
tokens}\indexbold{valued token}\indexbold{token!valued}.

Literals can be regarded as reserved words\index{reserved word} of the
syntax, and the user can add new ones when extending theories. In
Figure~\ref{fig:pure_gram} they appear in typewriter type, e.g.\ {\tt PROP},
{\tt ==}, {\tt =?=}, {\tt ;}.

Valued tokens on the other hand have a fixed predefined syntax. The lexer
distinguishes four kinds of them: identifiers\index{identifier},
unknowns\index{unknown}\index{scheme variable|see{unknown}}, type
variables\index{type variable}, and type unknowns\index{type unknown}\index{type
scheme variable|see{type unknown}}; they are denoted by $id$\index{id@$id$},
$var$\index{var@$var$}, $tfree$\index{tfree@$tfree$},
$tvar$\index{tvar@$tvar$}, respectively. Typical examples are {\tt x}, {\tt
?x7}, {\tt 'a}, {\tt ?'a3}; the exact syntax is:

\begin{tabular}{rcl}
$id$ & = & $letter~quasiletter^*$ \\
$var$ & = & ${\tt ?}id ~~|~~ {\tt ?}id{\tt .}nat$ \\
$tfree$ & = & ${\tt '}id$ \\
$tvar$ & = & ${\tt ?}tfree ~~|~~ {\tt ?}tfree{\tt .}nat$ \\[1ex]

$letter$ & = & one of {\tt a}\dots {\tt z} {\tt A}\dots {\tt Z} \\
$digit$ & = & one of {\tt 0}\dots {\tt 9} \\
$quasiletter$ & = & $letter ~~|~~ digit ~~|~~ {\tt _} ~~|~~ {\tt '}$ \\
$nat$ & = & $digit^+$
\end{tabular}

A string of $var$ or $tvar$ describes an \rmindex{unknown}, which is
internally a pair of base name and index (\ML\ type \ttindex{indexname}).
These components are either explicitly separated by a dot as in {\tt ?x.1} or
{\tt ?x7.3}, or directly run together as in {\tt ?x1}. The latter form is
only possible if the base name does not end with digits. If the index is 0,
it may be dropped altogether: {\tt ?x} abbreviates {\tt ?x0} or {\tt ?x.0}.
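The decomposition of an unknown into base name and index can be sketched as
follows (an illustrative Python toy, not Isabelle's actual lexer; the
function name is invented):

```python
import re

# Split an unknown such as ?x.1, ?x7.3, ?x1 or ?x into (base name, index),
# following the rules stated in the text.

def decode_indexname(s):
    if not s.startswith('?'):
        raise ValueError('unknowns start with ?')
    body = s[1:]
    if '.' in body:                        # explicit dot: ?x7.3 -> ("x7", 3)
        base, idx = body.rsplit('.', 1)
        return base, int(idx)
    m = re.match(r"^(.*[^0-9])([0-9]+)$", body)
    if m:                                  # run together: ?x1 -> ("x", 1)
        return m.group(1), int(m.group(2))
    return body, 0                         # ?x abbreviates ?x0 and ?x.0
```

Note how the dotted form is needed as soon as the base name itself ends in a
digit: {\tt ?x7.3} cannot be written run together without changing its
meaning.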

Note that $id$, $var$, $tfree$, $tvar$ are mutually disjoint, but it is
perfectly legal that they overlap with the set of literal tokens (e.g.\ {\tt
PROP}, {\tt ALL}, {\tt EX}).

The lexical analyzer translates input strings to token lists by repeatedly
taking the maximal prefix of the input string that forms a valid token. A
maximal prefix that is both a literal and a valued token is treated as a
literal. Spaces, tabs and newlines are separators; they never occur within
tokens.

Note that literals need not necessarily be surrounded by white space to be
recognized as separate. For example, if {\tt -} is a literal but {\tt --} is
not, then the string {\tt --} is treated as two consecutive occurrences of
{\tt -}. This is in contrast to \ML, which always treats {\tt --} as a
single symbolic name. The consequence of Isabelle's more liberal scheme is
that the same string may be parsed in different ways after extending the
syntax: after adding {\tt --} as a literal, the input {\tt --} is treated as
a single token.
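The maximal-munch strategy and the literal-over-valued tie-breaking rule can
be sketched as follows (a Python toy in which identifiers are the only kind
of valued token; the real lexer also recognizes $var$, $tfree$ and $tvar$):

```python
import re

# Toy maximal-munch lexer.  At each position, take the longest prefix that
# is a token; a maximal prefix that is both a literal and a valued token
# counts as a literal, exactly as described in the text.

ID = re.compile(r"[A-Za-z][A-Za-z0-9_']*")

def lex(s, literals):
    toks, i = [], 0
    while i < len(s):
        if s[i] in ' \t\n':                    # separators, never in tokens
            i += 1
            continue
        lit = max((l for l in literals if s.startswith(l, i)),
                  key=len, default=None)       # longest literal prefix
        m = ID.match(s, i)                     # longest valued prefix
        if m and (lit is None or len(m.group()) > len(lit)):
            toks.append(('id', m.group()))     # strictly longer valued token
            i = m.end()
        elif lit is not None:
            toks.append(('lit', lit))          # ties go to the literal
            i += len(lit)
        else:
            raise SyntaxError('bad character %r' % s[i])
    return toks
```

With literal set {\tt -} the input {\tt --} becomes two tokens; after adding
{\tt --} as a literal, the same input becomes one token, reproducing the
behaviour described above.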


\subsection{Inspecting syntax *}

You may get the \ML\ representation of the syntax of any Isabelle theory by
applying \index{*syn_of}
\begin{ttbox}
syn_of: theory -> Syntax.syntax
\end{ttbox}
\ttindex{Syntax.syntax} is an abstract type. Values of this type can be
displayed by the following functions: \index{*Syntax.print_syntax}
\index{*Syntax.print_gram} \index{*Syntax.print_trans}
\begin{ttbox}
Syntax.print_syntax: Syntax.syntax -> unit
Syntax.print_gram: Syntax.syntax -> unit
Syntax.print_trans: Syntax.syntax -> unit
\end{ttbox}
{\tt Syntax.print_syntax} shows virtually all information contained in a
syntax and is therefore quite verbose. Its output is divided into labeled
sections. The syntax proper is represented by {\tt lexicon}, {\tt roots} and
{\tt prods}. The rest refers to the manifold facilities to apply syntactic
translations (macro expansion etc.).

To cope with the verbosity of {\tt Syntax.print_syntax}, there are
\ttindex{Syntax.print_gram}, which prints the syntax proper only, and
\ttindex{Syntax.print_trans}, which prints only the translation related
information.

Let's have a closer look at part of Pure's syntax:
\begin{ttbox}
Syntax.print_syntax (syn_of Pure.thy);
{\out lexicon: "!!" "%" "(" ")" "," "." "::" ";" "==" "==>" \dots}
{\out roots: logic type fun prop}
{\out prods:}
{\out   type = tfree  (1000)}
{\out   type = tvar  (1000)}
{\out   type = id  (1000)}
{\out   type = tfree "::" sort[0] => "_ofsort"  (1000)}
{\out   type = tvar "::" sort[0] => "_ofsort"  (1000)}
{\out   \vdots}
{\out consts: "_K" "_appl" "_aprop" "_args" "_asms" "_bigimpl" \dots}
{\out parse_ast_translation: "_appl" "_bigimpl" "_bracket"}
{\out   "_idtyp" "_lambda" "_tapp" "_tappl"}
{\out parse_rules:}
{\out parse_translation: "!!" "_K" "_abs" "_aprop"}
{\out print_translation: "all"}
{\out print_rules:}
{\out print_ast_translation: "==>" "_abs" "_idts" "fun"}
\end{ttbox}

\begin{description}
\item[\ttindex{lexicon}]
The set of literal tokens (i.e.\ reserved words, delimiters) used for
lexical analysis.

\item[\ttindex{roots}]
The legal syntactic categories to start parsing with. You name the
desired root directly as a string when calling lower level functions or
specifying macros. Higher level functions usually expect a type and
derive the actual root as follows:\index{root_of_type@$root_of_type$}
\begin{itemize}
\item $root_of_type(\tau@1 \To \tau@2) = \mtt{fun}$.

\item $root_of_type(\tau@1 \mathrel{ty} \tau@2) = ty$.

\item $root_of_type((\tau@1, \dots, \tau@n)ty) = ty$.

\item $root_of_type(\alpha) = \mtt{logic}$.
\end{itemize}
Here $\tau@1, \dots, \tau@n$ are types, $ty$ is an infix or ordinary type
constructor and $\alpha$ is a type variable or unknown. Note that only the
outermost type constructor is taken into account.
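For illustration, $root_of_type$ can be rendered in Python on an invented
tuple encoding of types in which the outermost constructor comes first
(this is only a sketch, not Isabelle's representation of types):

```python
# Types encoded as nested tuples with the outermost constructor first,
# e.g. ('fun', t1, t2) for t1 => t2 and ('list', "'a") for 'a list; bare
# strings stand for type variables or unknowns.

def root_of_type(tau):
    if isinstance(tau, str):       # type variable or unknown
        return 'logic'
    return tau[0]                  # only the outermost constructor counts
```

In particular, an infix type such as $\tau@1 \To \tau@2$ is just an
application of the constructor {\tt fun}, so all three constructor cases
collapse into one.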

\item[\ttindex{prods}]
The list of productions describing the precedence grammar. Nonterminals
$A@n$ are rendered in {\sc ascii} as {\tt $A$[$n$]}, literal tokens are
quoted. Some productions have strings attached after an {\tt =>}. These
strings later become the heads of parse trees, but they also play a vital
role when terms are printed (see \S\ref{sec:asts}).

Productions which do not have a string attached and thus do not create a
new parse tree node are called {\bf copy productions}\indexbold{copy
production}. They must have exactly one
argument\index{argument!production} (i.e.\ nonterminal or valued token)
on the right-hand side. The parse tree generated when parsing that
argument is simply passed up as the result of parsing the whole copy
production.

A special kind of copy production is one where the argument is a
nonterminal and no additional syntactic sugar (literals) exists. It is
called a \bfindex{chain production}. Chain productions should be seen as
an abbreviation mechanism: conceptually, they are removed from the
grammar by adding appropriate new rules. Precedence information attached
to chain productions is ignored; only the dummy value $-1$ is displayed.

\item[\ttindex{consts}, \ttindex{parse_rules}, \ttindex{print_rules}]
This is macro related information (see \S\ref{sec:macros}).

\item[\tt *_translation]
\index{*parse_ast_translation} \index{*parse_translation}
\index{*print_translation} \index{*print_ast_translation}
The sets of constants that invoke translation functions. These are more
arcane matters (see \S\ref{sec:asts} and \S\ref{sec:tr_funs}).
\end{description}

Of course you may inspect the syntax of any theory using the above calling
sequence. Beware that, as more and more material accumulates, the output
becomes even more verbose. When extending syntaxes, new {\tt roots}, {\tt
prods}, {\tt parse_rules} and {\tt print_rules} are appended to the end. The
other lists are displayed sorted.



\section{Abstract syntax trees} \label{sec:asts}

Figure~\ref{fig:parse_print} shows a simplified model of the parsing and
printing process.

\begin{figure}[htb]
\begin{center}
\begin{tabular}{cl}
string & \\
$\downarrow$ & parser \\
parse tree & \\
$\downarrow$ & \rmindex{parse ast translation} \\
ast & \\
$\downarrow$ & ast rewriting (macros) \\
ast & \\
$\downarrow$ & \rmindex{parse translation}, type-inference \\
--- well-typed term --- & \\
$\downarrow$ & \rmindex{print translation} \\
ast & \\
$\downarrow$ & ast rewriting (macros) \\
ast & \\
$\downarrow$ & \rmindex{print ast translation}, printer \\
string &
\end{tabular}
\end{center}
\caption{Parsing and Printing}
\label{fig:parse_print}
\end{figure}

The parser takes an input string (more precisely the token list produced by
the lexer) and produces a \rmindex{parse tree}, which is directly derived
from the productions involved. By applying some internal transformations the
parse tree becomes an \bfindex{abstract syntax tree} (\bfindex{ast}). Macro
expansion, further translations and finally type inference yield a
well-typed term\index{term!well-typed}.

The printing process is the reverse, except for some subtleties to be
discussed later.

Asts are an intermediate form between the very raw parse trees and the typed
$\lambda$-terms. An ast is either an atom (constant or variable) or a list of
{\em at least two\/} subasts. Internally, they have type \ttindex{Syntax.ast},
which is defined as: \index{*Constant} \index{*Variable} \index{*Appl}
\begin{ttbox}
datatype ast =
  Constant of string |
  Variable of string |
  Appl of ast list
\end{ttbox}

Notation: We write constant atoms as quoted strings, variable atoms as
non-quoted strings and applications as lists of subasts separated by spaces
and enclosed in parentheses. For example:
\begin{ttbox}
Appl [Constant "_constrain",
  Appl [Constant "_abs", Variable "x", Variable "t"],
  Appl [Constant "fun", Variable "'a", Variable "'b"]]
{\rm is written as} ("_constrain" ("_abs" x t) ("fun" 'a 'b))
\end{ttbox}
Note that {\tt ()} and {\tt (f)} are both illegal.
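To experiment with this notation outside \ML, the datatype and the display
convention can be mimicked as follows (a Python sketch, not Isabelle code;
the at-least-two-subasts constraint is checked explicitly):

```python
# Python mimicry of the ML datatype ast, plus the display convention from
# the text: constants quoted, variables bare, applications parenthesized.

def Constant(s): return ('Constant', s)
def Variable(s): return ('Variable', s)

def Appl(asts):
    if len(asts) < 2:
        raise ValueError('() and (f) are both illegal')
    return ('Appl', asts)

def show(ast):
    tag, x = ast
    if tag == 'Constant':
        return '"%s"' % x
    if tag == 'Variable':
        return x
    return '(' + ' '.join(show(a) for a in x) + ')'
```

Applying {\tt show} to the example ast above reproduces the string
{\tt ("_constrain" ("_abs" x t) ("fun" 'a 'b))}.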

The resemblance to Lisp's S-expressions is intentional, but notice the two
kinds of atomic symbols: $\Constant x$ and $\Variable x$. This distinction
has some more obscure reasons and you can ignore it about half of the time.
You should not take the names ``{\tt Constant}'' and ``{\tt Variable}'' too
literally. In the later translation to terms, $\Variable x$ may become a
constant, free or bound variable, even a type constructor or class name; the
actual outcome depends on the context.

Similarly, you can think of ${\tt (} f~x@1~\ldots~x@n{\tt )}$ as some sort of
application of $f$ to the arguments $x@1, \ldots, x@n$. But the actual kind
of application (say a type constructor $f$ applied to types $x@1, \ldots,
x@n$) is determined later by context, too.

Forms like {\tt (("_abs" x $t$) $u$)} are perfectly legal, but asts are not
higher-order: the {\tt "_abs"} does not yet bind the {\tt x} in any way,
though later at the term level, {\tt ("_abs" x $t$)} will become an {\tt Abs}
node and occurrences of {\tt x} in $t$ will be replaced by {\tt Bound}s. Even
if non-constant heads of applications may seem unusual, asts should be
regarded as first-order. Purists may think of ${\tt (} f~x@1~\ldots~x@n{\tt
)}$ as a first-order application of some invisible $(n+1)$-ary constant.


\subsection{Parse trees to asts}

Asts are generated from parse trees by applying some translation functions,
which are internally called {\bf parse ast translations}\indexbold{parse ast
translation}.

We can think of the raw output of the parser as trees built up by nesting the
right-hand sides of those productions that were used to derive a word that
matches the input token list. Then parse trees are simply lists of tokens and
sub parse trees, the latter replacing the nonterminals of the productions.
Note that we refer here to the actual productions in their internal form as
displayed by {\tt Syntax.print_syntax}.

Ignoring parse ast translations, the mapping
$ast_of_pt$\index{ast_of_pt@$ast_of_pt$} from parse trees to asts is derived
from the productions involved as follows:
\begin{itemize}
\item Valued tokens: $ast_of_pt(t) = \Variable s$, where $t$ is an $id$,
$var$, $tfree$ or $tvar$ token, and $s$ its value.

\item Copy productions: $ast_of_pt(\ldots P \ldots) = ast_of_pt(P)$.

\item $0$-ary productions: $ast_of_pt(\ldots \mtt{=>} c) = \Constant c$.

\item $n$-ary productions: $ast_of_pt(\ldots P@1 \ldots P@n \ldots \mtt{=>}
c) = \Appl{\Constant c,$ $ast_of_pt(P@1),$ $\ldots,$ $ast_of_pt(P@n)}$,
where $n \ge 1$.
\end{itemize}
Here $P, P@1, \ldots, P@n$ denote sub parse trees or valued tokens and
``\dots'' zero or more literal tokens. That means literal tokens are stripped
and do not appear in asts.
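The rules above can be transcribed almost literally. The following Python
sketch assumes an invented encoding of parse trees in which literal tokens
have already been stripped; it is not Isabelle's implementation:

```python
# Toy encoding of parse trees: a valued token is ('tok', value); an applied
# production is ('prod', c, children), where c is the attached string
# (None for copy productions) and children are the sub parse trees.

def ast_of_pt(pt):
    if pt[0] == 'tok':                         # valued token
        return ('Variable', pt[1])
    _, c, args = pt
    if c is None:                              # copy production
        assert len(args) == 1
        return ast_of_pt(args[0])
    if not args:                               # 0-ary production
        return ('Constant', c)
    return ('Appl',                            # n-ary production, n >= 1
            [('Constant', c)] + [ast_of_pt(a) for a in args])
```

For instance, the parse tree of {\tt "t == u"} built from the production
attached to {\tt "=="} maps to the ast written {\tt ("==" t u)}.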

The following table presents some simple examples:

{\tt\begin{tabular}{ll}
\rm input string & \rm ast \\\hline
"f" & f \\
"'a" & 'a \\
"t == u" & ("==" t u) \\
"f(x)" & ("_appl" f x) \\
"f(x, y)" & ("_appl" f ("_args" x y)) \\
"f(x, y, z)" & ("_appl" f ("_args" x ("_args" y z))) \\
"\%x y.\ t" & ("_lambda" ("_idts" x y) t) \\
\end{tabular}}

Some of these examples illustrate why further translations are desirable in
order to provide some nice standard form of asts before macro expansion takes
place. Hence the Pure syntax provides predefined parse ast
translations\index{parse ast translation!of Pure} for ordinary applications,
type applications, nested abstractions, meta implications and function types.
Their net effect on some representative input strings is shown in
Figure~\ref{fig:parse_ast_tr}.

\begin{figure}[htb]
\begin{center}
{\tt\begin{tabular}{ll}
\rm string & \rm ast \\\hline
"f(x, y, z)" & (f x y z) \\
"'a ty" & (ty 'a) \\
"('a, 'b) ty" & (ty 'a 'b) \\
"\%x y z.\ t" & ("_abs" x ("_abs" y ("_abs" z t))) \\
"\%x ::\ 'a.\ t" & ("_abs" ("_constrain" x 'a) t) \\
"[| P; Q; R |] => S" & ("==>" P ("==>" Q ("==>" R S))) \\
"['a, 'b, 'c] => 'd" & ("fun" 'a ("fun" 'b ("fun" 'c 'd)))
\end{tabular}}
\end{center}
\caption{Built-in Parse Ast Translations}
\label{fig:parse_ast_tr}
\end{figure}

This translation process is guided by names of constant heads of asts. The
list of constants invoking parse ast translations is shown in the output of
{\tt Syntax.print_syntax} under {\tt parse_ast_translation}.


\subsection{Asts to terms *}

After the ast has been normalized by the macro expander (see
\S\ref{sec:macros}), it is transformed into a term with yet another set of
translation functions interspersed: {\bf parse translations}\indexbold{parse
translation}. The result is a non-typed term\index{term!non-typed} which may
contain type constraints, that is, 2-place applications with head {\tt
"_constrain"}. The second argument of a constraint is a type encoded as a
term. Real types will be introduced later during type inference, which is not
discussed here.

If we first ignore the built-in parse translations of Pure, then the mapping
$term_of_ast$\index{term_of_ast@$term_of_ast$} from asts to (non-typed) terms
is defined by:
\begin{itemize}
\item $term_of_ast(\Constant x) = \ttfct{Const} (x, \mtt{dummyT})$.

\item $term_of_ast(\Variable \mtt{"?}xi\mtt") = \ttfct{Var} ((x, i),
\mtt{dummyT})$, where $x$ is the base name and $i$ the index extracted
from $xi$.

\item $term_of_ast(\Variable x) = \ttfct{Free} (x, \mtt{dummyT})$.

\item $term_of_ast(\Appl{f, x@1, \ldots, x@n}) = term_of_ast(f) ~{\tt\$}~
term_of_ast(x@1)$ {\tt\$} \dots\ {\tt\$} $term_of_ast(x@n)$.
\end{itemize}
Note that {\tt Const}, {\tt Var}, {\tt Free} belong to the datatype {\tt
term} and \ttindex{dummyT} is bound to some dummy {\tt typ}, which is ignored
during type-inference.
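As a sketch, the $term_of_ast$ rules look like this in Python (the tuple
encodings of asts and terms are invented for illustration; note how an
{\tt Appl} becomes a left-nested chain of {\tt \$} applications):

```python
# Terms rendered as ('Const', x, T), ('Free', x, T), ('Var', (x, i), T)
# and ('$', f, x); the string 'dummyT' stands for the dummy type.

def split_index(body):             # "x1" -> ("x", 1); "x" -> ("x", 0)
    if '.' in body:
        base, idx = body.rsplit('.', 1)
        return base, int(idx)
    j = len(body)
    while j > 1 and body[j - 1].isdigit():
        j -= 1
    return body[:j], int(body[j:] or '0')

def term_of_ast(ast):
    tag = ast[0]
    if tag == 'Constant':
        return ('Const', ast[1], 'dummyT')
    if tag == 'Variable':
        if ast[1].startswith('?'):             # unknown ?xi
            base, idx = split_index(ast[1][1:])
            return ('Var', (base, idx), 'dummyT')
        return ('Free', ast[1], 'dummyT')
    f, *xs = ast[1]                            # Appl: left-nested $
    t = term_of_ast(f)
    for x in xs:
        t = ('$', t, term_of_ast(x))
    return t
```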

So far the outcome is still a first-order term. {\tt Abs}s and {\tt Bound}s
are introduced by parse translations associated with {\tt "_abs"} and {\tt
"!!"} (and any other user defined binder).


\subsection{Printing of terms *}
|
|
576 |
|
|
577 |
When terms are prepared for printing, they are first transformed into asts
|
|
578 |
via $ast_of_term$\index{ast_of_term@$ast_of_term$} (ignoring {\bf print
|
|
579 |
translations}\indexbold{print translation}):
\begin{itemize}
\item $ast_of_term(\ttfct{Const} (x, \tau)) = \Constant x$.

\item $ast_of_term(\ttfct{Free} (x, \tau)) = constrain (\Variable x,
  \tau)$.

\item $ast_of_term(\ttfct{Var} ((x, i), \tau)) = constrain (\Variable
  \mtt{"?}xi\mtt", \tau)$, where $\mtt?xi$ is the string representation of
  the {\tt indexname} $(x, i)$.

\item $ast_of_term(\ttfct{Abs} (x, \tau, t)) = \ttfct{Appl}
  \mathopen{\mtt[} \Constant \mtt{"_abs"}, constrain(\Variable x', \tau),$
  $ast_of_term(t') \mathclose{\mtt]}$, where $x'$ is a version of $x$
  renamed away from all names occurring in $t$, and $t'$ is the body $t$ with
  all {\tt Bound}s referring to this {\tt Abs} replaced by $\ttfct{Free}
  (x', \mtt{dummyT})$.

\item $ast_of_term(\ttfct{Bound} i) = \Variable \mtt{"B.}i\mtt"$. Note that
  the occurrence of loose bound variables is abnormal and should never
  happen when printing well-typed terms.

\item $ast_of_term(f \ttrel{\$} x@1 \ttrel{\$} \ldots \ttrel{\$} x@n) =
  \ttfct{Appl} \mathopen{\mtt[} ast_of_term(f), ast_of_term(x@1), \ldots,$
  $ast_of_term(x@n) \mathclose{\mtt]}$, where $f$ is not an application.

\item $constrain(x, \tau) = x$, if $\tau = \mtt{dummyT}$ \index{*dummyT} or
  \ttindex{show_types} is not set to {\tt true}.

\item $constrain(x, \tau) = \Appl{\Constant \mtt{"_constrain"}, x, ty}$,
  where $ty$ is the ast encoding of $\tau$, that is: type constructors as
  {\tt Constant}s, type variables as {\tt Variable}s and type applications
  as {\tt Appl}s with the head type constructor as first element.
  Additionally, if \ttindex{show_sorts} is set to {\tt true}, some type
  variables are decorated with an ast encoding of their sort.
\end{itemize}
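The clauses above can be pictured in executable form. The following Python sketch uses a simplified tuple encoding of terms and asts and drops all type constraints (so $constrain$ is the identity here); the encoding and all names are illustrative only, not Isabelle's ML interface:

```python
# Simplified tuple encodings (illustrative only, not Isabelle's ML types):
#   terms: ("Const", x), ("Free", x), ("Abs", x, body), ("App", f, arg)
#   asts:  ("Constant", x), ("Variable", x), ("Appl", [ast, ...])
# Type constraints are dropped, i.e. constrain is the identity here.

def ast_of_term(t):
    kind = t[0]
    if kind == "Const":
        return ("Constant", t[1])
    if kind == "Free":
        return ("Variable", t[1])
    if kind == "Abs":                       # bound variable becomes a Variable
        _, x, body = t
        return ("Appl", [("Constant", "_abs"), ("Variable", x),
                         ast_of_term(body)])
    if kind == "App":                       # flatten nested applications
        args = []
        while t[0] == "App":
            t, arg = t[1], t[2]
            args.insert(0, arg)
        return ("Appl", [ast_of_term(t)] + [ast_of_term(a) for a in args])
    raise ValueError("unknown term: %r" % (t,))
```

The renaming of the bound variable and the replacement of {\tt Bound}s by {\tt Free}s are omitted in this sketch; the body is simply assumed to mention its bound variable as a {\tt Free}.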

\medskip
After an ast has been normalized wrt.\ the print macros, it is transformed
into the final output string. The built-in {\bf print ast
translations}\indexbold{print ast translation} are essentially the inverses
of the parse ast translations of Figure~\ref{fig:parse_ast_tr}.

For the actual printing process, the names attached to grammar productions of
the form $\ldots A@{p@1}^1 \ldots A@{p@n}^n \ldots \mtt{=>} c$ play a vital
role. Whenever an ast with constant head $c$, i.e.\ $\mtt"c\mtt"$ or
$(\mtt"c\mtt"~ x@1 \ldots x@n)$, is encountered it is printed according to
the production for $c$. This means that first the arguments $x@1$ \dots $x@n$
are printed, then put in parentheses if necessary for reasons of precedence,
and finally joined to a single string, separated by the syntactic sugar of
the production (denoted by ``\dots'' above).

If an application $(\mtt"c\mtt"~ x@1 \ldots x@m)$ has more arguments than the
corresponding production, it is first split into $((\mtt"c\mtt"~ x@1 \ldots
x@n) ~ x@{n+1} \ldots x@m)$. Applications with too few arguments, with a
non-constant head or without a corresponding production are printed as
$f(x@1, \ldots, x@l)$ or $(\alpha@1, \ldots, \alpha@l) ty$. A single
$\Variable x$ is simply printed as $x$.

Note that the system does {\em not\/} insert blanks automatically. If blanks
are required to separate tokens, they should be part of the mixfix
declaration the production has been derived from. Mixfix declarations may
also contain pretty printing annotations.



\section{Mixfix declarations} \label{sec:mixfix}

When defining theories, the user usually declares new constants as names
associated with a type followed by an optional \bfindex{mixfix annotation}.
If none of the constants are introduced with mixfix annotations, there is no
concrete syntax to speak of: terms can only be abstractions or applications
of the form $f(t@1, \dots, t@n)$. Since this notation quickly becomes
unreadable, Isabelle supports syntax definitions in the form of unrestricted
context-free \index{context-free grammar} \index{precedence grammar}
precedence grammars using mixfix annotations.

Mixfix annotations describe the {\em concrete\/} syntax, a standard
translation into the abstract syntax and a pretty printing scheme, all in
one. Isabelle syntax definitions are inspired by \OBJ's~\cite{OBJ} {\em
mixfix\/} syntax. Each mixfix annotation defines a precedence grammar
production and optionally associates a constant with it.

There is a general form of mixfix annotation and some shortcuts for common
cases like infix operators.

The general \bfindex{mixfix declaration}, as it may occur within the {\tt
consts} section\index{consts section@{\tt consts} section} of a {\tt .thy}
file, specifies a constant declaration and a grammar production at the same
time. It has the form {\tt $c$ ::\ "$\tau$" ("$sy$" $ps$ $p$)} and is
interpreted as follows:
\begin{itemize}
\item {\tt $c$ ::\ "$\tau$"} is the actual constant declaration without any
  syntactic significance.

\item $sy$ is the right-hand side of the production, specified as a
  symbolic string. In general, $sy$ is of the form $\alpha@0 \_ \alpha@1
  \dots \alpha@{n-1} \_ \alpha@n$, where each occurrence of \ttindex{_}
  denotes an argument\index{argument!mixfix} position and the strings
  $\alpha@i$ do not contain (non-escaped) {\tt _}. The $\alpha@i$ may
  consist of delimiters\index{delimiter},
  spaces\index{space (pretty printing)} or \rmindex{pretty printing}
  annotations (see below).

\item $\tau$ specifies the syntactic categories of the arguments on the
  left-hand and of the right-hand side. Arguments may be nonterminals or
  valued tokens. If $sy$ is of the form above, $\tau$ must be a nested
  function type with at least $n$ argument positions, say $\tau = [\tau@1,
  \dots, \tau@n] \To \tau'$. The syntactic category of argument $i$ is
  derived from type $\tau@i$ (see $root_of_type$ in
  \S\ref{sec:basic_syntax}). The result, i.e.\ the left-hand side of the
  production, is derived from type $\tau'$. Note that the $\tau@i$ and
  $\tau'$ may be function types.

\item $c$ is the name of the constant associated with the production. If
  $c$ is the empty string {\tt ""} then this is a \rmindex{copy
  production}. Otherwise, parsing an instance of the phrase $sy$ generates
  the ast {\tt ("$c$" $a@1$ $\ldots$ $a@n$)}, where $a@i$ is the ast
  generated by parsing the $i$-th argument.

\item $ps$ is an optional list of at most $n$ integers, say {\tt [$p@1$,
  $\ldots$, $p@m$]}, where $p@i$ is the minimal \rmindex{precedence}
  required of any phrase that may appear as the $i$-th argument. Missing
  precedences default to $0$.

\item $p$ is the \rmindex{precedence} of this production.
\end{itemize}

Precedences\index{precedence!range of} may range between $0$ and
$max_pri$\indexbold{max_pri@$max_pri$} (= 1000). If you want to ignore them,
the safest way to do so is to use the declaration {\tt $c$ ::\ "$\tau$"
("$sy$")}. The resulting production puts no precedence constraints on any of
its arguments and has maximal precedence itself.

\begin{example}
The following theory specification contains a {\tt consts} section that
encodes the precedence grammar of Example~\ref{ex:precedence} as mixfix
declarations:
\begin{ttbox}
EXP = Pure +
types
  exp 0
arities
  exp :: logic
consts
  "0" :: "exp"                ("0" 9)
  "+" :: "[exp, exp] => exp"  ("_ + _" [0, 1] 0)
  "*" :: "[exp, exp] => exp"  ("_ * _" [3, 2] 2)
  "-" :: "exp => exp"         ("- _" [3] 3)
end
\end{ttbox}
Note that the {\tt arities} declaration causes {\tt exp} to be added to the
syntax's roots. If you put the above text into a file {\tt exp.thy} and load
it via {\tt use_thy "EXP"}, you can run some tests:
\begin{ttbox}
val read_exp = Syntax.test_read (syn_of EXP.thy) "exp";
read_exp "0 * 0 * 0 * 0 + 0 + 0 + 0";
{\out tokens: "0" "*" "0" "*" "0" "*" "0" "+" "0" "+" "0" "+" "0"}
{\out raw: ("+" ("+" ("+" ("*" "0" ("*" "0" ("*" "0" "0"))) "0") "0") "0")}
{\out \vdots}
read_exp "0 + - 0 + 0";
{\out tokens: "0" "+" "-" "0" "+" "0"}
{\out raw: ("+" ("+" "0" ("-" "0")) "0")}
{\out \vdots}
\end{ttbox}
The output of \ttindex{Syntax.test_read} includes the token list ({\tt
tokens}) and the raw ast directly derived from the parse tree, ignoring parse
ast translations. The rest is tracing information provided by the macro
expander (see \S\ref{sec:macros}).

Executing {\tt Syntax.print_gram (syn_of EXP.thy)} reveals the actual grammar
productions derived from the above mixfix declarations (lots of additional
information deleted):
\begin{ttbox}
exp = "0"  => "0" (9)
exp = exp[0] "+" exp[1] => "+" (0)
exp = exp[3] "*" exp[2] => "*" (2)
exp = "-" exp[3] => "-" (3)
\end{ttbox}
\end{example}
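To see how the argument precedences {\tt [l, r]} and the production precedence interact, here is a small precedence-climbing parser hard-wired to exactly these four productions. This is an illustrative sketch only (Isabelle uses a general context-free parser, not this algorithm), but it reproduces the raw asts shown above:

```python
# Precedence-climbing sketch of the EXP grammar; each production carries its
# argument precedences [l, r] and its own precedence p. A phrase may fill an
# argument slot only if its precedence meets the slot's minimum.

def parse_exp(tokens, min_p=0):
    toks = list(tokens)

    def parse(min_p):
        # atomic and prefix productions
        if toks and toks[0] == "0":              # exp = "0"        => "0" (9)
            toks.pop(0); node, p = "0", 9
        elif toks and toks[0] == "-" and 3 >= min_p:
            toks.pop(0)                          # exp = "-" exp[3] => "-" (3)
            node, p = ("-", parse(3)), 3
        else:
            raise SyntaxError("unexpected tokens: %r" % toks)
        # infix productions, applied while admissible in the current context
        while toks:
            if toks[0] == "+" and 0 >= min_p and p >= 0:
                toks.pop(0)                      # exp[0] "+" exp[1] => "+" (0)
                node, p = ("+", node, parse(1)), 0
            elif toks[0] == "*" and 2 >= min_p and p >= 3:
                toks.pop(0)                      # exp[3] "*" exp[2] => "*" (2)
                node, p = ("*", node, parse(2)), 2
            else:
                break
        return node

    return parse(min_p)
```

The asymmetric argument precedences are what make {\tt +} associate to the left and {\tt *} to the right, exactly as in the {\tt raw} asts printed by {\tt Syntax.test_read} above.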

Let us now have a closer look at the structure of the string $sy$ appearing
in mixfix annotations. This string specifies a list of parsing and printing
directives, namely delimiters\index{delimiter},
arguments\index{argument!mixfix}, spaces\index{space (pretty printing)},
blocks of indentation\index{block (pretty printing)} and optional or forced
line breaks\index{break (pretty printing)}. These are encoded via the
following character sequences:

\begin{description}
\item[~\ttindex_~] An argument\index{argument!mixfix} position.

\item[~$d$~] A \rmindex{delimiter}, i.e.\ a non-empty sequence of
  non-special or escaped characters. Escaping a
  character\index{escape character} means preceding it with a {\tt '}
  (quote). Thus you have to write {\tt ''} if you really want a single
  quote. Note that delimiters may never contain white space.

\item[~$s$~] A non-empty sequence of spaces\index{space (pretty printing)}
  for printing.

\item[~{\ttindex($n$}~] Open a block\index{block (pretty printing)}. $n$ is
  an optional sequence of digits that specifies the amount of indentation
  to be added when a line break occurs within the block. If {\tt(} is not
  followed by a digit, the indentation defaults to $0$.

\item[~\ttindex)~] Close a block.

\item[~\ttindex{//}~] Force a line break\index{break (pretty printing)}.

\item[~\ttindex/$s$~] Allow a line break\index{break (pretty printing)}.
  Spaces $s$ right after {\tt /} are only printed if the break is not
  taken.
\end{description}

In terms of parsing, arguments are nonterminals or valued tokens, while
delimiters are literal tokens. The other directives have significance only
for printing. The \rmindex{pretty printing} mechanism of Isabelle is
essentially the one described in \cite{paulson91}.


\subsection{Infixes}

Infix\index{infix} operators associating to the left or right can be declared
conveniently using \ttindex{infixl} or \ttindex{infixr}.

Roughly speaking, the form {\tt $c$ ::\ "$\tau$" (infixl $p$)} abbreviates:
\begin{ttbox}
"op \(c\)" ::\ "\(\tau\)"   ("op \(c\)")
"op \(c\)" ::\ "\(\tau\)"   ("(_ \(c\)/ _)" [\(p\), \(p + 1\)] \(p\))
\end{ttbox}
and {\tt $c$ ::\ "$\tau$" (infixr $p$)} abbreviates:
\begin{ttbox}
"op \(c\)" ::\ "\(\tau\)"   ("op \(c\)")
"op \(c\)" ::\ "\(\tau\)"   ("(_ \(c\)/ _)" [\(p + 1\), \(p\)] \(p\))
\end{ttbox}

Thus, prefixing infixes with \ttindex{op} makes them behave like ordinary
function symbols. Special characters occurring in $c$ have to be escaped as
in delimiters. Also note that the expanded forms above would actually be
illegal at the user level because of duplicate declarations of constants.


\subsection{Binders}

A \bfindex{binder} is a variable-binding construct, such as a
\rmindex{quantifier}. The constant declaration \indexbold{*binder}
\begin{ttbox}
\(c\) ::\ "\(\tau\)"   (binder "\(Q\)" \(p\))
\end{ttbox}
introduces a binder $c$ of type $\tau$, which must have the form $(\tau@1 \To
\tau@2) \To \tau@3$. Its concrete syntax is $Q~x.P$. A binder is like a
generalized quantifier where $\tau@1$ is the type of the bound variable $x$,
$\tau@2$ the type of the body $P$, and $\tau@3$ the type of the whole term.
For example $\forall$ can be declared like this:
\begin{ttbox}
All :: "('a => o) => o"   (binder "ALL " 10)
\end{ttbox}
This allows us to write $\forall x.P$ either as {\tt All(\%$x$.$P$)} or {\tt
ALL $x$.$P$}. When printing terms, Isabelle usually uses the latter form, but
has to fall back on $\mtt{All}(P)$ if $P$ is not an abstraction.

Binders $c$ of type $(\sigma \To \tau) \To \tau$ can be nested; then the
internal form $c(\lambda x@1. c(\lambda x@2. \ldots c(\lambda x@n. P)
\ldots))$ corresponds to the external $Q~x@1~x@2 \ldots x@n. P$.

\medskip
The general binder declaration
\begin{ttbox}
\(c\) ::\ "(\(\tau@1\) => \(\tau@2\)) => \(\tau@3\)"   (binder "\(Q\)" \(p\))
\end{ttbox}
is internally expanded to
\begin{ttbox}
\(c\)   ::\ "(\(\tau@1\) => \(\tau@2\)) => \(\tau@3\)"
"\(Q\)" ::\ "[idts, \(\tau@2\)] => \(\tau@3\)"   ("(3\(Q\)_./ _)" \(p\))
\end{ttbox}
with $idts$ being the syntactic category for a list of $id$s optionally
constrained (see Figure~\ref{fig:pure_gram}). Note that special characters in
$Q$ have to be escaped as in delimiters.

Additionally, a parse translation\index{parse translation!for binder} for $Q$
and a print translation\index{print translation!for binder} for $c$ are
installed. These perform the translation between the internal and external
forms behind the scenes.
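The effect of such a binder parse translation can be pictured as a simple fold over the list of bound identifiers. The following Python fragment is a hypothetical sketch using tuples in place of asts; the actual translation is an ML function installed by Isabelle:

```python
# Hypothetical sketch of a binder parse translation: fold the list of bound
# identifiers into nested applications of the binder constant c.

def binder_tr(c, idts, body):
    # e.g. ("All", ["x", "y"], "P") folds to All(%x. All(%y. P))
    ast = body
    for x in reversed(idts):
        ast = (c, ("_abs", x, ast))
    return ast
```

The corresponding print translation performs the inverse unfolding, collecting nested abstractions under $c$ back into a single list of bound identifiers.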



\section{Syntactic translations (macros)} \label{sec:macros}

So far we have pretended that there is a close enough relationship between
concrete and abstract syntax to allow an automatic translation from one to
the other, using the constant name supplied with each non-copy production. In
many cases this scheme is not powerful enough. Some typical examples involve
variable binding constructs (e.g.\ {\tt ALL x:A.P} vs.\ {\tt Ball(A, \%x.P)})
or convenient notations for enumerations like finite sets, lists etc.\ (e.g.\
{\tt [x, y, z]} vs.\ {\tt Cons(x, Cons(y, Cons(z, Nil)))}).

Isabelle offers such translation facilities at two different levels, namely
{\bf macros}\indexbold{macro} and {\bf translation functions}.

Macros are specified by first-order rewriting systems that operate on asts.
They are usually easy to read and in most cases not very difficult to write.
Unfortunately, some more obscure translations cannot be expressed as macros
and you have to fall back on the more powerful mechanism of translation
functions written in \ML. These are quite unreadable and hard to write (see
\S\ref{sec:tr_funs}).

\medskip
Let us now get started with the macro system by means of a simple example:

\begin{example}~ \label{ex:set_trans}

\begin{ttbox}
SET = Pure +
types
  i, o 0
arities
  i, o :: logic
consts
  Trueprop       :: "o => prop"              ("_" 5)
  Collect        :: "[i, i => o] => i"
  "{\at}Collect" :: "[idt, i, o] => i"       ("(1{\ttlbrace}_:_./ _{\ttrbrace})")
  Replace        :: "[i, [i, i] => o] => i"
  "{\at}Replace" :: "[idt, idt, i, o] => i"  ("(1{\ttlbrace}_./ _:_, _{\ttrbrace})")
  Ball           :: "[i, i => o] => o"
  "{\at}Ball"    :: "[idt, i, o] => o"       ("(3ALL _:_./ _)" 10)
translations
  "{\ttlbrace}x:A. P{\ttrbrace}"    == "Collect(A, %x. P)"
  "{\ttlbrace}y. x:A, Q{\ttrbrace}" == "Replace(A, %x y. Q)"
  "ALL x:A. P"                      == "Ball(A, %x. P)"
end
\end{ttbox}

This and the following theories are complete working examples, though they
are fragmentary as they contain merely syntax. They are somewhat fashioned
after {\tt ZF/zf.thy}, where you should look for a good real-world example.

{\tt SET} defines constants for set comprehension ({\tt Collect}),
replacement ({\tt Replace}) and bounded universal quantification ({\tt
Ball}). Without additional syntax you would have to express $\forall x \in A.
P(x)$ as {\tt Ball(A, P)}. Since this is quite awkward, we define additional
constants with appropriate concrete syntax. These constants are decorated
with {\tt\at} to stress their purely syntactic purpose; they should never
occur within the final well-typed terms. Another consequence is that the user
cannot refer to such names directly, since they are not legal identifiers.

The translations cause the replacement of external forms by internal forms
after parsing, and vice versa before printing of terms.
\end{example}

This is only a very simple but common instance of a more powerful mechanism.
As a specification of what is to be translated, it should be comprehensible
without further explanations. But there are also some snags and other
peculiarities that are typical of macro systems in general. The purpose of
this section is to explain how Isabelle's macro system really works.


\subsection{Specifying macros}

Basically macros are rewrite rules on asts. But unlike the macro systems of
various programming languages, Isabelle's macros work both ways. Therefore a
syntax contains two lists of rules: one for parsing and one for printing.

The {\tt translations} section\index{translations section@{\tt translations}
section} consists of a list of rule specifications of the form:

\begin{center}
{\tt $[$ ($root$) $]$ $string$ $[$ => $|$ <= $|$ == $]$ $[$ ($root$) $]$
$string$}.
\end{center}

This specifies a \rmindex{parse rule} ({\tt =>}), a \rmindex{print rule}
({\tt <=}), or both ({\tt ==}). The two $string$s, preceded by optional
parenthesized $root$s, denote the left-hand and right-hand side of the rule
as `source code', i.e.\ in the usual syntax of terms.

Rules are internalized wrt.\ an intermediate signature that is obtained from
the parent theories' ones by adding all material of all sections preceding
{\tt translations} in the {\tt .thy} file. In particular, new syntax defined
in {\tt consts} is already effective.

Then part of the process that transforms input strings into terms is applied:
lexing, parsing and parse ast translations (see \S\ref{sec:asts}). Macros
specified in the parents are {\em not\/} expanded. Also note that the lexer
runs in a different mode that additionally accepts identifiers of the form
$\_~letter~quasiletter^*$ (like {\tt _idt}, {\tt _K}). The syntactic category
to parse is specified by $root$, which defaults to {\tt logic}.

Finally, Isabelle tries to guess which atoms of the resulting ast of the rule
should be treated as constants during matching (see below). These names are
extracted from all class, type and constant declarations made so far.

\medskip
The result is two lists of translation rules in internal form, that is,
pairs of asts. They can be viewed using {\tt Syntax.print_syntax} (sections
\ttindex{parse_rules} and \ttindex{print_rules}). For {\tt SET} of
Example~\ref{ex:set_trans} these are:
\begin{ttbox}
parse_rules:
  ("{\at}Collect" x A P)    ->  ("Collect" A ("_abs" x P))
  ("{\at}Replace" y x A Q)  ->  ("Replace" A ("_abs" x ("_abs" y Q)))
  ("{\at}Ball" x A P)       ->  ("Ball" A ("_abs" x P))
print_rules:
  ("Collect" A ("_abs" x P))             ->  ("{\at}Collect" x A P)
  ("Replace" A ("_abs" x ("_abs" y Q)))  ->  ("{\at}Replace" y x A Q)
  ("Ball" A ("_abs" x P))                ->  ("{\at}Ball" x A P)
\end{ttbox}

Note that in this notation all rules are oriented left to right. In the {\tt
translations} section, which has been split into these two lists, print
rules appeared right to left.

\begin{warn}
Be careful not to choose names for variables in rules that are actually
treated as constants. If in doubt, check the rules in their internal form or
the section labeled {\tt consts} in the output of {\tt Syntax.print_syntax}.
\end{warn}


\subsection{Applying rules}

In the course of parsing and printing terms, asts are generated as an
intermediate form as pictured in Figure~\ref{fig:parse_print}. These asts are
normalized wrt.\ the given lists of translation rules in a uniform manner. As
stated earlier, asts are supposed to be first-order `terms'. The rewriting
systems derived from {\tt translations} sections essentially resemble
traditional first-order term rewriting systems. We first examine how a single
rule is applied.

Let $t$ be the ast to be normalized and $(l, r)$ some translation rule. A
subast $u$ of $t$ is called a {\bf redex}\indexbold{redex (ast)} (reducible
expression) if it is an instance of $l$. In this case $l$ is said to {\bf
match}\indexbold{match (ast)} $u$. A redex matched by $l$ may be replaced by
the corresponding instance of $r$, thus {\bf rewriting}\index{rewrite (ast)}
the ast $t$.

Matching requires some notion of {\bf place-holders}\indexbold{place-holder
(ast)} that may occur in rule patterns but not in ordinary asts, which are
considered ground. Here we simply use {\tt Variable}s for this purpose.

More formally, the matching of $u$ by $l$ is performed as follows (the rule
pattern is the second argument): \index{match (ast)@$match$ (ast)}
\begin{itemize}
\item $match(\Constant x, \Constant x) = \mbox{OK}$.

\item $match(\Variable x, \Constant x) = \mbox{OK}$.

\item $match(u, \Variable x) = \mbox{OK, bind}~x~\mbox{to}~u$.

\item $match(\Appl{u@1, \ldots, u@n}, \Appl{l@1, \ldots, l@n}) = match(u@1,
  l@1), \ldots, match(u@n, l@n)$.

\item $match(u, l) = \mbox{FAIL}$ in any other case.
\end{itemize}

This means that a {\tt Constant} pattern matches any atomic ast of the same
name, while a {\tt Variable} matches any ast. If successful, $match$ yields a
substitution $\sigma$ that is applied to $r$, generating the appropriate
instance that replaces $u$.
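The $match$ clauses translate almost literally into code. This Python sketch assumes the tuple encoding {\tt ("Constant", x)}, {\tt ("Variable", x)}, {\tt ("Appl", [\dots])} for asts (an illustrative encoding, not Isabelle's ML implementation) and returns the substitution as a dictionary, or {\tt None} for FAIL:

```python
# Tuple encoding assumed for asts: ("Constant", x), ("Variable", x),
# ("Appl", [ast, ...]). Returns the substitution as a dict, or None for FAIL.
# Since rules are left linear, a pattern Variable is never bound twice.

def match(u, pat, env=None):
    env = {} if env is None else env
    if pat[0] == "Variable":                # place-holder: bind it to u
        env[pat[1]] = u
        return env
    if pat[0] == "Constant":                # matches any atom of the same name
        if u[0] in ("Constant", "Variable") and u[1] == pat[1]:
            return env
        return None
    if u[0] == "Appl" and pat[0] == "Appl" and len(u[1]) == len(pat[1]):
        for ui, pi in zip(u[1], pat[1]):
            if match(ui, pi, env) is None:
                return None
        return env
    return None                             # FAIL in any other case
```

Note how the {\tt Constant} branch also accepts a {\tt Variable} atom of the same name, reflecting the second clause $match(\Variable x, \Constant x) = \mbox{OK}$ discussed below.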
|
|
1038 |
|
|
1039 |
\medskip
|
|
1040 |
In order to make things simple and fast, ast rewrite rules $(l, r)$ are
|
|
1041 |
restricted by the following conditions:
|
|
1042 |
\begin{itemize}
|
|
1043 |
\item Rules have to be left linear, i.e.\ $l$ must not contain any {\tt
|
|
1044 |
Variable} more than once.
|
|
1045 |
|
|
1046 |
\item Rules must have constant heads, i.e.\ $l = \mtt"c\mtt"$ or $l =
|
|
1047 |
(\mtt"c\mtt" ~ x@1 \ldots x@n)$.
|
|
1048 |
|
|
1049 |
\item The set of variables contained in $r$ has to be a subset of those of
|
|
1050 |
$l$.
|
|
1051 |
\end{itemize}
|
|
1052 |
|
|
1053 |
\medskip
|
108
|
1054 |
Having first-order matching in mind, the second case of $match$ may look a
|
104
|
1055 |
bit odd. But this is exactly the place, where {\tt Variable}s of non-rule
|
|
1056 |
asts behave like {\tt Constant}s. The deeper meaning of this is related with
|
135
|
1057 |
asts being very `primitive' in some sense, ignorant of the underlying
|
|
1058 |
`semantics', not far removed from parse trees. At this level it is not yet
|
104
|
1059 |
known, which $id$s will become constants, bounds, frees, types or classes. As
|
|
1060 |
$ast_of_pt$ (see \S\ref{sec:asts}) shows, former parse tree heads appear in
|
|
1061 |
asts as {\tt Constant}s, while $id$s, $var$s, $tfree$s and $tvar$s become
|
|
1062 |
{\tt Variable}s.
|
|
1063 |
|
|
1064 |
This is at variance with asts generated from terms before printing (see
|
|
1065 |
$ast_of_term$ in \S\ref{sec:asts}), where all constants and type constructors
|
|
1066 |
become {\tt Constant}s.
|
|
1067 |
|
|
1068 |
\begin{warn}
|
|
1069 |
This means asts may contain quite a messy mixture of {\tt Variable}s and {\tt
|
|
1070 |
Constant}s, which is insignificant at macro level because $match$ treats them
|
|
1071 |
alike.
|
|
1072 |
\end{warn}
|
|
1073 |
|
|
1074 |
Because of this behaviour, different kinds of atoms with the same name are
|
|
1075 |
indistinguishable, which may make some rules prone to misbehaviour. Regard
|
|
1076 |
the following fragmentary example:
|
|
1077 |
\begin{ttbox}
|
|
1078 |
types
|
|
1079 |
Nil 0
|
|
1080 |
consts
|
|
1081 |
Nil :: "'a list"
|
|
1082 |
"[]" :: "'a list" ("[]")
|
|
1083 |
translations
|
|
1084 |
"[]" == "Nil"
|
|
1085 |
\end{ttbox}
|
|
1086 |
Then the term {\tt Nil} will be printed as {\tt []}, just as expected. What
|
|
1087 |
happens with \verb|%Nil.t| or {\tt x::Nil} is left as an exercise.
|
|
1088 |
|
|
1089 |
|
|
1090 |
\subsection{Rewriting strategy}
|
|
1091 |
|
|
1092 |
When normalizing an ast by repeatedly applying translation rules until no
|
|
1093 |
more rule is applicable, there are in each step two choices: which rule to
|
|
1094 |
apply next, and which redex to reduce.
|
|
1095 |
|
|
1096 |
We could assume that the user always supplies terminating and confluent
|
|
1097 |
rewriting systems, but this would often complicate things unnecessarily.
|
|
1098 |
Therefore, we reveal part of the actual rewriting strategy: The normalizer
|
|
1099 |
always applies the first matching rule reducing an unspecified redex chosen
|
|
1100 |
first.
|
|
1101 |
|
135
|
1102 |
Thereby, `first rule' is roughly speaking meant wrt.\ the appearance of the
|
104
|
1103 |
rules in the {\tt translations} sections. But this is more tricky than it
|
|
1104 |
seems: If a given theory is {\em extended}, new rules are simply appended to
|
|
1105 |
the end. But if theories are {\em merged}, it is not clear which list of
|
|
1106 |
rules has priority over the other. In fact the merge order is left
|
|
1107 |
unspecified. This shouldn't cause any problems in practice, since
|
|
1108 |
translations of different theories usually do not overlap. {\tt
|
|
1109 |
Syntax.print_syntax} shows the rules in their internal order.
|
|
1110 |
|
|
1111 |
\medskip
|
|
1112 |
You can watch the normalization of asts during parsing and printing by
|
108
|
1113 |
setting \ttindex{Syntax.trace_norm_ast} to {\tt true}. An alternative is the
|
|
1114 |
use of \ttindex{Syntax.test_read}, which is always in trace mode. The
|
|
1115 |
information displayed when tracing\index{tracing (ast)} includes: the ast
|
|
1116 |
before normalization ({\tt pre}), redexes with results ({\tt rewrote}), the
|
|
1117 |
normal form finally reached ({\tt post}) and some statistics ({\tt
|
|
1118 |
normalize}). If tracing is off, \ttindex{Syntax.stat_norm_ast} can be set to
|
|
1119 |
{\tt true} in order to enable printing of the normal form and statistics
|
|
1120 |
only.
|
104
|
1121 |
|
|
1122 |
|
|
1123 |
\subsection{More examples}
|
|
1124 |
|
|
1125 |
Let us first reconsider Example~\ref{ex:set_trans}, which is concerned with
|
|
1126 |
variable binding constructs.
|
|
1127 |
|
|
1128 |
There is a minor disadvantage over an implementation via translation
|
|
1129 |
functions (as done for binders):
|
|
1130 |
|
|
1131 |
\begin{warn}
|
|
1132 |
If \ttindex{eta_contract} is set to {\tt true}, terms will be
|
108
|
1133 |
$\eta$-contracted {\em before\/} the ast rewriter sees them. Thus some
|
104
|
1134 |
abstraction nodes needed for print rules to match may get lost. E.g.\
|
|
1135 |
\verb|Ball(A, %x. P(x))| is contracted to {\tt Ball(A, P)}, the print rule is
|
|
1136 |
no longer applicable and the output will be {\tt Ball(A, P)}. Note that
|
108
|
1137 |
$\eta$-expansion via macros is {\em not\/} possible.
|
104
|
1138 |
\end{warn}
|
|
1139 |
|
|
1140 |
\medskip
|
|
1141 |
Another common trap are meta constraints. If \ttindex{show_types} is set to
|
|
1142 |
{\tt true}, bound variables will be decorated by their meta types at the
|
|
1143 |
binding place (but not at occurrences in the body). E.g.\ matching with
|
|
1144 |
\verb|Collect(A, %x. P)| binds {\tt x} to something like {\tt ("_constrain" y
|
|
1145 |
"i")} rather than only {\tt y}. Ast rewriting will cause the constraint to
|
|
1146 |
appear in the external form, say \verb|{y::i:A::i. P::o}|. Therefore your
|
|
1147 |
syntax should be ready for such constraints to be re-read. This is the case
|
|
1148 |
in our example, because of the category {\tt idt} of the first argument.
|
|
1149 |
|
|
1150 |
\begin{warn}
|
|
1151 |
Choosing {\tt id} instead of {\tt idt} is a very common error, especially
|
|
1152 |
since it appears in former versions of most of Isabelle's object-logics.
|
|
1153 |
\end{warn}
|
|
1154 |
|
|
1155 |
\begin{example} \label{ex:finset_trans}
|
|
1156 |
This example demonstrates the use of recursive macros to implement a
|
|
1157 |
convenient notation for finite sets.
|
|
1158 |
\begin{ttbox}
|
|
1159 |
FINSET = SET +
|
|
1160 |
types
|
|
1161 |
is 0
|
|
1162 |
consts
|
|
1163 |
"" :: "i => is" ("_")
|
|
1164 |
"{\at}Enum" :: "[i, is] => is" ("_,/ _")
|
|
1165 |
empty :: "i" ("{\ttlbrace}{\ttrbrace}")
|
|
1166 |
insert :: "[i, i] => i"
|
|
1167 |
"{\at}Finset" :: "is => i" ("{\ttlbrace}(_){\ttrbrace}")
|
|
1168 |
translations
|
|
1169 |
"{\ttlbrace}x, xs{\ttrbrace}" == "insert(x, {\ttlbrace}xs{\ttrbrace})"
|
|
1170 |
"{\ttlbrace}x{\ttrbrace}" == "insert(x, {\ttlbrace}{\ttrbrace})"
|
|
1171 |
end
|
|
1172 |
\end{ttbox}
|
|
1173 |
|
|
1174 |
Finite sets are internally built up by {\tt empty} and {\tt insert}.
|
|
1175 |
Externally we would like to see \verb|{x, y, z}| rather than {\tt insert(x,
|
|
1176 |
insert(y, insert(z, empty)))}.
|
|
1177 |
|
|
1178 |
First we define the generic syntactic category {\tt is} for one or more
|
|
1179 |
objects of type {\tt i} separated by commas (including breaks for pretty
|
|
1180 |
printing). The category has to be declared as a 0-place type constructor, but
|
|
1181 |
without {\tt arities} declaration. Hence {\tt is} is not a logical type, no
|
|
1182 |
default productions will be added, and we can cook our own syntax for {\tt
|
|
1183 |
is} (first two lines of {\tt consts} section). If we had needed generic
|
|
1184 |
enumerations of type $\alpha$ (i.e.\ {\tt logic}), we could have used the
|
|
1185 |
predefined category \ttindex{args} and skipped this part altogether.
|
|
1186 |
|
|
1187 |
Next follows {\tt empty}, which is already equipped with its syntax
|
|
1188 |
\verb|{}|, and {\tt insert} without concrete syntax. The syntactic constant
|
|
1189 |
{\tt\at Finset} provides concrete syntax for enumerations of {\tt i} enclosed
|
|
1190 |
in curly braces. Remember that a pair of parentheses specifies a block of
|
108
|
1191 |
indentation for pretty printing. The category {\tt is} can later be reused
|
104
|
1192 |
for other enumerations like lists or tuples.

The translations may look a bit odd at first sight, but such rules can only
be fully understood in their internal forms, which are:
\begin{ttbox}
parse_rules:
  ("{\at}Finset" ("{\at}Enum" x xs))  ->  ("insert" x ("{\at}Finset" xs))
  ("{\at}Finset" x)  ->  ("insert" x "empty")
print_rules:
  ("insert" x ("{\at}Finset" xs))  ->  ("{\at}Finset" ("{\at}Enum" x xs))
  ("insert" x "empty")  ->  ("{\at}Finset" x)
\end{ttbox}
This shows that \verb|{x, xs}| indeed matches any set enumeration of at least
two elements, binding the first element to {\tt x} and the rest to {\tt xs}.
Likewise, \verb|{xs}| and \verb|{x}| represent any set enumeration. Note that
the parse rules only work in this order.
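To see the rules in action, consider how the enumeration \verb|{x, y, z}| is
rewritten during parsing (a sketch of the intermediate asts, not verbatim
system output):
\begin{ttbox}
("{\at}Finset" ("{\at}Enum" x ("{\at}Enum" y z)))
("insert" x ("{\at}Finset" ("{\at}Enum" y z)))
("insert" x ("insert" y ("{\at}Finset" z)))
("insert" x ("insert" y ("insert" z "empty")))
\end{ttbox}
Each step applies the first parse rule as long as an {\tt\at Enum} argument
remains; the final step falls through to the second rule, which is why the
order of the rules matters.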

\medskip
Some rules are prone to misbehaviour, as
\verb|%empty insert. insert(x, empty)| shows: it is printed as
\verb|%empty insert. {x}|. This problem arises because the ast rewriter
cannot distinguish constants, frees, bounds etc.\ and looks only at the
names of atoms.

Thus the names of {\tt Constant}s occurring in the (internal) left-hand side
of translation rules should be regarded as `reserved keywords'. It is good
practice to choose non-identifiers here, like {\tt\at Finset}, or
sufficiently long and strange names.
\end{example}

\begin{example} \label{ex:prod_trans}
One of the well-formedness conditions for ast rewrite rules stated earlier
implies that you can never introduce new {\tt Variable}s on the right-hand
side. Something like \verb|"K(B)" => "%x. B"| is illegal and, if it were
allowed, could cause variable capture. In such cases you usually have to
fall back on translation functions. But there is a trick that makes things
quite readable in some cases: {\em calling parse translations by parse
rules}. This is demonstrated here.
\begin{ttbox}
PROD = FINSET +
consts
  Pi          :: "[i, i => i] => i"
  "{\at}PROD" :: "[idt, i, i] => i"  ("(3PROD _:_./ _)" 10)
  "{\at}->"   :: "[i, i] => i"       ("(_ ->/ _)" [51, 50] 50)
translations
  "PROD x:A. B" => "Pi(A, %x. B)"
  "A -> B"      => "Pi(A, _K(B))"
end
ML
val print_translation = [("Pi", dependent_tr' ("{\at}PROD", "{\at}->"))];
\end{ttbox}

{\tt Pi} is an internal constant for constructing dependent products. Two
external forms exist: {\tt PROD x:A.B}, the general case, and {\tt A -> B},
an abbreviation for \verb|Pi(A, %x.B)| where {\tt B} does not actually
depend on {\tt x}.

Now the second parse rule is where the trick comes in: {\tt _K(B)} is
introduced during ast rewriting and later becomes \verb|%x.B| due to a
parse translation associated with \ttindex{_K}. Note that a leading {\tt _}
in $id$s is allowed in translation rules, but not in ordinary terms. This
special behaviour of the lexer is very useful for `forging' asts containing
names that are not normally accessible.

Unfortunately, there is no such trick for printing, so we have to add an
{\tt ML} section for the print translation \ttindex{dependent_tr'}.

The parse translation for {\tt _K} is already installed in Pure, and {\tt
dependent_tr'} is exported by the syntax module for public use. See
\S\ref{sec:tr_funs} for more of the arcane lore of translation functions.
\end{example}


\section{Translation functions *} \label{sec:tr_funs}

This section is about the remaining translation mechanism, which enables the
designer of theories to do almost anything with terms or asts during the
parsing or printing process by writing \ML-functions. The logic \LK\ is a
good example of a quite sophisticated use of this facility, transforming
between internal and external representations of associative sequences. The
high-level macro system described in \S\ref{sec:macros} fails here
completely.

\begin{warn}
A full understanding of the matters presented here requires some familiarity
with Isabelle's internals, especially the datatypes {\tt term}, {\tt typ},
{\tt Syntax.ast} and the encodings of types and terms as such at the various
stages of the parsing or printing process. You probably do not really want to
use translation functions at all!
\end{warn}

As already mentioned in \S\ref{sec:asts}, there are four kinds of translation
functions. Each such function is associated with a name that specifies the
head of the asts or terms it is invoked for. Such names can be (logical or
syntactic) constants or type constructors.

{\tt Syntax.print_syntax} displays the sets of names associated with the
translation functions of a {\tt Syntax.syntax} under
\ttindex{parse_ast_translation}, \ttindex{parse_translation},
\ttindex{print_translation} and \ttindex{print_ast_translation}. The user can
add new ones via the {\tt ML} section\index{ML section@{\tt ML} section} of a
{\tt .thy} file, but there may never be more than one function of the same
kind per name.

\begin{warn}
Conceptually, the {\tt ML} section should appear between {\tt consts} and
{\tt translations}, i.e.\ newly installed translation functions are already
effective when macros and logical rules are parsed. {\tt ML} has to be the
last section because the {\tt .thy} file parser cannot detect the end of
\ML\ code other than by reaching end-of-file.
\end{warn}

All text of the {\tt ML} section is simply copied verbatim into the \ML\ file
generated from a {\tt .thy} file. Definitions made here by the user become
components of an \ML\ structure of the same name as the theory to be created.
Therefore local things should be declared within {\tt local}. The following
special \ML\ values, which are all optional, serve as the interface for the
installation of user-defined translation functions.

\begin{ttbox}
val parse_ast_translation: (string * (ast list -> ast)) list
val parse_translation: (string * (term list -> term)) list
val print_translation: (string * (term list -> term)) list
val print_ast_translation: (string * (ast list -> ast)) list
\end{ttbox}

The basic idea behind all four kinds of functions is relatively simple (see
also Figure~\ref{fig:parse_print}): Whenever --- during the transformations
between parse trees, asts and terms --- a combination of the form
$(\mtt"c\mtt"~x@1 \ldots x@n)$ is encountered, and a translation function $f$
of appropriate kind exists for $c$, the result is $f \mtt[ x@1, \ldots,
x@n \mtt]$. Here $x@1, \ldots, x@n$ (with $n \ge 0$) are asts for ast
translations and terms for term translations. A `combination' at ast level is
of the form $\Constant c$ or $\Appl{\Constant c, x@1, \ldots, x@n}$, and at
term level $\ttfct{Const} (c, \tau)$ or $\ttfct{Const} (c, \tau) \ttrel{\$}
x@1 \ttrel{\$} \dots \ttrel{\$} x@n$.
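As a sketch of how such a function might be installed, consider the
following hypothetical {\tt ML} section; the constant {\tt\at Twice} and the
function {\tt twice_ast_tr} are invented for illustration, and we assume the
ast constructors {\tt Constant} and {\tt Appl} as well as \ttindex{raise_ast}
of the syntax module to be in scope. It rewrites every combination headed by
{\tt\at Twice} into a double application of {\tt f}:
\begin{ttbox}
ML
local
  fun twice_ast_tr (*"{\at}Twice"*) [t] =
        Appl [Constant "f", Appl [Constant "f", t]]
    | twice_ast_tr (*"{\at}Twice"*) asts = raise_ast "twice_ast_tr" asts;
in
  val parse_ast_translation = [("{\at}Twice", twice_ast_tr)];
end;
\end{ttbox}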

\medskip
Translation functions at ast level differ from those at term level only in
the same way as asts and terms differ. Terms, being more complex and more
specific, allow more sophisticated transformations (typically involving
abstractions and bound variables).

On the other hand, {\em parse\/} (ast) translations differ from {\em print\/}
(ast) translations more fundamentally:
\begin{description}
\item[Parse (ast) translations] are applied bottom-up, i.e.\ the arguments
  supplied ($x@1, \ldots, x@n$ above) are already in translated form.
  Additionally, they may not fail; exceptions are re-raised after printing
  an error message.

\item[Print (ast) translations] are applied top-down, i.e.\ supplied with
  arguments that are partly still in internal form. The result is again fed
  into the translation machinery as a whole. Therefore a print (ast)
  translation should not introduce as head a constant of the same name that
  invoked it in the first place. Alternatively, exception \ttindex{Match}
  may be raised, indicating failure of translation.
\end{description}

Another difference between the parsing and the printing process is which
atoms are {\tt Constant}s or {\tt Const}s, i.e.\ able to invoke translation
functions.

For parse ast translations, only former parse tree heads are {\tt Constant}s
(see also $ast_of_pt$ in \S\ref{sec:asts}). These, and {\tt Constant}s
additionally introduced (e.g.\ by macros), become {\tt Const}s for parse
translations (see also $term_of_ast$ in \S\ref{sec:asts}).

The situation is slightly different when terms are prepared for printing,
since the role of atoms is known. Initially, all logical constants and type
constructors may invoke print translations. New constants may be introduced
by these or by macros, able to invoke print ast translations.


\subsection{A simple example *}

Presenting a simple and useful example of translation functions is not that
easy, since the macro system is sufficient for most simple applications. By
convention, translation functions always have names ending with {\tt
_ast_tr}, {\tt _tr}, {\tt _tr'} or {\tt _ast_tr'}. You may look for such
names in the sources of Pure Isabelle for more examples.

\begin{example} \label{ex:tr_funs}

We continue Example~\ref{ex:prod_trans} by presenting the \ML\ sources of the
parse translation for \ttindex{_K} and the print translation
\ttindex{dependent_tr'}:

\begin{ttbox}
(* nondependent abstraction *)

fun k_tr (*"_K"*) [t] = Abs ("x", dummyT, incr_boundvars 1 t)
  | k_tr (*"_K"*) ts = raise_term "k_tr" ts;

(* dependent / nondependent quantifiers *)

fun dependent_tr' (q, r) (A :: Abs (x, T, B) :: ts) =
      if 0 mem (loose_bnos B) then
        let val (x', B') = variant_abs (x, dummyT, B);
        in list_comb (Const (q, dummyT) $ Free (x', T) $ A $ B', ts)
        end
      else list_comb (Const (r, dummyT) $ A $ B, ts)
  | dependent_tr' _ _ = raise Match;
\end{ttbox}

This text is taken from the Pure sources; ordinary user translations would
typically appear within the {\tt ML} section of the {\tt .thy} file.

\medskip
If {\tt k_tr} is called with exactly one argument $t$, it creates a new {\tt
Abs} node with a body derived from $t$: loose {\tt Bound}s, i.e.\ those
referring to outer {\tt Abs}s, are incremented using
\ttindex{incr_boundvars}. This and many other basic term manipulation
functions are defined in {\tt Pure/term.ML}; the comments there are in most
cases the only documentation.

Since terms fed into parse translations are not yet typed, the type of the
bound variable in the new {\tt Abs} is simply {\tt dummyT}.
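For instance (a sketch of the intermediate terms, not verbatim system
output), parsing {\tt A -> B} first yields {\tt Pi(A, _K(B))} by the second
parse rule of Example~\ref{ex:prod_trans}; {\tt k_tr} then turns the
{\tt _K} combination into an abstraction:
\begin{ttbox}
Pi(A, _K(B))   ~>   Pi(A, Abs("x", dummyT, B))
\end{ttbox}
where any loose {\tt Bound}s in {\tt B} have been incremented to compensate
for the new abstraction.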

\medskip
The argument $(q, r)$ for {\tt dependent_tr'} is supplied already during the
installation within an {\tt ML} section. This yields a print translation that
transforms something like $c(A, \mtt{Abs}(x, T, B), t@1, \ldots, t@n)$ into
$q(x', A, B', t@1, \ldots, t@n)$ or $r(A, B, t@1, \ldots, t@n)$. The latter
form is chosen if $B$ does not actually depend on $x$. This is checked using
\ttindex{loose_bnos}, yet another function from {\tt Pure/term.ML}. Note that
$x'$ is a version of $x$ renamed away from all names in $B$, and $B'$ is the
body $B$ with {\tt Bound}s referring to our {\tt Abs} node replaced by
$\ttfct{Free} (x', \mtt{dummyT})$.
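Combined with the {\tt PROD} theory of Example~\ref{ex:prod_trans}, the
overall effect is roughly the following (a sketch, not verbatim system
output):
\begin{ttbox}
Pi(A, %x. B(x))   is printed as   PROD x:A. B(x)
Pi(A, %x. B)      is printed as   A -> B
\end{ttbox}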

We have to be more careful with types here. While types of {\tt Const}s are
completely ignored, type constraints may be printed for some {\tt Free}s and
{\tt Var}s (if \ttindex{show_types} is set to {\tt true}). Variables of type
\ttindex{dummyT} are never printed with a constraint, though. Thus, a
constraint for $x'$ may only appear at its binding position, since the
{\tt Free}s of $B'$ replacing the corresponding {\tt Bound}s of $B$ via
\ttindex{variant_abs} all have type {\tt dummyT}.
\end{example}


\section{Example: some minimal logics} \label{sec:min_logics}

This concluding section presents some examples that are very simple from a
syntactic point of view. Rather, they demonstrate how to define new
object-logics from scratch. In particular we need to say how an object-logic
syntax is hooked up to the meta-logic. Since all theorems must conform to the
syntax for $prop$ (see Figure~\ref{fig:pure_gram}), that syntax has to be
extended with the object-level syntax. Assume that the syntax of your
object-logic defines a category $o$ of formulae. These formulae can now
appear in axioms and theorems wherever $prop$ does if you add the production
\[ prop ~=~ o. \]
More precisely, you need a coercion from formulae to propositions:
\begin{ttbox}
Base = Pure +
types
  o 0
arities
  o :: logic
consts
  Trueprop :: "o => prop"   ("_" 5)
end
\end{ttbox}
The constant {\tt Trueprop} (the name is arbitrary) acts as an invisible
coercion function. Assuming this definition resides in a file {\tt base.thy},
you have to load it with the command {\tt use_thy "Base"}.

One of the simplest nontrivial logics is {\em minimal logic\/} of
implication. Its definition in Isabelle needs no advanced features but
illustrates the overall mechanism quite nicely:
\begin{ttbox}
Hilbert = Base +
consts
  "-->" :: "[o, o] => o"   (infixr 10)
rules
  K   "P --> Q --> P"
  S   "(P --> Q --> R) --> (P --> Q) --> P --> R"
  MP  "[| P --> Q; P |] ==> Q"
end
\end{ttbox}
After loading this definition you can start to prove theorems in this logic:
\begin{ttbox}
goal Hilbert.thy "P --> P";
{\out Level 0}
{\out P --> P}
{\out 1. P --> P}
by (resolve_tac [Hilbert.MP] 1);
{\out Level 1}
{\out P --> P}
{\out 1. ?P --> P --> P}
{\out 2. ?P}
by (resolve_tac [Hilbert.MP] 1);
{\out Level 2}
{\out P --> P}
{\out 1. ?P1 --> ?P --> P --> P}
{\out 2. ?P1}
{\out 3. ?P}
by (resolve_tac [Hilbert.S] 1);
{\out Level 3}
{\out P --> P}
{\out 1. P --> ?Q2 --> P}
{\out 2. P --> ?Q2}
by (resolve_tac [Hilbert.K] 1);
{\out Level 4}
{\out P --> P}
{\out 1. P --> ?Q2}
by (resolve_tac [Hilbert.K] 1);
{\out Level 5}
{\out P --> P}
{\out No subgoals!}
\end{ttbox}
As you can see, this Hilbert-style formulation of minimal logic is easy to
define but difficult to use. The following natural deduction formulation is
far preferable:
\begin{ttbox}
MinI = Base +
consts
  "-->" :: "[o, o] => o"   (infixr 10)
rules
  impI  "(P ==> Q) ==> P --> Q"
  impE  "[| P --> Q; P |] ==> Q"
end
\end{ttbox}
Note, however, that although the two systems are equivalent, this fact cannot
be proved within Isabelle: {\tt S} and {\tt K} can be derived in {\tt MinI}
(exercise!), but {\tt impI} cannot be derived in {\tt Hilbert}. The reason is
that {\tt impI} is only an {\em admissible\/} rule in {\tt Hilbert},
something that can only be shown by induction over all possible proofs in
{\tt Hilbert}.
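In {\tt MinI}, by contrast, the identity theorem has a two-step proof. The
following script is a sketch (the {\tt\char`\\out} responses are omitted,
and the actual output may differ):
\begin{ttbox}
goal MinI.thy "P --> P";
by (resolve_tac [MinI.impI] 1);
by (assume_tac 1);
\end{ttbox}
After {\tt impI} the only remaining subgoal is {\tt P ==> P}, which
{\tt assume_tac} closes.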

It is a very simple matter to extend minimal logic with falsity:
\begin{ttbox}
MinIF = MinI +
consts
  False :: "o"
rules
  FalseE  "False ==> P"
end
\end{ttbox}
On the other hand, we may wish to introduce conjunction only:
\begin{ttbox}
MinC = Base +
consts
  "&" :: "[o, o] => o"   (infixr 30)
rules
  conjI   "[| P; Q |] ==> P & Q"
  conjE1  "P & Q ==> P"
  conjE2  "P & Q ==> Q"
end
\end{ttbox}
And if we want to have all three connectives together, we define:
\begin{ttbox}
MinIFC = MinIF + MinC
\end{ttbox}
Now we can prove mixed theorems like
\begin{ttbox}
goal MinIFC.thy "P & False --> Q";
by (resolve_tac [MinI.impI] 1);
by (dresolve_tac [MinC.conjE2] 1);
by (eresolve_tac [MinIF.FalseE] 1);
\end{ttbox}
Try this as an exercise!