(* $Id$ *)

theory prelim imports base begin

chapter {* Preliminaries *}

section {* Contexts \label{sec:context} *}

text {*
  A logical context represents the background that is required for
  formulating statements and composing proofs.  It acts as a medium to
  produce formal content, depending on earlier material (declarations,
  results etc.).

  For example, derivations within the Isabelle/Pure logic can be
  described as a judgment @{text "\<Gamma> \<turnstile>\<^sub>\<Theta> \<phi>"}, which means that a
  proposition @{text "\<phi>"} is derivable from hypotheses @{text "\<Gamma>"}
  within the theory @{text "\<Theta>"}.  There are logical reasons for
  keeping @{text "\<Theta>"} and @{text "\<Gamma>"} separate: theories can be
  liberal about supporting type constructors and schematic
  polymorphism of constants and axioms, while the inner calculus of
  @{text "\<Gamma> \<turnstile> \<phi>"} is strictly limited to Simple Type Theory (with
  fixed type variables in the assumptions).

  \medskip Contexts and derivations are linked by the following key
  principles:

  \begin{itemize}

  \item Transfer: monotonicity of derivations admits results to be
  transferred into a \emph{larger} context, i.e.\ @{text "\<Gamma> \<turnstile>\<^sub>\<Theta>
  \<phi>"} implies @{text "\<Gamma>' \<turnstile>\<^sub>\<Theta>\<^sub>' \<phi>"} for contexts @{text "\<Theta>'
  \<supseteq> \<Theta>"} and @{text "\<Gamma>' \<supseteq> \<Gamma>"}.

  \item Export: discharge of hypotheses admits results to be exported
  into a \emph{smaller} context, i.e.\ @{text "\<Gamma>' \<turnstile>\<^sub>\<Theta> \<phi>"}
  implies @{text "\<Gamma> \<turnstile>\<^sub>\<Theta> \<Delta> \<Longrightarrow> \<phi>"} where @{text "\<Gamma>' \<supseteq> \<Gamma>"} and
  @{text "\<Delta> = \<Gamma>' - \<Gamma>"}.  Note that @{text "\<Theta>"} remains unchanged here,
  only the @{text "\<Gamma>"} part is affected.

  \end{itemize}

  \medskip By modeling the main characteristics of the primitive
  @{text "\<Theta>"} and @{text "\<Gamma>"} above, and abstracting over any
  particular logical content, we arrive at the fundamental notions of
  \emph{theory context} and \emph{proof context} in Isabelle/Isar.
  These implement a certain policy to manage arbitrary \emph{context
  data}.  There is a strongly-typed mechanism to declare new kinds of
  data at compile time.

  The internal bootstrap process of Isabelle/Pure eventually reaches a
  stage where certain data slots provide the logical content of @{text
  "\<Theta>"} and @{text "\<Gamma>"} sketched above, but this does not stop there!
  Various additional data slots support all kinds of mechanisms that
  are not necessarily part of the core logic.

  For example, there would be data for canonical introduction and
  elimination rules for arbitrary operators (depending on the
  object-logic and application), which enables users to perform
  standard proof steps implicitly (cf.\ the @{text "rule"} method
  \cite{isabelle-isar-ref}).

  \medskip Thus Isabelle/Isar is able to bring forth more and more
  concepts successively.  In particular, an object-logic like
  Isabelle/HOL continues the Isabelle/Pure setup by adding specific
  components for automated reasoning (classical reasoner, tableau
  prover, structured induction etc.) and derived specification
  mechanisms (inductive predicates, recursive functions etc.).  All of
  this is ultimately based on the generic data management by theory
  and proof contexts introduced here.
*}

subsection {* Theory context \label{sec:context-theory} *}

text {*
  \glossary{Theory}{FIXME}

  A \emph{theory} is a data container with an explicit name and unique
  identifier.  Theories are related by a (nominal) sub-theory
  relation, which corresponds to the dependency graph of the original
  construction; each theory is derived from a certain sub-graph of
  ancestor theories.

  The @{text "merge"} operation produces the least upper bound of two
  theories, which actually degenerates into absorption of one theory
  into the other (due to the nominal sub-theory relation).

  The @{text "begin"} operation starts a new theory by importing
  several parent theories and entering a special @{text "draft"} mode,
  which is sustained until the final @{text "end"} operation.  A draft
  theory acts like a linear type, where updates invalidate earlier
  versions.  An invalidated draft is called ``stale''.

  The @{text "checkpoint"} operation produces an intermediate stepping
  stone that will survive the next update: both the original and the
  changed theory remain valid and are related by the sub-theory
  relation.  Checkpointing essentially recovers purely functional
  theory values, at the expense of some extra internal bookkeeping.

  The @{text "copy"} operation produces an auxiliary version that has
  the same data content, but is unrelated to the original: updates of
  the copy do not affect the original, neither does the sub-theory
  relation hold.

  \medskip The example in \figref{fig:ex-theory} below shows a theory
  graph derived from @{text "Pure"}, with theory @{text "Length"}
  importing @{text "Nat"} and @{text "List"}.  The body of @{text
  "Length"} consists of a sequence of updates, working mostly on
  drafts.  Intermediate checkpoints may occur as well, due to the
  history mechanism provided by the Isar top-level, cf.\
  \secref{sec:isar-toplevel}.

  \begin{figure}[htb]
  \begin{center}
  \begin{tabular}{rcccl}
        &            & @{text "Pure"} \\
        &            & @{text "\<down>"} \\
        &            & @{text "FOL"} \\
        & $\swarrow$ &              & $\searrow$ & \\
  @{text "Nat"} &    &              &            & @{text "List"} \\
        & $\searrow$ &              & $\swarrow$ \\
        &            & @{text "Length"} \\
        &            & \multicolumn{3}{l}{~~$\isarkeyword{imports}$} \\
        &            & \multicolumn{3}{l}{~~$\isarkeyword{begin}$} \\
        &            & $\vdots$~~ \\
        &            & @{text "\<bullet>"}~~ \\
        &            & $\vdots$~~ \\
        &            & @{text "\<bullet>"}~~ \\
        &            & $\vdots$~~ \\
        &            & \multicolumn{3}{l}{~~$\isarkeyword{end}$} \\
  \end{tabular}
  \caption{A theory definition depending on ancestors}\label{fig:ex-theory}
  \end{center}
  \end{figure}

  \medskip There is a separate notion of \emph{theory reference} for
  maintaining a live link to an evolving theory context: updates on
  drafts are propagated automatically.  Dynamic updating stops after
  an explicit @{text "end"} only.

  Derived entities may store a theory reference in order to indicate
  the context they belong to.  This implicitly assumes monotonic
  reasoning, because the referenced context may become larger without
  further notice.
*}

text %mlref {*
  \begin{mldecls}
  @{index_ML_type theory} \\
  @{index_ML Theory.subthy: "theory * theory -> bool"} \\
  @{index_ML Theory.merge: "theory * theory -> theory"} \\
  @{index_ML Theory.checkpoint: "theory -> theory"} \\
  @{index_ML Theory.copy: "theory -> theory"} \\
  \end{mldecls}
  \begin{mldecls}
  @{index_ML_type theory_ref} \\
  @{index_ML Theory.self_ref: "theory -> theory_ref"} \\
  @{index_ML Theory.deref: "theory_ref -> theory"} \\
  \end{mldecls}

  \begin{description}

  \item @{ML_type theory} represents theory contexts.  This is
  essentially a linear type!  Most operations destroy the original
  version, which then becomes ``stale''.

  \item @{ML "Theory.subthy"}~@{text "(thy\<^sub>1, thy\<^sub>2)"}
  compares theories according to the inherent graph structure of the
  construction.  This sub-theory relation is a nominal approximation
  of inclusion (@{text "\<subseteq>"}) of the corresponding content.

  \item @{ML "Theory.merge"}~@{text "(thy\<^sub>1, thy\<^sub>2)"}
  absorbs one theory into the other.  This fails for unrelated
  theories!

  \item @{ML "Theory.checkpoint"}~@{text "thy"} produces a safe
  stepping stone in the linear development of @{text "thy"}.  The next
  update will result in two related, valid theories.

  \item @{ML "Theory.copy"}~@{text "thy"} produces a variant of @{text
  "thy"} that holds a copy of the same data.  The result is not
  related to the original; the original is unchanged.

  \item @{ML_type theory_ref} represents a sliding reference to an
  always valid theory; updates on the original are propagated
  automatically.

  \item @{ML "Theory.self_ref"}~@{text "thy"} and @{ML
  "Theory.deref"}~@{text "thy_ref"} convert between @{ML_type
  "theory"} and @{ML_type "theory_ref"}.  As the referenced theory
  evolves monotonically over time, later invocations of @{ML
  "Theory.deref"} may refer to a larger context.

  \end{description}
*}
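
text {*
  For illustration, here is a small ML sketch that exercises these
  operations; the starting value @{text "thy"} stands for some
  existing theory and is only assumed for the example:

\begin{verbatim}
  (* a checkpoint keeps both the original and later updates valid *)
  val thy1 = Theory.checkpoint thy;

  (* an unrelated copy: updates on it do not affect thy1 *)
  val thy2 = Theory.copy thy1;

  (* a sliding reference: deref may yield a larger theory later on *)
  val thy_ref = Theory.self_ref thy1;
  val thy3 = Theory.deref thy_ref;

  (* nominal sub-theory test *)
  val ok = Theory.subthy (thy1, thy3);
\end{verbatim}
*}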

subsection {* Proof context \label{sec:context-proof} *}

text {*
  \glossary{Proof context}{The static context of a structured proof,
  acts like a local ``theory'' of the current portion of Isar proof
  text, generalizes the idea of local hypotheses @{text "\<Gamma>"} in
  judgments @{text "\<Gamma> \<turnstile> \<phi>"} of natural deduction calculi.  There is a
  generic notion of introducing and discharging hypotheses.
  Arbitrary auxiliary context data may be adjoined.}

  A proof context is a container for pure data with a back-reference
  to the theory it belongs to.  The @{text "init"} operation creates a
  proof context from a given theory.  Modifications to draft theories
  are propagated to the proof context as usual, but there is also an
  explicit @{text "transfer"} operation to force resynchronization
  with more substantial updates to the underlying theory.  The actual
  context data does not require any special bookkeeping, thanks to the
  lack of destructive features.

  Entities derived in a proof context need to record inherent logical
  requirements explicitly, since there is no separate context
  identification as for theories.  For example, hypotheses used in
  primitive derivations (cf.\ \secref{sec:thms}) are recorded
  separately within the sequent @{text "\<Gamma> \<turnstile> \<phi>"}, just to make double
  sure.  Results could still leak into an alien proof context due to
  programming errors, but Isabelle/Isar includes some extra validity
  checks in critical positions, notably at the end of a sub-proof.

  Proof contexts may be manipulated arbitrarily, although the common
  discipline is to follow block structure as a mental model: a given
  context is extended consecutively, and results are exported back
  into the original context.  Note that the Isar proof states model
  block-structured reasoning explicitly, using a stack of proof
  contexts internally, cf.\ \secref{sec:isar-proof-state}.
*}

text %mlref {*
  \begin{mldecls}
  @{index_ML_type Proof.context} \\
  @{index_ML ProofContext.init: "theory -> Proof.context"} \\
  @{index_ML ProofContext.theory_of: "Proof.context -> theory"} \\
  @{index_ML ProofContext.transfer: "theory -> Proof.context -> Proof.context"} \\
  \end{mldecls}

  \begin{description}

  \item @{ML_type Proof.context} represents proof contexts.  Elements
  of this type are essentially pure values, with a sliding reference
  to the background theory.

  \item @{ML ProofContext.init}~@{text "thy"} produces a proof context
  derived from @{text "thy"}, initializing all data.

  \item @{ML ProofContext.theory_of}~@{text "ctxt"} selects the
  background theory from @{text "ctxt"}, dereferencing its internal
  @{ML_type theory_ref}.

  \item @{ML ProofContext.transfer}~@{text "thy ctxt"} promotes the
  background theory of @{text "ctxt"} to the super theory @{text
  "thy"}.

  \end{description}
*}
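
text {*
  A minimal usage sketch, again assuming some existing theory value
  @{text "thy"} as a starting point:

\begin{verbatim}
  (* create a proof context with freshly initialized data *)
  val ctxt = ProofContext.init thy;

  (* recover the background theory via the internal theory_ref *)
  val thy' = ProofContext.theory_of ctxt;

  (* promote the background theory (here trivially to itself) *)
  val ctxt' = ProofContext.transfer thy' ctxt;
\end{verbatim}
*}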

subsection {* Generic contexts \label{sec:generic-context} *}

text {*
  A generic context is the disjoint sum of either a theory or proof
  context.  Occasionally, this enables uniform treatment of generic
  context data, typically extra-logical information.  Operations on
  generic contexts include the usual injections, partial selections,
  and combinators for lifting operations on either component of the
  disjoint sum.

  Moreover, there are total operations @{text "theory_of"} and @{text
  "proof_of"} to convert a generic context into either kind: a theory
  can always be selected from the sum, while a proof context might
  have to be constructed by an ad-hoc @{text "init"} operation.
*}

text %mlref {*
  \begin{mldecls}
  @{index_ML_type Context.generic} \\
  @{index_ML Context.theory_of: "Context.generic -> theory"} \\
  @{index_ML Context.proof_of: "Context.generic -> Proof.context"} \\
  \end{mldecls}

  \begin{description}

  \item @{ML_type Context.generic} is the direct sum of @{ML_type
  "theory"} and @{ML_type "Proof.context"}, with the datatype
  constructors @{ML "Context.Theory"} and @{ML "Context.Proof"}.

  \item @{ML Context.theory_of}~@{text "context"} always produces a
  theory from the generic @{text "context"}, using @{ML
  "ProofContext.theory_of"} as required.

  \item @{ML Context.proof_of}~@{text "context"} always produces a
  proof context from the generic @{text "context"}, using @{ML
  "ProofContext.init"} as required (note that this re-initializes the
  context data with each invocation).

  \end{description}
*}
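
text {*
  The two injections and the total projections can be sketched as
  follows, assuming values @{text "thy"} and @{text "ctxt"} as before:

\begin{verbatim}
  (* inject into the direct sum *)
  val gthy = Context.Theory thy;
  val gctxt = Context.Proof ctxt;

  (* total projections back to either kind *)
  val thy' = Context.theory_of gctxt;   (* via ProofContext.theory_of *)
  val ctxt' = Context.proof_of gthy;    (* via ProofContext.init *)
\end{verbatim}
*}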

subsection {* Context data \label{sec:context-data} *}

text {*
  The main purpose of theory and proof contexts is to manage arbitrary
  data.  New data types can be declared incrementally at compile time.
  There are separate declaration mechanisms for any of the three kinds
  of contexts: theory, proof, generic.

  \paragraph{Theory data} may refer to destructive entities, which are
  maintained in direct correspondence to the linear evolution of
  theory values, including explicit copies.\footnote{Most existing
  instances of destructive theory data are merely historical relics
  (e.g.\ the destructive theorem storage, and destructive hints for
  the Simplifier and Classical rules).}  A theory data declaration
  needs to implement the following specification (depending on type
  @{text "T"}):

  \medskip
  \begin{tabular}{ll}
  @{text "name: string"} \\
  @{text "empty: T"} & initial value \\
  @{text "copy: T \<rightarrow> T"} & refresh impure data \\
  @{text "extend: T \<rightarrow> T"} & re-initialize on import \\
  @{text "merge: T \<times> T \<rightarrow> T"} & join on import \\
  @{text "print: T \<rightarrow> unit"} & diagnostic output \\
  \end{tabular}
  \medskip

  \noindent The @{text "name"} acts as a comment for diagnostic
  messages; @{text "copy"} is just the identity for pure data; @{text
  "extend"} acts like a unitary version of @{text "merge"}; both
  should also include the functionality of @{text "copy"} for impure
  data.

  \paragraph{Proof context data} is purely functional.  A declaration
  needs to implement the following specification:

  \medskip
  \begin{tabular}{ll}
  @{text "name: string"} \\
  @{text "init: theory \<rightarrow> T"} & produce initial value \\
  @{text "print: T \<rightarrow> unit"} & diagnostic output \\
  \end{tabular}
  \medskip

  \noindent The @{text "init"} operation is supposed to produce a pure
  value from the given background theory.  The remainder is analogous
  to theory data.

  \paragraph{Generic data} provides a hybrid interface for both theory
  and proof data.  The declaration is essentially the same as for
  (pure) theory data, without @{text "copy"}, though.  The @{text
  "init"} operation for proof contexts merely selects the current data
  value from the background theory.

  \bigskip In any case, a data declaration of type @{text "T"} results
  in the following interface:

  \medskip
  \begin{tabular}{ll}
  @{text "init: theory \<rightarrow> theory"} \\
  @{text "get: context \<rightarrow> T"} \\
  @{text "put: T \<rightarrow> context \<rightarrow> context"} \\
  @{text "map: (T \<rightarrow> T) \<rightarrow> context \<rightarrow> context"} \\
  @{text "print: context \<rightarrow> unit"}
  \end{tabular}
  \medskip

  \noindent Here @{text "init"} needs to be applied to the current
  theory context once, in order to register the initial setup.  The
  other operations provide access for the particular kind of context
  (theory, proof, or generic context).  Note that this is a safe
  interface: there is no other way to access the corresponding data
  slot of a context.  By keeping these operations private, a component
  may maintain abstract values authentically, without other components
  interfering.
*}

text %mlref {*
  \begin{mldecls}
  @{index_ML_functor TheoryDataFun} \\
  @{index_ML_functor ProofDataFun} \\
  @{index_ML_functor GenericDataFun} \\
  \end{mldecls}

  \begin{description}

  \item @{ML_functor TheoryDataFun}@{text "(spec)"} declares data for
  type @{ML_type theory} according to the specification provided as
  argument structure.  The resulting structure provides data init and
  access operations as described above.

  \item @{ML_functor ProofDataFun}@{text "(spec)"} is analogous to
  @{ML_functor TheoryDataFun} for type @{ML_type Proof.context}.

  \item @{ML_functor GenericDataFun}@{text "(spec)"} is analogous to
  @{ML_functor TheoryDataFun} for type @{ML_type Context.generic}.

  \end{description}
*}
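
text {*
  The following sketch declares a hypothetical theory data slot that
  stores a list of strings.  The argument structure follows the
  specification listed above; the exact ML signature expected by the
  functor may differ slightly between Isabelle versions:

\begin{verbatim}
  structure MyData = TheoryDataFun
  (struct
    val name = "my_data";            (* diagnostic name *)
    type T = string list;            (* the type of the data *)
    val empty = [];                  (* initial value *)
    val copy = I;                    (* identity for pure data *)
    val extend = I;                  (* unitary merge *)
    fun merge (xs, ys) = xs @ ys;    (* join on import *)
    fun print _ = ();                (* no diagnostic output *)
  end);

  (* resulting interface: init, get, put, map (cf. the table above) *)
  val setup = MyData.init;
  fun add_string s = MyData.map (fn ss => s :: ss);
\end{verbatim}
*}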

section {* Names *}

text {*
  In principle, a name is just a string, but there are various
  conventions for encoding additional structure.  For example,
  ``@{text "Foo.bar.baz"}'' is considered as a qualified name
  consisting of three basic name components.  The individual
  constituents of a name may have further substructure, e.g.\ the
  string ``\verb,\,\verb,<alpha>,'' encodes as a single symbol.
*}

subsection {* Strings of symbols *}

text {*
  \glossary{Symbol}{The smallest unit of text in Isabelle, subsumes
  plain ASCII characters as well as an infinite collection of named
  symbols (for greek, math etc.).}

  A \emph{symbol} constitutes the smallest textual unit in Isabelle
  --- raw characters are normally not encountered at all.  Isabelle
  strings consist of a sequence of symbols, represented as a packed
  string or a list of strings.  Each symbol is in itself a small
  string, which has one of the following forms:

  \begin{enumerate}

  \item a single ASCII character ``@{text "c"}'', for example
  ``\verb,a,'',

  \item a regular symbol ``\verb,\,\verb,<,@{text "ident"}\verb,>,'',
  for example ``\verb,\,\verb,<alpha>,'',

  \item a control symbol ``\verb,\,\verb,<^,@{text "ident"}\verb,>,'',
  for example ``\verb,\,\verb,<^bold>,'',

  \item a raw symbol ``\verb,\,\verb,<^raw:,@{text text}\verb,>,''
  where @{text text} consists of printable characters excluding
  ``\verb,.,'' and ``\verb,>,'', for example
  ``\verb,\,\verb,<^raw:$\sum_{i = 1}^n$>,'',

  \item a numbered raw control symbol ``\verb,\,\verb,<^raw,@{text
  n}\verb,>,'' where @{text n} consists of digits, for example
  ``\verb,\,\verb,<^raw42>,''.

  \end{enumerate}

  \noindent The @{text "ident"} syntax for symbol names is @{text
  "letter (letter | digit)\<^sup>*"}, where @{text "letter =
  A..Za..z"} and @{text "digit = 0..9"}.  There are infinitely many
  regular symbols and control symbols, but a fixed collection of
  standard symbols is treated specifically.  For example,
  ``\verb,\,\verb,<alpha>,'' is classified as a letter, which means it
  may occur within regular Isabelle identifiers.

  Since the character set underlying Isabelle symbols is 7-bit ASCII
  and 8-bit characters are passed through transparently, Isabelle may
  also process Unicode/UCS data in UTF-8 encoding.  Unicode provides
  its own collection of mathematical symbols, but there is no built-in
  link to the standard collection of Isabelle.

  \medskip Output of Isabelle symbols depends on the print mode
  (\secref{FIXME}).  For example, the standard {\LaTeX} setup of the
  Isabelle document preparation system would present
  ``\verb,\,\verb,<alpha>,'' as @{text "\<alpha>"}, and
  ``\verb,\,\verb,<^bold>,\verb,\,\verb,<alpha>,'' as @{text
  "\<^bold>\<alpha>"}.
*}

text %mlref {*
  \begin{mldecls}
  @{index_ML_type "Symbol.symbol"} \\
  @{index_ML Symbol.explode: "string -> Symbol.symbol list"} \\
  @{index_ML Symbol.is_letter: "Symbol.symbol -> bool"} \\
  @{index_ML Symbol.is_digit: "Symbol.symbol -> bool"} \\
  @{index_ML Symbol.is_quasi: "Symbol.symbol -> bool"} \\
  @{index_ML Symbol.is_blank: "Symbol.symbol -> bool"} \\
  \end{mldecls}
  \begin{mldecls}
  @{index_ML_type "Symbol.sym"} \\
  @{index_ML Symbol.decode: "Symbol.symbol -> Symbol.sym"} \\
  \end{mldecls}

  \begin{description}

  \item @{ML_type "Symbol.symbol"} represents individual Isabelle
  symbols; this is an alias for @{ML_type "string"}.

  \item @{ML "Symbol.explode"}~@{text "str"} produces a symbol list
  from the packed form.  This function supersedes @{ML
  "String.explode"} for virtually all purposes of manipulating text in
  Isabelle!

  \item @{ML "Symbol.is_letter"}, @{ML "Symbol.is_digit"}, @{ML
  "Symbol.is_quasi"}, @{ML "Symbol.is_blank"} classify standard
  symbols according to fixed syntactic conventions of Isabelle, cf.\
  \cite{isabelle-isar-ref}.

  \item @{ML_type "Symbol.sym"} is a concrete datatype that represents
  the different kinds of symbols explicitly, with constructors @{ML
  "Symbol.Char"}, @{ML "Symbol.Sym"}, @{ML "Symbol.Ctrl"}, @{ML
  "Symbol.Raw"}.

  \item @{ML "Symbol.decode"} converts the string representation of a
  symbol into the datatype version.

  \end{description}
*}
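
text {*
  A brief sketch of these operations on a concrete string, chosen
  merely for illustration:

\begin{verbatim}
  (* explode into symbols, not into single characters *)
  val syms = Symbol.explode "x\<alpha>y";   (* three symbols *)

  (* classify and decode individual symbols *)
  val letters = filter Symbol.is_letter syms;
  val structured = map Symbol.decode syms;
\end{verbatim}
*}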

subsection {* Basic names \label{sec:basic-names} *}

text {*
  A \emph{basic name} essentially consists of a single Isabelle
  identifier.  There are conventions to mark separate classes of basic
  names, by attaching a suffix of underscores (@{text "_"}): one
  underscore means \emph{internal name}, two underscores means
  \emph{Skolem name}, three underscores means \emph{internal Skolem
  name}.

  For example, the basic name @{text "foo"} has the internal version
  @{text "foo_"}, with Skolem versions @{text "foo__"} and @{text
  "foo___"}, respectively.

  These special versions provide copies of the basic name space, apart
  from anything that normally appears in the user text.  For example,
  system generated variables in Isar proof contexts are usually marked
  as internal, which prevents mysterious name references like @{text
  "xaa"} from appearing in the text.

  \medskip Manipulating binding scopes often requires on-the-fly
  renamings.  A \emph{name context} contains a collection of already
  used names.  The @{text "declare"} operation adds names to the
  context.

  The @{text "invents"} operation derives a number of fresh names from
  a given starting point.  For example, the first three names derived
  from @{text "a"} are @{text "a"}, @{text "b"}, @{text "c"}.

  The @{text "variants"} operation produces fresh names by
  incrementing tentative names as base-26 numbers (with digits @{text
  "a..z"}) until all clashes are resolved.  For example, name @{text
  "foo"} results in variants @{text "fooa"}, @{text "foob"}, @{text
  "fooc"}, \dots, @{text "fooaa"}, @{text "fooab"} etc.; each renaming
  step picks the next unused variant from this sequence.
*}

text %mlref {*
  \begin{mldecls}
  @{index_ML Name.internal: "string -> string"} \\
  @{index_ML Name.skolem: "string -> string"} \\
  \end{mldecls}
  \begin{mldecls}
  @{index_ML_type Name.context} \\
  @{index_ML Name.context: Name.context} \\
  @{index_ML Name.declare: "string -> Name.context -> Name.context"} \\
  @{index_ML Name.invents: "Name.context -> string -> int -> string list"} \\
  @{index_ML Name.variants: "string list -> Name.context -> string list * Name.context"} \\
  \end{mldecls}

  \begin{description}

  \item @{ML Name.internal}~@{text "name"} produces an internal name
  by adding one underscore.

  \item @{ML Name.skolem}~@{text "name"} produces a Skolem name by
  adding two underscores.

  \item @{ML_type Name.context} represents the context of already used
  names; the initial value is @{ML "Name.context"}.

  \item @{ML Name.declare}~@{text "name"} enters a used name into the
  context.

  \item @{ML Name.invents}~@{text "context name n"} produces @{text
  "n"} fresh names derived from @{text "name"}.

  \item @{ML Name.variants}~@{text "names context"} produces fresh
  variants of @{text "names"}; the result is entered into the context.

  \end{description}
*}
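
text {*
  A small sketch of fresh name generation, following the examples in
  the text above:

\begin{verbatim}
  (* marked names: "foo_" and "foo__" *)
  val internal = Name.internal "foo";
  val skolem = Name.skolem "foo";

  (* invent fresh names from scratch: ["a", "b", "c"] *)
  val abc = Name.invents Name.context "a" 3;

  (* declare "foo" as used and produce a clash-free variant of it *)
  val used = Name.declare "foo" Name.context;
  val (vars, used') = Name.variants ["foo"] used;   (* vars = ["fooa"] *)
\end{verbatim}
*}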

subsection {* Indexed names *}

text {*
  An \emph{indexed name} (or @{text "indexname"}) is a pair of a basic
  name and a natural number.  This representation allows efficient
  renaming by incrementing the second component only.  The canonical
  way to rename two collections of indexnames apart from each other is
  this: determine the maximum index @{text "maxidx"} of the first
  collection, then increment all indexes of the second collection by
  @{text "maxidx + 1"}; the maximum index of an empty collection is
  @{text "-1"}.

  Occasionally, basic names and indexed names are injected into the
  same pair type: the (improper) indexname @{text "(x, -1)"} is used
  to encode basic names.

  \medskip Isabelle syntax observes the following rules for
  representing an indexname @{text "(x, i)"} as a packed string:

  \begin{itemize}

  \item @{text "?x"} if @{text "x"} does not end with a digit and @{text "i = 0"},

  \item @{text "?xi"} if @{text "x"} does not end with a digit,

  \item @{text "?x.i"} otherwise.

  \end{itemize}

  Indexnames may acquire large index numbers over time.  Results are
  normalized towards @{text "0"} at certain checkpoints, notably at
  the end of a proof.  This works by producing variants of the
  corresponding basic name components.  For example, the collection
  @{text "?x1, ?x7, ?x42"} becomes @{text "?x, ?xa, ?xb"}.
*}

text %mlref {*
  \begin{mldecls}
  @{index_ML_type indexname} \\
  \end{mldecls}

  \begin{description}

  \item @{ML_type indexname} represents indexed names.  This is an
  abbreviation for @{ML_type "string * int"}.  The second component is
  usually non-negative, except for situations where @{text "(x, -1)"}
  is used to embed basic names into this type.

  \end{description}
*}
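
text {*
  The renaming recipe above can be sketched over plain lists of
  indexnames; this is purely illustrative and not the actual
  implementation used for terms and theorems:

\begin{verbatim}
  (* maximum index of a collection; ~1 for the empty collection *)
  fun maxidx_of (xs: indexname list) =
    List.foldl (fn ((_, i), m) => Int.max (i, m)) ~1 xs;

  (* rename the second collection apart from the first *)
  fun rename_apart xs ys =
    let val d = maxidx_of xs + 1
    in map (fn (x, i) => (x, i + d)) ys end;
\end{verbatim}
*}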

subsection {* Qualified names and name spaces *}

text {*
  A \emph{qualified name} consists of a non-empty sequence of basic
  name components.  The packed representation uses a dot as separator,
  as in ``@{text "A.b.c"}''.  The last component is called \emph{base}
  name, the remaining prefix \emph{qualifier} (which may be empty).
  The idea of qualified names is to encode nested structures by
  recording the access paths as qualifiers.  For example, an item
  named ``@{text "A.b.c"}'' may be understood as a local entity @{text
  "c"}, within a local structure @{text "b"}, within a global
  structure @{text "A"}.  Typically, name space hierarchies consist of
  1--2 levels of qualification, but this need not always be so.

  The empty name is commonly used as an indication of unnamed
  entities, whenever this makes any sense.  The basic operations on
  qualified names are smart enough to pass through such improper names
  unchanged.

  \medskip A @{text "naming"} policy tells how to turn a name
  specification into a fully qualified internal name (by the @{text
  "full"} operation), and how fully qualified names may be accessed
  externally.  For example, the default naming policy is to prefix an
  implicit path: @{text "full x"} produces @{text "path.x"}, and the
  standard accesses for @{text "path.x"} include both @{text "x"} and
  @{text "path.x"}.  Normally, the naming is implicit in the theory or
  proof context; there are separate versions of the corresponding
  operations for either kind of context.

  \medskip A @{text "name space"} manages a collection of fully
  internalized names, together with a mapping between external names
  and internal names (in both directions).  The corresponding @{text
  "intern"} and @{text "extern"} operations are mostly used for
  parsing and printing only!  The @{text "declare"} operation augments
  a name space according to the accesses determined by the naming
  policy.

  \medskip As a general principle, there is a separate name space for
  each kind of formal entity, e.g.\ logical constant, type
  constructor, type class, theorem.  It is usually clear from the
  occurrence in concrete syntax (or from the scope) which kind of
  entity a name refers to.  For example, the very same name @{text
  "c"} may be used uniformly for a constant, type constructor, and
  type class.

  There are common schemes to name theorems systematically, according
  to the name of the main logical entity involved, e.g.\ @{text
  "c.intro"} for a canonical theorem related to constant @{text "c"}.
  This technique of mapping names from one space into another requires
  some care in order to avoid conflicts.  In particular, theorem names
  derived from a type constructor or type class are better suffixed in
  addition to the usual qualification, e.g.\ @{text "c_type.intro"}
  and @{text "c_class.intro"} for theorems related to type @{text "c"}
  and class @{text "c"}, respectively.
*}

text %mlref {*
  \begin{mldecls}
  @{index_ML NameSpace.base: "string -> string"} \\
  @{index_ML NameSpace.qualifier: "string -> string"} \\
  @{index_ML NameSpace.append: "string -> string -> string"} \\
  @{index_ML NameSpace.implode: "string list -> string"} \\
  @{index_ML NameSpace.explode: "string -> string list"} \\
  \end{mldecls}
  \begin{mldecls}
  @{index_ML_type NameSpace.naming} \\
  @{index_ML NameSpace.default_naming: NameSpace.naming} \\
  @{index_ML NameSpace.add_path: "string -> NameSpace.naming -> NameSpace.naming"} \\
  @{index_ML NameSpace.full: "NameSpace.naming -> string -> string"} \\
  \end{mldecls}
  \begin{mldecls}
  @{index_ML_type NameSpace.T} \\
  @{index_ML NameSpace.empty: NameSpace.T} \\
  @{index_ML NameSpace.merge: "NameSpace.T * NameSpace.T -> NameSpace.T"} \\
  @{index_ML NameSpace.declare: "NameSpace.naming -> string -> NameSpace.T -> NameSpace.T"} \\
  @{index_ML NameSpace.intern: "NameSpace.T -> string -> string"} \\
  @{index_ML NameSpace.extern: "NameSpace.T -> string -> string"} \\
  \end{mldecls}

  \begin{description}

  \item @{ML NameSpace.base}~@{text "name"} returns the base name of a
  qualified name.

  \item @{ML NameSpace.qualifier}~@{text "name"} returns the qualifier
  of a qualified name.

  \item @{ML NameSpace.append}~@{text "name\<^isub>1 name\<^isub>2"}
  appends two qualified names.

  \item @{ML NameSpace.implode}~@{text "names"} and @{ML
  NameSpace.explode}~@{text "name"} convert between the packed string
  representation and the explicit list form of qualified names.

  \item @{ML_type NameSpace.naming} represents the abstract concept of
  a naming policy.

  \item @{ML NameSpace.default_naming} is the default naming policy.
  In a theory context, this is usually augmented by a path prefix
  consisting of the theory name.

  \item @{ML NameSpace.add_path}~@{text "path naming"} augments the
  naming policy by extending its path component.

  \item @{ML NameSpace.full}~@{text "naming name"} turns a name
  specification (usually a basic name) into the fully qualified
  internal version, according to the given naming policy.

  \item @{ML_type NameSpace.T} represents name spaces.

  \item @{ML NameSpace.empty} and @{ML NameSpace.merge}~@{text
  "(space\<^isub>1, space\<^isub>2)"} are the canonical operations for
  maintaining name spaces according to theory data management
  (\secref{sec:context-data}).

  \item @{ML NameSpace.declare}~@{text "naming name space"} enters a
  fully qualified name into the name space, with external accesses
  determined by the naming policy.

  \item @{ML NameSpace.intern}~@{text "space name"} internalizes a
  (partially qualified) external name.

  This operation is mostly for parsing!  Note that fully qualified
  names stemming from declarations are produced via @{ML
  "NameSpace.full"} (or its derivatives for @{ML_type theory} and
  @{ML_type Proof.context}).

  \item @{ML NameSpace.extern}~@{text "space name"} externalizes a
  (fully qualified) internal name.

  This operation is mostly for printing!  Note that unqualified names
  are produced via @{ML NameSpace.base}.

  \end{description}
*}
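
text {*
  A final sketch tying these operations together; the concrete names
  are chosen purely for illustration:

\begin{verbatim}
  (* basic surgery on qualified names *)
  val base = NameSpace.base "A.b.c";            (* "c" *)
  val qual = NameSpace.qualifier "A.b.c";       (* "A.b" *)
  val packed = NameSpace.implode ["A", "b", "c"];

  (* a naming policy with path prefix "A" *)
  val naming = NameSpace.add_path "A" NameSpace.default_naming;
  val full_name = NameSpace.full naming "c";    (* "A.c" *)

  (* declare the name, then translate external/internal forms *)
  val space = NameSpace.declare naming full_name NameSpace.empty;
  val internal = NameSpace.intern space "c";    (* "A.c" *)
  val external = NameSpace.extern space "A.c";  (* presumably "c" *)
\end{verbatim}
*}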

end