diff -r 0932dc251248 -r e77def9a63a6 src/Doc/Isar_Ref/Outer_Syntax.thy --- a/src/Doc/Isar_Ref/Outer_Syntax.thy Fri Nov 13 20:03:27 2015 +0100 +++ b/src/Doc/Isar_Ref/Outer_Syntax.thy Fri Nov 13 21:28:57 2015 +0100 @@ -7,33 +7,29 @@ chapter \Outer syntax --- the theory language \label{ch:outer-syntax}\ text \ - The rather generic framework of Isabelle/Isar syntax emerges from - three main syntactic categories: \<^emph>\commands\ of the top-level - Isar engine (covering theory and proof elements), \<^emph>\methods\ for - general goal refinements (analogous to traditional ``tactics''), and - \<^emph>\attributes\ for operations on facts (within a certain - context). Subsequently we give a reference of basic syntactic - entities underlying Isabelle/Isar syntax in a bottom-up manner. - Concrete theory and proof language elements will be introduced later - on. + The rather generic framework of Isabelle/Isar syntax emerges from three main + syntactic categories: \<^emph>\commands\ of the top-level Isar engine (covering + theory and proof elements), \<^emph>\methods\ for general goal refinements + (analogous to traditional ``tactics''), and \<^emph>\attributes\ for operations on + facts (within a certain context). Subsequently we give a reference of basic + syntactic entities underlying Isabelle/Isar syntax in a bottom-up manner. + Concrete theory and proof language elements will be introduced later on. \<^medskip> - In order to get started with writing well-formed - Isabelle/Isar documents, the most important aspect to be noted is - the difference of \<^emph>\inner\ versus \<^emph>\outer\ syntax. Inner - syntax is that of Isabelle types and terms of the logic, while outer - syntax is that of Isabelle/Isar theory sources (specifications and - proofs). As a general rule, inner syntax entities may occur only as - \<^emph>\atomic entities\ within outer syntax. For example, the string - \<^verbatim>\"x + y"\ and identifier \<^verbatim>\z\ are legal term - specifications within a theory, while \<^verbatim>\x + y\ without - quotes is not. + In order to get started with writing well-formed Isabelle/Isar documents, + the most important aspect to be noted is the difference of \<^emph>\inner\ versus + \<^emph>\outer\ syntax. Inner syntax is that of Isabelle types and terms of the + logic, while outer syntax is that of Isabelle/Isar theory sources + (specifications and proofs). As a general rule, inner syntax entities may + occur only as \<^emph>\atomic entities\ within outer syntax. For example, the + string \<^verbatim>\"x + y"\ and identifier \<^verbatim>\z\ are legal term specifications within a + theory, while \<^verbatim>\x + y\ without quotes is not. - Printed theory documents usually omit quotes to gain readability - (this is a matter of {\LaTeX} macro setup, say via \<^verbatim>\\isabellestyle\, - see also @{cite "isabelle-system"}). Experienced - users of Isabelle/Isar may easily reconstruct the lost technical - information, while mere readers need not care about quotes at all. + Printed theory documents usually omit quotes to gain readability (this is a + matter of {\LaTeX} macro setup, say via \<^verbatim>\\isabellestyle\, see also @{cite + "isabelle-system"}). Experienced users of Isabelle/Isar may easily + reconstruct the lost technical information, while mere readers need not care + about quotes at all. 
\ @@ -59,8 +55,10 @@ subsubsection \Examples\ -text \Some common diagnostic commands are retrieved like this - (according to usual naming conventions):\ +text \ + Some common diagnostic commands are retrieved like this (according to usual + naming conventions): +\ help "print" help "find" @@ -68,35 +66,31 @@ section \Lexical matters \label{sec:outer-lex}\ -text \The outer lexical syntax consists of three main categories of - syntax tokens: - - \<^enum> \<^emph>\major keywords\ --- the command names that are available - in the present logic session; +text \ + The outer lexical syntax consists of three main categories of syntax tokens: - \<^enum> \<^emph>\minor keywords\ --- additional literal tokens required - by the syntax of commands; + \<^enum> \<^emph>\major keywords\ --- the command names that are available + in the present logic session; - \<^enum> \<^emph>\named tokens\ --- various categories of identifiers etc. + \<^enum> \<^emph>\minor keywords\ --- additional literal tokens required + by the syntax of commands; + \<^enum> \<^emph>\named tokens\ --- various categories of identifiers etc. - Major keywords and minor keywords are guaranteed to be disjoint. - This helps user-interfaces to determine the overall structure of a - theory text, without knowing the full details of command syntax. - Internally, there is some additional information about the kind of - major keywords, which approximates the command type (theory command, - proof command etc.). + Major keywords and minor keywords are guaranteed to be disjoint. This helps + user-interfaces to determine the overall structure of a theory text, without + knowing the full details of command syntax. Internally, there is some + additional information about the kind of major keywords, which approximates + the command type (theory command, proof command etc.). - Keywords override named tokens. For example, the presence of a - command called \<^verbatim>\term\ inhibits the identifier \<^verbatim>\term\, but the - string \<^verbatim>\"term"\ can be used instead. - By convention, the outer syntax always allows quoted strings in - addition to identifiers, wherever a named entity is expected. + Keywords override named tokens. For example, the presence of a command + called \<^verbatim>\term\ inhibits the identifier \<^verbatim>\term\, but the string \<^verbatim>\"term"\ can + be used instead. By convention, the outer syntax always allows quoted + strings in addition to identifiers, wherever a named entity is expected. - When tokenizing a given input sequence, the lexer repeatedly takes - the longest prefix of the input that forms a valid token. Spaces, - tabs, newlines and formfeeds between tokens serve as explicit - separators. + When tokenizing a given input sequence, the lexer repeatedly takes the + longest prefix of the input that forms a valid token. Spaces, tabs, newlines + and formfeeds between tokens serve as explicit separators. \<^medskip> The categories for named tokens are defined once and for all as follows. @@ -134,66 +128,62 @@ \end{supertabular} \end{center} - A @{syntax_ref var} or @{syntax_ref typevar} describes an unknown, - which is internally a pair of base name and index (ML type @{ML_type - indexname}). These components are either separated by a dot as in - \?x.1\ or \?x7.3\ or run together as in \?x1\. The latter form is possible - if the base name does not end with digits. If the index is 0, it may be - dropped altogether: \?x\ and \?x0\ and \?x.0\ all refer to the - same unknown, with basename \x\ and index 0. 
+ A @{syntax_ref var} or @{syntax_ref typevar} describes an unknown, which is + internally a pair of base name and index (ML type @{ML_type indexname}). + These components are either separated by a dot as in \?x.1\ or \?x7.3\ or + run together as in \?x1\. The latter form is possible if the base name does + not end with digits. If the index is 0, it may be dropped altogether: \?x\ + and \?x0\ and \?x.0\ all refer to the same unknown, with basename \x\ and + index 0. The syntax of @{syntax_ref string} admits any characters, including - newlines; ``\<^verbatim>\"\'' (double-quote) and ``\<^verbatim>\\\'' - (backslash) need to be escaped by a backslash; arbitrary - character codes may be specified as ``\<^verbatim>\\\\ddd\'', - with three decimal digits. Alternative strings according to - @{syntax_ref altstring} are analogous, using single back-quotes - instead. + newlines; ``\<^verbatim>\"\'' (double-quote) and ``\<^verbatim>\\\'' (backslash) need to be + escaped by a backslash; arbitrary character codes may be specified as + ``\<^verbatim>\\\\ddd\'', with three decimal digits. Alternative strings according to + @{syntax_ref altstring} are analogous, using single back-quotes instead. The body of @{syntax_ref verbatim} may consist of any text not containing - ``\<^verbatim>\*}\''; this allows to include quotes without further - escapes, but there is no way to escape ``\<^verbatim>\*}\''. Cartouches - do not have this limitation. + ``\<^verbatim>\*}\''; this allows to include quotes without further escapes, but there + is no way to escape ``\<^verbatim>\*}\''. Cartouches do not have this limitation. - A @{syntax_ref cartouche} consists of arbitrary text, with properly - balanced blocks of ``@{verbatim "\"}~\\\~@{verbatim - "\"}''. Note that the rendering of cartouche delimiters is - usually like this: ``\\ \ \\''. + A @{syntax_ref cartouche} consists of arbitrary text, with properly balanced + blocks of ``@{verbatim "\"}~\\\~@{verbatim "\"}''. Note that the rendering + of cartouche delimiters is usually like this: ``\\ \ \\''. - Source comments take the form \<^verbatim>\(*\~\\\~\<^verbatim>\*)\ and may be nested, although the user-interface - might prevent this. Note that this form indicates source comments - only, which are stripped after lexical analysis of the input. The - Isar syntax also provides proper \<^emph>\document comments\ that are - considered as part of the text (see \secref{sec:comments}). + Source comments take the form \<^verbatim>\(*\~\\\~\<^verbatim>\*)\ and may be nested, although + the user-interface might prevent this. Note that this form indicates source + comments only, which are stripped after lexical analysis of the input. The + Isar syntax also provides proper \<^emph>\document comments\ that are considered as + part of the text (see \secref{sec:comments}). - Common mathematical symbols such as \\\ are represented in - Isabelle as \<^verbatim>\\\. There are infinitely many Isabelle - symbols like this, although proper presentation is left to front-end - tools such as {\LaTeX} or Isabelle/jEdit. A list of - predefined Isabelle symbols that work well with these tools is given - in \appref{app:symbols}. Note that \<^verbatim>\\\ does not belong - to the \letter\ category, since it is already used differently - in the Pure term language.\ + Common mathematical symbols such as \\\ are represented in Isabelle as \<^verbatim>\\\. 
+ There are infinitely many Isabelle symbols like this, although proper + presentation is left to front-end tools such as {\LaTeX} or Isabelle/jEdit. + A list of predefined Isabelle symbols that work well with these tools is + given in \appref{app:symbols}. Note that \<^verbatim>\\\ does not belong to the + \letter\ category, since it is already used differently in the Pure term + language. +\ section \Common syntax entities\ text \ - We now introduce several basic syntactic entities, such as names, - terms, and theorem specifications, which are factored out of the - actual Isar language elements to be described later. + We now introduce several basic syntactic entities, such as names, terms, and + theorem specifications, which are factored out of the actual Isar language + elements to be described later. \ subsection \Names\ -text \Entity @{syntax name} usually refers to any name of types, - constants, theorems etc.\ that are to be \<^emph>\declared\ or - \<^emph>\defined\ (so qualified identifiers are excluded here). Quoted - strings provide an escape for non-identifier names or those ruled - out by outer syntax keywords (e.g.\ quoted \<^verbatim>\"let"\). - Already existing objects are usually referenced by @{syntax - nameref}. +text \ + Entity @{syntax name} usually refers to any name of types, constants, + theorems etc.\ that are to be \<^emph>\declared\ or \<^emph>\defined\ (so qualified + identifiers are excluded here). Quoted strings provide an escape for + non-identifier names or those ruled out by outer syntax keywords (e.g.\ + quoted \<^verbatim>\"let"\). Already existing objects are usually referenced by + @{syntax nameref}. @{rail \ @{syntax_def name}: @{syntax ident} | @{syntax symident} | @@ -208,9 +198,10 @@ subsection \Numbers\ -text \The outer lexical syntax (\secref{sec:outer-lex}) admits - natural numbers and floating point numbers. These are combined as - @{syntax int} and @{syntax real} as follows. +text \ + The outer lexical syntax (\secref{sec:outer-lex}) admits natural numbers and + floating point numbers. These are combined as @{syntax int} and @{syntax + real} as follows. @{rail \ @{syntax_def int}: @{syntax nat} | '-' @{syntax nat} @@ -218,8 +209,8 @@ @{syntax_def real}: @{syntax float} | @{syntax int} \} - Note that there is an overlap with the category @{syntax name}, - which also includes @{syntax nat}. + Note that there is an overlap with the category @{syntax name}, which also + includes @{syntax nat}. \ @@ -244,10 +235,9 @@ subsection \Type classes, sorts and arities\ text \ - Classes are specified by plain names. Sorts have a very simple - inner syntax, which is either a single class name \c\ or a - list \{c\<^sub>1, \, c\<^sub>n}\ referring to the - intersection of these classes. The syntax of type arities is given + Classes are specified by plain names. Sorts have a very simple inner syntax, + which is either a single class name \c\ or a list \{c\<^sub>1, \, c\<^sub>n}\ referring + to the intersection of these classes. The syntax of type arities is given directly at the outer level. @{rail \ @@ -263,18 +253,16 @@ subsection \Types and terms \label{sec:types-terms}\ text \ - The actual inner Isabelle syntax, that of types and terms of the - logic, is far too sophisticated in order to be modelled explicitly - at the outer theory level. Basically, any such entity has to be - quoted to turn it into a single token (the parsing and type-checking - is performed internally later). 
For convenience, a slightly more - liberal convention is adopted: quotes may be omitted for any type or - term that is already atomic at the outer level. For example, one - may just write \<^verbatim>\x\ instead of quoted \<^verbatim>\"x"\. - Note that symbolic identifiers (e.g.\ \<^verbatim>\++\ or \\\ are available as well, - provided these have not been superseded - by commands or other keywords already (such as \<^verbatim>\=\ or - \<^verbatim>\+\). + The actual inner Isabelle syntax, that of types and terms of the logic, is + far too sophisticated in order to be modelled explicitly at the outer theory + level. Basically, any such entity has to be quoted to turn it into a single + token (the parsing and type-checking is performed internally later). For + convenience, a slightly more liberal convention is adopted: quotes may be + omitted for any type or term that is already atomic at the outer level. For + example, one may just write \<^verbatim>\x\ instead of quoted \<^verbatim>\"x"\. Note that + symbolic identifiers (e.g.\ \<^verbatim>\++\ or \\\ are available as well, provided + these have not been superseded by commands or other keywords already (such + as \<^verbatim>\=\ or \<^verbatim>\+\). @{rail \ @{syntax_def type}: @{syntax nameref} | @{syntax typefree} | @@ -294,10 +282,10 @@ @{syntax_def insts}: (@{syntax inst} *) \} - Named instantiations are specified as pairs of assignments \v = - t\, which refer to schematic variables in some theorem that is - instantiated. Both type and terms instantiations are admitted, and - distinguished by the usual syntax of variable names. + Named instantiations are specified as pairs of assignments \v = t\, which + refer to schematic variables in some theorem that is instantiated. Both type + and terms instantiations are admitted, and distinguished by the usual syntax + of variable names. @{rail \ @{syntax_def named_inst}: variable '=' (type | term) @@ -307,10 +295,10 @@ variable: @{syntax name} | @{syntax var} | @{syntax typefree} | @{syntax typevar} \} - Type declarations and definitions usually refer to @{syntax - typespec} on the left-hand side. This models basic type constructor - application at the outer syntax level. Note that only plain postfix - notation is available here, but no infixes. + Type declarations and definitions usually refer to @{syntax typespec} on the + left-hand side. This models basic type constructor application at the outer + syntax level. Note that only plain postfix notation is available here, but + no infixes. @{rail \ @{syntax_def typespec}: @@ -325,10 +313,11 @@ subsection \Term patterns and declarations \label{sec:term-decls}\ -text \Wherever explicit propositions (or term fragments) occur in a - proof text, casual binding of schematic term variables may be given - specified via patterns of the form ``\(\ p\<^sub>1 \ p\<^sub>n)\''. - This works both for @{syntax term} and @{syntax prop}. +text \ + Wherever explicit propositions (or term fragments) occur in a proof text, + casual binding of schematic term variables may be given specified via + patterns of the form ``\(\ p\<^sub>1 \ p\<^sub>n)\''. This works both for @{syntax + term} and @{syntax prop}. @{rail \ @{syntax_def term_pat}: '(' (@'is' @{syntax term} +) ')' @@ -337,13 +326,12 @@ \} \<^medskip> - Declarations of local variables \x :: \\ and - logical propositions \a : \\ represent different views on - the same principle of introducing a local scope. 
In practice, one - may usually omit the typing of @{syntax vars} (due to - type-inference), and the naming of propositions (due to implicit - references of current facts). In any case, Isar proof elements - usually admit to introduce multiple such items simultaneously. + Declarations of local variables \x :: \\ and logical propositions \a : \\ + represent different views on the same principle of introducing a local + scope. In practice, one may usually omit the typing of @{syntax vars} (due + to type-inference), and the naming of propositions (due to implicit + references of current facts). In any case, Isar proof elements usually admit + to introduce multiple such items simultaneously. @{rail \ @{syntax_def vars}: (@{syntax name} +) ('::' @{syntax type})? @@ -353,14 +341,13 @@ @{syntax_def props'}: (@{syntax prop} @{syntax prop_pat}? +) \} - The treatment of multiple declarations corresponds to the - complementary focus of @{syntax vars} versus @{syntax props}. In - ``\x\<^sub>1 \ x\<^sub>n :: \\'' the typing refers to all variables, while - in \a: \\<^sub>1 \ \\<^sub>n\ the naming refers to all propositions - collectively. Isar language elements that refer to @{syntax vars} - or @{syntax props} typically admit separate typings or namings via - another level of iteration, with explicit @{keyword_ref "and"} - separators; e.g.\ see @{command "fix"} and @{command "assume"} in + The treatment of multiple declarations corresponds to the complementary + focus of @{syntax vars} versus @{syntax props}. In ``\x\<^sub>1 \ x\<^sub>n :: \\'' the + typing refers to all variables, while in \a: \\<^sub>1 \ \\<^sub>n\ the naming refers to + all propositions collectively. Isar language elements that refer to @{syntax + vars} or @{syntax props} typically admit separate typings or namings via + another level of iteration, with explicit @{keyword_ref "and"} separators; + e.g.\ see @{command "fix"} and @{command "assume"} in \secref{sec:proof-context}. @{rail \ @@ -372,21 +359,22 @@ The category @{syntax "fixes"} is a richer variant of @{syntax vars}: it admits specification of mixfix syntax for the entities that are introduced - into the context. An optional suffix ``@{keyword "for"}~\fixes\'' - is admitted in many situations to indicate a so-called ``eigen-context'' - of a formal element: the result will be exported and thus generalized over - the given variables.\ + into the context. An optional suffix ``@{keyword "for"}~\fixes\'' is + admitted in many situations to indicate a so-called ``eigen-context'' of a + formal element: the result will be exported and thus generalized over the + given variables. +\ subsection \Attributes and theorems \label{sec:syn-att}\ -text \Attributes have their own ``semi-inner'' syntax, in the sense - that input conforming to @{syntax args} below is parsed by the - attribute a second time. The attribute argument specifications may - be any sequence of atomic entities (identifiers, strings etc.), or - properly bracketed argument lists. Below @{syntax atom} refers to - any atomic entity, including any @{syntax keyword} conforming to - @{syntax symident}. +text \ + Attributes have their own ``semi-inner'' syntax, in the sense that input + conforming to @{syntax args} below is parsed by the attribute a second time. + The attribute argument specifications may be any sequence of atomic entities + (identifiers, strings etc.), or properly bracketed argument lists. Below + @{syntax atom} refers to any atomic entity, including any @{syntax keyword} + conforming to @{syntax symident}. 
@{rail \ @{syntax_def atom}: @{syntax nameref} | @{syntax typefree} | @@ -400,35 +388,33 @@ @{syntax_def attributes}: '[' (@{syntax nameref} @{syntax args} * ',') ']' \} - Theorem specifications come in several flavors: @{syntax axmdecl} - and @{syntax thmdecl} usually refer to axioms, assumptions or - results of goal statements, while @{syntax thmdef} collects lists of - existing theorems. Existing theorems are given by @{syntax thmref} - and @{syntax thmrefs}, the former requires an actual singleton - result. + Theorem specifications come in several flavors: @{syntax axmdecl} and + @{syntax thmdecl} usually refer to axioms, assumptions or results of goal + statements, while @{syntax thmdef} collects lists of existing theorems. + Existing theorems are given by @{syntax thmref} and @{syntax thmrefs}, the + former requires an actual singleton result. There are three forms of theorem references: - \<^enum> named facts \a\, + \<^enum> named facts \a\, - \<^enum> selections from named facts \a(i)\ or \a(j - k)\, + \<^enum> selections from named facts \a(i)\ or \a(j - k)\, - \<^enum> literal fact propositions using token syntax @{syntax_ref altstring} - \<^verbatim>\`\\\\\<^verbatim>\`\ or @{syntax_ref cartouche} - \\\\\ (see also method @{method_ref fact}). - + \<^enum> literal fact propositions using token syntax @{syntax_ref altstring} + \<^verbatim>\`\\\\\<^verbatim>\`\ or @{syntax_ref cartouche} + \\\\\ (see also method @{method_ref fact}). - Any kind of theorem specification may include lists of attributes - both on the left and right hand sides; attributes are applied to any - immediately preceding fact. If names are omitted, the theorems are - not stored within the theorem database of the theory or proof - context, but any given attributes are applied nonetheless. + Any kind of theorem specification may include lists of attributes both on + the left and right hand sides; attributes are applied to any immediately + preceding fact. If names are omitted, the theorems are not stored within the + theorem database of the theory or proof context, but any given attributes + are applied nonetheless. - An extra pair of brackets around attributes (like ``\[[simproc a]]\'') abbreviates a theorem reference involving an - internal dummy fact, which will be ignored later on. So only the - effect of the attribute on the background context will persist. - This form of in-place declarations is particularly useful with - commands like @{command "declare"} and @{command "using"}. + An extra pair of brackets around attributes (like ``\[[simproc a]]\'') + abbreviates a theorem reference involving an internal dummy fact, which will + be ignored later on. So only the effect of the attribute on the background + context will persist. This form of in-place declarations is particularly + useful with commands like @{command "declare"} and @{command "using"}. @{rail \ @{syntax_def axmdecl}: @{syntax name} @{syntax attributes}? ':' @@ -492,9 +478,9 @@ @@{command unused_thms} ((@{syntax name} +) '-' (@{syntax name} * ))? \} - These commands print certain parts of the theory and proof context. - Note that there are some further ones available, such as for the set - of rules declared for simplifications. + These commands print certain parts of the theory and proof context. Note + that there are some further ones available, such as for the set of rules + declared for simplifications. \<^descr> @{command "print_theory"} prints the main logical content of the background theory; the ``\!\'' option indicates extra verbosity. 
\<^descr> @{command "print_definitions"} prints dependencies of definitional specifications within the background theory, which may be constants \secref{sec:consts} or types (\secref{sec:types-pure}, - \secref{sec:hol-typedef}); the ``\!\'' option indicates extra - verbosity. + \secref{sec:hol-typedef}); the ``\!\'' option indicates extra verbosity. \<^descr> @{command "print_methods"} prints all proof methods available in the - current theory context; the ``\!\'' option indicates extra - verbosity. + current theory context; the ``\!\'' option indicates extra verbosity. \<^descr> @{command "print_attributes"} prints all attributes available in the - current theory context; the ``\!\'' option indicates extra - verbosity. + current theory context; the ``\!\'' option indicates extra verbosity. \<^descr> @{command "print_theorems"} prints theorems of the background theory - resulting from the last command; the ``\!\'' option indicates - extra verbosity. + resulting from the last command; the ``\!\'' option indicates extra + verbosity. - \<^descr> @{command "print_facts"} prints all local facts of the current - context, both named and unnamed ones; the ``\!\'' option indicates - extra verbosity. + \<^descr> @{command "print_facts"} prints all local facts of the current context, + both named and unnamed ones; the ``\!\'' option indicates extra verbosity. - \<^descr> @{command "print_term_bindings"} prints all term bindings that - are present in the context. + \<^descr> @{command "print_term_bindings"} prints all term bindings that are present + in the context. - \<^descr> @{command "find_theorems"}~\criteria\ retrieves facts - from the theory or proof context matching all of given search - criteria. The criterion \name: p\ selects all theorems - whose fully qualified name matches pattern \p\, which may - contain ``\*\'' wildcards. The criteria \intro\, - \elim\, and \dest\ select theorems that match the - current goal as introduction, elimination or destruction rules, - respectively. The criterion \solves\ returns all rules - that would directly solve the current goal. The criterion - \simp: t\ selects all rewrite rules whose left-hand side - matches the given term. The criterion term \t\ selects all - theorems that contain the pattern \t\ -- as usual, patterns - may contain occurrences of the dummy ``\_\'', schematic - variables, and type constraints. + \<^descr> @{command "find_theorems"}~\criteria\ retrieves facts from the theory or + proof context matching all of the given search criteria. The criterion \name: p\ + selects all theorems whose fully qualified name matches pattern \p\, which + may contain ``\*\'' wildcards. The criteria \intro\, \elim\, and \dest\ + select theorems that match the current goal as introduction, elimination or + destruction rules, respectively. The criterion \solves\ returns all rules + that would directly solve the current goal. The criterion \simp: t\ selects + all rewrite rules whose left-hand side matches the given term. The criterion + term \t\ selects all theorems that contain the pattern \t\ -- as usual, + patterns may contain occurrences of the dummy ``\_\'', schematic variables, + and type constraints. - Criteria can be preceded by ``\-\'' to select theorems that - do \<^emph>\not\ match. Note that giving the empty list of criteria - yields \<^emph>\all\ currently known facts. An optional limit for the - number of printed facts may be given; the default is 40. By - default, duplicates are removed from the search result.
Use - \with_dups\ to display duplicates. + Criteria can be preceded by ``\-\'' to select theorems that do \<^emph>\not\ match. + Note that giving the empty list of criteria yields \<^emph>\all\ currently known + facts. An optional limit for the number of printed facts may be given; the + default is 40. By default, duplicates are removed from the search result. + Use \with_dups\ to display duplicates. - \<^descr> @{command "find_consts"}~\criteria\ prints all constants - whose type meets all of the given criteria. The criterion \strict: ty\ is met by any type that matches the type pattern - \ty\. Patterns may contain both the dummy type ``\_\'' - and sort constraints. The criterion \ty\ is similar, but it - also matches against subtypes. The criterion \name: p\ and - the prefix ``\-\'' function as described for @{command - "find_theorems"}. + \<^descr> @{command "find_consts"}~\criteria\ prints all constants whose type meets + all of the given criteria. The criterion \strict: ty\ is met by any type + that matches the type pattern \ty\. Patterns may contain both the dummy type + ``\_\'' and sort constraints. The criterion \ty\ is similar, but it also + matches against subtypes. The criterion \name: p\ and the prefix ``\-\'' + function as described for @{command "find_theorems"}. - \<^descr> @{command "thm_deps"}~\a\<^sub>1 \ a\<^sub>n\ - visualizes dependencies of facts, using Isabelle's graph browser - tool (see also @{cite "isabelle-system"}). + \<^descr> @{command "thm_deps"}~\a\<^sub>1 \ a\<^sub>n\ visualizes dependencies of facts, using + Isabelle's graph browser tool (see also @{cite "isabelle-system"}). - \<^descr> @{command "unused_thms"}~\A\<^sub>1 \ A\<^sub>m - B\<^sub>1 \ B\<^sub>n\ - displays all theorems that are proved in theories \B\<^sub>1 \ B\<^sub>n\ - or their parents but not in \A\<^sub>1 \ A\<^sub>m\ or their parents and - that are never used. - If \n\ is \0\, the end of the range of theories - defaults to the current theory. If no range is specified, + \<^descr> @{command "unused_thms"}~\A\<^sub>1 \ A\<^sub>m - B\<^sub>1 \ B\<^sub>n\ displays all theorems + that are proved in theories \B\<^sub>1 \ B\<^sub>n\ or their parents but not in \A\<^sub>1 \ + A\<^sub>m\ or their parents and that are never used. If \n\ is \0\, the end of the + range of theories defaults to the current theory. If no range is specified, only the unused theorems in the current theory are displayed. \
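To make the conventions documented above concrete (quoting of non-atomic inner-syntax entities, term patterns of the form "(is ...)", in-place attribute declarations, and diagnostic commands), here is a minimal sketch in a hypothetical theory. The theory name Scratch_Examples and the particular facts add.commute and add_ac are illustrative assumptions, not taken from the changeset itself.

theory Scratch_Examples
  imports Main
begin

(* Inner syntax must be atomic at the outer level: the compound term needs
   quotes, while a plain identifier may be written unquoted. *)
term "x + y"
term z

(* A term pattern "(is ...)" binds schematic variables against the stated
   proposition; the bound abbreviations can be reused in the proof text. *)
lemma "x + y = y + (x::nat)" (is "?lhs = ?rhs")
proof -
  show "?lhs = ?rhs" by (rule add.commute)
qed

(* Selection from a named fact list, and an in-place attribute declaration
   that affects only the background context. *)
thm add_ac(1)
declare [[show_types = false]]

(* Diagnostic search with combined criteria, as described for find_theorems. *)
find_theorems "_ + _ = _ + _" name: "commute"

end

Writing term x + y without quotes would be rejected by the outer parser, since only atomic entities may appear unquoted.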