
%% $Id$
\chapter{Constructive Type Theory}
\index{Constructive Type Theory|(}

\underscoreoff %this file contains _ in rule names

Martin-L\"of's Constructive Type Theory \cite{martinlof84,nordstrom90} can
be viewed at many different levels.  It is a formal system that embodies
the principles of intuitionistic mathematics; it embodies the
interpretation of propositions as types; it is a vehicle for deriving
programs from proofs.  

Thompson's book~\cite{thompson91} gives a readable and thorough account of
Type Theory.  Nuprl is an elaborate implementation~\cite{constable86}.
{\sc alf} is a more recent tool that allows proof terms to be edited
directly~\cite{alf}.

Isabelle's original formulation of Type Theory was a kind of sequent
calculus, following Martin-L\"of~\cite{martinlof84}.  It included rules for
building the context, namely variable bindings with their types.  A typical
judgement was
\[   a(x@1,\ldots,x@n)\in A(x@1,\ldots,x@n) \; 
    [ x@1\in A@1, x@2\in A@2(x@1), \ldots, x@n\in A@n(x@1,\ldots,x@{n-1}) ]
\]
This sequent calculus was not satisfactory because assumptions like
`suppose $A$ is a type' or `suppose $B(x)$ is a type for all $x$ in $A$'
could not be formalized.  

The theory~\thydx{CTT} implements Constructive Type Theory, using
natural deduction.  The judgement above is expressed using $\Forall$ and
$\Imp$:
\[ \begin{array}{r@{}l}
     \Forall x@1\ldots x@n. &
          \List{x@1\in A@1; 
                x@2\in A@2(x@1); \cdots \; 
                x@n\in A@n(x@1,\ldots,x@{n-1})} \Imp \\
     &  \qquad\qquad a(x@1,\ldots,x@n)\in A(x@1,\ldots,x@n) 
    \end{array}
\]
Assumptions can use all the judgement forms, for instance to express that
$B$ is a family of types over~$A$:
\[ \Forall x . x\in A \Imp B(x)\;{\rm type} \]
To justify the CTT formulation it is probably best to appeal directly to the
semantic explanations of the rules~\cite{martinlof84}, rather than to the
rules themselves.  The order of assumptions no longer matters, unlike in
standard Type Theory.  Contexts, which are typical of many modern type
theories, are difficult to represent in Isabelle.  In particular, it is
difficult to enforce that all the variables in a context are distinct.
\index{assumptions!in CTT}

The theory does not use polymorphism.  Terms in CTT have type~\tydx{i}, the
type of individuals.  Types in CTT have type~\tydx{t}.

\begin{figure} \tabcolsep=1em  %wider spacing in tables
\begin{center}
\begin{tabular}{rrr} 
  \it name      & \it meta-type         & \it description \\ 
  \cdx{Type}    & $t \to prop$          & judgement form \\
  \cdx{Eqtype}  & $[t,t]\to prop$       & judgement form\\
  \cdx{Elem}    & $[i, t]\to prop$      & judgement form\\
  \cdx{Eqelem}  & $[i, i, t]\to prop$   & judgement form\\
  \cdx{Reduce}  & $[i, i]\to prop$      & extra judgement form\\[2ex]

  \cdx{N}       &     $t$               & natural numbers type\\
  \cdx{0}       &     $i$               & constructor\\
  \cdx{succ}    & $i\to i$              & constructor\\
  \cdx{rec}     & $[i,i,[i,i]\to i]\to i$       & eliminator\\[2ex]
  \cdx{Prod}    & $[t,i\to t]\to t$     & general product type\\
  \cdx{lambda}  & $(i\to i)\to i$       & constructor\\[2ex]
  \cdx{Sum}     & $[t, i\to t]\to t$    & general sum type\\
  \cdx{pair}    & $[i,i]\to i$          & constructor\\
  \cdx{split}   & $[i,[i,i]\to i]\to i$ & eliminator\\
  \cdx{fst} \cdx{snd} & $i\to i$        & projections\\[2ex]
  \cdx{inl} \cdx{inr} & $i\to i$        & constructors for $+$\\
  \cdx{when}    & $[i,i\to i, i\to i]\to i$    & eliminator for $+$\\[2ex]
  \cdx{Eq}      & $[t,i,i]\to t$        & equality type\\
  \cdx{eq}      & $i$                   & constructor\\[2ex]
  \cdx{F}       & $t$                   & empty type\\
  \cdx{contr}   & $i\to i$              & eliminator\\[2ex]
  \cdx{T}       & $t$                   & singleton type\\
  \cdx{tt}      & $i$                   & constructor
\end{tabular}
\end{center}
\caption{The constants of CTT} \label{ctt-constants}
\end{figure}


CTT supports all of Type Theory apart from list types, well-ordering types,
and universes.  Universes could be introduced {\em\`a la Tarski}, adding new
constants as names for types.  The formulation {\em\`a la Russell}, where
types denote themselves, is only possible if we identify the meta-types~{\tt
  i} and~{\tt t}.  Most published formulations of well-ordering types have
difficulties involving extensionality of functions; I suggest that you use
some other method for defining recursive types.  List types are easy to
introduce by declaring new rules.
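For instance, a list type might be declared by rules along the following
lines, following the conventions of the figures below.  This is only a
hypothetical sketch: the constants {\tt List}, {\tt nil}, {\tt cons}
and~{\tt lrec}, and the rule names, are not part of the distributed theory,
and the corresponding long and computation rules would have to be declared
as well.
\begin{ttbox}
ListF        A type ==> List(A) type
ListI_nil    A type ==> nil : List(A)
ListI_cons   [| a: A;  l: List(A) |] ==> cons(a,l) : List(A)
ListE        [| p: List(A);  a: C(nil);
                !!x y z. [| x:A; y:List(A); z:C(y)
                         |] ==> b(x,y,z): C(cons(x,y))
             |] ==> lrec(p, a, \%x y z. b(x,y,z)) : C(p)
\end{ttbox}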

CTT uses the 1982 version of Type Theory, with extensional equality.  The
equality judgement $a=b\in A$ and membership in the equality type,
$c\in Eq(A,a,b)$, are interchangeable.
Its rewriting tactics prove theorems of the form $a=b\in A$.  It could be
modified to have intensional equality, but rewriting tactics would have to
prove theorems of the form $c\in Eq(A,a,b)$ and the computation rules might
require a separate simplifier.


\begin{figure} \tabcolsep=1em  %wider spacing in tables
\index{lambda abs@$\lambda$-abstractions!in CTT}
\begin{center}
\begin{tabular}{llrrr} 
  \it symbol &\it name     &\it meta-type & \it priority & \it description \\
  \sdx{lam} & \cdx{lambda}  & $(i\To i)\To i$ & 10 & $\lambda$-abstraction
\end{tabular}
\end{center}
\subcaption{Binders} 

\begin{center}
\index{*"` symbol}\index{function applications!in CTT}
\index{*"+ symbol}
\begin{tabular}{rrrr} 
  \it symbol & \it meta-type    & \it priority & \it description \\ 
  \tt `         & $[i,i]\to i$  & Left 55       & function application\\
  \tt +         & $[t,t]\to t$  & Right 30      & sum of two types
\end{tabular}
\end{center}
\subcaption{Infixes}

\index{*"* symbol}
\index{*"-"-"> symbol}
\begin{center} \tt\frenchspacing
\begin{tabular}{rrr} 
  \it external                  & \it internal  & \it standard notation \\ 
  \sdx{PROD} $x$:$A$ . $B[x]$   &  Prod($A$, $\lambda x. B[x]$) &
        \rm product $\prod@{x\in A}B[x]$ \\
  \sdx{SUM} $x$:$A$ . $B[x]$    & Sum($A$, $\lambda x. B[x]$) &
        \rm sum $\sum@{x\in A}B[x]$ \\
  $A$ --> $B$     &  Prod($A$, $\lambda x. B$) &
        \rm function space $A\to B$ \\
  $A$ * $B$       &  Sum($A$, $\lambda x. B$)  &
        \rm binary product $A\times B$
\end{tabular}
\end{center}
\subcaption{Translations} 

\index{*"= symbol}
\begin{center}
\dquotes
\[ \begin{array}{rcl}
prop    & = &  type " type"       \\
        & | & type " = " type     \\
        & | & term " : " type        \\
        & | & term " = " term " : " type 
\\[2ex]
type    & = & \hbox{expression of type~$t$} \\
        & | & "PROD~" id " : " type " . " type  \\
        & | & "SUM~~" id " : " type " . " type 
\\[2ex]
term    & = & \hbox{expression of type~$i$} \\
        & | & "lam " id~id^* " . " term   \\
        & | & "< " term " , " term " >"
\end{array} 
\]
\end{center}
\subcaption{Grammar}
\caption{Syntax of CTT} \label{ctt-syntax}
\end{figure}

%%%%\section{Generic Packages}  typedsimp.ML????????????????


\section{Syntax}
The constants are shown in Fig.\ts\ref{ctt-constants}.  The infixes include
the function application operator (sometimes called `apply'), and the 2-place
type operators.  Note that meta-level abstraction and application, $\lambda
x.b$ and $f(a)$, differ from object-level abstraction and application,
\hbox{\tt lam $x$. $b$} and $f{\tt`}a$.  A CTT function~$f$ is simply an
individual as far as Isabelle is concerned: its Isabelle type is~$i$, not say
$i\To i$.

The notation for~CTT (Fig.\ts\ref{ctt-syntax}) is based on that of Nordstr\"om
et al.~\cite{nordstrom90}.  The empty type is called $F$ and the one-element
type is $T$; other finite types are built as $T+T+T$, etc.

\index{*SUM symbol}\index{*PROD symbol}
Quantification is expressed by sums $\sum@{x\in A}B[x]$ and
products $\prod@{x\in A}B[x]$.  Instead of {\tt Sum($A$,$B$)} and {\tt
  Prod($A$,$B$)} we may write \hbox{\tt SUM $x$:$A$.\ $B[x]$} and \hbox{\tt
  PROD $x$:$A$.\ $B[x]$}.  For example, we may write
\begin{ttbox}
SUM y:B. PROD x:A. C(x,y)   {\rm for}   Sum(B, \%y. Prod(A, \%x. C(x,y)))
\end{ttbox}
The special cases \hbox{\tt$A$*$B$} and \hbox{\tt$A$-->$B$} abbreviate
general sums and products over a constant family.\footnote{Unlike normal
infix operators, {\tt*} and {\tt-->} merely define abbreviations; there are
no constants~{\tt op~*} and~\hbox{\tt op~-->}.}  Isabelle accepts these
abbreviations in parsing and uses them whenever possible for printing.


\begin{figure} 
\begin{ttbox}
\tdx{refl_type}         A type ==> A = A
\tdx{refl_elem}         a : A ==> a = a : A

\tdx{sym_type}          A = B ==> B = A
\tdx{sym_elem}          a = b : A ==> b = a : A

\tdx{trans_type}        [| A = B;  B = C |] ==> A = C
\tdx{trans_elem}        [| a = b : A;  b = c : A |] ==> a = c : A

\tdx{equal_types}       [| a : A;  A = B |] ==> a : B
\tdx{equal_typesL}      [| a = b : A;  A = B |] ==> a = b : B

\tdx{subst_type}        [| a : A;  !!z. z:A ==> B(z) type |] ==> B(a) type
\tdx{subst_typeL}       [| a = c : A;  !!z. z:A ==> B(z) = D(z) 
                  |] ==> B(a) = D(c)

\tdx{subst_elem}        [| a : A;  !!z. z:A ==> b(z):B(z) |] ==> b(a):B(a)
\tdx{subst_elemL}       [| a = c : A;  !!z. z:A ==> b(z) = d(z) : B(z) 
                  |] ==> b(a) = d(c) : B(a)

\tdx{refl_red}          Reduce(a,a)
\tdx{red_if_equal}      a = b : A ==> Reduce(a,b)
\tdx{trans_red}         [| a = b : A;  Reduce(b,c) |] ==> a = c : A
\end{ttbox}
\caption{General equality rules} \label{ctt-equality}
\end{figure}


\begin{figure} 
\begin{ttbox}
\tdx{NF}        N type

\tdx{NI0}       0 : N
\tdx{NI_succ}   a : N ==> succ(a) : N
\tdx{NI_succL}  a = b : N ==> succ(a) = succ(b) : N

\tdx{NE}        [| p: N;  a: C(0);  
             !!u v. [| u: N; v: C(u) |] ==> b(u,v): C(succ(u)) 
          |] ==> rec(p, a, \%u v. b(u,v)) : C(p)

\tdx{NEL}       [| p = q : N;  a = c : C(0);  
             !!u v. [| u: N; v: C(u) |] ==> b(u,v)=d(u,v): C(succ(u))
          |] ==> rec(p, a, \%u v. b(u,v)) = rec(q,c,d) : C(p)

\tdx{NC0}       [| a: C(0);  
             !!u v. [| u: N; v: C(u) |] ==> b(u,v): C(succ(u))
          |] ==> rec(0, a, \%u v. b(u,v)) = a : C(0)

\tdx{NC_succ}   [| p: N;  a: C(0);  
             !!u v. [| u: N; v: C(u) |] ==> b(u,v): C(succ(u)) 
          |] ==> rec(succ(p), a, \%u v. b(u,v)) =
                 b(p, rec(p, a, \%u v. b(u,v))) : C(succ(p))

\tdx{zero_ne_succ}      [| a: N;  0 = succ(a) : N |] ==> 0: F
\end{ttbox}
\caption{Rules for type~$N$} \label{ctt-N}
\end{figure}


\begin{figure} 
\begin{ttbox}
\tdx{ProdF}     [| A type; !!x. x:A ==> B(x) type |] ==> PROD x:A. B(x) type
\tdx{ProdFL}    [| A = C;  !!x. x:A ==> B(x) = D(x) |] ==> 
          PROD x:A. B(x) = PROD x:C. D(x)

\tdx{ProdI}     [| A type;  !!x. x:A ==> b(x):B(x)
          |] ==> lam x. b(x) : PROD x:A. B(x)
\tdx{ProdIL}    [| A type;  !!x. x:A ==> b(x) = c(x) : B(x)
          |] ==> lam x. b(x) = lam x. c(x) : PROD x:A. B(x)

\tdx{ProdE}     [| p : PROD x:A. B(x);  a : A |] ==> p`a : B(a)
\tdx{ProdEL}    [| p=q: PROD x:A. B(x);  a=b : A |] ==> p`a = q`b : B(a)

\tdx{ProdC}     [| a : A;  !!x. x:A ==> b(x) : B(x)
          |] ==> (lam x. b(x)) ` a = b(a) : B(a)

\tdx{ProdC2}    p : PROD x:A. B(x) ==> (lam x. p`x) = p : PROD x:A. B(x)
\end{ttbox}
\caption{Rules for the product type $\prod\sb{x\in A}B[x]$} \label{ctt-prod}
\end{figure}


\begin{figure} 
\begin{ttbox}
\tdx{SumF}      [| A type;  !!x. x:A ==> B(x) type |] ==> SUM x:A. B(x) type
\tdx{SumFL}     [| A = C;  !!x. x:A ==> B(x) = D(x) 
          |] ==> SUM x:A. B(x) = SUM x:C. D(x)

\tdx{SumI}      [| a : A;  b : B(a) |] ==> <a,b> : SUM x:A. B(x)
\tdx{SumIL}     [| a=c:A;  b=d:B(a) |] ==> <a,b> = <c,d> : SUM x:A. B(x)

\tdx{SumE}      [| p: SUM x:A. B(x);  
             !!x y. [| x:A; y:B(x) |] ==> c(x,y): C(<x,y>) 
          |] ==> split(p, \%x y. c(x,y)) : C(p)

\tdx{SumEL}     [| p=q : SUM x:A. B(x); 
             !!x y. [| x:A; y:B(x) |] ==> c(x,y)=d(x,y): C(<x,y>)
          |] ==> split(p, \%x y. c(x,y)) = split(q, \%x y. d(x,y)) : C(p)

\tdx{SumC}      [| a: A;  b: B(a);
             !!x y. [| x:A; y:B(x) |] ==> c(x,y): C(<x,y>)
          |] ==> split(<a,b>, \%x y. c(x,y)) = c(a,b) : C(<a,b>)

\tdx{fst_def}   fst(a) == split(a, \%x y. x)
\tdx{snd_def}   snd(a) == split(a, \%x y. y)
\end{ttbox}
\caption{Rules for the sum type $\sum\sb{x\in A}B[x]$} \label{ctt-sum}
\end{figure}


\begin{figure} 
\begin{ttbox}
\tdx{PlusF}       [| A type;  B type |] ==> A+B type
\tdx{PlusFL}      [| A = C;  B = D |] ==> A+B = C+D

\tdx{PlusI_inl}   [| a : A;  B type |] ==> inl(a) : A+B
\tdx{PlusI_inlL}  [| a = c : A;  B type |] ==> inl(a) = inl(c) : A+B

\tdx{PlusI_inr}   [| A type;  b : B |] ==> inr(b) : A+B
\tdx{PlusI_inrL}  [| A type;  b = d : B |] ==> inr(b) = inr(d) : A+B

\tdx{PlusE}     [| p: A+B;
             !!x. x:A ==> c(x): C(inl(x));  
             !!y. y:B ==> d(y): C(inr(y))
          |] ==> when(p, \%x. c(x), \%y. d(y)) : C(p)

\tdx{PlusEL}    [| p = q : A+B;
             !!x. x: A ==> c(x) = e(x) : C(inl(x));   
             !!y. y: B ==> d(y) = f(y) : C(inr(y))
          |] ==> when(p, \%x. c(x), \%y. d(y)) = 
                 when(q, \%x. e(x), \%y. f(y)) : C(p)

\tdx{PlusC_inl} [| a: A;
             !!x. x:A ==> c(x): C(inl(x));  
             !!y. y:B ==> d(y): C(inr(y))
          |] ==> when(inl(a), \%x. c(x), \%y. d(y)) = c(a) : C(inl(a))

\tdx{PlusC_inr} [| b: B;
             !!x. x:A ==> c(x): C(inl(x));  
             !!y. y:B ==> d(y): C(inr(y))
          |] ==> when(inr(b), \%x. c(x), \%y. d(y)) = d(b) : C(inr(b))
\end{ttbox}
\caption{Rules for the binary sum type $A+B$} \label{ctt-plus}
\end{figure}


\begin{figure} 
\begin{ttbox}
\tdx{FF}        F type
\tdx{FE}        [| p: F;  C type |] ==> contr(p) : C
\tdx{FEL}       [| p = q : F;  C type |] ==> contr(p) = contr(q) : C

\tdx{TF}        T type
\tdx{TI}        tt : T
\tdx{TE}        [| p : T;  c : C(tt) |] ==> c : C(p)
\tdx{TEL}       [| p = q : T;  c = d : C(tt) |] ==> c = d : C(p)
\tdx{TC}        p : T ==> p = tt : T
\end{ttbox}

\caption{Rules for types $F$ and $T$} \label{ctt-ft}
\end{figure}


\begin{figure} 
\begin{ttbox}
\tdx{EqF}       [| A type;  a : A;  b : A |] ==> Eq(A,a,b) type
\tdx{EqFL}      [| A=B;  a=c: A;  b=d : A |] ==> Eq(A,a,b) = Eq(B,c,d)
\tdx{EqI}       a = b : A ==> eq : Eq(A,a,b)
\tdx{EqE}       p : Eq(A,a,b) ==> a = b : A
\tdx{EqC}       p : Eq(A,a,b) ==> p = eq : Eq(A,a,b)
\end{ttbox}
\caption{Rules for the equality type $Eq(A,a,b)$} \label{ctt-eq}
\end{figure}


\begin{figure} 
\begin{ttbox}
\tdx{replace_type}    [| B = A;  a : A |] ==> a : B
\tdx{subst_eqtyparg}  [| a=c : A;  !!z. z:A ==> B(z) type |] ==> B(a)=B(c)

\tdx{subst_prodE}     [| p: Prod(A,B);  a: A;  !!z. z: B(a) ==> c(z): C(z)
                |] ==> c(p`a): C(p`a)

\tdx{SumIL2}    [| c=a : A;  d=b : B(a) |] ==> <c,d> = <a,b> : Sum(A,B)

\tdx{SumE_fst}  p : Sum(A,B) ==> fst(p) : A

\tdx{SumE_snd}  [| p: Sum(A,B);  A type;  !!x. x:A ==> B(x) type
          |] ==> snd(p) : B(fst(p))
\end{ttbox}

\caption{Derived rules for CTT} \label{ctt-derived}
\end{figure}


\section{Rules of inference}
The rules obey the following naming conventions.  Type formation rules have
the suffix~{\tt F}\@.  Introduction rules have the suffix~{\tt I}\@.
Elimination rules have the suffix~{\tt E}\@.  Computation rules, which
describe the reduction of eliminators, have the suffix~{\tt C}\@.  The
equality versions of the rules (which permit reductions on subterms) are
called {\bf long} rules; their names have the suffix~{\tt L}\@.
Introduction and computation rules are often further suffixed with
constructor names.

Figure~\ref{ctt-equality} presents the equality rules.  Most of them are
straightforward: reflexivity, symmetry, transitivity and substitution.  The
judgement \cdx{Reduce} does not belong to Type Theory proper; it has
been added to implement rewriting.  The judgement ${\tt Reduce}(a,b)$ holds
when $a=b:A$ holds.  It also holds when $a$ and $b$ are syntactically
identical, even if they are ill-typed, because rule {\tt refl_red} does
not verify that $a$ belongs to $A$.  

The {\tt Reduce} rules do not give rise to new theorems about the standard
judgements.  The only rule with {\tt Reduce} in a premise is
{\tt trans_red}, whose other premise ensures that $a$ and~$b$ (and thus
$c$) are well-typed.

Figure~\ref{ctt-N} presents the rules for~$N$, the type of natural numbers.
They include \tdx{zero_ne_succ}, which asserts $0\not=n+1$.  This is
the fourth Peano axiom and cannot be derived without universes \cite[page
91]{martinlof84}.  

The constant \cdx{rec} constructs proof terms when mathematical
induction, rule~\tdx{NE}, is applied.  It can also express primitive
recursion.  Since \cdx{rec} can be applied to higher-order functions,
it can even express Ackermann's function, which is not primitive recursive
\cite[page~104]{thompson91}.
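Its computation rules, \tdx{NC0} and \tdx{NC_succ} in Fig.\ts\ref{ctt-N},
amount to the familiar recursion equations (under the appropriate typing
premises):
\[ {\tt rec}(0,\,a,\,b) = a  \qquad
   {\tt rec}({\tt succ}(p),\,a,\,b) = b(p,\,{\tt rec}(p,\,a,\,b)) \]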

Figure~\ref{ctt-prod} shows the rules for general product types, which
include function types as a special case.  The rules correspond to the
predicate calculus rules for universal quantifiers and implication.  They
also permit reasoning about functions, with the rules of a typed
$\lambda$-calculus.

Figure~\ref{ctt-sum} shows the rules for general sum types, which
include binary product types as a special case.  The rules correspond to the
predicate calculus rules for existential quantifiers and conjunction.  They
also permit reasoning about ordered pairs, with the projections
\cdx{fst} and~\cdx{snd}.

Figure~\ref{ctt-plus} shows the rules for binary sum types.  They
correspond to the predicate calculus rules for disjunction.  They also
permit reasoning about disjoint sums, with the injections \cdx{inl}
and~\cdx{inr} and case analysis operator~\cdx{when}.

Figure~\ref{ctt-ft} shows the rules for the empty and unit types, $F$
and~$T$.  They correspond to the predicate calculus rules for absurdity and
truth.

Figure~\ref{ctt-eq} shows the rules for equality types.  If $a=b\in A$ is
provable then \cdx{eq} is a canonical element of the type $Eq(A,a,b)$,
and vice versa.  These rules define extensional equality; the most recent
versions of Type Theory use intensional equality~\cite{nordstrom90}.

Figure~\ref{ctt-derived} presents the derived rules.  The rule
\tdx{subst_prodE} is derived from {\tt prodE}, and is easier to use
in backwards proof.  The rules \tdx{SumE_fst} and \tdx{SumE_snd}
express the typing of~\cdx{fst} and~\cdx{snd}; together, they are
roughly equivalent to~{\tt SumE} with the advantage of creating no
parameters.  Section~\ref{ctt-choice} below demonstrates these rules in a
proof of the Axiom of Choice.

All the rules are given in $\eta$-expanded form.  For instance, every
occurrence of $\lambda u\,v. b(u,v)$ could be abbreviated to~$b$ in the
rules for~$N$.  The expanded form permits Isabelle to preserve bound
variable names during backward proof.  Names of bound variables in the
conclusion (here, $u$ and~$v$) are matched with corresponding bound
variables in the premises.


\section{Rule lists}
The Type Theory tactics provide rewriting, type inference, and logical
reasoning.  Many proof procedures work by repeatedly resolving certain Type
Theory rules against a proof state.  CTT defines several lists of related
rules, each with the {\ML} type
\begin{ttdescription}
\item[\ttindexbold{form_rls}] 
contains formation rules for the types $N$, $\Pi$, $\Sigma$, $+$, $Eq$,
$F$, and $T$.

\item[\ttindexbold{formL_rls}] 
contains long formation rules for $\Pi$, $\Sigma$, $+$, and $Eq$.  (For
other types use \tdx{refl_type}.)

\item[\ttindexbold{intr_rls}] 
contains introduction rules for the types $N$, $\Pi$, $\Sigma$, $+$, and
$T$.

\item[\ttindexbold{intrL_rls}] 
contains long introduction rules for $N$, $\Pi$, $\Sigma$, and $+$.  (For
$T$ use \tdx{refl_elem}.)

\item[\ttindexbold{elim_rls}] 
contains elimination rules for the types $N$, $\Pi$, $\Sigma$, $+$, and
$F$.  The rules for $Eq$ and $T$ are omitted because they involve no
eliminator.

\item[\ttindexbold{elimL_rls}] 
contains long elimination rules for $N$, $\Pi$, $\Sigma$, $+$, and $F$.

\item[\ttindexbold{comp_rls}] 
contains computation rules for the types $N$, $\Pi$, $\Sigma$, and $+$.
Those for $Eq$ and $T$ involve no eliminator.

\item[\ttindexbold{basic_defs}] 
contains the definitions of {\tt fst} and {\tt snd}.  
\end{ttdescription}
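
These lists can be combined using ordinary {\ML} list operations.  For
example, the following is a minimal sketch (not how \ttindex{typechk_tac}
is actually implemented) of a naive type-checking loop that repeatedly
resolves against all of the formation, introduction and elimination rules;
the next section explains why such blind resolution is seldom effective.
\begin{ttbox}
val typing_rls = form_rls @ intr_rls @ elim_rls;
by (REPEAT (ares_tac typing_rls 1));
\end{ttbox}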


\section{Tactics for subgoal reordering}
\begin{ttbox}
test_assume_tac : int -> tactic
typechk_tac     : thm list -> tactic
equal_tac       : thm list -> tactic
intr_tac        : thm list -> tactic
\end{ttbox}
Blind application of CTT rules seldom leads to a proof.  The elimination
rules, especially, create subgoals containing new unknowns.  These subgoals
unify with anything, creating a huge search space.  The standard tactic
\ttindex{filt_resolve_tac}
(see \iflabelundefined{filt_resolve_tac}{the {\em Reference Manual\/}}%
        {\S\ref{filt_resolve_tac}})
%
fails for goals that are too flexible; so does the CTT tactic {\tt
  test_assume_tac}.  Used with the tactical \ttindex{REPEAT_FIRST} they
achieve a simple kind of subgoal reordering: the less flexible subgoals are
attempted first.  Do some single step proofs, or study the examples below,
to see why this is necessary.
\begin{ttdescription}
\item[\ttindexbold{test_assume_tac} $i$] 
uses {\tt assume_tac} to solve the subgoal by assumption, but only if
subgoal~$i$ has the form $a\in A$ and the head of $a$ is not an unknown.
Otherwise, it fails.

\item[\ttindexbold{typechk_tac} $thms$] 
uses $thms$ with formation, introduction, and elimination rules to check
the typing of constructions.  It is designed to solve goals of the form
$a\in \Var{A}$, where $a$ is rigid and $\Var{A}$ is flexible; thus it
performs type inference.  The tactic can also solve goals of
the form $A\;\rm type$.

\item[\ttindexbold{equal_tac} $thms$]
uses $thms$ with the long introduction and elimination rules to solve goals
of the form $a=b\in A$, where $a$ is rigid.  It is intended for deriving
the long rules for defined constants such as the arithmetic operators.  The
tactic can also perform type-checking.

\item[\ttindexbold{intr_tac} $thms$]
uses $thms$ with the introduction rules to break down a type.  It is
designed for goals like $\Var{a}\in A$ where $\Var{a}$ is flexible and $A$
rigid.  These typically arise when trying to prove a proposition~$A$,
expressed as a type.
\end{ttdescription}



\section{Rewriting tactics}
\begin{ttbox}
rew_tac     : thm list -> tactic
hyp_rew_tac : thm list -> tactic
\end{ttbox}
Object-level simplification is accomplished through proof, using the {\tt
  CTT} equality rules and the built-in rewriting functor
{\tt TSimpFun}.%
\footnote{This should not be confused with Isabelle's main simplifier; {\tt
    TSimpFun} is only useful for CTT and similar logics with type inference
  rules.  At present it is undocumented.}
%
The rewrites include the computation rules and other equations.  The long
versions of the other rules permit rewriting of subterms and subtypes.
Also used are transitivity and the extra judgement form \cdx{Reduce}.
Meta-level simplification handles only definitional equality.
\begin{ttdescription}
\item[\ttindexbold{rew_tac} $thms$]
applies $thms$ and the computation rules as left-to-right rewrites.  It
solves the goal $a=b\in A$ by rewriting $a$ to $b$.  If $b$ is an unknown
then it is assigned the rewritten form of~$a$.  All subgoals are rewritten.

\item[\ttindexbold{hyp_rew_tac} $thms$]
is like {\tt rew_tac}, but includes as rewrites any equations present in
the assumptions.
\end{ttdescription}
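
As a small illustration, the computation rule \tdx{ProdC} lets {\tt rew_tac}
perform $\beta$-contraction.  A session along the following lines (output
omitted) should prove a trivial instance; this is a hypothetical example,
not one taken from the examples directory.
\begin{ttbox}
val prems = Goal "[| A type;  a : A |] ==> (lam x. x) ` a = a : A";
by (rew_tac prems);
\end{ttbox}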


\section{Tactics for logical reasoning}
Interpreting propositions as types lets CTT express statements of
intuitionistic logic.  However, Constructive Type Theory is not just another
syntax for first-order logic.  There are fundamental differences.

\index{assumptions!in CTT}
Can assumptions be deleted after use?  Not every occurrence of a type
represents a proposition, and Type Theory assumptions declare variables.
In first-order logic, $\disj$-elimination with the assumption $P\disj Q$
creates one subgoal assuming $P$ and another assuming $Q$, and $P\disj Q$
can be deleted safely.  In Type Theory, $+$-elimination with the assumption
$z\in A+B$ creates one subgoal assuming $x\in A$ and another assuming $y\in
B$ (for arbitrary $x$ and $y$).  Deleting $z\in A+B$ when other assumptions
refer to $z$ may render the subgoal unprovable: arguably,
meaningless.

Isabelle provides several tactics for predicate calculus reasoning in CTT:
\begin{ttbox}
mp_tac       : int -> tactic
add_mp_tac   : int -> tactic
safestep_tac : thm list -> int -> tactic
safe_tac     : thm list -> int -> tactic
step_tac     : thm list -> int -> tactic
pc_tac       : thm list -> int -> tactic
\end{ttbox}
These are loosely based on the intuitionistic proof procedures
of~\thydx{FOL}.  For the reasons discussed above, a rule that is safe for
propositional reasoning may be unsafe for type-checking; thus, some of the
`safe' tactics are misnamed.
\begin{ttdescription}
\item[\ttindexbold{mp_tac} $i$] 
searches in subgoal~$i$ for assumptions of the form $f\in\Pi(A,B)$ and
$a\in A$, where~$A$ may be found by unification.  It replaces
$f\in\Pi(A,B)$ by $z\in B(a)$, where~$z$ is a new parameter.  The tactic
can produce multiple outcomes for each suitable pair of assumptions.  In
short, {\tt mp_tac} performs Modus Ponens among the assumptions.

\item[\ttindexbold{add_mp_tac} $i$]
is like {\tt mp_tac}~$i$ but retains the assumption $f\in\Pi(A,B)$.  It
avoids information loss but obviously loops if repeated.

\item[\ttindexbold{safestep_tac} $thms$ $i$]
attacks subgoal~$i$ using formation rules and certain other `safe' rules
(\tdx{FE}, \tdx{ProdI}, \tdx{SumE}, \tdx{PlusE}), calling
{\tt mp_tac} when appropriate.  It also uses~$thms$,
which are typically premises of the rule being derived.

\item[\ttindexbold{safe_tac} $thms$ $i$] attempts to solve subgoal~$i$ by
  means of backtracking, using {\tt safestep_tac}.

\item[\ttindexbold{step_tac} $thms$ $i$]
tries to reduce subgoal~$i$ using {\tt safestep_tac}, then tries unsafe
rules.  It may produce multiple outcomes.

\item[\ttindexbold{pc_tac} $thms$ $i$]
tries to solve subgoal~$i$ by backtracking, using {\tt step_tac}.
\end{ttdescription}
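
As a simple illustration (again a hypothetical session, with the output
omitted), {\tt pc_tac} should prove the type $A\to(B\to A)$, read as a
proposition, yielding a proof term essentially of the form $\lambda
x\,y.\,x$:
\begin{ttbox}
val prems = Goal "[| A type;  B type |] ==> ?a : A --> (B --> A)";
by (pc_tac prems 1);
\end{ttbox}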



\begin{figure} 
\index{#+@{\tt\#+} symbol}
\index{*"- symbol}
\index{"|"-@{{\tt"|"-"|} symbol}}
\index{#*@{\tt\#*} symbol}
\index{*div symbol}
\index{*mod symbol}
\begin{constants}
  \it symbol  & \it meta-type & \it priority & \it description \\ 
  \tt \#*       & $[i,i]\To i$  &  Left 70      & multiplication \\
  \tt div       & $[i,i]\To i$  &  Left 70      & division\\
  \tt mod       & $[i,i]\To i$  &  Left 70      & modulus\\
  \tt \#+       & $[i,i]\To i$  &  Left 65      & addition\\
  \tt -         & $[i,i]\To i$  &  Left 65      & subtraction\\
  \verb'|-|'    & $[i,i]\To i$  &  Left 65      & absolute difference
\end{constants}

\begin{ttbox}
\tdx{add_def}           a#+b  == rec(a, b, \%u v. succ(v))  
\tdx{diff_def}          a-b   == rec(b, a, \%u v. rec(v, 0, \%x y. x))  
\tdx{absdiff_def}       a|-|b == (a-b) #+ (b-a)  
\tdx{mult_def}          a#*b  == rec(a, 0, \%u v. b #+ v)  

\tdx{mod_def}           a mod b ==
                  rec(a, 0, \%u v. rec(succ(v) |-| b, 0, \%x y. succ(v)))

\tdx{div_def}           a div b ==
                  rec(a, 0, \%u v. rec(succ(u) mod b, succ(v), \%x y. v))

\tdx{add_typing}        [| a:N;  b:N |] ==> a #+ b : N
\tdx{addC0}             b:N ==> 0 #+ b = b : N
\tdx{addC_succ}         [| a:N;  b:N |] ==> succ(a) #+ b = succ(a #+ b) : N

\tdx{add_assoc}         [| a:N;  b:N;  c:N |] ==> 
                  (a #+ b) #+ c = a #+ (b #+ c) : N

\tdx{add_commute}       [| a:N;  b:N |] ==> a #+ b = b #+ a : N

\tdx{mult_typing}       [| a:N;  b:N |] ==> a #* b : N
\tdx{multC0}            b:N ==> 0 #* b = 0 : N
\tdx{multC_succ}        [| a:N;  b:N |] ==> succ(a) #* b = b #+ (a#*b) : N
\tdx{mult_commute}      [| a:N;  b:N |] ==> a #* b = b #* a : N

\tdx{add_mult_dist}     [| a:N;  b:N;  c:N |] ==> 
                  (a #+ b) #* c = (a #* c) #+ (b #* c) : N

\tdx{mult_assoc}        [| a:N;  b:N;  c:N |] ==> 
                  (a #* b) #* c = a #* (b #* c) : N

\tdx{diff_typing}       [| a:N;  b:N |] ==> a - b : N
\tdx{diffC0}            a:N ==> a - 0 = a : N
\tdx{diff_0_eq_0}       b:N ==> 0 - b = 0 : N
\tdx{diff_succ_succ}    [| a:N;  b:N |] ==> succ(a) - succ(b) = a - b : N
\tdx{diff_self_eq_0}    a:N ==> a - a = 0 : N
\tdx{add_inverse_diff}  [| a:N;  b:N;  b-a=0 : N |] ==> b #+ (a-b) = a : N
\end{ttbox}
\caption{The theory of arithmetic} \label{ctt_arith}
\end{figure}


\section{A theory of arithmetic}
\thydx{Arith} is a theory of elementary arithmetic.  It proves the
properties of addition, multiplication, subtraction, division, and
remainder, culminating in the theorem
\[ a \bmod b + (a/b)\times b = a. \]
Figure~\ref{ctt_arith} presents the definitions and some of the key
theorems, including commutative, distributive, and associative laws.

The operators~\verb'#+', \verb'-', \verb'|-|', \verb'#*', \verb'mod'
and~\verb'div' stand for sum, difference, absolute difference, product,
remainder and quotient, respectively.  Since Type Theory has only primitive
recursion, some of their definitions may be obscure.  

The difference~$a-b$ is computed by taking $b$ predecessors of~$a$, where
the predecessor function is $\lambda v. {\tt rec}(v, 0, \lambda x\,y. x)$.
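For example, writing $1$ and $2$ for ${\tt succ}(0)$ and
${\tt succ}({\tt succ}(0))$, unfolding \tdx{diff_def} and applying the
computation rules \tdx{NC0} and \tdx{NC_succ} evaluates $2-1$ as follows:
\[ \begin{array}{rcl}
   2 - 1 & = & {\tt rec}(1,\, 2,\, \lambda u\,v. {\tt rec}(v, 0, \lambda x\,y. x)) \\
         & = & {\tt rec}(2,\, 0,\, \lambda x\,y. x) \\
         & = & 1
   \end{array}
\]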

The remainder $a\bmod b$ counts up to~$a$ in a cyclic fashion, using 0
as the successor of~$b-1$.  Absolute difference is used to test the
equality $succ(v)=b$.

The quotient $a/b$ is computed by adding one for every number $x$
such that $0\leq x \leq a$ and $x\bmod b = 0$.



\section{The examples directory}
This directory contains examples and experimental proofs in CTT.
\begin{ttdescription}
\item[CTT/ex/typechk.ML]
contains simple examples of type-checking and type deduction.

\item[CTT/ex/elim.ML]
contains some examples from Martin-L\"of~\cite{martinlof84}, proved using 
{\tt pc_tac}.

\item[CTT/ex/equal.ML]
contains simple examples of rewriting.

\item[CTT/ex/synth.ML]
demonstrates the use of unknowns with some trivial examples of program
synthesis. 
\end{ttdescription}


\section{Example: type inference}
Type inference involves proving a goal of the form $a\in\Var{A}$, where $a$
is a term and $\Var{A}$ is an unknown standing for its type.  The type,
initially
unknown, takes shape in the course of the proof.  Our example is the
predecessor function on the natural numbers.
\begin{ttbox}
Goal "lam n. rec(n, 0, \%x y. x) : ?A";
{\out Level 0}
{\out lam n. rec(n,0,\%x y. x) : ?A}
{\out  1. lam n. rec(n,0,\%x y. x) : ?A}
\end{ttbox}
Since the term is a Constructive Type Theory $\lambda$-abstraction (not to
be confused with a meta-level abstraction), we apply the rule
\tdx{ProdI}, for $\Pi$-introduction.  This instantiates~$\Var{A}$ to a
product type of unknown domain and range.
\begin{ttbox}
by (resolve_tac [ProdI] 1);
{\out Level 1}
{\out lam n. rec(n,0,\%x y. x) : PROD x:?A1. ?B1(x)}
{\out  1. ?A1 type}
{\out  2. !!n. n : ?A1 ==> rec(n,0,\%x y. x) : ?B1(n)}
\end{ttbox}
Subgoal~1 is too flexible.  It can be solved by instantiating~$\Var{A@1}$
to any type, but most instantiations will invalidate subgoal~2.  We
therefore tackle the latter subgoal.  It asks the type of a term beginning
with {\tt rec}, which can be found by $N$-elimination.%
\index{*NE theorem}
\begin{ttbox}
by (eresolve_tac [NE] 2);
{\out Level 2}
{\out lam n. rec(n,0,\%x y. x) : PROD x:N. ?C2(x,x)}
{\out  1. N type}
{\out  2. !!n. 0 : ?C2(n,0)}
{\out  3. !!n x y. [| x : N; y : ?C2(n,x) |] ==> x : ?C2(n,succ(x))}
\end{ttbox}
Subgoal~1 is no longer flexible: we now know~$\Var{A@1}$ is the type of
natural numbers.  However, let us continue proving nontrivial subgoals.
Subgoal~2 asks, what is the type of~0?\index{*NI0 theorem}
\begin{ttbox}
by (resolve_tac [NI0] 2);
{\out Level 3}
{\out lam n. rec(n,0,\%x y. x) : N --> N}
{\out  1. N type}
{\out  2. !!n x y. [| x : N; y : N |] ==> x : N}
\end{ttbox}
The type~$\Var{A}$ is now fully determined.  It is the product type
$\prod@{x\in N}N$, which is shown as the function type $N\to N$ because
there is no dependence on~$x$.  But we must prove all the subgoals to show
that the original term is validly typed.  Subgoal~2 is provable by
assumption and the remaining subgoal falls by $N$-formation.%
\index{*NF theorem}
\begin{ttbox}
by (assume_tac 2);
{\out Level 4}
{\out lam n. rec(n,0,\%x y. x) : N --> N}
{\out  1. N type}
\ttbreak
by (resolve_tac [NF] 1);
{\out Level 5}
{\out lam n. rec(n,0,\%x y. x) : N --> N}
{\out No subgoals!}
\end{ttbox}
Calling \ttindex{typechk_tac} can prove this theorem in one step.
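For instance, a session along these lines (output omitted), passing no
extra rules, does the whole type inference at once:
\begin{ttbox}
Goal "lam n. rec(n, 0, \%x y. x) : ?A";
by (typechk_tac []);
\end{ttbox}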

Even if the original term is ill-typed, one can infer a type for it, but
unprovable subgoals will be left.  As an exercise, try to prove the
following invalid goal:
\begin{ttbox}
Goal "lam n. rec(n, 0, \%x y. tt) : ?A";
\end{ttbox}



\section{An example of logical reasoning}
Logical reasoning in Type Theory involves proving a goal of the form
$\Var{a}\in A$, where type $A$ expresses a proposition and $\Var{a}$ stands
for its proof term, a value of type $A$.  The proof term is initially
unknown and takes shape during the proof.  

Our example expresses a theorem about quantifiers in a sorted logic:
\[ \infer{(\ex{x\in A}P(x)) \disj (\ex{x\in A}Q(x))}
         {\ex{x\in A}P(x)\disj Q(x)} 
\]
By the propositions-as-types principle, this is encoded
using~$\Sigma$ and~$+$ types.  A special case of it expresses a
distributive law of Type Theory: 
\[ \infer{(A\times B) + (A\times C)}{A\times(B+C)} \]
Generalizing this from $\times$ to $\Sigma$, and making the typing
conditions explicit, yields the rule we must derive:
\[ \infer{\Var{a} \in (\sum@{x\in A} B(x)) + (\sum@{x\in A} C(x))}
         {\hbox{$A$ type} &
          \infer*{\hbox{$B(x)$ type}}{[x\in A]}  &
          \infer*{\hbox{$C(x)$ type}}{[x\in A]}  &
          p\in \sum@{x\in A} B(x)+C(x)} 
\]
To begin, we bind the rule's premises --- returned by the~{\tt goal}
command --- to the {\ML} variable~{\tt prems}.
\begin{ttbox}
val prems = Goal
    "[| A type;                       \ttback
\ttback       !!x. x:A ==> B(x) type;       \ttback
\ttback       !!x. x:A ==> C(x) type;       \ttback
\ttback       p: SUM x:A. B(x) + C(x)       \ttback
\ttback    |] ==>  ?a : (SUM x:A. B(x)) + (SUM x:A. C(x))";
{\out Level 0}
{\out ?a : (SUM x:A. B(x)) + (SUM x:A. C(x))}
{\out  1. ?a : (SUM x:A. B(x)) + (SUM x:A. C(x))}
\ttbreak
{\out val prems = ["A type  [A type]",}
{\out              "?x : A ==> B(?x) type  [!!x. x : A ==> B(x) type]",}
{\out              "?x : A ==> C(?x) type  [!!x. x : A ==> C(x) type]",}
{\out              "p : SUM x:A. B(x) + C(x)  [p : SUM x:A. B(x) + C(x)]"]}
{\out             : thm list}
\end{ttbox}
The last premise involves the sum type~$\Sigma$.  Since it is a premise
rather than the assumption of a goal, it cannot be found by {\tt
  eresolve_tac}.  We could insert it (and the other atomic premise) by
calling
\begin{ttbox}
cut_facts_tac prems 1;
\end{ttbox}
A forward proof step is more straightforward here.  Let us resolve the
$\Sigma$-elimination rule with the premises using~\ttindex{RL}.  This
inference yields one result, which we supply to {\tt
  resolve_tac}.\index{*SumE theorem}
\begin{ttbox}
by (resolve_tac (prems RL [SumE]) 1);
{\out Level 1}
{\out split(p,?c1) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
{\out  1. !!x y.}
{\out        [| x : A; y : B(x) + C(x) |] ==>}
{\out        ?c1(x,y) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
\end{ttbox}
The subgoal has two new parameters, $x$ and~$y$.  In the main goal,
$\Var{a}$ has been instantiated with a \cdx{split} term.  The
assumption $y\in B(x) + C(x)$ is eliminated next, causing a case split and
creating the parameter~$xa$.  This inference also inserts~\cdx{when}
into the main goal.\index{*PlusE theorem}
\begin{ttbox}
by (eresolve_tac [PlusE] 1);
{\out Level 2}
{\out split(p,\%x y. when(y,?c2(x,y),?d2(x,y)))}
{\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
{\out  1. !!x y xa.}
{\out        [| x : A; xa : B(x) |] ==>}
{\out        ?c2(x,y,xa) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
\ttbreak
{\out  2. !!x y ya.}
{\out        [| x : A; ya : C(x) |] ==>}
{\out        ?d2(x,y,ya) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
\end{ttbox}
To complete the proof object for the main goal, we need to instantiate the
terms $\Var{c@2}(x,y,xa)$ and $\Var{d@2}(x,y,xa)$.  We attack subgoal~1 by
a~$+$-introduction rule; since the goal assumes $xa\in B(x)$, we take the left
injection~(\cdx{inl}).
\index{*PlusI_inl theorem}
\begin{ttbox}
by (resolve_tac [PlusI_inl] 1);
{\out Level 3}
{\out split(p,\%x y. when(y,\%xa. inl(?a3(x,y,xa)),?d2(x,y)))}
{\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
{\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> ?a3(x,y,xa) : SUM x:A. B(x)}
{\out  2. !!x y xa. [| x : A; xa : B(x) |] ==> SUM x:A. C(x) type}
\ttbreak
{\out  3. !!x y ya.}
{\out        [| x : A; ya : C(x) |] ==>}
{\out        ?d2(x,y,ya) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
\end{ttbox}
A new subgoal~2 has appeared, to verify that $\sum@{x\in A}C(x)$ is a type.
Continuing to work on subgoal~1, we apply the $\Sigma$-introduction rule.
This instantiates the term $\Var{a@3}(x,y,xa)$; the main goal now contains
an ordered pair, whose components are two new unknowns.%
\index{*SumI theorem}
\begin{ttbox}
by (resolve_tac [SumI] 1);
{\out Level 4}
{\out split(p,\%x y. when(y,\%xa. inl(<?a4(x,y,xa),?b4(x,y,xa)>),?d2(x,y)))}
{\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
{\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> ?a4(x,y,xa) : A}
{\out  2. !!x y xa. [| x : A; xa : B(x) |] ==> ?b4(x,y,xa) : B(?a4(x,y,xa))}
{\out  3. !!x y xa. [| x : A; xa : B(x) |] ==> SUM x:A. C(x) type}
{\out  4. !!x y ya.}
{\out        [| x : A; ya : C(x) |] ==>}
{\out        ?d2(x,y,ya) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
\end{ttbox}
The two new subgoals both hold by assumption.  Observe how the unknowns
$\Var{a@4}$ and $\Var{b@4}$ are instantiated throughout the proof state.
\begin{ttbox}
by (assume_tac 1);
{\out Level 5}
{\out split(p,\%x y. when(y,\%xa. inl(<x,?b4(x,y,xa)>),?d2(x,y)))}
{\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
\ttbreak
{\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> ?b4(x,y,xa) : B(x)}
{\out  2. !!x y xa. [| x : A; xa : B(x) |] ==> SUM x:A. C(x) type}
{\out  3. !!x y ya.}
{\out        [| x : A; ya : C(x) |] ==>}
{\out        ?d2(x,y,ya) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
\ttbreak
by (assume_tac 1);
{\out Level 6}
{\out split(p,\%x y. when(y,\%xa. inl(<x,xa>),?d2(x,y)))}
{\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
{\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> SUM x:A. C(x) type}
{\out  2. !!x y ya.}
{\out        [| x : A; ya : C(x) |] ==>}
{\out        ?d2(x,y,ya) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
\end{ttbox}
Subgoal~1 is an example of a well-formedness subgoal~\cite{constable86}.
Such subgoals are usually trivial; this one yields to
\ttindex{typechk_tac}, given the current list of premises.
\begin{ttbox}
by (typechk_tac prems);
{\out Level 7}
{\out split(p,\%x y. when(y,\%xa. inl(<x,xa>),?d2(x,y)))}
{\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
{\out  1. !!x y ya.}
{\out        [| x : A; ya : C(x) |] ==>}
{\out        ?d2(x,y,ya) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
\end{ttbox}
This subgoal is the other case from the $+$-elimination above, and can be
proved similarly.  Quicker is to apply \ttindex{pc_tac}.  The main goal
finally gets a fully instantiated proof object.
\begin{ttbox}
by (pc_tac prems 1);
{\out Level 8}
{\out split(p,\%x y. when(y,\%xa. inl(<x,xa>),\%y. inr(<x,y>)))}
{\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
{\out No subgoals!}
\end{ttbox}
Calling \ttindex{pc_tac} after the first $\Sigma$-elimination above also
proves this theorem.


\section{Example: deriving a currying functional}
In simply-typed languages such as {\ML}, a currying functional has the type 
\[ (A\times B \to C) \to (A\to (B\to C)). \]
Let us generalize this to the dependent types~$\Sigma$ and~$\Pi$.  
The functional takes a function~$f$ that maps $z:\Sigma(A,B)$
to~$C(z)$; the resulting function maps $x\in A$ and $y\in B(x)$ to
$C(\langle x,y\rangle)$.

Formally, there are three typing premises.  $A$ is a type; $B$ is an
$A$-indexed family of types; $C$ is a family of types indexed by
$\Sigma(A,B)$.  The goal is expressed using \hbox{\tt PROD f} to ensure
that the parameter corresponding to the functional's argument is really
called~$f$; Isabelle echoes the type using \verb|-->| because there is no
explicit dependence upon~$f$.
\begin{ttbox}
val prems = Goal
    "[| A type; !!x. x:A ==> B(x) type;                           \ttback
\ttback               !!z. z: (SUM x:A. B(x)) ==> C(z) type             \ttback
\ttback    |] ==> ?a : PROD f: (PROD z : (SUM x:A . B(x)) . C(z)).      \ttback
\ttback                     (PROD x:A . PROD y:B(x) . C(<x,y>))";
\ttbreak
{\out Level 0}
{\out ?a : (PROD z:SUM x:A. B(x). C(z)) -->}
{\out      (PROD x:A. PROD y:B(x). C(<x,y>))}
{\out  1. ?a : (PROD z:SUM x:A. B(x). C(z)) -->}
{\out          (PROD x:A. PROD y:B(x). C(<x,y>))}
\ttbreak
{\out val prems = ["A type  [A type]",}
{\out              "?x : A ==> B(?x) type  [!!x. x : A ==> B(x) type]",}
{\out              "?z : SUM x:A. B(x) ==> C(?z) type}
{\out               [!!z. z : SUM x:A. B(x) ==> C(z) type]"] : thm list}
\end{ttbox}
This is a chance to demonstrate \ttindex{intr_tac}.  Here, the tactic
repeatedly applies $\Pi$-introduction and proves the rather
tiresome typing conditions.  

Note that $\Var{a}$ becomes instantiated to three nested
$\lambda$-abstractions.  It would be easier to read if the bound variable
names agreed with the parameters in the subgoal.  Isabelle attempts to give
parameters the same names as corresponding bound variables in the goal, but
this does not always work.  In any event, the goal is logically correct.
\begin{ttbox}
by (intr_tac prems);
{\out Level 1}
{\out lam x xa xb. ?b7(x,xa,xb)}
{\out : (PROD z:SUM x:A. B(x). C(z)) --> (PROD x:A. PROD y:B(x). C(<x,y>))}
{\out  1. !!f x y.}
{\out        [| f : PROD z:SUM x:A. B(x). C(z); x : A; y : B(x) |] ==>}
{\out        ?b7(f,x,y) : C(<x,y>)}
\end{ttbox}
Using $\Pi$-elimination, we solve subgoal~1 by applying the function~$f$.
\index{*ProdE theorem}
\begin{ttbox}
by (eresolve_tac [ProdE] 1);
{\out Level 2}
{\out lam x xa xb. x ` <xa,xb>}
{\out : (PROD z:SUM x:A. B(x). C(z)) --> (PROD x:A. PROD y:B(x). C(<x,y>))}
{\out  1. !!f x y. [| x : A; y : B(x) |] ==> <x,y> : SUM x:A. B(x)}
\end{ttbox}
Finally, we verify that the argument's type is suitable for the function
application.  This is straightforward using introduction rules.
\index{*intr_tac}
\begin{ttbox}
by (intr_tac prems);
{\out Level 3}
{\out lam x xa xb. x ` <xa,xb>}
{\out : (PROD z:SUM x:A. B(x). C(z)) --> (PROD x:A. PROD y:B(x). C(<x,y>))}
{\out No subgoals!}
\end{ttbox}
Calling~\ttindex{pc_tac} would have proved this theorem in one step; it can
also prove an example by Martin-L\"of, related to $\disj$-elimination
\cite[page~58]{martinlof84}.


\section{Example: proving the Axiom of Choice} \label{ctt-choice}
Suppose we have a function $h\in \prod@{x\in A}\sum@{y\in B(x)} C(x,y)$,
which takes $x\in A$ to some $y\in B(x)$ paired with some $z\in C(x,y)$.
Interpreting propositions as types, this asserts that for all $x\in A$
there exists $y\in B(x)$ such that $C(x,y)$.  The Axiom of Choice asserts
that we can construct a function $f\in \prod@{x\in A}B(x)$ such that
$C(x,f{\tt`}x)$ for all $x\in A$, where the latter property is witnessed by a
function $g\in \prod@{x\in A}C(x,f{\tt`}x)$.

In principle, the Axiom of Choice is simple to derive in Constructive Type
Theory.  The following definitions work:
\begin{eqnarray*}
    f & \equiv & {\tt fst} \circ h \\
    g & \equiv & {\tt snd} \circ h
\end{eqnarray*}
But a completely formal proof is hard to find.  The rules can be applied in
countless ways, yielding many higher-order unifiers.  The proof can get
bogged down in the details.  But with a careful selection of derived rules
(recall Fig.\ts\ref{ctt-derived}) and the type-checking tactics, we can
prove the theorem in nine steps.
\begin{ttbox}
val prems = Goal
    "[| A type;  !!x. x:A ==> B(x) type;                    \ttback
\ttback       !!x y.[| x:A;  y:B(x) |] ==> C(x,y) type            \ttback
\ttback    |] ==> ?a : PROD h: (PROD x:A. SUM y:B(x). C(x,y)).    \ttback
\ttback                     (SUM f: (PROD x:A. B(x)). PROD x:A. C(x, f`x))";
{\out Level 0}
{\out ?a : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out      (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
{\out  1. ?a : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out          (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
\ttbreak
{\out val prems = ["A type  [A type]",}
{\out              "?x : A ==> B(?x) type  [!!x. x : A ==> B(x) type]",}
{\out              "[| ?x : A; ?y : B(?x) |] ==> C(?x, ?y) type}
{\out               [!!x y. [| x : A; y : B(x) |] ==> C(x, y) type]"]}
{\out             : thm list}
\end{ttbox}
First, \ttindex{intr_tac} applies introduction rules and performs routine
type-checking.  This instantiates~$\Var{a}$ to a construction involving
a $\lambda$-abstraction and an ordered pair.  The pair's components are
themselves $\lambda$-abstractions and there is a subgoal for each.
\begin{ttbox}
by (intr_tac prems);
{\out Level 1}
{\out lam x. <lam xa. ?b7(x,xa),lam xa. ?b8(x,xa)>}
{\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
\ttbreak
{\out  1. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        ?b7(h,x) : B(x)}
\ttbreak
{\out  2. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        ?b8(h,x) : C(x,(lam x. ?b7(h,x)) ` x)}
\end{ttbox}
Subgoal~1 asks to find the choice function itself, taking $x\in A$ to some
$\Var{b@7}(h,x)\in B(x)$.  Subgoal~2 asks, given $x\in A$, for a proof
object $\Var{b@8}(h,x)$ to witness that the choice function's argument and
result lie in the relation~$C$.  This latter task will take up most of the
proof.
\index{*ProdE theorem}\index{*SumE_fst theorem}\index{*RS}
\begin{ttbox}
by (eresolve_tac [ProdE RS SumE_fst] 1);
{\out Level 2}
{\out lam x. <lam xa. fst(x ` xa),lam xa. ?b8(x,xa)>}
{\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
\ttbreak
{\out  1. !!h x. x : A ==> x : A}
{\out  2. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        ?b8(h,x) : C(x,(lam x. fst(h ` x)) ` x)}
\end{ttbox}
Above, we have composed {\tt fst} with the function~$h$.  Unification
has deduced that the function must be applied to $x\in A$.  We have our
choice function.
\begin{ttbox}
by (assume_tac 1);
{\out Level 3}
{\out lam x. <lam xa. fst(x ` xa),lam xa. ?b8(x,xa)>}
{\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
{\out  1. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        ?b8(h,x) : C(x,(lam x. fst(h ` x)) ` x)}
\end{ttbox}
Before we can compose {\tt snd} with~$h$, the arguments of $C$ must be
simplified.  The derived rule \tdx{replace_type} lets us replace a type
by any equivalent type, shown below as the schematic term $\Var{A@{13}}(h,x)$:
\begin{ttbox}
by (resolve_tac [replace_type] 1);
{\out Level 4}
{\out lam x. <lam xa. fst(x ` xa),lam xa. ?b8(x,xa)>}
{\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
\ttbreak
{\out  1. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        C(x,(lam x. fst(h ` x)) ` x) = ?A13(h,x)}
\ttbreak
{\out  2. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        ?b8(h,x) : ?A13(h,x)}
\end{ttbox}
The derived rule \tdx{subst_eqtyparg} lets us simplify a type's
argument (by currying, $C(x)$ is a unary type operator):
\begin{ttbox}
by (resolve_tac [subst_eqtyparg] 1);
{\out Level 5}
{\out lam x. <lam xa. fst(x ` xa),lam xa. ?b8(x,xa)>}
{\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
\ttbreak
{\out  1. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        (lam x. fst(h ` x)) ` x = ?c14(h,x) : ?A14(h,x)}
\ttbreak
{\out  2. !!h x z.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A;}
{\out           z : ?A14(h,x) |] ==>}
{\out        C(x,z) type}
\ttbreak
{\out  3. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        ?b8(h,x) : C(x,?c14(h,x))}
\end{ttbox}
Subgoal~1 requires simply $\beta$-contraction, which is the rule
\tdx{ProdC}.  The term $\Var{c@{14}}(h,x)$ in the last subgoal
receives the contracted result.
\begin{ttbox}
by (resolve_tac [ProdC] 1);
{\out Level 6}
{\out lam x. <lam xa. fst(x ` xa),lam xa. ?b8(x,xa)>}
{\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
\ttbreak
{\out  1. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        x : ?A15(h,x)}
\ttbreak
{\out  2. !!h x xa.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A;}
{\out           xa : ?A15(h,x) |] ==>}
{\out        fst(h ` xa) : ?B15(h,x,xa)}
\ttbreak
{\out  3. !!h x z.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A;}
{\out           z : ?B15(h,x,x) |] ==>}
{\out        C(x,z) type}
\ttbreak
{\out  4. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        ?b8(h,x) : C(x,fst(h ` x))}
\end{ttbox}
Routine type-checking goals proliferate in Constructive Type Theory, but
\ttindex{typechk_tac} quickly solves them.  Note the inclusion of
\tdx{SumE_fst} along with the premises.
\begin{ttbox}
by (typechk_tac (SumE_fst::prems));
{\out Level 7}
{\out lam x. <lam xa. fst(x ` xa),lam xa. ?b8(x,xa)>}
{\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
\ttbreak
{\out  1. !!h x.}
{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
{\out        ?b8(h,x) : C(x,fst(h ` x))}
\end{ttbox}
We are finally ready to compose {\tt snd} with~$h$.
\index{*ProdE theorem}\index{*SumE_snd theorem}\index{*RS}
\begin{ttbox}
by (eresolve_tac [ProdE RS SumE_snd] 1);
{\out Level 8}
{\out lam x. <lam xa. fst(x ` xa),lam xa. snd(x ` xa)>}
{\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
\ttbreak
{\out  1. !!h x. x : A ==> x : A}
{\out  2. !!h x. x : A ==> B(x) type}
{\out  3. !!h x xa. [| x : A; xa : B(x) |] ==> C(x,xa) type}
\end{ttbox}
The proof object has reached its final form.  We call \ttindex{typechk_tac}
to finish the type-checking.
\begin{ttbox}
by (typechk_tac prems);
{\out Level 9}
{\out lam x. <lam xa. fst(x ` xa),lam xa. snd(x ` xa)>}
{\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
{\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
{\out No subgoals!}
\end{ttbox}
It might be instructive to compare this proof with Martin-L\"of's forward
proof of the Axiom of Choice \cite[page~50]{martinlof84}.

\index{Constructive Type Theory|)}