# The TPTP Problem Library

### TPTP v6.0.0

Geoff Sutcliffe, Department of Computer Science, University of Miami, geoff@cs.miami.edu

### Abstract

This report provides a detailed description of the TPTP Problem Library for automated theorem proving systems. The library is available online, and forms a common basis for development of and experimentation with automated theorem provers. This report provides:
• the motivations for building the library;
• a discussion of the inadequacies of previous problem collections, and how these have been resolved in the TPTP;
• a description of the library structure, including overview information;
• descriptions of supplementary utility programs;
• guidelines for obtaining and using the library.

 If you're new to all this and don't want to wade through the details provided in this manual, you might want to start at the TPTP and TSTP Quick Guide. You could also work your way through the slides of the TPTP World Tutorial.

## Introduction

The TPTP (Thousands of Problems for Theorem Provers) is a library of problems, in classical logic with an interpreted equality symbol, for Automated Theorem Proving (ATP) systems. The TPTP contains problems in typed higher-order form (THF) - simply typed lambda calculus, typed first-order form (TFF) - monomorphic typed first-order logic with predefined numeric types, first-order form (FOF) - classical first-order logic with quantifiers, and clause normal form (CNF) - first-order logic in conjunctive clausal form. The TPTP supplies the automated reasoning community with a comprehensive library of the ATP test problems that are available today, in order to provide an overview and a simple, unambiguous reference mechanism. The principal motivation for the TPTP is to support the testing and evaluation of ATP systems, to help ensure that performance results accurately reflect the capabilities of the ATP system being considered. A common library of problems is necessary for meaningful system evaluations, meaningful system comparisons, repeatability of testing, and the production of statistically significant results. The TPTP is such a library. The TPTP problems are stored in a specifically designed, easy to understand format. Utilities for manipulating the problems, for converting the problems to other known ATP formats, and for finding problems with certain characteristics, are provided. Since its first release in 1993, many researchers have used the TPTP as an appropriate and convenient basis for ATP system evaluation.
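To give a feel for the TPTP format, here is a small hypothetical FOF problem (invented for illustration, not a problem from the library), written as `fof(name, role, formula).` annotated formulae:

```
fof(only_cats_purr, axiom,
    ! [X] : ( purrs(X) => cat(X) ) ).

fof(fluffy_purrs, axiom,
    purrs(fluffy) ).

fof(fluffy_is_a_cat, conjecture,
    cat(fluffy) ).
```

Here `! [X]` is universal quantification and `=>` is implication; the `axiom` and `conjecture` roles tell an ATP system what is given and what is to be proved.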

This technical report explains the motivations and reasoning behind the development of the TPTP, and thus implicitly explains the design decisions made. It also serves as a manual explaining the structure and use of the TPTP: it provides a full description of the TPTP contents and organization, details of the utility programs, and guidelines for obtaining and using the TPTP.

A Quick Installation Guide for the TPTP is given in Section Getting and Using the TPTP. Please be sure to read the guidelines for using TPTP problems and presenting results, given in Section Using the TPTP.

What's New in TPTP v6.0.0:

• This is the first release with polymorphic TFF problems. There are 537 polymorphic TFF problems in 5 domains.
• In the SyntaxBNF:
  • Noted that \$distinct should be used only as a fact with constant arguments.
  • Allowed multiple binders for <tff_let_term_defn> and <tff_let_formula_defn>.
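As an illustration of the `$distinct` guideline (the constant names here are invented), a fact with constant arguments looks like:

```
tff(colours_distinct, axiom, $distinct(red, green, blue) ).
```

This asserts that the constants red, green, and blue denote pairwise unequal domain elements, which is the only usage the note above sanctions.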

### Previous Problem Collections

A large number of interesting problems had accumulated over the years in the ATP community. Besides publishing particularly interesting individual problems, from early on researchers collected problems in order to obtain a basis for experimentation. Problems in First Order Form (FOF), published by mathematicians and logicians prior to the mechanization of reasoning (e.g., [Chu56]), provided the first source for ATP researchers. The first major publication in this regard was [MOW76], which provides an explicit listing of clauses for 63 Clause Normal Form (CNF) problems, many of which are still relevant today (to our knowledge, the first circulation of problems for testing ATP systems was due to Larry Wos in the late sixties). In the same year, [WM76] documented 86 CNF problems which were commonly used for ATP testing. However, the problem clauses are not supplied in [WM76]. A second major thrust was provided by [Pel86], which lists 75 problems in both FOF and CNF. Other more recent papers are [BL+86], [Qua92], [MW92], and [McC93], to name a few. The Journal of Automated Reasoning's Problem Corner has also provided interesting challenge problems. Problems published in hardcopy form are, however, often not suitable for testing ATP systems, because they have to be transcribed to electronic form. This is a cumbersome, error-prone process, and is feasible for only small numbers of small problems.

The sparseness of research into ATP systems for FOF problems meant that no electronic collections of FOF test problems had previously been commonly available. A CNF problem collection in electronic form was made publicly available by Argonne National Laboratory (in Otter format [McC94]) in 1988 [ANL]. This collection was a major source of problems for ATP researchers. Other electronic collections of CNF problems have been available, but not announced officially (e.g., that distributed with the SPRFN ATP system [SPRFN]). Although some of these collections provided significant support to researchers, and formed the early core of the TPTP library, none (with the possible exception of the ANL collection) was specifically designed to serve as a common basis for ATP research. Rather, these collections typically were built in the course of research into a particular ATP system. As a result, several factors limited their usefulness as a common basis for research. In particular, previously existing problem collections:

• were often hard to discover and obtain.
System development and system evaluations typically rely on a small set of test problems, depending on the collections of problems available to the researcher.
• needed to be transformed to the syntax of the ATP system being considered.
The problem format used in a collection may not be appropriate for the desired purpose, and a comparatively large effort is required just to make the problems locally usable (which in practice often means that such a collection of problems is simply ignored).
• were often limited in scope and size.
The problems used are often homogeneous, and thus cannot be used for broad testing of the capabilities of the ATP system under consideration. If there are too few problems, statistically significant testing is not possible.
• may be outdated.
The problems may insufficiently reflect the current state-of-the-art in ATP research.
• were sometimes designed and tuned (regarding formula selection, formula ordering, and formula structure) for a particular ATP system.
Using a collection designed and tuned for a particular ATP system may lead to biases in results.
• provided no indication of the significance or difficulty of the problems.
The significance and difficulty of a problem, with respect to the current state-of-the-art in ATP systems, is hard to assess by newcomers to the field. Existing test problems are often not adequate anymore (e.g., Schubert's Steamroller [Sti86]), while others may be solvable only with specialized techniques (e.g., LIM+ [Ble90]), and therefore are much too hard to start with.
• were inconsistent in their presentation of equally named problems.
Many copies and variants of the same "original" problem may exist in different collections. This means that unambiguous identification of problems, and therefore a clear interpretation of performance figures for given problems, is difficult.
• were usually undocumented.
It is hard to obtain information on problem semantics, the original problem source, and the particular style of axiomatization. This also contributes to the problem of unambiguous problem identification.
• were almost always unserviced.
They do not provide a mechanism for adding new problems or correcting errors in existing problems, and cannot be used to electronically distribute new and corrected problems to the ATP community. This in turn perpetuates the use of old and erroneous problems.
• provided no guidelines for their use.
Quite often, inadequate system evaluations are performed. As a consequence, results that provide little indication of the system properties are reported.

The problem of meaningfully interpreting results can be even worse than indicated. A few problems may be selected and hand-tuned (formulae arranged in a special way, irrelevant formulae omitted, lemmas added in, etc.) specifically for the ATP system being tested. The presentation of a problem can significantly affect the nature of the problem, and changing the formulae clearly makes a different problem altogether. Nevertheless, the problem may be referenced under the same name as it was presented elsewhere. As a consequence the experimental results reveal little. Some researchers avoid this ambiguity by listing the formulae explicitly, but obviously this usually cannot be done for a large number of problems or for large individual problems. The only satisfactory solution to these issues is a common and stable library of problems. The TPTP is such a library.

### What is Required?

The goal for building the TPTP has been to overcome previous drawbacks, and to centralize the burden of problem collection and maintenance. The TPTP tries to address all relevant issues. In particular, the TPTP:
• is easy to discover and obtain.
Awareness of the TPTP is assured by extensive formal and informal announcements. The TPTP is available online, and is thus easily available to the research community.
• is easy to use.
Problems are presented in a specifically designed, easy to understand format. Automatic conversion to other known formats is also provided, thus eliminating the necessity for any other transcription.
• spans a diversity of subject matters.
This reduces biases in the development and testing of ATP systems, which arise from the use of a limited scope of problems. It also provides an overview of the domains in which ATP systems are used.
• is large enough for statistically significant testing.
In contrast to common practice, an ATP system should be evaluated over a large number of problems, rather than a small set of judiciously selected examples. The large size of the TPTP makes this possible.
• is comprehensive.
The TPTP contains most problems known to the community. There is no longer a need to look elsewhere.
• is up-to-date.
As new problems appear in the literature and elsewhere (see Section Inside the TPTP), they are added to the TPTP as soon as possible.
• is independent of any particular ATP system.
The problems are arranged so as to be modular and human-readable, rather than arranged for a particular ATP system.
• contains problems varying in difficulty, with a difficulty rating for each problem.
The difficulty of problems in the TPTP ranges from very simple problems through to open problems. This allows all interested researchers, from newcomers to experts, to rely on the same problem library. The difficulty ratings provided with each problem are important for several reasons:
1. It simplifies problem selection according to the user's intention.
2. It allows the quality of an ATP system to be judged.
3. Over the years, changes in the problem ratings will provide an indicator of the advancement in ATP.
• provides statistics for each problem and for the library as a whole.
This provides information about the syntactic nature of the problems.
• has an unambiguous naming scheme.
This provides unambiguous problem reference, and makes the comparison of results meaningful. See Section  Problem and Axiomatization Naming for details.
• is well structured and documented.
This allows effective and efficient use of the library. Useful background information, such as an overview of ATP application domains, is provided.
• documents each problem.
This contributes to the unambiguous identification of each problem.
• provides a mechanism for adding new problems.
The TPTP contains standard axiomatizations that can be used in new problems. This simplifies the construction of new problems. A template is provided for submission of new problems. The TPTP is thus a channel for making new problems available to the community, in a simple and effective way.
• provides a mechanism for correcting errors in existing problems.
All errors, noticed by the developers or reported by users, are corrected. Patched TPTP releases are made regularly.
• provides guidelines for its use in evaluating ATP systems.
A standard library of problems together with evaluation guidelines makes reported results meaningful and reproducible by others. This will in turn simplify and improve system comparisons, and allow ATP researchers to accurately gauge their progress.

The development of the TPTP problem library is an ongoing project, with the aim of providing all of the desired properties.

Current Limitations of the TPTP.
The current release of the TPTP library is limited to problems expressed in first-order logic. There are no problems for induction, or for non-classical theorem proving. However, see Sections Current Activities and Further Plans for upcoming and planned extensions.

## Inside the TPTP

Scope.
The TPTP contains problems in classical logic with an interpreted equality symbol. Release v6.0.0 of the TPTP contains 12933 abstract problems, which result in 20306 ATP problems (due to alternative presentations, see Sections Problem Versions and Problem Generators). There are 2306 THF abstract problems, which result in 3025 THF ATP problems; 1367 TFF abstract problems, which result in 1523 TFF ATP problems; 4979 FOF abstract problems, which result in 7923 FOF ATP problems; and 6045 CNF abstract problems, which result in 7835 CNF ATP problems. The TPTP documents OverallSynopsis, THFSynopsis, TFFSynopsis, FOFSynopsis, and CNFSynopsis provide some overall statistics about the TPTP.

The problems in the TPTP are syntactically diverse, as is indicated by the ranges of the values in the tables. The problems in the TPTP are also semantically diverse, as is indicated by the range of domains that are covered. The problems are grouped into domains, covering topics in the fields of logic, mathematics, computer science, engineering, social sciences, and others. The domains are presented and discussed in Section The TPTP Domain Structure.

Sources
The problems have been collected from various sources. The two principal sources have been existing electronic problem collections and the ATP literature. Other sources include logic programming, mathematics, puzzles, and correspondence with ATP researchers. Many people and organizations have contributed towards the TPTP: the foundations of the TPTP were laid with David Plaisted's SPRFN collection; many problems were taken from Argonne National Laboratory's ATP problem library (special thanks to Bill McCune here); Art Quaife provided several hundred problems in set theory and algebra; the Journal of Automated Reasoning, CADE Proceedings, and Association for Automated Reasoning Newsletters have provided a wealth of material; smaller numbers of problems have been provided by a number of further contributors (see the Acknowledgements).

Releases
The TPTP is managed in the manner of a software product, in the sense that fixed releases are made. Each release of the TPTP is identified by a release number, in the form vVersion.Edition.Patch level. The Version number enumerates major new releases of the TPTP, in which important new features have been added. The Edition number is incremented each time new problems are added to the current version. The Patch level is incremented each time errors, found in the current edition, are corrected. All non-trivial changes are recorded in a history file, as well as in the file for an affected problem.
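The ordering implied by this numbering scheme can be sketched in a few lines of Python (the helper is invented for illustration, and is not part of any TPTP tooling):

```python
import re

def parse_release(identifier: str) -> tuple[int, int, int]:
    """Parse a TPTP release identifier 'vVersion.Edition.Patch' into a tuple."""
    match = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", identifier)
    if match is None:
        raise ValueError(f"Not a TPTP release identifier: {identifier!r}")
    return tuple(int(part) for part in match.groups())

# Tuples compare lexicographically, so a later release compares greater:
# a new edition outranks any patch of the previous edition, and a new
# version outranks any edition of the previous version.
assert parse_release("v6.0.0") == (6, 0, 0)
assert parse_release("v5.4.0") < parse_release("v6.0.0")
```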

### The TPTP Domain Structure

This section provides the structure according to which the problems are grouped into domains. Some information about the domains is also given.

An attempt has been made to classify the totality of the TPTP problems in a systematic and natural way. The resulting domain scheme reflects the natural hierarchy of scientific domains, as presented in standard subject classification literature. The current classification is based mainly on the Dewey Decimal Classification (DDC) [CB+89] and the Mathematics Subject Classification (MSC) [MSC92] used for the Mathematical Reviews by the American Mathematical Society. Five main fields are defined: logic, mathematics, computer science, engineering, and other. Each field contains further subdivisions, called domains. Each domain is identified by a three-letter mnemonic. These mnemonics are also part of the TPTP problem naming scheme (see Section Problem and Axiomatization Naming). The TPTP domains constitute the basic units of the classification. The full classification scheme is shown in Figure The Domain Structure of the TPTP, and the numbers of abstract problems, problems, and generic problems in each domain are shown in the TPTP document OverallSynopsis.
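As a sketch of how the mnemonics appear in problem names (a simplified reading of the scheme in Section Problem and Axiomatization Naming; the helper and field names are invented): in a file name such as GRP001-1.p, GRP is the domain mnemonic, 001 the abstract problem number, '-' the form indicator ('-' CNF, '+' FOF, '_' TFF, '^' THF), and 1 the version number.

```python
import re

# Simplified reading of the TPTP problem naming scheme, for illustration only.
FORMS = {"-": "CNF", "+": "FOF", "_": "TFF", "^": "THF"}
NAME_RE = re.compile(r"([A-Z]{3})(\d{3})([-+_^])(\d+)(?:\.(\d{3}))?\.p")

def parse_problem_name(filename: str) -> dict:
    """Decompose a problem file name into its naming-scheme components."""
    match = NAME_RE.fullmatch(filename)
    if match is None:
        raise ValueError(f"Not a TPTP problem file name: {filename!r}")
    domain, number, form, version, size = match.groups()
    return {"domain": domain, "number": number,
            "form": FORMS[form], "version": int(version), "size": size}

assert parse_problem_name("GRP001-1.p")["form"] == "CNF"
assert parse_problem_name("SET014+4.p")["domain"] == "SET"
```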

• Logic
  • Combinatory logic COL
  • Logic calculi LCL
  • Henkin models HEN
• Mathematics
  • Set theory SET, SEU, and SEV
  • Graph theory GRA
  • Algebra
    • Relation algebra REL
    • Boolean algebra BOO
    • Robbins algebra ROB
    • Left distributive LDA
    • Lattices LAT
    • Quantales QUA
    • Kleene algebra KLE
    • Groups GRP
    • Rings RNG
    • Real Algebra RAL
    • Fields FLD
    • Homological algebra HAL
    • General algebra ALG
  • Number theory NUM
  • Topology TOP
  • Analysis ANA
  • Geometry GEO
  • Category theory CAT
• Computer science
  • Computing theory COM
  • Knowledge representation KRS
  • Natural Language Processing NLP
  • Planning PLA
  • Agents AGT
  • Commonsense Reasoning CSR
  • Semantic Web SWB
  • Data Structures DAT
  • Software creation SWC
  • Software verification SWV and SWW
• Science and Engineering
  • Biology BIO
  • Hardware creation HWC
  • Hardware verification HWV
  • Medicine MED
  • Processes PRO
  • Products PRD
• Social sciences
  • Social Choice Theory SCT
  • Management MGT
  • Geography GEG
• Arts and Humanities
  • Philosophy PHI
• Other
  • Arithmetic ARI
  • Syntactic SYN and SYO
  • Puzzles PUZ
  • Miscellaneous MSC

The Domain Structure of the TPTP.

A brief description of the domains, with a non-ATP reference for a general introduction and a generic ATP reference, is given below. For each domain, appropriate DDC and MSC numbers are also given:

• AGT Agents.
An agent is an autonomous software component of a computer program, typically designed to act intelligently and communicate with other agents.
Indices: DDC ???; MSC 68T35.
References: General [RN95]; ATP --.
• ALG Algebra.
Algebra is a branch of mathematics concerning the study of structure, relation, and quantity. An algebra is a set with a system of operations defined on it.
Indices: DDC 512; MSC 06XX, 20XX.
References: General [Bou89, BM65, BB70]; ATP --.
• ARI Arithmetic.
Arithmetic problems that do arithmetic computations, but do not reason about arithmetic (see the NUM domain).
Indices: DDC 513; MSC 97FXX.
References: General [Hig10]; ATP [Sim86].
• ANA Analysis.
Analysis is a branch of mathematics concerned with functions and limits. The main parts of analysis are differential calculus, integral calculus, and the theory of functions.
Indices: DDC 515; MSC 26XX.
References: General [Ros90]; ATP [Ble90].
• BIO Biology.
Biology is a natural science concerned with the study of life and living organisms, including their structure, function, growth, evolution, distribution, and taxonomy.
Indices: DDC 570; MSC 92XX.
References: General [RU+13]; ATP [CC+13, CDI13].
• BOO Boolean Algebra.
A Boolean algebra is a set of elements with two binary operations which are idempotent, commutative, and associative. These operations are mutually distributive, there exist universal bounds 0, 1, and there is a unary operation of complementation.
Indices: DDC 511.324, 512.89; MSC 06EXX.
References: General [Whi61, BM65, BB70]; ATP --.
• CAT Category Theory.
A category is a mathematical structure together with the morphisms that preserve this structure.
Indices: DDC 512.55; MSC 18XX.
References: General [Mac71]; ATP [MOW76].
• COL Combinatory Logic.
Combinatory logic is about applying one function to another. It can be viewed as an alternative foundation of mathematics (or, due to its Turing-completeness, as a programming language). More formally, it is a system of two combinators satisfying reflexivity, symmetry, transitivity, and two equality substitution axioms for the implicit function that applies one combinator to another.
Indices: DDC 510.101; MSC 03B40.
References: General [CF58, CHS72, Bar81]; ATP [WM88].
• COM Computing Theory.
Computing theory is a subfield of computer science dealing with theoretical issues such as decidability (whether or not a given problem admits an algorithmic solution), completeness (does an algorithm always find a solution if one exists?), correctness (are only solutions produced?), and computational complexity (the resource requirements of algorithms).
Indices: DDC 004-006; MSC 68XX.
References: General [HU79]; ATP --.
• CSR Commonsense Reasoning.
Commonsense reasoning is the branch of artificial intelligence concerned with replicating human thinking. There are several components to this problem, including: developing adequately broad and deep commonsense knowledge bases; developing reasoning methods that exhibit the features of human thinking, including the ability to reason with knowledge that is true by default, the ability to reason rapidly across a broad range of domains, and the ability to tolerate uncertainty in your knowledge; developing new kinds of cognitive architectures that support multiple reasoning methods and representations.
Indices: DDC 121.3; MSC 68TXX.
References: General [Sha97]; ATP [Lif95, McC59, SME04]
• DAT Data Structures.
A data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently.
Indices: DDC 005.73; MSC 68P05.
References: General [AHU87]; ATP --.
• FLD Field Theory.
A field is a ring (see below) in which the * operation is commutative, and for which there is an identity element in G, and for which each non-identity element of G has an inverse in G.
Indices: DDC 512.32; MSC 12XX.
References: General [Ada82]; ATP [Dra93].
• GEG Geography.
Geography is the study of the Earth and its lands, features, inhabitants, and phenomena.
Indices: DDC 910; MSC 91C99.
References: General [NGS99]; ATP --.
• GEO Geometry.
Geometry is a branch of mathematics that deals with the measurement, properties, and relationships of points, lines, angles, surfaces, and solids.
Indices: DDC 516; MSC 51.
References: General [Tar51, Tar59]; ATP [Qua92].
• GRA Graph Theory.
A graph consists of a finite non-empty set of vertices together with a prescribed set of edges, each edge connecting a pair of vertices.
Indices: DDC 510.09; MSC 05CXX, 68R10.
References: General [Har69, BB70]; ATP --.
• GRP Group Theory.
A group is a set G and a binary operation +:GxG -> G which is associative, and for which there is an identity element in G, and for which each element of G has an inverse in G.
Indices: DDC 512.2; MSC 20
References: General [Bou89, BM65]; ATP [MOW76].
• HAL Homological Algebra.
Homological algebra is an abstract algebra concerned with results valid for many different kinds of spaces. Modules are the basic tools used in homological algebra.
Indices: DDC 512.55; MSC 18XX.
References: General [Wei94]; ATP --.
• HEN Henkin Models.
Henkin models provide a generalized semantics for higher order logics. This leads to a larger class of models and, as a consequence, fewer true sentences. However, in contrast to standard semantics, complete and correct calculi can be found.
Indices: DDC 160; MSC 03CXX.
References: General [Hen50, Leb83]; ATP --.
• HWC Hardware Creation.
Computer hardware is created by inter-connecting logic gates. Hardware creation is used to form a circuit that will transform given input patterns to required output patterns.
Indices: DDC 621.395; MSC 94CXX.
References: General [Hay93]; ATP [WW83].
• HWV Hardware Verification.
Hardware verification is used to ensure that a previously designed circuit performs the desired transformation of input patterns to required output patterns. One approach is to check the performance of the circuit for every possible combination of given inputs. Other techniques are also used.
Indices: DDC 621.395; MSC 94CXX.
References: General [Hay93]; ATP [Woj83].
• KLE Kleene Algebra.
A Kleene Algebra is a bounded distributive lattice with an involution satisfying De Morgan's laws, and the inequality x∧−x ≤ y∨−y. Alternatively, a Kleene Algebra is an algebraic structure that generalizes the operations known from regular expressions.
Indices: DDC 512.74; MSC 06A99
References: General [Koz90, BMS03]; ATP [HS07].
• KRS Knowledge Representation.
Knowledge Representation is concerned with writing down descriptions that can be manipulated by a machine.
Indices: DDC 006.3; MSC 68T30.
References: General [BL85, CM87]; ATP --.
• LAT Lattice Theory.
A lattice is a set of elements, with two binary operations which are idempotent, commutative, and associative, and which satisfy the absorption law.
Indices: DDC 512.865; MSC 06BXX.
References: General [BM65]; ATP [McC88].
• LCL Logic Calculi.
A logic calculus defines axioms and rules of inference that can be used to prove theorems.
Indices: DDC 511.3; MSC 03XX.
References: General [Luk63]; ATP [MW92].
• LDA Left Distributive Algebra.
LD-algebras are related to large cardinals. Under a very strong large cardinal assumption, the free-monogenic LD-algebra can be represented by an algebra of elementary embeddings. Theorems about this algebra can be proven from a small number of properties, suggesting the definition of an embedding algebra.
Indices: DDC 512; MSC 20N02, 03E55, 08B20
References: General --; ATP [Jec93].
• LIN Linear Algebra.
Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces.
Indices: DDC 512.5; MSC 15Axx.
References: General [LH+01]; ATP --.
• MED Medicine.
The science of diagnosing and treating illness, disease, and injury.
Indices: DDC 610; MSC --.
References: General [LH+01]; ATP [HLB05].
• MGT Management.
Management is the study of systems, and their use and production of resources.
Indices: DDC 302-303; MSC 90XX.
References: General --; ATP [PM94, PB+94].
• MSC Miscellaneous.
A collection of problems which do not fit well into any other domain.
• NLP Natural Language Processing.
Natural language processing considers the automated generation and comprehension of languages used by humans.
Indices: DDC 006.3; MSC 68T50
References: General [Obe89i, GWS86]; ATP [Bos00].
• NUM Number Theory.
Number theory is the study of integers and their properties.
Indices: DDC 512.7; MSC 11YXX.
References: General [HW92]; ATP [Qua92].
• PHI Philosophy.
Philosophy is the study of general and fundamental problems, such as those connected with reality, existence, knowledge, values, reason, mind, and language.
Indices: DDC 100; MSC --.
References: General [TE99]; ATP --.
• PLA Planning.
Planning is the process of determining the sequence of actions to be performed by an agent, to reach a specified desired state from a specified initial state.
Indices: DDC 006.3; MSC 68T99.
References: General [AK+91]; ATP [Pla81, Pla82].
• PRO Processes.
Process information includes production scheduling, process planning, workflow, business process reengineering, simulation, process realization, process modeling, and project management.
Indices: DDC 670; MSC 93-XX.
References: General [Sch99]; ATP [Rei01, BG05].
• PRD Products.
A product is anything that can be offered to a market that might satisfy a want or need.
Indices: DDC ???; MSC ??.
References: General [KA+06]; ATP --.
• QUA Quantales.
Quantales are partially ordered algebraic structures that generalize locales (point free topologies) as well as various lattices of multiplicative ideals from ring theory and functional analysis. Quantales are sometimes referred to as complete residuated semigroups.
Indices: DDC 512; MSC 06F07.
References: General [Mul01, Con71]; ATP --.
• PUZ Puzzles.
Puzzles are designed to test the ingenuity of humans.
Indices: DDC 510, 793.73; MSC --.
References: General [Car86, Smu78, Smu85]; ATP --.
• RAL Real Algebra.
Real algebra is the study of "real" objects such as real rings, real places, and real varieties.
Indices: DDC 512; MSC 13J30.
References: General [Lam84]; ATP --.
• REL Relation Algebra.
A relation algebra is a residuated Boolean algebra supporting an involutary unary operation called converse. The motivating example of a relation algebra is the algebra 2^X^2 of all binary relations on a set X, with RoS interpreted as the usual composition of binary relations.
Indices: DDC 512; MSC 03G15, 06D99
References: General [SS93, Mad06]; ATP [HS08].
• RNG Ring Theory.
A ring is a group (see above) in which the binary operation is commutative, with an associative and distributive operation *:GxG -> G for which there is an identity element in G.
Indices: DDC 200; MSC -.
References: General [Bou89, BB70]; ATP [MOW76].
• ROB Robbins Algebra.
The Robbins Algebra domain revolves around the question "Is every Robbins algebra Boolean?". Most of the problems in this domain identify conditions that make a near-Boolean algebra Boolean.
Indices: DDC 512; MSC 03G15.
References: General [HMT71]; ATP [Win90].
• SCT Social Choice Theory.
Social choice theory is a theoretical framework for aggregating individual interests, values, or welfares into a collective decision.
Indices: DDC 330.1; MSC --.
References: General [Arr63]; ATP [Nip09].
• SET, SEU, and SEV Set Theory.
Classically, a set is a totality of certain definite, distinguishable objects of our intuition or thought - called the elements of the set. Due to paradoxes that arise from such a naive definition, mathematicians now regard the notion of a set as an undefined, primitive concept [How72].
Indices: DDC 511.322, 512.817; MSC 03EXX, 04XX.
References: General [Neu25, Qui69]; ATP [Qua92].
• SWB Semantic Web.
Semantic Web is a group of methods and technologies to allow machines to understand the meaning – or "semantics" – of information on the World Wide Web.
Indices: DDC ??.?; MSC --.
References: General [BHL01]; ATP [HV06].
• SWC Software Creation.
Software creation is used to form a computer program that meets given specifications.
Indices: DDC ???.?; MSC 68N30.
References: General --; ATP --.
• SWV and SWW Software Verification.
Software verification formally establishes that a computer program does the task it is designed for.
Indices: DDC 005.14; MSC 68Q60.
References: General --; ATP [WO+92, MOW76].
• SYN and SYO Syntactic.
Syntactic problems have no obvious semantic interpretation.
Indices: DDC --; MSC --.
References: General [Chu56]; ATP [Pel86].
• TOP Topology.
Topology is the study of properties of geometric configurations (e.g., point sets) which are unaltered by elastic deformations (homeomorphisms, i.e., functions that are 1-1 mappings between sets, such that both the function and its inverse are continuous).
Indices: DDC 514; MSC 46AXX.
References: General [Kel55, Mun75]; ATP [WM89].

### Problem Versions and Standard Axiomatizations

There are often many ways to formulate a problem for presentation to an ATP system. Thus, in the TPTP, there are often alternative presentations of a problem. The alternative presentations are called versions of the underlying abstract problem. As the problem versions are the objects that ATP systems must deal with, they are referred to simply as problems, and the abstract problems are referred to explicitly as such. Each problem is stored in a separate physical file. In the TPTP, the coarsest-grained differentiation between problem versions is whether the problem is presented in FOF or CNF. Within a given presentation, the primary reason for different versions of an abstract problem is the use of different axiomatizations. This issue is discussed below. A secondary reason is different formulations of the theorem to be proven, e.g., different clausal forms of a FOF problem.

#### Different Axiomatizations
Commonly, many different axiomatizations of a theory exist. By using different axiomatizations of a particular theory, different versions of a problem can be formed.

In the TPTP an axiomatization is standard if it is a complete axiomatization of an established theory, no lemmas have been added, and it is not designed specifically to be suited or ill-suited to any ATP system, calculus, or control strategy. A problem version is standard if it uses a standard axiomatization. (Note: A standard axiomatization may be redundant, because some axioms are dependent on others. In general, it is not known whether or not an axiomatization is minimal, and thus the possibility of redundancy in standard axiomatizations must be tolerated.) In the TPTP, standard axiomatizations are kept in separate axiom files, and are included in problems as appropriate. Sets of specialization axioms may be used to extend or constrain an axiomatization. Specialization axioms are also kept in separate axiom files.

Within the ATP community, some problems have been created with non-standard axiomatizations. A non-standard axiomatization may be formed by modifying a standard axiomatization: the standard axiomatization may be reduced (i.e., axioms are removed), producing an incomplete axiomatization, or it may be augmented (i.e., lemmas are added), producing a redundant axiomatization. Incomplete and redundant axiomatizations are typically used to find a proof of a conjecture (based on the axiomatization) using a particular ATP system. An axiomatization may also be non-standard because it is biased, i.e., designed specifically to be suited or ill-suited to some ATP system, calculus, or control strategy. A problem version is incomplete, redundant, or biased if its axiomatization is. In the TPTP, for each incomplete, redundant, and biased problem, a new version of the problem with a standard axiomatization is usually created. Finally, an axiomatization may be non-standard because it does not capture any established theory, i.e., a standard axiomatization does not exist, but the axioms are not biased. A problem version with such an axiomatization is especial. Typically, the axioms in an especial problem are specific to that problem, and are not used in any other problem.

In any 'real' application of an ATP system, a standard or especial problem version would typically be used, at least initially. The use of standard axiomatizations is particularly desirable.

#### Equality Axiomatization
In the TPTP, equality is represented in infix notation, using = and != for equality and inequality. An inequality has the same meaning as a negated equality. If equality is present in a problem, axioms of equality are not provided; it is assumed that equality reasoning is built in to every ATP system. If equality axioms are required by an ATP system they can be added using the tptp2X utility (see Section The TPTP2X Utility). If any axioms are added when the problems are submitted to an ATP system, then the addition must be explicitly noted in reported results (see Section Using the TPTP).

### Problem Generators

Some abstract problems have a generic nature, and particular instances of the abstract problem are formed according to some size parameter(s). An example of a generic problem is the N-queens problem: place N queens on an N by N chess board such that no queen attacks another. The formulae for this problem can be generated automatically for any size N >= 2. Note that satisfiability may depend on the problem size.
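Such generation is just a short loop over the size parameter. The sketch below (in Python, purely illustrative; the TPTP's own generator files contain Prolog code, as described later) emits propositional TPTP cnf formulae for the N-queens problem. The atom names (q_I_J) and formula names (queens_K) are invented for the example.

```python
def queens_clauses(n):
    """CNF clauses for N-queens, as lists of signed propositional atoms."""
    q = lambda i, j: f"q_{i}_{j}"  # queen on row i, column j
    clauses = []
    # At least one queen in each row.
    for i in range(1, n + 1):
        clauses.append([q(i, j) for j in range(1, n + 1)])
    # No two queens in the same row.
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            for l in range(j + 1, n + 1):
                clauses.append([f"~{q(i, j)}", f"~{q(i, l)}"])
    # No two queens in the same column or on the same diagonal.
    for i in range(1, n + 1):
        for k in range(i + 1, n + 1):
            for j in range(1, n + 1):
                for l in range(1, n + 1):
                    if j == l or abs(i - k) == abs(j - l):
                        clauses.append([f"~{q(i, j)}", f"~{q(k, l)}"])
    return clauses

def to_tptp(clauses):
    """Render each clause as a TPTP cnf annotated formula."""
    return [f"cnf(queens_{idx},axiom,( {' | '.join(c)} ))."
            for idx, c in enumerate(clauses, 1)]

for line in to_tptp(queens_clauses(2)):
    print(line)
```

For N = 2 and N = 3 the resulting clause sets are unsatisfiable, while for N >= 4 they are satisfiable, illustrating how satisfiability can depend on the problem size.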

For convenience, the TPTP still contains, where they exist, a theorem instance and a non-theorem instance of each generic problem. The TPTP sizes chosen are non-trivial, unless the problem remains easy up to sizes whose files are very large. In the latter situation the TPTP size is that used in TPTP v1.2.0. The statistics in the TPTP documents Overall synopsis, FOF synopsis, and CNF synopsis take these instances of generic problems into account.

### Problem and Axiomatization Naming

Providing unambiguous names for all problems is necessary in a problem library. A naming scheme has been developed for the TPTP, to provide unique, stable names for abstract problems, problems, axiomatizations, and generators. Files are assigned names according to the schemes depicted in Tables
Problem Naming and Axiom Naming. The DDDNNN combination provides an unambiguous name for an abstract problem or axiomatization. The DDDNNNFV[.SSS] combination provides an unambiguous name for a problem or generator, and the DDDNNNFV combination provides an unambiguous name for a set of axioms. The complete file names are unique within the TPTP. For example, the file GRP135-1.002.p contains the group theory problem number 135, version number 1, generated size 2. Similarly, the file CAT001-0.ax contains the basic category theory axiomatization number 1.

A regular expression for recognizing problem file names is "[A-Z]{3}[0-9]{3}[-+^=_][1-9][0-9]*(\.[0-9]{3})*\.p". A regular expression for recognizing axiom file names is "[A-Z]{3}[0-9]{3}[-+^=_][0-9]+\.ax" (the specialization number starts at 0 for basic axiomatizations, e.g., CAT001-0.ax).
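Assuming Python's re module, these patterns can be checked directly. In the sketch below the axiom pattern's version digits are written as [0-9]+ so that specialization number 0 (basic axiomatizations, such as CAT001-0.ax) is accepted:

```python
import re

# Problem file name pattern, as quoted in the text.
PROBLEM_RE = re.compile(r"[A-Z]{3}[0-9]{3}[-+^=_][1-9][0-9]*(\.[0-9]{3})*\.p")
# Axiom file name pattern, widened to [0-9]+ so that specialization
# number 0 (basic axiomatizations, e.g. CAT001-0.ax) is accepted.
AXIOM_RE = re.compile(r"[A-Z]{3}[0-9]{3}[-+^=_][0-9]+\.ax")

print(bool(PROBLEM_RE.fullmatch("GRP135-1.002.p")))  # True: sized instance
print(bool(PROBLEM_RE.fullmatch("GRP194+1.p")))      # True: FOF problem
print(bool(AXIOM_RE.fullmatch("CAT001-0.ax")))       # True: CNF axiom file
```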

DDD NNN F V .SSS .T
DDD - Domain name abbreviation. The domain names and their abbreviations are listed in Section  The Domain Structure of the TPTP.
NNN - Abstract problem number. It is unique within the domain.
F - Form. ^ for THF, _ for TFF without arithmetic, = for TFF with arithmetic, + for FOF, and - for CNF.
V - Version number. It differentiates between different versions of the abstract problem, as discussed in Section Problem Versions.
SSS - Size parameter(s). These only occur for generated problems, and give the size parameter(s).
T - File name extension. p for problem files, g for generator files.

Problem file naming scheme.

The version numbers used for each abstract problem do not always start at 1, and are not always successive. This is because the same version number is assigned (wherever possible) to all problems that come from the same source, within each domain.

DDD NNN F V .TT
DDD - Domain name abbreviation. The domain names and their abbreviations are listed in Section  The Domain Structure of the TPTP.
NNN - Axiomatization number. It is unique within the domain.
F - Form. ^ for THF, _ for TFF without arithmetic, = for TFF with arithmetic, + for FOF, and - for CNF.
V - Specialization number. It identifies sets of axioms that are used to specialize an axiomatization. Axiomatizations of basic theories are allocated the number 0, and specialization axiom sets are numbered from 1 onwards.
TT - File name extension. An extension ax denotes a file containing axioms specific to a theory. An extension eq denotes a file containing equality substitution axioms for the corresponding theory specific axioms.

Axiom file naming scheme.

### Problem Presentation

The physical presentation of the TPTP problem library is such that ATP researchers can easily use the problems. The syntax of all files is that of Prolog (with some operators defined). This conformance makes it trivial to manipulate the files using Prolog. All information that is not for use by ATP systems is formatted as comments. Comments extend from a % character to the end of the line, or may be block comments within /* ... */ bracketing. The TPTP file format, for problem files and axiom files, has three main sections. The first section is a header section that contains information for the user. This information is not for use by ATP systems (see Section
Using the TPTP). The second section contains include directives for axiom files. The last section contains the formulae that are specific to the problem or axiomatization. TPTP generator files have three sections, different from the problem and axiom files. The header section of generator files is similar to that of problem and axiom files, but with place-holders for information that is filled in based on the size of problem and the formulae generated. Following that comes Prolog source code to generate the formulae, and finally data describing the permissible sizes and the chosen TPTP size for the problem. More details are given in Section  Writing your own Problem Generators.

A full BNF specification of the problem and axiom file formats is provided in the Documents directory of the TPTP (see Section Physical Organization). The tptp2X utility can be used to convert TPTP files to other known ATP system formats (see Section The TPTP2X Utility). A description of the information contained in TPTP files is given below.

The problem files SYN000* are contrived to use most features of the TPTP language, and are thus suitable for testing parsers, etc.

#### The Header Section

This section contains information about the problem, for the user. It is divided into four parts separated by blank lines. The first part identifies and describes the problem. The second part provides information about occurrences of the problem in the literature and elsewhere. The third part gives the problem's ATP status and a table of syntactic measurements made on the problem. The last part contains general information about the problem. An example of a TPTP header, from the problem file GRP194+1.p, is shown in Figure Example header.

```%--------------------------------------------------------------------------
% File     : GRP194+1 : TPTP v4.0.1. Released v2.0.0.
% Domain   : Group Theory (Semigroups)
% Problem  : In semigroups, a surjective homomorphism maps the zero
% Version  : [Gol93] axioms.
% English  : If (F,*) and (H,+) are two semigroups, phi is a surjective
%            homomorphism from F to H, and id is a left zero for F,
%            then phi(id) is a left zero for H.

% Refs     : [Gol93] Goller (1993), Anwendung des Theorembeweisers SETHEO a
% Source   : [Gol93]
% Names    :

% Status   : Theorem
% Rating   : 0.35 v4.0.1, 0.30 v4.0.0, 0.29 v3.7.0, 0.15 v3.5.0, 0.16 v3.4.0,
%            0.21 v3.3.0, 0.14 v3.2.0, 0.18 v3.1.0, 0.11 v2.7.0, 0.17 v2.6.0,
%            0.14 v2.5.0, 0.12 v2.4.0, 0.25 v2.3.0, 0.33 v2.2.1, 0.00 v2.1.0
% Syntax   : Number of formulae    :    8 (   2 unit)
%            Number of atoms       :   21 (   4 equality)
%            Maximal formula depth :    8 (   4 average)
%            Number of connectives :   13 (   0 ~  ;   0  |;   6  &)
%                                         (   1 <=>;   6 =>;   0 <=)
%                                         (   0 <~>;   0 ~|;   0 ~&)
%            Number of predicates  :    3 (   0 propositional; 2-2 arity)
%            Number of functors    :    5 (   3 constant; 0-3 arity)
%            Number of variables   :   15 (   0 singleton;  14 !;   1 ?)
%            Maximal term depth    :    3 (   1 average)
% SPC      : FOF_THM_RFO_SEQ

%--------------------------------------------------------------------------
```
Example of a problem file header (GRP194+1.p).

The % File field.
This field contains three items of information. The first item is the problem's name. The current TPTP release number is given next, followed by the TPTP release in which the problem was released or last bugfixed (i.e., the formulae were changed).

The % Domain field.
The domain field identifies the domain, and possibly a subdomain, from which the problem is drawn (see Section
The TPTP Domain Structure). The domain corresponds to the first three letters of the problem name.

The % Problem field.
This field provides a one-line, high-level description of the abstract problem. In axiom files, this field is called % Axioms, and provides a one-line, high-level description of the axiomatization.

The % Version field.
This field gives information that differentiates this version of the problem from other versions of the problem. The first possible differentiation is the axiomatization that is used. If a specific axiomatization is used, a citation is provided. If the axiomatization is a pure equality axiomatization (uses only the equal/2 predicate) then this is noted too.

The second possible differentiation is the status of the axiomatization, as discussed in Section Problem Versions. There are 16 possibilities:

• empty. Indicates that the axiomatization is standard, i.e., it is complete and has had no lemmas added (it may be redundant).
• Especial. Indicates that the axiomatization does not capture any established theory.
• Biased. Indicates that the axiomatization is designed specifically to be suited or ill-suited to some ATP system, calculus, or control strategy.
• Incomplete. Indicates that the axiomatization is incomplete.
• Reduced > Complete. Indicates that an existing standard axiomatization has had axioms removed, and the result is complete. The existing axiomatization is necessarily redundant.
• Reduced > Incomplete. Indicates that an existing standard axiomatization has had axioms removed, and the result is non-standard due to incompleteness.
• Augmented. Indicates that an existing standard axiomatization has had lemmas added, and the result is non-standard due to redundancy.
• Reduced & Augmented > Complete. Indicates that an existing standard axiomatization has had axioms removed and lemmas added, and the result is complete.
• Reduced & Augmented > Incomplete. Indicates that an existing standard axiomatization has had axioms removed and lemmas added, and the result is non-standard due to incompleteness.
• Incomplete > Reduced > Incomplete. Indicates that an existing incomplete axiomatization has had axioms removed, and the result is non-standard due to incompleteness.
• Incomplete > Augmented > Complete. Indicates that an existing incomplete axiomatization has had axioms added, and the result is complete.
• Incomplete > Augmented > Incomplete. Indicates that an existing incomplete axiomatization has had lemmas added, and the result is non-standard due to incompleteness.
• Incomplete > Reduced & Augmented > Complete. Indicates that an existing incomplete axiomatization has had axioms removed and lemmas added, and the result is complete.
• Incomplete > Reduced & Augmented > Incomplete. Indicates that an existing incomplete axiomatization has had axioms removed and lemmas added, and the result is non-standard due to incompleteness.
• Especial > Reduced > Especial. Indicates that an especial axiomatization has had axioms removed, and the result remains especial.
• Especial > Augmented > Especial. Indicates that an especial axiomatization has had axioms added, and the result remains especial.
The third possible differentiation between problem versions is in the theorem formulation. Variations in the theorem formulation are noted in a Theorem formulation entry, e.g.,
```% Version  : [McCharen, et al., 1976] (equality) axioms.
%            Theorem formulation : Explicit formulation of the commutator.
```

The % English field.
This field provides a full description of the problem, if the one-line description in the % Problem field is too terse.

The % Refs field.
This field provides a list of abbreviated references for material in which the problem has been presented. Other relevant references are also listed. The reference keys identify BibTeX entries in the Bibliography.bib file supplied with the TPTP.

The % Source field.
The problems in the TPTP have been (physically) obtained from a variety of sources, both hardcopy and electronic. In this field the source of the problem is acknowledged, usually as a citation. If the problem was sourced from an existing problem collection then the collection name is given in [ ] brackets. The problem collections used thus far are:

• ANL - the Argonne National Laboratory library [ANL]
• OTTER - the collection distributed with the Otter ATP system [McC98]
• Quaife - Art Quaife's collection of set theory (based) problems [Qua92]
• ROO - the problems used to test the ROO ATP system [LM92]
• SPRFN - the collection distributed with the SPRFN ATP system [SPRFN]
• TPTP - the problem first appeared in the TPTP
• SETHEO - the collection used to test the SETHEO ATP system [LS+92]
The % Names field.
Problems that have appeared in other problem collections or in the literature often have names that are known in the ATP community. This field lists all such names known to us for the problem, along with the source of the name. If the source is an existing problem collection then the collection name is cited, as in the % Source field. If the source of a name is a paper then a citation is given. If a problem is not given a name in a paper then ``-'' is used as the known name and a citation is given. Problems which first appeared in the TPTP have source TPTP, and no other name. In generator files all known names for instances of the abstract problem are listed, with the instance size given before the source. A reverse index, from known names to TPTP names, is distributed with the TPTP (see Section Physical Organization).

The % Status field.
This field gives the ATP status of the problem, according to the SZS problem status ontology. For THF, TFF, and FOF problems it is one of the following values. For THF problems the default semantics is Henkin with extensionality and choice. If the status is different for other semantics, this is noted on lines after the % Status field.

• Theorem - Every model of the axioms (and other non-conjecture formulae, e.g., hypotheses and lemmas) is a model of all the conjectures, including the case where the axioms and hypotheses have no models.
• CounterSatisfiable - Some models of the axioms exist, and at least one of them is a model of the negation of at least one of the conjectures.
• Satisfiable - There are no conjectures, and some interpretations are models of the axioms.
• Unsatisfiable - There are no conjectures, and no interpretations are models of the axioms.
• Unknown - The problem has never been solved by an ATP system.
• Open - The abstract problem has never been solved.
For CNF problems it is one of:
• Unsatisfiable - No interpretations are models of the clauses.
• Satisfiable - Some interpretations are models of the clauses.
• Unknown - The problem has never been solved by an ATP system.
• Open - The abstract problem has never been solved.
The % Rating field.
This field gives the difficulty of the problem, measured relative to state-of-the-art ATP systems. It is a comma-separated list of ratings, each followed by the TPTP release number in which the rating was assigned. The rating is a real number in the range 0.0 to 1.0, where 0.0 means that all state-of-the-art ATP systems can solve the problem (i.e., the problem is easy), and 1.0 means no state-of-the-art ATP system can solve the problem (i.e., the problem is hard). If no rating has been assigned, a ? is given.
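As a small illustration, the rating history in such a field can be split into (rating, release) pairs. The helper below is hypothetical, assuming only the format described above, and is exercised on the start of the GRP194+1.p rating line shown earlier:

```python
import re

def parse_rating(field):
    """Split a % Rating field value into (rating, release) pairs."""
    pairs = re.findall(r"([01]\.[0-9]+)\s+v([0-9.]+)", field)
    return [(float(r), v) for r, v in pairs]

history = parse_rating("0.35 v4.0.1, 0.30 v4.0.0, 0.29 v3.7.0")
print(history[0])  # (0.35, '4.0.1')
```

An unrated problem's "?" simply yields an empty list.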

The % Syntax field.
This field lists various syntactic measures of the problem's formulae or clauses. The measures for THF problems are:

• the number of formulae, the number of unit formulae, the number of type formulae, the number of definition formulae,
• the number of atoms, the number of equality atoms, the number of variable atoms,
• the maximal and average formula depth,
• the number of logical connectives, the numbers of each type of logical connective, the number of type connectives, the numbers of each type of type connective,
• the number of distinct (non-variable) symbols, the number of typed symbols, the number of defined symbols,
• the number of distinct variables, the number of singleton variables (variables that appear only once in a formula), the numbers of universally, existentially, and lambda quantified variables, the numbers of typed and defined variables, the numbers of dependent product and sum type quantified variables, the numbers of variables under choice and description binders.
The measures for TFF problems are:
• the number of formulae, the number of unit formulae, the number of type formulae,
• the number of atoms, the number of equality atoms,
• the maximal and average formula depth,
• the number of logical connectives, the numbers of each type of logical connective, the number of type connectives, the numbers of each type of type connective,
• the number of distinct predicate symbols, the number of propositional symbols, the minimal and maximal predicate symbol arities,
• the number of distinct function symbols, the number of constants, the minimal and maximal functor arities,
• the number of distinct variables, the number of singleton variables, the numbers of universally and existentially quantified variables,
• the maximal and average term depth (with constants and variables having depth 1).
For TFF problems containing arithmetic, additionally:
• the number of arithmetic symbols, the numbers of arithmetic predicates, functions, and constants.
For TFF problems containing polymorphism, additionally:
• the number of polymorphic types.
The measures for FOF problems are:
• the number of formulae, the number of unit formulae,
• the number of atoms, the number of equality atoms,
• the maximal and average formula depth,
• the number of logical connectives, the numbers of each type of logical connective,
• the number of distinct predicate symbols, the number of propositional symbols, the minimal and maximal predicate symbol arities,
• the number of distinct function symbols, the number of constants, the minimal and maximal functor arities,
• the number of distinct variables, the number of singleton variables, the numbers of universally and existentially quantified variables,
• the maximal and average term depth (with constants and variables having depth 1).
The measures for CNF problems are:
• the number of clauses, the number of non-Horn clauses, the number of unit clauses, the number of range restricted clauses,
• the number of literals, the number of equality literals,
• the maximal and average clause size (measured by number of literals),
• the number of distinct predicate symbols, the number of propositional symbols, the minimal and maximal predicate symbol arities,
• the number of distinct function symbols, the number of constants, the minimal and maximal functor arities,
• the number of distinct variables, the number of singleton variables,
• the maximal and average term depth.
See Section Inside the TPTP for a summary of this information over the entire TPTP.
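A few of the CNF measures above can be sketched as follows, over clauses represented as lists of literal strings. This representation and the function are assumed for illustration only; they are not how the TPTP's own tools compute the % Syntax fields.

```python
def cnf_measures(clauses):
    """Compute a subset of the TPTP-style CNF syntactic measures."""
    literals = [lit for c in clauses for lit in c]
    sizes = [len(c) for c in clauses]
    return {
        "clauses": len(clauses),
        "unit_clauses": sum(1 for c in clauses if len(c) == 1),
        # A clause is non-Horn if it has more than one positive literal.
        "non_horn": sum(1 for c in clauses
                        if sum(1 for lit in c if not lit.startswith("~")) > 1),
        "literals": len(literals),
        "equality_literals": sum(1 for lit in literals if "=" in lit),
        "max_clause_size": max(sizes),
        "avg_clause_size": sum(sizes) / len(sizes),
    }

stats = cnf_measures([["p(X)", "q(X)"], ["~p(a)"], ["X = Y", "~q(X)"]])
print(stats["clauses"], stats["unit_clauses"], stats["max_clause_size"])
```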

The % SPC field.
This field indicates to which Specialist Problem Class (SPC) the problem belongs. SPCs are sets of problems that have similar syntactic characteristics, so that the problems are reasonably homogeneous with respect to ATP systems' performances on them. SPCs are used for calculating the difficulty rating [SS01], and for recommending systems to use on problems [SS99]. The characteristics used to build the SPCs are:

• The form of the problem: TH1, TH0, TF1, TF0, FOF, CNF
• The known or expected logical (SZS) status: THM (theorem), CAX (contradictory axioms), CSA (countersatisfiable), UNS (unsatisfiable), SAT (satisfiable), UNK (unknown), OPN (open)
• The effective order of the logic (for FOF and CNF problems): PRP (propositional), EPR (effectively propositional), RFO (really first-order)
• The amount of equality in the problem: NEQ (no equality), EQU (with equality), SEQ (some, but not only, equality), PEQ (pure equality), NUE (non-unit pure equality), UEQ (unit equality)
• The amount of arithmetic (for TFF problems): NAR (no arithmetic), ARI (with arithmetic), SAR (some arithmetic), PAR (pure arithmetic)
• Horness (for CNF problems): HRN (Horn), NHN (non-Horn)
The SPCs are: CNF_UNS_EPR, CNF_UNS_PRP, CNF_UNS_RFO_NEQ_HRN, CNF_UNS_RFO_NEQ_NHN, CNF_UNS_RFO_PEQ_NUE, CNF_UNS_RFO_PEQ_UEQ, CNF_UNS_RFO_SEQ_HRN, CNF_UNS_RFO_SEQ_NHN, CNF_SAT_EPR, CNF_SAT_PRP, CNF_SAT_RFO_EQU_NUE, CNF_SAT_RFO_NEQ, CNF_SAT_RFO_PEQ_UEQ, CNF_UNK_EPR, CNF_UNK_PRP, CNF_UNK_RFO_NEQ_HRN, CNF_UNK_RFO_NEQ_NHN, CNF_UNK_RFO_PEQ_NUE, CNF_UNK_RFO_PEQ_UEQ, CNF_UNK_RFO_SEQ_HRN, CNF_UNK_RFO_SEQ_NHN, CNF_OPN_EPR, CNF_OPN_PRP, CNF_OPN_RFO_NEQ_HRN, CNF_OPN_RFO_NEQ_NHN, CNF_OPN_RFO_PEQ_NUE, CNF_OPN_RFO_PEQ_UEQ, CNF_OPN_RFO_SEQ_HRN, CNF_OPN_RFO_SEQ_NHN, FOF_THM_EPR, FOF_THM_PRP, FOF_THM_RFO_NEQ, FOF_THM_RFO_PEQ, FOF_THM_RFO_SEQ, FOF_CAX_EPR, FOF_CAX_PRP, FOF_CAX_RFO_NEQ, FOF_CAX_RFO_PEQ, FOF_CAX_RFO_SEQ, FOF_CSA_EPR, FOF_CSA_PRP, FOF_CSA_RFO_NEQ, FOF_CSA_RFO_PEQ, FOF_CSA_RFO_SEQ, FOF_UNS_EPR, FOF_UNS_PRP, FOF_UNS_RFO_NEQ, FOF_UNS_RFO_PEQ, FOF_UNS_RFO_SEQ, FOF_SAT_EPR, FOF_SAT_PRP, FOF_SAT_RFO_NEQ, FOF_SAT_RFO_PEQ, FOF_SAT_RFO_SEQ, FOF_UNK_EPR, FOF_UNK_PRP, FOF_UNK_RFO_NEQ, FOF_UNK_RFO_PEQ, FOF_UNK_RFO_SEQ, FOF_OPN_EPR, FOF_OPN_PRP, FOF_OPN_RFO_NEQ, FOF_OPN_RFO_PEQ, FOF_OPN_RFO_SEQ, TF0_THM_EQU_ARI, TF0_THM_EQU_NAR, TF0_THM_NEQ_ARI, TF0_THM_NEQ_NAR, TF0_CAX_EQU_ARI, TF0_CAX_EQU_NAR, TF0_CAX_NEQ_ARI, TF0_CAX_NEQ_NAR, TF0_CSA_EQU_ARI, TF0_CSA_EQU_NAR, TF0_CSA_NEQ_ARI, TF0_CSA_NEQ_NAR, TF0_UNS_EQU_ARI, TF0_UNS_EQU_NAR, TF0_UNS_NEQ_ARI, TF0_UNS_NEQ_NAR, TF0_SAT_EQU_ARI, TF0_SAT_EQU_NAR, TF0_SAT_NEQ_ARI, TF0_SAT_NEQ_NAR, TF0_UNK_EQU_ARI, TF0_UNK_EQU_NAR, TF0_UNK_NEQ_ARI, TF0_UNK_NEQ_NAR, TF0_OPN_EQU_ARI, TF0_OPN_EQU_NAR, TF0_OPN_NEQ_ARI, TF0_OPN_NEQ_NAR, TF1_THM_EQU_ARI, TF1_THM_EQU_NAR, TF1_THM_NEQ_ARI, TF1_THM_NEQ_NAR, TF1_CAX_EQU_ARI, TF1_CAX_EQU_NAR, TF1_CAX_NEQ_ARI, TF1_CAX_NEQ_NAR, TF1_CSA_EQU_ARI, TF1_CSA_EQU_NAR, TF1_CSA_NEQ_ARI, TF1_CSA_NEQ_NAR, TF1_UNS_EQU_ARI, TF1_UNS_EQU_NAR, TF1_UNS_NEQ_ARI, TF1_UNS_NEQ_NAR, TF1_SAT_EQU_ARI, TF1_SAT_EQU_NAR, TF1_SAT_NEQ_ARI, TF1_SAT_NEQ_NAR, TF1_UNK_EQU_ARI, TF1_UNK_EQU_NAR, TF1_UNK_NEQ_ARI, TF1_UNK_NEQ_NAR, TF1_OPN_EQU_ARI, 
TF1_OPN_EQU_NAR, TF1_OPN_NEQ_ARI, TF1_OPN_NEQ_NAR, TH0_THM_EQU_ARI, TH0_THM_EQU_NAR, TH0_THM_NEQ_ARI, TH0_THM_NEQ_NAR, TH0_CAX_EQU_ARI, TH0_CAX_EQU_NAR, TH0_CAX_NEQ_ARI, TH0_CAX_NEQ_NAR, TH0_CSA_EQU_ARI, TH0_CSA_EQU_NAR, TH0_CSA_NEQ_ARI, TH0_CSA_NEQ_NAR, TH0_UNS_EQU_ARI, TH0_UNS_EQU_NAR, TH0_UNS_NEQ_ARI, TH0_UNS_NEQ_NAR, TH0_SAT_EQU_ARI, TH0_SAT_EQU_NAR, TH0_SAT_NEQ_ARI, TH0_SAT_NEQ_NAR, TH0_UNK_EQU_ARI, TH0_UNK_EQU_NAR, TH0_UNK_NEQ_ARI, TH0_UNK_NEQ_NAR, TH0_OPN_EQU_ARI, TH0_OPN_EQU_NAR, TH0_OPN_NEQ_ARI, TH0_OPN_NEQ_NAR, TH1_THM_EQU_ARI, TH1_THM_EQU_NAR, TH1_THM_NEQ_ARI, TH1_THM_NEQ_NAR, TH1_CAX_EQU_ARI, TH1_CAX_EQU_NAR, TH1_CAX_NEQ_ARI, TH1_CAX_NEQ_NAR, TH1_CSA_EQU_ARI, TH1_CSA_EQU_NAR, TH1_CSA_NEQ_ARI, TH1_CSA_NEQ_NAR, TH1_UNS_EQU_ARI, TH1_UNS_EQU_NAR, TH1_UNS_NEQ_ARI, TH1_UNS_NEQ_NAR, TH1_SAT_EQU_ARI, TH1_SAT_EQU_NAR, TH1_SAT_NEQ_ARI, TH1_SAT_NEQ_NAR, TH1_UNK_EQU_ARI, TH1_UNK_EQU_NAR, TH1_UNK_NEQ_ARI, TH1_UNK_NEQ_NAR, TH1_OPN_EQU_ARI, TH1_OPN_EQU_NAR, TH1_OPN_NEQ_ARI, TH1_OPN_NEQ_NAR.

The % Comments field.
This field contains free format comments about the problem, e.g., the significance of the problem, or the reason for creating the problem. If the problem was created using the tptp2X utility (see Section The TPTP2X Utility) then the tptp2X parameters are given.

The % Bugfixes field.
This field describes any bugfixes that have been done to the formulae of the problem. Each entry gives the release number in which the bugfix was done, and attempts to pinpoint the bugfix accurately.

#### The Include Section

The include section contains include directives for TPTP axiom files. An example is shown in Figure Example include section, extracted from the problem file GEO146+1.p.

```%--------------------------------------------------------------------------
%----Include simple curve axioms
include('Axioms/GEO004+0.ax').
%----Include axioms of betweenness for simple curves
include('Axioms/GEO004+1.ax').
%----Include oriented curve axioms
include('Axioms/GEO004+2.ax').
%----Include trajectory axioms
include('Axioms/GEO004+3.ax',[connect_defn,symmetry_of_at_the_same_time]).
%--------------------------------------------------------------------------
```
Example of a problem file include section (adapted from GEO146+1.p).

Full versions of TPTP problems (without include directives) can be created by using the tptp2X utility (see Section The TPTP2X Utility).
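A minimal sketch of such include expansion (not the tptp2X utility itself) might look like this, with axiom files supplied as an in-memory mapping so the example is self-contained:

```python
import re

# Matches include('<file>') and include('<file>',[name1,name2,...]).
INCLUDE_RE = re.compile(r"include\('([^']+)'(?:,\[([^\]]*)\])?\)\.")

def expand_includes(problem_text, files):
    """Replace each include directive with the text of the included file."""
    out = []
    for line in problem_text.splitlines():
        m = INCLUDE_RE.match(line.strip())
        if m:
            out.append(files[m.group(1)])  # splice in the axiom file
        else:
            out.append(line)
    return "\n".join(out)

axioms = {"Axioms/GEO004+0.ax":
          "fof(part_of_refl,axiom,( ! [C] : part_of(C,C) ))."}
problem = "include('Axioms/GEO004+0.ax').\nfof(conj,conjecture,$true)."
print(expand_includes(problem, axioms))
```

A full implementation would read the files from the TPTP directory, and would honour the optional formula-selection list (the second capture group, as in the GEO004+3.ax include above), splicing in only the named formulae.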

A side effect of separating out the axioms into axiom files is that the formula order in the TPTP presentation of problems is typically different from that of any original presentation. This reordering is acceptable because the performance of a decent ATP system should not be very dependent on a particular formula ordering.

#### The Formulae Section

Each logical formula is wrapped in an annotated formula structure of the form language(name,role,formula,source,[useful_info]). The languages currently supported are thf - formulae in typed higher-order form, tff - formulae in typed first-order form, fof - formulae in first order form, and cnf - formulae in clause normal form. The name identifies the formula within the problem. The role gives the user semantics of the formula, one of axiom, hypothesis, definition, assumption, lemma, theorem, corollary, conjecture, negated_conjecture, plain, type, and unknown. axiom formulae are accepted, without proof, as a basis for proving conjectures in THF, TFF, and FOF. In CNF problems axioms are accepted as part of the set whose satisfiability has to be established. There is no guarantee that the axioms of a problem are consistent. hypothesis formulae are assumed to be true for a particular problem, and are used like axioms. definition formulae are used to define symbols, and are used like axioms. assumption formulae can be used like axioms, but must be discharged before a derivation is complete. lemma and theorem formulae have been proven from the axioms, and can be used like axioms, but are redundant wrt the axioms. theorem is also used as the role of proven conjectures, in output. A problem containing a lemma or theorem that is not redundant wrt the axioms is ill-formed. From the user perspective, theorems are more important than lemmas. corollary formulae have been proven from the axioms and a theorem, and can be used like axioms, but are redundant wrt the axioms and theorem. A problem containing a corollary that is not redundant wrt the axioms and theorem is ill-formed. conjecture formulae occur only in THF, TFF, and FOF problems, and are all to be proven from the axiom(-like) formulae. A problem is solved only when all conjectures are proven.
(TPTP problems never contain more than one conjecture, and the creation of problems with more than one conjecture is currently deprecated, because contemporary ATP systems commonly treat multiple conjectures as a disjunction, and will prove only one of them.) negated_conjecture formulae are formed from the negation of a conjecture, typically in FOF to CNF conversion. plain formulae have no special user semantics, and can be used like axioms. type formulae define the types of symbols globally. The unknown role indicates an error situation.
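The head of an annotated formula can be picked apart with a simple pattern. The sketch below extracts only the language, name, and role (full parsing requires the BNF in the Documents directory), and the sample formula is invented for illustration:

```python
import re

# Matches the head of language(name,role,... for the four TPTP languages.
ANNOTATED_RE = re.compile(r"(thf|tff|fof|cnf)\(\s*([A-Za-z0-9_']+)\s*,\s*([a-z_]+)\s*,")

def formula_head(text):
    """Return the language, name, and role of an annotated formula."""
    m = ANNOTATED_RE.match(text)
    if not m:
        raise ValueError("not an annotated formula")
    return {"language": m.group(1), "name": m.group(2), "role": m.group(3)}

head = formula_head("fof(left_zero,conjecture,( ! [X] : multiply(id,X) = id )).")
print(head)  # {'language': 'fof', 'name': 'left_zero', 'role': 'conjecture'}
```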

The syntax for atoms is that of Prolog: variables start with upper case letters, atoms and terms are written in prefix notation, uninterpreted predicates and functors either start with lower case and contain alphanumerics and underscore, or are in 'single quotes'. The language also supports interpreted predicates and functors. These come in two varieties: defined predicates and functors, whose interpretation is specified by the TPTP language, and system predicates and functors, whose interpretation is ATP system specific. Interpreted predicates and functors are syntactically distinct from uninterpreted ones - they are = and !=, or start with a \$, a ", or a digit.

The defined predicates recognized so far are

• \$true and \$false, with the obvious interpretations
• = and != for equality and inequality
• \$distinct, whose arguments are thereby known to be pairwise unequal (but not necessarily unequal to any other constants). \$distinct may be overloaded with different arities. \$distinct is used in only the TFF language.
• The arithmetic predicates described below. The arithmetic predicates are used in only the TFF language.
The defined functors recognized so far are

System predicates and functors are used for interpreted predicates and functors that are available in particular ATP tools. System predicates and functors start with \$\$. The names are not controlled by the TPTP language, so they must be used with caution.

Non-variable symbols can be given a type globally in a formula with role type, in the same format. Types can be constants, a defined type, or built from types using the type mapping connective >, which is right associative. The defined types recognized so far are \$tType - the type of all types, \$o - the Boolean type, \$i - the type of individuals, \$real - the type of reals, \$rat - the type of rationals, and \$int - the type of integers.

The connectives used to build non-atomic formulae are written using short notations. New atomic types are introduced by formulae with the type role, declaring them to be of the type \$tType. The universal quantifier is !, the existential quantifier is ?, and the lambda binder is ^. Quantified formulae are written in the form Quantifier [Variables] : Formula. In the THF, TFF, and FOF languages, every variable in a Formula must be bound by a preceding quantification with adequate scope. Typed Variables are given their type by a :type suffix. The binary connectives are infix | for disjunction, infix & for conjunction, infix <=> for equivalence, infix => for implication, infix <= for reverse implication, infix <~> for non-equivalence (XOR), infix ~| for negated disjunction (NOR), infix ~& for negated conjunction (NAND), and infix @ for application. The only unary connective is prefix ~ for negation. Negation has higher precedence than quantification, which in turn has higher precedence than the binary connectives. No precedence is specified between the binary connectives; brackets are used to ensure the correct association. The binary connectives are left associative.
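
The truth-functional meanings of the less familiar connectives can be sanity-checked with a small Python sketch (plain Boolean functions standing in for the connectives):

```python
from itertools import product

# Boolean functions standing in for the binary connectives.
def xor_(p, q):  return p != q            # <~>  non-equivalence (XOR)
def nor_(p, q):  return not (p or q)      # ~|   negated disjunction (NOR)
def nand_(p, q): return not (p and q)     # ~&   negated conjunction (NAND)
def impl_(p, q): return (not p) or q      # =>   implication

# Check the definitions against their expansions on all four valuations.
for p, q in product([False, True], repeat=2):
    assert xor_(p, q) == ((p or q) and not (p and q))
    assert nor_(p, q) == (not p and not q)
    assert nand_(p, q) == (not p or not q)
```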

The THF and TFF languages also offer term and formula construction functions. For THF there are the conditional formula \$ite_f, and the let-binders \$let_tf and \$let_ff. \$ite_f takes three formulae as arguments, the first of which must be of type \$o, and the second two of which must be of the same type. \$ite_f returns its second argument or third argument if the first argument is true or false, respectively. \$let_tf takes two arguments, the first of which is a (possibly universally quantified) equation, and the second of which is a formula. \$let_ff takes two arguments, the first of which is a (possibly universally quantified) equivalence, and the second of which is a formula. The lefthand side of the equation or equivalence must be a constant or an application of a constant to pairwise distinct variables that appear in the quantification. \$let_tf and \$let_ff return their second argument with every matching occurrence of the lefthand side of the first argument replaced by the righthand side of the first argument, subject to unification of the universally quantified variables.

For TFF there are the conditional term \$ite_t, the conditional formula \$ite_f, the term let-binders \$let_tt and \$let_ft, and the formula let-binders \$let_tf and \$let_ff. \$ite_t takes a formula of type \$o and two terms of the same type as arguments. \$ite_f takes three formulae of type \$o as arguments. \$ite_t and \$ite_f return their second argument or third argument if the first argument is true or false, respectively. \$let_tt takes two arguments, the first of which is a (possibly universally quantified) equation, and the second of which is a term. \$let_ft takes two arguments, the first of which is a (possibly universally quantified) equivalence, and the second of which is a term containing a \$ite_t term. \$let_tf takes two arguments, the first of which is a (possibly universally quantified) equation, and the second of which is a formula. \$let_ff takes two arguments, the first of which is a (possibly universally quantified) equivalence, and the second of which is a formula. The lefthand side of a let_?? equation or equivalence must be flat with pairwise distinct variable arguments that must be bound by the quantification. \$let_tt, \$let_ft, \$let_tf, and \$let_ff return their second argument with every matching occurrence of the lefthand side of the first argument replaced by the righthand side of the first argument, subject to unification of the universally quantified variables.
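
The \$ite semantics, and a much simplified \$let, can be illustrated in Python. This is a deliberately crude sketch: the let helper handles only a constant lefthand side via textual substitution, ignoring quantified lefthand sides and unification entirely.

```python
def ite(cond: bool, then_val, else_val):
    """Semantics of $ite_t/$ite_f: return the second argument if the
    first is true, the third if it is false."""
    return then_val if cond else else_val

def let(lhs: str, rhs: str, body: str) -> str:
    """Crude sketch of a $let binder for a constant lhs: every textual
    occurrence of lhs in body is replaced by rhs. A real implementation
    works on parsed terms and unifies quantified variables; this plain
    string replace can also hit lhs inside longer identifiers."""
    return body.replace(lhs, rhs)
```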

The useful_info field of an annotated formula is optional, and if it is not used then the source field becomes optional. The source field is used to record where the annotated formula came from, and is most commonly a file record or an inference record. A file record stores the name of the file from which the annotated formula was read, and optionally the name of the annotated formula as it occurs in that file - this may be different from the name of the annotated formula itself, e.g., if an ATP system reads in an annotated formula, renames it, and then prints it out. An inference record stores information about an inferred formula. The useful_info field of an annotated formula is a list of arbitrary useful information formatted as Prolog terms, as required for user applications.

An example of a THF formula section, extracted from the problem file LCL633^1.p, is shown in Figure Example THF formulae. An example of a TFF formula section, extracted from the problem file DAT002=1.p, is shown in Figure Example TFF formulae. An example of a FOF formula section, extracted from the problem file GRP194+1.p, is shown in Figure Example FOF formulae. An example of a clause section, extracted from the problem file GRP039-7.p, is shown in Figure Example CNF clauses.

```%------------------------------------------------------------------------------
%----Signature
thf(a,type,(
a: \$tType )).

thf(p,type,(
p: ( a > \$i > \$o ) > \$i > \$o )).

thf(g,type,(
g: a > \$i > \$o )).

thf(e,type,(
e: ( a > \$i > \$o ) > a > \$i > \$o )).

thf(r,type,(
r: \$i > \$i > \$o )).

%----Axioms
thf(positiveness,axiom,(
! [X: a > \$i > \$o] :
( mvalid
@ ( mimpl @ ( mnot @ ( p @ X ) )
@ ( p
@ ^ [Z: a] :
( mnot @ ( X @ Z ) ) ) ) ) )).

thf(g,definition,
( g
= ( ^ [Z: a,W: \$i] :
! [X: a > \$i > \$o] :
( mimpl @ ( p @ X ) @ ( X @ Z ) @ W ) ) )).

thf(e,definition,
( e
= ( ^ [X: a > \$i > \$o,Z: a,P: \$i] :
! [Y: a > \$i > \$o] :
( mimpl @ ( Y @ Z )
@ ( mbox @ r
@ ^ [Q: \$i] :
! [W: a] :
( mimpl @ ( X @ W ) @ ( Y @ W ) @ Q ) )
@ P ) ) )).

%----Conjecture
thf(thm,conjecture,
( mvalid
@ ^ [W: \$i] :
! [Z: a] :
( mimpl @ ( g @ Z ) @ ( e @ g @ Z ) @ W ) )).
%------------------------------------------------------------------------------
```
Example of a THF problem file formulae section (LCL633^1.p).
```%------------------------------------------------------------------------------
tff(list_type,type,(
list: \$tType )).

tff(nil_type,type,(
nil: list )).

tff(mycons_type,type,(
mycons: ( \$int * list ) > list )).

tff(sorted_type,type,(
fib_sorted: list > \$o )).

tff(empty_fib_sorted,axiom,(
fib_sorted(nil) )).

tff(single_is_fib_sorted,axiom,(
! [X: \$int] : fib_sorted(mycons(X,nil)) )).

tff(double_is_fib_sorted_if_ordered,axiom,(
! [X: \$int,Y: \$int] :
( \$less(X,Y)
=> fib_sorted(mycons(X,mycons(Y,nil))) ) )).

tff(recursive_fib_sort,axiom,(
! [X: \$int,Y: \$int,Z: \$int,R: list] :
( ( \$less(X,Y)
& \$greatereq(Z,\$sum(X,Y))
& fib_sorted(mycons(Y,mycons(Z,R))) )
=> fib_sorted(mycons(X,mycons(Y,mycons(Z,R)))) ) )).

tff(check_list,conjecture,(
fib_sorted(mycons(1,mycons(2,mycons(4,mycons(7,mycons(100,nil)))))) )).
%------------------------------------------------------------------------------
```
Example of a TFF problem file formulae section (DAT002=1.p).
```%--------------------------------------------------------------------------
%----Definition of a homomorphism
fof(homomorphism1,axiom,
( ! [X] :
( group_member(X,f)
=> group_member(phi(X),h) ) )).

fof(homomorphism2,axiom,
( ! [X,Y] :
( ( group_member(X,f)
& group_member(Y,f) )
=> multiply(h,phi(X),phi(Y)) = phi(multiply(f,X,Y)) ) )).

fof(surjective,axiom,
( ! [X] :
( group_member(X,h)
=> ? [Y] :
( group_member(Y,f)
& phi(Y) = X ) ) )).

%----Definition of left zero
fof(left_zero,axiom,
( ! [G,X] :
( left_zero(G,X)
<=> ( group_member(X,G)
& ! [Y] :
( group_member(Y,G)
=> multiply(G,X,Y) = X ) ) ) )).

%----The conjecture
fof(left_zero_for_f,hypothesis,
( left_zero(f,f_left_zero) )).

fof(prove_left_zero_h,conjecture,
( left_zero(h,phi(f_left_zero)) )).
%--------------------------------------------------------------------------
```
Example of a FOF problem file formulae section (GRP194+1.p).

```%--------------------------------------------------------------------------
%----Redundant two axioms
cnf(right_identity,axiom,
( multiply(X,identity) = X )).

cnf(right_inverse,axiom,
( multiply(X,inverse(X)) = identity )).

... some clauses omitted here for brevity

cnf(property_of_O2,axiom,
( subgroup_member(X)
| subgroup_member(Y)
| multiply(X,element_in_O2(X,Y)) = Y )).

%----Denial of theorem
cnf(b_in_O2,negated_conjecture,
( subgroup_member(b) )).

cnf(b_times_a_inverse_is_c,negated_conjecture,
( multiply(b,inverse(a)) = c )).

cnf(a_times_c_is_d,negated_conjecture,
( multiply(a,c) = d )).

cnf(prove_d_in_O2,negated_conjecture,
( ~ subgroup_member(d) )).
%--------------------------------------------------------------------------
```
Example of a CNF problem file clause section (GRP039-7.p).

### The TFF Monomorphic Type System (TF0)

• Defined atomic types
The atomic types \$i (individuals), \$o (Booleans), and \$tType are defined. (\$tType is the "type" of atomic types - it is not really a type, and in other places this is known as the "kind" of atomic types. See below for motivation for its existence and usage.) Other defined atomic types are associated with specific theories. In particular, \$int, \$rat, and \$real are atomic types for interpreted arithmetic.

• User atomic type declarations:
User atomic types can (but don't have to) be declared in advance of use, to be of "type" \$tType. Example:
```    tff(food_type,type,
food: \$tType ).

tff(fruit_type,type,
fruit: \$tType ).

tff(list_type,type,
list: \$tType ).  ```
Declaration of atomic types is not required, i.e., atomic types can be introduced on the fly. TPTP problems will have all atomic types declared, so as to provide a check against typos.

• Defined functions and predicates
Defined functions and predicates have preassigned types.
• \$true is of type \$o
• \$false is of type \$o
• Distinct objects in "double quotes" are of type \$i
• = and != are ad-hoc polymorphic over atomic types other than \$o, with result type \$o.
• \$distinct is ad-hoc polymorphic over all atomic types other than \$o, with result type \$o.
• The (atomic) types of numbers are defined in the interpreted arithmetic section.
• The types of the arithmetic predicates and functions are defined in the interpreted arithmetic section.

• Function and predicate type declarations
Every function and predicate symbol has at most one declared type. Example:
```    tff(cons_type,type,
cons: (fruit * list) > list   ).

tff(is_empty_type,type,
isEmpty: list > \$o   ).```
The argument types must be atomic, and cannot be \$o. The range type of a function must be atomic, and cannot be \$o. The range type of a predicate must be \$o. Note that symbols of arity greater than one use the * for a cross-product type. Currying is not possible, i.e., the first example above cannot be written
```    tff(cons_type,type,
cons: fruit > list > list   ). ```

If a symbol's type is declared more than once, and the types are not the same, that's an error.

• Types of variables
Every variable can be given an atomic type at quantification time. Example:
```tff(list_not_empty,axiom,
! [X: fruit,Xs: list] : ~isEmpty(cons(X,Xs))   ).```

• Implicit typing for functions and predicates
If a symbol is used and its type has not been declared, then default types are assumed.
• All untyped predicates get type (\$i * ... * \$i) > \$o.
• All untyped functions get type (\$i * ... * \$i) > \$i.
• All untyped variables get type \$i.
If a symbol's type is declared later to be different from the assumed type, that's an error.
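
The default-typing rules can be sketched as a small Python helper (a hypothetical function, assuming the symbol's kind and arity are known from its first use):

```python
DEFAULT_INDIVIDUAL = "$i"

def default_type(symbol_kind: str, arity: int) -> str:
    """Default TF0 type assumed for an undeclared symbol (sketch).

    symbol_kind is one of 'function', 'predicate', 'variable';
    arity is the number of arguments at the symbol's first use.
    """
    result = "$o" if symbol_kind == "predicate" else DEFAULT_INDIVIDUAL
    if symbol_kind == "variable" or arity == 0:
        return result
    if arity == 1:
        # Unary symbols omit the parentheses around the argument type.
        return f"{DEFAULT_INDIVIDUAL} > {result}"
    args = " * ".join([DEFAULT_INDIVIDUAL] * arity)
    return f"({args}) > {result}"
```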

• Interpretations
An interpretation has a non-empty domain for each atomic type. These domains are pairwise disjoint. It also maps function and predicate symbols to functions and predicates that respect the types of those symbols.

• Translation
A set of well-typed TFF formulae can be translated to an equisatisfiable set of FOF formulae. Each atomic type becomes a new unary predicate in the untyped world.
• All atomic types are inhabited, so a type declaration
```tff(one_type_decl,type,
one_type: \$tType ).```
produces the axiom:
```fof(one_type_inhabited,axiom,
? [X] : one_type(X) ).```
• Domains are pairwise disjoint, so a pair of type declarations
```tff(one_type_decl,type,
one_type: \$tType ).

tff(two_type_decl,type,
two_type: \$tType ).```
can be used to produce the axiom:
```fof(one_two_distinct,axiom,
! [X,Y] :
( ( one_type(X)
& two_type(Y) )
=> X != Y ) ).```
These axioms are not logically necessary, because a model of the FOF formulae without these axioms can be used to construct a model of the TFF formulae. For resolution-based theorem proving they are not needed: in a well-typed formula it is impossible to claim that terms of different types are equal, and every inferred formula is well typed. However, for model generation these axioms are useful because they force terms with different types to be interpreted as different domain elements, i.e., the domain of the FOF model can be divided into subdomains for the different types.
• A function type declaration of the form
```tff(f_type,type,
f: (t1 * ... * tn) > tf  ).```
is translated into:
```fof(f_type,axiom,
! [X1,...,Xn] : tf(f(X1,...,Xn)) ).```
It is unnecessary to have an implication with the antecedent checking the types of the arguments X1,...,Xn because it is impossible to use incorrectly typed arguments in a well-typed formula.
• Predicate type declarations are ignored (!).
• A quantification of the form
`! [X1: t1,...,Xn: tn] : p(X1,...,Xn)`
is translated into:
```! [X1,...,Xn] :
( ( t1(X1)
& ...
& tn(Xn) )
=> p(X1,...,Xn) )```
• A quantification of the form
`? [X1: t1,...,Xn: tn] : p(X1,...,Xn)`
is translated into:
```? [X1,...,Xn] :
( t1(X1)
& ...
& tn(Xn)
& p(X1,...,Xn) )```
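
The two quantifier translations above can be sketched as string-building functions in Python (hypothetical helpers for illustration; a real translator would operate on parsed formulae, not strings):

```python
def translate_forall(bindings, body: str) -> str:
    """Translate ! [X1: t1, ..., Xn: tn] : body into guarded FOF.
    bindings is a list of (variable, type-name) pairs."""
    vars_ = ",".join(v for v, _ in bindings)
    guards = " & ".join(f"{t}({v})" for v, t in bindings)
    return f"! [{vars_}] : ( ( {guards} ) => {body} )"

def translate_exists(bindings, body: str) -> str:
    """? [X1: t1, ..., Xn: tn] : body becomes a conjunction of the
    type guards and the body."""
    vars_ = ",".join(v for v, _ in bindings)
    guards = " & ".join(f"{t}({v})" for v, t in bindings)
    return f"? [{vars_}] : ( {guards} & {body} )"
```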

### The TFF Polymorphic Type System (TF1)

The polymorphic type system is an extension of the monomorphic type system, with rank-1 polymorphism. Syntactically, the types, terms, and formulas of TF1 are analogous to those of TF0, except that function and predicate symbols can be declared to be polymorphic, types can contain type variables, and n-ary type constructors are allowed. Type variables in type signatures and formulas are explicitly bound. Instances of polymorphic symbols are specified by explicit type arguments, rather than inferred. Refer to the TF1 specification for more details on type checking and semantics, and much more.
• Types
The types of TF1 are built from type variables and type constructors of fixed arities. Nullary type constructors are called type constants. The usual conventions of TPTP apply: Type variables start with an uppercase letter and type constructors with a lowercase letter. A type is polymorphic if it contains type variables; otherwise, it is monomorphic.

As in TF0, the type \$i of individuals is predefined but has no peculiar semantics, whereas the arithmetic types \$int, \$rat, and \$real are modeled by Z, Q, and R, respectively. Refer to the TF0 specification for the semantics of the arithmetic types and their operations. It is perfectly acceptable for a TFF implementation not to support arithmetic. "TFFk with arithmetic" is sometimes called "TFAk".

• Type Signatures
Each function and predicate symbol occurring in a formula must be associated with a type signature that specifies the types of the arguments and, for functions, the return type. Type signatures can take any of the following forms:
1. a type (predefined or user-defined);
2. the Boolean pseudotype \$o;
3. (τ1 * ... * τn) > υ for n > 0, where τ1, ..., τn are types and υ is a type or \$o;
4. !>[α1 : \$tType, ..., αn : \$tType]: ς for n > 0, where α1, ..., αn are distinct type variables and ς has one of the previous three forms.
The parentheses in form (3) are omitted if n = 1. The binder !> in form (4) denotes universal quantification. If ς is of form (3), it must be put in parentheses.

Form (1) is used for monomorphic constants; form (2), for propositional constants, including the predefined symbols \$true and \$false; form (3), for monomorphic functions and predicates; and form (4), for polymorphic functions and predicates. It is often convenient to regard all forms above as instances of the general syntax

!>[α1 : \$tType, ..., αm : \$tType]: ((τ1 * ... * τn) > ς)
where m and n can be 0.

Type variables that are bound by !> without occurring in the type signature's body are called ghost type variables. These make it possible to specify operations and relations directly on types, providing a convenient way to encode type classes. For example, we can declare a polymorphic propositional constant is_linear with the signature !>[A : \$tType]: \$o and use it as a guard to restrict the axioms specifying that a binary predicate less_eq with the signature !>[A : \$tType]: ((A * A) > \$o) is a linear order to those types that satisfy the is_linear predicate.

• Type Declarations
Type constructors, like function and predicate symbols, can optionally be declared. The following declarations introduce a type constant bird, a unary type constructor list, and a binary type constructor map:
```tff(bird_t, type, bird: \$tType).
tff(list_t, type, list: \$tType > \$tType).
tff(map_t, type, map: (\$tType * \$tType) > \$tType).
```
If a type constructor is used before being declared, its arity is determined by the first occurrence. Any later declaration must give it the same arity.

A declaration of a function or predicate symbol specifies its type signature. Every type variable occurring in a type signature must be bound by a !>-binder. The following declarations introduce a monomorphic constant pi, a polymorphic predicate is_empty, and a pair of polymorphic functions cons and lookup:

```tff(pi_t, type, pi: \$real).
tff(is_empty_t, type, is_empty : !>[A : \$tType]: (list(A) > \$o)).
tff(cons_t, type, cons : !>[A : \$tType]: ((A * list(A)) > list(A))).
tff(lookup_t, type, lookup : !>[A : \$tType, B : \$tType]: ((map(A, B) * A) > B)).
```
If a function or predicate symbol is used before being declared, a default type signature is assumed: (\$i * ... * \$i) > \$i for functions and (\$i * ... * \$i) > \$o for predicates. If a symbol is declared after its first use, the declared signature must agree with the assumed signature.

If a type constructor, function symbol, or predicate symbol is declared more than once, it must be given the same type signature up to renaming of bound type variables. All symbols share the same namespace; in particular, a type constructor cannot have the same name as a function or predicate symbol.

• Function and Predicate Application
To keep the required type inference to a minimum, every use of a polymorphic symbol must explicitly specify the type instance. A function or predicate symbol with a type signature !>[α1 : \$tType, ..., αm : \$tType]: ((τ1 * ... * τn) > υ) must be applied to m type arguments and n term arguments. Given the above signatures for is_empty, cons, and lookup, the term lookup(\$int, list(A), M, 2) and the atom is_empty(\$i, cons(\$i, X, nil(\$i))) are well-formed and contain free occurrences of the type variable A and the term variable X, respectively.

In keeping with TF1's rank-1 polymorphic nature, type variables can only be instantiated with actual types. In particular, \$o, \$tType, and !>-binders cannot occur in type arguments of polymorphic symbols.

For systems that implement type inference, the following nonstandard extension of TF1 might be useful. When a type argument of a polymorphic symbol can be inferred automatically, it may be replaced with the wildcard \$_. For example:

is_empty(\$_, cons(\$_, X, nil(\$_)))
Although nil's type argument cannot be inferred locally from the types of its term arguments (since there are none), the Hindley–Milner type inference can deduce it. The producer of a TF1 problem must be aware of the type inference algorithm implemented in the consumer to omit only redundant type arguments.

• Type and Term Variables
Every variable in a TF1 formula must be bound. It can be given a type at binding time:
```tff(bird_list_not_empty, axiom,
![B : bird, Bs : list(bird)]: ~ is_empty(bird, cons(bird, B, Bs))).

```
If the type and the preceding colon (:) are omitted, the variable is given type \$i. Every type variable occurring in a TF1 formula (whether in a type argument or in the type of a bound variable) must also be bound, with the pseudotype \$tType:
```tff(lookup_update_same, axiom,
![A : \$tType, B : \$tType, M : map(A, B), K : A, V : B]:
lookup(A, B, update(A, B, M, K, V), K) = V).
```
A single quantifier cluster can bind both type variables and term variables. Universal and existential quantifiers over type variables are allowed under the propositional connectives, including equivalence, as well as under other quantifiers over type variables, but not in the scope of a quantifier over a term variable. Rationale: A statement of the form "for every integer k, there exists a type α such that ..." effectively makes α a dependent type. On such statements, type skolemization is impossible, and there is no easy translation to ML-style polymorphic formalisms. Moreover, type handling in an automatic prover would be more difficult were such constructions allowed, since they require paramodulation into types.

On the other hand, all the notions and procedures described in the TF1 specification—except for type skolemization—are independent of this restriction. The rules of type checking and the notion of interpretation are directly applicable to unrestricted formulas. The encoding into monomorphic logic is sound and complete on unrestricted formulas, and the proofs require no adjustments. This prepares the ground for TFF2, which is expected to lift the restriction and support more elaborate forms of dependent types. Implementations of TF1 are encouraged to support unrestricted formulas, treating them according to the semantics given here, if practicable.

• Terms and Formulas
Apart from the differences described above, the terms and formulas of TF1 are identical to those of TF0. Refer to the TF0 specification for further information.

• Example
The following problem gives the general flavor of TF1. It first declares and axiomatizes lookup and update operations on maps, then conjectures that update is idempotent for fixed keys and values. Its SZS status is Theorem.
```tff(map_t,type,(
map: ( \$tType * \$tType ) > \$tType )).

tff(lookup_t,type,(
lookup:
!>[A: \$tType,B: \$tType] :
( ( map(A,B) * A ) > B ) )).

tff(update_t,type,(
update:
!>[A: \$tType,B: \$tType] :
( ( map(A,B) * A * B ) > map(A,B) ) )).

tff(lookup_update_same,axiom,(
! [A: \$tType,B: \$tType,M: map(A,B),K: A,V: B] :
lookup(A,B,update(A,B,M,K,V),K) = V )).

tff(lookup_update_diff,axiom,(
! [A: \$tType,B: \$tType,M: map(A,B),V: B,K: A,L: A] :
( K != L
=> lookup(A,B,update(A,B,M,K,V),L) = lookup(A,B,M,L) ) )).

tff(map_ext,axiom,(
! [A: \$tType,B: \$tType,M: map(A,B),N: map(A,B)] :
( ! [K: A] : lookup(A,B,M,K) = lookup(A,B,N,K)
=> M = N ) )).

tff(update_idem,conjecture,(
! [A: \$tType,B: \$tType,M: map(A,B),K: A,V: B] :
update(A,B,update(A,B,M,K,V),K,V) = update(A,B,M,K,V) )).
```
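
The axioms and conjecture can be sanity-checked against a concrete model, here using Python dicts as maps (an illustration of the intended semantics, not part of TPTP):

```python
def update(m: dict, k, v) -> dict:
    """Functional map update: a concrete model of the polymorphic update."""
    n = dict(m)
    n[k] = v
    return n

def lookup(m: dict, k):
    """A concrete model of the polymorphic lookup."""
    return m[k]

m = {"a": 1, "b": 2}
# lookup_update_same: looking up a just-updated key yields the new value.
assert lookup(update(m, "a", 9), "a") == 9
# update_idem (the conjecture): updating twice equals updating once.
assert update(update(m, "a", 9), "a", 9) == update(m, "a", 9)
```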

### The TPTP Arithmetic System

Arithmetic must be done in the context of TFF logic, which supports the predefined atomic numeric types \$int, \$rat, and \$real. Using TFF enforces semantics that separate \$int from \$rat from \$real from \$i from \$o. If arithmetic is not done in the context of TFF, then the domain of every interpretation is the integers/rationals/reals.

The numbers are unbounded and the reals are of infinite precision (rather than some specific implementation such as 32-bit two's complement integers, or IEEE floating point). Systems that implement limited arithmetic must halt in an SZS error state if they hit overflow.

The following interpreted predicates and interpreted functions are defined. Each symbol is ad-hoc polymorphic over the numeric types (with one exception - \$quotient is not defined for \$int). All arguments must have the same numeric type. All the functions, except for the coercion functions \$to_int and \$to_rat, have the same range type as their arguments. For example, \$sum can be used with the type signatures (\$int * \$int) > \$int, (\$rat * \$rat) > \$rat, and (\$real * \$real) > \$real. The coercion function \$to_int always has a \$int result, and the coercion function \$to_rat always has a \$rat result. All the predicates have a \$o result. For example, \$less can be used with the type signatures (\$int * \$int) > \$o, (\$rat * \$rat) > \$o, and (\$real * \$real) > \$o.

Each entry below gives the symbol, its operation, and comments/examples.
\$int The type of integers 123, -123
```<integer>            ::- (<signed_integer>|<unsigned_integer>)
<signed_integer>     ::- <sign><unsigned_integer>
<unsigned_integer>   ::- <decimal>
<decimal>            ::- (<zero_numeric>|<positive_decimal>)
<positive_decimal>   ::- <non_zero_numeric><numeric>*
<sign>               ::: [+-]
<zero_numeric>       ::: [0]
<non_zero_numeric>   ::: [1-9]
<numeric>            ::: [0-9]```
\$rat The type of rationals 123/456, -123/456, +123/456
```<rational>           ::- (<signed_rational>|<unsigned_rational>)
<signed_rational>    ::- <sign><unsigned_rational>
<unsigned_rational>  ::- <decimal><slash><positive_decimal>
<slash>              ::: [/]```
\$real The type of reals 123.456, -123.456,
123.456E789, 123.456e789, -123.456E789,
123.456E-789, -123.456E-789
```<real>               ::- (<signed_real>|<unsigned_real>)
<signed_real>        ::- <sign><unsigned_real>
<unsigned_real>      ::- (<decimal_fraction>|<decimal_exponent>)
<decimal_exponent>   ::- (<decimal>|<decimal_fraction>)<exponent><decimal>
<decimal_fraction>   ::- <decimal><dot_decimal>
<dot_decimal>        ::- <dot><numeric><numeric>*
<dot>                ::: [.]
<exponent>           ::: [Ee]```
= (infix) Comparison of two numbers. The numbers must be the same atomic type (see the type system).
\$less/2 Less-than comparison of two numbers. \$less, \$lesseq, \$greater, and \$greatereq are related by
! [X,Y] : ( \$lesseq(X,Y) <=> ( \$less(X,Y) | X = Y ) )
! [X,Y] : ( \$greater(X,Y) <=> \$less(Y,X) )
! [X,Y] : ( \$greatereq(X,Y) <=> \$lesseq(Y,X) )
i.e., only \$less and equality need to be implemented to get all four relational operators.
\$lesseq/2 Less-than-or-equal-to comparison of two numbers.
\$greater/2 Greater-than comparison of two numbers.
\$greatereq/2 Greater-than-or-equal-to comparison of two numbers.
\$uminus/1 Unary minus of a number. \$uminus, \$sum, and \$difference are related by
! [X,Y] : \$difference(X,Y) = \$sum(X,\$uminus(Y))
i.e., only \$uminus and \$sum need to be implemented to get all three additive operators.
\$sum/2 Sum of two numbers.
\$difference/2 Difference between two numbers.
\$product/2 Product of two numbers.
\$quotient/2 Exact quotient of two \$rat or \$real numbers. For non-zero divisors, the result can be computed. For zero divisors the result is not specified. In practice, if an ATP system does not "know" that the divisor is non-zero, it should simply not evaluate the \$quotient. Users should always guard their use of \$quotient using inequality, e.g.,
! [X: \$real] : ( X != 0.0 => p(\$quotient(5.0,X)) )
\$quotient_e/2, \$quotient_t/2, \$quotient_f/2 Integral quotient of two numbers. The three variants use different rounding to an integral result:
• \$quotient_e(N,D) - the Euclidean quotient, which has a non-negative remainder. If D is positive then \$quotient_e(N,D) is the floor (in the type of N and D) of the real division N/D, and if D is negative then \$quotient_e(N,D) is the ceiling of N/D.
• \$quotient_t(N,D) - the truncation of the real division N/D.
• \$quotient_f(N,D) - the floor of the real division N/D.
For zero divisors the result is not specified.
\$remainder_e/2, \$remainder_t/2, \$remainder_f/2 Remainder after integral division of two numbers. For τ ∈ {\$int,\$rat, \$real}, ρ ∈ {e, t,f}, \$quotient_ρ and \$remainder_ρ are related by
! [N:τ,D:τ] : \$sum(\$product(\$quotient_ρ(N,D),D),\$remainder_ρ(N,D)) = N
For zero divisors the result is not specified.
\$floor/1 Floor of a number. The largest integral number not greater than the argument.
\$ceiling/1 Ceiling of a number. The smallest integral number not less than the argument.
\$truncate/1 Truncation of a number. The nearest integral value with magnitude not greater than the absolute value of the argument.
\$round/1 Rounding of a number. The nearest integral number to the argument. When the argument is halfway between two integral numbers, the nearest even integral number to the argument.
\$is_int/1 Test for coincidence with an integer.
\$is_rat/1 Test for coincidence with a rational.
\$to_int/1 Coercion of a number to \$int. The result is the largest integer less than or equal to the argument (the floor function).
\$to_rat/1 Coercion of a number to \$rat. This function is not fully specified. For reals that are (known to be) rational the result is the rational value. For other reals the result is not specified. In practice, if an ATP system does not "know" that the argument is rational, it should simply not evaluate the \$to_rat. Users should always guard their use of \$to_rat using \$is_rat, e.g.,
! [X: \$real] : ( \$is_rat(X) => p(\$to_rat(X)) )
\$to_real/1 Coercion of a number to \$real.
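
The equivalences relating \$less to the other relational operators, the derivation of \$difference from \$uminus and \$sum, and the three integral quotient/remainder families can be sketched in Python (function names mirror the TPTP symbols; this is an illustration of the specified semantics, not an implementation from any ATP system):

```python
def less(x, y):                     # $less: the one primitive comparison
    return x < y

# Derived relational operators, per the equivalences above.
def lesseq(x, y):    return less(x, y) or x == y   # $lesseq
def greater(x, y):   return less(y, x)             # $greater
def greatereq(x, y): return lesseq(y, x)           # $greatereq

def uminus(x):        return -x                    # $uminus
def sum_(x, y):       return x + y                 # $sum
def difference(x, y): return sum_(x, uminus(y))    # $difference, derived

# The three integral quotient families, here over $int arguments.
def quotient_e(n, d):
    """Euclidean quotient: floor of n/d for positive d, ceiling for
    negative d, so the remainder is always non-negative."""
    return n // d if d > 0 else -((-n) // d)

def quotient_t(n, d):
    """Truncating quotient (like C integer division)."""
    q = abs(n) // abs(d)
    return q if (n >= 0) == (d > 0) else -q

def quotient_f(n, d):
    """Floor quotient (like Python's //)."""
    return n // d

def remainder(quotient_fn, n, d):
    # Chosen so that quotient*d + remainder == n in every family.
    return n - quotient_fn(n, d) * d
```

Note that Python's built-in round already follows the round-half-to-even convention described for \$round.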

The extent to which ATP systems are able to work with the arithmetic predicates and functions may vary, from a simple ability to evaluate ground terms, e.g., \$sum(2,3) may be evaluated to 5, through an ability to instantiate variables in equations involving such functions, e.g., \$product(2,\$uminus(X)) = \$uminus(\$sum(X,2)) may instantiate X to 2, to extensive algebraic manipulation capability. The syntax does not axiomatize arithmetic theory, but may be used to write axioms of the theory. If such a set of axioms is implicit, then reasoning "about" integer arithmetic becomes possible, e.g.,

`    ! [X:\$int] : \$greater(\$succ(X),X)`
becomes provable.
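
The number grammars given above can be transcribed into Python regular expressions. This is a sketch: the exponent is allowed an optional sign to match the examples (e.g., 123.456E-789), which the simplified BNF above does not show.

```python
import re

# Regexes transcribed from the <integer>, <rational>, and <real> grammars.
DECIMAL  = r"(?:0|[1-9][0-9]*)"
POS_DEC  = r"[1-9][0-9]*"
INTEGER  = rf"[+-]?{DECIMAL}"
RATIONAL = rf"[+-]?{DECIMAL}/{POS_DEC}"        # denominator is non-zero
FRACTION = rf"{DECIMAL}\.[0-9]+"
EXPONENT = rf"(?:{DECIMAL}|{FRACTION})[Ee][+-]?{DECIMAL}"
REAL     = rf"[+-]?(?:{EXPONENT}|{FRACTION})"

def is_tptp_real(s: str) -> bool:
    """True iff s is a well-formed TPTP real literal."""
    return re.fullmatch(REAL, s) is not None
```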

### Physical Organization

The TPTP is physically organized into six subdirectories, as follows:
• Problems - The problem files directory, with subdirectories for each domain. The domain name abbreviations, as described in Section The TPTP Domain Structure, are used as subdirectory names. The subdirectories contain the problem files.
• Axioms - The axiom files directory.
• Generators - The generator files directory.
• TPTP2X - The directory containing the tptp2X utility, described in Section The TPTP2X Utility.
• Scripts - A directory containing C shell scripts that may be used with the TPTP, as described in Section tptp2T.
• Documents - A directory containing documents that relate to the TPTP:
• ReadMe - General information about the TPTP.
• History - A history of the changes made to the TPTP up to the current release.
• ProblemList - A list of all the abstract problems in the TPTP, giving their names, number of FOF and CNF versions, and one-line descriptions.
• AxiomList - A list of the axiomatizations in the TPTP, giving their names, number of FOF and CNF specializations, and one-line descriptions.
• GeneratorList - A list of the generic problems in the TPTP, giving their names, number of FOF and CNF versions, and one-line descriptions. For each version the constraints that determine the permissible sizes, and the TPTP sizes, are given.
• ReverseIndex - An index from existing known names for problems to their TPTP file names.
• Bibliography.bib - The BibTeX file containing references for all material cited in the TPTP.
• ProblemAndSolutionStatistics - Lists all the problems in the TPTP, giving statistics about the problems and information about ATP systems' performances on the problems. The fields are listed and explained below.
• THFSynopsis, TFFSynopsis, FOFSynopsis, and CNFSynopsis - Statistics on the THF, TFF, FOF, and CNF parts of the TPTP, and a chart showing the structure of the TPTP problem domains.
• OverallSynopsis - Statistics on the overall TPTP, and a chart showing the structure of the TPTP problem domains.
• VerySimilarProblemsLists - A directory containing lists of very similar problems. Problems are considered to be "very similar" if:
• They have many formulae in common, e.g., when the same axioms are used for very many conjectures.
• They have very similar characteristics, e.g., when the problems have been created using different parameters in some automated process.
• SyntaxBNF - BNF of the TPTP syntax
• SZSOntology - The SZS ontology of problem statuses, as may be established by an ATP system, and used in the % Status field of problem headers.
• Template - A template for submitting new TPTP problems.
• Users - A list of registered TPTP users. Registered users receive notification of TPTP patches, and other information relevant to the TPTP. If you would like to have your name added to or removed from this list, please let us know. Our addresses are given on the title page.
The files in the Documents directory contain comprehensive online information about the TPTP. They summarize, in individual files, much of the information contained in this report. Their format provides quick access to the data, using standard system tools (e.g., grep, awk).
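Because the Documents files are plain, line-oriented text, any line-filtering tool works on them. As a small sketch, the fragment below scans two sample lines for a keyword, the way grep would; the sample lines are hypothetical and do not reproduce the real ProblemList format:

```python
# Sketch only: line-oriented filtering of a Documents-style file.
# The two sample lines below are invented for this example.
sample = """\
PUZ001  2  Dreadbury Mansion
GRP001  1  X^2 = identity implies commutativity
"""

def matching_lines(text, keyword):
    """Return the lines containing the keyword (a grep-style scan)."""
    return [line for line in text.splitlines() if keyword in line]

print(matching_lines(sample, "GRP"))
```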

The fields in the problem lines of the ProblemAndSolutionStatistics file are:

• V - Version, one of S (standard), I (incomplete), A (augmented), R (redundant), B (biased), or E (especial). See Section Problem Versions
• SZS - The SZS status (using the SZS problem status ontology)
• Rtng - The problem's difficulty rating.
• Form - The number of formulae.
• nHrn - The number of non-Horn clauses.
• Unit - The number of unit formulae/clauses
• Type - The number of type declarations (THF and TFF)
• Defn - The number of definitions
• RR - The number of range restricted clauses
• Atom - The number of atoms
• EqA - The number of equality atoms
• VarA - The number of variable atoms (THF)
• MxFD - The maximal formula depth (nested connectives) - this is the maximal number of literals in a clause in CNF problems
• AvFD - The average formula depth - this is the average number of literals per clause in CNF problems
• Pred - The number of distinct predicate symbols
• Prop - The number of distinct propositional predicate symbols
• Arity - The arity range of predicate symbols
• Conn - The number of logical connectives
• TCon - The number of type connectives (THF and TFF)
• Vars - The number of distinct variables
• Sgn - The number of singleton variables
• !> - The number of polymorphic types
• ! - The number of universally quantified variables
• ? - The number of existentially quantified variables
• ^ - The number of lambda expressions (THF)
• MxTD - The maximal term depth
• AvTD - The average term depth
• Func - The number of distinct function symbols
• Cnst - The number of distinct constant function symbols
• Arity - The arity range of function symbols
• Arith - The number of arithmetic symbols (TFF)

The fields in the solution lines of the ProblemAndSolutionStatistics file are:

• Res - The result established (using the SZS problem status ontology)
• RsT - The CPU time taken to get the result
• Out - The output provided (using the SZS problem status ontology)
• OtT - The CPU time taken to get the output
• Forms - The number of formulae in the output
• Claus - The number of clauses in the output
• Leafs - The number of leaves if the output is a DAG
• Depth - The depth of the output if it is a DAG
• Atoms - The number of atoms in the output
• EqAts - The number of equality atoms in the output
• MaxDp - The maximal formula depth (nested connectives) in the output
• MaxLc - The maximal number of literals in a clause in the output
• MaxTD - The maximal term depth in the output

## Utilities

### The TPTP2X Utility

The TPTP2X utility is a multi-functional utility for reformatting, transforming, and generating TPTP problem files. In particular, it
• Converts from the TPTP format to formats used by existing ATP systems.
• Applies various transformations to TPTP problems.
• Controls the generation of TPTP problem files from TPTP generator files.
• Expands FOF files with multiple conjectures to multiple files each with a single conjecture and all the other formulae.
TPTP2X is written in Prolog, and should run on most Prolog systems. It is known to run on current versions of Eclipse, SWI, and YAP Prolog. Before using TPTP2X, it is necessary to install the code for the dialect of Prolog that is to be used. This and other installation issues are described next.

 WARNING: For historical reasons TPTP2X omits the quotes around some constants and function symbols that should be 'Quoted', e.g., as found in PUZ001+2. I'm really sorry about this - it's an artifact of a very generic printing framework that became overloaded when THF formulae came into the TPTP. Changing it would be a HUGE task, and now there's TPTP4X that is better in most respects.

#### Installation

The TPTP2X utility consists of the following files:
• tptp2X - A shell script for running the tptp2X utility.
• tptp2X_install - A csh script for installing the tptp2X utility.
• tptp2X.config - The configuration file with site specific information.
• tptp2X.main - The main source code file of the tptp2X utility.
• tptp2X.read - Procedures for reading TPTP problem files.
• tptp2X.generate - Procedures for using TPTP generator files.
• tptp2X.syntax - Procedures for extracting syntactic measures from files.
• tptp2X.format - Procedures used by format modules.
• transform.<TRAN> - Procedures for doing <TRAN> transformations on TPTP problems.
• format.<ATP> - Procedures for outputting formulae in <ATP> format.
TPTP2X is installed by running tptp2X_install, which will prompt for required information. To install TPTP2X by hand, the following must be attended to:
• Copy the *.uninstalled files to their basenames (without .uninstalled).
• In the tptp2X script file:
• TPTPDirectory must be set to the absolute path name of the TPTP directory.
• PrologInterpreter must be set to the absolute path name of the Prolog interpreter.
• PrologArguments must be set to any command line arguments for the Prolog interpreter.
• The Gawk variable must be set to the absolute path name of gawk or awk.
• In the tptp2X.config file:
• The appropriate facts for the desired Prolog dialect must be uncommented.
• The absolute path name of the TPTP directory must be set in the tptp_directory/1 fact.
• In the tptp2X.main file:
• If your Prolog interpreter does not support compile/1 for loading source code, the compile/1 directives must be changed to something appropriate, e.g., [ ].
• The compile directives for unwanted output format modules can be commented out.

#### Using TPTP2X

The most convenient way of using the TPTP2X utility is through the tptp2X script. The use of this script is:
```tptp2X [-h][-q<Level>][-i][-s<Size>][-t<Transform>][-f<Format>] [-d<Directory>] -l<NamesFile>|<TPTPFiles>
```
The -h flag specifies that usage information should be output. The -q<Level> flag specifies the level of quietness at which the script should operate. There are three quietness levels; 0 is verbose mode, 1 suppresses informational output from the Prolog interpreter, and 2 suppresses all informational output except lines showing what files are produced. The default quietness level is 1. The -i flag specifies that the script should enter interactive mode after processing all other command line parameters. Interactive mode is described below. The other command line parameter values are:
The output files produced by TPTP2X are named according to the input file name and the options used. The TPTP problem name (the input file name without the .p or .g) forms the basis of the output file names. For files created from TPTP generators, the size parameters are appended to the base name, separated from the base name by a ".". Then, for each transformation applied, a suffix is appended preceded by a +. The suffixes for the transformations are given in Section TPTP2X Transformations. Finally, an extension to indicate the output format is appended to the file name. The extensions for the formats are given in Section TPTP2X Output Formats. To record how a TPTP2X output file has been formed, the TPTP2X parameters are given in a % Comments field entry of the output file.
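As a sketch of these naming rules (not the actual TPTP2X code, which is Prolog), the following assembles an output file name from a base problem name, optional generator size parameters, transformation suffixes, and a format extension; the example values are taken from the runs shown later in this section:

```python
# Sketch of the TPTP2X output file naming scheme described above.
def output_name(problem, sizes=None, transforms=(), extension="tptp"):
    """problem: base name without .p/.g; sizes: generator size strings;
    transforms: transformation suffixes (without the leading +)."""
    name = problem
    if sizes:
        name += "." + ":".join(sizes)   # size parameters for generated files
    for suffix in transforms:
        name += "+" + suffix            # one +suffix per transformation
    return name + "." + extension       # format extension last

print(output_name("PUZ001+1", transforms=["fr"], extension="in"))
print(output_name("SYN010-1", sizes=["003", "002"],
                  transforms=["lr", "nhms"], extension="kif"))
```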

#### TPTP2X Transformations

The transformations are:
• stdfof, to remove the connectives <=, <~>, ~|, and ~& from FOF problems, by rewriting formulae to use the other connectives. Suffix +stdfof.
• expand:conjectures, to expand a single FOF problem with multiple conjectures into multiple FOF problems with single conjectures. Suffix +xc_<Index>_<conjecture_name>.
• clausify:<Algorithm>, to transform FOF problems to CNF problems. The <Algorithm> is one of: Details of these algorithms can be found in [SM96]. Note that these transformations do not simplify the resultant clause set; see the next two transformations for this. Suffix +cls_<Algorithm>.
• simplify, to simplify a set of clauses. Details of the simplifications performed can be found in [SM96]. Suffix +simp.
• cnf:<Algorithm>, to do clausify:<Algorithm> followed by simplify. Details of the performance of these transformations can be found in [SM96]. Generally cnf:otter and cnf:bundy produce clause sets with lower symbol counts than the other two. Suffix +cnf_<Algorithm>.
• fofify:obvious, to convert a set of first-order clauses to a set of first-order formulae, converting negated_conjecture clauses into a corresponding conjecture. The transformation is very naive. Suffix +fof_obvious.
• axiomate, to remove all conjecture formulae. Suffix +axed.
• tff2fof[:<Inequalities>[:<KnownTypes>]], to convert TFF formulae to FOF, as explained in the section on translation. The value of :<Inequalities> can be either with or without, to indicate whether or not axioms that specify that terms from different domains are unequal should be generated. The default is with. The value of :<KnownTypes> is a list, enclosed in []s, of domains for which type guards are not needed. The default is [\$i,\$int,\$rat,\$real], i.e., assuming that the output will be given to a system that knows about terms and the numeric types. Suffix +tff2fof.
• propify, to convert a set of first-order clauses to a set of propositional clauses, preserving satisfiability. The transformation is very naive. Suffix +prop.
• lr, to reverse the literal ordering in the clauses of CNF problems. Suffix +lr.
• cr, to reverse the clause ordering in CNF problems. Suffix +cr.
• clr, to do both clausal reversals. Suffix +clr.
• fr, to reverse the formula ordering in FOF problems. Suffix +fr.
• random, to randomly reorder the clauses and literals in CNF problems, or to randomly reorder the formulae in FOF problems. The rearrangement of formulae, clauses, and literals in a problem facilitates testing the sensitivity of an ATP system to the order of presentation. Suffix +ran.
• er, to reverse the arguments of unit equality clauses in CNF problems. Suffix +er.
• ran_er, to reverse the arguments of randomly selected unit equality clauses in CNF problems. Suffix +ran_er.
• add_equality[:<Add>], to add missing equality axioms to problems that contain equality. If the optional :<Add> flags are omitted, then all equality axioms are added. If the optional :<Add> flags are included, then <Add> is a string that indicates which equality axioms to add. The characters that can be in the string are as for the rm_equality transformation. Suffix +add_<Add>.
• rm_equality[:<Remove>], to remove equality axioms. If the optional :<Remove> flags are omitted, then all equality axioms are removed. If the optional :<Remove> flags are included, then <Remove> is a string that indicates which equality axioms to remove. The characters that can be in the string are:
• r, to remove reflexivity,
• s, to remove symmetry,
• t, to remove transitivity,
• f, to remove function substitution,
• p, to remove predicate substitution.
For example, -t rm_equality:stfp would remove symmetry, transitivity, function substitution, and predicate substitution. This transformation works only if the equality axiomatization is complete (i.e., including the axioms of reflexivity, symmetry, transitivity, function substitution for all functors, and predicate substitution for all predicate symbols). Suffix +rm_eq_<Remove>.
• set_equality[:<Set>], to set the equality axioms in problems that contain equality. If the optional :<Set> flags are omitted, then all equality axioms are set. If the optional :<Set> flags are included, then <Set> is a string that indicates which equality axioms to set. The characters that can be in the string are as for the rm_equality transformation. Suffix +set_eq_<Set>.
• to_equality, to convert CNF problems to a pure equality representation. Every non-equality literal is converted to an equality literal with the same sign. The arguments of the new equality literal are the atom of the non-equality literal and the constant true. Suffix +2eq.
• ifof:<Axioms> to convert intuitionistic propositional formulae to FOF. The <Axioms> specify which axiomatic basis to use. The options are:
• goedel1
• goedel2
• mckinsey
Suffix +ifof_<Axioms>.
• mfof:<Logic>:<Axioms> to convert modal propositional formulae to FOF. The <Logic> specifies which modal logic to use. The options are:
• mm for multi-modal logic
• k
• d
• m
• b
• s4
• s5
The <Axioms> specify which modal logic axioms to use. The options are:
• benzmueller
• k
• d
• m
• b
• s4
• s5
Suffix +mfof_<Logic>_<Axioms>.
• magic, to apply Mark Stickel's magic set transformation [Sti94] to CNF problems. Suffix +magic.
• shorten, to replace all the symbols in the problem by short, meaningless symbols. This is useful if you are only interested in the syntax of the problem, and do not want to read through the long, meaningful symbols that are often used in TPTP problems. Note that equality atoms are not affected. Suffix +shorten.
• none, to do nothing (same as omitting the -t parameter, but required for advanced use; see Section  Running TPTP2X from within Prolog). No suffix.
The default transformation is none.
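As an illustration of the flag strings used by add_equality, rm_equality, and set_equality (this is a sketch, not the Prolog TPTP2X source; the function name is invented), the characters of a <Remove> specification can be decoded as follows:

```python
# Sketch: decoding the flag characters of rm_equality:<Remove> as
# described above (r, s, t, f, p).
AXIOM_FLAGS = {
    "r": "reflexivity",
    "s": "symmetry",
    "t": "transitivity",
    "f": "function substitution",
    "p": "predicate substitution",
}

def axioms_to_remove(spec):
    """spec is e.g. 'rm_equality:stfp'; with no flags, all axioms are meant."""
    name, _, flags = spec.partition(":")
    if name != "rm_equality":
        raise ValueError("not an rm_equality specification")
    if not flags:
        return list(AXIOM_FLAGS.values())   # omitting :<Remove> removes all
    return [AXIOM_FLAGS[c] for c in flags]

print(axioms_to_remove("rm_equality:stfp"))
```

This reproduces the worked example in the text: rm_equality:stfp removes symmetry, transitivity, function substitution, and predicate substitution.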

#### TPTP2X Output Formats

The available output formats are:
• bliksem, to convert CNF problems to the Bliksem format [deN98]. Suffix .blk.
• carine, to convert CNF problems to the CARINE format. Suffix .car.
• dedam to convert CNF unit equality problems to the Dedam format [NRV97]. Suffix .dedam.
• dfg, to convert CNF problems to the DFG format [HKW96]. Suffix .dfg.
• dimacs, to convert propositional CNF problems to the DIMACS format [DIM]. Suffix .dimacs.
• eqp to convert CNF unit equality problems to the EQP format [McC00]. Suffix .eqp.
• geo, to convert problems to the Geo format [dNM06]. Suffix .geo.
• isabelle:[<Tactics>], to convert problems to the Isabelle format [NPW02]. <Tactics> is a comma separated list of Isabelle tactics, selected from sledgehammer, oracle, simp, blast, auto, metis, fast, fastsimp, best, force, meson, smt. Suffix .thy.
• kif, to convert problems to the KIF format [GF92]. Suffix .kif.
• leancop, to convert problems to the leanCoP format [OB03]. Suffix .leancop.
• leantap, to convert problems to the leanTaP format [BP95]. Suffix .leantap.
• metitarski, to convert problems to the MetiTarski format [AP10]. Suffix .mski.
• omdoc, to convert problems to the OmDoc format [Koh06]. Suffix .omdoc.
• otter:<SoS>:'<Otter options>', to convert FOF and CNF problems to the Otter .in format [McC94]. <SoS> specifies the Set-of-Support to use. It can be one of:
• conjecture, to use the formulae whose type is conjecture
• not_conjecture, to use the formulae whose type is not conjecture
• hypothesis, to use the formulae whose type is hypothesis or conjecture
• axioms, to use the formulae whose type is axiom
• positive, to use the positive clauses
• negative, to use the negative clauses
• unit, to use the unit clauses
• all, to use all formulae
• none, to use no formulae (needed for Otter's auto mode)
'<Otter options>' is a quoted (to avoid UNIX shell errors), comma separated list of Otter options, which will be output before the formula lists. See the Otter Reference Manual and Guide [McC94] for details of the available options. For example, -f otter:none:'set(auto),assign(max_seconds,5)' would configure Otter to use its auto mode with a time limit of 5 seconds. As the auto mode is commonly used with Otter, the TPTP2X script allows the abbreviation -f otter to specify -f otter:none:'set(auto),set(tptp_eq),clear(print_given)'. If no -t parameter is specified then -f otter also sets -t rm_equality:stfp. Suffix .in.
• protein, to convert CNF problems to the PROTEIN format [BF94]. Suffix .tme.
• prover9:<Options>, to convert problems to the Prover9 format [McCURL]. The <Options> specifies any Prover9 options. Suffix .in.
• setheo:<Style>, to convert CNF problems to the SETHEO .lop format [STK90].
<Style> specifies the style of SETHEO clauses to write. It can be one of:
• sign, to write the atoms of the negative and positive literals of each clause in the antecedent and consequent of an implication, respectively;
• type, to use the type style: if there are no negative axiom or hypothesis clauses, then the sign style is used. Otherwise in all negative axiom and hypothesis clauses the first literal is negated to form the consequent of an implication, with the remaining literals' atoms being written as the antecedent. Further, in all conjecture clauses all positive literals are negated so that all literals are written in the antecedent of an implication.
The default style is sign, i.e., the abbreviation -f setheo means -f setheo:sign. Suffix .lop.
• sex, to convert THF problems to the s-expression format used by Satallax [BB10]. Suffix .sex.
• smt, to convert TFF and FOF problems to the SMT 1.2 format [RT06]. Suffix .smt.
• smt2, to convert TFF and FOF problems to the SMT 2.0 format [BST10]. Suffix .smt2.
• tps, to convert THF problems to the TPS format [AB06]. Suffix .tps.
• oldtptp, to convert FOF and CNF problems to the old TPTP format [SS98]. Suffix .oldtptp.
• prefix, to convert all equality to the old TPTP format prefix form. Suffix .prefix.
• tptp, to convert FOF and CNF problems to the TPTP format [SZS03]. Suffix .tptp.
• waldmeister, to convert CNF unit equality problems to the Waldmeister format [HBF96]. Suffix .pr.
The default format is tptp.

Example

```~/TPTP/TPTP2X> tptp2X -tfr,random -f prover9 PUZ001+1
---------------------------------------------------------------------
TPTP2X directory      = /home/graph/tptp/TPTP/TPTP2X
TPTP directory        = /home/graph/tptp/TPTP
Files to convert      = PUZ001+1
Size                  =
Transformation        = fr,random
Format to convert to  = prover9
Output directory      = /home/graph/tptp/TPTP/TPTP2X/prover9
---------------------------------------------------------------------
Made the directory /home/graph/tptp/TPTP/TPTP2X/prover9/PUZ
/home/graph/tptp/TPTP/Problems/PUZ/PUZ001+1.p -> /home/graph/tptp/TPTP/TPTP2X/prover9/PUZ/PUZ001+1+fr.in
/home/graph/tptp/TPTP/Problems/PUZ/PUZ001+1.p -> /home/graph/tptp/TPTP/TPTP2X/prover9/PUZ/PUZ001+1+ran.in
~/TPTP/TPTP2X>
```
This run applies two separate transformations to the problem PUZ001+1. The transformations are formula order reversal and formula order randomizing. The transformed problems are output in prover9 format, in a directory prover9/PUZ below the current directory. The file names are PUZ001+1+fr.in and PUZ001+1+ran.in.

Example

```~/TPTP/TPTP2X> tptp2X -q2 -s3..5 -fotter -d~tptp/tmp SYN001-1.g
/home/graph/tptp/TPTP/Problems/SYN/SYN001-1.p -> /home/tptp/tmp/SYN/SYN001-1.003+rm_eq_stfp.in
/home/graph/tptp/TPTP/Problems/SYN/SYN001-1.p -> /home/tptp/tmp/SYN/SYN001-1.004+rm_eq_stfp.in
/home/graph/tptp/TPTP/Problems/SYN/SYN001-1.p -> /home/tptp/tmp/SYN/SYN001-1.005+rm_eq_stfp.in
~/TPTP/TPTP2X>
```
This run generates three sizes of the generic problem SYN001-1. The sizes are 3, 4, and 5. The output files are formatted for Otter, to use its auto mode. The files are placed in ~tptp/tmp/SYN, and are named SYN001-1.003+rm_eq_stfp.in, SYN001-1.004+rm_eq_stfp.in, and SYN001-1.005+rm_eq_stfp.in. The quietness level is set to 2, thus suppressing all informational output except the lines showing what files are produced. Note that the file SYN001-1.g is automatically found in the Generators directory.

Example

```~/TPTP/TPTP2X> tptp2X -tmagic+shorten - < ~tptp/TPTP/Problems/GRP/GRP001-1.p
---------------------------------------------------------------------
TPTP2X directory      = /home/graph/tptp/TPTP/TPTP2X
TPTP directory        = /home/graph/tptp/TPTP
Files to convert      = -
Size                  =
Transformation        = magic+shorten
Format to convert to  = tptp:short
Output directory      = -
---------------------------------------------------------------------
%------------------------------------------------------------------------------
% File     : Shortened file, so there is no header
%------------------------------------------------------------------------------
cnf(clause_1,axiom,
( p1(c2,X1,X1) )).

cnf(clause_2,axiom,
( p1(X1,c2,X1) )).

... Lots of output omitted here for brevity

cnf(clause_19,negated_conjecture,
( p6(c8,c7,c9) )).

cnf(clause_20,negated_conjecture,
( ~ p1(c8,c7,c9) )).
%------------------------------------------------------------------------------
~/TPTP/TPTP2X>
```
This run uses the TPTP2X script as a filter, to apply the non-Horn magic set transformation and then the symbol shortening transformation to GRP001-1.p. GRP001-1.p is fed in from standard input, and the output is written to standard output (thus all informational output is suppressed).

#### The TPTP2X Interactive Mode

If the -i flag is set, the TPTP2X script enters interactive mode after processing all other command line parameters. In interactive mode the user can change the value of any of the command line parameters. The user is prompted for the <TPTPFiles>, the <Size>, the <Transform>, the <Format>, and the <Directory>. In each prompt the current value is given. The user may respond by specifying a new value or by entering <cr> to accept the current value. The prompt-respond loop continues for each parameter until the user accepts the value for the parameter.

Example
```~/TPTP/TPTP2X> tptp2X -q0 -d~tptp/tmp -i
---- Interactive mode -----------------------------------------------
Files to convert      [Problems/*/*.p] : ../Problems/GRP/GRP001-1.p
Files to convert      [../Problems/GRP/GRP001-1.p] :
Size                  [] :
Transformation        [none] : lr+rm_equality:stfp
Transformation        [lr+rm_equality:stfp] :
Format to convert to  [tptp] : setheo
Format to convert to  [setheo] :
Output directory      [/home/tptp/tmp] :
---- End of Interactive mode ----------------------------------------
---------------------------------------------------------------------
TPTP2X directory      = /home/2/tptp/TPTP/TPTP2X
TPTP directory        = /home/2/tptp/TPTP
Files to convert      = ../Problems/GRP/GRP001-1.p
Size                  =
Transformation        = lr+rm_equality:stfp
Format to convert to  = setheo:sign
Output directory      = /home/tptp/tmp
---------------------------------------------------------------------
Made the directory /home/tptp/tmp/GRP

... Lots of informational output omitted here for brevity

{/home/2/tptp/TPTP/TPTP2X/tptp2X.main compiled, 12080 msec 785600 bytes}

yes

yes
GRP001-1 -> /home/tptp/tmp/GRP/GRP001-1+lr+rm_eq_stfp.lop
~/TPTP/TPTP2X>
```
This run converts the problem GRP001-1 to a SETHEO format file. The literals are reversed and all equality axioms except reflexivity are removed. The top level output directory is specified as ~tptp/tmp, below which the subdirectory GRP is made. The output file is named GRP001-1+lr+rm_eq_stfp.lop. Verbose mode is used, so all informational output is given.

 The following subsections are only of interest to real Prolog users. If you do not want to use Prolog directly, skip to the next section.

#### Running TPTP2X from within Prolog

The TPTP2X utility may also be run from within a Prolog interpreter. The tptp2X.main file has to be loaded, and the entry point is then tptp2X/5, in the form:

?-tptp2X(<TPTPFile>,<Size>,<Transform>,<Format>,<Directory>).

• <TPTPFile> is the name of a single TPTP file. If the file name is not absolute, then it is considered to be relative to the directory specified in the tptp_directory/1 fact in the tptp2X.config file (!). If the file name is user (!), then input is taken from standard input.
• <Size> is either an integer, a : separated string of <Size>s, a <Low>..<High> integer size range, or a Prolog list of <Size>s (!). Each <Size> in a Prolog list of <Size>s is used to generate separate sets of formulae.
• <Transform> is either a single transformation specifier, a + separated string of <Transform>s, or a Prolog list (!) of <Transform>s.
• <Format> is an output format or a Prolog list (!) of output formats. An output file is written for each output format specified. For the otter format, the syntax is otter:<SoS>:[<Otter options>] (!), i.e., the Otter options form a Prolog list.
• <Directory> specifies the directory in which the output files are to be placed. If the <Directory> is user (!) then output is sent to standard output.
Example
```~/TPTP/TPTP2X> sicstus
SICStus 3  #6: Mon Nov 3 18:32:08 MET 1997
| ?- ['tptp2X.main'].
{consulting /home/2/tptp/TPTP/TPTP2X/tptp2X.main...}

... Lots of informational output omitted here for brevity

{/home/tptp/TPTP/TPTP2X/tptp2X.main consulted, 10750 msec 773088 bytes}

yes
| ?- tptp2X('Generators/SYN010-1.g',3:[2,4],[lr,cr]+magic,kif,'.').
{compiling /home/tptp/TPTP/Generators/SYN010-1.g...}
{/home/tptp/TPTP/Generators/SYN010-1.g compiled, 90 msec 4288 bytes}
SYN010-1 -> ./SYN010-1.003:002+lr+nhms.kif
SYN010-1 -> ./SYN010-1.003:002+cr+nhms.kif
SYN010-1 -> ./SYN010-1.003:004+lr+nhms.kif
SYN010-1 -> ./SYN010-1.003:004+cr+nhms.kif

yes
| ?- ^D
~/TPTP/TPTP2X>
```
This run generates two sizes of the generic problem SYN010-1, and does two transformation sequences on each of the two sets of clauses, to produce four output files. The sizes are 3:2 and 3:4. The first transformation sequence is literal order reversal followed by the non-Horn magic set transformation, and the second transformation sequence is clause order reversal followed by the non-Horn magic set transformation. The files are output in KIF format in the current directory. The file names are
SYN010-1.003:002+lr+nhms.kif, SYN010-1.003:002+cr+nhms.kif,
SYN010-1.003:004+lr+nhms.kif, and SYN010-1.003:004+cr+nhms.kif.

Note that the TPTP file name is quoted in the query, to form a Prolog atom.

#### Writing your own Transformations and Output Formats

The transformations and output formatting are implemented in Prolog, in the files transform.<TRAN> and format.<ATP>, respectively. It is simple to add new transformations and output formats to the TPTP2X utility, by creating new transformation and format files. New files should follow the structure of the existing files. Typically, a new file can be created by modifying a copy of one of the existing files.

The entry point in a transformation file is <Transform>/6. The first three arguments are inputs: a list of the problem's formulae, the variable dictionary (a bit complicated), and the transformation specification. The next three arguments are outputs: the transformed formulae, the transformed variable dictionary (typically the same as the input dictionary), and the transformation suffix. As well as the <Transform>/6 entry point, a <Transform>_file_information/2 fact must be provided. The two arguments of the <Transform>_file_information/2 fact are the atom transform and a description of the possible transformation specifications (as used in the third argument of <Transform>/6). See the existing transform.<TRAN> files for examples.

The entry point in a format file is <Format>/3. The three arguments are inputs: the format specification, a list of the problem's formulae, and the file header information. (It is not necessary to output a file header here; this information is available only for producing supplementary documentation.) As well as the <Format>/3 entry point, a <Format>_format_information/2 fact and a <Format>_file_information/2 fact must be provided. The two arguments of the <Format>_format_information/2 fact are a character that can be used to start a comment in the output file and the format extension, both as Prolog atoms. The two arguments of the <Format>_file_information/2 fact are the atom format and a description of the possible format specifications. See the existing format.<ATP> files for examples.

New transformation and format files must be compiled in with the other TPTP2X files. This is done by adding a compile query in the tptp2X.main file, alongside the queries that compile in the existing files.

#### Writing your own Problem Generators

The TPTP generators are implemented in Prolog. It is simple to write new generators. New files should follow the structure of the existing files.

The entry point in a generator file is <Problem name>/3, where <Problem name> is the file name without the .g suffix. The first argument is an input: the size parameter for generation. The second and third arguments are outputs: the formulae generated and the % Status of the formulae. The formulae must be a Prolog list of formulae in TPTP format. A <Problem name>_file_information/3 fact must also be provided. The three arguments of the fact are the atom generator, a description of the possible size parameters, and the TPTP size for this problem (this is hard to determine!). See the existing generator files for examples.

### The TPTP4X Utility

The TPTP4X utility is a multi-functional utility for reformatting, transforming, and generating TPTP problem files. It is the successor to the TPTP2X utility, and offers some of the same functionality, and some extra functionality. TPTP4X is written in C, and is thus faster than TPTP2X. It is distributed with the TPTP as a Linux 32 bit binary. Source code is available on request.

The TPTP4X usage is: tptp4X <options> <files>. The options are (defaults in ()s):

• -q<quietness> - control amount of output (1)
• -d<directory> - output directory (stdout)
• -f<format> - output format (tptp)
• tptp - long tptp format
• tptp:short - short tptp format
• oldtptp - old tptp format
• -t<transform> - transform the formulae (none)
• add_equality - adds all axioms of equality
• add_equality:rstfp - adds selected axioms of equality
• fofify -V - make universally quantified fof
• fofify:! -V - universally quantify fof
• noint - rename integers to constants
• aritize - make function and predicate names unique by arity
• dequote - make function and predicate names unquoted
• numbernamesN - add N digit extension to formula names
• uniquenamesN - add N digit extension to duplicate formula names
• randomize - randomize formulae and their order
• -u<user> - user type (human)
• human - pretty printed with indenting
• machine - one line per formula
• -V - allow free variables (no)
• -N - allow duplicate formula names (no)
• -x - expand includes (no)
• -c - clean away non-logicals (no)
• -w - warnings (no)
• -h - print this help

### The TPTP2T Script

The TPTP2T script selects lines from the ProblemAndSolutionStatistics file, by problem and solution characteristics.

The TPTP2T script is written in Perl.

#### Installation

Installation of the TPTP2T utility requires simple changes in the tptp2T script. These changes can be made by running tptp2T_install, which will prompt for required information.

#### Using TPTP2T

The use of this script is:

```tptp2T [-f FileName] [-q1 or -q2 or -q3] [-pp or -ps or -pps] {[-]Constraint {[and/or] [-]Constraint}}
```

• The optional -f <ProblemFile> flag specifies the name of a file containing TPTP problem names. tptp2T will select statistics file lines for only those problems whose names appear in the <ProblemFile>. The problem names (without the .p extension) must appear one per line in the <ProblemFile>, and lines that start with # are ignored.
• The optional -q flag sets quietness: 1=continuous update, 2=final count (default), 3=quiet.
• The optional -p flag indicates what output is required. -pp prints problem lines, -ps prints solution lines, -pps prints both. (Defaults: only problem constraints = -pp, only solution constraints = -ps, both types of constraints = -pps)
The Constraints specify required problem and solution characteristics that must be satisfied for the statistics file line(s) to be selected. There are two sets of constraints, problem constraints and solution constraints, which apply to problem lines and solution lines separately. For a solution line to be printed, the problem line it accompanies must pass all problem constraints, if any are provided. Different sets of solution constraints may be applied to different systems individually.
• An optional - negates the meaning of any constraint.
• or allows for logical or between constraints.
• and allows for logical and between constraints.
• A space between constraints is treated as an and.
• { } allow for grouping of terms.
• For constraints in which an upper and lower bound are required, a dash (-) may be used to indicate don't care.
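As an illustration of how constraints combine (a hypothetical command, using constraint names defined below), the following selects problem lines for theorems that are either quite hard or do not use equality:

```
prompt> tptp2T -q2 -pp Status Theorem and { Rating 0.90 - or -Equality }
```

Here the dash after 0.90 leaves the upper Rating bound unconstrained, and the - before Equality negates that constraint.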

A problem Constraint is selected from:

• Form ???
One of THF, TFF, FOF, CNF, ANY. Default is ANY.
• Version Ver
One of Standard, Incomplete, Augmented, Especial, Biased.
• Status SZSStatus
One of Theorem, CounterSatisfiable, Unsatisfiable, Satisfiable, Unknown, Open.
• Rating Min Max
In the range 0.00 (easiest) to 1.00 (hardest)
• Formulae Min Max (THF, TFF, FOF)
The number of formulae in the problem.
• Clauses Min Max (CNF)
The number of clauses in the problem.
• NonHorn (CNF)
Only non-Horn CNF problems.
• NonHornClauses Min Max (CNF)
The number of non-Horn clauses in a CNF problem.
• UnitFormulae Min Max (THF, TFF, FOF)
The number of unit formulae in a THF/TFF/FOF problem.
• UnitClauses Min Max (CNF)
The number of unit clauses in a CNF problem.
• RangeRestricted (CNF)
Only range restricted CNF problems.
• RRClauses Min Max (CNF)
The number of range restricted clauses in a CNF problem.
• Atoms Min Max
The number of atoms.
• Equality
Only problems that use equality.
• EqualityAtoms Min Max
The number of equality atoms.
• PureEquality
Only problems that use only equality.
• UnitEquality
Only problems with only unit-equality clauses/formulae.
• Arithmetic (THF and TFF)
Only THF and TFF problems that use arithmetic.
• ArithmeticSymbols Min Max (THF and TFF)
The number of distinct arithmetic symbols in THF and TFF problems.
• Symbols Min Max (THF)
The number of distinct symbols in THF problems.
• MinimalSymbolArity Min Max (THF)
The minimal symbol arity in THF problems.
• MaximalSymbolArity Min Max (THF)
The maximal symbol arity in THF problems.
• Predicates Min Max (TFF, FOF, CNF)
The number of predicates in TFF/FOF/CNF problems.
• Propositional (TFF, FOF, CNF)
Only propositional TFF/FOF/CNF problems.
• MinimalPredicateArity Min Max (TFF, FOF, CNF)
The minimal predicate arity in TFF/FOF/CNF problems.
• MaximalPredicateArity Min Max (TFF, FOF, CNF)
The maximal predicate arity in TFF/FOF/CNF problems.
• Variables
Only problems with variables.
• Functions Min Max (TFF, FOF, CNF)
The number of functions in TFF/FOF/CNF problems.
• MinimalFunctionArity Min Max (TFF, FOF, CNF)
The minimal function arity in TFF/FOF/CNF problems.
• MaximalFunctionArity Min Max (TFF, FOF, CNF)
The maximal function arity in TFF/FOF/CNF problems.
• AverageLiterals Min Max (CNF)
The average number of literals per clause in CNF problems.
• Domains ALG ANA ... TOP
Only the listed problem domains.
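For example, a hypothetical selection of harder non-Horn group-theory clause sets might be (constraint names as listed above; GRP is the group theory domain):

```
prompt> tptp2T -q2 -pp Form CNF and NonHorn and Rating 0.50 - and Domains GRP
```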

A solution Constraint is selected from:

• System Name[---Version]
The ATP system which found a solution (ANY for any system).
• Result SZSStatus (Any SZS value, e.g., THM)
Solutions with this result, or a result lower in the SZS ontology.
• ResultTime Min Max
Solutions found in this time range.
• Output SZSDataform (Any SZS value, e.g., Ref)
Solutions with this output, or an output lower in the SZS ontology.
• SolutionFormulae Min Max
The number of formulae in the solution.
• SolutionClauses Min Max
The number of clauses in the solution.
• SolutionDepth Min Max
The depth of the solution (for DAGs, from an axiom to a root).
• SolutionLeaves Min Max
The number of leaves in the solution (for DAGs).
• Equality
Only solutions that use equality.
• SolutionEqualityAtoms Min Max
The number of equality atoms in the solution.
• PureEquality
Only solutions that use only equality.
• Selectivity Min Max
The ratio of solution leaves to problem formulae.
• Girth Min Max
The ratio of solution leaves to solution depth.
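For example, a hypothetical query for solution lines reporting theorem proofs found within 10 seconds, by any system (constraint names as listed above):

```
prompt> tptp2T -q2 -ps System ANY and Result THM and ResultTime - 10
```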

## Getting and Using the TPTP

### Quick Installation Guide

This section explains how to obtain and install the TPTP, and how to syntax-convert the TPTP problems.

#### Obtaining the TPTP

The distribution consists of two files:
• ReadMe-v6.0.0 contains an overview of the TPTP.
• TPTP-v6.0.0.tgz (243.0 MByte, 2.3 GByte unpacked) contains the library (including a copy of the ReadMe-v6.0.0 file).
There might also be a file called BuggedProblems-v6.0.0, containing a list of files in the current release that have been found to contain errors (bugs discovered after the release was distributed).

#### Installing the TPTP

prompt> tar xzf TPTP-v6.0.0.tgz

prompt> cd TPTP-v6.0.0

prompt> ./Scripts/tptp2T_install
... the script will then ask for required information

prompt> ./TPTP2X/tptp2X_install
... the script will then ask for required information

If you do not have a Prolog system installed, you need to install one first. Both utilities can be installed in a default configuration by appending a -default flag to the install command.
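For example, to accept the default configuration for both utilities:

```
prompt> ./Scripts/tptp2T_install -default
prompt> ./TPTP2X/tptp2X_install -default
```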

#### Converting Problems to Other Syntax

```
prompt> ./TPTP2X/tptp2X <desired_syntax>
```
As <desired_syntax>, choose any one of those listed in Section Using tptp2X.

Some of the formats cannot cope with certain types of problems, e.g., the dimacs format is only for propositional CNF problems. The tptp format simply expands include directives (see Section Problem Presentation) in problems, producing files in the TPTP Prolog-style syntax. tptp2X offers much more than this: see Section The tptp2X Utility for a detailed description of the utility, including information on how to produce output in your own syntax.
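For example, following the usage shown above, the library's problems can be regenerated in TPTP syntax, with include directives expanded:

```
prompt> ./TPTP2X/tptp2X tptp
```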

### Important: Using the TPTP

Appropriate use of the TPTP allows developers and users to meaningfully evaluate ATP systems. To that end, the following guidelines for using the TPTP and presenting results should be followed.
• The TPTP release number must be stated.
• Each problem must be referenced by its unambiguous syntactic name.
• The problem formulae should, as far as is possible, not be changed in any way. Any changes made (addition, removal, reordering, reformatting, etc.) must be explicitly noted.
• Any information given to the ATP system, other than that in the formulae, must be explicitly noted. All system switches and settings must be recorded. The header information in each problem may not be used by the ATP system without explicit notice.
Abiding by these conditions allows unambiguous identification of the problem, the formulae used, and any further input to the ATP system. If you follow these guidelines, please make that clear in any presentation of your results, with an explicit statement. We propose that you state "These results were obtained in compliance with the guidelines for use of the TPTP". Such a statement assures ATP researchers of your awareness of the guidelines; conversely, it will become clear when the guidelines may have been ignored.

Our contact addresses are given in the title.

## Conclusion

### Current Activities

The collection of more problems is continuing. Everybody is invited to submit problems for inclusion. Please contact us for details if you would like to contribute.

### Further Plans

We have several short and long term plans for further development of the TPTP. The main ideas are listed here.

• Other logics.
The TPTP may be extended to include problems expressed in non-classical logics.

### Acknowledgements

We are indebted to the following people and organizations who have helped with the construction of the TPTP.

For contributing problems: Argonne National Laboratory, AR Research Group at TU München, Negin Arhami, Alessandro Armando, Rob Arthan, Jaroslav Barta, Peter Baumgartner, Bjoern Pelzer, Dan Benanav, Christoph Benzmueller, Marc Bezem, Jasmin Blanchette, Woody Bledsoe, Conrad Bock, Harald Boley, Johan Bos, Marc Boule, Chad Brown, Cristóbal Camarero, Vinay Chaudri, Koen Claessen, Simon Colton, Simon Cruanes, Gerard de Melo, Louise Dennis, Alexander Dvorsky, Christian Fermüller, Jean-Christophe Filliâtre, Bernd Fischer, Paul Fodor, Deepak Garg, Tim Geisler, Keith Goolsbey, Sylvia Grewe, Jay Halcomb, John Harrison, Lifeng He, Stefan Hetzl, Thomas Hillenbrand, Tim Hinrichs, Peter Hoefner, Arjen Hommersom, Ullrich Hustadt, Thomas Jech, Cezary Kaliszyk, Mark Kaminski, Zurab Khasidashvili, Daniel Kuehlwein, Boris Konev, Laura Kovacs, Ivan Kossey, Tobias Kuhn, Lars Kulik, Rob Lewis, Meng Luan, Bill McCune, Andreas Meier, Jia Meng, Johan Martensson, Takuya Matsuzaki, Juan Antonio Navarro Perez, Georg Neis, Jens Otten, Andrei Paskevich, Dominique Pastre, Adam Pease, David Plaisted, Larry Paulson, Art Quaife, Florian Rabe, Silvio Ranise, Thomas Raths, Pace Reagan-Smith, Martin Riener, Philipp Rümmer, Alberto Segre, Renate Schmidt, Michael Schneider, Stefan Schulz, Martina Seidl, Ted Sider, John Slaney, David Stanovsky, Graham Steel, Mark Stickel, Nik Sultana, Hans Svensson, Tanel Tammet, Steven Trac, Josef Urban, Bob Veroff, Uwe Waldmann, Peter Watson, Christoph Weidenbach, Christoph Wernhard, Michael Wessel, Patrick Wischnewski, Thomas Wies, Bruno Woltzenlogel Paleo, Aparna Yerikalapudi, Jian Zhang.

For helping with problems and/or pointing out errors: Geoff Alexander, Johan Belinfante, Nikolaj Bjorner, Woody Bledsoe, Maria Poala Bonacina, Heng Chu, Koen Claessen, Ingo Dahn, Morgan Deters, Damien Doligez, Alexander Fuchs, Matthias Fuchs, Tim Geisler, John Harrison, Thomas Jech, Konstantin Korovin, Ivan Kossey, Harald Ganzinger, Mark Lemay, Reinhold Letz, Ann Lilliestrom, Thomas Ludwig, Klaus Mayr, Bill McCune, Monty Newborn, Xumin Nie, Dominique Pastre, Jeff Pelletier, Petr Pudlak, Art Quaife, Dimitris Raptis, Michael Raskin, Thomas Raths, Piotr Rudnicki, Michael Schneider, Len Schubert, Alberto Segre, Stuart Shapiro, Guy Shefner, Andrés Sicard-Ramírez, Nick Siegel, John Slaney, Nick Smallbone, Mark Stickel, Christoph Sticksel, Martin Suda, Nik Sultana, Bob Veroff, Russell Wallace, TC Wang, Christoph Weidenbach, Hantao Zhang.

For support regarding the utilities: Jesse Alma, Stefan Berghofer, Marc Bezem, Lucas Dixon, Bernd Fischer, Alexander Fuchs, Kal Hodgson, Jose Morales, Max Moser, Gerd Neugebauer, Petr Pudlak, Alex Roederer, Paul Tarau, Frank Theiss, Abdul Sattar, Renate Schmidt, Andrés Sicard-Ramírez, Jing Tang, Andrei Tchaltsev, Christoph Weidenbach, Makarius Wenzel, Jürgen Zimmer.

For general advice and comments: Serge Autexier, Jasmin Blanchette, Reiner Hähnle, Joe Hurd, Reinhold Letz, Selene Makarios, Diego Páez, Andrei Paskevich, Jeff Pelletier, Florian Rabe, Stephan Schulz, Mark Stickel, Allen van Gelder, Uwe Waldmann, Russell Wallace.