- Darwin 1.2
- DCTP 10.21p (CASC-J2 EPR division winner)
- E and EP 0.9
- Equinox 1.0
- Gandalf c-2.6-SAT (CASC-J2 SAT division, assurance class, winner)
- Mace2 2.2
- Mace4 2004-D
- MathServ 0.62
- MUSCADET 2.5
- Octopus JN05
- Otter 3.3
- Paradox 1.0 (CASC-J2 SAT division, model class, winner)
- Paradox 1.3
- Prover9 July-2005
- THEO JN05
- Vampire 7.0 (CASC-J2 MIX and FOF divisions, assurance and proof classes, winner)
- Vampire 8.0
- Waldmeister 704 (CASC-J2 UEQ Division winner)
- Wgandalf 0.1

The current version of Darwin implements first-order versions of unit
propagation inference rules analogously to a restricted form of unit
resolution and subsumption by unit clauses. To retain completeness,
it includes a first-order version of the (binary) propositional
splitting inference rule.

Proof search in Darwin starts with a default interpretation for a given clause set, which is evolved until it becomes a model of the clause set or a refutation is found.

Darwin is implemented in OCaml and has been tested under various Linux distributions (compiled but untested on FreeBSD, MacOS X, Windows). It is available from:

http://goedel.cs.uiowa.edu/Darwin/

Darwin traverses the search space by iterative deepening over the term depth of candidate literals. Darwin employs a uniform search strategy for all problem classes.
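The iterative-deepening loop can be sketched as follows; `search_up_to_depth` is a hypothetical stand-in for Darwin's core Model Evolution search, and the nested-tuple term encoding is invented for the example.

```python
# Hypothetical sketch of iterative deepening over the term depth of
# candidate literals; names and encodings are illustrative only.
def term_depth(term):
    """Depth of a term given as a nested tuple, e.g. ('f', ('g', 'x'))."""
    if isinstance(term, str):                        # variable or constant
        return 1
    return 1 + max((term_depth(arg) for arg in term[1:]), default=0)

def solve(clauses, search_up_to_depth, max_depth=10):
    """Retry the depth-bounded core search with an increasing limit."""
    for depth in range(1, max_depth + 1):
        result = search_up_to_depth(clauses, depth)  # 'model', 'refutation' or None
        if result is not None:
            return result, depth
    return None, max_depth
```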

Darwin is a first implementation of the Model Evolution calculus. We expect its performance to be strong in the EPR divisions; we anticipate performance below average in the MIX division, and weak performance in the SAT division.

Technische Universität München, Germany

DCTP 1.31 [Ste02a] is an automated theorem prover
for first order clause logic.
It is an implementation of the disconnection calculus described in
[Bil96,LS01,Ste02b].
The disconnection calculus is a proof confluent and inherently
cut-free tableau calculus with a weak connectedness condition.
The inherently depth-first proof search is guided by a literal selection
based on literal instantiatedness or literal complexity and a heavily
parameterised link selection.
The pruning mechanisms mostly rely on different forms of
*variant deletion* and *unit based strategies*.
Additionally the calculus has been augmented by full tableau pruning.

DCTP 10.21p is a strategy parallel version using the technology of E-SETHEO [SW99] to combine several different strategies based on DCTP 1.31.

DCTP 1.31 has been implemented as a monolithic system in the Bigloo dialect of the Scheme language. The most important data structures are perfect discrimination trees, which are used in many variations. DCTP 10.21p has been derived from the Perl implementation of E-SETHEO and includes DCTP 1.31 as well as the E prover as its CNF converter. Both versions run under Solaris and Linux.

We are currently integrating a range of new techniques into DCTP which are mostly based on the results described in [LS04], as well as a certain form of unit propagation. We are hopeful that these improvements will be ready in time for CASC-J2.

DCTP 1.31 is a single strategy prover. Individual strategies are started by DCTP 10.21p using the schedule based resource allocation scheme known from the E-SETHEO system. Of course, different schedules have been precomputed for the syntactic problem classes. The problem classes are more or less identical with the sub-classes of the competition organisers. Again, we have no idea whether or not this conflicts with the organisers' tuning restrictions.

DCTP 10.21p is the CASC-J2 EPR division winner.

Technische Universität München, Germany, and ITC/irst, Italy

E is based on the DISCOUNT-loop variant of the *given-clause*
algorithm, i.e. a strict separation of active and passive facts.
Proof search in E is primarily controlled by a literal selection
strategy, a clause evaluation heuristic, and a simplification
ordering. The prover supports a large number of preprogrammed literal
selection strategies, many of which are only experimental. Clause
evaluation heuristics can be constructed on the fly by combining
various parameterized primitive evaluation functions, or can be
selected from a set of predefined heuristics. Supported term orderings
are several parameterized instances of Knuth-Bendix-Ordering (KBO) and
Lexicographic Path Ordering (LPO).

The prover uses a preprocessing step to convert formulas in full first order format to clause normal form. Preprocessing also unfolds equational definitions and performs some simplifications on the clause level.

The automatic mode selects literal selection strategy, term ordering, and search heuristic based on simple problem characteristics of the preprocessed clausal problem.

EP 0.9 is just a combination of E 0.9 in verbose mode and a proof analysis tool extracting the used inference steps.

The program has been successfully installed under SunOS 4.3.x, Solaris 2.x, HP-UX B 10.20, MacOS-X, and various versions of Linux. Sources of the latest released version are available freely from:

http://www.eprover.org

EP 0.9 is a simple Bourne shell script calling E and the postprocessor in a pipeline.

E distinguishes problem classes based on a number of features, all of which have between 2 and 4 possible values. The most important ones are:

- Is the most general non-negative clause unit, Horn, or Non-Horn?
- Is the most general negative clause unit or non-unit?
- Are all negative clauses unit clauses?
- Are all literals equality literals, are some literals equality literals, or is the problem non-equational?
- Are there a few, some, or many clauses in the problem?
- Is the maximum arity of any function symbol 0, 1, 2, or greater?
- Is the sum of function symbol arities in the signature small, medium, or large?

For classes above a threshold size, we assign the absolute best heuristic to the class. For smaller, non-empty classes, we assign the globally best heuristic that solves the same number of problems on this class as the best heuristic on this class does. Empty classes are assigned the globally best heuristic. Typically, most selected heuristics are assigned to more than one class.
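The class-based assignment scheme just described can be sketched like this; the data layout, threshold, and heuristic names are invented for the example.

```python
# Illustrative sketch of assigning a search heuristic to each problem
# class, as described above; all data here is made up for the example.
def assign_heuristics(solved, classes, global_rank, threshold):
    """solved[h][c]: problems heuristic h solves in class c.
    global_rank: heuristics ordered globally best-first.
    classes: class name -> number of problems in the class."""
    assignment = {}
    for c, size in classes.items():
        if size == 0:
            assignment[c] = global_rank[0]       # empty class: globally best
            continue
        best_count = max(solved[h].get(c, 0) for h in global_rank)
        if size > threshold:
            # large class: the heuristic that is absolutely best on it
            assignment[c] = max(global_rank, key=lambda h: solved[h].get(c, 0))
        else:
            # small class: globally best heuristic matching the class optimum
            assignment[c] = next(h for h in global_rank
                                 if solved[h].get(c, 0) == best_count)
    return assignment
```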

EP 0.9 will be hampered by the fact that it has to analyse the inference step listing, an operation that typically is about as expensive as the proof search itself. Nevertheless, it should be competitive among the MIX and FOF proof class systems.

Chalmers University of Technology, Sweden

1. Give all ground clauses in the problem to a SAT solver.
2. Run the SAT solver.
3. If a contradiction is found, we have a proof and we terminate.
4. If a model is found, we use the model to indicate which new ground instantiations should be added to the SAT solver.
5. Go to step 2.
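The loop above can be sketched as follows; the SAT-solver and instance-generator interfaces are hypothetical stand-ins, not the system's actual API.

```python
# Minimal sketch of the ground-instantiation loop described above.
# `sat_solve` and `new_instances` are assumed interfaces.
def instantiation_loop(first_order_clauses, initial_ground, sat_solve,
                       new_instances, max_rounds=100):
    ground = list(initial_ground)                # step 1: initial ground clauses
    for _ in range(max_rounds):
        status, model = sat_solve(ground)        # step 2: run the SAT solver
        if status == 'UNSAT':
            return 'UNSATISFIABLE'               # step 3: contradiction found
        fresh = new_instances(first_order_clauses, model)   # step 4
        if not fresh:                            # nothing new to instantiate:
            return 'SATISFIABLE'                 # the model lifts to the FO problem
        ground.extend(fresh)                     # step 5: go to step 2
    return 'UNKNOWN'
```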

Tallinn Technical University, Estonia

Gandalf [Tam97,Tam98] is a family of automated theorem provers, including classical, type theory, intuitionistic and linear logic provers, plus a finite model builder. The version c-2.6 contains the classical logic prover for clause form input and the finite model builder. One distinguishing feature of Gandalf is that it contains a large number of different search strategies and is capable of automatically selecting suitable strategies and experimenting with these strategies.

The finite model building component of Gandalf uses the Zchaff propositional logic solver by L.Zhang [MM+01] as an external program called by Gandalf. Zchaff is not free, although it can be used freely for research purposes. Gandalf is not optimised for Zchaff or linked together with it: Zchaff can be freely replaced by other satisfiability checkers.

Gandalf is implemented in Scheme and compiled to C using the Hobbit Scheme-to-C compiler. Version scm5d6 of the Scheme interpreter scm by A.Jaffer is used as the underlying Scheme system. Zchaff is implemented in C++.

Gandalf has been tested on Linux, Solaris, and MS Windows under Cygwin.

Gandalf is available under GPL from:

http://www.ttu.ee/it/gandalf

One of the basic ideas used in Gandalf is time-slicing: Gandalf typically runs a number of searches with different strategies one after another, until either the proof is found or time runs out. Also, during each specific run Gandalf typically modifies its strategy as the time limit for this run starts coming closer. Selected clauses from unsuccessful runs are sometimes used in later runs.
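Time-slicing of this kind might be sketched as follows; the strategy interface and the per-strategy time shares are invented for the example.

```python
import time

# Illustrative sketch of time-slicing: run strategies one after another
# within a global time limit. The strategy signature is an assumption.
def time_slice(strategies, problem, total_seconds):
    """strategies: list of (strategy, share) pairs; share is the fraction
    of the remaining time the strategy may consume."""
    deadline = time.monotonic() + total_seconds
    for strategy, share in strategies:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                                 # global time limit reached
        result = strategy(problem, timeout=share * remaining)
        if result is not None:                    # proof (or model) found
            return result
    return None
```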

In the normal mode Gandalf attempts to find only unsatisfiability.
It has to be called with a `-sat` flag to find satisfiability.
The following strategies are run:

- Finite model building by incremental search through function symbol interpretations.
- Ordered binary resolution (term depth): only for problems not containing equality.
- Finite model building using MACE-style flattening and the external propositional prover.

Gandalf c-2.6-SAT is the CASC-J2 SAT division, assurance class, winner.

Universität des Saarlandes, Germany

MathServ currently integrates the ATP systems Otter 3.3, EP 0.82, SPASS 2.1 and Vampire 7.0. All ATP services get a problem description in the new TPTP format (TPTP Library v3.0.1) and a time limit in seconds as an input. The problem is transformed into the provers' input format using the tptp2X utility. The result specifies the status of the given problem with one of the statuses defined in the SZS Status Ontology [SZS04]. If the system delivered a refutation proof then the proof is translated into TSTP format (optionally in XTSTP) using tools developed by Geoff Sutcliffe.

Since MathServ does not form an ATP system on its own, it only enters the Demonstration Division of the CASC.

Université René Descartes - Paris, France

The MUSCADET theorem prover is a knowledge-based system. It is based on Natural Deduction, following the terminology of [Ble71] and [Pas78], and uses methods which resemble those used by humans. It is composed of an inference engine, which interprets and executes rules, and of one or several bases of facts, which are the internal representation of "theorems to be proved". Rules are either universal, and built into the system, or built by the system itself by metarules from data (definitions and lemmas). Rules may add new hypotheses, modify the conclusion, create objects, split theorems into two or more subtheorems or build new rules which are local for a (sub-)theorem.

http://www.math-info.univ-paris5.fr/~pastre/muscadet/muscadet.html

The system is also able to work with second order statements. It may also receive knowledge and know-how for a specific domain from a human user; see [Pas89] and [Pas93]. These two possibilities are not used while working with the TPTP Library.

Argonne National Laboratory, USA

Mace2 [McC01] searches for finite models of first-order (including equality) statements. Mace2 iterates through domain sizes, starting with 2. For a given domain size, a propositional satisfiability problem is constructed from the ground instances of the statements, and a DPLL procedure is applied.

Mace2 is an entirely different program from Mace4 [McC03b], in which the ground problem for a given domain size contains equality and is decided by rewriting.

Mace2 is coded in ANSI C. It uses the same code as Otter [McC03a] for parsing input, and (for the most part) accepts the same input files as Otter. Mace2 is packaged and distributed with Otter, which is available from:

http://www.mcs.anl.gov/AR/otter

Mace2 has been evolving slowly for about ten years. Two important strategies have been added recently. In 2001, a method to reduce the number of isomorphic models was added; this method is similar in spirit to the least number optimization used in rewrite-based methods, but it applies only to the first five constants. In 2003, a clause-parting method (based on the variable occurrences in literals of flattened clauses) was added to improve performance on inputs with large clauses. Although Mace2 has several experimental features, it uses one fixed strategy for CASC.

Mace2 is not expected to win any prizes, because it uses one fixed strategy, and no tuning has been done with respect to the TPTP problem library. Also, Mace2 does not accept function symbols with arity greater than 3 or predicate symbols with arity greater than 4. Overall performance, however, should be respectable.

Argonne National Laboratory, USA

Mace4 is an entirely different program from Mace2: in Mace2, the problem for a given domain size is reduced to a purely propositional SAT problem that is decided by DPLL, whereas Mace4's ground problem retains equality and is decided by rewriting.

http://www.mcs.anl.gov/AR/mace4

Mace4 is not expected to win any prizes, because it uses one fixed strategy, and no tuning has been done with respect to the TPTP problem library. Overall performance, however, should be respectable. An early version of Mace4 competed in CASC-2002 under the name ICGNS.

McGill University, Canada

Argonne National Laboratory, USA

Otter 3.3 [McC03a] is an ATP system for statements in first-order (unsorted) logic with equality. Otter is based on resolution and paramodulation applied to clauses. An Otter search uses the "given clause algorithm", and typically involves a large database of clauses; subsumption and demodulation play an important role.

Otter is written in C. Otter uses shared data structures for clauses and terms, and it uses indexing for resolution, paramodulation, forward and backward subsumption, forward and backward demodulation, and unit conflict. Otter is available from:

http://www-unix.mcs.anl.gov/AR/otter/

Otter's original automatic mode, which reflects no tuning to the TPTP problems, will be used.

Otter has been entered into CASC-J2 as a stable benchmark against which progress can be judged (there have been only minor changes to Otter since 1996 [MW97], nothing that really affects its performance in CASC). This is not an ordinary entry, and we do not hope for Otter to do well in the competition.

*Acknowledgments: Ross Overbeek, Larry Wos, Bob Veroff, and
Rusty Lusk contributed to the development of Otter.*

Chalmers University of Technology and Gothenburg University, Sweden

Paradox 1.0 [CS03] is a finite-domain model generator. It is based on a MACE-style [McC94] flattening and instantiating of the FO clauses into propositional clauses, and then the use of a SAT solver to solve the resulting problem.

Paradox incorporates the following novel features: New polynomial-time
*clause splitting heuristics*, the use of *incremental SAT*,
*static symmetry reduction* techniques, and the use of *sort
inference*.

The main part of Paradox is implemented in Haskell using the GHC compiler. Paradox also has a built-in incremental SAT solver which is written in C++. The two parts are linked together on the object level using Haskell's Foreign Function Interface. Paradox uses the following non-standard Haskell extensions: local universal type quantification and hash-consing.

There is only one strategy in Paradox:

- Analyze the problem, finding an upper bound N on the domain size of models, where N is possibly infinite. A finite such upper bound can for example be found for EPR problems.
- Flatten the problem, and split clauses and simplify as much as possible.
- Instantiate the problem for domain sizes 1 up to N, applying the SAT solver incrementally for each size. Report "SATISFIABLE" when a model is found.
- When no model of size smaller than or equal to N is found, report "CONTRADICTION".

In this way, Paradox can be used both as a model finder and as an EPR solver.
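The strategy above might be sketched as follows; `flatten`, `instantiate`, and the SAT-solver interface are hypothetical stand-ins (a real implementation reuses solver state incrementally between domain sizes).

```python
# Sketch of the Paradox-style search over increasing domain sizes.
# All three helper interfaces are assumptions of this sketch.
def paradox_loop(problem, upper_bound, flatten, instantiate, sat_solve):
    """upper_bound may be float('inf') if no finite bound was derived."""
    flat = flatten(problem)              # flatten, split clauses, simplify
    n = 1
    while n <= upper_bound:
        cnf = instantiate(flat, n)       # ground the problem for domain size n
        if sat_solve(cnf):               # one (incremental) SAT call per size
            return 'SATISFIABLE'         # a finite model of size n exists
        n += 1
    return 'CONTRADICTION'               # no model up to the derived bound
```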

Paradox 1.0 is the CASC-J2 SAT division, model class, winner.

Chalmers University of Technology and Gothenburg University, Sweden

Paradox incorporates the following features: Polynomial-time
*clause splitting heuristics*, the use of *incremental SAT*,
*static symmetry reduction* techniques, and the use of *sort
inference*.

The main differences with Paradox 1.0 are: a better SAT-solver, better memory behaviour, and a faster clause instantiation algorithm.

- Analyze the problem, finding an upper bound N on the domain size of models, where N is possibly infinite. A finite such upper bound can be found, for example, for EPR problems.
- Flatten the problem, and split clauses and simplify as much as possible.
- Instantiate the problem for domain sizes 1 up to N, applying the SAT solver incrementally for each size. Report "SATISFIABLE" when a model is found.
- When no model of size smaller than or equal to N is found, report "CONTRADICTION".

Argonne National Laboratory, U.S.A.

Prover9 provides positive ordered (and nonordered) resolution and paramodulation, negative ordered (and nonordered) resolution, factoring, positive and negative hyperresolution, UR-resolution, and demodulation (term rewriting). Terms can be ordered with LPO, RPO, or KBO. Selection of the "given clause" is by an age-weight ratio.
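Given-clause selection by an age-weight ratio can be sketched with two priority queues; the ratio value, clause representation, and class name below are invented for the example.

```python
import heapq, itertools

# Illustrative sketch of age-weight given-clause selection: pick the
# lightest clause `ratio` times, then the oldest clause once.
class GivenClauseQueue:
    def __init__(self, ratio=4):
        self.ratio, self.tick = ratio, 0
        self.counter = itertools.count()          # assigns ages
        self.by_age, self.by_weight = [], []
        self.alive = set()                        # ids not yet selected

    def add(self, clause, weight):
        age = next(self.counter)
        self.alive.add(id(clause))
        heapq.heappush(self.by_age, (age, id(clause), clause))
        heapq.heappush(self.by_weight, (weight, id(clause), clause))

    def pop(self):
        pick_age = self.tick % (self.ratio + 1) == self.ratio
        self.tick += 1
        queue = self.by_age if pick_age else self.by_weight
        while queue:
            _, key, clause = heapq.heappop(queue)
            if key in self.alive:                 # skip entries already
                self.alive.discard(key)           # taken via the other queue
                return clause
        return None                               # both queues exhausted
```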

Proofs can be given at two levels of detail: (1) standard, in which each line of the proof is a stored clause with detailed justification, and (2) expanded, with a separate line for each operation. When FOF problems are input, a proof of the transformation to clauses is not given.

Completeness is not guaranteed, so termination does not indicate satisfiability.

http://www.mcs.anl.gov/~mccune/prover9/

Given a problem, Prover9 adjusts its inference rules and strategy according to the category of the problem (HNE, HEQ, NNE, NEQ, PEQ, FNE, FEQ, UEQ) and according to several other syntactic properties of the input clauses.

Terms are ordered by LPO for demodulation and for the inference rules, with a simple rule for determining symbol precedence.

For the FOF problems, a preprocessing step attempts to reduce the problem to independent subproblems by a miniscope transformation [MINISCOPE]; if the problem reduction succeeds, each subproblem is clausified and given to the ordinary search procedure; if the problem reduction fails, the original problem is clausified and given to the search procedure.
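A toy version of such a miniscope-based problem split, on an invented tuple encoding of formulas, might look like this (only the forall-over-conjunction case is handled):

```python
# Toy illustration: universal quantifiers are pushed inside conjunctions,
# so a conjecture can split into independent subproblems.
def miniscope(f):
    if isinstance(f, tuple) and f[0] == 'forall':
        _, var, body = f
        body = miniscope(body)
        if isinstance(body, tuple) and body[0] == 'and':
            # forall x. (A & B)  =>  (forall x. A) & (forall x. B)
            return ('and',) + tuple(miniscope(('forall', var, g))
                                    for g in body[1:])
        return ('forall', var, body)
    if isinstance(f, tuple) and f[0] == 'and':
        return ('and',) + tuple(miniscope(g) for g in f[1:])
    return f                                      # atom or unhandled case

def subproblems(f):
    """Split a top-level conjunction into independently provable parts."""
    f = miniscope(f)
    return list(f[1:]) if isinstance(f, tuple) and f[0] == 'and' else [f]
```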

As this description is being written, the specific rules for deciding the strategy have not been finalized. They will be available from the URL given above by the time CASC occurs.

Finishes in the middle of the pack are anticipated in all categories in which Prover9 competes (MIX, FOF, and UEQ).

McGill University, Canada

When THEO participated in the 2004 CASC Competition, it used a learning strategy described in this paragraph. When given a theorem, it first created a list of potential ways to "weaken" the theorem by weakening one of the clauses. It then randomly selected one of the weakenings, tried to prove the weakened version of the theorem, and then used the results from this effort to help prove the given theorem. A weakened version was created by modifying one clause by replacing a constant or function by a variable or by deleting a literal. Certain clauses from the proof of the weakened version were added to the base clauses when THEO next attempted to prove the given theorem. In addition, base clauses that participated in the proof of the weakened version were placed in the set-of-support. THEO then attempted to prove the given theorem with the revised set of base clauses.

Over the last year, this learning strategy has been further developed as described here. When THEO is given a theorem, it first tries to prove a weakened version. It does this for 50% of the allotted time. If successful, it then attempts to prove the given theorem as described in the previous paragraph. If unsuccessful, however, THEO then weakens the given theorem further by weakening an additional clause. With two clauses now weakened, THEO then attempts to prove the given theorem for 50% of the remaining time (25% of the originally allotted time). If successful, THEO uses the results to attempt to prove the given theorem. If unsuccessful, THEO picks two other weakenings and tries again for 50% of the remaining time.
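The arithmetic of this time allocation is easy to check: each round of weakening attempts receives 50% of the time still remaining, i.e. 50%, 25%, 12.5%, ... of the original allotment. A sketch, with invented names:

```python
# Each weakening round gets half of whatever time is still left.
def weakening_budgets(total_seconds, rounds):
    """Per-round budgets: 50%, 25%, 12.5%, ... of the total time."""
    remaining, budgets = total_seconds, []
    for _ in range(rounds):
        slice_ = remaining * 0.5
        budgets.append(slice_)
        remaining -= slice_
    return budgets
```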

Now certain weakened proofs are thrown out, as they generally seem to be useless. Weakened proofs that are shorter than three inferences are considered useless, as are those shorter than five inferences that do not involve the negated conclusion. Attempts to improve the learning strategy are of major interest in the ongoing development of THEO. Octopus, the parallel version of THEO, uses variations of the described learning strategy.

University of Manchester, England

Vampire [RV02] 7.0 is an automatic theorem prover for first-order classical logic. Its kernel implements the calculi of ordered binary resolution and superposition for handling equality. The splitting rule and negative equality splitting are simulated by the introduction of new predicate definitions and dynamic folding of such definitions. A number of standard redundancy criteria and simplification techniques are used for pruning the search space: subsumption, tautology deletion (optionally modulo commutativity), subsumption resolution, rewriting by ordered unit equalities, basicness restrictions and irreducibility of substitution terms. The reduction orderings used are the standard Knuth-Bendix ordering and a special non-recursive version of the Knuth-Bendix ordering.

A number of efficient indexing techniques are used to implement all major operations on sets of terms and clauses. Run-time algorithm specialisation is used to accelerate some costly operations, e.g., checks of ordering constraints.

Although the kernel of the system works only with clausal normal forms, the preprocessor component accepts a problem in the full first-order logic syntax, clausifies it and performs a number of useful transformations before passing the result to the kernel. When a theorem is proven, the system produces a verifiable proof, which validates both the clausification phase and the refutation of the CNF. The current release features a built-in proof checker for the clausifying phase, which will be extended to check complete proofs.
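The simulation of splitting by naming can be illustrated on a toy clause representation (invented here): a clause whose two parts share no variables is replaced by two clauses linked through a fresh predicate symbol.

```python
import itertools

# Toy sketch of splitting simulated by naming. Literals are
# (positive, predicate, variables) triples; the encoding is invented,
# and only contiguous partitions of the clause are tried here.
_fresh = itertools.count()

def variables(literals):
    return set().union(*(set(vs) for _, _, vs in literals))

def split(clause):
    """Return [part1 + [p], part2 + [~p]] for a variable-disjoint split."""
    for i in range(1, len(clause)):
        part1, part2 = clause[:i], clause[i:]
        if variables(part1).isdisjoint(variables(part2)):
            name = 'sp%d' % next(_fresh)          # fresh predicate definition
            return [part1 + [(True, name, ())],
                    part2 + [(False, name, ())]]
    return [clause]                               # no splittable partition
```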

Vampire 7.0 is implemented in C++. The supported compilers are gcc 3.2.x, gcc 3.3.x, and Microsoft Visual C++. This version has been successfully compiled for Linux, but has not been fully tested on Solaris and Win32. It is available (conditions apply) from:

http://www.cs.man.ac.uk/~riazanoa/Vampire

The Vampire kernel provides a fairly large number of features for strategy selection. The most important ones are:

- Choice of the main saturation procedure: (i) OTTER loop, with or without the Limited Resource Strategy, (ii) DISCOUNT loop.
- A variety of optional simplifications.
- Parameterised reduction orderings.
- A number of built-in literal selection functions and different modes of comparing literals.
- Age-weight ratio that specifies how strongly lighter clauses are preferred for inference selection.
- Set-of-support strategy.

The automatic mode of Vampire 7.0 is derived from extensive experimental data obtained on problems from TPTP v2.6.0. Input problems are classified taking into account simple syntactic properties, such as being Horn or non-Horn, presence of equality, etc. Additionally, we take into account the presence of some important kinds of axioms, such as set theory axioms, associativity and commutativity. Every class of problems is assigned a fixed schedule consisting of a number of kernel strategies called one by one with different time limits.

University of Manchester, England

A number of efficient indexing techniques are used to implement all major operations on sets of terms and clauses. Run-time algorithm specialisation is used to accelerate some costly operations, e.g., checks of ordering constraints. Although the kernel of the system works only with clausal normal forms, the shell accepts a problem in the full first-order logic syntax, clausifies it and performs a number of useful transformations before passing the result to the kernel. When a theorem is proved, the system produces a verifiable proof, which validates both the clausification phase and the refutation of the CNF.

- Choice of the main saturation procedure:
  - Limited Resource Strategy
  - DISCOUNT loop
  - Otter loop

- A variety of optional simplifications.
- Parameterised reduction orderings.
- A number of built-in literal selection functions and different modes of comparing literals.
- Age-weight ratio that specifies how strongly lighter clauses are preferred for inference selection.
- Set-of-support strategy.

The automatic mode of Vampire 8.0 is derived from extensive experimental data obtained on problems from TPTP v3.0.1. Input problems are classified taking into account simple syntactic properties, such as being Horn or non-Horn, presence of equality, etc. Additionally, we take into account the presence of some important kinds of axiom, such as set theory axioms, associativity and commutativity. Every class of problems is assigned a fixed schedule consisting of a number of kernel strategies called one by one with different time limits.

The main differences between Vampire 8.0 and 7.0 are:

- a naming technique for a short clause form transformation
- a better Skolemisation algorithm
- the new TPTP input syntax as well as the KIF syntax
- possibility of working with multiple knowledge bases
- a query answering mode
- lexicographic path ordering

Waldmeister 704 is a system for unit equational deduction. Its theoretical basis is unfailing completion in the sense of [BDP89] with refinements towards ordered completion (cf. [AHL03]). The system saturates the input axiomatization, distinguishing active facts, which induce a rewrite relation, and passive facts, which are the one-step conclusions of the active ones up to redundancy. The saturation process is parameterized by a reduction ordering and a heuristic assessment of passive facts [HJL99]. For an in-depth description of the system, see [Hil03].

Waldmeister 704 improves over last year's version in several respects. Firstly, the detection of redundancies in the presence of associative-commutative operators has been strengthened (cf. [Loe04]). In a class of AC-equivalent equations, an element is redundant if each of its ground instances can be rewritten, with the ground convergent rewrite system for AC, into an instance of another element. Instead of elaborately checking this kind of reducibility explicitly, it can be rephrased in terms of ordering constraints and be efficiently approximated with a polynomial test. Secondly, the last teething troubles of the implementation of the Waldmeister loop have been overcome. Thirdly, a number of strategies have been slightly revised.

The prover is coded in ANSI-C. It runs on Solaris, Linux, and newly also on MacOS X. In addition, it is now available for Windows users via the Cygwin platform. The central data structures are: perfect discrimination trees for the active facts; group-wise compressions for the passive ones; and sets of rewrite successors for the conjectures. Visit the Waldmeister web pages at:

http://www.waldmeister.org

The approach taken to control the proof search is to choose the search
parameters according to the algebraic structure given in the problem
specification [HJL99].
This is based on the observation that proof tasks sharing major parts of
their axiomatization often behave similarly.
Hence, for a number of domains, the influence of different reduction orderings
and heuristic assessments has been analyzed experimentally; and in most cases
it has been possible to distinguish a strategy uniformly superior on the whole
domain.
In essence, every such strategy consists of an instantiation of the first
parameter to a Knuth-Bendix ordering or to a lexicographic path ordering, and
an instantiation of the second parameter to one of the weighting functions
*addweight*, *gtweight*, or *mixweight*, which, if called on an equation
*s* = *t*, return |*s*| + |*t*|, |max_>(*s*, *t*)|, or
|max_>(*s*, *t*)| · (|*s*| + |*t*| + 1) + |*s*| + |*t*|, respectively,
where |*s*| denotes the number of symbols in *s*.
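The three weighting functions can be transcribed directly; the nested-tuple term encoding and the `greater` stand-in for the reduction ordering are assumptions of this sketch.

```python
# The addweight/gtweight/mixweight functions described above, over
# terms encoded as nested tuples; `greater` models the reduction ordering.
def size(t):
    """Number of symbols |t| in a term like ('f', 'x', ('g', 'y'))."""
    if isinstance(t, str):                  # variable or constant
        return 1
    return 1 + sum(size(arg) for arg in t[1:])

def addweight(s, t, greater):
    return size(s) + size(t)                # |s| + |t|

def gtweight(s, t, greater):
    return size(s) if greater(s, t) else size(t)          # |max_>(s, t)|

def mixweight(s, t, greater):
    m = size(s) if greater(s, t) else size(t)
    return m * (size(s) + size(t) + 1) + size(s) + size(t)
```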

Waldmeister 704 is the CASC-J2 UEQ division winner.

Tallinn Technical University, Estonia

Wgandalf uses some ideas, but no code, from the earlier Gandalf system of the same author: essentially, it is a completely new system.

Wgandalf is available under GPL, and at some point in the future it will be available from:

http://www.ttu.ee/it/wgandalf

which will contain further details about the architecture, strategies and implementation.

- [AHL03] Avenhaus J., Hillenbrand T., Löchner B. (2003), **On Using Ground Joinable Equations in Equational Theorem Proving**, *Journal of Symbolic Computation* 36(1-2), pp.217-233, Elsevier Science.
- [BDP89] Bachmair L., Dershowitz N., Plaisted D.A. (1989), **Completion Without Failure**, Ait-Kaci H., Nivat M., *Resolution of Equations in Algebraic Structures*, pp.1-30, Academic Press.
- [BFT04] Baumgartner P., Fuchs A., Tinelli C. (2004), **Darwin - A Theorem Prover for the Model Evolution Calculus**, Sutcliffe G., Schulz S., Tammet T., *Proceedings of the Workshop on Empirically Successful First Order Reasoning, 2nd International Joint Conference on Automated Reasoning* (Cork, Ireland).
- [BT03] Baumgartner P., Tinelli C. (2003), **The Model Evolution Calculus**, Baader F., *Proceedings of the 19th International Conference on Automated Deduction* (Miami, USA), pp.350-364, Lecture Notes in Artificial Intelligence 2741, Springer-Verlag.
- [Bil96] Billon J-P. (1996), **The Disconnection Method: A Confluent Integration of Unification in the Analytic Framework**, Miglioli P., Moscato U., Mundici D., Ornaghi M., *Proceedings of TABLEAUX'96: the 5th Workshop on Theorem Proving with Analytic Tableaux and Related Methods* (Palermo, Italy), pp.110-126, Lecture Notes in Artificial Intelligence 1071, Springer-Verlag.
- [Ble71] Bledsoe W.W. (1971), **Splitting and Reduction Heuristics in Automatic Theorem Proving**, *Artificial Intelligence* 2, pp.55-77.
- [CS03] Claessen K., Sorensson N. (2003), **New Techniques that Improve MACE-style Finite Model Finding**, Baumgartner P., Fermueller C., *Proceedings of the CADE-19 Workshop: Model Computation - Principles, Algorithms, Applications* (Miami, USA).
- [HJL99] Hillenbrand T., Jaeger A., Löchner B. (1999), **Waldmeister - Improvements in Performance and Ease of Use**, Ganzinger H., *Proceedings of the 16th International Conference on Automated Deduction* (Trento, Italy), pp.232-236, Lecture Notes in Artificial Intelligence 1632, Springer-Verlag.
- [Hil03] Hillenbrand T. (2003), **Citius altius fortius: Lessons Learned from the Theorem Prover Waldmeister**, Dahn I., Vigneron L., *Proceedings of the 4th International Workshop on First-Order Theorem Proving* (Valencia, Spain), Electronic Notes in Theoretical Computer Science 86.1, Elsevier Science.
- [LS01] Letz R., Stenz G. (2001), **Model Elimination and Connection Tableau Procedures**, Robinson A., Voronkov A., *Handbook of Automated Reasoning*, pp.2015-2114, Elsevier Science.
- [LS04] Letz R., Stenz G. (2004), **Generalised Handling of Variables in Disconnection Tableaux**, Rusinowitch M., Basin D., *Proceedings of the 2nd International Joint Conference on Automated Reasoning* (Cork, Ireland), pp.289-306, Lecture Notes in Artificial Intelligence 3097.
- [Loe04] Löchner B. (2004), **A Redundancy Criterion Based on Ground Reducibility by Ordered Rewriting**, Rusinowitch M., Basin D., *Proceedings of the 2nd International Joint Conference on Automated Reasoning* (Cork, Ireland), pp.45-59, Lecture Notes in Artificial Intelligence 3097.
- [MarURL] Martin D. et al. (URL), **OWL-S 1.1 Release**, http://www.daml.org/services/owl-s/1.1/.
- [MW97] McCune W.W., Wos L. (1997), **Otter: The CADE-13 Competition Incarnations**, *Journal of Automated Reasoning* 18(2), pp.211-220.
- [McC94] McCune W.W. (1994), **A Davis-Putnam Program and its Application to Finite First-Order Model Search: Quasigroup Existence Problems**, ANL/MCS-TM-194, Argonne National Laboratory, Argonne, USA.
- [McC97] McCune W.W. (1997), **33 Basic Test Problems: A Practical Evaluation of Some Paramodulation Strategies**, Veroff R., *Automated Reasoning and its Applications: Essays in Honor of Larry Wos*, pp.71-114, MIT Press.
- [McC01] McCune W.W. (2001), **MACE 2.0 Reference Manual and Guide**, ANL/MCS-TM-249, Argonne National Laboratory, Argonne, USA.
- [McC03a] McCune W.W. (2003), **Otter 3.3 Reference Manual**, ANL/MCS-TM-263, Argonne National Laboratory, Argonne, USA.
- [McC03b] McCune W.W. (2003), **Mace4 Reference Manual and Guide**, ANL/MCS-TM-264, Argonne National Laboratory, Argonne, USA.
- [McCURL] McCune W.W. (URL), **LADR: Library of Automated Deduction Routines**, http://www.mcs.anl.gov/AR/ladr.
- [MM+01] Moskewicz M., Madigan C., Zhao Y., Zhang L., Malik S. (2001), **Chaff: Engineering an Efficient SAT Solver**, Blaauw D., Lavagno L., *Proceedings of the 39th Design Automation Conference* (Las Vegas, USA), pp.530-535.
- [New01] Newborn M. (2001), **Automated Theorem Proving: Theory and Practice**, Springer.
- [NW04] Newborn M., Wang Z. (2004), **Octopus: Combining Learning and Parallel Search**, *Journal of Automated Reasoning* 33(2), pp.171-218.
- [Pas78] Pastre D. (1978), **Automatic Theorem Proving in Set Theory**, *Artificial Intelligence* 10, pp.1-27.
- [Pas89] Pastre D. (1989), **MUSCADET: An Automatic Theorem Proving System using Knowledge and Metaknowledge in Mathematics**, *Artificial Intelligence* 38, pp.257-318.
- [Pas93] Pastre D. (1993), **Automated Theorem Proving in Mathematics**, *Annals of Mathematics and Artificial Intelligence* 8, pp.425-447.
- [Pas02] Pastre D. (2002), **Strong and Weak Points of the MUSCADET Theorem Prover**, *AI Communications* 15(2-3), pp.147-160.
- [RV02] Riazanov A., Voronkov A. (2002), **The Design and Implementation of Vampire**, *AI Communications* 15(2-3), pp.91-110.
- [Sch02] Schulz S. (2002), **E: A Brainiac Theorem Prover**, *AI Communications* 15(2-3), pp.111-126.
- [Sch04a] Schulz S. (2004), **System Abstract: E 0.81**, Rusinowitch M., Basin D., *Proceedings of the 2nd International Joint Conference on Automated Reasoning* (Cork, Ireland), pp.223-228, Lecture Notes in Artificial Intelligence 3097.
- [Sch04b] Schulz S. (2004), **Simple and Efficient Clause Subsumption with Feature Vector Indexing**, Sutcliffe G., Schulz S., Tammet T., *Proceedings of the Workshop on Empirically Successful First Order Reasoning, 2nd International Joint Conference on Automated Reasoning* (Cork, Ireland).
- [SW99] Stenz G., Wolf A. (1999), **E-SETHEO: Design, Configuration and Use of a Parallel Automated Theorem Prover**, Foo N., *Proceedings of AI'99: The 12th Australian Joint Conference on Artificial Intelligence* (Sydney, Australia), pp.231-243, Lecture Notes in Artificial Intelligence 1747, Springer-Verlag.
- [Ste02a] Stenz G. (2002), **DCTP 1.2 - System Abstract**, Fermüller C., Egly U., *Proceedings of TABLEAUX 2002: Automated Reasoning with Analytic Tableaux and Related Methods* (Copenhagen, Denmark), pp.335-340, Lecture Notes in Artificial Intelligence 2381, Springer-Verlag.
- [Ste02b] Stenz G. (2002), **The Disconnection Calculus**, PhD thesis, Institut für Informatik, Technische Universität München, Munich, Germany.
- [SS01] Sutcliffe G., Suttner C.B. (2001), **Evaluating General Purpose Automated Theorem Proving Systems**, *Artificial Intelligence* 131(1-2), pp.39-54.
- [SZS04] Sutcliffe G., Zimmer J., Schulz S. (2004), **TSTP Data-Exchange Formats for Automated Theorem Proving Tools**, Zhang W., Sorge V., *Distributed Constraint Problem Solving and Reasoning in Multi-Agent Systems*, pp.201-215, Frontiers in Artificial Intelligence and Applications 112, IOS Press.
- [Tam97] Tammet T. (1997), **Gandalf**, *Journal of Automated Reasoning* 18(2), pp.199-204.
- [Tam98] Tammet T. (1998), **Towards Efficient Subsumption**, Kirchner C., Kirchner H., *Proceedings of the 15th International Conference on Automated Deduction* (Lindau, Germany), pp.427-440, Lecture Notes in Artificial Intelligence 1421, Springer-Verlag.
- [Vor05] Voronkov A. (2005), **Vampire Reference Manual and User's Guide**, http://www.vampire.fm.
- [Zha96] Zhang J. (1996), **Constructing Finite Algebras with FALCON**, *Journal of Automated Reasoning* 17(1), pp.1-22.