The Beth Manual

The motivation behind the design of Beth

The strength and weakness of a traditional top-down approach can be briefly summarized as follows:
  1. strength: the generation of literals is inherently directed by the heuristic search process itself: only the set of literals that make refinements to clauses in the search beam are generated
  2. weakness: the enumeration of all possible combinations of variables generates many more literals than necessary; some of the literals generated are not even guaranteed to cover a single positive example.
The strength and weakness of traditional inverse entailment (e.g. Progol) can be briefly summarized as follows:
  1. strength: a literal is created using a ground atom describing a known positive example. The advantages are: 1) specializing with this literal results in a clause that is guaranteed to cover at least the seed example, and 2) the set of literals generated is constrained to those that satisfy 1).
  2. weakness: the bottom clause generated in saturation is exponential in size, making the hypothesis space (the subsumption lattice) doubly exponential in complexity.
Hopefully, by integrating traditional top-down and bottom-up approaches, one can combine the strengths of each and eliminate the weaknesses of both.

For more details on Beth, please refer to this paper.

Downloading the Beth code

The beth code and other related files can be downloaded here.

A sample problem on grammar learning is also provided here.

Get all the necessary files ready

The Beth definition file

Dynamic Predicates

Simply add these two lines at the beginning of your definition file:
  1. :- dynamic tparam/2.
  2. :- dynamic explicitly_declared_bias/2.

Parameters Values

See the details in "Setting Parameters".

Predicate Specifications

The predicate specification is a list of the name-arity specifications of the background predicates used in the problem.

It takes the form: predicates([PredSpec,...,PredSpec]).

Each PredSpec is a term of the form: PredName/Arity where PredName is the name of the background predicate and Arity is its arity.

For example, noun/2 is the predicate specification for the background predicate noun (from a grammar learning problem) which checks if the first word in a given sentence is a noun.
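
For instance, a definition file for the grammar learning problem might declare the background predicates that appear in the traces later in this manual (an illustrative sketch; the exact list depends on your background knowledge file):

predicates([det/2,noun/2,np/2,vp/2,tverb/2,iverb/2]).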

Predicate Specifications: Literals with theory constants

One can specify which arguments of which background predicates may take a constant instead of a variable (the default).

The specification takes the form:

  1. pred_with_theory_const([PTCSpec,...,PTCSpec]). or
  2. pred_with_theory_const([]).
If the list of PTCSpec's is empty (case 2), then Beth will assume that every argument of every background predicate has to be a variable.

Each PTCSpec takes the form: PredName(VCList) where PredName is the name of the background predicate in whose arguments you want to allow theory constants to appear.

VCList takes the form: x1,...,xN where each xI is either 'c' (constant) or 'v' (variable). If xI is 'c', then the Ith argument of the background predicate PredName can be a constant; otherwise, it must be a variable. At least one of the xI's has to be a 'v'. N is the arity of the background predicate PredName.
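
For example, suppose word/2 is a hypothetical background predicate whose second argument may be a theory constant (the actual word) while its first argument must remain a variable. The specification would be:

pred_with_theory_const([word(v,c)]).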

Predicate Ranking

A predicate ranking specification takes the form: pred_ranks([PredRank,...,PredRank]).

PredRank takes the form: PredName/Arity-Rank where PredName is the name of a background predicate, Arity its arity, and Rank a natural number indicating the relative preference among the background predicates; the higher the rank, the more "important" the predicate is.

More precisely, suppose two literals L1 and L2 are used to specialize a given clause C, giving two specialized clauses C1 and C2 respectively, and the two specialized clauses have the same m-estimate. If r1 and r2 are the ranks of L1 and L2 respectively and r1 > r2, then C1 has a better heuristic value than C2 in the search beam.

If a predicate's rank is not specified, it is set to the default, which is zero.
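
For example, the following (with arbitrary illustrative ranks) makes Beth prefer np/2 over vp/2, and vp/2 over any unranked predicate, when breaking ties:

pred_ranks([np/2-2,vp/2-1]).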

Type Ordering

An argument type ordering specification takes the form: pred_type_order([TypeOrder,...,TypeOrder]).

TypeOrder takes the form: Type:Order where Type is an argument type and Order is a natural number indicating the relative preference among argument types; the higher the order, the more "important" the argument type is.

The score given to a literal is the sum of the order of the type of each argument in the literal.

Similar to predicate ranking, type ordering is used to break ties between two literals when the two respective specialized clauses have the same m-estimate and predicate ranking. The literal with the better score in type ordering gives a specialized clause with a better heuristic value and thus a higher preference in the search beam.

If a type's order is not specified, it is set to the default, which is zero.
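
For example, assuming two hypothetical argument types wlist and tag, the following makes Beth prefer literals whose arguments are word lists when breaking ties:

pred_type_order([wlist:2,tag:1]).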

Input-Output Mode Specification

An (input-output) mode specification takes the form: pred_mode_specs([PredMode,...,PredMode]).

A PredMode takes the form: PredName(ModeList) where PredName is the name of a background predicate.

ModeList takes the form: m1,...,mN where each mI is either '+' (output) or '-' (input). If mI is '+', then the Ith argument of the background predicate PredName is an output argument; otherwise, it is an input argument. N is the arity of the background predicate PredName.

NOTE: Beth allows one to provide an empty list of mode specifications (say if the mode information is not readily available) by simply specifying pred_mode_specs([]).
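
For example, in the grammar problem each background predicate consumes a word list through its first argument and returns the remaining words through its second. Under the convention above ('-' for input, '+' for output), an illustrative declaration is:

pred_mode_specs([det(-,+),noun(-,+),np(-,+),vp(-,+),tverb(-,+),iverb(-,+)]).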

Target Predicate

The target predicate (to learn) is specified by

target_predicate(PredMode).

where PredMode is as defined above, except that it is specified for the target predicate.
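
For example, an illustrative declaration for the target predicate s/2 of the grammar problem, using the same convention, is:

target_predicate(s(-,+)).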

Predicate Argument Type Specification

A predicate argument type specification takes the form: pred_type_specs([TypeSpec,...,TypeSpec]).

A TypeSpec takes the form: PredName(TypeList) where PredName is the name of a background predicate.

TypeList takes the form: t1,...,tN where tI is the type of the Ith argument of (background predicate) PredName. N is the arity of the background predicate PredName.
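
For example, assuming the hypothetical type name wlist for word lists in the grammar problem:

pred_type_specs([det(wlist,wlist),noun(wlist,wlist),np(wlist,wlist),vp(wlist,wlist),tverb(wlist,wlist),iverb(wlist,wlist)]).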

Explicitly Declared Bias

An explicitly declared bias is a clause which (explicitly) defines the language bias.

It takes the form: explicitly_declared_bias(Lit,Clause) :- Bias

where Lit is a literal generated to be added to the body of Clause (the current clause) if the goal Bias is satisfied.

For example, the following declares that only positive (or non-negated) literals are acceptable:

explicitly_declared_bias(Lit,_) :- \+ user:neg_lit(Lit).

The user has to provide a definition for any predicate used in Bias, either in the Beth definition file or in a file included by it. In this case, neg_lit/1 is defined in the Beth code (which is compiled into the user module) and is therefore called as user:neg_lit/1.
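
As another sketch (illustrative only; allowed_body_pred/1 is a hypothetical helper that you would define yourself in the definition file), the following bias restricts the literals added to a clause body to a fixed set of predicates:

allowed_body_pred(det).
allowed_body_pred(noun).
allowed_body_pred(np).
allowed_body_pred(vp).
allowed_body_pred(tverb).
allowed_body_pred(iverb).
explicitly_declared_bias(Lit,_) :- functor(Lit,Name,2), allowed_body_pred(Name).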

Positive examples file

Positive examples are ground atoms of the target concept. List each example on a separate line in the file, each ending in a period, like this (suppose t/1 is the target predicate):

t(a1).
t(a2).
...
t(a10).
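
For instance, in the grammar learning problem, where the target predicate is s/2, the positive examples file contains sentences represented as difference lists:

s([the,man,walks,the,dog],[]).
s([every,man,walks],[]).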

Negative examples file

Negative examples are handled exactly the same way as positive examples; just make sure the file name is different. ;-p

Note: In positive-only learning, one does not need to provide negative examples, but the parameter posonly has to be set to 'yes'.

Positive test cases file

List each test case on a separate line in the file as if they were positive (training) examples.

Negative test cases file

If posonly is set to 'yes', then negative test cases should not be provided because Beth will generate them automatically given the background predicates for testing. Otherwise, list them in the file as if they were positive test cases.

Using Beth

Loading Beth

First, load the Beth code into your Prolog system; Beth is compatible with SICStus Prolog version 3.8.5 or later. The Beth code is in the file 'beth.pl'.

This is an example of how the Beth code is loaded:

enceladus:~/eeld/beth/gram> sicstus
SICStus 3.8.5 (x86-linux-glibc2.1): Fri Oct 27 10:16:41 CEST 2000
Licensed to cs.utexas.edu
| ?- compile('beth.pl').
{compiling /v/filer1a/v0q063/ml/eeld/beth/gram/beth.pl...}
...
{compiled /v/filer1a/v0q063/ml/eeld/beth/gram/beth.pl in module user, 770 msec 288168 bytes}

yes
| ?-

Loading background knowledge for training

Feed your Prolog system this line: compile(BGFile). where BGFile is the name of the file containing the background facts.

For example,

:- compile('gram.pl').

compiles the background (knowledge) file 'gram.pl' in Prolog.

Loading the Beth definition file

load_background(DefFile). loads the Beth definition file DefFile into Prolog.

For example,

:- load_background('gram_trn.i').

loads the Beth definition file 'gram_trn.i' in Prolog.

Reading training examples

read_pos_exs(PosExsFile,TPredName,TArity). loads the file with positive examples PosExsFile into Prolog.

TPredName is the name of the target predicate and TArity its arity.

For example,

:- read_pos_exs('gram.f',s,2).

loads all the positive training examples in 'gram.f' into Prolog.

If posonly is set to 'yes', then reading negative examples is not necessary. Otherwise, read_neg_exs(NegExsFile,TPredName,TArity). loads all the negative examples in NegExsFile into Prolog.
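
For example (the file name 'gram.n' is illustrative):

:- read_neg_exs('gram.n',s,2).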

Reading test cases

read_pos_test(PosTestFile,TPredName,TArity). loads the file with positive test cases PosTestFile into Prolog.

TPredName is the name of the target predicate and TArity its arity.

For example,

:- read_pos_test('gram.f.1',s,2).

loads all the positive test cases in 'gram.f.1' into Prolog.

If posonly is set to 'yes', then one should not load any negative test cases since Beth will generate them automatically. Otherwise, read_neg_test(NegTestFile,TPredName,TArity). loads all the negative test cases in NegTestFile into Prolog.

Construct a theory

To learn a theory, just feed Prolog the line:

induce.

A trace of clauses searched by Beth is usually printed out like this:

[Positives to cover = 5 Negatives to eliminate = 0]
[Seed example = s([the,man,walks,the,dog],[])]

[Top clause = s(_3223,_3224):-true m-estimate = 0.00010009574679356965]
0.00010009574679356965-5-5-(s(A,B):-det(A,C)).
0.00010009574679356965-5-5-(s(A,B):-np(A,C)).
0.00010009574679356965-5-5-(s(A,B):-det(A,C),noun(C,D)).
0.00010009574679356965-5-5-(s(A,B):-det(A,C),np(A,D)).
0.00010009574679356965-5-5-(s(A,B):-np(A,C),tverb(C,D)).
0.00010009574679356965-5-5-(s(A,B):-np(A,C),iverb(C,D)).
0.00010009574679356965-5-5-(s(A,B):-np(A,C),vp(C,D)).

along with the seed example chosen. Each trace line shows the m-estimate of a clause, the numbers of positive and negative examples it covers, and the clause itself.

When a good clause is found by Beth, it will be shown along with the bottom clause found in the search:

[Bottom Clause Found]
(s([the,man,walks,the,dog],[]) :-
det([the,dog],[dog]),
det([the,man,walks,the,dog],[man,walks,the,dog]),
iverb([walks,the,dog],[the,dog]),
noun([man,walks,the,dog],[walks,the,dog]),
np([the,dog],[]),
np([the,man,walks,the,dog],[walks,the,dog]),
tverb([walks,the,dog],[the,dog]),
vp([walks,the,dog],[the,dog])).
[Bottom Clause Size] [8/19]

[Good Clauses Found]
[Clause 1]
4-0-(s(A,B):-np(A,C),vp(C,D),np(D,B)).

At the end, the theory learned by Beth will be printed out like this:

[Theory]
[Rule 1] 4-0-(s(A,B):-np(A,C),vp(C,D),np(D,B)).
[Rule 2] 1-0-(s(A,B):-np(A,C),tverb(C,B)).

[time taken = 0.1]
[total clauses constructed = 26]

along with the number of positive and negative examples covered by each clause, the time taken (in cpu seconds) to find the theory, and the total number of clauses constructed in the search.

Loading background knowledge for testing

To load the set of background facts to evaluate the performance of the learned theory on the test cases, feed to Prolog this line:

test:compile(TestBGFile).

where TestBGFile is the file of background facts used exclusively for evaluating the theory.

Note that while traditional ILP approaches assume that the background facts used in learning are exactly the same as those used in testing, Beth makes no such assumption. If they are indeed the same, then one just needs to use BGFile (the background knowledge file for learning) in place of TestBGFile.

Separating the sets of background facts for learning and testing gives the user a higher degree of freedom and can reduce the time needed to evaluate the learned theory.

For example, if the set of background facts for learning is generated independently of that for testing, only the set of background facts for testing is needed when evaluating the learned theory (and similarly for learning).

Loading only the background facts needed for testing into a separate module (the 'test' module here) can thus speed up evaluation of the learned theory considerably: the amount of theorem proving is much reduced because the set of background facts is much smaller, particularly when the background knowledge used in training is huge.

Evaluate the theory

To evaluate the theory learned by Beth, use the command:

test.

The details of testing the theory will be printed out like this:

[Actual]
[s([the,man,hits,the,dog],[]) pos] [classified pos]
[s([a,ball,hits,the,dog],[]) pos] [classified pos]
[s([a,ball,hits],[]) pos] [classified pos]
[s([every,ball,hits],[]) pos] [classified pos]
[s([every,dog,walks],[]) pos] [classified pos]
[s([every,man,walks],[]) pos] [classified pos]
[s([a,man,walks],[]) pos] [classified pos]
[s([a,small,man,walks],[]) pos] [classified pos]
[s([every,nice,dog,barks],[]) pos] [classified neg] [incorrect]
[Negatives Covered = 2]

[F-measure = 0.8421052631578948 Recall = 0.8888888888888888 Precision = 0.8]

[Test Time Taken = 0.01]

along with the recall, the precision, and the f-measure of the theory, and the test time taken (in cpu seconds).

If posonly is set to 'no' (i.e. learning with negative training examples provided), only the recall (a.k.a. accuracy) of the theory is shown. In addition, the details of the classification of each negative test case by the theory are printed out.

Setting Parameters

Each parameter in Beth takes the form:

tparam(PARAM,VALUE)

where PARAM is the name of the parameter and VALUE its value.

Beth has the following set of parameters:

tparam(verbosity,V).
This controls the amount of trace output printed. V = 0 makes Beth print very little. Recommended value is 3.

tparam(sample,V).
This defines the percentage of training examples used. V is in the range [0,1]. This is to facilitate generating learning curves by varying the size of the training data through this parameter value. Recommended value is 1.0.

tparam(reseed,V).
V is a natural number which sets the maximum number of times Beth can choose a new random seed example in case the search for a good clause gets stuck (i.e. the current clause cannot be specialized further while it still covers negative examples). Recommended value is 0 or 1.

tparam(beamsize,V).
V is a positive integer that defines the beam width. Recommended value is 3 or 4.

tparam(stopn,V).
V is a positive integer. This is the required minimum number of consistent clauses (or sufficiently accurate clauses) present in the beam before the search for a good clause terminates. Note that if V > 1, then the search for a (good) clause continues until this number is met (even if there is already a good clause in the search beam). Setting this value to greater than one tends to help overcome local minima in the search, at the expense of time. Only the minimum of V and the beam width is used. Recommended value is 1.

tparam(bestn,V).
V is a positive integer. This is the maximum number of consistent clauses (or sufficiently accurate clauses) in the beam added to the theory under construction when at least one such clause is present in the search beam. Recommended value is 1.

tparam(litbound,V).
V is a positive integer. This is the max number of literals considered when generating refinements to the current clause. Recommended value is 10 to 15.

tparam(littime,V).
V is a positive integer. This is the amount of cpu time (in milliseconds) given to the generation of refinements to the current clause. If one does not want to impose such a limit, just set V to 'inf'. Recommended value is 'inf'.

tparam(mestimate,V).
V specifies the value of m in computing the m-estimate of a clause.
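
For reference, the standard m-estimate of a clause covering p positive and n negative examples is (p + m*P(+)) / (p + n + m), where P(+) is the prior probability of the positive class. This is the textbook definition; Beth's exact computation may differ in details such as how the prior is estimated.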

tparam(vdbound,V).
V is a positive integer. It is the variable depth bound (like the parameter i in Progol).

tparam(maxlits,V).
V is a positive integer. This defines the max length of a clause (i.e. the max number of literals in the body of the clause).

tparam(minacc,V).
V is a positive real number. This is the required minimum value of the m-estimate of a given clause.

tparam(maxproof,V).
V is a positive integer. V is the max number of times the system tries to generate a sufficient number of unique ground atoms satisfying all the refinement constraints. Recommended value is ten times the value of the recall bound.

tparam(maxref,V).
V is a positive integer. It is tantamount to the recall bound in Progol, except that the same value is used for every background predicate. Recommended value is 5 or less.

tparam(cachesize,V).
V is a positive integer. This is the size of the cache of proofs stored for each clause in the search beam. This value depends on maxref. If maxref = 1, just set V to 1. Otherwise, a value in the range [2,5] is recommended.

tparam(sigthres,V).
V is a real number which defines the threshold at which a clause is considered statistically significant. This parameter is adopted from mFoil. Recommended value is 6.64.

tparam(minpos,V).
V is a positive integer. It is the required minimum number of positive examples covered by a clause. Recommended value is 1.

tparam(prooftime,V).
V is a positive integer. It is the amount of cpu time (in milliseconds) given to computing the set of (positive or negative) examples covered by a clause. If one does not want to give such a time bound on theorem proving, just set it to 'inf'. Recommended value is 'inf'.

tparam(goaltime,V).
V is a positive integer. This is the amount of cpu time (in milliseconds) given to finding a ground atom satisfying all the refinement constraints. If one does not want to give such a time bound on theorem proving, just set it to 'inf'. Recommended value is 'inf'.

tparam(occbound,V).
V is a positive integer. This is the maximum number of literals in the body of a clause that have the same predicate name. Recommended value is 2 or 3.

tparam(search,V).
V can be either 'bnb' (branch and bound search) or 'beam' (beam search). This allows a user to choose between two different types of search methods available. Recommended value is 'bnb'.

tparam(prune,V).
V is either 'on' or 'off'. If it is set to 'on', the system discards any literal that

  1. does not produce an increase in m-estimate (when added to the current clause), and
  2. introduces no new variable (a variable not already appearing in the clause).

Recommended value is 'on'.

tparam(outfile,V).
V is the name of the log file of the traces of output produced. One can set it to '' if such a log file is not necessary.

tparam(makebottomclause,V).
V is either 'on' or 'off'. If it is set to 'on', the system builds the virtual bottom clause during the search for a good clause.

tparam(posonly,V).
V is either 'yes' or 'no'. If it is set to 'yes', the system automatically generates negative training examples and negative test cases. Otherwise, the user has to provide them to the system.

tparam(qbound,V).
V is a positive integer. This is the maximum number of answers generated when proving an output query (in evaluating the theory found by Beth). This feature is used only when posonly is set to 'yes'.

tparam(maxans,V).
V is a positive integer. This is the maximum number of answers generated when proving an output query (in constructing a theory). This feature is used only when posonly is set to 'yes'.

tparam(negsize,V).
V is a positive integer. This is an estimate of the number of negative examples that could be provided to the system if they were available. This parameter is used to compute an estimate of the prior probability of the positive examples (i.e. the class "+"). This parameter is used only when posonly is set to 'yes'.

tparam(usize,V).
V is a positive integer. This is an estimate of the "universe size" (the size of the domain of each argument of the target predicate). This parameter is used only when posonly is set to 'yes'.
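
Putting it all together, the parameter section of a definition file might look like the following sketch. The values follow the recommendations above where one is given; the values for mestimate, vdbound, maxlits and minacc are illustrative and problem-dependent, and the posonly-specific parameters (qbound, maxans, negsize, usize) are omitted since posonly is 'no' here:

tparam(verbosity,3).
tparam(sample,1.0).
tparam(reseed,1).
tparam(beamsize,4).
tparam(stopn,1).
tparam(bestn,1).
tparam(litbound,10).
tparam(littime,inf).
tparam(mestimate,5).
tparam(vdbound,3).
tparam(maxlits,5).
tparam(minacc,0.9).
tparam(maxproof,50).
tparam(maxref,5).
tparam(cachesize,5).
tparam(sigthres,6.64).
tparam(minpos,1).
tparam(prooftime,inf).
tparam(goaltime,inf).
tparam(occbound,2).
tparam(search,bnb).
tparam(prune,on).
tparam(outfile,'').
tparam(makebottomclause,on).
tparam(posonly,no).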

Automate it through a script

It is recommended to collect all these steps in a script file so that one just needs to compile the script file in Prolog to run them all (and save some typing ;-). A script file for the grammar learning problem looks like this:

:- compile('beth.pl').
:- compile('gram.pl').
:- load_background('gram_trn.i').
:- read_pos_exs('gram.f',s,2).
:- read_pos_test('gram.f.1',s,2).
:- induce.
:- test:compile('gram.pl').
:- test.

Suppose this script file is saved as 'run.pl'. Then one just needs to compile this file in Prolog to run Beth on the problem:

enceladus:~/ftp/beth/gram> sicstus
SICStus 3.8.5 (x86-linux-glibc2.1): Fri Oct 27 10:16:41 CEST 2000
Licensed to cs.utexas.edu
| ?- compile('run.pl').
{compiling /stage/ftp/pub/mooney/beth/gram/run.pl...}
{compiling /stage/ftp/pub/mooney/beth/gram/beth.pl...}
...
{compiled /stage/ftp/pub/mooney/beth/gram/run.pl in module user, 940 msec 306048 bytes}
[Test Time Taken = 0.02]
yes
| ?-



This page and the Beth code are copyrighted by Lap Poon Rupert Tang. All rights are reserved by Lap Poon Rupert Tang. The code is available for academic research only. Please contact Rupert at rupert@cs.utexas.edu with any questions regarding Beth. Thank you.

Last updated: June 18, 2003