Front Ends for SAT solvers
Date: Thu, 25 Oct 2012
From: Vladimir Lifschitz
To: TAG
SAT solvers and ASP solvers are two kinds of tools that are used for
solving hard combinatorial search problems. One of the differences
between them is that an ASP solver includes a grounder, such as gringo
--a front end that turns a program with variables into an equivalent
ground program. There are no similar front ends for SAT solvers, as
far as I know. Why is that?
One possible reason, it occurs to me, is that "intelligent" grounding,
which uses simplifications to produce relatively small output, may be
impossible in the context of classical logic. For instance, given the
input
p(a,b).
q(X) :- p(X,Y).
gringo produces the ground program
p(a,b).
q(a).
These two programs are equivalent to each other in the sense that they
have the same stable models. (Actually, the set of facts in the latter
is the only stable model of the former.) But if we ground the clauses
p(a,b)
p(X,Y) -> q(X)
as grounding is understood in classical logic, we'll get something different:
p(a,b)
p(a,a) -> q(a)
p(a,b) -> q(a)
p(b,a) -> q(b)
p(b,b) -> q(b)
The output of gringo is equivalent to the conjunction of the ground clauses
in lines 1 and 3; lines 2, 4 and 5 are not included.
This example shows that gringo can't be used as a grounder for classical
logic. Perhaps grounding in classical logic can't be nontrivial.
Any thoughts?
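For concreteness, both groundings can be reproduced mechanically. The following toy Python sketch (an illustration of the two ideas, not gringo's actual algorithm) enumerates the naive classical instances and, separately, derives facts by forward chaining:

```python
from itertools import product

constants = ["a", "b"]

# Naive classical grounding of  p(X,Y) -> q(X):
# one ground clause for every substitution of constants for (X, Y).
naive = [f"p({x},{y}) -> q({x})" for x, y in product(constants, repeat=2)]
print(len(naive))  # 4 ground clauses, plus the fact p(a,b): five formulas

# "Intelligent" grounding instead derives only what follows from the facts:
facts = {("p", ("a", "b"))}
derived = {("q", (x,)) for (pred, (x, _y)) in facts if pred == "p"}
print(sorted(facts | derived))  # just p(a,b) and q(a), as gringo outputs
```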
============
Date: Thu, 25 Oct 2012
From: Pashootan Vaezipoor
To: Vladimir Lifschitz
I hope I have understood the issue you are raising correctly; if so, I
should direct you to the research that our group at SFU has been doing
over the course of the past few years. The aim is to devise a
declarative constraint programming framework that (1) takes the
specification of an NP search problem, formulated in first-order logic
(FO), (2) performs the grounding (with some simplifications), and then
(3) uses a SAT solver to solve the ground formula.
The following papers describe the above process in more detail:
1. A framework for representing and solving NP search problems
(by David Mitchell and Eugenia Ternovska)
2. Declarative Programming of Search Problems with Built-in
Arithmetic (by David Mitchell and Eugenia Ternovska)
As for eliminating the clauses in lines 2, 4 and 5, the above method
works by explicitly adding the negations of p(a, a), p(b, a) and
p(b, b) as unit clauses to the ground formula and relying on the SAT
solver's unit propagation to eliminate them. Recently a paper was
published that performs this task at the specification level and frees
the SAT solver from doing it. The technique is called Lifted Unit
Propagation (LUP), and the result of grounding after LUP is close (or,
as in this case, identical) to gringo's:
- Lifted Unit Propagation for Effective Grounding (by Pashootan
Vaezipoor, David Mitchell, Maarten Marien)
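The pruning scheme described above, adding the negations of the undefined instances of p as unit clauses and letting unit propagation do the rest, can be sketched in a few lines of Python (a toy illustration, not the actual code of any of the systems mentioned):

```python
def unit_propagate(clauses):
    """Simplify a clause set (sets of string literals, '-' for negation)
    by unit propagation: a unit satisfies every clause containing it and
    deletes its complement from the remaining clauses."""
    clauses = [set(c) for c in clauses]
    changed = True
    while changed:
        changed = False
        for lit in [next(iter(c)) for c in clauses if len(c) == 1]:
            comp = lit[1:] if lit.startswith("-") else "-" + lit
            kept = []
            for c in clauses:
                if lit in c and len(c) > 1:
                    changed = True           # clause satisfied by the unit
                elif comp in c:
                    kept.append(c - {comp})  # complement is false
                    changed = True
                else:
                    kept.append(c)
            clauses = kept
    return clauses

# The naive grounding plus negated units for the undefined instances of p:
ground = [
    {"p(a,b)"},
    {"-p(a,a)", "q(a)"}, {"-p(a,b)", "q(a)"},
    {"-p(b,a)", "q(b)"}, {"-p(b,b)", "q(b)"},
    {"-p(a,a)"}, {"-p(b,a)"}, {"-p(b,b)"},
]
print(unit_propagate(ground))
```

Only unit clauses survive; in particular, nothing ever forces q(b), matching gringo's output up to the bookkeeping units.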
Another group with a similar line of research is Marc Denecker's group
at KU Leuven.
============
Date: Thu, 25 Oct 2012
From: Mirek Truszczynski
To: TAG
There are front ends to SAT solvers, and very good ones at that. Psgrnd,
which Deborah East and I developed around the year 2000, is one, and it
is still quite good. More modern ones are gidl (by Marc and his
students), conga, which my student David Brown built (this one still has
some bugs), and the grounder for the model expansion setting by Eugenia
Ternovska and David Mitchell. All of these programs accept an FOL
formula and a domain and can output an equivalent DIMACS CNF theory. All
the grounders I mentioned are competitive with ASP grounders. I use them
(now mostly gidl and conga).
So the question in my mind is why they are rarely used in the SAT community.
============
Date: Thu, 25 Oct 2012
From: Vladimir Lifschitz
To: Pashootan Vaezipoor and Mirek Truszczynski
Thank you for telling me about the existing front ends for SAT solvers.
I was not aware of them.
But I am wondering if these grounders do nearly as much as, say, lparse,
dlv, and gringo, in terms of simplifications used in the process of
grounding to make the output more compact. Take the simple example from
my email: turning
p(a,b).
q(X) :- p(X,Y).
into
p(a,b).
q(a).
instead of a set of 5 ground rules. Is anything like this even possible
in the context of classical logic?
============
Date: Fri, 26 Oct 2012
From: Mirek Truszczynski
To: Vladimir Lifschitz
Psgrnd would perform extensive simplifications. In particular, if p is
an input (data) symbol specified to p(a,b), and the theory is p(X,Y)
-> q(X) (of course, it has a different meaning than the theory you
gave) the outcome would be p(a,b) and q(a). However, q is an *output*
symbol and {p(a,b), q(b)} will be interpreted as the abbreviation for
all classical models where q(b) (if there are no other values q could
take) might be either true or false. Which is exactly how it should be
interpreted.
In slightly more general terms, the grounder uses CWA when
interpreting the data set of atoms and propagates that. Whenever no
constraint is left for an output atom, the grounder remembers that it
may be either true or false but does not bother the solver with it.
Psgrnd has an option of performing a lookahead, which further reduces
the size of ground theory.
The bottom line is, all grounders I mentioned yesterday do quite a bit
of work to produce a small grounding and I suspect especially the
newer ones (gidl and the ones from SFU) are quite smart in that.
============
Date: Fri, 26 Oct 2012
From: Martin Brain
To: Vladimir Lifschitz
> SAT solvers and ASP solvers are two kinds of tools that are used for
> solving hard combinatorial search problems. One of the differences
> between them is that an ASP solver includes a grounder, such as gringo
> --a front end that turns a program with variables into an equivalent
> ground program. There are no similar front ends for SAT solvers, as
> far as I know. Why is that?
> This example shows that gringo can't be used as a grounder for classical
> logic. Perhaps grounding in classical logic can't be nontrivial.
>
> Any thoughts?
It seems others have already covered some of the interesting takes on
this question - that grounders do exist and (perhaps the question you
really wanted answered) that the practicality of instantiation differs.
Thus I shall try to add something different.
1. There are people looking at instantiation for predicate logic in the
theorem proving community. iProver works via interleaved instantiation
and calls to a SAT solver.
2. One interesting place to build / deploy grounding technology for
classical logic would be SMT solvers. Current systems for quantifiers
are complex, rare and instantiation based. This would also give a wide
range of immediate applications.
3. I think there is a rather fluffy point about 'research culture' here
-- why does the SAT community not take modelling seriously / view it the
way the ASP and CSP communities do? ('Not seriously' is probably too
strong, but they have a very different view of it.) I think it is
probably due to the way the tools have developed and, critically, the
DIMACS format. This has to be generated by machine for any non-trivial
use. Also it is canonical to some degree; there are very few options to
do something smart with the interpretation that isn't a search /
reasoning technique. A consequence of this is that (classically) systems
that used SAT solvers were 'a system that contained a SAT solver', while
systems using ASP / IDP solvers tend to be 'a solver, with some input
and output glue (or not, for some folks)'.
Add to this the focus on implementation and experimental work and it
seems (to me) that you have a community whose starting point is DIMACS;
how it is generated and what happens to the model are, largely, other
people's problems. When they publish work about logical modeling, it
tends to be called 'encoding' (which is in itself an interesting
contrast) and follows the narrative of "here is another problem which
can be reduced to SAT" similar to the work on proving things NP
complete, rather than "here is how to express this concept in a
computable logic".
As with all(?) statements about 'culture', this is a rather woolly
point which people will no doubt find exceptions to and pick apart as a
gross overgeneralisation. However, one of the great (to my mind) things
about the ASP community is that people take logic modelling seriously.
============
Date: Fri, 26 Oct 2012
From: Martin Gebser
To: Vladimir Lifschitz
Your question has repeatedly occurred to me and my colleagues, and a good
answer seems hard to pin down. The constructive flavor of ASP almost
immediately points the way to "intelligent" grounding (deduction), but
it would be incorrect for classical logic. For instance, one cannot
ground a tautology like
p(X) :- p(X).
without explicit type information, even if only supported (Herbrand)
models are to be considered. (From my modeling experience, I can say that
explicit type information is a source of inefficiency, if too general,
or incorrectness, if too restrictive. Thus, modeling gets much easier
if there's no need of explicit type information.)
While type information may be recovered for finite Herbrand universes, I
don't see how a rule as above may be grounded correctly for classical
logic in general. But as a matter of fact, non-zero arity functions,
making the Herbrand universe infinite, are needed to implement
applications like MetaASP (cf. http://www.cs.uni-potsdam.de/wv/metasp/)
or action languages (cf. http://www.cs.uni-potsdam.de/wv/coala/).
Though my intuitions are vague here, I guess that uniform definability
is an issue too. Aren't transitive closures and the like "undefinable" in
FO, while (non-tight) ASP programs and FO(ID) can capture them? (What
would be good literature references for that?)
If I'm right that concepts like transitivity cannot be captured nicely
in classical logic, any FO modeling language for SAT would be limited,
which may make "taking off" difficult. This could explain why there is
no general front-end (but probably application-specific ones) for SAT.
(There's still lp2sat from Helsinki to convert ground ASP code to SAT ;-)
===========
Date: Fri, 26 Oct 2012
From: Marc Denecker
To: Vladimir Lifschitz
As Mirek and Pashootan said, there are several competitive solvers that
include grounders for FO - and extensions of FO. In the groups of Mirek,
of David Mitchell and Eugenia Ternovska, and of myself such grounders
are developed. IDP, the system developed in my group (and presented
recently in Austin) is one of them: if fed with a (typed) FO theory, it
will produce a SAT theory. Also some theorem provers for FO contain
grounders to SAT.
I need to make a remark on your comparison between ASP and FO. In my
opinion it is bound to produce confusion. You compare the ASP program
p(a,b)
q(x) <- p(x,y)
with the FO theory
p(a,b)
p(x,y) => q(x)
The two theories have completely different meanings.
Therefore, one should not expect that anything sensible can be learned
from comparing their groundings.
Your ASP program is equivalent to the FO theory
Domain Closure Axiom
p(x,y) <=> x=a & y=b
q(x) <=> ?y p(x,y)
Some grounders (e.g. IDP) will simply not ground these formulas, but
realize at grounding time that p is true only for (a,b) and q only for
a. Nontrivial grounding techniques are necessary for this.
Vice versa, the ASP program that corresponds to your FO theory
p(a,b)
p(x,y) => q(x)
could be as follows
D(a).
D(b).
{p(x,y)} :- D(x), D(y). % choice rule
{q(x)} :- D(x). % choice rule
:- not p(a,b).
:- p(x,y), not q(x).
(To make it groundable, I introduced a domain predicate D
to specify the range for the vars in choice rules)
Its grounding will be substantially different from the grounding of your
ASP program and probably contain rules that resemble the clauses that
you sketch for the grounding of the FO theory.
============
Date: Fri, 26 Oct 2012
From: Michael Gelfond
To: Martin Gebser
This is slightly off topic but I am going to ask anyway:
You write:
"(From my modeling experience, I can say that
explicit type information is a source of inefficiency, if too general,
or incorrectness, if too restrictive. Thus, modeling gets much easier
if there's no need of explicit type information.)"
Can you elaborate on that?
===========
Date: Fri, 26 Oct 2012
From: Marc Denecker
To: TAG
I thought it was conventional wisdom that typing information improves
modelling or programming - e.g., by making models more precise and by
eliminating bugs. I believe it does. IDP supports a typed logic.
In my opinion, the problems sketched here by Martin are not inherent to
typing but to a naive implementation of a grounder for a typed formalism.
============
Date: Fri, 26 Oct 2012
From: Martin Gebser
To: Michael Gelfond
Let me try to illustrate what I mean with Fibonacci numbers (up to 12), as
a small toy problem. The nice encoding is as follows:
fib(1,1). fib(2,1).
fib(X+2,M+N) :- fib(X,M), fib(X+1,N), X < 11.
Here "intelligent" grounding computes the numbers, leading to 12 facts.
However, we can also turn a grounder like gringo into a "dull"
substitution machine by using an encoding like this:
dom(1..144).
fib(1,1). fib(2,1).
1 { fib(X,N) : dom(N) } 1 :- dom(X), X < 13.
:- fib(X,N), fib(X-2,M), not fib(X-1,N-M).
In this formulation, dom/1 is meant to emulate a fixed finite Herbrand
universe, used to restrict instances of fib/2 to a finite set. One may
wonder where the upper limit 144 (which happens to be the 12th Fibonacci
number) comes from. In fact, a modeler needs to know what the
plausible instances of fib/2 can be to provide type information as
above. Although the two versions of the Fibonacci program give similar
results, clasp reports 207431 ground rules for the second version, while
we had only 12 ground facts before.
For more sophisticated problems, a priori restrictions can be quite
non-obvious, and too strong ones may spoil correctness. For instance,
the second program becomes unsatisfiable if one forgets about 144 (using
"dom(1..143)."). The conflict of interest between efficiency and
correctness can make finding good a priori restrictions difficult. I
think it's a relief if one doesn't need to provide them and can let a
grounder take care of the "right" subset of the Herbrand universe.
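What "intelligent" grounding does with the first encoding can be imitated by plain forward chaining (a Python sketch of the idea, not gringo's implementation):

```python
def ground_fib(limit=11):
    """Forward-chain  fib(X+2,M+N) :- fib(X,M), fib(X+1,N), X < limit.
    Only derivable instances are ever produced."""
    facts = {(1, 1), (2, 1)}
    changed = True
    while changed:
        changed = False
        for (x, m) in list(facts):
            for (x1, n) in list(facts):
                if x1 == x + 1 and x < limit and (x + 2, m + n) not in facts:
                    facts.add((x + 2, m + n))
                    changed = True
    return sorted(facts)

facts = ground_fib()
print(len(facts), facts[-1])  # 12 facts; fib(12) = 144
```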
============
Date: Fri, 26 Oct 2012
From: Gregory Gelfond
To: Martin Gebser
Consider your first encoding of the Fibonacci numbers:
fib(1,1).
fib(2,1).
fib(X+2,M+N) :- fib(X,M), fib(X+1,N), X < 11.
It seems that you are suggesting that this representation lacks explicit
type information. Am I correct in my understanding? If this is the case,
then how do you interpret the restriction that X < 11 in the body of the
recursive rule?
Regarding the second representation:
dom(1..144).
fib(1,1).
fib(2,1).
1 { fib(X,N) : dom(N) } 1 :- dom(X), X < 13.
:- fib(X,N), fib(X-2,M), not fib(X-1,N-M).
It seems that the source of the inefficiency stems from the use of the
choice rule coupled with the constraint (which uses default negation); am I
right? This seems to be an unnatural representation, particularly if the
goal is to have "type restricted definition of the Fibonacci numbers".
Consider the following encoding:
positive(1..144).
fibonacci(1, 1).
fibonacci(2, 1).
fibonacci(X+2, M+N) :- fibonacci(X, M), fibonacci(X+1, N),
positive(X), positive(M), positive(N).
:- fibonacci(X, M), not positive(X), not positive(M).
#hide.
#show fibonacci/2.
This encoding does have a larger corresponding ground program, but it
appears that this is solely due to the inclusion of the definition of the
"type" positive. Looking at the recursive rule, it makes the type
information more explicit than the first variant, by binding the variables X,
M, and N, to the type positive, yet it appears to preserve the natural
definition of the sequence. Furthermore it appears to be robust in the
sense that adding a fact, fibonacci(-1,-2), causes the program to no longer
have any answer sets because it violates the type specification.
Does this representation still suffer from the issues you suggest?
============
Date: Sat, 27 Oct 2012
From: Martin Gebser
To: Gregory Gelfond
> Consider your first encoding of the Fibonacci numbers:
>
> fib(1,1).
> fib(2,1).
> fib(X+2,M+N) :- fib(X,M), fib(X+1,N), X < 11.
>
> It seems that you are suggesting that this representation lacks explicit
> type information. Am I correct in my understanding? If this is the case,
> then how do you interpret the restriction that X < 11 in the body of the
> recursive rule?
With "lack of explicit type information" I actually referred to M and N
in the third rule above. Assume that our goal is to calculate the 12th
Fibonacci number by a logic program (or FO theory). By the construction
of Fibonacci numbers, it's apparent that [1,12] is sufficient as range
for X. On the other hand, we'd need extra knowledge about outcomes in
order to restrict terms (integers here) generated via M+N a priori.
However, the construction of the 12th Fibonacci number can be performed
in 12 steps (12 facts are generated by "intelligent" grounding). We can
actually rely on this and trust that our instantiation won't be infinite
by considering the structure of the above program: X in the first
argument of fib/2 increases strictly monotonically in the third rule,
and X < 11 puts a bound on the increase. On the other hand, any
explicit restriction on the second argument of fib/2 won't change the
fact that the 12th Fibonacci number exists, but if such a restriction is
too strong, we don't get the result anymore.
To unwind the loop of the discussion a bit, I started with writing that
grounding FO theories seems problematic if the Herbrand universe is
infinite and there's no explicit type information to restrict it. Then
Michael asked for some elaboration of my side remark why I think that
such explicit type information can be painful, and I provided the above
example as a reply. What I wanted to illustrate and also to explain now
is that type information on (all) variables, especially M and N in the
third rule, wouldn't help us to get what we want, viz. the result (12th
Fibonacci number here) obtained by "intelligent" grounding.
However, it should be noted that "intelligent" grounding isn't complete
under classical FO semantics: we miss plenty of (classical) Herbrand models
of the logic program above. This observation easily extends to
supported Herbrand models because we can simply add the tautology
fib(X,N) :- fib(X,N).
Under classical semantics, we inherently get infinite Herbrand models,
and thus there cannot be a finite grounding to SAT. This issue is
certainly problematic if one wants to perform classical reasoning via
grounding and SAT, and I understand that explicit type information
guaranteeing finite instantiations is a way to escape the problem.
The Fibonacci example was supposed to illustrate that ASP semantics and
"intelligent" grounding may sometimes have it easier due to not being
committed to preserve all (classical) Herbrand models. This was
utilized in providing a rule with nominally infinitely many ground
instances, while still being sure that only finitely many of them will
be produced. Whether one still prefers a priori restrictions on values
of variables to guarantee finiteness is probably a matter of taste, but
hopefully I could make my argument against them somewhat comprehensible.
> Regarding the second representation:
>
> dom(1..144).
>
> fib(1,1).
> fib(2,1).
> 1 { fib(X,N) : dom(N) } 1 :- dom(X), X < 13.
> :- fib(X,N), fib(X-2,M), not fib(X-1,N-M).
>
> It seems that the source of the inefficiency stems from the use of the
> choice rule coupled with the constraint (which uses default negation) am
> I right?
The "unnatural" choice rule is meant to emulate classical reasoning,
where atoms can be true without proof, in the language of gringo. To
this end, for each value for X in [1,12], we allow guessing associated
values for N from the entire Herbrand universe, consisting of integers
in [1,144]. The lower and upper bound 1 of the choice rule as well as
the integrity constraint are included to make sure that pairs for which
fib/2 holds represent (the first 12) Fibonacci numbers.
> This seems to be an unnatural representation, particularly if
> the goal is to have "type restricted definition of the Fibonacci
> numbers". Consider the following encoding:
>
> positive(1..144).
>
> fibonacci(1, 1).
> fibonacci(2, 1).
> fibonacci(X+2, M+N) :- fibonacci(X, M), fibonacci(X+1, N),
> positive(X), positive(M), positive(N).
>
> :- fibonacci(X, M), not positive(X), not positive(M).
>
> #hide.
> #show fibonacci/2.
>
> This encoding does have a larger corresponding ground program, but it
> appears that this is solely due to the inclusion of the definition of
> the "type" positive. Looking at the recursive rule, it makes the type
> information more explicit than the first variant, by binding the
> variables X, M, and N, to the type positive, yet it appears to preserve
> the natural definition of the sequence. Furthermore it appears to be
> robust in the sense that adding a fact, fibonacci(-1,-2), causes the
> program to no longer have any answer sets because it violates the type
> specification.
>
> Does this representation still suffer from the issues you suggest?
I think that your program needs to be understood under ASP semantics,
while my "unnatural" program aimed at emulating classical reasoning.
Regardless of this, I think that the explicit typing via positive/1
cannot resolve the chicken-and-egg problem I wanted to illustrate: the
"magic" number 144 is available only once one knows that 144 is the 12th
Fibonacci number. Knowing the outcomes, one can certainly type every
variable in the input precisely, but that does not help to find values
of an unknown type. I brought 144 into play myself only because it was
needed for the encoding emulating classical reasoning, but it was a sort
of "divine" intervention. (Preferably, I write ASP programs to find out
something I don't know beforehand :-)
===============
Date: Sat, 27 Oct 2012
From: Vladimir Lifschitz
To: Mirek Truszczynski
> Psgrnd would perform extensive simplifications. In particular, if p is
> an input (data) symbol specified to p(a,b), and the theory is p(X,Y)
> -> q(X) (of course, it has a different meaning than the theory you
> gave) the outcome would be p(a,b) and q(a). However, q is an *output*
> symbol and {p(a,b), q(b)}
You probably mean {p(a,b), q(a)}.
> will be interpreted as the abbreviation for
> all classical models where q(b) (if there are no other values q could
> take) might be either true or false. Which is exactly how it should be
> interpreted.
>
> In slightly more general terms, the grounder uses CWA when
> interpreting the data set of atoms and propagates that.
Judging by this example, psgrnd does not really simplify the set of
clauses obtained by naive grounding. Rather, it produces a compact
description of that set of clauses in a higher-level language.
Translating from that language into the language of ground clauses is
based on the CWA, that is to say, uses a nonmonotonic syntactic
transformation.
So your example only strengthens my suspicion that classical propositional
logic gives us few possibilities for simplifying sets of clauses, in
comparison with its nonmonotonic extensions. We can drop a clause if it
is subsumed by another clause. Unit resolution produces a simplification.
Anything else?
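The subsumption simplification just mentioned has a compact propositional formulation (a toy Python sketch):

```python
def remove_subsumed(clauses):
    """Drop every clause that is a proper superset of another:
    if C is a subset of D, any assignment satisfying C satisfies D."""
    clauses = [frozenset(c) for c in clauses]
    return [c for c in clauses if not any(d < c for d in clauses)]

kept = remove_subsumed([{"p"}, {"p", "q"}, {"q", "r"}, {"-p", "q", "r"}])
print(kept)  # only {p} and {q,r} remain
```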
Marc asked about my remark "Perhaps grounding in classical logic can't
be nontrivial" whether it contains an unintended double negation. No,
Marc, this is exactly what I meant: I suspect that the possibilities for
simplifying the result of naive grounding in classical logic are quite
limited, in comparison with what can be done in nonmonotonic languages
based, for instance, on stable models or the CWA.
============
Date: Sun, 28 Oct 2012
From: Mirek Truszczynski
To: Vladimir Lifschitz
> Anything else?
Psgrnd simplifies based on unit propagation and lookahead (assume the
negation of a literal, unit-propagate, and fix the opposite value if a
conflict arises).
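The lookahead described here, sometimes called failed-literal probing, can be sketched as follows over DIMACS-style integer literals (a toy Python illustration, not psgrnd's code):

```python
def propagate(clauses, assignment):
    """Unit-propagate from a partial assignment {atom: bool};
    return the extended assignment, or None on conflict."""
    assignment = dict(assignment)
    while True:
        unit = None
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            free = [l for l in clause if abs(l) not in assignment]
            if not free:
                return None                   # all literals false: conflict
            if len(free) == 1:
                unit = free[0]
                break
        if unit is None:
            return assignment
        assignment[abs(unit)] = unit > 0

def lookahead(clauses, atoms):
    """Failed-literal probing: if assuming an atom false (true) yields a
    conflict under unit propagation, fix it to true (false)."""
    forced = {}
    for a in atoms:
        if propagate(clauses, {a: False}) is None:
            forced[a] = True
        elif propagate(clauses, {a: True}) is None:
            forced[a] = False
    return forced

print(lookahead([[1, 2], [1, -2]], [1, 2]))  # atom 1 is forced true
```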
> So your example only strengthens my suspicion that classical propositional
> logic gives us few possibilities for simplifying sets of clauses, in
> comparison with its nonmonotonic extensions.
I would say that the SAT front-ends ground as much as they can given
the semantics with which they work subject to the condition that the
running time of the grounder should remain polynomial (and trying to
balance the amount of work with the size of the output).
I think that problems that involve inductive definitions give rise to
powerful grounding methods exploited by ASP (the Fibonacci number
example given by Martin illustrates the point) and could suggest that
ASP grounders are ``smarter.'' But this is not a fair statement. Since
these inductive definitions cannot be modeled directly in FOL, there
are no counterpart SAT encodings on which one could compare the
grounders fairly. I would conjecture that on tight programs and
corresponding FOL theories the ASP and SAT grounders are more or less
equally efficient.
Again, as I think I have said on this forum before, the key difference
between ASP and FOL is in inductive definitions. They are inherent in
ASP languages but must be explicitly added to formalisms stemming from
FOL to obtain comparable modeling convenience. Once inductive
definitions are allowed, grounders such as psgrnd or gidl can and mostly
do exploit all the ``tricks of the trade'' that ASP grounders use (for
the record: psgrnd accepts theories extended with Horn programs to model
definitions; gidl accepts a still more general language).
============
Date: Mon, 29 Oct 2012
From: Michael Gelfond
To: Martin Gebser
Thank you very much for your response. It is exactly the type of
clarification I have been looking for.
One more question:
Suppose my program is written as:
positive(1..n).
fibonacci(1, 1).
fibonacci(2, 1).
fibonacci(X+2, M+N) :- fibonacci(X, M), fibonacci(X+1, N),
positive(X), positive(M), positive(N), positive(M+N),
X < 11.
where n is a large number, e.g. maxint.
As far as I understand the current grounders will die or,
if n is not that big, produce a large collection of atoms
of the form positive(..).
Am I right?
If so, would it be possible to avoid large grounding of sorts
and only produce what the grounder produces for the program
without positive()?
(I am assuming that we only want to "show" fibonacci{}.)
============
Date: Mon, 29 Oct 2012
From: Martin Gebser
To: Michael Gelfond
Trying your program for n=1000 with lparse, which relies on the type
information in positive/1 for performing instantiation, takes quite some
time and memory. One wouldn't want to run lparse for n=10000 at all, so
it is justified to say that it dies.
Unlike lparse, gringo and dlv (the latter was the first to implement
the approach) work by forward chaining and produce a ground instance of a
rule only if *all* literals in the body have been identified as
potentially true. In view of this, the information in positive/1 is
redundant (or incorrect if n is too small), and gringo generates
instances of fibonacci/2 almost as quickly as without positive/1. Of
course, it also takes a bit of time to additionally output statically
given instances of positive/1.
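The effect can be made tangible with a sketch of body-driven instantiation (Python; an illustration of the idea rather than gringo's algorithm). Since rule instances are produced only from derived fib/2 atoms, positive/1 acts as a cheap membership test, and the number of instances is independent of n:

```python
def instantiate(n, limit=11):
    """Instantiate Michael's rule by forward chaining on fib/2;
    positive/1 is the range 1..n, tested but never enumerated."""
    positive = range(1, n + 1)
    fib = {(1, 1), (2, 1)}
    instances = 0
    changed = True
    while changed:
        changed = False
        for (x, m) in list(fib):
            for (x1, k) in list(fib):
                if (x1 == x + 1 and x < limit
                        and all(v in positive for v in (x, m, k, m + k))
                        and (x + 2, m + k) not in fib):
                    instances += 1
                    fib.add((x + 2, m + k))
                    changed = True
    return instances, len(fib)

print(instantiate(1000), instantiate(10**9))  # same result for any large n
```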
=============
Date: Thu, 11 Apr 2013
From: Yuliya Lierler
To: Vladimir Lifschitz
Here may be a good place to grasp the state of the art of EPR and its
grounding/solving technology:
www.mpi-sws.org/~jnavarro/papers/phdthesis.pdf
It is the PhD thesis of Juan Antonio Navarro-Perez, "Encoding and Solving
Problems in EPR" (2007); A. Voronkov was Navarro-Perez's advisor.
Sections 4.2 and 4.3 speak of the solving methods used in EPR. Section
4.2 focuses on techniques that in our community would be referred to as
intelligent grounding. Here is one example: do you remember the SAT
propagator called PureLiteral (when an atom always occurs positively or
always occurs negatively, all the clauses containing it may be
dropped/assigned true)? This technique is among the ones used in EPR at
the FO level.
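At the propositional level, the PureLiteral rule can be sketched as follows (a toy Python illustration):

```python
def pure_literal_eliminate(clauses):
    """Repeatedly drop clauses containing a pure literal: an atom occurring
    with only one polarity can be set so that all its clauses are true."""
    clauses = [frozenset(c) for c in clauses]
    while True:
        lits = {l for c in clauses for l in c}
        pure = {l for l in lits
                if (l[1:] if l.startswith("-") else "-" + l) not in lits}
        if not pure:
            return clauses
        clauses = [c for c in clauses if not (c & pure)]

print(pure_literal_eliminate([{"p", "q"}, {"-q", "r"}]))   # everything drops
print(pure_literal_eliminate([{"p", "-q"}, {"q", "-p"}]))  # nothing is pure
```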
Here is something not directly related to the above: one difference that
I noticed is that in their world, models themselves do not seem to be as
important as in ASP. So for EPR it is sufficient to produce a formula
that is satisfiable iff the original one is. For ASP the latter is not
sufficient. Yet, in the same dissertation they speak of using EPR as a
language for encoding planning problems. In planning, models should be
of importance.