## Reviews and Comments on Paper 12

Paper: Ashutosh Gupta. Improved Single Pass Algorithms for Resolution Proof Reduction (Regular paper)

Current decision: (no decision)

### Summary of Received Reviews and Comments

Reviews superseded by other reviews are shown in grey in the table. All times are GMT.

| | date | PC member | subreviewer | score | confidence | clarity | |
|---|---|---|---|---|---|---|---|
| Review 1 | May 27 | Gianpiero Cabodi | | 2 | 4 | 4 | |
| Review 2 | May 30 | Zurab Khasidashvili | | 1 | 3 | 5 | |
| Review 3 | Jun 1 | Alan Hu | Sam Bayless | 2 | 3 | 5 | |
| Review 4 | Jun 2 | E. Allen Emerson | Oswaldo Olivo | -2 | 2 | 2 | (revise) |
| Comment 1 | Jun 2 | Madhavan Mukund | | | | | |

### Review 1

PC member: | Gianpiero Cabodi |

Overall rating: | 2 (accept) |

Confidence: | 4 (expert) |

Clarity of presentation | 4 (good (mostly understandable)) |

Review: | The paper describes an approach to reducing refutation proofs generated by SAT solvers. Most modern SAT solvers are equipped with the possibility to log an UNSAT call onto a resolution graph (a DAG) rooted at the empty clause. The graph is a proof of unsatisfiability. As the proof is not unique, many algorithms exist in the literature, proposing different ways to reduce proofs. The paper can be considered a follow-up of ref. [3], which, in contrast to other works, targeted low-complexity algorithms for proof reduction, starting from a linear algorithm able to work on tree subsets of the proof DAG. The author initially proposes two algorithms that can work on DAGs (instead of tree subsets); the algorithms offer different trade-offs between time complexity and reduction power. Given the strengths and limitations of the two algorithms, the author proposes a third (parametrized) one that explores the intermediate range between the two. The resulting algorithm improves over [3] while still remaining a fast (low-complexity) algorithm. The paper is readable and well-balanced. Related works are well described, as is the proposed contribution. The main limitation of this work is related to its usage in potential applications. As the author essentially proposes a fast algorithm for a sub-optimal solution (the author acknowledges that higher-complexity algorithms could obtain greater reductions), the best way to show the value of the algorithm would be to evaluate it within the framework of an application. This is not done in this paper, where the author just shows a higher degree of reduction (a few percent) with respect to [3]. So the difference between a delta paper and a more relevant contribution is heavily dependent on applicability. |

Confidential remarks for the PC: | This is a relatively well-written paper, but it could be just a delta paper if the proposed optimization proves to provide little improvement with respect to related work [3]. So far I'm for acceptance, based on the comparisons with related work, but I could lower my score during discussion. |

Time: | May 27, 20:53 |

### Review 2

PC member: | Zurab Khasidashvili |

Overall rating: | 1 (weak accept) |

Confidence: | 3 (high) |

Clarity of presentation | 5 (lucid) |

Review: | Contemporary SAT solvers are able to extract resolution proofs of unsatisfiability from runs on unsatisfiable problems. Many algorithms have been developed recently for further reducing the size of these resolution proofs by manipulating the proofs returned by the SAT solvers. Reducing the size of resolution proofs is closely correlated with reducing the unsat core size of a given unsat problem. Small unsat cores have many applications in advanced model checking algorithms, so obtaining small resolution proofs of unsatisfiability is an important problem. The paper proposes three enhancements to a known resolution proof reduction algorithm (the HVC 2008 paper by Bar-Ilan et al.). On SAT 2011 competition MUS (Minimal Unsat Subset) benchmarks, that algorithm gives an 11% reduction in resolution proof size. The first enhancement is quite simple, and it brings a marginal improvement to the reduction of proof sizes with almost no increase in runtime. The second enhancement brings an additional 11% proof size reduction (compared to the original algorithm) at the expense of around a 3x increase in runtime. This enhancement is the main technical contribution of the paper, as it extends the original algorithm to work on DAG-like resolution proofs (the original algorithm works on tree-like resolution proofs only). The third enhancement is a heuristic for applying the second enhancement so that the runtime increase compared to the original algorithm is significantly reduced (from around 3x to around 2x) while the additional gain in proof size reduction remains close to 2x (22% instead of 11%). The paper is well written, and examples explaining the intuition behind the enhancements are given. (There seems to be a typo in line 2 of the algorithm in Fig. 5: should the second occurrence of v be v'?)
The drawback of the paper, as I see it, is that it extends a resolution proof size reduction algorithm which is not the best available today, and there is no discussion in the paper of how this work improves upon the state of the art. |

Time: | May 30, 18:23 |
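[Editor's note: the reviews refer to the Bar-Ilan et al. (HVC 2008) redundancy-removal idea, here called RmPivots, only in prose. The following minimal Python sketch illustrates that style of tree-like redundancy removal as this editor understands it; it is not the paper's or [1]'s actual code. All names, the `rebuild` pass, and the restriction to tree-like proofs are simplifications of this illustration. A proof is traversed from the root while collecting literals that will be resolved away further down; a resolution step whose pivot is already in that set is redundant and collapses into one of its antecedents.]

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    clause: frozenset                  # literals, e.g. frozenset({1, -2})
    left: Optional["Node"] = None      # antecedent containing +pivot
    right: Optional["Node"] = None     # antecedent containing -pivot
    pivot: Optional[int] = None        # None for input clauses (leaves)

def resolve(a: Node, b: Node, p: int) -> Node:
    """Resolve a (containing p) with b (containing -p) on pivot variable p."""
    assert p in a.clause and -p in b.clause
    return Node(frozenset((a.clause - {p}) | (b.clause - {-p})), a, b, p)

def rm_pivots(n: Node, safe: frozenset = frozenset()) -> Node:
    """Walk root-to-leaves; `safe` holds literals that are resolved
    away below this node. A node whose pivot literal is already safe
    is redundant and collapses into one antecedent."""
    if n.pivot is None:
        return n
    if n.pivot in safe:        # an extra +p is eliminated downstream anyway
        return rm_pivots(n.left, safe)
    if -n.pivot in safe:
        return rm_pivots(n.right, safe)
    return Node(n.clause,
                rm_pivots(n.left, safe | {n.pivot}),
                rm_pivots(n.right, safe | {-n.pivot}),
                n.pivot)

def rebuild(n: Node) -> Node:
    """Recompute clauses bottom-up after restructuring; resolution
    steps whose pivot disappeared are dropped."""
    if n.pivot is None:
        return n
    l, r = rebuild(n.left), rebuild(n.right)
    if n.pivot not in l.clause:
        return l
    if -n.pivot not in r.clause:
        return r
    return resolve(l, r, n.pivot)
```

On a small refutation in which the same pivot is resolved twice along a root-to-leaf path, the inner step collapses and the rebuilt proof has fewer nodes while still deriving the empty clause. Real implementations must handle shared (DAG) nodes, which, per Review 2, is exactly what the paper's second enhancement addresses.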

### Review 3

PC member: | Alan Hu |

Reviewer: | Sam Bayless |

Overall rating: | 2 (accept) |

Confidence: | 3 (high) |

Clarity of presentation | 5 (lucid) |

Review: | This paper introduces three algorithms for reducing the sizes of resolution proofs, each of which improves upon an algorithm introduced in Bar-Ilan, Fuhrmann, Hoory, Shacham, and Strichman (2008). This original algorithm, referred to as RmPivots in the current paper, operates by removing redundant steps in the resolution proof. Although that algorithm produces less compact resolution proofs than previous algorithms, it has a runtime that is linear in the size of the proof, instead of exponential. Reducing the size of resolution proofs is an important research topic, and improving significantly upon this algorithm would be useful. The first algorithm introduced in the current paper by A. Gupta makes a minor change to the original algorithm, and produces modestly more compact resolution proofs when tested on ~350 examples from the SAT 11 competition and SMTLIB benchmarks. The second algorithm makes more extensive changes, and nearly doubles the amount of reduction (from 12% to 22% in one benchmark, and from 6% to 10.5% in the other) - but does so at the cost of being much slower (almost 4 times slower in these benchmarks). The third algorithm simply modifies the second to only search for redundant steps in nodes in the resolution proof that have fewer than a bounded number of children. They find that this change makes the third algorithm almost as fast as the original RmPivots algorithm (when small bounds are used), while still being almost as effective as the second algorithm - that is, still reducing the proof by nearly twice as many nodes as RmPivots. While the first algorithm change is minor and not by itself noteworthy, the second and third algorithms make clear improvements to the state of the art. The paper is very well-organized and clearly written (modulo some minor grammar errors), and provides strong experimental results on relevant benchmark instances in Figures 7 and 8. The algorithm descriptions are also very clear.
Notes: The graphs in Figure 9 could be improved - they are small and hard to read. The line in 9a should be explained in the caption (or removed). In several places, you describe the algorithms as 'additionally removing 0.89% and 10.57% of clauses'. You should consider instead just stating the total percentages of removed clauses (e.g., 12.86% and 22.54%), because otherwise the percentages are slightly ambiguous (there are multiple ways to combine them with the 11.97% removed by RmPivots). |

Confidential remarks for the PC: | Sam is a PhD student who is currently the SAT guru in my group. He is very familiar with the literature and has come up with several clever ideas already, so I respect his assessment greatly. I have also read the paper and support his review: Sam is sometimes a bit over-enthusiastic, so I reduced his "strong accept" to an "accept", based on my feeling that this work, while very solid, is somewhat incremental. |

Time: | Jun 1, 05:02 |

### Review 4

PC member: | E. Allen Emerson |

Reviewer: | Oswaldo Olivo |

Overall rating: | -2 (reject) |

Confidence: | 2 (medium) |

Clarity of presentation | 2 (poor (hinders understanding)) |

Review: | - The paper presents three new algorithms for reducing resolution refutation proofs in satisfiability.
- Although the previous claim is technically true, it is easy to see that the third algorithm, k-RMPIVOTS, encompasses the others by changing the parameter, so in fact the authors are providing only one algorithm.
- RMPIVOTS* simply consists of resetting a redundancy set to the clauses in the current node instead of the empty set (as was done in the original RMPIVOTS). This is very simple, and I don't believe it can be considered a new algorithm, merely a simple implementation flavor of the original one, so it doesn't seem publishable beyond a journal version of [1] or a workshop paper.
- On the other hand, the generalization to k children in k-RMPIVOTS is a little more interesting, although the notion of expansion sets is not defined very clearly in the paper.
- The latter parts of the paper contain many typos.
- The complexity section is sketchy, mainly because the underlying data structures are not explicitly stated: insertion costs O(log(n)) in one algorithm and O(1) in another.
- The experiments do not seem convincing. The benchmarks in Fig. 8 have small running times and reductions. Another problem is that the average might not be a sufficient indication of the effectiveness of k-RMPIVOTS; as the authors indicate, there might be huge variations in the reductions among individual examples. I would have liked to see a more detailed experimental section.

In summary, the first algorithm is trivial enough not to be called a new algorithm, the more complex algorithm needs some manual tuning of the parameter k to provide a good trade-off between time and effectiveness, and the experiments are not sufficiently detailed. The presentation of the paper also needs to be improved.

[1] O. Bar-Ilan, O. Fuhrmann, S. Hoory, O. Shacham, and O. Strichman. Linear-time reductions of resolution proofs. In Haifa Verification Conference, 2008. |

Time: | Jun 2, 02:07 |

### Comment 1

By: | Madhavan Mukund |

Comment: | Allen, You are the guardian for this paper and your reviewer's opinion is diametrically opposed to the others. Also, your reviewer has the lowest confidence of the four. Can you get a second opinion and also initiate a discussion? |

Time: | Jun 2, 16:06 |