Project 1 for CS 371R: Information Retrieval and Web Search
Adding Proximity Preference to Vector-Space Retrieval

Due: September 20, 2023 at 11:59 p.m.


Existing System

As discussed in class, a basic system for vector-space retrieval (VSR) is available in /u/mooney/ir-code/ir/vsr/. See the Javadoc for this system. Use the main method for InvertedIndex to index a set of documents and then process queries.

You can use the web pages in /u/mooney/ir-code/corpora/curlie-science/ and /u/mooney/ir-code/corpora/cs-faculty/ as the sets of test documents. The Curlie Science dataset contains 900 pages, 300 random samples each from the Curlie indices for biology, physics, and chemistry. The UTCS dataset contains 1000 pages from the UT CS website.

See the sample trace of using the system on the Curlie Science dataset and on the UTCS dataset.

Problem

One of the problems with VSR is that it does not consider the proximity of the query words in the document. With a bag-of-words model, the location of words in the document is irrelevant; only their frequency matters. Therefore, multi-word queries, where the specific combination of words indicates a specific concept only when the words occur close together, are not handled very well.

For example, in the sample trace for the Curlie Science dataset, for the first query "background radiation", the top retrieved document does not contain the phrase "background radiation". Instead, it has 41 occurrences of the word "radiation" and zero occurrences of the word "background". In fact, none of the top 10 retrieved documents contain the phrase "background radiation".

The next sample query, "virtual reality", has a similar problem: the top retrieved documents do not contain the phrase. The first document includes 42 occurrences of the word "reality" but no occurrence of "virtual". The second document contains only 3 occurrences of the word "virtual". Not until the eighth result does a relevant page that actually contains "virtual reality" appear.

We can see similar examples in the sample trace for the UTCS dataset. For the first query "academic achievements", the top results contain the words "academic" and "achievements" in separate parts of the web page but do not include relevant information about the academic achievements.

For the second query "real world", the top two results are web pages with nothing related to "real world" but different "real time" projects. The third result is relevant and includes the phrase "real-world".

You Fix It

Your task is to change the existing VSR code to add a proximity component to the final document scoring function. For multi-word queries, documents in which the words in the query appear closer together (although not necessarily directly adjacent) and in the same order as in the query, should be preferred. Your approach should be general-purpose and should produce better results for the examples in the sample traces for the Curlie Science dataset and the UTCS dataset.

Here are the sample solution traces produced by my solution to this problem for the Curlie Science dataset and the UTCS dataset. Note that the top documents now contain all the query words, close together and in the correct order.

In addition to the normal cosine-similarity metric, I calculated a specific proximity score for each retrieved document that measured how far apart the query words appeared in the document. The final score was the ratio of the vector score to the proximity score (both components are shown in the trace). The proximity score was computed as the closest distance (measured in number of words, excluding stop words) between an occurrence of one query word and an occurrence of another, averaged across all pairs of words in the query and all occurrences of those words in the document. A multiplicative penalty factor was included in the distance metric when a pair of words appeared in the reverse order from that in the query. This is only a sketch of what I did; many details are omitted.
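The sketch above leaves many details open, but its core computation can be illustrated in isolation. The following is a minimal, hypothetical version (it is not the instructor's actual implementation): the class name `ProximityScoreSketch`, the constant `REVERSE_ORDER_PENALTY`, and the representation of token positions as a map from query word to a sorted position array are all assumptions made for illustration.

```java
import java.util.*;

public class ProximityScoreSketch {
    // Hypothetical multiplicative penalty for a pair occurring in reverse order.
    static final double REVERSE_ORDER_PENALTY = 2.0;

    /**
     * Average, over all ordered query-word pairs (i before j in the query)
     * and all occurrences of word i in the document, of the distance to the
     * closest occurrence of word j. positions maps each query word to a
     * sorted array of its token positions in the document (stop words
     * excluded from the position count).
     */
    static double proximityScore(List<String> queryWords,
                                 Map<String, int[]> positions) {
        double total = 0.0;
        int pairs = 0;
        for (int i = 0; i < queryWords.size(); i++) {
            for (int j = i + 1; j < queryWords.size(); j++) {
                int[] posI = positions.get(queryWords.get(i));
                int[] posJ = positions.get(queryWords.get(j));
                if (posI == null || posJ == null) continue; // word absent
                for (int p : posI) {
                    total += closestDistance(p, posJ);
                    pairs++;
                }
            }
        }
        return pairs == 0 ? 1.0 : total / pairs;
    }

    // Distance from position p to the nearest position in the sorted array
    // other, penalized when the nearest occurrence precedes p (reverse order).
    static double closestDistance(int p, int[] other) {
        int idx = Arrays.binarySearch(other, p);
        if (idx >= 0) return 1.0; // same position; cannot occur for distinct tokens
        int ins = -idx - 1;       // index of first element greater than p
        double best = Double.POSITIVE_INFINITY;
        if (ins < other.length)   // nearest occurrence after p: correct order
            best = other[ins] - p;
        if (ins > 0)              // nearest occurrence before p: reverse order
            best = Math.min(best, (p - other[ins - 1]) * REVERSE_ORDER_PENALTY);
        return best;
    }
}
```

A final score in the spirit of the trace would then be `vectorScore / proximityScore(...)`, so documents where the query words appear close together, in order, are ranked higher.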

You do not have to adopt this exact approach. Feel free to be creative. However, your solution should be general-purpose (not hacked to the specific test queries), address the fundamental issue of proximity, and produce similarly improved results for the sample queries. Note that you may need to change many of the fundamental classes and methods in the code to extract and store information on the position of tokens in documents. When making changes, try to add new methods and classes rather than changing existing ones. The final system should support both the original approach and the new proximity-enhanced one (e.g., I created a specialization of InvertedIndex called InvertedProxIndex for the new version). Hint: I found it useful to use the Java Arrays.binarySearch method to efficiently find the closest position of a token to the occurrence of another token, given a sorted array of token positions.
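The binarySearch hint relies on the method's insertion-point convention: on a miss, `Arrays.binarySearch` returns `-(insertionPoint) - 1`, so the two candidates for the closest position bracket that insertion point. A small self-contained illustration (the class and method names here are invented for this example):

```java
import java.util.Arrays;

public class ClosestPositionDemo {
    // Returns the element of the sorted array closest to target,
    // using one O(log n) binary search instead of a linear scan.
    static int closestPosition(int[] sorted, int target) {
        int idx = Arrays.binarySearch(sorted, target);
        if (idx >= 0) return sorted[idx];      // exact match
        int ins = -idx - 1;                    // index of first element > target
        if (ins == 0) return sorted[0];                     // all elements above
        if (ins == sorted.length) return sorted[ins - 1];   // all elements below
        int before = sorted[ins - 1], after = sorted[ins];
        return (target - before <= after - target) ? before : after;
    }
}
```

With per-token position arrays kept sorted at indexing time, each closest-occurrence lookup during query scoring costs O(log n) rather than a scan over all occurrences.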

Report Instructions

Make sure to include the following in your report:

Submission Instructions

Please submit the following to Gradescope:

  1. All code files that you modified, in a folder called code/. This must include the main Java class, InvertedProxIndex.java. In the autograder, the code will be executed as follows:
    java ir.vsr.InvertedProxIndex -html <path-to-dataset>
    Apart from this, you can include any new or modified Java files and their .class files. Note that all the files you upload should be compilable from ir/vsr, not from any other directory.
  2. Your report: report.pdf
  3. Trace files from running your proximity-preference-enhanced inverted index: trace/curlie.txt and trace/faculty.txt. Do not include trace files from the original inverted index.

On submitting to Gradescope, your files should look something like this:


In submitting your solution, follow the general course instructions on submitting projects on the course homepage.

Grading

You will be graded on your code, documentation, and report. Please make sure that your code compiles and runs on Gradescope and passes the following tests before the submission deadline.
NOTE: If your code does not pass all of the below sanity tests before the submission deadline, you may lose a substantial part of the coding score.

After the deadline, we will evaluate your code on a set of hidden queries and autograde the retrievals based on a set of rubrics. These rubrics are yet to be finalized, but a general guideline is that proximity scoring should boost the scores of documents matching multi-word queries and improve their ranking in the top retrievals.

The grading breakdown for this assignment is: