Project 3 for CS 371R: Information Retrieval and Web Search
Web Spidering and PageRanking

Due: 11:59pm, Nov. 1, 2023
(Making web page: Part 1 due: Oct. 24; Part 2 due: Oct. 26)

IMPORTANT: This assignment has THREE parts, each with a different due date. Please note that late submissions will NOT be accepted for the first two parts.

Existing Spiders

As discussed in class, a basic system for spidering the web is available in /u/mooney/ir-code/ir/webutils/. See the Javadoc for this code. Use the main method of the Spider class to start from a particular URL, spider the web breadth-first, and save the documents in a specified directory for subsequent indexing and searching with VSR. Also see the specializations SiteSpider and DirectorySpider, which restrict their crawling to a particular site (host) or directory, respectively.

See a sample trace of running SiteSpider on the UT CS department faculty page to collect 75 pages related to CS faculty.

This assignment will not require using the "-safe" spidering flag that invokes restrictions according to the Robot Exclusion Policy since we will be sticking to spidering within the department. Therefore the Safe* and Robot* classes can be ignored for now. However, if you spider outside the department, be sure to use "-safe".

A collection of 1000 department pages SiteSpidered from http://www.cs.utexas.edu/faculty are cached in /u/mooney/ir-code/corpora/cs-faculty/. Like curlie-science, this directory can be indexed and searched using VSR, as in Project 1.

Your Task

Your assignment is to make a specialization of the Spider class called PageRankSpider that computes the PageRanks of the spidered pages based on their link structure, and a specialization of the InvertedIndex class called PageRankInvertedIndex that utilizes the PageRanks to compute the relevance of documents. Make sure to override only the methods whose behavior you change. You should also create a further specialization, PageRankSiteSpider, that restricts its spidering accordingly.

While crawling, PageRankSpider should build a graph from the incoming and outgoing links. When computing PageRank, only pages that are actually indexed (saved to disk) should be included in the graph as nodes. You may find the ir.webutils.Graph and ir.webutils.Node data structures helpful for building and manipulating the graph. The spider should then run the PageRank algorithm on the graph and store all the PageRanks in a text file named page_ranks.txt in the same directory as the crawled pages. The format of page_ranks.txt should be like this example:

P001.html 0.006494458532952974
P002.html 0.009569125239295519
P003.html 0.006569776377162855
Each line contains a file name, a single space, and then the computed PageRank for that document.

With respect to the PageRank algorithm's parameters, use 0.15 for alpha and 50 for the number of iterations.
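The iterative computation can be sketched as follows. This is a minimal illustration, not the course code: it uses a plain adjacency map instead of ir.webutils.Graph, the class and method names are hypothetical, and it applies one common update rule (a uniform alpha/N random-jump term plus (1-alpha) times the rank passed along out-links, renormalized each iteration). Check this formulation against the one given in class before implementing.

```java
import java.util.*;

public class PageRankSketch {
    // Compute PageRank over an adjacency map: page -> list of pages it links to.
    // Only pages that appear as keys (i.e., indexed pages) receive rank.
    static Map<String, Double> pageRank(Map<String, List<String>> links,
                                        double alpha, int iterations) {
        Set<String> pages = links.keySet();
        int n = pages.size();
        Map<String, Double> rank = new HashMap<>();
        for (String p : pages) rank.put(p, 1.0 / n);        // uniform initial ranks

        for (int it = 0; it < iterations; it++) {
            Map<String, Double> next = new HashMap<>();
            for (String p : pages) next.put(p, alpha / n);   // random-jump term
            for (String q : pages) {
                List<String> out = links.get(q);
                if (out.isEmpty()) continue;                 // dangling page: no mass to pass
                double share = (1 - alpha) * rank.get(q) / out.size();
                for (String p : out)
                    if (next.containsKey(p))                 // ignore links to non-indexed pages
                        next.merge(p, share, Double::sum);
            }
            // Renormalize so the ranks sum to 1.
            double total = 0.0;
            for (double v : next.values()) total += v;
            for (String p : pages) next.put(p, next.get(p) / total);
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        // Toy three-page graph; P002 has two in-links, so it should rank highest.
        Map<String, List<String>> g = new HashMap<>();
        g.put("P001.html", List.of("P002.html"));
        g.put("P002.html", List.of("P001.html", "P003.html"));
        g.put("P003.html", List.of("P002.html"));
        Map<String, Double> r = pageRank(g, 0.15, 50);
        for (Map.Entry<String, Double> e : r.entrySet())
            System.out.println(e.getKey() + " " + e.getValue()); // page_ranks.txt line format
    }
}
```

Writing each "name rank" pair on its own line, as in main above, produces the page_ranks.txt format shown earlier.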

You can crawl the following URL to help you verify that your PageRank algorithm works:

 https://www.cs.utexas.edu/~mooney/ir-course/proj3/a.html 

In addition to indexing the documents, PageRankInvertedIndex should read the PageRanks of the documents from the page_ranks.txt file described above. When computing the relevance of a document for a query, it should add the document's PageRank, scaled by a weight parameter, to the score. The weight parameter should be a command-line argument for PageRankInvertedIndex specified with "-weight value".
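The scoring side can be sketched as follows, assuming the page_ranks.txt format shown above. The class and method names here are hypothetical; in the real PageRankInvertedIndex the boost would be applied inside the inherited VSR retrieval code rather than in a standalone helper.

```java
import java.io.*;
import java.util.*;

// Sketch: load page_ranks.txt and boost a base retrieval score by weight * PageRank.
public class PageRankScoring {
    // Parse lines of the form "<file name> <rank>" into a map.
    static Map<String, Double> loadPageRanks(File file) throws IOException {
        Map<String, Double> ranks = new HashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.trim().split(" ");
                if (parts.length == 2)
                    ranks.put(parts[0], Double.parseDouble(parts[1]));
            }
        }
        return ranks;
    }

    // Final score = base vector-space score + weight * PageRank of the document.
    // With weight = 0 this reduces to the original InvertedIndex score.
    static double score(double vsrScore, String docName,
                        Map<String, Double> ranks, double weight) {
        return vsrScore + weight * ranks.getOrDefault(docName, 0.0);
    }
}
```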

Making Web Pages (Parts 1 and 2)

As discussed in class, in order to create test data for this assignment, everyone should create a special personal page for this class and submit it to Canvas.

Important: Parts 1 and 2 of this assignment cannot be turned in late!

Part 1 (2.5 points): due Oct. 24
You should include links to at least 5 webpages of courses that you have enjoyed, drawn from the list of CS courses located at http://www.cs.utexas.edu/users/mooney/ir-course/proj3/course-list.html. Please include the links exactly as they are given in this list, and don't worry if your favorite class is not included; we are just creating a toy link structure. For example, your webpage may look like this example (or the example as a webpage).
This simple part counts for 2.5% of the project grade. After the deadline, all of the pages will be linked from http://www.cs.utexas.edu/users/mooney/ir-course/favorite_classes.html.
Please do not include any personal information on your webpage. We will post your webpages using anonymized URLs to comply with FERPA.

Submission instructions for Part 1: Submit a single file on Canvas under the assignment "Project 3 - Part 1 (Webpage)". The file should be named [PREFIX]_favorite_classes.html (e.g. proj3_jd1234_favorite_classes.html) and will have the HTML for your webpage.

Part 2 (2.5 points): due Oct. 26
Once all favorite classes pages are in place by Oct. 24, select at least 3 other students' favorite classes pages and link to them from your page. You may use criteria such as how much you agree with their favorite courses. Again, the bottom line is to make an interesting toy link structure. Here is an example of the updated web page. This part counts for 2.5% of the project grade.
Submission instructions for Part 2: Submit a single file on Canvas under the assignment "Project 3 - Part 2 (Webpage)". The file should be named [PREFIX]_favorite_classes.html (e.g. proj3_jd1234_favorite_classes.html) and will have the updated HTML for your webpage.

Crawling, indexing and searching the webpages (Part 3)

Use your PageRankSiteSpider to crawl from http://www.cs.utexas.edu/users/mooney/ir-course/favorite_classes.html and index all student course pages. Use a limit of 200 pages. Your PageRankSiteSpider trace file should have the same options as this spider solution trace file. This is our solution.

Index and search the resulting directory of pages using PageRankInvertedIndex and compare the search results for the following values of weight: {0.0, 1.0, 5.0, 10.0}. Note that a weight of 0 should be the same as the original InvertedIndex. Try the following queries:

Your PageRankInvertedIndex trace file should have the same input weights and queries as these three sample solution trace files: w=0.0, w=1.0, w=5.0.

You should submit two trace files of the spidering and the queries as described in the submission section below.

Report and Trace Files

In your report, describe the PageRank algorithm as you have implemented it, describe how you changed the retrieval process to incorporate it, and give instructions for running the code. Additionally, try several queries to get a feel for the effects of PageRank, and then answer at least these two questions:

  1. Does PageRank seem to have an effect on the quality of your results, as compared to the original retrieval code? Why or why not?
  2. How does varying weight change your results? Why do you think it changes in this way?

(You should explicitly answer these questions, i.e., in your report put "Q1. Does...?" and then give the answer underneath so that we do not have to search for your answers. Make sure to explain your answers rather than just stating them.)

Submission

You should submit your work on Gradescope. In submitting your solution, follow the general course instructions on submitting projects on the course homepage. Along with that, follow these specific instructions for Project 3:

*** You will need to ensure that the following commands run successfully on the lab machines: ***


The submitted files on Gradescope should look like this:


The autograder output on Gradescope should look like this:

Grading Rubrics

The autograder scoring is as follows: