The map-reduce programming paradigm is a fundamental tool for processing large data sets and is supported in current tools such as Hadoop. Apache Spark offers another programming paradigm for processing large data sets. In this course you will gain an understanding of the concepts embodied in map-reduce and investigate how map-reduce is used to address various problems in processing and analyzing large data sets, exploring map-reduce as implemented in Hadoop along with the associated Hadoop Distributed File System (HDFS). You will also gain an understanding of the concepts offered and supported in Spark, and investigate how to apply those concepts to various problems, including those you addressed using map-reduce.
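To illustrate the paradigm before we meet Hadoop itself, here is a minimal plain-Java sketch of the classic word-count example. This is not Hadoop code (it uses only java.util.stream, and the class name WordCountSketch is made up for illustration); it just shows the two phases: a map step emitting (word, 1) pairs and a reduce step summing the counts per key.

```java
import java.util.*;
import java.util.stream.*;

public class WordCountSketch {
    // "Map" phase: split one input line into (word, 1) pairs.
    static Stream<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                     .filter(w -> !w.isEmpty())
                     .map(w -> Map.entry(w, 1));
    }

    // "Reduce" phase: group the pairs by word and sum the counts.
    static Map<String, Integer> reduce(Stream<Map.Entry<String, Integer>> pairs) {
        return pairs.collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.summingInt(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        List<String> lines = List.of("to be or not to be",
                                     "to map is to reduce");
        Map<String, Integer> counts =
                reduce(lines.stream().flatMap(WordCountSketch::map));
        System.out.println(counts.get("to")); // prints 4
    }
}
```

In Hadoop the map and reduce functions run in parallel across many machines, with the framework handling the grouping ("shuffle") between the two phases; the logic of each phase, however, looks much like the two methods above.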
Upon completing this course, students will be able to design and implement map-reduce programs for a variety of large-data-set processing tasks, and to design and implement programs using Apache Spark.
Prerequisites: data structures and Java programming experience.
| Date | Assignment Out | Assignment Due | Points | Reading / Notes |
|---|---|---|---|---|
| Aug. 31 | 0 | | | Dean, Ghemawat paper |
| Sep. 5 | 1 | 0 | 0 | Design Patterns, Ch. 1 |
| Sep. 12 | 2 | 1 | 10 | Design Patterns, Ch. 2 |
| Sep. 26 | 4 | 3 | 15 | Design Patterns, Ch. 4 |
| Oct. 10 | 6 | 5 | 20 | Design Patterns, Ch. 5 |
| Oct. 17 | 7 | 6 | 20 | Design Patterns, Ch. 3 |
| Oct. 24 | 8 | 7 | 15 | Design Patterns, Ch. 6 |
| Oct. 31 | | | | Exam 1 (Hadoop Map-Reduce) |
| Nov. 2 | 9 | 8 | 25 | Learning Spark, Ch. 3 |
| Nov. 9 | | | | Learning Spark, Ch. 4 |
| Nov. 21 | | | | Learning Spark, Ch. 5, 6 |
| Nov. 23 | | | | Thanksgiving Holiday |
| Nov. 28 | 12 | 11 | 20 | Ch. 5, 6 cont. |
| Nov. 30 | | | | Learning Spark, Ch. 9 |
| Dec. 7 | | 12 | 20 | Exam 2 |
This course focuses on writing code to solve various problems, so all assignments are programming assignments. The assignments are cumulative: each builds on the programs you wrote for previous assignments, so it is important to complete each assignment on time so you can move on to the next one.
Small datasets will be provided for each assignment so that you do not consume excessive computing resources (time and space) while developing your solution. Some assignments will also offer a large dataset so that you can measure how your map-reduce solution scales with dataset size and available computing resources.
You are free to discuss approaches to solving the assigned problems with your classmates, but each student is expected to write their own code. Source code must be submitted for each assignment, along with the results you obtained when running your program against the provided datasets. If duplicate work is detected, all parties involved will be penalized. All students should read and be familiar with the UTCS Rules to Live By.
Percentages for each element may be different for each assignment.
Required artifacts for each programming assignment are due at the start of class (9:30 AM) on the due date, as we will be discussing the solution during that class period. The penalty for late submission is 25%.
Special Notes on Assignment Submission and Grading:
In using cloud-based services such as AWS, you will have an account that is charged for the resources you use. You are responsible for shutting down any services you start up in the course of doing your assignments. It is possible to start services (like Elastic Map Reduce) and leave them running even though you are not doing any work. I encourage you to shut down any services you have started at the end of a work session. If you repeatedly leave unused services running, you may exhaust your account's free credits.
Your final grade will be determined by your cumulative percentage score over all assignments and exams. Assignment and exam percentages are: