Projects
1 - Final Project - NP-Hard Problems
Analyzing an NP-Hard Problem
This project will explore a known NP-complete problem and will consist of the following components:
- Problem presentation – explain the problem to be solved and its reduction to another accepted NP-complete problem.
- Exact solution code – develop code that exactly solves the problem.
- Approximation presentation – present an approximation algorithm for this problem and analyze it.
- Approximation code – develop code that performs this approximation.
- Evaluation of other projects – submit one review of another group's project.
Below you will find each of these components described in more detail.
Problem and Group Selection
This is a group project, ideally with 2 members (maximum of 3 with the instructor's permission). Your group will select an NP-complete problem to study from the list below:
- Max 3-Sat
- Max clique
- Traveling Salesman Problem (with a complete graph)
- Longest Path
- Vertex Cover
- Min Graph Coloring
- Max knapsack
Please note:
- all of the graph problems MUST accept input in the format used by the labs.
Submission and Grading
You should never start design or construction until you completely understand the project.
You should start by carefully reading the project specifications. (In general it is a good idea to print a paper copy so that you can take notes and perform calculations as you read.)
Implement all of the classes (in accordance with the specifications, perhaps informed by the Implementation Advice above) and submit them to Gradescope.
You are not required to submit tests for these classes.
Grading
| Project Part | Weight |
|---|---|
| Checkstyle | 20% |
| Correctness | 60% |
| Code Quality | 20% |
The code quality grade will be based on such things as:
- Comment clarity
- Code clarity (including variable names)
- Code duplication
- Elegance
- Acknowledgements (as appropriate)
You may submit to Gradescope an unlimited number of times.
2 - Homework 1 - Dualing Stacks and Objects with Python
Introduction
Recall the stack data structure, which you probably learned about in CS 240. Stacks support LIFO (last in, first out) access via push and pop operations. Amortized analysis provides insight into the most efficient ways to perform certain expensive manipulations, and specifies when we can spread their cost over a sequence of operations. For example, if we are using a dynamic array to store data, and we need to expand it when it gets full or shrink it when parts of it are not in use, when and by how much should we grow or shrink the array? Amortized analysis answers these questions.
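As a refresher, the LIFO behavior can be sketched as a tiny Python class. This is illustrative only; it is not the interface required by this assignment:

```python
class Stack:
    """Minimal LIFO stack backed by a Python list (sketch only)."""

    def __init__(self):
        self._items = []

    def push(self, value):
        self._items.append(value)   # add to the top

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()    # remove from the top

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # → 2 (last in, first out)
```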
In this assignment you will implement a data structure that grows or shrinks at times and in amounts based on amortized analysis. In particular, suppose we need two stacks that are related in such a way that usually one is large when the other is small. To make the most efficient use of space, we can have an array/list in which we store both of them, with each having its bottom at one end of the array/list. In this way, the stacks grow toward one another, sharing the space in the array/list.
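One way to sketch this shared-array layout is shown below. The class and method names are illustrative, not the assignment's required interface, and expansion on overflow is omitted:

```python
class DualStack:
    """Two stacks sharing one fixed-size list: stack 0 grows up from
    index 0, stack 1 grows down from the last index (sketch only)."""

    def __init__(self, capacity=8):
        self._data = [None] * capacity
        self._top0 = -1              # top index of stack 0 (empty)
        self._top1 = capacity        # top index of stack 1 (empty)

    def push(self, which, value):
        if self._top0 + 1 == self._top1:
            raise MemoryError("shared array is full")  # expansion omitted
        if which == 0:
            self._top0 += 1
            self._data[self._top0] = value
        else:
            self._top1 -= 1
            self._data[self._top1] = value

    def pop(self, which):
        if which == 0:
            if self._top0 < 0:
                raise IndexError("stack 0 is empty")
            value = self._data[self._top0]
            self._top0 -= 1
        else:
            if self._top1 >= len(self._data):
                raise IndexError("stack 1 is empty")
            value = self._data[self._top1]
            self._top1 += 1
        return value

d = DualStack(capacity=4)
d.push(0, 'a')
d.push(1, 'z')
d.push(0, 'b')
print(d.pop(0), d.pop(1))  # → b z
```

The key invariant is that the array is full exactly when `_top0 + 1 == _top1`, i.e., when the two tops meet.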
Of course, it might happen that the stacks grow together to the point that they use up all the space in the array/list. In this case, we need to expand the array. By how much? If we were to expand the array by just one slot (the minimum necessary), then each push operation on a full array would incur copying all the elements of the full array into a new array, as well as the single copy operation needed to add the new value to a stack. Each such push would therefore require O(n) data copy operations, where n is the size of the array, which is rather expensive.
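To make this cost concrete, here is a small counting sketch (a hypothetical helper, not part of the assignment) that tallies the data copies incurred when the array grows by exactly one slot on every push onto a full array:

```python
def copies_grow_by_one(n_pushes):
    """Count data-copy operations for n_pushes pushes, assuming the
    array starts empty and is expanded by exactly one slot each time."""
    size = 0
    copies = 0
    for _ in range(n_pushes):
        copies += size   # the array is full: copy `size` old elements
        size += 1
        copies += 1      # plus one copy to place the new value
    return copies

# 1000 pushes cost 1 + 2 + ... + 1000 = 1000*1001/2 copies: O(n) per push.
print(copies_grow_by_one(1000))  # → 500500
```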
However, if we increase the size of the array by a factor of k, so that an array of size n is replaced by an array of size kn, then there will be n + 1 data copy operations when the array grows (n to copy the existing data and 1 more for the new element), followed (in the worst case) by (k − 1)n operations for the (k − 1)n pushes that can be done before the array becomes full again. The average cost of a push between expansions is therefore $$\frac{(n + 1) + (k - 1)n}{(k - 1)n} = \frac{kn + 1}{kn - n} \in \mathcal{O}(1)$$. So it is more efficient to expand the array by a constant factor k. What should k be? The bigger k is, the lower the amortized cost, but we are also trying to save space. Generally a factor of 2 is used, so we double the size of the array when it is expanded. A similar analysis shows that if we want to minimize space usage without incurring excessive costs when shrinking the array, we should shrink the array when it gets to be half full.
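By the same token, a quick count for the doubling strategy (k = 2; again a hypothetical helper, not part of the assignment) shows the total number of copies growing only linearly in the number of pushes:

```python
def copies_with_doubling(n_pushes):
    """Count data-copy operations when the array doubles (k = 2)
    whenever it is full, starting from capacity 1."""
    size, capacity, copies = 0, 1, 0
    for _ in range(n_pushes):
        if size == capacity:
            copies += size   # copy existing elements into the new array
            capacity *= 2
        size += 1
        copies += 1          # place the new value
    return copies

# Resize copies are 1 + 2 + 4 + ... + 512 = 1023, plus 1000 placements,
# so the total stays under 3 copies per push on average: O(1) amortized.
print(copies_with_doubling(1000))  # → 2023
```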