Announcements

ICS-33: Intermediate Programming

In reverse-chronological order


#24: 6/11/22
Program #5 Graded
The TAs/Readers have run the rubric that I gave them for Program #5 (observing the behavior of your simulations, and looking at your code for inheritance) and the grades are now recorded. See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The class average was about 87% and the median was 92%, meaning that most students correctly solved most problems, and over half (65%) of the class correctly solved all the problems (or had minor deductions). Overall there were 65% As, 17% Bs, 6% Cs, and 9% Ds and Fs.

About 43% of the students submitted early, and these early submitters scored much better (average 95%) than students submitting on the due day (average 80%); I am assuming that some students ran out of time before they finished all the parts.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order); Column B contains an X if we believe the student submitted work on time; Column C shows who graded your program; Column D shows the extra credit points for early submissions.

Row 2 shows the worth (in number of points) for each part of the problem. Rows 4-5 show further information about the tests performed in each column.

Rows 6 and beyond show credit awarded for each student: a blank cell means full credit; X means no credit; and P means partial credit, which means substantially correct but missing something important. Each of these marks should have a comment attached to it with the TA's brief description of the problem. Column S shows each student's cumulative score, for all the parts in the single problem in this assignment. Columns T-V show each student's cumulative Score, the score Rounded to an integer (what integer is entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 50).

Note the following instructions were in the assignment:

When you define these classes, you should not modify any code written in their base classes. You should also not access by name or duplicate any instance variables (attributes) that are inherited. You can call methods in the base classes to access/update their instance variables and override any methods in the base classes (whose bodies might call the overridden methods to help them accomplish their task). You can also define new methods. Each leaf class must define or inherit an update and display method, which respectively implement the behavior of objects in that class and the image that they display on the canvas.

For the last three columns, my grading instructions to the TAs were as follows. Although subjective, the averages for all the programs the TAs graded were within a few points of each other, so the grading was consistent.

  • For Specials, observe any "interesting behavior" (can you see something; just changing a size/color under special circumstances, or avoiding some already programmed behavior, is not interesting).
    1. If no comment in module telling what it does: P
    2. If no noticeable interesting behavior: P
    3. If neither comment nor interesting code (also, not submitted): X

  • For the model class
    1. if update_all loops over multiple sets (e.g., one per type of simulton; not extensible for different simultons) or does removals (individual classes should): X
    2. if update_all calls type or isinstance (the update methods of the classes should do this): X
    3. if update_all calls update without passing model as an argument: X

  • For the Hunter class
    1. if Hunter doesn't derive from Pulsator/call some Pulsator.method: X
    2. if Hunter refers to self._anything (any attributes not declared in Hunter.__init__): X
    3. if Hunter has a loop (find in the model and Pulsator should do the heavy lifting in this class): P

This assignment was designed to illustrate the mechanics of Python inheritance, and its use to minimize code (including using multiple inheritance, once). It also illustrated the Model-View-Controller (MVC) way of writing Graphical User Interface (GUI) applications, with the student supplying code related to the Model only, and me supplying code for the View and Controller for its animation.
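
For students reviewing this material, here is a minimal sketch of the pattern the rubric rewards. Only the Pulsator and Hunter classes and the update(model) method come from the assignment; everything else (the Simulton base class, its position/move_to methods, the counter, and the model.remove/model.find_nearest helpers) is a hypothetical stand-in:

    class Simulton:
        def __init__(self, x, y):
            self._x, self._y = x, y

        def position(self):
            return (self._x, self._y)

        def move_to(self, x, y):
            self._x, self._y = x, y

    class Pulsator(Simulton):
        def __init__(self, x, y):
            super().__init__(x, y)
            self._counter = 30                  # Pulsator's own new attribute

        def update(self, model):                # model is always passed to update
            self._counter -= 1
            if self._counter == 0:
                model.remove(self)              # the class removes itself; update_all doesn't

    class Hunter(Pulsator):
        def __init__(self, x, y):
            super().__init__(x, y)              # never duplicate inherited attributes

        def update(self, model):
            super().update(model)               # let Pulsator do its share of the work
            prey = model.find_nearest(self.position())  # the model does the finding
            if prey is not None:
                self.move_to(*prey.position())  # state changed only via base-class methods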


#23: 6/5/22
Quiz #8 Graded
The Readers have graded the code and document submissions for Quiz #8. The grades are now recorded. Recall that this quiz was only 15 points, not the normal 25. See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7.

The class average was about 85% and the median was 87%, meaning that most students correctly solved most problems, and nearly half (47%) of the class correctly solved all the problems (or had minor deductions). Overall there were 47% As, 26% Bs, 10% Cs, and 17% Ds and Fs.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order) and Column B contains an X if we believe the student submitted work on time; Column C shows the extra credit points for early submissions. Row 1 for Columns D-G shows how many points the problems were worth.

Rows 4 and beyond show the number of points earned by each student. Columns H-J show each student's cumulative Score, the score Rounded to an integer (what integer is entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 15).

The Readers graded the following problems.

1a: Kyuho Oh (half: average 3.71 pts)
1a: Aaron Winter (half: average 3.68 pts)
1b: Jakub Antosz (all)
2a: Valeria Maya (half)
2b: Elizabeth Wing Yun Lui (all)
The spreadsheets have some comments explaining point deductions: read the comments in the cells. Email the appropriate Reader about any grading issues. Here is a quick overview of the general rubric (see my answers online).
  • Problem #1a (2 pts): The first lambda should time closest_2d called on a pre-existing random list (not constructed during the timing); the second lambda should compute and bind a new random list on which closest_2d is run; typically this requires the use/setting of a global variable or an object/attribute, but there are other ways, so long as the list is DIFFERENT for every call to closest_2d that performance makes (5 calls); see the first sketch after this list. Many students just created one random list for each new size, and closest_2d was then called 5 times on that same list; calling sort first still leaves the same list, but if you randomized the order of the points (instead of creating new points), that was counted as a different list. The timings should generally increase by a factor of a bit over 2 (with more variability early on); your program should produce appropriately labeled output (as appears in the sample.pdf). (2 points): your code should include a try/except to catch the exception raised by timings that were too short (not even 1 clock tick) and to print the size of the test that was unable to run. (1 point): output correctly labeled (as in the sample) with good spacing between data, for readability.

  • Problem #1b: Ratios (especially for big N) should be a bit over 2; the complexity is O(N Log N), but because O(N) is so similar to O(N Log N) we counted either as correct. Your answer must have been written correctly using big-O notation. We then computed the time for 1,000,000,000 based on the timing in the last row of your table (see the second sketch after this list); your result should have the appropriate units.

  • Problem #2a: Profiling should be done once and should not include the generation of the random graph; the first output should be sorted in decreasing order by ncalls and the second by tottime, showing the top 12 functions (with their directories stripped, as shown in the lecture notes); see the third sketch after this list.

  • Problem #2b: Nothing special to say about grading, except there were a variety of answers considered correct, so long as they were in reasonable ranges. We did not grade part b here (just a and c). Some students in part 3 did not use as their denominator the timing from the table in part 1b. So a, 2a, 2c, and 3 were each worth 1/2 point.
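
For students revisiting these problems, here are three small sketches keyed to the rubric points above. They use only standard-library modules (timeit, math, cProfile, pstats) rather than the course's performance tool, and every function name and number in them is a made-up stand-in.

The first sketch shows the structure #1a required: a setup action that rebinds a NEW random list before every timed call, and a try/except around the timing (the course tool raised an exception for timings under one clock tick; timeit itself does not, so the except clause here just mirrors the required shape):

    import timeit, random

    def closest_2d(points):                    # placeholder body
        return min(points)

    points = []                                # global rebound by the setup lambda

    def new_points(n):
        global points
        points = [(random.random(), random.random()) for _ in range(n)]

    n = 1_000
    for _ in range(8):
        try:
            # setup rebinds a NEW list before each of the 5 timed calls,
            # so closest_2d never runs on the same list twice
            t = min(timeit.repeat(lambda: closest_2d(points),
                                  setup=lambda: new_points(n),
                                  repeat=5, number=1))
            print(f'{n:>12,} {t:12.6f} secs')
        except Exception:                      # a timing too short to measure
            print(f'{n:>12,} unable to run')
        n *= 2

The second sketch shows the #1b extrapolation, with made-up numbers for the last table row; assuming T(N) = c*N*log2(N), the constant c cancels in the ratio:

    import math

    n_last, t_last = 1_280_000, 2.5            # hypothetical last row: size, seconds
    n_big = 1_000_000_000
    t_big = t_last * (n_big * math.log2(n_big)) / (n_last * math.log2(n_last))
    print(f'{t_big:,.0f} secs = {t_big/3600:,.2f} hours')

The third sketch shows the #2a profiling recipe: build the data first, profile once, then print the top 12 functions two ways:

    import cProfile, pstats, random

    data = [random.random() for _ in range(100_000)]   # built BEFORE profiling

    def analyze(xs):                                   # stand-in for the graph code
        return sorted(xs)

    cProfile.run('analyze(data)', 'profile.out')       # profile exactly once

    stats = pstats.Stats('profile.out')
    stats.strip_dirs().sort_stats('ncalls').print_stats(12)    # decreasing by ncalls
    stats.strip_dirs().sort_stats('tottime').print_stats(12)   # decreasing by tottime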

This assignment was designed to provide you with an opportunity to use both performance tools on a small scale, in the context of problems related to analyzing algorithms and code. It is easy to scale up and measure arbitrarily complicated code. As with all assignments, you should examine my solutions.


#22: 5/29/22
Quiz #7 Graded
The Readers have graded the Gradescope submissions for Quiz #7. The grades are now recorded. See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The class average was about 75% and the median was 80%, meaning that some students correctly solved most problems, but less than half (28%) of the class correctly solved all the problems (or had minor deductions). Overall there were 28% As, 28% Bs, 11% Cs, and 33% Ds and Fs.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order); Column B contains an X if we believe the student submitted work on time; Column C shows the extra credit points for early submissions. Row 1 in Columns D-I shows how many points the problems were worth.

Rows 3 and beyond show the number of points earned by each student. Columns J-L show each student's cumulative Score, the score Rounded to an integer (what integer is entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 25).

The TAs/Readers graded the following problems.

  Problem 1: Rahima
  Problem 2: Elizabeth
  Problem 3: Kyuho
  Problem 4: Aaron (left problem) and Valeria (right problem)
  Problem 5: Jakub
  Problem 6: Haining
You can see your score and the rubrics on Gradescope. I will be doing all the regrades. If you think the rubric was applied incorrectly (see my solution too), you can ask for a regrade on Gradescope. But, before you ask for a regrade, contact the TA/Reader who graded that problem first, to determine whether it was graded correctly. For Quiz #6 a few students didn't contact the TA/Reader and lost points for not following the correct protocol. If you have a question about the rubric, ask it on Ed Discussion: you may do so anonymously.

Students might have a hard time understanding their grade on part 2a: I wanted to see the calculation set up as it was in the notes, with simplification (cancellation) occurring to get to the answer; using big-O notation was not appropriate in the calculation. Likewise, for part 3c many students wrote about N as if it were the argument to the function, instead of N being the size of the argument: to get full credit students needed to correctly describe the argument.
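
As a generic illustration of the intended form (not the actual quiz problem): for a doubly-nested loop whose inner loop runs i times for i = 1 to N, set up and simplify the exact count,

    1 + 2 + ... + N  =  N(N+1)/2  =  N**2/2 + N/2

and only after the simplification identify the dominant term (N**2/2) and name the complexity class O(N**2); writing big-O notation inside the calculation itself skips exactly the steps being graded.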

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

This assignment was designed to provide you with an introduction to solving problems related to complexity classes and analyzing algorithms and code. All these topics may be tested again on the Final exam (and frequently come up in job interviews, exploring a student's depth of understanding about the non-coding aspects of programming). As with all assignments, you should examine my solutions.


#21: 5/26/22
In-Lab Programming Exam #3 Graded
I have run the automatic batch self-check tests for In-Lab Exam #3 and the grades are now recorded. See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The actual file I used for grading is bsc. You can find your solutions (by your ID Hashed), my solution, and the bscile3S22.txt file that I used to compute grades for this assignment in Ed Resources for this class (see the name ile3materials.zip).
About two dozen students had timeouts, so their scores do not appear in the spreadsheets. If you are one of these students, talk to your TA about regrading.

I computed your scores in two ways, recording the larger of the two numbers in column V in the spreadsheet.

  • I computed your score in the announced way. I computed a base score that was 36, 43, or 50 (72%, 86%, and 100%) if you solved 1, 2, or 3 problems completely correctly, respectively. For any problem that you did not solve completely correctly, you received up to 7 points for passing some of the batch self-check tests. See column N in the spreadsheet.

  • I computed your score in an alternative way. For each problem you received up to 50/21*7 ~ 16.67 points (33%) for passing some of the batch self-check tests. See column R in the spreadsheet. This alternative mostly improved the scores of students who did not solve any problem completely correctly. Improved scores are highlighted in yellow.

  • Your actual score, the larger of these, appears in column V.

The raw class average was 56% and the median was 52%. At the extreme, 11% of the students scored 100% or more (because of the extra credit point; all required methods passed all batch self-check tests) and 53% scored less than 50% (solved no problems fully correctly). Because the raw average was below 75%, there are about 9.7 normalization points for this testing instrument (raising averages by about 19%).

The approximate distribution of grades on this In-Lab exam (after normalization) is 47% As, 0% Bs, 3% Cs, 7% Ds, and 42% Fs; last quarter the scores were 43% As, 20% Bs, 10% Cs, 0% Ds, and 22% Fs; Fall quarter they were 49% As, 9% Bs, 2% Cs, 2% Ds, and 38% Fs. In terms of the problems: about 20% solved problem #1 fully, about 25% solved problem #2 fully, and about 32% solved problem #3 fully. As another comparison, this quarter about 59% solved no problems fully correctly, about 26% solved one problem fully correctly, about 10% solved two problems fully correctly, and about 11% solved all three problems fully correctly.

This U-shaped normalized distribution (89% As and Fs) is common for In-Lab Programming Exams, where we are testing competency/mastery of programming concepts: the ones who attained it scored As (could do most everything in the allotted time); the ones who have not attained it scored Fs (solved no problems in the allotted time); only about 10% of the students (which was actually lower this quarter than normal) scored somewhere in-between.

FYI, the normalized averages for the different lab times were: 74% for students in Labs 1, 2, and 3 (meeting at 9am); 74% for students in Labs 4, 5, and 6 (meeting at 11am); 77% for students in Labs 7, 8, and 9 (meeting at 1pm); and 75% for students in Lab 10 (meeting at 3pm). Lab 1 (at 9am) had the highest average (85%) and Lab 3 (also at 9am) had the lowest average (61%).

Students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. The best time to talk with your TA/Reader about grades is during one of your Labs, when both student and TA/Reader are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • If you submitted code that had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). Also see Announcement #5 below.

  • If you submitted code that didn't finish executing in at least one of its methods and therefore failed all its tests: the TAs are authorized to allow you to replace any method body with pass and rerun/regrade the code (but, I will deduct some points for submitting code with an infinite loop).

ANALYSIS:

Certainly it was a hard exam (I don't think any of my exams are easy), but 47% scored an A after normalization. I felt that the problems that I asked were similar to those on quizzes 4, 5, and 6 and on last quarter's In-Lab Exam #3. Of course, similarity is in the eye of the beholder: generally "better" programmers will find more similarities. Of course, memorizing the solution code from the quizzes will do little good; understanding the code is the goal that enables writing similar code.

So, in total, students completely solved about 26% (47% last quarter) of all the problems on the exam. Unlike the previous two exams, it was harder to hack your way to a solution for these problems: you had to understand better the problem and what you were doing.

If we look at the current course grades, there are 53% As, 20% Bs, 13% Cs, 4% Ds, and 10% Fs (although some of those students have dropped the course). Last quarter the grades at this point were 57% As, 17% Bs, 8% Cs, 6% Ds, and 12% Fs; Fall quarter they were 43% As, 27% Bs, 11% Cs, 4% Ds, and 15% Fs. The percentages of As and Bs were 73%, 74%, and 70%: quite similar. Finally, last quarter the final grades were 47% As, 22% Bs, 11% Cs, 5% Ds, and 15% Fs. So, expect a similar down-shifting after the 200 point final exam.

If you want to discuss general grading issues (not your how your specific submission was graded), I suggest posting on Ed Discussion. It is OK to post anonymously to your classmates, for grading issues.


#20: 5/26/22
Program #4 Graded
I have run the automatic batch self-check tests for Program #4 and the grades are now recorded. See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The class average was about 100% and the median was 102%, meaning that most students correctly solved most problems, and well over half (95%) of the class correctly solved all the problems (or had minor deductions). Note that this problem had an extra credit part, as well as extra credit for an early submission. Overall there were 95% As, 3% Bs, 1% Cs, and 1% Ds and Fs. About 41% of the students submitted early, and these early submitters scored slightly better (average 104%) than students submitting on the due day (average 97%); I am assuming that some students ran out of time before they finished all the problems, and will plan to get started earlier on later programs.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order); Column B contains an X if we believe the student submitted work on time (for pairs, only the submitting student will show an X, not their partner); Column C shows the extra credit points for early submissions.

Row 2 shows the worth (in number of points) for each problem. Row 3 shows the number of tests performed for each problem: all were batch-self check tests. Rows 4-5 show further information about the tests performed in each column.

Rows 6 and beyond show the number of failed tests for each student (a blank indicates no failed tests: equivalent to 0 failed tests). To compute the number of points for a problem/in a column, compute the percentage of successful tests and multiply it by the number of points the problem is worth. So for example, if a student missed 1 of 4 tests on a 4 point problem, he/she would receive 3/4*4 = 3 points. Column P shows each student's cumulative score, for all the tests in the single problem in this assignment. Columns Q-S show each student's cumulative Score, the score Rounded to an integer (what integer is entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 50). Note that these columns are filled in both for submitters and their partners (these are the only columns filled in for partners): a partner should see his/her submitter's line for details.

To get the extra credit point for processing string annotations (Column O), your code must pass all 6 tests.

Students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. The best time to talk with your TA about grades is during one of your Labs, when both student and TA are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • A few students submitted code that had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). Also see Announcement #5 below.

  • A few students submitted code that didn't finish executing in some part and therefore failed all that part's tests: the TAs are authorized to allow you to replace any method body with pass and rerun/regrade the code (but, I will deduct some points for submitting code with an infinite loop).

  • A few students submitted code that (a) incorrectly named partners (wrong format or wrong UCInetID), or (b) had students listed as partners of multiple submitters, or (c) had both students submitting and listed as partners of submitters. The TAs are authorized to try to understand these problems and help me correct them (but, I will deduct some points for dealing with these problems).

This assignment was designed to illustrate the mutual recursion used for checking annotations (parameter and return) for functions, by overloading the __call__ method in a class, creating a function decorator. It provided some introspection code (examining function headers) that you needed to use to write your decorator. As with all assignments, you should examine my solution.
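
For students reviewing this technique, here is a minimal sketch of the pattern: a class whose __call__ method makes its instances usable as a function decorator. The class name is illustrative, and unlike my solution it checks only simple class annotations, with no return-annotation checking or mutual recursion into structured annotations like [int] or {str: int}:

    import inspect

    class Check_Annotation:
        def __init__(self, f):
            self._f = f                 # the decorated function

        def __call__(self, *args, **kargs):
            # introspect the header: bind this call's arguments to parameter names
            bound = inspect.signature(self._f).bind(*args, **kargs)
            for name, value in bound.arguments.items():
                annot = self._f.__annotations__.get(name)
                # this sketch handles only simple class annotations, e.g. x: int
                if isinstance(annot, type) and not isinstance(value, annot):
                    raise AssertionError(f'{name} = {value!r} fails annotation {annot.__name__}')
            return self._f(*args, **kargs)

    @Check_Annotation
    def add(x: int, y: int) -> int:
        return x + y

    print(add(1, 2))                    # prints: 3
    # add(1, 'a') raises AssertionError: y = 'a' fails annotation int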


#19: 5/23/22
Quiz #6 Graded
I have run the automatic batch self-check tests for Quiz #6 and the TAs/Readers have graded the paper submissions. I used the following batch self-check files (similar to the ones I provided, but with some altered/additional tests). The grades are now recorded. See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The class average was about 81% and the median was 86%, meaning that most students correctly solved most problems, and almost half (41%) of the class correctly solved all the problems (or had minor deductions). Overall, there were 41% As, 27% Bs, 9% Cs, and 23% Ds and Fs. About 47% of the students submitted early (to Checkmate), and these early submitters scored much better (average of 91%) than students submitting on the due day (average of 72%); I am assuming that some students ran out of time before they finished all the problems, and will plan to get started earlier on later quizzes.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order) and Column B contains an X if we believe the student submitted work on time; Column C shows the extra credit points for early submissions: in some cases I "fixed" student programs by finding/removing bad imports (a 2 point deduction), so this column will show a negative number. Row 1 for Columns D-K shows how many points the problems were worth. Row 2 shows the number of tests performed for each problem.

Rows 4 and beyond show the number of failed tests for each student (a blank indicates no failed tests: equivalent to 0 failed tests). To compute the number of points for a problem/in a column, compute the percentage of successful tests and multiply it by the number of points the problem is worth. So for example, if a student missed 5 of 20 tests on a 4 point problem, he/she would receive 15/20*4 = 3 points. Columns L-Q show the cumulative score for each Problem. Columns R-T show each student's cumulative Score, the score Rounded to an integer (what integer is entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 25).

You can see your score and the rubric for questions 1-2 on Gradescope. If you think the rubric was applied incorrectly (see my solution too), I will soon open up these problems for regrading on Gradescope. If you have a question about the rubric, ask it on Ed Discussion.

Students should talk to their TA, if they do not understand why they received the marks they did. The best time to talk with any TA about grades is during one of their Labs, when both student and TA/Reader are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • The module detectrecursive.py is one of the grading files. It includes two function decorators that automatically detect whether your functions were iterative (a requirement for 3a) or recursive (a requirement for the other problems). These decorators examine the "stack" (information about what function calls got the code to the current function call) to detect whether a function did or didn't call itself recursively; see the sketch after this list. In problem 3a you received 0 points if you wrote your function recursively. In problem 3b you received 0 points if you wrote your function iteratively. If you used a recursive local function, this might be graded incorrectly: talk to your TA.

  • Another test detected whether more than one new LN object was created while executing your code (see how __init__ is written in gradehelperq6.py). Violating this rule can also result in a score of 0 points.

  • If columns G or I have either a -1 or -2, that indicates one or both criteria were not met; if any criterion is not met, you received 0 for that question (regardless of whether your code produced the correct answer). If you believe that your code was not correctly detected, talk to your TA. Note that a handful of students wrote their recursive function calling their iterative one, which does not meet the recursive criteria; many students allocated more than one new LN object while executing their code.

  • A few students submitted code that had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). Also see Announcement #5 below.

  • A few students submitted code that didn't finish executing in at least one of their functions and therefore failed all its tests: the TAs are authorized to allow you to replace any method body with pass and rerun/regrade the code (but, I will deduct some points for submitting code with an infinite loop).
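
Here is a sketch of the stack-examination idea described in the first bullet above; the real detectrecursive.py is among the posted grading files and is more careful (for one thing, every function decorated this way shares the one wrapper's code object, so mutual calls between two decorated functions would also count here):

    import inspect

    def detect_recursion(f):                    # illustrative name, not the real module's
        def wrapper(*args, **kargs):
            # count the stack frames currently executing this wrapper's code:
            # a depth greater than 1 means f has (indirectly) called itself
            depth = sum(1 for fi in inspect.stack()
                        if fi.frame.f_code is wrapper.__code__)
            wrapper.was_recursive = wrapper.was_recursive or depth > 1
            return f(*args, **kargs)
        wrapper.was_recursive = False
        return wrapper

    @detect_recursion
    def fact(n):                                # fact is now bound to the wrapper,
        return 1 if n == 0 else n * fact(n-1)   # so the self-call goes through it

    fact(5)
    print(fact.was_recursive)                   # prints: True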

This assignment was designed to provide you with a chance to demonstrate you understand the execution of linked-list and binary (search) tree algorithms. Also, it served as an introduction to coding linked-list and tree functions, involving iteration and recursion. Finally, it also allowed you to experiment with a derived class. All these topics may be tested again on In-Lab Exam #3 and the Final exam. As with all assignments, you should examine my solutions.
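
To make the iterative/recursive distinction concrete, here is a hypothetical LN class with the same function written both ways (the quiz's actual functions differed); note that neither version constructs any new LN objects:

    class LN:                           # hypothetical linked-list node
        def __init__(self, value, next=None):
            self.value, self.next = value, next

    def size_i(ll):                     # iterative: a loop and no self-call
        count = 0
        while ll is not None:
            count += 1
            ll = ll.next
        return count

    def size_r(ll):                     # recursive: a self-call and no loop
        return 0 if ll is None else 1 + size_r(ll.next)

    ll = LN(1, LN(2, LN(3)))            # the list 1->2->3
    print(size_i(ll), size_r(ll))       # prints: 3 3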

#18: 5/13/22
In-Lab Programming Exam #2 Graded
I have run the automatic batch self-check tests for In-Lab Exam #2 and the grades are now recorded. See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The actual file I used for grading is bsc. You can find your solutions (by your ID Hashed), my solution, and the bscile2S22.txt file that I used to compute grades for this assignment on Ed Resources: see the name ile2materials.zip.

Each of the 7 parts was worth 100/7 (~14.3) points: 3 points for getting it completely correct and ~11.3 points for passing all the individual tests. So for example, if a student missed 5 of 20 tests on such a 14.3 point problem, they would receive 0 + 15/20*11.3 ~ 8.5 points. In this grading system, missing just 1 test would result in 0 + 19/20*11.3 points, or about 75% on that problem.

If you scored better on In-Lab Exam #2 than on In-Lab Exam #1, your score for this exam will be highlighted in yellow, and the sum of these two scores (column X in the grades spreadsheet) will be (.25*In-Lab1 + .75*In-Lab2) + In-Lab2. So, if you scored 50 on In-Lab #1 and 80 on In-Lab #2, your total is (.25*50 + .75*80) + 80 = (12.5 + 60) + 80 = 72.5 + 80 = 152.5 which is rounded to 153, for an In-Lab average of 76.5%. Without this weighting of In-Lab #1, the average would be 65%.

The class average was about 89% and the median was about 100%. The skew between these statistics shows that although the majority of students did very well, there were some students who did very poorly, dragging down the average but not the median. At the extreme, 57% of the students scored 100% or more (because of the extra credit points; all required methods passed all batch self-check tests) and 7% of the students scored below 60%. A total of 34% of the students who took this exam received one extra credit point and 5% received both extra credit points.

The approximate distribution of grades on this In-Lab exam is 64% As, 14% Bs, 6% Cs, 9% Ds, and 6% Fs.

This U-shaped distribution (80% As and Ds/Fs) is common for In-Lab Programming Exams, where we are testing competency/mastery of programming concepts: the ones who attained it scored As (could do everything in the allotted time); the ones who have not attained it scored Fs (solved just a few problems in the allotted time); only about 20% of the students scored somewhere in-between (spread out evenly).

FYI, the averages for the different lab times were: 89% for students in Labs 1, 2, and 3 (meeting at 9am); 89% for students in Labs 4, 5, and 6 (meeting at 11am); 89% for students in Labs 7, 8, and 9 (meeting at 1pm); and 92% for students in Lab 10 (meeting at 3pm). Lab 1 (at 9am) had the highest average (96%) and Lab 2 (also at 9am) had the lowest average (83%).

Students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. The best time to talk with your TA about grades is during one of your Labs, when both student and TA are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • If you submitted code that had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). Also see Announcement #5 below.

  • If you submitted code that didn't finish executing in at least one of its methods and therefore failed all its tests: the TAs are authorized to allow you to replace any method body with pass and rerun/regrade the code (but, I will deduct some points for submitting code with an infinite loop).

#17: 5/11/22
Program #3 Graded
I have run the automatic batch self-check tests for Program #3 and the grades are now recorded. See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. I used a different bsc test file for grading: one that defined a pnamedtuple('Quad1', 'x y z f'), having a different number of fields with different names not in alphabetical order. The class average was about 100% and the median was 104%, meaning that most students correctly solved most problems, and well over half (90%) of the class correctly solved all the problems (or had minor deductions). Note that this problem had an extra credit part, as well as the standard extra credit for an early submission. Overall there were 90% As, 2% Bs, 3% Cs, and 5% Ds and Fs. About 48% of the students submitted early, and these early submitters scored better (average of 104%) than students submitting on the due day (average of 95%); I am assuming that some students ran out of time before they finished all the problems, and will plan to get started earlier on later programs.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order); Column B contains an X if we believe the student submitted work on time (for pairs, only the submitting student will show an X, not their partner); Column C shows the extra credit points for early submissions.

Row 2 shows the number of points each group of batch-self checks is worth; row 3 shows the number of tests performed for each problem: all were batch-self check tests. Rows 4-5 show further information about the tests performed in each column.

Rows 6 and beyond show the number of failed tests for each student (a blank indicates no failed tests: equivalent to 0 failed tests). To compute the number of points for a problem/in a column, compute the percentage of successful tests and multiply it by the number of points the problem is worth. So for example, if a student missed 1 of 4 tests on a 4 point problem, he/she would receive 3/4*4 = 3 points. Column L shows each student's cumulative score, for all the tests in the single problem in this assignment. Columns M-O show each student's cumulative Score, the score Rounded to an integer (what integer is entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 50). Note that these columns are filled in both for submitters and their partners (these are the only columns filled in for partners): a partner should see his/her submitter's line for details.

Students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. The best time to talk with your TA about grades is during one of your Labs, when both student and TA are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • A few students submitted code that had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). Also see Announcement #5 below.

  • A few students submitted code that didn't finish executing in some part and therefore failed all that part's tests: the TAs are authorized to allow you to replace any method body with pass and rerun/regrade the code (but I will deduct some points for submitting code with an infinite loop).

  • A few students submitted code that (a) incorrectly named partners (wrong format or wrong UCInetID), or (b) had students listed as partners of multiple submitters, or (c) had both students submitting and listed as partners of submitters. The TAs are authorized to try to understand these problems and help me correct them (but, I will deduct some points for dealing with these problems).

This assignment was designed to illustrate the richness of ways to solve programming problems: writing a program that automatically writes a class, given the required information to specify it (class name and fields). It also provided an opportunity to improve your string-processing abilities. As with all assignments, you should examine my solution.
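
The general technique (sketched here with a stripped-down stand-in, not my pnamedtuple solution) is to assemble the class's source code as a string and then call exec to define the class from it:

    def make_class(class_name, fields):
        params = ', '.join(fields)
        shows  = ', '.join(f'{f}={{self.{f}}}' for f in fields)
        lines  = [f'class {class_name}:',
                  f'    def __init__(self, {params}):']
        lines += [f'        self.{f} = {f}' for f in fields]
        lines += ['    def __repr__(self):',
                  f"        return f'{class_name}({shows})'"]
        source = '\n'.join(lines)
        name_space = {}
        exec(source, name_space)        # define the class from its source text
        return name_space[class_name]

    Point = make_class('Point', ['x', 'y'])
    print(Point(1, 2))                  # prints: Point(x=1, y=2)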


#16: 5/9/22
Quiz #5 Graded
I have run the automatic batch self-check tests for Quiz #5 and the TAs/Readers have checked all of the solutions for appropriate use of recursion (as discussed in functional programming) to solve each problem. I used the following batch self-check file (similar to the one I provided, but with some altered/additional tests). The grades are now recorded. See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The class average was about 94% and the median was 100%, meaning that most students correctly solved most problems, and about two-thirds (67%) of the class correctly solved all the problems (or had minor deductions). Overall there were 67% As, 22% Bs, 6% Cs, and 5% Ds and Fs. About 62% of the students submitted early, and these early submitters scored much better (average of 99%) than students submitting on the due day (average of 86%); I am assuming that some students ran out of time before they finished all the problems, and will plan to get started earlier on later quizzes.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order) and Column B contains an X if we believe the student submitted work on time; Column C shows the extra credit points for early submissions. Row 1 for Columns D-H shows how many points the problems were worth. Row 2 shows the number of tests performed for each problem. Rows 4 and beyond show the number of failed tests for each student (a blank indicates no failed tests: equivalent to 0 failed tests). To compute the number of points for a problem/in a column, compute the percentage of successful tests and multiply it by the number of points the problem is worth. So for example, if a student missed 5 of 20 tests on a 4 point problem, he/she would receive 15/20*4 = 3 points.

A -1 in columns I-M means that the student did not solve the problem according to the requirements (see the comments there for more information). In such cases, not only was this point lost, but 1/2 the correctness points as well. Columns N-P show each student's cumulative Score, the score Rounded to an integer (what integer is entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 25).

The Readers who graded columns I-M for these problems were:

  1. Elizabeth: compare
  2. Jakub: is_sorted
  3. Haining: merge
  4. Kyuho: sort
  5. Aaron: max_value
The following were some of the most common problems with appropriate use of recursion. These numbers are used in column comments in the spreadsheet to explain why points were deducted (a sketch observing these constraints appears after the list).
  1. Rebound variables (including things like +=)
  2. Used mutator method (.append, .extend, .add, del, .remove, .pop, ...); x[0] =; etc.
  3. Used loop (for/while) or loop in comprehension, or explicit iter/next
  4. Created extra data structures (list, tuple, set, dict): empty ones ok: [], ()
  5. Used Try/Except with no explicit base case
  6. Added parameters; it is ok to write local helper functions
  7. Used code that "trivialized" problem
  8. Missing code, written as pass or anything else trivial or not using recursion
  9. Other things
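
For review, here is a sketch of one such function written within these constraints (assuming slicing an existing list does not count as creating an extra data structure; check the actual rubric):

    def is_sorted(l):
        if len(l) <= 1:                             # base case: 0/1 values are sorted
            return True
        return l[0] <= l[1] and is_sorted(l[1:])    # no rebinding, mutators, or loops

    print(is_sorted([1, 2, 3]), is_sorted([2, 1]))  # prints: True False
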
Students should email the Reader who graded a problem, if they do not understand why they received the appropriateness marks they did. Read my solution first as well.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • A few students submitted code that had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). Also see Announcement #5 below.

  • A few students submitted code that didn't finish executing in at least one of their functions and therefore failed all its tests: the TAs are authorized to allow you to replace any method body with pass and rerun/regrade the code (but, I will deduct some points for submitting code with an infinite loop).

The purpose of the assignment was to improve your ability to write recursive functions correctly, according to three proof rules and the constraints of functional programming. All these topics may appear on In-Lab Exam #3 and the Final exam. As with all assignments, you should examine my solutions.

#15: 5/2/22
Midterm Graded
The TAs/Readers and I have graded and recorded the scores for the midterm exam. The TAs will distribute the graded midterms in their labs this week. If you do not pick up your exam then, you will have to come to my office hours to retrieve it (and I would prefer not to have hundreds of exams stockpiled in my office). See the assignment grades and Grades(zipped .xlsm file) files.

If you took the exam but do not show a score in the spreadsheets, please contact me ASAP. Sometimes the TAs enter a score on the wrong line (the scores are hand-entered), but since we have all the original exams that problem is easy to rectify.

The class average was about 57% and the median was the same. Of course, because the average was below 75%, about 18 normalization points (18%) will be added when computing the average of all graded instruments on the spreadsheet. The grades recorded in the spreadsheet (both in Columns S and T) are the actual exam grades (without normalization points; see cell S8, highlighted in yellow, for the actual number of normalization points that will boost your score). So your actual score, for computing your average in column AJ, is the sum of your recorded score + all normalization points. These extra points are added into cell AB (your final average); AA includes only a sum of the points you received, without normalization points.

After normalizing the scores on the midterm, overall there were 19% As, 20% Bs, 22% Cs, 21% Ds, and 18% Fs; last Winter there were 29% As, 19% Bs, 13% Cs, 20% Ds, and 19% Fs; last Fall there were 23% As, 19% Bs, 21% Cs, 19% Ds, and 18% Fs; last Spring there were 22% As, 22% Bs, 19% Cs, 20% Ds, and 17% Fs.

Here is a list of the normalized averages for the various pages. Recall that although the exam had 110 points, 10 points were counted as extra credit, so the final averages were computed out of only 100 points.

  • Problem 1: 67% Functions on Data Structures (Drugs)
  • Problem 2: 48% Regular Expressions
  • Problem 3: 60% Operator Overloading 1 (Modular)
  • Problem 4: 53% Operator Overloading 2 (Drugs)
  • Problem 5: 50% Iterators, Generators, Decorators
  • Problem 6: 29% Variables, Classes, Attributes, Operators, FEOOP
  • Problem 7: 50% Short Answer

Now is a good time to look at course grades as well, as we have graded nearly half of the total number of testing instruments (410 of 1,000 points). Now is the first time that recorded grades are truly meaningful, because they include testing instruments in all the major categories: quizzes, programs, in-lab exams, and written exams. The approximate distribution of course grades (for those students who submitted a midterm exam) is 52% As, 23% Bs, 10% Cs, and 15% Ds and Fs (in Winter there were 57% As, 22% Bs, 11% Cs, and 10% Ds and Fs; in Fall there were 44% As, 23% Bs, 13% Cs, and 20% Ds and Fs; last Spring there were 52% As, 26% Bs, 11% Cs, and 11% Ds and Fs): numbers much better than my original prediction of 25% in each of these four categories (e.g., we have 75% As and Bs instead of 50% As and Bs). Note that final grades for students finishing ICS-33 last quarter were shifted a bit lower than the midterm grades: 53% As, 25% Bs, 14% Cs, and 8% Ds and Fs.

Here is a list of who graded which problems.

  • Problem 1a: Pills 1 : Phuc
  • Problem 1b: Pills 2 : Aaron
  • Problem 1c: Pills 3 : Phuc
  • Problem 2a: T/F : Aaron
  • Problem 2b1: RE+Code(RE) : Rahima
  • Problem 2b2: RE+Code(Code) : Rahima
  • Problem 3a: Modular repr : Tommaso
  • Problem 3b: Modular + : Carlos
  • Problem 3c: Modular in : Kyuho
  • Problem 4a: Pills 1 : Elizabeth
  • Problem 4b: Pills 2 : Carlos
  • Problem 4c: Pills 3 : Jakub
  • Problem 5a: while : Kyuho
  • Problem 5b: generator 1 : Valeria
  • Problem 5c: generator 2 : Elizabeth
  • Problem 6a: Attributes : Valeria
  • Problem 6b: Operators/FEOOP : Jakub
  • Problem 6c: Class Attributes: Haining
  • Problem 7a: Picture : Haining
  • Problem 7b: Function call : Rich
  • Problem 7c: __setattr__ : Tommaso

I would like to thank the TAs/Readers for their efforts over the weekend. Each spent about 12 hours preparing to grade and grading their problems (and then entering the grades onto the spreadsheet) on over 350 exams; they will spend even more time finishing all regrading.

If you have any issues with how any exam problem was graded, talk to the staff member who graded it, and they can discuss the rubric with you and resolve any issues. But first, please examine my solution and understand the differences between it and your answer. Students should examine their graded work immediately and get any regrade issues settled as soon as possible. You can see the TAs in their labs on Tuesday this week: on 5/5. I'm going to ask the Readers to visit labs on this day too: I'll announce their schedule in an email soon.

Because we examined code (unlike for the In-Lab exam), partial credit was awarded, but sometimes points were deducted not for correctness issues but for stylistic issues: e.g., using non-optimal views (e.g., .keys(), .values(), and .items() for dictionaries); using unnecessary data structures, loops, ifs, dictionary accesses; not using boolean values simply; not unpacking appropriately; not using the 9 important functions when useful, etc. This happened most frequently on problems on pages 1 and 4.
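
As a generic illustration of the dictionary-view deduction (not an actual exam problem):

    d = {'a': 1, 'b': 2}

    for k in d.keys():          # non-optimal when values are needed: a lookup per key
        print(k, d[k])

    for k, v in d.items():      # optimal view: each key/value pair unpacks directly
        print(k, v)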

Important: As with the In-Lab Exams, if a student performs better on the Final Exam (since it is cumulative), I will increase their Midterm Exam score to be 75% of the Final normalized score + 25% of the Midterm normalized score.

Students showing a red cell in columns AC-AD have a computed grade of C or above, but have not met the requirement that the average of either their In-Lab OR Written exams is at least 72.5%: these students will receive a C- grade if one of these averages does not improve. Students often complain about this policy, and I have investigated doing away with it by making the In-Lab and Written Exams worth more points. When I recomputed my spreadsheet, all the red-cell students would still end up scoring a C- or below, but many other students would also receive lower grades. So, by keeping the current system, the same students would not "pass" but other students will have higher grades. That is why I still use the current policy.

Finally, the normalized average for the 9am labs was 73%, the 11am labs 74%, the 1pm labs 77%, and the 3pm lab (there was just one) 76%. If I grouped every student into a random lab time, the averages would still show similar variation: 74%, 74%, 77%, and 73%. The normalized averages for the individual labs were Lab 1: 78%, Lab 2: 68%, Lab 3: 74%, Lab 4: 74%, Lab 5: 76%, Lab 6: 71%, Lab 7: 80%, Lab 8: 76%, Lab 9: 75%, and Lab 10: 76%. So, the relative scores differed among different labs at the same time as well, with the highest scoring lab scoring 80% (at 1pm) and the lowest scoring lab scoring 68% (at 9am).


#14: 5/2/22
Quiz #4 Graded
I have run the automatic batch self-check tests for Quiz #4 and the grades are now recorded. I used the following batch self-check file (similar to the one I provided, but with some altered/additional tests).

See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The class average was about 95% and the median was 100%, meaning that most students correctly solved most problems, and more than half (87%) of the class correctly solved all the problems (or had minor deductions). Overall there were 87% As, 5% Bs, 2% Cs, and 6% Ds and Fs. Some students scored 0 because their code timed-out on one of the problems; see the information below about rectifying this issue. About 47% of the students submitted early, and these early submitters scored better (103% average) than students submitting on the due day (89%); I am assuming that some students ran out of time before they finished all the problems, and will plan to get started earlier on later quizzes.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order) and Column B contains an X if we believe the student submitted work on time; Column C shows the extra credit points for early submissions. Row 1 for Columns D-K shows how many points the problems were worth. Row 2 shows the number of tests performed for each problem. Row 3 shows the part of the problems in more detail.

Rows 4 and beyond show the number of failed tests for each student (a blank indicates no failed tests: equivalent to 0 failed tests). To compute the number of points for a problem/in a column, compute the percentage of successful tests and multiply it by the number of points the problem is worth. So for example, if a student missed 5 of 20 tests on a 4 point problem, he/she would receive 15/20*4 = 3 points. Columns L-M show the cumulative score for each Problem. Columns N-P show each student's cumulative Score, the score Rounded to an integer (what integer is entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 25).

Students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. The best time to talk with your TA about grades is during one of your Labs, when both student and TA are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • A few students submitted code that had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). Also see Announcement #5 below.

  • Many students submitted code that didn't finish executing in at least one of their functions and therefore failed all its tests: the TAs are authorized to allow you to replace any generator function body with pass and rerun/regrade the code (but, I will deduct some points for submitting code with an infinite loop).

This assignment was designed to provide you with a good grounding in the use of iterators and generator functions (which can be used to implement iterators), and how to write code that calls the functions iter and next directly (using a while loop instead of a for loop). All these topics will be tested again on the Midterm, In-Lab Exam #2, and In-Lab Exam #3. As with all assignments, you should examine my solutions.
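
For review, here is a generic example of both idioms: a generator function, and the same traversal driven by explicit calls to iter and next in a while loop:

    def evens(iterable):
        for v in iterable:          # generator function: an easy way to implement an iterator
            if v % 2 == 0:
                yield v

    i = iter(evens([1, 2, 3, 4, 5, 6]))
    while True:                     # explicit iter/next instead of a for loop
        try:
            v = next(i)
        except StopIteration:       # raised when the iterator is exhausted
            break
        print(v)                    # prints 2, then 4, then 6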

#13: 4/27/22
Program #2 Graded
I have run the automatic batch self-check tests for Program #2 and the grades are now recorded. I used the following batch self-check files (similar to the ones I provided, but with some altered/additional tests). See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The class average was about 98% and the median was 102%, meaning that most students correctly solved most problems, and well over half (91%) of the class correctly solved all the problems (or had minor deductions). Overall there were 91% As, 3% Bs, 1% Cs, and 5% Ds and Fs. About 68% of the students submitted early, and these early submitters scored a bit better (103% average) than students submitting on the due day (94% average); I am assuming that some students ran out of time before they finished all the problems, and will plan to get started earlier on later programs.

Note: The second problem, the DictList class, was worth 30 of the 50 total points for this assignment.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order); Column B contains an X if we believe the student submitted work on time (for pairs, only the submitting student will show an X, not their partner); Column C shows the extra credit points for early submissions.

Row 2 for Columns D-AA shows how many points the problems were worth. Row 3 shows the number of tests performed for each problem: all were batch-self check tests. Row 4 shows further information about the tests performed in each column.

Rows 5 and beyond show the number of failed tests for each student (a blank indicates no failed tests: equivalent to 0 failed tests). To compute the number of points for a problem/in a column, compute the percentage of successful tests and multiply it by the number of points the problem is worth. So for example, if a student missed 1 of 4 tests on a 4 point problem, he/she would receive 3/4*4 = 3 points. Columns AG-AH show each student's cumulative score, for all the tests in each of the two problems in this assignment. Columns AI-AK show each student's cumulative Score, the score Rounded to an integer (what integer is entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 50). Note that these columns are filled in both for submitters and their partners (these are the only columns filled in for partners): a partner should see his/her submitter's line for details.

Students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. The best time to talk with your TA about grades is during one of your Labs, when both student and TA are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • A few students submitted code that had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). Also see Announcement #5 below.

  • A few students submitted code that didn't finish executing in some part and therefore failed all that part's tests: the TAs are authorized to allow you to replace any method body with pass and rerun/regrade the code (but, I will deduct some points for submitting code with an infinite loop).

  • A few students submitted code that (a) incorrectly named partners (wrong format or wrong UCInetID), or (b) had students listed as partners of multiple submitters, or (c) had both students submitting and listed as partners of submitters. The TAs are authorized to try to understand these problems and help me correct them (but, I will deduct some points for dealing with these problems).

This assignment was designed to provide you with a good grounding in the use of classes and the practice of overloading operators in classes, including a bit of writing iterators. Quiz #4 covers iterators written as generator functions in much more detail. All these topics will be tested again on the Midterm and In-Lab Exam #2. As with all assignments, you should examine my solutions.


#12: 4/27/22
Quiz #3 Graded
I have run the automatic batch self-check tests for Quiz #3 and the grades are now recorded. I used the following batch self-check files (similar to the ones I provided, but with some altered/additional tests). See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The class average was about 95% and the median was 100%, meaning that most students correctly solved most problems, and well over half (74%) of the class correctly solved all the problems (or had minor deductions). Overall there were 74% As, 17% Bs, 3% Cs, and 6% Ds and Fs. About 57% of the students submitted early, and these early submitters scored much better (101% average) than students submitting on the due day (87% average); I am assuming that some students ran out of time before they finished all the problems, and will plan to get started earlier on later quizzes.

There were a few students whose code timed out when I graded it; I let the grading program run everyone's code for 30 seconds. If your code timed out, talk to your TA about replacing the body of any offending code with just pass, so that it won't time out, allowing all the other code to be graded.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order) and Column B contains an X if we believe the student submitted work on time; Column C shows the extra credit points for early submissions. Row 1 for Columns D-N shows how many points the problems were worth. Row 2 shows the number of tests performed for each problem.

Rows 4 and beyond show the number of failed tests for each student (a blank indicates no failed tests: equivalent to 0 failed tests). To compute the number of points for a problem/in a column, compute the percentage of successful tests and multiply it by the number of points the problem is worth. So for example, if a student missed 5 of 20 tests on a 4-point problem, he/she would receive 15/20*4 = 3 points. Columns O-P show the cumulative score for each Problem. Columns Q-S show each student's cumulative Score, the score Rounded to an integer (the integer entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 25).

Students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. The best time to talk with your TA about grades is during one of your Labs, when both student and TA are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • A few students submitted code that had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). Also see Announcement #5 below.

  • A few students submitted code that didn't finish executing in at least one of their functions and therefore failed all its tests: the TAs are authorized to allow you to replace any method body with pass and rerun/regrade the code (but, I will deduct some points for submitting code with an infinite loop or just code that took too long).

This assignment was designed to provide you with a good grounding in the use of operator overloading in classes: this includes both standard arithmetic and relational operators, as well as other methods that Python calls automatically (e.g., __repr__, __str__, __getitem__, etc). All these topics will be tested again on the Midterm and In-Lab Exam #2. As with all assignments, you should examine my solutions.

In the Date class, some students wrote operators that returned strings instead of Date objects; a few bsc tests failed in such cases, but many succeeded because the test calls the str function on the left argument; it is supposed to call str on a Date object, but if your operator returns a string, str is called on that string, which returns the same string (see the sketch below).
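
Here is a sketch that illustrates the bug (this is NOT the assignment's actual Date class; it is stripped down just to show the problem):

    class Date:
        def __init__(self, month, day, year):
            self.month, self.day, self.year = month, day, year
        def __add__(self, days):
            # Bug: returns a str instead of a new Date object
            return f'{self.month}/{self.day + days}/{self.year}'
        def __str__(self):
            return f'{self.month}/{self.day}/{self.year}'

    d = Date(4, 20, 2022) + 1   # d is the str '4/21/2022', not a Date
    print(str(d))               # looks right: calling str on a str returns that str
    try:
        d + 1                   # but using the result as a Date now fails
    except TypeError as error:
        print('failed:', error)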


#11: 4/18/22
Program #1 Graded
I have run the automatic batch self-check tests for Program #1 and the grades are now recorded. I used the same batch self-check files that I provided for this assignment.

See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. Note that columns AK-AP show information about the submissions for each student: what days submissions occurred (AK-AM), the total number of submissions (AN), how many submissions were graded and counted (AO), and how many were graded but not counted (AP). Of 262 students/pairs submitting, 27 (10%) had one or more submissions not counted: they submitted more than one on the due date, and/or more than two during the last two days. Of these, 16 (6%) had more than one submission not counted.

The class average was about 90% and the median was 102%, meaning that many students correctly solved most problems, and more than half (75%) of the class correctly solved all the problems (or had minor deductions). Overall there were 77% As, 7% Bs, 2% Cs, and 14% Ds and Fs. FYI, last Spring quarter, there were 75% As, 8% Bs, 3% Cs, and 14% Ds and Fs. About 41% of the students submitted early, and these early submitters scored a bit better (98% average, less than half a grade higher) than students submitting on the due day (94% average). I am assuming that some students ran out of time before they finished all the problems, and will plan to get started earlier on later programs.

In the assignment spreadsheet, Column A contains the Hashed IDs of all students (in sorted order); Column B contains an X if we believe the student submitted work on time (for pairs, only the submitting student will show an X, not their partner); Column C shows the extra credit points for early submissions. If column B has a red triangle, hover over it to read its message; sometimes you have to right-click the comment, then select "Edit Comment" to enlarge the comment box to read all of it.

Row 2 shows how many points the problems were worth (Columns W-AA record whether a reasonable/executable script was written for the parts of this programming assignment). All prompting and printing (except for the tracing) should appear in the script; if any of your code printed something, you will have to see your TA for a regrade (removing those print statements). Row 3 shows the number of tests performed for each problem: all were batch self-check tests. Rows 4-5 show further information about the tests performed in each column.

Rows 6 and beyond show the number of failed tests for each submission (a blank indicates no failed tests: equivalent to 0 failed tests). To compute the number of points for a problem/in a column, compute the percentage of successful tests and multiply it by the number of points the problem is worth. So for example, if a student failed 1 of 4 tests on a 5-point problem, he/she would receive (4-1)/4*5 = 3/4*5 = 3.75 points. Columns AB-AF show each student's cumulative score, for all the tests in each of the problems in this assignment. Columns AG-AI show each student's cumulative Score, the score Rounded to an integer (the integer entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (50). Note that these columns are filled in both for submitters and their partners (these are the only columns filled in for partners): a partner should refer to his/her submitter's line for grading details.

Students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. The best time to talk with your TA about grades is during one of your Labs, when both student and TA are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • A few students submitted code that had syntax errors and therefore failed all tests for that part: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). You cannot "debug" the code beyond fixing syntax errors. Also see Announcement #5 below.

  • Some students submitted code that didn't finish executing and therefore failed all their tests: the TAs are authorized to allow you to replace any function body with pass (to avoid the infinite loop) and rerun/regrade the code. But, I will deduct some points for submitting code with an infinite loop: in the future, replace any function body with pass if it causes an infinite loop. Students who had this problem have a comment attached to their X in Column B.

  • A few students wrote functions that required the user to input information from the keyboard. This is antithetical to the idea of what functions do, unless the entire purpose of the function is to do input (which was NOT the case for ANY of the functions in this Programming Assignment). The autograder cannot grade such programs, so you will have to see your TA about removing these input statements and getting your program regraded. Likewise, if you printed anything in your functions, the autograder cannot grade the programming assignment, so you will have to see your TA about removing these print statements and getting your program regraded.

  • A few students submitted code that (a) named partners incorrectly (wrong format or wrong UCInetID), or (b) listed students as partners of multiple submitters, or (c) had students both submitting and listed as partners of submitters. The TAs are authorized to try to understand these problems and help me correct them. Generally, I will deduct some points for staff having to deal with these problems manually. As always, see your TA first.

This assignment was designed to provide you with a good grounding in the use of the standard data structures in Python: list, tuple, set, and especially dict (and the defaultdict variant). It also included practice iterating over such structures, writing comprehensions, and using the sorted function, lambda, and other useful/important Python functions. Unlike Quiz #1, the problems were bigger, requiring more interesting algorithms to solve, but still all expressible with a small number of Python language features. All these topics were tested on In-Lab Exam #1 (I assume students did well on the exam because they learned the material here and in Quiz #1) and will be tested again on the Midterm. As with all assignments, you should examine my solutions. I hope the "tracing" requirements for some of the problems showed you how to instrument the code you write to aid in debugging: if you added the tracing code after your program was running correctly, you missed the point of this part of the assignment.
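
For review, here is a small sketch of the kinds of idioms this assignment exercised (the data and names below are made up, not taken from the assignment):

    from collections import defaultdict

    pairs = [('a', 1), ('b', 2), ('a', 3)]
    index = defaultdict(set)    # missing keys automatically start as empty sets
    for key, value in pairs:
        index[key].add(value)

    # sorted accepts any iterable; the lambda supplies the sorting key:
    # decreasing set size, with ties broken alphabetically by key.
    print(sorted(index.items(), key=lambda kv: (-len(kv[1]), kv[0])))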


#10: 4/18/22
Quiz #2 Graded
I have run the automatic batch self-check tests for Quiz #2 and the grades are now recorded. I used the following batch self-check and data files.

See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7.

The class average was about 93% and the median was 100%, meaning that most students correctly solved most problems; about 78% of the class correctly solved all the problems (or had minor deductions). Overall there were 78% As, 3% Bs, 8% Cs, and 11% Ds and Fs; last quarter there were 86% As, 2% Bs, 4% Cs, and 8% Ds and Fs. About 48% of the students submitted early, and these early submitters scored much better (102% average) than students submitting on the due day (84% average). I am assuming that some students ran out of time before they finished all the problems, and will plan to get started earlier on later quizzes.

In the assignment grades spreadsheet, Column A contains the Hashed IDs of all students (in sorted order). Column B contains an X if we believe the student submitted work on time; Column C shows any extra credit point for submitting early: early means that the Checkmate submission was 1 or more days early. Row 1 for Columns D-L shows how many points the problems were worth. Row 2 shows the number of tests performed for each problem.

Rows 4 and beyond show the number of failed tests for each student (a blank indicates no failed tests: equivalent to 0 failed tests). To compute the number of points for a problem/in a column, compute the percentage of successful tests and multiply it by the number of points the problem is worth. So for example, if a student missed 5 of 20 tests on a 4-point problem, he/she would receive (20-5)/20*4 = 15/20*4 = 3 points. Columns M-O show each student's cumulative Score, the score Rounded to an integer (the integer entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 25).

Students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. The best time to talk with your TA about grades is during one of your Labs, when both student and TA are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • A few students submitted code that had syntax errors and therefore failed all tests for that part: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). You cannot "debug" the code beyond fixing syntax errors. Also see Announcement #5 below.

  • A few students submitted code that didn't finish executing in at least one of their functions and therefore failed all its tests: the TAs are authorized to allow you to replace any function body with pass and rerun/regrade the code (but, I will deduct some points for submitting code with an infinite loop).

  • A few students submitted all parts of this assignment after the deadline; they received 0 points. I'm sure they finished some parts well before the deadline, but failed to submit them at that time. I recommend that when students finish a part of an assignment, they submit that part immediately. Checkmate always allows a part to be resubmitted.

This assignment was designed to provide you with a good grounding in writing regular expressions and in using the re module to write code that processes text with regular expressions in interesting ways. All these topics will be tested again on the Midterm. As with all assignments, you should examine my solutions.
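
For review, here is a small made-up example of this kind of text processing (it is not one of the quiz's actual problems):

    import re

    text = 'alpha: 92, beta: 87, gamma: 100'
    # one group captures the name, a second group captures the score
    for name, score in re.findall(r'(\w+):\s*(\d+)', text):
        print(name, int(score))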

#9: 4/15/22
In-Lab Programming Exam #1 Graded
I have run the automatic batch self-check tests for In-Lab Exam #1 and the grades are now recorded. The tests were similar to the ones you were provided: if you wrote reasonable code that worked during the exam, your code should have produced the same results for the tests I used for grading. See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7.

You can find my solution, and the actual bsc and data files that I used to compute grades for this assignment, by following the Solutions (Ed Resources) link for this class (see the file named ile1materials.zip). To test your code with these new files, you must put them in your project folder, comment out all the tests in the script in your code (the ones you ran during the exam), and then add and run the following code:

    import driver                                 # courselib batch self-check driver
    driver.default_file_name = "bscile1S22.txt"   # the bsc file I used for grading
    driver.driver()                               # run all the tests in that file

I believe the In-Lab Exams are the best indicator, of all testing instruments, of your ability to program: read specifications and transform them into working code (writing code and debugging it). In-Lab Exams are mastery exams: if you have a mastery of the material you should be able to solve all these problems in the allotted time. Technically, to earn an A on this exam, I think students should be able to solve all the problems and the extra credit problem: many students did. But that was not the grading criterion.

As I'll say in class, Tolstoy is often quoted (from Anna Karenina) as writing,

"Happy families are all alike; every unhappy family is unhappy in its own way."
My adaptation of this quote is
"High-scoring students are all alike (knowing how to program well); every low-scoring student did poorly in his/her own way: e.g., lack of programming or debugging ability, freezing on the exam, misreading or misunderstanding some problem statements, spending too much time debugging one problem, being ill when taking the exam, arriving late, etc."
So, I understand that there are many possible reasons that students don't do well on In-Lab Exams. If you did poorly, think about why; but, don't fool yourself.

The spreadsheet computes grades as follows: if a problem passed all tests, you received 20 points for it; if it failed one test (most problems had two tests), you received 7 points for it; if it failed more than one test, you received 0 points for it. Column I computes this number, which is also the same as the rounded value (Column J) and the percentage (Column K).
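
In code form, that scoring rule is (a sketch; the function name is mine):

    def problem_points(failed_tests):
        if failed_tests == 0:
            return 20    # passed all tests
        elif failed_tests == 1:
            return 7     # failed exactly one test
        else:
            return 0     # failed more than one test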

The result was that the class average was about 89% and the median was 100%. The skew between these statistics shows that although the majority of students solved most of the problems correctly, there were other students who did very poorly, which dragged down the average much more than the median. At the extremes, 73% of the students submitted code in which all five functions passed all batch self-check tests (about 65% of those solved the extra credit problem correctly too); 13 students submitted code in which no functions passed any batch self-check tests.

The approximate distribution of grades on this In-Lab exam is 73% As, 10% Bs, 1% Cs, 7% Ds, and 9% Fs; a few years ago I gave a similar exam and the grades were 65% As, 11% Bs, 1% Cs, 9% Ds, and 15% Fs. This U-shaped distribution (89% either As or Ds/Fs) is common for In-Lab Programming Exams, where we are testing competency/mastery of programming concepts: the ones who attained it scored As (could do everything in the allotted time); the ones who have not attained it scored Ds/Fs (solved just a few problems in the allotted time); only about 11% of the students scored somewhere in-between (all Bs/Cs).

There were about 180 students (48% of the class) who solved the extra credit problem.

FYI, the averages for the different exam times were 87% for students in Labs 1-3 (meeting at 10am), 89% for students in Labs 4-6 (meeting at 12noon), 89% for students in Labs 7-9 (meeting at 2pm), and 92% for students in Lab 10 (meeting at 4pm); Lab 1 (at 9am) had the highest average (96%) and Labs 2, 3, and 9 (at 9am and 1pm) had the lowest average (82%).

Remember, we didn't grade on simplicity of solutions or good use of Python: we graded just on correctness; you can still learn something by looking at my solutions. On the written exams, which are all graded by hand, we pay closer attention to simplicity and good use of Python. Students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. The best time to talk with your TA about grades is during one of your Labs, when both student and TA are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned).

IMPORTANT Information about Student Grades

  • If you submitted code that had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for not submitting executable code). You cannot "debug" the code beyond fixing the syntax errors. Also see Announcement #5 below.

  • If you submitted code that didn't finish executing in at least one of your functions and therefore failed all its tests: the TAs are authorized to allow you to replace any function body with pass and rerun/regrade the code (but, I will deduct some points for submitting code with an infinite loop).

The actual batch self-check tests I used for grading were similar to the tests in the script of the exam; but, all produced different results, so students could not "hard-code" any answers into their functions, hoping to get some correctness points. Often I just changed a few values, which leads to slightly different but equivalent output.

Finally, if students score a higher percentage on their In-Lab Exam #2 (which involves material from the first, as well as Classes, Operator Overloading, and writing Iterators), I will score their In-Lab Exam #1 higher. In the recent past, I have built a composite score that is 25% the first score and 75% the second. Therefore, even a terrible grade on this exam can have a minimal effect on your final grade if you perform much better on In-Lab Exam #2.


#8: 4/10/22
Quiz #1 Graded
I have run the automatic batch self-check tests for Quiz #1 (checking correctness) and the Readers/TAs have examined problem 1 and the code (checking requirements: e.g., 1 return statement/solution for 3a, 3b, and 3c) and the grades are now recorded and posted. I used the following batch self-check file. You should run your program on this file to understand your recorded grade.

See the assignment grades and Grades(zipped .xlsm file) files, whose details are discussed below, in Announcement #7. The class average was about 83% and the median was 92%, meaning that many students correctly solved most problems; in fact 24% of the students scored 100%. Overall there were 53% As, 23% Bs, 6% Cs, and 18% Ds and Fs for those students who submitted work; some of the students who scored near 0 submitted code that we could not run (see the paragraphs below for possible regrading by your TA). FYI, a recent quarter with a similar quiz had the following grades: 66% As, 18% Bs, 5% Cs, and 11% Ds and Fs for those students who submitted work.

If the average of all students on any testing instrument is less than 75%, the column for that instrument will show the number of normalization points in row 8, highlighted in yellow: the number of points that must be added to each student's score so that the instrument's average is 75%. These points are then added to the sum of the points for each student (in column AB) to compute their grade (so some students will score more than 100%). This is the only "curving" I do when grading; I do NOT do special curving at the end of the quarter. On this testing instrument there were no normalization points, because the average was above 75%.
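
Here is my reading of that normalization rule as a sketch (not the spreadsheet's actual formula): add just enough points to raise the instrument's average to 75% of its worth.

    def normalization_points(scores, worth):
        average = sum(scores) / len(scores)
        return max(0, 0.75 * worth - average)

    # e.g., a 25-point quiz whose scores average 17.5 (70%) would need
    # 0.75*25 - 17.5 = 1.25 points added to every score.
    print(normalization_points([15, 20], 25))   # 1.25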

About 46% of the students submitted early (for quizzes there is one extra credit point for submitting one day early) and these early submitters scored much better than students submitting on the due day (94% compared to 74%): a difference of 2 full grades! I am assuming that some students ran out of time before they finished all the problems, and will plan to get started earlier on later quizzes.

In the assignment grades spreadsheet, Column A contains the Hashed IDs of all students (in sorted order). Column B contains an X if we believe the student submitted work on time; Column C shows any extra credit point for submitting early: early means that the Checkmate submission was 1 or more days early. Row 1 for Columns D-N shows how many points the problems were worth. Some problems show points in two columns: e.g., Problem #3a has 2 points in Column G (3a/C: produced correct answers, graded by the batch self-checks) and 1 point in Column J (3a/R: the requirement of 1 return statement, graded by the staff). Any /C column relates to correctness; any /R column relates to requirements. Note that you did not receive any Requirement points unless you received at least some Correctness points: you do not get credit for writing one line of code that never worked. Row 2 shows the number of batch self-check tests performed for each problem (for those checked automatically; for the other columns it is typically the number of points the problem is worth).

Rows 4 and beyond show the number of failed tests for each student (a blank indicates no failed tests: equivalent to 0 failed tests). IMPORTANT: To compute the number of points you scored for a problem/in a column, compute the percentage of successful tests and multiply it by the number of points the problem is worth. So for example, if a student missed 2 of 6 tests on a 5-point problem, they would receive (6-2)/6 * 5 = 3.3333... points for that column: they got 2/3 of the tests correct for a 5-point problem, so they get 2/3 * 5 points. Columns O-Q show each student's cumulative Score, the score Rounded to an integer (that integer is the score entered in the Grades spreadsheet) and Percent, based on the number of points the assignment is worth (here 25).

Requirements points for the functions were deducted only for too many statements. But many students created extra/temporary data structures that are not needed (extra names are OK; but extra/unneeded data structures are not). For example, since sorted can be called on any iterable, you do NOT need to first create a list from the iterable; just call sorted on the iterable directly. We will deduct points on written exams for poor/unnecessary use of Python. Look at my solutions to see how to avoid such extra data structures in future work.
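
For example (a made-up dict, not assignment code):

    grades = {'quiz1': 23, 'quiz2': 25, 'quiz3': 19}

    unneeded = sorted(list(grades.values()))   # works, but builds an extra list
    better   = sorted(grades.values())         # sort the iterable directly
    print(better)                              # [19, 23, 25]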

For the bsc testing, students should talk to the TA for their Lab first, if they do not understand why they received the marks they did or dispute any of these marks. They can access your code and re-run the grading program on it. The best time to talk with your TA about grades is during one of your Labs, when both student and TA are physically present to examine the submission and the grade, possibly running the solution on a computer they can share.

Students should examine their graded work immediately and get any regrade issues settled as soon as possible (within a week of when the grade is assigned). Show up to lab and settle these issues immediately.

IMPORTANT Information about Student Grades

  • A few students submitted code that had extraneous imports or had syntax errors and therefore failed all tests: the TAs are authorized to allow you to fix a few simple syntax errors in the code you submitted and rerun/regrade the code (but, I will deduct some points for submitting unexecutable code). You cannot "debug" the code beyond fixing syntax errors. Also see Announcement #5 below.

  • If you submitted an assignment, and there is a T in column B, which has the comment TIMEOUT it means that one of your functions contained an infinite loop, and therefore failed all tests: the TAs are authorized to allow you to replace the body of any function by pass (so you will receive 0 points for that function) and rerun/regrade the rest of the code (but, I will deduct some points for not submitting gradable code). Also see Announcement #5 below.

  • You can see how your picture was graded on Gradescope. Infrequently a rubric item is wrongly clicked/unclicked. You can request a regrade, but you should do so only after you have carefully read the rubric (and any other annotations on your submission) and emailed (consulted) a reader (Aaron graded part 1: Picture Elements; Valeria graded part 2: Reference Correctness) to get their opinion. Note, you cannot question whether the rubric is reasonable; you can question only whether it was applied correctly to your submission. I will do all regrades, and there will be a penalty for requests that are not granted...so check with a reader first!

This assignment was designed to provide you with a good grounding in the use of the standard data structures in Python: list, tuple, set, and dict (and the defaultdict variant). It also included practice iterating over such structures, writing comprehensions (for 1-statement solutions), and using the sorted function (and lambdas for its key argument). All these topics will be tested again on the Midterm and In-Lab Exam #1 (along with appearing in Programming Assignment #1 as well). As with all assignments, you should examine my solutions.

#7: 4/4/22
Programming Assignment #0 Graded
The TAs/Readers have graded (and I have recorded the grades for) Programming Assignment #0. As with most assignments, there are two files that you should download, unzip, and examine to understand your performance on this assignment, and your cumulative performance in this class. Learn to download, unzip, and read these files now, so you will know how to do it for all the later assignments.

Both of these files are sorted by Hashed IDs (which are computed from the 8-digit UCI IDs of all the students in the class). To determine your Hashed ID, see Message #6a below.

  • The first file to examine stores the assignment grades, a zipped Excel file that details how each student was graded on this instrument: what marks were given and why. It is sorted by Hashed IDs (column A). Column B contains an X if the student submitted work (later, if you work in pairs, the X will appear only on the Submitter's cell, not in the Partner's cell, although both will receive the same grade). Column C (for Programming Assignments only) shows extra credit points for early submission: 1 point for submitting 24 hours early; 2 points for submitting 48 hours (or more) early; a blank here means no extra credit/early submission points.

    Columns D and beyond show marks for the various parts of the assignment. The last three columns show your Score, the score Rounded to an integer (see the discussion below) and your Percent, based on the number of points the assignment is worth. If a cell contains a comment (those cells with a red triangle in their upper-right corner) you can hover over the cell and you will see the comment that explains why the marks were given: sometimes you must right-click the comment and then enlarge its bounding box to see the full comment.

    Students should talk to/email the TA/Reader who graded a question, if they do not understand why they received the marks they did or to dispute any of these marks. For Programming Assignment #0, the grading was as follows

    Part A: Reader Haining Zhou
    Part B: Reader Jakub Antosz
    Part C: Reader Aaron Winter 
    Part D: Readers Valeria Maya and Elizabeth Lui
    

    The best time to talk to TAs is in their labs; the best way to communicate with Readers is by email.

  • The second file to examine stores the cumulative Grades(zipped .xlsm file) -also available as a link on the course web page- unzip it, and then click the tab labeled Spring 2022. This tab records all the grades for all the testing instruments that you submit during the quarter. It is also sorted by Hashed IDs (column A). You will notice that in this spreadsheet all recorded grades are rounded up to integers: so receiving a 7.5 on the first spreadsheet will translate into an 8 recorded on the second one. We will use this same "round-up" process for recording all grades during the quarter. Note, if you score a 7.47, it will appear as 7.5, but it will not round up to 8 (see the sketch after this list).

    On this spreadsheet, columns B-U contain your scores: for the Quizzes (B-I), Programming Assignments (J-O), In-Lab Programming Exams (P-R), and Written Exams (S and U: I'll discuss T after the Midterm). Columns V-Y contain the sums for all these testing instruments. Column Z contains special extra credit points (for example, for submitting the faculty/course evaluation at the end of the quarter; more on this then). Columns AA-AE contain: AA - your cumulative points WITHOUT normalization points added in. AB-AE are all based on your scores WITH normalization points added in. AB - your average, AC - your rank in class (1 means highest-scoring student), AD-AE - your current grade (AD is the letter, AE is +/- if appropriate).

    You should check this spreadsheet after every assignment is graded to ensure that your score was recorded correctly. Again, students should talk to the TAs/Readers first, if this spreadsheet contains any errors (not me): the staff will contact me and cc you for any grade changes.
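
Here is a sketch of a round-up rule that is consistent with the examples above (it is not necessarily the spreadsheet's actual formula): add .5 and truncate, so exact halves round up but anything below .5 rounds down.

    def recorded_grade(score):
        return int(score + 0.5)

    print(recorded_grade(7.5))    # 8
    print(recorded_grade(7.47))   # 7 (displays as 7.5, but does not become 8)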

IMPORTANT: Scores will soon revert to 0, if I do not receive a signed Academic Integrity Contract from you (we are tabulating them this week). Please submit a .pdf file to Checkmate, showing the signed/dated form.

This assignment was designed to test you on whether you have mastered the basics of using Python in Eclipse, the Eclipse Debugger perspective, and batch self-check files in the driver.py module (in courselib). It was also designed to see if you could follow instructions and ask questions: more on that below.

The class average was about 9.8 (or about 98%) and the median was 10 (or about 100%). For those students submitting work, there were 78% As, 12% Bs, 6% Cs, and 4% Ds and Fs (I don't often distinguish these two non-passing grades). Last quarter there were 83% As, 10% Bs, 2% Cs, and 5% Ds and Fs.

The assignment was not meant to be hard, but it was not trivial either, and there were many opportunities to lose points (and learn from your mistakes). Your work in the Eclipse/Python Integrated Development Environment (IDE) throughout the quarter will build on the understanding and skills that you acquired in this assignment.

Let me talk about what will probably be the most contentious half point of the 1,000 points that this course is worth (thus, only .05% of the grade; so you can still get 99.95% of the points in this course). I took off .5 points if you corrected the misspelling Inteprxter (or had anything other than Inteprxter). When some students hear about this point deduction, their heads explode and they cannot believe that I am taking off a point for correcting what you thought was my mistake. But... I am trying to foster an atmosphere where nothing is taken for granted in the instructions that I give: if anything seems confusing or plain wrong, I should be questioned about it -preferably in public, in the appropriate Ed Discussion category- so others can learn if there really is a problem, and if so the correction.

  • In fact, some students did ask me outside of class if they should correct the misspelling, and I told them "no"; some students asked me by email if they should correct the misspelling, and I told them "no"; one student asked on Ed Discussion whether they should correct the misspelling; I answered no. It is critical for programmers to be sure they know the specifications of the problem they are being asked to solve, otherwise they will solve, test, debug, document, etc., the problem incorrectly, and another cycle of development will be needed to fix the misconceptions. The overview lecture included a graph that showed that the later in development a problem is found, the harder/more expensive it is to fix. So if we can find problems at the time we are reading the specification of the problem to solve, that can save us a lot of work/money later.

  • The bottom line is that you are responsible for reading the instructions carefully and reporting any confusion so that I can clear it up (best reported in the appropriate Ed Discussion category). Of course, you can freely talk to anyone about the problem specifications, just not the code that you write for your solutions. If you make any assumptions (like the node names in Programming Assignment #1 always being one letter long -that is not part of the specification), they might come back to haunt you later (gradewise). When working with a partner, you'll have two pairs of eyes reading the specifications and looking for issues. I am willing to deduct this painful half point at the start of the quarter, from many students, to get across this perspective, and save everyone grading grief during the quarter. I hope you submitted early so the extra credit erased this point loss (see below for the statistics on early submission).

  • I will never intentionally do anything like this on subsequent assignments; but I can certainly be unclear about the specifications (which have lots of details) or even contradict myself from one spot to the next in a specification. It is up to you (the hundreds of eyes looking at my specifications) to clear up the confusion, and it is best to do so on Ed Discussion, so I can clear up the problem once for all students.

Here is some insight into how the parts were graded.

  1. This part was worth 3 points: -3 points if nothing was submitted; -2 points if running the code didn't print the right answer (exactly as it appears in the problem statement); -1 point for not copy/pasting the big number in the output, and -.5 points for correcting the spelling of Inteprxter (described above) and other small issues in program formatting: spaces, blank lines, etc.

    Also, we deducted .5 points on the demo.py program if your # Submitter line did not perfectly match what was required, including using correct spacing, punctuation, lower-/upper-case letters, etc. Many students lost a half point here; ensure that you know what you did wrong so you won't lose points in subsequent submissions. I'd like to clear up all problems related to this issue immediately. Most deductions came from not using the correct special characters, not using correct spacing, not using the correct "case" for letters, not writing your UCInetID (instead writing your Student ID, which is all numbers). If you have a non-standard name and you think it was automatically graded incorrectly, contact your TA and they will check it.

  2. This part was worth 2 points: -2 points if the code was not submitted or not runnable; -1 point if running the code didn't print the right answer. Many students forgot to change +=2 to +=1 as described in the instructions: the computer cannot detect such an (intent) error, so you must. See the line in the problem specification that said, "Fix this error in line 75 by changing the integer literal 2 into the integer literal 1."

  3. This part was worth 3 points: we deducted .5 points for any mistake in each of the 7 questions (but capped the total deductions at -3 points). Some students did not carefully read the instructions in the Debugger Perspective document for the quiz part, which required them to change a line in the craps script before running it with the debugger to gather the required information. With this change in your program, you can determine the correct answers; without it, you cannot (so likely all your answers were counted wrong).

  4. This part was worth 2 points: we deducted (1) 2 points for no submission; (2) 1 point if the final two lines were not:
      Done batch_self_check: 9 correct; 6 incorrect
      Failed checks: [4, 5, 6, 7, 8, 9]
    and (3) we deducted .5 points, once, for any of the following:
    1. Exact comments not on lines 3, 11, or 19. I showed the correct output in my example output; in such cases you should copy it.
    2. Didn't use a set in line 20 (some students used a list).
    3. Did not call the remove method (discard is OK too) in line 21.
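
For reference, this is standard Python behavior (not specific to this assignment): remove and discard differ only when the value is absent from the set.

    s = {1, 2, 3}
    s.discard(4)        # fine: discard ignores missing values
    try:
        s.remove(4)     # remove raises KeyError for missing values
    except KeyError:
        print('remove raised KeyError')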

Finally, about 40% of the students submitted the program 2 or more days early (their average was 110%); about 26% submitted the program 1 day early (their average was 98%). So, about 66% of the students submitted this assignment early. The other 34% of the students had an average of 85%, which was much lower. Keep up the early submissions: although it will be harder in upcoming assignments, it is doable, and it is to your advantage to try. You can earn up to 20 extra points if you turn in every Programming Assignment and Quiz early (upping your grade by 2%): for some students, this boost will be enough to raise their final letter grade. It will be to everyone's benefit -students and staff alike- if students try to finish and submit early.

IMPORTANT If you believe that we graded your work incorrectly, please examine the files mentioned above first, then contact the TA/Reader who graded your work, to discuss the issues with him/her (not me, yet). Such a discussion can have only positive outcomes: either they will agree with you that you deserve more credit (and, we do want you to receive all the credit that you are due), or you will come to understand the question, program, or solution better and realize why you lost points. This is certainly a win-win situation. Please read my solution and the assignment grades spreadsheet carefully before contacting a TA/Reader; ensure that you understand what is the correct answer and what points were deducted from your assignment and why. If there is a problem, the TA/Reader will email me a revised summary about your program, and cc a copy to you. I will update the grades spreadsheet as appropriate and email you what I did. Confirm the change when I release the spreadsheet for the next graded assignment.

If you feel there is still an unresolved problem after talking to a TA/Reader, please contact me (but always contact your TA/Reader first). IMPORTANT: Also, because of the size of this class, if you have a grading issue, we will consider it only if you bring it to your TA's/Reader's attention within a week of when I return the materials. This policy is in place to avoid an avalanche of work because of "grade-grubbing" late in the quarter.


#6a: 3/28/22
Hashed ID Lookup
When we grade assignments, we often distribute/update various spreadsheets with the relevant grading information. These spreadsheets are indexed and sorted by each student's Hashed ID. The course web page has a Find ID Hashed link (the leftmost bottom/green link on the course web page), which you can use to retrieve your Hashed ID (or click Find ID Hashed). Use the result it shows when examining any spreadsheets of grades; I suggest that you find this number once, and write it down for future reference.
  • Typically this website is not loaded with this quarter's information until the middle of the first full week of classes.

  • If you are an ACCESS student, this link may not work for you (check it in the middle of the first full week of classes); in this case, email me when you submit your first graded work.

#6b: 3/28/22
Checkmate Signup
Please visit Checkmate (http://checkmate.ics.uci.edu) and sign up for ICS-33. When signing up, please use your official name at UCI (as it appears on my roster) and your UCI email address (e.g., mine is pattis@uci.edu) so that how you appear in Checkmate matches how you appear in my gradebook. Programming Assignment #0 includes a Checkmate Tutorial.

#6c: 3/28/22
Gradescope Signup
Please visit Gradescope (https://www.gradescope.com) and sign up for ICS-33. When signing up, please use your official name at UCI (as it appears on my roster) and your UCI email address (e.g., mine is pattis@uci.edu) so that how you appear in Gradescope matches how you appear in my gradebook. The Entry Code for ICS-33 this quarter is 4P8X4K.

#6d: 3/28/22
Ed Discussion Signup
Please visit Ed Discussion Signup and sign up for ICS-33.

#5: 3/28/22
Important:
Submitting Code
without Losing Points
ICS-33 uses software that automatically checks the correctness of code in most quizzes and programming assignments; it uses (self-checking) test cases that we supply with the assignments that we distribute (sometimes slight variants). You will learn about these tools in Programming Assignment #0. Here are a few hints to ensure that you will understand the grading process better and minimize your point loss.
  1. Ensure that you submit the code you wrote, not empty files, nor the original files that you downloaded. Be very careful and double-check what you submit to avoid this mistake: if you are not sure that you submitted the correct code, resubmit it. After submitting (the correct file) to Checkmate, ensure that it shows the assignment's status as completely submitted. Bottom Line: If you do not submit code with your solution (e.g., you submit the wrong file), you will receive 0 points for the assignment.

  2. If you are submitting with a partner, ensure that the Submitter and Partner lines of the program are correctly specified. The names must appear in the exact format required, with no misspellings nor punctuation errors. The student listed as Submitter must be the one who actually submits the code. See the Programming Assignments web page for the exact form required (and you must follow that exact form, with no misspelling nor punctuation errors).

  3. Ensure that you submit your code on time. We can, and mostly do, ignore any work submitted after the deadline (even by a few minutes). It is a fairness issue for other students who do submit on time. The best strategy is to finish the work and submit it well before the deadline (possibly getting extra credit points): by submitting early, you will learn more too, if you aren't rushing to meet a deadline. To ensure that we will grade something, submit partially complete code ahead of the deadline; then, if you miss the deadline, we will still grade the partially complete code. Be warned: Checkmate can get bogged down if many students all try to submit a few minutes before the due time, so do not wait until the last minute to submit your code. Submit your code immediately when you finish; you can always remove a submission and resubmit corrected code later.

  4. Ensure that you test your code using the self-checks that we provide and use for grading. By using these self-checks, you will know when your code contains errors that will result in point deductions when we grade it. The actual tests that we will use for grading might be a bit different, but will be similar in form and content: so, think a bit about testing your code beyond the self-checks that we supply. No finite amount of testing can show that code is correct for all inputs.

  5. Avoid Common Problems: Ensure that your files...
    1. ...contain no syntax errors.
    2. ...contain only appropriate import statements, typically just the ones provided in the download file(s); if Eclipse adds extra imports (which it sometimes does erroneously), remove them.
    3. ...contain only functions that execute quickly (typically under a few seconds - unless specified otherwise in the assignment).

    Any syntax errors, inappropriate import statements, or excessive execution time may cause all self-check tests to fail during automatic grading. For functions that take excessive time, it is best to comment out their bodies, replacing their code with pass, resulting in the function immediately returning None: it will be counted wrong, but doing so will allow other functions to be run and graded for correctness (see the sketch after this list).

    To help avoid inappropriate imports and losing points, ensure that in Eclipse you have selected Window | Preferences | PyDev | Editor | Auto Imports and unchecked all boxes (illustrated below) and then clicked Apply followed by OK.
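
Here is a sketch of stubbing out a too-slow function (the function name and commented-out body are made up):

    def slow_function(data):
        # while True:          # original body commented out...
        #     process(data)
        pass                   # ...so the function returns None immediately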

After an assignment is graded automatically, the Announcement for it will contain a link to an Excel file that you can examine for detailed information about how your score was computed.

If this information does not match your expectations from you running the assignment's self-checks while developing your code, contact your TA. It is best to meet with your TA during lab hours: they can talk to you about your code and run it while you are present, to help resolve the difference. But, if we have to modify your code to grade it properly (see the typical source of problems above), then we will deduct points. I hope that by students carefully writing/submitting their code, these grading anomalies and point deductions will be minimized during the quarter.


#4: 3/28/22
Communication
There are many ways to communicate with me (and other staff and students). Here is a quick overview.
  1. Instructor Email: If you send email to me, please do it through your UCI email address. Include a well-thought Subject line. Also, please include your full name and the course (ICS-33). I teach many hundreds of students each quarter, often in multiple courses. Providing this information helps me quickly determine the context of your email, so I can better answer it. Finally, when I respond to your email, please do not send a "Thank you" acknowledgement. Such niceties are not necessary for work-related email. For me, it just increases the number of emails that I must read.

Note that for questions that are not specific to you -questions that are relevant to the entire class- it is best to ask them in the appropriate Ed Discussion thread.

  2. Ed Discussion Threads: Post in the Category most closely related to your question (and check there first, before posting, to see if another student has already asked that -or a similar- question): use the Search tool. Include a well-thought Title line that clearly identifies the context and issue you are asking about; doing so helps me, my staff, and other students who are deciding whether to read your question and how to respond to it. If you discover the solution to your own question, revisit the posting and explain (without supplying the solution/code) any useful information that you learned that might help other students with the same problem. Avoid a post that says just, "Never mind: I figured it out myself."; try to supply some useful information about your solution, without giving away the answer.

  3. Course Email (ics33-W22@classes.uci.edu): Mostly this is for me to use to communicate with all the students in the class (all course email is archived on EEE). But, there are instances (very rarely) for students to use it: the best example is if Checkmate appears to be unavailable. Sending a Checkmate down email to this address tells me that it is unavailable, and tells all the other students that (a) it is unavailable and (b) I have been informed that it is unavailable.

#3: 3/28/22
First/Later Labs
I expect students to attend all their scheduled labs (unless they have already finished the current programming assignment). Programming Assignment #0 is assigned before the first lab of the quarter; so if you have not already finished it, I expect you to attend your first lab and work on it there.

Generally, you can get invaluable help in lab from the TAs and Tutors relating to

  • understanding the specifications of the assignment
  • understanding Python language features
  • understanding how to debug your Python code
Learning how to ask technical questions and interpret answers is an important skill that you can acquire/practice during your labs with TAs and Tutors.

For debugging, don't expect the staff to understand your code unaided and then debug it for you. Instead, expect to explain your code to them (and answer questions about it) so that they can help you learn how to debug code in general, using your current problem/code as a concrete example. TAs/Tutors will model the debugging process for you, so that you can follow it by yourself for subsequent bugs. One goal of ICS-33 is to make students much more independent programmers and debuggers: you should continually improve your debugging skills throughout the quarter.


#2: 3/28/22
Install Course Software
All students should download and install the course Software: Eclipse (which installs Java, which is needed to run it) and Python. Both products are available for free on the internet. Students can view instructions for downloading and installing this software by following the Course Software link. If you are using a Mac, there are special instructions for you (which are a bit out of date: I don't own a Mac): e.g., Java is already installed.

If you have installed a version of Python prior to 3.9, you should install the current version of Python (3.9 or later). If you have installed a version of Eclipse prior to 2019-06, you should install the current version of Eclipse (2021-06 or later). My PC instructions show installation of the latest versions available during Summer of 2021, so you will likely follow similar but not identical instructions.

Although students can work on their programming assignments on the computers in the UCI labs, I expect students with computers to download and install this software by the end of the first week of the quarter. If you are having difficulty with this task, the TAs and Lab Tutors will help you during the first Lab meeting (or beyond, if necessary: bring your computer to the lab). If you have successfully downloaded and installed this software, please help other students do so too. Finally, you can also use the Ed Discussion threads to ask questions about installing this software and help other students install it: use the Logistics category. Installing software is sometimes confusing, but it is a one-time event: do it now; use it for the entire quarter.

I strongly suggest that you BACK UP YOUR WORK daily: computers can malfunction, break, or be stolen. Every quarter I hear from a few students who have lost their work because they didn't back it up; get into the backup habit now. I back up all my ICS-33 materials every day by zipping a folder that has all my ICS-33 materials and putting it on a USB memory stick.


#1: 3/28/22
First Message
Welcome to ICS-33. I am going to post and archive important messages about the class in this announcements web page: each entry will be numbered, dated, and labeled. The entries will appear in reverse chronological order. Whenever you follow the link to this page, scan its top for new announcements; scan downward for older announcements. This message will always appear at the bottom of this page. I will never remove a message from this page.

I have already posted some important messages before the start of the quarter. Expect a few new messages to be posted here each week, mostly regarding understanding returned and graded work.

Check this Announcements page, along with your email, and Ed Discussion threads daily.