Over the past several months, a conversation has been going on amongst Law Center faculty, administration, and a few students. The central question is this: should historical grade curve information be available to students?
(Share your views by following the link at the bottom of this post. All responses are anonymous.)
To many first-years, this may seem like a pointless inquiry, since the recommended curve is followed in 1L classes. Second- and third-year students likely know that the recommended curve becomes more of a suggestion in upper-level courses. That latter view better reflects reality. What most upper-class students may not know, however, is the magnitude of the variation.
Discussions in the Academic Standards Committee–a student-faculty committee that deals with, among other things, grading policies and new course approvals–began with a request from several students that average historical grading information be made available for clinics. After lengthy deliberations and research, the Committee found that variance went well beyond clinical programs. On March 22, the Committee sent a memorandum to the full faculty recommending that “historical grade information with respect to each course and instructor [and clinics] be made available” to students. It further recommended that the Registrar make available the three year aggregated curve for each of six categories of classes: “(1) exam courses, separately as to first-year and upper-level; (2) seminars; (3) clinics; (4) practicum classes; (5) trial advocacy classes; and (6) LRW fellow classes.”
Multiple faculty members reported that this recommendation was tabled at a subsequent meeting of the full faculty. In parliamentary procedure, a tabled item is removed from consideration for an indefinite time–often forever.
As noted in the Academic Standards memorandum, one thing that students do not know is “variance– how consistently individual faculty members adhere to the recommended curve, or how they grade in courses to which the recommended curve does not apply.” An analysis of the data as a whole shows a significant variance from class to class and from professor to professor.
For example, in a three-credit practicum course offered multiple times in both the Fall and Spring semesters, the chances of receiving an A or A- vary widely depending on when and with whom the course is taken: choose the right semester and professor and your chance is 90%; choose wrong and it dwindles to 35%. The variations extend to common large-format classes as well. Consider Course X, a four-credit course that is offered multiple times each semester and is considered foundational by most. In some sections, 40% of enrolled students receive an A or A-; in others, 30%. Course Y, another common four-credit course, follows the same pattern.
Seminars are a different beast altogether. To start, seminars are graded significantly higher on average than regular courses. But as they say, the devil is in the details. While the aggregate trend is higher, individual seminars' grade distributions vary widely. Some could almost come with a guarantee of an A or A-. Others hew more or less to the recommended curve. As a general rule, seminars with exams are graded lower than seminars requiring papers.
Two issues of concern
The issues raised by this information and its potential release are numerous, but two of the largest are inequity and arbitrariness.
First, the data point to real inequity in the current system. If you know whom to ask, if you have the right upper-class friends, you can get some information on some courses and professors. Those in the know are then positioned to exploit this information to the detriment of others. All students, at the end of the day, are in competition with one another. Two identical students, equally capable and hardworking, will have different outcomes–different GPAs–because they have disparate levels of access to historical grading information.
Second, the system as presently constituted increases the arbitrariness already inherent in law school. How will I do on the LSAT? Can I afford an expensive preparatory course? Does my personal statement strike a chord with the unknown admissions officer reading it? Does a judge's current clerk have a soft spot for Georgetown, making them more likely to pull my application out of the mountain they have received? What did the judge eat for breakfast? The variation in grade distributions, and the fact that it is hidden, adds to this arbitrariness. One student takes Professor X and the other takes Professor Y. One gets an A and the other a B+. Would you want to know that their respective chances of getting an A were set–one at 40% and the other at 25%–before they even purchased their books?
This is not a simple issue, and it is one that students deserve to know about and confront. Transparency is better than silence.
Please share your views on whether historical grade curves should be available to students by filling out this form. All responses will remain anonymous.
Disclosure: I am Co-Editor-in-Chief of the Law Weekly and have been involved in the SBA since 1L year, first as a representative and now as Chief of Staff. I became involved in the grade curve discussions through the SBA as a member of the Academic Standards Committee. The samples of historical grade curve information included in this article are not guesses or hypotheticals, but represent real data. This information did not come from my association with the SBA or the Academic Standards Committee, but was obtained independently by the Law Weekly.
Since this article’s posting, eighty-three members of the Georgetown Law community have responded to the linked survey.
Many of these responses have included careful and thoughtful reasoning, and these comments are illuminating and valuable additions to the discussion. A summary of them will be compiled into a separate post in the near future.
In the meantime, please add your voice to the discussion.