It has taken much longer than I had anticipated to put this second entry together. I am certainly not destined to be a high-volume blogger!
In my previous entry I posed this question: Can department based faculty evaluation rubrics be developed that are broad enough to capture reasonable variations in paths to productivity and impact, yet still retain enough substance to be a useful resource for pre-tenure faculty?
When I have asked department chairs to consider this question, the answer that I get is often along the lines of “our department is too complex”, “we have too many specializations”, or “there is not consensus across interest groups in the program”.
I want to focus, for a moment, on the notion of consensus. Consider what a positive tenure and promotion vote from tenured faculty represents. The idea is that the tenured faculty in a program will come together, evaluate a candidate’s record, and then cast a vote. The vote is based on an assessment of the candidate’s cumulative academic record against some standard. It is logical that there may be variation in the assessments of the tenured faculty. Ideally, this variation should represent differences in assessments of how the record of scholarship being evaluated matches up to a common standard. In reality, it likely represents some unknown mix of variation in both perceptions of what the evaluation standard should be and how well the record matches the standard being used by an individual tenured faculty member.
Tenured faculty are expected to provide a consensus evaluation at the time of the tenure vote, and ideally to give some indicators of successful trajectories through intermediate reappointment votes. Wouldn’t it be wiser to work on forging a consensus at the beginning of the process? Indeed, best practice guidelines point to the advantage of revisiting and discussing, annually, the standards that will be used to evaluate pre-tenure faculty. This is particularly valuable for effective mentoring. As Kerry Anne Rockquemore of the National Center for Faculty Development and Diversity argues, clear guidelines for tenure and promotion are the “best mentoring of all and it’s also the one thing that most colleges and universities refuse to provide.”
More to the point, when standards are not transparent, the potential increases for various forms of bias to influence the outcomes of tenure votes. See, for example, the research on gender bias in tenure processes reported at the recent American Sociological Association meeting.
The Center for the Education of Women at the University of Michigan outlines the benefits, rationale, and strategies for developing a transparent tenure process. Transparency can make it easier for pre-tenure faculty to make decisions that are informed by the standards.
Creating Faculty Evaluation Rubrics
The University of Washington ADVANCE program has posted resources for developing faculty evaluation rubrics. The point made by the UW ADVANCE program and other sources is that rubrics “not only help maintain consistency in the evaluation process and reduce bias, but they also help those under evaluation have a more clear understanding of performance expectation” for both the pre-tenure and tenured faculty.
In 2006, early in my term as a department chair, I was approached by a faculty member who was one of the investigators on an NSF ADVANCE grant. She convinced me that I should ask the faculty in our department to consider developing pre-tenure evaluation rubrics. I was not convinced that we would be able to move the tenured faculty off the ‘we know tenure when we see it’ mentality that I had observed in my 25 years in the academy. While it took a few faculty meetings, and the better part of a year, we were able to shape a document that everyone could support.
The pre-tenure research rubrics we initially developed provide a brief narrative for each pre-tenure reappointment review along with descriptors for each level of performance using the university standard evaluation adjectives. The progression of research expectations, across evaluation adjectives and with successive reviews, provides a foundation upon which pre-tenure faculty and their mentors can make general assessments of their progress. The descriptors are general, providing a range of criteria conditioned upon the impact of the research. They still provide considerable latitude for interpretation.
Here, I have reproduced the descriptors for the higher research evaluation adjectives for each review cycle. At the first reappointment review, early in the second year, the following descriptors were developed for the evaluations of ‘good’, ‘superior’ and ‘outstanding’.
At tenure review, typically starting early in the sixth year, the descriptors for the adjectives again show trajectory. The adjective shading used also reflects the expectation that, by tenure review, the evaluation needs to be at least superior or above.
Note that the descriptors still leave room for interpretation that has to be delineated through effective mentoring. For example, what are “high quality peer review publications”? In our department, we had a practice of identifying the very top specialty journal(s) in the candidate’s substantive research area, along with the highly ranked generalist journals in the discipline, as the journals that met this criterion. So, for example, if someone was working in the sociology of health, the Journal of Health and Social Behavior would likely be one of the top specialty journals.
However, the point is not about the specific standards of any one department. Other departments may, for example, place more emphasis on publishing books than ours did. What matters is that the expectations can be spelled out with sufficient clarity to better inform the pre-tenure faculty.
Over time, these rubrics have evolved from their initial statements. But the evolution has been gradual, not sudden, and thus does not create the kind of risk of shifting standards that has sometimes been alleged in cases of negative tenure recommendations.
In my next entry, I will do my best to address the next question based upon my experiences with faculty evaluation rubrics:
If created, are rubrics helpful to pre-tenure faculty? Are they helpful to tenured faculty making the evaluations?
I have included the full one-page rubric narratives for each of the review cycles below, with shading on the adjectives indicating problematic/adequate progression. They include some additional descriptive narrative about the standards and their interpretation.
If you select and click on any review rubric page, it will expand to full screen for easy reading.