THE LEGAL BRIEF, Volume 42, Issue 3, May 2024

(Continued from page 27)

Bias in Judicial Performance Evaluations: The Problem and Proposed Solutions
by Andrea Răchită and Judge Rebecca Glasgow

(Emphasis added).7 We were unable to find studies evaluating whether there is a similar disparity in judicial performance evaluations for LGBTQ+ judges or judges with disabilities.

Study authors have also concluded that judicial evaluation survey results for BIPOC and female judges often did not align with more objective measures of a judge's performance, like reversal rates, whether the judge was reappointed or reelected, prior judicial experience, and instances of judicial discipline.8 One reason, these authors concluded, is that the subjective nature of many evaluation questions can allow unconscious race and gender bias to permeate the results. For example, when one jurisdiction used questions that allowed respondents to subjectively define for themselves the qualities referenced in the questions ("acting in a dignified manner," for example), discrepancies in the survey results across races and genders increased. When questions were rewritten or rephrased to focus on objectively observable judicial behaviors, the discrepancies between judicial officers of various races and genders fluctuated, indicating that the way evaluation questions are written can influence outcomes.9

Studies on judicial performance evaluations suggest alternatives and strategies for reducing bias in the results. First, bar associations should consider eliminating judicial performance surveys entirely in favor of other methods of providing feedback to judges, like volunteer secret shoppers who observe dockets and provide written or verbal feedback, or one-on-one observation and feedback from retired mentor judges.

Bar associations that retain their judicial evaluation surveys should focus on the purpose of the survey. Is the goal to promote judicial self-improvement, to provide information to voters and appointing authorities, or something else? If the primary goal is to promote improvement in judicial performance, then bar associations should be conscious that framing feedback in terms of personal characteristics (intelligence, patience, personality traits), rather than observable behaviors, makes it more likely the recipient will hear negative feedback as an accusation of a personal flaw. Judges tend to dismiss that kind of feedback more easily and respond better to feedback about specific behaviors.

Bar associations should also consider the pool of survey respondents. Responses from people who have appeared before the judge in the previous year are the most valuable, and there may be value in seeking feedback from pro se parties who meet this requirement in addition to attorneys. Seeking responses from people with recent courtroom experience with the judge helps avoid responses that rely on secondhand information or social interactions rather than observable workplace or courtroom behavior. Expanding the survey to seek input from attorneys who are not local bar association members may yield a higher number of responses. And bar associations should publish response rates and the number of respondents compared with the number of attorneys in the jurisdiction or the number of attorneys appearing in the rated court, so that people reading the results understand the limitations.

With regard to specific survey questions, bar associations should use language that describes concrete behaviors that respondents could reasonably observe in their direct courtroom or workplace experiences with the judge. In other words, questions should focus on concrete courtroom behaviors that judges exhibit, as opposed to inferred attributes or personality traits. Bar associations should avoid language that focuses on evaluating personal characteristics, which are more susceptible to bias. For example, asking whether the judge showed that they had listened to the arguments of all parties is better than asking whether the judge is patient.

__________________________

7 Rebecca D. Gill, Implicit Bias in Judicial Performance Evaluations: We Must Do Better Than This, The Justice System Journal 12 (2014).
8 Id. at 19–20.
9 Rebecca D. Gill et al., Are Judicial Performance Evaluations Fair to Women and Minorities? A Cautionary Tale from Clark County, Nevada, 45 Law & Society Review 731–59 (2011); Jennifer K. Elek et al., Judicial Performance Evaluation: Steps to Improve Survey Process and Measurement, 96 Judicature 65–75 (2012).

(Continued on next page)
