By Regina F. Bendix
This essay is part of the series PoLAR Online Emergent Conversation on Peer Review as Intellectual Accompaniment
Every two years, the Deutsche Gesellschaft für Empirische Kulturwissenschaft (formerly Volkskunde, aka European Ethnology) holds what is termed a working meeting on matters regarding higher education (Hochschultagung). Usually, there is a session scheduled with the elected representatives of the discipline in the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG). In 2016, one of the field’s two representatives offered precise guidelines to the assembled professors and postdocs with regard to peer reviewing grant proposals. Her stern though encouraging remarks can be summarized succinctly as follows: if at all possible, agree to review; if you see merit in the proposal, word your review very carefully, abstain from suggestions that might make an excellent proposal still better (as this may lead to a rejection), and avoid any mention of omitted literatures. Notice that she did not say that a poor proposal should be recommended. But if someone’s proposal held water and was all around well shored up, then every “maybe,” “perhaps,” and “one could” was likely to diminish the applicant’s chances, for in the tight race between high-scoring proposals, review panels scour peer reviews for precisely those kinds of clauses: they provide an opportunity to rank applications that would otherwise seem equally deserving.
Such advice points to shifts in the parameters when peers are called on to review proposed work rather than articles or books representing the fruits of completed research. In the context of a national grant-funding institution open to all disciplines, such as the DFG, funds are, to begin with, far more limited for the humanities and social sciences than for the natural sciences. Competition among those fields is correspondingly high, and in a small discipline such as European Ethnology, reviewers do well to set aside squabbles among different factions so as to improve the chances of the collective vis-à-vis larger fields. Some small fields adhere to this pragmatism with a stiff upper lip, and onlookers are correspondingly amazed when they see just how many grants are awarded to, e.g., ancient history or Byzantine archeology. Their practitioners have learned that the place for detailed peer critique is within the discipline and not in a setting where many disciplines compete for sparse funds. An article submitted to a discipline’s flagship journal may receive harsh reviews, as alternate schools of thought carry out disputes both on the printed page and in peer reviews. But when it comes to grants, it is not just an individual professor who gets funding; it is a university institute that receives research and overhead funds, strengthening the field overall at a time when disciplines with small enrollments consistently face threats of amalgamation or elimination in the economically driven university.
Advice like the field representative’s in the 2016 working meeting is rare, yet crucial, particularly for peers in the humanities and social sciences. We carefully craft and read texts, and thus, in reading and comparing peer reviews on evaluation panels, what is written is also scrutinized through the lens of how it is written. During the years I was appointed as a coordinating peer reviewer for the Swiss National Science Foundation (NSF), the work was phenomenally encumbered by peers who submitted assessments—whether excessively positive or disastrously negative—without carefully reading the grant, who wrote assessments far more positive than the numerical ratings they assigned, or who faulted a proposal for all the things it had not set out to do in the first place (an issue frequently encountered also in reviews for journals). My task was threefold: I had to suggest between 5 and 8 reviewers (thankfully with the support of the NSF staff) for grants submitted in ethnographic fields, in the hope that 3 to 4 would consent to do the work; read and review those grants, tempering my own assessments with those of the solicited peer reviews; and present each case, together with the second representative (who did not necessarily share my assessment), in the panel sessions which over the course of two days would select the successful grants.
Before the panel sessions, the preliminary scores of first and second presenters were made available to us, and, time permitting, we could also read all grants and full reviews. A diligent panel member could thus assemble enough information to determine which of the worthy grants in his or her portfolio might have a fighting chance in the generally congenial but extraordinarily high-level discussions. Not every panel member had the same perseverance, and not every panel member the same gusto for making a case, but the panel chairs, amazingly and admirably, proved themselves not only deeply prepared but also patiently persistent in cases where they sensed that fair assessment required more scrutiny than the allotted 6–8 minutes per grant. Advocating for a grant—just as much as downgrading one—required sufficient footing not only in the topic and nature of the work the applicant proposed, but also in gauging the caliber and station of the proposal’s reviewers. The panel composition was interdisciplinary, as, frequently, were the reviewers for a given grant. If I wanted to go to bat for, e.g., a proposal in medical anthropology that also considered legislation on organ transplantation, it was important for me to spot the reviewer whose negative assessment was guided by their legal expertise but whose lack of ethnographic experience did not entitle them to trash a fieldwork design. Reviews excessively praising a grant for an applicant’s promise and the innovative scope of the project proved useless if they did not contain the few concrete sentences nailing down just what made the proposed project crucial for a particular subdiscipline; indeed, excessive praise often took a grant down rather than up. Very rarely, there was the brilliant peer reviewer who delivered concisely the reasons why a proposed project absolutely had to be funded or rejected.
Participating in the NSF evaluation process was enormously time-consuming, and, it being Switzerland, it came with an honorarium that compensated for the many weekends and evenings poured into the tasks. It was also brain-expanding, enjoyable, and humbling: I have not since been in an interdisciplinary setting with academics so seriously committed to fostering the cause of excellent research while simultaneously assessing each research design for the opportunities it foresaw for junior scholars, relying on tiered peer reviews and their own broad knowledge of disciplinary traditions and developments. It was in those sessions that I felt in the presence of an ethos parallel to, if different in its goal from, what I value in co-editing a journal: all involved were weighing submissions, peer reviews, and disciplinary well-being and future with the utmost care.
The NSF experience stands in sharp contrast to the peer reviews of individual proposals I have been asked to do for various national granting agencies, experiences that have at times led me to the point of declining to review. When I am approached to review grants, I usually have a sense of mutual obligation. I am grateful when colleagues take the time to review grants I have submitted in a timely fashion, perhaps even clarifying for me why I need to design a given project idea differently for it to be successful; in turn, I review grants to provide that same kind of collegial service and, hopefully, to assist researchers in my field in receiving funding or in revising an application to be successful. However, the increased digital automation of at least some review processes, coupled with an emphasis on numeric assessments, has had a deep impact on my readiness to serve. It is not only that every grant agency and foundation has designed a different web interface for retrieving a grant and entering one’s peer review. It is also that such web designs are in some cases driven by the skill and logic of IT technicians rather than by scholars familiar with the essential components of grant peer reviews and the varied milieus established in disciplines over time.
My most absurd experience was with a national grant agency whose digital review form required that each field be filled with at least 200 words—it was not possible to move to the next field before that quota was reached. I could understand the motivation behind this requirement: reviewers were given a nominal fee for doing the work, and the technical fix was designed to prevent minimalist reviews. But there were some simple questions for which one could reach 200 words only by repeating oneself interminably. The system accepted the repetitions, and heaven knows what the ultimate decision makers did with that kind of prose. Exchanges with administrators in that particular agency and in a number of other such state-based institutions brought me to the point of declining to review for them on principle. Realizing that such systems are usually not administered by academically trained individuals who share the kind of peer review ethos outlined above, I came to doubt the overall grant-giving schemes of some organizations. Furthermore, such funding offices are frequently affiliated with ministries that regard research as a product to be made available to the taxpayer—an attitude evident, if thinly veiled, on occasion even in the prose that accompanies the invitation to review.
This mechanical approach undermined my commitment to what I regard, in principle, as an important, shared duty of the profession. I realize, of course, that it is up to us scholars to combat systems that devalue the disciplinary perspectives that should ideally flow into peer reviews. But my efforts to write to such agency division heads, or to communicate with scholars in a given country to alert them to the unproductive turn some grant evaluation schemes have taken, have been met either with silence or with increasingly demoralized responses of the one-cannot-do-anything-against-our-system type. With their digitally turning cogs and wheels, such evaluation systems remind me of Charlie Chaplin’s Modern Times from 1936. They make me wonder whether some of these governmentally orchestrated peer review processes purposefully enact systems reminiscent of early mass manufacturing. Other instances give peer reviewers and applicants alike a feeling of being churned through the wheels of a huge apparatus driven by artificial intelligence, their time and skills abused in the effort to support the carefully evaluated disbursement (or securing) of funding. In some of the EU’s programs, such as HERA, a given sub-call may net perhaps 150 proposals, each requiring several peer reviews, while the funds actually available suffice to award only a very few projects. Applicants are then gifted with rejections containing 2 to 3 positive or even glowing peer reviews (perhaps collated in a machine-generated process?) and a cover letter informing them that, sadly, there is not enough funding to support their very good project.
I am familiar with some private foundations that have developed alternate ways of achieving productive evaluations while circumventing some of the mass labor that grant reviewing has turned into. One notable private foundation distributes all proposals submitted for a specific call to every member of a given review panel. These reviewers are then brought together to discuss the submissions orally. Staff members of the foundation, all academically trained, moderate the discussion and take notes. Neither awardees nor those rejected receive written peer reviews; they are instead provided with a summary of the panel’s discussion and invited to write or call.
So much is at stake for the course of individual careers when grants are submitted. For individuals with limited academic employment, a successful grant application may be the lifeline for another three years of work (and insurance). For permanently employed academics, bringing in third-party funding increasingly determines the value of their discipline within a university’s decision-making cauldron. When considering an invitation to peer review grants, we do well to keep in mind the severity and precarity of the conditions within which many colleagues live. If you see merit in the proposal, if you can tolerate the digital interface the requesting agency presents you with, and if you are fully aware that the grant review process does not tolerate the friendly yet critical particularity that a journal peer review requires, then, to repeat the words of my colleague that open this essay: if at all possible, agree to review.
Regina F. Bendix is professor of cultural anthropology/European ethnology at Göttingen University, Germany. She served as co-editor of a graduate student-run journal as early as the 1980s, was co-editor of Ethnologia Europaea from 2007 to 2015, co-initiated and co-edits the journal Narrative Culture (2014–), and has been one of four co-editors of the Zeitschrift für Empirische Kulturwissenschaft (formerly Zeitschrift für Volkskunde) since 2020. She has peer reviewed for many international journals as well as publishers, and is no longer keeping count of how many grant proposals she has reviewed.