
Can we learn what a grant reviewer is thinking based on their language?

Grant deadlines are approaching, and this season, I’m proposing a shift in thinking about proposal writing. For our next grant, let’s remove all the shoulds and musts we think exist in application writing, like “I need to include all of my preliminary data”, “I should include all of the relevant literature so they can see this is an emerging topic”, or “I must get across that my track record is above average!”, and instead simply focus on writing something reviewers will want to read.

How do we do this?

What if we tried to determine how a grant-evaluation panel thinks based on what they write about each application they review and work from there?

Because if we can see what the reviewers were thinking when they judged each grant application, we can get a much better idea of what matters in an application and what to do or not to do.

What goes through the mind of a reviewer?

An interesting recent article by Peter van den Besselaar and Ulf Sandström on the LSE Impact Blog related the details of their linguistic study of peer-reviewed grant applications (the original open-access article can be found here). They examined the evaluation reports written by a peer-review panel judging grant applications, scoring the wording of each report for positive and negative words (either emotional or evaluation-based), as well as superlatives, achievement words, ability words, and words relating to the track record of the PI, the proposal itself, and so on. In doing this, they were able to relate the language of the reviewers to the success of the application.
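To make the method concrete, this kind of linguistic analysis boils down to counting how many words in each evaluation report fall into predefined categories. Here is a minimal sketch of that idea; note that the word lists below are invented placeholders for illustration, not the dictionaries used in the actual study.

```python
# Toy sketch of category-based word counting in an evaluation report.
# The category vocabularies below are made up for illustration; they are
# NOT the word lists used by van den Besselaar and Sandström.

CATEGORIES = {
    "positive": {"excellent", "strong", "innovative", "promising"},
    "negative": {"weak", "unclear", "limited", "insufficient"},
    "achievement": {"achieved", "published", "awarded", "demonstrated"},
}

def count_categories(report: str) -> dict:
    """Count how many words of the report fall into each category."""
    words = [w.strip(".,!?;:").lower() for w in report.split()]
    return {cat: sum(w in vocab for w in words)
            for cat, vocab in CATEGORIES.items()}

review = "The proposal is innovative and the PI has published strong results."
print(count_categories(review))
# → {'positive': 2, 'negative': 0, 'achievement': 1}
```

With counts like these in hand for every report, they can then be compared against the outcome (funded or rejected) of each application.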

The fairly obvious results first: unsurprisingly, positive words correlated with successful applications and negative words with unsuccessful ones. The successful applications also showed many more instances of achievement words, superlatives, ability words, and positive emotion and evaluation words, and had significantly fewer negative words in their evaluations.


Three major points jumped out at me in this study:

1. It starts getting interesting when they looked at the words relating to the proposal itself and the track record of the PI: an increase in the number of words from these categories in an evaluation correlated with a NEGATIVE result. This indicates that the peer reviewers spent more time justifying their reasons for NOT recommending an application by discussing the proposal itself and the track record than when they recommended an application. The authors propose that more words indicate more disagreement, and that people spend more time discussing things they disagree with than things they agree with.

2. Additionally, the peer reviewers were, in theory, supposed to evaluate the project and the abilities of the PI separately and provide two separate scores. In practice, the scores for the PI and for the project correlated strongly, indicating, unsurprisingly, that an opinion in one category probably influenced the opinion in the other. Likely, the reviewer made up their mind in one category and found a way to justify that decision in the other.

3. When the authors examined actual performance metrics (the normalized journal citation score reflecting the impact factors of the journals published in, the share of the top 5% most cited papers, the number of grants received, and the quality of the collaboration network), positive words correlated weakly, though positively, with the performance metrics, while negative words correlated moderately and negatively with them. The authors suggest that the lack of a strong correlation indicates that different reviewers have different personal scoring metrics.


These above points are interesting for several reasons.

First, as the authors suggest, it looks like the peer reviewers might search for the weaker applications and eliminate them instead of searching for the best and brightest and funding them. This makes sense considering that these panels process an average of 100 proposals in 2-3 days… they HAVE to do some elimination work in their decision-making process to even have a chance of completing this daunting task.

Additionally, it seems like the reviewers also tended to see both parts of the score in the same light. This likely indicates that a proposal or track record that was deemed unfavorable by a reviewer caused them to view the other part in a negative light. Consequently, it is important to make sure that no part of the application can put the reviewer in a negative frame of mind and to remember that keeping the reviewer in a positive frame of mind can influence all parts of the application.


Actual photo of reviewing panelist staring at proposal 89/100 on the third day of reviewing.

Also, I’m going to argue that this means that, for successful applications, at a certain point the PI’s credentials are not that important.


Most of the discussion of the PI’s track record occurred in negative evaluations, meaning it was used more as a way to justify eliminating a proposal than as a reason to fund one. If the track record were used as a reason to fund someone, there should be more correlation between track-record-related words and positive evaluations, and a stronger correlation with applications from PIs with outstanding performance metrics. Instead, the track record is used by reviewers as a way to eliminate applications that aren’t up to snuff, with outstanding performance metrics corresponding only weakly to positive evaluations.

That means that what is required in terms of track record is simply “good enough”, without anything that could get your application eliminated. Obviously, it would be excellent to have the best track record possible, but don’t let that weigh you down in an application or prevent you from applying for something. It is more important to have an overall solid application that doesn’t give the reviewer a reason to toss your application right away (and justify the tossing).

And this argument, once again, ties into the previous post on cover letters, wherein the credentials in the sample emails (and your future cover letters!) aren’t as important when the rest is done right. If someone likes you/your application, they will perform a lot of convoluted mental gymnastics to justify getting you a job/funding. If they don’t like you/your application, they will be able to find a reason to justify not giving you something, and no one’s credentials are flawless enough to avoid that.


SO, what this means is that arguably, the best application isn’t the one with the most data, the best PI, lists of publications in top journals, the coolest science, or, really, any of those metrics we all want to believe make a great application.

The best application is the one that your reviewers happen to like.

Sure, this doesn’t mean your proposal can have bad science or ideas or be poorly thought out – all of those things still play an important role. BUT, if you can get your reviewer on your side, your chances of getting any application through are going to be much higher.


Ok, that all sounds well and good, but how does one do that exactly?

1. The number one takeaway, and one that should go without saying but is so important it needs to be mentioned, is not to give the reviewers a reason to reject your application. When they are processing over 100 applications in such a short time, any reason you give them, no matter how small, will be a relief: they can just toss your application and move on. There is no benefit of the doubt. Follow the rules.

2. Leave no weaknesses. With a process so cutthroat that panelists are actively searching for reasons to reject applications, give them no opportunity to do so. Have people read your proposal to find holes or confusing sections. Cover your bases. Don’t underestimate a reviewer’s ability to justify a rejection.

3. The chances of your panel members being experts in your field are very slim. Nor are they going to take the time to study up on your field to give your application a “fair shot” based on your science and ideas, given the number of applications they are processing. It is therefore up to you, and you alone, to convince them within the scope of your application why they need to fund you. This means that your proposal needs to be written assuming that the person reading it has taken only the entry-level course in your field. Avoid big words when possible, define every term (EVERY. TERM.), and explain concepts in the absolute simplest terms possible.

4. I’m convinced that this next point is key to getting any application through any process, so pay attention. Be nice to your panel. What do you think your chances are if you are application number 89 of 100 and have written a dense proposal that consists of 10 solid pages of text packed with as many detailed facts as possible? I’m going to say not great.

  • Sacrifice the less important data for clarity of the data you do present, with clear explanations of why each presented fact is important, how it fits, and how it justifies your proposal. More data is not your friend, no matter how counter-intuitive this seems.


  • Your reviewer needs to judge the impact/novelty/risk of your project. Help them out by spelling it out: “The impact of this work is…”, “The novelty of this work lies in the…”, etc. This ensures, first, that you clearly address these points in your proposal and don’t forget any, and second, that your reviewer will actually see them. After reading 89 or however many proposals, your reviewer will appreciate you spelling things out and may just be scanning the proposals for keywords at that point anyway (no matter how much they want to think they are really evaluating each one).


  • MAKE IT INTERESTING! Throwing data and facts at a reviewer, especially towards the end when their attention and willpower are waning, is not going to do your reviewer, and therefore you, any favors. Connect your ideas. Tell a story. Let it flow in a way that is easy to read and makes sense. This will catch your reader’s attention and stand out in a positive way.


  • In the same light, give your reviewer’s eyes a place to go. Include figures, breaks in text, headings (even colored ones!), and bold or italic phrases. Break it up. Make it something the reviewer wants to read instead of something they dread at each page turn.


We’ll go into many of these points in much more detail in future posts, but for now, most of these problems can be avoided by writing grant applications with the reviewing panelists in mind and giving them something they want to read. What can you do in your proposal to make the reviewer’s life easier? What would you appreciate seeing in a proposal if you were reading so many in so short a time? What should you eliminate from your proposals with this in mind?
