How to improve Australian research grant systems and support researcher mental health at the same time
Shane Huntington | 5 September 2021
It is time for a researcher-centred funding system argues Dr Shane Huntington
A few months ago, I joined a friend for a coffee and we started talking about the NHMRC grant system. The moment it came up, I noticed her demeanor shift. She seemed beaten. She didn’t even know if she had won or lost the grant. The process alone was enough to damage her confidence. I had supported her in the grant writing process so I knew it was an excellent submission. But in the current system, quality doesn’t seem to matter, and everyone knows it.
So I asked her, would she be willing to trade the current system for something else? I proposed a short application of two pages, with a 70% cull of applications followed by a simple lottery. “Absolutely”, she responded. This would be substantially better than the current system. Then she got excited because she realized the lottery part could be electronically resolved in a day.
Researchers would not be concerned about their careers right up until weeks before contracts are about to end. We would not pretend that quality was the main factor in making decisions. We could have a system that was not biased against women. We could make sure that old white men with large research teams didn’t get most of the cash. We could guarantee that truly innovative ideas get funded even if they are risky. We could stop making 90% of the research community feel like garbage.
Currently, we have a system that consumes three to six months of a researcher’s time per application, and another month or so if you are a reviewer. We could trade that for something that would take a few days. Importantly, researchers would actually see this alternative as fairer.
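Purely for illustration, the cull-plus-lottery mechanism described above can be sketched in a few lines of code. Every name and number here is hypothetical — a thought experiment, not any agency’s actual process:

```python
import random

def select_grants(applications, cull_fraction=0.7, num_awards=10, seed=None):
    """Hypothetical cull-then-lottery selection.

    `applications` is a list of (applicant_id, review_score) pairs.
    The top (1 - cull_fraction) by score survive the cull; winners
    are then drawn at random from the survivors.
    """
    rng = random.Random(seed)
    # Rank by review score, highest first
    ranked = sorted(applications, key=lambda a: a[1], reverse=True)
    keep = max(1, round(len(ranked) * (1 - cull_fraction)))
    survivors = ranked[:keep]
    # The lottery: a uniform random draw from the surviving pool
    winners = rng.sample(survivors, k=min(num_awards, len(survivors)))
    return [applicant for applicant, _ in winners]

# 100 applications scored 0-99: a 70% cull leaves 30, then 10 are drawn by lot
apps = [(f"app-{i}", i) for i in range(100)]
winners = select_grants(apps, cull_fraction=0.7, num_awards=10, seed=42)
```

The point of the sketch is how little machinery is involved: once the cull is done, the draw itself really could be resolved electronically in a day.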
Make no mistake, the research grant systems in Australia are doing real damage to mental health. When the system itself causes damage to the very people it is supposed to support, something has to change. The current situation is not okay.
The lack of care for mental health goes well beyond the research bodies themselves. We put the care of our researchers at the very bottom of the pile across the sector. It’s institutionalized. There is a lot of talk about mental health, but institutions do nothing to rectify the issues that matter most. Show me a researcher who is happy with the current grant systems and I will show you 100 who are not.
There is little doubt that having more money in the system would alleviate some of the problems. Money, however, is not the only issue that exists. If we really want to address problems with our granting systems, we need to look deeper. We need to understand just how much damage a bad system can do to the mental health of our researchers.
The existing systems are so bad that even when researchers are successful they tend to keep quiet about it because they know so many around them are suffering. They have “grant success guilt”.
The Top Issues with the Current Grant Systems in Australia
Before running through this list, I want to be clear that I see tertiary and independent research institutions along with government agencies as part of the “system”. It’s not just the NHMRC or ARC that cause all the problems. It’s a solid team effort.
This list is not exhaustive and is in no particular order of priority, but obviously #1 is the most important. Not all problems apply to all granting agencies.
#1 — Time wasted: Months of research time on long applications
#2 — Waiting for results: Unknown outcome dates, year after year
#3 — Timing of applications: Discriminatory for parents
#4 — Reviewer feedback: No quality control, sometimes hurtful
#5 — Impact of grants on overall workloads: Extreme, up to 30% of the year (or more?)
#6 — System biases: Race, gender, age, field
#7 — Success rates: Devastatingly low — career ending
#8 — Institutional pressure: Extreme pressure from institutions to apply
#9 — Removal of review panels: No oversight of reviewers
#10 — Track record assessment: Field biased, not comprehensive
#11 — The reviewer lottery: Vast standard differences
#12 — A system that is not adaptive: Difficult to improve grants based on feedback
#13 — Decision making transparency: Completely opaque system
#14 — Relative to opportunity: No real structure for assessment
#15 — ROI for universities: Salary expenditure on applications enormous
#16 — The Medical Research Future Fund (MRFF): Not viewed as a competitive system
#17 — Partial funding of grant budgets: Budgets arbitrarily cut without cause
#18 — Assessment based on academic level, not years of service: Perverse incentive to stay junior
#19 — Preprints not allowed: Systems out of date with current publication practice
There is a consistent theme running through almost all of the problems listed above: a lack of understanding of what is being communicated to researchers and of the impact that communication has. The proposed solutions below are based primarily on a more nuanced communication strategy.
A Researcher-Centred Funding System
In healthcare, there has long been the goal of achieving what is often called ‘patient-centred care’. For the majority of health systems this remains an unreached goal, but we all understand the need, and the positive impact on lives and economies that it would deliver.
Our grant systems need to follow a similar path. The public investment in research needs to get to our researchers in a way that is positive, competitive and efficient. Architecturally, the wellbeing of our researchers should be at the very centre of any design and there should be strong support for the system even by those who fail to be funded. So basically the opposite of what we have right now.
Despite recent reviews and redesigns of some of our granting systems, many of the most significant issues still exist.
To be blunt, any system that requires more than a 20-page application for $100k in funding is simply not fit for purpose.
Any system that discriminates based on family commitments, gender, location, background or seniority is not fit for purpose.
What I am proposing below is not a complete redesign of the system (that would take a lot more time than I have in this article). Instead, I am suggesting a few rapid changes that should take care of the majority of the big issues listed above. It is not an exhaustive list, nor will it address every issue raised.
Change #1 — Expression of Interest Model
Over the last 25 years, I have engaged with a lot of grant systems. I’ve been an applicant, a reviewer and a consultant. The most effective schemes I have come across all have a two tiered application process. Typically, this requires an initial expression of interest submission that would be short and sharp, followed by a small selection of projects being invited to submit a more substantial application.
The expression of interest application must have the goal of rapid communication with minimal time spent. Researchers would be aware that some 70–80 percent of these applications will be rejected. These applications should not exceed 10 pages and preferably should be limited to five. With much shorter applications, more reviews can be done for each applicant. Given these are about one-tenth the length of current applications, the goal should be a minimum of 10 reviews each. If you are going to cull 70–80 percent of people, you had better have good statistics to back you up — more reviews help with that. Limits on the number of applications would still need to apply.
Every review must outline 5–10 ways in which the expression of interest could be improved. Preferably this feedback would follow deliberate instructions from the funding agency. It should not be a free-for-all — applicants should have a fair idea of what areas they will get help with. Applications should only need a simple budget during the EOI phase — in fact, budgets should not be a factor in the transition to the next round. The results of an EOI round should be delivered within 6–8 weeks.
The 20–30 percent of applications that make it through the EOI round would be asked to provide ‘complete cases’ for funding. To be clear though, even these should be shorter than those currently being used, preferably limited to 30–40 pages. Similar requirements for feedback would be required by reviewers. All final decisions would be made by expert panels.
Back in 2005 I applied for funding under the State Government of Victoria’s Science, Technology and Innovation program. After an initial 5-page application that resulted in a cull from 157 down to 23, we were all put into a workshop with a professor from the Melbourne Business School. He provided us with guidance on what a good business plan for government looked like. We all learned. We all felt like we were working with the funding agency. At no point did we feel our time was being wasted. Above all, our projects ended up being better designed because of the grant process itself.
Change #2 — Multiple Application Times
One of the benefits of reducing both applicant and reviewer time loads with EOI models is that you can run multiple grant rounds without increased burdens. Currently, there is significant discrimination in our grant systems because application deadlines fall close to school holidays and other significant dates. This can be reduced by offering multiple rounds per year, each of equal value. Ideally three rounds would be offered, but with a rule that you could not apply in consecutive rounds. If only two rounds were offered, this rule should not apply.
The outcomes of each round should be delivered before the next round opens. Having multiple rounds will also spread the ministerial responsibilities out, which hopefully would lead to faster and more consistent turnaround. Numerous rounds would also enable researchers to engage with the grant systems shortly after they start new positions. Currently, staff on 1–2 year contracts that start just after applications are due have to wait almost a year to get their 10% chance of success. Multiple rounds would remove this delay and assist new appointments.
Change #3 — Decouple Ideas and Track Records
In the current system, researchers are chained to the past by the assessment of their track records. Ultimately a system that decouples the assessment of ideas from the assessment of track records would be more effective. Frankly I don’t care where a good idea comes from. It should receive top marks if it is well constructed and argued. The removal of bias should always be the goal and decoupling ideas and people will help at least partially achieve this.
Applications could be readily split and sent to different reviewer groups. The assessments would later be recombined and if there was a need to connect the two areas of assessment this could be done via expert panel discussions.
Change #4 — Relative to Opportunity Calculator
I believe it is time we started working on a system that would calculate a ‘relative to opportunity’ index for use in the review process. The bias in the current system means that researchers have little faith that significant life and career events are taken into account when their track record is assessed. An automated system would not be perfect, but I suspect it would do a far better job than our current ‘reviewer lottery’ and it would level the playing field for assessment. It would need to account for different fields and different types of employees — e.g. clinicians vs academic researchers. Careful design would be required to guarantee that the system itself did not perpetuate further biases.
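To make the idea concrete, here is one purely hypothetical shape such a calculator could take. The formula, the field-average normalisation and every name in it are my own illustration, not a design from any funding agency:

```python
def opportunity_adjusted_output(outputs_per_year, career_interruptions_years,
                                years_since_phd, field_average):
    """Hypothetical 'relative to opportunity' index.

    Scales raw research output by the time actually available (career
    years minus documented interruptions) and normalises against a
    field average, so interrupted or part-time careers are compared
    fairly across fields.
    """
    # Time genuinely available for research (floor avoids divide-by-zero)
    available_years = max(years_since_phd - career_interruptions_years, 0.5)
    raw_rate = sum(outputs_per_year) / available_years
    # 1.0 == typical output for the field, once opportunity is accounted for
    return raw_rate / field_average

# A researcher with 2 years of parental leave in a 10-year career:
index = opportunity_adjusted_output(
    outputs_per_year=[2, 3, 0, 0, 2, 3, 2, 4, 3, 1],
    career_interruptions_years=2,
    years_since_phd=10,
    field_average=2.5,
)  # → 1.0: exactly typical for the field, once the leave is counted
```

A real system would need far more nuance — weighting output types, handling clinical loads, and auditing the formula itself for bias — but even this crude version is more consistent than leaving the adjustment to each reviewer’s judgment.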
Change #5 — Use of Budget Bands
It would appear that almost nobody gets the budget that they ask for from grants. In the case of ARC Discovery programs, the average is about 70%. I don’t see the point of researchers going to great effort to demonstrate specific costing when that is ultimately ignored. I have never understood how institutions put forward proposals and then get told they have to do the exact same job with no negotiation for 70% of the cost.
With this in mind, we could save a lot of time by asking researchers to simply nominate a ‘band’ for funding during the EOI phase. The bands would need to be in increments of $50k–$100k. More detailed budgets could then be required for the more detailed second-stage applications — but even then, if the 70% funding model continues, the value of budget specificity cannot be argued.
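Band nomination is trivial to implement, which is part of its appeal. A minimal sketch, assuming $50k increments (the band width and function name are illustrative only):

```python
import math

def budget_band(requested, band_width=50_000):
    """Hypothetical band nomination: map a requested budget to the
    $50k band that contains it, returned as (lower, upper) bounds."""
    lower = math.floor(requested / band_width) * band_width
    return (lower, lower + band_width)

budget_band(130_000)  # → (100000, 150000): the '$100k-$150k' band
```

Instead of costing every item to the dollar only to have 30% cut anyway, an applicant simply ticks the band their project falls into at the EOI stage.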
Change #6 — Feedback Systems — Review the Reviewers Program
To solve some of the issues with reviews that are unprofessional, lacking field expertise, or potentially biased, we should establish a ‘review the reviewers’ program. In this program, every review generated would need to be read by one other reviewer. If there is a concern with the review, the second reviewer would flag it, and it would then need approval by a third party (possibly even an expert panel) before being sent to the applicant. This would be a simple pass/fail program — if a review was deemed inappropriate, it would simply be removed.
The act of having this program in place would in itself improve the standard of reviews and reduce the damage that reviews can do to researcher self-esteem, mental health and careers. If the time load becomes problematic, it could be applied to a random percentage of reviews each year. Not a perfect system, but it works for the tax office!
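The tax-office-style random audit is the simplest piece to picture. A sketch, with hypothetical names and an assumed 10% audit rate chosen only for illustration:

```python
import random

def sample_reviews_for_audit(review_ids, audit_fraction=0.1, seed=None):
    """Hypothetical random audit: pick a fraction of the year's reviews
    to be read by a second reviewer before release to applicants."""
    rng = random.Random(seed)
    k = max(1, round(len(review_ids) * audit_fraction))
    return rng.sample(review_ids, k)

# Audit 10% of 500 reviews: 50 land on a second reviewer's desk
audited = sample_reviews_for_audit(list(range(500)), audit_fraction=0.1, seed=7)
```

Because every reviewer knows their review *might* be audited, the deterrent effect applies to all reviews while the extra workload applies only to the sampled fraction.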
Change #7 — Transparency and Communication Charter
All grant systems should have charters in place that guarantee transparency in decision making. Despite the elaborate systems in place right now, there is widespread concern regarding fairness. Part of this no doubt comes from the crushing success rates coupled with extraordinary workloads for grant applications. But the lion’s share comes from the opaque nature of the review process and the inability of applicants to improve the standard of their applications from year to year.
Similarly, the effectiveness of communication in the system needs to be actively monitored. In situations where 100-page applications are required, there has been a communication failure. Not only is such length unnecessary, it is also ineffective, because reviewers cannot meaningfully engage with that weight of material. Every aspect of the process should be optimised from a communication perspective: minimise time spent and maximise the quality of information delivered, at every level of the process.
It is the role of our funding agencies to distribute the public investment in research in a competitive, responsible and fair way. The current lack of feedback and empathy in the grant systems means they are failing in this task. Systems that do not have the support of their constituents need to be carefully modified.
There are many barriers to changing these systems. Some of these are significant but given the current negative impact on researcher mental health, I believe the effort is well and truly warranted.
* * *
This article was first published on Medium
Dr Shane Huntington OAM has been providing consulting services in communication and strategy for over 20 years. As a successful broadcaster, business owner, academic and strategist he draws together experience from multiple sectors, offering clients a more detailed and analytical approach than competitors.