Why Some High-Scoring i3 Rural Projects Did Not Receive Funding


Last Updated: October 27, 2011

This article appeared in the October 2011 Rural Policy Matters.

There were many disappointed “near misses” among the applicants in the first round of Investing in Innovation (i3), the U.S. Department of Education’s competitive grant program.

Overall, 212 proposals earned a score of at least 80 points (out of a possible 100). Those that were not funded have been dubbed “Tier 2” applicants, meaning they scored high, but not as high as the Tier 1 applicants who received funding.

One hundred of these applicants, nearly half of those scoring at least 80 points, claimed to include rural schools in their proposed projects. Of those 100, 19 scored high enough to be among the Tier 1 applicants that actually received funding, and another five were found ineligible for funding for reasons unrelated to their proposals’ scores. That left 76 applicants that claimed to serve rural areas and scored highly, but not quite highly enough to be among the final winners of the competition.

Among the rural projects that did receive awards (see Taking Advantage, a Rural Trust report on the issue), a disturbingly high percentage were not authentically rural in origin, scope, or proposed work; they claimed rural activity in order to win bonus points. That may or may not be the case with the Tier 2 rural applicants.

Nonetheless, we wanted to know why these Tier 2 rural proposals did not make the grade. That is, on which of the seven criteria did they lose the most points? And what effect did an arcane “standardized scoring” system, which adjusted a proposal’s final points based on how “easy” or “difficult” its reviewers were on other proposals, have on the outcome?

So we asked the Department of Education to send us the five reviewers’ score sheets for each of these 76 proposals. The Department obliged, but withheld the reviews for 17 proposals because those applicants claimed that public disclosure would reveal proprietary secrets contained in the proposals and discussed by the reviewers. We decided that the remaining 59 sets of reviews would give us substantial insight into the Tier 2 problem.

Of the 59, 38 are in the “Validation” grant category, larger projects eligible for up to $15 million in federal funding and focused on large-scale evaluation of innovations with some existing evidence of effectiveness. The other 21 are “Development” projects, smaller (up to $3 million) and focused on innovations for which there is little evidence of effectiveness but which are based on a plausible hypothesis that suggests they will be effective.

Preliminary Findings

We are analyzing the score sheets and will selectively read the review narratives in detail to gain insight into why these applicants failed. At this point, we can report several clear findings.

The two criteria that “cost” these proposals the most points were the “research evidence” supporting the validity of the proposed innovation and the quality of the “evaluation plan” for the proposed project.

For Validation proposals, these two criteria were worth up to 30 of the 100 possible points (15 points each), yet they accounted for 40% of the points these applicants lost. For Development proposals, the damage was more severe: the two criteria combined were worth a potential 25 points (research up to 10, evaluation up to 15), but accounted for 69% of all the points these applicants lost.

On the research criterion, Tier 2 rural applicants in the Development category were awarded only 51% of the possible points; on the evaluation criterion, only 43%. The paucity of rural education research and the lack of rural education expertise among the reviewers probably contributed to the low scores on these two criteria.
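As a rough illustration of the arithmetic behind these figures, the sketch below computes each criterion’s share of all points lost across a set of score sheets. The data structure and the numbers are hypothetical; the criterion caps simply mirror the Development category described above (research 10, evaluation 15, out of 100 total).

```python
# Hypothetical illustration of the points-lost arithmetic described above.
# Caps mirror the Development category: research evidence (10), evaluation
# plan (15), and all other criteria combined (75), for 100 total points.
CAPS = {"research": 10, "evaluation": 15, "other": 75}

def share_of_lost_points(score_sheets):
    """Return each criterion's share of all points lost across proposals."""
    lost = {c: 0.0 for c in CAPS}
    for sheet in score_sheets:  # one dict of awarded points per proposal
        for criterion, cap in CAPS.items():
            lost[criterion] += cap - sheet[criterion]
    total_lost = sum(lost.values())
    return {c: lost[c] / total_lost for c in lost}

# Invented example: two proposals scoring 85 and 80 points overall.
sheets = [
    {"research": 5, "evaluation": 7, "other": 73},
    {"research": 6, "evaluation": 6, "other": 68},
]
print(share_of_lost_points(sheets))
# {'research': 0.257..., 'evaluation': 0.485..., 'other': 0.257...}
```

In this invented example, the research and evaluation criteria hold only 25 of the 100 possible points but account for roughly 74% of the points lost, the same kind of disproportion the Tier 2 score sheets show.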

The effects of standardized scoring were mixed, but they had limited impact on the final grant-making decisions.

Sixteen Validation projects were downgraded by standardization, while 22 were upgraded. The biggest loss was 12.3 points, probably enough to take EdVisions from success to failure in the funding competition. This is the clearest case of an impact of standardizing scores. The biggest gain from standardization was 14.3 points, garnered by both Area Education Agency 11 and the Stanislaus County Office of Education, but neither boost was big enough to move its proposal into the award range.

Eight Development proposals were downgraded by standardization, while 13 were upgraded. The biggest loss was only 2.7 points, and even without that loss the project would have fallen short of the points necessary to receive funding. The biggest gain was 12.3 points, by the Northwest Service Cooperative, but it was not enough to put the proposal in the winner’s circle.
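The article does not spell out the Department’s standardization formula, but adjustments of this kind typically rescale each reviewer’s raw scores against that reviewer’s own mean and spread, so that a proposal is neither penalized for drawing harsh reviewers nor rewarded for drawing lenient ones. The sketch below is a minimal, hypothetical z-score version of that idea; the function, the target mean, and the target spread are our assumptions, not the Department’s actual method.

```python
from statistics import mean, pstdev

def standardize(raw_scores_by_reviewer, target_mean=80.0, target_sd=5.0):
    """Hypothetical z-score standardization: rescale each reviewer's raw
    scores to a common mean and spread, then average per proposal.
    An illustration only, not the Department's actual formula."""
    adjusted = {}
    for reviewer, scores in raw_scores_by_reviewer.items():
        mu, sigma = mean(scores.values()), pstdev(scores.values())
        for proposal, raw in scores.items():
            z = (raw - mu) / sigma if sigma else 0.0
            adjusted.setdefault(proposal, []).append(target_mean + target_sd * z)
    return {p: mean(vals) for p, vals in adjusted.items()}

# Invented example: a harsh reviewer (A) and a lenient one (B).
raw = {
    "A": {"p1": 70, "p2": 75, "p3": 80},
    "B": {"p1": 85, "p2": 90, "p3": 95},
}
print(standardize(raw))  # both reviewers agree once rescaled to one scale
```

In this invented example, reviewer A scores everything low and reviewer B scores everything high, yet after rescaling their ratings of each proposal coincide; an adjustment of this kind could produce the up- and downgrades reported above.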

Read more from the October 2011 Rural Policy Matters.