Conservation Organization Improves Grant Review Process with LLMs

Discover how a conservation organization leveraged an LLM software platform to improve its grant review process.


SYNOPSIS

A philanthropic organization focused on conservation sought to streamline its manual, labor-intensive grant review process by implementing Generative AI. Using Composable's LLM platform, the organization automated the initial evaluation of grant proposals, reducing the workload on volunteer reviewers and ensuring more consistent, objective scoring. This solution addressed challenges like time-consuming reviews, subjectivity, and bottlenecks in decision-making.

Since adopting the AI platform, the organization has seen up to 7x efficiency gains, improved consistency in evaluations, and increased satisfaction among volunteers. The platform has allowed for faster funding decisions, enhancing the organization's ability to support impactful conservation projects more efficiently.


A leading philanthropic organization dedicated to funding, distributing, and measuring national conservation grants sought to explore Generative AI to improve the labor-intensive, manual grant review and issuance process.

As with any commercial or non-profit organization, implementing Generative AI requires clear responsible-AI safeguards, governance, and transparency of outcomes (hint: LLMs are particularly good at ‘showing their work,’ or explaining why specific scores or outcomes were reached). It also requires active collaboration and participation from the business leaders who best understand the process end-to-end, in this case the Program Directors.

With thousands of submissions for each program, the organization relied heavily on a team of expert scientists, economists, and community leaders to participate as volunteer reviewers. Each year, these volunteers signed up for training, met with the teams, and joined a group of 3-5 reviewers per proposal, where they manually reviewed and scored each proposal against a rubric or published guidelines for that grant program.

Each grant program across the organization had a lengthy, labor-intensive review process dependent on the designated volunteer reviewers. Like all processes where humans, spreadsheets, and hundreds of pages of domain-specific content intersect, this time-consuming process was prone to bias, error, and variability of results.

Key Objectives Sought:

  1. Apply foundational structure for Responsible AI processes - governance, security, safeguards, and controls
  2. Reduce time-consuming manual steps (humans, spreadsheets, and notes taken from reading hundreds of dense technical pages)
  3. Improve the quality and speed of the review cycle
  4. Improve the experience for the expert volunteer grant reviewers
  5. Create a force multiplier for the organization’s future growth and scale

Challenges & Pains

  • Scale and Complexity. The sheer volume of submissions in any year’s program could overwhelm the team. Each proposal required scientific, project, and budgetary specificity, making it complex and time-consuming to read, review, and compare. Reviews and scores were based on a complex rubric, which became a significant bottleneck in decision-making. Once scores were applied, explaining them across such varied project proposals was difficult.

  • The review process was time-consuming, often leading to requests to extend funding timelines. Delays in grant approval decisions equated to delays in funding, which ultimately impacted many aspects of these projects (staffing, equipment, partnerships with other research organizations, contingencies based on grant approvals, etc.)

  • The scoring rubric covered various criteria such as project impact, feasibility, budget justification, and alignment with the organization’s core mission. It required reviewers to invest significant time to ensure accurate and thorough scoring.

  • The lengthy evaluation cycles per program, across hundreds of programs, risked burning out the volunteer reviewers, who were often experts in their fields with their own programs, publications, and other commitments. Ongoing recruitment of highly skilled, regionally focused reviewers was needed to maintain a sufficient pool. Varying degrees of participation and experience also contributed to variability in evaluations.

Solution

Recognizing the need for a more efficient and reliable evaluation process, the organization decided to implement an LLM software platform to automate the initial review and scoring of grant proposals.

Why the Organization Chose Composable
  • Scalability
    The platform offered the ability to process large volumes of proposals quickly and efficiently. By automating the initial evaluation, the organization can now handle an increasing number of submissions without overwhelming its human evaluators.

  • Consistency & Objectivity
    LLM tasks are augmented with the organization’s specific scoring rubric using Retrieval-Augmented Generation (RAG). This helps eliminate the subjective variations that had previously plagued the manual process.

  • Prompt and Metadata Refinement
    While prompts, workflows, and metadata can be reused across similar programs, Composable also allows each program to differentiate its prompts and structure, optimizing for that program’s specific outcomes and goals.

  • Time Efficiency
    The platform significantly reduced the time required for proposal evaluations. By expediting decision-making and allocating funding more rapidly, the organization can positively impact conservation program efforts earlier in the year.

  • Enhanced Decision-Making
    The platform provided detailed analytics and insights based on the automated scoring, allowing the organization’s decision-makers to focus on the most promising proposals. The human evaluators can now spend their time on high-value tasks, such as conducting in-depth reviews of top-scoring proposals and making strategic funding decisions.
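To make the rubric-augmented scoring concrete, here is a minimal sketch of the idea in Python. This is an illustration only, not Composable’s actual implementation: the rubric contents, the `retrieve_criteria` and `build_scoring_prompt` helpers, and the toy keyword-overlap retriever are all hypothetical stand-ins, and the LLM call itself is omitted. A production RAG pipeline would use embedding-based retrieval and send the assembled prompt to a model.

```python
# Hypothetical sketch of rubric-augmented proposal scoring (not Composable's
# implementation). Rubric criteria are retrieved by simple keyword overlap
# and folded into a scoring prompt that would be sent to an LLM.

RUBRIC = {
    "project_impact": "Does the project deliver measurable conservation outcomes",
    "feasibility": "Is the plan achievable with the stated team and timeline",
    "budget_justification": "Are costs itemized and proportionate to the scope",
    "mission_alignment": "Does the proposal align with the organization's core mission",
}

def retrieve_criteria(proposal_text: str, k: int = 2) -> list[str]:
    """Rank rubric criteria by word overlap with the proposal (toy retriever)."""
    words = set(proposal_text.lower().split())
    ranked = sorted(
        RUBRIC.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in ranked[:k]]

def build_scoring_prompt(proposal_text: str) -> str:
    """Fold the retrieved rubric criteria into a prompt for an LLM scorer."""
    criteria = retrieve_criteria(proposal_text)
    lines = [f"- {name}: {RUBRIC[name]}" for name in criteria]
    return (
        "Score the proposal 1-5 on each criterion and explain each score:\n"
        + "\n".join(lines)
        + f"\n\nProposal:\n{proposal_text}"
    )

prompt = build_scoring_prompt(
    "We request funding for a wetland restoration project with an itemized "
    "budget and a two-year timeline."
)
print(prompt)
```

Asking the model to explain each score, as the prompt does here, is what gives reviewers the ‘showing their work’ transparency described earlier.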

Results After Leveraging Composable for Proposal Evaluation & Scoring

Since implementing Composable, the organization has seen significant improvements in its grant proposal review process:

  • Efficiency Gains
    The organization has seen up to 7x efficiency gains, allowing it to make funding decisions more quickly.

  • Improved Consistency
    The automated scoring process has virtually eliminated the inconsistencies that previously arose from human subjectivity.

  • Volunteer Satisfaction
    The volunteers now focus on high-impact tasks, reducing their workload and increasing their overall satisfaction with the process.

  • Better Outcomes
    The organization now provides early feedback to applicants, improving submission quality, informing routing decisions for evaluations, and more.

Conclusion

The organization’s adoption of Composable has transformed its grant proposal review process. By addressing the challenges of volume, consistency, and time constraints, the organization has improved its ability to support innovative community projects efficiently and effectively. This customer story underscores the value of integrating advanced technology into business operations to enhance decision-making and drive greater impact.

TAKE THE NEXT STEP

How to Evaluate, Select, and Implement LLM Use Cases

This comprehensive how-to guide covers strategies, tactics, and best practices for selecting and deploying LLM use cases. It explores eight real-world use cases and explains the potential impact of LLM-powered tasks on business operations, efficiency, and innovation.
