Transparent, Personalized and User-Friendly: Rethinking the Grantmaking Process

Britt Neuhaus & Carly Roberts, Program Officers, Overdeck Family Foundation

The challenge:

A common goal unites both nonprofits and funders: the desire, as Michael Etzel and Hilary Pennington argue, to build “successful, resilient organizations” that “help as many people as possible.” Achieving this goal requires that these two parties work together in harmonious, productive, and transparent ways. However, ask any room of education leaders and changemakers about their funding partners, and it will become clear: the grantmaking process is ripe for improvement. Foundations are often guilty of:

  • Imposing one-sided and non-user-friendly application processes.
  • Operating in a transactional fashion, with an emphasis on monitoring progress against a set of linear goals, an approach in direct tension with the way effective organizations need to learn and pivot as they navigate complex social change and dynamics on the ground.
  • Perpetuating unproductive structures by tying grantmaking to individual organizations’ success, specific projects, and rigid annual cycles.

As a first step toward addressing such shortcomings in our own work, we surveyed 52 of our 2016 grantees about their experiences with our grant process (71% responded, of whom 84% were new grantees). Only 65% of surveyed new grantees felt we communicated information consistently throughout the grant process, and only 42% found the time required by the process minimally burdensome for the size of the grant, meaning more than half found it at least somewhat burdensome. This helpful feedback validated the need for grant process improvements.

While examining these clear pain points on the grantee side, we recognized that grantees are not the only end-users in the grantmaking process; our Program Officers (or POs) are also important end-users and face challenges in areas such as:

  • Process efficiency: POs struggle to efficiently gather the most important details needed to support our decision-making. From 2015 to 2016, we saw a 9% increase in grantees who found our diligence process to be at least somewhat burdensome. In a survey of our POs conducted before any grant process improvements began, half rated the efficiency of their process at 2 out of 5, and none rated it 5 out of 5. Efficiency is clearly a pain point on both sides of the process.
  • Maximizing dollars: How do we use our diligence to maximize the chances of selecting organizations that will drive the greatest impact?
  • Getting the real story: POs find it challenging to have transparent, honest conversations about the work with nonprofit organizations, which understandably feel pressure to paint as rosy a picture as possible.

To pinpoint where to focus, POs completed “journey maps” of their own pain points. They also leveraged our annual survey and our experience to identify grantee pain points. We color-coded the resulting data by phase of our grantmaking process and identified the diligence process, including the development of grant goals, as a high-leverage area for change and improvement.

As a newer foundation, we have always felt we have an opportunity to think outside the box. Though we had solidified a basic grantmaking “process” during our first two years of operation, we were eager to identify “win-win” opportunities that would resolve pain points on both sides and encourage a more open, iterative, learning-oriented funder-grantee dynamic. So, we started fresh.

What we learned:

With a deep belief that greater empathy for and understanding of our end-users would help us identify productive solutions for grantees and POs alike, we contracted with the third-party design firm Public Policy Lab (PPL), which uses human-centered design principles to solve challenges.

While we survey our grantees annually to gauge satisfaction, we recognized the power dynamic at play and felt that an outside organization, asking the right questions confidentially, could help us get the most candid answers. PPL conducted 10 grantee interviews and identified the top pain points grantees experienced. PPL pointed out that grantmaking is similar to dating: in searching for that perfect match, one ideally wants to ask the right questions at the right phase in the relationship to understand whether the partnership will ultimately work. Following the interviews, PPL highlighted three common wishes from our grantees: user-friendliness, personalization, and transparency.

  • User-friendliness (i.e., “Be more like TurboTax”: take a necessary and at times annoying process and make it as easy to use as possible): When it comes to designing a user experience, we learned that taking a highly empathetic approach is key to identifying the “annoying” parts of our process. For example, we used to ask potential grantees to submit information in PowerPoint, aligned with how we present proposals to our board; not only was this difficult and time-consuming for many users, but it also created inefficiencies for organizations that write proposals in other formats. We also recognized that many organizations collaborate internally on proposals, so making collaboration more seamless would be helpful. Finally, we heard the need to streamline requests for information: fewer asks and fewer questions.
  • Personalization (Adapt your process to the organization, not vice versa): Personalization was the most exciting opportunity for us, as it was clear early on that each grantee is a little different. We learned that the most salient difference between grantee organizations is the type of work they do; despite our focused funding scope in domestic education, there are real differences across organizations doing direct service, research, advocacy, and field-building work. For example, it may not make sense to ask a researcher situated within a university to provide an organizational budget or strategic plan, and a policy/advocacy organization may struggle to isolate its work into discrete projects. We recognized that a one-size-fits-all diligence approach wouldn’t work.
  • Transparency (Have the best process by having the most transparent process): As with any good dating profile, organizations wanted upfront information that would help them understand whether we would be a good match. What makes the Foundation’s approach different? What are we looking for? What are our values? When we surveyed grantees in 2016, only 66% found our communication to be “very consistent.” Beyond that, because we don’t have a standard process or application, some felt unclear both about how long the diligence process would take and about our “relationship status” at any given time.

What we’ve done so far:

Based on these insights, we have begun rapidly prototyping new processes, taking a “minimum viable product” approach and honing them just enough to get feedback from our team and a few grantees. Some examples of what we are testing:

  • A more user-friendly tool: We are testing Google’s suite of interactive collaboration tools (G Suite) to design a transparent and easy-to-use “one-stop shop” for our diligence process. In one document, we outline all of the steps of our process and the information we’ll need to collect at each stage, and we allow organizations to input their information directly.
  • Customized templates: We tailored this Google Doc prototype into three separate versions, personalized by the type of work organizations do (direct services, research, and advocacy). Our POs then further customized them for each organization. We’re collecting feedback from our team and from organizations that have (sometimes unwittingly) volunteered to be our first testers, and we are using internal data to understand how the process changes are working, making adjustments as we learn. We’ve also tried to create more flexibility when setting grant expectations by introducing a new template that leaves more room for qualitative details and bigger-picture goals, and which is likewise customized by grantee type (related to the point above).
  • More transparency in communicating our values: We created a set of customized forms that assess organizational attributes as a way to communicate, early in the process, what we feel is essential to understanding whether we are a good match. We hope this will drive more efficient decision-making about whether or not we plan to partner, while making our priorities and values clear to potential partners.

So far, we’ve learned:

  • Google’s G Suite isn’t quite right. The product’s ease of editing and built-in collaboration features have been helpful. However, we’ve also heard that the live collaboration functionality is a double-edged sword; some grantees felt nervous drafting responses directly in the document while their PO could be “watching,” and instead valued the opportunity to edit internally before sharing information with the Foundation. Ultimately, we’ll likely look for an online tool that combines the best of Google Docs with an even more user-friendly interface.
  • Change is hard. We’ve tested our new process with new potential grantees, so we couldn’t measure a change in their perspective. But for our staff, the new initiatives represented a big shift. Limited bandwidth, combined with existing habits, made the adjustment difficult, even for an iterative start-up organization like ours. We’ve started supporting the team in using the new tools in various ways, ranging from team-wide discussions to one-on-one coaching. In other words, we are providing personalized support for a more personalized grantmaking approach.
  • We can use data to determine the most important diligence information. We take our due diligence responsibility seriously and, as a result, request a great deal of information from potential grantees. We’re just beginning to analyze our data to understand the connections between different organizational attributes and grant performance, so that we can prioritize and collect only the most essential information; a simple ranking analysis like the sketch below is one way to start. We hope that our new, more cohesive process of data collection will help us do this.
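
To make this concrete, here is a minimal sketch of the kind of analysis we have in mind, written in Python with pandas. The attribute names and values are hypothetical placeholders rather than our actual diligence data; the idea is simply to rank the information we collect by how strongly it tracks grant outcomes.

import pandas as pd

# Toy illustration only: these diligence attributes and values are
# hypothetical, not the Foundation's actual records.
grants = pd.DataFrame({
    "has_strategic_plan": [1, 0, 1, 1, 0, 1, 0, 1],
    "years_operating":    [3, 1, 8, 5, 2, 10, 1, 6],
    "budget_millions":    [1.2, 0.4, 3.5, 2.0, 0.8, 5.1, 0.3, 2.7],
    "met_grant_goals":    [1, 0, 1, 1, 0, 1, 0, 0],  # outcome of interest
})

# Rank each attribute by the strength of its association with the outcome;
# weakly associated questions become candidates to cut from our requests.
ranking = (
    grants.corr()["met_grant_goals"]
    .drop("met_grant_goals")
    .abs()
    .sort_values(ascending=False)
)
print(ranking)

A simple correlation is only a starting point, of course; with more grants and richer outcome measures, a more careful analysis would be warranted.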

What’s next:

As we continue this improvement journey, we expect to move out of “beta mode” gradually, making upgrades along the way. While we test out the new process, we will also:

  • Continue collecting both quantitative and qualitative feedback from our grantees and our internal team to drive ongoing updates and changes.
  • Gain greater insight into the aspects of diligence that matter most. We hope to find the “must-ask” questions and the highest-leverage pieces of information, the ones that correlate most strongly with grantee success, so that we can streamline our process even further.
  • Determine the best tools to support our new approach to grantmaking, or perhaps even build our own.
  • Extend what we’ve learned into other areas of our work, such as grant monitoring and evaluation.

Through measuring our performance and processes via grantee and PO interviews, and through experimenting with improvement efforts, we’ve learned a great deal about designing with the end-user in mind and taking iterative steps toward improvement. Our learning journey is by no means over; in fact, it’s just beginning. We hope some of the lessons we’re learning along the way will prove useful to other organizations as well.

Overdeck Family Foundation would like to extend deep thanks to our partners who took the time to share their valuable feedback in our annual survey, as well as those who went above and beyond by interviewing with Public Policy Lab. We appreciate your time and support as we work to improve!

Additional thanks to Allie Steel, Anu Malipatil, and Eliot Walsh for their contributions and editorial support.

Advancing the Field of Education Research

Anu Malipatil, Director of Education, Overdeck Family Foundation

Despite decades of efforts to study the effectiveness of interventions, the education field has yet to boast a strong, clear evidence base about what works and what doesn’t, reliably and at scale. However, this is no reason to despair. Other sectors, such as medical research, have historically faced similar challenges and have built strong evidence bases in relatively short order. A renewed focus on shorter-cycle education research can strengthen the existing evidence base, helping the field understand which measures matter and which interventions work in which contexts.

Indeed, making strides toward these goals is a key reason why the Overdeck Family Foundation is co-sponsoring the 2016 Carnegie Foundation Summit. Ultimately, a renewed focus on improvement science will empower our teachers to apply more empirical, effective practices in the classroom.

Diagnosing the problem

Our field’s current ignorance hasn’t arisen from a lack of effort. Numerous research entities (including the Institute of Education Sciences, the National Center for Education Statistics, the National Research Council, and the National Science Foundation), not to mention private philanthropies and research firms like RAND and MDRC, have deployed significant amounts of time and money toward finding strong evidence of which interventions truly work. So far, however, the results have been underwhelming.

Several factors have inhibited progress. Despite investments from the organizations noted above, the field suffers from low overall R&D spending. Education receives less than 6% of the federal budget, and less than 1% of that education spending goes to research; even that paltry amount remains under pressure.

Additionally, the field suffers from a narrow definition of “research” that leads to excessively long feedback loops and impedes continuous improvement among stakeholders. Meanwhile, the field as a whole lacks consensus about which measures and metrics—not just of accountability, but also of learning itself—matter most. Finally, the Department of Education has attached insufficient value to serving as a central clearinghouse for disseminating knowledge and proven best practices.

Taking a cue from medical research

Fortunately, the experience of the medical research community from the mid-20th century onward offers a potential roadmap for education researchers. While the two fields have innumerable differences, they also share a number of parallels: both encompass many layers of stakeholders, complicated relationships among those stakeholders, and the complex behavioral changes required to drive improvement. Yet the medical field has advanced extremely rapidly in the last few decades, while the education field has comparatively languished.

How did medical research make such great strides so quickly? In his book The Checklist Manifesto, surgeon and author Dr. Atul Gawande deftly charts the medical field’s progression from a state of relative ignorance (little research and little application of that research), through what he calls “ineptitude” (applying a newly acquired research base incorrectly), and finally towards “eptitude” (applying the knowledge we have consistently and correctly).

Gawande notes that as recently as the 1950s, for example, mistakes by doctors were quite frequent, and the field generally experienced high mortality rates among patients.  Back then, doctors had virtually no idea how to prevent or treat heart attacks; the field did not yet know many of the contributing factors, such as high blood pressure, and the first safe medication to treat hypertension didn’t come into common use until the 1960s. Today, we know of at least a dozen ways to reduce the likelihood of having a heart attack.

Over the last six decades, the arc of knowledge development in the medical field has transformed how its participants engage and interact. Most importantly, it has reduced mortality rates significantly.  Simply put, the field has moved from a state of ignorance, through the “ineptitude” stage, and now towards “eptitude.” Making this transition entailed several interrelated efforts, including:

  • Significant R&D investment: Research is expensive and can take a long time to bear fruit, but it is the foundation upon which all further advances rest. Empirical testing tells us what is true and what isn’t, and it paves the way for effective new practices that produce real results. Over the past 60 years, public (as well as private) spending on medical research has skyrocketed globally. For example, U.S. National Institutes of Health appropriations have grown from roughly $53 million in 1950 to about $30 billion as of 2014.
  • Shortening the timeline of research application: Shorter research feedback loops are a key goal of improvement science. Once a problem—be it at the micro or systemic level—is identified, practitioners or researchers work to determine the right metrics through which to understand it, and then, using those metrics, they can develop solutions rapidly and encourage necessary behavioral shifts. Gawande heralds the use of the checklist as the field’s most recent advancement.
  • Adopting clear measures of success: By agreeing to common, rigorous standards of measurement, medical researchers can more clearly understand whether various interventions work; it’s simply good science. Additionally, researchers can mine numerous data sources, such as follow-up visits and insurance claims, to measure the efficacy of an intervention.
  • Setting up a knowledge clearinghouse: The American Medical Association has taken on the valuable role of a central convening agency that facilitates knowledge dissemination and best-practice sharing across the field.

A path forward for education research

Although significant differences exist between the medical and education research fields, we should seek to emulate some of the key practices highlighted above. The good news is that on some fronts, policy is moving in the right direction. The Every Student Succeeds Act, for example, prioritizes research throughout the bill. And in 2002, the Department of Education began building a rigorous, scientific evidence base to inform researchers, educators, and policymakers. Known today as the What Works Clearinghouse, this online database now offers access to more than 700 publications and more than 10,500 reviewed studies. Unfortunately, however, educators, district leaders, and state agency leaders are often either unaware of it or unsure how to use the information it contains.

Clearly, if we want to progress toward “eptitude,” we must do more. The Overdeck Family Foundation has built its agenda and strategy around many of the practices listed above. We believe that the philanthropic sector can help encourage shorter research cycles with closed feedback loops and strengthen education research’s existing evidence base to help clarify which metrics are most important. Finally, we can advocate for broader and more effective knowledge dissemination. In doing so, we aim to help educators act with “eptitude” to the maximum extent possible.

The Overdeck Family Foundation is proud to be supporting the work of the Carnegie Foundation for the Advancement of Teaching. As sponsors of this year’s Summit, we look forward to advocating for the advancement of improvement science and harnessing the power of innovation in education research. Many challenges lie ahead for the field, but we’re excited for the journey. We invite you to partner with us to amplify the work and drive the progress we hope to see in the decades to come.