USAID/BHA proposal development: The next chapter featuring AI | Opinion

By Ali Al Mokdad

The impact of artificial intelligence on proposal development and project implementation is inevitable, and we have already witnessed generative AI tools like ChatGPT assisting proposal writers, grant managers, and program professionals in NGOs. This impact is also reshaping the way that donors assess proposals and challenging their systems and processes.

This article delves into the possible future of donor and AI interaction — one that may or may not happen or could even go beyond what’s anticipated here. The focus on the Bureau for Humanitarian Assistance (BHA) is deliberate: the United States Agency for International Development (USAID), the driving force behind BHA, has shown a keen willingness to embrace innovations that bolster the development sector, despite having to navigate certain challenges and negative impacts. USAID’s technological approach, encapsulated in the phrase ‘Taking smart risks to transform development,’ is a testament to this forward-thinking mindset.

Interestingly, USAID has had AI on its radar for several years and had already established an AI action plan and a digital action plan before the surge in popularity of generative AI tools such as ChatGPT. These plans and USAID’s ethical guidelines reveal strategic foresight and suggest that USAID, and by extension BHA, could be among the first to either leverage generative AI in proposal development or establish guidelines to govern its use.

So, what might the future of USAID/BHA proposal development look like?

USAID/BHA regulations and AI

USAID is already seeking information to inform an AI in Global Development Playbook, and it is unlikely that BHA will implement overly restrictive regulations on the use of generative AI. Instead, it is anticipated that BHA will shape its AI policies and guidelines around foundational principles such as those outlined in the Blueprint for an AI Bill of Rights. These principles include ensuring safe and effective systems, protecting against algorithmic discrimination, upholding data privacy, providing notice and explanation, and offering human alternatives, consideration, and fallback options.

For NGOs, this could translate into a set of best practices beginning with a ‘do no harm’ mindset, routinely analyzing risks in a context-aware manner, engaging with those impacted by AI interventions to adapt based on their experiences and potential consequences, and thinking critically about the representativeness and relevance of data. It also involves ensuring human oversight where necessary to prevent ‘automation bias’, using AI tools only when these align with development objectives and offer a clear advantage, understanding how AI applications fit within the broader digital ecosystem, and assessing the risks and harms specific to the context or population.

Furthermore, the USAID/BHA approach is likely to incorporate elements from NIST’s AI Risk Management Framework, highlighting the need for a secure-by-design philosophy for AI and machine learning systems intended for operational deployment, work that USAID has already begun.

In simpler terms, BHA’s regulatory focus is likely to revolve around the ethical use of AI, encompassing aspects of rights, opportunities, and access.

BHA’s near-future challenges

While BHA is poised to recognize the value of generative AI, it will also face several challenges.

One significant risk is a high number of applications potentially overloading BHA’s platforms. As submitting applications becomes easier and more organizations are able to produce high-quality writing that aligns with BHA’s strategy, there is likely to be a surge in the quantity of applications.

BHA could also face difficulties in assessing the operational capacity and uniqueness of organizations’ programs. AI tends to generate or edit narrative text that appears to reflect high-quality programming, which can make it challenging to assess an organization’s actual capacity.

Another challenge involves the rapid pace of AI development, which could outstrip BHA’s ability to advise its partners. Its approach will need to be sufficiently general to suit various organizational profiles, as its partner NGOs will vary in their readiness to use AI and in their digitalization progress.

BHA will also face several internal dilemmas. Will it use other AI to assess proposals? Could it incorporate all its rules and regulations into a chatbot and deploy this to partners for immediate assistance with questions of compliance? Will it introduce restrictions on its platforms and websites to block AI and robotic automation? Is there a plan to request information that highlights an organization’s operational capability and consider this as a major criterion? These are among the many questions regarding BHA’s methods of assessing proposals and supporting its partners. A major consideration will be how AI will affect its internal human resources: AI experts and developers will be needed to help answer many of these questions, while the scope of some roles may shift from compliance support to more operational support.

It is expected that USAID will increase field visits and possibly launch more funding initiatives covering AI and digitalization, while engaging more with digitally focused NGOs and with digitalization working groups and platforms such as NetHope, among others.

NGOs: Opportunities and concerns in leveraging AI in proposal development

As the integration and use of AI in NGO operations increase, this opens a window of opportunity while simultaneously giving rise to certain concerns.

Adopting AI to analyze calls for proposals enables NGOs to build a more targeted and efficient approach, producing proposals and project designs that closely match the specific needs of the community, the funding opportunity, and the operational scope and capacity of the organization.

AI will help proposal writers and grant management professionals speed up tasks such as go/no-go decisions, preparing minutes of meetings, planning proposals, tracking grants, and reviewing grant applications, making these tasks much more manageable and less time-consuming, particularly in terms of writing and editing.

Another area where AI will prove its worth is in automated reporting. The auto-generation of reports, a task that usually requires considerable time and effort, will be streamlined and improve both efficiency and accuracy. In addition, the quality of reports, proposals, and grant management documentation will be substantially improved. AI will also play a role in identifying risks and flagging contractual issues during implementation thus helping NGOs to achieve proactive and informed grant and contract management.

However, this shift towards AI integration is not without its challenges. A major concern is the change in skill requirements for professionals in the field. With AI taking over routine tasks, the ability to effectively prompt AI tools and manage data could become a primary skill. The reduction in administrative tasks could also lead to a significant shift in the focus of these roles: instead of concentrating solely on grants and contract management, there may be a move towards donor engagement, fundraising, and advocacy/communication.

To be prepared for this change, investment in data management and raising awareness are essential.

Given the significant role that data plays in shaping AI models, data infrastructure becomes a key component of AI ecosystem development and its future use. Data infrastructure includes all the technology, rules, standards, people, and activities needed to handle, share, use, take care of, and keep data safe and up-to-date. This includes not only traditionally structured data collection systems, such as monitoring, evaluation, accountability, and learning (MEAL) systems and business process or Enterprise Resource Planning (ERP) systems, but also other components such as policies, guidelines, standards, SOPs, etc.

AI depends on investments in the quality, security, representativeness, and interoperability of data sets, as well as the open, inclusive, and secure digital systems that support them. Organizations should therefore build their AI strategies while also developing privacy and data protection policies and data-sharing agreements that promote innovation, building the appropriate methods and skills to securely handle, store, and share data, and remaining aware of how data affects AI systems.

When investing in data management, AI strategy, and governance, it is important to raise awareness regarding data protection and the ethical use of AI, and to invest in the capacity building of staff.

Final thoughts

As USAID/BHA proposal development enters a new chapter with AI, along with many other areas, it is important to explore these tools, understand their benefits and risks, and perhaps even reach out to a USAID/BHA focal point to ask about the availability of an AI playbook or if one is on the horizon.