The Legal and Ethical Risks of Using AI in Nonprofit Grant Reporting
Artificial intelligence (AI) has swiftly entered our personal and professional lives, reshaping how we live and work. As with any new technology, this means new opportunities, but also new risks. The nonprofit sector’s use of AI in grant reporting and program analysis is no exception.
For many nonprofits stretched thin on staff and resources, AI’s ability to make grant reporting more efficient and effective is extremely appealing. As nonprofits consider integrating AI into operations, it is critical to understand the benefits, as well as the legal, ethical and operational risks.
3 Risks of Using AI for Grant Reporting
Here are key areas of risk for nonprofits to consider when using AI for grant reporting and program analysis.
1. Mishandling Sensitive Internal, Donor and Client Data
AI tools rely not only on publicly available data but also on user-provided inputs to generate content and insights. If a nonprofit enters sensitive information, such as personally identifiable health, personnel or financial information, into an AI platform without proper safeguards, it may inadvertently violate confidentiality agreements, privacy obligations or applicable federal and state laws.
Compounding the problem, this sensitive internal information about donors, clients or employees could become part of the AI tool’s future responses to other users and surface in unpredictable ways. The nonprofit may have little to no ability to remedy such exposure, particularly if staff entered the information into a tool that does not let users control how their inputs are used.
2. Bias Amplification in Outcome Analysis
When your team is considering AI for program evaluation, it is important to understand that AI tools generally reflect and reinforce existing biases in their datasets, particularly when analyzing program outcomes across demographics. If the historical datasets underrepresent certain groups or reflect structural inequities, AI-generated reports can appear data-driven while actually misrepresenting the experiences and outcomes of the marginalized populations the nonprofit intends to help.
These biases are especially problematic in areas like education, health services or criminal justice-related programming, where systemic inequities are often embedded in the very data being analyzed. If these flawed patterns are treated as objective benchmarks, they can distort outcome comparisons, perpetuate inequitable funding decisions or obscure real disparities in program reach and effectiveness.
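For teams that analyze their own outcome data with AI tools, a simple representation check before analysis can surface this risk early. The following is a minimal sketch, assuming a pandas DataFrame of outcome records with a demographic column; the column name and the 5% threshold are illustrative assumptions, not a standard drawn from this article.

```python
# Illustrative pre-analysis check: flag demographic groups that are too
# sparsely represented in program outcome data to compare reliably.
import pandas as pd

def flag_underrepresented_groups(df: pd.DataFrame,
                                 group_col: str = "demographic_group",
                                 min_share: float = 0.05) -> pd.DataFrame:
    """Return each group's share of records and whether it falls below
    the chosen threshold (here, 5% of the dataset)."""
    shares = df[group_col].value_counts(normalize=True).rename("share")
    report = shares.to_frame()
    report["underrepresented"] = report["share"] < min_share
    return report

# Example with made-up records: group "C" would be flagged for review
# before any AI-generated outcome comparison is treated as reliable.
outcomes = pd.DataFrame({
    "demographic_group": ["A"] * 90 + ["B"] * 8 + ["C"] * 2,
    "outcome_score": [72] * 100,  # placeholder metric
})
print(flag_underrepresented_groups(outcomes))
```

A flag like this does not correct bias, but it tells reviewers which comparisons rest on thin or skewed data and should not be presented as objective benchmarks.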
3. AI Does Not Eliminate Accountability or the Need for a Human Touch
AI can help synthesize data and even draft narrative sections, offering the opportunity to work more efficiently, but it is no substitute for thoughtful program analysis or sound judgment, both of which require a human element.
For example, AI tools may prioritize efficiency or easy-to-measure metrics over what truly matters to the communities served. AI-generated reports may emphasize favorable data points while downplaying complex or long-term impacts that better reflect the nonprofit’s mission. Strategic decisions or reporting shaped primarily by algorithmic output can drift out of alignment with a nonprofit’s core values and goals, which are often key to donor and funder engagement. Reports that lack the nonprofit’s unique voice, specificity or a clear narrative connection to the community served may read as impersonal or generic.
Failing to review an AI tool’s output to confirm it aligns with the actual programming, operations, use of funds and the nonprofit’s voice could jeopardize the organization’s reputation and future eligibility for grant funds.
Best Practices to Overcome Risks Associated With AI Grant Reporting
To guard against these issues, nonprofits should adopt clear internal policies governing the responsible use of AI tools. These policies should address how and when AI may be used, so that AI use conforms to applicable privacy laws, contractual obligations and internal data protection standards.
For example, policies should require that sensitive information, such as personally identifiable details, be anonymized before it is entered into any third-party platform. Staff and volunteers using AI for reporting should also be trained on these policies so they understand the data usage, retention and security practices of the tools they use, and know how to opt out of data-training features when possible. Nonprofits using AI for outcome comparison or benchmarking must take extra care to understand how these tools handle bias, outliers and equity-related variables.
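As a purely illustrative sketch of that anonymization step (the patterns and example text below are assumptions, not a complete or legally sufficient redaction method), a lightweight pass like the following can mask obvious identifiers such as email addresses, phone numbers and Social Security numbers before report text is pasted into a third-party tool.

```python
# Illustrative pre-submission redaction: mask obvious identifiers before
# text is shared with a third-party AI platform. Names and other free-text
# identifiers are NOT caught here and still require manual review.
import re

REDACTION_PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

draft = "Reach the client at jane.doe@example.org or 310-555-0101."
print(redact(draft))
# -> "Reach the client at [EMAIL] or [PHONE]."
```

Nonprofits handling regulated data, such as health information, should treat a pass like this as a starting point only and rely on counsel and IT staff to define what must be removed before anything is submitted to an outside platform.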
AI-generated content should always be treated as a starting point, not a finished product. Nonprofits must maintain strong human oversight to review, verify and revise outputs for accuracy, appropriateness and alignment with the nonprofit’s purpose and presentation to the community. When selecting AI platforms, nonprofits should prioritize those with transparent licensing terms and avoid tools that reserve the right to reuse the nonprofit’s content or data for future model training.
As it has in so many other areas of our lives, AI has the potential to transform how nonprofits approach reporting and analysis, but only if it is adopted carefully and intentionally.
By following these best practices when incorporating AI tools into their operations, nonprofits can strengthen their capacity to show results without compromising legal compliance or community trust. Before integrating AI tools, however, nonprofits should consult with legal counsel and IT professionals to ensure that AI usage respects confidentiality, intellectual property and data protection standards, and aligns with the nonprofit’s broader mission and public image.
Victoria M. Gómez Philips is an associate in the Los Angeles office of Liebert Cassidy Whitmore. Victoria advises clients on business and transactional matters, including legal and business risks concerning strategic partnerships, contracts, employment, operations, and company policies.
Casey Williams is a partner and chair of the Nonprofit Practice Group at Liebert Cassidy Whitmore, a trusted adviser to California’s public entities, educational institutions, and nonprofits. Her practice focuses on helping mission-driven organizations achieve their goals while staying compliant and working through complex disputes.