Future of AI for Nonprofits AMA with Patrick J. McGovern Foundation
Event Ended
Thursday, Mar 14, 2024, 09:00 AM PDT
Event details
We're excited to announce a Future of AI for Nonprofits AMA on Thursday, March 14th at 9:00 AM Pacific Time with guests from the Patrick J. McGovern Foundation.
Vilas Dhar, President of the Pat...
Kim_Brooks_MSP
Updated Dec 27, 2024
babaravaleria
Mar 14, 2024
Copper Contributor
Hi! I am Valeria Babără, Legal and Advocacy Officer with Women's Initiatives for Gender Justice, an NGO based in The Hague (The Netherlands) working to advance gender justice with the International Criminal Court and other justice mechanisms. As I have started introducing AI to my team and our partners, I receive a lot of feedback about the adverse effects of AI and how it risks increasing inequality, replicating bias, etc. Most of the research recently published in our international criminal law field on AI-related themes similarly focuses on the negative aspects. While I believe it is important to be mindful of potential negative consequences, I feel that these efforts are not matched by proportional efforts to provide access to AI resources. In other words, we will soon have a lot of research criticizing AI tools and initiatives before many actors have had a chance to learn what they are, let alone experience working with them. Any thoughts on how to navigate this? How can we encourage colleagues to also be open to AI opportunities and potential?
HazemPJMF
Mar 14, 2024
Copper Contributor
I love this question, Valeria, because it's a common concern in a lot of circles. The way I like to think about it is that each and every one of us has the ability to shape the narrative for how we use AI in our society. There will definitely be bad actors looking to use AI for their own personal gain, but we should all demand a seat at the table to shape the direction this takes. So while the fears are justified, what I fear more is people not engaging with AI to define the course it needs to take. All the items you raised, such as bias in the data and the risk of increasing inequality, are real, but there are steps we can take to address them. We just have to be intentional about these risks as we develop solutions, and that intentionality should force changes in the entire product development lifecycle for any product you are building. It increases the cost and time of developing safe products, but it's necessary.