I recently had the opportunity to participate in a conversation with Nicole Alexander, head of web marketing at Meta, on stage at Black Tech Week.
The discussion was insightful, and we received some thoughtful questions, so I wrote down my thoughts from the conversation on building “Responsible AI”.
Responsible AI frameworks often take the form of a list of principles. And, yes, I generated the list below with the help of GenAI tools, along with my own input.
In practice, responsible AI is difficult to implement; there are nine key challenges we must address. One way to address them is to employ an IPO framework: Input, Process, and Output. The practices below are grouped by those three stages.
Input:

Source data with permission. Is the data set growing, shrinking, or staying the same over time? Have a data governance policy that addresses data timeliness (a sketch of such a provenance record follows the input items below).
Ensure the data sourced is inclusive and diverse where relevant. Diverse teams bring diverse perspectives and diverse data, so who is sourcing matters as much as what is being sourced.
Post an AI transparency policy on your site. Similar to a privacy policy, it should describe whether, when, and how you are using data in an AI system.
Know your data. We are producing data all the time, with every breath. Becoming data-aware and educated about what data you are creating and where it ends up is important.
Remove bias. Develop guidelines on what to exclude from inputs so that known sources of bias never enter the system.
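To ground the input items above in something concrete, here is a minimal sketch, in Python, of the kind of provenance record a team might keep for each data source. The field names and the 365-day staleness threshold are my own illustrative assumptions, not an established standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical provenance record for one data source.
# Field names and thresholds are illustrative assumptions.
@dataclass
class DataSourceRecord:
    name: str               # e.g. "customer_reviews_2023"
    consent_obtained: bool  # was the data sourced with permission?
    collected_on: date      # when the data was gathered
    row_count: int          # current size, to track growth or shrinkage

    def is_stale(self, max_age_days: int = 365) -> bool:
        """Flag sources older than the governance policy allows."""
        return date.today() - self.collected_on > timedelta(days=max_age_days)

# Screen sources before they ever reach a training pipeline.
sources = [
    DataSourceRecord("customer_reviews_2023", True, date(2023, 6, 1), 120_000),
    DataSourceRecord("scraped_forum_posts", False, date(2019, 1, 15), 40_000),
]
approved = [s for s in sources if s.consent_obtained and not s.is_stale()]
for s in approved:
    print(f"OK to use: {s.name} ({s.row_count} rows)")
```

Re-running a snapshot like this on a schedule and comparing row counts also answers the growing-or-shrinking question directly.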
Process:

Perform regular audits. Bring accountability by having data officers regularly audit the system's inputs, processes, and outputs.
Promote a data-first culture. Proactively work to establish a culture within the company that values data and understands its importance. This includes training employees on data-related topics and ensuring data is used responsibly and ethically.
Design transparently. Systems that process data should be able to account for, and be transparent about, whether a statement is a fact, an opinion, or generated content.
A fake ad can be identified as such if its ownership is disclosed, along with whether the ad presents a fact, an opinion, or a GenAI-generated statement.
Security. Know who has access to the data so that rogue parties cannot get to it.
Test for bias. A loan-approval system, for example, should never use race as an input, and it should be tested to confirm that applicants of all races see the same outcomes. Ideally, third-party systems would test for bias and rate outputs (a sketch of such a check follows the process items below).
Qualify the time period from which the data was processed, so users know how current the system's knowledge is.
Data privacy. Ensure the company's data practices respect the privacy of customers and employees, and that data is stored and used securely.
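To make the bias-testing item concrete, here is a minimal sketch, assuming decisions from a hypothetical loan-approval model are already logged, of a demographic-parity check that compares approval rates across groups. The 80% threshold echoes the "four-fifths rule" from US employment-discrimination guidance; the function and data names are my own.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved) pairs.
    Returns the approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: ok / total for g, (ok, total) in counts.items()}

def passes_four_fifths_rule(rates, threshold=0.8):
    """Every group's approval rate should be at least `threshold`
    times the highest group's rate; otherwise flag disparate impact."""
    highest = max(rates.values())
    return all(rate >= threshold * highest for rate in rates.values())

# Hypothetical audit log: (group, loan_approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = approval_rates(decisions)
print(rates)                           # group_a: ~0.67, group_b: ~0.33
print(passes_four_fifths_rule(rates))  # False -> investigate for bias
```

Note the design choice: the protected attribute appears only in the audit of outcomes, never as a model input, which matches the guideline above.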
Output:

Quote your sources in the output.
Tag the output. Build a system to identify or tag a fact vs. a generated statement (a sketch follows these output items).
Third-party testing. Pursue and publicize third-party testing to improve trust. I expect badges to emerge from third parties that test and audit AI systems.
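As one way the source-quoting and output-tagging items might look in code, here is a minimal sketch of a provenance wrapper for generated output. The statement types and field names are my own assumptions about how such tags could be structured.

```python
from dataclasses import dataclass, field
from enum import Enum

class StatementType(Enum):
    FACT = "fact"            # verifiable against a cited source
    OPINION = "opinion"      # an attributed viewpoint
    GENERATED = "generated"  # produced by the model itself

# Hypothetical tagged-output record; field names are illustrative.
@dataclass
class TaggedStatement:
    text: str
    kind: StatementType
    sources: list[str] = field(default_factory=list)  # citations, if any

    def render(self) -> str:
        cite = f" [sources: {', '.join(self.sources)}]" if self.sources else ""
        return f"({self.kind.value}) {self.text}{cite}"

output = [
    TaggedStatement("Vyrill was launched in 2017.", StatementType.FACT,
                    ["company website"]),
    TaggedStatement("Video search will reshape e-commerce.",
                    StatementType.OPINION),
    TaggedStatement("Here is a summary of the reviews...",
                    StatementType.GENERATED),
]
for stmt in output:
    print(stmt.render())
```

A wrapper like this makes it cheap for downstream interfaces, or third-party auditors, to surface which parts of a response are cited facts and which are generated.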
Overall, we need to improve AI literacy across users, companies and governments to create balanced policies that benefit all parties while encouraging innovation.
Knowing where your user data is being collected, processed, and leveraged will lead to more responsible AI-led innovation.
I would love to hear your comments on the topic.
Please add to the conversation.
Ajay Bam is the CEO and co-founder of Vyrill, a first-of-its-kind video intelligence company launched in 2017 through UC Berkeley’s Skydeck Incubator program. Vyrill helps brands and shoppers find the “moments that matter” inside videos. Its AI-powered “In-Video” search technology analyzes and shares insights hidden within videos to improve personalization, SEO, and conversion. Before Vyrill, Ajay launched the Boston-based mobile shopping app company Modiv Media. He is a proven and accomplished product management professional, entrepreneurial thinker, and innovator with more than 13 years of experience leading startups and world-class brands.