Bridging AI Innovation with Responsibility: Key Takeaways from #LATechWeek Responsible AI - Made in LA Event

Oct 31, 2024

By Kim Owens

As #LATechWeek was winding down, Angelenos made their way through Friday evening traffic to catch the sunset at AI LA’s Cerebral Beach HQ in Playa Del Rey and to attend the Responsible AI Reading Group’s “Responsible AI - Made in LA” event. 

The event took place on the heels of LA’s largest hackathon, Cerebral Beach, organized by AI LA, where 405 hackers had 24 hours to submit an AI-based solution with a social-good impact on society, among other challenges. We got a chance to meet Achala, Vivek, and Prathik from the winning team, Patronus, who designed a platform that gives dementia patients a customized experience to help them interact with others and the outside world.

Hosted by AI LA, the evening featured a diverse lineup of leading Responsible AI voices from government, academia, and the nonprofit and private sectors:

  • Jen García is the Founder and CEO of Employ California, a nonprofit that provides education and training programs for knowledge workers, specializing in AI literacy aimed at underrepresented groups and small businesses.
  • Amy Tong serves as the Secretary of Government Operations for the State of California, where she leads initiatives that prioritize the responsible deployment of technology across state agencies, exploring AI use cases that balance innovation with public safety.
  • Dr. Sarah T. Roberts is an Associate Professor at UCLA in Information Studies, Labor Studies, and Gender Studies, having spent over 15 years researching the social impacts of technology, with a focus on content moderation and the human implications of AI. 
  • Kareem Saleh is the Founder and CEO of FairPlay AI, a company that advocates for "fairness as a service" in AI applications within financial institutions to reduce algorithmic biases in lending, ensuring AI systems promote equitable treatment across demographics.
[left to right: Todd Terrazas, Kareem Saleh, Jen García, Dr. Sarah Roberts, Amy Tong, Paul-Marie Carfantan]

Centering Equity, Inclusion, and Access “Ecosystems” in Responsible AI

Moderator Paul-Marie Carfantan, Founder and Head of the Responsible AI Reading Group at AI LA, kicked off the discussion by sharing statistics on the rapid increase in AI adoption, which has nearly doubled since this time last year, before posing the central question: “What kind of risk exists and why do we need responsible AI? What risk is responsible AI addressing?”

“One of the risks…is that some of us have access to these tools and how to use them, and many of us don’t,” Jen stated. “And that disparity would truly be a risk economically and socio-economically for all of us.”

Jen is also creating a new multi-stakeholder model that connects AI startups, small businesses, and government entities within a collaborative “ecosystem” that benefits all. A government agency puts people through a workforce development program and places them at an AI startup that serves small businesses in underserved communities. The agency then connects the startup with small-business clients, who get access to AI-based services to grow their own companies at a discount subsidized by the agency. Everybody wins!

This program, which will pilot in San Francisco, “serves all and also uplifts our communities. We all need to think about how we can [best] serve each other.”

California’s Plan for an AI-Educated Workforce and Societal Safeguards

The rapid pace of AI development has nearly the whole world wondering whether jobs held by knowledge workers like engineers, lawyers, and marketers will be replaced by companies of AI agents, and whether careers in trucking and manufacturing will be taken over by robots.

“Many people worry about, ‘Would this AI replace our job?’ And if not, what is the best way we can actually be smarter than the AI?” Amy posed. She stressed California's commitment to building AI literacy among employees and communities, and that preparing workers to engage thoughtfully with AI is essential not only for jobs but for broader public safety and responsible innovation.

Amy also shared California’s continued pursuit of regulation that goes further than what Congress is currently able to do, and the state’s “measured approach” to AI, in which state-led pilots test both the innovation potential and the ethical challenges of AI, especially in high-stakes areas.

One pilot will address traffic management in preparation for the 2028 Olympics in L.A., the mere mention of which caused all of us to squirm in our seats. Amy laughed, “It's going to be here. I know! I see some people shaking their heads like, ‘I don't know what we're going to do with the traffic.’ Well, the Department of Transportation has already started utilizing AI as a use case to figure out if there's a way to improve that.”

The administration is also exploring ways to benefit the public by preparing safeguards against long-term impacts. One project included a collaborative cybersecurity task force with Homeland Security Advisory in California to examine threats to infrastructure like the water system and the energy grid. “California can continue to be a leading force when it comes to responsibility,” Amy added. “That's really mission specific as to why we're here today.”

Addressing AI’s Societal Bias by Amplifying Marginalized Voices in AI Development

One of the well-known issues with AI is the presence of inherent societal biases in the LLMs themselves, which can inadvertently reinforce historical discrimination. “The existence of the field or the endeavor of Responsible AI indicates to me that something is wrong without it,” Sarah stated frankly. “We've taken a society that struggles with deep inequities, deep injustices. You know, historical discrimination against all swaths of citizens, and we've not addressed those things. But we've taken a layer of technology and put that on top of those social issues and accelerated it, right?”

Sarah also noted the growing lobbying power of the tech industry even as major companies walk back or outright eliminate their responsible AI teams, as Elon Musk did when he took over Twitter and fired much of its staff, including Rumman Chowdhury, who led the Machine Learning Ethics, Transparency, and Accountability (META) team.

Sarah argued that we have an opportunity to address these issues by amplifying and valuing the perspectives of those historically overlooked in tech development, particularly women, people of color, and other marginalized groups.

“I just want to highlight [that we need] to listen to the voices that have been working on these issues,” she said. “It is not coincidental that those are voices of women, women of color, LGBTQ women, people who are living with disability, people who have a gender difference or gender variance who find themselves again, on the wrong side of that door when it closes. But with that comes invaluable insight, invaluable learnings, knowledge and a way forward.”

Evaluating Fairness in Financial Services With Accountability and Transparency in AI

Kareem shared a real-world example from FairPlay’s work with financial institutions, where “fairness as a service” comes into play. He described a lender using AI to evaluate auto loans based on factors, such as interactions between seemingly neutral variables like mileage and location, that can conceal surprising biases in AI-driven decision-making.

“The problem is the combination of Nevada and mileage…if you’re buying a high-mileage car in Nevada, there’s like a 70 percent probability that you’re a person of color,” he explained. Through the use of FairPlay’s tools, lenders are able to identify, assess, and then mitigate those unintended algorithmic biases.
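
To make that concrete, here is a minimal sketch of the kind of check a lender could run, comparing approval rates across groups with the four-fifths adverse impact rule of thumb. This is not FairPlay’s actual tooling; the synthetic data, the 20 percent baseline, and the “neutral” underwriting rule are all invented for illustration, with the 70 percent figure borrowed from Kareem’s anecdote.

```python
# Illustrative sketch only -- not FairPlay's product. All data is synthetic.
import random

random.seed(0)

def simulate_applicant():
    """One synthetic auto-loan applicant; distributions are invented."""
    state = random.choice(["NV", "CA", "TX"])
    mileage = random.randint(5_000, 150_000)
    # Panel anecdote: a high-mileage car in Nevada implies ~70% probability
    # the buyer is a person of color (the "protected" group here).
    # Elsewhere we assume a 20% baseline, purely for illustration.
    if state == "NV" and mileage > 100_000:
        group = "protected" if random.random() < 0.7 else "reference"
    else:
        group = "protected" if random.random() < 0.2 else "reference"
    # A seemingly neutral underwriting rule that penalizes the
    # state-by-mileage interaction.
    approved = not (state == "NV" and mileage > 100_000)
    return group, approved

applicants = [simulate_applicant() for _ in range(10_000)]

def approval_rate(group):
    decisions = [approved for g, approved in applicants if g == group]
    return sum(decisions) / len(decisions)

# Four-fifths rule of thumb: flag adverse impact when the protected group's
# approval rate falls below 80% of the reference group's rate.
ratio = approval_rate("protected") / approval_rate("reference")
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: audit the state-by-mileage interaction.")
```

Even this toy check surfaces the pattern Kareem described: each variable looks neutral on its own, but their interaction quietly drives the disparity.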

Kareem encouraged regulators and consumers to demand transparency, highlighting how public accountability and advocacy, along with hefty fines, can motivate companies to adopt fairer practices, particularly in high-stakes and highly regulated areas like finance, healthcare, and employment. “Nobody buys fairness out of the goodness of their heart,” he stated bluntly.

This is why some of the earliest use cases for responsible AI are appearing in those regulated market sectors, including financial services, “which have these pretty severe laws that prohibit discrimination.” When violations are discovered by regulators, the potential fines are three times the size of the offense. He added, “And there's a six year statute of limitations for discrimination in financial services, so any discriminatory decision you make has a six year tail on it.”
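
Taken together, those two rules of thumb make the exposure math easy to sketch. The dollar amounts and dates below are invented, and real statute-of-limitations calculations are more subtle than a flat 365-day year; this is only a back-of-the-envelope illustration.

```python
# Back-of-the-envelope exposure from the panel's two rules of thumb:
# fines of roughly 3x the offense, and a six-year statute of limitations.
from datetime import date, timedelta

TODAY = date(2024, 10, 31)
LOOKBACK = timedelta(days=6 * 365)  # the "six year tail" on each decision

# (decision_date, estimated_harm_in_dollars) -- invented numbers
decisions = [
    (date(2017, 5, 1), 40_000),   # outside the six-year window: no exposure
    (date(2020, 3, 15), 25_000),
    (date(2023, 9, 30), 10_000),
]

exposure = sum(
    3 * harm                              # treble the size of the offense
    for decided, harm in decisions
    if TODAY - decided <= LOOKBACK        # still within the statute
)
print(f"Potential exposure: ${exposure:,}")  # -> $105,000
```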

Many thanks to all of you who made it out for our really informative and engaging event, and a special thanks to our guest speakers and to Paul-Marie for leading the conversation.

If you couldn’t make it, not to worry! We have more events just around the corner…

Coming in November & December

Discover how to address bias in Large Language Models at AI LA’s FREE hands-on workshop!

This after-work interactive workshop is perfect for developers, data scientists, students, and anyone passionate about responsible AI.

Event Details:

  • Date: November 7, 2024
  • Time: 6pm - 8pm PT
  • Where: Gridspace in Downtown Los Angeles

Pizza & 🍻 beer provided by Gridspace in DTLA! Space is limited to 20 participants, so register now at https://lu.ma/seminar to secure your spot and be part of creating fairer AI systems.

In December – Navigating AI Policy: Legislative Insight and Practical Engagement - stay tuned, details coming soon!

Join Our Growing Responsible AI Reading Group

Whether you’re an engineer or data scientist, a non-tech professional or policymaker, a student or educator, or simply curious about the ethical implications of AI, we welcome you. Perks of our community include:

  • Weekly Reading Sessions: Online discussions focusing on the latest research and news in Responsible AI.
  • Hands-On Learning Seminars: In-person events delving into specialized topics like bias in large language models and effective AI policies.
  • Community Advocacy: Opportunities for public feedback on AI risks through comments to organizations like NIST and federal agencies.
  • Monthly Mixers: Networking events to encourage collaboration within the AI community.
  • Special Events: Thematic events exploring a strategic topic (e.g., Responsible AI - Made in LA).
  • Nonprofit Consulting: Reach out to pm@joinai.la with your project scope.

Learn more and join us in our mission to establish Los Angeles as a hub for community-driven Responsible AI solutions and advocacy.

Let's shape the future of AI, responsibly and together!

ABOUT THE AUTHOR