By Kim Owens and Paul-Marie Carfantan
It was a lovely spring evening when members of AI LA joined together in DTLA over local craft beer for our monthly AI Policy mixer, chatting about this era we’re living in where artificial intelligence is reshaping our world at an unprecedented pace. How we get both excited and concerned about AI’s impact on our work, careers, and personal lives.
That night the Responsible AI Reading Group was born as an initiative within AI LA to stand at the forefront of this crucial dialogue, offering a unique platform for enthusiasts, professionals, and students to engage with the ethical dimensions of AI. Through weekly discussions, monthly in-person meetings, and a lively Discord channel, our vibrant community of diverse individuals delve into and analyze AI-related peer-reviewed papers and research, debating the ethical considerations surrounding AI technologies.
As part of the AI LA Responsible AI Symposium earlier this year we had the pleasure of hosting a vibrant and insightful session on AI Red-Teaming used to test AI systems for vulnerabilities. The discussion led by Paul-Marie Carfantan, Kim Owens, and Scott Pansing provided a comprehensive overview of how red-teaming—a socio-technical evaluation method borrowed from cybersecurity—can help in identifying needed safety provisions in large language models to ensure AI systems do not generate harmful or unethical outputs.
One participant noted, "The red-teaming exercise really opened my eyes to the subtle ways AI systems can be manipulated. It's crucial for developing robust safeguards.” This workshop propelled the Responsible AI Reading Group into conversations about Global Governance, Explainability, Agentic AI and misuse risks for foundations models; including our response to U.S. Artificial Intelligence Safety Institute’s (AISI) request for public comment to the NIST AI 800-1 Draft, which you can view here.
In response to the rising concerns about the ethical implications and safety of AI systems, AI LA is excited to launch the Responsible AI Challenge: Hack for Change, taking place during LA’s Largest GenAI Hackathon on October 12-13, 2024 in Santa Monica, Los Angeles.
This challenge encourages participants to incorporate responsible AI practices into their projects, regardless of their chosen focus area. Whether you’re working on health, finance, social impact, or creative AI solutions, the Responsible AI Challenge dares you to think critically about how your project addresses AI safety, fairness, transparency, and the prevention of harmful applications. We want teams to explore how AI can be designed to protect against misuse and unintended consequences, fostering a future where technology is aligned with human values.
To learn more and register to participate please visit https://hack.cerebralbeach.com.
To kick-start the weekend of LA Tech Week, on October 18th, AI LA is hosting a special Responsible AI evening in Playa Del Rey, bringing together leading local voices in the field. This intimate rooftop gathering aims to elevate dialogue on Responsible AI development in the Greater Los Angeles area. We're gathering distinguished speakers to represent key perspectives: academia, government, industry, and nonprofits.
Each speaker will have the opportunity to share unique insights on their Responsible AI work, engage in meaningful discussions with peers, and help shape the future of AI ethics in our region. This event offers not only a platform to learn from Responsible AI experts but also to network with influential figures in a relaxed, picturesque setting. Join us in fostering a collaborative approach to Responsible AI that reflects the diverse perspectives of our vibrant tech community.
Event Details:
- Date: October 18, 2024
- Location: Playa del Rey, Los Angeles
To learn more and register to participate please visit https://lu.ma/responsibleaila.
Whether you’re an engineer or data scientist, non-tech professional or policy maker, student or educator, or are simply curious about the ethical implications of AI, we welcome you to the perks of our community:
- Attend our weekly discussions and monthly mixers
- Participate in our upcoming Hack for Change and Responsible AI - made in LA event
- Contribute to shaping the future of AI safety by providing your feedback (similar to the comments formalized on the AI misuse risks)
- Stay connected through our Discord channel and email updates
Learn more and join us in our mission to establish Los Angeles as a hub for community-driven Responsible AI solutions and advocacy by clicking here.
Let's shape the future of AI, responsibly and together.