
Women in AI: Brandie Nonnecke of UC Berkeley says investors should insist on responsible AI practices


To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who have contributed to the AI revolution. We'll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Brandie Nonnecke is the founding director of the CITRIS Policy Lab, headquartered at UC Berkeley, which supports interdisciplinary research to address questions around the role of regulation in promoting innovation. Nonnecke also co-directs the Berkeley Center for Law and Technology, where she leads projects on AI, platforms and society, and the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks.

In her spare time, Nonnecke hosts a video and podcast series, TecHype, that analyzes emerging tech policies, regulations and laws, providing insights into the benefits and risks and identifying ways to harness tech for good.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I've been working in responsible AI governance for nearly a decade. My training in technology, public policy and their intersection with societal impacts drew me into the field. AI is already pervasive and profoundly impactful in our lives, for better and for worse. It's important to me to meaningfully contribute to society's ability to harness this technology for good rather than stand on the sidelines.

What work are you most proud of (in the AI field)?

I'm really proud of two things we've accomplished. First, the University of California was the first university to establish responsible AI principles and a governance structure to better ensure the responsible procurement and use of AI. We take our commitment to serve the public in a responsible manner seriously. I had the honor of co-chairing the UC Presidential Working Group on AI and its subsequent permanent AI Council. In these roles, I've been able to gain firsthand experience thinking through how best to operationalize our responsible AI principles in order to safeguard our faculty, staff, students and the broader communities we serve. Second, I think it's critical that the public understand emerging technologies and their real benefits and risks. We launched TecHype, a video and podcast series that demystifies emerging technologies and provides guidance on effective technical and policy interventions.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

Be curious, persistent and undeterred by imposter syndrome. I've found it essential to seek out mentors who support diversity and inclusion, and to offer the same support to others entering the field. Building inclusive communities in tech has been a powerful way to share experiences, advice and encouragement.

What advice would you give to women seeking to enter the AI field?

For women entering the AI field, my advice is threefold: Seek knowledge relentlessly, as AI is a rapidly evolving field. Embrace networking, as connections will open doors to opportunities and offer invaluable support. And advocate for yourself and others, as your voice is essential in shaping an inclusive, equitable future for AI. Remember, your unique perspectives and experiences enrich the field and drive innovation.

What are some of the most pressing issues facing AI as it evolves?

I believe one of the most pressing issues facing AI as it evolves is not getting hung up on the latest hype cycles. We're seeing this now with generative AI. Sure, generative AI presents significant advancements and will have tremendous impact, good and bad. But other forms of machine learning are in use today that are surreptitiously making decisions directly affecting everyone's ability to exercise their rights. Rather than focusing on the latest marvels of machine learning, it's more important that we focus on how and where machine learning is being applied, regardless of its technological prowess.

What are some issues AI users should be aware of?

AI users should be aware of issues related to data privacy and security, the potential for bias in AI decision-making and the importance of transparency in how AI systems operate and make decisions. Understanding these issues can empower users to demand more accountable and equitable AI systems.

What is the best way to responsibly build AI?

Responsibly building AI involves integrating ethical considerations at every stage of development and deployment. This includes diverse stakeholder engagement, transparent methodologies, bias management strategies and ongoing impact assessments. Prioritizing the public good and ensuring that AI technologies are developed with human rights, fairness and inclusivity at their core are fundamental.

How can investors better push for responsible AI?

This is such an important question! For a long time we never expressly discussed the role of investors. I cannot express enough how impactful investors are! I believe the trope that "regulation stifles innovation" is overused and often untrue. Instead, I firmly believe smaller firms can experience a late-mover advantage, learning from the larger AI companies that have been developing responsible AI practices and from the guidance emerging from academia, civil society and government. Investors have the power to shape the industry's direction by making responsible AI practices a critical factor in their investment decisions. This includes supporting initiatives that focus on addressing social challenges through AI, promoting diversity and inclusion within the AI workforce, and advocating for strong governance and technical strategies that help ensure AI technologies benefit society as a whole.
