To give AI-focused female academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focused on extraordinary women who have contributed to the AI revolution.
Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Technology Public Voices Fellow with the OpEd Project, held in collaboration with the MacArthur Foundation.
She is known for her research and advocacy work in technology. Previously, she worked as a race and technology practitioner fellow at the Stanford Center on Philanthropy and Civil Society. Prior to this, she led Trust & Safety at Twitch and Twitter. Navaroli is perhaps best known for her congressional testimony about Twitter, in which she spoke about the ignored warnings of impending violence on social media that preceded what would become the January 6 Capitol attack.
To start, how did you get into AI? What drew you to the field?
About 20 years ago, I was working as a newsroom clerk at my hometown paper during the summer it went digital. At the time, I was an undergraduate studying journalism. Social media sites like Facebook were sweeping my campus, and I became obsessed with trying to understand how laws built on the printing press would evolve with emerging technologies. That curiosity led me to law school, where I migrated to Twitter, studied media law and policy, and watched the Arab Spring and Occupy Wall Street movements play out. I put it all together and wrote my master's thesis about how new technology was transforming the flow of information and how society exercised freedom of expression.
After graduation, I worked at several law firms and then found my way to the Data & Society Research Institute, where I led the think tank's new research on what was then called "big data," civil rights, and justice. My work there examined how early AI systems like facial recognition software, predictive policing tools, and criminal justice risk assessment algorithms replicated bias and created unintended consequences that harmed marginalized communities. I then went on to work at Color of Change, where I led the first civil rights audit of a tech company, developed the organization's playbook for tech accountability campaigns, and advocated for tech policy changes to governments and regulators. From there, I became a senior policy official on the Trust & Safety teams at Twitter and Twitch.
What work are you most proud of in the AI field?
I'm most proud of my work inside tech companies using policy to practically shift the balance of power and correct bias within culture and the knowledge-producing algorithmic systems. At Twitter, I ran a couple of campaigns to verify individuals who, shockingly, had previously been excluded from the exclusive verification process, including Black women, people of color, and queer folks. That also included leading AI scholars like Safiya Noble, Alondra Nelson, Timnit Gebru, and Meredith Broussard. This was in 2020, when Twitter was still Twitter. At the time, verification meant that your name and content became part of Twitter's core algorithm, because tweets from verified accounts were injected into recommendations, search results, home timelines, and contributed to the creation of trends. So working to verify new people with different perspectives on AI fundamentally shifted whose voices were given authority as thought leaders and elevated new ideas into the public conversation during some critical moments.
I'm also very proud of the research I did at Stanford that came together as Black in Moderation. When I was working inside tech companies, I noticed that no one was really writing or talking about the experiences I was having every day as a Black person working in Trust & Safety. So when I left the industry and went back to academia, I decided to speak with Black tech workers and bring their stories to light. The research ended up being the first of its kind and has sparked many new and important conversations about the experiences of tech employees with marginalized identities.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
As a Black woman, navigating male-dominated spaces has been part of my entire life's journey. Within tech and AI, I think the most challenging aspect has been what my research calls "forced identity work." I coined the term to describe the all-too-frequent situation where employees with marginalized identities are treated as the voices and/or representatives of entire communities that share their identities.
Because of the high stakes that come with developing new technology like AI, that labor can sometimes feel almost impossible to escape. I had to learn to set very specific boundaries for myself about which issues I was willing to engage with, and when.
What are some of the most important issues facing AI as it develops?
According to investigative reports, generative AI models have already consumed all the available data on the internet and will soon run out of data to train on. So the biggest AI companies in the world are turning to synthetic data, or information generated by AI itself rather than by humans, to continue training their systems.
The idea sent me down a rabbit hole, so I recently wrote an op-ed arguing that the use of synthetic data as training data is one of the most pressing ethical issues facing new AI development. Generative AI systems have already shown that, based on their original training data, their outputs replicate bias and create false information. So training new systems with synthetic data would mean constantly feeding biased and inaccurate outputs back into the system as new training data. I described this as potentially devolving into a hellish feedback loop.
Since I wrote the piece, Mark Zuckerberg has boasted that Meta's updated Llama 3 chatbot is partially powered by synthetic data and is the "smartest" generative AI product on the market.
What are some issues AI users should be aware of?
AI is already part of our daily lives, from spellcheck and social media feeds to chatbots and image generators. In many ways, society has become the guinea pig for the experiments of this new, untested technology. But AI users don't have to feel powerless.
I've argued that technology advocates should come together and organize AI users to demand a People's Pause on AI. I believe the Writers Guild of America strike showed that with organization, collective action, and patient resolve, people can come together to create meaningful boundaries for the use of AI technologies. I also believe that if we pause now to fix the mistakes of the past and create new ethical guidelines and regulation, AI doesn't have to become an existential threat to our future.
What is the best way to build AI responsibly?
My experience working inside tech companies showed me how important it is to be in the room writing the policies, presenting the arguments, and making the decisions. My pathway also showed me how the skills I developed, starting in journalism school, gave me what I needed to succeed in the tech industry. I'm now back working at Columbia Journalism School, and I want to help train the next generation of people who will do the work of technology accountability and responsible AI development, both inside tech companies and as external watchdogs.
I think [journalism] school gives people uniquely valuable training in interrogating information, seeking truth, considering multiple viewpoints, creating logical arguments, and distilling fact and reality from opinion and misinformation. I believe that's a solid foundation for the people who will be responsible for writing the rules of what the next iterations of AI can and cannot do. And I hope to create a more paved pathway for those who come next.
I also believe that, in addition to skilled Trust & Safety employees, the AI industry needs external regulation. In the US, I argue this should take the form of a new agency to regulate American technology companies, with the power to establish and enforce baseline safety and privacy standards. I'd also like to continue working to connect current and future regulators with former tech workers who can help those in power ask the right questions and develop new, practical solutions.