Exploring the frontiers of artificial intelligence and data ethics involves not only a commitment to specific areas of focus but also an open-minded approach to emerging challenges and opportunities.

Our research addresses issues such as bias and discrimination in generative models, AI's impact on employment, and the proliferation of misinformation. While we have a defined mission in these domains, we recognize that the evolving landscape of technology and society demands adaptability. This is why, in addition to our dedicated pursuits, we remain open to exploring new areas where AI and data ethics intersect, seeking to provide insights, foster ethical foundations, and contribute to informed decision-making in our ever-changing world.

Current Research Areas

AI and Data Ethics Case Studies


With the rapid rise of AI tools and technologies, it is crucial that current and future policymakers, business leaders, and data/computer science professionals have a strong ethical foundation. Our research team has embarked on a project to aggregate and analyze real-world case studies related to AI ethics, algorithmic bias, privacy, and other emerging issues. By curating a database of case studies and examining trends, common challenges, and best practices, we aim to develop educational materials that will prepare students in technical and non-technical fields to grapple with complex sociotechnical trade-offs.

AI Impact on Jobs


In a rapidly evolving technological landscape, concerns about the impact of AI on jobs have gained prominence. At CAIDE, we go beyond the headlines and anecdotes, bringing you real-life stories that highlight the nuanced ways AI is shaping employment. Through these stories and comprehensive surveys, we delve into various industries and job sectors, offering a balanced perspective on the challenges, transformations, and new possibilities that AI introduces to the workforce. Whether you're a job seeker, a business owner, or a policy influencer, our research equips you with the knowledge to make informed decisions in the face of AI-driven changes.


Craig Newmark Fund for Data Ethics, 2019
Established through a generous donation from Craig Newmark, an internet entrepreneur best known as the founder of Craigslist. He was awarded an honorary degree from USF in 2009. The Center for AI & Data Ethics (formerly CADE) aligns with Craig's priorities of strengthening trustworthy journalism and expanding access for women in technology.


Every year, our faculty and graduate students in data science collaborate with organizations worldwide to tackle real-world data science and data engineering challenges.

  • Candid

    Student Team: Zemin Cai, Harrison Jinglun Yu

    Faculty Mentor: Shan Wang

    Company Liaison: Cathleen Clerkin

    Project Outcomes: Candid's Insights department engaged students in impactful research projects in data ethics. These projects included an examination of diversity, equity, and inclusion within nonprofits, an exploration of nonprofits' societal impact, and an investigation into real-time grantmaking data, particularly in relation to issues like racial equity. Students were tasked with identifying factors influencing organizations' willingness to share demographic data and analyzing data to predict nonprofits' societal impact. Additionally, they explored methodologies to provide real-time insights into philanthropic trends while addressing potential biases and confounding factors. These projects harnessed various data science techniques and underscored the importance of ethical considerations in data analysis.

  • Kidas Inc.

    Student Team: Raghavendra Kommavarapu

    Faculty Mentor: Mustafa Hajij

    Company Liaison: Amit Yungman

    Project Outcomes: Students optimized point-of-interest detection algorithms, including hate speech and sexual content detection, using data and metadata. They attempted age detection in audio and text, emotion detection in audio and text, and voice changer detection in audio. Additionally, they worked on displaying data visualizations on personal pages based on user activity and algorithm results using Python.

  • YLabs (Youth Development Labs)

    Student Team: Tejaswi Dasari

    Faculty Mentor: Diane Woodbridge

    Company Liaison: Robert On

    Project Outcomes: In the CyberRwanda project, focused on enhancing the well-being and prospects of urban teenagers through digital education, students used various technologies and techniques to measure project progress and effectiveness. They employed Google Analytics to track engagement metrics and designed KPI dashboards for automatic data generation. However, challenges included manual data tracking, discrepancies between Google Analytics versions, and gaps in tracking product pick-ups. Integrating and utilizing data from different sources for decision-making was identified as a crucial goal.

  • ACLU

    Our Team: Joleena Marshall

    Faculty Mentor: Michael Ruddy

    Company Liaisons: Linnea Nelson, Tedde Simon, Brandon Greene

    Project Outcomes: The team developed a tool with Python to acquire and preprocess publicly available data related to the Oakland Unified School District to investigate whether OUSD's allocation of resources results in inequities between schools. The team also provided an updated data analysis on educational outcomes for indigenous students for a select number of Humboldt County unified school districts, including data visualizations.

  • California Forward

    Our Team: Evie Klaassen

    Faculty Mentor: Michael Ruddy

    Company Liaison: Patrick Atwater

    Project Outcomes: The team built a tool with Python to determine where high-wage jobs are located in California. This tool extends data tools already created and maintained by the organization. The team also developed a pipeline to clean and prepare new public data as it is released, so that the tool's outputs stay current.

  • ACLU Criminal Justice

    Our Team: Qianyun Li

    Goal: At the ACLU, the student identified potential discrimination in school suspensions by performing feature importance analysis with machine learning models and statistical tests.

  • ACLU Micromobility

    Our Team: Max Shinnerl

    Goal: At the ACLU, the student analyzed COVID-19 vaccine equitable distribution data. They developed interactive maps with Leaflet to visualize shortcomings of the distribution algorithm and automated the cleaning of legislative record data. They also developed a pipeline for storing data to enable remote SQL queries using Amazon RDS and S3 from AWS.
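The feature-importance analysis with machine learning models and statistical tests mentioned for the ACLU Criminal Justice project could be sketched as below. This is a minimal illustration on synthetic data, not the students' actual code or the ACLU's dataset; all column names and coefficients are invented for the example.

```python
# Hypothetical sketch: test whether a demographic flag disproportionately
# predicts suspension. Data here is synthetic with bias deliberately built in.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 2000
# Synthetic features: a 0/1 demographic flag, attendance rate, prior incidents
demo = rng.integers(0, 2, n)
attendance = rng.uniform(0.6, 1.0, n)
incidents = rng.poisson(1.0, n)
# Synthetic outcome correlated with the demographic flag (injected bias)
logits = 1.5 * demo - 3 * attendance + 0.5 * incidents
suspended = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

# Feature importance from a tree ensemble
X = np.column_stack([demo, attendance, incidents])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, suspended)
for name, imp in zip(["demographic_flag", "attendance", "prior_incidents"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")

# Statistical check: chi-squared test of independence between flag and outcome
table = np.array([[np.sum((demo == d) & (suspended == s)) for s in (0, 1)]
                  for d in (0, 1)])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2e}")
```

A high importance for the demographic flag alone is not proof of discrimination, which is why pairing model-based importance with a classical independence test, as the project description suggests, is a sensible design.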

  • Human Rights Data Analysis Group (HRDAG)

    Our Team: Bing Wang

    Goal: At the Human Rights Data Analysis Group (HRDAG), Bing gleaned critical location-of-death information from unstructured Arabic text fields using Google Translate and Python pandas, adding identifiable records to Syrian conflict data. She wrote R scripts and Makefiles to create blocks of similar records on killings in the Sri Lankan conflict, reducing the search space for the semi-supervised machine-learning record-linkage (database deduplication) process.
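The blocking step described above can be sketched as follows. This is a simplified illustration of the general technique, not HRDAG's actual pipeline; the field names and records are invented, and a real system would use more robust keys (e.g., phonetic encodings) than a surname initial.

```python
# Hypothetical blocking sketch for record linkage / deduplication:
# only records sharing a block key are compared, shrinking the O(n^2)
# space of candidate pairs.
from collections import defaultdict
from itertools import combinations

records = [
    {"id": 1, "name": "Nimal Perera",   "year": 2009, "district": "Jaffna"},
    {"id": 2, "name": "Nimal Pereira",  "year": 2009, "district": "Jaffna"},
    {"id": 3, "name": "S. Fernando",    "year": 2008, "district": "Colombo"},
    {"id": 4, "name": "Saman Fernando", "year": 2008, "district": "Colombo"},
]

def block_key(rec):
    # Block on (year, district, first letter of surname)
    surname = rec["name"].split()[-1]
    return (rec["year"], rec["district"], surname[0].upper())

blocks = defaultdict(list)
for rec in records:
    blocks[block_key(rec)].append(rec)

# Candidate pairs are formed only within each block; a downstream
# (semi-supervised) classifier then decides which pairs are true matches.
candidate_pairs = [
    (a["id"], b["id"])
    for block in blocks.values()
    for a, b in combinations(block, 2)
]
print(candidate_pairs)  # → [(1, 2), (3, 4)]
```

With four records, exhaustive comparison would need six pairs; blocking cuts this to two, and the savings grow quadratically with dataset size.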


Research activities and publications by our faculty, accomplished fellows, and affiliates.

Make A Gift

Your support plays a pivotal role in shaping the trajectory of ethical discussions and practices within the field, empowering us to lead the way in responsible AI and data ethics.