How large language and deep learning models can prevent toxicity such as unconscious biases

Jun 1, 2023 · Armstrong Foundjem · 1 min read
Image credit: Unsplash
Abstract
Despite the proliferation of AI models making impactful decisions in our everyday activities, there are growing concerns about their trustworthiness. It is of utmost importance to have fair, interpretable models making decisions in healthcare, finance, the justice system, etc. This presentation aims to predict biases early enough, framed as a multi-class, multi-label problem, before they can induce harm. The distributed nature of online communities and their complex data sources makes it difficult to identify biases in data. Thus, we use large language models to accurately classify text, image, and video data across languages, cultures, religions, ages, genders, etc. We also fine-tune a transformer (BERT) for complex NLP tasks on which traditional machine learning models are limited. A typical BERT model can contextually generate text embeddings for a multi-class problem as well as task-specific classification embeddings. Our approach predicts biases with an accuracy of 98.7%.
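As a minimal sketch of the kind of setup the abstract describes (not the authors' actual pipeline), fine-tuning BERT for multi-label bias classification with the Hugging Face transformers library might look like the following; the label set, example texts, and training loop are hypothetical placeholders, and the real study covers multilingual and multimodal data.

```python
# Sketch: fine-tuning BERT for multi-label bias classification.
# Labels and example texts are hypothetical placeholders, not the authors' data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["gender_bias", "age_bias", "religious_bias", "cultural_bias"]  # hypothetical

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # uses BCE loss, one logit per label
)

# Tiny in-memory example; each text may carry several bias labels at once.
texts = ["Example sentence one.", "Example sentence two."]
targets = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 1.0, 0.0, 0.0]])

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few illustrative steps
    optimizer.zero_grad()
    out = model(**enc, labels=targets)  # BCEWithLogitsLoss under the hood
    out.loss.backward()
    optimizer.step()

# At inference, sigmoid scores above a threshold indicate predicted bias types.
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(**enc).logits)
print({t: dict(zip(LABELS, map(float, p))) for t, p in zip(texts, probs)})
```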
Date
Jun 1, 2023 1:00 PM — 3:00 PM
Event