Workshop 3: Prof. Alexandra Cristea, ‘What is Bias in AI?’
On Thursday 27th January 2022, Alexandra Cristea (Durham University) presented the keynote “What is Bias in AI?” at the third AEOLIAN workshop. The workshop looked at the topic of “What challenges do Machine Learning and AI raise in terms of privacy, ethics, research integrity, reproducibility, and bias?”.
Abstract: Artificial Intelligence is a thriving area in Computer Science. Especially trending is the sub-area of Machine Learning and Deep Learning, including Data Analytics. However, the latter often comes with various forms of bias. Bias in AI can be introduced in many forms, from data to methods and algorithms, and it negatively affects people as well as research quality. It also impacts upon an increasing number of areas, including sensitive ones such as healthcare, law, criminal justice, and hiring.
Thus, an important task for researchers is to use AI to identify and reduce (human or machine) biases, as well as to improve AI systems so that they do not introduce or perpetuate bias.
Aspects of Bias in AI range:
- from statistical/theoretical perspectives – where bias should be avoided through new algorithmic solutions and methodologically correct procedures (e.g., avoiding bias induced by overlapping training/test sets, historically inaccurate time series, or reporting only average accuracy in classification); through sensitivity analysis (including k-anonymity, l-diversity, t-closeness, k-safety, k-confusability, t-plausibility) for structured/unstructured data; or through ways of quantifying uncertainty in deep learning, e.g., via adversarial learning, generative models, invertible networks, or meta-learning nets;
- to human perspectives – where specific types of bias introduced by data or methodology can do harm, as with implicit racial, ethnic, gender, or ideological biases.
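As a concrete illustration of the sensitivity-analysis notions listed above, the following is a minimal sketch (not from the talk) of a k-anonymity check: a dataset is k-anonymous if every combination of quasi-identifier values is shared by at least k records. The records, field names, and helper function here are hypothetical.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every quasi-identifier combination occurs in >= k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Toy records: age bracket and postcode prefix act as quasi-identifiers.
records = [
    {"age": "30-40", "postcode": "DH1", "diagnosis": "A"},
    {"age": "30-40", "postcode": "DH1", "diagnosis": "B"},
    {"age": "40-50", "postcode": "DH2", "diagnosis": "A"},
]

print(is_k_anonymous(records, ["age", "postcode"], 2))  # → False: the (40-50, DH2) group has only 1 record
```

Related notions such as l-diversity and t-closeness tighten this further by also constraining the distribution of sensitive values (here, `diagnosis`) within each group.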
The former perspectives aim to produce correct or optimised results; the latter lead towards conversational explanations and explainable AI, in view of the GDPR, increasing ethical concerns, and the move from symbolic AI to sub-symbolic (deep) representations, which offer no direct answer to the classic AI questions of ‘Why’ and ‘How’. This includes the novel field of Machine Teaching, which expands on the classical field of knowledge extraction from (shallow or, more recently, deep) Neural Networks and should lead to novel insights into the accountability of AI. This talk will consider some of these aspects of Bias in AI and lead to thoughts, and possibly a wider discussion, on the social impact of AI.
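The methodological bias mentioned earlier – evaluating on data that overlaps the training set – can be sketched in a few lines (a hypothetical illustration, not from the talk): a model that merely memorises its training points scores perfectly on an overlapping test set, while a properly held-out split reveals its true performance.

```python
import random

def nearest_label(train, x):
    """1-nearest-neighbour 'memoriser': label of the closest training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

random.seed(0)
# Toy data: random features with *random* labels, so no real signal exists.
data = [(random.random(), random.randint(0, 1)) for _ in range(200)]

train = data[:150]
overlap_test = data[:50]      # WRONG: these points are also in the training set
disjoint_test = data[150:]    # correct held-out split

def accuracy(test_set):
    return sum(nearest_label(train, x) == y for x, y in test_set) / len(test_set)

print(accuracy(overlap_test))   # → 1.0: each point matches its own memorised copy
print(accuracy(disjoint_test))  # near chance level, exposing the lack of signal
```

The inflated score on the overlapping split is exactly the kind of methodologically induced bias the statistical perspective above seeks to eliminate.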