Bias – A Lurking Danger that Can Convert Algorithmic Systems into Discriminatory Entities

Gasser, Thea; Klein, Eduard; Seppänen, Lasse (18 October 2020). Bias – A Lurking Danger that Can Convert Algorithmic Systems into Discriminatory Entities. In: CENTRIC 2020 – The 13th International Conference on Advances in Human-oriented and Personalized Mechanisms, Technologies, and Services (pp. 1-7). IARIA.

Full text (peer-reviewed publication): Bias_Gasser-Klein-Seppänen_centric_2020_1_10_30004.pdf – Published Version (510 kB). Available under license: publisher holds copyright.

Bias in algorithmic systems is a major cause of unfair and discriminatory decisions when such systems are used. Cognitive bias is very likely to be reflected in algorithmic systems, as humankind aims to map Human Intelligence (HI) onto Artificial Intelligence (AI). An extensive literature review on the identification and mitigation of bias leads to precise measures for project teams building AI systems. Aspects such as AI responsibility, AI fairness, and AI safety are addressed by developing a framework that can be used as a guideline for project teams. It proposes measures in the form of checklists to identify and mitigate bias in algorithmic systems, covering all steps of system design, implementation, and application.
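The framework described in the abstract takes the form of checklists rather than code, but individual checklist items for bias identification lend themselves to automation. As a purely illustrative sketch (not taken from the paper), the following Python snippet computes a demographic parity gap, i.e., the difference in positive-prediction rates between demographic groups, which is one common way to flag possible bias in a system's outputs. The function names, the toy data, and the choice of metric are all assumptions made here for illustration.

    # Hypothetical illustration (not from the paper): one automatable
    # bias-identification measure a project team could attach to a checklist.
    # It computes the demographic parity gap, the difference in
    # positive-prediction rates between groups, on a model's outputs.

    def positive_rate(predictions, groups, group):
        """Share of positive predictions (1) among members of `group`."""
        picked = [p for p, g in zip(predictions, groups) if g == group]
        return sum(picked) / len(picked) if picked else 0.0

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rates across groups."""
        rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
        return max(rates.values()) - min(rates.values()), rates

    if __name__ == "__main__":
        # Toy data: one binary prediction and one protected attribute per case.
        preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
        groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
        gap, rates = demographic_parity_gap(preds, groups)
        print(f"per-group positive rates: {rates}")
        print(f"demographic parity gap:   {gap:.2f}")  # a large gap flags possible bias

A check like this covers only the application step; the paper's checklists also address design and implementation, where bias cannot be measured from outputs alone.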

Item Type:

Conference or Workshop Item (Paper)

Division/Institute:

Business School > Institute for Public Sector Transformation
Business School

Name:

Gasser, Thea;
Klein, Eduard (ORCID: 0000-0002-6860-5845) and
Seppänen, Lasse

Subjects:

Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QA Mathematics > QA76 Computer software

ISSN:

2308-3492

ISBN:

978-1-61208-829-7

Publisher:

IARIA

Language:

English

Submitter:

Eduard Klein

Date Deposited:

03 Nov 2020 11:07

Last Modified:

22 Jun 2022 10:56

Additional Information:

Permission to publish this file in the ARBOR repository has been obtained.

Uncontrolled Keywords:

bias; algorithm; artificial intelligence; ai-safety; algorithmic system

ARBOR DOI:

10.24451/arbor.13189

URI:

https://arbor.bfh.ch/id/eprint/13189
