Draft European ethics guidelines for trustworthy artificial intelligence.

Update 8 April 2019: for the final guidelines following review, see here.

An ethics-related posting seems appropriate as the last before ‘the’ season.

The relevant European expert group seeks feedback on draft ethics guidelines for trustworthy artificial intelligence.

Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values that it should comply with.
From those principles, Chapter II derives guidance on the realisation of Trustworthy AI, tackling both ethical purpose and technical robustness. This is done by listing the requirements for Trustworthy AI and offering an overview of technical and non-technical methods that can be used for its implementation.
Chapter III subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for Trustworthy AI. This list is then adapted to specific use cases.

Of particular note at pp. 12-13 are the implications of the long-term use of AI, on which the expert group did not reach consensus. Given that autonomous AI systems in particular have raised popular concern, much of it directed at the longer term, it is clear that this section could prove particularly sticky as well as interesting.

For me, the draft is a neat warm-up for when the group’s co-ordinator, Nathalie Smuha, returns to Leuven in spring to focus on her PhD research with me on this very topic.

Geert.

 
