On the nature of private international law. Applying Islamic law in the European Court of Human Rights.

Update 13 July 2020: see, for an illustration of the issues, Matthias Lehmann here, on the classification by German judges of the mahr, akin to a dowry – with consideration (and eventual side-stepping) of the Rome I, Rome III, Maintenance and Matrimonial Property Regulations. The Court’s analysis feels like ten little monkeys bouncing on a bed: one by one the Rome I, Maintenance, Matrimonial Property and Rome III Regulations are considered yet cast aside. See also Jan Jakob Bornheim’s reference here to Almarzooqi v Salih [2020] NZHC 1049, where the New Zealand High Court assumed that the mahr was a contractual promise without much consideration of the characterisation issue. And Mukarrum Ahmed, who commented ‘in England, the leading case on the characterisation of mahr is Shahnaz v Rizwan. The wife’s claim was treated as a contractual obligation.’ [GAVC, that’s Shahnaz v Rizwan [1965] 1 QB 390].

Anyone planning a conflict of laws course in the next term might well consider the succinct Council of Europe report on the application of Islamic law in the context of the European Convention on Human Rights – particularly the case-law of the Court. It discusses i.a. kafala, recognition of marriage, minimum age to marry, and the attitude towards Shari’a as a legal and political system.

Needless to say, ordre public features, as does the foundation of conflict of laws: respect for each other’s cultures.

Geert.


Draft European ethics guidelines for trustworthy artificial intelligence.

Update 8 April 2019: for the final guidelines following review, see here.

An ethics-related posting seems appropriate as the last before ‘the’ season.

The relevant European expert group seeks feedback on draft ethics guidelines for trustworthy artificial intelligence.

Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values that it should comply with.
From those principles, Chapter II derives guidance on the realisation of Trustworthy AI, tackling both ethical purpose and technical robustness. This is done by listing the requirements for Trustworthy AI and offering an overview of technical and non-technical methods that can be used for its implementation.
Chapter III subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for Trustworthy AI. This list is then adapted to specific use cases.

Of particular note at pp.12-13 are the implications of the long-term use of AI, on which the expert group did not reach consensus. Given that autonomous AI systems in particular have raised popular concern, much of it predicted for the longer term, it is clear that this section could prove particularly sticky as well as interesting.

For me the draft is a neat warm-up for when the group’s co-ordinator, Nathalie Smuha, returns to Leuven in spring to focus on her PhD research with me on the very topic.

Geert.
