The International Technology Law Association (ITechLaw) “Responsible AI: A Global Policy Framework” report offers for comment an in-depth review that proposes ethical guideposts to encourage the responsible development, deployment and use of artificial intelligence. ITechLaw will undertake similar engagement at its events in Boston, Bangalore and Singapore through to April 2020.
Eight core principles
The publication, written by a multi-disciplinary group of 54 technology law experts, industry representatives and researchers from 16 countries across the globe, develops a detailed and actionable framework composed of eight core principles: ethical purpose and societal benefit, accountability, transparency and explainability, fairness and non-discrimination, safety and reliability, open data and fair competition, privacy, and AI and intellectual property. Each of the eight principles is treated in depth: first through the enunciation of a general principle, then through a detailed analysis of the principle’s contours, and finally as expressed in an actionable policy framework. The authors say the book is a “call to action” for stakeholders to actively participate in a dialogue regarding the component features of responsible AI. “The accelerating rate of progress in AI research, development and deployment is both exhilarating and alarming,” says ITechLaw president Charles Morgan. “AI has enormous potential for positive societal impact, but also for unintended and grave consequences. This places a great weight of ethical responsibility on all those who are engaged in the development and deployment of such AI systems. It is not surprising, therefore, that not only policy-makers, but also industry representatives and AI researchers are looking for solid guideposts. In this context, the authors hope that the publication will make a valuable contribution to the ongoing efforts to promote responsible AI.”
An “Embryonic stage”
Among the areas needing resolution are human-centric “accountability,” the “legal personality” of AI, transparency and explainability, and the notion of “elegant failure” and revisiting the “tragic choices” dilemma. Also in need of greater understanding are fair privacy, consent, and competitiveness, and responsible AI by design. While AI is still at an “embryonic stage,” Mr Morgan argues that AI systems “will significantly affect society in ways that most people are only starting to grasp.” He concluded: “Now is the time to set voluntary boundaries of responsible behaviour to ensure that AI research and development can mature and thrive to the benefit of all.” ITechLaw has invited all stakeholders – industry representatives, policy-makers, researchers, and the general public – to comment on the draft policy framework and to submit feedback. Comments will be considered by the authors and ITechLaw in anticipation of an updated second edition of the policy framework, to be published before the end of the year. The discussion principles and the framework can be downloaded for free here.