The use of AI for decision-making in areas such as immigration requires independent oversight and carefully developed standards of use, warns a new report from the University of Toronto.
‘Lab rats’
The University’s International Human Rights Program at the Faculty of Law has teamed up with the Citizen Lab at the Munk School of Global Affairs to present ‘Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System.’ The report is aimed at the Canadian government, which last year launched a $125-million ‘artificial intelligence strategy’ with the ambition to ‘position Canada as a world-leading destination for companies seeking to invest in AI and innovation.’ The authors state that the federal government has been using algorithms and other AI technology since 2014, turning immigrants and refugee claimants into ‘lab rats’ for the new technology. Certain duties formerly undertaken by immigration officers have been algorithmically automated, including the evaluation of immigrant and visitor applications. The government has intimated it will expand this technology to areas such as evaluating the legitimacy of a marriage, judging the completeness of an application, deciding whether a person merits refugee protection, and conducting security risk assessments of applicants.
Human rights concerns
The report recommends independent oversight of all automated decision-making by the federal government, publication of all uses of AI for any purpose, and the establishment of a task force composed of government, civil society and academia. The authors are concerned that the use of AI technology threatens human rights such as freedom of expression, religion, mobility and freedom from discrimination, and raises privacy and equality concerns. The report was developed from interviews with government analysts, government policies, records and public statements, and information requests. Additionally, the researchers await responses to 27 access-to-information requests filed last April. Further meetings are planned by the report’s authors with several federal departments.