Niamh seminar 19/8/2020 - Shared screen with speaker view
Niamh, I would love to have your slides if you are willing to share. Savitri (firstname.lastname@example.org)
Thanks Savitri, I’ve noted down your details for Niamh.
I would also appreciate getting the PowerPoints, thank you!
I agree that automation has some benefits, e.g. taking the subjectivity out of RSD, or at least limiting it.
Q for Niamh: when you talk about algorithmic decision-making, are you talking about the use of algorithms, or machine learning/neural nets (sorry, late to the meeting), or both?
Question: can we as lawyers design processes so as to overcome some of the problems you have laid out?
Pintarich is not a good decision.
I liked Kerr's dissent in that case.
Question: can we use automation to create COI assessments to partly automate RSD, i.e. to decide on risk profiles?
Great work Niamh. I note the ALRC nominated 'Automated decision-making and administrative law' as one of its topics for reform in Australia in 2020-25 in its 2019 report, but they seem not to deal with refugee law specifically. I get the impression that the law is lagging way behind technology.
I struggle with the tech too!
An algorithm is any iterative rule set: the algorithm can be developed by a human, or it can be derived by the machine generating it.
The designer will always build in assumptions and presumptions that are fundamentally problematic
If you haven't stumbled across the book 'Weapons of Math Destruction', you will find examples from other sectors there.
Good point @John Littrich. I put this in a submission to the HRC Inquiry on technology and human rights, but I think a lot of people think that it's too far off.
More about Pintarich and framing here: https://pursuit.unimelb.edu.au/articles/what-is-the-law-when-ai-makes-the-decisions
Can I plug my recent article on automation too? Sorry! 😅 http://www.unswlawjournal.unsw.edu.au/article/revitalising-public-law-in-a-technological-era-rights-transparency-and-administrative-justice/
Yes please! : )
@Markus - I think that the emphasis on Ethics by Design, Trust by Design, and Privacy by Design in Engineering/STEM opens up space for the engagement of lawyers at the front end, so long as a culture of multidisciplinarity is present.
@Yvonne - not enough. Palantir is already in with the WFP sucking up data, which is then used by a number of governments (including biometric data) to reject refugee claims.
For those not familiar with Palantir, they've facilitated the ICE intervention in the US
@Yvonne - that strikes me as correct. The problem is that lawyers are often seen as naysayers. The challenge is being seen as constructive rather than destructive by the time the systems are close to being put in place.
Pompeu Casanovas Romeu
There is an active community on Artificial Intelligence and Law: https://waset.org/artificial-intelligence-and-law-conference-in-june-2021-in-toronto
Not the right question. The first question to always ask is: what is the benefit of this system? We CAN use this system, but SHOULD we use this system? Weapons of Math Destruction by Cathy O'Neil is a great place to start with some of these questions.
@Niamh: following up on what Maria is saying: is there a way to divide things up between quantitative and qualitative data? I've had some fruitful debates about this in the past
Also the newly formed Australian Society for Computer Science and the Law.
Pompeu Casanovas Romeu
90% are computer scientists, with some lawyers and legal scholars. The community is growing now.
Yes, the issue of life and death raises the issue of Type I and Type II errors. The "cost" of classifying someone who is a genuine refugee as a non-refugee is considerably higher than the converse. Do any data sets exist where these AIs have been tested empirically?
Hello all. Thanks for these excellent resources and the presentation. @Maria, an interesting point about risk profiles; it recalls prima facie RSD, which is under-explored for many groups. Interesting to think about automation alongside prima facie determinations.
Computer science communities are also disparate - Natural Language Processing is ahead of Computer Vision and HCI, just to give an example. Comp Sci is quite divided into different camps of tools.
Also refer to the Uber case - they're relying on the tort of conspiracy; the law isn't fit for purpose in certain cases. Conspiracy is how Uber 'jumps over' the existing requirements for taxi drivers.
Also check out the work of the Centre for AI and Digital Ethics - an interdisciplinary centre at Uni Melb https://law.unimelb.edu.au/centres/caide
Genevieve Bell looking at this from more of an industry perspective at 3Ai
And Julia Powles (also a lawyer) just launched her Centre today at UWA
I have to go. Great presentation Niamh. Thanks Dora for organising this seminar
Also the Centre of Excellence on automation led by Julian Thomas at RMIT, across multiple universities, is looking at lots of these topics - so lots of opportunities to jump into this space if anyone is interested.
Great point from @Andrea Vogl about using the concept of prima facie RSD.
Thanks Dora, thanks Niamh...sorry for leaving but I have to go to another meeting.
Re @Marett - there are times when AI should not be used: http://www.futureleaders.com.au/book_chapters/Artificial-Intelligence/Kobi-Leins.php
Thank you for the presentation, very interesting. Can you please send me the slides? email@example.com
Thank you for a wonderful presentation
Thanks for the presentation - happy to chat more offline. K.
Fantastic, thank you!
Super interesting! Thank you so much!