Video animation overview about AI ethics & fairness Catherine (30 Sep 2021 17:30 EDT)

Re: [AI4K12] Video animation overview about AI ethics & fairness Dave Touretzky 01 Oct 2021 01:23 PDT

>   Here's a video we made that is a quick introduction / crash course to
>   AI ethics and fairness. It answers the question: how could artificial
>   intelligence algorithms be unfair and unethical?
>   https://www.youtube.com/watch?v=CampJppwgWU

This is a delightfully clear video that explains how automated
decision-making systems are affected by biased training data.

My only criticism is that it repeats the old canard that systems for
rating loan applications discriminate on the basis of protected personal
characteristics such as gender, age, race, or religion.  It's time for
people to stop propagating this fiction.

No one in their right mind would ask about gender, race, or religion on
a loan application.  Doing so would be extremely foolish if not blatantly
illegal.  (Age is required to ensure that the applicant can enter into a
legally binding contract.)

The problem is that protected attributes can correlate with things that
*are* permissible to include on a loan application, such as education
level, zip code of primary residence, years of residence at the same
address, owning vs. renting, and so on.  As a result, applicants whom we
might wish to see treated the same for social policy reasons are not
viewed as "the same" by the decision-making system, and it can exploit
these differences to reproduce any biases reflected in its training
data.
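The mechanism is easy to demonstrate in a few lines of code.  Here is a
minimal sketch (entirely hypothetical data and numbers of my own
invention, not from the video or any real lender): the model never sees
the group attribute, only a permissible proxy ("zip code") that happens
to correlate with it, yet it reproduces the group disparity baked into
the biased historical approvals.

```python
import random

random.seed(0)

# Hypothetical simulation: the protected attribute ("group") is never
# shown to the model, but a permissible proxy ("zip code") correlates
# with it, and the historical approval labels were biased by zip code.
applicants = []
for _ in range(10000):
    group = random.choice(["A", "B"])
    # Proxy correlates with group: A mostly lives in zip 1, B in zip 2.
    zip_code = 1 if random.random() < (0.8 if group == "A" else 0.2) else 2
    # Biased history: approvals depended partly on zip code.
    approved = random.random() < (0.7 if zip_code == 1 else 0.4)
    applicants.append((group, zip_code, approved))

# A "model" fit to that history learns the obvious rule: approve zip 1.
def model(zip_code):
    return zip_code == 1

def approval_rate(group):
    preds = [model(z) for g, z, _ in applicants if g == group]
    return sum(preds) / len(preds)

print(f"approval rate, group A: {approval_rate('A'):.2f}")  # near 0.8
print(f"approval rate, group B: {approval_rate('B'):.2f}")  # near 0.2
```

Deleting the group column changes nothing here, because the model never
used it; the disparity rides in entirely on the proxy.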

This is a more subtle explanation than saying that the computer learns
to discriminate against what the video calls "sensitive attributes", so
it's harder to convey in a short video.  But it helps explain why bias
can be difficult to eliminate from these systems.  Bias cannot be
removed simply by not telling the system the race or religion of the
applicants; that data was never provided to it in the first place.

A harder problem to wrestle with is what attributes are legitimate to
consider in a loan application.  For example, one might believe that
graduating from high school was evidence of character traits that make
one creditworthy.  And it might be (I don't know for sure) that high
school graduates have better loan repayment histories than dropouts.
But it's also the case that blacks are less likely than whites to
graduate from high school: you can get those statistics from the US
Department of Education at
https://nces.ed.gov/programs/coe/indicator/coi.  So using education
level as a factor in rating loan applications will have disparate impact
on black vs. white populations.  But disparate impact does not
necessarily demonstrate unfairness.
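For readers who want a concrete handle on "disparate impact," one common
measure is the four-fifths rule used by US regulators: a selection rate
below 80% of the highest group's rate is flagged.  A quick calculation
with made-up approval rates (my own illustrative numbers, not from the
video or any real lender):

```python
# Hypothetical numbers, for illustration only: suppose a lender that
# weighs education level ends up approving 60% of white applicants and
# 40% of Black applicants.
rate_white = 0.60  # assumed
rate_black = 0.40  # assumed

# Four-fifths rule: flag if the lower rate falls below 80% of the higher.
ratio = rate_black / rate_white
print(f"disparate impact ratio: {ratio:.2f}")  # 0.67
print("flagged" if ratio < 0.8 else "not flagged")  # flagged
```

Note that this test measures disparity in outcomes only; as argued
above, a flagged ratio does not by itself settle whether the underlying
attribute was illegitimate to use.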

-- Dave Touretzky