Responsible AI

Keynote from Susan

When launching AI- and ML-powered apps, it’s important to think about how you will responsibly design, develop, and deploy your new technology in an ethical, unbiased manner.

Case Studies

In small groups, read about the following cases:
  • Winterlight Labs auditory detection of Alzheimer’s disease: In 2016, Winterlight Labs designed an AI-powered auditory test for Alzheimer’s disease, in which users’ speech would be recorded and AI would be used to detect signs of Alzheimer’s such as reduced vocabulary richness, pauses in speech, and changes in syntactic complexity. However, the initial research findings revealed a serious problem: non-native English speakers were being inaccurately flagged as having Alzheimer’s disease. Because the data used to train the model had been collected from native English speakers in Ontario, Canada, the technology could not work reliably across different populations.
  • Wireless baby monitors hacked: In 2018, several instances of wireless baby monitors being hacked made national news headlines. In one case, a hacker used his access to the baby monitor to broadcast threats and shout sexual expletives. In another, a more benevolent hacker used his access to warn parents about the vulnerability of their device, in hopes that they could address it before being targeted by malicious hackers.
  • Smart doorbell data sharing: The home security company Ring makes smart doorbell devices that use video surveillance. In 2019, it was discovered that the company had partnered with more than 600 police departments across the U.S., allowing law enforcement to request access to video footage collected from users’ devices. Once police departments gain possession of the footage, no guidelines restrict how long it can be stored or for what purposes it can be used.

Breakout Discussions

In small groups, consider the following questions:
  • Which one of these cases do you find most concerning? The least?
  • What do you consider to be the relevant ethical challenges?
  • What do you think the designers of this technology could have done differently?
  • Are the people who build AI responsible for its outcomes? To what extent?
  • What other ethical challenges are you concerned about? Why?

Large Group Discussion

As we return, let's discuss the following together:
  • What question(s) did your group reach consensus on?
  • What was the most controversial question(s) in your breakout discussion?
  • What are some other ethical challenges your group was concerned about? Why?