
Holding AI Accountable

From RSA: How do we know the answers AI delivers are correct?

Artificial intelligence (AI) can recognize patterns in data and predict probable outcomes with stunning accuracy, or so we think. By and large, we seem to trust AI inherently; rarely is it asked to show its work.

I attended a panel at RSA today on “The Calculated Risk of AI.” One panelist, Michael Troncoso, works for a healthcare institute and described AI’s impact on the field of radiology. Radiologists might be asked to view as many as 100 to 150 images a day, from a broken wrist to a lung mass, all in rapid succession. Radiology is also a frequent target of malpractice lawsuits: if a radiologist misses something or misdiagnoses what she sees, the door is wide open to litigation.

Zebra Medical Vision has created an algorithm that scans radiology images for a mere one dollar per scan. Troncoso mused, “Why wouldn’t we use AI?” AI could catch something that a human might miss, or it could validate a radiologist’s diagnosis. Both scenarios mean less time in court—as long as AI can defend itself.

The panelists all asked the same question: how does AI reach its conclusions? If AI is to be trusted with making a medical diagnosis, deciding where to build a road, or determining whether a person qualifies for a loan, then AI needs to be held accountable.

Another panelist, Stephen Wu of the Silicon Valley Law Group, talked about the ethical questions posed by autonomous cars. He argued for greater governance, posing dilemmas such as: what if an autonomous car runs into a woman pushing a stroller? Careens into a crowd? Or crashes into a wall in a way that will surely kill the driver? A human would likely sacrifice himself before killing another, but who among us is going to buy a car that we know will sacrifice us when danger strikes?

A final example from the panel concerned privacy, and the role AI plays in eroding it. Machine learning excels at finding commonalities across patterns, and piecing together seemingly innocuous data points until they identify a specific person is easily within its reach. Meanwhile, regulations enacted or coming into force around the globe require an individual’s consent before his or her personal data is processed or monitored. How can that consent be governed or enforced when AI is doing the processing? We need people working on this problem.
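
To make the re-identification risk concrete, here is a minimal, hypothetical sketch in Python. The datasets, column names, and values are all invented for illustration; the point is that even a simple join on shared quasi-identifiers, before any machine learning is applied, can re-attach names to “anonymized” records:

```python
# A minimal, hypothetical sketch of linkage re-identification: joining two
# "anonymized" datasets on shared quasi-identifiers (ZIP code, birth date,
# sex) can single out an individual. All data here is invented.
import pandas as pd

# Public dataset with names but no health data (e.g., a voter roll).
voters = pd.DataFrame({
    "name":  ["A. Smith", "B. Jones", "C. Lee"],
    "zip":   ["94103", "94103", "94107"],
    "birth": ["1980-04-02", "1975-09-17", "1980-04-02"],
    "sex":   ["F", "M", "F"],
})

# "De-identified" medical records: names removed, quasi-identifiers kept.
records = pd.DataFrame({
    "zip":       ["94103", "94107"],
    "birth":     ["1980-04-02", "1980-04-02"],
    "sex":       ["F", "F"],
    "diagnosis": ["asthma", "fracture"],
})

# A simple join on the quasi-identifiers re-attaches names to diagnoses.
linked = records.merge(voters, on=["zip", "birth", "sex"])
print(linked[["name", "diagnosis"]])
```

Machine learning only amplifies this: where a join needs exact matches, a model can link records on fuzzy or partial signals, and at scale.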

At the close of the session, Wu outlined two key legal rights that are coming for data subjects:

  • If you are the subject of automated data processing, you have the right to an explanation of how that processing reached a decision that affects you (a minimal sketch of what such an explanation might look like follows this list).
  • You have the right to have a human take a second look at a case that was decided automatically.
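
What might such an explanation look like in practice? Here is a minimal, hypothetical sketch in Python, assuming a deliberately simple linear credit-scoring model; the feature names, weights, and applicant data are all invented. Real systems are far more complex, and producing faithful explanations for them is exactly the open problem the panel described:

```python
# A minimal, hypothetical sketch of a "right to explanation": for a simple
# linear credit model, per-feature contributions to the score can be shown
# to the data subject. Weights and applicant data are invented.
import numpy as np

features = ["income_k", "debt_ratio", "years_employed"]
weights  = np.array([0.04, -3.0, 0.25])   # hypothetical learned weights
bias     = -1.5

applicant = np.array([55.0, 0.42, 3.0])   # hypothetical loan applicant

score = float(weights @ applicant + bias)
prob  = 1.0 / (1.0 + np.exp(-score))      # logistic link: score -> probability

print(f"Approval probability: {prob:.2f}")
print("Per-feature contributions to the score:")
for name, w, x in zip(features, weights, applicant):
    print(f"  {name:>15}: {w * x:+.2f}")
print(f"  {'bias':>15}: {bias:+.2f}")
```

For a linear model the contributions fall out directly from the weights; for deep models, generating an equally faithful breakdown is an active research area.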

The field of “AI accountability” is wide open for innovators to step in and develop systems that help AI explain itself. Without these explanations, we put our lives in the hands of systems rather than humans. Then where are we? The panelists concluded by encouraging everyone to take an interest in AI’s potentially vast impact. They stressed the need for an interdisciplinary approach to the governance of AI: legal, business, and security professionals, consumer rights advocates, all of us must participate if we’re to keep the humanity in the decision-making that affects humans.

Join Symantec at RSA Conference 2018, Booth #3901, North Expo Hall. Click here for the schedule and follow @Symantec on Twitter for highlights.


About the Author

Rebecca Donaldson

Symantec Cyber Security Staff Writer

Rebecca Donaldson is a writer and community manager for Symantec. For over ten years, she has had the privilege of publishing content that captures the guidance and information of Symantec experts, customers, and partners.
