JUL 17, 2018 3:13 PM PDT

Should Governments Regulate Facial Recognition Technology?

WRITTEN BY: Julia Travers

Facial recognition technology seems to become more prevalent and controversial daily. From police use to unlocking phones, it can now be found in many areas of life. In China, for example, it is widely used by the government in public spaces, and in Australia, a new trial of facial recognition technology in airports has just begun. In July 2018, Microsoft President Brad Smith wrote a blog post calling on the U.S. government to take a more substantial role in evaluating and regulating these advancements. Smith calls for a measured and thoughtful approach to developing facial recognition technology and applications, and for government and industry leaders to work together toward this goal. He writes:

If we move too fast with facial recognition, we may find that people’s fundamental rights are being broken … In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act.

Facial recognition illustration, credit: techgruit.com

Smith points out that a bipartisan government commission could be created to begin addressing pressing questions raised by these technological advances, such as when, where and how people should be informed about the use of facial scans and records. Privacy rights and data concerns have been a hot topic in recent years (and before), due to what some see as a lack of transparency from tech companies, along with incidents of major data theft and misuse.

Other topics Smith suggests such a governmental group could cover include possible restrictions on national-security and law-enforcement use of these methods, as well as the need for legal protections for people who are misidentified.

Racial profiling is also a concern, and one that exists throughout AI and broader tech fields, where white males continue to fill most of the roles at the helm and under the hood. Some Microsoft services have already been shown to identify white males more accurately than women or people of color, Scott Thurm of Wired reports. Smith lists reducing “the risk of bias in facial recognition technology” as the first responsibility of the tech sector in his blog.

“That’s why our researchers and developers are working to accelerate progress in this area, and why this is one of the priorities for Microsoft’s Aether Committee, which provides advice on several AI ethics issues inside the company,” he shares.

Eileen Donahoe, an adjunct professor at Stanford’s Center for Democracy, Development, and the Rule of Law, applauds Microsoft’s work to examine and fix the ways their technology can fail or harm people based on gender or cultural and racial identities, along with other factors. 

“Microsoft is way ahead of the curve in thinking seriously about the ethical implications of the technology they’re developing and the human rights implications of the technology they’re developing,” she tells Wired.

As previously reported by LabRoots, the city of Orlando, Florida, stopped a trial of Amazon’s facial recognition technology, Rekognition, in June 2018. The decision followed news stories and publicized letters from both civil rights groups and Amazon shareholders asking the company to stop providing the software to police.


Sources:

Wired

Microsoft blog

About the Author
Julia Travers is a writer, artist and teacher. She frequently covers science, tech, conservation and the arts. She enjoys solutions journalism. Find more of her work at jtravers.journoportfolio.com.