Got ethics? 2020’s most ethical tech companies ranked: Who made the cut?
- chlorderodersflava
- Aug 19, 2023
- 5 min read
For the eighth consecutive year, Dr. Jessica Baron, in collaboration with the John J. Reilly Center for Science, Technology and Values at the University of Notre Dame, has released the annual list of emerging ethical dilemmas and policy issues in science and technology for 2020.
An increasing number of scientists are urging researchers to avoid working with firms or universities linked to unethical projects, to re-evaluate how they collect and distribute facial-recognition data sets and to rethink the ethics of their own studies. Some institutions are already taking steps in this direction. In the past year, several journals and an academic conference have announced extra ethics checks on studies.
Nature asked researchers what the scientific community should do about ethically questionable studies. The most popular answer was that during peer review, authors of facial-recognition papers should be asked explicitly about the ethics of their studies. The survey also asked whether research that uses facial-recognition software should require prior approval from ethics bodies, such as institutional review boards (IRBs), that oversee research with human subjects. Almost half felt it should, and another quarter said it depended on the research.
Equifax faced a major ethical scandal when hackers stole data from more than 148 million consumers. Subsequent investigation found that the systems Equifax was using were old, and that its out-of-date security systems could have been updated to prevent the breach. This is an example of business ethics in which small businesses and big companies alike have a duty to ensure they are following guidelines for a safe work environment and safe opportunities for consumers.
Following a massive international backlash against police racism and brutality sparked by the killing of George Floyd in Minneapolis in May 2020, private technology companies started coming under increased scrutiny for their relationships with law enforcement.
To take a holistic approach across all ethical dilemmas, and to better uncover and address inevitable problems, technology companies can focus on five key moves that build more clarity around their ethical responsibility and a stronger capacity to operationalize their ethical principles (figure 2).
Recent cases include AI hiring systems that favored white men, parole-recommendation systems that rated people of color as higher risk, credit card companies that gave women lower credit limits, and facial recognition technology that led to the arrests of innocent people.
Several avenues are being pursued to address these inequities. Some target technical algorithmic performance. First, algorithms can be trained on diverse and representative datasets, as standard training databases are predominantly White and male; inclusion within these datasets should require consent from each individual. Second, the data sources (photos) can be made more equitable. Default camera settings are often not optimized to capture darker skin tones, resulting in lower-quality database images of Black Americans. Establishing standards of image quality for running face recognition, and camera settings for photographing Black subjects, can reduce this effect. Third, to assess performance, regular and ethical auditing, especially across intersecting identities (e.g. young, darker-skinned women), by NIST or other independent sources can hold face recognition companies accountable for remaining methodological biases.
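The auditing idea above can be sketched in code. The following is a minimal illustrative example (not from the article, and not any auditor's actual methodology): given a face-recognition system's match decisions alongside ground truth and a demographic group label, it computes the false-match and false-non-match rates per group so that disparities between groups become visible. The group names and records are hypothetical.

```python
# Illustrative demographic audit sketch: compare error rates of a
# (hypothetical) face-recognition matcher across subgroups.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_match, true_match) tuples.

    Returns {group: (false_match_rate, false_non_match_rate)}, the two
    error types typically reported in face-recognition evaluations.
    """
    counts = defaultdict(lambda: {"fm": 0, "neg": 0, "fnm": 0, "pos": 0})
    for group, pred, truth in records:
        c = counts[group]
        if truth:
            c["pos"] += 1          # genuine pair
            if not pred:
                c["fnm"] += 1      # missed a true match
        else:
            c["neg"] += 1          # impostor pair
            if pred:
                c["fm"] += 1       # wrongly declared a match
    return {
        g: (c["fm"] / c["neg"] if c["neg"] else 0.0,
            c["fnm"] / c["pos"] if c["pos"] else 0.0)
        for g, c in counts.items()
    }

# Hypothetical audit data: (group, system said "match", truly a match)
audit = error_rates_by_group([
    ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, False), ("group_b", False, True),
    ("group_b", True, True), ("group_b", False, False),
])
# audit shows group_a suffers false matches, group_b false non-matches
```

A real audit would, as the article notes, also disaggregate by intersecting identities rather than single attributes, and would be run by an independent party on held-out data.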
Other approaches target the application setting. Legislation can monitor the use of face recognition technology, as even if face recognition algorithms are made perfectly accurate, their contributions to mass surveillance and selective deployment against racial minorities must be curtailed. Multiple advocacy groups have engaged with lawmakers, educating on racial literacy in face recognition and demanding accountability and transparency from producers. For example, the Safe Face Pledge calls on organizations to address bias in their technologies and evaluate their application. Such efforts have already achieved some progress. The 2019 Algorithmic Accountability Act empowered the Federal Trade Commission to regulate companies, enacting obligations to assess algorithmic training, accuracy, and data privacy. Furthermore, several Congressional hearings have specifically considered anti-Black discrimination in face recognition. The powerful protests following the murder of George Floyd also drove significant change. Congressional Democrats introduced a police reform bill containing stipulations to restrain the use of face recognition technologies. More astonishing was the tech response: IBM discontinued its system, Amazon announced a one-year freeze on police use of Rekognition, and Microsoft halted sales of its face recognition technology to the police until federal regulations are instituted. These advances have supported calls for more progressive legislation, such as the movements to reform or abolish policing. For now, the movement for equitable face recognition is intertwined with the movement for an equitable criminal justice system.
We work to ensure that technology is inclusive, trusted, and increases sustainability. In 2020, we announced our sustainability commitments and detailed plans for working towards a more sustainable future.
While many tech companies and AI labs clearly recognize the importance of governing the societal and ethical impact of AI, few have institutionalized steps that turn high-level commitments into concrete procedures. Their AI ethics research and principles, while representing good-faith intentions, mostly lack concrete implementation measures that address the specific issues they identify, from algorithmic fairness to data privacy.
Government agencies, tech companies, scholars and to some extent the public actively explore AI ethics and governance issues in China. Some of the resulting principles and research bear strong similarities to international efforts. This section examines whether and how these are put into practice by delving into specific ethics and governance issues confronted in relation to AI applications in healthcare, autonomous driving and public security. This shows that the government and companies have a long way to go in turning their normative discourse into practical governance mechanisms.
Chinese actors are also driving global AI standard-setting. In 2019, Beijing hosted the first meeting of an influential AI standardization committee, and Chinese companies are involved in ethics-related projects at leading international standards bodies.[62] In the academic sphere, Chinese scholars conduct research with international counterparts on issues like the use of facial recognition during the Covid-19 pandemic.[63] Researchers at tech companies also engage in international research projects on technical challenges.[64]
These examples all contribute to distrust or rejection of potentially beneficial new technological solutions. What ethical principles can we use to address the flaws in technologies that increase biases, profiling, and inequality? This question has led to significant growth in interest in data ethics over the last decade (Figures 1 and 2). And this is why many countries are now developing or adopting ethical principles, standards, or guidelines.
A court in Shenzhen found that He and two collaborators forged ethical review documents and misled doctors into unknowingly implanting gene-edited embryos into two women, according to Xinhua, China's state-run press agency. One mother gave birth to twin girls in November 2018; it has not been made clear when the third baby was born. The court ruled that the three defendants had deliberately violated national regulations on biomedical research and medical ethics, and rashly applied gene-editing technology to human reproductive medicine.
In 2020, the European Commission considered a ban on facial recognition technology in public spaces of up to five years, to allow time to update its legal framework with guidelines on privacy and ethical abuse.
Facial recognition technology is at the heart of many tech companies that focus on customer safety while protecting their systems from potential security threats. Let's examine three examples of companies using facial recognition ethically.