Following the recent TechTalk, Marty Wolf, Don Gotterbarn and Keith Miller were kind enough to answer some additional questions that we were not able to get to during the presentation. Part 1 of their responses is presented below:
What is snake oil…? How does it relate to computing, and why is it relevant in 2020?
DG: Snake oil was supposedly a cure for everything; in many cases it was primarily alcohol and did not actually cure the cancer you were taking it for. Sometimes technology is peddled in the same way.
Is there a repository for the 150 codes of ethics related to AI mentioned in the presentation?
DG: No repository that I know of. I saw the lists in a series of footnotes in a United Nations report.
I’m going to quibble about the bolt. Many engineering projects, such as airplanes and bridges, depend on fasteners meeting critical strength and installation criteria. Some air crashes have occurred because the wrong bolt was used, either in construction or in maintenance. That said, it’s a lot easier to miss or misplace something in software.
KM: Your comments about bolts are interesting. Looking at both the similarities and differences between bolts and software is a useful exercise. I will always use the analogy more carefully after your thoughtful comment.
Do you have concrete suggestions for undoing the manufacturing mindset in the setting of a university computing course?
KM: I find case histories useful. When examining students’ responses to a case, if I recognize the manufacturing mindset, I bring it up in discussions, either with the student, his/her small group, or the whole class. I try not to embarrass the student when making the point.
DG: Reflective engineering: after a project is done, conduct a review of the different ways the problem could have been approached, asking which is better, and be sure to have students openly discuss the impacts of their choices on different stakeholders. In a programming class, use practical assignments: for example, have a matrix serve as the storage place for blood containers of different types and have students process blood in and out of the containers. When the assignment is done, ask them what was wrong with the assignment you gave them, i.e., what did you fail to consider? Because they are criticizing you, they will feel free to join in.
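To make the kind of assignment DG describes concrete, here is a minimal sketch; the class names and details are illustrative assumptions, not code from the talk. The deliberately narrow specification is what makes the debrief work: nothing below deals with blood-type compatibility, expiration dates, traceability, or what should happen when no matching unit is found, and those omissions are exactly what students can be asked to identify.

```python
# Illustrative sketch (not from the talk): a matrix (grid) serves as the
# storage place for blood containers, and students write code to check
# containers in and out of storage.

class BloodContainer:
    def __init__(self, container_id, blood_type):
        self.container_id = container_id
        self.blood_type = blood_type  # e.g., "A+", "O-"

class BloodStorage:
    """A rows x cols grid of slots, each holding one container or None."""

    def __init__(self, rows, cols):
        self.grid = [[None for _ in range(cols)] for _ in range(rows)]

    def store(self, container):
        """Place a container in the first free slot; return its (row, col)."""
        for r, row in enumerate(self.grid):
            for c, slot in enumerate(row):
                if slot is None:
                    self.grid[r][c] = container
                    return (r, c)
        raise RuntimeError("storage full")

    def retrieve(self, blood_type):
        """Remove and return any container of the requested type, or None."""
        for r, row in enumerate(self.grid):
            for c, slot in enumerate(row):
                if slot is not None and slot.blood_type == blood_type:
                    self.grid[r][c] = None
                    return slot
        return None

if __name__ == "__main__":
    storage = BloodStorage(rows=3, cols=4)
    storage.store(BloodContainer("C-001", "O-"))
    storage.store(BloodContainer("C-002", "A+"))
    unit = storage.retrieve("O-")
    print(unit.container_id if unit else "no matching unit")
```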
How can a developer act ethically if the code they are writing is for a weapon or for a social network that spies on its users?
KM: This issue was discussed at great length when we revised the ACM Code of Ethics in 2018. As an organization, the ACM was not likely to declare all defense and law enforcement work as immoral. This issue is the main reason there was a rather involved use of language to distinguish between “intentional harm” and “unintentional harm.” Speaking personally (not for the ACM or the committee), I think one can make an individual judgment that working on a weapon or a surveillance system can be morally, ethically correct. I have never been faced with that decision in my career, but I would find it difficult.
Move fast and break things? Get it out then patch it? Of course, patching is maintenance and is much less beneficial for reputation and more poorly paid than original development (or redevelopment).
KM: Yes, the strategies you name in your comment are, I think, ethically problematic.
Would it be alright to use some of your definitions for Ethics and Ethicking in the computing ethics class I teach?
KM: Yes, we hope you do! Don Gotterbarn will give us the citation, I’m sure.
DG: All references are on the last slide; the slides are (or soon will be) available at ethics.acm.org. For now: Winograd, Terry (1995), “Computers, Ethics and Social Responsibility,” in Computers, Ethics and Social Values, eds. D. Johnson and Helen Nissenbaum.
Defining ethics via effects (p. 20) is the 2nd “design pattern” of the Belmont Principles (1978, where they call it “beneficence”). The other two are: 1. Respect for personhood (including autonomy and consent) and 3. Justice (including fairness). Like all good design patterns, they are by construction often in tension to be negotiated, not to be a checklist to be enumerated. I worry though that if we define ethics only in terms of “effects” or “public good,” how do we integrate 3. fairness and 1. consent of stakeholders?
KM: It’s interesting that you draw the parallels between medical ethics and computer ethics. My wife is a bioethicist, and discussions about those parallels got me into the field of computer ethics. There are also insights to be gained by looking at environmental ethics, feminist ethics, and the ethics of care (especially in nursing). The similarities and differences can both be instructive.
MW: Ethics certainly encompasses all that you suggest. It takes a while to get good at applied ethics. Our suggestion is merely a starting point. Computing professionals who engage in professional development will need to address issues of fairness, consent, justice, and other concerns in the context of the projects that they work on. Integration of these issues comes over time with continued professional development.
Has the CARE process been tested as encouraging better ethical thinking and behavior?
KM: I don’t know of any empirical research on this, but it might be a fruitful area to explore.
MW: It has not. It is a recent development. Given other work on ethics, however, it is clear that a single, brief exposure to it is unlikely to make a difference.
DG: Proactive CARE has not been tested, but it is a reformulation of standard ethical analysis: consider the context, identify the stakeholders and your options, and think about how the options impact each of the stakeholders. This conscious process helps reduce framing out, and psychologists would categorize it as a form of pre-commitment.
How would a professional resolve the ethical implications of dual-use artifacts they are producing? (Example: tracking apps may be used to contain the spread of a pandemic as well as to suppress freedom of speech.) How could an ethical approach to working on such artifacts go beyond not working on them at all, which might be equivalent to not working in the field at all?
KM: People in medical ethics have worked on dual use for a while. For example: Miller, Seumas, and Michael J. Selgelid. “Ethical and philosophical consideration of the dual-use dilemma in the biological sciences.” Science and Engineering Ethics 13, no. 4 (2007): 523-580. More recent work has focused on dual use in computer ethics: Rashid, Awais, John Weckert, and Richard Lucas. “Software engineering ethics in a digital world.” Computer 42, no. 6 (2009): 34-41. AND Miller, Seumas. “Concept of Dual Use.” In Dual Use Science and Technology, Ethics and Weapons of Mass Destruction, pp. 5-20. Springer, Cham, 2018.
A practical approach that sometimes works is to build the software in a way that discourages the unethical dual uses you can anticipate. The Google/Apple API we spoke about was designed not to work well with a central server collecting the information. An article by Floridi on this points out that one country changed its model away from a central server in order to use this API. This approach works sometimes.
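For readers unfamiliar with the design choice being described, here is a deliberately simplified, hypothetical sketch; it is not the actual Google/Apple exposure notification protocol. It illustrates why a decentralized design resists the central-collection dual use: matching happens on each user's device against identifiers voluntarily published by people who test positive, so no server ever needs to learn who was near whom.

```python
# Simplified illustration (not the real Google/Apple protocol): phones exchange
# rotating random identifiers over Bluetooth and remember what they heard;
# users who test positive publish only their own identifiers. Matching runs
# locally, so no central server receives a contact list or social graph.

import secrets

class Phone:
    def __init__(self):
        self.my_ids = []        # identifiers this phone has broadcast
        self.heard_ids = set()  # identifiers heard from nearby phones

    def broadcast_id(self):
        rolling_id = secrets.token_hex(8)  # fresh random identifier
        self.my_ids.append(rolling_id)
        return rolling_id

    def hear(self, rolling_id):
        self.heard_ids.add(rolling_id)

    def ids_to_publish_if_positive(self):
        # Only the user's own identifiers are uploaded, never the contact list.
        return list(self.my_ids)

    def check_exposure(self, published_ids):
        # Runs on the device; the result never has to leave it.
        return any(pid in self.heard_ids for pid in published_ids)

# Alice and Bob are near each other; Alice later tests positive.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_id())
published = alice.ids_to_publish_if_positive()  # the only data that is shared
print(bob.check_exposure(published))            # True, computed on Bob's phone
```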
I am involved in the development of safety-critical software (IEC 61508, EN 50128, ISO 26262, etc.). Those standards define “acceptably safe” as being societal – what is “acceptably safe” in one society may not be in another. Is this also true for the wider “ethics” question – is it societal?
KM: Yes, I think it is. But many different “societies” can be involved. The general public is one such group, but so is ACM as a professional organization. Perhaps this (somewhat old) paper might be of interest to you: Collins, W. Robert, Keith W. Miller, Bethany J. Spielman, and Phillip Wherry. “How good is good enough? An ethical analysis of software construction and use.” Communications of the ACM 37, no. 1 (1994): 81-91.
Be careful with the “it is societal.” When working on the Software Engineering Code, and now with the updated ACM Code, we found that professional standards and the commitment to avoid harm were tied to a higher standard of responsibility and care. A specification for software in one society may say it is all right if the in-flight software fails 10% of the time…losing 10% of the people is their definition of “acceptable safety,” and that may conform to some legal standard; however, when talking about professional ethics…
What are some of the ways for implementing ethical aspects into AI systems? Are there any sample implementations available?
KM/DG: Ron Arkin of Georgia Tech has done work on this for military systems. Wallach and Allen’s book Moral Machines might be of interest to you.
Is there a way of getting the context from a given piece of software?
KM: If you are suggesting an automated way to infer the context from the code itself, I don’t think so. The plans of the programmer may be quite different from the plans of the buyers and users of a given system, and the code does not (in my opinion) have to reflect any of the future uses, or the values of the developers, buyers, and users.
Proactive Care / Review Code: Assuming that in enterprise settings, code artifacts are typically not under control of the creator after initial roll-out, isn’t reviewing code a pointless exercise when it comes to ethics?
KM/DG: I understand the difficulty you are identifying, and I agree with you that it is problematic. However, at the moment of creation, the programmer DOES have control, and in that moment her professional ethics ARE in play. Everything she can do at that moment to protect the public good, and to weave in positive value to the software counts, even after the system is no longer in her direct control.
Sometimes the software can be designed so that anticipated misuse or unintentional ethical failures become unlikely. Yes, it is worth the effort; one positive change can affect millions of people.
Do you have strategies for educating bosses on why it is a good idea to include CARE in development?
DG: There are folk tales that ethics takes time and is expensive. There is research showing that a commitment to the well-being of your customers and stakeholders improves retention and reduces product recalls, which is a financial benefit. Asking staff to contribute to building a technically and ethically better product gets their user experience into the product and gives them ownership of it.
Can we say the computing field has a lot of ethical issues because it lacks a common platform with legal backing, as compared with fields such as law and health?
KM: I think you are correct that computing has special problems as you describe. But remember that law, medicine, and accounting have their own ethical challenges. Even WITH their relatively common platforms, they still have lots of ethics work to do.
If ethical software engineering implies stakeholder engagement to ensure that blind spots are eliminated, how do I know whether I have done enough? How much engagement is sufficient, and how do I know I have consulted the right people?
KM/DG: We can never be sure, but it is a good sign if a software development team is worrying about this question (rather than ignoring it). A long time ago, a group of us wrote about the issue you raise: Collins, W. Robert, Keith W. Miller, Bethany J. Spielman, and Phillip Wherry. “How good is good enough? An ethical analysis of software construction and use.” Communications of the ACM 37, no. 1 (1994): 81-91.
“You can’t be sure you have found them all” is sometimes used as an excuse not to do the extra work of checking for blind spots. This is a very weak argument with dangerous consequences. It is like saying, “Looking both ways when walking into the street does not guarantee that I will not get hit, so I am not going to look at all.”
What are your thoughts on ethical considerations for emerging platforms such as VR and Augmented Reality?
KM: Here is a paper on the very topic you mention. Please excuse the shameless self-promotion: Wolf, Marty J., Frances S. Grodzinsky, and Keith W. Miller. “There’s something in your eye: Ethical implications of augmented visual field devices.” Journal of Information, Communication and Ethics in Society (2016).
Most of the examples given could be seen as examples of (poor) quality management. What is the relationship between ethics and quality control? Put differently: If I do proper quality assurance, have I done my ethical duties?
DG: When consulting, I was asked this question frequently, or when I gave an example it was challenged as “just a failure of quality control.” If you build into your quality considerations more than just meeting budget, schedule, and function, then the examples we gave in the talk are indeed failures of quality control, even though they met the function, budget, and schedule requirements.
KM: Great question. I was intrigued and found some scholarly literature about the relationship: Fisscher, Olaf, and André Nijhof. “Implications of business ethics for quality management.” The TQM Magazine (2005). AND Tarí, Juan José. “Research into quality management and social responsibility.” Journal of Business Ethics 102, no. 4 (2011): 623-638. In addition, I think the whole “VALUE SENSITIVE DESIGN” literature is somewhat relevant, but earlier in the developmental process. Quality control often is emphasized towards the end of a project, whereas design dominates early.
What recommendations do you have for teaching computer science students to follow the CARE framework in group projects for an ethics course—particularly in an online format?
MW: Quite simply, make it a mandatory part of the project. Have students address the questions individually and in their teams. Make sure that the output of the CARE process is evaluated and made a substantial part of the overall score on the project.
DG: Ethics is a shared enterprise in which students can reason about issues together. One useful technique is to have them share their results with other groups, even asynchronously: each group posts what it found and asks the other groups to add to the analysis.
The issues of beliefs, morals, and values seem to be missing from the discussion. The example of contact tracing is telling: there seems to be an implicit value that “a government must not be able to track people.” I am not saying that this value is immoral, but it should and must be stated explicitly. Otherwise, the discussion of ethics is based on expected shared beliefs, morals, and values, and is thus flawed. What do the speakers think about this?
DG: Yes, the focus was on providing a methodology for doing Ethicking, suggesting some ways to incorporate the professional values embodied in the Code of Ethics. The philosophical underpinnings of the Code were not addressed. I think that the Proactive CARE model assumes a significant ethical commitment and a sense of professional obligation.
KM: The talk did not emphasize it, but there is a long and growing strain of literature on values and virtues in computer ethics. For example, Grodzinsky, Frances. “The practitioner from within: revisiting the virtues.” ACM SIGCAS Computers and Society 29, no. 1 (1999): 9-15. ALSO: Huff, Chuck, Laura Barnard, and William Frey. “Good computing: A pedagogically focused model of virtue in the practice of computing (part 1).” Journal of Information, Communication and Ethics in Society (2008). AND Moor, James H. “If Aristotle were a computing professional.” ACM SIGCAS Computers and Society 28, no. 3 (1998): 13-16. Also see Shannon Vallor’s book, *Technology and the Virtues* (2016). She addresses the very issue of universality.
What are some sustainable ways to guarantee proper and mandatory teachings of ethics in university education? Especially within earlier years in a bachelor’s degree? Thank you!
MW: This is a very challenging problem. Computer science faculty rarely see themselves as qualified to teach ethics. Students rarely see faculty from outside of computer science as having the same sort of authority as the computer science faculty do. CS faculty can develop that expertise, but then coverage of ethics is limited to a narrow set of courses. CS faculty and non-CS faculty can work together to develop materials. Make clear the ethical learning outcomes for your CS program and for each course in your CS curriculum.
Reflecting on comments made for slides #15 and #36 on designing carefully and reviewing thoroughly, how do the new paradigms of Agile (MVP) and DevOps (release fast, release often) affect ethicking? It would appear that these new paradigms would result in software developed with less ethical consideration.
KM: It certainly is a danger, but I don’t think the de-emphasis you describe is inevitable. For example, you could require that each release includes explicit consideration of ethically significant changes since the last release. Frequent interaction with stakeholders could INCREASE ethical sensitivity among developers, if handled in a certain way.
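One concrete way to act on KM's suggestion is sketched below under stated assumptions: a small pre-release check that refuses to cut a release unless an ethics-review file has been updated since the last release tag. The file name ETHICS_REVIEW.md, the tag convention, and the workflow are hypothetical illustrations, not a practice described in the talk.

```python
# Hypothetical pre-release gate: fail unless ETHICS_REVIEW.md (an assumed file
# where the team records ethically significant changes) was modified since the
# most recent release tag. Intended to run in CI before tagging a new release.

import subprocess
import sys

REVIEW_FILE = "ETHICS_REVIEW.md"

def last_release_tag():
    """Return the most recent tag reachable from HEAD."""
    out = subprocess.run(
        ["git", "describe", "--tags", "--abbrev=0"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def review_updated_since(tag):
    """True if the review file changed between the last tag and HEAD."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{tag}..HEAD"],
        capture_output=True, text=True, check=True,
    )
    return REVIEW_FILE in out.stdout.splitlines()

if __name__ == "__main__":
    tag = last_release_tag()
    if review_updated_since(tag):
        print(f"{REVIEW_FILE} updated since {tag}: ok to release")
    else:
        sys.exit(f"{REVIEW_FILE} not updated since {tag}: record the ethics review first")
```

The point of such a gate is not the tooling itself but the pre-commitment it creates: every fast release cycle still passes through an explicit, recorded moment of ethical consideration.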
I am looking for more on “non-symmetric” sharing as constituting cheating or non-cheating. For example, submitting a term paper to the writing center versus proofreading a classmate’s paper. What should the rule be? Perhaps one may ask for help, but one may only receive help from authorized roles. Could publishers be considered authorized? For example, refusing to publish answers to textbook exercises? There is a right to publish anything. Otherwise, consulting a library resource ought to be considered an honorable strategy.
MW: Your question points to some larger questions that are worth asking and many have begun this discussion already. What is it that we want our students to learn? Are those learning goals consistent with how professionals actually do their work? How do we achieve that given the socio-technical nature of the world today? Starting with these questions, you may find that there is no “rule,” but merely guidelines that require faculty to give careful guidance and make difficult decisions from time to time.
The “internal whistleblower” function sounds positively utopian even to me here in Sweden, and even less realistic based on my experiences with US companies. Does this actually happen somewhere? If so, could you say some more about that?
DG: It exists in many companies. One avionics company I did work for installed a button on the phone to call the ethics officer, and experience there encouraged contacting the ethics officer. The line has an 800 number, they give you a “case number,” and they don’t ask your name. Equifax’s code of ethics, for example, describes its Integrity Line as available 24 hours a day, seven days a week: trained specialists from an independent third-party provider of corporate compliance services answer your call, document your concerns, and forward a written report to the Equifax Corporate Ethics Officer for further investigation (https://www.equifax.com/assets/corp/code_of_ethics.pdf).