Leveraging the ACM Code of Ethics Against Ethical Snake Oil and Dodgy Development

Title: Leveraging the ACM Code of Ethics Against Ethical Snake Oil and Dodgy Development
Date: Monday, June 8, 12:00 PM ET/9:00 AM PT
Duration: 1 hr

SPEAKERS:
Don Gotterbarn, Professor Emeritus, East Tennessee State University; Co-Chair, ACM Committee on Professional Ethics (COPE)
Marty Wolf, Professor, Bemidji State University; Co-Chair, ACM Committee on Professional Ethics (COPE)

Resources:
Tech Talk Registration
ACM Code of Ethics
Big Data and Ethics (Science Direct Book, Free for ACM Members)
Cyberethics (O’Reilly Book, Free for ACM Members)
Ethics and Data Science (O’Reilly Book, Free for ACM Members)
Ethics and the Algorithm (O’Reilly Book, Free for ACM Members)
Ethics and Project Management (Skillsoft Course, Free for ACM Members)
Ethics for Science Policy (Science Direct Book, Free for ACM Members)
The Place for Ethics in Organizations (O’Reilly Book, Free for ACM Members)
Rethinking Machine Ethics in the Age of Ubiquitous Technology (Skillsoft Book, Free for ACM Members)

I recently published an article on the advancements made in three professional ethical codes, one being ACM’s new Code of Ethics.

My area of expertise is in computer security, digital forensics, and incident response, so the Malware Disruption case study really resonated with me. I wanted to thank you for creating it and would love your thoughts on how I used it.

I also wanted to ask whether ACM has any plans to collaborate with other organizations in refining realistic case studies like this that can be shared across organizations to achieve better consistency in the content and application of ethical codes. I think this could be quite valuable to the broader community.


The Australian Computer Society was the first association of IT practitioners to be recognised as a professional society, and the ACS Code of Ethics was first published more than 50 years ago. The current version is here: https://www.acs.org.au/content/dam/acs/acs-documents/Code-of-Ethics.pdf


Following the recent TechTalk, Marty Wolf, Don Gotterbarn, and Keith Miller were kind enough to answer some additional questions that we were not able to get to during the presentation. Part 1 of their responses is presented below:

What is snake oil…? How does it relate to computer science, and why is it relevant in 2020?

DG: Snake oil was supposedly a cure for everything; in many cases it was primarily alcohol, and it did nothing for the cancer you were taking it for. Sometimes technology is peddled in the same way.

Is there a repository for the 150 codes of ethics related to AI mentioned in the presentation?

DG: No repository that I know of. I saw the lists in a series of footnotes in a United Nations report.

I’m going to quibble about the bolt. Many engineering projects, such as airplanes and bridges, depend on fasteners meeting critical strength and installation criteria. Some air crashes have occurred because the wrong bolt was used, either in construction or in maintenance. That said, it’s a lot easier to miss or misplace something in software.

KM: Your comments about bolts are interesting. Looking at both the similarities and differences between bolts and software is a useful exercise. I will always use the analogy more carefully after your thoughtful comment.

Do you have concrete suggestions for undoing the manufacturing mindset in the setting of a university computing course?

KM: I find case histories useful. When examining students’ responses to a case, if I recognize the manufacturing mindset, I bring it up in discussions, either with the student, his/her small group, or the whole class. I try not to embarrass the student when making the point.

DG: Reflective engineering: after a project is done, conduct a review of the ways the problem could have been approached, asking which is better, and be sure to have students openly discuss the impacts of their choices on different stakeholders. In programming classes, use practical assignments: have a matrix serve as a storage place for blood containers of different types, and have students process the blood into and out of the containers. When the assignment is done, ask them what was wrong with the assignment you gave them; what did you fail to consider? Since they are criticizing you, they will feel free to join in.
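As a rough sketch of the kind of assignment DG describes, a matrix can serve as storage for blood containers, with students writing the code that moves blood in and out. Everything here (dimensions, function names, the slot-scanning scheme) is an assumption for illustration, not from the talk:

```python
# Hypothetical version of the blood-container assignment described above.
# A grid (matrix) of storage slots holds containers of different blood types;
# students implement storing and retrieving. The deliberate ethical gap:
# nothing here checks type compatibility, expiration dates, or labeling errors.

ROWS, COLS = 3, 4

def make_storage():
    """Create an empty ROWS x COLS storage matrix."""
    return [[None for _ in range(COLS)] for _ in range(ROWS)]

def store(storage, blood_type):
    """Place a container in the first free slot; return its (row, col)."""
    for r in range(ROWS):
        for c in range(COLS):
            if storage[r][c] is None:
                storage[r][c] = blood_type
                return (r, c)
    raise RuntimeError("storage full")

def retrieve(storage, blood_type):
    """Remove and return the location of the first container of blood_type."""
    for r in range(ROWS):
        for c in range(COLS):
            if storage[r][c] == blood_type:
                storage[r][c] = None
                return (r, c)
    return None  # not found -- what should happen in a real blood bank?

storage = make_storage()
store(storage, "O-")
store(storage, "AB+")
print(retrieve(storage, "O-"))  # frees the slot that held the O- container
```

Asking students afterwards what the assignment failed to consider (traceability, expiry, mismatched transfusions) is where the ethical discussion starts.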

How can a developer be acting ethically if the code to write belongs to a weapon or to a social network spying on the users?

KM: This issue was discussed at great length when we revised the ACM Code of Ethics in 2018. As an organization, the ACM was not likely to declare all defense and law enforcement work as immoral. This issue is the main reason there was a rather involved use of language to distinguish between “intentional harm” and “unintentional harm.” Speaking personally (not for the ACM or the committee), I think one can make an individual judgment that working on a weapon or a surveillance system can be morally, ethically correct. I have never been faced with that decision in my career, but I would find it difficult.

Move fast and break things? Get it out then patch it? Of course, patching is maintenance and is much less beneficial for reputation and more poorly paid than original development (or redevelopment).

KM: Yes, the strategies you name in your comment are, I think, ethically problematic.

Would it be alright to use some of your definitions for Ethic and Ethicking in the computing ethics class I teach?

KM: Yes, we hope you do! Don Gotterbarn will give us the citation, I’m sure.

DG: All references are on the last slide; slides are (or soon will be) available at ethics.acm.org. For now: Winograd, Terry (1995), “Computers, Ethics and Social Responsibility,” in Computers, Ethics and Social Values, eds. Deborah Johnson and Helen Nissenbaum.

Defining ethics via effects (p. 20) is the 2nd “design pattern” of the Belmont Principles (1978, where they call it “beneficence”). The other two are: 1. Respect for personhood (including autonomy and consent) and 3. Justice (including fairness). Like all good design patterns, they are by construction often in tension: they are to be negotiated, not enumerated as a checklist. I worry, though, that if we define ethics only in terms of “effects” or “public good,” how do we integrate 3. fairness and 1. consent of stakeholders?

KM: It’s interesting that you draw the parallels between medical ethics and computer ethics. My wife is a bioethicist, and discussions about those parallels got me into the field of computer ethics. There are also insights to be gained by looking at environmental ethics, feminist ethics, and the ethics of care (especially in nursing). The similarities and differences can both be instructive.

MW: Ethics certainly encompasses all that you suggest. It takes a while to get good at applied ethics. Our suggestion is merely a starting point. Computing professionals who engage in professional development will need to address issues of fairness, consent, justice, and other concerns in the context of the projects that they work on. Integration of these issues comes over time with continued professional development.

Has the CARE process been tested as encouraging better ethical thinking and behavior?

KM: I don’t know of any empirical research on this, but it might be a fruitful area to explore.

MW: It has not. It is a recent development. Given other work on ethics, however, it is clear that a single, brief exposure to it is unlikely to make a difference.

DG: Proactive CARE has not been tested, but it is a reformulation of standard ethical analysis: consider the context, identify the stakeholders and your options, and think about how the options impact each of the stakeholders. This conscious process helps reduce framing out, and psychologists would categorize it as a form of pre-commitment.

How would a professional resolve the ethical implications of dual-use artifacts they are producing? (Example: Tracking apps may be used to contain the spread of a pandemic as well as suppressing freedom of speech). How could an ethical approach to work on such artifacts go beyond not working on such artifacts at all, which might be equivalent to not working in the field at all?

KM: People in medical ethics have worked on dual use for a while. For example: Miller, Seumas, and Michael J. Selgelid. “Ethical and philosophical consideration of the dual-use dilemma in the biological sciences.” Science and engineering ethics 13, no. 4 (2007): 523-580. More recent work has focused on dual use in computer ethics: Rashid, Awais, John Weckert, and Richard Lucas. “Software engineering ethics in a digital world.” Computer 42, no. 6 (2009): 34-41. AND Miller, Seumas. “Concept of Dual Use.” In Dual Use Science and Technology, Ethics and Weapons of Mass Destruction, pp. 5-20. Springer, Cham, 2018.

A practical approach that sometimes works is to build the software in a way that discourages unethical dual uses you can anticipate. The Google/Apple API we spoke about was designed not to work well with a central server collecting the information. An article by Floridi on this points out that one country changed its model away from a central-server model in order to use this app. This works sometimes.

I am involved in the development of safety-critical software (IEC61508, EN50128, ISO26262, etc.). Those standards define “acceptably safe” as being societal – what is “acceptably safe” in one society, may not be in another. Is this also true for the wider “ethics” question – is it societal?

KM: Yes, I think it is. But many different “societies” can be involved. The general public is one such group, but so is ACM as a professional organization. Perhaps this (somewhat old) paper might be of interest to you: Collins, W. Robert, Keith W. Miller, Bethany J. Spielman, and Phillip Wherry. “How good is good enough? An ethical analysis of software construction and use.” Communications of the ACM 37, no. 1 (1994): 81-91.

Be careful with the “it is societal.” When working on the Software Engineering Code, and now with the updated ACM Code, we found that professional standards and the commitment to avoid harm were tied to a higher standard of responsibility and care. In a specification for software, one society may say it is alright if the in-flight software fails 10% of the time; losing 10% of the people is their definition of “acceptable safety,” and that may conform to some legal standard. However, when talking about professional ethics…

What are some of the ways for implementing ethical aspects into AI systems? Are there any sample implementations available?

KM/DG: Ron Arkin of Georgia Tech has done work on this for military systems. Wallach and Allen’s book Moral Machines might be of interest to you.

Is there a way of getting the context from a given piece of software?

KM: If you are suggesting an automated way to infer the context from the code itself, I don’t think so. The plans of the programmer may be quite different from the plans of the buyers and users of a given system, and the code does not (in my opinion) have to reflect any of the future uses, or the values of the developers, buyers, and users.

Proactive Care / Review Code: Assuming that in enterprise settings, code artifacts are typically not under control of the creator after initial roll-out, isn’t reviewing code a pointless exercise when it comes to ethics?

KM/DG: I understand the difficulty you are identifying, and I agree with you that it is problematic. However, at the moment of creation, the programmer DOES have control, and in that moment her professional ethics ARE in play. Everything she can do at that moment to protect the public good, and to weave in positive value to the software counts, even after the system is no longer in her direct control.

Sometimes the design of the software can be such that anticipated misuse or unintentional ethical failures become unlikely. Yes, it is worth the effort; one positive change will affect millions of people.

Do you have strategies for educating bosses on why it is a good idea to include CARE in development?

DG: There are folk tales: ethics takes time and is expensive. There is research: having a commitment to the well-being of your customers and the stakeholders in your product improves retention and reduces product recalls, a financial benefit. Asking staff to contribute to building a technically and ethically better product gets their user-experience insight in and gives them ownership of the product.

Can we say the computing field has a lot of ethical issues due to its inability to belong to a common platform with legal backing, as compared to other fields like law, health, and others?

KM: I think you are correct that computing has special problems as you describe. But remember that law, medicine, and accounting have their own ethical challenges. Even WITH their relatively common platforms, they still have lots of ethics work to do.

If ethical software engineering implies stakeholder engagement to ensure that blind spots are eliminated, how do I know whether I have done enough? How much engagement is sufficient, and how do I know I have consulted the right people?

KM/DG: We can never be sure, but it is a good sign if a software development team is worrying about this question (rather than ignoring it). A long time ago, a group of us wrote about the issue you raise: Collins, W. Robert, Keith W. Miller, Bethany J. Spielman, and Phillip Wherry. “How good is good enough? An ethical analysis of software construction and use.” Communications of the ACM 37, no. 1 (1994): 81-91.

“You can’t be sure you have found them all” is sometimes used as an excuse not to do the extra work of checking for blind spots. This is a very weak argument with dangerous consequences. It is like saying, “Looking both ways when walking into the street does not guarantee that I will not get hit, so I am not going to look at all.”

What are your thoughts on ethical considerations for emerging platforms such as VR and Augmented Reality?

KM: Here is a paper on the very topic you mention. Please excuse the shameless self-promotion: Wolf, Marty J., Frances S. Grodzinsky, and Keith W. Miller. “There’s something in your eye: Ethical implications of augmented visual field devices.” Journal of Information, Communication and Ethics in Society (2016).

Most of the examples given could be seen as examples of (poor) quality management. What is the relationship between ethics and quality control? Put differently: If I do proper quality assurance, have I done my ethical duties?

DG: When consulting, I was asked this question frequently, or when I gave an example it was challenged as “just a failure of quality control.” If you build into quality more considerations than just meeting budget, schedule, and function, then the examples we gave in the talk are failures of quality control, even though they met function, budget, and schedule requirements.

KM: Great question. I was intrigued and found some scholarly literature about the relationship: Fisscher, Olaf, and André Nijhof. “Implications of business ethics for quality management.” The TQM Magazine (2005). AND Tarí, Juan José. “Research into quality management and social responsibility.” Journal of Business Ethics 102, no. 4 (2011): 623-638. In addition, I think the whole “VALUE SENSITIVE DESIGN” literature is somewhat relevant, but earlier in the developmental process. Quality control often is emphasized towards the end of a project, whereas design dominates early.

What recommendations do you have for teaching computer science students to follow the CARE framework in group projects for an ethics course—particularly in an online format?

MW: Quite simply, make it a mandatory part of the project. Have students address the questions individually and in their teams. Make sure that the output of the CARE process is evaluated and made a substantial part of the overall score on the project.

DG: Ethics is a shared enterprise where they can reason about issues. One useful technique is to have them share their results with other groups. Even if it is done asynchronously–each group posting what they found and asking other groups to add to the analysis.

The issues of beliefs, morals, and values seem to be missing from the discussion. The example of contact tracing is telling: there seems to be an implicit value that “a government must not be able to track people.” I am not saying that this value is immoral, but it should and must be stated explicitly. Otherwise, the discussion of ethics is based on expected shared beliefs, morals, and values, and is thus flawed. What do the speakers think about this?

DG: Yes, the focus was on providing a methodology for doing Ethicking, suggesting some ways to incorporate the professional values embodied in the Code of Ethics. The philosophical underpinnings of the Code were not addressed. I think that the Proactive CARE model assumes significant ethical commitment, professional obligation.

KM: The talk did not emphasize it, but there is a long and growing strain of literature on values and virtues in computer ethics. For example: Grodzinsky, Frances. “The practitioner from within: revisiting the virtues.” ACM SIGCAS Computers and Society 29, no. 1 (1999): 9-15. ALSO: Huff, Chuck, Laura Barnard, and William Frey. “Good computing: A pedagogically focused model of virtue in the practice of computing (part 1).” Journal of Information, Communication and Ethics in Society (2008). AND Moor, James H. “If Aristotle were a computing professional.” ACM SIGCAS Computers and Society 28, no. 3 (1998): 13-16. Also see Shannon Vallor’s book, Technology and the Virtues (2016); she addresses the very issue of universality.

What are some sustainable ways to guarantee proper and mandatory teachings of ethics in university education? Especially within earlier years in a bachelor’s degree? Thank you!

MW: This is a very challenging problem. Computer science faculty rarely see themselves as qualified to teach ethics. Students rarely see faculty from outside of computer science as having the same sort of authority as the computer science faculty do. CS faculty can develop that expertise, but then coverage of ethics is limited to a narrow set of courses. CS faculty and non-CS faculty can work to develop materials. Make clear the ethical learning outcomes for your CS program and for each course in your CS curriculum.

Reflecting on comments made for slides #15 and #36 on designing carefully and reviewing thoroughly, how do the new paradigms of Agile (MVP) and DevOps (release fast, release often) affect ethicking? It would appear that these new paradigms would result in software with fewer ethical considerations.

KM: It certainly is a danger, but I don’t think the de-emphasis you describe is inevitable. For example, you could require that each release includes explicit consideration of ethically significant changes since the last release. Frequent interaction with stakeholders could INCREASE ethical sensitivity among developers, if handled in a certain way.

Looking for more on “non-symmetric” sharing as constituting cheating or non-cheating. For example, submitting a term paper to the writing center versus proofing a classmate’s paper. What should the rule be? Perhaps, one may ask for help, but one may only receive help from authorized roles. Could publishers be considered authorized? For example, refusing to publish answers to textbook exercises? There is a right to publish anything. Otherwise, consulting a library resource ought to be considered an honorable strategy.

MW: Your question points to some larger questions that are worth asking and many have begun this discussion already. What is it that we want our students to learn? Are those learning goals consistent with how professionals actually do their work? How do we achieve that given the socio-technical nature of the world today? Starting with these questions, you may find that there is no “rule,” but merely guidelines that require faculty to give careful guidance and make difficult decisions from time to time.

The “internal whistleblower” function sounds positively utopian even to me here in Sweden, and even less realistic based on my experiences with US companies. Does this actually happen somewhere? If so, could you say some more about that?

DG: It exists in many companies. One avionics company I did work for installed a button on the phone to call the ethics officer, and experience there encourages contacting the ethics officer. As another example, Equifax’s Integrity Line has an 800 number, they give you a “case number,” and they don’t ask your name. It is available 24 hours a day, seven days a week. Trained specialists from an independent third-party provider of corporate compliance services will answer your call, document your concerns, and forward a written report to the Equifax Corporate Ethics Officer for further investigation: https://www.equifax.com/assets/corp/code_of_ethics.pdf

Part II of Marty Wolf, Don Gotterbarn, and Keith Miller’s responses are presented below:

I’m trying to remember how it felt to be a moderately skilled, working programmer. I did not know enough about the full context of the company that I worked for to judge these ethical issues, to even begin applying the CARE process. I doubt anyone below the VP level could talk about how my inventory control programs would affect the warehouse employees and what ethical responses to that were planned. Wouldn’t even raising these issues look like I was challenging the executive’s morality, looking like a union sympathizer instead of making life better for our customers and stockholders?

KM: It might at that. I have a son who works with IT and logistics. He reports ethical implications of his work frequently. However, he doesn’t often CALL it ethical - he talks about doing the right thing for truck drivers and warehouse workers. I think this kind of analysis goes on all the time, but not always in the methodical way it is outlined in CARE.

DG: Being “ethical” is doing a good job. Suggestions to help your company expand their market by making the product more accessible is one way of saying, “We are being unfair if we disenfranchise people who are color blind.” The former way of putting it gets the job done and wins you praise; the latter may get you fired.

Do you have examples of pre-commitment planning documents?

DG: These are your regular planning documents. Some have a section on risks, with questions asking you to list the stakeholders and their risks, or a section that asks which are the most relevant principles from the Code of Ethics, why they are the most relevant, and how you intend to uphold them in this project.

If the organizational culture is one that tends to “frame out” context and the more expansive view of Ethics presented here, how can an individual operate Ethically?

How do I engage a team in re-framing context without provoking management responses like “you’re making this too complicated”? I’m finding that as an individual all I can do is raise a concern, and too often that is where it ends; as long as I document this concern for the project and raise it with my supervisor, I have satisfied ethical practice as defined by corporate policy.

DG: Unfortunately, much officially defined corporate practice is in terms of legal compliance. I have found that doing things ethically is easy to argue for. You make a suggestion that will improve the product or improve the bottom line, and it just so happens that it also improves the stakeholder situation by making the product safer or have it involve fewer recalls. Doing something even if it is just at one level makes a difference.

Does ACM provide help/assistance for members taking the “moral hero” decision? Legal? Financial?

DG: Not as far as I know.

I see that there is an enforcement procedure for ACM code violations…are these cases reported anywhere? I ask because some bar associations do post ethical violations and disciplinary actions for lawyers…this can be helpful for teaching what to do and not do in tricky ethical situations. Does ACM have anything like this?

DG: No. Confidentiality is maintained about Code of Ethics violation cases. ACM does, however, support the investigations and hires outside support to deal with some of the cases. There are some plagiarism cases where the employer is informed of the plagiarism, but that is not the sort of thing you are talking about. The function of the Code is primarily guidance in decision-making and education about the values of the profession.

Has the ACM considered highly public discipline of members who contribute to widely publicized system failures due to ethical failures?

KM: The ACM ethics enforcement policy is available here: https://ethics.acm.org/enforcement/. Some of the enforcement actions may be confidential, others can be publicized to the membership. As far as I know, public “shaming” by the ACM is rare.

DG: For some levels of plagiarism the employer may be notified, and the DL [ACM Digital Library] will maintain a reference to the plagiarizing paper with a pointer to the original article it plagiarized, so the chain of authorship is clear. The plagiarizer may also be banned from publishing with the ACM. https://www.acm.org/publications/policies/plagiarism-overview

Hi! Do you recommend any books, articles, or podcasts to learn more about moral and ethical responsibility when designing and developing technology? Thank you

KM:

Can I use the recording of this session for teaching purposes (at university)?

KM: Yes.

Natasha Schüll wrote Addiction by Design: Machine Gambling in Las Vegas (2012). Based on the talk and the book, shouldn’t software design for gambling purposes be outlawed from the get-go? And other things (addiction-inducing features of social media, etc.)?

KM: Perhaps I would personally hesitate to work on such a thing, but I don’t think it is a slam dunk in general. I can think of counterarguments based on utilitarianism, Kantianism, and social-contract theory. Virtue ethics seems like more of a stretch. Certainly, the ACM isn’t going to make a categorical statement about this.

For students seeking jobs, do you have advice for filtering out companies which may be behaving in an unethical manner? What happens when large companies have many divisions, some more problematic than others? How can you take an ethical stand in job searches without compromising your career?

KM: I think someone interviewing for a job should ask about this directly. An ethically aware organization or individual will give you extra points, and an organization or individual against ethical awareness will shy away from you. That should help you get to a better place to work. Will this slow your advance? Perhaps. But it may preserve your soul.

How does the ACM Code of Ethics describe the trans-border ethical aspects?

KM: It is a difficulty. I remember (during the construction of another code), a lively (somewhat acrimonious) debate about bribery. Some people insisted that bribery should NOT be prohibited in the code because it was necessary when doing business in some countries. It is a daunting task to try to distinguish universal rules and principles from more culturally specific concerns. But people who work on codes have to do that work. I’m sure we don’t always succeed, but we try.

DG: The Code does take a no-bribery position. So do some other codes, including in countries where bribery is alleged to be common.

Do you think the CARE system of ethical decision-making is an effective means of avoiding the Collingridge Dilemma?

KM: I don’t think that dilemma can be avoided. But I do think CARE is a way to approach it.

DG: We also tried to address some of that in the Code. Although it cannot be avoided we expect people to try to figure out what is expected as far as they can, and to also plan to monitor it for future difficulties.

How to integrate the Code of Ethics into a computer science class like “Data Structures” and other computer science courses?

DG: Use practical examples. Model parking for a concert in a large field. Have students move cars into a large field as a stack model and have them empty the field. Then assign them a one-page paper to say what was wrong with the project from the perspective of the Code of Ethics. Some will notice that there is no handicapped parking, no one can leave the parking lot in an emergency, etc. Similar examples at https://csciwww.etsu.edu/gotterbarn/ArtTE2.htm
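The parking exercise DG describes can be sketched as a stack; the single-lane, last-in-first-out layout and all names below are my assumptions about the setup, chosen to surface the ethical flaws he mentions:

```python
# Hypothetical version of the concert-parking assignment described above.
# Cars are packed into the field like a stack: last in, first out.
# The ethical flaws are baked in: no accessible parking is reserved, and
# a car parked early cannot leave until every car parked after it has left.

field = []  # the field, modeled as a stack of license plates

def park(plate):
    """Park the next arriving car at the end of the lane."""
    field.append(plate)

def leave():
    """Only the most recently parked car can drive out."""
    return field.pop()

def emergency_exit(plate):
    """Count how many cars must move before `plate` can leave."""
    return len(field) - 1 - field.index(plate)

for plate in ["AAA-111", "BBB-222", "CCC-333"]:
    park(plate)

print(leave())                    # CCC-333 leaves first
print(emergency_exit("AAA-111"))  # 1 car still blocks the earliest arrival
```

The one-page follow-up paper then asks what the model ignores: accessible parking, emergency egress, and who bears the cost of the “efficient” design.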

KM: WARNING: this answer includes a shameless self-promotion: Here is an ancient paper on that topic. If you read it, you may be amused at how dated its examples are: Miller, Keith. “Integrating computer ethics into the computer science curriculum.” Computer Science Education 1, no. 1 (1988): 37-52.

I will begin to study for my Doctor of Information Technology. My doctorate project depends on computer ethics and ML (text recognition). Can I also access papers similar to my project from ACM? If yes, do I have to pay for each resource?

KM: If the interlibrary loan at your institution can’t get you those papers for free, please contact me and I can help.

DG: The ACM resource for this is the Digital Library. Check with Oliver Burmeister whether your library has access to it.

Those are examples of “user interface” issues: airplane and surgery.

KM: Yes. Don and Keith wrote a paper that includes case studies emphasizing user interfaces: Gotterbarn, Donald, and Keith W. Miller. “The public is the priority: Making decisions using the software engineering code of ethics.” Computer 42, no. 6 (2009): 66-73.

DG: Yes, there are many more examples of ethical issues in computing that do not involve user interface decisions.

Why not 50%, 50% men and women? That is the only “good mix.”

KM: I’m not sure there is always a perfect ratio, though I certainly like 50/50. Some might argue that the ratio should reflect the current ratio of the ACM members; I don’t like that one because it perpetuates current problems. Sadly, because we rely on volunteers, we can’t dictate the ratio on the ethics committee. However, we have tried to recruit more diversity, with mixed results.

Do you consider the Hippocratic (Open Source) License to be a potentially effective systematic pre-commitment device?

KM: I hadn’t thought of that, but it is intriguing. I will have to ponder that. Thanks for the comment.

DG: Yes, it can be used as a form of pre-commitment. Pre-commitments require constant retelling to prevent ethical fading and mitigate framing out. We need to elevate the existence of this standard regularly to a conscious level.

Where is the ACM Code of Ethics posted or how can I get a copy (as an ACM member)?

KM: Anyone can grab a copy from: https://ethics.acm.org/code-of-ethics/.

Please share it widely!

DG: You can also get a pdf of the ACM CODE booklet which has the Code and sample analysis cases: https://www.acm.org/binaries/content/assets/about/acm-code-of-ethics-booklet.pdf.

Check out https://datasociety.net/, We Build Black, and Techqueria for people of color in tech. There are countless others. Think about how your environment is or is not welcoming to their voices, and how that may influence whether they volunteer themselves to your space. It’s a similar process to the intentional ethical decision-making you’re describing.

KM: Thank you for that pointer. After the talk, we discussed the difference between being passive (we welcome anyone who volunteers) versus being active (how can we effectively recruit diverse voices). Your emphasis on active engagement is well said.

So, the committee is based on volunteers. Is there a process to encourage volunteers who are people of color and other ethnicities? It comes back to stakeholders; how was/is “stakeholder” defined with regard to the committee and the ethics code? Note: not complaining about how the code currently stands

KM: Thank you for your comment. After the talk, we discussed the difference between being passive (we welcome anyone who volunteers) and being active (how can we effectively recruit diverse voices). Might you be interested in joining the committee?

To what extent is ethical behavior possible when working within inherently non-sustainable economies or those that perpetuate systematic repression?

KM: I think it is a real challenge. On a “micro-ethics” level, we control our own ethical integrity in our professional practice. But on a “macro-ethics” level, improvement requires more systematic, large-scale change. These are in tension. I think ethics codes (including the ACM Code, which I worked on) may over-emphasize the micro and under-emphasize the macro. We debated this when revising the ACM Code.

DG: There have been some documented occasions where employees have referenced the Code to move a company to give up going in a bad direction. The Code can be used as leverage at the macro level because of the backing it provides.

Do you think social media, specifically Facebook, really care about ethics that you mentioned today? Thanks

KM: This is only my personal opinion, but I think the powers-that-be in social media care most about their bottom line. I think ethics is a secondary concern at best, and only comes to the fore when the bottom line is threatened.

I have student(s) who’ve requested teaching on ethics. However, my university doesn’t offer it. I will create some content for next semester. Is there an online library of resources for teaching ethics to undergraduates?

KM: There are many scattered resources. If I had to name one, I might pick https://www.onlineethics.org/.

Please contact me if you’d like to discuss this further.

Thank you for your thought and labor in holding this panel. Ethics – a never-ending negotiation of what matters – is critically important in a world of difference. I have some reflections for the panelists to consider post-panel, no questions: How does your positionality influence your framing of CARE? How might folks who identify with different cultures (ethnic, racial, religious, genders, sexuality) relate to this western-based framing of computing ethics and “good practices”?

KM: I am so impressed with your turn of phrase: “never-ending negotiation of what matters.” May I quote that? In response to your question, I think all of who I am is reflected in my personal interpretation of ethical practice.

Hello!! I would like to know which book to consult for an introduction to this complex subject of IT ethics. Thank you!

KM: There are many. I helped write the 4th edition of a book by Deborah Johnson, but that book is pretty old. When I teach computer ethics to Computer Science undergrads lately I use a book by Michael Quinn. He and his publisher keep coming out with new editions, and that keeps his examples relatively fresh. Also, my students find it approachable. https://www.amazon.com/Ethics-Information-Age-Michael-Quinn/dp/0134296540.

How diverse is your ethics committee and how are you raising and amplifying Black voices with your ethics efforts? Thank you for answering the question and being transparent about race and gender on the committee. I argue that it’s incredibly shortsighted to be relying on volunteers and using this as a reason for lack of diversity.

KM: Don, Marty, and Keith discussed exactly this issue right after the end of the webcast. We plan to more actively recruit new members for the committee. I agree with “shortsighted.” We will do better.