Code 2018 Draft 3: Section 2 discussion


#1

Please use this thread for discussion of the changes to Section 2 (the text of Draft 3 of Section 2 is included below). If you have a significant issue to discuss you can start a new thread about it and ping me (Bo Brinkman) and I will add a link to it below.

Draft 3 text

2. PROFESSIONAL RESPONSIBILITIES.

A computing professional should…

2.1 Strive to achieve high quality in both the process and products of professional work.

Computing professionals should insist on high quality work from themselves and from colleagues. This includes respecting the dignity of employers, colleagues, clients, users, and anyone else affected either directly or indirectly by the work. Computing professionals have an obligation to keep the client or employer properly informed about progress toward completing the work. Professionals should be cognizant of the serious negative consequences affecting any stakeholder that may result from poor quality work and should resist any inducements to neglect this responsibility.

2.2 Maintain high standards of professional competence, conduct, and ethical practice.

High quality computing depends on individuals and teams who take personal and group responsibility for acquiring and maintaining professional competence. Professional competence starts with technical knowledge and with awareness of the social context in which the work may be deployed. Professional competence also requires skill in reflective analysis and in recognizing and navigating ethical challenges. Upgrading necessary skills should be ongoing and should include independent study, conferences, seminars, and other informal or formal education. Professional organizations and employers should encourage and facilitate those activities.

2.3 Know, respect, and apply existing rules pertaining to professional work.

"Rules" here includes regional, national, and international laws and regulations, as well as any policies and procedures of the organizations to which the professional belongs. Computing professionals must obey these rules unless there is a compelling ethical justification to do otherwise. Rules that are judged unethical should be challenged. A rule may be unethical when it has an inadequate moral basis, it is superseded by another rule, or it causes recognizable harm that could be mitigated through its violation. A computing professional who decides to violate a rule because it is unethical, or for any other reason, must consider potential consequences and accept responsibility for that action.

2.4 Accept and provide appropriate professional review.

High quality professional work in computing depends on professional review at all stages. Whenever appropriate, computing professionals should seek and utilize peer and stakeholder review. Computing professionals should also provide constructive, critical reviews of others' work.

2.5 Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks.

Computing professionals should strive to be perceptive, thorough, and objective when evaluating, recommending, and presenting system descriptions and alternatives.

Computing professionals are in a position of trust, and therefore have a special responsibility to provide objective, credible evaluations to employers, clients, users, and the public. Extraordinary care should be taken to identify and mitigate potential risks in self-changing systems. A system for which future risks cannot be reliably predicted requires frequent reassessment of risk as the system evolves in use, or it should not be deployed. Any issues that might result in major risk should be reported.

2.6 Have the necessary expertise, or the ability to obtain that expertise, for completing a work assignment before accepting it. Once accepted, that commitment should be honored.

A computing professional is accountable for evaluating potential work assignments.

Once it is decided that a project is feasible and advisable, the professional should make a judgment about whether the work assignment is appropriate to the professional's expertise. If the professional does not currently have the expertise necessary to complete the assignment, the professional should disclose this shortcoming to the employer or client. The client or employer may decide to pursue the assignment with the professional after time for additional training, to pursue the assignment with someone else who has the required expertise, or to forego the assignment. A computing professional's ethical judgment should be the final guide in deciding whether to work on the assignment.

2.7 Improve public awareness and understanding of computing, related technologies, and their consequences.

Computing professionals should share technical knowledge with the public, foster awareness of computing, and encourage understanding of computing. Important issues include the impacts of computer systems, their limitations, their vulnerabilities, and opportunities that they present. Additionally, a computing professional should counter false views related to computing.

2.8 Access computing and communication resources only when authorized to do so.

No one should access another's computer system, software, or data without permission. A computing professional should have appropriate approval before using system resources unless there is an overriding concern for the public good. To support this principle, a computing professional should take appropriate action to secure resources against unauthorized use. Individuals and organizations have the right to restrict access to their systems and data so long as the restrictions are consistent with other principles in the Code.

2.9 Design and implement systems that are robustly and usably secure.

Breaches of computer security cause harm. It is the responsibility of computing professionals to design and implement systems that are robustly secure. Further, security precautions are of no use if they cannot or intentionally will not be used appropriately by their intended audience in practice; for example, if those precautions are too confusing, too time consuming, or situationally inappropriate. Therefore, the design of security features should make usability a priority design requirement.

Related threads

None yet.


#2

In Principle 2.1, there’s no discussion of the tradeoffs between quality, cost, time, and scope. I agree with the principle of insisting on high quality, but there’s also a broader context: going for the highest quality is not always feasible, and reducing quality (to a certain extent) to reduce cost or increase scope may, in some situations, be the right thing to do.

In Principle 2.1, there’s mention of keeping the client or employer informed about progress toward completing the work. However, there are also up-front estimates of cost, schedule, and risk that need to be communicated properly before work begins.

In Principle 2.2, I don’t understand how “high quality computing” depends on “individuals and teams who take personal and group responsibility for acquiring and maintaining professional competence”. I don’t understand the concept of “high quality computing”. Perhaps this is supposed to be related to work products?

In Principle 2.2, I’d also like to see some level of discussion around understanding the business context and the needs of the customers, clients, and users. I think there’s also room to talk about ensuring effective procedures are used and that everyone working on a given project is aware of the procedures and standards they are held to.

Principle 2.2 also feels incomplete. It talks about technical knowledge, social context, and ethics. But it doesn’t mention other knowledge that is essential - the problem domain or business domain, development processes and methods for people working in product development, business and economics, etc. None of these fall into what I would consider technical knowledge for a computing professional.

Principle 2.5 feels incomplete, too. If another computing professional or a member of the general public is asking questions about a computer system, knowledgeable computing professionals have a responsibility to answer those questions and address those concerns. This is related to Principle 1.3 and Principle 2.6 - not all computing professionals have the knowledge and experience to answer all questions.

Along those same lines, Principle 2.7 is very much related to Principle 2.5. In fact, they almost feel like the same thing. 2.5 calls out clients, employers, and users, while 2.7 is exclusively focused on the general public. It seems it would be easy to combine these two principles and cut down on the wordiness of the document as a whole.

Principle 2.8 is missing a statement to the effect that computing professionals should not knowingly use software that has been obtained illegally. It may be more controversial, but computing professionals should also report the use of illegal software to the appropriate authorities.

I have many objections to Principle 2.9. First, it’s one of the few principles focused on people working specifically in product development, and beyond that, it’s focused on people doing design and implementation. That’s a subset of the target audience - how does this relate to instructors, researchers, influencers, or users of technology? Second, there’s a lot of overlap with Principles 1.1, 1.2, 1.6, 1.7, 2.1, 2.5, and 2.6, which, when properly applied, should remove the need for this Principle on its own. Finally, it calls out one specific quality attribute of a software system - why not call out Accessibility, Usability, Availability, or Compliance? It seems like the addition of this Principle is a knee-jerk reaction to recent events rather than a careful application of other, more broadly useful principles to ethical concerns.

I think there’s a missing principle about being supportive of colleagues - not interfering with their personal or career development, helping them stay aware of organizational and industry best practices, and hearing their opinions and concerns. These may be incorporated into Principle 2.1, perhaps.


#3

Principle 2.2: I agree, Thomas - something needs to be said about non-technical knowledge and the business and practical context that the software will be used in. Also, consider replacing “upgrading” with “improving”; there is no reason to use jargon here.

Principle 2.4: Given the difficulty of accepting criticism and the temptation to use criticism as a weapon, I feel that more deserves to be said here. At a minimum, I would add a reminder that reviews should be approached with humility by both the reviewer and the reviewee.


#4

Security is inclusive of Accessibility, Availability, and Compliance. That said, usability is important enough that, if Security warrants its own principle, so does Usability.

Seconded. Again, this is important, though some of it should go into “professional review”. Also, avoiding badmouthing colleagues should be covered there as well.


#5

I am happy with the addition of 2.9 and with making the security of systems a prime concern for professionals. This addition, though, seems particularly focused on the usability of systems, which seems out of place in this otherwise generic code.

Many computing professionals are also responsible for maintaining systems (see also 3.6). Proper configuration and active maintenance of systems are just as important to security as usability and secure design are.

I would suggest making the clause more generic rather than focusing only on the design of secure systems.


#6

We have reviewed “ACM Code of Ethics and Professional Conduct” (Draft 3) and applaud this important guidance for computing professionals [1].

A significant area of ongoing activity is not yet addressed: remote and autonomous systems. While several aspects of the draft Code pertain, the design and deployment of such systems pose special challenges that computing professionals must specifically consider. As noted in the Turing Award Lecture of fifty years ago: ethics, professional behavior, and social responsibility cannot be separated from the diverse fields in which computer science is applied [2].

Many robotic systems are being deployed with high degrees of autonomy, long operational endurance, and independence from direct human supervision. Examples include self-driving cars, unmanned air vehicles (drones), military sentry vehicles, and many others. Whether manned, unmanned, civil or military, such systems have significant potential for applying indiscriminate lethal force at a distance. Complex situations, unforeseen interactions, and emergent behaviors often occur that are beyond the original scope or intent of designers and engineers.

Special considerations are necessary for such machines, since preprogrammed machine responses remain inadequate in isolation. Protections for human life must be considered and engineered into systems capable of prolonged operations beyond the range of direct remote control. A critical enabler is available to help: the combination of human judgement and artificial intelligence can yield more effective systems than is possible by either alone [3]. Thus sufficient human supervisory guidance, and permission checks for recognizably dangerous situations, must be available for systems that are allowed to operate autonomously. Simply put: if a human is not in charge, then no one is in charge.

Constraints on action (such as limits of authority, and conditions requiring explicit human approval) can be achieved for remote systems presenting potential hazard to life. For example, recent work has shown that human ethical considerations can be expressed using validatable syntax and logical semantics when defining executable robot missions [4]. Indeed, if ethical approaches combining machine and human capabilities can better ensure human safety, it is unethical to not consider them. Understanding such issues when engineering systems with autonomy requires the technical expertise and moral judgement of computer-science professionals.
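To make the idea of executable constraints concrete, here is a minimal, hypothetical sketch (not taken from [4]) of a mission executor that enforces a limit of authority and requires explicit human approval for recognizably dangerous commands. All names and thresholds below are invented for illustration only.

```python
# Hypothetical sketch of a "limit of authority" check for an autonomous system.
# Commands above a hazard threshold are refused unless a human supervisor has
# explicitly approved them. All names and values here are illustrative.

from dataclasses import dataclass

@dataclass
class Command:
    action: str          # e.g. "survey_area", "release_payload"
    hazard_level: int    # 0 = benign; higher values indicate greater danger

# Hypothetical limit of authority: commands at or above this hazard level
# require explicit human approval before execution.
AUTONOMY_HAZARD_LIMIT = 2

def execute(command: Command, human_approval: bool = False) -> str:
    """Execute a command only if it is within the system's autonomous
    authority, or a human supervisor has explicitly approved it."""
    if command.hazard_level >= AUTONOMY_HAZARD_LIMIT and not human_approval:
        # If a human is not in charge of this decision, no one is: refuse.
        return f"REFUSED: '{command.action}' requires explicit human approval"
    return f"EXECUTING: '{command.action}'"

if __name__ == "__main__":
    print(execute(Command("survey_area", hazard_level=0)))      # runs autonomously
    print(execute(Command("release_payload", hazard_level=3)))  # refused
    print(execute(Command("release_payload", hazard_level=3), human_approval=True))  # approved
```

The point of the sketch is only that such permission checks are ordinary, verifiable software constructs: nothing about them requires waiting for future advances in AI.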

We recommend adding a section to the ACM Code that articulates these vital concerns. Suggested draft Professional Responsibilities paragraph 2.10 follows.

====================================================

Recognize potential risks associated with autonomy. Systems operating remotely or with minimal human supervision (for example, drones or driverless vehicles) may have the capacity for inflicting unintended lethal force. Safeguards, legal requirements, moral imperatives, and means for asserting direct human control must be considered in order to avoid the potential for unintended injury or loss of life due to emergent behavior by robotic systems.

====================================================

Our chosen wording of “must” vice “should” is intentional, since recognizing such risks meets thresholds described in [1] and failure to consider such issues is negligent.

Ethical constraints on robot mission execution are possible today. There is no need to wait for future developments in Artificial Intelligence (AI). It is a moral imperative that ethical constraints in some form be introduced immediately into the software of all robots that are capable of inflicting unintended or deliberate harm to humans or property.

Very respectfully submitted.

Don Brutzman, Bob McGhee, Curt Blais and Duane Davis
Naval Postgraduate School (NPS), Monterey California USA

[1] Don Gotterbarn, Amy Bruckman, Catherine Flick, Keith Miller, and Marty J. Wolf, “ACM Code of Ethics: A Guide for Positive Action,” Communications of the ACM (CACM), vol. 61 no. 1, January 2018, pp. 121-128.

[2] Richard W. Hamming, “One Man’s View of Computer Science,” ACM Turing Award Lecture, Journal of the ACM (JACM), vol. 16 no. 1, January 1969. https://dl.acm.org/citation.cfm?id=1283923

[3] Richard W. Hamming, Learning to Learn: The Art of Doing Science and Engineering, CRC Press, 1997.

[4] Don Brutzman, Curtis Blais, Duane Davis, and Robert B. McGhee, “Ethical Mission Definition and Execution for Maritime Robots under Human Supervision,” IEEE Journal of Oceanic Engineering (JOE), January 2018, http://ieeexplore.ieee.org/document/8265218