Code 2018 Draft 3: Section 1 discussion


#1

Please use this thread for discussion of the changes to Section 1 (the text of Draft 3 of Section 1 is included below). If you have a significant issue to discuss you can start a new thread about it and ping me (Bo Brinkman) and I will add a link to it below.

Draft 3 text

1. GENERAL MORAL PRINCIPLES.

A computing professional should…

1.1 Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.

This principle, concerning the quality of life of all people, affirms an obligation of computing professionals to use their skills for the benefit of society, its members, and the environment surrounding them. This obligation includes promoting fundamental human rights and protecting each individual's right to autonomy in day-to-day decisions. An essential aim of computing professionals is to minimize negative consequences of computing, including threats to health, safety, personal security, and privacy.

Computing professionals should consider whether the results of their efforts respect diversity, will be used in socially responsible ways, will meet social needs, and will be broadly accessible. They are encouraged to actively contribute to society by engaging in pro bono or volunteer work. When the interests of multiple groups conflict, the needs of the least advantaged should be given increased attention and priority.

In addition to a safe social environment, human well-being requires a safe natural environment. Therefore, computing professionals should promote environmental sustainability both locally and globally.

1.2 Avoid harm.

In this document, "harm" means negative consequences to any stakeholder, especially when those consequences are significant and unjust. Examples of harm include unjustified physical or mental injury, unjustified destruction or disclosure of information, and unjustified damage to property, reputation, and the environment. This list is not exhaustive.

Well-intended actions, including those that accomplish assigned duties, may lead to harm. When that harm is unintended, those responsible are obligated to undo or mitigate the harm as much as possible. Avoiding harm begins with careful consideration of potential impacts on all those affected by decisions. When harm is an intentional part of the system, those responsible are obligated to ensure that the harm is ethically justified and to minimize unintended harm.

To minimize the possibility of indirectly harming others, computing professionals should follow generally accepted best practices. Additionally, the consequences of emergent systems and data aggregation should be carefully analyzed. Those involved with pervasive or infrastructure systems should also consider Principle 3.7.

A computing professional has an additional obligation to report any signs of system risks that might result in harm. If leaders do not act to curtail or mitigate such risks, it may be necessary to "blow the whistle" to reduce potential harm. However, capricious or misguided reporting of risks can itself be harmful. Before reporting risks, a computing professional should thoroughly assess all relevant aspects.

1.3 Be honest and trustworthy.

Honesty is an essential component of trust. A computing professional should be transparent and provide full disclosure of all pertinent system limitations and potential problems. Making deliberately false or misleading claims, fabricating or falsifying data, and other dishonest conduct are violations of the Code.

Computing professionals should be honest about their qualifications, and about any limitations in competence to complete a task. Computing professionals should be forthright about any circumstances that might lead to conflicts of interest or otherwise tend to undermine the independence of their judgment.

Computing professionals often belong to organizations associated with their work. They should not misrepresent any organization's policies or procedures, and should not speak on behalf of an organization unless authorized to do so.

1.4 Be fair and take action not to discriminate.

The values of equality, tolerance, respect for others, and justice govern this principle. Computing professionals should strive to build diverse teams and create safe, inclusive spaces for all people, including those of underrepresented backgrounds. Prejudicial discrimination on the basis of age, color, disability, ethnicity, family status, gender identity, labor union membership, military status, national origin, race, religion or belief, sex, sexual orientation, or any other inappropriate factor is an explicit violation of the Code. Harassment, including sexual harassment, is a form of discrimination that limits fair access to the virtual and physical spaces where such harassment takes place.

Inequities between individuals or different groups of people may result from the use or misuse of information and technology. Technologies and practices should be as inclusive and accessible as possible. Failure to design for inclusiveness and accessibility may constitute unfair discrimination.

1.5 Respect the work required to produce new ideas, inventions, creative works, and computing artifacts.

Developing new ideas, inventions, creative works, and computing artifacts creates value for society, and those who expend this effort should expect to gain value from their work. Computing professionals should therefore provide appropriate credit to the creators of ideas or work. This may be in the form of respecting authorship, copyrights, patents, trade secrets, license agreements, or other methods of assigning credit where it is due.

Both custom and the law recognize that some exceptions to a creator's control of a work are necessary for the public good. Computing professionals should not unduly oppose reasonable uses of their intellectual works. Efforts to help others by contributing time and energy to projects that help society illustrate a positive aspect of this principle. Such efforts include free and open source software and other work put into the public domain. Some work contributes to or comprises shared community resources. Computing professionals should avoid misappropriation of these resources.

1.6 Respect privacy.

The responsibility of respecting privacy applies to computing professionals in a particularly profound way. Therefore, a computing professional should become conversant in privacy's various definitions and forms.

Technology enables the collection, monitoring, and exchange of personal information quickly, inexpensively, and often without the knowledge of the people affected.

Computing professionals should only use personal data for legitimate ends and without violating the rights of individuals and groups. This requires taking precautions to prevent unauthorized data collection, ensuring the accuracy of data, and protecting it from unauthorized access and accidental disclosure. Computing professionals should establish transparent policies and procedures that allow individuals to give informed consent to automatic data collection, review their personal data, correct inaccuracies, and, where appropriate, remove data.

Only the minimum amount of personal information necessary should be collected in a system. The retention and disposal periods for that information should be clearly defined, enforced, and communicated to data subjects. Personal information gathered for a specific purpose should not be used for other purposes without the person's consent. Computing professionals should take special care for privacy when data collections are merged. Individuals or groups may be readily identifiable when several data collections are merged, even though those individuals or groups are not identifiable in any one of those collections in isolation.

1.7 Honor confidentiality.

Computing professionals should protect confidentiality unless required to do otherwise by a bona fide requirement of law or by another principle of the Code.

User data observed during the normal duties of system operation and maintenance should be treated with strict confidentiality, except in cases where it is evidence of the violation of law, of organizational regulations, or of the Code. In these cases, the nature or contents of that information should not be disclosed except to appropriate authorities, and a computing professional should consider thoughtfully whether such disclosures are consistent with the Code.

Related threads


Code 2018 discussion - Start here!
#2

It seems like Principles 1.1 and 1.2 are really very close, to the point where I’m not sure why they are two different principles. It seems like they could be combined into one section, which would probably reduce a good chunk of the verbosity that makes this document incredibly dense. The theme across both is very similar: do not cause harm, considering how computing technologies touch all people, even if they are unaware.

I also don’t like how Principle 1.2 cross-references another Principle. Each Principle should stand on its own, and then the Code as a whole should stand on its own. I also find it interesting that a general principle for all refers “those involved with pervasive or infrastructure systems” to a Principle that (when read in context) is focused on leaders.

In Principle 1.3, I’d like to see bribery explicitly called out. In some parts of the world, bribery is not illegal (so it would not be covered by the Principles that refer to the law). However, in my experience, it appears in nearly every code of ethics of the organizations I’ve been a part of, and it is of great concern to companies doing business. It makes sense to call it out, I believe.

I disagree with the second sentence of the second paragraph of Principle 1.5. “Reasonable” and “legal” uses of intellectual works aren’t always the same, and what is reasonable is open to interpretation. What I’m reading says that there are instances where a use of intellectual property is illegal but reasonable, and that the computing professional who created the work should not oppose that use. As a content creator, I cannot follow or endorse a code that tells me I cannot oppose uses of my work because they may be “reasonable” in someone’s eyes. This is compounded by the fact that, as an ACM Member, I am held to this code. It seems to say that if I oppose an illegal but reasonable use of my work, the ACM can take action against me.

Overall, I like 1.6. I just think it’s rather low on the list. For something so important, it seems like it’s almost an afterthought. More attention should be paid to the ordering and grouping of these principles, I think, to help build a more cohesive document.

There is a missing principle: accepting responsibility for one’s work and, in some cases, for the work done under one’s supervision. Principle 2.3 mentions accepting responsibility for instances where one chooses to violate a rule or law to align with ethical decision making. However, every professional should take full responsibility for everything that they do - every document written, every line of code or test written, every phone call with a user or client. I don’t see this anywhere in the Code. I feel that this is a basic Principle that makes sense for Section 1. Extending this, professionals should also accept responsibility for communicating about and correcting mistakes that are made.


#3

In light of Section 1.4, I find the ACM’s choice to invite Robert Martin to speak on a webinar very interesting. Although he has made significant contributions to software engineering - co-authoring the Agile Manifesto, advocating for craftsmanship, contributing to software design and architecture - he also has a reputation for repeated sexist remarks in talks as well as in writing.

In 2014, he wrote a blog post describing not one, not two, but three instances where he made remarks that listeners construed as sexist and for which he had to apologize.

In the summer of 2017, he wrote two blog posts here and here expressing support for James Damore, the Google engineer fired for writing a memo against Google’s diversity programs. In a later blog post, not only does he express support for Damore, but he also opposes the disinviting of Doug Crockford from a conference following sexist remarks.

After these remarks and events, I’ve decided not to buy any of Martin’s books. However, there’s also some apparent controversy surrounding Clean Architecture: see Martin’s blog post here. And this isn’t even the first time someone has had a problem with the content of a book - content that has plenty of time for review and editing by multiple people, which may not be the case for off-the-cuff remarks at conferences.

Part of this also touches on Section 4 of the Code - upholding and promoting the Code. I realize that this Code isn’t in effect yet, but it has been in circulation long enough. I would hope that the ACM would start by setting good examples that are consistent with the Code. If the ACM is saying that this is a good code that all computing professionals should follow, then the ACM should take the lead in living up to it and set an example.


The Programmer's Oath
#4

I object to putting social justice language into the code. We’re a professional organization of computing professionals of all political orientations. We should strive to be inclusive of members of all manner of conservative and liberal political persuasions.

With Draft 3, however, having a conservative or libertarian orientation will be grounds for dismissal from the ACM and possibly grounds for dismissal at work. I think we can do better than this.

References and commentary:
1.4: "​Computing professionals should strive to build diverse teams and create safe, inclusive spaces for all people, including those of underrepresented backgrounds."

Safe spaces are psychologically harmful to the people they are trying to protect. Diversity is great, but there should be no such obligation in the code. This is pushing progressivism onto all computing professionals.

1.1: "Therefore, computing professionals should promote environmental sustainability both locally and globally.

This is pushing a progressive agenda onto all computing professionals.

1.1: “Computing professionals should consider whether the results of their efforts respect diversity, will be used in socially responsible ways, will meet social needs […] When the interests of multiple groups conflict, the needs of the least advantaged should be given increased attention and priority.”

Again, these goals are all laudable if you are a progressive, but have no place in a code of ethics for people of all political beliefs.

Do we only want progressives in the ACM? Are we going to expel all members who cannot in good conscience agree with the political beliefs espoused in this document?

These sections need to be rewritten to be more broadly inclusive of all political viewpoints.

These sections become especially worrisome when combined with section 3.4, which places an obligation on leaders to ensure that their policies are in line with this code. This means that all compliant organizations must adopt progressive politics or be out of compliance with the code.

We’ve already seen the behavior described in section 3.4 abused at Google, where managers awarded bonuses to people for making progressively oriented posts on their intranet against an employee who was perceived to be anti-progressive and who was later fired. This is troubling in the extreme. I think we can all agree that rewarding one political belief system over another is anti-inclusive, and therefore against the spirit of the code of ethics that we are striving for.

So I sincerely hope these sections will be reverted back to the Draft 2 wording, or broadened to be more inclusive of people of all political beliefs.

We are a professional organization, not a political organization, and our Code of Ethics needs to reflect that.


#5

An interesting counterargument, but, at the same time, I find it difficult to avoid bringing considerations of social justice into this code of ethics. After all, when we are creating systems that will, among other things, decide who belongs in jail, some consideration of social justice seems to be a prerequisite for ethical action. It would, after all, be unethical to send someone to jail simply for the color of their skin. How would you reconcile those two concerns?


#6

As an alternative, I might consider moving a bit of the content of section one into section two, while also emphasizing the broad reach of many software projects into the lives of even those who never use them directly.

Seconded; this is important.

This times a thousand. Responsibility for one’s actions needs to be a principle in this section of the document. It should come third, after avoiding harm and before honesty.


#7

Also, on 1.3 (honesty), I feel we should add a reminder that, when we do not know something, we should say so, using the words “I do not know”. The temptation to bullshit and then correct later is strong enough that the Code should actively say something about it here.


#8

Agreed.

The social justice language - recruiting ACM members to be not only responsible practitioners but social justice warriors as well - is concerning.

I have studied the current code and have used it for valuable guidance in my professional career.

The ACM is the premier academic computing society, and sjw-ism is currently all the rage in academia, but it is also a powerful engineering society - the Association for “Computing Machinery,” after all. Our machines are not only beautiful crystalline nodes on a worldwide circulatory system flowing with rivers of electrons, but also occasionally noisy, oily, smelly, dangerous machines. Machinery, not fluffy bunnies.

Given the current chaotic flux - conservative vs. progressive, R vs. D, SJW vs. ??? - and the almost daily shifts in political sentiment, it is impossible to know enough to make a sweeping change like this.

I recommend the current code refactoring effort be abandoned and revisited at a minimum of one year from now.


#9

Apparently, the software industry ignores the General Moral Principles, as all of them are consistently violated.
Here is how to know this: 3seas dot org/EAD-RFI-response.pdf

"To use an analogy, where is the painter’s ability to paint a rainbow if they are allowed only two of the three primary colors? Likewise, how well can an end user benefit from only having access to two of the three primary user interfaces?

The usefulness of a computer is based on how well it can automate and be used to automate!

The two standard user interfaces commonly available are the command line type of interface and the Graphical User Interface type. The third, but missing, standard user interface is the Applications, Libraries, and Devices side door port. The user oriented, easy to use Inter-Process Communication port.

As the end-user has access to all the functionality the programmers allow the user via the two primary interfaces, there is no reason to not also allow the user access to all this same functionality in a manner that allows the user to automate not only within an application but across applications, even allowing direct access to the functionality in function libraries and devices."

“While A.I. is so often compared to, and designed in an effort to emulate, human thought processes, end users have been denied the ability to do the same - to apply their own thought processes via automation. What insights about the bridge between computer functionality and human thought processes have been missed due to this denial of the end user?”
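To make the quoted idea of a third, user-facing automation interface concrete, here is a minimal, hypothetical sketch (not taken from the linked PDF; every name in it is illustrative) of an application exposing a couple of its existing functions over a local Inter-Process Communication port so that an end user could script them from outside the application:

```python
# Hypothetical sketch: an application exposing a small "automation port" over
# local IPC so an end user can script the same functions its GUI and command
# line already expose. All names are illustrative.
import json
import socketserver

# Functions the application already offers through its other interfaces.
COMMANDS = {
    "open_document": lambda path: f"opened {path}",
    "word_count": lambda text: len(text.split()),
}

class AutomationPort(socketserver.StreamRequestHandler):
    def handle(self):
        # One JSON request per line: {"command": "...", "args": [...]}
        request = json.loads(self.rfile.readline())
        func = COMMANDS.get(request.get("command"))
        if func is None:
            reply = {"error": "unknown command"}
        else:
            reply = {"result": func(*request.get("args", []))}
        self.wfile.write(json.dumps(reply).encode() + b"\n")

if __name__ == "__main__":
    # Listen only on localhost; the application's existing access controls
    # still govern what COMMANDS is allowed to contain.
    with socketserver.TCPServer(("127.0.0.1", 8765), AutomationPort) as server:
        server.serve_forever()
```

An end user could then automate across the application boundary from a shell or script, for example: echo '{"command": "word_count", "args": ["one two three"]}' | nc 127.0.0.1 8765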

Now here is the hard reality of this long-running ethics violation: either correct it, or suffer the consequences that will follow with this new component of Augmented Intelligence. Or is the goal to ensure bias in the continuation of “the way to become wealthy is to make people need you”?

With this in mind, consider: where have the “General Moral Principles” not been violated?

Security? All the security needed is already there, constraining end users’ access to selected functionality; but constraining end users’ ability to automate what they already have manual access to is not part of that security need.