That whole post was great.
Not only should this particular code not be enforced as the letter of the law, it really can't be. Beyond that, I would find it difficult to use some parts of this code even as the spirit of the law. There are things that I'm still not convinced I want to do, and things that I disagree with and choose to do (or not do) even though the code suggests otherwise. Of course, this is all based on my current interpretation of the wording - I understand that there is supplementary information, along with examples and cases, that demonstrates the true intent.
Personally, I see value in having some kind of Letter of the Law Code.
Most, if not all, professional organizations have one. Generally speaking, some broader communities or organizations do as well. I think it adds significant value to being a member. The same themes recur in these other codes - accepting responsibility, avoiding conflicts of interest, rejecting bribery, learning continuously, seeking and accepting criticism, opposing discrimination, avoiding injury, supporting colleagues. These are the things - and I'm sure there are others - that we can generally say "we must do" and that can be enforced as a letter of the law for members.
Specifically, I'm also a member of the IEEE. In exchange for my membership fee, I get access to learning resources (papers, books, magazines, certification programs), career development information and job postings, discounts on conferences, local meetups and meetings, etc. But I also get something else - the code of ethics. When I say that I'm a member of the IEEE, that means that I subscribe to their code of ethics and conduct: I accept responsibility for the decisions that I make and will disclose factors that impact the public, I avoid or disclose conflicts of interest, I am honest and realistic when making claims professionally, I reject bribery, I continually learn and seek to understand technology and its applications and consequences, I seek and offer criticism, I acknowledge my errors and give credit to other people, I don't discriminate, I avoid injury and harm by avoiding false or malicious action, and I support the people I work with professionally.
If I were to violate this code, I would diminish the reputation of the IEEE and of everyone who is a member in good standing, and I would hope that someone challenged me on it, even to the point of reporting me and following that process. I know that I would want members who did not live up to some minimum standard to be stripped of their membership. Having people who behave in what the group determines to be the right way is incredibly important to me.
My question is this: What is this minimum acceptable set of required behaviors for ACM membership?
If there isn't one, then I need to reflect on my purpose for membership. My primary reason for joining the ACM was the educational and professional development content (magazines, the Digital Library, etc.). But I also joined because membership meant something - the codes of ethics and conduct assured that if I told someone "I am a member of the ACM", they would know that I had committed to meet some minimum standard of behavior and had been doing so (in my case, since 2011).
As this code gets closer to release, I need to make a determination. Can I commit, to myself and to everyone else, to using this code as a basis for ethical decision making and to promoting it as a useful tool (which is required by Section 4.1 of the code itself)? Right now, I don't think that I can. I cannot stand up and say that I will do all of the things that this code tells me to do, and tell others that they should do the same. Since, at a minimum, I can't uphold or promote the code (given the sections that I disagree with), I would already be in violation of the provision that treats violations as inconsistent with ACM membership. The right thing to do, then, is simply not to be a member.
Now, the Death Star is a really interesting case study. I'd argue that a code of ethics that would lead to the engineers responsible for it being removed from a professional organization isn't a good code of ethics.
Throughout this process, I've been looking at other codes from other professional organizations (the IEEE Code of Ethics, the Software Engineering Code of Ethics and Professional Practice, the British Computer Society Code of Conduct).
In my interpretation, none of these other codes were violated by an engineer simply building the Death Star or a system on the Death Star. These codes would be violated if engineers didn't accept responsibility for their work or if they didn't work with their employer or client to understand all consequences of using the technology.
However, under the current wording, you may be able to argue that people merely working on the Death Star are in violation of this code of ethics. The overly broad statements in Principles 1.1, 1.2, 3.1, and 3.2 can be used, generally, to assert that many weapons systems may be unethical.
As someone who worked in the defense industry (a little over 5 years full-time, plus a couple of co-ops/internships), I would personally not work on an offensive weapon system (and never did), but I have no problem working on defensive or information-gathering systems (which I did). I recognized that some of the things I worked on, if improperly used, could (for example) diminish privacy. However, by law, the use of these technologies against targets on American soil was limited, and there was nothing inherently wrong with providing the capabilities. Once delivered, it was impossible for me to determine how customers and users were actually using the technologies provided.
Personally speaking, although I have no desire to work on offensive systems, I do hope that the people who do are some of the best engineers out there and are committed to the highest technical and ethical standards. Knowing that the technology exists, I would want it to be of the highest quality. I would want it to be generally safe for the people deploying it and designed in a way that minimizes harm to third parties. I'd also want someone who, if the technology were unsafe or illegal or particularly harmful (in any respect), would exhaust every legal option and then every ethical responsibility to inform people.
Going back to the Death Star: building a weapon is not unethical. Depending on your perspective, leaking the construction plans to an enemy organization - because you have information leading you to conclude that your customer is going to improperly use the technology - may also be ethical (I think it would be, especially under certain leaders who exercise unsound judgment). Admittedly, the SE Code of Ethics and Professional Practice doesn't account for whistle-blowing (which may not have been a big concern in the late 1990s).
Generally speaking, any code of ethics needs to account for people who may work on technology that others find ethically objectionable. Assuming the project is legal, the technology or product doesn't violate any law simply by existing; the engineers' responsibility is to ensure that management and customers are aware of the applications and consequences of using the technology.