Principle 2.8 - Avoid unauthorized access


The following is the Code 2018 Draft 2 version.

If you wish to contribute to this section: Reply to this post with your suggested re-wording and a justification for your proposed changes.

2.8 Access computing and communication resources only when authorized to do so.

This principle derives from Principle 1.2 - "Avoid harm to others." No one should access or use another's computer system, software, or data without permission. One should have appropriate approval before using system resources, unless there is an overriding concern for the public good. To support this clause, a computing professional should take appropriate action to secure resources against unauthorized use. Individuals and organizations have the right to restrict access to their systems and data so long as the restrictions are consistent with other principles in the Code (such as Principle 1.4).




Authorization reflects the organizational hierarchy of authority vested in colleagues. Authentication then usually follows authorization: it verifies that the individual requesting access is in fact the person to whom that authority was granted.
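The relationship described above can be sketched in a few lines of Python. This is a minimal illustration, not a real access-control system; all names (roles, users, credentials) are hypothetical, and real systems store salted password hashes rather than plaintext.

```python
# Roles ordered by authority; a higher rank inherits the access of lower ranks.
ROLE_RANK = {"intern": 0, "employee": 1, "admin": 2}

# Authorization records provisioned in advance by the organization
# (the "organizational hierarchy" of vested authority).
GRANTS = {"alice": "admin", "bob": "employee"}

# Credentials used for authentication (illustrative only -- never store
# plaintext passwords in a real system).
CREDENTIALS = {"alice": "s3cret", "bob": "hunter2"}

def authenticate(user: str, password: str) -> bool:
    """Verify the requester's identity."""
    return CREDENTIALS.get(user) == password

def authorized(user: str, required_role: str) -> bool:
    """Check that the user holds sufficient vested authority."""
    granted = GRANTS.get(user)
    if granted is None:
        return False
    return ROLE_RANK[granted] >= ROLE_RANK[required_role]

def access(user: str, password: str, required_role: str) -> bool:
    """Grant access only when identity and authority both check out."""
    return authenticate(user, password) and authorized(user, required_role)
```

Here authorization is decided from records provisioned ahead of time, while authentication is the per-request check that the right individual is exercising that authority.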

I suggest the following change in the wording of the Principle 2.8:

One should have appropriate approval that facilitates assurance, availability and accessibility of the system resources, unless there is an overriding concern for the public good.


I agree, because this happens every day in the high-tech world, where seasonal workers, temp workers, and full-time employees have varied levels of access encoded on their access cards, such as HID or RFID badges.


Clause 2.8 effectively prohibits white hat hacking, which is definitely a valuable practice if done responsibly.

I strongly recommend removing this clause.


I think that is the entire reason for the clause stating "unless there is an overriding concern for the public good."


I don’t think appeals to the “public good” can help these matters. I’ve elaborated here.


I share your concern about this code making white hat hacking a problem. I'm intimately familiar with all the pros and cons of this debate, especially in the Netherlands, where the practice is now legal (within certain guidelines, etc.).

Appealing to the “public good” is an easy out, but I don’t really see an easy way to describe this more elegantly. The other option would be to explicitly call out the practice, and explain that the code applies to finding and reporting vulnerabilities.

On the other hand, while reporting vulnerabilities is becoming more acceptable, the debate over how to implement it is still ongoing in many countries. The current tendency is to resolve this by having companies publish disclosure policies, which set guidelines and restrictions. These policies also explicitly give security researchers permission to attempt to access systems.
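One increasingly common way to publish such a disclosure policy is a security.txt file (RFC 9116), served at /.well-known/security.txt on the company's site. A minimal sketch, with placeholder addresses:

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/disclosure-policy
Preferred-Languages: en
```

The Policy URL would point to the full document spelling out what testing is permitted, which is exactly the kind of explicit authorization this thread is discussing.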


Thanks for the post, and definitely thanks for the link and the explanation.