If it's possible, it's allowed
Last week I had a meeting with a customer who is in the process of implementing a new configuration management system. We had a tough discussion about building in security measures to protect the system against malpractice by software developers.
Surprisingly, they were very much concerned about deliberate, intentional misbehaviour by their software developers. For them it was even more of an issue than unintentional, accidental faults. The argument was that intentional misbehaviour may be very difficult to detect, because the engineers will try to be extremely smart about hiding it. The risk they are concerned about is that after delivery of such changes, the integrator may not discover the problems through the build and smoke test. The problem is then propagated to the system level, and maybe even to the customer.
What they are trying to do is maximise the security mechanisms so that it is impossible for developers to bypass them. In my opinion, it is absolutely foolish to even try. First, you will never outsmart those saboteurs, and trying takes extreme costs and efforts. A better approach is to build in logging and detection mechanisms. Then, if someone runs into an unintentional malpractice, you can build in a protection mechanism, but if someone deliberately hacks the system, he should be warned first and fired next, regardless of his other competences and knowledge level. It is better to spend the money on finding the right people than on trying to secure the system against those terrorists.
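To make the logging idea concrete, here is a minimal sketch, not an implementation from the customer's system. Everything in it (the log file name, the record_change function, the example authors and files) is hypothetical; it only illustrates the principle of recording who delivered what, with a content fingerprint, so a suspicious change can be traced after the fact instead of trying to block it up front.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit trail: every delivered change set is appended to a
# log with the author, a timestamp, and a SHA-256 fingerprint per file.
logging.basicConfig(filename="cm_audit.log", level=logging.INFO,
                    format="%(message)s")

def record_change(author: str, files: dict[str, bytes]) -> None:
    """Append an audit record for a delivered change set."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "author": author,
        "files": {path: hashlib.sha256(data).hexdigest()
                  for path, data in files.items()},
    }
    logging.info(json.dumps(entry, sort_keys=True))

# Example: a developer delivers two files; the integrator can later
# verify exactly what was delivered, by whom, and when.
record_change("j.doe", {"config/build.yaml": b"target: release\n",
                        "src/main.c": b"int main(void){return 0;}\n"})
```

The point of the sketch is that detection is cheap compared with prevention: the log does not stop anyone, but it makes deliberate misbehaviour attributable, which is what the warn-then-fire policy needs.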
What do you think we should do about it?