
If NONE of these bugs were fixed prior to this disclosure, it will be really bad news and will hurt the reputation of most security software, though it will also reinforce many people's world-views. If these bugs all (or mostly) slipped under the radar, it's probably evidence of just how complex cryptography really is. If they caught these 'bugs' early and often, this is a wonderful story and a victory for properly managed open-source security software.
OpenBSD makes a point of their thorough and regular code audits, which is really why I want to see where this goes. Being open source doesn't necessarily provide an advantage in this case unless there are regular and thorough audits (if the whistle-blower is correct, then OpenBSD might have been compromised for years). If the devs of OpenSWAN decided to sneak in a backdoor, how many people would necessarily notice, especially if the devs were particularly crafty? Take the Debian bug that made SSH keys guessable: it wasn't malicious, but it was still released unnoticed by the developers (and was fairly egregious). Sure, you can audit the code, but how often is that done? And if you don't know what you're looking for, it can be hard to find. If it's open source, then anyone could potentially slip in a backdoor; if it's closed source, then the backdoor would have to be put there intentionally by the creator of the software. If the claims turn out to be true, it brings that assumption into question. Actually, I think it raises a broader question: what other projects has this been done to where we can't examine the source to verify the claims? At least with OpenBSD, that examination can be done; for closed-source software, it isn't possible.
One of the basic tenets of open source software is that it is inherently safer than proprietary software because of the transparency and the many people looking at it. To me, that auditability makes it more secure. To put it another way, I'll use the axiom "shit happens": given that, I'd prefer to be able to determine where and how that shit happens rather than not. If anything, this speaks to how the maintenance of the IPSEC stack has been managed. The fact that the workflow is such that things like this get missed doesn't truly make the practice of open-source development any less secure than proprietary development. Whereas thiago_pc's question couldn't necessarily be validated at the time of the allegation, here the code can be analysed and the issue resolved within a reasonable time frame.

In the end, it's humans writing the code and humans auditing the code (for the most part) and humans managing the workflow process for the code to move into the distribution.

"If Perry's allegations prove true, the presence of FBI backdoors that have gone undetected for a decade would be a major embarrassment for OpenBSD."

I see this as a broader issue (and not even getting into the politics and motivation behind it): one of the basic tenets of open source software is that it is inherently safer than proprietary software because of the transparency and so many people looking at it. If the claims turn out to be true, it brings that assumption into question. It may be that everyone assumed that because anyone could audit it, someone else already had. It will definitely be interesting to see what people have to say about it, whether the backdoors are really there, and at what point they might have been disabled or removed by code changes. If they didn't last long, that would strengthen the open source claims (i.e. subsequent developers saw code that didn't seem tight enough, or focused enough, or secure enough, and changed it, even if they didn't realize that the weakness was intentional).

If this were true, I imagine OSS would be inherently bug-free, too. I'm not certain that it's even been assumed to be inherently safer. That may be true to some degree; however, I would argue that it is safer to have the ability to audit the code.
