How are open source programs safe? Doesn’t open source make it easy for hackers to find vulnerabilities or contribute malicious code?

Perhaps I need a better understanding of what open source means, but how can a program that openly publishes its code not be super vulnerable to cyber security threats? That’s like a bank publishing exactly how all its security works, right? Obviously I’m missing something here, so ELI5!

44 Answers

Anonymous 0 Comments

It’s exactly like the tale of the cake, the apple, and the panda… or was it a watermelon? In any case, you get the point, no need for extra details.

Anonymous 0 Comments

Anyone can submit patches for security vulnerabilities when a program is open source, so just as many people (or more) contribute to its security as there are people looking to exploit it.

Anonymous 0 Comments

The situation you are thinking of is called “security through obscurity”: a situation where the security of a system depends on a bad actor not understanding it.

Typically in information security, relying on security through obscurity is not considered safe enough. You want security systems that remain safe even if someone knows exactly how they work.

Basically, it’s the difference between locking your front door with a commercial lock and relying on a burglar not finding the door because you planted a bush in front of it. With the lock, it’s public knowledge that a key is needed, but unless the burglar has your key, that knowledge isn’t helpful to them (lockpicking notwithstanding).
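
To make the lock analogy concrete, here is a minimal Python sketch (my own illustration, using only the standard library; the key and message are made up). HMAC-SHA256 is an open, fully documented algorithm, yet reading the code doesn’t help an attacker who lacks the secret key:

```python
import hashlib
import hmac
import secrets

# The "house key": generated randomly and kept secret.
key = secrets.token_bytes(32)
message = b"transfer $10 to account 42"

# Anyone can read exactly how this authentication tag is computed...
tag = hmac.new(key, message, hashlib.sha256).digest()

# ...but without the key, an attacker cannot forge a valid tag for a
# tampered message, so verification fails.
forged_tag = hmac.new(b"guessed key", b"transfer $9999 to attacker",
                      hashlib.sha256).digest()
print(hmac.compare_digest(tag, forged_tag))  # False
```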

As far as contributing malicious code goes, open source projects have review processes. It’s not that easy to slip something malicious into an established project, at least.

Anonymous 0 Comments

Because obscurity is the worst form of security. It is relatively trivial to turn closed-source binaries into something human-readable, so you gain very little security by keeping the source closed. On the other hand, the more people who examine the code, the more likely it is that issues will be discovered, so in that way open source can in fact be more secure than keeping things hidden.
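
As a rough illustration (using Python bytecode disassembly as a stand-in for the decompilers and disassemblers used on native binaries), even code shipped without its source is readable in practice:

```python
import dis

def check_pin(pin: str) -> bool:
    # The "secret" PIN is hard-coded and the source is never shipped...
    return pin == "1234"

# ...yet disassembling the compiled bytecode reveals the constant '1234',
# much as a native disassembler would reveal it in a closed-source binary.
dis.dis(check_pin)
```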

Anonymous 0 Comments

Open source can be improved by anyone at any time, and you can see what it is doing.
With closed source, only the owner can improve it, and you can’t know what it’s doing or collecting.
Closed source isn’t any harder to hack than open source, and is often easier; the only difference is that you can’t read the assembly or binary yourself.

Anonymous 0 Comments

If exposing code makes threats more likely, then the code is flawed to begin with.

For instance, knowing how passwords are managed in a piece of software isn’t going to help you hack it if that solution is solid. What isn’t great is NOT knowing how passwords are managed, and that’s how closed-source software operates. It could be implemented poorly, and you wouldn’t know. The vendor has little incentive to fix it, since they delude themselves into thinking they can trust their employees and that they are safe behind closed software. They can’t, and they aren’t.
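
As a concrete example, here is a hedged sketch of a solid, completely public way to handle passwords, using PBKDF2 from Python’s standard library (the salt size and iteration count are just illustrative choices). Knowing every line of it doesn’t help an attacker, because the security rests on the user’s password and a random salt rather than on secrecy of the code:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    # The algorithm (PBKDF2-HMAC-SHA256) and its parameters are public;
    # only the password is secret, and the salt is random per user.
    salt = salt if salt is not None else os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    # Recompute the hash with the stored salt and compare in constant time.
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, stored_key)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess123", salt, stored))                      # False
```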

Anonymous 0 Comments

There are two main things you’re missing here.

The first is that obscurity is not security, so hiding your security flaws only protects you until someone finds them, and they will virtually always be found eventually. Also, at least in terms of software, security isn’t a particular set of methods that become exploitable once exposed: secure software is software that is as close to bug-free as possible and doesn’t make any of the numerous exploitable errors (one such error is sketched at the end of this answer).

The second is that being open source also exposes your code to lots of other eyes that can spot those bugs and flaws and fix them. A community of security-minded programmers is also less likely to get locked into a particular set of assumptions, and so won’t keep missing the errors your own dev team misses over and over because it assumed they weren’t a problem. Different things stand out to different people for different reasons; the more eyes that look at your code, the more perspectives there are from which to spot things overlooked by people who are too invested in the project.
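
To make the first point concrete, here is a sketch (in Python, with the standard library’s sqlite3 module; the table and input are invented for illustration) of one classic exploitable error. The vulnerability comes from the bug itself, not from anyone being able to read the code, and the fix is equally visible to everyone:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled input

# Vulnerable: building SQL by string formatting lets the input rewrite the
# query, whether or not the source code is public.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print("string-built query matched:", rows)       # matches every row

# Fixed: a parameterized query treats the input purely as data, so reading
# this code gives an attacker nothing to work with.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query matched:", rows)      # matches nothing
```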

Anonymous 0 Comments

You seem to think that vulnerabilities MUST occur in code. Not true. It is possible to write code with no (as-yet discovered) vulnerabilities, but it is easy to accidentally include one. That leads to the conclusion that the more eyes on the code, the less likely accidental vulnerabilities are to slip through, which fits open source neatly.

Anonymous 0 Comments

Here’s a very simple analogy.

You can have a hidden safe in your house for which instructions on how to pick it are publicly available. However, a burglar still needs to get into your house and find out that you have that specific safe (with goodies in it) before that knowledge becomes a problem.

TLDR: multiple types of security

Anonymous 0 Comments

Say you built a tree house and said anyone in the neighborhood could use it if they helped maintain it. A few people would find flaws in it, maybe break a board off here or there. But a few people would fix it, add trim and paint, and improve it. That’s open source. You’re worried that problems exist at all; open source programmers fix problems when they find them. Open source users aren’t all criminals; some of them are building inspectors who improve treehouses.