How are open source programs safe? Doesn’t open source make it easy for hackers to find vulnerabilities or contribute malicious code?


Perhaps I need a better understanding of what open source means, but how can a program that openly publishes its code not be super vulnerable to cyber security threats? That’s like a bank publishing exactly how all its security works, right? Obviously I’m missing something here, so ELI5!



Anonymous 0 Comments

Well yes, but also no. If you were to share how your home security system works, that of course in theory makes it easier for burglars to break into your house, both by revealing how the security works and potentially revealing any weaknesses you might not know about. But suppose the reason you reveal it publicly is that there are hundreds and hundreds of independent people who, for some reason, have an interest in making sure your home is secure and wish to collaborate with you on doing that. Then they’re all checking your work and each other’s work to make sure those vulnerabilities get fixed. So there are advantages and disadvantages. Large open source projects that are of interest to many different parties probably don’t have to worry about this problem as much, while smaller ones definitely do.

Anonymous 0 Comments

Yes, it makes it easy for people to find vulnerabilities; that’s the idea. Companies like Google have a system to pay people if they find these vulnerabilities, so they don’t use them for bad stuff. Being open about bugs is a good thing; otherwise a flaw just becomes an exploit being sold on the darknet.

And contributors are public: anyone can track down who changed what line, and every change is reviewed by multiple other people.

The core idea is to be transparent and not use security by obscurity to secure your stuff.

Anonymous 0 Comments

It makes it easier for *everyone* to research how the program works. That means they can verify that the developer is being honest about what the program does, and they can also find vulnerabilities and help get them fixed. Imagine having a thousand people poking around at your product trying to make it better instead of just a small team of twenty who might be tired, overworked, and prone to overlook things. It is a popular scheme: it builds a sense of community, and companies will even pay for reports of vulnerabilities (Google is known to pay out thousands and thousands of dollars in bounty money for finding these things).

Anonymous 0 Comments

The basic idea is that of many eyes looking at the code.
Making sure a program is secure is quite hard and takes a lot of time. In open source, the argument is usually that (at least for popular programs) if everyone can easily look for security issues, they will also be found by honest people who report them to the devs, because most people are not actively malicious. In closed source, only the people the company pays can look at the code in an easy way. Everyone else has to work from the binaries, which is way harder, so that is more likely to be done by people who stand to gain something, i.e. malicious people.

In practice this argument doesn’t necessarily work out to make open source more secure than closed source, but it holds up well enough that it isn’t worse.

Anonymous 0 Comments

Open source being safer than closed source comes down to the idea that security by obscurity is bad practice. In other words, security researchers believe it is insecure to rely on nobody knowing your code to keep it safe, because that would mean the second someone gets your code, all your defenses are broken.
Open source is the “extreme” result of this thinking. By showing everyone your code, you have to make it so secure that even if everyone knows your code it is still secure. Basically, you can’t slack off on security, because it will be found out very fast. Another advantage of some open source projects is that other people can contribute to the project, and thus more people will find flaws in the code faster than if only the original developer worked on it.
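
To make the contrast concrete, here is a minimal, made-up Python sketch: the toy “obscure” encoding is only safe while the source stays hidden, while the keyed HMAC stays safe even with the source published. The function names and the mask value are purely illustrative.

```python
import hashlib
import hmac
import os

# Toy "security by obscurity": the only protection is that nobody knows
# the hard-coded mask. Once the source is public, it protects nothing.
OBSCURE_MASK = 0x5A  # secret only as long as the code is secret

def obscure_encode(data: bytes) -> bytes:
    return bytes(b ^ OBSCURE_MASK for b in data)

# Kerckhoffs-style design: the algorithm (HMAC-SHA256) is completely public;
# security rests only on the key, which never appears in the source.
def tag_message(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

key = os.urandom(32)               # secret key generated at runtime, not in the code
print(obscure_encode(b"hello"))    # trivially reversible once the mask is known
print(tag_message(key, b"hello"))  # stays secure even with the source published
```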

Anonymous 0 Comments

In addition to the other answers, it’s also worth mentioning a bit about how computer security works.

Open source projects don’t publish the lock and the key, so to speak. It’s more like publishing a process for creating secure locks, but the user has to provide the key. And these lock-making processes are designed to be secure even if you know the process.

(This applies to all programs, really, not just open source. You should never have “keys” in your source code, only the locks.)

So reading the source code (in theory) doesn’t make it any easier to hack the program, because hackers don’t have the key, and the lock-making method has (hopefully) been checked by lots of different people to ensure it’s secure.
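
As a hedged sketch of what that looks like in practice (the variable name `API_KEY` is just an example, not from any particular project): the open source code only ever contains the “lock”, and the “key” is supplied by whoever runs the program.

```python
import os

# The "lock" is the open source code itself: anyone can read and audit it.
# The "key" is a secret supplied at runtime by whoever deploys the program.

# Bad: the key is baked into the published source for the whole world to read.
# API_KEY = "s3cr3t-value"

# Better: read it from the environment (or a secrets manager) when the program runs.
API_KEY = os.environ.get("API_KEY")   # "API_KEY" is a hypothetical name
if API_KEY is None:
    raise RuntimeError("Set the API_KEY environment variable before running.")
```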

Anonymous 0 Comments

Do you want your developers to write code that is only seen by their colleagues, or do you want them to write code that can withstand scrutiny by the whole world?

Anonymous 0 Comments

Make it easy for hackers to find vulnerabilities: yes, but we accept this trade-off because we also get a ton of people who find vulnerabilities and then propose fixes.

Contribute malicious code: not really, since code contributions are reviewed by a trusted circle.

At the end of the day it’s kind of yes on both counts, which is why either (1) open source is not used for particularly sensitive applications, or (2) the open source code in question is very widely scrutinized stuff that we feel good about.

For example, if you open source a very good algorithm to compute the determinant of a matrix and someone else runs it somewhere, no attack vector is being introduced anywhere.
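
As a sketch of what such a routine might look like (plain Python, cofactor expansion, not any particular project’s code): it only turns the caller’s input into a number, so publishing it gives an attacker nothing to work with.

```python
def determinant(matrix):
    """Determinant via cofactor expansion along the first row.

    Pure computation on the caller's data: no secrets, no network access,
    nothing for an attacker to target in the code itself.
    """
    n = len(matrix)
    if n == 1:
        return matrix[0][0]
    total = 0
    for col in range(n):
        minor = [row[:col] + row[col + 1:] for row in matrix[1:]]
        sign = -1 if col % 2 else 1
        total += sign * matrix[0][col] * determinant(minor)
    return total

print(determinant([[1, 2], [3, 4]]))                    # -2
print(determinant([[2, 0, 1], [1, 3, 4], [0, 5, 6]]))   # 1
```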

Anonymous 0 Comments

To answer your question about malicious users installing bad code: usually the original author of a program will “digitally sign” the compiled build of the open source code. This ensures that you are getting the program from the original author and not from someone who has used the open source code to create a malicious fake copy of it.
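
A rough sketch of the idea using the third-party Python `cryptography` package (real projects typically sign releases with tools like GPG instead; the release contents here are made up):

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The author keeps the private key; users only ever see the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

release = b"bytes of the compiled release artifact"   # placeholder contents
signature = private_key.sign(release)                 # published next to the download

# A user checks the download against the author's public key.
try:
    public_key.verify(signature, release)
    print("Signature valid: this build really came from the author.")
except InvalidSignature:
    print("Signature invalid: tampered with, or built by someone else.")
```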

Anonymous 0 Comments

The best (readily available) encryption algorithms are designed so that it is mathematically infeasible to recover the data unless you have the key. Think of it like baking a cake: everyone has the recipe, but no one can tell whether you put the salt or the sugar into the mix first unless you tell them.
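
As a rough illustration with the third-party Python `cryptography` package (the plaintext here is just made up): the “recipe” (the Fernet scheme) is fully public, and only the key is secret.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # the one thing that stays secret
cipher = Fernet(key)

token = cipher.encrypt(b"cake: sugar first, then salt")
print(cipher.decrypt(token))         # only the right key recovers the data

other = Fernet(Fernet.generate_key())
try:
    other.decrypt(token)             # knowing the public algorithm is not enough
except InvalidToken:
    print("Wrong key: the data stays unreadable.")
```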

Open source code is also constantly reviewed by a lot of independent experts to make sure it’s secure and reliable, so you can refer to hundreds of expert opinions or even your own knowledge and research. If there is a security issue, anyone can send in a solution, and the developers can review the submissions and choose the one they think is best, or use their own. For proprietary software, the publisher says “I promise it’s secure, honest”; maybe they add some actual references, but there is no 100% transparent way for you to make sure it does what it says it does.

Most proprietary software actually includes a lot of open source parts, which also means that whoever makes the proprietary part has to review the open part for security to make sure their own product is secure.

For example, Chromium is an open source browser that most browsers are based on (Chrome is the closest to the original, but Edge, Opera and many others are also built on it), so all of these companies work to make sure the Chromium project works as well as it can.

Android is also open source. Google adds their services to it, and then phone manufacturers add theirs, which aren’t open source, but they all rely on the open source base, so that base has to be solid and secure before these companies even touch it.