How are open source programs safe? Doesn’t open source make it easy for hackers to find vulnerabilities or contribute malicious code?

Perhaps I need a better understanding of what open source means, but how can a program that openly publishes its code not be super vulnerable to cyber security threats? That’s like a bank publishing exactly how all its security works, right? Obviously I’m missing something here, so ELI5!

Anonymous 0 Comments

It’s been said here, quite rightly, that by and large the open source community has an interest in scrutinising and contributing to making these programs as safe as possible. Contributors also use the same programs, so they want them to be safe for themselves as well as for you.

The short answer underpinning this is **not everyone is an evil d—head aiming to screw you over**. There’s a lot of kindness going on in the community.

Anonymous 0 Comments

Imagine a country like the US. Even if you know how their defenses work, or where the defenses are located, that does not mean it’s easy to attack them. Even if you know where all the nukes are located, a preemptive strike is not easy, because they have radars, surface-to-air missiles, and other systems in place that would make it impossible for you to hit those targets.

On the other hand, a country like North Korea is very secretive, but as you can imagine, you could brute-force your way into the country if that were deemed necessary. Even if the locations of their defenses are secret, they are not impenetrable, because the defenses themselves are not strong enough.

On top of this, imagine that the US makes a bet: you can attack me for one week and I will not hit back. Not only that, if you manage to touch a certain building, I will give you a prize. Well, you might find a way to do it, and the damage would be minimal (since it is practice mode), but the country would discover a weak spot which can be hardened (or watched) in case of a real threat. Do this several times and all the points which are easy to exploit get handled. That doesn’t mean there are no new weak spots, just that they are harder and harder to exploit.

Anonymous 0 Comments

It’s only vulnerable if there are actually flaws in the security! In your example, imagine a bank so secure that it publishes its security system so that thousands of people can learn from it and improve it for other banks. They don’t include any of the passcodes, just the prize-winning design of the system, which is considered undefeatable without passcodes and biometrics. Thousands of bank security experts, after all, have found no way in after years of actively looking!

As to the second point: you can’t just upload code; it has to be approved by someone the project trusts.
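To make that concrete, here’s a toy sketch in Python (not any real platform’s API; the names `Patch`, `TRUSTED_MAINTAINERS`, and `try_merge` are made up for illustration) of the gate most projects put in front of their code: a proposed change stays unmerged until someone the project already trusts has reviewed and approved it.

```python
# Toy model, not any real platform's API: a change is only a proposal
# until a reviewer the project already trusts signs off on it.

class Patch:
    def __init__(self, author, diff):
        self.author = author
        self.diff = diff
        self.approvals = []                  # reviewers who approved this change

TRUSTED_MAINTAINERS = {"alice", "bob"}       # hypothetical maintainer list

def try_merge(patch):
    """Merge only if at least one trusted maintainer has approved."""
    if TRUSTED_MAINTAINERS & set(patch.approvals):
        return "merged after review"
    return "rejected: no trusted reviewer has approved this change"

change = Patch(author="random_stranger", diff="+ fix_typo_in_docs()")
print(try_merge(change))                     # rejected: nobody trusted has looked yet
change.approvals.append("alice")             # a maintainer reads the diff and signs off
print(try_merge(change))                     # merged after review
```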

Edit: A good starting point for more, and quite approachable, is Eric S. Raymond’s _The Cathedral and the Bazaar_, which is summarized quite effectively on its Wikipedia page.

Anonymous 0 Comments

In theory, open-source should be safer because many people can look at the code and see problems and maybe fix them.

In reality, both open- and closed-source projects can be in good or bad shape. A project could have a single developer, no tests, and many problems lurking in the code. Or a project could have a lot of devs, good practices, testing, a QA department, etc.

A corp may have the resources to pay for good devs and good testing and good QA and fast bug-fixing etc.

Long-standing bugs have been found in open-source software that was heavily used:

https://www.theregister.com/2021/01/26/qualys_sudo_bug/ (10 years)

https://www.theregister.com/2021/06/11/linux_polkit_package_patched/ (7 years)

https://thehackernews.com/2022/08/as-nasty-as-dirty-pipe-8-year-old-linux.html (8 years)

Anonymous 0 Comments

With many aspects of commerce, security, testing or even logic… a crowd is almost always better. A larger sample size is better, since edge cases that any individual would miss get caught somewhere in the crowd.

You are absolutely right that disclosing the inner workings of a piece of software can create security issues, but in turn it creates more security benefits, compared to a proprietary system with fewer people able to access and test it.

If I reveal that I have a specific type of lock, I’m only at risk if that lock is bad to begin with… and people showing me that the lock is bad leads to me changing the lock. Also, I don’t reveal the key itself, just the lock type.

Also, open source projects usually push release candidates before features are pushed to production.

Basically, I show the lock I want to install on my house and ask, “What do you guys think? Could you test this lock?” Other people may already be using that lock before I even install it on my door.

Open source systems are not perfect, but they have fewer zero-day vulnerabilities than closed and hidden software, which you can only test by using it after it is released.

Anonymous 0 Comments

Yes, it does, and that is the idea. I mean not just for hackers, but for everyone. Everyone can see problems and can help fix them… Of course, that’s theoretical and only really matters if your project interests a large enough group; otherwise no one will really pay enough attention.

But the main thing, and I think the reason why it confuses you, is a typical mistake people have made a million times in the past, one that modern security professionals warn against: you equate security with obscurity. That’s the way it was done for a long time, but it’s a false friend. Obscurity does not increase security; it just makes it harder to detect a breach. Transparency and secure design are what make software truly secure, and you can achieve both with open source. In a large project that people are highly interested in, you will have thousands of people looking at your code, thousands that would otherwise be a dozen. And even though anyone can commit anything, only a few selected people, who are hopefully trusted, are allowed to actually merge those changes into the master branch.
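To show what that looks like in practice, here’s a minimal sketch using Python’s standard library, with HMAC-SHA256 standing in as the example of a fully public design (my choice of example, not something specific to any project): the algorithm is documented in the open and has been scrutinised for years, and the only thing kept secret is the key.

```python
# Minimal sketch of "public design, secret key" using only the standard library.
import hmac, hashlib, secrets

key = secrets.token_bytes(32)                # the ONLY secret in the whole scheme
message = b"transfer $100 to account 42"

# Anyone can read exactly how HMAC-SHA256 works, yet without the key
# they cannot produce a valid authentication tag for a message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print("auth tag:", tag)

# An attacker who knows the full design but has to guess the key gets nowhere.
forged = hmac.new(b"wrong key", message, hashlib.sha256).hexdigest()
print("forgery accepted?", hmac.compare_digest(tag, forged))   # False
```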

Anonymous 0 Comments

It makes it easier for people to find vulnerabilities, but you can’t really add malicious code to open source projects.

It’s incredibly easy to spot, and before code is added to the codebase, it’s usually checked by multiple people. Sometimes it’s also checked by a program that specifically tells you what vulnerabilities your code has and how to fix them.
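As a rough illustration of that kind of automated check (real projects use proper static-analysis tools; this toy scanner and its pattern list are invented just for the example), such a program might look only at the lines a change adds and flag a few obviously risky constructs:

```python
# Hypothetical toy scanner: flag risky patterns on lines a change adds.
RISKY_PATTERNS = {
    "eval(": "arbitrary code execution",
    "os.system(": "possible shell command injection",
    "password =": "possible hard-coded credential",
}

def scan_diff(diff_text):
    """Return (line_number, pattern, reason) warnings for added lines in a diff."""
    warnings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        if not line.startswith("+"):         # only inspect lines the change adds
            continue
        for pattern, reason in RISKY_PATTERNS.items():
            if pattern in line:
                warnings.append((lineno, pattern, reason))
    return warnings

diff = '+ user_input = input("query: ")\n+ eval(user_input)'
for lineno, pattern, reason in scan_diff(diff):
    print(f"line {lineno}: found {pattern!r} -> {reason}")
```

Real checks are far more sophisticated, but the principle is the same: every proposed change gets the same automated scrutiny before a human even looks at it.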

In short, it’s possible, but in reality, it almost never happens.

Anonymous 0 Comments

The answer is that it does, and it doesn’t.

For popular projects, you can have thousands of people looking at the code, with big companies like Google, Microsoft, Amazon, etc. even contributing code and fixes, and that makes it pretty safe because there’s a lot of oversight over what goes on.

However, there are other projects that are widely used but don’t get as much attention, so it’s easier for a vulnerability to go unnoticed.

Then there are projects that were hugely popular but that nobody really looks at anymore. There it is very easy to slip in a vulnerability, and all the users who upgrade now have that vulnerability.

So basically, open source is “safer” in proportion to how many eyes are on it.

Anonymous 0 Comments

If a bank only keeps your money safe by keeping secret how its security works, then the bank is unsafe the second someone figures it out. It’s a terrible security model: just using obscurity to hope people don’t find something out. Open source knows it can never rely on that, so it doesn’t.

Anonymous 0 Comments

You are right, it is “easier” for hackers to find vulnerabilities in open source code. However, security experts can also contribute to open source programs.

On the other hand, proprietary programs/programs with closed source code need to **pay/hire** auditors to secure their code.