Eli5: Why can’t open source software easily be hacked?

Typically a source code leak is a security risk. But with open source applications the code is available from the start. How do you prevent people from intruding when all the security measures can be plainly seen?

21 Answers

Anonymous 0 Comments

Knowing how a safety measure works and specifically what it does isn’t the same as knowing how to circumvent it.

Here’s a common example from the software industry: two-factor authentication (2FA).

We KNOW it’s a user-permission validation scheme that combines two things.

* a piece of data that’s provided by a hardware device (e.g. a “token”) or a time-sensitive software program (a “virtual token”), and
* a second piece of data that’s made up and memorized by a user.

The user needs to provide BOTH when logging into or connecting to a computer system or network or account. Without BOTH, they can’t get in.

Knowing what the data is on the token, and knowing how 2FA works, heck, even possessing the source code for the 2FA routines, doesn’t help break 2FA because *we still need that second piece of data*. There is **nothing in the source code** that helps us identify or obtain that second piece of data, and it provides no capability or “workaround” to get there.

So, we would still have to use brute force to complete the process 2FA requires to connect to a network or validate an account.

So the source code doesn’t help us, because it doesn’t give us what we need to complete the necessary process. Either that data is provided through some other means, or the source code is blocked from any sort of interaction with it… so having it isn’t a security threat.
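To make that concrete, here’s a toy sketch of a TOTP-style check, the kind of time-based code an authenticator app or hardware token generates. This is a simplified illustration built on Python’s standard library, not production 2FA code, and the function names are made up for this example:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238-style time-based one-time password (sketch)."""
    counter = int(at) // step                      # current 30-second window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, at: float) -> bool:
    # The server just recomputes the code. Nothing in this source
    # reveals `secret` -- without it, the code can't be produced.
    return hmac.compare_digest(totp(secret, at), submitted)
```

Even with this exact source in hand, producing a valid code still requires `secret`, which lives on the token or phone and on the server, never in the source code.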

Anonymous 0 Comments

> Typically a source code leak is a security risk.

This is mostly a problem when you’re relying on security by obscurity: your security mechanism is that people don’t know what’s going on. But that has never been a reliable security measure. A good programmer with interest, time and effort can figure almost anything out. It’s just code, and reading code is what we do for a job; not having the source just makes the job more annoying.

This can work in the program’s favor when the program is something specialized or rarely used. It’s less likely that somebody will bother. But if you have something popular like say, World of Warcraft, you better bet there’s all kinds of people poking at it, and the lack of source doesn’t help all that much with the security.

> But with open source applications the code is available from the start. How do you prevent people from intruding when all the security measures can be plainly seen?

A good security measure doesn’t depend on the secrecy of the code. E.g., Reddit’s old code is available. But if Reddit is written right, and the code says you can only log into my account by knowing a password, then seeing the logic of the login code doesn’t let you hijack my account. All you see is that it correctly checks passwords.
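“Correctly checks passwords” usually means the code stores only a salted hash, so reading the code (or even the stored hash) doesn’t hand you the password. A minimal sketch using Python’s standard library, with illustrative function names:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash). Only these are stored -- never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Recompute the hash from the submitted password and compare.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison
```

Anyone can read this code; the only way in is still knowing the password, because the hash can’t feasibly be reversed.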

Now if it had a flaw, then you could see it in the code. But not being able to see the code wouldn’t be a guarantee that nobody would run into it by accident, or by using domain knowledge and experimenting with known tricky conditions, or such other methods.

A lot of things in software don’t have that many ways of being done. A skilled programmer in a given field can make very reasonable guesses at what sorts of mistakes a developer could make when doing a thing, and run experiments to see if the application indeed breaks or not.

Anonymous 0 Comments

Open source code is more likely to be reviewed by many people, so vulnerabilities get discovered. Security mostly relies on mathematical problems that are hard to work around, not on obscurity. Vulnerabilities may appear if the software mistakenly lets a user write outside the expected memory region, which can be discovered in the source by hackers and security researchers alike. Proprietary software may have secrets that protect the software itself, but not necessarily the user’s data.

Anonymous 0 Comments

A source code leak should not reduce the security of your application. You should instead build a secure application from the start, so that even if an attacker has the source code, they cannot find any way to attack it without the secret keys. And attackers have the resources to reverse engineer machine code and even hardware chips, while regular researchers might not. So the idea of security in open source software is to let everyone look at the source code and find potential problems. That way you find more issues that could potentially be used by an attacker, and fix them before they can be exploited.

As an example, a lot of police and military services have switched to an encrypted radio standard called TETRA. All the source code and technical descriptions of it are kept secret and have even been made illegal to possess. So a security research group at a university, for example, is not allowed to study it. Recently such a group, working in the public interest, published their findings after studying the system on limited funds. They found a number of security issues and even a few practical attacks. These had been there for over 15 years and nobody knew about them, because nobody had studied the system closely enough. Not even the services who bought it. But it is fair to assume that some state actors would have put this much effort into researching TETRA. And they would probably have been able to spend a lot more resources on it, acquire radios and technical documentation in illegal ways, and deploy attacks against the system in the wild to test them.

If the source code had been available from day one, it would have been much easier to research. Research groups with limited resources would have been able to study it the same way state actors can. We would have found these issues at launch, when they could have been easily fixed, rather than years later, when the system has been widely deployed and might already have been exploited.

Anonymous 0 Comments

All the other answers are great; I will add a little historical context on why open source is actually safe.

Back in the early days of household computers and the internet, every company built its own “secure” implementation of ciphers. But as you may know, when you write something, you can make mistakes that you personally will not see (as you are the author), or that you miss because you do not properly understand the reasoning behind the solution. And so many in-house cipher implementations were done incorrectly, be it laziness, time crunch or missing information. In the end, ciphers are math and programming is IT (related, but not the same field :)). A code leak of such a bad implementation really would be dangerous.

After a few incidents, people started to realize that instead of closed in-house solutions, it is better to build security on known and reviewed mechanisms. These are safe because more people can check them. So this is why nowadays you can know exactly how the cipher mechanisms work and can use pre-implemented modules: the security does NOT rest on the fact that nobody knows how it works (in which case a code leak would be dangerous), but on the fact that the math (science) says it is safe.

Open source just takes this all the way. Most companies do not, because they have some fancy solutions they do not want to share, as those may give them a competitive advantage. E.g., YouTube may have a great implementation of video storage that it does not want others to use as well. A code leak would share that, but it would not be a security risk. (Hopefully. I don’t work at YouTube, so I don’t know.)
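A tiny illustration of the kind of subtle mistake in-house security code tends to contain, next to the reviewed standard-library alternative. This is a sketch with made-up function names, comparing a secret token:

```python
import hmac

def naive_check(token: str, expected: str) -> bool:
    # Home-grown: == can return as soon as one character differs,
    # so response timing can leak how much of the token was correct.
    return token == expected

def reviewed_check(token: str, expected: str) -> bool:
    # The vetted primitive compares in constant time.
    return hmac.compare_digest(token, expected)
```

Both return the same answers; the difference (how long the comparison takes) is exactly the kind of detail an author misses and a reviewer of a well-known, public mechanism has already thought about.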

Anonymous 0 Comments

There are basically two opposing approaches to security:

1. Closed source relies on making it harder for an attacker to learn about any potential security issues by, well, keeping the inner workings of the code secret.
2. Open Source relies on as many people as possible to have a look at the code and analyse it, and thus closing security issues before they can be misused.

Both approaches have their advantages and disadvantages. For closed source, it is really just a matter of time until an interested party analyses the software anyway, and then your “security by obscurity” falls apart. On the other side, there are also many “open source” projects where effectively only one person ever looks at the source code.

However, in most situations the open source approach has proven to be the safer and more reliable one, especially for software that is widely used and has a large developer community.

Anonymous 0 Comments

Here’s a building-plan metaphor I would perhaps use for an actual five-year-old.

Think of the source code as a map of a building. If the map shows a secret tunnel, or maybe a door that’s always open, or perhaps a low fence without lights… well that shows you a way in.

But if the plan shows good security, all doors safely locked and no unguarded way inside, then having the plans doesn’t really help you to break in.

Same with software. The source code is only a plan. The actual running application is somewhere else. Created from the code but living an independent life.

Source code leak is only a problem if it shows a flaw. If the source code is secure then it’s safe to show off.

Anonymous 0 Comments

Code leak is not a safety danger unless the code itself is a safety danger to begin with.

Security through obscurity is nonsense, properly secure software is built so it doesn’t matter if the attacker knows how it’s built, they still have no access.

Now, open source doesn’t make things perfect, but it does put all the embarrassing mistakes on display for the entire world to see, making it more likely they actually get fixed. With closed source these mistakes are usually swept under the carpet in the hope that nobody ever finds out. Sooner or later, someone always does, and then security through obscurity fails miserably.

Anonymous 0 Comments

Open source software can absolutely be exploited. The idea that open source software is inherently more secure is simply not true. It’s based on the premise that if everyone can view the source code, someone will come along and fix all of the bugs. Who do you think has more motivation: bored hobbyists sleuthing through someone else’s program code to find obscure vulnerabilities and fix them out of the goodness of their heart, or blackhats sleuthing through someone else’s program code to find obscure vulnerabilities and exploit them for monetary gain?

Virtually all current web browsers are either open source or based on open source underpinnings. However, browser vulnerabilities pop up all of the time. In fact, exploits found by looking at the source code for Apple’s WebKit rendering engine were used to exploit the system software on Sony’s Playstation 4 and 5 game consoles.

Really good open source software projects have paid developers with responsibility over specific parts of the code base. Having all eyes on a project is not good enough because those eyes don’t know what they’re looking for. Rather, it’s much more important to have the right eyes on the project.

Anonymous 0 Comments

Imagine you invent a new combination lock. You play with it, become convinced it’s uncrackable, and sell it on Amazon for $50.

No matter how smart you are, you have certain blind spots, and more people looking at it might get you some ideas on how else you could try breaking it without the code. 100 people make suggestions, most of which go nowhere, but *at least they’ve been tried*. The one person who comes up with a viable way to break it open is who you’re looking for.

Closed source software relies on the idea that “no one would buy one of these locks, take it apart, and look for weaknesses” or even “you can’t really take this lock apart at all!”

Open source software security comes from the idea that you can publish how the lock works, and many many people can look for weaknesses, which means your lock gets better over time.

So the idea is that even if someone knows EXACTLY how the lock is made, they can’t get in without the code that the user set. That’s a lot more reassuring than hoping that no one has taken it apart, right?