Why can’t OSes or browsers make a perfect sandbox to securely run applications?

Every few days, weeks, or months, it seems, there’s an announcement that this VM or that web browser has a newly discovered vulnerability that allows someone to execute arbitrary code outside of the intended “sandbox”. And while I am a software developer and understand the basic nature of some of these exploits (taking advantage of flaws, overwriting memory, etc.), I still don’t fully understand why there’s no way to create an environment in which escape is effectively or absolutely impossible, as opposed to semi-routine. Can anyone explain it to me?

Anonymous 0 Comments

Well, there are some points you can’t avoid. Your program has to run on real hardware, it has to receive user input, and it has to output results to someone, so your sandbox needs entry and exit points, which means it cannot be closed off completely.

The vulnerabilities can be super weird stuff that you wouldn’t even consider dangerous when building the system. As an example, there is a hardware exploit called “rowhammer” which abuses electrical interference inside DRAM to flip protected memory cells: by rapidly and repeatedly accessing (“hammering”) rows of memory, charge leaks into physically adjacent rows until their bits flip. To avoid that, you would have to prevent code from accessing memory cells that are physically close to protected ones. How could a programmer have foreseen that issue before someone discovered the exploit?
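To make the “hammering” idea concrete, here is a toy simulation (not a real exploit; the leak rate, flip threshold, and row layout are all invented for illustration) of how repeatedly activating one row could drain charge from a physically adjacent row until its stored bit flips:

```python
# Toy simulation of the rowhammer idea: repeatedly activating one DRAM row
# leaks a little charge into its physical neighbors, and enough leakage
# flips a bit. The leak rate and threshold are made up for illustration.

LEAK_PER_ACTIVATION = 0.001   # invented fraction of charge lost per activation
FLIP_THRESHOLD = 0.5          # invented point at which a stored 1 reads as 0

def hammer(rows, victim, aggressor, activations):
    """Activate `aggressor` repeatedly; drain charge from an adjacent `victim`."""
    for _ in range(activations):
        if abs(victim - aggressor) == 1:     # only physical neighbors are affected
            rows[victim] -= LEAK_PER_ACTIVATION
    return rows[victim] < FLIP_THRESHOLD     # True means the victim bit flipped

# Row charges: 1.0 represents a freshly written '1' bit in each row.
rows = {0: 1.0, 1: 1.0, 2: 1.0}

print(hammer(rows, victim=1, aggressor=0, activations=100))        # False: too few
rows[1] = 1.0                                                      # rewrite the bit
print(hammer(rows, victim=1, aggressor=0, activations=1_000_000))  # True: flipped
```

Real mitigations work at the hardware level (more frequent DRAM refresh, ECC memory, target-row-refresh) precisely because, as the answer says, software alone cannot easily reason about physical adjacency.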

Anonymous 0 Comments

It’s not necessarily impossible; it’s just very hard, and not as simple as going “absolutely no hacking, you mean hackers”.

A sandbox is only as powerful as the tools it gives to programs running inside it. If your sandbox doesn’t allow programs running in it to manage memory, then it can’t benefit from a well-memory-managed program. If your sandbox doesn’t allow programs running in it to take I/O from users, it can’t benefit from that I/O. If your sandbox doesn’t allow programs running in it to know about the hardware they’re running on, it can’t benefit from programs written to take advantage of that hardware. And so on.
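As a concrete sketch of “only as powerful as the tools it gives”, here is a minimal expression sandbox (a simplified illustration, not a production design): it walks the parsed syntax tree and implements nothing but a whitelist of arithmetic operators, so anything not explicitly granted (function calls, variable names, imports) simply cannot be expressed inside it:

```python
import ast
import operator

# A tiny capability-based sandbox: the only "tools" granted to code running
# inside it are three arithmetic operators. Everything else is rejected
# because it was never implemented, not because it was individually banned.
ALLOWED_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
}

def safe_eval(expr: str):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED_OPS:
            return ALLOWED_OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError(f"not allowed in this sandbox: {ast.dump(node)}")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 + 3 * 4"))                   # 14: arithmetic is a granted tool
try:
    safe_eval("__import__('os').system('ls')")  # calls and names were never granted
except ValueError as e:
    print("blocked:", e)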

But each bit of power you give to programs inside the sandbox creates surface area for attack. If programs inside the sandbox have fine-grained control of memory, they can attempt buffer overflows or other memory-management-based attacks. If programs inside the sandbox know about the hardware they’re running on, they can attempt hardware-specific attacks (like the well-publicized Spectre and Meltdown attacks, disclosed in 2018).
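The buffer-overflow idea can be sketched with a toy memory model (the flat layout, the slot positions, and the 0x4000 “return address” are all invented for illustration): a copy routine with no bounds check, like C’s `strcpy`, silently overwrites whatever sits just past the end of the buffer:

```python
# Toy model of a buffer overflow: memory is one flat list, a "buffer" is just
# a region of it, and writing past the buffer's end silently clobbers the
# neighboring slot. The 8-byte buffer followed by a return-address slot is an
# invented layout for illustration.
MEMORY = [0] * 16
BUF_START, BUF_SIZE = 0, 8
RET_ADDR_SLOT = 8                      # sits immediately after the buffer

def unsafe_copy(data):
    """Copy with no bounds check, in the spirit of C's strcpy."""
    for i, byte in enumerate(data):
        MEMORY[BUF_START + i] = byte   # never compared against BUF_SIZE

MEMORY[RET_ADDR_SLOT] = 0x4000         # "legitimate" return address
unsafe_copy([0x41] * 9)                # one byte more than the buffer holds
print(hex(MEMORY[RET_ADDR_SLOT]))      # 0x41: now attacker-controlled
```

In a real exploit the attacker chooses those overflow bytes so the overwritten return address points at code they control, which is exactly the kind of mistake sandboxes try to contain.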

On a more basic level: sandboxed code still runs on the same physical processors and physical memory as non-sandboxed code, and in most cases on the same operating system. So any hardware exploit, or any error in the fairly complex logic by which operating systems manage memory, presents a chance for escape. Making things more secure often involves tradeoffs, too; for example, Spectre and Meltdown arose from speculative execution, a feature processors use to improve performance.
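The cache side channel behind Spectre and Meltdown can be sketched with a deliberately simplified model (the set-based “cache”, the made-up timings, and the probe addresses stand in for real hardware and nanosecond measurements):

```python
# Toy model of the cache side channel used by Spectre/Meltdown. A real attack
# measures tiny timing differences on real hardware; here the "cache" is a
# set and "fast vs slow" stands in for those timings. All values are invented.
CACHE = set()

def load(addr):
    CACHE.add(addr)                      # touching memory caches its line

def time_access(addr):
    return 1 if addr in CACHE else 100   # cached reads are much faster

def victim(secret):
    # The victim touches one probe slot chosen by the secret byte, the way
    # speculatively executed code leaves a secret-dependent cache footprint.
    load(1000 + secret)

def attacker():
    # Probe every slot; the uniquely fast one reveals the victim's secret.
    return min(range(256), key=lambda s: time_access(1000 + s))

victim(secret=42)
print(attacker())   # 42, recovered without ever reading the secret directly
```

This is why the fixes for these attacks cost performance: the leak comes from a shared physical resource (the cache), not from any logic error in the sandbox itself.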

Anonymous 0 Comments

There’s a balance between usability and security. The most secure machine is one that takes no inputs and produces no outputs. However, that machine is useless to almost everyone. This extends to VMs. If you can’t add input to a VM, and the VM doesn’t output anything, what’s the point?

For modern machines to be useful, you need an avenue to feed them instructions, at the very least. And by virtue of existing, that avenue can also be used to feed them *bad* instructions.

Lastly, there exist critical vulnerabilities that let code detect that it is running inside a VM, or break out of the VM entirely.
