Hacking is a race between users and developers to understand a system. When the users get ahead, they begin to use the system in ways that the developers didn’t intend. When the developers are ahead, they can block misuse by finding and removing software vulnerabilities through testing, without compromising the integrity of the program.
So in this environment, “exploits” (attack code that takes advantage of a software vulnerability) are at their most valuable the moment the underlying vulnerability is discovered. We call this “Day Zero” because the *user/hacker* sees the hole but the developer is still unaware of it.
As soon as the developers learn of the vulnerability (oftentimes because it was used against them, or responsibly disclosed by “white-hats”), they begin to patch the hole, and the day counter starts. So a “day two” exploit is substantially less valuable than a “zero day” exploit because it’s already in the process of being patched.
It takes a while to patch every single affected system, so even a “Day 489” exploit can still work against some target, but it’s nearly worthless, since the majority of systems that *were* vulnerable to it have probably been patched in that time.
Zero-days are a big deal because, as long as they are kept secret, they serve as a persistent avenue of re-entry into a system you’ve compromised. This is part of why governments get hacked all the time: they are more interested in keeping a library of 0-day vulns for their own use than in helping vendors harden their software against those holes, and in some cases they even legally prevent companies from patching certain 0-days in case the feds want to use them. And sometimes feds have allegedly even worked undercover as developers just to introduce 0-days for their own use! See [Goto Fail;](https://www.latimes.com/business/technology/la-fi-tn-apple-gotofail-mistake-conspiracy-nsa-20140223-story.html)
It’s the difference in time between (a) when the hackers find out about a security problem and (b) when the software publisher finds out about it.
The expression came about because security researchers want to do two things: (1) they want to publish their findings, but (2) they don’t want the bad guys to take advantage of what they learn for criminal activity. So, they will do something like “Hey Microsoft, we discovered this vulnerability in your software. We’re going to publish that vulnerability in 60 days.” And then Microsoft has 60 days to fix the problem and push it out. The idea is that giving Microsoft a deadline gives them a strong incentive to fix problems, and letting researchers publish their findings gives them an incentive to actually find vulnerabilities.
A “0-day” vulnerability means that the hackers found out about the problem at the same time as the publisher, or even before.
Friday late afternoon, I’m ready to call it a day.
I get my one and only Day 0 text. FYI, I am retiring in 3 weeks.
It was a wonderful weekend of engaging multiple teams to address the issue: researching and applying software fixes, deploying builds, running regression test scripts, implementing new builds.
Did I mention how much I vehemently “dislike” hackers?
It’s an exploit that hasn’t been publicly disclosed.
Software can have “vulnerabilities”, which are bugs we can use to develop an “exploit”: an application that takes advantage of the vulnerability in a way that lets us compromise the system.
If a vuln is publicly known, the developer can patch it so that the program isn’t vulnerable anymore. If my exploit payload is publicly known, you can analyze how it works and write rulesets for things like antivirus or IDS systems to detect and mitigate it.
If it’s not publicly known, you’ve had no time to prepare your systems for my attack, and so you’ll be defenseless. I’m attacking you on “day 0” of this vuln being publicly disclosed…because my attack *is* the disclosure.
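To make that concrete, here’s a toy sketch (everything in it is hypothetical and made up for illustration, not a real exploit): a deliberately vulnerable function, a payload that abuses it, and the kind of naive signature rule an antivirus/IDS ruleset might apply once the payload is publicly known.

```python
# Toy illustration only: a "vulnerability", an "exploit" payload for it,
# and a naive signature check like an antivirus/IDS rule might perform.
# All names here are hypothetical.

def vulnerable_calculator(expression: str):
    # The vulnerability: eval() runs arbitrary Python, not just arithmetic.
    return eval(expression)

# The exploit: a payload abusing the bug to run attacker-chosen code
# (here it merely reads an environment variable as a stand-in for harm).
PAYLOAD = "__import__('os').environ.get('SECRET_TOKEN')"

def ids_allows(expr: str, signatures: list[str]) -> bool:
    # Signature-based detection: block anything matching a known-bad pattern.
    return not any(sig in expr for sig in signatures)

# Day 0: the payload has never been seen, so no ruleset exists yet.
print(ids_allows(PAYLOAD, []))              # True  -> attack gets through

# After public disclosure: defenders write signatures for the payload.
print(ids_allows(PAYLOAD, ["__import__"]))  # False -> attack is blocked
```

The point of the sketch is just the timing: the exact same payload is undetectable before disclosure and trivially blocked after, which is all “day 0” really means.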
A zero day is a vulnerability the hacker found before the developer knew about it, so the developer has had zero days to fix it. That makes it very serious and something that needs to be fixed immediately.
Think of it like being in an emergency room triage. One guy walks in with his finger jammed and another guy is wheeled in by paramedics unconscious and in critical condition after being hit by a car. The guy wheeled in is the zero day, because the doctors have very little time to save him, whereas the guy with the jammed finger can wait.
Say your car has a structural weakness, in its struts or sheet-metal, such that, if it’s exposed to 550 Hertz vibrations for extended periods, it will crack and shatter and fall apart.
Eventually — given enough financial/legal incentive — the car manufacturer will release a public warning (“your car has a weakness, and might fall apart, have it fixed immediately”). The time between the vulnerability surfacing and the public release is **the zero-day window**, where the attack/flaw exists but the “good guys” don’t yet know about it or how to stop it.
There is considerable tension in the industry between “how long to wait for companies to announce their flaws” and “how soon independent hackers should publish the flaws they discover, whether for altruistic or fame-oriented purposes.” Michael Lynn and Tavis Ormandy (concerning Cisco and Google, respectively) are two prominent examples of this tension.
Lots of perfectly correct answers here, but let’s also address an incorrect one, which many of the mainstream media clearly believe. Being a zero day is all about the timing; it says nothing about how serious the vulnerability is. News reports often treat “zero day” to mean “really bad.” If there’s a potential remote code execution vulnerability that’s fixed by a patch, many news organizations will call it a zero day, even though it’s being fixed before the bad guys have started using it.