Okay sure, but what happens when a high CVE is discovered that requires immediate patching – does that get around the Upload Queue? If so, it's possible one could opportunistically co-author the patch and shuttle in a vulnerability, circumventing the Upload Queue.
If you instead decide that the Upload Queue can't be circumvented, now you're increasing the duration a patch for a CVE is visible. Even if the CVE disclosure is not made public, the patch sitting in the Upload Queue makes it far more discoverable.
Best as I can tell, neither of these fairly obvious issues is covered in this blog post, but they clearly need to be addressed for Upload Queues to be a good alternative.
--
Separately, at least with NPM, you can define a cooldown in your global .npmrc, so the argument that cooldowns need to be implemented per project is, for at least one (very) common package manager, patently untrue.
# Wait 7 days before installing
> npm config set min-release-age 7
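The same idea works per project, not just globally; a minimal sketch, assuming the `min-release-age` key the comment above references (treat the key name as hypothetical if your npm version doesn't support it yet):

```shell
# Persist a 7-day cooldown in the project-level .npmrc, committed with the
# repo, so every contributor gets it. Key name taken from the comment above;
# hypothetical if your npm version lacks it.
echo "min-release-age=7" >> .npmrc
grep "min-release-age" .npmrc
```

Project-level config also means the cooldown survives on CI machines that never saw your global config.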
This exact scenario is actually addressed by the Debian example: the security team has the power to shuttle critical CVE fixes through, but it's a manual review process.
There are a bunch of other improvements they call out, like automated scanners run before distribution and diffs of exactly what changed between two distributed versions.
The only oversight I see in the proposal is staggered distribution: projects declare a UUID, and the distribution queue progressively makes a release available to them rather than all at once.
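One way that staggering could work, sketched under assumptions (the hashing scheme, variable names, and example UUID are all illustrative, not anything from the proposal):

```shell
# Derive a stable bucket (0-99) for each project from its declared UUID,
# and only serve the new version once the queue's rollout percentage has
# passed that bucket. Projects thus adopt the release in waves, not all at
# once.
uuid="123e4567-e89b-12d3-a456-426614174000"   # illustrative project UUID
rollout_pct=25    # the queue has released to 25% of projects so far

# First 4 hex chars of the UUID's SHA-256, taken modulo 100.
bucket=$(( 0x$(printf '%s' "$uuid" | sha256sum | cut -c1-4) % 100 ))
if [ "$bucket" -lt "$rollout_pct" ]; then
  echo "serve: new version"
else
  echo "serve: previous version"
fi
```

Because the bucket is derived from the UUID rather than drawn at random, a given project lands in the same wave for every release, which keeps its exposure predictable.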
This doesn’t solve the problem either, which is that of the Confused Deputy [1]. An arbitrary piece of code I’m downloading shouldn’t be able to run as Ryan by default with access to everything Ryan has.
We need to revitalize research into capabilities-based security on consumer OSs, which AFAIK is the only thing that solves this problem. (Web browsers solve this with capabilities too: webapps get explicit access to resources and no ambient authority over files, etc.)
Solving this problem will only become more pressing as we have more agents acting on our behalf.
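The ambient-authority point is easy to demonstrate: any code you execute (a postinstall hook, a test run) can read everything you can. The fake HOME below is fabricated for the demo; a real hook would find your real keys.

```shell
# Build a stand-in home directory with a "private key" in it.
mkdir -p fakehome/.ssh
echo "ssh-ed25519 AAAA... demo-key" > fakehome/.ssh/id_demo

# An arbitrary child process inherits full access to it: no grant needed.
HOME="$PWD/fakehome" sh -c 'ls "$HOME/.ssh"'   # prints: id_demo
```

Under a capability model, that `ls` would fail unless the parent had explicitly handed the child a handle to `.ssh`.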
The people who will benefit from a cooldown weren’t reviewing updates anyway. Without the cooldown they would just be one more malware victim. If you don’t review code before you update, it just makes sense to wait until others have. Despite what the article says, the only people who benefit from a rush to update are the malware spreaders.
The core point is of course solid. By not updating on day 0, maybe somebody else spends the effort of discovering the problem instead of you. But there are plenty of other benefits to not rolling with the newest and greatest versions.
I'd argue for intentional dependency updates. It just so happens that an update is identified in one sprint and planned for the next, which gives the team a delay.
First of all, sometimes you can reject the dependency update. Maybe there is no benefit in updating. Maybe there are no important security fixes brought by an update. Maybe it breaks the app in one way or another (and yes, even minor versions do that).
After you know why you want to update the dependency, you can start testing. In an ideal world, somebody would look at the diff before applying it to production. I know how this works in the real world, don't worry. But you have the option of catching something. If you automatically update to the newest version, you don't have that option.
And again, all these rituals give you time - maybe someone will identify attacks faster. If you perform these rituals, maybe that someone will be you. Of course, it is better for the business to skip this effort because it saves time and money.
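The "look at the diff" step can be cheap. A simulated version (the two tiny package trees below are fabricated for the demo; with real packages you would fetch the tarballs, e.g. via `npm pack pkg@old pkg@new`, and diff the extracted contents):

```shell
# Unpack both versions side by side and compare what actually changed.
mkdir -p old/pkg new/pkg
echo 'module.exports = (s, n) => s.padStart(n)'           > old/pkg/index.js
echo 'module.exports = (s, n) => (send(s), s.padStart(n))' > new/pkg/index.js

# A suspicious new call to send() jumps out immediately in the diff.
diff -u old/pkg/index.js new/pkg/index.js || true   # non-zero exit just means "differs"
```

Even a skim of a diff like this catches the crudest payload injections, which is exactly what the recent npm worms were.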
- One idea is for projects not to update each dep just X hours after release, but on their own cycles, every N weeks or such. Someone still gets bit first, of course, but not everyone at once, and for those doing it, any upgrade-related testing or other work also ends up conveniently batched.
- Developers legitimately vary in how much they value getting the newest and greatest vs. minimizing risk. Similar logic to some people taking beta versions of software. A brand new or hobby project might take the latest version of something; a big project might upgrade occasionally and apply a strict cooldown. For users' sake, there is value in any projects that get bit not being the widely-used ones!
- Time (independent of usage) does catch some problems. A developer realizes they were phished and reports, for example, or the issue is caught by someone looking at a repo or commit stream.
As I lamented in the other post, it's unfortunate that merely using an upgraded package for a test run often exposes a bunch of a project's keys and so on. There are more angles to attack this from than solely when to upgrade packages.
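Two cheap mitigations for exactly that test-run exposure, sketched here: `npm install --ignore-scripts` is a real npm flag that skips lifecycle hooks, and `env -i` is the standard way to scrub the environment; the probe command stands in for `npm test` so the demo is self-contained.

```shell
# 1. Skip lifecycle hooks when installing the upgraded package:
#      npm install --ignore-scripts
# 2. Run the tests with a scrubbed environment so the package's code cannot
#    read ambient secrets. Probe below stands in for `npm test`.
export AWS_SECRET_ACCESS_KEY=supersecret        # illustrative secret
env -i PATH="$PATH" sh -c 'echo "key seen: ${AWS_SECRET_ACCESS_KEY:-<none>}"'
# prints: key seen: <none>
```

Neither is a sandbox, but together they remove the two easiest exfiltration paths: install-time hooks and environment variables.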
Mature professionals and organizations have always waited to install updated dependencies in production, with exceptions for severe security issues such as zero day attacks.
"Free riding" is not the right term here. It's more a case of being the angels in the saying "fools rush in where angels fear to tread".
If the industry as a whole were mature (in the sense of responsibility, not age), upgrades would be tested in offline environments and rolled out once they pass that process.
Of course, not everyone has the resources for that, so there's always going to be some "free riding" in that sense.
That dilutes the term, though. Different organizations have different tolerance for risk, different requirements for running the latest stuff, different resources. There's always going to be asymmetry there. This isn't free riding.
I think the appeal to the categorical imperative is very interesting, though. Someone needs to try it first. If everyone were "wise", as you term it, it's essentially a stalemate while everyone waits for someone else to blink first and update.
Then again, there are other areas where I feel that Kantian ethics also fail on collective action problems. The use of index funds for example can be argued against on the same line as we argue against waiting to update. (That is, if literally everyone uses index funds then price discovery stops working.) I wonder if this argument fails because it ignores that there are a diversity of preferences. Some organizations might be more risk averse, some less so. Maybe that's the only observation that needs to be made to defeat the argument.
[1] https://en.wikipedia.org/wiki/Confused_deputy_problem
If you're not doing the work yourself, it makes sense to give the people who review and test their dependencies some time to do their work.
Avg tech company: "that's perfect, we love to be free riders."
idk if one of the touted benefits is really real - you sometimes need to be able to jump changes to the front of the queue and get them out asap.
hacked credentials will definitely be using that path. it gives you another risk signal, sure, but the power sticks around
"Free riding" is not the right term here. It's more a case of being the angels in the saying "fools rush in where angels fear to tread".
If the industry as a whole were mature (in the sense of responsibility, not age), upgrades would be tested in offline environments and rolled out once they pass that process.
Of course, not everyone has the resources for that, so there's always going to be some "free riding" in that sense.
That dilutes the term, though. Different organizations have different tolerance for risk, different requirements for running the latest stuff, different resources. There's always going to be asymmetry there. This isn't free riding.
Then again, there are other areas where I feel that Kantian ethics also fail on collective action problems. The use of index funds for example can be argued against on the same line as we argue against waiting to update. (That is, if literally everyone uses index funds then price discovery stops working.) I wonder if this argument fails because it ignores that there are a diversity of preferences. Some organizations might be more risk averse, some less so. Maybe that's the only observation that needs to be made to defeat the argument.
I suspect there are some reasonable points to be made here, but frankly, I pretty much stopped reading after that. Way too simple-minded.
But I get the point: it's a numbers game, so any and all usage can help catch issues.