I had summarized public information and open questions as the implosion was happening, trying to be impartial and not voice my guesses as to what actually went on, and concluded that everyone should be given the benefit of the doubt, given all the unknowns.
But that impartiality could be upsetting to one of the parties (by bringing up things that, say, a good guy maybe can't talk about), and maybe this isn't the time, as people are recovering and moving forward.
At the same time, HN readers might want to take away some lessons from this noteworthy incident in development and privacy & security, since it should inform how we think about the ways other projects and startups can fail.
For example: a prominent security product can suddenly be compromised (in the sense of the trusted security update mechanism breaking, or a change in who controls updates); a business partner can force you out, along with your intentions and stewardship of a product you believed depended on you being in the loop to not be evil; a different business partner can delete very important keys; or, more vaguely, this is another illustration of how hard it can be to reconcile privacy & security goals with business ones.
These failures might be things we only briefly consider as hypotheticals when planning or evaluating, but they do happen, in real life, to prominent efforts.
The open source project was started before Copperhead existed and long before it was incorporated. The project was never directly owned or controlled by the company; that was an explicit condition of the collaboration with the company. GrapheneOS is the continuation of that original project, but a lot has been learned and it will never again become associated with another company or organization to the same extent.

The purpose and values behind the project were eroded by the association with a company focused on a business model. It was a problematic relationship long before you heard about it, and eventually Copperhead betrayed the project. You can see for yourself that they made a bunch of ultimatums and threats trying to take over the project and end its independence from the company. They failed at that, but they succeeded at hijacking all of the infrastructure and preventing it from ever pushing out another update to the existing installs.

The OS was never compromised, but it lost all of the infrastructure and resources supporting it, so it has taken a long time to even get some basics back up and running. Most of the initial focus after the disaster was on standalone projects like https://github.com/GrapheneOS/hardened_malloc and https://github.com/GrapheneOS/Auditor. The project has definitely been massively set back, not only in terms of the development work but also in many other ways. It has still managed to continue onwards, and while the OS itself hasn't been fully restored, there's a bunch of useful standalone work that's far better than anything the project offered in the past.
You're substantially misrepresenting the events that occurred, based on the very incomplete account of them that you've seen. People seeing your comment are going to end up with an incorrect understanding, just as you did. You're stating your assumptions and misconceptions about what happened as if they're facts. It's a very incorrect account of a very small part of the story. This game of broken telephone, where people misinform themselves and then propagate variations of that to many other people, is a poor way of spreading knowledge.
I don't think he's representing or misrepresenting much at all. Most of his comment (like mine) talks about what we don't know and asks questions, and like mine admits it would be hard to know anything even if we were told what "really" happened.
>It was a problematic relationship long before you heard about it
Probably from day 1, as it sounds like there was a fundamental conflict between the business and non-business entities. I don't get what anyone thought such an arrangement would really accomplish that would be positive. Hopefully everyone learned from the experience, can do some good work now, and can avoid such things in the future.
Agreed. It is so strange how it all just sort of "happened". Generally I don't think such things come out of nowhere, so half the story, I think, would be how everyone came together to work on this in the first place and then ended up in a situation that played out so apocalyptically for the project.