There are lots of justifications. It's the same as why people can be soldiers or build missiles and still sleep at night: you believe (or at least tell yourself) that you're stopping bad people.
There are good applications of these tools. If you can hack the phones of a terrorist organization, you can find out about attacks before they happen and stop them. If you can extract data off of locked computers, you can help win convictions that wouldn't otherwise be possible against people who do truly awful things.
The question, of course, is whether these good applications outweigh the misuse, but that's where it gets murky in a hurry. Individual researchers at these privately owned "boutique" exploit companies (to my knowledge) tend not to know the nitty-gritty details of how their work is used out in the world unless it gets caught and dissected online. The more reputable western companies sell only to "democratic" governments which are political allies, but that only goes so far, as misuse and abuse are always a risk (not to mention the shaky nature of... certain... western democracies).
At the end of the day, you really just have to hope your work is being used to target terrorists and not journalists. The money obviously makes it easier, but it's not completely disingenuous of the people who work there to believe they're doing good.
> "The money obviously makes it easier, but [...]"
But, but, but.
> "[...] it's not completely disingenuous of the people who work there to believe they're doing good."
Given how well and widely NSO and their merchandise were reported on, including the dissection of various associated scandals in the mainstream media, I beg to differ. These people are not dumb; they know exactly what they do, and who their clients are. Your good-faith assumptions with regard to these players come across as extremely naive, to put it mildly.
It's fairly common to use something like nginx as a reverse proxy and terminate TLS there. IPv4 and NAT make this essentially mandatory if you want to host multiple services behind one public address, since everything shares a single endpoint and has to be routed by name (SNI). You wouldn't necessarily have protection inside the server network (which isn't great), but you at least get protection everywhere else.
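For concreteness, here's a minimal nginx sketch of that setup, assuming two backends on a private network; the hostnames, addresses, and certificate paths are placeholders:

    # TLS is terminated here; traffic to the backends is plain HTTP.
    server {
        listen 443 ssl;
        server_name app1.example.com;
        ssl_certificate     /etc/nginx/certs/app1.crt;
        ssl_certificate_key /etc/nginx/certs/app1.key;
        location / {
            proxy_pass http://10.0.0.11:8080;  # backend inside the server network
        }
    }

    server {
        listen 443 ssl;
        server_name app2.example.com;
        ssl_certificate     /etc/nginx/certs/app2.crt;
        ssl_certificate_key /etc/nginx/certs/app2.key;
        location / {
            proxy_pass http://10.0.0.12:8080;
        }
    }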
Vivado has free and paid tiers. The free tier ("WebPack") supports a myriad of their smaller devices (which includes the Kria boards). The larger devices (Virtex, etc.), however, are generally only supported by the paid versions of Vivado.
It's inconvenient for hobbyists, sure, but for enterprise users the cost of Vivado for a team is largely inconsequential (which I suspect is why they get away with this).
The licensing scheme is not terribly robust, either; there are a number of cracked (full) licenses floating around if you know where to look. Given that AMD makes most of their money on hardware sales, and the software is only really useful in conjunction with that hardware, I suspect they don't care very much.
It is clear from the quality that all vendors of EDA software actively hate their paying customers.
Just wondering - why are EDA tools only now starting to get HiDPI support? I'm pretty sure Altera, Xilinx/AMD, etc. haven't bought HiDPI monitors for their own developers!
> avoid the hardest and most essential skill: translating from your language to the other.
The hardest and most essential skill, second only to: not translating from your language to the other :)
(or maybe it should be the other way around; translating is useful but a really hard crutch to kick. Keeping it around will make it hard to keep up while speaking/listening and make reading a slog)
> Keeping it around will make it hard to keep up while speaking/listening and make reading a slog
That's not something against translating, that just means you haven't done it enough yet, so you're still too slow.
The first time you translate a basic sentence it might take 10s. The 2nd time, 5s. And so on. By the 100th time it takes 0.05s and you can just say it without thinking. If you just keep translating, you automatically reach that point.
> especially given how typical allocators behave under the hood.
To say more about it: nearly any modern high performance allocator will maintain a local (private) cache of freed chunks.
This is useful, for example, if you're allocating and deallocating about the same amount of memory/chunk size over and over again since it means you can avoid entering the global part of the allocator (which generally requires locking, etc.).
If you make an allocation while the cache is empty, you have to go to the global allocator to refill your cache (usually with several chunks). Similarly, if you free and find your local cache is full, you will need to return some memory to the global allocator (usually you drain several chunks from your cache at once so that you don't hit this condition constantly).
If you are almost always allocating on one thread and deallocating on another, you end up increasing contention in the allocator, as you will (likely) end up filling/draining from the global allocator far more often than if you kept it all on one thread. Depending on your specific application, maybe this performance loss is inconsequential compared to the value of not having to actually call free on some critical path, but it's a choice you should think carefully about and profile for.
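To make the pattern concrete, here's a minimal C sketch of the alloc-on-one-thread/free-on-another shape (the ring buffer is a deliberately simplistic stand-in for whatever queue actually hands the pointers across, and the names and sizes are arbitrary). Profiling this against a version where each thread frees its own allocations is a reasonable way to see whether the extra allocator traffic matters for your workload.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define QUEUE_SLOTS 1024
    #define N_ITEMS     1000000

    /* Toy single-producer/single-consumer ring, used only to hand
     * pointers from the allocating thread to the freeing thread. */
    static void *slots[QUEUE_SLOTS];
    static atomic_size_t head;   /* advanced only by the producer */
    static atomic_size_t tail;   /* advanced only by the consumer */

    static void *producer(void *arg) {
        (void)arg;
        for (size_t i = 0; i < N_ITEMS; i++) {
            void *p = malloc(64);                   /* this thread only allocates */
            while (atomic_load(&head) - atomic_load(&tail) == QUEUE_SLOTS)
                ;                                   /* ring full: wait for consumer */
            slots[atomic_load(&head) % QUEUE_SLOTS] = p;
            atomic_fetch_add(&head, 1);             /* publish the pointer */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (size_t i = 0; i < N_ITEMS; i++) {
            while (atomic_load(&tail) == atomic_load(&head))
                ;                                   /* ring empty: wait for producer */
            void *p = slots[atomic_load(&tail) % QUEUE_SLOTS];
            atomic_fetch_add(&tail, 1);
            free(p);                                /* this thread only frees */
        }
        return NULL;
    }

    int main(void) {
        pthread_t prod, cons;
        pthread_create(&prod, NULL, producer, NULL);
        pthread_create(&cons, NULL, consumer, NULL);
        pthread_join(prod, NULL);
        pthread_join(cons, NULL);
        puts("done");
        return 0;
    }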
> The most common provider is cloudflare, but you need to install cloudflared on the machine you’re using to host
This isn't actually (strictly) true. You can use cloudflared on any system which can communicate with your host. This is useful in more realistic deployments, as it means you can install cloudflared in a VM/container and have it relay your services hosted in other VMs/containers/devices. It isn't helpful here, as "I hosted my website on an iPad (but I now have to have this other real computer plugged in all the time so the iPad works)" is not as zesty :)
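As a rough sketch of that relay setup with a named Cloudflare tunnel (the tunnel ID, hostname, and LAN address are placeholders), the cloudflared config on the VM/container just points at the device that actually serves the content:

    # /etc/cloudflared/config.yml on the relay box, not on the device hosting the site
    tunnel: <TUNNEL-UUID>
    credentials-file: /etc/cloudflared/<TUNNEL-UUID>.json

    ingress:
      - hostname: site.example.com
        service: http://192.168.1.42:8080   # the iPad (or other device) on the LAN
      - service: http_status:404            # required catch-all rule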
That's not necessarily true. Many modern CPUs have uop caches, and it's reasonable to assume implementations will eliminate NOPs before placing lines in the uop cache. A hit in the uop cache defeats any slowdown you hoped to achieve, since the NOPs are then neither fetched nor decoded.
That's an interesting question, because the uop cache is filled early in the pipeline, but zero-idioms are only detected around the rename stage.
They might be able to skip a plain 0x90, but something like mov rax, rax would still emit a uop for the cache, before being eliminated later in rename. So at best it would be a fairly limited optimization.
It's also nice because rename is a very predictable choke point, no matter what the rest of the frontend and the backend are busy doing.
This is one of the things which cheeses me the most about LLVM. I can't build LLVM on less than 16GB of RAM without it swapping to high heaven (and often it just gets OOM killed anyways). You'd think that LLVM needing >16GB to compile itself would be a signal to take a look at the memory usage of LLVM but, alas :)
The thing that causes you to run out of memory isn't actually anything in LLVM; it's all in ld. If you're building with debugging info, you end up pulling in all of the debug symbols for deduplication purposes during linking, and that easily takes up a few GB. Now link a dozen small programs in parallel (because make -j) and you've got an OOM issue. But the linker isn't part of LLVM itself (unless you're using lld), so there's not much that LLVM can do about it.
(If you're building with ninja, there's a cmake option to limit the parallelism of the link tasks to avoid this issue).
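For example, with the Ninja generator, LLVM's CMake build exposes LLVM_PARALLEL_LINK_JOBS, which puts link steps into a job pool so only a few run concurrently. Something like the following; the value of 2 is just a guess for a 16GB machine:

    # Configure a debug build but cap the number of concurrent link jobs.
    cmake -G Ninja ../llvm \
      -DCMAKE_BUILD_TYPE=Debug \
      -DLLVM_PARALLEL_LINK_JOBS=2
    ninja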