nostr cryptographic developer here (author of libnoscrypt C library)
Nip04 has been deprecated, and to be clear, in practice the nip04 payload is in a signed nip01 event wrapper.
nip44 replaced nip04 and has been reviewed/audited. It uses authenticated encryption for the message payload with fresh per-message keys, and again in practice it's wrapped in a nip01 event, signed by the author, usually by the same cryptographic software used to encrypt the message.
nip44 is becoming more widely used for direct messages and other "private" metadata stored on relays. It's chacha20 + hkdf.
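To give a feel for the shape of that construction, here's a rough C sketch using OpenSSL. It is NOT the noscrypt or NIP-44 reference code and is not byte-for-byte the spec (the padding scheme, the exact HKDF salts/labels, and the payload encoding are simplified); it just shows the hkdf + chacha20 + hmac flow, assuming you already have the 32-byte shared "conversation key" from the secp256k1 ECDH step.

/* nip44_sketch.c -- illustrative only; NOT the real NIP-44 implementation.
 * Assumes OpenSSL 1.1.1+ for the EVP HKDF and ChaCha20 interfaces.
 * Build roughly like: cc nip44_sketch.c -lcrypto -o nip44_sketch */
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/kdf.h>
#include <openssl/rand.h>
#include <stdio.h>
#include <string.h>

/* Derive 76 bytes of per-message key material from the static conversation
 * key and a fresh random nonce:
 * chacha key (32) | chacha nonce (12) | hmac key (32). */
static int message_keys(const unsigned char ck[32], const unsigned char nonce[32],
                        unsigned char out[76])
{
    size_t outlen = 76;
    EVP_PKEY_CTX *kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);
    int ok = kctx != NULL
        && EVP_PKEY_derive_init(kctx) > 0
        && EVP_PKEY_CTX_set_hkdf_md(kctx, EVP_sha256()) > 0
        && EVP_PKEY_CTX_set1_hkdf_key(kctx, ck, 32) > 0
        && EVP_PKEY_CTX_set1_hkdf_salt(kctx, nonce, 32) > 0
        && EVP_PKEY_CTX_add1_hkdf_info(kctx, (const unsigned char *)"nip44-sketch", 12) > 0
        && EVP_PKEY_derive(kctx, out, &outlen) > 0;
    EVP_PKEY_CTX_free(kctx);
    return ok;
}

int main(void)
{
    /* Placeholder conversation key; in reality this comes from secp256k1 ECDH. */
    unsigned char ck[32] = {0};
    const char *msg = "hello";

    unsigned char nonce[32], keys[76];
    if (RAND_bytes(nonce, sizeof nonce) != 1 || !message_keys(ck, nonce, keys))
        return 1;

    /* ChaCha20 over the plaintext.  OpenSSL's EVP_chacha20 takes a 16-byte IV:
     * a 32-bit little-endian block counter followed by the 96-bit nonce. */
    unsigned char iv[16] = {0};
    memcpy(iv + 4, keys + 32, 12);

    unsigned char ct[64];
    int n = 0, ctlen = 0;
    EVP_CIPHER_CTX *c = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(c, EVP_chacha20(), NULL, keys, iv);
    EVP_EncryptUpdate(c, ct, &n, (const unsigned char *)msg, (int)strlen(msg));
    ctlen = n;
    EVP_EncryptFinal_ex(c, ct + n, &n);
    ctlen += n;
    EVP_CIPHER_CTX_free(c);

    /* Authenticate the ciphertext (the spec also covers the nonce). */
    unsigned char mac[32];
    unsigned int maclen = 0;
    HMAC(EVP_sha256(), keys + 44, 32, ct, (size_t)ctlen, mac, &maclen);

    printf("ciphertext %d bytes, mac %u bytes\n", ctlen, maclen);
    return 0;
}

The real payload is then base64(version || nonce || ciphertext || mac) carried inside the signed nip01 event; check the NIP-44 document and its test vectors for the exact details.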
I don't really so much care whether Nostr is good or bad. I'm a connoisseur of cryptographic vulnerabilities, and the ones in that paper are fun. We host a podcast (me, Deirdre Connolly, and David Adrian) that is mostly about good crypto vulns. If there's someone affiliated with Nostr that would want to chat for an hour or so about how applicable the vulns in this paper are or aren't, and how they're addressed in NIP44 --- we'd love to talk. My email address is in my profile. Whoever showed up, they'd be in good company!
Hard to say how relevant that is. DMs are simply a collection of events sitting on a relay. It's not really a mutual tunnel; most clients implement nip44 DMs via gift wrap (nip17/nip59), so your nip04 message likely wouldn't even make it to them. It's not backward compatible in the sense that you could send a user a DM and then cause their client to downgrade to the DM scheme that uses nip04.
It's also worth noting that the user _must_ be made aware of the encryption method being used: their "signer" application, which is also responsible for encryption and decryption, requires their permission for an operation in either direction. Users may choose to grant a trusted client application permission to decrypt all nip04 or nip44 messages automatically, or confirm each operation manually with a popup. It's up to the signer application how granular those permissions get.
To be clear, this is a client implementation detail. I'm not a client developer, so I can't say how many have handled the UX on this in practice, but know that the signer and the user have the final say on which algorithm is granted permission.
Clients and signers alike could choose to block obsolete encryption methods if they choose.
> It's also worth noting that the user _must_ be made aware of the encryption method being used: their "signer" application, which is also responsible for encryption and decryption, requires their permission for an operation in either direction.
Let me ask a more pointed question about downgrade attack resistance then:
Is the algorithm being used determined by the encrypted message contents? Or is it determined by the key controlled by the signer app?
Regardless of the signer interface, the procedure call remains the same. The client application determines which method it wants to use, then the plaintext is passed to the signer (web extension, nip46 remote signing, Android, etc.) with the nip44.encrypt or nip04.encrypt procedure calls.
The user is then asked to confirm the encryption operation. So a "downgrade" would require two things to go wrong: the client selects nip04 without the user's instruction, and the signer does not properly guard against it or notify the user that the message is being encrypted with nip04. Still not really an attack, I don't think; since no "sessions" exist in DMs, there shouldn't be any way for a remote user to cause a client to change algorithms.
To answer directly, the client app chooses, makes a remote procedure call with the desired algorithm, user confirms, message is encrypted, returned, signed (another rpc round-trip), then written to relays.
The signer application is ALWAYS authoritative, if it chooses to be.
Is the decision (regardless of who fucking decides) based on metadata attached to the key the client controls, or on a breadcrumb included in the message itself?
We are obviously speaking from different understandings. I would say neither. I would need you to define your terms differently maybe.
In all cases the client application chooses the algorithm used when the user writes a DM. What do you mean by a breadcrumb in the message? Message in what context? The message sent to the signer?
Edit: Maybe I should say the client developer? Is that the answer you're looking for? The developer _could_ give the user the option of choosing which to use, but clients generally are hard-coded to use one or the other.
I think that the (unsatisfying) answer is that there's no established standard for how protocol selection works. nip04 and nip44 are completely different types of messages, and it's up to the client how it'll use and/or respond to them.
I guess it's also worth saying there is no algorithm selection. Nip04 is dead for DMs. It doesn't need to be backward compatible. A user can't know which client another user is on, nor what their capabilities are; nostr is not smart in that way. Most, if not all, operations on nostr are completely stateless.
When a user decides to send a DM to another user, the client must choose the standard for encryption and message wrapping, then hope the other user is using a client that implements the same standard so it can decrypt the message.
Again, remember, DMs don't have a session. Every message derives a new symmetric key. The only metadata that makes a "chat" session is the timestamp, and the public keys of the users.
Since relays don't own user-generated content, there is no need to "federate"; clients generally rely on user-selected relay sets. The user chooses where they want to read/write events to/from.
That said, many of the "larger" relays do store events from other relays (federation, if you prefer). Primal does, TheForrest does, nostr.land and so on. Nostr.land specifically has the purpose of aggregating notes from many other public relays, with spam filtering. It's a paid relay built for that purpose. If you don't want that, use someone else.
Most users probably get to see 99% of notes from the current relay "federation" now, though it's also impossible to actually verify that.
Certain clients and signers store notes privately, so if a relay ever decides to censor your notes, you can just publish them to a different relay that doesn't already have them.
Chances are, if you use ANY of the popular paid relay providers, you're going to get warnings on 3 out of 4 write events that the other relays _already_ have the note you just published to the first one. It's usually that quick...
Finally, relays "federate" by acting as clients themselves. Most relay software available already offers this as an option; users might run one as a local cache for when they're on mobile and the network/wifi is slow. Their local relay slowly pulls notes from other relays (or via outbox) and caches them for when they load their client up. It's a cache, and the client dev didn't even have to write that functionality; it's transparent.
Lastly, others mentioned outbox, which has its own set of issues as well, but that doesn't matter much because a client developer can choose to give the user the option if they want: going from federated to peer-to-peer, which was the intention.
All kinds of personal FOSS projects I have mostly yet to release.
[1] noscrypt - portable C cryptography library for nostr
[2] vnlib - C# + C libraries for server applications, an eventual high-performance alternative to ASP.NET. It's really just a collection of libraries optimized for long-running server applications.
[3] vncache - vnlib cache extensions and cluster cache server over web-sockets
[4] cmnext - self-hosted, vnlib based, json-file CMS + podcast 2.0 server
[5] simple-bookmark - kind of deprecated, vnlib based, self hosted bookmark server
This is a huge step I took a few years ago as well. My mobile phone is still a smartphone, but it can only do dumb-phone things plus email, which I keep quiet and check on a schedule. No notifications of any kind, really. No real social media either; I never receive notifications from any of them and have to schedule time to check on them. It has become quite freeing, although I definitely miss the "euphoria" of that world, if that makes sense, which is a sign of addiction I didn't realize until I opted out.
I think I would prefer the 2005 web again. I'd probably be able to see more of the internet. I use heavy DNS filtering, no JavaScript on untrusted sites, no cookies, no fonts, a VPN and so on. With Cloudflare blocking me, I basically can't see the majority of websites.
I hope to keep seeing posts like these. I believe software "bloat" is a serious issue that should be addressed; however, if you look at SWE job listings, it's not even remotely a concern for employers IMO. You're encouraged to understand complex and heavy frameworks, and performance/optimization is not even a consideration.
Because those frameworks are easy at first glance. Adding some interactions with React is easier than with jQuery. Even better if you can make the whole page a React app; then you can add those hundreds of libraries to do…stuff. Optimizing a React app is hard, and it will probably require some deep thinking about global state and how it's modified, and we don't have time for that /s.
By then, the app is built and running, even though the code is a mess because the developer only knows about React, and nothing about the DOM or software architecture.
To your last point, I like to think of modern professional software development as a trade; it's not much of a science anymore IMO. For me it's outside looking in.
> It was the path of least resistance, so we took it.
Well said. I believe many of the "hard" issues in software were not "solved" but worked around. IMO containers are a perfect example: polyglot application distribution was not solved, it was bypassed with container engines. There are tools to work AROUND this issue; I ship build scripts that install compilers and tools on users' machines if they want, but that can't be tested well, so containers it is. Redbean and Cosmopolitan libc are the closest I have seen to "solving" this issue.
It's also a matter of competition: if I want users to deploy my apps easily and reliably, a container it is. Then boom, there goes 100MB+ of disk space, plus the container engine.
It's very platform specific. MacOS has had "containers" since switching to NeXTStep with OS X in 2001. An .app bundle is essentially a container from the software distribution PoV. Windows was late to the party but they have it now with the MSIX system.
It's really only Linux where you have to ship a complete copy of the OS (sans kernel) to even reliably boot up a web server. A lot of that is due to coordination problems. Linux is UNIX with extra bits, and UNIX wasn't really designed with software distribution in mind, so it's never moved beyond that legacy. A Docker-style container is a natural approach in such an environment.
Is it? I'm using LXC containers, but that's mostly because I don't want to run VMs on my devices (not enough cores). I've noted down the steps to configure them so I can write a shell script if I ever have to redo it. I don't see the coordination problem if you choose one distro as your base and then provision with shell scripts or Ansible. Shipping a container instead of a build is the same as building Electron apps instead of desktop apps: optimizing for developer time instead of user resources.
Yes obviously if you control the whole stack then you don't really need containers. If you're distributing software that is intended to run on Linux and not RHEL/Ubuntu/whatever then you can't rely on the userspace or packaging formats, so that's when people go to containers.
And of course if part of your infrastructure is on containers, then there's value in consistency, so people go all the way. It introduces a lot of other problems but you can see why it happens.
Back in around 2005 I wasted a few years of my youth trying to get the Linux community on-board with multi-distro thinking and unified software installation formats. It was called autopackage and developers liked it. It wasn't the same as Docker, it did focus on trying to reuse dependencies from the base system because static linking was badly supported and the kernel didn't have the necessary features to do containers properly back then. Distro makers hated it though, and back then the Linux community was way more ideological than it is today. Most desktops ran Windows, MacOS was a weird upstart thing with a nice GUI that nobody used and nobody was going to use, most servers ran big iron UNIX still. The community was mostly made up of true believers who had convinced themselves (wrongly) that the way the Linux distro landscape had evolved was a competitive advantage and would lead to inevitable victory for GNU style freedom. I tried to convince them that nobody wanted to target Debian or Red Hat, they wanted to target Linux, but people just told me static linking was evil, Linux was just a kernel and I was an idiot.
Yeah, well, funny how that worked out. Now most software ships upstream, targets Linux-the-kernel and just ships a whole "statically linked" app-specific distro with itself. And nobody really cares anymore. The community became dominated by people who don't care about Linux, it's just a substrate and they just want their stuff to work, so they standardized on Docker. The fight went out of the true believers who pushed against such trends.
This is a common pattern when people complain about egregious waste in computing. Look closely and you'll find the waste often has a sort of ideological basis to it. Some powerful group of people became subsidized so they could remain committed to a set of technical ideas regardless of the needs of the user base. Eventually people find a way to hack around them, but in an uncoordinated, undesigned and mostly unfunded fashion. The result is a very MVP set of technologies.
The dumpster fire at the bottom of that is libc and the C ABI. Practically everything is built around the assumption that software will be distributed as source code and configured and recompiled on the target machine because ABI compatibility and laying out the filesystem so that .so's could even be found in the right spot was too hard.
To quote Wolfgang Pauli, this is not just not right, it's not even wrong ...
The "C ABI" and libc are a rather stable part of Linux. Changing the behaviour of system calls? Linus himself will be after you. And libc interfaces, for the largest part, "are" UNIX - they're what IEEE 1003.1 defines. While Linux's glibc extends that, it doesn't break it. That's not least what symbol versions are for, and glibc is a huge user of those. So that ... things don't break.
Now "all else on top" ... how ELF works (to some definition of "works"), the fact that stuff like Gnome/Gtk loves to make each rev incompatible with the previous one, that "higher" Linux standards (LSB) don't care that much about backwards compat - true.
That, though, isn't the fault of either the "C ABI" or libc.
Things do break sadly, all the time, because the GNU symbol versioning scheme is badly designed, badly documented and has extremely poor usability. I've been doing this stuff for over 20 years now [1] [2], and over that time period have had to help people resolve mysterious errors caused by this stuff over and over and over again.
Good platforms allow you to build on newer versions whilst targeting older versions. Developers often run newer platform releases than their users, because they want to develop software that optionally uses newer features, because they're power users who like to upgrade, they need toolchain fixes or security patches or many other reasons. So devs need a "--release 12" type flag that lets them say, compile my software so it can run on platform release 12 and verify it will run.
On any platform designed by people who know what they're doing (literally all of the others) this is possible and easy. On Linux it is nearly impossible because the entire userland just does not care about supporting this feature. You can, technically, force GNU ld to pick a symbol version that isn't the latest (a sketch of what that looks like follows this list), but:
• How to do this is documented only in the middle of a dusty ld manual nobody has ever read.
• It has to be done on a per symbol basis. You can't just say "target glibc 2.25"
• What versions exist for each symbol isn't documented. You have to discover that using nm.
• What changes happened between each symbol isn't documented, not even in the glibc source code. The header, for example, may in theory no longer match older versions of the symbols (although in practice they usually do).
• What version of glibc is used by each version of each distribution isn't documented.
• Weak linking barely works on Linux; it can only be done at the level of whole libraries, whereas what you need is symbol-level weak linking. Note that Darwin gets this right.
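Just to illustrate the per-symbol dance, here's a minimal C sketch of the .symver trick. The memcpy@GLIBC_2.2.5 vs @GLIBC_2.14 split is the classic x86_64 example; verify what versions your libc actually exports with nm or readelf before copying any of this.

/* pin_memcpy.c -- pinning one symbol to an old glibc version by hand.
 * x86_64 example: the baseline symbol set is GLIBC_2.2.5, and glibc 2.14
 * introduced a new memcpy@GLIBC_2.14.  Without the .symver line, a binary
 * built on a new distro silently requires glibc >= 2.14 just for memcpy. */
#include <stdio.h>
#include <string.h>

/* Bind our memcpy references to the old version instead of the default
 * (newest) one.  This has to be repeated for every versioned symbol you
 * use -- there is no "target glibc 2.2.5" switch. */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

int main(void)
{
    char dst[32];
    const char *src = "built new, runs on old";
    memcpy(dst, src, strlen(src) + 1);
    puts(dst);
    return 0;
}

/* Build with something like:
 *   cc -fno-builtin-memcpy pin_memcpy.c -o pin_memcpy
 * then check the result with readelf -sW pin_memcpy | grep memcpy;
 * it should reference memcpy@GLIBC_2.2.5 rather than @GLIBC_2.14. */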
And then it used to be that the problems would repeat at higher levels of the stack, e.g. compiling against the headers for newer versions of GTK2 would helpfully give your binary silent dependencies on new versions of the library, even if you thought you didn't use any features from it. Of course everyone gave up on desktop Linux long ago so that hardly matters now. The only parts of the Linux userland that still matter are the C library and a few other low level libs like OpenSSL (sometimes, depending on your language). Even those are going away. A lot of apps now are being statically linked against musl libc. Go apps make syscalls directly. Increasingly the only API that matters is the Linux syscall API: it's stable in practice and not only in theory, and it's designed to let you fail gracefully if you try to use new features on an old kernel.
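To make that last point concrete, here's a minimal sketch of the "fail gracefully on old kernels" pattern, using getrandom(2) (kernel 3.17+) as the stand-in for a new feature: try the raw syscall, and fall back when the running kernel says ENOSYS.

/* entropy_fallback.c -- sketch of graceful degradation against the
 * Linux syscall API: prefer getrandom(2), fall back to /dev/urandom
 * on kernels that predate it. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static int get_entropy(void *buf, size_t len)
{
#ifdef SYS_getrandom
    long n = syscall(SYS_getrandom, buf, len, 0);
    if (n == (long)len)
        return 0;
    if (n < 0 && errno != ENOSYS)      /* real failure, not just an old kernel */
        return -1;
#endif
    /* Old kernel (ENOSYS) or no syscall number at build time: fall back. */
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t r = read(fd, buf, len);
    close(fd);
    return r == (ssize_t)len ? 0 : -1;
}

int main(void)
{
    unsigned char b[16];
    if (get_entropy(b, sizeof b) == 0)
        printf("got %zu random bytes\n", sizeof b);
    return 0;
}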
The result is this kind of disconnect: people say "the user land is unstable, I can't make it work" and then people who have presumably never tried to distribute software to Linux users themselves step in to say, well technically it does work. No, it has never worked, not well enough for people to trust it.
> How to do this is documented only in the middle of a dusty ld manual nobody has ever read.
This got an audible laugh out of me.
> Good platforms allow you to build on newer versions whilst targeting older versions.
I haven't been doing this for 20 years (13), but I've written a fair amount of C. This, among other things, is what made me start dabbling with zig.
~ gcc -o foo foo.c
~ du -sh foo
16K foo
~ readelf -sW foo | grep 'GLIBC' | sort -h
1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __libc_start_main@GLIBC_2.34 (2)
3: 0000000000000000 0 FUNC GLOBAL DEFAULT UND puts@GLIBC_2.2.5 (3)
6: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __libc_start_main@GLIBC_2.34
6: 0000000000000000 0 FUNC WEAK DEFAULT UND __cxa_finalize@GLIBC_2.2.5 (3)
9: 0000000000000000 0 FUNC GLOBAL DEFAULT UND puts@GLIBC_2.2.5
22: 0000000000000000 0 FUNC WEAK DEFAULT UND __cxa_finalize@GLIBC_2.2.5
~ ldd foo
linux-vdso.so.1 (0x00007ffc1cbac000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f9c3a849000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f9c3aa72000)
~ zig cc -target x86_64-linux-gnu.2.5 foo.c -o foo
~ du -sh foo
8.0K foo
~ readelf -sW foo | grep 'GLIBC' | sort -h
1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __libc_start_main@GLIBC_2.2.5 (2)
3: 0000000000000000 0 FUNC GLOBAL DEFAULT UND printf@GLIBC_2.2.5 (2)
~ ldd foo
linux-vdso.so.1 (0x00007ffde2a76000)
libc.so.6 => /usr/lib/libc.so.6 (0x0000718e94965000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x0000718e94b89000)
edit: I haven't built anything as complicated with zig as I have with the other C build systems, but so far it seems to have some legit quality-of-life improvements.
Interesting that zig does this. I wonder what the binaries miss out on by defaulting to such an old symbol version. That's part of the problem of course: finding that out requires reverse engineering the glibc source code.
I'd only like to add one thing here ... on static linking.
It's not a panacea. For non-local applications (network services), it may isolate you from compatibility issues, but only to a degree.
First, there are Linux syscalls with "version featuritis" - and by design. Meaning kernel 4.x may support a different feature set for the given syscall than 5.x or 6.x. Nothing wrong with feature flags at all ... but a complication nonetheless. Dynamic linking against libc may take advantage of newer features of the host platform whereas the statically linked binary may need recompilation.
Second, certain "features" of UNIX are not implemented by the kernel. The biggest one there is "everything names" - whether hostnames/DNS, users/groups, named services ... all that infra has "defined" UNIX interfaces (get...ent, get...name..., ...) yet the implementation is entirely userland. It's libc which ties this together - it makes sure that every app on a given host / in a given container gets the same name/ID mappings. This does not matter for networked applications which do not "have" (or "use") any host-local IDs, and whether the DNS lookup for that app and the rest of the system gives the same result is irrelevant if all-there-is is pid1 of the respective docker container / k8s pod. But it would affect applications that share host state. Heck, the kernel's NFS code _calls out to a userland helper_ for ID mapping because of this. Reimplement it from scratch ... and there is absolutely no way for your app and the system's view to be "identical". glibc's nss code is ... a true abyss.
Another such example (another "historical" wart) is timezones or localization. glibc abstracts this for you, but language runtime reimplementations exist (like the C++2x date libs) that may or may not use the same underlying state - and may or may not behave the same when statically compiled and the binary run on a different host.
Static linking "solves" compatibility issues also only to a degree.
It's providing backwards compatibility (via symbol versioning). And that allows behaviour to evolve while retaining the old behaviour for those who need it.
I would agree it's possibly messy, especially if you're not willing or able to change your code to provide builds for newer distros. That said though ... ship the old builds. If all they need is libc, they'll be fine.
(the "dumpster fire" is really higher up the chain)
> Practically everything is built around the assumption that software will be distributed as source code
Yup, and I vendor a good number of dependencies and distribute source for this reason. That, and because distributing libs via package managers kinda stinks too; it's a lot of work. I'd rather my users just download a tarball from my website and build everything locally.
I don't think that users expect developers to maintain packages for every distro. I had to compile ffmpeg lately for a Debian installation and it went without a hitch. Yes, the average user is far away from compiling packages, but they're also far away from random distributions.
I recently had to reconsider this definition and set boundaries with my friends: finding the ability to have a fun experience together that isn't centered around paying for access to the experience, such as access to a place or event, or drugs/alcohol. We were all kinda taught we have to "buy" something in order to have fun. Beer, food, vacations, concert tickets, and so on. I'm not saying I don't do that any more, but when I'm with my friends I'm there for a reason, and I want that reason to be them, not the thing we paid for; that comes second.
I find this surprising. Did we not just have a pandemic, where people had to have fun without 'going out' and spending money? It was great to see so many people gathering in parks etc. instead of going to restaurants.
I found it very strange and sad that after the lifting of lockdowns people went right back to the way things used to be. Did we learn nothing? Such a squandered opportunity.
What happened to just meeting up in a street or at someone's place and 'hanging out' informally? At what point did it become near-obligatory to have a sit-down dinner or 'go out' to enjoy the company of friends?
I have my personal opinions about the pandemic, but I would have to agree it was an opportunity to embrace more positive social behaviors. It surely didn't stick in my neck of the woods either. Plenty of things changed, but at this point we're going on 4 years since, and I would say the social habits in my age group just changed anyway, so IDK to be honest.
I understand that, as others have pointed out, this will be an echo-chamber result set. However, it's nice to know I am not alone (especially being young). My newest vehicle is from 1999. My smartphone is going on 6 years old (still 2 days on a charge); I have 15 apps total on it and spend a max of 20 minutes a day on it. My newest computers, in my rack, are from 2018 (I have a small hosting business). My Asus ultrabook from 2016 still plays my video games fine and has decent battery life.
No IoT, no automation stuff. I have a TV with an HTPC, but it has been unplugged for 6 years. I get all of my current events/media from RSS feeds or email (highly curated). During the day, mobile notifications are silenced. I mostly pay in cash when I go out. I had a Fitbit 5 years ago, never connected it to my phone, and got rid of it when it died.
I like the sound of the engine when I drive; otherwise it's FM radio. I avoid taking my phone when I go places unless I need GPS, but I often print directions if I'm traveling to semi-familiar areas. I always keep a few maps in the glove box as well.
I was recently laid off, and I'm bummed because it's going to be hard finding a new remote job that isn't soul-sucking in regards to the tech.
If your vehicle is a car, modern cars are a lot safer than 90s cars. It's not only about the structure and airbags in case of a crash; the stability control electronics have improved a lot in recent years. Now you can turn the wheel to aim where you want to go, and the electronics try their best.
Yeah, I don't want it. I spent the past 7 years in automotive engineering (firmware/control systems stuff), and I much prefer the older stuff. Every time I hop in a newer car, it jerks the wheel from me or slams the brakes because of a smudge or a reflection on the camera. My truck weighs nearly 8,000 lbs and is over 20 ft long; I carry enough insurance to assume that I and everyone in the vehicle I hit will be dead, god forbid it ever happens. Also, from my experience in automotive I have learned that newer is not "built to last" in a business sense, and parts manufacturers (and remans) obsolete parts for newer vehicles faster as well (or at least the supply chain dries up MUCH quicker). We have seen vehicles scrapped for trivial reasons because the repair job cost more than the vehicle was worth, IMO due to complexity or availability.
I could continue to blow hot air in your direction for hours. Needless to say I'm hoping to get out of automotive and keep my old vehicles on the road for as long as the government allows.
Why throw all the working stuff away? I don't feel any need for, or right to, new & shiny things if there are less-new things that work just as well.
I'm bothered by the fact that my partner doesn't want to use silverware from a thrift store... despite the fact it's no different than eating at a restaurant... but mostly because, why new? If everyone always buys new shit then we're always throwing out old shit, and things start getting made for the sake of being new and shiny rather than for doing any one thing better. We add niceties like auto-adjusting steering wheels and heated seats as an excuse to keep producing. Ah. Well. Steam ran out. I just prefer things simple.
What point would you like me to hit? Surveillance, even if it's passive and I have nothing to hide, is probably the biggest reason for most of my habits. Money and trust too. The other reason is probably mental health; reducing stimulus helps me dramatically with anxiety. I typically walk around with headphones in, playing nothing. I can still hear everything, it's just quieter. If you do it long enough you'll know what I mean.