Hacker News | ohazi's comments

Hey, Joe! This is one of my favorite cello pieces -- so hauntingly beautiful. I've probably listened to Janos Starker's performance dozens of times, but I also liked Inbal Segev's version. Parts of it seemed brighter somehow.


Fuel grade is like 3%. It's exponentially harder to go from 3% to 60% (months to years) than from 60% to 90% (days to weeks). So no, the only reason to enrich that high is to keep your breakout time threateningly short.
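The "exponentially harder" claim can be checked with the standard separative work unit (SWU) calculation. A rough, illustrative Python sketch (the feed and tails assays here are my own assumed values, not figures from the comment):

```python
import math

def value(x):
    # Standard separative-work value function V(x)
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg_product(feed_assay, product_assay, tails_assay=0.0025):
    """SWU needed per kg of product, from the mass balance
    F = P + W and F*xf = P*xp + W*xw, with P fixed at 1 kg."""
    p = 1.0
    w = p * (product_assay - feed_assay) / (feed_assay - tails_assay)
    f = p + w
    return (p * value(product_assay)
            + w * value(tails_assay)
            - f * value(feed_assay))

# Getting from natural uranium (0.711%) to 60% takes far more separative
# work than the final 60% -> 90% step, which is why breakout from a 60%
# stockpile is so fast.
to_60 = swu_per_kg_product(0.00711, 0.60)    # ~135 SWU per kg
final_step = swu_per_kg_product(0.60, 0.90)  # ~4.6 SWU per kg
```

With these assumptions the jump from 60% to 90% costs only a few percent of the total separative work, which is the quantitative core of the parent's point.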


Which still, astonishingly, does not make it weapons grade.


Yes, brother, you are technically correct about that substring of that comment. “Weapons-grade” was indeed not 100% accurate and therefore, technically, inaccurate. That is true; you are right.

That same comment also said, and even led with, “flies in the face of”. That was the most important part of the comment: ‘saying that Iran is enriching weapons-grade uranium “flies in the face of” intelligence reports which reported no weapons-grade uranium.’ But that part was not correct: the difference between Iran’s uranium (60% enriched) and weapons-grade uranium, while >0, is not large enough to characterize that assessment as “flying in the face of” the reports.

So yes if you focus on that substring of the comment you are right. But why would you? It’s not the point of the comment.

Which makes it nitpicking. Which is why you’re getting so much pushback.


The parent comment says it flies in the face of the US IC's holistic assessment of Iran's efforts. Which it does.


True, but can you name a reason to create a stockpile of 60% enriched uranium that doesn't involve weapons?


Yep! Negotiation.


I suspected that this was the case when they mentioned adding "one bit at a time" -- the CPU design that they implemented is Olof Kindgren's SERV [0], a tiny bit-serial RISC-V CPU/SoC (award-winning, of course).

From [1]:

> Olof Kindgren

> 5th April 2025 at 10:59 am

> It’s a great achievement, but I’m of course a little sad to see that it’s not mentioned anywhere that Wuji is just a renaming of my CPU, SERV. They even pasted in block diagrams from my documentation.

[0] https://github.com/olofk/serv

[1] https://www.electronicsweekly.com/news/business/2d-32-bit-ri...
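For anyone unfamiliar with the term, "bit-serial" means the datapath handles one bit per clock cycle with a single-bit carry register, instead of a full-width parallel adder. A minimal Python sketch of the idea (purely illustrative; SERV itself is written in Verilog and this is not its code):

```python
def bit_serial_add(a, b, width=32):
    """Add two unsigned integers one bit per 'cycle', the way a
    bit-serial ALU does, keeping only a 1-bit carry between cycles."""
    carry = 0
    result = 0
    for i in range(width):           # one loop iteration ~= one clock cycle
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        result |= (abit ^ bbit ^ carry) << i
        carry = (abit & bbit) | (carry & (abit ^ bbit))
    return result                    # wraps modulo 2**width, like hardware
```

A 32-bit add thus takes 32 cycles; trading cycles for area like this is what lets a bit-serial core get so astonishingly small.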


They do mention SERV in their references (38).

https://www.nature.com/articles/s41586-025-08759-9

Sadly I can't access the full article right now.


That sort of copying without attribution should be considered outright misconduct; it certainly would be in academia.


Huh? This is a paper published in Nature, and it does cite Olof Kindgren and SERV in the references: https://www.nature.com/articles/s41586-025-08759-9#Bib1

The paper itself is behind a paywall so I can't see it, but it looks from the references like they provided proper attribution.

It's unfortunate that some of the articles around it don't mention that, but it seems like the main point of this is the process for building the transistors, and then showing that it can be used to build a complete CPU -- not the CPU design itself, for which they just used an off-the-shelf open-source core that is designed to use a very small number of gates.


Thanks to the Archive.org link, we can see that indeed they link directly to the SERV github in reference 38:

    38. Kindgren, O. et al. SERV - The SErial RISC-V CPU. GitHub http://github.com/olofk/serv (2020).


> The paper itself is behind a paywall so I can't see it

https://archive.org/details/s41586-025-08759-9


This doesn't integrate the antenna, so it's not comparable.


Yes it does. The HJ-131 module has an onboard antenna that’s used by connecting pins 12 and 13.


I encountered the same issue on my desktop, but ended up doing the partition surgery a little differently.

First I resized /, /home, etc. for some extra space, then I moved them all forward a bit, then I resized /boot. Complicated by various layers of LVM and encryption.

This took longer and in retrospect was perhaps a bit riskier, but it had the advantage of not needing to change any configuration on the actual system (since all of the partition numbers/names/uuids remained the same -- they just showed up at slightly different places with slightly different sizes). So the whole operation could be done entirely from a live/rescue environment.


Rust hasn't fallen off, it's just largely considered mainstream now.


Earthquakes.

Options are wood again, or steel and concrete.


Somehow, all these nations around the world with earthquakes still have their houses standing.

Why is it always whataboutism with earthquakes when presented with "don't build houses out of matchsticks"?


Countries like Japan use the same construction techniques as the western US. Few countries have earthquakes as strong as the Pacific Rim, where M8-9+ are regular occurrences. Properly designed wood-framed houses will survive that.

I’ve never seen a house in Europe that was engineered to the M8.5 earthquake standard that is mandatory where I live in the US. They used to construct houses like in Europe but they kept getting destroyed in earthquakes and were made illegal for safety reasons.


They do not have their houses standing. Look at the recent earthquake in Turkey and Syria. 60k dead and 150 billion in damage.


RIP S3 sleep... It took years to get it working reliably under Linux, then we had a good decade+ run of it "just working" like this, and now we're back to trying to weed out all the wacky platform quirks and weird hardware/firmware behavior that make the S0ix states just barely unusable.

Maybe in another five years...


Can you explain a bit more? What happened?

Linux used to be able to do S3 sleep well, and now it can't because... new platforms removed S3 in favor of S0ix? Or did S3 become even more complicated, with more platform quirks and weird hardware?


The problem is platforms moved away from S3 sleep. I've heard people claim it was mostly so managed Windows laptops could force updates with the lid shut and the laptop suspended.

Now I have to worry about my laptop randomly overheating itself in my backpack and even catching fire.


> I've heard people claim it was mostly so managed Windows laptops could force updates with the lid shut and the laptop suspended.

That, but probably also to compete with Mac's Power Nap feature (2012) that updates Mail, Messages, and other applications during sleep (so that when you open up the laptop messaging apps are immediately up to date):

https://www.engadget.com/2012-06-11-apple-introduces-power-n...

Apple managed to do it without setting your laptop on fire. Meanwhile, Dell recommends switching off your laptop when you put it in your backpack:

https://www.dell.com/support/kbdoc/en-us/000124304/notebook-....


Hold on, Apple used the same Intel chips as everyone else when Power Nap was introduced. In fact, it was implemented via the S0ix state. It's just that almost no one except Apple figured out how to utilize it correctly.

Now I'm wondering if it's Apple's fault that S3 got removed.


Perfect "we have PowerNap at home" moment for Dell and friends. Well played, Apple.


Nah. Apple's Intel implementation at least was equally crap. I've had my 2018 MBP cook itself in my bag a couple times.


My Macbook Pro lasts about two days on battery while doing work (in clamshell mode, with the screen off). My Thinkpad drains its battery in less time than that in sleep. The removal of S3 is a travesty.


I believe it came about during the "Windows must run on tablets" era. They needed a way for WiFi to stay on during sleep so things like notifications would continue to work. It also enabled media players to continue playing audio in sleep mode, similar to iOS and Android.


Would be great to have a bios switch for it then.


I tried a few times, as some BIOSes have a hidden or disabled setting, but I never got past a plain crash. Device and CPU vendor support for classic S3 is shrinking; e.g., on Framework laptops the Intel CPU(!) does not officially support S3 sleep.

So I can understand that there is no option for it if all you can get is out of spec behavior and crashes.

Also note that it is incompatible with some secure boot and system integrity settings.
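On Linux you can see which suspend variants the firmware actually offers in /sys/power/mem_sleep, where the kernel marks the active mode with brackets (e.g. "[s2idle] deep" -- "deep" is classic S3, "s2idle" is the S0ix-style software suspend). A small illustrative parser for that format:

```python
def parse_mem_sleep(contents):
    """Parse the contents of /sys/power/mem_sleep, e.g. '[s2idle] deep'.
    Returns (available_modes, current_mode); the kernel wraps the
    currently selected mode in square brackets."""
    available, current = [], None
    for token in contents.split():
        if token.startswith("["):
            token = token.strip("[]")
            current = token
        available.append(token)
    return available, current

# On a machine whose firmware still exposes S3, writing "deep" to
# /sys/power/mem_sleep selects it (and may crash if support is only
# half-removed, as described above):
#   echo deep | sudo tee /sys/power/mem_sleep
```

If "deep" never shows up in that file, no amount of BIOS spelunking will bring classic S3 back on that platform.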


Thinkpads do. It's poorly named, but they let you choose Windows (S0ix) or Linux (S3) sleep.


No, not on modern Thinkpads.


Every XPS and Thinkpad that I owned had a BIOS setting to "enable Linux compatibility", which enabled the S3 state.


> It also enabled media players to continue playing audio in sleep mode

Is that actually a thing? On my Windows machine media stops playing when I put it to sleep. The machine is clearly not completely off, though, judging by the fan spinning like crazy from time to time.

Also, the whole "keep checking for e-mails" thing is clearly broken, since after waking up Outlook needs a while to come back to life and show new messages.


Weirds me out, since ACPI etc. is used to control power and such states. Why would devices even need to do such things to support some OS? The OS should be able to manage states; it's the controller, and the hw should listen... in this case, Windows could simply not put the devices to sleep?

I know it didn't end up with this logic, but it melts my brain as to why... is it cheaper to implement the hw without support for deep sleep?

Most specifications have it included (PCIe, NVMe, AHCI, etc.), so you'd expect most devices working via the PC platform to implement these things :(

Can't wait to push my OS onto real hardware and burn my fucking house down


There also seem to be a lot of anecdotal suggestions that the AR/HUD single-eyepiece thing on the Apache helicopter similarly requires some brain-reorganizing shenanigans to fully master.


True. I flew Black Hawks instead of Apaches, but we also had a HUD monocle that we could attach to our NVGs (so it was only for aided night flight in our case). It prevented us from having to shift our eyes from the NVG image down to the actual cockpit instruments. For me, adapting to the HUD was much easier than adapting to the NVGs themselves; I remember drifting backwards at 15 knots and 30 feet over a runway when I thought I was at a stable 10-foot hover a few weeks into NVG training[1]. The Apache monocle is probably more difficult to use because it overlays the instrumentation on the real world instead of on another synthetic image.

[1] Somewhat amused, the instructor pilot let it happen for a few seconds before asking me if I was OK and could see what was happening. It was a formative learning experience.


> Somewhat amused, the instructor pilot let it happen for a few seconds before asking me if I was OK and could see what was happening. It was a formative learning experience.

I think the formative learning experience is why the instructor let you drift for a while. Of course, plus 20 feet has a different impact on safety than minus 20 feet :P



Heh, edit jinx. This answer leaves me almost as at sea, so I guess my time's not best spent on this article. Thanks.

