Kinda sad that the top comment is something about a web browser. There's much more to computers and the internet than web browsers. I use links text-only browser coupled with stunnel or haproxy for TLS. Unlike the "tech" company browsers, this software compiles quickly and easily, it's relatively small and it works great on older computers.
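For anyone curious about that setup: stunnel in client mode only needs a few lines of config. This is just a sketch (section name, port, and hostname are placeholders), and since stunnel doesn't rewrite the Host header, you'll typically want one entry per site:

```
; stunnel client mode: accept plain HTTP locally, speak TLS to the real site
[links-via-tls]
client = yes
accept = 127.0.0.1:8443
connect = example.com:443
```

Then point links at http://127.0.0.1:8443/ and stunnel handles the TLS hop.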
Here is what no one is mentioning. Currently, 90% of the world's most advanced chips are made by one manufacturer, TSMC. If something happens to TSMC, then the ability to utilise older chips and older computers is going to become relevant.
Personally, I think "modern" web browsers suck for using the web recreationally, e.g., visiting random websites, i.e., usage that is not banking, shopping or other commercially-related activity. However, when applied to local files, these "browsers" can be useful as PDF readers, audio and video players. Whether older computers can handle today's media files is left as a question for the reader.
Reading websites is easy without a big, bloated "tech" company browser. Using a text-only browser on a daily basis, I have been doing this for decades.
If something happens to TSMC there will be a global effort to lift all that knowledge back into stable, developed countries to continue production as fast as humanly possible.
Just like the lifting of Nazi rocket scientists: it's a technology directly intertwined with the existence of contemporary advanced economies and societies. If TSMC ceases to exist, there will be a concerted effort from major economies to regain that knowledge. Right now it's just too expensive to do that instead of outsourcing it; the moment it becomes an existential threat, it won't be relatively that expensive anymore.
I don't think old computers as such will ever be relevant as you envision, if it happens it will be a blip during a gap of 3-5 years until production restarts somewhere else.
The entire Manhattan project ran less than 5 years. If it's top priority for national security (which it absolutely would be), the heavens and the earth can be moved.
Sad to think of the list of things that don't make that cut. You would think that eco-friendly energy independence would merit top "national security" priority.
And food. And yet, pretty much 90% of the world could die in starvation and not a single finger would be lifted. Cut the chips production and it's WW3.
Samsung is probably less than 3 years behind TSMC; some complex chips are already manufactured on their high-end processes. I doubt it will be as catastrophic as we think. Most of the critical parts for cars and IoT are manufactured at much bigger lithography nodes than "3nm", and Intel produces at "7nm". The catastrophic part will probably be the political implications.
I'm not talking business competition trying to fund R&D into TSMC-levels. It would be national efforts, funded by gobs of state money to set it up and go online as fast as possible.
Later selling to private enterprises, etc. is another matter to be discussed but governments would very very likely be directly involved.
As another comment said, the Manhattan Project took 5 years, it's not impossible.
We do have basically latest Firefox on OpenBSD. But yeah, to be fair, I have to restart Firefox frequently on my X230's running OpenBSD. It either just stops rendering altogether (literally even application UI fails to render) or eats up so much memory and CPU the cooling fan is just always running at a kinda high level. Otherwise, it's super solid and a great OS for older machines IMO.
> restart Firefox frequently on my X230's running OpenBSD
what wildly multithreaded memory hogs are you running? I love my x230s, and I have dozens and dozens of windows open all the time. probably I benefit a good bit from running ublock Origin and uMatrix, but I have gigabytes of RAM free. I don't think running a newer machine would give me any benefits at all.
The problem of Firefox "freezing" (rendering no longer working and UI elements like menu/scroll broken) and requiring a restart I don't think relates to memory. For example, the browser is only using around 1.8GB at this moment and the machine has 8GB of RAM. I've been in contact with the Firefox OpenBSD maintainer (landry@) and he didn't have any certain indication from the logs/data I provided. He says he runs into the same problem, and simply quits/reopens. He's given a couple other steps to try getting more debugging data, but otherwise the cause is thus far unknown.
(oh and yeah I have no intention of using a newer system, I bought four X230's and an X220 for dirt cheap lol)
I quit (actually I pkill) and then I restart and restore previous session with wifi/internet turned off. Then I turn internet back on and use the tabs that I want, and they reload only when I use them. If one of them misbehaves, I'll find out when I get there, but generally, I'm closing other tabs first so whatever interaction it may have been might be gone.
if I don't do this when I restart, HN tells me it can't serve pages that fast :)
What changes when the program closes and restarts other than memory refresh would be the obvious starting point. I like hunting these things down, may take a look!
Oh that's good to know, I didn't realize obsdfreqd was an optional thing - I thought the frequency scaling was a new thing that's always on now. Thanks!
As for login.conf, yeah, I already raised those limits (especially to debug firefox with gdb lol), haven't enabled SMT though. Not sure it will help much with pages bogging down the CPU though, considering... *waves broadly at the state of the web today* >_>
It's bloat caused by stuff like Electron, and people choosing to write desktop apps in HTML + JavaScript/TypeScript + some framework.
If people even used a framework like Qt (a really mature and well-maintained desktop framework, and the favored framework of KDE Plasma, etc.), their memory usage would be so much lower. (I'm not counting the memory used by Qt's shared .so / dynamic library files.)
Qt is actually not that good. I used to work for a large, well-known software company that distributed an app using Qt; every day that I worked on the native client was a nightmare of framework bugs, quirks, and limitations. For all of CSS' quirks and limitations, it's still an order of magnitude less frustrating to develop with a TypeScript/React/HTML/CSS stack than it is to use Qt. TypeScript is also a vastly better language in most respects than C++ or Python, the only two blessed Qt languages.
I don't know what "[it] was not as productive as it was" means (it's a logically nonsensical statement, so I assume you made a typo). It was not an especially great (though not terrible) or productive place to work, I'll give you that.
For colour, I'll add: it was a fairly old application first developed before QML/Qt Quick. It was also a multiplatform desktop application; then and now it's unclear to me as to whether or not Qt Quick is especially suitable for desktop applications.
We'd been in the process of moving chunks of it into an HTML/JS web view for several years - I think when that decision was first made it was well before QtQuick would've been sufficient for desktop usage. Other reasons we were doing it, besides the aforementioned bugs/quirks/limitations were that it's easier (and cheaper) to hire web devs than native devs, and that it allowed for code/infrastructure/design reuse with our web frontend. QML doesn't address either of those issues.
Admittedly, I haven't tried QML in anger, but I also feel like...given the quality of the rest of the framework, which is comparatively mature, it'd be pretty shocking to me if it didn't have its own slew of bugs/quirks/limitations.
It is web designers and JavaScript devs not wanting to acknowledge that desktop dev and C++ exist.
I use an old 2008 iMac running Linux frequently and it runs remarkably well. Firefox and MS Edge are certainly the apps that bring it down from a memory perspective (it only has 6 Gigs). Electron apps like MS Teams are right up there though and make it chug even more than the browsers do.
To be fair, I have dozens of tabs open including beasts like GMail but I certainly have to restart the browsers from time to time.
Compiling huge C++ code bases is a memory nightmare as well.
I started with C++, did that for about 5 years, then got sucked into web where I've been for about 20 years. In a funny turn of events, my company has adopted C++ and now I'm writing C++ again. I don't hate it, but 15 minutes to compile/boot a server or 5 minutes to run some tests is kind of unpleasant. And the occasional UB bug or dangling pointer. It's just really outdated. Enough so that I don't want to relearn how to build a GUI in C++. I'll wait a few more years for Zig or Carbon to mature.
These same companies are writing custom SSD drivers and cutting-edge Rust backends for the absolute best performance. I think it's pretty safe to say they picked Electron because it's the best tool for the job, not because the world's devs are all too lazy.
> they picked Electron because it's the best tool for the job
Yes, but things they're counting as "best" include considerations other than performance and resource usage. Not that they're wrong in doing so -- everything's a tradeoff, after all, and engineering is mostly about selecting tradeoffs.
But as a user, Electron apps tend to be pretty bad, and so from my point of view, Electron is as well. I tend to avoid them as a result. But my point of view as a user is very narrow.
I'm a web developer but I've also worked in C (no plus) and other stuff. I fully acknowledge it exists; I actually would love to use it.
But then I'd be reaching way fewer people than with the web. My web apps can be tried without installing anything; they work on mobile and tablets and on any desktop (except maybe cool weirdos using iMac G4s).
The web allows me to reach more people; if I'm working on apps that help make the world a better place (or so I think), that feels more important than the RAM they use (even though using more RAM might be driving more e-waste, thus also making the world a bit worse on that count.)
> Is virtually every single company and developer wrong?
Virtually every company and developer opts for simpler development and worse runtimes. Is that "wrong"? I'm not sure how to answer that. You could probably make a case either way.
Exactly, we have mediocre software for the same reason that we have chinesium for physical products - because everyone opts for the cheapest option as long as it is kind of good enough. Makes sense for the individuals involved perhaps, but is a giant loss for society as a whole.
The average consumer running electron apps on a MacBook is using far less power than the average HN user with a 1000w home lab made out of salvaged obsolete parts.
Not to mention half the users here rally against EVs and public transport, and probably own an F-150. The environment only matters when we are talking about a 1W savings with Electron vs Qt.
> The average consumer running electron apps on a MacBook is using far less power than the average HN user with a 1000w home lab made out of salvaged obsolete parts.
Both of these would be using less power with more efficient software.
But no, let's excuse shitty practices that could save a lot (over the entire userbase) for relatively little effort, because it doesn't solve everything.
Not to disagree, but the bigger factor for environmental gains is likely backend languages. If people could stop writing servers in languages that are literally 40 times slower (Python with Django, Ruby on Rails, etc.) in terms of the number of requests they can handle, i.e. throughput, that would directly translate to many fewer server instances behind a load balancer, which translates to something like 10 times less electricity usage.
That, imo, is totally unacceptable (also, financially not optimal).
I've had people tell me that the "developer productivity gains" of using Python totally justify the circa 10x electricity usage + 10x hosting/cloud costs. (Yes, developers cost a lot, but the hypothesis that Python and other slow languages result in so much higher developer productivity is a questionable assertion -- especially in contrast to languages like Kotlin, Go, etc.)
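To put rough numbers on that claim (the per-instance throughput figures below are illustrative assumptions, not benchmarks):

```python
# back-of-envelope: instance counts needed to serve the same target load
# per-instance req/s figures are illustrative assumptions, not measurements
import math

target_rps = 20_000
per_instance_rps = {"compiled (Go/Kotlin)": 10_000, "Python/Django": 1_000}

for stack, rps in per_instance_rps.items():
    instances = math.ceil(target_rps / rps)
    print(f"{stack}: {instances} instances behind the load balancer")
```

With these numbers you need 2 instances versus 20. Whether a 10x gap in instance count maps to a full 10x in electricity depends on utilisation and idle draw, but the direction is clear.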
Are you sure that servers are really a bigger factor? Sure, it's easier to see how much power a data center consumes, but there are literally billions of client devices out there.
At least with backend software there is a financial incentive for efficient software. For client software there is no incentive unless there is actual competition, of which there is often effectively none, as in the user cannot choose better software because interoperability is not a thing.
They're not wrong, but that also doesn't mean Qt is not good. It just means they're optimising for ease & speed of shipping software over resource usage. Building on top of a modern web browser (Electron basically) gets you a ton of stuff for free that you'd otherwise have to wire together painstakingly with frameworks like Qt. Also the language limitation. Far easier to throw a ton of JS devs on to a project than a ton of C++ devs.
I completely agree with you. It would be extremely odd for a company like Discord, who has done things like using nightly Rust in production, to naively choose Electron because they "don't know what they're doing" with native apps.
Many people on HN either don't understand or don't care for trading off runtime performance and other technical qualities for other things like e.g. development speed, DX, easier hiring, etc.
Or to put it another way, HN users tend to strongly value things that aren't important to most people, and not place much value on things that most people do care about.
Maybe, although I suspect most people here understand that we're a fairly small subculture. I do, at least. The thing is, it doesn't matter if my opinions/needs/pain points are fringe. They're still real, and I still care about them and advocate about them. Just like most people.
> Many people on HN either don't understand or don't care for trading off runtime performance and other technical qualities for other things like e.g. development speed, DX, easier hiring, etc.
Most people on HN understand those tradeoffs very well, but also understand that those tradeoffs are being optimized for the company's profit and not for the good of society overall. Without meaningful competition (and for Discord that would mean interoperable clients, not just alternate chat applications with their own servers), the incentive for making more efficient software is ~0.
There are a few options depending on your application type. FLTK is old and ugly, but may work for you. There are others that come from the world of game UIs, like SDL2, Nuklear, and Dear ImGui. I've also seen some Vulkan-based frameworks (mostly from the Rust community).
That really depends on which additional parts of Qt you use and what additional services you need. The core is LGPL, which is free even for commercial closed-source use.
I don't know the details, but you're likely right. We certainly use more than just the core -- the core isn't where the value of Qt really is.
That said, the cost really is extreme. So much so that there is an ongoing effort to remove Qt from our products entirely. There is general agreement among the devs that the cost is not justified by the benefits, and nobody can remember why it was chosen in the first place.
All that said, I like Qt. It's quite capable and you can produce decent, portable UIs with it. But I agree it's too expensive to justify.
> and people choosing to write desktop apps in HTML + JavaScript/TypeScript + some framework.
While the "web everything" trend is definitely responsible for a lot of the bloat, it is upon closer inspection not the whole story. For example, I remember various "AJAX chat" clients were quite popular around the turn of the century, worked in a browser like IE5 on a system with probably 64-128MB of RAM and a single core Pentium II/III, and you could also do other things like listen to music, play flash games, or read other sites at the same time. Those clients loaded relatively quickly over connections with bandwidth measured in KBps, and I don't recall much in the way of complaints about their resource usage.
They still exist. A small corner of the French free software community uses "tribunes" to communicate, and that was developed a bit before 2000. It consists of a web interface (originally not Ajax) and a standardised protocol supported by a handful of clients (most of which can federate multiple tribunes). It was briefly popular outside this community when I wrote a Drupal module that provided a tribune for any Drupal site, but then the modern web and smartphones happened, and today it's really obscure.
I still use it though, and the graphical client running on my system (which is also an RSS reader) uses 23 MB of RAM at the moment, displaying 1375 messages.
Presumably the result of Google making Chromium more and more like a whole OS (and everyone else following along). Is it ridiculous to give a browser such fine-grained access to your computer? If we were actually designing web standards for usability (instead of for maximising attention), our browsers would look a lot more like Web 1.0 and nothing like Web 3.0!
Websites that required Java or Flash for basic functionality like navigation or showing text were always ridiculed. But somehow the same is now supposed to be acceptable for JS-heavy sites?
I wouldn't say so in and of itself, but it can be argued that using a thousandfold more resources to get the same functionality is wrong.
The reasons for this are manifold, and not just that web tech makes cross-platform development easy and hiring easier. (Though it would be nice if those web stacks were also less bloated.) In a more ideal world we'd have a few cross-platform UI options already, and the companies behind the OSes running them would have an incentive to support them.
I was a system administrator when people were migrating away from Windows XP and after that debacle the appeal of Web apps was really clear. Doubly so with cell phones coming into the mix.
The Web stack probably isn’t what you’d come up with if you were starting from scratch. But are C, x86, or a lot of other standards we are living with? It seems like the ultimate demonstration of “worse is better:” people had the option of sitting around waiting for some nicely-designed, clean option that was cross-platform across all desktop devices and mobile devices, or they could piggyback on the Web. And with all the investment that’s gone into it as a result of that popularity, it is now fairly good anyway.
I'll agree with everything but "fairly good anyway" - I see most web pages and apps as only adequate, but they follow the current fashion (sigh)
By "only adequate" I mean full of, and/or based on, bad design patterns and techniques: e.g. it's usual for active areas (buttons) to be indistinct from other areas; it's normal for pages to be based on inefficient/heavyweight toolkits (the base Gmail page is 10MB of download)...
I mean more in terms of being one of the quickest ways to get a functional UI that looks the way you want in front of people. Styling trends may butt up against usability, but it's not as though using native code would have protected us from that -- look at Metro, for instance.
10MB might be a lot relative to what's going on (hard to think of many comparable applications that are as old as Gmail to me) but it's still a blip for most people's connections and certainly less trouble than using an installer.
Isn't like 99% of the reason/bloat on today's computers due to the massive amount of surveillance, telemetry, and phoning home with analytics of everything we do to a huge list of companies?
I should imagine that simply removing that would make a lot more possible on older hardware, especially when it comes to base operating systems. Removing much of the unnecessary visuals that the web has today by using something like Lynx may get you much of the desired browsing experience as well.
No, it's about the bloated virtual DOM and a big, ever-growing standard that needs to run on every platform and architecture, with millions of edge cases, while being constantly patched for security.
It's not something that's remotely practical on most modern sites. I hate everything new as well, but at some point it's easier to just suck it up and face the hell that is the modern internet.
Is there a good way to automatically *always* have reader mode on?
Better yet, are there any ways to *not even load the other junk*, just load the text only for the reader mode.
As far as I understand, Firefox still loads everything, does the parsing for everything, loads all those ads etc. (and no, uBO does not get all of them), and then - *if you click the reader button in the URL box* - it will switch to reader mode. Seems quite backwards, to the point of not really being a workable solution for the automation-junkie programmers of HN.
Would be nice to have a solution with better defaults: reader mode automatically, uMatrix by default (much better than NoScript btw) for control native to the browser, etc., for a snappier, more controllable (with paths to private and secure) browsing experience. Having a TUI would be icing on the cake.
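One partial workaround I know of (not a "never load the junk" solution): Firefox will open a page directly in Reader View if you prefix the URL with about:reader?url=. A tiny sketch of the transformation - the page is still fetched and parsed, so this fixes the workflow, not the resource usage:

```python
# build an about:reader URL so Firefox lands straight in Reader View
from urllib.parse import quote

def reader_url(url: str) -> str:
    # the target goes in the url= query parameter, percent-encoded
    return "about:reader?url=" + quote(url, safe="")

print(reader_url("https://example.com/article"))
# → about:reader?url=https%3A%2F%2Fexample.com%2Farticle
```

You can wrap this in a shell alias that hands the result to firefox, so every page you open from the terminal starts in reader mode.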
Lynx just works though, and sidesteps all the js junk.
The thing is, w3m really filters the shitty web out for you. Most interesting websites are quite readable in w3m. The ones that aren't are usually the worst ones, or those that provide an API for which better GUI and TUI clients exist.
I use w3m as often as I can. I already pretty much live in the command line, and text-heavy sites like HN or some sort of documentation look just fine (or, IMO, even better) via w3m. Not always, but often. It's a very nice option to have.
The web has gotten exponentially more useful. The fact that my computer needs to be trivially more modern to use it seems like a completely reasonable trade-off.
You couldn't buy a fully functioning computer for $35 in 2023 dollars back before the web "went astray." That wouldn't even buy you a modem!
You're only using 61 MB of your 512 MB RAM. Megabyte. I love it. Megabyte.
I distinctly remember the happiness of attaching a 512KB RAM expansion card to my Amiga, bringing total RAM to 1MB. You have 512 of those!
Any Linux distribution that can be configured to be minimal (e.g. NixOS, Arch Linux, etc.) can boot a mainstream stack (systemd + pulseaudio + Xorg + tiling window manager) in less than 300 MB RAM. It's a great experience. I use an old NUC (i5-4250U) as a daily driver and it's almost as fast as a new machine for regular tasks, it feels like the OP's.
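If you want to verify what such a minimal setup actually uses, /proc/meminfo has the numbers. A rough sketch (Linux-only; "used" here means total minus available, as free(1) reports it):

```python
# rough "used" RAM on a Linux box, read from /proc/meminfo
def mem_used_mb() -> int:
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])  # values are in kB
    # "used" in the free(1) sense: total minus available
    return (info["MemTotal"] - info["MemAvailable"]) // 1024

print(f"{mem_used_mb()} MB in use")
```

Run it right after boot to see whether your stack really stays under 300 MB.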
I always get whiplash seeing people mention hardware newer than the stuff in my server rack, which I would consider to be high-performance, and call it "old" and describe daily driving it as if it's a nearly unbelievable phenomenon
Well that's a bit unfair. Lots of CPUs slightly older than the one I pointed out will struggle a lot as a daily driver since they will lack hardware support for lots of functionality, e.g. video decoding.
Also that CPU is ultra low power, like those on laptops, and it doesn't age nearly as well as server stuff.
MicroSDs and even thumb drives are INSANE. I remember how difficult it used to be to move large amounts of data reliably. Superdisks, Zip drives, CD-R/RW, and on and on. Portable storage solutions were an absolute nightmare for ages.
I was thinking back on this the other day and I remembered my iPod Classic with the 1.8" HDD in it. Boggles the mind how that thing didn't just immediately die. The number of 2.5" drives I've lost in laptops and external drives over the years...
IIRC, the 1.8 inch drives were designed to take a beating. I think they had circuitry that would actively protect the platters from the drive heads when jostled(?)
Wasn't it just the day before yesterday that Bill Gates famously said nobody would ever need more than 640 KB of memory?
It /is/ amazing the progress we've seen in our lifetimes. I remember life before the web (http) was invented, before mobile phones (much less smartphones) became commonplace.
Some of the supposed 'progress' is really regression---like the number of people who could economically and productively use computers but think a mainstream smartphone lets them do the same things. Sure, they can web browse, and access online banking, ticketing, commerce, and other services through apps, but they usually miss out on the entire creative and productive side of computing.
However, I don't just blame technology. At least in Occidental countries, we tell ourselves we're consumers, vocabulary that hides the fact we're also producers/creators in our work. But it fits a mechanistic view of society, and works well with the central planners' Utopian pretensions.
I love hearing of projects like the OP's---breathing new life into unused machines.
It was! It was exactly the day before yesterday when he said that. I remember.
I started my "career" doing "multimedia" in the 90s. Macromedia Director programming. I also remember those who had a 486 and that one guy who had 486DX. I remember seeing the "web" for the first time somewhere in 1994. I also remember looking at my friend asking if he thinks anyone will ever make money off this thing. We both said "Naahhhhh" at the same time and laughed.
I still find it hard to trust 1TB worth of important data stored on that medium. I'm half joking, but still, only half joking. The other half is dead serious.
You're right. SD cards lack things like overprovisioning and sophisticated wear leveling that would reduce failure rates. They're also rather slow to read/write from. On another note, it's incredibly easy to lose them.
It is always a bad idea to store any amount of important data on a single medium.
Seen too many people (including myself at some point) keep all their data on one external HDD and pass it around friends with a caution to be safe with their data.
I've lost most of my photo and video collection due to two HDD failures between 2017 and 2020.
SD cards for me are just a medium to transfer from a device to permanent storage, so I don't have to worry about data retention on a tiny device unless it gets lost.
>I distinctly remember the happiness of attaching a 512KB RAM expansion card to my Amiga, bringing total RAM to 1MB.
I thought the same (A500, new enough to have the Agnus variant and jumper to make it 1MB chip total). It happens a lot when it's about memory.
1MB was so much RAM, relative to just 512KB. It could suddenly run Monkey Island and Sherman M4, and it was suddenly much easier to work with Deluxe Paint and Prowrite.
I got the 4MB card for mine, since I worked at a computer shop and could get a staff discount. The only catch was that you had to solder a wire from that to the motherboard, which I had done with one of those trigger-operated soldering guns, and just about screwed the pooch. Lifted a trace off the board, taped it back down.
There was also the IDE hard drive interface that I thought was ingenious; it was a daughter board in the 68000 socket, under the CPU, with a ribbon cable connector beside the CPU socket, and the main chips between the legs of the CPU socket.
Oh, I had the 68010, not the rest though, since it was drop-in compatible. I gave mine to a friend, wonder what became of that thing. Such a nice little machine.
I cut my initial programming teeth on an 8MHz (maybe?) 8086 with 512KB RAM. Had the luxury of a 20MB HDD. At the time it felt like an ocean of storage. Nowadays it could be transferred around the world in less than a second.
At that point you have a local energy inefficient terminal that barely contributes to what you are trying to do, and a cloud computer which must be run on your behalf somewhere, both of which require energy. What's the point? At that point I doubt it has any benefit compared to running a more modern machine.
There are some hacky ways of getting a little distance on modern OS's, like using terminal based browsers. I'm thinking Links/Elinks but that will only get you so far and so long as the security certificates are available.
I actually meant a lightweight OS but I guess the parallel construction was so tempting I wrote the wrong thing. Everything seems great until you ask the computer to render the modern Web.
This week I've been "rejuvenating" my 11 year old i7 2600k desktop. I built a zen 3 based computer a couple years ago, purely as a luxury, after spending 5 days in the hospital ICU. The i7 was chugging along just fine doing whatever I needed it to do, and it seemed a waste to just forget about it.
I fully disassembled it and cleaned out the dust. I spent an evening lapping the CPU and heat sink down to 2000 grit. I put it in a new case, as the old one's front panel fell off. I installed my old GTX 970 GPU, which seems like a perfect pairing.
Maybe I'll gift it to a young nephew, or maybe I'll set it to easily boot up into a mentally stimulating game for my 4 year old daughter. Her knowledge of computer games right now is beamng.drive and tux racer!
I think you can still be secure on the net with the Spectre and Meltdown mitigations disabled on a personal system. If your browser is current, it already has mitigations built into it. Are there any mathematical calculations showing the risk % of disabling mitigations? Or is this just herd activity where everyone sets it as a default because they don't want to accept the liability for the remote chance of lightning striking?
This is dangerous, and maybe not in the way you expect.
Personal systems are running a lot more untrusted code than, for example, servers.
Mitigations are more necessary on personal computers than on servers.
As for "it's mitigated in the browser", the issue is that there's no "one" mitigation for anything, you have to mitigate it on the OS, in the browser, in the microcode for the CPU. Everything. You can't just apply one mitigation and be done. FWIW the "mitigation" in browsers was to disable high precision timers; which seems to be making its way back, and was never necessary, it just made the exploit easier.
I highly recommend keeping mitigations enabled for personal machines; when it comes to servers with a decent WAF that are not shared amongst users that can run code: disabling mitigations makes more sense. (dedicated databases, for example).
Someone described the mitigations as: "Surrounding a drunk person with trampolines in the hope that when they collapse they will be corrected into the right position", I think this is the most apt example I can think of.
For personal systems that run a lot of untrusted code (due mostly to the web): I wouldn't personally advocate for removing a trampoline.
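For anyone weighing this tradeoff: on Linux the kernel reports per-vulnerability status under sysfs, so you can see exactly what is (and isn't) mitigated before flipping mitigations=off. A small sketch (the path is the standard hw-vuln sysfs interface; it won't exist on other OSes):

```python
# print the kernel's per-vulnerability mitigation status (Linux sysfs)
from pathlib import Path

VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

if VULNS.is_dir():
    for entry in sorted(VULNS.iterdir()):
        # each file holds a one-line status, e.g. "Mitigation: ..." or "Vulnerable"
        print(f"{entry.name}: {entry.read_text().strip()}")
else:
    print("no sysfs vulnerability reporting on this system")
```

Anything reading "Vulnerable" after disabling mitigations is your actual exposure, rather than a guessed risk percentage.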
The problem with arguments like these is to me two-fold:
1. Physical space. Most of the older tech takes up too much space. I don't live in a mansion and I have no dedicated rooms just for big bulky computers.
2. Power consumption, heat, noise. I would love to be less consumerist and use stuff from 15 years ago and stop feeding greedy corporations but... the old stuff consumes 90 to 200 watts, heats up easily and noticeably, and can get loud.
Both parts of this problem are trivially solved by modern tech. I got a 13" Chinese fanless laptop second hand for 160 euro; it has a Celeron J CPU with 12GB RAM and 256GB SATA SSD. It can do anything I need that's not programming or rendering -- including playing 60FPS video -- and last time I checked it, it consumed 30W under full load (all CPU cores at 100%) and idled at 11W or so.
Modern tech is not only about its buyers being mindless consumerist drones. It offers tangible benefits, including long-term environmental sustainability.
Yes a lot of CO2 was likely released due to its production but I'd wager that tech that heats up less and uses less power can last 10-13 years and will pay off its environmental footprint with a generous interest on top.
So unless you're a retro collector or want to write compilers for older CPUs, there are zero reasons to be excited about older computers.
(That being said, if somebody finally creates a modern 6502 CPU computer e.g. in the shape of a 10" netbook I'll buy it immediately, provided I can also program it through USB from my workstation.)
With digital devices, pollution happens during manufacturing far more than during use. It still makes sense to use a decade-old computer.
The second-hand market is filled with hardware that was built at some point; your access to that nice second-hand laptop exists because newer laptops are being built all the time. It is still sending a signal upstream that such a computer can be sold.
I do agree that buying second-hand is much better than buying new, but not buying at all is better still.
> It still makes sense to use a decade-old computer.
I don't really think it does, at least not for some of them.
A lot of those older machines just have really, really high idle power draw. Ex: Pentium 4/D, Core 2 Duo, Athlon 64, etc., running 40 to 100 watts at idle.
For comparison, a new RPi will go as low as ~1-3 watts idle, tops out at ~5-7 watts under full load, and will generally have better performance characteristics.
I run a lot of old machines in my basement because it can make sense (I throw them in a k8s farm), but I also generally have them spend a week on an electricity meter first (honestly, just a function of the UPS I run down there) and anything drawing more than 20 watts idle gets sent to the recycling instead.
The bad ones tend to run high + hot + loud, and they absolutely aren't worth it.
A decade ago was 2013; the Pentium 4 is at least 13 years old now and entered production 23 years ago.
There's still a lot of utility to be had from a 10-year-old machine. My daily driver laptop is a Lenovo T420, which came out in 2011. Sure, it can't play games, but it does fine with YouTube, programming, etc.
My server box is also a 2012 era machine, it's a 3770k build. It's perfectly fine for what I use it for.
Yeah it takes a bit more power than a modern intel chip, but it sure beats the alternative of throwing it in a landfill to save a few watts at idle.
If you have a need and you already own it - knock yourself out.
But comparing spending 40 bucks on craigslist or ebay for an old pentium/core 2 duo, vs getting an rpi... You're probably better off with the Pi (assuming you can source it, which is not a given atm).
I don't even buy most of mine used; I just get hand-me-downs from friends/family who want to get rid of the machine and ask me to wipe the drive in exchange for the hardware. But even at an upfront cost of free, a machine using 100 watts over a year is 80 bucks in my region, and my power is right around the US average.
Plus, power in my region is tiered, so the first 650 kWh per month are cheap, the next 350 are average, and the rest are expensive. So adding machines at this point actually costs closer to ~$130 a year at 100 watts idle (paying tier-3 rates).
Personally, it's just not worth it to me to take a freebie that runs me $100+ in costs a year. Not even accounting for the knock-on costs, like the extra AC needed because 100 watts is a small heater.
----
So sure - lots of older machines do just fine (the end of the core 2 duo line is quite reasonable, and most laptops are actually fine) but it's really probably worth measuring the power draw.
The five-year cost of operating that RPi is ~$30; the five-year cost of operating a 100-watt machine is ~$450. And I can run a LOT more low-power machines and stay in my tier-1/tier-2 pricing for power.
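The arithmetic above is easy to reproduce. A quick sketch; the per-kWh rates are illustrative assumptions, chosen to roughly match the ~$80/year and ~$450/5-year figures quoted in this thread:

```python
# Back-of-envelope: electricity cost of an always-on machine at constant draw.
# Rates below are assumptions, not official figures from this thread.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_kwh(watts: float) -> float:
    """kWh consumed in a year at a constant power draw."""
    return watts * HOURS_PER_YEAR / 1000

def cost(watts: float, years: float, usd_per_kwh: float) -> float:
    """Total cost in USD over the given number of years."""
    return annual_kwh(watts) * years * usd_per_kwh

# A 100 W idle machine at ~$0.09/kWh comes to about $80/year.
print(round(cost(100, 1, 0.09)))   # 79
# Five years of a ~7 W Raspberry Pi vs the 100 W box at ~$0.10/kWh:
print(round(cost(7, 5, 0.10)))     # 31
print(round(cost(100, 5, 0.10)))   # 438
```

The tier effect mentioned above just means the marginal rate passed to `cost` goes up once the household crosses each monthly threshold.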
Helpful thought process. I've defaulted for a while to older machines I have around, but now I question how much power they're drawing. I've never really considered power beyond how long my laptop batteries last.
Do you have any pointers what sort of equipment to use to measure the power draw? Are you simply measuring it from a UPS?
These are always handy to have around. The first day with one is like the first day with a label maker. You test everything!
Some figures from around my house.
Lenovo Thinkpad T400 - 2.4GHz Core 2 Duo laptop: idles at about 10 watts and peaks at just over 30 watts.
Lenovo Thinkpad T420 - 2nd Gen i5 is about the same.
Lenovo Mini 10 - 1.66GHz Atom doesn't seem to know how to idle its power regardless of OS. Runs at a constant 10.5 watts; 15 watts with a hard drive installed instead of an SSD.
Core 2 Duo Mac Mini - 8 watts idle - 24 watts peak.
Sony Bluray player - 0.5 Watts idle. 5 Watts during Bluray. 3 Watts during DVD.
Desktop PC - 2nd Gen i7 + Geforce 1650. Idle 61 Watts - Peak in game 95-101 Watts.
Sony AM/FM radio - 0.5 watts. That one was actually really neat: it shows just how little power it takes to run.
The short answer - Live somewhere very hot and humid. In my case - Atlanta.
Add an old house on top and throw in working from home, and usage can creep up on you very quickly. We average around 1000kwh a month, but summer months where it's 95+ and near 100% humidity means the AC reasonably needs to be on.
Jun/Jul are the worst, we hit 2200kwh over those months last year.
As for the machines - like I mentioned, they run a k8s cluster I host numerous services on: seafile/bookstack/jellyfin/keycloak/homeassistant/email/dns/mealie/snipeit/etc. Most idle around 5 watts and aren't a big deal. I recycle the machines that would end up costing me real money in power usage.
---
I'll add - my next personal project is going to be moving the entire cluster over to a dedicated solar system. As long as I don't do any grid-tie, I can do it without permits. I'm just waiting to find a decent deal on inverters at this point, since I already have a pallet of used panels.
Yep, I was talking about the environmental argument, where drawing 40W idle might not be that bad compared to what happened at construction. Especially when you can source electricity from low-impact sources.
Replacing really obsolete machines used as servers with an RPi certainly makes sense power-wise (as long as you don't need storage), so I'm not arguing with the point you make about that.
However, I agree with the OP comment that computers should not be replaced simply because newer, more energy-efficient alternatives exist: at least not anything made after 2010.
When weighing whether or not to extend the life of a device that still does the job it's intended to do, it's almost always worth it to extend the device's life.
As an example, if we go by Apple's data: a 2016 13" Intel MacBook Pro creates a yearly impact of 18 kg of CO2eq through usage (electricity).
The 2021 13" MacBook M1 is a better machine environmentally by all accounts: manufacturing it polluted less than the 2016 machine (168 kg CO2eq vs 386 kg), and it is more energy efficient (13 kg CO2eq per year).
Still, you would need a whopping 33.6 years of using the more energy-efficient laptop to cover the CO2 from manufacturing it. And that's using global carbon data; if you live in a country with a low-carbon grid, it could take a century to recover the greenhouse gas emitted during manufacturing of the new laptop.
(Not saying I would not still buy an M1 over keeping the slow Intel Mac, but please don't claim you are helping the environment when doing so.)
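For what it's worth, the payback figure follows directly from the Apple lifecycle numbers quoted above; a minimal sketch:

```python
# Payback arithmetic using the Apple lifecycle figures quoted in the comment
# (all values in kg CO2eq; usage figures are per year of electricity).

OLD_USE_PER_YEAR = 18    # 2016 13" Intel MacBook Pro
NEW_USE_PER_YEAR = 13    # 2021 13" MacBook M1
NEW_MANUFACTURING = 168  # building the M1 machine

yearly_saving = OLD_USE_PER_YEAR - NEW_USE_PER_YEAR   # 5 kg CO2eq/year saved
payback_years = NEW_MANUFACTURING / yearly_saving
print(payback_years)  # 33.6
```

On a lower-carbon grid both usage figures shrink while the manufacturing figure stays fixed, so the denominator shrinks and the payback stretches toward the "century" mentioned above.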
> Replacing really obsolete machines used as servers with an RPi certainly makes sense power-wise (as long as you don't need storage)
You do need storage, but you need it for all of them anyways. I certainly won't trust the original spinning disk HDD that comes with hand me down machines with anything more than an OS image. Everything else goes on the NAS.
> However, I agree with the OP comment that computers should not be replaced simply because newer, more energy-efficient alternatives exist: at least not anything made after 2010.
This - this is where the answer is to measure it. Sometimes the result will surprise you quite a bit. Most machines made after 2010 will be fine, but trust me, it's really worth plugging it into a watt meter and making sure.
Turns out all sorts of things adjust power draw, and one of the least maintained parts of old machines is the PSU - usually it's filled with dust/dirt, and as long as the machine still runs no one has paid it any mind in the last 10 years.
> Not saying I would not still buy a M1 over keeping the slow Mac Intel but please don't claim your are helping the environment when doing so
Who the fuck is talking about replacing an intel macbook with an m1? I'm talking about getting old tower machines from office/school closeouts or as hand-me-downs and repurposing them. An intel mac (even a 2010-2013 era one) is still worth WAY more than an RPI (~$200 vs $40), and laptops in general use a very different power curve (batteries matter - who knew...?).
Not to mention you can't even run a decent OS on the modern intel macs after 2016... (although at least Asahi is making progress these days) so trust me - I'm hardly suggesting a macbook update (I would strongly suggest purchasing from a different company, though - but that's a different conversation related to a different set of morals).
My post was mainly answering the OP comment and your follow-up.
> It still makes sense to use a decade-old computer.
>> I don't really think it does, at least not for some of them.
I totally agree with you regarding the homelab topic (although personally, I started using low-powered refurbished x64 thin clients, drawing about 10 W, in place of an RPi cluster, mainly to get real SATA ports on them).
My point is that, environmentally, continuing to use a 10-year-old computer rather than buying a new one is generally sound advice (assuming it still does what you want it to do).
Since I didn't have data for a 10-year-old computer, I used what I could find: the 2016 vs the 2021 MacBook lifecycle assessments from Apple. The same point would probably stand between a 2010 computer and a 2023 computer. Even if the old one is less energy efficient, the 150 to 300+ kg of CO2 you save by not building a new device goes a long way.
And instead of consumer devices or RPis, if you are talking about datacenter-class servers, you save even more CO2 by prolonging your device's life, since a new server's manufacturing footprint is more like 1500-3000 kg CO2eq.
Sadly mine is not per-socket. I have an old APC that shows load in watts for the sockets under battery backup, and also load in watts for the entire load (it's split and offers overdraw protection to everything, but battery backup only to half the sockets on the back).
Usually I just leave the new machine as the only thing idling on the sockets without battery backup and compare the two.
I agree that extra pollution comes from manufacturing and said as much in my post already.
But I believe the fact that this laptop can last at least 10 years, plus that it will consume an order of magnitude less power over those 10 years, adds up to more care for the environment.
The production is already completed. The damage has been done. Let's use the machines at least because they are 10x more efficient and we can slow down the environmental damage.
Also, for many of us, outright not buying machines would mean switching industries. And a lot of us don't want to.
> It is still sending a signal upstream that such a computer can be sold.
That's a bit of a stretch though; it only signals to the vendor that people hold on to their older machines for a longer time. If anything, the laptop I got second-hand is no longer even listed on the vendor's AliExpress store. It only has a marketing page on the website, and you can only find it second-hand or here and there on the internet.
I think we need to take a step back and see the big picture. There are 10-year-old computers sitting unused; taken as a whole, they could cover a percentage of people's needs.
If it does then the people using them don't need to buy another one, new or refurbished.
If they don't need to buy, other people can't sell as easily.
If they can't sell as easily, they might use it longer, and not need/want to buy another one, old or refurbished, etc...
It's a whole market, and if manufacturers see that the overall tendency is toward using machines longer, they will get the signal that there can be fewer sales overall. And thus build less.
Everything we do is part of a larger system, and I believe we should understand what the system is like and how it works if we want to have a real impact.
I believe many of us are aware of the bigger picture you outline, but you keep ignoring the point that those 10-year-old computers burn much more electricity. And as I already said, the newer, more efficient machines have been built already, so we might as well use them and help the planet by using fewer watts.
The process you describe is yet to happen; e.g., I don't want to replace the computers I have unless they crash and burn. So in 5-ish years, I believe vendors will indeed slow down manufacturing, because many people do as I do: they get the newer, more power-efficient stuff and then hold on to it for longer periods.
I was going to disagree with you, but a back of the envelope calculation shows that the energy of computer production is probably equivalent to that of a few hundred hours of use, and so if you can save 10% on energy you are probably environmentally justified in switching to a new device. This is very counterintuitive to me.
To first order, the upper bound for the carbon cost of production/recycling is going to be a multiple of the energy required to melt the components. Call that multiple four: production of raw materials + reshaping in the factory + recycling of materials at EOL + energy cost of assembly. So a rough upper limit on the energy to switch computers would be four times the energy to heat the computer's mass in water (4 J/gC) up to the melting point of silica (1900 C), and a rough lower bound would be the same using the heat capacity and melting point of iron (10x lower heat capacity, 1500 C) and neglecting assembly. For 1 kg of material, that gives a range of 1.8 to 32 MJ. Now you can compare to the energy used. For myself, I'm on a 2 kg laptop that draws up to 250W, but conservatively 50W. That adds up to 32 MJ/kg in 400 h. If I can reduce the energy use by 5W with a new model, that will pay for itself in 2 years of workday computation. Crazy.
Caveat: there will be additional non-CO2 environmental costs. Most components are not recycled, and resource extraction is also environmentally damaging.
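The bound above can be reproduced numerically. A sketch under the stated assumptions: an overhead multiple of 4 for the upper bound, and (my reading of "neglecting assembly") a multiple of 3 for the lower bound, with iron's heat capacity taken as ~0.4 J/gC:

```python
# Back-of-envelope from the comment: production energy as a multiple of the
# energy needed to heat the device's mass to a melting point.

def melt_energy_mj(mass_kg, specific_heat_j_per_g_c, delta_t_c, multiple):
    """Energy in MJ to heat `mass_kg` by `delta_t_c`, times an overhead multiple."""
    return mass_kg * 1000 * specific_heat_j_per_g_c * delta_t_c * multiple / 1e6

# Per kg of material:
upper = melt_energy_mj(1, 4.0, 1900, 4)  # water-like heat capacity, silica melt, incl. assembly
lower = melt_energy_mj(1, 0.4, 1500, 3)  # iron-like heat capacity, no assembly
print(lower, upper)  # approx. 1.8 and 30.4 MJ, matching the quoted 1.8-32 MJ range

# Usage side: a 2 kg laptop at a conservative 50 W reaches the upper bound in
hours_to_match = 2 * upper * 1e6 / 50 / 3600
print(round(hours_to_match))  # a few hundred hours, as the comment estimates
```

The conclusion follows: a few hundred hours of use carries the same energy budget as production, so even modest efficiency gains can pay back within a couple of years of workday use.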
The only remark I'd have here is that the cost to produce is amortized over the course of producing several hundred or even thousands of units. It's still there but is greatly reduced per unit I think.
Oh I'm well aware that newer machines use less electricity and that's good; it's just not necessarily true that, looking at the entire lifecycle, it is better for the planet compared to taking your older computer out of the closet.
Again, I'm not saying what you're doing is bad, if everyone could fulfill their digital needs and wants out of second hand, that'd be awesome. I only wanted to point out that using less electricity isn't automatically better.
It's a balancing act between what we need to get our jobs done, what are our ergonomic preferences -- I have the so-called "lap desk" (basically two parallelepiped-shaped pillows and a well-designed plate with USB extensions to put on top of them) where I like to work on the couch or the bed -- and what is kinda sorta good for the planet.
I can't hyper-optimize only for the planet. I am doing my best but obviously everyone else has to help a little bit as well.
Thanks for entertaining this discussion, it was interesting.
I'm not completely convinced. The early PCs didn't need heat sinks or fans other than the 80mm PSU fan. They were inefficient, but the total power wasn't that huge. Even for newer computers, I have a Raspberry Pi 2 that draws 3 watts at maximum load and less when idle. It's powerful enough for what it does and even with this energy crisis it would need to run for years before the energy savings would cover the cost of a new model.
Yep, it's not a super clear win. But I was talking about the old desktop or all-in-one machines. They still draw a bit more but you're also right that the amortized cost blurs things. But since I plan on using the refurbished modern machines that I got, for years, maybe the equation will be in my favor... eventually. :)
I wonder if the same is true for a -say- G3 based laptop or desktop. Apple’s laptops would often be usable much longer on batteries compared to its Wintel counterparts.
It is an interesting question. Apple hyper-optimized the OS for PPC when it was still relevant. Even modern PPC OS's with a good 20 years of additional compiler optimizations don't run those things anywhere near as well.
I recently, literally, took my old Sony Vaio laptop (bought in 2011) out of a drawer and converted it to a home server, replacing an RPi 4 whose SD card had just died (I used it for home automation).
Well, the performance upgrade has been astonishing; it was probably due to the SD card vs the SSD in the Vaio, but I'm not going back. I'm actually thinking about self-hosting much more now!
I still use a 2015 MacBook Pro daily. (Also have an Apple Silicon MacBook but I keep the Intel-based one downstairs and pretty much just use it as a web browser.) It's had its battery replaced (and the screen replaced under warranty for a defect) but it's just fine for my needs.
The power issue is more with large desktop systems. I was setting up a moderately old but fairly hefty system as a file server. Couldn't get it working--which I discovered belatedly was an issue related to my network switch--so I bought an unpopulated Synology box. But, given power consumption, I also realized that was probably the more cost-effective thing to do anyway.
> It still makes sense to use a decade-old computer.
Economically the incentives are not aligned this way though, if you pay for grid power... you still have to pay for the electricity, and there's also the opportunity cost of that payment (e.g. you could spend the money you save on your electric bill on carbon credits or green charities or whatever).
Manufacturing pollution is a localized problem. Using energy-inefficient machines leads to more energy consumption and CO2 emissions, which threaten the global climate.
My desktop PC, which is over a decade old, performs fine under current releases of Linux and Windows. My laptop PC is a bit over two years old. Performance is lacklustre under the current release of Windows. Newer isn't always better, particularly when you're comparing laptops to desktops. I also doubt that many laptops would have an equivalent lifespan to a desktop. They have more perilous lives due to their portability and portability means they are frequently built with more compact and less robust components. That is especially true when you consider that laptops are more difficult to repair (due to more integration and more difficult to obtain parts).
Granted, the iMac G4 is in a different league since it is a 20 year old machine that was produced when performance jumps were more meaningful. I can see it being a useful machine for someone who has limited interest and use for computers or simply wants something disconnected to get work done. I took the latter approach in the early 2000's and it worked quite well. (The fastest machine I had was a 68040, and I did run NetBSD on it.)
Yeah, I don't disagree. My gaming PC is 11 years old and I have zero complaints; the only thing I ever had to do for it was change the GPU 6-7 years ago. It has been rock-solid otherwise.
But again, power consumption.
And I'll grant you that it's possible this Chinese laptop I got will have its logic board burn out long before the 10-year mark. Sadly that's likely, yes. But as mentioned, when I am not programming or doing anything that requires more CPU power, it performs perfectly for what I need, and I feel better knowing it consumes ~17W on average.
There are likely sweet spots e.g. get a Ryzen R1000 or V1000 mini PC and stuff it with 2x NVMe and 2x SATA SSDs and have it drive 2-4 displays at home. The displays will likely consume 20x the power of the machine itself (lol), and that machine is very likely to last for a long, long time.
For my needs, however, a Chinese laptop that cost me 160 EUR and has proven it can last anywhere from 7 to 11 hours on battery is perfect. (Though I'll get pissed if it lasts less than 5-6 years, for sure.)
I agree with the general gist of your laptop-vs-desktop argument. I will add, though, that there are laptops that were built to last; I am nowhere near the ThinkPad fanatic some people are :-) but I still have several T420s laptops, 11-12 years old I think, running as daily drivers with Windows 10. That's about as long as or longer than any desktop I've owned, particularly considering how little I've changed them (mainly adding an SSD).
The T420 would be perfect if it weren't for the god-awful displays manufacturers used. I'd still be using mine otherwise. Just going from a 1080p monitor back to it was a pain. After using my M1 MBP and my 4K monitor for the past year, I could never go back.
The T420s is the smaller version of the T420; still modular and much more portable.
There were likely several versions of LCD panels for it - the T420s units I got were way better than the T480 and E580 I picked up recently, even though there's 6 years between them. I believe the ones I have are either 1920x1080 or 1600x900 (whereas the T480 is inexplicably only 1366x768, and the E580 is 1080p but with awful contrast or gamma or something).
Nice, I've considered doing this... though then that would effectively double the amount I paid for the system. Are the improved displays from a later/different ThinkPad model, or are they something being newly-manufactured? Part of the reason I chose such an old computer is to purchase already-existing hardware second-hand, rather than put my money towards newly-manufactured stuff.
Very cool. I just picked up a more portable, refurbished T450s to use for when travel or general portability matters, as well as being less to worry about than my primary device.
Is there a good site for recommended upgrades by model? I'm pretty pleased with this as is for Linux use already. (8 GB RAM, 250 GB SSD).
I checked, and it looks like the PowerPCs were power efficient. The 667MHz model usually ran around 14W, up to a maximum of 19W. The G5 processors were known to run hot and need a lot of power. I remember having a PowerBook with a G4 back then: I'd close the lid without shutting it down and come back days later to find only a few percent of battery drained. Of course, performance-wise, the Intel mobile CPUs at the time were starting to really pull ahead of the mobile PowerPC chips, except for a few very specific workloads.
My white whale... the dual 2GHz PowerMac G5 I bought for $50. Sure, it took months until I finally figured out how to get the OS to install properly, having to delve into the patchwork of information on Open Firmware, and even then I am not entirely sure how I did it.
And in the end I was left with a machine that ran at well over 300 watts to do what a Raspberry Pi 4 can easily do at well under 5 watts. I should have known: I was there when the first Intel Mac mini came along, running as fast as these things did at launch. I knew they were slouches even then. I should have known better.
Eventually something went wrong with the power supply and I ended up scrapping everything but the case which now has my PC mounted in it.
Dang, yeah it's unfortunate how high the power usage is. I didn't realize this when I bought a couple for ~$75 each. They are really cool-looking systems, but definitely not too efficient :\
Oh yeah, I consider them some of the peak of case design. As much as it is cool to run these as a G5, keeping the case is a neat homage to what they were.
This. The thing I dislike the most about OpenBSD is the lack of modern, supported hardware.
One notable exception is the PC Engines APU 2, though that’s stretching the definition of modern.
Seriously: Pick a laptop SKU that is still being manufactured and has similar ergonomics to a MacBook Pro or classic thinkpad (centered keyboard + trackpad, 4K screen, all the other bits in the right spots, reasonably quiet, 8 hour real world battery), and mark it up by $200. Sell it from a site that OpenBSD links to.
Make sure the manufacturer agrees to build that exact laptop for at least 5 years. (Moore’s law is dead, after all.)
Have the project test 100% of the hardware devices in that laptop on each point release, and donate the $200 markup to the project.
The biggest things that OpenBSD doesn't support (well) are Bluetooth, some wifi cards and TPM modules. And you can literally fix all of these issues with a supported USB device. Been running OpenBSD on multiple generations of Thinkpad laptops without issue. Using the Protectli 6 port device as an OpenBSD router (Intel NICs are well-supported).
Add to that, Bluetooth is generally a mess. The only thing it's vaguely useful for is audio, and I believe you can still use a Bluetooth audio transceiver such as the Creative BT-W3, which presents itself as a USB audio interface to the OS, if you really want to connect those audio headphones of yours.
My desktop is a new-ish AMD CPU, brand-new intel wifi, brand new AMD GPU computer, and I didn't so much as think about anything related to compatibility with OpenBSD. For laptops, I'd assume the Framework laptop works without issue? Hard to compete with Apple on portables for sure, but anything but macOS is a WIP there.
For me the thing that keeps me from running OpenBSD 100% of the time is my Steam library, but I'm considering demoting my PC to a living room gaming device and handling my modest desktop computing needs via something small and fanless running OpenBSD.
I really want to see something like this. I'd honestly be happy with something like a RaspPi with an SSD and a huge battery in a laptop with good build quality. My system reqs are really low; a laptop that would survive 20 years with regular use would be fantastic.
The Pinebook Pro is well supported by NetBSD (not sure about OpenBSD), and it can take eMMC, at least. It's quite affordable, battery life is excellent, and my Pinebook feels pretty rugged. It has a six core RK3399 and 4 gigs of memory. I don't think you can get a better laptop for the price ($220 US).
The old stuff is far more open and documented, which is very important from a freedom perspective. Repairability is really low for anything even slightly recent.
I'm not sure which Celeron J you're referring to, but I know some models were affected by the infamous LPC clock bug --- which might explain why there seems to be a surplus of machines with those CPUs available for amazingly low prices.
I like computers the same way I like my cars, loud and hot ;-)
It's 100% a fact that 1 million of us can't make the difference that 5-10 plants will make if they modernize their processes and are less toxic to the environment. Facts are facts.
But I also don't like paying a lot for electricity. :)
Virtue signalling is dumb and I hate it. In this case I happen to also have a personal interest.
> including long-term environmental sustainability
Err... no not really !
The environmental impact of pretty much all consumer electronics is by (very) far dominated by the impact of its manufacturing (mining, water for manufacturing the chips, oil for transport, robots for assembly in countries where electricity is made with coal, etc.). Therefore, anyone trying to be serious about "environmental sustainability" understands that it mostly means extending the life of existing equipment as much as possible and avoiding buying new gear (even if the old equipment is bulky, noisy and consumes a lot of electricity compared to the new).
Is anyone actually using the pi as a desktop? Feels like most people need a laptop for portability, which is already many times more powerful than the pi, and then if you do have a non portable desktop, it's for gaming.
I've read comments in the past of people adventuring into these kinds of scenarios though I haven't tried to use a pi as a desktop.
I use old laptops. My daily driver until a year ago was a Dell E7470, which worked well enough for browsing and coding in VSCode... until the keyboard broke, twice. Now it's gathering dust because I'm too lazy to change the keyboard again, but it should work fine on Linux at least.
My current daily driver is a ThinkPad T490 with an i7-8565U and 16 GB of RAM that was provided to me at work and which I can buy used for $500 at the moment.
But if what you want is fanless and decent hardware I'd recommend a NUC.
Sadly where I live it never really gets below 15 degrees C, and is normally around or above 30C for most of the year, so computers heating up my office is a real problem.
I think there’s something to be said for the fundamentals of modern computers too. A 4K screen, excellent keyboard, blazing fast NVMe, etc are all attainable on a budget now with CPUs using less power, as you said.
It's always odd to me to hear people complain about noise or power consumption. It's just never a thought that crosses my mind, much less the CO2 emissions, haha. My concerns are generally about my compile times on older hardware and nothing else.
It's not the end of the world, but I hated the 2019 Intel MacBook I had at a previous job. The thing would be roasting hot, fans blasting constantly, if you had Docker open. Now I run Docker and a million other things on my 2021 MacBook in total silence on a cold laptop.
Feels like we jumped 10 years forward in a single iteration.
I tried to get into Gemini for a while - it’s like Gopher but reimagined with some more modern ideas.
However I quickly figured out there is not much to actually do there. The way modern web works is horrible (I don’t want to click just another cookie banner thanks), but at least it’s easy for all people to contribute.
Sorry, but that's futile: the entire X.org stack and its drivers are bundled in base under the Xenocara brand. If you don't have it, consider it lost forever.
If anything, you can set
machdep.allowaperture=2
in /etc/sysctl.conf
Also, /etc/fbtab needs some settings for Mesa, too. Basically you need to make the files under /dev/dri* readable and writable via fbtab for /dev/ttyC5, if I recall correctly.
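Putting those fragments together, a minimal sketch of the two files. This is from memory, so treat it as an assumption: the fbtab line follows the `tty mode devices` format of fbtab(5), and the exact device node under /dev/dri may differ on your machine; check the man pages before relying on it.

```
# /etc/sysctl.conf (takes effect at boot)
machdep.allowaperture=2

# /etc/fbtab: grant read/write on the DRM node to whoever logs in on ttyC5
/dev/ttyC5 0600 /dev/dri/card0
```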
I have plenty of criticisms of OpenBSD but that wide array of support is why I end up running it on hobby systems. I’ve got a Lemote netbook (which uses a homegrown Chinese MIPS chip) and a PPC Mac of similar vintage in the post. It is really cool to boot up these oddball systems and use familiar programs on them.
When I really think about the kinds of things I find essential in a computer, almost everything could be achieved with a TUI or a very light GUI.
Unfortunately, the problem of proprietary software and APIs is always around the corner. I can use a TUI client for Telegram, but I can't if I want to message someone on Facebook. I can use a simple newsreader for blogs, but I can't for my friends' Instagram posts.
I could honestly scrape by with a 20-year-old computer if the world were more open source. Such a shame.
A fourth-generation Intel i5 laptop from around 2013 is still kicking hard, provided enough RAM (8GB or 16GB depending on your needs) and an SSD.
Not "usable (but read the fine print)". Comfortable to use (running Linux at least).
An iMac G5 from 2004, on the contrary?
Too heavy, too hot, too slow (a single-threaded CPU doesn't cut it).
Late 2007 Intel Core 2 Duo? RAM maxes out at 4GB, but it's surprisingly usable. Not comfortable, but usable.
That should give you an idea of where to set your expectations. I do recommend buying second hand but I wouldn't go under 2012 for a daily driver personally, for peace of mind.
> Because frankly the iMac wasn't really useful in a modern or a retro sense, though I expect some will debate its retro merits.
Can it run OS 9 natively? The earliest iMac G4 could, I've heard.
I bought a Mac G4 a couple years ago so I could have the most powerful computer that runs Mac OS 9 natively which then enables me to run Symantec's MORE.
MORE is this ancient, super dope outliner, which is just so fast and ergonomic at the keyboard. It's got all these features that have never really been replicated since, like cascading styles and presentation mode.
On the PC side, a 2004-2005 vintage desktop wouldn't necessarily be all that impractical to use even today. I'm basing this on Anand's "buyer's guide" to get a feeling for what the general lineup was like then: https://www.anandtech.com/show/1692/11
You got a 64-bit CPU, SATA HDD, PCIe graphics with OpenGL 2, USB 2.0 connectors, and a DVI flat-panel display. It's something I'd expect you could just throw Debian Stable on and be perfectly usable with a lightweight WM, or maybe even something like XFCE for a full graphical desktop. Of course, as others have mentioned, the browser is the real problem, but 1GB might be enough to browse HN at least?
I love the idea of OpenBSD but have only had a couple of sad reintroductions over the years and never spent the time apparently required to "get it just right" before needing to get something else done.
It feels like a base install leaves you in several hours of configuration debt, and I apparently don't have the concentration span to dig my way out to somewhere interesting and not just half way back to productive on the systems I already use.
My newest computer is from 2014 for reference, oldest from around 2008 and I emulate older stuff.
I know I'm "doing it wrong" and perhaps it isn't a right fit for everyone but ... That's how it is.
I have a love-hate relationship with OpenBSD but I don't feel this "configuration debt". The whole point of it is that you don't enable stuff you don't really need; and if you really need something, then you should make the effort to actually understand what it does and how to configure it.
For this same reason, it's simply a poor desktop OS. A modern UNIX-like desktop is a very complex system, with so many moving parts held together with hacks and duct tape. As OpenBSD effectively forces you to familiarise yourself with such duct tape solutions, it also forces you to do more work than you'd want - or to stick with environments that feel somewhat basic.
Yeah, I was running Firefox on my iMac G4 with OpenBSD. It was being killed frequently though by hitting the memory limit (which at the time I didn't know how to raise). I'm intending to go back and try again with my deeper knowledge of the OS. I have MANY old machines which could benefit from having a latest-features OS to use.
Last time I used a computer with 1GB RAM was in 2014, with 32-bit Windows XP. The computer had 512MB in use after startup and could only handle 2-3 tabs in the Chrome of the time, everything (e.g. Gmail at the time) seemed to use about 100MB.
So 2014 Chrome with webapps like 2014 Gmail wasn't really practical on 1GB. So I very much doubt that 2023 Chrome with 2023 webapps would be practical on 512MB RAM.
Netsurf works well on low-spec hardware and renders most pages well enough. Also, switching to lite or mobile versions of websites, when they exist, makes them much lighter and more readable.
Power usage is a serious issue. I recently scrapped an old PC. It was perfectly fine and usable, but I found it draws more than 200W with its graphics card, and taking into account how much electricity prices rose in Europe, it's super expensive to run. The new one draws 15 times less. So let's take care of the environment at least a bit.
I always take time to consider where the balance point is between replacing working, but less efficient equipment with newer, more efficient equipment.
It took materials and energy to produce the old one. I know electricity is expensive some places, but even then, sometimes the break even point is years in the future.
Despite all of this, I just end up realizing it's more complicated than I'd hoped.
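The break-even point mentioned above is easy to sketch. All the numbers here are illustrative assumptions (the old machine's draw, the "15 times less" replacement, and especially the embodied manufacturing energy), not measurements:

```python
# Rough sketch of the replace-vs-keep break-even calculation.
# All inputs are illustrative assumptions, not measurements.
old_draw_kw = 0.200              # assumed draw of the old PC (200 W)
new_draw_kw = old_draw_kw / 15   # "the new one draws 15 times less"
embodied_kwh = 1500              # assumed energy to manufacture the new PC

savings_per_hour_kwh = old_draw_kw - new_draw_kw
break_even_hours = embodied_kwh / savings_per_hour_kwh
years_at_8h_per_day = break_even_hours / (8 * 365)
print(f"break-even after {break_even_hours:,.0f} hours "
      f"(~{years_at_8h_per_day:.1f} years at 8 h/day)")
```

With these made-up inputs the payback is a few years, which is exactly why the answer flips depending on how many hours the machine actually runs and on the real embodied-energy figure.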
> let's take care at least a bit about the environment
You know that producing that PC used way more energy than it would consume if you left it running?
> taking into account how much electricity rose in Europe
This seems like your main reason actually. Your wallet, not the environment. If the energy prices went down, you would have kept the old PC around rather than having it end up in a landfill or shipped to a poor country where people literally burn the PCBs to recover tiny bits of rare metals.
I think your priorities are a bit off. If you want to help the environment, look at your other habits first that have impacts that are orders of magnitude more. Flying, driving, plastic, food, etc
I totally understand the nostalgia value, but energy doesn't grow on trees. A small mini-PC would consume a fraction of the current needed to run those old machines, offering much more computing power in return. Not to mention the much better scaling achieved by modern processors; some iMac models would idle at almost 100 watts.
I’m not suggesting one should use a setup like this daily, mostly commenting on how good the experience still is when running *BSD on it.
At the end of the day if all you need is a computer, it’s nice to know that basically any computer can still be serviceable. In a world where you can get 4k series i5s in the trash though, I don’t actually think anyone will be using such an old Mac seriously.
Seriously, you should see the hardware companies are throwing away right now… i5/i7 6xxx computers and up… what a waste. They're fully functional, but they've been fully amortized, so they're thrown into the landfill and replaced with today's model, to be thrown away in turn in 3-4 years.
Ohh.. I've always wondered how to get my hands on this "garbage". The machine I use every day is from like 2013... the fastest PC I own has a processor that was launched in 2014. It's partly because I'm frugal and partly because I don't believe in constantly replacing hardware that works just fine. But it sounds like by this point companies are throwing away stuff far newer than what I use! haha
Depending on where you are, there'll either be local auctions of surplus tech, or eBay / refurbishing stores selling these kinds of things. I always went for Dell Optiplex, but there are HP and Lenovo equivalents too
Yeah, we used to have Free Geek Vancouver here, but they unfortunately shut down recently.
Still, my point is more that companies are literally throwing these systems into the trash, so paying for them feels like getting ripped off. I'm always hoping for the opportunity to snag a system or two that's being tossed. It has happened before, or I've paid like $5-10 for something destined for the dumpster. It definitely seems like a "have to be in the right place at the right time" kind of thing, though.
In Germany, https://www.afbshop.de/ provides PC disposal services, and then sells the reconditioned machines with new hard drives/SSDs. They also have retail shops in some German cities; I bought an HP Prodesk micro PC from the one in Nürnberg.
Indeed! I am heating myself exclusively with wood every winter for 10 years now, since I moved to the countryside. Energy does literally grow on trees.
ARM or RISC-V based single-board computers are becoming a thing. If one needs a small low-power Linux system with a fully working desktop experience, nothing beats those in terms of power consumption. Most of them are even passively cooled and hence silent with no effort required.
The thing is, the fact that it can work well on an old iMac G3/G4 means you can probably have a decent enough experience on an old Raspberry Pi 3 or similar, or on that $30 laptop with a much more recent CPU and 1GB of RAM that is also supported by OpenBSD.
It is kind of neat just how little power some of these older systems used. A big part of it was that it was cheaper to make processors with non-ceramic caps on them, but that limited the amount of power they could use due to thermal dissipation limits.
A good example in a similar fashion is the Super Nintendo. The power supply is rated to 7.85 Watts but it only uses roughly half of that when running. I believe the Neo Geo used 5 Watts peak. And these aren't physically tiny machines like a Raspberry Pi.
Typing this on my T420 running OpenBSD. It does everything I really need it to do. I don't play (new) games, and I have a more powerful server tucked away here in case I have any CPU-intensive jobs I need done.
Honestly after seeing the absolute state of Linux on the desktop over the last decade, there is no reason for me to even consider it any more.
Can't comment for all aspects of a Linux desktop versus an OpenBSD desktop, but one example that comes to mind is the sound system. Just try reading the design paper of sndio [1]. It is a pure joy to read. A beautiful architecture.
Now compare that with ALSA / PulseAudio / JACK / PipeWire... The audio system in Linux is fragmented and incomprehensible.
That’s a fine piece of hardware and a fine OS to run on it. I would feel a bit hampered without a good browser though. So much of my work involves trawling GitHub for bug reports and stack overflow for bug mitigations. Very little modern software development, for me at least, happens in isolation. There’s a third-party dependency or two in almost everything I do.
I try to use w3m a lot for that stuff. Many sites still have a cheeseburger menu made out of a tree of <ul> lists up the top of the page, and require CSS indentation or borders to render discussions nicely. Are there terminal browsers that support these features better?
No big distro supports it any more; the only one I can think of that still does is Adélie. Modern web browsers won't easily compile for it, and probably other packages too. And soon, support for GPUs like the ATI Rage, commonly used in 1998-2001-era Macs, will be removed from the kernel.
Isn't there an unofficial Debian port? Stuff can be added back to the kernel if people commit to maintain it and keep it in line with the evolving code base.
"How much computing power do I really need on a day to day basis?"
I guess this is a kind of Computational Maslow's Hierarchy of Needs.
Basic needs can be covered by the vast majority of computer hardware. But once you get to large scale web browsing and gaming - more complex hardware is required.
The difference between a hermit living content in a cave and a billionaire in a 42-bedroom mansion: both could be happy, but it really depends on their perspective.
Can relate: a while ago (around 2016) we bought 7 second-hand dual Opteron 8439 desktops (12 cores! no HT) with 64GiB of RAM in them, added PCIe NVMe adapters, for about ~850 EUR (including 21% VAT) a piece.
We have been using those systems since then and they are really amazing performers (Linux and BSD, many Java apps, JetBrains IDEs, etc.). This is totally down to how efficient and minimal modern Linux and BSD distros can be, by the way, even with a modern desktop environment like GNOME 43!
The only downside is the power draw. They idle at 180 watts, S3 sleep is still 150 watts, and on BSD idle is 240 watts! This wasn't too much of a problem before the whole energy crisis, because electricity was quite cheap (around €0.19 per kWh). But with prices touching €0.80 per kWh, it becomes untenable quickly.
We currently Wake-on-LAN these boxes only when we need them and they auto turn off after 1 hour of idle.
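A quick sanity check on why that becomes untenable, plugging in the 180 W idle figure and the crisis-era price quoted above:

```python
# Yearly cost of one box idling 24/7, using the figures from the comment.
idle_kw = 0.180               # 180 W idle under Linux
price_eur_per_kwh = 0.80      # the crisis-era price mentioned above

kwh_per_year = idle_kw * 24 * 365
annual_cost_eur = kwh_per_year * price_eur_per_kwh
print(f"~{annual_cost_eur:.0f} EUR per year per machine")
```

That's well over a thousand euros per year per machine just for idling, which makes the Wake-on-LAN-plus-auto-shutdown approach an obvious win.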
Dell Optiplex 3020 w/ i5 but running FreeBSD to build prototypes. Runs alright. I have used OpenBSD on a Pentium box for similar projects before and the support was good too.
So the *BSDs are now among my considerations when it comes to getting use out of older machines.
Speaking of good-enough computers, I'm really happy with the power of mobile processors right now. With my usage (dev, some Docker, browsing, on a light Manjaro+i3), I can shop for form factor and utility and know it won't feel sluggish. I replaced my 2017 15" Dell workstation with an 8" GPD Pocket 3 that feels snappier and sharper. I thought it would be my couch/on-the-go/emergency computer, but it became my mobile workstation as well. Only gripe with the form factor: it's too small to sit on both laps, so I have to use my bag to have a "laptop".
I couldn't agree more. I've been using OpenBSD on a Raspberry Pi 4 B for the past few months as an nginx, uwsgi, dnsmasq, FTP (for 4x security cameras) and Squid proxy server. This replaced a Debian install that kept annoying me with its systemd/NetworkManager crap. Not only am I impressed with the performance; the simplicity, and the confidence I get in its security, are a breath of fresh air.
There's not much that's specific to OpenBSD with respect to this post, or to the G4. The RAM use (61MiB) is roughly in line with a super-lightweight Linux install for any 32-bit system (requirements for a modern 64-bit system tend to be higher). So any computer really is good enough if it fits your end use; otherwise it's no good.
Dealing with some hardware is a lot easier on OpenBSD, at least for me. I spent the better part of a day on Debian trying to get ofono and connman to play well together after network-manager and modemmanager (side note: really?) failed to get my LTE modem to connect. In OpenBSD it was just
The sad thing that trips me up is that I can easily run out of memory web browsing on most machines with less than 16GB just by forgetting to close tabs. But my desktop has a CPU from over a decade ago, and I don't really foresee needing a new one soon.
> While I fully acknowledge there's a lot you can't do on a system like this (Docker, anything GPU related) it's still able to do a lot of productive work without feeling like a total dog to use.
How so? Celeron, 2GB of RAM, ~4GB with compressed zram. Most stuff works with GL 2.1, even 720p video with mpv. LibreOffice, Ted, AbiWord, Gnumeric...
For the rest, uBlock Origin is your friend, and git://bitreich.org/privacy-haters cuts the realistic Chromium requirements in half.
Avoid Brave. It's spyware. If anything, use Ungoogled-Chromium with the settings from:
git clone git://bitreich.org/privacy-haters
Copy the file in the chromium subdirectory to /etc/profile.d/chromium.sh and chmod +x it. Change CHROMIUM_FLAGS to UNGOOGLED_CHROMIUM_FLAGS, or whatever flag shows up in the /usr/bin/ungoogled-chromium script.
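Put together, and assuming the repo layout described above (the exact file name inside the chromium subdir is a guess), the steps would be something like:

```shell
# Sketch only: repo layout and file names inside it are assumptions.
git clone git://bitreich.org/privacy-haters
cp privacy-haters/chromium/chromium.sh /etc/profile.d/chromium.sh
chmod +x /etc/profile.d/chromium.sh
# For ungoogled-chromium, rename the variable to whatever its
# /usr/bin/ungoogled-chromium wrapper script actually reads:
sed -i 's/CHROMIUM_FLAGS/UNGOOGLED_CHROMIUM_FLAGS/' /etc/profile.d/chromium.sh
```

Verify the variable name against your distro's wrapper script before logging out and back in.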
That and UBlock Origin with the country specific ad blocking rules will cut down a good bunch of crap.
The GitHub repo I posted has sane defaults for both Chromium-based browsers (adapt the environment variable for the custom fork; you have it at the /usr/bin/chrom file) and Mozilla under *nix. If something as "clean" as Mozilla has a good chunk to be tweaked, just imagine the amount of telemetry Brave has.
I lived through the '90s and '00s in computing. I know the spyware and adware terms pretty well. And Brave is spyware, period.
They recently upgraded FFS to 64-bit. I think they have the right philosophy: the filesystems you mention are FAAATTTTT and would probably at least double the size of the tree. You can always set up a NAS with ZFS and simply use it over NFS.
There are lots of people in the comments here who appear to not get it. This isn't about replacing a contemporary computer with something old. It's about the usability of a decent OS on very modest hardware.
One of the systems on which I run NetBSD is a 33 MHz m68030 Mac LC III+ (http://elsie.zia.io - it's hosting a site about an LC II, which I'm still working on). It's quite useful to see how assumptions people make about "acceptable" performance regressions bear out in the real world. Sure, not much can be done about taking six or seven minutes to ssh, but when bad coding causes a shell script to take twice as long, it's much more obvious on hardware like this than on a modern Ryzen system.
Nobody is telling others to forego your modern computer and your Windows needs. It's just interesting to some of us that we can still run modern things on very modest and, in some cases, very non-mainstream hardware in 2023.
Not nearly as low-end, but I ran a server off of a P3 450MHz for over a decade, then sold it to someone who was going to continue using it for a similar purpose. 4 of those years were in a college dorm room with no A/C.
The CPU fan died at some point, and I didn't notice until I opened it up to change a hard-drive. If it's not still running it's probably due to an exploded capacitor, because that thing was indestructible.
Excellent point. Software testing (not for stability but instead for usability, and certainly not for compiling, lol) ought to be done on minimal-spec systems (the specs of which might possibly be dialed-down further once optimization is done, which would be a bonus). Assuming that a faster system (or one with more RAM or storage) will cure the real problem has allowed all sorts of sloppiness to creep into software.
NetBSD is still routinely tested on 486s and old Vaxen, and developed on ARM SBCs or Pinebooks. It helps keep the code base light and efficient.
The latest minimal kernel will still boot on 4MB of memory on 32 bits machines, although it will take more than that to be usable.
Unfortunately, NetBSD can only do so much considering the sad state of the Linux ecosystem. GCC has become so big it won't self-host on most platforms, making cross-compilation mandatory for them. Switching to PCC would be a solution, but a lot of software won't compile with PCC, and obviously their maintainers couldn't care less.
Then you have the newer behemoths like Rust and Go. Once they start creeping into major projects like GCC or OpenSSL, a lot of machines will turn into e-waste, and the resource requirements will climb even further.
We once heard a story about how someone optimized their game engine for a crappy old netbook and ended up with some tens of thousands of frames per second on any vaguely modern system.
It does not. Slow hardware _can_ give you a certain amount of _motivation_ for speeding up your software, but this is by no means a given (if software running slowly on your own computer automatically made you optimize it, there would not be software that is really slow even on fast hardware, and there clearly is).
If you care about making software fast, you invest time in the measurements-ideas-measurements loop, where having fast hardware helps a lot (it allows you get measurements faster, and try out more ideas in a given time frame). Since any sane measurement is based on benchmarking and not eyeballing, slower CPUs don't help you. And you certainly can care about fast software without having a slow computer personally.
>if software running slowly on your own computer automatically made you optimize it
I recently changed 1 byte in Chrome.dll and decreased the lag for copying large images by a factor of 2-3. (The UI freezes for 5-10 seconds. Turns out it's re-encoding a 2MB JPG into a 20MB PNG for the clipboard, at compression level 6 no less!)
I put up with it for years thinking there was nothing I could do, but one day I decided to ask some people smarter than me, and they gave me enough info to track down the code in IDA (I had never used it before).
My point is, motivation is a powerful thing!
(Offtopic but turns out Firefox copies the original file, so you can paste it right into Windows explorer—blew my mind!)
An improvement would be to replace EncodeBGRASkBitmap with FastEncodeBGRASkBitmap so that it uses the fastest zlib level instead of the default (6). This is what the Linux X11 (ozone) implementation uses. This would be an easy fix to upstream if someone wants to do it.
Yeah, like you said they literally already wrote the code that does that, not sure why they didn't use it. Setting compression to 0 is still 2 full seconds of lag for me (down from 8s), so there's still a lot of overhead. There are faster png implementations but I think re-encoding a 2MB JPEG into a 20MB PNG (to be re-encoded back to JPG by whatever program you paste it into) is kind of missing the mark.
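The level tradeoff under discussion is easy to see in miniature with plain zlib (a generic sketch, not Chrome's actual encoder path; real PNG encoding adds row filtering on top of this):

```python
import time
import zlib

# Semi-compressible payload standing in for decoded image data (~10 MB).
data = bytes(range(256)) * 40_000

for level in (0, 1, 6):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"level {level}: {len(out):>10,} bytes in {elapsed_ms:7.1f} ms")
```

Level 0 stores the data uncompressed (fastest, biggest) and level 1 is typically several times faster than the default 6, which is why dropping the level for an ephemeral clipboard encode is such an easy win.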
In the case of OpenBSD itself, the popularity of the PCEngines APU2 has definitely provided motivation for improving the performance of PF.
Also, ironically, people elsewhere in this article thread keep mentioning that older hardware is too power intensive. But the reason the APU2 is still around is because neither Intel nor AMD offer modern chips with a maximum TDP as low as the AMD G series. At maximum load, with all 4 cores, DRAM, and NICs at full throttle, the APU2 only dissipates something like 10W of heat. Modern x86 chips are more power efficient, but their maximum TDPs are just too high. Some of the laptop and embedded series get close at ~15W (not sure if that includes DRAM and NICs), but that's too hot. The APU2 is designed to only require passive cooling, without finned cases, without conditioned air, and still provide years of reliable always-on service.
I see the same issue with higher-end AMD and Intel chips. Under light load their power draw is very low, but under load their chips are often more power hungry than comparable models from 10 years ago. Yes, they get more work done per watt, but there are many situations where I'd rather know that a server will never draw more than, say, 50-60W, even if that means requests end up being throttled. But I don't want to use their crippled low-end chips, either, especially when I can just continue using my 10+ year-old servers that provide better performance. (Theoretically you could manually throttle requests to maintain the power envelope, but I don't want to be burdened managing software configurations.) I prefer the "edge computing" series of chips from Intel and AMD, but over the years their max TDP has slowly crept up. Thankfully there are still a few options that work well, but it's definitely a niche that doesn't see much attention.
Back when Fry's Electronics was still around, they had a sale on old AMD MSi AM1 motherboards and Athlon 5350 quad core 2 GHz CPUs (APUs) for $45 or so for a set. I bought quite a few, and now I regret not buying more. At 100% CPU load they take less than 25 watts measured at the wall, and less than 40 watts with two spinning rust 3.5" disks. They make excellent NAT routers / firewalls / DNS servers, et cetera. They can route / NAT a full gigabit and still have cycles to spare.
The closest thing we have today is getting a quad core Ryzen and running sysctl -w machdep.cpu.frequency.target=1600. Personally, I'd rather have more of those wonderful 2014 motherboards and CPUs.
How about the fact that performance best practices vary on different hardware?
CPUs have changed a lot, in that caches, speculative execution, and parallelism are the way to get performance, and that was not true in the heyday of a classic Mac.
I'm still in favor of porting to old platforms for various reasons. But we need to keep in mind that performance optimization is not one size fits all.
In fact, I found an over-dimensioned workstation to be one of the best tools improving performance of non-trivial programs. There's a lot of performance issues that don't stand out sufficiently on today's common-place enduser hardware, that will be very visible on a current two socket system. And fixing the issues benefits the lower end hardware of today. More often than not, the large issue one can see in large machines today, are issues seen on lower-end hardware in a few years.
It's easy to see using too many cycles on enduser hardware. The effects of memory latency, various forms of cache contention, etc however, not so much.
> Since any sane measurement is based on benchmarking and not eyeballing, slower CPUs don't help you
I do find that somehow, to a non-rational degree, I have a considerably easier time getting a "feeling" for the performance bottlenecks when benchmarking on my workstation, than when benchmarking on a server. Which I guess kinda falls out of proper benchmarking.
I heartily agree. I have an old Asus netbook (castoff from a family member who switched to computing nearly exclusively on her smartphone; the machine originally ran MeeGo Linux on a 8GB internal disk) that runs Debian stable, slowly, but I've been wondering if one of the BSDs might be slightly faster. I'm thinking of making it a kitchen computer primarily to serve as a recipe database.
I think any article on HN is really just a jumping off point for people to talk about themselves. Everyone's had old computers, so that's easier to talk about.
Running sshd on a 33 MHz m68030 (OpenSSH_9.1), and connecting to it from a modern(ish) machine (OpenSSH_8.1p1) takes about six minutes to complete when the system is otherwise idle. Right now it's busy with lots of http requests due to putting the link here, so it's taking about 9 minutes or so ;)
I'd expect a 50 MHz '030 to be faster than a 33 MHz one ;) But running newer ssh and sshd means more expensive default ciphers. I can speed it up a bit by being more deliberate about the choice of cipher, but I want to run things by default, generally speaking.
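For reference, that cipher choice can be pinned per-host on the client side; something like the following in ~/.ssh/config (the algorithm picks are illustrative, and `ssh -Q cipher` lists what a given build actually supports):

```
Host elsie.zia.io
    # Prefer cheaper symmetric crypto for the 33 MHz '030 on the far end.
    Ciphers aes128-ctr
    MACs hmac-sha1
```

The expensive part on such slow hardware is often the key exchange rather than the bulk cipher, so this only shaves off part of the connection time.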