I think it's more likely that this comes from a place of laziness than from some enlightened peak. (I say this as someone who does the same, and is lazy.)
When I watch the work of coworkers or friends who have gone down these rabbit holes of customization, I always learn about some interesting new tools - lately I've added atuin, fzf, and a few others to my Linux install.
I went through a similar cycle. Going back to simplicity wasn't about laziness for me; it was because I started working across a bunch more systems and didn't want to do my whole custom setup on all of them, especially ephemeral stuff like containers allocated on a cluster for a single job. So rather than using my fancy setup sometimes and fumbling through the defaults at other times, I just got used to operating more efficiently with the defaults.
You can apply your dotfiles to servers you SSH into rather easily. I'm not sure what your workflow is like, but frameworks like zsh4humans have this built in, and there are tools like sshrc that handle it as well. Just automate the sync on SSH connection. This also applies to containers if you SSH into them.
Do you have experience with these tools? Some, such as sshrc, only apply temporarily per session and don't persist or affect other users. I keep plain 'ssh' separate from shell functions that apply my dotfiles and use each where appropriate. You can also set up temporary application yourself pretty easily.
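For example, a rough sketch of the temporary, per-session idea (sshme and ~/.bashrc.remote are made-up names, and sshrc itself works differently - this is just the general shape):

    sshme() {
      local host="$1"
      local tmp
      tmp=$(ssh "$host" mktemp)               # temp file on the remote side
      scp -q ~/.bashrc.remote "$host:$tmp"    # copy a trimmed-down rc for this session only
      ssh -t "$host" "bash --rcfile $tmp -i; rm -f $tmp"
    }

Nothing lands in the remote account's real dotfiles and the temp file is removed when the session ends, so other users of that account aren't affected. (Worth enabling ControlMaster so the extra connections don't each re-authenticate.)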
Sometimes we need to use service accounts, so while you do have your own account, all the interesting things happen in svc_foo, to which you cannot add your dotfiles.
You said you were already using someone else's environment.
You can't then turn around and say that you don't.
Whether or not shell access makes sense depends on what you are doing, but a well written application server running in a cloud environment doesn't need any remote shell account.
It's just that approximately zero typical monolithic web applications meet that level of quality, and given that 90% of "developers" are clueless, they can often convince management that being stupid is OK.
They do get to work on someone else's server, but they do not get a separate account on that server. Their client would not be happy to have them mess around with the environment.
They specifically mentioned service accounts. If they're given a user account to log in as, they still might have to get into and use the service account, and its environment, from there. If the whole purpose was to get into the service account, and the service account is already set up for remote debugging, then the client might prefer to skip the creation of the practically useless user account.
Could you help me understand what assumptions about the access method you have in place that make this seem unprofessional?
Let's assume they need access to the full service account environment for the work, which means they need to log in or run commands as the service account.
This is a bit outside my domain, so this is a genuine question. I've worked on single user and embedded systems where this isn't possible, so I find the "unprofessional" statement very naive.
If, in the year 2025, you are still using a shared account called "root" (password: "password"), and it's not a hardware switch or something (and even they support user accounts these days), I'm sorry, but you need to do better. If you're the vendor, you need to do better; if you're the client, you need to make it an issue with the vendor and tell them they need to do better. I know, it's easy for me to say from the safety of my armchair at 127.0.0.1. I've got some friends in IT doing support who have some truly horrifying stories. But holy shit, why does some stuff still suck so fucking much? Sorry, I'm not mad at you or calling you names; it's the state of the industry. If there were more pushback on broken, busted-ass shit where this would be a problem, I could sleep better at night, knowing that there's somebody else who isn't being tortured.
The defaults are unbearable. I prefer using chezmoi to feel at home anywhere. There's no reason I can't at least have my aliases.
I'd rather take the pain of writing scripts to automate this for multiple environments than suffer the death by a thousand cuts which are the defaults.
chezmoi is the right direction, but I don't want to have to install something on the other server. I should just be able to SSH to a new place and have everything already set up, via LocalCommand and Host * in my ~/.ssh/config.
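Something like this rough sketch (untested; the ~/.dotfiles path and the rsync flags are just placeholders, and LocalCommand only fires when PermitLocalCommand is enabled):

    # ~/.ssh/config
    Host *
      PermitLocalCommand yes
      # Push a small set of dotfiles after connecting. The inner ssh disables
      # LocalCommand so the copy doesn't trigger itself recursively.
      LocalCommand rsync -a -e "ssh -o PermitLocalCommand=no" ~/.dotfiles/ %r@%h: >/dev/null 2>&1 &

The caveat is that this pushes files on every connection and assumes rsync exists on both ends, which may or may not be acceptable for ephemeral containers.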
I gave it a try a few months ago, but it did not work for me. My main issue is that atuin broke my workflow with fzf (if I remember correctly, pressing Ctrl+R to look up my shell history did not work well after installing atuin).
I can totally see wanting to automate your life like this for work - "re-order that shipment from last week" or "bump my flight a day". But using this for personal stuff, it does seem like a slide towards just living a totally automated life.
I think there’s always a danger of these foundational model companies doing RLHF on non-expert users, and this feels like a case of that.
The AIs in general feel really focused on making the user happy - your example is one, and another is how they love adding emojis to stdout and over-commenting simple code.
With RLVR, the LLM is trained to pursue "verified rewards." On coding tasks, the reward is usually something like the percentage of passing tests.
Let's say you have some code that iterates over a set of files and does processing on them. The way a normal dev would write it, an exception in that code would crash the entire program. If you swallow and log the exception, however, you can continue processing the remaining files. This is an easy way to get "number of files successfully processed" up, without actually making your code any better.
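A contrived Python sketch of what that looks like (process_file here is a made-up stand-in for the real work):

    import logging

    def process_file(path):                 # stand-in for whatever real processing happens
        return open(path, "rb").read()

    def process_all(paths):
        results = []
        for path in paths:
            try:
                results.append(process_file(path))
            except Exception as exc:        # swallows everything, even genuine bugs
                logging.warning("skipping %s: %s", path, exc)
        return results

A missing file and a TypeError caused by a bug look identical here, which is exactly the problem.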
> This is an easy way to get "number of files successfully processed" up, without actually making your code any better.
Well, it depends a bit on what your goal is.
Sometimes the user wants to, e.g., back up as many files as possible from a failing hard drive, and doesn't want to fail the whole process just because one item is broken.
You're right, but the way to achieve this is to allow the error to propagate at the file level, then catch it one function above and continue to the next file.
However, LLM-generated code will often, at least in my experience, avoid raising any errors at all, in every case. This is undesirable, because some errors should result in a complete failure - for example, errors which are not transient or environment-related but a bug. And in any case, a LLM will prefer turning these single file errors into warnings, though the way I see it, they are errors. They just don't need to abort the process, but they are errors nonetheless.
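A rough sketch of the distinction (made-up names again): let the per-file error propagate and catch it one level up, but only the error types you actually expect from a bad file, so real bugs still abort:

    import logging

    def process_file(path):                       # stand-in for the real work
        return open(path, "rb").read()

    def process_all(paths):
        processed, failed = [], []
        for path in paths:
            try:
                processed.append(process_file(path))
            except OSError as exc:                # expected: missing/unreadable file
                logging.error("failed on %s: %s", path, exc)
                failed.append(path)
            # anything else (TypeError, KeyError, ...) propagates and aborts,
            # because it's a bug, not a bad input
        return processed, failed

The per-file failures are still logged as errors, not warnings, and the caller gets the failed list back so it can decide whether that's acceptable.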
> And in any case, a LLM will prefer turning these single file errors into warnings, though the way I see it, they are errors.
Well, in general they are something that the caller should have the opportunity to deal with.
In some cases, aborting back to the caller at the first problem is the best course of action. In some other cases, going forward and taking note of the problems is best.
In some systems, you might even want to tell the caller about failures (and successes) as they occur, instead of waiting until the end.
It's all very similar to the different options people have available when their boss sends them on an errand and something goes wrong. A good underling uses their best judgement to pick the right way to cope with problems; but computer programs don't have that, so we need to be explicit.
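For the "report as they occur" case, one sketch (hypothetical names, Python) is to yield per-item outcomes and let the caller choose the policy:

    def process_file(path):                    # stand-in for the real work
        return open(path, "rb").read()

    def process_all(paths):
        # Yield (path, result, error) as each file is handled; the caller
        # decides whether to abort, collect failures, or report progress.
        for path in paths:
            try:
                yield path, process_file(path), None
            except OSError as exc:
                yield path, None, exc

    # A caller that wants fail-fast behaviour:
    # for path, result, err in process_all(paths):
    #     if err is not None:
    #         raise err

Same processing code, different failure policies, all chosen by the caller.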
And more advanced users are more likely to opt out of training on their data. Google gets around it with a free API period where you can't opt out, and I think others did some of that too, through partnerships with tool companies, but I'm not sure if you can ever opt out there.
They do seem to leave otherwise useless comments for themselves. E.g., on the level of:
    // Return the result
    return result;
I find this quite frustrating when reading/reviewing code generated by AI, but have started to appreciate that it does make subsequent changes by LLMs work better.
It makes me wonder if we'll end up in a place where IDEs hide comments by default (similar to how imports are often collapsed by default/automatically managed), or introduce some way of distinguishing between a more valuable human written comment and LLM boilerplate comments.
The question I have when reading this is: why did this work for you when other exercise routines didn't? It sounds like you at least tried the gym before and couldn't stick with it - so why did this one stick?
To me it seems a lot of healthy people end up not needing discipline because they find healthy things they enjoy and want to do.
Like I wondered if someone copying this would be better off targeting 1000 air squats instead. But maybe that’s not as “cool” and wouldn’t have brought as much intrinsic motivation.
I have no idea why this worked other than I really took to the process of doing the exercises and then logging it all. Once I had a little bit of data I started writing Google Sheets formulas, creating charts, etc and it suddenly became fun. Then when I did get into shape it became a game of beating my previous 5K and 10K times. Lately every few days I go outside and run hard to beat my last time (currently PR is 28:10). I would have smashed this time a few days ago but about 2 miles in I suddenly had a terrible calf cramp that took a few days to get past. Not going to tempt fate again until after completing the Columbus 1/2 marathon on October 19.
> Lately every few days I go outside and run hard to beat my last time (currently PR is 28:10). [...] a few days ago [...] I suddenly had a terrible calf cramp that took a few days to get past.
As someone with horrible back pain issues after a very intense block of training for a 1/2 marathon at the beginning of this year, I do hope you'll reconsider that first part of the quote above, since it's probably one of the causes of the latter. Took me a while to internalize the "run slow to run fast", but it does make a huge difference for injury prevention.
Not putting words in your mouth, but your 10k mark I think coincided with my viewpoint for my own fitness journey. You did it in your living room. The gym is a whole thing: getting the gear together, the water bottle, the driving, etc. etc. Then when you're exhausted and tired you have to drive back or walk back or whatever. Doing push-ups in your living room has no barrier to entry or exit. You do them when you want to, you stop when you want to, and you're back on the couch watching TV to recover within seconds. You can do it in hotel rooms, late at night, 5am, whatever. For me, that's the benefit and why such exercises work when the gym doesn't. Maybe some of your success is from a similar vein, maybe not.
100% yes! Everything you describe is spot on with my experience. It took me a long, long time to realize that in order to get into ridiculous shape I need 1) the floor and 2) running shoes.
You don't need a water bottle at the gym: they have a fountain. And if you're too exhausted after a gym workout to drive home then you're really doing something wrong.
The biggest risk they face is perhaps competition from unskilled workers who can do trades by just wearing Meta AR glasses and following instructions from an AI.
Of course you'd still need training on how to work with your hands, but it would cut down on the need for years of experience and planning.
I imagine it's incredibly useful for prototyping movies, tv, commercials before going to the final version. CGI will probably get way cheaper too with some hybrid approach.
Obviously this will get used for a lot of evil or bad things as well.
I feel like that's missing the point of pre-vis anyway: its purpose is to lay down key details with precision but without regard for fidelity (e.g. https://youtu.be/KMMeHPGV5VE). A system with high fidelity but very loose control is the exact opposite of what they want.
Yeah, it seems like the essence of this deal is that Nvidia is selling their chips in exchange for stock in AI companies. Makes a ton of sense: Nvidia has more cash than it needs, and it's a great investment.
I wonder if OpenAI starts getting looked at for antitrust at some point. They now have significant ownership by Microsoft and Nvidia, 2 major parts of the AI supply chain.
I'm not really sure how your comment would disprove the story. It seems that it's unclear if the Dead Hand was even active at this time, and also it probably triggers when impact occurs rather than on satellite detection.
Meanwhile this case was a false satellite detection which, if reported, may have caused a retaliatory launch of nukes.