For me personally, I use VSCode + the Vim extension and I find the combo quite perfect. I've been using VSCode pretty much my entire career, and with the Vim extension I can make things like editing feel very "vim like" but still break out and do other things the "VSCode way".
I've been using React for just about a decade at this point, and IMHO the thing that makes it a killer framework isn't any one feature but all of them combined, which makes it possible to move very fast in product development with multiple contributors. It's easy for devs to pick up relative to other frameworks; yes, there are footguns, but they are in the minority, and you figure out the "Rules of React" pretty quickly and learn to avoid them.
Remember folks, we're all talking about our figurative drills and hammers in this thread, and the folks paying us largely don't care and just want to see a working product. React in this analogy is a very quirky toolbox, but it won because that toolbox helps people build the end product faster than most other toolboxes, even if it's filled with some questionable tools.
Last year I purchased a Lenovo P15 Gen 1 used, originally it came with a sticker price of $5700 but I managed to get it used for ~$500. All these hyper expensive laptops fall into one of two buckets, either they are top end gaming rigs or they are like my Lenovo and designed for large engineering companies that will just lease them and not give a crap how much they cost.
For the average consumer though, I highly recommend going on eBay and finding these hyper-expensive laptops used from a few years ago. Mine came with an i9 processor and an RTX 5000, and it can support up to 128GB of RAM. Even 5 years on, those are still wild specs, yet that same computer can be found for maybe 10-15% of the original price.
Though I will say one downside of buying one of these is they are customizable to an insane degree so finding the "right" one might take you a while (took me around a month to find mine).
Another downside is that the seller might install spyware on your machine - had that with a Lenovo too. Ended up buying a brand new Asus that was so heavily discounted it cost the same as the 2nd hand Lenovo I returned.
> Another downside is that the seller might install spyware on your machine
At least in my case it came without a hard drive, so there was no vector for attack there. Sure, they might have installed spyware at the BIOS level, though the practical chance of that happening with a seller that does any sort of volume is lower than your odds of winning the lottery, IMHO.
Sellers (especially volume sellers) just want to ship you your stuff and make a buck off the margin.
Reading the bullet points, I can see it skews a little toward Newsom in the way it frames some things, though that seems to come mostly from its web search. I have to say that beyond that, ChatGPT at least tries to be unbiased and reinforces that only I can make that decision in the end.
Now granted this is about the US Presidential election which I would speculate is probably the most widely reported on election in the world so there are plenty of sources, and based on how it responded I can see how it might draw different conclusions about less reported on elections and just side with whatever side has more content on the internet about it.
Bottom line, the issue I see here is not really an issue with technology, it's more an issue with what I call "public understanding". When Google first came out, tech-savvy folks understood how it worked but the common person did not, which led some people to think that Google could give you all the answers you needed. As time went on that understanding trickled down to the everyday person, and now we're at a time where there is a wide "public understanding" of how Google works, and thus we don't get similar articles about "Don't google who to vote for". What I see now is that AI is currently in that phase where the tech-savvy person knows how it comes up with answers but the average person thinks of it the same way they thought of Google in the early 2000s. We'll eventually get to a place where people don't need to be told what AI is good at and what it's bad at, but we're not there yet.
I'd draw a different conclusion, namely that it's decidedly pro-Republican via omission. There is not a single reference to the current administration's role (which Vance is obviously part of) in dismantling US democratic norms and institutions.
While Gary is very bearish on AI, I think there's some truth to his claims here though I disagree with how he got there. The problem I see with AI and AGI is not so much a technical problem as an economic one.
If we keep on our current trajectory of pouring billions on top of billions into AI, then yes, I think it's plausible that in the next 10-20 years we will have a class of models that are "pseudo-AGI": we may not achieve true AGI, but the models are going to be so good that they could well be considered AGI in many use cases.
But the problem I see is that this will require exponential growth and exponential spending, and the wheels are already starting to catch fire. Currently we see many circular investments, and unfortunately I see that as the beginning of the AI bubble bursting. The root of the issue is simply that these AI companies are spending 10x-100x or more on research than they bring in as revenue: OpenAI is spending ~$300B on AI training and infra while its revenue is ~$12B. At some point the money and patience from investors is going to run out, and that is going to happen long before we reach AGI.
And I have to hand it to Sam Altman and others in the space who made the audacious bet that they could get to AGI before the music stops, but from where I'm standing the song is about to come to an end and AGI is still very much in the future. Once the VC dollars dry up, the timeline for AGI will likely get pushed out another 20-30 years, and that's assuming there aren't other insurmountable technical hurdles along the way.
The incentive structure for managers (and literally everyone up the chain) is to maximize headcount. The more people you manage, the more power you have within the organization.
No one wants to say on their resume, "I manage 5 people, but trust me, with AI, it's like managing 20 people!"
Managers also don't pay people's salaries. The Tech Tools budget is a different budget than People salaries.
Also keep in mind, for any problem space there is an unlimited number of things to do. 20 people working 20% more efficiently won't reach infinity any faster than 10 people.
> The incentive structure for managers (and literally everyone up the chain) is to maximize headcount. More people you managed, the more power you have within the organization
Ding ding ding!
AI can absolutely reduce headcount. It already could 2 years ago, when we were just getting started. At the time I worked at a company that did just that, successfully automating away thousands of jobs that couldn't be automated pre-LLMs. The reason it "worked" was because it was outsourced headcount, so there was very limited political incentive to keep those people if they were replaceable.
The bigger and older the company, the more ossified the structures that want to keep headcount equal, and ideally grow it. This is by far the biggest cause of all these "failed" AI projects. It's super obvious once you notice that jobs that were being outsourced, or done by temp/contracted workers, are being replaced much more rapidly. As is the fact that tech startups are hiring much less than before. Not talking about YC-and-co startups here, those are global exceptions indeed affected a lot by ZIRP and whatnot. I'm talking about the 99.9% of startups that don't get big VC funds.
A lot of the narrative on HN that it isn't happening and AI is all a scam stems, IMO, from reasonable fear.
If you're still not convinced, think about it this way. Before LLMs were a thing, if I asked you what the success rate of software projects at non-tech companies was, what would you have said? 90% failure rate? To my knowledge, the numbers are indeed close. And what's the biggest reason? Almost never "this problem cannot be technically solved". You'd probably name other, more common reasons.
Why would this be any different for AI? Why would those same reasons suddenly disappear? They don't. All the politics, all the enterprise salesmen, the lack of understanding of actual needs, the personal KPIs to hit - they're all still there. And the politics are even worse than with trad. enterprise software now that the premise of headcount reduction looms larger than ever.
Yes, and it’s instructive to see how automation has reduced head count in oil and gas majors. The reduction comes when there’s a shock financially or economically and layoffs are needed for survival. Until then, head count will be stable.
Trucks in the oil sands can already operate autonomously in controlled mining sites, but wide adoption is happening slowly, waiting for driver turnover and equipment replacement cycles.
> The bigger and older the company, the more ossified the structures are that have a want to keep headcount equal, and ideally grow it.
I don't know, most of the companies doing regular layoffs whenever they can get away with it are pretty big and old. Be it in tech - IBM/Meta/Google/Microsoft - or in physical things - car manufacturers, shipyards, etc.
Through top-down, hard mandates directly by the exec level, absolutely! They're an unstoppable force, beating those incentives.
The execs aren't the ones directly choosing, overseeing and implementing these AI efforts - or in the preceding decades, the software efforts. 9 out of 10 times, they know very little about the details. They may "spearhead" it insofar as that's possible, but there are tonnes of layers in between, each with their own incentives, that are required to cooperate to actually make it work.
If the execs say "Whole office full-time RTO from next month 5 days a week", they really don't depend on those layers at all, as it's suicide for anyone to just ignore it or even fake it.
Did you not see the backlash the Duolingo CEO got and how hard he backtracked? Coming out and saying "We're replacing a big bunch of people with LLMs" is about the worst PR you can get in 2025; it's really an awful idea for anyone but maybe pure B2B companies that are barely hanging on and super desperate for investor cash.
This was a big, traditional non-tech company.
Also as implied, these were cheap offshore contracting jobs being replaced. Still magnitudes more expensive than LLMs, making it very "worth it" from a company perspective. But not prime earnings call material.
Everyone in the industry also knows that it's not particularly unique, far away from something no one has been able to do. Go look at the job markets for translation, data entry, customer support compared to 2 years ago. And as mentioned, even junior web devs.
Maybe 40 years ago or in some cultures, but I've always focused on $ / person. If we have a smaller team that can generate $2M in ARR per developer that's far superior to $200K. The problem is once you have 20 people doing the job nobody thinks it's possible to do it with 10. You're right that "there is an unlimited number of things to do" and there's really obvious things that must be done and must not be done, but the majority IME are should or could be done, and in every org I've experienced it's a challenge to constrain the # of parallel initiatives, which is the necessary first step to reducing active headcount.
We use AI (LLMs) to improve the recall and precision of our classification models for content moderation. Our human moderators can only process so many items per day, at a high cost.
The LLMs act as a pre-filter, auto-approving or auto-rejecting items before they get to the humans for review.
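Roughly, the shape of that pre-filter looks like this (a minimal sketch only; call_llm and the confidence thresholds are placeholders for whatever model, API and cutoffs are actually in use, not the poster's real code):

    from dataclasses import dataclass

    # Illustrative thresholds (assumed, not from the comment above):
    # only act automatically when the model is very confident.
    APPROVE_THRESHOLD = 0.95
    REJECT_THRESHOLD = 0.95

    @dataclass
    class Decision:
        action: str        # "approve", "reject", or "human_review"
        confidence: float

    def call_llm(item_text: str) -> tuple[str, float]:
        """Placeholder for the LLM call: returns (label, confidence),
        where label is "ok" or "violates"."""
        raise NotImplementedError

    def prefilter(item_text: str) -> Decision:
        label, confidence = call_llm(item_text)
        if label == "ok" and confidence >= APPROVE_THRESHOLD:
            return Decision("approve", confidence)
        if label == "violates" and confidence >= REJECT_THRESHOLD:
            return Decision("reject", confidence)
        # Anything the model isn't sure about still goes to the moderators.
        return Decision("human_review", confidence)

The win is in volume: the humans only see the ambiguous middle of the queue, which is where the per-item cost actually matters.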
I don't mean to be dismissive and crappy right out of the gate with that question; I'm merely drawing on my experience with AI and the broader trends I see emerging: AI is leveraged when you need knowledge products for the sake of having products, not when they're actually for anything in particular. I've noticed a very strange phenomenon where middle managers will generate long, meandering report emails to communicate what is, frankly, not complicated or terribly deep information, and send them to other people, who then paradoxically use AI to summarize those emails, likely into something quite similar to what was prompted to generate them in the first place.
I've also noticed it being leveraged heavily in spaces where a product existing, like a news release, article, social media post, etc. is in itself the point, and the quality of it is a highly secondary notion.
This has led me to conclude that AI is best leveraged in cases where nobody, including the creator of a given thing, really... cares much what the thing is, whether it's good, or whether it does its job well. It exists because it should exist, and its existence performs the function far more than anything to do with the actual thing that exists.
And in my organization at least, our "cultural opinion" on such things would be... well, if nobody cares what it says, and nobody is actually reading it... then why the hell are we generating it and then summarizing it? Just skip the whole damn thing, send a short, list-style email of what needs communicating, and be done.
He's either lying or hard-selling. The company in his profile, "neofactory.ai", says they "will build our first production line in Dallas, TX in Q3." Well, we just entered Q4, so that didn't happen. Beyond that, it has no mentions online and the website is just a "contact us" form.
The anthropologist David Graeber wrote a book called "Bullshit Jobs" that explored the subject. It shouldn't be surprising that a prodigious bullshit generator could find a use in those roles.
I am still of the conviction that "reducing employee head count" with AI should start at the top of the org chart. The current iterations of AI already talk like the C-suites, and deliver approximately the same value. It would provide additional benefits, in that AIs refuse to do unethical things and generally reason acceptably well. The cost cutting would be immense!
I am not kidding. In any large corps, the decision makers refuse to take any risks, show no creativity, move as a flock with other orgs, and stay middle-of-the-road, boring, beige khaki. The current AIs are perfect for this.
> I am still of the conviction that "reducing employee head count" with AI should start at the top of the org chart. The current iterations of AI already talk like the C-suites
That is exactly what it can't do. We need someone to hold liable for key decisions.
Right, because one really widely-known fact about CEOs is that whenever anything goes wrong at a company, they take the full blame, and if it's criminal, they go to jail!
Can it turn simple yes-or-no questions, or "hey who's the person I need to ask about X?" into scheduled phone calls that inexplicably invite two or three other people as an excuse to fill up its calendar so it looks very busy?
It's not the top IME, but the big fat middle of the org chart (company age seems to mirror physical age, maybe?) where middle to senior managers can hide out, deliver little demonstrable value, and ride with the tides. Some of these people are far better at surfing the waves than they are at performing the tasks of their job title, and they will outlast you - both your political skills and your tolerance for BS.
> In any large corps, the decision makers refuse to take any risks, show no creativity, move as a flock with other orgs, and stay middle-of-the-road, boring, beige khaki.
It's hard to take this sentiment seriously from a source that doesn't have direct experience with the c-suite. The average person only gets to see the "public relations" view of the c-suite (mostly the CEO) so I can certainly see why a "LLM based mouthpiece" might be better.
The c-suite is involved in thousands of decisions that 90% of the rest of the world is not privy to.
FWIW - As a consumer, I'm highly critical of the robotic-like external personas the c-suite take on so I can appreciate the sentiment, but it's simply not rooted in any real experience.
> AI in its current state will likely not replace any workers.
This is a puzzling assertion to me. Hasn’t even the cheapest Copilot subscription arguably replaced most of the headcount that we used to have of junior new-grad developers? And the Zendesks of the world have been selling AI products for years now that reduce L1 support headcount, and quite effectively too since the main job of L1 support is/was shooting people links to FAQs or KB articles or asking them to try restarting their computer.
> Pretty soon we will have articles like "That time that CEO's thought that AI could replace workers".
Yup, it's just the latest management fad. Remember Six Sigma? Or Agile (in its full-blown cultish form; some aspects can be mildly useful)? Or matrix management? Business leaders, as a class, seem almost uniquely susceptible to fads. There is always _some_ magic which is going to radically increase productivity, if everyone just believes hard enough.
I was working with a team on a pretty simple AI solution we were adding to our larger product. Every time we talked to someone, we were telling them "still need a human to validate this..."
I mean, nah, we've seen enough of these cycles to know exactly how this will end... with a sigh and a whimper and the Next Big Thing taking the spotlight. After all, where are all the articles about "that time that CEOs thought blockchain could replace databases", etc.?
One thing I always do: if a car is stopped at an intersection making a right turn while I'm in the crosswalk, I look at the driver and where they are looking. Often what I see is that the driver will just check that the road is clear, never check that the sidewalk is clear, and go. I can count maybe 2-3 occasions where, had I not done this, I would have been run over.
This was one thing not talked about in the article: drivers in the US are not used to pedestrians outside of major cities like Boston, NYC, etc. I've seen drivers blow past me while I was in the crosswalk, rushing to make a right turn, looking bewildered that someone was actually using the crosswalk.
I was just in Montpelier, VT yesterday, which has a population of just 8000 people, but as the state capital enjoys a busy downtown with a lot of activity. The moment a pedestrian approaches a non-signal crosswalk, traffic in both directions immediately stops to allow them to cross.
Not sure why the people in Vermont have all worked this out, but they do.
Drivers in Hawaii have taken this to an extreme level and will stop in the middle of the road to let pedestrians or other cars go ahead of them even when they have the right of way. And they throw the shaka when they do it.
Question: are there any good resources out there for leading those who are neurodivergent? I haven't led anyone who's neurodivergent yet, but it's something I think a lot about.