Hacker News | djoldman's comments

Running a mile burns as many calories as are in 2 or 3 Oreos.

That fact is about all anyone should need to conclude that weight loss is dominated by calorie intake.
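A back-of-envelope check of that claim (the figures below are my own approximate assumptions, not from the comment: roughly 100 kcal burned per mile for a ~155 lb runner, and about 53 kcal per Oreo):

```python
# Rough arithmetic behind the "2 or 3 Oreos" claim.
# Both figures are approximate assumptions, not exact values.
kcal_per_mile = 100  # typical cost of running one mile for a ~155 lb adult
kcal_per_oreo = 53   # nutrition-label figure for one standard Oreo

oreos_per_mile = kcal_per_mile / kcal_per_oreo
print(round(oreos_per_mile, 1))  # prints 1.9, i.e. about two Oreos
```

A heavier runner burns more per mile, which is where the "or 3" comes from; either way, the mismatch between exercise and intake is stark.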


I can't wait to read the transcript of the AUSA in front of a federal judge, trying to explain why the government threatened to declare a company a supply-chain risk if it didn't supply things to the government.

As an aside, why is it not a law that the government can't pay another entity to do something it's not allowed to do itself, without a warrant? I'm thinking about geo data from mobile apps.

Because the US has been corrupted for quite a long time now; we just liked to bury our heads in the sand and pretend otherwise, because it hadn't bitten us in the ass too hard until now. There is no such thing as the spirit of the law; it has no useful meaning in US law. Loopholes and oversights in legislation and rulings are not seen as a bad thing; they are seen as desirable because they let us be corrupt legally, and in many cases earn courts and cops and lawyers a hefty profit off the backs of the citizenry.

It's due to the third-party doctrine, a Supreme Court precedent:

https://en.wikipedia.org/wiki/Third-party_doctrine


which has been warped out of any comprehensible reality. It hinges on the idea of 'voluntarily' turning over information. Much of what is now considered information voluntarily turned over isn't even information people know exists, much less information they knowingly turn over, much less do so voluntarily.

It's not voluntary if you don't know about it.

Requiring that someone pay for information isn't, by any common-sense interpretation, voluntary.

The courts have lost their goddamn minds.


The third-party doctrine is why it is allowed when no law restricts it. It does not prevent Congress from passing a law.

And yet, Congress hasn't passed a law to prevent it.

Give up social media; make the man do it the old-fashioned way.

If the government is being too obvious about the fact that the entity in question is nothing more than its puppet, then something can be done about that. Entities that are government entities in everything but name can be considered to be government entities and become subject to all the relevant restrictions. There's some fancy-ass phrase for this, but I can't remember it at the moment.

Also, the third-party doctrine hasn't been adequate for at least the last thirty years, and maybe the last hundred. But authoritarians aren't easily separated from their tools of oppression, so I don't expect to see that cluster of regulations updated to be actually protective within my lifetime.


> why is it not a law that the government can't pay another entity to do something it's not allowed to do itself, without a warrant?

I think the median American favors security over freedom right now. The reality of cable news and now social media is that an unsolved crime is a national anxiety. When we’re whipped into a collective panic like that, it seems outright ridiculous that the cops not be allowed to access anything that could help.


"why is there not a law...?"

If you're just venting with friends, or trying to build cred as a moral philosopher, then yes, obviously there should be such a law.

Vs. if you're talking about cause and effect in the real world... it's kinda like how foxes never pass laws against foxes moonlighting as henhouse guards. Or how Officer Fox, Prosecutor Fox, and Judge Fox somehow don't seem keen on enforcing that law.


Why even should they be allowed to contract an action that they themselves cannot perform - even _with_ a warrant? Is that not still "doing" the action?

Because the government makes the law?

> Mozee went into detail comparing slow concrete curb accessibility work to the faster asphalt street work. Per Mozee, “there’s approximately 14 ramps in a mile.” So for “one crew to build out those 14 ramps will take approximately three months.” In contrast, he said, “a paving crew on a good day … could pave that same mile in a weekend or one week, at most.”

Why don't they asphalt curb to curb for a mile and then come back and do the ramps one at a time?


> Why don't they asphalt curb to curb for a mile and then come back and do the ramps one at a time?

As someone who did a stint in this kind of construction: not possible. You'd still need to re-pave about 30-50 cm worth of road, because curbstones are (usually) set in a bunch of concrete to avoid them getting dislocated by cars hitting or driving over them. The result would be a fault line from which you will get potholes in freeze cycles.

The proper way is to do everything at once, leaving one slab of contiguous asphalt without faultlines.


LA is fortunate in that it doesn't suffer from freeze/thaw cycles and can put down a lot more concrete without worrying about expansion/contraction and water ingress.

I've noticed that a fair amount of concrete sidewalk in Los Angeles appears to have been poured when the neighborhoods were first developed (as in post-WW2) and hasn't been removed or updated since then (at least based on the date/contractor stamps). Again, the lack of freezing weather, wide streets that don't necessitate parking/loading on the sidewalk, and fewer tree roots to uproot/disturb the gutters and sidewalks mean that the original infrastructure is still in use.

More to the point - creating curb cuts is more than just customizing concrete forms. Oftentimes you'll need to regrade the surrounding area to reduce slope, move any in-ground utilities, and revisit any other updates to building codes (such as the bike lane stuff mentioned in the article). Not everything in/under the streets is owned by the same city/county/state/federal department/private org so that further complicates the work.

If only the real estate speculators that settled this swampy valley had considered this stuff in the early 20th century...


> LA is fortunate in that it doesn't suffer from freeze/thaw cycles and can put down a lot more concrete without worrying about expansion/contraction and water ingress.

Freeze/thaw cycles mostly impact the asphalt. Basically, wherever there is a joint that was improperly sealed with tar, or the asphalt cracks due to overload (from heavy vehicles in general, or especially around bus stops, where buses accelerating in the summer stress the softer asphalt), water seeps into the asphalt. When it then freezes, it expands, making the pothole worse with every cycle of thawing and freezing.

That is why it is so important to properly repair potholes. Some youtubers have made themselves infamous by fixing potholes themselves, but they use non-melting ready mix. That works in a pinch to make sure that vehicles don't get damaged, but you will need to rip it out down to the foundation, fill the hole with gravel, compact that, place proper hot asphalt, compact that, fill and compact again, and then seal the edges with tar. Otherwise the ready-mix will disintegrate over time and you'll end up with the original pothole, or an even worse one if you have freeze and thaw cycles.


Worth noting that LA does not have freeze cycles. I wonder what the pothole formation likelihood is as a result.

LA uses asphalt overlays on top of concrete. These have adhesion problems compared to monolithic asphalt over gravel.

Interesting! Is it possible to make the ramps offsite and then fit them into place?

EDIT: I'm assuming the difficulty here is the pedestrian ramps at intersections. NOT the curb that spans the entirety of a road section.


> EDIT: I'm assuming the difficulty here is the pedestrian ramps at intersections. NOT the curb that spans the entirety of a road section.

The curb elements are made offsite. All you do onsite is cut the stones to length if need be.

The challenge is properly anchoring them into the surrounding soil, and for that you need a concrete foundation. Basically, you make a gravel (or concrete) foundation, then you put down the curb element onto a few small pieces of wood, then you make a sort of mold cavity, and then you pour that mold full of concrete. Once that has cured, you put gravel to have an equal height with the road's gravel foundation on the road side and either soil or gravel on the pedestrian side to grade height - gravel if you want to place paving stones for pedestrians, or straight out soil if you want a grass siding.

You can see a few pictures and diagrams on how we do it in Germany here [1].

[1] https://www.beton-info.de/randsteine-setzen/


Ah, that's not how they do it in Chicago.

In Chicago, they concrete form the curb on site.

https://blackhawkpaving.com/wp-content/uploads/2021/03/Concr...


Oh good god. I can see that cracking all the way to Germany. Concrete surfaces need stress relief.

They put expansion gaps at regular intervals.

I don't know what you mean, but I believe we're talking about "wheelchair ramps" at street corners.

some of the laws mandating that type of thing specify "if/when you renovate something, you need to bring it up to code, otherwise you can skate on the code"

this affects a lot of the little tiny shops in NYC. if you change your facade or bathrooms, they need to be made accessible. however, it's not the cost of renovation, it's that accessibility can entail many many square feet of space that is now inaccessible-to-make-any-money-from, making the rent much more unaffordable. so, renovations are still done, but meticulously match what any previous plans on file would look like.


Around here, real estate listings are starting to omit interior pictures because the towns are known to prey on them for "hey, you didn't get a permit for that bathroom reno" type crap.

Y'all need permits for non-structural (i.e. layout/framing stays the same) bathroom renovations? Holy shit.

It's a make work scheme. They don't give a crap. They'll rubber stamp just about anything. It's like $50. They just want to force you to use a licensed trade if applicable.

The trades themselves don't pull permits because it's not about the permission, it's about using them. The towns don't care unless you've cut them out so much that they feel slighted, in which case they'll send you angry letters about violations and demand a million bucks, and you'll hire a $10k lawyer (another licensed trade, lol) who'll get you off for $1k.

Needless to say compliance is pretty low outside of the rich neighborhoods because normal people can't afford to tack $4k of engineering onto a "repave my shitty 2-car driveway" project or a $3k panel upgrade onto a "renovate my 50yo bathroom and add a couple outlets".

It's all shit and should be replaced with a much lower touch system that's cheap enough people can afford to comply with it. But there's so many parties in on the racket that it'll come crashing down before that happens.


Because you need to build a form for concrete, and to build the form after paving means you'd have to cut then patch that new asphalt, which will just end up forming potholes.

I read this section and, along with the surrounding context, it seems like the issue is less having-money-for-materials and more having-money-to-hire-enough-skilled-workers-to-work-on-multiple-ramps-in-parallel. Different budgets, perhaps?

Johns Hopkins University is not a university. Many other "Universities" are not universities either.

"Johns Hopkins Labs" would be a more accurate name as less than 10% of revenue is tuition related.

I'm not sure why folks including professors continue to view these places as primarily about teaching students or academics. These $100-$250 million building projects are pretty inconsequential when research grants and contracts bring in more than $4.5 billion per year.


The "deal" often being made with academia is "we'll give you a place to do research, and even fund your research, but you have to teach the next generation." This isn't a bad deal, and is the reason many scientists give up MUCH larger paychecks that they'd get from the private sector to be a professor. These people would rather do research than have a more directed engineering (or engineering research) role that the private sector would give them.

But that deal has also shifted. Duties have changed and often many of the academics do not get to do much research, instead being managers of grad students who do the research. Being a professor is a lot of work and it is a lot of bureaucratic work.

I'm not sure why you're complaining about researchers. Think about the system for a second. We've trained people for years to be researchers and then... make them managers. Imagine teaching people to program, then once you've decided they're fully trained and good programmers we say "you're free to do all the programming you want! But you have to also teach more programmers, grade their work, create their assignments and tests, mentor the advanced programmers, help them write papers, help them navigate the university system, write grants to ensure you have money for those advanced programmers, help manage your department's organization, and much more." This is even more true for early career academics who don't have tenure[0]. For the majority of professors the time they have to continue doing research (the thing which they elected to train to do! That they spent years honing! That they paid and/or gave up lots of money for!) is nights and weekends. And that's a maybe since the above tasks usually don't fit in a 40hr work week. My manager at a big tech company gets more time to do real programming work than my advisor did during my PhD.

I'd also mention that research has a lot of monetary value. I'm not sure why this is even questioned by some people. Research lays the foundation for all the rest. Sure, a lot of it fails, but is that surprising when you're trying to push the bounds of human knowledge? Yet it is far worth it because there are singular discoveries/inventions that create more economic value than decades worth of the current global economy. It's not hard to recognize that since basically the entire economy is standing on that foundation...

[0] Just because you have tenure doesn't mean you don't have a lab full of graduate students who need to graduate.


>>> The "deal" often being made with academia is "we'll give you a place to do research, and even fund your research, but you have to teach the next generation." This isn't a bad deal, and is the reason many scientists give up MUCH larger paychecks that they'd get from the private sector to be a professor. These people would rather do research than have a more directed engineering (or engineering research) role that the private sector would give them.

That deal mostly means teaching graduate students. Most undergraduate teaching is done by "adjuncts" who do not do research.

Salaries are a mixed bag. Scientists who want to continue doing research in the private sector also give up much larger paychecks. Many work in facilities that are barely nicer than sweatshops.

Disclosure: Adjunct for one semester, 30 years ago.


Regarding the teaching workload: This is not generalizable; during my undergraduate studies a significant fraction (maybe the majority? too long ago to be sure) of my classes were taught by graduate students, especially the math and computer science classes. At the graduate level, your statement was true for me at my second university. In fact, I'm not sure if a graduate student would be allowed even to teach a graduate-level class, considering their credentials.

My experience around universities (as an academic) is that, generally, the number of adjuncts scales linearly with overall funding/skill at grantsmanship in the department. That is, the smaller universities I know saddled professors and their graduate students with substantially more non-research work, including teaching and administration.


At both the universities I went to most classes were taught by the professors. I say most because when I was the TA for my advisor (during my PhD) I taught his class. That said, the students were happier when he didn't show up to class and it was only me.

It definitely depends on the size of the university and the size of classes. As I was graduating a few grad students started becoming the official instructor. These were only the lower level courses though (freshman and sophomore). My partner's department had grad students teaching some classes for longer and they had a similar pattern.

My undergrad was at a small university with essentially no grad students. As far as coursework, I'm confident I got a better education than my peers that went to top schools like Stanford and Berkeley (I did physics). But they got more internships, connection to labs, and connection to research projects. YMMV


I think that's the whole point. Many universities' very nature has shifted significantly, and lots of people don't like it and lament the change.

This has probably been true since at least WW2, but isn't the central idea that professors closest to cutting-edge research can do the most interesting teaching?

If you want the best teachers you can always go to Liberal Arts Colleges where this isn't really an issue.


Professors at schools like this do not view these places as about teaching students. Academics, to include performing research in their field and publishing the results, yes, and the students get in the way of that.

Yes. If you want a really high quality education, you don't go to a big research school. You go to a small school, like a liberal arts school, where the teachers are both highly trained and really passionate about teaching.

I went to a small liberal arts school for an undergrad degree in STEM, and to a R1 research university for graduate work.

The absolute best classes at the big-name research university were about as good as the average class at my small undergrad. The classes at the small school were of distinctly better quality: more engaged teachers, more engaging work, and simply higher quality teaching.


Did you go to an elite (or close to it) liberal arts school? I have gone to only R1 schools myself, but my exposure to liberal arts schools would indicate they are a mixed bag, especially in the sciences (not disagreeing with you or saying that R1 schools aren't also a mixed bag in some/many senses).

Most undergraduates don't realize it, but the purpose of going to an R1 is access to an alumni network and (for the small percentage that are interested) access to people performing cutting edge research in a discipline and their physical resources.

I suspect that honesty in their marketing materials would not increase applications though.


Not the poster you asked, but I think their point stands for (at least many) non-elite liberal arts schools. (Heck, I think it stands for some community colleges, too.) Teachers at those institutions have often attended elite programs, and in any case have self-selected into (primarily) teaching roles, and you'll get a lot of their individual attention, which you wouldn't at a big school.

(For the benefit of students reading this: go to office hours, especially early in the term, even if it's just to shoot the breeze. If you don't, you're cheating yourself out of the main advantage of that institutional model.)

Where your take is correct, and even demands greater emphasis, is the value of the alumni network, and the "name recognition" of a degree from somewhere people, well, recognize. As someone who deeply believes in the value of education for its own sake it pains me to be this cynical, but those are the only things that matter in the world at large.

That's the honest take, which, indeed, no one in higher education will ever put so baldly.

Disclosure: I graduated from, and also spent five years teaching at, a (very) non-elite liberal arts college. The education was good, even great in some programs / from some professors, but the professional advantages were absolutely nil. I will counsel my own son not to attend a similar school (should any of them even survive by the time he gets there; they're by and large on life support right now); even tuition-free it wouldn't be (economically) worth it, and at the actual price it's the worst life decision many of those students will ever make.


i think the only people that realize this are people that are actively doing research in academia. not even the undergrads at the school realize that teaching undergrads is at best a side-hustle for the institution.

i've seen so many "our tuition pays your salary so you need to XXX" type rants from disgruntled students/parents over the years, and i've always bit my tongue when it comes to setting the record straight.


R1 Research University.

Teaching mostly by TA, not Faculty.

Not a "college".


Are you a professor at a R1 school? All the faculty I know at R1s (see CMU, MIT, etc) are doing quite a lot of teaching in addition to their research.

I think he is mostly describing the experience of many a student, who finds themselves, especially in the first few years, in very large classes with minimal interaction with professors. It's not that the professors don't do any teaching, but your first two years probably feel like a scam, especially if there are many general requirements not tied to your major.

TAs soon to be replaced by AI.

Johns Hopkins gets a lot of money from vested interests to push whatever suits them.

Exactly.

The author's electricity bill went up and his cat got stolen in part because his colleagues, working under the university's incentive systems (i.e., don't publish stuff that pisses off the interests that fund your lab), created work that legitimized those policy decisions, so that those decisions could be made and the funding interests, whatever they may be, could benefit from them.

One wonders if there are similar incentives in the university ranking, administration and consulting that legitimize the university's otherwise questionable decision to engage in these seemingly irresponsible ventures.


The early nod to the Agora Institute's mission of "building stronger global democracy," followed by bemoaning USAID cuts, makes me wonder if the author is deliberately missing one of the most glaring examples of this.

How can we have a "stronger global democracy" if we don't currently have "global democracy" to begin with? Democracy suggests it is worldwide, whereas we know a number of countries out there are not democratic, or are barely democratic (due to corruption, war and other issues.)

s/gets/accepts

Nobody is waterboarding the money down their throat. They can say no. The actual question is: why don't they?


"Nobody is waterboarding the money down their throat. They can say no. The actual question is: why don't they?"

Leaving aside that metaphor, the obvious answer is that they either like or need it. Most likely the former, because many of these well known universities are swimming in money already.


Why would they not accept money to do something they are interested in doing?

What is the downside to the school of a nicer student union or a public policy/international relations campus in the nation's capital?


Because that's not what the GP was talking about. For example, say there is some controversial economic policy passed by one of the parties. Then a researcher goes out to research if the policy is working or not. But when they do the research, they find out that the policy doesn't work and has bad side effects too. However, the majority of the university votes and supports the party that passed the policy.

So the researcher intentionally changes some of the ways the data is collected and, poof, it looks like the policy works. Extra funding comes your way, but now you have committed academic fraud. Not that anything will happen to you for this, but still, you know you did it. That's what the GP is talking about, and it happens quite a bit in the humanities and economics. It's why private economists and public economists almost seem like different species.


The GP invented some sort of conspiracy theory that doesn't really seem like it's worth discussing, whether it happens a lot or not in reality.

Whether you believe what he said or not, my questions remain.


I stated facts, I invented nothing. I was asking a question that apparently rubbed you the wrong way, which is great! Makes you think!

I (and I believe the person I responded to) were talking about the comment above yours, which was a statement that Hopkins basically sells control of its research outcomes to donors.

Your question didn't bother me in the least, but I don't see why people are so surprised that a school or any other organization would accept millions and millions of cash to upgrade their surroundings.


That's fair. I'm not surprised per se, I think the point is about the strings attached to accepting that money. At least that's how I've been reading this thread.

That is their point, and mine is that it's baseless speculation that is almost certainly inaccurate, probably originating from a similarly uninformed and angry internal source to the one that produced the article in question.

I'm not saying it can't happen, or even that it's never happened, but I see no evidence from personal experience or news in academia that would indicate it's anything other than extraordinarily rare at most, and it certainly shouldn't be assumed to be the case for all donations unless proven otherwise.


The thing I described happened about 6 months ago.

Can you provide a link, rather than extremely vague accusations?

They are interested in doing some of these things precisely because they are being paid to.

They're interested in a new student union because they're being paid to? What does that mean?

They get the money for facilities etc off someone, and then do their bidding.

People, including university administrators, are generally interested in upgrading their surroundings whether or not they have the means to do so.

When the means are dropped in their lap, people act on those interests.


Yeah, a shiny object gets dangled in front of them. The Drew Pavlou case in Australia is very telling. The University of Queensland was pretty much in the pocket of the CCP, including having the local consul on its board. When Pavlou protested on Chinese human rights issues, he ended up suspended for two years. The UoQ obviously relishes Chinese students and investment, but wouldn't allow criticism of the regime.

I am all for criticizing the Chinese government, but that is not a remotely accurate description of what happened with Pavlou, nor particularly relevant to this article unless you have substantiated claims of that type of behavior at Hopkins (or even elsewhere in the US).

A lot of the previous calculus around refactoring and "rewrite the whole thing in a new language" is out the window now that AI is ubiquitous. Especially in situations where there is an extensive test suite.

Testing has become 10x as important as ever.


For a personal thing, I had AI write some Python libraries to power a CLI. It does simple Excel file filtering, grouping, and aggregating; nothing too fancy. But since it's backed by a library, I can play with different UIs for the same thing, and it's fun to say, "Do it with streamlit." Oh, it can't do this particular thing? Fine, do it with shiny. No? OK, Dash. It takes only about an hour to prototype with a whole new UI library, then I get to say "nah" like a spoiled child. :)
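A minimal sketch of that library-backed pattern (all names here are hypothetical, not from the actual project): the filtering/grouping/aggregating core lives in one UI-agnostic function, so a CLI, streamlit, shiny, or Dash frontend is just a thin wrapper around it.

```python
from collections import defaultdict

def summarize(rows, group_key, value_key):
    """Core library function: drop rows with missing values,
    then group by one column and sum another.

    UI-agnostic on purpose: a CLI, streamlit, or Dash layer
    just calls this and renders the result its own way.
    """
    totals = defaultdict(float)
    for row in rows:
        value = row.get(value_key)
        if value is None:  # the filtering step
            continue
        totals[row[group_key]] += value
    return dict(totals)

# Any frontend only needs this one call:
rows = [
    {"region": "N", "sales": 10},
    {"region": "S", "sales": 20},
    {"region": "N", "sales": None},  # dropped by the filter
]
print(summarize(rows, "region", "sales"))  # {'N': 10.0, 'S': 20.0}
```

Swapping UIs then costs only the wrapper code, which is exactly the cheap-to-regenerate part.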

Well, I am on the provocative side: as AI tooling matures, current programming languages will slowly become irrelevant.

I am already using low code tooling with agents for some projects, in iPaaS products.


> Well, I am on the provocative side that as AI tooling matures current programming languages will slowly become irrelevant.

I have the opposite opinion. As LLMs become ubiquitous and code generation becomes cheap, the choice of language becomes more important.

The problem for me is that with LLMs it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code that gets generated?

I use LLMs at work in Python. They can, and will, easily use hacks upon hacks to get around things.

Thus I maintain that as code generation becomes cheap, it is more important to constrain that code generation.

All of this assume that you care even a tiny bit about what is happening in your code. If you don't, I suppose you can keep banging the LLM to fix that binary blob for you.


> The problem with LLM for me is that it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code that it is going to generate?

As a very practical problem, the assembly would consume the context window like no other. Another is having some static guardrails: sometimes LLMs make mistakes, and without guardrails, debugging some of them becomes quite a big workload.

So to keep things efficient, an LLM would first need to create its own programming language. I think we'll actually see some proposals for a token-effective language that has good abstraction abilities for this exact use.


Let's say years of offshoring projects have helped me reach that opinion.

> As LLM become ubiquitous and code generation becomes cheap, the choice of language becomes more important.

I think changes to languages/tooling to accommodate agentic loops will become important.

> All of this assume that you care even a tiny bit about what is happening in your code. If you don't...

I mean, as software engineers, we most certainly do. I suspect there'll be a new class of "developers" who will have their own way of making software, dealing with bugs, building debugging tools that suit their SDLC etc. LLMs will be to software development what Relativity was to Astrophysics, imo: A fundamental & permanent shift.


I don't agree. For one thing, the language directly impacts things like iteration speed, runtime performance, and portability. For another, there's a trade-off between "verbose, eats context" and "implicit, hard to reason about".

IMO Rust will strike a very strong balance here for LLMs.


Formal specifications and automated testing will beat any language-specific tooling.

Hardly much different than dealing with traditional offshoring projects output.


> Formal specifications and automated testing, will beat any language specific tooling.

I don't understand what you mean. Beat any language at what? Correctness? I don't think that's true at all, but I also don't see how that's relevant, it definitely doesn't address the fact that Rust will virtually always produce faster code than the majority of other languages.

> Hardly much different than dealing with traditional offshoring projects output.

I don't know what you mean here either.


Any tool that can plug into MLIR and use LLVM can potentially produce fast code.

Also, there is the alternative path of executing code via agent orchestration, just like low-code tooling works.

I see you never had the fortune to review code provided by cheap offshoring teams.


> Any tool that can plug into MLIR and use LLVM, can potentically produce fast code.

I guess that's sort of technically true, but not even really? Like, obviously you can compile Python to C and then compile that with clang, but it doesn't make it fast. But even if that were the case, there aren't that many languages that have Rust performance so who cares? "Potentially" is sort of saying we might have a future language that's better, but of course anyone would agree.

> Also there is the alternative path to execute code via agents workestration, just like low code tooling work.

I don't understand how this is relevant.

> I see you never had the fortune to review code provided by cheap offshoring teams.

I just don't understand why you're bringing it up, tbh; I don't see the relevance.


It doesn't need to win the benchmark Olympics; it needs to be fast enough.

Plenty of AI based tooling is already trying out this path.

Agents execute actions that in the past would have been manually programmed applications; now tasks can be automated given a few MCP endpoints.

LLMs are already at the same output quality as lousy offshoring companies, so having to fix a bit of their work is something that, unfortunately, many of us are already used to with fellow humans.


I feel like maybe we're drifting here. You said this:

> Well, I am on the provocative side that as AI tooling matures current programming languages will slowly become irrelevant.

And I said I disagree because language directly impacts things like performance. And it does, massively. Like, order of magnitude differences are not hard to achieve simply by changing language.

You are now saying that things just need to be "fast enough", but I don't get how that's relevant. The point is that a different language will have different tradeoffs, and AI changes some of the calculus there, but language is still a major component of the produced artifact. If you agree that language has major implications on the produced artifact, then we agree. If you don't, then I'll just once again appeal to the massive performance gaps between different languages.

I still don't understand the offshoring conversation.


> And I said I disagree because language directly impacts things like performance. And it does, massively. Like, order of magnitude differences are not hard to achieve simply by changing language.

Only because you focus too much on the frontend instead of the whole compiler infrastructure, with multiple frontends for the same compilation pipeline.

> I still don't understand the offshoring conversation.

Because you've never had to review human-written code from cheap offshoring teams: there's zero difference from LLM-generated code quality, even today.


The frontend to a language is critical to how that language performs. I don't see how you can consider this irrelevant.

If the offshore company provides me a Rust crate that compiles, that is already a strong guarantee. It doesn't solve the logic issues, though, and you still need testing.

But testing in Python is so easy for an LLM to abuse. It will create mocks upon mocks of classes and dynamically patch functions to get things going. It's hell to review.
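To illustrate the failure mode (the names here are invented, not from any real codebase), this is the kind of test an LLM happily generates: the method under test is patched away, so the test "passes" while exercising nothing.

```python
from unittest import mock

# Hypothetical class standing in for real application code.
class PaymentService:
    def charge(self, user_id, amount):
        raise NotImplementedError("would talk to a real payment gateway")

def test_charge_succeeds():
    # The very method under test is replaced by a mock, so this
    # asserts only that the mock behaves like a mock.
    with mock.patch.object(PaymentService, "charge", return_value=True) as m:
        assert PaymentService().charge("u1", 10) is True
        m.assert_called_once_with("u1", 10)

test_charge_succeeds()  # green, yet verifies nothing about charge()
```

The test suite stays green no matter what `charge` actually does, which is exactly why reviewing this style of generated test is so painful.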


What is a programming language used for if not the most formal specification possible? Of course it doesn't matter what language you use if you perfectly describe the behavior of the program. Of course, there's also no point in using LLMs (or outsourcing!) at that point.

I'm already using models to reason about and summarize parts of the code, from programming language to prose. They are good at that. I can see the process becoming English to machine language, and machine language back to English when the human needs to understand. However, another truism is that compilers are a great guardrail against bad generated code; more deterministic guardrails are good for LLMs. So yeah, I'm not there yet where I trust binaries to the statistical text generators.

A lot of programming language preferences are based on the assumption that humans are the ones using them. As soon as it's LLMs using them, much of what motivates those choices becomes less valid.

I've been doing a few projects that are definitely outside my comfort zone with LLMs, and it's fine. I can read the code; I just don't have the muscle memory to produce it.


I would say that current programming languages have a better chance due to the huge amount of code that AI can train on. New languages do not have that leverage. Moreover, current languages have large ecosystems that still matter.

I see the opposite: new languages have more difficulty breaking into popularity due to the lack of existing codebases and ecosystems.


Interesting take, what do you think comes next? A programming language optimized for coding agents?

Kind of, but more in line with the formal specs used in high-integrity computing than with classical programming languages.

For the folks looking at tools to help manage personal and work identities on the same computer: don't.

Never access personal accounts from a work computer or work accounts from a personal computer under any circumstances.

This goes for laptops, desktops, and especially cellphones.

If an employer asks that you violate the above, ask for a dedicated device owned by your employer to access a work account. If they refuse, that's a big red flag. "Oh just use your phone to check your email/slack" - 1. don't assume everyone has a cellphone and 2. if you want folks doing work on a device, pay for it.

Managing multiple personal accounts on computer A or multiple work accounts on computer B is totally fine.

As an aside, company general counsels might be shocked at how often their employees log in to slack/email/etc. from their personal cellphone: suddenly any and all company and customer intellectual property has a way to leave the network. And it's not even a "pull" from the employee as the other employees just "push" them messages.


In some states, employers are required to provide this as an option, at least for some devices: https://www.driversnote.com/blog/state-requirements-cell-pho...

Obviously it should be everywhere, for all tools needed to do your job, but it's especially clear for tech where nearly all large companies will also exert control over the device.


"What's your desired salary?"

"A million an hour, obviously (haha). But in all seriousness, I'd expect to be compensated commensurate with the responsibilities of the role, keeping in mind that the salary number is just one aspect of a compensation package as health insurance and other benefits are important to me."

There are only two reasons HR asks this:

1. possible leverage later in the process.

2. attempting to not waste time if the candidate's expectations are way out of line with the amount the company is willing to pay.

Either way, there is no good reason to name numbers prior to the company making an offer with compensation package details.


> Either way, there's no reason to name numbers until AFTER the company makes an offer with included compensation package details.

I agree that a candidate shouldn't name numbers until after an offer.

But I think the company should give a range as early as possible. This is because of point #2 above. As an engineering manager I've had at least one heartbreaking experience where we took a candidate through the hiring cycle and then found out we and they were way out of line re: comp. Hiring sucks enough without that curveball.

That's why, for all the warts, I'm a fan of salary disclosure laws (like those in Colorado, USA). Yes, it's hard to have an accurate range, because jobs and skills are squishy. Yes, candidates anchor towards the top. Yes, it's weird for a buyer of a thing (labor) to state a price.

But companies have more power in the hiring process (there are, after all, many employees working for a company, but usually only one company an employee works for). Companies, or the hiring managers, also have a budget.

If you are a hiring manager, I'd encourage you to have your salary range shared with candidates as early as possible in the process.


No one wants to work for a company in category #1, though I recognize some people might have to.

Find out their range and standard benefits package as soon as possible in the process. If you still don't know after the first phone screen/chat and are not in dire need of employment, move on. It's a great filter.


> 2. attempting to not waste time if the candidate's expectations are way out of line with the amount the company is willing to pay.

Good reason for them to say what they're willing to pay before I bother reading their job advert


Genuine question: why are you interested in buying a brand new car as opposed to a used one given the massive markdown on even 2 year old cars?


This isn't a (totally) bad question, but it suffers from a problem common to a lot of internet forum posts and comments: it lumps "people" into one shiny bucket, and then shouts at the bucket.

Over 16 million new cars were purchased in 2025 in the United States.

Meanwhile, there are about 242 million licensed drivers in that same country.

So that's less than 7% of drivers buying a new car in a year's time.
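Back-of-the-envelope, using the two figures above:

```python
# Figures from the comment above (rounded).
new_cars = 16_000_000    # new cars purchased in the US in 2025
drivers = 242_000_000    # licensed US drivers

share = new_cars / drivers
print(f"{share:.1%}")    # prints "6.6%"
```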

But what is the point of your question in response to a new car announcement? That new cars should not exist? You do realize how much harder that would eventually make it to buy a used one, right?

So yeah, sure. "People" would be well-advised to consider buying a 2-3 year old vehicle that has depreciated. Let someone else carry that depreciation cost. (But "someone" has to.)

(My used EV depreciated 57% before I bought it, 2 years old with 20,000 miles on it. It's a great way to go!)


I'm not sure about some of the numbers. PCD is pretty dominant in gas and oil drilling.

