Not a bad analogy: a river is fed by its watershed (shareholders, inhabitants, landowners, state reserves, etc.) and delivers water downstream (customers, clients, dependents, etc.), while also having its own inherent structure and function, water quality, and biodiversity support (eg providing steady employment to 100K people in a local region, the daily structural business of capital and material flows, etc.).
Some hand-written (not AI-generated) prompts to consider:
"An expert in university-level linear algebra, including solving systems of equations, matrices, determinants, eigenvalues and eigenvectors, symmetry calculations, etc. - is asked the following question by a student: "This is all great, professor, and linearity is also at the heart of calculus, eg the derivative as a linear transformation, but I would now like you to explain what distinguishes linear from non-linear algebra."
"What kind of trouble can the student of physics and engineering and computation get into if they start assuming that their linear models are exact representations of reality?"
"A student new to the machine learning field states confidently, 'machine learning is based on linear models' - but is that statement correct in general? Where do these models fail?"
The point is that even though it takes a lot of time and effort to grasp the inner workings of linear models and the tools and techniques of linear algebra used to build such models, understanding their failure modes and limits is even more important. Many historical engineering disasters (and economic collapses, ahem) were due to over-extrapolation of and excessive faith in linear models.
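To make the over-extrapolation point concrete, here is a minimal sketch (all numbers are made up for illustration): fit a line to a process that is only locally linear, here exponential growth, and see how the fit holds up inside and outside the region where the data was collected.

```python
import numpy as np

# Hypothetical illustration: a linear fit to a nonlinear process.
x = np.linspace(0.0, 1.0, 20)             # region where data was collected
y = np.exp(x)                             # true (nonlinear) behavior
slope, intercept = np.polyfit(x, y, 1)    # best linear fit on that region

def linear_model(x):
    return slope * x + intercept

# Inside the fitted range, the linear approximation looks fine...
err_inside = abs(linear_model(0.5) - np.exp(0.5)) / np.exp(0.5)

# ...but extrapolating to x = 5, the relative error blows up.
err_outside = abs(linear_model(5.0) - np.exp(5.0)) / np.exp(5.0)

print(f"relative error at x=0.5: {err_inside:.1%}")
print(f"relative error at x=5.0: {err_outside:.1%}")
```

The model is "correct" in the regime where it was calibrated and catastrophically wrong a short distance outside it, which is the failure mode behind many of those engineering disasters.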
For most people going into science and engineering as opposed to pure mathematics, Poole's "Linear Algebra: A Modern Introduction" is probably more suitable as it's heavy on applications, such as Markov chains, error-correcting codes, spatial orientation in robotics, GPS calculations, etc.
> "OpenAI can now provide API access to US government national security customers, regardless of the cloud provider."
And this one might be related:
> "OpenAI can now jointly develop some products with third parties. API products developed with third parties will be exclusive to Azure. Non-API products may be served on any cloud provider."
Now, does anyone think MIC customers want restricted, safe, aligned models? Is OpenAI going to provide turnkey solutions, unaligned models run in 'secure sandboxed cloud environments' in partnership with private weapons manufacturers and surveillance (data collection and storage/search) specialists?
This pattern is not historically unusual; turning to government subsidies and contracts to survive a lack of immediate commercial viability wouldn't be surprising. The question to ask Microsoft-OpenAI is what percentage of their estimated future revenue stream is going to come from MIC contracting, including the public-private grey area (that is, 'private customers' who are entirely state-funded, eg Palantir, so it's still government MIC one step removed).
Well, 'calculus' is the kind of marketing word that sounds more impressive than 'arithmetic', and 'quantum logic' has gone a bit stale. 'AI-based' might give more hope to the anxious investor class, since 'AI-assisted' is a bit weak: it means the core developer team isn't going to be cut from the labor costs on the balance sheet, they're just going to be 'assisted' (things like AI-written unit tests that still need some checking).
"The Arithmetic of AI-Assisted Coding Looks Marginal" would be the more honest article title.
Yes, unfortunately a phrase that's used in an attempt to lend gravitas and/or intimidate people. It sort of vaguely indicates "a complex process you wouldn't be interested in and couldn't possibly understand". At the same time it attempts to disarm any accusation of bias in advance by hinting at purely mechanistic procedures.
Could be the other way around, but I think marketing-speak is taking cues here from legalese, especially the US Supreme Court, where the justices use the word frequently. They love to talk about "ethical calculus" and the "calculus of stare decisis" as if they were following any rigorous process, or believed in precedent when it's not convenient. New translation from the original Latin: "we do what we want and do not intend to explain". Calculus, huh? Show your work and point to a real procedure, or STFU.
Imagine if compilers were only available via the SaaS model. Developers and the tech community in general would never accept this, and compilers were open sourced well before the internet had developed to the point where SaaS delivery would even be possible. Nobody would trust a system that sent their proprietary code off to a data center to be compiled to binaries for their platform on a monthly subscription model - the idea is ludicrous.
Currently the only SaaS products that still make sense are LLMs, but this is temporary - anyone with sense realizes that the ideal situation is to run an open-source LLM locally and privately, though this still requires a significant investment in high-end hardware and IT staff.
That's the basic calculation: is it overall more efficient and less expensive to hire a skilled IT team to manage your in-house solutions, mostly open-source, including security patches, than to rely on external providers who charge high monthly fees and use all manner of sneaky tactics to keep you locked into their products? In-house is going to win in the long run.
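That calculation can be sketched as back-of-the-envelope arithmetic. Every figure below is a made-up assumption (seat prices, salaries, hardware costs vary enormously); the point is only the shape of the comparison.

```python
# Hypothetical numbers -- plug in your own.
saas_monthly_per_seat = 60        # USD per seat per month (assumed)
seats = 200

it_team_salaries = 3 * 150_000    # three engineers, fully loaded, annual (assumed)
hardware_amortized = 120_000 / 3  # inference server amortized over 3 years (assumed)
in_house_annual = it_team_salaries + hardware_amortized

saas_annual = saas_monthly_per_seat * seats * 12

print(f"SaaS:     ${saas_annual:,.0f}/yr")
print(f"In-house: ${in_house_annual:,.0f}/yr")
print(f"Break-even at about {in_house_annual / (saas_monthly_per_seat * 12):.0f} seats")
```

With these particular assumptions SaaS still wins at 200 seats; the in-house case depends on scale, on how fast the hardware and staffing costs amortize, and on how much you value not being locked in.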
Imagine a town with two landlords who own all rental properties. Yes consumers prefer cheaper rentals, but all the landlords have to do is write an app that they can use to set prices as high as they can while not having too many units empty. If the homeless population in the town increases, that's an externality - especially if the landlords themselves don't live in the town.
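The "app" in this scenario is not sophisticated. A toy sketch, with an entirely made-up linear demand curve: pick the rent that maximizes revenue, and note how many units sit empty at that price.

```python
# Toy model of the pricing app: maximize revenue, ignore vacancies' social cost.
TOTAL_UNITS = 1000

def units_demanded(rent):
    # Assumed demand curve: every unit fills at $1000/mo,
    # and each extra $10 of rent loses 2 prospective tenants.
    return max(0, TOTAL_UNITS - (rent - 1000) // 10 * 2)

def revenue(rent):
    return rent * min(TOTAL_UNITS, units_demanded(rent))

best_rent = max(range(1000, 5001, 10), key=revenue)
print(f"rent ${best_rent}, {units_demanded(best_rent)} units filled, "
      f"revenue ${revenue(best_rent):,}")
```

Under these assumptions the revenue-maximizing rent leaves 40% of the units empty - which is exactly the externality described above: the optimum for the landlords is not the optimum for the town.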
This works only if landlords don't have significantly more units than the population demands, AND it is very expensive both to build new units and for new competitors to enter the market. If enough supply comes onto the market, the best move for the landlord with the additional supply is to lower prices. Tenants then all move into the better-value units, and the expensive landlord is either left with empty buildings or forced to lower his price.
Wouldn't that require a large influx of units not attached to those landlords? And considering they've already cornered the market on the "old" units, unless this market-disrupting supply of units is owned by someone generous, they'll just match the old prices and call it a win.
Usually prices don't go down; the cost does, relative to inflation. What usually happens is a new investor will do the analysis and build new units that are even more expensive, but only slightly. Now all the current tenants who can afford it will leave the current landlords, and the current landlords won't be able to increase prices because there is a better product at that price level.
It does depend on where you are and how elastic supply is. In Austin for example there has been a recent decrease in rent (even relative to inflation) despite continually growing demand.
Austin had such an insane explosion of supply, but it also had a price explosion just a couple of years ago. We'll probably see something similar with GPU rentals in a couple of years.
The timeline here is interesting. Microsoft releases info and instructions for mitigation on July 19, and a more complete report on July 22nd, here's a copy of that:
Then according to this report, 'sometime in August' the exploit was used against the Honeywell-managed nuclear facility, which hadn't been patched, if I read correctly. So it really could have been anyone, and it's hardly just Russia and China who have a record of conducting nuclear espionage in the USA using nation-state cyber capabilities (Israel?). As the article notes:
> "The transition from zero-day to N-day status, they say, opened a window for secondary actors to exploit systems that had not yet applied the patches."
Also, this sounds like basically everything that goes into modern nuclear weapons, including the design blueprints. Incredible levels of incompetence here.
> "Located in Missouri, the KCNSC manufactures non-nuclear mechanical, electronic, and engineered material components used in US nuclear defense systems."
Solution: run open-source LLMs on local hardware, where inference takes a while but you're not leaking your sacred proprietary code to some backdoored cloud cluster. Then downtime arises naturally; see the relevant xkcd on compiling:
Note also that compilers automated the process of machine-instruction generation - quite a bit more reproducibly than 'prompt engineers' are able to control the output of their LLMs. If you really want the LLMs to generate high-quality code to feed to the compiler, the overnight build might have to come back.
Also, in many fields the processes can't be shut down just because the human needs to sleep, so you become a process caretaker, living your life around the process - generally this involves coordinating with other human beings, but nobody likes the night shift, and automating that makes a lot of sense. Eg, a rare earth refinery needs to run 24/7, etc.
Finally, I've known many grad students who excelled at gaming the 996 schedule - hour-long breaks for lunch, extended afternoon discussions, tracking the boss's schedule so that when they show up, everyone looks busy. It's a learned skill, even if overall this is kind of a ridiculous thing to do.