I imagine, only slightly extrapolating current AI trends, that in 20 years most codebases will be easily modifiable by AI. I'd even say AI is especially well suited to tasks like these, which typically don't require extremely abstract and complex logic or imagination, but rather "just" a huge attention span and a lot of work.
The tricky bit with ancient codebases is that their only requirement, generally speaking, tends to be that they keep working exactly like they have since 1983, except, of course, for the bit that needs changing. That bit has to change in a way that implements the change without any unintended side effects in a system that is a fractal of undocumented interdependencies. Systems that have been patched for a few decades under those kinds of constraints tend to become especially gnarly that way.
I feel we can't pass laws against every generation's subject of moral panic. People have felt the same way about the activities of youths since forever, and it often turns out fine. Change can bring new problems, but it also brings positives that are hard to understand or even articulate; that's culture. Trying to be the arbiter of that is foolish.
Is TikTok addictive because of its design, or is it addictive because it brings thousands of people and experiences and emotions right to you? Probably both, but it's hard to separate one from the other. Apps are not opium; it's not that clear-cut.
Instead of micromanaging technology and culture, they should make sure that society is kind, that there is slack in the system, that people don't have reason to want to flee their real lives, and that those hurt by new technology get support.
Of course truly malicious dark patterns and fraud should be punished. But that feels like a different category.
TikTok will never have any competitors after this law comes into force. They have the resources to implement the required changes, and the customer base will remain with them. Anyone starting a new service will have a tough time building something that jumps through all the hoops required by the EU, on top of the usual problems with network effects.
That would certainly be a difficult scenario. But it doesn't seem very likely. For example, consciousness and material systems seem to interact. Putting drugs in your blood changes your conscious experience etc.
When you say dark matter theory doesn't require updates when new data arrives, it sounds like you don't count the parameters that describe the dark matter distribution as part of the theory.
Or just don't refund it. Most people want to make contributions to open source, and everyone can afford $1. Exceptions can be made for very active contributors.
In fact, we could use an automated schedule: for the first rejected PR, 5€ are drawn from the contributor's account, then 4€ for the next, then 3€, etc. (plug in your favourite decreasing function, rounding to 0€ when sufficiently close).
But, crucially, if a PR is accepted, the contributor gets to draw 5€ from the repository's fund of failed PRs (if there is one), so that first-time bona fide contributors are incentivized to contribute. Nobody profits from failed PRs except successful new contributors. It's a virtuous cycle, and it does not appeal to the individual self-interest of repo maintainers.
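The decreasing fee schedule above can be sketched in a few lines. This is just an illustration; the starting amount, step size, and linear shape are placeholder choices, since any decreasing function works:

```python
def rejected_pr_fee(rejection_count: int, start: float = 5.0, step: float = 1.0) -> float:
    """Fee (in EUR) charged for the nth rejected PR, counting from 1.

    Linear decrease: 5, 4, 3, 2, 1, then 0 for every rejection after that.
    Swap in any other decreasing function if you prefer a gentler curve.
    """
    fee = start - step * (rejection_count - 1)
    return max(fee, 0.0)
```

So the first rejection costs 5€, the fifth costs 1€, and everything beyond that is free, which caps the total downside for a persistent contributor.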
One thing I am unsure of is whether fly-by AI contributions are typically made with free AI tools or whether there's already a hidden cost to them. The expected cost of a machine-driven contribution is a factor to take into account when weighing the upside/downside of charging for a first PR.
PS. This is a Gedankenexperiment; I am not sure what introducing monetary rewards/penalties would do to the social dynamics, but trying with small amounts may teach us something.
Well, that's awfully presumptuous. So now a young college kid needs to spend time and money to be able to help out a project? I also don't like that this model incentivizes a few big PRs over small, lean, readable ones.
We're completely mixing up the incentives here anyway. We need better moderation and a cost attached to the account, not to each contribution. SomethingAwful had a great system for this 20 years ago: make it cost $10-30 to be an external contributor, and report people who make slop or consistently bad PRs. They get reviewed and lose their contributor status, or even their entire account.
Sure, you can whip up another account, but you can't whip the reputation back up. That's how you make sure seasoned accounts are trustworthy and keep accounts honest.
Given how much of the spending is on hard goods that are simply not AI-able (rent, most new housing construction, most other goods, most health care, much of other services), the replacement theory would require a massive displacement.
Is AI running regular Python really a problem? I see that in principle there is an issue, but in practice I don't know anyone who's had security issues from this. Have you?
I think there's confusion around what use case Monty is solving (I was confused as well). It seems to isolate at the scope of individual function calls, not entire Python applications.
What file formats are your existing datasets in? I also work on data processing in a scientific domain where HDF5 is a common format. Unfortunately, DuckDB doesn't support HDF5 out of the box, and the existing HDF5 extension wasn't fast enough and didn't have the features I needed, so I made a new one based on the C++ extension template. I'd love to collaborate on it if anyone is interested.
That's really fascinating. Is your extension open source? I don't know if I'd have overlapping needs for something like that (though I did investigate HDF5 early on; it seemed very promising as a place to store our outputs), but I'd be curious to explore it and see what you're doing with it.
Right now we typically read from CSV or Excel, because that's what the scientists prefer to work with, for better or worse. There's a bit of Parquet kicking around. The wrappers we have around handling imports for DuckDB are very, very thin; it handles just about everything seamlessly.
Why would 2x the transportation cost be intractable, but ruining the environment, killing life in the oceans, destroying the basis of our future food production, etc, be tractable?