Wow - this is just wild. I've seen lots of arguments around "AI won't take everyone's job, it will just open up new areas for new jobs." Even if you give that the benefit of the doubt (which I don't really think is warranted):
1. You don't need to take everyone's job. You just need to take a shitload of people's jobs. I think a lot of our current sociological problems, problems associated with wealth inequality, etc., are due to the fact that lots of people no longer have competitive enough skills because technology made them obsolete.
2. The state of AI progress makes it impossible for humans in many fields to keep up. Imagine if you spent your entire career working on NLP, and now find GPT-4 will run rings around whatever you've done. What do you do now?
I mean, does anyone think that things like human translators, medical transcriptionists, court reporters, etc. will exist as jobs at all in 10-20 years? Maybe 1-2 years? It's fine to say "great, that can free up people for other things", but given our current economic systems, how are these people supposed to eat?
EDIT: I see a lot of responses along the lines of "Have you seen the bugs Google/Bing Translate has?" or "Imagine how frustrated you get with automated chat bots now!" Gang, the whole point is that GPT-4 blows these existing models out of the water. People who work in these fields are blown away by the huge advances in quality of output in just a short time. So I'm a bit baffled why folks are comparing the annoyances of ordering at a McDonald's automated kiosk to what state-of-the-art LLMs can do. And reminder that the first LLM was only created in 2018.
> I think a lot of our current sociological problems, problems associated with wealth inequality, etc.,
I see where you’re coming from, but is this really the main source of the inequality?
Based on numbers relating to workers' diminishing share of profits, it seems that the capital class has been able to take a bigger piece of the profit pie without sharing. In the past, companies have shared profits more widely due to benevolence (it happens), government edict (e.g., the WW2 era), or social/political pressure (e.g., the post-war boom).
Fwiw, I think that the mid-20th century build up of the middle class was an anomaly (sadly), and perhaps we are just reverting to the norm in terms of capital class and worker class extremes.
I see tons of super skilled folks still getting financially fucked by the capital class simply because there is no real option other than to try to become part of the capital class themselves.
I think you and the one you're replying to are both very right.
Yes, more of this money is going, instead of to middle-class workers, straight to the capital class who own the "machines" that do the work people used to do. Except instead of it being a factory that makes industrial machines owned by some wealthy industrialist, the machines are things like Google and AWS and the owners are the small number of people with significant stock holdings.
It's really striking though that a person graduating high school in say, 1970, could easily pick from a number of career choices even without doing college or even learning an in-demand trade, like plumbing, welding, etc. Factory work still existed and had a natural career progression that wasn't basically minimum wage, and the same went for retail. Sure, McDonalds burger flippers back then didn't expect to own the restaurant in 10 years, but you could take lots of retail or clerical jobs, advance through hard work and support a family on those wages. Those are the days that are super gone and I totally agree with you both that something has changed for the worse for everyone who's not already wealthy.
> but you could take lots of retail or clerical jobs, advance through hard work and support a family on those wages. Those are the days that are super gone
Only in certain places, and only mostly due to crazy policies that made housing ridiculously unaffordable. I'm in an area where my barber lives on 10 acres of land he didn't inherit and together with his wife raises two children. This type of relaxed life is possible to do in wide swathes of the country outside of the tier-one cities that have global competition trying to get in and live there, as long as you make prudent choices.
I think 20- to 30-something engineers who have spent their entire adult lives in major coastal cities have a huge blind spot to how middle America lives.
Very anecdotal, but I don't know anyone making minimum wage for my area. I do know tons of people making within 1-2 dollars of minimum wage though. If we divvy up the data into groups like that, I wonder how much that 1% jumps.
I don't know how accurate this data is, but this website[0] breaks the US average down to single percentiles, and has detailed data for many metro areas, as well.
The comment I was replying to stated that that lifestyle is achievable for traditionally minimum-wage (or close to it) jobs as long as you choose to live in a small city, though.
I was about to reply the same and I looked up the BLS data. The median wage for a barber is $14.41 an hour. Now I know that's at or below minimum wage in cities, but I suspect it's quite a bit higher than the minimum wage in rural areas.
I don't see a problem with that, as long as those who may be earning minimum wage at any given time have the opportunity to improve their lot. I see no reason why the lowest incomes would result in median lifestyles.
To be clear, I don't think most people expect that either -- but I'd argue that most entry-level jobs today are just a big endless cycle of unskilled laborers staying long enough to get fed up and moving on to the next dead-end job, with the companies moving on to the next hire as well.
In general, I'd argue minimum wage jobs are no longer a stepping stone to some sustainable good job; they're basically viewed like consumables by companies. Even someone with a decade of experience, say, at several retail stores or restaurants, can't expect to be offered a position making $50,000 a year plus benefits just for having done their job well every day. By contrast, a dedicated factory worker with a decade of experience 50 years ago could expect to have advanced somewhat, and would expect continued advancement. Today everyone working in retail, restaurants, etc. knows that if they're going to do any better it's going to be by leaving that sector, via learning a trade, going to college, or perhaps founding their own small business. All of those were good options in the past too, but back then advancement within the job was also a realistic expectation.
How middle America lives, for a lot of people, is making within a buck or two of minimum wage, with virtually zero chance of significant advancement, trying to scrape together enough to meet your expenses. You might become assistant manager of the big box store, but that won't transform your life. The only way out is learning a skilled trade or certain college degrees (and likely leaving town).
This isn't specific to cities.
In fact, people in rural areas are hit worse, because the rise of Walmart, Dollar General, and others funnels money out of their towns that would otherwise have enabled many local families to capture the profits from local spending. Today a lot of that spending goes mostly to those companies, and only a fraction of the money stays, in the form of a few low-wage jobs.
I'm not saying it's impossible to stay out of poverty. I'm just saying it's much, much harder, because "advancement" is obsolete in a lot of occupations where it used to be a thing.
The idea that an average working person could buy a house in their twenties was only possible: 1) in America, and 2) for the baby boomer generation, and maybe for some of Gen X. Nowhere else, never after. This is the exception of exceptions, not a norm.
Be that as it may, I would like to continue at least this part of American exceptionalism. There is still plenty of room in this country for young families to own their own homes.
The entire world watched in awe. Here in Switzerland (a very well-developed country), buying a house is something done in your forties, if you are middle to upper middle class. Otherwise, you rent forever.
I am curious if your barber can afford health insurance plus out of pocket maximums for a family of four ($30k+ per year just in premiums plus $5k to $10k oop max), not to mention short and long term disability insurance in case he gets hurt and cannot work.
The only situation I can imagine would be if the wife has a government job with extremely generous health insurance subsidies.
Not OP, but I pay my barber $50 for a 15 minute haircut. Runs his “barbershop” out of his house, which he owns, in one of the most desirable neighborhoods in Seattle.
There’s always another appointment lined up before and after mine, so I guess he’s pulling 6 figures without much sweat.
Something tells me a rural barber in a place he can afford to buy 10 acres without inheritance money is not in a location with many people willing or able to pay $50 for a 15min haircut.
You're quite right, people don't pay $50 for a haircut here, but it scales a little less than the cost of living does. I pay $35 inclusive of tip. For what it's worth, the cost of living here is a little over half of that in Seattle.
I live in the principal city of the local Metropolitan Statistical Area; it's by no means a big city, but it's representative of many small cities around the country. My barber lives out in the county, outside of city limits, where it is much more rural and one can indeed buy 10 acres for not a whole lot of money.
I believe his wife is a schoolteacher; I don't believe public employee benefits are especially generous in this state.
They are usually very generous, especially health insurance subsidies. My friend with a teacher wife pays almost zero, for premiums and out of pocket. And they had multiple IVF rounds covered.
Ask them what their deductible/oop max is, and how they get that insurance, and I bet you will have your answer for how your friend can afford to raise a family of 4 as a barber and buy and live on 10 acres of land. I doubt a two-barber couple could pull it off. The security/benefits of one half of a couple being a government employee is pretty valuable.
Probably, but my barber is awesome. Older now, but he was once very prolific and well known in the music & arts scene. He’s an excellent story teller, and an overall entertaining person to interact with.
I’m not just paying for a haircut, it’s an experience that’s worth every penny.
> It's really striking though that a person graduating high school in say, 1970, could easily pick from a number of career choices even without doing college or even learning an in-demand trade, like plumbing, welding, etc. [...] Those are the days that are super gone
Isn't this rather a strong argument for the claim that what high school teaches today strongly mismatches what the labour market demands? In other words: the pupils are taught skills for many years of their life that are rather worthless for the job market.
This is true, but I don't think high school was ever intended to prepare kids for the labor market; I thought it was to provide a basic education for understanding the world.
Until maybe four decades ago, high school was exactly intended to prepare kids for the labor market. They taught conformity, punctuality, the three Rs, and civics. It was the final education for the great majority of people in the US, as few went to college.
But high school and college both got dumbed down, and now an education at a state university is comparable to high school in the first half of the twentieth century.
Schools have always been like that. There's a reason that all of those skilled trades require multi year apprenticeships.
School teaches everybody to read, write and reason about things in general to a decent level. You can't teach high school kids the basics of all the careers out there beyond stuff that's generally applicable - you wouldn't have the time or the equipment. And schools do often have elective shop, cooking, electronics etc classes for those who want to do them.
I think there is a risk to the trades as well. Tradesman service prices are skyrocketing, too. With fewer people able to hire them because they cannot afford it, won't trade jobs fall as well?
I think there's risk in trades like any business, but the world will always need plumbers, electricians, etc. There will always be a market price people are willing to pay to not have to interact with sewage or potentially get shocked.
I hope they would, but in my 40 years of owning homes, prices to have things done double every 7 years. This would mean 10% inflation year over year, but that isn't the case, and wood prices have not gone up that much. Everyone is just charging more because they can. I keep thinking: at some point homeowners won't be able to afford this, but I've been wrong for 4 decades and just keep writing the checks with firmly clenched buttocks every time I need a major repair.
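For what it's worth, "doubling every 7 years" really does work out to roughly 10% a year; a quick back-of-the-envelope check (just the compounding math, not real data):

    # doubling every 7 years -> implied annual growth rate
    annual_rate = 2 ** (1 / 7) - 1
    print(f"{annual_rate:.1%}")  # ~10.4% per year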
Look .. I am one of the working class here. But I gotta point out. Our standard of living is far superior to someone living in 1970 when it comes to stuff/technology. The stuff is cheaper because of China/globalization. The tech is there because we outsourced production and kept specializing the work force.
I don't think UBI is the solution. Nor is squeezing people more than they are being squeezed. Efficiency and productivity are good things. What is wasteful are make-work programs like the DMV or other govt offices. That crap needs to be automated away. Hospitals need more funding. Schools are unclear. I think schools would benefit from privatization. I don't think the same of hospitals. Not sure why.
It's not about our "standard of living", it's more about class mobility. If you were born poor, you could still get a job at McDonalds, go to college on those wages, and buy a home/start a family and live middle class easily, if not upper-middle class if you chose a high-earnings-potential career.
Today, that's close to impossible unless you take student loans, go through the gauntlet of getting a higher paying job, and then have to grapple with home prices assuming you don't live in a place with reasonably affordable home ownership.
If you think the DMV and government offices are "make-work" then we need to start with cutting military spending, because it's the biggest "make-work" government program we have and a huge share of U.S. discretionary spending goes to it.
Well, I paid into SS and Medicare for fifty years, and I’m getting less out than I put in. But, still glad to have them.
Some people call it a Ponzi scheme, but it’s a pool, as designed.
The military is a big jobs program and all the industrial support generates a lot of economic activity. Better if they just dropped it in the ocean, though. Too much temptation to test it.
Standard of living shouldn't be judged purely in terms of "stuff"; that's not how human brains work. There are relative effects ("keeping up with the Joneses") as well as the effects of having options and possibilities for advancement ("American dream"). Make those things less accessible and people will feel less well off.
And outside of electronics a lot of physical-goods/land stuff is less attainable in many places in the country anyway.
The problem with non-redistributive approaches is that generational wealth rarely goes away. So if you don't have it, the number of people who don't have to try to out-spend you for whatever you want only goes up as time passes.
Televisions are cheaper; food, housing and healthcare are more expensive [0].
I'm pretty sure the latter 3 are more important to standard of living than the former.
Ergo, I believe your claim that standard of living is superior to 1970 is false. Having a shiny iPhone to distract you from the fact that you're homeless, sick and starving is not a step up.
> I think schools would benefit from privatization
Well, we (in the US), have slowly been privatizing them and it’s bad! If you look at test results though, it’s great! Because private charter schools can drop underperforming students before the end of the term and artificially inflate their numbers. There are many more reasons that education with a profit motive isn’t better than without. I suggest maybe reading up on this before casually suggesting how you think we should radically erode our institution.
I went to private school and this is very true. If you are going to privatize schools, then private schools need to keep students with IEPs regardless of cost, like public schools do, or they should get no funding whatsoever from taxpayers.
Sorry, my phrasing was bad. Totally agree, even today trades are still AMAZING for this. I meant even if you were to set aside the trades, 50 years ago there was plenty of stuff you could at least support a family on without even that level of specialized skill. You could "start in the mailroom" or on the sales floor and end up in middle management after 20 years, in a variety of companies, most of which don't even exist anymore, or if they do, they employ far fewer workers domestically today due to a combo of offshoring and automation.
> the capital class has been able to take a bigger piece of the profit pie without sharing.
In the current world, where do you think a lot of the capital class is able to get their capital?
Technological progress, and especially the Internet, has made much bigger markets out of what were previously lots of little markets, and now the "winner take all/most" dynamics mean that where you previously could have lots of "winners" in every city (e.g., local newspapers selling classified ads), now Google, FB and Amazon gobble up most ad dollars - I think someone posted that Amazon's ad business alone is bigger than all US (maybe more than that?) newspaper ad businesses combined.
IMO the "main source of inequality" is that tech allows a small number of people to use technological and fiscal leverage to make an outsized impact on society as a whole. Anyone who has a job that produces value in a 1:1 way is positioned to be 'disrupted'. NLP, etc, just provides more tools for companies to increase their leverage in the market. My bet is that GPT-4 is probably better at being a paralegal than at least some small number of paralegals. GPT-5 will be better at that job than a larger percentage.
Anyone who only has the skills to affect the lives and/or environments of the people in their immediate surroundings is going to find themselves on the 'have nots' end of the spectrum in the coming decades.
This is exactly what has happened to commercial and investment banking (market/trading) in the last 30 years. Computers and mass automation. Even if your profits only grow with inflation (in reality, they grow much faster), as long as you can reduce costs each year (less labour required), return on equity continues to rise. It is crazy to me that most big commercial banks still have so many physical branches. I guess they exist for regulatory purposes -- probably _very_ hard to close a branch without creating "banking deserts".
This has changed considerably. Chase remodeled most of their existing branches so they have like, 1 teller, and 2 or 3 people sitting at desks for other transactions. That's it. The days when your usual branch had a line of like 15 tellers are long gone. Out here in Southern California, I think they also closed many of their branches in the past few years.
But looking back further - ohhhhhh yeah dude. Oh yeah. Totally. For a brief period my mom worked at a BofA facility that _processed paper checks_. Like they had a whole big office for it. That's completely 100% gone now. The checks get scanned at the point of entry (cash registers, teller counters, etc) and then shredded.
I have family that has been on the front lines of fighting global poverty and corruption for their entire lives (more than 50 years - at the very highest levels).
I submit that it is not hyperbole to say that probably 95% of all global human problems can have their root cause traced to poverty. That is not a scientific number, so don't ask for a citation (it ain't happening).
There is no sharing and there never was. Companies don’t share profits with workers and they never have. Workers get paid on the marginal value of their productivity, not some portion of the total or average.
I don't know what you mean to imply by "government edict (e.g., ww2 era)" but WW2 in the US was the era of the US fixing wages under the authority of the president
> Based on numbers relating to workers’ diminishing share of profits, it seems to be that the capital class has been able to take a bigger piece of the profit pie without sharing.
I was with that site until it compared the US budget to a household budget with a credit card. The average American family doesn't control the federal reserve or set interest rates and employs zero economists, so that seems like a dumb way to think about the problem.
I used to do a job that was eventually automated. We did the one and only thing the computer couldn’t do - again and again in a very mechanical fashion.
It was a shit job. You might get promoted to supervisor - but that was like being a supervisor at McDonalds.
Why not treat the job seriously? Why didn’t the company use it as a way to recruit talent? Why didn’t the workers unionize?
Because we all knew it would be automated anyway.
We were treated like robots, and we treated the org like it was run by robots.
There’s a huge shadow over the economy that treats most new jobs like shit jobs.
> I mean, does anyone think that things like human translators, medical transcriptionists, court reporters, etc. will exist as jobs at all in 10-20 years? Maybe 1-2 years? It's fine to say "great, that can free up people for other things", but given our current economic systems, how are these people supposed to eat?
And it doesn't mean that the replacements will be much better, or even as good as the humans they replace. They will probably suck in ways that will become familiar and predictable, and at the same time irritating and inescapable. Think of the outsourced, automated voice systems at your doctor's office, self-checkout at the grocery store, those touchscreen kiosks at McDonalds, etc.
I already find myself wanting to scream
> GIVE ME A FUCKING HUMAN BEING
every now and then. That's only going to get worse.
A lot of those jobs exist for legal compliance reasons and are cost centers. Every company in the world wants to make them as shitty and cheap as possible while remaining compliant. Right now the floor is minimum wage, but soon it will be GPT chat.
Perhaps those in tech can simply see further out. It reminds me of the advent of the internet, the lay person also didn't care, until websites and web apps were made that catered to their needs. But the people who made those sites and apps were precisely the tech people who could see beyond the lay person's idea of what the internet was. So too with AI.
Teaching kids how to communicate by, for example, writing a book report or an essay is not "busy work". I feel very lucky that I had good teachers that focused on the basics of writing. I remember at the time not liking some of it at all (I had one teacher who made us diagram sentences endlessly), but as an adult I'm really grateful I had that education.
I think anyone saying "oh, it's busy work" without proposing a solution that doesn't involve orders of magnitude more work for teachers should probably spend a few days in close proximity to a high school teacher.
Ask the copywriter whether it takes longer to write the content themselves or to edit GPT's output to satisfy the client. If it's 10% faster to edit GPT's output, that means that we need 10% fewer copywriters.
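A toy back-of-the-envelope to make that arithmetic concrete (the numbers here are entirely made up, just illustrating the parent's point):

    # If editing GPT output takes 10% less time than writing from scratch,
    # the same content volume needs ~10% fewer copywriter-hours, and so
    # roughly ~10% fewer copywriters.
    hours_to_write = 4.0                    # hours per article from scratch
    hours_to_edit = 0.9 * hours_to_write    # 10% faster editing GPT output
    articles_per_month = 1000

    hours_before = articles_per_month * hours_to_write
    hours_after = articles_per_month * hours_to_edit
    print(f"copywriter-hours needed drop by {1 - hours_after / hours_before:.0%}")  # 10%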
It is very obvious there is a mass unemployment wave coming - or at least a mass "retraining" wave, though the new jobs "teaching AIs" or whatever remain to be seen. I hope everyone currently just questioning whether this will happen now is prepared to state it with conviction in the coming months and fight for some sort of social protection program for all these displaced people, because the profits from this new world aren't getting distributed without a fight.
There should be no taxpayer money for these people. AI can never perform physical labour, so there is still a market for physical labour open to those replaced by AI. There will still be people working physical jobs as there are today, and you demand that they support those who are too lazy to work a physical job? No, they have to work as well.
Everybody who doesn't have to do physical labour - including me - should be happy and grateful for that privilege - not try to rob even more from physical labourers, who in the end, create everything.
I don't think it's the economy, it's the policy. Automating a shit-ton of jobs is great for the economy. The economy is just fine if 90% of people are starving because big corps are saving shit-tons of money.
The government of a wealthy country should ensure that its citizens are able to eat, and have a sheltered place to sleep, without them needing to work. Because the way things are going, there won't be enough work to go around. Even now, with the supposed "labour shortage" there are record numbers of homeless people, and people living paycheck-to-paycheck. Housing is more unaffordable than ever. Minimum wage is not keeping up with the economic realities.
Governments need to step in; they need to change policy so big corps are paying more taxes, and that tax goes to a basic income that can cover the cost of housing and the cost of food. Maybe not right away, maybe it starts at $100/month. But eventually the goal should be to get everyone on a basic income that can cover the necessities, then if they want to be able to enjoy luxuries (concerts, gourmet food, hobbies, streaming services, etc.) they can choose to work.
Looking around here in Amsterdam, there is a lot of work to be done. And all the places that aren’t as nice, there is a LOT of work to be done.
Computational irreducibility combined with human insatiability combined with ethics entails: a lot of work to be done. Everyone needs to get to work, we aren’t done yet. If robots could do all the work… well, god bless. There is SO much work to do. That makes me think the issue is not “robots taking jobs” but the focus and intention of the collective.
Any new hire is WAY more valuable now that they can use chatGPT. So why aren’t we hiring more people?
Well, one small issue: notice how you get taxed for hiring people a lot more than buying tech? That’s probably something to be fixed.
I wonder why more people aren't seeing it from this angle. My tech company is in southern Africa, and I struggle to find the talent to fit my customers' / the market's needs. We can't compete with salaries in the US or western countries in general.
Now, my average tech staff can actually solve challenging coding problems that before could only be done by the few talented ones.
It does not solve all shortcomings, but it definitely shifts the collective cognitive load in the company.
Why do you say you can't compete with Western salaries? If your customers don't want to pay what it takes, why the hell would it be your potential employees' responsibility to reduce their salary so you can offer this bargain? And since you are in tech, you can sell to the whole world. If your business can't get enough profit to pay for your workers, then the company should shut down the business as a failure.
how much of that work requires physical labor? when I think of the work to be done, I think of like, building roads and houses and factories, transitioning to renewable energy, staffing hospitals with doctors and nurses, picking up trash on the street, cleaning up oceans, research in biology and pharmacology.
I don't really think of filing paperwork, or writing code, or meetings in general. It's the stuff that requires arms and legs and fingers that's our bottleneck. ChatGPT can't help with that. Robots can, but RL is lagging.
> building roads and houses and factories, transitioning to renewable energy, staffing hospitals with doctors and nurses, picking up trash on the street, cleaning up oceans, research in biology and pharmacology
All that takes knowledge work. Physical too, but most of the cost today is knowledge work.
If there is a housing shortage and a jobs shortage then shouldn't people just be building houses? Why should the government be involved and just giving people money? The problem is too much government involvement already. Look at areas where there are shortages and problems with very little advancements being made. Those are the most heavily regulated industries, a system that allows for the most corruption, entrenchment and commingling of power and wealth.
Probably not very different, since we basically have this already in the form of insanely rich billionaires and multi-hundred-millionaires, who basically exist in a life where they are divorced from everyone else and don't care about "us". Their money just compounds its growth automatically because they've passed the tipping point where it's hard (other than deciding to do some crazy expensive thing like buy Twitter) to spend it faster than it comes in. Sure, it might be a little different since an AI won't eat or sleep, but the heavily computerised economic activity involved in global investment and banking already doesn't sleep and is already a sort of diffuse collective intelligence, so I'm not sure how different it would be - unless the AI would literally let people starve for whatever reasons it may have. I feel like the odds of "communist revolution" type activities, where the workers seize the means of production, are probably higher for an ephemeral AI overlord than for flesh-and-blood bosses and capital owners.
Think of people who have jobs like archaeology, digging up bones. The only way these jobs can exist is if technology has taken over much of the grunt work of production.
As for human translators, the need for them far, far exceeds the number of them. Have you ever needed translation help? I sure have, but no human translator was available or was too expensive.
This is probably the real problem. Translators are paid shit nowadays for what is a really high-skill job. I have translators in the extended family who had to give up on that line of work because the pay wouldn't sustain them anymore.
yep, exactly. the issue isn't that there will no longer be a need for human translators - machine translation makes subtle mistakes that legal/technical fields will need a human to double-check.
the issue is that many translation jobs will be, and already are being, replaced with 'proofread machine translation output' jobs that simply don't pay enough. translation checking is careful, detailed work that often takes almost as much time as translating passages yourself, yet it pays a third or less of the rate because 'the machine is doing most of the work.'
I don’t think it’s really because “the machine is doing most of the work”, but because there’s no good way for clients to assess the quality of the supplemental human work, and therefore the market gets flooded with subpar translators who do the task sloppily on the cheap, in a way that still passes as acceptable.
AI translation is good enough to get business done. And it is instant, which opens up completely new opportunities and markets. Getting e-mail in a foreign language, translating it, writing a response, translating that back and closing a deal. Using a human translator would take much more time, because they cannot always be on call.
An adequate AI translation is a lot better than no translation.
When you have to use documents in another country that doesn't list their original language as official, not much, if anything at all, is machine-translated AFAIK. Is this not the case for most legal paperwork as well? You almost always need certified translation (by a human), for which you have to pay out a reasonable sum. And if it's not a good translator, you pay double.
e.g. Italian citizenship can cost as much as a brand new car in Brazil and almost half of that cost could come from certified translation hurdles.
Jobs like archeology could probably also be automated further. Somewhat limiting this is possibly the fact that the "market size" for automation here is small.
This is possibly a death spiral. GPT is only possible because it's been trained on the work humans have learned to do and then put out in the world. Now GPT is as good as them and will put them all out of work. How can it improve if the people who fed it are now jobless?
Presumably it will improve the same way humans did -- once it's roughly on par with us it'll be just as capable of innovating and trying new things. The only difference is that for humans, trying a truly new approach to something isn't really done that often by most. "GPT-9" might regularly and automatically try recomputing all the "tricky problems" it remembers from the past with updated models, or with a few tweaked parameters and then analyze whether any of these experiments provided "better" solutions. And it might do this operation during all idle cycles continuously.
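Purely speculative, but the kind of idle-cycle loop I'm imagining would look something like this sketch (every function, score, and parameter here is hypothetical, not anything a real system does today):

    import random

    def solve(problem, params):
        # stand-in for re-running the model on an old problem with tweaked params
        return random.random()  # pretend quality score; higher is "better"

    def idle_self_improvement(tricky_problems, best_scores, base_params):
        # during idle cycles, retry remembered hard problems with small tweaks
        for problem in tricky_problems:
            tweaked = {**base_params, "temperature": random.uniform(0.1, 1.0)}
            score = solve(problem, tweaked)
            if score > best_scores.get(problem, 0.0):
                best_scores[problem] = score  # keep whichever attempt did best
        return best_scores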
Honestly as a human who grasps how the economy works, this doesn't sound like a good thing, but I don't see any path to trying the fundamental changes that would be required for really good general AI to not be an absolute Depression generator.
The only thing I'm wondering is, will the wealthiest ones, who actually have any power to influence these fundamental things, figure this out before it's too late? I really doubt your Musks and Bezoses would enjoy living out their lives on ring-fenced compounds or remote islands while the rest of the world devolves into the Hunger Games.
I keep seeing people post a line like this and it makes about zero sense to me...
Just what the shit do you think Google/Microsoft/Amazon are pouring billions of dollars into machine learning/AI for? The first one that creates a self improving-self learning machine wins the game (or destroys the earth with a paperclip maximizer).
You and your human intelligence are not magic. You're biological hardware and software that a lot of people are spending a lot of time and effort on reproducing in a digital format.
This is (materialistic) nihilism. Materialism is a philosophy and not a new one, but an exceedingly sterile one, in my view. If you want to take that philosophical position you can, but others are free to reject it (and most people do) because it is only a philosophical position, not a proved description of reality (how can it be?).
That was very bad for the weavers. Well-earning middle-class jobs were replaced by toiling in the mills. It then took a few generations for the factory acts to put a 10-hour maximum workday in place, etc.
The luddites were trying to defend their livelihoods, communities etc. It's the same rational thing people are looking at now: How to survive
Consider how expensive textiles were before weaving machines. People wore their one set of clothes until they disintegrated.
Fast forward to today. I received a solicitation for a charity in the mail the other day. They enclosed pictures of the poor kids. The kids were all wearing fashionable, spotless clothes in perfect condition.
They did?! I thought they just had to learn and adapt doing something else eventually during a transition which was not very fast while also enjoying the new, better and cheaper products available to all.
Any source for those starvation deaths? I would like to learn more about what prevented them from simply doing what the survivors did.
And weaving machines have not fully trickled down to citizens.
You cannot easily buy a weaving machine (there are some second-hand ones) or easily go to your local maker space and use the weaving machine to create the design you desire. Open source in the space of textile making is in its infancy even though there are some projects. I bet it is easier to get a low-volume tape-out of some custom chips than it is to get a custom roll of textile. (You can get printing, but that's not the same thing.)
Textiles have become way cheaper and both higher quality (when demanded) and lower quality (cost saving fast fashion) and available in far higher quantities.
That medieval era technology can still be manufactured at an individual scale was not my point.
My point was that access to the technological advancement has not trickled down and that this creates an imbalance of power.
Compare how easy it is to get a custom textile (not a custom print) made to how easy it is to get a custom PCB made (it is reasonably easy to etch a double sided board and multi-layer and flexible ones can easily be ordered online). The situation with regards to knitting is somewhat better.
Saying that a "a basic loom isn't hard to make" in a world of high speed air jet digital looms is equivalent to saying that perf-board still exists in a world of SMD components.
This hoary take irks me. There were still places for human endeavour to go when the looms were automated.
That is no longer the case.
Think of it instead as cognitive habitat. Sure, there has been habitat loss in the past, but those losses have been offset by habitat gains elsewhere.
This time, I don't see anywhere for habitat gains to come, and I see a massive, enormous, looming (ha!) cognitive habitat loss.
--
EDIT:
Reply to reply, posted as edit because I hit the HN rate limit:
> Your job didn't exist then. Mine didn't, either.
Yes, that was my point. New habitat opened up. I infer (but cannot prove) that the same will not be true this time. At the least, the newly-created habitat (prompt engineer, etc.) will be minuscule compared to what has been lost.
Reasoning from historical lessons learned during the introduction of TNT was of course tried when nuclear arms were created as well. Yet lessons from the TNT era proved ineffective at describing the world that was ushered into being. Firebombing, while as destructive as a small nuclear warhead, was hard, requiring fantastic air and ground support to achieve. Whereas dropping nukes is easy. It was precisely that ease-of-use that raised the profile of game theory and Mutually Assured Destruction, tit-for-tat, and all the other novelties occurrent in the nuclear world and not the one it supplanted.
Arguing from what happened with looms feels like the sort of undergrad maneuver that makes for a good term paper, but lousy economic policy. So many disanalogies.
> This prediction has occurred with every technology revolution. It hasn't been borne out yet.
So what? You are performing 'induction from history', which is possibly the hand-waviest possible means of estimating what is next to occur.
Discontinuities occur. Fire gets tamed. Alphabets get invented. What went before is only a solid guide to the future absent any major disruption to the status quo. There is no a-priori reason to think that this time will be the same, either. Burden of proof is yours.
> It's a variation of the broken window fallacy.
I appreciate parsimony as much as the next academic but I'd appreciate you fleshing out your position here, so I can take it apart at the joints, in the custom and manner of my people >:)
You still haven't stepped away from historical induction -- your argument still depends on this time not being radically different than last time. There are good reasons -- presented everywhere, right now -- to suppose that this time is substantively different. Sundar Pichai called the invention of AI the most important thing humanity has worked on -- more important than fire, or the alphabet -- and I share his view. It's out there, commonly, in the intellectual wild; you cannot, on pain of being unconvincing, simply ignore it. "Big, if true," and it very well might be. https://www.youtube.com/watch?v=sqd516M0Y5A
I propose that you invest in a more convincing line of argument. The burden of proof lies heavy upon you.
Secondly -- for the life of me -- I don't see how we got from "prosperity doesn't come from jobs that are little more than make-work" (a claim, by the way, that Keynes would take exception to) to the view that automating most of the intellectual work on the planet will have nugatory impact, or that we'll all just vie to become celebrated Twitch streamers or influencers or whatever (assuming that synthetic influencers don't take off -- oh, wait, they did: https://www.synthesia.io/glossary/ai-influencer)
Even were you correct (and Keynes wrong), the instantaneous conversion of meaningful labour -- journalism, counselling, engineering -- into, as you say, "make-work" (the position I infer you are taking) would have tremendous cost.
At minimum, the psychological impact of such a transition would make the developed world's COVID hangover look like a day at the zoo.
Finally, the Parable of the Broken Window specifically refers to destructive work. Non-productive work is not covered. https://finshots.in/archive/dig-holes-and-get-paid-to-fill-t.... And that is to say nothing about how economic fruits are distributed -- a whole other matter, upon which, I again infer, you have no further comment.
Allow me to indulge in my own historical induction.
Up until Louis Pasteur invented the germ theory of illness, it was broadly understood, across many different cultures, that disease had its origins in one or all of: witchcraft, possession, loose morals, blocked meridians, etc.
Were you to do historical induction on the spawning of illness theory, you might well conclude that no theory of illness would be scientifically verifiable. You might have argued that anyone claiming a radical change in medicine was deluded, alarmist, or simply excitable.
And you would have missed out on the multiple decades of extra health that you've had on account of antibiotics, sterile procedure, and disinfectant. Your induction from history would have caused you to miss the disanalogy.
Something to think about next time you're at the doctor.
Non-productive work is exactly what breaking a window and then fixing it is, as well as doing work that is far better done by machine.
As to distribution of economic fruits, as I mentioned before, replacing labor with machines made the US the most prosperous country in history, along with the richest poor people.
> Non-productive work is not destructive work -- again, Keynes ditches.
Sorry, but breaking a window and then fixing it is non-productive.
> Your country, the USA, is very close to system collapse because of its inequal distribution of fruits.
Hardly. If the US will collapse, it's because of the current leftist swing of the government engaging in ever-increasing wealth redistribution.
> I daresay it makes good popcorn-time.
It's the equal distribution of income countries that repeatedly collapse. France is in the news currently because they've discovered that the math of redistribution does not work, and the people who cannot accept the math are rioting.
Competition still has potential for infinite growth. Even if ai is better than humans at everything, humans will be finite and will likely be better at making people with money feel important. Potentially the future economy is everyone just competing to make the wealthy feel important whether fighting their wars, worshiping at their cults, or working at their “startups”
You joke, but an economy that is 97% artists (aka content creators) sounds... good? Isn't this the utopic end goal after we automate the scarcity out of our lives?
Have you seen some of that content? This sounds like a level in Dante's Inferno: all day, every day, all "these" (and myself, probably) people going blah blah blah into the ether. Navel gazing to the extreme.
In theory it's great, in practice... who knows. The cynic in me would expect it to go worse than anyone could ever imagine. If everything is automated, why do you still need humans?
Horizon: Zero Dawn was so compelling (to me) because of the outright horrifying plausibility of a gassed-up tech CEO, convinced that software safeties were infallible, unleashing the consequences of their hubris upon the whole of humanity.
Ted Faro was a horrible human being blinded by delusions of grandeur, but he wasn’t “evil” - he was even convinced he was saving humanity by ending the threat of both war and climate change.
I don’t see GPT itself as representing a new Faro Plague, but I do see a lot of wannabe Ted Faros making the decisions at the top.
If LLMs come even close to achieving their short-term potential, we’re unleashing a bigger destabilizing force on the world than the smartphone/social media combo - and the world of 202x seems blatantly incapable of absorbing that level of disruption.
I saw a stream the other day that was just the output of an AI trained on a popular streamer’s past streams. It would select a random clip for video, respond to viewers’ comments in the voice of the streamer. It even superimposed roughly corresponding lip movements on the video.
I've listened to some popular podcasters. Over time, they all run out of material and their newer podcasts are just rehashes of the old ones. I suppose AI will take over that job!
Literally everything you do online is training data. This comment and discussion is future training data. Your browser history is logged somewhere and will be training data. Your OS probably spies on what you do...training data. It's training data all the way down. And they've hardly begun to take into account the physical world, video, music, etc. as training data.
Also what happens to the intuition and unwritten skills that humans learned and passed on over time? Sure, the model has probably internalized them implicitly from the training data. But what happens in a case where you need to have a human perform the task again (say after a devastating war)? The ones with the arcane knowledge are gone, and now humans are starting from scratch.
Incredible that we've been writing speculative fiction about this for decades and still we sleepwalk right into it. I'd love to be wrong, but I think we're all still too divided and self-interested for this kind of technology to be successfully integrated. A lot of people are going to suffer.
It’s not just sci fi. It’s has already happened in past with construction. Things like pyramids and certain cathedrals and what not are no longer possible even with machines. At least this is what I’ve read and heard, I’m not actually an engineer or architect.
Tangent, I’m looking for some sci fi about this topic. Any suggestions?
No. Things like Greek fire or Roman concrete aren't possible - because we don't know the precise mixture or formulation involved. The old descriptions are so vague that we don't know how to reproduce them.
But we can technically do much better waterproof concrete or whatever - however our incentives are also not aligned in the same ways.
Here's a tangential link to monks building a Gothic cathedral with modern machines: https://carmelitegothic.com/
Presumably this problem is solved with technology improvements or the need is recognized to hire experts capable of generating high quality training material. In either situation, there's going to be extreme discomfort.
There is a problem: how will people become experts in the field? If all entry-level positions are taken by AI, nobody will be able to become an expert.
GPT is good because of collective knowledge, lots of data. What do you have in mind by "hire experts"? Isn't that what we have now? Many experts in many fields, hired to do their work. Cut this number down and you reduce training data.
Let's assume that GPT eliminates an entire field of experts, runs out of training data, and whoever is at the helm of that GPT program decides that it's lucrative enough to obtain more/better data. One alternative is subsidizing these experts to do this type of work and plug it directly into the model. I don't expect the nature of the work to change, more likely it's the signature on the check and the availability of the datasets.
It's important to note, however, that GPT does not itself have any knowledge, only information. Knowledge implies it has comprehension or understanding. It can just as easily produce bad information as good, and it has little to no ability to self-assess the accuracy of the information it provides.
You also may underestimate how quickly AI could pass expert level. The experts out there still have many years of life left, so they won't be disappearing soon. If we get self-improving/self-training AI sooner rather than later, then humans won't be the experts.
> The state of AI progress makes it impossible for humans in many fields to keep up
The way it works, someone would have to produce more original training data in the first place. In the long term, it is AI that has to keep up, not the other way around.
> wealth inequality
Microsoft and OpenAI run closed for-profit LLMs that are inherently only possible thanks to the creative work of all the people who might stand to lose jobs now. Not only should it be clear where the driving force for rising wealth inequality is going to come from going forward -- these companies' effectively scraping original works of living humans and repackaging them for profit should be in violation of intellectual property law, if it isn't already. Perhaps more people should start adding two and two together.
> If you think the IP law that is already making the rich even richer
On the contrary, the confusion is all yours. This is literally the reason plenty of photographers, illustrators were able to make money, as just one example. Without this any major publication could just grab whatever photo or artwork they saw fit, but they didn’t precisely because there’s such thing as IP law.
Before you argue for some form of communism without any intellectual property, ask yourself why people produced creative work in the first place (you know, all those works thanks to which an LLM can do its thing). Could it be because the fruit of their work was considered their intellectual property? As in, they were paid for it and were in control of it?
Now, as soon as LLMs are trained on that work and can suddenly produce derivatives cheaper, techbros are suddenly all like "let's pretend IP law is some unfair thing that only benefitted the rich". Those publishers now pay OpenAI/Microsoft for those very photographs and illustrations, which were taken for free, while the very people who created them lose jobs and gigs. Similarly for book authors and other creative industries.
Do you really think this works in favor of decreasing the wealth gap? Not having to pay original creators for their work and instead paying a fraction of a penny to Microsoft while those creators starve? Aren’t you living in perpetual cognitive dissonance from these mental gymnastics?
No surprise non-tech people and especially creatives hate tech people more and more.
* Verbal translation, where accuracy is usually important enough to want to also have a human onboard, since humans still have an easier time with certain social cues.
* High-culture translation, where there's a lot of personal choice involved and explaining it matters. GPT can give out many versions but can't yet sufficiently explain its reasoning, nor would its tastes necessarily match those of humans.
* Technical translations for manuals and such. This market will be under severe threat from GPTs, though for high-accuracy cases one would still want a human editor just in case.
All in all, GPT will contract the market, but many human translators will be fine. There are still areas where you'd want a human, and deskilling isn't a big threat - a human can decide to immerse and get experience directly, and many will still do so by necessity.
A Chinese washing machine manufacturer may not hesitate to use MT (Google Translate, chatGPT or otherwise) to translate the instruction manual to English.
In some cases, the quality may not go down or even go up (we've probably all seen some pretty bad human translations).
"AI" is not the only automation that threatens translation jobs: translation memories (plain ol' databases that remember past translations) have killed a lot of business for human translators, namely re-translating new versions of products, where not a lot has changed. Nowadays, they only get paid for the sentences that were modified compared to the previous version ("the diff").
SDL Trados is an example of this simple approach that is extremely effective, and used heavily e.g. by the European Commission's translation service.
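For the unfamiliar, a translation memory is conceptually very simple. A toy sketch of the idea (exact-match only; real tools like Trados also do fuzzy matching, and the sentences here are made up):

    # Toy translation memory: previously translated sentences are reused for
    # free, and only new/changed sentences ("the diff") go to the human
    # translator as billable work.
    tm = {
        "Press the power button.": "Drücken Sie die Ein/Aus-Taste.",
        "Close the lid before starting.": "Schließen Sie den Deckel vor dem Start.",
    }

    new_manual = [
        "Press the power button.",                 # unchanged -> reused from TM
        "Close the lid firmly before starting.",   # modified  -> billable
        "Do not immerse the unit in water.",       # new       -> billable
    ]

    billable = [s for s in new_manual if s not in tm]
    print(f"{len(new_manual) - len(billable)} reused, {len(billable)} billable")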
> does anyone think that things like human translators, medical transcriptionists, court reporters, etc. will exist as jobs at all in 10-20 years? Maybe 1-2 years?
Maybe the very, very basic transcription/translation stuff might go away, but arguably this race-to-the-bottom market was already being killed by Google Translate, as bad as it is, anyway.
In areas where quality is required (eg. localizing video games from Japanese to English and vice versa) people would be (justifiably) fussy about poor localization quality even when the translation was being done by humans, so I have to imagine that people will continue to be fussy and there will still be significant demand for a quality job done by people who aren't just straight translating text, but localizing text for a different audience from another culture.
You are a founder of a startup. A notable VC wants to invest millions of dollars but insists that the contract will be in their language which is Finnish. Would you trust GPT to translate the contract or reach out to a professional human translator?
We've had Google Translate since 2006, and there are still millions of translators at work all around the world. I wouldn't be so quick to dismiss those jobs.
- Google Translate and its ilk have already significantly cut down the number of translators required for multinational companies. Google Translate in 2006 is also a bad example; it really only got excellent in the past few years.
- I would trust GPT to write the first draft, and then hire a translator to check it. That goes from many billable hours to one, or two. That is a material loss of work for said translator.
- High profile translations, as your example is, are a sharp minority of existing translator jobs.
Tools like deepL and Google translate have absolutely put casual translators and part-time translators out of a job. GPT and tools like whisper that can do automatic speech recognition and instant translation will make that even worse.
There will always be a need for careful human translation work in such things as legal documents for government work but those positions will become even more competitive.
I was just using bing translate last night, and it was literally making up english words that do not exist - I tried to google for them to see if it was just some archaic word, and it was complete fabrication. So I dunno how many years are left before we all trust machine translation unflinchingly, but I agree today's not the day.
Recently I needed a couple payslips translated from German to English and then certified. Google Translate can translate, but cannot certify.
The professional licensed translator took a significant amount of money and sent me a document for review with most of the numbers somehow mixed up. It took days of back and forth over email for them to fix it.
OK - but a translator with certification is available to replace the one that made a mistake; their guild exists. The recourse and legal weight of the Google Translate service are very different. You can't compare the two fully without context.
And there’s the fact that many tasks don’t need to be perfect. AI is ideal for doing 90% of the legwork in those use cases.
Or you could still have a human in the loop, for example a CTO reviewing their AI developer’s code. But now you can pay one person instead of 10.
That said, these tools still need some domain knowledge. It’s not at the point yet where anyone off the street can use it to accomplish an unfamiliar task.
There’s an old story about someone charging $1,000 to fix a machine. They did it in 10 seconds and the client complained. The consultant said “I get paid for knowing how to fix it”.
The same might be true for knowing how to prompt AI.
For the translator job, that is already the reality. I saw Chinese students major in English and translation post comments in social media that company are already asking the employees to first ask ChatGPT and then review the response.
It's true that a few top students can still translate better than ChatGPT, but I'm told that only students who are outstanding in both English and Chinese can do it.
Hundreds of thousands of people make their living translating, and now most of them need to find something else to survive.
This is partly true, but inaccurate in important ways.
ChatGPT's results depend on the domain of the language. Technical reasoning over many parts of the input is what the machine was designed for; translating human language A to human language B depends on specific models built over the last ten years or so.
But if you subtract those two, language structure and technical structure, you are left with giant holes in the ways humans talk, interact, and create.
Overall I would say that people who translate formal documents are in trouble, yes, and so are people who translate for no-loyalty, cost-sensitive markets like corporate product ads or travel and recreation copy. But there are many other kinds of communication, and therefore of translation.
> 1. You don't need to take everyone's job. You just need to take a shitload of people's jobs. I think a lot of our current sociological problems, problems associated with wealth inequality, etc., are due to the fact that lots of people no longer have competitive enough skills because technology made them obsolete.
I think the problem is that the money saved by making those jobs redundant is not going back to society but to a few heads at the top. That's the fundamental problem: the company moves money from payroll to the tech that replaces the people, and any savings become entirely the company's profit.
And then they go on a tax-avoidance spree, so the increased profits don't even flow back to society.
> Imagine if you spent your entire career working on NLP, and now find GPT-4 will run rings around whatever you've done. What do you do now?
I have been doing NLP since 1993. Before ca. 1996, there were mostly rule-based systems that were just toys. They lacked robustness. Then statistical systems came up and things like spell-checking (considering context when doing it), part of speech tagging and eventually even parsing started to work. Back then, people could still only analyze sentences with fewer than 40 words - the rest was often cut off. Then came more and more advanced machine learning models (decision trees, HMMs, CRFs), first a whole zoo, and then support vector regressors (SVM/SVR) ate everything else for breakfast. Then in machine learning a revival of neural networks happened, because better training algorithms were discovered, more data became available and cheap GPUs were suddenly available because kids needed them for computer games. This led to what some call the "deep learning revolution". Tasks like speech recognition where people for decades tried to squeeze out another half percent drop in error rate suddenly made huge jumps, improving quality by 35% - so jaws dropped. (But today's models like BERT still can only process 512 words of text.)
So it is understandable that people worry at several ends. To lose jobs, to render "NLP redundant". I think that is not merited. Deep neural models have their own set of problems, which need to be solved. In particular, lack of transparency and presence of different types of bias, but also the size and energy consumption. Another issue is that for many tasks, not much data is actually available. The big corps like Google/Meta etc. push the big "foundational" models because in the consumer space there is ample data available. But there are very important segments (notably in the professional space - applications for accountants, lawyers, journalists, pharmacologists - all of which I have conducted projects in/for), where training data can be constructed for a lot of money, but it will never reach the size of the set of today's FB likes. There will always be a need for people who build bespoke systems or customize systems for particular use cases or languages, so my bet is things will stay fun and exciting.
Also note that "NLP" is a vast field that includes much more than just word-based language models. The field of propositional (logical) semantics, which is currently disconnected from the so-called foundational models, is much more fascinating than, say, ChatGPT if you ask me. The people there, linguist-logicians like Johan Bos, identify laws that restrict what a sentence can mean, given its structure, and rules for how to map from sentences like "The man gave the girl a rose" to their functor-argument structure - something like "give(man_0, rose_1)" - which models the "who did what to whom?". When such symbolic approaches are integrated with neural foundational models, there will be a much bigger breakthrough than what we are seeing today (mark my words!). Because these tools, for instance Lambda Discourse Representation Theory and friends, permit you to represent how the meaning of "man bites dog" is different from "dog bites man".
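To make the "who did what to whom" idea concrete, here is a toy sketch that pulls a rough functor-argument structure out of a dependency parse. It assumes spaCy and its small English model, and it is nothing like the DRT machinery described above - just an illustration of the kind of structure involved:

    import spacy

    nlp = spacy.load("en_core_web_sm")

    def predicate_arguments(sentence):
        # For each verb, collect its core dependents (subject, objects).
        doc = nlp(sentence)
        structures = []
        for token in doc:
            if token.pos_ == "VERB":
                args = {child.dep_: child.text
                        for child in token.children
                        if child.dep_ in ("nsubj", "dobj", "dative", "iobj")}
                structures.append((token.lemma_, args))
        return structures

    print(predicate_arguments("The man gave the girl a rose"))
    # A typical parse yields something like:
    # [('give', {'nsubj': 'man', 'dative': 'girl', 'dobj': 'rose'})]

The symbolic approaches go much further than this, of course: they assign scoped, logical meaning representations rather than flat triples.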
So whereas today's models SEEM a bit intelligent but are actually only sophisticated statistical parrots, the future will bring something more principled. Then the "hallucinations" of models will stop.
I am glad I am in the field of NLP - it has been getting more exciting every year since 1993, and the best time still lies ahead!
BERT can process 512 tokens. LLAMA and FLAN-UL2 can process 2048 tokens. GPT-4 can process 32768 tokens, and is much better at ignoring irrelevant context.
These general models can be fine-tuned with domain-specific data using a very small number of samples, and have surprisingly good transfer performance (beating classical models). New research like LoRA/PEFT is making things like continual fine-tuning possible. Statistical models also do a much better job at translating sentences to formal structure than the old ways ever did – so I wouldn't necessarily view those fields as disconnected.
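To make that concrete, the parameter-efficient route looks roughly like this - a minimal sketch assuming the Hugging Face transformers and peft libraries; the model name and hyperparameters are placeholders, not recommendations:

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "facebook/opt-350m"  # stand-in for whatever foundation model you use
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA injects small low-rank adapter matrices and trains only those,
    # which is why a modest amount of domain-specific data can go a long way.
    # (peft fills in target_modules for common architectures like OPT.)
    config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of the base model

From there you train the wrapped model on your domain samples with an ordinary training loop or the transformers Trainer; only the adapter weights move.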
I agree with the general sentiment, there are still major issues with the newer generation of models and things aren't fully cracked yet. But the scaling laws are saying there's still a lot of upside, even without new paradigms or architectural improvements.
> Another issue is that for many tasks, not much data is actually available. The big corps like Google/Meta etc. push the big "foundational" models because in the consumer space there is ample data available. But there are very important segments (notably in the professional space - applications for accountants, lawyers, journalists, pharmacologists - all of which I have conducted projects in/for), where training data can be constructed for a lot of money, but it will never reach the size of the set of today's FB likes.
This is a really important point. GPT-x knows nothing about my database schema, let alone the data in that schema; it can't learn it, and it's too big to fit in a prompt.
Until we have AI that can learn on the job it’s like some delusional consultant who thinks they have all the solutions on day 1 and understands nothing about the business.
Rich Socher had this to say about the transition to neural networks: "Chris Manning also, to his credit – when I came to him and I had all these ideas from the computer vision world and neural networks, I still remember that meeting – he said, "Look, I don't know anything about neural networks right now either, but I'm willing to learn, and we'll get through it together." And obviously he knew everything there is about NLP in general." Adapt or die.
"I think that is not merited. Deep neural models have their own set of problems,"
But I think that is the issue... going forward, wouldn't you hire a machine learning specialist rather than an NLP specialist for those problems? As far as industry goes, is there any value in all the syntax/semantics/phonology theory NLP folks command?
I don't think so. In practice, you'll need to hire _both_ the domain expert and the ML specialist. Or maybe even no change at all... you still want the domain expert, because the problems may be fundamentally related to the framing of the task the AI system is trying to solve, not the model architecture or training/fine-tuning.
You definitely see this in the weather space. Despite flashy headlines, AI has really failed to make much of a difference in core weather forecasting, because the specialized statistical systems that combine many numerical weather prediction models are so finely tuned to the generic forecasting problem that there is little room for improvement. And AI practitioners rarely even focus on the actual interesting problems in the field where we suspect there can be huge gains - like convective initiation (predicting where exactly storms will form and their potential phase trajectory, e.g. what is the probability it will go tornadic or produce large hail?). The reality is that meteorologists can refine the prediction task so precisely that you don't need innovative, brand new model architectures. And the crazy brand new pure DL/data-driven models like NVIDIA's FourCastNet or DeepMind's GraphCast have a long way to go to be a practical competitor to traditional NWP and basic post-processing/statistical bias correction.
The late Fred Jelinek, founder and manager of IBM's speech recognition R&D team at TJ Watson Research Center @ Yorktown Heights, created the famous joke "every time I fire a linguist, recognition accuracy goes up" - as someone with a stake in both ML and NLP I would say more credit goes to ML than to NLP for sure.
Will linguistic knowledge not be needed at all? I don't want to speculate about the far-out future, but what I can safely say from industry experience is that at any stage (1996 - now) there was always some extra gain to be had on top of the ((statistical | neural) - only) approach of the day by engineering hybrid solutions that at some level also exploit human-injected linguistic knowledge and human-injected business rules.
Next month at ECIR 2023 in Dublin, I will present a "shoot out" between a BERT model especially pre-trained (for months) and fine-tuned for document summarization of financial meetings (earnings calls) and a one-line POSIX shell script (two cascaded grep commands) written in 3 minutes by yours truly that also extracts a summary - with surprising results...
I see the real story at the end of the day as "AI will take a shitload of jobs from everyone until it screws up royally, and then companies will need to rehire at a higher rate."
We keep committing seppuku with emerging technology, to the point where hubris turns into a large-scale financial or even physical crash, and the smartest of us avoid the knee-jerk rush to over-adopt new and unpredictable tech. The main problem is that there are so many forces pushing us toward adopting whatever is packaged and marketed most heavily, and we need to build in better rollbacks and ways of adopting new tech gradually. Testing is also a quickly dying art, as seen with Twitter... This will be our downfall if not properly reined in.
The people (self proclaimed tech leaders) who we exalt have already shown us that they are impulsive, and driven by greed, vanity, and ego. If we continue to let them act as "golden children", and continue to make them more and more wealthy, there will be no going back to fairness in our world. Tax them fairly, hold them accountable, stop letting them influence politics, stop electing rich people and people with conflicts of interest, and let everyone have a voice and equal opportunity to climb into responsibility based on their incremental successes... We're not doing any of that at all right now -- it greatly worries me.
Technology never affects the economy in isolation. It acts in concert with policy. Broadly speaking, inequality rises when capital is significantly more valuable than labor. The value of either depends on taxes, the education system, technology, and many other factors. We're never going to stop technology. We just have to adjust the other knobs and levers to make its impact positive.
> human translators, medical transcriptionists, court reporters
Yes, they will be all called 'ai data labellers'.
For a long time, "People don't just want jobs, they want good jobs" was the slogan of industries that automated the boring stuff. Now AI is suddenly good at all the jobs people actually want and the only thing it can't do is self-improve. In an AI future, mediocre anything will not exist anymore.
Either you are brilliant enough to be sampling from 'out of distribution', or you're among the other 99 percent of normies who follow the standard "learn -> imitate -> internalize -> practice" cycle. That other 99% is now, and eternally, inferior to an AI.
>In an AI future, mediocre anything will not exist anymore.
Right! Aren't we all mediocre before we're excellent? Isn't every entry level job some version of trying to get past being mediocre? i.e. Isn't a jr developer "mediocre" compared to a senior dev? If AI replaces the jr dev, how will anyone become a senior dev if they never got the chance to gain experience to become less mediocre?
> given our current economic systems, how are these people supposed to eat?
I've said it before and I'll say it again. This right here is the crux of the issue. The only way people get to eat is if we change the economic systems.
Capitalism supercharged by AI will lead to misery for almost everyone, with a few Musks, Bezoses and Thiels being our neofeudal overlords.
The only hope is a complete break in economic systems, towards a techno-utopian socialism. AI could free us from having to do work to survive and usher in a Star Trek-like vision of the future where people are free to pursue their passions for their own sake.
We're at a fork in the road. We need to make sure we take the right path.
It will take massive cooperation. Given how rough it was to make it through the pandemic... how can we hope to come together on something this daunting?
> I mean, does anyone think that things like human translators, medical transcriptionists, court reporters, etc.
Bad examples. Those are instances where you need human beings to provide interpretation of the context surrounding the translation/transcription, and where strict regulatory regimes are in place. Those are likely the last to be automated.
Even in a world of perfect AI, there will be plenty of jobs. Anything involving movement and manipulation of matter will still require humans for the time being. We're not at a point yet where an intelligent AI could simply build you a house without human labor involved.
Many of these jobs are cheap and easy to understand and quick to train in. These aren’t the kind of jobs people probably wanted, but they’ll be there.
When 90% of people are toiling in fields, I'd think there won't be as many complaints, because there would be a lot more equality among that 90%, versus the 50/50 split we have today.
> It's fine to say "great, that can free up people for other thing", but given our current economic systems, how are these people supposed to eat?
Yeah, exactly this. The funny thing about economic upheavals and industrial revolutions is that while people might get reallocated eventually (say, 30 years), that doesn't provide any comfort to the people who are getting upheaved now.
What should happen is a thorough investigation of our assumptions about economics, to see whether they hold true. 20-30 years ago, saying "just get a robot to do it" would've been met with great cynicism, but now it's not that unthinkable. Especially once we apply what we learn to robotics - at that point, doing things at scale is just playing an RTS.
> What can possibly be the benefit of requiring this constraint?
It's not "requiring this constraint". If you have some plausible pathway to get from our current system to some "Star Trek-like nirvana", I'm all ears. Hand-wavy-ness doesn't cut it.
> "Current economic systems" will simply have to yield.
Why? For most of human history there were a few overlords and everyone else was starving half the time. Even look at now. I'm guessing you probably live a decent existence in a decent country, but meanwhile billions of people around the world (who can't compete skills-wise with upper-income countries) barely eke out an existence.
For the world that just lived through the pandemic, do you honestly see systems changing when worldwide cooperation and benevolence is a prerequisite?
> I mean, does anyone think that things like human translators, medical transcriptionists, court reporters, etc. will exist as jobs at all in 10-20 years?
Before mechanical alarm clocks, there were people paid to tap on windows to wake others up.
Isn't this just a specific case of the general risk of devoting your career to pushing the envelope (science/research/business, etc)? You can work 30 years on a problem, finally come out with a solution only to find you're a week late and someone already published/patented exactly what you independently came up with. Then the only thing you have to show for your 30 years of work is everyone calling you a copycat. Weren't there other physicists like a week behind Einstein with relativity? And probably tons of other examples.
I think the panic about AI suddenly taking people's jobs is a bit exaggerated, and even if it does it is impossible to tell what the consequences will be in the long term.
Most likely society will reorganize around it and new professions that don't exist today will be created.
And we haven't even considered the positive impact of AI.
It could, for example, accelerate the process of drug manufacturing and genetic therapies that will considerably increase human lifespans, triggering another form of social reorganization.
So it is really impossible to tell with certainty what the world will look like 50 or 100 years from now.
I am optimistic and would like to think it will be a much better place, not necessarily perfect and just in every aspect, inequality and armed conflicts will probably continue to exist, but overall it will be better than it is today.
There was a lot of fear during the industrial revolution, prompting many intellectuals to have a very grim outlook of the future, particularly when it comes to social issues (yes you, Marx).
But ultimately, if you look at the data compiled decades later, such as GDP and life expectancy around the world, it is undeniable that social and economic changes resulting from the industrial revolution made the world a much better place.
Not a perfect one, but certainly much better than it was (at least for humans).
Yeah – first off, I agree with everything you are saying. I would just add that this is a larger problem just starting to come into focus, which is this: over a long enough period of time, virtually ALL jobs will be eliminated. There was a time when man thought – let's make our lives easier by inventing 'tools', and eventually we will have 'tools' so advanced we'll sit around and discuss philosophy like we're all Greek aristocrats. Or something like that.
Somewhere toward that goal we became overly fixated with money/wealth and the pursuit of endless profits. Meanwhile, companies continue to post record profits while downsizing - gee, I wonder why.
The reality is, we need to start to figure out and move toward a new way to run the world. I always semi-land in a kind of Star Trek: TNG 'creation socialism', or whatever it is they have as a means of structuring a society, where you have a replicator that can create almost anything for you. They also have 'intellect machines' that are often used to build things from just describing what you want, or in other situations to dig into engineering problems, etc. Things that we are now starting to do with our new GPT tech.
Putting all this another way – how are all these corporations going to continue to exist when there are no customers left to buy their products, because there are no jobs left? Basically, money as we know it is going to go away; we won't be a society of limited resources anymore.
And again as a reminder, this isn't something I'm saying is 2-5 or even 50 years away (maybe?). What I'm saying is that set the timeline however you want, it is going to happen, and we need to start to plan for it now. Based on what's happening already, we're likely at the start of a 20 – 50 year transition towards a new form of society entirely.
> Basically, money as we know it is going to go away, we won't be a society of limited resources anymore.
I doubt it. There will probably always be some scarcity. Even if we remove (or effectively remove) scarcity of physical resources or energy, there is still only a limited amount of time. You could base an entire economy on time and you'd still need some kind of a medium of exchange. Even in Star Trek TNG, every schmo on Earth doesn't have access to Federation supercomputing clusters or their own starship.
When I was studying Computational Linguistics I kept running into the unspoken question: given that Google Translate already exists, what is even the point of all of this? We were learning all these ideas about how to model natural language and tag parts of speech using linguistic theory so we could eventually discover that utopian solution that would let us feed two language models into a machine to make it perfectly translate a sentence from one language into another. And here was Google Translate being "good enough" for 80% of all use cases using a "dumb" statistical model that didn't even have a coherent concept of what a language is.
It's been close to two decades and I still wonder if that "pure" approach has any chance of ever turning into something useful. Except now it's not just language but "AI" in general: ChatGPT is not an AGI, it's a model fed with prose that can generate coherent responses for a given input. It doesn't always work out right and it "hallucinates" (i.e. bullshits) more than we'd like but it feels like this is a more economically viable shot at most use cases for AGI than doing it "right" and attempting to create an actual AGI.
We didn't need to teach computers how language works in order to get them to provide adequate translations. Maybe we also don't need to teach them how the world works in order to get them to provide answers about it. But it will always be an 80% solution, because it's an evolutionary dead end: it can't know things; we have only figured out how to trick it into pretending that it does.
I personally think that humans easily apply structure to language that doesn’t really exist. In fact, we restructure our languages daily, as individuals, when communicating verbally and through text. We make up words and shorthands and abbreviations and portmanteaus. But I think the brain simply makes connections between words and things and the structure of speaking those words is interpreted like audio or visuals in our brains — just patterns to be placed.
Really, words, utterances by themselves, carry meaning. Language is just a structure for _us_, so to speak, that we agree on for ease of communication. I think this is why probabilistic models do so well: the ideas we all have are mostly similar, it really is about just mapping from one kind of word to another, or kind of phrase to another.
Feel free to respond, I’m most certainly out of my depth here.
> Computational Linguistics I kept running into the unspoken question
I've done a lot of work in NLP, and the times when computational linguistics has been useful are very rare. The only time I shipped something to production that used it was a classifier for documents that needed to evaluate them on a sentence-by-sentence basis for possible compliance issues. Computational linguistics was useful then because I could rewrite multi-clause sentences into simpler single-clause sentences, which the classifier could get better accuracy on.
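For a flavor of what that rewriting step can look like, here's a rough sketch assuming spaCy and its small English model; a real system needs much more care with subordinate clauses, shared subjects, and so on:

    import spacy

    nlp = spacy.load("en_core_web_sm")

    def split_clauses(sentence):
        # Treat the root verb and any verbs conjoined to it as clause heads,
        # then assign every other token to its nearest clause head.
        doc = nlp(sentence)
        head_ids = [t.i for t in doc
                    if t.pos_ == "VERB" and t.dep_ in ("ROOT", "conj")]
        clauses = {i: [] for i in head_ids}
        for token in doc:
            anc = token
            while anc.i not in head_ids and anc.head.i != anc.i:
                anc = anc.head
            if anc.i in head_ids and token.dep_ != "cc":  # drop the "and"/"but"
                clauses[anc.i].append(token.text)
        return [" ".join(words) for words in clauses.values()]

    print(split_clauses(
        "The vendor stores the data in the EU but transfers backups overseas"))
    # A typical parse gives something like:
    # ['The vendor stores the data in the EU', 'transfers backups overseas']

Each simplified clause then went through the classifier on its own.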
> And here was Google Translate being "good enough" for 80% of all use cases using a "dumb" statistic model that didn't even have a coherent concept of what a language is.
I assume you are aware of the Frederick Jelinek quote "Every time I fire a linguist, the performance of the speech recognizer goes up"?[1]
That was in 1998. It's been pretty clear for a long time that computational linguistics can provide some tools to help us understand language but it is insufficiently reliable to use for unconstrained tasks.
> But it will always be a 80% solution because it's an evolutionary dead end: it can't know things, we have only figured out how to trick it into pretending that it does.
At the margin, these are equivalent (Chinese room, and all). I wonder if humans also learn similarly then retroactively tell themselves they actually do know things instead of just containing experiences encoded in their neurons (and whether that is any different than a neural network encoding trained "knowledge" in its neurons, too). This is the semantics of epistemology, at the end of the day.
Ask a toddler how the world works and you'll get a very similar response. It is entirely likely the 80%-of-human-intelligence barrier is not a "dead end" but merely a temporary limitation until these models are made to hone their understanding and update over time (i.e. get feedback) instead of going for zero-shot perfection. The GPT models incorporating video should start developing this "memory" naturally as they incorporate temporal coherence (time) into the model.
The fact we got this far through brute force is just insanely telling. This is a natural phenomenon we're stumbling upon, not something crafted by humans.
Also - fun fact, the Facebook LLaMA model that fits on a Raspberry Pi and is almost as good as GPT-3? Also basically brute force. They just trained a smaller model for a lot longer. Food for thought.
Google Translate works amazingly well on languages with a similar grammar (or at least, it does on European languages, which I have the experience to judge).
However, translation of more distant languages is pretty terrible. Vietnamese to English is something I use Google Translate for every day and it's a mess. I can usually guess what the intended meaning was, but if you're translating a paragraph or more it won't even translate the same important subject words consistently throughout. Throw in any kind of slang or abbreviations (which Vietnamese people use a lot when messaging each other) and it's completely lost.
> Given that Google Translate already exists, what is even the point of all of this?
Because for the other 20 percent it's plainly -not- good enough. It can't even produce an acceptable business letter in a resource-rich target language, for example. It just gets you "a good chunk of the way there."
And there's no evidence either that (1) throwing exponentially more data at the problem will see matching gains in accuracy, or that (2) this additional data will even be available.
Yeah... Google Translate is still occasionally translating good/item as "baby" on taobao. "Return Defective Baby" was hilarious for a year or two, but that was ~8 years ago IIRC, and now it just stands as a reminder that Google Translate still has a considerable way to go.
Indeed. Google Translate is just barely useful. Whenever I use it to translate to English, what I get is generally poor. It's good enough to understand the gist of what the original text said, but that's about it. Fortunately, most of the time, understanding the gist is enough.
Google translate lets you correct a translation, right?
Did people stop doing this at some point? Maybe after the advent of massively addictive social media, people more often ended up screenshotting it and sharing it for likes instead of correcting it.
I'm sure Google have stats on this but no idea whether they're public.
I learned some of the very basics of computational linguistics since it was related to a side project. I kept wondering why people were spending huge amounts of resources on tagging and labelling corpora of thousands of words, when it seemed to me that, in theory, it should be possible to feed the Wikipedia of a given language into a program and have it spit out some statistically correct rules about words and grammar.
I guess the same intuition led to these new AI technologies...
^ This. I think the more we internalize the fact that we're also basically LLMs, the more we'll realize that there likely isn't some hard barrier beyond which no AI can climb. If you watch the things kids who are learning language say, you'll see the same kinds of slip-ups that reveal they don't yet understand all the words themselves, but nobody thinks that 2-year-olds aren't people or that they will never learn to understand these concepts.
On one hand, considering our current understanding, one could say that that's a good analogy for our brain; on the other, it reminds me of the fact that people in the past used to think their thinking was akin to the workings of a steam engine. A number of expressions in the English language refer to this, such as "cool off".
Well, I didn't say a brain is just an LLM. It's more complicated than that. But what LLMs showed is that it works using the same simple building blocks + specialised parts of the brain: memory, constant relearning, world input, etc.
So in this particular case of LLMs we've managed to optimise our way around having specialised brain structures using a powerful enough math function. Next steps require improvements in how the model is trained, how we can reduce the amount of training data, what additional machinery might be necessary, etc...
But, damn it, this very thing that makes humanity possible - our language - is solved now. Natural language is a solved problem now. That very thing that makes complex societies possible - it's done. This is the fact. And it is crazy.
I don't share any of the excitement that people like you appear to be feeling, and I recoil from the grandiose claims being made by people who, in my opinion, are being fooled by the 21st-century equivalent of a ventriloquist's doll.
(I did feel excitement while following the development of AlphaZero and the Go matches it played, but that was because it was revealing greater depths and beauty in the human-created game of Go. And I maintain some interest in following the development of self-driving, particularly by Tesla.)
With regard to LLMs, I can see how they could be useful - particularly when they work from a constrained corpus, so the user can know what they're drawing from (and thus the limitations of that knowledge base). The example site posted by its maker to HN [1], where you can ask questions against a particular book, is a good one for showing the use of the tool, I think. But it's just a tool, and it's not in any way a breakthrough in our understanding of ourselves, of cognition, or anything like that. I think the people who are making these claims can't distinguish science fiction from actual reality. They are fantasists, and I think they are leading themselves and others into delusion.
Right at the beginning of the current wave (2010-2012) of ML approaches, I did some work on ML systems and NLP, and back then I clearly saw that nothing truly outstanding was happening; we were only starting to figure out what GPUs were capable of.
So all of this was fun: NLP, ML, vintage AI. But nothing we did felt groundbreaking, or like it would solve the fundamental problems of true AGI, or was even close.
Yet, 10 years later, here we are. Language is solved. In most areas I know /something/ about (programming, ML, NLP, compilers) this is huge and makes mountains of knowledge obsolete.
For me AlphaZero was boring. :-) The solution space is vast but the rules are simple. It was only a question of time before somebody put things together there. There was nothing unknowable about it, unlike natural languages, which were always a mystery to me. Even with all the syntax, grammars, linguistic knowledge, NLP... something was lacking.
Interesting to have this contrast in perspectives. For me, the language generated by ChatGPT is flat and boring. No spark of human creativity or originality or flair. And this cheap trick of getting it to write in rhyme or 'in the style of' such and such I find awfully tacky.
I'm not saying AlphaZero was creative either. But because it was operating inside a system that was already beautiful and which had such a vast 'solution space' as you put it, its exploration into greater depths of that space I found intriguing.
I think that's the contrast for me. Machine learning can be useful and even intriguing inside constrained spaces. That's why I liked AlphaZero, working inside a very constrained (but deep) space. And why I also find Tesla's progress with self-driving interesting. It's a constrained task, even though it has a huge range of variables. And again why I find ChatGPT potentially useful in drawing from a constrained corpus but still don't find the language it generates appealing. It comes across as exactly what it is - machine-generated text.
The breakthrough of ChatGPT is not a brilliant literary work per se.
It's how it interprets what people write and provides coherent answers. This was not possible previously.
AlphaZero and the chess algos don't have to break this barrier; they work from a very clear and well-defined input. It was clear that a mixture of machine brute force and smart thinking would eventually beat us at these games. No magic here. The Alpha family of algos are /very/ understandable.
Language, on the contrary, is fundamentally not very well defined. It is flawed, fluid, diverse... not possible to formalize and make properly machine-readable. All the smaller bits (words, syntax, etc.) are easy. But how these things come together - that can only be vaguely described through rigid formal grammars, never fully.
Compare that to how on the lowest level we understand our brain very well. Every neuron is a trivial building brick. It's how super-complex functions of input to output arise from these trivial pieces - that's amazing. Every neural network is unique. Abstractions, layers of knowledge - everything is there. And it's kind of unique for every human so unknowable in the general case...
Your third paragraph describes language pretty well (although I'd quibble with formal grammars only being 'vague' in their coverage - I think they do a pretty good job although I agree they can never be perfect). And I appreciate the achievement of LLMs in being able to take in language prompts and return useful responses. So it's an achievement that is useful certainly in providing a 'natural language' querying and information collating tool. (I agree here with your second paragraph.)
But it remains a tool and a derivative one. You will see people in these recent HN threads making grandiose claims about LLMs 'reasoning' and 'innovating' and 'trying new things' (I replied negatively under a comment just like this in this thread). LLMs can't and will never be able to do these things because, as I've already said, they are completely derivative. They may, by collating information and presenting it to the user, provoke new insights in the human user's mind. But they won't be forming any new insights themselves, because they are machines and machines are not alive, they are not intelligent, and they cannot think or reason (even if a machine model can 'learn').
> LLMs can't and will never be able to do these things because, as I've already said, they are completely derivative.
I agree, they are completely derivative. And so are you and I. We have copied everything we know, either from other humans or from whatever we have learned from our simple senses.
I'm not asking you to bet that LLMs will do any of those things really, I suppose it's not a guarantee that anything will improve to a certain point. But I am cautioning not to bet heavily against it because, after witnessing what this generation of LLM is capable of, I no longer believe there's anything fundamentally different about human brains, so, to me, it's like asking if an x86-64 PC will ever be able to emulate a PS5. Maybe not today, but I don't see any reason why a typical PC in 10 or 15 years would have trouble.
Well... complaining about people online or in the media making grandiose claims is like fighting the wind.
I totally see your point about the inherent "derivativeness" of LLMs. This is true.
But note how "being alive" or "being intelligent" or "being able to think" are hard to define. I'd go for the "duck test" approach: if it is not possible to distinguish a simulation from the original, then it doesn't make sense to draw a line.
Anyways, yes, LLMs are boring. I am just not sure we people are not boring as well.
> a world without religion leads people to moral depravity
This is exactly the case. Religion is a constant in human society across time and space. And one of its main functions is exactly that - to keep people away from moral depravity. Speaking against this function of religion (again, proved across all cultures and all times) only shows profound shallowness.
Personally I'm not much of a fan of Christianity or the other Abrahamic religions. And I think generally its time is coming to an end (with a long tapering off). But I think you can see quite clearly that the move away from Christianity (probably inevitable) over the last century or so has led to moral decline and depravity in the West. If Christianity's time is coming to an end, we will need (and I believe we will generate) a new religion to replace it. And I don't think it will be in the Abrahamic tradition.
No, humans continually develop moral behaviour and codify it in religion (for transmission and social maintenance). Gods and deities aren't strictly necessary. Buddhism started without such but it is interesting that the conceptualisations of Buddhas & Boddhisattvas have become more and more like the gods & deities of other religions (at least in the popular religious traditions of Buddhism).
Well, I agree with you in that humans inevitably develop moral rules and codify them one way or another. Otherwise a society wouldn't be possible.
Religions are one of the traditional ways of motivating these rules... Legal systems are supporting them on an enforcement level.
I like religions as a very deep cultural phenomenon/ideology having all kinds of effects on, ehm, society. I just don't think religions are strictly necessary for a functioning society.
I've just started testing GPT-4 on translation from Japanese to English, but it seems to blow Google Translate out of the water. It was particularly good with a novel excerpt. I encourage people to try it with a variety of languages and texts to see if those results hold up.
No problem. My first language is English, but I worked as a Japanese to English translator for twenty years and have written books in Japanese. My personal website is linked from my user page.
Thanks. Refreshing to see one of these claims made by someone actually qualified to verify them! I might have to play with some Norwegian <-> English translation at some point. Should be even more effective than for Japanese due to the common germanic origins.
I think a huge part is that computational linguistics still chases the idea of a universal language model, which may simply not be possible. I haven't followed the science in general linguistics but something feels off when most of the information ends up being tagged onto nil particles (i.e. parts of speech present neither in utterances nor written language and not affecting intonation or otherwise being detectable except by contrasting the structure with related languages).
In a sense the model is universal. It's just a 100GB (give or take) neural network.
And apparently (or so I heard, I think) feeding transformer models training data of Language A could improve its ability to understand Language B. So maybe there's something truly universal in some sense.
I know only the bare basics of NLP and AI but isn't Google basically just a specialized case of LLM? Translate and Search work on the same principle that all you need to do is regression analysis on a huge pile of human behavior. Early search engines worked so hard to try to understand content and user intent and got thumped by a comparatively naive heuristic and a giant pile of data.
To take it further: if 80% is good enough and you gotta do some work anyways on the output of LLMs, maybe all the extrapolations like "just wait 10 years and most jobs are doomed" are exaggerated. It’s not unlikely that LLMs hit a wall, because they inherently lack any sort of logic and reasoning.
Which of course is a good thing to make sure many people get to keep their jobs.
The PR folks at my current company are in full panic mode on Linkedin, judging from the passive-aggressive tone of their posts (sometimes very nearly begging customers not to use ChatGPT and friends).
They fully understand that LLMs are stealing lunch money from established information retrieval industry players selling overpriced search algorithms. For a long time, my company was deluded about being protected by insurmountable moats. I'm watching our PR folks going through the five stages of grief very loudly and very publicly on social media (particularly noticeable on Linkedin).
Here's a new trend happening these days. Upon releasing new non-fiction books to the general public, authors are simultaneously offering an LLM-based chatbot box where you can ask the book any question.
There is no good reason this should not work everywhere else, in exactly the same way. Take for example a large retailer who has a large internal knowledge base. Train an LLM on that corpus, ask the knowledge base any question. And retail is a key target market of my company.
Needless to say I'm looking for employment elsewhere.
> There is no good reason this should not work everywhere else, in exactly the same way. Take for example a large retailer who has a large internal knowledge base. Train an LLM on that corpus, ask the knowledge base any question.
Since LLM’s can’t scope themselves to be strictly true or accurate, there are indeed good reasons, like liability for false claims and added traditional support burden from incorrect guidance.
Everybody is getting so far ahead of the horse with this stuff, but we’re just not there yet and don’t know for sure how far we’re going to get.
Hmm. Hypothetically if a human on first line help desk gives advice that is so completely bad as to be a crime, are they liable or the company? Because I guess a chat-bot would definitely not be liable.
Correctness isn't one-dimensional. A wrong fast-food order might substitute or leave something out. There's essentially no chance the employee will swap in a random product from some other store.
But in this example the AI could hallucinate a statement attributed to you it actually formed by putting together reddit comments.
I'm interested to hear what these techniques are. Decreasing the generality will help, but I fail to see how that scopes the output. At best that mitigates the errors to an extent.
> Since LLM’s can’t scope themselves to be strictly true or accurate
Bing tries to solve this and succeeds somewhat. It will insert Wikipedia style citations against each of its claims. You can visit them and verify the statement if you want. And I do it often.
No reason why a future DocAI can't link to specific sections in internal documents whenever it answers a question.
> Here's a new trend happening these days. Upon releasing new non-fiction books to the general public, authors are simultaneously offering an LLM-based chatbot box where you can ask the book any question.
I couldn't get "Designing Data-Intensive Applications" to explain to me how to design a graph database from scratch (without using existing graph frameworks or technologies); it only suggested reasons why graph databases are useful and the properties I have to keep in mind while designing one. I want to know how to build one in practice.
Using a prompt like "Tell me how to build a graph database from scratch. Specifically, how to design the data model, implement the data storage layer, and design the query language." only gives a very vague answer. Sometimes it suggests using existing technologies.
One of my initial prompts mentioned graph databases as an example of a scalable system, so I wanted to ask it about the design properties that make it so. I figured that because it was a book about designing systems, it could give me an outline of how a graph database works in practice.
It's pretty annoying how the site erases your prompt once you receive your output. By the time it finishes loading I've half forgotten what my original question was.
Incredible results for my questions. Do these work by finding similar pieces of text from a vector DB and then embedding those similar pieces of text in the prompt? The answers I'm getting seem comprehensive, as if it has considered large amounts of the book's text; I'm curious how this works given the OpenAI token limit. I've heard this is what tools like LangChain can help with, so maybe I should play around with that, as this all seems like a mystery to me.
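If I had to guess, the pattern looks roughly like the following minimal sketch (assuming the openai Python package's pre-1.0 style API and numpy; the chunk list and model names are placeholders I made up):

    import numpy as np
    import openai

    book_chunks = [
        "Chapter 1: reliable systems tolerate hardware and software faults...",
        "Chapter 3: storage engines are log-structured or page-oriented...",
        # ...one entry per passage of the book
    ]

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    chunk_vectors = embed(book_chunks)  # this is the "vector DB" part

    def answer(question, k=3):
        q_vec = embed([question])[0]
        # Cosine similarity against every chunk, then keep the k most similar.
        sims = chunk_vectors @ q_vec / (
            np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q_vec))
        context = "\n\n".join(book_chunks[i] for i in np.argsort(sims)[-k:])
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp["choices"][0]["message"]["content"]

Only the retrieved chunks have to fit in the prompt, which would explain how a whole book stays under the token limit; my understanding is that LangChain mostly automates this splitting, embedding and prompt assembly.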
Genuinely unknown at this time. At some point this will be litigated in court, and if the parties don't end up settling, we'll then have some precedent that can answer your question.
I saw at least two examples of this here on HN. One of the books was about tech entrepreneurship 101, and I remember asking how to launch if you're a sole developer with no legal entity behind the product. I remember the answer being fairly coherent and useful. I don't have the URL handy, I suspect if you search HN for "entrepreneur book" you'll find it.
How did GPS tracking companies survive Google and Google Maps? I think there will probably be many niches to explore even as the big names work hard to compete and eventually commoditize LLMs
I worked in a research capacity in the voice assistant org of a big tech company until very recently. There was a lot of panic when ChatGPT came out, as it became clear that the vast bulk of the org's modeling work and research essentially had no future. I feel bad for some of my colleagues who were really specialized in specific NLP technology niches (e.g. building NLU ontologies) which have been made totally obsolete by these generalized LLMs.
Personally, I'm moving to more of a focus on analytical modeling. There is really nothing interesting about deep learning to me anymore. The reality is that any new useful DL models will be coming out of mega-teams at a few companies, where improving output through detailed understanding of modeling is less cost-effective than simply increasing data quality and scale. It's all very boring to me.
"Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation."
This is a great read. It is accurate to what I felt the last time I trained a CNN -- it's not fun, and I don't get to feel clever. My brain isn't wired to give me a dopamine hit when the training does its job. It's just a, "wait, that's it?"
We will always want to do the discovery ourselves, and I can see why fighting that instinct is a challenge for those in the field.
Isn't the CNN a discovery in itself? Without it, we'd be following the bitter lesson and "leveraging computation" to throw more data / compute at an MLP.
Clearly someone felt that there'd be a better inductive bias and attempted something else, and now CNNs are what's used "in the long run".
I've seen many interpretations of this article and I'm curious as to the mainstream CS reading of it.
One could look at the move from linear models to non-linear models or the use of ConvNets (yes I know ViTs exist, to my knowledge the base layers are still convolution layers) as 'leveraging human knowledge'. Only after those shifts were made did the leveraging of computation help. It would seem to me that the naive reading of that quote only rings true between breakthroughs.
During my master's degree in data science, we had several companies visit our faculty to recruit students. Not a single one was a specialized NLP company, but many of them had NLP projects going on.
Most of those projects were the usual "solution looking for a problem to solve". Even those projects that might have had _some_ utility, would have been way more effective to buy/license a product than to develop an in-house solution. Because really, what's the use of throwing a dozen 25-30 years old with non-specialized knowledge, when there are companies full of guys with PhDs in NLP that devote all their resources to NLP? Yeah, you can pipe together some python, but these kind of products will always be subpar and more expensive long-term than just buying a proper solution from a specialized company.
To me it was pretty clear that those projects were just PR so that C-levels could sell how they were preparing their company for a digital world. Can't say I'm sorry for all the people working on those non-issues, though. From the attitude of recruiters and employees, you'd think they were about to find a cure for cancer. Honestly, I can't wait for GPT and other productivity tools to wreak havoc upon the tech labour market. Some people in tech really need to be taken down a notch or two.
> those projects were just PR so that c-levels could sell how they were preparing their company for a digital world
This is exactly it. The 2017-2019 corporate version of "invest in AI" meant to build an in-house team to do ML experiments on internal data, and then usually evolved a bit to get some "ml-ops" thrown in so they could "deploy" the models they built. I spent some time with a few companies doing this and it always reminded me of "The Cat in the Hat Comes Back", when the cat let all the little cats out of his hat and they went to work on the snow spots... just doing busy work...
Anyway it's a symptom of the hype cycle - AI was the next electricity, but there were no actual products and nothing clear to do with it, just hire a bunch of kids to act like they were in a kaggle competition, or worse a bunch of PhDs to be under-utilized building scikit-learn models.
Now that there are (potentially) products coming along that at least bypass the low-level layer of ML, having an internal team makes no sense. Maybe the most logical thing that will happen is the pendulum will swing too far, and this bubble will consist more of businessy types using chatGPT without remotely understanding it or realizing it's just a computer program.
>The 2017-2019 corporate version of "invest in AI" meant to build an in-house team to do ML experiments on internal data, and then usually evolved a bit to get some "ml-ops" thrown in so they could "deploy" the models they built.
You nailed it, although very few models ever actually got deployed to prod at Fortune 500 non-tech companies, and the few that did delivered little value. I'm a consultant, and most internal AI/ML/DS teams I interacted with were just running experiments on internal data, as you said; the results would get pasted into PowerPoint, a narrative created, and then presented to executives, who did little or nothing with the "insights". It reminded me of the "Big Data" boom a few years earlier, where every company created a Big Data team that promptly stood up a Hadoop cluster on prem, ingested every log file they could find, and then... did nothing with it.
Disagree. I was on one of these R&D/prototyping teams running ML experiments and you're right, it was the company wanting to present itself as future-leaning, ready to adapt, and I would say that at this point it was a good move to have employees who understand where the tech is going.
Companies with internal teams that are able to implement open source models are in a much better negotiating position for the B2B contracts they're looking at for integrating GPT into their workflows: they won't need GPT as much if they can fall back on their own models, and they will be better able to sit down with the sales engineers and call bullshit when they're being sold snake oil.
You tend to oversimplify the GPTs - they don't just work all the time. You've got to test how well they work, then select the best prompt and demonstrations, then update your prompt as new data comes along. There is plenty of work parsing various inputs into a format the model can understand and then parsing its outputs, especially for information extraction.
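As a toy illustration of that demonstrations-plus-output-parsing work - a sketch assuming the openai package's pre-1.0 style API, with a made-up extraction schema and examples:

    import json
    import openai

    DEMONSTRATIONS = '''
    Text: "Acme Corp hired Jane Doe as CFO in March."
    JSON: {"company": "Acme Corp", "person": "Jane Doe", "role": "CFO"}

    Text: "Globex appointed John Smith to lead engineering."
    JSON: {"company": "Globex", "person": "John Smith", "role": "head of engineering"}
    '''

    def extract(text):
        prompt = (
            "Extract the fields below as JSON. Follow the examples exactly.\n"
            f'{DEMONSTRATIONS}\nText: "{text}"\nJSON:'
        )
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # extraction wants determinism, not creativity
        )
        raw = resp["choices"][0]["message"]["content"]
        try:
            return json.loads(raw)   # the output-parsing step that so often fails
        except json.JSONDecodeError:
            return None              # log it, retry, or fall back to rules

The prompt, the demonstrations and the parsing all have to be maintained as the data drifts, which is exactly the ongoing work described above.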
> Honestly, I can't wait for GPT and other productivity tools to wreak havoc upon the tech labour market. Some people in tech really need to be taken down a notch or two.
You have to remember that when these sorts of things happen, the ones who get "taken down" in ways that actually affect their lives are invariably the ones who already have the least. The ones who "need" that takedown will be just fine, unless they've made incredibly stupid investment decisions.
Personal computing didn't automate too many things that only humans could previously do. The personal computer enabled you to move the data haystack from paper to digital, but you still had to know the right SW incantations and meticulously dig through it to find the needle.
ChatGPT and other ML apps can find you the needle in the data haystack. To look stuff up on a PC you still needed to know where your stuff was, the filesystem layout, and how to formulate queries. You no longer need to learn to "speak machine language"; finally, the machines can understand human language and do what you tell them to do.
Of course, ChatGPT & friends can also say dumb shit or just hallucinate stuff up so you still need a human in the loop to double-check everything.
The question was about the case "when these sorts of things happen". This is a generalization which assumes something analogous happened in the past. We're not talking narrowly about something which could "automate too many things that only humans could previously do", which we didn't have before ChatGPT, at least in such a short time. We're talking about similar, but not the same, things and PC revolution seems like a good example.
So it would be good to see the parallels with the 1980s here, if this generalization holds.
I wonder if this is as bad as everyone thinks. When a new technology arrives which is not completely understood, isn't the right approach to try to find some applications for it? Sure, most will fail, but some valid use cases will likely emerge.
I'm pretty sure almost all technologies at some point were solutions looking for a problem to solve. Examples include the internet, the computer and math.
> I'm pretty sure almost all technologies at some point were solutions looking for a problem to solve. Examples include the internet, the computer and math.
I think the opposite -- nearly all technologies came about as a result of people trying to solve existing real problems. Examples include the internet, the computer and math. (Although I don't think "math" counts as a technology.)
The internet came about from ARPANET, which was solving the problem of network resiliency. Computers automated what used to be a human job ("computer"): doing very large amounts of computation. That automation solved the problem of needing to do more computations than could be done with armies of people.
I think it is. If they actually do end up finding a problem to solve, that would be serendipitous, but I imagine that the vast majority of the time they find themselves in the business of trying to convince the rest of us to buy a thing we don't need. And while the latter may drive the economy to some degree, as I get older I detest it more and more.
This appears to be the computing model of the past 20 years, from what I can tell?
There have been no real advancements since the desktop model of the late 1990s. We might have more animations and applications running in virtual machines for security purposes, but literally nothing new has come out.
Even all the web apps are reimplementation of basic desktop capabilities from the decades before, but slower and with more RAM usage. They might be easier to write (I personally don't think so - RAD apps from the 90s were quicker to write and use) but the actual utility hasn't changed; if anything it's just shoving all of your data from your microcomputer to someone else's microcomputer, and being tracked and losing control of said data whilst you're at it!
And we have easier access to videos on the Internet, I guess??
It all seems to be missing the point of actually having a computational device locally. There is no computation going on. It's all digital paper pushing.
It might not be optimal if we knew the future, but to me it's just a natural, organic process. Organizations, and factions inside of organizations, are slime molds: a new value gradient appears in the environment and we all spread out and crawl in a million different outgrowths, feeling blindly in the general direction of something that feels like a good idea, until one of the tendrils hits actual value and becomes a path of least resistance, and the other ones dry out and die.
R&D is fraught with risk, but some risks are more rewarding than others. These companies don't just sit on useless knowledge. Take Google, which now sits as a "loser" in the current AI "competition"; their projects are far from worthless. Because they've built up expertise, they're now in a very good position to overtake Microsoft on AI, even though they currently seem a bit behind. (And frankly, in many fields they're already far ahead.) So OK, perhaps the behemoth that is Google is a bad example, but I still think the same thing is true for smaller companies. If you just read the news, you would think that a technological race like this only has one winner, but that just isn't true. Even quote-unquote "worthless projects" can help increase understanding and expertise in quite important areas that, while not "worth" anything currently, may still have huge value in the future. The only way to know is to stay in the race.
The computer was always designed to be a computational machine. It didn't just appear and then someone thought "what could I actually use this for?"
Also the Internet came out of DARPA which was a method of sharing data between geographically remote military facilities. It wasn't like they wired up devices and thought "what could we use this for?".
The GP's point is that the technologies you've mentioned solved real problems before they were adapted for different use cases. They didn't make Darpanet and then think "man, if only there was some use for this" until the Internet came along. They designed it to send signals between distant nodes while being resilient to individual nodes being nuked.
Only after DARPAnet solved that problem did it get adapted to other problems (e.g., how do I send cat pictures to people?).
AI was developed with solid use cases - mostly image recognition or other classification problems. Remember when you couldn't select text in an image file and paste it into a document? Well, that's a solved problem. The fact that AI has been overhyped and businesses struggle to find uses for it doesn't diminish the cases where it does work well.
Fair. Maybe let's look at it a different way. The computer is a solution to N known problems and M unknown problems at the time of creation. I'd say the N/M ratio is vanishingly small.
Because product execution at SO many places sucks. LLMs won’t help with that, either. They’ll just help people market their crappy products more cheaply. Woe to the marketers, however.
> Honestly, I can't wait for GPT and other productivity tools to wreak havoc upon the tech labour market. Some people in tech really need to be taken down a notch or two.
Sure. But recruiting scheduling coordinators do not. Those people would better serve society stringing up new HVDC lines, which the current model does not incentivize.
It's evidence of resentment, but not of well-reasoned discourse against something the tech industry is doing. Characterizations like this anthropomorphize a group into a single entity that is easier to hate and to assign intentions to. It's not constructive to any conversation that moves a discussion forward. A person who is mad at "tech bros" is likely more upset about systemic forces that they want to blame on a target. It's logically equivalent to making sweeping statements blaming immigrants for suppressed wages.
Comparing affluent ivory-tower digital landlords to vulnerable people being blamed for things outside their control is definitely one of the decisions of all time. It also seems like a lot of exercise just to feel justified in discarding a large group of opinions.
People start generalizing about groups like this when they've stopped caring about negative policy consequences which affect those groups. Politicians who blame wage stagnation on immigrants do not expect to have those immigrants who gain citizenship vote for them. Why do you think people might have stopped caring what happens to the group designated "tech bros"?
AI or technology won't reduce bullshit jobs. To the contrary, they might increase bullshit jobs, because there would be more resources to allocate for those jobs.
Yes, yes, yes. My wife (who teaches humanities) keeps asking me if AI is going to take all the jobs. I keep saying society has already had to deal with the fact that not that many people are required to build the things and grow the food, and we adapted by inventing all kinds of vaguely bullshit jobs, and jobs to provide services to the people who get their money from bullshit jobs, and all of those jobs spread the wealth around. There will be more and more of those (and, to be clear, many more people who aren't in a position to get one.)
Still better to train lots of electricians and refrigerant techs and solar installers and all the other workers that the energy transition will need, of course.
Also, I have yet to read Graeber's book ("Bullshit Jobs: A Theory").
Society definitely needs those, but the incentives of the system most societies live under do not align to those needs. We are 100% into a society of wants, not needs, and the rewards are for those who sell stuff for these wants. Our needs went into the "cost center" of society's calculation, not an investment, and so it's been a race to the bottom for those professions.
While adtech, crypto and other bullshit gets massive funding because it can turn a profit.
The incentives to have a good society don't align with the incentives of financial capitalism.
Counterpoint: if one doesn't have their own baseline model, how does one know the vendor is providing value?
Yeah having a whole big team create the internal baseline is not cost effective, but having at least one or two people work on something to actually know the vendor is worth their cost is important.
As one of the comments on the Reddit post says - it's not just big tech companies but also entire university teams that feel the goalposts moving miles ahead all of a sudden. Imagine working on your PhD on chatbots since the start of 2022. Your entire PhD topic might be irrelevant already...
> Imagine working on your PhD on chatbots since the start of 2022. Your entire PhD topic might be irrelevant already...
In fairness most PhD topics people work on these days, outside of the select few top research universities in the world, are obsolete before they begin. At least from what my friends in the field tell me.
Anecdata of one: I finished my PhD about 20 years ago in programming language theory. I created something innovative but not revolutionary. Given how slowly industry is catching up on my domain, it will probably take another 20-30 years before something similarly powerful makes it into an industrial programming language.
Counter-anecdata of one: On the other hand, one of the research teams of which I've been a member after my PhD was basically inventing Linux containers (in competition with other teams). Industry caught up pretty quickly on that. Still, academia arrived first.
I developed a new static analysis (a type system, to be precise) to guarantee statically that a concurrent/distributed system could fail gracefully in case of (D)DoS or other causes of resource exhaustion. Other people in that field developed comparable tools to statically guarantee algorithmic space or time complexity of implementations (including the good use of timeouts/resource sandboxes if necessary). Or type system-level segregation between any number of layers of classified/declassified information within a system. Or type systems to guarantee that binary (byte)code produced on a machine could find all its dependencies on another machine. Or type systems to prove that an algorithm was invariant with respect to all race conditions. Or to guarantee that a non-blocking algorithm always progresses. Or to detect deadlocks statically. etc.
All these things have been available in academia for a long time now. Even languages such as Rust or Scala, that offer cutting edge (for the industry) type systems, are mostly based on academic research from the 90s.
For comparison, garbage-collectors were invented in the 60s and were still considered novelties in the industry in the early 2000s.
Perhaps - but normally you'll have a narrowly defined and very specific technical topic/hypothesis that you're working on, and many/most of these aren't going to be closed off by ChatGPT4.
Will this affect the job market (both academic and commercial) for these folks? It's very hard to say. Clearly lots of value will be generated by the new generation of models. There will be a lot of catch-up and utilisation work where people will want to have models in house and with specific features that the hyperscale models don't have (for example, constrained training sets). I'm wondering how many commercial illustrators have had their practices disrupted by Stable Diffusion? Will the same dynamics (whatever they are) apply for the use of LLMs?
> but normally you'll have a narrowly defined and very specific technical topic/hypothesis that you're working on, and many/most of these aren't going to be closed off by ChatGPT4
Pretty hard disagree. Even if your NLP PhD topic is looking at hypotheses on underlying processes about how languages work (and LLMs can't give you this insight), 9 times out of 10 it's with an eye for some sort of "applicability" of this for the future. GPT-4 just cut off the applicability parts of this for huge swaths of NLP research.
I'm not too worried about that. We don't actually understand fully how LLMs function internally, so research on how language works and how to process it is still useful in advancing our understanding. It may not lead to products that can compete with GPT, but PhDs aren't about commercialisation, they're about advancing human knowledge.
All these people don't understand how hireable and desirable they are now. They need to get out of academia and plugged into AI positions at tech companies and startups.
Their value just went up tremendously, even if their PhD thesis got cancelled.
Easily millionaires waiting to happen.
---
edit: Can't respond to child comment due to rate limit, so editing instead.
> That is not how it works at all.
Speak for yourself. I'm hiring folks off 4chan, and they're kicking ass with pytorch and can digest and author papers just fine.
People stopped caring about software engineering and data science degrees in the late 2010's.
People will stop caring about AI/ML PhDs as soon as the challenge to hire talent hits - and it will hit this year.
That is not how it works at all. You won't get hired if you don't have the academic pedigree in the first place. That means a completed Ph.D and good publications in good journals.
Sorry, that's patently untrue. Perhaps it's anecdotal, but I know a host of undergrads who got headhunted into quite elite tech positions, either directly from the uni where I studied or due to private projects they were in. And I even know a few who don't have any uni education at all who got hired into very high technical positions. Usually they were nerdy types who had worked with or had exposure to large systems for whatever reason, or who showed some promise due to previous work, demos or programs they'd made. But sure, most people have to go the edu route. It's the safest way into tech, as you are - at least in principle - fully vetted before you apply. Thinking that you can get a data science or hacker job just by installing Kali is ofc also very untrue.
I think my post is more representative of the truth than yours. I am sure you are telling the truth, but these unique talents you are talking about are not representative of the bulk of people working in research.
The demand for AI/ML will fast outstrip available talent. We'll be pulling students right out of undergrad if they can pass an interview.
I'm hiring folks off Reddit and 4chan that show an ability to futz with PyTorch and read papers.
Also, from your sibling comment:
> Maybe it is also a matter of location. I am in Germany.
Huge factor. US cares about getting work done and little else. Titles are honestly more trouble than they're worth and you sometimes see negative selection for them in software engineering. I suspect this will bleed over into AI/ML in ten years.
Work and getting it done is what matters. If someone has an aptitude for doing a task, it doesn't matter where it came from. If they can get along with your team, do the work, learn on the job and grow, bring them on.
I recommend taking all the introductory courses you can find on both AI and ML. If you like the introductory courses and you feel compelled to move on, then chances are you'll do well in a job regarding AI or ML. There are also several ways into it: through pure mathematics, statistical modelling, or data science, particularly through learning about various algorithms and reading papers, or even through practical application within data warehousing or day-to-day programming. I'd say it helps to have an academic background in IT, statistics or mathematics, though depending on what you're aiming for it doesn't need to be a firm prerequisite. Btw, linguists or anyone interested in natural language ought also to apply!
I guess we are living in two different universes. Any job ad for an ML role or ML-adjacent role says PhD required or PhD preferred. Maybe it is also a matter of location. I am in Germany.
For a plain SWE role a PhD might be a disadvantage here too, but for anything ML related it is mandatory from what I can see.
In my hiring experience as an interviewer, 90% of candidates with PhD or not will actually have mediocre grasp on ML. It is a rare happy day when I get a good candidate. We interview for months for one hire. I got to interview candidates worldwide so I've seen people from many countries.
As someone who hired for this in general we'd use PhD (or maybe a Masters degree) as a filter by HR before I even saw them.
It's true that a PhD doesn't guarantee anything though. I once interviewed a candidate with 2 PhDs who couldn't explain the difference between regression and classification (which was sort of our "ok lets calm your nerves" question).
Yeah, you don't want to be anywhere near a place claiming to hire HS graduates/4chan posters in disciplines requiring advanced knowledge for successful product development, unless, idk, they have demonstrated mathematical talent through well-established channels, e.g. math olympiads or a thesis in some relevant discipline.
Almost all the time, they're shitty startups where bankruptcy is a matter of time, run by overpromising-underdelivering grifter CTOs pursuing a get-rich-quick scheme using whatever is trendy right now (crypto, AI, whatever has the most density on the front page).
Yeah true, I've had to work with too many fresh college grads to not relate to this. People try to take some rare case and generalize when that's really not applicable.
As much as I'd wish to say "you're wrong, people care about intelligent, passionate people who do great work, not PhDs" you're right about much of the work out there.
We've tried many time to work with CSIRO (the NSF of Australia) and it's fallen flat. They love impressive resumes and nothing else. I'm having a chat with their "Director of ML" who's never heard of the words "word2vec" or "pytorch" before. (And I'm a UX designer!)
I think at most corporate firms you'll end up running into more resume stuffers than people who actually know how to use ML tools.
Chris Olah was at OpenAI but is now one of the founders of Anthropic but doesn't have any degree (he joined Google Brain after dropping out of his undergrad degree).
I think they are pretty exceptional though - most people on the ML team do AFAIK. Perhaps on the infrastructure side the backgrounds are different.
We are just building a chatbot to assist students in their studies (students of AI itself, to be precise).
We could not "just use ChatGPT" because the answers have to be correct, or students could take legal action against the university if they repeated a wrong answer the chatbot gave them in an exam.
It's exciting for sure to compare e.g. ChatGPT with other approaches, for a set of controlled benchmark questions such as:
-- snip --
Q: What is the A* algorithm?
A: The A* algorithm is a popular heuristic search algorithm used in pathfinding and graph traversal. It is an extension of Dijkstra's algorithm, but uses a heuristic function to guide the search towards the goal node, which can make it more efficient than Dijkstra's algorithm in certain situations.
The algorithm works by maintaining a priority queue of nodes to be explored, where the priority of each node is the sum of its actual cost from the starting node (g-score) and an estimate of the cost to reach the goal node (h-score), based on a heuristic function. The heuristic function must be admissible, meaning it never overestimates the actual cost to the goal node.
At each step, the algorithm selects the node with the lowest priority from the queue and expands it, generating its neighboring nodes and computing their priority scores. The algorithm then adds these nodes to the queue if they have not been explored before or if a better path to them has been found.
If the goal node is reached, the algorithm terminates and returns the path from the starting node to the goal node. If the queue becomes empty before the goal node is reached, then there is no path from the starting node to the goal node.
Overall, the A* algorithm is a powerful and widely used algorithm that has many applications in fields such as robotics, video games, and logistics.
-- snip --
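For readers who want to sanity-check that answer against code, here is a minimal Python sketch of A* that follows the same description (a priority queue keyed on g + h, with an admissible heuristic). The function and variable names are my own illustration, not part of the benchmark:

    import heapq
    import itertools

    def a_star(start, goal, neighbors, cost, heuristic):
        # heuristic(n, goal) must be admissible (never overestimate) for the path to be optimal.
        counter = itertools.count()          # tie-breaker so the heap never compares nodes directly
        open_heap = [(heuristic(start, goal), next(counter), start)]
        g_score = {start: 0}                 # best known cost from start to each node
        came_from = {}
        while open_heap:
            _, _, current = heapq.heappop(open_heap)
            if current == goal:
                # Walk back through came_from to reconstruct the path.
                path = [current]
                while current in came_from:
                    current = came_from[current]
                    path.append(current)
                return path[::-1]
            for nxt in neighbors(current):
                tentative_g = g_score[current] + cost(current, nxt)
                if tentative_g < g_score.get(nxt, float("inf")):
                    # Found a better path to nxt: record it and (re)queue it.
                    g_score[nxt] = tentative_g
                    came_from[nxt] = current
                    heapq.heappush(open_heap, (tentative_g + heuristic(nxt, goal), next(counter), nxt))
        return None  # queue exhausted before reaching the goal: no path exists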
This is a variation on the common "it has to be perfect" argument. I don't really buy it.
Right now, GPT-4 would earn a top 10% SAT score, implying that it can give very high quality answers on a range of scholarly topics. This output is basically free.
Nobody that's beholden to even mild economic pressures is going to pay for an expensively constructed 100% solution if they can have the 99% solution for free.
This is where it pays off to be researching something completely esoteric rather than something immediately applicable. I mostly scoffed at such research in the past, but now I see the value of it. The guy researching QML algorithms for NLP is not panicking yet, I think.
Maybe this is alarmist, but I don't see how LLMs don't collapse our entire economic system over the next decade or so. This is coming for all of us, not just the NLP experts in big company research groups. Being able to cheaply/instantly perform virtually any task is great until you realize there is now nobody left to buy your product or service because the entire middle class has been put out of work by LLMs. And the service industries that depend on those middle class knowledge workers will be out of work because nobody can afford to purchase their services. I don't see how this doesn't end with guillotines coming out for the owner class and/or terrorism against the companies powering this revolution. I hope I'm wrong.
The owner class gets enlightened and makes sure that the govt taxes them and implements a solid Universal Basic Income
This is part of what the original UBI concept was about.
If this doesn't happen, yes, there will likely be violence until it is fixed.
The other view is that many technologies that were supposed to reduce work actually net added work, because now more sophisticated tasks could be done by the humans, so the net was similar to the highway paradox where more and wider highways breed more traffic by induced demand.
Where would this demand come from? IDK, but at least initially, these LLMs make such massive errors that keeping a lid on the now-hyper-industrial-scale bullshit[0] spewed by these machines will make many more full time jobs.
Seriously, just today I was amazed at how the GPT model tried to not only BS me with completely fabricated author names for an article that I had it summarize, but repeatedly did so even after being prompted more and more specifically about where it could find the actual author (hint: right after the byline starting with the word "Author"). It just kept apologizing and then doubling down on more fantastic lies, as if it were very motivated to hide the truth (I know it's not; that's just how fantastical it was).
[0] Bullshit being defined as speech or writing telling a good tale but with zero regard to the truth or falsehood of any part of it — with no malice but nonetheless a salad of truth and lies.
The answer will either be some socialist rationing policy, or genocide.
Current "work for a living" systems only sustained the population because a human could be the most cost-effective way to get something done. Unless there are still tasks where human labor is the best option (research jobs maybe), this entire economic system will collapse.
> The owner class gets enlightened and makes sure that the govt taxes them and implements a solid Universal Basic Income
But where will they make their billions from if everyone will be living on a basic income? Less money for them will mean less tax money, and less UBI. It will spiral out of control into complete societal collapse if AI doesn't hit a plateau soon.
That is why I said "solid UBI", as in more than merely survival wages, i.e., enough to not merely buy food & shelter, but also to live.
That said, this does need some thinking through multiple stages. On one hand, societies did still work when there was effectively unlimited slave labor, but that may be no more than a rough proxy.
Go to the endpoint assumption that AI and robots can produce everything needed for the population to live, and they are owned by 1% of the population. They made so much money so fast that they bought all the means of production. As of 01-January-2025 everyone is fired. Now what? You're right, no one can buy anything. The remaining populace cannot do anything because the new oligarchs have enough money and power to buy and/or threaten any politician.
The population overall is not going to simply lie down and die. About four days after running out of all the food in their pantries, they'll be revolting in the streets. One plausible result is a lot of carnage: the oligarchs are all killed and the AI is destroyed and outlawed. Or they actually have sufficient command of the military, with no defections, and the military isn't smart enough to figure out that they're next on the starvation list, so the populace is wiped out as it revolts, and the world is left with the 1% of owners and 1% of military. Or there's some kind of balance reached, and the non-AI-owning class of fired people reconstitutes something similar to last year's economy, while the 1% go off to Mars or withdraw into their metaverse-ish thing.
That's just a few random thoughts on rolling the dice among the big forces, but it never plays out like that, so all of these are 99%+ likely to be wrong.
Seems like the only thing we know is that this potentially massively magnifies instability.
There are entire sectors of the economy that LLMs can't touch - hospitality, manufacturing, caregivers, religious sectors, live-action entertainment, etc. Sure some of these will be replaced by robots but there will always be new jobs too.
No, there are not. Everything in the economy is connected and you can't have a vibrant industry without customers. The customers of hospitality/entertainment/healthcare/etc businesses are largely the middle class who will be put out of work by LLMs. So the person who today makes $200/night in tips waiting tables at a nice restaurant.... who will be buying those meals?
I think the material point is that there will be far fewer of those people if everything goes well. You should need fewer people to do any arbitrary task when you are leveraging LLMs.
My own opinion is that people are going to have to become creators. And quickly. You can still create digital products, but you'll need to be a lot more quiet about what you're doing. And you'll need to have a facility for abstract thought to come up with ideas that no one else has yet.
With a few exceptions, using an LLM to perform a useful service is something that almost anyone will be able to do. Therefore these jobs will not pay well.
That seems kind of like saying "using Excel to add numbers is something that almost anyone will be able to do" -- true, but the difficult part is (obviously a vast simplification) determining which numbers to add, under what conditions, and to decide what to do based on the result.
There is huge variation in how well people can prompt LLMs; prompt engineering has many tricks that aren't obvious. That's why there are prompt engineers.
The only reason they studied, went to university, etc. was to avoid doing manual labour. This has been happening for decades, a century even. They'll be depressed.
Just give them the same lecture they like to trot out about supply and demand and how automation simply creates new opportunities. And then have an AI compose a dirge to play on the world's smallest violin for them.
Yeah, also consider the article on Ars today about people testing whether ChatGPT poses any harm. I think the singularity is coming, but much differently than anticipated. So yes, the economy will have to change massively and fewer work hours will be needed. (And the concepts are there: negative income tax, universal basic income...) The change will be more graceful the less everybody is in denial.
As mentioned elsewhere, this is not the first technological disruption in the economy. The change from heavy industry to a service industry didn't go well; hopefully it's possible to take learnings from this and do it right this time.
I'm not completely sure, but considering how the coal industry slowly went out of business with subsidies going on for decades - at least that's how it was in Germany - this doesn't seem the way to go. At least I'm more than surprised that companies and the state were looking into early pensions instead of options to start from zero at a new job.
For other legacy industries, like the German ICE car industry, it was at times a close call; that's when the 35-hour week became widely adopted, during the 90s. Even today it is still an option for new people joining (working on non-legacy products, of course).
I tried translating something from English to German (my native language) yesterday with ChatGPT4 and compared it to Microsoft Translate, Google Translate and DeepL.
My ranking:
1. ChatGPT4 - flawless translation. I was blown away
2. DeepL - very close, but one mistake
3. Google Translate - good translation, some mistakes
4. Microsoft Translate - bad translation, many mistakes
Fellow German here. Funny thing about DeepL: It translates "pathetisch" as "pathetic". For example: "Das war eine pathetische Rede." -> "That was a pathetic speech."
I guess we have to get used to software redefining the meaning of words. It was kind of funny when that happened regarding Google Maps / neighborhood names, but with LLMs it's a different ballgame.
This strikes me as a good example of how nuanced language can be.
A native English speaker probably would only use "pathetic" to mean "emotional" if the emotions were specifically negative. They also would use pathetic to describe someone experiencing non-emotional suffering such as injury or poverty.
Therefore, a native English speaker probably would not use "pathetic" to mean "emotional" in everyday writing. However, I could definitely see someone using it to mean emotional when they were being more poetic. For example, I could see someone calling an essay on the emotional toll of counseling "The Pathetic Class" in order to imply that social workers are a class that society has tasked with confronting negative emotions.
> For anyone who doesn't speak German, pathetisch means with pathos, impassioned.
It's the same in Romanian, and I guess many other languages.
Many of the common words of European languages are derived from Greek and Latin, and where the meaning has diverged in English, now (because of its ubiquity) these false friends are being realigned to mean what they do in English.
I think we should not undervalue DeepL. Not only is its default translation already very good, it also allows users to select different alternatives and remember those preferences, which is not possible, or at least not easy, with GPT.
And as with anything else, with time it will improve, too. LLMs are not the answer to all linguistic problems.
The most amazing thing about ChatGPT translation is that you can even instruct it how to translate. For example, "duzen" and "siezen" in German. I simply tell it which one to use and it does. Absolutely amazing. It's like actually working with a real translator.
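For illustration, this is roughly what such an instruction looks like in code, as a sketch using the openai Python client's chat interface from around this time (the prompt wording and the example sentence are my own, and the exact client API may have changed since):

    import openai

    # Sketch: translate into German and explicitly ask for informal address ("duzen").
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Translate the user's text into German. "
                        "Address the reader informally (duzen), never formally (siezen)."},
            {"role": "user",
             "content": "Thank you for your email. Could you send me the report by Friday?"},
        ],
    )
    print(response["choices"][0]["message"]["content"])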
That's something I'm really sorry about, but those jobs will likely be the first to fade away; there is a whole university faculty dedicated to the profession of language translator where I live.
Another German here, and I have to admit I would have actually translated "pathetisch" as "pathetic" as well. I guess my German vocabulary has suffered quite a bit over the years of living abroad.
I've spent a lot of time comparing ChatGPT4 to DeepL Pro on English to French translation. ChatGPT4 is quite good, but not as consistently good as DeepL.
But the interesting thing IMHO is the nature of the mistakes ChatGPT makes... often they're quite elementary mistakes (e.g., the occasional subject-adjective word order) while it gets the big picture right. Whereas DeepL is sometimes the reverse. ChatGPT also has the advantage of being able to tailor its output to a particular context, e.g., it can tailor legal translations to terms used in Canadian law rather than French law. However, for longer texts, I've noticed that ChatGPT will sometimes omit small parts of the source text from the translation, which is unfortunate.
I have a colleague who says that the Mandarin translations done by ChatGPT are an order of magnitude better than DeepL though, which is interesting.
I'm actually not sure what will become of tools like DeepL. Whatever edge they may have with dataset tuning and other tricks under the hood are likely superseded by a better architecture, which in turn requires a ton of capital to train. By the time they come up with a GPT4 equivalent, we will be using GPT5.
They may panic, but they shouldn't. They can quickly pivot. GPT programs can be used off the shelf, but they can also use custom training. Every large org has a huge internal set of documents, plus a large external set of documents relevant to its work (research articles, media articles, domain-relevant rules and regulations). They can train a GPT bot on their particular codebase. And that is now. Soon (I'd give it at most one year), we'll be able to train GPT bots on videos.
Yeah this thread has been the motivation for me to sign up on the wait list and cost out what it would take to try fine-tuning their older models on our data. There's still plenty of work out there when it comes to building a solution to a problem.
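For anyone costing out the same experiment, a rough sketch of the legacy fine-tuning flow on the older completion models, assuming the openai Python client endpoints available around this time (the file name, model choice and data format are illustrative; check the current docs, since this API has been changing quickly):

    import openai

    # Upload a JSONL file of {"prompt": ..., "completion": ...} pairs, then start a job
    # against one of the older base models (the chat models weren't tunable this way).
    training_file = openai.File.create(
        file=open("our_domain_examples.jsonl", "rb"),
        purpose="fine-tune",
    )
    job = openai.FineTune.create(
        training_file=training_file["id"],
        model="curie",
    )
    print(job["id"])  # poll this id until the fine-tuned model name becomes available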
100%. Anybody with experience in distributed systems, networking, or SRE knows the plumbing can be as challenging as the “big idea”. Training these models is a plumbing job. And that’s actually really hard to pull off.
Not big tech (or PhD-level research), but half the work I did on my side project (subtitles for Chinese learning/OCR) is sort of obsolete now, and most of the rest will be within a year or two. I put months into an NLP pipeline to segment Chinese sentences, classify pinyin and translate words in context, something ChatGPT is great at out of the box. My painstaking heuristic for determining show difficulty, using word frequencies and comparing distributions to children's shows, is now the simple task of giving ChatGPT part of the transcript and asking how difficult it is. Next up, the OCR I did will probably be solved by ChatGPT4. It seems the writing is on the wall: most tasks on standard media (text/images/video) will be "good enough" for non-critical use. The only remaining advantage of bespoke solutions is speed and cost, and that will also be a fleeting advantage.
But it's also extremely exciting; we'll be able to build really great things very easily and focus our efforts elsewhere. Today anyone can throw together a language learning tutor to rival Duolingo. As long as you're in it for solving problems, you shouldn't feel too threatened by whatever part of your current tool set is becoming obsolete.
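For context, the frequency-based difficulty heuristic mentioned above looks roughly like this as a sketch (the scoring rule, cutoff and names are mine, not the project's actual code):

    from collections import Counter

    def difficulty_score(transcript_tokens, reference_counts, top_n=2000):
        # Share of transcript tokens falling outside the reference corpus's
        # (e.g. children's shows) top_n most common words: higher = "harder".
        common = {word for word, _ in reference_counts.most_common(top_n)}
        counts = Counter(transcript_tokens)
        total = sum(counts.values())
        if total == 0:
            return 0.0
        rare = sum(c for word, c in counts.items() if word not in common)
        return rare / total

    # e.g. difficulty_score(segmented_episode_tokens, Counter(childrens_show_tokens))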
Everyone here is saying that people can simply transition easily into startups and other big companies. To a certain extent that's true, but what exactly are they going to do? As technology consolidates into one or two major LLMs, likely only accessible by API, I feel most orgs would be better served by relying heavily on fine-tuning or optimizing those for their purpose. Previous experience with NLP certainly helps with that, although this type of work would not necessarily be as exciting as trying to build the next big thing, which everyone was scrambling for before.
OpenAI could build a state-of-the-art tool with a few hundred developers - to me, that means that money will converge to them and other big orgs rather than the opposite.
With a PhD in the domain, I consider myself pretty good at (a subset of) distributed programming. But these days, when companies hire for distributed programming, they seem to want developers who know a specific set of tools and APIs. I'm more suited to reimplementing those from scratch.
I guess I'm not panicked about my job in the face of AI because objective correctness is required. I dream about the day that OpenAI can write the 100 lines of code that connect the BLE stack, the ADC sensor and the power management code so that my IoT sensor doesn't crash once every 8 days.
I see the AI stuff as very different from, say, the microcomputer revolution. People had LOTS of things they wanted to use computers for, but the computers were simply too expensive.
As soon as microprocessors arrived, people had LOTS of things they were already waiting to apply them to. Factory automation was screaming for computers. Payroll management was screaming for computers.
I don't see that with the current AI stuff. What thing was waiting for NLP/OpenAI to get good enough?
Yes, things like computer games opened up whole new vistas, and maybe AI will do that, but that's a 20 year later thing. What stuff was screaming for AI right now? Maybe transcription?
When I see the search bar on any of my favorite forums suddenly become useful, I'll believe that OpenAI stuff actually works.
Finally, the real problem is that OpenAI needs to cough up what I want but then it needs to cough up the original references to what I want. I normally don't make other humans do that. If I'm asking someone for advice, I've already ascertained that I can trust them and I'm probably going to accept their answers. If it's random conversation and interesting or unusual, I'll mark it, but I'm not going to incorporate it until I verify.
Although, given the current political environment, perhaps I should ask other humans to give me more references.
I'm not at a big tech company, and we don't sell algorithms, but my team does use a lot of NLP stuff in internal algorithms. The only panic I have is trying to keep up and take the time to learn the new stuff. If anything, things like GPT-4 are going to make my team 10x more successful without having to hire an army of PhDs.
The price isn't even that bad; even at the most expensive rate of 6 cents per 1K tokens, it won't cost me much. It's the context size that's amazing. Gone are the days of only being able to pass ~500 tokens into something like BERT.
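A quick back-of-the-envelope check using only the 6-cents-per-1K figure above (the token counts are my own assumptions):

    price_per_1k = 0.06      # the most expensive per-1K-token rate mentioned above
    doc_tokens = 8_000       # assume roughly a 6,000-word source document
    output_tokens = 1_000    # assume about a page of generated output
    cost = (doc_tokens + output_tokens) / 1000 * price_per_1k
    print(f"${cost:.2f}")    # -> $0.54 for the whole call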
I work at a small company, but it’s hard for me to imagine that generative AI will replace predictive AI/ML any time soon.
Smaller models trained supervised/in-domain are simply more efficient and more accurate than unsupervised/out-of-domain. Plus we own and operate the technology much more cheaply.
I don’t doubt that if your were trying to build a competing product to what OpenAI is doing that you’d feel affected, but there’s also a lot of other problems that are not being solved by generative models.
I think the education goal for people has shifted. I teach my kids to be flexible and embrace change. Invest in abilities that transfer well to the various things you could be doing during your life. Be a problem solver.
In the future -- forget about a cosy job you can do for the rest of your life. You no longer have any guarantees, even if you own the business and even if you are a farmer.
What you absolutely don't want is to spend X years at uni learning something, and then 5-10 years into your "career" find out it was obsoleted overnight and you now don't have a plan B.
> What you absolutely don't want is to spend X years at uni learning something, and then 5-10 years into your "career" find out it was obsoleted overnight and you now don't have a plan B.
That seems to run directly opposite to the current trend of admin assistant jobs requiring 2-year specialized admin assistant diplomas. Tech (and I would guess the world of the business MBA) is a unique space where people are learning and changing so quickly, but for a lot of those outside the bubble things seem to be calcifying and requiring more and more training at the expense of the worker.
Really the only safe career in the moderate future is going to be manual labor. There is always a need to send a bunch of humans into the middle of nowhere to dig ditches.
Oh I do believe it. There will always be a market for snobs who will want to pay extra for handmade things vs AI-generated. The issue here is that it is all driven by fads and unstable. If you want to make money you will have to be flexible.
Not even experts in the domain could see themselves being replaced and pivot in time. What hope does an ordinary person have in preparing for what’s coming? Telling people to retrain will not be an acceptable answer because no one can predict which skills will be safe from AI in 5 years.
Fascinating -- I think the comments on the HN post are almost as good.
I think everyone mostly agrees that AI is coming for a lot of jobs. There's disagreement about how many, how it will impact society and the like.
The pace of technology is not linear, it accelerates. I've never seen something that has so rapidly crossed into the "magical" territory as "nearly every single big LLM/Generative AI thing" seems to. It redefines what was previously laughably impossible ... a decade ago.
We're riding a curve upward that is making it extremely hard to see what's coming next. All of the pontificating, all of the attempts at finding solutions to imagined problems ... I can't see one that doesn't feel like a blindfolded person aiming at what they were told was a dart board with what they were told was a dart. There's really nothing to do but hang on and hope you land where any new opportunities creep up.
Here's my two cents, as I work with NLP in a tech company, mostly with Dialogflow and Rasa. In my current project we are using ChatGPT (and previously GPT-3) to summarize articles, and I see that it can handle FAQ questions really well. One of the most common requirements was to train our bot to handle FAQ-type questions apart from complex conversation stories/flows, but this thing can straight up take the content from an article, summarize it neatly, and send a response back.
We have had some issues and complaints with the API (mostly with GPT-3, as fine-tuning was only open for the base model, and that had trouble with some questions). Also, response time is finicky despite having paid access; it varies from 10 seconds to even a minute (there was some downtime a few days ago, and a few days before that a complete outage).
This fad too shall pass. And the tech will end up where it always does: helping some, changing some but nowhere near as much as the gold rush profiteers would make you believe.
This is not an event that calls for pithy adages. The fruits of ML are not a fad just like personal computing was not a fad. It's a watershed event that cuts across every knowledge worker's domain. If you're not currently using these LLMs it may not be obvious to you but those of us that have tried to apply them to our current fields see huge gains in productivity. Just in my own little slice of knowledge work, I've seen yield increases that have saved me multiple days of work on a single project.
Everyone is going to feel this, most prominently people in the sorts of industries that frequent HN. If you haven't yet, you will or you will be forced to when you discover everyone in your field is out-producing you armed with these tools.
Well, we certainly didn't hit the "app-pocalypse": iPads didn't fully replace desktop computing and phones haven't ended up equalizing wealth inequality. A lot of breathless BS that was spoken around those technologies didn't come to pass - they became tools. Great tools, but just tools that fit into our lives where it made sense.
Same for the internet - things changed, but the breathless predictions that retails stores are dead and everyone will be WFH haven't happened. It became a tool.
So will ChatGPT - I look forward to good autocomplete and a nice bullshitter-in-a-box tool to generate these HN comments. But most jobs aren't improved or more productive by adding a faster bullshit generator. The bullshit generators in suits are already really good at that.
There have certainly been sweeping social changes thanks to smartphones, but arguably not because of smartphones in and of themselves so much as mobile connectivity. In a world where we had invented William Gibson-style cyberdecks, or if netbooks had become wildly popular, that would have happened as well.
Why wasn't Blockchain a watershed moment but chatGPT is.
There are several factors that contribute to why blockchain and chatGPT have had different impacts on their respective fields.
Firstly, blockchain technology has had a significant impact in certain areas, particularly in the financial industry, where it has been used to create new forms of digital currency and facilitate secure and efficient transactions. However, it has not yet achieved the widespread adoption and transformative impact that some had predicted. One reason for this may be that the technology is still relatively new and complex, and many organizations are still trying to figure out how best to implement it.
In contrast, chatGPT is a language model that has been designed to understand and generate natural language text. This technology has been more widely adopted because it has a broad range of potential applications, from customer service chatbots to language translation services. Additionally, language is a fundamental aspect of human communication, which makes the potential impact of language models like chatGPT more readily apparent to people.
Another factor that may have contributed to the different impacts of blockchain and chatGPT is the level of public attention and interest. While blockchain has received significant media coverage, it has not captured the popular imagination in the same way that chatGPT has. ChatGPT has been widely publicized and has even been featured in news articles and talk shows, which has helped to raise awareness and generate interest in the technology.
In conclusion, the differences in impact between blockchain and chatGPT can be attributed to a range of factors, including the complexity and novelty of the technology, the level of public awareness, and the broad range of potential applications.
IMO "public attention and interest" in ChatGPT dissipates at an r-squared rate the further you move out from the tech industry (or the NLP research) bubble.
I'm sure that for the people in the thick of crypto and web3, it would've felt like that was the most important thing on everyone's mind.
If you were an NLP researcher at a university whose past years of experience is facing existential threat due to this rapid innovation causing your area to become obsolete, what would be some good areas to pivot to or refocus on?
Why the hell stay in academia? This is clearly the next technological wave, and you shouldn't sleep on it. Especially when you're so well positioned to take advantage of your experience. You can make $500,000/yr (maybe more with all the new startups and options) and be on the bleeding edge.
If you want to go back to academia later, you can comfortably do so. Most don't, but that doesn't mean it isn't an option.
If you go into industry you’ll be given a chance to deploy these models and rush them into products. You’ll also make good money. If you go into academia (or research, whether it’s in academia or industry) you’ll be given the chance to try to understand what they’re doing. I can see the appeal of making money and rushing products out. But it wouldn’t even begin to compete with my curiosity. Makes me wish I was younger and could start my research career over.
ETA: And though it may take longer, people who understand these models will eventually be in possession of the most valuable skill there is. Perhaps one of the last valuable human skills, if things go a certain direction.
Getting your hands dirty is the best way to understand how something works. Think about all the useless SE and PL work that gets done by folks who never programmed for a living, and how often faculty members in those fields with 10 yoe in industry spend their first few years back in academia just slamming ball after ball way out of the park.
More importantly, $500K gross is $300K net. Times 5 is $1.5M, or times 10 is $3M. That's pretty good "fuck you" money. On top of which, some industry street cred allows new faculty to opt out of a lot of the ridiculous BS that happens in academia. Seen this time and again.
I think the easiest and best path for a fresh NLP PhD grad right now is to find the highest-paying industry position, stick it out 5-10 years, then return as a professor of practice and tear it up pre-tenure (or just say f u to the tenure track, because who needs tenure when you've got a flush brokerage account?).
Plot twist: as these models increase in function, complexity and size, behaviors given activations will be as inscrutable to us as our behaviors are given gene and neuron activations.
We can't isolate individual neurons in a functioning brain or train custom models ("probes") inside of a living human brain that let us see what they're feeling on specific inputs. The scope to understand how these models work is incredible: the more intelligent they get, the more we can learn about how intelligence works.
The danger is that the opportunity academia is giving you is something more like "you’ll be given the chance to try to understand what they were doing 5 years ago".
$500,000 is not a lot after all the inflation we had.
$100,000 in 1970 is worth almost $800,000 today.
Yes, downvote me all you want. But if you're an NLP expert thinking of working for a company that will make billions off your work, you can and should demand millions at least.
Where is some evidence that NLP is 'solved'? What does it even mean? OpenAI itself acknowledges the fundamental limitations of ChatGPT and the method of training it, but apparently everybody is happily sweeping them under the rug:
"ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows." (from https://openai.com/blog/chatgpt )
Certainly ChatGPT/GPT-4 are impressive accomplishments, and it doesn't mean they won't be useful, but we were pretty sure in the past that we had "solved" AI or that we were just about to crack it, just give it a few years... except there's always a new rabbit hole to fall into waiting for you.
Natural language processing/understanding/generation is solved (at least in English).
LLMs produce perfectly fluent output and can understand natural language input as well as any human.
However, knowledge representation is not solved. We still don't know how to interface a perfect LLM to other systems in the same way a human does things like looking up facts we aren't confident of or using a calculator to do math we can't do in our head.
These are very significant problems and super important. But they are more adjacent to NLP, in the same way that something like Text-to-SQL [1] isn't a pure NLP task.
Is the goal of NLP for the model to actually understand the language it is processing? By understand I mean having the ability to relate the language to the real world and reason about it the same way a human would. To me, that goes far beyond NLP into true AI territory where the "model" is at the least conscious of its environment and possesses a true memory of past experiences. Maybe it would not be consciously aware of its self but it would be damn close.
I think LLMs have essentially solved the natural language processing problem but they have not solved reasoning or logical abilities including mathematics.
LLMs have (maybe/probably) solved the language modeling problem, sure. That’s hardly NLP, right? NLG is more than “producing text with no semantics” and both NLG and NLU are only part of NLP.
ChatGPT cannot even reason reliably on what it knows and doesn’t know… it’s the library of Babel, but every book is written in excellent English.
Knowledge representation is a separate problem. NLP gives us some insights into what works here, but the multi-modal aspects of things like GPT4 show there is a lot more to knowledge presentation than just NLP.
It'd be great if GPT could provide its sources for the text it generated.
I've been asking it about lyrics from songs that I know of, but where I can't find the original artist listed. I was hoping ChatGPT had consumed a stack of lyrics and I could just ask it, "What song has this chorus or one similar to X...?" It didn't work. Instead it firmly stated the wrong answer. And when I gave it time ranges it just noped out of there.
I think If I could ask it a question and it could go, I've used these 20-100 sources directly to synthesize this information, it'd be very helpful.
To answer the question above, these systems cannot provide sources because they don’t work that way. Their source for everything is, basically, everything. They are trained on a huge corpus of text data and every output depends on that entire training.
They have no way to distinguish or differentiate which piece of the training data was the “actual” or “true” source of what they generated. It’s like the old questions “which drop caused the flood” or “which pebble caused the landslide”.
> Their source for everything is, basically, everything. They are trained on a huge corpus of text data and every output depends on that entire training.
Bing chat is explicitly taking in extra data. It's a distinctly different setup from chatgpt.
Even if that were true, LLMs don't give any kind of "handles" on the semantics. You just get what you get and have to hope it is tuned for your domain. This is 100% fine for generic consumer-facing services where the training data is representative, but for specialized and jargon-filled domains where there has to be a very opinionated interpretation of words, classical NLU is really the only ethical choice IMHO.
It does seem like the (misnamed because it’s not open) OpenAI is very far ahead of most other efforts, especially at the edges in areas like instruction training and output filtering.
Playing with Llama 65B gave me a sense for what the median raw effort is probably like. It seems to take a lot of work to fine-tune and harness these systems and get them reliably producing useful output.
I don't think it's possible to build a moat around models at all. The model architectures are public, and there are already distributed group training projects so the compute isn't a barrier. The only moat is data.
Some big tech companies are witnessing a panic inside their entire org because they focus almost entirely on their competitors (except for the business divisions which are monopolies).
This is imo a wake-up call about the value of having "AI teams" embedded in companies.
Bad analogy - if you had an integrated circuit team in your product company building custom CPUs and Intel came out with the 8080 (or whatever the first modern commercial chip was), it would probably be time to disband the org and use the commercial tech.
I would have never guessed that menial jobs would be the hardest ones to automate. I realized that humans have some incredible skills that were vital to our survival, and were honed over billions of years. But we take these skills for granted because most humans are born with these natural abilities, which are actually very impressive in the grand scheme of things. A janitor might not get paid very much relative to other jobs, but it took billions of years of evolution to hone their brain, muscles, balance, hand eye coordination, energy efficiency, information processing, etc. We're actually very impressive creatures!
I wonder if this will be a repeat of what happened with speech recognition. It used to be a specialized field dominated by smaller companies like Nuance.
More recently Google, Microsoft, Apple, etc. decided they wanted to have speech recognition as an internal piece of their platforms.
Google poached lots of Nuance's talent. And then Microsoft bought what remained of the company.
Now speech recognition is a service integrated into the larger tech company's platforms, and also uses their more statistical/ML approaches, rather than being a component created by specialist companies/groups.
(I'm sure I'm grossly simplifying this — just seeing a potential parallel.)
As someone who's a crap NLP practitioner, everything is just fine and dandy.
I've never really had the gear or the skills to put together anything that improves over what I can pull from huggingface.
What I do have, and virtually none of my (not remotely technical) colleagues have, is a clue what to do with all this stuff.
They reckon it's about churning out poems and boilerplate text; the minute I figured out it could give me whatever JSON I could reasonably ask for from a source doc, I was overjoyed.
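To give a sense of what "whatever JSON I could reasonably ask for" means in practice, a rough sketch (the schema, file name and prompt are my own illustration, and the client call assumes the openai chat interface from around this time):

    import json
    import openai

    # Sketch: pull structured fields out of a source document and get JSON back.
    doc = open("meeting_notes.txt").read()
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Extract the requested fields from the document and reply with JSON only, "
                        'shaped like {"attendees": [], "decisions": [], "action_items": []}.'},
            {"role": "user", "content": doc},
        ],
    )
    data = json.loads(response["choices"][0]["message"]["content"])
    print(data["action_items"])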
I see more things I can be doing now, not a risk of being replaced.
I see chatGPT as eliminating an awful lot of drudgery.
For example, back in the 60's my dad was working on his book. The text was typed out double spaced, and he (and others) would make corrections. After a while, my mom would retype the whole thing.
Imagine typing a whole book. Again and again and again. She'd type hour after hour. It's dehumanizing.
And then came word processors. What a magical revolution! You could edit text instead of typing it all over again. I bet few people today realize what a great achievement that was.
All chatgpt does is select the most likely next word out of a corpus of existing text. It is not creative.
We don't need rooms full of typists anymore. Good riddance. I bet we get rid of a bunch of drudgery jobs with chatgpt.
It sounds like you’re giving a lot of human ‘occupations’ a whole lot of undeserved credit. There are stark differences between these two scenarios. Remixing existing ideas in an incredibly abstract way is not drudgery, it’s what a lot of people do, and get enjoyment out of.
Maybe semantics, but... The implication of this post is that ChatGPT is causing the demise of NLP jobs, which, sure, that's true - but I don't think it's fair to classify this as replacing the job of anyone who works on NLP. This is not the same as GPT replacing a human translator, which is a direct replacement of their role - it's not like ChatGPT is going to start researching NLP. It's just a fairly mundane example of a superior technology outpacing an inferior one. The technology of NLP is being obsoleted, not the job of researching NLP (which is just unnecessary, not being replaced with something else). And in fact, NLP researchers are probably some of the most well positioned people to move laterally to working on transformers and LLMs.
My university professor, who specialises in NLP, kinda feels like: what's the point of research in the age of ChatGPT? He says that, for now, it's not possible to scale retrieval easily when using these LLMs, so that's what he's looking into.
Is the entire field of data science (itself maybe a decade old in terms of being a college major?) now obsolete, in terms of being a distinct job field? Are all data science majors now going to be "just" coming up with the proper prompts to get GPT to correctly massage datasets?
If someone builds software that can attend a meeting instead of you, it will become as popular as Zoom. Combine an LLM with voice and video recognition and synthesis, and voila - an immense chunk of wasted time in every corporation is gone.
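A crude sketch of that idea, assuming you have a recording of the meeting (hypothetical meeting.wav) and using stock Hugging Face pipelines; the real thing would need to join the call live and talk back, but the "catch me up afterwards" half already looks roughly like this:

```python
from transformers import pipeline

# Speech-to-text on the meeting recording; chunking handles long audio.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small",
               chunk_length_s=30)
transcript = asr("meeting.wav")["text"]

# Boil the transcript down to the part you actually needed to hear.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer(transcript, max_length=120, min_length=30,
                     truncation=True)[0]["summary_text"]

print(summary)
```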
Is the compute for running an LLM cheap enough to scale at the moment? LLMs seem to be a great generalist solution but could specifically targeted NLP solutions still outperform in terms of speed/cost when you are processing high volumes of inputs?
Somebody in that thread is speculating that the company in question is Grammarly.
I'm not sure but I'm now curious as to what the execs there are thinking, especially now with the recent Microsoft 365 news. Feels like the body blows keep coming.
This mostly makes me wonder about the varying definitions of NLP. Most business folks I’ve worked with in this field don’t even know what question to ask or how to prompt - the implementation details of models like GPT are the easy part.
I look at it the other way. Much of what people use GPT-4 for can be done better (with a little more work) by transformer models that are specialized for the task.
Things that were a struggle 5 years ago are about to be easy.
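To make "specialized" concrete, here's a minimal sketch with an off-the-shelf task-specific model from Hugging Face (the model choice and the example text are just illustrative):

```python
from transformers import pipeline

# A small model fine-tuned specifically for extractive question answering.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = (
    "The contract was signed on 4 March 2022 and runs for 36 months, "
    "after which it renews automatically unless cancelled in writing."
)
result = qa(question="When was the contract signed?", context=context)
print(result["answer"], result["score"])
```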
Definitely true for GPT-3, but after using GPT-4, I am not so sure. I am sure there are cases where a specialized NLP model can beat GPT-4, but I think they might be in the minority now. Whether I tried summarization, data extraction, or code generation, GPT-4 just kinda worked.
We thought AI would take out the mechanical and manual jobs first - the ones that don't need intellectual capabilities, only repetition. Who knew AI would first learn to write poems, novels, and code.
I remember thinking about this when AlphaFold was announced. Did it happen back then? Were there large shifts in companies/universities that were doing folding research?
I’ve been thinking about this. My current theory is that molecular simulation is a much more heterogeneous activity than language modeling. Language is always the same kind of data. Molecular simulations span orders of magnitude in space and time and depending on that, data and even objectives have very different form. AlphaFold is just one small piece in this puzzle and it’s very easy for a research project to incorporate AlphaFold into an existing pipeline and shift its goal.
Sunk cost! This is why so many AI scientists are so skeptical. They don’t use ChatGPT because they spent years developing the skills needed to work with LLMs, and now any kid on the street can get similar results.
So the tech sector is finally experiencing for itself what it has done to other professions for the past few decades, namely eradicating them (rightfully) through innovation?
Well, the same advice applies then:
* embrace it, move on, and retrain for another profession
* learn empathy from the panic and hurt