Hacker News

If you go into industry you’ll be given a chance to deploy these models and rush them into products. You’ll also make good money. If you go into academia (or research, whether it’s in academia or industry) you’ll be given the chance to try to understand what they’re doing. I can see the appeal of making money and rushing products out. But it wouldn’t even begin to compete with my curiosity. Makes me wish I was younger and could start my research career over.

ETA: And though it may take longer, people who understand these models will eventually be in possession of the most valuable skill there is. Perhaps one of the last valuable human skills, if things go a certain direction.



Do both.

Getting your hands dirty is the best way to understand how something works. Think about all the useless SE and PL work that gets done by folks who never programmed for a living, and how often faculty members in those fields with 10 yoe in industry spend their first few years back in academia just slamming ball after ball way out of the park.

More importantly, $500K gross is $300K net. Times 5 is $1.5M, or times 10 is $3M. That's pretty good "fuck you" money. On top of which, some industry street cred allows new faculty to opt out of a lot of the ridiculous BS that happens in academia. Seen this time and again.

I think the easiest and best path for a fresh NLP PhD grad right now is to find the highest-paying industry position, stick it out 5-10 years, then return as a professor of practice and tear it up pre-tenure (or just say f u to the tenure track, because who needs tenure when you've got a flush brokerage account?)
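The math above can be checked back-of-envelope; the tax and comp numbers are just the comment's rough assumptions, not a real projection:

```python
# Back-of-envelope version of the comment's math. All figures are the
# comment's rough assumptions (comp level, effective tax), not advice.
gross = 500_000   # annual gross comp (assumed)
net = 300_000     # take-home after ~40% taxes, per the comment

total = 0
for years in (5, 10):
    total = net * years
    print(f"{years} years -> ${total / 1e6:.1f}M net")
```

This ignores investment returns, comp growth, and cost of living, so the real number after 5-10 years could differ quite a bit in either direction.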


What does "professor of practice and tear it up pre-tenure" mean?


Plot twist: as these models increase in function, complexity and size, behaviors given activations will be as inscrutable to us as our behaviors are given gene and neuron activations.


This is about as likely to happen as someone fully understanding how the brain works. I don't think you're missing out on much in academia.


We can’t isolate individual neurons in a functioning brain, or train custom models (“probes”) inside a living human brain that let us see what it's doing on specific inputs. The scope to understand how these models work is incredible: the more intelligent they get, the more we can learn about how intelligence works.
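For readers unfamiliar with the probing idea mentioned here: a "probe" is typically a small classifier trained on a model's hidden activations to test whether some property of the input is linearly decodable from them. A minimal self-contained sketch, using synthetic "activations" in place of a real model (all dimensions and numbers below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 16   # activation dimension (made up)
n = 200  # number of examples (made up)

# Synthetic stand-in for hidden-layer activations: the binary label is
# encoded along one fixed direction plus unit Gaussian noise, as if the
# model "represented" that property of the input internally.
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)
acts = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, direction)

# Logistic-regression probe trained with plain gradient descent.
w = np.zeros(d)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))  # predicted probabilities
    w -= 0.5 * (acts.T @ (p - labels) / n)     # gradient step on weights
    b -= 0.5 * np.mean(p - labels)             # gradient step on bias

accuracy = np.mean((acts @ w + b > 0) == labels)
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy suggests the property is linearly readable from the activations; with a real model you'd swap the synthetic `acts` for activations captured at a particular layer, and use held-out data to avoid the probe simply memorizing.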


The danger is that the opportunity academia is giving you is something more like "you’ll be given the chance to try to understand what they were doing 5 years ago".



