
The only way to understand this is by knowing: Meta already has two (!!) AI labs that are already at existential odds with one another, and both are in the process of failing spectacularly.

One (FAIR) is led by Rob Fergus (who? exactly!) because the previous lead quit. Relatively little gossip on that one, other than that top AI labs have their pick of the outgoing talent.

The other (GenAI) is led by Ahmad Al-Dahle (who? exactly!) and mostly comprises director-level rats who jumped off the RL/metaverse ship when it was clear it was going to sink, moving the centre of genAI gravity from Paris, where a lot of Llama 1 was developed, to MPK, where they could secure political and actual capital. They've since been caught with their pants down cheating on objective and subjective public evals, the rest of Llama 4 has been cancelled, and the org lead is in the process of being demoted.

Meta are paying absolute top dollar (exceeding OAI) trying to recruit superstars into GenAI, and they just can't. Basically no one is going to re-board the Titanic and report to Captain Alexandr Wang of all people. It's somewhat telling that they tried to get Koray from GDM and Mira from OAI, and that this was their third pick. Rumoured comp for the top positions is well into the tens of millions. The big names who are joining are likely to stay just long enough for stocks to vest and then boomerang L+1 to an actual frontier lab.



I wouldn't categorize FAIR as failing. Their job is indeed fundamental research, and they are still a leading research lab, especially in perception and vision. See SAM2, DINOv2, V-JEPA-2, etc. The "fair" (hah) comparisons for FAIR are not to DeepMind/OAI/Anthropic, but to other publishing research labs like Google Research and NVIDIA Research, and they are doing great by that metric. It does seem that for whatever reason FAIR resisted productization, unlike DeepMind, which is not necessarily a bad thing if you care about open research culture (see [1]). GenAI was supposed to be the "product lab" but failed for many reasons, including the ones you mentioned. Anyways, Meta does have a reputation problem that they are struggling to solve with $$ alone, but it's somewhat of a category error to deem it FAIR's fault when FAIR is not a product LLM lab. Also, Rob Fergus is a legit researcher; he published regularly with people like Ilya and Pushmeet (VP of DeepMind Research), he just didn't get famous :P.

not affiliated with meta or fair.

[1] https://docs.google.com/document/d/1aEdTE-B6CSPPeUWYD-IgNVQV...


FAIR is failing. DINO and JEPA, at least, are irrelevant in this age. This is why GenAI exists. GenAI took the good people, the money, the resources, and the scope. Zuck entertains ideas until he doesn't. It's clear blue-sky research is going to be pushed even further into the background. For perception reasons you can't fire AI researchers or disband an AI research org, but it's clear which way this is headed.

As for your comparisons, well Google Research doesn’t exist anymore (to all intents and purposes) for similar reasons.


> This is why GenAI exists. GenAI took the good people, the money, the resources and the scope. Zuck entertains ideas until he doesn't. It's clear blue sky research is going to be pushed even further into the background

I agree with most of this; I just think we have different definitions of failure. FAIR has "failed" in the eyes of Meta leadership in that it has not converted into a "frontier AI lab" like DeepMind, and as a result it is being sidelined (much like Google Research, which I admit was a bad example). But the orgs were founded to pursue basic research, and I think it's not a failure of the scientists at FAIR that management failed to properly spin out GenAI. Of course, it sounds like your metric is "AI/LLM competitiveness", and we have no disagreement that FAIR is failing on that front (I just don't think it's the only important or right metric for judging FAIR).

* Normatively, I think it's good to have monopolistic big tech firms funding basic open research, both as a counterbalance to academia and because good basic research requires lots of compute these days. It feels somewhat shortsighted to reallocate all resources to LLM research.

* DINO and JEPA aren't particularly useful for language modeling, but are still important for embodiment/robotics/3D, which indeed seems to be the "next big thing." Also, to their credit, FAIR is still doing interesting and useful work on LLMs: encoders [1], training dynamics [2], and multimodality [3]; they're just not training frontier models.

** GenAI took the money, scope, and resources, but not sure about the good people lol, that seems to be their problem.

[1] https://arxiv.org/abs/2504.01017 [2] https://arxiv.org/abs/2505.24832 [3] https://arxiv.org/abs/2412.14164


This is exactly why Zuck feels he needs a Sam Altman type in charge. They have the labs, the researchers, the GPUs, and unlimited cash to burn. Yet it takes more than all that to drive outcomes. Llama 4 is fine but still a distant 6th or 7th in the AI race. Everyone is too busy playing corporate politics. They need an outsider to come shake things up.


The corporate politics at Meta is the result of Zuck's own decisions. Even in big tech, Meta is (along with Amazon) rather famous for its highly political and backstabby culture.

This is because these two companies have extremely performance-review-oriented cultures, where results need to be proven every quarter or you're grounds for a layoff.

Labs known for being innovative all share the same trait of allowing researchers to go YEARS without high impact results. But both Meta and Scale are known for being grind shops.


Can't upvote this enough. From what I saw at Meta, the idea of a high-performance culture (which I generally don't have an issue with) found its ultimate form and became performance-review culture. Almost every decision made was filtered through "but how will this help me during the next review?". If you ever wonder about some of the moves you see at Meta, perf-review optimization was probably at the root of it.


I may or may not have worked there for 4 years and may or may not be able to confirm that Meta is one of the most poorly run companies I've ever seen.

They are, at best, 25-33% efficient at taking talent+money and turning it into something. Their PSC process creates the wrong incentives, they either ignore or punish the type of behavior you actually want, and talented people either leave (especially after their cliff) or are turned into mediocre performers by Meta's awful culture.

Or so I've heard.


Beyond that, the leaders at Facebook are deeply unlikeable, well beyond the leaders at Google, which is not a low bar. I know more people who reflexively ignore Facebook recruiters than who ignore recruiters from any other company. With this announcement, they have found a way to make that problem even worse.


Interesting that "high-impact" on the one hand, and innovative/successful in the marketplace on the other, should be at odds at Meta. Makes one wonder how they measure impact.


It doesn't matter much how they measure, as long as it's empirical. Once they set the scoring system, all the work that scores well gets done, and the work that resists measurement does not get done.

The obvious example was writing eng docs. It was probably the single most helpful thing you could do with your time, but there was no way to get credit, because you couldn't say exactly how much time your docs might have saved others (the quantifiable impact from your work). That meant we only ever developed a greater and greater unfilled need for docs, while it only ever got riskier and riskier for your career to dive into that work.

People were split on how to handle this. Some said, "do the work that most needs doing and the perf review system will work it out long term." Others said, "just play the perf game to win."

I listened to the first group because I'm what you'd call a "believer": in a tech role, I think my responsibility is primarily to users. I was let go (escorted off campus) after bottoming out a stack ranking during a half in which I did a lot of great work for the company (I think), but utterly failed to get a good score by the rules of the perf game. Specifically, I missed the deadline to land a very large PR, so most of my work for the half failed to meet the key criterion for perf review: it had to be *landed* impact.

I think I took it graciously, but also I will never think of these companies as a home or a family again.


> Some said "do the work that most needs doing and the perf review system will work it out long term." Other said, "just play the perf game to win."

With all the layoffs that have happened in the last few years, I suspect there are fewer and fewer believers.


It's not that he needs a Sam Altman, but that he cannot be Sam Altman (for path-dependent reasons related to his standing in international politics),

not any advantage in virtue (or vices, for that matter)

In national politics, Sam is toe to toe with Elon, which is to say, not great, not terrible.


> In national politics, Sam is toe to toe with Elon, which is to say, not great, not terrible

That's quite the stretch: Elon is now PNG with the MAGA crowd and was already reviled by the left.


PNG may not be a stretch, but isn't the purported health of Sam's own political standing a stretch too? :)


These people should better make a lot of money while they can, because for most of them their careers may be pretty short. The half life of AI technologies is measured in months.


Meta is struggling here for the same reason Microsoft couldn’t stop the talent bleed to Google back in the day.

Even if you’re giving massive cash and stock comp, OpenAI has a lot more upside potential than Meta.


Microsoft didn't pay top dollar back in the day, and still doesn't today. You can't get top talent with 65th-percentile pay.


Microsoft used to have a location advantage, the Seattle area was a lot cheaper than the bay area.

They've long since lost that advantage.


California has a weather advantage over rainy Seattle though.


This is wrong. OpenAI has almost no upside left at these valuations, and there is a >2-year effective cliff on any possibility of liquidity, whereas Meta is paying 7-8 figures liquid.

Meta's problem is that everyone knows it's a dumpster fire, so you will only attract people who care mainly about comp, which is typically not the main motivation for the best people.


It's not a 2 year cliff: it's 6 months before vesting, then 2 years before you can sell.


Effective cliff. What use is vested "equity" (PPUs aren't even equity) that you cannot sell?


It means that you can keep those shares even if you leave. Otherwise the term vesting cliff would be meaningless at any startup where the shares are not liquid.


They are yours. That’s a huge difference between a real cliff and illiquid stock.

If you decide you don’t like it, you take what’s vested after the cliff and leave. Even if you have to wait another year and a half to sell, you still got the gain.


Massive difference. You can vest and move on, even if you don’t have liquidity, which most private companies don’t for employees anyway.


Except you can only sell a prescribed amount at an undetermined time. By the earliest possible sell date you have already made 8 figures liquid at Meta.


Ok, but that’s a trade off anyone who works for a private company makes, and it’s never called an “effective cliff”.


Ok fine, 8 figure opportunity cost if that makes you feel better.


Rob Fergus is one of the creators of FAIR. It makes sense for him to lead it.


Lead it where?


You forgot LeCun, but yeah, that guy's on his own death spiral.


No I didn’t. He is functionally irrelevant at Meta and he doesn’t actually lead anything.


Weird, Meta says he's their Chief AI Scientist [1].

But maybe they're wrong ...

1: https://ai.meta.com/people/396469589677838/yann-lecun/


You really don't understand that what is advertised on a "people" page can be different from what the person actually does?

FYI, if you worked at FB you could pull up his WP and see he does absolutely nothing all day except link to arXiv.


Cool. So what does a chief AI scientist do?


Ideally, lead AI science; in reality, mostly pontificate on social media. One could say that's fitting for Meta, though, right?


So be a mascot.



