Hacker News | qoez's comments

> It took them two months to develop a chip for Llama 3.1 8B. In the AI world, where one week is a year, that's super slow. But in the world of custom chips, this is supposed to be insanely fast.

Llama 3.1 is like 2 years old at this point. Taking two months to convert a model that only updates every 2 years is very fast.


It only looks that way because Llama failed. Good models like Qwen are shipping every 6 months.

2 months of design work is fast, but how much time does fabrication, packaging, testing add? And that just gets you chips, whatever products incorporate them also need to be built and tested.

OpenRouter is highly subsidized. This might be cheaper in the long run once these companies shift to taking profits.

But why not cross that bridge when you come to it? By that time you might have much more optimized local infrastructure. Although I do see that someone suffering through the local slowness now is what drives the development of these local options.

I'm predicting a wave of articles in a few months about why clawd is over and was overhyped all along, and the position of not having delved into it in the first place will have been the superior use of your limited time alive.

do you remember “moltbook”?

Is it gone?

Of course if the proponents are right, this approach may fit to skipping coding :-)

you're right, i should draft one now

Use a clawd; it'll have a GitHub repo and a Show HN to go with it in minutes. It's what the cool kids are doing anyhow.

What a new and interesting viewpoint, which has the ability to change as the evidence does!

Openclaw, the actual tool, will be gone in 6 months, but the idea will continue to be iterated on. It does make a lot of sense to remotely control an AI assistant that is connected to your calendar, contacts, email, whatever.

Having said that, this thing is on the hype train, and its usefulness will eventually be placed in the "nice tool once configured" camp.


I can remember people saying "Soon I won't even have to work anymore!" since at least the 90s.

What surprises me is that this obvious inefficiency isn't competed out of the market. I.e., this is clearly a suboptimal use of time, and yet lots of companies do it and don't get outcompeted by the ones that don't.

I think the issue is that everyone's stuck in the same boat: the alternative to using AI and spending time reviewing is writing it yourself, which takes even longer. So even if it's not a net win, it's still better than nothing. Plus, a lot of companies aren't actually measuring the review overhead properly; they see "AI wrote 500 lines in 2 minutes" and call it a productivity win without tracking the 3 hours spent debugging it later. The inefficiency doesn't get competed out because everyone has the same constraints, and most aren't measuring it honestly.

Short-term thinking gets faster, more competitive results than long-term thinking.

OpenAI and Google are too scared of music-industry lawyers to tackle this. Internally they without a doubt have models that would crush these startups overnight if they chose to release them.

Is your claim that music industry lawyers are that much scarier than movie industry lawyers? Because the big labs don't seem to have any problem releasing models that create (possibly infringing) video.

The movie industry is doing well from AI.

Thus far AI has only been used to create fan fiction clips that generate free marketing for legacy IP on TikTok. And the rights holders know that if AI gets good enough to make feature length movies then they'll be able to aggressively use various legal mechanisms to take the videos off major sites and pursue the creators. Long term it could potentially lower internal production costs by getting rid of actors & writers.

Music is very different. The production cost is already zero, and people generating their own Taylor Swift songs is a real competitive threat to Spotify etc.


Just right now: ByteDance to curb AI video app after Disney legal threat

https://www.bbc.com/news/articles/c93wq6xqgy1o


> Is your claim that music industry lawyers are that much scarier than movie industry lawyers?

Not qoez:

You have to balance market opportunities with the risk of reputational damage and litigation risk.

Video will probably make a lot more money than audio, so you are willing to take a bigger risk. Additionally, at least for Google there exists a strong synergy between their video generation models and YouTube, which makes it even more sensible for Google to make video models available to the public despite these risks.


Well, I guess the music industry is a lot more monopolized than video. Plus, there is a lot of video out there that isn't "movies," while there's not a lot of music that isn't... "music".

What about Disney's lawyers? GenAI for images exists ...

Disney is actually quite excited about GenAI [0]

[0] https://openai.com/index/disney-sora-agreement/


I'm not sure it's just fear of lawyers, although that's definitely part of it. Big companies have way more to lose reputationally and legally, so the bar for releasing something is much higher

Great read but damn those are some questionable curve fittings on some very scattered data points

Better than some of the science papers I've tried to parse.

In other words, just another Tuesday.

I find it helps just to force myself to be focused on a task for a few hours. Just the blocked-out attention I spend on it helps refine and discover new problems and angles, etc. I don't think blocking out the time without actually trying to code it (staring at a wall) is as effective.


I love Apple and mainly use one for personal use, but Apple users consistently overrate how fast their machines are. I used to see sentiment like "how will Nvidia ever catch up with Apple's unified silicon approach" a few years ago. But if you just try Nvidia vs. Apple and compare on a per-dollar level, Nvidia is so obviously the winner.


For day-to-day use, my base-spec M1 MacBook Pro is snappier than my i9 desktop with 128 GB of RAM and a 4090.


People always claimed this as a data-leak vector, but I've always been sceptical. Writing style and vocabulary alone are probably shared among far too many people to narrow it down much. (How many people that you know could have written this reply?) The counter-argument is that he had a very specific style in his mail, so maybe this is a special case.


If you have a large enough set to test against and a specific person you are looking for, this is totally doable currently.


Of course it's doable. The question is how reliable the results are.


I wonder if it works on zoomers too. I have noticed a slight mode collapse among this population ;)


Not to mention, I'd argue that most people have a (subtly?) different writing style depending on where they post and who they talk to.


It just needs to find the needles in the haystack. Humans can better verify if they're truly needles.


Not just a test set, but enough of a set to search through and compare against. Several pages of in-depth writing isn't anywhere near sufficient, even when limiting the search space to ~10k people.


This is a well-studied field (stylometry). When combining writing styles, vocabulary, posting times, etc., you absolutely can narrow it down to specific people.

Even when people deliberately try to feign some aspects (e.g. switching writing styles for different pseudonyms), they will almost always slip up and revert to their most comfortable style over time. Which is great, because if they aren't also regularly changing pseudonyms (which are themselves subject to limited stylometry, so pseudonym creation should be somewhat randomized in name, location, etc.), you only need to catch them slipping once to get the whole history of that pseudonym (and potentially others, once that one is confirmed).


Stylometry is okay if you're trying to deanonymize a large enough sample text. A reddit account would be doable. But individual 4chan posts? You barely have enough content within the text limit.


People do change over time; I used to write "ha" after every sentence for some reason.


You know, I had a particularly cringey period in which I put "la" at the end of sentences.


Don't throw the baby out with the bathwater. "Ooh, la" sounds really unnatural.

But on a serious note, what did "la" mean in your context? I've never seen this.


It’s a common thing for speakers of Singaporean English to end sentences with la/leh. But no idea if that’s what’s going on here.


In one use case, it is kind of a verbal exclamation point, but it has more meanings and uses than just that. It likely originates from Hokkien, but it has evolved into its own thing. If you are curious, there are more details here: https://en.wikipedia.org/wiki/Singlish


In Turkish, "la" at the end disrespectfully refers to a male person.


You left off something.


Sure, not denying that. My writing style is fairly different now, in my 40s, than it was in my late teens/early twenties.

But those changes are usually pretty gradual and relatively small. That's why, when attempting to identify someone via writing, you look at several aspects of the writing and not just word choice (grammar, use of specific slang, sentence length, paragraph structure, punctuation, etc.). It is highly unlikely that all aspects of someone's writing change at the same time. Simply removing "ha" is inconsequential to identification if not much else changed.

Additionally, this data is typically combined with other data/patterns (posting times; username themes, length, etc.; writing that displays certain types of expertise; and more) to increase the confidence level of correct identification.
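To make the "several aspects at once" point concrete, here's a toy sketch of stylometric feature extraction and comparison. All function names are my own, it uses only the Python standard library, and real stylometry systems use far richer features and proper statistical models:

```python
import re
from collections import Counter
from math import sqrt

def stylometric_features(text):
    """Extract a few crude stylometric features from a text sample."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    feats = Counter()
    if sentences:
        feats["avg_sentence_len"] = len(words) / len(sentences)
    if words:
        feats["avg_word_len"] = sum(map(len, words)) / len(words)
    # Punctuation habits tend to be fairly stable per author.
    for ch in text:
        if ch in ",;:!?-\"'":
            feats[f"punct_{ch}"] += 1 / max(len(text), 1)
    return feats

def cosine_similarity(a, b):
    """Cosine similarity between two feature Counters (1.0 = identical profile)."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

The idea is that while any single feature (like dropping "ha") is easy to change, an author's whole feature vector shifts slowly, so comparing profiles across many samples still links them.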


> Training and serving frontier AI at scale takes hundreds of thousands of GPUs. xAI’s Colossus cluster reportedly has 200,000 GPUs. OpenAI has plans for millions of them. Competing in this market would require launching hundreds of thousands, if not millions, of satellites into space

No? You'd only need one launch, with lots of GPUs on the ship at the same time.

