drtgh's comments

As with all Ponzi schemes, OpenAI will eventually go bankrupt, yet it continues to receive money from the unwary. I wonder how the research division at Disney feels about what their bosses have done...

But it should happen sooner due to what is happening with RAM, which sounds quite illegal: anticompetitive hoarding, cornering the market, raising rivals' costs, consumer welfare harm, and so on.


Qualcomm acquired Nuvia in order to bypass the licence fees charged by ARM, which I can only guess ARM first tried to block on good terms, and later on bad terms, without success, as we saw. It would make sense now that ARM is refusing to license them the newer designs.

Qualcomm may have only themselves to blame, as they now have to invest in researching and developing an underdeveloped architecture, quickly, while their competitors (including Chinese ones) take advantage of newer ARM designs (and perhaps even develop their own alternatives peacefully in the meantime).


> Qualcomm acquired Nuvia in order to bypass the licence fees charged by ARM

Both Nuvia and Qualcomm had Arm Architecture licenses that allowed them to develop and sell their own Arm-compatible CPUs.

There was no bypassing of license fees.

If Qualcomm had hired the Nuvia engineers before they developed their core at Nuvia, and they developed exactly the same core while employed at Qualcomm, then there would be no question that everyone was obeying the terms of their licenses.

Arm's claim rests on it being ok for Nuvia to sell chips of their own design, but not to sell the design itself, and not to transfer the design as part of selling the company.


https://www.phoronix.com/news/Alex-Gaynor-Rust-Maintainer

    << Alex Gaynor recently announced he is formally stepping down as one of the maintainers of the Rust for Linux kernel code with the removal patch now queued for merging in Linux 6.19. Alex Gaynor was one of the original developers to experiment with Rust code for Linux kernel modules. He's drifted away from Rust Linux kernel development for a while due to lack of time and is now formally stepping down as a listed co-maintainer of the Rust code. After Wedson Almeida Filho stepped down last year as a Rust co-maintainer, this now leaves Rust For Linux project leader Miguel Ojeda as the sole official maintainer of the code while there are several Rust code reviewers. >>


What are you trying to imply?


We shall C the resolve of the Rust maintainers; they are working on borrowed time...


"Yay, we got Rust in the Kernel! ^_^ Ok, now it's your code! Bye!"


1. He left before Rust stopped being experimental.

2. Maintainers come and go all the time; this is how open source works.

3. The only reason Phoronix reports on this is that anytime they mention Rust it attracts rage-bait clicks; your comment, for example.


Then another maintainer will take care of it? This is how kernel development works....


They have fewer and fewer resources


high-quality educational material


They must release drivers and firmware for all the devices that they no longer support.


Ponzi scheme [1], anticompetitive hoarding [2], cornering the market, raising rivals' costs (RRC), consumer welfare harm, and so on

[1] https://en.wikipedia.org/wiki/Ponzi_scheme

[2] https://en.wikipedia.org/wiki/Hoarding_(economics)

I think the event is big enough to stop them and put them behind bars.

> Samsung

I think that Samsung (and other manufacturers) have been intentionally limiting their production capacity so as not to devalue the prices of their chips (for SSDs at least), so maybe they are an interested party. This, combined with the madness we are seeing, is abuse^2. I think they should also end up behind bars.


That open letter is filled with malice, so I can only guess that it's either trolling or a joke in bad taste (dangerous, because people could take the outdated recommendations at face value and spread them; let's remember the flat-earth thing).


Where do you detect malice? The claims are quite accurate.


Accurate? Let's take the wifi one (other users have already commented on the rest). Open a wifi access point with the name of the restaurant, intercept the DNS requests, and serve your filtered content.

PS: If the text is real and not trolling, the key phrase in it is 'rarely happen', an excuse we could then apply to car seatbelts as well.


Then what? The user presumably sees TLS certificate warnings, since you don't have valid certificates. HSTS would prevent downgrades to plain HTTP and is pretty common on sensitive websites.

Isn't the better advice to avoid clicking through certificate warnings? That applies both on and off open wifi networks.

There is a privacy concern, as DNS queries would leak. Enabling strict DoH helps (which is not the default browser setting).
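
As a sketch of why strict DoH helps, assuming Cloudflare's public DoH JSON endpoint (any DoH resolver would do): the lookup travels inside ordinary TLS, so a hostile access point sees only an HTTPS connection to the resolver, not which names you resolve. In Firefox, strict mode corresponds to network.trr.mode = 3 in about:config.

    import json
    import urllib.request

    def doh_lookup(name: str, rtype: str = "A") -> list[str]:
        # The query rides inside HTTPS, so a rogue access point cannot read
        # or rewrite it the way it can plain UDP port 53 DNS.
        url = f"https://cloudflare-dns.com/dns-query?name={name}&type={rtype}"
        req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
        with urllib.request.urlopen(req) as resp:
            answers = json.load(resp).get("Answer", [])
        return [a["data"] for a in answers]

    print(doh_lookup("example.com"))  # prints the A records for example.com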


I am afraid it is not only about privacy (which they recommend ignoring); there are many vectors to choose from: CA vectors, say TrustCor (2022), e-Tugra (2023), Entrust (2024); packet-injection vectors; the "click here" or "use your login first" vectors you mentioned; plus bugs and misconfigurations.

Those are just the known ones. So I cannot believe that those who wrote the open letter did not even think about such significant events from the past year (I stress: the past year), or about zero-days.

We are talking about people connecting to an unknown, unsupervised network, without knowing what new vulnerabilities will be published next either, and the authors of the open letter know it, because they are hiding behind the excuse of "rarely".


> like CA vectors

This gets complicated because you're not safe on your home or corporate network either when CAs are breached. The incident everyone talks about, DigiNotar (2011), involved stolen CA keys being used to issue certificates that intercepted traffic across several ISPs. If that's the threat you're looking to handle, "avoid public wifi" isn't the right answer. Perhaps you're doing certificate pinning, application-level signing, closed networks, etc.
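
If pinning is the route, here's a minimal sketch (the pinned fingerprint below is a hypothetical placeholder; you'd obtain the real one out of band): the connection is rejected even if some misbehaving CA issued a browser-trusted certificate for the host.

    import hashlib
    import socket
    import ssl

    # Hypothetical pinned value: hex SHA-256 of the server's DER-encoded
    # certificate, obtained out of band ahead of time.
    PINNED_SHA256 = "0" * 64

    def fetch_fingerprint(host: str, port: int = 443) -> str:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der_cert).hexdigest()

    if fetch_fingerprint("example.com") != PINNED_SHA256:
        raise RuntimeError("certificate does not match the pinned fingerprint")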

> Entrust (2024)

I recently wrote a blog post[1] about CA incidents, so I notice this one isn't like the others. Entrust's PKI business was not impacted by the hack and Entrust remains a trusted CA.

> Click here or use your login

Password manager autofill is the solution there, both on public wifi and on a corporate network. Perhaps an ad blocker as well.

> people connecting to an unknown unsupervised network

Aren't most people's home networks "unsupervised"?

[1] https://alexsci.com/blog/ca-trust/


Why do you bring up home networks being "unsupervised" when we are talking about public networks, access points created to hunt people?

Do you notice that your proposed solutions try to fix a problem? The open letter does not propose solutions; it merely denies the problems.

We need to be sincere with people: those "incidents" have happened for a long time and, given the history, will unfortunately keep happening, with bad actors hunting; yesterday the CAs, and tomorrow? So if one connects to an open wifi one may fall into a trap, probably not at home but in an airport or another crowded place with long waits, and even if you do not browse, some app in the background will be trying to.

It took many years to make people even slightly aware, and now they (if the text is real) intend to undo that. But to be sincere I do not mind much; I just perceive that open letter as malicious.


CA compromise feels like an exotic attack, beyond what "everyday people and small businesses" should worry about. There's no solution to CA compromise offered because the intended audience is not getting hacked in that way. If your concern is that high-risk individuals need different advice, I agree, but the letter also makes clear that they are not the focus.

Are there specific, modern examples of CA compromise being used to target low-risk individuals? Is that a common attack vector for low-risk individuals and small businesses?


And how exactly do you plan to forge the SSL certificates to deliver your filtered content?


> intercept the DNS requests and serve your filtered stuff.

How do you get from a malicious DNS response to a browser-validated TLS cert for the requested host?


what filtered stuff?

you mean partial web pages?

most browsers use DNS over HTTPS


Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around the year 2020, from the 12th generation onward, this was suddenly limited to two channels (AMD's consumer processors have always had two channels, with the exception of Threadripper?).

However, this does not make sense: for more than a decade processors have grown mainly by increasing the number of threads, so two channels sounds like a negligent, deliberately imposed bottleneck on memory access if one uses all those threads (say 3D rendering, video post-production, games, and so on).
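
A rough back-of-the-envelope, assuming DDR5-6000 (each channel is 64 bits wide, i.e. moves 8 bytes per transfer):

    2 channels × 8 bytes × 6000 MT/s =  96 GB/s peak
    4 channels × 8 bytes × 6000 MT/s = 192 GB/s peak

Split 96 GB/s across, say, 24 memory-bound threads and each gets roughly 4 GB/s.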

And if one wants four channels to overcome this imposed bottleneck, the mainboards that do have four channels nowadays are not designed for consumer use, so they come with one or two USB connectors and three or four LAN connectors, at prohibitive prices.

We are talking about ten-year-old consumer quad-channel DDR4 machines, widely spread, that remain competent compared with current consumer ones, if not better. It is as if everything had been frozen all these years (and who knows what remains to be seen if the pattern continues).

Now it is rumoured that AMD may opt for four channels for its consumer lines thanks to an increased socket pin count (good news if true).

It is a bad joke what the industry is doing to customers.


> Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around the year 2020, from the 12th generation onward, this was suddenly limited to two channels (AMD's consumer processors have always had two channels, with the exception of Threadripper?).

You need to re-check your sources. When AMD started doing integrated memory controllers in 2003, they had Socket 754 (single channel / 64-bit wide) for low-end consumer CPUs and Socket 940 (dual channel / 128-bit wide) for server and enthusiast desktop CPUs, but less than a year later they introduced Socket 939 (128-bit) and since then their mainstream desktop CPU sockets have all had a 128-bit wide memory interface. When Intel later also moved their memory controller from the motherboard to the CPU, they also used a 128-bit wide memory bus (starting with LGA 1156 in 2009).

There's never been a desktop CPU socket with a memory bus wider than 128 bits that wasn't a high-end/workstation/server counterpart to a mainstream consumer platform that used only a 128-bit wide memory bus. As far as I can tell, the CPU sockets supporting integrated graphics have all used a 128-bit wide memory bus. Pretty much all of the growth of desktop CPU core counts from dual core up to today's 16+ core parts has been working with the same bus width, and increased DRAM bandwidth to feed those extra cores has been entirely from running at higher speeds over the same number of wires.
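
To put rough numbers on it, both endpoints of that history are 128 bits wide:

    dual-channel DDR-400 (circa 2004): 2 × 8 bytes ×  400 MT/s = 6.4 GB/s
    dual-channel DDR5-6000 (today):    2 × 8 bytes × 6000 MT/s =  96 GB/s

Fifteen-fold bandwidth growth, all from signaling speed, none from extra wires.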

What has regressed is that the enthusiast-oriented high-end desktop CPUs derived from server/workstation parts are much more expensive and less frequently updated than they used to be. Intel hasn't done a consumer-branded variant of their workstation CPUs in several generations; they've only been selling those parts under the Xeon branding. AMD's Threadripper line got split into Threadripper and Threadripper PRO, but the non-PRO parts have a higher starting price than early Threadripper generations, and the Zen 3 generation didn't get non-PRO Threadrippers.


At some point the best "enthusiast-oriented HEDT" CPUs will be older-gen Xeon and EPYC parts, competing fairly in price, performance and overall feature set with top-of-the-line consumer setups.


Based on historical trends, that's never going to happen for any workloads where single-thread performance or power efficiency matter. If you're doing something where latency doesn't matter but throughput does, then old server processors with high core counts are often a reasonable option, if you can tolerate them being hot and loud. But once we reached the point where HEDT processors could no longer offer any benefits for gaming, the HEDT market shrank drastically and there isn't much left to distinguish the HEDT customer base from the traditional workstation customers.


I'm not going to disagree outright, but you're going to pay quite a bit for such a combination of single-thread peak performance and high power efficiency. It's not clear why we should be regarding that as our "default" of sorts, given that practical workloads increasingly benefit from good multicore performance. Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts) than CPU.


I said "single-thread performance or power efficiency", not "single-thread performance and power efficiency". Though at the moment, the best single-thread performance does happen to go along with the best power efficiency. Old server CPUs offer neither.

> Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts)

A gaming GPU doesn't need all of the bandwidth available from a single PCIe x16 slot. Mid-range GPUs and lower don't even have x16 connectivity, because it's not worth the die space to put down more than 8 lanes of PHYs for that level of performance. The extra PCIe connectivity on server platforms could only matter for workloads that can effectively use several GPUs. Gaming isn't that kind of workload; attempts to use two GPUs for gaming proved futile and unsustainable.
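
For reference, approximate per-direction PCIe bandwidth (each generation doubles the per-lane rate):

    PCIe 3.0: x16 ≈ 16 GB/s, x8 ≈  8 GB/s
    PCIe 4.0: x16 ≈ 32 GB/s, x8 ≈ 16 GB/s
    PCIe 5.0: x16 ≈ 64 GB/s, x8 ≈ 32 GB/s

Even flagship gaming cards rarely saturate the x16 numbers, which is why x8 suffices further down the stack.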


You have a processor with more than eight threads, at the same bus bandwidth; what do you choose, a dual-channel or a quad-channel processor?

That number of threads will hit a bottleneck when accessing memory through only two channels.

I don't understand why you brought up single-thread performance in your response to the user, given that processors hit a frequency limit of 4 GHz (5 GHz with overclocking) a decade ago. This is why they increased the number of threads instead, but if they reduce the number of memory channels for consumer/desktop...


What is the best single-thread performance possible right now? With overclocked fast RAM.


It is even worse than that.

Have you seen those movies where a small town in the US holds a festival or carnival with heavy floats on top of a car? That is the UI they've implemented in Win11: one interface mounted on top of another, hence the overwhelming slowness. A botched job that only shows disinterest, perhaps with the aim of forcing users to upgrade while investing as few resources in it as possible.

Linux is the antithesis of this, with a long history of slow file explorers/managers that, fortunately, finally stand out today in terms of speed and features. (This is to support the other user's comment about Dolphin (my choice when on Linux) and Nautilus.)


This sounds extremely necessary, but what guarantees that the funds reach such an exclusive destination?

I think that Firefox needs a dedicated non-profit foundation, but I don't think the Mozilla Corporation/Foundation would allow it, so a fork with a new name (a marketing problem) sounds necessary (although splitting forces may not be a good idea?). I wonder if the current Firefox fork communities could join together to create such a non-profit foundation and start from there, growing the developer base under it as the new main tree.


I would use the larger star as the root variable for each system, using the system name as the reference to that star (therefore Sol = the Sun); each planet would then adapt its time dilation relative to this reference, something like Sol:Earth and Sol:Jupiter. But now I realize that even then, the speed of the satellite at origin and destination should also be included, something like Sol:Earth[satellite_speed] and Sol:Jupiter[satellite_speed], and the transit of a satellite between planets would need some kind of progressive sampling, which doesn't sound especially precise. It sounds to me like specialised people tried to design something and got trapped in a loop over which ideas would be less problematic (which would be interesting reading, if my guess is right).

PS: I mean, with something like what I described, we would read the packets arriving at Sol:Earth from Sol:Jupiter and Sol:Mars with that time in the header (with the intention of also being able to tell whether they were sent at radio-wave or light/laser speeds). The only alternative I can think of is a satellite orbiting the larger star, sending some kind of instantaneous subspace signal from an atomic clock, to be read at each planet instantaneously.
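
PPS: a minimal sketch of what such a header field could look like (all names hypothetical, just to make the idea concrete):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FrameTime:
        system: str        # root star the timestamp is referenced to, e.g. "Sol"
        body: str          # body whose frame the sender corrects from, e.g. "Jupiter"
        clock_rate: float  # sender's proper-time rate relative to the root frame
        seconds: float     # proper time elapsed on the sender's own clock

        def to_root_frame(self) -> float:
            # Convert the sender's proper time into the shared root frame,
            # assuming clock_rate was kept up to date during transit.
            return self.seconds / self.clock_rate

    # A packet stamped at Sol:Jupiter[satellite_speed]:
    stamp = FrameTime(system="Sol", body="Jupiter", clock_rate=0.9999999, seconds=1_000_000.0)
    print(stamp.to_root_frame())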

