
It's probably that by optimizing the code they reduced CPU load, which reduced fan noise from heat, finally producing clearer sound thanks to a higher SNR. It makes sense to me.


You mean to tell me that a media player doing audio playback is thrashing memcpy so hard that it's spinning up the fan? And that there is enough low-hanging fruit in standard memcpy implementations to actually make a difference?

The more likely explanation is that these people are simply delusional.


I mean, we know for a fact that audiophiles are delusional, but I remember computers of the time had terribly noisy audio output affected by all kinds of EMI, from fans to hard drives to CPU utilization. The noise floor was a mess on most devices. I even remember listening to the interference from my arriving SMS messages, zap-zap-zap. Desktops also tended to make about the same amount of fan noise as a space heater. Pretty much the only common device of the era with a good SNR and no screaming fans was the iPod/iPhone.


Pretty much any Apple device was surprisingly better than the median PC until the late 2000s or so, but that wasn't because Apple was great so much as because even the big PC vendors couldn't pass up the chance to cut corners. So many vendors would sell you a $3-5k computer and then save $0.50 on the DAC or use a noisy power supply. I used to work with some researchers who collected lab data by connecting sensors to line-in audio, and they did some tests and found that the Mac audio input was roughly as good as the external boards they had been paying a scientific supplier a few grand for, while the comparably priced Dell and HP workstations had a shocking amount of electronic noise. Granted, I had a $50 Belkin USB audio adapter which was even better, because it did a higher sample rate and bit depth than was common at the time.

(There’s a famous exception on ambient noise: the PowerMac G5 was like having a hair dryer running in your office.)


Yeah until I got balanced jack-to-XLRs for the Focusrite to the monitors, the cables were insanely affected by EMI from my GPU. Whenever it did more than move windows around (i.e. starting a game, playing a video) it noised out like crazy.


There was a time when if I went to a room in a MUD with a long text description, scrolling that text would cause the MP3 I was listening to to stutter.

Of course that was 25 years ago on a Pentium 133, I understand things are better now.


You see, when a standard memcpy runs it has a large variance in power draw, i.e. large cycle-to-cycle differences in the effective current through the processing chip. Agner Fog's memcpy is optimized to use the pipelines effectively, which makes the processing run smoothly rather than in discrete steps, with much less variation. That variation causes fluctuations in the primary supply voltage which leak into the analog path, creating noise and harshness.


I can’t tell if you’re joking or not


I guess it's not serious in this case, as streaming audio is not very CPU-intensive, BUT I noticed that when I trained my ML model (basically 100% GPU usage) I could hear a noise in my M40Xs. I guess it's some motherboard/GPU shielding issue.


Most GPUs have some degree of coil whine, often under high or specific load. Even my PS5's GPU has it in the menu of a particular game; I suspect they haven't capped the framerate for that menu.


My what-if joke was going to involve memcpy waking up the dormant AVX unit and causing some power fluctuations in the system. Which is like 99.9999999% a joke.


Surely they'd run this on a system without a fan? I'd say immersed in liquid nitrogen to minimise thermal noise, powered through an ultra-low-noise, DC, battery-only supply, and the whole thing inside an independently earthed Faraday cage.

If not, all the effort on the memcpy is going to waste.


I think this is close to the most credible explanation for why you could ever hear a difference. High CPU load could result in buffer underruns, which can result in audible artifacts (as there aren't enough samples in the buffer to guarantee smooth playback). Although I highly doubt that this would be a problem on 2013 hardware...


Hmm, I'd guess that it might affect something that differs between compilers and the like: the timing of when the different code runs. E.g. one memcpy sometimes runs faster than at other times, leading to distortion in the music. I imagine there are plenty of built-in mechanisms that minimize such distortion by buffering, but if your buffered data is only just keeping up with what's playing, a slightly slower copy could leave the buffer empty for a moment.
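The underrun scenario described above can be sketched as a toy simulation (all the numbers and the tick model here are hypothetical, not a real audio stack): the sound card drains samples at a fixed rate, the app refills the buffer, and if a refill misses its deadline (say, a slow copy under CPU load) the buffer runs dry and you get an audible gap.

```python
# Toy model of a playback buffer (hypothetical sizes/timings, not a real audio API).
BUFFER_CAPACITY = 4096   # samples the driver buffer can hold
DRAIN_PER_TICK = 1024    # samples the card consumes each tick
REFILL_PER_TICK = 1024   # samples the app normally supplies each tick

def simulate(slow_ticks):
    """Return the ticks on which an underrun occurred, given the set of
    ticks where the refill (e.g. a stalled copy) missed its deadline."""
    level = BUFFER_CAPACITY
    underruns = []
    for tick in range(20):
        refill = 0 if tick in slow_ticks else REFILL_PER_TICK
        level = min(level + refill, BUFFER_CAPACITY)
        if level < DRAIN_PER_TICK:
            underruns.append(tick)  # not enough samples: audible dropout
            level = 0
        else:
            level -= DRAIN_PER_TICK
    return underruns

# A copy that always meets the deadline never starves the buffer.
print(simulate(set()))             # → []
# A copy that stalls for several consecutive ticks drains it dry.
print(simulate({5, 6, 7, 8, 9}))   # → [8, 9]
```

The point of the sketch is that buffering absorbs occasional slow copies; dropouts only appear once the stalls last longer than the slack in the buffer.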


And what about the one thousand parallel memcpys happening while you listen to your non-memcpy audio program?

I mean, does one faster memcpy really make all that difference?


I'm not sure if that's what the (almost certainly delusional) linked article was talking about but your explanation holds water... but only for onboard motherboard embedded audio. That does tend to pick up a lot of electrical noise and just be mediocre at best.

However this can all be sidestepped with an affordable external DAC connected with optical or USB.


No it can't! I have an IEEE1394 pro-grade interface AND IT IS STILL BUZZING!! Everything is shielded. I even added ferrite rings to the data cable.


That sounds like some kind of ground loop problem, then, maybe?

There are battery-powered USB DAC/amp combos, primarily designed for portable use. I think FiiO makes/made some. That should solve or at least isolate the problem further to some extent.



