> If you didn't follow this, it doesn't matter to you. It does, to those who pay attention and constantly see people switching to AMD and commenting about their reasons.
I'm talking about this purely from a data point of view. I'm referring to the graph, and it has too many holes to be used as a sweeping generalization about the market and Linux users as a whole.
To be clear, I'm not arguing for or against NVIDIA/AMD. I'm just trying to point out the issues with using that graph as definitive evidence of anything.
> Nvidia did nothing to improve their integration with Linux, while AMD did a lot to improve their drivers.
And I applaud AMD for that. They've come a long way. However (and this is personal opinion based on experience), I don't see Wayland as being a particularly massive reason, even though Wayland is what this thread is about anyway. There are still plenty of people using XOrg who are happy with it and don't run into any issues, myself included. That's not to say I don't support Wayland development, and I hope NVIDIA adds support for it in the future, but it's such a 'young' piece of tech that still requires some more maturing.
> AMD hardware is known to be better for GPGPU for a long time already.
Raw power is only a piece of the pie. Some people take issue with that statement, but it's reality.
To the point in your link, Pascal fixes the async compute problem present in Maxwell:
https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-...
> And no one stops open source libraries from not being CUDA only and from using Vulkan and OpenCL for compute needs.
And yet they primarily are CUDA-only. It's a conscious and intentional decision by the developers of those libraries to start with CUDA and stick to it. And the community not adding OpenCL support suggests that AMD hardware isn't even a consideration in that space.
I think we could go back and forth on this, providing counterexamples for each point the other brings up. I merely wanted to offer an enterprise-oriented view of the picture, when most people are looking at individual usage.
The graph makes sense if you consider the context. It's as I said above: an increasing number of users are switching to AMD. Just check any thread in Linux gaming forums on the topic of "what should be my next GPU".
> I don't see Wayland as being a particularly massive reason. <...> don't run into any issues,
It's one of the reasons. There are many issues with Nvidia. Abysmal integration due to the lack of an upstream driver, such as broken vsync, no standard hardware sensors, no PRIME support, and so on and so forth, are all well-known Nvidia problems. To address them, Nvidia should either open up their driver or support Nouveau to begin with. So far they have shown no interest in either.
> It's a conscious and intentional decision by the developers of those libraries to start with CUDA and stick to it.
That's too bad. Avoid libraries which proliferate lock-in. If their developers don't know any better, find those who do and support their efforts instead. It should be in the interest of the actual developers who use said libraries not to be locked into a single hardware vendor.
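To make the point concrete, here is a minimal, purely hypothetical sketch of how a compute library could avoid hard lock-in: probe for available backends at import time and prefer portable APIs over vendor-specific ones. The backend names and module names (`pyopencl`, `pycuda`) are illustrative assumptions, not a reference to any particular library's actual design.

```python
# Hypothetical sketch: prefer portable compute backends over vendor lock-in.
# Module names below are illustrative; a real library would wire these up
# to actual kernel dispatch code.
import importlib.util

# Ordered by preference: portable APIs first, vendor-specific last,
# with a CPU fallback that is always available.
_BACKENDS = [
    ("opencl", "pyopencl"),   # portable across AMD/NVIDIA/Intel
    ("cuda", "pycuda"),       # NVIDIA-only
    ("cpu", None),            # always available
]

def pick_backend():
    """Return the name of the first backend whose module is importable."""
    for name, module in _BACKENDS:
        if module is None or importlib.util.find_spec(module) is not None:
            return name
    return "cpu"

print(pick_backend())
```

With this kind of structure, CUDA remains an optimization path rather than a hard requirement, and users on AMD or Intel hardware still get a working (if slower) code path.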