Back in 2009, during the CyanogenMod days, Google issued a C&D to the developers to keep them from distributing Google apps alongside the main ROM. IMO it was less about app distribution and more about forcing CyanogenMod to come to the table and work out ground rules for how third-party ROMs would interact with Google more broadly. CyanogenMod (now LineageOS) basically agreed not to step on Google's toes. At the time, that meant not distributing Google's apps inside the ROM; now it means not bypassing OS-level protections like Play Integrity (formerly SafetyNet).
> Any action taken to bypass Play Integrity risks a backlash against all custom OSes, and could cause Google to block them entirely from the Play Store.
So long as the main players follow this advice, Google tends to also ignore smaller players that _are_ working around this via Magisk or other means. It's also possible that this simply becomes non-viable after some time.
It's also worth noting that Google gives third parties a way to register their devices at https://www.google.com/android/uncertified/ . This doesn't grant full SafetyNet/Play Integrity certification, but it's another way Google works with custom ROMs to ensure you have access to the Play Store.
This is an extreme oversimplification in an "Explain like I'm 5" style (terminology might also not be perfectly correct, it's more for illustration of the basic concepts):
Imagine that inside your phone there's your main processor, named Bob. Bob runs all of your apps; he's occasionally stupid and gets hacked, but he means well.
Also inside your phone is another processor named Alice. Bob can't see her, even though he can send messages to her, but Alice can see Bob through a one-way mirror. Alice is also located inside a steel-and-concrete bunker with no entry, no exit, and UV sterilization of every single-page letter coming in or out after examination by an officer. Alice has a special ID card given to her by Google, which she only received after Google was satisfied with the security of the bunker.
Google sends super-sensitive work for Bob to do. Bob isn't the most trustworthy of fellows, so Google also sends a message asking Alice to report back on whether Bob is doing what he's supposed to. Alice sends her report back to Google with her signature on it. Google trusts that signature, because it previously inspected Alice and the security of her bunker, and knows that as long as Alice is safe and Bob can't harm her, Bob is doing the work intended.
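To make the analogy concrete, here's a toy sketch of that attestation flow in Python. All the names are made up for illustration, and I'm using a shared-secret HMAC as a stand-in for the asymmetric key pair a real attestation chip holds (the real thing never reveals its private key at all):

```python
import hashlib
import hmac
import json

# Alice's device key, provisioned at the factory. A real chip holds an
# asymmetric keypair whose private half never leaves the hardware; a
# shared HMAC secret is just a toy stand-in for this sketch.
DEVICE_KEY = b"provisioned-at-factory"

def alice_report(bootloader_locked: bool, os_approved: bool) -> dict:
    """Alice observes Bob through the mirror and signs what she saw."""
    report = {"bootloader_locked": bootloader_locked,
              "os_approved": os_approved}
    payload = json.dumps(report, sort_keys=True).encode()
    report["signature"] = hmac.new(DEVICE_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return report

def google_verify(report: dict) -> bool:
    """Google checks Alice's signature; it trusts Alice, not Bob."""
    report = dict(report)
    signature = report.pop("signature")
    payload = json.dumps(report, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected) and report["os_approved"]

honest = alice_report(bootloader_locked=True, os_approved=True)
print(google_verify(honest))   # True

# Bob tries to claim an approved OS without Alice's key:
forged = {"bootloader_locked": True, "os_approved": True,
          "signature": "deadbeef"}
print(google_verify(forged))   # False
```

The point of the last two lines is the whole trick: Bob can say anything he likes in a report, but without Alice's key he can't produce a signature Google will accept.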
Now, you might ask: why not just make Bob stronger? Well, Google tried that, but between people wanting to sideload apps, the needs of developers, and security bugs, that's all extremely difficult. Having Alice do nothing but verify and sign in a super-secure bunker while accepting various requests for oversight - that's easy, auditable, much easier to secure, and rarely needs to change.
Where it gets even stronger is what I would call, for lack of a better word, "progressive lockdown." For example, when Bob is just starting up, Alice can check that he started from an approved OS (Secure Boot). Once that's happened, the Secure OS might hand Alice a piece of code that is never allowed to change for as long as the device stays booted (Secure Monitor / TEE). Alice doesn't have to run that code herself; she just panics if it ever changes. By doing so, the OS now has super-high-security functions of its own that can be swapped out with any update, without Alice needing any updates, changes, or expanded attack surface herself. At that point, Alice can be OS-agnostic - it doesn't matter whether it's Bob or Kevin - and could even be a permanent hardware feature that never needs updates... oops, you've just invented TPM / Verified Boot / Titan M.
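That "panic if the code ever changes" step can be sketched too. This is a toy with hypothetical names; in real hardware the measurement lives in something like a TPM PCR or the Titan M, not in a Python object:

```python
import hashlib

class Alice:
    """Takes one measurement at boot, then only ever verifies it."""

    def __init__(self):
        self._locked_hash = None

    def lock(self, monitor_code: bytes) -> None:
        """The Secure OS hands Alice the monitor code once, at boot."""
        if self._locked_hash is not None:
            raise RuntimeError("already locked until next reboot")
        self._locked_hash = hashlib.sha256(monitor_code).hexdigest()

    def check(self, monitor_code: bytes) -> None:
        """Panic if the code measured at boot has changed since."""
        if hashlib.sha256(monitor_code).hexdigest() != self._locked_hash:
            raise RuntimeError("panic: monitor code changed")

alice = Alice()
monitor = b"trusted monitor v1"
alice.lock(monitor)    # measured once at boot
alice.check(monitor)   # unchanged: fine

try:
    alice.check(b"patched monitor")
except RuntimeError as e:
    print(e)  # panic: monitor code changed
```

Notice that Alice never interprets or runs the monitor code; she only hashes and compares. That's what keeps her own attack surface tiny even as the OS side evolves.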