That's why we should strive to use and optimize local LLMs.
Or better yet, we should set up something that lets people share part of their local GPU processing (like SETI@home) for a distributed LLM that can't be censored, with some way to compensate contributors when their hardware is used for inference.
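Running a local model is already pretty easy. As a minimal sketch (assuming you have an Ollama server running on its default port, and "llama3" stands in for whatever model you've actually pulled):

    import requests

    # Query a locally running Ollama server (default port 11434).
    # "llama3" is just an example model name; substitute your own.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": "Why run LLMs locally?", "stream": False},
        timeout=120,
    )
    print(resp.json()["response"])

The distributed, compensated part is the hard piece; the local inference part is a solved problem on consumer hardware.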
Yeah, we really have to strive not to rely on these corporations, because they absolutely will not do customer support or actually review account closures. The article also points out that the company (Google, I assume) has control over a lot more than just AI.