Likely it's the sensor data being stuffed into a "standard" SQL schema -- a lot of these bespoke solutions aren't fully dialed in with tools like TimescaleDB or Prometheus for these metrics. Even with slow sensors (e.g. a 240s polling interval) the data builds up -- and without indexes it drags the whole system down.
The problem with a lot of these "pull data from sensors, pump it into a database" setups is that schema design and data integrity end up as second-class problems behind just getting the data stored. When you can't push an update to whatever is ingesting the data, and that ingestion tool is sending records in an invalid format, you can't just drop the data (or fix the problem at the source). So your store has to accommodate semi-structured and unstructured data gracefully.
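One way to "accommodate invalid payloads gracefully" is to keep the raw payload alongside the parsed columns, so a malformed reading is retained for later repair instead of being dropped at ingest time. A minimal sketch -- using Python's sqlite3 so it's self-contained, though the same table shape works in Postgres (with the raw column as JSONB); the `ingest` function and field names are illustrative, not from any particular system:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE readings (
        id        INTEGER PRIMARY KEY,
        sensor_id TEXT,
        ts        TEXT,
        value     REAL,          -- NULL when the payload didn't parse
        raw       TEXT NOT NULL  -- verbatim payload, always kept
    )
""")

def ingest(payload: str) -> None:
    """Parse what we can; store the raw payload either way."""
    sensor_id = ts = value = None
    try:
        doc = json.loads(payload)
        sensor_id = doc.get("sensor_id")
        ts = doc.get("ts")
        value = float(doc["value"])
    except (ValueError, KeyError, TypeError):
        pass  # invalid format: keep the raw row for later repair
    conn.execute(
        "INSERT INTO readings (sensor_id, ts, value, raw) VALUES (?, ?, ?, ?)",
        (sensor_id, ts, value, payload),
    )

ingest('{"sensor_id": "t-01", "ts": "2024-01-01T00:00:00Z", "value": 21.5}')
ingest('garbage;;not-json')  # still stored, value stays NULL

ok = conn.execute("SELECT COUNT(*) FROM readings WHERE value IS NOT NULL").fetchone()[0]
bad = conn.execute("SELECT COUNT(*) FROM readings WHERE value IS NULL").fetchone()[0]
```

The point of the design is that ingestion never rejects a row: bad rows are queryable (`WHERE value IS NULL`) and can be re-parsed once the upstream format bug is understood.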
I don't agree that SQL is "slow" for these kinds of problems. I've built a number of systems that handle this effectively. You _could_ use a tool that treats schemaless/unstructured data as a first-class feature, but if your goal is to reduce complexity, a Postgres instance is just fine. As with all data projects, indexing is important and needs to be thought through from the beginning. For sensor data, it's also a good idea to set data retention and removal policies immediately (keep your metrics/aggregates, move raw data to cold storage after a while).
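The "index from the beginning, aggregate then expire the raw rows" idea can be sketched in a few lines. Again using sqlite3 to keep it runnable -- in Postgres/TimescaleDB you'd get rollups and drop policies as built-in features, and you'd ship the stale rows to cold storage before deleting; the table names and the 30-day cutoff here are made up for illustration:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor_id TEXT, ts TEXT, value REAL)")
# Index matching the dominant query pattern, created up front:
conn.execute("CREATE INDEX idx_readings_sensor_ts ON readings (sensor_id, ts)")
conn.execute("""
    CREATE TABLE daily_rollup (
        sensor_id TEXT, day TEXT, avg_value REAL, n INTEGER,
        PRIMARY KEY (sensor_id, day)
    )
""")

now = datetime(2024, 6, 10, tzinfo=timezone.utc)
for days_ago in (0, 1, 40, 41):  # two recent readings, two stale ones
    ts = (now - timedelta(days=days_ago)).isoformat()
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)", ("t-01", ts, 20.0 + days_ago))

cutoff = (now - timedelta(days=30)).isoformat()
# Roll stale raw rows up into daily aggregates...
conn.execute("""
    INSERT INTO daily_rollup
    SELECT sensor_id, substr(ts, 1, 10), avg(value), count(*)
    FROM readings WHERE ts < ? GROUP BY sensor_id, substr(ts, 1, 10)
""", (cutoff,))
# ...then drop the raw rows (in production: copy to cold storage first).
conn.execute("DELETE FROM readings WHERE ts < ?", (cutoff,))

raw_left = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
rollups = conn.execute("SELECT COUNT(*) FROM daily_rollup").fetchone()[0]
```

Run periodically (cron, pg_cron, or a TimescaleDB retention policy), this keeps the hot table small enough that the index stays effective, while the cheap-to-store aggregates remain queryable forever.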