> That's an enormous amount of data. How do you not notice a huge, network-hogging data flow?
No, it isn't. Not even close to the larger data sets Snowflake most likely manages.
We're talking about the public cloud. You don't "hog" AWS's network with a one-time download at the scale described in the article.
Let's be generous and estimate 1k records per customer. That's almost certainly an overestimate for the time period TFA specified, but for the sake of argument let's run with it. There are about 100M customers, so that's only 100B records. If each record is on the order of 1 kB, again likely a huge overestimate, that comes to just 100 TB. AWS would charge roughly $7–8k to egress 100 TB, which would be a rounding error in AT&T's cloud spend.
The real amount is most likely less than half of that, if not a quarter.
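The arithmetic above is easy to sanity-check. A quick sketch, with the record count, record size, and AWS egress tier prices all being assumptions (the tiers approximate AWS's published internet data-transfer-out pricing, which varies by region and changes over time):

```python
# Back-of-envelope estimate: all inputs are assumptions, not measured values.
customers = 100_000_000          # ~100M customers
records_per_customer = 1_000     # generous upper bound
record_size_bytes = 1_000        # ~1 kB per record, also generous

total_bytes = customers * records_per_customer * record_size_bytes
total_tb = total_bytes / 1e12    # -> 100 TB

# Approximate AWS internet egress tiers, ($/GB): first 10 TB,
# next 40 TB, next 100 TB. Check current pricing for real numbers.
tiers_gb_price = [(10_000, 0.09), (40_000, 0.085), (100_000, 0.07)]

remaining_gb = total_bytes / 1e9
cost = 0.0
for tier_gb, price_per_gb in tiers_gb_price:
    chunk = min(remaining_gb, tier_gb)
    cost += chunk * price_per_gb
    remaining_gb -= chunk
    if remaining_gb <= 0:
        break

print(f"{total_tb:.0f} TB, ~${cost:,.0f} to egress")
```

With these (deliberately generous) inputs the total lands around 100 TB and the tiered egress bill comes out in the high-$7k range, in line with the estimate above.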