Spaces: Scalable Object Storage on DigitalOcean (digitalocean.com)
151 points by beardicus on Sept 20, 2017 | hide | past | favorite | 55 comments


Using AWS's simple calculator[0] for S3, I get a figure of $97.82 for 250GB storage and 1TB data transfer out per month. DO's price is $5. I know AWS is considered more expensive but that's a huge difference - is my calculation wrong here?

[0]http://calculator.s3.amazonaws.com/index.html
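For reference, that figure can be reproduced with a quick sketch, assuming standard us-east-1 list prices at the time ($0.023/GB-month for storage, $0.09/GB for transfer out after the first free GB, with 1TB counted as 1024GB to match the calculator):

```python
# Reproduce the AWS calculator figure for 250 GB stored + 1 TB out/month.
# Assumed 2017-era us-east-1 list prices:
#   storage:  $0.023 per GB-month
#   transfer: $0.09 per GB out (first GB each month is free)
STORAGE_GB, TRANSFER_GB = 250, 1024

aws = STORAGE_GB * 0.023 + (TRANSFER_GB - 1) * 0.09
do = 5.00  # Spaces base plan covers 250 GB + 1 TB out

print(f"AWS S3: ${aws:.2f}/mo vs DO Spaces: ${do:.2f}/mo")
```

Under those assumptions the total comes out to $97.82, so the calculation isn't wrong; nearly all of it is the transfer charge.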


Yes, Digital Ocean / Linode bandwidth is 40x cheaper than AWS. Amazon also charges extra for a support plan so you can open a ticket and inform them of their bugs. And we have found bugs in AWS infrastructure that still remain unfixed, because we have a disproportionate number of IE users. They essentially forced us to stand up a Linode proxy in front of S3 and create our own CDN, since theirs did not work for our users. In the process we saved 4,000% on our bandwidth.


> bandwidth

How does this industry manage to fuck up very specific technical terms so often?

Bandwidth is the speed. The hint is in the name. Wider means more at once.

You're almost certainly talking about the amount of data transferred.

The most depressing part is seeing hosting companies use the same incorrect terminology.


Comparing 1Gbps of bandwidth on AWS and 1Gbps of bandwidth on Linode, Linode is 40x cheaper. It's a measure of bandwidth usage over time, or data transfer. It's the same thing in the context I used it in, which is the industry-standard term.

If I were comparing cost of bursting, the semantics would be worth arguing.


How could you save more than 100%?


What I should have said is it's 40x cheaper.

But I guess if you factor in not losing customers due to obscure IE problems on AWS, we could have saved over 100% ;) by not losing customers.

Some businesses don't need to worry about obscure IE problems, and AWS will make sense for them; for us it didn't.


Maybe bandwidth went up in the same period the price went down? Just a guess.


Reluctantly upvoted because, while I am trying desperately to cut down on my own snarkiness, I deeply value precision in language.


When you say bandwidth, what are you referring to specifically?

The biggest reason Linode didn't make sense for us is that we couldn't scale hard drive space without scaling everything else, too. As the site grew this became pretty crazy.


What I mean specifically is that the cost of data transfer/bandwidth is cheaper on Linode. If you're pushing 10s or 100s of TB a month it's a big deal.

You can still run your own CDN on Linode and use Amazon S3 as an origin server.


The "locked tiers" model shits me to tears.

I've managed to find a good Xen-based vps/dedi host that lets you customise memory/disk independently (but regular vps are all "just" 2 vcpus I believe).


"data transfer out", presumably, based on the context.


No, you are correct. AWS charges inexplicably ridiculous prices for bandwidth. By my calculations, the last time their bandwidth rates were market-competitive was in 2007.

Market rate is actually below $0.01/GB even, but I'm very comfortable paying that rate for a multi-homed connection to a managed service.


You should try buying bandwidth somewhere that is not the US :-) or possibly central Europe.

Not to say they don't also charge that for bandwidth in those locations, but per-GB pricing in Australia is more like 20-50c/GB


s/unexplainably/inexplicably


Fixed, thanks.


I was selected to use the preview, and the pricing was originally $0.02/GB storage and $0.01/GB transfer, without the $5 minimum. I guess that most people spent a lot less than $5. Increasing the minimum spend to $5 should give them a good profit from the people who don't use it that much. I don't know what will happen if a lot of people start using it a lot.


> I don't know what will happen if a lot of people start using it a lot.

That's almost always a nice problem to have. If it was profitable at small-scale usage, then it's usually profitable in widespread usage once economies of scale have been applied.


Do you have something else in your estimate? It shows as $5.63 for me.


Are you missing 1TB of transfer out?


Yep, thanks. Didn't change it to TB on the calculator.


Has anyone run any benchmarks on this yet? Latency is always an issue with using these for production static file serving over HTTP (even if it's just feeding a proxy network).

Another feature it would be nice to see is accidental deletion protection of some kind. S3 does this with versioning, and it's an important feature lacking on most of the alternatives:

> Versioning allows you to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. Once you enable Versioning for a bucket, Amazon S3 preserves existing objects anytime you perform a PUT, POST, COPY, or DELETE operation on them. By default, GET requests will retrieve the most recently written version. Older versions of an overwritten or deleted object can be retrieved by specifying a version in the request.


Interesting pricing: Base $5/month for 250GB storage and 1TB outbound traffic. No cost for requests. Additional storage is $0.02/GB/month and additional outbound traffic is $0.01/GB.

Compared to Backblaze B2: $0.005/GB/month for storage and $0.02/GB for traffic, plus tiered API call fees (see https://www.backblaze.com/b2/cloud-storage-pricing.html)

So Spaces is better for serving assets (lower traffic cost) and B2 is better for archiving data (lower storage cost).
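That trade-off can be sketched with the two pricing models from this thread (ignoring B2's free tier and API-call fees; the workload numbers below are hypothetical):

```python
# Monthly cost under each pricing model from the thread.
def spaces_cost(storage_gb, transfer_gb):
    """$5 base covers 250 GB stored + 1 TB (1024 GB) out;
    overages: $0.02/GB stored, $0.01/GB transferred."""
    return (5.00
            + max(0, storage_gb - 250) * 0.02
            + max(0, transfer_gb - 1024) * 0.01)

def b2_cost(storage_gb, transfer_gb):
    """$0.005/GB-month stored, $0.02/GB downloaded (API fees ignored)."""
    return storage_gb * 0.005 + transfer_gb * 0.02

# Serving-heavy workload (250 GB stored, 1 TB out): Spaces wins.
print(spaces_cost(250, 1024), b2_cost(250, 1024))  # ~$5.00 vs ~$21.73
# Archive-heavy workload (1 TB stored, 10 GB out): B2 wins.
print(spaces_cost(1024, 10), b2_cost(1024, 10))    # ~$20.48 vs ~$5.32
```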


Actually I'd say that managed OpenStack Swift is a better solution than B2, if only marginally more expensive.

Take e.g. OVH's Swift pricing[1], which is $0.011 per GB (stored and outgoing). I found Swift's latency (at least on the OVH cluster, with the OVH CDN) to be much better than B2's. Swift also satisfies my paranoia about having a remote storage system I could set up on-prem.

Also, more storage regions, much better network infra, actual access to software running on the cluster (Swift, not some proprietary API).

YMMV

[1] - https://www.ovh.com/us/public-cloud/storage/object-storage/


I wonder what the next add on piece will be. There seems to be a progression here to position themselves as an AWS alternative for budget minded customers.

They already have a sort of "cloud formation" like API for provisioning nodes, a load balancer, and now object storage.

Maybe a managed database offering? Or managed K8S? Or a more clear "region / availability zone" strategy?


I'm hoping for a managed database offering next. Not being a sysadmin/devops person, I'm always scared to launch any kind of side project because of security concerns. If DO adds some managed/context-specific nodes to their lineup, it will make it much easier to spin things up.


Congrats on the launch! Love using DO for VMs and this really rounds out the offering.

Since Google started offering strong consistency that's been a really tempting feature. We often want to upload something for another user to immediately access, and having that extra delay or potential 404 with other services sucks. But Google's pricing is nowhere near B2 or now DO's Spaces.

Any thoughts on eventually adding strong consistency to Spaces?


From the introduction document: [1]

> Spaces currently supports v2 of pre-signed URL functionality. Tools and libraries that only support v4 of pre-signed URL functionality will not work.

I had really hoped this would be in place for the launch, but I guess this wasn't a priority. Looking forward to this + EU locations!

[1] https://www.digitalocean.com/community/tutorials/an-introduc...
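For anyone hit by this, v2 pre-signed URLs can be built by hand with just the standard library; this is the classic AWS query-string signing scheme (HMAC-SHA1 over a fixed string-to-sign), with hypothetical bucket/key/endpoint names:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def presign_v2_get(bucket, key, access_key, secret_key,
                   endpoint="nyc3.digitaloceanspaces.com",
                   expires_in=3600):
    """Build a v2 (signature version 2) pre-signed GET URL, the older
    HMAC-SHA1 scheme the launch docs say Spaces supports."""
    expires = int(time.time()) + expires_in
    # Canonical string for a GET with no extra headers:
    # method, (empty) content-md5, (empty) content-type, expiry, resource.
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 hashlib.sha1).digest()
    ).decode()
    return (f"https://{bucket}.{endpoint}/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={quote(sig, safe='')}")
```

SDKs that let you pin the signature version (rather than defaulting to v4) should work the same way until v4 support lands.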


With no costs for uploads and $5 per every 250GB it sounds like this is actually probably a pretty nice deal for personal archiving. Of course if you're also downloading a bunch of that stuff you'll hit that 1TB transfer ceiling pretty quickly, but it sounds to me like it could be a reasonably cheap backend for a backup solution – or am I missing something?


Somewhat off-topic but: What is the cheapest cloud storage out there? I don't care about multi-region or extreme durability, I would just like to store encrypted blobs containing easily re-creatable data. I need reasonable download speeds however.


OVH's object storage offering is probably worth looking at: https://www.ovh.com/us/public-cloud/storage/object-storage/

Storage: $0.0112/month/GB, Outgoing traffic: $0.011/GB


Do you have experience with them? What about latency?


I have experience with OVH, but not with this product. They do have a terrific network, ddos protection, etc, as compared to other low budget dedicated server vendors. Their support, though, isn't terrific.


Backblaze B2 claims to be the cheapest. I don't know if they are, but they're really cheap and it looks like a great service.


B2 charges $0.02/GB downloaded, $0.005/GB stored.
DO charges $0.01/GB downloaded, $0.02/GB stored.

B2 also has a free tier for the first 10 GB of storage, 1 GB of daily downloads. DO charges $5/month minimum


I currently use serverhub.com. It doesn't have an S3-like service, but they offer a VPS with 500GB of storage and 1TB of transfer for $5. I have a legacy plan of 125GB of storage for $15/year. Highly recommended.


Hubic is ridiculously cheap - http://hubic.com/


Interesting offering. I'm wondering how people feel about the $5 minimum? At 250GB storage and 1TB traffic, plenty of people using much less than 250GB and 1TB traffic would be better served with a pay for what you use model.


As an individual, $5 is easily worth it so I don't have to pay attention to how much I have in S3 and how much I download. I don't have to worry about getting a huge bill at the beginning of the month.


I love this company and this new offering. Such a pleasure to work with - from both a developer and customer perspective.

I only wish the other regions would roll out quicker (SFO please!)


Any experience/data on rate limits? I heard B2 rate limits a lot. I need to pour data in at will at a moment's notice without limits.


Curious why they would publish tutorial for Transmit 4 and not Transmit 5. It says Transmit 5 tutorial will come later but not when. One would think they would have the tutorial for 5, and then later release one for people still on Transmit 4. Unless Transmit 5 doesn't support it yet?

Also a bit weird that you have to select S3 to connect to it even though it isn't S3, unless this new feature is S3 but wrapped in a DigitalOcean interface?


(DO employee here) It's S3-compatible. Agreed that it's a bit strange in Transmit and other clients to have to first select "S3" when connecting to something else entirely... but that's the world we live in.

Spaces isn't working with Transmit 5 yet. I don't have the technical details, but it's a known issue that's being worked on. I'll write a Transmit 5 tutorial as soon as it works.


Unfortunately the s3 API has become a common thing with blob storage services/software.

It means they can support any apps/services that currently rely on S3 with little more than an S3 host/URL change.


A lot of storage services have S3 compatible APIs. Minio, Google Cloud Storage (when in interop mode) also can be accessed with S3 libraries/SDKs (you have to change the endpoint for it to work).

And they aren't 100% compatible. I believe that DO doesn't support headObject, bucketExists and creating presigned upload URLs.


Anyone know what this uses under the hood? The reference to erasure coding specifically reminds me of Minio..


Maybe Ceph.


Or Tahoe-LAFS, although Tahoe's encoding doesn't leave any keys with the provider, but only with the uploader.


Right, I guess I assumed they would use something that already natively supports the S3 API, which Tahoe doesn't AFAIK?


Actually! (Full disclosure: I sell access to a Tahoe-LAFS grid.) The Tahoe team has recently been working to merge the "cloud-backend" branch, which would open the door to storing shares on grids by interfacing directly with S3, GSE, etc. This technology's been battle-tested by Least Authority Enterprises, and at least Matador Cloud has indicated a desire to adopt it as well. Other Tahoe vendors are probably sitting up and taking notice too.


Um, you're talking about using S3 as a storage backend (I think?), whereas I'm talking about exposing data over an S3-compatible API (like Minio does).


This looks like a much cheaper way (than AWS S3) to use Arq to backup my MBP.


I'm currently using rclone with backblaze B2


I'm using both S3 and Backblaze's new offering. Can't beat first 10gb free.


What is the difference between a CDN and object storage?

I think a CDN is cheaper. Am I wrong?



