WebDAV isn't dead yet (feld.me)
82 points by toomuchtodo 6 hours ago | 41 comments




I wrote both the WebDAV client (backend) for rclone and the WebDAV server. This means you can sync to and from WebDAV servers or mount them just fine. You can also expose your filesystem as a WebDAV server (or your S3 bucket or Google Drive etc).
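For anyone curious, the serving side is a one-liner. A minimal sketch (the directory, remote name, and credentials are placeholders; see the rclone docs for the full flag list):

  # expose a local directory as a WebDAV server on :8080
  rclone serve webdav /path/to/files --addr :8080

  # the same works for any configured remote, e.g. an S3 bucket,
  # optionally behind basic auth
  rclone serve webdav mys3:bucket --addr :8080 --user me --pass secret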

The RFCs for WebDAV are better than those for FTP, but there is still an awful lot of under-specified behaviour that servers and clients handle differently, which leads to lots of workarounds.

The protocol doesn't let you set modification times by default, which matters for a sync tool, but popular implementations like ownCloud and Nextcloud do. Likewise with hashes.
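(One such vendor extension, if I recall correctly, is the X-OC-Mtime header that ownCloud/Nextcloud accept on a PUT; the endpoint path and timestamp below are only illustrative.)

  PUT /remote.php/dav/files/alice/notes.txt HTTP/1.1
  Host: cloud.example.com
  X-OC-Mtime: 1700000000
  Content-Type: text/plain
  Content-Length: 5

  hello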

However, the protocol is very fast, much faster than SFTP with its homebrew packetisation, as it's based on well-optimised web tech: HTTP, TLS, etc.


> In fact, you're already using WebDAV and you just don't realize it.

Tailscale's drive share feature is implemented as a WebDAV share (connect to http://100.100.100.100:8080). You can also connect to Fastmail's file storage over WebDAV.

WebDAV is neat.


I use it all the time to mount my CopyParty instance. Works great!

> While writing this article I came across an interesting project under development, Altmount. This would allow you to "mount" published content on Usenet and access it directly without downloading it... super interesting considering I can get multi-gigabit access to Usenet pretty easily.

There is also NzbDav for this: https://github.com/nzbdav-dev/nzbdav


On the same topic, and because I also believe WebDAV is far from dead: I recently published a WIP, part of a broader project, an nginx module that provides a WebDAV file server compatible with the Nextcloud sync clients (desktop and Android). It can also be used as a WebDAV server with GNOME Online Accounts and Nautilus (and probably others).

Have a look there: https://codeberg.org/lunae/dav-next

/!\ It's a WIP, thus not packaged anywhere yet, no binary releases, etc… but all feedback is welcome.


"FTP is dead" - shared web hosting would like a word. Quite a few web hosts still talk about using FTP to upload websites to the hosting server. Yes, these days you can upload SSH keys and possibly use SFTP, but the docs still talk about tools like FileZilla and basic FTP.

Exhibit A: https://help.ovhcloud.com/csm/en-ie-web-hosting-ftp-storage-...


I haven't used old school FTP in probably 15 years. Surely we're not talking about using that unencrypted protocol in 2025?

From that link:

    2. SSH connection

    You will need advanced knowledge and an OVHcloud web hosting plan Pro or Performance to use this access type.
Well, maybe we are. I'd cross that provider off my list right there.

FTP still works great and encryption is a non-priority for 100% of users.

Shared hosting is dying, but not yet dead, and FTP is dying with it; shared hosting is really the last big use case for FTP now that software distribution and academia have moved away from it. As shared hosting continues to decline in popularity, FTP will go along with it.

Like you, I will miss the glory days of FTP :'(


I think the true death of FTP was Amazon S3 deciding to use its own protocol instead of FTP, since S3 serves basically the same niche.

Shared hosting is in decline in much the same way it was in 2015, i.e. everyone involved is still making money hand over fist despite continued reports that its death is right around the corner.

I use WebDAV for serving media over Tailscale to Infuse when I'm on the move. SMB did not play nicely at all, and NFS is not supported.

Go's golang.org/x/net/webdav package (x/net rather than the stdlib proper) has quite a good one that just works with only a small bit of wrapping in a main().
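For reference, a minimal sketch of that wrapping using golang.org/x/net/webdav (the served directory and port are placeholders):

  package main

  import (
      "log"
      "net/http"

      "golang.org/x/net/webdav"
  )

  func main() {
      h := &webdav.Handler{
          FileSystem: webdav.Dir("/srv/media"), // directory to expose (placeholder)
          LockSystem: webdav.NewMemLS(),        // in-memory lock manager
      }
      log.Fatal(http.ListenAndServe(":8080", h))
  }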

Although I've since written one in Elixir that seems to handle my traffic better.

(You can also mount them on macOS and browse with Finder, the shell, etc., which is pretty nice.)


I built a simple WebDAV server with Sabre to sync Devonthink databases. WebDAV was the only option that synced between users of multiple iCloud accounts, worked anywhere in the world and didn’t require a Dropbox subscription. It’s a faster sync than CloudKit. I don’t have other WebDAV use cases but I expect this one to run without much maintenance or cost for years. Useful protocol.

The author seems to conflate the S3 API with S3 itself. Most vendors now build S3 API compatibility into their products because people are so used to using it as a model.

More like an attempt at S3 API compatibility...

JMAP will eventually replace WebDAV.

Recently set up WebDAV for my Paperless-NGX instance so my scanner can upload scans directly to Paperless. I wish Caddy supported WebDAV out of the box; I had to use this extension: https://github.com/mholt/caddy-webdav
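A rough sketch of what the Caddyfile can look like with that module (the path is a placeholder and the subdirective names are from memory, so check the module's README):

  :8080 {
      # webdav is a third-party directive, so it goes inside a route block
      # to avoid having to declare a global directive order
      route {
          webdav {
              root /srv/paperless-consume
          }
      }
  }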

Which scanner, if you don’t mind me asking? I’ve got a decade+ old ix500 that had cloud support but not local SMB.

I was surprised, then not really surprised, when I found out this week that Tailscale's native file sharing feature, Taildrive, is implemented as a WebDAV server in the network.

https://tailscale.com/kb/1369/taildrive


What else would you expect, just out of curiosity? SMB? NFS? SSHFS?

A proprietary binary patented protocol...

And do what, implement a virtual filesystem driver for every OS?

If you need SFTP independent of Unix auth, there is SFTPGo.

SFTPGo also supports WebDAV, but for the use cases in the article, SFTP is just better.


One interesting use of WebDAV is SysInternals (a collection of tools for Windows): it's accessible from Windows Explorer via WebDAV by going to \\live.sysinternals.com\Tools

Isn't that SMB, not WebDAV?

I guess the "\\$HOSTNAME\$DIR" URL syntax in Windows Explorer also works for WebDAV. Is it safe to have SMB over WAN?

I just tried https://live.sysinternals.com/Tools in Windows Explorer, and it also lists the files, identical to how it would show the contents of any directory.

Even running "dir \\live.sysinternals.com\Tools", or starting a program from the command prompt with "\\live.sysinternals.com\Tools\tcpview64" works.
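Right — for UNC paths the host doesn't serve over SMB, Windows falls through to the WebClient (WebDAV) redirector, which is what makes this work. You can even map it to a drive letter (assuming the WebClient service is running; the drive letter is arbitrary):

  rem map the Sysinternals WebDAV share to a drive letter
  net use S: \\live.sysinternals.com\Tools
  dir S:\
  S:\tcpview64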


IIRC, Windows for a while had native WebDAV support in Explorer, but setting it up was very non-obvious. Not sure if it still does, since I've moved fully to Linux.

Just like the author, I use WebDAV for Joplin, and also for Zotero. I just love them so much.

We need to keep using open protocols such as WebDAV instead of depending on proprietary APIs like the S3 API.


Relatedly, is there a good way to expose a directory of files via the S3 API? I could only find alpha-quality things like rclone serve s3, and things like Garage which use their own on-disk format rather than regular files.

consider versitygw or s3proxy
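If the alpha status isn't a blocker, the rclone serve s3 route from the parent is at least a quick sketch to try (the path is a placeholder; the credential flag is from memory, so check rclone serve s3 --help):

  # serve a local directory over the S3 API (experimental)
  rclone serve s3 /path/to/files --addr :9000
  # credentials, if wanted, go via something like --auth-key accessKey,secretKey (check --help)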

Copyparty has WebDAV and SMB support (among others), which perhaps makes it a good candidate to combine with a Kodi client?

I wonder how much better WebDAV must have gotten with newer versions of the HTTP stack. I only used it briefly in HTTP mode but found the clients to all be rather slow, barely using tricks like pipelining to make requests go a little faster.

It's a shame the protocol never found much use in commercial services. There would be little need for official clients running in compatibility layers, like you see with tools like Gqdrive and OneDrive on Linux. Frankly, except for the lack of standardised random writes, the protocol is still one of the better solutions in this space.

I have no idea how S3 managed to win as the "standard" API for so many file storage solutions. WebDAV has always been right there.


>It's broadly available as you can see

And yet, I can never seem to find a decent java lib for webdav/caldav/carddav. Every time I look for one, I end up wanting to write my own instead. Then it just seems like the juice isn't worth the squeeze.


No random writes is the nail in the coffin for me

It's HTTP, of course there's an extension for that?

Sabre-DAV's extension for this seems to be relatively well implemented; it's supported in webdavfs, for example. Here are some example headers one might attach to a PATCH request:

  X-Update-Range: append
  X-Update-Range: bytes=3-6
  X-Update-Range: bytes=4-
  X-Update-Range: bytes=-2
https://sabre.io/dav/http-patch/ https://github.com/miquels/webdavfsl
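Put together, a partial update looks something like this (the dedicated content type is what the Sabre docs describe; host, path, and range here are illustrative):

  PATCH /files/notes.txt HTTP/1.1
  Host: dav.example.com
  Content-Type: application/x-sabredav-partialupdate
  X-Update-Range: bytes=3-6
  Content-Length: 4

  WXYZ

This overwrites the four bytes at offsets 3 through 6 without re-uploading the rest of the file.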

Another example is this expired draft. I don't love it, but it uses PATCH with Content-Range. There are some other neat ideas in there, and it shows the versatility and open possibilities (even if I don't love re-using this header that way). https://www.ietf.org/archive/id/draft-wright-http-patch-byte...

Apache has a PUT with Content-Range: https://github.com/miquels/webdav-handler-rs/blob/master/doc...

There's a great write-up on the rclone forum about trying to support partial updates: https://forum.rclone.org/t/support-putstream-for-webdav-serv...

It would be great to see a proper extension formalized here! But there are options.


I'm using WebDAV to sync files from my phone to my NAS. There weren't any good alternatives, really. SMB is a non-starter on the public Internet (SMB over QUIC might change that eventually), SFTP is even crustier, and rsync requires SSH to work.

What else?


Syncthing is pretty nice for that sort of thing.

Syncthing is great, but it does file sync, not file sharing, so it's not ideal when, say, you want to share a big media library with your laptop without necessarily loading everything onto it.

That moves the goalpost. The user I was replying to wanted sync and didn't seem to be using other functionality like that.

> FTP is dead (yay),

Hahahaha, haha, ha, no. And it's probably (still) more used than WebDAV.

pls send help


This blog post didn't convince me. I must assume the default for most web devs in 2025 is hosting on a Linux VM and/or mounting the static files into a Docker container. SFTP is already there and Apache is too.

The last time I had to deal with WebDAV was for a crusty old CMS that nobody liked using, many years ago. Support on dev machines running Windows and macOS was a bit sketchy, and files would randomly get skipped during bulk uploads. Linux support was a little better with davfs2, but then VSCode would sometimes refuse to recognize the mount without restarting.

None of that workflow made sense. It was hard to know what version of a file was uploaded and doing any manual file management just seemed silly. The project later moved to GitLab. A CI job now simply SFTPs files upon merge into the main branch. This is a much more familiar workflow to most web devs today and there's no weird jank.



