Hi, maintainer of Hurl here, thanks for the link. Hurl uses libcurl as its HTTP engine (you can, for instance, use --curl to get an export of curl commands). In a few words: if you use Hurl, you're using curl too...
Ya, I babel all sorts of things. I have SQL queries saved in my notes for things like creating a user after I reset the project locally, plus common REST queries. I create an org-mode heading for every ticket I pick up and often end up with REST and/or SQL queries littered in the notes for testing and development. I can run them right from the notes, and when I'm done with the ticket I can just copy/paste them into the pull request description to fill out the manual test steps for those reviewing.
It doesn't need to render a fucking Chromium instance to make a web request. It doesn't depend on a service to run. It doesn't require an "Enterprise" subscription for basic features.
So I'd say it meets all of the criteria except being on your machine already.
Their words, not mine; the first header: "It's already on your machine". We can belabor it, but the domain is 'justuse'. No room for 'except' [unless you're reasonable, of course].
The 'egregious' things are charging to share what will fit very well in SCM (preventing real automation)... and breaking due to Online First/only. It makes sense to require the endpoint I'm talking to. Why would Postman need AWS/us-east-1 [0] for a completely unrelated API? Joyful rent-seeking.
cURL, your suggestion (hurl), or HTTPie all make far more sense. Store what they need in the place where you already store stuff. Profit, for free: never go down. Automate/take people out of the button-pushing pattern for a gold star.
While I like curl, this is highly subjective, some people just prefer a GUI that can guide you and/or be visually explored.
This whole piece also reads like someone who is quite angry at people preferring a different workflow from theirs. Some aspects, like shell history, are also not the magic bullet proposed here, as it doesn't, e.g., cover the actual responses.
Curl's ability to do almost everything is a minor curse here too as it means that any documentation (man pages, options help) is very large.
Of course you can, but shell scripting really fucking sucks.
One moment you have a properly quoted JSON string, the next moment you have a list of arguments, oops you need to escape the value for this program to interpret it right, but another program needs it to be double-escaped, is that \, \\, or \\\\? What subset of shell scripting are we doing? Fish? modern Linux bash, macOS-compatible bash? The lowest common denominator? My head is spinning already!
If I want to script something I'm writing Python these days. I've lost too much sleep over all the "interesting" WTF situations you get yourself into with shell scripting. I've never used Hurl but it's been on my radar and I think that's probably the sweet spot for tasks like this.
Curl and jq are plenty to get the job done. As the author points out, you can capture all your curl commands in scripts and then orchestrate with other scripts for testing purposes. On top of all the benefits the author has already mentioned, you get a boost if you're doing your development work in a VM, as I do. It's less you have to install, configure, and manage. Sure, that can be automated, but it's just more stuff you have to take care of and the longer you have to wait for a fresh VM to be ready.
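For example, a captured call with a jq assertion can be as small as this sketch (the URL and the `status` field are made up for illustration; `jq -e` makes the exit code reflect the filter's result):

```shell
#!/bin/sh
# smoke.sh: one captured curl call, asserted with jq
# (endpoint and field name are placeholders)
resp=$(curl -s https://api.example.com/v1/health)
echo "$resp" | jq -e '.status == "ok"' >/dev/null || {
    echo "health check failed: $resp" >&2
    exit 1
}
echo "health check passed"
```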
When I made the script httpstat [1] nine years ago, I had this exact thought: if I want to show the statistics of an HTTP request, why not just use curl? Why bother working out the figures myself? And since then, the more I use curl, the more I find it robust, sophisticated, and irreplaceable. It's the only thing and everything I need.
Using -X POST is often wrong as it "changes the actual method string in the HTTP request [... and] does not change behavior accordingly" (Stenberg, 2015).
Although it is correct for the article's mention of "Send POST requests"... it's just that typically people don't send POST requests out of the blue with no data.
Here's the article in question. [0] I think runxiyu is correct.
The author delves a bit more into the issue.
> One of most obvious problems is that if you also tell curl to follow HTTP redirects (using -L or --location), the -X option will also be used on the redirected-to requests which may not at all be what the server asks for and the user expected. Dropping the -X will make curl adhere to what the server asks for. And if you want to alter what method to use in a redirect, curl already have dedicated options for that named --post301, --post302 and --post303!
Per the man page (`man 1 curl`),
> The method string you set with -X, --request will be used for all requests, which if you for example use -L, --location may cause unintended side-effects when curl does not change request method according to the HTTP 30x response codes - and similar.
`-d` and `--data` will appropriately change the headers of their requests. Funnily, `--post301` and `--post302`, which have a similar effect to `-X POST`, are RFC 7231 compliant; browsers just don't behave that way. [2][3] This is so ubiquitous that the status codes 307 and 308 were added to support the original behavior of repeating the request verbatim at the target address. Compare the following:
1. In the 301 case with just `--data`, the request turns into a GET request when sent to the redirect.
2. In the 301 case with `-X POST`, the request stays a `POST` request, but doesn't send any data to the redirect.
3. Finally, in the case where the server returns a 308, we see the POST request is kept and the data is resent.
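As commands, the three cases look roughly like this (example.com stands in for servers answering 301 and 308; what you observe depends entirely on the server):

```shell
# 1. --data alone: curl follows the 301 and re-issues the
#    redirected request as a GET with no body.
curl -sL --data 'k=v' https://example.com/old-endpoint

# 2. -X POST: the method survives the 301, but the body is
#    not resent to the redirected-to URL.
curl -sL -X POST --data 'k=v' https://example.com/old-endpoint

# 3. A 308 response: curl repeats the POST verbatim, body
#    included, with no -X needed.
curl -sL --data 'k=v' https://example.com/moved-permanently
```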
To further expand slightly on a different thing that might surprise some people, the data options will automatically set the content type by adding the header, `Content-Type: application/x-www-form-urlencoded`, as if sending form data from a browser. This behavior can be overridden with a manual `-H`, `--header` argument (e.g., `-H 'Content-Type: application/json'`).
Edit: cube00 pointed out that newer versions of curl than mine have `--json` which will do that automatically. [4]
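Side by side (example.com is a placeholder; `--json` landed in curl 7.82.0, if memory serves):

```shell
# -d defaults the header to:
#   Content-Type: application/x-www-form-urlencoded
curl -d 'name=alice' https://example.com/form

# Override it manually for JSON payloads:
curl -d '{"name":"alice"}' \
  -H 'Content-Type: application/json' https://example.com/api

# curl 7.82.0+: --json sets both Content-Type and Accept
# to application/json in one flag:
curl --json '{"name":"alice"}' https://example.com/api
```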
I've been using curl, like, forever. I don't understand the preoccupation with using Postman et al. -- why pay for something that literally requires a little bit of light RTFM?
https://news.ycombinator.com/item?id=9224 "you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software." -- on why Dropbox should not exist.
People absolutely will pay for software rather than reading or thinking, if it makes doing the work easier. You may have heard of this thing called chatgpt.
(not being a web developer, I've only lightly used Postman, and it is definitely handy for things like authentication. Especially once you touch OAuth. But I uninstalled it once they went unnecessarily cloud)
Because it's convenient. I use curl often, but admit to using Bruno even more often. And yes, I could have some organized scripts or something, but for playing with various APIs daily, sometimes importing whole .json collections, or even setting up credentials in one place and reusing them across all the requests from a collection - that's just fast, easy and convenient. Same for responses - yes, I could work with jq and analyze in the console, but often I don't really know what exactly I'm looking for, so it's just easier to have it visually parsed and click through items
For repeated commands, my projects have a Make/Just file that has cURL commands I wanna validate. Sometimes I even load JSON from a tests/fixtures/*.json file, that also can be reused for other non-E2E tests.
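For anyone who hasn't seen it, curl reads the body straight from such a fixture with `-d @file` (or `--data-binary @file` if the newlines must survive verbatim); the paths and URL here are placeholders:

```shell
# POST a JSON fixture that non-E2E tests also reuse
curl -s -H 'Content-Type: application/json' \
  --data-binary @tests/fixtures/create_user.json \
  http://localhost:8080/api/users
```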
Not sure how some developers could be so allergic to the terminal, don't you already spend a lot of time there?
> Who says I'm allergic to the terminal? I already stated that I use curl.
Preferring "a couple of clicks" vs "run one command" seems to indicate so; otherwise I'm not sure why someone would prefer the former over the latter.
I have dozens of collections with hundreds of requests, most sending complex payloads, all perfectly organised hierarchically. I'd rather use a collapsible UI for that, if you prefer to have hundreds of scripts in folders that's fine too.
Actually I don't even create those collections, we have OpenAPI/Swagger docs for all of our APIs and I just import them with a couple of clicks (which I'm sure there's a way to do with curl).
For the odd requests, and sharing requests with others? I use curl, no problem. I actually think I know it pretty well and very rarely need to look up any docs for it.
> I'd rather use a collapsible UI for that, if you prefer to have hundreds of scripts in folders that's fine too.
No, I don't (what a shitty strawman). I create abstractions, like in any other project. Surely you don't have hundreds of completely original and bespoke requests? I've previously handled thousands of requests by loading them from a .csv.
Maybe on a work computer. But I can’t be bothered with installing, updating, and running this kind of bloat on my personal computers. Only two programs stay open for longer than a few hours: Emacs and Firefox.
The thing with simple tools is that bootstrapping is easy and versatile.
I’m also a CLI old timer, but there’s undeniable utility in having a Postman-like collection to test drive a mobile app API. You can save state from responses and use it in subsequent requests. E.g. log in, save the access token, create a post, save the id, post a comment under the post using the id. It’s all very useful, to say nothing of the fact that you can give said collection to non-technical stakeholders and they can solve a lot of their own problems without going to get one of the engineers to Do A Command Line(tm).
All that said, I wouldn’t touch Postman. Last time I needed something to fit this bill I looked around to find the open source equivalent and found Bruno.
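To be fair, the chain described above is scriptable with curl + jq; a sketch in which every URL and field name is invented:

```shell
base=https://api.example.com            # hypothetical API
# Log in and save the access token
token=$(curl -s -d 'user=me' -d 'pass=secret' "$base/login" \
    | jq -r .access_token)
# Create a post and save its id
post_id=$(curl -s -H "Authorization: Bearer $token" \
    -H 'Content-Type: application/json' \
    -d '{"title":"hello"}' "$base/posts" | jq -r .id)
# Comment under the post using the saved id
curl -s -H "Authorization: Bearer $token" \
    -H 'Content-Type: application/json' \
    -d '{"body":"first"}' "$base/posts/$post_id/comments"
```

Though that's exactly the point where non-technical stakeholders check out, which the collection UIs solve.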
Some people like to think about the problems related to the actual work instead of looking up CLI tool manpages when they need to do something once in a blue moon.
I use curl liberally and also tend to create scripts around it to perform common tasks, but I still get why someone would prefer a GUI.
If you're doing a lot of requests for testing or some other purpose I could see an argument for a graphical interface. Curl is a masterpiece but it's not that simple to use. But again, we're in $current_year and I'd be surprised if "hey Claude, can you cook up a curl request to do this and that" doesn't work.
Even simpler and free: `tldr curl` in your terminal gives you like 80% of what you need for day to day requests, `man curl` gives you 100% of what you need.
cURL is an amazing tool, but it's more "HTTP client" and less "full blown API client".
The page even sort of acknowledges this... saying you manage your environments with environment variables. It doesn't mention how to extract data from the response, just jq for syntax highlighting. No explanation of combining these two into any sort of cohesive setup for managing data through flows of multiple requests. No mention anywhere on the page of working with an OpenAPI spec... many of the tools provide easy ways to import requests instead of manually re-entering/rebuilding something that's already in a computer-readable format.
So the tl;dr here is "use cURL, and then rebuild the rest of the functionality in bash scripts you idiot".
I went down this path of my own accord when Insomnia was no longer an option. I very quickly found myself spending more time managing bash spaghetti than actually using tools to accomplish my goals.
That's why I use a full blown dedicated API client instead of a HTTP client. (Not Postman though. Never Postman.)
I love curl and use it all the time (copying requests from the browser is maybe the most common usage) but:
> Q: But Postman has testing and automation!
> A: So does cURL in a shell script with || and && and actual programming languages. You want assertions? Pipe to grep or write a 3-line Python script. Done.
IMO if you're reaching for an "actual programming language" it's probably time to put curl down and switch to libcurl or whatever native equivalent is in that language.
- If you are writing a script that is more than 100 lines long, or that uses non-straightforward control flow logic, you should rewrite it in a more structured language now. Bear in mind that scripts grow. Rewrite your script early to avoid a more time-consuming rewrite at a later date.
- When assessing the complexity of your code (e.g. to decide whether to switch languages) consider whether the code is easily maintainable by people other than its author.
Cute, but a few years back $client was an early-ish adopter of Anthos Service Mesh (Google-managed(ish) Istio for GKE), and to install it we had to run a Bash script that was over 1,000 lines long.
When we questioned the Google engineer assigned to support us, he snickered and said "you can trust it".
Yeah, hence why no one uses terminals for doing work with git, codex, claude-code, neovim and curl, everyone and their mother are using desktop GUIs for those things, clearly.
I am the only person on my 10-person team who prefers the CLI for stuff like git, and while the ratio was a little more balanced during my time at college, it was still skewed towards GUIs. I don't think it's unreasonable to think that developers might prefer GUIs over CLIs.
My response to this article is the same one I have to anyone out here screaming "it's so easy to just do it my way!": if it's so easy, then do it for me!
The ffmpeg fans are the loudest screechers I've found, in this area. They'll point to the trainwreck of a UX that is Handbrake as an example of GUI for terminal commands. And, look - command line utilities are great and Handbrake is a super good product that functions well and does more than I'd ever want it to. But neither of those things are the same thing as having good UX.
If it's so easy to compile a bunch of shell scripts and store them in a directory in a git repo, then package together a bunch of them that every dev would need, sprinkle in a few with placeholders that most devs would need (with some project-specific input), and then serve them to me in a composable GUI that, in real time, builds the command to be run in my shell. Let me watch the clicks edit the command, and then let me watch the "submit" execute the command. There's no surer way for me to learn exactly what commands I need than to see them all the time. And if I have to learn them (so that I can use them) BEFORE I've seen them - in context - a few times at least, then I'm going to have a much harder time remembering. UX, when done right, helps the user.
Put simply: if I can do everything I would be able to do with postman using curl, then I should also be able to wrap curl in a thin DearImgui window that is reactive to user input. And if it's as easy as the author says, their time would have probably been better spent just making the GUI wrapper app and presenting it as a way to get better with curl, rather than writing an edgelordy article about it.
You can’t compose GUIs unless you go the Emacs and Smalltalk route (macOS AppleScript is a very poor example). The great thing about CLI tools is how easy it is to compose them into something greater. They're also more versatile than a GUI app.
Also they’re more stable than anything else. You can coast for decades on a script.
Not really. In a terminal the whole input can be dynamic, with variables, pipes, and executors like find’s -exec, xargs, and parallel. That’s pretty much the whole point of a shell and CLI interaction.
Yes, really. If you think what I'm describing cannot enter text into a CLI, you are thinking about what I am describing wrong. There's nothing you can put onto a command line with a keyboard that cannot be put there by an app using string manipulation. You're welcome to try to describe something, though.
There is nothing that you can’t. Visual programming languages do exist, and the shell is a REPL. What’s important is how well you can do it. If you nail down the common use cases, you can create a nice wrapper, and people have done so.
But text is very versatile. Adding another layer on top is losing that versatility. And while graphics is nice, symbolic manipulation is on a whole other level.
So for a closed, and I guess small, domain you can have a GUI for intuitiveness. But if you want expressivity, you need symbols and formalism.
But there’s one thing that still beats graphics in terms of intuitiveness: tactility. I’d bet that it’s way faster for a person to learn a physical car dashboard than a touchscreen one.
To me it seems like the complexity is just irreducible. There's so many formats, so many bits and pieces that can go in a video stream, they're not very visualizable, and they have surprising edge case interactions. Not to mention there's a lot of "normally the program figures this out for you, but there's an option to override it if broken" knobs and dials.
Good UX is not about reducing complexity, nor is it about hiding complexity. It's about surfacing the exact utility a user needs in the exact context they are best suited to understand each of its inputs entirely (with the least friction in generating those inputs for the user). It's very hard to do. So much so that describing 'what is wrong' with a UX would be almost as burdensome as just designing a better UX. So I'm not going to tell you what is wrong with it; you KNOW what is wrong with it. It could be better. That you can't specify as to how just means that you aren't currently undertaking the complicated process of redesigning it. It doesn't mean you don't know good UX from bad UX.
Now, all of that aside, I do like Handbrake and I do think it offers a ton of functionality with so little friction that it's one of my very favorite and most-used apps. No login, no project setup, no x, no y, no z. Just a thin wrapper around a badass command line utility, with tons of options for users to override, and sensible defaults. There's a lot to love about Handbrake!
But "my grandma can use it", or "a plumber can use it", or "a person who doesn't understand the technicals and just needs to do one stupid thing that the app can definitely do, can use it" are signs of good UX. You wouldn't say any of those things about Handbrake.
In my experience, handbrake doesn't expose every option from ffmpeg and is more focused on transcoding. One nitpick I have with handbrake is that it doesn't support VAAPI encoding nor Vulkan Video Encoding for AMD cards on Linux.
The author could benefit from some research into user-centered design. CLIs are notorious for poor discoverability and consistency, two hallmarks of GUIs (at least 20 years ago; these days less focus is put on these elements). Humans are not very good at remembering command-line flags but great at looking at, then manipulating, a screen that shows fields for all the flags.
How is muscle memory an exclusive benefit of CLI? How is response time superior for a CLI? I've used GUI tools for git professionally for years and it seems much faster, safer, and easier to use. I've had peers that use CLI instead and they appear to struggle for all the expected reasons (poor feedback, poor discoverability, etc).
I have a few commands and one-liners that perform certain tasks. I can chain them together without looking at the screen. In fact, I can be reading code on the editor while I perform them on the terminal from muscle memory alone. I build a mental model of the branch I'm working on and how it relates to trunk and work from there. Each command updates that mental model. A GUI will never be as fast as that.
Now, you're right - a GUI that can be fully navigated with the keyboard can get somewhat close. That is, until an update changes the place of a button, or the organisation of a menu. CLIs almost universally have stable contracts with the user.
Don't get me wrong, I like GUIs for a lot of tasks: web browsing, CAD work, even programming. I just find that "everything should be a GUI" only serves to bring top performers down closer to the mean at best.
Furthermore, a crappy CLI is really only crappy until muscle memory sets in. A crappy GUI generally remains a crappy experience for as long as it's used.
Is muscle memory an exclusive benefit of the CLI? No, but it's a universal benefit of the CLI, whereas it's only incidental in GUIs.
In my experience, a GUI can chain commands together for you and invoke them instantly or automatically: no typing, no looking. It would really help to understand what tasks you're doing where you feel the CLI excels. There are certainly git commands GUIs handle poorly, but they are usually arcane and infrequently invoked. I'd hope you'd agree there are cases where the GUI excels.
I might get lit on fire for this, but I don't find man pages very easy to use. If I want to quickly remember an option or argument order, I am met with a wall of text.
Does anyone have tips for how to make it more useful? Maybe I could grep better for options? For example in the link, the author lists out common curl commands like making a POST request or adding a header. If you tried to look through the manpage for this, this would take a long time.
There's another utility called tldr that does a better job of this by providing common usage examples that almost always instantly give me what I need, but it's not nearly as comprehensive as man.
> Does anyone have tips for how to make it more useful? Maybe I could grep better for options? For example in the link, the author lists out common curl commands like making a POST request or adding a header. If you tried to look through the manpage for this, this would take a long time.
You can search a man page by pressing the '/' key, typing in what you want, and pressing 'enter'. 'n' jumps to the next instance of your search string 'N' jumps to the previous instance.
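Outside the pager, grep works on the same text, and newer curl (7.73.0+, I believe) ships categorized help:

```shell
# A few lines of context around an option's description:
man curl | grep -A 4 -- '--resolve'

# Just the POST-related options, without the full man page:
curl --help post
```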
Some man pages are better than others. OpenBSD's man pages are usually very good. Linux's man pages sometimes are so bare that they're worse than if they weren't there at all.
I actually completely agree. I am learning OpenBSD and the man pages are very good, but all too often I find myself reading them, beating my head against the wall, and then googling or using tldr or gippity.
For example, I just was digging into BSD_auth and authenticate, and I don't know much about how auth works generally. I found it pretty tough to grok from the man page. I love the idea of learning everything from directly within the system and man pages, but I might just not be smart enough for that.
I find man pages to be useful when I’m already familiar with the command or topic. For long man pages, I usually get on fine by `grep`ing for relevant key-words.
I agree that they are daunting and not so helpful for users who are new to the command or topic. They usually lack a quick-start guide with examples that give the user a starting point to build upon.
Anyhow, after hearing about `tldr` for close to a decade, your comment inspired me to install it. When I tried running `tldr curl`, I was delighted to learn something new and useful:
# Resolve a hostname to a custom IP address, with verbose output (similar to editing the /etc/hosts file for custom DNS resolution):
curl --verbose --resolve example.com:80:127.0.0.1 http://example.com
Curl is great for running individual API calls; API clients are great for when you're actually working on the API or architecting an app with one. Not that there's anything in API development you can't do from the CLI, but there is a lot more to API clients than that feature list (it doesn't even include anything around what one might want to do with OpenAPI definitions!), and by the time you string it all together, you get why people like the tool that did that for them instead.
Unless you're on Windows, which comes with a special version of curl that misses some crucial functionality.
I don't think I've ever successfully used curl in my life, though. Every time, there's confusion about parameters. It's always been faster to just write a quick Python script that uses requests.
Plus the author can be a bit special. One of the most overrated pieces of software on the planet
It's clear at this point that terminal apps have lost to GUIs, but cURL is the one place where I think that's a shame. cURL /always just works/. It is predictable, consistent, transparent, and pretty easy to use in its simple forms, but with plenty of room for complexity if you wish to go that far. There's a reason libcurl is in everything from automobile infotainment systems to toasters. I'm glad to use a GUI over libcurl that doesn't also need a cloud to work, but at the end of the day, I find myself piping cURL to jq more than almost anything else.
Way back when Postman was but a mere Chrome plugin, I spent a lot longer than I'd have liked fighting with a request that should have been logging GET requests but wasn't. Imagine my surprise when I found that it was following Chrome's caching rules and not actually making my requests despite me intentionally firing off those requests. If only I had just used cURL...
Does anyone have suggestions for when I need to use bearer auth and the token is super long?
With curl I end up finding the command becomes hard to read, even taking advantage of backslashes. With Postman, it tidily hides the token out of the way on a separate tab and gets out of my way.
What I do is assign the token to a variable. I typically copy the secret to my clipboard, then use the pbpaste command in the macOS terminal when assigning it, to avoid secrets in my command history.
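Concretely (the token file path is just an example):

```shell
# Keep the long token out of the command line and history:
TOKEN=$(cat ~/.config/myapi/token)    # or on macOS: TOKEN=$(pbpaste)
curl -H "Authorization: Bearer $TOKEN" https://api.example.com/me
```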
Yeah, I have been using this feature of bash ever since its existence and it is quite handy at times, especially when I do "printf "<sensitive data>" | qr".
A while ago I was working on a DSL to solve this exact issue (env switching, http requests + chained requests e.g. to an auth server to retrieve a token) - but I haven't had the time recently, and I moved jobs to a GraphQL shop, so it feels a bit more pointless now :D
I am a total newbie to curl. I am so excited to come across this post. Thanks, op! I want to use curl to send json and xml requests instead of using Postman and SoapUI while also using a jks file which stores a certificate for secure connection to API.
Is curl ever going to have a mature/sane way to handle silent output? I seem to have to redirect 2>&1 and use other -o- options. It’s annoying. Always has been.
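For what it's worth, the incantation I reach for is `-s` (plus `-S` to keep error messages) with `-o` and `-w` choosing exactly what gets printed:

```shell
# -s silences the progress meter; -S re-enables error messages;
# -o /dev/null discards the body; -w prints only what you ask for.
status=$(curl -sS -o /dev/null -w '%{http_code}' https://example.com/)
[ "$status" = "200" ] && echo OK || echo "got $status"
```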
Curl's UX is very dated and I wouldn't recommend it to any new user. Use one of the many alternatives, like httpie [1], or something like curlie [2], which is just a UX wrapper around the same libcurl. Httpie also has a Postman-like web interface.
Sure, libcurl is great, but the curl CLI is pretty ancient and awful, and completely unnecessary to use today, as any front-end can plug into libcurl and provide a much better experience. If you're only requesting your own APIs, you don't even need libcurl; any HTTP 1.1 client, like httpie, will provide a much better experience.
The issue with "survivor" software is that UX cannot be refined due to legacy support. What's great about curl is that libcurl and the CLI front-end are separate tools, allowing for alternative modern front-ends.
My guess would be they don't like the fact that httpie is branching out into paid (well, currently still $0) GUI desktop/web apps. The CLI remains open source under BSD, so I think OP is just yelling at clouds here.
Ditto, enjoy the catharsis. Good advice to not take it personally, I'll try to give a less aggressive point of view. All of this has come to mind [but not repeated out of kindness or laziness, whichever].
So, to start: someone wants me to install Postman/similar and pay real money to share and make a request? Absolutely not. I can read the spec from Swagger, or whatever, too... and write down what was useful [for others]. We all have cURL or some version of Python.
Surely a few phrases of text worth making plans to save, and paying for [at least twice, you to research and them to store], are worth putting into source control. It's free, even gifts dividends. How? Automation that works faster than a human pushing a button. Or creates more buttons!
Foul language has never really bothered me, and I think it's effective in communicating a relatable (to me) frustration with people ignoring the answer staring them in the face.
> The tools you need are simple. They're fast. They're reliable. They've been battle-tested by millions of people for years. Just fucking use them.
I like reading it sometimes. It doesn't make me more likely to do what it suggests though, if anything potentially less likely. Like a "Haha, what a guy. Now let's get on with my day" kind of vibe.
It takes me back. This was normal discourse in the 80s and 90s in the development community, especially the BBSs and telnet communities. I think the entire development community back then was afflicted with Tourette's Syndrome!
> What a weird place to try and raise moral panic.
Sigh. No moral panic involved and I don’t care if people swear. I asked about the style for a reason.
It’s a bit like if someone makes technical posts written in archaic English or in pirate speak. They’re free to do so of course but it’s still a weird choice given context
> Now everyone's downloading 500MB Electron monstrosities that take 3 minutes to boot up just to send a fucking GET request.
While curl is fine, most of the time I use the REST Client extension in VS Code. While VS Code is an Electron monstrosity, assuming you already have it, that extension is less than 3MB.
Even the full-feature GUI extensions like Thunder Client are scarcely bigger.
Hate VS Code, and never let your hands touch anything other than vim or emacs? Fine, there's a number of extensions that run in the browser that do the same thing.
I use httpie (not httpie actually, but https://github.com/ducaale/xh, which has the exact same UX). For the life of me, I can't remember curl flags for some reason. Even fucking -X POST... Sending JSON is a pain too.
For quick and easy http requests, httpie has been fantastic.
To a degree, I understand when people do it when speaking. It can easily be a bad habit, it can slip, it can be your age and the influences around you.
When it comes to doing it in a written blog post that you hope other people will read? Boggles the mind.
Using swearing for emphasis is a non-event where I am. You'll find it equally in documentation as in conversation. There is no taboo around it. It isn't considered to be some "bad habit".
We're two decades removed from "Where the bloody hell are you?" [0] You won't find many people reacting to swearing over here as if it's unprofessional. It's just another emphatic.
You define all your requests in a plaintext format and can inject variables etc... plus the name is kinda funny.