Hacker News | mbreese's comments

In the cases I’ve tried building/integrating they are the same thing…

I think the only difference is the statefulness of the request. HTTP is stateless, but MCP has state? Is this right?

I haven’t seen many use cases for how to use the state effectively, but I thought that was the main difference over a plain REST API.


My understanding is that it can upgrade to an SSE connection, i.e. a persistent stream. Also, for interprocess communication you usually prefer a persistent connection. All of that reduces communication overhead. The rationale is also that an AI agent may trigger more fine-grained calls than a normal program or a UI would, since it needs to collect information to observe the situation and decide its next move (a lot more GET requests than usual, for instance).

This seems like the solution getting ahead of the problem. A series of API requests over HTTP can easily use a persistent connection and will practically default to that with modern client and server implementations. A claim that a more complex approach is needed for efficiency should be accompanied by evidence that the simple approach was problematic.
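To make the point concrete: with plain HTTP/1.1 keep-alive you already get a persistent connection by default, no SSE upgrade needed for ordinary request/response traffic. A minimal sketch using only the Python standard library (the server and endpoint paths are made up for illustration):

```python
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive by default

    def do_GET(self):
        body = b'{"ok": true}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)

conn.request("GET", "/tool/a")
first = conn.getresponse()
first.read()
sock_a = conn.sock  # socket after the first request

conn.request("GET", "/tool/b")
second = conn.getresponse()
second.read()
sock_b = conn.sock  # socket after the second request

print(sock_a is sock_b)  # True: both requests rode one TCP connection
server.shutdown()
```

Any mainstream client (a `requests.Session`, Go's `http.Transport`, browsers) reuses connections like this automatically, which is why the efficiency argument for a separate stateful transport needs actual evidence.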

MCP can use SSE to support notifications (since the protocol embeds a lot of state, you need to be able to tell the client that the state has changed), elicitation (the MCP server asking the user to provide some additional information to complete a tool call) and will likely use it to support long-running tool calls.

Many of these features have unfortunately been specified in the protocol before clear needs for them have been described in detail, and before other alternative approaches to solving the same problems were considered.
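For what it's worth, the SSE wire format itself is tiny. A rough sketch of parsing a stream into events in plain Python; the notification payload is illustrative, modeled on MCP's `notifications/tools/list_changed` method name:

```python
def parse_sse(raw):
    """Split a raw SSE stream into (event, data) pairs.
    Events are separated by blank lines; each has event:/data: fields."""
    events = []
    for chunk in raw.strip().split("\n\n"):
        event, data = "message", []  # SSE default event type is "message"
        for line in chunk.split("\n"):
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data.append(line[len("data:"):].strip())
        events.append((event, "\n".join(data)))
    return events

# A notification stream as an MCP server might emit it:
stream = (
    "event: message\n"
    'data: {"jsonrpc":"2.0","method":"notifications/tools/list_changed"}\n'
    "\n"
)
print(parse_sse(stream))
```

The transport is simple; the complexity the parent is pointing at lives in the protocol state the notifications are trying to keep in sync, not in the stream itself.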


I can't agree more. Downloading the OpenAPI doc for an API and parsing it is more than enough to implement the core of MCP. But sadly the buzzword completely took off, and now, for instance, every participant in my trainings systematically asks for MCP.
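To illustrate: the core mapping is basically "one OpenAPI operation = one tool descriptor". A toy sketch with a made-up OpenAPI fragment, no claim to cover the full spec:

```python
# Hypothetical minimal OpenAPI document; a real one would be fetched over HTTP.
openapi = {
    "paths": {
        "/users/{id}": {
            "get": {
                "operationId": "getUser",
                "summary": "Fetch a user by id",
                "parameters": [
                    {"name": "id", "in": "path", "required": True,
                     "schema": {"type": "string"}},
                ],
            }
        }
    }
}

def openapi_to_tools(doc):
    """Turn each OpenAPI operation into an MCP-style tool descriptor."""
    tools = []
    for path, methods in doc.get("paths", {}).items():
        for method, op in methods.items():
            params = op.get("parameters", [])
            tools.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "description": op.get("summary", ""),
                "inputSchema": {
                    "type": "object",
                    "properties": {p["name"]: p.get("schema", {})
                                   for p in params},
                    "required": [p["name"] for p in params
                                 if p.get("required")],
                },
            })
    return tools

tools = openapi_to_tools(openapi)
print(tools[0]["name"])  # getUser
```

Everything past this mapping (auth, pagination, error handling) was already a solved problem for REST clients.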

Using SSE was deemed far too inconvenient in theory, despite that being how nearly all of the MCP deployments that gained traction actually worked, so the spec was switched to something better in theory but very inconvenient in practice:

https://blog.fka.dev/blog/2025-06-06-why-mcp-deprecated-sse-...

There are a million "why don't you _just_ X?" hypothetical responses to all the real issues people have with streamable HTTP as implemented in the spec, but you can't argue your way into a level of ecosystem support that doesn't exist. The exact same screwup happened with OAuth, so we can see who is running the show and how they think.

It's hard to tell if there is some material business plan Anthropic has with these changes or if the people in charge of defining the spec are just kind of out of touch, have non-technical bosses, and have managed to politically disincentivize other engineers from pointing out basic realities.


I use it basically as a cache: I create local artifacts that are fast to filter/query and easy to paginate on the client (which is to say, in the MCP server).

Serious question, as I’m starting to go through this process myself -

Is it possible for the customer to provide their own bearer tokens (generated however) that the LLM can pass along to the MCP server? This is the closest thing to workable security I've seen. I don't know how well user-supplied tokens are supported by chat GUI/web clients, but it should be possible when calling an LLM through an API style call, right (if you add additional pass-thru headers)?


The LLM doesn't intervene much, actually; it just says which tool to call. It's your MCP implementation that does the heavy lifting. So yeah, you can always shove a key somewhere in your app context and pass it to the tool call. But I think the point of the other comments is that the MCP protocol is kinda clueless about how to standardize that within the protocol itself.
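Something like this is all the "heavy lifting" amounts to on the wire: the client (not the LLM) holds the user's token and attaches it to each `tools/call` request. Endpoint URL, tool name, and token here are placeholders:

```python
import json
import urllib.request

def call_tool(endpoint, tool_name, arguments, bearer_token):
    """Build an MCP-style tools/call request carrying a caller-supplied
    bearer token. The endpoint and header usage are illustrative, not
    taken from any particular client implementation."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {bearer_token}",
        },
        method="POST",
    )

req = call_tool("https://mcp.example.com/rpc", "search",
                {"q": "hello"}, "sk-test")
print(req.get_header("Authorization"))  # Bearer sk-test
```

The token never needs to pass through the model at all; the open question the thread is circling is only how clients and servers agree on this mechanically.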

I think an important thing to note is the MCP client is a distinct thing from the ‘LLM’ architecturally, though many LLM providers also have MCP client implementations (via their chat ui or desktop / cli implementations).

In general, I’d say it’s not a good idea to pass bearer tokens to the LLM provider; keep that in the MCP client. But your client has to be interoperable with the MCP server at the auth level, which is flaky at the moment across the ecosystem of generic MCP clients and servers, as noted.


> but should be possible when calling an LLM through an API style call, right (if you add additional pass thru headers)

Nope. I assumed as much and even implemented the bearer token authentication in the MCP server that I wanted to expose.

Then I tried to connect it to ChatGPT, and it turns out NOT to be supported at all. Your options are either no authentication whatsoever or OAuth with dynamic client registration. Claude at least allows static OAuth registration (you supply a client_id and client_secret).
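For anyone who hasn't hit it yet: dynamic client registration (RFC 7591) means the client registers itself at runtime instead of you pre-creating credentials. A rough sketch of what the registration request looks like, with a made-up endpoint and metadata:

```python
import json
import urllib.request

# Hypothetical registration endpoint; real servers advertise theirs
# via OAuth authorization server metadata (RFC 8414).
REGISTRATION_ENDPOINT = "https://auth.example.com/register"

def build_registration_request():
    """RFC 7591 dynamic client registration: the client POSTs its own
    metadata and gets back a client_id (and possibly a secret)."""
    metadata = {
        "client_name": "my-mcp-client",
        "redirect_uris": ["http://127.0.0.1:8765/callback"],
        "grant_types": ["authorization_code"],
        "token_endpoint_auth_method": "none",
    }
    return urllib.request.Request(
        REGISTRATION_ENDPOINT,
        data=json.dumps(metadata).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_registration_request()
print(req.get_method())  # POST
```

Which is exactly why "just use DCR" is a big ask compared to pasting a static bearer token: the server now has to run a whole registration endpoint.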


I don’t see the issue so much as the deterministic precision of an LLM, but the lack of observability of spreadsheets. Just looking at two different spreadsheets, it’s impossible to see what changes were made. It’s not like programming where you can run a `git diff` to see what changes an LLM agent made to a source code file. Or even a word processing document where the text changes are clear.

Spreadsheets work because the user sees the results of complex interconnected values and calculations. For the user, that complexity is hidden away and left in the background. The user just sees the results.

It would be a nightmare for most users to validate the changes an LLM made to a spreadsheet. There could be fundamental changes to a formula that could easily stay hidden.

For me, that's the concern with spreadsheets and LLMs - which is just as much a concern with spreadsheets themselves. Try collaborating with someone on a spreadsheet for modeling and you'll know how frustrating it can be to figure out what changes were made.
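The frustrating part is that a cell-level diff is trivial once the sheets are in a comparable form; the hard part is that .xlsx isn't text, so `git diff` sees nothing. A toy sketch, representing sheets as {cell: value} dicts (the data is made up for illustration):

```python
def sheet_diff(before, after):
    """Report cell-level changes between two sheets given as
    {cell_ref: value} dicts - the kind of change a binary diff hides."""
    changes = []
    for ref in sorted(set(before) | set(after)):
        old, new = before.get(ref), after.get(ref)
        if old != new:
            changes.append((ref, old, new))
    return changes

before = {"A1": "Revenue", "B1": 1200, "B2": "=B1*0.1"}
after  = {"A1": "Revenue", "B1": 1200, "B2": "=B1*0.12"}  # quiet formula edit
print(sheet_diff(before, after))  # [('B2', '=B1*0.1', '=B1*0.12')]
```

The displayed values might look identical in both files; only a formula-level diff like this surfaces the change, which is exactly the observability gap being described.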


I’ve seen Raspberry Pi based KVMs that do just this - draw power from PCI to operate. Except they still usually require cables to the HDMI/USB ports on the computer. I suspect you’d like the whole thing to be on the card, without cables.

Example: https://geekworm.com/collections/pikvm (but I think this still requires separate power)

To do this, wouldn’t you effectively need to make a graphics card (VGA would work) where a separate chip could read the screen buffer? And somehow get this card to display preferentially over the on-board video card?

I’m sure the all in one card version exists, but honestly a cabled version seems more robust (w/o vendor support that is).


I have a desktop that I'm using as a server box; I'd like to avoid plugging in a GPU just to change BIOS options or debug a boot failure.

> To do this, wouldn’t you effectively need to make a graphics card (VGA would work) where a separate chip could read the screen buffer? And somehow get this card to display preferentially over the on-board video card?

If you do basic VGA (and UEFI), that'd be plenty for most. If it had a local output, it'd be great for systems without video on the CPU (AM4 non-APUs, but also others).


Unless you can submit an interactive Slurm job and get exclusive access to an H100 for a few hours of dedicated time. If the cluster is overloaded, it’s hard to get those to run when you’d like, but there are still ways. You do have to be patient, though.

But it’s still not quite like exclusive access to resources when you want them. So I can see it from both ways.


I'm not entirely sure how their commercial offering works. It looks like it's a commercial fork of MinIO, but I wasn't able to find anything about assigning copyright for pull requests in their GitHub. (I didn't look that hard.)

But, if the main product is 100% F/OSS AGPL, how are they accepting code from outside contributors and still maintaining a private enterprise offering under (presumably) a different license?


You assume competence. Minio originally tried to argue AGPL infects client code over the network.

Just let the company fade away..


That’s not what it says. It’s pretty clear…

> Any contribution of any LLM-generated content will be rejected and result in an immediate ban for the contributor, without recourse.

You can argue it’s unenforceable, unproductive, or a bad idea. But it says nothing about unreviewed code. Any LLM-generated code.

I’m not sure how great of an idea it is, but then again, it’s not my project.

Personally, I’d rather read a story about how this came to be. Either the owner of the project really hates LLMs or someone submitted something stupid. Either would be a good read.


The linked thread includes the story

The comments I was reading at Ars Technica suggested that adding transponders to balloons was off-limits. The argument was that balloon operators wanted to add transponders, but the FAA resisted, worrying that there would be too many non-plane transponders for pilots/ATC to keep track of.

I obviously don't know which is right, but it does show that there is definitely confusion out there about the issue.


Peacetime = When not actively under a sustained attack by a nation-state actor. The implication being, if you expect there to be a “wartime”, you should also expect AWS cloud outages to be more frequent during a wartime.

Don't forget stuff like natural disasters and power failures...or just a very adventurous squirrel.

AWS (over-)reliance is insane...


What about being actively attacked by a multinational state or an empire? Does that count or not?

Why people keep using "nation-state" term incorrectly in HN comments is beyond me...


I think people generally mean "state", but in the US-centric HN community that word is ambiguous and will generally be interpreted the wrong way. Maybe "sovereign state" would work?

As someone with a political science degree whose secondary focus was international relations: "nation-state" has a number of different definitions, and (despite the fact that dictionaries often don't include it) one of the most commonly encountered for a very long time has been "one of the principal subjects of international law, held to possess what is popularly, but somewhat inaccurately, referred to as Westphalian sovereignty". (There is a historical connection between this use and the "state roughly correlating with a single nation" sense that relates to the evolution of "Westphalian sovereignty" as a norm, but that's really neither here nor there, because the meaning would be the meaning regardless of its connection to the other meaning.)

You almost never see the definition you are referring to used except in the context of explicit comparisons of the different bases and compositions of states, and in practice there is very close to zero ambiguity about which sense is meant. Complaining about it is the same kind of misguided prescriptivism as complaining (also popular on HN) about the transitive use of "begs the question" because it has a different sense than the intransitive use.


It sounds more technical than “country” and is therefore better

To me it sounds more like saying regime instead of government, gives off a sense of distance and danger.

Not really. Nation-state-level actor: a hacker group funded by a country, not necessarily directly part of that country's government, but kept at arm's length for deniability purposes. For instance, hacking groups operating from China, North Korea, Iran, and Russia often do this with the tacit approval of, and often funding from, the countries they operate in, but are not part of the 'official' government. Obviously the various secret services, insofar as they have personnel engaged in targeted hacks, are also nation-state-level actors.

It could be a multinational state actor, but "nation-state" is the most commonly used term, regardless of accuracy. You can argue over whether or not the term itself is accurate, but you still understood the meaning.

It makes a lot more sense if they had a typo for "peak".


Not an example we want to cite for prowess of productivity with WordStar, given Martin's throughput as a writer over the last couple of decades.

I look at him more as an example of someone who is committed to his process and tools.
