
This really captures something I've been experiencing with Gemini lately. The models are genuinely capable when they work properly, but there's this persistent truncation issue that makes them unreliable in practice.

I've been running into it consistently: responses that just stop mid-sentence, not because of token limits or content filters, but because of what appears to be a bug in how the model signals completion. It's been documented on their GitHub and dev forums for months as a P2 issue.

The frustrating part is that when you compare a complete Gemini response to Claude or GPT-4, the quality is often quite good. But reliability matters more than peak performance. I'd rather work with a model that consistently delivers complete (if slightly less brilliant) responses than one that gives me half-thoughts I have to constantly prompt to continue.

It's a shame because Google clearly has the underlying tech. But until they fix these basic conversation flow issues, Gemini will keep feeling broken compared to the competition, regardless of how it performs on benchmarks.

https://github.com/googleapis/js-genai/issues/707

https://discuss.ai.google.dev/t/gemini-2-5-pro-incomplete-re...



Another issue: Gemini can’t do tool calling and (forced) json output at the same time

If you want to use application/json as the specified output in the request, you can’t use tools

So if you need both, you either hope it gives you correct json when using tools (which many times it doesn’t), or you make two requests: one for the tool calling, another for formatting

At least, even if annoying, this issue is pretty straightforward to get around
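For reference, the two-request workaround looks roughly like this. This is a sketch assuming the @google/genai JS SDK; the exact config field names may differ by version, and the prompt and JSON keys are made up:

    import { GoogleGenAI } from "@google/genai";

    const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

    // Request 1: let the model use its built-in tools. Plain-text output only,
    // since forced application/json can't be combined with tools.
    const grounded = await ai.models.generateContent({
      model: "gemini-2.5-pro",
      contents: "What changed in the latest Node.js LTS release?",
      config: { tools: [{ googleSearch: {} }] },
    });

    // Request 2: no tools, so application/json output is allowed again.
    const structured = await ai.models.generateContent({
      model: "gemini-2.5-pro",
      contents:
        "Reformat this answer as JSON with keys 'version' and 'highlights':\n" +
        (grounded.text ?? ""),
      config: { responseMimeType: "application/json" },
    });

    console.log(JSON.parse(structured.text ?? "{}"));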


Back before structured outputs were common among model providers, I used to have an “end result” tool the model could call to get the structured response I was looking for. It worked very reliably.

It’s a bit of a hack, but maybe that works reliably here?
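Roughly: declare a function whose parameters are the output schema you want, then read the arguments back off the model’s function call. A sketch against the @google/genai SDK (the tool name and schema here are made up, and field names may differ by version):

    import { GoogleGenAI, Type } from "@google/genai";

    const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

    // Hypothetical "end result" tool: its parameters are the schema you want back.
    const finalResultTool = {
      functionDeclarations: [{
        name: "final_result",
        description: "Call this exactly once with the finished answer.",
        parameters: {
          type: Type.OBJECT,
          properties: {
            title: { type: Type.STRING },
            summary: { type: Type.STRING },
          },
          required: ["title", "summary"],
        },
      }],
    };

    const response = await ai.models.generateContent({
      model: "gemini-2.5-pro",
      contents: "Summarize the linked article and call final_result when done.",
      config: { tools: [finalResultTool] },
    });

    // The structured output is whatever arguments the model passed to the tool.
    const call = response.functionCalls?.[0];
    if (call?.name === "final_result") {
      console.log(call.args); // { title: "...", summary: "..." }
    }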


You can definitely build an agent and have it use tools like you mention. That’s the equivalent of making 2 requests to Gemini, one to get the initial answer/content, then another to get it formatted as proper json

The issue here is that Gemini has support for some internal tools (like search and web scraping), and when you ask the model to use those, you can’t also ask it to use application/json as the output (which you normally can when not using tools)

Not a huge issue, just annoying


I think this might also have something to do with their very specific output requirements when you do use search (results have to be displayed in a predefined Google format).


Does any other provider allow that? What use cases are there for JSON + tool calling at the same time?


Please correct my likely misunderstanding here, but on the surface, it seems to me that "call some tools then return JSON" has some pretty common use cases.


Let's say you wanna build an app that gives back structured data after a web search. First a tool call to a search API. Then do some reasoning/summarization/etc on the data returned by the tool. And finally return JSON.


OpenAI, Ollama, DeepSeek all do that.

And wanting to programmatically work with the result + allow tool calls is super common.


Suppose there's a PDF with lots of tables I want to scrape. I mention the PDF URL in my message, and with Gemini's URL context tool, I now have access to the PDF.

I can ask Gemini to give me the PDF's content as JSON and it complies most of the time. But at times there's an introductory line like "Here's your json:". Those introductory lines interfere with using the output programmatically. They're sometimes there, sometimes not.

If I could have structured output at the same time as tool use, I could reliably use whatever Gemini spits out, since it would be pure JSON with no annoying intro lines.
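Until then, one way to paper over it is defensive parsing: strip any chatty preamble and pull the JSON out before parsing. A sketch in plain TypeScript (the heuristics are assumptions, not a real parser):

    // Prefer a fenced ```json block if the model emitted one; otherwise take
    // everything from the first '{' or '[' onward. Assumes nothing meaningful
    // trails the JSON; a robust version would balance brackets instead.
    function extractJson(text: string): unknown {
      const fenced = text.match(/```json\s*([\s\S]*?)```/i);
      const candidate = fenced ? fenced[1] : text;
      const start = candidate.search(/[\[{]/);
      if (start === -1) throw new Error("no JSON found in model output");
      return JSON.parse(candidate.slice(start).trim());
    }

    // extractJson('Here is your json:\n{"rows": 3}')  ->  { rows: 3 }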


OpenAI


Unfortunately Gemini isn't the only culprit here. I've had major problems with ChatGPT reliability myself.


I only hit that problem in voice mode, it'll just stop halfway and restart. It's a jarring reminder of its lack of "real" intelligence


I've heard a lot that voice mode uses a faster (and worse) model than regular ChatGPT. So I think this makes sense. But I haven't seen this in any official documentation.


This is more because of VAD - voice activity detection


What I am seeing from ChatGPT is highly variable performance, which I suspect is something they are doing to manage limitations of compute or costs. With Gemini, what I see is slightly different: more like a lower “peak capability” than ChatGPT’s “peak capability”.


I'm fairly sure there's some sort of dynamic load balancing at work. I read an anecdote from someone who had a test where they asked it to draw a little image (something like an ASCII cat, but probably not exactly that since it seems a bit basic), and if the result came back poor they didn't bother using it until a different time of day.

Of course it could all be placebo, but when you think about it intuitively, somewhere on the road to the hundreds of billions in datacenter capex, one would think there will be periods where compute and demand are out of sync. It's also perfectly understandable why now would be a time to be seeing that.


Small things like this, or the fact that AI Studio still has issues with simple scrolling, confuse me. How does such a brilliant tool still lack such basic things?


It's crazy how Google can create so many technically amazing products that still fall short just because of basic UI/UX issues.


I see Gemini web frequently break its own syntax highlighting.


The scrolling in AI Studio is an absolute nightmare and somehow they managed to make it worse.

It’s so annoying that you have this super capable model but you interact with it using an app that is complete ass


App was likely built by the same LLM...


Because they are moving fast and breaking shit.

Ask ChatGPT to output markdown or a PDF in the iOS or Mac app versus on the web. The web is often better; the apps will return nothing.


This is my perception as well.

Gemini 2.5 Pro is _amazing_ for software architecture, but I just get tired of poking it along. Sonnet does well enough.


ChatGPT also has lots of reliability issues


If anyone from OpenAI is reading this, I have two complaints:

1. Using the "Projects" thing (Folder organization) makes my browser tab (on Firefox) become unusably slow after a while. I'm basically forced to use the default chats organization, even though I would like to organize my chats in folders.

2. After editing a message that you already sent, you get to select between the different branches of the chat (1/2, and so on), which is cool, but when ChatGPT fails to generate a response in this "branched conversation" context, it will continue failing forever. When your conversation is a single thread and a ChatGPT message fails with an error, retrying usually works and the chat continues normally.


And 3)

On mobile (Android), opening the keyboard scrolls the chat to the bottom! I sometimes want to type while referring to something from the middle of the LLM's last answer.


Projects should have their own memory system. Perhaps something more interactive than the existing Memories, but projects need their own data (definitions, facts, draft documents) that is iterated on and referred to per project. Attached documents aren't it; the AI needs to be able to update the data over multiple chats.


It would also be nice if ChatGPT could move chats between projects. My sidebar is a nightmare.


You can drag and drop chats between projects


I know. I want the assistant to do it. Shouldn't it be able to do work on its own platform?


I wonder if this is because a memory cap was reached at that output token. Perhaps they route conversations to different hardware depending on how long they expect it to be.


When this happened to me it was because, I can only guess, the Gemini servers were overloaded. Symptoms: Gemini model, opaque API wrapper error, truncated responses. To be fair, the Anthropic servers are overloaded a lot too, but they give a clear error. I gave Gemini a few days on the bench and it fixed itself without any client-side changes. YMMV.


Half my requests get retried because they fail. I contributed to a ticket in June, with no fix yet.
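The retry itself is nothing clever, just exponential backoff around the call (a generic TypeScript sketch, not tied to any particular SDK):

    // Retry transient failures with exponential backoff: 1s, 2s, 4s, ...
    async function withRetry<T>(fn: () => Promise<T>, attempts = 4): Promise<T> {
      let lastError: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await fn();
        } catch (err) {
          lastError = err;
          await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** i));
        }
      }
      throw lastError;
    }

    // usage: const resp = await withRetry(() => ai.models.generateContent({ ... }));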


That used to happen a lot in ChatGPT too.


The latest comment on that issue is someone saying there's a fix available for you to try.


Yes, agreed, it was totally broken when I tested the API two months ago. Lots of "failed to connect" errors and very slow response times. Hoping the update fixes these issues.


It's been a lot better lately. Nothing like two months ago at all.


What happens if you ask it to please continue? Does it start over?


> I've been running into it consistently: responses that just stop mid-sentence

I’ve seen that behavior when LLMs of any make or model aren’t given enough time or allowed enough tokens.
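One way to tell the two apart is to check the finish reason on the response: MAX_TOKENS means you genuinely hit the cap, while a normal STOP on text that's visibly cut off is the bug people are describing here. A sketch assuming the @google/genai SDK shape (the truncation heuristic is a made-up placeholder):

    import { GoogleGenAI } from "@google/genai";

    const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

    // Crude placeholder heuristic: complete answers usually end with punctuation.
    const looksTruncated = (t?: string) =>
      !!t && !/[.!?`")\]]\s*$/.test(t.trim());

    const response = await ai.models.generateContent({
      model: "gemini-2.5-pro",
      contents: "Explain the tradeoffs of event sourcing in detail.",
      config: { maxOutputTokens: 8192 },
    });

    const finish = String(response.candidates?.[0]?.finishReason ?? "");
    if (finish === "MAX_TOKENS") {
      // Genuinely ran out of budget: raise maxOutputTokens or ask it to continue.
    } else if (finish === "STOP" && looksTruncated(response.text)) {
      // The model claimed a normal stop but the text is cut off: the bug above.
    }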


FWIW, I think GLM-4.5 or Kimi K2 0905 fit the bill pretty well in terms of being complete and consistent.

(Disclosure: I'm the founder of Synthetic.new, a company that runs open-source LLMs for monthly subscriptions.)


That’s not a “disclosure”, that’s an ad.



