Hacker News

Does this work with (tool use capable) models hosted locally?


Hi - author of the post. Yes it does! The "build a search agent" example can be used with a local model. I'd recommend trying qwen3 or gpt-oss.


Very cool, thank you!

Looking forward to trying it with a few shell scripts (via the llm-ollama extension for the amazing Python ‘llm’) or Raycast (the lack of web search support for Ollama has been one of my biggest reasons for preferring cloud-hosted models).


Since we shipped web search with gpt-oss in the Ollama app, I've personally been using that a lot more, especially for research-heavy tasks that I can shoot off. Plus, with a 5090 or the new Macs it's super fast.


I don't think Ollama officially supports any proper tool use via its API.


Huh, I was pretty sure I used it before, but maybe I’m confusing it with some other python-llm backend.

Is https://ollama.com/blog/tool-support not it?


It depends on the model. DeepSeek-R1 says it supports tool use, but its system prompt template doesn't include the tool callouts. YMMV
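For reference, tool use with a local model boils down to sending a function schema alongside the chat request and routing any tool calls the model emits back to your own code. A minimal sketch of that plumbing (the model name, `get_weather` helper, and dispatch logic are illustrative; the schema shape is the OpenAI-compatible format described in the Ollama tool-support post linked above):

```python
def get_weather(city: str) -> str:
    """Toy tool: a real agent would call a weather API here."""
    return f"Sunny in {city}"

# Schema advertised to the model via the "tools" field of the chat request.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

AVAILABLE = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching Python function."""
    fn = AVAILABLE[tool_call["function"]["name"]]
    return fn(**tool_call["function"]["arguments"])

# With a local Ollama server and a tool-capable model pulled, the round
# trip would look roughly like (not run here):
#   import ollama
#   resp = ollama.chat(model="qwen3", messages=[...], tools=TOOLS)
#   for call in resp.message.tool_calls or []:
#       dispatch({"function": {"name": call.function.name,
#                              "arguments": call.function.arguments}})

print(dispatch({"function": {"name": "get_weather",
                             "arguments": {"city": "Oslo"}}}))
# → Sunny in Oslo
```

Whether the model actually emits tool calls depends on its prompt template, which is the caveat raised above.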



