I built a custom GUI for coding tasks specifically, with built-in code context management and workspaces:

https://prompt.16x.engineer/

Should work well if you have 64 GB of VRAM to run SOTA models locally.
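
In case you're wondering how a GUI like this talks to a local model: the common pattern is an OpenAI-compatible local server. A rough sketch with Ollama, which is just one option and an assumption on my part (the model name is an example too):

    # Point the OpenAI SDK at Ollama's local OpenAI-compatible endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's default port
        api_key="ollama",  # placeholder; Ollama ignores the key
    )

    resp = client.chat.completions.create(
        model="qwen2.5-coder:32b",  # example local coding model
        messages=[{"role": "user", "content": "Write a unit test for parse_date()."}],
    )
    print(resp.choices[0].message.content)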



Great-looking GUI; I find simple black/white/boxy/monospace UIs very effective.


> Should work well if you have 64 GB of VRAM to run SOTA models locally.

Does anyone have this?

edit: Ah, it's a Mac app.


M4 Max with 128 GB RAM here. ;) Love it. A very expensive early Christmas present.


Yeah, Macs beat Windows machines at running LLMs locally, since unified memory lets the GPU address the full system RAM.

My app does support Windows though; you can connect to OpenAI, Claude, OpenRouter, Azure, and other third-party providers. It's just that running SOTA LLMs locally can be challenging there.
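
Many of those providers expose an OpenAI-compatible endpoint, so the client-side code barely changes between them. A rough sketch against OpenRouter (the model id is just an example, not necessarily what the app uses internally):

    # Same OpenAI SDK as for a local server, different base URL and key.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # your OpenRouter key
    )

    resp = client.chat.completions.create(
        model="anthropic/claude-3.5-sonnet",  # example model id
        messages=[{"role": "user", "content": "Explain this stack trace."}],
    )
    print(resp.choices[0].message.content)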


I'm pretty satisfied with my Linux NVIDIA GPU setup. I may not have as much memory on my card, but the speed is almost certainly competitive, if not outright faster. Furthermore, there are lots of techniques that mitigate the memory limit, like offloading/streaming layers in as needed, quantization, etc.
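
For example, partial offload of a 4-bit quantized model with llama-cpp-python; the path and layer count here are made up, so tune them to your card:

    # Put only as many layers on the GPU as VRAM allows; the rest
    # run from system RAM. Model path and n_gpu_layers are hypothetical.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-3-70b-instruct.Q4_K_M.gguf",  # 4-bit quant
        n_gpu_layers=40,  # -1 would offload everything
        n_ctx=4096,
    )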

It also handles actual training/fine-tuning better.



