Hacker News
steinvakt2 · 3 months ago | on: Open models by OpenAI
Wondering the same about my M4 Max with 128 GB.
jcmontx · 3 months ago
It should fly on your machine.
steinvakt2 · 3 months ago
Yeah, it was super quick and easy to set up using Ollama. I had to kill some processes first to avoid memory swap, though (even with 128 GB of memory). So a slightly more quantized version may be ideal, at least for me.
Edit: I'm talking about the 120B model, of course.
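The sizing concern above can be sanity-checked with a rough back-of-envelope calculation. This is a sketch, not a measurement: it counts only weight storage at a few assumed bit-widths (the ~4.25 bits/weight figure is an illustrative value for 4-bit formats with quantization overhead), and ignores KV cache, activations, and OS headroom, which is why a 120B model can still push a 128 GB machine into swap.

```python
# Back-of-envelope weight memory for a 120B-parameter model at
# different quantization levels. Illustrative only: real runtimes
# also need KV cache, activations, and OS headroom.

def weights_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GiB for a given bit-width."""
    total_bits = params_billion * 1e9 * bits_per_weight
    return total_bits / 8 / 2**30  # bits -> bytes -> GiB

for label, bits in [("fp16", 16), ("8-bit", 8), ("~4-bit", 4.25)]:
    print(f"{label:>8}: ~{weights_gib(120, bits):.0f} GiB for 120B params")
```

At fp16 the weights alone (~224 GiB) would not fit in 128 GB at all; at ~4 bits they drop to roughly 60 GiB, which fits but leaves limited headroom once everything else is loaded, matching the experience described.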