Local LLMs are how nerds justify a big computer they don’t need
It's pretty incredible that we're able to run all these awesome AI models on our own hardware now. From downscaled versions of DeepSeek to gpt-oss-20b, there are many options for many types of computers. But let's get real here: they're all vastly behind the frontier models available for rent, and thus for most developers a curiosity…
Context
Curiosity gets the better of them. I have an 8GB device; I can barely run a 1B parameter model. I get frustrated, but I have nothing to complain about. I can use ChatGPT in temporary mode, or incognito mode if I don't want the conversation attached to memory. I don't see using local models at scale as justifiable just yet.
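For a sense of why 8GB is tight even for a 1B model, here's a rough back-of-the-envelope sketch. The bytes-per-parameter figures are the standard sizes for common quantization levels; everything else (KV cache, runtime, the OS itself) is extra on top of the weights:

```python
# Back-of-the-envelope memory estimate for holding LLM weights in RAM.
# Figures are illustrative; actual usage varies by runtime and context length.

def weights_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate RAM needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / (1024 ** 3)

for label, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    gb = weights_memory_gb(1.0, bytes_per_param)
    print(f"1B params @ {label}: ~{gb:.2f} GB for weights alone")
```

Even at fp16 that's only about 1.9 GB for the weights, but the weights are just part of the story: the OS, the inference runtime, and the KV cache all compete for the same 8GB, which is why a 1B model already feels cramped.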
Source: techstructive-weekly-75