Local LLMs are how nerds now justify a big computer they don’t need

Link: https://world.hey.com/dhh/local-llms-are-how-nerds-now-justify-a-big-computer-they-don-t-need-af2fcb7b

Context

Curiosity gets the better of them. I have an 8GB device; I can barely run a 1B parameter model. I get frustrated, but I have nothing to complain about. I can use ChatGPT in temporary mode, or incognito mode, if I don’t want a conversation attached to its memory. I don’t see running local models at scale as justifiable just yet.
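For rough intuition on why 8GB is tight, here’s a minimal back-of-envelope sketch (my own numbers, not from the linked post): the weights alone cost roughly parameters × bytes-per-parameter, before you count the KV cache, activations, and the OS itself eating into the same memory.

```python
# Rough estimate of memory needed just for model weights,
# ignoring KV cache, activations, and OS overhead.
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9

for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"1B params @ {label}: ~{weight_memory_gb(1, bpp):.1f} GB")
    print(f"7B params @ {label}: ~{weight_memory_gb(7, bpp):.1f} GB")
```

So a 1B model fits comfortably even at fp16 (~2 GB), but a 7B model at fp16 (~14 GB) is already out of reach on an 8GB machine, and even 4-bit quantization (~3.5 GB) leaves little headroom once the system and context cache take their share.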

Source: techstructive-weekly-75