Conversation
I use LLMs a lot when working, but never LLM-generated code. For my use cases (hardware, embedded, firmware) it's always wrong, but it is a good rubber duck.

It helps me understand things faster. Roughly 50% of a modern LLM's output regarding hardware assumptions, specifications and firmware is wrong.
@grillchen just get yourself a rubber duck then. Way better for the environment
@stefan not sure tbh. A locally hosted LLM is quite efficient compared to plastic.

Let's say a rubber duck weighs 50 g. Producing it creates roughly 250 g of CO2.

Energy consumption of a locally running LLM prompt is roughly 0.24 Wh, going by the Gemini figure from not-exactly-trustworthy Google (I'm aware they fudge the numbers at least a little bit).
That's roughly 0.08 g of CO2 per prompt (0.03 g according to Google, but let's assume a less green way of generating the power).

So I can do roughly 3,100 prompts (250 g ÷ 0.08 g) before I break even.

Not saying LLMs are environmentally friendly, but plastic isn't either.
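
If anyone wants to sanity-check that back-of-envelope math, here's a quick Python sketch using the numbers above; the grid intensity of ~330 g CO2 per kWh is just my assumption for a "less green" grid, not a measured value:

```python
# Back-of-envelope break-even between one rubber duck and local LLM prompts.
# All figures are rough estimates from the thread, not measurements.

DUCK_CO2_G = 250.0            # ~50 g of plastic duck, ~250 g CO2 to produce
ENERGY_PER_PROMPT_WH = 0.24   # Google's published median Gemini prompt figure
GRID_G_CO2_PER_WH = 0.33      # assumed "less green" grid: ~330 g CO2 per kWh

co2_per_prompt_g = ENERGY_PER_PROMPT_WH * GRID_G_CO2_PER_WH  # ~0.08 g CO2
break_even_prompts = DUCK_CO2_G / co2_per_prompt_g

print(f"CO2 per prompt: {co2_per_prompt_g:.3f} g")
print(f"Break-even after ~{break_even_prompts:,.0f} prompts")
```

With Google's own 0.03 g per prompt figure instead, the break-even stretches to roughly 8,300 prompts.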
@grillchen Okay, I will let that slide :neobun_llenn_sip_glare: (it's actually a sound argument)

But did you consider how much cuter a rubber duck is compared to an LLM?! (/hj)
@stefan Qwen's cute mascot enters the chat
@stefan (tbf I'm not sure if they calculated the training costs correctly. I doubt it. But I like LLMs as long as they run locally and are as open source as possible, which is rarely the case though.)