    • Small 4B models like gemma3 will run on anything (I have it running on a 2020 laptop with integrated graphics). Don’t expect superintelligence, but it works for basic classification tasks, writing/reviewing/fixing small scripts, basic chat and writing, etc
    • I use https://github.com/ggml-org/llama.cpp in server mode, pointing it at a directory of GGUF model files downloaded from huggingface. I access it from the built-in web interface or the API (I wrote a small assistant script - rough sketch after this list)
    • To load larger models you need more RAM (preferably fast VRAM on a GPU, but DDR5 on the motherboard will work - it will just be noticeably slower). My gaming rig with a 16GB AMD 9070 runs 20-30B models at decent speeds. You can grab quantized (lower precision, lower output quality) versions of those larger models if the full-size/unquantized ones don’t fit. Check out https://whatmodelscanirun.com/
    • For image generation I found https://github.com/vladmandic/sdnext which works extremely well and fast with Z-Image Turbo, FLUX.1-schnell, Stable Diffusion XL and a few other models
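
    The assistant script is nothing fancy - here’s roughly what it boils down to, as an untested sketch. It assumes llama-server is already running on its default port with one of the GGUF files loaded, and talks to the OpenAI-compatible chat endpoint that llama.cpp exposes; the model filename and system prompt below are just examples:

```python
#!/usr/bin/env python3
"""Minimal chat helper for a local llama.cpp server (rough sketch).

Assumes the server was started with something like:
    llama-server -m models/gemma-3-4b-it-Q4_K_M.gguf --port 8080
(the model file is just an example of a quantized GGUF from huggingface)
"""
import json
import sys
import urllib.request

# llama-server exposes an OpenAI-compatible chat completions endpoint
API_URL = "http://127.0.0.1:8080/v1/chat/completions"


def ask(prompt: str) -> str:
    payload = {
        # llama-server answers with whatever model it has loaded;
        # the name here is essentially a label
        "model": "local",
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # standard OpenAI-style response shape: first choice, message content
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask(" ".join(sys.argv[1:]) or "Say hello"))
```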

    As for the prices… well, the rig I bought for ~1500€ in September is now up to ~2200€ (once-in-a-decade investment). It’s not a beast, but it works; the primary use case was general computing and gaming, and I’m glad it also handles local AI, but the cost of a dedicated, performant AI rig is ridiculously high right now. It’s not economically competitive yet against commercial LLM services for complex tasks, but that’s not the point. Check https://old.reddit.com/r/LocalLLaMA/ (yeah, reddit, I know) - think 10k€ of hardware to run ~200-300B models, not counting electricity bills
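
    To put those model sizes in perspective, here’s the back-of-envelope math I go by - a rule of thumb only (actual usage depends on the quantization format, context length and runtime overhead; ~4.5 bits per weight is roughly what a Q4 quant costs you):

```python
# Rough memory estimate for quantized LLM weights (rule of thumb only;
# KV cache and runtime overhead come on top).
def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9


print(model_size_gb(4, 4.5))    # ~2.3 GB  - small models fit almost anywhere
print(model_size_gb(30, 4.5))   # ~16.9 GB - already tight on a 16GB GPU
print(model_size_gb(300, 4.5))  # ~169 GB  - server-class memory territory
```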




  • I’m in the same boat, running a GitLab-bundled Mattermost instance for a small team.

    GitLab has not announced yet what will happen with the bundled Mattermost, but I guess it will either be dropped entirely or be hit by the new limitations (what will hit us the hardest is the 10,000-most-recent-messages limit: anything older than that will be hidden behind a paywall - including messages sent before the new limitations come into effect - borderline ransomware if you ask me)

    I know there are forks that remove the limitation; I may end up switching to one of those if the migration path is not too rough.

    I used to run a Rocket.Chat instance for another org; it became open-core bullshit as well. I’m done with this stuff.

    I have a small, non-federated personal Matrix + Element instance that barely gets any use (but gives me a feel for what it can do) - I don’t like it one bit. The tech stack is weird, the Element frontend gets constant updates/new releases that are painful to keep up with, and more importantly, the UX is confusing and bad.

    So I think I’ll end up switching this one to an XMPP server. Haven’t decided precisely which one, or which components around it. I used to run prosody with thick clients a whiiille ago and it was OK. Nextcloud Talk might also work.

    My needs are simple: group channels, 1-to-1 chat, posting files to a channel, ideally temporary many-to-many chats, and a decent web UI.

    Voice capabilities would be a bonus (I run and use a Mumble server and it absolutely rules once you’ve configured the client, but it doesn’t integrate properly with anything else, and there’s no web UI), as well as some kind of integration with my Jitsi Meet instance. E2E encryption would be a plus but not mandatory. Semi-decent mobile clients would be nice.

    For now, wait and see.