• 3 Posts
  • 293 Comments
Joined 9 months ago
Cake day: February 14th, 2025

  • But it's always the dumbest fear.

    It's not like "I'm worried about some idiot POTUS destroying my livelihood through his own ego and a misunderstanding of economic policy" or "I'm not going to be able to afford health care when I inevitably get sick".

    It's always "if you let gay people get married then people will marry their pets, and that would be an atrocity because reasons".







  • services:
      qbittorrent:
        image: lscr.io/linuxserver/qbittorrent
        container_name: qbittorrent
        environment:
          - PUID=888
          - PGID=888
          - TZ=Australia/Perth
          - WEBUI_PORT=8080
        volumes:
          - ./config:/config
          - /srv/downloads:/downloads
        restart: unless-stopped
        network_mode: "container:wg_out"
    

    This is my compose.yml for a qbittorrent instance.

    The part you're interested in is the final line. There's another container running the WireGuard instance, called "wg_out". That network mode attaches this qbittorrent container to the WireGuard container's network stack, so all of qbittorrent's traffic goes through the tunnel.
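    For context, a minimal sketch of what the "wg_out" side might look like, assuming the lscr.io/linuxserver/wireguard image (the image, config path, and port choices here are illustrative assumptions, not a copy of my actual setup):

    services:
      wg_out:
        image: lscr.io/linuxserver/wireguard
        container_name: wg_out
        cap_add:
          - NET_ADMIN
        environment:
          - PUID=888
          - PGID=888
          - TZ=Australia/Perth
        volumes:
          # wg0.conf from your VPN provider goes in here
          - ./wg-config:/config
        ports:
          # because qbittorrent shares this network stack, its WebUI port
          # must be published on wg_out, not on the qbittorrent container
          - 8080:8080
        restart: unless-stopped

    One consequence of this pattern: any port you want to reach on the attached container has to be exposed on wg_out, since the attached container has no network stack of its own.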


  • I’d seen gluetun mentioned but didn’t know what it was for until a moment ago.

    I’ve heard of tailscale and at least know what that does but never used it.

    I personally have a Mullvad subscription. I have one container connected to it via WireGuard, and for any service I want behind the VPN, I just configure it to use that container's network stack.

    I’m not suggesting that my way is the best but it’s worked well for several years now.




  • Sorry I’m still not really sure what you’re asking for.

    I use Open Web UI, which is the worst name ever, but it's a web UI for interacting with chat-format gen-AI models.

    You can install that locally and point it at any of the models hosted remotely by an inference provider.

    So you host the UI, but someone else is doing the GPU-intensive "inference".
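    A minimal compose sketch of that split, running Open Web UI locally and pointing it at a remote OpenAI-compatible endpoint (OPENAI_API_BASE_URL and OPENAI_API_KEY are real Open Web UI settings; the URL and key values here are placeholders):

    services:
      open-webui:
        image: ghcr.io/open-webui/open-webui:main
        container_name: open-webui
        environment:
          # any OpenAI-compatible inference provider works here (placeholder URL)
          - OPENAI_API_BASE_URL=https://api.example.com/v1
          - OPENAI_API_KEY=changeme
        volumes:
          - ./open-webui:/app/backend/data
        ports:
          - 3000:8080
        restart: unless-stopped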

    There seem to be some models for this task available on Hugging Face, like this one:

    https://huggingface.co/fakespot-ai/roberta-base-ai-text-detection-v1

    The difficulty may be finding a model which is hosted by an inference provider. Most of the models available on Hugging Face are just the model weights, which you can download and run locally. The popular ones are hosted by inference providers, so you can just point a query at their API and get a response.
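    To make the "point a query at their API" part concrete, here's a rough Python sketch of calling the Hugging Face Inference API for a model like the one above. The endpoint shape and Authorization header are the documented pattern, but whether this particular model is actually served this way is exactly the open question, and the token is a placeholder:

    ```python
    import json
    import urllib.request

    API_BASE = "https://api-inference.huggingface.co/models/"

    def build_request(model_id: str, text: str, token: str) -> urllib.request.Request:
        """Build a POST request for the Hugging Face Inference API.

        Many repos only offer downloadable weights; whether a given model
        is hosted for inference depends on the provider.
        """
        return urllib.request.Request(
            API_BASE + model_id,
            data=json.dumps({"inputs": text}).encode(),
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
            method="POST",
        )

    if __name__ == "__main__":
        req = build_request(
            "fakespot-ai/roberta-base-ai-text-detection-v1",
            "Some text to classify.",
            "hf_xxx",  # placeholder token
        )
        # network call; needs a valid token and a hosted model to succeed
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp))
    ```

    If the model isn't hosted, the API returns an error instead of predictions, and you'd fall back to downloading the weights and running them locally.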

    As an aside, it’s possible or likely that you know more about how Gen AI works than I do, but I think this type of “probability table for the next token” is from the earlier generations. Or, this type of probability inference might be a foundational concept, but there’s a lot more sophistication layered on top now. I genuinely don’t know. I’m super interested in these technologies but there’s a lot to learn.