

Same. Super easy to do, easy to mirror repos, easy to keep offline…
My repo is fine, because it's not online. Keeping them online is just silly.
It does work if you have a personality cult and can get others to lie for you though…
Even just maintaining an “I was wrong” section on a website with an exhaustive list. And then they must print it out at election time and distribute to all voters.
Unfortunately, the real issue is that "I don't recall" will become the default response to all questions…
Then they have to formally retract their statement. Do it enough times and the pattern becomes clear.
It's certainly not the first place I'd look for advice, but it does make sense: they do test a lot of hardware. Try to avoid the absolute cheapest dongles; some of them are fake and behave weirdly. They are usually tested just enough to work on Windows, and nothing more, so they can struggle on Linux if the Linux driver does something unexpected.
I've had luck with an ASUS USB-BT500, but it really depends on whether it's legit or not, so good luck
For adapters, check out this list: https://www.home-assistant.io/integrations/bluetooth#known-working-high-performance-adapters
Not OP, but combustion byproducts/impurities mostly. Get an air quality sensor and watch it go mad when you start cooking.
The one real downside to induction is actually its speed. You can really easily burn your food very quickly if you're not careful. IKEA sells an induction hot plate for $40 AUD, well worth giving it a try.
What is openwebzine? Can’t find any info on it.
256GB of RAM seems well beyond standard self-hosting, what are you planning on running?!
I did create a fork and MR, and neither used your runner (sorry if that is what spooked you).
Developing locally and pushing remote also lets you sanitize what is public and what isn't. Keep your half-baked personal projects local, push the good stuff to GitHub for job opportunities.
I think it was when you created a merge request back: the original repo would then run the forked branch on the original repo's runners.
From what I can tell, it's now been much more locked down, so it's better, but still worth being careful about.
More discussion: https://www.reddit.com/r/github/comments/1eslk2d/forks_and_selfhosted_action_runners/
The other potential risk is that a GitHub Action's author maliciously modifies their code in a later version, but that is solved by version-pinning the actions.
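Rough sketch of what pinning looks like in a workflow (the SHA below is just a placeholder, not a real release commit):

```yaml
# Sketch of a pinned workflow; replace the placeholder SHA with the commit you actually audited.
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Reference the action by full commit SHA instead of a mutable tag like @v4,
      # so a later (possibly malicious) release can't silently run on your runner.
      - uses: actions/checkout@0000000000000000000000000000000000000000 # placeholder SHA
      - run: make test
```

Tools like Dependabot can still bump the pinned SHA for you when new releases come out, so you keep the safety without losing updates.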
I can't find it right now, but there used to be a warning about not self-hosting runners for public repos. Anyone could fork your repo, the fork would inherit your runners, and then they could change the pipeline to get RCE on your runner.
Has that been fixed?
I went to a completely private GitLab instead, with mirroring up to GitHub for anything that needed to be public.
Edit: seems to maybe not be an issue anymore, at the very least it doesn't seem to affect that repo. Still, for anyone else, make sure forks and MRs can't cause actions to run automatically on your runner, because that would be very bad.
This is my personal opinion, but you should add:
Unless there is a really good reason, don't rename your project. It only adds confusion, and users will get lost during the transition. It also makes them hesitant to try the new one: "What if they do it again and I get left behind?"
Pi-hole isn't Pi-specific either, and it still kept the name.
It could be that the process forked and crashed, which is why you got the notification, but that it kept working. It might also be a delayed notification?
I saw something similar happen on my partner's Ubuntu, with Zoom "crashing" multiple times while working. Didn't get to the bottom of it; the notifications eventually stopped after some reboots.
The internet is already federated, it's just called peering instead?
How is the “fraction of compute” being verified? Is the model available for independent analysis?
Are you running ZFS on multiple slow spinning disks? Might just be that they are taking too long to spin up?
The kernel boot time is very slow, that is probably worth investigating first, but I don't have any theories there :(
Still sounds like it could get quite messy if Google adds a feature, Qualcomm adds a fix to that feature, and then you need to add a fix on top of that. Does it work better in practice and just need to be seen to be understood?
Competence, Time and Direction are often quite hard to find in any professional team, let alone an open source team :D
Thanks, I’ve been on the fence for a while, doing the free trial now