I did ~1.5 years of only Soylent, then transitioned to 2 of my 3 daily meals being Soylent, which I've done for the last ~6-7 years.
I'm the healthiest I've ever been, but it does require discipline, exercise, and attention like anything else. Calories are calories: if you consume more than you burn, you'll poop a lot and gain weight. If you drink at a significant deficit (my 1.5 years was at 1,200 kcal/day), you'll poop once or twice a week, and it takes a few months of your body getting used to it before it's more than liquid.
As others have said though, it’s a deceptively dehydrating liquid. You absolutely still need to drink water, and your water intake will largely dictate how much you pee.
Outlook being on that list is crazy.
Depends on where you work and what their policies are. My work does have many strict policies on following licenses, protecting sensitive data, etc.
My solution was to MIT-license and open-source everything I write. It follows all policies while still giving me the flexibility to fork/share the code with any other institutions that want to run something similar.
It also had the added benefit of forcing me to properly manage secrets, gitignores, etc.
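In practice that looks something like this (illustrative names, not my actual repo):

```bash
# secrets live in a file that git never sees...
echo "secrets.env" >> .gitignore

# ...and scripts pull them in at runtime
source ./secrets.env   # defines e.g. SMB_PASSWORD, API_TOKEN
```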
The proper DeepSeek R1 requires about 500 GB of RAM/VRAM to run; it's a 671-billion-parameter model, so even heavily quantized the weights alone run to hundreds of gigabytes, which is orders of magnitude more memory than modern phones have. The smaller models called "DeepSeek R1" are distillations of R1's output onto much smaller Llama/Qwen base models, not the real DeepSeek model that everyone is talking about.
I use Ansible on one of my side projects; I use Puppet at work. It's the same reason I use raw Docker and not Rancher+RKE2… it's not about learning the abstractions; it's about learning the fundamentals. If I wanted a simple abstraction I'd have deployed TrueNAS and Linuxserver containers instead of Taco Bell programming everything myself.
Sure. I have an R630 that is configured as an NFS server and a Docker host called vacuum. A script called install_vacuum.sh can, with a single command, build the server to my spec from a base install of Ubuntu 24.04. It has functions to install base packages from repositories, add new repositories, set up users, and create config files for NFS, SMB, fstab, crontab, etc. Once an NFS server exists on my network, any other server could be my Docker host.

My Docker host is set up from a script install_containers.sh. As before, it does all the things to get me a basic Docker host, firewalled and configured for persistence via my NFS server. It also has functions to create and start Docker containers for all of my workflows (Plex, webserver, CA, etc.), and if those containers don't exist, it will build a Docker image for the workflow based on a standardized format: (you guessed it) a bash build script for the container. Cron on whatever host runs Docker builds and updates the containers once a week; bare-metal servers update themselves nightly, rebooting when necessary via unattended-upgrades.
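Skeleton of what that looks like (function names illustrative, heavily trimmed):

```bash
#!/usr/bin/env bash
# install_vacuum.sh -- trimmed illustration, not the real script
set -euo pipefail
source ./common.conf   # shared variables: PKGS, USERS, NFS_EXPORT, ...

install_base_packages() { apt-get update && apt-get install -y "${PKGS[@]}"; }
setup_users()   { for u in "${USERS[@]}"; do id "$u" &>/dev/null || useradd -m "$u"; done; }
configure_nfs() { printf '%s\n' "$NFS_EXPORT" > /etc/exports && exportfs -ra; }
configure_smb() { :; }  # smb, fstab, crontab, etc. follow the same pattern

main() {
  install_base_packages
  setup_users
  configure_nfs
  configure_smb
}
main "$@"
```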
Basically, you break everything down into the simplest functions possible, define everything via variables in shared configuration that every script sources before running, and have higher- and higher-level functions call the lower-level ones until a single function cascades into a functioning system. Does that make sense?
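A toy version of the cascade (all names invented for the example):

```bash
# common.conf -- single source of truth, sourced by everything
NFS_SERVER="vacuum"
DOCKER_DATA="/srv/nfs/docker"

# lowest level: each function does exactly one thing
install_docker()    { apt-get install -y docker.io; }
mount_persistence() { mount -t nfs "${NFS_SERVER}:${DOCKER_DATA}" /data; }

# mid level: compose the low-level pieces
prepare_docker_host() { install_docker; mount_persistence; }

# top level: one call cascades into a working system
build_everything() { prepare_docker_host; }  # plus start_all_containers, etc.
```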
Have you started collecting your notes into scripts?
Not sure if many people do what I do, but instead of taking notes I write commented functions in bash. My philosophy is: if I can't automate it, I don't understand it. After a while you build enough automation to build your workstations, your servers, all of your VMs and containers, your workflows, etc., and can automate duplicating/redeploying them whenever required. One tarball and like 6 commands and I can build my entire home + homelab.
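So a "note" ends up looking something like this (made-up example):

```bash
# note: adding a samba share -- if I can't script it, I don't understand it
# usage: add_smb_share <name> <path>
add_smb_share() {
  local name="$1" path="$2"
  mkdir -p "$path"
  cat >> /etc/samba/smb.conf <<EOF
[$name]
   path = $path
   read only = no
EOF
  systemctl restart smbd
}
```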
For the average user you're definitely right, but I will say that for the sysadmin of headless systems, having a powerful CLI editor is a godsend. While it may seem arcane and unnecessary, learning vim is easier than managing remote X, or sshfs, or copying files to and from a system.
I didn't learn vim to be a contrarian; I learned it because it seemed (and still seems) to be the path of least resistance for many workflows.
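For example, vim can edit a remote file in place over SSH via its built-in netrw, so there's no remote X, no sshfs mount, and no copying back and forth (host and path invented here):

```bash
# double slash after the host = absolute path on the remote machine
vim scp://admin@server//etc/nginx/nginx.conf
```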
I say this as someone who never stopped looking until I found a gaming buddy in a partner. When every night is a date, LAN party, and sleepover all at once… I certainly can't tell you what's important to you; just never settle.
RAID is not a backup!
I run Ubuntu's headless server base install with a self-curated minimal set of GUI packages on top (X11, awesome, pulse, thunar), but there's no reason you couldn't install KDE with Wayland. Building the system yourself gets you really far in the anti-bloatware department, and the breadth of wiki/Google/GPT knowledge built around Debian/Ubuntu means you can figure out just about any issue. I do this on a ~$200 random old Dell from eBay + a 3050 6GB (slot power only).
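The whole "self-curated GUI" step is basically one command on top of the server install (these are my package picks; swap in your own):

```bash
# --no-install-recommends is what keeps the bloat out
sudo apt install --no-install-recommends xorg awesome pulseaudio thunar
```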
For lighter gaming I'll use the Ubuntu PC directly, but for anything heavier I have a Win11 PC in the basement whose only task is to pipe Steam over Sunshine/Moonlight.
It is the best of both worlds.
If there are any water pipes running through the second half of the house, you cannot let those exterior walls reach freezing temperatures. Whatever solution you go with needs to account for the entire space in some capacity.
The "problem" is that the more you understand the engineering, the less you believe Intel when they say they can fix it in microcode. Without writing an entire essay, the TL;DR is that the instability gets worse over time, and the only way that happens is if the applied voltages are breaking down dielectric barriers within the chip. That damage is irreparable: 100% of affected chips in the wild are irreversibly degrading over time.
Even if Intel can stop the bleeding with microcode, they can't repair the damage, and every chip that has ever run under the bad code will have a measurably shorter lifespan. For the average gamer, that lifespan sometimes won't even cover the warranty period.
Are you maybe thinking of https://distr1.org/ made by the i3 guy?
That's… not remotely true? Linux can absolutely install kernel drivers. If you mean running Windows games under Wine then sure, but then we're no longer talking apples to apples. You could do the same thing on Windows by running the game in a VM.
This is correct: in Windows, a driver is the most straightforward route to ring-0 access. It absolutely could, at any time, do exactly what CrowdStrike did. But so could Nvidia/AMD with GPU drivers, your motherboard manufacturer with chipset and RGB drivers, etc. It's not quite the smoking gun people make it out to be, as there are a lot of legitimate reasons to have this kind of system access.
The egregious part was that CrowdStrike users agreed to let a vendor bypass canary channels and deploy straight to their endpoints.
An endpoint is any PC/laptop/sign/POS/etc. It's a catchall term for anything that isn't a server; it basically refers to any machine that might be logged into and used by a non-IT user.
There's nothing magic about Soylent for weight loss. It's a simple equation of calories in and calories out. The advantages Soylent offered me were convenience in counting said calories, convenience in meal prep, and being reasonably certain my body was getting a decent distribution of micronutrients.