Help me improve my Linux development environment?
February 27, 2024 4:50 PM   Subscribe

I want to boot into my laptop and use various Linux distributions already customized with the packages I want and need: terminal and so on configured, etc. Sometimes I want to snapshot the system or revert back to base. The key is that I want to install something like Plasma 6 or a nice graphical window manager without all the stuff I don't need, with the GPU and WiFi configured and packages plus applications set up and pretty. I looked at one option and it seems to do what I want, but it is very complex. I basically don't want distros bundling LibreOffice and the like; I want things to "just work," let me play around, and then revert. If I need a kernel upgrade, I want to do that by booting a new distro rather than upgrading in place. What are my options?

Basically I like to play around with new distros; sometimes I need custom distros, but oftentimes I spend way too much time setting up a new machine, and my development workflow is usually "ignore Docker and a proper setup, just get it working, install random stuff, then clean it up if the idea works." I don't want LibreOffice or apps like that, but I do need things like WiFi obviously, something like Plasma 6 or a lightweight alternative, plus my terminal, VSCode, and my base packages installed. I have a lot of questions:

1. How possible is this? Ideally I'd probably end up with one distro, and if it upgrades the kernel I'd like to just, for lack of a better term, "update the Dockerfile, build an ISO or equivalent, and start over." Any data I'd need to keep I'd rsync or similar to S3. Most of the time this is just me developing or playing around and not wanting to worry whether a canary release will break. Eventually, if I prove out what I'm doing, I set up a nice self-contained git repo, but this is at the stage of me basically hacking together a solution, then maybe branching off and trying something else.

2. This guy came up with ZFSBootMenu with Proxmox. Really I just want an easy-to-use distro builder where I list my apps and configurations, something along the lines of a Dockerfile, to build an ISO or equivalent. Ideally I could take things like VSCode/my terminal/etc. and move them to another distro if they're described in a helper YAML file or something similar. I'm unfamiliar with Proxmox, but as this is a daily-driver laptop I'd like, if at all possible, to have it set up to boot my default workstation as if it were native.

3. Assuming Proxmox is the way to go, supports NVIDIA drivers and the like, and feels "native," how much time do I have to spend configuring file systems? For example, ZFSBootMenu will have you lock the OS and then create a union file system on things like /usr and /var, which makes sense, as the root OS shouldn't really change.

4. As far as terminal setups go, I'm always confused about which is the prettiest/best one to use, as everyone is highly opinionated. Autocompletion, zsh, and Powerlevel10k have been my thing forever. I'm seeing fish, Starship, and it gets complex fast. I am not a purist at all, but it isn't 1995, so I've been looking at things like this setup, but... so many options, and I don't need anime girls or Diablo 3 backgrounds, just something nice and clean.

5. Any distro suggestions? I like the idea of Void Linux since it simply seems lightweight and has a nice package manager. Otherwise Debian, I guess.

Note: this is wide-ranging, and I used Proxmox as an example since it seems popular, but it looks like nested virtualization has severe performance issues? I guess I don't care; I just want something like a config file to define my system and the ability to revert back to snapshots.
posted by geoff. to Computers & Internet (12 answers total) 4 users marked this as a favorite
 
I'm confused. The first half of the question seems to assume you'd be changing distros frequently and the second half makes it sound like you're looking for one distro to do... something (act as a base for VMs?).
posted by hoyland at 7:13 PM on February 27 [1 favorite]


If this is what you're getting at, I have my dotfiles in a git repo, together with a Makefile with a target per distro (to account for different package managers and package names). The Makefile installs the packages I always want and uses stow to get the dotfiles in place.

make is generally available, as is git, and I'm up and running in a minute.
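To sketch the shape of it, the Makefile boils down to a per-distro dispatch plus a stow invocation; here's that dispatch as plain shell (the package names and dotfiles layout are examples, not my actual repo):

```shell
# Map a distro ID (the ID field from /etc/os-release) to its install command.
# Package names can differ per distro, hence one target/branch per distro.
pkg_install_cmd() {
  case "$1" in
    debian|ubuntu) echo "apt-get install -y" ;;
    fedora)        echo "dnf install -y" ;;
    arch)          echo "pacman -S --needed --noconfirm" ;;
    *)             return 1 ;;  # unknown distro: fail loudly
  esac
}

# Usage, from the dotfiles checkout:
#   sudo $(pkg_install_cmd "$(. /etc/os-release && echo "$ID")") git stow zsh tmux
#   stow zsh tmux   # symlinks e.g. ./zsh/.zshrc -> ~/.zshrc
```

stow works because each subdirectory mirrors the layout of $HOME, so one command links everything into place.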
posted by hoyland at 7:20 PM on February 27


I'm not 100% sure what you're looking for, but you may be interested in NixOS or Nix flakes. Nix does a lot of things, but one of the best is that it makes it possible to configure software environments fearlessly, knowing you can switch to another set of packages without breaking anything. You can play with and install experimental packages and if you don't like them, just say goodbye, without any worry that your system will ever be left in an inconsistent state.
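For a sense of what that looks like: a NixOS system is defined by a single declarative file, and a minimal fragment might be something like this (option names vary a bit between NixOS releases, and the package choices are just examples):

```nix
# /etc/nixos/configuration.nix (fragment)
{ config, pkgs, ... }: {
  networking.networkmanager.enable = true;          # WiFi "just works"
  services.xserver.enable = true;
  services.desktopManager.plasma6.enable = true;    # NixOS 24.05+; older releases use the plasma5 option
  environment.systemPackages = with pkgs; [ git vscode ];
}
```

You rebuild from this file to get a new "generation," and every previous generation stays in the boot menu, so reverting is just booting the old one.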
posted by dis_integration at 9:10 PM on February 27


Distrobox seems like it's the current champ for installing command-line versions of multiple distros. Graphical stuff does work, but I believe it requires the desktop environment underpinnings to be repeatedly installed for each container. I don't think I've tried a full-fat KDE install or anything, myself.

I mostly use it to export commands from extra or the aur to test niche projects. That way, I'm not wrecking a real Arch install. Especially since I'm usually in Debian or Fedora.

There are other tips, like cloning the containers or creating separate home folders that could cover some of what you need. (I'm also confused exactly what you need. Are you writing software for graphical environments?)
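For the record, the day-to-day workflow is only a few commands (container name is an example):

```shell
distrobox create --name arch-test --image archlinux:latest
distrobox enter arch-test        # drops you into a shell inside the Arch container
# From inside the container, expose a graphical app's launcher to the host:
distrobox-export --app some-gui-app
```

Your home directory is shared with the container by default, which is most of why it feels native.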
posted by Snijglau at 9:28 PM on February 27


As I read through your question, I too thought immediately of NixOS because its entire design is oriented toward pretty much exactly that kind of use case. I have not used it myself but if the Debian Testing installation I've been migrating from second-hand laptop to second-hand laptop for the last ten years ever digs itself into a hole that's going to take me more than half an hour to dig back out of, I definitely plan to.
posted by flabdablet at 11:20 PM on February 27


Sounds like you're touching on a few concepts or processes here? I'll put a few things forward that may or may not scratch the itch. Disclosure: I haven't used some of these in anger, for reasons I'll unpack after.

On immutable / reproducible distros or building on top of hypervisors so you can A/B your installs:

- You've described atomic/immutable distributions to a degree: Fedora Silverblue, VanillaOS, NixOS (technically declarative and reproducible rather than immutable, but similar outcome)
- You could also go down the QubesOS route (hypervisor host with virtualised guests). There are less security-focused approaches like Proxmox and other bare-metal hypervisors
- Seconding Distrobox above: if you don't care about GUIs and first-boot experiences, you can spin up individual distros on your parent machine and go nuts. It's a great docker/podman wrapper.
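As a concrete sketch of the atomic model (Silverblue here; these are standard rpm-ostree commands, the package name is just an example):

```shell
rpm-ostree install htop   # layers the package onto a new, separate deployment
systemctl reboot          # boot into the new deployment
rpm-ostree rollback       # didn't like it? flip back to the previous deployment
```

The running system is never mutated in place, which is what makes the revert cheap and safe.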

On specific distros:

- If you're going with NixOS (or integrating Nixpkgs into another distro) you maybe don't need to go full hypervisor on the host, as you get similar outcomes from the declarative/reproducible builds.
- Alpine _sort of_ does a level of this via `/etc/apk/world` (removing packages from world can remove them from the system, keeping everything neat). You can also use Nixpkgs here. However, you'll be dealing with musl, which is a tradeoff.
- If you're using Void you may as well use Arch, unless you have an objection to systemd? Void has some very cool ideas (I love runit, and it comes in a glibc flavour) but you're well off the beaten track. Arch expects you to "build from scratch" + has the AUR and a lot of other quality-of-life features.
- Or, if not targeting immutable distros, Debian or Fedora are well-trodden paths at this stage.
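The Alpine world file in action, for reference (package name is an example):

```shell
cat /etc/apk/world   # the plain-text list of packages you explicitly asked for
apk add ripgrep      # installs it and appends it to world
apk del ripgrep      # removes it from world and prunes now-orphaned dependencies
```

You can even edit the world file by hand and run `apk fix` to converge the system to it, which is about as close to "declarative" as a traditional package manager gets.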

On automation, my poisons of choice are:

- Justfiles for the automation (https://github.com/casey/just)
- Some bash scripting for the more nuanced stuff
- Dotfile management (I use my own custom scripts in this space, but there are lots of nice dotfile managers around)
- GitHub for version control on everything
- Over time I've automated everything from the installers (e.g., the Arch install disk can accept an install spec file for auto-config) through to fdisk and crypttab operations, SSH key injection via 1Password, etc.
- I keep a "last known" version of this on a disk I can physically plug in, or reference in a VM to build from
- It has a couple of hacks to get Dropbox and Backblaze locations bootstrapped and dropped onto the machine
- I heavily abuse Homebrew, Flatpak, and Snap packages to minimise the surface area of dealing with APT / custom deb repos / GitHub installers (the complexity adds up over time)
- Though I do have some scripts to manage GitHub-based installs where I need to check for and download new binaries (only if that's the only way to do it)
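To make the justfile part concrete, a bootstrap recipe can be as small as this (the package list and repo URL are placeholders, not my real setup):

```just
# Bootstrap a fresh box: packages first, then dotfiles.
pkgs := "git stow fish starship"

bootstrap: install-packages dotfiles

install-packages:
    sudo apt-get install -y {{pkgs}}

dotfiles:
    git clone https://github.com/you/dotfiles ~/dotfiles
    cd ~/dotfiles && stow .
```

Note each recipe line runs in its own shell, which is why the `cd` and `stow` sit on one line.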

Generally, if I blow away a box and need a new one, then apart from finding hardware I'm back at the desktop with content, photos, videos, even the download dir in the same state it was in, in about ~30 minutes.

On terminals:

- At one stage I had a monster bashrc (and corresponding vimrc)
- Then it was zshrc, fish configs, etc.
- These days I install a small handful of things and leave them mostly default:
  - Shell: Fish
  - Prompt: Starship.rs
  - Session manager: Zellij
  - Some other goodies like the Helix editor, exa, fzf, ripgrep, etc.

Default Fish + Starship do a lot of good without tweaks and perform well. The only real Fish config I carry is to trigger the relevant autocompletion setup.
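For reference, the whole prompt side of that setup is one line in Fish's config (this is the standard Starship init from its docs):

```fish
# ~/.config/fish/config.fish
starship init fish | source
```

Everything else (syntax highlighting, autosuggestions, completions) is Fish's out-of-the-box behaviour.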

Having said all of that, and with hand on heart zero implied judgement: I took a lot of this and carved it all down to run against an Ubuntu target after a ~decade of intense distro hopping, as it's a zero-sum / grass-is-greener game. No matter how many I've tried (and god help me it's a huge list; I'd struggle to name distros I haven't used at this point), it's never really changed that much.

At the end of the day (and I'll duck when I say this), 90% of distros aren't that different post-configuration, and it comes down to either i) hitting a real wall and needing to switch or ii) just love-and-hobby, which is actually a great reason, as the exposure eventually fills in your "Linux library".

I _maybe_ sometimes miss Arch (fresh packages + the AUR are actually great), but Arch also blew my fingers off a few times mid-assignment or mid-deadline with package conflicts or `pacman` warnings / news that I missed at a critical moment. Its lack of opinionated defaults can also really suck: no good power profiles, TRIM not enabled by default for SSDs (this can hurt), no good default AppArmor install, etc.

Ubuntu broadly sucks in comparison but it's also basically-macos-with-wonky-hardware at this point and mainstream enough that most things (at least in my sphere) get tested against it, so here I am.

All that aside, happy hunting!!
posted by muppetkarma at 3:58 AM on February 28 [3 favorites]


Basically I like to play around with new distros

Usually I advise people to pick a distro that meets their needs and has a broad userbase for testing and support, and become an expert in it. Being familiar with two distros can helpfully highlight what is distro-specific, but twenty distros usually isn't professionally useful. What is useful is understanding how, say, Ubuntu kernel upgrades happen, from patch to running on your machine. Most distros operate in the open, so learning this is feasible, and it's an important step toward finding your own balance of "works out of the box" and customization.

How possible is this? Ideally I'd probably end up with one distro and if it upgrades the kernel I'd like to just for lack of a better term "update the dockerfile, build an ISO or equivalent and start over."

This is pretty complicated, as distributions name and organize packages and dependencies differently. The old approach was basically "use Packer to build an image and run Puppet," but Docker and k8s seem to lean a little more... bespoke. Cue the "well then we'll ship your laptop" meme. I have a Chef cookbook (I think the cool kids now use Ansible?) for workstation setup, but it basically only supports one OS. It sets up my dotfiles repo, but I have to run mr on first login to grab all the chained repos, set up custom fonts, and oh-my-git.

As far as Terminal Managers go I'm always confused to get the prettiest/best to use one as everyone is highly opinionated.

Personally I use iTerm2 on macOS and Terminator on Ubuntu. Roughly comparable, and if you want tmux/screen/starship/oh-my-git, you can. Although I'm slowly moving to VSCode as my default, since most of the time I'm using the shell to inspect Kubernetes and edit some YAML git repo to fix it. Again, this is an area where investing time into a singular personal preference is rewarded over "going wide."
posted by pwnguin at 9:51 AM on February 28 [2 favorites]


I used to do this by keeping /home on a separate partition; when installing a new distro, I would back up and then partition so that my existing /home was reused. Editors, terminals, and shells are mostly configured by files in your /home.
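For anyone trying this today, the moving part is a single fstab line pointing the new install at the old partition (the UUID and filesystem type below are placeholders for your own):

```
# /etc/fstab fragment: mount the pre-existing home partition on /home
UUID=<your-home-partition-uuid>  /home  ext4  defaults  0  2
```

During the new install, just make sure the installer does not format that partition.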
posted by bdc34 at 11:51 AM on February 28


I have a chef cookbook (I think the cool kids now use ansible?) for workstation setup but it basically only supports one OS.

This is a good point from pwnguin I forgot to touch on as well - as they hinted, automation across multiple distros takes a bit of work.

Tools like Ansible / Chef abstract some of this (package installers typically know when to target apt vs pacman vs dnf, etc.); bigger changes like systemd vs runit, not so much. As called out, package names can be subtly different between distros, and with tools like the AUR as well.

Not a blocker, but if you know you're targeting many hosts + many distros it can be worth planning out how to get the right abstractions and code generation in place if you go down this path.

In my case I wrote a lot of code-gen (mostly in Go, some Bash) backed by config files and templates so when I added a package or service I didn't need to deal with creating the apt + systemd + distro specific bin and conf locations etc.

This was also useful across different machines on the same distro: you don't want servers picking up user app cruft from your installer, they have different security configs, etc.
posted by muppetkarma at 3:19 PM on February 28


Piggybacking on this to ask people who are comfortable with both: what's the point of learning to write an Ansible "playbook" instead of using a bash script to do the same work? Given its status as the kudzu of programming I am quite confident that bash will still be a thing ten years from now; can the same be said about Ansible?
posted by flabdablet at 12:41 AM on February 29


Usually I advise people to pick a distro that meets their needs and has a broad userbase for testing and support, and become an expert in it. Being familiar with two distros can helpfully highlight what is distro specific, but twenty distros usually isn't professionally useful.

OMG this. A billion years ago in software time, I worked in an enterprise web hosting facility, and thus I had actual professional reasons I needed to be able to log in and do work on machines running SunOS, Solaris (not the same), RedHat Linux (before RHEL was even a thing), and a couple random other Linux machines we had for various reasons. It was a nightmare. At the time the old school staff all liked ksh and all the Linux boxes had bash by default, so logging into any machine meant (at minimum) a couple minutes of figuring out which set of defaults it had and whether it used BSD or GNU options on common commands like ps. Even just hopping between different Linux distributions meant managing different RPM behaviors, and SunOS and Solaris similarly had some different options on common executables. NFS mounts were a special class of problem.

Don't put yourself through this unless you're actively making money by doing so. "Jack of all trades, master of none" won't help your career unless you're working on cross-platform builds of software that needs to run equally well on all of them.
posted by fedward at 11:32 AM on February 29


> Piggybacking on this to ask people who are comfortable with both: what's the point of learning to write an Ansible "playbook" instead of using a bash script to do the same work?

flabdablet: like lots of these tools, it probably doesn't matter at a small scale and matters a lot at a large scale.

My home installer stack is cobbled together with justfiles, bash, and Go that I'm intimately familiar with, and it has one user. Even for this small use case I know I'm duplicating some of the abstractions and quality-of-life features of tools like Ansible / Chef, but it's still "fine" for my little workflow.

If I had more than one user or a more significant homelab setup, I'd switch. Eventually you start solving for patterns Ansible already solves better, and there's no greater horror in infrastructure work than digging through someone else's 3kloc hive of bash scripts to figure out why a deploy broke.

For a lot of people who are already very experienced with Ansible, it's dirt simple to write up playbooks for this kind of thing, so they just do it.
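For comparison's sake, here's what the "install packages + drop dotfiles" job looks like as a playbook (the `package` and `copy` modules are real Ansible; hosts, package names, and paths are examples). The key difference from a bash script is that every task is idempotent and declarative out of the box:

```yaml
- hosts: workstation
  tasks:
    - name: Install base packages
      package:
        name: [git, stow, fish]
        state: present

    - name: Drop in fish config
      copy:
        src: config.fish
        dest: ~/.config/fish/config.fish
```

Rerunning it is a no-op when nothing has drifted, which is exactly the part a bash script has to hand-roll with guard checks.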
posted by muppetkarma at 1:14 PM on February 29


