5 Common Linux Installation Mistakes and How to Avoid Them
Apr 12, 2026
I’ve been staring at Linux installers for 14 years now. 14 years. From bare-metal servers at tiny startups to massive enterprise AWS setups here in the Netherlands, you’d think I’d have basic OS provisioning down to an absolute science by now. But look, last month, while spinning up a test environment for the new RHEL 10 release, I caught myself just… staring at the UI. Here’s the thing—I almost made the exact same mistakes I used to make as a junior SysAdmin back in the day.
Trusting manual time configurations
RHEL 10 makes it painfully easy to just click “Time & Date.” You pick a region like Americas/Toronto from the dropdown and move on. Man, looking back, I really should have forced NTP (Network Time Protocol) syncs way, way earlier in my career.
I once spent three entire days—literally three whole days—debugging Jenkins pipelines that kept failing for no obvious reason. It turned out a single node’s clock had drifted by four minutes. Four minutes! The reality is, you should always, always toggle on the “Automatic date & time” feature. Sure, it won’t work on a heavily air-gapped network without an internal time server to sync against. Actually, speaking of air-gapped networks, I once had to configure a cluster in a literal underground bunker where we had to sneak updates in on encrypted thumb drives—but anyway, I digress, that’s a whole different nightmare. The point is, if you aren’t in a bunker, set up NTP right away.
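If you’d rather bake this in than rely on the GUI toggle, the same setting can go straight into a kickstart file. A minimal sketch, assuming the RHEL 9/10-style kickstart syntax where `timesource` took over from the old `timezone --ntpservers` flag; the pool name here is just the public default, so swap in your own infrastructure:

```shell
# Set the timezone and force time sync from the very first boot.
timezone Europe/Amsterdam --utc
# One timesource line per NTP server or pool. pool.ntp.org is a
# public default; point this at your own time source in production.
timesource --ntp-pool=pool.ntp.org
```

After the install, `chronyc tracking` will tell you whether the clock is actually locked and how far it is drifting.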
Blindly accepting “Automatic partitioning selected”
The RHEL GUI defaults to automatic partitioning under Installation Destination. Honestly, I think default LVM allocations are wildly overrated for production servers. If you just blindly click next, you end up with a massive /home directory that nobody actually uses. Meanwhile, /var gets completely choked out, which is where your container data actually needs to live. I remember one setup where that bit me hard—logs piled up, and everything ground to a halt before I even booted the first workload.
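If you do take over the layout, kickstart makes the intent explicit. Here is a sketch of the kind of split I mean; the sizes (in MiB) are pure assumptions, so tune them to your disk and workload:

```shell
# Explicit LVM layout: starve /home, feed /var (containers, logs).
reqpart --add-boot                 # /boot plus EFI/PReP parts as needed
part pv.01 --size=1 --grow         # hand the rest of the disk to LVM
volgroup vg_sys pv.01
logvol /     --vg=vg_sys --name=root --size=20480 --fstype=xfs
logvol /var  --vg=vg_sys --name=var  --size=51200 --fstype=xfs
logvol /home --vg=vg_sys --name=home --size=2048  --fstype=xfs
logvol swap  --vg=vg_sys --name=swap --size=4096
```

The point isn’t these exact numbers. It’s that /var gets the headroom, and because everything sits in one volume group, you can still lvextend later when reality disagrees with your guesses.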
Ignoring network installation sources
I used to manually mount thick ISOs instead of selecting “On the network” and typing in an http:// repository URL. I know this is slightly off-topic, but it really reminds me of when I spent an entire month trying to understand how Kubernetes works.
I agonized over GitOps tooling. I was totally convinced the right tool would magically fix my cluster architecture. Turns out, for my specific use case, Flux CD was basically no different than Argo CD. The fundamental principle—pulling truth from a centralized network source—is what actually matters. Same goes for your OS packages. Just point the installer to a Red Hat CDN or a local network repo and save yourself the headache. God, that month felt like a lifetime of trial and error.
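In kickstart terms it’s one line for the base install tree, plus extra `repo` lines if you need more. The URLs below are placeholders for whatever local mirror, Satellite, or CDN endpoint you actually use:

```shell
# Pull the install tree over HTTP instead of a mounted ISO.
url --url="http://repo.example.com/rhel10/BaseOS/x86_64/os/"
# Additional repositories made available during the install.
repo --name="AppStream" --baseurl="http://repo.example.com/rhel10/AppStream/x86_64/os/"
```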
Leaving KDUMP on autopilot
The installer usually auto-detects KDUMP memory settings, and I usually just leave it as is. Look, am I the only one who still isn’t 100% convinced that allocating that crash kernel memory is actually worth it on modern ephemeral cloud instances? It feels like a holdover from the bare-metal days, but maybe I’m missing something. I’ve skipped it on a few AWS spins without regret so far, but who knows.
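Whichever way you land on that question, you can at least make the decision explicit instead of letting the installer decide for you. The kickstart addon syntax, as I understand it (verify against the docs for your exact release):

```shell
# Either reserve crash-kernel memory deliberately...
%addon com_redhat_kdump --enable --reserve-mb='auto'
%end

# ...or, on throwaway cloud instances, skip the reservation entirely:
# %addon com_redhat_kdump --disable
# %end
```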
Still clicking through the UI
Even if you completely nail the RHEL 10 setup, click “Done” in the upper left corner, and get a totally perfect system… why are you still clicking? Just write a Terraform script. I keep telling myself that, yet here I am, mouse in hand like it’s 2012.
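Until the Terraform habit sticks, there’s a middle step: replay the kickstart file the installer already wrote for you. Anaconda drops a record of every install at /root/anaconda-ks.cfg, and you can feed a cleaned-up copy back in via the installer’s kernel command line. The URL and filename here are placeholders:

```shell
# Boot-time kernel arguments for a fully hands-off install.
# Serve the kickstart file from any web server the installer can reach.
inst.ks=http://repo.example.com/ks/rhel10-base.ks inst.text console=ttyS0
```

Same answer file every time, zero clicks, and a much shorter path to “it works on every node.”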
Anyway, I need to get back to my AWS Solutions Architect renewal prep (the practice exams are brutal this year). Are you guys actually deploying RHEL 10 for any real workloads yet? Or is everyone just waiting until…
Go from reader to practitioner
Reading about it is one thing. Actually doing it—with guided labs and a structured curriculum—is another. That’s exactly what RHCSA Bootcamp (RHEL 10) - Arabic gives you.