If every time you reinstall or format your Linux system you waste an entire afternoon getting it "just the way you like it", it's time to automate the process. A good post-installation script lets you, with a single command, update your system, install your key programs, apply security settings, and even fine-tune your desktop environment and dotfiles without going command by command.
The idea is very simple: prepare one or more reusable Bash scripts, save them to a bootable multiboot USB drive, your configuration repository, or an external hard drive, and launch them right after installing your distro. From then on, the system runs on autopilot, and you only need to watch for major errors. Let's see how to do this in different situations: from a typical fresh Ubuntu installation to more advanced scenarios with multiple distros, including automated installation tools.
What is a post-installation script in Linux and what is it used for?
A post-installation script is nothing more than an executable text file with Bash commands that sequentially perform everything you would do manually right after installing Linux. Instead of typing "sudo apt install this", "sudo apt install that", tweaking the firewall, installing snaps, and so on, you write it all once and reuse it as many times as you want.
This type of script usually handles the typical setup tasks: updating the distro, installing basic programs (browser, office suite, security tools, multimedia), enabling and configuring the firewall, adjusting some behavior of the graphical environment (for example, the GNOME dock) and, if you want, running other scripts that bring in your dotfiles, Vim configuration, terminal setup or whatever you use on a daily basis.
Create a basic post-installation Bash script
Let's start with the most direct approach: a Bash script in your home directory that you can copy into any new installation. This is useful for distro-hoppers, for those who frequently reinstall the same distro, or for having a reproducible setup when building systems for others.
First, create the script file, for example install.sh:
touch install.sh
Then make it executable so you can launch it with ./install.sh. Here chmod is commonly used with permissions for all users, and sudo is needed if the file is in a location that requires elevated permissions:
sudo chmod a+x install.sh
Now edit the file with your preferred editor. You can use Vim, Nano, Emacs, Gedit, Kate or whatever you have most readily available. A classic example using Vim would be:
sudo vim install.sh
At the beginning of the file there are two almost mandatory lines. The first indicates which interpreter should execute the script (the “shebang”):
#!/bin/bash
The second line is usually used to indicate the encoding. This is useful if you're going to use accents, ñ's, or other special characters in comments or messages:
# -*- ENCODING: UTF-8 -*-
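Putting both lines together, the top of the script might look like the sketch below. The small helper that reports how the script was launched is my own illustrative addition, not part of the original two lines:

```shell
#!/bin/bash
# -*- ENCODING: UTF-8 -*-
# Hypothetical helper: report whether we are running as root or as a
# regular user (in the latter case, sudo is used inside the script).
who_runs_me() {
  if [ "$(id -u)" -eq 0 ]; then echo "root"; else echo "user $(id -un)"; fi
}

echo "Post-install script started as: $(who_runs_me)"
```

This kind of early check is optional, but it avoids surprises when a step quietly fails for lack of permissions.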
Update the system according to the distribution
One of the first things everyone does after installing Linux is update the packages. That step should also be in your script, but keep in mind that each distro uses a different package manager. Normally, your script is designed for a specific family (for example, Debian/Ubuntu), and if you work with several, you should have a different script for each case.
For distributions in the Debian/Ubuntu family and their derivatives, the typical sequence might be:
sudo apt update && sudo apt -y upgrade
In other distributions things are different. For example, CentOS, Red Hat and the like have traditionally used yum:
sudo yum update
In newer versions of Fedora, dnf appeared as an evolution of yum, with a very similar syntax. Thus, for Fedora you could use:
sudo dnf update
In the case of openSUSE, the package system is normally managed with zypper, and the general update is done with:
sudo zypper update
In the world of Arch Linux, Manjaro, Antergos, KaOS and other pacman-based distros, the equivalent would be:
sudo pacman -Syu
If you also use external tools like yaourt (nowadays replaced by yay and other AUR helpers), in some environments it looked something like this:
yaourt -Syua
Others like Gentoo or Slackware have their own package managers and update commands. In that case, your script will be adapted to whatever you use (emerge, slapt-get, etc.), but the idea is always the same: bring the system up to date as soon as the post-installation starts.
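If you'd rather keep a single script and still cover more than one family, a hedged sketch like the following can pick the right update command by checking which package-manager binary exists. The function name is my own invention:

```shell
#!/bin/bash
# Hypothetical helper: print the update command that matches the package
# manager found on this system. Extend the chain for your own distros.
detect_update_cmd() {
  if command -v apt >/dev/null 2>&1; then
    echo "sudo apt update && sudo apt -y upgrade"
  elif command -v dnf >/dev/null 2>&1; then
    echo "sudo dnf update"
  elif command -v yum >/dev/null 2>&1; then
    echo "sudo yum update"
  elif command -v zypper >/dev/null 2>&1; then
    echo "sudo zypper update"
  elif command -v pacman >/dev/null 2>&1; then
    echo "sudo pacman -Syu"
  else
    echo "unknown package manager"
  fi
}

detect_update_cmd
```

You could then run the printed command with `eval "$(detect_update_cmd)"`, although keeping one script per distro family, as suggested above, is usually simpler to maintain.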
Organize program installation by category
Once the system is updated, it's time to install all the software you need. To prevent the script from becoming chaotic, it's a good idea to classify the packages into logical groups and keep the comments clear. A typical outline might have categories like these:
- Utilities: system tools, compressors, monitors, etc.
- Internet: browsers, email clients, messaging.
- Games: Steam, Lutris, native games, etc.
- DEs / Desktops: graphical environments and plugins.
- Multimedia: video and audio players, image editors.
- Productivity: office suites, note-taking apps, task managers.
- Development: compilers, IDEs, debugging tools.
In the script, you can add comments to these sections, making it easy to locate each block. Something like this:
# Utilities
# Development
# Internet
# Games
# DEs and WMs
# Multimedia
# Productivity
Below each commented block, you add the appropriate installation commands for your distro. On a system based on Arch Linux, you could have something like this:
sudo pacman -S chromium
sudo pacman -S steam
sudo pacman -S gnome-shell gnome-extra
The goal is that, when you run the script, all the apps you usually use get installed. If at any point you want to change your selection (add or remove a program), just modify the list of packages in that category and you're done.
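One way to implement those categories is with one Bash array per group, sketched here with arbitrary package names and a dry-run helper (my own addition) that only prints the pacman command it would run:

```shell
#!/bin/bash
# Sketch: one array per category, so editing your selection means editing
# a single list. Package names are purely illustrative.
utilities=(htop unzip neofetch)
internet=(chromium thunderbird)
multimedia=(vlc gimp)

# Dry-run helper (hypothetical): prints the pacman command instead of
# running it; remove the echo to install for real.
install_group() {
  local name="$1"; shift
  echo "sudo pacman -S --noconfirm $*"
}

install_group "utilities" "${utilities[@]}"
install_group "internet" "${internet[@]}"
install_group "multimedia" "${multimedia[@]}"
```

The same structure works with apt, dnf or zypper: only the line inside `install_group` changes.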
Run the script and follow best practices
Once you have written and saved your install.sh script, from the terminal you just need to navigate to the folder where it's located and launch it. A convenient pattern is to combine changing the directory and running it:
cd /path/to/script && ./install.sh
Another very simple option for Ubuntu or Debian users is to invoke bash directly on the file:
bash script-name.sh
To avoid having to wait for confirmations, many installation commands use the -y parameter (or its equivalent), which answers the "Do you want to continue?" question automatically. This way the script can run from beginning to end without intervention, which is convenient when you launch it right after installing the system and leave it running updates, installations and cleanup while you do something else.
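A minimal sketch of that unattended pattern, assuming a log file location of my own choosing; the actual package commands are left as comments so the skeleton stays distro-neutral:

```shell
#!/bin/bash
# Fail fast on errors and undefined variables so an unattended run stops
# instead of silently continuing after a failure.
set -euo pipefail

LOG="$HOME/postinstall.log"   # hypothetical log location
log() { echo "[$(date +%H:%M:%S)] $*" | tee -a "$LOG"; }

log "Starting unattended post-installation"
# Each real step would use -y (or its equivalent), for example:
#   sudo apt update && sudo apt -y upgrade
#   sudo apt -y install vlc gimp
log "Finished"
```

The `tee -a` keeps the messages on screen and in the log, so you can review later what happened while you were away.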
Practical example of a post-installation script in Ubuntu

In a typical scenario with Ubuntu 22.04 LTS or later, it makes sense to create a script that combines APT and Snap, because some applications are only available as snaps. An example workflow could be this: update, install standard packages, clean the system, install key snaps, enable the firewall, and apply some desktop tweaks.
A simple skeleton of that type of script might contain blocks like this:
#!/bin/bash
# Update the package list
sudo apt update
# Install available updates without asking
sudo apt -y upgrade
# Remove packages that are no longer needed and clean up
sudo apt -y autoremove
sudo apt autoclean
# Install common applications via APT
sudo apt -y install vlc gimp clamav chkrootkit lynis
# Install common apps like Snaps
sudo snap install chromium brave
# Enable UFW (Linux kernel firewall)
sudo ufw enable
# Adjust the behavior of the GNOME dock
gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
With something this compact you're covering updates, cleanup, base software, security, and a minor usability tweak. Of course, you can extend this script to your liking: add development tool installation, backup configuration, scripts that download your dotfiles from Git, etc.
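As an example of such an extension, here is a hypothetical dotfiles block: the repository URL is a placeholder, and the clone itself is left commented so the sketch is safe to run anywhere:

```shell
#!/bin/bash
# Hypothetical extension: fetch your dotfiles after the base setup.
DOTFILES_REPO="https://example.com/you/dotfiles.git"   # placeholder URL
TARGET="$HOME/.dotfiles"

if [ -d "$TARGET" ]; then
  echo "dotfiles already present in $TARGET"
else
  echo "would run: git clone $DOTFILES_REPO $TARGET"
  # git clone "$DOTFILES_REPO" "$TARGET"
  # cp -v "$TARGET/.bashrc" "$HOME/.bashrc"
fi
```

With the comments removed, the block becomes a real clone-and-copy step; keeping dotfiles in Git is what makes the script reproducible across machines.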
Use advanced tools: Ansible, Chef and similar
If you work with multiple machines or need to replicate your environment across servers, laptops, and desktops, there comes a point where a simple Bash script falls short. That's where tools like Ansible, Chef and company come in, which allow you to describe the desired state of a system (installed packages, config files, active services) and apply it in a reproducible way.
Some users who are tired of always doing the same post-installation configuration in Ubuntu turn to these configuration management systems or to pre-made projects specific to post-installation. A real-world example is a repository dedicated to automating post-installation in Ubuntu, where Bash scripts are combined with package definitions, themes, fonts, and other preferences.
Even so, for many desktop cases the most practical solution remains a Bash script without external dependencies, especially if you're looking for something easy to carry on a USB drive or copy to a newly installed machine without setting up additional infrastructure.
Automation from within the installer itself: %post and Kickstart scripts
In the world of Red Hat-style distributions (CentOS, Rocky and derivatives), it is very common to use Kickstart for unattended installations. A %post section can be defined within a Kickstart file, which runs automatically upon completion of the base system installation.
The idea is that, if you have defined the network configuration in the Kickstart file itself, by the time the post-installation section runs, the network will already be up. There you can place any command you want to automate: installing additional packages, tweaking settings, downloading scripts, changing system messages, etc.
As a simple example, one could change the message of the day (MOTD) of the newly installed system by placing something equivalent to this in the %post section:
%post
echo "Welcome to your new server" > /etc/motd
%end
This approach of embedding a post-installation script in the installer is very powerful for massive deployments, since the entire process is 100% automatic: the installation starts, partitions and base packages are applied, post-installation scripts are executed, and the system is ready to use.
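Building on the same idea, a slightly fuller hypothetical %post section might log its own output, add packages and enable a service (the package and service names are illustrative, not from the original):

```shell
%post --log=/root/ks-post.log
# Extra packages on top of the base installation
dnf -y install vim-enhanced htop
# Make sure SSH comes up on first boot
systemctl enable sshd
# Greeting on login
echo "Welcome to your new server" > /etc/motd
%end
```

The --log option saves everything %post prints, which is invaluable when debugging an unattended installation you never saw run.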
Limitations when replicating installations between heterogeneous distributions
When you try to apply a single generic post-installation mechanism to very different distributions, that's when the headaches begin. Not all of them manage updates, dependencies, and patches in the same way, nor do they offer the same software versions in their repositories.
We must distinguish, on the one hand, a script that only lists packages to install and lets the distro's dependency manager resolve them (apt-get, or tools like zypper, dnf, urpmi), and on the other, a system that also intends to handle security patches, backports, and new versions of programs that the original distribution doesn't even provide.
In a distro like Debian, with thousands of packages and a vast ecosystem, the installer and tools like apt handle the software dependencies maintained by the distribution itself. In other distributions, especially commercial ones, the philosophy is different: they give you the set of packages and patches they've chosen to support, and beyond that you enter a realm where the resolution of dependencies and updates is not guaranteed.
An illustrative example: on an old SuSE 8.2, if you wanted to update PostgreSQL with apt to a version newer than the distribution's original one, you'd find that apt would first ask for the version included on the official CDs. You had to go into YaST, select the original package for installation from the corresponding CD, and only then would apt accept the extra update from its repositories. These kinds of quirks make it difficult to have a single tool that works without intervention in all cases.
Many distribution installers already include mechanisms to record package selections and reuse them on other machines (for example, saving the package list to a floppy disk or external media and loading it into a new installation). In these situations, the installer recalculates dependencies, warns you about what's missing, can resolve it automatically or ask you to accept additional packages, or even shows you the risky option of proceeding despite possible inconsistencies.
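On Debian/Ubuntu, the same "record and replay" idea can be done by hand with standard dpkg commands; a sketch (the guard only makes it safe to run on non-dpkg systems, and the file name is arbitrary):

```shell
#!/bin/bash
# Record the package selection of this machine into a plain-text list.
LIST="pkg-list.txt"

if command -v dpkg >/dev/null 2>&1; then
  dpkg --get-selections > "$LIST"   # record on the source machine
else
  : > "$LIST"                       # not a dpkg system; empty placeholder
fi

# On the freshly installed target machine you would then run:
#   sudo dpkg --set-selections < pkg-list.txt
#   sudo apt-get -y dselect-upgrade
```

This works best when source and target run the same release, since package names and versions in the repositories must match.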
When, in addition, we're talking about heterogeneous machine farms, with different hardware and software profiles, the complexity skyrockets. For a generic post-installation replication utility to work well across different distributions, either you package and maintain all the binaries, patches, and dependencies for each distro yourself, or the distributions themselves would have to provide official support for that unified system, which is unlikely in the current commercial context.
Post-installation scripts in custom environments and LFS
At the opposite end of the spectrum from pre-packaged distributions are the environments in which you build your Linux system yourself from scratch, in the style of Linux From Scratch (LFS). Here, post-installation is no longer a list of packages but becomes a very long series of guided steps: compiling tools, creating partitions, mounting file systems, adjusting the toolchain, configuring basic services, etc.
Following a detailed LFS guide or a more didactic approach will help you understand the system from the lowest layers: compilers, linkers, C libraries, shell, basic utilities, all the way to the kernel and booting with GRUB. The process can easily take several days (between 3 and 5) and requires patience and a certain level of Unix administration, but in return it gives you total control over what gets installed and how the system is assembled.
In a real-world case using Debian as the host system inside a VirtualBox virtual machine, the process begins by creating a VM with a disk of, for example, 30 GB, installing Debian 10 as a base, and fine-tuning the repositories in /etc/apt/sources.list to access the build packages (build-essential, linux-headers-amd64, etc.), and then installing the VBoxLinuxAdditions for complete integration with the host.
Once the base system is comfortable to work with (full screen, compilation dependencies resolved), a second virtual disk is added, also of fixed size, which is where the new custom Linux system will be installed. This second disk is partitioned from Debian (for example, /dev/sdb) with tools such as parted: a partition table of the msdos type is defined and primary and logical partitions (root, swap, /home) are created, following a scheme similar to that of the first disk.
These partitions are then formatted with the appropriate file systems, their UUIDs are obtained with utilities such as blkid, and entries are added to the host system's /etc/fstab to mount them at points such as /mnt/lfs and /mnt/lfs/home and to enable the swap. From there, the basic structure is built: directories such as $LFS/sources and $LFS/tools are created, a long list of packages is downloaded with wget using a wget-list file, and a dedicated user is created, for example lfs, with its .bash_profile and .bashrc adapted to the compilation environment.
The process continues with a checker (a version-check script) that verifies that all the necessary tools are present (Bash, Binutils, GCC, etc.), in many cases installing both the main package and its development variant (-dev or -devel). Key symbolic links are also created, such as making /bin/sh point to Bash, and packages are compiled one by one: binutils, gcc, glibc, tcl, expect, dejagnu, m4, ncurses, bison, bzip2, coreutils, diffutils, file, findutils, gawk, gettext, grep, gzip, make, patch, perl, Python, sed, tar, texinfo, xz and many others, in two rounds (temporary toolchain and final toolchain inside the chroot).
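The spirit of that checker can be sketched in a few lines; the tool list here is heavily abbreviated compared with the real LFS version-check script:

```shell
#!/bin/bash
# Minimal version-check sketch: confirm the host has the build tools the
# LFS process needs before starting (abbreviated, illustrative list).
check_tools() {
  local missing=0
  for tool in bash gcc make sed gawk; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: OK"
    else
      echo "$tool: MISSING"
      missing=1
    fi
  done
  return $missing
}

check_tools || echo "Some required tools are missing; install them first."
```

Running a check like this up front saves hours: discovering a missing compiler halfway through the temporary toolchain build is far more painful.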
Debugging symbols are usually stripped from the toolchain to save space, non-essential static libraries are removed, the toolchain is adjusted so that all new builds use the newly installed libraries, and the chroot environment is entered, preparing /dev, /proc, /sys, /run and creating nodes like /dev/console and /dev/null.
The standard directory tree is built within the chroot (/bin, /sbin, /usr, /var, /etc…), temporary symbolic links are generated for programs that do not yet exist, /etc/passwd and /etc/group are initialized with a minimal root user, and log files are prepared in /var/log with correct permissions. The locale is also configured with tools like locale-gen or equivalent, the time zone is selected with tzselect, and a link is created from /etc/localtime to the chosen zone in /usr/share/zoneinfo.
Another critical piece is the configuration of nsswitch.conf so that name resolution and other services work properly on the network, and of ld.so.conf together with ldconfig to register the paths of the new dynamic libraries. Then, with glibc and the final compilers in place, key packages are recompiled to ensure that everything links correctly.
In parallel, network access is configured on the new system: files such as /etc/sysconfig/ifconfig.eth0 (or the corresponding interface name) are created, the IP address, gateway, and netmask are defined, /etc/resolv.conf is generated with the appropriate DNS servers, and the hostname and /etc/hosts are set with the machine information.
Boot scripts and files are also configured, such as /etc/inittab, /etc/sysconfig/clock, /etc/sysconfig/console and /etc/sysconfig/rc.site, where details such as the default runlevel, system log handling, console language, keyboard map, and service behavior at startup are adjusted.
The shell experience is customized via /etc/profile and files of the type ~/.bash_profile and ~/.bashrc, plus /etc/inputrc for the Readline library (controlling keyboard shortcuts, command history behavior, etc.). An /etc/shells is prepared with the list of valid shells, along with an /etc/fstab adapted to the new partitions (root, home, swap, /boot if it exists).
Next up is compiling the Linux kernel: the chosen version is unpacked (for example, a linux-5.x.y tarball), key options such as CONFIG_DEVTMPFS and others related to udev and the VM hardware are reviewed starting from a base configuration file, the configuration menu (make menuconfig) is entered to adjust the default hostname and numerous features, and finally it is compiled (make, make modules_install) and the kernel and its components are installed in /boot.
The final step is usually to install and configure GRUB as the boot loader. Commands are executed to write GRUB to the MBR of the correct disk, generating a grub.cfg with entries for the new kernel and, if applicable, for other systems present. In this specific case, since the Debian host distribution was used only as "scaffolding", its virtual disk is eventually removed from the machine, so that the disk that was initially /dev/sdb becomes /dev/sda and the new Linux system boots on its own.
When everything is in place, identification files are created, such as /etc/lsb-release and /etc/os-release with the name of the new distro (for example, "S4viOS"), a pleasant bashrc is set up for root and future users, and a final filesystem unmount is performed before the big moment: rebooting the VM, choosing the corresponding entry in GRUB, and watching the Linux system you've built from scratch boot for the first time.
When is it worth going to such lengths, and when is a simple script sufficient?
This entire journey, from the simple APT or pacman script to setting up an LFS system with a custom kernel and GRUB, clearly illustrates the range of possibilities you have in Linux when it comes to automating post-installation.
For everyday use, the most practical thing for almost anyone is to keep one or more post-installation scripts per distribution, where you update, install and fine-tune what you always use; if you work with many machines or mixed environments, Ansible or Kickstart with %post sections comes into play; and if what you really want is to understand Linux from the inside out, the LFS-type path, although long, gives you a level of control that no commercial distro offers out of the box. Share this guide so more users learn about the topic.