Linux - Post-installation scripting...


RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
Or I remember why Linux makes me cry.

Are there any smartish utilities out there that make these easier to write, or is this still an entirely manual process?
Dependency hell... push, push, push, push, pop, pop, pop, pop is a bit of a pain in the ass.
 

RTM

Well-Known Member
Jan 26, 2014
I am not really sure what exactly you want to make easier, but if you are talking about writing small Bash scripts, then I suggest you find a good editor that you like. If you have a graphical desktop environment, look into GitHub's Atom editor; it's free and pretty much awesome.
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
What I'm trying to find is tooling that makes post-OS-installation setup less of a royal pain in the bum...
i.e....

While software not installed:
    make install a
    error... missing dependency b
    make install b
    error... missing dependency c
    make install c
    error... missing dependency d
... write everything down in a script


While script not right:
    reinstall OS
    test/debug script

Save the post-installation script to version control so that it won't be such a pain in the butt next time.

Then later:
Add a new piece of software to the base image, rinse and repeat.

I assume there is a better way, but I don't see one. There is definitely an opportunity for automation here, so I'm asking if there is tooling I'm not aware of, or a better process to establish a base image.

CentOS is not terribly generous with its build versions.
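Roughly what my post-install script ends up looking like (a sketch only - the package names and source path here are just examples):

Code:
#!/bin/bash
# post-install.sh - kept in git, safe to re-run on a fresh install
set -euo pipefail
# packages discovered one failed "make install" at a time (example names)
yum -y install gcc gcc-c++ make gnutls-devel
# anything with no package gets built from a checked-out source tree
(cd ~/src/some-app && ./configure && make && make install)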
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Are you compiling software by hand? Otherwise, the package manager on any modern Linux OS should automatically install any dependencies needed for what you are trying to do. Maybe post an example (real commands) of what you are trying to do.
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
For CentOS and any of the Red Hat / yum based distros, you can kickstart them and put your custom config in the %pre and %post sections, so that extra packages are installed and anything non-interactive and scriptable is taken care of.
You can also use that to make a "firstboot"-like script for more complex things or items that need some degree of user interaction. Make up your scripts and link them late in the boot process for the first boot; when they complete, remove them from the init.d or systemd configs so it never bugs you again.

I have done both styles at large scale in the past - %post sections to add and remove packages so you have a lean system with just what is needed, add in EPEL and other useful repos, configure IPMI, and take a system inventory and report it back to me via email or fancier automation tools. The %pre section is super handy for configuring hardware and software RAID in a predictable way every time. I have also used firstboot-like scripts to set up graphical programs, and in deployment situations where we didn't know IP info before shipping a server: a little ncurses app was built to pull the required info from the "admin" on site to get the box online, and once we could access it, the hook that started that script at boot was taken out of the process.

If you are building a lot of your own packages, then consider running your own repo and adding it into /etc/yum.repos.d/ to make it easy to pull in your custom-built software. And don't discount the quick and efficient option of tarring up the custom bits you need from a manually configured system and dropping them on a new system (for example, getting custom PHP / Perl / etc. builds with the right version of the Oracle libs and all the tns info to connect to your servers).
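A minimal sketch of a %post section along those lines (the repo URL and package names are placeholders, adjust to taste):

Code:
# ks.cfg - %post runs chrooted into the freshly installed system
%post --log=/root/ks-post.log
# add EPEL and the extra packages we always want (example names)
yum -y install epel-release
yum -y install vim-enhanced htop ipmitool
# point the box at an internal repo of custom-built rpms (hypothetical URL)
cat > /etc/yum.repos.d/internal.repo <<'EOF'
[internal]
name=Internal packages
baseurl=http://repo.example.com/centos/7/x86_64/
enabled=1
gpgcheck=0
EOF
%end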
 

canta

Well-Known Member
Nov 26, 2014
Modern Linux distros already handle dependencies nicely overall:

RHEL variants: yum
Debian variants: apt-get
SUSE variants: zypper
Slackware variants: pkgtools

The question, as everyone is asking, is:
are you compiling manually?
If yes, you need to gather which -devel packages are needed.

@Blinky 42
I think the question is about how to compile source code manually when missing -devel packages are detected.
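If an SRPM or spec file exists for what you're building, yum-builddep (from yum-utils) can pull the -devel packages in one go - a rough sketch, with example package names:

Code:
# yum-utils provides yum-builddep
yum -y install yum-utils
# install every build dependency declared in an SRPM or spec file
yum-builddep -y ffmpeg-*.src.rpm
# or chase the missing headers by hand, one -devel package at a time
yum -y install gnutls-devel libx264-devel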
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
Regarding missing devel packages to build things, that is sort of how it goes. You add missing packages to your base system to get whatever you need built, then hopefully package up, or at least tar up, the resulting binaries for future deployment. You shouldn't need the development packages for everything on a production system once you are able to build the binaries, though. If you are reinstalling the whole OS each pass, at least do it in a VM for pure speed, and keep a development and a non-development version of the VM around for testing the results.

If you are building something that is open source, you may be able to find an RPM/deb of it out in the wild and use the deps listed in that package as a starting point for what you need to install. If it was developed in-house, ask the folks who made it to document it better ;) Or at least pull the list of installed packages on their system with something like "rpm -qa", or look through the system logs for which packages & versions were installed.
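For example (a sketch - the package names are placeholders), pulling the dependency list out of an existing package or a known-good box:

Code:
# list the dependencies declared inside a downloaded rpm
rpm -qpR obs-studio-*.rpm
# ask yum to resolve the full dependency tree of a repo package
yum deplist obs-studio
# or dump everything installed on a known-good build box
rpm -qa --last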

Worse, however, are the beastly nightmare apps like ffmpeg/vlc that seem to only build with a specific mix of the right devel packages installed, and if you want non-free included you need to spin your own. In those cases, where you can't rely on stable devel libs from upstream, or the versions are too old to be useful, I personally like to manage the whole tree and all the deps separately in my source control, tied to specific known-working versions of the dependent libs, and then build up a binary package from that and tag the known-working combo in source control accordingly. That way you get something that is going to work the same on multiple boxes and can survive a yum update of the base OS, and you can pull in the 3rd-party sources and update them as needed to spin a new working binary or compile against a different distro (building and maintaining working C6 and C7 versions, for example).
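For the ffmpeg-style case, the pattern looks roughly like this (a sketch only - which configure flags you need depends entirely on your feature set):

Code:
# build pinned dependency versions and ffmpeg into an isolated prefix
PREFIX=/opt/ffmpeg-build
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig"
# ...build each dependency (x264, gnutls, ...) with --prefix="$PREFIX" first...
cd ffmpeg
./configure --prefix="$PREFIX" --enable-gpl --enable-libx264
make -j"$(nproc)" && make install
# tar up the prefix (or wrap it in an rpm) for deployment elsewhere
tar czf /tmp/ffmpeg-build.tar.gz -C /opt ffmpeg-build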
 

TuxDude

Well-Known Member
Sep 17, 2011
Sounds to me like you're doing it the hard/wrong way. Why are you compiling so many things from scratch? If you really do need to compile that many things, stop doing it on every box - one server (or VM) gets to be the build box with all of the compilers and other related tools installed; build rpm files for all these things and stick them into a local repo that your other servers can pull from. Yum does a good job handling dependencies - you just need to get everything into rpm packages first.
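The local-repo half of that is only a few commands (a sketch - the paths and hostname are made up):

Code:
# on the build box: collect your rpms and generate repo metadata
yum -y install createrepo
mkdir -p /var/www/html/localrepo
cp ~/rpmbuild/RPMS/x86_64/*.rpm /var/www/html/localrepo/
createrepo /var/www/html/localrepo
# on every other server: point yum at it
cat > /etc/yum.repos.d/localrepo.repo <<'EOF'
[localrepo]
name=Local build box
baseurl=http://buildbox.example.com/localrepo
enabled=1
gpgcheck=0
EOF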

Or if you really want to keep compiling everything, maybe look at using Gentoo instead of CentOS - then you will be compiling EVERYTHING from scratch, including re-compiling things as patches are released. But the Gentoo package manager handles dependencies for you, and it is quite easy to set up a local portage repository with your own custom code/patches and still have dependencies automatically resolved for you.

As to post-install scripting, imho that method is kind of obsolete now, replaced by config management. Stick Puppet on there (or Chef, Salt, Ansible, or whatever the flavor-of-the-week is) and let it take care of ensuring things are set up the way they should be. If CentOS is your preferred distribution, then maybe look at Katello - it is the upstream free/open-source project behind Red Hat's Satellite 6 product (Satellite <= 5.x was based on Spacewalk). It will do config management with Puppet, using Foreman as a web-based GUI for Puppet, will host local yum repos (both mirroring web repos locally to save bandwidth if you have lots of nodes to patch, and local custom repos with your own packages), and will also do most of what is needed for fully automated provisioning of either VMs (local or cloud-based) or bare-metal boxes.
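To make the config-management idea concrete, this is the kind of thing a line in a post-install script becomes (a toy example - the package name is arbitrary):

Code:
# apply a single resource without a puppetmaster; normally this lives in a manifest
puppet apply -e "package { 'htop': ensure => installed }"
# re-running it is a no-op if the package is already there - which is the point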
 

canta

Well-Known Member
Nov 26, 2014
@TuxDude
Compiling everything is not a good idea.

Many hours are wasted in the process.

In my daily work I create development builds in a VM and save a backup of the VM as a template.
When everything is on fire, I start from the template again.
Especially when keeping many versions in development that have differences.

I have 3 build VMs on my laptop for development, all running CentOS 6.5.


Yum and rpm are good enough for today's work.
 

TuxDude

Well-Known Member
Sep 17, 2011
@TuxDude
Compiling everything is not a good idea.

Many hours are wasted in the process.

In my daily work I create development builds in a VM and save a backup of the VM as a template.
When everything is on fire, I start from the template again.
Especially when keeping many versions in development that have differences.

I have 3 build VMs on my laptop for development, all running CentOS 6.5.


Yum and rpm are good enough for today's work.
The only reason I can think of for compiling a large amount of stuff is if you really need to be right on the bleeding edge for a bunch of packages. And if you really do need that, then maybe using Gentoo and compiling everything from scratch is a reasonable tradeoff for having a package manager that is smart enough to handle all the dependencies, do all the compile work for you, and handle things like installing the master git branch right from GitHub for package X while still automatically resolving all of the dependencies (even if a dependency is another git branch somewhere that has yet to be released). You might have to write a few ebuilds of your own (or copy/customize existing ones to point to a new git branch or add an extra .patch file). Gentoo involves a lot of waiting for things to compile - but I can't think of many other distros that can support version-9999 packages that just pull in the latest git code while still doing dependency resolution.
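For reference, pulling in a live -9999 package looks roughly like this (a sketch - the obs-studio atom is just an example, and whether a live ebuild exists depends on the package):

Code:
# live (-9999) ebuilds carry no KEYWORDS, so accept them explicitly
echo "media-video/obs-studio **" >> /etc/portage/package.accept_keywords
# portage clones the git repo and resolves dependencies as usual
emerge --ask =media-video/obs-studio-9999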

Otherwise, the proper way to do it really is to have a dedicated box/VM for building/compiling, and only the results of that work should end up on the rest of your servers. There is no good reason to have an entire build toolchain installed on every production server, and there are at least a few reasons not to (it's space-inefficient, a security risk, and promotes an architecture that is a pain to maintain).
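On the RPM side, mock is the usual way to do that cleanly on the build box - it rebuilds an SRPM in a throwaway chroot so the build deps never touch real systems (a sketch; the SRPM name is a placeholder):

Code:
# mock (from EPEL) rebuilds an SRPM inside a clean throwaway chroot
yum -y install mock
mock -r epel-7-x86_64 --rebuild obs-studio-*.src.rpm
# resulting rpms land under /var/lib/mock/epel-7-x86_64/result/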
 

canta

Well-Known Member
Nov 26, 2014
@TuxDude

You already answered that VMs help in certain scenarios.

My dev work:
We have multiple releases and some have different patches.
X will pick up 1 patch.
Y will pick up 123 patches.

The first time only, we build everything, mostly the kernel - for security, and because of unique patches due to running on non-standard hardware.

The main reason: each product must be supported for 10 years minimum.

And many product releases keep going out...



The toolchain is standard, so mostly no security risk.
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
I'm trying to build a desktop image:

first we install the nvidia drivers,
then we attempt to build OBS for CentOS,
then ffmpeg,
but we need gnutls,
but wait, then we need...
but wait...

CentOS package versions are generally many releases behind.
CentOS 7.0, for example, ships with
Firefox 37.
Make 2.28.11

In Ubuntu land, repos with all the stuff already compiled make this easy.
CentOS, while server-stable, is horrid for desktop/development/productivity use.

The answer is almost certainly: don't use CentOS for anything but servers.
 

TuxDude

Well-Known Member
Sep 17, 2011
Ya - CentOS is a clone of RHEL, which is an enterprise distro running old versions optimized for stability/reliability. Give Fedora a try if you want bleeding-edge versions in the public repos while still keeping the RHEL-compatible package manager, config file layout, etc. - Fedora is the upstream of RHEL.
 

canta

Well-Known Member
Nov 26, 2014
Lol... desktop is Fedora...

If you want extra spicy or bleeding edge on the clones, use the EPEL repo...
They have bleeding-edge rpm packages..
 

s0lid

Active Member
Feb 25, 2013
Tampere, Finland
Or go with Arch - my go-to distro for my Linux laptops.
There's no such thing as stable, only bleeding edge, and the AUR covers most of the useful programs/utils on GitHub. If there's a useful GitHub repo, somebody has made an AUR package out of it.
Arch requires some Linux knowledge, as most of the packages come without any basic configs.
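Grabbing something from the AUR is basically this (a sketch - the package name is just an example):

Code:
# AUR packages are built locally from a PKGBUILD with makepkg
git clone https://aur.archlinux.org/obs-studio-git.git
cd obs-studio-git
# -s pulls build deps from the official repos, -i installs the result
makepkg -si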
 

whitey

Moderator
Jun 30, 2014
LOL, not EVEN gonna say anything, it's already been covered 'good nuff' here.

pacman FTW @s0lid hah

If you want background on KS %pre/%post for RH/CentOS/Scientific Linux, I can shoot ya over a career's worth of scripts for you to drown in... or use Satellite or Spacewalk and make life MUCH easier on yourselves. :-D

There's a reason large enterprises employ *nix heads, and I gather it is the breadth and depth of knowledge/teeth-cutting they have had to gain across the 'good ole' proprietary Unix and open-source spectrums that garners these salaries.
 

TuxDude

Well-Known Member
Sep 17, 2011
Lol... desktop is Fedora...

If you want extra spicy or bleeding edge on the clones, use the EPEL repo...
They have bleeding-edge rpm packages..
Up until around version 20, Fedora was targeted at desktops - recent versions (23 is current) have official Desktop/Server/Cloud editions, as well as 'spins' that are mostly community-supported editions for all variety of things. I run the KDE spin on my desktops, and have played with the server edition in VMs. The only problem with using Fedora as a server is the very short support lifecycle - you need to do full distro-version upgrades every 6 months or so, or you fall behind and stop getting security updates. That's where CentOS is nice, inheriting RHEL's 10+ year support lifecycle of backported security updates.

Also - EPEL does NOT have bleeding-edge packages, or even newer packages than what RHEL/CentOS provides. It only carries packages that are not included in RHEL at all, hence the name - Extra Packages for Enterprise Linux. For some components you can get updated versions by going to the Red Hat Software Collections repos, which are designed to give you newer versions of some things while not breaking anything - if you are on real RHEL with a support contract, you can use the RHSCL packages and keep your support intact. I wouldn't recommend upgrading distro packages with random 3rd-party versions - that is likely to break something and will invalidate any support contracts you have.
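On CentOS the collections are a couple of commands away (a sketch - rh-python36 is just one example of a collection that exists for CentOS 7):

Code:
# software collections install alongside the stock packages under /opt/rh
yum -y install centos-release-scl
yum -y install rh-python36
# collections are opt-in per shell, so nothing system-wide changes
scl enable rh-python36 bash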
 

DavidRa

Infrastructure Architect
Aug 3, 2015
Central Coast of NSW
www.pdconsec.net
If your volume is such that you need this level of automation, I'd really be looking at something like Chef or Puppet (Windows' equivalent is probably PowerShell DSC).

Chef and Puppet (and others) also give you semi-automatic fixing of problems if people try to undo your configs.
 

canta

Well-Known Member
Nov 26, 2014
@TuxDude
EPEL is still bleeding edge to me - EPEL is at your own risk; you have to know what you are doing.

Fedora is still desktop-oriented, with the cloud thing included.
Fedora is bleeding edge as hell.


I am using EPEL, and I know what I am doing.

As everyone said, knowing better is best.

Fedora cycles are very short-term...
You cannot use Fedora for commercial products.
 

canta

Well-Known Member
Nov 26, 2014
If your volume is such that you need this level of automation, I'd really be looking at something like Chef or Puppet (Windows' equivalent is probably PowerShell DSC).

Chef and Puppet (and others) also give you semi-automatic fixing of problems if people try to undo your configs.
I believe the question is...

The OP needs to compile one package but is greeted with many dependency errors.

I always face this in the first phase, and knowing the requirements up front is the best fix.
No automation can solve this... well, it can partially.