RHEL8 – Where did my network scripts go ?

In RHEL8 the old network scripts have been deprecated. However, if you want, you can add them back in.

Red Hat provides them in a package called network-scripts:

$ sudo yum install network-scripts.x86_64

This adds all your favourite scripts back in:

$ sudo rpm -ql network-scripts
/etc/rc.d/init.d/network
/etc/sysconfig/network-scripts
/etc/sysconfig/network-scripts/ifcfg-lo
/etc/sysconfig/network-scripts/ifdown
/etc/sysconfig/network-scripts/ifdown-bnep
/etc/sysconfig/network-scripts/ifdown-eth
/etc/sysconfig/network-scripts/ifdown-ippp
/etc/sysconfig/network-scripts/ifdown-ipv6
/etc/sysconfig/network-scripts/ifdown-isdn
/etc/sysconfig/network-scripts/ifdown-post
/etc/sysconfig/network-scripts/ifdown-routes
/etc/sysconfig/network-scripts/ifdown-sit
/etc/sysconfig/network-scripts/ifdown-tunnel
/etc/sysconfig/network-scripts/ifup
/etc/sysconfig/network-scripts/ifup-aliases
/etc/sysconfig/network-scripts/ifup-bnep
/etc/sysconfig/network-scripts/ifup-eth
/etc/sysconfig/network-scripts/ifup-ippp
/etc/sysconfig/network-scripts/ifup-ipv6
/etc/sysconfig/network-scripts/ifup-isdn
/etc/sysconfig/network-scripts/ifup-plip
/etc/sysconfig/network-scripts/ifup-plusb
/etc/sysconfig/network-scripts/ifup-post
/etc/sysconfig/network-scripts/ifup-routes
/etc/sysconfig/network-scripts/ifup-sit
/etc/sysconfig/network-scripts/ifup-tunnel
/etc/sysconfig/network-scripts/ifup-wireless
/etc/sysconfig/network-scripts/init.ipv6-global
/etc/sysconfig/network-scripts/network-functions
/etc/sysconfig/network-scripts/network-functions-ipv6
/usr/lib/.build-id
/usr/lib/.build-id/df
/usr/lib/.build-id/df/fce1383c3b10c1e20c4e4684d16a35c65cad1d
/usr/sbin/ifdown
/usr/sbin/ifup
/usr/sbin/usernetctl
/usr/share/doc/network-scripts
/usr/share/doc/network-scripts/examples
/usr/share/doc/network-scripts/examples/ifcfg-bond-802.3ad
/usr/share/doc/network-scripts/examples/ifcfg-bond-activebackup-arpmon
/usr/share/doc/network-scripts/examples/ifcfg-bond-activebackup-miimon
/usr/share/doc/network-scripts/examples/ifcfg-bond-slave
/usr/share/doc/network-scripts/examples/ifcfg-bridge
/usr/share/doc/network-scripts/examples/ifcfg-bridge-port
/usr/share/doc/network-scripts/examples/ifcfg-eth-alias
/usr/share/doc/network-scripts/examples/ifcfg-eth-dhcp
/usr/share/doc/network-scripts/examples/ifcfg-vlan
/usr/share/doc/network-scripts/examples/static-routes-ipv6
/usr/share/doc/network-scripts/sysconfig.txt
/usr/share/man/man8/ifdown.8.gz
/usr/share/man/man8/ifup.8.gz
/usr/share/man/man8/usernetctl.8.gz
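With the package installed, interface configuration works the old way again: ifcfg files under /etc/sysconfig/network-scripts, brought up by the legacy network service (sudo systemctl enable --now network). Here's a minimal static example to jog the memory; the device name and addresses are purely illustrative:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0  (illustrative values)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.10
PREFIX=24
GATEWAY=192.168.0.1
DNS1=192.168.0.1
```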

Feedback welcome as always!


Configure Packer and Vagrant on RHEL8 with libvirt

I’ve finally gotten around to installing RHEL8 as my primary desktop. One of my main use cases is automatically building and configuring VMs with Vagrant for testing.

A few things are subtly different on RHEL8, so I thought I’d share what I learned (and some of the hacks I’ve put in place until I can investigate further).

Installation

Install Prerequisites

sudo yum -y install libvirt \
                    libvirt-devel \
                    ruby-devel \
                    libxslt-devel \
                    libxml2-devel \
                    libguestfs-tools-c \
                    gcc

Start the libvirt service

sudo systemctl enable --now libvirtd

Download packer into a Packer subdirectory (customise to taste)

mkdir ~/Packer
cd ~/Packer
curl -o ./packer.zip https://releases.hashicorp.com/packer/1.4.1/packer_1.4.1_linux_amd64.zip
unzip packer.zip
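It's worth checking the zip's integrity: HashiCorp publishes a SHA256SUMS file alongside each release in the same releases.hashicorp.com directory. The check itself works like this (demonstrated on a stand-in file so the example is self-contained; for the real thing, curl the published SHA256SUMS down next to packer.zip first):

```shell
cd /tmp
echo demo > packer.zip               # stand-in for the real download
sha256sum packer.zip > SHA256SUMS    # the real SHA256SUMS comes from releases.hashicorp.com
sha256sum -c SHA256SUMS              # prints: packer.zip: OK
```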

Download the Vagrant CentOS RPM (I’ll probably tweak this later, but it works fine for now)

mkdir ~/Vagrant
cd ~/Vagrant
curl -o  vagrant_2.2.4_x86_64.rpm https://releases.hashicorp.com/vagrant/2.2.4/vagrant_2.2.4_x86_64.rpm

sudo yum install -y ./vagrant_2.2.4_x86_64.rpm

Install the vagrant libvirt plugin

CONFIGURE_ARGS='with-ldflags=-L/opt/vagrant/embedded/lib with-libvirt-include=/usr/include/libvirt with-libvirt-lib=/usr/lib' \
GEM_HOME=~/.vagrant.d/gems \
GEM_PATH=$GEM_HOME:/opt/vagrant/embedded/gems \
PATH=/opt/vagrant/embedded/bin:$PATH \
vagrant plugin install vagrant-libvirt

Test your vagrant project by specifying the provider

vagrant up --provider=libvirt
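If you don't have a project handy, a minimal Vagrantfile is enough to test with. This is just a sketch: the box name is an example, and the memory/cpus settings are standard vagrant-libvirt provider options:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"          # example box; any libvirt-capable box works
  config.vm.provider :libvirt do |lv|
    lv.memory = 2048                  # MB
    lv.cpus   = 2
  end
end
```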

The above worked fine for me; let me know if you hit any issues.

References

https://github.com/vagrant-libvirt/vagrant-libvirt#provider-options



Configure LDAP authentication for Red Hat Cloudforms.

Background

So recently I was asked to configure a small lab that would be using Red Hat Cloudforms with users from LDAP (IDM/FreeIPA).  I had to look up a number of documents and ended up referring back to some old notes. To that end, I decided it would make sense to document it here with screenshots for anyone that may find it useful.

Architecture

Server configurations

For this, you’ll need one LDAP server (Red Hat IDM in my case), preconfigured with at least one user and group. You’ll also need one Cloudforms virtual appliance capable of connecting to the IDM server.

So the demo lab I’ll be doing this in is actually my home lab (yes I run IDM at home for my wife and kids).

User Configurations

I will create a single LDAP group called “cloudforms-super-users”. It will contain my user “matt”. I’ll configure it to be a super admin on cloudforms.

The “svc-cloudforms-ldap-auth” user will be used by the Cloudforms application to bind to IDM. It’s a service account with minimal privileges to allow querying for users and groups.

Preparing Cloudforms

Connect cloudforms to the LDAP server

We log into cloudforms as the default admin/smartvm user.


We now go to configuration


We configure cloudforms to use the LDAP server.

On the Authentication tab, set the mode to LDAP, and the user type to UID (this is correct for IDM/FreeIPA). Then fill in the LDAP host details and save.


Create the new cloudforms LDAP -> CF Role group

We create a new group in cloudforms that maps to a role and LDAP group.

On the left hand panel, click on Access Control -> Groups.


Click Configure -> “Add a new group”.


You’ll then be prompted to add a new group.

Here we give the name of our new group, select a cloudforms role to map, and a tenant.

We also supply an LDAP user that is in the appropriate LDAP groups already.

The username is supplied in LDAP bind-name form.


When that is complete, we are provided with a list of LDAP groups we can select to complete the mapping.


Test the new LDAP user.

Log out


Log in as our new LDAP user.


Check we have the correct role mapping.


Profit.



Configure Ansible Tower to support FreeIPA / IDM LDAP Authentication

Background

So if you’ve bought Ansible Tower, it’s probably because you needed the enterprise features, such as the API or RBAC support, that you only get with Tower.

So I’ve been building a small lab for some of the people in my team, and a key component of it is Ansible Tower. I knew Tower fully supports LDAP as an authentication source, but when I checked the docs, most of the examples were for Microsoft Active Directory. That’s fine for many businesses, but I work for a Linux company, and my default is Red Hat IDM (FreeIPA to everyone else).

I’m no expert when it comes to LDAP; I’ve had a little experience, but if I’m being honest I’ve avoided it in general.

I’ve written this up because it took a while to get working perfectly, so it made sense to document it for myself, and maybe someone else will find it useful.

Use Case

My use case was to create a user group in IDM and allow members of that group to log into Ansible Tower. I’m not particularly worried about automatically assigning organisations; I just want to make sure people can log in, and I’ll assign permissions as and when I choose. This is about authentication, not authorisation.

domain name                   - nixgeek.co.uk
IDM administrator credentials - admin/letmein123
idm host                      - idmng.nixgeek.co.uk
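All the distinguished names used below follow from the domain name: the base DN is simply the domain split on dots. A quick sanity check of that convention:

```shell
# Derive the LDAP base DN from the domain (IDM's default layout).
domain="nixgeek.co.uk"
base_dn="dc=$(echo "$domain" | sed 's/\./,dc=/g')"
echo "$base_dn"    # dc=nixgeek,dc=co,dc=uk
```

In IDM, users then live under cn=users,cn=accounts,&lt;base dn&gt; and groups under cn=groups,cn=accounts,&lt;base dn&gt;, which is where the search bases later in this guide come from.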

Step 1 – Create a user in IDM

If you haven’t already done so, make sure you are authenticated as an administrator user.

[root@idmng ~]# kinit admin
Password for admin@NIXGEEK.CO.UK:

Then create a new IDM user.

[root@idmng ~]# ipa user-add tower_admin
First name: Tower
Last name: Administrator
------------------------
Added user "tower_admin"
------------------------
User login: tower_admin
First name: Tower
Last name: Administrator
Full name: Tower Administrator
Display name: Tower Administrator
Initials: TA
Home directory: /home/tower_admin
GECOS: Tower Administrator
Login shell: /bin/sh
Principal name: tower_admin@NIXGEEK.CO.UK
Principal alias: tower_admin@NIXGEEK.CO.UK
Email address: tower_admin@nixgeek.co.uk
UID: 477200012
GID: 477200012
Password: False
Member of groups: ipausers
Kerberos keys available: False


Step 2 – Create a user group in IDM

[root@idmng ~]# ipa group-add tower_administrators
----------------------------------
Added group "tower_administrators"
----------------------------------
Group name: tower_administrators
GID: 477200013

Step 3 – Add the newly created user to my user group in IDM

[root@idmng ~]# ipa group-add-member tower_administrators --users=tower_admin
Group name: tower_administrators
GID: 477200013
Member users: tower_admin
-------------------------
Number of members added 1
-------------------------

Step 4 – On Tower, install the LDAP client tools

[root@tower ~]# yum install openldap-clients


Step 5 – Update the authentication settings on Tower

On Tower 3.x, edit the file /etc/tower/conf.d/ldap.py with your favourite editor

[root@tower ]# vi /etc/tower/conf.d/ldap.py

Step 6 – Comment out the Active Directory imports

Below the comments at the top of the file, you will see the following

from django_auth_ldap.config import LDAPSearch, LDAPSearchUnion
from django_auth_ldap.config import ActiveDirectoryGroupType

Comment out the ActiveDirectory line, and insert the GroupOfNamesType import so it looks like the following.

from django_auth_ldap.config import LDAPSearch, LDAPSearchUnion
#from django_auth_ldap.config import ActiveDirectoryGroupType
from django_auth_ldap.config import GroupOfNamesType

Step 7 – Configure the LDAP URI

Still in the ldap.py file you will find a line that starts with the token AUTH_LDAP_SERVER_URI.

Assuming you haven’t changed any ports, modify it to look like the following. This just tells Tower how to open a connection to the IDM/IPA server.

AUTH_LDAP_SERVER_URI = 'ldap://idmng.nixgeek.co.uk:389'

By default IPA/IDM allows LDAP connectivity without forcing LDAPS. LDAPS is beyond the scope of this guide; I may add it later if there is interest.

Step 8 – Configure the LDAP Bind

The next token we are looking for is AUTH_LDAP_BIND_DN. This token tells Tower which credentials to use when talking to the IPA/IDM server.

Here I’m putting in the credentials of my IDM admin user. Please don’t do this in the wild; putting your IDM admin password in plaintext is NOT_A_GOOD_IDEA(TM)

AUTH_LDAP_BIND_DN = 'uid=admin,cn=users,cn=accounts,dc=nixgeek,dc=co,dc=uk'
AUTH_LDAP_BIND_PASSWORD = 'letmein123'

Step 9 – The user search

So the next token is the query that will be executed against the IDM server to establish whether a user is valid. By default this is configured for Active Directory, and it needs to be changed for IDM/IPA.

You will need a block that looks similar to the following.

AUTH_LDAP_USER_SEARCH = LDAPSearch(
'cn=users,cn=accounts,dc=nixgeek,dc=co,dc=uk', # Base DN
ldap.SCOPE_SUBTREE, # SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE
'(uid=%(user)s)', # Query
)

Let’s look at this line by line

Execute an LDAP search against our IDM server

AUTH_LDAP_USER_SEARCH = LDAPSearch(

Specifying the path to our user accounts

'cn=users,cn=accounts,dc=nixgeek,dc=co,dc=uk', # Base DN

Querying subtrees of that path

ldap.SCOPE_SUBTREE, # SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE

for a specific element that has the uid (username attribute in IDM/IPA) that matches the supplied username.

'(uid=%(user)s)', # Query

Step 10 – The group search

Next, we need to configure the group search

AUTH_LDAP_GROUP_SEARCH = LDAPSearch(
'cn=groups,cn=accounts,dc=nixgeek,dc=co,dc=uk', # Base DN
ldap.SCOPE_SUBTREE, # SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE
'(objectClass=ipausergroup)', # Query
)

Step 11 – Ensure valid users MUST be members of our group

By setting the following, we ensure that users must be both valid and members of the correct group in order to log in. This means we can grant and revoke access to Tower simply by adding users to, or removing them from, a group.

AUTH_LDAP_REQUIRE_GROUP = 'cn=tower_administrators,cn=groups,cn=accounts,dc=nixgeek,dc=co,dc=uk'

That’s it, now save and exit the file.
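For reference, the IDM-relevant part of my ldap.py ends up looking like the block below. Note the AUTH_LDAP_GROUP_TYPE line: it isn't shown in the steps above, but django-auth-ldap needs a group type set, which is what the GroupOfNamesType import from Step 6 is for (the name_attr of cn is my assumption here):

```python
import ldap
from django_auth_ldap.config import LDAPSearch, GroupOfNamesType

AUTH_LDAP_SERVER_URI = 'ldap://idmng.nixgeek.co.uk:389'

# Plaintext admin credentials: fine for a lab, NOT_A_GOOD_IDEA(TM) anywhere else.
AUTH_LDAP_BIND_DN = 'uid=admin,cn=users,cn=accounts,dc=nixgeek,dc=co,dc=uk'
AUTH_LDAP_BIND_PASSWORD = 'letmein123'

AUTH_LDAP_USER_SEARCH = LDAPSearch(
    'cn=users,cn=accounts,dc=nixgeek,dc=co,dc=uk',
    ldap.SCOPE_SUBTREE,
    '(uid=%(user)s)',
)
AUTH_LDAP_GROUP_SEARCH = LDAPSearch(
    'cn=groups,cn=accounts,dc=nixgeek,dc=co,dc=uk',
    ldap.SCOPE_SUBTREE,
    '(objectClass=ipausergroup)',
)
AUTH_LDAP_GROUP_TYPE = GroupOfNamesType(name_attr='cn')  # assumption: not shown in the steps above
AUTH_LDAP_REQUIRE_GROUP = 'cn=tower_administrators,cn=groups,cn=accounts,dc=nixgeek,dc=co,dc=uk'
```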

Step 12 – Restart tower

[root@tower ~]# ansible-tower-service restart


Step 13 – Log in and profit!

You are now ready to log in and test.

I hope you found this useful. If you find any inaccuracies please let me know, and I’ll update them.

Versions of software

Red Hat IDM / FreeIPA 4.4.0
RHEL 7.3
Ansible Tower 3.0.3



Configure libvirt / KVM as a compute resource in Red Hat Satellite 6

So you are using Satellite 6, but need to provision machines using KVM / Libvirt.

If you just install Satellite 6 and attempt to configure a compute resource that points at a KVM hypervisor, you’ll quickly discover all kinds of certificate errors, such as:

Call to virConnectOpen failed: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory

In order to use it this way, you need to give Satellite a secure means of connecting to the KVM host.

In this example, I’ve chosen to allow Satellite root access to my hypervisor. I would never recommend this in a production environment, but for my test lab it works just fine.

I will write a follow up post that details how to configure this in a more secure way.

Assume I have two hosts:

My satellite server - sat6 - 192.168.200.4
My kvm hypervisor - kvm - 192.168.200.1

First, generate an SSH key on the Satellite server.

Log in as the root user

ssh root@sat6

Once logged into the Satellite 6 server, switch to the foreman user and set up the key

# su - foreman -s /bin/bash
$ ssh-keygen
$ ssh-copy-id root@kvm

Then test the ssh connection to the KVM host, and make sure it works.

$ ssh root@kvm

Please note: This is a really bad idea on any system you care about, this is just a demonstration of how to make it work in a lab environment! I will follow up a post with a more secure example!

Another important point is to specify that we are using SSH authentication in the URL, as well as the username; this is done by using the qemu+ssh scheme in the URL.
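With the hostnames used above, the compute resource URL takes the standard libvirt remote URI form; /system is the usual path for the privileged daemon:

```
qemu+ssh://root@kvm/system
```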


Once connected you can access the hypervisor.


You now have a really simple, and easy way to provision to a lab environment without expensive hypervisor managers!



Install get-iplayer on Fedora 27

Install the RPM Fusion Free and Non-Free repositories

$ su -c 'dnf install http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm'

Clean and update the local dnf cache

$ sudo dnf clean all && sudo dnf update

Install the prerequisite packages

$ sudo dnf install git perl-open ffmpeg perl-XML-Simple perl-Env perl-XML-LibXML perl-JSON-PP.noarch perl-Mojolicious.noarch AtomicParsley.x86_64

Clone the latest get-iplayer repository

$ git clone https://github.com/get-iplayer/get_iplayer.git

Change to the get_iplayer directory

$ cd get_iplayer

Run get_iplayer

$ ./get_iplayer --info
get_iplayer v2.95, Copyright (C) 2008-2010 Phil Lewis
  This program comes with ABSOLUTELY NO WARRANTY; for details use --warranty.
  This is free software, and you are welcome to redistribute it under certain
  conditions; use --conditions for details.


INFO: Getting tv Index Feeds (this may take a few minutes)

RHEL 7 / CentOS 7 use classic eth0 style device naming for network adapters

Why was it changed ?

Red Hat Enterprise Linux 7 introduced a new scheme for naming network devices called “Consistent Device Naming”. It’s called that because previously the names of the devices [eth0, eth1, eth2] were entirely dependent on the order in which the kernel detected them at boot. In certain circumstances, such as adding new devices to an existing system, the naming scheme could become unreliable.

Further reading

The official Red Hat 7 Documentation on consistent device naming can be found here.

What does the new scheme look like ?

# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 00:0c:29:89:1b:2e brd ff:ff:ff:ff:ff:ff

How do I change it back to eth[0-9] style naming ?

In summary we need to

  • Add extra parameters to the kernel configuration
  • Add this to the boot configuration
  • Restart the machine
  • Move the existing interfaces to the new scheme
  • Restart the network service

Add extra parameters to the kernel configuration

Modify the grub bootloader to pass some extra parameters to the kernel at boot time. The kernel will then use these options to decide which naming scheme to use.

First we back up the grub configuration file.

# cp /etc/default/grub /etc/default/grub.bak

Then we can safely edit the grub configuration file

# vim /etc/default/grub

The config file will look similar to the following

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet "
GRUB_DISABLE_RECOVERY="true"

The line that starts with “GRUB_CMDLINE_LINUX” needs to have some extra parameters added.

The extra parameters are

biosdevname=0 net.ifnames=0

So the final file looks like

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet biosdevname=0 net.ifnames=0 "
GRUB_DISABLE_RECOVERY="true"
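If you’d rather script the edit than do it by hand, a sed one-liner can append the parameters just inside the closing quote. Sketched here on a scratch copy so it’s safe to try anywhere; on a real host you’d run it against /etc/default/grub (after taking the backup above):

```shell
# Work on a scratch copy so the demo is safe to run anywhere.
cat > /tmp/grub.demo <<'EOF'
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rhgb quiet"
EOF
# Append the two parameters just before the closing quote.
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 biosdevname=0 net.ifnames=0"/' /tmp/grub.demo
cat /tmp/grub.demo
```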

Add this to the boot configuration

If you are using a UEFI system then rebuild grub with this command

# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

Otherwise use the following

# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-3c913eca0eab4ebcb6da402e03553776
Found initrd image: /boot/initramfs-0-rescue-3c913eca0eab4ebcb6da402e03553776.img
done

Restart the machine

Now we will restart the host, and the new naming scheme will take effect on reboot.

# shutdown -r now

Move the existing interfaces to the new scheme

You may now need to reconfigure your network interface.

Here you can see the network interface is up, however there is no IP information associated with the new device name.

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 00:0c:29:89:1b:2e brd ff:ff:ff:ff:ff:ff

For this example I’ll assume I’m not using NetworkManager, so I’ll be editing the network configuration files in /etc/sysconfig/network-scripts directly.

Change into the network scripts directory.

# cd /etc/sysconfig/network-scripts/

Rename the old interface configuration file to the new scheme

# mv ifcfg-eno16777736 ifcfg-eth0

Update the contents of the configuration file to use the new scheme

# sed -i 's/eno16777736/eth0/' ifcfg-eth0

Restart the network service

Finally restart the network service so the changes take effect.

# systemctl restart network

Now the interface can be seen with the correct IP address.

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 00:0c:29:89:1b:2e brd ff:ff:ff:ff:ff:ff
 inet 192.168.100.3/24 brd 192.168.100.255 scope global eth0
 valid_lft forever preferred_lft forever
 inet6 fe80::20c:29ff:fe89:1b2e/64 scope link
 valid_lft forever preferred_lft forever

