Automating VM creation with Kickstart and Ansible without using a PXE server

XKCD 1319: Automation (https://xkcd.com/1319/)

Setting up new RHEL 8 VMs was getting a tad too manual, so I decided it was time to automate it. As I only need a dozen machines and my NAS is still a work in progress, setting up Cobbler seemed like too big a hassle for now.

Kickstart without PXE

I had already deployed a Kickstart installation by manually specifying the inst.ks boot option. The problem is that without a network boot setup, the option has to be typed at the boot prompt every time.
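For reference, the manual route means appending something like this to the installer's boot line (the URL here is purely illustrative):

```
inst.ks=http://192.168.1.10/ks.cfg
```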

The Kickstart documentation specifies that if a volume labelled OEMDRV is present at boot, it will be mounted and a ks.cfg file on it will be loaded automatically.

After deciding to follow this route, I stumbled across this great guide by Jack Price. As of 2021 the article is a bit dated, as the Ansible modules it uses have been deprecated: vsphere_guest has been replaced with vmware_guest, but the functionality has stayed largely the same.

Templating the Kickstart file

As I'm using Ansible for this, I have a ks.cfg template that looks like this:

lang en_US
keyboard us
timezone Europe/Helsinki --isUtc
reboot
text
cdrom
bootloader --location=mbr --append="rhgb quiet crashkernel=auto"
zerombr
clearpart --all --initlabel
autopart
authselect --passalgo=sha512 --useshadow
selinux --enforcing
firewall --ssh
skipx
firstboot --disable
user --name=[change this] --groups=wheel --iscrypted --password=[hashed pass]
network --bootproto=static --ip={{ inventory_hostname }} --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=8.8.8.8,8.8.4.4 --hostname={{ scheme_name }}
%packages
@^minimal-environment
kexec-tools
python39
git
vim
%end
%post
pip3 install ssh-import-id
ssh-import-id -o /home/[change this]/.ssh/authorized_keys gh:[and this]
%end

You can and should write a personal Kickstart config. This one is mine, and I would not recommend using it without customizing it yourself first.
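The --iscrypted flag in the user line expects a pre-hashed password. One way to generate a SHA-512 crypt hash is with OpenSSL (1.1.1 or newer; 'changeme' below is a placeholder, not a real credential):

```shell
# Generate a SHA-512 crypt hash for the kickstart user line.
# 'changeme' is a placeholder; substitute your real password.
openssl passwd -6 'changeme'
```

Paste the resulting $6$... string into the --password= field.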

Here is the template-kickstart.yml that uses the template:

- hosts: all
  become: no
  connection: local
  gather_facts: no
  roles:
    - virtual

  tasks:
  - name: create kickstart file
    template:
      src: kickstart.cfg.j2
      dest: "{{ playbook_dir }}/ks/build/{{ inventory_hostname }}.cfg"

It is copied almost directly from @jackprice; the only difference is that I've added gather_facts: no, since we are not relying on any facts about the host (it may not even exist yet!). I've also changed the destination folder to ks/build, as I did not like the idea of cluttering my playbook directory with shell scripts and makefiles.

.cfg ➟ .iso

Now that we can create kickstart files for any host we want, we need to pack them inside ISO files that can be mounted. Once again I will use the files provided in Jack's article:

#!/usr/bin/env bash

set -e

VOLUME_LABEL="OEMDRV"

if [ $# -ne 2 ]; then
    echo "Invalid invocation"
    echo "Usage:"
    echo
    echo "    $0 SOURCE OUTPUT"
    echo
    echo "    SOURCE should be a built Kickstart configuration file"
    echo "    OUTPUT should be the location to store the built ISO"
    echo
    
    exit 1
fi

SOURCE="$1"
DEST="$2"

if [ ! -f "$SOURCE" ]; then
    echo "Source file does not exist"
    
    exit 1
fi

TEMP=$(mktemp -d)

cp "$SOURCE" "${TEMP}/ks.cfg"

mkisofs -V "$VOLUME_LABEL" -o "$DEST" "$TEMP"

rm -r "$TEMP"
build-configuration-iso.sh
build/%.iso: build/%.cfg
	./build-configuration-iso.sh build/$*.cfg build/$*.iso

build/%.cfg: /kicks/base.cfg
	ansible-playbook --limit $* ../template-kickstart.yml

.PHONY: clean
clean:
	rm -rf build/*
Makefile

Do note that as the Makefile is located in ks/ I've set the config build target to use ../template-kickstart.yml.

This is great! Now we can build a specific ISO from scratch!

$ make build/<hostname>.iso
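To sanity-check that an image actually carries the OEMDRV label before uploading it, the volume identifier can be read straight out of the ISO 9660 primary volume descriptor. This helper is my own sketch, not part of Jack's tooling:

```shell
# Print the ISO 9660 volume label of an image. The primary volume descriptor
# starts at byte 32768 (sector 16), and its 32-byte volume identifier field
# begins 40 bytes in, i.e. at absolute offset 32808.
iso_label() {
    dd if="$1" bs=1 skip=32808 count=32 2>/dev/null | tr -d '\0' | sed 's/ *$//'
}
```

Running iso_label build/<hostname>.iso against an image produced by the script above should print OEMDRV.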

Deploying the ESXi VM

The next step is to use Ansible to create a new VM with the correct Kickstart ISO. This is possible by adding two CD drives to the VM: one for the installation media and another for the Kickstart ISO. Let's look at the steps needed:

  1. Build the ISO
  2. Upload ISO to datastore
  3. Create a new VM with the correct parameters

Building the ISO is a breeze with the make module in Ansible:

Do note: I'm using Ansible 2.9, but the latest version should work as well.

- name: build configuration ISO
  make:
    target: build/{{ inventory_hostname }}.iso
    chdir: "{{ playbook_dir }}/ks"

Next up is uploading the generated ISO to my ESXi host:

- name: upload ISO
  vsphere_copy:
    hostname: "{{ vsphere_hostname }}"
    username: "{{ vsphere_username }}"
    password: "{{ vsphere_password }}"
    datastore: "{{ vsphere_datastore }}"
    src: ks/build/{{ inventory_hostname }}.iso
    path: ISOs/kickstarts/{{ inventory_hostname }}.iso
    validate_certs: false

And finally comes the VM creation task:

- name: create vm
  vmware_guest:
    hostname: "{{ vsphere_hostname }}"
    username: "{{ vsphere_username }}"
    password: "{{ vsphere_password }}"
    validate_certs: no
    folder: ""
    name: "{{ scheme_name }}"
    guest_id: rhel8_64Guest
    state: poweredon
    disk:
    - size_gb: 16
      type: thin
      datastore: "{{ vsphere_datastore }}"
    networks:
      - name: VM Network
        ip: "{{ inventory_hostname }}"
        netmask: 255.255.255.0
        device_type: vmxnet3
    hardware:
      memory_mb: 1024
      num_cpus: 2
      scsi: paravirtual
    cdrom:
      - type: "iso"
        iso_path: "[{{ vsphere_datastore }}] ISOs/rhel-8.4-x86_64-dvd.iso"
        controller_number: 0          
        unit_number: 0

      - type: "iso"
        iso_path: "[{{ vsphere_datastore }}] ISOs/kickstarts/{{ inventory_hostname }}.iso"
        controller_number: 0
        unit_number: 1
  register: deploy
  delegate_to: localhost
  when: not ansible_check_mode

This task was modified quite a bit as I rewrote it to use the new vmware_guest module. Here is the whole file as a playbook:

- hosts: all
  connection: local
  become: no
  gather_facts: no
  roles:
    - virtual
  tasks:

  - name: build configuration ISO
    make:
      target: build/{{ inventory_hostname }}.iso
      chdir: "{{ playbook_dir }}/ks"

  - name: upload ISO
    vsphere_copy:
      hostname: "{{ vsphere_hostname }}"
      username: "{{ vsphere_username }}"
      password: "{{ vsphere_password }}"
      datastore: "{{ vsphere_datastore }}"
      src: ks/build/{{ inventory_hostname }}.iso
      path: ISOs/kickstarts/{{ inventory_hostname }}.iso
      validate_certs: false
      
  - name: create vm
    vmware_guest:
      hostname: "{{ vsphere_hostname }}"
      username: "{{ vsphere_username }}"
      password: "{{ vsphere_password }}"
      validate_certs: no
      folder: ""
      name: "{{ scheme_name }}"
      guest_id: rhel8_64Guest
      state: poweredon
      disk:
      - size_gb: 16
        type: thin
        datastore: "{{ vsphere_datastore }}"
      networks:
        - name: VM Network
          ip: "{{ inventory_hostname }}"
          netmask: 255.255.255.0
          device_type: vmxnet3
      hardware:
        memory_mb: 1024
        num_cpus: 2
        scsi: paravirtual
      cdrom:
        - type: "iso"
          iso_path: "[{{ vsphere_datastore }}] ISOs/rhel-8.4-x86_64-dvd.iso"
          controller_number: 0          
          unit_number: 0

        - type: "iso"
          iso_path: "[{{ vsphere_datastore }}] ISOs/kickstarts/{{ inventory_hostname }}.iso"
          controller_number: 0
          unit_number: 1
    register: deploy
    delegate_to: localhost
    when: not ansible_check_mode
vm-deploy.yml

Do note that I'm using an Ansible role, virtual, to set the vsphere_ variables. scheme_name is also a variable inherited from the role. You can add these explicitly to your playbook or use your own roles.
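For completeness, the role could set those variables along these lines (all names and values here are illustrative, not my actual configuration):

```yaml
# roles/virtual/defaults/main.yml -- illustrative values only
vsphere_hostname: esxi.example.com
vsphere_username: root
vsphere_password: "{{ vault_vsphere_password }}"   # e.g. pulled from ansible-vault
vsphere_datastore: datastore1
scheme_name: "vm-{{ inventory_hostname | replace('.', '-') }}"
```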

Future improvements

The system feels good to use and works reliably.

Something I'd like to implement in the future is adding a condition to the play so that it does not deploy if the VM already exists. It would then be possible to add this to a role and never think about creating a new VM again.
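A sketch of how that condition could look, using the vmware_guest_info module from the same collection (untested, and the datacenter value assumes a standalone ESXi host):

```yaml
- name: check whether the VM already exists
  vmware_guest_info:
    hostname: "{{ vsphere_hostname }}"
    username: "{{ vsphere_username }}"
    password: "{{ vsphere_password }}"
    validate_certs: no
    datacenter: ha-datacenter   # standalone ESXi default; adjust for vCenter
    name: "{{ scheme_name }}"
  register: existing_vm
  failed_when: false            # the module fails when the VM is not found
  delegate_to: localhost

# ...then guard the deployment tasks with:
#   when: existing_vm.instance is not defined
```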