Applying a naming scheme to all hosts using Ansible

Now that I've started using Ansible, I wanted to organize my hosts a bit better. I'm using static IP addresses, but my hostnames have been all over the place.

As I have fewer than 10 machines right now, I wanted a fun naming scheme for my hostnames. After browsing naming schemes, I settled on elements. For now.

Why Ansible?

I could just rename all the hosts by hand. But that wouldn't be as fun. Besides, with Ansible, changing the naming scheme in the future is a breeze.

As I'm already using Ansible to create new hosts, it's quite useful that hostname generation happens in the same chain. This way I can know the hostname of a specific host before the host is even online.

Goals of this project

I want to be able to assign each host in the inventory a specific hostname that does not change between runs. I then want to be able to use the determined hostname in the following places:

  • Apply the hostname to the host
  • Reference all hosts inside every host's /etc/hosts
  • Rename my ESXi virtual machines to match the hostname

Generating the hostname

I decided to store all of the desired hostnames in /etc/names on the Ansible control node. More specifically, I wrote all the names to /etc/elements and then symlinked it:

$ ln -s /etc/elements /etc/names

This way I can add different schemes and just change the symlink.

Here is what /etc/elements looks like:

Hydrogen
Helium
Lithium
Beryllium
Boron
Carbon
Nitrogen
Oxygen
Fluorine
Neon
Sodium
Magnesium
Aluminum
Silicon
Phosphorus
Sulfur
Chlorine
Argon
Potassium
Calcium
Scandium
Titanium
Vanadium
Chromium
Manganese
Iron
Cobalt
Nickel
Copper
Zinc
Gallium
Germanium
Arsenic
Selenium
Bromine
Krypton
Rubidium
...

Now with the actual data in place, it was time to use this inside Ansible.

Using Ansible's great role system, I added this to roles/common/vars/main.yml:

inventory_index: "{{ groups['all'].index(inventory_hostname) }}"
scheme_name: "{{ lookup('file', '/etc/names').split('\n')[inventory_index | int] }}"
roles/common/vars/main.yml

Here we first assign each host an inventory_index that represents the host's position in the inventory.

Next, we look up the contents of /etc/names and use a bit of Jinja2 templating to pick out the nth line with inventory_index.

Do note that without jinja2_native enabled in the Ansible configuration, inventory_index will be a string rather than an integer, so casting is required. This is achieved with the | int filter.

Now as long as we use the common role we will always have a scheme_name for a host!
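Outside of Ansible, the same lookup logic can be sketched in plain Python. The inventory list and file contents below are illustrative stand-ins, not the real setup:

```python
# Simulate groups['all'].index(inventory_hostname) plus the file lookup
# that picks the matching line from /etc/names.
names_file = "Hydrogen\nHelium\nLithium\nBeryllium\n"  # stands in for /etc/names
inventory = ["vm-a", "vm-b", "vm-c"]                   # stands in for groups['all']

def scheme_name(inventory_hostname: str) -> str:
    # Mirrors the Jinja2 expression; in plain Python the index is
    # already an integer, so no explicit cast is needed here.
    inventory_index = inventory.index(inventory_hostname)
    return names_file.split("\n")[inventory_index]

print(scheme_name("vm-b"))  # the second host gets the second element name
```

As long as the inventory order stays stable, each host always resolves to the same name, which is exactly the property the role relies on.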

Changing the hostname

With the scheme_name defined, we just need to set the hostname. Ansible provides a built-in hostname module, and the task is as simple as this:

- name: set hostname
  become: yes
  hostname:
    name: "{{ scheme_name }}"

Do note that we need superuser access to change the hostname! That's why we've added the become: yes parameter.

Updating the /etc/hosts file

Next up is the task of updating /etc/hosts. For this we create a new Jinja2 template:

# {{ ansible_managed }}
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

{% for item in ansible_play_hosts %}
{% if hostvars[item]['ansible_hostname'] is defined %}
{{ item }}    {{ hostvars[item]['ansible_hostname'] }}
{% endif %}
{% endfor %}

We use a Jinja2 loop to iterate over all hosts in the current play. Ansible should exclude unreachable hosts from ansible_play_hosts, but I still added the condition that ansible_hostname must be defined for an entry to be added.

Here is an example file generated by this template:

# Ansible managed
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.10    Hydrogen
192.168.1.11    Helium
192.168.1.12    Lithium
192.168.1.13    Beryllium
192.168.1.14    Boron
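The loop's logic can also be exercised locally in plain Python, with mocked host facts standing in for Ansible's hostvars (the addresses and names below are illustrative):

```python
# Mimic the hosts.j2 loop: one "<address>    <hostname>" line per play host
# that has ansible_hostname defined.
hostvars = {
    "192.168.1.10": {"ansible_hostname": "Hydrogen"},
    "192.168.1.11": {"ansible_hostname": "Helium"},
    "192.168.1.12": {},  # e.g. an unreachable host with no gathered facts
}
ansible_play_hosts = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]

lines = [
    f"{item}    {hostvars[item]['ansible_hostname']}"
    for item in ansible_play_hosts
    if "ansible_hostname" in hostvars[item]
]
print("\n".join(lines))  # the host without facts is skipped
```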

And finally here is the Ansible play:

- name: Update /etc/hosts files
  hosts: all
  roles:
    - common

  tasks:
  - name: template hostnames
    become: yes
    template:
      src: hosts.j2
      dest: /etc/hosts 

Do note the become: yes again!

Renaming the ESXi machines

While exciting, I think this is the least reliable part of the playbook. We are assuming that the virtual machines and hostnames share the same name.

First, we register the original hostnames at the start of the playbook run. After that, we search for a VM with that name. When found, we rename the VM to scheme_name.

Let's start with the registering of the hostnames:

- name: Rename All Hostnames
  hosts: all
  gather_facts: yes
  roles:
    - common

  tasks:
  - name: register original hostnames
    set_fact:
      previous_hostname: "{{ ansible_hostname }}"

Here we store the previous hostname into the variable previous_hostname.

Now we are free to change the hostnames and do our templating.

After that, we find the VM with the matching name and register that to vm_facts:

- name: Get VM UUIDs
  vmware_guest_info:
    hostname: "{{ vsphere_hostname }}"
    username: "{{ vsphere_username }}"
    password: "{{ vsphere_password }}"
    datacenter: "ha-datacenter"
    validate_certs: False
    name: "{{ previous_hostname }}"
  register: vm_facts

Now we use the vmware_guest module to rename the VM:

- name: Rename VMs 
  vmware_guest:
    hostname: "{{ vsphere_hostname }}"
    username: "{{ vsphere_username }}"
    password: "{{ vsphere_password }}"
    validate_certs: False
    uuid: "{{ vm_facts.instance.hw_product_uuid }}"
    name: "{{ new_name }}"

And we are done! Here is the whole file in its entirety:

- name: Rename All Hostnames
  hosts: all
  gather_facts: yes
  roles:
    - common

  tasks:
  - name: register original hostnames
    set_fact:
      previous_hostname: "{{ ansible_hostname }}"
  - name: set hostname
    become: yes
    hostname:
      name: "{{ scheme_name }}"

- name: Update /etc/hosts files
  hosts: all
  roles:
    - common

  tasks:
  - name: template hostnames
    become: yes
    template:
      src: hosts.j2
      dest: /etc/hosts 

- name: Rename VMs
  hosts: all
  connection: local
  gather_facts: no
  roles:
    - virtual

  vars:
    - new_name: "{{ scheme_name }}"
  
  tasks:
  - name: Get VM UUIDs
    vmware_guest_info:
      hostname: "{{ vsphere_hostname }}"
      username: "{{ vsphere_username }}"
      password: "{{ vsphere_password }}"
      datacenter: "ha-datacenter"
      validate_certs: False
      name: "{{ previous_hostname }}"
    register: vm_facts

  - name: Rename VMs 
    vmware_guest:
      hostname: "{{ vsphere_hostname }}"
      username: "{{ vsphere_username }}"
      password: "{{ vsphere_password }}"
      validate_certs: False
      uuid: "{{ vm_facts.instance.hw_product_uuid }}"
      name: "{{ new_name }}"

hostname-rename.yml

Results & Future

The playbook works perfectly! I'm currently supplying the sudo password with --ask-become-pass but may look into other solutions in the future.

Another possible improvement is pointing each host's own hostname to 127.0.0.1 in the template instead of its network address.
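A sketch of what that tweak could look like in hosts.j2, comparing each play host against inventory_hostname (untested, and assuming the same loop as above):

```jinja
{% for item in ansible_play_hosts %}
{% if hostvars[item]['ansible_hostname'] is defined %}
{% if item == inventory_hostname %}
127.0.0.1    {{ hostvars[item]['ansible_hostname'] }}
{% else %}
{{ item }}    {{ hostvars[item]['ansible_hostname'] }}
{% endif %}
{% endif %}
{% endfor %}
```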

Other than that, I feel this system is very robust and it lets me change the whole hostname scheme with zero effort!