This lab expands the scope of Ansible beyond pure network device automation and into Linux server management – a combination that reflects what network and infrastructure engineers are increasingly expected to handle in real working environments. If you have ever had a manager hand you a server task on your first day with a “you’ll figure it out”, this lab is for you.
The scenario is grounded in reality. An Ubuntu application server has been added to the topology to provide centralised services including NTP. Your manager has tasked you with configuring the server as an NTP source using Ansible, then pointing all your routers at it, and finally verifying the configuration across the entire estate with a validation playbook.
The lab works through four tasks in sequence.
The first task is updating the host inventory to add the Ubuntu server as a new group called servers with its own set of group variables. This introduces several new Ansible variables you have not seen in previous labs. The ansible_become variable enables privilege escalation on the server – the Linux equivalent of moving from user exec mode to privileged exec mode on a Cisco device. The ansible_become_method is set to sudo, which is the standard Linux privilege escalation mechanism, and ansible_become_password provides the password required for sudo operations. Understanding privilege escalation is essential when automating Linux systems because most package installation and service management tasks require root-level access that a standard user account does not have by default.
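The inventory addition described above can be sketched as follows. This is an illustrative YAML inventory fragment, not the lab's exact file: the hostname, IP address, username and password are all placeholders you would replace with your own values.

```yaml
# inventory.yml — hypothetical host, address and credentials
servers:
  hosts:
    app01:
      ansible_host: 10.0.0.50
  vars:
    ansible_user: ubuntu
    ansible_become: true
    ansible_become_method: sudo
    ansible_become_password: changeme
```

In practice the sudo password would live in an encrypted Ansible Vault file rather than in plain text, but the flat form keeps the example readable.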
The second task is writing a playbook to configure the Ubuntu server as an NTP server using five sequential tasks. The first task uses the apt module to install the chrony package, the NTP implementation used by modern Ubuntu server releases. The state: present parameter installs the package if it is missing and leaves it untouched if it is already there, which keeps the playbook idempotent. The second and third tasks use the lineinfile module to write configuration lines directly into the chrony configuration file at /etc/chrony/chrony.conf – first adding a comment block to record that the change was made by automation, then adding the allow directive to permit NTP queries from the management subnet. The lineinfile module is an essential tool for Linux server automation because it lets you make targeted edits to configuration files without rewriting the entire file or relying on templates. The fourth task uses the systemd module to restart the chrony service so the new configuration takes effect, and the fifth task uses the ufw module to open UDP port 123 in the Linux firewall so NTP traffic from the network devices can reach the server. Without this firewall rule the routers can reach the server but NTP synchronisation will silently fail.
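The five tasks described above could be sketched as the following playbook. The file path is the standard chrony location on Ubuntu; the group name, comment text and management subnet are assumptions for illustration.

```yaml
# configure_ntp_server.yml — sketch of the five-task play, subnet assumed
- name: Configure Ubuntu server as an NTP source
  hosts: servers
  become: true
  tasks:
    - name: Install the chrony NTP package
      ansible.builtin.apt:
        name: chrony
        state: present
        update_cache: true

    - name: Record that the file is managed by automation
      ansible.builtin.lineinfile:
        path: /etc/chrony/chrony.conf
        line: "# Managed by Ansible - do not edit by hand"

    - name: Permit NTP queries from the management subnet
      ansible.builtin.lineinfile:
        path: /etc/chrony/chrony.conf
        line: "allow 10.0.0.0/24"

    - name: Restart chrony to apply the new configuration
      ansible.builtin.systemd:
        name: chrony
        state: restarted

    - name: Open UDP port 123 for inbound NTP
      community.general.ufw:
        rule: allow
        port: "123"
        proto: udp
```

Note that the ufw module lives in the community.general collection, so that collection must be installed alongside ansible-core.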
The third task is writing a playbook to configure all routers to use the newly provisioned NTP server. This uses the cisco.ios.ios_config module with the ntp server command pointing at the application server IP address. The playbook also sets the minimum and maximum polling values to four – NTP polling intervals are powers of two, so a value of four gives a sixteen-second interval rather than the default sixty-four seconds. This significantly reduces the time it takes for Cisco devices to build sufficient trust in a new NTP server and reach synchronisation, which is particularly useful in a lab environment where you want to verify results quickly without waiting for the standard polling cycle to complete.
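A minimal sketch of that router play is below. The routers group name and server IP address are assumptions; the minpoll and maxpoll keywords on the ntp server command are what set the sixteen-second polling interval.

```yaml
# configure_router_ntp.yml — server address and group name assumed
- name: Point all routers at the new NTP server
  hosts: routers
  gather_facts: false
  tasks:
    - name: Configure NTP server with accelerated polling
      cisco.ios.ios_config:
        lines:
          - ntp server 10.0.0.50 minpoll 4 maxpoll 4
```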
The fourth task is writing a verification playbook using cisco.ios.ios_command to run show ntp associations and show ntp status across all routers simultaneously, registering the output and displaying it in the terminal using the debug module. The output confirms that NTP is configured and that the routers are polling the server, with the stratum value and reach counter indicating synchronisation progress. Full synchronisation takes several polling cycles to complete but the presence of the server in the associations table confirms the configuration has been applied correctly.
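The verification pattern described above – run show commands, register the output, display it with debug – could look like this. The group and variable names are illustrative.

```yaml
# verify_ntp.yml — sketch of the verification play
- name: Verify NTP state across all routers
  hosts: routers
  gather_facts: false
  tasks:
    - name: Collect NTP association and status output
      cisco.ios.ios_command:
        commands:
          - show ntp associations
          - show ntp status
      register: ntp_output

    - name: Display the registered output
      ansible.builtin.debug:
        var: ntp_output.stdout_lines
```

Because ios_command only reads state and never changes it, this playbook is safe to re-run as often as needed while you watch the reach counter climb towards synchronisation.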
The lab also covers committing all playbooks and configuration files to a Git repository at the end of the session. Version control is standard practice in production automation environments and using Git throughout the lab series means every playbook is available in the linked GitHub repository for you to download and adapt for your own environment.
By the end of this lab you will understand how to add Linux servers to your Ansible inventory alongside network devices, how to use ansible_become with sudo for privilege escalation on Linux hosts, how to install packages using the apt module, how to make targeted edits to Linux configuration files using lineinfile, how to manage services using the systemd module, how to open firewall rules using the ufw module, and how to tune NTP polling intervals on Cisco devices to accelerate synchronisation in a lab environment.
All host files, playbooks and configuration files from this lab are available in the lab 4 GitHub repository linked in the course resources.