
How to manage Juniper devices with Ansible

Ansible is an agentless (clientless) automation tool. This means you don't need any client application or agent on the device you manage. Other automation tools like Puppet work in a server-client architecture, where you need to install an agent on the end host to manage it. This makes Ansible more suitable for managing network devices, since most Network Operating Systems (NOS) don't allow the end user to install custom software on the device.

In this post, I will explain how to use Ansible to manage the configuration of Juniper devices. I assume Ansible is already installed on your system.

Ansible connects to Juniper devices via the NETCONF protocol. NETCONF (NETwork CONFiguration protocol) is used to install, manipulate, and delete the configuration of network devices. Its messages are sent as RPCs (Remote Procedure Calls), and it uses Extensible Markup Language (XML) encoding for both the configuration data and the protocol messages. The protocol messages are exchanged on top of a secure transport, which can be SSH or Transport Layer Security (TLS). On Juniper devices, we will use SSH as the transport for NETCONF.

To enable NETCONF on a Juniper device, enter configuration mode, add the below statement, and commit:

set system services netconf ssh

The default NETCONF port number is 830. If you want to change the port number, add the port statement after the ssh keyword in the same hierarchy. The system is now ready to receive requests from the Ansible server.
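For example, the port statement sits under the same hierarchy (830 shown here is the default), and from the Ansible host you can check that the NETCONF SSH subsystem answers by opening a session manually; a working device replies with an XML hello message. dist1.ams1 and ansibleuser are the device and user used later in this post:

set system services netconf ssh port 830

ssh -p 830 ansibleuser@dist1.ams1 -s netconf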

Ansible configuration is managed as files. There are different ways to lay out the playbook directory structure; the Ansible best practices guide (https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html) describes the recommended layout.

My playbook repo contains the below structure:

ansible
├── filter_plugins
│   └── sortid.py
├── group_vars
│   ├── all
│   └── passwords
├── host_vars
│   ├── dist1.ams1
│   │   └── dist1.ams1.yaml
│   └── dist1.ash1
│      └── dist1.ash1.yaml
├── inventory
├── playbooks
│   ├── filter_plugins -> ../filter_plugins
│   ├── juniper_snmp_config.yaml
│   ├── junos
│   │   ├── snmp.j2
│   │   └── syslog.j2
│   └── roles
│       └── exconfig
│           ├── tasks
│           │   └── main.yaml
│           └── templates
│               ├── interface_v19.j2
│               ├── main_v19.conf.j2
│               ├── snmp.j2
│               └── virtual_chassis.j2
└── scripts
    └── test_script.py

I will explain the files from top to bottom.
The above hierarchy lives under the ansible folder, which you can create anywhere on your system.
filter_plugins folder: used for custom Jinja2 filters. When we use Jinja2 templates to generate configs, we sometimes need extra Python functionality, such as sorting interfaces or searching for a value. The Python functions and classes in sortid.py can be called inside a Jinja2 template, for example to sort interfaces in order. In this particular example, we are not going to use any filter plugin.
group_vars/host_vars: these two folders hold the variables used to generate the config files. The group_vars folder contains variables that are common to all hosts in a particular group, while host_vars holds individual host variables.
There may be a few variables common to all hosts in a group, such as the syslog server IP, NTP server IP, or SNMP authentication key; those go in group_vars. Host-specific variables such as the SNMP location or the switch IP address details are maintained in host_vars (a sketch of a group_vars file follows this list).
inventory: the list of devices to be managed is defined in the inventory file.
playbooks: this folder contains all the Ansible playbooks. A playbook is where all the tasks are defined.
juniper_snmp_config.yaml: one of the playbooks, used to apply the SNMP configuration to the Juniper devices.
junos: this folder contains the Jinja2 templates used by the playbook to generate the config.
roles: similar to the junos folder, but with a separate role folder per device type; there is no predefined naming format. A role contains two more folders, tasks and templates. The main.yaml file under tasks defines the tasks the role runs, and templates holds the Jinja2 templates those tasks use.
scripts: standalone helper scripts, such as test_script.py.
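As an illustration of the group-level variables mentioned above, a group_vars file could look like the one below. This is only a sketch: the variable names syslog_server and ntp_server are hypothetical examples and are not referenced by the playbook in this post, which only reads group_vars/passwords and the host_vars file.

---
# group_vars/all -- example variables shared by every host (hypothetical names)
syslog_server: "10.0.0.10"
ntp_server: "10.0.0.20"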

To configure SNMPv3 on Juniper devices, 5 files from the above hierarchy are required:
* Jinja2 template file –> playbooks/junos/snmp.j2
* group_vars file –> group_vars/passwords
* host_vars file –> host_vars/dist1.ams1/dist1.ams1.yaml
* Playbook file –> playbooks/juniper_snmp_config.yaml
* inventory file
The tree below shows only the required files.

ansible
├── group_vars
│   └── passwords
├── host_vars
│   └── dist1.ams1
│      └── dist1.ams1.yaml
├── inventory
└── playbooks
    ├── filter_plugins -> ../filter_plugins
    ├── juniper_snmp_config.yaml
    └── junos
        └── snmp.j2


Create the snmp.j2 Jinja2 template with the below content in the ansible/playbooks/junos/ folder:

 ~/dev/ansible# cat playbooks/junos/snmp.j2
replace: snmp {
    name {{ inventory_hostname }};
    location {{ system_vars.snmp_location }};
    contact "ops@netops.com";
    v3 {
        usm {
            local-engine {
                user opsv3user {
                    authentication-md5 {
                        authentication-key "$9$xNl7bs4oGiqmUDqfzFAtBIEcKMNdboqmPTQn"; ## SECRET-DATA
                    }
                    privacy-aes128 {
                        privacy-key "$9$obJZjfTzCp05T1RESeKJGUhylM"; ## SECRET-DATA
                    }
                }
            }
        }
        vacm {
            security-to-group {
                security-model usm {
                    security-name opsv3user {
                        group testgroup;
                    }
                }
            }
            access {
                group testgroup {
                    default-context-prefix {
                        security-model usm {
                            security-level privacy {
                                read-view opsv3view;
                                notify-view opsv3view;
                            }
                        }
                    }
                }
            }
        }
    }
    engine-id {
        local 40c4f06a0a81;
    }
    view opsv3view {
        oid .1 include;
    }
    client-list snmp-client {
        10.0.0.0/16;
    }
}

In the above Jinja2 template we used two variables, inventory_hostname and system_vars.snmp_location. inventory_hostname is the device name taken from the inventory file, and system_vars.snmp_location is defined in the ansible/host_vars/dist1.ams1/dist1.ams1.yaml file.
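For dist1.ams1, the literal substitution of those two variables produces the following lines in the rendered config (snmp_location coming from the host_vars file shown further down):

    name dist1.ams1;
    location Cabinet:1255;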

Create the group_vars/passwords file with the username/password details used to access the switch. The file is in YAML format, so make sure the indentation is correct.

credentials:
  username: ansibleuser
  password: ansibleuserpassword
  timeout: 60
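Since this file holds the device credentials in plain text, you may want to encrypt it with ansible-vault and then supply the vault password when running the playbook. This is optional and not used in the rest of this post:

ansible-vault encrypt group_vars/passwords
ansible-playbook --ask-vault-pass playbooks/juniper_snmp_config.yaml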

Create the inventory file

 ~/dev/ansible # cat inventory
[local]
localhost ansible_connection=local

[alljuniper]
dist1.ams1

[alljuniper:vars]
dev_os=junos
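To confirm that the inventory is parsed the way you expect, you can list the hosts in the alljuniper group with the ansible command (a quick sanity check, not a required step):

ansible -i inventory alljuniper --list-hosts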

Create the host_vars file for dist1.ams1

~/dev/ansible# cat host_vars/dist1.ams1/dist1.ams1.yaml
---
system_vars:
  snmp_location: "Cabinet:1255"
  pop_code: "ams1"
  mgmt_ip: "10.0.1.250"

Finally, create the playbook file that is used to configure the switch:

 ~/dev/ansible# cat playbooks/juniper_snmp_config.yaml
- name: Juniper snmp config
# hosts refer to the group of devices mentioned in the inventory file. alljuniper is a group
  hosts: alljuniper
  gather_facts: no
  connection: local
  vars_files:
  - ../group_vars/passwords
  vars:
    paths_to_vars_files:
# Here we call the host variables for each host mentioned in the Inventory 
      - ../host_vars/{{ inventory_hostname }}/
  tasks:
    - name: Create snmp config file for juniper devices
      template:
        src: "{{dev_os}}/snmp.j2"
        dest: "/var/tmp/snmp_config/{{ inventory_hostname }}--config.txt"

- name: Copy generated config and Diff with running config
  hosts: alljuniper
  gather_facts: no
  ignore_errors: False
  roles:
  - juniper.junos
  connection: local
  vars_files:
  - ../group_vars/all
  - ../group_vars/passwords
  tasks:
    - name: Diff between the Generated config and device Running Configuration
      juniper_junos_config:
        provider: "{{  credentials }}"
        load: 'replace'
        format: 'text'
        src: "/var/tmp/snmp_config/{{ inventory_hostname }}--config.txt"
        diff: true
        check: false
        commit: true
        ignore_warning: true
      register: response
    - name: "Print result"
      debug:
        var: response

The playbook above contains two plays. The first one creates the Juniper SNMP config using the src: template and saves the output under the dest: path; in the snmp.j2 template the variables are replaced with the host-specific values defined in the “host_vars/dist1.ams1/dist1.ams1.yaml” file.

The second play uses the generated config as its src: and pushes it to the device via the NETCONF protocol. The provider: option takes the login details from the credentials variable defined in the “group_vars/passwords” file.
load: ‘replace’ completely replaces the snmp configuration tree on the device. Instead of “replace” you can use the merge option, which merges the generated config with the device config: nothing is removed, statements are only added or changed.
diff: true returns the configuration diff
commit: true commits the configuration on the device
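If you want to review the diff before changing anything on the device, a common variation is to run the same module with the commit turned off; the sketch below reuses the options already shown above, with check: true asking the device to validate the candidate config without committing it:

    - name: Preview the diff without committing
      juniper_junos_config:
        provider: "{{ credentials }}"
        load: 'replace'
        format: 'text'
        src: "/var/tmp/snmp_config/{{ inventory_hostname }}--config.txt"
        diff: true
        check: true
        commit: false
      register: response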

Run the playbook using the below command; this will configure the switch with the new SNMP config.

ansible-playbook playbooks/juniper_snmp_config.yaml
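If the inventory file is not referenced from an ansible.cfg, pass it explicitly with -i; --limit restricts the run to a single device, for example dist1.ams1:

ansible-playbook -i inventory playbooks/juniper_snmp_config.yaml --limit dist1.ams1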

The same encrypted keys (the $9$ strings in the template) can be reused on all the devices only if the same engine-id is configured on all the devices, because the encrypted SNMPv3 keys are tied to the local engine-id.
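The engine-id in the template corresponds to the following Junos statement; setting the same value on every device keeps those encrypted keys usable everywhere:

set snmp engine-id local 40c4f06a0a81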
