Using Ansible to validate network state?

hi all,

I've been using Ansible extensively both to provision VX in VirtualBox for testing changes, and to roll those changes out to production.

One thing that I want to look at doing next is leveraging Ansible to effectively test / validate that the network is functioning as expected -- at least the control plane. For example -- once the virtual topology is deployed, run a series of checks such as making sure all BGP neighbors are up, certain routes are learned, MC-LAG is up, etc.

Some of this functionality overlaps with NetQ a bit, but I really see this as something that could be executed as part of testing changes deployed to a lab topology, to verify everything is still correct prior to pushing to production -- setting up NetQ for VirtualBox deployments seems like overkill.

I guess my question is: is anyone already doing this? Or does anyone have pointers on how to get started? It looks like I can get JSON output from any net show command, and I guess I just need to figure out how to compare that to some expected values stored in a YAML file or something. Just not exactly sure where to begin, or if Ansible is even the right tool for this type of validation.
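For what it's worth, the comparison itself is pretty small once you have the JSON in hand. Here's a minimal sketch: the JSON shape below is a hypothetical stand-in for `net show bgp summary json` output (verify the real field names on your Cumulus release), and the expected values are shown inline as a dict, though in practice you'd load them from a YAML file with `yaml.safe_load()`:

```python
import json

# Hypothetical captured output from: net show bgp summary json
# (field names are an assumption; check them against your switch)
raw = '''{"peers": {"swp51": {"state": "Established"},
                    "swp52": {"state": "Established"}}}'''

# Expected state; in practice load this from a YAML file instead
expected = {'swp51': 'Established', 'swp52': 'Established'}

actual = json.loads(raw)['peers']
failures = [peer for peer, state in expected.items()
            if actual.get(peer, {}).get('state') != state]

if failures:
    print('FAIL: {0}'.format(failures))
else:
    print('PASS')
```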



3 replies

If you're not dead set on using Ansible, have you considered trying out the NetQ Virtual option instead? We also have a demo on GitHub using it.
You may want to take a look at this, which is built to do pretty much what you've described. The idea is that the GitLab setup monitors a repository, and as a change is pushed to the repository it spins up the virtual topology, runs tests against it for validation, then reports success/failure.
Hey Will.

In the GitLab setup that Eric pointed to, you can see there are some other components besides just Ansible. Ansible is not very strong in the test/validation area, and I found it too difficult to implement the simple set of routing comparison tests I needed. The GitLab demo, and a similar test script I wrote about a year ago, use 'Behave', a test framework in Python. I wrote my test scripts to run against the topology used in the cldemo-vagrant VX demo framework.

You don't need Behave specifically; you can write the tests in straight Python, but Behave gives you a structure to develop from. There are also other Python-based test frameworks.
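To give a feel for that structure, here is a rough sketch of the Behave pattern: a plain-language .feature file paired with a Python step function. The check itself is hypothetical, and a stand-in decorator replaces Behave's real @then (which comes from the 'behave' package) so the sketch runs on its own; real Behave also passes a context object rather than the dict used here:

```python
# Corresponding line in a hypothetical features/bgp.feature file:
#   Then each spine has 4 established BGP neighbors

def then(phrase):
    # stand-in for behave's @then decorator; the real one registers
    # the step so behave can match it against the .feature text
    def wrap(fn):
        fn.step_phrase = phrase
        return fn
    return wrap

@then('each spine has 4 established BGP neighbors')
def check_spines(context):
    for spine, nbrs in context['bgp_neighbors'].items():
        assert nbrs == 4, '{0} has {1} neighbors'.format(spine, nbrs)

# exercise the step with fake data standing in for collected output
check_spines({'bgp_neighbors': {'spine01': 4, 'spine02': 4}})
print('step passed')
```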

The other component parts you need are:
1) Some way to collect 'show' command output from the VX devices. I used pssh's SSHClient in my Python testing to connect to the VX switches from a virtual Linux server attached to the VX OOB network in the cldemo-vagrant topology. There are also other SSH clients.
from pssh import SSHClient

def sudo(host, command):
    # run a command on the remote host with sudo and capture its output
    client = SSHClient(host)
    channel, host, stdout, stderr = client.exec_command(command, sudo=True)
    # stdout/stderr are line iterators; join them so callers get strings
    return dict(host=host, stdout='\n'.join(stdout), stderr='\n'.join(stderr))

2) After that you have to parse the output using regex or some other parsing tool to find the particular info you need. Sometimes the JSON output will give you what you need in dictionary format. There are also some quick-and-dirty shortcuts you can use. If you have a 4-leaf, 2-spine topology, then you know the spines should each have 4 BGP neighbors and the leafs should each have 2, so you can just count the lines that match the regex 'spine|leaf':
import re

def quickparse_linecount(output, re_text):
    # count the lines of output that match the given regex
    pattern = re.compile(re_text)
    linecount = 0
    for line in output.split('\n'):
        if pattern.search(line):
            linecount += 1
    return linecount

cmd='sudo vtysh -c "show ip bgp summary"'

for spine in spine_list:
    output = sudo(spine, cmd)['stdout']
    debug('Spine: {0}, output: {1}'.format(spine, output))
    full_nbrs = quickparse_linecount(output, 'spine|leaf')
    assert_equal(full_nbrs, 4, 'DUT: {0} should have 4 BGP neighbors, only has {1}'.format(spine, full_nbrs))

for leaf in leaf_list:
    output = sudo(leaf, cmd)['stdout']
    debug('Leaf: {0}, output: {1}'.format(leaf, output))
    full_nbrs = quickparse_linecount(output, 'spine|leaf')
    assert_equal(full_nbrs, 2, 'DUT: {0} should have 2 BGP neighbors, has {1} instead'.format(leaf, full_nbrs))
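If you'd rather avoid the regex shortcut, the same check can run off the JSON output instead. A sketch, assuming the FRR-style 'show ip bgp summary json' format (the field names below are an assumption to verify against your FRR version), with a captured string standing in for the sudo() call:

```python
import json

# Hypothetical captured output from: sudo vtysh -c "show ip bgp summary json"
# (top-level keys are an assumption; check them on your FRR version)
raw = '''{"ipv4Unicast": {"peers": {
    "swp1": {"state": "Established"},
    "swp2": {"state": "Established"},
    "swp3": {"state": "Established"},
    "swp4": {"state": "Established"}}}}'''

peers = json.loads(raw)['ipv4Unicast']['peers']
established = sum(1 for p in peers.values() if p['state'] == 'Established')
print('spine01: {0} established neighbors'.format(established))
```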

3) Optionally, an 'assert' library to hard-fail test cases within a test framework. I used the one from nose, but you can instead just do a compare and print a message if it fails.

from nose.tools import assert_regexp_matches, assert_equal, assert_true
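The no-library alternative mentioned above amounts to something like this (check_equal is a hypothetical helper, not from the thread; it reports instead of raising):

```python
# Plain compare-and-print in place of an assert library: returns
# True/False instead of raising, so the test run keeps going
def check_equal(actual, expected, msg):
    if actual != expected:
        print('FAIL: ' + msg)
        return False
    return True

ok = check_equal(3, 4, 'DUT: spine01 should have 4 BGP neighbors, only has 3')
```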