Cumulus Ansible Reboot

Hi folks, me again 😃

Has anyone managed to issue a reboot of Cumulus VX (i.e. with SSH bound to the loopback) and keep the playbook running?
I have the impression that the SSH context is lost while doing so:

fatal: [test2 -> localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "wait_for"}, "module_stderr": "Sorry, try again.\n[sudo via ansible, key=sueltqpxdezomjhgfsyuqwclsnyzvlsj] password: \nsudo: 1 incorrect password attempt\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}

Any idea ?


1 reply

I hadn't tried to do this yet but was able to get it working. In this example I'm running Ansible from my local laptop. I followed the guide here: and the extra options here:

My playbook looks like this:

    ---
    - hosts: leaf1
      remote_user: vagrant
      tasks:
        - name: restart machine
          shell: sleep 2 && shutdown -r now "Ansible updates triggered"
          async: 1
          poll: 0
          sudo: true
          ignore_errors: true

        - name: waiting for server to come back
          local_action:
            module: wait_for
            host: localhost
            port: 2222   # <-- this port number is found in the "./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory" file
            delay: 30
            timeout: 60
            state: started
          sudo: false

Execution looks like this:

    $ ansible-playbook ./restart_device_and_wait.yml

    PLAY [leaf1] ******************************************************************

    GATHERING FACTS ***************************************************************
    ok: [leaf1]

    TASK: [restart machine] *******************************************************
    finished on leaf1

    TASK: [waiting for server to come back] ***************************************
    ok: [leaf1 ->]

    PLAY RECAP ********************************************************************
    leaf1                      : ok=4    changed=0    unreachable=0    failed=0
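As a side note, newer Ansible releases make this pattern simpler: the built-in `reboot` module (available since Ansible 2.7) combines the async shutdown and the wait-for-reconnect into a single task, and `become` has replaced the deprecated `sudo` keyword. A minimal sketch, assuming Ansible >= 2.7 and the same `leaf1` host from the playbook above (the 300-second timeout is an illustrative value, not from the original post):

```yaml
---
# Hedged sketch: on Ansible >= 2.7 the reboot module issues the reboot
# and then polls the connection until the host is reachable again, so
# no separate wait_for task is needed.
- hosts: leaf1
  become: true          # replaces the older sudo: true task keyword
  tasks:
    - name: Reboot the switch and wait for it to come back
      reboot:
        msg: "Ansible updates triggered"   # message shown to logged-in users
        reboot_timeout: 300                # seconds to wait for the host to return
```

With this approach the SSH context is expected to drop and be re-established by the module itself, which is exactly the behavior the original question was after.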