Ansible not able to overwrite the file


Userlevel 1
Hello,

I am following the KB article
https://support.cumulusnetworks.com/hc/en-us/articles/205014637-Ansible-Simple-Playbook-Example-with...

to overwrite the Quagga config, but I got an error:

# ansible-playbook simple_updatequagga.yml -v
Using /root/cumulus_ansible/playbook/ansible.cfg as config file

------------------------
PLAY [cumulus_vx] **************************************************************

TASK [setup] *******************************************************************
ok: [cumuslusvx2]
ok: [cumuslusvx1]

TASK [write the quagga config file] ********************************************
fatal: [cumuslusvx2]: FAILED! => {"changed": true, "failed": true, "msg": "Destination /etc/quagga not writable"}
fatal: [cumuslusvx1]: FAILED! => {"changed": true, "failed": true, "msg": "Destination /etc/quagga not writable"}
to retry, use: --limit @/root/cumulus_ansible/playbook/simple_updatequagga.retry

PLAY RECAP *********************************************************************
cumuslusvx1 : ok=1 changed=0 unreachable=0 failed=1
cumuslusvx2 : ok=1 changed=0 unreachable=0 failed=1
----------------------------------

I found that the file's owner and group are both quagga. However, even after I added parameters to specify the owner, group, and mode, it still fails:

====
template: src=/root/cumulus_ansible/playbook/quagga.j2 dest=/etc/quagga/Quagga.conf owner=quagga group=quagga mode=0644
====

Is there anything in my setup that could be causing this failure?

Thanks!

13 replies

Userlevel 4
I recommend running your command with become=true so it runs as the root user, where there will be no permission concerns. Notice the "remote_user: root" line in the example Ansible playbook? That line specifies that the commands run on the remote device as the root user to alleviate permission issues. Alternatively, that example could have used "become: true" on individual tasks for a more targeted approach.
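As a minimal sketch (host group, template path, and destination taken from the thread above, not from the KB article itself), the play-level form looks like this:

```yaml
# Sketch: run the template task with privilege escalation enabled
# for the whole play. Paths and host group follow this thread.
- hosts: cumulus_vx
  remote_user: cumulus
  become: true          # escalate to root for every task in this play
  tasks:
    - name: write the quagga config file
      template: src=/root/cumulus_ansible/playbook/quagga.j2 dest=/etc/quagga/Quagga.conf owner=quagga group=quagga mode=0644
```

For the more targeted approach, move `become: true` from the play level onto the individual task instead.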
Userlevel 1
Hello,

Thanks for the suggestion.

I guess it is caused by a sudo problem.

If I run ansible-playbook with --ask-sudo-pass, it asks for the sudo password and the play runs smoothly.

However, without --ask-sudo-pass, I get the following failure:

======================================

PLAY [cumulus_vx] **************************************************************

TASK [setup] *******************************************************************
fatal: [cumuslusvx2]: FAILED! => {"failed": true, "msg": "Timeout (17s) waiting for privilege escalation prompt: "}
fatal: [cumuslusvx1]: FAILED! => {"failed": true, "msg": "Timeout (17s) waiting for privilege escalation prompt: "}
to retry, use: --limit @/root/cumulus_ansible/playbook/simple_updatequagga.retry

PLAY RECAP *********************************************************************
cumuslusvx1 : ok=0 changed=0 unreachable=0 failed=1
cumuslusvx2 : ok=0 changed=0 unreachable=0 failed=1

=============================================

Please check my environment settings as well.

------------------------------------

# cat ansible.cfg
[defaults]
hostfile = hosts
sudo_flags=-H -S
timeout = 15
host_key_checking=False

------------------------------------------

# cat hosts
cumuslusvx1 ansible_ssh_host=192.168.10.111 ansible_ssh_user=cumulus ansible_ssh_private_key_file=/root/.ssh/id_rsa
cumuslusvx2 ansible_ssh_host=192.168.10.112 ansible_ssh_user=cumulus ansible_ssh_private_key_file=/root/.ssh/id_rsa

[cumulus_vx]
cumuslusvx1
cumuslusvx2

========================================

Thanks!

Userlevel 4
machiasiaweb wrote:

Hello,

Thanks for suggestion.

I guess it is caused by problem of sudo problem.

If I running...

Your private key allows you to log in as the cumulus user, but once that completes there's no way to obtain root access, since you haven't supplied a password, which is why you need the --ask-sudo-pass argument. Alternatively, you could install your public key in the root user account on the VX devices and use ansible_ssh_user=root for your hosts; that would avoid this issue too. Or you could give passwordless sudo access to the cumulus user so there would be no password prompt when running any sudo commands from the cumulus user. I would probably just put my public key in the root user account and use that, for simplicity.
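For the passwordless-sudo option, a drop-in file along these lines would do it (the file name under /etc/sudoers.d/ is an assumption; any name there works, and it should be installed with visudo so a syntax error can't lock you out):

```
# /etc/sudoers.d/cumulus -- install with: visudo -f /etc/sudoers.d/cumulus
# mode must be 0440; grants the cumulus user sudo without a password prompt
cumulus ALL=(ALL) NOPASSWD: ALL
```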
Userlevel 1
Thanks!
Hi all,

I'm having the same issue, and I tried passwordless sudo but still have problems.

Thanks.
Userlevel 4
Juraj Papic wrote:

Hi all,

Im having the same issue , and i tried the passwordless SUDO and still have problems.

T...

There are a lot of ways to fix this issue. Passwordless sudo is one, hardcoding a password in your Ansible config is another, but setting up key-based authentication is probably the best; see this article for an example --> https://www.cyberciti.biz/faq/how-to-upload-ssh-public-key-to-as-authorized_key-using-ansible/
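As a hedged sketch of the key-based approach from that article (the key path and host group are assumptions based on this thread; adjust to your control host):

```yaml
# Sketch: push the control host's public key into root's authorized_keys
# on each switch, using the existing cumulus login with sudo escalation.
- hosts: cumulus_vx
  remote_user: cumulus
  become: yes
  tasks:
    - name: install control-host public key for the root user
      authorized_key:
        user: root
        key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
```

After this runs once, subsequent plays can use remote_user: root with no sudo prompt at all.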

Juraj Papic wrote:

Hi all,

Im having the same issue , and i tried the passwordless SUDO and still have problems.

T...

Hello Eric,

I just did the steps for the SSH key, but when I try to write the interfaces file I get:
fatal: [Cum1]: FAILED! => {"changed": true, "failed": true, "msg": "Destination /etc/network not writable"}

This is my .yml file

- hosts: leaf1
  vars:
    loopback_ip: "10.2.1.1"
  remote_user: root
  tasks:
    - name: write the network config file
      template: src=interface1.j2 dest=/etc/network/interfaces
      notify:
        - restart networking
    - name: ensure networking is running
      service: name=networking state=started
  handlers:
    - name: restart networking
      service: name=networking state=restarted

and I'm calling this template file:

auto swp1
iface swp1

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports swp1
    bridge-vids 1-100
    bridge-pvid 1
    bridge-stp on

Thanks!!

Userlevel 1
My experience: first create an SSH public/private key pair.

Then import the public key on the Cumulus side, into the .ssh directories under both /root and /home/cumulus.

When defining the host in Ansible, use the parameter 'ansible_user=root',

and when editing the Ansible play, try including the following parameters:

----
remote_user: cumulus
become: yes
become_user: root
become_method: sudo
----

Thanks!
Hello,

Should I put this config in the ansible.cfg or the playbook?
Thanks.

Userlevel 1
Putting it in the playbook is enough.
Hello,

Still with the same error

this is my playbook
- hosts: leaf1
  vars:
    loopback_ip: "10.2.1.1"
  #remote_user: root
  remote_user: cumulus
  become: yes
  become_user: root
  become_method: sudo

  tasks:
    - name: write the network config file
      template: src=interface1.j2 dest=/etc/network/interfaces
      notify:
        - restart networking
    - name: ensure networking is running
      service: name=networking state=started
  handlers:
    - name: restart networking
      service: name=networking state=restarted

fatal: [Cum1]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 172.10.22.3 closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE", "rc": 1}

Thanks!
Similar error for me.
When I try with the user id 'cumulus' it says /etc is not writable; when I try with root it says 'authentication failure'.

Userlevel 4
shakir wrote:

similar error for me.
when i try with the user id 'cumulus' it says /etc not writable, when i try...

If using the cumulus user you'll need to set "become: yes" in your playbook and pass "-K" on the command line when calling the playbook, unless you've set up pre-shared keys and passwordless sudo access.
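A minimal invocation along those lines (playbook name taken from earlier in the thread):

```shell
# -K is short for --ask-become-pass: prompts once for the sudo
# password that "become" uses for privilege escalation
ansible-playbook simple_updatequagga.yml -K
```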
