I have a problem trying to ping two machines using Ansible; the first is Fedora 35 and the second is Ubuntu 21.
When I run
ansible all -i inventory -m ping -u salam -k
I get the following warnings:
[WARNING]: Unhandled error in Python interpreter discovery for host myubuntuIP: unexpected output from Python interpreter discovery
[WARNING]: sftp transfer mechanism failed on [myubuntuIP]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: scp transfer mechanism failed on [myubuntuIP]. Use ANSIBLE_DEBUG=1 to see detailed information
myubuntuIP | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
[WARNING]: Platform unknown on host myfedoraIP is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.14/reference_appendices/interpreter_discovery.html for more information.
myfedoraIP | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
When I do
which python3
on both machines, I get two different paths:
/usr/bin/python3 on the Fedora box
/bin/python3 on the Ubuntu box
I understand from one thread here that we should indicate the path of Python in the ansible.cfg file. Can I indicate two different paths in ansible.cfg? If yes, how? And why is Ansible not able to find the Python path?
First, the error on your Ubuntu system appears unrelated to this question; it says:
[WARNING]: sftp transfer mechanism failed on [myubuntuIP]
[WARNING]: scp transfer mechanism failed on [myubuntuIP]
I suspect to diagnose that issue you'll need to follow the instructions in the error message, set ANSIBLE_DEBUG=1, and if the cause isn't immediately obvious open a new question here for that particular issue.
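For example, re-running the ad-hoc ping command from above with debugging enabled would look something like this (same inventory and user as in your original command):
ANSIBLE_DEBUG=1 ansible all -i inventory -m ping -u salam -k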
I understand from one thread here that we should indicate the path of Python in the ansible.cfg file. Can I indicate two different paths in ansible.cfg? If yes, how?
You don't set this in your ansible.cfg (unless you really do want a single setting for all your hosts); you set this in your Ansible inventory or in your host_vars or group_vars directory. For example, to set this on a specific host in your inventory, you might do something like this:
all:
  hosts:
    host1:
      ansible_python_interpreter: /usr/bin/python3
    host2:
    host3:
You could accomplish the same thing by placing:
ansible_python_interpreter: /usr/bin/python3
in host_vars/host1.yaml.
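Since your two machines report different interpreter paths, you could give each host its own file; a minimal sketch (the host names here are placeholders for your Fedora and Ubuntu inventory entries):
# host_vars/fedora_box.yaml
ansible_python_interpreter: /usr/bin/python3

# host_vars/ubuntu_box.yaml
ansible_python_interpreter: /bin/python3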
If the same configuration applies to more than one host, you can group them and then apply the setting as a group variable. For example, to apply the setting only to a subset of your hosts:
all:
  hosts:
    host1:
  children:
    fedora_hosts:
      vars:
        ansible_python_interpreter: /usr/bin/python3
      hosts:
        host2:
        host3:
Or to apply it globally:
all:
  vars:
    ansible_python_interpreter: /usr/bin/python3
  hosts:
    host1:
    host2:
    host3:
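For completeness: if you really did want a single interpreter for every host set in ansible.cfg, the relevant option is interpreter_python under [defaults]. A minimal sketch:
[defaults]
interpreter_python = /usr/bin/python3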
And why is Ansible not able to find the Python path?
That's not what the warning is telling you -- it was able to find the Python path (/usr/bin/python), but "future installation of another Python interpreter could change the meaning of that path" (because /usr/bin/python, depending on your distribution, could actually be python 2 instead of python 3, etc).
Related
I am trying to use the Dell OpenManage Ansible Modules to communicate with a PowerEdge's iDRAC. I cannot find a solution to my problem online; hopefully someone here will be able to assist. The only real answer I have found is that the host machine might not be using Python, but as you can see from the error below, the host is in fact using a Python interpreter. It is not the exact same interpreter as the one in the virtual environment I am running the playbook from; I am not sure if that makes a difference or not.
Device:
PowerEdge R620 and iDRAC7
Playbook:
---
- hosts: PowerEdge
  connection: local
  gather_facts: False
  tasks:
    - name: Get hardware inventory
      dellemc_get_system_inventory:
        idrac_ip: "IP"
        idrac_user: "USER"
        idrac_password: "PASSWORD"
        validate_certs: false
      become: yes
Command:
ansible-playbook playbook.yml -i iDRAC_IP, -u USER --ask-pass -vvv -K
Error:
fatal: [iDRAC_IP]: FAILED! => {
    "ansible_facts": {},
    "changed": false,
    "failed_modules": {
        "ansible.legacy.setup": {
            "ansible_facts": {
                "discovered_interpreter_python": "/usr/bin/python"
            },
            "failed": true,
            "module_stderr": "Shared connection to iDRAC_IP closed.\r\n",
            "module_stdout": "\rcmdstat\r\n\r\tstatus : 2\r\n\r\tstatus_tag : COMMAND PROCESSING FAILED\r\n\r\terror : 252\r\n\r\terror_tag : COMMAND SYNTAX ERROR\r\n",
            "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
            "rc": 0,
            "warnings": [
                "Platform unknown on host iDRAC_IP is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.13/reference_appendices/interpreter_discovery.html for more information."
            ]
        }
    },
    "msg": "The following modules failed to execute: ansible.legacy.setup\n"
}
the host machine might not be using Python but as you can see from the error below
This module uses a REST API which uses port 443.
Ansible doesn't like to use passwords; it is highly recommended to use SSH keys. Here is a tutorial to make the keys and place them on ESXi (stop at the ESXi part).
Create SSH Keys
To place the SSH key on an iDRAC (I used this method on a Dell PowerEdge FC430), follow the directions on Page 77 of this reference guide.
Placing SSH Keys on iDRAC
The playbook that worked for me:
---
- hosts: host_file
  gather_facts: False
  collections:
    - dellemc.openmanage
  tasks:
    - name: Get System Inventory
      dellemc.openmanage.idrac_system_info:
        idrac_ip: ip
        idrac_password: pass
        idrac_user: user
        validate_certs: False
      delegate_to: localhost
The host file contains the host IP address, ansible_connection=ssh, and the remote username.
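In other words, a minimal inventory sketch might look like this (the group name matches the hosts: line above; the IP and username are placeholders, not values from the original post):
[host_file]
192.0.2.10 ansible_connection=ssh ansible_user=root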
Calling the Ansible playbook:
ansible-playbook -vvvv <your_file>.yml
If that doesn't work, you should verify that Python is upgraded to 3.8.6, and check whether the iDRAC firmware is upgraded to 2.82.82.82.
I am starting to integrate AWX into our environment and would like to move and schedule some Python scripts there, but I am facing an issue triggering a Python script from an Ansible playbook. The .yml and .py files are located in the same GitHub repository and directory. I trigger the Ansible playbook, which initiates the Python script as per the two lines below (there is more code, of course), and everything completes OK, but the script is not triggered. Previously all Python scripts were located locally on the host and scheduled via the Windows Task Scheduler.
CODE:
- name: Checking for duplicate clients
  script: duplicate_clients.py
It seems that the Python script on the Windows host is not even started, as I do not see Python running in Task Manager.
DEBUG:
<server_XXX> PUT "/runner/project/MISC/duplicate_clients.py" TO "C:\Users\C017317\AppData\Local\Temp\ansible-tmp-1656571782.4292357-31-141461256091413\duplicate_clients.py"
EXEC (via pipeline wrapper)
EXEC (via pipeline wrapper)
changed: [server_XXX] => {
    "changed": true,
    "rc": 0,
    "stderr": "#< CLIXML<Objs Version=\"1.1.0.1\" xmlns=\"http://schemas.microsoft.com/powershell/2004/04\"><Obj S=\"progress\" RefId=\"0\"><TN RefId=\"0\"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N=\"SourceId\">1</I64><PR N=\"Record\"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>",
    "stderr_lines": [
        "#< CLIXML<Objs Version=\"1.1.0.1\" xmlns=\"http://schemas.microsoft.com/powershell/2004/04\"><Obj S=\"progress\" RefId=\"0\"><TN RefId=\"0\"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N=\"SourceId\">1</I64><PR N=\"Record\"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>"
    3…
META: ran handlers
However, I have tried a different approach to start the Python script (as I am not sure how exactly it should be done when the script is located on GitHub).
CODE:
- name: Checking for duplicate clients
  command: py -3 duplicate_clients.py
DEBUG:
EXEC (via pipeline wrapper)
Using module file /usr/local/lib/python3.8/site-packages/ansible/modules/command.py
Pipelining is enabled.
EXEC (via pipeline wrapper)
[WARNING]: No python interpreters found for host server_XXX (tried ['python3.10', 'python3.9', 'python3.8', 'python3.7', 'python3.6', 'python3.5', '/usr/bin/python3', '/usr/libexec/platform-python', 'python2.7', 'python2.6', '/usr/bin/python', 'python'])
fatal: [server_XXX]: FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
It seems it is looking for Python on a Linux path, even though the target host is Windows.
The Python script itself is not the issue, as it simply creates a test file. Running the script locally completes with RC=0.
So the issue has been resolved by providing the Python executable path:
- name: Checking for duplicate clients
  script: create_file.py
  args:
    executable: '"C:\Program Files (x86)\Python\python.exe"'
I am trying to test a simple ansible script I have made using molecule. I am currently setting up molecule and have been following this tutorial: https://www.youtube.com/watch?v=93urFkaJQ44
When I run molecule test I get this error (this is verbose, -vvv output):
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************
fatal: [instance]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1631621484.12-47224-231505895998260 `\" && echo ansible-tmp-1631621484.12-47224-231505895998260=\"` echo ~/.ansible/tmp/ansible-tmp-1631621484.12-47224-231505895998260 `\" ), exited with result 1", "unreachable": true}
It says that this error is caused by the converge script, as it errors in the PLAY Converge section.
Converge.yml
---
- name: Converge
  hosts: all
  gather_facts: true

- import_playbook: ../../setUpVm.yml
~/.ansible exists, as listing it returns:
collections cp galaxy_token tmp
Any help appreciated.
Update: Setting gather_facts: no makes the test run to completion. This "fix", however, is not ideal.
Your Docker engine has restarted. A simple
molecule destroy && molecule create
will permit you to successfully run your tests.
Another clue may be a filesystem problem. It often occurs when multiple operating systems are involved, such as running Docker inside a WSL2 virtualized kernel on Windows. Try moving your Molecule checked-out source tree to a native Linux path.
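The error message itself also suggests pointing the remote tmp path at something rooted in /tmp; a minimal ansible.cfg sketch of that workaround (the exact path is an assumption) would be:
[defaults]
remote_tmp = /tmp/.ansible/tmp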
I have all my playbooks in /etc/ansible/playbooks and I want to execute them from anywhere on the PC.
I tried to configure the playbook_dir variable in ansible.cfg:
[defaults]
playbook_dir = /etc/ansible/playbooks/
and tried to put the ANSIBLE_PLAYBOOK_DIR variable in ~/.bashrc:
export ANSIBLE_PLAYBOOK_DIR=/etc/ansible/playbooks/
but I only got the same error in both cases:
nor@nor:~$ ansible-playbook test3.yaml
ERROR! the playbook: test3.yaml could not be found
This is my ansible version:
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/nor/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Oct 7 2019, 12:56:13) [GCC 8.3.0]
Does anyone know the problem and how to solve it?
According to https://manpages.debian.org/testing/ansible/ansible-inventory.1.en.html :
--playbook-dir 'BASEDIR'
Since this tool does not use playbooks, use this as a substitute playbook directory. This sets the relative path for many features including roles/, group_vars/, etc.
This means that ANSIBLE_PLAYBOOK_DIR is not used as a replacement for specifying the absolute or relative path to your playbook; rather, it tells Ansible where it should look for roles, host/group vars, etc.
The goal you're trying to achieve has no solution on the Ansible side; you need to achieve it by configuring your shell profile accordingly.
Set the following in your .bashrc file:
export playbooks_dir=/path/to/playbooks
When you call the playbook, use ansible-playbook $playbooks_dir/test3.yml.
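If you want to avoid typing the variable each time, a small wrapper function in ~/.bashrc could do it; this is just a sketch, and the function name is arbitrary:
# ~/.bashrc (sketch)
export playbooks_dir=/etc/ansible/playbooks
apb() { ansible-playbook "$playbooks_dir/$1" "${@:2}"; }
# usage: apb test3.yaml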
As others have said, ANSIBLE_PLAYBOOK_DIR is for setting the relative directory for roles/, files/, etc. IMHO, it's not terribly useful.
If I understand the OP, this is how I accomplish a similar result with all versions of Ansible:
PPWD=$PWD cd /my/playbook/dir && ansible-playbook my_playbook.yml; cd $PPWD
Explained:
PPWD=$PWD is to remember the current/present/previous working directory, then
cd /my/playbook/dir and if that succeeds run ansible-playbook my_playbook.yml (everything is relative from there); regardless, always change back to the previous working directory
PLAYBOOK_DIR says:
"A number of non-playbook CLIs have a --playbook-dir argument; this sets the default value for it."
Unfortunately, there is no hint in the doc what "the non-playbook CLIs" might be. ansible-playbook isn't one of them, obviously.
FWIW, if you're looking for a command-line oriented framework, try ansible-runner. For example, export the location of the private_data_dir:
shell> export ansible_private=/path/to/<private-data-dir>
Then run the playbook
shell> ansible-runner -p playbook.yml run $ansible_private
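For reference, ansible-runner expects the private data directory to follow a particular layout, with the playbook under project/; a minimal sketch (file names are just examples):
<private-data-dir>/
    inventory/hosts
    project/playbook.yml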
I'm trying to create a virtualenv for the nodepool user using Ansible, but it is failing as outlined below. I want to become the nodepool user as it uses Python 3.5, whereas all the others use the server default, 2.7.5. It seems that it cannot source the 3.5 version.
The play is:
- name: Create nodepool venv
  become: true
  become_user: nodepool
  become_method: su
  command: virtualenv-3.5 /var/lib/nodepool/npvenv
The error is:
fatal: [ca-o3lscizuul]: FAILED! => {"changed": false, "cmd": "virtualenv-3.5 /var/lib/nodepool/npvenv", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2}
It works from the shell:
[root@host ~]# su nodepool
[nodepool@host root]$ virtualenv-3.5 /var/lib/nodepool/npvenv
Using base prefix '/opt/rh/rh-python35/root/usr'
New python executable in /var/lib/nodepool/npvenv/bin/python3
Also creating executable in /var/lib/nodepool/npvenv/bin/python
Installing setuptools, pip, wheel...done.
I worked around the issue as follows:
shell: source /var/lib/nodepool/.bashrc && virtualenv-3.5 /var/lib/nodepool/npvenv creates="/var/lib/nodepool/npvenv"
It is not how I'd like to do it, but it will do. If anyone knows how I might do it the way I originally posted, please advise; perhaps it's not possible, as it doesn't pick up paths, etc.
I threw in the creates option as it prevents redoing the step if the venv already exists.
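Since the working shell run reports the base prefix /opt/rh/rh-python35/root/usr, another sketch would be to call virtualenv-3.5 by its absolute path, which avoids depending on the login shell's PATH at all; the exact binary location is an assumption based on that prefix:
- name: Create nodepool venv (path to virtualenv-3.5 assumed from the base prefix)
  become: true
  become_user: nodepool
  become_method: su
  command: /opt/rh/rh-python35/root/usr/bin/virtualenv-3.5 /var/lib/nodepool/npvenv
  args:
    creates: /var/lib/nodepool/npvenv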