Use facts gathered by ansible programmatically - python

I'd like to write a Python program that uses the facts that Ansible gives me with ansible HOST -m setup.
When I call this, I get a response that is only almost pure JSON:
$ ansible localhost -m setup
localhost | success >> {
// actual data
}
Is there some way to get this JSON response directly without parsing the shell output (which might not be too stable)? Could I even use Ansible directly in a Python 3 program?

Versions stable-2.2, stable-2.3, and 2.4+
The latest ansible releases for 2.2, 2.3, and 2.4 all support the ANSIBLE_STDOUT_CALLBACK variable. To use it, you need to add an ansible.cfg file that looks like:
[defaults]
bin_ansible_callbacks = True
callback_plugins = ~/.ansible/callback_plugins
You can place it wherever you're using ansible. Then, you need to create the callback_plugins directory, if you haven't already. Finally, you need to add a custom json callback to that directory. I copied the json callback plugin that is bundled with ansible into the callback_plugins directory, then edited a single line in it to make it work.
I found the json.py file by first executing ansible --version
$ ansible --version
ansible 2.4.0.0
config file = /Users/artburkart/Code/ansible.cfg
configured module search path = [u'/Users/artburkart/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.13 (default, Jul 18 2017, 09:17:00) [GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)]
Then I used the "ansible python module location" to find json.py:
cp /usr/local/lib/python2.7/site-packages/ansible/plugins/callback/json.py ~/.ansible/callback_plugins/
Finally, I edited the v2_runner_on_ok function in the json.py file to look like this (courtesy of armab on GitHub):
def v2_runner_on_ok(self, result, **kwargs):
    host = result._host
    self.results[-1]['tasks'][-1]['hosts'][host.name] = result._result
    # Print each host's result immediately, so ad-hoc runs emit JSON too.
    print(json.dumps({host.name: result._result}, indent=4))
Once that was all set up, the command is very simple:
ANSIBLE_STDOUT_CALLBACK=json ansible all -i localhost, -c local -m setup | jq
If you always want to parse JSON output, you can add the following line to the end of the ansible.cfg file I described above.
stdout_callback = json
That way, you don't need to include the environment variable anymore.
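If you want to consume that JSON from Python 3 (which is what the question asks for), a minimal sketch along these lines should work; the "localhost" key and the host/connection flags simply mirror the ad-hoc command above, and the parsing assumes the edited callback shown earlier, which prints a {"<hostname>": {...}} document first:
import json
import os
import subprocess

# Enable the JSON callback for this one run; everything else comes from
# the current environment.
env = dict(os.environ, ANSIBLE_STDOUT_CALLBACK="json")
proc = subprocess.run(
    ["ansible", "all", "-i", "localhost,", "-c", "local", "-m", "setup"],
    env=env,
    stdout=subprocess.PIPE,
    universal_newlines=True,
    check=True,
)
# A summary document may follow the per-host one, so decode only the first
# JSON value found on stdout.
result, _ = json.JSONDecoder().raw_decode(proc.stdout.lstrip())
facts = result["localhost"]["ansible_facts"]
print(facts["ansible_distribution"])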
Versions <= latest 2.2 stable
When querying against instances, I use the following command:
ansible all --inventory 127.0.0.1, --connection local --module-name setup | sed '1 s/^.*|.*=>.*$/{/g'
If you pipe the output into jq, as leucos suggested, it happily parses the semi-valid JSON. For example:
ansible all -i hosts -m setup | sed '1 s/^.*|.*=>.*$/{/g' | jq -r '.ansible_facts.ansible_distribution'
CentOS
Ubuntu
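If you'd rather do the sed cleanup in Python instead, a rough equivalent looks like the sketch below; it assumes a single host (localhost, as in the earlier command), so there is only one "host | SUCCESS =>" header to strip:
import json
import subprocess

proc = subprocess.run(
    ["ansible", "all", "-i", "localhost,", "-c", "local", "-m", "setup"],
    stdout=subprocess.PIPE,
    universal_newlines=True,
    check=True,
)
# Drop the "localhost | SUCCESS => " prefix and parse the remaining JSON body.
_, _, body = proc.stdout.partition("=>")
facts = json.loads(body)["ansible_facts"]
print(facts["ansible_distribution"])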

If Python 2 is OK for you, you can use the Ansible API directly. You can find detailed instructions here: http://docs.ansible.com/developing_api.html
It's really easy.
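For reference, the pre-2.0 API documented at that link looks roughly like the sketch below. Note the Runner class was removed in Ansible 2.0, so this only applies to 1.x under Python 2, and the transport/forks arguments here are just an assumption for a local run:
import ansible.runner  # Ansible 1.x only; Runner was removed in 2.0

runner = ansible.runner.Runner(
    module_name="setup",   # same module as `ansible -m setup`
    module_args="",
    pattern="localhost",
    transport="local",
    forks=1,
)
results = runner.run()
# run() returns a dict with "contacted" and "dark" (unreachable) hosts.
facts = results["contacted"]["localhost"]["ansible_facts"]
print(facts["ansible_distribution"])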
An alternate, shell-centric way is to use jq. There is a quick intro here: http://xmodulo.com/how-to-parse-json-string-via-command-line-on-linux.html

Related

Issues trying to install Airflow locally

I'm new to Airflow and I'm trying to install it locally, following the instructions on the link below:
https://airflow.apache.org/docs/apache-airflow/stable/start/local.html
I'm running this code (as mentioned on the link):
# Airflow needs a home. `~/airflow` is the default, but you can put it
# somewhere else if you prefer (optional)
export AIRFLOW_HOME=~/airflow
# Install Airflow using the constraints file
AIRFLOW_VERSION=2.2.5
PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
# For example: 3.6
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
# For example: https://raw.githubusercontent.com/apache/airflow/constraints-2.2.5/constraints-3.6.txt
pip install "apache-airflow==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
# The Standalone command will initialise the database, make a user,
# and start all components for you.
airflow standalone
# Visit localhost:8080 in the browser and use the admin account details
# shown on the terminal to login.
# Enable the example_bash_operator dag in the home page
and getting this error:
File "C:\Users\F43555~1\AppData\Local\Temp/ipykernel_12908/3415008398.py", line 3
export AIRFLOW_HOME=~/airflow
^
SyntaxError: invalid syntax
Does anyone know how to deal with it?
I'm running on Windows 10, VS Code (Jupyter notebook).
Thanks!
Airflow is only supported on Linux, and it looks like you're trying to run this on a Windows machine. The SyntaxError itself comes from pasting shell commands (such as export AIRFLOW_HOME=~/airflow) into a Jupyter notebook cell, where they are interpreted as Python.
If you want to install Airflow on Windows you'll need to use something like Windows Subsystem for Linux (WSL) or Docker. There are some examples around that show how to do this on WSL (and plenty using Docker) - here is one of them with WSL.

ansible commands run only with absolute path

On Ubuntu 20.04.2 LTS, the Ansible engine is installed with the pip3 command:
mariusz#g3:~$ pip3 show ansible
Name: ansible
Version: 4.1.0
However, running ansible commands ends with the error below:
mariusz#g3:~$ ansible
python3: can't open file '/usr/bin/ansible': [Errno 2] No such file or directory
The PATH variable is set correctly:
mariusz#g3:~$ which ansible
/home/mariusz/.local/bin/ansible
And I can run ansible command with absolute path:
mariusz#g3:~$ /home/mariusz/.local/bin/ansible --version
ansible [core 2.11.1]
config file = /home/mariusz/.ansible.cfg
configured module search path = ['/home/mariusz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/mariusz/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/mariusz/.ansible/collections:/usr/share/ansible/collections
executable location = /home/mariusz/.local/bin/ansible
python version = 3.8.5 (default, May 27 2021, 13:30:53) [GCC 9.3.0]
jinja version = 2.11.3
libyaml = True
Any ideas how to solve it without root privileges, i.e. without creating a /usr/bin/ansible symlink?
The entries in the $PATH variable are tried in order, so you'd want to relocate your $HOME/.local/bin to the beginning of the list in order for it to win out over the /usr/bin entry that's there now.
You can do this in an interactive shell to confirm or deny the theory, and then put it at the end of your ~/.bashrc to make it permanent:
PATH=$HOME/.local/bin:$PATH
It seems that the ansible package, which was installed before, left behind a bash aliases file that was not removed during package uninstall, which is why the shell keeps running python3 /usr/bin/ansible instead of the binary on $PATH.
$ cat ~/.bash_aliases
alias ansible='python3 /usr/bin/ansible'
alias ansible-doc='python3 /usr/bin/ansible-doc'
alias ansible-galaxy='python3 /usr/bin/ansible-galaxy'
alias ansible-inventory='python3 /usr/bin/ansible-inventory'
alias ansible-playbook='python3 /usr/bin/ansible-playbook'
alias ansible-vault='python3 /usr/bin/ansible-vault'
Removing those aliases (or the ~/.bash_aliases file itself, if nothing else is in it) and starting a new shell fixes the problem.

Ansible: Change playbooks location

I have all playbooks in /etc/ansible/playbooks and I want to execute them from anywhere on the PC.
I tried to configure the playbook_dir variable in ansible.cfg:
[defaults]
playbook_dir = /etc/ansible/playbooks/
and tried to put the ANSIBLE_PLAYBOOK_DIR variable in ~/.bashrc:
export ANSIBLE_PLAYBOOK_DIR=/etc/ansible/playbooks/
but I only got the same error in both cases:
nor#nor:~$ ansible-playbook test3.yaml
ERROR! the playbook: test3.yaml could not be found
This is my ansible version:
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/nor/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Oct 7 2019, 12:56:13) [GCC 8.3.0]
Does anyone know the problem and how to solve it?
According to https://manpages.debian.org/testing/ansible/ansible-inventory.1.en.html :
--playbook-dir 'BASEDIR'
Since this tool does not use playbooks, use this as a substitute playbook directory. This sets the relative path for many features including roles/ group_vars/ etc.
This means that ANSIBLE_PLAYBOOK_DIR is not used as a replacement for specifying the absolute or relative path to your playbook; it tells ansible where it should look for roles, host/group vars, etc.
The goal you're trying to achieve has no solution on the ansible side; you need to achieve it by configuring your shell profile accordingly.
Set the following in your .bashrc file:
export playbooks_dir=/path/to/playbooks
When you call the playbook, use ansible-playbook $playbooks_dir/test3.yml.
As others have said, ANSIBLE_PLAYBOOK_DIR is for setting the relative directory for roles/, files/, etc. IMHO, it's not terribly useful.
If I understand the OP, this is how I accomplish a similar result with all versions of ansible ...
PPWD=$PWD cd /my/playbook/dir && ansible-playbook my_playbook.yml; cd $PPWD
Explained:
PPWD=$PWD remembers the current (previous) working directory, then
cd /my/playbook/dir and, if that succeeds, run ansible-playbook my_playbook.yml (everything is relative from there); regardless, always change back to the previous working directory.
The documentation for the PLAYBOOK_DIR configuration setting says:
"A number of non-playbook CLIs have a --playbook-dir argument; this sets the default value for it."
Unfortunately, there is no hint in the doc what "the non-playbook CLIs" might be. ansible-playbook isn't one of them, obviously.
FWIW, if you're looking for a command-line oriented framework, try ansible-runner. For example, export the location of the private_data_dir:
shell> export ansible_private=/path/to/<private-data-dir>
Then run the playbook
shell> ansible-runner -p playbook.yml run $ansible_private
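ansible-runner also exposes a Python API, so the same run can be kicked off from a Python script; a minimal sketch, assuming the private data directory is laid out as the ansible-runner docs describe (the path below is a placeholder):
import ansible_runner

# run() blocks until the playbook finishes and returns a Runner object.
r = ansible_runner.run(
    private_data_dir="/path/to/private-data-dir",  # placeholder path
    playbook="playbook.yml",
)
print(r.status, r.rc)  # e.g. "successful" 0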

Set default python 2.7.

I have just installed python 2.7 using MacPorts as:
sudo port install py27-numpy py27-scipy py27-matplotlib py27-ipython +notebook py27-pandas py27-sympy py27-nose
during the process it found some issues, mainly broken files related to py25-haslib, which I managed to fix. Now it seems everything is OK. I tested a few programs and they run as expected. Currently, I have two versions of python: 2.5 (the default, from when I worked at my former institution) and 2.7 (just installed):
which python
/usr/stsci/pyssg/Python-2.5.1/bin/python
which python2.7
/opt/local/bin/python2.7
The next move would be to set the new python version 2.7 as default:
sudo port select --set python python27
sudo port select --set ipython ipython27
My question is: is there a way to go back to 2.5 in case something goes wrong?
I know a priori nothing has to go wrong, but I have a few data reduction and analysis routines that work perfectly with the 2.5 version and I want to make sure I don't mess up before setting the default.
If you want to revert, you can modify your .bash_profile or other login shell initialization to fix $PATH so that it does not add "/Library/Frameworks/Python.framework/Versions/2.5/bin" to $PATH and/or does not have /usr/local/bin appear before /usr/bin on $PATH.
If you want to permanently remove the python.org-installed version, paste the following lines, up to and including the chmod, into a POSIX-compatible shell:
tmpfile=/tmp/generate_file_list
cat <<"NOEXPAND" > "${tmpfile}"
#!/bin/sh
version="${1:-"2.5"}"
file -h /usr/local/bin/* | grep \
"symbolic link to ../../../Library/Frameworks/Python.framework/"\
"Versions/${version}" | cut -d : -f 1
echo "/Library/Frameworks/Python.framework/Versions/${version}"
echo "/Applications/Python ${version}"
set -- Applications Documentation Framework ProfileChanges \
SystemFixes UnixTools
for package do
echo "/Library/Receipts/Python${package}-${version}.pkg"
done
NOEXPAND
chmod ug+x ${tmpfile}
...excerpted from a troubleshooting question on the python website

using python virtual env in R

I am using the 'rPython' package for calling Python within R, but I am unable to make R refer to my Python virtual environment.
In R, I have tried using
system('. /home/username/Documents/myenv/env/bin/activate')
but after running the above my python library path does not change (which I check via python.exec(print sys.path)). When I run
python.exec('import nltk')
I am thrown the error:
Error in python.exec("import nltk") : No module named nltk
although it is there in my virtual env.
I am using R 3.0.2, Python 2.7.4 on Ubuntu 13.04.
Also, I know I can change the python library path from within R by using
python.exec("sys.path='\your\path'")
but I don't want this to be entered manually over and over again whenever a new python package is installed.
Thanks in advance!
Use the "activate" bash script before running R, so that the R process inherits the changed environment variables.
$ source myvirtualenv/bin/activate
$ R
Now rPython should be able to use the packages in your virtualenv.
Works for me. It may behave strangely if the Python version you made the virtualenv with is different from the one rPython links into the R process.
Expanding on @PaulHarrison's answer, you can mimic what .../activate is doing directly in the environment (before starting Python from R).
Here's one method for determining what vars are modified:
$ set > pyenv-pre
$ . /path/to/venv/activate
(venvname) $ set > pyenv-post
(venvname) $ diff -uw pyenv-pre pyenv-post
This gave me something like:
--- pyenv-pre 2018-12-02 15:16:43.093203865 -0800
+++ pyenv-post 2018-12-02 15:17:34.084999718 -0800
@@ -33,10 +33,10 @@
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+PATH=/path/to/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
PIPESTATUS=([0]="0")
PPID=325990
-PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
+PS1='(venvname) \[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
PS2='> '
PS4='+ '
PWD=/
@@ -50,10 +50,13 @@
TERM=xterm
UID=3000019
USER='helloworld'
+VIRTUAL_ENV=/path/to/venv
XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop
XDG_RUNTIME_DIR=/run/user/3000019
XDG_SESSION_ID=27577
-_=set
+_=/path/to/venv/bin/activate
+_OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+_OLD_VIRTUAL_PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
__git_printf_supports_v=yes
__grub_script_check_program=grub-script-check
_backup_glob='#(#*#|*#(~|.#(bak|orig|rej|swp|dpkg*|rpm#(orig|new|save))))'
@@ -2390,6 +2393,31 @@
fi;
fi
}
+deactivate () ... rest of this function snipped for brevity
So it appears that the important envvars to update are:
PATH: prepend the venv bin directory to the existing paths
VIRTUAL_ENV: set to /path/to/venv
I believe the other changes (OLD_VIRTUAL_* and deactivate () ...) are optional and really only used to back-out the venv activation.
Looking at the .../activate script verifies these are most of the steps taken. Another step is unset PYTHONHOME if set, which may not be shown in the diff above if you didn't have it set previously.
To R-ize this:
Sys.setenv(
  PATH = paste("/path/to/venv/bin", Sys.getenv("PATH"), sep = .Platform$path.sep),
  VIRTUAL_ENV = "/path/to/venv"
)
Sys.unsetenv("PYTHONHOME") # works whether previously set or not
I've had luck getting scripts to use my pyenv installation by using:
#!/usr/bin/env python
So maybe try pointing R to that path (sans #!, of course).
Managed to get it working by using bash -c:
system("/bin/bash -c \"source ./pydatatable/py-pydatatable/bin/activate && python -c 'import datatable as dt; print(dt.__version__)'\"")
