I have Ansible playbooks running fine from the command line, since Ansible seems to use the executing application (in this case Python) as the command to invoke playbooks with.
The problem is that when you try to run Ansible playbooks under uWSGI, the command that attempts to run the playbook is /usr/bin/uwsgi.
Somehow Ansible is finding the command it is running under. Is there a way to change that?
UPDATE: I believe that the command to run is just sys.executable. Is this overridable?
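A quick way to confirm that suspicion, as a minimal sketch: sys.executable is the path of the binary that started the current process, and uWSGI embeds the Python interpreter rather than launching a separate python process.
import sys
# From a normal command-line run this prints the python binary
# (e.g. /usr/bin/python); inside a uWSGI worker it prints
# /usr/bin/uwsgi, because uWSGI embeds the interpreter.
print(sys.executable)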
I didn't quite understand the overall picture, but does it help that you can specify the interpreter per remote host using a "behavioral inventory parameter":
ansible_python_interpreter: The target host Python path. This is useful for systems with more than one Python, or ones where Python is not located at /usr/bin/python (such as *BSD), or where /usr/bin/python is not a 2.X series Python. We do not use the /usr/bin/env mechanism, as that requires the remote user's path to be set right and also assumes the Python executable is named "python", where it might be named something like "python26".
For example, this is how your inventory file would look if you specify the interpreter at group level (you can of course also specify it at host level):
# I think specifying ansible_ssh_host won't be needed, but if needed here is how it can be done.
# localhost ansible_ssh_host=127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
localhost ansible_python_interpreter=/usr/local/bin/python
[rhel5-boxes]
rhelhost1
# ...
# other groups...
[rhel5-boxes:vars]
ansible_python_interpreter=/usr/bin/python2.6
[rhel6-boxes:vars]
ansible_python_interpreter=/usr/bin/python
[iron-boxes:vars]
ansible_python_interpreter=/usr/bin/ipython
Related
First of all, sorry if this question has already been asked; I could not find any answers.
I have a Flask application served over uwsgi and nginx (not really relevant to my question).
The problem I am facing is that if I run:
os.system(f"grep {needle} /some/path/haystack.txt")
my server will output the following error:
/bin/sh: 1: grep: not found.
If I run:
os.system(f"/usr/bin/grep {needle} /some/path/haystack.txt")
everything works as expected.
I know that letting users run commands is insecure: this is part of a hacking CTF I am developing for the company and they MUST be able to run commands, such as ;cat /tmp/flag.txt.
I thought I could have a whitelist of commands such as cat, less, more, etc., and if such a command is detected, just replace it with the /usr/bin/ variant, for example:
; cat /tmp/flag.txt
becomes
os.system(f"/usr/bin/grep ; /usr/bin/cat /tmp/flag.txt /some/path/haystack.txt")
However, this is not ideal, and I would like to tell Python to run commands without having to prefix them with /usr/bin.
Any help is greatly appreciated!
If you do not want the Python script to have to spell out /usr/bin, an option would be to create a symlink to /usr/bin, add the symlink to the PATH variable on the server, and test.
First things first, I would look into the file locations on the server: does /usr/bin exist on the server, or is it just that the Python script cannot see it? If the binaries do not exist in /usr/bin, find out where they actually are. In addition, check which user you are running the script as, and check your permissions.
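The usual root cause here is that the uWSGI worker starts with a stripped-down environment, so /bin/sh has no PATH with which to resolve bare command names. A minimal sketch of a workaround, assuming that is the case (the paths below are common defaults, not taken from the original setup):
import os

# uWSGI workers often start with a minimal environment; restore a PATH
# so /bin/sh can resolve bare names like 'grep' and 'cat' again.
os.environ["PATH"] = "/usr/bin:/bin:" + os.environ.get("PATH", "")
os.system("grep needle /some/path/haystack.txt")  # now resolves via PATH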
This question is very similar to this one but for PyCharm.
I need to use aws-vault to access AWS resources in my script, but this seems to be impossible to accomplish in PyCharm's debugging mode. It gives the ability to enter a script path, parameters, and environment variables, and there is also the external tools functionality, but none of these work.
Here is the format that works in shell:
aws-vault exec ${AWS_PROFILE} -- script.py
I thought I had almost arrived at a solution by using external tools, setting the program to "aws-vault" and its arguments to "exec your-profile -- $FilePath$", but PyCharm wants to run the script in $FilePath$ first, wait for it to finish, and only then run the debugged script (which is the same one inserted by $FilePath$).
What would work for my case is running the needed script in debug mode in conjunction with the external tool, so that the script goes into the arguments of the external tool and they run as one command.
There are ways to deal with this by launching PyCharm from the command line with aws-vault as a prefix, or by editing its .desktop file and writing the prefix directly into the Exec field, but then the app needs to be restarted whenever the AWS profile has to be changed.
Any ideas would be appreciated, thanks.
I was able to do this by installing the EnvFile plugin in PyCharm. This plugin can read in a .env file when starting a process. Basically I did the following:
1. Create a script that generates a .env file, envfile.env, and name the script generate.sh.
2. This generate.sh script is a shell script that basically does: aws-vault exec $AWS_PROFILE -- env | grep AWS_ > envfile.env, so all the AWS creds end up in envfile.env. Add other environment variables if you need them.
3. Execute the command above at least once.
4. Install the EnvFile plugin in PyCharm.
5. In the Run configuration, a new 'EnvFile' tab appears. In this tab, enable EnvFile and add the generated envfile.env (see the previous steps).
6. Go to Tools / External Tools and create an external tool for generate.sh. This way you can execute the script from PyCharm.
7. Again in the Run configuration, add a Before Launch step that executes the External Tool generate.sh.
Warning: the temporary AWS credentials are stored in plaintext in envfile.env.
I have an embedded system on which I run code live. Every time I want to run code, I start two scripts in two different terminals: "run1.sh" and "run2.sh". I can see the output of those scripts in my terminals, and I want to keep it that way.
Now I want to make a Python script that starts those two scripts in two different terminals, while still seeing their output. I also want to feed a password from the Python script to the terminals, since the scripts run under sudo. I've played a lot with subprocess and pipes, but I've never achieved all of the above requirements simultaneously. How can these requirements be met?
I'm using Ubuntu btw (so I have gnome terminal)
Update: I was probably not clear in my question, but this has to happen inside a Python script. It is not for my convenience; it's part of an integration process. The code will be part of a larger Python program, so the whole point of the question is how to do it in Python.
Based on the new information you added, I've created a small Python script which launches two terminals and shows their output separately:
Main script:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat split_pythonstuff.py
#!/usr/bin/python3
import subprocess
# Open each script in its own gnome-terminal window so the outputs stay separate.
# Note: '-x' is deprecated in newer gnome-terminal releases in favour of '--'.
subprocess.call(['gnome-terminal', '-x', 'python', '/home/mortiz/Documents/projects/python/split_python_execution/script1.py'])
subprocess.call(['gnome-terminal', '-x', 'python', '/home/mortiz/Documents/projects/python/split_python_execution/script2.py'])
Script 1:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat script1.py
#!/usr/bin/python3
while True:
    print('script 1')
Script 2:
mortiz@florida:~/Documents/projects/python/split_python_execution$ cat script2.py
#!/usr/bin/python3
while True:
    print('script 2')
From here I guess you can develop anything you want.
UPDATE: About sudo
Sudoers is a great way of controlling which things specific users can execute, with or without providing a password.
If you add this line in /etc/sudoers, there's no need for a password when you run your command with sudo:
<YOUR_USER> ALL = NOPASSWD : /usr/bin/python <SCRIPT.py>
As far as I understand your question, you have the password stored inside the script. There's no need to do that, and it's bad practice; sudoers would be a better way.
Anyway, if you want to do it the insecure way, then refer to this question and place the technique before the commands in the scripts provided in this answer.
The link provided works:
echo -e "mypassword\n" | sudo -S python test.py
You only need to apply that to the previous code.
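If you'd rather keep that inside the Python script instead of the shell scripts, here is an equally insecure sketch of the same idea using subprocess (the password and script name are placeholders):
import subprocess

# Insecure: pipes the password to sudo's stdin via -S.
# A sudoers NOPASSWD rule, as described above, is the better option.
subprocess.run(
    ['sudo', '-S', 'python', 'test.py'],
    input='mypassword\n',
    text=True,
)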
You could install Terminator and configure one profile per terminal to run any script you want.
I have a default profile which loads 3 terminals and runs 3 different commands or scripts if you want it to.
When I load that profile, the first terminal moves me to my projects dir and lists them, the next one runs df -h to show the available space, and the lower one shows my IP configuration.
This would save you lots of programming, and it's quite easy.
UPDATE: It will run any command available to your terminal: bash, zsh, python, etc. If the script is local on your machine:
python <your_script_1> # first terminal profile
python <your_script_2> # second terminal profile
both would be executed "at the same time".
If your scripts are remote on the target machine, simply create a bash script that uses ssh with a private key to connect to the remote machine and run the script; the result is the same in both scenarios.
EDIT: The best thing is setting colors and transparency for each terminal, so you can enjoy the penguin's selfie while you work.
Right now I have a script which uses numpy that I want to run automatically on a server. When I ssh in and run it manually, it works fine. However, when I set it to run as a cron job, it can't find numpy. Apparently, due to the shared server environment, the cron daemon for whatever reason can't find numpy. I contacted the server host's tech support and they told me to set up a VPS or get my own damn server. Is there any way to hack a workaround for this? Perhaps by moving certain numpy files into the same directory as the script?
If you have numpy installed somewhere on the server, you can add it into the import path for python; at the beginning of your script, do something like this:
import sys
sys.path.append("/path/to/numpy")
import numpy
The cron job runs with an empty environment. As such, it's either not using the same python binary as you are at the shell, or you have PYTHONPATH set, which it won't have under crontab.
You should run env -i HOME=$HOME sh to get a facsimile of the cron job's environment. Set environment variables until your command works, and record them.
You can then set these in your crontab file, again using the env command, like:
* * * * * env PYTHONPATH=/my/pythonpath OTHERVAR=correct-value /path/to/mycommand
Processes invoked by the cron daemon have a minimal environment, generally consisting of $HOME, $LOGNAME and $SHELL.
It sounds like numpy is perhaps somewhere on your $PYTHONPATH? If so, you will need to specify that within the crontab line. Such as
/usr/bin/env PYTHONPATH=... <then the command to run>
If you are on a Linux system using vixie cron, then you can also specify global variables in your crontab by using lines such as
# my environment settings
PYTHONPATH = <path>
SOMETHING_ELSE = blah
<then my normal cron line>
See man -s 5 crontab
Your cron job is probably executing with a different python interpreter.
Log in as yourself (via ssh) and run which python. That will tell you where your python is. Then have your cron job execute that python interpreter to run your script, or chmod +x your script and put the path in a #! line at the top of the script.
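To see the mismatch from inside cron itself, a small diagnostic sketch you can drop at the top of the script:
import sys

# Compare this output between an ssh session and the cron run; a
# different executable or a missing sys.path entry explains why the
# numpy import fails under cron.
print(sys.executable)
print(sys.path)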
My main goal is to get this up and running.
My hook gets called when I commit with TortoiseSVN, but it always exits when it gets to this line: Python "%~dp0trac-post-commit-hook.py" -p "%TRAC_ENV%" -r "%REV%" || EXIT 5
If I try to replace the call to the Python script with any simple Python script, it still doesn't work, so I'm assuming it is a problem with the call to Python and not with the script itself.
I have tried setting the PYTHON_PATH variable and also setting %PATH% to include Python.
I have trac up and running so Python is working on the server itself.
Here is some background info:
Python is installed on the Windows server and the script is called from a local machine, so
IF NOT EXIST %TRAC_ENV% EXIT 3
and
SET PYTHON_PATH=X:\Python26
IF NOT EXIST %PYTHON_PATH% EXIT 4
fail unless I set them to point at the mapped network drives (that is, point them at the X: and Y: drives, not the C: and E: drives).
Python scripts can be called from the command line anywhere on the server, regardless of the drive, so the PATH variable should be set correctly.
It appears to be an issue with calling Python scripts externally, but I'm not sure how to go about changing the permissions for this.
Thanks in advance.
Take the following things into account:
Network drive mappings and subst mappings are user-specific. Make sure the drives exist for the user account under which the SVN server is running.
Subversion hook scripts are run without any environment variables being set, for security reasons, not even %PATH%. Call the python executable with an absolute path, e.g. c:\python25\python.exe.
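Putting both points together, the hook line from the question would look something like this (assuming Python 2.6 is installed locally on the server at C:\Python26, rather than on a mapped drive):
C:\Python26\python.exe "%~dp0trac-post-commit-hook.py" -p "%TRAC_ENV%" -r "%REV%" || EXIT 5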