When I run my Python script in a shell terminal, it works:
sudo myscript.py --version=22 --base=252 --hosts="{'hostA':[1],'hostB':[22]}"
But when I run it in Hudson or Jenkins, using an Execute Shell step, somehow the string --hosts="{'hostA':[1],'hostB':[22]}" is interpreted as
sudo myscript.py --version=22 --base=252 '--hosts="{'hostA':[1],'hostB':[22]}"'
How do we overcome this so that our script will run in Jenkins and Hudson?
Thank you.
Sincerely
It looks like you're encountering a battle-of-the-quoted-strings situation due to your use of quotes directly in the command and the fact that Jenkins is shelling out from a generated temporary shell script.
I find the best thing to do with Jenkins is to create a bash script that wraps the commands you want to run (and you can also have it do any other environment-related setup you may want to have it do, such as source a config bash script that sets up other env vars).
You can have it accept arguments that may vary from run to run, passed to it from the Jenkins config. Any interpolation then happens within the script -- you're just passing strings. (In particular, in this case, the hosts arg will be "{'hostA':[1],'hostB':[22]}", which is passed to the shell script and then interpolated, with the double quotes re-included.)
So, to that end, say you have a jenkins_run.sh script that runs a command like this:
myscript.py --version=$VERSION --base=$BASE --hosts="$HOSTS"
Where the variables are passed in as arguments and assigned prior to that (you could use the positional parameters $1, $2 et al. directly if you want), as in the sketch below.
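A minimal sketch of such a wrapper (the file name jenkins_run.sh and the positional argument order are illustrative; sudo is omitted, see the note below):
#!/bin/bash
# jenkins_run.sh -- wrapper invoked from the Jenkins Execute Shell step
VERSION="$1"
BASE="$2"
HOSTS="$3"    # e.g. {'hostA':[1],'hostB':[22]}
myscript.py --version="$VERSION" --base="$BASE" --hosts="$HOSTS"
The Execute Shell step then only needs something like:
./jenkins_run.sh 22 252 "{'hostA':[1],'hostB':[22]}"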
I would also be cautious about using sudo in conjunction with a Jenkins run, since that could end up prompting for input (e.g. a password). I would instead recommend setting the permissions on the script such that the user under which Jenkins is running can simply execute it.
Related
I'm very new to coding and software, so please stick with me. I am trying to execute a command in my Raspberry Pi terminal via a Python script, and I want to be able to run this script from the Desktop. The command to execute is
(rpi-deep-pantilt-env) pi@raspberrypi:~/rpi-deep-pantilt $ rpi-deep-pantilt detect
So as you can see, I need to cd into rpi-deep-pantilt, then activate my virtual environment, then run the command, all via the .py script.
A simple shell script to do what you ask:
#!/bin/sh
cd "$HOME"/rpi-deep-pantilt
. ./rpi-deep-pantilt-env/bin/activate
rpi-deep-pantilt detect "$@"
Most or all of this is probably unnecessary. I guess you could run
#!/bin/sh
d="$HOME"/rpi-deep-pantilt
exec "$d"/rpi-deep-pantilt-env/bin/python "$d"/rpi-deep-pantilt detect "$#"
though if your Python script has hardcoded file paths which require it to run in a specific directory, that's a bug which will prevent this from working.
The "$#" says to pass on any command-line arguments, so if you saved this script as pant, running pant blind mice will pass on the arguments blind and mice to your Python script. (Of course, if it doesn't accept additional command-line arguments after detect, this is unimportant, but I'd still pass them on so you can generate an error message, rather than have them be ignored as if they were not there.)
I have the virtualenv created and installed. I have also installed the jsnapy tool inside my virtual env.
This is the script that we are using:
Filename : venv.py
import os
os.system('/bin/bash --rcfile ~/TestAutomation/End2EndAutomation/bin/activate')
os.system('End2EndAutomation/bin/jsnapy')
ubuntu@server:~/TestAutomation$ python venv.py
(End2EndAutomation) ubuntu@sdno-server:~/TestAutomation$ ^C
What we need to know is how we can get into the virtualenv, run a command, and deactivate it, all from a Python script.
[EDIT1]
I used the code given in the comment. It just enters the virtual env; when I issue exit, it runs the jsnapy command.
ubuntu@server:~/TestAutomation$ python venv.py
(End2EndAutomation) ubuntu@server:~/TestAutomation$ exit
exit
usage:
This tool enables you to capture and audit runtime environment of
networked devices running the Junos operating system (Junos OS)
Tool to capture snapshots and compare them
It supports four subcommands:
--snap, --check, --snapcheck, --diff
1. Take snapshot:
jsnapy --snap pre_snapfile -f main_configfil
Each call to os.system() spawns a new shell instance, which exits when its command finishes, so nothing (such as an activated virtualenv) carries over to the next call. To run all the commands in one shell instance you could put them inside a single bash script and call that from os.system()
run.sh
source ~/TestAutomation/End2EndAutomation/bin/activate
End2EndAutomation/bin/jsnapy
deactivate
Python
os.system('bash run.sh')  # bash, not sh: run.sh uses the bash builtin `source`
Alternatively, you could write a multiline bash command, as long as it's all in one os.system() call.
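A sketch of that alternative, using bash -c since os.system() runs /bin/sh, which may not provide the source builtin:
import os

# one shell instance for the whole sequence, so the activation persists
# long enough for jsnapy to run inside the virtualenv
os.system('bash -c "'
          'source ~/TestAutomation/End2EndAutomation/bin/activate; '
          'End2EndAutomation/bin/jsnapy; '
          'deactivate"')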
Two successive calls to os.system() will create two independent processes, one after the other. The second will run when the first finishes. Any effects of commands executed in the first process will have been forgotten and flushed when the second runs.
You want to run the activation and the command which needs to be run in the virtualenv in the same process, i.e. the same single shell instance.
To do that, you can use bash -c '...' to run a sequence of commands. See below.
However, a better solution is to simply activate the virtual environment from within Python itself.
import os
import subprocess

p = os.path.expanduser('~/TestAutomation/End2EndAutomation/bin/activate_this.py')
execfile(p, dict(__file__=p))  # Python 2's execfile; see the Python 3 variant below
subprocess.check_call(['./End2EndAutomation/bin/jsnapy'])
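On Python 3, where execfile() was removed, the same activate_this.py pattern would be:
import os
import subprocess

p = os.path.expanduser('~/TestAutomation/End2EndAutomation/bin/activate_this.py')
exec(open(p).read(), dict(__file__=p))
subprocess.check_call(['./End2EndAutomation/bin/jsnapy'])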
For completeness, here is the Bash solution, with comments.
import subprocess
subprocess.check_call(['bash', '-c', """
. ~/TestAutomation/End2EndAutomation/bin/activate
./End2EndAutomation/bin/jsnapy"""])
The preference for subprocess over os.system is recommended even in the os.system documentation.
There is no need to explicitly deactivate; when the bash command finishes, that will implicitly also deactivate the virtual environment.
The --rcfile trick is a nice idea, but it doesn't work when the shell you are calling isn't interactive.
In Linux, we usually add a shebang in a script to invoke the respective interpreter. I tried the following example.
I wrote a shell script without a shebang and with executable permission. I was able to execute it using ./. But if I write a similar python program, without shebang, I am not able to execute it.
Why is this so? As far as I understand, the shebang is required to find the interpreter. So how do shell scripts work without one, but not a Python script?
My assumption is that a script without a shebang is executed in the current environment, which at the command line is your default shell, e.g. /bin/bash.
Shell scripts will only work if you are already in the shell you targeted... there is no "python shell"... as such, Python will never work without explicitly calling python (via a shebang or on the command line)
By default the shell will try to execute the script itself; the #! notation came later.
There's a subtle distinction here. If the target is a binary or begins with a #! shebang line, then the shell calls execv successfully. If the target is a text file without a shebang, then the call to execv will fail, and the shell is free to try launching it under /bin/sh or something else.
http://en.wikipedia.org/wiki/Shebang_(Unix)
Under Unix-like operating systems, when a script with a shebang is run as a program, the program loader parses the rest of the script's initial line as an interpreter directive; the specified interpreter program is run instead, passing to it as an argument the path that was initially used when attempting to run the script.
Now when the #! is not found, every line is interpreted as a native shell command. Hence, if you write a bash script and run it under a bash shell it will work. If you run the same bash script in, say, a tcsh shell, it will not work without an initial #!/bin/bash line telling tcsh which interpreter to hand the script to.
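A quick way to see the difference for yourself (file names are just examples):
$ printf 'echo hello from a shebang-less script\n' > noshebang.sh
$ chmod +x noshebang.sh
$ ./noshebang.sh        # works: the shell falls back to running it as shell code
hello from a shebang-less script
$ printf 'print("hello")\n' > noshebang.py
$ chmod +x noshebang.py
$ ./noshebang.py        # fails: the shell tries to parse the Python as shell syntax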
I've a shell script with two shebangs: the first one says #!/bin/sh and, after a few lines, the other one is #!/usr/bin/env python.
When this script is given executable permission and ran as ./script.sh, the script works fine, uses /bin/sh in first part and uses python interpreter in latter part.
But when the script is run as sh script.sh, the second shebang is not recognized and the script fails. Is there any way I can force it to change the interpreter if the script is run explicitly as sh script.sh?
The reason I need this is because I need to run the scripts through a tool which runs as sh script.sh
As far as I know you cannot have two shebang lines in one script. The shebang works only when -
it is on the first line
it starts in column one
If you need to run python code then have it in another script and then call the script by doing
python path/to/the/script.py
A better way to do this would be to use a shell here document. Something like this:
#!/bin/sh
curdir=`pwd`
/usr/bin/env python <<EOF
import os
print(os.listdir("$curdir"))
EOF
This way, you won't need to distribute the code on two separate files.
As you see, you can even access shell variables from the python code.
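Note that the $curdir substitution works precisely because the EOF delimiter is unquoted. If your embedded Python itself contains $ characters, quote the delimiter and the shell will leave the text untouched:
/usr/bin/env python <<'EOF'
import os
# with the quoted 'EOF' delimiter, nothing here is expanded by the shell
print(os.environ.get("HOME"))
EOF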
have your script.sh script invoke the python interpreter directly:
#!/bin/sh
# include the full path to python if needed
python /path/to/your_script.py "$@"   # placeholder path; "$@" forwards this script's arguments
I am trying to write what should be a super simple bash script: basically activate a virtual env and then change to the working directory. A task I do a lot, and condensing it to one command just made sense.
Basically ...
#!/bin/bash
source /usr/local/turbogears/pps_beta/bin/activate
cd /usr/local/turbogears/pps_beta/src
However, when it runs it just dumps me back to the shell: I am still in the directory I ran the script from and the environment isn't activated.
All you need to do is run your script with the source command. This is because the cd command is local to the shell that runs it. When you run a script directly, a new shell is executed, which terminates when it reaches the script's end of file. By using the source command you tell your current shell to execute the script's instructions directly, as shown below.
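For example, if the script above is saved as pps_env.sh (the name is illustrative):
source pps_env.sh
# or the equivalent POSIX spelling:
. pps_env.sh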
The effect of cd is local to the current script's shell, which exits when you fall off the end of the file.
What you are trying to do is not "super simple" because you want to override this behavior.
Look at exec for replacing the current process with the process of your choice.
For feeding commands into an interactive Bash, look at the --rcfile option.
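A minimal sketch combining the two (paths are from the question; the process substitution <(...) assumes bash):
#!/bin/bash
# replace this script's process with an interactive bash that has
# already activated the env and changed into the source tree
exec bash --rcfile <(printf '%s\n' \
    'source /usr/local/turbogears/pps_beta/bin/activate' \
    'cd /usr/local/turbogears/pps_beta/src')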
I imagine you wish your script to be dynamic; however, as a quick fix when working on a new system, I create an alias. For example, say the env is called 'py1', located at ~/envs/py1/, with a repository location at ~/proj/py1/:
alias py1='source ~/envs/py1/bin/activate; cd ~/proj/py1/'
You can now access your project and virtualenv by typing py1 from anywhere in the CLI.
I know that this is nowhere near ideal and violates DRY, among other programming principles. It is just a quick and dirty way of getting your env and project accessible quickly without having to set up the variables.
I know that I'm late to the game here, but may I suggest using virtualenvwrapper? It provides a nice bash hook that appears to do exactly what you want.
Check out this tutorial: http://blog.fruiapps.com/2012/06/An-introductory-tutorial-to-python-virtualenv-and-virtualenvwrapper
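A minimal sketch of what that looks like in practice (assumes virtualenvwrapper is installed and virtualenvwrapper.sh is sourced in your shell; the env name and paths are illustrative):
mkvirtualenv pps_beta
# the postactivate hook runs on every activation of this env
echo 'cd /usr/local/turbogears/pps_beta/src' >> ~/.virtualenvs/pps_beta/bin/postactivate
workon pps_beta   # activates the env, and the hook cd's into src/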