Make each run of the Python interpreter automatically call `coverage`?

I have a test.sh that runs the python command on many different scripts. Is there a way to invoke coverage -a for each python call without prepending each command with coverage -a?

See the coverage.py docs on measuring subprocesses for a way to invoke coverage automatically whenever Python starts: http://coverage.readthedocs.io/en/latest/subprocess.html. It will require some fiddling.
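For reference, the fiddling amounts to roughly the following (a sketch based on those docs; the paths are illustrative, and the .pth trick needs write access to site-packages):
# .coveragerc -- parallel mode gives every process its own data file
[run]
parallel = true
# A one-line .pth file dropped into site-packages (e.g. coverage_startup.pth)
# is executed at every interpreter startup:
import coverage; coverage.process_startup()
# Point coverage at the config, run the suite, then merge the data files:
export COVERAGE_PROCESS_START=/path/to/.coveragerc
./test.sh
coverage combine
coverage report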
It might be easier to alias the python command in the shell script. For things like nosetests, change them to python -m nose so they go through the alias too.
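A minimal sketch of that alias idea, assuming test.sh runs scripts as python foo.py (script names here are placeholders; the function shadows the python command for the rest of the script, and coverage run -a appends to the existing data file):
python() { coverage run -a "$@"; }
export -f python       # bash only: also shadow python in child shell scripts
python first_script.py   # now effectively runs: coverage run -a first_script.py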

Related

See stdout when running bash script in PyCharm

I use a bash script to call several python scripts. I installed the Bash plugin for PyCharm, and I can run the script, but I don't see stdout during runtime, even though I do see it after everything has finished. How can I make it visible during runtime?
Without more information, my guess would be that this is due to Python buffering its output, which is its default behavior. You can easily disable this by passing python the -u flag or by setting the PYTHONUNBUFFERED environment variable.
This is described in this SO answer.
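For example, either of these should work (the script names are placeholders):
python -u myscript.py              # unbuffered for this one invocation
PYTHONUNBUFFERED=1 ./wrapper.sh    # inherited by every python the bash script starts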

"Command not found" when using python for shell scripting

I have this python script:
#!/usr/bin/python
print 'hi'
I'm trying to send this script as a job to be executed on a computing cluster. I'm sending it with qsub like this: qsub myscript.py
Before running it I executed the following:
chmod +x myscript.py
However when I open the output file I find this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
And when I open the error file I find this:
print: Command not found.
So what's wrong?!
Edit: I followed the instructions in this question
It looks like qsub isn't reading your shebang line, so it is simply executing your script using the shell; that's why print is reported as an unknown command.
This answer provides a few options on how to deal with this, depending on your system: How can I use qsub with Python from the command line?
An option is to set the interpreter to python like so:
qsub -S /usr/bin/python myscript.py
I am quite sure there is an alternate way to do this without the -S option and have SGE execute the code based on the interpreter in the shebang; however, this solution might be enough for your needs.
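For the record, that alternate way is probably SGE's embedded directives: qsub reads comment lines starting with #$ from the submitted script, so putting the -S flag there (an untested sketch) should behave like passing it on the command line:
#!/usr/bin/python
#$ -S /usr/bin/python
print 'hi'
Then submit it plainly with qsub myscript.py.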
Also, concerning this output:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
It seems safe to ignore this:
http://www.linuxquestions.org/questions/linux-software-2/warning-no-access-to-tty-bad-file-descriptor-702671/
EDIT:
The following also work:
qsub <<< "./myscript.py"
qsub <<< "python ./myscript.py"

python script argument misinterpreted in Hudson Execute Shell step

When I run my python script in the shell terminal, it works:
sudo myscript.py --version=22 --base=252 --hosts="{'hostA':[1],'hostB':[22]}"
But when I run it in Hudson or Jenkins using an Execute Shell step, the string --hosts="{'hostA':[1],'hostB':[22]}" somehow gets interpreted as
sudo myscript.py --version=22 --base=252 '--hosts="{'hostA':[1],'hostB':[22]}"'
How do we overcome this so that the script runs in both Jenkins and Hudson?
Thank you.
It looks like you're encountering a battle-of-the-quoted-strings situation, due to your use of quotes directly and the fact that Jenkins shells out via a generated temporary shell script.
I find the best thing to do with Jenkins is to create a bash script that wraps the commands you want to run (and you can also have it do any other environment-related setup you may want to have it do, such as source a config bash script that sets up other env vars).
You can have it accept the arguments that vary, which can be passed to it from the Jenkins config, so any interpolation happens within the script -- you're just passing strings. (In particular, in this case the hosts argument will be "{'hostA':[1],'hostB':[22]}", which gets passed to the shell script as a plain string and interpolated there with the double quotes re-added.)
So, to that end, say you have a jenkins_run.sh script that runs a command like this:
myscript.py --version=$VERSION --base=$BASE --hosts="$HOSTS"
where the variables are passed in as arguments and assigned before that line (you could use the positional parameters $1, $2, et al. directly if you want).
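A minimal sketch of such a wrapper (the script name and argument order here are illustrative):
#!/bin/bash
# jenkins_run.sh -- positional parameters come from the Jenkins job config
VERSION="$1"
BASE="$2"
HOSTS="$3"
./myscript.py --version="$VERSION" --base="$BASE" --hosts="$HOSTS"
The Execute Shell step then only contains:
./jenkins_run.sh 22 252 "{'hostA':[1],'hostB':[22]}"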
I would also be cautious using sudo in conjunction with a Jenkins run, since that could end up prompting for input. I would instead recommend setting the permissions on the script so that the user under which Jenkins runs can simply execute it.

Setuptools: explicitly separate multiple command calls on one line

I would like to write an alias in my setup.py file for multiple test commands for my project.
But I have problems when I try to run multiple commands on one line and the nosetests command is invoked before the others.
This works:
$ python setup.py lint nosetests
pylint output
nosetests output
But if I exchange the commands, I only get the nosetests output.
I think the lint command is eaten by the nosetests argument parser.
$ python setup.py nosetests lint
nosetests output
# No pylint output
So, is there a way to explicitly separate the commands?
Thanks
New answer
By the looks of it, setuptools assumes all options begin with -- and all commands don't begin with --, so there's no explicit way to separate commands, because it's unnecessary.
If the custom nosetests command is accepting lint as an option, then it's a bug in that command, which ought to ignore anything which doesn't begin with --.
However, it might be possible to work around the bug with the traditional Unix idiom of using -- to indicate the end of options, so the following might work...
$ python setup.py nosetests -- lint
...otherwise you'll either have to fix the bug, or find an alternative to using that particular custom command.
Old answer
From the docs...
The basic usage of setup.py is:
$ python setup.py <some_command> <options>
...so it sounds like the fact that it executed both commands in your first example is a bug, or a fluke.
It's probably safest to run them as two separate commands...
$ python setup.py nosetests && python setup.py lint
nosetests output
pylint output

PyDev + mpi4py -> run through shellscript / mpirun

I'd like to create python programs that use mpi4py and thus I'd like to run them using the following command:
mpirun -np 4 python script.py
I tried to create a shell script which does this and to use it as the Python interpreter, but Eclipse rejects the shell script. I also tried to redirect the output (so that it doesn't show the MPI stuff but solely prints the python output of the first node).
If I run the script in the console, using the interpreterinfo.py script to test the interpreter, it gives exactly the same output as when I run it through python alone.
It somehow seems that the script isn't executed properly by Eclipse, or that the output is not going to stdout.
Can anyone help?
I don't think you should try to configure mpirun as the Python interpreter... Instead, configure the Python interpreter as usual and just create a Python module that'll do the launching for you, and launch that module instead... (or create an external launch in Run > External Tools).
It'd be strange for mpirun to be the actual Python interpreter: that way, when code completion for builtins was requested, PyDev would launch mpirun and create 4 processes just for code completion. The same would apply to other things such as debugging, coverage, etc...
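A minimal sketch of such a launcher module (the file and script names are placeholders; run this module from PyDev instead of script.py itself):
# launch_mpi.py -- re-run the target script under mpirun with 4 processes
import subprocess
import sys
sys.exit(subprocess.call(["mpirun", "-np", "4", sys.executable, "script.py"]))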
