I'm trying to set up Python to execute nose, but only against an existing application I'm developing locally. I don't want nose wandering through every library that's currently installed. I do, however, want nose to discover any tests within the current working directory and its child directories.
To start with, all I'm trying to do is make sure the arguments I'm passing are actually being used (solved by #ned-batchelder below). At the moment, however, the arguments I pass appear to be ignored, and global discovery of tests occurs (i.e. it picks up tests from the Python folder too).
From the docs:
-V, --version
Output nose version and exit
Running nosetests -V from the command line produces the expected version output:
nosetests -V
nosetests-script.py version 1.2.1
However, the following test script starts running every test it can find, including those of libraries installed in the Python path that are not part of the current working directory, even though the script is located in the root of the application:
import nose, os

def main():
    print os.getcwd()
    x = raw_input()  # just so I can see the cwd output before nose launches into testing everything it can find
    result = nose.run(argv=['-V'])

if __name__ == '__main__':
    main()
Here's what I've tried:
Using nose.main(), x=nose.core.run(), and x=nose.run().
Passing the arguments directly to nose.run() and using a list.
Using a nose.cfg file.
Thanks
EDIT: Trying #ned-batchelder's suggestion lets me run nose with the given arguments, but doesn't allow discovery of tests within the application folders. So with that approach I can pass arguments, but I can't test my application.
I believe nose expects argv to be the complete argv, meaning the first element should be the name of the program:
nose.run(argv=['me.py', '-V'])
Probably, what you want is:
import sys

arguments = sys.argv[:1] + my_custom_argument_list + sys.argv[1:]
nose.run(argv=arguments)
This lets you combine your custom arguments with those from the command line that invokes your script. It also addresses the issue Ned points out about nose requiring the first argument to be the script name.
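Putting both answers together, here is a minimal sketch that also restricts discovery to the current directory using nose's -w/--where option (the concrete argument list here just replaces my_custom_argument_list from above; adjust to taste):
import sys
import nose

def main():
    # the first element stands in for the program name; -w limits
    # test discovery to the given directory (here, the cwd)
    argv = sys.argv[:1] + ['-w', '.', '--verbosity=2'] + sys.argv[1:]
    nose.run(argv=argv)

if __name__ == '__main__':
    main()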
I want to run pylint on all my modules, which live at different locations in a big directory tree. Because running pylint on the directory itself is still not supported, I assume I need to walk to each module in my own Python script and run pylint on each module from there.
To run pylint inside a Python script, the documentation seems clear:
It is also possible to call Pylint from another Python program, thanks
to the Run() function in the pylint.lint module (assuming Pylint
options are stored in a list of strings pylint_options) as:
import pylint.lint
pylint_opts = ['--version']
pylint.lint.Run(pylint_opts)
However, I cannot get this to run successfully on actual files. What is the correct syntax? Even if I copy-paste the arguments that worked on the command-line, using an absolute file path, the call fails:
import pylint.lint
pylint_opts = ["--load-plugins=pylint.extensions.docparams /home/user/me/mypath/myfile.py"]
pylint.lint.Run(pylint_opts)
The output is the default fallback usage text of the command-line tool, with my script's name in place of pylint:
No config file found, using default configuration
Usage: myscript.py [options] module_or_package
Check that a module satisfies a coding standard (and more !).
myscript.py --help
[...]
What am I missing?
I know that epylint exists as an alternative, and I can get that to run, but it is extremely inconvenient that it overrides the --msg-format and --reports parameters and I want to figure out what I am doing wrong.
The answer is to separate the options into a list, as shown in this related question:
pylint_opts = ["--load-plugins=pylint.extensions.docparams", "/home/user/me/mypath/myfile.py"]
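Note also that pylint.lint.Run calls sys.exit when it finishes, which will terminate the calling script. Recent pylint releases accept a keyword to suppress this (named exit in current versions, do_exit in some older 2.x releases, so check your version). A minimal sketch:
import pylint.lint

pylint_opts = ["--load-plugins=pylint.extensions.docparams", "/home/user/me/mypath/myfile.py"]
# exit=False keeps Run from calling sys.exit, so the calling
# script continues after the lint pass completes
run = pylint.lint.Run(pylint_opts, exit=False)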
The traceback provided by pytest is great and super useful for debugging.
Is there a way to run a script using the pytest api even if the script itself does not contain any test modules? Essentially, I would like a way to pinpoint and run a certain function in a script as if it were a test, but get the pytest-formatted traceback.
The pytest documentation on test discovery states that normally only functions whose name begins with test_ are run. This behaviour can be changed however with the python_functions configuration option. Try entering in the command line:
pytest [script.py] -o python_functions=[script_function]
in which you should replace [script.py] with your python script file path and replace [script_function] with the name of the function that you want to be run.
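If you'd rather stay inside Python, pytest.main accepts the same arguments as a list of strings. A minimal sketch, where script.py and script_function are placeholders for your file and function:
import pytest

# select the single function by its node id, and override collection
# so that a name not starting with test_ is treated as a test
pytest.main(["script.py::script_function", "-o", "python_functions=script_function"])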
Is it possible to tell the Python interpreter to run my script with specific command-line switches?
For example,
If I have doctests enabled, I'd like to add
if __name__ == '__main__':
    import doctest
    doctest.testmod()
to create a self-contained test runner.
However, this requires that I add the -v switch when running the script:
python myscript.py -v
This is not always convenient in an editor like Sublime where the build system defaults to no switches. I know I can create a custom builder but that's suboptimal compared to specifying which switches to use for certain scripts.
Is it possible and if yes, how?
UPDATE
Below someone pointed out that the doctest case involves a script flag instead of an interpreter switch. But I really want to know about both cases.
Also, although testmod(verbose=True) can solve that particular case, I'm still interested in the original question.
Pass verbose=True to doctest.testmod:
if __name__ == '__main__':
    import doctest
    doctest.testmod(verbose=True)
Messing with the command-line flags is the wrong way to go.
You could consider using environment variables.
e.g.,
# from the command line
export VERBOSE_MODE=True
# in the Python code
import os
# environment variable values are strings, so compare explicitly:
# any non-empty string (even "False") would otherwise be truthy
verbose_mode = os.environ.get('VERBOSE_MODE', '').lower() == 'true'
You could have this be a fallback mechanism if the flag isn't provided on the command line.
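For instance, a minimal sketch of that fallback, tied back to the doctest case (the VERBOSE_MODE name is just the example above):
import os
import sys

# prefer an explicit -v on the command line; otherwise fall back to
# the VERBOSE_MODE environment variable
verbose = '-v' in sys.argv[1:] or os.environ.get('VERBOSE_MODE', '').lower() == 'true'

if __name__ == '__main__':
    import doctest
    doctest.testmod(verbose=verbose)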
I sort of got what I want. On a system with a bash shell, I could simply add the following to the top of my script
#! /usr/local/bin/python3 -m doctest -v
and then
chmod +x myscript.py
Don't know how to do that on Windows (maybe wait for the Linux subsystem for Win10), but fortunately I mainly work on macOS. One caveat: most Linux kernels pass everything after the interpreter path as a single argument, so multi-argument shebang lines like this one generally only work on macOS/BSD.
This works for other command-line switches as well.
Out of necessity I had to modify os.environ['PATH'] so that a dir\to\fake\python.cmd script runs instead of the real interpreter; it adds some extra parameters to the original command line before execution.
Also I have two python scripts:
test1.py:
# ...
p = subprocess.call("test2.py") # shell=True works fine
# ...
test2.py:
# ...
print "Hello from test2.py"
# ...
When I run python test1.py, my "fake" python.cmd does its stuff, refers to the original Python in c:\Python25, and runs test1.py with my extra arguments. But, sadly, the test2.py script is never called. If I pass shell=True as a subprocess.call argument, everything's fine and test2.py is called.
I know that with the default shell=False, Windows tries to find an executable for the call directly, instead of going through the interpreter in the real c:\Python25 installation.
The question to you is: how can I achieve the goal without changing my code in test1.py and test2.py? Maybe virtualenv library may be very useful in this case?
Thank you very much for your help
As stated in the docs:
The shell argument (which defaults to False) specifies whether to use the shell as the program to execute.
and
On Windows with shell=True, the COMSPEC environment variable specifies the default shell. The only time you need to specify shell=True on Windows is when the command you wish to execute is built into the shell (e.g. dir or copy). You do not need shell=True to run a batch file or console-based executable.
So when you call subprocess.call("test2.py"), the system tries to execute test2.py directly as an executable, which it is not, so it fails. However, you don't capture the return value from subprocess.call to check for error conditions, so it fails silently. When you call it with shell=True, it invokes the system shell with the argument test2.py, which in turn looks up the default executable for .py files on your system and then runs the file that way.
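If you do need to launch the second script, a more robust approach than shell=True is to name the interpreter explicitly. A minimal sketch:
import subprocess
import sys

# run test2.py with the same interpreter that is running this script,
# sidestepping Windows file associations and the shell entirely
ret = subprocess.call([sys.executable, "test2.py"])
if ret != 0:
    print("test2.py exited with code %d" % ret)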
All that said though, the deeper problem here is that your code is very poorly engineered. The probability that you have a legitimate reason to dispatch a python script via the OS from another python script is vanishingly small. Rather than patching another kludgy hack over the top of what you already have, save yourself headaches in the future and refactor the system so that it does things in an easier, more sensible, more naturally integrated way.
To clarify, the Python module I'm writing is a self-written .py file (named converter), not one that comes standard with the Python libraries.
Anyway, I want to somehow overload my function such that typing in
converter file_name
will send the file's name to
def converter(file_name):
    # do something
I've been extensively searching through Google and StackOverflow, but can't find anything that doesn't require the use of special characters like $ or command line options like -c. Does anyone know how to do this?
You can use something like PyInstaller to create an exe out of your .py file.
To use the argument in python:
import sys

if __name__ == "__main__":
    converter(sys.argv[1])
You can type in the Windows shell:
python converter.py file_name.txt
to pass the arguments to the sys.argv list within Python. To access them:
import sys
sys.argv[0] # this will be converter.py
sys.argv[1] # this will be file_name.txt
At the bottom of the file you want to run, add:
if __name__ == "__main__":
    converter(sys.argv[1])
To have a second argument:
python converter.py file_name1.txt file_name2.txt
This will be the result:
import sys
sys.argv[0] # this will be converter.py
sys.argv[1] # this will be file_name1.txt
sys.argv[2] # this will be file_name2.txt
I would recommend using something like the builtin argparse (for 2.7/3.2+) or argparse on pypi (for 2.3+) if you're doing many complicated command line options.
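For example, a minimal argparse-based entry point for converter.py might look like this (the file_name argument name is just illustrative):
import argparse

def converter(file_name):
    # do something
    pass

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Convert a file.")
    parser.add_argument("file_name", help="path of the file to convert")
    args = parser.parse_args()
    converter(args.file_name)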
The only way I can think of is to create a batch file of the same name and, within it, call your python script with the parameters provided.
With batch files you don't have to specify the extension (.bat) so it gets you closer to where you want to be.
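For instance, a converter.bat along these lines (assuming python is on PATH; %~dp0 expands to the batch file's own directory):
@echo off
rem forward all arguments to the python script sitting next to this file
python "%~dp0converter.py" %*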
Also, without compiling .py to .exe, you can make your script 'executable', so that if you issue a command line like myscript param param, the system will search for myscript.py and run it for you, as if it were an .exe or .bat file.
In order to achieve this, configure the machine where you plan to run your script:
Make sure you have file associations set (.py to python interpreter, that is, if you doubleclick at your script in the explorer -- it gets executed). Normally this gets configured by the Python installer.
Edit the PATHEXT environment variable (look inside My Computer properties) to include the .PY extension as well as .EXE, .COM, etc. (see the example after these steps).
Start a fresh cmd.exe from Start menu to use the new value of variable. Old instances of any programs will see only old value.
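Concretely, the per-session version of that setup might look like this in cmd.exe (the assoc output line is illustrative and depends on your installer):
C:\> assoc .py
.py=Python.File
C:\> set PATHEXT=%PATHEXT%;.PY
C:\> myscript param param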
This setup could be handy if you run many scripts on the same machine, and not so handy if you spread your scripts across many machines. In the latter case, you'd be better off using a converter like py2exe to bundle your application into a self-sufficient package (which doesn't even require Python to be installed).