I want to run pylint on all my modules, which sit at different locations in a big directory. Because running pylint on the directory as a whole is still not supported, I assume I need to walk through each module in my own Python script and run pylint on each one from there.
To run pylint inside a Python script, the documentation seems clear:
It is also possible to call Pylint from another Python program, thanks
to the Run() function in the pylint.lint module (assuming Pylint
options are stored in a list of strings pylint_options) as:
import pylint.lint
pylint_opts = ['--version']
pylint.lint.Run(pylint_opts)
However, I cannot get this to run successfully on actual files. What is the correct syntax? Even if I copy-paste the arguments that worked on the command-line, using an absolute file path, the call fails:
import pylint.lint
pylint_opts = ["--load-plugins=pylint.extensions.docparams /home/user/me/mypath/myfile.py"]
pylint.lint.Run(pylint_opts)
The output is the command-line tool's default usage message, with my script's name in place of pylint:
No config file found, using default configuration
Usage: myscript.py [options] module_or_package
Check that a module satisfies a coding standard (and more !).
myscript.py --help
[...]
What am I missing?
I know that epylint exists as an alternative, and I can get that to run, but it is extremely inconvenient that it overrides the --msg-format and --reports parameters, and I want to figure out what I am doing wrong.
The answer is to pass each option as a separate string in the list, as shown in this related question:
pylint_opts = ["--load-plugins=pylint.extensions.docparams", "/home/user/me/mypath/myfile.py"]
Related
I have a complex python program I'd like to debug where the setup.py has
entry_points=dict(
    console_scripts=[
        'myprog = myprog.command:myprog_main',
    ]
)
where command.py has the logic to accept commands, so I can run something like
myprog process --config config.yaml
Placing a breakpoint in PyCharm doesn't cause the program to stop, since running python command.py process --config config.yaml doesn't do anything.
I feel this is something basic, but I couldn't find a way to debug this (using PyCharm).
Let's take jupyter notebook as an example:
The jupyter launcher script does from jupyter_core.command import main, so what I need to do is place a breakpoint in jupyter_core.command:main.
And then, I need to add a configuration in Pycharm. Script path should be /path/to/jupyter, Parameters should be notebook.
Next, I need to click Debug.
Once that's done, I reach the breakpoint in jupyter_core.command:main.
In case this answer doesn't work for you, try this:
Add a run configuration in PyCharm.
Configure it with Module name instead of Script path.
Set the Module name to myprog.command
Add the following section to myprog/command.py:
if __name__ == "__main__":
    myprog_main()
Sadly, setting the Module name to myprog.command:myprog_main doesn't work.
Set the Working directory to the root of your repo (i.e. the parent of the module myprog such that the imports work).
Note: I used the exact names from OP's question. Please adjust the names of functions, modules, and packages for your problem accordingly.
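For context, a console_scripts entry point simply generates a small wrapper script at install time. Roughly (a simplified sketch; the exact generated code varies by setuptools version), the installed myprog script looks like:
import sys
from myprog.command import myprog_main

if __name__ == '__main__':
    sys.exit(myprog_main())
This is why putting the breakpoint in the imported module works: the wrapper itself is generated code you never edit.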
I'm new to Python and enjoying learning the language. I like using the interpreter in real time, but I still don't completely understand how it works. I would like to be able to define my environment with variables, imports, functions and all the rest, then run the interpreter with those already prepared. When I run my files (using PyCharm, Python 3.6) they just execute and exit.
Is there some line to put in my .py files like a main function that will invoke the interpreter? Is there a way to run my .py files from the interpreter where I can continue to call functions and declare variables?
I understand this is a total newbie question, but please explain how to do this or why I'm completely not getting it.
I think you're asking three separate things, but we'll count it as one question since it's not obvious that they are different things:
1. Customize the interactive interpreter
I would like to be able to define my environment with variables, imports, functions and all the rest then run the interpreter with those already prepared.
To customize the environment of your interactive interpreter, define the environment variable PYTHONSTARTUP. How you do that depends on your OS. It should be set to the pathname of a file (use an absolute path), whose commands will be executed before you get your prompt. This answer (found by Tobias) shows you how. This is suitable if there is a fixed set of initializations you would always like to do.
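For example, on Linux or macOS you might put export PYTHONSTARTUP=~/.pythonstartup in your shell profile and create a file like this (the file name and contents are just an illustration):
# ~/.pythonstartup -- executed before every interactive prompt
import os
import sys
from pprint import pprint

print("Interactive session, Python %s" % sys.version.split()[0])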
2. Drop to the interactive prompt after running a script
When I run my files (using PyCharm, Python 3.6) they just execute and exit.
From the command line, you can execute a python script with python -i scriptname.py and you'll get an interactive prompt after the script is finished. Note that in this case, PYTHONSTARTUP is ignored: It is not a good idea for scripts to run in a customized environment without explicit action.
3. Call your scripts from the interpreter, or from another script.
Is there a way to run my .py files from the interpreter where I can continue to call functions and declare variables?
If you have a file myscript.py, you can type import myscript in the interactive Python prompt, or put the same in another script, and your script will be executed. Your environment will then have a new module, myscript. You could use the following variant to import your custom definitions on demand (assuming a file myconfig.py where Python can find it):
from myconfig import *
Again, this is not generally a good idea; your programs should explicitly declare all their dependencies by using specific imports at the top.
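As a concrete sketch, suppose a hypothetical myconfig.py contains:
# myconfig.py -- hypothetical module holding your custom definitions
import math

def circle_area(radius):
    # area of a circle of the given radius
    return math.pi * radius ** 2
After import myconfig you would call myconfig.circle_area(2); after from myconfig import *, plain circle_area(2) works.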
You can achieve the result you intend by doing this:
Write a Python file with all the imports you want.
Call your script as python -i myscript.py.
Calling with -i runs the script then drops you into the interpreter session with all of those imports, etc. already executed.
If you want to save yourself the effort of calling Python that way every time, add this to your .bashrc file:
alias python='python -i /Users/yourname/whatever/the/path/is/myscript.py'
You can set the environment variable PYTHONSTARTUP, as suggested in this answer:
https://stackoverflow.com/a/11124610/1781434
I'm trying to set up Python to execute nose, but only on an existing application I'm developing locally. I don't want nose running around all the libraries that are currently installed. I do, however, want nose to discover any tests within the current working directory and child directories.
To start with, all I'm trying to do is make sure that the arguments I'm passing are being used (solved by @ned-batchelder below). However, at the moment it looks like the arguments I am passing are being ignored, and global discovery of the tests is occurring (i.e. picking up tests from the python folder too).
From the docs:
-V, --version
Output nose version and exit
Running nosetests -V from the command line produces the expected version output:
nosetests -V
nosetests-script.py version 1.2.1
However, the following test script starts running every test it can find, including those of libraries installed in the Python path and not part of the current working directory, even though it is located in the root of the application:
import nose, os

def main():
    print os.getcwd()
    x = raw_input()  # This is just so I can see the output of the cwd before it launches into testing everything it can find.
    result = nose.run(argv=['-V'])

if __name__ == '__main__':
    main()
Here's what I've tried:
Using nose.main(), x=nose.core.run(), and x=nose.run().
Passing the arguments directly to nose.run() and using a list.
Using a nose.cfg file.
Thanks
EDIT: Trying @ned-batchelder's suggestion allows me to run nose with given arguments, but doesn't allow discovery of tests within the application folders. So if I do that, I can pass arguments but I can't test my application.
I believe nose expects argv to be the complete argv, meaning the first element should be the name of the program:
nose.run(argv=['me.py', '-V'])
Probably, what you want is:
import sys, nose

# my_custom_argument_list holds the extra options you want to force on
arguments = sys.argv[:1] + my_custom_argument_list + sys.argv[1:]
nose.run(argv=arguments)
This will allow you to use your custom arguments as well as those from the command line that invokes your script. It also addresses the issue Ned points out about nose requiring the first argument to be the script name.
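To also keep discovery inside your application rather than the whole Python path, you can combine this with nose's --where option (a sketch, assuming the tests live under the current working directory):
import os
import sys
import nose

# Start test discovery at the current directory, then append any
# options given on the command line.
argv = [sys.argv[0], '--where', os.getcwd()] + sys.argv[1:]
nose.run(argv=argv)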
When I import the wx module in a Python interpreter it works as expected. However, when I run a script (i.e. test.py) with wx in the imports list, I need to write "python test.py" to run the script. If I try to execute "test.py" I get an import error saying there is no module named "wx". Why do I need to include the word python in my command?
PS the most helpful answer I found was "The Python used for the REPL is not the same as the Python the script is being run in. Print sys.executable to verify." but I don't understand what that means.
Write a two line script (named showexe.py for example):
import sys
print sys.executable
Run it both ways: as showexe.py and as python showexe.py. It will tell you whether you're using the same executable in both cases. If not, what you have to do to make the two run the same thing depends on your operating system.
If you start your script with something like #!/usr/local/bin/python (but using the path to your python interpreter) you can run it without including python in your command, like a bash script.
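For example (a sketch; env resolves whichever python is first on your PATH):
#!/usr/bin/env python
import sys
print(sys.executable)
Make it executable with chmod +x showexe.py and you can then run it as ./showexe.py.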
To clarify, the Python module I'm writing is a self-written .py file (named converter), not one that comes standard with the Python libraries.
Anyways, I want to somehow overload my function such that typing in
converter file_name
will send the file's name to
def converter(file_name):
    # do something
I've been extensively searching through Google and StackOverflow, but can't find anything that doesn't require the use of special characters like $ or command line options like -c. Does anyone know how to do this?
You can use something like PyInstaller to create an exe out of your .py file.
To use the argument in Python:
import sys

if __name__ == "__main__":
    converter(sys.argv[1])
You can type the following in the Windows shell:
python converter.py file_name.txt
to pass the arguments into Python's sys.argv list. To access them:
import sys
sys.argv[0] # this will be converter.py
sys.argv[1] # this will be file_name.txt
At the bottom of the file you want to run, add:
if __name__ == "__main__":
    converter(sys.argv[1])
To have a second argument:
python converter.py file_name1.txt file_name2.txt
This will be the result:
import sys
sys.argv[0] # this will be converter.py
sys.argv[1] # this will be file_name1.txt
sys.argv[2] # this will be file_name2.txt
I would recommend using something like the builtin argparse (for 2.7/3.2+) or argparse on pypi (for 2.3+) if you're doing many complicated command line options.
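For example, a minimal argparse version of the converter script might look like this (a sketch; the body of converter is assumed):
import argparse

def converter(file_name):
    # placeholder for the real conversion logic
    print("converting %s" % file_name)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Convert a file.")
    parser.add_argument("file_name", help="path of the file to convert")
    args = parser.parse_args()
    converter(args.file_name)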
The only way I can think of is to create a batch file of the same name and, within it, call your Python script with the parameters provided.
With batch files you don't have to specify the extension (.bat) so it gets you closer to where you want to be.
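A sketch of such a wrapper, converter.bat, sitting next to converter.py (%~dp0 expands to the batch file's directory and %* forwards all arguments):
@python "%~dp0converter.py" %*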
Also, without compiling .py to .exe at all, you can make your script 'executable', so that if you issue a command line like myscript param param, the system will search for myscript.py and run it for you, as if it were an .exe or .bat file.
In order to achieve this, configure the machine where you plan to run your script:
Make sure you have file associations set (.py to python interpreter, that is, if you doubleclick at your script in the explorer -- it gets executed). Normally this gets configured by the Python installer.
Edit the PATHEXT environment variable (look inside My Computer properties) to include .PY as well as .EXE, .COM, etc.
Start a fresh cmd.exe from the Start menu to pick up the new value of the variable. Old instances of any program will only see the old value.
This setup could be handy if you run many scripts on the same machine, and not so handy if you spread your scripts across many machines. In the latter case you are better off using a converter like py2exe to bundle your application into a self-sufficient package (which doesn't even require Python to be installed).