I am trying to suppress warnings.
Here is my ESRI Python version:
python -V
Python 2.7.16
I have tried this:
python.exe -W ignore GET_ESRIGIS_WEB_TOKEN.py
but it gives this error:
Invalid -W option ignored: invalid action: '"ignore'
What am I doing wrong?
PS: here is the help output (see the -W option below):
python.exe -h
usage: python.exe [option] ... [-c cmd | -m mod | file | -] [arg] ...
Options and arguments (and corresponding environment variables):
-b : issue warnings about comparing bytearray with unicode
(-bb: issue errors)
-B : don't write .py[co] files on import; also PYTHONDONTWRITEBYTECODE=x
-c cmd : program passed in as string (terminates option list)
-d : debug output from parser; also PYTHONDEBUG=x
-E : ignore PYTHON* environment variables (such as PYTHONPATH)
-h : print this help message and exit (also --help)
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
-m mod : run library module as a script (terminates option list)
-O : optimize generated bytecode slightly; also PYTHONOPTIMIZE=x
-OO : remove doc-strings in addition to the -O optimizations
-R : use a pseudo-random salt to make hash() values of various types be
unpredictable between separate invocations of the interpreter, as
a defense against denial-of-service attacks
-Q arg : division options: -Qold (default), -Qwarn, -Qwarnall, -Qnew
-s : don't add user site directory to sys.path; also PYTHONNOUSERSITE
-S : don't imply 'import site' on initialization
-t : issue warnings about inconsistent tab usage (-tt: issue errors)
-u : unbuffered binary stdout and stderr; also PYTHONUNBUFFERED=x
see man page for details on internal buffering relating to '-u'
-v : verbose (trace import statements); also PYTHONVERBOSE=x
can be supplied multiple times to increase verbosity
-V : print the Python version number and exit (also --version)
-W arg : warning control; arg is action:message:category:module:lineno
also PYTHONWARNINGS=arg
-x : skip first line of source, allowing use of non-Unix forms of #!cmd
-3 : warn about Python 3.x incompatibilities that 2to3 cannot trivially fix
file : program read from script file
- : program read from stdin (default; interactive mode if a tty)
arg ...: arguments passed to program in sys.argv[1:]
Usually python -W ignore file.py should work.
However, you can try adding this to the code as well to suppress all warnings
import warnings
warnings.filterwarnings("ignore")
Reference:
https://stackoverflow.com/a/14463362/3288888
It is
-Wignore
without a space, e.g.:
python -Wignore file.py
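For finer-grained control, the -W argument (and the PYTHONWARNINGS environment variable) also accepts the full action:message:category:module:lineno form, e.g. -Wignore::DeprecationWarning or PYTHONWARNINGS=ignore::DeprecationWarning. The same filter can be installed from code; here is a minimal sketch (the category below is just an example):
import warnings

# Minimal sketch: silence only DeprecationWarning instead of every warning.
# Equivalent to -Wignore::DeprecationWarning or PYTHONWARNINGS=ignore::DeprecationWarning.
warnings.filterwarnings("ignore", category=DeprecationWarning)

warnings.warn("old API", DeprecationWarning)  # suppressed
warnings.warn("something else")               # still shown (UserWarning)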
I am trying to run Python code that I pull directly from a GitHub raw URL using the Python interpreter. The goal is to never keep the code on the file system and to run it directly from GitHub.
So far I am able to get the raw code from GitHub using the curl command, but since it is multi-line code, I get the error that Python cannot find the file.
python 'curl https://github.url/raw/path-to-code'
python: can't open file 'curl https://github.url/raw/path-to-code': [Errno
2] No such file or directory
How do I pass a multi-line code block to the Python interpreter without having to write another .py file (which would defeat the purpose of this exercise)?
You need to pipe the code you get from cURL to the Python interpreter, something like:
curl https://github.url/raw/path-to-code | python -
UPDATE: cURL prints download stats to STDERR; if you want it silenced, you can use the -s flag when calling it:
curl -s https://github.url/raw/path-to-code | python -
There is no way to do this via the Python interpreter alone without first retrieving the script and then passing it to the interpreter.
The current Python command-line arguments can be seen with the --help argument:
usage: python [option] ... [-c cmd | -m mod | file | -] [arg] ...
Options and arguments (and corresponding environment variables):
-b : issue warnings about str(bytes_instance), str(bytearray_instance)
and comparing bytes/bytearray with str. (-bb: issue errors)
-B : don't write .pyc files on import; also PYTHONDONTWRITEBYTECODE=x
-c cmd : program passed in as string (terminates option list)
-d : debug output from parser; also PYTHONDEBUG=x
-E : ignore PYTHON* environment variables (such as PYTHONPATH)
-h : print this help message and exit (also --help)
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
-I : isolate Python from the user's environment (implies -E and -s)
-m mod : run library module as a script (terminates option list)
-O : optimize generated bytecode slightly; also PYTHONOPTIMIZE=x
-OO : remove doc-strings in addition to the -O optimizations
-q : don't print version and copyright messages on interactive startup
-s : don't add user site directory to sys.path; also PYTHONNOUSERSITE
-S : don't imply 'import site' on initialization
-u : force the binary I/O layers of stdout and stderr to be unbuffered;
stdin is always buffered; text I/O layer will be line-buffered;
also PYTHONUNBUFFERED=x
-v : verbose (trace import statements); also PYTHONVERBOSE=x
can be supplied multiple times to increase verbosity
-V : print the Python version number and exit (also --version)
when given twice, print more information about the build
-W arg : warning control; arg is action:message:category:module:lineno
also PYTHONWARNINGS=arg
-x : skip first line of source, allowing use of non-Unix forms of #!cmd
-X opt : set implementation-specific option
file : program read from script file
- : program read from stdin (default; interactive mode if a tty)
arg ...: arguments passed to program in sys.argv[1:]
If you want it all on one line, chain the two commands (download, then run), for example with &&:
curl https://github.url/raw/path-to-code --output some.file && python some.file
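If you would rather not shell out to curl at all, the retrieval and execution can also be done from inside Python itself. This is only a minimal sketch, assuming Python 2 (urllib2) and a placeholder URL; executing code fetched over the network is only sensible if you fully trust the source:
import urllib2

# Minimal sketch: fetch the raw script (placeholder URL) and run it in-process.
url = "https://github.url/raw/path-to-code"
code = urllib2.urlopen(url).read()
exec(compile(code, url, "exec"))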
Let's say we have a program/package which comes with its own interpreter and a set of scripts which should invoke it when executed (via their shebang).
And let's say we want to keep it portable, so it keeps working even when simply copied to a different location (or a different machine) without running any setup/install or modifying the environment (PATH). A system interpreter should not be mixed in for these scripts.
The given constraints exclude both well-known approaches: a shebang with an absolute path:
#!/usr/bin/python
and a search of the environment:
#!/usr/bin/env python
Separate launchers look ugly and are not acceptable.
I found a good summary of the shebang limitations which describes why relative paths in the shebang are useless and why there cannot be more than one argument to the interpreter: http://www.in-ulm.de/~mascheck/various/shebang/
I also found practical solutions for most languages using 'multi-line shebang' tricks. They allow writing scripts like this:
#!/bin/sh
"exec" "`dirname $0`/python2.7" "$0" "$#"
print copyright
But sometimes we don't want to extend/patch existing scripts that rely on a shebang with an absolute path to the interpreter using this approach. E.g. Python's setup.py supports the --executable option, which basically allows specifying the shebang content for the scripts it produces:
python setup.py build --executable=/opt/local/bin/python
So, in particular, what can be specified for --executable= in order to enable the desired kind of portability? Or in other words, since I'd like to keep the question not too specific to Python...
The question
How do I write a shebang which specifies an interpreter by a path relative to the location of the script being executed?
A relative path written directly in a shebang is treated as relative to the current working directory, so something like #!../bin/python2.7 will not work for anything but a few working directories.
Since the OS does not support this, why not use an external program, the way env is used for the PATH lookup? But I know of no specialized program which computes a relative path from its arguments and executes the resulting command... except the shell itself and other scripting engines.
But trying to compute the path in a shell script like
#!/bin/sh -c '`dirname $0`/python2.7 $0'
does not work because on Linux the shebang is limited to a single argument. That suggested looking for scripting engines which accept a script as the first argument on the command line and are able to execute a new process:
Using AWK
#!/usr/bin/awk BEGIN{a=ARGV[1];sub(/[a-z_.]+$/,"python2.7",a);system(a"\t"ARGV[1])}
Using Perl
#!/usr/bin/perl -e$_=$ARGV[0];exec(s/\w+$/python2.7/r,$_)
Update from 11 Jan 2021:
Using an updated env utility:
$ env --version | grep env
env (GNU coreutils) 8.30
$ env --help
Usage: env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]
Set each NAME to VALUE in the environment and run COMMAND.
Mandatory arguments to long options are mandatory for short options too.
-i, --ignore-environment start with an empty environment
-0, --null end each output line with NUL, not newline
-u, --unset=NAME remove variable from the environment
-C, --chdir=DIR change working directory to DIR
-S, --split-string=S process and split S into separate arguments;
used to pass multiple arguments on shebang lines
So, passing -S to env will do the job
The missing "punchline" from Anton's answer:
With an updated version of env, we can now realize the initial idea:
#!/usr/bin/env -S /bin/sh -c '"$(dirname "$0")/python3" "$0" "$@"'
Note that I switched to python3, but this question is really about shebang - not python - so you can use this solution with whatever script environment you want. You can also replace /bin/sh with just sh if you prefer.
There is a lot going on here, including some quoting hell, and at first glance it's not clear what's happening. I think there's little worth in just saying "this is how to do it" without explanation, so let's unpack it.
It breaks down like this:
The shebang is interpreted to run /usr/bin/env with the following arguments:
-S /bin/sh -c '"$(dirname "$0")/python3" "$0" "$@"'
full path (either local or absolute) to the script file
onwards, any extra commandline arguments
env finds the -S at the start of the first argument, and splits it according to (simplified) shell rules. In this case, only the single-quotes are relevant - all the other fancy syntax is within single-quotes so it gets ignored. The new arguments to env become:
/bin/sh
-c
"$(dirname "$0")/python3" "$0" "$#"
full path to script file (either local or absolute)
onwards, (possibly) extra arguments
It runs /bin/sh - the default shell - with the arguments:
-c
"$(dirname "$0")/python3" "$0" "$#"
full path to script file
onwards, (possibly) extra arguments
As the shell was run with -c, it runs in the second operating mode defined here (and also re-described many times by different man pages of all shells, e.g. dash, which is much more approachable). In our case we can ignore all the extra options, the syntax is:
sh -c command_string command_name [argument ...]
In our case:
command_string is "$(dirname "$0")/python3" "$0" "$@"
command_name is the script path, e.g. ./path to/script dir/script file.py
argument(s) are any extra arguments (it's possible to have zero arguments)
As described, the shell wants to run command_string ("$(dirname "$0")/python3" "$0" "$@") as a command, so now we turn to the Shell Command Language:
Parameter Expansion is performed on "$0" and "$@", which are both Special Parameters:
"$#" expands to the argument(s). If there were no arguments, it will "expand" into nothing. Because of this special behaviour, it's explained horribly in the spec I linked, but the man page for dash explains it much better.
$0 expands to command_name - our script file. Every occurrence of $0 is within double-quotes so it doesn't get split, i.e. spaces in the path won't break it up into multiple arguments.
Command Substitution is applied, substituting $(dirname "$0") with the standard output of running the command dirname "./path to/script dir/script file.py", i.e. the folder that our script file resides in: ./path to/script dir.
After all of the substitutions and expansions, the command becomes, for example:
"./path to/script dir/python3" "./path to/script dir/script file.py" "first argument" "second argument" ...
Finally, the shell runs the expanded command, and executes our local python3 with our script file as an argument followed by any other arguments we passed to it.
Phew!
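Before the demonstration below, here is a minimal sketch of a script that actually uses this shebang, assuming a python3 binary (or symlink) sits in the same directory as the script itself:
#!/usr/bin/env -S /bin/sh -c '"$(dirname "$0")/python3" "$0" "$@"'
# Minimal sketch: assumes an interpreter named python3 next to this script.
import sys

print(sys.executable)   # the bundled interpreter that actually ran us
print(sys.argv[1:])     # any extra command-line arguments are passed through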
What follows is basically my attempt to demonstrate that those steps are occurring. It's probably not worth your time, but I already wrote it and I don't think it's so bad that it should be removed. If nothing else, it might be useful to someone who wants to see an example of how to reverse-engineer things like this. It doesn't include extra arguments; those were added after Emanuel's comment.
It also has a lousy joke at the end..
First let's start simpler. Take a look at the following "script", replacing env with echo:
$ cat "/home/neatnit/Projects/SO question 33225082/my script.py"
#!/usr/bin/echo -S /bin/sh -c '"$( dirname "$0" )/python2.7" "$0"'
print("This is python")
It's hardly a script - the shebang calls echo which will just print whichever arguments it's given. I've deliberately put two spaces between the words, this way we can see how they get preserved. As an aside, I've deliberately put the script in a path that contains spaces, to show that they are handled correctly.
Let's run it:
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
-S /bin/sh -c '"$( dirname "$0" )/python2.7" "$0"' /home/neatnit/Projects/SO question 33225082/my script.py
We see that with that shebang, echo is run with two arguments:
-S /bin/sh -c '"$( dirname "$0" )/python2.7" "$0"'
/home/neatnit/Projects/SO question 33225082/my script.py
These are the literal arguments echo sees - no quoting or escaping.
Now, let's get env back but use printf [1] ahead of sh to explore how env processes these arguments:
$ cat "/home/neatnit/Projects/SO question 33225082/my script.py"
#!/usr/bin/env -S printf %s\n /bin/sh -c '"$( dirname "$0" )/python2.7" "$0"'
print("This is python")
And run it:
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
/bin/sh
-c
"$( dirname "$0" )/python2.7" "$0"
/home/neatnit/Projects/SO question 33225082/my script.py
env splits the string after -S [2] according to ordinary (but simplified) shell rules. In this case, all $ symbols were within single-quotes, so env did not expand them. It then appended the additional argument - the script file - to the end.
When sh gets these arguments, the first argument after -c (in this case: "$( dirname "$0" )/python2.7" "$0") gets interpreted as a shell command, and the next argument acts as the first parameter in that command ($0).
Pushing the printf one level deeper:
$ cat "/home/neatnit/Projects/SO question 33225082/my script.py"
#!/usr/bin/env -S /bin/sh -c 'printf %s\\\n "$( dirname "$0" )/python2.7" "$0"'
print("This is python")
And running it:
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
/home/neatnit/Projects/SO question 33225082/python2.7
/home/neatnit/Projects/SO question 33225082/my script.py
At last - it's starting to look like the command we were looking for! The local python2.7 and our script as an argument!
sh expanded $0 into /home/[ ... ]/my script.py, giving this command:
"$( dirname "/home/[ ... ]/my script.py" )/python2.7" "/home/[ ... ]/my script.py"
dirname snips off the last part of the path to get the containing folder, giving this command:
"/home/[ ... ]/SO question 33225082/python2.7" "/home/[ ... ]/my script.py"
To highlight a common pitfall, this is what happens if we don't use double-quotes and our path contains spaces:
$ cat "/home/neatnit/Projects/SO question 33225082/my script.py"
#!/usr/bin/env -S /bin/sh -c 'printf %s\\\n $( dirname $0 )/python2.7 $0'
print("This is python")
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
/home/neatnit/Projects
.
33225082
./python2.7
/home/neatnit/Projects/SO
question
33225082/my
script.py
Needless to say, running this as a command would not give the desired result. Figuring out exactly what happened here is left as an exercise to the reader :)
At last, we put the quote marks back where they belong and get rid of the printf, and we finally get to run our script:
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
/home/neatnit/Projects/SO question 33225082/my script.py: 1: /home/neatnit/Projects/SO question 33225082/python2.7: not found
Wait, uh, let me fix that
$ ln --symbolic $(which python3) "/home/neatnit/Projects/SO question 33225082/python2.7"
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
This is python
Rejoice!
[1] This way we can see each argument in a separate line, and we don't have to get confused by space-delimited arguments.
[2] There doesn't need to be a space after -S, I just prefer the way it looks. -Sprintf sounds really exhausting.
So I am trying to pass a command from Python to the command line as a hex newline: \x0a
which in Python is also known as "\n".
What I'm trying to print through the command line is:
check_nrpe -H 127.0.0.1 -c check_users -a "echo -e "\x0a ls " #" 4 4
I tried
import subprocess as sb
sb.check_call(["check_nrpe", \ # first argument
"-H", host, # host
"-c", "check_users", # wanted remote command
"-a", # option
"\"`echo -e",
"\"\\x0a", # <new line>, problem is that python changes this to \n
parameter,
"\"` #\"", "4", "4"]])
"\"\x0a" # , problem is that python changes this to \n when passing the argument to the command line
So what i want to do is \x0a to be printed instead of \n
I also tried to encode
"\n".encode("hex")
which prints "0a"
The question is: how do I tell Python to pass the argument \x0a to the command line?
Clarify your check_nrpe call
Assuming you have Nagios installed (I have it and run Ubuntu):
cd /usr/lib/nagios/plugins
See the check_nrpe help:
$ ./check_nrpe -h
NRPE Plugin for Nagios
Copyright (c) 1999-2008 Ethan Galstad (nagios@nagios.org)
Version: 2.12
Last Modified: 03-10-2008
License: GPL v2 with exemptions (-l for more info)
SSL/TLS Available: Anonymous DH Mode, OpenSSL 0.9.6 or higher required
Usage: check_nrpe -H <host> [-n] [-u] [-p <port>] [-t <timeout>] [-c <command>] [-a <arglist...>]
Options:
-n = Do no use SSL
-u = Make socket timeouts return an UNKNOWN state instead of CRITICAL
<host> = The address of the host running the NRPE daemon
[port] = The port on which the daemon is running (default=5666)
[timeout] = Number of seconds before connection times out (default=10)
[command] = The name of the command that the remote daemon should run
[arglist] = Optional arguments that should be passed to the command. Multiple
arguments should be separated by a space. If provided, this must be
the last option supplied on the command line.
-h,--help Print this short help.
-l,--license Print licensing information.
-n,--no-ssl Do not initial an ssl handshake with the server, talk in plaintext.
Note:
This plugin requires that you have the NRPE daemon running on the remote host.
You must also have configured the daemon to associate a specific plugin command
with the [command] option you are specifying here. Upon receipt of the
[command] argument, the NRPE daemon will run the appropriate plugin command and
send the plugin output and return code back to *this* plugin. This allows you
to execute plugins on remote hosts and 'fake' the results to make Nagios think
the plugin is being run locally.
Review your sample call (I have corrected the formatting which got lost in your original post; it was hiding backquotes):
$ check_nrpe -H 127.0.0.1 -c check_users -a "`echo -e "\x0a ls "` #" 4 4
It seems like you are trying to call the check_users command and pass it some arguments. So the final call on the remote (NRPE-driven) machine would look like:
$ check_users "`echo -e "\x0a ls "` #" 4 4
Compare it with what check_users shows on its help screen:
$ ./check_users -h
check_users v1.4.15 (nagios-plugins 1.4.15)
Copyright (c) 1999 Ethan Galstad
Copyright (c) 2000-2007 Nagios Plugin Development Team
<nagiosplug-devel@lists.sourceforge.net>
This plugin checks the number of users currently logged in on the local
system and generates an error if the number exceeds the thresholds specified.
Usage:
check_users -w <users> -c <users>
Options:
-h, --help
Print detailed help screen
-V, --version
Print version information
-w, --warning=INTEGER
Set WARNING status if more than INTEGER users are logged in
-c, --critical=INTEGER
Set CRITICAL status if more than INTEGER users are logged in
Send email to nagios-users@lists.sourceforge.net if you have questions
regarding use of this software. To submit patches or suggest improvements,
send email to nagiosplug-devel@lists.sourceforge.net
It is clear that your attempt to call check_users over check_nrpe is broken, as check_users expects exactly four arguments and the call should look like this (assuming you consider 4 users to be both the critical and the warning level):
$ ./check_users -c 4 -w 4
So your final call of check_nrpe could look like:
$ check_nrpe -H 127.0.0.1 -c check_users -a -c 4 -w 4
Note that if you are trying to pass dynamic values for the critical and warning levels, you should do that via Nagios variables and not assume they will be shaped by the command line (which happens on your remote machine). Such a technique could work, but it is rather tricky.
Passing newline or other characters to command line calls
Another topic is how to pass newlines or other special characters to commands from Python.
This is not so difficult, since you can pass a list of arguments which is not interpreted by the shell but handed directly to the command.
Simple command-line script bcmd.py
The following script shows what parameters were passed to it from the command line:
import sys
print sys.argv
Test of calling commands from Python code
from subprocess import call
args = ["python", "bcmd.py"]
args.append("alfa")
args.append("be\nta")
args.append("""gama
hama""")
args.append("omega\x0aOMEGA")
args.append("double-omega\x0adouble-OMEGA")
args.append("literaly\\x0aliteraly")
call(args)
Call it:
$ python callit.py
['bcmd.py', 'alfa', 'be\nta', 'gama\n hama', 'omega\nOMEGA', 'double-omega\ndouble-OMEGA', 'literaly\\x0aliteraly']
and learn from it.
Conclusion:
Python allows calling commands and passing arguments via a list, bypassing shell parsing rules.
In your case, the real problem seems to be in the use of check_nrpe rather than in passing newlines.
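Putting both points together, here is a minimal sketch of the corrected call issued from Python (assuming check_nrpe is on the PATH; the host and the thresholds are placeholders). Each list element reaches check_nrpe verbatim, so a newline such as "\x0a" could be embedded in any argument without shell quoting problems:
from subprocess import call

host = "127.0.0.1"                        # placeholder host
ret = call(["check_nrpe",
            "-H", host,
            "-c", "check_users",
            "-a", "-c", "4", "-w", "4"])  # critical/warning at 4 users
print(ret)                                # exit status reported by check_nrpe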
In my Python script myscript.py I use argparse to handle command-line arguments. When I want to display the help information about the input arguments, I just do:
$ python myscript.py --help
If instead I want to use IPython to run my script, the help message won't be displayed; IPython displays its own help information:
$ ipython -- myscript.py -h
=========
IPython
=========
Tools for Interactive Computing in Python
=========================================
A Python shell with automatic history (input and output), dynamic object
introspection, easier configuration, command completion, access to the
system shell and more. IPython can also be embedded in running programs.
Usage
ipython [subcommand] [options] [files]
It's not so annoying, but is there a way around it?
You need to run your .py script inside IPython, something like this:
%run script.py -h
This is an IPython bug, corrected in https://github.com/ipython/ipython/pull/2663.
My 0.13 has this error; it is corrected in 0.13.2. The fix is in IPython/config/application.py, Application.parse_command_line. This function looks for help and version flags (-h, -V) in sys.argv before passing things on to parse_known_args (hence the custom help formatting). In the corrected release, it checks sys.argv only up to the first --. Before, it looked at the whole array.
A fix for earlier releases is to define an alternate help flag in the script:
simple.py script:
import argparse, sys
print(sys.argv)
p = argparse.ArgumentParser(add_help=False) # turn off the regular -h
p.add_argument('-t')
p.add_argument('-a','--ayuda',action=argparse._HelpAction,help='alternate help')
print(p.parse_args())
Invoke with:
$ ./ipython3 -- simple.py -a
['/home/paul/mypy/argdev/simple.py', '-a']
usage: simple.py [-t T] [-a]
optional arguments:
-t T
-a, --ayuda alternate help
$ ./ipython3 -- simple.py -t test
['/home/paul/mypy/argdev/simple.py', '-t', 'test']
Namespace(t='test')
I used to use perl -c programfile to check the syntax of a Perl program and then exit without executing it. Is there an equivalent way to do this for a Python script?
You can check the syntax by compiling it:
python -m py_compile script.py
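The same check can be done from Python code; here is a minimal sketch using the py_compile module (the filename is a placeholder), which raises PyCompileError for a syntax error when doraise=True:
import py_compile

# Minimal sketch: byte-compile one file without executing it.
try:
    py_compile.compile("script.py", doraise=True)   # placeholder filename
    print("syntax OK")
except py_compile.PyCompileError as err:
    print(err)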
You can use these tools:
PyChecker
Pyflakes
Pylint
import sys
filename = sys.argv[1]
source = open(filename, 'r').read() + '\n'
compile(source, filename, 'exec')
Save this as checker.py and run python checker.py yourpyfile.py.
Here's another solution, using the ast module:
python -c "import ast; ast.parse(open('programfile').read())"
To do it cleanly from within a Python script:
import ast, traceback

filename = 'programfile'
with open(filename) as f:
    source = f.read()
valid = True
try:
    ast.parse(source)
except SyntaxError:
    valid = False
    traceback.print_exc()  # Remove to silence any errors
print(valid)
Pyflakes does what you ask; it just checks the syntax. From the docs:
Pyflakes makes a simple promise: it will never complain about style, and it will try very, very hard to never emit false positives.
Pyflakes is also faster than Pylint or Pychecker. This is largely because Pyflakes only examines the syntax tree of each file individually.
To install and use:
$ pip install pyflakes
$ pyflakes yourPyFile.py
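Pyflakes can also be driven programmatically; here is a minimal sketch, assuming the pyflakes.api.checkPath helper that, as far as I know, recent Pyflakes releases expose (the filename is a placeholder):
from pyflakes.api import checkPath

# Minimal sketch: checkPath reports problems and returns their count.
problems = checkPath("yourPyFile.py")    # placeholder filename
print("issues found: %d" % problems)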
python -m compileall -q .
This will compile everything under the current directory recursively and print only errors.
$ python -m compileall --help
usage: compileall.py [-h] [-l] [-r RECURSION] [-f] [-q] [-b] [-d DESTDIR] [-x REGEXP] [-i FILE] [-j WORKERS] [--invalidation-mode {checked-hash,timestamp,unchecked-hash}] [FILE|DIR [FILE|DIR ...]]
Utilities to support installing Python libraries.
positional arguments:
FILE|DIR zero or more file and directory names to compile; if no arguments given, defaults to the equivalent of -l sys.path
optional arguments:
-h, --help show this help message and exit
-l don't recurse into subdirectories
-r RECURSION control the maximum recursion level. if `-l` and `-r` options are specified, then `-r` takes precedence.
-f force rebuild even if timestamps are up to date
-q output only error messages; -qq will suppress the error messages as well.
-b use legacy (pre-PEP3147) compiled file locations
-d DESTDIR directory to prepend to file paths for use in compile-time tracebacks and in runtime tracebacks in cases where the source file is unavailable
-x REGEXP skip files matching the regular expression; the regexp is searched for in the full path of each file considered for compilation
-i FILE add all the files and directories listed in FILE to the list considered for compilation; if "-", names are read from stdin
-j WORKERS, --workers WORKERS
Run compileall concurrently
--invalidation-mode {checked-hash,timestamp,unchecked-hash}
set .pyc invalidation mode; defaults to "checked-hash" if the SOURCE_DATE_EPOCH environment variable is set, and "timestamp" otherwise.
Exit value is 1 when syntax errors have been found.
Thanks C2H5OH.
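The same check can also be run from a script; here is a minimal sketch using compileall.compile_dir, which (as far as I know) returns a false value when any file fails to compile:
import compileall
import sys

# Minimal sketch: recursively syntax-check everything under the current
# directory and exit with 1 on failure, mirroring the command-line behaviour.
ok = compileall.compile_dir(".", quiet=1)
sys.exit(0 if ok else 1)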
Perhaps a useful online PEP8 checker: http://pep8online.com/
Thanks to @Rosh Oxymoron's answer above. I improved the script to scan all the Python files in a directory. For us lazy folks: just give it the directory and it will scan all the Python files in that directory; you can specify any file extension you like.
import sys
import glob, os
os.chdir(sys.argv[1])
for file in glob.glob("*.py"):
    source = open(file, 'r').read() + '\n'
    compile(source, file, 'exec')
Save this as checker.py and run python checker.py ~/YOURDirectoryTOCHECK
For some reason (I am a Python newbie ...) the -m call did not work for me,
so here is a bash wrapper function ...
# ---------------------------------------------------------
# check the python synax for all the *.py files under the
# <<product_version_dir/sfw/python
# ---------------------------------------------------------
doCheckPythonSyntax(){
doLog "DEBUG START doCheckPythonSyntax"
test -z "$sleep_interval" || sleep "$sleep_interval"
cd "$product_version_dir/sfw/python"
# python3 -m compileall "$product_version_dir/sfw/python"
# foreach *.py file ...
while read -r f ; do \
py_name_ext=$(basename "$f")
py_name=${py_name_ext%.*}
doLog "python3 -c \"import $py_name\""
# doLog "python3 -m py_compile $f"
python3 -c "import $py_name"
# python3 -m py_compile "$f"
test $? -ne 0 && sleep 5   # $? is the exit status of the previous command ($! would be the PID of the last background job)
done < <(find "$product_version_dir/sfw/python" -type f -name "*.py")
doLog "DEBUG STOP doCheckPythonSyntax"
}
# eof func doCheckPythonSyntax