Conventions of Importing Python Main Programs

Often I write command line utilities that are only meant to be run as main. For example, I might have a file that looks like this:
#!/usr/bin/env python
if __name__ == '__main__':
    import sys
    # do stuff
In other words, there is nothing going on that isn't under the if statement checking that this file is being run as main. I tried importing a file like this to see what would happen, and the import was successful.
So as I expected, one is allowed to import files like this, but what is the convention surrounding this practice? Is one supposed to throw an error telling the user that there is nothing to be imported? Or if all the contents of the file are supposed to be run as main, does one need to check if the program is being run as main? Or is the conditional not necessary?
Also, if I have import statements, should they be at the top of the file, or under the conditional? If the modules are only being used under the conditional, it would seem to me that they should be imported under the conditional and not at the top of the file.

If you are writing simple utilities that you are entirely certain that you will never import as a module in another program, then you really do not need to include the if __name__ == '__main__' stuff. The fundamental point of that construct is to allow a module to be developed that can both be imported as a module for use, and run as a stand-alone program for some other purpose. For example, if you had a module and had some test vectors you wanted to run on it regularly, you would put the trigger mechanism for your test vectors in the if __name__ block.
Another example is a stand-alone program you develop that also provides functions useful to others; the pip module is an excellent example of this technique.
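For illustration, here is a minimal sketch of that dual-purpose layout (the file name greeting.py and its functions are made up for the example):
# greeting.py -- importable module that can also run stand-alone
def greet(name):
    """Reusable function, available to anyone who imports this module."""
    return "Hello, %s!" % name

def _run_self_test():
    # Quick checks that only run when the file is executed directly.
    assert greet("world") == "Hello, world!"
    print("self-test passed")

if __name__ == '__main__':
    # Reached only for `python greeting.py`,
    # never for `import greeting` from another module.
    _run_self_test()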

Related

Python Setuptools: quick way to add scripts without "main" function as "console_scripts" entry points

My request seems unorthodox, but I would like to quickly package an old repository, consisting mostly of python executable scripts.
The problem is that those scripts were not designed as modules, so some of them execute code directly at the module top level, and some others have the if __name__=='__main__' part.
How would you distribute those scripts using setuptools, without too much rewrite?
I know I could just put them under the scripts option of setup(), but it's not advised, and also it doesn't allow me to rename them.
I would like to skip defining a main() function in all those scripts, also because some scripts call weird recursive functions with side effects on global variables, so I'm a bit afraid of breaking stuff.
When I try providing only the module name as console_scripts (e.g "myscript=mypkg.myscript" instead of "myscript=mypkg.myscript:main"), it logically complains after installation that a module is not callable.
Is there a way to create scripts from modules? At least when they have a if __name__=='__main__'?
I just realised part of the answer:
in the case where the module executes everything at the top level (i.e. on import), it is therefore OK to define a dummy "no-op" main function, like so:
# Content of mypkg/myscript.py
print("myscript being executed!")

def main():
    pass  # Do nothing!
This solution will still force me to add this line to the existing scripts, but I think it's a quick but cautious solution.
No solution if the code is under an if __name__=='__main__' block, though...
You can use the following code:
def main():
    pass  # or do something

if __name__ == "__main__":
    main()
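With a main() defined like that, the entry point syntax from the question works; a minimal setup() sketch might look like this (package name, version, and layout are assumed, only the entry_points part matters):
# setup.py -- minimal sketch
from setuptools import setup, find_packages

setup(
    name='mypkg',
    version='0.1',
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            # Installs a `myscript` command that calls mypkg/myscript.py:main()
            'myscript = mypkg.myscript:main',
        ],
    },
)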

Checking if all modules that will be required are available on PYTHONPATH

I have written a large system in Python that I now want to distribute to some colleagues. There are a few folders that need to be added to PYTHONPATH for all of my modules (files) to be found. I am looking for a way to give them sane error messages if they have not setup their PYTHONPATH correctly. Say the structure is:
ParentModule
    calls Child
        calls GrandChild
            calls MyModule
If they run ParentModule, it could be running for a long time before it ever ends up in GrandChild and needs MyModule, and if MyModule's directory is not on PYTHONPATH, it will crash complaining that it can't find MyModule. I was hoping to be able to do something like:
for file in (all files that could ever be reached from here):
    if all modules needed by 'file' are not available:
        print "error: Please make sure your PYTHONPATH points to x, y, and z"
Is something like this possible?
At the top of your main module I would just try to import all of the modules your program depends on, and wrap it in a try/except for printing your sane error if any of the import statements fail:
import sys

try:
    import Child
    import GrandChild
    import MyModule
except ImportError:
    print "Error: Please make sure your PYTHONPATH points to x, y, and z"
    sys.exit(1)

# current module contents
Depending on how you're "calling" Child, GrandChild, and MyModule, this should happen automatically.
If by "call" you mean import, and you're doing your imports at the top of the module, as is conventional, then all of the import chaining will happen automatically on the import of the parent module. So if a downstream import is unavailable, you'll get an ImportError as soon as you import ParentModule. If instead you're "calling" the scripts by, say, executing them in a subprocess, then no, I don't think there's an easy way to ensure the availability of modules, given the totally dynamic nature of what you're doing. The same goes for totally dynamic imports. This is one of the downsides of dynamic programming in general: there's often no rigorous way to ensure that things will be the way you intended them to be.
Edit:
You could definitely do something heuristic like #F.J. suggests though.
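One possible heuristic, sketched below, is a pre-flight check that looks each required module up on the import path without importing it (the module names are the ones from the question; importlib.util.find_spec needs Python 3.4+):
import importlib.util
import sys

# Modules the program will eventually need, checked up front
# instead of crashing much later deep inside GrandChild.
REQUIRED = ['Child', 'GrandChild', 'MyModule']

missing = [name for name in REQUIRED
           if importlib.util.find_spec(name) is None]
if missing:
    print("Error: cannot find " + ", ".join(missing)
          + ". Please make sure your PYTHONPATH points to x, y, and z")
    sys.exit(1)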

Importing python modules for use in only one file

More specifically let's say I have a number of .py files, with main.py importing stuff like os, pygame, math and all my other .py files, mymodule01.py etc.
My problem is that whenever main.py calls on one of my .py files and that file contains something like an os.listdir() I keep getting an error saying stuff like 'os is not defined'.
Should I just import all the required modules in each .py file I write, or is there a better way, like a centralized import that every file can recognize? With pygame especially this would be very confusing, since I'd have to init pygame in each file just to use its functions, not to mention if I want to blit something on the screen.
The Python modules and packages documentation didn't help much (that, or I'm really slow); after following the docs I keep getting a not-found error after adding e.g. import mymodule01.py to the __init__.py file in the containing folder.
I think you may be under the impression that "import" acts like "include" in other languages. It doesn't.
Each module object is a singleton: importing it from several files does not re-run it, so there is no performance degradation or danger of a module's initialization code executing more than once.
Furthermore, each file has its own scope, so in your example, if you define foo = 1 in main.py, foo won't be visible in mymodule01.py. You would have to do import main; main.foo to see it (not that you should).
You may grumble, but this is a much better system than include.
Should I just import all the required modules in each .py file I write
Yes.
With pygame especially this would be very confusing since I'd have to init pygame in each file just to use its functions
No, only init it once. There's only one copy of the module.
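A minimal sketch of that arrangement (file and function names are made up; pygame is assumed to be installed):
# mymodule01.py -- imports what it needs itself
import os
import pygame

def draw_background(screen):
    # pygame.init() was already called once by main.py; importing pygame
    # here just gives this file a name to refer to, it does not re-init it.
    screen.fill((0, 0, 0))

def list_assets(path):
    return os.listdir(path)

# main.py
import pygame
import mymodule01

pygame.init()  # done exactly once, here
screen = pygame.display.set_mode((640, 480))
mymodule01.draw_background(screen)
pygame.display.flip()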

"Boilerplate" code in Python?

Google has a Python tutorial, and they describe boilerplate code as "unfortunate" and provide this example:
#!/usr/bin/python
# import modules used here -- sys is a very standard one
import sys
# Gather our code in a main() function
def main():
    print 'Hello there', sys.argv[1]
    # Command line args are in sys.argv[1], sys.argv[2] ..
    # sys.argv[0] is the script name itself and can be ignored

# Standard boilerplate to call the main() function to begin
# the program.
if __name__ == '__main__':
    main()
Now, I've heard boilerplate code being described as "seemingly repetitive code that shows up again and again in order to get some result that seems like it ought to be much simpler" (example).
Anyways, in Python, the part considered "boilerplate" code of the example above was:
if __name__ == '__main__':
main()
Now, my questions are as follows:
1) Does boilerplate code in Python (like the example provided) take on the same definition as the definition I provided? If so, why?
2) Is this code even necessary? It seems to me like the code runs whether or not there's a main method. What makes using this code better? Is it even better?
3) Why do we use that code and what service does it provide?
4) Does this occur throughout Python? Are there other examples of "boilerplate code"?
Oh, and just an off-topic question: do you need to import sys to use command line arguments in Python? How does it handle such arguments if it's not there?
It is repetitive in the sense that it's repeated for each script that you might execute from the command line.
If you put your main code in a function like this, you can import the module without executing it. This is sometimes useful. It also keeps things organized a bit more.
Same as #2 as far as I can tell
Python is generally pretty good at avoiding boilerplate. It's flexible enough that in most situations you can write code to produce the boilerplate rather than writing boilerplate code.
Off topic question:
If you don't write code to check the arguments, they are ignored.
The reason that the if __name__ == "__main__": block is called boilerplate in this case is that it replicates a functionality that is automatic in many other languages. In Java or C++, among many others, when you run your code it will look for a main() method and run it, and even complain if it's not there. Python runs whatever code is in your file, so you need to tell it to run the main() method; a simple alternative would be to make running the main() method the default functionality.
So, if __name__ == "__main__": is a common pattern that could be shorter. There's no reason you couldn't do something different, like:
if __name__ == "__main__":
    print "Hello, Stack Overflow!"

    for i in range(3):
        print i

    exit(0)
This will work just fine; although my example is a little silly, you can see that you can put whatever you like there. The Python designers chose this behavior over automatically running the main() method (which may well not exist), presumably because Python is a "scripting" language; so you can write some commands directly into a file, run it, and your commands execute. I personally prefer it the Python way because it makes starting up in Python easier for beginners, and it's always nice to have a language where Hello World is one line.
The reason you use an "if main" check is so you can have a module that runs some part of its code at toplevel (to create the things – constants, functions, or classes – it exports), and some part only when executed as a script (e.g. unit tests for its functionality).
The reason the latter code should be wrapped in a function is that the local variables of main() would otherwise leak into the module's scope.
Now, an alternate design could be that a file executed as a script would have to declare a function named, say, __main__(), but that would mean adding a new magic function name to the language, while the __name__ mechanism is already there. (And couldn't be removed, because every module has to have a __name__, and a module executed as a script has to have a "special" name because of how module names are assigned.) Introducing two mechanisms to do the same thing just to get rid of two lines of boilerplate – and usually two lines of boilerplate per application – just doesn't seem worth it.
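To make the mechanism concrete, a tiny sketch (the file name whatever.py is made up):
# whatever.py
print("__name__ is " + __name__)

# Running `python whatever.py` prints:      __name__ is __main__
# Doing `import whatever` elsewhere prints: __name__ is whatever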
You don't need to add a if __name__ == '__main__' for one off scripts that aren't intended to be a part of a larger project. See here for a great explanation. You only need it if you want to run the file by itself AND include it as a module along with other python files.
If you just want to run one file, you can have zero boilerplate:
print 1
and run it with $ python print_one.py
Adding the shebang line #!/usr/bin/python and running chmod +x print_one.py gets you the ability to run with
./print_one.py
Finally, # coding: utf-8 allows you to add unicode to your file if you want to put ❤'s all over the place.
1) main boilerplate is common, but cannot be any simpler
2) main() is not called without the boilerplate
3) the boilerplate allows module usage both as a standalone script, and as a library in other programs
4) it’s very common. doctest is another one.
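As point 4 notes, doctest relies on exactly this boilerplate; a typical sketch looks like this (the function and its examples are made up):
def double(n):
    """
    >>> double(2)
    4
    """
    return n * 2

if __name__ == "__main__":
    # Run the examples embedded in the docstrings, but only when the
    # file is executed directly, not when it is imported.
    import doctest
    doctest.testmod()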
Train to become a Python guru…and good luck with the thesis! ;-)
Let's take a moment to see what happens when you write import sys:
Python searches its list of module locations and loads the sys module
the name sys is bound in your namespace, and sys.argv (a list holding the command-line arguments) becomes available
So, what's happening here?
Code written elsewhere is being used to perform certain operations within the scope of the current program. Programming in this fashion has a lot of benefits: it separates the logic from the actual labour.
Now, as far as the boilerplate is concerned, there are two parts:
the program itself (the logic), defined under main, and
the call part that checks whether the file is being run as the main program and, if so, calls main()
You essentially write your program under main(), using all the functions you defined just before defining main (or elsewhere), and let the boilerplate call main for you.
I am equally confused by what the tutorial means by "boilerplate code": does it mean that this code can be avoided in a simple script? Or is it a criticism of Python features that force the use of this syntax? Or even an invitation to use this "boilerplate" code?
I don't know; however, after many years of Python programming, I am at least clear about what the different syntaxes do, even if I am probably still not sure what the best way of doing it is.
Often you want to put code for tests, or code that you simply want executed, at the end of the script, but this has some implications/side effects:
the code gets executed even when the script is imported, which is rarely what is wanted
variables and values in the code are defined and exported into the calling namespace
the code at the end of the script can be executed by calling the script (python script.py) or by running it from the ipython shell (%run script.py), but there is no way to run it from other scripts
The most basic mechanism to avoid executing the following code unconditionally is the syntax:
if __name__ == '__main__':
which makes the code run only if the script is called or run directly, avoiding point 1. The other two points still hold.
The "boilerplate" code with a separate main() function adds a further step and also addresses points 2 and 3: for example, you can call a number of tests from different scripts. Sometimes this takes another level (e.g. one function per test, so that each can be called individually from outside, plus a main() that calls all the test functions without the caller needing to know which ones they are).
I would add that the main reason I often find this structure unsatisfying, apart from its complexity, is that sometimes I would like to keep point 2 (the variables staying available in the namespace), and I lose that possibility if the code is moved into a separate function.
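A sketch of the structure described above (the test names and checks are made up):
def test_parsing():
    assert int("42") == 42

def test_output():
    assert "%d" % 7 == "7"

def main():
    # Runs every test; a caller that imports this module can still
    # invoke test_parsing() or test_output() individually.
    test_parsing()
    test_output()
    print("all tests passed")

if __name__ == '__main__':
    main()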

Python: Create virtual import path

Is there any way to create a virtual import path in Python?
My directory structure is like this:
/
    native/
        scripts/
            some.py
            another.py
    [Other unrelated dirs]
The root is the directory from where the program is executed. At the moment I add native/scripts/ to the search path so I can do import some, another instead of from native.scripts import some, another, but I'd like to be able to do it like this:
from native import some
import native.another
Is there any way to achieve this?
Related questions:
Making a virtual package available via sys.modules
Why not move some.py and another.py out into the native directory so that everything Just Works and so that people returning to the source code later won't be confused about why things are and aren't importable? :)
Update:
Thanks for your comments; they have usefully clarified the problem! In your case, I generally put functions and classes that I might want to import inside, say, native.some, where I can easily get to them. But then I take the script code, and only the script code (the thin shim that interprets arguments and starts everything running by passing them to a main() or go() function as parameters), and put that inside a scripts directory. That keeps external-interface code cleanly separate from code that you might want to import, and means you don't have to try to fool Python into having modules in several places at once.
In /native/__init__.py, include:
from scripts import some, another
(on Python 3, where implicit relative imports are gone, use from .scripts import some, another)
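That covers from native import some; a sketch of the sys.modules trick from the related question linked above, which also makes import native.another work (assuming this lives in native/__init__.py and native/scripts is importable as a subpackage), could look like:
# native/__init__.py
import sys

from .scripts import some, another  # `from scripts import ...` on Python 2

# Register the submodules under "virtual" dotted names so that
# `import native.some` and `import native.another` also succeed,
# not just `from native import some`.
sys.modules[__name__ + '.some'] = some
sys.modules[__name__ + '.another'] = another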
