Does Sphinx run my code on executing 'make html'?

I inherited a rather large codebase that I want to create HTML documentation for. Since it is written in Python I decided to use Sphinx because the users of the code are accustomed to the design and functionality of the Python documentation that's created with Sphinx.
I used the command sphinx-apidoc to automatically create the .rst files, and I added the module path to sys.path so that Sphinx can find the code.
So far so good. However, when I try to create the HTML using the command make html, there are many tracebacks popping up and some of the examples in the codebase seem to get executed. What can be the reason for that and how can I prevent that from happening?

When using autodoc, Sphinx imports the documented modules, so all module-level code is executed. This happens every time you do "make html". In that sense, Sphinx does "run" your code.
You may have to organize your code a bit differently to make the errors go away (move module-level code to functions). See this question for an example of what can happen.
This is my guess but it may not be the whole story. It's hard to say more without additional information.
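To make the mechanism concrete, here is a rough sketch of what autodoc effectively does for each documented module (the module name below is only a placeholder, not something from your codebase):
# simplified illustration of autodoc's import step
import importlib

mod = importlib.import_module('mypackage.mymodule')  # any module-level code runs here
print(mod.__doc__)  # docstrings are then read from the imported objects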

Wrap the module-level code in a function and guard the call:
def main():
    print('hello world')

if __name__ == '__main__':
    main()
This way your code will not be run when the module is merely imported.

Related

What reasons (if any) exist for prohibiting script import?

An executable script usually looks like this:
import modules

# define some CONSTANTS, Classes, functions

if __name__ == "__main__":
    really_do_something()
Recently I saw a script using the negated form of the common idiom:
import sys

if __name__ != "__main__":
    print('The executable must not be imported.')
    sys.exit(1)
I find it not Pythonic. Why should anybody want to prevent consenting adults from importing the file? Are there valid reasons?
I could not find any reasons, except that it is simpler to write this != guard on the top of the script compared to the standard == guard near the bottom of the script.
Even if the answer looks obvious, given the complexity of Python import system I decided to ask just to be sure.
Consider the possibility that the script executes immediate actions. That is, there are commands at "module scope" in the script file:
#!/usr/bin/env python3
with open('/etc/passwd') as pwd:
    ...
In this case, importing the file will cause those commands to be run. And while it may provide some subroutines or class definitions, it might not.
So putting in a warning saying "you imported this, but you shouldn't, because it won't do what you want" is a friendly thing. It really says: "this file isn't set up to be imported; if you want this functionality, run the script as a program instead."
It depends. If the script is just a quick-and-dirty script to do some simple stuff, then I don't see any point in importing it. Just copy & paste what you need into your code and you are done (obviously I'm assuming a compatible open source license).
If the script is part of some library/software, then the best practice would be that each script should be of the form:
import argparse
from somewhere import main  # 'somewhere' stands in for the package holding the real logic

parser = argparse.ArgumentParser()
# ... declare the command-line arguments here ...
args = parser.parse_args()
main(args)
In other words: it does not contain anything, it just imports stuff, parses the command line arguments and calls the main function. In this case it does not make sense to import this script, because it's empty. You can just perform the imports yourself, removing the argument parsing stuff.
This is where such a guard might be useful. Since it does not make any logical sense to import the script, maybe you can tell the user that instead of importing the script they should simply import the main function.
If, however, the script is complex, imports stuff, defines classes, functions, glues pieces together etc, then yes it does make sense to import it and in this case using a guard similar to the one you provided as an example limits the usefulness of the script.
However, note that the script might contain some logic executed when defining the module; in this case importing it might be useless (however, the script should be refactored to place such logic in a function that can be safely imported instead).
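A rough sketch of that refactoring, with purely illustrative names: the logic that used to run at module scope moves into a function, so importing the file no longer has side effects.
def process(data):
    # placeholder for whatever the script actually does with the data
    return len(data)

def run(path='input.txt'):
    # formerly module-level code; importing this file now does nothing by itself
    with open(path) as f:
        data = f.read()
    return process(data)

if __name__ == '__main__':
    print(run())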

Determine usage/creation of object and data member into another module

In a legacy system, we have created an init module which loads information that is then used by various modules (via import statements). It is a big module which consumes a lot of memory and takes a long time to process, and some of the information is not needed or has never been used. There are two proposed solutions.
Can we determine in Python who is using this module? For example:
LoadData.py (the init module) contains 100 data members.
A.py:
import LoadData
b = LoadData.name
B.py:
import LoadData
b = LoadData.width
In the above example, A.py uses name and B.py uses width; the rest of the information is not required (the other 98 data members are not needed).
Is there any way to help us determine the usage of the LoadData module along with the usage of its data members?
Put simply, we would otherwise need to traverse A.py and B.py and identify the usage of each object manually.
I am trying to implement the first solution, but as I have more than 1000 modules it will be painful to determine this by traversing each module. I am open to any tool which can be integrated with Python.
Your question is quite broad, so I can't give you an exact answer. However, what I would generally do here is to run a linter like flake8 over the whole codebase to show you where you have unused imports and if you have references in your files to things that you haven't imported. It won't tell you if a whole file is never imported by anything, but if you remove all unused imports, you can then search your codebase for imports of a particular module and if none are found, you can (relatively) safely delete that module.
You can integrate tools like flake8 with most good text editors, so that they highlight mistakes in real time.
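For instance, a contrived module like the one below is the kind of thing flake8 will flag; the unused import is reported under the pyflakes code F401:
# example.py -- a made-up module with an unused import
import os   # flake8 reports: F401 'os' imported but unused
import sys

def show_args():
    print(sys.argv)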
As you're trying to work with legacy code, you'll more than likely have many errors when you run the tool, as it looks out for style issues as well as the kinds of import/usage issues that you mention. I would recommend fixing these as a matter of principle (as they are non-functional in nature), and then making sure that you run flake8 as part of your continuous integration to avoid regressions. You can, however, disable particular warnings with command-line arguments, which might help you stage things.
Another thing you can start to do, though it will take a little longer to yield results, is write and run unit tests with code coverage switched on, so you can see areas of your codebase that are never executed. With a large and legacy project, however, this might be tough going! It will, however, help you gain better insight into the attribute usage you mention in point 1. Because Python is very dynamic, static analysis can only go so far in giving you information about attribute usage.
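If you'd rather drive the coverage measurement from Python rather than from the command line, the coverage.py API can be used directly. A minimal sketch, assuming the coverage package is installed and some_module is a stand-in for one of your own modules:
import coverage

cov = coverage.Coverage()
cov.start()

import some_module           # hypothetical module under test
some_module.do_something()   # exercise the code you care about

cov.stop()
cov.report()  # prints per-file statement coverage to stdout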
Also, make sure you are using a version control tool (such as git) so that you can track any changes and revert them if you go wrong.

How to import external Python script?

I am writing a game engine in Python and the thing is I am not sure how to handle external scripts (think Source engine mods, Lua). Every scene and entity in a game can have a custom script attached to it, but the game engine is not aware of those scripts until the scene is being loaded. For example, there could be a script which animates a game cutscene, and that script would be used only in one scene.
So, what I want to know is: what's the best way to handle those scripts? I know I could import them with exec or eval, but someone said it's not safe. Why? I could also create some scripting language which would be parsed at runtime, but I don't see the point in that, considering that Python is a scripting language itself. Any help would be greatly appreciated.
You can use __import__() for this. Say your entity has a script attribute that can be either None or the name of a script (that is in a known location, which is on sys.path), you could use the following code to run the main() function from that script:
if entity.script is not None:
    # note: the level argument of -1 is only valid on Python 2
    custom_script = __import__(entity.script, globals(), locals(), [], -1)
    custom_script.main()
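On Python 3 (where a level of -1 is no longer accepted), the friendlier spelling is importlib.import_module; a small sketch, assuming entity.script is a dotted module name reachable on sys.path:
import importlib

if entity.script is not None:
    custom_script = importlib.import_module(entity.script)
    custom_script.main()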
Yes, exec and eval are not safe for user-supplied data, like handling expressions or Web input. But if you specifically want to give users the full power of Python, and you understand the risks (users can do anything – erase/read files on the computer, crash Python, enter an infinite loop, etc.), using exec is perfectly fine.
If you trust your game script designer(s), exec away. If the levels can come from random people on the Internet, at least let your players be aware of the risk.
You can run your external script either by importing it with __import__() or by using the exec/execfile functions. It is nice to know that:
import compiles the script to byte-code and saves a .pyc file on first use
import searches for the module and creates a module object with its own namespace, in contrast to exec/execfile, which just execute the code from a string/file
it is possible to sandbox your code a bit by mangling the builtins in the globals you pass to __import__/execfile, although this method is not super safe; see my answer to this question for an example (a rough sketch of the idea follows below)
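A minimal sketch of that kind of (weak) sandboxing with exec: the executed code only sees what you put into the globals dictionary you hand it. This limits accidents but does not stop a determined attacker.
script_source = '''
result = helper(6, 7)
'''

def helper(a, b):
    return a * b

# expose only what the script is meant to use; builtins are deliberately emptied
restricted_globals = {'__builtins__': {}, 'helper': helper}
exec(script_source, restricted_globals)
print(restricted_globals['result'])  # 42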

Why use Python interactive mode?

When I first started reading about Python, all of the tutorials have you use Python's Interactive Mode. It is difficult to save, write long programs, or edit your existing lines (for me at least). It seems like a far more difficult way of writing Python code than opening up a code.py file and running the interpreter on that file.
python code.py
I am coming from a Java background, so I have ingrained expectations of writing and compiling files for programs. I also know that a feature would not be so prominent in Python documentation if it were not somehow useful. So what am I missing?
Let's see:
If you want to know how something works, you can just try it. There is no need to write up a file. I almost always scratch write my programs in the interpreter before coding them. It's not just for things that you don't know how they work in the programming language. I never remember what the correct arguments to range are to create, for example, [-2, -1, 0, 1]. I don't need to. I just have to fire up the interpreter and try stuff until I figure out it is range(-2, 2) (did that just now, actually).
You can use it as a calculator.
Python is a very introspective programming language. If you want to know anything about an object, you can just do dir(object). If you use IPython, you can even do object.<TAB> and it will tab-complete the methods and attributes of that object. That's way faster than looking stuff up in documentation or even in code.
help(anything) for documentation. It's way faster than any web interface.
Again, you have to use IPython (highly recommended), but you can time stuff. %timeit func1() and %timeit func2() is a common idiom to determine what is faster.
How often have you wanted to write a program to use once, and then never again. The fastest way to do this is to just do it in the Python interpreter. Sure, you have to be careful writing loops or functions (they must have the correct syntax the first time), but most stuff is just line by line, and you can play around with it.
Debugging. You don't need to put selective print statements in code to see what variables are when you write it in the interpreter. You just have to type >>> a, and it will show what a is. Nice again to see if you constructed something correctly. The built-in Python debugger pdb also uses the interpreter functionality, so you can not only see what a variable is when debugging, but you can also manipulate or even change it without halting debugging (a small pdb sketch follows at the end of this answer).
When people say that Python is faster to develop in, I guarantee that this is a big part of what they are talking about.
Commenters: anything I am forgetting?
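As a small sketch of the pdb workflow mentioned above: set_trace() drops you into the same kind of interactive prompt, where you can inspect and modify variables before continuing.
import pdb

def buggy(values):
    total = sum(values)
    # execution pauses here; at the (Pdb) prompt you can type "total",
    # "values", or even reassign a variable, then "c" to continue
    pdb.set_trace()
    return total / len(values)

buggy([1, 2, 3])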
REPLs (read-eval-print loops, like Python's interactive mode) provide immediate feedback to the programmer. As such, you can rapidly write and test small pieces of code, and assemble those pieces into a larger program.
You're talking about running Python in the console by simply typing "python"? That's just for little tests and for practicing with the language. It's very useful when learning the language and testing out other modules.
Of course any real software project is written in .py files and later executed by the interpreter!
The Python interpreter is a least common denominator: you can run it on multiple platforms, and it acts the same way (modulo platform-specific modules), so it's pretty easy to get a newbie going with.
It's a lot easier to tell a newbie to launch the interpreter and "do this" than to have them open a file, type in some code, save it, make it executable, make sure python is in your PATH, or use a #! line, etc etc. Scrap all of that and just launch the interpreter. For simple examples, you can't beat it. It was never meant for long programs, so if you were using it for that, you probably missed the part of the tutorial that told you "longer scripts go in a file". :)
You use the interactive interpreter to test snippets of your code before you put them into your script.
As already mentioned, the Python interactive interpreter gives a quick and dirty way to test simple Python functions and/or code snippets.
I personally use the Python shell as a very quick way to perform simple numerical operations (provided by the math module). I have my environment set up so that the math module is automatically imported whenever I start a Python shell. In fact, it's a good way to "market" Python to non-Pythoniasts. Show them how they can use Python as a neat scientific calculator, and for simple mathematical prototyping.
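A common way to get that "math is always imported" setup is the PYTHONSTARTUP environment variable, which points to a file that the interactive interpreter executes at startup (the file name below is just a convention I'm assuming, not a requirement):
# ~/.pythonstartup.py -- run only for interactive sessions
import math
from math import pi, sqrt, sin, cos
print('math helpers loaded')
Then point PYTHONSTARTUP at that file in your shell profile, e.g. PYTHONSTARTUP=~/.pythonstartup.py.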
One thing I use interactive mode for that others haven't mentioned: to see if a module is installed. Just fire up Python and try to import the module; if it dies, then your PYTHONPATH is broken or the module is not installed.
This is a great first step for "Hey, it's not working on my machine" or "Which Python did that get installed in, anyway" bugs.
I find the interactive interpreter very, very good for testing quick code, or to show others the Power of Python. Sometimes I use the interpreter as a handy calculator, too. It's amazing what you can do in a very short amount of time.
Aside from the built-in console, I also have to recommend Pyshell. It has auto-completion and decent syntax highlighting. You can also edit multiple lines of code at once. Of course, it's not perfect, but certainly better than the default Python console.
When coding in Java, you almost always will have the API open in some browser window. However with the python interpreter, you can always import any module that you are thinking about using and check what it offers. You can also test the behavior of new methods that you are unsure of, to eliminate the "Oh! so THAT's how it works" as a source of bugs.
Interactive mode makes it easy to test code snippets before incorporating them into a larger program. If you use IDLE there's syntax highlighting and argument pop-ups to help you out. It's also a quick way of checking that you've figured out how to use a module without having to write a test program.

Python Profiling in Eclipse

This question is semi-based on this one:
How can you profile a python script?
I thought that this would be a great thing to run on some of my programs. Although profiling from a batch file, as explained in the aforementioned answer, is possible, I think it would be even better to have this option in Eclipse. At the same time, wouldn't making my entire program a function and profiling it mean I have to alter the source code?
How can I configure Eclipse such that I have the ability to run the profile command on my existing programs?
Any tips or suggestions are welcomed!
If you follow the common Python idiom of making all your code, even the "existing programs", importable as modules, you can do exactly what you describe without any additional hassle.
Here is the specific idiom I am talking about, which turns your program's flow "upside-down", since the __name__ == '__main__' check is placed at the bottom of the file, once all your defs are done:
# program.py file

def foo():
    """ analogous to a main(). do something here """
    pass

# ... fill in rest of function def's here ...

# here is where the code execution and control flow will
# actually originate for your code, when program.py is
# invoked as a program. a very common Pythonism...
if __name__ == '__main__':
    foo()
In my experience, it is quite easy to retrofit any existing scripts you have to follow this form, probably a couple minutes at most.
Since there are other benefits to having your program also be a module, you'll find most Python scripts out there actually do it this way. One benefit of doing it this way: anything Python you write is potentially usable in module form, including cProfile-ing of your foo().
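Concretely, once program.py is importable like that, profiling it from another script or from the interactive prompt takes only a couple of lines with the standard cProfile module, with no changes to program.py itself:
import cProfile
import program  # the module from the example above

# run foo() under the profiler and print a report sorted by cumulative time
cProfile.run('program.foo()', sort='cumulative')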
You can always make separate modules that do just profiling specific stuff in your other modules. You can organize modules like these in a separate package. That way you don't change your existing code.
