I had a strange situation:
In my folder /home/Komponenten/ there were a lot of Python scripts.
When I ran
cd /home/Komponenten
/home/Komponenten>python urlfilter.py
it resulted in the execution of another script; I found out that in my case it was queue.py from the same folder.
I thought: OK, there might be some code in urlfilter.py where I used queue.py. queue.py contained a little multithreading test, but nothing special.
So I simply tried moving the queue.py file.
After that, urlfilter.py was executed normally with no error.
So I still have no clue why the Python interpreter executed queue.py instead of urlfilter.py.
In Python the import path includes the directory of the script being run (and, in the interactive interpreter, the working directory), so your local queue.py shadows the standard-library queue module. Importing a module basically means executing its top-level code. That is why importing queue from urlfilter.py resulted in queue.py being executed. To avoid accidental execution of scripts on import, guard that code with a check of the __name__ variable for the value '__main__':
if __name__ == '__main__':
    do_not_execute_this_during_import()
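As a minimal sketch of the shadowing (the file contents are hypothetical, not your actual scripts), two files in the same folder reproduce it:
# queue.py -- shadows the standard-library queue module
print("queue.py top-level code is running")  # runs on import

# urlfilter.py
import queue  # resolves to the local queue.py, not the stdlib module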
I always structure my repos in the following way.
repo/main.py
repo/scripts/script.py
…
In main.py, I import script.py in the following manner:
from scripts import script as sc
This always works unless I decide to make changes to script.py. After making changes, if I run main.py from the shell, it still imports code from the older script.py, without the current changes. Until now, what I would do then is just create another branch. However, this leads to a lot of branches in the development process. Is there any way to avoid this? How else should I be importing from the scripts directory to avoid this?
Help would be highly appreciated.
UPDATE
From the answers, I can see that I have caused some confusion. When I say that I run main.py from the shell, I mean executing it with python main.py from the terminal. One can think of main.py as a script that does some math and outputs the answer. In doing that math, it imports script.py from scripts, which has additional functions that main.py uses. After running main.py N times, if I update script.py and then execute main.py again (in the terminal), it imports the old script.py again and does the math with the older code. The answer does not reflect the changes I just made to script.py. Until now, when I have had to go through something like this, I have just created a new branch, literally copy-pasted the old files plus the newer script.py into it, and executed main.py in the shell. It does import the newer script.py then. One more thing I have noticed: if I just create a new file, say script2.py, and then import it in main.py as
from scripts import script2 as sc
it imports script2.py just as it should - it reflects all the changes made to script.py.
There’s no second import statement in main.py.
On the surface this question sounds like we're repeatedly running $ python main.py, which quickly executes and exits. But from the symptom, it must be the case that we have a long-lived REPL prompt repeatedly executing the main.py code.
The module you're looking for is importlib. Do this:
from importlib import reload
from scripts import script as sc
# [do stuff, then edit script.py]
reload(sc)
# [do more stuff, and see the effect of the edit]
What is going on here?
Well, if you repeatedly execute
import scripts.script
import scripts.script
it turns out that the 2nd and subsequent imports do nothing. The interpreter consults sys.modules, finds the module has already been loaded, and reports "cache hit!" instead of doing the hard work of pulling in that text file.
The purpose of the reload() function is to invalidate that cache entry and repeat the import, so it actually pulls in Python text from the (presumably edited) source file.
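A quick sketch of the cache in action, assuming the repo layout from the question:
import sys

from scripts import script as sc
print('scripts.script' in sys.modules)  # True: the module is cached now
from scripts import script as sc2       # cache hit: script.py is not re-executed
print(sc is sc2)                         # True: both names are the same module object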
Suppose instead you have a short-lived $ python main.py process that runs repeatedly, sometimes after a script.py edit. Then the in-memory sys.modules cache is not relevant, having been discarded each time the process exits.
There is another level of caching at work here. Typically the CPython interpreter will read script.py, parse it, produce bytecode from the parse tree, and write the bytecode to a .pyc file (in Python 3, under a __pycache__ directory next to the source). More than one output cache directory is possible, depending on system details.
Upon being asked to read script.py, the interpreter can look to see if the corresponding .pyc bytecode file is available and fresh, and then declare "cache hit!", in which case the .py source file is not even read.
Normally this works great, because source file updates are infrequent (human editing speed), and the freshness comparison of file timestamps is effective. If there's something wrong with those timestamps, the whole mechanism won't work properly, and we might fail to notice the source was recently edited. Perhaps you suffer from this.
First, identify the relevant .pyc bytecode file. Then, run main.py, make an edit, delete the .pyc file, re-run main.py, and notice that the edit took effect.
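To find that .pyc file, importlib can compute the expected cache path for you; a small sketch, assuming the scripts/script.py layout from the question:
import importlib.util

# Maps a source path to its bytecode cache path, e.g.
# scripts/__pycache__/script.cpython-312.pyc (the tag varies by interpreter).
print(importlib.util.cache_from_source('scripts/script.py'))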
Let us know how it goes.
I have a Main script in Spyder that calls several functions contained in 6 different scripts (.py). I had to do it this way because the scripts are also used in different projects.
Currently, I have to run each script (containing several functions each) separately by hand, by clicking the "green triangle" before launching the Main script, so that the functions contained in each script are stored in the working environment. This is tiring.
My question is: would it be possible to run each script automatically, directly from the Main script, instead of running one after another by hand?
Try
from filename import *
instead of
import filename
No .py extension in the import.
When you execute an import statement, the source file being imported is executed. So, for example, if you have thing.py and you execute import thing, all the code in thing.py will be run.
Also, as noted in a comment by Sven Krüger: you can use runpy.run_path, which I think is overall a better solution than my original suggestion.
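A minimal sketch of the runpy approach (the file name helpers.py and the function name are hypothetical): run_path executes the file and hands back its namespace as a dict.
import runpy

# Execute the script and capture everything it defined at top level.
namespace = runpy.run_path('scripts/helpers.py')
namespace['some_function']()  # call one of the functions it defined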
I have a script called startup_launching.py, which does something like this:
import os
# launch chrome
os.startfile(r'C:\Program Files (x86)\google\chrome\application\chrome.exe')
To run this from the (windows) command line, I enter:
python "FILEPATH\startup_launching.py"
That works fine.
However, I have a separate script called threading.py, which does this:
import time, threading

def foo():
    print(time.ctime())
    threading.Timer(10, foo).start()

foo()
(which I found on Stack Overflow).
When threading.py is saved in the same folder as startup_launching.py, it seems to interfere with startup_launching.py when I run it from the command line (e.g. one of the error messages is: module 'threading' has no attribute 'Timer').
When I move threading.py to another folder, startup_launching.py works fine again.
Can someone explain what's going on here? I assumed that entering:
python "FILEPATH\startup_launching.py"
in the command line would only look in startup_launching.py
Thanks!
You should rename your file so that it is not named threading.py, since it is on the import path and masks the actual standard-library threading module, which the other script relies upon.
Name your module something other than threading.py, because there is a standard-library module named threading.
Don't call it threading.py. Also, check that your Python version corresponds to the tutorial you were reading.
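To confirm which file actually got imported, print the module's __file__ attribute; it points at your local threading.py when the shadowing happens:
import threading

print(threading.__file__)  # local .../threading.py instead of the stdlib path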
Recently I have been trying to dig deeper into the core of Python. Currently I am looking into Python's module system and how "global", "local", and "nonlocal" variables are stored. More specifically, my question is: how does the interpreter treat the file being run? Is it treated as its own module in sys.modules (or something similar)?
The top-level script is treated as a module, but with a few differences.
Instead of its name being the script name minus a .py extension, its name is __main__.
The top-level script does not get looked up in the .pyc cache, nor compiled and cached there.
Other than that, it's mostly the same: the interpreter compiles your script as a module, builds a types.ModuleType out of it, stores it in sys.modules['__main__'], etc.
Also look at runpy, which explains how both python spam.py and python -m spam work. (As of, I think, 3.4, runpy.run_path should do exactly the same thing as running a script, not just something very similar.) And notice that the docs link to the source, so if you need to look up any specifics of the internals, you can.
The first difference is why you often see this idiom:
if __name__ == '__main__':
    import sys
    main(sys.argv)  # or test() or similar
That allows the same file spam.py to be used as a module (in which case its __name__ will be spam) or as a script (in which case its __name__ will be __main__), with code that you only want to be run in the script case.
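As a concrete illustration (hypothetical file), a one-line spam.py makes the difference visible:
# spam.py
print(__name__)  # 'python spam.py' prints __main__; 'import spam' prints spam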
If you're curious whether standard input to the interactive interpreter is treated the same way as a script, there are a lot more differences there. Most importantly, each statement is compiled and run as a statement with exec, rather than the whole script/module being compiled and run as a module.
Yes, that's essentially what happens. It's the __main__ module. You can see this by running something like the following:
x = 3
import __main__
print(__main__.x)
Whether run as a script file or typed into the interactive interpreter, this will print:
3
I'm trying to run a Python script from a Python program by kicking it off with subprocess (the reason is that the main program has to have exited when the script runs, achieved with a combination of wx.CallAfter and Close). However, when the script runs I get an error on line 1: ImportError: No module named os. That makes me think it's something to do with the PYTHONPATH, but I can run the script just fine from a terminal.
Why can't the script see any core modules when run this way?
Edit:
The line in question is:
wx.CallAfter(subprocess.Popen,'python %s "%s" %s %s'%(os.path.join(BASE_DIR,"updatecopy.py"),BASE_DIR,pos[0],pos[1]),shell=True)
BASE_DIR is just the directory that the script lives in.
subprocess is there because os.exec* has been deprecated, so I wouldn't suggest using that in place of Popen, as someone suggested.
I've seen this issue crop up when running from a frozen process. If that is the case, then you're most likely inheriting a weird environment for the new Python process.
Most frozen scripts will be trying to run from a zip file, in which case it's no wonder that Python can't find anything: it's all trapped in a zip file :)
If this is the situation, then try launching the script with the Python executable that you are using to run the frozen script. It should be able to deal with the special environment.
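As a sketch of that idea, with placeholder values standing in for the BASE_DIR and pos from the question: passing sys.executable and an argument list avoids both the ambient 'python' lookup and shell quoting. Note that in a genuinely frozen app, sys.executable points at the frozen binary, so you may need an explicit interpreter path instead.
import os
import subprocess
import sys

BASE_DIR = os.path.dirname(os.path.abspath(__file__))  # as in the question
pos = (0, 0)  # placeholder window position

# Launch the helper script with the same interpreter as this process,
# using an argument list so no shell quoting is needed.
script = os.path.join(BASE_DIR, "updatecopy.py")
subprocess.Popen([sys.executable, script, BASE_DIR, str(pos[0]), str(pos[1])])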
Maybe you could use os.execv instead of Popen.
From the Python os module docs:
These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions.
(emphasis mine)
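A minimal sketch of that route (script name as in the question, path handling elided): on success the call never returns, because the current process image is replaced.
import os
import sys

# Replace the current process with the child script; nothing after this
# line runs if execv succeeds.
os.execv(sys.executable, [sys.executable, "updatecopy.py"])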