Eclipse can run a Python project rather than just one .py file. Is it possible to run an entire project from a Python 3.x shell? I looked into it a little, but I didn't really find a way. I tried just running the .py file containing the main using exec(open('bla/blah/projMain.py').read()) like you would any Python file. All of my modules (including the main) are in one package, but when I ran the main I got "no module named 'blah'" (the package it is in). Also, as a side note, there is in fact an __init__.py and even a __pycache__ directory.
Maybe I didn't structure it correctly with Eclipse (or rather maybe Eclipse didn't structure it properly), but Eclipse can run it, so how can I with a Python 3.4.1 shell? Do I have to put something in __init__.py, perhaps, and then run that file?
It depends on what your file looks like -- if you have an if __name__ == "__main__": do_whatever(), then an import will not do_whatever() because the name of the imported module will not be "__main__". (It will be whatever the name of the module is).
However, if it is just a script without the conditional, you can just import the module and it will be run. Python needs to know where the module is though, so if it is not in your path, you will have to use relative imports, as documented here.
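To make the distinction concrete, here is a minimal sketch (the module name guarded.py is hypothetical):

```python
# guarded.py (hypothetical file name)
def do_whatever():
    print("running")

if __name__ == "__main__":
    # This branch runs only when the file is executed directly
    # (python guarded.py); `import guarded` skips it because the
    # module's __name__ is then "guarded", not "__main__".
    do_whatever()
```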
Based on the current information, I would suggest running it this way on OS X:
1) Bring up the Terminal app
2) cd to the location where bla lives
3) run python bla/blah/projMain.py
Show us the stack trace if the above fails.
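Since the main lives inside a package, another option is to run it as a module with -m from the directory containing bla. A sketch in a scratch directory, treating bla and blah as packages (names taken from the question; python3 assumed on PATH):

```shell
# Recreate the layout from the question, then run the main module
# as part of the package with -m; imports then resolve relative to
# the current directory, which -m puts on sys.path.
cd "$(mktemp -d)"
mkdir -p bla/blah
touch bla/__init__.py bla/blah/__init__.py
printf 'print("ok")\n' > bla/blah/projMain.py
python3 -m bla.blah.projMain
```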
Related
I am new to compiling Python, and I am trying to make compiled scripts work with Python 3.9 and Miniconda installed.
Here is my main.py in the root folder of my project.
from say_hello import just_say_hello
print('Hi')
just_say_hello()
Here is say_hello.py
def just_say_hello():
print('Hello')
The files are in the same folder. In that folder, I run the following command-line statement to compile:
python -m compileall .
Afterwards, I run the following command-line statements to run it:
cd __pycache__
python .\main.cpython-39.pyc
This results in the following error:
ModuleNotFoundError: No module named 'say_hello'
Why can this module not be found when running the script from the __pycache__ folder? Both compiled script files are in there, so it should not be such a big problem. When running the normal original script, everything works fine. When using compileall, suddenly things break. However, I really need to compile.
How can I compile this Python script in such a way that I can run it without such issues?
I tried several things. For example, I renamed the .pyc files to normal Python file names. After running
python main.py
I received the following error:
ValueError: source code string cannot contain null bytes
So I really need a solution to run my compiled multi file script.
Python is an interpreted language, which means you don't have to compile scripts to run them (even though they are still compiled, to bytecode rather than machine code). When you run a Python file, it is compiled to bytecode and then executed by the interpreter. Modules imported by the file you run are also compiled, and the result is cached as .pyc files so they don't have to be recompiled the next time they are imported; that is what .pyc files are. I would not recommend deleting them. I hope this helps.
Module import is one of many things that are wrong with Python, and just like significant whitespace, 'Python' seems proud of it.
Having said that, I usually solve my problems with import by adding everything to the PYTHONPATH. Either in the environment variable (depends on OS) or in code itself with sys.path.append('..')
Expect new problems when you reboot or move your application to a different machine or even a different directory. Or when you want to make your application cross-platform.
Test carefully for these scenarios.
PS. There is also a somewhat older answer that explains different ways of importing here: How can I import other Python files?
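As a sketch of one way to reduce those problems: pin an absolute path instead of a relative one (the parent-directory choice here is just an example):

```python
import os
import sys

# sys.path.append('..') records a *relative* entry that is resolved
# against the current working directory at import time, so it breaks
# after os.chdir() or when the app is launched from elsewhere.
# Resolving it to an absolute path once avoids that.
project_root = os.path.abspath('..')
if project_root not in sys.path:
    sys.path.insert(0, project_root)
```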
When you compile with compileall, the files in __pycache__ get interpreter-tagged names (e.g. say_hello.cpython-39.pyc), which the import system does not recognize as the module say_hello.
All you need to do to make the compiled files work is move them into the same directory where the uncompiled versions are.
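If you need to ship only compiled files, compileall's -b option is one way around this. A sketch in a scratch directory, recreating the two files from the question (python3 assumed on PATH):

```shell
# -b writes legacy-named .pyc files (say_hello.pyc) next to the
# sources instead of version-tagged names inside __pycache__, so the
# importer can resolve `import say_hello` from the .pyc alone.
cd "$(mktemp -d)"
printf 'def just_say_hello():\n    print("Hello")\n' > say_hello.py
printf 'from say_hello import just_say_hello\nprint("Hi")\njust_say_hello()\n' > main.py
python3 -m compileall -b .
mkdir -p dist && cp ./*.pyc dist/
cd dist && python3 main.pyc   # prints Hi then Hello
```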
I recently asked this question about importing an arbitrary number of modules in Python. I received two good answers. Both worked when I programmed it in Spyder.
Today I ran the script from my terminal as a test, since I'm planning to move my code to my server. But this time the script crashed with this traceback:
File "evaluation.py", line 27, in __init__
self.solvers.append( __import__(file_name[:-3]) ) #cut away .py
ImportError: No module named 'v00'
The file architecture looks like this:
-evaluation.py
-evaluation
-v00.py
-v01.py
The code in evaluation.py which causes trouble is this one:
os.chdir('evaluation')
for file_name in glob.glob("*.py"):
    self.solvers.append(__import__(file_name[:-3]))  # cut away .py
for idx, solver in enumerate(self.solvers):
    self.dqn.append(solver.DQNSolver())
Why does this work in Spyder but not in the terminal? Both use Python 3.5, and I double-checked that both are in the folder "evaluation" when executing the offending line.
The typical way to handle this would be to turn the folder into a package by adding an empty __init__.py file and then importing from the package with import evaluation.v00 (or the equivalent __import__ call). But you may run into problems because your main script has the same name as the package, so I would suggest renaming one or the other:
-evaluationscript.py
-evaluation
-__init__.py (empty file)
-v00.py
-v01.py
And then you probably need to use importlib.import_module instead of __import__ to populate solvers with the actual submodules (__import__('evaluation.v00') returns the top-level package, not the submodule).
I'm not familiar with Spyder, but if the same code is working there, then it may be adding the evaluation folder to the search path, either with the PYTHONPATH env var or by modifying sys.path.
When you run a script, the path of the script is added to the default search path for module imports, but changing the folder using os.chdir won't affect that search path.
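A sketch of that approach with importlib (assumes evaluation/ is a package with an __init__.py, as in the layout above):

```python
import glob
import importlib
import os.path

def load_solver_modules(package="evaluation"):
    """Import every module in the package directory and return them.

    importlib.import_module("evaluation.v00") returns the v00 module
    itself, whereas __import__("evaluation.v00") returns the top-level
    evaluation package.
    """
    modules = []
    for path in sorted(glob.glob(os.path.join(package, "*.py"))):
        name = os.path.splitext(os.path.basename(path))[0]
        if name == "__init__":
            continue
        modules.append(importlib.import_module(f"{package}.{name}"))
    return modules
```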
I wrote a small python program and again I am struggling with producing a good structure.
A few days ago I read a blog post and many other websites which advise against from a import b imports and recommend always using import a.b "project absolute" imports (the whole path from the project root to what I want to import). That as background.
I included a __main__.py so that I can run my program with python -m <directory>, which was another recommendation of someone on stackoverflow in one of the hundreds of python import questions. It is supposed to help keeping code runnable and testable with the same import structure, which was a problem in another project of mine.
Now what I want is that, from anywhere in my system, I can run python -m <dir of my code>, and not only from one directory up from the RSTCitations directory.
How can I achieve that, without:
python path manipulations (which are a dirty hack)
changing my imports somehow and getting a not recommended import structure
doing other dark magic to my code
I want to stick to best practices in organizing my code but still want it to be runnable from wherever I am in the terminal.
Example of fail
When I run the program as described from another directory completely unrelated to my program, I get for example the following error:
/home/user/development/anaconda3/bin/python: No module named /home/user/development/rst-citations-to-raw-latex/RSTCitations
However the path is correct. That is exactly where the code is.
You can :
install your program with pip ( see Installing Python packages from local file system folder with pip )
put your module in the python import path (and/or edit PYTHONPATH in the environment for the users that need to use it)
If you don't need other users to 'import' your library but just use it as a standalone program, you can also just put a symlink/script to your program in a directory which is in your PATH, making it runnable from anywhere.
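For the pip option, a minimal packaging sketch; the package directory name RSTCitations comes from the question, while the project name and version here are assumptions:

```toml
# pyproject.toml at the project root (next to the RSTCitations directory)
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "rst-citations-to-raw-latex"   # assumed name
version = "0.1.0"

[tool.setuptools]
packages = ["RSTCitations"]
```

After pip install -e . in that directory, python -m RSTCitations works from any location, because the package (with its __main__.py) is then on the installed search path.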
I am pretty new to Python. Currently I am trying out PyCharm and I am encountering some weird behavior that I can't explain when I run tests.
The project I am currently working on is located in a folder called PythonPlayground. This folder contains some subdirectories. Every folder contains an __init__.py file. Some of the folders contain nose tests.
When I run the tests with the nosetest runner from the command line inside the project directory, I have to put "PythonPlayground" in front of all my local imports. E.g. when importing the module called "model" in the folder "ui" I have to import it like this:
from PythonPlayground.ui.model import *
But when I run the tests from inside Pycharm, I have to remove the leading "PythonPlayground" again, otherwise the tests don't work. Like this:
from ui.model import *
I am also trying out the mock framework, and for some reason this framework always needs the complete name of the module (including "PythonPlayground"). It doesn't matter whether I run the tests from command line or from inside PyCharm:
with patch('PythonPlayground.ui.models.User') as mock:
Could somebody explain the difference in behavior to me? And what is the correct behavior?
I think it happens because PyCharm has its own run configuration for the interpreter, with its own version of the sys.path entries, in which your project's root is set one level below the PythonPlayground dir.
You can find the interpreter preferences in PyCharm for your project and set the proper top level there.
PS. I have the same problem, but in Eclipse + PyDev.
One of my Python scripts runs in interactive mode but fails when run from the command line. The difference is that when run from the command line, it imports modules from a bad .egg file, and when run interactively it uses my fixed (unzipped) version in the current directory.
My question is two-fold: a) why does Python load modules differently when run from these locations, and b) what are my options to work around it?
I'm not sure what you mean by running the script in interactive mode, so I can't say exactly. But the first place Python looks for modules (sys.path[0]) in interactive mode is the current directory (even calling os.chdir() will affect imports), while for a script it's the directory where the script is located (derived from sys.argv[0]). Note that they are effectively the same when the script is run from the directory where it's located, but they can differ in other cases. Hope this helps.
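A quick way to see the difference (a sketch; run the same lines once interactively and once as a script):

```python
import sys

# Interactively, sys.path[0] is '' (meaning the current working
# directory); when this file is run as a script, it is instead the
# directory containing the script.
print(repr(sys.path[0]))
```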
On UNIX systems and Mac OS-X:
Do you have a ~/.python-eggs directory?
OS independent:
Are you sure that you use the same Python instance in both cases?
Can you print sys.path in each case and see which package directory comes first on your module search path?
a) why does Python load modules differently when run from these locations
b) what are my options to work around it?
Check your PYTHONPATH environment variable. When Python imports a module, it searches those directories. One way around your problem is to add your local folder ("the fixed (unzipped) version in the current directory") to the beginning of PYTHONPATH, so that Python finds it first.
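Equivalently from inside the script, a sketch that prepends the current directory so the unzipped copy wins over the .egg:

```python
import os
import sys

# Assumption: the fixed, unzipped package lives in the current
# working directory. Putting that directory first on sys.path makes
# it shadow the bad .egg found later on the search path.
local_dir = os.getcwd()
if local_dir in sys.path:
    sys.path.remove(local_dir)
sys.path.insert(0, local_dir)
```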
This works for me:
import sys
sys.path[0] = ''  # '' means the current working directory; it is searched first