I am having a problem getting an informative traceback when I run a script with the -m flag. I am using the -m flag so that I can properly use relative imports throughout my package. When an error comes up, stdout does tell me the nature of the exception but not the location, such as file and line number.
/usr/bin/python: Error while finding spec for 'bin.load_ref_exps.py' (<class 'AttributeError'>: 'module' object has no attribute '__path__')
I would very much like to be able to run the script directly with a full traceback to quickly debug what is going on.
Any ideas on how to run the script in a way that doesn't break all the package based relative imports and still gives me a full traceback?
Thanks!
If you use -m, you shouldn't specify the .py extension, since you are specifying a module name, not a file per se. See the documentation.
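For example, assuming the layout the error message suggests, a bin/ package containing load_ref_exps.py (an assumption; the question doesn't spell out the layout), the invocation from the package's parent directory would be:

python -m bin.load_ref_exps

With the trailing .py, Python instead treats py as a submodule of load_ref_exps; since load_ref_exps is a plain module with no __path__ attribute, the search fails with the cryptic 'module' object has no attribute '__path__' message before your code ever runs, which is why you get no normal traceback.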
I am new to coding and Python and have read a lot of similar questions about absolute/relative imports of modules but I cannot understand why my cronjob isn't working.
When I run python3 example.py from the terminal, the program runs without issue.
However, when I schedule a cronjob to run "example.py", the program fails to run due to an ImportError. The first time it failed was due to a ModuleNotFoundError, so I reinstalled the module in question (pyautogui), which seemed to solve that error.
I have tried using from . import pyautogui, os, time, smtplib, traceback instead of just import xyz but that brings up "ImportError: attempted relative import with no known parent package".
Can anyone help explain in simple terms what's going on here? Or point me in the direction of reading I can do to understand how terminal/cronjob is attempting to execute my program?
Thanks
Okay. Crontab programs are run under a different environment than the one in the terminal.
The surest way to check this is to run this bash script from crontab:
#!/bin/bash
set > /tmp/set.txt
It will dump the whole environment (probably you only need $PATH though) to "/tmp/set.txt".
Run the same script from terminal, saving to a different file. If you compare the two, you'll notice differences, for example again in the PATH.
And, you've guessed it, Python depends on this environment: which python binary is run, if several are present, depends on PATH, and which modules it can find depends on PYTHONPATH and the chosen interpreter's own sys.path.
In the crontab file, you should notice, near the top, an assignment to both the SHELL variable and the PATH variable. Those are the values used by all following commands.
You should probably be able to set the PATH value to the same value you get from terminal, but keep in mind that you might disrupt the operation of any other script in the same crontab (that is why you should rather make use of /etc/cron.d files).
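For example, a crontab along these lines (the values here are illustrative assumptions; copy the real PATH from the set dump you captured in the terminal):

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

* * * * * /usr/bin/python3 /home/user/example.py >> /tmp/example.log 2>&1

Using an absolute path to the interpreter (/usr/bin/python3 here) sidesteps any remaining PATH ambiguity, and redirecting output to a log file lets you see the traceback that cron would otherwise mail or discard.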
I'm calling another program with a piece of code that looks like this:
import subprocess
lc="/package/bin/program --do stuff"
command_list = lc.split()
ljs=subprocess.Popen(command_list,stdout=subprocess.PIPE)
ljs.communicate()[0]
The string works fine at the UNIX command line and the code works in Python 2.7. But, in Python 3.4, I get an error like this:
File "/package/bin/program", line 2, in <module>
from package import module
ImportError: No module named package
"/package/bin/program" is calling a dependency from another file in the package here, which I think is the core issue. I have calls to other programs that are working fine in 3.4.
What's changed in 3.4 that might be causing this?
(Sorry in advance for the cryptic code - I'm calling company internal tools that I can't expose here).
The problem is that the working directory of the subprocess defaults to the working directory of the calling process. To set a new working directory, pass the cwd argument to Popen.
Here's an example:
subprocess.Popen(['random', '--command'], stdout=subprocess.PIPE, cwd='C:/Path/To/Working/Directory/')
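Applied to the code from the question, it would look something like this (the paths are the asker's placeholders, and /package/bin as the working directory is a guess at what the program expects):

import subprocess

lc = "/package/bin/program --do stuff"
command_list = lc.split()

# run the program from its own directory so its package-relative imports resolve
ljs = subprocess.Popen(command_list, stdout=subprocess.PIPE, cwd="/package/bin")
output = ljs.communicate()[0]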
Comments above have been helpful in exploring the issue, but at the end of the day this seems to be some permissions conflict - adding a sudo -u <user> before the command fixes the issue. Still not clear why Py3 requires this and Py2 doesn't, but perhaps I need to explore the issue more closely with other internal users.
Thanks!
I realized that compiling a Python script fixes the script's path information inside the compiled file.
For example:
I have a Python script at /tmp/src/foo.py which contains a single print statement:
print foo
Now I compile this code and move it to a compiled directory:
python -m compileall -f /tmp/src/foo.py
mv /tmp/src/foo.pyc /tmp/compiled/
Then I run the script, and it gives an error as I expect:
python /tmp/compiled/foo.pyc
Traceback (most recent call last):
File "/tmp/src/foo.py", line 1, in <module> # focus heree
print foo
NameError: name 'foo' is not defined
As you can see, the script's file name appears in the error as it was before compilation (exactly the same as the path I gave to the compile command).
Actually I have no problem with this situation; I am asking because I am just wondering. What is the reason, and is there any way to see the real path in errors?
In my opinion we cannot change the binary file, but maybe we can give a command line parameter to python when running compiled code, or maybe we can add a code segment to the source code?
Thanks
Your question derives from the notion that a compiled version of a Python module can and should be moved under certain conditions. I've actually never heard of such a thing before, so until shown a specification which approves of such a thing, I'd say that this is an abuse and you are lucky to be able to run the .pyc file at all without the .py at its side.
If you think of the .pyc files as mere caches of the compiled versions of the original, then you can easily explain all the phenomena you observed: the path of the original .py file is stored in the .pyc along with everything else coming from that source. If the file is moved, that content of course stays the same and will be used in error messages.
There is no way to see the "real" path in the error message because the location of the .pyc file isn't known anymore after loading it; only its contents are taken into account, and not its location, because combining those two things is a step of compilation. The interpreter won't recompile an already compiled module; it takes it as it is.
Patching the .pyc file to show a different path also does not seem to make sense because that message is for helping you debug the problem. You probably won't debug anything in the .pyc file but only in the .py file. So it seems appropriate to rather have the path of that file in the error message.
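Still, if you want to confirm what is stored, you can unmarshal the code object from the .pyc yourself. A minimal sketch for the CPython 2 .pyc format used in the question (an 8-byte header: 4-byte magic number plus 4-byte timestamp; later versions use longer headers):

import marshal

f = open('/tmp/compiled/foo.pyc', 'rb')
f.read(8)                # skip the magic number and the timestamp
code = marshal.load(f)   # the module's top-level code object
f.close()

print code.co_filename   # prints /tmp/src/foo.py, the pre-compilation path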
Okay, I am new to Python and have been researching this problem, but I can't find anything like it, so I am not sure what is going on.
I am creating a program that involves Sage, and it has a message queue. We have this set up on a development machine, so I know it works, but I wanted to set it up on my own computer so I could get a better understanding of how it all works and make it easier to develop for myself.
To start up Sage, we run a script that calls Sage's main binary and passes it an executable .py file (./sage/sage ./sage_server.py). This creates an error in the sage_server.py file:
Traceback (most recent call last):
File "./sage_server.py", line 23, in <module>
from carrot.messaging import Publisher
ImportError: No module named carrot.messaging
But whenever I run that file directly in the terminal (./sage_server.py), the import works fine, and it isn't until line 27 that there is an error, when it tries to import something from sage.
Does anyone know what would cause the error when it is being called by something else? I am very lost as to what would be causing this.
Sage has its own python, separate from the system libraries. This "carrot" module, whatever it is, must be installed in whatever python ./sage_server.py uses, but not in Sage.
You should be able to use either
[your-sage] -sh
to start up a Sage shell and use easy_install, or you could get whatever carroty package you're using, find its setup.py file, and then run
[your-sage] -python setup.py install
where obviously your-sage is the path to your sage.
Things get a little trickier if the install process isn't setup.py-based.
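Concretely, assuming carrot ships as a source tree with a setup.py (an assumption; adjust if it uses a different installer), and with Sage installed at /path/to/sage:

cd /path/to/carrot-source
/path/to/sage/sage -python setup.py install

After that, ./sage/sage ./sage_server.py should find carrot.messaging in Sage's own site-packages.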
Python came pre-installed on my MacBook and I have been slowly getting acquainted with the language. However, it seems that my configuration of the re library is incorrect, or I simply misunderstand something and things are amiss. Whenever I run a Python script with "import re", I receive the following error:
Traceback (most recent call last):
File "regex.py", line 2, in <module>
import re
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/re.py", line 4, in <module>
# re-compatible interface for the sre matching engine
AttributeError: 'module' object has no attribute 'compile'
What gives!
Pretty mysterious problem, given that line 4 in that file (and many other lines around that line number) is a comment (indeed the error msg itself shows that comment line!-) so even with the worst misconfiguration I'd be hard put to reproduce the problem as given.
Let's try to simplify things and check how they may (or may not) break. Please open a Terminal, mkdir a new empty directory somewhere and cd into it (so we know there's no filename conflict wrt modules etc), at the bash prompt unset PYTHONPATH (so we know for sure that isn't interfering), unset PYTHONSTARTUP (ditto); then type the command:
$ python -c'import re; print re.__file__'
It should emit the line:
/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/re.pyc
does it? If so, then we can keep rooting around to understand what name clash (or whatever) caused your original problem. If the problem persists under such "clean" conditions then your system is jinxed and I would reinstall Mac OS X Leopard if I were in your shoes!