Configure Django's test debugging to print shorter paths

I'm not sure how to approach this, whether it's a Django, Python, or even Terminal solution that I can tweak.
The thing is I'm learning Django following a book (reference here, really like it), and whenever I run the tests I get really long debugging output in the terminal. Obviously there are many stack frames listed one after another, but what started bugging me is that the file paths are very long and they all share the same project folder... which is long by itself, and then all the virtualenv stuff gets added, like this:
Traceback (most recent call last):
File "home/user/code/projects/type_of_projects_like_hobby/my_project_application/this_django_version/virtualenv/lib/python3.6/site-packages/django/db/models/base.py", line 808, in save
force_update=force_update, update_fields=update_fields)
Since the paths take two or more lines, I can't focus on what functions I should be looking at clearly.
I have looked at the verbosity option when calling manage.py test but it doesn't help with the paths. If anyone has an idea on how to ~fix~ go about this issue, it'd be cool.
Thanks guys.

There's really not a way to change the behavior (this is how Python displays tracebacks). But you can pipe the output into something that will reformat it. For example, here's a tool you can pipe traceback output into that will do various types of formatting.
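If piping through a dedicated tool isn't an option, a small filter script can do a cruder version of the same thing. This is just a minimal sketch, not anything Django provides: it assumes the noisy part of every path ends at site-packages/ and simply strips everything before it.

```python
import re
import sys

# Minimal sketch: strip everything up to ".../site-packages/" from
# traceback "File" lines so only the part inside the installed
# package remains.  Adjust the pattern to match your own layout.
PREFIX = re.compile(r'File ".*?/site-packages/')

def shorten(line):
    return PREFIX.sub('File ".../', line)

def main():
    for line in sys.stdin:
        sys.stdout.write(shorten(line))

# Usage (with a call to main() at the bottom of the script):
#   python manage.py test 2>&1 | python shorten_paths.py
```

This leaves your own project paths untouched, so frames worth looking at still stand out from the virtualenv noise.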

Related

shelve module not even creating shelf

I am new to stackoverflow and experimenting with Python, currently just trying tutorial examples. Experienced a wonderful learning curve but got completely stuck with the following (working under Windows 10):
import shelve
s = shelve.open("test")
Traceback (most recent call last):
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\dbm\dumb.py", line 82, in _create
f = _io.open(self._datfile, 'r', encoding="Latin-1")
FileNotFoundError: [Errno 2] No such file or directory: 'test.dat'
During handling of the above exception, another exception occurred:
It would be great to get some help to resolve this.
In Python 3, by default, shelve.open tries to open an existing shelf for reading. You have to pass an explicit flag to create a new shelf if it doesn't already exist.
s = shelve.open("test", "c")
This is in contrast to Python 2, where the default flag was "c" instead of "r".
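A quick self-contained sketch of the fix, using a scratch directory so no shelf exists beforehand (the filename "test" matches the question; everything else is illustrative):

```python
import os
import shelve
import tempfile

# Work in a scratch directory to be sure no shelf file is present.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "test")

with shelve.open(path, "c") as s:   # "c" = create if missing
    s["answer"] = 42

with shelve.open(path, "r") as s:   # "r" = read-only, must already exist
    value = s["answer"]

print(value)  # 42
```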
How to read an error message
In general, error messages do their best to tell you what's wrong. In the case of Python, you'll typically start at the bottom; here
No such file or directory: 'test.dat'
tells you exactly why the error's being thrown: test.dat doesn't exist.
Next, read upward through the stack trace until you get to something you either understand or wrote recently, and try to make sense of the error message from there.
How to troubleshoot an error
Is the stated problem intelligible?
Yes, we asked the software to do something with a (.dat?) file called "test", so we at least know what the hell the error message is talking about.
Do we agree with the underlying premise of the error?
Specifically, does it make sense that it should matter if test.dat exists or not? Chepner covers this.
Do we agree with the specific problem as stated?
For example, it wouldn't be weird at all to get such an error message when there was in fact such a file. Then we would have a more specific question: "Why can't the software find the file?" That's progress.
(Usually the answer would be either "Because it's looking in the wrong place" or "Because it doesn't have permission to access that file".)
Read the documentation for the tools and functions in question.
How can we validate either our own understanding of the situation, or the situation described in the error message?
Depending on the context this may involve some trial and error of re-writing our code to:
- print out (log) its state during execution
- do something similar but different from what it was doing, which we're more certain should work
- do something similar but different from what it was doing, which we're more certain should not work
Ask for help.
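As a tiny illustration of the "print out (log) its state" step, this hypothetical load_record function logs what it is about to do and what it knows just before failing, so the traceback arrives with context (the function and its data are made up for the example):

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")

def load_record(name):
    # Hypothetical function: log the state we care about before the
    # lookup that might fail, so a failure comes with context.
    logging.debug("looking for %r", name)
    data = {"test": 1}
    if name not in data:
        logging.debug("known keys: %s", sorted(data))
        raise KeyError(name)
    return data[name]

result = load_record("test")
```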
It seems shelve sometimes falls back to the dumb dbm backend. You can use the dbm module directly instead:
import dbm

# "n" always creates a new, empty database, open for read and write.
with dbm.open("test", "n") as db:
    db[b"key"] = b"value"  # keys and values are bytes

Why is my "open" call only failing sometimes?

Background:
I'm running my Python program under PyCharm on Windows 10 with three different run configurations.
All seem to run through the bulk of the program fine, doing the logic work without errors.
At the end of the program, there is an attempt to open a file handle, which works on two of the run configurations, but not one, despite the different configurations not affecting the parameters of this call.
Details:
This is the piece of code that errors in one of the configurations.
f = open(global_args[2], "w")
# global_args[2] is always 'new_output.xml'. I've thoroughly checked this
The error is below.
Traceback (most recent call last):
File "C:/Projects/obfuscated/trunk/PythonXMLGenerator/generate_xml.py", line 270, in <module>
instance.main()
File "C:/Projects/obfuscated/trunk/PythonXMLGenerator/generate_xml.py", line 235, in main
f = open(global_args[2], "w")
PermissionError: [Errno 13] Permission denied: 'new_output.xml'
Just for extra information, although I have a feeling it's not relevant, here are two of the run configurations.
//Not working
1.0 new_output.xml localdb (LocalDB)\MSSQLLocalDB (2) x x x "0:Upload,1:Modify,2:Delete,3:Download,4:ApplyTemplate,5:RemoveTemplate"
//Working
1.0 new_output.xml mysql localhost (2) obfuscated obfuscated obfuscated "0:Upload,1:Modify,2:Delete,3:Download,4:ApplyTemplate,5:RemoveTemplate"
It's maybe worth noting that I am closing the file handle with f.close() after trying to open it.
Recap:
Although the error is happening on a line which shouldn't rely on the context of the wider program, the context nonetheless seems to have an effect and I can't figure out why.
I believe there shouldn't be any issues with write permission in general, as it does work for 2 of the 3 configurations.
Anyone have any thoughts?
P.S. If any more details are needed, I can provide them. Given the confusing nature of this problem, I'm not sure exactly what is necessary.
Not a code problem as it turns out.
For this particular broken run configuration, PyCharm was not setting a working directory.
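For anyone hitting the same thing: a relative filename like new_output.xml is resolved against the current working directory, so printing that directory is a quick way to see what a run configuration is actually doing. A minimal sketch:

```python
import os

# A relative filename is resolved against the current working
# directory, which an IDE run configuration may or may not set.
# Printing it makes the difference between configurations visible.
print("cwd:", os.getcwd())

# Resolving the path explicitly shows where the write would land.
out_path = os.path.abspath("new_output.xml")
print("would write to:", out_path)
```

Using an absolute path built this way (or one anchored to the script's own directory) removes the dependence on the working directory entirely.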

Opening TOPCAT through the command line via a Python script

I'm trying to incorporate the program TOPCAT (which has really amazing plotting capabilities) into a python script I have written. The problem is that when I make a call to the program it tells me:
OSError: [Errno 2] No such file or directory
Here's some background to the problem:
1) The way I usually open up topcat through the command line is through the alias I have created:
alias topcat='java -jar /home/username/topcat/topcat-full.jar'
2) If I'd like to open TOPCAT with a file in mind (let's use a csv file since that's what I'd like it to work with), I would type this into the command line:
topcat -f csv /home/username/path_to_csv_file/file.csv
And that also works just fine. The problem comes about when I try to call these commands while in my python script. I've tried both subprocess.call and os.system, and they don't seem to know of the existence of the topcat alias for some reason. Even doing a simple call like:
import subprocess
subprocess.call(['topcat'])
doesn't work... However, I can get topcat to open if I run this:
import subprocess
subprocess.call(['java','-jar','/home/username/topcat/topcat-full.jar'])
The problem with this is that it simply opens the program, and doesn't allow for me to tell it which file to take in and what type it happens to be.
Could somebody tell me what I'm doing incorrectly here? I've also looked into the shell=True option and it doesn't seem to be doing any better.
Okay - so I'm really excited that I figured it out. What worked before was:
import subprocess
subprocess.call(['java','-jar','/home/username/topcat/topcat-full.jar'])
It turns out it can take more command-line arguments. This is what eventually got it to open with the correct comma-separated file through the command line:
import subprocess
subprocess.call(['java','-jar','/home/username/topcat/topcat-full.jar','-f','csv','/full/path/to/data.csv'])
Hopefully this is enough information to help other people who come across this specific task.
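One way to keep that argument list manageable is a small helper that builds the command; the jar path below is the one from the question, and the helper name is just illustrative:

```python
import subprocess

JAR = "/home/username/topcat/topcat-full.jar"  # path from the question

def topcat_cmd(data_file=None, fmt="csv", jar=JAR):
    """Build the argument list for launching TOPCAT via java -jar."""
    cmd = ["java", "-jar", jar]
    if data_file is not None:
        cmd += ["-f", fmt, data_file]
    return cmd

# Launch it (left commented out: needs Java and the jar installed):
# subprocess.call(topcat_cmd("/full/path/to/data.csv"))
```

Note that shell aliases like topcat aren't visible to subprocess, which is why spelling out the java -jar invocation is necessary in the first place.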
If anyone else comes across this, pystilts might be of interest.
https://github.com/njcuk9999/pystilts
edit: It is a native Python wrapper of TOPCAT/STILTS.

Background process in Windows

I need to run a Python program in the background. I give the script one input file, and the code processes that file and creates a new output file. Now if I change the input file's content, I don't want to run the code again by hand. It should run continuously in the background and regenerate the output file. Please, if someone knows the answer for this, let me know.
thank you
Basically, you have to set up a so-called FileWatcher, i.e. some mechanism which looks out for changes in a file.
There are several techniques for watching file/directory changes in python. Have a look at this question: Monitoring contents of files/directories?. Another link is here, this is about directory changes but file changes are handled in a similar way. You could also google for "watch file changes python" in order to get a lot of answers :)
Note: If you're programming on Windows, you should probably implement your program as a Windows service; look here for how to do that.
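For a rough idea of what a file watcher does, here is a minimal polling sketch using only the standard library (real watchers such as watchdog use OS notifications instead of a sleep loop; the function names here are illustrative):

```python
import os
import time

def file_changed(path, last_mtime):
    """Return (changed, current_mtime) for the file at path."""
    mtime = os.stat(path).st_mtime
    return mtime != last_mtime, mtime

def watch(path, on_change, interval=1.0):
    """Poll the file and call on_change(path) whenever it changes."""
    last = os.stat(path).st_mtime
    while True:
        time.sleep(interval)
        changed, last = file_changed(path, last)
        if changed:
            on_change(path)

# Usage sketch: regenerate the output whenever the input changes.
# watch("input.txt", lambda p: process(p))   # process() is your code
```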

Django SQLite testing oddity: Different path of execution?

This is merely a theory that I would like to figure out based on others feedback and perhaps similar experiences.
I've been using MySQL for running tests, but of course an in-memory SQLite database is much faster. However, I seem to have run into some problems.
When DATABASE_ENGINE is set to use django.db.backends.sqlite3 and I run manage.py test, the output is not as hoped:
(Removed most lines, but pointing out interesting points of failure)
$ python manage.py test
Traceback (most recent call last):
File "manage.py", line 12, in <module>
execute_manager(settings)
File "/Users/bartekc/.virtualenvs/xx/lib/python2.6/site-packages/django/core/management/__init__.py", line 438, in execute_manager
utility.execute()
File "/Users/bartekc/domains/xx/xx/associates/yy/models.py", line 51, in <module>
class AcvTripIncentive(models.Model):
# First interesting failure
File "/Users/bartekc/domains/xx/xx/associates/yy/models.py", line 55, in AcvTripIncentive
trip = models.OneToOneField(Trip, limit_choices_to={'sites' : Site.objects.get(name='ZZ'), 'is_active' : True,})
# Next interesting failure
File "/Users/bartekc/domains/xx/xx/associates/yyz/models.py", line 252, in <module>
current_site = Site.objects.get_current()
There are multiple failures like this but just pointing out a couple. The problem is obvious. The Site model has no actual data, but the files contain code that try to fetch the current, or specific instances under the Site model.
Now, I can think of an easy solution: That OneToOneField should be switched to use a function with limit_choices_to, and the second one the same. The functions are then called when required, not on initial scanning of the file by Django.
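The underlying issue is plain Python: anything written directly in a class body or at module level runs at import time. A small sketch of the difference between evaluating immediately and deferring through a callable (lookup here stands in for Site.objects.get; note that Django's limit_choices_to also accepts a callable, which is what makes the deferred form work there):

```python
# Sketch of the import-time-evaluation problem in plain Python.
TABLE = {}  # stands in for the Site table, empty at import time

def lookup():
    return TABLE["current"]

# Evaluated immediately, at "import time" -- this would fail while
# TABLE is still empty:
#   choices = {"site": lookup()}        # KeyError here

# Deferred: store the callable itself, call it only when needed.
choices = {"site": lookup}

TABLE["current"] = "example.com"        # data loaded later (like fixtures)
resolved = {k: v() for k, v in choices.items()}
print(resolved)
```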
However, my actual question is: why does this happen with SQLite and not MySQL? Is there a different way the two database engines process tests? I wouldn't think so, since Python is doing all the compiling of the models.
What exactly is happening here?
Cheers.
Is there some reason you are not mocking access to the database? Your UT boundary gets enormously wide when you add a database (no matter what database) to the mixture. It starts to look more like an integration test rather than a unit test.
Are you loading your Site objects from a fixture?
If so, perhaps you are stumbling upon a known issue with MySQL and transactions. Do a text search for "MySQL and Fixtures" on this page: http://docs.djangoproject.com/en/dev/ref/django-admin/?from=olddocs
