Django matching query does not exist - python

I was just checking whether a change I had made to my models had taken effect when I started to get this for some (but not all) of my models. I've never seen this before, and I'm fairly certain I've had no issue querying these models in the past.
>>> record = Record.objects.get(id=1)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/alittlesquid/grocerygod/fratgroceries/ggenv/local/lib/python2.7/site-packages/django/db/models/manager.py", line 143, in get
return self.get_query_set().get(*args, **kwargs)
File "/usr/local/alittlesquid/grocerygod/fratgroceries/ggenv/local/lib/python2.7/site- packages/django/db/models/query.py", line 404, in get
self.model._meta.object_name)
DoesNotExist: Record matching query does not exist.
After more digging, I discovered that a query for all records, Record.objects.all(), works as expected. Can anyone shed light on why this is happening to some of my models? A fix would also be tremendously helpful. Thanks.

There is probably no Record with an id of 1 (maybe you meant pk?). You can easily verify this by running Record.objects.values("id") and checking the output manually.
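For illustration, a short sketch (the myapp import path is an assumption, not from the question) of a few ways to confirm whether the row exists and to handle the missing-row case without an unhandled DoesNotExist:

from myapp.models import Record  # assumed import path for the model in the question

# List every id actually present in the table.
print(list(Record.objects.values_list("id", flat=True)))

# Cheap existence check before fetching.
if Record.objects.filter(id=1).exists():
    record = Record.objects.get(id=1)
else:
    record = None  # or create it, log it, etc.

# Or catch the exception the ORM raises for a missing row.
try:
    record = Record.objects.get(id=1)
except Record.DoesNotExist:
    record = None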

Related

Code for gensim Word2vec as an HTTP service 'KeyedVectors' Attribute error

I am using the w2v_server_googlenews code from the word2vec HTTP server described at https://rare-technologies.com/word2vec-tutorial/#bonus_app. I changed the loaded file to a set of vectors trained with the original C version of word2vec. I load the file with
gensim.models.KeyedVectors.load_word2vec_format(fname, binary=True)
and it seems to load without problems. But when I test the HTTP service with, let's say
curl 'http://127.0.0.1/most_similar?positive%5B%5D=woman&positive%5B%5D=king&negative%5B%5D=man'
I get back an empty result containing only the execution time.
{"taken": 0.0003361701965332031, "similars": [], "success": 1}
I put a traceback.print_exc() in the except clause of the relevant method, in this case def most_similar(self, *args, **kwargs), and I got:
Traceback (most recent call last):
File "./w2v_server.py", line 114, in most_similar
topn=5)
File "/usr/local/lib/python2.7/dist-packages/gensim/models/keyedvectors.py", line 304, in most_similar
self.init_sims()
File "/usr/local/lib/python2.7/dist-packages/gensim/models/keyedvectors.py", line 817, in init_sims
self.syn0norm = (self.syn0 / sqrt((self.syn0 ** 2).sum(-1))[..., newaxis]).astype(REAL)
AttributeError: 'KeyedVectors' object has no attribute 'syn0'
Any idea why this might happen?
Note: I use Python 2.7 and installed gensim using pip, which gave me gensim 2.1.0.
FYI, that demo code was based on gensim 0.12.3 (from 2015, as listed in its requirements.txt) and would need updating to work with the latest gensim.
It might be sufficient to add a line to w2v_server.py at line 70 (just after the load_word2vec_format()), to force the creation of the needed syn0norm property (which in older gensims was auto-created on load), before deleting the raw syn0 values. Specifically:
self.model.init_sims(replace=True)
(You would leave out the replace=True if you were going to do operations other than most_similar() that might require the raw vectors.)
If this works to fix the problem for you, a pull-request to the w2v_server_googlenews repo would be favorably received!
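For illustration, a minimal standalone sketch of what that added line does (the file name is a placeholder; in w2v_server.py itself the raw syn0 vectors are discarded after loading, which is why init_sims() must run first):

import gensim

fname = 'GoogleNews-vectors-negative300.bin'  # placeholder path; binary C-format vectors

# Load the vectors exactly as the question does.
model = gensim.models.KeyedVectors.load_word2vec_format(fname, binary=True)

# In gensim 2.x the unit-normalized vectors (syn0norm) are no longer built
# automatically on load, so build them explicitly. replace=True overwrites the
# raw vectors to save memory; leave it out if other operations still need them.
model.init_sims(replace=True)

print(model.most_similar(positive=['woman', 'king'], negative=['man'], topn=5))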

Error calling Python module function in MySQL Workbench

I'm kind of at my wits' end here, and so far I have had no feedback from the MySQL Workbench bug reporting site, so I thought I'd throw this question/problem out to more sites.
I'm attempting to migrate from an MSSQL server on a Windows Server 2003 machine to a MySQL server running on a CentOS 6.5 VM. I can connect to the source and target databases, select the schemata, and the wizard runs through one pass of retrieving tables. After this the process fails and throws the following errors:
Traceback (most recent call last):
File "/usr/lib64/mysql-workbench/modules/db_mssql_grt.py", line 409, in reverseEngineer
reverseEngineerProcedures(connection, schema)
File "/usr/lib64/mysql-workbench/modules/db_mssql_grt.py", line 1016, in reverseEngineerProcedures
for idx, (proc_count, proc_name, proc_definition) in enumerate(cursor):
MemoryError
Traceback (most recent call last):
File "/usr/share/mysql-workbench/libraries/workbench/wizard_progress_page_widget.py", line 192, in thread_work
self.func()
File "/usr/lib64/mysql-workbench/modules/migration_schema_selection.py", line 160, in task_reveng
self.main.plan.migrationSource.reverseEngineer()
File "/usr/lib64/mysql-workbench/modules/migration.py", line 353, in reverseEngineer
self.state.sourceCatalog = self._rev_eng_module.reverseEngineer(self.connection, self.selectedCatalogName, self.selectedSchemataNames, self.state.applicationData)
SystemError: MemoryError(""): error calling Python module function DbMssqlRE.reverseEngineer
ERROR: Reverse engineer selected schemata: MemoryError(""): error calling Python module function DbMssqlRE.reverseEngineer
Failed
I initially thought this was a memory error, so I upped the memory on the box to 16 GiB. The error also occurs with databases of any size; I've tried very small ones with hardly any tables.
Any thoughts? Thanks for looking.
Just in case anyone else runs into this: I had the same problem and fixed it by getting rid of non-ASCII characters in schemas, tables... basically all MSSQL objects. This was complicated by the fact that I had SQL# (www.sqlsharp.com) installed, which adds a number of functions and stored procedures under a schema called SQL#. You can remove it with this command:
EXEC SQL#.SQLsharp_Uninstall
Once you get rid of non-ASCII chars, the migration works.
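Not part of the original answer, but as a rough sketch of how to hunt for offending names up front (pyodbc and the connection-string values are assumptions), the following lists MSSQL schema/object names that contain non-ASCII characters:

import pyodbc

# Placeholder connection string; adjust driver, server, database and credentials.
conn = pyodbc.connect(
    'DRIVER={SQL Server};SERVER=your_server;DATABASE=your_db;UID=user;PWD=password')
cursor = conn.cursor()
cursor.execute("""
    SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON o.schema_id = s.schema_id
""")

for schema_name, object_name, type_desc in cursor.fetchall():
    full_name = u'%s.%s' % (schema_name, object_name)
    if any(ord(ch) > 127 for ch in full_name):  # anything outside plain ASCII
        print('%-25s %s' % (type_desc, full_name))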
The OP (I assume) closed their bug report with this message:
[...] I figured out a work around, or the flaw in the system perhaps. Turns out that Null values were not allowed inside the Datetime fields when doing a migration. I turned every Datetime field in my database to a default value and migrated it successfully after that.

Contact creation error using python-gdata API

I have a question regarding contact creation using the Python Google Data API.
I am trying the contact-creation example in Python, exactly as it appears on the documentation page (https://developers.google.com/google-apps/contacts/v3/#creating_contacts).
So, I created the client as follows:
import gdata.client
import gdata.contacts.client

email = '<my gmail uid>'
password = '<my gmail pwd>'
gd_client = gdata.contacts.client.ContactsClient(source='GoogleInc-ContactsPythonSample-1')
try:
    gd_client.ClientLogin(email, password, gd_client.source)
except gdata.client.BadAuthentication:
    print 'Invalid user credentials given.'
    gd_client = None
Then I executed the function using:
create_contact(gd_client)
What I get from this call is:
Traceback (most recent call last):
File "<ipython console>", line 1, in <module>
File "<ipython console>", line 23, in create_contact
AttributeError: 'module' object has no attribute 'PostCode'
So I want to ask whether I am doing something wrong, whether this is a known bug, or whether the documentation is simply outdated.
Thanks.
P.S. A small comment: I think a better wrapping of the Google Data API in the Python library would be useful. I spent significant time finding, within the API implementation, which fields should be set (directly!) and which classes should be used to assign them.
OK, it seems that I can answer my own question, which is partly a duplicate of
how to create google contact?
It turns out that the sample code in the documentation is invalid (thanks, Google):
PostCode should be Postcode, and the instant messaging address is also incorrect.
After fixing those, it completes successfully.
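As a small illustration of the renamed class (the value is a placeholder; Postcode is the spelling the library actually ships):

import gdata.data

postcode = gdata.data.Postcode(text='98052')  # works: class is spelled Postcode
# gdata.data.PostCode(text='98052')           # as in the docs: AttributeError, no attribute 'PostCode'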

TypeError with PAMIE

I am getting a TypeError with a very simple script using PAMIE, and I'm not sure what to do. I found an answer suggesting that the underlying library, pywin32, might not set a self argument for this particular method (getElementsByTagName), but I don't know for sure, as I don't know where to find its definition.
from PAM30 import PAMIE
ie = PAMIE()
ie.navigate('google.com')
ie.getButtons()
ie.quit()
print 'done'
The error is:
Traceback (most recent call last):
File "c:\pamie1.py", line 1, in <module>
from PAM30 import PAMIE
File "C:\Python27\Lib\site-packages\PAM30.py", line 678, in getButtons
return self.getElementsList("input", filter)
File "C:\Python27\Lib\site-packages\PAM30.py", line 939, in getElementsList
elements = self._ie.Document.getElementsByTagName(tag)
TypeError: getElementsByTagName() takes exactly 1 argument (2 given)
Here's the offending line in PAM30:
elements = self._ie.Document.getElementsByTagName(tag)
where _ie is
self._ie = win32com.client.dynamic.Dispatch('InternetExplorer.Application')
I'm using Windows 7 x64 with Python 2.7 32-bit.
sourceforge bug link
"Workaround" seems to be enable Compatibility View (Tools > Compatibility
View settings > Display all websites in Compatibility View).
it is a bug of IE.
Workaround: in PAM30.py, change
elements = self._ie.Document.getElementsByTagName(tag)
to
elements = self._ie.Document.body.all.tags(tag)
This will work without the need to use Compatibility View!
Modify this line:
elements = self._ie.Document.getElementsByTagName(tag)
to
elements = self._ie.Document.Body.getElementsByTagName(tag)

Does sending a dictionary through a multiprocessing.Queue mutate it somehow?

I have a setup where I send a dictionary through a multiprocessing.Queue and do some stuff with it. I was getting an odd "dictionary changed size during iteration" error when I wasn't changing anything in the dictionary. Here's the traceback, although it's not terribly helpful:
Traceback (most recent call last):
File "/usr/lib/python2.6/multiprocessing/queues.py", line 242, in _feed
send(obj)
RuntimeError: dictionary changed size during iteration
So I tried changing the dictionary to an immutable dictionary to see where it was getting altered. Here's the traceback I got:
Traceback (most recent call last):
File "/home/jason/src/interface_dev/jiva_interface/jiva_interface/delta.py", line 54, in main
msg = self.recv()
File "/home/jason/src/interface_dev/jiva_interface/jiva_interface/process/__init__.py", line 65, in recv
return self.inqueue.get(timeout=timeout)
File "/usr/lib/python2.6/multiprocessing/queues.py", line 91, in get
res = self._recv()
File "build/bdist.linux-i686/egg/pysistence/persistent_dict.py", line 22, in not_implemented_method
raise NotImplementedError, 'Cannot set values in a PDict'
NotImplementedError: Cannot set values in a PDict
This is a bit odd, because as far as I can tell, I'm not doing anything other than getting it from the queue. Could someone shed some light on what's happening here?
There was a bug fixed quite recently where a garbage collection could change the size of a dictionary that contained weak references, which could trigger the "dictionary changed size during iteration" error. I don't know if that is your problem, but the multiprocessing package does use weak references.
See http://bugs.python.org/issue7105
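Not from the answer above, but as a hedged sketch of one way to sidestep the feeder-thread behaviour (the helper names are made up): pickling the dictionary in the sending thread means any "changed size during iteration" error surfaces immediately in the caller, rather than inside multiprocessing's background _feed thread.

import multiprocessing
import pickle

def put_snapshot(queue, d):
    # Pickle in the caller's thread; the queue then only ever carries bytes.
    queue.put(pickle.dumps(d, pickle.HIGHEST_PROTOCOL))

def get_snapshot(queue, timeout=None):
    # Unpickle on the receiving side.
    return pickle.loads(queue.get(timeout=timeout))

if __name__ == '__main__':
    q = multiprocessing.Queue()
    put_snapshot(q, {'a': 1, 'b': 2})
    print(get_snapshot(q))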
