I'm not able to access data of the TraCI package, which should apparently have instances of all the Domain classes. I tried creating those classes manually, but it caused some connection problems.
I was running the simulation when an error occurred:
'NoneType' object has no attribute '_sendReadOneStringCmd'
for the code
result = self._connection._sendReadOneStringCmd
On investigating further, I found that self._connection is being set to None, though initially it was set to a Connection object.
I think this is because I initialized TrafficLaneDomain(), LaneAreaDomain(), and similar other classes in my code.
The documentation attached shows the instances present, but I'm unable to access them.
Is this the cause of the error, or might something else be wrong?
TraCI DOC Home
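A minimal sketch (with illustrative names, not TraCI's real internals) of why a manually constructed domain raises exactly this error: when you connect, TraCI attaches the live connection to the module-level domain instances it created at import time, but a domain object you instantiate yourself is never attached, so its `_connection` stays `None`:

```python
class Connection:
    """Stand-in for TraCI's socket connection."""
    def _sendReadOneStringCmd(self, *args):
        return "lane_0"

class LaneDomain:
    """Stand-in for a TraCI domain class."""
    def __init__(self):
        self._connection = None  # wired up later, when a connection is made

    def getIDList(self):
        # Raises AttributeError on NoneType if this instance was never wired up
        return self._connection._sendReadOneStringCmd()

# The instance the library creates at import time:
lane = LaneDomain()

def connect():
    conn = Connection()
    lane._connection = conn  # only the module-level instance gets the connection
    return conn

connect()
print(lane.getIDList())  # the pre-built instance works

mine = LaneDomain()      # manually created, never attached to the connection
try:
    mine.getIDList()
except AttributeError as e:
    print(e)             # 'NoneType' object has no attribute '_sendReadOneStringCmd'
```

So the fix is to use the instances TraCI already provides (e.g. `traci.lane`, `traci.trafficlight`) after connecting, rather than instantiating the domain classes yourself.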
I've recently migrated a MySQL database from local to Google Cloud Platform in preparation for deployment. When I first did this, I encountered:
MySQLdb._exceptions.OperationalError: (1055, "Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'testing.Event.EventID' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by")
Annoying, but no problem: a quick search revealed it could be turned off via the flags section of the GCP console, and that seemed fine, as I wasn't too worried about the risks of turning it off. This worked, or so I thought: the same issue continues to appear days after setting the "Traditional" flag on my GCP SQL instance.
Even when I run the query:
SELECT @@sql_mode;
The result I get is:
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
which does not contain the ONLY_FULL_GROUP_BY setting; nonetheless I still receive the error: this is incompatible with sql_mode=only_full_group_by
Is there some reason this error would continue to appear despite ONLY_FULL_GROUP_BY not being in the sql_mode that the error message says causes it?
I'm no longer able to change my database on ArangoDB.
If I try to create a collection I get the error:
Collection error: cannot create collection: invalid database directory
If I try to delete a collection I get the error:
Couldn't delete collection.
Besides that, some of the collections are now corrupted.
I've been working with this db for 2 months and I'm only getting these errors now.
Thanks.
If anyone runs into the same errors at some point: in my case it was just a temporary problem caused by server overload.
I am using Pyramid, SQLAlchemy (with ZopeTransactionExtension), PostgreSQL, and uWSGI, and I have a problem with my web app.
So when I save an object using DBSession.add(object) and flush it with DBSession.flush(), I get no error, and I even get the id of the newly created object; but when the page reloads I see the success message, yet the object does not appear in the list of all objects. (No error was thrown, and in the PostgreSQL logs I can see my INSERT query and even a COMMIT right after it.)
This can also be seen via the API: when I create a new object it returns the object's id, yet two seconds later, when I delete that object, I get an error that no object with this id exists. The problem occurs randomly (sometimes more often, sometimes less), only in the production/testing environment (not locally), and disappears after rebuilding (it comes back after 1-2 days of uptime).
Has anyone had a similar problem?
Check out this answer. Generally, everything that is flushed is only temporarily written to the database; it is persisted when a .commit() happens.
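A minimal, self-contained illustration of that flush/commit distinction (plain SQLAlchemy with an in-memory SQLite database, not the Pyramid/ZopeTransactionExtension setup from the question): flush sends the INSERT and populates the id, but only a commit makes the row durable for other sessions.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
item = Item(name="example")
session.add(item)
session.flush()              # INSERT is sent to the database; item.id is populated
assert item.id is not None   # looks saved from inside this transaction...

session.rollback()           # ...but without a commit, the INSERT is rolled back

other = Session()
print(other.query(Item).count())  # -> 0: the flushed-but-uncommitted row is gone
```

This matches the symptom in the question: the id comes back (flush succeeded), but if the transaction is not actually committed, other requests never see the row.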
I'm trying to return an objects state using the following code:
workflow = getToolByName(context,'portal_workflow')
status = workflow.getInfoFor(obj,'review_state')
When I try to output this using:
print "State: %s" % (status)
I get the following error:
Exception Type: WorkflowException
Exception Value: No workflow provides '${name}' information.
I've done a little reading around the web but nothing seems to give a definitive answer.
Can anyone help?
EDIT
This is not down to the object not having a workflow. The object I'm trying to get the state for uses a custom workflow; however, switching it to the default Plone workflow still throws the same error.
FIXED
After just trying the simplest thing out:
status = obj.review_state
This works! Go figure. Thanks anyway.
Moderators you can delete this post if you wish.
Actually, Giacomo's answer is correct. The obj you were trying to pass to the getInfoFor method is a catalog brain, not an actual content object. That is why asking for its review_state directly worked for you.
A Plone content object has no knowledge of its own workflow state. That information is maintained by the workflow tool, which is why, when you are looking at an actual content object, you must use workflow_tool.getInfoFor.
In your case, you've taken the result of a catalog search, which is a lightweight structure called a brain, and tried to pass it to the workflow tool. Catalog brains do not have workflow, so the error you got is perfectly accurate. But a catalog brain does have a review_state attribute, which corresponds to the review state of the object represented by the catalog brain.
In short: if you have a catalog brain, use brain.review_state; if you have a content object, use workflow_tool.getInfoFor(obj, 'review_state').
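A toy model of that split in plain Python (not the real Plone APIs; the class names here are illustrative): the brain carries a copy of the indexed review_state, while the workflow tool, not the content object itself, owns the authoritative state.

```python
class Brain:
    """Lightweight catalog record: stores indexed metadata directly."""
    def __init__(self, uid, review_state):
        self.uid = uid
        self.review_state = review_state  # copied into the catalog at index time

class ContentObject:
    """Full content object: knows nothing about its own workflow state."""
    def __init__(self, uid):
        self.uid = uid

class WorkflowTool:
    """State lives in the tool, keyed by object, not on the object."""
    def __init__(self):
        self._states = {}

    def set_state(self, obj, state):
        self._states[obj.uid] = state

    def getInfoFor(self, obj, name):
        if name != 'review_state':
            raise KeyError("No workflow provides '%s' information." % name)
        return self._states[obj.uid]

tool = WorkflowTool()
doc = ContentObject('doc-1')
tool.set_state(doc, 'published')
brain = Brain('doc-1', 'published')

print(brain.review_state)                    # brain: read the attribute directly
print(tool.getInfoFor(doc, 'review_state'))  # content object: ask the tool
```

Passing a brain to the tool (or asking a bare content object for its state) fails for exactly the reason described above: each side only knows one half of the picture.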
That's because you are trying to get the workflow state of an object that doesn't have any associated workflow (probably a File or an Image). You can check all of the content-type/workflow pairings in the ZMI under portal_workflow.
In the Variables tab of your workflow definition (in the ZMI, under portal_workflow), at the bottom of that page, make sure the state variable name is 'review_state'; IIRC, this may not be the default. This may be one possible source of your problem.
I am trying to include an external Python module, named gmemsess.py, in my project for working with sessions. It tries to add a Set-Cookie header to the response, and an error appears:
rh.response.headers.add_header('Set-Cookie','%s=%s; path=/;'%(name,self._sid))
AttributeError: HeaderDict instance has no attribute 'add_header'
I read the documentation and everything seems OK, but it doesn't work. Why might this error appear?
Also, I use webapp2 to manage subdomains. Maybe something is going wrong because of this?
The headers.add_header method should absolutely work if you are using stock App Engine, but I am guessing that you are using a framework -- and there are plenty of them, like Bottle -- that uses a custom replacement for webob's Response object.
A little bit of time with Google reveals that there is at least one identifiable class called HeaderDict that extends MultiDict, and I think that is what you are dealing with. In that case, you should go into gmemsess.py and change the line
rh.response.headers.add_header('Set-Cookie','%s=%s; path=/;'%(name,self._sid))
to read
rh.response.headers['Set-Cookie'] = '%s=%s; path=/;'%(name,self._sid)
That should fix you right up.
Disregard -- see comments below
Is that module written to work with App Engine? The response objects used by App Engine do not have an add_header method, see the docs.
Instead, there's a dict-like headers object that you can simply assign values to, like
response.headers['Set-Cookie'] = "whatever your cookie value is"
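The HeaderDict in question comes from whatever framework is in use, so its exact API may differ, but the stdlib's `wsgiref.headers.Headers` illustrates the dict-style assignment pattern both answers recommend (the cookie value here is made up):

```python
from wsgiref.headers import Headers

headers = Headers([])  # wraps a list of (name, value) pairs
headers['Set-Cookie'] = 'gmemsess=abc123; path=/'  # dict-style set, no add_header
print(headers['Set-Cookie'])  # -> gmemsess=abc123; path=/
```

One caveat: dict-style assignment replaces any existing header with the same name, which matters if you ever need to send several Set-Cookie headers in one response.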