CKAN 2.2 - strange error while adding resource - python

I have the following issue:
I fill in the form for creating a new dataset.
I click the button 'Next: Add Data'. (All the submitted data pass validation.)
CKAN redirects me to the error page with the following text:
404 Not Found. The resource could not be found. The dataset XXX could not be found.
I checked the database and the dataset was not created.
This issue appears only sometimes. Do you have any idea what could cause such strange behaviour?

I had the same error, and the suggestion from #bellisk helped me solve it. The details of my error and solution are in this ticket: Error 404 not found uploading dataset to YYC Data Collective (CKAN).
Note that in my case the culprit plugin was "validation", but in your case it may be different; the solution is the same either way. Just disable the culprit plugin by removing its name from the configuration file, and do not forget to restart the server.
Remove plugins one by one until you find the bad one :). Hope it helps. Thanks Bellisk!
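For reference, a sketch of the configuration change being described, assuming a typical CKAN install (the file path and the plugin names shown are illustrative; keep your own plugin list and only drop the suspect entry):

```ini
; /etc/ckan/default/production.ini  (path may differ on your install)
; Before: the suspect plugin is listed
;   ckan.plugins = stats text_view recline_view validation
; After: the suspect plugin's name is removed -- then restart the server
ckan.plugins = stats text_view recline_view
```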


Receiving null response from visitorsession variable during deployment (SalesIQ Zobot)

I am a little baffled by the issue I am facing with Zobot. I have developed a Zobot in which I use visitorsession to store some values and reuse them later. It works great in the Zoho developer environment; we used the Deluge scripts option for development.
But when I deploy it using the embed script on my website, the visitorsession part of the code returns null. I searched around and found not a single person facing this issue or posting about it; I am really surprised it's happening only to me. Can someone please suggest what I should do? I have explained what I am facing below:
I am storing values like this
storeResponse2 = zoho.salesiq.visitorsession.set(portal_name,{"a":"b"});
I am getting the values like this
zoho.salesiq.visitorsession.get(portal_name,"a").get("data").get("a");
It is working great in Zobot Development environment in the Preview section, but when I go to my website where I put the embed, I am getting null.
In both environments, the response I get when I do the set is the same. It's not throwing any error; it's giving me a valid response. The response I get when I set, in both environments, is:
{"url":"/api/v2/bigtalk/visitorsessions","object":"session","data":{"a":"b"}}
But when I do a get, in the Zoho environment I get the correct value; in production I get null.
If this happens, it probably means that you are setting and getting the data from visitorsession in the same execution.
It doesn't work that way: the value should be set in one execution and read in subsequent executions (replies). And this won't be a problem anyway, since within the same execution you already have the data available.
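A minimal Deluge sketch of the pattern the answer describes (portal_name and the stored key/value are taken from the question; the split into two executions is the point):

```deluge
// Execution 1 (one reply/handler invocation): store the value.
// Within this same execution, just keep using the local variable;
// do not call get here.
storeResponse = zoho.salesiq.visitorsession.set(portal_name, {"a":"b"});

// Execution 2 (a later reply/handler invocation): now the stored
// value is retrievable from the visitor session.
value = zoho.salesiq.visitorsession.get(portal_name, "a").get("data").get("a");
```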

Openpyxl: error message when opening an .xlsx created from a Workbook that used wb.remove_sheet(ws)

Situation: I have model.xlsx, I delete one sheet, and I save the result as output.xlsx. The sheet that I delete contains an image. No error is raised.
Problem: when I try to open the file output.xlsx, I get a first error message (translated from French):
"We have found a problem in the file 'output.xlsx', but we can try to recover as much as possible. If the source of this workbook is reliable, click Yes."
And when I click Yes:
It says that it repaired the workbook, and it mentions images (so I suppose the images are the problem).
Then I get subtle errors (it deletes the sheet, but another sheet that I try to duplicate loses its image).
Solution: I stopped using wb.remove_sheet(ws); now I use ws.sheet_state = 'hidden' and it works like a charm! No problem duplicating a sheet with an image, and no error message when opening the file.
But it's just a workaround! (I want to delete the sheet, not hide it and pretend I did the job.)
Am I doing something wrong? Is this a bug in the openpyxl library?
Answer: sticking with the workaround, since nobody answered: don't delete the sheet, just hide it.
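A minimal sketch of the workaround, using a freshly created workbook in place of model.xlsx (sheet names here are illustrative; note that newer openpyxl versions spell the deletion call wb.remove(ws) instead of wb.remove_sheet(ws)):

```python
from openpyxl import Workbook

wb = Workbook()          # stand-in for load_workbook("model.xlsx")
wb.create_sheet("Data")  # a sheet we want to keep
ws = wb["Sheet"]         # the sheet we would otherwise delete

# Instead of wb.remove_sheet(ws) / wb.remove(ws), which produced a
# corrupt file when the sheet held an image, just hide the sheet:
ws.sheet_state = "hidden"

wb.active = 1            # make a visible sheet the active one
wb.save("output.xlsx")
```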

Reading/writing error messages from data factory pipelines

Going to try asking this here.
I'm trying to write an error message from an Azure Data Factory pipeline to a table in SQL server. It needs to capture the error message from a Databricks Python job. I can't find any official documentation and the method I have found from this source:
#{activity('Proc source').error.message}
..doesn't write anything to the table. Just a blank string with no explanation.
Why data factory doesn't just have an area where you can view the details of errors instead of just saying "Failed" is beyond me. Or if it does, it's hidden away.
Does anyone have any ideas?
You can see the details of the error here.
#{activity('Proc source').error.message}
This expression works.
Is errorCode saved to your table? Make sure your activity name is correct.
SOLVED:
For anyone else reading this, the problem turned out to be something very stupid. I'd included a space in the python file name.
This returned an error code of 3202 and a blank error message.
I had another problem and resolution after this that I'd like to share, again for anyone frantically googling a solution to the problem. Once I'd removed spaces from the python filename, I logged an error code of '3204'
and the error message: "Databricks execution failed with error message: . Run page url: (link to page showing clusters)". In the Databricks workspace launchable through the Azure portal, selecting a cluster through the 'clusters' sidebar then heading to 'Driver Logs' will show errors, in the 'Standard error' window that comes up.
I had already installed the libraries I needed on an existing cluster, but I'd forgotten to change a setting in the Databricks linked service. 'Select cluster' was set to 'new job cluster' when I needed 'Existing interactive cluster'. Thus, it wasn't pointing to the cluster I had expected.
These are all fairly small errors as it turns out, but again I hope that someone else dealing with the same issues will be able to find this post and save themselves some hassle!
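For anyone wiring this up, a sketch of how such an expression might feed a Stored Procedure activity that logs failures to SQL Server (the activity name "Proc source" is from the thread; the procedure and parameter names are illustrative, and pipeline JSON normally uses the @{...} interpolation prefix):

```json
{
  "name": "Log error",
  "type": "SqlServerStoredProcedure",
  "dependsOn": [
    { "activity": "Proc source", "dependencyConditions": [ "Failed" ] }
  ],
  "typeProperties": {
    "storedProcedureName": "dbo.LogPipelineError",
    "storedProcedureParameters": {
      "ErrorCode":    { "value": "@{activity('Proc source').error.errorCode}", "type": "String" },
      "ErrorMessage": { "value": "@{activity('Proc source').error.message}",   "type": "String" }
    }
  }
}
```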

Insufficient Privileges: plone.app.multilingual [1.x] - Translate an Archetype content

I get really strange behaviour when I try to translate an Archetypes content type (for example, Folder) with plone.app.multilingual.
Error msg:
Insufficient Privileges
You do not have sufficient privileges to view this page. If you believe you are
receiving this message in error, contact the site administration.
My environment:
Plone = 4.3.2
plone.app.multilingual = 1.2.1
plone.multilingual = 1.2.1
plone.multilingualbehavior = 1.2
archetypes.multilingual = 1.2
And about 40 Plone add-ons (which makes this really complex).
My use case:
Two languages are configured EN and DE
Plone's simple_publication_workflow is configured.
All items are in the private state. (LanguageRoot folders are in private state too)
I can reproduce this problem by first creating a content item (Folder) in the English part of the site. The content is private, since this is the default state for new content.
Then I try to translate the content into German by clicking on the Translate menu -> create German. Done... the error Insufficient Privileges appears.
If I install a vanilla Plone 4.3.2 and plone.app.multilingual without my huge bunch of add-ons, everything works fine. I'm pretty sure there's a problem with one of the add-ons, but I need to understand what leads to this problem, whichever add-on it is that breaks the site.
Further...
Since it's a permission problem, I first tried publishing everything and then translating the content into German. This worked well!
Well, the next step was debugging into plone.app.multilingual. I ended up in add.py's AddTraverser.
The problem appears on the redirect on line 37: if one part of the URL is not publicly accessible (Anonymous??), the error happens.
IMHO it's strange that the implementations of the DX part and the AT part are different.
I changed the AT part implementation to:
self.context.REQUEST.set('type', name)
view = queryMultiAdapter((self.context, self.context.REQUEST),
                         name="add_at_translation")
return view.__of__(self.context)
instead of:
baseUrl = self.context.absolute_url()
url = '%s/@@add_at_translation?type=%s' % (baseUrl, name)
return self.request.response.redirect(url)
This totally fixed my case, and all plone.app.multilingual tests still pass.
But I cannot explain why.
And why doesn't it work without the customization?
Any hint is really appreciated!
Why your fix works could be related to the fact that the user may not yet be authenticated during traversal. This is similar to why the user may not be authenticated during a browser view's __init__.
That might be a bug, and a pull request with your fix, plus a test that proves your point by failing without it, would be preferred.
You should be able to get a better error message by removing Unauthorized from the filtered errors in /Plone/error_log (via the ZMI). I'm not sure how absolute_url could raise Unauthorized under any conditions.

XML Parsing Error: not well-formed Location: moz-nullprincipal

I am having a problem posting map data to PostGIS via apache2 --> GeoServer --> OpenLayers on Ubuntu 12.04.
I am receiving data from GeoServer just fine, but I am unable to post new data back.
The post error is:
XML Parsing Error: not well-formed Location: moz-nullprincipal:
#!/usr/bin/env python
-^
What I get as a response is the text of the proxy.cgi script provided by OpenLayers. I have edited this script to include all hosts found in the XML formed by the request, to make sure that I have included all URLs.
I have python, python2 and python2.7 available, but using any of these produces the same result. All includes appear to load correctly.
I have read numerous posts related to this issue, but none have provided a solution. I used to be able to bypass the same-origin issue by creating an index.html outside the apache-tomcat directory that would define an iframe calling my actual site.html residing in /geoserver/www. This no longer appears to work, hence my proxy problem. This project is on hold until this issue is solved.
Any help would be greatly appreciated.
Thanks, Larry
I found another way to do it. Rather than using the OpenLayers proxy, I found this blog:
http://bikerjared.wordpress.com/2012/10/18/ubuntu-12-04-mod-proxy-install-and-configuration/
which provides a very good tutorial on using an Apache2 proxy, and that fixed my problem nicely.
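For reference, a minimal sketch of the kind of Apache reverse-proxy setup such tutorials describe (the paths and port are illustrative, assuming GeoServer runs in a local Tomcat on 8080):

```apache
# Enable the modules first, e.g.:
#   sudo a2enmod proxy proxy_http
# Then, in the site configuration:
ProxyRequests Off
ProxyPreserveHost On

# Forward /geoserver to the local Tomcat-hosted GeoServer, so the
# OpenLayers page can POST back to the same origin it was served from.
ProxyPass        /geoserver http://localhost:8080/geoserver
ProxyPassReverse /geoserver http://localhost:8080/geoserver
```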
