XML Parsing Error: not well-formed Location: moz-nullprincipal - python

I am having a problem posting map data to PostGIS via Apache2 --> GeoServer --> OpenLayers on Ubuntu 12.04.
I am receiving data from geoserver just fine but unable to post new data back.
The post error is:
XML Parsing Error: not well-formed Location: moz-nullprincipal:
#!/usr/bin/env python
-^
What I get for a response is the text of the proxy.cgi script provided by OpenLayers, i.e. the script's source is being returned instead of being executed, and the browser then fails to parse it as XML. I have edited this script to include all sources found in the XML formed by the request, to make sure that I have included all URLs.
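For reference, the part of proxy.cgi I edited is the allowedHosts list near the top of the script; trimmed down it looks roughly like this (the localhost entries are examples for a GeoServer on Tomcat's default port, so adjust to your own host:port):

# proxy.cgi (OpenLayers): hosts the proxy is willing to forward requests to
allowedHosts = ['www.openlayers.org', 'openlayers.org',
                'localhost', 'localhost:8080']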
I have Python, Python2 and Python2.7 available but using any of these produces the same result. All includes appear to be loading correctly.
I have read numerous posts related to this issue, but none have provided a solution. I used to be able to bypass the same-domain issue by creating an index.html outside the apache-tomcat directory that defined an iframe calling my actual site.html residing in /geoserver/www. This no longer appears to work, hence my proxy problem. This project is on hold until this issue is solved.
Any help would be greatly appreciated.
Thanks, Larry

I found another way to do it. Rather than using the OpenLayers proxy, I found this blog:
http://bikerjared.wordpress.com/2012/10/18/ubuntu-12-04-mod-proxy-install-and-configuration/
which provides a very good tutorial on setting up an Apache2 proxy (mod_proxy); that fixed my problem nicely.
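For anyone going the same route, the core of that setup is roughly the following (the port assumes GeoServer running on Tomcat's default 8080, so adjust for your install):

sudo a2enmod proxy proxy_http

# then inside the site's VirtualHost configuration:
ProxyPass        /geoserver http://localhost:8080/geoserver
ProxyPassReverse /geoserver http://localhost:8080/geoserver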

Related

Issue with canonical URL when using MySQL

Currently I'm using Ubuntu 18.04, and I'm trying to develop a blogging website using Django. I'm trying to use canonical URLs to display a more meaningful URL format.
Everything is fine when I'm using Django's default database (i.e. SQLite3), but when I switch from SQLite3 to MySQL I get this issue:
My post_detail page is not displaying.
For a better explanation of my problem, a screenshot of what I'm facing is given below.
But I'm not facing any kind of error while using SQLite3.
So please help me out with this problem.
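For context, a typical canonical-URL setup in a Django blog looks roughly like this; the model fields and the 'post_detail' URL name below are illustrative, not taken from the question:

from django.db import models
from django.urls import reverse

class Post(models.Model):
    title = models.CharField(max_length=200)
    slug = models.SlugField(unique=True)

    def get_absolute_url(self):
        # canonical URL for a single post, resolved through the named URL pattern
        return reverse('post_detail', kwargs={'slug': self.slug})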

CKAN 2.2 - strange error while adding resource

I have the following issue:
I fill in the form for creating a new dataset.
I click the button 'Next: Add Data'. (All given data are accepted as valid.)
CKAN redirects me to the error page with the following text:
404 Not Found. The resource could not be found. The dataset XXX could not be found.
I checked the database and the dataset was not created.
This issue appears only sometimes. Do you have any idea what could cause such strange behaviour?
I had the same error; using the suggestion from #bellisk helped me solve it. The details of my error and solution are in this ticket: Error 404 not found uploading dataset to YYC Data Collective (CKAN)
Note that in my case the plugin is "validation", but in your case it may be different; the solution is the same. Just disable the culprit plugin by removing its name from the configuration file, and do not forget to restart the server.
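In CKAN that means editing the ckan.plugins line of the .ini configuration file (the path and plugin names below are just examples; "validation" was the culprit in my case):

# e.g. /etc/ckan/default/production.ini
# before
ckan.plugins = datastore validation
# after: drop the culprit, then restart the web server
ckan.plugins = datastore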
Remove plugins one by one until you find the bad one :). Hope it helps. Thanks Bellisk!

Posting new targets to Vuforia with Python

I'm trying to make a CMS with Python to post new targets to a cloud database on Vuforia. I found the Python library "python-vuforia", but it has read functionality only.
I added a function to post targets, but so far I'm getting a 401 error. You can find the new function in this commit.
What am I doing wrong?
Got it working with this commit: https://github.com/dadoeyad/python-vuforia/commit/1997a49f94c5f2e13ab1d5c620c69160c76b7969
I think the problem was that I was doing str(req.get_data()) instead of req.get_data(),
and hmac(key, message, sha1).digest().encode('base64') instead of base64.b64encode(hmac(key, message, sha1).digest()).
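In other words, the signature needs to be the base64 of the raw HMAC-SHA1 digest computed over the exact request body bytes. A minimal sketch of the signing step, as I understand the Vuforia Web Services scheme (function and variable names here are my own):

import base64
import hashlib
import hmac

def vws_signature(secret_key, method, body, content_type, date, path):
    # string-to-sign: verb, MD5 of the body, content type, date, request path
    content_md5 = hashlib.md5(body).hexdigest()
    string_to_sign = '\n'.join([method, content_md5, content_type, date, path])
    digest = hmac.new(secret_key, string_to_sign, hashlib.sha1).digest()
    # base64.b64encode() does not append the trailing newline that
    # .encode('base64') adds, which would corrupt the header value
    return base64.b64encode(digest)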

xgoogle python library is not working any more?

I was using the xgoogle Python library for one of my projects. It was working fine until recently, but I am not getting the result set that I used to get before. If anyone who has used this library, written by Peter Krumins, has faced a similar situation, can you please suggest a workaround?
The presence of BeautifulSoup.py hints that this library uses web scraping to get its result.
A common problem with this is that it will easily break when the design/layout of the page being scraped changes. And the problem you see seems to coincide with the new search results layout that Google introduced just recently.
Another problem is that it often is against the terms of service of the site being scraped. And according to point 5.3 of the Google Terms Of Service it actually is:
You specifically agree not to access (or attempt to access) any of the Services through any automated means (including use of scripts or web crawlers) [...]
A better idea would be to use the Custom Search API.
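A minimal sketch against the Custom Search JSON API (the key and search-engine ID are placeholders you create in the Google console; I am using the requests library for brevity):

import requests

API_KEY = 'your-api-key'          # from the Google developer console
CX = 'your-search-engine-id'      # ID of your custom search engine

def google_search(query, num=10):
    resp = requests.get(
        'https://www.googleapis.com/customsearch/v1',
        params={'key': API_KEY, 'cx': CX, 'q': query, 'num': num},
    )
    resp.raise_for_status()
    return [item['link'] for item in resp.json().get('items', [])]

print(google_search('xgoogle python'))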
Peter Krumins's product xgoogle looks to be extremely useful both to me and, I imagine, to many others.
https://github.com/pkrumins/xgoogle
For me the current version, 1.3, is not working.
I tried a new install from GitHub, ran the examples, and nothing is returned.
Adding a debugger to the source code and tracing the data captured in a query to the point where it disappears, the problem occurs in search.py, in the subroutine "_extract_results", at the parser command
results = soup.findAll('li', {'class': 'g'})
The soup object has material in it but the "findAll" fails to return anything.
Looks like it's searching for list items and, if there are none, it returns nothing.
I am unsure what HTML you would try to match to get a result.
If anyone knows how to make this work, I am very interested.
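For what it's worth, a patched version of that lookup might start from something like this; the <div class="g"> fallback is only a guess and needs checking against the HTML actually captured in the debugger:

from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3, a copy of which ships with xgoogle

html = open('captured_results.html').read()  # results page saved from the debug session
soup = BeautifulSoup(html)
results = soup.findAll('li', {'class': 'g'})
if not results:
    # guess: newer result pages may wrap each hit in <div class="g"> instead
    results = soup.findAll('div', {'class': 'g'})
print(len(results))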
A little more googling and it appears xgoogle is no longer supported or working.
Part of the trouble is that Google changes the layout of its results pages every so often, so any scraping software that assumes some standard layout is in time doomed to failure.
There are, however, other search engines that are installed locally and thus provide a results layout that is less likely to change with upgrades and will not change at all if you don't upgrade.
I am currently investigating YaCy. It is easy to install and can be pointed at specific sites if you want.

Different results for the same RSS feed fetching from different user agents

If I add a feed URL to Google Reader or to a desktop feed aggregator, I receive nice results. The URL is:
http://estaticos03.marca.com/rss/futbol_1adivision.xml
But when I fetch the same URL from a script (a Python script using the feedparser library), I get slightly different content for the same items (the title for each entry, for example, is different and all in uppercase).
I believe something is done on the server side to try to discourage people like me from parsing the content for their own projects (the feed is from a popular football newspaper), but I am not sure about it. I tried passing some user agents (like the Google Reader one) but still no luck, so maybe they check the IP as well? I am really confused.
Any idea why this is happening to me?
Thanks!
AFAIK Google Reader does some "magic" in the content to beautify it. They strip some tags and styles to avoid breaking their interface.
Can you provide more details on the differences?
Did you change the user agent of your script? Try to mimic Firefox and see what happens.
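With feedparser you can set it directly on the parse call; a quick sketch (the UA string is just an example of a desktop Firefox):

import feedparser

url = 'http://estaticos03.marca.com/rss/futbol_1adivision.xml'
ua = 'Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0'  # example desktop UA

# feedparser sends the given agent string with the HTTP request
feed = feedparser.parse(url, agent=ua)
for entry in feed.entries[:3]:
    print(entry.title)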
All right folks, I found it. I analyzed the source XML received (as #TryPyPy suggested). I had been trusting the feedparser library too much. The latest official version (4.1) has a bug related to mistakenly taking the title tag from the media namespace instead of the original one:
http://code.google.com/p/feedparser/issues/detail?id=76
So I reinstalled from trunk and now everything is OK. Thanks for helping anyway!
