I'm currently using Ubuntu 18.04 and trying to develop a blogging website with Django. I'm using canonical URLs to display a more meaningful URL format.
Everything is fine when I use Django's default database (i.e. sqlite3), but when I switch from sqlite3 to MySQL I get this issue:
My post_detail page is not displaying.
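The canonical-URL setup I mean is along these lines (a rough sketch; the model fields and the 'post_detail' URL name are illustrative, not my exact code):

from django.db import models
from django.urls import reverse

class Post(models.Model):
    title = models.CharField(max_length=200)
    slug = models.SlugField(unique=True)

    def get_absolute_url(self):
        # Canonical URL for a post, e.g. /my-first-post/
        return reverse('post_detail', kwargs={'slug': self.slug})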
For a better explanation of my problem, a screenshot of what I'm facing is given below.
But I'm not facing any type of error while using sqlite3.
So please help me out with this problem.
I am attempting to add search functionality to my Django app using Haystack and Elasticsearch, and after doing some searching on Google I came across this tutorial:
http://www.techstricks.com/django-haystack-and-elasticsearch-tutorial/
I followed it the whole way through, but it wraps up rather abruptly at the end, and I didn't fully understand the last step. Could someone explain what the last bit of HTML/Python does and how I can link it all to a form so I can actually search for things?
Also one last thing, in the tutorial when you add the URL:
(r'^search/', include('haystack.urls')),
it isn't preceded by url() like the other Django URLs are. Is this conventional, a typo, or something else? Any answers would be great, thanks.
There is work in progress, tracked in Haystack's GitHub issues, to match the current Django URL syntax. If you get errors when using the pattern without the url() wrapper, add url(). Hopefully they will update to the latest docs soon.
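For illustration, a minimal sketch of how that include is conventionally written in urls.py for Django versions of that era:

from django.conf.urls import include, url

urlpatterns = [
    # Wrapping the pattern in url() is the conventional form; the bare
    # tuple shown in the tutorial is the older patterns() style.
    url(r'^search/', include('haystack.urls')),
]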
I am having a problem posting map data to PostGIS via apache2 --> geoserver --> OpenLayers on Ubuntu 12.04.
I am receiving data from geoserver just fine but am unable to post new data back.
The post error is:
XML Parsing Error: not well-formed
Location: moz-nullprincipal:
#!/usr/bin/env python
-^
What I get as a response is the text of the proxy.cgi script provided by OpenLayers. I have edited this script to include all sources found in the XML formed by the request, to make sure that I have included all URLs.
I have Python, Python2 and Python2.7 available, but using any of these produces the same result. All includes appear to be loading correctly.
I have read numerous posts related to this issue but none have provided a solution. I used to be able to bypass the same-origin issue by creating an index.html outside the apache-tomcat directory that defined an iframe calling my actual site.html residing in /geoserver/www. This no longer appears to work, hence my proxy problem. This project is on hold until this issue is solved.
Any help would be greatly appreciated.
Thanks, Larry
I found another way to do it. Rather than using the OpenLayers proxy, I found this blog:
http://bikerjared.wordpress.com/2012/10/18/ubuntu-12-04-mod-proxy-install-and-configuration/
which provides a very good tutorial on using an Apache2 proxy, and that fixed my problem nicely.
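For anyone landing here later, the kind of configuration the blog walks through is roughly this (a sketch; the /geoserver path and port 8080 are assumptions based on a default GeoServer-on-Tomcat install):

# Apache2 site configuration fragment; requires: a2enmod proxy proxy_http
<IfModule mod_proxy.c>
    ProxyRequests Off
    ProxyPreserveHost On
    # Forward /geoserver requests to Tomcat so OpenLayers and GeoServer
    # appear to share the same origin, removing the need for proxy.cgi.
    ProxyPass /geoserver http://localhost:8080/geoserver
    ProxyPassReverse /geoserver http://localhost:8080/geoserver
</IfModule>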
I'm trying to make a CMS with Python to post new targets to a cloud database on Vuforia. I found the Python library "python-vuforia", but it has read functionality only.
I added a function to post targets, but so far I'm getting a 401 error. You can find the new function in this commit.
What am I doing wrong?
Got it working with this commit: https://github.com/dadoeyad/python-vuforia/commit/1997a49f94c5f2e13ab1d5c620c69160c76b7969
I think the problem was that I was doing str(req.get_data()) instead of req.get_data(),
and hmac(key, message, sha1).digest().encode('base64') instead of base64.b64encode(hmac(key, message, sha1).digest()).
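For reference, a minimal sketch of the corrected signing step (the function and variable names are placeholders, not the library's exact API):

import base64
import hmac
from hashlib import sha1

def sign_request(secret_key, string_to_sign):
    # b64encode the raw digest; the old str.encode('base64') codec is
    # Python 2 only and appends a trailing newline, which corrupts the
    # signature and triggers a 401 from the server.
    digest = hmac.new(secret_key.encode('utf-8'),
                      string_to_sign.encode('utf-8'), sha1).digest()
    return base64.b64encode(digest)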
I have an HTML webpage. It has a search textbox. I want to allow the user to search within a dataset. The dataset is represented by a bunch of files on my server. I wrote a Python script which can perform that search.
Unfortunately, I'm not familiar with how to connect the HTML page and a Python script.
The task is to connect a Python script to the HTML page so that:
Python code will be run on the server side
Python code can somehow take the values from the HTML page as input
Python code can somehow put the search results to the HTML webpage as output
Question 1: How can I do this?
Question 2: How should the Python code be stored on the website?
Question 3: How should it take HTML values as input?
Question 4: How can it output the results to the webpage? Do I need to install/use any additional frameworks?
Thanks!
There are too many things to get wrong if you try to implement this yourself with only what the standard library provides.
I would recommend using a web framework like Flask or Django. I linked to the quickstart sections of both frameworks' comprehensive documentation. Basically, you write code and URL specifications that are mapped to the code, e.g. an HTTP GET on /search is mapped to a method returning the HTML page.
You can then use a form submit button to GET /search?query=<param>, with <param> being the user's input. Based on that input you search the dataset and return a new HTML page with the results.
Both frameworks have template languages that help you put the search results into HTML.
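To make that concrete, here is a minimal sketch using Flask (search_dataset is a placeholder for the logic in your existing script):

from flask import Flask, request, render_template_string

app = Flask(__name__)

def search_dataset(query):
    # Placeholder: swap in your existing file-searching script here.
    return ["dummy result for %r" % query]

# A tiny inline template; a real app would use a template file.
TEMPLATE = """
<form action="/search" method="get">
  <input type="text" name="query">
  <input type="submit" value="Search">
</form>
<ul>{% for hit in results %}<li>{{ hit }}</li>{% endfor %}</ul>
"""

@app.route("/search")
def search():
    # The user's input arrives via GET /search?query=...
    query = request.args.get("query", "")
    results = search_dataset(query) if query else []
    return render_template_string(TEMPLATE, results=results)

if __name__ == "__main__":
    app.run(debug=True)  # development server only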
For testing purposes, web frameworks usually come with a simple webserver you can use. For production purposes, there are better solutions like uWSGI and Gunicorn.
Also, you should consider putting the data into a database; parsing files for each query can be quite inefficient.
I'm sure you will have more questions on the way, but that's what Stack Overflow is for; the more specific your questions, the more focused the answers.
I would look at the cgi library in Python.
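A bare-bones sketch of that approach, assuming the script is served from the web server's cgi-bin directory (the search logic is again a placeholder):

#!/usr/bin/env python
import cgi
import html

form = cgi.FieldStorage()
query = form.getvalue("query", "")

# Placeholder: swap in your existing file-searching logic here.
results = ["dummy result for %r" % query] if query else []

# A CGI script writes an HTTP header, a blank line, then the body.
print("Content-Type: text/html")
print()
print("<html><body><ul>")
for hit in results:
    print("<li>%s</li>" % html.escape(hit))
print("</ul></body></html>")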
You should check out Django; it's a very flexible and easy Python web framework.
I was using the xgoogle Python library for one of my projects. It was working fine until recently: I am no longer getting the result set that I used to get before. If anyone who has used this library, written by Peter Krumins, has faced a similar situation, can you please suggest a workaround?
The presence of BeautifulSoup.py hints that this library uses web scraping to get its results.
A common problem with scraping is that it easily breaks when the design/layout of the page being scraped changes, and the problem you see seems to coincide with the new search results layout that Google introduced just recently.
Another problem is that scraping is often against the terms of service of the site being scraped. And according to point 5.3 of the Google Terms of Service, it actually is:
You specifically agree not to access (or attempt to access) any of the Services through any automated means (including use of scripts or web crawlers) [...]
A better idea would be to use the Custom Search API.
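A minimal sketch of that route, querying the Custom Search JSON API with requests (the API key and search engine ID are placeholders you create in Google's console):

import requests

API_KEY = "your-api-key"          # from the Google API console
SEARCH_ENGINE_ID = "your-cx-id"   # Custom Search engine ID

def google_search(query):
    # Ask the Custom Search JSON API and return the result links.
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": SEARCH_ENGINE_ID, "q": query},
    )
    resp.raise_for_status()
    return [item["link"] for item in resp.json().get("items", [])]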
Peter Krumins' product xgoogle looks to be extremely useful, both to me and, I imagine, many others.
https://github.com/pkrumins/xgoogle
For me the current version, 1.3, is not working.
I tried a fresh install from GitHub, ran the examples, and nothing was returned.
Adding a debugger to the source code and tracing the data captured in a query to its disappearance, I found the problem occurs in search.py, in the subroutine "_extract_results", at this parser command:
results = soup.findAll('li', {'class': 'g'})
The soup object has material in it, but the findAll call fails to return anything.
It looks like it's searching for list items, and if there are none it returns nothing.
I am unsure what HTML you would try to match to get a result.
If anyone knows how to make this work, I am very interested.
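To illustrate what that parser command expects, here is a small sketch using the BeautifulSoup 3 module that xgoogle bundles (the sample HTML mimics the old results layout):

from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3, as bundled

html = '<ul><li class="g">result 1</li><li class="g">result 2</li></ul>'
soup = BeautifulSoup(html)

# Matches on the old layout, where each result was an <li class="g">.
print(soup.findAll('li', {'class': 'g'}))
# Returns [] as soon as Google changes the tag or class names.
print(soup.findAll('li', {'class': 'xyz'}))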
A little more googling and it appears xgoogle is no longer supported or working.
Part of the trouble is that Google changes the layout of its results pages every so often, so any scraping software that assumes some standard layout is in time doomed to failure.
There are, however, other search engines that are locally installed and thus provide a results layout that is less likely to change with upgrades, and will not change at all if you don't upgrade.
I am currently investigating YaCy. It is easy to install and can be pointed at specific sites if you want.