I'm not using a regular model, so I can't use Django's syndication framework. Instead, I used the low-level syndication utility, feedgenerator, to generate RSS feeds as shown below.
feed = feedgenerator.Rss201rev2Feed(
    title=_("Feed by %s") % user.username,
    link="http://%s" % DOMAIN_NAME,
    description=_("RSS Feed provided by something.com"),
    language=user.language,
    author_name=user.full_name,
    feed_url="something")

for note in ObjectModel.published_objects.filter(user=user):
    feed.add_item(title=note.title,
                  link="",
                  pubdate=note.created,  # the keyword is pubdate, not pubDate
                  description=note.note)

response = HttpResponse(feed.writeString('UTF-8'), mimetype='application/rss+xml')
return response
However, I couldn't find a good example of how to return this as a response:
response = HttpResponse(feed.writeString('UTF-8'), mimetype='application/rss+xml')
Apparently the code above is not right, because the browser does not recognize the output as an RSS feed. Could someone tell me what I should do to fix this problem?
This works just fine for me; the output was recognized by the browser as an RSS feed.
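For reference, here is a minimal self-contained sketch of such a view (the titles, links, and items are placeholders; note that on Django 1.7 and later the mimetype argument is spelled content_type):

from django.http import HttpResponse
from django.utils import feedgenerator

def notes_feed(request):
    # All values here are placeholders for illustration.
    feed = feedgenerator.Rss201rev2Feed(
        title="Example feed",
        link="http://example.com/",
        description="RSS feed example",
        language="en")
    feed.add_item(title="First item",
                  link="http://example.com/notes/1/",
                  description="Item body")
    # Serve with the RSS content type so the browser recognizes the feed.
    # On Django >= 1.7, pass content_type= instead of mimetype=.
    return HttpResponse(feed.writeString('UTF-8'),
                        mimetype='application/rss+xml')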
I am having trouble getting a video entry which includes a link rel="edit". I need such an entry in order to be able to call DeleteVideoEntry(...) on it.
I am retrieving the video using GetYouTubeVideoEntry(youtube_id=XXXXXXX). My yt_service is initialized with a username, password, and a developer key. I use ProgrammaticLogin. This part seems to work fine. I used the same yt_service to upload said video earlier. Also, if I change the developer key to something bogus (during debugging) and try to authenticate, I get a 403 error. This leads me to believe that authentication works OK.
Needless to say, the video entry retrieved with GetYouTubeVideoEntry(youtube_id=XXXXXXX) does not contain the edit link, and I cannot use the entry in a DeleteVideoEntry(...) call.
Is there some special way to get a video entry which will contain a link element with a rel="edit"? Can anyone suggest some way to resolve my issue? Could this possibly be a bug?
Update:
For the record, when I tried getting the feed of all my uploads and then looping through the video entries, those entries do have an edit link. So using this works:
uri = 'http://gdata.youtube.com/feeds/api/users/%s/uploads' % username
feed = yt_service.GetYouTubeVideoFeed(uri)
for entry in feed.entry:
    yt_service.DeleteVideoEntry(entry)
But this does not:
entry = yt_service.GetYouTubeVideoEntry(video_id=video.youtube_id)
yt_service.DeleteVideoEntry(entry)
Using the same yt_service.
I've just deleted a YouTube video using gdata and ProgrammaticLogin().
Here are the steps to reproduce:
import gdata.youtube.service
yt_service = gdata.youtube.service.YouTubeService()
yt_service.developer_key = 'developer_key'
yt_service.email = 'email'
yt_service.password = 'password'
yt_service.ProgrammaticLogin()
# video_id should look like 'iu6Gq-tUsTc'
uri = 'https://gdata.youtube.com/feeds/api/users/%s/uploads/%s' % (username, video_id)
entry = yt_service.GetYouTubeUserEntry(uri=uri)
response = yt_service.DeleteVideoEntry(entry)
print response # True
yt_service.GetYouTubeVideoFeed(uri) works because GetYouTubeVideoFeed doesn't check the URI and just calls self.Get(uri, ...), although originally, I think, it expected a 'https://gdata.youtube.com/feeds/api/videos' URI.
Conversely, yt_service.GetYouTubeVideoEntry() uses YOUTUBE_VIDEO_URI = 'https://gdata.youtube.com/feeds/api/videos', but an entry fetched from there doesn't contain rel="edit".
Hope that helps you out.
You can view the HTTP headers of the generated requests by setting the debug flag to True. This is as simple as:
yt_service = gdata.youtube.service.YouTubeService()
yt_service.debug = True
You can read about this in the documentation here.
I'm using the wikitools package to parse Wikipedia. I copied this example from the documentation, but it's not working. When I run this code, I get the following error:
Invalid JSON, trying requesting again
Can you please help me? Thanks.
from wikitools import wiki
from wikitools import api
# create a Wiki object
site = wiki.Wiki("http://my.wikisite.org/w/api.php")
# define the params for the query
params = {'action':'query', 'titles':'Papori'}
# create the request object
request = api.APIRequest(site, params)
# query the API
result = request.query()
The "http://my.wikisite.org/w/api.php" is only an example, there is no MediaWiki under that domain. Try with "http://en.wikipedia.org/w/api.php" which searches in the English Wikipedia.
Is it possible to get spelling/search suggestions (i.e. "Did you mean") via the RESTful interface to Google's AJAX search API? I'm trying to access this from Python, though the URL query syntax is all I really need.
Thanks!
The Google AJAX Search API doesn't have a spell-check feature (see this). You could use the SOAP service, but I think it's no longer available.
Alternatively, you can look at the Yahoo API; it has a spell-check feature.
EDIT: check this, maybe it can help you:
import httplib
import xml.dom.minidom
data = """
<spellrequest textalreadyclipped="0" ignoredups="0" ignoredigits="1" ignoreallcaps="1">
<text> %s </text>
</spellrequest>
"""
word_to_spell = "gooooooogle"
con = httplib.HTTPSConnection("www.google.com")
con.request("POST", "/tbproxy/spell?lang=en", data % word_to_spell)
response = con.getresponse()
dom = xml.dom.minidom.parseString(response.read())
dom_data = dom.getElementsByTagName('spellresult')[0]
for child_node in dom_data.childNodes:
    result = child_node.firstChild.data.split()
print result
If you're just looking for spelling suggestions, you might want to check out something like Wordnik: http://docs.wordnik.com/api/methods
Related to: django - pisa : adding images to PDF output
I've got a site that uses the Google Chart API to display a bunch of reports to the user, and I'm trying to implement a PDF version. I'm using the link_callback parameter in pisa.pisaDocument which works great for local media (css/images), but I'm wondering if it would work with remote images (using a google charts URL).
From the documentation on the pisa website, they imply this is possible, but they don't show how:
Normally pisa expects these files to be found on the local drive. They may also be referenced relative to the original document. But the programmer might want to load from different kinds of sources, like the Internet via HTTP requests, or from a database, or anything else.
This is in a Django project, but that's pretty irrelevant. Here's what I'm using for rendering:
html = render_to_string('reporting/pdf.html', keys,
                        context_instance=RequestContext(request))
result = StringIO.StringIO()
pdf = pisa.pisaDocument(
    StringIO.StringIO(html.encode('ascii', 'xmlcharrefreplace')),
    result, link_callback=link_callback)
return HttpResponse(result.getvalue(), mimetype='application/pdf')
I tried having the link_callback return a urllib2 response object, but that does not seem to work:
def link_callback(uri, rel):
    if uri.find('chxt') != -1:
        url = "%s?%s" % (settings.GOOGLE_CHART_URL, uri)
        return urllib2.urlopen(url)
    return os.path.join(settings.MEDIA_ROOT, uri.replace(settings.MEDIA_URL, ""))
The PDF it generates comes out perfectly, except that the Google Charts images are not there.
Well, this was a whole lot easier than I expected. In your link_callback method, if the uri is a remote image, simply return it unchanged.
def link_callback(uri, rel):
    if uri.find('chart.apis.google.com') != -1:
        return uri
    return os.path.join(settings.MEDIA_ROOT, uri.replace(settings.MEDIA_URL, ""))
The browser is a lot less picky about the image URL than pisa is, so make sure the uri is properly quoted. Mine had space characters in it, which is why it was failing at first (replacing them with '+' fixed it).
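For example, here is a variant of the callback that applies that space fix inline (a sketch based on the '+' replacement that worked for me; adjust for whatever characters your chart URLs contain):

import os
from django.conf import settings

def link_callback(uri, rel):
    if uri.find('chart.apis.google.com') != -1:
        # Chart URLs can contain spaces (e.g. in labels); pisa fails
        # on them, so quote them as '+' before returning the URI.
        return uri.replace(' ', '+')
    return os.path.join(settings.MEDIA_ROOT, uri.replace(settings.MEDIA_URL, ""))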
I'm currently trying to get a grasp on pycurl. I'm attempting to log in to a website. After logging in, the site should redirect to the main page; however, when I try this script, it just returns to the login page. What might I be doing wrong?
import pycurl
import urllib
import StringIO
pf = {'username' : 'user', 'password' : 'pass' }
fields = urllib.urlencode(pf)
pageContents = StringIO.StringIO()
p = pycurl.Curl()
p.setopt(pycurl.FOLLOWLOCATION, 1)
p.setopt(pycurl.COOKIEFILE, './cookie_test.txt')
p.setopt(pycurl.COOKIEJAR, './cookie_test.txt')
p.setopt(pycurl.POST, 1)
p.setopt(pycurl.POSTFIELDS, fields)
p.setopt(pycurl.WRITEFUNCTION, pageContents.write)
p.setopt(pycurl.URL, 'http://localhost')
p.perform()
pageContents.seek(0)
print pageContents.readlines()
EDIT: As pointed out by Peter, the URL should point to a login URL, but the site I'm trying to get this to work on doesn't reveal what that URL would be. The form's action just points to the home page (/index.html).
As you're troubleshooting this problem, I suggest getting a browser plugin like FireBug or LiveHTTPHeaders (I suggest Firefox plugins, but there are similar plugins for other browsers as well). Then you can exercise a request to the site and see what action (URL), method, and form parameters are being passed to the target server. This will likely help elucidate the crux of the problem.
If that's no help, you may consider using a different tool for your mechanization. I've used ClientForm and BeautifulSoup to perform similar operations. Based on what I've read in the pycURL docs and your code above, ClientForm might be a better tool to use. ClientForm will parse your HTML page, locate the forms on it (including login forms), and construct the appropriate request for you based on the answers you supply to the form. You could even use ClientForm with pycURL... but at least ClientForm will provide you with the appropriate action to which to POST, and construct all of the appropriate parameters.
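As a rough sketch of what that looks like (the URL and field names here are hypothetical; ParseResponse and form.click() are ClientForm's documented entry points):

from urllib2 import urlopen
from ClientForm import ParseResponse

# Parse the login page and grab its first form; adjust the index
# and field names to match the real page.
forms = ParseResponse(urlopen("http://localhost/index.html"),
                      backwards_compat=False)
form = forms[0]
form["username"] = "user"
form["password"] = "pass"

# form.click() builds a urllib2.Request carrying the form's real
# action URL and all of its parameters.
response = urlopen(form.click())
print response.geturl()  # final URL after any redirects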
Be aware, though, that if there is JavaScript handling any necessary part of the login form, even ClientForm can't help you there. You will need something that interprets the JavaScript to effectively automate the login. In that case, I've used SeleniumRC to control a browser (and I let the browser handle the JavaScript).
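A minimal Selenium RC sketch along those lines (this assumes a Selenium RC server running on localhost:4444; the locators are hypothetical):

from selenium import selenium

# Drive a real browser so its JavaScript engine handles the login form.
sel = selenium("localhost", 4444, "*firefox", "http://localhost/")
sel.start()
sel.open("/index.html")
sel.type("name=username", "user")
sel.type("name=password", "pass")
sel.click("css=input[type=submit]")
sel.wait_for_page_to_load("30000")
print sel.get_location()  # should now be the post-login page
sel.stop()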
One golden rule: 'break the ice' by enabling debugging when trying to troubleshoot a pycurl script:
Note: don't forget to call p.close() after p.perform().
def test(debug_type, debug_msg):
    if len(debug_msg) < 300:
        print "debug(%d): %s" % (debug_type, debug_msg.strip())

p.setopt(pycurl.VERBOSE, True)
p.setopt(pycurl.DEBUGFUNCTION, test)
Now you can see what your code is actually doing on the wire, since debugging is enabled:
import pycurl
import urllib
import StringIO

def test(debug_type, debug_msg):
    if len(debug_msg) < 300:
        print "debug(%d): %s" % (debug_type, debug_msg.strip())

pf = {'username': 'user', 'password': 'pass'}
fields = urllib.urlencode(pf)
pageContents = StringIO.StringIO()

p = pycurl.Curl()
p.setopt(pycurl.FOLLOWLOCATION, 1)
p.setopt(pycurl.COOKIEFILE, './cookie_test.txt')
p.setopt(pycurl.COOKIEJAR, './cookie_test.txt')
p.setopt(pycurl.POST, 1)
p.setopt(pycurl.POSTFIELDS, fields)
p.setopt(pycurl.WRITEFUNCTION, pageContents.write)
p.setopt(pycurl.VERBOSE, True)
p.setopt(pycurl.DEBUGFUNCTION, test)
p.setopt(pycurl.URL, 'http://localhost')

p.perform()
p.close()  # This is mandatory.

pageContents.seek(0)
print pageContents.readlines()