Below is my code for setting and reading cookies in bottle.
if request.get_cookie('mycookiename'):
    cookie_id = request.get_cookie('mycookiename')
else:
    cookie_id = str(uuid4())
    response.set_cookie('mycookiename', cookie_id, max_age=31556952*2, domain='%s' % (cookie_domain))
When I look at it in Firefox with Firebug, I can see that the cookie is set. But when I refresh the page, I get a new cookie; every request gets a new cookie id.
So, how do I resolve this?
The following code works: it sets a new cookie value with uuid4 only if a cookie is not already defined.
In your code, I guess your "else" branch is the problem.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from uuid import uuid4

import bottle

@bottle.route('/cookie')
def cookie():
    cookie_id = bottle.request.get_cookie('mycookiename', str(uuid4()))
    bottle.response.set_cookie('mycookiename', cookie_id)
    return 'hello cookie'

if __name__ == '__main__':
    bottle.run(host='localhost', port=8080)
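For comparison, applying that fix to the question's original snippet might look like this (a minimal sketch; cookie_domain is assumed to be defined elsewhere, as in the question):

from uuid import uuid4
from bottle import request, response

cookie_id = request.get_cookie('mycookiename')
if not cookie_id:
    # Only generate and set a new id when no cookie came in with the request.
    cookie_id = str(uuid4())
    response.set_cookie('mycookiename', cookie_id,
                        max_age=31556952 * 2, domain=cookie_domain)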
I have the following cookie saved by curl (in test.txt, tab-separated, this editor doesn't preserve tabs):
# Netscape HTTP Cookie File
# http://curlm.haxx.se/rfc/cookie_spec.html
# This file was generated by libcurl! Edit at your own risk.
#HttpOnly_my-example.com FALSE / FALSE 0 _rails-root_session test
I'm trying to read it with the following code:
import sys

if sys.version_info < (3,):
    from cookielib import Cookie, MozillaCookieJar
else:
    from http.cookiejar import Cookie, MozillaCookieJar

def load_cookies_from_mozilla(filename):
    ns_cookiejar = MozillaCookieJar()
    ns_cookiejar.load(filename, ignore_discard=True)
    return ns_cookiejar

cookies = load_cookies_from_mozilla("test.txt")
print (len(cookies))
It outputs 0 (unable to read the cookie).
If I manually modify my cookie to the following line (removing the HttpOnly flag and changing the 0 to an empty string for the expiration time; again, tab-separated):
my-example.com FALSE / FALSE _rails-root_session test
then it outputs 1 (successfully read the cookie).
What needs to be done to my python code to read the original cookie line? And preferably to be able to save it in the same format (with HttpOnly flag and with 0 instead of empty string for never-expiring cookie)?
Thanks.
This appears to be an open bug: https://bugs.python.org/issue2190.
This bug report contains a link to a workaround: https://gerrit.googlesource.com/git-repo/+/master/subcmds/sync.py#995
In that linked code, the developer creates a temporary cookies file, removes the "#HttpOnly_" prefixes, and then creates a cookiejar with that temporary file.
tmpcookiefile = tempfile.NamedTemporaryFile()
tmpcookiefile.write("# HTTP Cookie File")
try:
    with open(cookiefile) as f:
        for line in f:
            if line.startswith("#HttpOnly_"):
                line = line[len("#HttpOnly_"):]
            tmpcookiefile.write(line)
    tmpcookiefile.flush()

    cookiejar = cookielib.MozillaCookieJar(tmpcookiefile.name)
    try:
        cookiejar.load()
    except cookielib.LoadError:
        cookiejar = cookielib.CookieJar()
finally:
    tmpcookiefile.close()
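For Python 3's http.cookiejar, a similar workaround might look like this (a minimal sketch; the function name and temp-file handling are mine, not from the linked code):

import os
import tempfile
from http.cookiejar import MozillaCookieJar

def load_cookies_stripping_httponly(cookiefile):
    """Copy the cookie file without '#HttpOnly_' prefixes, then load it."""
    with tempfile.NamedTemporaryFile(mode='w', delete=False, suffix='.txt') as tmp:
        tmp.write("# Netscape HTTP Cookie File\n")
        with open(cookiefile) as f:
            for line in f:
                if line.startswith("#HttpOnly_"):
                    line = line[len("#HttpOnly_"):]
                tmp.write(line)
        tmpname = tmp.name
    try:
        jar = MozillaCookieJar(tmpname)
        jar.load(ignore_discard=True, ignore_expires=True)
    finally:
        os.remove(tmpname)
    return jar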
I tested your code, modified it, and it works.
First, in the cookie file you have to remove the '#' before your cookie line; otherwise everything after it is treated as a comment.
Second, the 0 in the cookie is the expiry time: 0 means it has already expired. You can change the 0 to an empty string or a later time, but I suggest you use the argument ignore_expires=True. The official docs say:
ignore_discard: save even cookies set to be discarded.
ignore_expires: save even cookies that have expired. The file is overwritten if it already exists.
The resulting code is:
import sys

if sys.version_info < (3,):
    from cookielib import Cookie, MozillaCookieJar
else:
    from http.cookiejar import Cookie, MozillaCookieJar

def load_cookies_from_mozilla(filename):
    ns_cookiejar = MozillaCookieJar()
    ns_cookiejar.load(filename, ignore_discard=True, ignore_expires=True)
    return ns_cookiejar

cookies = load_cookies_from_mozilla("test.txt")
print (len(cookies))
See this question for more detail:
Using cookies.txt file with Python Requests
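For the second part of the question (writing the jar back out), MozillaCookieJar.save() takes the same flags, but note that the standard class does not re-add the #HttpOnly_ prefix, so that marker is lost on a round trip:

# Write the jar back in Netscape format, keeping session and expired cookies.
cookies.save("test-out.txt", ignore_discard=True, ignore_expires=True)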
I'm building a script to monitor a reporting service. Depending on how long the report takes to process, it either appears as HTML or arrives via XMLHttpRequest.
As a tool to check the page I want to use spynner, which works perfectly for HTML, but I can't seem to get it to work when the data arrives via XHR.
The code for the test is the following:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
__docformat__ = 'restructuredtext en'

from time import sleep

from spynner import browser
import pyquery
from PyQt4.QtCore import QUrl
from PyQt4.QtNetwork import QNetworkRequest, QNetworkAccessManager
from PyQt4.QtCore import QByteArray

def load_page(br):
    ret = br.load_jquery(True)
    print ret
    return 'Japan' in br.html

br = browser.Browser(
    debug_level=4
)

br.load('https://foobar.eu/newton/cgi-bin/cognos.cgi')
br.create_webview()
br.show()

#br.load("https://foobar.eu/newton/cgi-bin/cognos.cgi?b_action=xts.run&m=portal/cc.xts&m_folder=iA37B5BBC0615469DA37767D2B6F1DCF1")
#br.browse()

res = br.load("https://foobar.eu:443/newton/cgi-bin/cognos.cgi?b_action=cognosViewer&ui.action=run&ui.object=/content/folder[@name='DMA Admin Zone']/folder[@name='02. Performance Benchmark Module']/folder[@name='1. Reports']/report[@name='CQM_Test_3_HTML_Heavy_Local_Processing_Final']&ui.name=CQM_Test_3_HTML_Heavy_Local_Processing_Final&run.outputFormat=&run.prompt=true", 1, wait_callback=load_page)

d = str(pyquery.PyQuery(br.html))
if d.find("Japan") > -1:
    print 'We discovered Japan!'
else:
    print 'Japan is nowhere to be seen!'

sleep(10)
The URL in the comments is a page which contains a link to the report. When I click the report by hand, it works (via XHR). However, I can't seem to get it to work via scripting.
br.load_jquery always returns None.
As a help I have added part of the spynner debug trace when I click the link by hand: http://fpaste.org/97583/13987135/
In Firebug I can clearly see the XHR response with the string 'Japan' in it.
What am I missing?
Apparently, replacing the load_page function with the following code makes it work:
def load_page(br):
    br.wait(5)
    return 'Japan' in br.html
I'm creating a simple transit Twitter bot which posts a tweet to my API, then grabs the result to reply with an answer on travel times and such. All the magic is on the server side, and this code should work just fine. Here's how:
A user composes a tweet like the one below:
@kollektiven Sundsvall Navet - Ljustadalen
My script removes the @kollektiven from the tweet and sends the rest, Sundsvall Navet - Ljustadalen, to our API. The API then returns JSON to the script, which should reply with an answer like this:
@jackbillstrom Sundsvall busstation Navet (2014-01-08 20:45) till Ljustadalen centrum (Sundsvall kn) (2014-01-08 20:59)
But it doesn't. I'm using this code from GitHub called spritzbot. I edited extensions/hello.py to look like the code below:
# -*- coding: utf-8 -*-
import json, urllib2, os

os.system("clear")

def process_mention(status, settings):
    print status.user.screen_name, ':', status.text.encode('utf-8')
    urlencode = status.text.lower().replace(" ", "%20")  # URL-encoding
    tweet = urlencode.strip('@kollektiven ')
    try:
        call = "http://xn--datorkraftfrvrlden-xtb17a.se/kollektiven/proxy.php?input=" + tweet  # Endpoint
        endpoint = urllib2.urlopen(call)  # GET request to the API endpoint
        data = json.load(endpoint)  # Load JSON
        answer = data['proxyOutput']  # The answer from the API
        return dict(response=str(answer))  # Posts answer tweet
    except:
        return dict(response="Error, kontakta @jackbillstrom")  # Error message
What is causing this problem? And why? I made some changes before arriving at this revision, and it worked back then.
You need:
if __name__ == '__main__':
    process_mention(...)
    ...
You're not calling process_mention anywhere, just defining it.
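For a quick local test, a minimal sketch might look like this (the FakeStatus stub and empty settings dict are mine, not part of spritzbot; the framework normally passes a real status object with .user.screen_name and .text):

# Hypothetical stand-in for the status object spritzbot would pass in.
class FakeUser(object):
    screen_name = u'jackbillstrom'

class FakeStatus(object):
    user = FakeUser()
    text = u'@kollektiven Sundsvall Navet - Ljustadalen'

if __name__ == '__main__':
    print process_mention(FakeStatus(), settings={})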
I'm making a small flask app where I had something like this:
@app.route('/bye')
def logout():
    session.pop('logged_in', None)
    flash('Adiós')
    return redirect('/index')
Needless to say, when I ran the application and navigated to '/bye', it gave me a UnicodeDecodeError. Well, now I get the same UnicodeDecodeError on every page that extends the base template (which renders the messages), even after restarting the application, and always with the same dump(), despite removing that flash() call from the source code. All I can think is: what the crap? Help please.
Well I had to restart my computer to clear the stupid session cache or something.
I think that flash() actually stores the messages in session['_flashes']. See this code here. So you will probably have to do one of the following (see the sketch after this list):
clear/delete the cookie
OR
session.pop('_flashes', None)
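If you go the session.pop route, a small throwaway route is enough to consume the stuck messages once (a minimal sketch; assumes your Flask app object is named app):

from flask import session

@app.route('/clear-flashes')
def clear_flashes():
    # Drop any flashed messages stuck in the session cookie.
    session.pop('_flashes', None)
    return 'flashes cleared'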
Flask's flashing stores the messages in the session cookie until they are successfully "consumed".
If you get a UnicodeDecodeError (https://wiki.python.org/moin/UnicodeDecodeError), the messages are never consumed, so you get the error again and again.
My solution was to delete the cookie from the browser.
Since I had the problem when using localization, I fixed the cause by installing my translation object like this:
trans = gettext.GNUTranslations(...)
trans.install(unicode=True)
and having UTF-8 encoding in my Python source files and "Content-Type: text/plain; charset=UTF-8\n" in the translation file (.pot).
You're using a non-ASCII string, "Adiós", so you need to ensure that Python processes your source strings as UTF-8, not ASCII.
Add this to the header of your Python file. It tells the interpreter that your file contains UTF-8 strings:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
so your code will be something like this:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from flask import Flask, session, flash, redirect

app = Flask(__name__)

@app.route('/bye')
def logout():
    session.pop('logged_in', None)
    flash('Adiós')
    return redirect('/index')
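If the coding declaration alone does not clear the error on Python 2, the literal itself may also need to be unicode rather than a byte string (my suggestion, not part of the answer above):

flash(u'Adiós')  # a unicode literal avoids the implicit ASCII decode that raises UnicodeDecodeError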
I'm making an auto-login script using mechanize in Python.
I've used mechanize before with no problem, but I can't make it work with this site, www.gmarket.co.kr.
Whenever I try to log in, the login page is always returned, even with a correct Gmarket id and password. I can't log in, and I see this suspicious message:
"<script language=javascript>top.location.reload();</script>"
I think this is related to my problem, but I don't know exactly how to handle it.
Here is a sample id and password for the login test:
id: tgi177 pass: tk1047
If anyone can help, I'd much appreciate it. Thanks in advance.
CODE:
# -*- coding: cp949 -*-
from lxml.html import parse, fromstring
import sys, os
import mechanize, urllib
import cookielib
import re
from BeautifulSoup import BeautifulSoup, BeautifulStoneSoup, Tag

try:
    params = urllib.urlencode({'command': 'login',
                               'url': 'http%3A%2F%2Fwww.gmarket.co.kr%2F',
                               'member_type': 'mem',
                               'member_yn': 'Y',
                               'login_id': 'tgi177',
                               'image1.x': '31',
                               'image1.y': '26',
                               'passwd': 'tk1047',
                               'buyer_nm': '',
                               'buyer_tel_no1': '',
                               'buyer_tel_no2': '',
                               'buyer_tel_no3': ''
                               })

    rq = mechanize.Request("http://www.gmarket.co.kr/challenge/login.asp")
    rs = mechanize.urlopen(rq)
    data = rs.read()

    logged_in = r'input_login_check_value' in data
    if logged_in:
        print ' login success !'
        rq = mechanize.Request("http://www.gmarket.co.kr")
        rs = mechanize.urlopen(rq)
        data = rs.read()
        print data
    else:
        print 'login failed!'
        pass
        quit()
except:
    pass
mechanize doesn't have the ability to interact with JavaScript. The spidermonkey module will probably help you (I have no experience with it, but the description is quite promising). You could also handle such a reload manually (e.g. Browser.reload() for this particular case) if this is the only site where you have the problem.
Update:
A quick look at your page shows that you have to submit to a different URL (with the https scheme). Look at the checkValid() JavaScript function; posting there gives a different result. Note that this looks like homework you should do yourself before asking.
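As a rough sketch of the manual-reload idea (the target URL and form fields below are taken from the question, trimmed for brevity; whether login.asp is the right endpoint is an assumption, since the real form may post to a different https:// URL as noted above):

import urllib

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)

# Field names copied from the question's params dict.
params = urllib.urlencode({'command': 'login',
                           'login_id': 'tgi177',
                           'passwd': 'tk1047'})

br.open("http://www.gmarket.co.kr/challenge/login.asp", data=params)
br.reload()  # stand-in for the page's top.location.reload()
print br.response().read()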