I want to build an automated testing tool using an API.
First, I log in to the site and get a cookie.
My code is Python 3:
import urllib3
from bs4 import BeautifulSoup

url = 'http://ip:port/api/login'
http = urllib3.PoolManager()
r = http.request('POST', url, fields={'userName': 'id', 'password': 'password'})
soup = BeautifulSoup(r.data.decode('utf-8'), 'lxml')

# Pull the JSESSIONID out of the Set-Cookie response header
str1 = r.headers.get('Set-Cookie')
str2 = 'JSESSIONID' + str1.split('JSESSIONID')[1]
str2 = str2[0:-2]
print(str2)
This prints:
JSESSIONID=df0010cf-1273-4add-9158-70d817a182f7; Path=/; HttpOnly
Then I add the cookie to the headers of a request to another API on the site, but it is not working:
url2 = 'http://ip:port/api/notebook/job/paragraph'
r2 = http.request('POST', url2)
r2.headers['Set-Cookie'] = str2
r2.headers['Cookie'] = str2
http.request('POST', url2, headers=r2.headers)
Why is it not working? The response shows a different cookie. If you have seen this situation before, please explain it to me.
The error is:
HTTP ERROR 500
Problem accessing /api/login;JSESSIONID=b8f6d236-494b-4646-8723-ccd0d7ef832f.
Reason: Server Error
Caused by: javax.servlet.ServletException: Filtered request failed.
ProtocolError: ('Connection aborted.', BadStatusLine('<html>\n',))
Thanks a lot!
Use the requests module in Python 3.x. You have to create a session, which you are not doing now; that's why you are facing problems.
import requests

s = requests.Session()
url = 'http://ip:port/api/login'
r = s.get(url)
dct = s.cookies.get_dict()  # returns the cookies (if any) as a dict
Take whichever cookie the server wants, along with any other required headers, and pass them in the request headers:
jid = dct["JSESSIONID"]
head = {'Cookie': 'JSESSIONID=' + jid}  # plus any other headers the server requires
payload = {'userName': 'id', 'password': 'password'}
r = s.post(url, data=payload, headers=head)
r = s.get('whatever url after login')
To find out which specific headers you have to pass and which parameters the POST requires:
Open the link in Google Chrome.
Open the Developer Console (Fn + F12).
Search the Network tab for the login request (if you cannot find it, input wrong details and submit again).
You will see the request headers and the POST parameters there.
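Putting the whole flow together, here is a minimal sketch against the login endpoint from the question (the URL and form field names are taken from it; adjust them to your server). The point is that the Session stores the JSESSIONID cookie for you, so there is no need to parse Set-Cookie by hand:

import requests

s = requests.Session()
login_url = 'http://ip:port/api/login'
payload = {'userName': 'id', 'password': 'password'}

# The session stores any JSESSIONID cookie from the response automatically.
r = s.post(login_url, data=payload)
r.raise_for_status()

# Later requests on the same session send the stored cookie back automatically.
r2 = s.post('http://ip:port/api/notebook/job/paragraph')
print(r2.status_code, r2.text)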
Related
I'm trying to log in to the Starbucks website (login URL: https://app.starbucks.com/account/signin?ReturnUrl=https%3A%2F%2Fapp.starbucks.com%2Fprofile) with no success.
I used the Firefox inspect tool to find out the URL I am supposed to send a POST request to and what the payload data should look like. I found that the request URL is "https://www.starbucks.com/bff/account/signin" and the payload is something like {"username": "my_username", "password": "my_password"}, so here's my code:
import requests

url = 'https://www.starbucks.com/bff/account/signin'
uname = "my_username"
pwd = "my_password"
payload = {"username": uname, "password": pwd}

with requests.Session() as s:
    p = s.post(url, data=payload)
    print(p.status_code)
The status_code that is printed is always 200, which is strange, because whenever I type invalid credentials manually, the Network tab of the inspect tool shows a 400 response code. Also, whenever I print(p.content) instead of the status code, the content is always the same (for both wrong and correct credentials).
Can somebody help me out?
Thanks in advance.
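One thing worth checking: you captured the payload as JSON, but data=payload sends it form-encoded. requests can send a JSON body (and set the Content-Type header accordingly) with the json= argument; a sketch, assuming the endpoint expects JSON:

import requests

url = 'https://www.starbucks.com/bff/account/signin'
payload = {"username": "my_username", "password": "my_password"}

with requests.Session() as s:
    # json= serializes the dict to a JSON body and sets
    # Content-Type: application/json, matching the captured request.
    p = s.post(url, json=payload)
    print(p.status_code)
    print(p.text)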
I'm having some trouble authenticating myself into the PJ website using the Python requests module. My code is as follows:
import requests

with requests.Session() as s:
    s.auth = ("my_email", "my_password")
    r = s.get("https://www.example.com/")
However, I receive a response which indicates that I haven't logged in. Is it simply impossible to do this? Am I missing something (e.g. CSRF token)?
EDIT: I did some poking around to see what happens when I manually log in. You can see a screengrab of the request my browser sends here: PJ Login Request
Figured it out, following Jon's guidance:
with requests.Session() as s:
    payload = {'user': 'my_email', 'pass': 'my_password', 'target': '/order/menu'}
    r = s.post("https://www.example.com/order/signin", data=payload)
I had already figured out the correct payload from the request (using the screengrab in my edit above), but I was sending it to the wrong location (i.e. the home page instead of the user login page).
Try the POST method, which keeps the credentials out of the URL. Note that with plain HTTP, both GET and POST are sent in plaintext; with HTTPS, both are sent encrypted.
import requests

with requests.Session() as s:
    s.auth = ("my_email", "my_password")
    r = s.post("https://www.papajohns.com/")

>>> r
<Response [200]>
I'm trying to authenticate to a website and fill out a form using the requests lib:
import requests

payload = {"name": "someone", "password": "somepass", "submit": "Submit"}
s = requests.Session()
s.post("https://someurl.com", data=payload)

next_payload = {"item1": "something", "item2": "something", "submit": "Submit"}
r = s.post("https://someurl.com", data=next_payload)
print(r.text)
Authentication works, and I verified that I can post to forms, but the one I am having a problem with responds with "The action could not be completed, perhaps because your session had expired. Please try again".
I attempted the same in urllib2 with the same result, so I don't think it's an issue with a cookie.
I am wondering if JavaScript on this page has something to do with the session error? The other form pages don't have any JavaScript.
Thanks for your input...
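An error like this often comes from a hidden form field (a per-session token) that the browser submits but the script omits, rather than from JavaScript itself. A sketch of collecting hidden inputs with BeautifulSoup before posting; the form URL here is hypothetical and the visible field names are the ones from the question:

import requests
from bs4 import BeautifulSoup

s = requests.Session()
form_url = "https://someurl.com/form"  # hypothetical; substitute the real page

# Fetch the page and collect every hidden input, since a missing hidden
# token is a common cause of "session expired" errors.
page = s.get(form_url)
soup = BeautifulSoup(page.text, "html.parser")
payload = {inp.get("name"): inp.get("value", "")
           for inp in soup.find_all("input", type="hidden")
           if inp.get("name")}

# Add the visible fields on top of the hidden ones, then post.
payload.update({"item1": "something", "item2": "something", "submit": "Submit"})
r = s.post(form_url, data=payload)
print(r.text)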
Using urlopen for URL queries also seems obvious. What I tried is:
import urllib2
query='http://www.onvista.de/aktien/snapshot.html?ID_OSI=86627'
f = urllib2.urlopen(query)
s = f.read()
f.close()
However, for this specific URL query it fails with HTTP error 403 (Forbidden).
When I enter this query in my browser, it works.
It also works when I use http://www.httpquery.com/ to submit the query.
Do you have suggestions on how to use Python correctly to get the correct response?
Looks like it requires cookies (which you can handle with urllib2), but an easier way, if you're doing this, is to use requests:
import requests
session = requests.session()
r = session.get('http://www.onvista.de/aktien/snapshot.html?ID_OSI=86627')
This is generally a much easier and less-stressful method of retrieving URLs in Python.
requests will automatically store and re-use cookies for you. Creating a session is slightly overkill here, but is useful for when you need to submit data to login pages etc..., or re-use cookies across a site... etc...
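For example, continuing the snippet above, the cookies the server set are now on the session and will be sent with any later request:

# The cookies from the first response are stored on the session...
print(session.cookies.get_dict())

# ...and sent back automatically on any follow-up request.
r2 = session.get('http://www.onvista.de/aktien/snapshot.html?ID_OSI=86627')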
Using urllib2, it's something like:
import urllib2, cookielib

cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies))
data = opener.open('url').read()
It appears that the urllib2 default user agent is banned by the host. You can simply supply your own user agent string:
import urllib2
url = 'http://www.onvista.de/aktien/snapshot.html?ID_OSI=86627'
request = urllib2.Request(url, headers={"User-Agent" : "MyUserAgent"})
contents = urllib2.urlopen(request).read()
print contents
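If the host checks both cookies and the user agent, the two suggestions combine naturally in requests; a sketch, reusing the same placeholder user agent string as above:

import requests

session = requests.Session()
headers = {"User-Agent": "MyUserAgent"}  # any non-default string may do here
r = session.get('http://www.onvista.de/aktien/snapshot.html?ID_OSI=86627',
                headers=headers)
print(r.status_code)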
I'm writing a script in Python 3.1.2 that logs into a site and then begins to make requests. I can log in without any great difficulty, but after doing that the requests return an error stating I haven't logged in. My code looks like this:
import urllib.request
from http import cookiejar
from urllib.parse import urlencode

jar = cookiejar.CookieJar()
credentials = {'accountName': 'username', 'password': 'unenc_pw'}
credenc = urlencode(credentials).encode('utf-8')  # POST data must be bytes in Python 3

opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
urllib.request.install_opener(opener)

req = opener.open('http://www.wowarmory.com/?app=armory?login&cr=true', credenc)
test = opener.open('http://www.wowarmory.com/auctionhouse/search.json')

print(req.read())
print(test.read())
The response to the first request is the page I expect to get when logging in.
The response to the second is:
b'{"error":{"code":10005,"error":true,"message":"You must log in."},"command":{"sort":"RARITY","reverse":false,"pageSize":20,"end":20,"start":0,"minLvl":0,"maxLvl":0,"id":0,"qual":0,"classId":-1,"filterId":"-1"}}'
Is there something I'm missing to use any cookie information I have from successful authentication for future requests?
I had this issue once. I couldn't get the automatic cookie management working. It frustrated me for days, and I ended up handling the cookie manually: getting the content of 'Set-Cookie' from the response header and saving it somewhere safe. Subsequently, for any request made to that server, I set 'Cookie' in the request header with the value I got earlier.
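A sketch of that manual approach with urllib.request, using the URLs from the question; it assumes the server sends a single Set-Cookie header:

import urllib.request
from urllib.parse import urlencode

credentials = urlencode({'accountName': 'username', 'password': 'unenc_pw'})
login = urllib.request.urlopen(
    'http://www.wowarmory.com/?app=armory?login&cr=true',
    credentials.encode('utf-8'))

# Save the Set-Cookie value from the login response...
cookie = login.getheader('Set-Cookie')

# ...and send it back by hand on every later request.
req = urllib.request.Request('http://www.wowarmory.com/auctionhouse/search.json')
if cookie:
    req.add_header('Cookie', cookie)
test = urllib.request.urlopen(req)
print(test.read())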