Response from Python POST request different from Chrome network - python

I'm just learning about the requests module in Python and want to test it.
First I looked at the Chrome network tab to find the request URL, the method (POST), and the form data used. The response tab in Chrome shows the expected data as JSON, but running it from a Python file like this:
import requests

uid = "23415191"
url = "https://info.gbfteamraid.fun/web/userrank"
honors = requests.post(url, data={
    "method": "getUserDayPoint",
    "params": {"teamraidid": "teamraid056", "userid": uid}
}).text
print(honors)
gives me the HTML of the site's homepage instead of the JSON response. I also tried it in Postman and got the same result.

You are being redirected to a different page, maybe because you are not authenticated. Most probably you need to authenticate your request. I have added basic auth, which uses an ID and password. Try this with the ID and password that you have for that website and see if it works. Just replace the placeholders with your own username and password.
import requests
from requests.auth import HTTPBasicAuth

uid = "11111111"
url = "https://info.gbfteamraid.fun/web/userrank"
honors = requests.post(url, auth=HTTPBasicAuth(username="<your username>", password="<your password>"), data={
    "method": "getUserDayPoint",
    "params": {"teamraidid": "teamraid056", "userid": uid}
}).text
print(honors)
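A separate point worth checking (my own note, not part of the answer above): the nested "params" object suggests a JSON-RPC-style endpoint, and requests' form encoding via data= does not serialize nested dicts the way you might expect. If authentication turns out not to be the issue, sending the payload as a JSON body is worth a try. A minimal sketch:
import requests

uid = "23415191"
url = "https://info.gbfteamraid.fun/web/userrank"
# json= serializes the whole payload, nested dict included, and sets
# Content-Type: application/json; data= would mangle the nested "params" dict
honors = requests.post(url, json={
    "method": "getUserDayPoint",
    "params": {"teamraidid": "teamraid056", "userid": uid},
}).text
print(honors)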

Related

creating an HTTP request URL which can be pasted in a browser, from a server address and request-body JSON

I have the following three parts: server, json_str, req
server = 'http://example.com:9013/run'
json_str = '[{"Leg":[{"currency":"INR","Type":"NA"}],"P":"xyz","code":"0100"}]'
import json as js

req = js.dumps({
    "func": "rfqfunc",
    "args": ["dummyQuote", json_str, 'True']
})
I usually get a response for this using
requests.post(server, data=req)
but using these parts, how do I make a URL that can be pasted into a browser address bar, like the following (the desired outcome)?
http://example.com:9013/run?func=rfqfunc&dummyQuote&[%20{%20"P":%20"xyz",%20"code":%20"0100",%20"Leg":%20[%20{%20"currency":%20"INR",%20"Type":%20"NA"%20}%20]}%20]
I have searched a lot on urlparse, urljoin, and urlencode, but nothing has given me anything near the desired result.
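This question has no answer in the thread, so here is a hedged sketch: urllib.parse.urlencode percent-encodes each parameter into an address-bar-safe query string. Whether the server will accept the arguments as GET parameters instead of a POST body depends on the rfqfunc endpoint, so treat this as an approach to try rather than a confirmed fix:
from urllib.parse import urlencode

server = 'http://example.com:9013/run'
json_str = '[{"Leg":[{"currency":"INR","Type":"NA"}],"P":"xyz","code":"0100"}]'

# urlencode percent-encodes the values, so the JSON survives the address bar
query = urlencode({'func': 'rfqfunc', 'args': json_str})
url = server + '?' + query
print(url)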

Redirection 302 for PUT request via Shopify API

Set-up
I'm using the Shopify Admin API Python library to access our shops via a private app connection.
GET requests work fine, i.e. product = shopify.Product.find(6514193137813) retrieves the product data.
Issue
If I want to update an existing product, PUT requests don't work. That is,
product.title = 'test123'
product.save()
returns a Redirection: Response(code=302,...).
So far
I've double-checked that read and write permissions are on.
Then I found this question on the Shopify forum: https://community.shopify.com/c/Shopify-APIs-SDKs/API-Fulfillment-Status-Code-302-Redirect/m-p/747383/highlight/true#, where Shopify staff member hassain indicates that Shopify prevents you from using HTTP Basic Auth for POST requests that have cookies.
But since his statement is about POST requests, I'm not sure this is relevant.
How do I make the PUT request work?
I followed MalcolmInTheCenter's comment and successfully updated the product using his request code:
import requests

payload = {
    "product": {
        "title": "My new title"
    }
}
api_key = "xxxxx"
password = "xxxxxx"
headers = {"Accept": "application/json", "Content-Type": "application/json"}

# update the product through the REST Admin API, authenticating with the
# private app's key and password embedded in the URL
r = requests.put("https://" + api_key + ":" + password + "@MYSTORE.myshopify.com/admin/api/2021-01/products/{PRODUCT_ID}.json", json=payload, headers=headers)
print(r)
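A slightly cleaner equivalent (my variation, not from the original answer) passes the credentials through requests' auth parameter instead of embedding them in the URL; {PRODUCT_ID} stays a placeholder for the numeric product id:
import requests

payload = {"product": {"title": "My new title"}}
api_key = "xxxxx"
password = "xxxxxx"
headers = {"Accept": "application/json", "Content-Type": "application/json"}

r = requests.put(
    "https://MYSTORE.myshopify.com/admin/api/2021-01/products/{PRODUCT_ID}.json",
    json=payload,
    headers=headers,
    auth=(api_key, password),  # HTTP Basic auth, same credentials as above
)
print(r)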

Remote login to decoupled website with python and requests

I am trying to log in to the website www.seek.com.au, to test whether a remote login is possible with Python's requests module. The site's front end is built with React, so I don't see any form action attribute on www.seek.com.au/sign-in.
When I run the code below, I see response code 200, indicating success, but I doubt it actually succeeded. The main concern is which URL to use when there is no action element in the login form.
import requests

payload = {'email': <username>, 'password': <password>}
url = 'https://www.seek.com.au'
with requests.Session() as s:
    response_op = s.post(url, data=payload)
    # print the response status code
    print(response_op.status_code)
    print(response_op.text)
When I examine the output (response_op.text), I see the words 'Sign in' and 'Register', which indicate the login failed. If it were successful, the user's first name would be shown in their place. What am I doing wrong here?
P.S.: I am not trying to scrape data from this website; I am trying to log in to a similar website.
Try this code:
import requests

payload = {"email": "test@test.com", "password": "passwordtest", "rememberMe": True}
url = "https://www.seek.com.au:443/userapi/login"
with requests.Session() as s:
    response_op = s.post(url, json=payload)
    # print the response status code
    print(response_op.status_code)
    print(response_op.text)
You are sending the request to the wrong URL: the React front end posts the credentials as JSON to the /userapi/login endpoint, not to the homepage.
Hope this helps
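One follow-up worth adding (an assumption beyond the original answer): because the POST happens inside a requests.Session, any cookies the login endpoint sets are stored on s and replayed automatically, so a later request in the same with block should come back authenticated. A minimal sketch, where the profile URL is hypothetical:
import requests

payload = {"email": "test@test.com", "password": "passwordtest", "rememberMe": True}
with requests.Session() as s:
    s.post("https://www.seek.com.au:443/userapi/login", json=payload)
    # the session now carries the login cookies
    profile = s.get("https://www.seek.com.au/profile")  # hypothetical page
    print(profile.status_code)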

how to save the authentication cookie while doing a POST login to the server

I am trying to log in to the server via POST with my Python script, but it is not working, although when I do the same POST via Postman it works fine. I believe my Python script is not saving the authentication cookie information, or maybe I have to add more fields to my payload. I am at a very beginner level of programming, so please guide me on how I can save that authentication cookie so I can use it in my next GET and POST requests.
When I run this POST request via Postman, I simply give the username and password in the body, and I get the following successful response:
{
    "ErrorCode": 0,
    "Data": {
        "role": "admin",
        "_id": "7c9e7mdf4d249212282480zb",
        "name": "test5"
    }
}
but when I run the Python script below, I get
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
500 Internal Server Error
Here is my Python script:
import requests
url = "http://172.125.169.21/api/user/login"
payload = "{\"name\": \"test5\", \"password\": \"Hello123\"}"
response = requests.request("POST", url, data=payload)
print(response.text)
print(response.headers)
For the requests library in Python, the data argument takes a Python dict, not a JSON string. Edit your code to:
import requests
import json
url = "http://172.125.169.21/api/user/login"
payload = {"name": "test5", "password": "Hello123"}
headers = {'Content-Type': "application/json"}
response = requests.request("POST", url, json=payload, headers=headers)
# the json parameter should handle encoding for you
print(response.text)
print(response.headers)
Cookies are available to you in the cookies attribute of the response object. See the requests documentation for more information.
print(json.dumps(response.cookies.get_dict(), indent=4))  # the cookiejar itself is not JSON-serializable
This should pretty-print the cookie(s) you've received. You can use requests' session-handling abilities or a variable to store this information, however you choose. We'll save these cookies to a requests cookiejar:
cookies = response.cookies
and we'll use those cookies in the authorization check or any other requests:
auth_check_url = "http://172.125.169.21/api/user/checkLogin"  # the scheme is required, or requests raises MissingSchema
response = requests.get(auth_check_url, cookies=cookies)
print(response.text)
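Alternatively (a sketch that goes beyond the original answer), a requests.Session stores and replays cookies for you, so nothing needs to be passed around by hand:
import requests

with requests.Session() as s:
    login = s.post("http://172.125.169.21/api/user/login",
                   json={"name": "test5", "password": "Hello123"})
    print(login.status_code)
    # the session keeps any Set-Cookie values and sends them automatically
    check = s.get("http://172.125.169.21/api/user/checkLogin")
    print(check.text)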

how to use python requests to log in to a website

I'm trying to log in to and scrape a job site, and to send myself a notification whenever certain keywords are found. I think I have correctly traced the XPath for the value of the field "login[iovation]", but I cannot extract the value. Here is what I have done so far to log in:
import requests
from lxml import html

header = {"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"}
login_url = 'https://www.upwork.com/ab/account-security/login'
session_requests = requests.session()

# get the CSRF token and iovation value from the login page
result = session_requests.get(login_url)
tree = html.fromstring(result.text)
auth_token = list(set(tree.xpath('//*[@name="login[_token]"]/@value')))
auth_iovation = list(set(tree.xpath('//*[@name="login[iovation]"]/@value')))

# create payload
payload = {
    "login[username]": "myemail@gmail.com",
    "login[password]": "pa$$w0rD",
    "login[_token]": auth_token,
    "login[iovation]": auth_iovation,
    "login[redir]": "/home",
}

# perform login
scrapeurl = 'https://www.upwork.com/ab/find-work/'
result = session_requests.post(login_url, data=payload, headers=dict(referer=login_url))

# test the result
print(result.text)
(A screenshot of the form data from a successful browser login was attached to the original post.)
This is because Upwork uses something called iOvation (https://www.iovation.com/) to reduce fraud. iOvation takes a digital fingerprint of your device/browser, which is sent via the login[iovation] parameter.
If you look at the JavaScript loaded on the site, you will find two scripts being loaded from the iesnare.com domain. This domain, among many others, is owned by iOvation and drops third-party JavaScript to identify your device/browser.
I think if you copy the string from the successful login and send it over along with all the HTTP headers as-is, including the browser agent, you should be okay.
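If you try that route, here is a rough sketch of what the answer describes; every value is a placeholder you would capture from your own browser's network tab, none of it comes from the original thread:
# values copied from the browser's form data and request headers (placeholders)
payload["login[iovation]"] = "<iovation string from the successful browser login>"
headers = {
    "User-Agent": "<your real browser's user-agent string>",
    "Referer": login_url,
}
result = session_requests.post(login_url, data=payload, headers=headers)
print(result.status_code)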
Are you sure that result is fetching a 2XX code?
When I run this code, result = session_requests.get(login_url) fetches a 403 status code, which means I am not even getting to login_url itself.
They have an official API now, so there is no need for scraping; just register for API keys.
