I'm wondering if someone can help me understand what is going on here. Here is a dump of the session request cookies captured with Burp Suite:
Cookie: ivid=2b69ca90af79f6f9cab166cf8aa1fa6fca1e585068; _fbp=fb.1.1675219717059.835130992; BVBRANDID=927986dc-482f-4177-bd02-bfe236e29500; _gcl_au=1.1.1393921753.1675219715.1972114106.1675219752.1675219754; fornax_anonymousId=dfe7b317-a851-4d02-9e24-948eab4888bc; SHOP_SESSION_TOKEN=60659e2b-6a03-493d-8c18-7aec0bd0b892; MCPopupClosed=yes; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%22solarrenogade%40gmail.com%22%2C%22first_id%22%3A%221860ae0368c1aa-0ca11d6d7e597a-12363b7c-1024000-1860ae0368e49d%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%2C%22subscribed%22%3Afalse%2C%22vip_level%22%3A%22EG4%22%7D%2C%22identities%22%3A%22eyIkaWRlbnRpdHlfY29va2llX2lkIjoiMTg2MGFlMDM2OGMxYWEtMGNhMTFkNmQ3ZTU5N2EtMTIzNjNiN2MtMTAyNDAwMC0xODYwYWUwMzY4ZTQ5ZCIsIiRpZGVudGl0eV9lbWFpbCI6InNvbGFycmVub2dhZGVAZ21haWwuY29tIn0%3D%22%2C%22history_login_id%22%3A%7B%22name%22%3A%22%24identity_email%22%2C%22value%22%3A%22solarrenogade%40gmail.com%22%7D%2C%22%24device_id%22%3A%221860ae0368c1aa-0ca11d6d7e597a-12363b7c-1024000-1860ae0368e49d%22%7D; _uetvid=ed918ce0a1da11ed83f7458631c57a2d; _clck=12rn9lp|1|f97|0; _ga_H5B9TVGZE7=GS1.1.1676610376.2.1.1676610391.0.0.0; _gid=GA1.2.1846785940.1676693016; STORE_VISITOR=1; cto_bundle=fHXkHl9scnBTbFc0ekQ3ajdCb0x3WVNRZjFFQm14Szh4TjQ1TUFJVjlwUlpVTkNkZGx2WmE2Yjl2ZG5acEFZVm1OQmVLTnNJV0lpMnpoUXZpRWg1WTc3UFFZS2V5ME81VFdvTlRnWGVOZ2UzWUpLVHY3ZWNDNDdBdGVjcG41Q2ljVDV1d0YlMkZENW8zcFR6RnhwYm9jNE12ZUNCZyUzRCUzRA; SHOP_DEVICE_TOKEN=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2NzY2OTk0MzEsImV4cCI6MTY3OTI5MTQzMSwiaXNzIjoiaHR0cHM6XC9cL2NhLnJlbm9neS5jb20iLCJzdWIiOiJjMTRlMzhhNGMyMzhkYWY1NTc1MjNlMWQyYmVhZGNmYzNkYzBjYTgwMTdkNTJlODFkOTAyNGQ0MjBjYmY0MmJmIiwianRpIjoiZTkwY2E5NDYxNTdlOTM1YTM3MjY0NzU4MzI5NTkzMjIzYjg0MmRiZDI0NWEzYTEyMzFkNjE5NGI5NjY2MzdmNSJ9.yZG_uaMXQfG-JpZbBjFrcLVz9yAfF2XLqhRR6uHQuHA; SHOP_SESSION_ROTATION_TOKEN=d46488c471fec1b94410c59e877ded9497d784d57472c7faf5e28c059e69cef5; _ga=GA1.2.358549519.1676693016; _ga_VH8KS80LN0=GS1.1.1676698182.2.1.1676699521.55.0.0; athena_short_visit_id=da42be82-7cde-4703-a69a-11d4532ecaa0:1676722811; Shopper-Pref=813A2F6320AAACB077092511FB68BE83A0C837ED-1677327611389-x%7B%22cur%22%3A%22CAD%22%7D
The issue is that when I use the following code in Python, it only shows the following keys and values:
['SHOP_SESSION_TOKEN', 'Shopper-Pref', 'XSRF-TOKEN', 'athena_short_visit_id', 'fornax_anonymousId'], so the others are missing. Here is the relevant Python code:
import requests

session = requests.Session()
r = session.get(url, headers=headers)  # url and headers are defined elsewhere
print(session.cookies.get_dict())
print(session.cookies.keys())
The output I get is:
`{'SHOP_SESSION_TOKEN': '084ce53d-a2eb-47ff-85d5-acb03ad34826', 'Shopper-Pref': '783445C1D8C3FE280AAAFC1CDAA300EFAB0ECBCC-1677333512986-x%7B%22cur%22%3A%22CAD%22%7D', 'XSRF-TOKEN': '084aaf32400408c63df8a546ac83a47817097fa366408b858b5e3c0c23842d6f', 'athena_short_visit_id': '9e9249af-b511-46aa-9958-8a622d1bc597:1676728709', 'fornax_anonymousId': '1560e7ff-3b6c-41f6-bd0c-70bd056efe54'}
['SHOP_SESSION_TOKEN', 'Shopper-Pref', 'XSRF-TOKEN', 'athena_short_visit_id', 'fornax_anonymousId']`
As you can see, I am missing a lot of cookies compared to the Burp Suite output. Any help is appreciated. The code listed above, run in Python, is what I have already tried.
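For reference, one diagnostic sketch (reusing the url and headers variables from above) is to print the Set-Cookie headers from every response in the redirect chain, to see which cookies the server itself actually sets. Cookies created by in-browser JavaScript, such as _ga or _fbp, would never show up here, because requests does not execute JavaScript:

import requests

session = requests.Session()
r = session.get(url, headers=headers)

# Walk the redirect chain and print what the server set on each hop.
for resp in r.history + [r]:
    print(resp.url, resp.headers.get('Set-Cookie'))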
Related
I'm trying to make a script to auto-login to this website and I'm having some trouble. I was hoping I could get assistance with making this work. I have the code below assembled, but I get 'Your request cannot be processed at this time\n' at the bottom of what's returned to me, when I should be getting different HTML if it were successful:
from pyquery import PyQuery
import requests

url = 'https://licensing.gov.nl.ca/miriad/sfjsp?interviewID=MRlogin'
values = {
    'd_1553779889165': 'email#email.com',
    'd_1553779889166': 'thisIsMyPassw0rd$$$',
    'd_1618409713756': 'true',
    'd_1642075435596': 'Sign in'
}
r = requests.post(url, data=values)
print(r.content)
I do this in .NET, but I think the logic can be written in Python as well.
Firstly, I always use Fiddler to capture the requests that a webpage sends, then identify the request you want to replicate and add all the cookies and headers that are sent with it to your code.
After sending the login request you will get some cookies that identify that you've logged in, and you use those cookies to proceed further on the site. For example, if you want to retrieve a user's info after logging in, you first need to convince the server that you are logged in, and that is where those login cookies help you.
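In Python, the flow usually looks something like the sketch below. The login URL and form field names here are placeholders, not the real ones for this site; a requests.Session stores the login cookies automatically and sends them with every later request:

import requests

session = requests.Session()

# Hypothetical login endpoint and form fields -- capture the real
# request in Fiddler and copy the exact names and values from it.
login_data = {'username': 'you@example.com', 'password': 'hunter2'}
resp = session.post('https://example.com/login', data=login_data)

# The session now holds whatever cookies the server set on login,
# so authenticated pages can be fetched with the same session object.
profile = session.get('https://example.com/account/profile')
print(profile.status_code)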
Also, I don't think the login would be so simple to script, because if you're trying to automate a government site, they may have some anti-bot security in place, some kind of fingerprinting or captcha.
Hope this helps!
Hey! I am new here, so let me describe my issue clearly. Please ignore mistakes.
I am making a request to a page which literally runs on JS.
Actually, it's the Paytm payment response page for UPI.
Whenever I make the request, the response is {'POLL_STATUS':"STOP_POLLING"}.
But the problem is that the request gives this response while the browser gives a different response with loaded HTML.
I tried everything, like stopping redirects and printing the raw content; nothing works.
I think maybe a urllib POST request might work, but I do not know how to use it.
Can anyone please tell me how to get the exact HTML response that the browser gives?
Note[0]: Please don't suggest Selenium, because this issue occurs in the middle of my script.
Note[1]: Friendly answers appreciated.
for i in range(0, 15):
    resp_check_transaction = self.s.post(
        "https://secure.website.in/theia/upi/transactionStatus?MID=" + str(Merchant_ID)
        + "&ORDER_ID=" + str(ORDER_ID),
        headers=check_transaction(str(ORDER_ID)),
        data=check_transaction_payload(Merchant_ID, ORDER_ID, TRANSID, CASHIERID))
    print(resp_check_transaction.text)
    resp_check_transaction = resp_check_transaction.json()
    if resp_check_transaction['POLL_STATUS'] == "STOP_POLLING":
        print("Breaking loop")
        break
    time.sleep(4)

self.clear_header()
parrms = {
    "MID": str(Merchant_ID),
    "ORDER_ID": str(ORDER_ID)
}
resp_transaction_pass = requests.post(
    "https://secure.website.in/theia/upi/transactionStatus",
    headers=transaction_pass(str(ORDER_ID)),
    data=transaction_pass_payload(CASHIERID, UPISTATUSURL, Merchant_ID, ORDER_ID, TRANSID, TXN_AMOUNT),
    params=parrms,
    allow_redirects=True)
print("Printing response")
print(resp_transaction_pass.text)
print(resp_transaction_pass.content)
And in the web browser it's showing Status Code: 302 Moved Temporarily for the bank's response. :(
About the 302 status code
You mention that the web browser shows a 302 status code in response to the request. In the simplest terms, the 302 status code is just the web server's way of saying, "Hey, I know what you're looking for, but it is actually located at this other URL."
Basically, all modern browsers and HTTP request libraries like Python's Requests will automatically follow a 302 redirect and act as though you sent the request to the new URL instead. (Your browser's developer tools may show that a 302 redirect happened, but as far as the JavaScript is concerned it just got a normal 200 response.)
If you really want to see whether your Python script receives a 302 status, you can do so by setting the allow_redirects option to False, but this means you will have to fetch the content from the new URL manually.
import requests
r1 = requests.get('https://httpstat.us/302', allow_redirects=False)
r2 = requests.get('https://httpstat.us/302', allow_redirects=True)
print('No redirects:', r1.status_code) # 302
print('Redirects on:', r2.status_code) # 200 (status code of page it redirects to)
Note that allow_redirects is already set to True by default; I just wanted to make the example a bit more verbose so the difference is obvious.
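If you do disable redirects, the new location is exposed in the Location response header, so a sketch of following it by hand (against the same test endpoint as above) looks like this:

import requests
from urllib.parse import urljoin

# Fetch without following the redirect, then follow it manually.
r = requests.get('https://httpstat.us/302', allow_redirects=False)
if r.status_code in (301, 302, 303, 307, 308):
    # Location may be a relative URL; resolve it against the original.
    r = requests.get(urljoin(r.url, r.headers['Location']))
print(r.status_code)  # 200 once the redirect target has been fetched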
So why is the response content different?
So even though the browser and the Requests library are both automatically following the 302 redirect, the responses they get are still different. You didn't share any screenshots of the browser's requests or responses, so I can only make a few educated guesses, but it boils down to the fact that the request made by your Python code is somehow different from the one made by the JavaScript loaded in the web browser.
Some things to consider:
Are you sure you are using the correct HTTP method? Is the browser also making a POST request?
If so, are you sure the body of the request is the same, and in the same format, as the one sent by the web browser?
Perhaps the browser has a session cookie it sends along with the request (note: this is usually not explicit in the JS but happens automatically).
Alternatively, the JS might include some API key/credentials in the HTTP auth header (this should be explicitly visible in the JS).
Although unlikely, it could be that whatever API you're trying to query blocks reverse engineering attempts by blocking the Requests library's user agent string.
Luckily, all of these differences can easily be examined with some print statements and your browser's developer tools :p.
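For instance, here is a sketch that makes each of those points explicit. Every name and value below is a placeholder to be replaced with what the browser's developer tools actually show:

import requests

session = requests.Session()

# Mirror what the browser sends: same cookies, headers, method, and body.
session.cookies.set('SESSIONID', 'value-copied-from-browser')  # session cookie
headers = {
    'User-Agent': 'Mozilla/5.0 ...',        # match the browser's user agent
    'Authorization': 'Bearer <token>',      # only if the JS sends auth
    'Content-Type': 'application/json',     # match the body format
}
resp = session.post('https://secure.website.in/theia/upi/transactionStatus',
                    headers=headers,
                    json={'key': 'value'})  # or data=... for form-encoded bodies
print(resp.status_code, resp.text[:200])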
How do I log in to a website using the requests library? I watched a lot of tutorials, but they seem to have a 302 POST request in the Network tab of their inspector, while I see a lot of GET requests in my tab when I log in. A friend of mine said cookies, but I am really a beginner and I don't know how to log in.
Also, I would like to know the scope of what I can do with this library, and any helpful sources of information from which I can begin learning it.
import requests
r = requests.get("https://example.com")
I want to make a POST request; the same friend told me that I would require API access to that website to proceed further. Is that true?
Depending on the site you are trying to log in to, it may be necessary to log in via a Chrome browser (Selenium) and from there extract and save the cookies for later injection and use within the requests module.
To extract cookies from Selenium and save them to a JSON file, use:
import json

# 'driver' is an existing Selenium WebDriver instance, already logged in
cookies = driver.get_cookies()
with open('file_you_want_to_save_the_cookies_to.json', 'w') as f:
    json.dump(cookies, f)
To then use these cookies in the requests module, use:
cookies = {
    'cookie_name': 'cookie_value'
}

with requests.Session() as s:
    r = s.get(url, headers=headers, cookies=cookies)
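To tie the two snippets together, here is a sketch (assuming the JSON file written above) that loads the saved Selenium cookies and converts them into the flat name/value mapping that requests expects:

import json
import requests

# Selenium stores cookies as a list of dicts with 'name'/'value' keys;
# requests just wants a {name: value} mapping.
with open('file_you_want_to_save_the_cookies_to.json') as f:
    selenium_cookies = json.load(f)
cookies = {c['name']: c['value'] for c in selenium_cookies}

with requests.Session() as s:
    r = s.get(url, cookies=cookies)  # url as in the snippet above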
I could help further if you mention which site you are trying to do this on.
Well, best practice is to create a little project that uses the library. For the requests lib, that can be accessing some of the free APIs on the internet. For example, I made this one a while ago, which uses a free API:
##########################################
#                                        #
#               INSULT API               #
#                                        #
##########################################
import requests

def insult_api(name):
    # params must be a dict (the original set of tuples was a bug)
    params = {
        "who": name
    }
    response = requests.get(
        "https://insult.mattbas.org/api/en/insult.json",
        params=params)
    return response.json()
A helpful source is (obviously, besides the official documentation) basically any YouTube video or SO post here. Just look for the thing you want to do.
Anyway, if you are looking to log in to a website without API access, you can use the Selenium library.
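If you go that route, a minimal Selenium login sketch (the URL and element names are placeholders for the real login form) looks something like this:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder login page
driver.find_element(By.NAME, "username").send_keys("my_user")
driver.find_element(By.NAME, "password").send_keys("my_password")
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
# driver.get_cookies() can then be saved and reused with requests,
# as described in the earlier answer.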
The requests module is extremely powerful if used properly and with the necessary information sent in the requests. First, analyse the network packets via tools like the Network tab in Chrome Dev Tools. Then try to replicate the request via requests in Python.
Usually, you will need the headers and the data sent.
headers = {
    <your headers here>
}
data = <data here>

req = requests.post("https://www.examplesite.com/api/login", data=data, headers=headers)
Everything should be easy to find in the network packets, unless the site has some sort of security like CSRF tokens etc., which need to be sent along with the login request. In order to do that, you send a GET request to obtain the token, then send a POST request that includes it.
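A minimal sketch of that GET-then-POST dance, assuming a hypothetical login page that embeds the token in a hidden input named csrf_token (check the real page source for the actual field name and URLs):

import re
import requests

session = requests.Session()

# 1. GET the login page and extract the CSRF token from the HTML.
page = session.get("https://www.examplesite.com/login")
token = re.search(r'name="csrf_token" value="([^"]+)"', page.text).group(1)

# 2. POST the credentials together with the token, reusing the same
#    session so any cookies set by the GET are sent as well.
data = {"username": "user", "password": "pass", "csrf_token": token}
req = session.post("https://www.examplesite.com/api/login", data=data)
print(req.status_code)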
If you could provide the site you're trying to use, it would be pretty helpful too. Best of luck!
Please help me convert a URL request that works in the web browser into a request executable with the Python requests module.
My working browser request URL:
https://192.168.100.25/api?type=config&action=set&xpath=/config/devices/entry[#name=%27localhost.localdomain%27]/network/interface/ethernet/entry[#name=%27ethernet1/1%27]/layer3/ip&element=%3Centry%20name=%279.6.6.6/24%27/%3E
This device basically accepts REST API calls in XML format. Please help me convert this to a Python requests POST request.
I have found a way to do this with Python Requests:
import requests
from requests.auth import HTTPBasicAuth

# Implicit string concatenation keeps the long URL readable without
# embedding a newline in it (which the original triple-quoted string did).
url = ('https://192.168.100.25/api?'
       'type=config&action=set&xpath=/config/devices&element=')
out = requests.post(url, verify=False, auth=HTTPBasicAuth(username, password))
This is working.
Please let me know if there is an easier and proper way to do this.
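One arguably cleaner variant (just a sketch, reusing the host, credentials, and the XPath/element values from the original browser URL) is to pass the query parameters as a dict and let requests handle the percent-encoding:

import requests
from requests.auth import HTTPBasicAuth

params = {
    'type': 'config',
    'action': 'set',
    'xpath': ("/config/devices/entry[@name='localhost.localdomain']"
              "/network/interface/ethernet/entry[@name='ethernet1/1']/layer3/ip"),
    'element': "<entry name='9.6.6.6/24'/>",
}
# requests percent-encodes each parameter value automatically.
out = requests.post('https://192.168.100.25/api', params=params,
                    verify=False,
                    auth=HTTPBasicAuth(username, password))  # credentials as above
print(out.status_code, out.text)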
I'm dusting off an app that worked a few months ago. I've made no changes. Here's the code in question:
import logging
from google.appengine.api import urlfetch

result = urlfetch.fetch(
    url=url,
    deadline=TWENTY_SECONDS)  # TWENTY_SECONDS is a constant defined elsewhere
if result.status_code != 200:  # pragma: no cover
    logging.error('urlfetch failed.')
    logging.error('result.status_code = %s' % result.status_code)
    logging.error('url =')
    logging.error(url)
Here's the output:
WARNING 2015-04-20 01:13:46,473 urlfetch_stub.py:118] No ssl package found. urlfetch will not be able to validate SSL certificates.
ERROR 2015-04-20 01:13:46,932 adminhandlers.py:84] urlfetch failed. url =
ERROR 2015-04-20 01:13:46,933 adminhandlers.py:85] http://www.stubhub.com/listingCatalog/select/?q=%2Bevent_date%3A%5BNOW%20TO%20NOW%2B1DAY%5D%0D%0A%2BancestorGeoDescriptions:%22New%20York%20Metro%22%0D%0A%2BstubhubDocumentType%3Aevent&version=2.2&start=0&rows=1&wt=json&fl=name_primary+event_date_time_local+venue_name+act_primary+ancestorGenreDescriptions+description
When I use a different url, e.g., "http://www.google.com/", the fetch succeeds.
When I paste the url string from the output into Chrome I get this response, which is the one I'm looking for:
{"responseHeader":{"status":0,"QTime":19,"params":{"fl":"name_primary event_date_time_local venue_name act_primary ancestorGenreDescriptions description","start":"0","q":"+event_date:[NOW TO NOW+1DAY]\r\n+ancestorGeoDescriptions:\"New York Metro\"\r\n+stubhubDocumentType:event +allowedViewingDomain:stubhub.com","wt":"json","version":"2.2","rows":"1"}},"response":{"numFound":26,"start":0,"docs":[{"act_primary":"Waka Flocka Flame","description":"Waka Flocka Flame Tickets (18+ Event)","event_date_time_local":"2015-04-20T20:00:00Z","name_primary":"Webster Hall","venue_name":"Webster Hall","ancestorGenreDescriptions":["All tickets","Concert tickets","Artists T - Z","Waka Flocka Flame Tickets"]}]}}
I hope I'm missing something simple. Any suggestions?
Update May 30, 2015
Anzel's suggestion of Apr 23 was correct. I need to add a user agent header. The one supplied by the AppEngine dev server is
AppEngine-Google; (+http://code.google.com/appengine)
The one supplied by hosted AppEngine is
AppEngine-Google; (+http://code.google.com/appengine; appid: s~MY_APP_ID)
The one supplied by requests.get() in pure Python (no AppEngine) on MacOS is
python-requests/2.2.1 CPython/2.7.6 Darwin/14.3.0
When I swap in the Chrome user agent header, all is well in pure Python. Stubhub must have changed this since I last tried it. Curious that they would require an interactive user agent for a service that emits JSON, but I'm happy they offer the service at all.
When I add that header on AppEngine, though, AppEngine prepends it to its own user-agent header, and Stubhub then turns down the request.
So I've made some progress, but have not yet solved my problem.
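For reference, the pure-Python version that now works looks roughly like this (a sketch; the user agent string is an example Chrome value, not the exact one I used):

import requests

# Present a browser-style user agent instead of the default
# 'python-requests/...' string, which Stubhub appears to reject.
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/42.0.2311.90 Safari/537.36'}
r = requests.get(url, headers=headers)  # url as defined above
print(r.status_code)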
FYI:
In AppEngine I supply the user agent like this:
result = urlfetch.fetch(
    url=url,
    headers={'user-agent': USER_AGENT_STRING}
)
This is a useful site for determining the user agent string your code or browser is sending:
http://myhttp.info/
I don't have privileges yet to post comments, so here goes.
Look at the way you are entering the URL into the var 'url'. Is it already encoded, as the logged output shows? I would make sure the URL is a regular, non-encoded one and test that; perhaps the library is re-encoding it, causing problems. If you could give us more of the surrounding code, that may help in our diagnosis.