How to change timestamp and nonce in Requests-OAuthlib? - python

I am trying to get some data from Upwork's API.
I am using Requests-OAuthlib. The first API request works, but for the second one I get this error: "Duplicate timestamp/nonce combination, possible replay attack. Request rejected."
So I tried to modify Requests-OAuthlib and set the timestamp and nonce manually by putting this inside the constructor:
from random import SystemRandom  # at the top of the module
from time import time            # at the top of the module

ur = u'' + str(SystemRandom().random())
ur = ur.replace("0.", "")        # strip the leading "0." to leave a random digit string
self.client.nonce = ur
ts = u'' + str(int(time()))      # current Unix time
self.client.timestamp = ts
right after self.client = client_class( ...
But it still does not work.
I am a complete beginner with both Python and OAuth, so I would rather use this library than build the request URL manually.
Here's the source code of the library: Requests-OAuthlib source code
If I print them at the end of the call they have the same values as the ones I set, but setting them doesn't seem to have any effect; Upwork still reports a replay attack.
I also tried putting them in the headers, which did not work either:
r.headers['oauth_nonce'] = ur
r.headers['oauth_timestamp'] = ts
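For reference, a minimal sketch of the standard requests_oauthlib usage (the credential names and endpoint below are placeholders); as far as I can tell, each signed call should get its own nonce and timestamp from the library:
import requests
from requests_oauthlib import OAuth1

# placeholder credentials
oauth = OAuth1(client_key, client_secret,
               resource_owner_key=oauth_token, resource_owner_secret=oauth_token_secret)

# each call is signed separately, so each should get its own nonce/timestamp
r = requests.get('https://www.upwork.com/api/some_endpoint.json', auth=oauth)  # hypothetical endpoint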
Update:
I printed r.headers and it contains these:
for first call
oauth_nonce="55156586115444478931487605669", oauth_timestamp="1487605669"
for second call
oauth_nonce="117844793977954758411487605670", oauth_timestamp="1487605670"
The nonces and timestamps differ between the two calls. So why is Upwork giving me: "Duplicate timestamp/nonce combination, possible replay attack. Request rejected."?
Update 2: It is probably just some odd Upwork behaviour; I am still waiting for an answer from them. I believe that because if I change something in the endpoint it works, so the nonces/timestamps seem unrelated to the problem.
Update 3: I got an answer from Upwork. Honestly, I can't make sense of it, but if you think it does, feel free to close the question. I found a workaround anyway.
https://community.upwork.com/t5/API-Questions-Answers/Wrong-API-error-message/td-p/306489

For anyone coming across this issue, I was banging my head against it for a few hours until I finally used Fiddler to look at the requests and responses.
The server was responding with a 302 redirect, and my HTTP library was helpfully following the redirect and re-sending the same headers, which of course included the now-duplicate nonce and timestamp.
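If you are using requests, one way to avoid this (a sketch, assuming the duplicate signature really does come from the automatic redirect) is to disable redirect following and re-sign the follow-up request yourself:
import requests
from requests_oauthlib import OAuth1

oauth = OAuth1(client_key, client_secret,
               resource_owner_key=oauth_token, resource_owner_secret=oauth_token_secret)

# keep requests from replaying the signed headers on the redirect
r = requests.get(url, auth=oauth, allow_redirects=False)
if r.is_redirect:
    # follow the redirect manually so a fresh signature (new nonce/timestamp) is generated
    r = requests.get(r.headers['Location'], auth=oauth)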

Related

Python Product Tag Update using Shopify API - 400 Error - Product :Required Parameter Missing Or Invalid

Just built my first Shopify store and wanted to use Python and the API to bulk-update product tags on all our products.
However, I've run into a snag on my PUT call. I keep getting a 400 response with the message '{"errors":{"product":"Required parameter missing or invalid"}}'.
I've seen other people with similar issues on here but none of their solutions seem to be working for me.
Has anyone else run into this problem that can help me figure it out?
Here are my code, a printout of the payload, and the error in the response (originally shared as screenshots).
I can successfully get the product info using the API. Originally I sent back the entire JSON payload that was returned, with only the tags updated, through the API.
To narrow down potential pain points, I'm now keeping it simple and including only "id" and "tags" in the payload, but I still get the same error. (A sketch of the request is below.)
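For context, a minimal sketch of the kind of PUT I am describing (the store name, API version, product id, and auth token are placeholders; the payload shape follows the standard Shopify Admin REST format as far as I understand it):
import requests

store = "example-store"      # placeholder store name
product_id = 1234567890      # placeholder product id
url = "https://{}.myshopify.com/admin/api/2021-01/products/{}.json".format(store, product_id)

payload = {"product": {"id": product_id, "tags": "new-tag, another-tag"}}
headers = {
    "X-Shopify-Access-Token": access_token,  # placeholder: however you authenticate your app
    "Content-Type": "application/json",
}

r = requests.put(url, json=payload, headers=headers)
print(r.status_code, r.text)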
We figured it out! It turns out that when we initially created the store we used a misspelled domain store name, so we created a new domain store name with the correct spelling. However, the original domain store name was never deleted (I'm not even sure it can be). Even though the new domain store name forwards to the original one and works for GET requests, for PUT requests we had to use the original misspelled domain store name; with that, the PUTs work fine.
I figured this out by using Fiddler to capture a manual product update via the Shopify website to see what the payload looked like, and that's how I noticed the store URL was different from the one we were using.
Hope this helps someone with a similar issue!

Oracle cloud REST API WaasPolicy and auditEvents pagination

I am trying to figure out the exact query string to successfully get the next page of results for both the waasPolicy logs and auditEvents logs. I have successfully made a query to both endpoints and returned data but the documentation does not provide any examples of how to do pagination.
my example endpoint url string:
https://audit.us-phoenix-1.oraclecloud.com/20190901/auditEvents?compartmentId={}&startTime=2021-02-01T00:22:00Z&endTime=2021-02-10T00:22:00Z
I have of course omitted my compartmentId. When I perform a GET request against this url, it successfully returns data. In order to paginate, the documentation states:
"Make a new GET request against the same URL, modified by setting the page query parameter to the value from the opc-next-page header. Repeat this process until you get a response without an opc-next-page header. The absence of this header indicates that you have reached the last page of the list."
My question is: what exactly is this meant to look like? An example would be very helpful. The response header opc-next-page for the auditEvents pagination contains a very long string of characters. Am I meant to append it to the URL in the GET request? Would it simply be something like this, replacing $(opc-next-page) with that long string from the header?
https://audit.us-phoenix-1.oraclecloud.com/20190901/auditEvents?compartmentId={}&startTime=2021-02-01T00:22:00Z&endTime=2021-02-10T00:22:00Z&page=$(opc-next-page)
And the query for waasPolicy:
https://waas.us-phoenix-1.oraclecloud.com/20181116/waasPolicies/{}/wafLogs
returns an opc-next-page header in the form of a page number. Would it simply require appending something like &page=2? (I tried this to no avail.)
Again, I am not able to find any examples in the documentation.
https://docs.oracle.com/en-us/iaas/api/#/en/waas/20181116/WaasPolicy/GetWaasPolicy
https://docs.oracle.com/en-us/iaas/Content/API/Concepts/usingapi.htm#nine
Thank you in advance for your help
Found the answer. You need to specify &page=$(opc-next-page) AND a &limit=X parameter (where X is any integer, e.g. 500). Without the limit param, the &page= param returns a 500 error, which is slightly misleading. I will leave this up for anyone else stumbling upon this issue. (See the sketch below.)
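A minimal sketch of the pagination loop as I understand it (authentication is left as a placeholder, and the compartment id, times, and limit are just example values):
import requests

base = "https://audit.us-phoenix-1.oraclecloud.com/20190901/auditEvents"
params = {
    "compartmentId": compartment_id,   # placeholder
    "startTime": "2021-02-01T00:22:00Z",
    "endTime": "2021-02-10T00:22:00Z",
    "limit": 500,                      # needed alongside "page"
}

pages = []
while True:
    r = requests.get(base, params=params, auth=signer)  # signer = however you already authenticate
    pages.append(r.json())                              # collect this page of results
    next_page = r.headers.get("opc-next-page")
    if not next_page:
        break                                           # no header means this was the last page
    params["page"] = next_page                          # pass the token back as the page query parameter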

Python Requests Recreate Post Request with Cookies

So I was looking at my Chrome console for a POST request that I was making, and there is a 'cookie' value in the request headers with this data:
strTradeLastInventoryContext=730_2; bCompletedTradeOfferTutorial=true; steamMachineAuth76561198052177472=3167F37117************B82C2E; steamMachineAuth76561198189250810=E292770040E************B5F97703126DE48E; rgDiscussionPrefs=%7B%22cTopicRepliesPerPage%******%7D; sessionid=053257f1102e4967e2527ced; steamCountry=US%7C708d3************e569cc75495; steamLogin=76561198052177472%7C%7C4EC6FBDFA0****************12DE568; steamLoginSecure=765611*********************44BEC4E8BDA86264E; webTradeEligibility=%7B%22allowed%22%3A1%2C%22allowed_at_time%22%3A0%2C%22steamguard_required_days%22%3A15%2C%22sales_this_year%22%3A9%2C%22max_sales_per_year%22%3A200%2C%22forms_request***************cooldown_days%22%3A7%7D; strInventoryLastContext=730_2; recentlyVisitedAppHubs=42700%2C2***********930%2C440; timezoneOffset=-14400,0; __utma=268881843.1147920287.1419547163.1431887507.1431890089.151; __utmb=268881843.0.10.1431890089; __utmc=268881843; __utmz=268881843.1431885538.149.94.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided)
I starred out some of the cookie's data so my trade account can't be robbed, but you should get the point. How should I go about recreating the cookie? Should I create a dict where the keys are the values before the '=' in the cookie and the values are what comes after the '=' sign? Sorry if the question is unclear; I'm not sure how to go about doing this. Any help would be great!
For example: cookie = {'strTradeLastInventoryContext': '730_2', ...}
There are really two options here.
If you happen to have the exact Cookie header you want to reproduce exactly as one big string (e.g., to have a requests-driven job take over a session you created in a browser, manually or using selenium or whatever), you can just pass that as an arbitrary header named Cookie instead of figuring out how to break it apart just so requests can (hopefully) reassemble the same header you wanted.
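For example (a small sketch; url is a placeholder and the cookie value is whatever you copied from the browser):
import requests

# raw_cookie_string is the exact Cookie header value copied from the browser
raw_cookie_string = "strTradeLastInventoryContext=730_2; sessionid=...; steamLogin=..."
r = requests.get(url, headers={"Cookie": raw_cookie_string})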
If, on the other hand, you need to create parts of it dynamically, then yes, you will want to do what you're doing—pull it apart to build a dict named cookie, then use it with requests.get(url, cookies=cookie), or req.cookies.update(cookie) or similar (if you're using sessions and prepared requests). Then you can easily modify the dict before sending it.
But the easiest way to do that is not to pull the cookie apart manually. I'm pretty sure the WebKit Developer Tools have a way to do that for you directly within Chrome. Or, if not, you can just copy the cookie as a string and then use the http.cookies module (called Cookie in Python 2.x), like this:
import http.cookies
cookie = http.cookies.BaseCookie(cookie_string)
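A BaseCookie maps names to Morsel objects, so if you then want a plain dict to hand to requests, something like this should work (a small sketch, assuming cookie is the object built above):
cookie_dict = {name: morsel.value for name, morsel in cookie.items()}
requests.get(url, cookies=cookie_dict)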
Also, note that in many cases, you really don't even need to do this. If you can drive the login and navigation directly from requests instead of starting off in Chrome, it should end up with the full set of cookies it needs in each request. You may need to use a Session, but that's as hard as it gets.
You may want to look at the requests documentation for cookies.
You are right in that the cookie value is passed to the get call as a dictionary key/value.
cookies = {'cookie_key': 'somelongstring'}
requests.get(url, cookies=cookies)

Connection timeout error in bitly URL shortener

I am trying to use the bitly-api-python library to shorten all the URLs in an array.
import bitly3  # the bitly client module used below

def bitly3_shorten_oauth(url):
    # open a connection with the OAuth access token and shorten a single URL
    c = bitly3.Connection(access_token=bitly_access_token)
    sh = c.shorten(url)
    return sh['url']

for i in arr:
    print i[1], bitly3_shorten_oauth(i[1])
I am calling them one after another without any delay, since I couldn't find any such precaution in bitly's best-practices documentation.
Here is my complete code, please have a look: http://pastie.org/8419004
What happens is that it shortens 2 or 3 of the URLs and then fails with a connection timeout error.
What might be causing this error, and how do I debug it?
From the documentation you linked:
bitly currently institutes per-hour, per-minute, and per-IP rate limits for each API method
And
High-Volume Shorten Requests
If you need to shorten a large number of URLs at once, we recommend that you leave ample time to spread these requests out over many hours. Our API rate limits reset hourly, and rate limited batch requests can be resumed at the top of the hour.
So it does look like you simply need to slow down your code.
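For example, a simple way to spread the calls out (the one-second pause is an arbitrary choice, not a documented bitly limit):
import time

for i in arr:
    print i[1], bitly3_shorten_oauth(i[1])
    time.sleep(1)  # pause between calls so the requests are spread out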
If anybody finds this outdated post as a starting point, please note that the Bit.ly API rejects non-OAuth API keys nowadays.
You can get your OAuth access token with curl:
curl -u "username:password" -X POST "https://api-ssl.bitly.com/oauth/access_token"
Doc link
As of 2019, there is the bitlyshortener package, although it works only with Python ≥3.7. I have not experienced any error using it.

Python2 urllib/urllib2 wrong URL issue

I am coding a Python 2 script to perform some automatic actions on a website. I'm using urllib/urllib2 to accomplish this task. It involves GET and POST requests, custom headers, etc.
I stumbled upon an issue which doesn't seem to be mentioned in the documentation. Let's pretend we have the following valid URL: https://stackoverflow.com/index.php?abc=def&fgh=jkl and we need to perform a POST request there.
Here is how my code looks (please ignore any typos):
import urllib, urllib2

data = urllib.urlencode({"data": "somedata", "moredata": "somemoredata"})
urllib2.urlopen(urllib2.Request("https://stackoverflow.com/index.php?abc=def&fgh=jkl", data))
No errors are shown, but according to the web server, the request is being received at "https://stackoverflow.com/index.php" and not at "https://stackoverflow.com/index.php?abc=def&fgh=jkl". What is the problem here?
I know that I could use Requests, but I'd like to use urllib/urllib2 first.
If I'm not wrong, you should pass your request data in the data dictionary you pass to urlopen():
data = urllib.urlencode({'abc': 'def', 'fgh': 'jkl'})
urllib2.urlopen(urllib2.Request('http://stackoverflow.com/index.php', data))
Also, just like you said, use Requests unless you absolutely need the low level access urllib provides.
Hope this helps.
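For comparison, a small sketch of the same call with Requests, which keeps the query-string parameters and the POST body separate (values taken from the question):
import requests

params = {"abc": "def", "fgh": "jkl"}                    # goes into the query string
data = {"data": "somedata", "moredata": "somemoredata"}  # goes into the POST body
r = requests.post("https://stackoverflow.com/index.php", params=params, data=data)
print(r.status_code)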
