We are trying to fetch the owned games of a large number of users, but our problem is that after a while the API call limit (100,000 calls a day) kicks in and we stop getting results.
We use 'IPlayerService/GetOwnedGames/v0001/?key=APIKEY&steamid=STEAMID' in our call and it works for the first entries.
There are several other queries like the GetPlayerSummaries query which take multiple Steam IDs, but according to the documentation, this one only takes one.
Is there any other way to combine/merge our queries? We are using Python and the urllib.request library to create the request.
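For reference, a minimal version of the per-user call we are making (APIKEY and STEAMID are placeholders, and the standard api.steampowered.com host is assumed):

    import json
    import urllib.request

    API_KEY = "APIKEY"    # placeholder
    STEAM_ID = "STEAMID"  # placeholder

    url = (
        "https://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/"
        "?key={}&steamid={}&format=json".format(API_KEY, STEAM_ID)
    )
    with urllib.request.urlopen(url) as response:
        owned_games = json.load(response)["response"]
    print(owned_games.get("game_count"))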
Depending on the payload of the requests, you have the following possibilities:
if each request brings only the newest updates, you could serialize the remaining Steam IDs when you get the response that you've hit the daily limit, and resume from that checkpoint the next day (see the sketch after this list)
if you can control via the request payload what data you receive, you could go for a multithreaded/multiprocessing approach in which workers consume the request queries and the Steam IDs from a couple of shared resources
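A minimal sketch of the checkpoint idea; the assumption that the limit shows up as an HTTP 429 error is mine, so adjust it to whatever response Steam actually sends:

    import json
    import urllib.error
    import urllib.request

    CHECKPOINT = "remaining_ids.json"  # hypothetical checkpoint file

    def fetch_owned_games(api_key, steam_ids):
        remaining = list(steam_ids)
        results = {}
        while remaining:
            steam_id = remaining[0]
            url = ("https://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/"
                   "?key={}&steamid={}&format=json".format(api_key, steam_id))
            try:
                with urllib.request.urlopen(url) as resp:
                    results[steam_id] = json.load(resp)["response"]
            except urllib.error.HTTPError as err:
                if err.code == 429:  # assumed daily-limit signal
                    with open(CHECKPOINT, "w") as fh:
                        json.dump(remaining, fh)  # resume from this file tomorrow
                    break
                raise
            remaining.pop(0)
        return results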
As @andreihondrari indirectly stated in his comment under his answer, one can request an API key which can get more than the 100,000 calls/day. This is stated under the part "License to Steam Web API & Steam Data" of the documentation:
You are limited to one hundred thousand (100,000) calls to the Steam Web API per day. Valve may approve higher daily call limits if you adhere to these API Terms of Use.
This may be complicated, and there is of course the possibility that you won't get approved, but this is pretty much the only stable way you can go.
Furthermore, you could theoretically use multiple Steam Web API keys (a rotation sketch follows the list below), BUT:
Each API key still has the limitation of 100,000 calls/day, so you'll need to implement a failsafe and a transition between used keys, and possibly need to create lots of accounts.
As each user has their own specific friend list and blocked list, each API key can "see" a portion of the Steam Community exclusively (friends data is not public otherwise). So it could be that you are using one API key which can't "see" a certain user when you could have used another to "see" them properly.
You'll need a unique email address for each created account.
Note: Having multiple accounts actually complies with Valve's ToS, according to this post on Arqade.
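A rough sketch of such a key rotation, again assuming that an exhausted key manifests as an HTTP 429 error (an assumption, not documented behavior):

    import json
    import urllib.error
    import urllib.request

    API_KEYS = ["KEY_1", "KEY_2"]  # placeholders, one per account

    def fetch_with_rotation(steam_id):
        for api_key in API_KEYS:
            url = ("https://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/"
                   "?key={}&steamid={}&format=json".format(api_key, steam_id))
            try:
                with urllib.request.urlopen(url) as resp:
                    return json.load(resp)["response"]
            except urllib.error.HTTPError as err:
                if err.code == 429:  # this key is presumed exhausted: try the next one
                    continue
                raise
        raise RuntimeError("all keys exhausted for {}".format(steam_id))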
I would like to integrate PayPal's Python SDK into my project so that users can pay however much they choose. The current method requires a fixed price: https://developer.paypal.com/docs/api/quickstart/create-process-order/
However I want the user to be able to choose how much they want to send.
You linked to a server integration document, which is also for a deprecated API.
Do you want to create the order on the server side, or not? If yes, sending the amount from the client to the server that executes the orders API call is work you will need to do.
Begin your integration with the current v2/checkout/orders API by making two routes on your server, one for 'Create Order' and one for 'Capture Order', documented here. These routes should return only JSON data (no HTML or text). When a capture response is successful, store its resulting payment details in your database (particularly purchase_units[0].payments.captures[0].id, the PayPal transaction ID) and perform any necessary business logic (such as sending confirmation emails or reserving product) before sending your return JSON.
Pair those two routes with the following approval flow: https://developer.paypal.com/demo/checkout/#/pattern/server
Since your question was specific to setting a custom amount, you should probably add a JSON body to the createOrder fetch call to send the amount you want to your server.
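A minimal server-side sketch of the 'Create Order' route with a dynamic amount, using Flask and requests (both are my choices here, not prescribed by PayPal; the sandbox host and the credential placeholders are assumptions):

    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    PAYPAL_API = "https://api-m.sandbox.paypal.com"  # sandbox host
    CLIENT_ID = "YOUR_CLIENT_ID"  # placeholder
    SECRET = "YOUR_SECRET"        # placeholder

    def get_access_token():
        # Standard client-credentials OAuth call for the v2 REST APIs
        resp = requests.post(
            PAYPAL_API + "/v1/oauth2/token",
            auth=(CLIENT_ID, SECRET),
            data={"grant_type": "client_credentials"},
        )
        resp.raise_for_status()
        return resp.json()["access_token"]

    @app.route("/create-order", methods=["POST"])
    def create_order():
        # The amount arrives in the JSON body sent by the createOrder fetch call
        amount = request.get_json()["amount"]
        resp = requests.post(
            PAYPAL_API + "/v2/checkout/orders",
            headers={"Authorization": "Bearer " + get_access_token()},
            json={
                "intent": "CAPTURE",
                "purchase_units": [
                    {"amount": {"currency_code": "USD", "value": str(amount)}}
                ],
            },
        )
        resp.raise_for_status()
        return jsonify(resp.json())  # contains the order ID for the client

The matching 'Capture Order' route would POST to /v2/checkout/orders/<order_id>/capture in the same way.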
For a simple client-side integration that uses no server-side API calls, see the documentation and set the createOrder purchase_units value with some function or variable that gets the amount you want. For instance:
value: document.getElementById(...).value
If you are not a programmer, you can even generate such a button for a custom amount via https://www.paypal.com/buttons/smart
I intend to build a search engine in a niche area where the results must be searchable only from the front page of my website, not through an API or web scraping by third parties. It is therefore not about any kind of token for user authentication, as access to the website will be public, at least in the beginning (no paywall or user accounts involved).
My question is which method would be less computationally expensive to generate per user/visit when someone initiates a search. I thought of using local storage for a random token generated at page load (so that a bot scanning the page cannot create the token and therefore cannot access the API to receive search results). However, to verify that a token was legitimately issued by my server, this means growing a continuously expanding database of all tokens issued earlier and consumed by users.
Since that is not a practical solution for a huge number of users once traffic increases, I want to know if someone has used something similar with success, or knows a better approach.
I don't want to use reCAPTCHA as a validation method for human users, as this would offer a very poor user experience on the platform and degrade the speed of running search queries.
The frontend will be built with React or Vue, and the backend in Python.
You could go with a set of pre-generated UUIDs in a database to pick up and flag as used when consumed, or compute a SHA3-512 hash from the originating IP address + a timestamp. In both cases, you can make the back-end process inject a Set-Cookie header containing the token into the response with the proper cookie policies; this key will be provided automatically by web browsers afterwards, but not by bots.
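A minimal sketch of the hash variant, assuming Flask on the Python backend; the SERVER_SECRET mixed into the hash is my addition, so that third parties cannot recompute the token themselves:

    import hashlib
    import time
    from flask import Flask, abort, make_response, request

    app = Flask(__name__)
    SERVER_SECRET = b"change-me"  # hypothetical secret, keep it private

    def make_token(ip, timestamp):
        payload = "{}|{}".format(ip, timestamp).encode() + SERVER_SECRET
        return hashlib.sha3_512(payload).hexdigest()

    @app.route("/")
    def front_page():
        ts = str(int(time.time()))
        token = make_token(request.remote_addr, ts)
        resp = make_response("...front page HTML...")
        # HttpOnly + SameSite keep the cookie tied to real browser visits
        resp.set_cookie("search_token", ts + ":" + token,
                        httponly=True, samesite="Strict")
        return resp

    @app.route("/search")
    def search():
        ts, _, token = request.cookies.get("search_token", "::").partition(":")
        if token != make_token(request.remote_addr, ts):
            abort(403)  # token missing or not issued by this server
        return "...search results..."

The advantage of the hash variant over stored UUIDs is that validation is a pure recomputation, so no token table has to grow with traffic.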
I need a program for retrieving the list of transactions on my PayPal account. I tried some Python scripts, e.g. using the requests module for simply logging into PayPal with GET/POST and downloading the HTML from https://www.paypal.com/activities (which shows the newest transactions), but unfortunately PayPal prevents web scraping (captcha), so I didn't find a solution there. There is a "TransactionSearch" API (https://api.paypal.com/v1/reporting/transactions), but the transactions show up with a delay of at least 3 hours (up to 48h) there... Is there a possibility to get a live version (as shown on their website) of my PayPal transactions using the PayPal API?
Unfortunately, according to their documentation, there isn't a way to get 'live' transactions: https://developer.paypal.com/docs/api/sync/v1/
However, it does say "up to 3 hours", so you may be able to retrieve some transactions sooner.
They don't state a reason why it takes up to 3 hours for executed transactions to appear in the callable request list, but I'd assume PayPal performs security checks before even processing a transaction, let alone making it accessible through an open API.
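For what it's worth, a minimal sketch of polling the TransactionSearch endpoint for the most recent window (the requests library and the credential placeholders are assumptions):

    from datetime import datetime, timedelta, timezone

    import requests

    PAYPAL_API = "https://api-m.paypal.com"
    CLIENT_ID, SECRET = "YOUR_CLIENT_ID", "YOUR_SECRET"  # placeholders

    token = requests.post(
        PAYPAL_API + "/v1/oauth2/token",
        auth=(CLIENT_ID, SECRET),
        data={"grant_type": "client_credentials"},
    ).json()["access_token"]

    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=48)  # cover the worst-case reporting delay
    resp = requests.get(
        PAYPAL_API + "/v1/reporting/transactions",
        headers={"Authorization": "Bearer " + token},
        params={
            "start_date": start.strftime("%Y-%m-%dT%H:%M:%S-0000"),
            "end_date": end.strftime("%Y-%m-%dT%H:%M:%S-0000"),
        },
    )
    for t in resp.json().get("transaction_details", []):
        print(t["transaction_info"]["transaction_id"])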
Has anyone used Smartsheet API to capture payment data?
I work with a property management group that will be accepting applications via Smartsheet's web form. Those applications require a deposit. My city uses NIC Inc. (eGov) as their payment gateway. Apparently there are a couple of reports (one daily, one at each transaction) that will give us all the information about the payments, but it would be best if Smartsheet could collect the information automatically.
I am very new to coding but I have good resources to call on to implement suggestions.
It's certainly possible to write data to your sheet using the Smartsheet API (we even have a Python SDK to help with that).
Your next step should be to determine whether the eGov API supports the exporting of the data that you want to bring into Smartsheet.
Assuming both APIs do what you need, the person writing the scripts can then automate them by using a cron job or a webhook (if the eGov API supports one).
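As an illustration of the Smartsheet side, a minimal sketch with the official Python SDK; the sheet ID, column IDs, and payment fields are all hypothetical:

    import smartsheet

    client = smartsheet.Smartsheet("YOUR_ACCESS_TOKEN")  # placeholder token
    SHEET_ID = 1234567890            # hypothetical sheet ID
    AMOUNT_COL, DATE_COL = 111, 222  # hypothetical column IDs

    def record_payment(amount, paid_on):
        # Build a new row and append one cell per column
        row = smartsheet.models.Row()
        row.to_top = True
        row.cells.append({"column_id": AMOUNT_COL, "value": amount})
        row.cells.append({"column_id": DATE_COL, "value": paid_on})
        client.Sheets.add_rows(SHEET_ID, [row])

    record_payment(500.00, "2020-01-15")

A cron job would simply run a script like this against the eGov report on a schedule.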
I'm working with a librarian to re-structure his organization's digital photography archive.
I've built a Python robot with Mechanize and BeautifulSoup to pull about 7,000 poorly structured and mildly incorrect/incomplete documents from a collection. The data will be formatted into a spreadsheet he can use to correct it. Right now I'm guesstimating 7,500 HTTP requests total to build the search dictionary and then harvest the data, not counting mistakes and do-overs in my code, and then many more as the project progresses.
I assume there's some sort of built-in limit to how quickly I can make these requests, and even if there isn't, I'll give my robot delays so it behaves politely toward the over-burdened web server(s). My question (admittedly impossible to answer with complete accuracy) is: about how quickly can I make HTTP requests before encountering a built-in rate limit?
I would prefer not to publish the URL for the domain we're scraping, but if it's relevant I'll ask my friend if it's okay to share.
Note: I realize this is not the best way to solve our problem (re-structuring/organizing the database) but we're building a proof-of-concept to convince the higher-ups to trust my friend with a copy of the database, from which he'll navigate the bureaucracy necessary to allow me to work directly with the data.
They've also given us the API for an ATOM feed, but it requires a keyword to search and seems useless for the task of stepping through every photograph in a particular collection.
There's no built-in rate limit for HTTP. Most common web servers are not configured out of the box to rate limit. If rate limiting is in place, it will almost certainly have been put there by the administrators of the website and you'd have to ask them what they've configured.
Some search engines respect a non-standard extension to robots.txt that suggests a rate limit, so check for a Crawl-delay directive in the site's robots.txt.
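A small sketch of honoring that directive from Python, using the standard library's urllib.robotparser (the domain and URLs are placeholders):

    import time
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://example.org/robots.txt")  # placeholder domain
    rp.read()

    # crawl_delay() returns the Crawl-delay for the given user agent, or None
    delay = rp.crawl_delay("*") or 1.0  # fall back to a polite one-second pause

    for url in ["https://example.org/page1", "https://example.org/page2"]:
        if rp.can_fetch("*", url):
            # ... fetch and parse the page here ...
            time.sleep(delay)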
HTTP/1.1 did suggest a limit of two concurrent connections per server, but browsers have long ignored it, and that part of the standard has since been revised because it was quite outdated.