Is it possible to have multiple global IPs running on the same server? - python

I'm currently working on Python code that manages multiple user accounts and accesses a provider's API: it instantiates a websocket and makes action requests. The API has a limit on requests per account and per IP. The per-account quota is never reached, but the per-IP quota is hit easily, so I'm periodically blocked from making requests. My question is whether I can have more than one IP per server, so I don't run into this quota problem.
I already tried spinning up different VPSes, but some providers use the same IP for cheap VPS plans. (The client manager is very light in processing; if I have to run a lot of VPSes just for different IPs, I end up using only 2-5% of each VPS.)
I'm new to networking in general, so sorry if this has already been answered somewhere else (I didn't find anything) or if it's too much of a newbie question.
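For context on what an answer could look like: a single machine can have several IP addresses configured (VPS providers often sell extra "failover" or "additional" IPs), and an outgoing socket can be bound to one of them before connecting, so each request appears to come from the chosen address. A dependency-free sketch using the stdlib socket module; `connect_from` is a made-up helper name:

```python
import socket

def connect_from(local_ip, remote_host, remote_port):
    """Open a TCP connection whose source address is a specific
    local IP. Only works if local_ip is actually configured on
    one of this machine's network interfaces."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((local_ip, 0))  # port 0: let the OS pick an ephemeral port
    s.connect((remote_host, remote_port))
    return s
```

Higher-level HTTP libraries usually expose the same idea (e.g. a `source_address` option on the underlying connection pool), so the per-IP quota can be spread across however many addresses the server has.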

Related

How many requests can I make to google.com from the same IP without getting blocked or something like that?

I'm writing a program in Python and want to check the internet connection in a loop. I do this with the requests module and it all works fine, but my question is: how many requests are allowed per day, or per hour? At the moment I check the connection every 2 seconds, so every 2 seconds google gets a request from my IP. That makes more than 40,000 requests a day if my software runs for 24 hours.
Is this a problem? I can't use proxies because I won't have access to or control over the customer's computers or their settings once my software is deployed.
There are rate limits on all Google public and internal APIs.
However, the documentation does not clearly spell out the exact rate limit on google.com.
If you want to check connectivity, it might be better to use a DNS server such as 1.1.1.1.
If you want programmatic access to google.com, you should instead use the search API at https://developers.google.com/custom-search/v1/overview
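The connectivity-check suggestion can be implemented without touching google.com at all, for example by opening a TCP connection to a public DNS resolver. A minimal sketch; the function name is illustrative:

```python
import socket

def internet_available(host="1.1.1.1", port=53, timeout=3):
    """Check connectivity by opening a TCP connection to a public
    DNS resolver. Cheaper and friendlier than fetching google.com:
    no HTTP request is made and no rate limit applies in practice."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Connecting to port 53 of 1.1.1.1 (Cloudflare's resolver) succeeds as soon as the network path is up, and the `OSError` catch covers both DNS failures and unreachable-network errors.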

What would be the best way to transmit data from one python script to another, without them being on the same computer?

I am attempting to create a basic chatroom in python, and I would like to know how I could transmit data from one script to another, preferably without using google drive. If needed to, I could create a webserver on Replit, but I don't do well with HTML or PHP.
Side note: I can't port forward, as my google wifi doesn't accept any level of port forwarding.
I would send messages of about 50 characters every couple seconds
Since you mention port forwarding, I assume you want two chat clients that run on different local networks to talk to each other, for example your own and the chat client of a friend in a remote location, over the internet.
If you (or your counterpart) cannot set up port forwarding, then direct communication between the script on your computer and theirs is hard, if not impossible. The solution is to set up a third computer or service on the internet that can be reached by both clients and use it for relaying messages between them.
A network is typically protected by a firewall of sorts and will typically sit behind a router that performs network address translation (NAT), which lets multiple devices on the network access services on the internet simultaneously while all sharing the same public IP address. Port forwarding fits into that by connecting a specific port on the outside directly to a port on a machine on the inside. Without it, an outside computer might be able to reach your IP address, but it could never connect to a computer or program inside the network, because the router wouldn't know which machine to contact; the firewall might also disallow the connection to begin with.
But if your computer on the inside establishes a connection with an accessible server on the internet, expecting a response, that creates a temporary conduit through the router and firewall that the server can use to send messages (look up 'hole punching' for more information). And if both computers do this, the server can relay messages between the two clients. Only the server then needs to run in an environment whose firewall and NAT don't prevent this.
You could write a simple Python server that accepts incoming connections and can send several responses, and a simple client that connects to it, identifies itself, and joins a chatroom or holds a direct conversation with another connected client. There are many techniques that would allow you to do this, but I think web sockets might be a good starting point, as long as you don't plan to do fast or high-volume work that would require something like a UDP connection.
A library like websockets could be a good starting point, but you may want to start out by figuring out where you would have this service hosted first, since there may be limitations on what you're able and allowed to do.
Also, if all you're looking to do is send simple messages, you may want to avoid writing your own server and protocol altogether: look around for open-source message servers written in a language you're comfortable with, or ones that just work out of the box without any development, in which case the language doesn't really matter, as long as you can connect to the server and exchange messages from Python.
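The relay idea above can be sketched with the stdlib socket module (used here instead of the websockets library so the example has no dependencies); `serve` and `handle_client` are illustrative names, and a real chat server would add framing, identities, and rooms:

```python
import socket
import threading

clients = []
clients_lock = threading.Lock()

def handle_client(conn):
    """Register a client, then forward everything it sends
    to every other connected client."""
    with clients_lock:
        clients.append(conn)
    try:
        while True:
            data = conn.recv(1024)
            if not data:  # client disconnected
                break
            with clients_lock:
                for other in clients:
                    if other is not conn:
                        other.sendall(data)
    finally:
        with clients_lock:
            clients.remove(conn)
        conn.close()

def serve(host="127.0.0.1", port=0):
    """Start the relay; port 0 lets the OS pick a free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()

    def accept_loop():
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle_client, args=(conn,),
                             daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv
```

Both chat clients open outgoing connections to this relay, so neither side needs port forwarding; only the machine running `serve` must be reachable from the internet.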

Python - Multiple client servers for scaling

For my current setup, I have a single client server using Tornado, a standalone database server and another standalone server for my website.
I'm looking at having a second client server process running on the same system (to take advantage of its multiple cores) and I would like some advice in locating which server my "clients" have connected to. Each client can have multiple connections (instances).
I've already looked at using memcached to hold a list of user identifiers mapped to the server(s) they are connected to, but that doesn't seem like it would scale very well (e.g. with six-figure numbers of connected users).
I see the same issue with database lookups.
I have already optimized my server as much as possible, without going into micro-optimization and I personally frown upon that.
Current server methodology:
On connect:
Accept connection, rate limit for max connections per IP.
Append client instance to a list named "clientList".
On data from client:
Rate limit for max messages per second.
Append data to a client work queue.
If the client already has a thread dedicated to its work queue:
return; the work will be picked up by that thread.
Otherwise, create a new thread for this user's work queue and start it.
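The per-client queue steps above could look roughly like this; `ClientWorker` and its methods are made-up names, and a production version would likely reuse a thread pool rather than spawning threads per client:

```python
import queue
import threading

class ClientWorker:
    """One work queue per client. A worker thread is created lazily
    on the first message and exits once the queue drains, matching
    the on-data steps described in the question."""

    def __init__(self):
        self.q = queue.Queue()
        self._lock = threading.Lock()
        self._running = False

    def submit(self, item):
        self.q.put(item)
        with self._lock:
            if not self._running:          # no dedicated thread yet
                self._running = True
                threading.Thread(target=self._drain,
                                 daemon=True).start()

    def _drain(self):
        while True:
            try:
                item = self.q.get(timeout=0.1)
            except queue.Empty:
                with self._lock:
                    # re-check under the lock so an item submitted
                    # during the timeout is not stranded
                    if self.q.empty():
                        self._running = False
                        return
                continue
            self.process(item)

    def process(self, item):
        raise NotImplementedError  # override with real message handling
```

The re-check in `_drain` avoids the race where a message arrives between the queue timing out and the worker marking itself stopped.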
TLDR:
How do I efficiently store which server(s) a client has connected to, so I can forward messages to that user?
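A minimal sketch of the lookup structure being asked about, using an in-process dict as a stand-in for a shared store such as Redis or memcached (the class and method names are hypothetical; in the multi-server case the dict's contents would live in the shared store so every server sees the same mapping):

```python
import threading
from collections import defaultdict

class ConnectionRegistry:
    """Map user id -> set of server ids the user is connected on.
    A user can appear on several servers because each client may
    hold multiple connections (instances)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._map = defaultdict(set)

    def register(self, user_id, server_id):
        with self._lock:
            self._map[user_id].add(server_id)

    def unregister(self, user_id, server_id):
        with self._lock:
            self._map[user_id].discard(server_id)
            if not self._map[user_id]:
                del self._map[user_id]   # drop empty entries

    def servers_for(self, user_id):
        """Return the servers to forward a message through."""
        with self._lock:
            return set(self._map.get(user_id, ()))
```

Each server calls `register` on connect and `unregister` on disconnect; to reach a user, look up `servers_for` and push the message only to the servers listed, which stays O(1) per lookup regardless of the total user count.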

Scraping a lot of pages with multiple machines (with different IPs)

I have to scrape information from several web pages using BeautifulSoup + requests + threading. I create many workers; each one grabs a URL from the queue, downloads the page, scrapes data from the HTML and puts the result into a results list. (My code is too long to just paste here.)
But I ran into the following problem: this site probably limits the number of requests from one IP per minute, so scraping isn't as fast as it could be. However, I have a server with a different IP, so I thought I could make use of it.
I thought of creating a script for the server that would listen on some port (with sockets), accept URLs, process them, and then send the results back to my main machine.
But I'm not sure there isn't a ready-made solution; the problem seems common to me. If one exists, what should I use?
Most web servers use rate limiting to save resources and protect themselves from DoS attacks; it's a common security measure.
Looking at your problem, here are the things you could do.
Put some sleep in between requests (it will bring down the requests-per-second count, and the server may stop treating your code as a robot).
If you are on a home internet connection without a static IP address, you can try rebooting your router (e.g. through its telnet interface) whenever your requests get denied; many ISPs assign a new dynamic IP on reconnect.
If you are using a cloud server/VPS, you can buy multiple IP addresses and rotate your requests across the different network interfaces; that also lowers the per-IP requests-per-second count.
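The first suggestion, sleeping between requests, can be wrapped in a small stdlib-only helper so every worker is throttled consistently (`RateLimiter` is an illustrative name, not a library class):

```python
import time

class RateLimiter:
    """Enforce a minimum delay between successive requests."""

    def __init__(self, min_interval):
        self.min_interval = min_interval  # seconds between requests
        self._last = 0.0

    def wait(self):
        """Block until at least min_interval has passed since the
        previous call, then record the new timestamp."""
        now = time.monotonic()
        elapsed = now - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()
```

Calling `limiter.wait()` right before each `requests.get(...)` caps the scraper at `1 / min_interval` requests per second; with several threads, share one limiter per target site and guard `wait` with a lock.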
You will need to find the real cause of the denials from the server you are pulling pages from; the topic is too general for a definitive answer, but here are some things you can do to work out what is triggering the blocks, after which you can pick one of the methods above to fix the problem.
Decrease the requests-per-second count and see how the web server responds.
Set the HTTP request headers to simulate a web browser and see whether the blocking stops.
The bandwidth of your internet connection or your machine's connection limit could also be the problem; use netstat to monitor the number of active connections before and after your requests start being blocked.

How to avoid packet loss on server application restart?

A typical situation with a server/web application is that the application needs to be shut down and restarted to implement an upgrade.
What are the possible/common schemes (and available software) to avoid losing data that clients sent to the server during the short time the application was gone?
An example scheme that could work is: For a simple web server where the client connects to port 80, rather than the client connecting directly to the web server application, there could be a simple application in between that listens to port 80 and seamlessly forwards/returns data to/from the "Actual" web server application (on some other port). When the web server needs to be shut down and restarted, the relay app could detect this and buffer all incoming data until the webserver comes back to life. This way there is always an application listening to port 80 and data is never lost (within buffer-size and time reason, of course). Does such a simple intermediate buffer-on-recipient-unavailable piece of software exist already?
I'm mostly interested in solutions for a single application instance and not one where there are multiple instances (in which case a clever rolling update scheme could be used), but in the interests of having a full answer set, any response would be great!
To avoid this, have multiple application servers behind a load balancer. Before bringing one down, ensure the load balancer is not sending it new clients. Bring it down, traffic will go to the other applications servers, and when it comes back up traffic will begin getting sent to it again.
If you have only one application server, simply 'buffering' network traffic is a poor solution. When the server comes back up, it has none of the TCP state information anymore and the old incoming connections have nowhere to go anyway.
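One more option worth naming for the single-instance case (this is an addition, not part of the answer above): on platforms with `SO_REUSEPORT` (Linux 3.9+, also available on macOS), the upgraded server process can bind the same port while the old one is still draining its connections, so there is never a moment with nothing listening. A sketch:

```python
import socket

def make_reuseport_listener(port):
    """Create a TCP listener with SO_REUSEPORT set, so a second
    process (the upgraded server) can bind the same port while
    this one finishes serving its existing connections."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("0.0.0.0", port))
    s.listen(128)
    return s
```

The upgrade then becomes: start the new process (it binds alongside the old one, and the kernel spreads new connections across both), tell the old process to stop accepting and finish in-flight requests, then let it exit. No buffering proxy is needed, and no TCP state is lost.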
