How to connect to a server from a client on Heroku using an IP address - Python

I am developing an application on Heroku, but I am struggling with one issue.
The application has two dynos (one for the server, the other for the client).
Since the client needs to get some data from the server, it has to know the IP address of the server dyno.
I am currently trying Fixie and QuotaGuard Static: they give me an IP address, but I cannot connect to the server using it.
Could you tell me how to fix this?

You want to have two dynos communicate directly over a socket connection. Unfortunately, you can't easily do that; that runs counter to the ethos of Heroku and 12-factor application design (http://12factor.net), which specifies that processes should be isolated from each other, and that communication be via "network attached services". That second point may seem like a nuance, but it affects how the dynos discover the other services (via injected environment variables).
There are many reasons for this constraint, not the least of which is the fact that "dynos", as a unit of compute, may be scaled, migrated to different physical servers, etc., many times over an application's lifecycle. Trying to connect to a socket on a dyno reliably would actually get pretty complicated (selecting the right one if multiple are running, renegotiating connections after scaling/migration events, etc.). Remember - even if you are never going to call heroku ps:scale client=2, Heroku doesn't know that and, as a platform, it is designed to assume that you will.
The solution is to use an intermediate service like Redis to facilitate the inter-process communication, via a framework like Python RQ or similar.
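As an illustration of that approach, here is a minimal sketch with RQ; the REDIS_URL variable and the tasks.count_words function are hypothetical stand-ins for your own setup. The client dyno enqueues a job on a shared Redis instance:

import os
from redis import Redis
from rq import Queue

# both dynos attach to the same Redis add-on via its URL
redis_conn = Redis.from_url(os.environ["REDIS_URL"])
q = Queue(connection=redis_conn)

# tasks.count_words is a hypothetical function importable by the worker
job = q.enqueue("tasks.count_words", "some text to process")
print(job.id)

The server dyno would then run rq worker against the same Redis (e.g. as its Procfile command), so neither process ever needs the other's IP address.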
Alternatively, treat the two dynos as separate applications - then you can connect from one to the other via HTTP using the publicly available DNS entry for that application. Note - in that case, it would still be possible to share a database if that's required.
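A sketch of that alternative, assuming the server is deployed as its own app (the app name and endpoint below are hypothetical):

import requests

# public DNS entry of the other Heroku app; name is hypothetical
resp = requests.get("https://my-server-app.herokuapp.com/data", timeout=5)
resp.raise_for_status()
print(resp.json())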
Hope that helps.

Related

How to make HTTP requests from different clients appear that they came from the same IP address?

I'm using a 3rd-party API that invalidates the OAuth token if the requests came from different IP addresses. This is causing issues because the service runs on multiple hosts.
Ideally, I want only the requests to this particular API to be routed through a single IP.
I thought about setting up a proxy server, but I'm concerned that I won't be able to scale this proxy beyond 1 machine.
Any suggestions?
The ideal option here would of course be to obtain an OAuth token for each machine. (Or, even better, to get the service to allow you to share a token across IPs.) But I assume there's some reason you can't do that.
In which case you probably do want a proxy server here.
Routing only the requests to this particular API through that proxy is dead simple. Set up an explicit proxy rather than a transparent one, and specify that explicit proxy for those particular requests.
Since you haven't shown us, or even described, your code, I can't show you how to do that with whatever library you're using, but here's how to do it with requests, and it's not much harder with the stdlib urllib or most other third-party libraries.
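For instance, a minimal sketch with requests (the proxy address and API URL are hypothetical); only the calls that pass the proxies mapping are routed through the proxy, so the rest of your traffic is unaffected:

import requests

# hypothetical static-IP proxy; route both schemes through it
PROXIES = {
    "http": "http://my-proxy.example.com:3128",
    "https": "http://my-proxy.example.com:3128",
}

# only this request goes through the proxy
resp = requests.get("https://api.example.com/v1/resource", proxies=PROXIES)
print(resp.status_code)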
But, for completeness: it's not at all impossible to make the separate machines appear to have the same IP address, as long as all of your machines are behind a router that you control. In fact, that's exactly what you get with a typical home DSL/cable setup via NAT: each machine has its own internal-only address, but they all share one public address.
But it's probably not what you want. For one thing, if your machines are actually GCP hosts, you don't control the router, and you may not even be able to control whether they're on the same network (in case you were thinking of running a software router to pipe them all through). Also, NAT causes all kinds of problems for servers. And, since your worry here is scaling, using NAT is a nightmare once you have to scale beyond a single subnet - even more so if these instances are meant to be servers (which seems likely, if you're running them on GCP).
And finally, to use NAT to talk to just one service, you either need very complicated routing tables, or an extra network interface per machine (one that you can put behind a different router). So I doubt it's what you actually want here.

Client/Server role reversal with SimpleXMLRPCServer in Python

I'm working on a project to expose a set of methods from various client machines to a server for the purpose of information gathering and automation. I'm using Python at the moment, and SimpleXMLRPCServer seems to work great on a local network, where I know the addresses of the client machines, and there's no NAT or firewall.
The problem is that the client/server model is backwards for what I want to do. Rather than have an RPC server running on the client machine, exposing a service to the software client, I'd like to have a server listening for connections from clients, which connect and expose the service to the server.
I'd thought about tunneling, remote port forwarding with SSH, or a VPN, but those options don't scale well, and introduce more overhead and complexity than I'd like.
I'm thinking I could write a server and client to reverse the model, but I don't want to reinvent the wheel if it already exists. It seems to me that this would be a common enough problem that there would be a solution for it already.
I'm also just cutting my teeth on Python and networked services, so it's possible I'm asking the wrong question entirely.
What you want is probably WAMP routed RPC.
It seems to address your issue and it's very convenient once you get used to it.
The idea is to put the WAMP router in the cloud (say), and both the RPC caller and the RPC callee are clients with outbound connections to the router.
I was also using a VPN to connect IoT devices over the internet, but switching to this router model really simplified things, and it scales pretty well.
By the way, WAMP is implemented in several languages, including Python.
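As a rough sketch with Autobahn|Python (asyncio flavour); the router URL, realm, and procedure name are all hypothetical. The callee registers a procedure with the router, and any caller connected to the same router can invoke it:

from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

class Callee(ApplicationSession):
    async def onJoin(self, details):
        def add(a, b):
            return a + b
        # register the procedure with the router; callers never need our IP
        await self.register(add, "com.example.add")

# hypothetical router address and realm
runner = ApplicationRunner("ws://router.example.com:8080/ws", "realm1")
runner.run(Callee)

A caller connects the same way and does await self.call("com.example.add", 2, 3); since both sides dial out to the router, neither needs a publicly reachable address.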
Maybe Pyro can be of use? It allows for many forms of distributed computing in Python. Your requirements aren't very clear, so it's hard to say whether this will work for you, but I'd advise you to have a look at the documentation or the many Pyro examples and see if there's something that matches what you want to do.
Pyro abstracts most of the networking intricacy away, you simply invoke a method on a (remote) python object.
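A minimal Pyro4 sketch (class and method names are hypothetical): the machine exposing the service registers an object with a daemon, and the other side calls methods on it through a proxy built from the printed URI:

import Pyro4

@Pyro4.expose
class Inventory(object):
    def report(self):
        return {"hostname": "client-01", "status": "ok"}

# prints a URI like PYRO:obj_...@host:port to hand to the caller
daemon = Pyro4.Daemon()
uri = daemon.register(Inventory())
print("Object available at:", uri)
daemon.requestLoop()

The caller then does Pyro4.Proxy(uri).report(). Note that the caller still has to be able to reach the machine running the daemon, so NAT/firewall traversal remains a separate concern.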

Python Multiprocessing with Distributed Cluster Using Pathos

I am trying to make use of multiprocessing across several different computers, which pathos seems geared towards: "Pathos is a framework for heterogenous computing. It primarily provides the communication mechanisms for configuring and launching parallel computations across heterogenous resources." In looking at the documentation, however, I am at a loss as to how to get a cluster up and running. I am looking to:
Set up a remote server or set of remote servers with secure authentication.
Securely connect to the remote server(s).
Map a task across all CPUs in both the remote servers and my local machine using a straightforward API like pool.map in the standard multiprocessing package (like the pseudocode in this related question).
I do not see an example for (1), and I do not understand the tunnel example provided for (2). The example does not actually connect to an existing service on the localhost. I would also like to know if/how I can require this communication to come with a password/key of some kind that would prevent someone else from connecting to the server. I understand this uses SSH authentication, but absent a preexisting key, that only ensures that the traffic is not read as it passes over the Internet; it does nothing to prevent someone else from hijacking the server.
I'm the pathos author. Basically, for (1) you can use pathos.pp to connect to another computer through a socket connection. pathos.pp has almost exactly the same API as pathos.multiprocessing, although with pathos.pp you can give the address and port of a remote host to connect to, using the keyword servers when setting up the Pool.
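A hedged sketch of that, assuming a pp-worker is already listening on the remote host (the host name, port, and work function are made up):

from pathos.pp import ParallelPythonPool

def square(x):
    return x * x

# hypothetical remote host running a pp-worker on port 35000
pool = ParallelPythonPool(servers=("remote.example.com:35000",))
print(pool.map(square, range(10)))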
However, if you want to make a secure connection with SSH, it's best to establish an SSH-tunnel connection (as in the example you linked to), and then pass localhost and the local port number to the servers keyword in Pool. This will then connect to the remote pp-worker through the SSH tunnel. See:
https://github.com/uqfoundation/pathos/blob/master/examples/test_ppmap2.py and
http://www.cacr.caltech.edu/~mmckerns/pathos.html
Lastly, if you are using pathos.pp with a remote server as above, you are already doing (3). However, it can be more efficient (for a sufficiently embarrassingly parallel set of jobs) to nest the parallel maps… so first use pathos.pp.ParallelPythonPool to build a parallel map across servers, then call an N-way job using a parallel map from pathos.multiprocessing.ProcessingPool inside the function you are mapping with pathos.pp. This will minimize the communication across the remote connection.
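A sketch of that nesting (again with a hypothetical server address and work function): the outer map hands one chunk to each server, and the inner map fans each chunk out across the local CPUs of whichever machine runs it:

from pathos.pp import ParallelPythonPool
from pathos.multiprocessing import ProcessingPool

def work(x):
    return x * x

def process_chunk(chunk):
    # inner map: use all local CPUs on the machine running this chunk
    return ProcessingPool().map(work, chunk)

chunks = [list(range(0, 100)), list(range(100, 200))]
remote = ParallelPythonPool(servers=("remote.example.com:35000",))
# outer map: one call per chunk, minimizing traffic over the connection
results = remote.map(process_chunk, chunks)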
Also, you don't need to give an SSH password if you have ssh-agent working for you. See: http://mah.everybody.org/docs/ssh. Pathos assumes that, for parallel maps across remote servers, you will have ssh-agent working so you won't need to type your password every time there's a connection.
EDIT: added example code on your question here: Python Multiprocessing with Distributed Cluster

Possible to use websockets on a shared hosting web server?

I use PHP, JS, HTML, CSS. I'm willing to learn ruby or python if that is the best option.
My next project will involve live data being fed to users from the server and vice versa. I have shell access on my shared server, but I'm not sure about access to ports. Is it possible to use websockets or any other efficient server-client connection on a shared hosting account, and if so, what do I need to do?
For the best performance and full control of your setup, you need "your own" server.
Today there are a huge number of virtual server providers, which means you get full control over your IP while the physical server is still shared between many clients, meaning cheaper prices and more flexibility.
I recommend the free tier at Amazon EC2; you can always cancel after the free period. They also have many geographical locations to choose from.
Another provider in Europe that I have been satisfied with is Tilaa
You can probably find many more alternatives that suits your needs on the Webhosting talk forum
Until a few weeks ago, websocket deployment required either a standalone server running on a different port, or a server-side proxy like varnish/haproxy listening on port 80 and redirecting normal HTTP traffic. The latest nginx versions added built-in support for websockets, but unless your hosting provider uses one of them, you're out of luck. (Note that I don't have personal experience with this nginx feature.)
Personally, I find that for most applications websockets can be replaced with Server-sent events instead - a very lightweight protocol which is basically another HTTP connection that stays open on the server side and sends a stream of plain text with messages separated by double newlines.
It's supported in most decent browsers, but since this excludes Internet Explorer there are polyfills available here and here.
This covers one side of the connection, the one that is usually implemented with long-polling. The other direction can be covered the usual way with XHR. The end result is very similar to websockets IMO, but with a bit higher latency for client->server messages.
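As an illustrative sketch (using Flask purely as an example; the endpoint name is hypothetical), a Server-sent events stream is just a long-lived HTTP response with the text/event-stream MIME type:

import time
from flask import Flask, Response

app = Flask(__name__)

@app.route("/stream")
def stream():
    def events():
        while True:
            # each message is a "data:" line terminated by a blank line
            yield "data: server time is {}\n\n".format(time.ctime())
            time.sleep(1)
    return Response(events(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(threaded=True)

On the browser side, new EventSource('/stream') fires a message event for each data: line, and an ordinary XHR covers the client-to-server direction.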

Decentralized networking in Python - How?

I want to write a Python script that will check the user's local network for other instances of the script currently running.
For the purposes of this question, let's say that I'm writing an application that runs solely via the command line, and will just update the screen when another instance of the application is "found" on the local network. Sample output below:
$ python question.py
Thanks for running ThisApp! You are 192.168.1.101.
Found 192.168.1.102 running this application.
Found 192.168.1.104 running this application.
What libraries/projects exist to help facilitate something like this?
One way to do this would be for the application in question to broadcast UDP packets; your application would receive those packets from the different nodes and display them. The Twisted networking framework provides facilities for doing such a job, and the documentation provides some simple examples too; a sketch of the idea follows.
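Here is a hedged Twisted sketch (port number and payload are made up): each instance broadcasts a heartbeat and prints the source address of any heartbeat it receives:

from twisted.internet import reactor, task
from twisted.internet.protocol import DatagramProtocol

PORT = 9999  # hypothetical port shared by all instances

class Heartbeat(DatagramProtocol):
    def startProtocol(self):
        self.transport.setBroadcastAllowed(True)
        self._beat = task.LoopingCall(self.beacon)
        self._beat.start(5.0)  # announce every 5 seconds

    def beacon(self):
        self.transport.write(b"ThisApp", ("255.255.255.255", PORT))

    def datagramReceived(self, data, addr):
        if data == b"ThisApp":
            print("Found {} running this application.".format(addr[0]))

reactor.listenUDP(PORT, Heartbeat())
reactor.run()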
Well, you could write something using the socket module. You would need two programs, though: a server on the user's local computer, and a client program that interfaces with the server. The server would also use the select module to listen for multiple connections. The client would then send something to the server when it is run, or whenever you want it to, and the server could print out which connections it is maintaining, including details such as IP address.
This is documented extremely well at this link, in more depth than you need, but it will explain it to you as it did to me: http://ilab.cs.byu.edu/python/
You can try broadcast UDP; I found an example here: http://vizible.wordpress.com/2009/01/31/python-broadcast-udp/ A minimal sketch of the same idea follows.
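This is a stdlib-only sketch (hypothetical port and payload); one socket announces, another listens:

import socket

PORT = 9999  # hypothetical port shared by all instances

# sender: announce ourselves to the whole subnet
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
send.sendto(b"ThisApp", ("255.255.255.255", PORT))

# receiver: listen for announcements from other instances
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv.bind(("", PORT))
data, addr = recv.recvfrom(1024)
print("Found {} running this application.".format(addr[0]))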
You can have a server-based solution: a central server where clients register themselves, and query for other clients being registered. A server framework like Twisted can help here.
In a peer-to-peer setting, push technologies like UDP broadcasts can be used, where each client puts out a heartbeat packet every so often on the network for others to receive. Basic modules like socket would help with that.
Alternatively, you could go for a pull approach, where the interested peer needs to discover the others actively. This is probably the least straightforward option. For one, you need to scan the network, i.e. find out which IPs belong to the local network, and go through them. Then you would need to contact each IP in turn. If your program opens a TCP port, you could try to connect to it and find out whether your program is running there. If you want your program to be completely ignorant of these queries, you might need to open an SSH connection to the remote IP and scan the process list for your program. All this might involve various modules and libraries. One you might want to look at is execnet.
