I need some help: I am at the early design stage of a client-server application and I don't know which of the two options (web service or socket programming) is the right one for my software.
All programming is in Python.
The layout:
The PC will need to run a server service - this server will get commands from the local computer and send them to the MiniPC.
The MiniPC will need to run a client service - when it identifies a command (method), it will go to the hardware (connected by serial, USB, ...), do something, and return the result to the MiniPC.
The MiniPC gets the hardware result and sends it to the logging server and to the main PC.
Notes:
The PC can control several MiniPCs.
The amount of data in one hardware response can be up to 10 KB.
Commands from the PC to the MiniPC are small (strings).
Logging data can be up to 10 KB.
Questions:
What is your recommendation for the protocol: web (HTTP) or socket programming?
Do you have any suggestions for the design?
You should be able to use socket programming for this. Set up a socket server on the PC and a client on each MiniPC. The clients would wait for input (read from the socket) from the PC and then send back the output they get from the hardware. In terms of design, I see two things. First, the socket server can use select() to handle multiple clients. Second, you probably want to bump up the SO_SNDBUF socket option on the MiniPC sockets and SO_RCVBUF on the server at the PC to multiples of 10 KB. What is your argument for considering the web?
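To make the select() idea concrete, here is a minimal sketch of the PC-side server, assuming a made-up port (5000) and treating each message as an opaque blob; it is not tied to your actual command format:

import select
import socket

HOST, PORT = "0.0.0.0", 5000  # port is an assumption, pick your own

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen()

sockets = [server]
while True:
    readable, _, _ = select.select(sockets, [], [])
    for sock in readable:
        if sock is server:
            # A MiniPC client is connecting.
            conn, addr = sock.accept()
            # Receive buffer sized to a few multiples of the 10 KB responses.
            conn.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 10 * 1024)
            sockets.append(conn)
        else:
            data = sock.recv(10 * 1024)
            if not data:
                sockets.remove(sock)
                sock.close()
            else:
                # Handle the hardware result sent back by a MiniPC here.
                print("result from", sock.getpeername(), ":", len(data), "bytes")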
I've done a similar project with ARM-based controllers instead of BeagleBones: feel free to ask me questions in the comments.
Firstly, technically your BeagleBones are servers - since they run a daemon service which is event-triggered - and the PCs are clients (but that is just pedantry).
Secondly, due to the limitations of embedded devices, I was not able to run an efficient web server on them, so the choice was simple. I would advise you to stick with socket programming, but add network services such as DHCP, support for TCP/UDP/UDP multicast, ping, echo, ...
Finally, the important question in terms of performance is the following:
what's the physical layer of communication?
Ethernet? WiFi? Bluetooth/ZigBee? I2C/CAN/...?
I will guess it's Ethernet or WiFi: the IEEE 802.11 (WiFi) protocol doesn't scale well because of CSMA (see http://fr.wikipedia.org/wiki/CSMA). If you want to have several devices (dozens), you will need switches/routers to segment the network into sub-networks and avoid congestion.
I am attempting to create a basic chatroom in Python, and I would like to know how I could transmit data from one script to another, preferably without using Google Drive. If need be, I could create a web server on Replit, but I don't do well with HTML or PHP.
Side note: I can't port forward, as my Google Wifi doesn't accept any level of port forwarding.
I would send messages of about 50 characters every couple of seconds.
Since you mention port forwarding, I assume you want two chat clients that run on different local networks to talk to each other, for example your own and the chat client of a friend in a remote location, over the internet.
If you (or your counterpart) cannot set up port forwarding, then direct communication between the script on your computer and theirs is hard, if not impossible. The solution is to set up a third computer or service on the internet that can be reached by both clients and use it for relaying messages between them.
A network is typically protected by a firewall of sorts and will typically sit behind a router that performs network address translation (NAT), which lets multiple devices on the network access services on the internet simultaneously while sharing the same public IP address. Port forwarding fits into that by connecting a specific port on the outside directly to a port on a machine on the inside. Without it, an outside computer might be able to reach your IP address, but it could never connect to a computer or program inside the network, as the router wouldn't know which computer to contact; the firewall might also disallow the connection to begin with.
But if your computer on the inside establishes a connection with an accessible server on the internet, expecting a response, that creates a temporary conduit through the router and firewall that the server can use to send messages (look up 'hole punching' for more information). And if both computers do this, the server can relay messages between both clients. Only the server then needs to run in an environment without firewall restrictions or NAT that would prevent this.
You could write a simple Python server that accepts incoming connections and can send several responses, and a simple client that connects to it, identifies itself, and joins a chatroom or has a direct conversation with another connected client. There are many techniques that would allow you to do this, but I think web sockets might be a good starting point, as long as you don't plan to do advanced, fast, or high-volume stuff that would require something like a UDP connection.
A library like websockets could be a good starting point, but you may want to start out by figuring out where you would have this service hosted first, since there may be limitations on what you're able and allowed to do.
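For illustration only, a tiny relay sketch using that websockets package (pip install websockets); the port and the broadcast-to-every-other-client behaviour are assumptions, not a finished chat protocol:

import asyncio
import websockets

CONNECTED = set()

async def relay(ws):
    # Note: older websockets versions pass an extra 'path' argument here.
    CONNECTED.add(ws)
    try:
        async for message in ws:
            # Forward each incoming message to every other connected client.
            for peer in list(CONNECTED):
                if peer is not ws:
                    await peer.send(message)
    finally:
        CONNECTED.discard(ws)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())

Each client would then connect with websockets.connect("ws://<relay-host>:8765") and simply send and receive text messages.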
Also, if all you're looking to do is send simple messages, you may want to stay away from writing your own server and protocols altogether - have a look around for open-source message servers written in a language you are comfortable with, or that just work out of the box without any development, in which case the language doesn't even really matter, as long as you can connect to the server and exchange messages from Python.
I have hardware that connects to a raw TCP socket on any given IP and port combination. It then continually sends characters. The following piece of Python code may give you an idea of what the hardware does.
import socket

serverIP = '*server IP or domain*'
serverPort = 60000

Sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
Sock.connect((serverIP, serverPort))
while True:
    # Replay the sample data file line by line, over and over,
    # the way the hardware keeps streaming characters.
    with open("send-data.txt", "r") as f:
        for c in f:
            Sock.send((c + '\n').encode())
Sock.shutdown(socket.SHUT_RDWR)  # never reached; the loop above runs forever
Sock.close()
When this code is run, it behaves exactly like my hardware system. The send-data.txt file contains characters similar to what the hardware sends.
I have written a socket server in Python using the SocketServer library. It accepts connections, receives the character stream, and stores it in a local (newly created) file. Currently, I am running this code on my system as localhost, and it works. I would like to serve these files through a webpage.
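For reference, a stripped-down sketch of such a receiver with the standard library module (socketserver in Python 3, SocketServer in Python 2); the port matches the snippet above, and the file naming is just illustrative:

import socketserver

class StreamHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Store everything this hardware connection sends in its own file.
        client = "%s_%d" % self.client_address
        with open("capture_%s.txt" % client, "wb") as out:
            for line in self.rfile:
                out.write(line)

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 60000), StreamHandler) as server:
        server.serve_forever()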
I want to be able to do the same on a remote server. As you can see, my hardware limits me to using only raw TCP sockets. From what I understand, I'll need low-level access to the server machine, as with IaaS. I tried PythonAnywhere, but I guess they don't allow plain Python sockets. Heroku also requires you to write a web app, and I don't know how to go about that or whether it'll work with my hardware.
What hosting/cloud solution out there could act as the above-mentioned socket server and also as an HTTP server that would later serve these files and webpages?
If I understand your question correctly, you'd like to know which affordable hosting solution would allow you to communicate via arbitrary TCP sockets.
The answer is simple: pretty much any VPS (Virtual Private Server) company or IaaS provider. Since you tagged your question with Amazon-EC2, yes, they do too, but the learning curve to get your first instance running and the security groups (read: firewall rules, which live outside your VM) configured is rather steep. That said, you do have a so-called "Free Tier" there for one year, which lets you try out most of their services free of charge.
Other providers might be more suitable. (I'm not sure if it's allowed to suggest providers here, but you could for example look at Linode or Rackspace Cloud; they offer much less flexibility than EC2, but it's a whole lot easier to get started.)
As with any IaaS option, it would be beneficial to know Linux, networking and some security basics (at the very least) as you will be solely responsible for the things you create.
Talking about security...
If that piece of code you posted has a similarly rudimentary receiving end, you're setting yourself up for trouble as soon as it's out there in public, as the communication is done in plain text [*] and doesn't seem to require any kind of authentication. Anybody could probably telnet to the receiving end and just inject some lines of text?
(That's exactly why considerably sane PaaS providers often don't let you communicate over arbitrary ports and sockets :-) )
[*] I am guessing that, because you use readline. If any encryption was involved, you'd likely write/read in chunks of bytes.
I'm starting to write a UDP server to match two clients together and allow them to send/receive data to/from each other.
It's for a multiplayer game, and my goal is to create a P2P-like connection, but with the intermediary server I'll make sure it always works, even in cases where the user has a firewall or is behind a NAT.
The server should handle several matches (pairs of clients). I'm writing it in Python, and it's a bit harder than I thought.
Is there any open source code for a server similar to this?
Take a look at the ZeroMQ (0MQ) framework as an alternative to creating your own messaging. There's a Python binding (pyzmq) for it.
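A tiny request/reply sketch with pyzmq (pip install pyzmq), just to show the flavour; the port and message format are arbitrary:

# matchmaker.py - run "python matchmaker.py server" in one terminal,
# then "python matchmaker.py" in another.
import sys
import zmq

context = zmq.Context()

if sys.argv[1:] == ["server"]:
    sock = context.socket(zmq.REP)
    sock.bind("tcp://*:5555")
    while True:
        request = sock.recv_json()            # e.g. {"action": "find_match"}
        sock.send_json({"status": "queued"})  # matchmaking logic goes here
else:
    sock = context.socket(zmq.REQ)
    sock.connect("tcp://localhost:5555")
    sock.send_json({"action": "find_match"})
    print(sock.recv_json())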
This details how to write a UDP server in Python.
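The bare bones of a UDP echo server in the standard library look roughly like this (port chosen arbitrarily):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))

while True:
    data, addr = sock.recvfrom(4096)  # blocks until a datagram arrives
    # Echo the datagram back to whichever client sent it,
    # or pair clients up and forward between them for matchmaking.
    sock.sendto(data, addr)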
I made a Python script and ran it both on my computer and on a remote shell (a website that provides shell access).
I'm using threads, pipes and UDP sockets to transfer data in a P2P fashion, so each script can both receive and send through the same socket. To test whether this works, I open one terminal on my computer and another terminal over SSH, connected to my shell. I make sure the script is the same on both machines, and feed it an IP address.
Here is the script: http://codepad.org/V9Q1KcDT
(I don't know if I should paste it directly here or not)
My problem is this: strings I send only seem to land about 20% of the time, sometimes often, sometimes not, and it seems to be random...
What am I doing wrong?
Is UDP really that unreliable?
Is the Python thread + pipe + socket combination too slow?
Could it be some kind of network problem with my shell provider?
Is my program flawed?
Are pipes a good solution for communicating with threads?
I have no problem dropping the remote shell (I haven't tried that yet), but it's useful for testing purposes.
BTW, if I'm behind a router, how does the router know where to send the packet if I'm not the only computer connected? (I tried when I was the only one connected; it behaved identically.)
The User Datagram Protocol (UDP) could easily stand for Unreliable Datagram Protocol. UDP provides no guarantees regarding delivery, so if you need reliability you have to implement it yourself by resending messages.
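As a crude illustration of what "implement it yourself" means, here is a hypothetical stop-and-wait sender that retransmits until it gets an acknowledgement; it assumes the receiver answers every datagram with b"ACK", which your script would have to do:

import socket

def reliable_send(sock, data, addr, retries=5, timeout=1.0):
    # Resend the datagram until the peer acknowledges it or we give up.
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(data, addr)
        try:
            reply, _ = sock.recvfrom(16)
            if reply == b"ACK":
                return True
        except socket.timeout:
            continue  # no ACK in time, try again
    return False

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ok = reliable_send(sock, b"hello", ("127.0.0.1", 7000))  # address is an example
print("delivered" if ok else "gave up")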
That said, threading and multiprocessing and remote machines and networks are a lot of variables to diagnose at once. I'd advise you to step back and try to boil the problem down to something simpler.
For instance:
first try TCP instead of UDP
instead of threads, run 2 processes (one doing nit_send, one doing nit_recv)
run them both on your local machine (use localhost: 127.0.0.1)
Once you have this basic test working then add features back in. For instance, switch to UDP. Once you've got your UDP issues sorted out locally try introducing remote machines or threads, etc.
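For that first step, a localhost TCP smoke test can be as small as this (run it with "server" in one terminal and without arguments in another; the port is arbitrary):

import sys
import socket

ADDR = ("127.0.0.1", 6000)

if sys.argv[1:] == ["server"]:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(ADDR)
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        data = conn.recv(1024)
        if not data:
            break
        print("got:", data.decode())
else:
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(ADDR)
    cli.sendall(b"ping from client\n")
    cli.close()

Once this exchanges data reliably, move to UDP, then reintroduce the threads and pipes one at a time.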
Regarding the router question: all machines connected to your router are most likely NAT'd, so to machines on the internet they all appear to have the same IP address (the IP address of your router). Look into NAT and port forwarding if you want to route traffic to a specific machine on your local subnet.
A typical situation with a server/web application is that the application needs to be shut down and restarted to implement an upgrade.
What are the possible/common schemes (and available software) to avoid losing data that clients sent to the server during the short time the application was gone?
An example scheme that could work is: For a simple web server where the client connects to port 80, rather than the client connecting directly to the web server application, there could be a simple application in between that listens on port 80 and seamlessly forwards/returns data to/from the "actual" web server application (on some other port). When the web server needs to be shut down and restarted, the relay app could detect this and buffer all incoming data until the web server comes back to life. This way there is always an application listening on port 80 and data is never lost (within buffer-size and time limits, of course). Does such a simple intermediate buffer-on-recipient-unavailable piece of software exist already?
I'm mostly interested in solutions for a single application instance and not one where there are multiple instances (in which case a clever rolling update scheme could be used), but in the interests of having a full answer set, any response would be great!
To avoid this, have multiple application servers behind a load balancer. Before bringing one down, ensure the load balancer is not sending it new clients. Bring it down; traffic will go to the other application servers, and when it comes back up, traffic will begin being sent to it again.
If you have only one application server, simply 'buffering' network traffic is a poor solution. When the server comes back up, it has none of the TCP state information anymore and the old incoming connections have nowhere to go anyway.