So I've decided to learn Python, and after getting a handle on the basic syntax of the language, I decided to write a "practice" program that uses various modules.
I have a basic curses interface made already, but before I get too far I want to make sure that I can redirect standard input and output over a network connection. In effect, I want to be able to "serve" this curses application over a TCP/IP connection.
Is this possible and if so, how can I redirect the input and output of curses over a network socket?
This probably won't work well. curses has to know what sort of terminal (or terminal emulator, these days) it's talking to, in order to choose the appropriate control characters for working with it. If you simply redirect stdin/stdout, it's going to have no way of knowing what's at the other end of the connection.
The normal way of doing something like this is to leave the program's stdin/stdout alone, and just run it over a remote login. The remote access software (telnet, ssh, or whatever) will take care of identifying the remote terminal type, and letting the program know about it via environment variables.
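For illustration, here is a minimal sketch: run it over ssh and curses works unmodified, because ssh delivers the client's terminal type in the TERM environment variable:

import curses
import os

def main(stdscr):
    # curses picked its control sequences based on $TERM, which the
    # remote-login software forwards from the client side
    stdscr.addstr(0, 0, "TERM is %s" % os.environ.get("TERM", "unknown"))
    stdscr.refresh()
    stdscr.getkey()  # wait for a keypress before exiting

curses.wrapper(main)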
I have set up a Raspberry Pi connected to an LED strip, controllable from my phone via a Node server running on the Pi. It triggers a simple Python script that sets a colour.
I'm looking to expand the functionality so that a Python script runs continuously and I can send colours to it; it will consume the new colour and display both the old and new colours side by side. I.e., the Python script can receive commands and manage state.
I've looked into whether to use a simple loop or a daemon for this, but I don't understand how to both run a script continuously and receive new commands.
Is it better to keep state in the Node server and keep sending a lot of simple commands to a basic python script or to write a more involved python script that can receive few simpler commands and continuously update the lights?
IIUC, you don't necessarily need to have the Python script running continuously. It just needs to store state, which you can do by writing the state to a file. The script can then read the last state file at startup, decide what to do from there, perform the action, and update the state file.
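A minimal sketch of the state-file approach (the path and the JSON format are assumptions):

import json

STATE_FILE = "/home/pi/led_state.json"  # hypothetical location

def load_state():
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except (OSError, ValueError):
        return {"colour": "#000000"}  # default on first run or unreadable file

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

state = load_state()
old_colour = state["colour"]
new_colour = "#ff0000"  # would come from the Node server's invocation
# ... drive the LED strip with old_colour and new_colour side by side ...
state["colour"] = new_colour
save_state(state)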
In case you do want to run the script continuously, though, you need a way for it to accept commands. The simplest way for a daemon to accept commands is probably through signals: you can use custom signals, e.g. SIGUSR1 and SIGUSR2, to deliver these notifications. This may be sufficient if your daemon only needs to accept very simple requests.
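A minimal sketch, assuming the two signals are mapped to two fixed actions; they would be sent from the shell (or from Node) with kill -USR1 <pid> and kill -USR2 <pid>:

import signal
import time

def on_usr1(signum, frame):
    print("got SIGUSR1: switch to the next colour")

def on_usr2(signum, frame):
    print("got SIGUSR2: switch to the previous colour")

signal.signal(signal.SIGUSR1, on_usr1)
signal.signal(signal.SIGUSR2, on_usr2)

while True:
    time.sleep(1)  # the real loop would refresh the LED strip here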
For more complex requests, where you need to actually accept messages, you can listen on a Unix socket or a TCP socket. The socket module in the standard library can help you with that. If you want to build a more complex command server, you may even want to consider running a full HTTP server, though that looks like overkill for the current situation.
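A minimal sketch of the Unix-socket variant (the socket path and the one-command-per-connection protocol are assumptions); the Node server would connect to the same path and write a command:

import os
import socket

SOCK_PATH = "/tmp/led_daemon.sock"  # hypothetical path

if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)  # remove a stale socket left by a previous run

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)

while True:
    conn, _ = server.accept()
    command = conn.recv(1024).decode("utf-8").strip()
    print("received command:", command)  # e.g. "colour #ff0000"
    conn.close()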
Is it better to keep state in the Node server and keep sending a lot of simple commands to a basic python script or to write a more involved python script that can receive few simpler commands and continuously update the lights?
There's no straightforward answer to that; it depends on the case: how complex the state is, how frequently you need to change colours, how familiar you are with each language, and so on.
Another option is to have the Node app call the Python script as a child process, pass it any needed variables, and read Python's output as well, like so:
var exec = require('child_process').exec;
// stdout contains whatever the Python script printed
var child = exec('python file.py var1 var2', function (error, stdout, stderr) {
    if (error) return console.error(stderr);
    console.log(stdout);
});
I am new to Python programming, so I apologize if my question is not appropriate.
Suppose there is a network of several computers. I want to write a program in Python that will run on a central system (one of those computers). I need this program to execute a GUI-based program on every other computer in the network and collect the result data on the central system (the GUI-based program will produce a result file).
I know there is a way to transfer files or data between server and clients, but my question is: is there any way or method in Python (I mean sockets in network programming with Python) to call or execute a program on a client system?
Thanks in advance.
You can use Python's paramiko library, but that requires access to the remote system, i.e. SSH on Linux.
The paramiko package also offers an SSH server, so this might be a solution for your Windows machines.
It could be something like
from paramiko import SSHClient

client_machine = SSHClient()
client_machine.load_system_host_keys()  # trust the hosts already known to the system
client_machine.connect('linuxip', username='your_user', password='very_secret')
# run the remote script and capture its output streams
stdin, stdout, stderr = client_machine.exec_command('python /home/your_user/your/path/to/scripty.py')
Note: it's not secure to store your password in the script. Use a public/private key pair instead.
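For example, with key-based authentication instead (the key path is a placeholder):

client_machine.connect('linuxip', username='your_user',
                       key_filename='/home/your_user/.ssh/id_rsa')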
I'll try to be as clear as possible with what I'm trying to aim for.
I have a running Python script on my Raspberry Pi and I'd like multiple users to send inputs to the script remotely (through SSH or anything else that might work better).
So for example if I have this script running:
import time

name = input("Please type in your name.\n")
print("Hello there", name)
time.sleep(3)  # pause for 3 seconds
I want users to send names to this script remotely from devices that are connected to the same network as the Raspberry Pi.
If possible, I also want to implement the following functionalities:
Sending the output (aka the printed text) back to the specific device the input came from.
A queuing system: if multiple users send names at the same time, the script takes the names in order, one by one (see the sketch below).
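For what it's worth, the standard library's socketserver module handles connections one at a time by default, which gives exactly this queueing behaviour; a minimal sketch, assuming a newline-terminated name per connection and a placeholder port:

import socketserver

class NameHandler(socketserver.StreamRequestHandler):
    def handle(self):
        name = self.rfile.readline().decode("utf-8").strip()
        reply = "Hello there {0}\n".format(name)
        self.wfile.write(reply.encode("utf-8"))  # replies only to the sender

with socketserver.TCPServer(("0.0.0.0", 5000), NameHandler) as server:
    server.serve_forever()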
I know it's a lot to ask for, but I'd really appreciate if someone could help me get started with this by pointing me in the right direction. I've searched around quite a bit for the past few days but I haven't really come across anything that fits my needs.
Edit: I'm running this on Python 3.
Your comment that you would like to communicate (via network) to the script directly, opens up a world of possibilities. You have to modify your Python script a little though, because it won't communicate via stdin/stdout any longer.
I'm still not entirely sure how you want things to work, but it sounds to me like a solution based on RPC could work for you. May I suggest you have a look at Pyro4? Basically what it does is let you make normal Python method calls, but over the network, to code running on another computer.
So you can set up a server on your Pi (that needs to run continuously) which accepts remote calls from other computers, and can then call into your python code on the pi. It can process calls in parallel or in sequence. You didn't say if you need any form of security, but some basic security features are provided (no built-in encryption or communication over TLS yet, sorry).
There is a simple example in the Pyro4 documentation, and lots more are on GitHub, so you can have a look to see whether this fits your requirements.
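A minimal sketch of the idea (the class, method, and hostname are placeholders, and this assumes Pyro4 is installed on both machines):

import Pyro4

# on the Pi: expose a class whose methods remote machines may call
@Pyro4.expose
class Greeter(object):
    def greet(self, name):
        return "Hello there {0}".format(name)

daemon = Pyro4.Daemon(host="raspberrypi.local")  # placeholder hostname
uri = daemon.register(Greeter)  # yields a PYRO:... URI to hand to clients
print("Server ready, URI:", uri)
daemon.requestLoop()

# on a client machine, using the URI printed above:
# proxy = Pyro4.Proxy("PYRO:...@raspberrypi.local:port")
# print(proxy.greet("Alice"))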
Another solution that doesn't require third party libraries is perhaps to write a WSGI http server that calls your script, run this on the pi, and access it via HTTP from your other computers.
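If you want to avoid third-party libraries entirely, a minimal sketch of that idea using only the standard library (the port and the name query parameter are assumptions):

from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # pull ?name=... out of the query string, with a polite default
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["stranger"])[0]
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [("Hello there {0}\n".format(name)).encode("utf-8")]

make_server("0.0.0.0", 8000, app).serve_forever()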
The use case is as follows: I have an application, a TCP server on which clients can connect, send and receive information. Clients can send little scripts to be run by the server (that's only a small portion of trusted users who have the right to do that). Notwithstanding the danger of such a situation, I'd like to know how to debug these scripts. And to offer these users power to debug. In short, pdb seems like a good match to me.
But still, I'm facing several problems:
pdb must not use standard input and output but rather the socket connected to the client. In theory this seems doable by creating a new Pdb object (see the sketch after this list).
pdb must not freeze the entire program; it should offer to examine one specific script (probably a string of lines) and run it asynchronously, so other users aren't frozen.
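On the first point, the Pdb constructor accepts stdin and stdout arguments, so a file object wrapped around the client's socket can stand in for the terminal. A minimal sketch, assuming a single trusted client, blocking I/O, and a placeholder port:

import pdb
import socket

# accept one trusted client and attach a debugger to its socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 4444))  # placeholder port
server.listen(1)
conn, addr = server.accept()

handle = conn.makefile("rw")  # text-mode file wrapper around the socket
debugger = pdb.Pdb(stdin=handle, stdout=handle)
debugger.use_rawinput = False  # make Pdb read from the handle, not the tty
debugger.set_trace()  # from here on, the client drives the debugger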
I've tried to look into the code of the pdb module, but I admit I don't really know whether I can do both things at the same time.
Thanks for your help.
I've used web.py to create a web service that returns results in json.
I run it on my local box as python scriptname.py 8888
However, I now want to run it on a linux box.
How can I run it as a service on the linux box?
Update:
After reading the answers, it seems my question wasn't quite right. I am aware of the deployment process, the frameworks, and the web server. Maybe the following back story will help:
I had a small Python script that takes a file as input and, based on some logic, splits that file up. I wanted to use this script with a web front end I already have in place (Grails). I wanted to call it from the Grails application but did not want to do so by executing a command line, so I wrapped the Python script as a web service, which takes two parameters and returns, in JSON, the number of split files. This web service will ONLY be used by my Grails front end and nothing else.
So, I simply wish to run this little web.py service so that it can respond to my Grails front end.
Please correct me if I'm wrong, but would I still need nginx and the like after the above? This script sounds trivial, but eventually I will be adding more logic to it, so I wanted it as a web service which can be consumed by a web front end.
In general, there are two parts to this.
The "remote and event-based" part: Service used remotely over network needs certain set of skills: to be able to accept (multiple) connections, read requests, process, reply, speak at least basic TCP/HTTP, handle dead connections, and if it's more than small private LAN, it needs to be robust (think DoS) and maybe also perform some kind of authentication.
If your script is willing to take care of all of this, then it's ready to open its own port and listen. I'm not sure if web.py provides all of these facilities.
Then there's the other part, "daemonization", for when you want to run the server unattended: starting at boot, running under the right user, not blocking its parent (ssh, an init script, or whatever), not holding TTYs open, but maybe logging somewhere...
Servers like nginx and Apache are built for this, and provide interfaces like mod_python or WSGI, so that much simpler applications can give up as much of the above as possible.
So the answer would be: yes, you still need nginx or the like, unless:
you can implement all of this yourself in Python, or
you are using the script on localhost only and are willing to accept some risk of instability.
In those cases you can probably manage on your own.
Try this:
nohup python scriptname.py 8888 >/dev/null 2>&1 &
It will keep running in the background after you log out. That isn't a true daemon (no PID file, no supervision, no start at boot), but it is often good enough for a small private service.