Preventing Blocking of os.system - python

I'm trying to run multiple Django development servers from a Python script. Normally I start a server with the terminal command "python manage.py runserver PORTNumber". I want to run several instances on different ports, but os.system blocks until the server exits; I also tried wrapping the os.system call in a thread, but that blocked as well. Any ideas how to prevent the blocking, or another way to run the development server on multiple ports?
Thanks in advance.
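
os.system waits for the command it launches to finish, which is why both the direct call and the threaded version appear to block. A minimal sketch of a non-blocking alternative using subprocess.Popen (the ports below are just examples):

import subprocess

ports = [8000, 8001, 8002]  # example ports; adjust as needed

# Popen starts each command without waiting for it to exit,
# unlike os.system, which blocks until the command finishes.
servers = [
    subprocess.Popen(["python", "manage.py", "runserver", str(port)])
    for port in ports
]

# The script keeps running here; call p.wait() or p.terminate()
# on each Popen object to block on or stop a server later.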

Related

How to run a Django server constantly on Windows

I wrote code for a Django server, and it works perfectly inside PyCharm's shell.
Now, I want to run this server on a local computer constantly, without being inside PyCharm's shell.
Also, because it's for a client of mine, I don't want any open CMD windows or other odd GUI elements; I want him to just access the website like any other website.
I've seen all kinds of solutions: running runserver with &, creating a virtual machine and running the server on it, and so on.
I am familiar with VMware and the like, so if that is the proper solution, it's OK. But I wonder: are there other ways to run a server on a PC without installing any additional programs?
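
One sketch that avoids an open CMD window, assuming Windows and Python 3.7+ (the development server is fine for testing, though a client-facing site normally deserves a production web server):

import subprocess
import sys

# CREATE_NO_WINDOW (Windows only, Python 3.7+) starts the process without
# opening a console window, so the client never sees a CMD window.
server = subprocess.Popen(
    [sys.executable, "manage.py", "runserver", "0.0.0.0:8000"],
    creationflags=subprocess.CREATE_NO_WINDOW,
)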

How to access a terminal Python process running on a server from another script

Consider this situation:
I have an Ubuntu server with Python, TensorFlow, and other libraries installed.
My code is a Python script that loads several models: some of them pretrained .bin vectors, some files from server folders, etc.
When I run the script in a terminal it launches an interactive session where I input some text and the script answers back (like a chatbot). To produce an answer it calls my AI models (TensorFlow, Keras).
Question: how do I access this running session from another Python script? I want to use it like a function: send text and receive an answer back.
And of course I need to run this terminal session in the background for a long time.
I read this and similar answers, but I'm not sure they are the right (or complete) solution:
In Linux, how to prevent a background process from being stopped after closing SSH client
What I am asking about is commonly done with a REST server exposing an API that external code then calls. But the API route isn't working here: TensorFlow throws errors when run via Flask (I was not able to fix this).
If you want your script to stay up after you close the SSH session, add & disown at the end of your execution command and it will run in the background.
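
For the other half of the question, talking to the long-running process from another script without Flask, a minimal sketch is to expose the loaded models over a plain TCP socket. Here generate_reply is a hypothetical stand-in for the real TensorFlow/Keras call:

import socketserver

def generate_reply(text):
    # hypothetical placeholder for the real model call
    return text.upper()

class ChatHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # one request per connection: read one line, answer with one line
        text = self.rfile.readline().decode().strip()
        self.wfile.write((generate_reply(text) + "\n").encode())

if __name__ == "__main__":
    # the models would be loaded once here, before serving starts
    with socketserver.TCPServer(("127.0.0.1", 9000), ChatHandler) as server:
        server.serve_forever()

Another script can then open a socket with socket.create_connection(("127.0.0.1", 9000)), send a line of text, and read the reply; the server itself can be kept alive in the background with & disown or screen as above.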

Run a script located on a remote server using Python

I have a Python script on a remote server with SSH enabled. That script prints a lot of debug messages while executing. I want to trigger this script from another Python script on my local system and, depending on the remote script's output, proceed further. While doing all this, I want the remote script's messages to be displayed on my local system as well. Basically, I want to view whatever output the remote script produces, as it runs, on my local system. I am able to trigger the script using paramiko, but I can neither check whether the script on the remote server is running nor view its output. Is there any way to do it? I already tried conn.recv(65535), but to no avail.
In my experience, the Python fabric module is easier to use than paramiko. If you want to execute a local script on a remote machine using fabric, you just need to upload it using put() and then call the run() API.
http://docs.fabfile.org/en/1.14/api/core/operations.html#fabric.operations.put
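
If you would rather stay with paramiko, here is a minimal sketch that streams the remote script's output as it is produced; the host, credentials, and script path below are placeholders:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("remote-host", username="user", password="secret")

# get_pty=True merges stderr into stdout, so debug messages arrive too
stdin, stdout, stderr = client.exec_command(
    "python /path/to/remote_script.py", get_pty=True
)
for line in stdout:  # yields lines as the remote script prints them
    print(line, end="")

exit_status = stdout.channel.recv_exit_status()  # 0 usually means success
client.close()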

Server program exits when I close the SSH connection to GCP

I have an Ubuntu instance on Google Cloud Platform (GCP). I want to use it as an HTTP server for accessing files. I simply use this Python command, typed into bash:
python3 -m http.server 8000
This runs the http.server module as a script, constructing a simple HTTP server that listens on port 8000.
The problem is that, since I'm using a GCP instance, I must connect to it remotely (for example with the SSH shell provided by GCP). When I close the SSH shell, the Python HTTP server stops. So what should I do to make sure the server keeps running after I close the shell?
I searched on Google, and I tried to use
nohup python3 -m http.server 8000 &
This command, from what I read, should run the instruction as a background program that persists after exiting bash. But it doesn't seem to work in my situation.
Can anybody help?
Try the screen command. I think it's easier to use and also more flexible than nohup, as you can reattach processes after detaching them. See this answer for details.
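For example (the session name httpserver below is arbitrary):

screen -S httpserver          # start a named session
python3 -m http.server 8000   # run the server inside it
# press Ctrl-A, then D, to detach; the server keeps running
screen -r httpserver          # reattach to the session later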
The http.server module is not meant to be a full-fledged web server.
You'll want to set up something like Apache instead, see Running a basic Apache web server.

Is it possible to use Python's cmd module remotely with multiple TCP clients?

I created a Python script, running on my server, that provides a simple command-line interface for running custom configuration scripts. I'm using the cmd module to do this and it's been working great so far:
from cmd import Cmd

class MyPrompt(Cmd):
    def do_run1(self, args):
        print("running config1")

    def do_run2(self, args):
        print("running config2")

if __name__ == '__main__':
    prompt = MyPrompt()
    prompt.prompt = '> '
    prompt.cmdloop('Starting prompt...')
I also created another script which opens a TCP server and listens for remote clients in a new thread. Clients can send configuration commands to the server, and the server executes them and sends back any output. Right now the client is very basic: it can send anything, and it has no access to the nice interface the cmd module provides. It's also up to the server to parse the received message and figure out which command the client wants to run (via a long if/else parser).
I'm trying to combine these two scripts, but I'm having a lot of trouble figuring out the best way to do so. I want someone on the server to be able to use the cmd script locally, but I also want the script to accept remote clients and give them access to the same cmd prompt at the same time. I also need a way for locally entered commands and commands sent by remote clients to be added to a queue, so that the configuration commands run one at a time (each takes a few minutes and cannot be executed in parallel).
Can anyone provide some examples or guidance on how I can extend my cmd script to support remote connections? I have no idea where to start and would be very appreciative of any help!
You're probably better off investigating and learning Ansible.
Whilst I don't have experience with it, it's been highly recommended to me, and it is implemented in (and uses) Python.
The documentation seems to be quite good.
(I don't use it because I haven't had the need to do this sort of thing - I generally do applications development)
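
If you'd rather stay with the cmd module itself, one minimal sketch exploits the fact that Cmd accepts arbitrary stdin/stdout file objects, so a socket's file wrappers can stand in for the terminal. This serves one client at a time and leaves out the command queue and the concurrent local prompt:

import socket
from cmd import Cmd

class MyPrompt(Cmd):
    prompt = '> '

    def do_run1(self, args):
        # write to the connected client, not the server's console
        self.stdout.write("running config1\n")

    def do_run2(self, args):
        self.stdout.write("running config2\n")

    def do_EOF(self, args):
        return True   # client disconnected; end this cmdloop

if __name__ == '__main__':
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('0.0.0.0', 9999))
    server.listen(1)
    while True:
        conn, _ = server.accept()
        shell = MyPrompt(stdin=conn.makefile('r'), stdout=conn.makefile('w'))
        shell.use_rawinput = False   # read from the socket instead of input()
        shell.cmdloop('Starting prompt...')
        conn.close()

A client can connect with nc localhost 9999 (or telnet) and gets the same prompt as a local user, and the one-at-a-time requirement could then be met by having each do_* method push onto a shared queue.Queue consumed by a single worker thread.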
