Django not inserting data into database when I execute subprocess.Popen - python

I am using Django, inserting data into a database and downloading images. When I call the function directly it works fine, but it blocks the main thread. To execute the process in the background I'm using:
views.py:
class get_all_data(View):
    def post(self, request, *args, **kwargs):
        subprocess.Popen(
            ['python get_images.py --data=data'],
            close_fds=True,
            shell=True
        )
But when I call:
python get_images.py(data="data")
it works fine but it runs on the main thread.
How can I fix it? BTW, I'm using Python 2.7.
Note: please don't suggest Celery. I'm looking for something that runs the task asynchronously, or any other alternative.
I'm using the following code to insert into my db:
get_images.py:
from models import Test

def get_image_all():
    # get data from server
    insert_to_Test(data="data")

if __name__ == "__main__":
    import argparse
    from .models import Test
    import django
    django.setup()
    parser = argparse.ArgumentParser()
    parser.add_argument('--from_date')
    parser.add_argument('--data')
    parser.add_argument('--execute', type=bool, default=False)
    args = parser.parse_args()
    print "args", args
    get_image_all(**vars(args))
When I run it via subprocess.Popen it does not insert data into my db, but if I execute it by calling the function directly it does. Why does that happen?

First, import and use Django models only after django.setup().
Second, make sure the environment the spawned process runs in is correct: mainly, use the right Python interpreter from the right virtualenv (if you use one), and make sure the process runs in the correct working directory.
Sample code:
if __name__ == "__main__":
    import argparse
    import django
    django.setup()
    from .models import Test
    # Parse args
    # Use Django models
Next, you need to make sure the subprocess runs in the correct working directory and uses your Python virtualenv (if any), roughly like below:
subprocess.Popen(
    '/path/to/virtualenv/bin/python get_images.py --data=data',
    cwd='/path/to/project/working_directory/',
    shell=True
)
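One more environment detail worth checking: django.setup() only works if DJANGO_SETTINGS_MODULE is set in the child process, and a process spawned from your web server may not inherit it. A minimal sketch, where 'myproject.settings' is a placeholder for your real settings module:
import os
import subprocess

# copy the parent environment and point the child at the Django settings
# ('myproject.settings' is an assumed placeholder, not from the question)
env = os.environ.copy()
env['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
subprocess.Popen(
    ['/path/to/virtualenv/bin/python', 'get_images.py', '--data=data'],
    cwd='/path/to/project/working_directory/',
    env=env,
)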

Shouldn't it be
subprocess.Popen(
    ['python get_images.py(data=data)'],
    close_fds=True,
    shell=True
)

Related

Execute linux command in Python Flask

How can I execute a Linux command inside a Python function? I will run the Python file on a Linux-based server, and in some functions I want to have something like:
def function():
    # execute some commands on the linux system, e.g. python /path1/path2/file.py
    # or execute a shell script, e.g. /path1/path2/file.sh
What Python module do I need to achieve this?
Thanks in advance.
This code will create a Flask server and allow you to run commands. You can also capture the output.
import subprocess
from flask import Flask

app = Flask(__name__)

def run_command(command):
    return subprocess.Popen(command, shell=True, stdout=subprocess.PIPE).stdout.read()

@app.route('/<command>')
def command_server(command):
    return run_command(command)
You can run it by saving the above text in server.py:
$ export FLASK_APP=server.py
$ flask run
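Once the server is running (Flask binds to 127.0.0.1:5000 by default), the command goes in the URL path, for example:
$ curl http://127.0.0.1:5000/ls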
Try the following:
import os, subprocess

# if you do not need to parse the result
def function():
    os.system('ls')

# collect the result
def function(command):
    out = subprocess.run(
        command.split(" "),
        stdout=subprocess.PIPE)
    return out.stdout
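A quick usage sketch for the second variant (subprocess.run needs Python 3.5+, and out.stdout comes back as bytes, so decode it for text):
# hypothetical usage: run `ls -l` and print its output as text
print(function("ls -l").decode())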

Python subprocess in .exe

I'm creating a Python script that will copy files and folders over the network. It's cross-platform, so I make an .exe file using cx_Freeze.
I used the Popen method of the subprocess module.
If I run the .py file it works as expected, but when I create the .exe, the subprocess is not created in the system.
I've gone through all the documentation of the subprocess module but I didn't find any solution.
Everything else (I am using Tkinter, and that also works fine) works in the .exe except subprocess.
Any idea how I can call a subprocess in the .exe file?
This file is calling another .py file:
def start_scheduler_action(self, scheduler_id, scheduler_name, list_index):
    scheduler_detail = db.get_scheduler_detail_using_id(scheduler_id)
    for detail in scheduler_detail:
        source_path = detail[2]
    if not os.path.exists(source_path):
        showerror("Invalid Path", "Please select valid path", parent=self.new_frame)
        return
    self.forms.new_scheduler.start_scheduler_button.destroy()
    # Create stop scheduler button
    if getattr(self.forms.new_scheduler, "stop_scheduler_button", None) == None:
        self.forms.new_scheduler.stop_scheduler_button = tk.Button(self.new_frame, text='Stop scheduler', width=10, command=lambda: self.stop_scheduler_action(scheduler_id, scheduler_name, list_index))
    self.forms.new_scheduler.stop_scheduler_button.grid(row=11, column=1, sticky=E, pady=10, padx=1)
    scheduler_id = str(scheduler_id)
    # Get python paths
    if sys.platform == "win32":
        proc = subprocess.Popen(['where', "python"], env=None, stdout=subprocess.PIPE)
    else:
        proc = subprocess.Popen(['which', "python"], env=None, stdout=subprocess.PIPE)
    out, err = proc.communicate()
    if err or not out:
        showerror("", "Python not found", parent=self.new_frame)
    else:
        try:
            paths = out.split(os.pathsep)
            # Create python path
            python_path = (paths[len(paths) - 1]).split('\n')[0]
            cmd = os.path.realpath('scheduler.py')
            #cmd = 'scheduler.py'
            if sys.platform == "win32":
                python_path = python_path.splitlines()
            else:
                python_path = python_path
            # Run the scheduler file using scheduler id
            proc = subprocess.Popen([python_path, cmd, scheduler_id], env=None, stdout=subprocess.PIPE)
            message = "Started the scheduler : %s" % (scheduler_name)
            showinfo("", message, parent=self.new_frame)
            # Add process id to scheduler table
            process_id = proc.pid
            #showinfo("pid", process_id, parent=self.new_frame)
            def get_process_id(name):
                child = subprocess.Popen(['pgrep', '-f', name], stdout=subprocess.PIPE, shell=False)
                response = child.communicate()[0]
                return [int(pid) for pid in response.split()]
            print(get_process_id(scheduler_name))
            # Add the process id in database
            self.db.add_process_id(scheduler_id, process_id)
            # Add the is_running status in database
            self.db.add_status(scheduler_id)
        except Exception as e:
            showerror("", e)
And this file is called:
def scheduler_copy():
    date = strftime("%m-%d-%Y %H %M %S", localtime())
    logFile = scheduler_name + "_" + scheduler_id + "_" + date + ".log"
    #file_obj = open(logFile, 'w')
    # Call __init__ method of xcopy file
    xcopy = XCopy(connection_ip, username, password, client_name, server_name, domain_name)
    check = xcopy.connect()
    # Create a log file for the scheduler
    file_obj = open(logFile, 'w')
    if check is False:
        file_obj.write("Problem in connection..Please check connection..!!")
        return
    scheduler_next_run = schedule.next_run()
    scheduler_next_run = "Next run at: " + str(scheduler_next_run)
    # If checkbox_value is selected, copy all the files to a new directory
    if checkbox_value == 1:
        new_destination_path = xcopy.create_backup_directory(share_folder, destination_path, date)
    else:
        new_destination_path = destination_path
    # Call backup method for copying data from source to destination
    try:
        xcopy.backup(share_folder, source_path, new_destination_path, file_obj, exclude)
        file_obj.write("Scheduler completed successfully..\n")
    except Exception as e:
        # Write the error message of the scheduler to the log file
        file_obj.write("Scheduler failed to copy all data..\nProblem in connection..Please check connection..!!\n")
        # #file_obj.write("Error while scheduling")
        # return
    # Write the details of the scheduler to the log file
    file_obj.write("Total skipped unmodified file:")
    file_obj.write(str(xcopy.skipped_unmodified_count))
    file_obj.write("\n")
    file_obj.write("Total skipped file:")
    file_obj.write(str(xcopy.skipped_file))
    file_obj.write("\n")
    file_obj.write("Total copied file:")
    file_obj.write(str(xcopy.copy_count))
    file_obj.write("\n")
    file_obj.write("Total skipped folder:")
    file_obj.write(str(xcopy.skipped_folder))
    file_obj.write("\n")
    # file_obj.write(scheduler_next_run)
    file_obj.close()
There is some awkwardness in your source code, but I won't spend time on that. For instance, if you want to find the source_path, it's better to use a for loop with break/else:
for detail in scheduler_detail:
    source_path = detail[2]
    break  # found
else:
    # not found: raise an exception
    ...
Some advice:
Try to separate the user-interface code from the sub-processing; avoid mixing the two.
Use exceptions and exception handlers.
If you want portable code: avoid system calls (there is no pgrep on Windows).
Since your application is packaged in a virtualenv (I assume cx_Freeze does this kind of thing), you have no access to the system-wide Python. You don't even have that on Windows. So you need to use the packaged Python (this is a best practice anyway).
If you want to call a Python script as a subprocess, that means you have two packaged applications: you need to create an exe for the main application and another for the scheduler.py script. But then it's not easy to communicate with it.
Another solution is to use multiprocessing to spawn a new Python process. Since you don't want to wait for the end of processing (which may be long), you need to create daemon processes. The way to do that is explained in the multiprocessing module documentation.
Basically:
import time
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.daemon = True
    p.start()
    # let it live and die, don't call: `p.join()`
    time.sleep(1)
Of course, we need to adapt that to your problem.
Here is how I would do it (I removed the UI-related code for clarity):
import os
import scheduler
from multiprocessing import Process

class SchedulerError(Exception):
    pass

class YourClass(object):
    def start_scheduler_action(self, scheduler_id, scheduler_name, list_index):
        scheduler_detail = db.get_scheduler_detail_using_id(scheduler_id)
        for detail in scheduler_detail:
            source_path = detail[2]
            break
        else:
            raise SchedulerError("Invalid Path", "Missing source path")
        if not os.path.exists(source_path):
            raise SchedulerError("Invalid Path", "Please select valid path")
        p = Process(target=scheduler.scheduler_copy, args=(source_path,))
        p.daemon = True
        p.start()
        self.db.add_process_id(scheduler_id, p.pid)
To check if your process is still running, I recommend you to use psutil. It's really a great tool!
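For instance, a minimal check that a stored pid is still alive might look like this (a sketch; is_scheduler_running is a hypothetical helper name):
import psutil

def is_scheduler_running(pid):
    # guard against the pid having exited (or never existed);
    # is_running() also protects against pid reuse
    return psutil.pid_exists(pid) and psutil.Process(pid).is_running()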
You can define your scheduler.py script like this:
def scheduler_copy(source_path):
    ...
Multiprocessing vs Threading Python
Quoting this answer: https://stackoverflow.com/a/3044626/1513933
The threading module uses threads, the multiprocessing module uses processes. The difference is that threads run in the same memory space, while processes have separate memory. This makes it a bit harder to share objects between processes with multiprocessing. Since threads use the same memory, precautions have to be taken or two threads will write to the same memory at the same time. This is what the global interpreter lock is for.
Here, the advantage of multiprocessing over multithreading is that you can kill (or terminate) a process; you can't kill a thread. You may need psutil for that.
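For example, a sketch of stopping a daemon process by its pid with psutil (stop_scheduler is a hypothetical helper, not part of any library):
import psutil

def stop_scheduler(pid):
    proc = psutil.Process(pid)
    proc.terminate()          # ask it to exit (SIGTERM)
    try:
        proc.wait(timeout=3)  # give it a few seconds to shut down
    except psutil.TimeoutExpired:
        proc.kill()           # force kill (SIGKILL)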
This is not the exact solution you are looking for, but the following suggestions should be preferred, for two reasons:
They are more Pythonic.
subprocess is slightly expensive.
Suggestions you can consider:
Don't use subprocess to fetch the system path. Check os.getenv('PATH') instead and look for a Python executable on it (see the sketch after this list). On Windows, one has to add Python to PATH manually, or else you can check directly in Program Files, I guess.
For checking process IDs you can try psutil. A wonderful answer is provided here: how do I get the process list in Python?
Calling another script from a Python script does not look cool. Not bad, but I would not prefer it at all.
In the above code, in the block starting if sys.platform == "win32": the else branch just reassigns python_path to itself, so you don't need a conditional statement there.
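Here is the PATH-scanning idea from the first suggestion as a small sketch (find_python is a hypothetical helper, not a library function):
import os

def find_python():
    exe = "python.exe" if os.name == "nt" else "python"
    for directory in os.getenv("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, exe)
        # first executable match on PATH wins, same as the shell's lookup
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None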
You wrote pretty fine working code, I have to tell you. Keep coding!
If you want to run a subprocess in an exe file, then you can use:
import subprocess

program = 'example'
arguments = '/command'
subprocess.call([program, arguments])

How to issue commands on remote hosts in parallel using Fabric without using a fabfile?

I have a Python script that uses Fabric to launch tests on remote hosts, fetch the output file of the tests, and do some parsing. The Python script is not a fabfile.
I would like to launch and run the tests in parallel. I've read about using the @parallel decorator, but all the examples I've read have the script as a fabfile.
My code is something like this:
from fabric.api import *

# Copy the testfile to each of the hosts. This is sequential; it could be
# done in parallel but doing it in parallel is not that important
def copy_test(host_list, testfile_name):
    for x in host_list:
        env['host_string'] = x
        target_testfile_name = "/tmp/" + testfile_name
        put(testfile_name, target_testfile_name)

@parallel  # Would this decorator work?
def run_test(host_list, testfile_name):
    target_testfile_name = "/tmp/" + testfile_name
    for x in host_list:
        env['host_string'] = x
        run(target_testfile_name)

if __name__ == '__main__':
    HOSTS = ['10.10.10.10', '10.10.10.11', '10.10.10.12', '10.10.10.13']
    testfile_name = "foo.py"
    copy_test(HOSTS, testfile_name)
    # I would like to launch run_test() in parallel
    run_test(HOSTS, testfile_name)
This is a simplified version of the code. I have not included everything, but I pass around configuration information for the hosts, and that limits me from using this script as a fabfile where I would issue something like:
fab -H '10.10.10.10' copy_test
fab -P -H '10.10.10.10' run_test
I could execute run_test() using the threading.Thread library, but I would rather do that as a last resort.
As you can see, I am not running this as a fabfile.
Is there a way I could execute run_test() using Fabric's parallel execution model without executing my script as a fabfile?
I can't comment, so I'll put this in an answer: change your main to:
if __name__ == '__main__':
    HOSTS = ['10.10.10.10', '10.10.10.11', '10.10.10.12', '10.10.10.13']
    testfile_name = "foo.py"
    execute(copy_test, HOSTS, testfile_name)
    # I would like to launch run_test() in parallel
    execute(run_test, HOSTS, testfile_name)
If you call the function with execute() and the function has the @parallel decorator, it'll be launched in parallel.
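Note that execute() drives the per-host loop itself, so the tasks should not iterate over host_list or set env['host_string'] manually. A rough sketch of that pattern, assuming the Fabric 1.x API (foo.py and the host list are the question's own placeholders):
from fabric.api import execute, parallel, put, run

def copy_test(testfile_name):
    # execute() calls this once per host with env.host_string already set
    put(testfile_name, "/tmp/" + testfile_name)

@parallel
def run_test(testfile_name):
    run("/tmp/" + testfile_name)

if __name__ == '__main__':
    HOSTS = ['10.10.10.10', '10.10.10.11', '10.10.10.12', '10.10.10.13']
    execute(copy_test, "foo.py", hosts=HOSTS)
    execute(run_test, "foo.py", hosts=HOSTS)  # runs in parallel across HOSTS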

How can os.popen take arguments externally to run another script

I am trying to write a Python CLI program using the cmd module. I have some Python scripts in one folder and the CLI program in another folder, and I want to execute those scripts from the CLI program.
Below, os.popen is used to execute the other scripts; here is the CLI program:
import cmd
import os
import sys

class demo(cmd.Cmd):
    def do_shell(self, line, args):
        """here is the function to execute the other script"""
        output = os.popen('xterm -hold -e python %s' % args).read()
        output(sys.argv[1])

    def do_quit(self, line):
        return True

if __name__ == '__main__':
    demo().cmdloop()
and here is the error:
(Cmd) shell demo-test.py
Traceback (most recent call last):
  File "bemo.py", line 18, in <module>
    demo().cmdloop()
  File "/usr/lib/python2.7/cmd.py", line 142, in cmdloop
    stop = self.onecmd(line)
  File "/usr/lib/python2.7/cmd.py", line 221, in onecmd
    return func(arg)
TypeError: do_shell() takes exactly 3 arguments (2 given)
Here are some links to other cmd CLI programs:
1. cmd – Create line-oriented command processors
2. Console built with Cmd object (Python recipe)
Please run the above code on your system.
As specified in the doc:
https://pymotw.com/2/cmd/index.html
do_shell is defined as:
do_shell(self, args)
But you are defining it as:
do_shell(self, line, args)
I think the intended use is to define it as specified in the documentation.
I ran your code, followed your example, and replicated your error. Then, as specified in the documentation for do_shell, I changed the method to the expected signature:
do_shell(self, args)
From there, the sys module was missing, so you need to import that as well (unless you just did not copy it from your source). After that, I got an index-out-of-range error, probably because extra parameters were expected to be passed.
Furthermore, because you are talking about Python scripts, I don't see the need for the extra commands you are adding, so I simply changed the line to this:
output = os.popen('python %s' % args).read()
However, if there is a particular reason you need the xterm command, you can probably put it back and it will work for your particular case.
I also did not see the use case for this:
output(sys.argv[1])
so I commented it out. I ran your code and everything worked. I created a test file that just did a simple print, and it ran successfully.
So, the method actually looks like this:
def do_shell(self, args):
    """here is the function to execute the other script"""
    output = os.popen('python %s' % args).read()
    print output
The full code should look like this:
import cmd
import os
import sys

class demo(cmd.Cmd):
    def do_shell(self, args):
        """here is the function to execute the other script"""
        output = os.popen('python %s' % args).read()
        print output

    def do_quit(self, line):
        return True

if __name__ == '__main__':
    demo().cmdloop()

refresh a shell subprocess in python

I have some web.py code that sends "ps aux" data to a webpage using a subprocess:
import subprocess

ps = subprocess.Popen(('ps', 'aux'), stdout=subprocess.PIPE)
out = ps.communicate()[0]

# (bunch of webpy stuff)

class index:
    def GET(self):
        return out

# (more webpy to start the web server)
It sends the ps aux data across no problem; however, it does not refresh the data, so I only get one static snapshot rather than the changing data I need.
How do I refresh the subprocess so it sends new data every time I reload the webpage?
Put the Popen call inside def GET. By the way, if you're using Python 2.7 or newer, you can use check_output to simplify the subprocess call:
def GET(self):
    return subprocess.check_output(['ps', 'aux'])
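Putting it together, a minimal web.py sketch (the urls mapping and app.run() boilerplate are assumptions standing in for the "webpy stuff" the question elided):
import subprocess
import web

urls = ('/', 'index')

class index:
    def GET(self):
        # run ps aux on every request, so each page reload shows fresh data
        return subprocess.check_output(['ps', 'aux'])

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()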
