I have a script class that queries a database and displays the result. The problem is that when I add a subprocess below that code, the script hangs (or waits, and will continue if terminated with Ctrl-C).
E.g. Group A will run if Group B is deleted, and Group B will run if Group A is deleted.
#Group A
queryStrings = ['SELECT top 100 * FROM myDb',
                'SELECT top 10 * FROM anotherDb']

## class that connects to db and output the content ##
db = Database
conn = db.connectToDb()
for query in queryStrings:
    db.runPreQueries(conn, query)
conn.close
##Group B
if os.path.exists("DoSomething.vbs"):
    p = subprocess.Popen("cscript DoSomething.vbs", stdout=subprocess.PIPE, stdin=subprocess.PIPE, shell=True)
    stdout, stderr = p.communicate()
    print("vbs completed")
I also tried using subprocess.call and then terminating it. This won't hang, but it doesn't execute the script:
p = subprocess.call("cscript DoSomething.vbs")
p.terminate()
When running conn.close you're not really closing the database connection: it does nothing, because you're not calling the function.
So the next call stays blocked waiting for database access.
Fix:
conn.close()
Note that the proper way of running your process afterwards (since you don't care about input, output, ...) is:
subprocess.check_call(["cscript","DoSomething.vbs"])
This will just fail if cscript returns a non-zero return code, which is safe enough.
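If you'd rather handle the failure yourself instead of letting the exception propagate, a minimal sketch (same command as above) would be:

import subprocess

try:
    subprocess.check_call(["cscript", "DoSomething.vbs"])
except subprocess.CalledProcessError as exc:
    # cscript exited with a non-zero return code
    print("DoSomething.vbs failed with return code", exc.returncode)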
Note that your database interface probably supports the context manager protocol; in that case, it would be better to write:
with db.connectToDb() as conn:
    for query in queryStrings:
        db.runPreQueries(conn, query)
That way, the connection is closed automatically when exiting the with block.
Related
I have created a multiprocessing application that just loops over some files and compares them, but for some reason the pool never closes and waits to join all the process results.
from datetime import datetime
from multiprocessing import Pool
import sqlite3


def compare_from_database(row_id, connection_to_database):
    now = datetime.now()
    connection1 = sqlite3.connect(connection_to_database)
    cursor = connection1.cursor()
    grab_row_id_query = "SELECT * FROM MYTABLE WHERE rowid = {0};".format(row_id)
    grab_row_id = cursor.execute(grab_row_id_query)
    work_file_path = grab_row_id.fetchone()[1]
    all_remaining_files_query = "SELECT * FROM MYTABLE WHERE rowid > {0};".format(row_id)
    all_remaining_files = cursor.execute(all_remaining_files_query)
    for i in all_remaining_files:
        if i[1] == work_file_path:
            completed_query = "UPDATE MYTABLE SET REPEATED = 1 WHERE ROWID = {1};".format(row_id)
            work_file = cursor.execute(completed_query)
            connection1.commit()
    cursor.close()
    connection1.close()
    return "id {0} took: {1}".format(row_id, datetime.now()-now)
I have tried it with:
def apply_async(range_max, connection_to_database):
    pool = Pool()
    for i in range_of_ids:
        h = pool.apply_async(compare_from_database, args=(i, connection_to_database))
    pool.close()
    pool.join()
Also using a context manager and kind of forcing it:
from multiprocessing import Pool

with Pool() as pool:
    for i in range_of_ids:
        h = pool.apply_async(compare_from_database, args=(i, connection_to_database))
    pool.close()
    pool.join()
Even though, with a context manager, the close/join shouldn't be needed.
The script just submits all the jobs. I can see all the Python instances in Task Manager and they are running, and the print statements inside the function do print to the console fine, but once the main script finishes submitting all the functions to the pool, it just ends; it doesn't respect the close/join:
Process finished with exit code 0
If I run the function by itself it runs fine, returning the string:
compare_from_database(1, connection_to_database="my_path/sqlite.db")
or in a loop it works fine as well:
for i in range(1, 4):
    compare_from_database(i, connection_to_database="my_path/sqlite.db")
I tried using Python 3.7 and 3.8 and wanted to validate it against the documentation:
https://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.join
Has anyone run into a similar issue, or any ideas what it might be?
Since you want all the processing to finish before proceeding to the next part of the script, use apply instead of apply_async; that way each call is forced to run and the main process waits for the result.
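For illustration, a minimal sketch of both options (assuming range_of_ids, connection_to_database and compare_from_database are defined as in the question): pool.apply blocks per call, while keeping the AsyncResult handles from apply_async and calling .get() makes the main process wait for the workers and re-raises any exception that happened inside them.

from multiprocessing import Pool

# Option 1: apply() runs each task and waits for its result before continuing.
with Pool() as pool:
    for i in range_of_ids:
        print(pool.apply(compare_from_database, args=(i, connection_to_database)))

# Option 2: keep apply_async(), but collect the handles and wait on them with .get().
with Pool() as pool:
    handles = [pool.apply_async(compare_from_database, args=(i, connection_to_database))
               for i in range_of_ids]
    for h in handles:
        print(h.get())  # blocks until the task is done; re-raises worker exceptions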
I have a script that starts four processes to make inserts into a database. They share the same target code with different arguments.
After some debugging, I noticed that the moment the program gets stuck is at step 2: the step where the process asks for a database connection. The problem does not happen right away; the program accomplishes some inserts before freezing. I tried to lock steps 2, 3 and 4 together, but it didn't solve the problem. I have no idea how to attack the problem.
records is an array of objects holding the data that is going to be used to make INSERTs into the database.
counter is a shared value to verify progress.
total is for calculating the progress as a percentage.
lock is for exclusive access to variables and the database.
dataBaseAdapter is a Python module that imports psycopg2 for making the connection with the database.
The database is PostgreSQL.
I don't know if it makes any difference, but I am running this from Visual Studio Code.
Processes' creation:
from multiprocessing import Lock, Process, Value

len_records = len(records)
counter = Value('i', 0)
lock = Lock()
total = Value('i', len_records)
for index in range(0, len_records, 4):
    p1 = encapsulatedProcess(index, jogos, len_records, counter, lock)
    p2 = encapsulatedProcess(index+1, jogos, len_records, counter, lock)
    p3 = encapsulatedProcess(index+2, jogos, len_records, counter, lock)
    p4 = encapsulatedProcess(index+3, jogos, len_records, counter, lock)
    holdProcess(index+1, p1)
    holdProcess(index+2, p2)
    holdProcess(index+3, p3)
    holdProcess(index+4, p4)
encapsulatedProcess:
def encapsulatedProcess(ind, records, len_records, counter, lock):
    if(ind > len_records): return None
    record = records[ind]
    process = Process(target=my_function, args=(ind, record, counter, lock,), daemon=True)
    process.start()
    return process
holdProcess:
def holdProcess(number, process):
    if(process == None): return
    if(process.is_alive()):
        process.join()
        process.close()
    print(number)
my_function:
def dramaAnalizerLocal(idt, record, counter, lock):
    lock.acquire()
    print(idt, " 1 - load model")
    auxiliar = Model(record)  # connects with database and closes
    lock.release()
    lock.acquire()
    print(idt, " 2 - getting connection")
    conn = dataBaseAdapter.getConnection()
    lock.release()
    lock.acquire()
    print(idt, " 3 - saving data")
    record.store(InsertIntoType1(model=auxiliar, ignored=1), conn)
    record.store(IntertIntoType2(model=auxiliar, ignored=1), conn)
    record.store(InsertIntoType3(model=auxiliar, ignored=1), conn)
    lock.release()
    lock.acquire()
    print(idt, " 4 - closing connection")
    dataBaseAdapter.closeConnection(conn)
    lock.release()
    lock.acquire()
    print(idt, " 5 - increment counter")
    counter.value += 1
    lock.release()
    lock.acquire()
    print(idt, " 6 - return")
    lock.release()
After searching on the internet, the problem appeared to be that the program was opening and closing connections too fast, so it was suggested to close only after all operations. I was able to pass a connection through a single process, but, I don't know how, the processes continued to ask for connections. So, to solve the problem, I did the thing I was afraid to do, because I thought it would be hard: installing pgbouncer. It turned out to be easy. I just needed to change the port the program used, give a database alias to pgbouncer, and restart pgbouncer.
TLDR: Installed pgbouncer and configured the program and pgbouncer
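For reference, the application side then only needs to point at pgbouncer's port instead of PostgreSQL's. A rough sketch with psycopg2, assuming pgbouncer listens on its default port 6432 on the same machine and exposes the database under the alias mydb (names and credentials are placeholders, not from the original post):

import psycopg2

# Same psycopg2 call as before; only host/port change, and pgbouncer pools
# and reuses the real PostgreSQL connections behind the scenes.
conn = psycopg2.connect(host="localhost", port=6432,
                        dbname="mydb", user="myuser", password="secret")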
I am new to Python.
I am trying to SSH to a server to perform some operations. However, before performing the operations, I need to load a profile, which takes 60-90 seconds. After loading the profile, is there a way to keep the SSH session open so that I can perform the operations later?
p = subprocess.Popen("ssh abc@xyz './profile'", stdout=subprocess.PIPE, shell=True)
result = p.communicate()[0]
print result
return result
This loads the profile and exits. Is there a way to keep the above ssh session open and run some commands?
Example:
p = subprocess.Popen("ssh abc@xyz './profile'", stdout=subprocess.PIPE, shell=True)
<More Python Code>
<More Python Code>
<More Python Code>
<Run some scripts/commands on xyz server non-interactively>
After loading the profile, I want to run some scripts/commands on the remote server, which I am able to do by simply doing the following:
p = subprocess.Popen("ssh abc@xyz './profile;**<./a.py;etc>**'", stdout=subprocess.PIPE, shell=True)
However, once done, it exits, and the next time I want to execute some script on the above server, I need to load the profile again (which takes 60-90 seconds). I am trying to figure out a way to create some sort of tunnel (or any other way) where the SSH connection remains open after loading the profile, so that users don't have to wait 60-90 seconds whenever anything is to be executed.
I don't have access to strip down the profile.
Try an ssh library like asyncssh or spur. Keeping the connection object should keep the session open.
You could send a dummy command like date to prevent the timeout as well.
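A rough sketch with asyncssh (host, user and script names taken from the question; treat it as an outline, assuming asyncssh is installed and key-based login works): the connection object stays open for the whole block, so later commands don't pay the SSH setup cost again. Note that each run() starts a fresh remote shell, so if ./profile only sets environment variables you may still need to chain it with the actual command in a single run() call.

import asyncio
import asyncssh

async def main():
    # One connection, kept open for as long as this block runs.
    async with asyncssh.connect("xyz", username="abc") as conn:
        await conn.run("./profile")        # the slow 60-90 s step, paid once per connection
        result = await conn.run("./a.py")  # later commands reuse the same connection
        print(result.stdout)

asyncio.run(main())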
You have to construct an ssh command like this: ['ssh', '-T', 'host_user_name@host_address'], then follow the code below.
Code:
from subprocess import Popen, PIPE

ssh_conn = ['ssh', '-T', 'host_user_name@host_address']
# if you have to add a port then ssh_conn should be as follows
# ssh_conn = ['ssh', '-T', 'host_user_name@host_address', '-p', 'port']

commands = """
cd Documents/
ls -l
cat test.txt
"""

with Popen(ssh_conn, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) as p:
    output, error = p.communicate(commands)
    print(output)
    print(error)
    print(p.returncode)

# Alternatively, instead of communicate(), you can write the commands one at a time;
# each write needs a trailing newline, and stdin must be closed before reading the output:
#     p.stdin.write('command_1\n')
#     # add as many commands as you want
#     p.stdin.write('command_n\n')
Please let me know if you need further explanations.
N.B.: You can add as many commands to the commands string as you want.
What you want to do is write/read to the process's stdin/stdout.
from subprocess import Popen, PIPE
import shlex
shell_command = "ssh user@address"
proc = Popen(shlex.split(shell_command), stdin=PIPE, universal_newlines=True)
# Do python stuff here
proc.stdin.write("cd Desktop\n")
proc.stdin.write("mkdir Example\n")
# And so on
proc.stdin.write("exit\n")
You must include the trailing newline for each command. If you prefer, print() (as of Python 3.x, where it is a function) takes a keyword argument file, which allows you to forget about that newline (and also gain all the benefits of print()).
print("rm Example", file=proc.stdin)
Additionally, if you need to see the output of your command, you can pass stdout=PIPE and then read via proc.stdout.read() (same for stderr).
You may also want to put the exit command in a try/finally block, to ensure you exit the ssh session gracefully.
Note that a) read is blocking, so if there's no output, it'll block forever, and b) it will only return what was available to read from stdout at that time - so you may need to read repeatedly, sleep for a short time, or poll for additional data. See the fcntl and select stdlib modules for changing blocking -> nonblocking reads and polling for events, respectively.
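For instance, a small sketch with select (POSIX pipes only, and assuming the process was opened with stdout=PIPE): wait up to a second for data, then read only what is currently available instead of blocking until EOF.

import os
import select

# select returns a non-empty list only if there is something to read right now.
readable, _, _ = select.select([proc.stdout], [], [], 1.0)
if readable:
    chunk = os.read(proc.stdout.fileno(), 4096)  # returns b"" once the pipe has been closed
    print(chunk.decode(errors="replace"))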
Hello Koshur!
I think that what you are trying to achieve looks like what I've tried in the past when trying to make my terminal accessible from a private website:
I would open a bash instance, keep it open and would listen for commands through a WebSocket connection.
What I did to achieve this was using the O_NONBLOCK flag on STDOUT.
Example
import fcntl
import os
import shlex
import subprocess

current_process = subprocess.Popen(shlex.split("/bin/sh"), stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)  # Open a shell prompt

fcntl.fcntl(current_process.stdout.fileno(), fcntl.F_SETFL,
            os.O_NONBLOCK)  # Non blocking stdout and stderr reading
What I would have after this is a loop checking for new output in another thread:
from time import sleep
from threading import Thread

def check_output(process):
    """
    Checks the output of stdout and stderr to send it to the WebSocket client
    """
    while process.poll() is None:  # while the process isn't exited
        try:
            output = process.stdout.read()  # Read the stdout PIPE (which contains stdout and stderr)
        except Exception:
            output = None
        if output:
            print(output)
        sleep(.1)
    # from here, we are outside the loop: the process exited
    print("Process exited with return code: {code}".format(code=process.returncode))

Thread(target=check_output, args=(current_process,), daemon=True).start()  # Start checking for new text in stdout and stderr
So you would need to implement your logic to SSH when starting the process:
current_process = subprocess.Popen(shlex.split("ssh abc@xyz './profile'"), stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
And send commands like so:
def send_command(process, cmd):
    process.stdin.write(str(cmd + "\n").encode("utf-8"))  # Write the input to STDIN
    process.stdin.flush()  # Run the command

send_command(current_process, "echo Hello")
EDIT
I tried to check the minimum Python requirements for the given examples and found out that Thread(daemon) might not work on Python 2.7, which you mentioned in the tags.
If you are sure to exit the Thread before exiting, you can ignore daemon and use Thread(), which works on 2.7. (You could for example use atexit and terminate the process.)
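For example, a tiny sketch reusing the current_process variable from above:

import atexit

# Ask Python to terminate the shell/ssh child process during normal interpreter shutdown.
atexit.register(current_process.terminate)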
References
fcntl(2) man page
https://man7.org/linux/man-pages/man2/fcntl.2.html
fcntl Python 3 Documentation
https://docs.python.org/3/library/fcntl.html
fcntl Python 2.7 Documentation
https://docs.python.org/2.7/library/fcntl.html
O_NONBLOCK Python 3 Documentation
https://docs.python.org/3/library/os.html#os.O_NONBLOCK
O_NONBLOCK Python 2.7 Documentation
https://docs.python.org/2.7/library/os.html#os.O_NONBLOCK
Okay, I'm officially out of ideas after running each and every sample I could find on Google up to the 19th page. I have a "provider" script. The goal of this Python script is to start up other services that run indefinitely even after this "provider" has stopped running. Basically, start the process, then forget about it, but continue the script without stopping it...
My problem: python-daemon... I have actions (web-service calls) to start/stop/get status of the started services. I create the start commands on the fly and perform variable substitution on the config files as required.
Let's start from this point: I have a command to run (a bash script that executes a java process - a long-running service that will be stopped sometime later).
def start(command, working_directory):
    pidfile = os.path.join(working_directory, 'application.pid')
    # I expect the pid of the started application to be here. The file is not created. Nothing is there.
    context = daemon.DaemonContext(working_directory=working_directory,
                                   pidfile=daemon.pidfile.PIDLockFile(pidfile))
    with context:
        psutil.Popen(command)
    # This part never runs. Even if I put a simple print statement at this point, that never appears.
    # Debugging in PyCharm shows that my script returns with 0 on "with context".
    with open(pidfile, 'r') as pf:
        pid = pf.read()
    return pid
From here on, in my caller to this method, I prepare a JSON object to return to the client, which essentially contains an instance_id (don't mind it) and a pid (that'll be used to stop this process in another request).
What happens? After with context my application exits with status 0, nothing is returned, no JSON response gets created, no pidfile gets created; only the executed psutil.Popen command runs. How can I achieve what I need? I need an independently running process and need to know its PID in order to stop it later on. The executed process must keep running even if the current Python script stops for some reason. I can't get around the shell script, as that application is not mine; I have to use what I have.
Thanks for any tip!
Edit:
I tried simply using Popen from psutil/subprocess, with somewhat more promising results.
def start(self, command):
    import psutil  # or subprocess
    proc = psutil.Popen(command)
    return str(proc.pid)
Now, if I debug the application and wait some undefined time on the return statement, everything is working great! The service is running, the pid is there, and I can stop it later on. Then I simply ran the provider without debugging. It returns the pid but the process is not running. It seems like Popen has no time to start the service because the whole provider stops faster.
Update:
Using os.fork:
@staticmethod
def __start_process(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.chdir(working_directory)
        proc = psutil.Popen(command)
        with open('application.pid', 'w') as pf:
            pf.write(proc.pid)
def start(self):
    ...
    __start_process(command, working_directory)
    with open(os.path.join(working_directory, 'application.pid'), 'r') as pf:
        pid = int(pf.read())
    proc = psutil.Process(pid)
    print("RUNNING" if proc.status() == psutil.STATUS_RUNNING else "...")
After running the above sample, RUNNING is written to the console. Then, after the main script exits (because I'm not fast enough):
ps auxf | grep
No instances are running...
Checking the pidfile: sure, it's there, it was created:
cat /application.pid
EMPTY 0bytes
From the multiple partial tips I got, I finally managed to get it working...
def start(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.setsid()
        os.umask(0)  # I'm not sure about this, not on my notebook at the moment
        # This was strange, as I needed to use the name of the shell script twice:
        # command = [argv[0], args...]. Upon using ksh as command I got a nice error...
        os.execv(command[0], command)
    else:
        with open(os.path.join(working_directory, 'application.pid'), 'w') as pf:
            pf.write(str(pid))
        return pid
That together solved the issue. The started process is not a child process of the running Python script and won't stop when the script terminates.
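As a side note (not what the poster used, just a related option on Python 3.2+): subprocess can call setsid for you via start_new_session=True, which also detaches the command from this script's session and controlling terminal; once the script exits, the child is reparented to init and keeps running.

import subprocess

# start_new_session=True makes the child call setsid(), so it survives the parent;
# proc.pid can be written to the pidfile just like above.
proc = subprocess.Popen(command, cwd=working_directory, start_new_session=True)
print(proc.pid)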
Have you tried with os.fork()?
In a nutshell, os.fork() spawns a new process and returns the PID of that new process.
You could do something like this:
#!/usr/bin/env python

import os
import subprocess
import sys
import time

command = 'ls'              # YOUR COMMAND
working_directory = '/etc'  # YOUR WORKING DIRECTORY


def child(command, directory):
    print "I'm the child process, will execute '%s' in '%s'" % (command, directory)

    # Change working directory
    os.chdir(directory)

    # Execute command
    cmd = subprocess.Popen(command
                           , shell=True
                           , stdout=subprocess.PIPE
                           , stderr=subprocess.PIPE
                           , stdin=subprocess.PIPE
                           )

    # Retrieve output and error(s), if any
    output = cmd.stdout.read() + cmd.stderr.read()
    print output

    # Exiting
    print 'Child process ending now'
    sys.exit(0)


def main():
    print "I'm the main process"

    pid = os.fork()
    if pid == 0:
        child(command, working_directory)
    else:
        print 'A subprocess was created with PID: %s' % pid
        # Do stuff here ...
        time.sleep(5)
        print 'Main process ending now.'
        sys.exit(0)


if __name__ == '__main__':
    main()
Further info:
Documentation: https://docs.python.org/2/library/os.html#os.fork
Examples: http://www.python-course.eu/forking.php
Another related-question: Regarding The os.fork() Function In Python
Is there a way to execute a SQL script file using cx_Oracle in Python?
I need to execute my CREATE TABLE scripts, which are in SQL files.
PEP 249, which cx_Oracle tries to be compliant with, doesn't really have a method like that.
However, the process should be pretty straightforward. Pull the contents of the file into a string, split it on the ";" character, and then call .execute on each member of the resulting array. I'm assuming that the ";" character is only used to delimit the Oracle SQL statements within the file.
with open('tabledefinition.sql') as f:
    full_sql = f.read()

sql_commands = full_sql.split(';')
for sql_command in sql_commands:
    sql_command = sql_command.strip()
    if sql_command:  # skip the empty string left after the final ";"
        curs.execute(sql_command)
Another option is to use SQL*Plus (Oracle's command line tool) to run the script. You can call this from Python using the subprocess module - there's a good walkthrough here: http://moizmuhammad.wordpress.com/2012/01/31/run-oracle-commands-from-python-via-sql-plus/.
For a script like tables.sql (note the deliberate error):
CREATE TABLE foo ( x INT );
CREATE TABLER bar ( y INT );
You can use a function like the following:
from subprocess import Popen, PIPE

def run_sql_script(connstr, filename):
    sqlplus = Popen(['sqlplus', '-S', connstr], stdin=PIPE, stdout=PIPE, stderr=PIPE)
    sqlplus.stdin.write('@' + filename)
    return sqlplus.communicate()
connstr is the same connection string used for cx_Oracle. filename is the full path to the script (e.g. 'C:\temp\tables.sql'). The function opens a SQL*Plus session (with '-S' to silence its welcome message), then queues "@filename" to send to it - this will tell SQL*Plus to run the script.
sqlplus.communicate sends the command to stdin, waits for the SQL*Plus session to terminate, then returns (stdout, stderr) as a tuple. Calling this function with tables.sql above will give the following output:
>>> output, error = run_sql_script(connstr, r'C:\temp\tables.sql')
>>> print output
Table created.
CREATE TABLER bar (
*
ERROR at line 1:
ORA-00901: invalid CREATE command
>>> print error
This will take a little parsing, depending on what you want to return to the rest of your program - you could show the whole output to the user if it's interactive, or scan for the word "ERROR" if you just want to check whether it ran OK.
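A trivial check along those lines (reusing run_sql_script from above) might be:

output, error = run_sql_script(connstr, r'C:\temp\tables.sql')
if 'ERROR' in output:
    # at least one statement failed; show everything so the user can see which one
    print(output)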
In the cx_Oracle library you can find a method used by its tests to load scripts: run_sql_script.
I modified this method in my project like this:
def run_sql_script(self, connection, script_path):
    cursor = connection.cursor()
    statement_parts = []
    for line in open(script_path):
        if line.strip() == "/":
            statement = "".join(statement_parts).strip()
            if not statement.upper().startswith('CREATE PACKAGE'):
                statement = statement[:-1]
            if statement:
                try:
                    cursor.execute(statement)
                except Exception as e:
                    print("Failed to execute SQL:", statement)
                    print("Error:", str(e))
            statement_parts = []
        else:
            statement_parts.append(line)
The commands in the script file must be separated by "/".
I hope it can be of help.
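Usage might then look roughly like this (the helper class name and connection details are placeholders, not from the original post):

import cx_Oracle

connection = cx_Oracle.connect("user", "password", "localhost/orclpdb1")  # placeholder credentials/DSN
helper = ScriptRunner()  # hypothetical class that defines run_sql_script above
helper.run_sql_script(connection, "create_tables.sql")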