I'm writing a Python script in which I want to tunnel to my MySQL server through an SSH connection and perform some SQL requests.
import MySQLdb
import os

handle = os.popen('ssh config -L 3306:127.0.0.1:3306')
db = MySQLdb.connect(host="127.0.0.1",
                     user="Username",
                     passwd="Secret",
                     db="dbName")
cur = db.cursor()
cur.execute("SELECT * FROM tablename")
for row in cur.fetchall():
    print row[1]
db.close()
handle.close()
The connection works fine, but the script does not terminate the ssh subprocess after execution; furthermore, it adds some whitespace in front of every printed row.
Thanks for reading and thank you in advance.
I haven't tested this, but os.popen is super-obsolete. Use the subprocess module instead.
Deprecated since version 2.6: This function is obsolete. Use the subprocess module. Check especially the Replacing Older Functions with the subprocess Module section.
With subprocess you'll be able to kill the process (you cannot do that easily with os.popen, since close only closes the handle):
import subprocess

handle = subprocess.Popen('ssh config -L 3306:127.0.0.1:3306'.split())
...
handle.kill()
Note the split part, to pass a list of args instead of a string. That's quick & dirty, but some OSes don't support passing a string to Popen. The proper way is ["ssh", "config", "-L", "3306:127.0.0.1:3306"], so you can pass args with spaces in them transparently.
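Put together, a minimal sketch of the whole flow (untested; it assumes "config" is a host alias in your ~/.ssh/config, uses -N so ssh does nothing but forward the port, and waits crudely for the tunnel to come up):

import subprocess
import time

import MySQLdb

# open the tunnel: options first, host alias last
tunnel = subprocess.Popen(["ssh", "-N", "-L", "3306:127.0.0.1:3306", "config"])
time.sleep(2)  # crude: give the tunnel a moment to establish

try:
    db = MySQLdb.connect(host="127.0.0.1", user="Username",
                         passwd="Secret", db="dbName")
    try:
        cur = db.cursor()
        cur.execute("SELECT * FROM tablename")
        for row in cur.fetchall():
            print(row[1])
    finally:
        db.close()
finally:
    tunnel.kill()   # actually terminates the ssh process
    tunnel.wait()   # reap it so no zombie is left behind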
As an example, I am trying to imitate the behaviour of the following set of commands in bash:
mkfifo named_pipe
/challenge/embryoio_level103 < named_pipe &
cat > named_pipe
In Python I have tried the following commands:
import os
import subprocess as sp
os.mkfifo("named_pipe",0777) #equivalent to mkfifo in bash..
fw = open("named_pipe",'w')
#at this point the system hangs...
My idea was to use subprocess.Popen and redirect stdout to fw, then open named_pipe for reading and give it as input to cat (still using Popen). I know it is a simple (and rather stupid) example, but I can't manage to make it work.
How would you implement such a simple scenario?
Hello fellow pwn college user! I just solved this level :)
open(path, flags) blocks execution. There are many similar Stack Overflow Q&As, but I'll reiterate here: a pipe will not pass data until both ends are opened, which is why the process hangs (only one end was opened).
If you want to open without blocking, you can do so on certain operating systems (Unix works; Windows doesn't, as far as I'm aware) using os.open with the flag os.O_NONBLOCK. I don't know what the consequences are, but be cautious with non-blocking opens: you may try reading prematurely and there will be nothing to read (possibly leading to an error, etc.).
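For illustration, a hedged sketch of a non-blocking read end (Unix only; the FIFO name is taken from the question):

import errno
import os

# open the read end without blocking; succeeds even if no writer exists yet
fd = os.open("named_pipe", os.O_RDONLY | os.O_NONBLOCK)
try:
    data = os.read(fd, 1024)     # may return b'' if no writer has connected
except OSError as e:
    if e.errno != errno.EAGAIN:  # EAGAIN: a writer is attached but no data yet
        raise
    data = b""
os.close(fd)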
Also, note that in Python 3 the integer literal 0777 is a syntax error, so I assume you mean 0o777 (max permissions), where the preceding 0o indicates octal. The default for os.mkfifo is 0o666, which is identical to 0o777 except for the execute bits; those are useless anyway, because pipes cannot be executed. Also, be aware that these permissions might not all be granted: when asking for 0o666, the actual permissions may come out as 0o644 (as in my case). I believe this is due to the umask, which can be changed and is used simply for security purposes, but more info can be found elsewhere.
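If you need the exact permissions, one approach (a sketch) is to clear the umask around the mkfifo call, since the effective mode is mode & ~umask:

import os

old_mask = os.umask(0)               # clear the umask so the requested mode survives
try:
    os.mkfifo("named_pipe", 0o666)   # now really 0o666, not 0o666 & ~umask
finally:
    os.umask(old_mask)               # restore the previous umask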
For the blocking case, you can use the package multiprocessing like so:
import os
import subprocess as sp
from multiprocessing import Process

path = 'named_pipe'
os.mkfifo(path)

def read(): sp.run("cat", stdin=open(path, "r"))
def write(): sp.run(["echo", "hello world"], stdout=open(path, "w"))

if __name__ == "__main__":
    p_read = Process(target=read)
    p_write = Process(target=write)
    p_read.start()
    p_write.start()
    p_read.join()
    p_write.join()
    os.remove(path)
output:
hello world
I have no idea why the code below is not working. The file arch_list does not get created, nor does anything get written to it. The commands work fine when run on their own in the terminal.
from yum.plugins import PluginYumExit, TYPE_CORE, TYPE_INTERACTIVE
import os

requires_api_version = '2.3'
plugin_type = (TYPE_CORE, TYPE_INTERACTIVE)

ip_vm = ['192.168.239.133']

def get_arch():
    global ip_vm
    os.system("uname -p > ~/arch_list")
    for i in ip_vm:
        cmd = "ssh thejdeep@" + i + " 'uname -p' >> ~/arch_list"
        print cmd
        os.system(cmd)

def init_hook(conduit):
    conduit.info(2, 'Hello World !')
    get_arch()
I don't think os.system() will return stdout in that case. You may try using subprocess.call() with the appropriate parameters.
Edit: Actually, I think I remember seeing similar behaviour with ssh when running in a standard bash loop. You might try adding a -n to your ssh call; I think that is the solution I used years ago in bash.
I just ran your code and it works fine for me, writing to the local arch_list file. I suspect adding more than one host to your list is where you start having problems. What version of Python are you running? I'm on 2.7.6.
os.system() will not redirect stdout and stderr.
You can use the subprocess module's Popen to set stdout and stderr to a file descriptor or a pipe.
For example:
>>> import subprocess
>>> child1 = subprocess.Popen(["ls","-l"], stdout=subprocess.PIPE)
>>> print child1.stdout.readlines()
You can replace subprocess.PIPE with any valid file descriptor you opened for writing, or you could pick out just some lines and write them to the file. It's your call.
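For example, to write straight into the file from the question instead of a pipe (a sketch; the -n follows the earlier ssh suggestion, and the host/user are taken from the question):

import subprocess

with open("arch_list", "w") as out:
    # local architecture
    subprocess.call(["uname", "-p"], stdout=out)
    # remote architecture, appended via the same handle;
    # -n stops ssh from consuming our stdin
    subprocess.call(["ssh", "-n", "thejdeep@192.168.239.133", "uname -p"],
                    stdout=out)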
I have two Python scripts which I am using to initiate subprocesses. Following is the structure of my scripts:
MAIN.py
This script does nothing except initiate the whole process and call another Python script through subprocess:
import subprocess
import sys

paramet = ['parameter']

# Calling Subprocess
result = subprocess.Popen([sys.executable, "./sub_process.py"] + paramet)
result.wait()
sub_process.py
This is the script which first executes a bunch of SQL statements and then starts another subprocess by calling itself again. The number of subprocesses spawned depends on the parameter passed with each call. The SQL statements are mostly insert statements:
import subprocess
import sys

from impala.dbapi import connect

conn = connect(host=host_ip, port=21050, timeout=3600)
cursor = conn.cursor()
sql = "SQL statement"
cursor.execute(sql)
result_set = cursor.fetchall()
for col1, col2 in result_set:
    sql = "SQL statement 2"
    cursor.execute(sql)
    sql = "SQL statement 3"
    cursor.execute(sql)
    if col1 == 'para':
        result = subprocess.Popen([sys.executable, "./sub_process.py"] + paramet)
        result.wait()
Now this setup works fine on my local machine, but when I try to run it on the server it throws the following error:
File "/usr/local/lib/python2.7/site-packages/impala/dbapi/hiveserver2.py", line 151, in execute
self._execute_sync(op)
File "/usr/local/lib/python2.7/site-packages/impala/dbapi/hiveserver2.py", line 159, in _execute_sync
self._wait_to_finish() # make execute synchronous
File "/usr/local/lib/python2.7/site-packages/impala/dbapi/hiveserver2.py", line 181, in _wait_to_finish
raise OperationalError("Operation is in ERROR_STATE")
OperationalError: Operation is in ERROR_STATE
When I go to the Impala UI to check the status of the queries, I see 'exception' as the state. In the profile of the queries marked as exceptions it says "Query Status: Memory limit exceeded". But when I execute the same queries from my local machine, there is no exception and everything runs fine.
Even if I execute only one SQL statement instead of the whole bunch, the same error occurs on the server but not on my local machine. I tried to look for possible causes but couldn't find anything that explains this error.
I have also tried Python threading and got the same errors as with subprocess. The structure of the threading code is the same as the subprocess version above.
I have CentOS 6.6, Impala 2.0 and impyla 0.9 on my server.
How can I find the reason for this and resolve it? And if it cannot be resolved, what would be a workaround to get the same behaviour another way? What I basically want is that once a process reaches a certain point in its execution, it starts another process while continuing to execute itself. Once it is done, it should wait for the subprocess to finish before exiting.
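That start-and-continue pattern, in sketch form (the script name and parameter are taken from the code above; 'para' is illustrative):

import subprocess
import sys

# start the child without blocking on it
child = subprocess.Popen([sys.executable, "./sub_process.py", "para"])

# ... continue executing our own SQL statements here, in parallel ...

# before exiting, wait for the child to finish
child.wait()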
I am trying to write a Python script which, when executed, will open a Maya file on another computer and create its playblast there. Is this possible? I should also add that the systems I use are all Windows. Thanks
Yes, it is possible; I do this all the time on several computers. First you need to access the computer (this has been answered already). Then call Maya from within your shell as follows:
maya -command myblast -file filetoblast.ma
you will need myblast.mel somewhere in your script path
myblast.mel:
global proc myblast(){
    playblast -widthHeight 1920 1080 -percent 100
              -fmt "movie" -v 0 -f (`file -q -sn` + ".avi");
    evalDeferred("quit -f");
}
Configure what you need in this file, such as shading options. Please note that calling the Maya GUI uses up one license, and playblast needs that GUI (you could shave off some seconds by not loading the default GUI).
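From Python on Windows, launching that command might look like this sketch (the Maya install path is an assumption; adjust it for your version):

import subprocess

# hypothetical install path
maya_exe = r"C:\Program Files\Autodesk\Maya2016\bin\maya.exe"
subprocess.call([maya_exe, "-command", "myblast",
                 "-file", "filetoblast.ma"])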
In order to execute something on a remote computer, you've got to have some sort of service running there.
If it is a Linux machine, you can simply connect via ssh and run the commands. In Python you can do that using paramiko:
import paramiko

ssh = paramiko.SSHClient()
# accept unknown host keys; otherwise connect() fails for hosts
# that are not already in known_hosts
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='foo', password='bar')
stdin, stdout, stderr = ssh.exec_command("echo hello")
Otherwise, you can use a python service, but you'll have to run it beforehand.
You can use Celery as previously mentioned, or ZeroMQ, or more simply use RPyC:
Simply run the rpyc_classic.py script on the target machine, and then you can run python on it:
import rpyc

conn = rpyc.classic.connect("my_remote_server")
conn.modules.os.system('echo foo')
Alternatively, you can create a custom RPyC service (see documentation).
A final option is using an HTTP server, as previously suggested. This may be easiest if you don't want to start installing everything. You can use Bottle, which is a simple HTTP framework in Python.
Server-side:
from bottle import route, run

@route('/run_maya')
def index():
    # Do whatever
    return 'kay'

run(host='localhost', port=8080)
Client-side:
import requests
requests.get('http://remote_server/run_maya')
One last option for cheap RPC is to run maya.standalone from a Maya Python interpreter ("mayapy", usually installed next to the maya binary). The standalone runs inside a regular Python script, so it can use any of the remote-procedure tricks in KimiNewt's answer.
You can also create your own mini-server using basic Python. The server could use the Maya command port, or a WSGI server built on the standard-library wsgiref module. Here is an example which uses wsgiref running inside a standalone to control Maya remotely via HTTP.
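A minimal sketch of that idea (untested; it assumes the script is launched with mayapy so maya.standalone is importable, and the port is arbitrary):

from wsgiref.simple_server import make_server

import maya.standalone
maya.standalone.initialize(name="python")   # boot Maya without a GUI
import maya.cmds as cmds

def app(environ, start_response):
    # open a scene and playblast here with maya.cmds as needed
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [cmds.about(version=True).encode("utf-8")]

make_server("", 8080, app).serve_forever()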
We've been dealing with the same issue at work. We're using Celery as the task manager and have code like this inside of the Celery task for playblasting on the worker machines. This is done on Windows and uses Python.
import os
import subprocess
import tempfile
import textwrap

MAYA_EXE = r"C:\Program Files\Autodesk\Maya2016\bin\maya.exe"

def function_name():
    # the python code you want to execute in Maya
    pycmd = textwrap.dedent('''
        import time
        import pymel.core as pm
        # Your code here to load your scene and playblast
        # new scene to remove quicktimeShim which sometimes fails to quit
        # with Maya and prevents the subprocess from exiting
        pm.newFile(force=True)
        # wait a second to make sure quicktimeShim is gone
        time.sleep(1)
        pm.evalDeferred("pm.mel.quit('-f')")
    ''')

    # write the code into a temporary file
    tempscript = tempfile.NamedTemporaryFile(delete=False, suffix='.py')
    tempscript.write(pycmd)
    tempscript.close()

    # build a subprocess command
    melcmd = 'python "execfile(\'%s\')";' % tempscript.name.replace('\\', '/')
    cmd = [MAYA_EXE, '-command', melcmd]

    # launch the subprocess
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.wait()

    # when the process is done, remove the temporary script
    try:
        os.remove(tempscript.name)
    except WindowsError:
        pass
Greetings.
I have written a little Python script that calls MySQL in a subprocess. [Yes, I know that the right approach is to use MySQLdb, but compiling it under OS X Leopard is a pain, and it would likely be more painful if I wanted to use the script on computers of different architectures.] The subprocess technique works, provided that I supply the password in the command that starts the process; however, that means other users on the machine could see the password.
The original code I wrote can be seen here.
This variant below is very similar, although I will omit the test routine to keep it shorter:
#!/usr/bin/env python
from subprocess import Popen, PIPE

# Set the command you need to connect to your database
mysql_cmd_line = "/Applications/MAMP/Library/bin/mysql -u root -p"
mysql_password = "root"

def RunSqlCommand(sql_statement, database=None):
    """Pass in the SQL statement that you would like executed.
    Optionally, specify a database to operate on. Returns the result."""
    command_list = mysql_cmd_line.split()
    if database:
        command_list.append(database)

    # Run mysql in a subprocess
    process = Popen(command_list, stdin=PIPE, stdout=PIPE,
                    stderr=PIPE, close_fds=True)

    #print "Asking for output"
    #needs_pw = process.stdout.readline()
    #print "Got: " + needs_pw

    # pass it the password
    process.stdin.write(mysql_password + "\n")

    # pass it our commands, and get the results
    #(stdout, stderr) = process.communicate( mysql_password + "\n" + sql_statement)
    (stdout, stderr) = process.communicate(sql_statement)
    return stdout
I suspect that the MySQL password prompt is not actually on stdout (or stderr), although I don't know how that could be, or whether it means I could trap it.
I did try reading the output first, before supplying the password, but it didn't work. I also tried passing the password together with the SQL statement to communicate (the commented-out line above), but that didn't work either.
Again, if I supply the password on the command line (and thus have no code between the Popen and communicate calls), my wrapped function works.
Two new thoughts, months later:
Using pexpect would let me supply a password. It simulates a tty and gets all output, even output that bypasses stdout and stderr (see the sketch after this list).
There is a project called MySQL Connector/Python, in early alpha, that will provide a pure Python library for accessing MySQL, without requiring you to compile any C code.
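A sketch of the pexpect idea (untested; the prompt strings are assumptions and may need adjusting for your MySQL build):

import pexpect

child = pexpect.spawn("/Applications/MAMP/Library/bin/mysql -u root -p")
child.expect("Enter password:")   # pexpect sees the prompt even off-stdout
child.sendline("root")
child.expect("mysql> ")
child.sendline("SHOW DATABASES;")
child.expect("mysql> ")
print(child.before)               # everything printed before the next prompt
child.sendline("quit")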
You could simply build a my.cnf file and point to it on the mysql command line. Obviously you'll want to protect that file with permissions/ACLs, but it shouldn't really be any more or less secure than having the password in your Python script, or in the config for your Python script.
So you would do something like
mysql_cmd_line = "/Applications/MAMP/Library/bin/mysql --defaults-file=credentials.cnf"
and your config would look roughly like this:
[client]
host = localhost
user = root
password = password
socket = /var/run/mysqld/mysqld.sock
The only secure method is to use a MySQL cnf file, as one of the other posters mentions. You can also pass a MYSQL_PWD environment variable, but that is insecure as well: http://dev.mysql.com/doc/refman/5.0/en/password-security.html
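For completeness, the (insecure) environment-variable route would look roughly like this sketch (binary path and password taken from the question above):

import os
from subprocess import Popen, PIPE

# insecure: the variable may be visible to other processes on some systems
env = dict(os.environ, MYSQL_PWD="root")
process = Popen(["/Applications/MAMP/Library/bin/mysql", "-u", "root"],
                stdin=PIPE, stdout=PIPE, env=env)
(stdout, stderr) = process.communicate("SHOW DATABASES;")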
Alternatively, you can communicate with the database using a Unix socket file and with a little bit of tweaking you can control permissions at the user id level.
Even better, you can use the free BitNami DjangoStack, which has Python and MySQLdb precompiled for OS X (and Windows and Linux): http://bitnami.org/stacks
This may be a Windows / SQL Server feature, but could you use a Trusted Connection (i.e. use your OS login/password to access the DB)? There may be an OS X equivalent for MySQL.
Or you may just need to set up your DB to use the OS login and password so that you don't need to keep it in your code.
Anyway, just an idea.
Try this:
process.stdin.write(mysql_password + "\n")
process.stdin.flush()
(stdout, stderr) = process.communicate(sql_statement)
return stdout
Flushing stdin forces what you just wrote to the buffer to actually be sent; communicate() then passes in the SQL statement, closes stdin for you, and collects the output.