I seem to be having a problem. I have a view that lets staff users download the MySQL database for the program, but it is not working at all. I get an error that says [Errno 2] No such file or directory: '/usr/local/src/djcode/c2duo_mms/backup.gz'.
I don't know why I get the error, but the likely cause is that the database is not being dumped properly: the view can't find backup.gz because the step that is supposed to create the file never produces it.
views.py
@login_required
def dbbackup(request):
    if not (request.user.is_authenticated() and request.user.is_staff):
        raise http.Http404
    os.popen3("mysqldump -u *username* -p*password* *database* > /usr/local/src/djcode/c2duo_mms/backup.sql")
    os.popen3("gzip -c /usr/local/src/djcode/c2duo_mms/backup.sql > /usr/local/src/djcode/c2duo_mms/backup.gz")
    dataf = open('/usr/local/src/djcode/c2duo_mms/backup.gz', 'rb')
    return HttpResponse(dataf.read(), mimetype='application/x-gzip')
EDIT: I have tried running a small Python script. The following script works (it saves a file named backup.gz in the c2duo_mms directory). So why can I not do the same thing from my views.py file!?
#!/usr/bin/env python
import os
os.popen3("mysqldump -u *username* -p*password* *database* > /usr/local/src/djcode/c2duo_mms/backup.sql")
os.popen3("gzip -c /usr/local/src/djcode/c2duo_mms/backup.sql > /usr/local/src/djcode/c2duo_mms/backup.gz")
Use a full path here:
os.popen3("mysqldump --add-drop-table -u " + settings.DATABASE_USER + " -p" + settings.DATABASE_PASSWORD + " " + settings.DATABASE_NAME + " > backup.sql")
i.e. use a full path for where you are saving the .sql dump, rather than a relative one.
Try something like this:
import subprocess

command = "mysqldump -u *username* -p*password* *database* > /usr/local/src/djcode/c2duo_mms/backup.sql"
p = subprocess.Popen(command, shell=True, bufsize=0, stderr=subprocess.PIPE, universal_newlines=True)
p.wait()
output = p.stderr.read()
p.stderr.close()

The variable "output" will give you access to any error messages from the command. (Note that the shell redirect already sends mysqldump's normal output to the file, so its error messages arrive on stderr, which is why stderr rather than stdout is piped here.)
Popen opens a process, but it does not create a shell around it. Since there is no intermediate shell, the redirects in the command string would not be interpreted.
Popen does return file handles for the various streams into and out of the process - without the redirects, stdout is what you would read from.
If you read and store the content from those pipe handles, you can do the redirect inside the Python code.
Alternatively, consider the subprocess module - http://docs.python.org/library/subprocess.html - which lets you run the command through a shell (and specify which shell), so that the shell can then interpret the redirects.
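For example, a minimal sketch (reusing the question's placeholder credentials and paths) that captures mysqldump's output in Python and writes the gzip file itself, with no shell redirects at all:

import gzip
import subprocess

# Run mysqldump without a shell; its stdout comes back through the pipe.
dump = subprocess.Popen(
    ["mysqldump", "-u", "*username*", "-p*password*", "*database*"],
    stdout=subprocess.PIPE)
sql_data, _ = dump.communicate()

# Do the "redirect" in Python: compress the captured bytes ourselves.
with gzip.open("/usr/local/src/djcode/c2duo_mms/backup.gz", "wb") as f:
    f.write(sql_data)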
The webserver was running as a different user (apache) than the owner of the directory, so it did not have permission to save into that folder. I changed the ownership of the folder I wanted to save to, which fixed it:
chown -R "apache" c2duo_mms
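If you hit the same error, a quick sanity check (a debugging sketch, not part of the view) is to log which user the webserver runs as and whether it can write to the target directory:

import getpass
import os

# The user printed here is the one that needs write permission
# on the backup directory.
print(getpass.getuser())
print(os.access('/usr/local/src/djcode/c2duo_mms', os.W_OK))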
Python 3.10.6
Windows 10
I have a Python function that executes a DXL script using subprocess.run() or os.system() (whichever works best, I guess). The problem is that when I run a custom command using Python it does not work, but when I paste the same command into the command prompt, it works. I should also clarify that by command prompt I mean the original cmd.exe, not the MS Store Windows Terminal (IBM DOORS commands cannot be run there for some reason).
I need to use both python and IBM Doors for the solution.
Here is a summarized version of my code (obviously, the access values are not real):
@staticmethod
def run_dxl_importRTF():
    dquotes = chr(0x22)  # ASCII --> "
    module_name = "TEST_TEMP"
    script_path = "importRTF.dxl"
    script_do_nothing_path = "doNothing.dxl"
    user = "user"
    password = "pass"
    database_config = "11111#11.11.1111.0"
    doors_path = dquotes + r"C:\Program Files\IBM\Rational\DOORS\9.7\bin\doors.exe" + dquotes
    file_name = "LIBC_String.rtf"

    # Based On:
    # "C:\Program Files\IBM\Rational\DOORS\9.7\\bin\doors.exe" -dxl "string pModuleName = \"%~1\";string pFilename = \"%~2\";#include <importRTF.dxl>" -f "%TEMP%" -b "doNothing.dxl" -d 11111#11.11.1111.0 -user USER -password PASSWORD
    script_arguments = f"{dquotes}string pModuleName=\{dquotes}{module_name}\{dquotes};string pFileName=\{dquotes}{file_name}\{dquotes};#include <{script_path}>{dquotes}"

    command = [doors_path, "-dxl", script_arguments, "-f", "%TEMP%", "-b", script_do_nothing_path, '-d', database_config, '-user', user, '-password', password]

    res = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)

    print(f"COMMAND:\n{' '.join(res.args)}")
    print(f"STDERR: {repr(res.stderr)}")
    print(f'STDOUT: {res.stdout}')
    print(f'RETURN CODE: {res.returncode}')
    return
PYTHON SCRIPT OUTPUT:
COMMAND:
"C:\Program Files\IBM\Rational\DOORS\9.7\bin\doors.exe" -dxl "string pModuleName=\"TEST_TEMP\";string pFileName=\"LIBC_String.rtf\";#include <importRTF.dxl>" -f %TEMP% -b doNothing.dxl -d 11111#11.11.1111.0 -user USER_TEMP -password PASS_TEMP
STDERR: 'The system cannot find the path specified.\n'
STDOUT:
RETURN CODE: 1
When I run the same command in the command prompt, it works (dxl script is compiled).
I identified the problem: it is the script_arguments variable. That is, when I just enter the IBM DOORS server without compiling a DXL script, it works both from Python and from the command prompt.
The Python script needs to be dynamic, meaning that all of the initially declared variables can change value and may contain path strings. I am also trying to avoid .bat files; they did not work with dynamic path values either.
Thanks for your time
I tried:
Changing CurrentDirectory (cwd) to IBM Doors
os.system()
Multiple workarounds
Tried the IBM DOORS path without double quotes (it doesn't work because of the whitespace)
.bat files
When calling subprocess.run with a command list and shell=True, Python will expand the command list to a string, adding more quoting along the way. The details are OS-dependent (on Windows, the list always has to be expanded to a command string), but you can see the result via the subprocess.list2cmdline() function.
Your problem is these extra escapes. Instead of using a list, build a shell command string that already contains the escaping you want. You can also use ' for quoting strings, so that the internal " characters needed for shell quoting can be entered literally.
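For instance, a quick way to see the extra quoting that the list form picks up (toy arguments, just for illustration):

import subprocess

# Shows the single command string that subprocess hands to Windows;
# embedded quotes in list elements acquire extra backslash escapes here.
args = ['doors.exe', '-dxl', 'string pModuleName="TEST_TEMP";']
print(subprocess.list2cmdline(args))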
Putting it all together (and likely messing something up here), you would get
@staticmethod
def run_dxl_importRTF():
    module_name = "TEST_TEMP"
    script_path = "importRTF.dxl"
    script_do_nothing_path = "doNothing.dxl"
    user = "user"
    password = "pass"
    database_config = "11111#11.11.1111.0"
    doors_path = r"C:\Program Files\IBM\Rational\DOORS\9.7\bin\doors.exe"
    file_name = "LIBC_String.rtf"

    # Build the DXL argument with literal \" escapes for the shell.
    script_arguments = (rf'string pModuleName=\"{module_name}\";'
                        rf'string pFileName=\"{file_name}\";'
                        rf'#include <{script_path}>')

    command = (f'"{doors_path}" -dxl "{script_arguments}" -f "%TEMP%"'
               f' -b "{script_do_nothing_path}" -d {database_config}'
               f' -user {user} -password {password}')

    res = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)

    print(f"COMMAND:\n{res.args}")
    print(f"STDERR: {repr(res.stderr)}")
    print(f'STDOUT: {res.stdout}')
    print(f'RETURN CODE: {res.returncode}')
So I have this python3 script that does a lot of automated testing for me; it takes roughly 20 minutes to run, and some user interaction is required. It also uses paramiko to ssh to a remote host for a separate test.
Eventually, I would like to hand this script over to the rest of my team however, it has one feature missing: evidence collection!
I need to capture everything that appears on the terminal to a file. I have been experimenting with the Linux command 'script'; however, I cannot find an automated way of starting script and then executing my test script.
I have a command in /usr/bin/:
script log_name;python3.5 /home/centos/scripts/test.py
When I run my command, it just stalls. Any help would be greatly appreciated!
Thanks :)
Is a redirection of the output to a file what you need?
python3.5 /home/centos/scripts/test.py > output.log 2>&1
Or if you want to keep the output on the terminal AND save it into a file:
python3.5 /home/centos/scripts/test.py 2>&1 | tee output.log
I needed to do this, and ended up with a solution that combined pexpect and ttyrec.
ttyrec produces output files that can be played back with a few different player applications - I use TermTV and IPBT.
If memory serves, I had to use pexpect to launch ttyrec (as well as my test's other commands) because I was using Jenkins to schedule the execution of my test, and pexpect seemed to be the easiest way to get a working interactive shell in a Jenkins job.
In your situation you might be able to get away with using just ttyrec, and skip the pexpect step - try running ttyrec -e command as mentioned in the ttyrec docs.
Finally, on the topic of interactive shells, there's an alternative to pexpect named "empty" that I've had some success with too - see http://empty.sourceforge.net/. If you're running Ubuntu or Debian you can install empty with apt-get install empty-expect
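If you do end up pairing pexpect with ttyrec, a minimal sketch (assuming ttyrec is installed and using the script path from the question) might look like:

import pexpect

# Record the whole session to log_name while the test script runs;
# interact() hands the terminal to the user so prompts still work.
child = pexpect.spawn('ttyrec -e "python3.5 /home/centos/scripts/test.py" log_name')
child.interact()
child.close()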
I actually managed to do it in python3; it took a lot of work, but here is the Python solution:

from subprocess import Popen, PIPE

import paramiko

LOG_RUN_OUTPUT = "run_output.log"  # log file path (assumed here; set as needed)

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

def record_log(output):
    # Append to the evidence log, creating it on the first write.
    try:
        with open(LOG_RUN_OUTPUT, 'a') as file:
            file.write(output)
    except IOError:
        with open(LOG_RUN_OUTPUT, 'w') as file:
            file.write(output)

def execute(cmd, store=True):
    # Run a local command, echo its output, and optionally record it.
    proc = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
    output = "\n".join(out.decode() for out in proc.communicate())
    template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
    output = template % (cmd, output)
    print(output)
    if store:
        record_log(output)
    return output

# SSH function
def ssh_connect(start_message, host_id, user_name, key, stage_commands):
    print(start_message)
    try:
        ssh.connect(hostname=host_id, username=user_name, key_filename=key, timeout=120)
    except Exception:
        print("Failed to connect to " + host_id)
    for command in stage_commands:
        try:
            ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(command)
        except Exception:
            input("Paused, because " + command + " failed to run.\n Please verify and press enter to continue.")
        else:
            template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
            output = (ssh_stderr.read() + ssh_stdout.read()).decode()
            output = template % (command, output)
            record_log(output)
            print(output)
I made a small script in Sublime that extracts commands from a JSON file on the user's computer and then opens the terminal and runs the settings/command. This works, except that it doesn't really open up the terminal: it only runs the command (and it works, as in my case it will run gcc to compile a simple C file) and pipes to STDOUT without opening up the terminal.
import json
import subprocess

import sublime_plugin

class CompilerCommand(sublime_plugin.TextCommand):
    def get_dir(self, fullpath):
        path = fullpath.split("\\")
        path.pop()
        path = "\\".join(path)
        return path

    def get_settings(self, path):
        _settings_path = path + "\\compiler_settings.json"
        return json.loads(open(_settings_path).read())

    def run(self, edit):
        _path = self.get_dir(self.view.file_name())
        _settings = self.get_settings(_path)
        _driver = _path.split("\\")[0]
        _command = _driver + " && cd " + _path + " && " + _settings["compile"] + " && " + _settings["exec"]
        proc = subprocess.Popen(_command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
I'm not sure if using subprocess.Popen is the right way to go about it as I'm new to Python.
So to re-iterate; I want it to open up the terminal, run the command, and have the terminal stay open until the user presses ENTER or something. I'm running Windows 7 and Python 3, if that matters.
subprocess.Popen simply creates a subprocess with the given command. It is in no way related to opening a terminal window or any other windows for that matter.
You'll have to look into your platform specific UI automation solutions in order to achieve what you want. Or see if maybe the Sublime plugins mechanism can already do that.
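That said, on Windows one common approach (a sketch, not Sublime-specific) is to launch the command in a brand-new console window and let cmd /k keep it open afterwards:

import subprocess

# CREATE_NEW_CONSOLE opens a separate console window; "cmd /k" runs the
# command and keeps the window open until the user closes it.
subprocess.Popen(
    'cmd /k ' + _command,  # _command as built in the question's run()
    creationflags=subprocess.CREATE_NEW_CONSOLE)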
NOTES:
Also, you should be using os.path.join / os.path.split / os.path.sep etc. for your path operations - Sublime also runs on OS X, for example, and OS X does not use backslashes. File handles also need to be closed, so use:
with open(...) as f:
    return json.load(f)  # note: there is no need for f.read() + json.loads()
                         # when you can just json.load() the file handle
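For example, the two helpers from the question could be written portably like this (a sketch):

import json
import os

def get_dir(fullpath):
    # os.path knows the platform's separator, so no manual splitting.
    return os.path.dirname(fullpath)

def get_settings(path):
    settings_path = os.path.join(path, "compiler_settings.json")
    with open(settings_path) as f:
        return json.load(f)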
Furthermore, strings should usually be built using string interpolation:
_command = "{} && cd {} && {} && {}".format(_driver, _path, _settings["compile"], _settings["exec"])
...and you should not be prefixing your local variables with _ - it doesn't look nice and serves no purpose in Python either; and while we're at it, I might as well use the chance to recommend reading PEP 8: http://www.python.org/dev/peps/pep-0008/.
I am writing a python (2.7) script that checks if some files are missing and downloads them via wget. Everything works fine, but after the download has finished and the script should exit, the bash (where I started the python script from) is not showing up correctly.
I have the cursor and can enter things, but the standard prompt is not showing up. I have to resize the terminal window to make the prompt display correctly. What might be the reason for this?
tilenames = ['File1', 'File2', ...]
web_url = 'http://...'

for t in tilenames:
    try:
        open(t, 'r')
    except IOError:
        print 'file %s not found.' % (t)
        command = ['wget', '-P', './SRTM/', web_url + t]
        output = Popen(command, stdout=subprocess.PIPE)
print "Done"
I think it has something to do with the way the wget process is invoked. The last command print "Done" is actually done before wget writes all of its output into the shell.
Just add a .communicate() after output, like this:
tilenames = ['File1', 'File2', ...]
web_url = 'http://...'

for t in tilenames:
    try:
        open(t, 'r')
    except IOError:
        print 'file %s not found.' % (t)
        command = ['wget', '-P', './SRTM/', web_url + t]
        p = Popen(command, stdout=subprocess.PIPE)
        stdout, stderr = p.communicate()
print "Done"
communicate will return the output written to stdout and None for stderr, because it's not forwarded to a PIPE (you will see it on the terminal instead).
Btw, you should close opened file objects; and to check whether a file exists, you can use the functions in os.path, e.g. os.path.exists.
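For example, a sketch of that check (download_tile is a hypothetical helper wrapping the question's wget command):

import os.path

for t in tilenames:
    if not os.path.exists(t):  # no need to open() the file just to probe it
        download_tile(t)       # hypothetical helper around the wget call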
wget writes its statistics to stderr, which is why it scrambles your terminal. stdout and stderr are flushed and queried at different intervals, so it is possible that your Done shows up before the output from wget.
A fix would be to call wget with -q or to also redirect stderr using stderr=open("/dev/null", "w") or something similar.
Additionally, you should probably use .communicate() to avoid pipe issues.
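Putting both suggestions together, a sketch of the download step (command as built in the question; os.devnull instead of a hard-coded /dev/null):

import os
import subprocess

devnull = open(os.devnull, 'w')
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=devnull)
stdout, _ = p.communicate()  # waits for wget and drains the pipe
devnull.close()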
You could use os.system (but see http://docs.python.org/release/2.5.2/lib/node536.html). Basically Popen is intended to ALLOW your python process to read from the command output. You don't seem to need to do that, so the fragment below should get you what you want:
import os
import subprocess
p = subprocess.Popen(['wget', 'http://www.aol.com'], stdout=subprocess.PIPE)
os.waitpid(p.pid, 0)
print "done"
If you add the -q option to wget it works too (quiet mode).
I am trying to write a Python script with which to restore our database. We store all our tables (individually) in the repository. Since typing "source table1.sql, source table2.sql, ..." will be cumbersome, I've written a script to do this automatically.
I've found a solution using Popen.
process = Popen('mysql %s -u%s -p%s' % (db, "root", ""), stdout=PIPE, stdin=PIPE, shell=True)
output = process.communicate('source ' + file)[0]
The method appears to work very well; however, for each table it prompts me for a password. How do I bypass this, either to have it prompt for a password only once or to have the subprocess read the password from a config file?
Is there a better way to do this? I've tried to do this using a Windows batch script, but as you'd expect, that is a lot less flexible than using Python (for example).
Since apparently you have an empty password, remove the -p option; -p without a password makes mysql prompt for one:
from subprocess import Popen, PIPE
process = Popen('mysql %s -u%s' % (db, "root"), stdout=PIPE, stdin=PIPE, shell=True)
output = process.communicate('source ' + file)[0]
In order to prevent exposing the password to anyone with permission to see running processes, it's best to put the password in a config file, and call that from the command-line:
The config file:
[client]
host=host_name
user=user_name
password=your_pass
Then the command-line:
mysql --defaults-extra-file=your_configfilename
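From the Python script, that would look something like this (a sketch; note that --defaults-extra-file has to be the first option on the mysql command line):

from subprocess import Popen, PIPE

# Credentials come from the config file, so mysql never prompts and the
# password never shows up in the process list.
process = Popen('mysql --defaults-extra-file=your_configfilename %s' % db,
                stdout=PIPE, stdin=PIPE, shell=True)
output = process.communicate('source ' + file)[0]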
Well, you could pass it on the command line after reading it from a file:
with open('secret_password.txt', 'r') as f:
    password = f.read().strip()  # strip the trailing newline

process = Popen('mysql %s -u%s -p%s' % (db, "root", password), stdout=PIPE, stdin=PIPE, shell=True)
Otherwise you could investigate pexpect, which lets you interact with processes. Other alternatives are reading the password from a config file (e.g. with ConfigParser), or simply making it a Python module and importing the password as a variable.