So I wrote a function to upload a file to S3 using key.set_contents_from_file(), but I'm finding it sometimes throws this error:
<Error>
<Code>BadDigest</Code>
<Message>The Content-MD5 you specified did not match what we received.</Message>
<ExpectedDigest>TPCms2v7Hu43d+yoJHbBIw==</ExpectedDigest>
<CalculatedDigest>QSdeCsURt0oOlL3NxxGwbA==</CalculatedDigest>
<RequestId>2F0D40F29AA6DC94</RequestId><HostId>k0AC6vaV+Ip8K6kD0F4fkbdS13UdxoJ3X1M76zFUR/ZQgnIxlGJrAJ8BeQlKQ4m6</HostId></Error>
The function:
def uploadToS3(filepath, keyPath, version):
    bucket = connectToS3()  # Simply gets my bucket and returns it
    key = Key(bucket)
    f = open(filepath, 'r')
    key.name = keyPath
    key.set_metadata('version', version)
    key.set_contents_from_file(f)  # offending line
    key.make_public()
    key.close()
If I open a Python shell and call it manually, it works without a hitch. However, the way I actually have to run it (where it doesn't work) involves calling it from a subprocess, because the caller is a Python 3 script: 2to3 didn't work on boto, and I didn't want to deal with the years-old branches that target Python 3.
Anyway, the subprocess does seem to run it correctly: it gets into the function and the inputs are what I expect (I had them printed out), but the # offending line keeps throwing this error. I have no idea what the cause is.
Is it possible bucket isn't being set properly? I feel like if that were the case, calling Key(bucket) would have thrown an error.
So I essentially run the command below, once as a subprocess called from a Python 3 script, and once from the console:
sudo -u www-data python botoUtilities.py uploadToS3 /path/to/file /key/path
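For reference, the Python 3 side does something roughly like this (a sketch; the exact invocation is approximate and the paths are the same placeholders as above):

import subprocess

# Roughly what the Python 3 caller does; paths and key are placeholders.
cmd = ["sudo", "-u", "www-data", "python", "botoUtilities.py",
       "uploadToS3", "/path/to/file", "/key/path"]
subprocess.check_call(cmd)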
I have this logic inside to pass it to the correct function
func = None
args = []
for arg in sys.argv[1:]:
    if not func:
        g = globals()
        func = g[arg]
    else:
        if arg == 'True':
            args.append(True)
        elif arg == 'False':
            args.append(False)
        else:
            args.append(arg)
if func:
    wrapper(func, args)
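So for the example command above, the loop ends up having the same effect as this (illustrative only; uploadToS3 and wrapper are the names from the code above):

# Equivalent effect of the dispatch loop for the example command, where
# sys.argv[1:] == ['uploadToS3', '/path/to/file', '/key/path']:
func = globals()['uploadToS3']
args = ['/path/to/file', '/key/path']
wrapper(func, args)  # wrapper is my existing helper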
It runs in both cases (I write the args out to a file to check), but only in the console case does it avoid the error. This is incredibly frustrating. I can't figure out what is being done differently. All I know is that I can't get boto to send data to S3 when it's run from a subprocess.
I have a lua script configured to trigger once a subject's metadata is sent to a specific online Orthanc server. I want it to scrape the subject ID and then call a python script with the ID as an argument. When I put the command manually into my terminal, it works, but the lua script doesn't seem to be executing it.
There is a built-in Orthanc function that scrapes the ID from the subject once it is sent to the server.
The initial lua script had the following:
path = "/path/to/python_script.py"
os.execute("python " .. path .. " " .. subjectId)
But the script wasn't getting called.
I first wanted to see if it was even getting triggered, so I added in:
file = io.open("/path/to/textfile.txt", "a")
file:write("\nI am alive, subjectId is " .. subjectId)
file:close()
And that worked!
So then I wanted to see if there was something wrong with os.execute, so I did:
os.execute("touch /same/path/deleteme.txt")
which worked as well.
So it doesn't seem like os.execute itself is broken.
Does anyone have any idea why the script isn't getting called?
EDIT: Does anyone know how to check the status of the os.execute command?
EDIT: I am using Python 3.5.6, Lua 5.1.4, and Linux.
Firstly, to address your question about checking the status from os.execute: this function returns a status code, which is system dependent (https://www.lua.org/manual/5.1/manual.html#pdf-os.execute). I tried to handle an invalid command by recording this status code, but found it somewhat unhelpful; additionally, the shell itself printed an error message.
os.execute("hello") -- 'hello' is not a shell command.
> sh: 1: hello: not found
This error message from the shell was not being caught and read by my Lua script, but was instead being sent directly to stderr. (Good reference about that: https://www.jstorimer.com/blogs/workingwithcode/7766119-when-to-use-stderr-instead-of-stdout.)
I found an interesting solution for catching any error output using temp files.
tmpname = os.tmpname()
os.execute(string.format("hello 2> %s", tmpname))
for line in io.lines(tmpname) do
    print("line = " .. line)
end
This prints: line = sh: 1: hello: not found, which is the error described earlier. os.execute should also return the status of the command like this:
a, b, c = os.execute("echo hello")
> hello
print(a, b, c)
> true exit 0
d, e, f = os.execute("hello") -- An invalid command
> sh: 1: hello: not found
print(d, e, f)
> nil exit 127
In this example, c and f are the exit statuses of their respective commands. If the previous command, i.e. executing your Python script, failed, then the exit status should be non-zero. (Note that in Lua 5.1, which you mention you are running, os.execute returns a single numeric status rather than the three values shown here; the multi-value form is from Lua 5.2 and later.)
To address your primary question regarding Python, I would double-check the path to the script--always a good idea to start with a simple sanity check. Consider using string.format to assemble the command like this:
command = string.format("python %s %i", tostring(path), tonumber(subjectId))
os.execute(command)
Also, it would be helpful to know which version of Lua/Python you are using, and perhaps your system as well.
EDIT: unless you need them to stick around for a bit, you should remove any temp files generated by os.tmpname with os.remove. I also attempted to replicate your situation with a simple test, and I had no trouble executing the Python script with os.execute from a Lua script located in a different directory.
For reference, this is the Lua script, called test.lua, I created in a temp directory called /tmp/throwaway:
#!/usr/bin/lua

local PATH_TO_PY_FILE = "/tmp/py/foo.py"

local function fprintf(fil, formal_arg, ...)
    fil:write(string.format(formal_arg, ...))
    return
end

local function printf(formal_arg, ...)
    io.stdout:write(string.format(formal_arg, ...))
    return
end

local function foo(...)
    local t = {...}
    local cmd = string.format("python3 %s %s", PATH_TO_PY_FILE, table.concat(t, " "))
    local filename = os.tmpname()
    local a, b, status = os.execute(string.format("%s 2> %s", cmd, filename))
    printf("status = %i\n", status)
    local num = 1
    for line in io.lines(filename) do
        printf("line %i = %s\n", num, line)
        num = num + 1
    end
    os.remove(filename)
    return
end

local function main(argc, argv)
    foo()
    foo("hello", "there,", "friend")
    return 0
end

main(#arg, arg)
(Please forgive my C-style main function, haha.)
In a separate temp directory, called /tmp/py, I created a Python file that looks like this:
import sys

def main():
    for arg in sys.argv:
        print(arg)

if __name__ == '__main__':
    main()
The Lua script's function foo takes a variable number of arguments and supplies them as command-line arguments to the Python script; the Python script then simply prints those arguments one-by-one. Again, this was just a simple test for proof of concept.
The temp file created by os.tmpname should be in /tmp; as for your own files, that is, your Lua and Python scripts, make sure you know exactly where they are located. Hopefully that resolves your problem.
Also, you could supply the path to the Python script--or any other necessary files--to the Lua script as command-line arguments and then slightly modify the existing code.
$> ./test.lua path-to-python-file
Then simply modify foo in test.lua to accept the Python file's path as an argument:
local function foo(py_path, ...)
    local t = {...}
    local cmd = string.format("python3 %s %s", py_path, table.concat(t, " "))
    -- Everything else should remain the same.
end
I would like to execute a specific version of mysqld through Python for unit testing. The idea is to start the server on a thread, test, and kill the server when it's done. (Similar to testing.mysqld, which sadly doesn't work on Windows.) This is the current code:
import os
import posixpath
import shlex
import subprocess
import tempfile
import zipfile

# Create a temporary folder.
base_path = tempfile.mkdtemp()

# Extract the default files
zipfile.ZipFile(r"O:\Tools\mysql\min_mysql.zip").extractall(base_path)

# Set up the my.ini file
my_ini_path = os.path.join(base_path, "my.ini").replace("\\", "/")
unix_base_path = posixpath.normpath(base_path).replace("\\", "/")
with open(my_ini_path, 'r') as my_ini:
    filedata = my_ini.read()
filedata = filedata.replace("{{basedir}}", unix_base_path)
with open(my_ini_path, 'w', 0) as my_ini:
    my_ini.write(filedata)

# Open mysqld
args = r"O:/Tools/mysql/bin/mysqld.exe --defaults-file=\"%s\"" % (my_ini_path)
args = shlex.split(args)
mysqld_process = subprocess.Popen(args, shell=True)
mysqld_process.wait()
But if I execute it through Python, I get this error:
Could not open required defaults file:
"c:\users\pelfeli1\appdata\local\temp\tmp2vct38\my.ini"
Fatal error in defaults handling. Program aborted
So far I have verified that the file exists before starting the process. If I print the command verbatim and execute it, the server runs fine.
There seems to be a difference between Popen and just executing in shell. What am I missing?
I'll copy my comment here, if you want to accept it as an answer:
I don't think this is the problem, but the args string shouldn't be
defined as raw (with the r). Instead, do this:
'O:/Tools/mysql/bin/mysqld.exe --defaults-file="%s"' (i.e. use single quotes), unless you intend to pass the backslashes to the command line.
Now, take into account that the following two strings
"foo\"bar\""
r"foo\"bar\""
are not the same. The first one renders foo"bar", while the second gives you foo\"bar\".
So what was happening is that the shell saw this as the file name: "c:\users\pelfeli1\appdata\local\temp\tmp2vct38\my.ini", including the quotes, because the backslash-escaped quotes (\") survived in the raw string. You could have just written this:
args = 'O:/Tools/mysql/bin/mysqld.exe --defaults-file="%s"' % (my_ini_path)
just in case of spaces in my_ini_path, without problems.
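If it helps to see it concretely, here is a rough sketch of what shlex.split does with the two variants (the path below is just an example, and shell=True handling on Windows has extra wrinkles, but the surviving quotes are the key point):

import shlex

# With the raw string, the backslash-escaped quotes survive as literal
# characters, so they end up inside the --defaults-file token itself.
raw_cmd = r'O:/Tools/mysql/bin/mysqld.exe --defaults-file=\"C:/temp/my.ini\"'
print(shlex.split(raw_cmd))
# ['O:/Tools/mysql/bin/mysqld.exe', '--defaults-file="C:/temp/my.ini"']

# Without the raw prefix and the escaped quotes, the filename comes out clean.
plain_cmd = 'O:/Tools/mysql/bin/mysqld.exe --defaults-file="C:/temp/my.ini"'
print(shlex.split(plain_cmd))
# ['O:/Tools/mysql/bin/mysqld.exe', '--defaults-file=C:/temp/my.ini']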
Well, the problem was in the quotes. Just changed this line:
args = r"O:/Tools/mysql/bin/mysqld.exe --defaults-file=\"%s\"" % (my_ini_path)
to this line
args = "O:/Tools/mysql/bin/mysqld.exe --defaults-file=%s" % (my_ini_path)
I still have no idea why this changes anything, because printing args gives a valid (and working) command in both cases.
We have a vendor-supplied Python tool (that's byte-compiled; we don't have the source). Because of this, we're also locked into using the vendor-supplied Python 2.4. The way to run the util is:
source login.sh
oupload [options]
The login.sh just sets a few env variables, and then defines 2 aliases:
odownload () {
    ${PYTHON_CMD} ${OCLIPATH}/ocli/commands/word_download_command.pyc "$@"
}

oupload () {
    ${PYTHON_CMD} ${OCLIPATH}/ocli/commands/word_upload_command.pyc "$@"
}
Now, when I run it their way, it works fine. It will prompt for a username and password, then do its thing.
I'm trying to create a wrapper around the tool to do some extra steps after it's run and provide some sane defaults for the utility. The problem I'm running into is I cannot, for the life of me, figure out how to use subprocess to successfully do this. It seems to realize that the original command isn't running directly from the terminal and bails.
I created a '/usr/local/bin/oupload' and copied from the original login.sh. Only difference is instead of doing an alias at the end, I actually run the command.
Then, in my python script, I try to run my new shell script:
if os.path.exists(options.zipfile):
    try:
        cmd = string.join(cmdargs, ' ')
        p1 = Popen(cmd, shell=True, stdin=PIPE)
But I get:
Enter Opsware Username: Traceback (most recent call last):
File "./command.py", line 31, in main
File "./controller.py", line 51, in handle
File "./controllers/word_upload_controller.py", line 81, in _handle
File "./controller.py", line 66, in _determineNew
File "./lib/util.py", line 83, in determineNew
File "./lib/util.py", line 112, in getAuth
Empty Username not legal
Unknown Error Encountered
SUMMARY:
Name: Empty Username not legal
Description: None
So it seemed like an extra carriage return was getting sent ( I tried rstripping all the options, didn't help ).
If I don't set stdin=PIPE, I get:
Enter Opsware Username: Traceback (most recent call last):
File "./command.py", line 31, in main
File "./controller.py", line 51, in handle
File "./controllers/word_upload_controller.py", line 81, in _handle
File "./controller.py", line 66, in _determineNew
File "./lib/util.py", line 83, in determineNew
File "./lib/util.py", line 109, in getAuth
IOError: [Errno 5] Input/output error
Unknown Error Encountered
I've tried other variations of using p1.communicate and p1.stdin.write() along with shell=False and shell=True, but I've had no luck in trying to figure out how to properly send along the username and password. As a last resort, I tried looking at the byte code for the utility they provided - it didn't help - once I called the util's main routine with the proper arguments, it ended up core dumping with thread errors.
Final thoughts - the utility doesn't seem to want to 'wait' for any input. When run from the shell, it pauses at the 'Username' prompt. When run through Python's Popen, it just blazes through and ends, assuming no password was given. I tried to look up ways of maybe preloading the stdin buffer - thinking maybe the process would read from that if it was available, but couldn't figure out if that was possible.
I'm trying to stay away from using pexpect, mainly because we have to use the vendor-provided Python 2.4 (because of the precompiled libraries they ship) and I'm trying to keep the script's distribution footprint as minimal as possible. If I have to, I have to, but I'd rather not use it (and I honestly have no idea if it works in this situation either).
Any thoughts on what else I could try would be most appreciated.
UPDATE
So I solved this by diving further into the bytecode and figuring out what I was missing from the compiled command.
However, this presented two problems:
1. The vendor code, when called, was doing an exit when it completed.
2. The vendor code was writing to stdout, which I needed to store and operate on (it contains the ID of the uploaded pkg). I couldn't just redirect stdout, because the vendor code was still asking for the username/password.
1 was solved easily enough by wrapping their code in a try/except clause.
2 was solved by doing something similar to: https://stackoverflow.com/a/616672/677373
Instead of a log file, I used cStringIO. I also had to implement a fake 'flush' method, since it seems the vendor code was calling that and complaining that the new object I had provided for stdout didn't supply it. The code ends up looking like:
import os
import sys
from cStringIO import StringIO

class Logger(object):
    def __init__(self):
        self.terminal = sys.stdout
        self.log = StringIO()

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        self.terminal.flush()
        self.log.flush()

if os.path.exists(options.zipfile):
    try:
        os.environ['OCLI_CODESET'] = 'ISO-8859-1'
        backup = sys.stdout
        sys.stdout = output = Logger()
        # UploadCommand was the command found in the bytecode
        upload = UploadCommand()
        try:
            upload.main(cmdargs)
        except Exception, rc:
            pass
        sys.stdout = backup
        # now do some fancy stuff with output from output.log
I should note that the only reason I simply do a 'pass' in the except: clause is that the except clause is always called. The 'rc' is actually the return code from the command, so I will probably add handling for non-zero cases.
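For completeness, getting the captured text back out is just a matter of reading the StringIO buffer; how the package ID gets parsed below is only a guess at the real output format:

# Pull the captured text back out of the Logger's buffer; the ID extraction
# below is purely illustrative.
captured = output.log.getvalue()
pkg_id = captured.strip().split()[-1]
print "Uploaded package ID:", pkg_id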
I tried to look up ways of maybe preloading the stdin buffer
Do you perhaps want to create a named fifo, fill it with username/password info, then reopen it in read mode and pass it to popen (as in popen(..., stdin=myfilledbuffer))?
You could also just create an ordinary temporary file, write the data to it, and reopen it in read mode, again passing the reopened handle as stdin. (This is something I'd personally avoid doing, since writing usernames/passwords to temporary files is often a bad idea. OTOH it's easier to test than FIFOs.)
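Something along these lines, as a rough sketch of the temp-file variant (the command and credentials are placeholders, and cleanup matters since a password hits the disk):

import os
import subprocess
import tempfile

def run_with_canned_stdin(cmd, username, password):
    # Write the responses the tool will read from stdin to a temp file...
    fd, tmp_path = tempfile.mkstemp()
    try:
        os.write(fd, username + "\n" + password + "\n")
        os.close(fd)
        # ...then reopen it read-only and hand it to the child as stdin.
        stdin_file = open(tmp_path, "r")
        try:
            return subprocess.call(cmd, stdin=stdin_file, shell=True)
        finally:
            stdin_file.close()
    finally:
        os.remove(tmp_path)  # don't leave credentials lying around

# e.g. run_with_canned_stdin("oupload --some-option", "myuser", "mypass")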
As for the underlying cause: I suspect that the offending software is reading from stdin via a non-blocking method. Not sure why that works when connected to a terminal.
AAAANYWAY: no need to use pipes directly via Popen at all, right? I kinda laugh at the hackishness of this, but I'll bet it'll work for you:
# you don't actually seem to need popen here IMO -- call() does better for this application.
statuscode = call('echo "%s\n%s\n" | oupload %s' % (username, password, options) , shell=True)
tested with status = call('echo "foo\nbar\nbar\nbaz" |wc -l', shell = True) (output is '4', naturally.)
The original question was solved by just avoiding the issue: instead of going through the terminal, I imported the Python code that the shell script was calling and used it directly.
I believe J.F. Sebastian's answer would probably work better for what was originally asked, however, so I'd suggest people looking for an answer to a similar question look down the path of using the pty module.
I'm trying to save myself just a few keystrokes for a command I type fairly regularly in Python.
In my python startup script, I define a function called load which is similar to import, but adds some functionality. It takes a single string:
def load(s):
    # Do some stuff
    return something
In order to call this function I have to type
>>> load('something')
I would rather be able to simply type:
>>> load something
I am running Python with readline support, so I know there exists some programmability there, but I don't know if this sort of thing is possible using it.
I attempted to get around this by using InteractiveConsole and creating an instance of it in my startup file, like so:
import code, re, traceback

class LoadingInteractiveConsole(code.InteractiveConsole):
    def raw_input(self, prompt=""):
        s = raw_input(prompt)
        match = re.match('^load\s+(.+)', s)
        if match:
            module = match.group(1)
            try:
                load(module)
                print "Loaded " + module
            except ImportError:
                traceback.print_exc()
            return ''
        else:
            return s

console = LoadingInteractiveConsole()
console.interact("")
This works with the caveat that I have to hit Ctrl-D twice to exit the python interpreter: once to get out of my custom console, once to get out of the real one.
Is there a way to do this without writing a custom C program and embedding the interpreter into it?
Edit
Out of channel, I got the suggestion of appending this to the end of my startup file:
import sys
sys.exit()
It works well enough, but I'm still interested in alternative solutions.
You could try IPython, which gives you a Python shell that allows many things, including automatic parentheses - which gives you the function call you requested.
I think you want the cmd module.
See a tutorial here:
http://wiki.python.org/moin/CmdModule
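A bare-bones sketch of the idea; the load function below is just a stand-in for the one from your startup script:

import cmd

# Stand-in for the load() function from the startup script.
def load(s):
    print "loading", s

class LoadShell(cmd.Cmd):
    prompt = '>>> '

    def do_load(self, arg):
        # Lets you type "load something" instead of load('something').
        load(arg)

    def do_EOF(self, line):
        # Exit cleanly on Ctrl-D.
        return True

LoadShell().cmdloop()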
Hate to answer my own question, but there hasn't been an answer that works for all the versions of Python I use. Aside from the solution I posted in my question edit (which is what I'm now using), here's another:
Edit .bashrc to contain the following lines:
alias python3='python3 ~/py/shellreplace.py'
alias python='python ~/py/shellreplace.py'
alias python27='python27 ~/py/shellreplace.py'
Then simply move all of the LoadingInteractiveConsole code into the file ~/py/shellreplace.py. Once the script finishes executing, Python will cease executing, and the improved interactive session will be seamless.
Edit: OK, I could swear that the way I'd tested it showed that the getcwd was also causing the exception, but now it appears it's just the file creation. When I move the try-except blocks around, it actually does catch it like you'd think it would. So chalk that up to user error.
Original Question:
I have a script I'm working on that I want to be able to drop a file on so it runs with that file as an argument. I checked in this question, and I already have the mentioned registry keys (apparently the Python 2.6 installer takes care of it). However, it's throwing an exception that I can't catch. Running it from the console works correctly, but when I drop a file on it, it throws an exception and then closes the console window. I tried to have it redirect standard error to a file, but it threw the exception before the redirection occurred in the script. With a little testing and some quick eyesight, I saw that it was throwing an IOError when I tried to create the file to write the error to.
import sys
import os
#os.chdir("C:/Python26/Projects/arguments")
try:
    print sys.argv
    raw_input()
    os.getcwd()
except Exception, e:
    print sys.argv + '\n'
    print e
    f = open("./myfile.txt", "w")
If I run this from the console with any or no arguments, it behaves as one would expect. If I run it by dropping a file on it, for instance test.txt, it runs, prints the arguments correctly, then when os.getcwd() is called, it throws the exception, and does not perform any of the stuff from the except: block, making it difficult for me to find any way to actually get the exception text to stay on screen. If I uncomment the os.chdir(), the script doesn't fail. If I move that line to within the except block, it's never executed.
I'm guessing running by dropping the file on it, which according to the other linked question, uses the WSH, is somehow messing up its permissions or the cwd, but I don't know how to work around it.
Seeing as this is probably not Python related, but a Windows problem (I for one could not reproduce the error given your code), I'd suggest attaching a debugger to the Python interpreter when it is started. Since you start the interpreter implicitly by a drag&drop action, you need to configure Windows to auto-attach a debugger each time Python starts. If I remember correctly, this article has the needed information to do that (you can substitute another debugger if you are not using Visual Studio).
Apart from that, I would take a snapshot with ProcMon while dragging a file onto your script, to get an idea of what is going on.
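Another low-tech check: have the script itself log its working directory to a file at a fixed absolute path (the path below is just an example), so the information survives the console window closing:

import os

# Append the current working directory to a file at a known absolute path.
with open(r"C:\temp\cwd_debug.txt", "a") as f:
    f.write(os.getcwd() + "\n")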
As pointed out in my edit above, the errors were caused by the working directory changing to C:\windows\system32, where the script isn't allowed to create files. I don't know how to get it to not change the working directory when started that way, but was able to work around it like this.
if len(sys.argv) == 1:
    files = [filename for filename in os.listdir(os.getcwd())
             if filename.endswith(".txt")]
else:
    files = [filename for filename in sys.argv[1:]]
Fixing the working directory can be managed this way I guess.
exepath = sys.argv[0]
os.chdir(exepath[:exepath.rfind('\\')])
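A slightly more robust version of the same idea, letting os.path handle the separators (sketch):

import os
import sys

# Change into the directory containing the script, regardless of how the
# path was given (relative, forward slashes, or backslashes).
os.chdir(os.path.dirname(os.path.abspath(sys.argv[0])))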