Python: os.remove is not working

Why isn't os.remove(-string-) working for me? I have the code written as follows:
try:
    os.remove(a)
    output = current_time() + "\trmv successful"
    message = message + '\n' + output
    message = "".join(message)
    return message
except OSError:
    try:
        os.removedirs(a)
        output = current_time() + "\trmv successful"
        message = message + '\n' + output
        message = "".join(message)
        return message
    except OSError:
        output = current_time() + "\trmv failed: [?]"
        message = message + '\n' + output
        message = "".join(message)
        return message
And it would return 21:32:53 rmv failed: [?] every time I perform the rmv command in the client. My Python version is 2.6.1 if that helps.

Exceptions are there to be looked at! Check this:
try:
    os.remove(a)
except OSError as e:  # name the exception `e`
    print "Failed with:", e.strerror  # look what it says
    print "Error code:", e.errno
Modify your code to display the error message and you'll know why it failed. The docs can help you.

Why don't you try printing out the error?
try:
    os.remove(a)
    output = current_time() + "\trmv successful"
    message = message + '\n' + output
    message = "".join(message)
    return message
except OSError, e:
    print ("Failed to remove %s\nError is: %s" % (a, e))
    try:
        os.removedirs(a)
        output = current_time() + "\trmv successful"
        message = message + '\n' + output
        message = "".join(message)
        return message
    except OSError, e:
        print ("Failed twice to remove %s\nError is: %s" % (a, e))
        output = current_time() + "\trmv failed: [?]"
        message = message + '\n' + output
        message = "".join(message)
        return message
The error could be literally anything you see... A permissions issue for example?

Try putting a small delay, time.sleep(0.2), after opening / removing files.
Alternatively, it may be a Windows and/or antivirus issue.
Josh Rosenberg points out the same on the Python issue tracker:
Short version: Indexing and anti-virus tools prevent deletion from occurring.
Longer version:
DeleteFile (and all the stuff that ultimately devolves to DeleteFile) operate in a funny way on Windows. Internally, it opens a HANDLE to the file, marks it as pending deletion, and closes the HANDLE. If no one snuck in and grabbed another HANDLE to the file during that time, then the file is deleted when DeleteFile's hidden HANDLE is closed. Well designed anti-virus/indexing tools use oplocks ( http://blogs.msdn.com/b/oldnewthing/archive/2013/04/15/10410965.aspx ) so they can open a file, but seamlessly get out of the way if a normal process needs to take exclusive control of a file or delete it. Sadly "well-designed" is not a term usually associated with anti-virus tools, so errors like this are relatively commonplace.
Workarounds like using GetTempFileName() and MoveFile() to move the file out of the way will work, though I believe they introduce their own race conditions (the temp file itself is created but the HANDLE is closed immediately, which could mean a race to open the empty file by the bad anti-virus that would block MoveFile()).
Basically, if you're running on Windows, and you're using unfriendly anti-virus/indexing tools, there is no clean workaround that maintains the same behavior. You can't keep creating and deleting a file of the same name over and over without risking access denied errors.
That said, you could probably get the same results by opening and closing the file only once, instead of creating and deleting a file of the same name over and over.
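If you want to try the delay workaround mentioned at the top of this answer, a minimal retry sketch (the attempt count and delay are arbitrary choices, not from the original answer) might look like this:
import os
import time

def remove_with_retry(path, attempts=5, delay=0.2):
    # Retry os.remove a few times to ride out transient locks held by
    # anti-virus or indexing tools on Windows.
    for _ in range(attempts):
        try:
            os.remove(path)
            return True
        except OSError:
            time.sleep(delay)
    return False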

Related

Check if a Python script is already running in Windows

What is the best/easiest way to check if a specific Python script is already running in Windows?
I have a script that goes over all files in a folder and copies them to another folder (sort to Movie or TV Shows folder).
I want to make sure when the script starts that there isn't another process (of the same script) that is already running, so I wouldn't have issues with 2 scripts that are trying to move the same files.
I have tried to create a file in the start of the script and deleting it when the script finishes, but I got into problems when the script crashes and/or throws an error.
I know that I can use psutil, but then I will get the process name (python.exe) and I'm looking for a way to distinguish whether the Python process is running my script or another program.
You can use psutil.Process().cmdline() to see the complete command line of a process.
Alternatively, you could lock the files you're working on. See the answer to this question for how to do this on MS Windows. The thing with locks is that you have to be careful to remove them, especially when an error occurs.
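For example, a rough sketch along those lines (assuming a psutil version where cmdline() is a method; the helper name is my own) could be:
import os
import sys
import psutil

def already_running(script_name):
    # Look for another process whose command line mentions this script.
    for proc in psutil.process_iter():
        try:
            if proc.pid != os.getpid() and any(script_name in part for part in proc.cmdline()):
                return True
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return False

if already_running(os.path.basename(__file__)):
    sys.exit(0)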
For the Windows OS you can use a timestamp.txt file:
timestamp = 'timestamp.txt'
...
elif windows:
    try:
        new_timestamp = False
        if not os.path.exists(timestamp):
            new_timestamp = True
            try:
                with open(timestamp, 'a') as f_timestamp:
                    f_timestamp.write(str(int_t0))
            except IOError as e:
                out1 = 'M. Cannot open file for writing. Error: %s - %s.' \
                       % (e.filename, e.strerror) + ' -> Exit code 3'
                logging.error(out1)
                sys.exit(3)
        if not new_timestamp and os.path.exists(timestamp):
            out1 = 'N. Script ' + __file__ + ' is already running.'
            print(out1)
            logging.error(out1)
            sys.exit(0)
    except IOError as e:
        out1 = 'J. Cannot open file. Error: %s - %s.' \
               % (e.filename, e.strerror) + ' -> Exit code 4'
        logging.error(out1)
...
try:
    f_timestamp.close()
    os.remove(timestamp)
except OSError as e:
    logging.error('B. Cannot delete ' + timestamp +
                  ' Error: %s - %s.' % (e.filename, e.strerror))
Use lockfile. It is cross-platform, uses native OS functions, and is much more robust than any home-brewed lock-file creation scheme.
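A minimal sketch of how that might look (assuming the package's LockFile context manager; the lock path is made up):
from lockfile import LockFile

lock = LockFile('/tmp/my_script')  # made-up lock path
with lock:
    # Only one instance gets past this point at a time;
    # the lock is released even if the body raises.
    print 'doing the work'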
I have solved it by using an empty dummy file.
At the start of the process I check whether the file exists. If it doesn't, I create a new file, run the process, and delete the file at the end (even if the process failed). If the file does exist, that means the process is already running, so I terminate the current (new) process.
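A bare-bones sketch of that approach (the file name and the run_the_copy() placeholder are hypothetical):
import os
import sys

lock_path = 'copy_script.lock'  # arbitrary dummy file name
if os.path.exists(lock_path):
    sys.exit(0)  # another instance is already running
open(lock_path, 'w').close()
try:
    run_the_copy()  # placeholder for the real work of the script
finally:
    os.remove(lock_path)  # removed even if the work raises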

ERROR: "filetest.submit" doesn't contain any "queue" commands -- no jobs queued

I am writing a python script that creates a Condor submit file, writes information to it and then submits it to be run on Condor.
for f in my_range(0, 10, 2):
    condor_submit.write('Arguments = povray +Irubiks.pov +0frame' + str(f) + '.png +K.' + str(f) + '\n')  # '+ stat +'
    condor_submit.write('Output = ' + str(f) + '.out\n')
    condor_submit.write('queue\n\n')

subprocess.call('condor_submit %s' % (fname,), shell=True)
What I don't understand is that I get the error saying there is no 'queue' command.
I opened up the created submit file and it shows up as..
universe=vanilla
.... (the rest of the header)
should_transfer_files = yes
when_to_transfer_files = on_exit
Arguments = test frame0.pov
Output = 0.out
queue
Arguments = test frame2.pov
and so on. Each section, composed of Arguments, Output, and queue, does end with a queue statement, and the whole file is formatted like that.
What is causing it not to notice the queue lines?
Thank you!
The data is likely buffered and not actually in the submit file yet. After you are done writing to the submit file either close the file or flush it before you invoke condor_submit.
The reason the data is there when you inspect the file after the program errors out is that the file is likely closed either (a) later in your program or (b) automatically at program exit.
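A minimal self-contained sketch of the fix (file contents abbreviated; the with-block closes, and therefore flushes, the file before submitting):
import subprocess

fname = 'filetest.submit'
with open(fname, 'w') as condor_submit:
    condor_submit.write('universe = vanilla\n')
    for f in range(0, 10, 2):
        condor_submit.write('Arguments = povray +Irubiks.pov +0frame%d.png +K.%d\n' % (f, f))
        condor_submit.write('Output = %d.out\n' % f)
        condor_submit.write('queue\n\n')
# The file is closed (and flushed) here, so condor_submit sees the queue lines.
subprocess.call('condor_submit %s' % (fname,), shell=True)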

Error handling when opening applications in python

I need to open a file in a specific application using python. I'm going to default to opening the file using the default app location / filename; however should the app be unable to be opened, I'd like to handle that error and give the user some other options.
I've understood so far that subprocess.call is the best way to do this, as opposed to os.system.
Application: /Applications/GreatApp.app
This works fine
subprocess.call(['open','/Applications/GreatApp.app','placeholder.gap'])
However, when I start adding the try / except blocks, they seem to do nothing. (Note the space in the application name - I'm using the incorrect name to force an exception.)
try:
    subprocess.call(['open', '/Applications/Great App.app', 'placeholder.gap'])
except:
    print 'this is an error placeholder'
I'll still see the following error displayed in python
The file /Applications/Great App.app does not exist.
The closest I've found to some form of error handling is the following. Is looking at the value of retcode the right way to go about this?
try:
    retcode = subprocess.call("open " + filename, shell=True)
    if retcode < 0:
        print >>sys.stderr, "Child was terminated by signal", -retcode
    else:
        print >>sys.stderr, "Child returned", retcode
except OSError, e:
    print >>sys.stderr, "Execution failed:", e
Turns out retcode isn't the way, as both correct and incorrect names give a value greater than 0.
What does this show?
subprocess.call(['ls', '-l', '/Applications'])
The error message you got says that the application that you are trying to open does not exist.
And you won't get an exception if it doesn't as you have found.
Try this. It will open the file with its default editor, if one is installed.
ss = subprocess.Popen(FileName, shell=True)
ss.communicate()
I have no control over the application that the user has associated with the file I'm opening.
I need to be able to specify the application to be used and the file to be opened.
subprocess.call doesn't raise exceptions should the application/file be unable to be opened.
Its friend subprocess.check_call does.
From the docs: http://docs.python.org/2/library/subprocess.html#subprocess.check_call
If the return code was zero then return, otherwise raise
CalledProcessError. The CalledProcessError object will have the return
code in the returncode attribute.
I've provided my usage examples below for future reference.
For OSX
FNULL = open(os.devnull, 'w')  # Used in combination with the stdout variable
                               # to prevent output from being displayed while
                               # the program launches.
try:
    # First try the default install location of the application
    subprocess.check_call(['open', '/Applications/Application.app', filename], stdout=FNULL)
except subprocess.CalledProcessError:
    # Then ask user to manually enter in the filepath to Application.app
    print 'unable to find /Applications/Application.app'
    # Now ask user to manually enter filepath
For Windows, change this line:
subprocess.check_call([r'C:\file\to\program', filename], stdout=FNULL)

Python / Pexpect before output out of sync

I'm using Python/Pexpect to spawn an SSH session to multiple routers. The code will work for one router but then the output of session.before will get out of sync with some routers so that it will return the output from a previous sendline. This seems particularly the case when sending a blank line (sendline()). Anyone got any ideas? Any insight would be really appreciated.
Below is a sample of what I'm seeing:
ssh_session.sendline('sh version')
while (iresult==2):
    iresult = ssh_session.expect(['>', '#', '--More--'], timeout=SESSION_TIMEOUT)
    debug_print("execute_1 " + str(iresult))
    debug_print("execute_bef " + ssh_session.before)
    debug_print("execute_af " + ssh_session.after)
    thisoutput = ssh_session.before
    output += thisoutput
    if(iresult==2):
        debug_print("exec MORE")
        ssh_session.send(" ")
    else:
        debug_print("exec: end loop")

for cmd in config_commands:
    debug_print("------------------------------------------------\n")
    debug_print("running command " + cmd.strip() + "\n")
    iresult = 2
    ssh_session.sendline(cmd.strip())
    while (iresult==2):
        iresult = ssh_session.expect([prompt+">", prompt+"#", " --More-- "], timeout=SESSION_TIMEOUT)
        thisoutput = ssh_session.before
        debug_print("execute_1 " + str(iresult))
        debug_print("execute_af " + ssh_session.after)
        debug_print("execute_bef " + thisoutput)
        thisoutput = ssh_session.before
        output += thisoutput
        if(iresult==2):
            debug_print("exec MORE")
            ssh_session.send(" ")
        else:
            debug_print("exec: end loop")
I get this:
logged in
exec: sh version
execute_1 1
execute_bef
R9
execute_af #
exec: end loop
------------------------------------------------
running command config t
execute_1 1
execute_af #
execute_bef sh version
Cisco IOS Software, 1841 Software (C1841-IPBASEK9-M), Version 15.1(4)M4, RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport...
I've run into this before with pexpect (and I'm trying to remember how I worked around it).
You can re-synchronize with the terminal session by sending a return and then expecting for the prompt in a loop. When the expect times out then you know that you are synchronized.
The root cause is probably that you are either:
Calling send without a matching expect (because you don't care about the output)
Running a command that produces output, but expecting a pattern in the middle of that output rather than the next prompt at the end of the output. One way to deal with this is to change your expect pattern to "(.+)PROMPT" - this will expect up to the next prompt and capture all the output of the command sent (which you can parse in the next step).
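A rough sketch of the re-synchronization idea described above (the helper name and timeout value are my own, not from the original answer):
import pexpect

def resync(session, prompt, quiet_timeout=1):
    # Send a bare return, then swallow prompts until nothing more arrives;
    # at that point .before should be back in step with what we send next.
    session.sendline('')
    while True:
        try:
            session.expect(prompt, timeout=quiet_timeout)
        except pexpect.TIMEOUT:
            break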
I faced a similar problem. I tried waiting for the command to be printed on the screen and then sending enter.
If you want to execute, say, the command 'cmd', then you do:
session.send(cmd)
index = session.expect([cmd, pexpect.TIMEOUT], 1)
session.send('\n')
index = session.expect([whatever you expect])
Worked for me.
I'm not sure this is the root of your problem, but it may be worth a try.
Something I've run into is that when you spawn a session that starts with or lands you in a shell, you have to deal with quirks of the TERM type (vt220, color-xterm, etc.). You will see characters used to move the cursor or change colors. The problem is almost guaranteed to show up with the prompt; the string you are looking for to identify the prompt appears twice because of how color changes are handled (the prompt is sent, then codes to backspace, change the color, then the prompt is sent again... but expect sees both instances of the prompt).
Here's something that handles this, guaranteed to be ugly, hacky, not very Pythonic, and functional:
import pexpect
import sys

# wait_for_prompt: handle terminal prompt craziness
# returns either the pexpect.before contents that occurred before the
# first sighting of the prompt, or returns False if we had a timeout
#
def wait_for_prompt(session, wait_for_this, wait_timeout=30):
    status = session.expect([wait_for_this, pexpect.TIMEOUT, pexpect.EOF], timeout=wait_timeout)
    if status != 0:
        print 'ERROR : timeout waiting for "' + wait_for_this + '"'
        return False
    before = session.before  # this is what we will want to return
    # now look for and handle any additional sightings of the prompt
    while True:
        try:
            session.expect(wait_for_this, timeout=0.1)
        except:
            # we expect a timeout here. All is normal. Move along, Citizen.
            break  # get out of the while loop
    return before

s = pexpect.spawn('ssh me@myserver.local')
s.expect('password')  # yes, we assume that the SSH key is already there
                      # and that we will successfully connect. I'm bad.
s.sendline('mypasswordisverysecure')  # Also assuming the right password
prompt = 'me$'
wait_for_prompt(s, prompt)
s.sendline('df -h')  # how full are my disks?
results = wait_for_prompt(s, prompt)
if results:
    print results
    sys.exit(0)
else:
    print 'Misery. You lose.'
    sys.exit(1)
I know this is an old thread, but I didn't find much about this online and I just got through making my own quick-and-dirty workaround for this. I'm also using pexpect to run through a list of network devices and record statistics and so forth, and my pexpect.spawn.before will also get out of sync sometimes. This happens very often on the faster, more modern devices for some reason.
My solution was to write an empty carriage return between each command, and check the len() of the .before variable. If it's too small, it means it only captured the prompt, which means it must be at least one command behind the actual ssh session. If that's the case, the program sends another empty line to move the actual data that I want into the .before variable:
def new_line(this, iteration):
    # Give up after a few attempts and return whatever .before currently holds.
    if iteration > 4:
        return this.before
    else:
        iteration += 1
        this.expect(":")
        this.sendline(" \r")
        data = this.before
        if len(data) < 50:
            # The number 50 was chosen because it should be longer than just the
            # hostname and prompt of the device, but shorter than any actual output
            data = new_line(this, iteration)
        return data

def login(hostname):
    this = pexpect.spawn("ssh %s" % hostname)
    stop = this.expect([pexpect.TIMEOUT, pexpect.EOF, ":"], timeout=20)
    if stop == 2:
        try:
            this.sendline("\r")
            this.expect(":")
            this.sendline("show version\r")
            version = new_line(this, 0)
            this.expect(":")
            this.sendline("quit\r")
            return version
        except:
            print 'failed to execute commands'
            this.kill(0)
    else:
        print 'failed to login'
        this.kill(0)
I accomplish this with a recursive function that calls itself until the .before variable finally captures the command's output, or until it has called itself 5 times, in which case it simply gives up.

How can I capture the return value of shutil.copy() in Python (on DOS)?

I am trying to record the success or failure of a number of copy commands, into a log file. I'm using shutil.copy() - e.g.
str_list.append(getbitmapsfrom)
game.bigbitmap = "i doubt this is there.bmp"
str_list.append(game.bigbitmap)
source = '\\'.join(str_list)
shutil.copy(source, newbigbmpname)
I forced one of the copy commands in my script to fail, and it generated the error:
[Errno 2] No such file or directory: 'X:\PJ_public\PJ_Services\BSkyB-PlayJam\Content\P_NewPortal2009\1.0.0\pframes\i doubt this is is there.bmp'
This is great, but can I capture "Errno 2 No such file or directory" and write it to a log file? Does shutil.copy() return an integer value? - I don't see this described in the Python docs.
I guess also I want to be able to capture the return value, so that the script doesn't bomb out on a copy failure - I'm trying to make it continue regardless of errors.
Thanks.
You'll want to look at the exceptions section of the Python tutorial. In the case of shutil.copy() not finding one of the arguments, an IOError exception will be raised. You can get the message from the exception instance.
try:
    shutil.copy(src, dest)
except IOError, e:
    print "Unable to copy file. %s" % e
You will rarely see C-like return codes in Python; errors are signalled by exceptions instead.
The correct way of logging the result is:
try:
    shutil.copy(src, dest)
except EnvironmentError:
    print "Error happened"
else:
    print "OK"
try:
    shutil.copy(archivo, dirs)
except EnvironmentError:
    print "Copy failed"
    escritura = "could not copy %s to %s \n" % (archivo, dirs)
else:
    print "Copied successfully"
    escritura = "%s --> %s \n" % (archivo, dirs)
finally:
    log = open("/tmp/errorcreararboldats.log", "a")
    log.write(escritura)
    log.close()
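If the goal is a log file rather than console output, a small sketch using the standard logging module (the paths and log file name here are made up) could be:
import logging
import shutil

logging.basicConfig(filename='copy.log', level=logging.INFO)

for source, dest in [('a.bmp', 'backup/a.bmp'), ('b.bmp', 'backup/b.bmp')]:  # made-up paths
    try:
        shutil.copy(source, dest)
    except EnvironmentError, e:
        logging.error('copy failed: %s -> %s (%s)', source, dest, e)
        continue  # keep going despite the failure
    logging.info('copied %s -> %s', source, dest)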
