ERROR: "filetest.submit" doesn't contain any "queue" commands -- no jobs queued - python

I am writing a Python script that creates a Condor submit file, writes information to it, and then submits it to be run on Condor.
for f in my_range(0, 10, 2):
    condor_submit.write('Arguments = povray +Irubiks.pov +0frame' + str(f) + '.png +K.' + str(f) + '\n') # '+ stat +'
    condor_submit.write('Output = ' + str(f) + '.out\n')
    condor_submit.write('queue\n\n')

subprocess.call('condor_submit %s' % (fname,), shell=True)
What I don't understand is why I get the error saying there is no 'queue' command.
I opened up the created submit file and it shows up as:
universe=vanilla
.... (the rest of the header)
should_transfer_files = yes
when_to_transfer_files = on_exit
Arguments = test frame0.pov
Output = 0.out
queue
Arguments = test frame2.pov
and so on. Each section, composed of Arguments, Output, and queue, does end with a queue statement, and the file is formatted exactly like that.
What is causing it not to notice the queue lines?
Thank you!

The data is likely buffered and not actually in the submit file yet. After you are done writing to the submit file either close the file or flush it before you invoke condor_submit.
The reason it is there after the program errors out and you inspect it is because the file is likely closed either (a) later in your program or (b) automatically at program exit.
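For example, a minimal sketch of that fix, reusing the condor_submit file object and fname from the question:
# Make sure the data is actually on disk before condor_submit reads the file.
condor_submit.flush()   # push Python's userspace buffer to the OS
condor_submit.close()   # or simply close(), which flushes too
subprocess.call('condor_submit %s' % (fname,), shell=True)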

Related

Writing to stdout = output_file gives a different file length than invoking the program directly via shell

I'm trying to get the output of an external program, invoked by a Python script, redirected from stdout to a file.
output_file = open(list_file, "wb")
programm_aufruf = './deadsp -b ' + ausgabedatei
#print programm_aufruf
args = shlex.split(programm_aufruf)
#print args
sub_p = subprocess.Popen(args, stdout=output_file)
print 'waiting for disassembler '
time.sleep(0.002)
poll = sub_p.poll()
while poll == None:
    print '.\r',
    poll = sub_p.poll()
    sys.stdout.flush()
    time.sleep(0.05)
sys.stdout.flush()
output_file.flush
output_file.close
But when the script invokes the external program, the generated file is 118 KB, while invoking the program via the shell gives a 162 KB file, so some of the disassembled code is missing.
I've tried to flush() stdout, flush the file... nothing helps.
Currently I'm developing under macOS, but that shouldn't matter anyway; I don't have a Linux VM to hand at the moment.
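One thing worth checking in the snippet above: output_file.flush and output_file.close only reference the methods without calling them, so nothing is actually flushed or closed. A sketch of the intended calls, also waiting for the child process to finish first:
sub_p.wait()           # block until the external program has exited
output_file.flush()    # note the parentheses: this actually calls flush
output_file.close()    # likewise, close() must be called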

Python get line number in file

I built a Python (2.7) script that parses a txt file with this code:
cnt = 1
logFile = open(logFilePath, 'r')
for line in logFile:
    if errorCodeGetHostName in line:
        errorHostNameCnt = errorHostNameCnt + 1
        errorGenericCnt = errorGenericCnt + 1
        reportFile.write("--- Error: GET HOST BY NAME # line " + str(cnt) + "\n\r")
        reportFile.write(line)
    elif errorCodeSocke462 in line:
        errorSocket462Cnt = errorSocket462Cnt + 1
        errorGenericCnt = errorGenericCnt + 1
        reportFile.write("--- Error: SOCKET -462 # line " + str(cnt) + "\n\r")
        reportFile.write(line)
    elif errorCodeMemory in line:
        errorMemoryCnt = errorMemoryCnt + 1
        errorGenericCnt = errorGenericCnt + 1
        reportFile.write("--- Error: MEMORY NOT RELEASED # line " + str(cnt) + "\n\r")
        reportFile.write(line)
    cnt = cnt + 1
I want to add the line number of each error, and for this purpose I added a counter (cnt), but its value does not match the real line number.
This is a piece of my log file:
=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2017.06.13 17:05:43 =~=~=~=~=~=~=~=~=~=~=~=
UTC Time fetched from server #1: '0.pool.ntp.org'
*** Test (cycle #1) starting...
--- Test01 completed successfully!
--- Test02 completed successfully!
--- Test03 completed successfully!
--- Test04 completed successfully!
--- Test01 completed successfully!
--- Test02 completed successfully!
INF:[CONFIGURATION] Completed
--- Test03 completed successfully!
Firmware Version: 0.0.0
*** Test (cycle #1) starting...
How can I get the real line number?
Thanks for the help.
Apart from the line-ending issue, there are some other issues with this code.
Filehandles
As remarked in one of the comments, it is best to open files with a with-statement.
Separation of functions
Now you have one big loop where you loop over the original file, parse it, and immediately write to the report file. I think it would be best to separate those.
Make one function to loop over the log and return the details you need, and a second function looping over these details and writing them to a report. This is a lot more robust, and easier to debug and test when something goes wrong.
I would also keep the I/O as far outside as possible. If you later want to stream to a socket or something, this can be done easily.
DRY
Lines 6 to 24 of your code contain a lot of lines that are almost the same, and if you want to add another error to report, you need to add another five almost-identical lines. I would use a dict and a for-loop to cut down on the boilerplate.
Pythonic
A smaller remark is that you don't use the handy things Python offers, like yield, the with-statement, enumerate or collections.Counter. Also, variable naming is not according to PEP-8, but that is mainly aesthetic.
My attempt
errors = {
    errorCodeGetHostName: {'error_msg': '--- Error: GET HOST BY NAME # line %i'},
    errorCodeSocke462: {'error_msg': '--- Error: SOCKET -462 # line %i'},
    errorCodeMemory: {'error_msg': '--- Error: MEMORY NOT RELEASED # line %i'},
}
Here you define what errors can occur and what the error message should look like
def get_events(log_filehandle):
    for line_no, line in enumerate(log_filehandle, 1):
        for error_code, error in errors.items():
            if error_code in line:
                yield line_no, error_code, line
This just takes a filehandle (it can be a Stream or Buffer too) and looks for the error codes in there; if it finds one, it yields it together with the line number and the line. enumerate is started at 1 so that the count matches the usual 1-based line numbering.
import collections

def generate_report(report_filehandle, error_list):
    error_counter = collections.Counter()
    for line_no, error_code, error_line in error_list:
        error_counter['generic'] += 1
        error_counter[error_code] += 1
        error_msg = format_error_msg(line_no, error_code)
        report_filehandle.write(error_msg)
        report_filehandle.write(error_line)
    return error_counter
This loops over the found errors. It increments the counters, formats the message and writes it to the report file.
def format_error_msg(line_no, error_code):
    return errors[error_code]['error_msg'] % line_no
This uses string-formatting to generate a message from an error_code and line_no
with open(log_filename, 'r') as log_filehandle, open(report_filename, 'w') as report_filehandle:
    error_list = get_events(log_filehandle)
    error_counter = generate_report(report_filehandle, error_list)
This ties it all together. You could use the error_counter to add a summary to the report, or write a summary to another file or database.
This approach has the advantage that if your error recognition changes, you can do this independent of the reporting and vice-versa
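For instance, a possible summary step using the returned counter (purely illustrative; it would go inside the with-block above):
summary = 'Total: %i errors\n' % error_counter['generic']
for error_code in errors:
    summary += '%s: %i\n' % (error_code, error_counter[error_code])
report_filehandle.write(summary)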
Intro: the log that I want to parse comes from an embedded platform programmed in C.
I found in the embedded code that somewhere there is a printf with \n\r instead of \r\n. I replaced each \n\r with \r\n, which corresponds to the Windows CR LF.
With this change the Python script works, and I can identify each error by its line.
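If patching the firmware is not an option, the same normalization can be done on the log file itself before parsing it; a minimal sketch, with hypothetical file names:
# Rewrite the embedded platform's '\n\r' endings as Windows '\r\n'
with open('putty.log', 'rb') as f:
    raw = f.read()
with open('putty_fixed.log', 'wb') as f:
    f.write(raw.replace('\n\r', '\r\n'))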

How to exit function with signal on Windows?

I have the following code written in Python 2.7 on Windows. I want to check for updates for the current Python script and, if there is an update, fetch the new version from an FTP server preserving the filename, and then execute the new script after terminating the current one through os.kill with SIGTERM.
I went with the exit-function approach, but I read that on Windows this only works with the atexit library and default Python exit methods. So I used a combination of atexit.register() and a signal handler.
***necessary libraries***
filematch = 'test.py'
version = '0.0'
checkdir = os.path.abspath(".")
dircontent = os.listdir(checkdir)
r = StringIO()

def exithandler():
    try:
        try:
            if filematch in dircontent:
                os.remove(checkdir + '\\' + filematch)
        except Exception as e:
            print e
        ftp = FTP(ip address)
        ftp.login(username, password)
        ftp.cwd('/Test')
        for filename in ftp.nlst(filematch):
            fhandle = open(filename, 'wb')
            ftp.retrbinary('RETR ' + filename, fhandle.write)
            fhandle.close()
        subprocess.Popen([sys.executable, "test.py"])
        print 'Test file successfully updated.'
    except Exception as e:
        print e

ftp = FTP(ip address)
ftp.login(username, password)
ftp.cwd('/Test')
ftp.retrbinary('RETR version.txt', r.write)

if (r.getvalue() != version):
    atexit.register(exithandler)
    somepid = os.getpid()
    signal.signal(SIGTERM, lambda signum, stack_frame: exit(1))
    os.kill(somepid, signal.SIGTERM)

print 'Successfully replaced and started the file'
Using the:
signal.signal(SIGTERM, lambda signum, stack_frame: exit(1))
I get:
Traceback (most recent call last):
File "C:\Users\STiX\Desktop\Python Keylogger\test.py", line 50, in <module>
signal.signal(SIGTERM, lambda signum, stack_frame: exit(1))
NameError: name 'SIGTERM' is not defined
Other than that I get the job done without a problem, except when I use this code in a more complex script, where it gives me the same error but terminates right away for some reason.
On the other hand, if I use it the correct way, signal.SIGTERM, the process goes straight to termination and the exit function is never executed. Why is that?
How can I make this work on Windows and get the outcome that I described above?
What you are trying to do seems a bit complicated (and dangerous from an infosec perspective ;-). I would suggest handling the reload-file-when-updated part of the functionality by adding a controller class that imports the Python script you have now as a module, starts it, and then reloads it when it is updated (based on a function return or another technique) - look this way for inspiration - https://stackoverflow.com/a/1517072/1010991
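A rough sketch of that controller idea, assuming your current script is importable as a module worker with a main() that returns True when an update was downloaded (all names hypothetical):
import worker

while True:
    updated = worker.main()   # run the script's logic once
    if not updated:
        break                 # no new version: we are done
    reload(worker)            # Python 2 builtin: re-import the updated file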
Edit - what about exe?
Another hacky technique for manipulating the file of the currently running program is the shell ping trick. It can be used from all programming languages. The trick is to send a shell command that is not executed until after the calling process has terminated. Use ping to cause the delay and chain the other commands with &. For your use case it could be something like this:
import subprocess
subprocess.Popen("ping -n 2 -w 2000 1.1.1.1 > Nul & del hack.py & rename hack_temp.py hack.py & hack.py ", shell=True)
Edit 2 - Alternative solution to original question
Since Python does not block write access to the currently running script, an alternative concept to solve the original question would be:
import subprocess

print "hello"
a = open(__file__, "r")                 # read the running script's own source
running_script_as_string = a.read()
a.close()
b = open(__file__, "w")                 # rewrite the file in place
b.write(running_script_as_string)
b.write("\nprint 'updated version of hack'")
b.close()
subprocess.Popen("python hack.py")      # start the modified script

Python / Pexpect before output out of sync

I'm using Python/Pexpect to spawn an SSH session to multiple routers. The code will work for one router but then the output of session.before will get out of sync with some routers so that it will return the output from a previous sendline. This seems particularly the case when sending a blank line (sendline()). Anyone got any ideas? Any insight would be really appreciated.
Below is a sample of what I'm seeing:
ssh_session.sendline('sh version')
while (iresult == 2):
    iresult = ssh_session.expect(['>', '#', '--More--'], timeout=SESSION_TIMEOUT)
    debug_print("execute_1 " + str(iresult))
    debug_print("execute_bef " + ssh_session.before)
    debug_print("execute_af " + ssh_session.after)
    thisoutput = ssh_session.before
    output += thisoutput
    if (iresult == 2):
        debug_print("exec MORE")
        ssh_session.send(" ")
    else:
        debug_print("exec: end loop")

for cmd in config_commands:
    debug_print("------------------------------------------------\n")
    debug_print("running command " + cmd.strip() + "\n")
    iresult = 2
    ssh_session.sendline(cmd.strip())
    while (iresult == 2):
        iresult = ssh_session.expect([prompt + ">", prompt + "#", " --More-- "], timeout=SESSION_TIMEOUT)
        thisoutput = ssh_session.before
        debug_print("execute_1 " + str(iresult))
        debug_print("execute_af " + ssh_session.after)
        debug_print("execute_bef " + thisoutput)
        thisoutput = ssh_session.before
        output += thisoutput
        if (iresult == 2):
            debug_print("exec MORE")
            ssh_session.send(" ")
        else:
            debug_print("exec: end loop")
I get this:
logged in
exec: sh version
execute_1 1
execute_bef
R9
execute_af #
exec: end loop
------------------------------------------------
running command config t
execute_1 1
execute_af #
execute_bef sh version
Cisco IOS Software, 1841 Software (C1841-IPBASEK9-M), Version 15.1(4)M4, RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport...
I've run into this before with pexpect (and I'm trying to remember how I worked around it).
You can re-synchronize with the terminal session by sending a return and then expecting the prompt in a loop; when the expect finally times out, you know that you are synchronized.
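A rough sketch of that loop, assuming a pexpect session object and a known prompt pattern (both hypothetical here):
import pexpect

def resync(session, prompt, max_tries=10):
    session.sendline('')                 # send a bare return
    for _ in range(max_tries):
        try:
            session.expect(prompt, timeout=1)
        except pexpect.TIMEOUT:
            return True                  # nothing left buffered: in sync
    return False                         # still seeing prompts: give up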
The root cause is probably that you are either:
Calling send without a matching expect (because you don't care about the output), or
Running a command that produces output, but expecting a pattern in the middle of that output rather than the next prompt at the end of it. One way to deal with this is to change your expect pattern to "(.+)PROMPT": this will expect until the next prompt and capture all the output of the command sent (which you can parse in the next step).
I faced a similar problem. I tried waiting for the command to be echoed on the screen and then sending enter.
If you want to execute, say, command cmd, then you do:
session.send(cmd)
index = session.expect([cmd, pexpect.TIMEOUT], 1)
session.send('\n')
index = session.expect([whatever you expect])
Worked for me.
I'm not sure this is the root of your problem, but it may be worth a try.
Something I've run into is that when you spawn a session that starts with or lands you in a shell, you have to deal with quirks of the TERM type (vt220, color-xterm, etc.). You will see characters used to move the cursor or change colors. The problem is almost guaranteed to show up with the prompt; the string you are looking for to identify the prompt appears twice because of how color changes are handled (the prompt is sent, then codes to backspace, change the color, then the prompt is sent again... but expect sees both instances of the prompt).
Here's something that handles this, guaranteed to be ugly, hacky, not very Pythonic, and functional:
import pexpect
import sys

# wait_for_prompt: handle terminal prompt craziness
# returns either the pexpect .before contents that occurred before the
# first sighting of the prompt, or returns False if we had a timeout
#
def wait_for_prompt(session, wait_for_this, wait_timeout=30):
    status = session.expect([wait_for_this, pexpect.TIMEOUT, pexpect.EOF], timeout=wait_timeout)
    if status != 0:
        print 'ERROR : timeout waiting for "' + wait_for_this + '"'
        return False
    before = session.before  # this is what we will want to return
    # now look for and handle any additional sightings of the prompt
    while True:
        try:
            session.expect(wait_for_this, timeout=0.1)
        except:
            # we expect a timeout here. All is normal. Move along, Citizen.
            break  # get out of the while loop
    return before

s = pexpect.spawn('ssh me@myserver.local')
s.expect('password')  # yes, we assume that the SSH key is already there
                      # and that we will successfully connect. I'm bad.
s.sendline('mypasswordisverysecure')  # Also assuming the right password
prompt = 'me$'
wait_for_prompt(s, prompt)
s.sendline('df -h')  # how full are my disks?
results = wait_for_prompt(s, prompt)
if results:
    print results
    sys.exit(0)
else:
    print 'Misery. You lose.'
    sys.exit(1)
I know this is an old thread, but I didn't find much about this online and I just got through making my own quick-and-dirty workaround for this. I'm also using pexpect to run through a list of network devices and record statistics and so forth, and my pexpect.spawn.before will also get out of sync sometimes. This happens very often on the faster, more modern devices for some reason.
My solution was to write an empty carriage return between each command, and check the len() of the .before variable. If it's too small, it means it only captured the prompt, which means it must be at least one command behind the actual ssh session. If that's the case, the program sends another empty line to move the actual data that I want into the .before variable:
def new_line(this, iteration):
    if iteration > 4:
        return this.before  # give up after 5 tries
    iteration += 1
    this.expect(":")
    this.sendline(" \r")
    data = this.before
    if len(data) < 50:
        # The number 50 was chosen because it should be longer than just the
        # hostname and prompt of the device, but shorter than any actual output
        data = new_line(this, iteration)
    return data
def login(hostname):
    this = pexpect.spawn("ssh %s" % hostname)
    stop = this.expect([pexpect.TIMEOUT, pexpect.EOF, ":"], timeout=20)
    if stop == 2:
        try:
            this.sendline("\r")
            this.expect(":")
            this.sendline("show version\r")
            version = new_line(this, 0)
            this.expect(":")
            this.sendline("quit\r")
            return version
        except:
            print 'failed to execute commands'
            this.kill(0)
    else:
        print 'failed to login'
        this.kill(0)
I accomplish this with a recursive function that calls itself until the .before variable finally captures the command's output, or until it has called itself 5 times, in which case it simply gives up.

What is an output file in python

I'm starting to work on problems for Google's Code Jam. However, there seems to be a problem with my submission. Whenever I submit I am told "Your output should start with 'Case #1: '". My output is a print statement, "Case #%s: %s" % (y + 1, p), which prints Case #1: etc. when I run my code.
I looked into it and it said "Your output should start with 'Case #1: ': If you get this message, make sure you did not upload the source file in place of the output file, and that you're outputting case numbers properly. The first line of the output file should always start with "Case #1:", followed by a space or the end of the line."
So what is an output file and how would I incorporate it into my code?
Extra info: this is my code; I'm saving it as GoogleCode1.py and submitting that file. I wrote it in IDLE.
import string

firstimput = raw_input("cases ")
for y in range(int(firstimput)):
    nextimput = raw_input("imput ")
    firstlist = string.split(nextimput)
    firstlist.reverse()
    p = ""
    for x in range(len(firstlist)):
        p = p + firstlist[x] + " "
    p = p[:-1]
    print "Case #%s: %s" % (y + 1, p)
Run the script in a shell, and redirect the output.
python GoogleCode1.py > GoogleCode1.out
I/O redirection aside, the other way to do this would be to read from and write to files. Look up file handling in Python:
input_file = open('/path/to/input_file')
output_file = open('/path/to/output_file', 'w')

for line in input_file:
    answer = myFunction(line)
    output_file.write("Case #x: " + str(answer))

input_file.close()
output_file.close()
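Note that "Case #x: " above is a placeholder; to write real case numbers you could number the answers as you go, e.g. with enumerate (a sketch reusing myFunction from above):
for case_number, line in enumerate(input_file, 1):
    answer = myFunction(line)
    output_file.write("Case #%d: %s\n" % (case_number, answer))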
Cheers
Make sure you're submitting a file containing what your code outputs -- don't submit the code itself during a practice round.
