My python script stops... no error, just stops - python

I am running a script that iterates through a text file. On each line of the text file there is an IP address. The script grabs the banner, then writes the IP + banner to another file.
The problem is that it just stops after around 500 lines, more or less, with no error.
Another weird thing is that if I run it with python3 it does what I said above. If I run it with python it iterates through those 500 lines, then starts at the beginning. I noticed this when I saw repetitions in my output file. Anyway, here is the code; maybe you guys can tell me what I'm doing wrong:
import os
import subprocess
import concurrent.futures
#import time, random
import threading
import multiprocessing

with open("ipuri666.txt") as f:
    def multiprocessing_func():
        try:
            line2 = line.rstrip('\r\n')
            a = subprocess.Popen(["curl", "-I", line2, "--connect-timeout", "1", "--max-time", "1"], stdout=subprocess.PIPE)
            b = subprocess.Popen(["grep", "Server"], stdin=a.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            #a.stdout.close()
            out, err = b.communicate()
            g = open("IP_BANNER2","a")
            print( "out: {0}".format(out))
            g.write(line2 + " " + "out: {0}\n".format(out))
            print("err: {0}".format(err))
        except IOError:
            print("Connection timed out")

    if __name__ == '__main__':
        #starttime = time.time()
        processes = []
        for line in f:
            p = multiprocessing.Process(target=multiprocessing_func, args=())
            processes.append(p)
            p.start()

        for process in processes:
            process.join()

If your use case allows, I would recommend just rewriting this as a shell script; there is no need to use Python. (This would likely solve your issue indirectly.)
#!/usr/bin/env bash

readarray -t ips < ipuri666.txt

for ip in "${ips[@]}"; do
    output=$(curl -I "$ip" --connect-timeout 1 --max-time 1 | grep "Server")
    echo "$ip $output" >> fisier.txt
done
The script is slightly simpler than what you are trying to do; for instance, I do not capture the error. This should be pretty close to what you are trying to accomplish. I will update again if needed.
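If the Python route is kept instead, a worker pool that bounds the number of simultaneous curl calls is another option (a sketch of my own, not part of the answer above, reusing the file names from the question); starting one multiprocessing.Process plus one curl per input line, as the original script does, is a plausible way to run into system limits with no visible error.

import subprocess
from multiprocessing import Pool

def grab_banner(line):
    ip = line.rstrip("\r\n")
    try:
        curl = subprocess.run(
            ["curl", "-I", ip, "--connect-timeout", "1", "--max-time", "1"],
            stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
        # keep only header lines that mention "Server", like the grep in the question
        banner = [l for l in curl.stdout.decode(errors="replace").splitlines()
                  if "Server" in l]
        return ip, banner[0] if banner else ""
    except OSError:
        return ip, "error"

if __name__ == "__main__":
    with open("ipuri666.txt") as f:
        lines = f.readlines()
    with Pool(20) as pool, open("IP_BANNER2", "a") as out:
        # at most 20 curl calls run at any one time
        for ip, banner in pool.imap_unordered(grab_banner, lines):
            out.write("{0} out: {1}\n".format(ip, banner))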

Related

Log output of background process to a file

I have a time-consuming SNMP walk task to perform, which I am running as a background process using the Popen command. How can I capture the output of this background task in a log file? In the code below, I am trying to do an snmpwalk on each IP in ip_list and log all the results to abc.txt. However, I see the generated file abc.txt is empty.
Here is my sample code below -
import subprocess
import sys

f = open('abc.txt', 'a+')
ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]
for ip in ip_list:
    cmd = "snmpwalk.exe -t 1 -v2c -c public "
    cmd = cmd + ip
    print(cmd)
    p = subprocess.Popen(cmd, shell=True, stdout=f)
    p.wait()
f.close()

print("File output - " + open('abc.txt', 'r').read())
The sample output from the command can be something like this for each IP:
sysDescr.0 = STRING: Software: Whistler Version 5.1 Service Pack 2 (Build 2600)
sysObjectID.0 = OID: win32
sysUpTimeInstance = Timeticks: (15535) 0:02:35.35
sysContact.0 = STRING: unknown
sysName.0 = STRING: UDLDEV
sysLocation.0 = STRING: unknown
sysServices.0 = INTEGER: 72
sysORID.4 = OID: snmpMPDCompliance
I have already tried Popen, but it does not log output to a file if it is a time-consuming background process. However, it works when I try to run a quick process like ls/dir. Any help is appreciated.
The main issue here, I assume, is the expectation of what Popen does and how it works.
p.wait() here will wait for the process to finish before continuing; that is why ls, for instance, works but more time-consuming tasks don't. And there's nothing flushing the output automatically until you call p.stdout.flush().
The way you've set it up is more meant to work for:
Execute command
Wait for exit
Catch output
And then work with it. For your use case, you'd be better off using an alternative library, or using stdout=subprocess.PIPE and catching the output yourself. That would mean something along the lines of:
import subprocess
import sys

ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]

with open('abc.txt', 'a+') as output:
    for ip in ip_list:
        print(cmd := f"snmpwalk.exe -t 1 -v2c -c public {ip}")

        # text=True so the pipe yields str, matching the '' sentinel below
        process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, text=True)  # Be wary of shell=True
        while process.poll() is None:
            for c in iter(lambda: process.stdout.read(1), ''):
                if c != '':
                    output.write(c)

with open('abc.txt', 'r') as log:
    print("File output: " + log.read())
The key thing to take away here is process.poll(), which checks whether the process has finished; if not, we try to catch the output with process.stdout.read(1), reading one character at a time. If you know there are newlines coming, you can switch those three lines to output.write(process.stdout.readline()) and you're all set.
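For completeness, here is a minimal sketch of the readline variant mentioned above (my own illustration, reusing one IP from the question and assuming text=True so reads return strings):

import subprocess

with open('abc.txt', 'a+') as output:
    cmd = "snmpwalk.exe -t 1 -v2c -c public 192.163.1.104"
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, text=True)
    # readline() returns '' only at EOF, so this drains the output line by line
    for line in iter(process.stdout.readline, ''):
        output.write(line)
    process.wait()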

Display process output incrementally using Python subprocess

I'm trying to run "docker-compose pull" from inside a Python automation script and to incrementally display the same output that the Docker command would print if it were run directly from the shell. This command prints a line for each Docker image found in the system, incrementally updates each line with the image's download progress (a percentage), and replaces this percentage with "done" when the download has completed. I first tried getting the command output with subprocess.poll() and (blocking) readline() calls:
import shlex
import subprocess

def run(command, shell=False):
    p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell)

    while True:
        # print one output line
        output_line = p.stdout.readline().decode('utf8')
        error_output_line = p.stderr.readline().decode('utf8')
        if output_line:
            print(output_line.strip())
        if error_output_line:
            print(error_output_line.strip())

        # check if process finished
        return_code = p.poll()
        if return_code is not None and output_line == '' and error_output_line == '':
            break

    if return_code > 0:
        print("%s failed, error code %d" % (command, return_code))

run("docker-compose pull")
The code gets stuck in the first (blocking) readline() call. Then I tried to do the same without blocking:
import select
import shlex
import subprocess
import sys
import time

def run(command, shell=False):
    p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell)

    io_poller = select.poll()
    io_poller.register(p.stdout.fileno(), select.POLLIN)
    io_poller.register(p.stderr.fileno(), select.POLLIN)

    while True:
        # poll IO for output
        io_events_list = []
        while not io_events_list:
            time.sleep(1)
            io_events_list = io_poller.poll(0)

        # print new output
        for event in io_events_list:
            # must be tested because non-registered events (eg POLLHUP) can also be returned
            if event[1] & select.POLLIN:
                if event[0] == p.stdout.fileno():
                    output_str = p.stdout.read(1).decode('utf8')
                    print(output_str, end="")
                if event[0] == p.stderr.fileno():
                    error_output_str = p.stderr.read(1).decode('utf8')
                    print(error_output_str, end="")

        # check if process finished
        # when subprocess finishes, iopoller.poll(0) returns a list with 2 select.POLLHUP events
        # (one for stdout, one for stderr) and does not enter the inner loop
        return_code = p.poll()
        if return_code is not None:
            break

    if return_code > 0:
        print("%s failed, error code %d" % (command, return_code))

run("docker-compose pull")
This works, but only the final lines (with "done" at the end) are printed to the screen, once all the Docker image downloads have completed.
Both methods work fine with a command with simpler output, such as "ls". Maybe the problem is related to how this Docker command prints incrementally to the screen, overwriting already written lines? Is there a safe way to incrementally show the exact output of a command in the command line when running it via a Python script?
EDIT: 2nd code block was corrected
Always open STDIN as a pipe, and if you are not using it, close it immediately.
p.stdout.read() will block until the pipe is closed, so your polling code does nothing useful here. It needs modifications.
I suggest not using shell=True.
Instead of *.readline(), try *.read(1) and wait for "\n".
Of course you can do what you want in Python; the question is how. A child process might have different ideas about how its output should look, and that's when trouble starts. E.g. the process might explicitly want a terminal at the other end, not your process. Or a lot of such simple nonsense. Buffering may also cause problems; you can try starting Python in unbuffered mode to check (/usr/bin/python -U).
If nothing works, then use the pexpect automation library instead of subprocess.
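For reference, a minimal pexpect sketch (my own illustration of that suggestion, not code from the answer); pexpect runs the child under a pseudo-terminal, which often convinces tools to stream their progress as they would on a real console:

import sys
import pexpect

child = pexpect.spawn("docker-compose pull", encoding="utf-8", timeout=None)
child.logfile_read = sys.stdout   # echo everything the child prints as it arrives
child.expect(pexpect.EOF)         # block until the command finishes
child.close()
print("exit status:", child.exitstatus)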
I have found a solution, based on the first code block of my question:
import shlex
import subprocess

def run(command, shell=False):
    p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell)

    while True:
        # read one char at a time
        output_line = p.stderr.read(1).decode("utf8")
        if output_line != "":
            print(output_line, end="")
        else:
            # check if process finished
            return_code = p.poll()
            if return_code is not None:
                if return_code > 0:
                    raise Exception("Command %s failed" % command)
                break

    return return_code
Notice that docker-compose uses stderr to print its progress instead of stdout. @Dalen has explained that some applications do this when they want their results to be pipeable somewhere, for instance to a file, while still being able to show their progress.
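A small related note (my own, not part of the original answer): if you would rather handle both streams in a single loop, stderr can be merged into stdout with stderr=subprocess.STDOUT, for example:

import shlex
import subprocess
import sys

p = subprocess.Popen(shlex.split("docker-compose pull"),
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)  # progress written to stderr now arrives on p.stdout
for chunk in iter(lambda: p.stdout.read(1), b""):
    sys.stdout.buffer.write(chunk)   # write raw bytes to avoid splitting multi-byte characters
    sys.stdout.buffer.flush()
p.wait()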

Simultaneously wait() for multiple subproccess.Popen commands, then exit

I'm trying to run an unknown number of commands and capture their stdout in a file. However, I am presented with a difficulty when attempting to p.wait() on each instance. My code looks something like this:
print "Started..."
for i, cmd in enumerate(commands):
i = "output_%d.log" % i
p = Popen(cmd, shell=True, universal_newlines=True, stdout=open(i, 'w'))
p.wait()
print "Done!"
I'm looking for a way to execute everything in commands simultaneously and exit the current script only when each and every single process has been completed. It would also help to be informed when each command returns an exit code.
I've looked at some answers, including this one by J.F. Sebastian, and tried to adapt it to my situation by changing args=(p.stdout, q) to args=(p.returncode, q), but it ended up exiting immediately and running in the background (possibly due to shell=True?), as well as not responding to any keys pressed inside the bash shell... I don't know where to go with this.
Jeremy Brown's answer also helped, sort of, but select.epoll() was throwing an AttributeError exception.
Is there any other seamless way or trick to make it work? It doesn't need to be cross-platform; a solution for GNU/Linux and macOS would be much appreciated. Thanks in advance!
A big thanks to Adam Matan for the biggest hint towards the solution. This is what I came up with, and it works flawlessly:
It initiates each Thread object in parallel
It starts each instance simultaneously
Finally it waits for each exit code without blocking other threads
Here is the code:
import threading
import subprocess

...

def run(cmd):
    name = cmd.split()[0]
    out = open("%s_log.txt" % name, 'w')
    err = open('/dev/null', 'w')

    p = subprocess.Popen(cmd.split(), stdout=out, stderr=err)
    p.wait()
    print name + " completed, return code: " + str(p.returncode)

...

proc = [threading.Thread(target=run, args=(cmd)) for cmd in commands]
[p.start() for p in proc]
[p.join() for p in proc]

print "Done!"
I would have rather added this as a comment, because I was working off of Jack of all Spades' answer. I had trouble getting that exact command to work because it was unpacking the strings in my list of commands.
Here's my edit for python3:
import subprocess
import threading

commands = ['sleep 2', 'sleep 4', 'sleep 8']

def run(cmd):
    print("Command %s" % cmd)
    name = cmd.split(' ')[0]
    print("name %s" % name)

    out = open('/tmp/%s_log.txt' % name, 'w')
    err = open('/dev/null', 'w')

    p = subprocess.Popen(cmd.split(' '), stdout=out, stderr=err)
    p.wait()
    print(name + " completed, return code: " + str(p.returncode))

proc = [threading.Thread(target=run, kwargs={'cmd': cmd}) for cmd in commands]
[p.start() for p in proc]
[p.join() for p in proc]

print("Done!")

Capturing the output of a command in realtime - python

I see that there are several solutions for capturing a command's output in realtime when invoked from Python. I have a case like this.
run_command.py
import time

for i in range(10):
    print "Count = ", i
    time.sleep(1)
check_run_command.py - this one tries to capture the run_command.py output in realtime.
import subprocess

def run_command(cmd):
    p = subprocess.Popen(
        cmd,
        shell=False,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        stdin=subprocess.PIPE
    )
    while True:
        line = p.stdout.readline()
        if line == '':
            break
        print(line.strip())

if __name__ == "__main__":
    run_command("python run_command.py".split())
$ python check_run_command.py
(Waits 10 secs) then prints the following
Count = 0
Count = 1
....
Count = 9
I am not sure why I can't capture the output in realtime in this case. I tried multiple solutions from other threads for the same problem, but they didn't help. Does the sleep in run_command.py have anything to do with this?
I tried running ls commands, but I can't figure out whether the output is printed in realtime or only after the process completes, because the command itself finishes quickly. Hence I added one that has a sleep.
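A hedged observation (echoing the buffering remark in the docker-compose answer above): when its stdout is a pipe rather than a terminal, the child Python process block-buffers its prints, so everything tends to arrive only when it exits. Running the child unbuffered with -u is a quick way to test this, for example:

import subprocess

p = subprocess.Popen(
    ["python", "-u", "run_command.py"],  # -u disables the child's output buffering
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    stdin=subprocess.PIPE,
)
# readline() now returns each "Count = ..." line as soon as the child prints it
for line in iter(p.stdout.readline, b""):
    print(line.decode().strip())
p.wait()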

How to control background process in linux

I need to write a script in Linux which can start a background process using one command and stop the process using another.
The specific application is to take userspace and kernel logs for Android.
The following command should start taking logs:
$ mylogscript start
The following command should stop the logging:
$ mylogscript stop
Also, the commands should not block the terminal. For example, once I send the start command, the script should run in the background and I should be able to do other work on the terminal.
Any pointers on how to implement this in Perl or Python would be helpful.
EDIT:
Solved: https://stackoverflow.com/a/14596380/443889
I got the solution to my problem. The solution essentially involves starting a subprocess in Python and sending a signal to kill the process when done.
Here is the code for reference:
#!/usr/bin/python
import subprocess
import sys
import os
import signal

U_LOG_FILE_PATH = "u.log"
K_LOG_FILE_PATH = "k.log"
U_COMMAND = "adb logcat > " + U_LOG_FILE_PATH
K_COMMAND = "adb shell cat /proc/kmsg > " + K_LOG_FILE_PATH
LOG_PID_PATH = "log-pid"

def start_log():
    if(os.path.isfile(LOG_PID_PATH) == True):
        print "log process already started, found file: ", LOG_PID_PATH
        return

    file = open(LOG_PID_PATH, "w")

    print "starting log process: ", U_COMMAND
    proc = subprocess.Popen(U_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process1 id = ", proc.pid
    file.write(str(proc.pid) + "\n")

    print "starting log process: ", K_COMMAND
    proc = subprocess.Popen(K_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process2 id = ", proc.pid
    file.write(str(proc.pid) + "\n")

    file.close()

def stop_log():
    if(os.path.isfile(LOG_PID_PATH) != True):
        print "log process not started, can not find file: ", LOG_PID_PATH
        return

    print "terminating log processes"
    file = open(LOG_PID_PATH, "r")
    log_pid1 = int(file.readline())
    log_pid2 = int(file.readline())
    file.close()
    print "log-pid1 = ", log_pid1
    print "log-pid2 = ", log_pid2

    os.killpg(log_pid1, signal.SIGTERM)
    print "logprocess1 killed"
    os.killpg(log_pid2, signal.SIGTERM)
    print "logprocess2 killed"

    subprocess.call("rm " + LOG_PID_PATH, shell=True)

def print_usage(str):
    print "usage: ", str, "[start|stop]"

# Main script
if(len(sys.argv) != 2):
    print_usage(sys.argv[0])
    sys.exit(1)

if(sys.argv[1] == "start"):
    start_log()
elif(sys.argv[1] == "stop"):
    stop_log()
else:
    print_usage(sys.argv[0])
    sys.exit(1)

sys.exit(0)
There are a couple of different approaches you can take on this:
1. Signal - you use a signal handler, typically SIGHUP to signal the process to restart ("start") and SIGTERM to stop it ("stop").
2. Use a named pipe or other IPC mechanism (see the sketch below). The background process has a separate thread that simply reads from the pipe, and when something comes in, acts on it. This method relies on having a separate executable file that opens the pipe and sends messages ("start", "stop", "set loglevel 1" or whatever you fancy).
I'm sorry, I haven't implemented either of these in Python [and I haven't really written anything in Perl], but I doubt it's very hard; there's bound to be ready-made Python code for dealing with named pipes.
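Here is a minimal sketch of the named-pipe idea in option 2 (my own illustration, assuming a FIFO at /tmp/mylogscript.ctl and the command words "start" and "stop"; not code from the answer):

import os

FIFO_PATH = "/tmp/mylogscript.ctl"
if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

while True:
    # open() blocks until a writer appears, e.g. `echo stop > /tmp/mylogscript.ctl`
    with open(FIFO_PATH) as fifo:
        command = fifo.read().strip()
    if command == "start":
        print("start the logging threads here")
    elif command == "stop":
        print("stop the logging threads and exit")
        break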
Edit: Another method that just struck me is to simply daemonise the program at start, then let the "stop" version find your daemonised process [e.g. by reading the "pidfile" that you stashed somewhere suitable] and send it a SIGTERM to terminate.
I don't know if this is the optimum way to do it in perl, but for example:
system("sleep 60 &")
This starts a background process that will sleep for 60 seconds without blocking the terminal. The ampersand in the shell means to run something in the background.
A simple mechanism for telling the process when to stop is to have it periodically check for the existence of a certain file. If the file exists, it exits.
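As a small illustration of that stop-file idea (a sketch in Python rather than Perl, with a marker file name of my own choosing):

import os
import time

STOP_FLAG = "stop.flag"   # create this file (e.g. `touch stop.flag`) to stop the loop

while not os.path.exists(STOP_FLAG):
    # ... do one unit of background work here ...
    time.sleep(1)

os.remove(STOP_FLAG)      # clean up so the next run starts fresh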
