I am writing a Python/Django application which transfers files from a server to the local machine using the rsync protocol. We will be dealing with large files, so a progress bar is mandatory. The --progress argument of the rsync command does this beautifully: all the progress details are shown in the terminal. How can I show that progress in the web browser? Is there a hook function or something like that? Or can I store the progress in a log file, poll it, and update the page every minute or so?
The basic principle is to run rsync in a subprocess, expose a web API, and get updates via JavaScript.
Here's an example.
import subprocess
import re
import sys

# First do a dry run to count how many files will be transferred.
print('Dry run:')
cmd = 'rsync -az --stats --dry-run ' + sys.argv[1] + ' ' + sys.argv[2]
proc = subprocess.Popen(cmd,
                        shell=True,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        universal_newlines=True)  # text mode, so we work with str instead of bytes
remainder = proc.communicate()[0]
mn = re.findall(r'Number of files: (\d+)', remainder)
total_files = int(mn[0])
print('Number of files: ' + str(total_files))

# Now the real transfer, parsing the --progress output line by line.
print('Real rsync:')
cmd = 'rsync -avz --progress ' + sys.argv[1] + ' ' + sys.argv[2]
proc = subprocess.Popen(cmd,
                        shell=True,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        universal_newlines=True)
while True:
    output = proc.stdout.readline()
    if output == '':  # rsync has exited and closed its stdout
        break
    if 'to-check' in output:  # newer rsync versions may print 'to-chk' instead
        m = re.findall(r'to-check=(\d+)/(\d+)', output)
        progress = (100 * (int(m[0][1]) - int(m[0][0]))) / total_files
        sys.stdout.write('\rDone: ' + str(progress) + '%')
        sys.stdout.flush()
        if int(m[0][0]) == 0:
            break
print('\rFinished')
But this only shows the progress on our standard output (stdout).
We can, however, modify this code to return the progress as JSON output and make it available via a progress web service/API that we create.
On the client side, we then write JavaScript (Ajax) that contacts our progress web service/API from time to time and uses that info to update something client-side, e.g. a text message, the width of an image, the colour of some div, etc.
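As a hedged sketch of the web side (the view name, URL parameter and progress_store dict below are made up for illustration; in a real Django app you would keep the value in the cache or the database, since the rsync worker and the web workers are usually separate processes), the transfer loop above would write its computed percentage somewhere shared and a tiny view would return it as JSON:

from django.http import JsonResponse

progress_store = {}  # illustrative only: job_id -> percentage written by the rsync loop

def rsync_progress(request, job_id):
    # Return the latest progress value recorded for this transfer job.
    return JsonResponse({'progress': progress_store.get(job_id, 0)})

The JavaScript side then polls this URL every few seconds (setInterval plus fetch/XMLHttpRequest) and updates the progress bar's width or text with the returned value.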
I have a time-consuming SNMP walk task to perform, which I am running as a background process using Popen. How can I capture the output of this background task in a log file? In the code below, I am trying to do an snmpwalk on each IP in ip_list and log all the results to abc.txt. However, the generated file abc.txt is empty.
Here is my sample code:
import subprocess
import sys

f = open('abc.txt', 'a+')
ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]

for ip in ip_list:
    cmd = "snmpwalk.exe -t 1 -v2c -c public "
    cmd = cmd + ip
    print(cmd)
    p = subprocess.Popen(cmd, shell=True, stdout=f)
    p.wait()

f.close()
print("File output - " + open('abc.txt', 'r').read())
The sample output from the command can be something like this for each IP:
sysDescr.0 = STRING: Software: Whistler Version 5.1 Service Pack 2 (Build 2600)
sysObjectID.0 = OID: win32
sysUpTimeInstance = Timeticks: (15535) 0:02:35.35
sysContact.0 = STRING: unknown
sysName.0 = STRING: UDLDEV
sysLocation.0 = STRING: unknown
sysServices.0 = INTEGER: 72
sysORID.4 = OID: snmpMPDCompliance
I have already tried Popen, but it does not log the output to a file when it is a time-consuming background process. However, it works when I run a short-lived process like ls/dir. Any help is appreciated.
The main issue here, I assume, is the expectation of what Popen does and how it works.
p.wait() will wait for the process to finish before continuing; that is why ls, for instance, works but more time-consuming tasks don't. And there's nothing flushing the output automatically until you call p.stdout.flush().
The way you've set it up is more meant to work for:
Execute command
Wait for exit
Catch output
And then work with it. For your use case, you'd be better off using an alternative library, or using stdout=subprocess.PIPE and catching the output yourself. Which would mean something along the lines of:
import subprocess
import sys

ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]

with open('abc.txt', 'a+') as output:
    for ip in ip_list:
        print(cmd := f"snmpwalk.exe -t 1 -v2c -c public {ip}")
        # Be wary of shell=True; universal_newlines=True gives us str instead of bytes
        process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                                   universal_newlines=True)
        while process.poll() is None:
            for c in iter(lambda: process.stdout.read(1), ''):
                if c != '':
                    output.write(c)

with open('abc.txt', 'r') as log:
    print("File output: " + log.read())
The key thing to take away here is process.poll(), which checks whether the process has finished; if not, we try to catch the output with process.stdout.read(1), reading one character at a time. If you know newlines are coming, you can switch those three lines to output.write(process.stdout.readline()) and you're all set.
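If you go the readline() route, the inner part of the loop could look roughly like this (a sketch under the same assumptions as the snippet above, i.e. universal_newlines=True so readline() returns str):

while process.poll() is None:
    line = process.stdout.readline()  # blocks until a complete line (or EOF) is available
    if line:
        output.write(line)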
I'm trying to find a way to run vulture (which finds unused code in Python projects) inside a Python script.
vulture documentation can be found here:
https://pypi.org/project/vulture/
Does anyone know how to do it?
The only way I know to use vulture is by shell commands.
I tried to run the shell commands from the script using the subprocess module, something like this:
process = subprocess.run(['vulture', '.'], check=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                         universal_newlines=True)
which I thought would have the same effect as running the shell command "vulture .", but it doesn't work.
Can anyone help?
Thanks
Vulture dev here.
The Vulture package exposes an API, called scavenge, which it uses internally for running the analysis after parsing command-line arguments (in vulture.main).
It takes in a list of Python files/directories. For each directory, Vulture analyzes all contained *.py files.
To analyze the current directory:
import vulture
v = vulture.Vulture()
v.scavenge(['.'])
If you just want to print the results to stdout, you can call:
v.report()
However, it's also possible to perform custom analysis/filters over Vulture's results. The method Vulture.get_unused_code returns a list of vulture.Item objects, which hold the name, type and location of unused code.
For the sake of this answer, I'm just gonna print the names of all unused objects:
for item in v.get_unused_code():
    print(item.name)
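If you want to filter rather than just print everything, something along these lines should work; note that the attribute names typ, filename and first_lineno and the min_confidence argument are what current Vulture versions expose, so double-check against your installed version:

# only unused functions reported with at least 60% confidence (illustrative threshold)
for item in v.get_unused_code(min_confidence=60):
    if item.typ == 'function':
        print('{}:{}: unused function {}'.format(item.filename, item.first_lineno, item.name))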
For more info, see - https://github.com/jendrikseipp/vulture
I see you want to capture the output shown at the console. The code below might help:
import tempfile
import subprocess

def run_command(args):
    with tempfile.TemporaryFile(mode='w+') as t:  # text mode, so t.read() returns str
        try:
            out = subprocess.check_output(args, shell=True, stderr=t,
                                          universal_newlines=True)
            t.seek(0)
            console_output = '--- Provided Command: --- ' + '\n' + args + '\n' + t.read() + out + '\n'
            return_code = 0
        except subprocess.CalledProcessError as e:
            t.seek(0)
            console_output = '--- Provided Command: --- ' + '\n' + args + '\n' + t.read() + e.output + '\n'
            return_code = e.returncode
        return return_code, console_output
Your expected output will be displayed in console_output
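Usage would then look something like this (running vulture on the current directory, as in the question):

return_code, console_output = run_command('vulture .')
print(console_output)
print('exit code: ' + str(return_code))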
Link:
https://docs.python.org/3/library/subprocess.html
I need to send a number of subsequent commands to one bash shell from a Jython engine.
Executing the commands one by one with os.system(s) or subprocess.call(s, ...) does not work, as a new shell is created every time.
I hope someone has an idea; the following three tests are not a sufficient solution.
Sample commands:
cd /home/xxx/dir1/dir2
pwd
cd ..
pwd
In this first test, the commands are executed, but the output is only retrieved at the end.
def testRun1():
    # Actual Output
    # run 0
    # run 1
    # run 2
    # /home/usr/dir1/dir2
    # /home/usr/dir1
    # /home/usr
    print 'All output is shown at the end...'
    proc = subprocess.Popen('/bin/bash',
                            shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            )
    for i in range(3):
        print 'run ' + str(i)
        proc.stdin.write('pwd\n')
        proc.stdin.write('cd ..\n')
    output = proc.communicate()[0]
    print output
Whereas the 'desired output' is
# run 0
# /home/usr/dir1/dir2
# run 1
# /home/usr/dir1
# run 2
# /home/usr
This second tryout delivers what we want, but the output is only shown when the Jython script is interrupted.
def testRun2():
    # Weird : it is what we want, but all output is blocked until CTRL-C is pressed
    # run 0
    # /home/usr/dir1/dir2
    # run 1
    # /home/usr/dir1
    # run 2
    # /home/usr
    proc = subprocess.Popen('/bin/bash',
                            shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            )
    for i in range(3):
        print 'run ' + str(i)
        proc.stdin.write('pwd\n')
        proc.stdin.write('cd ..\n')
        print 'start to print output'
        for line in proc.stdout:
            print(line.decode("utf-8"))
        print_remaining(proc.stdout)
        print 'printed output'
This last tryout crashes in the second iteration because the stream was closed.
def testRun3():
    # This fails with error
    # ValueError: I/O operation on closed file
    proc = subprocess.Popen('/bin/bash',
                            shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            )
    for i in range(3):
        print 'run ' + str(i)
        proc.stdin.write('pwd\n')
        proc.stdin.write('cd ..\n')
        output = proc.communicate()[0]
        print output
The troubles you're having are only partially to do with subprocess. Pipes are fundamentally the wrong IPC mechanism for this job. To put an interactive command interpreter under scripted control, what you want is a pseudoterminal, and even then it's not as simple as reading and writing.
The Python standard library doesn't have any built-in modules that do pseudoterminal handling for you, unless they've added something very recently that I'm not aware of. However, the third-party package pexpect can do it, and it's geared for exactly the thing you are trying to do.
Using the basic pexpect API:
import pexpect
import pexpect.replwrap  # replwrap is a submodule and may need an explicit import (used in testRun5 below)

def testRun4():
    proc = pexpect.spawn("/bin/bash")
    for i in range(3):
        proc.expect("^[^$#]*[$#] *")
        print("run", i)
        proc.sendline("pwd")
        proc.expect("^[^$#]*[$#] *")
        print(proc.before)
        proc.sendline("cd ..")
With pexpect.replwrap, it's a little more involved to set up but then the loop is tidier:
def testRun5():
    proc = pexpect.replwrap.REPLWrapper(
        "/bin/bash",
        orig_prompt="^[^$#]*[$#] *",
        prompt_change="PS1='{}'; PS2='{}'")
    for i in range(3):
        print("run", i)
        print(proc.run_command("pwd"))
        proc.run_command("cd ..")
I am trying to parse the filename from the output of running dumpcap in a terminal on Linux, in order to automatically attach it to an email. These are the relevant functions from a larger script. proc1, stdout and eventfile are initialized to "", and DUMPCAP is the command-line string dumpcap -a duration:300 -b duration:2147483647 -c 500 -i 1 -n -p -s 2 -w test -B 20.
def startdump():
    global DUMPCAP
    global dumpdirectory
    global proc1
    global stdout
    global eventfile

    setDumpcapOptions()
    print("dumpcap.exe = " + DUMPCAP)
    os.chdir(dumpdirectory)

    #subprocess.CREATE_NEW_CONSOLE
    proc1 = subprocess.Popen(DUMPCAP, shell=True, stdout=subprocess.PIPE)
    for line in proc1:
        if 'File: ' in line:
            parsedfile = line.split(':')
            eventfile = parsedfile[1]

    if dc_mode == "Dumpcap Only":
        proc1.communicate()
        mail_man(config_file)
    return proc1
def startevent():
    global EVENT
    global proc1
    global eventfile

    setEventOptions()
    print(EVENT)
    # subprocess.CREATE_NEW_CONSOLE
    proc2 = subprocess.Popen(EVENT, shell=True, stdout=subprocess.PIPE)
    if dc_mode == "Dumpcap+Event" or dc_mode == "Trigger" or dc_mode == "Event Only":
        proc2 = proc1.communicate()
        mail_man(config_file)
    return proc2
The problem I keep having is that I can't figure out how to parse the filename from the output of dumpcap. It keeps parsing "" from the output no matter what I do. I apologize if this seems unresearched; I am a month into learning Python and Linux on my own, and the documentation online is terse and confusing.
Should I create a function to parse the eventfile from dumpcap's output, or do it right there in the script? I'm truly at a loss here. I'm not sure how dumpcap stores its output either.
The output of dumpcap in the terminal is:
dumpcap.exe = dumpcap -a duration:300 -b duration:2147483647 -c 500 -i 1 -n -p -s 2 -w test -B 20
-i 1 - f icmp and host 156.24.31.29 - c 2
/bin/sh: -i: command not found
Capturing on 'eth0'
File: test_00001_20150714141827
Packets captured: 500
Packets received/dropped on interface 'eth0': 500/0 (pcap:0/dumpcap:0/flushed:0/ps_ifdrop:0) (100.0%)
[Errno 2] No such file or directory: ''
The line File: ... contains the randomly generated name of the pcap file saved by dumpcap. I am trying to parse that line from the terminal and set everything after File: to a variable, but the conventional .split method doesn't seem to be working.
The other error it gives is that Popen cannot be indexed.
It looks like basically you need a regexp.
import re

rx = re.compile(r'^File: (\S+)$', re.MULTILINE)
m = rx.search(stdout_contents)
if m:
    file_name = m.group(1)
# else: file name not detected
Update: a minimal complete example of reading pipe's stdout; hope this helps.
import subprocess

proc = subprocess.Popen("echo foo", shell=True, stdout=subprocess.PIPE)
result = proc.communicate()
print(result[0])  # the captured stdout, i.e. foo (bytes unless universal_newlines=True is passed)
Dumpcap outputs to its stderr as opposed to stdout. So I've managed to redirect the stderr to a txt file which I can then parse.
def startdump():
    global DUMPCAP, dumpdirectory, proc1
    global eventfile, dc_capfile

    DUMPCAP = ''
    print("========================[ MAIN DUMPCAP MONITORING ]===========================")
    setDumpcapOptions()
    os.chdir(dumpdirectory)

    if platform == "Linux":
        DUMPCAP = "dumpcap " + DUMPCAP
    elif platform == "Windows":
        DUMPCAP = "dumpcap.exe " + DUMPCAP

    # Capture stderr, since that is where dumpcap prints its status lines;
    # universal_newlines=True so the 'File:' check below works on str.
    proc1 = subprocess.Popen(DUMPCAP, shell=True, stderr=subprocess.PIPE,
                             universal_newlines=True)
    #procPID = proc1.pid

    if dc_mode == "Dumpcap Only":
        time.sleep(5)
        with open("proc1stderr.txt", 'w+') as proc1stderr:
            proc1stderr.write(str(proc1.stderr))  # note: this writes the repr of the stream object, not its contents
            for line in proc1.stderr:
                print("%s" % line)
                if "File:" in line:
                    print(line)
                    raweventfile = line.split('File: ')[1]
                    eventfile = raweventfile.strip('[]').rstrip('\n')
        mail_man()
    proc1.communicate()
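Combining this with the regex from the previous answer gives a more compact sketch; note that communicate() waits for dumpcap to exit (up to the full -a duration:300), so this only suits the case where blocking is acceptable:

import re
import subprocess

proc1 = subprocess.Popen(DUMPCAP, shell=True, stderr=subprocess.PIPE,
                         universal_newlines=True)
_, err = proc1.communicate()  # dumpcap prints its status lines, including File:, on stderr
m = re.search(r'^File: (\S+)$', err, re.MULTILINE)
eventfile = m.group(1) if m else ''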
I am trying to open an SSH pipe from one Linux box to another, run a few shell commands, and then close the SSH.
I don't have control over the packages on either box, so something like fabric or paramiko is out of the question.
I have had luck using the following code to run one bash command, in this case "uptime", but am not sure how to issue one command after another. I'm expecting something like:
sshProcess = subprocess.call('ssh ' + <remote client>, <subprocess stuff>)
lsProcess = subprocess.call('ls', <subprocess stuff>)
lsProcess.close()
uptimeProcess = subprocess.call('uptime', <subprocess stuff>)
uptimeProcess.close()
sshProcess.close()
What part of the subprocess module am I missing?
Thanks
pingtest = subprocess.call("ping -c 1 %s" % <remote client>, shell=True, stdout=open('/dev/null', 'w'), stderr=subprocess.STDOUT)
if pingtest == 0:
    print '%s: is alive' % <remote client>
    # Uptime + CPU load averages
    print 'Attempting to get uptime...'
    sshProcess = subprocess.Popen('ssh ' + <remote client>, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    sshProcess, stderr = sshProcess.communicate()
    print sshProcess
    uptime = subprocess.Popen('uptime', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    uptimeOutput, stderr = uptime.communicate()
    print 'Uptime : ' + uptimeOutput.split('up ')[1].split(',')[0]
else:
    print "%s: did not respond" % <remote client>
Basically, if you call subprocess it creates a local subprocess, not a remote one, so you have to interact with the ssh process itself. Something along these lines should work.
But be aware that if you dynamically construct the commands (e.g. a directory name), they are susceptible to shell injection, and the END line must then be a unique identifier.
To avoid the END-line uniqueness problem, an easier way is to use a separate ssh command per remote command (see the sketch after the example below).
from __future__ import print_function, unicode_literals
import subprocess

sshProcess = subprocess.Popen(['ssh',
                               '-tt',
                               <remote client>],
                              stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE,
                              universal_newlines=True,
                              bufsize=0)
sshProcess.stdin.write("ls .\n")
sshProcess.stdin.write("echo END\n")
sshProcess.stdin.write("uptime\n")
sshProcess.stdin.write("logout\n")
sshProcess.stdin.close()

for line in sshProcess.stdout:
    if line == "END\n":
        break
    print(line, end="")

# to catch the lines up to logout
for line in sshProcess.stdout:
    print(line, end="")